Portworx
Portworx by Pure Storage

Modern apps are key to competitiveness, but traditional storage can't keep up. Enter Portworx. Portworx provides Kubernetes data services with the performance, availability, data protection, and security your apps require, but in a completely automated, API-driven, and multicloud world.

Powerful Subscription Economics: Portworx provides a subscription-based data management platform for modern apps running on Kubernetes across hybrid-cloud environments in containers.

Digital Transformation Acceleration: Together with Pure Storage, Portworx provides the most complete data-services platform for building, automating, protecting, and securing all applications.

Most Complete Kubernetes Storage: Only Portworx provides a fully integrated solution for persistent storage, security, data protection, and automated capacity management for Kubernetes apps.

Complete Kubernetes Data Services: Deploy in the cloud, on bare metal, or on enterprise arrays, all natively orchestrated in Kubernetes. With Portworx, you can run any modern data service, in any cloud, using any Kubernetes platform. Benefit from built-in high availability, data protection, data security, and hybrid-cloud mobility.

Trust a Category Leader and Gold Standard: For the fourth consecutive year, Pure Storage is recognized as a Leader and Outperformer in the GigaOm Radar for Enterprise Kubernetes Data Storage.

Portworx Products:
- Kubernetes Storage (Portworx Enterprise): data management for enterprise apps on Kubernetes.
- Data Protection (PX-Backup): protect all your Kubernetes clusters, applications, and data.
- Database-as-a-Service (Portworx Data Services): one-click data services on Kubernetes.

Free Trial of Portworx: No hardware, no setup, no cost—no problem. Try the leading Kubernetes Storage and Data Protection platform according to GigaOm Research.

Don't Take Our Word for It: We work with the best organizations across all industries. “I know Portworx is going to keep me safe and secure and, most importantly, make sure my player data isn’t lost.” (Rob Cameron, Principal SRE, Roblox) “We selected Portworx to provide full data automation, data mobility, backup, and recovery across multiple clouds in a secure manner for our customer’s mission critical data.” (Adam Mollenkopf, GIS Capability Lead, ESRI)

What is Portworx? Portworx is the Kubernetes Data Services Platform that digital enterprises trust to run mission-critical applications in containers in production. Only Portworx provides a fully integrated solution for persistent storage, data protection, disaster recovery, data security, cross-cloud and data migrations, and automated capacity management for applications running on Kubernetes.

People also ask:

1. How do Portworx products complement Pure Storage offerings? As a 100% software-defined Kubernetes Data Services Platform that can run on any infrastructure, Portworx enables new cloud-native use cases.

Context switching is a true drag on productivity, and it slows down the pace at which platform engineers and administrators can operate their infrastructure and application stacks. Portworx has built an OpenShift Dynamic console plugin that enables single-pane-of-glass management of storage resources running on Red Hat OpenShift clusters.
This allows platform administrators to use the OpenShift web console to manage not just their applications and their OpenShift cluster, but also their Portworx installation and their stateful applications running on OpenShift.

In this blog, we will talk about how users can leverage the OpenShift Dynamic plugin from Portworx to monitor different storage resources running on their OpenShift clusters. The Portworx plugin can be enabled by simply installing (greenfield) or upgrading (brownfield) the Portworx Operator to release 23.5.0 or higher on OpenShift 4.12 clusters. Once the Portworx Operator is installed, the plugin can be easily enabled by selecting a radio button in the OpenShift web console.

Figure 1: Enabling the OpenShift Dynamic Console plugin

Once the plugin is enabled, the Portworx Operator will automatically install the plugin pods in the same OpenShift project as the Portworx storage cluster. Once the pods are up and running, administrators will see a message in the OpenShift web console prompting them to refresh their browser window for the Portworx tabs to show up in the UI. With this plugin, Portworx has built three different UI pages: a Portworx Cluster Dashboard that shows up in the left navigation menu, a Portworx tab under the Storage > Storage Classes section, and another Portworx tab under the Storage > Persistent Volume Claims section.

Portworx Cluster Dashboard

The Portworx Cluster Dashboard can be used by platform administrators to monitor the status of their Portworx storage cluster, their persistent volumes, and their storage nodes. Here are a few operations that are now streamlined by the OpenShift Dynamic plugin from Portworx.

Figure 2: Portworx Storage Cluster dashboard

Storage Capacity Management: Using the cluster details widget, administrators can quickly get an overview of used vs. available storage capacity on their OpenShift cluster. This allows them to better plan storage expansion operations so they don't run out of capacity for their stateful applications running on OpenShift.

License Management: Using the cluster details widget, administrators can always stay on top of their license expiration, allowing them to easily plan for renewals.

Component Versions: Using the cluster details widget, administrators can get an overview of the different versions of Portworx, Operator, and Stork they have running on their OpenShift cluster.
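A quick note on the enablement step above: the radio button in the Operator install form is the path the blog describes, but for reference, the generic OpenShift mechanism behind dynamic console plugins is the Console operator configuration. A minimal sketch is below; the plugin name "portworx" is an assumption, and in practice you would patch the existing `cluster` Console resource (for example with `oc patch`) rather than apply a whole object like this.

```yaml
# Sketch only: OpenShift enables dynamic console plugins by listing them
# under spec.plugins on the Console operator config. The plugin name
# "portworx" is an assumption; the Operator install form does the
# equivalent step for you when you enable the plugin.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
    - portworx
```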
These use cases span customers running on a variety of Pure products, including FlashArray™, FlashBlade®, and Evergreen//One™, as well as any other underlying on-premises or cloud storage. With Portworx, Pure customers get a modern data platform that manages storage and data protection for their Kubernetes applications, no matter where they are running.

2. Who uses Portworx’s offerings today? As the #1 Kubernetes Storage Platform according to GigaOm Research, many of the world’s largest digital enterprises use Portworx in production, including Carrefour, Comcast, GE Digital, Kroger, Lufthansa, and T-Mobile. Developers can download Portworx software; use it for free in initial deployments and pilot projects; and then scale it to the largest, most resilient global deployments with paid editions and enterprise support as they progress through the Kubernetes lifecycle.

3. What does Portworx bring to Pure Storage? Portworx helps Pure Storage realize the vision of a Modern Data Experience™ by providing a subscription-based data-services platform for cloud-native applications running in containers across hybrid-cloud environments. With Portworx, Pure Storage customers can accelerate their digital transformation by easily running any cloud-native data service, in any cloud, using any Kubernetes platform, with built-in high availability, data protection, data security, and hybrid-cloud mobility. Together, Pure and Portworx provide the most complete set of Kubernetes storage choices, from bare metal or cloud storage with Portworx software-defined storage to fully managed all-flash arrays, providing the right storage options at all phases of the cloud-native lifecycle.
Hyperconverged or disaggregated? Modern design for cloud-native data centers and cloud deployments for Kubernetes includes how to deploy compute, networking, and storage. Compute and storage can be combined on the same infrastructure to try to reduce cost and complexity while increasing scalability. Clusters can also be deployed and managed separately in a disaggregated model where there are strict separations of concern for compute and storage. Portworx is most often deployed in a hyperconverged fashion, where storage devices are consumed by Portworx on the same worker nodes that application pods, deployments, and StatefulSets run on. In this blog, we'll explore the disaggregated model and how it is deployed in comparison to the hyperconverged model. We'll also talk about the pros and cons of each deployment mode. Let's get started.

What we will discuss in the rest of this blog is both hyperconverged Kubernetes clusters with Portworx and disaggregated Kubernetes clusters with Portworx. For clarification, disaggregation is sometimes used interchangeably with composable disaggregated infrastructure (CDI); however, while CDI is disaggregated in the sense that resources are separated over a network, CDI is different from—although related to—what we are discussing. In this blog, disaggregated refers to the architectural decision to separate Kubernetes compute from storage servers or VMs.

Hyperconverged Kubernetes with Portworx

I'll keep this brief, but I want to cover the most common way Portworx is deployed with Kubernetes, which is hyperconverged or converged. If you would like a quick refresher lightboard session on this topic, watch the recent pool rebalancing video where I review it.

In short, Portworx is packaged as a container image and is deployed by a Kubernetes Operator or DaemonSet as a system-level container running on the worker nodes as a pod. These pods install the Portworx components, apply configuration, benchmark drives, and create storage pools that join with other Portworx processes running on other Kubernetes nodes. The memory footprint of Portworx is minimal, so Kubernetes workers are configured with enough memory and CPU to accommodate both applications and data management. This reduces the overall footprint of compute, saving cost and complexity.

Figure 7: Portworx StorageClass Tab

Using the Portworx StorageClass tab, administrators can get details about the custom parameters set for each storage class, the number of persistent volumes dynamically provisioned using the storage class, and a table that lists all the persistent volumes deployed using that storage class. The OpenShift Dynamic plugin eliminates the need for administrators to use multiple “kubectl get” and “kubectl describe” commands to find all these details—instead, they can just use a simple UI to monitor their storage classes.

Portworx Persistent Volume Claim Tab

Portworx provides native high availability and replication for the persistent volumes deployed on OpenShift clusters—which means that each volume can have multiple replicas running on different OpenShift worker nodes.
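For context, the custom parameters the StorageClass tab surfaces, including the replication factor behind those volume replicas, are the ones defined in the StorageClass itself. A minimal sketch follows; the class name is hypothetical, `pxd.portworx.com` assumes a CSI-based Portworx install (the in-tree `kubernetes.io/portworx-volume` provisioner is the alternative), and `repl` and `io_profile` are standard Portworx StorageClass parameters.

```yaml
# Minimal sketch of a Portworx StorageClass; the name is hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-repl3-sc
provisioner: pxd.portworx.com      # Portworx CSI provisioner
parameters:
  repl: "3"                        # three replicas spread across worker nodes
  io_profile: "auto"               # let Portworx pick an IO profile
allowVolumeExpansion: true
```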
Instead of using multiple CLI commands to “inspect” the volume, administrators can now use this simple tab to get the following details about individual persistent volumes:

Figure 8: Portworx PVC Tab

Volume Details: Using this tab, administrators can get details like the PVC name, the bound status, the size and the filesystem format of the persistent volume, and all the custom parameters that the volume inherits from the Portworx StorageClass.

Replication Factor and Topology: Using the Portworx tab, administrators can quickly identify how many replicas exist for each persistent volume and how they are distributed across their OpenShift cluster.

Volume Consumers: This tab also allows users to identify the pods that are using the volume to store/persist data. In the case of a ReadWriteOnce volume, administrators can look at the individual pod that is storing data on that volume. In the case of ReadWriteMany volumes, administrators can get a quick list of all the different pods sharing the volume and see whether those pods are being managed by a Kubernetes object like a Deployment or StatefulSet.

In addition to all these tabs, the Portworx OpenShift Dynamic plugin is fully integrated with the OpenShift web console. So, if administrators have a preference for light mode vs. dark mode for their web console, Portworx will honor those settings and deliver a seamless experience. Portworx introduced this OpenShift Dynamic plugin as part of the Operator 23.5.0 release, and it currently works only for internet-connected (non-air-gapped) clusters and clusters that do not have PX-Security enabled. However, we have an exciting roadmap ahead of us, which will not only add support for air-gapped clusters and PX-Security-enabled OpenShift clusters but will also add more capabilities.
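To tie this back to the PVC tab: the details it shows (name, bound status, size, filesystem, inherited parameters) come from a claim like the sketch below, which references the hypothetical class from the earlier example.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-demo-pvc
  namespace: demo                  # hypothetical namespace
spec:
  storageClassName: px-repl3-sc    # hypothetical class from the earlier sketch
  accessModes:
    - ReadWriteOnce                # ReadWriteMany for volumes shared by many pods
  resources:
    requests:
      storage: 10Gi
```

For ReadWriteMany access, Portworx volumes typically also need the shared/sharedv4 parameter set on the StorageClass.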
Kubernetes does a great job of orchestrating your containerized applications and deploying them across all the worker nodes in a cluster. If a node has enough compute capacity to run a specific Kubernetes pod, Kubernetes will schedule that application pod on that worker node. But what if none of the worker nodes in the cluster have enough available capacity to accept new application pods? At this point, Kubernetes will not be able to deploy your application pods, and they will be stuck in a pending state. In addition to this scenario, Kubernetes also does not have the capability to monitor and manage the storage utilization in your cluster. These are two huge problems when it comes to running applications on Kubernetes. This blog covers how Portworx and AWS can help users architect a solution that remediates these concerns.

In this blog, we will look at how Portworx Autopilot and AWS Karpenter work together to help users build a solution on top of AWS EKS clusters that automatically expands persistent volumes or adds more storage capacity to the Kubernetes cluster using Portworx Autopilot. The solution also automatically adds more CPU and memory resources by dynamically adding more worker nodes to the AWS EKS cluster using AWS Karpenter.

We can begin by developing a better understanding of automated storage capacity management with Portworx Autopilot. Autopilot is a rule-based engine that responds to changes from a monitoring source. Autopilot allows you to specify monitoring conditions as well as the actions it should take when those conditions occur, which means you can set simple IFTTT-style rules against your EKS cluster and have Autopilot automatically perform a certain action for you once a certain condition has been met. Portworx Autopilot supports the following three use cases (an example rule for the first one appears after the deployment manifest below):

- Automatically resizing PVCs when they are running out of capacity
- Scaling Portworx storage pools to accommodate increasing usage
- Rebalancing volumes across Portworx storage pools when they become unbalanced

To get started with Portworx Autopilot, first deploy Portworx on your Amazon EKS cluster and configure Prometheus and Grafana for monitoring. Once you have that up and running, use the following steps to configure Autopilot and create an Autopilot rule that will monitor the capacity utilization of a persistent volume and scale it up accordingly. Use the following YAML file to deploy Portworx Autopilot on your Amazon EKS cluster.
Verify the Prometheus endpoint set in the autopilot-config config map and ensure that it matches the service endpoint for Prometheus in your cluster.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: autopilot-config
  namespace: kube-system
data:
  config.yaml: |-
    providers:
      - name: default
        type: prometheus
        params: url=<your-prometheus-service-endpoint>
    min_poll_interval: 2
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: autopilot-account
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    tier: control-plane
  name: autopilot
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: autopilot
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        name: autopilot
        tier: control-plane
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "name"
                    operator: In
                    values:
                      - autopilot
              topologyKey: "kubernetes.io/hostname"
      hostPID: false
      containers:
        - command:
            - /autopilot
          # (the remainder of the container spec is truncated in the source)
```
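Once Autopilot is running, rules are expressed as AutopilotRule objects. The sketch below follows the PVC-resize pattern from the Autopilot documentation for the first use case listed above; the label selector and thresholds are placeholder assumptions and should be adjusted for your workload.

```yaml
apiVersion: autopilot.libopenstorage.org/v1alpha1
kind: AutopilotRule
metadata:
  name: volume-resize
spec:
  # Which PVCs the rule applies to; the label here is a hypothetical example.
  selector:
    matchLabels:
      app: postgres
  # Trigger when a monitored volume is more than 80% full.
  conditions:
    expressions:
      - key: "100 * (px_volume_usage_bytes / px_volume_capacity_bytes)"
        operator: Gt
        values:
          - "80"
  # Grow the volume by 100% of its current size when the condition is met.
  actions:
    - name: openstorage.io.action.volume/resize
      params:
        scalepercentage: "100"
```

Applying this rule (for example with kubectl apply) is all that is needed; Autopilot watches the Prometheus metrics and performs the resize when the condition holds.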
Component version information is also surfaced on the Portworx Cluster Dashboard, so administrators can see what is running on their OpenShift cluster instead of having to log into the CLI and fetch the same information using different kubectl/oc/pxctl commands.

Observability: Using the activities widget, administrators can get a recap of all the storage-specific Kubernetes events, categorized as “Info” and “Warning/Error” events. This allows them to keep track of what's going on in their OpenShift clusters without having to use the CLI and context switch. Administrators can expand individual events to get more details about a specific event, which can speed up root cause analysis if there are any issues.

Figure 3: Portworx Persistent Volumes table

Volume Management: Instead of going to the CLI and executing “get” commands to fetch the status of persistent volumes running across different Kubernetes namespaces, administrators can use the Volumes table to get an overview of their volumes, sorted by Kubernetes namespace. This table also highlights the worker node the volume is running on, the number of replicas that exist for that volume, and the amount of capacity provisioned for that volume.

Figure 4: Portworx Storage pools table

Figure 5: Portworx drives table

Drive and Storage Pool Management: Portworx allows administrators to have multiple drives backing the cloud-native/software-defined layer, as well as to create different storage pools based on the media types of those backing drives. Instead of navigating to the CLI and executing a series of “pxctl” commands to get details about their storage pools and backing drives, the Portworx Cluster Dashboard lets administrators get that information from a couple of simple tables that can be sorted by the individual worker nodes that contribute storage to the Portworx StorageCluster.

Figure 6: Portworx Storage Nodes table

Storage Node Management: Portworx can be installed in a hyperconverged or a disaggregated deployment model, which means an OpenShift cluster can have either all of its worker nodes running as storage nodes or only a subset of them. The Node Summary table allows administrators to monitor the storage nodes in the cluster, see how storage is distributed across different nodes, and identify issues such as offline or unbalanced nodes.

Portworx Storage Class Tab

Portworx allows platform administrators to create custom Kubernetes StorageClasses to offer different classes of service to their developers. Administrators can customize things like replication factors, snapshot schedules, file system types, IO profiles, etc. in their storage class definitions. Developers can then choose one storage class for their test/dev workloads and a different storage class for their production applications.
Portworx announced that its Kubernetes backup solution, PX-Backup, is now available in the AWS Marketplace. Using the AWS Marketplace to deploy PX-Backup allows for a seamless installation and billing experience for Portworx customers running in AWS. Getting started with PX-Backup is as simple as performing the following steps:

- Launch or use an EKS cluster.
- Subscribe to PX-Backup in the AWS Marketplace.
- Connect your EKS cluster to your subscription.
- Install PX-Backup via Helm.

That's it—a seamless workflow to start using PX-Backup in AWS to provide backup and recovery for all of your Kubernetes applications.

New to PX-Backup? Let's recap what it's all about. PX-Backup is a Kubernetes backup solution that allows you to back up and restore applications and their data across multiple clusters. PX-Backup works with PX-Central, allowing you or any other approved users to manage multiple clusters and their backups from a single UI. Under this principle of multitenancy, authorized users connect through OIDC to create and manage backups for clusters and apps that they have permissions for, without needing to go through an administrator. PX-Backup is compatible with any Kubernetes cluster—including managed and cloud deployments like those on Amazon EKS—and does not require Portworx Enterprise to be installed.

Getting Started

First, you will need an AWS account. If you do not have one yet, sign up. Then head over to the landing page for Portworx PX-Backup on AWS Marketplace. Once you subscribe to PX-Backup in the marketplace, select the delivery method and launch the offering. From there, follow the Portworx documentation on how to connect your PX-Backup cluster to the subscription and install PX-Backup.

PX-Backup is installed with the following steps: head over to PX-Central and generate a PX-Backup spec that is configured for your AWS cluster and requirements, then simply run the generated commands against your AWS cluster to install PX-Backup.

We are excited about this new opportunity on AWS Marketplace. Ready to get started? Try Portworx for free today, or get started in the AWS Marketplace.
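For a rough sense of what the "Install PX-Backup via Helm" step involves: PX-Backup is delivered as part of the px-central Helm chart, and the values sketch below is an assumption based on the public install docs. The key names, chart version, and storage class should be taken from the spec that PX-Central generates for your cluster rather than from this example.

```yaml
# Hypothetical values override for the px-central Helm chart; key names are
# assumptions and may differ by chart version.
persistentStorage:
  enabled: true
  storageClassName: gp3            # an EBS-backed StorageClass on EKS, for example
pxbackup:
  enabled: true
```

These would typically be passed with something like `helm install px-central portworx/px-central -n px-backup --create-namespace -f values.yaml`, but the exact command comes from the spec generated in PX-Central.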