We introduced ROSA in last week’s blog and described how each ROSA cluster provides a fully managed control plane and data plane, deployed into a customer’s VPC. ROSA is a fully managed service in which Red Hat SREs, with AWS support, perform installation, maintenance, and upgrades of customers’ clusters. Let us now discuss what a reference architecture looks like, but first a quick background on OpenShift node types.
OpenShift Node Types
OpenShift is a technology built on Kubernetes with Red Hat enhancements. It comprises three node types and their corresponding services –
- Master Nodes – Master nodes correspond to Kubernetes Master nodes and control the OpenShift environment. They host the OpenShift console used to administer the deployment, the registry console, the API Server, etcd, the controller manager, and HAProxy.
- Infrastructure Nodes – Infrastructure nodes provide supporting functionality such as hosting Routers (used to expose K8s services to external clients), the internal registry (OCR – a containerized Docker Registry), and the monitoring service (based on Prometheus and managed by an Operator). OpenShift separates these nodes for better capacity planning and also to ensure that they don’t compete with your applications for resources.
- Worker Nodes – The ‘App Nodes’, or simply ‘Nodes’, are Worker nodes that run user applications deployed into OpenShift as containers. In addition, these nodes contain agent runtimes such as those needed for pod networking, monitoring, and DNS resolution.
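As a quick way to see these node types in practice, assuming you have `oc` access to an OpenShift cluster, each role shows up as a node label (the commands below are a sketch; output depends on your cluster):

```shell
# List all nodes with their labels; each node carries a role label such as
# node-role.kubernetes.io/master, node-role.kubernetes.io/infra,
# or node-role.kubernetes.io/worker.
oc get nodes --show-labels

# Or filter to a single role, e.g. just the infrastructure nodes:
oc get nodes -l node-role.kubernetes.io/infra
```
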
ROSA Reference Architecture
The architecture diagram above shows what a ROSA deployment looks like when deployed into a customer’s AWS account in a full production setting. Key things to note –
- The ROSA service leverages installer-provisioned infrastructure (IPI) from the OpenShift Container Platform. IPI uses the AWS cloud provider in Kubernetes to ensure that your clusters are deployed according to best practices. The installer is also responsible for creating instances, security groups, ELBs, and so forth.
- ROSA supports both single-AZ and multi-AZ deployment options. Red Hat recommends multi-AZ for production deployments so that your clusters can survive an AZ outage.
- Single-AZ deployments are used when that level of availability is not required, for example, for development clusters. The ROSA topology includes three control plane nodes (running the API server, scheduler, and etcd). In a multi-AZ deployment, each of those nodes ends up in a different AZ. Access to the Kubernetes API and kubectl goes through an internal NLB, which allows transparent maintenance of the control plane nodes. Access to end-user applications goes through a different set of ELBs; these sit in front of the OpenShift HAProxy routers and route requests into the cluster. Because the ROSA service runs within a customer’s AWS account, customers retain control over how these clusters interact with resources in their AWS environment.
- A ROSA topology also includes two infrastructure nodes, which run cluster support services such as the container registry (which uses S3 as its backend) and Prometheus and Alertmanager (which use EBS for persistence). The OpenShift Routers that handle end-user application traffic also run on these nodes.
- As discussed above, the third type of computing resource in a ROSA cluster is the worker pool. By default, it has three nodes, each in a different AZ. The ROSA CLI can be used to scale these up or down. Separately from the default worker pool, you can create additional worker pools that use different instance types.
- For cluster access, both public and private options are provided, depending on who is accessing the clusters. Private access is provided via VPC peering. The ROSA service lets you change cluster visibility as well as toggle access to the API server and cluster ingress independently.
- Private cluster access can be implemented using one of three methods: VPC peering, AWS VPN, or a dedicated network connection between your network and an AWS Direct Connect location.
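Putting a few of these points together, the workflow can be sketched with the ROSA CLI. This is a hedged example, not a full recipe: the cluster name, pool name, region, and instance type below are illustrative, and the default machine pool ID may differ in your environment (check `rosa list machinepools`):

```shell
# Create a multi-AZ production cluster in the customer's AWS account.
rosa create cluster --cluster-name prod-cluster --multi-az --region us-east-2

# Scale the default worker pool (pool ID assumed to be "worker" here).
rosa edit machinepool --cluster prod-cluster --replicas 6 worker

# Add a separate worker pool that uses a different instance type.
rosa create machinepool --cluster prod-cluster --name compute-heavy \
  --instance-type c5.2xlarge --replicas 3
```
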
ROSA as an offering makes it easy for existing OpenShift Container Platform customers to add value to their business by focusing on their application code, while moving the complexity and pain of cluster lifecycle management to Red Hat and AWS.