OpenShift Router Sharding for Network Traffic Isolation

In enterprise practice, multiple OpenShift clusters are often deployed: one for development and testing, another for production, and so on. Each cluster is independent and physically isolated. This approach is simple to manage and easy to reason about, but it consumes more resources, since every cluster needs its own control-plane and infrastructure nodes. Is there a way to run different environments on the same cluster while keeping them isolated? The answer is yes.

To isolate the resources of different environments, we need to plan both the computing resources (the host machines) and the network. Host isolation can be achieved by labeling nodes and scheduling pods accordingly. In this article, we focus on isolating the Route layer of the network between the development/testing environment and the production environment.

OpenShift Cluster Route Sharding Mechanism

OpenShift manages north-south traffic through Routes. A Route is served by the Router, which is essentially an HAProxy service (the default router implementation), similar to an Ingress controller in Kubernetes. By default, the OpenShift Router is shared globally: whenever a Route resource is created, a Pod is updated, or a certificate is renewed, every Router Pod updates its HAProxy configuration and reloads it, and any backend application can be reached through any Router. By creating multiple Router services and using the route sharding mechanism, we can assign different applications to different Routers, isolating their ingress traffic. The diagram below illustrates the multi-Router sharding architecture.

[Figure: Multi-Router Node Sharding]

The architecture diagram above does not rely on node isolation; traffic is segmented purely through route sharding.
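
Before creating the shards, it can help to inspect the default router first. In a standard OpenShift 3.x installation the global router is a DeploymentConfig named router in the default project (names may differ per cluster), and sharding is driven by label-selector environment variables on it: NAMESPACE_LABELS filters by Project labels and ROUTE_LABELS by Route labels. The steps below use NAMESPACE_LABELS.

$ # Inspect the default global router (assumes a standard 3.x install)
$ oc get dc router -n default
$ # List its environment; NAMESPACE_LABELS / ROUTE_LABELS control sharding
$ oc set env dc/router --list -n default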

  1. The traffic entry point is the cluster's external load balancer. We consider only access via the *.apps-prod.example.com and *.apps-dev.example.com wildcard domains: the backend service for *.apps-prod.example.com is router-prod, and the backend service for *.apps-dev.example.com is router-dev.

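For illustration, the wildcard DNS records might look like the following; the VIPs are hypothetical, and each should forward to the nodes hosting the corresponding router.

$ # Hypothetical wildcard records; each VIP fronts one router's nodes
$ dig +short test.apps-prod.example.com   # e.g. 10.0.0.10, forwarded to router-prod
$ dig +short test.apps-dev.example.com    # e.g. 10.0.0.20, forwarded to router-dev
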
  2. Each Router enforces a fixed subdomain format on its Routes (optional). The subdomain template for router-prod is ${name}-${namespace}.apps-prod.example.com, and for router-dev it is ${name}-${namespace}.apps-dev.example.com.

$ oc adm router router-prod --replicas=2 --force-subdomain='${name}-${namespace}.apps-prod.example.com'
$ oc adm router router-dev --replicas=1 --force-subdomain='${name}-${namespace}.apps-dev.example.com'

For a Router service that is already deployed, the setting can be applied with:

$ oc adm router router-prod  --replicas=2 --force-subdomain='${name}-${namespace}.apps-prod.example.com' --dry-run -o yaml | oc apply -f -

At this point, the host of a newly created Route can no longer be customized; it is forced into one of two formats: ${name}-${namespace}.apps-prod.example.com or ${name}-${namespace}.apps-dev.example.com.
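
On a typical 3.x router, --force-subdomain is reflected in the ROUTER_SUBDOMAIN and ROUTER_OVERRIDE_HOSTNAME environment variables (an assumption worth verifying on your version):

$ # Check how the forced subdomain is stored on the router
$ oc set env dc/router-prod --list | grep -E 'ROUTER_SUBDOMAIN|ROUTER_OVERRIDE_HOSTNAME'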

  3. The next and most important step is to set a Project filter on each Router, so that only Routes in Projects carrying the specified label are configured on that Router. The filter for router-prod is set to router=prod, and the filter for router-dev is set to router=dev.

$ oc set env dc/router-prod NAMESPACE_LABELS="router=prod"
$ oc set env dc/router-dev NAMESPACE_LABELS="router=dev"
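
A quick way to confirm that each filter took effect is to list the environment back:

$ oc set env dc/router-prod --list | grep NAMESPACE_LABELS
$ oc set env dc/router-dev --list | grep NAMESPACE_LABELS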

Make sure the Router pods selected by router=prod are deployed on infra nodes labeled router=prod, and likewise for router=dev. The combined script below performs all of the creation steps, specifying the node selector and environment variables at creation time:

$ # prod router nodes
$ oc label node infra1 "router=prod"
$ oc label node infra2 "router=prod"
$ oc adm router router-prod --replicas=2 --force-subdomain='${name}-${namespace}.apps-prod.example.com' --selector=router=prod
$ oc set env dc/router-prod NAMESPACE_LABELS="router=prod"

$ # dev router nodes
$ oc label node infra3 "router=dev"
$ oc adm router router-dev --replicas=1 --force-subdomain='${name}-${namespace}.apps-dev.example.com' --selector=router=dev
$ oc set env dc/router-dev NAMESPACE_LABELS="router=dev"
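
After the script runs, the router pods should land on the nodes labeled for their shard (routers are typically deployed in the default project), which can be confirmed with:

$ oc get pods -n default -o wide | grep router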

  4. Setting the corresponding label on a Project automatically matches its Routes to the right Router. A new project labeled router=prod has its Routes configured on the prod Router; likewise, Routes in projects labeled router=dev are configured on the dev Router.

$ # Create project project-prod-1 and set Label router=prod
$ oc new-project project-prod-1
$ oc label namespace project-prod-1 router=prod

$ # Create project project-dev-1 and set Label router=dev
$ oc new-project project-dev-1
$ oc label namespace project-dev-1 router=dev
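
The labels can be verified on the namespaces:

$ oc get namespace project-prod-1 project-dev-1 --show-labels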

  5. From this point on, newly created applications select their Router automatically. A Route created in a router=prod Project is configured on the router=prod Router, with a host of the form ${name}-${namespace}.apps-prod.example.com; a Route created in a router=dev Project is configured on the router=dev Router, with a host of the form ${name}-${namespace}.apps-dev.example.com.

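As an end-to-end check, deploy an application in each project, expose it, and confirm the generated host; the application name and image below are illustrative.

$ oc new-app openshift/hello-openshift --name=myapp -n project-prod-1
$ oc expose service myapp -n project-prod-1
$ # Expect myapp-project-prod-1.apps-prod.example.com, admitted by router-prod
$ oc get route myapp -n project-prod-1 -o jsonpath='{.status.ingress[*].host}{"\n"}'
$ curl http://myapp-project-prod-1.apps-prod.example.com
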
Conclusion

  • The article on resource management in enterprise container cloud platform construction divides resource management into four parts: computing, network, storage, and image repository. To truly isolate different environments, all four aspects must be addressed. This article focuses on isolating north-south traffic in the network part.

  • Through the route sharding mechanism, applications from different environments can be deployed in the same OpenShift cluster. This reduces the number of clusters while still meeting north-south traffic isolation requirements, saving management and hardware costs.

  • To isolate east-west traffic within the cluster network, firewalls can be set up between the hosts of different environments; alternatively, OpenShift’s ovs-multitenant or ovs-networkpolicy SDN plugins can be used (a sketch follows this list). See the earlier article: OpenShift’s Network Policy NetworkPolicy.

  • Computing isolation must ensure that applications from different environments never run on the same host, avoiding mutual interference and resource contention. This relies on OpenShift’s scheduling policies (also sketched after this list). See the earlier article: Mastering Pod Scheduling in OpenShift.

  • Storage isolation can be achieved by creating different storage classes to serve different environments.

  • Image repository isolation can be achieved by creating multiple image repositories, or by running a single image repository and using separate projects for logical isolation between images.
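
As a minimal sketch of the network and scheduling points above: the first command assumes the cluster runs the ovs-multitenant SDN plugin, and the node-selector annotations assume the compute nodes carry hypothetical env=prod / env=dev labels.

$ # East-west isolation: cut project-dev-1 off from other projects (ovs-multitenant only)
$ oc adm pod-network isolate-projects project-dev-1
$ # Compute isolation: per-project default node selector
$ # (assumes compute nodes were labeled env=prod / env=dev beforehand)
$ oc annotate namespace project-prod-1 openshift.io/node-selector="env=prod"
$ oc annotate namespace project-dev-1 openshift.io/node-selector="env=dev"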

Reference Articles

  • OpenShift Router Sharding for Production and Development Traffic

  • For the mechanism by which the OpenShift Router loads its Route configuration, see: OpenShift Router Configuration Reload Mechanism
