
Amir Rawdat
Technical Marketing Engineer, F5
Customers often ask us: “Since the OCP Router is free to use, why should I use the NGINX Ingress Controller on OpenShift?” In “Why You Need to Deploy an Enterprise-Grade Ingress Controller on OpenShift,” guest blogger Max Mortillaro from GigaOm makes the qualitative case for the NGINX Ingress Controller: advanced traffic management, ease of use, JWT authentication, and WAF integration. Just as important, however, is the quantitative case for deploying an enterprise-grade Ingress Controller on OpenShift.
To that end, we benchmarked the OCP Router against the NGINX Ingress Controller based on NGINX Plus (nginxinc/kubernetes-ingress) in an OCP environment, dynamically scaling the number of upstream servers (backend pods) up and down during the tests.
In our performance testing, we evaluated the tools on two factors:
- Factor 1: Latency in Dynamic Deployments
We found that latency distribution is the most effective metric for measuring end-user experience in dynamic deployments: the higher the access latency, the worse the user experience. We also found that to understand the true user experience, it is necessary to consider the full distribution of latencies, including the maximum latencies the application exhibits. For a detailed explanation, see the “Performance Results” section of the blog post “NGINX and HAProxy: User Experience Based on Public Cloud Environments.”
- Factor 2: Timeouts and Errors
If latency rises while the application is being dynamically redeployed, it is often because the system struggles to handle dynamic configuration reloads, producing timeouts or errors.
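The percentile view described under Factor 1 can be sketched in a few lines of shell. This is a hypothetical helper, not part of our test harness: it computes nearest-rank percentiles over a file of per-request latencies (one millisecond value per line), and the demo data it generates is synthetic.

```shell
#!/bin/sh
# Hypothetical sketch: nearest-rank percentile over a file of
# per-request latencies, one value (in ms) per line.
percentile() {
  p=$1; file=$2
  sort -n "$file" | awk -v p="$p" '
    { v[NR] = $1 }
    END {
      idx = int((p / 100) * NR + 0.999999)   # ceil(p% of N)
      if (idx < 1) idx = 1
      print v[idx]
    }'
}

# Synthetic demo data: 1..1000 ms, uniformly spread
samples=$(mktemp)
seq 1 1000 > "$samples"

for p in 50 99 99.99; do
  echo "p$p: $(percentile "$p" "$samples") ms"
done
rm -f "$samples"
```

Note how far apart p50 and p99.99 can sit: a median that looks healthy says nothing about the tail, which is exactly where the two SUTs diverge in our results.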

Performance Test Results
Let’s get straight to the interesting part: the results. Details on the test topology and methodology follow afterward.
As mentioned above, we considered two factors when evaluating performance: access latency and timeouts/errors.
As shown in the figure below, the latency added by the NGINX Ingress Controller throughout the test is negligible, with 99.999% of requests completing in under 700 milliseconds. In contrast, the OCP Router begins adding latency at relatively low percentiles, and its latency grows roughly exponentially, leveling off near the 99.99th percentile at as much as 25,000 milliseconds (25 seconds). This indicates that in a cluster environment with frequent changes and iterations, the OCP Router can produce a poor user experience.

Test Configuration and Methodology
The NGINX Ingress Controller and OpenShift Router are the systems under test (SUT), and we conducted the same tests on both. The SUT offloads TLS 1.3 connections from clients and forwards client requests to the backend applications over separate connections.
The test clients are hosted on independent machines running CentOS 7, which are in the same LAN environment as the OpenShift cluster.
The SUT and backend applications are deployed in an OCP cluster hosted on VMware vSphere 6.7.0.45100.
For TLS-encrypted connections, we used a 2048-bit RSA key and ciphers providing perfect forward secrecy (PFS).
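A key and certificate of the kind used for the SUT’s TLS termination can be generated with a single `openssl` command. This is an illustrative sketch, not our exact test material; the hostname `ingress.example.com` and secret name `ingress-tls` are placeholders.

```shell
#!/bin/sh
# Hypothetical sketch: generate a 2048-bit RSA key and a self-signed
# certificate for TLS termination at the Ingress layer.
# "ingress.example.com" is a placeholder hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 365 \
  -subj "/CN=ingress.example.com"

# The pair could then be loaded into the cluster as a TLS secret, e.g.:
# oc create secret tls ingress-tls --cert=tls.crt --key=tls.key
```

In production you would of course use a CA-issued certificate rather than a self-signed one; for latency benchmarking, only the key size and cipher selection matter.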
Each response from the backend application contains approximately 1KB of basic service metadata and a 200 OK HTTP status code.
Client Deployment
We used wrk2 (version 4.0.0) to run the following command on the client machine, generating a sustained throughput of 1000 requests per second (RPS, set with the -R option) for 60 seconds (set with the -d option):
./wrk -t 2 -c 50 -d 60s -R 1000 -L https://ingress-url:443/
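The -L flag makes wrk2 print a detailed latency distribution, which is where the percentile figures in the results come from. As a hypothetical post-processing step, the percentile lines can be pulled out of that report with awk; the here-doc below is illustrative sample output in wrk2’s format, not data from our test runs.

```shell
#!/bin/sh
# Hypothetical sketch: extract percentile lines from the latency
# distribution that wrk2 prints with -L. The sample report below is
# illustrative, not measured data.
report='  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    1.05ms
 90.000%    2.31ms
 99.000%    8.12ms
 99.990%  645.00ms'

printf '%s\n' "$report" | awk '
  $1 ~ /%$/ {
    gsub("%", "", $1)
    printf "p%s %s\n", $1, $2
  }'
```

Collecting these lines across repeated runs makes it easy to chart how the tail of the distribution shifts as the backend scales up and down.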
SUT Software Used
- OpenShift Platform version 4.8, which includes the default HAProxy-based OCP Router
- NGINX Ingress Controller version 1.11.0 (NGINX Plus R22)
Backend Application Deployment
To test against dynamically deployed backend applications, we used the following script to periodically scale the number of backend pods up and down. This simulates a dynamic OpenShift environment and measures how effectively the NGINX Ingress Controller or OCP Router adapts to endpoint changes.
while true
do
    oc scale deployment nginx-backend --replicas=4
    sleep 10
    oc scale deployment nginx-backend --replicas=2
    sleep 10
done

Conclusion
Most companies adopting microservices are pushing new development through CI/CD pipelines at an ever-increasing frequency. It is therefore critical that the data plane has the functionality and performance to keep pace with these changes without degrading the end-user experience. Delivering consistently low-latency connections to every client, under all conditions, is a key requirement for the best end-user experience.
Based on these performance test results, the NGINX Ingress Controller provides the best end-user experience in containerized environments that require rapid updates and iteration. To get started, download a free trial of the NGINX Ingress Controller and learn how to deploy it with the NGINX Ingress Operator.