Scaling WebSocket Servers

Posted By : Rajesh Kumar | 16-Sep-2022

What is Horizontal Scaling?
Horizontal scaling, also called scaling out, means adding nodes or machines to your infrastructure to meet new demand. When a single server no longer has the capacity to handle the workload, adding more servers is the practical solution. A load balancer is then required to distribute traffic across the multiple microservice instances.

We need horizontal scaling when the user base grows and a single server can no longer handle the volume of user requests. To keep performance high, we increase the capacity of the system by adding servers, so it is important to be able to increase or decrease the number of servers whenever required to meet user demand.

Quick Recap

We implemented the WebSocket server with Node.js, Express.js, and Redis Pub/Sub. Communication between the frontend (the web application and the Android/iOS mobile applications) and the WebSocket server happens over WebSocket, while communication between the backend microservices and the WebSocket server happens via APIs and the publish-subscribe messaging pattern.

What Are The Issues and Solutions?

The design we worked out earlier is perfectly fine in a setup where we have only a single instance of each microservice. However, a single instance is not practical for a production environment, where we generally deploy microservices with multiple replicas or instances for high availability and performance. This horizontal scaling of WebSocket server instances and backend microservices introduces some issues.

Issue #1: Messages are lost due to the load balancer

When we add APIs so that backend microservices can send messages to the WebSocket server for unidirectional real-time communication, a load balancer redirects the incoming traffic across the WebSocket server instances as we scale them out.

Suppose an instance of the web or mobile application establishes its WebSocket connection to instance B. When the backend server tries to send a message to that application, the load balancer may redirect the API request to instance A. In this case, instance A does not have a WebSocket connection to that particular client, so the message is lost.

Solution for Issue #1: Broadcast messages using Redis Pub/Sub

We can introduce a broadcasting channel using the Redis publish-subscribe messaging pattern: every message received from the backend microservices is broadcast to all WebSocket server instances. Whichever instance holds the target client's WebSocket connection then forwards the message, ensuring the web or mobile application receives it over WebSocket regardless of which instance the load balancer hit.
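The broadcast idea can be sketched in plain Node.js. The class and channel names below are illustrative, and the in-memory bus stands in for a real Redis client (which would call something like `subscriber.subscribe('broadcast', handler)` and `publisher.publish('broadcast', message)`); the delivery logic, however, is the same: every instance sees every message, but only the instance holding the client's socket delivers it.

```javascript
// In-memory stand-in for Redis Pub/Sub: publish fans out to all subscribers.
class FakeRedisPubSub {
  constructor() { this.subscribers = []; }
  subscribe(handler) { this.subscribers.push(handler); }
  publish(message) { this.subscribers.forEach((h) => h(message)); }
}

// One WebSocket server instance; clients map to their delivered messages.
class WebSocketServerInstance {
  constructor(name, bus) {
    this.name = name;
    this.clients = new Map(); // clientId -> array of delivered payloads
    bus.subscribe((msg) => this.onBroadcast(msg));
  }
  connect(clientId) { this.clients.set(clientId, []); }
  onBroadcast(msg) {
    // Only the instance that actually holds this client's socket delivers;
    // all other instances simply ignore the broadcast.
    const inbox = this.clients.get(msg.to);
    if (inbox) inbox.push(msg.payload);
  }
}

const bus = new FakeRedisPubSub();
const instanceA = new WebSocketServerInstance('A', bus);
const instanceB = new WebSocketServerInstance('B', bus);

// The client opens its WebSocket connection against instance B only.
instanceB.connect('client-1');

// A backend microservice's API call lands on instance A (the load
// balancer's pick); instead of sending directly, A publishes to the
// broadcast channel, so B can deliver to its connected client.
bus.publish({ to: 'client-1', payload: 'order-shipped' });

console.log(instanceB.clients.get('client-1')); // [ 'order-shipped' ]
```

Because every instance subscribes to the same channel, it no longer matters which instance the load balancer routes the backend's API call to.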

Issue #2: Messages are processed in duplicate because multiple backend subscribers listen on a single topic

We used Redis Pub/Sub to handle bidirectional real-time communication between the WebSocket server instances and the backend microservices. When we scale out the WebSocket servers and backend microservices, Redis Pub/Sub delivers each message to every subscriber, so the same message ends up being processed more than once.

Take a look at the message flow in each direction in bidirectional real-time communication.

  • Message Flow: Microservices to frontend application (no duplicate message processing) → Here the requirement is precisely that all WebSocket server instances receive the message, because each frontend application holds a WebSocket connection to only a single WebSocket server instance. So when a message flows from a backend microservice to the frontend (backend service → WebSocket server → frontend application (web/mobile)), only the target frontend instance ultimately receives it, which is the correct behavior.
  • Message Flow: Frontend application to microservices (duplicate message processing) → When a message flows from the frontend application to the backend microservices (frontend (web/mobile) → WebSocket server → backend microservice), we expect only one backend instance to process it. With plain Pub/Sub, however, every backend subscriber receives the message, so it is processed multiple times, which is incorrect behavior.

Solution for Issue #2: Redis Pub/Sub with consumer groups

We can use the concept of Consumer Groups introduced by Redis Streams. Within a consumer group, each message is delivered to only one subscriber for processing. Redis Streams thereby ensures there is no duplicate message processing, because exactly one backend microservice instance receives each message.
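The difference between plain Pub/Sub fan-out and consumer-group delivery can be sketched with two small in-memory models. All names here are illustrative, and the round-robin dispatch is only a stand-in for how Redis balances stream entries across group members; with real Redis the equivalent commands would be along the lines of `XGROUP CREATE`, `XADD`, and `XREADGROUP GROUP <group> <consumer>`.

```javascript
// Plain Pub/Sub: publish fans out, so EVERY subscriber processes the message.
class FanOutChannel {
  constructor() { this.handlers = []; }
  subscribe(h) { this.handlers.push(h); }
  publish(msg) { this.handlers.forEach((h) => h(msg)); }
}

// Consumer-group semantics: each entry goes to exactly ONE group member.
// (Here modeled as round-robin; real Redis load-balances pending entries.)
class ConsumerGroupStream {
  constructor() { this.consumers = []; this.next = 0; }
  join(h) { this.consumers.push(h); }
  add(msg) {
    const consumer = this.consumers[this.next % this.consumers.length];
    this.next += 1;
    consumer(msg);
  }
}

// Two backend microservice instances subscribed via plain Pub/Sub:
let fanOutProcessed = 0;
const channel = new FanOutChannel();
channel.subscribe(() => { fanOutProcessed += 1; }); // backend instance 1
channel.subscribe(() => { fanOutProcessed += 1; }); // backend instance 2
channel.publish('user-message'); // processed twice -- the Issue #2 bug

// The same two instances joined into one consumer group:
let groupProcessed = 0;
const stream = new ConsumerGroupStream();
stream.join(() => { groupProcessed += 1; }); // backend instance 1
stream.join(() => { groupProcessed += 1; }); // backend instance 2
stream.add('user-message'); // processed exactly once

console.log({ fanOutProcessed, groupProcessed }); // { fanOutProcessed: 2, groupProcessed: 1 }
```

This is why the two directions need different mechanisms: fan-out is exactly what we want for backend → WebSocket servers (Issue #1), while single-consumer delivery is what we want for WebSocket servers → backend (Issue #2).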

Conclusion

We have walked through the design considerations for horizontally scaling a WebSocket server in a microservice architecture. We use the Redis publish-subscribe messaging pattern, together with Redis Streams consumer groups, to ensure there is no message loss and no duplicate message processing in real-time communication between the web and mobile applications and the backend microservices.

We, at Oodles, provide complete ERP application development services to help enterprises sail through their routine operational challenges. Our development team uses the latest tech stack and agile methodologies to build custom enterprise solutions at cost-effective rates. To learn more about our custom iERP development services, drop us a line at [email protected]

