
Node.js Cluster Module

What is the Cluster Module?

The Cluster module provides a way to create multiple worker processes that share the same server port. Because Node.js runs your JavaScript on a single thread by default, the Cluster module lets an application utilize multiple CPU cores, significantly improving throughput on multi-core systems. Each worker runs in its own process with its own event loop and memory space, while all workers share the same server port. The master process is responsible for creating workers and distributing incoming connections among them.

Importing the Cluster Module

The Cluster module is included in Node.js by default. You can use it by requiring it in your script. (In Node.js 16 and later, cluster.isMaster was renamed to cluster.isPrimary; the older name still works as an alias.)

const cluster = require('cluster');
const os = require('os');

// Check if this is the master process
if (cluster.isMaster) {
  console.log(`Master process ${process.pid} is running`);
} else {
  console.log(`Worker process ${process.pid} started`);
}

How Clustering Works

The Cluster module works by creating a master process that spawns multiple worker processes. The master process typically doesn't run the application logic itself; it manages the workers. Each worker is a new Node.js process that re-executes the same script and runs your application code independently.

Note:

Under the hood, the Cluster module uses the Child Process module's fork() method to create new workers.

Master process responsibilities:

Creating and managing worker processes

Monitoring worker health

Restarting crashed workers

Load balancing (distributing connections)

Worker process responsibilities:

Running the actual application code

Handling incoming requests

Processing data

Executing business logic

Creating a Basic Cluster

Here's a simple example of creating a cluster with worker processes for each CPU:

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // This is the master process
  console.log(`Master ${process.pid} is running`);

  // Fork workers for each CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  // Listen for worker exits
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
    // Fork a new worker to replace the dead one
    console.log('Forking a new worker...');
    cluster.fork();
  });
} else {
  // This is a worker process
  // Create an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`Hello from Worker ${process.pid}\n`);
    // Simulate CPU work
    let i = 1e7;
    while (i > 0) { i--; }
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}

In this example:

The master process detects the number of CPU cores

It forks one worker per CPU

Each worker creates an HTTP server on the same port (8000)

The cluster module automatically load balances the incoming connections

If a worker crashes, the master creates a new one

Worker Communication

You can communicate between master and worker processes using the send() method and message events, similar to how IPC works in the Child Process module.

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  // Track request count for each worker
  const requestCounts = {};

  // Fork a worker and wire up its message handling
  function forkWorker() {
    const worker = cluster.fork();
    requestCounts[worker.id] = 0;

    // Listen for messages from this worker
    worker.on('message', (msg) => {
      if (msg.cmd === 'incrementRequestCount') {
        requestCounts[worker.id]++;
        console.log(`Worker ${worker.id} (pid ${worker.process.pid}) has handled ${requestCounts[worker.id]} requests`);
      }
    });
  }

  // Fork workers
  for (let i = 0; i < numCPUs; i++) {
    forkWorker();
  }

  // Every 10 seconds, send the request count to each worker
  setInterval(() => {
    for (const id in cluster.workers) {
      cluster.workers[id].send({
        cmd: 'requestCount',
        requestCount: requestCounts[id]
      });
    }
    console.log('Total request counts:', requestCounts);
  }, 10000);

  // Handle worker exit: drop the dead worker's count and fork a
  // replacement with its own message listener
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
    delete requestCounts[worker.id];
    forkWorker();
  });
} else {
  // Worker process
  console.log(`Worker ${process.pid} started`);
  let localRequestCount = 0;

  // Handle messages from the master
  process.on('message', (msg) => {
    if (msg.cmd === 'requestCount') {
      console.log(`Worker ${process.pid} has handled ${msg.requestCount} requests according to master`);
    }
  });

  // Create an HTTP server
  http.createServer((req, res) => {
    // Notify the master that we handled a request
    process.send({ cmd: 'incrementRequestCount' });
    // Increment local count
    localRequestCount++;
    // Send response
    res.writeHead(200);
    res.end(`Hello from Worker ${process.pid}, I've handled ${localRequestCount} requests locally\n`);
  }).listen(8000);
}

Zero-Downtime Restart

One of the main benefits of clustering is the ability to restart workers without downtime. This is useful for deploying updates to your application.

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  // Store workers
  const workers = [];

  // Fork initial workers
  for (let i = 0; i < numCPUs; i++) {
    workers.push(cluster.fork());
  }

  // Function to restart workers one by one
  function restartWorkers() {
    console.log('Starting zero-downtime restart...');
    let i = 0;

    function restartWorker() {
      if (i >= workers.length) {
        console.log('All workers restarted successfully!');
        return;
      }

      const worker = workers[i++];
      console.log(`Restarting worker ${worker.process.pid}...`);

      // Create a new worker
      const newWorker = cluster.fork();
      newWorker.on('listening', () => {
        // Once the new worker is listening, disconnect the old one gracefully
        worker.disconnect();
        // Replace the old worker in our array
        workers[workers.indexOf(worker)] = newWorker;
        // Continue with the next worker
        setTimeout(restartWorker, 1000);
      });
    }

    // Start the recursive process
    restartWorker();
  }

  // Simulate a restart after 20 seconds
  setTimeout(restartWorkers, 20000);

  // Handle unexpected worker exit (exitedAfterDisconnect is true only
  // for workers we deliberately disconnected)
  cluster.on('exit', (worker, code, signal) => {
    if (worker.exitedAfterDisconnect !== true) {
      console.log(`Worker ${worker.process.pid} died unexpectedly, replacing it...`);
      const newWorker = cluster.fork();
      workers[workers.indexOf(worker)] = newWorker;
    }
  });
} else {
  // Worker process
  // Create an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`Worker ${process.pid} responding, uptime: ${process.uptime().toFixed(2)} seconds\n`);
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}

This example demonstrates:

Creating an initial set of workers

Replacing each worker one by one

Ensuring a new worker is listening before disconnecting the old one

Gracefully handling unexpected worker deaths
