Node.js is an open-source, cross-platform JavaScript runtime environment that executes JavaScript code outside a web browser. It's built on Chrome's V8 JavaScript engine and uses an event-driven, non-blocking I/O model that makes it lightweight and efficient.
Key reasons for popularity: one language (JavaScript) across the whole stack, non-blocking I/O that handles many concurrent connections cheaply, the huge npm ecosystem, the speed of the V8 engine, and a large, active community.
// Simple Node.js server example
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello, Node.js World!');
});
server.listen(3000, () => {
console.log('Server running at http://localhost:3000/');
});
Join our Node.js Certification Course at leadwithskills.com to build real-world applications and become job-ready in 12 weeks!
Node.js architecture is built around several key components that work together to provide its unique capabilities:
Core Components: the V8 engine that executes JavaScript, libuv for the event loop and asynchronous I/O, C++ bindings that connect the two, and the core JavaScript APIs (fs, http, and friends).
Architecture Layers: your JavaScript application on top, the Node.js core APIs beneath it, the C++ bindings under those, and V8 plus libuv at the bottom talking to the operating system.
The event-driven model is a programming paradigm where the flow of the program is determined by events such as user actions, sensor outputs, or message passing. In Node.js, this model is fundamental to its non-blocking I/O operations.
Key Components:
// Event-driven programming example
const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
// Register event listener
myEmitter.on('userRegistered', (user) => {
console.log(`New user registered: ${user.name}`);
// Send welcome email, create profile, etc.
});
myEmitter.on('userRegistered', (user) => {
console.log(`Analytics recorded for: ${user.email}`);
});
// Emit event
myEmitter.emit('userRegistered', {
name: 'John Doe',
email: 'john@example.com'
});
Node.js has revolutionized backend development by enabling JavaScript on the server-side, offering numerous advantages for modern web applications.
Primary Use Cases: REST APIs and microservices, real-time applications like chat and live dashboards, streaming services, backends for single-page applications, and command-line tools.
Our Node.js Certification Course at leadwithskills.com teaches you to build scalable, secure backend systems used by companies like Netflix, Uber, and PayPal.
Node.js offers significant advantages over traditional server-side technologies like Apache, PHP, or Java-based servers:
Performance Comparison:
Technical Advantages:
| Feature | Node.js | Traditional Servers |
|---|---|---|
| Concurrency Model | Event-driven, non-blocking | Multi-threaded, blocking I/O |
| Memory Usage | Low footprint per connection | High memory for thread stacks |
| Scalability | Horizontal scaling via clustering and worker processes | Vertical scaling with more threads and memory per machine |
JavaScript is a programming language, while Node.js is a runtime environment that allows JavaScript to run on the server-side.
Key Differences: JavaScript is the language itself - its syntax, types, and core objects. Node.js bundles the V8 engine with extra APIs (fs, http, process, Buffer) so that language can run on servers. Browsers give JavaScript the DOM and window instead.
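To make the contrast concrete, here is a minimal sketch (values shown are only illustrative) that runs under Node.js but not in a browser:
// Runs in Node.js, not in a browser
const fs = require('fs');            // server-side file system API
console.log(typeof fs.readFile);     // 'function'
console.log(process.version);        // the installed Node.js version
console.log(typeof window);          // 'undefined' - there is no DOM here
// In a browser you would have window and document instead,
// and no built-in fs or process object.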
V8 is Google's open-source high-performance JavaScript and WebAssembly engine, written in C++. It's the same engine that powers Google Chrome and Chromium-based browsers.
Key Features of V8: just-in-time (JIT) compilation to native machine code, aggressive optimization of frequently executed code, efficient garbage collection, and WebAssembly support.
// V8 optimizations in action
function calculateSum(array) {
let sum = 0;
for (let i = 0; i < array.length; i++) {
sum += array[i]; // V8 optimizes this loop
}
return sum;
}
const numbers = [1, 2, 3, 4, 5];
console.log(calculateSum(numbers)); // 15
Node.js achieves asynchrony through its event-driven, non-blocking I/O model. Instead of waiting for operations to complete, it continues executing other code and uses callbacks to handle completed operations.
Asynchronous Patterns: callbacks (the classic error-first style), promises, and async/await.
// Asynchronous file reading example
const fs = require('fs');
console.log('Start reading file...');
// This doesn't block the event loop
fs.readFile('example.txt', 'utf8', (err, data) => {
if (err) throw err;
console.log('File content:', data);
});
console.log('This runs immediately, before file reading completes');
Struggling with callbacks and promises? Our Node.js Certification Course at leadwithskills.com breaks down complex async concepts with real-world projects and expert mentorship.
libuv is a multi-platform C library that provides support for asynchronous I/O operations. It's a critical component of Node.js that handles the event loop and provides the abstraction for non-blocking operations.
Key Features of libuv: the event loop implementation, a thread pool for file system and DNS work, cross-platform asynchronous TCP/UDP sockets, and file system events.
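You rarely call libuv directly, but you can see its thread pool at work. A small sketch (the iteration count and timings are arbitrary):
const crypto = require('crypto');
const start = Date.now();
// Each async pbkdf2 call is handed off to libuv's thread pool,
// so several hashes can be computed without blocking the event loop.
for (let i = 1; i <= 4; i++) {
  crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', () => {
    console.log(`hash ${i} finished after ${Date.now() - start} ms`);
  });
}
// The UV_THREADPOOL_SIZE environment variable (default 4) controls
// how many of these run in parallel.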
package.json is like the ID card for your Node.js project. It tells everyone what your project is about and what it needs to work.
Think of it as a recipe. It lists all the ingredients your project needs. Without it, your project won't know what to install or how to run.
Here's what it contains: the project name and version, a description, the main entry file, the scripts you can run, and the dependencies and devDependencies it needs.
Here's a simple example:
{
"name": "my-cool-app",
"version": "1.0.0",
"description": "A simple Node.js app",
"main": "app.js",
"scripts": {
"start": "node app.js",
"dev": "nodemon app.js"
},
"dependencies": {
"express": "^4.18.0"
},
"devDependencies": {
"nodemon": "^2.0.0"
}
}
When you run "npm install", it reads this file and downloads everything listed. It's that simple. This file makes sure everyone working on your project has the same setup.
You create package.json by running "npm init" in your project folder. It asks you questions and builds the file for you. Or you can use "npm init -y" to accept all the defaults.
The version numbers with ^ and ~ are important. They tell npm which updates are safe to install. ^4.18.0 means "any version 4.x.x that's 4.18.0 or higher". It won't jump to version 5.
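A few illustrative examples of how npm reads these ranges (the version numbers are made up):
// "^4.18.0"  -> allows 4.18.1 and 4.19.0, but not 5.0.0
// "~4.18.0"  -> allows 4.18.1 and 4.18.9, but not 4.19.0
// "4.18.0"   -> allows exactly 4.18.0, nothing else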
package.json also helps when you deploy your app. Services like Heroku read this file to know how to start your application. They look at the "scripts" section to find the start command.
Without package.json, you'd have to manually tell everyone what to install. You'd have to remember all the dependencies yourself. It would be a mess.
This file is the heart of any Node.js project. It keeps everything organized and makes sharing code much easier.
When you type "npm start", Node.js looks in your package.json file for a script called "start". It runs whatever command you put there.
It's like telling a chef "start cooking" - they look at the recipe and begin following the instructions.
Here's what typically happens:
// In package.json
{
"scripts": {
"start": "node server.js"
}
}
When you run "npm start", it executes "node server.js". Your server starts running.
But "start" is special. If you don't define a start script, npm tries to run "node server.js" by default. It's one of the few scripts that has a default behavior.
Other scripts work differently. If you type "npm run dev" and there's no "dev" script, it just fails. No guessing.
// These all work differently
npm start              // Runs "start" script or defaults to node server.js
npm run dev            // Only runs if "dev" script exists
npm test               // Runs "test" script
npm run custom-task    // Runs "custom-task" script
Why use npm scripts instead of running node directly? Several reasons.
First, it makes your project easier for others to use. They don't need to remember which file to run. Just type "npm start" and it works.
Second, you can chain commands together. Like running a build step before starting:
{
"scripts": {
"start": "npm run build && node server.js",
"build": "webpack --mode production"
}
}
Third, npm scripts have access to node_modules/.bin. You can run tools like nodemon without installing them globally.
{
"scripts": {
"dev": "nodemon server.js"
}
}
Even though nodemon isn't installed globally, npm finds it in node_modules and runs it. This makes your project self-contained.
npm start is just the beginning. You can create scripts for anything: testing, building, deploying, cleaning. It's like having a control panel for your project.
The real power comes when you combine scripts. You can set up complex workflows that anyone can run with one command.
Next time you see "npm start", remember it's just looking in package.json and running whatever you told it to. Simple but powerful.
NPM stands for Node Package Manager. It's like a giant app store for Node.js code.
Imagine you're building a house. You could cut down trees to make wood, but that takes forever. NPM lets you just go to the store and buy pre-cut wood. It saves you tons of time.
Here's what NPM does for you: installs packages with one command, keeps track of versions, runs your project scripts, and lets you publish your own code for others to use.
Some common NPM commands:
npm install express    # Install a package
npm start              # Run your app
npm test               # Run tests
npm publish            # Share your code
Without NPM, you'd have to manually download every piece of code you need. You'd have to check for updates yourself. It would be a huge headache.
NPM has over a million packages available. That's like having a million tools in your toolbox. Whatever you need to build, someone has probably already made a tool for it.
Dependencies are like ingredients you need to cook a meal. DevDependencies are like the kitchen tools you use to prepare it.
Here's the simple difference: dependencies are needed to run your app. DevDependencies are only needed while you're developing it.
Think about building a car: the engine and wheels are dependencies - the car can't run without them. The factory robots and testing tools are devDependencies - you need them to build and check the car, but they don't ship with it.
Real examples:
// Dependencies - needed in production
"dependencies": {
"express": "^4.18.0", // Web framework
"mongoose": "^6.0.0" // Database library
}
// DevDependencies - only for development
"devDependencies": {
"nodemon": "^2.0.0", // Auto-restarts server
"jest": "^27.0.0" // Testing framework
}
When you deploy your app to a server, you only install dependencies. This makes your app smaller and faster. DevDependencies stay on your development machine.
Here's how you install each:
npm install express           # Goes to dependencies
npm install --save-dev jest   # Goes to devDependencies
Mixing them up won't break your app, but it's good practice to keep them separate. It keeps things organized.
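On a production server you usually skip devDependencies entirely. A rough sketch of the commands (the exact flag depends on your npm version):
npm install                # Installs dependencies + devDependencies (your machine)
npm install --production   # Installs only dependencies (older npm)
npm install --omit=dev     # Same idea on newer npm versions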
The node_modules folder is where all your downloaded packages live. It's like a storage room for all the code your project needs.
When you run "npm install", NPM creates this folder and puts everything in it. Every package, every dependency, every tool - they all go here.
Important things to know: it can get huge, you should never commit it to Git, and you can always delete it and recreate it with npm install.
Here's what it looks like:
node_modules/
├── express/
│   ├── index.js
│   └── package.json
├── mongoose/
│   ├── lib/
│   └── package.json
└── ...hundreds more
Why not commit it to Git? Several reasons. The folder can be huge - gigabytes sometimes. Different operating systems handle files differently. Most importantly, package.json has all the information needed to recreate it.
If you delete node_modules, no problem. Just run "npm install" and everything comes back. The package.json file remembers exactly what you need.
Sometimes node_modules causes issues. If things aren't working right, deleting this folder and running npm install again often fixes it. It's like restarting your computer when something acts weird.
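Two small snippets people commonly keep around node_modules - ignoring it in Git and rebuilding it when it misbehaves (shell commands shown for Unix-like systems):
# .gitignore - keep node_modules out of version control
node_modules/
# If packages start acting strangely, recreate the folder:
rm -rf node_modules
npm install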
The global object is like the shared workspace in Node.js. Everything placed here is available everywhere in your code.
In browsers, this is called "window". In Node.js, it's called "global". It's the same idea - a place for stuff that needs to be available to all parts of your program.
Common global things in Node.js: console, setTimeout and setInterval, process, Buffer, and __dirname and __filename.
You can add your own stuff to global too:
// Make a variable available everywhere
global.appName = "My Cool App";
// Now you can use it in any file
console.log(appName); // Works anywhere!
But be careful with this. Global variables can make code hard to understand. If everything is global, you don't know where things are coming from.
Here's how you use some built-in globals:
console.log("Hello world"); // Output to console
setTimeout(() => {
console.log("This runs later");
}, 1000); // Wait 1 second
console.log(__dirname); // Shows current folder
The global object is always there. You don't need to import it or require it. It's just available, like air in a room.
Most of the time, you'll use the built-in globals. Adding your own should be rare. It's like putting something in a shared family fridge - only do it if everyone really needs it.
The process object is like the control panel for your Node.js program. It gives you information about and control over what's happening right now.
Think of it as the dashboard of your car. It shows your speed, fuel level, and lets you control things like the radio. Process does the same for your running program.
Here are the most useful parts of process: process.argv (command-line arguments), process.env (environment variables), process.cwd() (current working directory), process.exit() (stop the program), and process.on() (react to events and signals).
Let me show you some examples:
// See command line arguments
console.log(process.argv);
// Run: node app.js hello world
// Output: [ '/path/to/node', '/path/to/app.js', 'hello', 'world' ]
// Access environment variables
console.log(process.env.PORT); // If PORT=3000, this shows 3000
// Get current folder
console.log(process.cwd()); // Shows /users/yourname/projects
// Stop the program
process.exit(0); // 0 means success
Environment variables are really important. They let you configure your app without changing code. Like setting the database password or API keys.
Here's a real example using process:
// Get the port from environment or use default
const port = process.env.PORT || 3000;
// Start server on that port
app.listen(port, () => {
console.log(`Server running on port ${port}`);
});
This code is flexible. If you set PORT=5000 in environment, it uses 5000. Otherwise, it uses 3000. This is how apps adapt to different environments.
Process also handles signals. Like when someone presses Ctrl+C to stop your program. You can catch that and clean up properly.
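For example, a minimal sketch of catching Ctrl+C (SIGINT) and exiting cleanly:
// Catch Ctrl+C and shut down gracefully
process.on('SIGINT', () => {
  console.log('Shutting down...');
  // close servers, database connections, finish pending work here
  process.exit(0);
});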
Require is how you bring code from other files into your current file. It's like inviting friends over to help you with a project.
Without require, every file would be alone. You'd have to copy the same code everywhere. Require lets you share code between files easily.
There are three main ways to use require: loading core modules by name, loading your own files with a relative path, and loading installed NPM packages by name.
Examples of each:
// Core module - no path needed
const fs = require('fs');
const http = require('http');
// Your own file - use relative path
const myModule = require('./my-module');
const utils = require('../lib/utils');
// NPM package - just the name
const express = require('express');
const mongoose = require('mongoose');
When you require a file, Node.js runs that file and gives you whatever it exports. It only runs the file once, even if you require it multiple times.
Here's a complete example:
// math-utils.js - the file being required
function add(a, b) {
return a + b;
}
function multiply(a, b) {
return a * b;
}
module.exports = { add, multiply };
// app.js - the file doing the requiring
const math = require('./math-utils');
console.log(math.add(2, 3)); // 5
console.log(math.multiply(2, 3)); // 6
Require is synchronous. That means it loads everything immediately. Your code waits until the required file is fully loaded before continuing.
Node.js caches required files. If you require the same file twice, you get the same object. This is efficient but can cause confusion if you're expecting fresh copies.
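A small sketch of the cache in action (./my-module stands for any local file of yours):
const a = require('./my-module');
const b = require('./my-module');
console.log(a === b); // true - both point to the same cached object
// If you really need a fresh copy, remove it from the cache first:
delete require.cache[require.resolve('./my-module')];
const fresh = require('./my-module'); // the file runs again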
File paths matter. "./" means same folder. "../" means parent folder. No "./" means core module or NPM package.
Require makes Node.js modular. You can break big programs into small, manageable pieces. Each piece does one thing well.
module.exports is how you share your code with other files. It's like packing a lunchbox - you decide what to put in it for others to use.
When another file requires your file, they get whatever you put in module.exports. If you don't export anything, they get an empty object.
There are several ways to use module.exports:
// Export a single function
module.exports = function add(a, b) {
return a + b;
};
// Export an object with multiple things
module.exports = {
add: function(a, b) { return a + b; },
multiply: function(a, b) { return a * b; },
PI: 3.14159
};
// Export things one by one
module.exports.add = function(a, b) {
return a + b;
};
module.exports.PI = 3.14159;
Here's a complete example showing how it works:
// calculator.js - the exporting file
function add(a, b) {
return a + b;
}
function subtract(a, b) {
return a - b;
}
// Decide what to share
module.exports = {
add,
subtract
};
// app.js - the importing file
const calc = require('./calculator');
console.log(calc.add(5, 3)); // 8
console.log(calc.subtract(5, 3)); // 2
You can export any type of value: functions, objects, numbers, strings, even classes.
What happens if you forget module.exports? The requiring file gets an empty object. Nothing breaks, but they can't use your code.
You can only have one module.exports per file. If you assign to it multiple times, only the last assignment counts.
Here's a class example:
// user.js
class User {
constructor(name) {
this.name = name;
}
greet() {
return `Hello, ${this.name}!`;
}
}
module.exports = User;
// app.js
const User = require('./user');
const john = new User('John');
console.log(john.greet()); // Hello, John!
module.exports makes your code reusable. Write once, use everywhere. It's one of the most important concepts in Node.js.
This confuses many people. module.exports is the real deal. exports is just a helper that points to module.exports.
Think of it like this: module.exports is the actual house. exports is the front door. Most of the time, they lead to the same place.
Here's the technical truth: exports is a variable that starts off pointing to module.exports. But if you reassign exports, they become different things.
// This works - exports points to module.exports
exports.add = function(a, b) {
return a + b;
};
// This breaks the connection
exports = function() {
return "I'm broken now";
};
Let me show you what happens:
// File: math.js
exports.add = (a, b) => a + b;
exports = { multiply: (a, b) => a * b };
// File: app.js
const math = require('./math');
console.log(math.add(2, 3)); // Error! add doesn't exist
Why does this happen? Because when you reassign exports, it no longer points to module.exports. The require function only looks at module.exports.
Here's the safe way to do it:
// Always use module.exports for reassigning
module.exports = {
add: (a, b) => a + b,
multiply: (a, b) => a * b
};
// Or use exports for adding properties
exports.add = (a, b) => a + b;
exports.multiply = (a, b) => a * b;
My advice: stick with module.exports. It's always clear what you're doing. No surprises.
Some people like exports because it's shorter. But module.exports is only 7 characters longer. Is that really worth the confusion?
Here's a simple rule: if you're exporting multiple things, use module.exports with an object. If you're exporting one thing, use module.exports directly.
Both work. But module.exports is safer. You'll never accidentally break your exports.
CommonJS is the way Node.js organizes code into separate files. It's the system that makes require and module.exports work.
Before CommonJS, JavaScript had no good way to split code into files. Everything was global. CommonJS fixed this.
The main ideas of CommonJS are simple: every file is a module with its own scope, module.exports decides what a file shares, require() pulls another module in, and modules are loaded synchronously and cached after the first load.
Here's how it works in practice:
// user-module.js
const users = [];
function addUser(name) {
users.push(name);
}
function getUsers() {
return users;
}
module.exports = { addUser, getUsers };
// app.js
const userModule = require('./user-module');
userModule.addUser('Alice');
userModule.addUser('Bob');
console.log(userModule.getUsers()); // ['Alice', 'Bob']
Each module has its own space. The "users" array in user-module.js is private. Other files can't access it directly. They can only use the exported functions.
This is called encapsulation. It's like having private rooms in a house. You control what others can see and use.
CommonJS loads modules synchronously. That means when you require a file, your code waits until that file is fully loaded before continuing.
This is different from ES modules (the new JavaScript standard). ES modules can load asynchronously. But CommonJS is still what Node.js uses by default.
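For a taste of that, modern Node.js also supports dynamic import(), which returns a promise. A minimal sketch (assuming a hypothetical ./math.mjs that exports an add function):
// import() loads the module asynchronously and resolves with its exports
import('./math.mjs')
  .then((math) => {
    console.log(math.add(2, 3)); // 5
  });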
Why synchronous loading? It makes code easier to understand. You know exactly when modules are loaded. No surprises.
CommonJS also caches modules. If you require the same file twice, you get the same object. This is efficient and prevents duplicate work.
// Both get the same object
const mod1 = require('./my-module');
const mod2 = require('./my-module');
console.log(mod1 === mod2); // true
CommonJS made Node.js possible. Without it, building large applications would be much harder. You'd have to manage all your dependencies manually.
It's not perfect. The synchronous loading can be slow for large applications. But for most use cases, it works great.
__dirname gives you the folder path where your current file lives. It's always available, no imports needed.
Think of it as your file's home address. No matter where you run your program from, __dirname always points to the same place.
This is super useful for reading files or including other files. You always know where to look.
// File: /users/john/projects/app.js
console.log(__dirname);
// Output: /users/john/projects
// Read a file in the same folder
const fs = require('fs');
const data = fs.readFileSync(__dirname + '/data.txt');
Without __dirname, you'd have to guess paths. Or use complex relative paths. __dirname makes it simple.
It's different from process.cwd(). process.cwd() shows where you ran the command. __dirname shows where the file is located.
// If you run: cd /tmp && node /projects/app.js
console.log(__dirname);     // /projects
console.log(process.cwd()); // /tmp
You'll use __dirname all the time. Any time you need to access files relative to your code, use __dirname.
Combine it with path.join for clean code:
const path = require('path');
const filePath = path.join(__dirname, 'data', 'users.json');
This works on Windows, Mac, and Linux. path.join handles the slashes correctly for each system.
__filename shows the complete path to your current file. It's like __dirname but includes the actual filename too.
Think of __dirname as your home address and __filename as your exact room number. One gets you to the building, the other to your specific apartment.
// If your file is at /projects/app.js
console.log(__filename); // Output: /projects/app.js
console.log(__dirname);  // Output: /projects
You might not use __filename as much as __dirname. Most times you just need the folder, not the specific file.
But it's handy for logging or debugging. When something goes wrong, you can see exactly which file had the issue.
// Good for error messages
console.log(`Error in file: ${__filename}`);
// Shows: Error in file: /projects/app.js
Both __dirname and __filename are available in every file. No need to require anything. They're like built-in helpers that always know where they are.
Just remember the difference: __dirname gives you the folder, __filename gives you the folder plus file name.
The event loop is what makes Node.js able to do many things at once. It's like a restaurant manager who handles multiple orders without getting stuck on one.
Without the event loop, Node.js would be like a cashier who helps one customer completely before moving to the next. The line would get really long.
Here's how it works in simple terms:
console.log('Start cooking'); // Runs immediately
setTimeout(() => {
console.log('Food is ready'); // Runs later
}, 1000);
console.log('Set the table'); // Runs immediately
Output would be: Start cooking, Set the table, Food is ready. The event loop handled the timeout while other code kept running.
The event loop has different phases. Like different stations in a kitchen: grill, fryer, salad station. Each phase handles different types of tasks.
It's not really complicated once you get the pattern. Code that can run now runs now. Code that needs to wait gets scheduled. The event loop manages the waiting.
This is why Node.js is great for I/O operations. Reading files, network requests, database calls - these all take time. The event loop handles the waiting so your code isn't stuck.
Both setImmediate and setTimeout run code later. But they run at different times in the event loop.
setImmediate runs right after the current operation finishes. setTimeout runs after a minimum time delay.
// See the difference
setImmediate(() => {
console.log('setImmediate runs');
});
setTimeout(() => {
console.log('setTimeout runs');
}, 0);
// Output might be:
// setImmediate runs
// setTimeout runs
// Or sometimes the reverse - it depends
Think of setImmediate as "do this as soon as you're done with current work". Think of setTimeout as "do this after waiting X milliseconds".
In practice, they're often close. But setImmediate has a slight timing advantage in some cases.
Here's when to use each:
Most people use setTimeout more often. It's more predictable because you specify the exact delay.
setImmediate is useful in specific cases. Like when you're doing I/O operations and want to continue right after they finish.
const fs = require('fs');
fs.readFile('file.txt', () => {
setImmediate(() => {
console.log('This runs after file reading');
});
});
The key thing to remember: both let you schedule work. But they schedule it in different parts of the event loop.
process.nextTick() is like cutting in line. It makes your code run before anything else that's waiting.
When you call process.nextTick(), your callback jumps to the front of the queue. It runs after the current code finishes but before the event loop continues.
console.log('Start');
process.nextTick(() => {
console.log('Next tick message');
});
console.log('End');
// Output:
// Start
// End
// Next tick message
See how "Next tick message" appears after "End"? That's because nextTick runs after the current operation but before the event loop moves on.
It's different from setImmediate. nextTick runs sooner. Much sooner.
process.nextTick(() => {
console.log('This runs first');
});
setImmediate(() => {
console.log('This runs later');
});
// Output:
// This runs first
// This runs later
Why would you use this? Sometimes you need to complete work before the event loop continues. Like when you're setting up event listeners or doing cleanup.
But be careful. Using too many nextTick calls can starve other operations. It's like letting one person cut in line too many times.
Good uses of nextTick: giving callers a chance to attach event listeners before you emit the first event, making callback APIs consistently asynchronous, and finishing setup or cleanup before the event loop moves on.
Most of the time, you won't need nextTick. But when you do need it, it's very useful.
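One classic use is keeping a callback API consistently asynchronous. A sketch (config.json and the caching behavior are just for illustration):
const fs = require('fs');
let cached = null;
function readConfig(callback) {
  if (cached) {
    // The data is already in memory, but defer the callback so the
    // function is always async - never "sometimes sync, sometimes not"
    process.nextTick(() => callback(null, cached));
    return;
  }
  fs.readFile('config.json', 'utf8', (err, data) => {
    if (err) return callback(err);
    cached = data;
    callback(null, data);
  });
}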
This is a common interview question. Both schedule work for later, but at very different times.
process.nextTick runs immediately after the current operation. setImmediate runs on the next iteration of the event loop.
Think of it like this: nextTick is "do this right now, as soon as you finish what you're doing". setImmediate is "do this on your next break".
console.log('Start');
process.nextTick(() => {
console.log('Next tick - runs first');
});
setImmediate(() => {
console.log('Set immediate - runs later');
});
console.log('End');
// Output:
// Start
// End
// Next tick - runs first
// Set immediate - runs later
nextTick has higher priority. It runs before I/O operations, before timers, before everything else that's waiting.
setImmediate runs in the check phase of the event loop. That's after I/O operations complete.
Here's a practical example showing the timing:
const fs = require('fs');
fs.readFile('file.txt', () => {
process.nextTick(() => {
console.log('Next tick inside I/O');
});
setImmediate(() => {
console.log('Set immediate inside I/O');
});
});
// Output order:
// Next tick inside I/O
// Set immediate inside I/O
Even inside I/O callbacks, nextTick runs first. It always jumps to the front of the line.
When to use which: use process.nextTick when something must happen before the event loop continues (finishing setup, propagating errors); use setImmediate when you just want to run code after pending I/O callbacks are handled.
Most code uses setImmediate or setTimeout. nextTick is for special cases where timing really matters.
Callbacks are functions you pass to other functions to be called later. They're like giving someone your phone number and saying "call me when you're done".
In Node.js, callbacks are everywhere. Any time you have to wait for something - reading files, network requests, database queries - you use callbacks.
// Simple callback example
function doWork(callback) {
console.log('Working...');
// Simulate work taking time
setTimeout(() => {
callback('Work complete');
}, 1000);
}
// Using the callback
doWork((message) => {
console.log(message); // "Work complete" after 1 second
});
Node.js uses an error-first callback pattern. This means the first argument to any callback is always an error object.
const fs = require('fs');
fs.readFile('file.txt', (error, data) => {
if (error) {
console.log('Error reading file:', error);
return;
}
console.log('File content:', data);
});
This pattern is consistent across Node.js. You always check for errors first. If no error occurred, the error argument will be null or undefined.
Callbacks can be nested. This is called "callback hell" when it gets too deep.
// Nested callbacks - gets messy
fs.readFile('file1.txt', (err, data1) => {
if (err) return console.log(err);
fs.readFile('file2.txt', (err, data2) => {
if (err) return console.log(err);
fs.readFile('file3.txt', (err, data3) => {
if (err) return console.log(err);
console.log(data1, data2, data3);
});
});
});
That's why promises and async/await were invented. They make asynchronous code easier to read.
But callbacks are still important to understand. Much of Node.js is built on them. Many libraries still use callbacks.
Callbacks make Node.js non-blocking. Instead of waiting for slow operations, you provide a callback and keep doing other work.
Callback hell is when you have too many nested callbacks. The code becomes a pyramid shape that's hard to read and maintain.
It looks like this:
getData((err, data) => {
if (err) return handleError(err);
processData(data, (err, result) => {
if (err) return handleError(err);
saveData(result, (err, saved) => {
if (err) return handleError(err);
sendEmail(saved, (err, response) => {
if (err) return handleError(err);
console.log('All done!');
});
});
});
});
See the pyramid shape? Each callback is indented further right. This is callback hell.
It's hard to read. Hard to debug. Easy to make mistakes.
Here are ways to avoid callback hell:
1. Use named functions
function handleData(err, data) {
if (err) return handleError(err);
processData(data, handleProcessed);
}
function handleProcessed(err, result) {
if (err) return handleError(err);
saveData(result, handleSaved);
}
function handleSaved(err, saved) {
if (err) return handleError(err);
sendEmail(saved, handleEmailSent);
}
function handleEmailSent(err, response) {
if (err) return handleError(err);
console.log('All done!');
}
getData(handleData);
2. Use promises
getData()
.then(processData)
.then(saveData)
.then(sendEmail)
.then(() => console.log('All done!'))
.catch(handleError);
3. Use async/await
async function doWork() {
try {
const data = await getData();
const result = await processData(data);
const saved = await saveData(result);
await sendEmail(saved);
console.log('All done!');
} catch (error) {
handleError(error);
}
}
Promises and async/await are the modern solutions. They make asynchronous code look like regular synchronous code.
But you'll still see callbacks in older code. It's good to know how to handle them.
The key is to keep your code flat. Avoid deep nesting. Use the tools Node.js provides to manage asynchronous operations cleanly.
Promises are a cleaner way to handle asynchronous operations. They represent a value that might be available now, or in the future, or never.
Think of a promise like ordering food at a restaurant. They give you a receipt (the promise) that eventually becomes your food (the result).
A promise can be in one of three states: pending (still waiting), fulfilled (completed with a value), or rejected (failed with an error).
Here's how to create a promise:
const fs = require('fs');
function readFilePromise(filename) {
return new Promise((resolve, reject) => {
fs.readFile(filename, (err, data) => {
if (err) {
reject(err); // Promise rejected
} else {
resolve(data); // Promise fulfilled
}
});
});
}
And here's how to use a promise:
readFilePromise('file.txt')
.then(data => {
console.log('File content:', data);
})
.catch(error => {
console.log('Error:', error);
});
You can chain promises together. This is much cleaner than nested callbacks.
getUser(userId)
.then(user => getOrders(user.id))
.then(orders => processOrders(orders))
.then(result => console.log('Done:', result))
.catch(error => console.log('Error:', error));
Promises have some useful methods:
// Wait for all promises to complete
Promise.all([promise1, promise2, promise3])
.then(results => {
console.log('All done:', results);
});
// Wait for first promise to complete
Promise.race([promise1, promise2])
.then(firstResult => {
console.log('First one done:', firstResult);
});
Most modern Node.js APIs return promises. But you can convert callback-based functions to promises using util.promisify.
const util = require('util');
const fs = require('fs');
const readFile = util.promisify(fs.readFile);
// Now readFile returns a promise
readFile('file.txt')
.then(data => console.log(data));
Promises make asynchronous code easier to read and write. They're a big improvement over callbacks.
Async/await is syntactic sugar for promises. It makes asynchronous code look and behave like synchronous code.
The async keyword goes before a function. It makes the function return a promise. The await keyword can only be used inside async functions.
// Regular function
function normalFunction() {
return 'hello';
}
// Async function
async function asyncFunction() {
return 'hello';
}
// Both can be called the same way
console.log(normalFunction()); // 'hello'
console.log(asyncFunction()); // Promise { 'hello' }
Await pauses the execution of an async function until a promise settles. It looks like waiting, but it's not blocking.
async function getData() {
try {
const user = await getUser(123);
const orders = await getOrders(user.id);
const processed = await processOrders(orders);
console.log('Result:', processed);
} catch (error) {
console.log('Error:', error);
}
}
This code looks synchronous. But it's actually asynchronous. While waiting for getUser, other code can run.
Compare with promises:
// With promises
getUser(123)
.then(user => getOrders(user.id))
.then(orders => processOrders(orders))
.then(result => console.log('Result:', result))
.catch(error => console.log('Error:', error));
// With async/await
async function getData() {
try {
const user = await getUser(123);
const orders = await getOrders(user.id);
const result = await processOrders(orders);
console.log('Result:', result);
} catch (error) {
console.log('Error:', error);
}
}
Async/await is easier to read. The code flows top to bottom. Error handling uses regular try/catch.
You can use await with any function that returns a promise. Most modern Node.js APIs return promises.
For callback-based functions, you can promisify them first:
const util = require('util');
const fs = require('fs');
const readFile = util.promisify(fs.readFile);
async function readConfig() {
const data = await readFile('config.json');
return JSON.parse(data);
}
Async/await doesn't replace promises. It works with them. Under the hood, async functions are still using promises.
This is the modern way to write asynchronous code in Node.js. It's clean, readable, and less error-prone.
Buffer is a global class for working with binary data. It's like a container for raw memory outside the JavaScript heap.
Before Buffer, JavaScript couldn't handle binary data well. Buffer lets you work with files, network packets, and other raw data.
You create buffers in different ways:
// Create from string
const buf1 = Buffer.from('hello');
// Create empty buffer of specific size
const buf2 = Buffer.alloc(10);
// Create buffer from array
const buf3 = Buffer.from([1, 2, 3, 4]);
Buffers have a length and can be accessed like arrays:
const buf = Buffer.from('hello');
console.log(buf.length); // 5
console.log(buf[0]); // 104 (ASCII for 'h')
console.log(buf.toString()); // 'hello'
You use buffers when working with files, networks, or any binary data:
const fs = require('fs');
// Read file as buffer
const buffer = fs.readFileSync('image.jpg');
console.log('File size:', buffer.length);
// Modify buffer
buffer[0] = 255; // Change first byte
// Write buffer back to file
fs.writeFileSync('modified.jpg', buffer);
Buffers are especially important for network operations. When you receive data over the network, it often comes as a buffer.
const net = require('net');
const server = net.createServer(socket => {
socket.on('data', buffer => {
console.log('Received', buffer.length, 'bytes');
console.log('Data:', buffer.toString());
});
});
In modern Node.js, you often work with strings instead of buffers. But when performance matters, or when dealing with binary data, buffers are essential.
Buffers are fixed-size. Once created, you can't change their size. But you can create new buffers from parts of existing ones.
const buf = Buffer.from('hello world');
const slice = buf.slice(0, 5); // 'hello'
const copy = Buffer.from(buf); // Copy entire buffer
Understanding buffers is important for advanced Node.js work. They're the foundation for streams and many performance optimizations.
Streams are a way to handle reading/writing data continuously. Instead of loading everything into memory at once, you process data in chunks.
Think of streams like a water hose. You don't need to store all the water in a bucket first. You can use the water as it flows through the hose.
There are four types of streams: Readable (you read data from them), Writable (you write data to them), Duplex (both readable and writable), and Transform (modify data as it passes through).
Here's a simple file copy using streams:
const fs = require('fs');
const readable = fs.createReadStream('large-file.txt');
const writable = fs.createWriteStream('copy.txt');
readable.pipe(writable);
The pipe method connects streams. Data flows from readable to writable automatically.
You can also handle stream events manually:
const fs = require('fs');
const readable = fs.createReadStream('file.txt');
readable.on('data', chunk => {
console.log('Received chunk:', chunk.length, 'bytes');
});
readable.on('end', () => {
console.log('No more data');
});
readable.on('error', err => {
console.log('Error:', err);
});
Streams are memory efficient. You can process huge files without running out of memory.
// This would fail with large files
const data = fs.readFileSync('huge-file.txt'); // All in memory
// This works with files of any size
const stream = fs.createReadStream('huge-file.txt'); // Chunks in memory
You can transform data as it flows through streams:
const fs = require('fs');
const zlib = require('zlib');
// Compress file while copying
fs.createReadStream('file.txt')
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream('file.txt.gz'));
Streams are everywhere in Node.js. HTTP requests and responses are streams. File operations use streams. Even the console uses streams.
Understanding streams is key to building efficient Node.js applications. They let you handle large amounts of data without overwhelming your server.
Both read files, but they work very differently. readFile loads the entire file into memory. createReadStream reads the file in chunks.
readFile is like pouring a bucket of water into a container. You get all the water at once. createReadStream is like drinking from a water hose. You get the water continuously.
Here's readFile:
const fs = require('fs');
// Loads entire file into memory
fs.readFile('large-file.txt', (err, data) => {
if (err) throw err;
console.log('File size:', data.length);
// data contains the entire file
});
And here's createReadStream:
const fs = require('fs');
// Reads file in chunks
const stream = fs.createReadStream('large-file.txt');
let totalSize = 0;
stream.on('data', chunk => {
totalSize += chunk.length;
console.log('Received chunk:', chunk.length);
});
stream.on('end', () => {
console.log('Total size:', totalSize);
});
The key differences: readFile loads the whole file into memory and hands it to you in one callback; createReadStream reads it in chunks and emits many 'data' events, so it uses far less memory for large files.
For a 1GB file, readFile would try to load all 1GB into memory. This could crash your program. createReadStream would read it in small chunks, using very little memory.
readFile is all-or-nothing (even though the API is asynchronous). It buffers the entire file and only calls you back once everything has been read.
createReadStream is truly asynchronous. You start getting data immediately, while the file is still being read.
Use readFile when: the file is small and you need the whole content at once (for example, a small JSON config).
Use createReadStream when: the file is large, you want to start processing before the whole file is read, or memory usage matters.
Most real-world applications use streams for file operations. It's more scalable and reliable.
The cluster module lets you create multiple Node.js processes that share the same server port. It's like having multiple workers instead of just one.
Since Node.js is single-threaded, it can only use one CPU core. The cluster module lets you use all available CPU cores.
Here's a basic cluster setup:
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
console.log(`Master ${process.pid} is running`);
// Fork workers
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('exit', (worker) => {
console.log(`Worker ${worker.process.pid} died`);
});
} else {
// Workers can share any TCP connection
http.createServer((req, res) => {
res.writeHead(200);
res.end('Hello from worker ' + process.pid);
}).listen(8000);
console.log(`Worker ${process.pid} started`);
}
This code starts multiple workers. Each worker handles requests independently. If one worker crashes, others keep running.
The master process doesn't handle requests. It just manages the workers. The workers do the actual work.
Load balancing happens automatically. By default the master hands incoming connections to the workers in round-robin fashion (on Windows it leaves the distribution to the operating system).
Workers don't share memory. Each worker has its own memory space. If you need to share data between workers, you have to use inter-process communication.
// In master
cluster.on('message', (worker, message) => {
console.log(`Message from worker ${worker.id}:`, message);
});
// In worker
process.send({ hello: 'from worker' });
The cluster module is built into Node.js. You don't need to install anything extra.
In production, you might use tools like PM2 instead of writing cluster code yourself. PM2 handles process management, restarting crashed workers, and zero-downtime reloads.
But understanding the cluster module is important. It shows how Node.js can scale beyond single-core limitations.
Not every application needs clustering. For I/O-heavy applications, a single Node.js process can handle thousands of concurrent connections. But for CPU-heavy applications, clustering can help.
The util module contains utility functions that are helpful but not big enough to deserve their own module. It's like a toolbox with various handy tools.
Some of the most useful functions in util:
util.promisify - converts callback-based functions to promise-based
const util = require('util');
const fs = require('fs');
const readFile = util.promisify(fs.readFile);
// Now you can use promises
readFile('file.txt')
.then(data => console.log(data));
util.inherits - implements inheritance between classes (older style)
const util = require('util');
function Animal(name) {
this.name = name;
}
Animal.prototype.speak = function() {
console.log(this.name + ' makes a noise');
};
function Dog(name) {
Animal.call(this, name);
}
util.inherits(Dog, Animal);
Dog.prototype.speak = function() {
console.log(this.name + ' barks');
};
const dog = new Dog('Rex');
dog.speak(); // Rex barks
util.format - formatted string output, like printf
const util = require('util');
const message = util.format('Hello %s, you have %d messages', 'John', 5);
console.log(message); // Hello John, you have 5 messages
util.types - type checking for various values
const util = require('util');
console.log(util.types.isDate(new Date())); // true
console.log(util.types.isRegExp(/abc/)); // true
util.debuglog - conditional debug logging
const util = require('util');
const debug = util.debuglog('app');
debug('This only appears if NODE_DEBUG=app');
The util module keeps growing. New versions of Node.js add more utility functions.
Some util features become so popular they eventually graduate to the global scope - TextEncoder and TextDecoder started out in util and are now available globally - and util.promisify is used everywhere.
You don't need to memorise all util functions. Just know they exist. When you need a utility function, check the util module first.
The Node.js documentation has the complete list of util functions. It's worth browsing through them to see what's available.
Error-first callbacks are a convention in Node.js where the first parameter of a callback is always reserved for an error object.
This pattern is used throughout Node.js. It makes error handling consistent and predictable.
The pattern looks like this:
function asyncOperation(arg, callback) {
// Some async work...
if (errorHappened) {
callback(new Error('Something went wrong'));
} else {
callback(null, result);
}
}
And you use it like this:
asyncOperation('input', (err, result) => {
if (err) {
// Handle error
console.error('Error:', err.message);
return;
}
// Use result
console.log('Success:', result);
});
The rules are simple: the first callback argument is always the error (null if nothing went wrong), any results come after it, and you always check the error before using the result.
This pattern appears everywhere in Node.js core modules:
const fs = require('fs');
fs.readFile('file.txt', (err, data) => {
if (err) {
console.error('File error:', err);
return;
}
console.log('File content:', data);
});
const dns = require('dns');
dns.lookup('google.com', (err, address) => {
if (err) {
console.error('DNS error:', err);
return;
}
console.log('Address:', address);
});
Why this pattern? It forces you to handle errors. You can't accidentally ignore them because they're right there as the first parameter.
Before error-first callbacks, different libraries used different error handling patterns. Some put errors last. Some used separate error callbacks. It was messy.
Node.js standardized on error-first callbacks. This made all asynchronous APIs consistent.
Even with promises and async/await becoming popular, error-first callbacks are still important. Much existing Node.js code uses them. Many libraries still use them.
When you write your own asynchronous functions that use callbacks, follow this pattern. It's what Node.js developers expect.
The path module helps you work with file and directory paths. It handles the differences between operating systems so you don't have to.
Windows uses backslashes: C:\users\john\file.txt
Linux and Mac use forward slashes: /home/john/file.txt
The path module lets you write code that works everywhere.
Here are the most useful path functions:
path.join - joins path segments together
const path = require('path');
const fullPath = path.join(__dirname, 'files', 'data.txt');
// On Unix: /projects/files/data.txt
// On Windows: C:\projects\files\data.txt
path.resolve - resolves a sequence of paths into an absolute path
const path = require('path');
const absolute = path.resolve('files', 'data.txt');
// Resolves to current working directory + files/data.txt
path.basename - gets the filename from a path
const path = require('path');
const filename = path.basename('/users/john/data.txt');
console.log(filename); // data.txt
path.dirname - gets the directory name from a path
const path = require('path');
const dir = path.dirname('/users/john/data.txt');
console.log(dir); // /users/john
path.extname - gets the file extension
const path = require('path');
const ext = path.extname('/users/john/data.txt');
console.log(ext); // .txt
path.parse - breaks a path into its components
const path = require('path');
const parsed = path.parse('/users/john/data.txt');
console.log(parsed);
// {
// root: '/',
// dir: '/users/john',
// base: 'data.txt',
// ext: '.txt',
// name: 'data'
// }
Always use path.join instead of string concatenation. path.join handles the slashes correctly for your operating system.
// Don't do this
const badPath = __dirname + '/files/data.txt';
// Do this instead
const goodPath = path.join(__dirname, 'files', 'data.txt');
The path module is small but essential. You'll use it in almost every Node.js application that works with files.
It's one of those modules you import automatically when starting a new project. Like fs and http, it's part of the Node.js core toolkit.
The os module provides information about the operating system. It lets your Node.js program understand and adapt to its environment.
Here are some useful os functions:
os.platform() - tells you which operating system you're running on
const os = require('os');
console.log(os.platform());
// 'darwin' on Mac, 'win32' on Windows, 'linux' on Linux
os.arch() - tells you the processor architecture
const os = require('os');
console.log(os.arch());
// 'x64' for 64-bit, 'arm' for ARM processors
os.cpus() - information about CPU cores
const os = require('os');
const cpus = os.cpus();
console.log('Number of CPUs:', cpus.length);
console.log('First CPU:', cpus[0].model);
os.totalmem() - total system memory
const os = require('os');
const totalMemory = os.totalmem();
console.log('Total memory:', totalMemory / 1024 / 1024 / 1024, 'GB');
os.freemem() - free system memory
const os = require('os');
const freeMemory = os.freemem();
console.log('Free memory:', freeMemory / 1024 / 1024, 'MB');
os.hostname() - the computer's hostname
const os = require('os');
console.log('Hostname:', os.hostname());
os.networkInterfaces() - network interface information
const os = require('os');
const interfaces = os.networkInterfaces();
console.log('Network interfaces:', interfaces);
You might use the os module for: deciding how many cluster workers to start (one per CPU), monitoring and logging system health, or adapting behavior to the platform you're running on.
Here's a practical example - a simple system monitor:
const os = require('os');
function systemInfo() {
return {
platform: os.platform(),
arch: os.arch(),
cpus: os.cpus().length,
totalMemory: Math.round(os.totalmem() / 1024 / 1024 / 1024) + ' GB',
freeMemory: Math.round(os.freemem() / 1024 / 1024) + ' MB',
uptime: Math.round(os.uptime() / 60 / 60) + ' hours'
};
}
console.log(systemInfo());
The os module is read-only. It only provides information. You can't change system settings with it.
It's not something you use in every application. But when you need system information, it's very handy.
The child_process module lets you run other programs from your Node.js application. It's like having your program give orders to other programs.
You can run system commands, other scripts, or any executable file.
There are four ways to create child processes: exec, execFile, spawn, and fork.
exec - runs a command and buffers the output
const { exec } = require('child_process');
exec('ls -la', (error, stdout, stderr) => {
if (error) {
console.error('Error:', error);
return;
}
console.log('Output:', stdout);
});
execFile - runs an executable file directly
const { execFile } = require('child_process');
execFile('node', ['--version'], (error, stdout) => {
if (error) throw error;
console.log('Node version:', stdout);
});
spawn - runs a command with streaming output
const { spawn } = require('child_process');
const child = spawn('ls', ['-la']);
child.stdout.on('data', data => {
console.log('Output:', data.toString());
});
child.stderr.on('data', data => {
console.error('Error:', data.toString());
});
child.on('close', code => {
console.log('Process exited with code:', code);
});
fork - special case of spawn for Node.js scripts
const { fork } = require('child_process');
const child = fork('child-script.js');
child.on('message', message => {
console.log('Message from child:', message);
});
child.send({ hello: 'from parent' });
When to use each: exec for short shell commands with small output, execFile to run a specific executable without a shell, spawn when you need to stream large amounts of output, and fork for other Node.js scripts you want to message back and forth.
Child processes are useful for: running system commands and command-line tools, offloading CPU-heavy work, and using programs written in other languages.
Here's a real example - converting an image using ImageMagick:
const { exec } = require('child_process');
exec('convert input.jpg -resize 50% output.jpg', (error) => {
if (error) {
console.error('Conversion failed:', error);
} else {
console.log('Image converted successfully');
}
});
Child processes run independently. If your main Node.js process crashes, child processes keep running. You need to manage them carefully.
Security warning: be careful with user input in child processes. Malicious users could inject commands. Always validate and sanitize input.
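One way to reduce that risk is to avoid the shell entirely and pass arguments as an array. A sketch (assumes ImageMagick's convert is installed; the filename stands in for user input):
const { execFile } = require('child_process');
const userFile = 'input.jpg'; // imagine this came from a user
// execFile passes arguments directly - no shell, so nothing to inject into
execFile('convert', [userFile, '-resize', '50%', 'output.jpg'], (error) => {
  if (error) {
    console.error('Conversion failed:', error);
  } else {
    console.log('Image converted successfully');
  }
});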
Node.js struggles with CPU-intensive tasks because it's single-threaded. While one task is using the CPU, everything else waits.
Think of Node.js as a single cashier in a store. If one customer has a complicated order, everyone else waits. CPU-intensive tasks are those complicated orders.
Here are ways to handle CPU-intensive tasks in Node.js:
1. Use child processes
const { fork } = require('child_process');
// Offload heavy computation to child process
const child = fork('heavy-computation.js');
child.on('message', result => {
console.log('Computation result:', result);
});
child.send({ numbers: [1, 2, 3, 4, 5] });
2. Use worker threads (Node.js 10.5+)
const { Worker } = require('worker_threads');
const worker = new Worker(`
const { parentPort } = require('worker_threads');
parentPort.on('message', numbers => {
const result = numbers.map(n => n * n);
parentPort.postMessage(result);
});
`, { eval: true });
worker.on('message', result => {
console.log('Result:', result);
});
worker.postMessage([1, 2, 3, 4, 5]);
3. Use the cluster module
const cluster = require('cluster');
const numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
} else {
// Each worker handles requests
require('./server.js');
}
4. Split work into smaller chunks
function processChunk(chunk) {
// Process a small piece
return chunk.map(x => x * x);
}
function heavyWork(data, done, chunkSize = 1000) {
const results = [];
function processNextChunk() {
if (data.length === 0) {
// All done - hand the results back through the callback
return done(results);
}
const chunk = data.splice(0, chunkSize);
results.push(...processChunk(chunk));
// Allow other operations to run before the next chunk
setImmediate(processNextChunk);
}
processNextChunk();
}
5. Use native modules
Write performance-critical parts in C++ and call them from Node.js. This is advanced but very effective.
What counts as CPU-intensive? Things like image and video processing, encryption and hashing, file compression, parsing huge JSON or XML documents, and heavy mathematical loops.
For I/O-intensive tasks, Node.js excels. For CPU-intensive tasks, you need to plan carefully.
The best approach depends on your specific use case. Sometimes offloading to a child process is simplest. Sometimes worker threads are better. Sometimes you need a different technology entirely.
Knowing these options helps you build applications that handle all types of workloads effectively.
Join our Node.js Certification Course at leadwithskills.com to build real-world applications and become job-ready in 12 weeks!
Modules are like separate files that contain related code. Each module is its own little world with its own variables and functions.
Think of modules like different rooms in a house. Each room has its own furniture and purpose. The kitchen has cooking tools, the bedroom has sleeping stuff. They're separate but part of the same house.
In Node.js, every JavaScript file is a module. When you create a file called `math.js`, that's a module. When you create `app.js`, that's another module.
Modules help you: organize related code together, reuse code across files and projects, keep variables private, and test pieces independently.
Here's a simple example. You have a math module:
// math.js
function add(a, b) {
return a + b;
}
function multiply(a, b) {
return a * b;
}
module.exports = { add, multiply };
And you use it in another file:
// app.js
const math = require('./math.js');
console.log(math.add(2, 3)); // 5
console.log(math.multiply(2, 3)); // 6
Without modules, you'd have to put all your code in one giant file. It would be messy and hard to maintain.
Modules make your code modular (hence the name). You can work on different parts separately. You can test them independently. You can reuse them in different projects.
Node.js has two module systems: CommonJS (the older one with require/module.exports) and ES modules (the newer one with import/export). Most Node.js code still uses CommonJS.
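For comparison, here is a minimal sketch of the same math module written with ES module syntax (this assumes .mjs files or "type": "module" in package.json):
// math.mjs - ES module syntax
export function add(a, b) {
  return a + b;
}
// app.mjs - ES module syntax
import { add } from './math.mjs';
console.log(add(2, 3)); // 5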
module.exports is how you share stuff from your module with other files. It's like packing a box with things you want to give to someone.
When another file requires your module, they get whatever you put in module.exports. If you don't put anything there, they get an empty object.
Here are different ways to use module.exports:
Export a single function:
// logger.js
module.exports = function(message) {
console.log(`LOG: ${message}`);
};
// app.js
const log = require('./logger');
log('Hello world'); // LOG: Hello world
Export multiple things as an object:
// math.js
module.exports = {
add: (a, b) => a + b,
subtract: (a, b) => a - b,
PI: 3.14159
};
// app.js
const math = require('./math');
console.log(math.add(5, 3)); // 8
console.log(math.PI); // 3.14159
Export things one by one:
// utils.js
module.exports.sayHello = function() {
return 'Hello!';
};
module.exports.sayGoodbye = function() {
return 'Goodbye!';
};
// app.js
const utils = require('./utils');
console.log(utils.sayHello()); // Hello!
You can export any type of value: functions, objects, numbers, strings, even classes.
What happens if you forget module.exports? The requiring file gets an empty object. Nothing breaks, but they can't use your code.
You can only have one module.exports per file. If you assign to it multiple times, only the last assignment counts.
Here's a class example:
// User.js
class User {
constructor(name) {
this.name = name;
}
greet() {
return `Hello, ${this.name}!`;
}
}
module.exports = User;
// app.js
const User = require('./User');
const john = new User('John');
console.log(john.greet()); // Hello, John!
module.exports makes your code reusable. Write once, use everywhere.
This confuses many people. module.exports is the real deal. exports is just a helper that points to module.exports.
Think of it like this: module.exports is the actual house. exports is the front door. Most of the time, they lead to the same place.
Here's the technical truth: exports is a variable that starts off pointing to module.exports. But if you reassign exports, they become different things.
// This works - exports points to module.exports
exports.add = function(a, b) {
return a + b;
};
// This breaks the connection
exports = function() {
return "I'm broken now";
};
Let me show you what happens:
// File: math.js
exports.add = (a, b) => a + b;
exports = { multiply: (a, b) => a * b };
// File: app.js
const math = require('./math');
console.log(math.add(2, 3)); // Error! add doesn't exist
Why does this happen? Because when you reassign exports, it no longer points to module.exports. The require function only looks at module.exports.
Here's the safe way to do it:
// Always use module.exports for reassigning
module.exports = {
add: (a, b) => a + b,
multiply: (a, b) => a * b
};
// Or use exports for adding properties
exports.add = (a, b) => a + b;
exports.multiply = (a, b) => a * b;
My advice: stick with module.exports. It's always clear what you're doing. No surprises.
Some people like exports because it's shorter. But module.exports is only 7 characters longer. Is that really worth the confusion?
Here's a simple rule: if you're exporting multiple things, use module.exports with an object. If you're exporting one thing, use module.exports directly.
Both work. But module.exports is safer. You'll never accidentally break your exports.
require() is how you bring code from other files into your current file. It's like inviting friends over to help you with a project.
Without require(), every file would be alone. You'd have to copy the same code everywhere. require() lets you share code between files easily.
There are three main ways to use require():
Core modules that ship with Node.js (like fs or http)
Your own files, referenced with a relative path
Packages installed from NPM, referenced by name
Examples of each:
// Core module - no path needed
const fs = require('fs');
const http = require('http');
// Your own file - use relative path
const myModule = require('./my-module');
const utils = require('../lib/utils');
// NPM package - just the name
const express = require('express');
const mongoose = require('mongoose');
When you require a file, Node.js runs that file and gives you whatever it exports. It only runs the file once, even if you require it multiple times.
Here's a complete example:
// math-utils.js - the file being required
function add(a, b) {
return a + b;
}
function multiply(a, b) {
return a * b;
}
module.exports = { add, multiply };
// app.js - the file doing the requiring
const math = require('./math-utils');
console.log(math.add(2, 3)); // 5
console.log(math.multiply(2, 3)); // 6
require() is synchronous. That means it loads everything immediately. Your code waits until the required file is fully loaded before continuing.
Node.js caches required files. If you require the same file twice, you get the same object. This is efficient but can cause confusion if you're expecting fresh copies.
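A small sketch of that caching behavior (counter.js is a made-up file): the module's top-level code runs only on the first require, and require.cache lets you force a fresh copy if you really need one.
// counter.js
console.log('counter.js loaded'); // printed only once
let count = 0;
module.exports = { increment: () => ++count };
// app.js
const a = require('./counter'); // prints "counter.js loaded"
const b = require('./counter'); // cached - nothing printed
a.increment();
console.log(b.increment()); // 2 - same object, shared state
// Rarely needed: clear the cached entry to get a fresh copy next time
delete require.cache[require.resolve('./counter')];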
File paths matter. "./" means same folder. "../" means parent folder. No "./" means core module or NPM package.
require() makes Node.js modular. You can break big programs into small, manageable pieces. Each piece does one thing well.
When you call require(), Node.js goes through several steps to find and load the module. It's like a detective following clues to find the right file.
Here's what happens internally:
Step 1: Resolve the path
Node.js figures out exactly which file to load. It checks in this order:
Core modules (like fs or http)
Paths starting with ./ or ../ (your own files, trying .js, .json, and folder index files)
node_modules folders, walking up from the current directory
Step 2: Load the file
Once Node.js finds the file, it reads it and wraps the content in a function. Your code becomes:
(function(exports, require, module, __filename, __dirname) {
// Your module code goes here
});
Step 3: Execute the code
Node.js runs the wrapped function. This is where your module code actually executes.
Step 4: Return module.exports
Whatever was assigned to module.exports gets returned to the require() call.
Step 5: Cache the module
Node.js stores the module in a cache. Next time you require() the same file, it returns the cached version.
Here's a simplified version of what happens:
// Pseudocode of require() internals
function require(path) {
// 1. Resolve the full path
const filename = resolvePath(path);
// 2. Check cache
if (cache[filename]) {
return cache[filename].exports;
}
// 3. Create module object
const module = { exports: {} };
// 4. Read and wrap file content
const code = readFile(filename);
const wrappedCode = `(function(exports, require, module, __filename, __dirname) {
${code}
})`;
// 5. Execute the code
const execute = new Function('exports', 'require', 'module', '__filename', '__dirname', wrappedCode);
execute(module.exports, require, module, filename, path.dirname(filename));
// 6. Cache and return
cache[filename] = module;
return module.exports;
}
This wrapping is important. It means each module has its own scope. Variables you declare in a module don't leak into other modules.
The caching is also important for performance. Loading files from disk is slow. Caching makes require() fast after the first time.
Understanding how require() works helps you debug module loading issues and write better module code.
CommonJS is the module system that Node.js uses by default. It's the system that makes require() and module.exports work.
Before CommonJS, JavaScript had no good way to split code into files. Everything was global. CommonJS fixed this.
The main ideas of CommonJS are simple:
Every file is its own module with its own scope
module.exports decides what a module shares
require() loads other modules synchronously
Modules are cached after the first load
Here's how it works in practice:
// user-module.js
const users = [];
function addUser(name) {
users.push(name);
}
function getUsers() {
return users;
}
module.exports = { addUser, getUsers };
// app.js
const userModule = require('./user-module');
userModule.addUser('Alice');
userModule.addUser('Bob');
console.log(userModule.getUsers()); // ['Alice', 'Bob']
Each module has its own space. The "users" array in user-module.js is private. Other files can't access it directly. They can only use the exported functions.
This is called encapsulation. It's like having private rooms in a house. You control what others can see and use.
CommonJS loads modules synchronously. That means when you require() a file, your code waits until that file is fully loaded before continuing.
This is different from ES modules (the new JavaScript standard). ES modules can load asynchronously. But CommonJS is still what Node.js uses by default.
Why synchronous loading? It makes code easier to understand. You know exactly when modules are loaded. No surprises.
CommonJS also caches modules. If you require() the same file twice, you get the same object. This is efficient and prevents duplicate work.
// Both get the same object
const mod1 = require('./my-module');
const mod2 = require('./my-module');
console.log(mod1 === mod2); // true
CommonJS made Node.js possible. Without it, building large applications would be much harder. You'd have to manage all your dependencies manually.
It's not perfect. The synchronous loading can be slow for large applications. But for most use cases, it works great.
CommonJS was created for server-side JavaScript. That's why it uses synchronous loading - servers have access to the filesystem and can load files quickly.
ES modules are the new standard way to work with modules in JavaScript. They use import and export statements instead of require() and module.exports.
ES modules are part of the JavaScript language itself, while CommonJS was created specifically for Node.js.
Here's how ES modules work:
// math.js - exporting
export function add(a, b) {
return a + b;
}
export function multiply(a, b) {
return a * b;
}
export const PI = 3.14159;
// app.js - importing
import { add, multiply, PI } from './math.js';
console.log(add(2, 3)); // 5
console.log(multiply(2, 3)); // 6
console.log(PI); // 3.14159
You can also export a default value:
// logger.js
export default function(message) {
console.log(message);
}
// app.js
import log from './logger.js';
log('Hello world');
To use ES modules in Node.js, you need to either:
Add "type": "module" to your package.json, or
Use the .mjs file extension
Here's the package.json approach:
// package.json
{
"type": "module"
}
// Now you can use .js files with ES modules
import { add } from './math.js';
Key differences between ES modules and CommonJS:
Syntax: import/export instead of require()/module.exports
Loading: ES modules can load asynchronously; CommonJS loads synchronously
Imports are static and hoisted, so tools can analyze them ahead of time
ES modules are part of the JavaScript standard; CommonJS is Node.js-specific
ES modules are the future. New projects should use them. But you'll still see lots of CommonJS in existing code.
ES modules use the import and export keywords to share code between files. Here are all the different ways you can use them.
Named exports (export multiple things):
// math.js
export function add(a, b) {
return a + b;
}
export function subtract(a, b) {
return a - b;
}
export const PI = 3.14159;
// Import specific named exports
import { add, PI } from './math.js';
console.log(add(2, 3)); // 5
console.log(PI); // 3.14159
// Import all named exports as an object
import * as math from './math.js';
console.log(math.add(2, 3)); // 5
console.log(math.subtract(5, 2)); // 3
Default export (export one main thing):
// User.js
export default class User {
constructor(name) {
this.name = name;
}
}
// Import default export (no braces needed)
import User from './User.js';
const john = new User('John');
Export lists:
// utils.js
function formatDate(date) {
return date.toISOString();
}
function capitalize(str) {
return str.charAt(0).toUpperCase() + str.slice(1);
}
// Export multiple things at once
export { formatDate, capitalize };
Renaming imports and exports:
// math.js
function addNumbers(a, b) {
return a + b;
}
// Export with different name
export { addNumbers as add };
// Import with different name
import { add as sum } from './math.js';
console.log(sum(2, 3)); // 5
Combining default and named exports:
// api.js
export default class API {
constructor(url) {
this.url = url;
}
}
export function createRequest() {
return { method: 'GET' };
}
// Import both default and named
import API, { createRequest } from './api.js';
Dynamic imports (load modules on demand):
// Load module only when needed
async function loadModule() {
const module = await import('./heavy-module.js');
module.doSomething();
}
Remember: ES module imports are hoisted to the top. They always run first, before any other code in your file.
Also, import paths must be strings. You can't use variables in import statements (except with dynamic imports).
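For example, a small sketch (the './en.js' file name is hypothetical): a static import needs a string literal, while a dynamic import() can build the path at runtime.
// Static import - the path must be a string literal
import { add } from './math.js';
console.log(add(1, 2)); // 3
// Dynamic import - the path can be built from a variable
async function loadMessages(lang) {
  const messages = await import(`./${lang}.js`); // e.g. ./en.js
  return messages.default;
}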
Built-in modules are the core modules that come with Node.js. You don't need to install them - they're available as soon as you install Node.js.
Think of built-in modules like the basic tools that come with a new house. When you move in, you already have lights, plumbing, and electricity. You don't need to install them separately.
Here are the most important built-in modules:
File System (fs): Work with files and directories
const fs = require('fs');
fs.readFile('file.txt', 'utf8', (err, data) => {
console.log(data);
});
HTTP (http): Create web servers and make HTTP requests
const http = require('http');
const server = http.createServer((req, res) => {
res.end('Hello World');
});
server.listen(3000);
Path (path): Work with file and directory paths
const path = require('path');
const fullPath = path.join(__dirname, 'files', 'data.txt');
OS (os): Get information about the operating system
const os = require('os');
console.log(os.platform()); // linux, win32, darwin
console.log(os.cpus().length); // number of CPU cores
Events (events): Create and handle custom events
const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
console.log('Event occurred!');
});
Other important built-in modules:
crypto - hashing, encryption, and random values
url and querystring - parse and build URLs
zlib - compression (gzip/deflate)
stream - work with streaming data
child_process - run other programs
You can see all built-in modules in the Node.js documentation. There are about 40 of them, each serving a specific purpose.
Built-in modules are optimized and well-tested. They're part of Node.js itself, so they're very stable and reliable.
When you need functionality, always check if there's a built-in module first before looking for external packages.
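As one more quick taste, here's a small sketch using the built-in crypto module - no installation needed:
const crypto = require('crypto');
// SHA-256 hash of a string
const hash = crypto.createHash('sha256').update('hello world').digest('hex');
console.log(hash);
// Random token, e.g. for a session ID
const token = crypto.randomBytes(16).toString('hex');
console.log(token);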
The fs module (File System module) lets you work with files and directories. It's one of the most commonly used built-in modules.
With the fs module, you can:
Read and write files
Append to and delete files
Create, read, and remove directories
Check file information and permissions
Reading files:
const fs = require('fs');
// Asynchronous read
fs.readFile('file.txt', 'utf8', (err, data) => {
if (err) throw err;
console.log(data);
});
// Synchronous read
const data = fs.readFileSync('file.txt', 'utf8');
console.log(data);
Writing files:
const fs = require('fs');
// Asynchronous write
fs.writeFile('output.txt', 'Hello World', 'utf8', (err) => {
if (err) throw err;
console.log('File written');
});
// Append to file
fs.appendFile('log.txt', 'New entry\n', 'utf8', (err) => {
if (err) throw err;
});
Working with directories:
const fs = require('fs');
// Create directory
fs.mkdir('new-folder', (err) => {
if (err) throw err;
});
// Read directory contents
fs.readdir('./documents', (err, files) => {
if (err) throw err;
console.log('Files:', files);
});
File information:
const fs = require('fs');
fs.stat('file.txt', (err, stats) => {
if (err) throw err;
console.log('Is file?', stats.isFile());
console.log('Size:', stats.size);
console.log('Created:', stats.birthtime);
});
Promise-based fs (modern approach):
const fs = require('fs').promises;
async function readFile() {
try {
const data = await fs.readFile('file.txt', 'utf8');
console.log(data);
} catch (err) {
console.error(err);
}
}
The fs module has both synchronous and asynchronous methods. Asynchronous methods are preferred because they don't block your program.
Most fs methods follow the error-first callback pattern. The first argument is always an error object (or null if no error).
For new code, use the promise-based version (fs.promises) with async/await. It's cleaner and easier to read.
The fs module is essential for any application that needs to work with files: web servers, command-line tools, data processing applications, etc.
The http module lets you create web servers and make HTTP requests. It's the foundation for building web applications in Node.js.
Creating a web server:
const http = require('http');
const server = http.createServer((request, response) => {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Hello World!\n');
});
server.listen(3000, () => {
console.log('Server running at http://localhost:3000/');
});
Making HTTP requests:
const http = require('http');
const options = {
hostname: 'api.example.com',
port: 80,
path: '/users',
method: 'GET'
};
const req = http.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
console.log('Response:', JSON.parse(data));
});
});
req.end();
The http module is low-level but powerful. Most people use frameworks like Express that build on top of it.
The path module helps you work with file and directory paths in a way that works on any operating system.
const path = require('path');
// Join path segments
const fullPath = path.join('users', 'john', 'documents', 'file.txt');
// Get filename
const filename = path.basename('/users/john/document.txt');
// Get directory name
const dir = path.dirname('/users/john/document.txt');
// Get file extension
const ext = path.extname('/users/john/document.txt');
// Create absolute path
const absolute = path.resolve('src', 'app.js');
Always use path.join() instead of string concatenation for paths.
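Here's why, as a small sketch: manual concatenation hard-codes a separator and can produce double slashes, while path.join() uses the right separator for the current operating system and cleans the result.
const path = require('path');
// Fragile - assumes "/" and produces a double slash
const manual = 'users/' + '/john/' + 'file.txt'; // users//john/file.txt
// Portable - correct separator on Windows, macOS, and Linux
const joined = path.join('users', 'john', 'file.txt');
console.log(joined); // users/john/file.txt (users\john\file.txt on Windows)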
The os module provides information about the operating system and computer.
const os = require('os');
console.log('Platform:', os.platform());
console.log('CPU Cores:', os.cpus().length);
console.log('Total Memory:', os.totalmem());
console.log('Free Memory:', os.freemem());
console.log('Home Directory:', os.homedir());
console.log('Uptime:', os.uptime());
Useful for system monitoring tools and applications that need to adapt to different environments.
The util module contains utility functions that are helpful but not big enough to deserve their own module.
const util = require('util');
const fs = require('fs');
// Convert callback-based functions to promises
const readFile = util.promisify(fs.readFile);
// Format strings
const message = util.format('Hello %s, you have %d messages', 'John', 5);
// Inherit from other classes (older style)
util.inherits(ChildClass, ParentClass);
// Debug logging
const debug = util.debuglog('app');
debug('This appears if NODE_DEBUG=app');
util.promisify is especially useful for working with older callback-based code.
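A short usage sketch of util.promisify with fs.readFile ('config.json' is just a placeholder file name):
const util = require('util');
const fs = require('fs');
const readFile = util.promisify(fs.readFile);
async function loadConfig() {
  try {
    const raw = await readFile('config.json', 'utf8');
    return JSON.parse(raw);
  } catch (error) {
    console.error('Could not load config:', error.message);
    return {};
  }
}
loadConfig().then(config => console.log(config));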
Creating a custom module is simple - every JavaScript file is already a module. You just need to export what you want to share.
// my-module.js
const privateVariable = 'This is private';
function privateFunction() {
return 'Only available inside this module';
}
function publicFunction() {
return 'This can be used by other files';
}
// Export what you want to share
module.exports = {
publicFunction,
version: '1.0.0'
};
Then use it in another file:
// app.js
const myModule = require('./my-module');
console.log(myModule.publicFunction()); // This can be used by other files
console.log(myModule.version); // 1.0.0
Variables and functions not exported are private to the module.
Third-party packages are code written by other developers that you can use in your projects.
// 1. Install the package
npm install express
// 2. Require it in your code
const express = require('express');
// 3. Use it
const app = express();
app.get('/', (req, res) => {
res.send('Hello World');
});
Popular third-party packages:
Always check package quality, maintenance, and security before using.
The npm registry is a huge database of JavaScript packages. It's like an app store for Node.js code.
When you run `npm install express`, npm looks in the registry for the express package and downloads it.
Key facts about npm registry:
You can search for packages on npmjs.com or use:
npm search package-name
npm view package-name
The registry ensures packages are available to everyone and handles version management.
Global packages are installed system-wide and can be used from any directory.
# Install globally
npm install -g package-name
# Examples of useful global packages
npm install -g nodemon
npm install -g create-react-app
npm install -g @angular/cli
Global packages are typically:
Find where global packages are installed:
npm root -g
List global packages:
npm list -g --depth=0
Use global packages sparingly. Most packages should be project dependencies.
npx is a tool that comes with npm (version 5.2+) that lets you run packages without installing them globally.
# Run a package without installing it
npx create-react-app my-app
npx eslint src/
npx http-server
How npx works:
Benefits of npx:
You can also run specific versions:
npx package-name@version
npx has largely replaced the need for global installations.
Yarn is an alternative package manager to npm. It was created by Facebook to address the speed and consistency problems of earlier npm versions.
# Yarn commands vs npm
yarn add express        # npm install express
yarn remove express     # npm uninstall express
yarn upgrade            # npm update
yarn run script-name    # npm run script-name
Key features of Yarn:
Install Yarn:
npm install -g yarn
Yarn and npm are mostly compatible. You can switch between them in the same project.
Modern npm has caught up with many Yarn features, so the choice is often personal preference.
Uninstalling packages removes them from your project and updates package.json.
# Uninstall and remove from package.json
npm uninstall package-name
# Uninstall but keep in package.json
npm uninstall package-name --no-save
# Uninstall development dependency
npm uninstall --save-dev package-name
# Uninstall globally
npm uninstall -g package-name
What happens when you uninstall:
After uninstalling, you might want to:
# Remove unused dependencies
npm prune
# Reinstall everything fresh
rm -rf node_modules && npm install
Always test your application after uninstalling packages to make sure nothing broke.
dependencies are packages needed to run your application. devDependencies are packages only needed during development.
// package.json
{
"dependencies": {
"express": "^4.18.0", // Needed in production
"mongoose": "^6.0.0" // Needed in production
},
"devDependencies": {
"nodemon": "^2.0.0", // Only for development
"jest": "^27.0.0" // Only for development
}
}
Install as dependency:
npm install express --save
# or
npm install express
Install as dev dependency:
npm install jest --save-dev
# or
npm install -D jest
When you deploy to production, only dependencies are installed:
npm install --production
Common devDependencies:
Test runners like jest or mocha
Development helpers like nodemon
Linters and formatters like eslint and prettier
Build tools like webpack or babel
peerDependencies are packages that your package needs, but expects the user to install themselves.
// If you're creating a React component library
{
"name": "my-react-components",
"peerDependencies": {
"react": "^17.0.0",
"react-dom": "^17.0.0"
}
}
When someone installs your package, they'll get a warning if they don't have the peer dependencies installed.
Use cases for peerDependencies:
Example: A webpack plugin would list webpack as a peer dependency because it needs webpack to work, but doesn't want to install its own version.
In npm 7+, peer dependencies are automatically installed. In older versions, users had to install them manually.
Dependency conflicts happen when different packages require different versions of the same package.
Common conflict scenarios:
Package A requires lodash@^4.0.0
Package B requires lodash@^3.0.0
Solutions:
# See dependency tree
npm ls package-name
# Try to deduplicate
npm dedupe
# In Yarn, force a version
// package.json
{
"resolutions": {
"**/lodash": "^4.0.0"
}
}
Prevention:
Sometimes you have to choose between updating packages or finding alternatives.
Semantic versioning (semver) is a system for version numbers that tells you about the changes in each release.
Version format: MAJOR.MINOR.PATCH
MAJOR - breaking changes
MINOR - new features that stay backwards compatible
PATCH - bug fixes
Version ranges in package.json:
{
"dependencies": {
"express": "4.18.2", // Exact version
"lodash": "^4.17.21", // Compatible with 4.17.21, but can update to 4.x.x
"react": "~17.0.2", // Compatible with 17.0.2, but can update to 17.0.x
"mongoose": ">=6.0.0", // Version 6.0.0 or higher
"axios": "*" // Any version (not recommended)
}
}
Common version prefixes:
Always use specific version ranges to avoid breaking changes.
package.json is the configuration file for your Node.js project. It's like the ID card and instruction manual combined.
{
"name": "my-project",
"version": "1.0.0",
"description": "A sample Node.js project",
"main": "index.js",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js",
"test": "jest"
},
"dependencies": {
"express": "^4.18.0"
},
"devDependencies": {
"nodemon": "^2.0.0"
},
"keywords": ["node", "api"],
"author": "Your Name",
"license": "MIT"
}
Key purposes of package.json:
Create package.json:
npm init      # Interactive setup
npm init -y   # Accept all defaults
Every Node.js project should have a package.json file.
Scripts in package.json let you define custom commands for your project.
{
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js",
"test": "jest",
"build": "webpack --mode production",
"lint": "eslint src/",
"clean": "rm -rf dist/"
}
}
Run scripts:
npm run dev   # Runs the "dev" script
npm test      # Short for "npm run test"
npm start     # Short for "npm run start"
Special scripts:
Example with pre/post scripts:
{
"scripts": {
"prestart": "npm run build",
"start": "node dist/server.js",
"poststart": "echo 'Server started successfully'"
}
}
Scripts can run any command available in your system PATH or node_modules/.bin.
Keeping packages updated is important for security and features.
Check for outdated packages:
npm outdated
Update packages:
npm update                        # Update all packages within version ranges
npm update package-name           # Update specific package
npm install package-name@latest   # Update to latest version
Major version updates:
npm install package-name@next   # Update to next major version
npx npm-check-updates           # Check and update major versions
Using npm-check-updates:
# Install the tool
npm install -g npm-check-updates
# Check what can be updated
ncu
# Update package.json
ncu -u
# Install updated versions
npm install
Update strategies:
Always test your application after updating packages.
Publishing a package lets you share your code with the world.
Step 1: Create your package
mkdir my-package
cd my-package
npm init -y
Step 2: Add your code
// index.js
module.exports = function() {
return 'Hello from my package!';
};
Step 3: Configure package.json
{
"name": "my-unique-package-name",
"version": "1.0.0",
"description": "My awesome package",
"main": "index.js",
"keywords": ["utility", "helper"],
"author": "Your Name",
"license": "MIT"
}
Step 4: Login to npm
npm login # Enter your username, password, and email
Step 5: Publish
npm publish
Update your package:
npm version patch   # Bump version (patch, minor, major)
npm publish
Unpublish (within 72 hours):
npm unpublish package-name
Best practices:
npm link lets you work on a package locally and test it in another project without publishing.
Step 1: Create your local package
# In the package directory
cd /path/to/my-package
npm link
Step 2: Use it in your project
# In your project directory
cd /path/to/my-project
npm link my-package
Step 3: Use the linked package
const myPackage = require('my-package');
console.log(myPackage());
Unlink when done:
# In your project directory
npm unlink my-package
# In the package directory
npm unlink
Using Yarn:
# In package directory
yarn link
# In project directory
yarn link "package-name"
Benefits of npm link:
Limitations:
Scoped packages are npm packages that belong to a specific namespace, usually an organization or user.
Scoped package names:
@company/package-name
@username/package-name
@organization/tool-name
Install scoped packages:
npm install @company/package-name
npm install @angular/core
npm install @babel/cli
Publish scoped packages:
{
"name": "@my-org/my-package",
"version": "1.0.0"
}
Then publish:
npm publish --access public    # Public scoped package
npm publish --access private   # Private scoped package (requires paid npm account)
Benefits of scoped packages:
Using scoped packages in code:
const companyPackage = require('@company/package-name');
import { Component } from '@angular/core';
Many large organizations use scoped packages to manage their internal and public packages.
Private packages are only accessible to authorized users, perfect for company internal code.
Method 1: Scoped private packages (npm paid account)
# Publish as private
npm publish --access private
# Install private package
npm install @company/private-package
Method 2: Git repository
{
"dependencies": {
"private-package": "git+ssh://git@github.com:company/private-package.git"
}
}
Method 3: Private registry
# Set private registry
npm config set registry https://registry.company.com
# Install from private registry
npm install private-package
Method 4: GitHub Packages
{
"dependencies": {
"@company/package": "github:company/repo"
}
}
Authentication for private packages:
# Login to npm
npm login
# For GitHub packages, create .npmrc
//npm.pkg.github.com/:_authToken=TOKEN
Using environment variables:
# In .npmrc
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
//registry.company.com/:_authToken=${COMPANY_TOKEN}
Private packages are essential for enterprise development and proprietary code.
Monorepos are single repositories that contain multiple packages or projects.
Monorepo structure:
monorepo/
├── packages/
│   ├── package-a/
│   │   ├── package.json
│   │   └── index.js
│   ├── package-b/
│   │   ├── package.json
│   │   └── index.js
│   └── shared/
│       ├── package.json
│       └── utils.js
├── apps/
│   ├── web-app/
│   └── mobile-app/
└── package.json
Tools for monorepos:
Using Yarn Workspaces:
// package.json (root)
{
"name": "monorepo",
"private": true,
"workspaces": ["packages/*", "apps/*"]
}
// Install dependencies across all packages
yarn install
Using npm Workspaces:
// package.json (root)
{
"name": "monorepo",
"workspaces": ["packages/*", "apps/*"]
}
// Install dependencies
npm install
Benefits of monorepos:
Used by companies like Google, Facebook, and Microsoft for large codebases.
pnpm is a fast, disk space efficient package manager that serves as an alternative to npm and Yarn.
Install pnpm:
npm install -g pnpm
Basic commands:
pnpm install               # Install dependencies
pnpm add package-name      # Add dependency
pnpm remove package-name   # Remove dependency
pnpm update                # Update dependencies
How pnpm works differently:
Benefits of pnpm:
Migration from npm/Yarn:
# Delete node_modules and package-lock.json
rm -rf node_modules package-lock.json
# Install with pnpm
pnpm install
pnpm is fully compatible with package.json and works with most npm workflows.
node_modules is the directory where npm installs all your project's dependencies.
Structure of node_modules:
node_modules/
├── express/
│   ├── index.js
│   └── package.json
├── lodash/
│   ├── map.js
│   └── package.json
└── ...hundreds more
Key facts about node_modules:
What happens during npm install:
Best practices:
The node_modules directory is what makes Node.js dependency management work.
npm caching stores downloaded packages locally to speed up future installations.
Cache location:
# View cache location
npm config get cache
# Typical locations:
# Linux: ~/.npm
# Windows: %AppData%/npm-cache
# macOS: ~/.npm
Cache commands:
npm cache verify   # Verify the cache integrity
npm cache clean    # Clear the cache (older versions)
npm cache ls       # List cached packages
How caching works:
Benefits of caching:
Cache management:
The cache is one reason why `npm install` is fast after the first time.
npm shrinkwrap creates a lock file that locks down the exact versions of all dependencies.
Create shrinkwrap file:
npm shrinkwrap
This creates `npm-shrinkwrap.json` that contains exact versions of all dependencies.
Shrinkwrap vs package-lock.json:
When to use shrinkwrap:
Example shrinkwrap file:
{
"name": "my-package",
"version": "1.0.0",
"dependencies": {
"express": {
"version": "4.18.2",
"resolved": "https://registry.npmjs.org/express/-/express-4.18.2.tgz",
"integrity": "sha512-..."
}
}
}
With modern npm (version 5+), package-lock.json is usually sufficient for most use cases.
package-lock.json is a file that records the exact versions of all installed packages.
Purpose of package-lock.json:
Example package-lock.json content:
{
"name": "my-project",
"version": "1.0.0",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"dependencies": {
"express": "^4.18.0"
}
},
"node_modules/express": {
"version": "4.18.2",
"resolved": "https://registry.npmjs.org/express/-/express-4.18.2.tgz",
"integrity": "sha512-...",
"dependencies": {
"accepts": "~1.3.8"
}
}
}
}
Key points:
Yarn equivalent: yarn.lock
pnpm equivalent: pnpm-lock.yaml
Always commit package-lock.json to ensure consistent installations across all environments.
Version conflicts occur when different packages require incompatible versions of the same dependency.
Identify conflicts:
npm ls package-name
npm outdated
npm audit
Common resolution strategies:
Using npm overrides (npm 8.3+):
{
"overrides": {
"lodash": "^4.17.21"
}
}
Using Yarn resolutions:
{
"resolutions": {
"**/lodash": "^4.17.21"
}
}
Manual resolution steps:
# 1. Check what's installed
npm ls problematic-package
# 2. Try to deduplicate
npm dedupe
# 3. Update packages
npm update package-with-conflict
# 4. If needed, force a version
npm install package-name@desired-version
# 5. Test thoroughly
Prevention:
nvm (Node Version Manager) lets you install and switch between different Node.js versions.
Install nvm:
# On macOS/Linux
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# On Windows
# Use nvm-windows: https://github.com/coreybutler/nvm-windows
Basic nvm commands:
nvm install 18.0.0     # Install specific version
nvm install --lts      # Install latest LTS version
nvm use 16.0.0         # Switch to version 16.0.0
nvm use node           # Switch to default version
nvm ls                 # List installed versions
nvm ls-remote          # List available versions
nvm current            # Show current version
nvm alias default 18   # Set default version
Using different versions per project:
# Create .nvmrc file in project
echo "18.0.0" > .nvmrc
# Automatically use correct version
nvm use
Benefits of nvm:
Alternative tools:
nvm is essential for Node.js developers working on multiple projects.
Local packages are installed in your project's node_modules folder. Global packages are installed system-wide and can be used from any directory.
Local packages:
npm install express # Installs in ./node_modules
const express = require('express'); // Used in your code
Global packages:
npm install -g nodemon   # Installs system-wide
nodemon server.js        # Can run from any directory
When to use each:
Finding installation locations:
npm root      # Local node_modules location
npm root -g   # Global installation location
Most packages should be installed locally. Only install globally when you need the package as a command-line tool.
npm audit scans your dependencies for known security vulnerabilities and suggests fixes.
Basic audit commands:
npm audit               # Check for vulnerabilities
npm audit fix           # Automatically fix vulnerabilities
npm audit fix --force   # Fix even with breaking changes
npm audit --json        # Output in JSON format
Understanding audit output:
# Example output
=== npm audit security report ===
Critical        Command Injection in some-package
Package         some-package
Dependency of   my-package
Path            my-package > some-package
More info       https://npmjs.com/advisories/1234
# Fix available
Run `npm audit fix` to fix these issues.
What npm audit checks:
Integrating with CI/CD:
# Fail build if critical vulnerabilities found
npm audit --audit-level=critical
# In package.json scripts
{
"scripts": {
"security-check": "npm audit --audit-level=moderate"
}
}
Run npm audit regularly as part of your development workflow.
Optional dependencies are packages that your module would like to use if available, but aren't required for it to work.
Defining optional dependencies:
{
"name": "my-package",
"optionalDependencies": {
"fsevents": "^2.0.0",
"better-sqlite3": "^7.0.0"
}
}
How optional dependencies work:
Checking for optional dependencies in code:
let fsevents;
try {
fsevents = require('fsevents');
} catch (error) {
// fsevents is not available
// Fall back to alternative implementation
fsevents = null;
}
if (fsevents) {
// Use fsevents for better performance
} else {
// Use fallback implementation
}
Common use cases:
Optional dependencies make your package more flexible and platform-independent.
npm config lets you manage npm configuration settings for your project or globally.
Viewing configuration:
npm config list           # Show all config settings
npm config get registry   # Get specific setting
npm config list -l        # Show all defaults too
Setting configuration:
npm config set registry https://registry.npmjs.org/
npm config set save-exact true
npm config set init-author-name "Your Name"
Common useful settings:
# Always save exact versions
npm config set save-exact true
# Set default license
npm config set init-license "MIT"
# Use a different registry
npm config set registry https://registry.company.com
# Set timeout for slow networks
npm config set fetch-retry-maxtimeout 60000
Configuration files:
Using .npmrc file:
# .npmrc in project root
registry=https://registry.npmjs.org/
save-exact=true
package-lock=true
Configuration settings help customize npm behavior for your workflow.
Bundled dependencies are packages that are included inside your package rather than being installed from the registry.
Defining bundled dependencies:
{
"name": "my-package",
"bundledDependencies": [
"local-dependency",
"modified-package"
]
}
How bundled dependencies work:
Creating a package with bundled dependencies:
my-package/
├── node_modules/
│   └── local-dependency/   # This gets bundled
├── package.json
└── index.js
Use cases for bundled dependencies:
Alternative: Using file dependencies
{
"dependencies": {
"local-package": "file:../local-package"
}
}
Bundled dependencies are less common now with better package management options available.
npm workspaces allow you to manage multiple packages in a single repository (monorepo).
Setting up workspaces:
{
"name": "my-monorepo",
"workspaces": [
"packages/*",
"apps/*"
]
}
Project structure:
my-monorepo/
├── packages/
│   ├── utils/
│   │   └── package.json
│   └── components/
│       └── package.json
├── apps/
│   ├── web-app/
│   └── mobile-app/
└── package.json
Workspace commands:
npm install                         # Install all workspace dependencies
npm run test --workspaces           # Run test in all workspaces
npm run build --workspace=web-app   # Run in specific workspace
Adding dependencies to workspaces:
npm install react --workspace=web-app
npm install lodash -w utils -w components
Linking between workspaces:
// In apps/web-app/package.json
{
"dependencies": {
"@my-monorepo/utils": "*",
"@my-monorepo/components": "*"
}
}
Benefits of workspaces:
Workspaces are npm's built-in solution for monorepos.
Native modules are Node.js packages that contain C++ code and need to be compiled for your specific platform.
Common native modules:
npm install bcrypt    # Password hashing
npm install sqlite3   # SQLite database
npm install sharp     # Image processing
Compilation requirements:
Installing build tools:
# macOS
xcode-select --install
# Ubuntu/Debian
sudo apt-get install build-essential
# Windows
npm install --global windows-build-tools
Forcing compilation:
npm rebuild                       # Rebuild all native modules
npm install --build-from-source   # Force compilation from source
Using pre-built binaries:
# Many packages offer pre-built binaries
# They automatically download for your platform
npm install sharp
Troubleshooting compilation issues:
# Check node-gyp version
node-gyp --version
# Clean and rebuild
npm cache clean --force
rm -rf node_modules
npm install
Native modules provide better performance but require compilation setup.
npm hooks are scripts that run automatically during package installation and publishing.
Installation lifecycle scripts:
{
"scripts": {
"preinstall": "echo 'About to install'",
"install": "node-gyp rebuild",
"postinstall": "echo 'Installation complete'"
}
}
Publishing lifecycle scripts:
{
"scripts": {
"prepublish": "npm test",
"prepare": "npm run build",
"prepublishOnly": "npm run lint",
"postpublish": "echo 'Package published'"
}
}
Common lifecycle scripts:
Use cases for lifecycle scripts:
{
"scripts": {
"postinstall": "node scripts/setup.js",
"prepublish": "npm run build && npm test",
"preinstall": "node scripts/check-compatibility.js"
}
}
Best practices:
Lifecycle scripts automate common tasks during package management.
npm integrates well with Continuous Integration and Deployment pipelines for automated testing and deployment.
Basic CI/CD setup:
# .github/workflows/ci.yml (GitHub Actions)
name: CI
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: '18'
cache: 'npm'
- run: npm ci
- run: npm test
- run: npm run build
Using npm ci for CI:
npm ci # Clean install for CI environments
Benefits of npm ci:
Security scanning in CI:
# In package.json scripts
{
"scripts": {
"ci": "npm ci && npm test && npm run build && npm audit"
}
}
Environment-specific installs:
# Install only production dependencies
npm ci --production
# Install all dependencies (development + production)
npm ci
Cache configuration for faster builds:
# Cache node_modules in CI
- name: Cache node modules
uses: actions/cache@v2
with:
path: node_modules
key: ${{ runner.os }}-node-${{ hashFiles('package-lock.json') }}
Proper npm usage in CI/CD ensures reliable and fast deployments.
Following best practices ensures your Node.js projects are maintainable, secure, and reliable.
Dependency management:
# Use exact versions in production
npm install express --save-exact
# Regular updates
npm outdated
npm update
Security practices:
# Regular security audits
npm audit
npm audit fix
# Update vulnerable packages
npm update vulnerable-package
Version control:
# Commit these files
package.json
package-lock.json   # or yarn.lock
.npmrc              # if project-specific
# Add to .gitignore
node_modules/
.npm
*.log
Script organization:
{
"scripts": {
"dev": "nodemon server.js",
"test": "jest",
"test:watch": "jest --watch",
"build": "webpack --mode production",
"lint": "eslint src/",
"lint:fix": "eslint src/ --fix"
}
}
Project structure:
my-project/
├── src/       # Source code
├── tests/     # Test files
├── docs/      # Documentation
├── scripts/   # Build scripts
├── package.json
└── README.md
Key best practices:
Good package management leads to more stable and secure applications.
The fs module (File System module) is a built-in Node.js module that lets you work with files and directories on your computer.
Think of the fs module as a set of tools for reading, writing, and managing files. It's like having a digital file cabinet where you can store, retrieve, and organize your data.
Basic usage:
const fs = require('fs');
What you can do with fs module:
Example - reading a file:
const fs = require('fs');
fs.readFile('example.txt', 'utf8', (error, data) => {
if (error) {
console.log('Error reading file:', error);
return;
}
console.log('File content:', data);
});
The fs module has both synchronous (blocking) and asynchronous (non-blocking) methods. Asynchronous methods are preferred because they don't stop your program while working with files.
This module is essential for any Node.js application that needs to work with files, like web servers, data processing tools, or configuration managers.
Reading files asynchronously means your program doesn't wait for the file to be read. It continues doing other work and gets notified when the file is ready.
Basic async file reading:
const fs = require('fs');
fs.readFile('data.txt', 'utf8', (error, data) => {
if (error) {
console.log('Error reading file:', error);
return;
}
console.log('File content:', data);
});
console.log('This runs immediately, before file reading completes');
Output would be:
This runs immediately, before file reading completes
File content: [file contents]
With error handling:
const fs = require('fs');
fs.readFile('missing-file.txt', 'utf8', (error, data) => {
if (error) {
if (error.code === 'ENOENT') {
console.log('File does not exist');
} else {
console.log('Other error:', error);
}
return;
}
console.log('File content:', data);
});
Reading binary files (images, etc.):
const fs = require('fs');
fs.readFile('image.jpg', (error, data) => {
if (error) throw error;
// data is a Buffer object
console.log('Image size:', data.length, 'bytes');
});
Key points about async reading:
Async file reading is the preferred method in Node.js because it keeps your application responsive.
Reading files synchronously means your program waits until the file is completely read before continuing. It's simpler but blocks other operations.
Basic sync file reading:
const fs = require('fs');
try {
const data = fs.readFileSync('data.txt', 'utf8');
console.log('File content:', data);
} catch (error) {
console.log('Error reading file:', error);
}
console.log('This runs only after file reading completes');
Output would be:
File content: [file contents]
This runs only after file reading completes
Without try-catch (not recommended):
const fs = require('fs');
const data = fs.readFileSync('data.txt', 'utf8');
console.log('File content:', data);
Reading binary files synchronously:
const fs = require('fs');
try {
const buffer = fs.readFileSync('image.jpg');
console.log('Image size:', buffer.length, 'bytes');
} catch (error) {
console.log('Error:', error);
}
When to use synchronous reading:
When NOT to use synchronous reading:
Synchronous methods are easier to understand but can make your application unresponsive.
Writing to files lets you save data from your program to disk. The fs module provides several ways to write files.
Basic async file writing:
const fs = require('fs');
const content = 'Hello, this is my file content!';
fs.writeFile('output.txt', content, 'utf8', (error) => {
if (error) {
console.log('Error writing file:', error);
return;
}
console.log('File written successfully!');
});
Synchronous file writing:
const fs = require('fs');
const content = 'Hello, this is my file content!';
try {
fs.writeFileSync('output.txt', content, 'utf8');
console.log('File written successfully!');
} catch (error) {
console.log('Error writing file:', error);
}
Writing JSON data:
const fs = require('fs');
const user = {
name: 'John Doe',
age: 30,
email: 'john@example.com'
};
fs.writeFile('user.json', JSON.stringify(user, null, 2), 'utf8', (error) => {
if (error) throw error;
console.log('JSON file written!');
});
Important notes about writeFile:
It creates the file if it doesn't exist
It replaces the entire content if the file already exists (see the sketch below for an overwrite-safe option)
If you pass a string without an encoding, utf8 is used by default
Common error scenarios:
EACCES - you don't have permission to write to that location
ENOENT - a parent directory in the path doesn't exist
File writing is essential for saving application data, logs, user uploads, and configuration files.
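A small sketch of those behaviors (output.txt is just an example name): the default call replaces existing content, while the 'wx' flag makes the write fail if the file already exists.
const fs = require('fs');
// Default: creates the file, or silently replaces its content
fs.writeFile('output.txt', 'version 2', 'utf8', (error) => {
  if (error) return console.log('Write failed:', error);
  console.log('File overwritten');
});
// 'wx' flag: write only if the file does not exist yet
fs.writeFile('output.txt', 'first version', { flag: 'wx' }, (error) => {
  if (error && error.code === 'EEXIST') {
    return console.log('File already exists - nothing written');
  }
  if (error) return console.log('Write failed:', error);
  console.log('File created');
});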
Appending to a file means adding content to the end of an existing file without deleting what's already there. It's perfect for logs and data collection.
Basic async file appending:
const fs = require('fs');
fs.appendFile('log.txt', 'New log entry\n', 'utf8', (error) => {
if (error) {
console.log('Error appending to file:', error);
return;
}
console.log('Content appended!');
});
Synchronous file appending:
const fs = require('fs');
try {
fs.appendFileSync('log.txt', 'New log entry\n', 'utf8');
console.log('Content appended!');
} catch (error) {
console.log('Error:', error);
}
Appending multiple times:
const fs = require('fs');
function logMessage(message) {
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] ${message}\n`;
fs.appendFile('application.log', logEntry, 'utf8', (error) => {
if (error) {
console.log('Logging failed:', error);
}
});
}
// Usage
logMessage('Application started');
logMessage('User logged in');
logMessage('Data processed');
Appending vs Writing:
writeFile() replaces the entire file content
appendFile() keeps existing content and adds to the end
Both create the file if it doesn't exist
Use cases for appendFile:
Performance consideration:
For high-frequency appends (like logging), consider using streams or buffering to reduce disk I/O operations.
Appending is much safer than writing when you want to preserve existing content.
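For example, here's a minimal sketch of a stream-based logger (application.log is a placeholder name): the stream opens the file once in append mode instead of reopening it for every entry.
const fs = require('fs');
// Open the file once in append mode ('a' flag)
const logStream = fs.createWriteStream('application.log', { flags: 'a' });
function logMessage(message) {
  const timestamp = new Date().toISOString();
  logStream.write(`[${timestamp}] ${message}\n`);
}
logMessage('Application started');
logMessage('User logged in');
// When the app shuts down, close the stream:
// logStream.end();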
Deleting files (also called "unlinking") removes them from the filesystem. Be careful - deleted files are usually gone forever!
Basic async file deletion:
const fs = require('fs');
fs.unlink('file-to-delete.txt', (error) => {
if (error) {
console.log('Error deleting file:', error);
return;
}
console.log('File deleted successfully');
});
Synchronous file deletion:
const fs = require('fs');
try {
fs.unlinkSync('file-to-delete.txt');
console.log('File deleted');
} catch (error) {
console.log('Error:', error);
}
Safe deletion with existence check:
const fs = require('fs');
const path = require('path');
function safeDelete(filename) {
fs.access(filename, fs.constants.F_OK, (error) => {
if (error) {
console.log('File does not exist');
return;
}
fs.unlink(filename, (error) => {
if (error) {
console.log('Error deleting:', error);
return;
}
console.log('File deleted safely');
});
});
}
safeDelete('temp-file.txt');
Common deletion errors:
Why it's called "unlink":
In Unix systems, files are connected to the filesystem through links. Deleting a file actually means removing its link. The name stuck in Node.js.
Safety tips:
File deletion is permanent in most cases, so always be cautious.
Renaming files changes their name or moves them to a different location. In Node.js, renaming and moving are the same operation.
Basic async file renaming:
const fs = require('fs');
fs.rename('old-name.txt', 'new-name.txt', (error) => {
if (error) {
console.log('Error renaming file:', error);
return;
}
console.log('File renamed successfully');
});
Synchronous file renaming:
const fs = require('fs');
try {
fs.renameSync('old-name.txt', 'new-name.txt');
console.log('File renamed');
} catch (error) {
console.log('Error:', error);
}
Moving files to different directories:
const fs = require('fs');
// Move file to different folder
fs.rename('current-file.txt', '../archive/old-file.txt', (error) => {
if (error) {
console.log('Error moving file:', error);
return;
}
console.log('File moved successfully');
});
Safe rename with checks:
const fs = require('fs');
function safeRename(oldPath, newPath) {
// Check if destination already exists
fs.access(newPath, (error) => {
if (!error) {
console.log('Destination already exists');
return;
}
// If we get here, destination doesn't exist
fs.rename(oldPath, newPath, (error) => {
if (error) {
console.log('Error renaming:', error);
return;
}
console.log('File renamed safely');
});
});
}
safeRename('old.txt', 'new.txt');
Common rename errors:
ENOENT - the source file does not exist
EACCES / EPERM - you don't have permission to rename or move the file
EXDEV - the destination is on a different disk or partition, which fs.rename() cannot handle
Important notes:
On most platforms, rename() silently replaces an existing destination file, so check first if that matters
To move a file across devices, copy it and then delete the original (see the sketch below)
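A minimal sketch of that cross-device fallback (the paths are placeholders): try fs.rename() first and fall back to copy-plus-delete when it fails with EXDEV.
const fs = require('fs');
function moveFile(source, destination, callback) {
  fs.rename(source, destination, (error) => {
    if (!error) return callback(null);
    if (error.code !== 'EXDEV') return callback(error);
    // Different device: copy the file, then remove the original
    fs.copyFile(source, destination, (copyError) => {
      if (copyError) return callback(copyError);
      fs.unlink(source, callback);
    });
  });
}
moveFile('data.txt', '/mnt/backup/data.txt', (error) => {
  if (error) return console.log('Move failed:', error);
  console.log('File moved');
});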
File renaming is useful for organizing files, versioning, and moving data between locations.
Creating directories (folders) helps organize your files. The fs module provides methods to create single directories or nested directory structures.
Basic async directory creation:
const fs = require('fs');
fs.mkdir('new-folder', (error) => {
if (error) {
console.log('Error creating directory:', error);
return;
}
console.log('Directory created successfully');
});
Synchronous directory creation:
const fs = require('fs');
try {
fs.mkdirSync('new-folder');
console.log('Directory created');
} catch (error) {
console.log('Error:', error);
}
Creating nested directories:
const fs = require('fs');
// Create parent/child/grandchild structure
fs.mkdir('parent/child/grandchild', { recursive: true }, (error) => {
if (error) {
console.log('Error creating directories:', error);
return;
}
console.log('All directories created');
});
Creating directories with specific permissions:
const fs = require('fs');
// Create directory with read/write/execute permissions for owner
fs.mkdir('secure-folder', { mode: 0o755 }, (error) => {
if (error) throw error;
console.log('Directory created with specific permissions');
});
Complete example with error handling:
const fs = require('fs');
function createProjectStructure() {
const folders = [
'project/src',
'project/dist',
'project/docs',
'project/tests'
];
folders.forEach(folder => {
fs.mkdir(folder, { recursive: true }, (error) => {
if (error) {
console.log(`Error creating ${folder}:`, error);
} else {
console.log(`Created: ${folder}`);
}
});
});
}
createProjectStructure();
Common errors:
EEXIST - the directory already exists
ENOENT - a parent directory doesn't exist (and recursive wasn't set)
EACCES - no permission to create the directory there
Key points:
Use { recursive: true } for nested directories; it also stops mkdir from failing when the directory already exists
Directory creation is essential for organizing files in applications, temporary data, and project structures.
Removing directories deletes them from the filesystem. You need different methods for empty directories vs directories with content.
Removing empty directory (async):
const fs = require('fs');
fs.rmdir('empty-folder', (error) => {
if (error) {
console.log('Error removing directory:', error);
return;
}
console.log('Directory removed');
});
Removing empty directory (sync):
const fs = require('fs');
try {
fs.rmdirSync('empty-folder');
console.log('Directory removed');
} catch (error) {
console.log('Error:', error);
}
Removing directory with contents (Node.js 14+):
const fs = require('fs');
// Remove directory and all its contents
fs.rm('folder-with-files', { recursive: true }, (error) => {
if (error) {
console.log('Error:', error);
return;
}
console.log('Directory and all contents removed');
});
Manual removal of directory with contents (older approach):
const fs = require('fs');
const path = require('path');
function removeDirectory(dirPath) {
if (fs.existsSync(dirPath)) {
fs.readdirSync(dirPath).forEach(file => {
const curPath = path.join(dirPath, file);
if (fs.lstatSync(curPath).isDirectory()) {
// Recursively remove subdirectories
removeDirectory(curPath);
} else {
// Delete file
fs.unlinkSync(curPath);
}
});
// Remove empty directory
fs.rmdirSync(dirPath);
}
}
removeDirectory('old-project');
Safe directory removal:
const fs = require('fs');
function safeRemoveDirectory(dirPath) {
fs.stat(dirPath, (error, stats) => {
if (error) {
console.log('Directory does not exist');
return;
}
if (!stats.isDirectory()) {
console.log('Path is not a directory');
return;
}
fs.rm(dirPath, { recursive: true }, (error) => {
if (error) {
console.log('Error removing:', error);
return;
}
console.log('Directory removed safely');
});
});
}
safeRemoveDirectory('temp-data');
Important warnings:
Removal with { recursive: true } deletes everything inside, permanently
Double-check the path before removing - there is no undo
Handle errors instead of assuming the directory was removed
Directory removal is useful for cleaning up temporary files, cache directories, and old project versions.
fs.promises is a modern version of the fs module that uses Promises instead of callbacks. It's cleaner and works well with async/await.
Using fs.promises:
const fs = require('fs').promises;
Reading files with promises:
const fs = require('fs').promises;
async function readFile() {
try {
const data = await fs.readFile('data.txt', 'utf8');
console.log('File content:', data);
} catch (error) {
console.log('Error reading file:', error);
}
}
readFile();
Writing files with promises:
const fs = require('fs').promises;
async function writeFile() {
try {
await fs.writeFile('output.txt', 'Hello World', 'utf8');
console.log('File written!');
} catch (error) {
console.log('Error:', error);
}
}
writeFile();
Multiple file operations with promises:
const fs = require('fs').promises;
async function processFiles() {
try {
// Read multiple files
const [file1, file2] = await Promise.all([
fs.readFile('file1.txt', 'utf8'),
fs.readFile('file2.txt', 'utf8')
]);
// Process the data
const result = file1 + file2;
// Write result
await fs.writeFile('combined.txt', result, 'utf8');
console.log('Files processed successfully');
} catch (error) {
console.log('Error processing files:', error);
}
}
processFiles();
Comparing callback vs promise style:
// Callback style (old way)
fs.readFile('file.txt', 'utf8', (error, data) => {
if (error) throw error;
console.log(data);
});
// Promise style (modern way)
fs.promises.readFile('file.txt', 'utf8')
.then(data => console.log(data))
.catch(error => console.log(error));
// Async/await style (cleanest)
async function readFile() {
try {
const data = await fs.promises.readFile('file.txt', 'utf8');
console.log(data);
} catch (error) {
console.log(error);
}
}
Benefits of fs.promises:
Available methods in fs.promises:
readFile(), writeFile(), appendFile()
mkdir(), rmdir(), rm()
rename(), unlink()
stat(), readdir()
For new projects, fs.promises is the recommended way to work with files in Node.js.
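Here's a small sketch combining a few of those methods (the folder and file names are placeholders):
const fs = require('fs').promises;
const path = require('path');
async function setupReports() {
  try {
    await fs.mkdir('reports', { recursive: true });   // create the folder
    await fs.writeFile(path.join('reports', 'today.txt'), 'Report body', 'utf8');
    const files = await fs.readdir('reports');        // list its contents
    console.log('Report files:', files);
  } catch (error) {
    console.log('Error:', error);
  }
}
setupReports();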
Checking file existence is a common operation before reading, writing, or deleting files. There are several ways to do this in Node.js.
Using fs.access() (recommended):
const fs = require('fs');
fs.access('file.txt', fs.constants.F_OK, (error) => {
if (error) {
console.log('File does not exist');
} else {
console.log('File exists');
}
});
Using fs.promises.access():
const fs = require('fs').promises;
async function checkFile() {
try {
await fs.access('file.txt');
console.log('File exists');
} catch (error) {
console.log('File does not exist');
}
}
checkFile();
Checking different permissions:
const fs = require('fs');
fs.access('file.txt', fs.constants.F_OK | fs.constants.R_OK, (error) => {
if (error) {
console.log('File does not exist or is not readable');
} else {
console.log('File exists and is readable');
}
});
Common permission constants:
fs.constants.F_OK - File exists
fs.constants.R_OK - File is readable
fs.constants.W_OK - File is writable
fs.constants.X_OK - File is executable
Why NOT to use fs.exists() (deprecated):
// DON'T DO THIS - it's deprecated
if (fs.existsSync('file.txt')) {
// Race condition: file might be deleted between check and use
fs.readFile('file.txt', (error, data) => {
// Could get error here if file was deleted
});
}
Better pattern - try to use the file directly:
const fs = require('fs');
// Instead of checking first, just try the operation
fs.readFile('file.txt', 'utf8', (error, data) => {
if (error) {
if (error.code === 'ENOENT') {
console.log('File does not exist');
} else {
console.log('Other error:', error);
}
return;
}
console.log('File content:', data);
});
Key points:
Use fs.access() for checking existence and permissions
Avoid the deprecated fs.exists()
Often it's better to just attempt the operation and handle the error
File existence checking is essential for validation, security, and error prevention.
Copying files creates a duplicate of a file in a new location. Node.js provides several methods for file copying, each with different use cases.
Using fs.copyFile() (simple copying):
const fs = require('fs');
fs.copyFile('source.txt', 'destination.txt', (error) => {
if (error) {
console.log('Error copying file:', error);
return;
}
console.log('File copied successfully');
});
Using fs.promises.copyFile():
const fs = require('fs').promises;
async function copyFile() {
try {
await fs.copyFile('source.jpg', 'backup.jpg');
console.log('File copied');
} catch (error) {
console.log('Error:', error);
}
}
copyFile();
Copying with streams (for large files):
const fs = require('fs');
function copyLargeFile(source, destination) {
return new Promise((resolve, reject) => {
const readStream = fs.createReadStream(source);
const writeStream = fs.createWriteStream(destination);
readStream.on('error', reject);
writeStream.on('error', reject);
writeStream.on('finish', resolve);
readStream.pipe(writeStream);
});
}
// Usage
copyLargeFile('large-video.mp4', 'video-backup.mp4')
.then(() => console.log('Large file copied'))
.catch(error => console.log('Error:', error));
Copying with progress tracking:
const fs = require('fs');
function copyWithProgress(source, destination) {
return new Promise((resolve, reject) => {
const readStream = fs.createReadStream(source);
const writeStream = fs.createWriteStream(destination);
let bytesCopied = 0;
const fileSize = fs.statSync(source).size;
readStream.on('data', (chunk) => {
bytesCopied += chunk.length;
const percent = ((bytesCopied / fileSize) * 100).toFixed(1);
console.log(`Copy progress: ${percent}%`);
});
readStream.on('error', reject);
writeStream.on('error', reject);
writeStream.on('finish', resolve);
readStream.pipe(writeStream);
});
}
Copy modes (copy flags):
const fs = require('fs');
// COPYFILE_EXCL - fails if destination exists
// COPYFILE_FICLONE - try to use copy-on-write (if supported)
// COPYFILE_FICLONE_FORCE - use copy-on-write (fail if not supported)
fs.copyFile('source.txt', 'destination.txt', fs.constants.COPYFILE_EXCL, (error) => {
if (error) {
console.log('Error:', error);
return;
}
console.log('File copied (exclusive)');
});
When to use each method:
fs.copyFile() / fs.promises.copyFile() - simple, fast copies of whole files
Streams - very large files, or when you need progress tracking
fs.constants.COPYFILE_EXCL - when the destination must not be overwritten
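If you want a single helper that picks the right method automatically, a minimal sketch (the 100MB threshold is an arbitrary example value):
const fs = require('fs');
const { pipeline } = require('stream');
// Rough rule of thumb: copyFile for small files, streams for very large ones
async function smartCopy(source, destination, threshold = 100 * 1024 * 1024) {
const { size } = await fs.promises.stat(source);
if (size < threshold) {
await fs.promises.copyFile(source, destination);
} else {
await new Promise((resolve, reject) => {
pipeline(
fs.createReadStream(source),
fs.createWriteStream(destination),
(error) => (error ? reject(error) : resolve())
);
});
}
}
smartCopy('data.bin', 'data-copy.bin')
.then(() => console.log('Copy finished'))
.catch(error => console.log('Copy failed:', error));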
File copying is essential for backups, data processing, and file management operations.
Streams are one of the most powerful concepts in Node.js. They allow you to handle data in chunks as it becomes available, rather than loading everything into memory at once.
Think of streams like a water pipe: data flows through in small chunks as it becomes available, instead of being delivered all at once in one large bucket.
Why streams are important:
// Without streams - loads entire file into memory
fs.readFile('huge-file.txt', (error, data) => {
// All 2GB of data is now in memory!
console.log(data.length);
});
// With streams - processes data in chunks
const stream = fs.createReadStream('huge-file.txt');
stream.on('data', (chunk) => {
// Only small chunks in memory at a time
console.log('Received chunk:', chunk.length, 'bytes');
});
Types of streams:
Readable - data can be read from it (e.g. fs.createReadStream)
Writable - data can be written to it (e.g. fs.createWriteStream)
Duplex - both readable and writable (e.g. TCP sockets)
Transform - a duplex stream that modifies data as it passes through (e.g. zlib)
Real-world stream examples:
// Video streaming
const videoStream = fs.createReadStream('movie.mp4');
response.writeHead(200, { 'Content-Type': 'video/mp4' });
videoStream.pipe(response);
// Data processing pipeline
fs.createReadStream('data.csv')
.pipe(csvParser()) // Parse CSV
.pipe(dataFilter()) // Filter rows
.pipe(jsonConverter()) // Convert to JSON
.pipe(fs.createWriteStream('output.json'));
Stream events:
const stream = fs.createReadStream('file.txt');
stream.on('data', (chunk) => {
console.log('Data chunk:', chunk.length);
});
stream.on('end', () => {
console.log('No more data');
});
stream.on('error', (error) => {
console.log('Error:', error);
});
stream.on('close', () => {
console.log('Stream closed');
});
Benefits of streams:
Memory efficiency - only small chunks are held in memory at a time
Time efficiency - processing can start before all the data has arrived
Composability - streams can be chained together with pipe()
Streams are fundamental to Node.js performance and are used everywhere - from file operations to HTTP servers to data processing.
Readable streams are used to read data from a source. They emit data events as chunks become available and are perfect for handling large files or continuous data.
Creating a readable stream from a file:
const fs = require('fs');
const readableStream = fs.createReadStream('large-file.txt', 'utf8');
readableStream.on('data', (chunk) => {
console.log('Received chunk:', chunk.length, 'characters');
console.log('Chunk content:', chunk.substring(0, 50) + '...');
});
readableStream.on('end', () => {
console.log('Finished reading file');
});
readableStream.on('error', (error) => {
console.log('Error reading file:', error);
});
Stream options for better control:
const fs = require('fs');
const stream = fs.createReadStream('large-file.txt', {
encoding: 'utf8',
highWaterMark: 64 * 1024, // 64KB chunks
start: 0, // Start reading from byte 0
end: 1024 * 1024 // Read only first 1MB
});
stream.on('data', (chunk) => {
console.log('Chunk size:', chunk.length);
});
Creating a custom readable stream:
const { Readable } = require('stream');
class NumberStream extends Readable {
constructor(max) {
super();
this.current = 1;
this.max = max;
}
_read() {
if (this.current > this.max) {
this.push(null); // No more data
} else {
const number = this.current.toString();
this.push(number + '\n');
this.current++;
}
}
}
// Usage
const numberStream = new NumberStream(10);
numberStream.pipe(process.stdout);
Using readable streams with async iteration (Node.js 10+):
const fs = require('fs');
async function processFile() {
const stream = fs.createReadStream('data.txt', 'utf8');
for await (const chunk of stream) {
console.log('Processing chunk:', chunk.length);
// Process each chunk
}
console.log('File processing complete');
}
processFile().catch(console.error);
Pausing and resuming streams:
const fs = require('fs');
const stream = fs.createReadStream('data.txt', 'utf8');
stream.on('data', (chunk) => {
console.log('Received data');
// Pause stream to process data slowly
stream.pause();
setTimeout(() => {
console.log('Processing complete, resuming...');
stream.resume();
}, 1000);
});
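The readable event pairs with stream.read() when you prefer pulling data instead of reacting to data events; a minimal sketch:
const fs = require('fs');
const stream = fs.createReadStream('data.txt', 'utf8');
// Pull-style consumption: read() returns null when the internal buffer is empty
stream.on('readable', () => {
let chunk;
while ((chunk = stream.read()) !== null) {
console.log('Read chunk of length:', chunk.length);
}
});
stream.on('end', () => {
console.log('No more data');
});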
Common readable stream events:
data - emitted when data is available
end - emitted when no more data
error - emitted on error
close - emitted when stream is closed
readable - emitted when data can be read
Readable streams are essential for efficient data processing, especially with large files or continuous data sources.
Writable streams are used to write data to a destination. They receive data and handle the writing process, managing backpressure and errors automatically.
Creating a writable stream to a file:
const fs = require('fs');
const writableStream = fs.createWriteStream('output.txt', 'utf8');
writableStream.write('First line\n');
writableStream.write('Second line\n');
writableStream.write('Third line\n');
writableStream.end('Final line\n');
Using events with writable streams:
const fs = require('fs');
const stream = fs.createWriteStream('log.txt', 'utf8');
stream.on('finish', () => {
console.log('All data has been written');
});
stream.on('error', (error) => {
console.log('Error writing:', error);
});
stream.on('drain', () => {
console.log('Buffer is empty, ready for more data');
});
// Write some data
stream.write('Log entry 1\n');
stream.write('Log entry 2\n');
stream.end('Final entry\n');
Creating a custom writable stream:
const { Writable } = require('stream');
class LoggerStream extends Writable {
constructor(options) {
super(options);
}
_write(chunk, encoding, callback) {
const message = chunk.toString().trim();
console.log(`[LOG] ${new Date().toISOString()}: ${message}`);
callback(); // Signal that writing is complete
}
}
// Usage
const logger = new LoggerStream();
logger.write('Application started');
logger.write('User logged in');
logger.end('Application stopped');
Handling backpressure with drain event:
const fs = require('fs');
function writeLargeData(data, writableStream) {
let i = 0;
function write() {
let ok = true;
while (i < data.length && ok) {
// Try to write next chunk
ok = writableStream.write(data[i]);
i++;
}
if (i < data.length) {
// Had to stop early - wait for drain
writableStream.once('drain', write);
} else {
// All data written
writableStream.end();
}
}
write();
}
const stream = fs.createWriteStream('large-output.txt');
const data = Array(1000).fill().map((_, i) => `Line ${i}\n`);
writeLargeData(data, stream);
Writable stream options:
const fs = require('fs');
const stream = fs.createWriteStream('file.txt', {
encoding: 'utf8',
flags: 'a', // Append mode
highWaterMark: 16 * 1024 // 16KB buffer
});
Common flags for file streams:
'w' - Write (default, creates or overwrites)
'a' - Append (creates or appends)
'wx' - Write exclusive (fails if file exists)
'ax' - Append exclusive (fails if file exists)
Writable streams are essential for efficient data writing, especially when dealing with large amounts of data or continuous output.
Duplex streams are streams that are both readable and writable. They represent a two-way communication channel where data flows in both directions independently.
Think of duplex streams like a telephone: you can speak (write) and listen (read) over the same connection at the same time.
Common examples of duplex streams:
TCP sockets (net.Socket)
TLS sockets (tls.TLSSocket)
Creating a custom duplex stream:
const { Duplex } = require('stream');
class EchoStream extends Duplex {
constructor(options) {
super(options);
this.buffer = [];
}
// Readable side
_read(size) {
// Push any buffered data; if the buffer is empty, wait for more writes
while (this.buffer.length > 0) {
if (!this.push(this.buffer.shift())) break;
}
}
// Writable side
_write(chunk, encoding, callback) {
// Echo the data back (convert to uppercase)
const echoed = chunk.toString().toUpperCase();
this.buffer.push(Buffer.from(echoed));
// Make the new data available on the readable side
this._read();
callback(); // Writing complete
}
// Called after end() - finish the readable side too
_final(callback) {
this.push(null); // No more data
callback();
}
}
// Usage
const echo = new EchoStream();
echo.on('data', (data) => {
console.log('Received:', data.toString());
});
echo.write('hello');
echo.write('world');
echo.end();
Using TCP socket (built-in duplex stream):
const net = require('net');
const server = net.createServer((socket) => {
// socket is a duplex stream
console.log('Client connected');
// Read data from client
socket.on('data', (data) => {
console.log('Received:', data.toString());
// Write data back to client
socket.write('Echo: ' + data);
});
socket.on('end', () => {
console.log('Client disconnected');
});
});
server.listen(3000, () => {
console.log('Server listening on port 3000');
});
Duplex stream with transform capability:
const { Duplex } = require('stream');
class DelayStream extends Duplex {
constructor(delayMs, options) {
super(options);
this.delay = delayMs;
this.buffer = [];
}
_read(size) {
if (this.buffer.length > 0) {
const chunk = this.buffer.shift();
setTimeout(() => {
this.push(chunk);
}, this.delay);
} else {
this.push(null);
}
}
_write(chunk, encoding, callback) {
this.buffer.push(chunk);
callback();
}
}
// Usage - adds delay to data flow
const delay = new DelayStream(1000); // 1 second delay
delay.on('data', (data) => {
console.log('Delayed data:', data.toString());
});
delay.write('Hello');
delay.write('World');
delay.end();
Key characteristics of duplex streams:
The readable and writable sides operate independently
Each side has its own internal buffer and events
A custom duplex stream implements both _read() and _write()
Duplex streams are fundamental for any bidirectional communication in Node.js, from network protocols to custom data processing pipelines.
Transform streams are a special type of duplex stream that modify or transform data as it passes through. They're perfect for data processing pipelines.
Think of transform streams like a food processor: raw data goes in one end, is modified in the middle, and the transformed result comes out the other end.
Common examples of transform streams:
zlib compression/decompression streams
crypto cipher streams
Parsers that convert one format to another (e.g. CSV to JSON)
Creating a custom transform stream:
const { Transform } = require('stream');
class UpperCaseTransform extends Transform {
_transform(chunk, encoding, callback) {
// Transform the data
const upperChunk = chunk.toString().toUpperCase();
// Push the transformed data
this.push(upperChunk);
// Signal that transformation is complete
callback();
}
}
// Usage
const upperCase = new UpperCaseTransform();
// Pipe data through the transform
process.stdin
.pipe(upperCase)
.pipe(process.stdout);
// Type in terminal, see uppercase output
More complex transform - CSV to JSON:
const { Transform } = require('stream');
class CSVToJSONTransform extends Transform {
constructor(options) {
super({ ...options, objectMode: true });
this.headers = null;
this.isFirstChunk = true;
}
_transform(chunk, encoding, callback) {
const lines = chunk.toString().split('\n');
for (const line of lines) {
if (line.trim() === '') continue;
const values = line.split(',');
if (this.isFirstChunk) {
// First line contains headers
this.headers = values;
this.isFirstChunk = false;
} else {
// Convert to object
const obj = {};
this.headers.forEach((header, index) => {
obj[header.trim()] = values[index] ? values[index].trim() : '';
});
// Push the JSON object
this.push(JSON.stringify(obj) + '\n');
}
}
callback();
}
}
// Usage
const csvToJson = new CSVToJSONTransform();
fs.createReadStream('data.csv')
.pipe(csvToJson)
.pipe(fs.createWriteStream('output.json'));
Using built-in transform streams:
const fs = require('fs');
const zlib = require('zlib');
// Compression pipeline
fs.createReadStream('large-file.txt')
.pipe(zlib.createGzip()) // Transform: compress
.pipe(fs.createWriteStream('large-file.txt.gz'));
// Decompression pipeline
fs.createReadStream('large-file.txt.gz')
.pipe(zlib.createGunzip()) // Transform: decompress
.pipe(fs.createWriteStream('decompressed.txt'));
Transform stream with error handling:
const { Transform } = require('stream');
class NumberMultiplier extends Transform {
constructor(factor) {
super({ objectMode: true });
this.factor = factor;
}
_transform(chunk, encoding, callback) {
try {
const number = parseInt(chunk.toString());
if (isNaN(number)) {
throw new Error('Invalid number');
}
const result = number * this.factor;
this.push(result.toString());
callback();
} catch (error) {
callback(error); // Pass error to stream
}
}
}
const multiplier = new NumberMultiplier(10);
multiplier.on('error', (error) => {
console.log('Transform error:', error.message);
});
multiplier.pipe(process.stdout);
multiplier.write('5'); // Output: 50
multiplier.write('abc'); // Will emit error
Key benefits of transform streams:
Memory efficiency - data is transformed chunk by chunk
Composability - transforms can be chained with pipe() into pipelines
Separation of concerns - each transform does one job
Transform streams are the building blocks of data processing pipelines in Node.js, enabling efficient and modular data transformation.
Proper error handling in streams is crucial because unhandled stream errors can crash your application. Streams use EventEmitter for error handling.
Basic error handling with event listeners:
const fs = require('fs');
const readStream = fs.createReadStream('nonexistent-file.txt');
readStream.on('error', (error) => {
console.log('Read stream error:', error.message);
});
const writeStream = fs.createWriteStream('/readonly/location.txt');
writeStream.on('error', (error) => {
console.log('Write stream error:', error.message);
});
Error handling in pipelines (Node.js 10+):
const fs = require('fs');
const { pipeline } = require('stream');
// pipeline automatically handles errors and cleanup
pipeline(
fs.createReadStream('input.txt'),
fs.createWriteStream('output.txt'),
(error) => {
if (error) {
console.log('Pipeline failed:', error);
} else {
console.log('Pipeline succeeded');
}
}
);
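On newer Node.js versions (15+), a promise-based pipeline is also available from 'stream/promises' and fits naturally with async/await; a minimal sketch:
const fs = require('fs');
const { pipeline } = require('stream/promises'); // Node.js 15+
async function copyWithPipeline() {
try {
await pipeline(
fs.createReadStream('input.txt'),
fs.createWriteStream('output.txt')
);
console.log('Pipeline succeeded');
} catch (error) {
console.log('Pipeline failed:', error);
}
}
copyWithPipeline();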
Error handling with pipe() (manual approach):
const fs = require('fs');
function handleStreamErrors(source, destination) {
source.on('error', (error) => {
console.log('Source error:', error);
destination.destroy(error); // Destroy destination too
});
destination.on('error', (error) => {
console.log('Destination error:', error);
source.destroy(error); // Destroy source too
});
}
const source = fs.createReadStream('input.txt');
const destination = fs.createWriteStream('output.txt');
handleStreamErrors(source, destination);
source.pipe(destination);
Custom error handling in transform streams:
const { Transform } = require('stream');
class SafeTransform extends Transform {
_transform(chunk, encoding, callback) {
try {
// Simulate potential error
if (chunk.includes('error')) {
throw new Error('Invalid data detected');
}
const processed = chunk.toString().toUpperCase();
this.push(processed);
callback();
} catch (error) {
// Pass error to stream
callback(error);
}
}
}
const transform = new SafeTransform();
transform.on('error', (error) => {
console.log('Transform error handled:', error.message);
});
transform.write('hello');
transform.write('this will cause error'); // Triggers error
transform.write('world'); // This won't be processed
Graceful error recovery:
const fs = require('fs');
const { Transform } = require('stream');
class ResilientTransform extends Transform {
constructor() {
super();
this.errorCount = 0;
this.maxErrors = 3;
}
_transform(chunk, encoding, callback) {
// Simulate occasional errors
if (Math.random() < 0.3 && this.errorCount < this.maxErrors) {
this.errorCount++;
console.log(`Error ${this.errorCount}, skipping chunk`);
callback(); // Continue without pushing data
} else {
this.push(chunk);
callback();
}
}
}
const resilient = new ResilientTransform();
resilient.on('error', (error) => {
console.log('Fatal error:', error);
});
// This stream will tolerate some errors before failing
Best practices for stream error handling:
Always attach an 'error' listener to every stream you create
Prefer pipeline() for complex chains - it handles errors and cleanup automatically
When using pipe() manually, destroy both ends of the pipe on error
Pass errors to callback(error) inside custom _transform/_write implementations
Proper error handling makes your stream-based applications robust and production-ready.
File descriptors are low-level identifiers that operating systems use to track open files. In Node.js, you can work with file descriptors for more control over file operations.
Think of file descriptors like library cards: the number identifies one specific open file (like a card identifies a borrowed book), and you must give it back (close it) when you are done.
Opening a file and getting its descriptor:
const fs = require('fs');
// Open file and get file descriptor
fs.open('example.txt', 'r', (error, fd) => {
if (error) {
console.log('Error opening file:', error);
return;
}
console.log('File descriptor:', fd);
// Always close the file descriptor when done
fs.close(fd, (error) => {
if (error) console.log('Error closing:', error);
});
});
Reading from a file using file descriptor:
const fs = require('fs');
fs.open('data.txt', 'r', (error, fd) => {
if (error) throw error;
const buffer = Buffer.alloc(1024); // Buffer to hold data
fs.read(fd, buffer, 0, buffer.length, 0, (error, bytesRead, buffer) => {
if (error) throw error;
console.log('Bytes read:', bytesRead);
console.log('Content:', buffer.toString('utf8', 0, bytesRead));
fs.close(fd, (error) => {
if (error) console.log('Error closing:', error);
});
});
});
Writing to a file using file descriptor:
const fs = require('fs');
fs.open('output.txt', 'w', (error, fd) => {
if (error) throw error;
const content = 'Hello, World!';
const buffer = Buffer.from(content, 'utf8');
fs.write(fd, buffer, 0, buffer.length, 0, (error, written, buffer) => {
if (error) throw error;
console.log('Bytes written:', written);
fs.close(fd, (error) => {
if (error) console.log('Error closing:', error);
});
});
});
Common file open flags:
'r' - Read (file must exist)
'r+' - Read and write (file must exist)
'w' - Write (creates or truncates)
'w+' - Read and write (creates or truncates)
'a' - Append (creates or appends)
'a+' - Read and append (creates or appends)
Getting file information with file descriptor:
const fs = require('fs');
fs.open('example.txt', 'r', (error, fd) => {
if (error) throw error;
fs.fstat(fd, (error, stats) => {
if (error) throw error;
console.log('File size:', stats.size);
console.log('Created:', stats.birthtime);
console.log('Modified:', stats.mtime);
fs.close(fd, (error) => {
if (error) console.log('Error closing:', error);
});
});
});
Standard file descriptors:
0 - stdin (standard input)
1 - stdout (standard output)
2 - stderr (standard error)
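For example, the standard descriptors can be written to directly with fs.writeSync():
const fs = require('fs');
// fd 1 is stdout, fd 2 is stderr
fs.writeSync(1, 'This goes to standard output\n');
fs.writeSync(2, 'This goes to standard error\n');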
When to use file descriptors:
Reading or writing at specific byte positions within one open file
Performing many operations on the same file without reopening it
Low-level operations such as fs.fstat() or fs.fsync()
File descriptors give you fine-grained control over file operations but require more careful resource management.
File watching allows your application to react when files or directories change. This is useful for live reloading, automatic builds, and monitoring.
Watching a single file for changes:
const fs = require('fs');
const watcher = fs.watch('config.json', (eventType, filename) => {
if (filename) {
console.log(`File ${filename} changed: ${eventType}`);
if (eventType === 'change') {
console.log('Content was modified');
// Reload configuration, etc.
}
}
});
// Stop watching after 30 seconds
setTimeout(() => {
watcher.close();
console.log('Stopped watching file');
}, 30000);
Watching a directory for changes:
const fs = require('fs');
const watcher = fs.watch('src/', (eventType, filename) => {
console.log(`Event: ${eventType} on file: ${filename}`);
if (eventType === 'rename') {
// File was created or deleted
console.log('File was created or deleted');
} else if (eventType === 'change') {
// File content changed
console.log('File content was modified');
}
});
Using fs.watchFile() for polling (less efficient):
const fs = require('fs');
// Check file every 2 seconds
fs.watchFile('data.txt', { interval: 2000 }, (curr, prev) => {
console.log(`File modified at: ${curr.mtime}`);
if (curr.mtime !== prev.mtime) {
console.log('File content changed');
// Handle the change
}
});
// Stop watching
// fs.unwatchFile('data.txt');
More robust watching with fs.watch (recursive):
const fs = require('fs');
const watcher = fs.watch('project/', {
recursive: true // Watch subdirectories too
}, (eventType, filename) => {
console.log(`Change detected in: ${filename}`);
// Debounce rapid changes
clearTimeout(watcher.debounceTimer);
watcher.debounceTimer = setTimeout(() => {
console.log('Processing changes...');
// Your change handling logic here
}, 100);
});
File watching with error handling:
const fs = require('fs');
function startWatching(path) {
try {
const watcher = fs.watch(path, { recursive: true }, (event, filename) => {
console.log(`Change: ${event} on ${filename}`);
});
watcher.on('error', (error) => {
console.log('Watcher error:', error);
// Attempt to restart watching
setTimeout(() => startWatching(path), 1000);
});
return watcher;
} catch (error) {
console.log('Failed to start watching:', error);
}
}
const watcher = startWatching('./src');
Using chokidar (popular third-party library):
// First: npm install chokidar
const chokidar = require('chokidar');
const watcher = chokidar.watch('src/', {
ignored: /node_modules/, // Ignore node_modules
persistent: true,
ignoreInitial: true
});
watcher
.on('add', path => console.log(`File ${path} added`))
.on('change', path => console.log(`File ${path} changed`))
.on('unlink', path => console.log(`File ${path} removed`))
.on('error', error => console.log(`Watcher error: ${error}`));
Common use cases for file watching:
Live reloading during development
Automatic builds when source files change
Reloading configuration files without restarting
Monitoring log files or incoming data directories
File watching is essential for creating responsive applications that react to filesystem changes in real-time.
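As a concrete example of the configuration-reload use case, a minimal hot-reload sketch (assuming a config.json next to the script):
const fs = require('fs');
let config = JSON.parse(fs.readFileSync('config.json', 'utf8'));
let reloadTimer = null;
fs.watch('config.json', (eventType) => {
if (eventType !== 'change') return;
// Debounce editors that write the file in several steps
clearTimeout(reloadTimer);
reloadTimer = setTimeout(() => {
try {
config = JSON.parse(fs.readFileSync('config.json', 'utf8'));
console.log('Configuration reloaded');
} catch (error) {
console.log('Failed to reload config:', error.message);
}
}, 100);
});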
File permissions control who can read, write, or execute files. Node.js provides methods to set and modify these permissions using Unix-style permission codes.
Understanding permission codes:
// Permission codes use octal notation
// 0o644 = owner: read/write, group: read, others: read
// 0o755 = owner: read/write/execute, group: read/execute, others: read/execute
// Breakdown:
// 4 = read, 2 = write, 1 = execute
// First digit: owner, second: group, third: others
// 6 = 4+2 (read+write), 5 = 4+1 (read+execute), 7 = 4+2+1 (all)
Setting permissions with chmod:
const fs = require('fs');
// Make file readable/writable by owner, readable by others
fs.chmod('config.json', 0o644, (error) => {
if (error) {
console.log('Error setting permissions:', error);
return;
}
console.log('Permissions updated');
});
Using string (octal) modes:
const fs = require('fs');
// Make script executable - a string mode is parsed as an octal number
fs.chmod('script.sh', '755', (error) => {
if (error) throw error;
console.log('Script is now executable');
});
// Note: unlike the shell chmod command, fs.chmod() does not accept
// symbolic notation such as 'g+w' - use a numeric mode instead
// Add write permission for the group (0o644 -> 0o664)
fs.chmod('data.txt', 0o664, (error) => {
if (error) throw error;
console.log('Group write permission added');
});
Setting permissions when creating files:
const fs = require('fs');
// Create file with specific permissions
fs.writeFile('secret.txt', 'confidential data', {
mode: 0o600 // Only owner can read/write
}, (error) => {
if (error) throw error;
console.log('Secure file created');
});
// Create directory with specific permissions
fs.mkdir('private-folder', {
mode: 0o700 // Only owner has access
}, (error) => {
if (error) throw error;
console.log('Private directory created');
});
Common permission patterns:
const fs = require('fs');
const permissions = {
PUBLIC_FILE: 0o644, // -rw-r--r--
PRIVATE_FILE: 0o600, // -rw-------
EXECUTABLE: 0o755, // -rwxr-xr-x
PRIVATE_DIR: 0o700, // drwx------
PUBLIC_DIR: 0o755 // drwxr-xr-x
};
fs.chmod('app.js', permissions.EXECUTABLE, (error) => {
if (error) throw error;
console.log('App is now executable');
});
Checking current permissions:
const fs = require('fs');
fs.stat('file.txt', (error, stats) => {
if (error) throw error;
// stats.mode contains permission information
console.log('File mode:', stats.mode.toString(8)); // Octal representation
console.log('Is directory:', stats.isDirectory());
console.log('Is file:', stats.isFile());
});
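Because stats.mode also encodes the file type, mask it with 0o777 to read just the permission bits; a minimal sketch:
const fs = require('fs');
fs.stat('file.txt', (error, stats) => {
if (error) throw error;
// Mask out the file-type bits and keep only the permission bits
const permissions = (stats.mode & 0o777).toString(8);
console.log('Permissions:', permissions); // e.g. "644"
});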
A symbolic link (symlink) is a special file that points to another file or directory. It's like a shortcut or reference to the actual file.
Creating symbolic links:
const fs = require('fs');
// Create symlink to a file
fs.symlink('./original.txt', './link-to-original.txt', 'file', (error) => {
if (error) {
console.log('Error creating symlink:', error);
return;
}
console.log('Symbolic link created');
});
// Create symlink to a directory
fs.symlink('../parent-folder', './parent-link', 'dir', (error) => {
if (error) throw error;
console.log('Directory symlink created');
});
Reading symbolic links:
const fs = require('fs');
// Get the target of a symlink
fs.readlink('./link-to-original.txt', (error, linkString) => {
if (error) throw error;
console.log('Symlink points to:', linkString);
});
Checking if a path is a symbolic link:
const fs = require('fs');
fs.lstat('./possible-link', (error, stats) => {
if (error) throw error;
if (stats.isSymbolicLink()) {
console.log('This is a symbolic link');
fs.readlink('./possible-link', (error, target) => {
console.log('It points to:', target);
});
} else {
console.log('This is a regular file/directory');
}
});
Real-world use cases:
const fs = require('fs');
// Use case 1: Version management
fs.symlink('/apps/myapp-v1.2.3', '/apps/myapp-current', 'dir', (error) => {
if (error) throw error;
console.log('Current version symlink created');
});
// Use case 2: Configuration switching
fs.symlink('./configs/production.json', './config/current.json', 'file', (error) => {
if (error) throw error;
console.log('Production config activated');
});
Difference between hard links and symbolic links:
const fs = require('fs');
// Hard link - direct reference to file data
fs.link('original.txt', 'hard-link.txt', (error) => {
if (error) console.log('Hard link creation failed');
});
// Symbolic link - points to file path
fs.symlink('original.txt', 'soft-link.txt', 'file', (error) => {
if (error) console.log('Soft link creation failed');
});
Key differences:
A hard link points directly to the file's data, so it keeps working even if the original name is deleted; it must stay on the same filesystem and cannot link to directories
A symbolic link points to a path, so it breaks if the target is moved or deleted; it can cross filesystems and can point to directories
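Related to readlink(), fs.realpath() follows a symlink (or a chain of symlinks) all the way to the real target path; a minimal sketch reusing the link from the example above:
const fs = require('fs');
fs.realpath('./link-to-original.txt', (error, resolvedPath) => {
if (error) throw error;
console.log('Resolved path:', resolvedPath);
});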
The path module provides utilities for working with file and directory paths. It helps you write cross-platform code that works on Windows, macOS, and Linux.
Importing the path module:
const path = require('path');
Basic path operations:
const path = require('path');
const filePath = '/users/john/documents/file.txt';
console.log('Directory:', path.dirname(filePath));
console.log('Filename:', path.basename(filePath));
console.log('Extension:', path.extname(filePath));
console.log('Filename without extension:', path.basename(filePath, '.txt'));
Output:
Directory: /users/john/documents
Filename: file.txt
Extension: .txt
Filename without extension: file
Path parsing:
const path = require('path');
const parsed = path.parse('/home/user/project/src/index.js');
console.log(parsed);
Output:
{
root: '/',
dir: '/home/user/project/src',
base: 'index.js',
ext: '.js',
name: 'index'
}
Platform-specific path handling:
const path = require('path');
console.log('Platform path separator:', path.sep);
console.log('Path delimiter:', path.delimiter);
console.log('Current working directory:', process.cwd());
// Cross-platform path joining
const fullPath = path.join('users', 'john', 'documents', 'file.txt');
console.log('Joined path:', fullPath);
Resolving relative paths:
const path = require('path');
// Resolve to absolute path
console.log('Absolute path:', path.resolve('src', 'app.js'));
console.log('With CWD:', path.resolve('./config.json'));
// Get relative path between two paths
const from = '/users/john/documents';
const to = '/users/john/downloads/file.zip';
console.log('Relative path:', path.relative(from, to));
Normalizing paths:
const path = require('path');
// Remove redundant segments
console.log('Normalized:', path.normalize('/users/../john/./documents//file.txt'));
// Output: /john/documents/file.txt
// Handle .. and . correctly
console.log('Resolved with ..:', path.resolve('/users/john', '../mary/file.txt'));
// Output: /users/mary/file.txt
Why use path module instead of string manipulation:
// ❌ Don't do this - platform specific
const badPath = 'folder' + '\\' + 'subfolder' + '\\' + 'file.txt';
// ✅ Do this - cross-platform
const goodPath = path.join('folder', 'subfolder', 'file.txt');
Path joining and resolution are essential for building correct file paths in a cross-platform way.
path.join() - Combine path segments:
const path = require('path');
// Basic joining
const fullPath = path.join('users', 'john', 'documents', 'file.txt');
console.log(fullPath);
// On Unix: users/john/documents/file.txt
// On Windows: users\john\documents\file.txt
// With absolute paths
const absolutePath = path.join('/users', 'john', 'documents', 'file.txt');
console.log(absolutePath);
// Output: /users/john/documents/file.txt
// Handles trailing slashes
const cleanPath = path.join('users/', '/john/', '/documents/');
console.log(cleanPath);
// Output: users/john/documents
path.resolve() - Create absolute paths:
const path = require('path');
// Resolve relative to current working directory
console.log(path.resolve('src', 'app.js'));
// If CWD is /home/user, output: /home/user/src/app.js
// With absolute path segment
console.log(path.resolve('/etc', 'config.json'));
// Output: /etc/config.json
// Using .. to go up directories
console.log(path.resolve('/users/john', '../mary', 'file.txt'));
// Output: /users/mary/file.txt
path.relative() - Find relative path between two paths:
const path = require('path');
const from = '/users/john/documents';
const to = '/users/john/downloads/file.zip';
console.log(path.relative(from, to));
// Output: ../downloads/file.zip
const from2 = '/project/src/components';
const to2 = '/project/assets/images/logo.png';
console.log(path.relative(from2, to2));
// Output: ../../assets/images/logo.png
Practical examples:
const path = require('path');
const fs = require('fs');
// Building project paths safely
function getProjectPaths(projectRoot) {
return {
src: path.join(projectRoot, 'src'),
dist: path.join(projectRoot, 'dist'),
config: path.join(projectRoot, 'config.json'),
nodeModules: path.join(projectRoot, 'node_modules')
};
}
const paths = getProjectPaths('/my-project');
console.log(paths.src); // /my-project/src
// Resolving module paths
function requireLocal(modulePath) {
const absolutePath = path.resolve(process.cwd(), modulePath);
return require(absolutePath);
}
Working with file extensions:
const path = require('path');
function changeExtension(filePath, newExtension) {
const dir = path.dirname(filePath);
const name = path.basename(filePath, path.extname(filePath));
return path.join(dir, name + newExtension);
}
console.log(changeExtension('/src/app.js', '.ts'));
// Output: /src/app.ts
console.log(changeExtension('config.json', '.backup'));
// Output: config.backup
Common path patterns:
const path = require('path');
// Get directory of current module
const currentDir = __dirname;
// Get path to parent directory
const parentDir = path.join(__dirname, '..');
// Create path relative to current file
const configPath = path.join(__dirname, 'config', 'app.json');
// Resolve from project root
const projectRoot = path.resolve(__dirname, '..', '..');
const assetPath = path.join(projectRoot, 'assets', 'images');
Reading JSON files is a common task in Node.js applications. The fs module makes it straightforward to read and parse JSON data.
Basic JSON file reading:
const fs = require('fs');
// Read and parse JSON file
fs.readFile('config.json', 'utf8', (error, data) => {
if (error) {
console.log('Error reading file:', error);
return;
}
try {
const config = JSON.parse(data);
console.log('Database host:', config.database.host);
console.log('App port:', config.app.port);
} catch (parseError) {
console.log('Error parsing JSON:', parseError);
}
});
Using require() for JSON files:
// Simple way for static configuration
// Note: This caches the result and is synchronous
const config = require('./config.json');
console.log('App name:', config.appName);
// For dynamic reloading, use fs.readFile
function loadConfig() {
delete require.cache[require.resolve('./config.json')];
return require('./config.json');
}
Reading JSON with fs.promises:
const fs = require('fs').promises;
async function readJSONFile(filePath) {
try {
const data = await fs.readFile(filePath, 'utf8');
return JSON.parse(data);
} catch (error) {
console.log('Error reading JSON file:', error);
throw error;
}
}
// Usage
async function main() {
try {
const users = await readJSONFile('users.json');
const settings = await readJSONFile('settings.json');
console.log('Loaded users:', users.length);
console.log('App theme:', settings.theme);
} catch (error) {
console.log('Failed to load configuration');
}
}
main();
Handling large JSON files with streams:
const fs = require('fs');
const { Transform } = require('stream');
class JSONParser extends Transform {
constructor(options) {
super({ ...options, objectMode: true });
this.buffer = '';
}
_transform(chunk, encoding, callback) {
this.buffer += chunk.toString();
// Try to parse complete JSON objects
try {
const data = JSON.parse(this.buffer);
this.push(data);
this.buffer = '';
} catch (error) {
// Incomplete JSON, wait for more data
}
callback();
}
}
// For newline-delimited JSON files
// split() comes from the 'split2' package: npm install split2
const split = require('split2');
fs.createReadStream('large-data.jsonl')
.pipe(split()) // Split by newlines
.pipe(new JSONParser())
.on('data', (obj) => {
console.log('Parsed object:', obj);
});
Validating JSON schema:
const fs = require('fs').promises;
async function loadValidatedConfig() {
const schema = {
type: 'object',
required: ['appName', 'port', 'database'],
properties: {
appName: { type: 'string' },
port: { type: 'number', minimum: 1, maximum: 65535 },
database: {
type: 'object',
required: ['host', 'name'],
properties: {
host: { type: 'string' },
name: { type: 'string' }
}
}
}
};
const data = await fs.readFile('config.json', 'utf8');
const config = JSON.parse(data);
// Simple validation
if (!config.appName || !config.port) {
throw new Error('Invalid configuration: missing required fields');
}
return config;
}
Writing JSON files with formatting:
const fs = require('fs');
const data = {
appName: 'My Application',
version: '1.0.0',
settings: {
theme: 'dark',
notifications: true
},
users: ['john', 'jane', 'bob']
};
// Write with 2-space indentation
fs.writeFile('app-config.json', JSON.stringify(data, null, 2), 'utf8', (error) => {
if (error) throw error;
console.log('Configuration saved');
});
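If other processes may read the file while you are writing it, a common pattern is to write to a temporary file and rename it into place; a minimal sketch:
const fs = require('fs').promises;
async function writeJSONAtomic(filePath, data) {
const tempPath = filePath + '.tmp';
// Write the new content to a temporary file first...
await fs.writeFile(tempPath, JSON.stringify(data, null, 2), 'utf8');
// ...then swap it into place; rename is atomic on the same filesystem
await fs.rename(tempPath, filePath);
}
writeJSONAtomic('app-config.json', { appName: 'My Application', version: '1.0.1' })
.then(() => console.log('Configuration saved atomically'))
.catch(error => console.log('Error:', error));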
Parsing large files requires streaming and chunk processing to avoid memory issues. Here are efficient techniques for handling large files.
Streaming large CSV files:
const fs = require('fs');
const readline = require('readline');
async function processLargeCSV(filePath) {
const fileStream = fs.createReadStream(filePath);
const rl = readline.createInterface({
input: fileStream,
crlfDelay: Infinity // Handle Windows line endings
});
let lineCount = 0;
let headers = [];
for await (const line of rl) {
if (lineCount === 0) {
// First line contains headers
headers = line.split(',');
} else {
// Process data line
const values = line.split(',');
const row = {};
headers.forEach((header, index) => {
row[header] = values[index] || '';
});
// Process the row
console.log('Processing row:', row);
// Simulate processing time
await new Promise(resolve => setTimeout(resolve, 10));
}
lineCount++;
if (lineCount % 1000 === 0) {
console.log(`Processed ${lineCount} lines`);
}
}
console.log(`Finished processing ${lineCount} lines`);
}
processLargeCSV('large-data.csv').catch(console.error);
Processing large JSON files with streams:
const fs = require('fs');
const { Transform } = require('stream');
class JSONStreamProcessor extends Transform {
constructor(options) {
super({ ...options, objectMode: true });
this.buffer = '';
this.objectCount = 0;
}
_transform(chunk, encoding, callback) {
this.buffer += chunk.toString();
// Process complete JSON objects (assuming one per line)
const lines = this.buffer.split('\n');
this.buffer = lines.pop(); // Keep incomplete line
for (const line of lines) {
if (line.trim()) {
try {
const obj = JSON.parse(line);
this.objectCount++;
this.processObject(obj);
} catch (error) {
console.log('Error parsing JSON line:', error);
}
}
}
callback();
}
processObject(obj) {
// Your processing logic here
this.push(JSON.stringify(obj) + '\n');
if (this.objectCount % 1000 === 0) {
console.log(`Processed ${this.objectCount} objects`);
}
}
}
// Usage
fs.createReadStream('large-data.jsonl')
.pipe(new JSONStreamProcessor())
.pipe(fs.createWriteStream('processed-data.jsonl'));
Memory-efficient file processing with buffers:
const fs = require('fs');
function processFileInChunks(filePath, chunkSize = 64 * 1024) {
return new Promise((resolve, reject) => {
const stats = fs.statSync(filePath);
const fileSize = stats.size;
let bytesProcessed = 0;
let position = 0;
const processNextChunk = () => {
if (position >= fileSize) {
resolve();
return;
}
const readSize = Math.min(chunkSize, fileSize - position);
const buffer = Buffer.alloc(readSize);
fs.open(filePath, 'r', (error, fd) => {
if (error) {
reject(error);
return;
}
fs.read(fd, buffer, 0, readSize, position, (error, bytesRead) => {
if (error) {
fs.close(fd, () => reject(error));
return;
}
// Process the chunk
processChunk(buffer, bytesRead, position);
bytesProcessed += bytesRead;
position += bytesRead;
// Progress reporting
const percent = ((bytesProcessed / fileSize) * 100).toFixed(1);
console.log(`Progress: ${percent}%`);
fs.close(fd, (error) => {
if (error) console.log('Close error:', error);
setImmediate(processNextChunk); // Prevent stack overflow
});
});
});
};
processNextChunk();
});
}
function processChunk(buffer, bytesRead, position) {
// Your chunk processing logic here
const chunkText = buffer.toString('utf8', 0, bytesRead);
// Process chunkText...
}
Using readline for log file analysis:
const fs = require('fs');
const readline = require('readline');
async function analyzeLogFile(filePath) {
const fileStream = fs.createReadStream(filePath);
const rl = readline.createInterface({
input: fileStream,
crlfDelay: Infinity
});
const analysis = {
totalLines: 0,
errorCount: 0,
warningCount: 0,
infoCount: 0,
ipAddresses: new Set()
};
for await (const line of rl) {
analysis.totalLines++;
if (line.includes('ERROR')) analysis.errorCount++;
if (line.includes('WARNING')) analysis.warningCount++;
if (line.includes('INFO')) analysis.infoCount++;
// Extract IP addresses (simple regex)
const ipMatch = line.match(/\b(?:\d{1,3}\.){3}\d{1,3}\b/);
if (ipMatch) {
analysis.ipAddresses.add(ipMatch[0]);
}
// Process every 10,000 lines to show progress
if (analysis.totalLines % 10000 === 0) {
console.log(`Processed ${analysis.totalLines} lines...`);
}
}
return {
...analysis,
uniqueIPs: analysis.ipAddresses.size
};
}
// Usage
analyzeLogFile('server.log')
.then(result => {
console.log('Analysis result:', result);
})
.catch(console.error);
Key strategies for large files:
Stream the data instead of loading the whole file into memory
Process line by line with readline for text formats
Read fixed-size chunks with file descriptors when you need byte-level control
Report progress and use setImmediate() to avoid blocking the event loop
zlib is a Node.js module that provides compression and decompression functionality using the Gzip, Deflate, and Brotli algorithms.
Basic file compression:
const fs = require('fs');
const zlib = require('zlib');
// Compress a file
fs.createReadStream('large-file.txt')
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream('large-file.txt.gz'))
.on('finish', () => {
console.log('File compressed successfully');
});
File decompression:
const fs = require('fs');
const zlib = require('zlib');
// Decompress a file
fs.createReadStream('large-file.txt.gz')
.pipe(zlib.createGunzip())
.pipe(fs.createWriteStream('decompressed-file.txt'))
.on('finish', () => {
console.log('File decompressed successfully');
});
Node.js provides multiple compression methods through the zlib module for different use cases.
Different compression methods:
const fs = require('fs');
const zlib = require('zlib');
// Gzip compression (most common)
fs.createReadStream('data.txt')
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream('data.txt.gz'));
// Deflate compression
fs.createReadStream('data.txt')
.pipe(zlib.createDeflate())
.pipe(fs.createWriteStream('data.txt.deflate'));
// Brotli compression (modern, efficient)
fs.createReadStream('data.txt')
.pipe(zlib.createBrotliCompress())
.pipe(fs.createWriteStream('data.txt.br'));
Synchronous compression:
const fs = require('fs');
const zlib = require('zlib');
const data = fs.readFileSync('large-file.txt');
const compressed = zlib.gzipSync(data);
fs.writeFileSync('large-file.txt.gz', compressed);
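Note that gzipSync blocks the event loop; for in-memory buffers inside a server you can promisify the asynchronous forms instead; a minimal sketch:
const zlib = require('zlib');
const { promisify } = require('util');
const gzip = promisify(zlib.gzip);
const gunzip = promisify(zlib.gunzip);
async function roundTrip(text) {
const compressed = await gzip(Buffer.from(text, 'utf8'));
console.log('Compressed size:', compressed.length, 'bytes');
const original = await gunzip(compressed);
console.log('Decompressed:', original.toString('utf8'));
}
roundTrip('Hello, zlib!').catch(console.error);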
fs-extra is a popular third-party module that enhances the built-in fs module with additional methods and better promise support.
Installing fs-extra:
npm install fs-extra
Key differences and advantages:
const fs = require('fs');
const fse = require('fs-extra');
1. Promise-based by default:
// fs-extra uses promises (no callbacks needed)
async function withFsExtra() {
try {
const data = await fse.readFile('file.txt', 'utf8');
await fse.writeFile('output.txt', data);
await fse.copy('src/', 'dist/');
console.log('All operations completed');
} catch (error) {
console.log('Error:', error);
}
}
// vs native fs with promises
async function withNativeFs() {
const fs = require('fs').promises;
try {
const data = await fs.readFile('file.txt', 'utf8');
await fs.writeFile('output.txt', data);
// No built-in copy method
console.log('Operations completed');
} catch (error) {
console.log('Error:', error);
}
}
2. Additional utility methods:
const fse = require('fs-extra');
// (the calls below are assumed to run inside an async function)
// Copy directories recursively
await fse.copy('/source/dir', '/dest/dir');
// Ensure directory exists (create if not)
await fse.ensureDir('/path/to/directory');
// Remove directory and all contents
await fse.remove('/path/to/directory');
// Read JSON file and parse automatically
const config = await fse.readJson('config.json');
// Write JSON with automatic stringification
await fse.writeJson('data.json', { name: 'John', age: 30 });
// Check if path exists (returns boolean)
const exists = await fse.pathExists('/some/path');
// Empty directory (remove all contents)
await fse.emptyDir('/path/to/directory');
3. Better error messages and handling:
const fse = require('fs-extra');
// More descriptive errors
try {
await fse.move('/nonexistent/src', '/dest');
} catch (error) {
console.log(error.message);
// "ENOENT: no such file or directory, stat '/nonexistent/src'"
}
4. Additional useful methods:
const fse = require('fs-extra');
// File operations
await fse.move('/old/path', '/new/path'); // Move/rename
await fse.ensureFile('/path/to/file.txt'); // Ensure file exists
await fse.createFile('/path/to/newfile.txt'); // Create empty file
// Directory operations
const files = await fse.readdir('/path'); // Returns array
const stats = await fse.stat('/path/to/file');
// Path operations
await fse.ensureSymlink('/target', '/link'); // Create symlink
await fse.ensureLink('/target', '/link'); // Create hard link
When to use fs-extra vs native fs:
Use fs-extra when you need recursive copy/remove, ensureDir, or the JSON helpers
Use native fs when you want zero extra dependencies and fs.promises already covers your needs
The encoding option specifies how to interpret file data - as text with a specific character encoding or as raw binary data.
Common encoding options:
const fs = require('fs');
// Read as UTF-8 text (most common)
fs.readFile('file.txt', 'utf8', (error, data) => {
console.log('UTF-8 text:', data);
});
// Read as ASCII
fs.readFile('file.txt', 'ascii', (error, data) => {
console.log('ASCII text:', data);
});
// Read as base64 (for binary data)
fs.readFile('image.jpg', 'base64', (error, data) => {
console.log('Base64 encoded:', data.substring(0, 100) + '...');
});
// Read as hex
fs.readFile('file.bin', 'hex', (error, data) => {
console.log('Hex representation:', data.substring(0, 50));
});
// No encoding - get Buffer object
fs.readFile('file.txt', (error, data) => {
console.log('Buffer:', data);
console.log('As string:', data.toString('utf8'));
});
Encoding in write operations:
const fs = require('fs');
// Write with UTF-8 encoding
fs.writeFile('text.txt', 'Hello World', 'utf8', (error) => {
if (error) throw error;
});
// Write binary data from base64
const base64Data = 'SGVsbG8gV29ybGQ='; // "Hello World" in base64
fs.writeFile('from-base64.txt', base64Data, 'base64', (error) => {
if (error) throw error;
});
// Write without encoding (Buffer)
const buffer = Buffer.from('Hello World', 'utf8');
fs.writeFile('from-buffer.txt', buffer, (error) => {
if (error) throw error;
});
Detecting file encoding:
const fs = require('fs');
const jschardet = require('jschardet'); // npm install jschardet
function detectEncoding(filePath) {
const buffer = fs.readFileSync(filePath);
const detection = jschardet.detect(buffer);
console.log('Detected encoding:', detection.encoding);
console.log('Confidence:', detection.confidence);
return detection.encoding;
}
// Usage
const encoding = detectEncoding('unknown-file.txt');
const content = fs.readFileSync('unknown-file.txt', encoding);
Handling different encodings automatically:
const fs = require('fs');
const iconv = require('iconv-lite'); // npm install iconv-lite
function readFileWithEncoding(filePath, possibleEncodings = ['utf8', 'win1252', 'iso-8859-1']) {
const buffer = fs.readFileSync(filePath);
for (const encoding of possibleEncodings) {
try {
const content = iconv.decode(buffer, encoding);
// Check if decoding produced valid text
if (!content.includes('�')) { // No replacement characters
return { content, encoding };
}
} catch (error) {
continue;
}
}
throw new Error('Could not decode file with any of the provided encodings');
}
// Usage
try {
const result = readFileWithEncoding('european-file.txt');
console.log('File encoding:', result.encoding);
console.log('Content:', result.content);
} catch (error) {
console.log('Error:', error.message);
}
Common encoding scenarios:
const fs = require('fs');
// Scenario 1: Reading configuration files (usually UTF-8)
const config = JSON.parse(fs.readFileSync('config.json', 'utf8'));
// Scenario 2: Reading legacy files (might be different encoding)
const legacyContent = fs.readFileSync('old-file.txt', 'latin1');
// Scenario 3: Working with binary files (images, etc.)
const imageBuffer = fs.readFileSync('photo.jpg');
// No encoding specified - returns Buffer
// Scenario 4: Creating data URLs for web
const imageBase64 = fs.readFileSync('icon.png', 'base64');
const dataUrl = `data:image/png;base64,${imageBase64}`;
// Scenario 5: Reading files for cryptographic operations
const fileHash = fs.readFileSync('document.pdf', 'binary');
Supported encodings in Node.js:
utf8 - Standard Unicode (default)
utf16le - Little-endian UTF-16
latin1 - ISO-8859-1
ascii - 7-bit ASCII
base64 - Base64 encoding
hex - Hexadecimal
ucs2 - Alias of utf16le
binary - Binary data (deprecated)
Temporary files are useful for intermediate processing, caching, or storing data that doesn't need to persist. Node.js provides several ways to create them.
Using the os.tmpdir() method:
const fs = require('fs');
const os = require('os');
const path = require('path');
function createTempFile(content, extension = '.tmp') {
const tempDir = os.tmpdir();
const tempFile = path.join(tempDir, `temp-${Date.now()}${extension}`);
fs.writeFileSync(tempFile, content);
return tempFile;
}
// Usage
const tempPath = createTempFile('Temporary data', '.txt');
console.log('Temp file created:', tempPath);
// Clean up when done
fs.unlinkSync(tempPath);
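Node.js also ships a built-in helper for unique temporary directories, fs.mkdtemp(); a minimal sketch:
const fs = require('fs').promises;
const os = require('os');
const path = require('path');
async function withTempDir() {
// mkdtemp appends six random characters to the prefix
const dir = await fs.mkdtemp(path.join(os.tmpdir(), 'myapp-'));
console.log('Temp directory:', dir);
await fs.writeFile(path.join(dir, 'scratch.txt'), 'Temporary data');
// Remove the directory and its contents when done (fs.rm needs Node.js 14.14+)
await fs.rm(dir, { recursive: true, force: true });
}
withTempDir().catch(console.error);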
Using the tmp package (recommended):
const fs = require('fs');
const tmp = require('tmp'); // npm install tmp
// Create temporary file
tmp.file((error, path, fd, cleanupCallback) => {
if (error) throw error;
console.log('Temp file path:', path);
console.log('File descriptor:', fd);
// Write to temp file
fs.writeSync(fd, 'Temporary content');
// Automatically clean up on process exit
cleanupCallback();
});
// Create temporary directory
tmp.dir((error, path, cleanupCallback) => {
if (error) throw error;
console.log('Temp directory:', path);
// Create files in temp directory
const filePath = path + '/data.txt';
fs.writeFileSync(filePath, 'Some data');
// Clean up when done
cleanupCallback();
});
With promises using tmp-promise:
const fs = require('fs');
const tmp = require('tmp-promise'); // npm install tmp-promise
async function useTempFiles() {
try {
// Create temp file with automatic cleanup
const { path, fd, cleanup } = await tmp.file();
await fs.promises.writeFile(path, 'Temporary data');
const data = await fs.promises.readFile(path, 'utf8');
console.log('Read from temp file:', data);
// Manual cleanup (or it happens automatically)
await cleanup();
} catch (error) {
console.log('Error:', error);
}
}
useTempFiles();
Secure temporary files with specific permissions:
const fs = require('fs');
const tmp = require('tmp');
// Create secure temp file (readable only by current user)
tmp.file({
mode: 0o600, // Only owner can read/write
prefix: 'secret-',
postfix: '.tmp'
}, (error, path, fd) => {
if (error) throw error;
fs.writeFileSync(path, 'Sensitive data');
console.log('Secure temp file created:', path);
});
Temporary files with automatic cleanup:
const fs = require('fs');
const os = require('os');
const path = require('path');
class TempFileManager {
constructor() {
this.files = new Set();
this.setupCleanup();
}
createTempFile(content, extension = '.tmp') {
const tempPath = path.join(os.tmpdir(), `app-${process.pid}-${Date.now()}${extension}`);
fs.writeFileSync(tempPath, content);
this.files.add(tempPath);
return tempPath;
}
cleanup() {
for (const filePath of this.files) {
try {
if (fs.existsSync(filePath)) {
fs.unlinkSync(filePath);
console.log('Cleaned up:', filePath);
}
} catch (error) {
console.log('Error cleaning up', filePath, error.message);
}
}
this.files.clear();
}
setupCleanup() {
// Clean up on process exit
process.on('exit', () => this.cleanup());
process.on('SIGINT', () => {
this.cleanup();
process.exit();
});
}
}
// Usage
const tempManager = new TempFileManager();
const temp1 = tempManager.createTempFile('Data 1');
const temp2 = tempManager.createTempFile('Data 2', '.json');
console.log('Temp files created:', temp1, temp2);
Use cases for temporary files:
const fs = require('fs');
const tmp = require('tmp-promise'); // npm install tmp-promise (needed for await tmp.file())
// Use case 1: File processing pipeline
async function processLargeFile(inputPath) {
const { path: tempPath, cleanup } = await tmp.file();
try {
// Step 1: Filter data to temp file
await filterData(inputPath, tempPath);
// Step 2: Transform data
await transformData(tempPath);
// Step 3: Generate output
const result = await generateOutput(tempPath);
return result;
} finally {
// Always clean up
await cleanup();
}
}
// Use case 2: Caching expensive operations
const cache = new Map();
async function getCachedData(key, dataGenerator) {
if (cache.has(key)) {
return fs.promises.readFile(cache.get(key), 'utf8');
}
const { path: tempPath, cleanup } = await tmp.file();
const data = await dataGenerator();
await fs.promises.writeFile(tempPath, data);
cache.set(key, tempPath);
// Set up cleanup after 1 hour
setTimeout(() => {
if (cache.has(key)) {
fs.unlinkSync(cache.get(key));
cache.delete(key);
}
cleanup();
}, 60 * 60 * 1000);
return data;
}
File uploads in Node.js typically involve handling multipart/form-data from HTTP requests and saving files securely.
Using multer (most popular solution):
const express = require('express');
const multer = require('multer');
const path = require('path');
const app = express();
// Configure storage
const storage = multer.diskStorage({
destination: (req, file, cb) => {
cb(null, 'uploads/');
},
filename: (req, file, cb) => {
// Create safe filename
const uniqueName = Date.now() + '-' + Math.round(Math.random() * 1E9);
const extension = path.extname(file.originalname);
cb(null, file.fieldname + '-' + uniqueName + extension);
}
});
// File filter for security
const fileFilter = (req, file, cb) => {
const allowedTypes = ['image/jpeg', 'image/png', 'application/pdf'];
if (allowedTypes.includes(file.mimetype)) {
cb(null, true);
} else {
cb(new Error('Invalid file type'), false);
}
};
const upload = multer({
storage: storage,
limits: {
fileSize: 5 * 1024 * 1024 // 5MB limit
},
fileFilter: fileFilter
});
// Handle single file upload
app.post('/upload', upload.single('document'), (req, res) => {
if (!req.file) {
return res.status(400).json({ error: 'No file uploaded' });
}
res.json({
message: 'File uploaded successfully',
filename: req.file.filename,
size: req.file.size,
path: req.file.path
});
});
// Handle multiple files
app.post('/upload-multiple', upload.array('photos', 5), (req, res) => {
const files = req.files.map(file => ({
filename: file.filename,
size: file.size
}));
res.json({ message: 'Files uploaded', files });
});
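// A minimal error-handler sketch for upload failures (assumes multer 1.4+,
// which exposes multer.MulterError); register it before app.listen()
app.use((error, req, res, next) => {
if (error instanceof multer.MulterError) {
// e.g. LIMIT_FILE_SIZE when the 5MB limit above is exceeded
return res.status(400).json({ error: error.code });
}
if (error) {
// Errors thrown by the fileFilter, etc.
return res.status(400).json({ error: error.message });
}
next();
});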
app.listen(3000);
Manual file upload handling with streams:
const http = require('http');
const fs = require('fs');
const path = require('path');
const server = http.createServer((req, res) => {
if (req.method === 'POST' && req.url === '/upload') {
const uploadDir = './uploads';
if (!fs.existsSync(uploadDir)) {
fs.mkdirSync(uploadDir);
}
const filename = `file-${Date.now()}.bin`;
const filePath = path.join(uploadDir, filename);
const writeStream = fs.createWriteStream(filePath);
let uploadedBytes = 0;
req.on('data', (chunk) => {
uploadedBytes += chunk.length;
writeStream.write(chunk);
// Basic progress tracking
console.log(`Uploaded: ${uploadedBytes} bytes`);
});
req.on('end', () => {
writeStream.end();
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
message: 'Upload complete',
filename: filename,
size: uploadedBytes
}));
});
req.on('error', (error) => {
writeStream.destroy();
fs.unlinkSync(filePath); // Clean up partial file
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Upload failed' }));
});
} else {
res.writeHead(404);
res.end('Not found');
}
});
server.listen(3000);
Secure file upload validation:
const multer = require('multer');
const path = require('path');
function createSecureUpload(config = {}) {
const { allowedTypes = [], maxSize = 5 * 1024 * 1024 } = config;
const storage = multer.diskStorage({
destination: 'uploads/',
filename: (req, file, cb) => {
// Sanitize filename
const originalName = file.originalname;
const sanitized = originalName.replace(/[^a-zA-Z0-9.\-]/g, '_');
const uniqueName = Date.now() + '-' + sanitized;
cb(null, uniqueName);
}
});
const fileFilter = (req, file, cb) => {
// Check file type
if (allowedTypes.length > 0 && !allowedTypes.includes(file.mimetype)) {
return cb(new Error(`File type ${file.mimetype} not allowed`), false);
}
// Check file extension
const ext = path.extname(file.originalname).toLowerCase();
const allowedExts = ['.jpg', '.jpeg', '.png', '.pdf', '.txt'];
if (!allowedExts.includes(ext)) {
return cb(new Error(`File extension ${ext} not allowed`), false);
}
cb(null, true);
};
return multer({
storage,
limits: { fileSize: maxSize },
fileFilter
});
}
// Usage
const secureUpload = createSecureUpload({
allowedTypes: ['image/jpeg', 'image/png', 'application/pdf'],
maxSize: 10 * 1024 * 1024 // 10MB
});
app.post('/secure-upload', secureUpload.single('file'), (req, res) => {
// File is validated and safe to use
res.json({ message: 'Secure upload successful', file: req.file });
});
Handling large file uploads with progress:
const express = require('express');
const multer = require('multer');
const app = express();
// In-memory storage for progress tracking (use Redis in production)
const uploadProgress = new Map();
const upload = multer({ dest: 'uploads/' });
app.post('/upload-with-progress', upload.single('largeFile'), (req, res) => {
const uploadId = req.headers['x-upload-id'];
if (uploadId && uploadProgress.has(uploadId)) {
uploadProgress.delete(uploadId); // Clean up
}
res.json({ message: 'Upload complete', file: req.file });
});
// Progress tracking endpoint
app.get('/upload-progress/:id', (req, res) => {
const progress = uploadProgress.get(req.params.id) || 0;
res.json({ progress });
});
// Client would send chunks with progress tracking
/*
// Client-side example:
const uploadFile = async (file, uploadId) => {
const chunkSize = 1024 * 1024; // 1MB chunks
const totalChunks = Math.ceil(file.size / chunkSize);
for (let chunkIndex = 0; chunkIndex < totalChunks; chunkIndex++) {
const chunk = file.slice(chunkIndex * chunkSize, (chunkIndex + 1) * chunkSize);
const formData = new FormData();
formData.append('chunk', chunk);
formData.append('chunkIndex', chunkIndex);
formData.append('totalChunks', totalChunks);
formData.append('uploadId', uploadId);
formData.append('filename', file.name);
await fetch('/upload-chunk', {
method: 'POST',
body: formData
});
// Update progress
const progress = ((chunkIndex + 1) / totalChunks) * 100;
await fetch(`/upload-progress/${uploadId}`, {
method: 'POST',
body: JSON.stringify({ progress })
});
}
};
*/
Video files require special stream handling due to their large size and the need for efficient processing and streaming.
Basic video streaming server:
const http = require('http');
const fs = require('fs');
const path = require('path');
const server = http.createServer((req, res) => {
const videoPath = path.join(__dirname, 'videos', 'sample.mp4');
const videoSize = fs.statSync(videoPath).size;
// Parse Range header for partial content
const range = req.headers.range;
if (range) {
// Handle partial content request (video seeking)
const parts = range.replace(/bytes=/, '').split('-');
const start = parseInt(parts[0], 10);
const end = parts[1] ? parseInt(parts[1], 10) : videoSize - 1;
const chunksize = (end - start) + 1;
// Create read stream for specific range
const file = fs.createReadStream(videoPath, { start, end });
res.writeHead(206, {
'Content-Range': `bytes ${start}-${end}/${videoSize}`,
'Accept-Ranges': 'bytes',
'Content-Length': chunksize,
'Content-Type': 'video/mp4'
});
file.pipe(res);
} else {
// Full video request
res.writeHead(200, {
'Content-Length': videoSize,
'Content-Type': 'video/mp4'
});
fs.createReadStream(videoPath).pipe(res);
}
});
server.listen(3000, () => {
console.log('Video server running on port 3000');
});
Video processing with ffmpeg and streams:
const { spawn } = require('child_process');
const fs = require('fs');
const path = require('path');
function convertVideo(inputPath, outputPath, format = 'mp4') {
return new Promise((resolve, reject) => {
const ffmpeg = spawn('ffmpeg', [
'-i', inputPath,
'-c:v', 'libx264',
'-c:a', 'aac',
'-f', format,
'pipe:1' // Output to stdout
]);
const outputStream = fs.createWriteStream(outputPath);
let errorData = '';
ffmpeg.stdout.pipe(outputStream);
ffmpeg.stderr.on('data', (data) => {
errorData += data.toString();
});
ffmpeg.on('close', (code) => {
if (code === 0) {
resolve(outputPath);
} else {
reject(new Error(`FFmpeg failed: ${errorData}`));
}
});
ffmpeg.on('error', reject);
});
}
// Usage
convertVideo('input.avi', 'output.mp4')
.then(outputPath => console.log('Video converted:', outputPath))
.catch(error => console.log('Conversion failed:', error));
Video thumbnail generation:
const { spawn } = require('child_process');
const fs = require('fs');
function generateThumbnail(videoPath, thumbnailPath, time = '00:00:01') {
return new Promise((resolve, reject) => {
const ffmpeg = spawn('ffmpeg', [
'-i', videoPath,
'-ss', time, // Seek to specific time
'-vframes', '1', // Capture 1 frame
'-vf', 'scale=320:-1', // Resize to 320px width
'-f', 'image2',
'pipe:1'
]);
const thumbnailStream = fs.createWriteStream(thumbnailPath);
let errorData = '';
ffmpeg.stdout.pipe(thumbnailStream);
ffmpeg.stderr.on('data', (data) => errorData += data.toString());
ffmpeg.on('close', (code) => {
if (code === 0) {
resolve(thumbnailPath);
} else {
reject(new Error(`Thumbnail generation failed: ${errorData}`));
}
});
ffmpeg.on('error', reject);
});
}
Video streaming with adaptive bitrate:
const express = require('express');
const fs = require('fs');
const path = require('path');
const app = express();
// Serve different quality versions
app.get('/video/:quality/:filename', (req, res) => {
const { quality, filename } = req.params;
const videoDir = path.join(__dirname, 'videos', quality);
const videoPath = path.join(videoDir, filename);
if (!fs.existsSync(videoPath)) {
return res.status(404).send('Video not found');
}
const videoSize = fs.statSync(videoPath).size;
const range = req.headers.range;
if (range) {
const parts = range.replace(/bytes=/, '').split('-');
const start = parseInt(parts[0], 10);
const end = parts[1] ? parseInt(parts[1], 10) : videoSize - 1;
const chunksize = (end - start) + 1;
const file = fs.createReadStream(videoPath, { start, end });
res.writeHead(206, {
'Content-Range': `bytes ${start}-${end}/${videoSize}`,
'Accept-Ranges': 'bytes',
'Content-Length': chunksize,
'Content-Type': 'video/mp4'
});
file.pipe(res);
} else {
res.writeHead(200, {
'Content-Length': videoSize,
'Content-Type': 'video/mp4'
});
fs.createReadStream(videoPath).pipe(res);
}
});
// Serve master playlist for HLS
app.get('/master.m3u8', (req, res) => {
const masterPlaylist = `#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
/video/360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=854x480
/video/480p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
/video/720p/index.m3u8`;
res.set('Content-Type', 'application/vnd.apple.mpegurl');
res.send(masterPlaylist);
});
app.listen(3000);
Video upload with progress and validation:
const express = require('express');
const multer = require('multer');
const path = require('path');
const app = express();
const videoStorage = multer.diskStorage({
destination: 'uploads/videos/',
filename: (req, file, cb) => {
const uniqueName = `video-${Date.now()}${path.extname(file.originalname)}`;
cb(null, uniqueName);
}
});
const videoUpload = multer({
storage: videoStorage,
limits: {
fileSize: 100 * 1024 * 1024 // 100MB limit
},
fileFilter: (req, file, cb) => {
// Check if file is a video
if (file.mimetype.startsWith('video/')) {
cb(null, true);
} else {
cb(new Error('Only video files are allowed'), false);
}
}
});
app.post('/upload-video', videoUpload.single('video'), (req, res) => {
// Additional video validation
const videoInfo = {
filename: req.file.filename,
originalName: req.file.originalname,
size: req.file.size,
mimetype: req.file.mimetype,
path: req.file.path
};
// You could now process the video further
// - Generate thumbnails
// - Create different quality versions
// - Extract metadata
res.json({
message: 'Video uploaded successfully',
video: videoInfo
});
});
Buffers and streams are both used for handling binary data, but they serve different purposes and have different characteristics.
Buffers - Fixed-size memory chunks:
// Buffers store fixed amounts of data in memory
const fs = require('fs');
const buffer = Buffer.from('Hello World', 'utf8');
console.log('Buffer size:', buffer.length);
console.log('Buffer content:', buffer.toString());
// Buffers are good for small, complete data
fs.readFile('small-file.txt', (error, buffer) => {
// Entire file content is loaded into memory
console.log('File content as buffer:', buffer);
});
Streams - Continuous data flow:
// Streams process data in chunks as it becomes available
const fs = require('fs');
const stream = fs.createReadStream('large-file.txt');
stream.on('data', (chunk) => {
// Only small chunks in memory at a time
console.log('Received chunk:', chunk.length, 'bytes');
});
highWaterMark controls the internal buffer size of streams, affecting memory usage and backpressure behavior.
Configuring highWaterMark:
const fs = require('fs');
// Custom highWaterMark for better memory control
const readStream = fs.createReadStream('large-file.txt', {
highWaterMark: 64 * 1024 // 64KB chunks instead of default 16KB
});
const writeStream = fs.createWriteStream('output.txt', {
highWaterMark: 128 * 1024 // 128KB buffer
});
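One way to see what highWaterMark actually changes is to count how many 'data' events a read produces at different settings; a small sketch (huge-file.txt is just a placeholder name):
const fs = require('fs');
function countChunks(filePath, highWaterMark) {
  return new Promise((resolve, reject) => {
    let chunks = 0;
    fs.createReadStream(filePath, { highWaterMark })
      .on('data', () => chunks++) // each 'data' event carries at most highWaterMark bytes
      .on('end', () => resolve(chunks))
      .on('error', reject);
  });
}
// Larger highWaterMark => fewer, bigger chunks, but more memory held per chunk
countChunks('huge-file.txt', 16 * 1024).then(n => console.log('16KB chunks:', n));
countChunks('huge-file.txt', 1024 * 1024).then(n => console.log('1MB chunks:', n));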
Backpressure occurs when data is being produced faster than it can be consumed, causing memory issues and potential crashes. It's like a traffic jam in your data pipeline.
Understanding backpressure:
const fs = require('fs');
// Problem: Fast producer, slow consumer
const fastReadStream = fs.createReadStream('huge-file.txt'); // Fast
const slowWriteStream = fs.createWriteStream('output.txt'); // Slow (maybe disk is busy)
// Without backpressure handling, this could cause memory issues
fastReadStream.pipe(slowWriteStream);
How backpressure works automatically with pipe():
const fs = require('fs');
// pipe() automatically handles backpressure
fs.createReadStream('input.txt')
.pipe(fs.createWriteStream('output.txt'));
// What happens internally:
// 1. Read stream reads data until write stream's buffer is full
// 2. Write stream emits 'drain' event when buffer is empty
// 3. Read stream resumes reading
// 4. Process repeats until complete
Manual backpressure handling with drain event:
const fs = require('fs');
function copyWithBackpressure(source, destination) {
return new Promise((resolve, reject) => {
const readStream = fs.createReadStream(source);
const writeStream = fs.createWriteStream(destination);
readStream.on('data', (chunk) => {
// Try to write the chunk
const canContinue = writeStream.write(chunk);
if (!canContinue) {
// Pause reading until drain event
readStream.pause();
console.log('Backpressure: Pausing read stream');
writeStream.once('drain', () => {
// Resume reading when buffer is empty
readStream.resume();
console.log('Backpressure: Resuming read stream');
});
}
});
readStream.on('end', () => {
writeStream.end();
});
writeStream.on('finish', resolve);
readStream.on('error', reject);
writeStream.on('error', reject);
});
}
// Usage
copyWithBackpressure('large-file.txt', 'copy.txt')
.then(() => console.log('Copy completed with backpressure handling'))
.catch(console.error);
Backpressure in transform streams:
const { Transform } = require('stream');
const fs = require('fs');
class SlowTransform extends Transform {
constructor(options) {
super(options);
}
_transform(chunk, encoding, callback) {
console.log('Transforming chunk of', chunk.length, 'bytes');
// Simulate slow processing
setTimeout(() => {
const transformed = chunk.toString().toUpperCase();
this.push(transformed);
callback();
}, 100); // 100ms delay per chunk
}
}
// This will naturally create backpressure
const slowTransform = new SlowTransform();
fs.createReadStream('input.txt', { highWaterMark: 1024 })
.pipe(slowTransform)
.pipe(fs.createWriteStream('output.txt'));
Monitoring backpressure:
const fs = require('fs');
function monitorStreams(source, destination) {
const readStream = fs.createReadStream(source);
const writeStream = fs.createWriteStream(destination);
let readPaused = false;
let drainEvents = 0;
readStream.on('data', (chunk) => {
const canContinue = writeStream.write(chunk);
if (!canContinue && !readPaused) {
readStream.pause();
readPaused = true;
console.log('🚦 Read stream PAUSED due to backpressure');
}
});
writeStream.on('drain', () => {
drainEvents++;
if (readPaused) {
readStream.resume();
readPaused = false;
console.log('✅ Read stream RESUMED after drain event #' + drainEvents);
}
});
readStream.on('end', () => {
writeStream.end();
console.log('📊 Statistics:');
console.log(' - Drain events:', drainEvents);
console.log(' - Final read paused:', readPaused);
});
writeStream.on('finish', () => {
console.log('✨ Write completed successfully');
});
}
monitorStreams('large-file.txt', 'output.txt');
Backpressure solutions and best practices:
const fs = require('fs');
// Solution 1: Use pipeline() for automatic backpressure
const { pipeline } = require('stream');
pipeline(
fs.createReadStream('input.txt'),
fs.createWriteStream('output.txt'),
(error) => {
if (error) {
console.log('Pipeline failed:', error);
} else {
console.log('Pipeline completed');
}
}
);
// Solution 2: Adjust highWaterMark for your use case
const optimizedReadStream = fs.createReadStream('file.txt', {
highWaterMark: 64 * 1024 // 64KB chunks
});
const optimizedWriteStream = fs.createWriteStream('output.txt', {
highWaterMark: 128 * 1024 // 128KB buffer
});
// Solution 3: Use async processing for CPU-intensive operations
const { Transform } = require('stream');
class AsyncTransform extends Transform {
async _transform(chunk, encoding, callback) {
try {
// Simulate async operation (database call, API request, etc.)
const result = await processChunkAsync(chunk); // processChunkAsync is a placeholder for your own async work
this.push(result);
callback();
} catch (error) {
callback(error);
}
}
}
Signs of backpressure problems:
write() returns false on most calls and you rely on frequent 'drain' events
writableLength sits at or above writableHighWaterMark for long stretches
Process memory (RSS) keeps climbing while data is being copied or transformed
The consumer (disk, network, transform) falls further and further behind the producer
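A small sketch that samples the write buffer while a copy runs can make these symptoms visible; writableLength and writableHighWaterMark are standard Writable stream properties, and the file names are placeholders:
const fs = require('fs');
const source = fs.createReadStream('huge-file.txt');
const destination = fs.createWriteStream('copy.txt');
source.pipe(destination);
// Sample the write buffer every 500ms; values at or above the highWaterMark indicate backpressure
const timer = setInterval(() => {
  console.log(
    'buffered:', destination.writableLength,
    '/', destination.writableHighWaterMark, 'bytes',
    '| rss:', Math.round(process.memoryUsage().rss / 1024 / 1024), 'MB'
  );
}, 500);
destination.on('finish', () => clearInterval(timer));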
File size detection is essential for validation, progress tracking, and memory management. Node.js provides several methods to get file size information.
Using fs.stat() for file size:
const fs = require('fs');
// Async file size detection
fs.stat('document.pdf', (error, stats) => {
if (error) {
console.log('Error getting file stats:', error);
return;
}
console.log('File size:', stats.size, 'bytes');
console.log('Human readable:', formatFileSize(stats.size));
console.log('Is file:', stats.isFile());
console.log('Is directory:', stats.isDirectory());
console.log('Created:', stats.birthtime);
console.log('Modified:', stats.mtime);
});
// Helper function for human-readable sizes
function formatFileSize(bytes) {
const units = ['B', 'KB', 'MB', 'GB', 'TB'];
let size = bytes;
let unitIndex = 0;
while (size >= 1024 && unitIndex < units.length - 1) {
size /= 1024;
unitIndex++;
}
return `${size.toFixed(2)} ${units[unitIndex]}`;
}
Using fs.promises.stat():
const fs = require('fs').promises;
async function getFileInfo(filePath) {
try {
const stats = await fs.stat(filePath);
return {
size: stats.size,
sizeHuman: formatFileSize(stats.size),
isFile: stats.isFile(),
isDirectory: stats.isDirectory(),
created: stats.birthtime,
modified: stats.mtime,
accessed: stats.atime
};
} catch (error) {
console.log('Error getting file info:', error);
throw error;
}
}
// Usage
getFileInfo('large-video.mp4')
.then(info => {
console.log('File information:');
console.log(' Size:', info.sizeHuman);
console.log(' Type:', info.isFile ? 'File' : 'Directory');
console.log(' Modified:', info.modified.toLocaleString());
})
.catch(console.error);
Getting size of multiple files:
const fs = require('fs').promises;
const path = require('path');
async function getDirectorySize(directoryPath) {
try {
const files = await fs.readdir(directoryPath);
let totalSize = 0;
const fileDetails = [];
for (const file of files) {
const filePath = path.join(directoryPath, file);
const stats = await fs.stat(filePath);
if (stats.isFile()) {
totalSize += stats.size;
fileDetails.push({
name: file,
size: stats.size,
sizeHuman: formatFileSize(stats.size)
});
}
}
return {
totalSize,
totalSizeHuman: formatFileSize(totalSize),
fileCount: fileDetails.length,
files: fileDetails.sort((a, b) => b.size - a.size) // Sort by size descending
};
} catch (error) {
console.log('Error reading directory:', error);
throw error;
}
}
// Usage
getDirectorySize('./documents')
.then(result => {
console.log(`Directory contains ${result.fileCount} files`);
console.log(`Total size: ${result.totalSizeHuman}`);
console.log('Largest files:');
result.files.slice(0, 5).forEach(file => {
console.log(` ${file.name}: ${file.sizeHuman}`);
});
});
File size validation for uploads:
const fs = require('fs');
class FileSizeValidator {
constructor(maxSizeBytes) {
this.maxSizeBytes = maxSizeBytes;
}
async validate(filePath) {
try {
const stats = await fs.promises.stat(filePath);
if (stats.size > this.maxSizeBytes) {
const maxSizeHuman = formatFileSize(this.maxSizeBytes);
const actualSizeHuman = formatFileSize(stats.size);
throw new Error(
`File too large: ${actualSizeHuman} exceeds maximum ${maxSizeHuman}`
);
}
return {
isValid: true,
size: stats.size,
sizeHuman: formatFileSize(stats.size)
};
} catch (error) {
return {
isValid: false,
error: error.message
};
}
}
}
// Usage
const validator = new FileSizeValidator(10 * 1024 * 1024); // 10MB limit
validator.validate('uploaded-file.jpg')
.then(result => {
if (result.isValid) {
console.log('✅ File size OK:', result.sizeHuman);
} else {
console.log('❌ File validation failed:', result.error);
}
});
Monitoring file size changes:
const fs = require('fs');
class FileSizeMonitor {
constructor(filePath, intervalMs = 5000) {
this.filePath = filePath;
this.intervalMs = intervalMs;
this.previousSize = 0;
this.intervalId = null;
}
start() {
console.log(`Starting file size monitor for: ${this.filePath}`);
this.intervalId = setInterval(async () => {
try {
const stats = await fs.promises.stat(this.filePath);
const currentSize = stats.size;
if (currentSize !== this.previousSize) {
const change = currentSize - this.previousSize;
const changeSymbol = change > 0 ? '📈' : '📉';
console.log(
`${changeSymbol} File size: ${formatFileSize(currentSize)} ` +
`(${change > 0 ? '+' : ''}${formatFileSize(change)})`
);
this.previousSize = currentSize;
}
} catch (error) {
console.log('Error monitoring file:', error.message);
}
}, this.intervalMs);
}
stop() {
if (this.intervalId) {
clearInterval(this.intervalId);
console.log('File size monitor stopped');
}
}
}
// Usage - great for monitoring log files or downloads
const monitor = new FileSizeMonitor('application.log', 2000);
monitor.start();
// Stop after 30 seconds
setTimeout(() => monitor.stop(), 30000);
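Polling with setInterval works everywhere, but the built-in fs.watch() can report changes without a timer. A minimal sketch (fs.watch behavior varies by platform and it may fire more than once per change, so treat this as a rough alternative rather than a drop-in replacement):
const fs = require('fs');
// Event-driven alternative to polling: re-stat the file whenever the OS reports a change
fs.watch('application.log', async (eventType) => {
  if (eventType === 'change') {
    const { size } = await fs.promises.stat('application.log');
    console.log('Current size:', size, 'bytes');
  }
});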
Getting file size without loading content:
const fs = require('fs');
// Efficient way to get size without reading file content
function getFileSizeSync(filePath) {
try {
const stats = fs.statSync(filePath);
return stats.size;
} catch (error) {
return -1; // File doesn't exist or error
}
}
// Note: streaming the file counts every byte, so this reads the entire file.
// Prefer fs.stat() for files on disk; a byte counter like this is only useful
// when the data arrives as a stream and stat() is not an option.
function estimateFileSize(filePath) {
return new Promise((resolve, reject) => {
const stream = fs.createReadStream(filePath);
let size = 0;
stream.on('data', (chunk) => {
size += chunk.length;
});
stream.on('end', () => resolve(size));
stream.on('error', reject);
});
}
Reading specific byte ranges is essential for partial file access, seeking in large files, and efficient data extraction.
Using fs.read() with file descriptors:
const fs = require('fs');
function readBytes(filePath, start, length) {
return new Promise((resolve, reject) => {
fs.open(filePath, 'r', (error, fd) => {
if (error) {
reject(error);
return;
}
const buffer = Buffer.alloc(length);
fs.read(fd, buffer, 0, length, start, (error, bytesRead, buffer) => {
fs.close(fd, (closeError) => {
if (error) {
reject(error);
} else if (closeError) {
reject(closeError);
} else {
resolve({
bytesRead,
data: buffer.slice(0, bytesRead)
});
}
});
});
});
});
}
// Usage: Read first 100 bytes
readBytes('large-file.bin', 0, 100)
.then(({ bytesRead, data }) => {
console.log(`Read ${bytesRead} bytes`);
console.log('Hex dump:', data.toString('hex'));
})
.catch(console.error);
Using createReadStream with start/end options:
const fs = require('fs');
function readFileRange(filePath, start, end) {
return new Promise((resolve, reject) => {
const chunks = [];
const stream = fs.createReadStream(filePath, {
start: start,
end: end - 1, // end is inclusive, so subtract 1
highWaterMark: 64 * 1024 // 64KB chunks
});
stream.on('data', (chunk) => {
chunks.push(chunk);
});
stream.on('end', () => {
const data = Buffer.concat(chunks);
resolve({
start,
end,
length: data.length,
data: data
});
});
stream.on('error', reject);
});
}
// Usage: Read bytes 1000-2000
readFileRange('large-file.txt', 1000, 2000)
.then(result => {
console.log(`Read bytes ${result.start}-${result.end}: ${result.length} bytes`);
console.log('Content:', result.data.toString('utf8'));
});
Reading file headers and metadata:
const fs = require('fs');
class FileHeaderReader {
static async readJPEGHeader(filePath) {
// JPEG files start with FF D8 FF
const header = await readBytes(filePath, 0, 20);
const signature = header.data.toString('hex', 0, 3); // first 3 bytes: ff d8 ff
if (signature !== 'ffd8ff') {
throw new Error('Not a valid JPEG file');
}
return {
type: 'JPEG',
signature: signature,
data: header.data
};
}
static async readPNGHeader(filePath) {
// PNG files start with 89 50 4E 47
const header = await readBytes(filePath, 0, 8);
const signature = header.data.toString('hex', 0, 8);
if (signature !== '89504e470d0a1a0a') {
throw new Error('Not a valid PNG file');
}
return {
type: 'PNG',
signature: signature,
data: header.data
};
}
static async readFileType(filePath) {
const header = await readBytes(filePath, 0, 8);
const hexSignature = header.data.toString('hex');
const signatures = {
'ffd8ff': 'JPEG',
'89504e470d0a1a0a': 'PNG',
'47494638': 'GIF',
'25504446': 'PDF',
'504b0304': 'ZIP',
'7f454c46': 'ELF' // Executable
};
for (const [signature, type] of Object.entries(signatures)) {
if (hexSignature.startsWith(signature)) {
return type;
}
}
return 'Unknown';
}
}
// Usage
FileHeaderReader.readFileType('image.jpg')
.then(type => console.log('File type:', type))
.catch(console.error);
Efficient large file processing with chunks:
const fs = require('fs');
class FileChunkReader {
constructor(filePath, chunkSize = 64 * 1024) {
this.filePath = filePath;
this.chunkSize = chunkSize;
this.position = 0;
}
async next() {
try {
const result = await readBytes(this.filePath, this.position, this.chunkSize);
if (result.bytesRead === 0) {
return { done: true }; // End of file
}
const chunk = {
data: result.data,
start: this.position,
end: this.position + result.bytesRead - 1,
size: result.bytesRead
};
this.position += result.bytesRead;
return { done: false, value: chunk };
} catch (error) {
throw error;
}
}
async *[Symbol.asyncIterator]() {
let chunk;
do {
chunk = await this.next();
if (!chunk.done) {
yield chunk.value;
}
} while (!chunk.done);
}
}
// Usage with async iteration
async function processLargeFile(filePath) {
const reader = new FileChunkReader(filePath, 1024 * 1024); // 1MB chunks
let totalBytes = 0;
for await (const chunk of reader) {
console.log(`Processing chunk: ${chunk.start}-${chunk.end} (${chunk.size} bytes)`);
// Process the chunk here
totalBytes += chunk.size;
// Simulate processing time
await new Promise(resolve => setTimeout(resolve, 10));
}
console.log(`Total bytes processed: ${totalBytes}`);
}
processLargeFile('huge-file.bin').catch(console.error);
Reading specific file structures:
const fs = require('fs');
async function readCSVRow(filePath, rowNumber, delimiter = ',') {
// This is a simplified example - real CSV parsing is more complex
const stats = await fs.promises.stat(filePath);
const approximateRowSize = 100; // Estimate
const startByte = Math.max(0, (rowNumber - 1) * approximateRowSize - 1000);
const readSize = 2000; // Read extra to ensure we get the full row
const chunk = await readFileRange(filePath, startByte, startByte + readSize);
const text = chunk.data.toString('utf8');
const lines = text.split('\n');
// Find our target row (adjusting for the partial read)
const targetLine = lines[lines.length - 2]; // Simplified
if (targetLine) {
return targetLine.split(delimiter);
}
return null;
}
// Reading specific binary structures
async function readBinaryStructure(filePath, structure) {
const reads = structure.map(async ({ offset, size, name, type }) => {
const data = await readBytes(filePath, offset, size);
let value;
switch (type) {
case 'uint32':
value = data.data.readUInt32LE(0);
break;
case 'string':
value = data.data.toString('utf8').replace(/\0/g, '');
break;
case 'hex':
value = data.data.toString('hex');
break;
default:
value = data.data;
}
return { name, value, offset, size };
});
return Promise.all(reads);
}
// Usage: Read specific fields from a binary file
const fileStructure = [
{ offset: 0, size: 4, name: 'magicNumber', type: 'hex' },
{ offset: 4, size: 4, name: 'fileSize', type: 'uint32' },
{ offset: 8, size: 16, name: 'creator', type: 'string' }
];
readBinaryStructure('data.bin', fileStructure)
.then(fields => {
console.log('File structure:');
fields.forEach(field => {
console.log(` ${field.name}: ${field.value} (offset: ${field.offset})`);
});
});
fs.access() checks a user's permissions for a file or directory without opening it. It's the modern replacement for the deprecated fs.exists().
Basic file existence checking:
const fs = require('fs');
// Check if file exists
fs.access('config.json', fs.constants.F_OK, (error) => {
if (error) {
console.log('File does not exist');
} else {
console.log('File exists');
}
});
Checking multiple permissions:
const fs = require('fs');
// Check read and write permissions
fs.access('data.txt', fs.constants.R_OK | fs.constants.W_OK, (error) => {
if (error) {
console.log('Cannot read/write file:', error.message);
} else {
console.log('File is readable and writable');
}
});
Available permission constants:
const fs = require('fs');
// Common permission constants
const permissions = {
F_OK: fs.constants.F_OK, // File exists
R_OK: fs.constants.R_OK, // Readable
W_OK: fs.constants.W_OK, // Writable
X_OK: fs.constants.X_OK // Executable
};
// Check all permissions
fs.access('script.sh',
fs.constants.F_OK | fs.constants.R_OK | fs.constants.W_OK | fs.constants.X_OK,
(error) => {
if (error) {
console.log('Missing permissions:', error.message);
} else {
console.log('File exists and is readable, writable, and executable');
}
}
);
Using fs.access() with promises:
const fs = require('fs').promises;
async function checkFileAccess(filePath) {
try {
// Check if file exists and is readable
await fs.access(filePath, fs.constants.F_OK | fs.constants.R_OK);
console.log('File is accessible and readable');
return true;
} catch (error) {
if (error.code === 'ENOENT') {
console.log('File does not exist');
} else if (error.code === 'EACCES') {
console.log('No permission to access file');
} else {
console.log('Other error:', error.message);
}
return false;
}
}
// Usage
checkFileAccess('important-file.txt')
.then(accessible => {
if (accessible) {
console.log('Proceeding with file operations...');
}
});
Safe file operations pattern:
const fs = require('fs').promises;
async function safeFileOperation(filePath, operation) {
try {
// First check if we can access the file
await fs.access(filePath, fs.constants.F_OK | fs.constants.R_OK);
// Then perform the actual operation
return await operation(filePath);
} catch (error) {
if (error.code === 'ENOENT') {
throw new Error(`File not found: ${filePath}`);
} else if (error.code === 'EACCES') {
throw new Error(`Permission denied: ${filePath}`);
} else {
throw error;
}
}
}
// Usage
async function readConfig() {
return safeFileOperation('config.json', async (filePath) => {
const data = await fs.readFile(filePath, 'utf8');
return JSON.parse(data);
});
}
readConfig()
.then(config => console.log('Config loaded:', config))
.catch(error => console.log('Error:', error.message));
Checking directory permissions:
const fs = require('fs').promises;
async function checkDirectoryAccess(dirPath) {
try {
await fs.access(dirPath, fs.constants.F_OK | fs.constants.R_OK | fs.constants.W_OK);
console.log('Directory is accessible, readable, and writable');
return true;
} catch (error) {
switch (error.code) {
case 'ENOENT':
console.log('Directory does not exist');
break;
case 'EACCES':
console.log('No permission to access directory');
break;
case 'ENOTDIR':
console.log('Path is not a directory');
break;
default:
console.log('Unknown error:', error.message);
}
return false;
}
}
// Usage
checkDirectoryAccess('/tmp')
.then(accessible => {
if (accessible) {
console.log('Can use this directory for temporary files');
}
});
Why fs.access() is better than fs.exists():
// ❌ OLD WAY (check-then-use: the file can change between the check and the read)
if (fs.existsSync('file.txt')) {
// File might be deleted between check and use
fs.readFile('file.txt', (error, data) => {
// Could get ENOENT error here!
});
}
// ✅ BETTER: fs.access() reports clear error codes (though a small check-then-use gap remains)
fs.access('file.txt', fs.constants.F_OK, (error) => {
if (error) {
console.log('File does not exist or cannot be accessed');
} else {
// Safe to use the file
fs.readFile('file.txt', (error, data) => {
// Handle read operation
});
}
});
// ✅ EVEN BETTER: Just try the operation and handle errors
fs.readFile('file.txt', 'utf8', (error, data) => {
if (error) {
if (error.code === 'ENOENT') {
console.log('File does not exist');
} else if (error.code === 'EACCES') {
console.log('No permission to read file');
} else {
console.log('Other error:', error);
}
} else {
console.log('File content:', data);
}
});
Common error codes from fs.access():
ENOENT - File or directory does not exist
EACCES - Permission denied
ENOTDIR - Path is not a directory
EISDIR - Path is a directory (when expecting file)
fs.readdir() reads the contents of a directory, returning an array of filenames. It's essential for file management, directory scanning, and building file explorers.
Basic directory reading:
const fs = require('fs');
const path = require('path');
// Read directory contents (async)
fs.readdir('./documents', (error, files) => {
if (error) {
console.log('Error reading directory:', error);
return;
}
console.log('Files in directory:');
files.forEach(file => {
console.log(' -', file);
});
});
Using fs.promises.readdir():
const fs = require('fs').promises;
async function listDirectory(dirPath) {
try {
const files = await fs.readdir(dirPath);
console.log(`Contents of ${dirPath}:`);
files.forEach(file => {
console.log(` 📄 ${file}`);
});
return files;
} catch (error) {
console.log('Error reading directory:', error.message);
throw error;
}
}
// Usage
listDirectory('./src')
.then(files => {
console.log(`Found ${files.length} files`);
})
.catch(console.error);
Reading with file types using withFileTypes:
const fs = require('fs').promises;
const path = require('path');
async function listDirectoryDetailed(dirPath) {
try {
// Use withFileTypes to get Dirent objects
const entries = await fs.readdir(dirPath, { withFileTypes: true });
console.log(`Detailed contents of ${dirPath}:`);
const result = {
files: [],
directories: [],
symlinks: [],
others: []
};
for (const entry of entries) {
const info = {
name: entry.name,
path: path.join(dirPath, entry.name),
isFile: entry.isFile(),
isDirectory: entry.isDirectory(),
isSymbolicLink: entry.isSymbolicLink()
};
if (entry.isFile()) {
result.files.push(info);
console.log(` 📄 ${entry.name} (file)`);
} else if (entry.isDirectory()) {
result.directories.push(info);
console.log(` 📁 ${entry.name}/ (directory)`);
} else if (entry.isSymbolicLink()) {
result.symlinks.push(info);
console.log(` 🔗 ${entry.name} (symlink)`);
} else {
result.others.push(info);
console.log(` ❓ ${entry.name} (other)`);
}
}
return result;
} catch (error) {
console.log('Error reading directory:', error);
throw error;
}
}
// Usage
listDirectoryDetailed('./project')
.then(({ files, directories }) => {
console.log(`Found ${files.length} files and ${directories.length} directories`);
});
Recursive directory scanning:
const fs = require('fs').promises;
const path = require('path');
async function scanDirectory(dirPath, maxDepth = 3, currentDepth = 0) {
if (currentDepth > maxDepth) {
return [];
}
try {
const entries = await fs.readdir(dirPath, { withFileTypes: true });
const results = [];
for (const entry of entries) {
const fullPath = path.join(dirPath, entry.name);
const item = {
name: entry.name,
path: fullPath,
depth: currentDepth,
isFile: entry.isFile(),
isDirectory: entry.isDirectory(),
size: 0
};
if (entry.isFile()) {
try {
const stats = await fs.stat(fullPath);
item.size = stats.size;
item.modified = stats.mtime;
} catch (error) {
item.error = error.message;
}
}
results.push(item);
// Recursively scan subdirectories
if (entry.isDirectory() && currentDepth < maxDepth) {
const subItems = await scanDirectory(fullPath, maxDepth, currentDepth + 1);
results.push(...subItems);
}
}
return results;
} catch (error) {
console.log(`Error scanning ${dirPath}:`, error.message);
return [];
}
}
// Usage
scanDirectory('.', 2) // Scan up to 2 levels deep
.then(files => {
console.log(`Scanned ${files.length} items total`);
const totalSize = files
.filter(f => f.isFile)
.reduce((sum, file) => sum + file.size, 0);
console.log(`Total file size: ${formatFileSize(totalSize)}`);
// Show largest files
const largestFiles = files
.filter(f => f.isFile)
.sort((a, b) => b.size - a.size)
.slice(0, 5);
console.log('Largest files:');
largestFiles.forEach(file => {
console.log(` ${file.path}: ${formatFileSize(file.size)}`);
});
});
Filtering and searching files:
const fs = require('fs').promises;
const path = require('path');
class FileSearcher {
static async findByExtension(dirPath, extensions) {
const files = await fs.readdir(dirPath, { withFileTypes: true });
const matched = [];
for (const entry of files) {
if (entry.isFile()) {
const ext = path.extname(entry.name).toLowerCase();
if (extensions.includes(ext)) {
matched.push({
name: entry.name,
path: path.join(dirPath, entry.name),
extension: ext
});
}
}
}
return matched;
}
static async findByName(dirPath, pattern, caseSensitive = false) {
const files = await fs.readdir(dirPath, { withFileTypes: true });
const matched = [];
const regex = new RegExp(pattern, caseSensitive ? '' : 'i');
for (const entry of files) {
if (regex.test(entry.name)) {
matched.push({
name: entry.name,
path: path.join(dirPath, entry.name),
isFile: entry.isFile(),
isDirectory: entry.isDirectory()
});
}
}
return matched;
}
static async findLargeFiles(dirPath, minSizeBytes) {
const files = await fs.readdir(dirPath, { withFileTypes: true });
const largeFiles = [];
for (const entry of files) {
if (entry.isFile()) {
const fullPath = path.join(dirPath, entry.name);
try {
const stats = await fs.stat(fullPath);
if (stats.size >= minSizeBytes) {
largeFiles.push({
name: entry.name,
path: fullPath,
size: stats.size,
sizeHuman: formatFileSize(stats.size)
});
}
} catch (error) {
// Skip files we can't stat
}
}
}
return largeFiles.sort((a, b) => b.size - a.size);
}
}
// Usage examples
async function searchFiles() {
// Find all JavaScript and TypeScript files
const codeFiles = await FileSearcher.findByExtension('./src', ['.js', '.ts', '.jsx', '.tsx']);
console.log('Code files:', codeFiles.map(f => f.name));
// Find files containing "test" in name
const testFiles = await FileSearcher.findByName('./src', 'test');
console.log('Test files:', testFiles.map(f => f.name));
// Find files larger than 1MB
const largeFiles = await FileSearcher.findLargeFiles('.', 1024 * 1024);
console.log('Large files:');
largeFiles.forEach(file => {
console.log(` ${file.name}: ${file.sizeHuman}`);
});
}
searchFiles().catch(console.error);
Directory monitoring with readdir:
const fs = require('fs').promises;
class DirectoryWatcher {
constructor(dirPath, intervalMs = 5000) {
this.dirPath = dirPath;
this.intervalMs = intervalMs;
this.previousFiles = new Set();
this.intervalId = null;
}
async getCurrentFiles() {
try {
const files = await fs.readdir(this.dirPath);
return new Set(files);
} catch (error) {
console.log('Error reading directory:', error.message);
return new Set();
}
}
start() {
console.log(`Starting directory watcher for: ${this.dirPath}`);
this.intervalId = setInterval(async () => {
const currentFiles = await this.getCurrentFiles();
// Find new files
const added = [...currentFiles].filter(file => !this.previousFiles.has(file));
// Find deleted files
const removed = [...this.previousFiles].filter(file => !currentFiles.has(file));
if (added.length > 0 || removed.length > 0) {
console.log('📁 Directory changes detected:');
added.forEach(file => console.log(` ➕ Added: ${file}`));
removed.forEach(file => console.log(` ➖ Removed: ${file}`));
// Update previous files
this.previousFiles = currentFiles;
}
}, this.intervalMs);
}
stop() {
if (this.intervalId) {
clearInterval(this.intervalId);
console.log('Directory watcher stopped');
}
}
}
// Usage
const watcher = new DirectoryWatcher('./uploads', 2000);
watcher.start();
// Stop after 1 minute
setTimeout(() => watcher.stop(), 60000);
The http module is a core Node.js module that provides functionality to create HTTP servers and make HTTP requests. It's the foundation for web servers and API clients in Node.js.
Importing the http module:
const http = require('http');
Key capabilities of the http module:
Creating HTTP servers with http.createServer()
Making outgoing HTTP requests with http.request() and http.get()
Streaming request and response bodies as Node.js streams
Reading and setting headers, status codes, and methods
Managing persistent (keep-alive) connections through http.Agent
Basic HTTP server example:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Hello World!');
});
server.listen(3000, () => {
console.log('Server running at http://localhost:3000/');
});
Making HTTP requests:
const http = require('http');
const options = {
hostname: 'jsonplaceholder.typicode.com',
port: 80,
path: '/posts/1',
method: 'GET'
};
const req = http.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
console.log('Response:', JSON.parse(data));
});
});
req.on('error', (error) => {
console.log('Error:', error);
});
req.end();
Key classes in the http module:
http.Server - HTTP server class
http.ClientRequest - HTTP client requests
http.ServerResponse - Server response object
http.IncomingMessage - Incoming request object
http.Agent - Manages connection persistence
The http module is low-level but powerful, giving you complete control over HTTP communication. Most web frameworks (like Express) are built on top of it.
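To make that last point concrete, here is a rough sketch of a tiny Express-style wrapper built directly on http.createServer(). The miniApp name and its get/listen methods are made up for illustration; real frameworks add routing patterns, middleware, and body parsing on top of the same primitives:
const http = require('http');
// A toy router: map "METHOD pathname" strings to handler functions, fall back to 404
function miniApp() {
  const routes = new Map();
  const server = http.createServer((req, res) => {
    const pathname = req.url.split('?')[0];
    const handler = routes.get(`${req.method} ${pathname}`);
    if (handler) return handler(req, res);
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Not Found');
  });
  return {
    get: (routePath, handler) => routes.set(`GET ${routePath}`, handler),
    listen: (...args) => server.listen(...args)
  };
}
// Usage
const app = miniApp();
app.get('/hello', (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello from a hand-rolled router' }));
});
app.listen(3000);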
Creating an HTTP server in Node.js is straightforward using the http module's createServer method. Here are different ways to create and configure servers.
Basic HTTP server:
const http = require('http');
// Create server with request handler
const server = http.createServer((request, response) => {
response.writeHead(200, { 'Content-Type': 'text/plain' });
response.end('Hello from Node.js server!');
});
// Start listening on port 3000
server.listen(3000, () => {
console.log('Server is running on http://localhost:3000');
});
Server with different response types:
const http = require('http');
const server = http.createServer((req, res) => {
// Set content type based on URL
if (req.url === '/json') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'Hello JSON', timestamp: new Date() }));
} else if (req.url === '/html') {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end('<h1>Hello HTML</h1><p>Welcome to our server</p>');
} else {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Hello World!');
}
});
server.listen(3000);
Using separate request event handlers:
const http = require('http');
// Create server without immediate request handler
const server = http.createServer();
// Handle requests using event listeners
server.on('request', (req, res) => {
console.log(`${new Date().toISOString()} - ${req.method} ${req.url}`);
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Request received!');
});
// Handle server errors
server.on('error', (error) => {
console.error('Server error:', error);
});
// Handle client errors
server.on('clientError', (error, socket) => {
console.error('Client error:', error);
socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});
server.listen(3000, () => {
console.log('Server ready on port 3000');
});
Advanced server with routing:
const http = require('http');
const url = require('url');
const server = http.createServer((req, res) => {
const parsedUrl = url.parse(req.url, true);
const pathname = parsedUrl.pathname;
// Simple routing
switch (pathname) {
case '/':
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end(`
<html>
<body>
<h1>Home Page</h1>
<a href="/about">About</a>
<a href="/api/data">API</a>
</body>
</html>
`);
break;
case '/about':
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end('<h1>About Us</h1><p>This is our about page</p>');
break;
case '/api/data':
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
users: ['John', 'Jane', 'Bob'],
timestamp: new Date().toISOString()
}));
break;
default:
res.writeHead(404, { 'Content-Type': 'text/html' });
res.end('<h1>404 - Page Not Found</h1>');
}
});
server.listen(3000);
Server with request logging middleware:
const http = require('http');
function logger(req, res, next) {
const timestamp = new Date().toISOString();
console.log(`[${timestamp}] ${req.method} ${req.url} - ${req.headers['user-agent']}`);
next();
}
function requestHandler(req, res) {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Hello World!');
}
const server = http.createServer((req, res) => {
logger(req, res, () => {
requestHandler(req, res);
});
});
server.listen(3000);
Key server methods:
server.listen() - Start listening for connections
server.close() - Stop the server from accepting new connections
server.setTimeout() - Set socket timeout in milliseconds
server.maxHeadersCount - Limit maximum header count
Graceful shutdown:
const server = http.createServer((req, res) => {
res.end('Server is running');
}).listen(3000);
// Handle graceful shutdown
process.on('SIGTERM', () => {
console.log('Received SIGTERM, shutting down gracefully');
server.close(() => {
console.log('Server closed');
process.exit(0);
});
});
GET requests are used to retrieve data from the server. They should be idempotent and safe, meaning they don't change server state.
Basic GET request handling:
const http = require('http');
const url = require('url');
const server = http.createServer((req, res) => {
// Only handle GET requests
if (req.method !== 'GET') {
res.writeHead(405, { 'Content-Type': 'text/plain' });
res.end('Method Not Allowed');
return;
}
const parsedUrl = url.parse(req.url, true);
const pathname = parsedUrl.pathname;
// Handle different routes
if (pathname === '/') {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end(`
<html>
<head><title>Home</title></head>
<body>
<h1>Welcome</h1>
<form action="/search" method="GET">
<input type="text" name="q" placeholder="Search...">
<button type="submit">Search</button>
</form>
</body>
</html>
`);
} else if (pathname === '/search') {
// Handle search with query parameters
const query = parsedUrl.query.q || '';
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
searchTerm: query,
results: [`Result 1 for ${query}`, `Result 2 for ${query}`],
timestamp: new Date().toISOString()
}));
} else {
res.writeHead(404, { 'Content-Type': 'text/plain' });
res.end('404 - Not Found');
}
});
server.listen(3000);
GET request with query parameters:
const http = require('http');
const url = require('url');
const products = [
{ id: 1, name: 'Laptop', price: 999 },
{ id: 2, name: 'Phone', price: 599 },
{ id: 3, name: 'Tablet', price: 399 }
];
const server = http.createServer((req, res) => {
if (req.method === 'GET' && req.url.startsWith('/api/products')) {
const parsedUrl = url.parse(req.url, true);
const query = parsedUrl.query;
let responseData = [...products]; // copy so filtering/sorting never mutates the original array
// Filter by name if provided
if (query.name) {
responseData = responseData.filter(product =>
product.name.toLowerCase().includes(query.name.toLowerCase())
);
}
// Filter by max price if provided
if (query.maxPrice) {
responseData = responseData.filter(product =>
product.price <= parseFloat(query.maxPrice)
);
}
// Sort if specified
if (query.sort) {
responseData.sort((a, b) => {
if (query.sort === 'price') return a.price - b.price;
if (query.sort === 'name') return a.name.localeCompare(b.name);
return 0;
});
}
res.writeHead(200, {
'Content-Type': 'application/json',
'Cache-Control': 'public, max-age=300' // Cache for 5 minutes
});
res.end(JSON.stringify({
products: responseData,
count: responseData.length,
filters: query
}));
}
});
server.listen(3000);
GET request with path parameters simulation:
const http = require('http');
const url = require('url');
const users = {
1: { id: 1, name: 'John Doe', email: 'john@example.com' },
2: { id: 2, name: 'Jane Smith', email: 'jane@example.com' },
3: { id: 3, name: 'Bob Johnson', email: 'bob@example.com' }
};
const server = http.createServer((req, res) => {
if (req.method === 'GET') {
// Match /api/users/123 pattern
const userMatch = req.url.match(/^\/api\/users\/(\d+)$/);
if (userMatch) {
const userId = parseInt(userMatch[1]);
const user = users[userId];
if (user) {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(user));
} else {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'User not found' }));
}
return;
}
// Get all users
if (req.url === '/api/users') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(Object.values(users)));
return;
}
}
res.writeHead(404, { 'Content-Type': 'text/plain' });
res.end('Not Found');
});
GET with conditional requests (ETag):
const http = require('http');
const crypto = require('crypto');
const server = http.createServer((req, res) => {
if (req.method === 'GET' && req.url === '/api/config') {
const config = {
appName: 'My Application',
version: '1.0.0',
features: ['auth', 'payments', 'reports'],
lastUpdated: new Date().toISOString()
};
// Generate ETag from config content
const configString = JSON.stringify(config);
const etag = crypto.createHash('md5').update(configString).digest('hex');
// Check if client has cached version
const clientETag = req.headers['if-none-match'];
if (clientETag === etag) {
// Content hasn't changed
res.writeHead(304, { // Not Modified
'ETag': etag,
'Cache-Control': 'public, max-age=300'
});
res.end();
} else {
// Send fresh content
res.writeHead(200, {
'Content-Type': 'application/json',
'ETag': etag,
'Cache-Control': 'public, max-age=300'
});
res.end(configString);
}
}
});
GET request with pagination:
const http = require('http');
const url = require('url');
// Sample data
const allItems = Array.from({ length: 100 }, (_, i) => ({
id: i + 1,
name: `Item ${i + 1}`,
value: Math.random() * 1000
}));
const server = http.createServer((req, res) => {
if (req.method === 'GET' && req.url.startsWith('/api/items')) {
const parsedUrl = url.parse(req.url, true);
const query = parsedUrl.query;
const page = parseInt(query.page) || 1;
const limit = parseInt(query.limit) || 10;
const startIndex = (page - 1) * limit;
const endIndex = startIndex + limit;
const items = allItems.slice(startIndex, endIndex);
const totalPages = Math.ceil(allItems.length / limit);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
items,
pagination: {
page,
limit,
totalItems: allItems.length,
totalPages,
hasNext: page < totalPages,
hasPrev: page > 1
}
}));
}
});
Best practices for GET requests:
Keep GET handlers safe and idempotent - never change server state
Pass filters, sorting, and pagination through query parameters
Validate and sanitize every query parameter before using it
Support caching with Cache-Control and ETag headers (return 304 when unchanged)
Return meaningful status codes (200, 304, 404) and JSON error bodies
POST requests are used to submit data to the server, typically for creating new resources. Unlike GET requests, POST requests can have a request body.
Basic POST request handling:
const http = require('http');
const server = http.createServer((req, res) => {
if (req.method === 'POST') {
let body = '';
// Collect data from request
req.on('data', (chunk) => {
body += chunk.toString();
});
// Process when data is complete
req.on('end', () => {
try {
const data = JSON.parse(body);
// Process the data (e.g., save to database)
console.log('Received data:', data);
res.writeHead(201, {
'Content-Type': 'application/json',
'Location': '/api/users/123' // URL of created resource
});
res.end(JSON.stringify({
message: 'Resource created successfully',
id: 123,
data: data
}));
} catch (error) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
error: 'Invalid JSON data'
}));
}
});
} else {
res.writeHead(405, { 'Content-Type': 'text/plain' });
res.end('Method Not Allowed');
}
});
server.listen(3000);
Handling different content types:
const http = require('http');
const querystring = require('querystring');
const server = http.createServer((req, res) => {
if (req.method === 'POST') {
const contentType = req.headers['content-type'];
let body = '';
req.on('data', (chunk) => {
body += chunk.toString();
});
req.on('end', () => {
let parsedData;
try {
if (contentType === 'application/json') {
parsedData = JSON.parse(body);
} else if (contentType === 'application/x-www-form-urlencoded') {
parsedData = querystring.parse(body);
} else if (contentType === 'text/plain') {
parsedData = { text: body };
} else {
throw new Error('Unsupported content type');
}
// Process the data
console.log('Parsed data:', parsedData);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
status: 'success',
received: parsedData
}));
} catch (error) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
error: error.message
}));
}
});
} else {
res.writeHead(405, { 'Content-Type': 'text/plain' });
res.end('Method Not Allowed');
}
});
POST with file upload handling:
const http = require('http');
const fs = require('fs');
const path = require('path');
const server = http.createServer((req, res) => {
if (req.method === 'POST' && req.url === '/upload') {
const contentLength = parseInt(req.headers['content-length']);
const contentType = req.headers['content-type'];
// Basic validation
if (contentLength > 10 * 1024 * 1024) { // 10MB limit
res.writeHead(413, { 'Content-Type': 'application/json' });
return res.end(JSON.stringify({ error: 'File too large' }));
}
if (!contentType.includes('multipart/form-data')) {
res.writeHead(415, { 'Content-Type': 'application/json' });
return res.end(JSON.stringify({ error: 'Unsupported media type' }));
}
const uploadDir = './uploads';
if (!fs.existsSync(uploadDir)) {
fs.mkdirSync(uploadDir);
}
const filename = `upload-${Date.now()}.dat`;
const filepath = path.join(uploadDir, filename);
const writeStream = fs.createWriteStream(filepath);
let uploadedBytes = 0;
req.on('data', (chunk) => {
uploadedBytes += chunk.length;
writeStream.write(chunk);
// Simple progress tracking
const progress = ((uploadedBytes / contentLength) * 100).toFixed(1);
console.log(`Upload progress: ${progress}%`);
});
req.on('end', () => {
writeStream.end();
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
message: 'Upload successful',
filename: filename,
size: uploadedBytes,
path: filepath
}));
});
req.on('error', (error) => {
writeStream.destroy();
fs.unlinkSync(filepath); // Clean up
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Upload failed' }));
});
} else {
res.writeHead(404, { 'Content-Type': 'text/plain' });
res.end('Not Found');
}
});
POST with authentication and validation:
const http = require('http');
const crypto = require('crypto');
// In-memory store (use database in production)
const users = [];
const sessions = {};
const server = http.createServer((req, res) => {
if (req.method === 'POST') {
if (req.url === '/api/register') {
handleRegistration(req, res);
} else if (req.url === '/api/login') {
handleLogin(req, res);
} else {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Endpoint not found' }));
}
}
});
function handleRegistration(req, res) {
let body = '';
req.on('data', (chunk) => {
body += chunk.toString();
});
req.on('end', () => {
try {
const { email, password, name } = JSON.parse(body);
// Validation
if (!email || !password || !name) {
throw new Error('Missing required fields');
}
if (users.find(user => user.email === email)) {
throw new Error('User already exists');
}
if (password.length < 6) {
throw new Error('Password too short');
}
// Create user (in real app, hash the password!)
const user = {
id: users.length + 1,
email,
password, // In production, use bcrypt!
name,
createdAt: new Date()
};
users.push(user);
res.writeHead(201, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
message: 'User registered successfully',
user: { id: user.id, email: user.email, name: user.name }
}));
} catch (error) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: error.message }));
}
});
}
function handleLogin(req, res) {
let body = '';
req.on('data', (chunk) => {
body += chunk.toString();
});
req.on('end', () => {
try {
const { email, password } = JSON.parse(body);
const user = users.find(u => u.email === email && u.password === password);
if (!user) {
throw new Error('Invalid credentials');
}
// Create session
const sessionId = crypto.randomBytes(16).toString('hex');
sessions[sessionId] = {
userId: user.id,
createdAt: new Date()
};
res.writeHead(200, {
'Content-Type': 'application/json',
'Set-Cookie': `sessionId=${sessionId}; HttpOnly; Max-Age=3600`
});
res.end(JSON.stringify({
message: 'Login successful',
user: { id: user.id, email: user.email, name: user.name }
}));
} catch (error) {
res.writeHead(401, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: error.message }));
}
});
}
POST with rate limiting:
const http = require('http');
const requestCounts = new Map();
function rateLimit(ip, limit = 100, windowMs = 60000) {
const now = Date.now();
const windowStart = now - windowMs;
if (!requestCounts.has(ip)) {
requestCounts.set(ip, []);
}
const requests = requestCounts.get(ip);
// Remove old requests outside the window
while (requests.length > 0 && requests[0] < windowStart) {
requests.shift();
}
// Check if over limit
if (requests.length >= limit) {
return false;
}
// Add current request
requests.push(now);
return true;
}
const server = http.createServer((req, res) => {
const clientIP = req.socket.remoteAddress;
if (req.method === 'POST') {
// Apply rate limiting
if (!rateLimit(clientIP, 10, 60000)) { // 10 requests per minute
res.writeHead(429, {
'Content-Type': 'application/json',
'Retry-After': '60'
});
return res.end(JSON.stringify({
error: 'Too many requests',
message: 'Please try again later'
}));
}
// Process the request
let body = '';
req.on('data', (chunk) => body += chunk.toString());
req.on('end', () => {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'Request processed' }));
});
}
});
Best practices for POST requests:
Validate the request body and reject missing or malformed fields with 400
Enforce a body size limit and check the Content-Type header
Return 201 with a Location header when a resource is created
Hash passwords (e.g. with bcrypt) - never store or log them in plain text
Apply rate limiting and authentication to sensitive endpoints
In Node.js HTTP servers, the request (http.IncomingMessage) and response (http.ServerResponse) objects represent the incoming HTTP request and outgoing response.
Request Object (http.IncomingMessage):
const http = require('http');
const server = http.createServer((req, res) => {
// Request properties
console.log('HTTP Method:', req.method);
console.log('URL:', req.url);
console.log('HTTP Version:', req.httpVersion);
console.log('Headers:', req.headers);
console.log('Remote Address:', req.socket.remoteAddress);
console.log('Remote Port:', req.socket.remotePort);
// Read request body
let body = '';
req.on('data', chunk => body += chunk.toString());
req.on('end', () => {
console.log('Request Body:', body);
});
});
Common request properties:
const http = require('http');
const server = http.createServer((req, res) => {
// Method and URL
console.log('Method:', req.method); // GET, POST, etc.
console.log('URL:', req.url); // /path?query=string
// Headers (all lowercase)
console.log('User Agent:', req.headers['user-agent']);
console.log('Content-Type:', req.headers['content-type']);
console.log('Authorization:', req.headers['authorization']);
console.log('Accept:', req.headers['accept']);
// Connection info
console.log('Remote IP:', req.socket.remoteAddress);
console.log('Remote Family:', req.socket.remoteFamily);
console.log('Secure:', req.socket.encrypted ? 'HTTPS' : 'HTTP');
// URL components (with url module)
const url = require('url');
const parsedUrl = url.parse(req.url, true);
console.log('Pathname:', parsedUrl.pathname);
console.log('Query:', parsedUrl.query);
// Request timing
console.log('Request received at:', new Date().toISOString());
});
Response Object (http.ServerResponse):
const http = require('http');
const server = http.createServer((req, res) => {
// Set status code and headers
res.statusCode = 200; // or res.writeHead(200, headers)
res.setHeader('Content-Type', 'application/json');
res.setHeader('X-Powered-By', 'Node.js');
// Write response body
res.write('Hello ');
res.write('World');
res.end('!'); // End the response
// Alternative: write everything at once
// res.end('Hello World!');
});
Common response methods and properties:
const http = require('http');
const server = http.createServer((req, res) => {
// Method 1: Set status and headers separately
res.statusCode = 201;
res.setHeader('Content-Type', 'application/json');
res.setHeader('Cache-Control', 'no-cache');
// Method 2: Set everything at once with writeHead
res.writeHead(200, {
'Content-Type': 'text/html',
'Set-Cookie': ['session=abc123', 'theme=dark']
});
// Check if headers are sent
console.log('Headers sent:', res.headersSent);
// Write response in chunks
res.write('<html><body>');
res.write('<h1>Hello</h1>');
res.write('</body></html>');
// End the response
res.end();
// Events
res.on('finish', () => {
console.log('Response sent successfully');
});
res.on('close', () => {
console.log('Connection closed before response could be sent');
});
});
Advanced request handling with streams:
const http = require('http');
const fs = require('fs');
const server = http.createServer((req, res) => {
// Log request details
console.log(`${req.method} ${req.url} from ${req.socket.remoteAddress}`);
// Handle different content types in request
if (req.headers['content-type'] === 'application/json') {
let data = '';
req.on('data', chunk => data += chunk);
req.on('end', () => {
try {
const jsonData = JSON.parse(data);
console.log('Received JSON:', jsonData);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ received: true, data: jsonData }));
} catch (error) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Invalid JSON' }));
}
});
} else {
// Stream the request to a file (for large uploads)
const writeStream = fs.createWriteStream(`upload-${Date.now()}.tmp`);
req.pipe(writeStream);
writeStream.on('finish', () => {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Upload received');
});
}
});
Advanced response streaming:
const http = require('http');
const fs = require('fs');
const server = http.createServer((req, res) => {
if (req.url === '/stream') {
// Stream large file efficiently
const fileStream = fs.createReadStream('large-file.txt');
res.writeHead(200, {
'Content-Type': 'text/plain',
'Transfer-Encoding': 'chunked'
});
fileStream.pipe(res);
fileStream.on('error', (error) => {
console.log('Stream error:', error);
if (!res.headersSent) {
res.writeHead(500, { 'Content-Type': 'text/plain' });
}
res.end('Error streaming file');
});
} else if (req.url === '/events') {
// Server-sent events
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive'
});
let count = 0;
const interval = setInterval(() => {
count++;
res.write(`data: Message ${count} at ${new Date().toISOString()}\n\n`);
if (count >= 10) {
clearInterval(interval);
res.write('data: END\n\n');
res.end();
}
}, 1000);
req.on('close', () => {
clearInterval(interval);
});
}
});
Key request object properties:
req.method - HTTP method (GET, POST, etc.)
req.url - Request URL string
req.headers - Request headers object
req.httpVersion - HTTP version
req.socket - Underlying socket
req.body - Not available by default (need to parse)
Key response object methods:
res.writeHead() - Set status and headers
res.setHeader() - Set a single header
res.write() - Write response body
res.end() - End the response
res.statusCode - Set status code
res.getHeader() - Get a header value
URL parameters can be parsed using Node.js's built-in url module or by manually parsing the URL string. There are two types: query parameters and path parameters.
Using the url module (recommended):
const http = require('http');
const url = require('url');
const server = http.createServer((req, res) => {
// Parse URL with query string
const parsedUrl = url.parse(req.url, true);
console.log('Pathname:', parsedUrl.pathname);
console.log('Query parameters:', parsedUrl.query);
// Access specific parameters
const name = parsedUrl.query.name || 'World';
const age = parsedUrl.query.age;
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
message: `Hello ${name}`,
age: age ? parseInt(age) : null,
query: parsedUrl.query
}));
});
server.listen(3000);
Manual URL parsing for path parameters:
const http = require('http');
const server = http.createServer((req, res) => {
// Parse path parameters like /users/123/posts/456
const pathParams = req.url.match(/^\/users\/(\d+)\/posts\/(\d+)$/);
if (pathParams) {
const userId = pathParams[1];
const postId = pathParams[2];
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
userId: parseInt(userId),
postId: parseInt(postId),
message: `User ${userId}'s post ${postId}`
}));
} else {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Not found' }));
}
});
Advanced parameter parsing with validation:
const http = require('http');
const url = require('url');
class ParameterParser {
static parseSearchParams(req) {
const parsedUrl = url.parse(req.url, true);
const query = parsedUrl.query;
// Validate and transform parameters
const params = {
page: this.validateNumber(query.page, 1, 1, 1000),
limit: this.validateNumber(query.limit, 10, 1, 100),
sort: this.validateString(query.sort, 'createdAt', ['createdAt', 'updatedAt', 'title']),
order: this.validateString(query.order, 'desc', ['asc', 'desc']),
search: this.sanitizeString(query.search, '')
};
return params;
}
static validateNumber(value, defaultValue, min, max) {
const num = parseInt(value);
if (isNaN(num)) return defaultValue;
return Math.max(min, Math.min(max, num));
}
static validateString(value, defaultValue, allowedValues) {
if (!value) return defaultValue;
return allowedValues.includes(value) ? value : defaultValue;
}
static sanitizeString(value, defaultValue) {
if (!value) return defaultValue;
// Basic sanitization
return value.toString().trim().substring(0, 100);
}
}
const server = http.createServer((req, res) => {
if (req.url.startsWith('/api/products')) {
const params = ParameterParser.parseSearchParams(req);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
params,
message: 'Search parameters processed'
}));
}
});
Query strings are the part of the URL after the ? character. They contain key-value pairs that can be easily parsed in Node.js.
Using the url module:
const http = require('http');
const url = require('url');
const server = http.createServer((req, res) => {
const parsedUrl = url.parse(req.url, true); // true to parse query string
// Access query parameters
const query = parsedUrl.query;
console.log('Full query object:', query);
console.log('Name:', query.name);
console.log('Age:', query.age);
console.log('Filters:', query.filters);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
query: query,
path: parsedUrl.pathname
}));
});
Manual query string parsing:
const http = require('http');
const querystring = require('querystring');
const server = http.createServer((req, res) => {
const urlParts = req.url.split('?');
const path = urlParts[0];
const queryString = urlParts[1] || '';
// Parse query string manually
const query = querystring.parse(queryString);
// Handle array parameters (filters=js&filters=node)
if (query.filters) {
if (!Array.isArray(query.filters)) {
query.filters = [query.filters];
}
}
res.end(JSON.stringify({ query, path }));
});
The url module provides utilities for URL resolution and parsing. It's essential for working with URLs in Node.js applications.
URL parsing and manipulation:
const url = require('url');
// Parse a URL string
const urlString = 'https://example.com:8080/path/name?query=string#hash';
const parsedUrl = url.parse(urlString, true);
console.log('Protocol:', parsedUrl.protocol); // https:
console.log('Host:', parsedUrl.host); // example.com:8080
console.log('Hostname:', parsedUrl.hostname); // example.com
console.log('Port:', parsedUrl.port); // 8080
console.log('Pathname:', parsedUrl.pathname); // /path/name
console.log('Query:', parsedUrl.query); // { query: 'string' }
console.log('Hash:', parsedUrl.hash); // #hash
URL formatting:
const url = require('url');
// Create URL from object
const urlObject = {
protocol: 'https',
host: 'api.example.com',
pathname: '/v1/users',
query: {
page: '1',
limit: '10'
}
};
const formattedUrl = url.format(urlObject);
console.log(formattedUrl); // https://api.example.com/v1/users?page=1&limit=10
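url.parse() and url.format() are the legacy API; modern Node.js versions also expose the WHATWG URL class globally (it appears in later examples in this guide), which covers the same parsing needs. A quick comparison sketch:
// WHATWG URL API (available as a global in modern Node.js versions)
const myUrl = new URL('https://example.com:8080/path/name?query=string#hash');
console.log(myUrl.protocol);  // https:
console.log(myUrl.hostname);  // example.com
console.log(myUrl.port);      // 8080
console.log(myUrl.pathname);  // /path/name
console.log(myUrl.hash);      // #hash
// Query parameters are exposed as a URLSearchParams object
console.log(myUrl.searchParams.get('query')); // string
myUrl.searchParams.set('page', '2');
console.log(myUrl.toString()); // https://example.com:8080/path/name?query=string&page=2#hash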
Sending JSON responses involves setting the correct Content-Type header and stringifying JavaScript objects.
Basic JSON response:
const http = require('http');
const server = http.createServer((req, res) => {
const data = {
message: 'Hello World',
timestamp: new Date().toISOString(),
status: 'success'
};
res.writeHead(200, {
'Content-Type': 'application/json',
'Cache-Control': 'no-cache'
});
res.end(JSON.stringify(data));
});
JSON response with error handling:
const http = require('http');
const server = http.createServer((req, res) => {
try {
// Your business logic
if (req.url === '/api/users') {
const users = [
{ id: 1, name: 'John' },
{ id: 2, name: 'Jane' }
];
sendJSON(res, 200, { users });
} else {
sendJSON(res, 404, { error: 'Not found' });
}
} catch (error) {
sendJSON(res, 500, { error: 'Internal server error' });
}
});
function sendJSON(res, statusCode, data) {
res.writeHead(statusCode, {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*'
});
res.end(JSON.stringify(data));
}
HTTP headers provide metadata about the request or response. They can be set using res.setHeader() or res.writeHead().
Setting individual headers:
const http = require('http');
const server = http.createServer((req, res) => {
// Set headers individually
res.setHeader('Content-Type', 'application/json');
res.setHeader('X-Powered-By', 'Node.js');
res.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate');
res.setHeader('Access-Control-Allow-Origin', '*');
res.end(JSON.stringify({ message: 'Hello' }));
});
Setting headers with writeHead:
const http = require('http');
const server = http.createServer((req, res) => {
// Set all headers at once
res.writeHead(200, {
'Content-Type': 'application/json',
'X-Powered-By': 'Node.js',
'Cache-Control': 'public, max-age=3600',
'Set-Cookie': ['session=abc123; HttpOnly', 'theme=dark']
});
res.end(JSON.stringify({ message: 'Hello' }));
});
HTTP methods (verbs) indicate the desired action to be performed on a resource. The most common methods are GET, POST, PUT, DELETE, PATCH.
Common HTTP methods:
const http = require('http');
const server = http.createServer((req, res) => {
switch (req.method) {
case 'GET':
// Retrieve resource
res.end('GET: Reading resource');
break;
case 'POST':
// Create new resource
res.end('POST: Creating resource');
break;
case 'PUT':
// Update entire resource
res.end('PUT: Updating resource');
break;
case 'PATCH':
// Partial update
res.end('PATCH: Partial update');
break;
case 'DELETE':
// Delete resource
res.end('DELETE: Deleting resource');
break;
case 'OPTIONS':
// CORS preflight
res.writeHead(204, {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE'
});
res.end();
break;
default:
res.writeHead(405, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Method not allowed' }));
}
});
CORS (Cross-Origin Resource Sharing) is a security mechanism that allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served.
Simple CORS implementation:
const http = require('http');
const server = http.createServer((req, res) => {
// Set CORS headers
res.setHeader('Access-Control-Allow-Origin', '*');
res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
// Handle preflight requests
if (req.method === 'OPTIONS') {
res.writeHead(204);
res.end();
return;
}
// Your normal request handling
res.end(JSON.stringify({ message: 'CORS enabled' }));
});
CORS can be enabled manually by setting appropriate headers and handling preflight OPTIONS requests.
Complete CORS implementation:
const http = require('http');
function enableCORS(res) {
res.setHeader('Access-Control-Allow-Origin', process.env.ALLOWED_ORIGIN || '*');
res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS, PATCH');
res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization, X-Requested-With');
res.setHeader('Access-Control-Allow-Credentials', 'true');
res.setHeader('Access-Control-Max-Age', '86400'); // 24 hours
}
const server = http.createServer((req, res) => {
// Enable CORS for all responses
enableCORS(res);
// Handle preflight requests
if (req.method === 'OPTIONS') {
res.writeHead(204);
res.end();
return;
}
// Your application logic
if (req.method === 'GET' && req.url === '/api/data') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ data: 'CORS enabled response' }));
} else {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Not found' }));
}
});
HTTP redirects are handled using 3xx status codes and the Location header to tell the client where to go next.
Different types of redirects:
const http = require('http');
const server = http.createServer((req, res) => {
if (req.url === '/old-page') {
// 301 Moved Permanently
res.writeHead(301, {
'Location': '/new-page'
});
res.end();
} else if (req.url === '/temp-redirect') {
// 302 Found (Temporary)
res.writeHead(302, {
'Location': '/temporary-page'
});
res.end();
} else if (req.url === '/see-other') {
// 303 See Other (for POST redirect)
res.writeHead(303, {
'Location': '/result-page'
});
res.end();
} else {
res.end('Current page');
}
});
HTTP status codes are 3-digit numbers returned by servers to indicate the result of a request. They are grouped into five classes: 1xx (informational), 2xx (success), 3xx (redirection), 4xx (client errors), and 5xx (server errors).
Common status codes:
const http = require('http');
const server = http.createServer((req, res) => {
// 2xx Success
if (req.url === '/success') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'OK' }));
}
// 3xx Redirection
else if (req.url === '/moved') {
res.writeHead(301, { 'Location': '/new-location' });
res.end();
}
// 4xx Client Errors
else if (req.url === '/not-found') {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Not found' }));
}
else if (req.url === '/unauthorized') {
res.writeHead(401, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Unauthorized' }));
}
// 5xx Server Errors
else if (req.url === '/error') {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Internal server error' }));
}
});
HTTPS servers provide encrypted communication using SSL/TLS. They are essential for secure web applications.
Creating an HTTPS server:
const https = require('https');
const fs = require('fs');
// Read SSL certificate and key
const options = {
key: fs.readFileSync('server-key.pem'),
cert: fs.readFileSync('server-cert.pem')
};
const server = https.createServer(options, (req, res) => {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Secure HTTPS server!');
});
server.listen(443, () => {
console.log('HTTPS server running on port 443');
});
SSL certificates are used to enable HTTPS and secure communication. You can use self-signed certificates for development or obtain certificates from a CA for production.
Using SSL certificates:
const https = require('https');
const fs = require('fs');
// For development (self-signed)
const options = {
key: fs.readFileSync('./ssl/private-key.pem'),
cert: fs.readFileSync('./ssl/certificate.pem'),
// Optional: Add CA bundle for production
// ca: fs.readFileSync('./ssl/ca-bundle.crt'),
// Security settings
secureProtocol: 'TLSv1_2_method',
ciphers: 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
};
const server = https.createServer(options, (req, res) => {
res.end('Secure connection established');
});
server.listen(443);
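For local development, a self-signed key and certificate matching the file names above can be generated with OpenSSL (development use only):
# Generate a self-signed certificate valid for one year (development only)
mkdir -p ssl
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout ssl/private-key.pem -out ssl/certificate.pem -subj "/CN=localhost"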
The net module provides asynchronous network API for creating stream-based TCP servers and clients.
Basic TCP server:
const net = require('net');
const server = net.createServer((socket) => {
console.log('Client connected from:', socket.remoteAddress);
socket.write('Hello from TCP server!\r\n');
socket.on('data', (data) => {
console.log('Received:', data.toString());
socket.write('Echo: ' + data);
});
socket.on('end', () => {
console.log('Client disconnected');
});
});
server.listen(3000, () => {
console.log('TCP server listening on port 3000');
});
TCP servers handle raw TCP connections, useful for custom protocols, real-time applications, or when you need low-level control.
Advanced TCP server:
const net = require('net');
class TCPServer {
constructor(port) {
this.port = port;
this.clients = new Map();
this.server = net.createServer(this.handleConnection.bind(this));
}
handleConnection(socket) {
const clientId = `${socket.remoteAddress}:${socket.remotePort}`;
console.log(`Client ${clientId} connected`);
this.clients.set(clientId, socket);
socket.on('data', (data) => {
this.handleMessage(clientId, data.toString());
});
socket.on('end', () => {
console.log(`Client ${clientId} disconnected`);
this.clients.delete(clientId);
});
socket.on('error', (error) => {
console.log(`Client ${clientId} error:`, error.message);
this.clients.delete(clientId);
});
// Send welcome message
socket.write(`Welcome! You are client ${clientId}\r\n`);
}
handleMessage(clientId, message) {
console.log(`Message from ${clientId}: ${message.trim()}`);
// Broadcast to all clients
this.broadcast(`${clientId}: ${message}`, clientId);
}
broadcast(message, excludeClientId = null) {
this.clients.forEach((socket, clientId) => {
if (clientId !== excludeClientId) {
socket.write(message);
}
});
}
start() {
this.server.listen(this.port, () => {
console.log(`TCP server running on port ${this.port}`);
});
}
}
// Usage
const tcpServer = new TCPServer(3000);
tcpServer.start();
TCP clients connect to TCP servers to send and receive data. The net module provides the Socket class for this purpose.
TCP client connection:
const net = require('net');
// Create TCP client
const client = new net.Socket();
client.connect(3000, 'localhost', () => {
console.log('Connected to TCP server');
client.write('Hello from client!');
});
client.on('data', (data) => {
console.log('Received from server:', data.toString());
});
client.on('close', () => {
console.log('Connection closed');
});
client.on('error', (error) => {
console.log('Connection error:', error.message);
});
A socket is an endpoint for sending or receiving data across a computer network. In Node.js, sockets are used for both TCP and UDP communication.
Socket characteristics:
const net = require('net');
const server = net.createServer((socket) => {
// Socket properties
console.log('Remote address:', socket.remoteAddress);
console.log('Remote port:', socket.remotePort);
console.log('Local address:', socket.localAddress);
console.log('Local port:', socket.localPort);
console.log('Bytes read:', socket.bytesRead);
console.log('Bytes written:', socket.bytesWritten);
// Socket events
socket.on('data', (data) => {
console.log('Data received:', data.toString());
});
socket.on('end', () => {
console.log('Client ended connection');
});
socket.on('close', () => {
console.log('Socket closed');
});
socket.on('error', (error) => {
console.log('Socket error:', error.message);
});
// Socket methods
socket.write('Hello client!');
socket.setEncoding('utf8');
socket.setTimeout(30000); // 30 seconds
});
WebSockets provide full-duplex communication channels over a single TCP connection, ideal for real-time applications.
Using the ws library:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', (ws) => {
console.log('New WebSocket connection');
ws.on('message', (message) => {
console.log('Received:', message.toString());
// Echo back to client
ws.send(`Echo: ${message}`);
// Broadcast to all clients
wss.clients.forEach((client) => {
if (client !== ws && client.readyState === WebSocket.OPEN) {
client.send(`Broadcast: ${message}`);
}
});
});
ws.on('close', () => {
console.log('WebSocket connection closed');
});
// Send welcome message
ws.send('Welcome to WebSocket server!');
});
Socket.io is a library that enables real-time, bidirectional and event-based communication between web clients and servers.
Socket.io server setup:
const http = require('http');
const socketIo = require('socket.io');
const server = http.createServer();
const io = socketIo(server, {
cors: {
origin: "*",
methods: ["GET", "POST"]
}
});
io.on('connection', (socket) => {
console.log('User connected:', socket.id);
// Join room
socket.on('join-room', (room) => {
socket.join(room);
socket.to(room).emit('user-joined', socket.id);
});
// Handle chat messages
socket.on('send-message', (data) => {
io.to(data.room).emit('new-message', {
user: socket.id,
message: data.message,
timestamp: new Date()
});
});
// Handle disconnection
socket.on('disconnect', () => {
console.log('User disconnected:', socket.id);
});
});
server.listen(3000);
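A matching client sketch using the socket.io-client package (assuming a Socket.io v4-style import); the event and room names mirror the server above:
const { io } = require('socket.io-client');
const socket = io('http://localhost:3000');

socket.on('connect', () => {
  console.log('Connected with id:', socket.id);
  // Join a room and send a message, matching the server's event names
  socket.emit('join-room', 'general');
  socket.emit('send-message', { room: 'general', message: 'Hello everyone!' });
});

socket.on('new-message', (data) => {
  console.log(`[${data.timestamp}] ${data.user}: ${data.message}`);
});

socket.on('user-joined', (userId) => {
  console.log('User joined the room:', userId);
});

socket.on('disconnect', () => {
  console.log('Disconnected from server');
});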
A WebSocket server handles WebSocket connections and enables real-time communication between server and clients.
Complete WebSocket server:
const WebSocket = require('ws');
class WebSocketServer {
constructor(port) {
this.port = port;
this.wss = new WebSocket.Server({ port });
this.clients = new Map();
this.setupEventHandlers();
}
setupEventHandlers() {
this.wss.on('connection', (ws, req) => {
const clientId = this.generateClientId();
console.log(`Client ${clientId} connected from ${req.socket.remoteAddress}`);
this.clients.set(clientId, ws);
this.setupClientHandlers(clientId, ws);
// Send welcome message
this.sendToClient(clientId, {
type: 'welcome',
clientId: clientId,
message: 'Connected to WebSocket server'
});
// Notify other clients
this.broadcast({
type: 'user-joined',
clientId: clientId,
timestamp: new Date()
}, clientId);
});
}
setupClientHandlers(clientId, ws) {
ws.on('message', (data) => {
try {
const message = JSON.parse(data);
this.handleMessage(clientId, message);
} catch (error) {
this.sendToClient(clientId, {
type: 'error',
message: 'Invalid JSON format'
});
}
});
ws.on('close', () => {
console.log(`Client ${clientId} disconnected`);
this.clients.delete(clientId);
this.broadcast({
type: 'user-left',
clientId: clientId,
timestamp: new Date()
});
});
ws.on('error', (error) => {
console.log(`Client ${clientId} error:`, error.message);
this.clients.delete(clientId);
});
}
handleMessage(clientId, message) {
switch (message.type) {
case 'chat':
this.broadcast({
type: 'chat',
from: clientId,
text: message.text,
timestamp: new Date()
});
break;
case 'ping':
this.sendToClient(clientId, { type: 'pong' });
break;
default:
this.sendToClient(clientId, {
type: 'error',
message: 'Unknown message type'
});
}
}
sendToClient(clientId, data) {
const ws = this.clients.get(clientId);
if (ws && ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify(data));
}
}
broadcast(data, excludeClientId = null) {
this.clients.forEach((ws, clientId) => {
if (clientId !== excludeClientId && ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify(data));
}
});
}
generateClientId() {
return Math.random().toString(36).substring(2, 15);
}
start() {
console.log(`WebSocket server running on port ${this.port}`);
}
}
// Usage
const wsServer = new WebSocketServer(8080);
wsServer.start();
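A minimal Node.js client for the server above, using the same ws package and the JSON message format its handleMessage method expects:
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:8080');

ws.on('open', () => {
  console.log('Connected to server');
  // Send messages in the format the server's handleMessage expects
  ws.send(JSON.stringify({ type: 'chat', text: 'Hello from the client!' }));
  ws.send(JSON.stringify({ type: 'ping' }));
});

ws.on('message', (data) => {
  const message = JSON.parse(data);
  console.log('Received:', message);
});

ws.on('close', () => {
  console.log('Connection closed');
});

ws.on('error', (error) => {
  console.log('Connection error:', error.message);
});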
HTTP streaming allows sending data in chunks as it becomes available, rather than waiting for the complete response.
HTTP response streaming:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200, {
'Content-Type': 'text/plain',
'Transfer-Encoding': 'chunked'
});
// Send data in chunks
let count = 0;
const interval = setInterval(() => {
count++;
res.write(`Chunk ${count}\n`);
if (count >= 10) {
clearInterval(interval);
res.end('Stream complete');
}
}, 1000);
});
Streaming responses are handled using Node.js streams to efficiently send large amounts of data or real-time updates.
File streaming response:
const http = require('http');
const fs = require('fs');
const server = http.createServer((req, res) => {
const fileStream = fs.createReadStream('large-file.txt');
res.writeHead(200, {
'Content-Type': 'text/plain',
'Content-Disposition': 'attachment; filename="large-file.txt"'
});
fileStream.pipe(res);
fileStream.on('error', (error) => {
res.writeHead(500);
res.end('Error streaming file');
});
});
HTTP pipelining is a technique where multiple HTTP requests are sent on a single connection without waiting for the corresponding responses, improving performance by reducing latency.
Understanding HTTP pipelining:
const http = require('http');
// Server that handles pipelined requests
const server = http.createServer((req, res) => {
console.log(`Processing ${req.method} ${req.url}`);
// Simulate processing time
setTimeout(() => {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end(`Response for ${req.url}\n`);
}, 1000);
});
server.listen(3000);
// Client making pipelined requests
function makePipelinedRequests() {
const requests = [
{ method: 'GET', path: '/api/users' },
{ method: 'GET', path: '/api/products' },
{ method: 'POST', path: '/api/orders' }
];
const options = {
hostname: 'localhost',
port: 3000,
method: 'GET'
};
// Note: Modern HTTP/1.1 clients handle pipelining automatically
// This is a simplified demonstration
requests.forEach((reqConfig, index) => {
const req = http.request({ ...options, path: reqConfig.path }, (res) => {
let data = '';
res.on('data', chunk => data += chunk);
res.on('end', () => {
console.log(`Response ${index + 1}:`, data.trim());
});
});
req.end();
});
}
// Note: HTTP/2 multiplexing has largely replaced pipelining
// Pipelining can be problematic with head-of-line blocking
Key points about HTTP pipelining:
Multiple requests are sent over one connection without waiting for each response, but responses must still be returned in order.
This makes pipelining prone to head-of-line blocking: one slow response delays everything queued behind it.
Modern HTTP/1.1 clients rarely rely on it, and HTTP/2 multiplexing has largely replaced it.
Timeouts are crucial for preventing hanging requests and managing resources efficiently in HTTP servers and clients.
Server-side timeout management:
const http = require('http');
const server = http.createServer((req, res) => {
// Set timeout for this request (10 seconds)
req.setTimeout(10000, () => {
console.log('Request timeout for:', req.url);
if (!res.headersSent) {
res.writeHead(408, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Request timeout' }));
}
});
// Simulate long processing
if (req.url === '/slow') {
setTimeout(() => {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'Slow response completed' }));
}, 15000); // This will timeout
} else {
res.end('OK');
}
});
// Set server-level timeout
server.setTimeout(30000, (socket) => {
console.log('Socket timeout, closing connection');
socket.destroy();
});
server.listen(3000);
Client-side timeout management:
const http = require('http');
function makeRequestWithTimeout() {
const options = {
hostname: 'example.com',
port: 80,
path: '/api/data',
method: 'GET',
timeout: 5000 // 5 second timeout
};
const req = http.request(options, (res) => {
let data = '';
res.on('data', chunk => data += chunk);
res.on('end', () => {
console.log('Response:', data);
});
});
// Handle timeout
req.setTimeout(5000, () => {
console.log('Request timeout');
req.destroy();
});
// Handle errors
req.on('error', (error) => {
console.log('Request error:', error.message);
});
req.end();
}
Advanced timeout strategy:
class TimeoutManager {
constructor(defaultTimeout = 10000) {
this.defaultTimeout = defaultTimeout;
this.timeouts = new Map();
}
setRequestTimeout(req, res, timeout = this.defaultTimeout) {
const timeoutId = setTimeout(() => {
this.handleTimeout(req, res);
}, timeout);
this.timeouts.set(req, timeoutId);
// Clear timeout when request completes
res.on('finish', () => {
this.clearTimeout(req);
});
}
handleTimeout(req, res) {
if (!res.headersSent) {
res.writeHead(408, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
error: 'Request timeout',
message: 'The request took too long to process'
}));
}
req.destroy();
this.clearTimeout(req);
}
clearTimeout(req) {
const timeoutId = this.timeouts.get(req);
if (timeoutId) {
clearTimeout(timeoutId);
this.timeouts.delete(req);
}
}
}
// Usage
const timeoutManager = new TimeoutManager(8000);
const server = http.createServer((req, res) => {
timeoutManager.setRequestTimeout(req, res);
// Your request handling logic
setTimeout(() => {
res.end('Response after delay');
}, 5000); // Within timeout
});
http.Agent is responsible for managing connection persistence and reuse for HTTP clients. It maintains a pool of sockets for keep-alive connections.
Using http.Agent:
const http = require('http');
// Custom agent with specific settings
const agent = new http.Agent({
keepAlive: true,
keepAliveMsecs: 10000,
maxSockets: 10,
maxFreeSockets: 5,
timeout: 60000
});
// Make requests using the custom agent
function makeRequests() {
const options = {
hostname: 'jsonplaceholder.typicode.com',
port: 80,
path: '/posts/1',
method: 'GET',
agent: agent // Use our custom agent
};
const req = http.request(options, (res) => {
console.log('Status:', res.statusCode);
console.log('Connection reused:', req.reusedSocket); // reusedSocket lives on the request object
});
req.end();
}
// Monitor agent statistics
console.log('Agent stats:', {
activeSocketOrigins: Object.keys(agent.sockets).length, // socket pools are keyed by origin
freeSockets: Object.keys(agent.freeSockets).length,
pendingRequests: Object.keys(agent.requests).length
});
Global agent configuration:
const http = require('http');
// Configure global agent
http.globalAgent.keepAlive = true;
http.globalAgent.maxSockets = 50;
http.globalAgent.maxFreeSockets = 10;
// All requests will now use keep-alive by default
function makeMultipleRequests() {
for (let i = 1; i <= 5; i++) {
const req = http.request({
hostname: 'api.example.com',
path: `/data/${i}`
}, (res) => {
console.log(`Request ${i} - Reused: ${req.reusedSocket}`);
});
req.end();
}
}
Custom agent for specific domains:
const http = require('http');
// Create specialized agents for different services
const apiAgent = new http.Agent({
keepAlive: true,
maxSockets: 20,
timeout: 30000
});
const imageAgent = new http.Agent({
keepAlive: false, // Don't keep alive for image requests
maxSockets: 50
});
class HttpClient {
constructor() {
this.agents = {
'api.example.com': apiAgent,
'images.example.com': imageAgent,
'default': new http.Agent({ keepAlive: true })
};
}
request(url, options = {}) {
const urlObj = new URL(url);
const agent = this.agents[urlObj.hostname] || this.agents.default;
return http.request({
hostname: urlObj.hostname,
path: urlObj.pathname + urlObj.search,
agent: agent,
...options
});
}
}
// Usage (this client only uses the http module, so stick to http:// URLs)
const client = new HttpClient();
const req = client.request('http://api.example.com/data');
req.end();
Concurrent request handling is essential for building scalable servers. Node.js's event-driven architecture makes it excellent for handling many simultaneous connections.
Basic concurrent request handling:
const http = require('http');
const server = http.createServer(async (req, res) => {
const requestId = Math.random().toString(36).substring(7);
console.log(`[${requestId}] Started processing ${req.url}`);
// Simulate async work (database query, API call, etc.)
await simulateAsyncWork();
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
requestId,
message: 'Request completed',
url: req.url
}));
console.log(`[${requestId}] Completed processing`);
});
async function simulateAsyncWork() {
// Simulate variable processing time (100-2000ms)
const delay = Math.random() * 1900 + 100;
return new Promise(resolve => setTimeout(resolve, delay));
}
server.listen(3000, () => {
console.log('Server ready for concurrent requests on port 3000');
});
Concurrent requests with connection limiting:
const http = require('http');
class ConcurrentRequestHandler {
constructor(maxConcurrent = 100) {
this.maxConcurrent = maxConcurrent;
this.activeRequests = 0;
this.queue = [];
}
handleRequest(req, res) {
return new Promise((resolve) => {
const requestHandler = () => {
this.activeRequests++;
// Simulate request processing
setTimeout(() => {
res.end('Request processed');
this.activeRequests--;
this.processQueue();
resolve();
}, 1000);
};
if (this.activeRequests < this.maxConcurrent) {
requestHandler();
} else {
this.queue.push(requestHandler);
console.log(`Request queued. Queue length: ${this.queue.length}`);
}
});
}
processQueue() {
while (this.queue.length > 0 && this.activeRequests < this.maxConcurrent) {
const nextHandler = this.queue.shift();
nextHandler();
}
}
}
const requestHandler = new ConcurrentRequestHandler(5); // Max 5 concurrent
const server = http.createServer((req, res) => {
requestHandler.handleRequest(req, res);
});
Making concurrent client requests:
const http = require('http');
async function makeConcurrentRequests(urls, concurrencyLimit = 5) {
const results = [];
let currentIndex = 0;
async function processNext() {
if (currentIndex >= urls.length) return;
const url = urls[currentIndex++];
try {
const result = await fetchUrl(url);
results.push({ url, status: 'success', data: result });
} catch (error) {
results.push({ url, status: 'error', error: error.message });
}
// Process next URL
await processNext();
}
// Start concurrent processing
const workers = Array(concurrencyLimit).fill().map(() => processNext());
await Promise.all(workers);
return results;
}
function fetchUrl(url) {
return new Promise((resolve, reject) => {
const req = http.request(url, (res) => {
let data = '';
res.on('data', chunk => data += chunk);
res.on('end', () => resolve(data));
});
req.on('error', reject);
req.setTimeout(5000, () => {
req.destroy();
reject(new Error('Timeout'));
});
req.end();
});
}
// Usage
const urls = [
'http://jsonplaceholder.typicode.com/posts/1',
'http://jsonplaceholder.typicode.com/posts/2',
'http://jsonplaceholder.typicode.com/posts/3'
];
makeConcurrentRequests(urls, 2)
.then(results => console.log('Concurrent results:', results));
A proxy server acts as an intermediary between clients and other servers, useful for caching, filtering, load balancing, or bypassing restrictions.
Basic HTTP proxy server:
const http = require('http');
const https = require('https');
const proxy = http.createServer((clientReq, clientRes) => {
console.log(`Proxying: ${clientReq.method} ${clientReq.url}`);
// Parse the target URL from request
const targetUrl = new URL(clientReq.url, `http://${clientReq.headers.host}`);
const options = {
hostname: targetUrl.hostname,
port: targetUrl.port || (targetUrl.protocol === 'https:' ? 443 : 80),
path: targetUrl.pathname + targetUrl.search,
method: clientReq.method,
headers: { ...clientReq.headers }
};
// Remove hop-by-hop headers
delete options.headers['proxy-connection'];
delete options.headers['connection'];
delete options.headers['keep-alive'];
// Choose protocol
const protocol = targetUrl.protocol === 'https:' ? https : http;
const proxyReq = protocol.request(options, (proxyRes) => {
// Forward status and headers
clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
// Pipe the response
proxyRes.pipe(clientRes);
});
// Handle errors
proxyReq.on('error', (error) => {
console.log('Proxy error:', error.message);
clientRes.writeHead(500, { 'Content-Type': 'text/plain' });
clientRes.end('Proxy error: ' + error.message);
});
// Pipe the request body
clientReq.pipe(proxyReq);
});
proxy.listen(8080, () => {
console.log('Proxy server running on port 8080');
});
Advanced proxy with caching:
const http = require('http');
const https = require('https');
const fs = require('fs');
const path = require('path');
class CachingProxy {
constructor(cacheDir = './cache') {
this.cacheDir = cacheDir;
this.ensureCacheDir();
}
ensureCacheDir() {
if (!fs.existsSync(this.cacheDir)) {
fs.mkdirSync(this.cacheDir, { recursive: true });
}
}
getCacheKey(url) {
return Buffer.from(url).toString('base64').replace(/[^a-zA-Z0-9]/g, '_');
}
getCachedResponse(url) {
const cacheFile = path.join(this.cacheDir, this.getCacheKey(url));
try {
if (fs.existsSync(cacheFile)) {
const cached = JSON.parse(fs.readFileSync(cacheFile, 'utf8'));
// Check if cache is still valid (5 minutes)
if (Date.now() - cached.timestamp < 5 * 60 * 1000) {
return cached;
}
}
} catch (error) {
// Cache corrupted, ignore
}
return null;
}
cacheResponse(url, response) {
const cacheFile = path.join(this.cacheDir, this.getCacheKey(url));
const cached = {
url,
timestamp: Date.now(),
statusCode: response.statusCode,
headers: response.headers,
body: response.body
};
fs.writeFileSync(cacheFile, JSON.stringify(cached));
}
handleRequest(clientReq, clientRes) {
const targetUrl = clientReq.url.substring(1); // Remove leading /
// Check cache first
const cached = this.getCachedResponse(targetUrl);
if (cached) {
console.log('Serving from cache:', targetUrl);
clientRes.writeHead(cached.statusCode, cached.headers);
clientRes.end(cached.body);
return;
}
// Forward request
const url = new URL(targetUrl);
const options = {
hostname: url.hostname,
path: url.pathname + url.search,
method: clientReq.method,
headers: { ...clientReq.headers }
};
const protocol = url.protocol === 'https:' ? https : http;
const proxyReq = protocol.request(options, (proxyRes) => {
let body = '';
proxyRes.on('data', chunk => body += chunk);
proxyRes.on('end', () => {
// Cache successful responses
if (proxyRes.statusCode === 200) {
this.cacheResponse(targetUrl, {
statusCode: proxyRes.statusCode,
headers: proxyRes.headers,
body: body
});
}
clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
clientRes.end(body);
});
});
clientReq.pipe(proxyReq);
}
}
// Usage
const cachingProxy = new CachingProxy();
const server = http.createServer((req, res) => {
cachingProxy.handleRequest(req, res);
});
server.listen(8080);
An HTTP client is a program that sends HTTP requests to servers and processes the responses. Node.js provides built-in HTTP client capabilities through the http and https modules.
Built-in HTTP client examples:
const http = require('http');
const https = require('https');
// Simple GET request
function simpleGet(url) {
return new Promise((resolve, reject) => {
const protocol = url.startsWith('https') ? https : http;
const req = protocol.get(url, (res) => {
let data = '';
res.on('data', chunk => data += chunk);
res.on('end', () => {
resolve({
statusCode: res.statusCode,
headers: res.headers,
body: data
});
});
});
req.on('error', reject);
req.setTimeout(5000, () => {
req.destroy();
reject(new Error('Request timeout'));
});
});
}
// POST request with JSON body
function postJSON(url, data) {
return new Promise((resolve, reject) => {
const urlObj = new URL(url);
const protocol = urlObj.protocol === 'https:' ? https : http;
const postData = JSON.stringify(data);
const options = {
hostname: urlObj.hostname,
port: urlObj.port,
path: urlObj.pathname,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(postData)
}
};
const req = protocol.request(options, (res) => {
let responseData = '';
res.on('data', chunk => responseData += chunk);
res.on('end', () => {
resolve({
statusCode: res.statusCode,
body: responseData
});
});
});
req.on('error', reject);
req.write(postData);
req.end();
});
}
// Usage examples
simpleGet('http://jsonplaceholder.typicode.com/posts/1')
.then(result => console.log('GET result:', result));
postJSON('http://jsonplaceholder.typicode.com/posts', {
title: 'Test Post',
body: 'This is a test',
userId: 1
}).then(result => console.log('POST result:', result));
HTTP requests can be made using Node.js's built-in modules or third-party libraries. Here are various ways to make HTTP requests.
Using built-in http module:
const http = require('http');
const https = require('https');
class HTTPClient {
static request(options) {
return new Promise((resolve, reject) => {
const protocol = options.protocol === 'https:' ? https : http;
const req = protocol.request(options, (res) => {
const response = {
statusCode: res.statusCode,
headers: res.headers,
body: ''
};
res.on('data', chunk => response.body += chunk);
res.on('end', () => resolve(response));
});
req.on('error', reject);
req.setTimeout(options.timeout || 5000, () => {
req.destroy();
reject(new Error('Request timeout'));
});
if (options.body) {
req.write(options.body);
}
req.end();
});
}
static get(url, options = {}) {
const urlObj = new URL(url);
return this.request({
protocol: urlObj.protocol,
hostname: urlObj.hostname,
port: urlObj.port,
path: urlObj.pathname + urlObj.search,
method: 'GET',
...options
});
}
static post(url, data, options = {}) {
const urlObj = new URL(url);
const body = JSON.stringify(data);
return this.request({
protocol: urlObj.protocol,
hostname: urlObj.hostname,
port: urlObj.port,
path: urlObj.pathname,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(body),
...options.headers
},
body: body,
...options
});
}
}
// Usage
HTTPClient.get('https://jsonplaceholder.typicode.com/posts/1')
.then(response => console.log('GET:', response.body));
HTTPClient.post('https://jsonplaceholder.typicode.com/posts', {
title: 'Test',
body: 'Content',
userId: 1
}).then(response => console.log('POST:', response.body));
node-fetch is a light-weight module that brings window.fetch to Node.js, providing a familiar API for making HTTP requests similar to browsers.
Using node-fetch:
const fetch = require('node-fetch');
// Basic GET request
async function fetchData() {
try {
const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json();
console.log('Fetched data:', data);
} catch (error) {
console.log('Fetch error:', error.message);
}
}
// POST request with JSON
async function postData() {
const response = await fetch('https://jsonplaceholder.typicode.com/posts', {
method: 'POST',
body: JSON.stringify({
title: 'Test Post',
body: 'This is a test',
userId: 1
}),
headers: {
'Content-Type': 'application/json'
}
});
return await response.json();
}
// Advanced usage with timeouts and retries
class FetchClient {
constructor(defaultOptions = {}) {
this.defaultOptions = {
timeout: 5000,
retries: 3,
...defaultOptions
};
}
async fetchWithRetry(url, options = {}) {
let lastError;
for (let attempt = 1; attempt <= this.defaultOptions.retries; attempt++) {
try {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), this.defaultOptions.timeout);
const response = await fetch(url, {
...this.defaultOptions,
...options,
signal: controller.signal
});
clearTimeout(timeoutId);
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
return response;
} catch (error) {
lastError = error;
console.log(`Attempt ${attempt} failed:`, error.message);
if (attempt < this.defaultOptions.retries) {
await this.delay(1000 * attempt); // Exponential backoff
}
}
}
throw lastError;
}
delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
}
// Usage
const client = new FetchClient({ timeout: 3000, retries: 2 });
client.fetchWithRetry('https://api.example.com/data')
.then(response => response.json())
.then(data => console.log('Data:', data))
.catch(error => console.log('All attempts failed:', error.message));
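Note that Node.js 18 and later expose fetch as a global, so recent projects often don't need the node-fetch dependency at all. A minimal sketch, assuming Node 18+:
// No require needed: fetch is a global in Node.js 18+
async function getPost() {
  const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');
  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status}`);
  }
  return response.json();
}

getPost()
  .then(data => console.log('Built-in fetch result:', data))
  .catch(error => console.log('Fetch failed:', error.message));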
Axios is a popular Promise-based HTTP client for both browser and Node.js that provides a simple and clean API.
Basic axios usage:
const axios = require('axios');
// GET request
async function getData() {
try {
const response = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
console.log('Response:', response.data);
} catch (error) {
console.log('Error:', error.message);
}
}
// POST request
async function createPost() {
const response = await axios.post('https://jsonplaceholder.typicode.com/posts', {
title: 'Test Post',
body: 'This is a test',
userId: 1
});
return response.data;
}
// Concurrent requests
async function makeConcurrentRequests() {
try {
const [userResponse, postResponse] = await Promise.all([
axios.get('https://jsonplaceholder.typicode.com/users/1'),
axios.get('https://jsonplaceholder.typicode.com/posts/1')
]);
console.log('User:', userResponse.data);
console.log('Post:', postResponse.data);
} catch (error) {
console.log('Error in concurrent requests:', error.message);
}
}
Advanced axios configuration:
const axios = require('axios');
// Create axios instance with default config
const apiClient = axios.create({
baseURL: 'https://api.example.com/v1',
timeout: 5000,
headers: {
'Content-Type': 'application/json',
'Authorization': 'Bearer your-token-here'
}
});
// Request interceptor
apiClient.interceptors.request.use(
(config) => {
console.log(`Making ${config.method.toUpperCase()} request to ${config.url}`);
return config;
},
(error) => {
return Promise.reject(error);
}
);
// Response interceptor
apiClient.interceptors.response.use(
(response) => {
console.log(`Received ${response.status} from ${response.config.url}`);
return response;
},
(error) => {
if (error.response) {
console.log(`Server responded with ${error.response.status}`);
} else if (error.request) {
console.log('No response received:', error.message);
} else {
console.log('Request setup error:', error.message);
}
return Promise.reject(error);
}
);
// Usage with the configured client
async function useApiClient() {
try {
const response = await apiClient.get('/users');
return response.data;
} catch (error) {
if (error.response && error.response.status === 401) {
// Handle unauthorized
console.log('Authentication required');
}
throw error;
}
}
Keep-alive connections allow multiple HTTP requests to be sent over the same TCP connection, reducing latency and improving performance.
Enable keep-alive in Node.js:
const http = require('http');
// Server with keep-alive
const server = http.createServer((req, res) => {
res.writeHead(200, {
'Content-Type': 'text/plain',
'Connection': 'keep-alive'
});
res.end('Hello with keep-alive');
});
server.keepAliveTimeout = 30000; // 30 seconds
server.headersTimeout = 35000; // 35 seconds
// Client with keep-alive
function makeKeepAliveRequests() {
const agent = new http.Agent({
keepAlive: true,
keepAliveMsecs: 1000,
maxSockets: 10,
maxFreeSockets: 5
});
// Make multiple requests using the same agent
for (let i = 0; i < 5; i++) {
const req = http.request({
hostname: 'localhost',
port: 3000,
path: `/request-${i}`,
agent: agent,
headers: { 'Connection': 'keep-alive' }
}, (res) => {
console.log(`Request ${i} - Reused socket: ${req.reusedSocket}`);
res.on('data', () => {}); // Consume data
});
req.end();
}
}
Response compression reduces bandwidth usage by compressing response bodies using algorithms like gzip, deflate, or brotli.
Compressing responses with zlib:
const http = require('http');
const zlib = require('zlib');
const server = http.createServer((req, res) => {
const responseData = {
message: 'This is a compressed response',
timestamp: new Date().toISOString(),
data: Array(1000).fill('Some repeated data to make compression worthwhile')
};
const jsonData = JSON.stringify(responseData);
// Check client's accepted encodings
const acceptEncoding = req.headers['accept-encoding'] || '';
if (acceptEncoding.includes('gzip')) {
zlib.gzip(jsonData, (err, compressed) => {
if (err) {
res.writeHead(500);
res.end('Compression error');
return;
}
res.writeHead(200, {
'Content-Type': 'application/json',
'Content-Encoding': 'gzip',
'Vary': 'Accept-Encoding'
});
res.end(compressed);
});
} else if (acceptEncoding.includes('deflate')) {
zlib.deflate(jsonData, (err, compressed) => {
res.writeHead(200, {
'Content-Type': 'application/json',
'Content-Encoding': 'deflate'
});
res.end(compressed);
});
} else {
// No compression
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(jsonData);
}
});
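For large payloads, buffering the whole body before compressing (as above) can be avoided by piping through zlib's streaming interface. A sketch that streams a file through gzip (the file name is just an example):
const http = require('http');
const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

const server = http.createServer((req, res) => {
  const acceptEncoding = req.headers['accept-encoding'] || '';
  const source = fs.createReadStream('large-file.txt'); // example file
  if (acceptEncoding.includes('gzip')) {
    res.writeHead(200, {
      'Content-Type': 'text/plain',
      'Content-Encoding': 'gzip',
      'Vary': 'Accept-Encoding'
    });
    // pipeline handles backpressure and cleans up all streams on error
    pipeline(source, zlib.createGzip(), res, (err) => {
      if (err) console.error('Streaming compression failed:', err.message);
    });
  } else {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    pipeline(source, res, (err) => {
      if (err) console.error('Streaming failed:', err.message);
    });
  }
});

server.listen(3000);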
Content negotiation allows clients and servers to agree on the best representation of a resource based on factors like content type, language, and encoding.
Content negotiation implementation:
const http = require('http');
const server = http.createServer((req, res) => {
const resource = {
id: 1,
title: 'Sample Resource',
content: 'This is the content',
languages: {
en: 'Hello World',
es: 'Hola Mundo',
fr: 'Bonjour le monde'
}
};
// Content-Type negotiation
const acceptHeader = req.headers['accept'] || 'text/plain';
if (acceptHeader.includes('application/json')) {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(resource));
} else if (acceptHeader.includes('text/html')) {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end(`
<html>
<body>
<h1>${resource.title}</h1>
<p>${resource.content}</p>
</body>
</html>
`);
} else if (acceptHeader.includes('text/xml')) {
res.writeHead(200, { 'Content-Type': 'text/xml' });
res.end(`
<resource>
<id>${resource.id}</id>
<title>${resource.title}</title>
<content>${resource.content}</content>
</resource>
`);
} else {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end(`${resource.title}: ${resource.content}`);
}
});
Cookies are small pieces of data stored by the browser and sent with HTTP requests. They're commonly used for session management, personalization, and tracking.
Cookie management in Node.js:
const http = require('http');
function parseCookies(cookieHeader) {
const cookies = {};
if (cookieHeader) {
cookieHeader.split(';').forEach(cookie => {
const [name, value] = cookie.trim().split('=');
cookies[name] = value;
});
}
return cookies;
}
function setCookie(res, name, value, options = {}) {
const cookieOptions = [
`${name}=${value}`,
options.path ? `Path=${options.path}` : 'Path=/',
options.domain ? `Domain=${options.domain}` : '',
options.expires ? `Expires=${options.expires.toUTCString()}` : '',
options.maxAge !== undefined ? `Max-Age=${options.maxAge}` : '', // allow Max-Age=0 for deletion
options.secure ? 'Secure' : '',
options.httpOnly ? 'HttpOnly' : '',
options.sameSite ? `SameSite=${options.sameSite}` : ''
].filter(opt => opt).join('; ');
// Append so that multiple cookies can be set on the same response
const existing = res.getHeader('Set-Cookie') || [];
res.setHeader('Set-Cookie', [].concat(existing, cookieOptions));
}
const server = http.createServer((req, res) => {
// Parse incoming cookies
const cookies = parseCookies(req.headers.cookie);
console.log('Received cookies:', cookies);
// Session management example
if (req.url === '/login') {
// Set session cookie
setCookie(res, 'sessionId', 'abc123', {
httpOnly: true,
secure: true,
sameSite: 'Strict',
maxAge: 24 * 60 * 60 // 1 day
});
setCookie(res, 'user', 'john_doe', {
maxAge: 7 * 24 * 60 * 60 // 1 week
});
res.end('Logged in successfully');
} else if (req.url === '/logout') {
// Clear cookies by setting expired dates
setCookie(res, 'sessionId', '', { maxAge: 0 });
setCookie(res, 'user', '', { maxAge: 0 });
res.end('Logged out successfully');
} else {
// Check for session
if (cookies.sessionId) {
res.end(`Welcome back! User: ${cookies.user || 'Unknown'}`);
} else {
res.end('Please log in');
}
}
});
Security headers help protect web applications from common attacks like XSS, clickjacking, and MIME sniffing.
Implementing security headers:
const http = require('http');
function setSecurityHeaders(res) {
// Content Security Policy
res.setHeader('Content-Security-Policy',
"default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'"
);
// Prevent MIME sniffing
res.setHeader('X-Content-Type-Options', 'nosniff');
// Prevent clickjacking
res.setHeader('X-Frame-Options', 'DENY');
// XSS protection
res.setHeader('X-XSS-Protection', '1; mode=block');
// Referrer policy
res.setHeader('Referrer-Policy', 'strict-origin-when-cross-origin');
// Feature policy
res.setHeader('Feature-Policy',
"geolocation 'none'; microphone 'none'; camera 'none'"
);
// Remove server identification
res.removeHeader('X-Powered-By');
}
const server = http.createServer((req, res) => {
// Set security headers for all responses
setSecurityHeaders(res);
// Your application logic
if (req.url === '/secure') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'Secure endpoint' }));
} else {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end(`
<html>
<head>
<title>Secure App</title>
</head>
<body>
<h1>Secure Application</h1>
<p>This app uses security headers</p>
</body>
</html>
`);
}
});
// HTTPS redirect middleware
function requireHTTPS(req, res, next) {
if (req.headers['x-forwarded-proto'] !== 'https' && process.env.NODE_ENV === 'production') {
return res.writeHead(301, {
'Location': `https://${req.headers.host}${req.url}`
}).end();
}
next();
}
Express.js is a minimal and flexible Node.js web application framework that provides a robust set of features for building web and mobile applications.
Key characteristics of Express.js:
Basic Express application:
const express = require('express');
const app = express();
const port = 3000;
// Define a route
app.get('/', (req, res) => {
res.send('Hello World from Express!');
});
// Start the server
app.listen(port, () => {
console.log(`Express server running at http://localhost:${port}`);
});
What Express provides:
Express vs Native HTTP module:
// Native HTTP module
const http = require('http');
const server = http.createServer((req, res) => {
if (req.url === '/' && req.method === 'GET') {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Hello World');
}
});
// Express.js
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello World');
});
Express abstracts away the low-level HTTP details and provides a clean, intuitive API for building web applications.
Express.js is the most popular Node.js web framework due to its simplicity, flexibility, and extensive ecosystem.
Key reasons for Express.js popularity:
1. Minimal and Unopinionated:
// Express doesn't force any specific structure
const express = require('express');
const app = express();
// You can organize your app however you want
app.get('/', (req, res) => res.send('Home'));
app.post('/users', (req, res) => res.send('Create User'));
// No forced MVC, no specific folder structure
2. Extensive Middleware Ecosystem:
// Thousands of middleware packages available
app.use(express.json()); // Parse JSON bodies
app.use(express.urlencoded({ extended: true })); // Parse form data
app.use(require('cors')()); // Enable CORS
app.use(require('helmet')()); // Security headers
app.use(require('morgan')('combined')); // Logging
3. Easy Routing System:
// Intuitive routing API
app.get('/users', (req, res) => { /* Get all users */ });
app.get('/users/:id', (req, res) => { /* Get user by ID */ });
app.post('/users', (req, res) => { /* Create user */ });
app.put('/users/:id', (req, res) => { /* Update user */ });
app.delete('/users/:id', (req, res) => { /* Delete user */ });
4. Large Community and Ecosystem:
5. Performance and Scalability:
// Lightweight and fast
// Can handle high traffic loads
// Suitable for microservices architecture
// Example of scalable structure
const userRoutes = require('./routes/users');
const productRoutes = require('./routes/products');
app.use('/api/users', userRoutes);
app.use('/api/products', productRoutes);
6. Template Engine Support:
// Works with multiple template engines
app.set('view engine', 'ejs'); // Embedded JavaScript
app.set('view engine', 'pug'); // Pug (formerly Jade)
app.set('view engine', 'handlebars'); // Handlebars
app.get('/', (req, res) => {
res.render('index', { title: 'Home Page', users: [/* array of users */] });
});
7. Corporate Backing and Long-term Support:
8. Learning Curve:
Express.js can be installed using npm (Node Package Manager) and initialized in a new or existing Node.js project.
Installing Express in a new project:
# Create a new directory for your project
mkdir my-express-app
cd my-express-app
# Initialize a new Node.js project
npm init -y
# Install Express.js
npm install express
Package.json after installation:
{
"name": "my-express-app",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js"
},
"dependencies": {
"express": "^4.18.0"
}
}
Basic Express application structure:
// index.js
const express = require('express');
const app = express();
const port = 3000;
app.get('/', (req, res) => {
res.send('Hello World!');
});
app.listen(port, () => {
console.log(`Server running at http://localhost:${port}`);
});
Running the application:
# Run with Node.js
node index.js
# Or add to package.json scripts
# "scripts": {
# "start": "node index.js",
# "dev": "nodemon index.js"
# }
npm start
# or for development with auto-restart
npm run dev
Installing with additional useful packages:
# Common packages for Express development
npm install express
npm install -D nodemon      # Auto-restart in development
npm install helmet          # Security headers
npm install cors            # CORS middleware
npm install morgan          # Logging middleware
npm install dotenv          # Environment variables
npm install compression     # Response compression
Development dependencies setup:
{
"name": "my-express-app",
"version": "1.0.0",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js",
"test": "jest"
},
"dependencies": {
"express": "^4.18.0",
"helmet": "^6.0.0",
"cors": "^2.8.5",
"morgan": "^1.10.0"
},
"devDependencies": {
"nodemon": "^2.0.0",
"jest": "^28.0.0"
}
}
Global vs Local installation:
# Don't do this (global installation not recommended)
npm install -g express
# Do this (local installation in your project)
npm install express
Using Express Generator (for quick setup):
# Install Express generator globally
npm install -g express-generator
# Create a new Express application
express my-app
cd my-app
npm install
# Start the application
npm start
Creating a simple Express server involves requiring the Express module, creating an application instance, defining routes, and starting the server.
Basic Express server:
const express = require('express');
const app = express();
const port = 3000;
// Define routes
app.get('/', (req, res) => {
res.send('Welcome to the Home Page!');
});
app.get('/about', (req, res) => {
res.send('About Us Page');
});
app.get('/contact', (req, res) => {
res.send('Contact Us Page');
});
// Start the server
app.listen(port, () => {
console.log(`Express server running on port ${port}`);
console.log(`Visit: http://localhost:${port}`);
});
Enhanced server with middleware:
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
// Middleware
app.use(express.json()); // Parse JSON bodies
app.use(express.urlencoded({ extended: true })); // Parse URL-encoded bodies
// Routes
app.get('/', (req, res) => {
res.json({
message: 'Welcome to our API',
timestamp: new Date().toISOString(),
version: '1.0.0'
});
});
app.get('/health', (req, res) => {
res.status(200).json({ status: 'OK', uptime: process.uptime() });
});
// 404 handler
app.use('*', (req, res) => {
res.status(404).json({ error: 'Route not found' });
});
// Error handler
app.use((err, req, res, next) => {
console.error(err.stack);
res.status(500).json({ error: 'Something went wrong!' });
});
app.listen(port, () => {
console.log(`Server running on port ${port}`);
});
Server with environment configuration:
const express = require('express');
require('dotenv').config(); // Load environment variables
const app = express();
const port = process.env.PORT || 3000;
const environment = process.env.NODE_ENV || 'development';
// Development-specific middleware
if (environment === 'development') {
const morgan = require('morgan');
app.use(morgan('dev')); // Log requests
}
// Production-specific middleware
if (environment === 'production') {
const helmet = require('helmet');
const compression = require('compression');
app.use(helmet()); // Security headers
app.use(compression()); // Compress responses
}
// Basic routes
app.get('/', (req, res) => {
res.send(`
<h1>Express Server</h1>
<p>Environment: ${environment}</p>
<p>Port: ${port}</p>
`);
});
// API routes
app.get('/api/users', (req, res) => {
res.json([
{ id: 1, name: 'John Doe', email: 'john@example.com' },
{ id: 2, name: 'Jane Smith', email: 'jane@example.com' }
]);
});
app.get('/health', (req, res) => {
res.json({
status: 'healthy',
timestamp: new Date().toISOString(),
environment: environment
});
});
app.listen(port, () => {
console.log(`${environment.toUpperCase()} server running on port ${port}`);
});
Server with different response types:
const express = require('express');
const app = express();
const port = 3000;
// Different response types
app.get('/text', (req, res) => {
res.send('Plain text response');
});
app.get('/html', (req, res) => {
res.send(`
<html>
<head><title>HTML Response</title></head>
<body>
<h1>HTML Content</h1>
<p>This is an HTML response from Express</p>
</body>
</html>
`);
});
app.get('/json', (req, res) => {
res.json({
message: 'JSON response',
data: {
users: 150,
active: true,
features: ['auth', 'payments', 'reports']
},
timestamp: new Date()
});
});
app.get('/status', (req, res) => {
res.status(201).json({
message: 'Resource created successfully',
id: 12345
});
});
app.get('/download', (req, res) => {
res.download('./files/document.pdf'); // File download
});
app.get('/redirect', (req, res) => {
res.redirect('/json'); // Redirect to another route
});
app.listen(port, () => {
console.log(`Server with multiple response types running on port ${port}`);
});
Key components of an Express server:
express() - Create application instance
app.get() - Handle GET requests
app.post() - Handle POST requests
app.use() - Use middleware
app.listen() - Start the server
res.send() - Send response
res.json() - Send JSON response
res.status() - Set HTTP status code
A route in Express defines how the application responds to client requests to a particular endpoint (URI) and HTTP method.
Basic route structure:
app.METHOD(PATH, HANDLER)
Where:
app - Express application instance
METHOD - HTTP method (get, post, put, delete, etc.)
PATH - URL path on the server
HANDLER - Function executed when the route is matched
Basic route examples:
const express = require('express');
const app = express();
// GET route
app.get('/', (req, res) => {
res.send('Home Page');
});
// POST route
app.post('/users', (req, res) => {
res.send('Create a new user');
});
// PUT route
app.put('/users/:id', (req, res) => {
res.send(`Update user ${req.params.id}`);
});
// DELETE route
app.delete('/users/:id', (req, res) => {
res.send(`Delete user ${req.params.id}`);
});
// PATCH route
app.patch('/users/:id', (req, res) => {
res.send(`Partially update user ${req.params.id}`);
});
Route with multiple HTTP methods:
// Handle multiple methods for the same path
app.route('/books')
.get((req, res) => {
res.send('Get all books');
})
.post((req, res) => {
res.send('Add a new book');
})
.put((req, res) => {
res.send('Update all books');
});
app.route('/books/:id')
.get((req, res) => {
res.send(`Get book ${req.params.id}`);
})
.put((req, res) => {
res.send(`Update book ${req.params.id}`);
})
.delete((req, res) => {
res.send(`Delete book ${req.params.id}`);
});
Route parameters:
// Route with parameters
app.get('/users/:userId', (req, res) => {
res.send(`User ID: ${req.params.userId}`);
});
app.get('/posts/:postId/comments/:commentId', (req, res) => {
res.json({
postId: req.params.postId,
commentId: req.params.commentId
});
});
// Optional parameters
app.get('/products/:category?', (req, res) => {
if (req.params.category) {
res.send(`Products in category: ${req.params.category}`);
} else {
res.send('All products');
}
});
Route handlers with multiple functions:
// Multiple callback functions
app.get('/example',
(req, res, next) => {
console.log('First callback');
next();
},
(req, res, next) => {
console.log('Second callback');
next();
},
(req, res) => {
res.send('Final response');
}
);
// Array of callback functions
const logRequest = (req, res, next) => {
console.log(`${req.method} ${req.path}`);
next();
};
const validateUser = (req, res, next) => {
if (req.query.token === 'secret') {
next();
} else {
res.status(401).send('Unauthorized');
}
};
const getUserData = (req, res) => {
res.json({ user: 'John Doe', role: 'admin' });
};
app.get('/admin', [logRequest, validateUser, getUserData]);
Route matching patterns:
// String patterns
app.get('/ab?cd', (req, res) => {
// Matches: /acd, /abcd
res.send('ab?cd');
});
app.get('/ab+cd', (req, res) => {
// Matches: /abcd, /abbcd, /abbbcd, etc.
res.send('ab+cd');
});
app.get('/ab*cd', (req, res) => {
// Matches: /abcd, /abxcd, /abANYcd, /ab123cd, etc.
res.send('ab*cd');
});
// Regular expressions
app.get(/.*fly$/, (req, res) => {
// Matches: /butterfly, /dragonfly, etc.
res.send('/.*fly$/');
});
app.get(/^\/users\/([0-9]+)$/, (req, res) => {
// Matches: /users/123, /users/456, etc.
res.send(`User ID: ${req.params[0]}`);
});
Common route methods:
app.get() - GET requests
app.post() - POST requests
app.put() - PUT requests
app.delete() - DELETE requests
app.patch() - PATCH requests
app.all() - All HTTP methods
app.use() - Middleware for all methods
Express provides flexible ways to handle multiple routes using different HTTP methods, route parameters, and organized routing structures.
Basic multiple route handling:
const express = require('express');
const app = express();
// Multiple HTTP methods for same path
app.get('/products', (req, res) => {
res.send('Get all products');
});
app.post('/products', (req, res) => {
res.send('Create new product');
});
app.get('/products/:id', (req, res) => {
res.send(`Get product ${req.params.id}`);
});
app.put('/products/:id', (req, res) => {
res.send(`Update product ${req.params.id}`);
});
app.delete('/products/:id', (req, res) => {
res.send(`Delete product ${req.params.id}`);
});
// Different paths with similar logic
app.get('/users', (req, res) => {
res.send('Get all users');
});
app.get('/admins', (req, res) => {
res.send('Get all admins');
});
app.get('/customers', (req, res) => {
res.send('Get all customers');
});
Using app.route() for cleaner code:
app.route('/articles')
.get((req, res) => {
res.send('Get all articles');
})
.post((req, res) => {
res.send('Create new article');
});
app.route('/articles/:id')
.get((req, res) => {
res.send(`Get article ${req.params.id}`);
})
.put((req, res) => {
res.send(`Update article ${req.params.id}`);
})
.delete((req, res) => {
res.send(`Delete article ${req.params.id}`);
});
Route parameters are named URL segments used to capture values at specific positions in the URL. They're defined with colon syntax and accessible via req.params.
Basic route parameters:
const express = require('express');
const app = express();
// Single parameter
app.get('/users/:userId', (req, res) => {
res.send(`User ID: ${req.params.userId}`);
});
// Multiple parameters
app.get('/posts/:postId/comments/:commentId', (req, res) => {
res.json({
postId: req.params.postId,
commentId: req.params.commentId
});
});
// Optional parameters (with ?)
app.get('/books/:category?', (req, res) => {
if (req.params.category) {
res.send(`Books in category: ${req.params.category}`);
} else {
res.send('All books');
}
});
// Parameter with constraints using regex
app.get('/products/:id(\\d+)', (req, res) => {
// Only matches if :id is numeric
res.send(`Product ID: ${req.params.id}`);
});
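When the same lookup or validation should run whenever a particular parameter appears, Express also offers app.param(). A minimal sketch; the in-memory users array is hypothetical and stands in for a database call:
// Hypothetical in-memory data for illustration
const users = [{ id: '1', name: 'John Doe' }, { id: '2', name: 'Jane Smith' }];
// Runs once per request whenever a matched route contains :userId
app.param('userId', (req, res, next, id) => {
const user = users.find(u => u.id === id);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
req.user = user; // Attach the loaded record to the request
next();
});
app.get('/users/:userId', (req, res) => {
res.json(req.user); // Already loaded by app.param()
});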
Query parameters are key-value pairs in the URL after the ? symbol. Express automatically parses them and makes them available in req.query.
Accessing query parameters:
const express = require('express');
const app = express();
// Basic query parameters
app.get('/search', (req, res) => {
const { q, category, sort } = req.query;
res.json({
query: q,
category: category,
sort: sort,
allParams: req.query
});
});
// URL: /search?q=javascript&category=books&sort=price
// Multiple values for same parameter
app.get('/filters', (req, res) => {
// URL: /filters?tags=js&tags=node&tags=express
const tags = req.query.tags ? [].concat(req.query.tags) : []; // Normalize: missing, single string, or array
res.json({ tags });
});
// With default values
app.get('/products', (req, res) => {
const page = parseInt(req.query.page) || 1;
const limit = parseInt(req.query.limit) || 10;
const sort = req.query.sort || 'name';
res.json({
page,
limit,
sort,
message: `Showing page ${page} with ${limit} items, sorted by ${sort}`
});
});
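Express 4's default query parser (the "extended" mode backed by the qs library) also understands bracket notation, so nested filters arrive as objects rather than flat strings. A small sketch; the /nested path and filter keys are illustrative:
// URL: /nested?filter[status]=active&filter[minPrice]=10
app.get('/nested', (req, res) => {
// req.query.filter is an object: { status: 'active', minPrice: '10' }
// Note: values still arrive as strings and should be validated/converted
res.json(req.query);
});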
app.use() is used to mount middleware functions at a specified path. It can be used for application-level middleware, router-level middleware, and error handling.
Using app.use() for middleware:
const express = require('express');
const app = express();
// Application-level middleware (runs for every request)
app.use((req, res, next) => {
console.log(`${req.method} ${req.url} at ${new Date()}`);
next();
});
// Mount middleware at specific path
app.use('/api', (req, res, next) => {
console.log('API route accessed');
next();
});
// Multiple middleware functions
app.use(express.json()); // Parse JSON bodies
app.use(express.urlencoded({ extended: true })); // Parse form data
// Third-party middleware
app.use(require('cors')()); // Enable CORS
app.use(require('helmet')()); // Security headers
// Serving static files
app.use(express.static('public'));
// Router-level middleware
app.use('/users', require('./routes/users'));
Middleware are functions that have access to the request object (req), response object (res), and the next middleware function in the application's request-response cycle.
Middleware execution flow:
Request → Middleware 1 → Middleware 2 → ... → Route Handler → Response
Types of middleware:
const express = require('express');
const app = express();
// 1. Application-level middleware
app.use((req, res, next) => {
console.log('Application middleware - runs for every request');
next();
});
// 2. Router-level middleware
const router = express.Router();
router.use((req, res, next) => {
console.log('Router middleware - runs for router routes');
next();
});
// 3. Error-handling middleware
app.use((err, req, res, next) => {
console.error(err.stack);
res.status(500).send('Something broke!');
});
// 4. Built-in middleware
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
// 5. Third-party middleware
app.use(require('cors')());
app.use(require('helmet')());
Custom middleware functions are created to handle specific tasks like authentication, logging, data validation, etc.
Creating custom middleware:
const express = require('express');
const app = express();
// Simple logging middleware
app.use((req, res, next) => {
console.log(`${req.method} ${req.originalUrl} - ${new Date().toISOString()}`);
next();
});
// Authentication middleware
const authenticate = (req, res, next) => {
const token = req.headers.authorization;
if (!token) {
return res.status(401).json({ error: 'No token provided' });
}
// Verify token (simplified)
if (token !== 'secret-token') {
return res.status(401).json({ error: 'Invalid token' });
}
req.user = { id: 1, name: 'John Doe' }; // Attach user to request
next();
};
// Validation middleware
const validateUser = (req, res, next) => {
const { name, email, age } = req.body;
if (!name || !email) {
return res.status(400).json({ error: 'Name and email are required' });
}
if (age && (age < 0 || age > 150)) {
return res.status(400).json({ error: 'Invalid age' });
}
next();
};
// Usage
app.post('/users', authenticate, validateUser, (req, res) => {
res.json({ message: 'User created', user: req.user });
});
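A common refinement is a middleware factory: a function that takes options and returns a middleware function, so the same logic can be reused with different settings. A minimal sketch building on the authentication example above; requireRole and the /admin/stats route are illustrative:
// Factory: returns a middleware configured for a specific role
// (assumes the authenticate middleware above attaches req.user with a role property)
const requireRole = (role) => (req, res, next) => {
if (!req.user) {
return res.status(401).json({ error: 'Not authenticated' });
}
if (req.user.role !== role) {
return res.status(403).json({ error: 'Insufficient permissions' });
}
next();
};
// Usage: authenticate first, then enforce a role per route
app.get('/admin/stats', authenticate, requireRole('admin'), (req, res) => {
res.json({ stats: 'admin-only data' });
});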
next() is a function passed to middleware that, when called, passes control to the next middleware function in the stack.
Using next() in middleware:
const express = require('express');
const app = express();
// Calling next() to continue
app.use((req, res, next) => {
console.log('Middleware 1');
next(); // Pass control to next middleware
});
app.use((req, res, next) => {
console.log('Middleware 2');
next(); // Continue to route handler
});
app.get('/', (req, res) => {
res.send('Hello World');
});
// Using next('route') to skip remaining middleware
app.get('/user/:id', (req, res, next) => {
if (req.params.id === '0') {
next('route'); // Skip to next route with same path
} else {
next(); // Continue to next middleware
}
}, (req, res, next) => {
res.send('Regular user');
});
app.get('/user/:id', (req, res) => {
res.send('Special user');
});
// Error handling with next(err)
app.get('/error', (req, res, next) => {
const err = new Error('Something went wrong');
err.status = 500;
next(err); // Pass error to error handler
});
404 errors are handled by creating a catch-all route at the end of your middleware stack that responds when no other routes match.
404 error handling:
const express = require('express');
const app = express();
// Your regular routes
app.get('/', (req, res) => {
res.send('Home Page');
});
app.get('/about', (req, res) => {
res.send('About Page');
});
// 404 handler - catches all unmatched routes
app.use('*', (req, res) => {
res.status(404).json({
error: 'Route not found',
path: req.originalUrl,
method: req.method,
timestamp: new Date().toISOString()
});
});
// Alternative 404 handler with an HTML response (register only one 404 handler - the first match wins)
app.use('*', (req, res) => {
res.status(404).send(`
<title>404 - Not Found</title>
<h1>404 - Page Not Found</h1>
<p>The requested URL ${req.originalUrl} was not found on this server.</p>
<a href="/">Go to Home Page</a>
`);
});
Global error handling is done using error-handling middleware with four parameters (err, req, res, next) placed after all other middleware.
Global error handler:
const express = require('express');
const app = express();
// Regular routes
app.get('/', (req, res) => {
res.send('Home Page');
});
app.get('/error', (req, res) => {
throw new Error('This is a test error');
});
// Async error example
app.get('/async-error', async (req, res, next) => {
try {
await someAsyncOperation(); // Placeholder for any promise-returning call
res.send('Success');
} catch (error) {
next(error); // Pass to error handler
}
});
// Global error handling middleware
app.use((err, req, res, next) => {
console.error('Error stack:', err.stack);
// Set default status code
const statusCode = err.status || err.statusCode || 500;
// Development vs production error responses
const isDevelopment = process.env.NODE_ENV === 'development';
res.status(statusCode).json({
error: isDevelopment ? err.message : 'Something went wrong!',
...(isDevelopment && { stack: err.stack }),
...(err.details && { details: err.details }),
timestamp: new Date().toISOString()
});
});
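In Express 4, rejected promises in async handlers are not forwarded automatically, so each async route needs a try/catch like the one above. A small wrapper can centralize that; a minimal sketch (asyncHandler and fetchUsersFromDb are hypothetical names, not part of Express):
// Wrap an async route handler so rejections reach the global error handler
const asyncHandler = (fn) => (req, res, next) => {
Promise.resolve(fn(req, res, next)).catch(next);
};
app.get('/async-users', asyncHandler(async (req, res) => {
const users = await fetchUsersFromDb(); // Placeholder for any promise-returning call
res.json(users);
}));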
JSON responses are sent using res.json() which automatically sets the Content-Type header to application/json.
Sending JSON responses:
const express = require('express');
const app = express();
// Basic JSON response
app.get('/api/user', (req, res) => {
res.json({
id: 1,
name: 'John Doe',
email: 'john@example.com',
active: true
});
});
// JSON with status code
app.post('/api/users', (req, res) => {
// Create user logic...
res.status(201).json({
success: true,
message: 'User created successfully',
data: {
id: 123,
name: 'New User'
}
});
});
// Error JSON response
app.get('/api/error', (req, res) => {
res.status(400).json({
success: false,
error: {
code: 'VALIDATION_ERROR',
message: 'Invalid input data',
details: ['Name is required', 'Email must be valid']
}
});
});
// JSON with custom headers
app.get('/api/data', (req, res) => {
res.set({
'Content-Type': 'application/json',
'Cache-Control': 'no-cache',
'X-Custom-Header': 'custom-value'
}).json({
data: [1, 2, 3, 4, 5],
count: 5,
timestamp: new Date().toISOString()
});
});
res.sendFile() sends a file as an HTTP response. It automatically sets the Content-Type based on the file extension.
Using res.sendFile():
const express = require('express');
const path = require('path');
const app = express();
// Send file with absolute path
app.get('/download', (req, res) => {
const filePath = path.join(__dirname, 'files', 'document.pdf');
res.sendFile(filePath);
});
// Send file with custom filename
app.get('/download-custom', (req, res) => {
const filePath = path.join(__dirname, 'files', 'report.pdf');
res.sendFile(filePath, {
headers: {
'Content-Disposition': 'attachment; filename="monthly-report.pdf"'
}
});
});
// Send HTML file
app.get('/page', (req, res) => {
res.sendFile(path.join(__dirname, 'views', 'index.html'));
});
// Send file with error handling
app.get('/safe-download', (req, res) => {
const fileName = req.query.file;
// Security: Validate filename to prevent directory traversal
if (!fileName || fileName.includes('..')) {
return res.status(400).send('Invalid filename');
}
const filePath = path.join(__dirname, 'uploads', fileName);
res.sendFile(filePath, (err) => {
if (err) {
console.error('File send error:', err);
if (!res.headersSent) {
res.status(404).send('File not found');
}
}
});
});
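res.sendFile() also accepts a root option, which lets you pass a relative filename while Express resolves it inside a fixed directory and refuses paths that try to escape it. A minimal sketch of the same safe-download idea using root; the /safe-download-v2 route is illustrative:
app.get('/safe-download-v2', (req, res) => {
const fileName = req.query.file;
if (!fileName) {
return res.status(400).send('Missing filename');
}
// Express resolves fileName inside the uploads directory;
// requests that attempt to escape the root produce an error
res.sendFile(fileName, { root: path.join(__dirname, 'uploads') }, (err) => {
if (err && !res.headersSent) {
res.status(404).send('File not found');
}
});
});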
Static files (CSS, JavaScript, images) are served using express.static() middleware.
Serving static files:
const express = require('express');
const path = require('path');
const app = express();
// Serve files from 'public' directory
app.use(express.static('public'));
// Now you can access:
// http://localhost:3000/css/style.css
// http://localhost:3000/js/app.js
// http://localhost:3000/images/logo.png
// Multiple static directories
app.use(express.static('public'));
app.use(express.static('uploads'));
// Virtual path prefix
app.use('/static', express.static('public'));
// Now files are available at:
// http://localhost:3000/static/css/style.css
// With cache control
app.use('/assets', express.static('public', {
maxAge: '1d', // Cache for 1 day
etag: false // Disable ETag generation
}));
// Security: Don't serve sensitive files
app.use(express.static('public', {
dotfiles: 'deny', // Deny access to dotfiles (.env, .gitignore)
index: false // Don't serve directory index
}));
express.static() is built-in Express middleware that serves static files and is the simplest way to serve images, CSS, JavaScript, etc.
express.static() options:
const express = require('express');
const app = express();
app.use(express.static('public', {
// Cache control
maxAge: '1d', // Cache for 1 day
etag: true, // Enable ETag (default)
// Security options
dotfiles: 'ignore', // 'allow', 'deny', 'ignore'
index: 'index.html', // Default file to serve
// Redirect trailing slash
redirect: true,
// Custom headers
setHeaders: (res, path, stat) => {
res.set('X-Custom-Header', 'static-file');
// Set specific headers for certain file types
if (path.endsWith('.css')) {
res.set('Content-Type', 'text/css');
}
}
}));
// Multiple static configurations
app.use('/css', express.static('styles', {
maxAge: '7d',
index: false
}));
app.use('/js', express.static('scripts', {
maxAge: '1h'
}));
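A common companion pattern for single-page applications is to serve the built assets statically and fall back to index.html for any other GET request so client-side routing can take over. A minimal sketch, assuming the build output lives in a public directory:
const path = require('path');
// Serve built assets (js, css, images) with longer caching
app.use(express.static(path.join(__dirname, 'public'), { maxAge: '7d' }));
// Fall back to index.html for any other GET request (client-side routing)
app.get('*', (req, res) => {
res.sendFile(path.join(__dirname, 'public', 'index.html'));
});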
JSON request bodies are parsed using express.json() middleware, which makes the parsed data available in req.body.
Parsing JSON bodies:
const express = require('express');
const app = express();
// Parse JSON bodies
app.use(express.json());
// Now req.body contains parsed JSON
app.post('/api/users', (req, res) => {
const { name, email, age } = req.body;
// Validate and process the data
if (!name || !email) {
return res.status(400).json({
error: 'Name and email are required'
});
}
res.status(201).json({
message: 'User created',
user: { name, email, age }
});
});
// With custom options
app.use(express.json({
limit: '10mb', // Maximum request body size
strict: true, // Only accept arrays and objects
type: 'application/json' // Only parse when Content-Type matches
}));
// Example request that would be parsed:
// POST /api/users
// Content-Type: application/json
// {
// "name": "John Doe",
// "email": "john@example.com",
// "age": 30
// }
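When a client sends malformed JSON, express.json() forwards a SyntaxError with a 400 status to the error-handling middleware, where it can be turned into a clean API response instead of a stack trace. One common approach, sketched under the assumption that the default express.json() parser is in use:
// Catch malformed-JSON errors raised by express.json() before the generic handler
app.use((err, req, res, next) => {
if (err instanceof SyntaxError && err.status === 400) {
return res.status(400).json({ error: 'Invalid JSON in request body' });
}
next(err); // Anything else goes to the next error handler
});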
body-parser was previously a separate middleware for parsing request bodies, but is now built into Express as express.json() and express.urlencoded().
Using body-parser (legacy approach):
// Old way (before Express 4.16)
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
// Modern way (Express 4.16+)
app.use(express.json()); // Replaces bodyParser.json()
app.use(express.urlencoded({ extended: true })); // Replaces bodyParser.urlencoded()
// Comparison of old vs new
const oldWay = {
json: "app.use(bodyParser.json())",
urlencoded: "app.use(bodyParser.urlencoded({ extended: true }))"
};
const newWay = {
json: "app.use(express.json())",
urlencoded: "app.use(express.urlencoded({ extended: true }))"
};
// Both parse:
// - JSON bodies (Content-Type: application/json)
// - URL-encoded bodies (Content-Type: application/x-www-form-urlencoded)
express.Router() creates modular, mountable route handlers. A Router instance is a complete middleware and routing system.
Using express.Router():
// routes/users.js
const express = require('express');
const router = express.Router();
// Middleware specific to this router
router.use((req, res, next) => {
console.log('User routes accessed');
next();
});
// Define routes
router.get('/', (req, res) => {
res.json({ message: 'Get all users' });
});
router.get('/:id', (req, res) => {
res.json({ message: `Get user ${req.params.id}` });
});
router.post('/', (req, res) => {
res.json({ message: 'Create user', data: req.body });
});
router.put('/:id', (req, res) => {
res.json({ message: `Update user ${req.params.id}` });
});
router.delete('/:id', (req, res) => {
res.json({ message: `Delete user ${req.params.id}` });
});
module.exports = router;
// In main app file
const express = require('express');
const app = express();
const userRoutes = require('./routes/users');
app.use('/api/users', userRoutes);
// Now all user routes are available under /api/users
Route modularization involves organizing routes into separate files using express.Router() for better maintainability.
Route modularization structure:
// Project structure:
// routes/
// users.js
// products.js
// orders.js
// app.js
// routes/users.js
const express = require('express');
const router = express.Router();
router.get('/', (req, res) => {
res.json({ users: [] });
});
router.post('/', (req, res) => {
// Create user logic
res.status(201).json({ message: 'User created' });
});
module.exports = router;
// routes/products.js
const express = require('express');
const router = express.Router();
router.get('/', (req, res) => {
res.json({ products: [] });
});
router.get('/:id', (req, res) => {
res.json({ product: { id: req.params.id } });
});
module.exports = router;
// app.js
const express = require('express');
const app = express();
const userRoutes = require('./routes/users');
const productRoutes = require('./routes/products');
app.use('/api/users', userRoutes);
app.use('/api/products', productRoutes);
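When modular routers are nested (for example, comments under a post), the child router needs mergeParams: true to see parameters captured by the parent mount path. A minimal sketch; the routes/comments.js file is hypothetical and follows the structure above:
// routes/comments.js
const express = require('express');
// mergeParams lets this router read :postId from the parent mount path
const router = express.Router({ mergeParams: true });
router.get('/', (req, res) => {
res.json({ postId: req.params.postId, comments: [] });
});
module.exports = router;
// app.js
const commentRoutes = require('./routes/comments');
app.use('/api/posts/:postId/comments', commentRoutes);
// GET /api/posts/42/comments -> { "postId": "42", "comments": [] }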
URL parameters are captured from the route path and accessed via req.params object.
URL parameter examples:
const express = require('express');
const app = express();
app.get('/users/:userId/posts/:postId', (req, res) => {
res.json({
userId: req.params.userId,
postId: req.params.postId
});
});
Multiple middleware functions can be applied by passing an array or multiple arguments to app.use() or to route methods.
Multiple middleware examples:
const express = require('express');
const app = express();
const auth = (req, res, next) => { next(); };
const log = (req, res, next) => { next(); };
const validate = (req, res, next) => { next(); };
app.use([auth, log]); // Array of middlewares
app.get('/protected', auth, validate, (req, res) => {
res.send('Protected route');
});
Request logging can be implemented using custom middleware or third-party packages like morgan.
Custom request logging:
const express = require('express');
const app = express();
app.use((req, res, next) => {
console.log(`${new Date().toISOString()} - ${req.method} ${req.url}`);
next();
});
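For production-grade logging, morgan (a third-party package installed with npm install morgan) provides predefined formats and can write to a stream. A minimal sketch, assuming morgan is installed; the access.log path is illustrative:
const fs = require('fs');
const path = require('path');
const morgan = require('morgan');
// Log to the console using the concise 'dev' format
app.use(morgan('dev'));
// Additionally append Apache-style 'combined' logs to a file
const accessLogStream = fs.createWriteStream(path.join(__dirname, 'access.log'), { flags: 'a' });
app.use(morgan('combined', { stream: accessLogStream }));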
Get personalized guidance from our course advisor. We'll help you understand if this program is right for your career goals.