
Comprehensive React Testing: Handling API Calls with Mock Service Worker

Testing can be tricky, especially when it comes to handling API calls.

Traditionally, we’ve mocked API calls in React apps using libraries like Jest’s manual mocks or axios-mock-adapter. While these approaches work, they come with significant drawbacks:

  • Disconnect from reality: These mocks substitute the real fetch or axios functions with mock versions. As a result, your tests don’t interact with the actual network, meaning they don’t test how your application would behave with real HTTP requests and responses. This can lead to tests that pass even when there are underlying issues with the network layer in a production environment.
__mocks__/fetch.js

export default function fetch(url) {
  return Promise.resolve({
    json: () => Promise.resolve({ data: 'mocked data' }),
  });
}

// In your test file
import fetch from 'node-fetch'; // or 'isomorphic-fetch'

jest.mock('node-fetch', () => require('../__mocks__/fetch'));

test('fetches data from an API', async () => {
  const response = await fetch('/api/data');
  const data = await response.json();
  expect(data).toEqual({ data: 'mocked data' });
});

The mock function returns a hardcoded response, so you aren’t testing how the real fetch would behave, such as how it handles network errors, timeouts, or different response formats.
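To make this concrete, here is a small self-contained sketch (mockFetch and loadData are hypothetical names, not from a real library) showing how a hardcoded mock quietly skips an entire error-handling branch:

```javascript
// mockFetch stands in for the manual mock above: it always resolves
// successfully, so error-handling code is never exercised in tests.
function mockFetch(url) {
  return Promise.resolve({
    ok: true, // a real fetch may return ok: false, or reject entirely
    json: () => Promise.resolve({ data: 'mocked data' }),
  });
}

// Application code with an error branch the mock can never reach.
async function loadData(fetchImpl) {
  const response = await fetchImpl('/api/data');
  if (!response.ok) {
    // This branch never runs against mockFetch,
    // so a regression here would go unnoticed.
    throw new Error('Request failed');
  }
  return response.json();
}

loadData(mockFetch).then((data) => console.log(data));
```

Every test written against mockFetch takes the happy path, no matter what the real server would do.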

  • Maintenance overhead: When your API changes, you have to update your mocks in multiple places, which is time-consuming and error-prone.

__mocks__/fetch.js

export default function fetch(url) {
  if (url === '/api/data') {
    return Promise.resolve({
      json: () => Promise.resolve({ data: 'mocked data' }),
    });
  }
  if (url === '/api/other-data') {
    return Promise.resolve({
      json: () => Promise.resolve({ data: 'other mocked data' }),
    });
  }
  // Add new cases as the API grows
}

// In your test file
test('fetches other data from an API', async () => {
  const response = await fetch('/api/other-data');
  const data = await response.json();
  expect(data).toEqual({ data: 'other mocked data' });
});
  • Limited scope: They typically only work for the specific library you’re mocking (e.g., axios mocks don’t help if you’re using fetch).
__mocks__/fetch.js

// Switching from axios to fetch
export default function fetch(url) {
  return Promise.resolve({
    json: () => Promise.resolve({ data: 'mocked data' }),
  });
}

// Test file using fetch instead of axios
test('fetches data using fetch', async () => {
  const response = await fetch('/api/data');
  const data = await response.json();
  expect(data).toEqual({ data: 'mocked data' });
});

// Test file using axios
import axios from 'axios';
import MockAdapter from 'axios-mock-adapter';

const mock = new MockAdapter(axios);
mock.onGet('/api/data').reply(200, { data: 'mocked data' });

test('fetches data using axios', async () => {
  const response = await axios.get('/api/data');
  expect(response.data).toEqual({ data: 'mocked data' });
});

Jest’s manual mocks and axios-mock-adapter are specific to the library you’re mocking. If you switch from axios to fetch, you need a different mocking strategy.

  • Inconsistency between environments: Your mocks might work differently in Jest versus a browser environment, leading to false positives.
// Jest test environment
import fetch from 'node-fetch';

jest.mock('node-fetch', () => require('../__mocks__/fetch'));

test('fetches data in Jest', async () => {
  const response = await fetch('/api/data');
  const data = await response.json();
  expect(data).toEqual({ data: 'mocked data' });
});

// In a real browser environment
fetch('/api/data')
  .then(response => response.json())
  .then(data => console.log(data)) // May not match the mocked data
  .catch(error => console.error('Error:', error));

Mocks might behave differently in Jest versus a browser environment, leading to false positives. For example, a mock might pass in Jest but fail in a real browser where the actual network request behaves differently.

A Different Approach with Mock Service Worker

MSW takes a fundamentally different approach. Instead of replacing your application’s network calls, it intercepts actual HTTP requests at the network level. This seemingly small difference has profound implications:

  • Realistic testing: Your application code runs exactly as it would in production. It makes real fetch or axios calls, which are then intercepted by MSW.
  • Environment consistency: The same MSW setup works in Node.js for unit tests and in browsers for integration tests or even development mocking.
  • API-agnostic: Whether you’re using fetch, axios, or any other HTTP client, MSW intercepts all requests the same way.
  • Easier debugging: Because your app is making real network calls, you can use browser dev tools to inspect these requests, even in tests.

Dashboard Scenario

You’re tasked with building an admin dashboard for a SaaS product. This dashboard needs to display:

  1. A list of active users
  2. Key performance metrics for each user
  3. Overall system health status

To gather this data, your React application needs to make three distinct API calls:

  1. Fetch the list of active users
  2. For each user, fetch their individual performance metrics
  3. Fetch the current system health status

This scenario is complex because it involves multiple, interdependent API calls. It’s a perfect case to demonstrate the power of MSW in testing. Here’s how you might set up the MSW handlers for this scenario.

src/mocks/handlers.js

import { http } from 'msw'

export const handlers = [
  // Handler for fetching active users
  http.get('/api/users/active', async () => {
    return new Response(
      JSON.stringify([
        { id: 1, name: 'Alice', email: '[email protected]' },
        { id: 2, name: 'Bob', email: '[email protected]' },
      ])
    )
  }),
  // Handler for fetching user metrics
  // (path parameters arrive on the resolver's `params` argument)
  http.get('/api/users/:userId/metrics', async ({ params }) => {
    const { userId } = params
    const metrics = {
      1: { activeProjects: 3, completedTasks: 28, responseTime: '2.3 hours' },
      2: { activeProjects: 5, completedTasks: 42, responseTime: '1.8 hours' },
    }
    return new Response(JSON.stringify(metrics[userId]))
  }),
  // Handler for fetching system health status
  http.get('/api/system/health', async () => {
    return new Response(
      JSON.stringify({ status: 'healthy', uptime: '99.99%', connections: 1205 })
    )
  }),
]
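The test files that follow import a `server` instance from ../mocks/server. Assuming the conventional MSW project layout (this file is a sketch, not shown in the original setup), it is a small piece of wiring:

```javascript
// src/mocks/server.js -- Node-side MSW setup (sketch).
// setupServer registers the request handlers with Node's request
// interception, so the same handlers work in Jest without a browser.
import { setupServer } from 'msw/node'
import { handlers } from './handlers'

export const server = setupServer(...handlers)
```

The same handlers array can also feed `setupWorker` from 'msw/browser' for in-browser mocking, which is what makes the setup reusable across environments.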

Now, let’s craft a comprehensive test for our AdminDashboard component. Here’s what we need to test.

  • Correct rendering of user info, metrics, and system status from multiple API endpoints.
  • Proper handling of asynchronous API calls.
  • Accurate display of data after all API responses are received.
src/components/AdminDashboard.test.js

import React from 'react'
import { render, screen, waitFor } from '@testing-library/react'
import { server } from '../mocks/server'
import AdminDashboard from './AdminDashboard'

// Start MSW before all tests
beforeAll(() => server.listen())
// Reset MSW handlers after each test to ensure test isolation
afterEach(() => server.resetHandlers())
// Clean up MSW after all tests
afterAll(() => server.close())

test('renders admin dashboard with user metrics and system health', async () => {
  render(<AdminDashboard />)
  // Wait for all API responses
  await waitFor(() => {
    expect(screen.getByText('Alice ([email protected])')).toBeInTheDocument()
    expect(screen.getByText('Bob ([email protected])')).toBeInTheDocument()
  })
  // Check that user metrics are displayed correctly
  expect(screen.getByText('Alice: 3 active projects, 28 completed tasks')).toBeInTheDocument()
  expect(screen.getByText('Bob: 5 active projects, 42 completed tasks')).toBeInTheDocument()
  // Check system health status
  expect(screen.getByText('System Status: healthy')).toBeInTheDocument()
  expect(screen.getByText('Uptime: 99.99%')).toBeInTheDocument()
  expect(screen.getByText('Active Connections: 1205')).toBeInTheDocument()
})

The test above highlights several important aspects:

  • Complex Data Dependencies: The dashboard relies on multiple API calls, some of which (user metrics) depend on the results of others (active users list).
  • Different Data Types: We’re dealing with both lists (users) and individual records (metrics, system health).
  • Real-world Complexity: This mimics actual dashboard requirements, with various data points and potential for different states (e.g., loading, error, empty data).

By using MSW, we can simulate all these API calls in our test environment. This allows us to write a comprehensive test that covers the entire data flow of our AdminDashboard component, from API requests to final rendered output.

The power of this approach becomes even more apparent when you consider testing edge cases. For instance, you could easily simulate the following scenarios.

  • A scenario where there are no active users

To simulate a scenario with no active users, you can adjust the /api/users/active handler to return an empty array.

src/mocks/handlers.js

import { http } from 'msw'

export const handlers = [
  http.get('/api/users/active', async () => {
    return new Response(JSON.stringify([])) // Simulate no active users
  }),
  // Other handlers remain the same...
]
  • A case where the system health is critical

To simulate a case where the system health is critical, you can modify the /api/system/health handler to return critical status.

src/mocks/handlers.js

import { http } from 'msw'

export const handlers = [
  // Other handlers remain the same...
  http.get('/api/system/health', async () => {
    return new Response(
      JSON.stringify({ status: 'critical', uptime: '90.00%', connections: 500 })
    )
  }),
]
  • Situations where some API calls succeed while others fail

To simulate a situation where some API calls succeed while others fail, you can modify the handlers to return errors for specific endpoints:

src/mocks/handlers.js

import { http } from 'msw'

export const handlers = [
  http.get('/api/users/active', async () => {
    return new Response(
      JSON.stringify([
        { id: 1, name: 'Alice', email: '[email protected]' },
        { id: 2, name: 'Bob', email: '[email protected]' },
      ])
    )
  }),
  http.get('/api/users/:userId/metrics', async ({ params }) => {
    const { userId } = params
    if (userId === '2') {
      return new Response(
        JSON.stringify({ error: 'Failed to fetch user metrics' }),
        { status: 500 }
      )
    }
    return new Response(
      JSON.stringify({
        activeProjects: userId === '1' ? 3 : 5,
        completedTasks: userId === '1' ? 28 : 42,
        responseTime: userId === '1' ? '2.3 hours' : '1.8 hours',
      })
    )
  }),
  http.get('/api/system/health', async () => {
    return new Response(
      JSON.stringify({ status: 'healthy', uptime: '99.99%', connections: 1205 })
    )
  }),
]

Using These Edge Cases in Tests

Now you can use these modified handlers in your tests to simulate the different scenarios.

src/components/AdminDashboard.test.js

import React from 'react'
import { render, screen, waitFor } from '@testing-library/react'
import { http } from 'msw'
import { server } from '../mocks/server'
import AdminDashboard from './AdminDashboard'

// Set up and tear down MSW server
beforeAll(() => server.listen())
afterEach(() => server.resetHandlers())
afterAll(() => server.close())

test('renders admin dashboard with no active users', async () => {
  server.use(
    http.get('/api/users/active', () => {
      return new Response(JSON.stringify([])) // No active users
    })
  )
  render(<AdminDashboard />)
  await waitFor(() => {
    expect(screen.getByText('No active users')).toBeInTheDocument()
  })
})

test('renders admin dashboard with critical system health', async () => {
  server.use(
    http.get('/api/system/health', () => {
      return new Response(
        JSON.stringify({
          status: 'critical', // Simulate critical system health
          uptime: '90.00%',
          connections: 500,
        })
      )
    })
  )
  render(<AdminDashboard />)
  await waitFor(() => {
    expect(screen.getByText('System Status: critical')).toBeInTheDocument()
    expect(screen.getByText('Uptime: 90.00%')).toBeInTheDocument()
    expect(screen.getByText('Active Connections: 500')).toBeInTheDocument()
  })
})

test('renders admin dashboard with partial API failures', async () => {
  server.use(
    http.get('/api/users/:userId/metrics', ({ params }) => {
      const { userId } = params
      if (userId === '2') {
        // Simulate an error for user 2's metrics
        return new Response(
          JSON.stringify({ error: 'Failed to fetch user metrics' }),
          { status: 500 }
        )
      }
      return new Response(
        JSON.stringify({
          activeProjects: userId === '1' ? 3 : 5,
          completedTasks: userId === '1' ? 28 : 42,
          responseTime: userId === '1' ? '2.3 hours' : '1.8 hours',
        })
      )
    })
  )
  render(<AdminDashboard />)
  await waitFor(() => {
    expect(screen.getByText('Alice: 3 active projects, 28 completed tasks')).toBeInTheDocument()
    expect(screen.getByText('Failed to fetch user metrics')).toBeInTheDocument()
  })
})
  • No Active Users: The first test sets up the handler to return an empty array for the active users API, simulating a scenario where no users are active.

  • Critical System Health: The second test modifies the system health API response to reflect a critical system status.

  • Partial API Failures: The third test simulates a situation where one of the API calls (fetching metrics for a specific user) fails while the others succeed.

By dynamically adjusting your MSW handlers, you can create a wide range of scenarios to thoroughly test your application’s behavior under different conditions. This flexibility is one of the most powerful features of using MSW.

How MSW Works Under the Hood

MSW operates by intercepting network requests at the service worker level. This is a key distinction from traditional mocking methods, which typically override specific functions (like fetch or axios). Here’s a quick overview of how MSW does its magic:

  • Service Worker Integration: MSW installs a service worker in your application. This service worker intercepts all network requests made by your application, regardless of the HTTP client used (e.g., fetch, axios, etc.).

  • Request Matching: When a request is intercepted, MSW checks it against a list of handlers you’ve defined. Each handler specifies a route (e.g., /api/users/active) and the conditions under which it should respond.

  • Mocked Responses: If a request matches one of your handlers, MSW constructs a mocked response using the data you’ve provided. This response is then sent back to your application as if it were a real network response.

  • Transparency and Flexibility: Because MSW operates at the network level, your application behaves exactly as it would with a real backend. This means you can use real browser dev tools to inspect requests and responses, making debugging and testing more transparent and closer to real-world scenarios.
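To illustrate the request-matching step, here is a simplified, self-contained sketch (not MSW's actual implementation) of how a path pattern like /api/users/:userId/metrics can be matched against a request URL and its parameters extracted:

```javascript
// Convert a path pattern with :param segments into a regex,
// match it against a pathname, and extract the named parameters.
function matchPath(pattern, pathname) {
  const names = [];
  const regex = new RegExp(
    '^' +
      pattern.replace(/:([^/]+)/g, (_, name) => {
        names.push(name); // remember each parameter name in order
        return '([^/]+)'; // capture one path segment per parameter
      }) +
      '$'
  );
  const match = pathname.match(regex);
  if (!match) return null; // no handler applies to this request

  const params = {};
  names.forEach((name, i) => {
    params[name] = match[i + 1];
  });
  return params;
}

console.log(matchPath('/api/users/:userId/metrics', '/api/users/2/metrics'));
// { userId: '2' }
```

MSW's real matcher handles wildcards, query strings, and absolute URLs, but the core idea is the same: the first handler whose pattern matches the intercepted request gets to produce the response.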

Understanding how MSW works under the hood helps clarify why it’s such a versatile tool.

Unlike other mocking methods that can be brittle and tied to specific libraries, MSW offers a more holistic approach that closely mirrors actual application behavior. This makes your tests more reliable and reduces the risk of false positives or negatives.



This article was originally published on https://www.trevorlasn.com/blog/react-testing-mock-service-worker. It was written by a human and polished using grammar tools for clarity.
