Lighthouse CI solves a specific problem: you fix performance issues, merge the PR, then someone adds a 500KB image next week and performance tanks again.
Lighthouse CI is a command-line tool that runs Google Lighthouse audits in your CI pipeline. It tests your site against performance budgets and fails the build when a metric regresses past its threshold.
Installation
```bash
npm install -g @lhci/cli@latest
```

Create lighthouserc.json in your project root. I’m using Astro, so the default port is 4321:
{ "ci": { "collect": { "url": [ "http://localhost:4321/", "http://localhost:4321/blog" ], "numberOfRuns": 3 }, "assert": { "assertions": { "categories:performance": ["error", {"minScore": 0.9}], "first-contentful-paint": ["error", {"maxNumericValue": 2000}], "largest-contentful-paint": ["error", {"maxNumericValue": 2500}], "cumulative-layout-shift": ["error", {"maxNumericValue": 0.1}] } } }}Running Locally First
Start by running Lighthouse CI locally. This lets you set realistic performance budgets before adding it to your pipeline.
```bash
npm run dev
```

With the dev server running in one terminal, run the audits in another:

```bash
lhci autorun
```

The autorun command runs three audits per URL, takes the median scores, and checks them against your assertions. When audits fail, you’ll see exactly which metrics are over budget.
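autorun wraps Lighthouse CI’s individual commands. While you iterate on budgets, you can also run the steps separately, which lets you re-check assertions against the last set of audits without re-running them:

```bash
lhci collect   # run the Lighthouse audits and save results to .lighthouseci/
lhci assert    # check assertions against the collected results
```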
This is where you tune your configuration. If your site consistently scores 0.85 on performance but you set minScore: 0.9, every build fails. Run locally, see what numbers you actually get, then set each threshold just past them: a minScore slightly below today’s score, a maxNumericValue slightly above today’s timing. The goal is to prevent regressions, not to fail every build.
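For example, if you score around 0.85 with a Largest Contentful Paint near 2.8s, a sketch like this (the numbers are illustrative, not recommendations) locks in today’s performance while leaving a little headroom:

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", {"minScore": 0.82}],
        "largest-contentful-paint": ["error", {"maxNumericValue": 3000}]
      }
    }
  }
}
```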
For static sites, point directly to your build output:
{ "ci": { "collect": { "staticDistDir": "./dist" } }}For apps that need a running server, tell Lighthouse CI how to start it:
{ "ci": { "collect": { "url": ["http://localhost:3000/"], "startServerCommand": "npm run preview" } }}Once your assertions pass locally, add Lighthouse CI to your pipeline.
Framework-Specific Setup
Next.js:
Next.js uses port 3000 by default. Your configuration needs npm run build to generate the production build, then npm start to serve it:
{ "ci": { "collect": { "url": ["http://localhost:3000/"], "startServerCommand": "npm run build && npm start" } }}For static exports (output: 'export' in next.config.js), use staticDistDir:
{ "ci": { "collect": { "staticDistDir": "./out" } }}Nuxt:
Nuxt uses port 3000 by default. Build with npm run build, then preview with npm run preview (which uses nuxi preview internally):
{ "ci": { "collect": { "url": ["http://localhost:3000/"], "startServerCommand": "npm run build && npm run preview" } }}For static generation (nuxi generate), point to the output directory:
{ "ci": { "collect": { "staticDistDir": "./.output/public" } }}Adding to CI
GitHub Actions (.github/workflows/lighthouse.yml):
```yaml
name: Lighthouse CI
on: [pull_request]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build site
        run: npm run build

      - name: Serve site
        run: npm run preview &

      - name: Wait for server
        run: npx wait-on http://localhost:4321

      - name: Run Lighthouse CI
        run: |
          npm install -g @lhci/cli@latest
          lhci autorun
```

GitLab CI (.gitlab-ci.yml):
```yaml
lighthouse:
  image: cypress/browsers:node16.17.0-chrome106
  script:
    - npm ci
    - npm run build
    - npm run preview &
    - npx wait-on http://localhost:4321
    - npm install -g @lhci/cli@latest
    - lhci autorun
  only:
    - merge_requests
```

Performance Budgets
The assertions block defines what passes and what fails.
Score-based budgets:
"categories:performance": ["error", {"minScore": 0.9}]Performance score must be 90 or above. Lighthouse scores range from 0 to 1.
Metric-based budgets:
"first-contentful-paint": ["error", {"maxNumericValue": 2000}]First Contentful Paint must be under 2000ms. This is more reliable than scores because it tests actual render timing.
Disabling audits:
"uses-responsive-images": "off","offscreen-images": "off"Turn off audits that don’t apply to your use case. Responsive image warnings are often false positives for modern image formats.
What Happens When It Fails
```
Checking assertions against 2 URL(s), 3 run(s) each.

✘ http://localhost:4321/
  categories:performance failure for minScore assertion
    expected: >= 0.9
       found: 0.87

  largest-contentful-paint failure for maxNumericValue assertion
    expected: <= 2500
       found: 3200

Assertion failed. Exiting with status code 1.
```

The build fails. Fix the performance issue, push again, repeat.
Running Lighthouse CI with Desktop Settings
Lighthouse CI runs mobile audits by default. Mobile simulates a slower device with a 4x CPU slowdown and slow 4G network throttling. For desktop testing, configure the preset in your settings:
{ "ci": { "collect": { "settings": { "preset": "desktop" } } }}Desktop audits use faster throttling and no CPU slowdown. Test the device type your users actually use.
Storage Options
"upload": { "target": "temporary-public-storage"}This stores reports at https://googlechrome.github.io/lighthouse-ci/viewer/ for 7 days. The URL appears in CI logs.
For permanent storage, use the Lighthouse CI Server or store results as build artifacts.
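A sketch of the filesystem target, which writes reports to a local directory that your CI job can then save as a build artifact (the outputDir path here is an example, not a default):

```json
"upload": {
  "target": "filesystem",
  "outputDir": "./lighthouse-reports"
}
```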
When It’s Most Valuable
Lighthouse CI catches regressions that code review misses. A developer adds moment.js (231KB). Code looks fine. CI fails because First Contentful Paint jumped from 1.8s to 3.2s.
Without CI, that ships to production. With CI, you catch it in the PR and suggest date-fns (13KB) instead.
It works best for content sites, marketing pages, and documentation. It’s less useful for dashboards and apps where performance varies based on data.