
Lighthouse Accessibility Score | Insights and Limitations


What Lighthouse Actually Tests (Spoiler: Not Everything)


Lighthouse is an automated tool, and that automation is both its greatest strength and its most significant weakness. It’s incredibly fast at scanning your page for a specific set of technical issues that can be programmatically detected. Think of it as a machine that can check for machine-readable problems. It’s good at catching low-hanging fruit and common mistakes that developers might make during the build process.

So, what’s on its checklist? Lighthouse primarily looks for the presence and proper implementation of certain HTML attributes and technical standards. For example, it checks for basic color contrast ratios between text and its background. It verifies that <img> elements have alt attributes and that form inputs are associated with <label> elements. It also looks for ARIA attributes, like role or aria-label, and checks whether they are used according to specification. These are all foundational elements of an accessible website. Having a tool that can instantly flag them is undeniably valuable. It can catch a missing alt attribute before your code ever gets to production, saving a screen reader user from hearing “image123.jpg” instead of a meaningful description.
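The gap between presence and quality is easy to see in code. A minimal sketch (the function names and the filename heuristic are illustrative, not Lighthouse's actual rules):

```javascript
// Machine-checkable: does the img tag carry an alt attribute at all?
// `attrs` is a plain object of attribute name -> value pairs.
function hasAlt(attrs) {
  return Object.prototype.hasOwnProperty.call(attrs, 'alt');
}

// The human question (is the alt text meaningful?) has no reliable
// programmatic answer. This heuristic only catches obvious junk, such as
// a raw filename leaking into the alt text ("image123.jpg").
function looksLikeFilenameAlt(alt) {
  return /\.(jpe?g|png|gif|webp|svg)$/i.test(alt.trim());
}
```

Note that `hasAlt` passes even for an empty string: an automated check can confirm the attribute exists, but judging whether its content helps anyone remains a human task.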

But the key word is automated. Lighthouse can only test things that have a clear pass/fail answer based on code. It can tell you if an alt tag exists, but it has no way of knowing if that alt text is actually descriptive or helpful. It can confirm that a button has an accessible name, but it can’t tell you if that button is reachable or usable with a keyboard. This is the fundamental limitation of all automated accessibility checkers. They excel at verifying technical compliance against a predefined set of rules but fall short when it comes to assessing genuine human usability.

Coverage Gaps That Leave Users Stuck


The real test of accessibility isn’t whether your code validates; it’s whether a person can successfully complete a task on your website. This is where Lighthouse’s blind spots become critical. Many of the most severe barriers for users with disabilities are related to interactivity, context, and logical flow: things an automated scan simply can’t understand. One of the biggest gaps is keyboard navigation. Many users, including those with motor disabilities and blind users who rely on screen readers, navigate websites exclusively with a keyboard. They use the Tab key to move between interactive elements like links, buttons, and form fields. Lighthouse doesn’t test this flow. It can’t tell you if a user can even reach every interactive element, or if they get stuck in a “keyboard trap” where they can tab into a component but can’t tab out.

Another major blind spot is the screen reader experience. While Lighthouse checks for some ARIA attributes, it doesn’t simulate the experience of a blind user. It can’t tell you if the link text “Click Here” makes sense out of context, or if the content on your page is read in a logical order. A visually organized page might be a nonsensical jumble when read aloud by a screen reader if the underlying DOM structure is messy. Dynamic content, like pop-up modals, error messages that appear on the fly, or live-updating content, is another area where automated tools struggle. Lighthouse might not even see these elements, let alone test whether they are announced properly to screen reader users or can be managed without a mouse.


Why a Perfect Score Doesn’t Mean Accessible


Getting a 100 on your Lighthouse accessibility score is a moment to celebrate, briefly. It means you’ve successfully passed all the automated checks that Lighthouse is capable of performing. You’ve likely fixed issues with color contrast, added labels to your form fields, and ensured your images have alt tags. These are all important, and you should be proud of that accomplishment. It shows a commitment to cleaning up the technical foundations of your site. However, equating that perfect score with a perfectly accessible website is a dangerous mistake.

Think of it this way: a perfect Lighthouse score means you’ve built a house with solid foundations, correctly wired electricity, and properly installed plumbing. But it doesn’t tell you if the doors are wide enough for a wheelchair, if the light switches are reachable, or if the layout of the rooms makes sense for the people living there. It confirms the technical specifications are met, but it says nothing about the human experience. Your “perfect” site could still be completely unusable for a keyboard-only user who gets trapped in a pop-up modal. It could be a confusing mess for a screen reader user because the content order is illogical.

This is why a perfect score is not a certificate of accessibility. It’s a green light indicating that you’ve passed the first, easiest round of testing. The real work begins after the automated scan ends. The next steps involve manual testing, navigating with a keyboard, listening with a screen reader, and ensuring that the interactive experience is not just technically compliant, but genuinely usable. A perfect Lighthouse score is a good start, but it’s the start of the journey, not the finish line.

Running Lighthouse Audits the Right Way

Now that we understand its limitations, how can we use Lighthouse effectively? The key is to integrate it into your development process as a routine check, not a final judgment. It’s a tool for catching regressions and spotting common errors early and often. When used consistently and with the right setup, it can be an invaluable part of a larger accessibility strategy, helping you maintain a baseline of technical quality.

The goal is to make running a Lighthouse audit as natural as running a spell check. It should be a quick, easy step that developers can perform on their own local machines before they even commit their code. This proactive approach helps catch issues when they are cheapest and easiest to fix. Waiting until the end of a project to run an accessibility scan often leads to a long list of problems that are much harder to address. By embedding Lighthouse into the daily workflow, you can build a culture where accessibility is a continuous concern, not an afterthought.


DevTools Setup for Consistent Results


The most straightforward way to run a Lighthouse audit is directly from the Chrome DevTools, which is likely already a familiar environment for most developers. To get the most reliable and consistent results, there are a few simple best practices to follow. First and foremost, always run Lighthouse in an incognito window. This is crucial because browser extensions, especially ad blockers or CSS modifiers, can interfere with the audit and skew your scores. An incognito window provides a cleaner, more controlled environment that better reflects how a new user might experience your site.

To run the audit, open Chrome DevTools (using Ctrl+Shift+I or Cmd+Option+I), go to the “Lighthouse” tab, and select the “Accessibility” category. You can deselect the other categories like Performance and SEO to speed up the audit if you’re only focused on accessibility at that moment. Choose the “Desktop” or “Mobile” device setting depending on what you want to test. Once you’ve configured these options, click “Analyze page load.” The audit will run for about a minute and then present you with your score and a report detailing any issues it found. The report will highlight specific elements that failed the checks and provide links to learn more about each issue, making it an excellent educational tool for developers who are new to accessibility.
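For repeatable runs outside DevTools, Lighthouse can also be driven from Node. A hedged sketch, assuming the lighthouse and chrome-launcher npm packages are installed; the options follow the public Lighthouse Node API, but treat this as a starting point rather than a finished script:

```javascript
// Pure helper: list the audits that failed in a Lighthouse report (lhr).
// Audits with a null score are manual or not-applicable checks, so skip them.
function failedAccessibilityAudits(lhr) {
  return Object.values(lhr.audits)
    .filter((audit) => audit.score !== null && audit.score < 1)
    .map((audit) => audit.id);
}

// Run an accessibility-only audit against a URL in headless Chrome.
async function runAccessibilityAudit(url) {
  const chromeLauncher = await import('chrome-launcher');
  const { default: lighthouse } = await import('lighthouse');

  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const { lhr } = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['accessibility'], // skip Performance, SEO, etc.
    });
    console.log('Accessibility score:', lhr.categories.accessibility.score * 100);
    console.log('Failed audits:', failedAccessibilityAudits(lhr));
  } finally {
    await chrome.kill();
  }
}

// runAccessibilityAudit('https://example.com');
```

Scripting the audit this way makes results reproducible across machines, which sidesteps the extension-interference problem that incognito mode works around in DevTools.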

CI Integration That Catches Regressions

While running audits manually in DevTools is great for individual developers, integrating Lighthouse into your Continuous Integration (CI) pipeline takes its power to the next level. A CI server, like GitHub Actions, Jenkins, or CircleCI, automatically runs a series of checks every time a developer tries to merge new code. By adding a Lighthouse audit to this process, you can prevent accessibility regressions from ever making it into your main codebase. This creates an automated safety net that protects the quality of your product.

Setting up Lighthouse in a CI environment involves using a tool like the Lighthouse CI CLI. You can configure it to run an audit on specific URLs whenever a pull request is created. The real power comes from setting performance budgets and accessibility score thresholds. For example, you can configure your CI pipeline to fail the build if the accessibility score drops below a certain number, say 95. This immediately alerts the team that the new code has introduced an accessibility issue. The pull request can be blocked from merging until the issue is fixed and the score is back above the threshold. This automated enforcement ensures that accessibility isn’t just a suggestion but a mandatory part of your development standards, helping you maintain a high level of quality over time.


Beyond the Score: Common Issues Lighthouse Won’t Flag


Relying solely on a Lighthouse score can mask serious usability problems. The tool is excellent at what it does, but its scope is inherently limited to what can be checked by a machine. The most critical accessibility barriers often lie in the realm of user interaction and logical understanding, areas where automated tools are blind. These are not edge cases; they are common failures that can render a website completely unusable for people who rely on assistive technologies.

To truly understand your site’s accessibility, you must look beyond the number and investigate the user experience manually. These are the issues that don’t show up in an automated report but are immediately obvious to a person navigating with a keyboard or listening with a screen reader. Uncovering these problems requires a shift in mindset: from “does the code pass a test?” to “can a person accomplish their goal?” The following sections explore some of the most common and severe issues that Lighthouse simply cannot detect.

The Keyboard Navigation Black Hole

For a website to be accessible, every interactive element (links, buttons, form fields, menus) must be reachable and operable using only the keyboard. This is non-negotiable for users with motor impairments who cannot use a mouse and for screen reader users. Lighthouse does not test this. Your site could get a perfect 100 and still be a minefield of keyboard traps and dead ends. A common problem is the keyboard trap, where a user can tab into a component, like a complex widget or a third-party embed, but cannot tab back out. They are stuck, and their only recourse is to close the tab and start over.

Another frequent failure is an illogical focus order. Keyboard navigation should follow the visual reading order of the page, typically from left to right, top to bottom. However, if the DOM structure of your HTML is out of sync with the visual layout created by CSS, tabbing through the page can feel like a chaotic journey. A user might tab from the header to the footer, then back to the main content, completely disorienting them. Furthermore, the lack of a visible focus indicator is a massive issue that Lighthouse doesn’t always flag effectively. If users can’t see which element currently has keyboard focus, they are navigating blind. It’s like trying to click a mouse on a screen where the cursor is invisible.
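The browser's Tab order is deterministic, which is what makes the "DOM out of sync with layout" failure possible. A simplified model of that ordering (real browsers also account for visibility, disabled state, and shadow DOM, which this sketch ignores):

```javascript
// `elements` is an array in DOM order, e.g. [{ id: 'nav', tabindex: 0 }].
// Elements with a positive tabindex come first (lowest value first, DOM
// order breaking ties), then tabindex="0" / naturally focusable elements
// in plain DOM order. A negative tabindex removes an element from the
// Tab sequence entirely.
function tabOrder(elements) {
  const positive = elements
    .filter((el) => el.tabindex > 0)
    .sort((a, b) => a.tabindex - b.tabindex); // sort() is stable in modern JS
  const natural = elements.filter((el) => el.tabindex === 0);
  return [...positive, ...natural].map((el) => el.id);
}
```

Because visual layout plays no role in this ordering, CSS that repositions elements can silently produce exactly the header-to-footer-to-content jumps described above.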

Screen Readers Getting Lost in Translation

A screen reader is a software application that converts text and interface elements into synthesized speech or braille, allowing blind and visually impaired users to navigate and interact with digital content. For these users, the underlying code structure is the experience. Lighthouse can check for some technical markers, like ARIA attributes, but it cannot interpret them in context. One of the most common problems it misses is vague link text. A page might be filled with links that say “Learn More” or “Click Here.” While a sighted user can see the surrounding context, a screen reader user navigating by links will hear a meaningless list: “Learn More, Learn More, Click Here, Learn More.” The link text itself must be descriptive.
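A heuristic for the obvious cases is easy to write, which is exactly why its limits matter: it can flag "Click Here" but can never confirm that a longer label actually describes its destination. A sketch (the phrase list is illustrative, not any tool's official rule):

```javascript
// Link text that is meaningless when heard out of context, as when a
// screen reader user pulls up a list of all links on the page.
const VAGUE_LINK_PHRASES = new Set([
  'click here', 'here', 'learn more', 'read more', 'more', 'this', 'link',
]);

function isVagueLinkText(text) {
  const normalized = text.trim().toLowerCase().replace(/[.!?]+$/, '');
  return VAGUE_LINK_PHRASES.has(normalized);
}
```

The inverse check, proving that link text is genuinely descriptive, cannot be automated; that judgment stays with a human reviewer.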

Similarly, Lighthouse cannot assess the logical flow of content. A developer might use CSS to visually position a section at the top of the page, but if that section’s HTML is at the bottom of the document, a screen reader will read it last. This creates a confusing and disjointed experience. The handling of dynamic content is another major challenge. When an error message appears, a shopping cart is updated, or new content is loaded on the page, these changes must be announced to the screen reader user. If they are not, the user is left unaware of critical updates to the interface, making it impossible to proceed.


When “Good” Color Contrast Isn’t Good Enough


Lighthouse does a decent job of checking color contrast based on the WCAG guidelines, which is a huge help. It can flag text that is too light against its background, a common problem that affects users with low vision. However, its analysis is limited and doesn’t cover all scenarios. For instance, text on top of images or gradients is a notorious trouble spot. Lighthouse might only sample a few points, and if the text happens to fall over a part of the image where the contrast is sufficient, it will pass the test. But for a user, other parts of the text may be completely unreadable as they cross over different colors in the image.

The contrast of non-text elements is another area where automated tools fall short. Icons, form borders, and focus indicators must also have sufficient contrast to be perceivable. A faint border on a text input field might be invisible to someone with low vision, making it difficult to know where to click or type. The focus indicator itself is a prime example. Even if it’s technically present, if its color has poor contrast against the background or the element it’s highlighting, it’s essentially useless. A blue focus ring around a blue button will disappear. These nuanced, context-dependent contrast issues are where manual testing becomes essential to ensure the interface is visually clear for everyone.
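When Lighthouse's sampling can't be trusted, as with text over an image, you can check worst-case colors yourself using the standard WCAG 2.x math (the formulas below follow the specification; the choice of which pixels to test is up to you):

```javascript
// WCAG 2.x relative luminance for an sRGB color given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]) {
  const linearize = (channel) => {
    const s = channel / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), from 1:1 to 21:1.
// WCAG AA requires at least 4.5:1 for normal text and 3:1 for large text.
function contrastRatio(colorA, colorB) {
  const [lighter, darker] = [relativeLuminance(colorA), relativeLuminance(colorB)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}
```

To vet text over a photo or gradient, run this against the lightest and the darkest pixels the text actually crosses, not just one sample point.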

The Pre-Merge Accessibility Ritual

Thinking about accessibility shouldn’t be a separate, isolated task. It should be a natural part of the development process, a ritual performed before any new code is merged into the main project. This “pre-merge ritual” is about combining the speed of automated tools with the insight of quick, targeted manual checks. It’s a lightweight process that doesn’t add significant overhead but provides immense value by catching a much wider range of issues than Lighthouse alone ever could.

This ritual empowers developers to take ownership of accessibility. It shifts the responsibility from a single specialist or a final QA phase to the entire team. By building this habit, you create a culture where accessibility is considered from the very beginning. The goal is to make these checks as routine as writing unit tests or reviewing code for style. It’s a simple, powerful way to move beyond just chasing a score and start focusing on building genuinely usable and inclusive digital experiences for all users.

Combining Lighthouse with Axe for Better Coverage

Lighthouse is a great first step, but for a more thorough automated analysis, you should pair it with the axe DevTools browser extension. In fact, the accessibility engine inside Lighthouse is powered by axe, but Lighthouse doesn’t run all of axe’s rules. By using the dedicated axe DevTools extension, you get access to a more extensive set of tests, which can uncover more issues automatically. Think of Lighthouse as the quick scan and axe DevTools as the deep scan. Running both gives you a more complete picture of your site’s technical accessibility health.

The process is simple. After running your Lighthouse audit, open the axe DevTools extension in the same browser tab and run its scan. It will often find additional issues that Lighthouse missed, such as more complex ARIA-related problems or certain color contrast failures. The axe extension also provides excellent educational resources, explaining not just what the problem is, but why it’s a problem and how to fix it. This combination is powerful: Lighthouse provides the high-level score that can be easily tracked over time in a CI pipeline, while axe DevTools offers the more detailed, granular analysis needed for effective debugging.
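Axe can also run headlessly so the deep scan isn't limited to manual browser sessions. A hedged sketch using the @axe-core/puppeteer package (assumed installed alongside Puppeteer; the summarizer helper is illustrative):

```javascript
// Pure helper: condense axe results into rule id, impact, and node count.
function summarizeViolations(results) {
  return results.violations.map((violation) => ({
    id: violation.id,
    impact: violation.impact,
    nodes: violation.nodes.length,
  }));
}

// Run an axe scan against a URL in headless Chrome via Puppeteer.
async function runAxeScan(url) {
  const { default: puppeteer } = await import('puppeteer');
  const { AxePuppeteer } = await import('@axe-core/puppeteer');

  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url);
    const results = await new AxePuppeteer(page).analyze();
    console.table(summarizeViolations(results));
  } finally {
    await browser.close();
  }
}

// runAxeScan('https://example.com');
```

A script like this can sit next to the Lighthouse CI step, giving the pipeline both the trackable score and the more granular rule-level findings.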

Adding 5-Minute Keyboard Tests That Matter


After the automated tools have done their job, the most important part of the ritual begins: a quick, manual keyboard check. This shouldn’t take more than five minutes, but it can uncover some of the most severe accessibility barriers. This simple test requires no special software, just your keyboard.

Here’s a checklist for your 5-minute keyboard test:

  1. Tab Through Everything: Starting from the top of the page, press the Tab key repeatedly. Can you reach every interactive element, including links, buttons, form fields, and media controls?
  2. Visible Focus: As you tab, is there always a highly visible indicator showing you which element is currently active? If the focus ring disappears or is hard to see, that’s a failure.
  3. Logical Order: Does the focus move in a predictable order that matches the visual layout? Or does it jump around the page erratically?
  4. Operate Controls: Can you activate every button and link using the Enter key? Can you check and uncheck boxes using the Spacebar?
  5. Escape from Traps: If you open a modal dialog or a pop-up menu, can you close it using the Escape key? Can you continue tabbing within the modal without the focus escaping to the background?

This short manual test moves beyond technical compliance to assess real-world usability. It catches the keyboard traps, invisible focus indicators, and illogical navigation flows that automated tools like Lighthouse will always miss.
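Parts of this checklist can even be scripted as a smoke test. A hedged sketch with Puppeteer (assumed installed; the 50-press cap and the element labels are arbitrary choices, not a standard):

```javascript
// Press Tab up to `maxTabs` times and record which element receives focus.
async function recordTabStops(page, maxTabs = 50) {
  const stops = [];
  for (let i = 0; i < maxTabs; i += 1) {
    await page.keyboard.press('Tab');
    stops.push(
      await page.evaluate(() => {
        const el = document.activeElement;
        return el ? `${el.tagName.toLowerCase()}#${el.id || '?'}` : null;
      }),
    );
  }
  return stops;
}

// Pure helper: which interactive elements never received focus at all?
// Those are candidates for the keyboard "black holes" described earlier.
function unreachableByKeyboard(interactiveIds, recordedStops) {
  const reached = new Set(recordedStops);
  return interactiveIds.filter((id) => !reached.has(id));
}
```

This automates step 1 of the checklist; visible focus (step 2) and logical order (step 3) still need human eyes.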

Talking About Lighthouse Scores Without Overpromising

How you communicate the results of a Lighthouse audit is just as important as running it. A raw score, especially a high one, can be easily misinterpreted by stakeholders, project managers, and clients who may not be familiar with the nuances of accessibility testing. Presenting a score of 98 as “we’re 98% accessible” is not only inaccurate but also sets dangerous expectations and can lead to a premature sense of accomplishment, halting further, more critical testing efforts.

The key is to frame the Lighthouse score correctly. It’s not a grade on your final exam; it’s a progress report from one specific type of homework. It’s a metric that reflects your site’s health regarding a subset of technical best practices. Your job is to provide the context, explaining what the number means, what it doesn’t mean, and what the necessary next steps are. Clear, honest communication helps manage expectations and builds support for the more in-depth manual testing that is required to achieve genuine accessibility.

What to Tell Stakeholders About the Numbers

When presenting a Lighthouse score to stakeholders, avoid celebrating it as a measure of full compliance. Instead, describe it as a “technical health check” or a “baseline automated result.” Use an analogy they can understand. For example, explain that the Lighthouse score is like a spell check for your website’s code. It’s great at catching typos and basic grammatical errors, but it can’t tell you if the story is compelling or even makes sense. It’s a tool that helps us clean up the easy-to-find technical mistakes so we can focus our human attention on the more complex usability issues.

Be prepared to explain what the score doesn’t cover. Mention that it cannot test for keyboard accessibility, screen reader usability, or logical content flow. Frame the score as a starting point. A good way to phrase it is: “Our Lighthouse score is 95, which is excellent. This means we’ve passed all the automated checks for issues like color contrast and form labels. Our next step is to begin manual testing with keyboards and screen readers to ensure the site is truly usable for everyone.” This approach demonstrates competence, manages expectations, and clearly outlines the path forward.


When to Run Deeper Tests Instead


Lighthouse is a great tool for quick, routine checks, but there are specific moments in a project’s lifecycle when you must go deeper. Relying on a simple automated scan during these critical times is not enough. One of the most important moments is before a major public launch or redesign. Before you present a new website or feature to the world, a full accessibility audit, combining automated scanning with extensive manual testing by experts, is essential to avoid releasing a product with significant barriers. Another trigger is when you need to meet specific legal or contractual requirements, such as compliance with the Americans with Disabilities Act (ADA) or Section 508. These standards require more than just passing an automated test; they demand a level of usability that can only be verified through manual evaluation, often including testing by users with disabilities.



Want More Help?


Try our free website accessibility scanner to identify heading structure issues and other accessibility problems on your site. Our tool provides clear recommendations for fixes that can be implemented quickly.

Join our community of developers committed to accessibility. Share your experiences, ask questions, and learn from others who are working to make the web more accessible.