The "Architects of AI" were named Time's Person of the Year Thursday, with the magazine citing 2025 as the year the potential of artificial intelligence "roared into view" with no turning back.
Businesses are increasingly turning to AI to ensure accessibility for people with disabilities. Is it working?
AI tools can create more accessible experiences
One of the most important frontiers of accessibility has been online. By adhering to the Web Content Accessibility Guidelines issued by the World Wide Web Consortium, designers can create websites and web-based environments that can be accessed by everyone, regardless of ability.
In practical terms, adhering to these standards means ensuring content has enough contrast so people with limited vision or colorblindness can read text. Adding "alt text" to images allows visual information to be shared with screen reader users, and captioning videos allows people who are deaf or hard of hearing to follow the information conveyed. It is also important to ensure pages can be navigated without a mouse, for example by keyboard or other assistive devices.
New AI-driven tools can help organizations meet these guidelines more efficiently and effectively. For instance, they can auto-generate captions, suggest alternative text for images, or flag insufficient contrast.
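The contrast check mentioned above is one of the few accessibility requirements that reduces to a simple formula. A minimal sketch in Python, following the relative-luminance and contrast-ratio definitions published in the WCAG 2.x specification (function names here are illustrative, not from any particular tool):

```python
# Checks text/background contrast against the WCAG formula.
# WCAG level AA requires at least 4.5:1 for normal-size text.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel (0-255) to its linear-light value."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """WCAG relative luminance of an sRGB color."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background yields the maximum 21:1 ratio.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Light gray text on white falls short of the 4.5:1 minimum.
print(contrast_ratio((170, 170, 170), (255, 255, 255)) >= 4.5)  # False
```

An automated checker can walk a page's computed styles and apply this ratio to every text element, which is essentially what contrast-flagging tools do before surfacing failures for human review.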
While these advancements make it easier for companies to comply with the guidelines, human oversight remains essential. For example, some of Google's AI-generated search result summaries have contained errors, which, when disseminated on websites, can misinform and harm users with disabilities. In 2023, researchers at Pennsylvania State University found that some AI models used to categorize large amounts of text exhibited biases against people with disabilities. These models tend to classify sentences as negative or "toxic" based on the presence of disability-related terms without regard for the context.
To address these problems, experts emphasize the importance of involving the user community—including those with disabilities—in all stages of AI development.
"AI data systems must be trained with diverse datasets that include representation of people with disabilities to minimize bias," the U.S. Access Board, a federal agency, advised in its 2024 preliminary findings on artificial intelligence. This should include a thorough evaluation of AI tools in the hiring process and for job-related activities "to identify potential discriminatory impacts on applicants and employees with disabilities."
The board also noted concerns about AI-powered surveillance tools known as "bossware technologies," which may not be correctly calibrated for employees with disabilities. This can be a problem if companies attempt to monitor things like employee fatigue or movement based on wearable technology that may not properly assess people with physical disabilities.
