Oct 3, 2019

AI for Inclusivity: A Snapshot of Where We Need to Go

By Conrad Egusa

A daily perusal of science and tech news will no doubt acquaint readers with two competing visions for the impact AI may have on social inclusion and equality. On the one hand, there’s great anxiety over AI’s tendency to exacerbate long-standing inequality problems and create new ones in the process.

And yet, a genuine hope persists that AI might be a panacea for some of society’s most entrenched issues; and that hope is renewed in the many places we see AI already making a major impact. 

In truth, there’s validity in both of these perspectives on AI and inclusivity. For that reason, in striving to ensure these new technologies have a positive social impact, it’s important to be conscious of where progress has already been made. It’s equally essential to recognize the areas where collaboration will be the key to fully unlocking the promise of inclusivity that AI offers.

Breaking Barriers for the Disabled 

Sensory impairments, physical disabilities, and other conditions can make it difficult for someone to live a fully realized life without relying on the assistance of caretakers and family members. Even with a strong support system, every day can be a challenge. It comes as no surprise that disabled Americans are twice as likely to be unemployed as their able-bodied counterparts.

For centuries, human ingenuity has led to the development of tools that have helped remove obstacles for the disabled: wheelchairs, eyeglasses, and hearing aids are a few of the tools that come to mind. More advanced solutions have emerged in the past couple of decades, but many are cost-prohibitive. In fact, only one out of every 10 people around the world who could use an assistive tool actually has access to it. 

Some AI tools have the power not only to bridge more impairment gaps more effectively, but also to be far more accessible than their predecessors. Google, for example, has developed an AI algorithm that enables a smartphone to interpret sign language and read it aloud. And then there’s Microsoft’s Seeing AI, a free app that describes the objects it sees through a camera. With more than 3.3 billion smartphone users in the world today, access to AI tools like these is just a download away.

Training Systems with Better Data to Reduce Bias 

There are plenty of reasons to be optimistic about bridging gaps in society through AI. But it’s not time to rest on our laurels, nor is there any guarantee that AI will one day have solved all of the world’s inclusivity problems. In fact, AI for inclusivity will probably always require, at minimum, active human stewardship in order to work properly.

Even the most open-minded people (including scientists) are susceptible to their own biases. Because everything a person has learned and experienced affects the way they see the world, bias is an inevitable part of human psychology. This is why teams are ideally built from people with diverse sets of experiences, as a way to offset the biases that come with any one particular sort of experience.

But even with diverse and inclusive teams, AI systems are often only as unbiased as the data used to train them.

Let’s imagine a mortgage lender working on an algorithm for deciding whether to give someone a loan. If the data collected included the photo on a driver’s license, the algorithm might be better at perceiving the faces of white people than those of people of color -- simply because it had been trained on a higher volume of pictures of white applicants. This could make the system less likely to rate someone who is not white as worthy of receiving a loan.

It’s not a simple thing to overcome inclusivity problems like this, but it is possible. Using more comprehensive and balanced datasets is an important measure to take -- and the more diverse the teams reviewing the parameters for the data being collected, the less susceptible those teams are to blind spots.
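To make that mitigation concrete, here is a minimal Python sketch -- the dataset, group names, and labels are invented for illustration, not drawn from any real lender. It shows how a group that dominates the training data also dominates a naive model’s behavior, and how upsampling the under-represented group rebalances the set before training:

```python
from collections import Counter

# Hypothetical toy training set of (group, label) pairs.
# Group "A" is heavily over-represented, mirroring an applicant
# pool dominated by one demographic.
train = [("A", "approve")] * 80 + [("A", "deny")] * 10 + \
        [("B", "approve")] * 5 + [("B", "deny")] * 5

# A naive model that always predicts the overall majority label
# is driven almost entirely by group A's sheer volume.
majority = Counter(label for _, label in train).most_common(1)[0][0]
print(majority)  # "approve" -- group A's 90 rows swamp group B's 10

# One common mitigation: rebalance so each group contributes equally
# before fitting, here by upsampling the under-represented group.
per_group = Counter(g for g, _ in train)
target = max(per_group.values())
balanced = []
for group in per_group:
    rows = [r for r in train if r[0] == group]
    reps = -(-target // len(rows))  # ceiling division
    balanced.extend((rows * reps)[:target])

print(Counter(g for g, _ in balanced))  # both groups now equal in size
```

The same principle applies to a real lending model at the feature level: audit per-group representation in the training data before fitting, rather than discovering the skew after deployment.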

Another massive challenge lingers: legislating a technology with the power to alter society in profound ways. When we consider the influential figures in the tech community who have raised concerns about AI, it seems the world’s legislative bodies are more aware than ever of the need to both enable and regulate AI in a way that’s ethical.

It is fair to say that AI has already shown great potential as a force for greater inclusion, and therefore equality. Nor is it overly optimistic to say there’s enough evidence to hope that AI could become the most powerful tool the world has ever known for bridging the gap between the disabled and the rest of society.

And yet, without conscientious development work performed by diverse and inclusive teams, AI could reinforce the same biases that continue to hamper society, effectively shutting out millions of people around the world from having normal, productive lives -- all because of the circumstances of their birth. 

In the end, there’s enough to feel hopeful and encouraged about where AI has led and will continue to lead us. But as we hold fast to this optimism, we must ensure that inclusion and mindfulness are not simply a ‘consideration’ for AI developers going forward, but rather the priority. In this way, it’s reasonable to imagine that AI could be the answer to so many of the biggest challenges facing society in our lifetime. 

Conrad Egusa is the CEO of Publicize and a member of the Innovation Outreach editorial team. He was previously a writer for VentureBeat and has contributed to publications including Forbes and TechCrunch.