When it comes to artificial intelligence, the possibilities seem endless. From life-saving medical advances to making shopping more convenient for consumers, people dream about the ways AI can help them achieve things they’ve never been able to do before.
The optimism is not without justification—AI can process ever-larger amounts of data faster than any previous innovation could, which means unprecedented gains can be made.
But there are limitations, and some people feel the way AI is talked about sets expectations higher than they should be. Just how intelligent is artificial intelligence? Let’s explore.
How AI is being used
In almost any industry, you can point to some way artificial intelligence is being used to make a difference. Scientists are using an AI tool to help predict Arctic sea ice loss. There are countless examples of the ways the medical field is using AI, from biomedical research and identifying diseases in X-rays to tasks such as record-keeping and prescription fulfillment.
AI is one area where the pandemic may have spurred growth rather than slowed it. Data from PitchBook shows almost $38 billion has been invested in AI startups so far in 2021, on pace to double the amount from 2020.
That number doesn’t even include the research and experimentation that continues in the field. A team of computer science students at Emory University is working on advancing a chatbot that can make logical inferences, allowing it to hold deeper, more nuanced conversations with humans. There is even a play written and performed live with AI—an audience watches as the play’s creators prompt the AI to produce a script that actors then perform. And that exercise gets at some of the problems people associate with artificial intelligence.
When people refer to the tasks conducted by AI, they do so in familiar terms. Intelligence is a term used to describe living beings—“natural” intelligence—which is why the “artificial” distinction is made for AI. And because AI uses machine learning—in which a system learns patterns from data and continues performing tasks without constant human intervention—it is said to be able to “think.” But the truth is that any “thinking” done by AI is led by humans, and by the flawed world in which humans live.
That’s where bias makes its way into the world of artificial intelligence. Organizations use AI to handle huge amounts of data—so much that it is generally inefficient for them to take the time to comb through that data for problematic information. As a result, some work produced by artificial intelligence replicates biases seen in the human world, such as misogyny, racism, and homophobia. So that AI play? There are inevitably some uncomfortable moments reflecting AI’s “understanding” of the world.
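The mechanism is easy to see in miniature. The toy Python sketch below (a made-up example, not any real system) "trains" on a handful of hypothetical sentences and simply counts which pronoun appears near each occupation—when the examples are skewed, the learned association is skewed the same way:

```python
from collections import Counter

# Hypothetical training sentences with a deliberate occupation/pronoun skew.
biased_examples = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the nurse said she would help",
    "the nurse said she was kind",
]

def learned_pronoun(job, sentences):
    """Return the pronoun most often seen alongside `job` in the data."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if job in words:
            counts.update(w for w in words if w in ("he", "she"))
    return counts.most_common(1)[0][0]

# The "model" faithfully reproduces the skew in its training data.
print(learned_pronoun("doctor", biased_examples))  # -> he
print(learned_pronoun("nurse", biased_examples))   # -> she
```

Real machine-learning models are vastly more complex, but the principle holds: the statistics of the training data become the behavior of the system.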
The same is true with health information. Alistair Erskine, the chief digital health officer at Mass General Brigham, recently said on the Smarter Healthcare podcast, “AI is dependent on the data that feeds it. And that data in some cases can be very biased, either in the way that it’s inputted, or even in the way that the population is organized within one area of the market. [Also] Just because the model was working today doesn’t mean the model is going to work well tomorrow. It may need to be re-trained. We’re going to have to constantly go back through our governance model and figure out how to support it.” In other words, the intelligence aspect of AI is only as intelligent as the people who are putting it to use.
Overcoming the challenges
An important part of solving this problem is that many of the people using and designing AI have identified it. They are well aware of the issue and are working to address it. The work being done at Emory is a good example of an AI correction. The original chatbot did a good job, but the longer a conversation lasted, the deeper the AI went into a conversational flowchart, increasing the chances that it would totally miss the point of a question. Further development allowed the AI to make more logical inferences deeper into a conversation. The solution was human-driven. As graduate student Han He says, “A computer cannot deal with ambiguity, it can only deal with structure.” Humans are providing the structure.
There’s also an effort to solve the problem from the start of new projects. The National Science Foundation and the Department of Homeland Security are funding Athena, an artificial intelligence research center that’s part of a $220 million investment in 11 AI research institutes in 40 states. Athena is led by Duke University and includes, among other prominent colleges and universities, MIT and Yale. Among Athena’s goals? Work by researchers to ensure the center meets racial and gender diversity goals by year five of the project.