Google CEO Sundar Pichai explained how easy it is to unintentionally create a sexist, racist AI bot (GOOG, GOOGL)

  • On Tuesday, during Google’s big I/O tech conference, Sundar Pichai explained what Google is doing to make its AI less racist and sexist.
  • He promised to share what Google is doing with the world.
  • But he also explained how easy it is, even for well-intentioned computer scientists, to get it wrong and create biased AI tech.
  • And he noticeably fell short of using the “r” word — a.k.a. regulation — or even hinting that it was needed.
  • Visit Business Insider’s homepage for more stories.

2019 has been the year in which major tech companies, led by Microsoft, have taken the unusual step of calling for regulation of themselves, or at least of one particular aspect of their technology: artificial intelligence.

On Tuesday, during Google’s big I/O tech conference, Sundar Pichai explained what Google is doing to make its own AI less racist and sexist, although he fell short of using the “r” word — a.k.a. regulation — or even hinting that it was needed.

He did, however, do a good job of explaining how easy it is for computer scientists to create software models that are sexist and racist. It isn't enough, he explained, to focus on whether a smart computer program simply performs the task at hand.

Instead, Pichai said, they need to focus on the data they use to train their computers.

“It’s not enough to know if a model works. We need to know how it works. We need to ensure that our AI models don’t reinforce bias that exist in the real world,” Pichai said.

For instance, Google offers AI algorithms on its cloud platform that can identify the subjects of photos. If someone uses this tech to build a model that identifies doctors, and trains it mostly on photos of men in white coats with stethoscopes, the model will likely wrongly conclude that being male is an important factor in detecting a photo of a doctor.
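To make that failure mode concrete, here is a minimal sketch using a hypothetical toy data set (not Google's actual Cloud models) of how a skewed training set teaches a naive frequency-based model that gender predicts the "doctor" label:

```python
# Hypothetical toy training set of (gender, is_doctor) pairs.
# The skew mirrors the scenario above: most "doctor" examples are men,
# so gender and the "doctor" label are spuriously correlated.
training_data = (
    [("male", True)] * 90
    + [("female", True)] * 10
    + [("male", False)] * 50
    + [("female", False)] * 50
)

def doctor_rate(data, gender):
    """Fraction of examples with this gender that are labeled 'doctor'."""
    labels = [is_doctor for g, is_doctor in data if g == gender]
    return sum(labels) / len(labels)

# A naive frequency-based model absorbs the skew in the data:
print(round(doctor_rate(training_data, "male"), 2))    # 0.64
print(round(doctor_rate(training_data, "female"), 2))  # 0.17

# Thresholding at 0.5, such a model would call every man a doctor and
# no woman one, purely because of how the training data was collected.
```

The point of the sketch is that nothing in the code is malicious; the bias comes entirely from the data the model was shown.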

Read more: Satya Nadella says the ‘brilliant jerk’ phenom in tech ‘is done,’ but it isn’t

In a more shocking example, Pichai talked about an AI algorithm trying to detect skin cancer.

“To be effective, it would need to be able to recognize a wide variety of skin tones, representative of the entire population,” he said.

Consider the possibilities if that AI model were trained without knowing that people come in all sorts of skin colors: someone might be told they have skin cancer when they don't, while others might be inaccurately given the all-clear.

Pichai then vowed not only to make sure that Google's own tech isn't racist or biased, but also to share what the company is doing with other computer scientists.

“There’s a lot more to do, but we are committed to building AI in a way that’s fair and works for everyone, including identifying and addressing bias in our own ML models and sharing open data sets to help everyone,” he said.

That sounds nice, and it's a great commitment, but it's also table stakes.

Other companies are calling for regulation to cover AI, and Congress is already working on a number of bills that would regulate the ethical use of AI.

And those calling for regulation are the very companies that have already been called on the carpet over their AI tech.

For instance, after the American Civil Liberties Union published a report claiming Amazon's facial-recognition AI tech wasn't trustworthy, particularly across races, and then showed how it misidentified a number of members of Congress, several members sent letters asking Amazon for information. Amazon countered in a blog post that questioned the validity of the ACLU's tests.

In the meantime, Amazon has been selling its tech to law enforcement agencies that are not always aware of its limitations, according to a recent report by The Washington Post. In years past, Amazon was also called out for building an AI hiring tool that it shut down because it discriminated against women.

Microsoft, too, was called out a couple of years ago by an MIT researcher who found that three leading facial-recognition systems — created by Microsoft, IBM, and China's Megvii — were doing a terrible job of identifying non-white faces. And Microsoft very publicly learned an embarrassing lesson about training AI models when it had to yank its Twitter chatbot Tay offline within 24 hours, after Tay began spewing racist and sexist tweets using words taught to it by trolls.

Microsoft says it has since drastically improved its AI tech. It also says it now limits who it sells its AI tech to — for example, the company recently made news when it refused to sell its AI tech to a law enforcement agency. 

But Microsoft is also leading the industry by vocally calling for regulation of AI. Its rival Amazon joined Microsoft's call for regulation in February. Later that month, Google published a paper that also called for regulation, while arguing that, for the most part, regulators should keep their hands off.

So here we are: all the major makers of AI tech know that it can be used for good and for ill. They know that even the best-intentioned AI computer scientist can create biased tech. All of them say they are working to improve their tech, too. Should we simply trust them?

Read more of Business Insider’s Google I/O coverage:

  • Google just unveiled its next major smartphones, the Pixel 3a and Pixel 3a XL
  • Incognito mode is coming to Google Maps and Google search
  • One of the biggest crowd pleasers at Google’s developer conference was a new ‘Stop!’ voice command to quickly shut off Google Assistant’s alarm
  • Google’s new $229 ‘smart hub’ device has a built-in Nest camera that can recognize your face

SEE ALSO: Everything Google announced at its biggest conference of the year
