- Tech billionaire Peter Thiel painted a gloomy picture of artificial intelligence in his New York Times op-ed on Thursday, describing the technology’s primary use case as a military one. But the experts we spoke to disagree.
- “I don’t think we can say AI is a military technology,” said Dawn Song, a Computer Science Professor at the University of California, Berkeley. “AI, machine learning technology is just like any other technologies. Technology itself is neutral.”
- Experts also pointed to the many upsides for consumers that AI offers, which Thiel failed to mention in his piece.
Tech billionaire Peter Thiel painted a gloomy picture of artificial intelligence in his New York Times op-ed on Thursday, casting the technology’s real value and purpose as primarily a military one.
“The first users of the machine learning tools being created today will be generals,” Thiel declared in his 1,200-word piece. “A.I. is a military technology.”
Thiel’s portrayal is a far cry from the optimistic view that many in Silicon Valley have embraced. Artificial intelligence has promised to serve up the next great Netflix recommendation, let us search the internet with our voices, and do away with humans behind the wheel. It’s also expected to have a huge impact on medicine and agriculture. But Thiel argues that AI’s real home is on the battlefield, whether physical or cyber.
Multiple AI experts that Business Insider spoke with on Friday, however, disagreed with Thiel’s assertion that AI is inherently a military-first technology, saying it can be put to far greater good than his fiery op-ed suggests.
“I don’t think we can say AI is a military technology,” Dawn Song, a Computer Science Professor at the University of California, Berkeley and faculty member of the Berkeley Artificial Intelligence Research (BAIR) Lab, told Business Insider on Friday. “AI, machine learning technology is just like any other technologies. Technology itself is neutral.”
Song said that, like nuclear or encryption technologies, artificial intelligence can be used for good or ill, but that describing it as something people should inherently fear misses the point.
Read more: Peter Thiel slammed Google in a scathing New York Times op-ed, but failed to mention that he works for and invests in the search giant’s rivals
Fatma Kilinc-Karzan, an Associate Professor of Operations Research at Carnegie Mellon University, told us that Thiel’s views on AI were “way too pessimistic” and that he shed too little light on its positive, everyday use cases.
“Sure, AI is used in the military quite a bit,” Kilinc-Karzan said. “But its everyday use in simplifying and enabling modern life and business is largely overlooked in this view.”
Kilinc-Karzan said that the same technologies targeted by Thiel — like deep learning and automated vision — are already being used positively for a wide variety of commercial and medical applications, like driverless cars and improved CT and MRI machines that make it easier for doctors to detect different types of cancers.
In his piece, Thiel acknowledged AI as a “dual-use” technology — meaning it has both military and civilian applications — though the tech billionaire failed to specifically point out any of its consumer upsides.
“[Thiel’s] view overlooked the fact that AI is being used in daily life by everyone in the US,” Kilinc-Karzan said. “That seems very minor to him. He didn’t discuss that impact. It is true that the military will pick up and use whatever is the most powerful, but that will be the case regardless of what technology we’re talking about.”
The overarching theme of Thiel’s piece was that Google, a US company, had opened an AI research lab in China, a country that has established the precedent that research done within its borders be shared with its military.
Berkeley’s Song agreed that AI projects needed to be handled carefully, but stressed that portraying the technology as intrinsically evil, especially at the expense of curtailing innovation, was wrong.
“It’s important for us to advance AI so that we can have the societal benefits from its advancements,” Song said. “Of course, we need to be careful about how the technology is being used, but I think it’s important to keep in mind that technology is neutral.”