NEW YORK: Five leading tech companies launched a new effort Wednesday to head off government regulation of artificial intelligence, the fast-growing field at the heart of self-driving cars, digital assistants and other emerging technologies.
Through the so-called Partnership on Artificial Intelligence, Amazon, Facebook, Google, IBM and Microsoft pledged to address the privacy, security and ethical challenges of AI by funding new research and setting up industry best practices as they invest heavily in complex algorithms that can understand human speech or comb through vast amounts of data.
“The positive impacts of AI will depend not only on the quality of our algorithms, but on the level of public engagement,” said Mustafa Suleyman, the co-founder of DeepMind, an artificial intelligence company purchased by Google, and a co-chair of the new group. He and other leaders, speaking with reporters Wednesday, said the technology could reduce traffic congestion, tackle climate change and more.
But artificial intelligence could also disrupt entire industries, replacing human workers with machines or posing safety and consumer protection concerns, especially in cases where AI is working in high-risk areas like health care. For that reason, it’s attracting attention from the government, raising the specter of new federal rules that could restrict the nascent field.
The White House, for example, has explored the policy challenges surrounding AI since this summer. By fall, the Obama administration is expected to produce a report that explores policy issues like transparency, as regulators look to ensure that AI systems and their often invisible algorithms aren’t discriminating against users on the basis of race, gender or class. The administration is also preparing a document that identifies ways the federal government can use its research dollars to spur more AI innovation.
For now, the new partnership for AI “does not intend to lobby government or other policymaking bodies,” the group stressed in a news release on Wednesday. But its founding members are certainly fearful that regulators in the United States and beyond could get involved.
“There’s been concern that in the echo chamber of anxiety, the government itself will be misinformed,” said Eric Horvitz, a managing director at Microsoft Research and another co-chair of the new group. “And I think one of my motivations is to continue to educate, and think about those practices, and education includes government at multiple levels.”
“There’s no explicit attempt at the notion of self-regulation, to repel government intrusion, but I think it’s a very healthy sign on our own that there’s an interest and energy to think through hard questions,” Horvitz said.
The group’s five founding members already have extensive influence operations in the nation’s capital focused on the privacy, security and safety issues surrounding AI. And they’re working on recruiting additional corporate allies, such as Apple; they say the iPhone giant is weighing whether to take part.