Thursday, February 14, 2019

Uncanny Valley yet again ...

It gets better.


IN 2015, CAR-AND-ROCKET man Elon Musk joined with influential startup backer Sam Altman to put artificial intelligence on a new, more open course. They cofounded a research institute called OpenAI to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.

The AI system that gave its creators pause was designed to learn the patterns of language. It does that very well—scoring better on some reading-comprehension tests than any other automated system. But when OpenAI’s researchers configured the system to generate text, they began to think about their achievement differently.

“It looks pretty darn real,” says David Luan, vice president of engineering at OpenAI, of the text the system generates. He and his fellow researchers began to imagine how it might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says.

“Beyond our control” seems to apply here, does it not?
