Wednesday, February 16, 2022
Why the Future Doesn't Need Us - 2022
Image by Getty Images/Futurism
The news of AI possibly becoming self-aware is, without question, deeply disquieting.
Amid the maelstrom set off by a prominent AI researcher's suggestion that some AI may already be achieving limited consciousness, one MIT computer scientist says the concept might not be so far-fetched.
Our story starts with Ilya Sutskever, chief scientist at OpenAI, the research group cofounded by Elon Musk. On February 9, Sutskever tweeted that "it may be that today's large neural networks are slightly conscious."
In response, many others in the AI research space decried the OpenAI scientist's claim, suggesting that it harmed machine learning's reputation and amounted to little more than a "sales pitch" for OpenAI's work.
That backlash has generated its own clapback from MIT computer scientist Tamay Besiroglu, who is bucking the trend by coming to Sutskever's defense.
"Seeing so many prominent [machine learning] folks ridiculing this idea is disappointing," Besiroglu tweeted. "It makes me less hopeful in the field's ability to seriously take on some of the profound, weird and important questions that they'll undoubtedly be faced with over the next few decades."
Besiroglu also pointed to a preprint study in which he and some collaborators found that machine learning models have roughly doubled in intelligence every six months since 2010.
Strikingly, Besiroglu drew a line on the chart of that progress at the point where, he said, the models may have become "maybe slightly conscious."
Image via Tamay Besiroglu/Twitter
While fears of conscious computers go as far back as the infamous "Maschinenmensch" in 1927's "Metropolis," researchers have repeatedly punted on the concept, saying it was too far in the future to worry about just yet, and that in any case, misuses of less advanced AI are already widespread.
AI has advanced so rapidly in recent years that many humans, whose own intelligence evolved over tens of thousands of years, seem to be struggling with the dizzying concept of conscious computers.
If nothing else, this iteration of the conscious-AI debate could well be pushing the boundaries of what constitutes consciousness, and may ultimately play a role in how we come to define it.
Back in 2010 ...
Why the Future Doesn't Need Us - Bill Joy - 2010
What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions.
As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones.
Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently.
At that stage the machines will be in effective control.
People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
It's already happening ...