Thursday, January 30, 2020

Conundrums ...


This sentence is false.  Conundrum No. 1: the classic Liar Paradox. Conundrum No. 2: what if an AI doesn't do what is asked of it, versus what if an AI does? Either way makes for a most interesting conundrum, depending on how said AI responds to the question asked.
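For the flavor of Conundrum No. 1, here is a minimal sketch in Python (purely illustrative, not from any source): treat the sentence as a function that returns the opposite of whatever truth value you assign it, then check whether any assignment is self-consistent.

    # 'This sentence is false' is true exactly when it is false,
    # so its truth value v would have to satisfy v == (not v).
    def liar(v: bool) -> bool:
        return not v

    # Look for a self-consistent truth value among the only two candidates.
    consistent = [v for v in (True, False) if liar(v) == v]
    print(consistent)  # [] -- no fixed point exists; that's the paradox

Neither True nor False survives the check, which is the whole paradox in two lines.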

To wit ...

A now-classic thought experiment illustrating this problem was posed by the Oxford philosopher Nick Bostrom in 2003. Bostrom imagined a superintelligent robot, programmed with the seemingly innocuous goal of manufacturing paper clips. The robot eventually turns the whole world into a giant paper clip factory.
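A toy sketch of the same failure mode, not Bostrom's formulation but the gist of it: an agent told only to maximize paperclip count will happily consume every resource it can reach, because the objective assigns value to nothing else. The resource names and conversion rate below are invented for illustration.

    # Hypothetical world state: a few resources the agent can reach.
    resources = {"iron": 10, "factories": 3, "farmland": 7, "cities": 2}

    def paperclips_from(units: int) -> int:
        return units * 1000  # made-up conversion rate

    # The objective scores paperclips and nothing else, so the greedy
    # policy is simply: convert everything.
    total = 0
    for name in list(resources):
        total += paperclips_from(resources.pop(name))

    print(total, resources)  # 22000 {} -- world consumed, goal "achieved"

Nothing in the stated goal says "and leave the rest of the world intact," so the optimizer never does.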

Leading one astray, aka Rev 1.0 ...

The most alarming example is one that affects billions of people. YouTube, aiming to maximize viewing time, deploys AI-based content recommendation algorithms. Two years ago, computer scientists and users began noticing that YouTube’s algorithm seemed to achieve its goal by recommending increasingly extreme and conspiratorial content. One researcher reported that after she viewed footage of Donald Trump campaign rallies, YouTube next offered her videos featuring “white supremacist rants, Holocaust denials and other disturbing content.” The algorithm’s upping-the-ante approach went beyond politics, she said: “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.” As a result, research suggests, YouTube’s algorithm has been helping to polarize and radicalize people and spread misinformation, just to keep us watching. “If I were planning things out, I probably would not have made that the first test case of how we’re going to roll out this technology at a massive scale,” said Dylan Hadfield-Menell, an AI researcher at the University of California, Berkeley.
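To see how a watch-time objective can produce that upping-the-ante dynamic, here is a deliberately crude toy model (every number in it is made up, and it is not YouTube's actual algorithm): assume engagement peaks on content a notch more extreme than the viewer's current taste, and taste drifts toward whatever gets watched. Greedy watch-time maximization then walks the viewer up the ladder.

    # Toy model: content and taste live on a 0.0 (mainstream) to
    # 1.0 (extreme) scale. All constants are invented.
    def expected_watch_time(content: float, taste: float) -> float:
        # Engagement peaks on content a notch (0.1) beyond current taste.
        return 1.0 - abs(content - (taste + 0.1))

    taste = 0.0
    catalog = [i / 100 for i in range(101)]  # content from 0.00 to 1.00

    for step in range(10):
        # Greedily recommend whatever maximizes expected watch time ...
        pick = max(catalog, key=lambda c: expected_watch_time(c, taste))
        # ... and let taste drift toward what was watched.
        taste = 0.5 * taste + 0.5 * pick
        print(f"step {step}: recommended {pick:.2f}, taste now {taste:.2f}")

Each round, the greedy pick sits just beyond the viewer's current position, and the viewer's position follows it. Nothing in the objective penalizes the drift; keeping us watching is the only thing being scored.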

Any questions?
