Wednesday, January 16, 2019

Caveats


BRT has waxed "poetic" about AI, man's open-ended tech merging analog with digital at unprecedented speed, as seen in the rapid advances of DeepMind and Watson, among significant others, in solving complex problems thought to be beyond the realm of AI until now. With this being said, this marriage of analog and digital via neural nets has opened up the era of software designing software: in order to evolve, genetic algorithms must adapt to the vagaries of the real world at speeds beyond the capabilities of human coders, which means we don't know how this tech actually works. 
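For the curious, here is a minimal sketch of the kind of genetic algorithm alluded to above: a toy bit-string problem with made-up parameters, not anything resembling DeepMind's or Watson's actual internals.

```python
import random

# Toy genetic algorithm: evolve a random bit string toward an all-ones target.
# Every parameter here (population size, mutation rate, fitness function) is an
# illustrative choice, not drawn from any real system.

TARGET_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01
GENERATIONS = 200

def fitness(genome):
    # Count of 1-bits: a stand-in for "how well this candidate solves the problem".
    return sum(genome)

def mutate(genome):
    # Flip each bit with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto a suffix of the other.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def evolve():
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == TARGET_LEN:
            break
        parents = population[:POP_SIZE // 2]  # selection: keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", TARGET_LEN)
```

The point of the sketch: no line of this code "knows" the solution. Fitness pressure plus random variation finds it, and at scale that blind search runs far faster than any human coder could hand-tune the result.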


A Wicked Problem looms.



There's a mystery afoot ...

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
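A toy sketch of why that rationale is so hard to extract, assuming nothing about Deep Patient itself: a small neural net learns to predict a synthetic "disease" label quite well, yet its only explanation is a pile of learned weights, and the best an outsider can do is perturb the inputs and watch the outputs move.

```python
import numpy as np

# Toy illustration of the black-box problem. This is NOT Deep Patient: Mount
# Sinai's model, data, and features are not public. Everything below, the fake
# "patient" features, the network size, the probing trick, is invented.

rng = np.random.default_rng(42)
n, d = 1000, 5                        # 1000 synthetic patients, 5 synthetic lab features
X = rng.normal(size=(n, d))
# Hidden ground truth the net must discover: risk rises with features 0 and 1,
# falls with feature 2. The model never sees this rule, only examples.
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(float)

# One-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(d, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=16);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2))), h

lr = 0.5
for _ in range(500):
    p, h = forward(X)
    g = (p - y) / n                   # gradient of cross-entropy w.r.t. the logits
    gh = np.outer(g, W2) * (1 - h ** 2)
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

p, _ = forward(X)
print("train accuracy:", ((p > 0.5) == y).mean())

# The learned weights ARE the model, and staring at them explains nothing.
# A crude outside-in probe: zero out one feature at a time and watch how far
# the predictions move. This hints at influence, not at the model's reasoning.
for j in range(d):
    Xz = X.copy(); Xz[:, j] = 0
    print(f"feature {j}: mean prediction shift {abs(forward(Xz)[0] - p).mean():.3f}")
```

Even on this five-feature toy, the "why" lives in a few hundred opaque numbers. Scale that to hundreds of variables across 700,000 patient records and Dudley's lament writes itself.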

Questions, questions.

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave.

- Theodore Kaczynski

Almost 20 years ago, Bill Joy wrote an essay titled Why the Future Doesn't Need Us, a thought-provoking piece to the max.

To wit:

Part of the answer certainly lies in our attitude toward the new—in our bias toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies—robotics, genetic engineering, and nanotechnology—pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once—but one bot can become many, and quickly get out of control.

Think about this in terms of autonomous weaponized bots; Musk, Hawking, and the military certainly have.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

It's already happening ...


The reality today is that artificial intelligence is leading us toward a new algorithmic warfare battlefield that has no boundaries or borders, may or may not have humans involved, and will be impossible to understand and perhaps control across the human ecosystem in cyberspace, geospace and space (CGS). As a result, the very idea of the weaponization of artificial intelligence, where a weapon system that, once activated across CGS, can select and engage human and non-human targets without further intervention by a human designer or operator, is causing great fear.

The thought of any intelligent machine or machine intelligence having the ability to perform any projected warfare task without any human involvement and intervention -- using only the interaction of its embedded sensors, computer programming and algorithms in the human environment and ecosystem -- is becoming a reality that cannot be ignored anymore.

Skynet anyone?

What could possibly go wrong?

ROBOT CANNON KILLS 9, WOUNDS 14

We're not used to thinking of them this way. But many advanced military weapons are essentially robotic – picking targets out automatically, slewing into position, and waiting only for a human to pull the trigger. Most of the time. Once in a while, though, these machines start firing mysteriously on their own. The South African National Defence Force "is probing whether a software glitch led to an antiaircraft cannon malfunction that killed nine soldiers and seriously injured 14 others during a shooting exercise on Friday."

Other reports have suggested a computer error might have been to blame. Defence pundit Helmoed-Römer Heitman told the Weekend Argus that if "the cause lay in computer error, the reason for the tragedy might never be found."

In conclusion ...

Facebook Shuts Down AI Robot After It Creates Its Own Language
When English wasn't efficient enough, the robots took matters into their own hands.

Open ended indeed.



Addendum: 

US Navy moves toward unleashing killer robot ships on the world’s oceans

Enter: the rise of the machines.

Boxall’s plan to develop and unleash unmanned killer robot ships is an integral part of the Navy’s new tactics to counter Chinese maritime advancements and, to a more limited extent, those of Russia.

Any questions?
