Autonomous bots are becoming an ominous reality: there is simply too much money to be made building and selling this hardware to interested parties, as this New Atlas piece, "Kalashnikov's new autonomous weapons and the 'Terminator conundrum'," makes clear:
Earlier this month, the Russian weapons manufacturer Kalashnikov Group made a low-key announcement with frightening implications. The company revealed it had developed a range of combat robots that are fully automated and use artificial intelligence to identify targets and make independent decisions. The revelation rekindled the simmering, and controversial, debate over autonomous weaponry and raised the question: at what point do we hand control of lethal weapons over to artificial intelligence?
It gets better, or ... MY DRONESKI JUST ATE YOUR ETHICS
However, maintaining human control exposes players to other disadvantages that may be decisive should the other player opt for full autonomy. First, fully autonomous systems could operate in a much faster decision cycle than human-in-the-loop systems. An inferior platform can defeat a superior one if the inferior platform can, to borrow a phrase from John Boyd, get inside its enemy’s decision loop. This places a manned, semi-autonomous future force at significant risk when encountering a fully autonomous first echelon or defensive screens of a less scrupulous enemy.
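The decision-cycle point is easy to make concrete with a toy Monte Carlo duel. The sketch below is not from the article or the quoted passage; the cycle times and kill probabilities are made-up illustrative parameters. It pits an "inferior" platform with a fast decision loop against a "superior" platform with a slow, human-in-the-loop cycle and estimates how often the faster side wins.

```python
import random

def duel(cycle_a, pk_a, cycle_b, pk_b, rng, max_time=60.0):
    """Simulate a one-on-one engagement between platforms A and B.

    cycle_*: seconds per completed decision cycle (observe-orient-decide-act)
    pk_*:    probability that a completed cycle destroys the opponent
    Returns 'A', 'B', or 'draw'. All parameters are illustrative, not sourced.
    """
    t_a, t_b = cycle_a, cycle_b   # time each side finishes its next cycle
    t = 0.0
    while t < max_time:
        if t_a <= t_b:            # A completes its loop first and engages
            t = t_a
            if rng.random() < pk_a:
                return "A"
            t_a += cycle_a
        else:                     # B completes its loop first and engages
            t = t_b
            if rng.random() < pk_b:
                return "B"
            t_b += cycle_b
    return "draw"

def win_rate(n, **kwargs):
    """Fraction of n simulated duels won by platform A."""
    rng = random.Random(0)
    return sum(duel(rng=rng, **kwargs) == "A" for _ in range(n)) / n

# A: "inferior" fully autonomous platform -- weaker shot, much faster loop.
# B: "superior" human-in-the-loop platform -- better shot, slower loop.
print(win_rate(100_000, cycle_a=2.0, pk_a=0.15, cycle_b=10.0, pk_b=0.40))
```

With these invented numbers the weaker but faster platform wins the large majority of engagements, which is Boyd's point in miniature: tempo can substitute for quality once you get inside the opponent's decision loop.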
Something to consider, don't you think?