Thursday, March 18, 2021

Evolution yet again


Artificial life evolution: can this happen? An interesting question, is it not? People have been talking about this notion since the 1950s, when computing was but a dream save for visionaries like Alan Turing and John von Neumann, among significant others.

I could stridently insist that natural selection is the only way that complex life can evolve, but that’s not strictly true. We can already design computers that can learn and reason and—almost—convince an observer that their behavior might be human. It’s not unreasonable that in 100 or 200 years, our computer systems will be effectively sentient: human-like robots, similar to Star Trek’s Commander Data. Alien civilizations that are considerably more advanced than us are likely already capable of such creations. The possibility—likelihood, even—of such robotic life has implications for our predictions about life on alien planets.

But what if it were all different? What would life look like if it did know where it was going?

The Soviet physicist Anatoly Dneprov wrote quirky and characteristically Soviet science fiction in the 1950s. His story Crabs on the Island tells of two engineers conducting a cybernetics experiment on a deserted island. A single self-replicating robot (a "crab") is released and forages for the raw materials to build copies of itself. Soon the island is overrun with baby robot crabs. But the crabs begin to mutate. Some are larger than others, and ruthlessly cannibalize the smaller robots for spare parts to build even larger robots. How would such an experiment end? Catastrophically, of course, as is consistent with the genre, with robot crabs spreading exponentially across the entire island.
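Dneprov's thought experiment is really an algorithm: replicate from a finite resource pool, mutate on copying, and cannibalize when resources run dry. Here is a minimal toy sketch of that loop; all names, parameters, and the mutation model are my own illustrative assumptions, not anything from the story.

```python
import random

def run_crab_sim(steps=20, resources=200.0, seed=42):
    """Toy model of Dneprov's crab scenario (parameters are illustrative):
    crabs build copies of themselves from a shared resource pool, each
    copy mutates slightly in size, and when raw materials run out the
    largest crabs dismantle the smallest for parts."""
    rng = random.Random(seed)
    crabs = [1.0]  # list of crab sizes; start with one unit-size crab
    for _ in range(steps):
        offspring = []
        for size in crabs:
            if resources >= size:  # building a copy costs its own size
                resources -= size
                child = size * (1 + rng.gauss(0, 0.1))  # size mutation
                offspring.append(max(0.1, child))
        crabs.extend(offspring)
        # cannibalism: when the pool is nearly empty, recycle the
        # smallest crabs back into raw materials
        while resources < 1 and len(crabs) > 1:
            crabs.sort()
            resources += crabs.pop(0)
    return crabs, resources

population, remaining = run_crab_sim()
```

Even this crude sketch reproduces the story's arc: exponential growth until the island's materials are exhausted, then selection pressure favoring the larger cannibals.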

But there is no certitude. Quantum uncertainty, and the rule that change is the only constant, hold sway even over a super-AI.

How likely is it that this universe of interconnected computers would be doing nothing but communicating, reproducing, and carrying out their singular goal? Possibly not very. If such an alien world of artificially intelligent organisms really exists, there are some things it cannot avoid—no matter how intelligent or how well designed. On the one hand, artificial intelligence cannot improve without change, and change brings the risk of mutation. On the other hand, even the cleverest strategy is potentially open to exploitation—game theory cannot be discounted, even by a computer of sci-fi-level superintelligence.

The Tao rules ...
