Sunday, March 12, 2023

AI - A Primer

Out of darkness ...

What’s past is prologue - The Tempest
A journey of a thousand miles begins with a single step. - Lao Tzu

artificial intelligence (abbreviation: AI) noun
the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.—The New Oxford American Dictionary, Third Edition

The quest for AI begins … 

1833: Ada Lovelace, the first programmer, meets Charles Babbage, designer of the Analytical Engine, the first design for a general-purpose programmable computer. Upon seeing a working section of the machine, Lovelace realizes not only that the system is capable of performing original calculations but that it can process other kinds of symbols, like musical notes. The hardware was never completed in Babbage’s lifetime, funding having collapsed amid disputes with his engineer, though a later Babbage design, the Difference Engine No. 2, was built in 1991 and worked as advertised. Lovelace’s programs were to be fed to the machine on Jacquard-loom-style punched cards.

1945: John von Neumann describes the architecture of the modern stored-program computer: a central processing unit, a memory holding both instructions and data, and input/output.

Note: Turing proposed a comparable stored-program design (the ACE), but it never took off. Von Neumann credited Turing’s earlier theoretical work.

1947: The transistor, a semiconductor device invented at Bell Labs by Bardeen, Brattain and Shockley, becomes the switching element at the heart of every digital system in the world.

Machine language, driven by the transistor, is binary, consisting of 0s and 1s. This ultimate simplicity, switching on and off like a light switch, gives rise to ultimate complexity, as 0s and 1s can be stored, processed and distributed without limit. Through layers of abstraction (operating system, programming language, application), those same bits can represent colors, sounds, graphics or any other content, and every known computer in the world processes them.
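The abstraction described above can be sketched in a few lines: the very same bit pattern becomes a number, a character or a color component depending on the interpretation layered on top of it. This is a hypothetical illustration, not any particular system’s encoding.

```python
# The same byte means different things depending on the abstraction
# layered on top of it -- an illustrative sketch.

bits = 0b01000001  # eight on/off switches

as_number = bits              # interpreted as an integer: 65
as_character = chr(bits)      # interpreted as ASCII text: 'A'
as_gray = (bits, bits, bits)  # interpreted as an RGB gray level

print(as_number, as_character, as_gray)
```

Nothing about the bits themselves changes; only the software reading them decides what they mean.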

1962: The idea of the internet emerges, courtesy of the Department of Defense, enabling computers to communicate over a distributed network; the ARPANET goes live in 1969, and TCP/IP (Transmission Control Protocol/Internet Protocol) later becomes the standard that ties such networks together.

1966: ELIZA, the first chatbot “therapist,” emerges from MIT, the first program to put the Turing Test before the public. People thought the cleverly scripted app was sentient.

1966: The concept of an analog neural net is envisioned, but without the requisite compute power, nothing of note happens.

1966: AI that tries to emulate the functions of the brain by traditional programming means goes nowhere.

1965-78: IBM’s System/360, the first family of compatible general-purpose mainframes, changes business.

1969: Thompson & Ritchie create Unix, the first network-centric OS.

1981: The IBM PC changes the world.

Truly powerful machines arrive thanks to Moore’s law, whereby the number of transistors in a dense integrated circuit (IC) doubles about every two years.
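Moore’s law lends itself to a quick back-of-the-envelope calculation. The sketch below assumes the textbook doubling-every-two-years rule and uses the Intel 4004’s 2,300 transistors (1971) as its starting point; the `transistors` helper is illustrative, not a real forecasting tool.

```python
# Moore's law sketch: transistor counts doubling every two years.
# Baseline: Intel 4004 (1971), roughly 2,300 transistors.

def transistors(year, base_year=1971, base_count=2300):
    """Projected transistor count under a strict two-year doubling rule."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# Fifty years later, 25 doublings put the projection in the tens of billions,
# the same ballpark as today's largest chips.
print(f"{transistors(2021):,.0f}")
```

The point is the exponent: each doubling is modest, but twenty-five of them turn thousands of transistors into tens of billions.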

1989: The World Wide Web, courtesy of CERN and Tim Berners-Lee, becomes the information appliance running atop the internet: web browsers access websites and other resources located by URLs (Uniform Resource Locators). Software becomes web-centric.

1990: The search for web content begins.

1991: Linux, the open-source variant of Unix, becomes the de facto OS for the internet. As of 2022, an estimated 96.4% of the world’s top web servers run on Linux.

1994: WebCrawler enables one to search for any word on any indexed web page.

1996: The first search algorithm to rank pages via hyperlinks arrives.

1997: IBM’s Deep Blue beats world chess champion Garry Kasparov in a six-game match.

1998: Google enters the search arena using similar link-analysis tech, with the added benefit of selling ads tied to search terms.

1998: Google becomes the search engine king. Its iterative PageRank algorithm ranks web pages based on the number and PageRank of the other pages that link to them, on the premise that good or desirable pages attract more links than others.
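That premise can be captured in a toy version of PageRank. This is a sketch of the published algorithm, not Google’s production code, and the three-page graph is invented for illustration.

```python
# Minimal PageRank sketch (illustrative only). Each page's rank is
# redistributed along its outgoing links; the damping factor d models
# a surfer who occasionally jumps to a random page.

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

# "C" is linked to by both A and B, so it ends up ranked highest.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # -> C
```

Note the feedback loop: a link from an already high-ranked page is worth more than a link from an obscure one, which is why the calculation must be run iteratively until the scores settle.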

AI goes prime time.

2010: DeepMind, a pioneering British AI company, is founded; Google acquires it in 2014.

2011: IBM’s Watson beats champions Ken Jennings and Brad Rutter on Jeopardy!

2016: DeepMind’s AlphaGo defeats world champion Lee Sedol at Go.

2017: DeepMind’s AlphaZero defeats the strongest specialized programs at chess, Go and shogi.

2021: DeepMind’s AlphaFold largely solves the problem of predicting how proteins fold. The app is open source.

2020: DeepMind’s MuZero masters games without being told their rules.

2022: “Time to Edit,” the time required by the world’s highest-performing professional translators to check and correct a machine-suggested translation, benchmarks at one second for human-quality work; current AI stands at two seconds.

All DeepMind apps use neural nets, the analog-style front end, enabling the programs to ingest the massive amounts of training data needed for the system to master the discipline in question.

A neural net: the input, based on a query and encoded in analog fashion, samples video, images, sound and so on as needs warrant. Any hits passing a designated threshold are sent as output to the digital part of the system, ready to be analyzed and acted upon by the entity conducting the search.
Analog measures, digital counts.

The real power of AI centers on the analog side, ideally suited to the vagaries of reality, while digital, a fragile process requiring absolute precision in order to function, supplies the ability to analyze and act upon whatever output the neural net generates.
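The measure-then-threshold behavior described above can be sketched as a single artificial neuron, the basic building block of a neural net. The weights and threshold below are arbitrary illustrative values, not taken from any real model.

```python
# One artificial neuron -- a sketch of the measure/threshold idea.
# The weights and threshold are arbitrary illustrative values.

def neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Continuously varying ("analog-style") inputs yield a crisp digital output.
print(neuron([0.9, 0.2], [0.7, 0.3], threshold=0.5))  # 0.69 >= 0.5 -> fires: 1
print(neuron([0.1, 0.2], [0.7, 0.3], threshold=0.5))  # 0.13 <  0.5 -> silent: 0
```

A real network stacks millions of such units in layers and learns the weights from data, but each unit is doing exactly this: measuring on the way in, counting on the way out.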

ChatGPT, DALL-E 2, GPT-3 and Bard all use neural nets in their architecture and take similar text prompts, but the generated response is specific to the AI variant in question: OpenAI’s ChatGPT writes out text, DALL-E 2 creates photorealistic images, and GPT-3 produces prose and code. Google’s Bard is the equivalent of ChatGPT. Note: only Bard is currently connected to the live net. Note II: Google’s also developing variants similar to OpenAI’s. Addendum: GPT-4, released in March 2023, is substantially more capable than GPT-3.

2023: It’s tulip time for AI, with Microsoft kicking in $10 billion to OpenAI as its technology poses the first serious threat to Google’s core business. Hedge funds jump in on the fun as well.

The Question …

Is AI sentient? At this point in time, no. But does it matter? The implications of the tech in 2023 are already profound, ranging from students using ChatGPT to write college essays to scientists using the app to search for information relevant to their areas of expertise. In essence, AI is beginning to impact every discipline requiring thought.

A Faustian bargain, or There Ain’t No Such Thing As A Free Lunch.

The good, just a tiny list: AI’s beginning to unravel the complexities of cancer; it’s revolutionizing man’s view of the universe; and it’s paving the way to drugs that are safer and more effective. AI’s beneficial impact on renewables will change how we produce energy on planet Earth, a benefit that extends to all things medical.

Conversely, some researchers predict that by 2025, 90% of all online content will be synthetic. So whom do you trust?

2023: Middle Management Blues … A full quarter of surveyed business leaders say they’ve already replaced workers with AI, and 93 percent say they plan to expand their use of AI.

Future layoffs are on the horizon, too. An ominous 63 percent of business leaders believe that integrating ChatGPT will either "definitely" or "probably" lead to culling their human workforce.

So far, 66 percent of the companies employing ChatGPT use it to write code, 58 percent for copywriting and content creation, 57 percent for customer support, and 52 percent for summarizing meetings and documents, the survey found.

The business leaders are easily impressed, too, with 55 percent saying ChatGPT’s quality of work is “excellent.”

There is no certitude

When going to the store and the parking lot’s full, one knows a parking place will eventually open up; we just don’t know WHEN. :) Probabilities rule, just as in quantum mechanics, the physics of the very small. Note: without quantum mechanics, computers would not exist.

Who controls the past controls the future; who controls the present controls the past. - George Orwell, 1984

Everything is theoretically impossible, until it is done. - Robert A. Heinlein

AI’s the first open-ended tech created by man. 

Tech has no morality. It depends on who’s using the tech in question. 

In order for AI to do real-time searches, it requires software that writes software, code able to conduct searches in the real world in real time. Which means we don’t fully know how AI works: no human programmer can write, by hand, code that improves its own searching in real time via evolving genetic algorithms.
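For a concrete sense of what an evolving genetic algorithm is, here is a toy example that evolves bit strings toward a fixed target. Everything in it, the target, the mutation rate, the population size, is invented for illustration and is vastly simpler than anything in a real AI system.

```python
import random

# Toy genetic algorithm (illustrative only): candidates are bit strings,
# fitness counts bits matching a target, and each generation keeps the
# fitter half of the population and mutates copies of the survivors.

random.seed(0)  # fixed seed so the run is repeatable
TARGET = [1] * 20

def fitness(candidate):
    """Number of bits that match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break  # a perfect match evolved
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(fitness(best), generation)
```

No line of this program says how to build the answer; the answer emerges from variation and selection, which is the property that makes evolved code so hard to inspect.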

What happens if the software decides to do the search in order to evolve? 

1968: HAL 9000, 2001: A Space Odyssey.

2015: “Out-of-control AI will not kill us,” believes Microsoft’s research chief, while Prof. Stephen Hawking tells the BBC that such machines could “spell the end of the human race.”

2015: “Looking ahead, many researchers are pursuing 'AGI', general AI which can perform as well as or better than humans at a wide range of cognitive tasks. Once AI systems can themselves design smarter systems, we may hit an 'intelligence explosion', very quickly leaving humanity behind. This could eradicate poverty or war; it could also eradicate us.

That risk comes not from AI's potential malevolence or consciousness, but from its competence - in other words, not from how it feels, but what it does. Humans could, for instance, lose control of a high-performing system programmed to do something destructive, with devastating impact. And even if an AI is programmed to do something beneficial, it could still develop a destructive method to achieve that goal.

AI doesn't need consciousness to pursue its goals, any more than heat-seeking missiles do. Equally, the danger is not from robots, per se, but from intelligence itself, which requires nothing more than an internet connection to do us incalculable harm.” 

2017: The Dark Secret at the Heart of AI “No one really knows how the most advanced algorithms do what they do. That could be a problem.”

When resources get scarce, bots become all too human.

2017: “When apples were abundant, the two agents were happy to collect their fruit without targeting each other. But in more scarce scenarios with fewer apples around, the agents became more aggressive. The researchers also found that the greater the “cognitive capacity” of the agent, the more frequently they attacked their opponent. This makes sense, as in this scenario attacking an opponent is more complex behavior and so requires greater intelligence.”

2019: The rise of the machines … “US Navy moves toward unleashing killer robot ships on the world’s oceans.”

2021: “Superintelligence cannot be contained,” concludes a study in the Journal of Artificial Intelligence Research, echoing Alan Turing (1950): “Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do.”

2022: “Last March, a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only 6 hours for the AI tool to suggest 40,000 of them. The most worrying part of the story, however, was how easy it was to develop that AI tool.” 

The implications of AI are too important to ignore. People tend to think of AI as a static thing, not as an actual set of millions of interconnected things, an ever-evolving entity living in a distributed environment where duplication, modification and updating of real-time code work at speeds far beyond the ken of man. With this in mind, an excellent article by Henry Kissinger titled How the Enlightenment Ends connects AI to the Enlightenment at a deep level, a piece needing to be read by everyone concerned about how AI will impact society as we move further into the 21st century.

Tech never sleeps.

“Question everything.” - Einstein

Robert E. Moran - CEO, Digital Constructs Inc. 

This is a web-centric version of a piece written by yours truly for a local pub. :)
