J.R. Dunn | American Thinker
Technical topics, of any sort at all, are generally subject to serious distortion when they hit the level of public discussion. There are many reasons for this – ideology, click-lust, and the sheer inability of the average journo school grad to adequately wrap his head around whatever concept is under consideration.
There’s no end of examples: Just think of the garbage written about global warming or COVID.
The latest of these topics is Artificial Intelligence (AI). Commentary on AI has exploded across the media sphere since the release of ChatGPT, an AI app purportedly capable of learning to produce prose in any style on request. The consensus, to quote a style not yet mastered by ChatGPT, is almost uniformly “a tale told by an idiot, full of sound and fury, signifying nothing.”
The media uproar has been characterized by two approaches — the first (and most common) is a complete lack of understanding of the technology. The second is an impression of the topic derived from movies, largely HAL 9000 and Skynet (an older generation would add Colossus). These AI entities are uniformly insane, malevolent, or both (though not to the level of the one envisioned in Harlan Ellison’s “I Have No Mouth, and I Must Scream,” which is so overcome by existential loathing that it destroys all of humanity except for five individuals, whom it then sets out to torture for all eternity). For some reason, nobody ever suggests the AI Samantha in the superb film Her, who is cheerful, helpful, and even loving. That says more about human nature than it does about Artificial Intelligence.
The idea of machine intelligence was introduced by Alan Turing in his 1950 paper “Computing Machinery and Intelligence” (the term “Artificial Intelligence” itself was coined a few years later, in the mid-1950s). Turing had laid the theoretical groundwork for computers in the 1930s and then played a role in building the earliest working models for the British codebreakers at Bletchley Park. In this paper he proposed what came to be called the Turing Test, intended to answer the question of whether an AI should be treated as a self-aware entity – as another person. Turing’s argument was that if you converse with an AI – ask it questions and receive answers – and cannot decide whether you are interacting with a human or a machine, you must consider it an intelligent, self-aware entity. (The Turing Test has been philosophically challenged since then, while at the same time being subject to cheating by some AI researchers, who have pulled tricks such as personifying the AI as a twelve-year-old or a foreigner who speaks English as a second language.)
Turing’s speculations fell on fertile ground. While the original Bletchley Park “bombes” (a name inherited from the Polish “bomba” machines on which they were based) had been shut down after the war, more advanced computers such as UNIVAC were being designed and built during the early 50s. They were greeted with wild speculation along with musing on what it all meant for the fate of humanity. Conclusions were largely unanimous: “thinking machines” would soon outdo mere humans, who would then be either destroyed or shoved aside to go quietly extinct.
More than seventy years on, little has changed. The debate continues on the same shallow, uninformed level while we wait eagerly for AM or HAL to appear and start torturing or murdering us.
So what is the problem here? First and above all, when we speak of AI in the 21st century, we’re discussing two distinct and separate types as if they were one and the same thing. These are what I call “App AI,” which includes ChatGPT and the numerous AI art apps making the rounds, and “General Intelligence AI,” the movie-style HALs and Skynets capable of taking over everything and doing what they damn well please.
Up until now, all that we’ve seen are App AIs. These are software systems, generally built on neural nets, devoted to one particular task – text creation or artwork – featuring algorithms that modify the program’s responses as it “learns” more about the task. This learning is accomplished through “supervised learning,” in which mere humans set the parameters and goals, oversee the process, and examine and judge the results. Until now this human interaction has proven strictly necessary — “unsupervised learning,” when it has been attempted, usually goes off the rails pretty quickly. An App AI’s single task comprises its entire universe; it can’t simply take what it has learned and apply it to other fields. As Erik J. Larson puts it in The Myth of Artificial Intelligence (which should be read by anybody with an interest in the topic), “…chess-playing systems don’t play the more complex game of Go. Go systems don’t even play chess.” So no such AI is ever going to quit sampling internet imagery and try to take over the Pentagon. (This also applies to the guy who claimed, a couple of weeks back, that ChatGPT is already “running the financial system.”)
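For readers curious what “supervised learning” amounts to in practice, here is a deliberately tiny sketch in Python — purely illustrative, not the code behind ChatGPT or any real product. A human supplies labeled examples and a goal; the algorithm nudges its internal numbers until its answers match the labels. The perceptron and the AND-function task are classic textbook toys, chosen here for brevity.

```python
# Minimal illustration of supervised learning: humans provide labeled
# examples, and the algorithm adjusts its weights until its predictions
# match the labels. Here a toy perceptron learns the logical AND function.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs, labels 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = label - prediction      # supervision: compare to the label
            w1 += lr * error * x1           # nudge weights toward the goal
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

def predict(weights, x1, x2):
    w1, w2, bias = weights
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

# The human-supplied "universe": four labeled examples of AND.
labeled = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train_perceptron(labeled)
```

The trained model answers correctly for all four inputs — and that is the whole point of Larson’s observation: those four labeled pairs are the program’s entire world, and nothing it learned transfers anywhere else.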
There’s been a lot of speculation recently as to whether these systems will supplant humans working in particular fields. The answer is no — not yet, and probably not ever. A few weeks ago, Monica Showalter, esteemed by all AT readers, ran a Turing Test of sorts on ChatGPT. She entered the prompt “Write a piece on the future of the airline industry in the style of Thomas Lifson.” What she got was a bland, gassy, ill-written piece filled with clichés, non-sequiturs, and outright errors, none of which, I can state with authority, has ever been characteristic of Thomas’s writing.
But couldn’t an App AI conceivably learn enough, experience enough, and develop enough to stretch its electronic tentacles into fields that it was never intended for?
That brings us to General Intelligence AI, the realm of HAL and Samantha, the Holy Grail of AI research, and what Stephen Hawking and Elon Musk have both warned us against.
Turing had originally dismissed notions of machine intelligence on the grounds that machines lacked intuition – the human faculty that enables us to skip step-by-step procedures and go immediately to the heart of a problem. There exists no way to quantify intuition, or related human capabilities such as imagination. Though Turing set this objection aside in his 1950 paper, it remains valid today. There is no means of breaking down intuition, imagination, or simple common sense to make them programmable.
One of the shocking developments in AI research late in the last century was the revelation that machines can’t deal with the everyday. A program could play chess, model the interior of an M-class star, or plot a rocket trajectory with ease, but ask it to pilot a robot down a hall and it will immediately run into a wall and suffer a complete breakdown. This is something that Elon himself has encountered with his “self-driving” cars.
The statistical techniques that AI programs utilize – rifling through thousands, millions, or conceivably billions of possible solutions before they select the most probable – simply cannot replace the human attributes we all take for granted.
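To see what “selecting the most probable” looks like at its crudest, here is a toy sketch in Python — a bigram word predictor, far simpler than any real language model, offered only to show the flavor of the statistical approach. It “predicts” the next word by counting which word most often followed the current one in its training text; there is no understanding anywhere in it.

```python
# Toy illustration of the statistical approach: tally how often each word
# follows another, then "predict" by picking the most frequent continuation.
# Frequency-matching, not thinking.
from collections import Counter, defaultdict

def build_bigram_model(text):
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def most_probable_next(model, word):
    candidates = model.get(word)
    if not candidates:
        return None                          # word never seen mid-sentence
    return candidates.most_common(1)[0][0]   # likeliest continuation wins

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug")
model = build_bigram_model(corpus)
```

Ask it what follows “sat” and it answers “on,” because that pairing was most frequent in the corpus — a mechanical tally with no notion of cats, mats, or sitting. Real systems rank billions of candidates with far subtler statistics, but the principle is the same.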
We don’t actually know what “intuition” or “common sense” are, which means that we don’t know what thinking is. And if that’s the case, how can we hope to duplicate it? It took something on the order of three-and-a-half million years for intelligence to develop in human beings. Nobody, however adept, will replicate that in a handful of years.
We are likely to find that conscious intelligence is an emergent property arising from elements we can now scarcely conceptualize, much less understand. And if we can’t understand it, it’s unlikely that we will be able to transfer it to silicon chips.
So Skynet is not going to be stomping on our skulls just yet. Which doesn’t mean that people will stop working on General Intelligence AI. That’s no bad thing – such research will teach us a lot about ourselves, possibly including things we’d rather not know. And if by some wild chance such an effort was successful, we’d still have little reason to worry. As Elon has pointed out, such an entity would be isolated in a research facility and dependent on extraordinarily complex and sensitive hardware. Any change in the system, such as a malfunction or brownout, would be likely to shut the whole thing down. (Which raises other questions: are we morally justified in creating an intelligent entity that will inevitably succumb to malfunctions? I would say “no.”)
As for App AI, it is simply a new kind of tool, and Sapiens does well with tools. It will continue to improve, providing us with more capabilities and potential. At the moment, App AI is at about the same level as home computing was in the early 80s, when it was limited to trivia such as primitive games. The prospects are enormous.
But ChatGPT will not take over and force everyone to read its stuff eighteen hours a day. Nor will AIs put everybody out of work, something that has been predicted since Kurt Vonnegut published Player Piano in 1952. Since the 1970s, it has been clear that infotech actually creates jobs by expanding existing industries and establishing new ones. We have no reason to think that AI will be any different.
AI will also provide us with a new armory of digital defenses against the current efforts by the WEF, the tech giants, and the elites to force technofeudalism on us. And that, playmates, will be something worth having.
DISCLAIMER: The opinions, beliefs, and viewpoints expressed by the authors of articles published under the header “COLUMN/EDITORIAL & OPINION,” whether online or in the newspaper, do not necessarily reflect the opinions, beliefs, and viewpoints or the official policies of the Publisher, Editor, Reporters, or any other staff of the Hemet and San Jacinto Chronicle newspaper.