The Case Against the Case Against AI

A review of The Age of AI and Our Human Future by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher. Little, Brown and Company, 272 pages (November, 2021).

Potential bridges across the menacing chasm of incompatible ideas are being demolished by a generation of wannabe autocrats presenting alternative facts as objective knowledge. This is not new. The twist is that modern network-delivery platforms can insert, at scale, absurd information into national discourse. In fact, sovereign countries intent on political mischief and social disruption already do this to their adversaries by manipulating the stories they see on the Internet.

About half the country gets its news from social media. These digital platforms dynamically tune the content they suggest according to age, gender, race, geography, family status, income, purchase history, and, of course, user clicks and cliques. We know that their algorithms demote opposing viewpoints and amplify like-minded “us-against-them” stories, further exacerbating emotional responses on their websites and in the real world. Even long-established and once-reputable outlets capture attention with manufactured outrage and fabricated scandal. Advertising revenue pays for it all, but the real products here are the hundreds of millions of users who think they are getting a free service. Their profiles are sold to the highest corporate bidder.

In her book, The Age of Surveillance Capitalism, Shoshana Zuboff meticulously describes how our online behavior and demographic data are silently extracted and monitored for no greater purpose than selling us more stuff. She quotes UC Berkeley Emeritus Professor and Google’s Chief Economist Hal Varian, who explains how computer-mediated transactions “facilitate data extraction and analysis” and customize user experiences. Google, the market leader in online advertising for over a decade, has benefited most of all. Their third quarter revenue in 2021 for advertising alone was up 43 percent (compared to the same period in 2020) to $53 billion. The models and algorithms work.

Let’s leave aside for a moment the moral and ethical considerations of data rendition that internet properties use to collect information, and focus instead on what they do with it. This brings us to the epistemic threshold of artificial intelligence and machine learning, or “AI and ML.” In their new book, The Age of AI and Our Human Future, Henry Kissinger (Richard Nixon’s Secretary of State), Eric Schmidt (the former chair and CEO of Google), and Daniel Huttenlocher (the current and inaugural dean of the Schwarzman College of Computing at MIT) sound the alarm about the technological dangers and philosophical pitfalls of uncontrolled and unregulated AI. They join an august list of technology luminaries and public intellectuals who have expressed similar concerns, including Stephen Hawking, Sam Harris, Nick Bostrom, Ray Kurzweil, and Elon Musk.

Their persuasive arguments are brilliantly cast in the sweep of historical context and implicitly coupled to their global reputations. According to Kissinger, Schmidt, and Huttenlocher (KSH), only with official government oversight and enforceable multilateral agreement can we avoid an apocalyptic future that subordinates humanity to (mis)anthropic computers and robots. They cast some of AI’s early results as the first step on the alluring-but-broken pavement to cyborg perdition. They urgently recommend international negotiations comparable in diplomatic and moral complexity to cold war nuclear-weapon treaties. And I believe that they are mistaken in perspective and tone.

The fundamental principle upon which I base my counter-argument is that, as Erik J. Larson explains in The Myth of Artificial Intelligence, “Machine learning systems are [just] sophisticated counting machines.” The underlying math is inherently retrospective and can only create statistical models of behavior. Curiously, this is a derivative insight of Varian’s paper; winning platforms have insurmountable advantages because they have the most users, which makes them attractive to more users. As long as there are no material outliers in the data, their stochastic distributions can be, as compactly stated by the OED, “analyzed statistically but not predicted precisely.”

Of course, the algorithms are brilliant, the insights sublime. The number of behavioral and demographic attributes that network platforms collect and correlate is a secret (this itself is a topic of contention), but it is probably in the hundreds or thousands. But whatever magic there is exists in the underlying platforms and their attached memory, not in some latent superhuman insight that should invite concerns about autonomously directed objectives, never mind cyberdreams that threaten our survival. (We have much bigger problems than that today: climate change, social unrest, religious zealots with strategic weapons, to name three.)

A trite-but-revealing counterexample is sometimes referred to as Russell’s Turkey, after one of the most influential logicians of the last century. In formal terms, it demonstrates the inherent limits of inductive inference, which basically says, “just because it’s happened a lot, doesn’t mean that there’s an undiscovered law of nature that says it will keep happening.” A turkey may conclude that it will be fed every day because that is what happens until it is slaughtered just before Christmas or Thanksgiving. The outlier can make all the difference. Ostensibly rare events—which in the real world are not-so-rare—are explored in Nassim Taleb’s iconic book, The Black Swan.

Larson’s and Taleb’s identical punchline is that any inductive model will look good for a while, right up until something is different: a so-called fat tail distribution. AI and ML models are inherently inductive—all their “predictions” are based on the past—and therefore can be deceptively incorrect at the moment it matters most. That’s okay for Google (another ad awaits!) but it is not okay for autonomous weapons systems, because the mistakes are irreversible and, using Taleb’s definition of fat tails, “carry a massive impact.” Kinetic decisions cannot be undone.
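
To make the point concrete, here is a minimal sketch (my illustration, not Larson’s or Taleb’s) of how a purely inductive estimator behaves on the turkey’s data:

```python
# A toy "sophisticated counting machine": estimate tomorrow's outcome purely
# from the observed past. (Illustrative only; the figures are made up.)
import numpy as np

fed_history = np.ones(1000)          # 1,000 observed days, fed on every one of them
p_fed_tomorrow = fed_history.mean()  # the inductive estimate for day 1,001

print(f"Estimated probability of being fed on day 1,001: {p_fed_tomorrow:.3f}")
# Prints 1.000. The model is exactly right about the past and exactly wrong about
# the one outlier -- the slaughter -- that actually matters. More history only
# deepens the confidence; it never surfaces the fat tail.
```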

KSH understand this, of course, but they conflate the “ineffable logic” of AI with “[not] understanding precisely how or why it is working” at any given moment. Indeed, on the same page, they go on to say that “we are integrating nonhuman intelligence into the basic fabric of human activity.” Belief in non-human intelligence is akin to other kinds of human convictions that may feel correct but lack objective evidence. Automated pattern-matching and expert digital systems lack emotion, volition, and integrated judgment. I am not ruling out the possibility of a machine replicating these human-like features some day, but today that question is philosophical, not scientific.

Some computers take action or make decisions based on sensory inputs, yes, but they are no more intelligent than a programmable coffee pot. Cockpit instruments, for example, do more than boil water on a timer—dog-fighting jets are at the other end of the competency spectrum—because a team of humans designed and built them to perform more complicated tasks. In the case of a coffee pot, we know precisely how and why it is working; in the case of a cockpit (or a generative speech synthesizer), the number of possible decisions is larger and their impact more important, but both are still constrained. Neither can perform surgery; neither is capable of moral reflection.

KSH emphasize that AI can play better chess than any human ever has; has discovered a medicine no human ever thought of; and can even generate text able to “unveil previously imperceptible but potentially vital aspects of reality.” But these are just systematic and exhaustive explorations of kaleidoscopic possibilities with data that was provided, and rules that were defined, by humans. AI systems can check combinations (really, sequences) of steps and ideas that no human had previously considered. KSH, however, connect these astonishing technical achievements to “ways of knowing that are not available to human consciousness,” and call upon Nobel laureate Frank Wilczek’s Fundamentals for support:

The abilities of our machines to carry lengthy yet accurate calculations, to store massive amounts of information, and to learn by doing at an extremely fast pace are already opening up qualitatively new paths towards understanding. They will move the frontier of knowledge in directions, and arrive at places, that unaided human brains can’t go. Aided brains, of course, can help in the exploration.

On this point we can all agree. But then Wilczek goes on to say, indeed in the very next paragraph, that:

A special quality of humans, not shared by evolution or, as yet, by machines, is our ability to recognize gaps in our understanding and to take joy in the process of filling them in. It is a beautiful thing to experience the mysterious, and powerful, too.

This is the philosophical crux of the counter-argument. If we thought at gigahertz clock speeds, and were not distracted or biologically needy, we could get there too. That AI finds new pathways to success in old games, modern medicine, creative arts, or war that we might never have considered is magnificent, but it is not mystical. (My company does similar things today for claims adjudication and cybersecurity infrastructure.) No amount of speed, or data, or even alternative inference frameworks, will enable a machine to conceive an authentically new science or technology; they can only improve the science and technology we already have.

Indeed, that same class of backward-looking event-counting algorithm might deliver a dangerous answer fast in exactly the moment when a slow, deliberative, and intuitive process would serve our security interests better. The book might have included an example of how just one person, Stanislav Petrov, averted a nuclear holocaust by overriding a machine recommendation with human judgment in a situation neither had seen before—the very definition of an outlier. The Vincennes tragedy, where an ascending Iranian civilian airliner was mistakenly tagged as a descending F-14 “based on erroneous information in a moment of combat stress and perceived lethal attack,” also deserves mention; over-reliance on a machine cost hundreds of lives. Neither incident would have benefited from “better AI.”

This is where KSH and their cohort philosophically misalign with the fundamental limits of inference engines. Some of the technical impediments to artificial general intelligence may never be overcome; others are simply, and demonstrably, out of reach today. The fear, or hope, is that one day we will break through these barriers to a deeper understanding of intelligence, sentience, consciousness, and morality, and then we will be able to reduce these insights to code. But there is no reason to believe we are anywhere near such a breakthrough. Indeed, what we do understand indicates, because of how logical propositions are expressed and interpreted, that the goal is impossible. For example, as Larson summarized:

[Kurt] Gödel showed unmistakably that mathematics—all of mathematics with certain straightforward assumptions—is strictly speaking, not mechanical or “formalizable.” [He] proved that there must exist some statements in any formal (mathematical or computational) system that are True, with capital-T standing, yet not provable in the system itself using any of its rules. The True statement can be recognized by a human mind, but is (provably) not provable by the system it’s formulated in.
[Mathematicians thought] that machines could crank out all truths in different mathematical systems by simply applying the rules correctly. It’s a beautiful idea. It’s just not true.
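
For readers who want the claim Larson is paraphrasing stated precisely, the standard textbook form of Gödel’s first incompleteness theorem (a conventional statement, not a quotation from the book) runs roughly as follows:

```latex
% Gödel's first incompleteness theorem, informal textbook statement.
\textbf{Theorem (Gödel, 1931).} Let $F$ be any consistent, effectively
axiomatizable formal system strong enough to express elementary arithmetic.
Then there is a sentence $G_F$ in the language of $F$ such that
\[
  F \nvdash G_F
  \qquad\text{and}\qquad
  F \nvdash \lnot G_F .
\]
On the standard interpretation, $G_F$ asserts its own unprovability in $F$,
and is therefore true, yet no mechanical application of $F$'s rules can derive it.
```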

In other words, there are ideas that no machine will ever find, even if we do not yet fully understand why humans can and sometimes do. The best argument that KSH can make in riposte is that even if there is only a small chance of conceptual extension, we should take care to neutralize it now (and invest resources into doing so) before it is too late. But this is worse than implausible; it is unnecessary. The talent and training we need to address existential threats are already inadequate; to add yet more burden would misallocate the real intelligence we do have to the wrong class of problem.

AI only knows what is in the data. The unfortunate use of the term “learning” is a simple, but potent, source of confusion and apprehension. These machines are nothing more than wonderfully sophisticated calculators, even the ones that sound human (and pass the so-called Turing test) or can generate fake videos that an ordinary person would believe are real. Today, they are incapable of discovering fundamentally new facts (chess moves don’t count), or assigning new objectives (nor does protein folding), other than what is contained in the data they consume or the probabilistic rules they follow that are, by definition, consistent but not complete. There are some true things AI will never see without a lot of help.

I acknowledge that there are deep and legitimate concerns about the freedom-limiting dystopia we create by ubiquitous sensors, website trackers, and behavioral analytics. There is nothing particularly magnificent about the hatred and violence induced by AI-enabled social media platforms designed to produce addictive behavior, narrow users’ perspectives, and irretrievably sharpen convictions. Indeed, all three of the authors established their reputations by assembling as much data as possible to predict human behavior. Their sincere concern about technological overreach and privacy intrusion, from point assembly to coercive control, is justified.

But to allow Alan Turing to establish the standard of artificial intelligence is like allowing Thomas Jefferson to establish the standard of political liberty. These men made spectacular contributions to our scientific understanding and our social compact, respectively; one held AI intuitions that have since been proven wrong, the other held slaves. Does that diminish their genius? Of course not. But the comparison is apt, and their mortal discourse is subject to immortal debate that, as far as we can see today, no machine will understand, unravel, or replicate.


This is a companion discussion topic for the original entry at https://quillette.com/2022/01/07/the-case-against-the-case-against-ai/

AI only knows what is in the data. The unfortunate use of the term “learning” is a simple, but potent, source of confusion and apprehension. These machines are nothing more than wonderfully sophisticated calculators, even the ones that sound human… Today, they are incapable of discovering fundamentally new facts…, or assigning new objectives…, other than what is contained in the data they consume or the probabilistic rules they follow that are, by definition, consistent but not complete. There are some true things AI will never see without a lot of help.

A couple things:

  1. I don’t buy that there’s a bright line between “learning” as done by AI programs and as done by humans. A lot of what humans do that is generally considered learning or training, both in the workplace and in school, is to memorize a set of facts pertinent to certain objectives, then figure out how to apply those facts to accomplish the objectives, and over time you “learn” how to perform some function with relative efficiency and efficacy. Not sure I see how this differs from what AI programs are doing. How do people learn algebra? How do people learn chemistry, law, economics? I guess the idea is that the AI is able only to optimize functions, and is not able to generate original ideas or insights. I would argue this is less important than the author seems to think. I’m going to say something rather elitist here and everyone will just have to deal with it: how many people actually generate original ideas or insights that turn out to be valuable or useful? Very few, in my estimation. Both the number and the percentage are pretty small. So AI programs can’t do what most humans can’t do either, I guess is my opinion, and I’m not sure this is particularly meaningful.

  2. “Today, they are incapable of…” Right. Today.


It’s a strong and persuasive argument, until one considers that we don’t apply the stages of human consciousness to the field of AI. There is a pretty good literature on human development which shows that, in terms of human general intelligence, we don’t actually develop the thinking, intelligence-led, environment-interactive qualities of human existence until we achieve theta waves at or around age 2.

This may sound abhorrent, and it should, considering how cute and adorable babies can be, but it is quite true. Babies seem like they are absorbing every detail, learning and developing as they go, but most of this is pure data gathering and organic brain growth. It also doesn’t mean that conscious and thinking memory isn’t possible before this: human variability is quite substantial in cognitive terms, and particularly traumatic events can get logged into long-term memory by virtue of sheer overwhelming shock. I, for example, can remember launching myself from the top of a set of stairs riding in a toy car and crashing through the opaque plate glass window at the bottom of the stairs. I can’t have been much older than one at the time.

But the transition to theta is when we really begin to form substantial stores of memory and learn to interact with the world in a thinking, considered manner. It’s also when we begin to learn to socialise and, for most of us, begin to acquire the moral foundation of reciprocity, which many wrongly assume is innate at birth. Here is a source on the various stages of human brain wave development:

Here is the problem: human thinking consciousness and human general intelligence probably only require two ingredients to develop. One is the iterative ability to process and confirm data. Iain McGilchrist writes about it in his book The Master and His Emissary: The Divided Brain and the Making of the Western World. The other is the ability to compare and contrast this sensory data with observed knowledge stored in long-term memory. From this we build our intellectual structures for viewing the world, and against this wordless observation the power of language is puny, until, that is, we learn to use words to convey observations and form more complex intellectual structures somewhat detached from physical reality.

At some point, probably during the earlier delta stage, the ability to abstract is born, as witnessed by the fact that young children can translate two-dimensional pictures into mental representations. As far as I am aware, only dolphins have been shown to share this ability with us: they can actually identify individual humans from a picture on a screen.

In this light, and in the full humility of the fact that we are not born cognitively as fully formed human identities, it behoves us to be somewhat terrified of the very real prospect of artificial general intelligence which will quickly surpass us, probably within most of our lifetimes.

The other problem is our lack of humility. We are insanely stupid when it comes to anticipating the 2nd and 3rd order effects of social changes. Don’t get me wrong: in the course of the latter part of the twentieth century there were some amazing landmarks of social change, but at the same time we made mistakes which were so insanely stupid that they have already begun to look to current generations as though they had malign and diabolical intent.

So can we even be trusted with the simulated intelligence we have already developed? Of course not! All social media for children younger than 16 should have been banned long ago, for the simple reason that it supplants the very normal teenage angst and pain which forces us to cling to friends of our own age to bear it, forming deep bonds which will serve as a model for all human friendships throughout our lives.

It has become apparent that our supposed intelligence is rapidly outstripping our wisdom. In no area is this tendency more evident than in our lack of fear over the imminent possibility of artificial general intelligence.

Too true. A true Turing test would be to hook a camera to the internet monitoring an experiment to detect quantum entanglement and make humans incapable of accessing the camera. As soon as we detect a wave collapse, we will know that we have conscious, thinking and self-aware AGI.


I have strong opinions on most issues, but the potential danger of AI is not one of them. Very smart people make very strong arguments on all sides of the debate.


That’s where I am. Here’s an overview of some of AI’s potential dangers.


One large issue is humans’ tendency to anthropomorphize everything. It’s like the funny videos and photos of a polar bear and a sled dog. Everyone’s like, “awwwwwww, that polar bear is cuddling / showing affection to the dog.” Yet the polar bear expert looks at it and says, “Oh dear, polar bears use their paws to explore new food items. It probably isn’t hungry, but you’ll want to move that dog because that bear will remember where to find it when it is.”

We do the same thing with AIML. In this case we apply human ethics and considerations to the AIML when what we are dealing with is cold, hard algorithms. So when we have an ML that misclassifies a black person as a gorilla, we call it “racist.” No…it’s not racist, it’s inaccurate. Racist would be a misclassification that triggers an event which says: because you are classified as a gorilla, you MUST BE a gorilla, and therefore we’ll pick you up and put you in a zoo. Again, it isn’t racist AI, AI can’t be racist, it is merely inaccurate AI.

The same goes for other ethical considerations. Thousands of accidents happen every day in the US between human drivers, even horrific trolley-car quandaries. Yet when it is all over and the human chose track A over track B, we have forgiveness. Now put a self-driving car in the position of choosing between A and B and we start to lose our minds. We have no forgiveness for machines.

So one case against AI ever getting anywhere near what the dystopias envision is that humans are actually ‘wrong’ in their individual decision making quite an awful lot and only ‘right’ in their decision making with large samples (and often barely so). AI is no less susceptible, because life is too computationally intensive to be ‘right’ all the time, and we don’t have a lot of forgiveness when machines are wrong.


Great source, but the Horowitz Tyranny of AI Design has been with us for decades, within our financial and insurance risk-adjustment systems. The problem is that you can take out any type of arbitrary class you like (gender, race, etc.), but the arbitrary class will always reassert itself at the fine-detail level, through circumstance-related historical decisions, geography, consumer choice, and a whole host of other data points. It’s a form of collectivism through statistical tendency.

You can try to remove discrimination as much as you like, but it will always bounce back through the statistical data. If anything, the problem only becomes worse the more data finance and insurance own on people. It’s a far bigger problem than coding for bias; I don’t even know whether it’s possible to solve. The only thing I could think of to at least partially fix it would be to perform a complete data purge on every citizen at 25. At least then bad decisions stemming from a poor background when young wouldn’t haunt people for the rest of their lives, and it would shift the data skew of certain groups.

Sometimes it’s the institutional decision-making itself which becomes the agent of self-fulfilling prophecy.
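
A toy sketch of the proxy effect described above (my own illustration, with entirely made-up features and a standard scikit-learn model, not anything drawn from an actual insurer):

```python
# Even when the protected attribute is withheld from training, correlated
# "proxy" features let the model split its decisions along group lines.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # protected class; never shown to the model
postcode = group + rng.normal(0.0, 0.3, n)       # geography correlated with group
spend = rng.normal(50.0, 10.0, n) - 5.0 * group  # consumer history shaped by group

X = np.column_stack([postcode, spend])           # training features exclude `group`
label = (group + rng.normal(0.0, 0.5, n)) > 0.5  # historical "high risk" label, itself skewed

model = LogisticRegression().fit(X, label)
pred = model.predict(X)

print("flagged high-risk, group 0:", round(pred[group == 0].mean(), 2))
print("flagged high-risk, group 1:", round(pred[group == 1].mean(), 2))
# The two rates diverge sharply even though `group` never entered the model:
# the arbitrary class reasserts itself at the fine-detail level.
```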


It’s always been interesting to me that the fact that young men must pay higher auto insurance premiums than young women has not been determined to be gender discrimination. (Not that I have a problem with it, I just think it’s interesting.)


Well, it’s because the risks for young men are disproportionately high compared to older men, whereas young women are a comparatively safe risk compared to older women.

The main risk for men is driving too fast. The main risk for women is being distracted by children whilst driving. Men are actually safer per road mile, but drive three times as much per year.

Young men’s insurance is also higher because in many countries (like the UK) all road-related injuries and medical costs are paid for through car insurance (even with NHS care). Speed doesn’t just kill; it injures and maims for life.

It’s also why insurance companies prefer it if young men are insured on dad’s insurance, even with their own cars. Fear of dad killing them if they screw up his insurance makes them drive more safely.


“A guy I know” worked, approximately in 2012, for the largest non-internal (meaning, not Google, Facebook, etc.) data broker in the U.S.A. In other words, “big brother”.

The data store contained records for pretty-close-to every resident of the U.S.A. At that time the company maintained 1,500 potential data points on each person; for any given individual, the typical number of actual data points (meaning, non-null values) describing them averaged about 500.

Typical data points:

  • age
  • sex
  • income
  • number of children in household
  • number of seniors in household
  • magazines subscribed to
  • marital status

… and so on.

FYI


Point-of-sale information tracking who has cats, dogs, hunting and fishing licenses, who reads sci-fi and who reads non-fiction, beauty-care products, premium travel-oriented credit cards, shopping at wholesale stores such as Costco and Sam’s. Recreational cyclists, runners, walkers, boaters, skiers.

Of course all of these markers are useful to retailers who want to advertise to you the kinds of stuff you’ve bought in the past, but put them all together and it’s quite a profile of your tastes and behaviors.

Yep, all those things you list. It was quite sobering how well the company could predict certain things based on those data points.

Another wrinkle: the sources of all that information (e.g., magazine publishers) quite often wanted the fact that they “shared” (i.e., sold) this information to be kept secret.

True, of course, but it ain’t hardly new. In the early ’70s I was in political publishing and fundraising, and everybody ‘sold’ their mailing lists to everybody else. But of course we didn’t give you the actual names and addresses to add into your database; we just gave you a shot at mailing to the addresses we shared with you. If they responded to you, then they became ‘your names,’ which you in turn could license others to mail to.