March 14, 2023

❏ Notes to Self 001 - Projecting Intelligence

Continuing to collect links and insights that are helping me make sense of the most recent season of Is Artificial Intelligence Intelligent? Of course, the day after I wrote the ChatGPT edition of the becoming, Intelligencer published You Are Not a Parrot, which says a lot of what I was trying to say, but much better and with more experts.

Then John Maeda sent out the March 2023 #DesignInTech briefing, in which he includes a passage from Computer Power and Human Reason, penned by one Dr. Joseph Weizenbaum, who apparently invented the chatbot (ELIZA) back in 1966, so he definitely has some street cred:

I knew of course that people form all sorts of emotional bonds to machines, for example, to musical instruments, motorcycles, and cars. And I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short exposures to their machines. What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. This insight led me to attach new importance to questions of the relationship between the individual and the computer, and hence to resolve to think about them.

Once more for everyone in the back: "extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." I mean, this feels like what is happening, no?

To keep our heads spinning, GPT-4 was just released, along with a white paper I haven't read (yet), though it sure doesn't sound very open.

My working assumption right now is that to whatever extent intelligence is being attributed to these tools, we are dealing with projection. Because if you look at the larger zeitgeist and polarizations that are happening in this country (around the world, really) and the intense desire to dehumanize a large part of the planet (I mean, the same desire that's been around for at least 500 years with the rampant violence and subjugation of humans and nature), what is with the urge to humanize a Large Language Model? And if you take the other side and assume these projections are based on positive qualities, then we run the risk of transforming these models into gods.

But either way you look at it, the one thing that remains true for projections is that they function to absolve us of any responsibility in the matter. And that is what scares me the most.

I am beginning to think these projections matter more than whether or not these models are actually intelligent (I need a philosophy tutor just to approach the depth of knowledge and nuance required to coherently contribute to a conversation around "what is intelligence?"). I do think that these models will become part of agentic systems that will be tasked with going off and doing things like scientific research, engineering, business/military/political strategy, hacking, and social persuasion/manipulation (see Joe Carlsmith's paper examining the risk factors of power-seeking AI for more on this). To me, that reads like creating tools that further the extractive and violent model of human living (e.g., colonialism, white supremacy). If domination is what we reify, domination we will get. Who is building LLMs that will write poetry, compose sonnets, and delight in the beauty of a butterfly? Not as a cute parlor trick but in a capacity that supports the tending of the human parts of ourselves (and no, this does not look like a 'mental health app' coughed up from the hellscape of capitalist consumerism).

In The Trap of Gargantius, the First Sally of the inventors Klapaucius and Trurl is undertaken at the behest of a king in search of a way to make his army act as one, which presumably translates into conquest on the battlefield. To this end, the two intrepid inventors go about outfitting each soldier with the hardware required to link one to the next. But as each individual connects and the collective mind grows, the unexpected begins to happen.

The fearsome metallic clatter of closing contacts reverberated over the future battlefield; in the place of a thousand bombardiers and grenadiers, commandos, lancers, gunners, snipers, sappers and marauders—there stood two giant beings, who gazed at one another through a million eyes across a mighty plain that lay beneath billowing clouds. There was absolute silence. That famous culmination of consciousness which the great Gargantius had predicted with mathematical precision was now reached on both sides. For beyond a certain point militarism, a purely local phenomenon, becomes civil, and this is because the Cosmos Itself is by nature wholly civilian, and indeed, the minds of both armies had assumed truly cosmic proportions! Thus, though on the outside armor still gleamed, as well as the death-dealing steel of artillery, within there surged an ocean of mutual good will, tolerance, an all-embracing benevolence, and bright reason. - Lem, Stanislaw. The Cyberiad (p. 42).

So I'm on the lookout for mischievous robot inventors instigating random acts of kindness, humor, and goodwill.

Oh, and one more thing: a collection of suggested readings from the #StochasticParrotsDay live stream hosted by the DAIR Institute.