the becoming: February 28, 2023

The ChatGPT Edition

Flower illustration courtesy of Midjourney. The exact prompt has been lost to time.

Hello! I’m Andi and you are reading the becoming - February 2023 edition.

Writing this month’s newsletter has been an experience. The first draft is now a published essay on Medium, On Belonging, a reflection on how we belong and what it means to come home. And the final edition, which you are reading now, has turned into a longer-than-expected exploration of AI and what it means to be human. Less about the technology itself and more about who we are, what we are learning about ourselves, and how we are being shaped by the world and tools we are creating. It also happens to contain 100x more links than any other newsletter I’ve written to date.

This is an archived version of the becoming. You can sign up to receive future editions using the form at the bottom of this page.

What does it mean to be human?

A few weeks ago I was at a conference for design leaders, appropriately named Leading Design, right here in San Francisco. Organized and curated by the fine folks at Clearleft, it was two days filled with thoughtful talks, wonderful people, and, for the first time this year, amazing weather. We heard from leaders at unicorns, in traditional finance, in government, and also in crypto (not an uncontroversial topic, that). Across such a wide variety of industries and companies, the most common theme that kept coming up, like the refrain in a song, was an invitation to be human at work.

I’m not going to argue with this message! I think overall it’s a good one - trying to ratchet up the level of humanity in business - but sometimes I wonder at the sheer magnitude of the task. The systems we’ve built weren’t exactly created with our humanity in mind, but rather for efficiency and power. And it’s not that we haven’t benefited, but our gains have come at a cost. No one at the conference explicitly said what we are being when we are not being human, but I have to guess: some kind of extension or cog of an organizational system that is afraid of what it means to be human.

In parallel, ChatGPT and its AI brethren have kicked off yet another round of musings about what it means to be human.

Reid Hoffman thinks that technology itself makes us more human, writing, “[t]echnology is the thing that makes us us. Through the tools we create, we become neither less human nor superhuman, nor post-human. We become more human.” This feels very on brand for Silicon Valley.

Steven Pinker takes a more circumspect approach, describing how technology like large language models (LLMs) can shed light on what makes us human precisely because they are so different than us: “[s]ince LLMs operate so differently from us, they might help us understand the nature of human intelligence. They might deepen our appreciation of what human understanding does consist of when we contrast it with systems that superficially seem to duplicate it, exceed it in some ways, and fall short in others.”

And Douglas Rushkoff points out that rather than learning what makes us human, we are instead reshaping ourselves to be able to interact with technology: “[I]n essence, these intriguing demonstrations of apparent AI self-awareness may say less about machine consciousness than they do about their capacity to manipulate human perception. In other words, if AIs are now passing the Turing test, it may say less about how human they have become than how robotic and programmable we have become, ourselves.”

All three are right, in a way. Building tools is an activity of humans, but this activity can, and quite frequently does, dehumanize us. We look at things that exist in the world, like forests and communities, and we see something functional that can be extracted, abstracted, refined, and reconstituted back in the network for leverage. We see the roundness of a tree trunk and that is what we extract from it, discarding everything else the tree does in the world as part of a forest ecosystem - providing shade, producing oxygen, housing animals, offering a place to rest. We see the strong ties of communities and we extract them into a graph, discarding everything else about what and how those relationships exist and flourish in the world.

So yes, we can make tools and this is very human, but these tools are extracted from something much more vast and beautiful, and the tools we create to extend ourselves also disrupt the direct feedback loops of our actions, further isolating and insulating us from the complexity of the world. And in the same way that we learn to see and extract functional qualities from beautifully complex interweavings of existence and purpose, we do the same to ourselves. We become an extraction of function at the expense of our own complexity, and at the expense of our humanness.

What I just described is the tl;dr of the tl;dr of Andrew Feenberg’s instrumentalization theory (more on this in a subsequent issue), an essential framework for articulating both the functional constitution of technology and how technological objects become realized and integrated into natural, technical, and social environments. This doesn’t have to be the way technology shapes us, but it is the path we’ve chosen so far. When control and profits are placed above the dignity of human beings, we reveal our complicity in creating the conditions we find ourselves in.

Feenberg, exploring the nature of technology in parallel with the nature of management, says: “The driver of an automobile accelerates to high speeds while experiencing only a slight pressure and small vibrations; the marksman shoots and experiences only a small force transmitted to his shoulder by the stock of the gun. By the same token management controls workers while minimizing and channeling resistance so far as possible.”

None of this is to say we should get rid of technology (or managers) or stop exploring the nature and potential applications of large language models! What I am saying (I think) is that collectively we are sensing that something is off. Not just the uneasiness of climate change, the entrenchment of white supremacy in every inch of our systems, or the geopolitical instability as we transition to a multipolar world. Collectively we are exhausted, and I think one contributing factor is the energy it takes to maintain the cognitive dissonance of being a function (a worker, a consumer) pretending to be a human. And then we look up and see OpenAI talk about how this new era of technology is going to usher in human flourishing and we all roll our eyes and think, “for which humans, exactly?” In these moments, a small voice whispers to me that maybe we don’t have to go all in on these systems, that there is a path to making our own lives bigger and the systems smaller.

Frank Lantz, kicking off the first issue of Donkeyspace, is trying to make sense of it all, too.

“Somehow we’ve conjured up an infinite hallucinogenic dreamscape out of Bayesian statistics and we’re already kind of bored by it. We made software that writes software but we’re not exactly sure how it works. We don’t program computers anymore, we deprogram them. Every day a new paper or press release announces that another deep philosophical thought experiment has become an engineering problem. And the smartest people in the room can’t agree whether we’re all going to die, or we’re all going to get rich, or none of it matters. I’m uh… I’m having trouble keeping up.”

It’s true, I am already kind of bored by it. But some of this I think is because I am tired of the endless hype cycles and harm these systems cause when they are rolled out with the urgency of being first to market, the first to control the narrative. I’m also not sure how much of what we are seeing today with things like ChatGPT and Midjourney would count as “intelligence” rather than yet another (impressive) tool created to extend the work of humans. I mean, what are these systems going to do in the absence of a prompt? Nothing. Absolutely nothing.

In the early 2000s I ran a MegaHAL bot on IRC that basically logged the entire channel as its corpus of data. It was fun, and over time Fie, the bot, got pretty good at mimicking the general culture and tone of the channel. ChatGPT feels similar to me, just with a lot more data. And I appreciate getting a great outline or the summarization of a bunch of data as much as the next person, but this doesn’t feel very life-giving (for us) or intelligent (for AI).
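For the curious, here is a toy sketch of the trick Fie relied on, using nothing beyond Python’s standard library. To be clear, this is not MegaHAL itself (the real thing builds forward and backward models and scores candidate replies); the class name and sample lines are my own inventions, just enough to show the “log the channel, then babble statistically” idea.

```python
import random
from collections import defaultdict

class ToyChannelBot:
    """A word-level Markov chain in the spirit of MegaHAL."""

    def __init__(self, order=2):
        self.order = order               # words of context per state
        self.chains = defaultdict(list)  # state -> words seen to follow it
        self.starts = []                 # opening states seen in the log

    def learn(self, line):
        """Feed one line of channel chatter into the model."""
        words = line.split()
        if len(words) <= self.order:
            return
        self.starts.append(tuple(words[:self.order]))
        for i in range(len(words) - self.order):
            state = tuple(words[i:i + self.order])
            self.chains[state].append(words[i + self.order])

    def reply(self, max_words=30):
        """Generate a line that mimics the channel's tone."""
        if not self.starts:
            return ""
        out = list(random.choice(self.starts))
        while len(out) < max_words:
            followers = self.chains.get(tuple(out[-self.order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

# Hypothetical usage: the real bot fed every channel message through
# learn(), so the corpus grew as the channel talked.
bot = ToyChannelBot()
for line in ["anyone up for coffee", "anyone up for fixing the build",
             "the build is broken again"]:
    bot.learn(line)
print(bot.reply())
```

The more the channel talked, the more states the model had to wander through, which is why Fie got better at sounding like us over time.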

Don’t get me wrong, I absolutely think there are great uses for the tools that are emerging - Jorge Arango recently posted a piece about how he uses it in his workflow, Notably.ai had an interesting thread about using it in research (also see Andrew Hinton’s thread on research + ChatGPT), and The Atlantic recently published an article about a writer who has used OpenAI’s Playground feature to automate a tedious bit of repetitive writing. But I am skeptical of the hype and the frenzy of activity dedicated to cultivating the prompt engineer skillset. Maybe because it’s all so… tediously banal. Where is the weirdness, the play, the exploration?
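To make that last example concrete, here is a minimal sketch of what automating a repetitive bit of writing can look like against OpenAI’s API (the Playground is essentially a UI over the same completion models). The prompt, model choice, and helper function are my own illustrative assumptions, not the Atlantic writer’s actual setup.

```python
import openai  # pip install openai (the pre-v1 client, current as of early 2023)

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

def draft_blurb(topic: str) -> str:
    """Hypothetical helper: draft a formulaic blurb for a human to edit."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a default Playground completions model at the time
        prompt=f"Write a two-sentence, neutral-tone summary of {topic}.",
        max_tokens=120,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(draft_blurb("an interview about design leadership"))
```

Useful, yes. But a helper like this is exactly the kind of extension-of-human-work I mean: it does nothing until prompted, and the interesting choices still live with the person doing the editing.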

This is, after all, Rushkoff’s suggestion for how to stay connected to our humanity and to each other:

“The answer is not to reject AI, but to work on retrieving and recognizing our humanity so we’re not so easily fooled into submission by these would-be conquerors. That means studying and engaging in community, the arts, spirituality, and play. More of us need to be building and using AIs that are designed for assisting and augmenting such human choice and activity, rather than controlling it. There may be less money to be made in the short-term, but more of a human civilization to be manifest in the long run.”

What does it look like to engage in community, the arts, spirituality, and play with AI by our sides assisting and augmenting our choice and activity? How do we make these things a part of our life and not something we squeeze into 20 minutes a day as long as an app reminds us to? If we can’t make community, the arts, spirituality, and play the foundation of our lives and work as humans, how can we hope to partner with AI to make this happen? I mean, I want this! But how do we make this happen?

Because right now these “intelligent” systems that can pass the bar exam are telling us how much they can’t engage in spirituality and play. The bar is a highly structured exam based on a thoroughly vetted and documented set of case data that describes the history of law in a country. This is exactly the kind of data something like ChatGPT needs to seem as human as a lawyer. But it still can’t write a Spozit.

So, I guess if you are the type of human who is exceedingly logical, with deep knowledge of a subject, your “being human” might look a little more like ChatGPT, but I am feeling pretty okay about how human the soft animal of my body is, and the knowing it contains.

Speaking of which, I’d love to see more conversations about embodiment, somatics, and the biology of cognition. Where are they? I know they are out there - who can point me in the right direction? While we are at it, where are the folks talking about field theory? Most of what I’ve read around AI, technology, and being human is largely grounded in the realm of the mind. We are missing what emerges when we are together in groups, beyond what we bring as individuals. As Jan Roubal and Gianni Francesetti point out:

“There is something new that appears in a meeting of people that transcends the individuals involved and even the relationship they cocreate. The whole of the situation is more than the sum of the people who meet each other. Moreover, the situation is forever changing from one moment to the next. This constant change, the flow of the situation, follows its own dynamics, and the people involved are constantly transformed by it, since they are functions of the situation in every here-and-now moment.”

A million years ago I attended an executive program at Singularity University. At the time, synthetic biology was on the horizon, a promise to unify wetware and software, and we still might get there some day. But when it comes to embodiment, I think this is where we really start to get a sense of what makes us human. How can we be in the field, attuned to the field, allowing the ways in which situations change, moment to moment, to reveal what holds meaning, to clue us in to what we are up to together? How is it that we can know - that we can sense - what is happening not just for ourselves, but in the collective consciousness and the Anima Mundi?

Our insatiable quest to build tools, to control the world and to know ourselves will continue and accelerate with the arrival of LLMs. There are things we will learn about the nature of intelligence and things we will learn about our own nature. Bruce Feiler shares his insight that ChatGPT can help teach prosocial behaviors, which, to bring this back around to where we began, could definitely be a benefit in work environments. Maybe that’s what conference headliners will be talking about in a few years, “How ChatGPT Made Me and My Team More Human.”

Of course, I can’t help but think of the movie “Her,” or my classmate Jacob Ciocci’s senior project at Oberlin College, in which a guy in the Midwest falls in love with a girl online - they have so much in common! - only to learn that the object of his affection is a bot built from his own photos, journal entries, and other online detritus. Sometimes it feels like we are heading in this direction, preventing ourselves from feeling our humanity by insulating ourselves from the beautiful assortment of human beings that exist in this world. Going back to Rushkoff, he urges that maybe “…instead of being afraid of AI, we should learn to be less afraid of other people.”

Maybe that’s where we are at. What it means to be human. To stop being afraid of other people.


I don’t have a tidy conclusion to what turned into a rather long reflection piece, just a lot of curiosity and fascination about this stuff. I am rooting for us as humans to get back to being human, whether that’s at work or elsewhere. I am eagerly waiting for us to lean into our complexity and jettison functional understandings of ourselves for more poetic and playful ways of being together in this world.


Worth noting:

What do you get when a philosopher, two cognitive scientists, and an education scientist walk into a bar? Critical Ignoring, a better way to deal with the hellscape that is the internet.

Are your values really your own, or have you been betraying yourself all along? Elizabeth Rayner Howes seems very good at this being human thing.

Apropos of absolutely nothing I have ever talked about here, I had to throw in this incredible website that documents artifacts found while excavating the North/South metro line in Amsterdam, which goes all the way back to the year -119000 (!).

To counterbalance such a far afield share, here is John Cutler’s incredible short book (née Google Doc) that helps teams sort out the messy details of product work.

Where do things like note taking and woodworking actually happen? I felt some serendipitous resonance between Jorge’s piece on Thinking Places and Jack’s thoughts on Working with Your Hands. What is happening between our brains and hands and thinking and making, and where is it all happening?


If you made it this far, my guess is you were just scrolling to see how long the damn thing was. Hi! Nice to see you. Thanks for being here, and see you next month!