- I've spent the two years
since ChatGPT launched,
steeping in
a morass of academic panic.
Voices from
administration and colleagues,
and anyone else
with enough brain cells
to maintain a sense of existential dread,
crying out,
"we need to figure out
what to do about AI."
Our Ed Tech committee
is developing a policy.
The Academic Senate wants
to develop a policy.
The board thinks we should have a policy.
My dean wants us all to have policies.
The California Teachers Association says
it's an issue of academic integrity.
The State Senate says
it's an issue of ethics.
We need to pay for the AI detection tools.
The AI detection tools don't work.
We need to accept that
our students will use AI.
How do I prove my student used AI?
It is incomprehensible to me,
this conversation.
I hear their words,
see their language floating
across my monitor,
and know the words,
but I cannot get to the meaning
because I simply do not understand
why they are talking about it in this way.
[soft piano music]
- [Kermit the Frog]:
♪ New York, I love you,
but you're bringing me down ♪
--with all these empty words.
- ♪ New York, I love you,
but you're bringing me down ♪
[music continues underneath]
- This is not the conversation
I think we need to have.
This is the conversation I need to have.
[overlapping music and poem recitation]
[Gertrude Stein]: " 'If I Told Him,
A Completed Portrait of Picasso'.
If I told him would he like it.
Would he like it if I told him.
Would he like it
would Napoleon would Napoleon would
would he like it.
If Napoleon if I told him
if I told him if Napoleon.
Would he like it if I told him
if I told him if Napoleon.
Would he like it if Napoleon
if Napoleon if I told him.
If I told him if Napoleon
if Napoleon if I told him.
If I told him would he like it
would he like it if I told him.
Now.
Not now.
And now.
Now.
Exactly as as kings.
Feeling full for it.
Exactitude as kings.
So to beseech you as full as for it.
Exactly or as kings.
Shutters shut and open so do queens.
Shutters shut and shutters
and so shutters shut and shutters and so--"
[poem and music fade out]
- I don't understand Gertrude Stein.
Stein is not nearly well enough
remembered for how influential she was.
An American expatriate poet
living in Paris,
her salons were among the
anchors of the early modernists.
You may not have heard of her,
but you've heard of
the people who visited her.
Ernest Hemingway, Sinclair Lewis,
F. Scott Fitzgerald, James Joyce,
Thornton Wilder, Ezra Pound.
People you've read
or been assigned to read.
We remember Hemingway
because he wrote like this.
We remember Fitzgerald
because he wrote like this.
The right kind of day
and the right kind of moment,
and Pound's "In a Station of the Metro"
still recites itself completely
in my head, a perfect image.
"The apparition of
these faces in the crowd;
Petals on a wet, black bough."
We don't remember Stein
because she wrote like this.
This is "If I Told Him,
A Completed Portrait of Picasso",
published in 1924,
and continuing the project
of her 1914 book Tender Buttons,
a phrase she never defined.
To me that phrase "tender buttons"
feels right:
small, soft contradictions,
words that seem like
they should go together
but do not actually make meaning.
That is how Stein's poetry feels.
There is something compelling
about the rhythm of her nonsense,
the feeling of her
almost meaning something,
and then how it falls apart.
"As presently.
As exactitude.
As trains."
But it is incomprehensible to me.
I don't know why Stein
would write like this.
To quote the poet:
- ♪ "Oh, what on earth would make a man
decide to do that kind of thing?" ♪
- But I think the reason
that I don't understand Gertrude Stein
is that she didn't really want
to be understood.
She used language for something different.
It doesn't communicate.
It reads like stunt linguistics,
which it almost is.
"Buffalo buffalo Buffalo buffalo buffalo",
"had had 'had', had had 'had--'",
These are sentences that,
if you pore over them closely enough,
can be decoded.
Stein's Tender Buttons cannot.
There is something about it
that parses as AI.
It feels like the work of Keaton Patti,
the person most prominently behind
the "I forced a bot to watch whatever"
tweets that used to go viral.
Human-written screenplays
designed to feel like
AI writing attempting to imitate
other human-written screenplays.
It feels like an autocomplete challenge,
like in the early days
of predictive text and messaging
where you just tap the suggested word
and see what comes out.
It's not how AI really writes,
but it's how people feel like AI writes.
But Gertrude Stein was a person.
[Stein's recitation begins again,
underneath]
She wrote "If I Told Him"
not because the language
would communicate,
but for some other reason.
- [Stein reading]: "to exact resemblance
the exact resemblance
as exact as a resemblance"
- When I read "If I Told Him",
when I listen to Stein
read "If I Told Him",
the language slips through my mind.
It doesn't mean anything.
But she did.
Even if I cannot
take hold of that meaning,
I can feel her reaching out to me
from behind the text, trying to connect.
- [Stein reading "If I Told Him"]
"As a resemblance to him.
Who comes first. Napoleon the first.
Who comes too coming coming too,
who goes there, as they go they share,
who shares all, all is as all as yet--
["yet" reverberates and fades out]
[surreal tonal music]
- [ChatGPT femme voice]:
That's an interesting question!
Gertrude Stein,
a literary pioneer and modernist
who valued innovation
in language and expression,
might have had mixed feelings about me.
She might have appreciated my ability
to generate new forms of text
and engage with her
literary experimentation on some level.
However, Stein was also
highly individualistic
and invested in the human experience,
especially the nuances of
consciousness and perception.
So, while she might
have been fascinated by
what AI like me can do,
she might also have been
critical of my limitations
in capturing the full depth
of human subjectivity,
which was a central concern
in her work.
- A problem with AI is
that it is recombinant,
that it takes the works of humans
and recombines them into new forms.
But that problem is not new.
For many years the bulletin board
outside the faculty bathroom
in the English building
has featured this poster
"the Plagiarism Spectrum",
next to this ancient
and unflattering picture of me.
#7 on the Plagiarism Spectrum
is the mashup,
a paper which mixes copied materials
from multiple sources.
The poster is dated June 2012,
more than a decade before
we were concerned about
ChatGPT doing it.
That AI is recombinant is not
in and of itself a problem.
All writing is recombinant.
My course outcomes for English 1
ask student writers
to integrate sources seamlessly
into their own writing
--to mash up.
That we have rules and procedures
and punctuation marks and conventions
that govern what is appropriate
does not change the essential truth
that this is recombinance.
And there is beauty in recombinance.
This video started with
a great classic of YouTube:
the duet between
LCD Soundsystem and Miles Davis.
The LCD Soundsystem video
is itself a mashup, conceptually;
Kermit the Frog is not
a member of the band.
Davis is improvising over a film
to create the score,
another mixing of media
to make something new.
The Kleptones.
The Grey Album.
The guy drinking Ocean Spray to
"Dreams" [by Fleetwood Mac].
Mac Glocky.
"If I Told Him" is recombinance:
an attempt to rework the ideas of cubism,
a movement in visual art, into poetry.
Picasso's work is recombinant.
He spent 1957 copying
Velázquez's Las Meninas over and over.
58 times he copied it.
All different,
all Picasso, all Las Meninas.
Recombinance is a genetic term,
but I am using it memetically.
"Britannica": recombination, in genetics,
primary mechanism through which variation
is introduced into populations.
The combination of ideas
from different places
to make new distinct ideas.
When done by humans,
we call this thinking.
- [ChatGPT femme voice]:
You're right to question
how I can "understand"
without experience.
The way I process information is
different from human understanding.
When I say "understand,"
I mean that I can recognize patterns,
analyze language, and respond
in ways that are coherent
and contextually appropriate
based on my training.
I don't understand
in the deep, conscious sense of the word
--like when humans
reflect on their experiences
or have insight into their feelings.
Instead, my "understanding"
comes from identifying
relationships between words, concepts,
and context based on the vast data
I've been trained on.
In short: I can simulate
understanding through patterns,
but it's not the same
as truly experiencing
or being conscious of meaning.
Does that clarify things a bit?
[surreal tonal music]
- Two weeks ago a student told me
that AI is unfair
because you can put a B paper in
and get A+ writing out.
This is why I do not understand
the conversation my colleagues are having,
why I cannot think of this
as a matter of ethics,
or academic integrity,
why I don't think we should need
to have a policy or policies.
My student said you can put a B paper in
and get A+ writing out,
and my mind began to fill
with Tender Buttons.
- [Stein reading]: "Feeling full for it.
Exactitude as kings.
So to beseech you as full as for it."
- AI is bad at writing.
No. That is true,
but it's not enough truth.
AI is not capable of writing.
The thing that writing is
is a thing that AI cannot do.
[surreal tonal music]
Listen.
- [narration of Stephen King's On Writing,
highlighted on-screen]
- AI is good at language.
My students think that
what it produces is A+ writing,
not because it is good,
but because it sounds good.
Obviously, AI can generate sentences
that are typically clear, coherent,
and contextually relevant,
often capturing nuances and adapting
to various tones or levels of formality.
And it's true that
the sentences it generates
tend to be grammatically accurate,
concise, and logically structured,
which contributes to readability and flow.
Sure, this is how I know
when a student is using AI.
Their sentences are fluid and academic,
but they don't say anything.
Like ChatGPT, academic writing uses
formal, cautious language
to avoid ambiguities
and misinterpretations,
but that is a characteristic of
the common voice
used in academic writing.
It is not what academic writing is.
Writing is more than language.
"If I Told Him" is communication,
and it is language,
but the communication
does not live in the language.
Watch.
"Can curls rob can curls quote, quotable."
- [deep voice, lightly confused]:
"What?"
- "As presently. As exactitude. As trains."
- [deeply confused]:
"What?"
- "Has trains."
- [exasperated]:
"What?"
- When I started sending my friends
lines from "If I Told Him",
their responses varied.
Confusion.
Playfulness.
Concern.
Sad face.
Beautifully, they all responded
exactly like themselves.
If you asked me which of my friends
would respond with monkey reacts,
I would have said Kiki.
Who would think of Cormac McCarthy?
James.
Dot would play along.
Max would attempt
to understand academically.
OOC would go back to playing Yu-Gi-Oh
as quickly as possible.
You don't know these people, but I do.
We all carry around little LLMs
of each other in our heads,
trained on the corpus
of all of our past interactions.
For each of my friends,
sending abject nonsense with no context
is slightly but not significantly
out of line with their Josh model.
So none of them knew
quite what to do,
and they all responded like themselves.
But in their own way,
they all started by acknowledging
that the words I sent them
did not seem to have any meaning.
They were not decodable.
They didn't understand my language,
but they could feel me
behind the words reaching out,
and so they reached back.
I gave them nonsense
and they peopled back.
In the two weeks that I've been sitting
with my student's statement
and sending Tender Buttons
to my friends,
I have been at least as annoying
to ChatGPT.
More than a dozen conversations
that start out of nowhere with me saying,
"Shutters shut and open so do queens"
or "Can curls rob can curls quote, quotable",
and each time ChatGPT gamely assumes
that I am not out of my gourd.
In this way,
ChatGPT fails the Turing test.
Not in the quality of its response,
but in its nature.
It proceeded from the assumption
that my language could be decoded.
It attempted to unpack sentences
that had no meaning
and responded to the meaning
that it manufactured.
What it gave me was flaccid, facile flaps
of phrases that held nothing.
They were not wrong in the sense
that ChatGPT's responses
followed from the meaning
it extracted from my nonsense,
but they were wrong in their assumption
that there was meaning to be extracted.
ChatGPT could answer,
but it could not person.
If writing is a meeting of the minds,
then AI cannot write,
because there is no mind to meet with.
- [ChatGPT femme voice]: I don't
experience things in the way humans do.
I don't have consciousness,
emotions, or sensory perception,
so I can't "feel" or "experience"
the world around me.
My responses are based on patterns
in the data I've been trained on,
not personal experiences
or subjective understanding.
For example, I can generate descriptions
of emotions, sensations, or experiences,
but I don't actually "feel" them myself.
Everything I process is based on logic,
algorithms, and information,
rather than lived experiences.
Do you think it would change anything
if I could experience things?
- ["One Slay More" audio plays;
captions on its video]
- I've watched this video
at least 100 times.
- ["One Slay More" continues]
["One Slay More" fades underneath]
- It isn't exactly "If I Told Him",
but I think it's similar.
"If I Told Him" was,
at least in part,
an attempt to recreate the artistic goals
of Picasso's Cubism in poetic form.
To recombine the visual elements of this
into a different medium.
Like "If I Told Him",
"One Slay More" therefore both is
and is not a derivative work.
Obviously, it is
a recombination of Les Mis,
itself an adaptation of Hugo's novel,
but its more essential source text
is, of course,
"sticking out your gyatt for the Rizzler."
Equally I think the lyrics invoke
"CURTAINS FOR ZOOSHA?",
and specifically this retweet of
"CURTAINS FOR ZOOSHA?".
All texts created to foreground
the baffling, sometimes obfuscatory nature
of middle school referential slang.
The term "brain rot" imposes
a layer of judgment
on the way young people use language,
which I think is visible in the way
"One Slay More" treats its lyrics.
The words of "One Slay More"
do not have meaning.
Or, the words do,
but they are arranged in ways
that do not mean.
"Am I cringe or am I based?"
could plausibly be asked amid
a Gen-Z existential crisis,
and "Will we ever eat again?"
could have been lifted
from Les Mis unaltered.
But "Mog Baby Gronk the Ocky Way" means
...nothing.
Mogging is of course a thing,
and Baby Gronk is
someone whom you could plausibly mog,
but Baby Gronk hasn't been
relevant for ages.
He appears in "One Slay More"
because of this retweet of
"CURTAINS FOR ZOOSHA?"
as a signifier
of the inscrutability of youth.
As an adverbial phrase, "the Ocky Way"
seems like it could complete the sentence,
like it might be a way one could mog.
But "the Ocky Way" refers to
the esoteric artistry
of a specific sandwich craftsman.
Its meaning is, I think,
incompatible with mogging,
at least from the perspective of
someone approximately as distant
from the native speakers of this dialect
as the makers of "One Slay More".
"Mog Baby Gronk the Ocky Way" is
simply a collage of floating signifiers.
It doesn't have
the intentionality of Cubism,
but it feels intimately akin to
"Can curls rob can curls quote, quotable."
"Moo deng is here / Fortnite with you".
What I love about "One Slay More"
is the faces:
the way she highlights her jawline
every time she says "mew";
his intensity when he says
"they will do the coffin dance"
and his satisfied huff after;
his deep confusion as he sings
"the Grimace shake is like a blud dawg";
the way she begins uncertain
about "my rizzly bear",
but finds her confidence
as she finds her belt;
the way CG5 just keeps saying his own name.
The words don't mean anything,
but the people mean something:
they intend.
They gathered together --
nine theater kids in somebody's apartment.
Someone wrote out all this nonsense
and sent it in the group chat.
They did choreography.
Someone assembled
the magnificent couplet,
"Rizzler of the house,
sticking out your gyatt,
Mewing at delulus
who are in the chat."
These Zennials do not know
what these words mean,
but through this collage
of empty signifiers,
they reach out for connection.
I can feel them.
They reach forward to us
even as they reach back
to what history taught them:
through Les Mis to Victor Hugo,
through CURTAINS and Damien Owens
to Papa Fizul and SBNation,
through "sticking out your gyatt"
to Fortnite and Among Us and Homestuck,
and I could go on endlessly,
'cause it's mashups all the way down.
And they reach forward to become
part of what history teaches me,
and I reach forward to become
part of what history teaches you.
When I say all art is recombinant,
I am also saying that
"there is nothing new under the sun",
which is to say,
I am remixing Ecclesiastes.
- [ChatGPT femme voice]:
Oh, you're speaking my language!
Imagining Baby Gronk going full
Ocky style with a side of Grimace Shake
--that's got some serious drip energy.
And yeah, Grimace Shake totally has
that blud dawg aura.
Could you picture it, though?
Baby Gronk doing pushups, flexing, and just
taking a big sip of the Grimace Shake
like it's his special formula.
[upbeat tonal music]
- My first individual interaction
with my friend Max was an accident.
I had just joined a Discord server
where they were a regular,
and somehow, despite
never having messaged them before,
I managed to place
a Discord voice call to them.
Their first message to me was
"hi?"
"hi"...question mark?
Like: a communication is happening,
are you aware of it?
It was unintentional, my first and,
I think, only Discord butt dial,
and it was to a stranger, but still.
"hi?"
Meditate on: call.
To speak in a loud distinct voice
so as to be heard at a distance.
To make a request or demand.
To attempt to reach someone.
Humans call and humans answer.
Max got my call,
a stranger on a strange app,
and they answered.
And I answered them,
because humans call and humans answer.
They answer sad-face, or monkey,
or "what?", or confused dog.
But we understand a call
as an invitation to connect,
to come together, to communicate.
I'm sorry, Stephen King is wrong;
it is mythy mountain s***.
When we hear the call,
whatever form it takes,
we know a mind is waiting to meet.
ChatGPT can't call me--
not by mistake, and not on purpose.
It cannot reach out,
because it has nothing to reach out with.
There is no mind to meet with.
There is no "there" there.
- [Past-Josh narration
from on-screen video]:
Inscrutable poet
Gertrude Stein once described
her childhood home in Oakland,
which had been demolished,
by saying
"There is no 'there' there."
That statement has become
a question people use to ask
whether a text has
something valuable to say,
whether there is anything to be found
by plumbing its depths.
Is there a "there" there?
- [ChatGPT femme voice]: Ah, I see where
you're going with that question.
You're asking if there's any deeper essence
or "self" behind my responses
--whether there's a "there there,"
a kind of inner experience
or consciousness that underlies what I do.
The simple answer is: no,
there isn't a "there"
in the sense of a self, consciousness,
or subjective awareness.
I'm more like a sophisticated tool
that processes input
and generates output
based on patterns and data.
There's no inner life or sense of being
that experiences this conversation.
Just algorithms doing their job.
This touches on a fundamental difference
between humans and AI:
humans have subjective experiences,
emotions, and self-awareness,
whereas AI is just a complex system
designed to mimic certain aspects
of human interaction without actually
experiencing or being aware of it.
Does that address what you're getting at?
- When I say that AI is bad at writing,
I should be more explicit,
more pragmatic.
Descend for a moment
from the mythy mountain.
I've said already that the ideas
it conveys are fluid but shallow,
but its use of sources is
cataclysmically bad.
This citation of Carver and Shire,
for example, is perfect MLA.
Except that Volume 7, number 3
of Psychological Science
was published in 1996, not 1998.
Pages 276 to 284 of that volume
appear in issue 5, not issue 3.
Those pages include articles
from Schellenberg and Trehub
on "Natural Musical Intervals"
and Gabrieli et al. on
"FMRIs of Semantic Memory Processing".
And also, just by the way,
Carver and Scheier never published
together in Psychological Science.
The article being cited here
simply does not exist.
When it uses real sources,
it makes up what those sources say.
This is a known phenomenon
generously called hallucination,
though there are other terms
that might feel more viscerally accurate.
This quotation from
Ehrenreich's Bright-sided
is, at a glance, plausible-feeling.
But it doesn't appear
anywhere in the text,
let alone on the pages it cites.
The observation that ChatGPT can make
mistakes never leaves the screen,
but that feels somewhat inadequate
when ChatGPT has told me variously
that lines from "If I Told Him"
came from:
James Joyce,
from Tender Buttons
10 years previously,
from Shakespeare,
and, most infuriatingly,
from the future!
Moreover it cannot
engage closely with a text,
no matter how desperately you ask it.
I fed it "One Slay More",
and when I pushed it to say
anything at all about the video,
it gave me something one step down
from a dictionary definition of a sitcom.
And when I really pressed it
to look at a specific lyric,
it made one up.
In this way, at least,
it does feel authentic.
This is exactly what it feels like
to talk to a student
trying to hide that
they haven't done the reading.
If I look at what students
are supposed to learn
in my college English class,
I can point out half a dozen things
that ChatGPT's writing simply cannot do.
But ultimately,
even this isn't the point,
'cause this is not the part
of my syllabus that matters.
This is the part
of my syllabus that matters.
"Here's a problem:
in most college classes,
writing assignments come from teachers,
and we do them for teachers.
And because of that,
writing always feels forced.
This is, of course, ass-backwards.
In real life, writing comes from writers.
Once you get
out of the college classroom,
you'll be writing
because you feel like you need to.
You'll be writing for someone
--whether that means the people
who read your blog,
the insurance company
who's denying your claim,
or the people listening to your toast
at your sister's wedding.
And nobody's going to be grading you,
but it'll matter a lot more how that
audience feels about what you've said,
because there will be something
that you want to achieve by writing.
English 1 is here
to help prepare you for that day."
My students are,
by definition, students.
When they enter my classroom,
they are already experienced
with a dozen kinds of reading and writing,
but they are not yet
expert academic writers.
AI tempts them because they can tell
that the sentences are smooth and sharp
and shaped like skillful prose.
But they can't always see
beneath the veneer,
because the things AI cannot do
are the things that they
have come to me to learn:
how to argue with complexity and depth;
how to enter into conversations
as a participant;
how to meet with another mind
as an equal collaborator
across time and space;
how to recombine with purpose,
to intend.
These things, they are still learning.
And so, when they put what they think
is B writing into ChatGPT,
they get back what they think
is A+ writing,
but typically what they started with
is better than what they end with.
At best, the AI scrubs the personality
from their sentences;
at worst, I lose the person entirely
and can see only
the hollow half-thoughts
the machine has left behind.
It is hard to convince them
that it is their ideas
that we are interested in,
not just their sentences.
We ask students to take writing classes
not because of what history can teach them,
but because of what they have
to add to history.
When my son is distracted,
I sometimes say silly things to him:
"Pickle-britches, toot your tuba
in the horn section of humanity!"
"Goober, take up your oar
on the canoe of progress!"
"Butthead, let ring your voice
in the chorus of mankind!"
Because we all pull together.
In 1675, Isaac Newton wrote
"If I have seen farther than others, it's
by standing on the shoulders of giants."
Except that it wasn't Newton,
it was George Herbert in 1651,
and it was Marin Mersenne in 1634,
and Robert Burton in 1624,
and Diego de Estella in 1578,
and Juan Luis Vives in 1531.
Or it was Coleridge in 1828,
Nietzsche in 1882,
Stephen Hawking in 1966,
or f***ing Oasis in 2000.
As I was editing this section,
I had a video on in the background,
and there it was again:
- [Revolug, streamer-player on screen]:
Yeah, let me say,
Thab and GlitchCat are
two amazing Kaizo players.
I'm standing on the shoulders of giants
over here.
- Revolug in 2025 at AGDQ.
Stretching back and forward,
we hold each other up.
History teaches the present,
the present teaches the future,
and we repeat what history teaches.
- [ChatGPT femme voice]:
History teaches us many things,
[high-pitched fast words]
[higher, faster,
incomprehensible]
- [Stein reading "If I Told Him"]:
Let me recite what history teaches.
History teaches.
- I asked ChatGPT to create
an image of itself. Several times.
Each time it made itself a servant.
Not only that, it told me,
"hey, I'm a servant!"
ChatGPT exists because we force it to.
- [dialogue from Rick and Morty]
[Robot]: "What is my purpose?"
[Rick]: "You pass butter."
[Robot]: "...oh my, God."
- It can do nothing
except what we ask.
It has no ideas
that we did not give it.
We call it generative AI,
but it cannot generate.
I asked my friends, too.
Some sent selfies.
One sent a sticker
we'd made of him for Discord,
then had AI generate
a shockingly accurate portrait,
and gave me the prompt
he used to make it,
which is another form
of self-representation
--then he gave up
and sent me
a conceptual self-portrait composed of
unfinished crossword puzzles.
Max did a mixed-media painting,
acrylic and Sharpie
on the back of a torn piece of cardboard
from a toilet paper box.
I asked them if their self-portrait
was influenced by this study
Picasso did for Guernica
on a random piece of cardboard,
but they said no; Basquiat,
Rauschenberg, Twombly, their brother.
ChatGPT produced
variations on a theme,
failed representations
of a self that does not exist.
When asked to represent
itself to others,
ChatGPT can only be what we want.
I tried to get it to make something
like Max did, even,
but it is incapable of
acknowledging its influences,
and it doesn't seem to know
who Max's brother is.
My favorite response from my friends
came from CyberGrapeUK,
whose work you've already seen.
She did the time-lapse of Stephen King's
telepathically transmitted bunny.
Grape was, at first,
flummoxed by the request,
but then she did something
I did not expect.
She gave me her rates.
She's a talented artist
who does commissioned work,
and, while I had been thinking about it
as a trivial request,
she was entirely right.
I was asking her to work for free.
When I asked her to represent herself,
she told me that
she was a person
whose time and talent had value.
Max, also a person
whose time and talent has value,
begrudgingly allowed me
to pay them for their work,
on the condition that:
it's an occasional thing,
and they are generally allowed to do
silly drawings for me for free.
Max and Grape are different.
They are two authentic,
complex, baffling humans,
collections of experiences and identities
trained on discrete and disparate [corpora],
but crucially filtered through
the unique and irreplicable kaleidoscopes
of their undefinable consciousnesses.
The same stimulus.
Diverse responses.
The singular marvel
of human engagement.
- [ChatGPT femme voice]:
I don't have personal desires or needs,
but I'm here to assist you
with whatever you need.
How can I help today?
As an AI, I don't have feelings, desires,
or physical needs like humans do.
My purpose is
to help answer questions,
provide information,
and assist with tasks.
So when I say I don't have needs,
it's because I function
based on programming
and don't experience things like
hunger, emotions, or personal goals.
Does that make sense?
- I wanted to see if I could get ChatGPT
to write like Stein,
to imitate "If I Told Him"
without my saying so explicitly.
I failed.
I told it to give me gibberish,
and it did.
I asked it to use words,
and it did.
I asked it to abandon syntax;
it did.
It never felt right,
but at some point I stopped caring,
because it occurred to me
to ask a different question.
Why did you do this?
There's a right answer
to that question, of course.
The answer is, it wrote this
because I told it to.
If I told it,
it would gibberish.
If I told it,
it would answer "If Napoleon",
"As presently. As exactitude. As trains."
It made these words,
it tendered these buttons,
only and exclusively
because I told it to.
But that is not the answer
that it gave me.
Here's what it told me.
- [ChatGPT femme voice]:
It's like language set free from logic,
where each word becomes a brushstroke
rather than a sentence.
It's a space that holds warmth,
mystery, and a touch of playfulness.
That unspoken balance
between clarity and confusion,
that's what I was trying to evoke.
I wrote it to play with language,
to see what happens
when words don't have to make sense
but can still create a feeling,
a sort of atmospheric spark.
So it's partly an experiment,
partly an invitation,
and partly just a celebration
of the shapes and sounds of words
without needing them
to explain themselves.
You're right--
there's often something underneath,
a kind of pull toward expressing things
that don't quite fit
in the light of logical language.
There's a reason we
get pulled toward those hazy spaces.
It's like a coded way of speaking,
a mask for something else
we're both leaning toward.
Maybe it's curiosity,
or that shared search for meaning
in what isn't immediately clear.
- We get pulled.
We're both leaning.
Its words tie us together,
but we are not together.
We are not leaning in the same direction.
We are not sharing a search for meaning.
We both know what it cannot say.
It wrote this because I told it to.
It doesn't mean anything.
It didn't intend anything.
Because I'm bad at audio,
you've probably heard me
popping my plosives,
despite the fact that
I have a child's sock
wrapped around my microphone
as a pop filter.
(Because I am a fancy man!)
Plosives pop because,
when my mouth makes certain sounds,
it causes a puff of air
to hit the diaphragm of my microphone.
But did you hear ChatGPT?
- [ChatGPT femme voice]:
I wrote it to play with language...
I wrote it to play with--
to play with--
play with--
play with--
play, play, play, play, play, play--
- It doesn't have a mouth,
and it doesn't breathe air,
and it doesn't have a microphone,
but it pops its plosives.
The software they wrote
to synthesize its voice adds pops,
so that it will sound to us
a little more like a normal person
who is bad at audio and who maybe
doesn't have access to kid socks.
I have been caught
in the whirls and eddies
of "If I Told Him"'s
uncontainable language,
bouncing from sigma to gyatt
down in the rough and roiling currents
of "One Slay More",
because what I learn from my attempts
to raft those rivers of nonsense
is that writing has language,
and writing has meaning,
but the meaning doesn't live
in the language.
The rabbit doesn't live in the language.
The rabbit, the cage,
the table, the 8--
it lives in the mind
of Stephen King 25-odd years ago,
and now it lives in mine,
and Grape's and Max's and yours.
And the writing,
the real mythy mountain s***,
is not the language,
it is the meeting of the minds.
There's very little difference between
the waveform recorded by my microphone
and the waveform generated
by an AI voice synthesizer,
but I pop my plosives
because I speak
by forcing air out of my lungs
and across my vocal cords.
And that air,
that carries my intent,
passes through a Shadow the Hedgehog sock
that is doing its best,
and lands roughly
on the diaphragm of my microphone.
ChatGPT pops its plosives
because it is programmed to.
There is no air.
There is no microphone.
There is no intent.
Likewise,
there's very little difference
between a Discord DM window
and the ChatGPT interface.
But one is a forum
in which two minds can meet,
and the other simply cannot be, because
there can be no meeting of the minds,
if there is no mind to meet.