Well, buckle up or click away; this is going to be a long one.
I started writing my “Creative Practice in the Age of AI” series of posts 1) because I was personally curious enough about generative AI (GenAI) and Large Language Models (LLMs) to explore the technology firsthand and 2) because I’ve had so much fun and experienced such an incredible explosion of creativity since beginning this experiment. But it has been eye-opening over the past few months to discover that not everyone shares my curiosity and excitement about GenAI’s potential for boosting human creativity. In fact, I am coming to realize that AI appears to rank right up there with politics and religion as a taboo topic so tightly wedded to a person’s core values and worldview that it elicits a visceral emotional reaction.
Today’s post is one I began writing back in July. You can see how long it is. It has been really hard to write—and even harder to cut down to size. I wanted to respond to a couple of articles that had just been published and were generating a lot of buzz online. But articulating my response with enough explanation that someone without an English PhD could understand my perspective meant LOTS of words. Too many words. Even more words than I currently have, which sadly is just shy of 4,000.
Too long, didn’t read
Maybe I should give the TLDR right up front: I don’t believe generative AI is frying our brains or hollowing out our capacity to think. The rest of this piece explains why, but if you want the short version—that’s it.
A bit of personal history
I am old enough to remember writing papers by hand and then typing them on a typewriter before submitting. When word processors first arrived in the 1980s, writing teachers had a small technology panic. Scholars debated whether composing on a screen would destroy planning, encourage superficial revision, or unravel the discipline of forming complete thoughts.
I remember the shift in my own writing quite clearly. Before computers, my process was linear and methodical: outline first, write second, type third. “Plan the flight, then fly the plan.” Word processing blew that up. Writing became fluid, recursive, improvisational; revising became discovery. Looking back, it’s obvious the computer didn’t erode thinking—it changed the texture of thinking and opened new cognitive possibilities. The moment feels uncannily familiar today.
I also arrived in graduate school just as Computers and Composition released its first issue in 1983. It was a typewritten, photocopied, stapled-together newsletter—pre-desktop publishing, pre-Windows, decidedly handmade. In that inaugural issue, teachers announced grants to study “computer-assisted composition” and organized a special-interest group at the national conference because composing on a computer was still so new that no one knew what it might do to writers.
A mere forty years ago, word processing itself was considered a potentially mind-altering technology. Would it weaken handwriting? Encourage laziness? Reshape cognitive habits? The answer turned out to be yes, in some ways, but no in others. Most importantly, though, the technology created new forms of literacy and new ways of working.
What we’re experiencing now with generative AI isn’t unprecedented. It’s the next chapter in a long story about how writing technologies reorganize the process of thinking.
Summer 2025 and the Great Brain Rot
On June 10, researchers associated with MIT posted a preprint titled “Your Brain on ChatGPT.” Using EEG and behavioral measures, they reported that writers who relied heavily on ChatGPT showed lower neural engagement and weaker recall—findings that understandably raised concerns about over-reliance and fading cognitive effort.
But the study itself was small and exploratory—just 34 participants—and the results were less scary than the headlines suggested. In early sessions, heavy AI users did show reduced mental involvement. But later on, when participants who had begun without AI were introduced to it only after forming their own ideas, the picture shifted. Under those conditions—more interactive, less passive—researchers saw different neural patterns and, in some cases, better task performance. (Link to the MIT article’s web page: https://doi.org/10.48550/arXiv.2506.08872; or go directly to the full PDF at https://arxiv.org/pdf/2506.08872.)
Less than a week later, Nature Reviews Bioengineering ran a short editorial, “Writing Is Thinking,” arguing that the craft of human-generated scientific writing matters and should be recognized as centrally important for structured thinking and for communicating complex ideas. The editorial urges care in delegating drafting work to LLMs and stresses that writing—the labor of shaping and clarifying—is itself a form of thought. (Link to the article, with downloadable PDF: https://www.nature.com/articles/s44222-025-00323-4.)
Both the MIT study and the Nature editorial generated breathless headlines—“AI will fry your brain!”—but if you look past the noise, you find something more nuanced. The MIT paper documents measurable differences in neural engagement under controlled conditions; it doesn’t prove that AI destroys human cognition.
Plus, let me state the obvious, just in case anyone reading this is unfamiliar with the principal allusion underlying the MIT paper’s apparent thesis. The paper’s very title grabs attention by echoing the anti-drug message of the 1980s “This Is Your Brain on Drugs” campaign—a rhetorical flourish at best, a strategic bit of branding more likely, and fearmongering at worst. And of course, it is unsettling to think about generative AI not only reshaping how our brains function but possibly taking functionality away.
Yikes! Is that really what’s happening? Are we all doomed to fried eggs for brains? If so, make mine sunny side up, please😊
Because guess what? Both the MIT preprint and the Nature editorial are very much in keeping with a pattern we’ve seen many times before—technology sparking fears of intellectual decline, even as it opens new possibilities.
Before we decide whether AI is harming our minds, we should remember that this isn’t our first cognitive panic. Whenever a technology changes the way we think, we worry it will change who we are. So let’s zoom out. We’ve been here before—long before laptops, even before the printing press.
Plato, memory, and the invention of writing
In the Phaedrus, one of his dialogues, Plato famously attacks writing as a technology that will undermine memory and the capacity for true understanding. Writing is “an elixir not of memory, but of reminding,” and it offers “the appearance of wisdom, not its reality.” Plato also likens written words to painting: the painted image “has the attitude of life,” yet “if you question them they preserve a solemn silence.” Writing, Plato says, cannot answer back; it cannot be interrogated in the way living speech can. That worry—that a fixed external record will replace active, responsive thinking—is the heart of his critique. (A link to Benjamin Jowett’s translation of Phaedrus appears in the sources below.)
A thought: Plato’s concern that written documents cannot be interrogated seems remarkably similar to Nature’s concern that LLMs (and, by extension, AI-written documents) lack “accountability.” That word goes largely undefined in the editorial, and it implies that LLMs are somehow coming up with article ideas, doing the research, and writing the articles entirely on their own, with essentially zero human “thinking” input.
Looks like Plato’s roughly 2,400-year-old anxiety has been updated for the 21st century, zeroing in on the fear that if people outsource thought to a tool (like writing or other symbols on the page), they (or we, as collective humanity) will lose something essential.
But what, exactly, is lost? Why did MIT feel compelled to employ the “fried egg” metaphor?
And even if it turns out our brains are fried, what does that mean? Are we “sunny side up”—fresh and still able to blend with new ideas—or “over hard” and cooked to rubbery resistance?
It also raises a bigger question: What is the relationship between our tools/technologies and our very identities? Are we, in some ways, our tools? Do the tools we use (writing, symbols, and now AI) actually shape the way we think and not just what we know?
That’s the question the next section of this post examines.
Alexander Luria and How New Tools Change Minds (Literally 😊)
In the early 1930s, Soviet psychologist Alexander Luria traveled to rural communities in Uzbekistan and Kyrgyzstan during a moment of upheaval. Stalin’s regime was forcing villages into collective agriculture while launching massive literacy campaigns aimed at “modernizing” traditional, mostly oral cultures.
To Luria, this was a rare chance to study two populations with the same cultural roots but radically different access to schooling. Could “literacy” itself reshape cognition?
His team interviewed adults in villages untouched by schooling and adults newly enrolled in state programs, using tasks involving categorization, hypotheticals, and abstract reasoning. A consistent pattern emerged:
- People without schooling answered using concrete, situational logic
- People with schooling used abstraction and context-free categories
One famous example involved a 39-year-old peasant farmer named Rakmat. When Luria showed him a hammer, saw, hatchet, and a log, and asked which one didn’t belong, Rakmat answered:
They all belong. You need all of them.
A saw, hammer, or hatchet is useless without wood.
Luria tried again, saying, “Some people group the tools together and leave out the log.”
Rakmat responded:
But what would you build with?
A man with tools and no wood isn’t thinking straight.
Separating tools from the work they do made no sense to Rakmat. The items belonged together in a situation, not in a scientific category.
In stark contrast, villagers who had at least some schooling—and therefore knowledge of reading, writing, and arithmetic—immediately answered:
The hammer, saw, and hatchet — they’re tools.
Same objects. Entirely different mental framework.
Luria concluded that literacy and formal schooling didn’t merely add to a person’s store of knowledge; they restructured the cognitive landscape itself. Reasoning pathways were altered, and new ways of connecting ideas emerged. Symbols (letters, numbers) support levels of abstraction not seen in unschooled, wholly oral cultures. Written texts and mathematical equations support imagining hypotheticals. And schooling in general encourages visualization and mental models beyond what is materially present.
So where Plato feared writing might “fry” memory (ironic, that, as the memory of Plato’s words is preserved only via the technology of writing), Luria showed that writing and, more broadly, symbolic systems (as “psychological tools”) strengthen abstraction.
New technologies rewire what “smart” looks like. A technology that diminishes one capacity (e.g., rote recall, as in memorizing a poem) might enhance another (e.g., abstract reasoning). The net effect isn’t simply smarter-or-dumber, but a rebalancing of strengths and weaknesses.
That’s a very helpful lens for thinking about generative AI.
Marshall McLuhan on how technologies change us
Marshall McLuhan (1911–1980) was a Canadian media theorist best known for the phrases “the medium is the message” and “the global village.” Writing in the 1960s, he argued that new communication technologies don’t merely deliver content via a new medium. Oh no, no, no. Every new technology fundamentally reshapes the nature of human thought and society at its core.
The following quote, taken from near the beginning of McLuhan’s most famous essay, “The Medium Is the Message,” is the most helpful illustration of this idea I’ve found:
The “message” of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs. The railway did not introduce movement or transportation or wheel or road into human society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure. This happened, whether the railway functioned in a tropical or a northern environment, and is quite independent of the freight or content of the railway medium. The airplane, on the other hand, by accelerating the rate of transportation, tends to dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for.
For McLuhan, the printing press, radio, television—and now digital media—all reorganize perception and social relations in ways more profound than their “content” alone. That perspective is what makes his insights especially useful when we think about AI today.
Although we don’t quite understand yet what the “message” of AI as a “medium” will be, its implications for reshaping human affairs are on a par with the impacts of the railway and the airplane. We fear that generative AI has the potential to wreck our very brains—as the MIT study so alarmingly appears to herald—yet history suggests that initially feared technologies often give way to new literacies, conventions, and cognitive practices that reorganize how we attend, remember, and reason.
My claim — a short thesis, owing a debt to Piaget and Vygotsky
As I said earlier, the MIT paper’s provocative title, “Your Brain on ChatGPT,” gestures toward the well-known PSA spots’ logic of “your brain on X.” While that rhetorical move is useful for grabbing headlines, it obscures a more interesting reality: AI is not a drug that fries a brain on contact. Instead, it is a technology that can be used in ways that either dull or sharpen certain habits. If we treat LLMs as a crutch and outsource judgment wholesale, well, of course we risk atrophy.

“Your Brain on Drugs” (cyclonebill, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons)
But . . . what if we don’t “outsource” and offload our writing in the way Nature’s editorial implies we will, left to our own devices? Suppose we instead treat generative AI as a “slightly advanced guide”—a prompt-collaborator that pushes, questions, and rehearses ideas with us in ways that accelerate our thinking and creativity.
Jean Piaget (1896–1980), the Swiss psychologist often called the father of developmental psychology, spent decades observing children and mapping predictable stages of reasoning. He argued that learners build understanding by working just beyond what they already know. Too easy and nothing changes; too hard and progress stalls. Growth happens in that “just-a-bit-beyond” zone, where guidance stretches existing mental models without overwhelming them.
Working around the same time, Russian psychologist Lev Vygotsky emphasized that learning is fundamentally social. His concept of the “zone of proximal development” describes the space between what we can do alone and what we can do with a bit of well-placed help. With a “more knowledgeable other,” we gain access to strategies and language that stretch our cognitive reach.
This same principle—guidance just beyond what we know—is exactly where I see AI’s potential today. Using generative AI thoughtfully doesn’t have to flatten our skills. It can function as that slightly more advanced collaborator, nudging us toward work we might not yet have been able to produce alone.
If we treat these generative AI systems as blunt substitutes, then yes, they can dull our skills. But if we use them as “more knowledgeable ‘others’”—collaborative tools that challenge, scaffold, and stretch us—perhaps we can actually expand our capacity to think.
Or, in other words, if this really is our brain on ChatGPT, it isn’t necessarily getting fried; it’s being scrambled into something new. To repeat myself yet again (sorry!), the arrival of large language models is not the first time a technology has raised fears of cognitive decline, nor will it be the last.
The real question isn’t whether or not AI will alter us. It will. What matters more is how self-aware we are as the change occurs. Do we actively shape our new reality through deliberate decision and reflection, or do we passively accept changes foisted upon us by technology and society?
More fundamentally, at least as far as I’m concerned: Are we having fun engaging with AI as a creative partner?
From linear to recursive to hyperlinked at warp speed: My own journey
I’ve already lived through a shift where the tool changed the way the mind moved. Back in the day (40-50 years ago!), writing a paper for me meant planning and outlining, then drafting by hand, then typing up the draft. I started graduate school before the first computer labs were installed at my university. In my first semester as a teaching assistant with two sections of freshman composition, only one of my fifty students used a computer to write his papers. He was a doctor’s son; computers were expensive. By the time I finished my PhD, every student I taught wrote their papers on computers. Or rather, they composed their papers on computers. “Composed” is the crucial word.
In the early days of word processing and LANs (local area networks), I took my grad-school handwritten drafts into our campus computer labs. That’s what everyone did: used the computer as a fancy typewriter. Correcting typos was easier on a computer (no blobs of Wite-Out to mar your manuscript), but the creative part was no different from using your manual Smith Corona. Typing already-drafted text into Word (or, more likely at that point, WordPerfect) without making revisions (no RE-VISION, or re-seeing) is like using ChatGPT for nothing more than grammar and punctuation cleanup—you’re not composing with the tool, just using it for a final polish.
In other words, missing the point of the tool’s capabilities entirely.
As a professor with a PhD in English and 40+ years teaching writing, especially rhetoric and composition, I believe generative AI and large language models will be hugely beneficial to human thinking. Yes, they hallucinate. Yes, they’re imperfect. But using any powerful tool well requires understanding its logic and limits. Just as people had to figure out how to compose texts on computers instead of typewriters, now we’re fumbling through the early era of generative AI. The conventions will emerge from practice. Strong writers will develop an ability to collaborate with the tool rather than subjugating themselves and their intellectual processes to it.
To echo and extrapolate from Alexander Luria’s findings that literacy gave people new cognitive tools, I expect AI literacy—knowing how to interact with LLMs—will do the same for us today. Once we integrate it into our daily work and creative practices, it will open up higher-level thinking in ways we can’t yet imagine.
Practical takeaways for writers, artists, and teachers
Generative AI has become a lot like politics or religion—people’s views run deep. So tread softly. I’m curious about AI and genuinely enjoy exploring it. I think it may help us become stronger thinkers. Others may disagree, even strongly, and that’s fine. I just hope we can stay in conversation with goodwill.
Read the studies for yourself. Don’t let the headlines do your thinking for you. The MIT paper is a real heavyweight, packed with technical details; the Nature editorial is short, easy to read, and all big-picture.
Teach and practice AI literacy. Know when you’re using a model as a brainstorming partner, when as a stylistic editor, and when you must rely on your own unassisted reasoning. There’s really only one way to develop this kind of judgment, and that is to plunge in and engage with the technology to bring difficult, complex projects into being.
Treat LLMs like any other intellectual tool—neither magical nor poisonous. They can preserve and extend thought, or they can enable outsourcing. The difference is how we choose to work.
The real experiment begins now—and we get to decide what kind of thinkers this new era makes us.
Sources and suggested further reading (quick links)
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., et al., Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint (10 June 2025). https://doi.org/10.48550/arXiv.2506.08872 (article web page) and https://arxiv.org/pdf/2506.08872 (Full PDF)
- Writing Is Thinking. Editorial, Nature Reviews Bioengineering (16 June 2025). https://www.nature.com/articles/s44222-025-00323-4
- Plato, Phaedrus (Jowett translation) — https://www.gutenberg.org/files/1636/1636-h/1636-h.htm (complete text, Project Gutenberg)
- Alexander Luria, studies of cognitive development in Central Asia (1931–1933), summarized in his Cognitive Development and in later commentaries on literacy and abstract thinking. See also Glozman, J. M. (2018). A reproduction of Luria’s expedition to Central Asia: What is culture for? Psychology in Russia: State of the Art, 11(2), 4–18. https://doi.org/10.11621/pir.2018.0201 (special issue on Luria’s legacy in cultural-historical psychology).
- For more information about Luria’s studies, see the translation of his 1979 book, The Making of Mind, especially Chapter 4, “Cultural Differences in Thinking,” at the Marxists.org archives entry on Alexander Luria’s work.
- A fun link to a website chronicling contemporary accounts of the rise of computers, word processing, and composition scholarship, HERE.
- Especially for writing teachers: A fun link (HERE) to the very first Computers and Composition publication, 1983. Click on “Announcements,” the only bit of that issue available via PDF.
This Week’s Suggested Creative Practice
Tools, Minds, and the Guides Just Ahead
Every post in this Creative Practice in the Age of AI series includes a “creative invitation”—a handful of small exercises you can try on your own terms. If you’re not TOO EXHAUSTED from reading this entire post, this week’s exercises are designed to help you reflect on the ideas explored in the post above: how new tools, from writing as a technology itself to computers to generative AI, can reshape thought, memory, and creativity. These exercises invite you to play with perspective, experiment with abstraction, and consider how guidance, prompts, and new media can stretch your thinking.
1) Luria’s Categories. Try this variation on Luria’s task: take three objects around you (for example, “lamp, book, hammer”). Write down two different ways of relating these objects to each other: first, in ways that are concrete, practical, and situational; second, in ways that are more abstract, conceptual, and taxonomic. Notice how shifting frames reshapes thought. What relationships between objects appear or disappear when you think of them 1) as things that gain meaning via their interactions versus 2) as things that gain meaning via their similar characteristics? Or even more fundamentally: “identity” through participatory action versus “identity” through intrinsic traits.
2) The AI as Socratic Friend. Instead of using ChatGPT to generate answers or execute your commands, use it only to ask you questions about your draft or idea. What happens when you let the tool play Socrates, the questioner who leads you through a problem via guided Q+A, versus its usual role? (The connection to today’s post is that Socrates stars in Phaedrus, just as he does in most of Plato’s dialogues 🙂 )
3) McLuhan’s Medium Reversal. Take one of your most common tools (calendar app, notes app, email). Imagine suddenly losing it. How would your habits, memory, or communication reorganize themselves without it? Or, try “reverse engineering” a seemingly obvious fact of life. For example, examine the Motion Picture Association film rating system (G, PG, PG-13, R, NC-17). What purpose does it serve? When did it come into being? How has it changed over the years, and why? What is the “message” of that “medium”?
4) Your “Brain on X” Analogy. Come up with your own version of “This is your brain on…” using something other than AI or drugs (e.g., “This is your brain on coffee,” “This is your brain on spreadsheets”). Write a playful PSA-style description. What truths or exaggerations come out?
5) The Apprentice’s Twist. Artists and writers often begin by imitating masters. Choose a masterwork (a painting, a poem, a film scene). Recreate it as faithfully as you can—but make a significant alteration that makes your work startlingly new while somehow still remaining recognizable as the original.
Piaget would call this “accommodation,” the opposite of assimilation. Instead of making the square peg fit into a round hole, you reshape the round hole to accommodate pegs of both shapes. For example, the plot of a mystery novel usually follows a formula: a crime has occurred (probably a murder), and a main character (professional detective or amateur sleuth) investigates clues in order to solve it. How much can a novel deviate from that formula and still be a mystery novel? Agatha Christie’s fame is partly due to her ability to deviate in ways that redefined the formula and enlarged our idea of what a mystery novel could be.
Think about your “tweak” and how imitation leads naturally into creativity. How does your change add significance and transform the original into something new?


I need to come back to this when I have more time to read it carefully. My initial reaction skimming it was that I haven’t heard anyone go so far as to claim AI is frying our brains … and then just yesterday I read a post on Substack saying exactly that. 😂
I must admit, I found Microsoft’s Copilot very helpful in answering technical questions about my coding project. Seriously sped things up. I’m leery of the ability to mass-produce content, though. More noise on the net. A lot more.
“More noise on the net. A lot more.” Yes, I agree with your feelings on this. For a long time now I’ve thought about how much stuff is being maintained (for what?) and the environmental cost of keeping it up via the electricity required to power the servers, etc. I’m definitely old enough to remember (pre-EPA) when “pollution” was a huge concern. Noise pollution, light pollution, air pollution, water pollution. We could add Internet pollution to that today, or more relevantly, AI pollution, lol