This is final exam week at Milwaukee School of Engineering, where I teach. While I’m busy grading and wrapping up this semester, I’m also trying to capture some “lessons learned” for future semesters teaching the same courses, as well as making notes for a different course that I’ll be teaching in the upcoming semester.
Specifically, I’m teaching SOC 4001 Digital Society in the spring, and I want to find a way to combine two assignments into a larger “joint” assignment. Doing this will require students to create a website, but I don’t want them to get too caught up in website design at the expense of storytelling. Students with limited backgrounds in web design, in particular, may become sidetracked by technical challenges rather than focusing on their primary task: crafting a story that shares compelling information and creates a coherent narrative (linear or nonlinear) that connects dots, elicits an emotional response, and extends beyond the story itself, resonating within the larger context of “Digital Society.” The catalog course description calls for a collaborative multimedia digital-storytelling project, so I need a website builder that easily supports media and allows for fluid storytelling.
Below is a copy of a chat I just finished with ChatGPT. As you’ll see in my initial prompt, I recently mentioned Google Sites in my digital storytelling class, mostly as an anecdote about how Google used to have a nice website-building platform but I thought it went away, as I never hear about it anymore. But then a student overhearing my remark was intrigued enough to look it up and discovered that, nope, apparently Google Sites did NOT go away! And it looked really easy to use, he thought. Indeed, it does, with lots of options for someone unfamiliar with modular, block-based editors and content-authoring environments like the Gutenberg block editor in WordPress, which I still find so clunky and unhelpful for publishing the very straightforward content on my blog that I’ve developed a workaround that allows me to continue using WordPress’s old “Classic Editor.”
Anyway, I’m sharing my chat with you for two reasons. First, you might want to check out Google Sites if you’d like a free, easy-to-use website builder. This might be especially great if you’re a teacher and would like students to create multimedia stories (like my students are doing). Second, if you haven’t yet used ChatGPT, or if you’ve only used it to smooth out your style in a written document, you might be interested in seeing how low-key and quick my chat was. I had a thought; I asked ChatGPT for info (more detailed and complete than a browser search would provide); and within literally about two or three minutes, I had what I needed to make a decision and move forward. Quick and easy!
In fact, I use ChatGPT for most of my searches now unless it’s something very simple. It would have taken me much longer to figure out the scoop with Google Sites if I’d done a Google search, poked around the Google Sites site, tried to see what the Reddit community thought, etc.
So below, FYI, is my search. ChatGPT has gotten better and better. It does an excellent job of remembering our previous chats. You’ll notice that with zero prompting on my part, ChatGPT remembers that I teach courses in film/media studies and digital storytelling.
I said:
I’m looking ahead to a future semester and thinking of asking students to use Google Sites to create multimedia stories. I haven’t used Google Sites in maybe 6 years. I thought Google Sites no longer existed, but when I mentioned it in class, one student was intrigued enough to look it up and not only found it was still around but thought it looked cool. So can you tell me what is up with Google Sites? Can you actually still publish a website and have it public enough to show up in a Google search?
ChatGPT said:
Yes — Google Sites definitely still exists today as a website-creation and publishing tool from Google (it’s part of the Google Docs Editors suite) and not a defunct service.
Here’s a clear overview of what’s up with it:
🛠️ What Google Sites Is Now
It’s a free, web-based site builder that lets you create webpages without coding, using drag-and-drop tools.
You can include multimedia like images, embedded video, text, links, Google Docs/Slides/Forms, etc., making it suitable for multimedia storytelling projects.
It’s simpler than platforms like WordPress or Squarespace, which makes it very easy for students to learn and use.
🌐 Publishing a Site
Yes — you can publish a Google Site and make it accessible on the public internet:
You can control who can see it — e.g., Public on the web, Anyone with the link, or restricted to specific people.
When you publish it as Public on the web, anyone can visit the URL without signing in.
🔎 Being Found in Google Search
Publishing alone doesn’t guarantee your site will immediately show up in Google search results — but it can be indexed and show up over time:
Google will crawl public sites on the web and will eventually index them in search results automatically.
You can speed up visibility by submitting your site to Google Search Console after publishing. Many users report that indexing can take some time and effort (adding structured content, linking to it, etc.).
There’s also a setting to request that search engines not display your site, but that’s optional — by default, if the site is public, it can be crawled.
📌 Sharing & Classroom Uses
You only need a free Google account (like a Gmail) to create and publish a site — no separate hosting fees.
The public URL is sharable with a link, so students can send their published site to instructors, embed it in portfolios, or connect it to other digital projects.
If you want a custom domain (e.g., yourdomain.com instead of the sites.google.com URL), that’s also possible with additional settings or domain purchase, though not required for a class project.
📅 Practical Tip
For classroom use:
Have students publish their site as “Public on the web” so anyone with the URL can see it.
Optionally teach them about Google Search Console to help their site appear in search results.
Encourage adding meaningful text content and links — search engines need context to index well.
In short:
✅ Google Sites is actively supported.
✅ Students can still build and publish public websites.
✅ Those sites can show up in Google search, though indexing isn’t instant and may need a bit of SEO care.
If you want, I can walk you through a step-by-step plan for a student assignment using Google Sites!
I said:
I love this! I did find Google Sites easy to use, and I think my students will like it, too. I also love that a Gmail account gets them in the door. I think everyone has that. Thank you!
ChatGPT said:
You’re very welcome! I’m so glad this clicked for you 😊
Given your background in film/media studies and digital storytelling, Google Sites is actually a nice fit pedagogically: it lets students think in terms of structure, sequence, and multimodality without getting bogged down in tech. And the low barrier to entry—“you already have a Gmail account”—removes so much friction, especially compared to WordPress or anything that smells like “web hosting.”
A few quiet strengths you might appreciate as an instructor:
Narrative architecture without code: Pages, subpages, scrolling sections—students have to think about pacing, hierarchy, and visual rhetoric.
Multimedia integration is almost frictionless: YouTube, Drive video, audio, Slides, Docs, images—all embed cleanly.
Revision is visible: Sites autosaves and keeps version history, which is useful if you ever want to talk about drafting, revision, or process.
Public-facing but not scary: Publishing to the web feels real, but the stakes are lower than “launch a personal website.”
If you end up using it, you might frame the assignment not as “build a website” but as:
“Create a public-facing multimedia narrative using the affordances and constraints of Google Sites.”
That language tends to cue rhetorical awareness rather than tech anxiety.
If you’d like later on, I can help you:
Design a sample prompt or rubric
Think through privacy language for syllabi (public vs. link-only)
Or sketch a low-stakes scaffold (storyboard → site map → draft → published story)
But for now—this sounds like a really smart rediscovery. I love when a “dead” tool turns out to be quietly perfect for a new moment.
As I was putting on my coat yesterday afternoon to head out to my 3:00 PM class, I noticed what appeared to be a smudge or scuff on the wall near my office window. Slightly upsetting. Not only was it unsightly, but I couldn’t imagine what might have caused it. Sort of like walking out to your car after work and finding a big scrape or dent that hadn’t been there when you left it.
But wait. The wall was bathed in bright, direct sunlight. Could that smudge just be a shadow? To test that theory, I held my hand out to block the sunlight—and the smudge/scuff disappeared. Whew!
Pulling my hand away, I noticed a faint, white afterimage of my fingers trailing their shadow. Cool! But immediately I began to doubt what I’d seen. Had I imagined it? I held my hand up a second time and pulled it away. Again, the afterimage.
Then I began to wonder: Was that afterimage visible only to the naked eye? Or could someone also view it in a video, mediated by a screen?
Well, only one way to find out. I pulled out my phone and shot a quick video of my hand’s shadow on the wall. And YES! When I looked at the video, I could still see that afterimage trailing the shadow as my hand withdrew.
(This is all basically happening as I’m buttoning my coat to leave for class, don’t forget. Multitasking 🙂 )
Much later, when I got home from work last night and was showing the video to my husband, I had to laugh at the sound of myself murmuring the word “cool” in rather awed tones. I hadn’t intended to say it—wasn’t even aware, in fact, that I had. But clearly, my inner voice has a mind of its own, as evinced by that little whisper of a thought slipping into the air without asking permission 🙂
Some things never get old—like popping bubble wrap, snapping retractable tape measures closed with a “thunk,” and (if you’re from Wisconsin) eating cheese curds so fresh they literally squeak against your teeth. Playing with optical illusions surely deserves a place among this pantheon of small pleasures.
You have no doubt seen afterimages before. (Link to Wikipedia article on “afterimages” HERE.) You’ve probably also played the little optical illusion games like looking at the green, black, and yellow United States flag for several seconds and then transferring your gaze to a blank white wall and seeing the flag in its correct colors. Or viewing a “negative” image of a photo and then, similarly, looking at a white wall to see the correct photographic image.
But have you seen the optical illusion created by pairing the afterimage of swirling spirals with Vincent Van Gogh’s Starry Night? Click on the video below to give it a try.
We had a major snowfall in the Midwest over the weekend. Saturday was a cozy, “inside” day, watching the snow fall like powdered sugar gently sifted over white frosting. Yesterday was an “outside” day, cleaning off the cars, clearing away the snow, and trying to beat the falling temperatures (always a challenge in the Midwest, as snow is so often followed by a blast of Arctic air).
A nice time for seeing the neighbors, as well, for all of us were outside doing the same thing. One of our neighbors has a remarkably powerful snowblower, so he was kind enough to clear the base of our driveway (snow plows had been down our street and blocked us in; if you live in a cold, snowy place, you know what I mean!). He also took care of our other neighbors’ drive, as they were out of town, due home Sunday night, and would have had nowhere to pull in after their long trip home.
Anyway, this morning on the drive to work, I noticed these streetlights for the first time ever in 30+ years. They are in a historic neighborhood just to our east, and they are quite different from the streetlights everywhere else in my inner-ring Milwaukee suburb. If they hadn’t been lit, making them stand out against the snowy branches, I doubt I’d have seen them even today. Don’t they remind you of Narnia? All we need is Mr. Tumnus.
Isn’t it funny how a small-ish change (in this case, snow-covered tree branches and streetlights lit past their normal time, probably because it was so overcast) causes you to take note of things that were there all along?
Yesterday afternoon I was walking through the galleries on the second floor of my building (Grohmann Museum on the Milwaukee School of Engineering campus), when I saw this cozy little nook in a way I hadn’t before. So I took a few pictures.
It’s hard to explain the hidden “image” inherent in something you’ve noticed—something that has suddenly suggested itself in the moment as a potential photograph to “capture.” Here is the photo from yesterday that was my favorite. It feels the most “calm” and “balanced,” “clean” and “open,” which is what I felt when I saw that corner in a new way.
That same old, familiar corner I’ve seen thousands of times before, lol
“THE ONE” – calm, open, balanced, clean, elegantly composed (at least to my eye). It has very little floor. It also includes the most ceiling (see the ductwork above) of any photo I took. In the end, I needed more space above the chairs than below in order to capture and convey that open, clean, airy feeling that I sensed when I took note of those chairs yesterday.
It took eight photos till I saw something I was happy with. Why didn’t the others do the job? In looking at the rejects later, I found it was almost as hard to pinpoint why I didn’t like some photos as it was to pinpoint why I did like “THE ONE.” Here are a few of them, with captions to say what I did and didn’t like. (Today’s post is strictly an FYI, thought-process, behind-the-scenes kind of discussion, in case you also like photography and enjoy analyzing why a photo does or doesn’t “work.”)
Reject #1 – Too close to the subject, too much floor (so the chairs are in the wrong place to capture what I “saw”), and even though I liked seeing more of the second chair revealed, I no longer felt the composition overall was as balanced. If I had wanted to emphasize the chairs and seating area rather than the overall calmness and balance and clarity of the entire space, this one might have worked.
Reject #2 – Too much of the wall on the left, too close to the subject. No staircase. You don’t feel the “clean,” “open,” “airy” lightness. Plus the pillar is tilted too much. I know that could be edited/corrected, but I still don’t like the overall composition, and tilting it wouldn’t change that basic fact.
Reject #3 – Too little of the green wall, too much staircase, creating a left-to-right imbalance. Plus, it’s too tilted, and even though I could have straightened things out with editing, in the end there was also too much floor and too little ceiling, so again, I didn’t like the compositional balance, neither top to bottom nor side to side.
I have more rejects, but the differences there are even more minute, and this sampling is probably enough for you to get the idea. One photo works; other photos don’t. It’s so strange, but that’s how it is.
And sometimes you can have a good photo, but it’s not “THE ONE” for reasons outside of the actual image. Here’s an example of how minuscule the differences between a good and bad photo can be (for me, anyway) and why I might sometimes 86 something that’s not really that bad. This last photo was practically perfect. I really, really liked it. It’s got those nice vertical lines and feels very, very balanced. Except . . . too much floor. Too high a “floor” to “ceiling” ratio, that is.
Reject #4 – I actually like this one very much. Except for the fact that it doesn’t give me the overall feeling of space and light, I like this photo a lot. It feels very balanced, and it shows more of the seating area in terms of chairs than my favorite photo does. But there’s not enough ceiling to balance out the amount of floor I have. It just doesn’t give me the height and open space of “THE ONE.”
But, suppose I cropped it, to reduce that ratio of floor to ceiling?
Reject #4 (cropped) – I managed to get less floor, but now the pillar feels too close to the horizontal center and sort of dominates the picture.
Meh . . . but better, so trying again . . .
More balanced. Could be “THE ONE,” except it homes in too much on the chairs, thereby diminishing the overall sense of height and space in that corner. But this could be a twin to “THE ONE.” Like “THE CO-ONE.” 🙂 Except, hmmm, maybe I cut away too much of the floor. And the window on the right is completely gone. As are the stairs. Oh well, it’s not “THE ONE,” so not the end of the world if I butchered it 😂
Actually, that last photo may be my favorite photo purely as a photo. Even though it didn’t capture quite what I saw while walking through the galleries, it has something else I can see now while looking at the photo for itself, separate from the experience. All the individual elements that seem somewhat scattered and singular in “THE ONE” are here compactly wedded together within the more intimate composition of a tightly cropped picture. Every element of light and dark, every line, every shape interlocks like a jigsaw puzzle. A rather geometrical jigsaw puzzle, now that I think about it, with lots of perpendicular lines and balanced patterns of shape and color . . . aka my usual type of photo, lol
So there you have it. My post for the day, a little journey through minutiae on the way to “THE ONE.”
Isn’t it ironic how the phrase “it works” is simultaneously the least and the most accurate and precise way of summing up what makes art “art”? Crazy, but it’s all in the eye of the beholder. And if enough “beholders” somehow happen to agree that something “works,” then who knows? Maybe you’re looking at the next Mona Lisa.
A photo of the Milwaukee Police Department District 1 & Administration building, at night, with blue light and the words “MILWAUKEE POLICE.” Taken on my way home from work this evening. That phrase, “thin blue line,” pops into my head every time I drive past here on my way home from work. At least lately, now that daylight saving time has ended and it is pitch black when I get off work, lol. And I assume that’s why there is thin-blue-line lighting on the side of the wall here. Anyway, tonight I just so happened to get a convenient red light, so I was able to take a picture 😀
Took this photo last night of the Rose Window at the Church of the Gesu on the campus of Marquette University in downtown Milwaukee. To say I’m stunned by its beauty would be an understatement.
Maybe I’ve just never seen the Rose Window lit from within before. It’s golden, which is completely different from its usual appearance. See the images below, screenshots from the church’s website.
Anyway, because I loved the way the window looked, illuminated from both inside and outside, I stopped to get a quick photo before heading on my way ❤
Well, buckle up or click away; this is going to be a long one.
I started writing my “Creative Practice in the Age of AI” series of posts 1) because I was personally curious enough about generative AI (GenAI) and Large Language Models (LLMs) to explore the technology firsthand and 2) because I’ve had so much fun and experienced such an incredible explosion of creativity since beginning this experiment. But it has been eye-opening over the past few months to discover that not everyone shares my curiosity and excitement about GenAI’s potential for boosting human creativity. In fact, I am coming to realize that AI appears to rank right up there with politics and religion as a taboo topic so tightly wedded to a person’s core values and worldview that it elicits a visceral emotional reaction.
Today’s post is one I began writing back in July. You can see how long it is. It has been really hard to write—and even harder to cut down to size. I wanted to respond to a couple of articles that had just been published and were generating a lot of buzz online. But articulating my response with enough explanation that someone without an English PhD could understand my perspective meant LOTS of words. Too many words. Even more words than I currently have, which sadly is just shy of 4,000.
Too long, didn’t read
Maybe I should give the TLDR right up front: I don’t believe generative AI is frying our brains or hollowing out our capacity to think. The rest of this piece explains why, but if you want the short version—that’s it.
A bit of personal history
I am old enough to remember writing papers by hand and then typing them on a typewriter before submitting. When word processors first arrived in the 1980s, writing teachers had a small technology panic. Scholars debated whether composing on a screen would destroy planning, encourage superficial revision, or unravel the discipline of forming complete thoughts.
I remember the shift in my own writing quite clearly. Before computers, my process was linear and methodical: outline first, write second, type third. “Plan the flight, then fly the plan.” Word processing blew that up. Writing became fluid, recursive, improvisational; revising became discovery. Looking back, it’s obvious the computer didn’t erode thinking—it changed the texture of thinking and opened new cognitive possibilities. The moment feels uncannily familiar today.
I also arrived in graduate school just as Computers and Composition released its first issue in 1983. It was a typewritten, photocopied, stapled-together newsletter—pre-desktop publishing, pre-Windows, decidedly handmade. In that inaugural issue, teachers announced grants to study “computer-assisted composition” and organized a special-interest group at the national conference because composing on a computer was still so new that no one knew what it might do to writers.
A mere forty years ago, word processing itself was considered a potentially mind-altering technology. Would it weaken handwriting? Encourage laziness? Reshape cognitive habits? The answer turned out to be yes, in some ways, but no in others. Most importantly, though, the technology created new forms of literacy and new ways of working.
What we’re experiencing now with generative AI isn’t unprecedented. It’s the next chapter in a long story about how writing technologies reorganize the process of thinking.
Summer 2025 and the Great Brain Rot
On June 10, researchers associated with MIT posted a preprint titled “Your Brain on ChatGPT.” Using EEG and behavioral measures, they reported that writers who relied heavily on ChatGPT showed lower neural engagement and weaker recall—findings that understandably raised concerns about over-reliance and fading cognitive effort.
But the study itself was small and exploratory—just 34 participants—and the results were less scary than the headlines suggested. In early sessions, heavy AI users did show reduced mental involvement. But later on, when participants who had begun without AI were introduced to it only after forming their own ideas, the picture shifted. Under those conditions—more interactive, less passive—researchers saw different neural patterns and, in some cases, better task performance. (Link to the MIT article’s web page here https://doi.org/10.48550/arXiv.2506.08872; or, link directly to the full PDF file online HERE)
Less than a week later, Nature Reviews Bioengineering ran a short editorial, “Writing Is Thinking,” arguing that the craft of human-generated scientific writing matters and should be recognized as centrally important for structured thinking and for communicating complex ideas. The editorial urges care in delegating drafting work to LLMs and stresses that writing—the labor of shaping and clarifying—is itself a form of thought. (Link to article, with downloadable PDF HERE.)
Both the MIT study and the Nature editorial generated breathless headlines—“AI will fry your brain!”—but if you look past the noise, you find something more nuanced. The MIT paper documents measurable differences in neural engagement under controlled conditions; it doesn’t prove that AI destroys human cognition.
Plus, let me state the obvious, just in case anyone reading this is unfamiliar with the principal allusion underlying the MIT paper’s apparent thesis. The paper’s very title seems to be an attention grab via connection with the anti-drug message of the 1980s “This Is Your Brain on Drugs” campaign—a rhetorical flourish at best, a strategic bit of branding more likely, and fearmongering at worst. And of course, it is unsettling to think about generative AI not only reshaping our brain’s ability to function but also possibly taking away functionality.
Yikes! Is that really what’s happening? Are we all doomed to fried eggs for brains? If so, make mine sunny side up, please😊
Because guess what? Both the MIT pre-print and the Nature editorial are very much in keeping with a pattern we’ve seen many times before—technology sparking fears of intellectual decline, even as it opens new possibilities.
Before we decide whether AI is harming our minds, we should remember that this isn’t our first cognitive panic. Whenever a technology changes the way we think, we worry it will change who we are. So let’s zoom out. We’ve been here before—long before laptops, even before the printing press.
Plato, memory, and the invention of writing
In Phaedrus (one of Plato’s “Dialogues”), Plato famously attacks writing as a technology that will undermine memory and the capacity for true understanding. Writing is “an elixir not of memory, but of reminding,” and it offers “the appearance of wisdom, not its reality.” Plato also likens written words to painting: The painted image “has the attitude of life,” yet “if you question them they preserve a solemn silence.” Writing, Plato says, cannot answer back; it cannot be interrogated in the way living speech can. That worry—that a fixed external record will replace active, responsive thinking—is the heart of his critique. (Link to PDF excerpt of Benjamin Jowett’s translation of Phaedrus)
A thought: Plato’s concern about written documents’ inability to be interrogated seems remarkably similar to Nature’s concern that LLMs (and by extension AI-written documents) lack “accountability,” although that word’s meaning is somewhat undefined and implies that LLMs are somehow coming up with article ideas, doing research, and then writing the articles entirely on their own, with essentially zero human “thinking” inputs.
Looks like Plato’s 2,400-year-old anxiety has been updated for the 21st century, zeroing in on the fear that if people outsource thought to a tool (like writing or other symbols on the page), they (or we, in the form of collective humanity) will lose something essential.
But what, exactly, is lost? Why did MIT feel compelled to employ the “fried egg” metaphor?
And even if it turns out our brains are fried, what does that mean? Are we “sunny side up”—fresh and still able to blend with new ideas—or “over hard” and cooked to rubbery resistance?
It also raises a bigger question: What is the relationship between our tools/technologies and our very identities? Are we, in some ways, our tools? Do the tools we use (writing, symbols, and now AI) actually shape the way we think and not just what we know?
That’s the question the next section of this post examines.
Alexander Luria and How New Tools Change Minds (Literally 😊)
In the early 1930s, Soviet psychologist Alexander Luria traveled to rural communities in Uzbekistan and Kyrgyzstan during a moment of upheaval. Stalin’s regime was forcing villages into collective agriculture while launching massive literacy campaigns aimed at “modernizing” traditional, mostly oral cultures.
To Luria, this was a rare chance to study two populations with the same cultural roots but radically different access to schooling. Could “literacy” itself reshape cognition?
His team interviewed adults in villages untouched by schooling and adults newly enrolled in state programs, using tasks involving categorization, hypotheticals, and abstract reasoning. A consistent pattern emerged:
People without schooling answered using concrete, situational logic
People with schooling used abstraction and context-free categories
One famous example involved a 39-year-old peasant farmer named Rakmat. When Luria showed him a hammer, saw, hatchet, and a log, and asked which one didn’t belong, Rakmat answered:
They all belong. You need all of them.
A saw, hammer, or hatchet is useless without wood.
Luria tried again, saying, “Some people group the tools together and leave out the log.”
Rakmat responded:
But what would you build with?
A man with tools and no wood isn’t thinking straight.
Separating tools from the work they do made no sense to Rakmat. The items belonged together in a situation, not in a scientific category.
In stark contrast, villagers who had at least some schooling—and therefore knowledge of reading, writing, and arithmetic—immediately answered:
The hammer, saw, and hatchet — they’re tools.
Same objects. Entirely different mental framework.
Luria concluded that literacy and formal schooling didn’t merely add to a person’s store of knowledge; they restructured the cognitive landscape itself. Reasoning pathways were altered, and new ways of connecting ideas emerged. The use of symbols (letters, numbers) supports levels of abstraction not seen in illiterate, unschooled populations (that is, completely oral cultures). Written texts and mathematical equations support imagining hypotheticals. And schooling in general encourages visualization and mental models beyond what is materially present.
So where Plato feared writing might “fry” memory (ironic, that, as the memory of Plato’s words is preserved only via the technology of writing), Luria showed that writing and, more broadly, symbolic systems (as “psychological tools”) strengthen abstraction.
New technologies rewire what “smart” looks like. A technology that diminishes one capacity (e.g., rote recall, as in memorizing a poem) might enhance another (e.g., abstract reasoning). The net effect isn’t simply smarter-or-dumber, but a rebalancing of strengths and weaknesses.
That’s a very helpful lens for thinking about generative AI.
Marshall McLuhan on how technologies change us
Marshall McLuhan (1911–1980) was a Canadian media theorist best known for the phrases “the medium is the message” and “the global village.” Writing in the 1960s, he argued that new communication technologies don’t merely deliver content via a new medium. Oh no, no, no. Every new technology fundamentally reshapes the nature of human thought and society at its core.
The following quote, taken from near the beginning of McLuhan’s most famous essay, “The Medium Is the Message,” is the most helpful illustration of this idea I’ve found:
The “message” of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs. The railway did not introduce movement or transportation or wheel or road into human society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure. This happened, whether the railway functioned in a tropical or a northern environment, and is quite independent of the freight or content of the railway medium. The airplane, on the other hand, by accelerating the rate of transportation, tends to dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for.
For McLuhan, the printing press, radio, television—and now digital media—all reorganize perception and social relations in ways more profound than their “content” alone. That perspective is what makes his insights especially useful when we think about AI today.
Although we don’t quite understand yet what the “message” of AI as a “medium” will be, its implications for reshaping human affairs are on a par with the impacts of the railway and the airplane. We fear that generative AI has the potential to wreck our very brains—as the MIT study so alarmingly appears to herald—yet, history suggests that initially feared technologies often give way to new literacies, conventions, and cognitive practices that reorganize how we attend, remember, and reason.
My claim — a short thesis, owing a debt to Piaget and Vygotsky
As I said earlier, the MIT paper’s provocative title, “Your Brain on ChatGPT,” gestures toward the well-known PSA spots’ logic of “your brain on X.” While that rhetorical move is useful for grabbing headlines, it obscures a more interesting reality: AI is not a drug that fries a brain on contact. Instead, it may be a technology that lends itself to use in ways that either dull or sharpen certain habits. If we treat LLMs as a crutch and outsource judgment wholesale, well, of course we risk atrophy.
But . . . what if we don’t “outsource” and offload our writing in the way Nature’s editorial implies we will, left to our own devices? Suppose we instead treat generative AI as a “slightly advanced guide”—a prompt-collaborator that pushes, questions, and rehearses in a way that accelerates our thinking and creativity.
Jean Piaget (1896–1980), the Swiss psychologist often called the father of developmental psychology, spent decades observing children and mapping predictable stages of reasoning. He argued that learners build understanding by working just beyond what they already know. Too easy and nothing changes; too hard and progress stalls. Growth happens in that “just-a-bit-beyond” zone, where guidance stretches existing mental models without overwhelming them.
Working around the same time, Russian psychologist Lev Vygotsky emphasized that learning is fundamentally social. His concept of the “zone of proximal development” describes the space between what we can do alone and what we can do with a bit of well-placed help. With a “more knowledgeable other,” we gain access to strategies and language that stretch our cognitive reach.
This same principle—guidance just beyond what we know—is exactly where I see AI’s potential today. Using generative AI thoughtfully doesn’t have to flatten our skills. It can function as that slightly more advanced collaborator, nudging us toward work we might not yet have been able to produce alone.
If we treat these generative AI systems as blunt substitutes, then yes, they can dull our skills. But if we use them as “more knowledgeable ‘others’”—collaborative tools that challenge, scaffold, and stretch us—perhaps we can actually expand our capacity to think.
Or, in other words, if this really is our brain on ChatGPT, it isn’t necessarily getting fried; it’s being scrambled into something new. To repeat myself yet again (sorry!), the arrival of large language models is not the first time a technology has raised fears of cognitive decline, nor will it be the last.
The real question isn’t whether or not AI will alter us. It will. What matters more is how self-aware we are as the change occurs. Do we actively shape our new reality through deliberate decision and reflection, or do we passively accept changes foisted upon us by technology and society?
More fundamentally, at least as far as I’m concerned: Are we having fun engaging with AI as a creative partner?
From linear to recursive to hyperlinked at warp speed: My own journey
I’ve already lived through a shift where the tool changed the way the mind moved. Back in the day (40–50 years ago!), writing a paper for me meant planning and outlining, then drafting and typing up a first draft. I started graduate school before the first computer labs were installed at my university. In my first semester as a teaching assistant with two sections of freshman composition, only one of my fifty students used a computer to write his papers. He was a doctor’s son; computers were expensive. By the time I finished my PhD, every student I taught wrote their papers on computers. Or I should say composed their papers on computers. The phrasing is crucial: “composed” on the computer.
In the early days of word processing and LANs (local area networks), I took my grad‑school handwritten drafts into our campus computer labs. That’s what everyone did: we used the computer as a fancy typewriter. Correcting typos was easier on a computer (no blobs of Wite-Out to mar your manuscript), but the creative part was no different from using your manual Smith Corona. Typing already-drafted text into Word (or, more likely at that point, WordPerfect) without making revisions (no RE-VISION, or re-seeing) is like using ChatGPT for nothing more than grammar and punctuation cleanup—you’re not composing with the tool, just using it for a final polish.
In other words, missing the point of the tool’s capabilities entirely.
As a professor with a PhD in English and 40+ years teaching writing, especially rhetoric and composition, I believe generative AI and large language models will be hugely beneficial to human thinking. Yes, they hallucinate. Yes, they’re imperfect. But every powerful tool requires an understanding of its logic and limits to be used well. Just as people had to figure out how to compose texts on computers instead of typewriters, now we’re fumbling through the early era of generative AI. The conventions will emerge from practice. Strong writers will develop an ability to collaborate with the tool rather than subjugating themselves and their intellectual processes to it.
To echo and extrapolate from Alexander Luria’s findings that literacy gave people new cognitive tools, I expect AI literacy—knowing how to interact with LLMs—will do the same for us today. Once we integrate it into our daily work and creative practices, it will open up higher-level thinking in ways we can’t yet imagine.
Practical takeaways for writers, artists, and teachers
Generative AI has become a lot like politics or religion—people’s views run deep. So tread softly. I’m curious about AI and genuinely enjoy exploring it. I think it may help us become stronger thinkers. Others may disagree, even strongly, and that’s fine. I just hope we can stay in conversation with goodwill.
Read the studies for yourself. Don’t let the headlines do your thinking for you. The MIT paper is a real heavyweight, packed with technical details; the Nature editorial is short, easy to read, and all big-picture.
Teach and practice AI literacy. Know when you’re using a model as a brainstorming partner, when as a stylistic editor, and when you must rely on your own unassisted reasoning. There’s really only one way to develop this kind of judgment, and that is to plunge in and engage with the technology to bring difficult, complex projects into being.
Treat LLMs like any other intellectual tool—neither magical nor poisonous. They can preserve and extend thought, or they can enable outsourcing. The difference is how we choose to work.
The real experiment begins now—and we get to decide what kind of thinkers this new era makes us.
Sources and suggested further reading (quick links)
Kosmyna, N., Hauptmann, E., Yuan, Y. T., et al., Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint (10 June 2025). https://doi.org/10.48550/arXiv.2506.08872 (article web page) and https://arxiv.org/pdf/2506.08872 (Full PDF)
Alexander Luria, studies of cognitive development in Central Asia (1931–1933); summaries in Cognitive Development and later commentaries on literacy and abstract thinking. Glozman, J. M. (2018). A reproduction of Luria’s expedition to Central Asia: What is culture for? Psychology in Russia: State of the Art, 11(2), 4–18. https://doi.org/10.11621/pir.2018.0201 [Special issue: Luria’s legacy in cultural-historical psychology]
For more information about Luria’s studies, see the translation of his 1979 book, The Making of Mind, especially Chapter 4, “Cultural Differences in Thinking,” at the Marxists.org archives entry on Alexander Luria’s work.
A fun link to a website chronicling contemporary accounts of the rise of computers, word processing, and composition scholarship, HERE.
Especially for writing teachers: A fun link (HERE) to the very first Computers and Composition publication, 1983. Click on “Announcements,” the only bit of that issue available via PDF.
This Week’s Suggested Creative Practice
Tools, Minds, and the Guides Just Ahead
Every post in this Creative Practice in the Age of AI series includes a “creative invitation”—a handful of small exercises you can try on your own terms. If you’re not TOO EXHAUSTED from reading this entire post, this week’s exercises are designed to help you reflect on the ideas explored in the post above: how new tools, from writing as a technology itself to computers to generative AI, can reshape thought, memory, and creativity. These exercises invite you to play with perspective, experiment with abstraction, and consider how guidance, prompts, and new media can stretch your thinking.
1) Luria’s Categories Try this variation on Luria’s task: take three objects around you (for example, “lamp, book, hammer”). Write down two different ways of relating these objects to each other: first, in ways that are concrete, practical, and situational; second, in ways that are more abstract, conceptual, and taxonomic. Notice how shifting frames reshapes thought. What relationships between objects appear or disappear when you think of them 1) as things that gain meaning via their interactions versus 2) as things that gain meaning via their similar characteristics? Or even more fundamentally: “identity” through participatory action versus “identity” through intrinsic traits.
2) The AI as Socratic Friend Instead of using ChatGPT to generate answers or execute your commands, use it only to ask you questions about your draft or idea. What happens when you let the tool play Socrates, the questioner who leads you through a problem via guided Q+A, versus its usual role? (The connection to today’s post is that Socrates stars in Phaedrus, just as he does in most of Plato’s dialogues 🙂 )
3) McLuhan’s Medium Reversal Take one of your most common tools (calendar app, notes app, email). Imagine suddenly losing it. How would your habits, memory, or communication reorganize themselves without it? Or, try “reverse engineering” a seemingly obvious fact of life. For example, examine the Motion Picture Association film rating system (G, PG, PG-13, R, NC-17). What purpose does it serve? When did it come into being? How has it changed over the years, and why? What is the “message” of that “medium”?
4) Your “Brain on X” Analogy Come up with your own version of “This is your brain on…” using something other than AI or drugs (e.g., “This is your brain on coffee,” “This is your brain on spreadsheets”). Write a playful PSA-style description. What truths or exaggerations come out?
5) The Apprentice’s Twist Artists and writers often begin by imitating masters. Choose a masterwork (a painting, a poem, a film scene). Recreate it as faithfully as you can—but make a significant alteration that makes your work startlingly new while somehow still remaining recognizable as the original.
Piaget would call this “accommodation,” the opposite of assimilation. Instead of making the square peg fit into a round hole, you reshape the round hole to accommodate pegs of both shapes. For example, the plot of a mystery novel usually follows a formula: a crime has occurred (probably a murder), and a main character (professional detective or amateur sleuth) investigates clues in order to solve it. How much can a novel deviate from that formula and still be a mystery novel? Agatha Christie’s fame is partly due to her ability to deviate in ways that redefined the formula and enlarged our idea of what a mystery novel could be.
Think about your “tweak” and how imitation leads naturally into creativity. How does your change add significance and transform the original into something new?
Taken this afternoon around 4:00 PM while walking back to my office from class.
I’ve shot several pictures of the museum via this puddle over the years. See a couple of them HERE if you’re interested.
I like raindrops falling into puddles, especially the way concentric circles ripple outward from each individual splash. Even chilly, wet November afternoons can hold pleasurable surprises if you keep your eyes open.
Seen on my way to a 5:00 PM meeting late today. Just a bunch of chairs stacked up and set against the wall, waiting either for use in the near future or for pickup by the facilities staff.
Anyway, I loved the colors—the black of the chairs and baseboard, the white of the wall, the textured beige of the floor, and especially the bright silver symmetry of the chair legs. Had to stop and take a pic before catching the elevator upstairs to my meeting, and now I can share it with you. 😀
UPDATE – For Carol ~ I stopped by the stack of chairs on my way to class this morning and took this photo.
This chair was super easy to unstack and then restack. Instead of having to lift a chair up above the already-stacked chairs and then lower it carefully into place atop the stack, you can lift your chair just to the height of the topmost chair and slide it onto the stack from the front. Not having a horizontal bar across the back two legs is what allows you to do this. Kinda cool! If I were in charge of setting up and striking the furniture for events like conferences or lectures, I’d want chairs like these. Easy and quick!