Editor’s Note: This article was originally published in Present Tense and is republished here by The Towson Torch in support of the upcoming Towson Growth Summit: Artificial Intelligence and Workforce Development.
Cal Bowman is a Towson-based strategist, consultant, and startup mentor working at the intersection of innovation, public leadership, and workforce growth. He writes Present Tense, a publication exploring clarity, courage, becoming, and the design of better systems.

Photo Credit: Author. The Symbiocene
A few years ago, during a family vacation to San Francisco, my kids stood on a street corner near the Wharf, watching a Waymo glide past. They were horrified. “I would never get in a self-driving car,” one of them said. “That’s way too scary.”
I looked at the car, then back at the chaotic swarm of distracted human drivers around us, and told them: “The car doesn’t need to be perfect. It just needs to be better than us. And frankly, that’s a pretty low bar.”
But here’s the problem: when we apply that same “low bar” logic to AI in the workplace, we shrink ourselves. When we treat upskilling as a way to “keep up,” we’re implicitly accepting the premise that humans are just slightly buggy, slower processors. We accept that we are replaceable, optimizable, and perhaps a bit too inefficient for the modern world. (Author’s note: As the father of a newly licensed 16-year-old driver, I’ll take “slightly better than human” on the road, but it’s a hollow ambition for our careers.)

Photo Credit: Author. Bowmans in San Francisco, decidedly NOT in a Waymo.
The framing is almost always defensive. AI is survival gear. AI is an insurance policy. AI is a shield against irrelevance. If your ambition fits inside what you can already execute manually, AI will always feel like a shortcut. It’s just a way to get to the weekend faster.
But if your ambition exceeds your current capacity, AI becomes an amplifier.
The Humanities of the Machine Age
True workforce development isn’t about learning to “prompt” so you don’t get fired. It’s about the humanities of the machine age. It’s about leaning into the three things that we experience and apply fundamentally differently than a processor does: Judgment, Empathy, and Synthesis. While a machine can simulate these outputs through sheer scale and statistical probability, it lacks the lived context to understand why they matter. It can generate a “judgment” based on a pattern, but it can’t feel the weight of the consequences. We aren’t competing on who can process faster; we are competing on who can define what is worth processing in the first place.
This is where the ceiling rises. When the “cost” of generating an answer, a draft, or a design drops to near zero, the value of the work shifts from the execution to the intent. If you use AI as a shortcut, you’re just producing more “average” faster. But if you use it as an amplifier, you’re using that reclaimed time to exercise a higher level of humanity. You’re moving from being the person who “does the thing” to being the architect who decides “which thing is worth doing.”
The Gravity of the Mean
Why does the “shortcut” method always lead to mediocrity? It’s baked into the architecture of the tools. When we rely on the machine to do our thinking, we fall victim to three specific technical devaluations (see “For Further Reading” for the research behind each):
The Regression to the Mean: AI is trained on the “middle” of the internet. It has read Pulitzer Prize winners, but it has also read millions of mediocre LinkedIn posts and generic corporate memos. It aims for the mathematical center of that data to ensure it’s “correct.” If you just hit “generate” and “paste,” you are publishing “the most likely thing a human would say.” That is, by definition, average. (A short illustrative sketch of this dynamic follows this list.)
The “Hallucination of Politeness”: Because these models are tuned for safety and helpfulness, they tend to smooth over the “edges” that make human work interesting. They avoid the bold claim or the uncomfortable truth. You get a polished, professional-sounding output that has all the friction, and therefore all the soul, sanded off.
High-Speed Derivative Work: If a thousand people use the same tool to solve the same problem as a shortcut, they receive roughly the same 85% viable solution. You aren’t creating a competitive advantage; you’re participating in a race to the middle, producing “more” at a volume that actually devalues the work itself.
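For the technically curious, here is a minimal, purely illustrative sketch in Python of that race to the middle. The phrases, the probabilities, and the two toy decoding functions are invented for this example; no real model is anywhere near this simple. But the arithmetic of the shortcut is the same: ask only for the most likely answer, and everyone gets the identical middle.

# A toy illustration only: not how any production model is built.
# The phrases and probabilities below are invented for the example.
from collections import Counter
import random

# Pretend a model has learned this next-phrase distribution for one common prompt.
next_phrase_probs = {
    "drive efficiencies across the organization": 0.46,   # the safe corporate middle
    "help teams move faster": 0.30,
    "free people to do more meaningful work": 0.15,
    "force us to decide what is worth doing at all": 0.09,  # the interesting tail
}

def greedy(probs):
    # Greedy decoding: always return the single most probable continuation.
    return max(probs, key=probs.get)

def sample(probs):
    # Sampling: occasionally reaches the less likely, more distinctive tail.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# A thousand people hitting "generate" for the most likely answer get the identical output.
print(Counter(greedy(next_phrase_probs) for _ in range(1000)))

# Sampling at least visits the tails, but the center still dominates.
print(Counter(sample(next_phrase_probs) for _ in range(1000)))

The shortcut isn’t malicious; it’s simply modal. The machine hands back the center of the distribution unless a human deliberately asks it for the edges.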
The Paradox of the “Last Mile”
However, there is a nuanced trap in how we measure “productivity” today. Recent research from MIT (Noy and Zhang, 2023; see “For Further Reading”) shows that AI is a phenomenal equalizer, helping lower-skilled workers “catch up” to high performers by automating the baseline requirements of a task. But we have to be honest about what “catching up” actually means: it means moving toward the center.
For the novice, AI is a ladder to the middle; for the expert, it is a tether to the average.
This creates what I call the “Last Mile” Paradox. If the machine can handle the first 85% of a project (the research, the drafting, the structural scaffolding), that 85% essentially becomes a commodity. Its market value drops to near zero because everyone can now produce it at high speed.
This doesn’t make the work less valuable; it makes the remaining 15% (the part that requires human judgment, empathy, and synthesis) infinitely more precious. When the “average” becomes free, the “exceptional” becomes the only thing worth paying for. The value of that last mile doesn’t just stay the same; it skyrockets. The “High Ceiling” is found in that final 15%, where we stop processing data and start practicing the art of becoming.
The Superman Problem
This decision on how to handle that final 15% brings us to a different kind of “low bar” conversation, one that took place on a farm in Kansas. In the 2013 Superman film, Man of Steel, Jonathan Kent tells a young Clark:
“You just have to decide what kind of a man you want to grow up to be, Clark; because whoever that man is, good character or bad, he’s... He’s gonna change the world.”
Today, we are handing out digital capes to everyone. AI is giving us superpowers previously reserved for the elite or the highly specialized: the ability to process a thousand documents in seconds or to generate complex code with a single sentence.
But a superpower without a philosophy is just a shortcut. Or the origin story of a villain. If you use these tools without a clear sense of “who you want to be,” you default to the machine’s “average.” You become a faster version of everyone else. But if you bring a distinct human character to the tool, you aren’t just “keeping up.” You are, for better or worse, changing the world of your work.
Take Refik Anadol. He is perhaps the best modern example of someone who decided exactly what kind of architect he wanted to be in this new world. He didn’t look at AI and see a way to automate a painting; he saw a way to paint with the “collective memory” of humanity. He used his superpower to amplify a specific, deeply human intent.

Photo Credit: Refik Anadol Studio. AI and Architecture
The Anadol Method: Data as a Humanities Project
While most are looking for shortcuts, Anadol is looking for “data pigmentation.” His work provides a blueprint for the humanities of the machine age because it relies on the very pillars we often ignore in traditional upskilling.
When Anadol created Unsupervised for the Museum of Modern Art, he didn’t ask the AI to “make art like MoMA.” That would have been a regression to the mean. Instead, he applied judgment to curate a site-specific dataset of the museum’s history and synthesis to link that data to real-time inputs like the weather and the movement of the crowds.
More recently, with the launch of the Large Nature Model (LNM), he moved into the realm of empathy. He didn’t just scrape the internet for photos of trees; he collaborated with Indigenous communities like the Yawanawá to collect ethically sourced data from the rainforest. He is using the machine to help us “hear” and “see” nature in ways our manual senses never could.
This is the “High Ceiling” in action. By bringing a distinct human character to the tool, Anadol isn’t just generating content; he’s creating a new category of human expression. He proves that the machine doesn’t replace the architect; it finally gives the architect the scale to match the ambition.
The Character of the Architect
If you don’t provide the judgment, the machine provides the probability.
The Machine’s Probability: What is the most likely, safest, most common response?
The Human’s Character: What is the right, most impactful, or most courageous response?
True workforce development in the machine age is less about learning the “superpower” and more about developing the person who wields it. It’s about sharpening your judgment so you know when the machine’s “average” isn’t good enough. It’s about deepening your empathy so you know who the superpower is meant to help. And it’s about mastering synthesis so you can connect those superpowers to a vision that exceeds your manual capacity.
We aren’t here to be faster machines. We are here to be more expansive humans.
For Further Reading
If you want to dive deeper into the friction between human intent and machine probability, here are a few starting points that informed my thinking for this piece.
1. On the “Machine Hallucination”
Read: Refik Anadol Studio: Projects and Research. Why: While I use AI as a level to keep lines straight, Anadol uses it to make them melt. He treats data as a form of “pigment” and “lumber,” using massive datasets to create fluid, architectural-scale art that he calls “machine hallucinations.” I included this because it represents the far end of the spectrum, where the tool isn’t just correcting the plumb line, but imagining entirely new rooms based on collective memory and data.
2. On the “Stochastic Parrot”
Read: On the Dangers of Stochastic Parrots by Bender, Gebru, et al. Why: This is the foundational paper that popularized the idea that LLMs don’t “know” anything; they just predict the next likely word. I included this because it explains why, without a human hand on the wheel, AI defaults to a “regression to the mean.” It’s a bit academic, but it’s the best explanation of why the machine’s version of the truth is often just the most “average” one.
3. On the “Hallucination of Politeness”
Read: A General Language Assistant as a Laboratory for Alignment by Askell et al. (Anthropic). Why: This explores the “HHH” framework (Helpful, Honest, Harmless). I cited this because it shows how the “politeness” we see in AI isn’t an accident; it’s a design choice. It’s the “safety sanding” that often smooths over the bold, idiosyncratic edges that make human writing and judgment valuable.
4. On the Risk of “Model Collapse”
Read: Model Collapse: AI-generated Data Makes Models Forget by Shumailov et al. (University of Oxford, published in Nature). Why: This paper explains a fascinating and terrifying phenomenon: what happens when AI starts training on its own output? It effectively creates a “race to the middle” in which the original data is lost in favor of derivative echoes. It’s the technical backing for my “85% logic”: the machine gets us close, but the last mile of innovation is a purely human endeavor.
5. On the “Variance” Concept (Noy and Zhang, 2023)
Read: Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence by Noy and Zhang (MIT, 2023). Why: This is the study that actually proves the “85% solution” point. They found that AI significantly helped lower-skilled workers “catch up” to high performers, effectively decreasing the variance in output. It proves that AI is a “great equalizer.”
Author’s Note: This study proves that AI raises the floor for everyone, but my argument is that it also threatens to lower the ceiling for the expert. It turns a unique skill into a baseline commodity.
6. On the “Commodification of Execution”
Read: Strategy and the Internet by Michael Porter (Harvard Business Review). Why: Though written in 2001, this is, for many, the “bible” for understanding the Productivity Frontier. Porter explains that when everyone adopts the same high-speed tools, everyone becomes more efficient, but no one gains a competitive advantage. It’s the original theory behind why “the average” becomes a commodity. I included this because it’s the economic foundation for the “Last Mile Paradox.” If everyone has the same superpower, the only way to win is to be the one who knows where to fly.
7. On the “Philosophy of Synthesis”
Read: A World Appears: A Journey into Consciousness by Michael Pollan. Why: Pollan’s recent work explores how the mind synthesizes raw experience into meaning, which is the “Humanities” side of the equation. While AI processes data, humans synthesize meaning. It’s the “how-to” guide for the final 15% of the work that a machine simply cannot replicate.



