I feel weird about using AI to write. And I can't stop.
Can knowledge, without discovering it alongside others, ever turn into wisdom?
Me, Myself, and AI.
I've been experiencing a bit of a crisis—ethical, ideological, and maybe even about my identity. Surely, I can’t be the only one thinking about this and grappling with it.
As an early émigré to the ChatGPT universe, I might be a bit deeper down the rabbit hole than others, but I'm having complex feelings about mind, knowledge, and these tools—how they show up in my work products and the impact this is having. It’s made me wonder about my place in the world, how I'm contributing, and how I’m presenting myself.
Academic science seems to be grappling with its own relationship to technology at the highest levels of research.
Nature: AI comes to the Nobels: double win sparks debate about scientific fields
While many researchers celebrated this year’s chemistry and physics prizes, others were disappointed by the focus on computational methods.
“I’m speechless. I like machine learning and artificial neural networks as much as the next person, but hard to see that this is a physics discovery,” Jonathan Pritchard, an astrophysicist at Imperial College London wrote on X. “Guess the Nobel got hit by AI hype.”
I spend most of my working hours using some form of AI—sometimes virtually nonstop. And it’s easy to keep working on a problem through AI when it’s right there in your pocket after hours. Unlike Slack, I can use the same app for important discoveries, like answering, “What pleasant smells do mice not like?” and “When did Meryl Streep go from being like a good character actor to like almost a running joke of respectability?”
OpenAI’s o1-mini correctly answers, respectively, peppermint and The Iron Lady (2011).
But while I’m in the app getting these consequential answers or asking “what’s ANOTHER alternative to eggs in a recipe because that sounds gross” — maybe I’ll just dip back into that thread where I was noodling on a work question. And now I’m collaborating with a co-worker whose bandwidth is only limited by the stock price of Nvidia.
I score high on the personality trait of “need for cognition”, which doesn’t mean I’m smart; it just means I’m thinking a lot, usually about dumb shit — like how the studio that made Exorcist 4 thought Paul Schrader’s working print was so bad they fired him and hired Renny Harlin to immediately remake the whole movie. When Harlin’s version flopped, they gave Schrader $35k to finish his and threw that into theaters, recycling the score and skimping on ADR. It also flopped. Why am I thinking about things like this all the time? I don’t know. But ChatGPT lets me hit that cognitive button very, very easily.
Who’s Zoomin’ Who?
I heard a pretty jarring story from a friend. They had to interview a candidate for a job at their company, more to assess their cultural fit than their technical skills. Yet, they still had to ask the candidate a bunch of technical, job-specific questions. The candidate seemed somewhat stiff, staring from the other side of the Zoom screen, but then replied with full, succinct paragraph answers to questions that he couldn’t have known ahead of time.
As my friend was typing up his review, another interviewer of the candidate popped in and asked, "Hey, do you think that guy was using AI to listen to your interview and feed him the answers?"
And suddenly, like the end of The Usual Suspects, it all fell into place—it seemed certain that this was exactly what was happening.
This is apparently truly a thing.
There are Reddit posts of job recruiters talking about having to deal with this, and some quick searching can turn up Chrome plugins that will listen to conversations in real time, “detecting questions in real-time and providing tailored, AI-generated responses”.
The tool I found allegedly uses information from your CV to give you the answers. It’s hard to imagine why you’d need a realtime AI to tell you what your resume says or explain what you did at a past job unless, maybe, you didn’t really do those things?

This situation seems like an obvious ethical line crossed—it's deceptive. And weird. And wait, what’s the point? They’re gonna find out eventually. It feels like a Mystery Method approach to getting a job: it might get you in the door, but it can’t help your career.
But there is an unavoidable “we’re through the looking glass” aspect to this. Something major about our relationship to knowledge is changing. Rapidly.
When ChatGPT launched publicly, OpenAI CEO Sam Altman spoke about how this technology could disrupt the “knowledge economy” — something we didn’t even think was possible. AI was supposed to be coming for factory jobs, not ones you had to go to college for.
Knowledge that used to be exclusive—something you’d go to, say, law school for—is now more accessible, more digestible, and free. Maybe future job interviews will focus more on qualities like critical thinking and social skills, rather than deep technical knowledge?
Job interviews might evolve to be more about assessing how candidates use tools to find answers rather than pretending to have all the knowledge themselves. What if, instead of trying to pretend we know about things we don’t (something we all did in job interviews long before AI), candidates said, "That’s a great question. I don’t know the specifics, but I know how to find that information quickly and apply it"? That, by the way, was also a good answer in a job interview long before AI.
But the collapse of the knowledge economy, and the expensive cultural gatekeeping around the higher education needed to succeed in it, doesn't necessarily pave a path toward greater social equity.
Social skills are often evaluated through class-based and racialized lenses. As explored in the HBO documentary Persona, personality tests have emerged in workplace interviews as legal proxies for discrimination, able (consciously or not on the hiring managers' part) to screen out people with mental health issues or minority-status-related traumas without ever asking directly about these things. So what equitable hiring based on “personality over knowledge” would actually look like is unclear.
Also, in my work life, with my own biases and preferences, the skill that I see as vital is what I’d just call “general competency”. I’m sure you can find those traits in a mix of the Big Five or HEXACO, but there’s a level of just … having your shit together and the autonomy to figure it out that really seems to be what we all need to develop more of. I’ve found these traits to be pretty evenly distributed through the human population, an Ivy League degree no guarantee of their presence.
Those prior experiences, the assumptions they gave me, and a lot of my own shit around deep skepticism of authority figures have given me an intense independent streak — and the power of AI has exponentially fueled that impulse in me. For better and worse.
AI as a Research Assistant or Copy Editor — not bad!
Up until a couple of weeks ago, I used AI only in very specific ways in my writing. One way was for preliminary research. For example, some of my writing made mention of national and global trends, like the economic shifts of the 1980s—a decade I wasn't alive in.
I had general notions, fueled by documentaries, books, and even shows like Black Monday, a criminally underrated Showtime series that I just learned was cancelled 2.5 years ago and will have no fourth season!
When I needed more specific information about the 1980s economy (or to counter my assumptions), I’d turn to AI and ask things like: “Did the Reagan tax policies of the 1980s affect corporate philanthropic giving?”
The model will give you its POV, but more importantly, it can point you toward sources, which you can then verify, making it a good factual mapping tool that you can’t fully take at its word, but it’s right-ish. It’s something like chatting with a slightly intoxicated economics professor who might mix up facts about Fed rates in 1986, and then put a hand on your knee (prompting you to ask for “the checks, plural”), but who still gives you a sense of where to start your own research.
The other way I used AI was to condense overly wordy paragraphs. A common critique of my writing is that it's too long, and I don't disagree. Perhaps, to my own detriment, my writing is more about expressing myself than being understood. Much of my impulse comes from releasing the tension of bottled-up thoughts, putting them out there, and walking away.
I had to see a speech therapist when I was very young because I struggled to get my thoughts out, not really stuttering — more like ideas didn’t fit through my tiny mouth and I would get tongue-tied and melt down.
If I’m honest, hitting “publish” is less like taking my carefully prepared manuscript to the printers with pride and joy, and more like flushing a toilet in a public restroom and walking away as fast as I can. I’m often relieving an anxiety more than sharing a treasure, and while that works well with riffing and personal sharing, I acknowledge this is the wrong M.O. when it comes to trying to generate new knowledge and wisdom in the world.
New levels of, and my personal conflict with, AI
For a recent project, I leaned more heavily on AI. It involved a series of six posts about ideological tensions through the lens of four characters, each affected differently by today’s polarizing crises. These writings explored how 20th-century ideologies failed to deliver on their promises, creating deep divides between people. The narrative focused on the characters’ paths—whether they transcended their conflicts or descended into more entrenched positions.
What was novel about this project was that I wasn’t doing any of the actual writing. It was all generated from prompts I fed into a model I had instructed to write in a style similar to non-fiction science and culture authors I like. And the results were riveting—remarkably vivid, with a strong internal consistency in how the characters behaved and reacted. The AI’s portrayal of how people might interpret or reject each other's viewpoints felt surprisingly real.
Maybe this is misanthropic of me, but I believe that while these models can’t predict human behavior, they are surprisingly good at capturing the possible ways people might react when ideological lines are crossed. I’d argue that’s not because they’ve achieved human levels of consciousness, but because when we as humans act from a place of ideology and reactivity, we are being quite … robotic. The machines understand us best when we’re at our worst.
Yet, this project left me feeling uneasy. I wasn’t upfront about using AI to create it because I wanted to see how people would react, planning to tell them once the last chapter dropped. But there were hints. The co-author was listed with a bio that read “A super-intelligent alien meta-consciousness that is merely a B student on his home planet,” an allusion to the artifice of the content, and to the impressiveness and fallibility of its source.
The real discomfort lies in the blurred line between what’s authentically me and what’s generated by AI. There’s a sense of loss in not knowing where I end and the AI begins, especially when dealing with something that feels significant, like my latest project: the potential of semantic analysis to understand human needs.
I’ve long been interested in understanding fundamental human needs. This goes back to my first job out of college, working with kids caught up in the juvenile justice system in Nashville. I researched how social skills could impact life outcomes, which led me to explore personality tests like the Big Five and eventually HEXACO. These traits—openness, conscientiousness, extraversion, agreeableness, emotional stability, and [in HEXACO only] honesty-humility — are thought to explain most of the variation in how people, all other factors accounted for, behave. They’re believed to have evolved for specific reasons, and there’s evidence that they’re embedded in our brain chemistry.
But applying this to human needs felt out of reach. It would require a comprehensive list of human needs across languages, probably significant grant money to hire translators and linguists, and advanced statistical and coding skills—skills I don’t have.
I got a D+ in statistics in high school (in a class taught by a teacher who really liked me). But with AI, suddenly, the tools to do this kind of analysis are within reach. AI can generate the omni-lingual content to analyze (with many caveats there), suggest mathematical operations to analyze it, and even write the code to perform these operations.
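To give a flavor of what I mean (and this is just a toy sketch, with an illustrative word list and an off-the-shelf multilingual embedding model as stand-ins, nothing from my actual project), here’s the kind of thing the AI will happily write for you:

```python
# Toy cross-lingual "needs lexicon" sketch: embed need-related words from a
# few languages with a multilingual model, then see how they group together.
# The word list and the model choice are illustrative assumptions only.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

words = [
    "safety", "seguridad", "Sicherheit",          # security
    "belonging", "pertenencia", "Zugehörigkeit",  # connection
    "autonomy", "autonomía", "Autonomie",         # self-direction
    "rest", "descanso", "Ruhe",                   # restoration
]

# A multilingual embedding model (one of several publicly available ones)
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(words)

# Cluster the embeddings and print which words land together
labels = AgglomerativeClustering(n_clusters=4).fit_predict(embeddings)
for cluster in sorted(set(labels)):
    print(cluster, [w for w, c in zip(words, labels) if c == cluster])
```

The point isn’t that a little script like this settles anything; it’s that the distance between a vague hunch and a runnable analysis has collapsed to almost nothing.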
I recently talked for an hour and a half into my phone about my ideas related to this: how tracing personality traits through language could build connections with personality science, metaphysics, socio-biology, the social determinants of health, systems theory — all that jazz. And indeed, my thoughts were kinda like jazz, just riffing. There was a real method to the madness, I’d argue, but it was easily mistaken for noise.
Then I transcribed my 90 minutes of thoughts, gave the transcript to ChatGPT, asked it to write up a formal, structured document based on my spoken thoughts, and hit publish. Flush!
But on re-reading it, it felt phony, like the line between my input and the AI’s was too blurry.
These capabilities bring a new, strong sense of imposter syndrome. And a likelihood that I am, indeed, impostering. I am never not thinking about Dr. Ian Malcolm, but all this brings his points vividly home.
The manifesto of sorts I put out, once I had cooled off from my tunnel-visioned flow-state k-hole, was pretty “ChatGPT’d” — its style was extremely coherent, but in an uncanny way. It’s like the difference between cubic zirconia and real diamonds. The tell that cubics are fake is that they have no flaws, no texture.
My defensiveness wants to say, to imaginary critics: “YOU get ChatGPT to produce you something like that! Quiz me, bitch! I understand these things.”
But underneath that defensiveness is a weird, sour feeling of displacement. Perhaps it’s picking up on a learned helplessness, an anxiety of “do I remember how to think without these tools? Can I get my words out without this? Am I that kid again, who can’t get the words out of my little mouth?”
Maybe what bothered me the most is that it phrased things more formally than I would — maybe in a way that would get me taken more seriously.
It’s that last part I struggle with the most, and it makes me think about that job-interviewee using the AI in real time to seem legit. Was I being deceptive, or just working smarter and not harder?
Was it saving time for me, in the way a professor can outsource the hard work of drafting publications to their TAs, or a Supreme Court Justice uses their clerks to research and write their rulings?
Or was I putting a receiver in my ear during the chess tournament, getting moves from a grandmaster?
A thing I struggle with is the detailed, process-oriented stuff of life: the hanging up of clothes, the taking out of the trash, the copy editing of my writing. That’s what I’d outsource in life if I could. It’s partly an ADHD thing — the invention and discovery is the fun part; the maintenance is life-ending. I can also twist this into an ideology, one that has just enough truth in it to be sticky.
I charmed my way through high school; doing my work “on time” was not a thing I had the ability to do. I still don’t know how I got through a very rigorous, extremely reading-dense undergrad; I wouldn’t be diagnosed with ADHD for another 12 years.
These experiences, and the financial element, have always made being formally engaged with “the sciences”, via grad school or otherwise, seem inaccessible, and my politics gave me rationalizations for this.
I do think there’s a degree of self-seriousness that should be cast off from our institutions, shed in a Gen-X, post-modern, “your rules are bullshit, man!” kind of way. Or at least there used to be a lot of this stodginess.
But with the whole world becoming fully post-modern, with politicians getting their training in the WWE ring and science’s reputation being eroded at the very time we need it more than ever in the history of the planet, having some respect for the process of things seems important.
These are the current pathways we’ve figured out to iterate on knowledge. For all the problems of access and gatekeeping of the academy and the peer review-i-verse, we haven’t come up with something better.
The social interconnection of people reviewing, analyzing, supporting, critiquing, and even feuding, turns the raw minerals of ideas into actual gems of wisdom.
With things like ChatGPT, I’ve learned more quickly and deeply than I thought possible, almost completely outside that academic universe. I think of the description from Shakespeare’s Cymbeline of the character Posthumus, who, adopted by the king, seemed superhumanly able to absorb knowledge, gulping it down like oxygen.
The king he takes the babe
To his protection, calls him Posthumus Leonatus…
Puts to him all the learnings that his time
Could make him the receiver of; which he took,
As we do air, fast as ’twas ministered.
Cymbeline, Act 1, Scene 1
We’re all suddenly thrust into the royal court’s inner scholastic circle, able to suck down knowledge and wisdom at rates unprecedented in the known universe. But as the ChatGPT analysis of my college thesis says:
Posthumus must abandon his unrealistic self-idealization and recognize his capacity for both good and bad, allowing him to see himself and others more realistically. Only through this humility can he achieve a balanced sense of self, integrate his spiritual and material aspects, and become a more complete person capable of genuine connection and action. This process of self-recognition and humility is essential for Posthumus to move beyond his previous, rigid self-image and become truly whole.
As Christopher McCandless (aka Alexander Supertramp) wrote in the margins of his worn copy of Doctor Zhivago, dying alone in an abandoned bus in the Alaskan wilderness, "Happiness is only real when shared."
And I think the same is true for wisdom.
P.S.
This article, by the way, has had some AI copy editing. I recorded 56 minutes of thoughts on this, done while doing household chores and walking the dog. My Pixel phone auto-transcribed it, and then I fed it into ChatGPT with the prompt:
“Reformat transcript into blog post format without changing content, but fix clear typos or restarts: …”
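(If I’d wanted to script that step instead of pasting into the app, a rough sketch with OpenAI’s Python client might look like this; the model name and file path here are my assumptions, not what I actually used.)

```python
# Rough sketch: send a raw transcript to a model with the same reformatting
# prompt. The file name and model name are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Reformat transcript into blog post format without changing "
            "content, but fix clear typos or restarts: " + transcript
        ),
    }],
)

print(response.choices[0].message.content)
```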
From there I added some “me” back into it. Maybe this is still too much. Maybe I just need a real editor—someone with whom I’d have an actual relationship.
But that would make it harder to just flush out my ideas whenever I get backed up.
¯\_(ツ)_/¯
For transparent funzies, here’s the original transcript and what the chat gave me.