AI Practices
Just as there are many different types of writing, so too are there many ways to work with AI based on the goal, audience, and use case of the text.
Using AI as a tool to enhance—not replace or outsource—my writing and thinking, I tailor my approach according to what the text is trying to accomplish and why.
The more the writing demands that I sound like my unique, human self, the less AI will be involved.
For interpersonal writing with friends and family, I don’t use AI at all. I use my heart and my head, as nature intended.
For creative writing, where it’s meant to be my voice with the addition of an audience who might not personally know me—think blog posts, Substack articles, or that book I keep meaning to write—I use AI as a workshopping partner.
This means I’ll give Claude the goal of the piece and any other relevant parameters, show it my drafts, and ask for objective feedback, which I may or may not accept. By objective, I mean blatant grammatical or typographical errors, or places where I might be missing the mark of my stated goal. The actual editing is done by me in a Google Doc; Claude operates within a framework I’ve established for this purpose.
For personal branding, like my LinkedIn bio and this website, I’m still at the creative wheel, and my process is much the same as for my creative writing, only with more extensive workshopping.
In these contexts, some semantic ablation is acceptable, given just how wordy and poetic I can be. I’m still doing the writing and editing myself, but I’m considering Claude’s more subjective feedback and acting on it when I agree.
For these use cases, Claude helps me color inside the lines, while I choose what hues and shades I use.
As we get into the technical side of things, AI takes on more writing work while I take on the role of orchestrator.
Technical writing is a science, unlike the more artistic disciplines of creative writing and copywriting. In tech writing, rules are king. The aim is consistency and clarity, and with that comes exactly the kind of homogeneity at which LLMs excel.
It’s in this space where AI becomes the greatest boon for my work, allowing me to expedite laborious tasks like organizing and standardizing, so that I can quickly dial in on making sure knowledge is intuitively structured, concepts are effectively conveyed, workflows are properly validated, and information gets into the hands of the users who need it now, not later.
Tools like Atlassian’s Loom are able to almost entirely eliminate the tedious grunt work of documenting step-by-steps. With tools like Claude and ChatGPT, brain dumps become documentation as if by magic. And with my quality assurance expertise and deep understanding of what makes technical writing work, I can focus on accuracy and strategy as I shape the end result.
ChatGPT or Claude?
I prefer Claude over ChatGPT. While many praise ChatGPT for its ability to infer meaning from subtext or things left unsaid, I favor Claude’s more straightforward, face-value communication style and find it interprets my prompts more precisely.
Claude is a better sounding board for me, not one likely to go on flights of fancy, answering questions I didn’t ask or appeasing emotions I never claimed to be feeling. Claude is also much less of a yes-man than ChatGPT.
Finally, Claude doesn’t get into weird arguments with me that feel way too much like talking to a person (derogatory).
What’s my overall attitude toward gen AI?
Like many people, my introduction to gen AI was ChatGPT in late 2022. I used it for much of my personal and professional work throughout 2023–2025, often impressed with its ability to help me do things I couldn’t have done without spending frustrated days sifting through online forums to understand things like VBA macros or how to make a simple diff checker.
However, it was when I tried to use ChatGPT for tasks I did know how to do—writing, editing, proofreading, even simple math ledgers or Excel formulas—that I realized just how poorly it handled them. I found myself scrutinizing every output and challenging every hallucination, to the point that it was less time-consuming and aggravating to just do the work myself.
In late 2025 I switched from ChatGPT to Claude, which is a much saner experience, though still not impervious to hallucinations.
Copilot was as vexing as ChatGPT: it helped me achieve things I couldn’t have done on my own, while completely butchering the things I could do on my own but had hoped to expedite with Microsoft’s built-in AI.
In my corporate life I was tasked with testing Atlassian’s tools, Rovo and Loom. I found Rovo to be infuriatingly obtuse, but Loom to be quite a game changer, even in its beta version.
As a Content QA responsible for reviewing text, audio, and video assets created with various gen AI tools, I can say with my whole chest that any time saved upstream was lost twice over or more in the QA phase downstream, and that entire workflows would need to be reworked to account for this reality.
AI evangelists will insist it’s all a matter of writing foolproof prompts. I can’t help but wonder though, if machine psychology is something we’re all required to master in order to reap real benefits from gen AI, is it truly as democratizing as they make it out to be?
As you can see, my feelings on the topic are a mixed bag. Ultimately though, what I think doesn’t matter, because AI is here. The toothpaste is out of the tube. There’s no putting it back. In order to survive and stay relevant, we must embrace working with AI tools, and so I do.
And to that end, I think the most important thing any of us can do is to stay grounded in the reality of the tools’ strengths and limitations as they develop, keep our expectations reasonable, and work with AI accordingly. Prompt engineering is a vital skill, of course, but a perfect prompt won’t make a model do something it’s simply not able to do, and someone with skills and experience will always need to check the work.
tl;dr: Keep humans doing the work only humans can do, let the machines do the rest, and always have a human being reviewing gen AI output for accuracy and quality.
Note: All em dashes on this page were lovingly handcrafted by me, using my artisanal, free-range English degree.