A lot of discussion here revolves around choosing tools that align with our values and using them to enhance agency and focus, so I’m curious where everyone thinks AI fits into that approach, if at all.
I’m conflicted, because I think it can easily become a crutch, and studies suggest it can diminish critical thinking and originality when used as the first step in the creative process.
Beyond that, things get murkier. Personally, I refuse to use generative AI for anything that I would put my name on.
So I won’t use it to draft emails or other personal writing, but I am fine with having it proofread or suggest minor revisions for tone or brevity. I’m also okay with using it for more mundane output. Part of my work involves creating product listings with descriptions and specs, and it can be useful there, although it still gets confused often enough that the amount of time saved is debatable.
I also use it, like everyone else, as a replacement for Google, which is now so enshittified that it’s practically useless for many tasks. AI has become my first stop for retrieving basic facts, answering technical questions, and comparing products.
I don’t ask it for novel opinions, though, and I use Perplexity most often because it’s straightforward, provides citations, and doesn’t pretend to be your friend. Sycophancy is still a problem, and it hallucinates often enough that it can’t be trusted without double-checking all but the most basic facts. Even so, it beats the alternatives.
I don’t write code often, but I see no problem with casual vibe coding or using it for boilerplate, though not for complex solutions. It’s probably already at the point where very basic web development and similar work is trivial.
I draw the line at using it for art though. AI art, no matter how technically good it becomes, is fundamentally anti-human. It removes the two most important aspects of the process: self-expression and focused attention. It’s also inherently regressive, dooming us to endlessly recycle and reconfigure the images of the past without exploring anything entirely new.
Overall, I’m fine with AI as a tool so long as it extends rather than replaces human capability. Using it for voice transcription, summarization, editing, research, and planning has been incredibly helpful. Summarizing videos and extracting relevant quotes using YouTube’s new Ask feature, for example, has already proven to be invaluable.
In music, using AI to isolate or enhance vocals and other elements is great, but generating them is not, since tools like Suno replace the central part of the creative process in the same way image generators do.
And it goes without saying that I never assign agency to AI by treating it like a friend or therapist, which I assume is true for everyone here and anyone who knows how an LLM actually works. Seeing people get drawn into these kinds of interactions is depressing, as is watching the youngest generation lose the ability to think critically and write effectively.
I do think it’s possible to use AI extensively and avoid that trap, but I can’t decide if I’m relying on it too much or too little. I’m interested to hear where everyone else draws the line and why.