Where to draw the line with AI?

A lot of discussion here revolves around choosing tools that align with our values and using them to enhance agency and focus, so I’m curious where everyone thinks AI fits into that approach, if at all.

I’m conflicted, because I think it can easily become a crutch, and studies suggest it can diminish critical thinking and originality when used as the first step in the creative process.

Beyond that, things get murkier. Personally, I refuse to use generative AI for anything that I would put my name on.

So I won’t use it to draft emails or other personal writing, but I am fine with having it proofread or suggest minor revisions for tone or brevity. I’m also okay with using it for more mundane output. Part of my work involves creating product listings with descriptions and specs, and it can be useful there, although it still gets confused often enough that the amount of time saved is debatable.

I also use it, like everyone else, as a replacement for Google, which is now so enshittified that it’s practically useless for many tasks. It has become my first stop for retrieving basic facts, answering technical questions, and comparing products.

I don’t ask it for novel opinions, though. I use Perplexity most often because it’s straightforward, provides citations, and doesn’t pretend to be your friend. Sycophancy is still a problem, and it hallucinates often enough that it can’t be trusted without double-checking all but the most basic facts. Even so, it’s better than the alternatives.

I don’t write code often, but I see no problem with casual vibe coding or using it for boilerplate, though not for complex solutions. It’s probably already at the level where very basic web development and similar work is trivial.

I draw the line at using it for art though. AI art, no matter how technically good it becomes, is fundamentally anti-human. It removes the two most important aspects of the process: self-expression and focused attention. It’s also inherently regressive, dooming us to endlessly recycle and reconfigure the images of the past without exploring anything entirely new.

Overall, I’m fine with AI as a tool so long as it extends rather than replaces human capability. Using it for voice transcription, summarization, editing, research, and planning has been incredibly helpful. Summarizing videos and extracting relevant quotes using YouTube’s new Ask feature, for example, has already proven to be invaluable.

In music, using AI to isolate or enhance vocals and other elements is great, but generating them is not, since tools like Suno replace the central part of the creative process in the same way image generators do.

And it goes without saying that I never assign agency to AI by treating it like a friend or therapist, which I assume is true for everyone here and anyone who knows how an LLM actually works. Seeing people get drawn into these kinds of interactions is depressing, as is watching the youngest generation lose the ability to think critically and write effectively.

I do think it’s possible to use AI extensively and avoid that trap, but I can’t decide if I’m relying on it too much or too little. I’m interested to hear where everyone else draws the line and why.


I think you’ve hit the nail on the head as far as what a lot of people are feeling about AI. The big AI companies are following the social media playbook of putting out a product with the aim of “hooking” as many users as possible, and eventually they’ll jack up the prices once they have enough users who feel like they can’t get by without it.

In other words, AI products are designed to maximize engagement. And we all feel that: if you get accustomed to using them, you feel like you can’t disengage. On the other hand, if you never opt in in the first place, you’re accused of being a Luddite who will be left behind. So you’re caught between a rock and a hard place, forced to pick the least bad option.

The silver lining here is that there are people working on AI products that don’t buy into this dichotomy: AI that is not extractive, that is not optimized for engagement. Some initiatives that I know of off the top of my head:

  • https://solve.it.com/ - an offering from Jeremy Howard, one of the original deep learning pioneers. His platform asks what AI truly optimized for learning and collaboration would look like. The hypothesis he landed on is AI that provides structure and scaffolding but doesn’t proceed past that boundary; the human stays in charge of filling in that structure, which preserves much more of the user’s agency.
  • https://bluebox.pocadesign.org/ - a non-profit initiative I recently got involved with, Blue Box is an attempt to create a local (self-hosted), slow (non-extractive), community-centered AI product. Its ethos is very similar to that of the early software, web, and open source movements, aiming to design an AI that prioritizes human relationships by allowing users to offload tedious administrative work that is important but not urgent.

I know that’s not really an answer to your question, so here’s my 2 cents: being in tech, it’s been really interesting to see and talk to people in industries where AI hasn’t advanced nearly as quickly as it has in software engineering. From my own experience, I try to practice what someone coined “vibe engineering” (as opposed to “vibe coding”, a term that suggests a shoot-from-the-hip, consequences-be-damned approach to writing software): figuring out ways to leverage AI to do more testing, specifying, and verifying, the techniques that serious production software is built with.

I hope that’s useful background that maybe provides a glimmer of hope during these uncertain times 🙂

Google is unusable now, but Kagi’s pretty good still!