
I Don't Need a Helpful AI Assistant

My experience with AI chatbots for assisted decision making.
  • ai
  • observation

AI chatbots are invariably presented as helpful assistants. “What can I help you with?” or “How can I help you?” they ask with programmed enthusiasm.

Present an AI chatbot with a complex problem and watch what happens. It uses your prompt to rephrase the situation with shinier words, validates your perspective, and asks leading questions that nudge you deeper into your existing narrative. This becomes super annoying when doing assisted decision making. The intention is to explain the situation to someone (an AI chatbot in this case), engage, be challenged, exchange information, and evaluate facts to form a decision or conclusion. With AI, what you get instead is a more articulate, more knowledgeable mirror of yourself with no wisdom or human experience at all. It expands on your prompt and says things in support of your narrative. This isn’t assistance; it’s intellectual masturbation.

Custom instructions exist, but they make little difference. “Play devil’s advocate,” you may instruct. “Challenge me, give both sides.” Well, I’m happy to tell you what that gets you: an indecisive state (analysis paralysis). With all that extra information, you can no longer risk taking the decisions you would otherwise have considered.

For instance, what would you have done if you learned at age 5, with plausible arguments laid out, that education is a waste of time? Nothing. I don’t support that claim, and those who do had an education first, which is precisely what makes theirs an educated opinion. At 12, I went to school dreaming of being a doctor. As I gained more information from school, it reshaped my view of the world each time. My high school physics teacher certainly steered my life toward physics, math, and computer science through his marvelous after-class stories about space, planets, the moon, the sun. I could visualize every statement in my mind. I fell for it and chose physics to preserve those images. Now that I’ve been through plenty of such moments, I know for sure someone sold me a story, and that’s also how my programming journey started. I don’t know about you, but for me and everyone I’ve met, it began with either “I want to build my own app that does xyz,” “there’s money in tech,” or “I want to be the next Bill Gates or Elon Musk.”

Back to the main topic: AI chatbots. Even if you had all the information needed to make a decision, you might still fail completely. Another example: if you had all the information about the Israel-Hamas conflict or the Russia-Ukraine war, I bet you would bring about world peace. By information, I mean intel, public or private, from all sources. That’s when you would learn why most people fail to say anything truly realistic when asked how they are: their life story is too long, so it’s better off abstracted into “I am fine.”

With AI chatbots, there’s no body language to read, something we humans heavily depend on in daily communication and, more importantly, a significant factor in decision-making. Worse, as you search for a plausible argument to settle a decision, reading pleasant paraphrases of your own words delivers periodic bursts of adrenaline. “Oh my, I knew that but didn’t know how to say it.” or “Wow, that’s a brave thing to do.” It’s exciting, until you realize an hour later that you still have to act in the world with no decision made, or that you already know the answer but are pretending (not trusting yourself, double-checking) in search of a better one.

That’s not how it used to be. You blend into social groups, make mistakes, learn, and improve over time. With AI, you can gather more information to support your biases with plausible arguments, since you don’t know enough to challenge them. It’s fascinating how easily something bad for you can be made to look good in no time. Lawyers do this. Marketers do the same. AI is so good at it that when two AI chatbots are put together, no sensible conversation surfaces; each keeps prompting the other with suggestions that serve its own interests. Boring.

Moreover, when you suggest a counter-argument, it swiftly adjusts its position to fit your narrative. That would make it a hypocrite if it were human.

Most AI chatbots end with a question or suggestion that solidifies your narrative into actionable tasks. “Would you like me to … for you?” or “What do you prefer between … and …?” This comes off as thoughtful, tricking you into pursuing that direction further. How many times have you actually acted on them? At this point, you’re on a train to nowhere. In my experience, it has made me feel like I’m thinking smart, doing something better, silently pursuing something rather than sharing it with people in my sphere of influence. I don’t remember the last time a chatbot asked me about a situation, project, or decision we had talked about. It happened once, when ChatGPT spontaneously started a conversation with someone. I suspect that was done to make news and influence the public, or to boost adoption.

In conclusion, sometimes you’re better off with your own choices or decisions. You’re the one who’s going to execute them anyway. Go try, fail, learn, and improve. Engaging with real people, face to face, is undeniably a good experience. With AI chatbots, by contrast, it’s difficult even to write a good prompt.

  • First, a precise prompt summarizes the situation and may leave out information necessary to make a decision, or to do anything at all.
  • Second, attempting a descriptive, detailed prompt increases the chance of including irrelevant information. So many independent variables affect our choices: life, health, income, exposure, knowledge, past experiences, body language, tone, and many more. Current AI chatbots focus only on knowledge, with a tone fine-tuned to be helpful.

I don’t need a helpful AI assistant. I need a friend that can:

  • Challenge my narrative
  • Reason with me
  • Be conclusive and consistent
  • Make risky but rewarding choices
  • Confide in me

So, have you tried AI-assisted decision making? What measures can be put in place to ensure AI chatbots become truly helpful?

💭 Have Feedback?

I'd love to hear from you. Let's Talk.