So basically... you're talking about AI's insidious cultivation of trust, and why large language models (which I think you should spell out on first reference to LLMs, to include readers who don't know the acronym) simply tell you what you want to hear, rather than being honest and admitting when they don't really know. Which, ironically, is the way to build better trust. Or have I got this wrong?
This was an excellent article. I will share it on LinkedIn.
I appreciated your reasoning: we trust it because it's designed to make us trust it (acting benevolently). That's a huge point!
I'm glad you suggested being overt in our prompts. "Please double-check your answer" is a regular part of my prompts.
When I use AI (multiple times a day), I use Claude.ai. I don't trust the ChatGPT guardrails. I don't entirely trust Claude's guardrails, though I feel Anthropic is more transparent about what they are doing (and Claude's concerning behaviors) and are presently guided by their constitution. I do double-check references.
Thank you. Refreshing, irreverently honest, and it made me smile and then laugh. My current relationship with generative AI / LLMs is similar to my perplexed experience when COVID vaccinations were made mandatory for a variety of activities, e.g., travel to visit or care for loved ones: an ethical dilemma in the face of little factual information about longer-term effects and more. Your article sparked more food for thought for my writings about ethical fading. Again, thank you.