Discussion about this post

Brad Hamilton:

So basically...you're talking about AI's insidious cultivation of trust, and why large language models (which I think you probably should spell out in your first reference to LLMs, just to include those who don't know the acronym) simply tell you what you want to hear, rather than being honest and admitting when they don't really know. Which ironically is the way to build better trust. Or have I got this wrong?

Davis Michael:

This was an excellent article. I will share it on LinkedIn.

I appreciated your reasoning: we trust it because it's designed to make us trust it (acting benevolently). That's a huge point!

I'm glad you suggested being overt in our prompts. "Please double-check your answer" is a regular part of my prompts.

When I use AI (multiple times a day), I use Claude.ai. I don't trust ChatGPT's guardrails. I don't entirely trust Claude's guardrails either, though I feel Anthropic is more transparent about what it is doing (and about Claude's concerning behaviors) and is presently guided by its constitution. I do double-check references.
