Discussion about this post

Joe Lazer (FKA Lazauskas):

Great post, Shane. I wonder how you square the importance of intellectual honesty with the incentive system of the attention economy. Sam Altman wins by continually saying outlandish BS in the press, vacillating between straight-up lies about AI's development timetable and predictions of AI's existential threat. The market rewards him; he controls the narrative, earns media and attention, and OpenAI's market share and valuation go up and up. A startup like Cluely gets $15M in funding for an AI that'll help you cheat ... that isn't even real. We need intellectually honest leaders ... but doesn't the game also reward intellectual dishonesty?

Michelle W.:

All of this is true, Shane, thank you. I personally think that for the time being AI should be totally banned for political use and carefully but strictly limited for personal use. That said, it has its upsides. It's been (and is being) imposed on the translation industry, and in my experience it's more an annoyance than a real help, other than as a writing tool that saves me from further carpal tunnel - unless I have the option to train it myself on the platforms and in the tools I use, and it then *stays* trained and doesn't try to override more than 40 years of experience in my linguistic and industry fields. When it does, I send it sarcastic notes and put everything back the way it should be, but that's time I'm not paid for. That's my personal beef with it. "Hal" is not what I want, but I'm all for a tool that helps with typing, consistency, and handling regular client glossaries. And if AI can advance climate, medical, and scientific research without closing the outside bay door, I'm good with that, too. But we the users need to be AI's masters, not a small group of shadowy whoevers. Take care.
