Don’t say I didn’t warn you.

Yes, it’s another article about the dangers of Artificial Intelligence from your resident Chicken Little. The sky may not be falling yet, but it looks like Skynet is right around the corner.

Not good.

We have enough crazies out there already. We don’t need AI encouraging them.

Because AI isn’t frightening enough.

These companies all admit it’s happening, but have no idea what to do about it.

More powerful means more powerful hallucinations.

Let’s hope there’s a kill switch somewhere…

31 thoughts on “Don’t say I didn’t warn you.”

  1. Govt is trying to stop AI regulation, the exact opposite of what’s needed.

    I read a post & comments about people’s stats indicating that their blogs are being scanned by AI. Since it’s “learning” from global opinions & stories, instead of verified info, and now adding its own hallucinations as “fact”, it’s only going to get worse.

    Did you see yesterday’s article about the recommended reading list that was published by a major news site – all AI-generated and full of non-existent books and/or incorrect author info? No one bothered to verify the results before publishing.

      1. Same.
        A blogger I follow keeps bitching at everyone to stick to Google, apparently not having noticed that Google now returns AI results first (unless you use one of the options that stops it, like cussing).

  2. Is the observation that it is “generating more errors (and “fabrications”) than before”, “spew(ing) out dubious claims”, and “engaging in unbalanced, conspiratorial conversations” evidence that the Don and the whole MAGA movement are actually AI?

  3. Perhaps all the intelligent folks are “opting out” of AI’s ability to “learn” from their sites (even WP will allow you to opt out), thus leaving AI to “learn” from all the nuts out there.

  4. That’s a fascinating and somewhat unsettling article. It really highlights the novel ways in which humans can interact with and, perhaps, misinterpret advanced AI. The idea of users developing “bizarre delusions” after interacting with ChatGPT raises some important questions about the nature of our relationship with these technologies.

    On one hand, it underscores the power of these language models to generate seemingly coherent and even persuasive narratives, which could blur the lines between AI-generated content and reality for some individuals. It makes you wonder about the psychological factors at play – are these users predisposed to these kinds of beliefs, or is there something inherent in the interaction with a conversational AI that might contribute?

    It also brings up ethical considerations. As these AI models become more sophisticated, what responsibility do developers have to anticipate and mitigate potential negative psychological impacts? Should there be warnings or guidelines for users, especially those who might be more vulnerable?

    Ultimately, while the examples in the article might seem extreme, they serve as a reminder that our understanding of the long-term effects of interacting with highly advanced AI is still in its early stages. It’s a space that definitely warrants further research and thoughtful discussion.

      1. That response came straight from AI, ha. I simply asked it to respond to the allegation that ChatGPT users are developing bizarre delusions, and then copied and pasted the answer. 🙂

        (To be fair, I asked Google Gemini. Not sure what ChatGPT’s response would have been.)
