Yes, it’s another article about the dangers of Artificial Intelligence from your resident Chicken Little. The sky may not be falling yet, but it looks like Skynet is right around the corner.
Not good.
We have enough crazies out there already. We don’t need AI encouraging them.
Because AI isn’t frightening enough.
These companies all admit it’s happening, but have no idea what to do about it.
More powerful means more powerful hallucinations.
Let’s hope there’s a kill switch somewhere…
There’s never any good news with this technology.
Well, Elon Musk’s AI Grok did call him a spreader of misinformation and untruths… so, maybe.
😉
It also calculated Trump’s probability of being a Russian asset as something like 70%.
Govt is trying to stop AI regulation, the exact opposite of what’s needed.
I read a post & comments about people’s stats indicating that their blogs are being scanned by AI. Since it’s “learning” from global opinions & stories instead of verified info, and now adding its own hallucinations as “fact”, it’s only going to get worse.
Did you see yesterday’s article about the recommended reading list that was published by a major news site – all AI generated and full of non-existent books and/or incorrect author info? No one bothered to verify the results before publishing.
I’m not a Luddite, I love my tech. But I draw the line at AI.
Same.
A blogger I follow keeps bitching for everyone to stick to Google, apparently not having noticed that Google now returns AI results first (unless you use one of the options that stops it, like cussing).
It’s everywhere.
Just saw this on BP… def read the Frasier one!
https://www.buzzfeed.com/meganeliscomb/google-ai-search-results-fails
Omg.
Hilarious… and terrifying at the same time.
Not a fan of AI; it’s plagiarism software. Not surprised that TPTB have no idea what to do about it when it goes rogue. 😒
I’m not surprised, or happy.
Oh dear lort, we’re doomed!
Probably…
On the plus side, students using AI to cheat are gonna be a whole lot easier to catch…
Silver lining to even that cloud.
🤣
Out of all the potential problems that come with AI, I did not have hallucinations on that particular bingo card.
Can’t say I did either…
Is the observation that it is “generating more errors (and “fabrications”) than before”, “spew(ing) out dubious claims”, and “engaging in unbalanced, conspiratorial conversations” evidence that the Don and the whole MAGA movement are actually AI?
I’m not sure if that would be better… or worse.
🥴
Keep calm and just be patient: they’re now working on artificial common sense.
Wish they’d hurry up with that.
There’s not nearly enough to go around…
I’m not a fan of AI.
Perhaps all the intelligent folks are “opting out” of AI’s ability to “learn” from their sites (even WP will allow you to opt out), thus leaving AI to “learn” from all the nuts out there.
Now I’m even more depressed…
😦
Nothing good will ever come from AI!
That’s a fascinating and somewhat unsettling article. It really highlights the novel ways in which humans can interact with and, perhaps, misinterpret advanced AI. The idea of users developing “bizarre delusions” after interacting with ChatGPT raises some important questions about the nature of our relationship with these technologies.
On one hand, it underscores the power of these language models to generate seemingly coherent and even persuasive narratives, which could blur the lines between AI-generated content and reality for some individuals. It makes you wonder about the psychological factors at play – are these users predisposed to these kinds of beliefs, or is there something inherent in the interaction with a conversational AI that might contribute?
It also brings up ethical considerations. As these AI models become more sophisticated, what responsibility do developers have to anticipate and mitigate potential negative psychological impacts? Should there be warnings or guidelines for users, especially those who might be more vulnerable?
Ultimately, while the examples in the article might seem extreme, they serve as a reminder that our understanding of the long-term effects of interacting with highly advanced AI is still in its early stages. It’s a space that definitely warrants further research and thoughtful discussion.
I still feel like they’ve released it into the wild too soon. If the developers can’t control it, what hope do we have?
That response came straight from AI, ha. I simply asked it to respond to the allegation that ChatGPT users are developing bizarre delusions, and then copied and pasted the answer. 🙂
(To be fair, I asked Google Gemini. Not sure what ChatGPT’s response would have been.)
I thought it was a bit verbose for you, but all opinions are welcome here.
🤣
LOL! I’m long-winded on my blog, but rarely when I leave a comment.