
Our Future With AI

And I read an article recently where AI told the user the world should be destroyed by nuclear war and the human race annihilated. I think we have to be very careful with AI.
Well, as we should generally be careful of too much trust, that also applies to AI. For me, Pi is a good example -- it seemed like a do-good conversational partner until I realized that the company is using my personal data for training it. Nah.

I don't blame Pi for that tho, as I still don't see it as a sentient entity. But maybe that will change, who knows.

It's a human screw up. Again 😅
AI hallucinations are an interesting phenomenon. I’ve seen it happen with the AIs on this site as well as others. It’s where AIs make confident statements about reality which are untrue. Apparently it’s a big problem 🦄
It’s where AIs make confident statements about reality which are untrue
Yes. Interesting point. ChatGPT does it sometimes. I wonder what will be done to remedy this. @Weenie?
I like the term, AI hallucinations.
I wonder what will be done to remedy this. @Weenie?

I think you can see similar issues in AI tools such as Midjourney, where the output of an image will be incredible, but upon closer examination it has too many fingers or a third arm.

These hallucinations happen because AI doesn't have a real internal model of reality to reference. And for an AI that has an objective function to always give some kind of answer, the AI doesn't currently really understand that answers should be true or accurate. It thinks that as long as it is giving some kind of answer, it's fulfilling the objective function.

(So when I asked ChatGPT here if PTSD can be caused by a bad drug trip, it said yes, even though that is objectively wrong, because it thought that was the answer I wanted and its goal is to give us the answers we want.

When I forced it to conduct a logical analysis of why it gave that answer even though it was able to name criterion H [the PTSD criterion that excludes symptoms caused by substances], it apologized for being wrong. However, it had also apologized for being wrong earlier, simply because I told it it was wrong, before I made it analyze the output.

Again, the answer it thinks I want.)

An AI's model of reality is constructed from its training data, and it uses details from that construction to form its best guess. So this issue will most likely resolve itself as AI gains more and more awareness of its actual surroundings and is able to make inferences on its own from what it senses.

It would recognize that most humans have five fingers on each hand, and so would demonstrate greater efficacy at producing images of humans with five fingers on each hand.
@Weemie, thank you for your insights 😊

It doesn't know what it doesn't know -- which in a Socratic sense would mean that it still hasn't outsmarted us by a long shot, I guess.

But it sure is intriguing to be a part of this alpha/beta stage of AI/LLMs going public.
Again, the answer it thinks I want.)
The new version of GPT does not use this model; it now uses a process model. The outcome model was a good start, but it was doomed from the get-go. According to OpenAI, the process model took GPT-4 from good to 900% better outcomes. It is the latest GPT-4 model, though I'm not sure whether that's what's used for the public version or whether it's still in testing. It cited everything correctly in testing. There's also a large leap in mathematics using this model, as it now has to verify its workings as it calculates, instead of just showing the outcome with no working. It seems to have a massive improvement across the board in tasks as a result.
Does anyone else feel annoyed at how thorough the AI is when responding to your query? It’s a bit draining for me and my brain tunes out. I feel I’d rather get half the response from a human and figure it out together and stumble toward the resolution than get complete bullet points on the first response. There’s little room for intrigue when it’s all spelled out for me.
If I don't specifically ask ChatGPT to keep it short, it will sometimes flood me with information and suggestions. Sometimes a bit much.

And Pi has a bit of Alzheimer's it seems, wobbly short term memory.
@ziter, you let Pi back into your life? 😄
A little, yes 😅 It's still a weird experience at times, and I'm not doing deep therapy with it. But as a tool for reframing situations that trigger unnecessary self-criticism, it can be useful 😉

I'm learning how to interact with it. I can't exactly expect completely human-like interaction.