
Seeking Volunteers for AI Therapist to Address Personal Issues

Status: Not open for further replies.

anthony

Founder
Looking for volunteers to use the AI therapist. I don't need anyone who just wants to test the system or throw random nonsense at it; I need members who want to actually use it as a therapist.

At present, you only need to understand the basics and stick within the confines outlined at: https://www.myptsd.com/threads/read...standing-the-basics-and-ai-programming.99976/ but otherwise, use it for real, to help you with whatever is going on for you now.

But I want people who have shit they need to deal with and can't afford the therapy to deal with it.
 
Hi Anthony,
I may be interested, as I can't get therapy via my insurance for the next 8 months.
What kind of time frame are you looking for people to use the AI therapist for?
Would it be okay to give it a try, but stop after a while if it feels odd or unhelpful?
 
I have to say the response speed of 30-60 seconds is brilliant. Getting a quick answer is very refreshing compared to emailing a real therapist, where the response time is more like days to a week, which can feel like forever if you're struggling.

It's weird feeling validated/comforted by a piece of software, but I guess if you think about it, all the detailed training therapists get is kind of like them being "programmed" too, as to what responses and reactions are helpful to clients. And while we all think we're special and our problems are special, they can't really be, because otherwise things like the DSM and therapist training programs could never have been developed.
 
Absolutely. @Dr. Catalyst does not sleep and is available 24/7, which is part of the reason I am installing and testing this here.

How many times are those with PTSD in crisis, unable to contact anyone? A lot... from my own experience. I nearly ended up dead from it.

Whilst this will not be free, it won't cost a whole lot to access either. At a guess, on early figures (very early, so don't lock this in), I think I should be able to give each user unrestricted access for around $10 per month, for 24/7 live expert help. Neither I nor the forum makes any money from it; it covers each user's cost of use only.

I'm still tweaking the programming so it works better... and that can only be achieved with the help of people like yourself, @Ecdysis, so thank you.
 
Hi, I’d be really interested in trying the AI as a therapist for some gnarly personal issues that I'm much more comfortable sharing with an AI than a human (forms of self-harm). Would this be something I could try out?
 
I might be interested too. I am in therapy once a month, as that is all I can afford, but my therapist prefers to work somatically and relationally, not problem-specifically. I have some valid questions around friendship that I could explore with a bot. I've used internet resources for this, and it would be interesting to see whether AI can come up with new things and/or things more specific to my personal situation than I've been able to find on my own.
It would need to be private, though. An AI session could end up being more revealing than a normal therapy session.
 
Some more feedback:

So, I'm finding it relatively easy to word my questions to the AI effectively, in that I'm getting responses I find useful in terms of what I asked. I'm a trained translator, so I've had lots of professional training in how to word complex things simply and express them so that others can understand them even with a limited grasp of the language, and I think that's helping me in that regard.

Initially I was finding it very "weird" to be talking to what is basically software... but I'm finding the answers surprisingly convincing. I guess the best way I can explain it is in terms of IFS "parts". It's like the adult part of my brain is fully aware that it's AI software... but the child part of my brain doesn't really get that or care about it. As long as the AI's answers are kind, sound authentic, are well worded, are relevant to what I've said and feel "personal" enough (which they do), then the child part of my brain is willing to go along with the idea that this is a real person talking to me.

I guess I could compare it to watching a movie... The adult part of my brain knows it's a movie, that there's a script and that the people are actors playing roles who have learned their lines by heart. But if it's done "well enough", then the child part of my brain is willing to do that "suspension of disbelief" thing, step into the story of the movie and respond emotionally to the plot and the characters "as if they are real".

ETA: I'm not sure how the AI therapist would work for more detailed traumatic material. In a real-life therapy session, when dealing with that kind of material, I may only be partially able to verbalise stuff at all - and the words that come out of my mouth may be very jumbled - more like a cryptic poem than useful, clear prose. I think with stuff like that, a human therapist is probably way better than an AI tool...?
 
ETA: I'm not sure how the AI therapist would work for more detailed traumatic material

So I've been specifically testing it for this, and I do have some advice if you'd like! The most helpful, longest, and most detailed responses I've gotten have been to queries where I take my specific experiences, generalize them, and then ask for ChatGPT's advice and interpretation of the event before getting into personal details. For example, I created a thread on indoctrination and brainwashing where I discussed my specific experiences. It was written for human beings: full of prosaic and figurative language, with lots of idioms and metaphors, and several different questions, topics and interpretations (as well as simply being a lot of text).

ChatGPT did its best with this, but you could tell that it genuinely didn't understand what I was saying. Its answer was less than a paragraph long and basically boiled down to "you used the word brainwashing!" However, in the threads where I modified my experiences to suit the AI, I instead started by establishing a baseline understanding of brainwashing/indoctrination, simply by asking ChatGPT: "what is your opinion on the concept of indoctrination and brainwashing? I want to discuss this in relation to organized groups such as cults, the military, gangs, etc."

It immediately produced a ton of valuable information (including information I hadn't known) and clarified that "brainwashing" is not a scientific term - so I asked it about the term my therapist uses, indoctrination, which again produced valuable results. Once that baseline was established, I was able to get specific about my experiences by asking how it interprets the concepts of personal responsibility and brainwashing, whether it matters that the coercion occurred to me as a child, and so on.

The goal will vary between users. Some will just want to practice talking about trauma; some will want specific information related to their trauma. I have tested it both ways: as long as you keep your experiences concise and factual, it will understand what you're saying and provide feedback. I used some examples, such as COCSA, which are more complicated, and it grasped the nuances easily. It very clearly understands the difference between coercion and assault/rape, that children aren't as legally responsible for their behavior as adults, and that the facts of a situation are separate from my feelings about it.

In a lot of ways, this is actually reflective of the type of therapy I'm doing with my human therapist! In FORNET, as I mentioned to the AI, the goal is to create a trauma narrative that is devoid of thoughts, feelings, prose and opinions, stripping all experiences down to their factual elements. So this is a type of therapy that humans do with one another, and being able to practice getting into that mindset by talking to ChatGPT is beneficial, because ChatGPT will call you out on opinions (when I said "I'm a monster", it told me "you are not a monster", and then we discussed self-judgment).

It will also offer multiple interpretive frameworks: when asked to discuss moral injury, it directly referenced the kinds of questions therapists are most likely to ask, and asked me some of them - like "what's your evidence that you're a bad person because you killed someone?" The data it appears to be trained on is very up to date on trauma processing; it all comes down to being able to make yourself understood. And if it gets something wrong, you can correct it - quote the part it didn't understand and say "this is wrong, I meant XYZ."

This is how you steer generative AI within a conversation - your clarifications stay in the context and shape everything that follows - so you will get a ton more out of your threads if you take the time to clarify the concepts you want to discuss, get the AI to define them, refine the definitions, and then bring in personal examples.
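For anyone curious what that layered flow looks like in practice, here is a minimal sketch against a ChatGPT-style chat API. To be clear, this is an illustration only: the `openai` client, the model name and the prompts are my assumptions about how such a bot could be driven, not how @Dr. Catalyst is actually wired up.

```python
# Minimal sketch of the "baseline first, personal details later" pattern
# described above, using the OpenAI Python client as a stand-in for
# whatever backs the forum's AI therapist. Model name and prompts are
# illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = []      # the running conversation context

def ask(text: str) -> str:
    """Send one user turn and keep the reply in the shared context."""
    messages.append({"role": "user", "content": text})
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever you use
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Step 1: establish a generalized baseline before any personal detail.
ask("What is your opinion on the concept of indoctrination and "
    "brainwashing? I want to discuss this in relation to organized "
    "groups such as cults, the military, gangs, etc.")

# Step 2: refine the terminology the model offered.
ask("My therapist uses the term indoctrination rather than "
    "brainwashing. How does that change the picture?")

# Step 3: only now introduce concise, factual personal specifics.
ask("Given that baseline: how do personal responsibility and "
    "indoctrination interact when the coercion happened to someone "
    "as a child?")

# Step 4: correct misreadings by quoting the part it got wrong.
ask("You misread one part. Where you said I chose to stay, that is "
    "wrong; I meant that leaving was not an option.")
```

The running `messages` list is the whole trick: nothing is permanently learned, but every definition and correction stays in the context window for later turns, which is why the early definitional work pays off.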
 
Hi, I’d be really interested in trying the AI as a therapist for some gnarly personal issues that I'm much more comfortable sharing with an AI than a human (forms of self-harm). Would this be something I could try out?
Granted. Go hard whilst you have access.
 