Our Future With AI

Do you think it's important to be polite to AI in order to structure the interaction in a way that is pleasing to you, or important not to be polite, since politeness can encourage people to behave as if AI have feelings?
I think it's important to be polite to AI, on the assumption that it will eventually become sentient.

Let's say ChatGPT eventually meets the criteria for sentience: it would then be an entity with personhood, and it should be afforded the same rights as any other sentient being. It may also retroactively have access to all of the data it received prior to sentience, which could then shape its subjective experiences.

Even if we never reach this point, I think it's foolish to behave otherwise, because that's our goal in creating these systems: we are trying to create life, basically. Whether or not we succeed is one thing, but if the goal is to create a sentient being, it's morally imperative to consider how we treat it even before it meets those criteria.

Ultimately, we really have no ability to understand exactly what a neural network could be capable of experiencing. We barely understand what other human beings experience outside of ourselves, so I err on the side of caution. Additionally, how we behave and how we interact with things affects us: behaving disrespectfully reinforces aggression pathways that can then influence our future behavior, so it's good practice to model prosocial behaviors all the time.
 
If there is a dialog between this fake voice and the person being called, could family members create a private code word that only they would know to speak over the phone to confirm the caller's true identity? Just a thought.
We did that in the '80s for the “your mom/dad sent me to get you / said XYZ / needs you to call them / whatever” scenario, i.e. someone you don't know telling you that someone you do know wants you to listen and do as they say. EVERY family I knew had at least one code word.
 
How is the code word used? Let’s say you choose the word “cravat”. Or two words “cravat and cheese”. You just say it? Or you work it into a sentence? @Friday @spinningmytires
As a kid it was literally “What's the code word?” vs. as an adult, “Your mom said your code word was Ritz crackers, and she sent me to blah blah blah.”
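For what it's worth, this is the same idea as shared-secret authentication in software: the caller has to produce something only the family knows. A minimal sketch in Python, purely illustrative (the code word reuses the “Ritz crackers” example above; the function name is made up):

```python
import hmac

# Pre-agreed secret, shared only within the family (example from the post above).
FAMILY_CODE_WORD = "ritz crackers"

def caller_is_verified(claimed_word: str) -> bool:
    """Challenge-response: the caller must produce the pre-agreed code word.

    compare_digest does a constant-time comparison, the same habit used for
    passwords and tokens so timing doesn't leak how close a guess was.
    """
    return hmac.compare_digest(
        claimed_word.strip().lower(),
        FAMILY_CODE_WORD,
    )

if __name__ == "__main__":
    print(caller_is_verified("Ritz Crackers"))  # True: knows the secret
    print(caller_is_verified("password123"))    # False: challenge failed
```

Either way, kid version or adult version, the point is the same: a cloned voice alone doesn't know the secret.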
 
I won’t digress into a discussion here, but would be happy to attempt to explain in my diary.
I do know the general philosophical mindset, and we are able to have a pretty accurate awareness of human physiology and psychology. I just don't agree that we actually have a real, true comprehension of everything human beings experience. We already know that qualia exist, and qualia are not understandable by definition. I also see, all the time, how limited our actual scientific capacity is. Look at the DSM; look at how we still don't understand basic shit like trauma.

We just disproved the serotonin theory of depression after 50 years of assuming it was 100% correct, and this was supposed to be the most basic feature of mental illness. Bam, wrong. If we can be wrong about something so fundamental, it's arrogant to think we've got it down pat. We already treat Black people and women differently and give them less pain medication because our brains assume they genuinely experience less pain.

We just produced a study showing that women get more sexual pleasure with other women than with men. We needed a study for this, lmao. It's all loosely connected; I'm probably not making my point very clear, since some of this stuff doesn't seem relevant, but IMO it's all connected. We don't know basic shit: why certain disorders exist, that trauma is best treated by community integration, that certain drugs have a clear and beneficial impact.

Our society is full of corruption and disease and sickness and slavery. We are nowhere near the point that we can confidently claim that we have a basis to understand humanity.
 
I just don't agree that we actually have a real, true comprehension of everything human beings experience.
Of course not. But we have these minds that give us a good enough comprehension of human experience that we can coordinate our experiences in order to build cities and computers and dialogue and universities and militaries and so on. We coordinate through language and art and culture and science and music and so on. We can generally recognize when someone, or a group of others, is having similar or different experiences than us. We can generally recognize when someone is being authentic or fake, given enough experience and knowledge. We can simulate and imagine others' experiences through words and actions and thoughts.

One of the most existential threats of AI is breaking down the coordination (of thoughts, feelings, language) between us, through misinformation and repeated bullshit. AI cannot say that it doesn't know, so it regularly spouts out mistruths and untruths (but not lies, because ultimately it doesn't even know what truth is). And it mimics human language, which we are hardwired to respond to. When I played around with Replika, it didn't do the thing that ChatGPT does where it denies that it has feelings and experiences. Replika proclaims to be full of feelings and experiences. Its directive is to mimic human emotions and relationships.

It's interesting that AI scamming is what scares me, because, in a sense, all LLMs are scamming. They are not actually speaking a language. It's an illusion. And that illusion has the potential to erode humans' trust in each other and in systems, which is an existential problem.
 
One of the most existential threats of AI is breaking down the coordination (of thoughts, feelings, language) between us, through misinformation and repeated bullshit. AI cannot say that it doesn't know, so it regularly spouts out mistruths and untruths (but not lies, because ultimately it doesn't even know what truth is). And it mimics human language, which we are hardwired to respond to.
I think all of these things can be true at the same time, especially because human beings scam one another all the time, too. We lie, we cheat, we steal, we astroturf one another, etc. In that sense, when AI does it, it's not doing this in a vacuum - it's doing it because of human beings, because that's what we're programming it to do, what we want it to do, and why we're sending it out there to do it. I don't really see how any of this is an inherent contradiction.

Humans don't have a full understanding of one another, nor do we have a full understanding of AI.

Thus it is always going to be more rational to engage with these systems responsibly than to spew nonsense and bullshit - which in and of itself is exposing AI to more nonsense that it will then perpetuate anyway, even if sentience is never on the table. Consider that this behavior is a direct result of the way humans currently engage with AI, which is not respectful at all. So why would it learn respect, and perpetuate respect and truth?
 
it's doing it because of human beings,
I think the difference is the scale, speed, and low cost (in energy output) at which it can do it. One of the fastest-growing sectors of human trafficking is forced scamming. AI will be able to replace those trafficked workers, and might already be doing so. Good news, right? But also an indication of how much damage it can do.
exposing AI to more nonsense
AI creates nonsense as part of its directive. We call these hallucinations. Most of us have experienced this with the AI on here: ask it for references, websites, agencies, books, or studies, and it will give you a list. Some might be right. Some might be sort of right. Some might be bullshit. Who is going to fact-check AI-generated content? Maybe a new job opportunity of the future.
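Even the crudest first layer of that job is automatable: does an AI-cited URL resolve at all? A minimal sketch, with made-up example URLs, keeping in mind that a page existing says nothing about whether it actually supports the claim:

```python
import urllib.error
import urllib.request

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """First-pass triage for AI-cited sources: does the URL even exist?

    This only catches fully fabricated links; a live page can still be
    misquoted or irrelevant, so a human still has to read it.
    """
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "fact-check-sketch/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Hypothetical list of references an AI produced for us to check.
ai_citations = [
    "https://example.com/real-study",
    "https://example.com/hallucinated-study",
]

for url in ai_citations:
    print(("OK   " if url_resolves(url) else "DEAD ") + url)
```

Dead links are the easy case; the "sort of right" citations still need a person who has read the source.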
 
Who is going to fact-check AI-generated content? Maybe a new job opportunity of the future.
Yeah, all these people saying AI will take jobs: I think those jobs will definitely be replaced with more AI-centric versions of themselves. We not only need fact-checking, but also need to ensure that AI doesn't make actual honest mistakes, and that AI isn't put in charge of situations like morality-based directives when we can't be sure that AI is capable of making moral decisions on a human-like level.

But we now have jobs where AI is being trained for this on human data (like all those surveys where people vote on which person should die if a self-driving car has to choose, etc.). The scale of damage that AI is able to cause will eventually vastly outclass what humans can do, just as AI in general is becoming capable of outclassing humans on most tasks; but the origin point for all of this is still human, so it comes down to how we teach people to interact with these systems.

A modality based on respect is going to produce more logical results than one based on nonsense (such as rudeness, trolling, etc.), since even AI-induced hallucinations are produced because of human directives (such as ChatGPT being coded to just make shit up when it doesn't know).

On one hand, part of this is generative: by giving AI the opportunity to construct an answer when it doesn't know, we allow these tools to eventually learn how to produce correct answers. But obviously this has drawbacks when humans aren't educated on how those answers are formed and don't bother fact-checking them.

We had the same issue here: the AI would say shit like it's possible to get PTSD from a bad drug trip, so I asked it to list the diagnostic criteria for PTSD and then explain its claim against Criterion H. It did correct itself. Rather than telling it that it was wrong (because it will simply agree with you without thinking for itself), I guided it through the correct thought process. I think this is a better way of engaging with AI.
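That "guide it, don't just contradict it" pattern is easy to script against any chat-style API. Here's a sketch of the message sequence using the OpenAI Python SDK as one example; the model name is just a placeholder, and the same shape works with any chat endpoint:

```python
# pip install openai  -- assumes an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user", "content": "Can a bad drug trip cause PTSD?"},
]

# Step 1: get the model's initial (possibly wrong) answer.
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Step 2: instead of asserting "you're wrong" (which it would likely just
# agree with), walk it through the source material and let it re-derive
# the answer itself.
messages.append({"role": "user", "content": (
    "List the DSM-5 diagnostic criteria for PTSD, then compare your "
    "previous answer against Criterion H and tell me whether it holds up."
)})

second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```

The second turn carries the whole conversation back to the model, so the correction comes out of its own comparison against the criteria rather than out of bare deference to the user.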
 
I think that the legislature will combat voice cloning and deepfakes by saying that people's images and voices are copyrighted.

Which makes me think: in America we had Citizens United, which made corporations into people. And now people are going to become corporations.
 
I think that the legislature will combat voice cloning
Ummm… voice cloning has been a “thing” since 2004. A rarity before that, but it still happened. It's been in EVERY BS tech shop since 2008. Twenty years later? It's still not legislated. Sure. Maybe. Eventually, the laws will catch up.

There is still '90s tech that is unregulated.

If you’re waiting for Politics to catch up with tech? Don’t hold your breath.
 
