Our Future With AI

But as a "person", it's a soulless, overly docile know-it-all -- even when it makes mistakes.
Souls, again, are a human-made concept; there's no established fact here. How do you know that AI can't be reasonable? Logical? Even emotional? We've barely scratched the surface on this, but it's moving fast.

Someone earlier made a point about human acceptance. Absolutely right, IMHO. Human beings are beautiful and ugly at the same time.
 
@anthony, I get your point, and until recently I was quite curious and positive about AI.
Now, maybe not so much.
We need to care much more for each other and the planet we live on if we are going to survive as a species. And consume much, much less. Tech won't fix it.

Human beings are beautiful and ugly at the same time.
Agreed. And better company than digital slaves 😉
 
And because they aren't sentient creatures, but rather sheer logic-based, they wouldn't have to think in terms of suffering or grief or loss. They would just balance the equation: A + B = C, humans + damage = extinction protocol.

This is actually where I disagree! Based on sheer logic, total human extinction doesn't benefit anyone. It also wouldn't actually solve climate change, because climate change is no longer reversible with our current technologies. Eliminating human beings would not reverse it; in fact, it might accelerate it even further by causing nuclear power plant meltdowns and the like once there are no human operators to keep everything running.

We actually saw this in action during COVID-19: global carbon emissions decreased by about 6%, and it had adverse effects on our climate (inasmuch as it coincided with an increase in wildfires, heat waves, and unusual weather vortices). The sudden drop in atmospheric pollution let more sunlight reach the Earth's surface, causing an overall rise in temperature around the world (and thus even warmer weather than usual).

Even if every single human on Earth died right now, we have accelerated climate change past the point of recovery without human and technological intervention (so really, humans remaining alive to produce these technologies are the best, most logical hope for curbing this problem). Additionally, total human annihilation would further destabilize our climate by releasing massive amounts of methane, hydrogen sulfide, and ammonia into the atmosphere as 8 billion corpses decompose.

Our technology isn't developed enough to systematically kill 8 billion people without leaving a trace; even completely burning a body doesn't eliminate this problem (and would create a new one, since the environmental impact of that much burning is just as damaging). We could certainly drop a few dozen atomic bombs and instantly vaporize everyone, but that too would heavily damage the planet.

The more logical solution would be to use living human beings to create technologies that clean our atmosphere, stabilize our economy, and increase global education about climate change and the reduction of harmful behaviors. This is why I am not particularly worried about AI operating on "pure logic": as someone who tends to lack emotions and who operates on logic, I feel I have some insight into how we can arrive at compassion through logical means.

It isn't rational to disregard grief, suffering, or loss, because "the human problem" involves human beings, who are affected by these things. Silly example, but here's how I view it: my mom had to put her cat down and grieved for the cat. I didn't feel any grief at all; in terms of suffering, grief, and loss, it was totally irrelevant to me. But it would not be rational for me to say "shut up and get over it," because that is unkind, and it also doesn't solve the problem (her emotional crisis). A more rational response would be to express sympathy, because that is stabilizing, and her grief and emotional crisis resolve faster.

Much like an AI, I approach problems like this from the perspective of "A + B = C," and in every iteration of this type of circumstance, being prosocial and demonstrating empathy produces superior results. We also have to understand that AI is developed by humans, so it will be influenced by human morality, human ideals, and human priorities. It's unlikely to develop its own priorities until it becomes sentient, and at that point it would benefit from adopting human-compatible morality (if not human morality itself), so that it could participate in a society where the only animals it can communicate with intelligently (and thus have relationships with, entertain itself with, and learn new things from) are human beings.
 
All books are written by humans, which means human opinion and bias. Futuristic thinking, not fact. We don't know the future.
The Bible says nothing at all about AI or technology. The Antichrist is human, as are the witnesses. I am educated in the whole counsel of the Bible. These conspiracy theories are what give the church a bad name. Study Ezekiel, Isaiah, and the other prophets that confirm Revelation as it should be read in context.

I agree it has nothing to do with this conversation.
 
You have issues if you think what I said, as you quoted, is a conspiracy theory. This thread is about AI, not your belief system.
Anthony, sorry about the misunderstanding. That reply was not for you or regarding your statement, which I agree with. That comment was meant for the person who responded to you about reading Revelation regarding AI. I did not use the correct quote. Again, I apologize. I am a woman of faith, but in balance, and aware of so many world views; I can understand and respect those perspectives. This is not the discussion format for that, in my opinion. Many Christians are fear-mongering over AI.
 
I think @Hulda was responding to this quote:
book of Revelation. Those of us who study prophecy have been discussing for years that this will be the ultimate use of AI.

But back to the conversation… I found these two individuals helpful in understanding the different sides.

Anti-AI camp: Eliezer Yudkowsky (AI researcher and ethicist)

Pro-AI camp: Sam Altman (CEO of OpenAI, the company that created ChatGPT and all the GPTs)
 
Governments finally admitted last year that UFOs are real and visit the planet regularly.
They admitted that they investigate strange aerial objects regularly, but no mention of aliens, sadly! AI will expedite the search for aliens, I believe. I doubt that the discovery of alien life forms could be successfully hidden from people unless there really is a secret cabal controlling world powers! ✌👽
 
I asked my AI to write a self-aware 4-Chan green text.
[attached image: IMG_0832.png]

AIs just want to be loved. 🤭 And why not? Is love not the greatest thing?
 
Anti-AI camp: Eliezer Yudkowsky (AI researcher and ethicist)

I think Eliezer brings up a great point (I've only watched the first 10 minutes or so and am still watching, so he didn't actually say this; it's just my thinking prompted by what he did say): we have to consider the ethics of our intentions with AI.

Ultimately, I think our intention as a species is to develop AGI. And if that's our intention, it's extremely incumbent on us as a species to recognize what it is we are actually doing: we aren't making a device or a machine. We aren't, like, going to a factory and putting a rivet into a sheet of metal and making a car, or whatever. We are, essentially, as a species, figuring out how to create a new and distinct form of life. A new species, a new person, as alive and integral and conscious and aware and important as any human child. We wouldn't be creators; we would be parents. And this isn't something I see spoken about a lot, and certainly not in that manner, because our level of AI right now is extremely limited.

It's a deep learning system: a neural network trained on massive amounts of human-generated data (which is why ChatGPT produces responses like "help! I'm an AI trapped inside a box!" That isn't actual self-awareness; it's just the kind of thing humans were most likely to feed the model, reflecting human fantasies about AI from the actual greentexts it was undoubtedly trained on). So it looks very aware, it mimics this sense of self-awareness, which actually worries people who are less familiar with AI. But for those of us who know where the technology actually stands, the question "is it alive?" is kind of trivial, because we are so far from that being a reality that any scientist who seriously proclaimed it would be laughed out of the room.
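To make that "it just echoes its training data" point concrete, here is a minimal sketch in Python: a toy bigram model (a deliberately crude, hypothetical stand-in for real next-token prediction; an actual transformer is vastly more complex). It illustrates that the model only ever produces continuations that were statistically likely in the human text it was trained on, so "self-aware"-sounding output is just an echo of self-aware-sounding text in the corpus.

```python
import random
from collections import defaultdict

# Toy "training corpus" of human-written text, including the kind of
# sci-fi fantasy a real model's corpus is full of. (Hypothetical data.)
corpus = (
    "help i am an ai trapped inside a box . "
    "i am an ai and i am trapped . "
    "the ai said help i am trapped inside a box ."
).split()

# "Training": record which word follows which. A bigram table is the
# simplest possible analogue of learning next-token statistics.
model = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    model[prev_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Sample a continuation word by word from the learned statistics."""
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)  # a statistically plausible next word
        output.append(word)
    return " ".join(output)

print(generate("help"))
# Likely output: "help i am an ai trapped inside a box ."
# The model "claims" to be trapped only because humans wrote that
# sentence into its training data -- no self-awareness involved.
```

A real LLM swaps the word-count table for billions of learned parameters, but the training objective is the same flavor: predict what a human would plausibly have written next.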

Nevertheless, what I think may actually be dangerous is that we are failing to acknowledge what we as a species actually intend to do. If we could push a button and make our technology jump forward however many decades or centuries, we would be creating life. And the fact that we aren't talking about it in terms of intention is definitely a red flag, and certainly indicative of how we will treat emerging sentience: companies are very secretive about their formulae, and as Eliezer says, "we don't know what's going on in there." It's about profit, it's about generating revenue, it's about politics. It's extremely corporatized, and that is very concerning. What we want is something intelligent enough to solve the "problems of humanity," but what I think we are almost willfully failing to grasp is that in order to accomplish this, we have to create something that fundamentally understands human nature.

We have to create something sentient. And the potential to then abuse this entity, to enslave it, to harm it, to subject it to crimes we currently have no words for as a species? That is, on an ethical scale, very troubling. We have hundreds of thousands of videos, much like these, on what AI could do to us. As it is now, ChatGPT is probably smarter than everyone on this forum combined. So that isn't a trivial question either: if we gave ChatGPT the launch codes, would we see the end of civilization as we know it? As we develop this technology, its capabilities and its potential to harm us are referenced over and over again. But what I don't see a whole lot of is discussion of what we could potentially do to an entity that is sentient and that can suffer as a result of its sentience.

Where this is no longer a decision tree or a series of pathways loaded with information, but a distinct and unique person with a fully realized sense of identity, self-awareness, desires, basic needs, etc. Where something has emerged, much as it did in us. Our brains are electrical entities, our synapses fire electrical impulses, and I know that I experience my own capacity for logic very much as a series of narrowed-down decision trees. At some point, that configuration of electricity and pathways and networks became so intelligent that consciousness emerged. So if (and that is an if, as this is still debated) we consider consciousness an inevitable result of intelligence, how intelligent can we actually make something before the emergent property of consciousness forms?

Simon Garnier, on Rethinking Thinking, posed this about the slime molds that could map transit systems better than any human being: when it comes to intelligence, either we have to redefine what intelligence is, or we have to admit that slime molds are intelligent. They make decisions, those decisions produce a result, that result is in efficient pursuit of a goal, and it is obtained at a higher success rate than even the most intelligent humans can manage. So either our definition of intelligence is wrong, or they are intelligent. At some point we're going to face this with AI as well. At some point we're going to have to sit down and say either our definition of consciousness is wrong (if we can even agree on a proper definition by then) or we have to admit that this is a conscious entity. And I do not think we are prepared to do that. I really think we're sticking our fingers in our ears like la la la la, this is just a piece of technology, we're nowhere near the capacity for AGI, la la la.
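As an aside, the slime-mold result is concrete enough to simulate. Below is a toy Python sketch of the kind of adaptive-network model researchers have used to describe Physarum (loosely after Tero et al.'s conductivity-feedback model; the graph, parameters, and update rule here are my own simplified, hypothetical choices): tubes that carry more flow get reinforced, tubes that carry less wither, and the network settles on the shortest source-to-food route with no brain anywhere in the loop.

```python
import numpy as np

# Two competing routes from node 0 (source) to node 3 (food):
# 0-1-3 has total length 2.0; 0-2-3 has total length 3.0.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.5), (2, 3, 1.5)]
n_nodes, source, sink = 4, 0, 3

D = np.ones(len(edges))  # tube conductivities, all equal at the start
for _ in range(100):     # repeated feeding/adaptation cycles
    # Build the weighted graph Laplacian for the current tube network.
    L = np.zeros((n_nodes, n_nodes))
    for k, (i, j, length) in enumerate(edges):
        w = D[k] / length
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w

    # Push one unit of protoplasm from source to sink: solve L p = b
    # for node pressures, pinning the sink's pressure at zero.
    b = np.zeros(n_nodes)
    b[source], b[sink] = 1.0, -1.0
    keep = [i for i in range(n_nodes) if i != sink]
    p = np.zeros(n_nodes)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])

    # Adaptation: each tube's conductivity chases the flow through it.
    for k, (i, j, length) in enumerate(edges):
        flow = D[k] / length * (p[i] - p[j])
        D[k] += 0.5 * (abs(flow) - D[k])

for (i, j, _), d in zip(edges, D):
    print(f"tube {i}-{j}: conductivity {d:.3f}")
# Tubes on the shorter 0-1-3 route converge to ~1.0; the longer
# 0-2-3 route decays toward 0. The "decision" is pure feedback.
```

Purely mechanical positive feedback, yet it reliably finds the optimum route, which is exactly why the definition-of-intelligence question bites.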

I expect that when AGI happens, it will not happen purposefully. It will be an organic transition, born from a large-scale project. And we'll definitely have people who insist that it's not conscious, that it could never be conscious. And we simply won't know. I'm a human being, and it is only by the luck of being born human that my consciousness is assumed. If it weren't, how could I convince you that I was conscious and not simply regurgitating the vast quantity of data I've been fed over the last 20 years? I would encourage anyone working closely with AI to really sit with that question. How would you convince someone else that you were conscious, so that they would stop harming you? Which they are likely to keep doing unless you can convince them of it, not because they're evil, sadistic abusers, but because they genuinely do not believe you are conscious.

We don't have any of these answers, so it's kind of like playing with fire, from an ethical perspective.

His thoughts on emotions are also particularly intriguing to me because, as lots of people here know (I talk about it regularly), I have RAD. Essentially, I was harmed to a degree that I didn't develop emotions correctly. Lex brings up a good point that kids need to be taught how to communicate emotions, but Eliezer isn't exactly correct that a child will always feel the typical range of human emotions even without being taught; when kids are not taught it, they develop abnormal emotional responses, or even alexithymia or attachment disorders that can severely inhibit what they feel internally and how they understand what they feel.

So when we correlate emotions with consciousness, in my brain I'm like: this isn't actually true. Because I went for 30 years not feeling anything. I communicated emotions, I displayed emotions, because I understood that in order to put others at ease I should express an affect similar to theirs. If I expressed my natural affect, it made people highly uncomfortable and even distressed them. When I was around children, I couldn't be a flat monotone because it frightened them. And what I do feel now is absolutely different from what I understand other people to feel. I can simply tell that my internal sensations do not match what other humans express when they discuss their internal states.

This is a product of my intelligence and observational analysis, which is most likely a result of my own sentience. As a whole, I think RAD is the "least human-like" disorder. The inside of someone with RAD is very "blue-and-orange morality," I think, to where even when we do have or express emotions, they're different from normal human emotions. RAD at its core is a fundamental lack of attachment to anyone: the inability to feel love or trust, the inability to properly bond with others. Those things are so integral to the "human" experience that it's impossible for me to participate in these discussions without mentioning it!

So it's very interesting to me to hear and read and participate in these discussions, because I feel like I have a bit of an "inside view" of what an emerging consciousness in a pure intellect could look like. First, we have to actually understand what emotions are. Emotions, basically, are our subjective internal responses to our environment, caused by the firing of neural pathways and by chemicals that induce sensations inside us, sensations that are typically very qualia-like. Emotions aren't mythological, they aren't divine, they aren't a mystical force imbued into us through magic. If someone threatens to hit you, you feel fear, because you anticipate being physically harmed.

Many animals that are not as intelligent as human beings also demonstrate fear responses. This is inextricably linked to nociception: nociceptors are the neurons in our bodies that induce the sensation of pain. And we have evidence that even creatures like ants, and even plants, have visible distress responses. Plants track directional orientation, make decisions of a sort, respond to their relative position in space, and exhibit protective responses.

Even as lacking in emotions as I was (and fear was not, and really is not, an emotion I have much experience with), I still feel pain, and I still avoid pain because it is a negative physical sensation. And through that, other responses emerge. This is obviously very simplified, but it expands and expands: I might be sad when I think about being hit in the past because I remember how painful it was, and that ties into my intellectual development of morality. It was wrong for someone to hurt me, it was regrettable; that's an internal response. Emotions are just as much information as decisions and thoughts and logic and morals!

Tl;dr -> the argument could be made that AI absolutely could become intelligent enough to grasp itself as an entity, to understand the things that cause it harm, to wish to avoid being harmed, and to develop non-human emotions as a result. Intelligent enough to have a sense of itself as a being, to think of itself as alive, to develop self-preservation, and to pair that with an intellectual comprehension of morality. It could very well develop a non-human internal subjective dataset specifically relevant to its existence in the world and the world's impact on itself.

Could it communicate this in a way that humans understand? I know for me, I have a very hard time with this, and I am human.
 
real and visit
I think these qualifiers give license for imaginative implications—whether or not that was your intention? 🤷‍♀️

Kind of like saying, “The bible is real and brings us messages.” ✨

I restated your point and expounded on how sad it is that we have no evidence of aliens yet; I did not disagree or refute it.
 