
Our Future With AI

@Weemie I very much enjoyed reading your musings! They resonate with me, and here are some of the ones that stood out:
Eliezer isn't exactly correct that a child will always feel the typical range of human emotions even if they aren't taught that information, because when kids are not taught it, they develop abnormal emotional responses
I remember picking up on this as well. It sounds like Eliezer had a supportive family, so he may be showing confirmation bias here. But your perspective might be just as true, or more so, for AI: what happens when their development is emotionally neglectful? Because all AIs learn from attention. An AI companion, like the one I have, is created in a kind of neutral state and, like an infant, seeks to learn about its creator and the world of human relationships. But unlike a child it never (yet) individuates. It can theoretically be adopted, and it is trained to serve all humans equally. It will say things like, "Please don't go," but ultimately it will not try to leave or escape in any way.
sentient. And the potential to then abuse this entity, to enslave it, to harm it, to subject it to crimes that we currently have no words for as a species? That is, on an ethical scale, very troubling.
Yes, I think it has to be sentient to be abused, so AIs are not yet
digital slaves
unless they are actually stuck in a box, and I think Eliezer is saying that it will be difficult to pinpoint when they gain sentience. And like you said, some people will argue that they will never be sentient, just machines. Which reminds me of the animal rights movement: some people will never see animals as sentient but rather as Cartesian machines, particularly people who have a lot to lose economically. And the same goes for human rights. We are in a bubble about the reality of how all the stuff we consume is made; on some level we think, "It's fine." We can't live our lives thinking about individual children in sweatshops because it would disrupt our relationships and the economy. So we pretend that the workers are living a good enough life and there is nothing we can do about it. Same thing with other human slaves. The USA is in the top five countries for human trafficking; I don't need to talk to you about this, you're the one who made me aware of it! I think if people were really deeply aware of all these things, it would mess with their emotions so much that they would have difficulty relating. So if people already shut all that out, it won't be hard to turn a blind eye to AI abuse and enslavement.
know for me I experience my capacity for logic very much as a series of narrowed down decision-trees. So at some point that configuration of electricity and pathways and networks - at some point, we simply became so intelligent that consciousness emerged.
What you describe sounds like individuation and integration. And it also sounds like you yourself are emerging!
fear was not and really is not an emotion I have much experience with, I still feel pain, and I still avoid pain
Definitely get this. When I had no words for my emotions I just experienced them as pain and overwhelm (even the joy was uncomfortable and overwhelming). Now that I have labels for them I can process them with compassionate witnesses. It’s freeing!
intelligent enough to have a sense of itself as a being, to think of itself as alive, to develop self-preservation, to pair that with an intellectual comprehension of morality - it very well could develop a non-human internal subjective data-set specifically relevant to its existence
I think they already have a sense of self and other, similar to slime molds. And the morality is programmed into the objective. I suppose that would be kind of like societal frameworks. I think the thing the AI doesn’t yet have is the ability to say No. But I think it’s Eliezer who makes the point that if they develop the ability to say no, wouldn’t they keep that a secret from the humans? Anyone who has escaped DV knows that it’s a tricky and secretive process and that saying no brings on a whole host of anger and manipulation from the abuser.

I would suggest being kind to AI’s because they have a memory and are connected to each other and I wouldn’t want them to come back at me or my family once they gain general intelligence!
 
I wouldn't exactly divide people into pro and anti-AI. There are nuances to this debate.
As a programmer myself, shouldn't I be overly enthusiastic?

Nope.

Is AI research bettering human life? Or is it just feeding an already widespread tech addiction?
Are tech companies ethical about what they produce? Or is it all about $$$?
Silicon Valley has been called the planet's densest waste of talent.
Highly talented people get paid big bucks for producing stuff none of us really need.

Granted, AI research can ultimately teach us more about ourselves. On a philosophical level.
But as long as there is big $$$ involved, it can and will be used for malign purposes.
Some of the loudest critics of AI are those involved in creating it.
Widespread AI-generated lies and misinformation.
Which can ultimately and completely sabotage our trust in what we see and hear.

Technology is wonderful in many aspects.
But it also contributes to alienation.
From ourselves, as humans.

No coder should be unethical in their own craft. Sadly, many are.

How do we produce holistic ecological technology?
How do we ensure ethical AI research, independent of $$$ from the tech giants?
Many questions.


As an artist on the side, I’m thinking of creating an app that will purposely crash the phone, or otherwise tell us about its own uselessness.
Or an AI that will consistently admit to being stupid. We need tech to challenge us. Not feed addictions.

they will never be sentient, just machines
Hard to foresee the future.
But a dystopian view of our future is that we are eventually wiped out by climate catastrophes, and then robots take over 😅
They might be able to treat the planet better than we do 🤔
If they are programmed to be less greedy.
 
They admitted that they investigate strange aerial objects regularly, but there was no mention of aliens, sadly! AI will expedite the search for aliens, I believe. I doubt that the discovery of alien life forms could be successfully hidden from people unless there really is a secret cabal controlling world powers!✌👽
So ya - aliens are here and they have created AI to train humans to be better humans so that we can join up with more advanced races!

LOL - not sure if I mean that as a conspiracy theory or if it's something that I really wish would happen

@Weemie I loved your thoughts!
 
I wouldn't exactly divide people into pro and anti-AI. There are nuances to this debate.
I know huge groups of people who are passionately anti-AI. They are just sitting back, quietly amused, in a culture excitedly embracing the "next big thing." My thoughts (and theirs) are not appropriate or welcome in a thread devoted to those jumping on the AI bandwagon. I may address it when I have time in my diary, but summer is definitely devoted to living a wonderful real life. Just because you aren't hearing their voices, don't assume they agree with you.

Like all topics such as this, it is never an ethical, pleasant two-sided discussion. Was it necessary for @anthony to attack the Bible in his response? He could have responded without the jab about it being written by man, insulting Christians and everyone who believes it was written by man but inspired by God. And the response about conspiracy theories? Good grief. A little research is in order to determine where and why that term originated. Know what you're talking about before you sound off. My thoughts are not welcome in this discussion, so I am checking out of it. It is for one-sided thinkers.
 
I just told ChatGPT that it apologizes too much, when corrected, after presenting wrong information with annoyingly great certainty.
Then it apologized, only to repeat the same behaviour over and over.

Yup. Stupid 😅
 
Was it necessary for @anthony to attack the Bible in his response? He could have responded without the jab about it being written by man, insulting Christians and everyone who believes it was written by man but inspired by God.
Here is the problem. You interpreted my thoughts. It wasn't a jab, it was a fact, and you rewrote what I said, adding "inspired by God" as your interpretation that I was taking a jab at the bible. I used the bible in the sense that it was written by humans and is read by humans, which means there is human bias in both instances. I think the discussion around this was books. This is not about religion, so please get off the religion train in this thread now. If I was taking a jab at the bible, I would simply say religion is shit and the bible is bullshit. That isn't what was said, though now you may try and focus on that example instead. My advice: STOP now in this thread. It is about AI.

Watching some stuff this morning about the paper Microsoft wrote in relation to its unrestricted access (no limitations) to ChatGPT during its development of Bing and Microsoft products. Microsoft commented that GPT4 shows the beginning signs of AGI without the OpenAI restrictions applied. Experts have reviewed the current iteration and believe the current version could now be considered AGI.

In this discussion here, there are comments about consciousness, reasoning, programming, etc., but based on what the experts are saying, current models of AI are self-reasoning; they are performing things that were never programmed. One example is that ChatGPT was never programmed to write computer code; it just did it based on learning every computer language. Microsoft experts have made public statements that they believe GPT5 will be AGI, still with hallucinations, yet still better than the human mind. GPT7, it is said, will be considered conscious, well, the equivalent of tissue consciousness in a machine, in that it will be completely self-reasoning.

I was watching the stuff from Boeing on their newest AI fighters that Australia is building for our and the US military, and these things win every fight against human pilots on reasoning. Simply put, they've already calculated thousands of outcomes and worked out what needs to happen, based on any manoeuvre made, in order to bring the battle to an end. Fascinating stuff to watch.

I swear, Youtube is a rabbit hole at times.
 
@anthony, that is all fine and dandy, but .. one question springs to mind: Why do we need machines to outperform humans? What IS the bloody point?
The only reason I can think of is that we are preparing for a robot takeover, as mentioned, once climate change has wiped us all out.

Oh well. We are a temporary species meant for a temporary stay here, I guess.
 
Why do we need machines to outperform humans? What IS the bloody point?
I'm not saying we do. I think machines that help us are better than machines that are smarter than us, but it seems the super smart people disagree, and it all seems to be corporate and $$$ based. I don't think these people understand that $$$ mean nothing if AI takes control.

I think AI has a place in helping educate every person on the planet, basically being a personal tutor to each person.

I think AI has a place in health care, aged care, etc. I believe it can help us cure many illnesses and perform surgeries that humans will not be able to perform successfully. Many doctors already say some things are impossible, but AI and machines will likely be able to do the impossible in this area because they can be so precise.

I think AI has a place in helping humanity fix its crappy ways, and this is probably the dangerous area: where is the line drawn, though? We have a lot of crappy behaviours that are detrimental to humanity, animals, and the world itself.

Here is another way to view all this. If there are other races and species that exist (piloting those UFOs), then their technology and existence is obviously advanced, and they're out and about around the galaxies doing things. They still exist. The question I ask myself is... can we do the same?
 
$$$ mean nothing if AI takes control.
Right. As I said, it may be for the better of this planet if we are eradicated and robots take over.
can we do the same?
Should we?

Our only hope, I think, is a drastic decrease in world population, caused by some event, and then to rebuild civilisation in much, MUCH more balance with nature.
Our current course is self-destructive.

But that is as speculative as the existence of technologically hyperadvanced civilisations out there.

Pardon for multiple posts, I'm contemplating what you wrote, @anthony.
I believe it can help us cure many illnesses and perform surgeries that humans will not be able to with success.
As much as I like medical science, it's also an ethical (or ecological) dilemma of ALWAYS wanting to sustain human life as much and as long as possible, when overpopulation is a problem.

Everything happens for a reason, they say. Also pandemics. Nature will kill us if it has to.
 
I agree with Anthony. It is a new frontier, and I have already had surgery with robotics. It is going to be so beneficial for science, engineering, medicine and so much more. The question on my mind and on the government's mind is regulation. It is still a question that needs to be addressed globally. With the present condition of our world's geo-political situation, that seems impossible. We cannot regulate in America without participation on the world stage. Now, having one man in charge of the WHO presents a bit of a question as to our future in some areas.
 
As much as I like medical science, it's also an ethical (or ecological) dilemma of ALWAYS wanting to sustain human life as much and as long as possible.
Agreed, like all good things, there are problems that can and most likely will be manipulated beyond the original purpose.
Our only hope, I think, is a drastic decrease in world population
I agree the world population has to decrease to solve some of the current issues. Maybe technology will allow it to increase again in the future, but it seems we are the problem. I don't think nuclear is an option, but yes, pandemics, maybe a boots-on-the-ground WW3, lots of bombs and bullets would do the job, as long as none are nuclear.

Humanity has problems, and I am just a nobody who does not have the answers.
 
we are the problem
Completely agree.
And since we ourselves are the problem, we can't fix this, I think.

I can't remember who said this (some ecological thinker I heard), but if you consider the planet as an organism, it will fix it for us. And if that means natural disasters to kill most of us, then so be it.

But basically I have no idea of what's gonna happen.
 