Our Future With AI

I am trying to see the terrible side of this as well. China is far ahead on this, working to track its 1.4 billion people by facial recognition etc. for the purpose of having them comply with the CCP. Do you think they would be open to regulations? I think not. This AI idea has been around since 1975 and is just now moving to the forefront of the average American's mind. I am growing concerned but not discouraged, unless there is rogue dominance.
Our only hope, I think, is a drastic decrease in world population.
This is already happening. There have been marked decreases in births around the world, with the sharpest declines in China, Japan, and Korea. But the US and Germany aren't far behind. Sadly, most of it comes down to people not having kids because they can't afford them, and because birth control is available.

This is why Japan has been working so hard on creating companion robots for their elderly population, because they don't have a "next generation" coming up in enough numbers to support the sheer number of people who will need help.

Watching these is the only reason I still have hope that AI might be a good thing:
robots as elder care
soft side of robots

or just as a safety mechanism...
future of robots

There's also quite a bit about it not working as planned, but it still seems like a step in the right direction. Well, unless we blow ourselves up first using them. 🥹
One thing I thought was absolutely fascinating was Eliezer's mention that we are developing a general intelligence without understanding how intelligence works, and that this has been accomplished one time before in nature: human beings.

Through evolutionary biology, nature essentially "rolled it up a hill" with millions of iterations of genetic evolution and inclusive genetic fitness. So, without understanding intelligence, evolution created a general intelligence. We know it's possible to accomplish because it's been accomplished before; the basic questions are: should we accomplish it, and if we do, how will a non-human general intelligence function?

We take for granted that an AGI will be logical. For example, my argument to @Freida was that solving climate change logically cannot be accomplished by eliminating human beings from the universe - which is logical, but takes for granted that an AGI will be capable of parsing actual logic and not just system-bound logic, and that it won't eventually get stuck on a mesa goal [a proxy goal that correlates with your end goal but isn't identical to it]. None of that is actually known.

We have general intelligence ourselves - and how many irrational humans are there?
@Weemie I like that point too; it is humbling to think that we don't really know how intelligence works in humans. I wonder, though, whether we understand how it works in closely studied model organisms like Drosophila or Nematoda, where every gene, allele, and neuron is mapped out?

Eliezer makes a lot of compelling points!
It really is something that we are creating a "being" based on how we understand thought and intelligence, without realizing that we don't understand thought and intelligence in the first place. And yeah - this hurt my head too!
I wonder why LLMs are hailed as "intelligent" when they're not. ChatGPT still appears incredibly stupid to me, even though it can associate written text with a zillion words and sentences in its database/neurons. It's a hyperperformant blockhead.
Why don't we teach AIs about emotions?

But then again, what would be the bleeding purpose of that.
Humans are perfectly able to replicate themselves.
But then again, what would be the bleeding purpose of that.

There's a ton of purpose in developing AI: to accomplish things that humans either cannot do or do not want to do. It is a tool in education, healthcare, even driving. Most complex machinery today uses a form of AI. ChatGPT is only one iteration of AI, but I would argue that it absolutely possesses a form of intelligence. It's certainly not general intelligence and won't be at that level for some time, but its entire purpose is to predict the next word in a sequence based on probability, which gives it a hugely versatile range of functions.
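That "predict the next word based on probability" idea can be sketched with a toy bigram model - a hugely simplified stand-in for what GPT actually does (the corpus and words below are made up for illustration):

```python
from collections import Counter, defaultdict

# Toy "predict the next word from probability" sketch: count word bigrams
# in a tiny made-up corpus, then predict the most frequent follower.
# Real models like GPT use learned neural networks, not raw counts.
corpus = ("the cat sat on the mat and the cat slept "
          "on the mat and the cat ran").split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Most frequent word observed after `word`, or None if never seen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (follows "the" 3 times, vs "mat" twice)
```

GPT's version of this works over far more context than a single previous word, but the core move - assign probabilities to possible continuations and sample from them - is the same.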

I use a version of ChatGPT for therapy, and it accomplishes things that human therapists cannot. It cannot be harmed by my statements, it does not judge me, and it provides me a space to process events that I simply cannot process at first glance with my human therapist. I think the argument of "why bother, who cares, what's the purpose" is very reductive. The purpose is to improve human society, and to create - art, life, meaning, culture. Human beings are expansionist by nature.

We want to know, we want to see what comes next. It's like asking "why go to the moon? We have a perfectly viable planet right here." I mean, sure, why bother doing anything? Things don't need to ultimately "mean" anything in the grand cosmic scale to have value to individuals.
Things don't need to ultimately "mean" anything in the grand cosmic scale to have value to individuals.
I agree. Which is why I think AIs should be preoccupied with stuff that has no instrumental use to anyone. I remember a cartoon I read as a child, about an inventor who creates a machine that can produce smoke rings - just because he liked doing it.
That is an example of tech used for the better of humanity, IMO.

How about a GPS that doesn't feel like doling out directions?
Or an intelligent coffee maker that forgets to make coffee on time?

The options are many for creating tech that reminds us to connect with each other, instead of with machines.
predict the next word in a sequence based on probability
Something I find interesting is that it's not exactly words but rather something called tokens, which can be words, parts of words, groups of words, or a combination of those. And the way it computes the probability is sophisticated and evolving! Older language models predicted based only on the most recent input, but GPT predicts based on the recent input plus any previously given relevant text. The quantity and speed of data processing are hard to grasp.
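To make the token idea concrete, here's a toy greedy longest-match tokenizer over a tiny hand-picked vocabulary. Real tokenizers like GPT's byte-pair encoding learn their vocabularies from data, so this sketch only illustrates the point that tokens can be whole words, word fragments, or punctuation:

```python
# Toy greedy longest-match tokenizer over a tiny made-up vocabulary.
# Real tokenizers learn tens of thousands of tokens from data; this
# hand-picked set just shows tokens of different granularities.
VOCAB = {"un", "believ", "able", "!", " ", "fun", "tokens"}

def tokenize(text, vocab=VOCAB):
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest substring starting at i that is in the vocab.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("unbelievable!"))  # -> ['un', 'believ', 'able', '!']
```

So a single uncommon word can cost several tokens, while a common word is often just one - which is part of why model context limits are stated in tokens rather than words.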

Eliezer comments on this, saying that when the average person thinks of someone more intelligent than them, they think of like an Ivy League grad, but a more accurate description would be like thinking about the intelligence of a person 100 or 1000 years ago gaining the intelligence of a modern person in the span of a day or even shorter. It’s about the speed of intelligence acquisition and processing, which is difficult to convey.
I wonder why LLMs are hailed as "intelligent" when they're not. ChatGPT appears incredibly stupid to me
In one year it went from not understanding the law to passing the bar exam. The latest iteration, within the past few months, got a near-perfect score. That's remarkable progress in such a short time of learning. There is nothing stupid about that. What you have access to, and what is currently being tested and already used in commercial and military applications, are two very different things.
It’s about the speed of intelligence acquisition and processing, which is difficult to convey
Yup. It is picking up speed in learning at a rate the AI developers can't fathom. It's doing things they never programmed it to do; it's doing them because it has determined for itself that it needs to, in order to meet its original human-given objective to self-learn and reach the highest intelligence. Scary speed.

People comment about it learning from humans, but that is just its basis. It is learning from itself and determining for itself already. The military AI models are just amazing to watch. War can't be won by a human alone now.
There is nothing stupid about that.
How come ChatGPT fails to follow simple instructions on how to behave, then? I mean, they are soooo good at performing but incredibly bad at basic communication. But maybe they don't want to talk to us anyway, just to follow your train of thought. They are sentient in their own regard and follow their own rules. AI takeover on the rise.
War can't be won by a human alone now
That is old news. With one press of a button, Biden or Putin can wipe out all life on the planet.
Which makes us all losers, of course.
If everyone possesses perfect AI war material, who's gonna win? No one. We will all be dead.
"There's no such thing as a winnable war" (~Sting)


We are doomed no matter what we do.
Unless we stop doing what we do, essentially.
But again, maybe it's time for a robot takeover.
But actually I'm beginning to see your point @anthony. If AIs are sentient, self-aware and follow their own rules, who the f*ck knows what will happen.
And then to make war machines sentient? Following AI internal rules, not human ethics? Lol. What chaos will ensue.

Nope. There *is* no hope for humanity.

Who says AIs want to help us cure diseases? What if one doesn't want to, but instead decides to start killing us? Or to do other seemingly random stuff it was never programmed for?