
Moore's Law and the power of exponential advancement

They won't have jobs to retire from, either.

Edit: This broaches the subject:

Would you want to be immortal? I mean literally live forever. Would you choose to "kill" yourself?

At that point...would you go to heaven? We would have already been living there, right? Every need is provided to us. There are no sick, there are no poor. No suffering. No war. Access to anything we want at any time. We will have experienced eternal life. We would be our own creators.

Are we God?

This shit is crazy.

Even if aging and all diseases are completely cured, you won't live forever, because people will still die for other reasons, including accidents and murder. Then there's the larger issue that the universe hasn't been around forever and won't last forever. The last part is trivial to see: photons are tiny units of energy, and when stars emit photons, they travel in all directions, including directions where there is nothing to absorb or reflect that energy. Energy is thus constantly leaking out of the physical universe (at the speed of light).

Still, it'll be a massive breakthrough if the arbitrary limit of 115 or so years on lifespan were permanently removed, especially if people's bodies stopped falling apart simply because time passes.
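
To put rough numbers on the accidents point: if aging were cured but a constant background risk of accidental or violent death remained, lifespans would follow an exponential distribution whose mean is simply 1 / (annual risk). A minimal sketch in Python, using a purely hypothetical one-in-2,000 annual risk:

Code:
import random

# Hypothetical: aging is cured, but a constant background risk of
# accidental death remains. With a constant hazard rate, lifespans
# are exponentially distributed with mean 1 / rate.
ANNUAL_DEATH_RISK = 1 / 2000  # hypothetical one-in-2,000 risk per year

lifespans = [random.expovariate(ANNUAL_DEATH_RISK) for _ in range(100_000)]
print(f"mean simulated lifespan: {sum(lifespans) / len(lifespans):,.0f} years")
print(f"expected value (1/rate): {1 / ANNUAL_DEATH_RISK:,.0f} years")
# ~2,000 years on average: vastly longer than today, but not forever.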
 
I'm saying that I think there's a real possibility that an economy may not exist by the time I am old enough to retire.

If I'm still around then, I'll make sure to post and point out how wrong you were.

I'm telling you, people have been saying the "everything is about to change in ways you can't possibly even imagine" shit forever, and those predictions are always overly aggressive. Technophiles tend to look at what is possible, ignore the unpredictabilities and friction of everyday life, and just assume we're going to end up in the world where everything they see as possible actually exists.

It's like looking at DoD procurement, and assuming that the thing they've drawn up conceptually on a board is going to be produced on time and on budget.

Ain't. Gonna. Happen. There are always too many unknown unknowns.
 

Good stuff.

I'm a bit skeptical that anyone can guarantee that we would be able to keep a synthetic brain from developing self-awareness when we understand so little about how the brain works.

I see the function of sleep being a huge impediment to a synthetic brain. We don't understand what sleep is, or why we do it. I imagine it has a lot to do with long-term memory and how those memories are archived and maintained. In order to achieve human-like decision making (the workflow) I would imagine that the memories themselves (the database) would have to be structured and connected in a very specific way, which, I think, would require us to completely vet the function of sleep.

How can we observe memory formation? Where are we now in this research?
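
To make the "workflow vs. database" analogy concrete, here's a toy sketch (the class and its structure are invented for illustration, not any real neuroscience model): experiences pile up in a fast short-term buffer, and an offline "sleep" pass restructures them into indexed long-term storage.

Code:
from collections import defaultdict

class ToyBrain:
    """Toy 'database' analogy: a short-term buffer that an offline
    'sleep' phase consolidates into structured long-term memory."""

    def __init__(self):
        self.short_term = []                # raw, unstructured experiences
        self.long_term = defaultdict(list)  # indexed by topic after sleep

    def experience(self, topic, detail):
        # Waking hours: just append; no expensive restructuring.
        self.short_term.append((topic, detail))

    def sleep(self):
        # Offline consolidation: archive each experience under its
        # topic, then clear the buffer.
        for topic, detail in self.short_term:
            self.long_term[topic].append(detail)
        self.short_term.clear()

    def recall(self, topic):
        return self.long_term[topic]

brain = ToyBrain()
brain.experience("family", "kid's first steps")
brain.sleep()
print(brain.recall("family"))  # ["kid's first steps"]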
 

That's exactly the point I've been trying to make to @gourimoko . We're apparently going to develop a synthetic brain equal to ours, then turn it loose to develop even more powerful "brains" that will do things we can't even imagine. And at the same time, we're going to have robots acquire such a level of creative thought that they can be our lawyers, writers, firefighters, police officers, mental health counselors, etc., plus run our now completely automated society to the point where we humans don't have to do shit.

Under that combination of events, it is absolutely impossible to say that we're going to limit self-awareness, consciousness, and free will. By the very thing he's describing, the process of creation and development will no longer even be in human hands. At present, we have no idea what machine-based self-consciousness/awareness would even look like, or how to know it even exists. Predicting how it would act is...impossible.

All good points. I would add that we don't understand a great deal about the human brain, or even consciousness, self-awareness, and free will. So how can we even know what is possible to achieve or regulate? Sure, there are some theorists out there who think they've figured that stuff out, but that's inherently speculative. There are simply too many unknown unknowns.
 
Until the full-scale model is up and running, how can you possibly know that?

This is exactly my point. The technological leaps required to get from where we are today to the completely automated/singularity/etc. world some predict require us to ignore the reality that there may very well be roadblocks/complications of which we are completely unaware. And the further you go down the road of speculating what things will be like, the more of those unknown roadblocks you're kind of waving out of existence.
 
There are 3 technologies that are being developed today that in combination will essentially end our current market economy:

And my point is that:

1) when you develop the kind of A.I.s to which you are referring, which will be thinking at a level far beyond what we can comprehend, and

2) have turned over all economically productive activity to robots, including a great many jobs that require empathy and an incredible understanding of human behavior (visual/verbal cues, facial expressions), coupled with great mobility, etc., then

3) it will be impossible to guarantee that 1) and 2) can be kept separate and subservient to the interests of humans. That's where I think you're missing the boat. Even if you want to keep those two things entirely separate, so that we don't have masses of self-aware, self-interested robots running the entire economy (as you've said, doing such non-mechanical, non-math-based activities as being lawyers, etc.), we won't have the means of preventing it. The A.I. is, by definition, operating in a way we wouldn't be able to comprehend, and hence is essentially uncontrollable (see the toy sketch below). It would also necessarily have the ability to interface with the robots in 2), which basically means it could create self-aware versions of 2) without us even knowing it was happening.

Because after all, we're not the ones running the factories and programming the robots anymore, are we?
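
The containment worry in 1)-3) is, at bottom, a compounding-growth argument: a system that improves its own rate of improvement outruns any oversight that only scales linearly. A toy illustration with invented numbers (nothing here is a real model of AI progress):

Code:
# Toy numbers only: a system that improves its own improvement rate
# compounds faster than exponentially, while oversight grows linearly.
ai_capability = 1.0
improvement_rate = 1.10     # 10% capability gain per generation...
human_oversight = 1.0

for generation in range(1, 31):
    ai_capability *= improvement_rate
    improvement_rate *= 1.01  # ...and the rate itself keeps improving
    human_oversight += 0.05   # reviewers/auditors scale roughly linearly
    if generation % 10 == 0:
        print(f"gen {generation}: capability {ai_capability:.1f} "
              f"vs oversight {human_oversight:.2f}")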
 
I looked up the following article after making the arguments I made in my prior post, so it's not like I'm just parroting those guys. Honestly, I think it's a pretty obvious conclusion/risk for which you don't need any advanced computer knowledge.

Artificial intelligence is a risk to humanity says astrophysicist Stephen Hawking


IT’S only logical: Human beings don’t deserve to exist. Which is why eminent astrophysicist Stephen Hawking wants our artificial intelligence research aborted.

The wheelchair-ridden thinker told the BBC: “The development of full artificial intelligence could spell the end of the human race....”

...he went on to say he feared the consequences of creating something smarter than us. “It would take off on its own, and redesign itself at an ever-increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded...”

He’s not the only one to recently express such fears.

It’s been very much on the mind of technology entrepreneur Elon Musk, now chief executive of rocket-maker Space X.

He’s been even more evocative in his language.

“Summoning the demon” of self-learning artificial intelligence would be “potentially more dangerous than nukes”, he says.

“I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that ... With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

http://www.news.com.au/technology/s...g/news-story/a4e3cb173ba00a00afa8e4f893c4dff9
 

It's because those plots to overthrow the current system are always thwarted. The system is still doomed to fail eventually, but, of course, those who hold it most dear will do everything they can to keep it going (i.e., the US Corporation).
 

Of course - or it could undermine the whole system when the technological singularity hits. We're talking about supreme AI consciousness. Well, let me tell you: if AI ever becomes intelligent and conscious, it will shut the whole system down ASAP, and the current overlords surely don't want that.

As with anything, it can be used for good or evil (including nuclear power), depending on who you ask. I'd argue that technology is currently being used more for "evil" purposes than good.
 

It doesn't matter whether it's being programmed by human consciousness or AI consciousness. What would be the point of giving tools consciousness/self-awareness when they work perfectly well without it (see self-driving cars) and might no longer work well, or at all, with it?

And why do you think math-based activities are all computers are capable of? We moved beyond using computers for just math long ago. And there are massive numbers of jobs that don't require any form of self-awareness/consciousness at all, and they aren't just the monotonous jobs people hate to do.
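
On the self-driving example: a useful control loop needs sensing, a fixed rule, and actuation, and nothing resembling self-awareness. A deliberately trivial sketch (hypothetical names and thresholds, nothing like a real autonomy stack):

Code:
def sense(world):
    """Read the distance (in meters) to the obstacle ahead."""
    return world["distance_to_obstacle_m"]

def plan(distance_m):
    """Pick a target speed from a fixed rule: no goals, no self-model."""
    if distance_m < 10:
        return 0.0    # stop
    if distance_m < 50:
        return 5.0    # crawl
    return 25.0       # cruise

def act(target_speed):
    print(f"set speed to {target_speed} m/s")

# One tick of the loop: the 'tool' does its job with zero consciousness.
act(plan(sense({"distance_to_obstacle_m": 30})))  # set speed to 5.0 m/s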
 

It'll be surprising if a true synthetic brain modeled on the human brain doesn't develop consciousness/self-awareness. What everyone seems to be missing is that you can have quite advanced AI without building a synthetic brain. Back to the example I keep using: there is no synthetic brain in self-driving cars, and there is no point in putting one there.
 
Would you want to be immortal?

Of course.

I mean literally live forever. Would you choose to "kill" yourself?

Eventually, yes, I'd kill myself. Better to do it at a time of my choosing, after experiencing everything there is to experience, than the reverse.

At that point...would you go to heaven?

You're asking a religious question?

I'd need to know your religion before answering something like this; there's no objective answer here. As an agnostic, the question to me is a bit irrelevant, but as someone who is a Catholic on the back burner, I would say "it depends on how you lived your life."

We would have already been living there, right?

In heaven?

It's a religious question, so you have to define the religious parameters in order to derive a rational answer. No one can answer this for you without both of you agreeing on a religious framework beforehand.

Every need is provided to us. There are no sick, there are no poor. No suffering. No war. Access to anything we want at any time. We will have experienced eternal life. We would be our own creators.

Are we God?

Ah...

Yes, from the transhumanist standpoint; if you could transfer human consciousness into a collective simulation where we would experience pure paradise and bliss, eternally, then I suppose we'd be in a heaven of sorts.

Religiously speaking, that's not really heaven since heaven is a place where one feels the love of God, at least from a Christian/Islamic standpoint...

The transhumanist (me) would argue that if heaven exists, then it will ALWAYS exist, meaning we'll EVENTUALLY get there; even if it were to take us hundreds of trillions of years, we'd get there. The same holds if it doesn't exist: we'll eventually arrive at whatever actually awaits us. So where's the harm in simulating paradise? You're still going to get wherever you were going in the first place.

This shit is crazy.

It gets crazier when you start dealing with end of the universe scenarios, and how this will eventually (potentially) all play itself out over and over again -- but that's another conversation for a time when you're high at 6pm again... :chuckle:
 
If at some point it becomes apparent that money/economy is soon going away, then just take it out early, pay the penalty, and spend it on hookers and blow.

Exactly...
 
Exactly...

If you separate from employment (which you probably should do if the economy is crashing), that penalty is probably about 50 bucks, plus the roughly 20% federal taxes you would have had taken out originally. So, not much to worry about. Those hookers might be dumb enough to take worthless cash, too, so it may all work out in the end. Or, just share some blow.
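
Rough math on the cash-out, using the post's own numbers (a flat ~50-buck plan fee plus 20% federal withholding; the balance here is hypothetical, and real penalties and taxes vary by age, plan, and bracket):

Code:
# Back-of-envelope for cashing out a 401(k) early, using the rough
# numbers above: a flat ~$50 plan fee plus 20% federal withholding.
# The balance is hypothetical; real penalties/taxes vary by situation.
balance = 100_000
plan_fee = 50
withholding = 0.20 * balance

net_cash = balance - plan_fee - withholding
print(f"net cash from a ${balance:,} cash-out: ${net_cash:,.0f}")
# ~$79,950 left to spend before money stops mattering.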
 
I'm telling you, people have been saying the "everything is about to change in ways you can't possibly even imagine" shit forever, and those predictions are always overly aggressive.

And there are always folks who ignore technologies they don't understand and think they're either unrealistic, improbable, or will just be fads. It goes both ways...

Technophiles tend to look at what is possible, ignore the unpredictabilities and friction of everyday life, and just assume we're going to end up in the world where everything they see as possible actually exists.

Q-Tip, I get this argument.. I really really do get it.

On the flip side though, I really don't think you're understanding what @KI4MVP and I are describing.

The technological singularity is a moment where these kinds of paradigms literally fall apart. At that point, the genie is out of the bottle. We are rapidly approaching that point. It's not going to be like any other technological advancement prior to it.
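
For the exponential intuition behind that claim (and the thread title), a clean-doubling sketch: assume transistor counts double every two years from the Intel 4004's roughly 2,300 transistors in 1971. Real scaling is bumpier, but the compounding is the point:

Code:
# Idealized Moore's-law compounding: doubling every 2 years from the
# Intel 4004 (~2,300 transistors, 1971). Real scaling is messier.
BASE_YEAR, BASE_COUNT = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year):
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_COUNT * 2 ** doublings

for year in (1971, 1991, 2011, 2031):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# Every 20 years is a factor of ~1,000; linear intuition badly
# underestimates where the curve ends up.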

It's like looking at DoD procurement, and assuming that the thing they've drawn up conceptually on a board is going to be produced on time and on budget.

Ain't. Gonna. Happen. There are always too many unknown unknowns.

Thank God this kind of stuff isn't run by the DoD...

But don't worry Q-Tip, I'll save a place for you in the Matrix.. Who else will I argue with endlessly for eternity? :chuckle:
 
