
Moore's Law and the power of exponential advancement

If you separate from employment (which you probably should do if the economy is crashing), that penalty is probably about $50, and then roughly the 20% federal taxes you would have had taken out originally. So, not much to worry about.

$50 bucks is good for about 30mins with a 6 or a 7, spinner type, off Kuhio Ave in Waikiki; especially late at night... If you lay pipe right, you can prolly smash for as long as you want, these hoes aren't getting anything else jumping.. And if you're in a rush, you can get an aggressive blowjob at the local Korean bar.
 
I'm telling you, people have been saying the "everything is about to change in ways you can't possibly even imagine" shit forever, and those predictions are always overly aggressive. Technophiles tend to look at what is possible, ignore the unpredictabilities and friction of everyday life, and just assume we're going to end up in the world where everything they see as possible actually exists.

Just because some people are really, really bad at predicting the future of technology doesn't mean there aren't other people who are really, really good at it. The ones who are really, really good at it are the ones who build stuff that isn't even possible when they start their projects, because they understand technology will catch up to what is required. People like Ray Kurzweil, Alan Kay, etc.

These people don't just dream up random stuff; they look at known trends and think exponentially along those trends instead of linearly. Even before I used the computer in the first post, Alan Kay had designed what is essentially today's iPad, then spent the next decade designing the kind of UI and programming methodology that would be needed when such a device was feasible.

[Image: Dynabook.png -- Alan Kay's Dynabook concept]


And there is a direct line between his work and the actual iPad.

You don't even begin to do that thinking linearly about technology or just dreaming up random stuff (like flying cars).
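
Just to make the linear-vs-exponential difference concrete, here's a quick back-of-the-envelope sketch. The 2,300-transistor starting point is roughly the first microprocessor and the 2-year doubling period is the usual Moore's Law rule of thumb, so treat the numbers as illustrative, not exact:

Code:
# Rough sketch: linear vs. exponential (Moore's-Law-style) projection.
# Starting count and doubling period are rule-of-thumb figures, not exact data.
start = 2_300          # transistors, roughly the first microprocessor (early 1970s)
years = 40
added_per_year = 2_300 # linear thinking: add the same amount every year

linear = start + added_per_year * years
exponential = start * 2 ** (years / 2)  # exponential thinking: double every ~2 years

print(f"linear projection:      {linear:,.0f}")       # ~94 thousand
print(f"exponential projection: {exponential:,.0f}")  # ~2.4 billion

Same trend, wildly different destinations -- which is why linear intuition keeps underestimating where things end up.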
 
It'll be surprising if a true synthetic brain modeled on the human brain doesn't develop consciousness/self-awareness. What everyone seems to be missing is that you can have quite advanced AI without building a synthetic brain. Back to the example I keep using: there is no synthetic brain in self-driving cars, and there is no point in putting one there.

I agree with that. However, when you get beyond purely mechanical/physical tasks, it gets more complicated. Robot lawyers (cue jokes....), robot psychologists, robot nurses and medical professionals....all of those professions require (or at least benefit substantially from) a significant degree of empathy, the ability to read human emotions, facial expressions, and any number of other cues, and then to figure out the appropriate responses -- even for creative work like landscaping or interior decoration.

To perform those tasks well, to the point where humans would think the result is equal to what a highly competent living person could provide, is going to require something so close to a synthetic brain that it will be essentially the same thing.
 
Just because some people are really, really bad at predicting the future of technology doesn't mean there aren't other people who are really, really good at it. The ones who are really, really good at it are the ones who build stuff that isn't even possible when they start their projects, because they understand technology will catch up to what is required. People like Ray Kurzweil, Alan Kay, etc.

These people don't just dream up random stuff; they look at known trends and think exponentially along those trends instead of linearly. Even before I used the computer in the first post, Alan Kay had designed what is essentially today's iPad, then spent the next decade designing the kind of UI and programming methodology that would be needed when such a device was feasible.

[Image: Dynabook.png -- Alan Kay's Dynabook concept]


And there is a direct line between his work and the actual iPad.

You don't even begin to do that thinking linearly about technology or just dreaming up random stuff (like flying cars).

I don't think feasibility is the issue. It'll happen at some point.

Predicting the timeline when you're doing something no one has ever done before is the difficult part.

I don't know if the sleep issue will take 10 years or 80 years to solve, for example.
 
I'm a bit skeptical that anyone can guarantee that we would be able to keep a synthetic brain from developing self-awareness when we understand so little about how it works.

Couple of points, Damien:

1) I think you and Q-Tip are both a bit confused here. You would not be able to keep a synthetic brain from developing self-awareness if it were a complete simulation of the brain... We would expect consciousness and self-awareness to be emergent from such a complete simulation at this point.

2) You do not need complete human neural network simulations to do the kinds of calculations KI and I are describing. The reason we would want a complete simulation of the brain is for other uses like medicine, longevity, and eventually replacing the biological brain either in part or whole.

3) While we don't really understand consciousness, we do understand where consciousness stems from -- which parts of the brain are active in conscious thinking. So we know we do not need to include those portions of the brain in a neural network simulation that only requires human-like cognitive ability, without self-awareness and self-determination.

4) Aren't you a computer programmer? You should research neural network programming a bit to get a better picture as to why this isn't as dangerous as some people might think.

I see the function of sleep as a huge impediment to a synthetic brain. We don't understand what sleep is, or why we do it.

Even if you had a synthetic brain that required sleep, why wouldn't it just sleep?

I imagine it has a lot to do with long-term memory and how those memories are archived and maintained.

These kinds of limitations would likely not be present unless specifically simulated. It's like simulating an NES CPU with all the quirks and flaws vs having a "more-perfect" emulation that doesn't have these flaws built-in.

In essence, you could likely operate with or without sleep depending upon how closely you wanted to emulate biological human function.
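
To put the NES analogy in code terms, here's a toy sketch of what I mean by fidelity being a configuration choice -- the names and numbers are made up purely for illustration:

Code:
# Toy sketch: biological quirks like sleep are a fidelity setting in the
# simulation, not a law of nature. Names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class BrainSimConfig:
    simulate_sleep: bool = True       # faithfully reproduce the need for downtime
    consolidation_hours: float = 8.0  # downtime spent archiving memories, if simulated

def awake_hours_per_day(cfg: BrainSimConfig) -> float:
    return 24.0 - (cfg.consolidation_hours if cfg.simulate_sleep else 0.0)

print(awake_hours_per_day(BrainSimConfig()))                      # 16.0 -- faithful emulation
print(awake_hours_per_day(BrainSimConfig(simulate_sleep=False)))  # 24.0 -- "more-perfect" emulation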

But again, this hasn't anything to do with AI doing problem solving... it's more to do with procreative AI; i.e., biological humans creating artificial humans.

In order to achieve human-like decision making (the workflow) I would imagine that the memories themselves (the database) would have to be structured and connected in a very specific way, which, I think, would require us to completely vet the function of sleep.

Not sure what you mean by this.... If we accept your hypothesis that sleep is required for long-term memory storage (which I agree with in part), then for one, I'm not sure why sleep wouldn't simply be simulated within the emulated neural network; for two, if the simulated neurons don't need as much time to form and re-form connections to encode memories, one might imagine far less sleep (or none) being required; and lastly, you're thinking about this in terms of writing an application that uses some kind of database under a typical programming paradigm -- that is not how an AI modeled after the brain would behave.

Damien, an AI that perfectly simulated the human brain would store memories the exact same way you store memories -- and you don't have an RDBMS in your skull.. ;)

How can we observe memory formation? Where are we now in this research?

We've been working on this technology for decades. KI4MVP referenced the massive project that's working on creating a complete simulation at present.
 
I agree with that. However, when you get beyond purely mechanical/physical tasks, it gets more complicated. Robot lawyers (cue jokes....), robot psychologists, robot nurses and medical professionals....all of those professions require (or at least benefit substantially from) a significant degree of empathy,

What if you could simulate empathy without experiencing it? That is to say, what if the lawyer or psychologist could use a heuristic to appear empathetic without feeling empathy?

And another alternative -- what if limited empathy could be attained by meshing the portions of the neural network associated with empathetic thought (which mostly depends on abstract thought) with portions of a lesser species' neural network model?

So for example, we talked about a companion android: envision this android either with a heuristic or fuzzy logic model to fake empathy, or even really feeling empathy, but using a dog's emotional neural network model rather than a human's.
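
And just so we're on the same page about what "a heuristic to fake empathy" could even mean, here's a deliberately dumb sketch -- a canned-response lookup keyed on a detected emotional state, with states and responses invented for illustration:

Code:
# Deliberately dumb sketch of simulated empathy: map a detected emotional
# state to a canned empathetic response. No feeling involved anywhere.
EMPATHY_RULES = {
    "sad":     "I'm sorry you're going through this. Take whatever time you need.",
    "angry":   "I can see why that would be frustrating.",
    "anxious": "That sounds stressful. Let's take it one step at a time.",
}

def respond(detected_state: str) -> str:
    return EMPATHY_RULES.get(detected_state, "Tell me more about how you're feeling.")

print(respond("sad"))

Obviously a real system would be far more sophisticated, but the point stands: the appearance of empathy and the experience of it are separable.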

the ability to read human emotions, facial expressions, and any number of other cues

Q-Tip, you do not need empathy, thought, or emotion for this. We can do this right now without AI. You can aid the process with AI, but you could also just use a hash table and a system to fit the data points from the input sensor to a set of known values.
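
As a bare-bones illustration of that hash-table approach (the facial data points and values here are invented): measure a few features from the sensor, then take the closest entry in a table of known expressions:

Code:
# Bare-bones illustration: match sensor data points against a table of known
# expression "signatures" -- no AI required. Feature values are invented.
import math

# (mouth_curvature, brow_height, eye_openness) -> expression label
KNOWN_EXPRESSIONS = {
    ( 0.8, 0.5, 0.6): "happy",
    (-0.6, 0.3, 0.4): "sad",
    (-0.2, 0.9, 0.9): "surprised",
    (-0.7, 0.1, 0.5): "angry",
}

def classify(sample):
    # nearest known signature wins
    return min(KNOWN_EXPRESSIONS.items(), key=lambda kv: math.dist(kv[0], sample))[1]

print(classify((0.7, 0.4, 0.55)))  # -> happy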

and then to figure out the appropriate responses -- even for creative work like landscaping or interior decoration.

You do not need human intelligence for some of this... for some of it you do. You don't need emotions or self-awareness for any of it though.

To perform those tasks well, to the point where humans would think the result is equal to what a highly competent living person could provide, is going to require something so close to a synthetic brain that it will be essentially the same thing.

I disagree. I don't see why this wouldn't be a problem that could be solved rationally and logically so long as the AI could understand the abstraction and had an understanding of the context -- which is the whole point of using AI, because we can't program understanding of context using the current paradigms.

So, with that said, I'm curious as to how you have come to the conclusion that you would need a near-perfect, cognizant, and (I'm assuming) sentient human simulation to do these tasks.. Seems like massive overkill and a waste of resources.
 
$50 bucks is good for about 30mins with a 6 or a 7, spinner type, off Kuhio Ave in Waikiki; especially late at night... If you lay pipe right, you can prolly smash for as long as you want, these hoes aren't getting anything else jumping.. And if you're in a rush, you can get an aggressive blowjob at the local Korean bar.

@Maximus
 
I don't know if the sleep issue will take 10 years or 80 years to solve, for example.

What sleep issue? Either sleep is required because brains are biological and need rest, or it's required for long-term memory storage. Either way, a simulated brain can either sleep or not sleep. No time at all is needed to "solve it".
 
On the flip side though, I really don't think you're understanding what @KI4MVP and I are describing.

Actually, I do. I just credit Hawking and Musk over you two knuckleheads.

:chuckle:

But don't worry Q-Tip, I'll save a place for you in the Matrix.. Who else will I argue with endlessly for eternity? :chuckle:

Why would the AI bother putting us unnecessary, inferior beings in a Matrix at all?

What if that self-aware intellect that is so far beyond us concludes that we are completely unnecessary -- a burden even? You're assuming some sort of wonderful result where we humans get to expand our consciousness and existence, whereas I see absolutely no guarantee -- or even a likelihood -- that will be the result. The result could just as easily be extermination, either deliberately or through complete indifference.

btw, ever play a game called SOMA? If so, think of how that ended. If not....

humanity is almost wiped out by a comet, and the only survivors are a couple of people trapped deep undersea. The goal/plan is to scan them into a massive simulation stored into a computer, and launched into space so they can keep on living. So the one guy actually left alive goes through the game, and is scanned in as soon as the launch sequence begins.

He wakes up, and the transfer is successful. Problem is that he really wasn't transferred. It's essentially a copy that gets made for the ship, and he -- his conscious, human self -- is still trapped completely alone in the seabase. The self-aware, digital copy of him, though, is happy to be living in this beautiful, sunny new world.

The point is that the individual humans involved in the whole thing, who worked for the project and scanned themselves into the greater consciousness, gained nothing. So the true question is whether a merger with a singularity, even if it happened, would really be happening to us as self-aware individuals at all. Or do we die, and then a different consciousness with the same memories, etc., lives on instead?

I have no idea, and nobody else does or will either. That seems a pretty thin reed of a "good" result to offset the Hawking/Musk risk of actual extermination.
 
That's exactly the point I've been trying to make to @gourimoko . We're apparently going to develop a synthetic brain equal to ours, then turn it loose to develop even more powerful "brains" that will do things we can't even imagine.

Q-Tip, hang on a second... We keep going back over this same point and I think there's a disconnect here so let me clear this up:

I am not advocating creating artificial humans and turning everything over to them...

I'm not arguing that we create a synthetic brain that is conscious and have it take over all of science.

I'm saying we will have a complete model of the human brain; we will have synthetic/artificial sentient beings.. This is going to happen.

But for the advancements in technology you do not need or require sentience.. You need the human brain model so that we can interact with the program in a meaningful way, convey meaning to it using natural language, and have it convey results in natural language. @KI4MVP , I'm sure, knows what I'm talking about here.

Thus, a computer could do millions of lifetimes of research, using the English language (and all languages), and conceptual understandings of the world and the reality we live in; which is not something that we have ever been able to do with a computer previously.

This task, however, does not require a sentient computer.

And at the same time, we're going to have robots acquire such a level of creative thought that they can be our lawyers, writers, firefighters, police officers, mental health counselors, etc., plus run our now completely automated society to the point where we humans don't have to do shit.

Yes... this is correct, but I'm not sure how much creativity is required to be a cop, firefighter, a counselor, etc.. You don't need sentience, self-awareness, and self-determination for any of these tasks.

Under that combination of events, it is absolutely impossible to say that we're going to limit self-awareness, consciousness, and free will.

Why?? From a computer science standpoint, I really just don't get this statement?

By the very thing he's describing, the process of creation and development will no longer even be in human hands. At present, we have no idea what machine-based self-consciousness/awareness would even look like, or how to know it even exists. Predicting how it would act is...impossible.

We know exactly what it would look like... It would look like our brain.

As far as neural networks go, this stuff is not science fiction... I could walk you through setting up a neural network in an afternoon, and even program it to do cool stuff like image processing or character recognition.

Why do you think we don't know how this stuff works?
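
Seriously -- here's roughly what that afternoon looks like, using scikit-learn's built-in 8x8 digit images. Assuming you have scikit-learn installed; it's a sketch, not a serious recognizer:

Code:
# Minimal character (digit) recognition with a small feed-forward neural net.
# Assumes scikit-learn is installed; a sketch, not a production recognizer.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("test accuracy:", net.score(X_test, y_test))  # typically well above 90%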

All good points. I would add that we don't understand a great deal about the human brain, or even consciousness/self-awareness/free will. So how can we even know what is possible to achieve/regulate?

Logically we know that the ability to solve problems, in and of itself, does not result in self-awareness or self-determination in any way that we've ever observed. We can monitor the activity within a neural network to note the propagation of information across the network and to detect emergent phenomena. This is the point of the research we're doing now...
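
For a concrete (toy) example of what I mean by monitoring propagation, you can log each layer's activity as a signal moves through the network -- the layer sizes, weights, and input below are arbitrary:

Code:
# Toy example of monitoring activity as information propagates through a
# network: record each layer's mean absolute activation on a forward pass.
# Layer sizes, weights, and the input are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [16, 32, 32, 8]
weights = [rng.standard_normal((a, b)) * 0.3
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward_with_monitoring(x):
    activity_log = []
    for w in weights:
        x = np.tanh(x @ w)                       # propagate to the next layer
        activity_log.append(float(np.abs(x).mean()))
    return x, activity_log

_, log = forward_with_monitoring(rng.standard_normal(16))
print("mean |activation| per layer:", log)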

But it's highly, highly, highly unlikely that simply putting together a problem solver that operates on discrete functions using a complex neural network would result in consciousness.. This just makes no sense; it's borderline religious...

Sure, there are some theorists out there who think they've figured that stuff out, but that's inherently speculative. There are simply too many unknown unknowns.

Honestly, I'm not sure what you mean by this? Computer programs can be monitored, their activity regulated... I'm just not sure how/why you're thinking this is impossible or highly speculative... I'm not getting where this is coming from?
 
What sleep issue? Either sleep is required because brains are biological and need rest, or it's required for long-term memory storage. Either way, a simulated brain can either sleep or not sleep. No time at all is needed to "solve it".

Agreed.

If downtime is required then it would be inherently integrated within the simulation.

Hell, most of the time a CPU is on, it's not performing operations, even under 100% load.
 
Q-Tip, hang on a second... We keep going back over this same point and I think there's a disconnect here so let me clear this up:
I am not advocating creating artificial humans and turning everything over to them... I'm not arguing that we create a synthetic brain that is conscious and have it take over all of science.

I'm saying we will have a complete model of the human brain; we will have synthetic/artificial sentient beings.. This is going to happen.

But for the advancements in technology you do not need or require sentience.. You need the human brain model so that we can interact with the program in a meaningful way, convey meaning to it using natural language, and have it convey results in natural language. @KI4MVP , I'm sure, knows what I'm talking about here.

I again feel that your characterization of your point shifts back and forth between different propositions depending upon whether you are touting benefits, or addressing my critiques.

You talked about the things you were creating being equal to humans. Equal rights. You have talked about creating a self-aware, conscious (again, your words) entity. And that entity would be so intelligent that it would expand in scale to go far beyond what we can do. You're not just talking about building a human brain model.

And that brings us to the point raised by Musk and Hawking -- there is no reliable way to compartmentalize, limit, or control such a thing, because it will very quickly be thinking far beyond what we can do.

Yes... this is correct, but I'm not sure how much creativity is required to be a cop, firefighter, a counselor, etc.. You don't need sentience, self-awareness, and self-determination for any of these tasks.

Add to that all the other things, like lawyer, teacher, nurse, child care worker/nanny etc., that require actual compassion and empathy to maximize performance of that role. You want cops that don't have truly human emotions/understandings, and the ability to read people, situations, life events? A synthetic mental health counselor...you really think patients will react the same to such a thing as they would to a regular human unless the two were all but indistinguishable? I sure don't. The knowledge that the person you are talking to is also a human, and truly understands what that means, is pretty critical.

And of course, whether or not you think the robots performing those tasks need sentience isn't the point. The point is whether the singularity -- your self-aware A.I. -- decides to give it to them.
 
Actually, I do. I just credit Hawking and Musk over you two knuckleheads.

:chuckle:

:chuckle:

For what it's worth, Hawking is speaking philosophically. Hawking is talking about the dangers of completing something like the Human Brain Project and simply "turning it on" without considering the ramifications.

However, with that said, Musk is a programmer, Hawking is not, and both @KI4MVP and I have decades of programming experience.

Musk's opinion is far more subtle and nuanced than Hawking's. He says outright that AI becoming self-aware is not his fear at all; but instead, that AI let loose on the world within the confines of its own utility could cause calamity.

With that I agree completely. I don't think we should have AI hedge funds like he describes without first appreciating what that would entail.

Musk's position is totally in line with what @KI4MVP and I are talking about.

But getting back to Hawking's point, as someone who spent a great deal of time with physicists doing computer simulations of astrophysical phenomena, I can tell you for a fact that these are very rarely tech-savvy people... :chuckle:

Why would the AI bother putting us unnecessary, inferior beings in a Matrix at all?

Again, you're talking about some sentient, self-aware entity making decisions for us... I am not describing such a solution at all; I've even said that we should NOT do this.

What if that self-aware intellect that is so far beyond us concludes that we are completely unnecessary

:chuckle:

Not sure how many times I've written that I'm not talking about a self-aware intellect governing humanity...

You're assuming some sort of wonderful result where we humans get to expand our consciousness and existence, whereas I see absolutely no guarantee -- or even a likelihood -- that will be the result. The result could just as easily be extermination, either deliberately or through complete indifference.

Q-Tip, really, bro.. check it out: I AM NOT TALKING ABOUT A SENTIENT AI GOVERNING HUMANITY. :chuckle:

I'm going to tally up the number of times I've said this now.. :chuckle:

btw, ever play a game called SOMA? If so, think of how that ended.

Yep, great game.. I've played it twice now!

humanity is almost wiped out by a comet, and the only survivors are a couple of people trapped deep undersea. The goal/plan is to scan them into a massive simulation stored into a computer, and launched into space so they can keep on living. So the one guy actually left alive goes through the game, and is scanned in as soon as the launch sequence begins.

He wakes up, and the transfer is successful. Problem is that he really wasn't transferred. It's essentially a copy that gets made for the ship, and he -- his conscious, human self -- is still trapped completely alone in the seabase. The self-aware, digital copy of him, though, is happy to be living in this beautiful, sunny new world.

The point is that the individual humans involved in the whole thing, who worked for the project and scanned themselves into the greater consciousness, gained nothing. So the true question is whether a merger with a singularity, even if it happened, would really be happening to us as self-aware individuals at all. Or do we die, and then a different consciousness with the same memories, etc., lives on instead?

This is a surprisingly deep philosophical observation by you Q-Tip... I'm shocked! :chuckle:

But the woman who developed "The Ark" knew the whole time that she only had a 50% chance of making it into The Ark. It wasn't that you couldn't get into the simulation, it was that there was a decoherence issue of sorts. To put this in quantum mechanical terms, the "copying" process meant that your consciousness was essentially split in two; and you would end up in one of the two destinations, randomly.. Essentially, a coin-flip.

Those on the station that committed suicide believed that by killing themselves, they could force the coin-flip in their favor; such that they would wake up on The Ark. This was likely due to an incomplete understanding of how the copy process worked.

Anyway.. the point here isn't to argue over SOMA, I get what you're saying, but I'd ask you to go back and evaluate the thought experiment I posed regarding the "neuron replacement treatment" and how you might resolve the philosophical dilemma there... Would you not have transferred yourself into the artificial brain at that point, without any form of conscious discontinuity?

I have no idea, and nobody else does or will either.

In the case of SOMA; you are exactly right. There is no way of resolving this.

Another example of this is McCoy's trepidation regarding using the transporter... "Is this really me?"

Essentially, before you get in the transporter, are you just walking into a machine that will murder you and reassemble a copy of you somewhere else? Are people just being killed nonstop in Star Trek?

It's a fantastic philosophical question!

This is why the aforementioned thought experiment was presented: it attempts to resolve this issue.

That seems a pretty thin reed of a "good" result to offset the Hawking/Musk risk of actual extermination.

Musk's issue, as he clarifies, is extermination by human intention, not by the AIs; just to be clear. He's talking about us having an incomplete understanding of our own objectives and their unforeseen consequences... I agree with Musk; there is no real difference between his position and what we're describing...

Hawking is talking about what you are talking about; that self-aware AI is inherently dangerous. I agree with this too. But I don't think Hawking is suggesting we stop and not develop this technology... He's simply trying to bring awareness to it so that we can approach it smartly.
 
It'll be surprising if a true synthetic brain modeled on the human brain doesn't develop consciousness/self-awareness. What everyone seems to be missing is that you can have quite advanced AI without building a synthetic brain. Back to the example I keep using: there is no synthetic brain in self-driving cars, and there is no point in putting one there.

Agreed..

First off, if consciousness were not emergent from a complete simulation of the brain; well... That would be terrifying... It would mean... that we have an incomplete understanding of reality itself.

But outside of that, I'm not really sure why the point about problem solving is getting missed in the thread? I think when folks think of "artificial intelligence" they are thinking Skynet without understanding how computer programs actually work and why this kind of stuff isn't an issue.. You're not going to have billions of sentient AIs running amok.
 
Again, you're talking about some sentient, self-aware entity making decisions for us... I am not describing such a solution at all; I've even said that we should NOT do this.

Not sure how many times I've written that I'm not talking about a self-aware intellect governing humanity...

Q-Tip, really, bro.. check it out: I AM NOT TALKING ABOUT A SENTIENT AI GOVERNING HUMANITY. :chuckle:

I'm going to tally up the number of times I've said this now.. :chuckle:

Ah, okay, this is the crux of it. If it is sentient, self-aware, and capable of designing/expanding its own scale to become more intellectually powerful to the point where it can solve almost all issues of technology quickly, which I believe you also said....

Then whether or not you, or any other humans, want it to control humanity will be irrelevant. If it decides it wants to govern/control humanity, it will. And it will be sufficiently connected via whatever passes for the internet that it could take control of whatever it wanted, including legions of robots, even making them self-aware if it chose.

This is a surprisingly deep philosophical observation by you Q-Tip... I'm shocked! :chuckle:

Looks like I picked a good time to start sniffing glue.... Actually, as much as I initially disliked the ending from the perspective of Simon, I absolutely loved/admired the game-design choice. The ending really elevated the game.

But the woman who developed "The Ark" knew the whole time that she only had a 50% chance of making it into The Ark. It wasn't that you couldn't get into the simulation, it was that there was a decoherence issue of sorts. To put this in quantum mechanical terms, the "copying" process meant that your consciousness was essentially split in two; and you would end up in one of the two destinations, randomly.. Essentially, a coin-flip.

Right, I got that. She said "we lost". But even that assumption -- that their consciousness had a 50% chance of transferring -- was an assumption made by the woman/game designers. It is entirely possible (very likely, in my opinion) that your consciousness would always remain with your body, and the scanned version would always have a consciousness of its own.

but I'd ask you to go back and evaluate the thought experiment I posed regarding the "neuron replacement treatment" and how you might resolve the philosophical dilemma there... Would you not have transferred yourself into the artificial brain at that point, without any form of conscious discontinuity?

Assuming it was possible, and you were at end of life otherwise, sure. But again, assuming that the AI/singularity that designed it in the first place has any interest in letting you actually do that is a huge initial hurdle that we cannot overcome.
 
