
Moore's Law and the power of exponential advancement

Slight side topic: how much potential does 3D XPoint memory have to change the way computers are designed and function? The old model is RAM plus a disk drive, with part of the disk drive set aside for swap space. Data has to be moved to/from the disk drive a block at a time.

3D XPoint could change all of this. They talk about using it to replace RAM and to replace SSD drives. But why not do both at once? Why not have all memory and storage directly addressable by the CPU, instead of transferring data into "RAM" and setting up swap space on the drive? A 64-bit processor can theoretically address 16 million terabytes of data. Not 16 terabytes; 16 million terabytes. I know current processors are limited in the amount of memory they can directly access, but with just a few more pins, the maximum amount of addressable memory could easily jump from multiple gigabytes to multiple terabytes.

It would seem that such a change could have a massive impact on the kinds of things we're talking about here, where disk latency is potentially a huge issue.
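To put a rough sketch on that idea: here's a toy Python example using mmap over an ordinary file as a stand-in for byte-addressable persistent memory. The file name and region size are made up, and real persistent-memory hardware would be exposed through its own interfaces rather than a filesystem-backed file; this only illustrates the programming model of reading and writing "storage" at byte offsets instead of a block at a time.

```python
import mmap
import os

# 2**64 bytes = 16 EiB, about 18.4 million decimal terabytes -- the
# theoretical reach of a 64-bit address space, which is where the
# "16 million terabytes" figure comes from.
print(2**64 // 10**12, "TB")  # 18446744

# Hypothetical stand-in: map "storage" straight into the address space
# and read/write at byte offsets, with no block-at-a-time transfers in
# the programming model.
path = "persistent_region.bin"  # made-up file name
with open(path, "wb") as f:
    f.truncate(4096)  # reserve a small region

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 0)
    region[0:5] = b"hello"        # direct byte-addressed write
    print(bytes(region[0:5]))     # direct byte-addressed read
    region.close()

os.remove(path)
```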
 
I again feel that your characterization of your point shifts back and forth between different propositions depending upon whether you are touting benefits or addressing my critiques.

Q-Tip, try to understand that my point to you is that you "feel" this way because you have an incomplete picture of what's being described. I have not changed my stance here... When have I?

You talked about the things you were creating being equal to humans. Equal rights.

So let's tackle this one first, and hopefully for the final time:

A human intelligence, that is sentient and self-aware would be "human." It would be equal, with equal rights.

This is going to happen, but in a limited space and scope. That might open up as people decide to use this as a means of procreation, but that's a ways from now and is NOT necessary to the automation of society's needs.

You have talked about creating a self-aware, conscious (again, your words) entity.

Yes, I have... this will happen.

And that entity would be so intelligent that it would expand in scale to go far beyond what we can do. You're not just talking about building a human brain model.

No.

Q-Tip, this is, again, where the disconnect lies. I am not talking about enslaving a human consciousness, artificial or otherwise. I am telling you that once we have such a simulation we can isolate the cognitive ability of the neural network to create a scaleable problem solver.

I've said this in about 6 posts.

This is distinctly different from having a sentient artificial intelligence doing it; a problem solver that can understand abstraction and contextual meaning is not inherently sentient, self-aware, or self-determining. There is no emotion or free will involved in discrete functional problem solving, which is what is necessary to reach the technological singularity.

And that brings us to the point raised by Musk and Hawking -- there is no reliable way to compartmentalize, limit, or control such a thing, because it will very quickly be thinking far beyond what we can do.

If you like, I can re-quote my posts throughout the thread to demonstrate that my line of reasoning has been consistent throughout... I just think there might be some confusion on your end about what I'm getting at here, because you seem to think that you need sentience, when it has nothing to do with the problem solving required to perform every task you've said would be an issue.

So I think that since you believe sentience is required, you're continually asserting that I must believe the same thing; when I've stated outright that I don't think so, find no evidence to conclude that it is, and have said as much repeatedly.

Does that mean there won't be sentient AI? No, there will be; but they won't be used as problem solvers, which means this quoted statement is false.

Add to that all the other things, like lawyer, teacher, nurse, child care worker/nanny, etc., that require actual compassion and empathy to maximize performance of the role.

But I argue none of these functions requires ACTUAL empathy. I've dated a few teachers and nurses who were borderline fucking sociopaths. You can simulate/fake empathetic responses using heuristics, fuzzy logic, or even a simple hash table.
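For what it's worth, here's how crude that could be and still pass in limited settings: a minimal Python sketch of the "simple hash table" approach. All the keywords, categories, and canned replies here are made up for illustration; a real system would layer heuristics or fuzzy matching on top.

```python
# Canned "empathetic" responses keyed by a crudely detected feeling.
RESPONSES = {
    "grief":   "I'm so sorry. That sounds incredibly hard. Do you want to talk about it?",
    "anxiety": "That sounds stressful. What part worries you the most?",
    "anger":   "It makes sense that you're frustrated. What happened?",
}

# Hypothetical keyword-to-feeling lookup -- the "simple hash table."
KEYWORDS = {
    "died": "grief", "lost": "grief",
    "worried": "anxiety", "scared": "anxiety",
    "furious": "anger", "unfair": "anger",
}

def fake_empathy(utterance: str) -> str:
    """Return a canned 'empathetic' reply -- no sentience required."""
    for word, feeling in KEYWORDS.items():
        if word in utterance.lower():
            return RESPONSES[feeling]
    return "Tell me more about that."

print(fake_empathy("My dog died last week"))
```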

You want cops that don't have truly human emotions/understandings,

As an African-American, I have to say yes.

and the ability to read people, situations, life events?

You do not need sentience to do ANY of these things...

Q-Tip, the whole point of all of this is so that computers can attain an ability for abstract thinking which carries contextual meaning while understanding and being able to convey actual ideas using natural language. That's the point.

This means you could tell a computer, "Read this physics book and then solve these physics problems," and it could do it, and write a research paper for you to understand its conclusions.

As @KI4MVP alluded to a few times, you'd be able to do the same thing with medical journals and an understanding of biology and chemistry, even coupled with quantum mechanics and physical modelling of chemical interactions at a scale we have trouble with today... Computers wouldn't need to brute-force these kinds of problems, but could devise elegant solutions using real thought...

But this does not entail sentience.

A synthetic mental health counselor... you really think patients will react the same to such a thing as they would to a regular human unless the two were all but indistinguishable? I sure don't.

It depends on how ubiquitous such counselors are...

I know a big problem in men's health is men being embarrassed or afraid to talk to their doctor about their problems. Imagine getting in-home care from a seemingly compassionate android that will never judge you or ever disclose anything about you to anyone.

Imagine a counselor that would spend an inordinate amount of time with you, rather than saying "time is up"?

Imagine a counselor you could call day or night, spend 24/7 on the phone with, and talk to about all your problems, regardless of your financial ability?

You're asking me if a human being is better than that? I'm going to say no.

The knowledge that the person you are talking to is also a human, and truly understands what that means, is pretty critical.

That's the point you're missing... Understanding is not inherently a part of sentience. You're conflating self-awareness with abstract thought / cognition; these are not the same things.

And of course, whether or not you think the robots performing those tasks need sentience isn't the point.

It kind of is the point, right? I mean, if the artificial counselor is doing a better job than a human counselor with better results -- isn't that the point?

The point is whether the singularity -- your self-aware A.I. -- decides to give it to them.

Again, you wouldn't have self-aware AI in these roles... You also seem to acknowledge this by arguing against the feasibility of a counselor that isn't self-aware/empathetic, which suggests you get the point I'm making but are recasting it based on what you think the parameters should be...

Again, Q-Tip, I'm not arguing for self-aware androids performing jobs... This would be slavery... I'm really going to start counting the number of instances I'm saying this now... :chuckle:
 
Ah, okay, this is the crux of it. If it is sentient, self-aware, and capable of designing/expanding its own scale to become more intellectually powerful to the point where it can solve almost all issues of technology quickly, which I believe you also said....

Then whether or not you, or any other humans, want it to control humanity will be irrelevant. If it decides it wants to govern/control humanity, it will. And it will be sufficiently connected via whatever passes for the internet that it could take control of whatever it wanted, including legions of robots, even making them self-aware if it chose.

Yeah, I'm not talking about any of this being something we should pursue in an immediate sense. I am not describing a future where a self-aware AI governs our lives -- and to be clear, I've never said anything like that... That's crazy.

However, with that said, I do think that gradually, over a fairly short period of time given the rapidity of advancement, we as a species will fit this description; but again, it will take some time to make such a transition.

Looks like I picked a good time to start sniffing glue... Actually, as much as I initially disliked the ending from the perspective of Simon, I absolutely loved/admired the game-design choice. The ending really elevated the game.

It was a fantastic game and the ending was incredible; it was the best part of the game by far, as it made the consequences of these choices all the more real.

Right, I got that. She said "we lost". But even that assumption -- that their consciousness had a 50% chance of transferring -- was an assumption made by the woman/game designers.

Well, she designed the copy system, and seemed to be speaking from a position of knowledge about how the operation worked. From a quantum mechanical standpoint, it makes some sense: there isn't a classical way to "copy" a brain, so there would be some kind of coin flip involved in collapsing a superposition of two states, where one consciousness ends up in a random position without any means of predetermining the outcome.

But I get what you mean, it's a game; that doesn't mean this is a description of reality whatsoever...

it is entirely possible (very likely, in my opinion) that your consciousness would always remain with your body, and the scanned version would always have a consciousness of its own.

Well... that really depends on a great many things, Q-Tip... There are interpretations of quantum mechanics that strongly suggest (require) some degree of... connection... between entangled systems. One could argue that to move a complex system like the mind, one could somehow use a quantum mechanical process in which some of these "rules" might come into play...

But that's obviously highly speculative and based on very little... Who knows without understanding the process?

If it's a classical process and it's simply a scanning/mapping of the brain, then there could be no transfer, so to speak... But we know there is some transfer, given that the narrative jumps from place to place; and we know what the female scientist described, so we can venture to guess there is something more going on here...

However, again, if you re-evaluate the thought experiment a few pages ago, we wouldn't be dealing with a "scan" of your brain, but instead an in-place replacement... What then?

Assuming it was possible, and you were at end of life otherwise, sure. But again, assuming that the AI/singularity that designed it in the first place has any interest in letting you actually do that is a huge initial hurdle that we cannot overcome.

:chuckle:

We're not talking about building Skynet, Q-Tip..
 
What sleep issue? Either sleep is required because brains are biological and need rest, or it's required for long-term memory storage. Either way, a simulated brain can either sleep or not sleep. No time at all is needed to "solve it".

How memories are built and maintained, how they interact with each other, how decisions are made based on experience -- all of these things (under my hypothesis) would be directly influenced by sleep, or the lack thereof. If we are going to create a machine that can perform these tasks, we must understand how the information used to make a decision is stored and maintained. That would require a thorough understanding of sleep and its function.

What does it mean to "reverse engineer a brain from the cellular level"? We would have to engineer how these cells interact, no? Going down that logical path, wouldn't a thorough understanding of sleep be required?
 
A couple of points, Damien:

1) I think you and Q-Tip are both a bit confused here. You could not keep a synthetic brain from developing self-awareness if it were a complete simulation of the brain... We would expect consciousness and self-awareness to be emergent from the brain at that point.

2) You do not need complete human neural network simulations to do the kinds of calculations KI and I are describing. The reason we would want a complete simulation of the brain is for other uses like medicine, longevity, and eventually replacing the biological brain either in part or whole.

3) While we don't really understand consciousness, we do understand where consciousness stems from; what parts of the brain are active in conscious thinking. And we understand that we would not need to add those portions of the brain to a neural network simulation meant to have human-like cognitive ability without self-awareness and self-determination.

4) Aren't you a computer programmer? You should research neural network programming a bit to get a better picture as to why this isn't as dangerous as some people might think.



Even if you had a synthetic brain that required sleep, why wouldn't it just sleep?



These kinds of limitations would likely not be present unless specifically simulated. It's like simulating an NES CPU with all its quirks and flaws versus having a "more-perfect" emulation that doesn't have those flaws built in.

In essence, you could likely operate with or without sleep depending upon how closely you wanted to emulate biological human function.
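To make the emulation analogy concrete, here's a small Python sketch contrasting faithful and idealized behavior behind a single flag. The quirk shown is the well-known 6502 indirect-JMP page-wrap bug (the NES CPU is 6502-based); the function and memory layout are simplified for illustration.

```python
# Toy illustration: the same operation with or without a historical
# hardware quirk, selected by a flag. Faithful mode reproduces the 6502
# bug where JMP ($xxFF) fetches the high byte from the wrong address
# because the pointer doesn't carry across the page boundary.
def jmp_indirect(memory: list, ptr: int, emulate_quirk: bool = True) -> int:
    lo = memory[ptr]
    if emulate_quirk:
        # Faithful mode: wrap within the page, reproducing the bug.
        hi = memory[(ptr & 0xFF00) | ((ptr + 1) & 0x00FF)]
    else:
        # "More-perfect" mode: behave the way the docs imply.
        hi = memory[ptr + 1]
    return (hi << 8) | lo

mem = [0] * 0x10000
mem[0x02FF], mem[0x0300], mem[0x0200] = 0x34, 0x12, 0x99
print(hex(jmp_indirect(mem, 0x02FF, emulate_quirk=True)))   # 0x9934 (buggy, faithful)
print(hex(jmp_indirect(mem, 0x02FF, emulate_quirk=False)))  # 0x1234 (idealized)
```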

But again, this doesn't have anything to do with AI doing problem solving... it's more to do with procreative AI; i.e., biological humans creating artificial humans.



Not sure what you mean by this... If we accept your hypothesis about sleep being required for long-term memory storage (which I agree with in part), then for one, I'm not sure why sleep wouldn't be simulated within the emulated neural network. For two, if artificial neurons do not require as much time to form and re-form connections when forming memories, then one might imagine far less sleep (or no sleep) being required. And lastly, you're thinking about this in terms of writing an application that uses some kind of database under a typical programming paradigm -- that is not how an AI modeled after the brain would behave.

Damien, an AI that perfectly simulated the human brain would store memory the exact same way you store memory -- and you don't have an RDBMS in your skull... ;)
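As a concrete (if drastically simplified) illustration of memory-in-the-weights rather than memory-in-a-database, here's a classic Hopfield-style associative recall sketch in Python. The network size and corruption level are arbitrary; the point is that storing a pattern means strengthening connections, and recalling it means letting activity settle back into that pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
pattern = rng.choice([-1, 1], size=n)   # the "memory" to store

W = np.outer(pattern, pattern) / n      # Hebbian weight update: memory lives in W
np.fill_diagonal(W, 0)

noisy = pattern.copy()
noisy[:12] *= -1                        # corrupt part of the recall cue

state = noisy
for _ in range(5):                      # recall by settling into the stored pattern
    state = np.sign(W @ state)

print("recalled correctly:", np.array_equal(state, pattern))
```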



We've been working on this technology for decades. KI4MVP referenced the massive project that's working on creating a complete simulation at present.

Understood on your first 3 points.

Not a programmer. I work on the strategic/managerial side. I'm more concerned with the "what" and the "why" than the "how". I'll read up on neural programming. I'm trying to build up my chops in that area (coding in general)...but it's hard to motivate myself if I'm not solving a problem that I care about solving.

"Damian, an AI the perfectly simulated the human brain would store memory the exact same way you store memory -- and you don't have an RDBMS in your skull.. ;)"

This actually gets at what I'm talking about in reference to sleep: How do you and I store memory? How are those memories formed? How are they prioritized? Under my hypotheses, the answer to those questions would entail an understanding of sleep's function.

Yes, we've been working on this for decades... how far along are we? I'm ignorant of that subject.
 
Understood on your first 3 points.

Not a programmer. I work on the strategic/managerial side. I'm more concerned with the "what" and the "why" than the "how". I'll read up on neural programming. I'm trying to build up my chops in that area (coding in general)...but it's hard to motivate myself if I'm not solving a problem that I care about solving.

Cool.

"Damian, an AI the perfectly simulated the human brain would store memory the exact same way you store memory -- and you don't have an RDBMS in your skull.. ;)"

This actually gets at what I'm talking about in reference to sleep: How do you and I store memory? How are those memories formed? How are they prioritized? Under my hypotheses, the answer to those questions would entail an understanding of sleep's function.

Damien, the answer to this question is beyond my expertise...

I'm a computer scientist, not a neuroscientist, and this is way outside my field; so I can only offer you a brief synopsis of my understanding...

But to answer your question, again from a very limited understanding: we store memories in various regions of the brain. Long-term memory (which is likely what you're talking about) is stored in the hippocampus (AFAIK). Memory creation involves a protein called NPAS4, a transcription factor that regulates how engrams are created from sensory information.

A rough (and incomplete) approximation of how this works: your cerebral cortex processes sensory data -- again, what we call cognition -- which is then expressed by the encoding of engrams via the aforementioned mechanism for later retrieval; what we would describe as long-term memory.

How sleep plays a role in this, I'm not entirely sure; but I'm certain there is some biological need for it. I would imagine, however, that sleep is probably more important for short-term memory, which works by a different process entirely.

Now, with all of that said, I can give you my opinion as someone in computer science... As @KI4MVP stated, sleep is not a problem from a logical standpoint. If we have a simulation of the brain itself, built up from the fundamental building blocks (i.e., neurons) into a correctly spatially defined set of interconnected neurons, then the brain would "sleep" as needed -- just as you sleep as needed. Or, in other words: where's the problem?

Yes, we've been working on this for decades... how far along are we? I'm ignorant of that subject.

See KI's reference to the human brain project...

We're very far along indeed down this path compared to where we were, say, 20 years ago.
 
How memories are built and maintained, how they interact with each other, how decisions are made based on experience -- all of these things (under my hypothesis) would be directly influenced by sleep, or the lack thereof. If we are going to create a machine that can perform these tasks, we must understand how the information used to make a decision is stored and maintained. That would require a thorough understanding of sleep and its function.

Actually, Damien, this is the point that KI and I are making -- you would NOT need to fully understand the brain biologically in order to create a simulation of it. Many of the properties of the human brain, mind, and consciousness would be emergent from a smaller set of initial conditions and rules. Yet having that smaller set of information would be sufficient to "turn on" an artificial brain from which a consciousness could emerge.

For example, if you had an artificial neuron that operated the same way a biological one would, you would not need to understand the sleep process and its function in its entirety in order to swap out the biological cell for the artificial one.

What does it mean to "reverse engineer a brain from the cellular level?" We would have to engineer how these cells interact, no? Going down that logical path, wouldn't a thorough understand of sleep be required?

No.

In computer science, reverse engineering can be done on black-box functions where you have absolutely no idea how the function is implemented. The only thing you do have is a set of inputs and outputs, and thus you can perform the necessary operational transformations to get from point A to point B.

So, in other words, if you have an artificial neural network that follows the correct ruleset and has the correct interconnects, then there should be no functional difference between the simulation and the real thing. Without a complete understanding of how the thing works on a macroscopic scale, you've duplicated it nonetheless.

To put this another way, there are emergent qualities of the human brain and consciousness that we will not fully understand before such an AI were to be developed, completed, and activated.
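A toy Python illustration of that input/output-only reverse engineering: the "hidden" function below is made up, and the point is only that observing enough (input, output) pairs lets you build a functional duplicate without ever opening the box.

```python
import numpy as np

def black_box(x):
    # Hidden implementation -- the "reverse engineer" never sees this.
    return 3.0 * x + 2.0

xs = np.linspace(-10, 10, 50)              # probe with inputs
ys = np.array([black_box(x) for x in xs])  # observe outputs

slope, intercept = np.polyfit(xs, ys, 1)   # fit a functional duplicate
print(f"recovered: f(x) ~= {slope:.2f}*x + {intercept:.2f}")
print("matches on unseen input:", np.isclose(slope * 123 + intercept, black_box(123)))
```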
 
Actually, Damien, this is the point that KI and I are making -- you would NOT need to fully understand the brain biologically in order to create a simulation of it. Many of the properties of the human brain, mind, and consciousness would be emergent from a smaller set of initial conditions and rules. Yet having that smaller set of information would be sufficient to "turn on" an artificial brain from which a consciousness could emerge.

For example, if you had an artificial neuron that operated the same way a biological one would, you would not need to understand the sleep process and its function in its entirety in order to swap out the biological cell for the artificial one.



No.

In computer science, reverse engineering can be done on black-box functions where you have absolutely no idea how the function is implemented. The only thing you do have is a set of inputs and outputs, and thus you can perform the necessary operational transformations to get from point A to point B.

So, in other words, if you have an artificial neural network that follows the correct ruleset and has the correct interconnects, then there should be no functional difference between the simulation and the real thing. Without a complete understanding of how the thing works on a macroscopic scale, you've duplicated it nonetheless.

To put this another way, there are emergent qualities of the human brain and consciousness that we will not fully understand before such an AI were to be developed, completed, and activated.

I understand.

Basically, if you're able to exactly replicate the way a neuron squirts electricity, in conjunction with every other neuron, you've got a functioning brain.

This is assuming that each neuron can, in a practical sense, be studied and replicated individually as a part of the greater whole.

Right?
 
I understand.

Basically, if you're able to exactly replicate the way a neuron squirts electricity, in conjunction with every other neuron, you've got a functioning brain.

This is assuming that each neuron can, in a practical sense, be studied and replicated individually as a part of the greater whole.

Right?

Right.
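For anyone curious what "replicating the way a neuron squirts electricity" looks like at the very bottom, here's a leaky integrate-and-fire neuron in Python -- about the simplest single-cell model there is. A serious brain simulation would use far richer dynamics (e.g., Hodgkin-Huxley) across billions of interconnected units; all parameter values here are illustrative.

```python
def lif_neuron(input_current, dt=1.0, tau=10.0, v_rest=-65.0,
               v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire: membrane voltage leaks toward rest,
    integrates input, and fires/resets when it crosses threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + i_in) / tau   # leak toward rest + input drive
        v += dv * dt
        if v >= v_thresh:                   # threshold crossed: fire
            spikes.append(t)
            v = v_reset                     # reset after the spike
    return spikes

# Constant drive produces a regular spike train (spike times in steps).
print(lif_neuron([20.0] * 100))
```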
 
Slight side topic: how much potential does 3D XPoint memory have to change the way computers are designed and function? [...]

I think my post got lost in the AI discussion. Any thoughts? @gourimoko
 
Burger-flipping robot replaces humans on first day at work

A burger-flipping robot has replaced humans at the grill of CaliBurger CREDIT: MISO ROBOTICS
9 MARCH 2017 • 10:42AM
A burger-flipping robot has just completed its first day on the job at a restaurant in California, replacing humans at the grill.

Flippy has mastered the art of cooking the perfect burger and has just started work at CaliBurger, a fast-food chain.

The robotic kitchen assistant, which its makers say can be installed in just five minutes, is the brainchild of Miso Robotics.

“Much like self-driving vehicles, our system continuously learns from its experiences to improve over time,” said David Zito, chief executive officer of Miso Robotics.

“Though we are starting with the relatively 'simple' task of cooking burgers, our proprietary AI software allows our kitchen assistants to be adaptable and therefore can be trained to help with almost any dull, dirty or dangerous task in a commercial kitchen — whether it's frying chicken, cutting vegetables or final plating.”

Cameras and sensors help Flippy to determine when the burger is fully cooked, before the robot places it on a bun. A human worker then takes over and adds condiments.

More Flippy robots will be introduced at CaliBurgers next year, with the aim of installing them in 50 of their restaurants worldwide by the end of 2019.

CaliBurger say the benefits include making “food faster, safer and with fewer errors”.

View: https://www.youtube.com/watch?v=lMIkWyiJp0k
 
Burger-flipping robot replaces humans on first day at work [...]

Well, at least it can't jack off on to your Big Mac.
 
Burger-flipping robot replaces humans on first day at work [...]

There goes that job...

So burger flippers and taxi/Uber/truck drivers are soon to be out of work...

Again, we'll probably be supporting a universal income at some point in the near future as fewer and fewer low-skilled jobs are available.
 
