
Moore's Law and the power of exponential advancement

The thing is, I think the poor will always be engaging in some form of value-creating work -- even among themselves -- because the investment cost to provide them a fully automated everything simply will not be worth it to those with the assets.

Consider that Foxconn, employer of 1.2 million people (mostly in China), has announced a three-step plan to fully automate its manufacturing and is building its own robots to do it.

http://www.theverge.com/2016/12/30/...s-automation-apple-iphone-china-manufacturing
 
My point was in the context of @gourimoko saying there would be no more free-market economy because of automation. That essentially requires that human labor/input be of no value. Reaching that point, I think, will take an incredibly long time.

What happens if, in say 20 years, we develop a scalable artificial human-like neural network that can operate on a series of Linux machines? What happens when we scale that technology up?

Nothing..?

What happens when robotics gets to the point where artificial muscles make androids fully human-like in articulation and motor skill?

You realize these technologies, coupled with 3D printing, lead to exponential advancements, right?
 
What kinds of things do you create most often using your 3D printer? What kinds of things could I as a regular dude print?

My wife made two stands for her phone; one includes a stand for her Apple Watch. She printed it off Thingiverse and sanded it down; it looks great.

I printed three different phone holders for the car; those things cost like $25-35 a pop at Walmart. I also wanted one for the iPad for long trips.

But I personally use it for prototyping parts. My company is looking to move into manufacturing, and I do some embedded work on FPGA logic circuits and MCUs that I can now take and fit into prototype cases designed by CAD users around the planet.

This saves us a HUGE amount of time (months) and a HUGE amount of money (many thousands) since it can be done in-house prior to going off for injection molding.
 
What happens if, in say 20 years, we develop a scalable artificial human-like neural network that can operate on a series of Linux machines? What happens when we scale that technology up?

Nothing..?

What happens when robotics gets to the point where artificial muscles make androids fully human-like in articulation and motor skill?

You realize these technologies, coupled with 3D printing, lead to exponential advancements, right?

You've got a lot of ifs in there....

The question for a lot of that is, again, the up-front capital investment. There's no real answer possible to these points, because it's inherently unknowable. I'm just registering skepticism. Is it really going to be cheaper to build, maintain, gather the raw materials for, etc., all these fully articulated robots, just so that perfectly capable humans don't have to do landscaping, or fix a mower that's on the fritz? And then there is the entire world of entertainment, and the aesthetics of humanity. Are people truly going to abandon that, and consider machines equal?

Maybe in SF stories, sure. But I think a great mass of humanity will be very resistant to that.
 
You've got a lot of ifs in there....

But my ifs are not hard problems to solve (hard in the sense of being potentially impossible to solve), or things we're not already working on engineering into existence.

The point here, Q-Tip, the whole point, is that reaching the technological singularity is unlike any other technological advancement precisely because it is the most important of them all.

Once artificial intelligence has reached human intelligence, then we can scale problem solving and cognitive ability through the roof. This means every other advancement necessary to achieving a fully automated and mechanized future would follow in rapid succession, precisely because you no longer need human beings to do the research and development.

The question for a lot of that is, again, the up-front capital investment.

Has already been and is being made right now. In the billions of dollars.

There's no real answer possible to these points, because it's inherently unknowable. I'm just registering skepticism.

And I appreciate that.

I'm just trying to get you to see the picture of what an AI with human cognitive ability really entails....

Yes, Q-Tip, very soon after that point, everything that we know would change, because scientific advancement would be happening in seconds rather than years.

Is it really going to be cheaper to build, maintain, gather the raw materials for, etc., all these fully articulated robots, just so that perfectly capable humans don't have to do landscaping, or fix a mower that's on the fritz?

Yes. Absolutely. You can print the robot and the previous robot can assemble it. So once you have a base infrastructure, it simply replicates itself continuously until there are enough androids that such a question is no longer relevant.

If you started with the base infrastructure for manufacture and just a few hundred androids that could self-replicate daily, imagine how fast you'd end up with enough to satisfy an entire population.

It's a massive game-changer if the product can make itself!
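To put rough numbers on the self-replication claim, here's a minimal sketch of the doubling math (the daily doubling rate and the 300-unit seed stock are my illustrative assumptions, not established figures):

```python
# Toy model of self-replicating manufacture: a population that doubles daily.
# Both the seed size and the doubling rate are illustrative assumptions.

def days_to_reach(seed: int, target: int) -> int:
    """Days until a daily-doubling population first reaches `target`."""
    population, days = seed, 0
    while population < target:
        population *= 2
        days += 1
    return days

# From 300 starter androids to roughly one per person on Earth:
print(days_to_reach(300, 8_000_000_000))  # -> 25
```

That's the whole point of exponential growth: the first few days look trivial, and then the curve goes vertical.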

And then there is the entire world of entertainment, and the aesthetics of humanity. Are people truly going to abandon that, and consider machines equal?

Consider them equal?

I'm not sure what you mean by this. Why would equality have anything to do with this?

Moreover, if the machine were conscious and sentient, and some of them surely will be, then why would it not be "equal?"

Maybe in SF stories, sure. But I think a great mass of humanity will be very resistant to that.

But why?

I appreciate you offering your opinion, but I'm not sure why you think the great mass of humanity would resist this. Would you?
 
My wife made two stands for her phone; one includes a stand for her Apple Watch. She printed it off Thingiverse and sanded it down; it looks great.

I printed three different phone holders for the car; those things cost like $25-35 a pop at Walmart. I also wanted one for the iPad for long trips.

But I personally use it for prototyping parts. My company is looking to move into manufacturing, and I do some embedded work on FPGA logic circuits and MCUs that I can now take and fit into prototype cases designed by CAD users around the planet.

This saves us a HUGE amount of time (months) and a HUGE amount of money (many thousands) since it can be done in-house prior to going off for injection molding.

If you want to design your own part to print, I assume you have to have some sort of 3D CAD software?
 
If you want to design your own part to print, I assume you have to have some sort of 3D CAD software?

Yes. But free ones exist solely for the purpose of 3D printing.

You can export from 3D modelling programs though, so you can go from 3DS Max into your CAD software and then into your printing software.

The workflow depends on where you're most comfortable modelling the part.
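If you'd rather script a part than model it by hand, here's a minimal sketch in Python that writes an ASCII STL file, the mesh format slicers for 3D printers consume. The tetrahedron and its dimensions are placeholder geometry for illustration only:

```python
# Minimal ASCII STL writer: an STL solid is just a list of triangular
# facets, each with an outward normal and three vertices (here in mm).

def write_stl(path, facets, name="part"):
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, verts in facets:
            f.write("  facet normal %g %g %g\n" % normal)
            f.write("    outer loop\n")
            for v in verts:
                f.write("      vertex %g %g %g\n" % v)
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Four facets form a small tetrahedron; most slicers recompute normals,
# so approximate unit normals are fine for a demo.
facets = [
    ((0, 0, -1), ((0, 0, 0), (0, 10, 0), (10, 0, 0))),              # base, z = 0
    ((0, -1, 0), ((0, 0, 0), (10, 0, 0), (0, 0, 10))),              # side, y = 0
    ((-1, 0, 0), ((0, 0, 0), (0, 0, 10), (0, 10, 0))),              # side, x = 0
    ((0.577, 0.577, 0.577), ((10, 0, 0), (0, 10, 0), (0, 0, 10))),  # slanted face
]
write_stl("tetra.stl", facets)
```

Load the resulting tetra.stl into any slicer to check it; parts exported from a real CAD program work the same way, just with thousands of facets.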
 
Has already been and is being made right now. In the billions of dollars.

Not even a drop in the bucket of what would be required.

Yes. Absolutely. You can print the robot and the previous robot can assemble it. So once you have a base infrastructure, it simply replicates itself continuously until there are enough androids that such a question is no longer relevant.

That's assuming an unlimited supply of raw materials, limitless power, etc..

If you started with the base infrastructure for manufacture and just a few hundred androids that could self-replicate daily, imagine how fast you'd end up with enough to satisfy an entire population.

I think when you start to imagine that happening not just in the relatively sanitized indoor environments, but over the 65 or so million square miles of wildly varying terrain, weather, etc., it gets a lot messier.

Consider them equal? I'm not sure what you mean by this. Why would equality have anything to do with this?

Because I mentioned previously issues of human aesthetics -- art, music, sports, athletic competitions, for which humans may prefer to have other humans be the participants.

Moreover, if the machine were conscious and sentient, and some of them surely will be, then why would it not be "equal?"

Because they're not human.

I appreciate you offering your opinion, but I'm not sure why you think the great mass of humanity would resist this. Would you?

For a ton of reasons. Again, there may be aesthetic reasons that humans would still prefer to interact with other humans. The same reason a real kitten is better than a fake one.

Also, considering how much has been written not just in SF but in actual science, by actual scientists, there may be a very understandable reluctance to ever grant machines that much independence/authority, in part because of the risk it poses to us.

If they are truly the same as us in terms of thinking, feeling, etc. then they wouldn't be immune to any of our emotional or other weaknesses/vices. A desire for power, jealousy, or anything similar. There have been attempted genocides to eliminate supposedly inferior tribes, races, or ethnicities -- you can go through history and see various tribal groups that were wiped out, etc.. And that's not to mention straight up war. Except we'd have granted them complete power over the entire production system and the means of making war against us.

Frankly, creating fully sentient, conscious machines that think no differently than humans with that kind of power would be sheer idiocy on our part.

I mean, really, what's the point of doing that in the first place? Why create fully sentient, conscious beings with their own wants and desires that may not be at all consistent with our own? At that point, they're no longer servants to make our lives easier -- they're our replacements. And who in their right mind would want that?

So, because of that, I think there will rightfully be a tremendous reluctance to create beings of the type, and on the scale, you are imagining.
 
Not even a drop in the bucket of what would be required.

Again... what is this based on? How much money does it take to develop an AI of this magnitude?

That's assuming an unlimited supply of raw materials, limitless power, etc..

Functionally near-limitless power is achievable with space-based solar. Both @KI4MVP and I have written about this in the past.
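For a rough sense of scale, a back-of-the-envelope sketch: the ~1361 W/m² solar constant above the atmosphere is a standard figure, while the collector area and end-to-end efficiency below are arbitrary assumptions of mine:

```python
# Back-of-envelope power from an orbital solar collector (no night, no weather).
SOLAR_CONSTANT_W_M2 = 1361      # standard value above the atmosphere
COLLECTOR_AREA_KM2 = 10         # hypothetical collector size
END_TO_END_EFFICIENCY = 0.20    # assumed conversion + transmission losses

watts = SOLAR_CONSTANT_W_M2 * COLLECTOR_AREA_KM2 * 1e6 * END_TO_END_EFFICIENCY
print(f"{watts / 1e9:.1f} GW continuous")  # ~2.7 GW, a few large power plants' worth
```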

I think when you start to imagine that happening not just in the relatively sanitized indoor environments, but over the 65 or so million square miles of wildly varying terrain, weather, etc., it gets a lot messier.

Q-Tip, maybe you and I have a different picture here, but I'm not sure what you mean.

A 3D printer, base materials, and a starter android can self-replicate as many androids as the base materials allow.

This has nothing whatsoever to do with terrain and weather.

I'm trying to picture what you mean, but I simply can't imagine what it is you're envisioning.

Because I mentioned previously issues of human aesthetics -- art, music, sports, athletic competitions, for which humans may prefer to have other humans be the participants.

Why?


Why would we discriminate whatsoever if the quality of the product was just as good?

Because they're not human.

Well, in some/most instances this would be true; but you realize in some instances these AIs would be human -- as human as you and I, right?

I mean, a human consciousness is human, is it not?

For a ton of reasons. Again, there may be aesthetic reasons that humans would still prefer to interact with other humans. The same reason a real kitten is better than a fake one.

I'm sure there would be some people who would prefer "real" biological things; and we're kind of getting off into a Blade Runner-esque scenario, but I think the vast majority wouldn't have a problem with artificially produced entities or the creative works of artificial entities.

I mean, I get what you're saying; there would be some prejudice against AI and AI works, but is that sufficient to stop such progress in its tracks? No... I don't think so.

Also, considering how much has been written not just in SF but in actual science, by actual scientists, there may be a very understandable reluctance to ever grant machines that much independence/authority, in part because of the risk it poses to us.

We're very much in a field I'm extremely familiar with (computer science, neural networks, artificial intelligence, and particularly automation), so I feel quite confident speaking on this subject with a considerable degree of expertise, since this kind of research and automation is literally what both my company and I do within the realm of computer science: quite literally, finding ways of replacing human labor.

With that said, I would argue that you're presenting a different argument than I am.

To your point, I am not talking about AI governing humans in all aspects; but with that said, have you considered the ramifications of AI reaching human cognitive ability -- at that point, how do you prevent such independence and autonomy?

But to my point, I'm not talking about artificial humans acting as slave labor; but instead, human-like neural networks using their cognitive ability for complex problem solving.

These machines can be used autonomously and will be used. The risk is completely different than the one you describe since there is no concept of free will or self-awareness involved.

For machines that are self-aware, that would be a completely different question, and yes, you're right, we would need to tread carefully -- but that's not too far off; in fact, it's likely closer to happening than you think.

If they are truly the same as us in terms of thinking, feeling, etc. then they wouldn't be immune to any of our emotional or other weaknesses/vices. A desire for power, jealousy, or anything similar.

Agreed! But we're not talking about manufacturing human consciousness for the purposes of automation; we're talking about manufacturing human-like neural networks. There is a fundamental difference here.

There have been attempted genocides to eliminate supposedly inferior tribes, races, or ethnicities -- you can go through history and see various tribal groups that were wiped out, etc.. And that's not to mention straight up war. Except we'd have granted them complete power over the entire production system and the means of making war against us.

Again, the line between them and us becomes increasingly blurred as we become them.

Frankly, creating fully sentient, conscious machines that think no differently than humans with that kind of power would be sheer idiocy on our part.

And yet we're going to do it... There's almost no way we won't end up doing it at some point in the near-future. It's the natural progression of artificial intelligence research.

I mean, really, what's the point of doing that in the first place?

There are a great many reasons to do it.

One of the key reasons to do it is so that we can learn how the human brain solves problems so effortlessly compared to machines, and so that we can build simulations of the brain to aid in the creation of prosthetics for sensory implants; i.e., artificial eyes, ears, an artificial sense of touch -- the ability to be somewhere that your body is not for dangerous work...

One of the longer-term reasons to do this would be to develop a means to simulate the neural structure of the brain itself so that we could develop synthetic synapses to physically replace biological ones. Again, at such time as we could do this, the line between "them" and "us" is pretty much eliminated entirely.

Why create fully sentient, conscious beings with their own wants and desires that may not be at all consistent with our own?

Why have children?

At that point, they're no longer servants to make our lives easier -- they're our replacements. And who in their right mind would want that?

You mean like having children?

So, because of that, I think there will rightfully be a tremendous reluctance to create beings of the type, and on the scale, you are imagining.

There might be, but how do you stop progress like this? Do you make such research and development illegal, taking a legislative approach?

In short, if this is something to be avoided purposefully, what do you think society can do to prevent the technological singularity from occurring?

And FWIW, IMHO, we shouldn't fear this but instead embrace it... I truly think the transhumanist approach entails a technological salvation of sorts: an end to disease, hunger, poverty... the waste of human life.

We should avoid those things and instead embrace what's more familiar simply because of fear of the unknown?

I don't think the majority of society would agree, but, I could be wrong.
 
Ugh.

As usual, I end up battling a moving target with you. Your argument keeps shifting so that no criticism/critique can be valid. So, I'm going to try to ratchet this down to a couple of core points. To recap, you started off arguing how human productive economic activity would become obsolete because everything would be done by machines.

You then expressed how these androids would in fact have self-awareness and consciousness that makes them "equal" -- your words -- to humans:

..lol..

As I see it Q-Tip, in the not-so-distant future, perhaps in our lifetimes, we will very likely have a complete artificial brain; as in, a conscious, sentient, thinking machine modeled after the human brain via a complete simulation of the human neural network.

Once we get to the point where such simulations can do complex tasks, like my job as an analyst/programmer, or say your job as a lawyer, then yes, the point where automation begins to reshape and ultimately obsolete the market economy will be here.

And then:

Q-Tip, imagine a world where you could 3D print an AI-driven android every day... in your garage. You could have an android cleaning your home, a different one tending to your food, another one comforting you when you're sick... You could have an android that could act as doctor, lawyer, receptionist, housemaid, nanny, and full-time sexbot.

Think about it: if you need companionship, you could make an android; if you want a pet, just print one; if you want a new TV, the androids will print the parts and assemble it for you... Need to go out? The car will shuttle you wherever you need to go. Want a new house? The androids will build you one, working day and night...

You even noted that there would come a time when we would actually marry them.

Okay, so I then pointed out that you have given them complete control over production, made them sentient, and apparently made them equal (presumably in terms of rights) to humans. And then postulated -- why would they need us inferior flesh-and-bone types around anyway? And the thing is, you cannot rebut that argument, because neither you nor anyone else can actually know what those sentient, physically indistinguishable from humans, perpetually-running machines might choose to do with us. It's simply impossible to know that, because you will have turned production/development/programming of new androids (or whatever) over to them.

But after I raise that, you then reversed course as if that's not really what you were saying at all:

To your point, I am not talking about AI governing humans in all aspects... But to my point, I'm not talking about artificial humans acting as slave labor; but instead, human-like neural networks using their cognitive ability for complex problem solving. These machines can be used autonomously and will be used. The risk is completely different than the one you describe since there is no concept of free will or self-awareness involved.

Huh? I mean, they're going to be doing all the programming and building so that we don't have to do any -- right? That's not "slave labor?" The android parked in my garage isn't really going to do whatever I tell it to? That's not what you were saying.

And if we really are giving them control over all the means of production -- which is exactly what you're doing -- how are they not "governing" us? Or maybe more to the point, how can we possibly prevent them from "governing" us if they so chose?

You then talk about them having "no concept of free will or self-awareness," while ignoring what you said about them providing full emotional satisfaction, being marriage partners, and having equal rights. That doesn't make any sense unless they have free will or self-awareness. And if they truly are going to have control over all our production, including gathering of raw materials, automated assembly, research, etc., how the hell could we possibly know (or stop them even if we did) that they won't R&D that on their own?

tl;dr: You can't simultaneously argue that they will perform all our manual labor and all our productive work, while gaining consciousness and becoming self-aware to the point of being suitable for marriage, while at the same time claiming they can't/won't acquire human failings and become a danger.
 
Anyway, to wrap that all up... I still think there are going to be a lot of resource limitations, and probably a lot of geopolitical hurdles that are going to affect how easy it will be to gather necessary resources via automation, and a lot of resistance on other fronts.

@KI4MVP

It has to be a world solution, not just a national one, because if we ever get to the point where all of us have everything we could possibly want/need due to the labor of machines, and the rest of the world is still living in squalor... well, a 40-foot wall won't be nearly enough.

Many of the raw materials we would need to build and sustain what you and @gourimoko are talking about are overseas, under the control of foreign governments. We still have profound social and related problems that also aren't simply going to stay out of the way while the technocrats among us try to build it either.

In other words, I think that our humanity is going to make the implementation of that technological progress much dirtier than some may envision.
 
Ugh.

As usual, I end up battling a moving target with you. Your argument keeps shifting so that no criticism/critique can be valid.

Q-Tip, maybe you should take a step back and consider I'm not changing my argument -- it's not my argument. I'm presenting to you a concept that's been out there for decades. I'm not shifting ground, I think we're just clarifying the point.

Try not to infer a motive that I'm somehow trying to avoid your criticisms; I'm not -- I'm trying to make sure we're talking about the same things because some of your points, to me, don't seem to coincide with what I'm talking about.

So, I'm going to try to ratchet this down to a couple of core points. To recap, you started off arguing how human productive economic activity would become obsolete because everything would be done by machines.

Yes.

You then expressed how these androids would in fact have self-awareness and consciousness that makes them "equal" -- your words -- to humans:

No. You're conflating two concepts here.

I'll explain in greater detail so that there's no confusion.

There are 3 technologies that are being developed today that in combination will essentially end our current market economy:

1) Artificial intelligence; the development of neural networks that can emulate complete human cognitive ability.
2) Robotics; particularly the development of artificial muscles to replace actuators and servos in robotic assemblies to give a much greater (if not superior to human) dexterity to robots and human-like androids.
3) 3D Printing; the continual development of this technology using additional materials - but this is coming along amazingly fast as it is.

Now, with respect to the bolded point: yes, there will be self-aware AIs in the future -- I would argue there is a strong chance this will happen in our lifetimes -- sentient, conscious, living artificial intelligences; equal to human beings due to that consciousness.

What's important though is, at this point (actually a bit prior to it), we've essentially reached the technological singularity. Let me take a moment to explain what that means in a nutshell:

The technological singularity is the moment in which human technological advancement, particularly in artificial intelligence, reaches a point of ever increasing self-development. Or to put this another way, it is the exact point at which technology begins to invent itself rather than requiring human interaction, labor, expense, education, time, etc.

This is why #1 on the list is really the most important (if not the only important) item on the list: because an artificial human-like neural network can do problem solving at scale. In comp. sci., this means that one can take the solver (in this case your neural network) and span it across ever-larger platforms. Since neural networks naturally scale given their mechanism of computing, entire areas of science (in fact, all of science and medicine) can be researched, and much of it solved, solely by computers.

This means that advancements in almost every area of science come essentially immediately. Why? Because an artificial intelligence simulation could perform the equivalent of a human lifetime of research in a few seconds. Overnight, massive technological advances would be made.
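The "technology inventing itself" claim is, at bottom, a feedback loop: each generation of researcher-AI designs a slightly better successor. Here's a toy model of that loop, where the 5%-per-cycle improvement rate is a made-up parameter chosen only to show the shape of the curve:

```python
# Toy model of recursive self-improvement: each design cycle yields a
# successor 5% more capable than its designer (an arbitrary assumption).

capability = 1.0      # 1.0 = one human-researcher-equivalent
IMPROVEMENT = 0.05    # gain per design cycle

for cycle in range(201):
    if cycle % 50 == 0:
        print(f"cycle {cycle:3d}: {capability:10.1f} human-equivalents")
    capability *= 1 + IMPROVEMENT
# cycle 0: 1.0 -> cycle 100: ~131.5 -> cycle 200: ~17292.6
```

Whether the real constant is 5% or 0.5% only changes the timescale, not the shape; the compounding is the point.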

Now getting back to your point, I think where the confusion lies is that you're assuming that all of these artificial intelligences will be the same; they won't be; there's no need for them to be. You won't need a fully emulated human neural network for a day-to-day android. You don't need one for massive problem solving either.

The androids that would be getting 3D printed for everyday use would not be sentient, and not be equal.

Now, would there be AIs that are sentient? Yes. Why? Because you can't really prevent someone from doing this once human-like AI becomes ubiquitous. Someone would essentially "enable" the portions of the simulation that allow for sentience; and then the cat is out of the bag.

Moreover, eventually, human evolution at this point will likely blend with these technologies. I've talked about this before, when discussing the "digitization" of living human consciousness, or what's called mind uploading, IIRC. In other words, when human beings reach the point where we can move our living consciousness off of a biological platform and into an artificial one (i.e., being conscious within the confines of a computer system), then the difference between these AIs and ourselves becomes non-existent.

And then:

Yes.

You even noted that there would come a time when we would actually marry them.

Yes absolutely.

Okay, so I then pointed out that you have given them complete control over production, made them sentient, and apparently made them equal (presumably in terms of rights) to humans.

So just to clarify:

Sentience is not a requirement for all androids. Sentient androids would effectively be human. Why wouldn't they be human? They have the equivalent of a human mind and human consciousness.

But I'm not saying that worker androids or even companion androids would be sentient: if they were, then they could not be used for these purposes; obviously. Q-Tip, if they were sentient, they might not do the work... Why would they?

In the example of a companion android that a person would marry: it might have some intelligence, cognitive ability, and even minimal self-awareness. But imagine taking something like the emotional equivalent of a dog's brain and coupling that with some logical routines to serve within the function of the android. For someone who is looking for a companion that does not have outward thoughts or desires, this should be ideal. Having a sentient companion android would be counterproductive and self-defeating. If this person can't get a woman in real life, what makes him think the android will want to stay with him if it's equivalent to a human being?

And then postulated -- why would they need us inferior flesh-and-bone types around anyway? And the thing is, you cannot rebut that argument, because neither you nor anyone else can actually know what those sentient, physically indistinguishable from humans, perpetually-running machines might choose to do with us. It's simply impossible to know that, because you will have turned production/development/programming of new androids (or whatever) over to them.

Q-Tip, I'm not arguing that there is no danger here; I'm arguing that it is inevitable.

Again, with respect to sentient androids, I don't expect many to be around given the danger. But this is a different conversation from when these things will likely happen. Again, there surely will be some legislative measures to protect against unethical use of AI - and the potential there is massive. However, it's important to note that once the genie is out of the bottle then we'll need to fully embrace this technological future. Trying to run from it would just lead to our demise.

But after I raise that, you then reversed course as if that's not really what you were saying at all:

Just to be clear, you quoted me saying this:

Gour: "To your point, I am not talking about AI governing humans in all aspects....But to my point, I'm not talking about artificial humans acting as slave labor; but instead, human-like neural networks using their cognitive ability for complex problem solving. These machines can be used autonomously and will be used.The risk is completely different than the one you describe since there is no concept of free will or self-awareness involved."

Yes, Q-Tip, what I'm saying here is exactly correct - I'm not reversing course. Re-read what I'm saying:

1) You will have sentient artificial intelligences in the world, yes, they will exist.

2) No, these intelligences will not be mass-produced; why would they be? Instead, they would be more of a byproduct of the research and development itself, and in certain circumstances you might consider creating a human AI for various purposes, but it wouldn't be a common occurrence. But the point here is that AI sentience will happen, because someone will do it. It's not a thing that can be prevented.

3) Sentient AIs cannot be made to do work; my argument has never been one of slavery. Human-like neural networks with human cognitive ability with respect to problem solving specifically need not be sentient or self-aware. Those aspects of the brain's functionality need not be implemented or active for large-scale problem solving, which is the point.

Huh? I mean, they're going to be doing all the programming and building so that we don't have to do any -- right?

If by AI then yes; but again, not all AI is sentient. Sentience is not required to write a computer program or to build a house.

That's not "slave labor?"

No. I said in the quote above that the AI that would be doing work, problem solving, etc., would NOT be sentient. I think I've said this in a few posts now. Human-like neural networks designed for problem solving at scale: this does not entail sentience.

The android parked in my garage isn't really going to do whatever I tell it to? That's not what you were saying.

The android in your garage is not going to be sentient. That's exactly what I'm saying. Do you really think I'm advocating for slavery while simultaneously advocating AI having equal rights? C'mon Q-Tip.. :chuckle:

And if we really are giving them control over all the means of production -- which is exactly what you're doing -- how are they not "governing" us?

Because there is no mechanism in their programming to govern. It's like saying Windows 10 governs us. It does its function.

Again, Q-Tip, an AI that utilizes a neural network does not mean an AI that can operate outside of its programming. I am not talking about creating a class of slaves; these would be, in 99% of instances, thinking machines, but not sentient machines with free will.
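The "box with hard edges" idea has a direct software analogue: no matter what a network computes internally, its outputs can be funneled through a fixed whitelist of actions, so nothing outside that set is even expressible. A minimal sketch, where the action list and the stand-in scores are invented for illustration:

```python
import random

# "Hard edges": the only actions this controller can ever emit are the ones
# enumerated here. There is no code path to anything outside the list.
ALLOWED_ACTIONS = ["mow_lawn", "fold_laundry", "charge_battery", "idle"]

def choose_action(model_scores):
    """Map the network's raw output scores onto the fixed action whitelist."""
    best = max(range(len(ALLOWED_ACTIONS)), key=lambda i: model_scores[i])
    return ALLOWED_ACTIONS[best]

# Stand-in for a real network's output layer: one score per allowed action.
scores = [random.random() for _ in ALLOWED_ACTIONS]
print(choose_action(scores))
```

However clever the scoring gets, the android in your garage can only ever pick from that list.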

Or maybe more to the point, how can we possibly prevent them from "governing" us if they so chose?

The android in your garage does not have free will Q-Tip; if he did, he wouldn't be sitting in your garage.

You then talk about them having "no concept of free will or self-awareness,"

Yes... that's the entire point.


while ignoring what you said about them providing full emotional satisfaction,

Providing full emotional satisfaction.. to their user. Moreover, emotions do not entail human-level self-awareness and free will.

being marriage partners,

Yes, someone will eventually want to marry their android.

and having equal rights.

You're confusing things here again..

A companion android, in this case, is not a sentient intelligence -- by definition... Think about it: if it were sentient, how would it perform the role of "companion" to an owner? If it's sentient, it very likely will want to leave at some point, meaning it's not ideal for the user...

A sentient AI would be a living human being simply not on a biological platform -- so yes, these entities would have equal rights; they are effectively human beings as we might logically define what it means to be human.

That doesn't make any sense unless they have free will or self-awareness.

It makes complete sense if you understand how neural networks and the human brain work. Emotions are not products of sentience. Displaying emotions or simulating emotions can be done programmatically. You do not need human consciousness for ANY of the tasks described here.
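As a concrete illustration of that point, displayed emotion can be a plain stimulus-to-expression lookup with no inner experience anywhere in the loop; the stimulus/expression pairs below are invented for the example:

```python
# Simulated affect with zero sentience: a pure stimulus -> expression table.
# Nothing here feels anything; the machine only *displays* an emotion.

EXPRESSIONS = {
    "owner_returns_home": ("happy", "smile and greet by name"),
    "owner_is_crying":    ("concerned", "soften voice, offer help"),
    "loud_crash":         ("startled", "widen eyes, turn toward sound"),
}

def react(stimulus):
    emotion, behavior = EXPRESSIONS.get(stimulus, ("neutral", "await input"))
    return f"display '{emotion}': {behavior}"

print(react("owner_returns_home"))  # display 'happy': smile and greet by name
```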

And if they truly are going to have control over all our production, including gathering of raw materials, automated assembly, research, etc., how the hell could we possibly know (or stop them even if we did) that they won't R&D that on their own?

Because they're not programmed to do so, and the AI that are programming AI aren't programmed to veer beyond their programming. It's a box with hard edges... Again, you are assuming that we're going to create a multitude of androids with free will -- that is not what I've described. No one is arguing that we do this.

tl;dr: You can't simultaneously argue that they will perform all our manual labor and all our productive work, while gaining consciousness and becoming self-aware to the point of being suitable for marriage,

Who said anything about being suitable for marriage? I said people will marry them. That doesn't mean you or I might.

Moreover I've said a few times now that androids doing labor will not be self-aware or have free will. You're contradicting this and saying that I must be saying this, while quoting me saying that "I'm not saying that." :chuckle:

while at the same time claiming they can't/won't acquire human failings and become a danger.

I'm not talking about having android slaves Q-Tip.. I think there is some confusion here on your part.

Re-read my posts. ;)

EDIT: A page or so ago, I said the following, addressing your initial concerns:

Gourimoko:
The purpose of building such a neural network is to understand human creative and cognitive ability. If you were to scale this up solely for computational purposes then you wouldn't simultaneously include areas of the brain dealing with emotion which largely lead to these disagreements to begin with.

I get what you're trying to say, but I think you might misunderstand -- you're not simply reproducing the human brain for the sake of it, you're building a simulation of it in a virtual machine so that you can scale cognitive ability and problem solving onto standing computing platforms.

This means that you could have a human-like cognitive capability scaled over silicon. That's not redundant -- that's a game changer... in every way imaginable.

...

Q-Tip, if you've read this portion of my post from a few days ago, it should be clear that I'm not talking about just birthing new artificial people in mass production.

Hope this clears up any confusion.
 
Anyway, to wrap that all up... I still think there are going to be a lot of resource limitations, and probably a lot of geopolitical hurdles that are going to affect how easy it will be to gather necessary resources via automation, and a lot of resistance on other fronts.

Right, but I'm not sure how any of that is relevant again, if we reach the advancement of having a full simulation of the human brain and are able to scale that neural network for problem solving.

The problems you're describing wouldn't make sense in this context.

It has to be a world solution, not just a national one, because if we ever get to the point where all of us have everything we could possibly want/need due to the labor of machines, and the rest of the world is still living in squalor... well, a 40-foot wall won't be nearly enough.

Why??


Why wouldn't androids be proliferated around the world?

You think if we invent a technology like this in the States it will stay here? Everyone will have it... everywhere.

Many of the raw materials we would need to build and sustain what you and @gourimoko are talking about are overseas, under the control of foreign governments.

I'm not sure what you mean here Q-Tip; like what?

We still have profound social and related problems that also aren't simply going to stay out of the way while the technocrats among us try to build it either.

Then those people will be left behind, surely.

In other words, I think that our humanity is going to make the implementation of that technological progress much dirtier than some may envision.

I think that we'll reach a point where people will need to ask themselves whether or not they want to move forward or stay in the past. If that forms a dividing line, I would imagine many of us wouldn't really care for or about those who didn't want to move forward.

In a scenario where one could kickstart/bootstrap a self-replicating, self-developing system of androids and printers, with a massive (insurmountable) technological advantage given the AI we're talking about, I don't think concepts like nations and popular opinion would make much of a difference. In fact, I think the concept of nations would break down rapidly...

I mean, without economies, with a burgeoning world culture, and with the breakdown of economic, technological, and even religious barriers (given the implications of this kind of technology), what would be the point of nations to begin with?
 
@The Human Q-Tip, to follow up on @gourimoko -- consider that self-driving cars are AI, but most certainly are not conscious. And there would be no point in ever designing a self-driving car that was conscious. It's an AI tool to safely get you from one place to another, just like today's phones use AI to do things like look stuff up on the internet for you (e.g., Siri).
 
@The Human Q-Tip, to follow up on @gourimoko -- consider that self-driving cars are AI, but most certainly are not conscious. And there would be no point in ever designing a self-driving car that was conscious. It's an AI tool to safely get you from one place to another, just like today's phones use AI to do things like look stuff up on the internet for you (e.g., Siri).

Great example.

Image recognition applications and natural language processing applications also employ AI these days.

Deep learning systems use AI and neural networks for problem solving.

AI does not necessarily entail consciousness and sentience; but when and where it does, and that consciousness and sentience is human-equivalent, then yes, one would obviously consider these to be living entities with equal rights, just as any human being would have.
 
