Ugh.
As usual, I end up battling a moving target with you. Your argument keeps shifting so that no criticism/critique can be valid.
Q-Tip, maybe you should take a step back and consider I'm not changing my argument -- it's not my argument. I'm presenting to you a concept that's been out there for decades. I'm not shifting ground, I think we're just clarifying the point.
Try not to infer a motive that I'm somehow trying to avoid your criticisms; I'm not -- I'm trying to make sure we're talking about the same things because some of your points, to me, don't seem to coincide with what I'm talking about.
So, I'm going to try to ratchet this down to a couple of core points. To recap, you started off arguing how human productive economic activity would become obsolete because everything would be done by machines.
Yes.
You then expressed how these androids would in fact have self-awareness and consciousness that makes them "equal" -- your words -- to humans:
No. You're conflating two concepts here.
I'll explain in greater detail so that there's no confusion.
There are 3 technologies that are being developed today that in combination will essentially end our current market economy:
1) Artificial intelligence; the development of neural networks that can emulate complete human cognitive ability.
2) Robotics; particularly the development of artificial muscles to replace actuators and servos in robotic assemblies to give a much greater (if not superior to human) dexterity to robots and human-like androids.
3) 3D Printing; the continual development of this technology with an ever-expanding range of printable materials -- but this is coming along amazingly fast as it is.
Now, with respect to the bolded point: yes, there will be self-aware AIs in the future -- I would argue there is strong chance this would be in our lifetimes -- sentient, conscious, living artificial intelligence; equal to human beings due to that consciousness.
What's important though is, at this point (actually a bit prior to it), we've essentially reached the technological singularity. Let me take a moment to explain what that means in a nutshell:
The technological singularity is the moment in which human technological advancement, particularly in artificial intelligence, reaches a point of ever increasing self-development. Or to put this another way,
it is the exact point at which technology begins to invent itself rather than requiring human interaction, labor, expense, education, time, etc.
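That "technology inventing itself" dynamic can be illustrated with a toy model (the numbers are entirely hypothetical, just to show the shape of the curve): if each generation's gain is proportional to current capability, growth runs away from steady human-driven progress very quickly.

```python
# Toy model (hypothetical numbers): compare constant human-driven progress
# with recursive self-improvement, where each step's gain is proportional
# to current capability.

def human_driven(steps, gain=1.0):
    """Capability grows by a fixed amount per step."""
    c = 1.0
    for _ in range(steps):
        c += gain
    return c

def self_improving(steps, rate=0.5):
    """Each step's gain scales with current capability: c += rate * c."""
    c = 1.0
    for _ in range(steps):
        c += rate * c  # the system improves its own improver
    return c

print(human_driven(20))    # 21.0 -- linear
print(self_improving(20))  # ~3325 -- exponential takeoff
```

The exact rate doesn't matter; the point is that once improvement feeds back into the improver, the gap between the two curves becomes the whole story.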
This is why #1 on the list is really the most important (if not the only important) item on the list: because an artificial human-like neural network can do problem solving at scale. In comp.sci., this means that one can take the solver (in this case your neural network) and span it across ever-larger platforms. Since neural networks naturally scale given their mechanism of computing, entire areas of science (in fact, all of science and medicine) can be researched, and much of it solved, solely by computers.
This means that advancements in almost every area of science come essentially immediately. Why? Because an artificial intelligence simulation could perform the equivalent of a human lifetime of research in a few seconds. Overnight, massive technological advances would be made.
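A minimal sketch of what "spanning a solver across platforms" means (this is an illustration, not a real neural network -- the solve() function here is a hypothetical stand-in for one unit of research):

```python
# Minimal sketch: "scaling a solver" means fanning the same solve()
# routine out over many workers and gathering the results. Here each
# "platform" is just a thread; in practice it would be a cluster node.
from concurrent.futures import ThreadPoolExecutor

def solve(subproblem):
    """Hypothetical stand-in for one unit of research work."""
    return subproblem * subproblem

def solve_at_scale(subproblems, workers=4):
    """Span the same solver across workers; more workers, more throughput."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve, subproblems))

print(solve_at_scale(range(5)))  # [0, 1, 4, 9, 16]
```

The solver itself never changes; only the number of platforms it runs across does, which is exactly why throughput (and thus research speed) scales with hardware rather than with human labor.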
Now getting back to your point, I think where the confusion lies is that you're assuming that all of these artificial intelligences will be the same; they won't be; there's no need for them to be. You won't need a fully emulated human neural network for a day-to-day android. You don't need one for massive problem solving either.
The androids that would be getting 3D printed for everyday use would not be sentient, and not be equal.
Now, would there be AIs that are sentient? Yes. Why? Because you can't really prevent someone from doing this once human-like AI becomes ubiquitous. Someone would essentially "enable" the portions of the simulation that allow for sentience; and then the cat is out of the bag.
Moreover, eventually, human evolution at this point will likely blend with these technologies. I've talked about this before, when discussing the "digitization" of living human consciousness; or what's called mind uploading IIRC. In other words, when human beings reach the point that we can move our living consciousness off of a biological platform and into an artificial one (i.e., being conscious within the confines of a computer system), then the difference between these AIs and ourselves becomes non-existent.
Yes.
You even noted that there would come a time when we would actually marry them.
Yes absolutely.
Okay, so I then pointed out that you have given them complete control over production, made them sentient, and apparently made them equal (presumably in terms of rights) to humans.
So just to clarify:
Sentience is not a requirement for all androids. Sentient androids would effectively be human. Why wouldn't they be human? They have the equivalent of a human mind and human consciousness.
But I'm not saying that worker androids or even companion androids would be sentient: if they were, then obviously they could not be used for these purposes. Q-Tip, if they were sentient, they might not do the work... why would they?
In the example of a companion android that a person would marry: it might have some intelligence, cognitive ability, and even minimal self-awareness. But imagine taking something like the emotional equivalent of a dog's brain and coupling it with some logical routines to serve the android's function. For someone who is looking for a companion that does not have outward thoughts or desires, this should be ideal. Having a sentient companion android would be counterproductive and self-defeating. If this person can't get a woman in real life, what makes him think the android will want to stay with him if it's equivalent to a human being?
And then postulated -- why would they need us inferior flesh and bone types around anyway? And the thing is, you cannot rebut that argument, because neither you nor anyone else can actually know what those sentient, perpetually-running machines, physically indistinguishable from humans, might choose to do with us. It's simply impossible to know, because you will have turned production/development/programming of new androids (or whatever) over to them.
Q-Tip, I'm not arguing that there is no danger here; I'm arguing that it is inevitable.
Again, with respect to sentient androids, I don't expect many to be around, given the danger. But that's a different conversation from whether and when these things will likely happen. Again, there surely will be some legislative measures to protect against unethical use of AI - and the potential there is massive. However, it's important to note that once the genie is out of the bottle, we'll need to fully embrace this technological future. Trying to run from it would just lead to our demise.
But after I raise that, you then reversed course as if that's not really what you were saying at all:
Just to be clear, you quoted me saying this:
Gour: "To your point, I am not talking about AI governing humans in all aspects....But to my point, I'm not talking about artificial humans acting as slave labor; but instead, human-like neural networks using their cognitive ability for complex problem solving. These machines can be used autonomously and will be used. The risk is completely different than the one you describe since there is no concept of free will or self-awareness involved."
Yes, Q-Tip, what I'm saying here is exactly correct - I'm not reversing course. Re-read what I'm saying:
1) You will have sentient artificial intelligences in the world, yes, they will exist.
2) No, these intelligences will not be mass produced -- why would they be? Instead, they would be more of a byproduct of the research and development itself, and in certain circumstances you might consider creating a human AI for various purposes, but it wouldn't be a common occurrence. But the point here is that AI sentience will happen, because someone will do it. It's not a thing that can be prevented.
3) Sentient AIs cannot be made to do work; my argument has never been one of slavery. Human-like neural networks with human cognitive ability need not be sentient or self-aware to do problem solving. Those aspects of the brain's functionality need not be implemented or active for large-scale problem solving, which is the point.
Huh? I mean, they're going to be doing all the programming and building so that we don't have to do any -- right?
If by AI then yes; but again, not all AI is sentient. Sentience is not required to write a computer program or to build a house.
That's not "slave labor?"
No. I said this in the quote above that the AI that would be doing work, problem solving etc, would NOT be sentient. I think I've said this in a few posts now. Human-like neural networks designed for problem solving at scale: this does not entail sentience.
The android parked in my garage isn't really going to do whatever I tell it to? That's not what you were saying.
The android in your garage is not going to be sentient. That's exactly what I'm saying. Do you really think I'm advocating for slavery while simultaneously advocating AI having equal rights? C'mon Q-Tip..
And if we really are giving them control over all the means of production -- which is exactly what you're doing -- how are they not "governing" us?
Because there is no mechanism in their programming to govern. It's like saying Windows 10 governs us. It does its function.
Again Q-Tip, an AI that utilizes a neural network does not mean that the AI can operate outside of its programming. I am not talking about creating a class of slaves; these would be, in 99% of instances, thinking machines, but not sentient machines with free will.
Or maybe more to the point, how can we possibly prevent them from "governing" us if they so chose?
The android in your garage does not have free will Q-Tip; if he did, he wouldn't be sitting in your garage.
You then talk about them having "no concept of freewill or self-awareness",
Yes... that's the entire point.
while ignoring what you said about them providing full emotional satisfaction,
Providing full emotional satisfaction.. to their user. Moreover, emotions do not entail human-level self-awareness and free will.
Yes, someone will eventually want to marry their android.
You're confusing things here again..
A companion android, in this case, is not a sentient intelligence -- by definition... Think about it; if it were sentient, how would it perform the role of "companion" to an owner? If it's sentient, it very likely will want to leave at some point. Meaning it's not ideal for the user...
A sentient AI would be a living human being simply not on a biological platform -- so yes, these entities would have equal rights; they are effectively human beings as we might logically define what it means to be human.
That doesn't make any sense unless they have free-will or self awareness.
It makes complete sense if you understand how neural networks and the human brain work. Emotions are not products of sentience. Displaying emotions or simulating emotions can be done programmatically. You do not need human consciousness for ANY of the tasks described here.
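The claim that emotional display doesn't require consciousness can be sketched in a few lines. This is a deliberately crude illustration (the class name, events, and thresholds are all made up for the example): "emotion" here is just a state variable updated by fixed appraisal rules, with no awareness anywhere.

```python
# Crude illustration (not a real affect model): "emotion" as a plain
# state variable updated by hard-coded appraisal rules -- no sentience
# or self-awareness is involved anywhere in this loop.

class CompanionMood:
    def __init__(self):
        self.mood = 0.0  # -1.0 (distressed) .. +1.0 (content)

    def perceive(self, event):
        """Map events to mood shifts via a fixed lookup table."""
        shifts = {"greeting": 0.3, "insult": -0.5, "gift": 0.4}
        self.mood = max(-1.0, min(1.0, self.mood + shifts.get(event, 0.0)))

    def express(self):
        """Pick a display behavior purely from the current state value."""
        if self.mood > 0.2:
            return "smile"
        if self.mood < -0.2:
            return "frown"
        return "neutral"

bot = CompanionMood()
bot.perceive("greeting")
print(bot.express())  # smile
```

A real companion android would be vastly more sophisticated, but the principle is the same: emotional behavior is a mapping from inputs to displays, and a mapping doesn't need to be conscious.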
And if they truly are going to have control over all our production, including gathering of raw materials, automated assembly, research, etc., how the hell could we possibly know (or stop them even if we did) that they won't R&D that on their own?
Because they're not programmed to do so, and the AIs that program other AIs aren't programmed to veer beyond their own programming. It's a box with hard edges... Again, you are assuming that we're going to create a multitude of androids with free will -- that is not what I've described.
No one is arguing that we do this.
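The "box with hard edges" idea can be sketched as an agent whose only interface to the world is a fixed whitelist of actions baked in at construction time. (A toy sketch only -- actually containing a powerful AI is a hard open research problem, and the class and action names here are invented for illustration.)

```python
# Toy sketch of a "box with hard edges": the agent can only invoke
# actions that appear in a whitelist fixed at construction time.
# (Illustration only; real AI containment is far harder than this.)

class BoxedAgent:
    def __init__(self, allowed_actions):
        self._allowed = dict(allowed_actions)  # name -> callable, fixed

    def act(self, name, *args):
        """Refuse anything outside the whitelist -- there is simply no
        code path to actions the designers didn't include."""
        if name not in self._allowed:
            raise PermissionError(f"action {name!r} is outside the box")
        return self._allowed[name](*args)

agent = BoxedAgent({"assemble": lambda part: f"assembled {part}"})
print(agent.act("assemble", "chassis"))  # assembled chassis
# agent.act("govern_humans")  # -> PermissionError
```

In this picture, "governing" isn't forbidden by a rule the agent could break; it's absent from the agent's action space entirely, which is the sense in which Windows 10 "does its function" and nothing more.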
tl;dr: You can't simultaneously argue that they will perform all our manual labor, all our productive work, while gaining consciousness and becoming self-aware to the point of being suitable for marriage,
Who said anything about being suitable for marriage? I said people will marry them. That doesn't mean you or I might.
Moreover, I've said a few times now that androids doing labor will not be self-aware or have free will. You're insisting that I must be saying otherwise, while quoting me saying that I'm not.
while at the same time claiming they can't/won't acquire human failings and become a danger.
I'm not talking about having android slaves Q-Tip.. I think there is some confusion here on your part.
Re-read my posts.
EDIT: A page or so ago; I said the following addressing your initial concerns:
Gourimoko:
The purpose of building such a neural network is to understand human creative and cognitive ability. If you were to scale this up solely for computational purposes then you wouldn't simultaneously include areas of the brain dealing with emotion which largely lead to these disagreements to begin with.
I get what you're trying to say, but I think you might misunderstand -- you're not simply reproducing the human brain for the sake of it, you're building a simulation of it in a virtual machine so that you can scale cognitive ability and problem solving onto standing computing platforms.
This means that you could have a human-like cognitive capability scaled over silicon. That's not redundant -- that's a game changer... in every way imaginable.
...
Q-Tip, if you've read this portion of my post from a few days ago, it should be clear that I'm not talking about simply mass-producing new artificial people.
Hope this clears up any confusion.