It seems quite likely that it will not be many years before we have machines which are, for most purposes, more intelligent than people.
It seems at least possible that some such machines will be both conscious and self-conscious, in the same sort of way that we are.
So is it proper to be making and breaking such machines? It is a thought which has cropped up in plenty of science fiction movies. I am thinking here of their human rights, rather than of the possibility that they might take over our world, perhaps getting rid of us in the process.
We breed lots of cattle in order to kill them for meat. But we kill them humanely, at least most of the time, and cattle, while conscious, are probably only barely self-conscious. Most of us do not have a problem with the meat. Most of us do not think that there is an issue with animals which are not mammals, or perhaps not vertebrates, never mind vegetables.
We breed lots of people. But we look after most of them with great care. We accord them human rights, certainly after they are more than a few hours or days old. Some people accord them human rights from the moment of conception, but that is going too far to my mind.
But it also seems to me that if one succeeded in making a machine which was exactly the same as us, in the sense that it had consciousness, self-consciousness, feelings, sensations, general purpose intelligence and general knowledge, then such a machine should be accorded human rights. It is worth bearing in mind that such a machine might grow, at least in the brain department, rather in the way that a human does: it does not come out of the box with consciousness, but it does, with enough up-time, acquire it. So one should not make and break them lightly. This would not have been the position in the ancient world, where people who came from a different culture or country from one's own barely qualified as people at all. They could be treated more or less as cattle - or as slaves.
It is also reasonably clear to me that neither the sort of computer which beat the world at chess nor the sort of computer which beat the world at Jeopardy! qualifies. The latter might have general purpose intelligence and general knowledge, which the former does not, but it does not have consciousness, self-consciousness, feelings or sensations.
However, lots of people are working on machines which do, or at least on the science and technology which such a machine would need, and I think it reasonably likely that they will succeed, at least in part, in, say, the next twenty years or so. Should we let them? I am reminded of the debate about nuclear physics back in the middle of the last century: was this something a decent scientist should work on, given its potential for evil? A debate which curiosity won, even if it has not yet killed the cat.
It is possible that the 'at least in part' qualification above will turn out to be all wrong, in that once one has succeeded in creating any sensation of the sort that humans have - for example, fear, pain, touch or taste - it will be but a short step to creating all the others; that the difficult bit is getting started.
I shall, along with all those people, continue to ponder in odd moments.
PS 1: some non-vertebrates, say octopuses, have quite sophisticated nervous systems. Will it turn out that they are getting on for as conscious as cows?
PS 2: 'curiosity killed the cat' is an odd phrase, in the sense that it is not at all clear why it means what it does, but one which Wikipedia tells me has been about since at least the sixteenth century. Wikipedia does not, however, tell me nearly enough about how the phrase came into being.
Reference 1: for the last mention of Jeopardy! see http://psmv2.blogspot.ca/2014/07/watson.html.