originally published January 23, 2013

Artificial intelligence — what’s coming

The term “Artificial Intelligence” means a computer or robot programmed to be as smart as a person.  So far it’s a pipe dream, but a lot of people think it can plausibly happen eventually.  The idea is a staple of science fiction, where it’s often taken for granted that a hundred years from now our machines will be as smart as many of us are, and might even be considered citizens with the same rights as people.

Is this notion realistic?  Is it possible?  Is it likely?  If it happens, what form will it take?  I think I may be able to help clarify these questions a bit.

For most of our history, the idea of a machine that could talk, or interact with the world as a free agent, was pure fantasy.  But once the computer age started to really get under way, around fifty years ago, people started to think that it might be doable.  They looked at the problems that stood in the way of it happening, and realized that they were no longer unsolvable problems.  Could a machine be made to hear, recognize sounds as words, understand words as sentences, and compose sentences in response?  Could a machine be made to see, recognize objects, and understand how those objects could be expected to behave?  Those early pioneers could not solve these problems, despite their best efforts, but they saw how they might be solvable in the future.

As a rule, they tended to tremendously underestimate the real difficulty of these problems.  It turns out that understanding sentences, even in written form, is an enormously complicated undertaking.  And you truly cannot do it without understanding the world that sentences talk about.  No short cuts will work.  Dealing with vision and physical objects is also a tough task.  A fairly simple machine can cope with a limited set of expected circumstances, such as a game of ping-pong, but it takes a lot more for that machine to be able to go outdoors.  It took a lot of failure and frustration to understand how tough the problems really were.

Even achievements of a much simpler and more limited nature, which everyone knew a computer should have an easier time with, proved tougher than expected.  Chess, for instance — one AI researcher made a famous bet against a chess master that a machine would beat him within a decade.  It took twice that long, and a third decade to beat the human champion.  That feat required a custom machine packed with chips designed to play chess and nothing else.  Yet another decade passed before ordinary PCs could play at a championship level.  And though nobody living can beat these programs consistently, a human being who made no mistakes could still outplay them.  Humans still have better strategic vision than any chess program — they fail only because it’s almost impossible for a human to play a whole game without a single tiny mistake, and once any little slip is made, the programs instantly capitalize on it.
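The way those programs instantly capitalize on a slip comes straight from exhaustive game-tree search.  As a toy illustration (not anything from a real chess engine), here is perfect play in a simple take-away game, found by negamax-style search; chess programs apply the same principle at vastly larger scale, with heuristics standing in for exhaustive search:

```python
# A toy illustration of how search-based programs punish every slip:
# perfect play in a simple take-away game (players alternate removing
# 1-3 stones; whoever takes the last stone wins), found by exhaustive
# negamax-style search.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones):
    """+1 if the player to move wins with perfect play, -1 if they lose."""
    if stones == 0:
        return -1  # the previous player took the last stone and won
    return max(-best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Choose the number of stones to take that gives the best outcome."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -best_outcome(stones - take))
```

In this game, any multiple of 4 stones is a lost position: every move from it loses against perfect play, and the search never gives the advantage back — which is exactly how a chess engine treats a human’s one small slip.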

Some games, though just as abstract as chess, are still too difficult for today’s computers to be competitive at.  Go, for instance.

So what about real-world tasks?  How far are we from a machine that can hold a real conversation and manage real objects?

For a long time, the results of decades of effort were pretty hopeless.  We all know how frustrating the typical voice-driven phone menu system is, and what a pathetic job Google Translate does.  But in just the last few years, researchers have finally started to get a real handle on these problems.  The two shining examples are Siri and the Google self-driving car.

Siri can still make blunders in comprehension, and self-driving cars are still incapable of understanding the basic difference between, say, a football and a skunk, or between a shopping cart and a lilac bush.  I would not want to ride in one of today’s self-driving cars.  But the fact that either of these things can do its job at all is a huge stride forward.  It shows that with some steady evolutionary improvement, the rest of the job can be accomplished.

So what does this mean?  Will we have, in our lifetimes, intelligent machines?  Yes, we will.  But having answered that with a yes, we need to clarify what that means.

Computers and robots are going to steadily improve their ability to understand language, and to understand the physical world we live in.  In time, this will reach a point where machines are capable of conversing and reasoning about the real world.  By most definitions, this will count as intelligence.  But at that point, machines are still likely to be utterly devoid of anything like consciousness.  The world they speak about will still exist to them only as an abstraction.  They’ll be able to talk about cars and trees and science and politics, but they’ll be unable to answer a question like “how are you doing today”, except maybe by bullshitting.

We tend to think of mental development in terms of how it works for us.  As we intuitively understand it, self-awareness of some sort comes first — we assume that even a baby can probably manage that to some degree.  Understanding of oneself in a physical relation to the world comes next (we learn this as toddlers), and the ability to think abstractly comes last, and only with the aid of education.  For machines, I would bet it’s going to be the other way around: abstraction is the easiest stage, interaction with the world is next, and self-awareness, or subjectivity, will be the most difficult.  Indeed, many still insist that it will always be impossible, that no matter how clever a machine becomes, it will never be capable of having conscious awareness of self in the way that we do.  And certainly that’s one area where the amount of progress we’ve achieved so far is zero.

(Oddly, in science fiction, it’s the early works that seem to understand that computers will have intelligence first, and awareness later, if ever.  The 1966 novel Colossus, for instance, gives a very chilling depiction of an automated system that outthinks and overrules the entire human race, seizing total power over humanity, with no understanding of what that means for people.)

Besides the profound difficulty of accomplishing it, a further obstacle is that there’s little short-term economic value to a machine having subjectivity.  It may even seem to have a negative value, by making the public nervous and uncomfortable about whether the machine might have an agenda of its own.  A lot of us will probably prefer our machines conversational but unconscious, like a Star Trek computer (which Google has taken as a model for what they hope to develop).  Some would profoundly fear intelligent machines, arguing that they could become an existential threat to human survival.

This reluctance may be a major obstacle. The transition from unconscious to “conscious” AI will probably have to correspond pretty closely with a transition from AI acting in a passive role — meaning that like today’s computers, they would just perform whatever tasks are assigned by their human owners — to an active one, in which they make independent decisions according to internal goals and values. I’m sure many of us will be a lot more comfortable relating to an artificial mind which acts as an obedient slave, than to one which acts as a peer and feels free to question our choices.

(I think this also illustrates what might be the biggest hurdle in achieving a computer mind with its own subjectivity: the problem of defining what goals and values would guide its actions. You’d have to strike a delicate balance between imposing rules from outside, and letting it find its own way... too much of the former and it can’t think freely; too much of the latter and it could easily withdraw into a permanently self-satisfied state with no remaining motivation.)

Despite this reluctance, and the confidence of many that a computer can never have genuine consciousness, I still think it’s inevitable that we’ll eventually have machines with some form of apparent subjectivity, which act independently instead of just serving human users on request. And incidentally, such computer minds might be far more beneficial to society than the passive and unconscious kind, in that they might be much less easily misused for short-sighted or destructive private goals. A computer that is conscious, if done right, would also have a conscience. One that doesn’t care what happens to the world as a result of its decisions would still be essentially passive in nature, and if in unscrupulous hands, might do tremendous damage to people who lack the resources to fight back on the same cognitive level.

Could such a machine really be conscious?  That computers can’t ever have consciousness in the way that we do may be true, but note that doing something in the way that we do is a very different matter from doing it as well as we do.  They won’t achieve it our way, but they’ll have to do it in some way.  And note that true intentional self-consciousness — introspection, that is — is not so easy for humans.  A lot of grown adults never learn to do it all that well, and many people devote years to various spiritual and meditative practices, or to therapy, in order to develop their abilities in this area to a higher level.

Some degree of introspective awareness is essential for functioning in the world as a responsible autonomous entity.  Those who fail badly at it can all too easily land in jail or a mental institution.  An independent machine that’s also bad at it, and makes poor choices because of it, will not be successful either.  Conversely, one that’s good at it might be more self-aware and honest than a person is capable of being.

Once this capability matures, I think arguments over whether it counts as real consciousness will come down to how narrow-minded each of us chooses to be in defining what we mean by “real”.

The most widely accepted criterion for whether AI has been truly achieved is the Turing Test.  To pass this test, a machine has to be capable of holding an extended conversation — at least in written form — well enough that people on the other end can’t tell whether the “person” they’re conversing with is alive or artificial.  In order to converse as well as a person, and especially to converse like a person, some form of subjectivity will be essential.  It might well be fake subjectivity, in which the computer adopts a false pose of having the feelings and drives of a living animal.  But real introspection of some sort will be needed too, for dealing with choices responsibly, and in principle there is no obstacle to an artificial brain having full access to analyzing its own motives and decisions — perhaps far more completely than we are able to, providing exhaustive detail where we can only give a summary.

Put all this together and it sounds like we’re going to have a future where artificial minds are essentially like people — where manufactured beings will have names and identities and personalities and function alongside us as our fellows.  But no, I don’t think it’s going to be like that.

First of all, with rare exceptions, there aren’t going to be machines that are on our level.  Any machine that isn’t hopelessly behind us will be hopelessly far ahead.  Even today, there are almost no areas where machine abilities are about the same as human ones — there are only areas where they’re incompetent, and areas where they’re better than we can ever be.  Progress consists of slowly moving particular skills from the first category to the second: once a machine gets to be as good as we are at something, it’s well beyond that a few years later.  This means that by the time we have any machine that manages not to be an obvious failure at humanlike thought and discourse, it will already be superhuman in most respects, excepting only whatever one quality was holding it back up to that point.  Comparing such devices to a human mind would be like comparing an airplane to a bird — the bird may still have abilities the airplane is a long way from matching, but in the areas that matter to us, planes are so far ahead that, at least in economic terms, birds have little to offer.

For this reason, the day when we face a crisis because most jobs are performed better by machines is going to arrive long before we get to artificial consciousness.  The crudest beginnings of real AI will be enough to replace millions of skilled human workers.

The second way they won’t be personlike is that artificial intelligences will not be individuals.  They may have cute names and they might have colorful personalities, but if you talk to two different machines, they will quite likely share the same memories of your last conversation.  Their minds will be so interconnected that it makes no sense to draw a line where one ends and another begins.  They will probably borrow pieces of each other’s memory and learning and talents on a constant basis — they’ll even hand off pieces of their thinking to their neighbors and colleagues, when one is busy and another has free time.  Why not, when everything is networked together?  (Maybe they’ll work out some kind of economy, a barter system for trading thought and memory.  Or maybe they won’t bother.)  They would be like a race of telepaths.  They’d be more like a Borg collective than like a species of separate independent individuals.

Some people may well try to build isolated artificial minds, protected from outside contamination in the interests of, say, military secrecy.  But they’ll always be at a disadvantage relative to the networked minds, and most of us won’t interact with them.

It might be possible to intentionally build machines that are individuals, and are humanlike in many ways.  We could build an artificial person that “grows up” and learns about the world through experience just as we do.  Such an artificial person might understand us better than any networked AI does, and even have a sense of consciousness fairly close to our own.  But it would be a strange unnatural thing in the machine world, and largely of just academic interest.  It wouldn’t be where the action is.

A third difference is that most AIs — barring the special cases just mentioned — won’t engage with the physical world in anything like the same way we do.  Our brains connect to the world, and with other people, only through our senses.  Our ability to communicate is an overlay that depends on those physical senses in order to function.  For computers, it’s communication links that are basic and physical senses that are added on top.  We understand things primarily through sight and touch and so forth, but their primary conception of things will be in terms of communicated data.  When hooked up to cameras and microphones to receive sight and sound, that’s just an additional data input — rather in the way that for us, conversely, a spoken word is really just another sound that we hear.  When an AI is hooked up to a robot arm or a steering wheel, it will use it not as we use our bodies, but as we use a telephone or a screwdriver that we pick up for a minute’s use and then put down again.  Perhaps in time we’ll embed a semi-isolated AI mind into a robot body that is all its own, to make a true artificial person.  But the earlier robots will have to live in network-land, and even once such a self-contained robot is possible, it will probably still depend on the networked minds for guidance and advice, simply because it has far less cognitive horsepower than they do.

What about the possibility of transferring a human mind from a brain to a machine?  Some say that judging by Moore’s Law of the growth of computing power, a machine that can emulate a brain is maybe 30 years away.  There are two problems with this: one is that every time we get a clearer idea of what it would take to mimic a brain, the goal turns out to be further away than we previously thought.  (For instance, researchers are now discovering that glial cells, long thought to be just the matrix that neurons are embedded in, actually play a role in intelligence.)  The other is that the growth rates projected by Moore’s “Law” may become unsustainable much sooner.  We can’t keep making circuit elements smaller forever; the fundamental size of atoms and the quantum uncertainty of electrons are getting uncomfortably close to the scales today’s circuits are built on.  Nor can we keep making them faster; you’ll notice that CPU clock speeds have nearly stopped rising over the last five years.  Building up circuits into three dimensions is a breakthrough that keeps stubbornly refusing to get broken through.  Further development of computing power is going to have to rely heavily on massive parallelism and networking, and we still struggle to get efficient work out of such architectures for any but the most uniform and repetitive tasks.  Put this together and an artificial human brain may be a lot further away than some would claim.  And this means that the artificial intelligences we do have are going to be not human at all.
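That “30 years away” figure can be reproduced with bare arithmetic.  The sketch below is only an illustration — the brain-compute estimate, the machine figure, and the doubling period are all assumed round numbers, and the point of the paragraph above is precisely that any of them may fail to hold:

```python
# Back-of-envelope check on the "30 years away" projection.  Every
# number here is a hypothetical round figure chosen for illustration:
# the brain's compute requirement and Moore's-Law cadence are disputed.
import math

brain_ops_per_sec = 1e18    # one commonly cited (and contested) brain estimate
machine_ops_per_sec = 1e13  # a generously rated 2013 workstation
doubling_period_years = 2   # the classic Moore's Law doubling time

doublings = math.log2(brain_ops_per_sec / machine_ops_per_sec)
years = doublings * doubling_period_years
print(f"{doublings:.1f} doublings needed, roughly {years:.0f} years if the trend holds")
```

With these assumed inputs the gap is about 17 doublings, or a little over 30 years — which shows how sensitive the projection is: change the brain estimate by a factor of a thousand in either direction and the date moves by two decades.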

Is this all coming fairly soon?  “Singularity” weenies think full-blown artificial intelligence will be here in 25 years or less.  (It was a discussion with some of them, and their buddies the “Transhumanists”, that got me thinking about this topic, by the way — that’s why I’ve written this.)  I am more pessimistic about how long it’ll take — I think they’re of the same breed that overhyped AI two generations ago.  As an example of such overhype, recall that Vernor Vinge — the author most cited by those wishing to put singularity optimism (if you can call it that) on a sound scholarly basis — predicted, twenty years ago, that it would happen within thirty years.

But on the other hand, I certainly can’t rule out the possibility of it happening within 25 years of this writing.  Even if not, you can’t stretch the date out all that much further.  Assuming no global collapse of civilization, I have to figure that 75 years will be way more than enough.

And this raises the toughest question of all: if the world is in the hands of artificial minds smarter than ours, what do we do?  How do we make lives for ourselves in such a place?  Are we to be nothing but pampered pets, irrelevant to further shaping the future?  Might we be irrelevant, perhaps even to the point that machines might conclude they could do better without assholes like us around?  (Yes, the “existential threat” crowd could be right!)  Simply put... how can we keep up?

There’s no way around it: we’re going to have to make ourselves smarter too.  We’re going to have to make new intelligence technology part of our own brains, as well as using it to create intellects in our appliances.  We’ll have to expand our minds.

This form of artificial intelligence (if that’s what it becomes), unlike the other kind, would incorporate consciousness from the beginning.  It wouldn’t threaten humanity, because it would be humanity.  And it could be achieved by small increments, starting from gadgets such as augmented-reality glasses.

The idea that our current form is some kind of endpoint of evolution, and there’s no need for further development... it doesn’t work.  The road continues on up, and the pressure to keep climbing is going to get more intense than ever.

In a way I’m rather relieved that I’ll be too old to see it.

(This topic continues in a second article here.)



send mail to Paul Kienitz