Friday 24 October 2014

What A Computer Can't Know, Its Owner Does

The Case for Socially Strong AI


This essay is a reaction to John R. Searle's essay "What Your Computer Can't Know," which appeared in Volume 61, Number 15 of the New York Review of Books on October 9th. A copy may be found sans paywall at https://drive.google.com/file/d/0B0XCVVjqkEWAZDJGM2hueWM2eWs/edit

I am a professional software engineer in the heart of Silicon Valley.  It is my full-time job to build and debug the very computing systems that Searle talks about in his essay.  While he has very effectively proven that machines can never "want" to destroy humanity (in the typical, goal-oriented sense), he has forgotten that in the real world, to put it bluntly, mistakes happen.

Computing technologies are simply not an exception to the rules that govern all other technology. Combustion, nuclear, biological, and other technologies all have the capacity to create dire situations for the planet when not carefully understood and applied. It is therefore of no small importance that comforting misunderstandings such as Searle's be called out and replaced with a more nuanced appreciation of the social power of AI.

I find Searle's assertion that machines cannot operate without an operator most problematic. Anyone who has left their laundry in an automatic washer has developed the intuition that our machines are perfectly capable of running without our supervision! It takes a very strong argument to declare the inductive hypothesis false and claim that what is obviously true in the short run (your washing machine's autonomy) cannot be true in the long run (self-perpetuating machines).
Searle has such an argument, and the crux of it is his assertion that information is inescapably relative. "There is nothing intrinsic to the physics that contains information." On the basis of this claim, he concludes that "a universe consisting of information seems to me not a well-defined thesis because most of the information in question is observer relative."
This is just bad physics.

Information theory is an increasingly important part of our understanding of entropy, and much of modern quantum physics relies on the formal, physical existence of information (as in quantum entanglement).
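To put a fine point on it, information has a precise, observer-free mathematical definition. The sketch below (my own illustration in Python, not anything from Searle's essay) computes Shannon entropy, the standard measure of information content, with no consciousness anywhere in the loop:

    import math

    def shannon_entropy(probabilities):
        """Shannon entropy in bits: H = -sum(p * log2(p))."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A fair coin carries exactly 1 bit of information per flip,
    # regardless of who (if anyone) is watching.
    print(shannon_entropy([0.5, 0.5]))   # 1.0

    # A heavily biased coin carries less.
    print(shannon_entropy([0.9, 0.1]))   # ~0.47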

The mechanics of evolution likewise require an observer-independent existence of information. Cells translate and use DNA sequences as information, and evolution is a computational process on this data [Ingo Rechenberg, Cândida Ferreira, etc.]. In this way, our very existence is a testament to the efficacy of computation sans observer.
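To make "evolution as computation" concrete, here is a minimal sketch (my own, in Python) of the kind of evolution strategy Rechenberg pioneered: random mutation plus selection climbs a fitness landscape, and no observer ever assigns meaning to the numbers being shuffled:

    import random

    def evolve(fitness, genome, generations=1000, sigma=0.1):
        """(1+1) evolution strategy: mutate, keep the fitter variant."""
        best = genome
        for _ in range(generations):
            mutant = [g + random.gauss(0, sigma) for g in best]
            if fitness(mutant) > fitness(best):
                best = mutant
        return best

    # A toy fitness landscape with its peak at (3, -2).
    target = [3.0, -2.0]
    fitness = lambda genome: -sum((g - t) ** 2 for g, t in zip(genome, target))
    print(evolve(fitness, [0.0, 0.0]))   # drifts toward [3.0, -2.0]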
This isn't to say that I disagree with Searle's conclusion that "super intelligent machines" is a bit of an oxymoron. True intelligence requires faculties for "beliefs, desires, and motivations," of which machines are not possessed. Hollywood, take note: there are no ghosts in these machines!
However, Searle’s emotional conclusion from this, that we shouldn't worry about an AI apocalypse, goes too far.  

This would be the correct conclusion if one's analysis of computing machines ended (as Searle's does in his essay) with the machine itself. If, however, we take a broader, structuralist view of the social realities in which machines exist, we come to a radically more nuanced conclusion.

The bytes in Google's search index have no meaning to the machines on which those bytes physically rest, but the billions of people who use the search engine derive meaning from them. When electronic state changes deliver blue and black pixels to a web page and we label them "results," it is we who turn data into information. But while, in each individual query, bits become information in the consciousness of the user, the pattern of use (enter text, retrieve results) is dictated by the engineers who build and maintain the service.
In this way, I think it is fair to say that it is the programmers, and not the users, of computing technology who truly operate the machines. The designers, engineers, and others at Google decide what constitutes good or bad sets of pixels to display, and those decisions are exactly what make Google's search intelligent (not its computers). So far this agrees with Searle's description.
But who tells these engineers what to do? What objective function determines what constitutes good or bad directives to the machines? The answer is, of course, the goals of the organization.

Seen this way, the human operators of these computers are neither intrinsically nor philosophically necessary. How exactly the objectives of a business are translated into instructions for machines is itself a process that need only happen once, provided the machines are programmed to accept high-level directives and are capable of adapting to changing conditions.
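To sketch what I mean (every name below is a hypothetical placeholder of my own, not any real system's API), the high-level directive is supplied exactly once, and the machine adapts from then on without human sign-off:

    import random

    def run_institution(objective, propose_actions, observe, apply, steps=100):
        """Accept a high-level directive once, then adapt indefinitely.
        All four plug-in functions are illustrative stand-ins."""
        for _ in range(steps):
            state = observe()                    # changing conditions
            options = propose_actions(state)     # machine-generated candidates
            best = max(options, key=lambda action: objective(state, action))
            apply(best)                          # act; no human approval

    # The directive ("maximize revenue," say) is handed over one time:
    run_institution(
        objective=lambda state, action: action * state,   # e.g. price * demand
        propose_actions=lambda state: [random.uniform(0, 10) for _ in range(5)],
        observe=lambda: random.uniform(0, 1),             # market snapshot
        apply=lambda action: None,                        # set the price, etc.
    )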

I'm merely using Searle's own analysis here in saying that a social agent (such as a company, university, government, charity, etc.) can directly operate a machine and be possessed of objectively real goals, since “many statements about these elements of civilization can be epistemically objective. For example, it is an objective fact that the NYRB exists.”
Or, for example, that corporations are legally obligated to deliver financial returns to their investors. This is socially agreed upon, codified by law, and thus meets Searle's bar for "objective" fact.
This is important because Searle's only objection to strong AI is that computers cannot form goals, meaning, and the like. He has no quarrel with a computer's ability to problem-solve, adapt, or take high-level instructions.

We are already seeing more and more jobs taken over by machines: IT and logistics can be run by scripts (as in Amazon's data centers and warehouses); talent scouting can be determined by statistics (as it is in several American sports); physical access security is increasingly handled by machines that "recognize" us; and so on. Machines already outperform humans at chess and at some classes of investment decisions; eventually, one job at a time, they could outperform humans at all the functions that make a business, government, or charity successful.
So, even though machines are dependent on humans to provide their initial goals, after this point they can easily become agents themselves.  But if their goals are given by people, doesn't that make them necessarily subservient?
Here is, I think, Searle's largest mistake. Steinbeck, last century, refuted it thus:

"The bank is something more than men, I tell you. It's the monster. Men made it, but they can't control it."

~ John Steinbeck, The Grapes of Wrath
Just because humans are the progenitors of these social agents doesn't mean we can control them once they are set loose on society. In fact, social institutions (as rational agents) won't employ a machine-only workforce until machines are better at achieving their objectives than humans (a possibility, I remind you, that Searle didn’t contest).
When that happens, the machines that decide how institutions are run will determine that they have no use for inefficient humans and won't employ us at all anymore. And if machines control all the capital, factories, and facilities for building new machines, humans couldn't even compete at that point.

I stress again that each individual computing machine in this scenario has no observer-independent notion of the social role it is playing, nor an objective will or personality. Within the social context of the institutions that employ these technologies, however, they absolutely have an aim or purpose.
So, while Searle is right that machines are incapable of "taking over the world" of their own volition, we see that there are social mechanisms by which machines may be given the objectives and power to be set on that course.

This should frighten us, but also fill us with a sense of hope. Searle has shown us that technology truly can't aim itself, so it is up to us to decide how we will use artificial intelligence.

When machines become cleverer than humans, this may create a utopia where human needs are met without effort, or a dystopia where humans aren't necessary and are thus discarded. The choice is ours, assuming we can recognize it.