Friday, 24 October 2014

What A Computer Can't Know, Its Owner Does

The Case for Socially Strong AI

This essay is a reaction to John R. Searle's essay "What Your Computer Can't Know," which appeared in Volume 61, Number 15 of the New York Review of Books on October 9th. A copy may be found sans paywall online.

I am a professional software engineer in the heart of Silicon Valley.  It is my full-time job to build and debug the very computing systems that Searle talks about in his essay.  While he has very effectively proven that machines can never "want" to destroy humanity (in the typical, goal-oriented sense), he has forgotten that in the real world, to put it bluntly, mistakes happen.

Computing technologies are simply not an exception to the rules that govern all other technology. Combustive, nuclear, biological, and other technologies all have the capacity to create dire situations for the planet when not carefully understood and applied. It is therefore of no small importance that comforting misunderstandings such as Searle's be called out and replaced with a more nuanced appreciation of the social power of AI.

I find Searle's assertion that machines cannot operate without an operator most problematic. Anyone who has left their laundry in an automatic washer has developed the intuition that our machines are perfectly capable of running without our supervision! It takes a really strong argument to declare the inductive hypothesis false and claim that what is obviously true in the short run (your washing machine’s autonomy), cannot be true in the long run (self-perpetuating machines).
Searle has such an argument, and the crux of it is his assertion that information is inescapably relative. "There is nothing intrinsic to the physics that contains information." On the basis of this claim, he concludes that "a universe consisting of information seems to me not a well-defined thesis because most of the information in question is observer relative."
This is just bad physics.

Information theory is an increasingly important part of our understanding of entropy, and much of modern quantum physics relies on the formal, proven existence of information (as in quantum entanglement).

The mechanics of evolution likewise require an observer-independent existence of information. Cells translate and use DNA sequences as information, and evolution is a computational process on this data [Ingo Rechenberg, Cândida Ferreira, etc.]. In this way, our very existence is a testament to the efficacy of computation sans observer.
This isn’t to say that I disagree with Searle’s conclusion that "super intelligent machines" is a bit of an oxymoron. True intelligence requires faculties for “beliefs, desires, and motivations,” of which machines are not possessed. Hollywood, take note: there are no ghosts in these machines!
However, Searle’s emotional conclusion from this, that we shouldn't worry about an AI apocalypse, goes too far.  

This would be the correct conclusion if the analysis of computing machines ended, as it does in Searle's essay, with the machine itself. If, however, we take a broader structuralist view of the social realities in which machines exist, we come to a radically more nuanced conclusion.

The bytes in Google's search index have no meaning to the machines on which these bytes physically rest, but the billions of people who use the search engine derive meaning from them. When electronic state changes deliver blue and black pixels to a web page and we label them "results," it is we who turn data into information. But even though, in each individual query, bits become information in the consciousness of the user, the pattern of use (enter text, retrieve results) is dictated by the engineers who build and maintain the service.
In this way, I think it is fair to say that it is the programmers, and not the users, of computing technology who truly operate the machines. The designers, engineers, and others at Google decide what constitutes good or bad sets of pixels to display, and those decisions are exactly what make Google's search intelligent (not its computers). So far this agrees with Searle’s description.
But who tells these engineers what to do? What objective function determines what constitutes good or bad directives to the machines? These are, in short, the goals of the organization.

Seen this way, the human operators of these computers are not intrinsically nor philosophically necessary. How exactly the objectives of a business are translated into instructions for machines is itself a process that need only happen once if the machines are sufficiently programmed to accept high-level directives and are capable of adapting to changing conditions.
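To make this concrete, here is a toy sketch (the function names and the objective are entirely hypothetical, my own illustration, not anything from Searle or any real system) of a machine that accepts only a high-level directive, an objective function, and adapts its own behavior with no further human instruction:

```python
import random

def run_agent(objective, state, steps=1000):
    """Toy agent: given only a high-level directive (an objective
    function), it adapts its behavior without further instruction."""
    best = state
    for _ in range(steps):
        candidate = best + random.uniform(-1, 1)    # try a small variation
        if objective(candidate) > objective(best):  # keep whatever scores better
            best = candidate
    return best

# The "organization" specifies only the goal, never the individual actions:
profit = lambda x: -(x - 3.0) ** 2    # a made-up objective, peaking at x = 3
print(run_agent(profit, state=0.0))   # the agent ends up near 3 unsupervised
```

The point is purely structural: the translation of goals into instructions happens once, in the objective, and everything after that is mechanical adaptation.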

I'm merely using Searle's own analysis here in saying that a social agent (such as a company, university, government, charity, etc.) can directly operate a machine and be possessed of objectively real goals, since “many statements about these elements of civilization can be epistemically objective. For example, it is an objective fact that the NYRB exists.”
Or, for example, that corporations are legally obligated to deliver financial returns to their investors.  This is socially agreed on, codified by law and thus meets Searle's bar for "objective" fact.
This is important because Searle’s only problem with strong AI is the computer’s inability to form goals, meaning, and the like. He has no quarrel with the computer’s ability to problem-solve, adapt, take high-level instructions, etc.

We are already seeing more and more jobs taken over by machines: IT and logistics can be run by scripts (as in Amazon's data centers and warehouses); talent scouting is determined by statistics (as in several American sports); physical access security is run increasingly by machines that “recognize" us. As machines now outperform humans at chess and some classes of investment decisions, machines could eventually (one job at a time) outperform humans at all the functions that are important to making a business, government, or charity successful.
So, even though machines are dependent on humans to provide their initial goals, after this point they can easily become agents themselves.  But if their goals are given by people, doesn't that make them necessarily subservient?
Here is, I think, Searle's largest mistake. Steinbeck, last century, refuted it thus:
The bank is something more than men, I tell you. It's the monster. Men made it, but they can't control it.

~ The Grapes of Wrath
Just because humans are the progenitors of these social agents doesn't mean we can control them once they are set loose on society. In fact, social institutions (as rational agents) won't employ a machine-only workforce until machines are better at achieving their objectives than humans (a possibility, I remind you, that Searle didn’t contest).
When that happens, the machines that decide how institutions are run will determine that they have no use for inefficient humans and won't employ us at all anymore. And if machines control all the capital, factories, etc. for building new machines, humans couldn’t even compete at that point.

I stress again that each individual computing machine in this scenario has no observer-independent notion of the social role it is playing, nor an objective will or personality. Within the social context of the institutions that employ these technologies, however, they absolutely have an aim or purpose.
So, while Searle is right that machines are incapable of "taking over the world" of their own volition, we see that there are social mechanisms by which machines may be given the objectives and power to be set on that course.

This should frighten us, but also fill us with a sense of hope. Searle has shown us that technology truly can't aim itself, so it is up to us to decide how we will use artificial intelligence.

When machines become more clever than humans, this may be a utopia where human needs are provided without effort; or this may create a dystopia where humans aren't necessary and are thus discarded.  The choice is ours--assuming we can recognize it.

Tuesday, 10 September 2013

What is real?

Well that, to quote our ex-president, depends on what your definition of "is" is.

Let us take one mundane definition of existence to start with: something exists if it is real. If a thing finds form or causality in the Universe, then it is said to exist.

This is quite a good definition of existence!  If you can point to it (an apple) or point to its effects (the wind, electricity), then it definitely is a thing.

But at second blush, that really is a pretty poor definition.  For starters it says nothing about math, or formal logic.   If you ask certain philosophers or mathematicians, they will hold up formal logic as somehow being even _more_ real than apples and electricity, for what universe is even possible without causation, implication and set theory?  There's an even more obvious hole in this definition too:  the Universe itself cannot be said to exist by that definition, for it certainly doesn't exist "inside" itself!

And so we are forced to ponder a reality greater than our universe. What thing can we say about both matter and energy (as above) that we can also equally say about universes and math?

The statement that leaps to my mind is: these things cannot be created nor destroyed.  

You can't make another universe in your kitchen [1], any more than you can destroy Algebra (much to the consternation of students everywhere).

This actually seems a rather good description, as it allows our Universe and matter inside it and logic outside it, to all happily exist within one definition of existence.

Unhappily, this says that an apple (which certainly is created and destroyed) doesn't _actually_ exist. But this isn't really a problem. In fact, it helps illuminate things:

The concept of Appleness is a form our mind applies to certain collections of atoms and not others (which are the things that actually exist).  This fits our scientific view of things quite nicely!  While apples certainly perform a nice function in trees and salads, this purely _biological_ and _sociological_ (economic, etc) existence shouldn't blind us to the fact that an apple actually exists only to the extent that it is composed of atoms (which do).  Which _also_ isn't to say that biological, etc function isn't important!  Apples are amazing!   We just shouldn't blind ourselves to the fact that apples are merely collections of atoms.

There is one further, aesthetic argument against this definition:  it's full of negations!  It only defines existence in terms of what can't be done, or what isn't.   To this I say: yes!  That's the point!  We set out originally to define what is: if I were to use "is" in my definition, that would be pretty circular wouldn't it?

And so we've actually uncovered an interesting fact about the nature and character of statements about Truth and Existence:  you can't make satisfactory positive definitions.  I've also demonstrated how satisfying a good negative definition can be.

THIS is my answer to those who charge that Science's (and The Buddha's, for that matter!) negative definitions of reality are nihilistic.  Negative definitions don't render all things meaningless.  They merely put the different layers of Truth in their proper order [2].

[1] - Well-read readers are aware of modern cosmology and will note that our Universe _has_ a definite starting point (the Big Bang) and are probably ready to dismiss me.  My response is that no matter which modern theory of the creation of the universe you believe in, in _none_ of those theories is it possible for you to make _another_ universe. This is what is meant by "can not be created".   If you're curious I've explained this for all three major scientific views below:

1)  In a "closed" universe which is its own cause (a cyclical big-crunch / big-bang), the big bang wasn't the time of creation, merely the point of re-birth.

2)  An "open" universe which is not subject to destruction (the big crunch) is still cause-less (i.e., without the ability to be created), even though it does have a definite starting time (the big bang). In this model, because time is a concept native to our Universe, you can't talk about what happened _before_ the Big Bang. And if there was no _before_ the Big Bang, there was nothing able to cause it to come into being.  The Universe simply sprang into being, without cause or reason.

3)  There is one further concept of the universe, which posits that universes _can indeed_ be created and destroyed within the concept of a further "multiverse" which is truly eternal and contains a kind of "foam" out of which universes bubble up.  If this is the truth, then our Universe _isn't_ real (by my definition) but the _Multiverse_ most certainly is.  Trivially moving "exists" up one rung doesn't negate the argument.

[2] -

Wednesday, 9 January 2013

True is the opposite of Useful

You come up with a theory.  You think that it's pretty cool.  It explains something about the world that you couldn't explain before.   Better yet, it gives an idea of something you can do about it.  You're pretty excited.

Then you learn something that seems to poke a hole in your neat little theory.  Crap.  Back to the drawing board.


The world is complex.  Really complex.

Any theory complex enough to fully describe a system has to be as complex as that system. So even if you come up with a perfectly "true" theory, you've gotten nowhere: you've gained no insight into the problem you're trying to solve because you haven't reduced it at all.  "True" implies "not useful."
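Compression makes this literal: a theory is a short description of data, and data with no pattern admits no description shorter than itself. A small sketch of the idea (the analogy is mine, loosely inspired by information theory):

```python
import os
import zlib

patterned = b"ABAB" * 1000      # 4000 bytes with a simple "theory" behind them
patternless = os.urandom(4000)  # 4000 bytes with no pattern to exploit

# A good theory is a reduction; compression is reduction made mechanical.
print(len(zlib.compress(patterned)))    # tiny: the pattern collapses
print(len(zlib.compress(patternless)))  # ~4000: the only "theory" of random
                                        # bytes is the bytes themselves
```

A "theory" that reproduced every detail of the patternless bytes would simply be the bytes: perfectly true, and perfectly useless.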

So we come up with imperfect theories, and we use them.

This extends to all theories, including the most general of all: words.

What exactly is a chair?  I dare you to try, but you can never define _exactly_ what is chair and what is not chair.   This is because your chair theory of the universe isn't true.  The universe doesn't give a crap what you sit on.

Chairness isn't "true."  But like any good theory it is certainly useful!  Not only does the chair theory explain what things are good for sitting on, it also tells the makers of chairs what they should make, helps you furnish a home that is comfortable to visitors, and allows concert halls to let in the right number of people.

So don't be discouraged if your theories are wrong. If it's useful: use it.   And just because your words have been useful, that doesn't make them true.  In fact:  it means they aren't.

Monday, 5 November 2012


Our default behavior as animals is pain-avoiding and pleasure-chasing.  [1]

Asceticism is roughly gaining the skill to control your animal nature. If we can rid ourselves of “if only” thinking and let go of desire (the theory goes) we accept and love the world as it is.

Tibetan Buddhism makes a life-long study of controlling our animal nature and doing no harm. But leading a human life is more than just being happy and doing no harm. This is a zero sum life. Life should be about actively creating good in the world.  [2]

To improve the world, step in and make the improvements that you have the power to make when you see something that could be better.

And so we have a conflict:
  1. A good life is one that is happy and improves the world.
  2. To be happy one must accept the world as it is.
  3. To improve the world one must not accept the world as it is.

How does one act to improve the world, without being attached to the result?

If you throw yourself 100% into something, do your best with no reservation, then even if you fail to make the change you envisioned you can’t feel bad because there was literally nothing more you could have done.

What if you chased the wrong goal? Similarly, if you can honestly say that you chose the goal you did because it was the single most important thing you could be doing, you will not feel ashamed for having done what you thought was right.

And so, to improve the world without being invested in the result (i.e., to remain unshakably happy in any outcome), one’s behavior must have a certain pattern:
  1. Be honest with yourself about the state of the world and your own power to change it. [4]
  2. Decide what is the most important improvement you can make with one immediate action (for a loose definition of “one”).
  3. With no hesitation or distraction, and with a sense of urgency and ruthlessness [3], do that one action completely and thoroughly.
  4. Evaluate the outcome. Watch the result and learn.
  5. Repeat.


This is the Japanese philosophy of Kaizen. It is interesting to note that, through only thinking about leading a good life, we have arrived at the central principles of agile software development.

It seems rather surprising that thinking through the nature of happiness and action in the most abstract way led to a concrete suggestion on how to develop software. Yet perhaps we shouldn’t be so surprised: developing software is, after all, a subset of “everything that you do”
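Since the five steps already read like an algorithm, here is a minimal sketch of the loop in code (the function names are mine, purely illustrative):

```python
def kaizen_loop(assess, choose_action, evaluate, iterations):
    """Run the five-step pattern: assess, choose, act, evaluate, repeat."""
    lessons = []
    for _ in range(iterations):            # 5. repeat
        world = assess()                   # 1. honestly assess the state of things
        action = choose_action(world)      # 2. pick the single most important step
        outcome = action()                 # 3. do it completely, without hesitation
        lessons.append(evaluate(outcome))  # 4. watch the result and learn
    return lessons
```

Read "iteration" as "sprint" and "evaluate" as "retrospective" and the resemblance to agile development is hard to miss.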

When you exercise regularly in a variety of ways because you want to be healthy and happy you become better in many physical ways. You look sexier, you can lift heavier things, you get tired less easily.

There is a temptation to exercise solely for one of these improvements. Perhaps you’ve seen someone who lifts weights just to get big. People who meditate because of stress, or follow Zen practices for professional success, are similarly stunted. Focusing on improving just one thing improves that thing without improving anything else, and that is wasted effort.

If you continuously look to improve the whole of your life, the benefits you gain will similarly bleed into every aspect of your life.

[1] - This is the “id” in Freudian psychology, also called the pleasure principle. If I had to phrase the behavior of free will in this framework, I would say that humans should be stagnation-averse and kindness-enthusiastic.
[2] – Thai and Tibetan Buddhists would argue that being an example or teacher is a solid improvement to the world, and I would agree. That is why I am sharing this with you :)
[3] – at least this is how Zen actions tend to look
[4] – “The state of the world” includes the state of you. Sometimes the best action you can take is gaining some knowledge or skill to improve your power to affect the world or the accuracy of your world-model.

Tuesday, 30 October 2012

Optimize your time

Most people think that getting the most work done per day is achieved by minimizing the number of distractions you have while at work. While having the right breakdown of time _during_ work is important, far more important is what you're doing when not working.

If you're working 8 hours per day, but only sleep 4 hours per day this is clearly not sustainable.

Society as a whole has decided that 8 hours per day, 5 days per week working and 8 hours per night sleeping is the right mix (with the rest "personal time"). This is probably pretty close to right, but if you are fortunate enough to be able to set your own hours, I highly encourage you to experiment with this yourself. Keep track of how many hours you're working, what you're doing when you're not working, and how focused and productive you feel.
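If you do run the experiment, the record-keeping can be a few lines of code. This sketch (the file name and column layout are just suggestions) appends one row per block of time:

```python
import csv
import datetime

def log_block(path, activity, hours, focus):
    """Append one time block to a CSV log: date, activity, hours, focus (1-5)."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), activity, hours, focus])

# One or two honest entries a day is enough to see your own patterns:
log_block("timelog.csv", "work", 6.5, focus=4)
log_block("timelog.csv", "exercise", 1.0, focus=5)
```

After a couple of weeks, averaging focus per activity tells you which mix of hours actually works for you.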

The result is that you come to realize that what you do for those 8 hours not working and not sleeping is super important. This is probably not surprising, but the corollary is: if you're at work and not feeling productive, STOP. Don't try to keep going and power through it. Creativity doesn't work that way.

Go home. Go jog.  Write a snarky blog post about how you feel ;)

You can come back to work later, and when you do you'll be amazed at how much more you can do.

Sunday, 17 June 2012


Just a quick note about the name of this blog:

The philosopher Eric Zinn, publicly thinking.

Many [ ] have realized that good thinking happens in private, particularly when you don't have anything better to be thinking about.  It seems to me that we have only a few such places left:  in the shower, on the toilet, and on a bus or train.

It is interesting to note that driving, while not the most demanding task, still engages too much thought to be a good spot for thinking.  This is why, even if public transit takes longer, I would recommend that you take it over driving yourself to work.

Similarly, don't read on the toilet, or play music while you shower. Leave yourself alone with your thoughts every now and then, and see where they take you.

The Inexhaustibility of Talent

Passion is what makes someone good at something.[1]  Since passion is what makes a great employee, and passion is an unlimited resource, it stands to reason that great employees are also an unlimited resource.

Of course passion isn't exactly enough for most jobs.  You also have to have skill.

While my experience is limited, I was amazed at how few people at Microsoft made me wonder why they were there. That's a company of over 90,000 people. I met no such person at Google. Google also turns down a lot of people I think are great. Both type I and type II errors stem from the difficulty of making good hiring decisions, not from the availability of talent.

Is this true for areas outside of tech? Perhaps Google attracts the best to the detriment of others? Tech is, however, the _most_ likely space for companies to run out of talent, due to the relative scarcity of good programmers compared to the market size. If huge tech companies are able to hire thousands of competent people, I don't see why a grocery store should have a hard time finding competent clerks.

The problem, really, is that companies don't understand that _every_ employee matters. They don't care about certain positions enough to bother hiring competent people, or, for that matter, treating their current employees well. We've all suffered from a run-in with an incompetent sales associate or an underpaid security guard.

Creating a great product starts, then, with hiring the best and creating a great place to work: one that encourages and rewards passion. This is a truth in a service-oriented economy that wasn't true when we were a manufacturing economy. How happy Foxconn workers are doesn't matter the same way that the happiness of the Genius Bar staff does, or the passion of Apple's designers for that matter. In many ways, employers are stuck in an old economy where passion didn't matter.

It is interesting to note that I reduced skill to happiness.  There is always more talent out there[2].  Keeping your workers happy attracts the talent, keeps them, and keeps them working well.   Because happiness is clearly inexhaustible, I argue that talent is as well.

[1] - You work hard and constantly improve if you care. Natural talent is only a multiplier, or worse a head start.
[2] - Even if you run into the talent wall, you can make schools that generate more talent. This is what Henry Ford did. More recently, Microsoft donated money to the University of Washington because UW generates good programmers. UW now has twice the number of CSE students, thus increasing the talent pool in the Puget Sound.