September 25, 2008

Machine Intelligence

Filed under: Uncategorized — alsuren @ 8:02 pm

Alex and I had an interesting discussion a while back about machine intelligence. It would be difficult to do justice to all the arguments this long after the fact, but we agreed that a useful thing to do would be for each of us to post an article at least outlining the conclusions we came away with. His post should be found at . If it ends up at or later, then he owes me the loan of some books 😛

It is my belief that computers are already intelligent in all of the same ways that humans are. I come at this from an empirical viewpoint, and I don’t claim that they are more intelligent than us. I think it would be ridiculous to rank intelligences, as there will always be a rock-paper-scissors argument that leaves the ranking undecided. I will not concern myself with scales and magnitudes here; otherwise we will always be late in arriving at the conclusion that computers are intelligent.

A simplified version of my claim is that it is impossible to devise a scientific test for any specific *type* of intelligence which the average human can pass and no computer can.

The obvious test for self-awareness is the mirror test: does an animal recognise itself in the mirror? Well, it’s *easy* to make a machine recognise itself in a mirror: simply have it learn the correlation between what it outputs and what it sees in the mirror.
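To show how little machinery this takes, here is a minimal sketch (the function names and the noise model are my own illustration, not anything from the original discussion): an agent emits random movements and correlates its own motor commands with what it observes. Its reflection tracks its commands; another agent does not.

```python
import random

def mirror_test(observe, steps=200, threshold=0.9):
    """Decide whether an observed signal is 'self' by correlating
    the agent's own motor commands with what it sees."""
    commands, observations = [], []
    for _ in range(steps):
        cmd = random.choice([-1.0, 1.0])   # emit a random movement
        commands.append(cmd)
        observations.append(observe(cmd))  # what appears in the mirror
    # Pearson correlation between outputs and observations
    n = len(commands)
    mc = sum(commands) / n
    mo = sum(observations) / n
    cov = sum((c - mc) * (o - mo) for c, o in zip(commands, observations))
    sd_c = sum((c - mc) ** 2 for c in commands) ** 0.5
    sd_o = sum((o - mo) ** 2 for o in observations) ** 0.5
    return cov / (sd_c * sd_o) > threshold

# A mirror reflects the agent's own movement (plus a little noise):
mirror = lambda cmd: cmd + random.gauss(0, 0.1)
# Another agent moves independently of our commands:
stranger = lambda cmd: random.choice([-1.0, 1.0])
```

The agent concludes “that’s me” only when its commands strongly predict what it sees, which is exactly the correlation-learning shortcut described above.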

Once my simplified claim is accepted, the next objection is that there is a human programmer imparting his intelligence to the computer. I don’t think I can quite do Alex’s argument justice, so you may need to read his post at this point. The idea is that it isn’t the computer that’s intelligent but its human programmer, and the computer program is just a manifestation of the human’s intelligence. While I don’t believe this condition can be tested for scientifically, we will address it all the same. The argument goes like this:

Would a human child raised by dogs pass an intelligence test? I’m not convinced that it would in all cases. If an untrained human can’t pass an intelligence test but a trained one can, this suggests that it’s not the human’s own intelligence that causes it to pass the test; rather, passing is simply a manifestation of its parents’ intelligence. A possible conclusion is that no humans are intelligent, and we only behave intelligently as a manifestation of some intelligent creator. Neither of us was happy with this conclusion, so another approach would be needed to claim that computers are not intelligent.

The point about deriving intelligence from beings other than one’s creator is a useful one, and worth exploring further. A proposed restriction was that more intelligence should be derived from one’s surroundings and experiences than from one’s creator. Once again, this leads to the problem of deciding whether one measure of intelligence is greater than another. If it’s measured in terms of the information required to record or recover the intelligence, most machine learning algorithms already pass this test: their programming is generally very simple and probability-based, but their corpus of knowledge is potentially very large, especially if the system’s graphical model is well connected, with large cliques.
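To make the “simple program, large corpus” point concrete (a toy of my own devising, not anything from the discussion): a bigram counter is a couple of lines of fixed code, yet the table of counts it learns grows with everything it reads, so almost all of the resulting behaviour is derived from experience rather than from the programmer.

```python
from collections import Counter

def train_bigrams(corpus):
    """A tiny, fixed learning rule: count adjacent-word pairs.
    The program is two lines; the learned knowledge (the counts)
    grows without bound as more data is experienced."""
    return Counter(zip(corpus, corpus[1:]))

words = "the cat sat on the mat the cat ran".split()
model = train_bigrams(words)
```

Even on this nine-word corpus the learned table already dwarfs the learning rule; feed it a library and the imbalance becomes enormous.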

I will concede that most machines are programmed with a model and are restricted to learning the parameters of that model, without changing its structure. The one kind of intelligence that we seem to possess which is hard to program into computers is the ability to learn new model structures. There is extensive research into this problem, but it is hard to test whether a machine is doing it, because the same results can be obtained by simply fitting a model with too many parameters.
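The testing difficulty can be illustrated with a toy example (again my own, for illustration): data truly generated by the simple structure y = 2x is reproduced perfectly both by that structure and by an interpolating polynomial with as many parameters as data points, so training fit alone cannot tell us whether a learner has discovered the structure or merely overfitted.

```python
def lagrange_predict(xs, ys, x):
    """An interpolating polynomial with one parameter per data point:
    it fits the training data exactly, whatever the true structure."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x for x in xs]  # data truly generated by y = 2x
# Both models reproduce the training data (essentially) perfectly:
err_simple = sum(abs(2 * x - y) for x, y in zip(xs, ys))
err_poly = sum(abs(lagrange_predict(xs, ys, x) - y) for x, y in zip(xs, ys))
```

Only behaviour on *new* inputs, or some measure of model complexity, could separate the two hypotheses, which is why observed results alone can’t certify structure learning.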

We then got on to whether it would be possible for a machine (or human) to completely comprehend itself. I proposed the following construction: if a being has complexity C, and in order to completely comprehend itself in its current state its complexity increases to no more than (1 + r)C, then in order to understand the self that understands its original self, its complexity increases to (1 + r + r^2)C, and so on in an infinite series. So long as r is less than 1 for this being, its complexity converges to a finite value. Alex argues that in order to understand the new self, it must re-understand the old self as well as the part that was bolted on to understand the old self. This makes its complexity after the second step (1 + r)(1 + r)C, and so on, which sends the complexity to infinity for any positive r.
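The two growth laws are easy to compare numerically (a quick sketch under the assumptions above, with C factored out):

```python
def additive_series(r, steps):
    """My version: 1 + r + r^2 + ... — each act of self-understanding
    adds a fraction r of the ORIGINAL complexity, so for r < 1 the
    total converges to the geometric-series limit 1 / (1 - r)."""
    return sum(r ** k for k in range(steps + 1))

def multiplicative_series(r, steps):
    """Alex's version: (1 + r)^steps — each step must re-understand
    everything so far, multiplying the current complexity by (1 + r),
    which diverges for any positive r."""
    return (1 + r) ** steps
```

For r = 0.5 the first series converges to 1/(1 − r) = 2 (times C), while the second exceeds any bound: the whole disagreement comes down to whether each round of self-understanding costs a fraction of the original self or of the current, already-enlarged self.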

It is around this point that it emerges that I subscribe to most of the monist, empiricist, and reductionist ideas. I will let Alex make his own statement about his philosophical subscriptions in his own post. It has also recently emerged (from discussions with Holly) that I have a utilitarian-like philosophy, but with an objection to the commonly held assumption that the aim is to increase the utility of the world. Including the utility of others in my own, and greedily optimising my own [expected] utility seems like one less assumption to make. Once you do this, it should be possible to concoct a set of priors, and a utility function which encompasses (or at least explains) almost all {theological, philosophical} {views, behaviour strategies}. A more detailed post on this will follow in time.



  1. Interesting. I haven’t time to respond to this in detail (*some* of us have real jobs to go to :p) but essentially: you’re not at all on firm ground wherever you refer to human intelligence or human intelligence tests. This is still a massively debated area of psychology! The assumption that there are different “types” of intelligence is, itself, unfounded, and plenty of psychologists would disagree with it. (Others would agree).

    Likewise, all consideration of intelligence tests is quite soft ground – lots of debate over what they actually do measure.

    If you’re really interested and have time, I strongly recommend finding Dr Nick Macintosh’s Part II Psychology lectures on Intelligence, in Mich term. He’s an awesome lecturer, too.

    Comment by Stuart — September 26, 2008 @ 8:09 am

  2. […] between David (a friend of mine and engineering student) and myself. It ought to be read alongside his post on the same topic, which takes a quite different perspective on many of the matters. As he states, […]

    Pingback by Machine Consciousness « Alex’s Blog — September 26, 2008 @ 6:22 pm

