Previously in series: Building Something Smarter

Humans in Funny Suits inveighed against the problem of "aliens" on TV shows and movies who think and act like 21st-century middle-class Westerners, even if they have tentacles or exoskeletons.  If you were going to seriously ask what real aliens might be like, you would try to make fewer assumptions - a difficult task when the assumptions are invisible.

I previously spoke of how you don't have to start out by assuming any particular goals, when dealing with an unknown intelligence.  You can use some of your evidence to deduce the alien's goals, and then use that hypothesis to predict the alien's future achievements, thus making an epistemological profit.

But could you, in principle, recognize an alien intelligence without even hypothesizing anything about its ultimate ends - anything about the terminal values it's trying to achieve?

This sounds like it goes against my suggested definition of intelligence, or even optimization.  How can you recognize something as having a strong ability to hit narrow targets in a large search space, if you have no idea what the target is?

And yet, intuitively, it seems easy to imagine a scenario in which we could recognize an alien's intelligence while having no concept whatsoever of its terminal values - having no idea where it's trying to steer the future.

Suppose I landed on an alien planet and discovered what seemed to be a highly sophisticated machine, all gleaming chrome as the stereotype demands.  Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish?  Can I guess that the machine's makers were intelligent, without guessing their motivations?

And again, it seems like in an intuitive sense I should obviously be able to do so.  I look at the cables running through the machine, and find large electrical currents passing through them, and discover that the material is a flexible high-temperature high-amperage superconductor.  Dozens of gears whir rapidly, perfectly meshed...

I have no idea what the machine is doing.  I don't even have a hypothesis as to what it's doing.  Yet I have recognized the machine as the product of an alien intelligence.  Doesn't this show that "optimization process" is not an indispensable notion to "intelligence"?

But you can't possibly recognize intelligence without at least having such a thing as a concept of "intelligence" that divides the universe into intelligent and unintelligent parts.  For there to be a concept, there has to be a boundary.  So what am I recognizing?

If I don't see any optimization criterion by which to judge the parts or the whole - so that, as far as I know, a random volume of air molecules or a clump of dirt would be just as good a design - then why am I focusing on this particular object and saying, "Here is a machine"? Why not say the same about a cloud or a rainstorm?

Why is it a good hypothesis to suppose that intelligence or any other optimization process played a role in selecting the form of what I see, any more than it is a good hypothesis to suppose that the dust particles in my rooms are arranged by dust elves?

Consider that gleaming chrome.  Why did humans start making things out of metal?  Because metal is hard; it retains its shape for a long time.  So when you try to do something, and the something stays the same for a long period of time, the way-to-do-it may also stay the same for a long period of time.  So you face the subproblem of creating things that keep their form and function.  Metal is one solution to that subproblem.

There are no-free-lunch theorems showing the impossibility of various kinds of inference, in maximally disordered universes.  In the same sense, if an alien's goals were maximally disordered, it would be unable to achieve those goals and you would be unable to detect their achievement.

But as simple a form of negentropy as regularity over time - that the alien's terminal values don't take on a new random form with each clock tick - can imply that hard metal, or some other durable substance, would be useful in a "machine" - a persistent configuration of material that helps promote a persistent goal.

The gears are a solution to the problem of transmitting mechanical forces from one place to another, which you would want to do because of the presumed economy of scale in generating the mechanical force at a central location and then distributing it.  In their meshing, we recognize a force of optimization applied in the service of a recognizable instrumental value: most random gears, or random shapes turning against each other, would fail to mesh, or fly apart.  Without knowing what the mechanical forces are meant to do, we recognize something that transmits mechanical force - this is why gears appear in many human artifacts, because it doesn't matter much what kind of mechanical force you need to transmit on the other end.  You may still face problems like trading torque for speed, or moving mechanical force from generators to appliers.

These are not universally convergent instrumental challenges.  They probably aren't even convergent with respect to maximum-entropy goal systems (which are mostly out of luck).

But relative to the space of low-entropy, highly regular goal systems - goal systems that don't pick a new utility function for every different time and every different place - that negentropy pours through the notion of "optimization" and comes out as a concentrated probability distribution over what an "alien intelligence" would do, even in the "absence of any hypothesis" about its goals.

Because the "absence of any hypothesis", in this case, does not correspond to a maxentropy distribution, but rather an ordered prior that is ready to recognize any structure that it sees.  If you see the aliens making cheesecakes over and over and over again, in many different places and times, you are ready to say "the aliens like cheesecake" rather than "my, what a coincidence".  Even in the absence of any notion of what the aliens are doing - whether they're making cheesecakes or paperclips or eudaimonic sentient beings - this low-entropy prior itself can pour through the notion of "optimization" and be transformed into a recognition of solved instrumental problems.

If you truly expected no order of an alien mind's goals - if you did not credit even the structured prior that lets you recognize order when you see it - then you would be unable to identify any optimization or any intelligence.  Every possible configuration of matter would appear equally probable as "something the mind might design", from desk dust to rainstorms.  Just another hypothesis of maximum entropy.

This doesn't mean that there's some particular identifiable thing that all alien minds want.  It doesn't mean that a mind, "by definition", doesn't change its goals over time.  Just that if there were an "agent" whose goals were pure snow on a television screen, its acts would be indistinguishable from chance.

Like thermodynamics, cognition is about flows of order.  An ordered outcome needs negentropy to fuel it.  Likewise, where we expect or recognize a thing, even so lofty and abstract as "intelligence", we must have ordered beliefs to fuel our anticipation.  It's all part of the great game, Follow-the-Negentropy.


I tend to think aliens shaped by natural selection will exhibit many of the same neurological adaptations that we do.

Suppose I landed on an alien planet and discovered what seemed to be a highly sophisticated machine, all gleaming chrome as the stereotype demands. Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish? I have no idea what the machine is doing. I don't even have a hypothesis as to what it's doing. Yet I have recognized the machine as the product of an alien intelligence.

Carefully, Eliezer. You are very, very close to simply restating the Watchmaker Argument in favor of the existence of a Divine Being.

You have NOT recognized the machine as the product of an alien intelligence. You most certainly have not been able to identify the machine as 'well-designed'.


You are very, very close to simply restating the Watchmaker Argument in favor of the existence of a Divine Being.

Not at all. The problem with the Watchmaker Argument wasn't the observation that humans are highly optimized; it was the conclusion that, therefore, it was God. And God is a very different hypothesis from alien intelligence in a universe we already know has the capability of producing intelligence.

I have no idea what the machine is doing. I don't even have a hypothesis as to what it's doing. Yet I have recognized the machine as the product of an alien intelligence.

Are beaches the product of an alien intelligence? Some of them are - the ones artificially constructed and maintained by humans. What about the 'naturally-occurring' ones, constructed and maintained by entropy? Are they evidence for intelligence? Those grains of sand don't wear down, and they're often close to spherical. Would a visiting UFO pause in awe to recognize beaches as machines with unknown purposes?

I see that the sentence noting how this line of argument comes dangerously close to the Watchmaker Argument for God has been edited out.

Why? If it's a bad point, it merely makes me look bad. If it's a good point, what's gained by removing it?

"For there to be a concept, there has to be a boundary. So what am I recognizing?"

I think you're just recognizing that the alien artifact looks like something that wouldn't occur naturally on Earth, rather than seeing any kind of essence. Because Earth is where we originally made the concept, and we didn't need an essence there; we just divided the things we know we made from the things we know we didn't.

There is no way to tell that something is made by 'intelligence' merely by looking at it - it takes an extensive collection of knowledge about its environment to determine whether something is likely to have arisen through simple processes.

A pile of garbage seems obviously unnatural to us only because we know a lot about Earth nature. Even so, it's not a machine. Aliens concluding that it is a machine with an unknown purpose would be mistaken.

In order to figure out if it's made by intelligence, you need to figure out how likely it is that natural processes would result in it, and how likely it is that intelligence would result in it. Working out the former, though far from trivial, isn't as interesting and isn't what he's wondering about. He's wondering about how to do the latter.

Can we stop deleting Caledonian's references to the fact that his comments are being deleted/altered?

Censorship is a form of bias, after all.

Consider yourself lucky that he's still on the blog. I'm tired of putting up with his Stephen Jay Gould-like attempts to pretend that various issues have never been discussed here and that he's inventing them all on his own.

Can I guess that the machine's makers were intelligent, without guessing their motivations?

We can guess optimization, but I'd avoid unconsciously assuming it wasn't built by an unintelligent optimizer, such as some weird alien evolutionary process or non-intelligent creature/hive, without more extra-terrestrial data.

It is impossible to determine whether something was well-designed without speculating as to its intended function. Bombs are machines, machines whose function is to fly apart; they generally do not last particularly long when they are used. Does that make them poorly-made?

If the purpose of a collection of gears was to fly apart and transmit force that way, sticking together would be a sign of bad design. Saying that the gears must have been well-designed because they stick together is speculating as to their intended function.

I do not see what is gained by labeling blind entropy-increasing processes as 'intelligence', nor do I see any way in which we can magically infer quality design without having criteria by which to judge configurations.

In my opinion, EY's point is valid—to the extent that the actor and observer intelligence share neighboring branches of their developmental tree. Note that for any intelligence rooted in a common "physics", this says less about their evolutionary roots and more about their relative stages of development.

Reminds me a bit of the jarred feeling I got when my ninth grade physics teacher explained that a scrambled egg is a clear and generally applicable example of increased entropy. [Seems entirely subjective to me, in principle.] Also reminiscent of Kardashev with his "obvious" classes of civilization, lacking consideration of the trend toward increasing ephemeralization of technology.

Maybe you addressed this and I'm just missing it, but what you're describing seems to be more generally a way to detect an optimization process, rather than necessarily an intelligent one.

Earlier I said we are seeing things that are like what we make. But that's not a very useful definition implementation-wise.

My own approach to implementation is to define intelligence as the results of a particular act - "thinking" - and then introspect to see what the individual elements of that act are, and implement them individually.

Yes, I went to Uni and was told intelligence was search, and all my little Prolog programs worked, but I think they were oversimplifying. They were unacknowledged Platonists, trying to find the hidden essence, trying to read God's mind, instead of simply looking at what is (albeit through introspection) and attempting to implement it.

All very naive and 1800s of me, I know. Imagine using introspection! What an unthinkable cad. Well pardon me for actually looking at the thing I'm trying to program.

Doesn't this show that "optimization process" is not an indispensable notion to "intelligence"?

I don't see much here that screams "intelligence" - rather than "adaptive fitness". Though it is true that with enough evidence you could probably distinguish between the two.

I wonder if there's an implication that intelligence is being used to mean things that require effortful conscious thought for us.

Imagine a species that has very quick minds but less spatial sense than we do. They can catch and throw accurately, but only by thinking as they do it. They would see baseball as much more evidence of intelligence than we do.

Or a species with much more innate mathematical and logical ability than we have-- they might put geometry on the same level that we put crows' ability to count.

Is a beehive evidence of intelligence? How about an international financial system?

You mentioned earlier that intelligence also optimizes for subgoals: tasks that indirectly lead to terminal value, without being directly tied to it. These subgoals would likely be easier to guess at than the ultimate terminal values.

For example, a high-amperage, high-temperature superconductor, especially with significant current flowing through it, is highly unlikely to have occurred by chance. It is also very good at carrying electrons from one place to another. Therefore, it seems useful to hypothesize that it is the product of an optimization process, aiming to transport electrons. It might be a terminal goal (because somebody programmed a superintelligent AI to "build this circuit"), or more likely it is a subgoal. Either way, it implies the presence of intelligence.

Well either the big metal gleaming thing was designed or it wasn't.

If it wasn't, it occurred "naturally" - that is, it was constructed through basic physical phenomena. I feel I have a relatively sound understanding of the universe, backed up by years of research done by fellow humans, and can see no way a gleaming metal machine-like object, full of clockwork and electrical cable, can occur naturally. So I have to reach one of two conclusions: Either my (and most likely humanity's) understanding of the universe is completely wrong, or the big metal thing was designed.

To me it seems clear which is the more likely scenario, though maybe I am missing a point or two about unknowable priors?

Anyway, this is how I would deduce the machine was designed - not through an understanding of optimization pressures. I feel this is a much more natural way to do it. Things that are created by design are merely those things that "aren't not created by design".

Either my (and most likely humanity's) understanding of the universe is completely wrong, or the big metal thing was designed.

That is essentially Paley's argument from design - the one which Darwin proved to be a bad argument.

Distinguishing between designed and "designoid" objects is often possible - but it can take some work.

Gregory: Either way, it implies the presence of intelligence.
Scott: If it wasn't, it occurred "naturally" - that is, it was constructed through basic physical phenomena.

Probably, but not necessarily. We've already met on our own planet a non-intelligent optimization process that's built gears to "whir rapidly, perfectly meshed", and electricity generators and conductors for both data transmission and delivery of large currents, and surfaces that gleam in bright colours, and even built things using metals.

While reading on the topic of intelligent design, I stumbled upon a distinction between complexity and "unnaturalness" [can't remember the actual word they used]. The gist of it was, a pile of mud is incredibly complex -- it would require an absurd amount of information to create an exactly equal pile of mud, to get every grain of sand in the right place and every speck of muck as well. Sure, the information contained in a pile of mud is almost certainly random gibberish, but it is there nonetheless.

Conversely, consider a single crystal of salt in the shape of a perfect dodecahedron. The information content of this is nearly null, the shape is simple and even the arrangement of the atoms is regular and repetitive. In perhaps a few sentences you could specify it so exactly that it could be replicated atom for atom.

Despite its simplicity, the second example is undeniable evidence of intelligent design, as salt's crystal structure would make a cube, and there really isn't any remotely likely way for a natural system to shape things into dodecahedrons, much less a perfect one. As for the pile of mud, unless we have reason to believe some intelligence carefully arranged the mud into the code for a really long book or something, we can consider it equivalent to any other pile of mud, and we know a common natural process that makes mud. The point is, we can recognize the fingerprints of intelligence in very simple objects, on the basis of what sort of process would be necessary to create the object, not on the complexity of the object itself.

(As to the application to living things, it boils down to arguments about unknown primitive replicators of unknown complexity and likelihood)
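The description-length intuition here can be checked with any off-the-shelf compressor. Here is a minimal sketch, with a repetitive byte string standing in for the regular crystal and random bytes standing in for the mud - stand-ins chosen for illustration, not measurements of real salt or mud:

```python
import os
import zlib

size = 100_000
crystal = b"NaCl" * (size // 4)   # perfectly regular, repetitive "crystal"
mud = os.urandom(size)            # maximum-entropy "mud": random bytes

# Compressed length approximates how much information is needed to specify
# each object exactly.
print(len(zlib.compress(crystal, 9)))  # a few hundred bytes
print(len(zlib.compress(mud, 9)))      # slightly more than 100,000 bytes
```

The regular "crystal" needs only a short description, while the "mud" admits no shorter description than itself - which is the sense in which the pile of mud is complex above. And, as the comment argues, neither number by itself says anything about design.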

a pile of mud is incredibly complex -- it would require an absurd amount of information to create an exactly equal pile of mud

Using this definition, everything containing the same number of atoms would be equally complex; you have to specify where each atom is. This does not feel correct. The authors modified the word complexity to something meaningless; and it most likely did not happen accidentally.

Using this definition, everything containing the same number of atoms would be equally complex; you have to specify where each atom is. This does not feel correct. The authors modified the word complexity to something meaningless; and it most likely did not happen accidentally.

Fixing this problem is harder than complaining about it. A formal definition that captures intuitive notions of complexity seems to be lacking.

WRT VB's original comment: surely this can't be true. If two objects A and B contain the same number of atoms, and A's atoms are in a loose irregular arrangement with many degrees of freedom, and B's are in a tight regular arrangement with few degrees of freedom, specifying the position of one atom in B tells me much more about the positions of all the other atoms than specifying the position of one atom in A does. It seems to follow that specifying the positions of all the atoms in B, once I've specified the regularity, requires a much shorter string than for A.

But that said, I've always been puzzled by the tendency of discussions of the information-theoretical content of the physical world, especially when it comes to discussions of simulations of that world, to presume that we're measuring all dimensions of variability.

Specifying a glob of mud in such a way as to reproduce that specific glob of mud, and not some other glob of mud, requires a lot of information. Specifying a glob of mud in such a way as to reproduce what we value about a glob of mud requires a lot less information (and, not incidentally, loses most of the individual character of that glob, which we don't much value).

The discussion in this thread about the complexity of a glob of mud seems in part to be eliding over this distinction... what we value about a glob of mud is much simpler than the entirety of that glob of mud.

My reply to cj applies to this as well.

Using this definition, everything containing the same number of atoms would be equally complex; you have to specify where each atom is.

Not really. You can describe a diamond of pure carbon-12 at 0 K with much less information than that. (But IAWYC -- there should be some measure of ‘complexity I care about’ by which music would rank higher than both silence (zero information-theoretical complexity) and white noise (maximum complexity).)

But IAWYC -- there should be some measure of ‘complexity I care about’ by which music would rank higher than both silence (zero information-theoretical complexity) and white noise (maximum complexity).

How about the measures 'sophistication' or 'logical depth'? Alternately, you could take a Schmidhuber tack and define interestingness as the derivative of compression rate.
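As a loose illustration of the compression-progress idea, here is a toy sketch in which a fixed general-purpose compressor (zlib) stands in for the learning compressor of Schmidhuber's formulation; the score asks how much cheaper each new chunk of a stream becomes once the history has already been seen. The streams - silence, noise, and a repeating pattern - are invented stand-ins, and the absolute numbers mean nothing; only the qualitative ordering is the point.

```python
import os
import zlib

def compression_progress(stream, chunk=1024):
    """Toy 'interestingness' score: for each new chunk, how many bytes are saved
    by compressing it together with the history already seen, compared with
    compressing history and chunk separately?  (A crude, fixed-compressor proxy
    for a learner's compression progress.)"""
    savings = []
    for i in range(0, len(stream) - chunk, chunk):
        history, new = stream[:i], stream[i:i + chunk]
        joint = len(zlib.compress(history + new, 9))
        separate = len(zlib.compress(history, 9)) + len(zlib.compress(new, 9))
        savings.append(separate - joint)
    return sum(savings)

silence = b"\x00" * 8192                      # trivially predictable
noise = os.urandom(8192)                      # incompressible
pattern = (b"do-re-mi-fa-sol-" * 512)[:8192]  # structured and learnable

for name, data in (("silence", silence), ("noise", noise), ("pattern", pattern)):
    print(name, compression_progress(data))
```

Silence and noise both score near the fixed per-chunk overhead - there is nothing to be gained from experience with either - while the structured stream is the one where already-seen data makes new data noticeably cheaper to describe. That is roughly the shape of a measure that would rank music above both silence and white noise.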