
JoshINHB, and you know this how? Did your dogs tell you?

Who is using anthropomorphic projection?


@daedalus2u

Relatively few animals can recognize themselves in a mirror.

That's a relatively strong anthropocentric standard.

I'm sure that my dogs think that a self-aware entity should be able to recognize the odor of the urine that they left anywhere within the last several months, and marvel at our inability to do so.


In principle, you might be able to engineer an organism that looked like a lion and had strange non-evolvable properties, like the organisms in the Hitchhiker's Guide that were sentient, wanted to be eaten, and would voluntarily drop dead so they could be eaten. But the degree of difficulty is high, and we don't know enough to estimate it even within a few orders of magnitude. Is it a few billion person-years of experiment and design? A few trillion? A few quadrillion? And if the complexity doesn't scale linearly as n but as n^2, it may take 10^500 person-years of experiment and design. You are talking about something that can't evolve naturally and that has very specific and unique mental characteristics. It may not even be possible to do.

Remember, behaviors are not a property of a genotype; they are a property of a phenotype, and the phenotype of the brain is generated via neurodevelopment.

If you had the capability of tuning mental states as you describe, then putting it in a lion suit would be trivial. It wouldn't be able to reproduce with a real lion, because it would have to be different in so many ways that the two could not possibly have compatible genomes.

A stroke is a failure mode. It is the failure mode after all the stroke-prevention control pathways have failed to prevent a stroke. We don't know what all of those stroke-prevention pathways are, how they work, or what they do and don't do. Eventually virtually everyone has strokes, including strokes that they survive. Is the occurrence of hallucinations following a big stroke a consequence of stroke-response control pathways that enhance survival and reduce damage after a small stroke?


Vaniver,

Yes, I agree with your explanation. The only question is why more upstart orgs, for whom the benefits are more likely to outweigh the costs, don't use prediction markets. If prediction markets in their currently proposed forms were such a good deal, then market competition in many countries of the world should have pushed at least some orgs to adopt these techniques. If the time trend of prediction-market adoption is not positive, then there might be a problem with the benefits of the basic concept itself.


I would imagine that smaller organizations get less benefit from prediction markets because they have fewer participants. The markets are less liquid, and other information-sharing mechanisms are more likely to work.

However, I would expect that they still get some benefit, and so that can't be a complete explanation.


You can't miss the unique fact that you are conscious of your self in a unique way and not at all conscious of any other selves in that same way.

To experience this directly, but then to assert that there is no such thing, is a miracle of putting the model ahead of the reality the model is built to explain.

Cogito ergo sum. It's not an argument, it's an observation.


You make a large leap. If I found out tomorrow/500 years from now that I was a backup copy, and that another instance of me went on to do the things listed on the Wikipedia page, I would not expect that it was a liberal plot. (I would, however, suspect it was a hallucination, as I am not aware of any current way of backing up my brain state.) I may be projecting, but I suspect that Robin Hanson would have a similar reaction.

I am fairly sure that I would classify any two instances of "me" as different so long as they had different memories, and as the same when the memories were coherent. As soon as I can no longer read from and write to the memory state of an entity, that entity ceases to be "me". So I think the "I am me" module is actually an "I can read from and write to memory" module. Then again, this may not reflect the way consciousness feels to you.


"You want to be able to trigger euphoria during a stroke, because you might need to take some action during a stroke that requires euphoria."

daedalus2u, I suspect you confuse correlation with function in some of these examples. Strokes are clearly failure modes, and even if you hallucinated meeting Mel Gibson during a stroke, that wouldn't imply the hallucination has a function; it could just be a symptom of brain failure. I suspect the same for euphoria from hypoxia, such as in erotic asphyxiation, but I haven't researched this enough to insist on the point.

My central observation from your last post is that your main argument revolves around complexity, i.e. that the system's redesign would have to be very thorough, and many interdependencies would have to be considered. This may well be true, but as far as I can tell, no one has raised arguments for why it can't be done in principle, just that it would be very difficult. The same is true for other technological goals, such as strong AI, economically viable fusion reactors, or, retrospectively, getting humans to the Moon or sending functional probes to Mars. It's not a knock-down argument.

I also disagree that the resulting organisms couldn't bear any resemblance to the existing ones: a lion redesigned like this could still look like a current lion, behave relatively similarly to it, and occupy the same ecological niches under similar ecological laws. A very advanced civilization could create entire ecosystems that are very similar to our natural ones but contain much less suffering, or none. Intelligent beings could redesign themselves in ways that leave them able to survive and compete, without suffering or with much less suffering than humans. As far as I can see, no in-principle arguments have been raised against such options for humanity in the future, given enough research, motivation, and acceptance of failures along the way.


The main reason is that this is how organisms evolved; if you made the euphoria module contingent like that, it wouldn't activate appropriately when it was needed.

A zillion things have to be regulated automatically during that euphoric state; many of them are unconscious and relate to allocating resources to running from the bear. Those resource-allocation steps have to occur under a variety of conditions in addition to running from a bear. For example, if you have an ischemic stroke, one of the symptoms is sometimes euphoria. The reason is that insufficient brain ATP triggers euphoria in addition to triggering things that reduce ATP consumption. That is the physiology behind autoerotic asphyxiation (which you can die from).

You want to be able to trigger euphoria during a stroke, because you might need to take some action during a stroke that requires euphoria.

It might be possible to make a wholly synthetic organism that didn't exhibit euphoria except contingently like that, but it would mean redesigning essentially all of physiology, including development (and all of the zillion things that get regulated automatically). Such organisms would bear no relationship to organisms that already exist. It is not even clear whether it would be possible.


"Suffering isn't the driver to cause an action; it is the driver to prevent transition into the euphoric near-death state."

But why? Why not make the euphoria itself contingent upon there actually being a bear? (Metaphorically speaking.)


Anonymous, no, it can't. There are some activities that must be highly rewarded with intense euphoria; activities such as running from a bear.

If an organism is in the neutral default state, what is there to prevent it from transitioning to the euphoric state of near-death metabolic stress, which it can enter simply by depleting its ATP resources?

There has to be an aversive state between the default neutral state and the euphoric near-death metabolic state to prevent useless entry into the euphoric near-death state. That aversive state has to be as aversive as the euphoric state is euphoric. In other words, the aversive state has to make life so miserable that suicide seems attractive. That level of aversion is the only thing that can successfully counterbalance a degree of euphoria in which running yourself to death actually is attractive and highly desirable.

Evolution has minimized the sum of deaths from being caught by the bear, by suicide due to depression, and by dropping dead from exhaustion when being chased by a bear. The minimization function has to include some of all three.

Maybe you could anesthetize an organism so it couldn't feel the aversive state, but then it wouldn't be able to feel the euphoric state either, and it wouldn't have the differential control necessary to selectively enter either one. If you want to retain conscious control over activities such as running from a bear, then you need to give that consciousness signals that indicate when to enter the “fight or flight” state, what to do in that state, and when to leave it.

Suffering isn't the driver to cause an action; it is the driver to prevent transition into the euphoric near-death state.
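A crude way to see the magnitude point (this is my own toy sketch in Python, with made-up numbers, not a model from the thread): if an organism weighs the whole path into the euphoric near-death state as the aversion of the intermediate state plus the euphoria at the end, the barrier only works when the aversion is at least as strong as the euphoria.

```python
# Toy model (illustrative only): an organism compares staying in the neutral
# state (value 0) against taking the two-step path
# "aversive state -> euphoric near-death state". Numbers are placeholders.

def will_seek_near_death_euphoria(aversion, euphoria=10.0):
    # Path value = (pain of the aversive state) + (reward of the euphoria).
    # The organism takes the path whenever it beats staying neutral.
    return -abs(aversion) + euphoria > 0.0

print(will_seek_near_death_euphoria(aversion=1.0))   # True: weak aversion fails to gate the euphoria
print(will_seek_near_death_euphoria(aversion=10.0))  # False: aversion matching the euphoria blocks entry
```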


Faul, that is right. The “I am me” module doesn't have the fidelity to differentiate different “I”s and different “me”s. Whatever entity is resident inside your brain is identified as “I” and also as “me”.

If you could transplant the 'you' of 5 years ago into your brain today (you actually can't because there is no “I” independent of the brain that instantiates it), it would still think “I am me”.

To identify itself, the “I am me” module has to do pattern recognition. What pattern does it compare its self-perception against in order to determine whether “I am me” or “I am someone else”? The only pattern it has to use is itself. There is no evolutionary reason for a brain to evolve to use a different pattern for self-identity, and if a different pattern were used, that pattern would require brain resources (neuroanatomy, neural networks, ATP, etc.) to instantiate. There is no evolved reason to waste resources on a self-identity module with fidelity better than what is needed to produce the result “I am me”.

You can intellectually appreciate that the “I am me” tautology has to be an artifact, but it doesn't feel that way. The feeling comes from the “I am me” module running 'native', and whatever substrate it is running 'native' in, it will return the “I am me” tautology.

To be able to appreciate how “you” change over time, “you” would have to retain the earlier versions to do pattern matching against. Doing pattern recognition with the requisite fidelity means keeping exact and total copies, which would take exact and total brain computational resources to instantiate. Brains don't work that way. A copy of former brain states is not kept with the fidelity required to instantiate those earlier selves and pattern-match them against the current self.

In the other thread, Robin talks about saving multiple versions of himself over time so his multiple versions can experience the future too. If he does that, every single one of those ems will “think” that it is the “real” Robin. The “I am me” module doesn't have the fidelity to do otherwise. If you took a random em, and did a global replace of the “name parameter” with “Robin Hanson”, then that random em would self-identify as Robin Hanson. If every self-reference it has is that it is Robin Hanson, then as far as it is concerned it is Robin Hanson. Never mind that the experiences it has in its memories don't match the biography in Wikipedia. Its self-identity module will identify it as Robin Hanson and any discrepancies with its Wikipedia biography will be rationalized away as a liberal plot.
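To make the low-fidelity point concrete, here is a toy sketch in Python (my own illustration, not anything from the thread; the dictionary fields are hypothetical): a self-identity check whose only reference pattern is the currently running state itself is a tautology, so it answers “I am me” on whatever substrate it runs, including an em whose name parameter has been globally replaced.

```python
# Illustrative toy only: a self-identity check that has no independently
# stored template of "the real me", so it compares the running state
# against itself and always returns the tautology.

def i_am_me(current_state, reference_pattern=None):
    # The only pattern available for comparison is the current state itself.
    if reference_pattern is None:
        reference_pattern = current_state
    return "I am me" if current_state == reference_pattern else "I am someone else"

# A hypothetical em whose "name parameter" was globally replaced still
# self-identifies, because the check never consults anything outside itself.
em_state = {"name": "Robin Hanson",
            "memories": ["experiences that don't match the Wikipedia biography"]}
print(i_am_me(em_state))  # -> "I am me"
```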


Daedelus:

The "I am me" module is simply a feature of the English language, much like the "trout is fish" module. I now am me now, but that is tautological and doesn't give any new information. I now resemble me five years ago more closely than I resemble Lindsay Lohan, but less closely than I resemble me one year ago (me, in this case, is shorthand for "the entity who posts using the name "Faul_Sname" in online discussions). The idea of self is a useful tool for compression, and the vast majority of the time it is good enough to get the job done.


Robin,

I had a question for you regarding prediction markets.

You often say that large organizations are less likely to adopt such markets for various non-efficiency reasons. I agree with this. However, what prevents upstart and fairly new orgs from adopting these markets, where the benefit from higher efficiency is likely to be much greater than the costs?


"I don't think it is possible to have organisms that will compete successfully with “wild-type” organisms if those organisms do not have the capacity to experience differential compulsions sufficient to motivate actions in the moment (running from a bear) more than actions over years (not running yourself to death)."

daedalus2u, I think we can all agree with this, but of course it's not the answer to the original question. Specifically, you haven't shown that such a system can't be implemented without using suffering as a driver. Several potential approaches have been mentioned above.

"Do you think if I only experienced the ones above my current “neutral” that I would start disliking good but not great experiences?"

Andrew, no, I don't think so. The relevant question is whether the full adaptive value of below-neutral experiences can be replaced by autonomous reflexes, explicit cognitive reasoning, or gradients of above-neutral experiences.

To avoid confusion, we should also strive toward a better neuroscientific analysis of which information processes in the brain good vs. bad experiences reduce to.


Instead of labeling the points on the scale as “feelings”, map them onto the degree of urgency with which the organism needs to compel a certain activity in order to survive.

With this mapping, what we have is a ranking of the things the organism needs to do in order to survive. For the most time-critical and survival-critical task, there needs to be the most extreme compulsion to accomplish that task as soon as possible.

Consider the degree of urgency to be analogous to a discount rate. When there is low urgency, the discount rate is low and things can be put off until later. When there is high urgency, the discount rate is higher, and whatever is currently being done gets put off because it carries a lower discount rate.

In the limit, when running from a bear, the discount rate becomes infinite. The next few seconds of running from the bear are worth as much as the organism's entire future life. That is the trade-off being made. This is what compels the euphoria of near-death metabolic stress. This is why the runner's high makes you feel as if you can run forever. You can only run until the bear catches you, or until you drop dead from exhaustion. The bear catching you and dropping dead are the same as “forever”, and completely equivalent in an evolutionary sense.
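As a rough numerical illustration of the discount-rate analogy (my own sketch with made-up numbers, not the commenter's calculation): with a per-second discount rate r, a reward t seconds away is worth reward / (1 + r)^t now, so at bear-chase levels of urgency even an enormous future reward shrinks below a tiny immediate one.

```python
# Illustrative only: exponential discounting with an urgency-dependent rate.

def present_value(reward, seconds_away, rate):
    # Standard exponential discounting: value now of a reward t seconds away.
    return reward / (1.0 + rate) ** seconds_away

urgent_rate = 0.5  # arbitrary "being chased by a bear" per-second discount rate
print(present_value(reward=1_000_000, seconds_away=60, rate=urgent_rate))  # ~0.00003: the entire future life
print(present_value(reward=1, seconds_away=1, rate=urgent_rate))           # ~0.67: the next second of running
```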

I don't think it is possible to have organisms that will compete successfully with “wild-type” organisms if those organisms do not have the capacity to experience differential compulsions sufficient to motivate actions in the moment (running from a bear) more than actions over years (not running yourself to death).
