This is our monthly place to discuss relevant topics that have not appeared in recent posts.
I have a question about the singularity. I understand that it may be possible to one day upload one’s brain into a computer, but wouldn’t that just result in a copy? It wouldn’t actually be you, and you wouldn’t be aware of what the computer-you knows. From your perspective, you would walk into a brain scanner, walk out, wait a couple of minutes, then a technician would tell you that a copy of your brain is now in the computer. It isn’t as though you would suddenly have a telepathic bond with the computer-you just because it has a copy of your brain.
So what’s the appeal?
One standard response is to imagine if a single neuron in your brain is replaced by an electronic equivalent. Are you still you? Yes? Then replace another neuron, and another, until you’re all electronic. Boom, electronic consciousness.
What I think this proves is that we have really crappy intuitions and ways of thinking about consciousness, but many take this to mean that equivalent-architecture minds may somehow “share” consciousness such that you could live on in a machine copy.
Thanks for the response. I think I agree with your skepticism. At most it appears to be an argument against a vulgar form of reductionism. It does not then follow that we’d develop mental telepathy with our computer clone.
OTOH, what if the computer were physically integrated into your brain? Then it seems reasonable that you would share consciousness. Then you could “cut the link”. My intuition is that you would stop sharing consciousness at that point. But maybe, as in the movie The Prestige, you might get lucky and find yourself in the computer rather than your physical body.
Consciousness is an illusion, albeit a persistent one. The entity that is “you” is not self-identical with the “you” of a week ago or a year ago.
The illusion of continuity of consciousness arises because our self-detection pattern recognition doesn’t have the fidelity to notice the difference. You think you are the same entity because you can’t notice the difference, and don’t think any differences you can notice matter.
You could of course program a computer to “think” it was you, and to respond every time you asked it who it was to blurt out your name. Is it “really” you? How would anyone be able to tell? The entity wouldn’t be able to tell, and in a sense, that is all that matters.
You could of course program a computer to “think” it was you, and to respond every time you asked it who it was to blurt out your name. Is it “really” you? How would anyone be able to tell? The entity wouldn’t be able to tell, and in a sense, that is all that matters.
The meat version of you would know. I agree that no one else would be able to distinguish them, except by the fact that the computer version is much more intelligent. If you killed the meat version, he wouldn’t experience the illusion of psychological continuity in the computer. He would just die.
Actually, from a consciousness point of view, there are two yous. The meat you will have entered the machine, come out, and see that there is now a computer you. The computer you will be created, have your memories, and continue as you in the computer.
But here’s the point: if the computer you isn’t you, then neither is the meat you. The molecules that make the meat you up are, for the most part, not the molecules that made you up a decade ago. Most of the cells have been replaced with new cells. If sheer physicality is the measure of the residence of consciousness, then none of us are the people we were born as. Thus, consciousness must instead be an algorithm, independent of a physical basis. And if consciousness is an algorithm, the computer you is no less you than the meat you.
Either both the computer and meat you are you, or neither is.
And regarding The Prestige: There were always two that came out. Either was perfectly capable of surviving and continuing on his own.
I made the reference to The Prestige because at one point, Hugh Jackman’s character talks about how he got his hands dirty because he never knew if he’d end up dying in the tank, or as the prestige who got to take the bow. Same reasoning might apply to physically integrating with a computer and then “cutting the cord”.
The molecules that make the meat you up are, for the most part, not the molecules that made you up a decade ago. Most of the cells have been replaced with new cells. If sheer physicality is the measure of the residence of consciousness, then none of us are the people we were born as. Thus, consciousness must instead be an algorithm, independent of a physical basis. And if consciousness is an algorithm, the computer you is no less you than the meat you.
Do you think that “meat you” will have a telepathic bond with “computer you”? That they will know each others’ thoughts?
I think a more reasonable alternative is that consciousness is an emergent property that supervenes on the physical. If so, then there would be two “you’s”, both of which can remember being the physical you. But I think Constant is correct below – once you really start to think about identity, you must recognize that, if naturalism is true, a persistent identity of any type is an illusion. In that sense, I would go with the option of “neither”.
The appeal is that a lot of people think you’re wrong about the persistence of personal identity. So your opinion (that it would not actually be you) is simply not shared. I’m among those, but the topic simply can’t be done justice in blog comments.
I think that if naturalism is true, then you are correct that a persistent identity is an illusion. I have a sense that I’m the same person now that I was five minutes ago, but that is an illusion. However, I can say that from a phenomenological perspective, I have a sense (illusion) of a persistent identity.
I don’t think you can even get that far (illusion) when you upload your brain to a computer. The illusion supervenes on the underlying substrate.
I get that you have your views and you have your reasons for your views. My point is that other people have a very different view of the topic, they do not find your reasons to be decisive, and that is why upload is appealing. That’s all I want to say. I don’t want to try to explain why your reasons are not decisive.
Fair enough, but can you recommend a book perhaps? What has the most persuasive defense of uploading?
I am sorry. I spent more than half an hour looking for something. I have to give up. I don’t want to refer you to Derek Parfit’s Reasons and Persons. You will definitely understand what I’m talking about if you read that book, but it is a 500 page book. I was looking for something way, way more accessible, and failed.
We consider ourselves the same entity tomorrow as today because we care about tomorrow’s version. We don’t care about it because we’re the same entity, about which there’s no fact of the matter.
It’s an interesting empirical question then whether we care about this uploaded entity. I do not find that I care in the least, and I wonder whether those who do are merely telling themselves they ought to care, on account of their (metaphysical) theory of identity. (Another hypothesis is that those choosing uploading have a pathological fear of death, and therefore convince themselves this future life is the real thing.)
(Incidentally, if I did care about this distantly future “me,” I would NOT wish upon it quasi-eternal life under conditions unknown. There are many possible fates worse than death.)
I can care about an entity without thinking it is self-identical to a version of me. Actually I care more about entities that I know are not self-identical to me at any time, my children.
You care more about the person you’ll wake up as tomorrow than the brainscan-cloned copy of you because you’ve learned to; every day of your life you start a day sharply influenced by a you-before-going-to-sleep, so you learn that you’re better off being nicer to you-waking-up when you’re you-before-going-to-sleep. If you likewise had thousands of memories of you-in-different-hardware having your experience substantially affected by yous-before-changing-hardware, you’d learn to care a fair bit.
Interesting empirical data might come from creating an experimental society of people who never sleep, and then telling them that they will be forced to sleep and seeing whether they identify with the post-sleep versions of themselves, and feel that they should care about them. Perhaps some variation of this could be tried with animal subjects?
I don’t think there will ever be brain scanned versions of anyone.
Trying to scan someone will fail. If it didn’t fail, it would be so “clunky” trying to run it virtually instead of “natively” in the hardware it was built to live in that it would be unacceptable. Think of trying to run Windows on a slide rule.
There is nothing pathological about a fear of death, when it has a fairly overwhelming likelihood of happening in the time frame one cares about.
I don’t know about others, but I’m definitely a “far” guy.
what’s the deal with q-tips? they’re not q’s, and they’re not tips?
also ovaltine should be called roundtine
I’ve been reading and enjoying your blog for a year now. I had a question about status. Do you think status is inherently zero-sum, or can you imagine any ways in which it’s somehow possible to collectively raise everyone’s status? Perhaps, for example, egalitarian societies may be generally less concerned with status in such a way that on average people feel better about themselves.
[C]an you imagine any ways in which it’s somehow possible to collectively raise everyone’s status? Perhaps, for example, egalitarian societies may be generally less concerned with status in such a way that on average people feel better about themselves.
Given the inherent human propensity to stratify by status, I think we can safely conclude that “egalitarian society” is an impossibility under any reasonable definition of the term. Moreover, I’d say it’s also clear that a pretense of egalitarianism is likely to only make things worse, as a special case of the general principle that denial of awkward truths is likely to make their consequences only more awful.
With that in mind, it seems to me that the optimal arrangement might in fact be an intensely hierarchical society without any pretense of egalitarianism at all. People have an urge to struggle for higher status, but also an even stronger urge to respect and submit to those whose higher status they recognize. So the optimal distribution of status may well be the traditional one where there exists a rigid hierarchy, but (almost) everyone is given a realistic long-term prospect to establish some place in the hierarchy above the lowest level (typically by assigning respect and authority to parents and elders).
Someone recently cited a pertinent quote by Don Colacho here on OB: “In societies where everybody believes they are equal, the inevitable superiority of a few makes the rest feel like failures. Inversely, in societies where inequality is the norm, each person settles into his own distinct place, without feeling the urge, nor even conceiving the possibility, of comparing himself to others. Only a hierarchical structure is compassionate towards the mediocre and the meek.”
I disagree with Vladimir. Social status is zero sum, but the difference between top and bottom doesn’t need to be large. It was much larger in the distant past with kings and slaves at the opposite ends. The king could do no wrong, and could kill the slave at a whim.
A differential status hierarchy will only be stable when it only reflects differential ability and when all on the hierarchy agree with its ranking and when all agree that the difference between top and bottom is “fair”.
When status reflects something unrelated to ability (inherited money, for example), then the hierarchy is unstable because whatever the hierarchy is reflecting can be transferred. In the case of money: by theft, taxation, hard work, or stupidity.
There are some things that render social hierarchies inherently unstable, for example when social status is determined by brutal repression, as in North Korea. The person willing to be the most brutal has an advantage over those not so willing. But if a social hierarchy is too repressive, people at the bottom have a strong incentive to kill their oppressors even if they don’t move up the social hierarchy.
Tribalism can make the social hierarchy unstable too. If one tribe suppresses another due to traits not related to ability, the hierarchy is unstable and can break down and reform rapidly. The speed of the breakdown and formation of a new hierarchy depends on how much the hierarchy itself is needed to maintain the status of those at the top. The deposed head of state loses status in a few minutes. That status was not a product of individual ability, it was status by virtue of being the head of state. That status (and its power) transfers to the next head of state.
The king can still be the top of the social hierarchy, but if he can no longer kill the slave at a whim, the status of the slave has increased in absolute terms. The status of the slave has not increased in relative terms, but for most people near the bottom of the hierarchy, it is the absolute levels that matter most. For people at the top, it is the relative levels that matter most.
“Property” is one of several social conventions that are only maintained by the adherence of everyone in the social hierarchy to that social convention. If the power distribution in the social hierarchy changes such that people no longer want to adhere to the social conventions that lead to that power distribution, then people will change the social conventions until the power distribution does match what they want.
Some changes occur all the time. Fashion is quite mutable, with fashion status depending on being at the cutting edge of fashion. There is fashion in clothing, music, food, politics, even science and medicine are not immune to ascribing inordinate status to what is fashionable. You can tell if something is fashion driven by how much marketing it takes to keep it up, and how much of a disconnect there is between the marketing driven claims and the actual substance. Much of current politics is all fashion, much hype with little substance.
I would just like to point out that I think there is an implementation of hierarchical society that might be friendlier than the traditional one.
Tribalism seems to be crucial in this. I can easily imagine several parallel societies that don’t care much about what the others think of them, since they assign status according to different value systems.
If people are allowed to leave their society (but societies are allowed to deny access to anyone), this would provide a small trickle of people who, due to their innate talents, find it easier to advance on the status track in another society rather than their own, and this might offset the cost of adjusting to a foreign (or should I say alien) culture.
But modern Westerners are remarkably demanding of ideological conformity, not least because they are universalists, so this plan might not be workable with a predominantly Western-derived transhuman society. China doesn’t seem any better for now, though I struggle to find a “real” metric for comparing the situation in, say, Korea or Japan to our own, so maybe they are an alternative.
One way to conceive of the problem is this: there is some portion of human welfare that is purely comparative. Some portion of that is positively correlated with the welfare of others (empathy), and some is negatively correlated (status). This latter form of welfare is by definition zero-sum; we can then attempt to figure out what its magnitude is in different circumstances, different societies, different deciles, etc. The magnitude could theoretically be anywhere from 0% of human welfare to 100%. It can be expected to vary with circumstances; we could investigate what the proportion varies with.
The problem with a large part of welfare being zero-sum is that for that component, everybody isn’t made better and better every day by consensual exchanges.
Is there any difference between hedge funds and cannibalism?
Cannibals can get transmissible spongiform encephalopathies, so there is something that limits cannibalism.
Yes, thank you. Hedge funds get “transmissible spongiform encephalopathies” too. However, they have had the Federal Reserve Bank of New York and private banks “cure” their illness. Like cannibals, they should suffer the same fate. However…
Hedge funds move value. Under no circumstances do they create value.
open threads are not open threads!
relevant topics are not relevant!
I wonder if I could modify the random postmodern essay generator to create Hansonian posts? If so, that would make it easier for Robin to go on vacation.
Indeed, that is quite easy to do. I cut & pasted the first page into this Markov chain generator:
In this subject, then does policy interventions. … Reputational ratings are being publicly ranked Pen State’s law school.” … A decade ago, many, like a post, Easterly elaborates: The new study’s authors are almost conned by controlling fat folk aren’t as well. You should mostly ignore what extent it on somewhat larger fraction of a few complain so polluted the colleges [according to women’s primary value is forbidden or culture. It is amazing that women disapprove of whether its architecture. … showing women’s humanity; … numbing men the impossible task of getting students who are also responds to see the moral obligation to fear short term mates in this debate hinges on this case, fields low class folks don’t the world of women to “white trash,” who and seedy are a long shot. But while this field shows clear symptoms of strength, skill, and benefits of information, and women at the story isn’t obviously likely it makes most reliable source for women models aren’t aware of the same conclusion. (more; see in the big names. Harvard. Yale. University of woman” even if they were thinner. Some argue that marketers have just wishful thinking.
A larger input sample would generate more interesting results.
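For readers curious how such a generator works, here is a minimal word-level Markov chain text generator in Python. This is an illustrative sketch, not the specific generator linked above; the function names and the order-1 (bigram) default are my own assumptions:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word-tuple of length `order` to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Random-walk the chain to emit `length` words of pseudo-prose."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(state)):
        followers = chain.get(state)
        if not followers:  # dead end: restart from a random state
            state = rng.choice(list(chain))
            followers = chain[state]
        word = rng.choice(followers)
        out.append(word)
        state = tuple(out[-len(state):])
    return " ".join(out)
```

Feeding it a page of blog posts, e.g. `generate(build_chain(open("posts.txt").read()))`, produces locally plausible but globally incoherent prose of exactly the kind quoted above; raising `order` makes the output track the source more closely.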
What are the highest-status hobbies that can be done alone or with one other person? Some that quickly come to mind are chess, playing an instrument, reading classical literature, perhaps the NY Times crossword puzzles. Any other big ones?
Almost any individual sport you can think of – fencing, tennis, riding, golf, sailing, skiing, hang gliding . . .
Not so much sports explicitly tied to physical strength or actual combat (wrestling, MMA, weightlifting) or the satisfaction of bodily needs (the status of fishing and hunting is dependent on whether it is for food or not).
Why are individual sports more high-status than team sports?
Hi Sister Y,
I disagree. Among effete, latte drinking intellectuals, individual sports that do not rely on physical strength are high status. That is almost certainly because effete, latte drinking intellectuals do not generally have physical strength and fighting ability.
I won’t channel my inner Roissy to point out that sports that rely on physical strength and combat are higher status to intellectual women than they give them credit for.
Sex appeal is only part of status, and status is only part of sex appeal.
I like your three-part model below, but cf. this text message I got from my friend last night:
“The girl i met friday asked what i do (aside from paddling) and i was like: um…Mathematical origami, tolkien’s obscure background stories, and gem cutting…FAIL!”
Yeah, probably better to wait until the second date to unleash the Tolkien fan boy.
Thinking about status brings out my inner postmodernist. I think status is to a large degree socially constructed and context dependent (even though there are some objective fundamentals like IQ/g and signals of health, fertility and fighting ability). I do not think it resolves to a single number.
That may be why we play so many status games. If status did easily resolve to a single number, status hierarchies would probably be more stable, and thus there would be little incentive to constantly try to raise one’s status.
Actually, there’s a great question here: If demonstrations of IQ and depth of knowledge are high status, why are “nerdy” pursuits low status?
That is easy: those that are signals of (1) IQ, (2) creativity, and (3) the personality trait of openness. The latter is interesting. I would recommend Go over Chess, foreign literature over classical literature, and playing a foreign instrument. E.g. play the Japanese Biwa instead of the guitar.
I think juggling thousand carat diamonds is a pretty high status hobby. There is a pretty high barrier to entry.
“I can care about an entity without thinking it is self-identical to a version of me. Actually I care more about entities that I know are not self-identical to me at any time, my children.”
Perhaps I should have been more precise. The point is that you care for your future self in the same way as your present self. One of the distinguishing characteristics of that kind of caring is its ineliminability: in principle, you could stop loving your children; you could never stop caring about yourself (even if that caring consists in wanting to die).
Leaving aside the technical possibilities of uploading, I wonder if those who look forward to being uploaded would take an immediately available opportunity to be uploaded. Again, there’s no right or wrong in this matter, but I’m inclined to think I can rely on my introspections to infer the same response in others. I wouldn’t. It just doesn’t seem like it would be me (and what it seems like is all there is to the question of whether it is).
I take my personal reaction to express the way human beings intuitively determine their identity, and I am, in crucial measure, constituted by my physical nature, even if I couldn’t tell the difference between an ordinary and an uploaded life. So, I tentatively infer that people who think differently are in the grip of a moral concept. They’ve determined (based on their version of utilitarianism) that it’s rational to care about your own continuous experience; but I doubt that is the sole determinant of whether you spontaneously and inherently care about a physically separate entity.
Consider this thought experiment. Those who think our universe is infinite often infer that there must be an additional copy of each of us “out there.” (In fact, there should be infinitely many such copies.) Now, do you care a whit about what happens to these copies? Why not? They differ from a downloaded version of you only in that the downloaded version has properties causally related to your being, but I don’t see why that should make any difference when the two versions are identical and even live in an identical light cone.
I wonder if those who look forward to being uploaded would take an immediately available opportunity to be uploaded.
I think that many people have a fairly high degree of confidence that they will indeed find themselves uploaded – i.e., that the upload will be numerically identical with them (actually the same person), rather than merely qualitatively identical (a distinct person who merely happens to share memories).
However, I don’t think very many people are absolutely, 100% confident in the thinking which leads them to this conclusion. Therefore they would not gratuitously choose to be uploaded for no reason. However, they might very easily choose to be uploaded instead of, say, being placed into a situation with a 10% chance of being murdered – showing that their confidence in the thinking that leads them to anticipate being numerically identical with the upload is high.
I think that janos’s point is key. Once the process of repeatedly uploading and downloading a personality gets going and is repeated many times, the resulting personalities will be well-stocked with memories of having been uploaded and downloaded multiple times and having survived the transfer, and as a result they will anticipate without any trepidation the next upload. But many people have already performed this in imagination as a philosophical thought experiment or intuition pump, and this has pumped their intuitions. It’s a popular thought experiment.
there must be an additional copy of each of us “out there.” (In fact, there should be infinitely many such copies.) Now, do you care a whit about what happens to these copies? Why not?
Applying janos’s point, they fail to care because they don’t remember having been those other people, so they don’t anticipate being those people, i.e. their intuition hasn’t been properly pumped to get them to anticipate that.
Once the process of repeatedly uploading and downloading a personality gets going and is repeated many times, the resulting personalities will be well-stocked with memories of having been uploaded and downloaded multiple times and having survived the transfer, and as a result they will anticipate without any trepidation the next upload.
Why? That particular data point is equally likely whether the uploading skeptics or the believers are right. They both predict that the uploaded persona will report, “Yes, it’s really me, and I survived the transition with my personality intact.”
So apparently the case for uploading rests on the idea that thinking you will become the uploaded persona is no more delusional than thinking you will become your future self. If a persistent identity is an illusion – and it certainly is an illusion given naturalism – then that is completely true.
But it seems strange that someone who tries to be rational will say, “I’ve already accepted delusion #1, so I might as well go for delusion #2 as well.” It appears to me the rational thing to do would be to try to grapple with the fact that a persistent identity – even without uploading – is a delusion.
I don’t understand the question. Maybe if I clarify: start with personality-instance X1 who calls himself Bob and says he was born in 1994, and perform an upload which creates X2 identical to X1 and destroys X1 in the process. So now the only existent personality-instance is X2, who calls himself Bob, says he was born in 1994, and says, “I was uploaded one time, and I survived as the upload.”
Now download, creating X3 and destroying X2. Now upload, creating X4 and destroying X3. Continue the process until we arrive at X99. X99 identifies himself as Bob, says he was born in 1994, and says he’s lost track of the number of times he’s been uploaded, but he has experienced success every time, and so deep in his gut he is as sure that the next upload will transfer him up as he is that his next physical step will carry him forward. He has no fear of upload, and does not believe that being uploaded will result in his personal annihilation and the creation of a mere duplicate, a mere other person who happens to share his memories, because it’s not what he remembers having happened. He remembers always becoming the upload when he is uploaded.
I’m talking about X99 here. I am not talking about somebody else, Y1, who calls himself Bill, who has been watching the uploads, but who has never personally been uploaded. As for Y1, he may be skeptical. But I’m not talking about him, I’m talking about X99.
However, belief tends to be infectious. If enough people around you believe something, you’re likely to believe it. So Y1 might actually believe he’ll survive upload and find himself uploaded. In any case, I was talking about X99.
But it seems strange that someone who tries to be rational will say, “I’ve already accepted delusion #1, so I might as well go for delusion #2 as well.”
That’s your way of thinking of it, that it’s a delusion. That’s not the way everyone’s going to think of it.
It appears to me the rational thing to do would be to try to grapple with the fact that a persistent identity – even without uploading – is a delusion.
A lot of people would rather escape old age and death with the flick of a button, than spend thirty years in a remote monastery becoming reconciled with death by achieving the kind of enlightenment you describe.
I think far more people think they will escape death by following certain rules and praying to their particular Deity, who will then grant them an infinite afterlife. I don’t consider them particularly “rational” either.
When I see an optical illusion, I know the unrealistic object is due to a defect in my visual perception. I don’t change my perception of reality because I see an optical illusion. Why would I change my perception of my conscious reality if I observed a consciousness illusion of the type you describe?
The skeptic does not dispute that X99 has a sense of psychological continuity that survives the uploads and downloads. After all, we are making a copy that has the memories of the original. Rather, the skeptic asserts that uploading to create X100 (and simultaneously destroying X99) is, phenomenologically speaking, no different than X99 walking into a disintegration booth to commit suicide.
Of course, as you correctly point out, persistent identities are a delusion on naturalism. So let’s say that it is day 0. We can then label our guy X99.0. Tomorrow he will be X99.1. The day after that he will be X99.2. On naturalism, it is every bit as much a delusion for X99.0 to think that he will be X99.1 as it is for him to believe that he will be X100.
The believer makes a wrong turn. He says, “Well, I’ve already accepted the delusion that I will become X99.1, so having done that, I can also accept the delusion that I will become X100.” That is wrong. It is like saying: as long as I’ve added 2 + 2 = 5 once, I might as well do it a second time. The correct response is to cease caring for your future self at all, except to the degree that evolution has bestowed some irrational, non-truth-seeking sense of regard for one’s future self.
Uploading does not allow one to escape death because it is constrained by the life of the universe. The universe is tending towards increasing entropy and will ultimately suffer heat death. There will be no free energy to power the computers.
Or do uploaders have some tie-in to the alleged multiverse? Perhaps some form of hyperspace which will allow them to go to other, younger universes?
Justin, maybe I have failed but I have tried to avoid coming down on either side of the question of whether it is true that the self survives upload. I have tried to avoid saying that it’s a delusion, though you interpret me as saying that it’s a delusion.
I didn’t really want to make a statement about truth because I don’t want to be drawn into a discussion about it. This is why I described the views of X99 without agreeing or disagreeing (though you interpret me as disagreeing). But simply in order to avoid further misinterpretation, I don’t think that the persistence of self, either in one’s own body or as an upload, is a delusion, because in order for it to be a delusion it must be a false belief, and in order for it to be a false belief there must be truth conditions which are not satisfied, and I don’t think it’s the sort of thing that has truth conditions, at least not in the usual sense, at least not in this situation.
Ordinarily, personal survival has pretty easy truth conditions which anyone can in principle check. But once we introduce science fictional elements such as uploads or star trek transporter accidents (where two Kirks are produced instead of one) and so on, then our ordinary methods of checking produce conflicting results.
I do not accept grounding objections in philosophy. I think counterfactual logic is perfectly acceptable. But we don’t need to worry about it. If there are persistent identities then we can falsify uploading. If there are not persistent identities then we do not need to.
Suppose that X99 walks into an upload booth, has his brain scanned, and then X100 is created. A few seconds later, X99 is disintegrated. What happens phenomenologically to X99?
Skeptics: X99 walks into a booth and then dies.
Persistent Identity Uploaders: X99 walks into a booth, becomes aware of both X99 and X100 at the same time, then is only aware of X100.
Non-Persistent Identity Uploaders: X99 walks into a booth and dies. But so what? The X99 at 3:00 PM is no more like the X99 at 3:01 PM than he is like X100. To the extent that X99 at 3:01 is still the “real” X99, then so is X100.
We can falsify persistent identity uploading because the first person to upload can report that he never became aware of both X99 and X100 at the same time. His memories of X99 ended with the brain scan.
We don’t need to falsify non-persistent identity uploading because everyone agrees about how it works. The only question is whether or not doing it is rational. I don’t think that it is rational.
A delusion is a belief held in the face of overwhelming evidence that the belief is wrong. I don’t have a delusion about the persistence of identity because I don’t think that identity does persist. I don’t have a false belief about it; my belief follows the evidence. Since there is no evidence that self-identity persists, I don’t believe that self-identity does persist.
There is the illusion of continuity of self-identity, but that is an illusion, just like an optical illusion. It only becomes a delusion when belief in the non-real occurs in the face of overwhelming evidence to the contrary. I think the belief in continuity of self-identity is closer to delusional than the belief that there is no such thing. There is gigantic evidence that there is no continuity of self-identity. Why would anyone think that there is such a thing if not for some pretty strong illusions?
I don’t think that is correct. The Buddhist concept of not-self, as expressed in this quote,
“It is often thought that the Buddha’s doctrine teaches us that suffering will disappear if one has meditated long enough, or if one sees everything differently. It is not that at all. Suffering isn’t going to go away; the one who suffers is going to go away.”
suggests otherwise (at least for these people).
In other words, we have risked the fate of the earth, the fate of the species, on the mental stability of a few ambitious politicians who rise to the top of the heap, not necessarily because of their rationality. There is no foolproof command and control system. The imposing phrase “command and control” masks its meretriciousness.
From a Slate.com article on a missileer-trainee who asked how he would know that an order for nuclear attack came from a sane, lawful source. This problem remains outstanding and unsolved.
More broadly, how should we as rational individuals reconcile the benefits of obedience to an external command structure when we can’t verify the sanity, rationality, etc. of the source?
Surgical brain enhancements seem a ways off in the future. We’ve been experimenting with pharmaceuticals for a while; what prospects do they hold in the near term for cognitive enhancement?
Shoot, what I really intended to note here but then forgot is this obituary of health economist John Calfee, particularly the bit about “fear of persuasion” which Robin has discussed before.
I’d like to know, how much of your work is supported by the Koch Bros? Does this introduce a bias into your thinking? Do you think other scientists and intellectuals are biased by the source of their funding?
See here or here for reference.
What will happen when science is combined with better money collection mechanisms than grants?
Steve Grand, well known in the AI game area, has a street performer protocol here to raise money for a new game.
What might a street performer created AI look like? Is it more likely to be friendly?
“Once the process of repeatedly uploading and downloading a personality gets going and is repeated many times, the resulting personalities will be well-stocked with memories of having been uploaded and downloaded multiple times and having survived the transfer, and as a result they will anticipate without any trepidation the next upload.”
No doubt, but my point concerned a decision today in anticipation of far-future uploading. (I’m not sure I was clear regarding that.)
“Applying janos’s point, they fail to care because they don’t remember having been those other people, so they don’t anticipate being those people, i.e. their intuition hasn’t been properly pumped to get them to anticipate that.”
Yes, I agree with that [except possibly your application of “intuition pump,” but let’s leave that aside]. But for a person concerned *today* about being uploaded, there’s no rational basis for prejudice against the exact duplicate compared to the upload. I can make the distinction by introducing causal continuity, but what justifies this when your duplicate has been influenced by exactly the same events and has responded with exactly the same thoughts? Why should I care about causal continuity when it makes absolutely no difference to my experience? If “I” am constituted by the contents of my thoughts, then anyone who has exactly those thoughts is me, to the same degree as the upload is me.
What I think is going on is that we only have room for one “me,” and we reject continuators coming in multiples. But surely someone has asked this question: what if I’m uploaded while I’m alive? Given a choice favoring the welfare of the upload over the meat version, or the reverse, would you be equally loyal to each prior to the uploading (you must decide)? What if a thousand versions of you are created? Would that be a good thing (because “you” multiply “your” experiences), or a bad thing (because, prior to uploading, you are forced to divide your loyalties between a thousand versions)?
I’m sure the conditioning process you and Janos envision would succeed, but it’s entirely an acquired taste. When a person becomes concerned *today* about being uploaded tomorrow, his intuition pumps should be primed by these thought experiments. If the person nevertheless decides to continue to be concerned, he’s in the grip of a philosophical position; it is *not* the way our intuitions spontaneously turn. Taking *those* intuitions seriously leads to absurdities, because our concept of identity is limited to one being. But a being created after I die is no different from a being created before I die, and it’s hard to see how anyone’s intuitions would claim a relevant distinction. Therefore, it seems that any method of continuation that’s capable of repeated application fails to create a sense of identity.
I truly wonder whether those who want to be uploaded might prefer being repeatedly uploaded.
No doubt, but my point concerned a decision today in anticipation of far-future uploading.
My answer to that followed: But many people have already performed this in imagination as a philosophical thought experiment or intuition pump, and this has pumped their intuitions. It’s a popular thought experiment.
In particular, there’s probably a significant overlap between upload enthusiasts and science fiction genre fans. Anyone who has read a significant amount of science fiction has almost certainly gone through this or equivalent thought experiments more times than he can remember.
But for a person concerned *today* about being uploaded, there’s no rational basis for prejudice against the exact duplicate compared to the upload.
There is a rational basis for not caring about that which you have no control over anyway, such as perfect copies of you vastly far away. There is a rational basis for reserving your concern for that which you have some control over. You have finite resources, and it is rational to spend them where they will do some good.
This, of course, assumes that you are only one of the copies. On another interpretation, where we take qualitative identity to entail numerical identity, you are all of the copies. You do care about all of them because you are all of them. Since you care about yourself, you care about all of them.
What I think is going on is that we only have room for one “me,” and we reject continuators coming in multiples.
Certainly. This intuition however can be cured by merging. You allow yourself to be duplicated, split into two people. Each person goes out and has a day or a week or a year of experiences. Then at the end of the year, they are re-integrated into a single person. He will have the memories of both. He will not be able to place the memories of either one in order either before or after the other. But this is not unprecedented in reality. I have often had two experiences, and then later on when remembering, was unsure which came first and which came second. He will, however, remember having been split into two, and he will remember having then been each of the two, though seemingly (so it seems to him) at different times.
Do this enough times, and he will expect to become both, not simultaneously, but (as he sees it) at different times. If he is split into two, then each of the duplicates will think something like, “I am this duplicate this time, though at another time I will be – or maybe was (and temporarily forgot) – the other duplicate, though eventually, at re-integration, I will remember everything.” And, as he is about to be split into two, he will think, “I am about to become first one of the duplicates, and then the other, though I don’t know which one will come first, nor, at the end, will I remember which one came first.”
This intuition could, conceivably, be so strong that he feels that he will become both (but “at different times”) even if he knows there will be no re-integration at the end. And each duplicate may even face death with equanimity, with the gut feeling that all that his death amounts to is the loss of some memories – as long as the other duplicate survives.
With the intuition properly prepared, then, a person can anticipate becoming both of the duplicates when a person is split in two. For example, if he is uploaded but kept alive so that there are now two of him. Or if he is copied a thousand times.
To what extent can rationality aid those with abnormal psychology – those who are already predisposed to heavy biases? What specific techniques, models, or questions best address such skewed perspectives?
For example, if you had a friend who was clinically depressed, what advice on rationality might you offer? After the basics of: seek medication and counseling, exercise, and get enough sunlight?
I think there is nothing irrational about depression. I see depression in terms of physiology, that it is the necessary aversive mental state between “at rest” and the euphoric state of near death metabolic stress when you are running from a bear, where to be caught is certain death. Physiology induces a state of euphoria so that one can run until one has escaped from the bear or dropped dead from exhaustion. Being caught and dropping dead from exhaustion are *the same* as far as evolution is concerned. That state has to be euphoric if one is going to ignore the pain signals and continue running. If organisms could enter a euphoric state easily they would, and would risk death with no benefit.
Evolution has configured physiology to minimize the sum of deaths from being caught by bears, from dropping dead from exhaustion and from suicide.
I do not understand. Are you saying depression is the recuperation after such a fight-or-flight response? Or that it is part of the response itself?
Mayo Clinic studies find that those who suffer from recurring depression have a 9%, or 1 in 11, chance of death from suicide. This makes it one of the most common causes of death, just above firearms and just below motor vehicle accidents.
Citation? I’ve seen studies that give suicide rates that high for borderline personality disorder, but never for DSM-IV Major Depressive Disorder. The correlation is embarrassingly weak for depression and suicide.
You’ve written a lot about foragers vs farmers, including farming bringing war. Azar Gat’s “War in Human Civilization” is excellent (so far at least) and places a lot of attention on the distinction between true hunter-gatherers and mere primitive agriculturalists, and how we can get uncontaminated evidence of what our ancestors were like. He concludes that they were pretty similar to primitive farmers with respect to their proclivity towards war.
Interesting book! I may have to pick that up. I have gone around with Robin on the issue and I think he is wrong. Research from anthropologists and archeologists (such as Keeley and LeBlanc) makes it pretty clear that foragers are warlike.
REALIST: research on primitive societies shows that they are very warlike
UTOPIAN: Those societies are not primitive enough. They are all at least somewhat “post-forager.” Some are pastoralists, some have limited agriculture, etc…
REALIST: If you look strictly at foragers like the Inuit, !Kung San, and Aborigines, you see that their history is very warlike.
UTOPIAN: those are only a small number of data points. We don’t have enough true foragers to examine.
REALIST: If you look at the archeological record, you find that people before Neolithic times were very warlike. There is ample evidence of arrows and axes designed just for warfare, defensive formations, bones with weapons still in them, etc.
UTOPIAN: the archeological record is also spotty.
REALIST: Now consider our model. Records of people on small islands show that they do not live “in balance” with the environment. They tend to overuse their local resources slowly. E.g., piles of discarded abalone shells get smaller from generation to generation, indicating that they are slowly overusing their resource until it disappears from the island. Why doesn’t this happen as often on the mainland? Because people who slowly deplete their local resources make war on their neighbors.
UTOPIAN: that is just a theory. You haven’t proven that.
REALIST: I think any good Bayesian would have concluded that foragers are warlike, unless they had a subjective prior heavily skewed toward the Noble Savage. And even then, the available evidence strongly confirms the warlike savage.
It seems you can buy a service two ways:
1. jobs I can do, but don’t want to do
2. jobs I can’t do
In the former are many menial jobs and in the latter are many of the finer things in life. If you tell a cook how you want your food, your dining experience is the former. You’d never tell your heart surgeon how you want your incision.
These New York chefs are defending the status of their product.
Since Robin has an interest in paternalism and better access to info, I’d like him to help get the word out about the FDA’s attempt to get in the way of people knowing their own genetic information.
… be a charity angel.