Did you have great hopes for Xmas? Were you disappointed, but think it was good at least for you to have hoped? Turns out, hope need not make you happier. From the NYT Year in Ideas: Prisoners with life sentences but with the possibility of parole adapt less well to prison life, for example, than prisoners with life sentences without the possibility of parole. … The research team … tracked people who had portions of their colons removed or bypassed, such that the patients couldn't defecate normally. The condition is extremely unpleasant and leads many people to say they'd rather be dead. … But a colostomy isn't always permanent. Some patients are likely to heal and have their bowels reconnected. Were it up to the patient to choose, "almost anybody would choose temporary over permanent," Ubel says.
If this is true lottery tickets must cause poor people lots of unhappiness.
The Palestinian poet Mahmoud Darwish described his people's plight by saying: "We are destined to suffer from this deep-rooted affliction: the affliction of hope."
In the original Arabic: "علينا ان نصاب بهذا الداء المتأصل، داء الامل"
These are just versions of good old-fashioned relative deprivation, no?
Just because your "up" direction is the future doesn't mean you have to keep comparing yourself to the future.
That's not how the human mind works.
You can argue that isn't how it should work, or that we should work to change our minds from that pattern...
...but that doesn't make our minds work any differently here and now, does it?
Zero,
Aha, thanks! Now I remember hearing about it from that TED talk. Must have been some cross-wiring in this poorly structured memory device...
Assuming the results in the original post generalize, transhumanists who expect to see massively human-condition-improving technologies within their lifetime should be significantly unhappier than other people.
Just because your "up" direction is the future doesn't mean you have to keep comparing yourself to the future. You can just as easily compare yourself to a Middle Ages peasant and think, "Wow, they were completely screwed, not even any cryonics."
It looks to me like most folks only see a future as different from the present as the size of the largest change that separates their present from their childhood. Nanotech sounds "absurd" because they don't remember the invention of electricity; the Singularity sounds "absurd" because they don't remember being a chimpanzee.
A transhumanist (should) have a much wider historical perspective, which creates a much larger gradient between the past and the future. But which one you mentally compare yourself to is up to you.
Paul Gebheim: I don't have a link to the paper, but Dan Gilbert talks about it in the video at the link below (starting at ~10:00).
http://www.ted.com/index.ph...
"you're thinking, I can't wait until I get rid of this." I have thoughts like that fairly frequently, about things like having to eat, having to sleep, having very limited I/O bandwidth, and my mind being a black box to me (to the point where I can't even make backups).
Assuming the results in the original post generalize, transhumanists who expect to see massively human-condition-improving technologies within their lifetime should be significantly unhappier than other people.
This brings back echoes of Dostoevsky's The Grand Inquisitor, only there it's applied to the great existential hope that Christianity casts over a society. The Inquisitor notes to the silent Jesus that the path to happiness isn't through Freedom or Choice, but through control and lack of concern over daily things (or something like that). Old Dusty wasn't the perkiest of fellows, but he did seem to get some things right :-)
Also, I think one of Eliezer's papers points out a study where college-aged students were taking a photography class. At the end, each student got to choose one picture to take home, and the other photos would be kept by the instructor. Half of the students were told their choice would be final; the other half were told they were allowed to exchange the photo they chose for another if they changed their mind.
Guess who was happier with their choice? (P.S. Anyone have a link to this?)
The solution to the dilemma is the development of (no sarcasm, honestly) peace about outcomes. If you can balance the benefits of possible improvement in your situation with acceptance of the chance that you won't improve, you can have the best of both worlds. The problem isn't the hope, it's the over-weighting of both the chance of improvement and the benefit.
I like the caste system observation: we are under a delusion that limitless personal freedom is what makes us happy when it's actually restriction of choice -- something we rarely choose for ourselves -- that pleases us most in the long run. It reminds me of a challenge that I still have trouble answering:
I'm constantly making decisions. Let's say I make the right decision two-thirds of the time. Now let's say I have the option to give up my ability to make decisions entirely in return for the services of a wise decision-maker who will make the right decision for me three-quarters of the time.
By now I've gotten very close to saying that yes, I will have to enlist such services; in fact, it's my responsibility to do that, considering that decisions I make affect the world in however limited a way.
This all raises the question of what it means to be a person if not a decision-maker, but I can make peace with the fact that my personhood will be suppressed in return for the world benefiting more from my actions -- I can almost make peace with it.
My goal is to work past this fetishism of personal choice and the preciousness of my own personhood, to say an unambiguous yes to such a chance of becoming a better world-affecter.
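Taking the hypothetical numbers in the comment above at face value (right two-thirds of the time deciding alone, three-quarters with the wise decision-maker), a quick sketch of a simulation shows what the delegation buys: about one extra right decision per twelve, since 3/4 − 2/3 = 1/12. The function name and trial count here are purely illustrative.

```python
import random

def right_decision_rate(p_right: float, n: int = 100_000, seed: int = 0) -> float:
    """Simulate n independent decisions, each coming out right with
    probability p_right, and return the observed fraction of right ones."""
    rng = random.Random(seed)
    return sum(rng.random() < p_right for _ in range(n)) / n

self_rate = right_decision_rate(2 / 3)      # deciding for yourself
delegate_rate = right_decision_rate(3 / 4)  # the wise decision-maker
print(f"self: {self_rate:.3f}, delegate: {delegate_rate:.3f}")
```

With 100,000 trials the sampling noise is a fraction of a percentage point, so the delegate's edge of roughly 0.083 stands out clearly, whatever one makes of the personhood trade.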
Pandora's box?
if I had the option of permanently losing a part of my bowel or having hope, I'd choose the latter
That's what your self-model says you'd do. We have no particular reason to believe that your model is accurate.
Besides, you've gotten it wrong: the choice needs to be "permanently losing a part of my bowel" or "temporarily losing a part of my bowel". You can have hope in either condition; it is only justified in one.
That's weird. I can see the point of this, and yet, if I had the option of permanently losing a part of my bowel or having hope, I'd choose the latter. Does this mean I'm not a eudaemonist, or are my actions speaking louder than words? Or something else?
"It's not the despair, Laura. I can take the despair. It's the hope."
This is interesting when compared or contrasted with survey data comparing Europeans with Americans: Europeans both reported lower self-evaluated happiness and were more inclined to agree that luck played a greater role than hard work, with the converse holding true for Americans.
The notion that one is in charge of one's own destiny seems like it could easily play either way.