15 Comments

There is probably (as is so often the case) an inverted U relating happiness to longevity and success in life.

At the depression end you do not get anything done, you suffer physically, and you may even commit suicide. At the eternal-bliss end you do not do anything either (and thus are unlikely to be much of a success or to live a healthy life), so a slight unhappiness that drives you forward is probably not such an absurd optimum state?
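
To put the inverted-U claim in concrete terms, here is a minimal sketch; the quadratic shape and the location of the peak are illustrative assumptions, not fitted to any data:

```python
import numpy as np

# Purely illustrative inverted-U: success as a quadratic function of happiness,
# peaking somewhere below the maximum possible happiness.
happiness = np.linspace(0.0, 1.0, 101)       # 0 = depression, 1 = eternal bliss
success = -(happiness - 0.7) ** 2 + 0.5      # assumed peak at happiness = 0.7

best = happiness[np.argmax(success)]
print(f"Success peaks at happiness ~ {best:.2f}, not at 1.0")
```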

Fair enough. Except that, if there's a positive selection effect, you'll end up with an entire population of bad decision-makers.

Great Filter? :)

If you're happy and you know it, clank your chains!

I don't think you fully appreciate how selection effects work. Someone disposed to "risk-taking behaviors, excess alcohol and drug consumption, binge eating, and ... neglect[ing] threats" is exactly the sort of person who blunders into parenthood. The richer, soberer, slightly more realistic and melancholy person... not so much. Being smart, successful, and rational decreases your evolutionary fitness. Fertility statistics don't lie.

Great Filter?

If a utilitarian computer gave that advice, I would ask why, since the most wealthy usually don't derive that much happiness from tax cuts. Maybe there's a point about economic efficiency somewhere. Or specific types of innovation that are driven mostly by the most wealthy investors and that would make the world a lot better.

However, I would not expect a functional utilitarian machine to output such policy advice. From the total set of available policies, it's hard to see how further resource inequality is the best one for maximizing total happiness.
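
A toy illustration of that point, assuming logarithmic (diminishing-marginal) utility of wealth; the population and the dollar amounts are made up:

```python
import math

# Toy population: one very rich agent and many poorer ones (made-up wealth figures).
wealth = [1_000_000.0] + [30_000.0] * 99

def total_happiness(w):
    # Assumed diminishing marginal utility of wealth (log utility).
    return sum(math.log(x) for x in w)

# Policy A: transfer $1,000 from each non-rich agent to the richest one.
policy_a = [wealth[0] + 99 * 1_000.0] + [x - 1_000.0 for x in wealth[1:]]
# Policy B: the reverse transfer, from the richest to everyone else.
policy_b = [wealth[0] - 99 * 1_000.0] + [x + 1_000.0 for x in wealth[1:]]

print("status quo:          ", total_happiness(wealth))
print("tax cut for richest: ", total_happiness(policy_a))
print("transfer to the rest:", total_happiness(policy_b))
# Under log utility, the transfer toward the poorer agents yields the
# highest total, and the tax cut for the richest yields the lowest.
```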

I don't know, if government policies keep favoring the most wealthy, then taxing everyone else to give tax cuts to the richest 0.00000001% would do it.

Cute. But it's useless as an actual critique. Felix is a utility monster, and humans aren't utility monsters. If Felix existed, the computer would authorize scientists to abduct and study him so that his neurological phenotype could be replicated as many times as possible.

Seeing how this is a thread on being happy, and one of the goals of some people is to maximize total happiness, here is a cautionary tale.

http://www.smbc-comics.com/...

@Dremora

There is a downside to being unhappy: being less confident means being less attractive, and stress is not conducive to good health. Maybe constantly overthinking things wears down the body as well, with the greater cognitive capability during depression being similar to the greater strength and speed provided by the adrenaline release caused by fear (sure, greater strength and speed sound good, but prolonged adrenaline release will wear your body down).

It's also possible that the differences in cognitive capability only became relevant in our modern world (with all its intricate deceit, crime, and scams), while being happy doesn't influence a hunter-gatherer's ability to hunt and gather.

Lastly, the results could be influenced by culture. Perhaps children are taught flawed ways of thinking about the world and judging people, and it is those flawed ways of thinking that are vulnerable to happiness.

Well, as a first-order approximation, there has to be a tradeoff between any intelligent agent's happiness and motivation. If we want to think about intelligence far outside human brains, consider Watson or the Netflix recommendation algorithm.

Certainly these systems have computational complexity somewhat on the order of a lower vertebrate, if not at the level of dogs. So it's not conceptually absurd to speak of their "happiness." At their core, both systems use boosted ensemble methods across a variety of base classifiers. The systems are optimized to minimize some classification loss function.

I think you could say that Watson/Netflix are happy when that loss function is small or decreasing. A day when Netflix gets a high proportion of movie ratings correct probably "feels" like a good day to that system, in the way a day when we get a raise or go on a good date feels to us.

From this perspective, though, most complex problems (the ones you would need to build an intelligent agent for) have very heavy loss functions. R-squared in the vast majority of real-world statistical problems, even with very sophisticated algorithms, rarely exceeds 20%. Netflix and Watson misclassify far more often than they classify correctly.

So must this imply an asymmetry of pain vs. pleasure? Or does Netflix's internal state correspond to a "greater feeling of happiness" on a correct classification than the "sting of pain" on a misclassification?
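
A rough sketch of this loss-as-(un)happiness framing, using an ordinary logistic-regression classifier as a stand-in (the real systems' internals are not public; mapping low loss to "happiness" is purely an assumption here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Stand-in "agent": a simple classifier trained on a noisy synthetic problem.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.3, random_state=0)  # flip_y makes labels noisy
model = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])

# One "day" of predictions on unseen data.
probs = model.predict_proba(X[1000:])
loss = log_loss(y[1000:], probs)

# Assumed mapping: lower loss -> "happier" system. This just makes the
# framing in the comment concrete; it is not how Watson or Netflix describe
# their systems.
happiness = -loss
print(f"log loss today: {loss:.3f}  ->  'happiness' signal: {happiness:.3f}")
```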

Cheerfulness and happiness covary with relative I.Q. levels in the people subjected to this testing. This "study" isn't saying anything with only "happy" as a metric.

This is bad news, actually. It implies happier agents are less functional. If there is a systematic reason for this, rather than, say, a psychological confound that happens to exist in current humans, then it would predict that selection effects tend to favor naturally less happy agents. In that case, either the correlation has to be broken somehow, or selection effects have to be mitigated, or the future belongs to the depressed.
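
As a back-of-the-envelope illustration of that selection argument, here is a toy replicator-dynamics sketch; the 5% fitness edge for the less happy type is an invented number, only there to show the direction of the dynamics:

```python
# Toy replicator dynamics: if the "less happy" type has even a small
# reproductive edge, it dominates in the long run. Fitness values are invented.
happy_share = 0.9
fitness = {"happy": 1.00, "unhappy": 1.05}   # assumed 5% edge for the unhappy type

for generation in range(200):
    w_happy = happy_share * fitness["happy"]
    w_unhappy = (1 - happy_share) * fitness["unhappy"]
    happy_share = w_happy / (w_happy + w_unhappy)

print(f"happy share after 200 generations: {happy_share:.3f}")
```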

...Yay?

(for those who can't read my mind, the joke is that this suggests it's often good to be unhappy - a depressing but helpful conclusion)

I wonder which is more preferred and whether they would wish it any different.
