Tag Archives: Hypocrisy

Graeme Wood on Futarchy

At the end of his article on the deaths of Intrade and its founder John Delaney, Graeme Wood considers futarchy:

It’s perhaps no great surprise that we haven’t embraced Hanson’s “futarchy.” Our current political system resists dramatic change, and has resisted it for 237 years. More traditional modes of prediction have proved astonishingly bad, yet they continue to run our economic and political worlds, often straight into the ground. Bubbles do occur, and we can all point to examples of markets getting blindsided. But if prediction markets are on balance more accurate and unbiased, they should still be an attractive policy tool, rather than a discarded idea tainted with the odor of unseemliness. As Hanson asks, “Who wouldn’t want a more accurate source?”

Maybe most people. What motivates us to vote, opine, and prognosticate is often not the desire for efficacy or accuracy in worldly affairs—the things that prediction markets deliver—but instead the desire to send signals to each other about who we are. Humans remain intensely tribal. We choose groups to associate with, and we try hard to show everybody which groups we belong to. We don’t join the Tea Party because we have exhaustively studied and rejected monetarism, and we don’t pay extra for organic food because we have made a careful cost-benefit analysis based on research about its relative safety. We do these things because doing so says something that we want to convey to others. Nor does the accuracy of our favorite talking heads matter that much to us. More than we like accuracy, we like listening to talkers on our side, and identifying them as being on our team—the right team.

“We continue to have consistent results and evidence that markets are accurate,” Hanson says. “If the question is, ‘Do these things predict well?,’ we have an answer: They do. But that story has to be put up against the idea that people never really wanted more accurate sources.”

On this theory, the techno-libertarian enthusiasts got the technology right, and the humanity wrong. Whenever John Delaney showed up on CNBC, hawking his Intrade numbers and describing them as the most accurate and impartial around, he was also selling a future that people fundamentally weren’t interested in buying. (more)

I don’t much disagree — I raised these issues with Wood when he interviewed me. As usual, our hopes for idealistic outcomes mostly depend on finding ways to shame people into actually supporting what they pretend to support, by making the difference too obvious to ignore.

More specifically, I hope prediction markets within firms may someday gain a status like cost accounting today. In a world where no one else did cost accounting, proposing that your firm do it would basically suggest that someone was stealing there. Which would look bad. But in a world where everyone else does cost accounting, suggesting that your firm not do it would suggest that you want to steal from it. Which also looks bad.

Similarly, in a world where few other firms use prediction markets, suggesting that your firm use them on your project suggests that your project has an unusual problem in getting people to tell the truth about it via the usual channels. Which looks bad. But in a world where most firms use prediction markets on most projects, suggesting that your project not use prediction markets would suggest you want to hide something. That is, you don’t want a market to predict if your project will make its deadline because you don’t want others to see that it won’t make the deadline. Which would look bad.

Once prediction markets were a standard accepted practice within firms, it would be much easier to convince people to use them in government as well.

The Coalition Politics Hypothesis

Game theory lets us analyze precise models of social situations. While each model leaves out much that is important, the ability to see how an entire set of payoffs, info, and acts work together can give powerful insights into social behavior. But it does matter a lot which games we think apply best to which real situations.

Today the game most often used as a metaphor for general social instincts is the public goods game, where individuals contribute personal efforts to benefit everyone in a group. This is seen as a variation on the prisoner’s dilemma. With this metaphor in mind, people see most social instincts as there to detect and reward contributions, and to punish free-riders. Many social activities that on the surface appear to have other purposes are said to be really about this. Here, “pro-social” is good for the group, while “anti-social” is bad. Institutions or policies that undercut traditional social instincts are suspect.
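The free-rider logic behind this metaphor is easy to make concrete. Here is a minimal Python sketch of an n-player public goods game; the multiplier and stakes are my own illustrative choices, not from any particular study:

```python
def public_goods_payoffs(contributions, multiplier=2.0):
    """N-player public goods game: contributions go into a pot, the pot is
    multiplied, and the result is shared equally among all players.
    With 1 < multiplier < n, everyone contributing beats no one contributing,
    yet each individual player does better by free-riding."""
    n = len(contributions)
    pot = multiplier * sum(contributions)
    return [pot / n - c for c in contributions]

# Four players, each endowed with 10. All contribute:
print(public_goods_payoffs([10, 10, 10, 10]))  # [10.0, 10.0, 10.0, 10.0]
# No one contributes:
print(public_goods_payoffs([0, 0, 0, 0]))      # [0.0, 0.0, 0.0, 0.0]
# One player free-rides on the other three, and does best of all:
print(public_goods_payoffs([0, 10, 10, 10]))   # [15.0, 5.0, 5.0, 5.0]
```

The gap between the free-rider's payoff and the contributors' payoffs is the incentive that social instincts for detecting and punishing free-riders are, on this theory, built to counter.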

While this metaphor does give insight, the game I see as a better metaphor for general social instincts is this:

Divide The Dollar Game … There are three players … 1, 2, 3. The players wish to divide 300 units of money among themselves. Each player can propose a payoff such that no player’s payoff is negative and the sum of all the payoffs does not exceed 300. … Players get 0 unless there is some pair of players {1, 2}, {2, 3}, or {1, 3} who propose the same allocation, in which case they get this allocation. …

It turns out that in any equilibrium of this game, there is always at least one pair of players who would both do strictly better by jointly agreeing to change their strategies together. …

Suppose the negotiated agreements are tentative and non-binding. Thus a player who negotiates in a sequential manner in various coalitions can nullify his earlier agreements and reach a different agreement with a coalition that negotiates later. Here the order in which negotiations are made and nullified will have a bearing on the final outcome. … It is clear that coalitions that get to negotiate later hold the advantage in this scheme. (more)
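The instability the quote describes is easy to verify directly. Here is a minimal Python sketch of the game's payoff rule, with one concrete deviation; the specific allocations are my own illustration, not from the quoted paper:

```python
from itertools import combinations

def outcome(proposals):
    """Divide the Dollar payoffs: if some pair of players proposes the
    exact same allocation, that allocation is implemented; otherwise
    everyone gets zero."""
    for i, j in combinations(range(3), 2):
        if proposals[i] == proposals[j]:
            return proposals[i]
    return (0, 0, 0)

# Players 1 and 2 agree on an even split; player 3 proposes something else.
profile = [(100, 100, 100), (100, 100, 100), (300, 0, 0)]
print(outcome(profile))    # (100, 100, 100)

# But players 2 and 3 can jointly deviate and cut player 1 out entirely:
deviation = [(100, 100, 100), (0, 150, 150), (0, 150, 150)]
print(outcome(deviation))  # (0, 150, 150) -- both deviators strictly gain
```

The same move works against any agreed split: the two players getting the least can always re-divide the excluded player's share between themselves, which is why no coalition structure is stable here.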

That is, most social behavior is about shifting coalitions that change how group benefits are divided, and social instincts are mostly about seeing what coalitions to join and how to get others to want you in their coalitions. Such “social” behavior isn’t good for the group as a whole, though it can be good for your coalition. Because coalition politics can be expensive, institutions or policies that undercut it can be good overall.

In this view of social behavior, we expect to see great efforts to infer each person’s threat point – how much they and a coalition would lose if they leave that coalition. We also expect even greater efforts to infer each person’s loyalty – what coalitions they are likely to prefer and help. And we expect great efforts to signal desirable loyalties and threat points. When shifting coalitions are important, we expect lots of efforts to go into seeing and changing the focal points people use to coordinate which new coalitions form, and to seeing who will be pivotal in those changes.

At a meta level, people would also try to infer what other people think about these things. That is, folks will want to know what others think about various loyalties, threat points, and focal points, and in response those others will try to signal their opinions on such things. In other words, people will want to know how well others can track and influence changing fashions on these topics. At a higher meta level, people will want to know what others think that still others think about these things, i.e., they’ll want to know who is seen to be good at tracking fashion. And so on up the meta hierarchy.

When people talk, we expect them to say some things directly and clearly to all, to influence overall focal points. But we expect many other messages to be targeted to particular audiences, like “Let’s dump that guy from our coalition.” When such targeted messages might be overheard, or quoted to others, we expect talking to be indirect, using code words that obscure meanings, or at least give plausible deniability.

A social world dominated by shifting coalitions would spend modest efforts to influence temporary policies, such as how to divide up today’s spoils, and more efforts on rare chances to change longer term policies that more permanently divide spoils. Even more effort would be spent on rare chances to change who is possible as a coalition partner. For our forager ancestors, killing someone, or letting a new person live nearby, could change the whole game. In a firm today, hiring or firing someone can have similar effects.

This view of social behavior as mostly about shifting coalitions raises the obvious question: why doesn’t most social behavior and conversation seem on the surface to be about such things? And the obvious homo hypocritus answer is that we do such things indirectly to avoid admitting that this is what we are doing. Since coalition politics is socially destructive, we have long had social norms to discourage it, such as the usual norms against gossip. So we do these things indirectly, to get plausible deniability.

This can explain why we place such a high premium on spontaneity and apparent randomness in conversation and other leisure behavior. And also why we seem so uninterested in systematic plans to prioritize our efforts in charity and other good causes. And why we drop names so often. When we manage our shifting coalitions, we prefer to stay free to quickly shift our conversations and priorities to adapt to the changing fashions. If you ever wonder why the news, public discourse, and academia seem so uninterested in the topics most everyone would agree are really important, this is plausibly why.

Boss Hypocrisy

In our culture, we are supposed to resent and dislike bosses. Bosses get paid too much, are mad with power, seek profits over people, etc. In fiction, we are mainly willing to see bosses as good when they run a noble work group, like a police, military, medicine, music, or sport group. In such rare cases, it is ok to submit to boss domination to achieve the noble cause. Or a boss can be good if he helps subordinates fight a higher bad boss. Otherwise, a good person resents and resists boss domination. For example:

The [TV trope of the] Benevolent Boss is that rarity in the Work [Sit]Com: a superior who is actually superior, a nice guy who listens to employee problems and really cares about the issues of those beneath him. … A character that is The Captain is likely, but not required, to be a Benevolent Boss.
Contrast with Bad Boss and Stupid Boss. Compare Reasonable Authority Figure. In more fantastic works, this character usually comes in the form of Big Good. On the other hand, an Affably Evil character can be a benevolent boss with his mooks, as well.
In The Army, he is often The Captain, Majorly Awesome, Colonel Badass, The Brigadier, or even the Four Star Badass and may be A Father to His Men.
For some lucky workers, this is Truth in Television. For a lot of other people, this is some sort of malicious fantasy. (more)

But here is a 2010 (& 2011) survey of 1000 workers (30% bosses, half blue collar):

Agree or completely agree with:

  • You respect your boss 91%
  • You think your boss trusts you 91%
  • You think your boss respects you 91%
  • You trust your boss 86%
  • If your job was on the line, your boss would go to bat for you 78%
  • You consider your boss a friend 61%
  • You would not change a thing about your boss 59%
  • Your boss has more education than you 53%
  • You think you are smarter than your boss 37%
  • You aspire to have the boss’s job 30%
  • You work harder than your boss 28%
  • You feel pressure to conform to your boss’s hobbies/interests in order to get ahead 20% (more; more; more)

In reality most people respect and trust their bosses, see them as a friend, and so on. Quite a different picture than the one from fiction.

Foragers had strong norms against domination, and bosses regularly violate such norms. We retain a weak allegiance to forager norms in fiction and when we talk politics. But we also have deeper more ancient mammalian instincts to submit to powers above us. And also, our competitive economy probably tends to make real bosses be functional and useful, and we spend enough time on our jobs to see that.

Many other of our cultural presumptions are probably similar. We give lip service to them in the far modes of fiction and politics, but we quickly reject them in the near mode of concrete decisions that matter to us.

`Best’ Is About `Us’

Why don’t we express and follow clear principles on what sort of inequality is how bad? Last week I suggested that we want the flexibility to use inequality as an excuse to grab resources when grabbing is easy, but don’t want to obligate ourselves to grab when grabbing is hard.

It seems we prefer similar flexibility on who are the “best” students to admit to elite colleges. Not only do inside views of the admission process seem to show careful efforts to avoid clarity on criteria, but ordinary people also seem to support such flexibility:

Half [of whites surveyed] were simply asked to assign the importance they thought various criteria should have in the admissions system of the University of California. The other half received a different prompt, one that noted that Asian Americans make up more than twice as many undergraduates proportionally in the UC system as they do in the population of the state. When informed of that fact, the white adults favor a reduced role for grade and test scores in admissions—apparently based on high achievement levels by Asian-American applicants. (more)

Matt Yglesias agrees:

This is further evidence that there’s no stable underlying concept of “meritocracy” undergirding the system. But rather than dedicating the most resources to the “best” students and then fighting over who’s the best, we should be allocating resources to the people who are most likely to benefit from additional instructional resources.

But this seems an unlikely strategy for an elite coalition to use to entrench itself. If we were willing to admit the students who would benefit most by objective criteria like income or career success, we could use prediction markets. The complete lack of interest in this suggests that isn’t really the agenda.

Much of law is like this, complex and ambiguous enough to let judges usually draw their desired conclusions. People often say the law needs this flexibility to adapt to complex local conditions. I’m skeptical.

Inequality Talk Is About Grabbing

The US today has about 425 billionaires, over 1/3 of the world’s total. Many folks say these billionaires are unfairly unequal, and so we should tax them lots more.

People usually become billionaires via having “super-powers,” i.e., very unusual abilities, at least within some context. But what if most billionaires had super-powers of the traditional comic book sort, like x-ray vision or an ability to fly, etc.? That is, what if people with physical super-powers earned billions in the labor market by selling the use of these powers? Would folks be just as eager to tax them to reduce unfair inequality?

My guess is no, most would be less eager to tax billionaires with physical super-powers. And I offer this prediction as a test of my favored theory of expressed inequality concerns: that inequality talk is usually a covert way of coordinating who to maybe grab stuff from. Let me explain.

As I’ve discussed before, while people usually justify their inequality concerns by noting that inequality can make lower folks feel bad, that justification can apply equally to a great many sorts of inequality. Yet concern is actually only voiced about a very particular sort: financial inequality at a given time between the families of a nation. The puzzle in need of explaining is: why is so little concern expressed about all the other sorts of inequality?

My favored theory is an application of homo hypocritus: our forager ancestors developed the ability to express and enforce social norms, and then developed rich and subtle abilities to coordinate to evade those norms. One of those norms was that foragers weren’t supposed to grab stuff from each other just because they wanted the stuff, or just because that stuff was easy to grab. But they did have norms favoring sharing and equal treatment, and so it was ok to talk about who might be violating such norms, and what punishments to apply to violators.

But they all knew, at least subconsciously, that some groups would be quite effective at retaliating against such suggestions. The accused might physically resist the attempted punishment, or might retaliate with contrary accusations. So foragers needed ways not only to overtly accuse folks of violating norms, and to officially propose to take stuff away as punishment, but also to covertly discuss who might have especially nice stuff to take, and who they could most easily get away with grabbing from.

I suggest that most talk about the problems of inequality actually invokes this ancient hypocritical ability to covertly discuss where to find lots of nice easy-to-grab stuff. We don’t discuss inequalities across time, because it is hard to grab much more than we do from the past or the future. We don’t much discuss the inequality of rich foreigners, because it is much harder to grab their stuff. We don’t much discuss inequality of those with unusual artistic abilities or sexual attractiveness, because we can’t directly grab their advantages and while we might try to grab their material goods to compensate, they don’t have that much, and the grabbing would be hard. (Also, such folks have more social status to resist with. For foragers, status counted lots more than material goods for influence.)

A few people within our nation who each have lots and lots of material goods, however, seem to make a great target for grabbing. So people discover they have a deep moral concern about that particular inequality, and ponder what oh what could we possibly do to rectify this situation? Anyone have an idea? Anyone?

But if those few very rich folks had real physical super-powers, we would be a lot more afraid of their simple physical retaliation. They might be very effective at physically resisting our attempts to take their stuff. So somehow, conveniently, we just wouldn’t find that their unequal wealth evoked as much deeply felt important-social-issue-in-need-of-discussing moral concern in us. Because, I hypothesize, in reality those feelings only arise as a cover to excuse our grabbing, when such grabs seem worth the bother.

Impressive Power

Monday I attended a conference session on the metrics academics use to rate and rank people, journals, departments, etc.:

Eugene Garfield developed the journal impact factor a half-century ago based on a two-year window of citations. And more recently, Jorge Hirsch invented the h-index to quantify an individual’s productivity based on the distribution of citations over one’s publications. There are also several competing “world university ranking” systems in wide circulation. Most traditional bibliometrics seek to build upon the citation structure of scholarship in the same manner that PageRank uses the link structure of the web as a signal of importance, but new approaches are now seeking to harness usage patterns and social media to assess impact. (agenda; video)
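For reference, the h-index mentioned in the quote has a simple definition: the largest h such that the author has at least h papers with at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Hirsch's h-index: sort citation counts in descending order and find
    the largest rank h whose paper still has at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers with >= 4 citations each)
print(h_index([25, 8, 5, 3, 3]))  # 3 (one huge hit doesn't raise the index)
```

Note how the second example shows the metric's deliberate insensitivity to a single blockbuster paper, one of the "good features" the session speakers debated.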

Session speakers discussed such metrics in an engineering mode, listing good features metrics should have, and searching for metrics with many good features. But it occurred to me that we can also discuss metrics in social science mode, i.e., as data to help us distinguish social theories. You see, many different conflicting theories have been offered about the main functions of academia, and about the preferences of academics and their customers, such as students, readers, and funders. And the metrics that various people prefer might help us to distinguish between such theories.

For example, one class of theories posits that academia mainly functions to increase innovation and intellectual progress valued by the larger world, and that academics are well organized and incentivized to serve this function. (Yes such theories may also predict individuals favoring metrics that rate themselves highly, but such effects should wash out as we average widely.) This theory predicts that academics and their customers prefer metrics that are good proxies for this ultimate outcome.

So instead of just measuring the influence of academic work on future academic publications, academics and customers should strongly prefer metrics that also measure wider influence on the media, blogs, business practices, ways of thinking, etc. Relative to other kinds of impact, such metrics should focus especially on relevant innovation and intellectual progress. This theory also predicts that, instead of just crediting the abstract thinkers and writers in an academic project, there are strong preferences for also crediting supporting folks who write computer programs, build required tools, do tedious data collection, give administrative support, manage funding programs, etc.

My preferred theory, in contrast, is that academia mainly functions to let outsiders affiliate with credentialed impressive power. Individual academics show exceptional impressive abstract mental abilities via their academic work, and academic institutions credential individual people and works as impressive in this way, by awarding them prestigious positions and publications. Outsiders gain social status in the wider world via their association with such credentialed-as-impressive folks.

Note that I said “impressive power,” not just impressiveness. This is the new twist that I’m introducing in this post. People clearly want academics to show not just impressive raw abilities, but also to show that they’ve translated such abilities into power over others, especially over other credentialed-as-impressive folks. I think we also see similar preferences regarding music, novels, sports, etc. We want people who make such things to show not only that they have impressive abilities in music, writing, athletics, etc., we also want them to show that they have translated such abilities into substantial power to influence competitors, listeners, readers, spectators, etc.

My favored theory predicts that academics will be uninterested in and even hostile to metrics that credit the people who contributed to academic projects without thereby demonstrating exceptional abstract mental abilities. This theory also predicts that while there will be some interest in measuring the impact of academic work outside academia, this interest will be mild relative to measuring impact on other academics, and will focus mostly on influence on other credentialed-as-impressives, such as pundits, musicians, politicians, etc. This theory also predicts little extra interest in measuring impact on innovation and intellectual progress, relative to just measuring a raw ability to change thoughts and behaviors. This is a theory of power, not progress.

Under my preferred theory of academia, innovation and intellectual progress are mainly side-effects, not main functions. They may sometimes be welcome side effects, but they mostly aren’t what the institutions are designed to achieve. Thus proposals that would tend to increase progress, like promoting more inter-disciplinary work, are rejected if they make it substantially harder to credential people as mentally impressive.

You might wonder: why would humans tend to seek signals of the combination of impressive abilities and power over others? Why not signal these things separately? I think this is yet another sign of homo hypocritus. For foragers, directly showing off one’s power is quite illicit, and so foragers had to show power indirectly, with strong plausible deniability. We humans evolved to lust after power and those who wield power, but to pretend our pursuit of power is accidental; we mainly just care about beauty, stories, exciting contests, and intellectual progress. Or so we say.

So does anyone else have different theories of academia, with different predictions about which metrics academics and their customers will prefer? I look forward to the collection of data on who prefers which metrics, to give us sharper tests of these alternative theories of the nature and function of academia. And theories of music, stories, sport, etc.

Suspecting Truth-Hiders

Tyler against bets:

On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)

My translation:

Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.

Yes existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?

For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.

Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth actually is that we don’t want to hear.

If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth be hidden, from the middle of a conversation where we know lots of accuracy has been sacrificed for unknown reasons?

Thought Crime Hypocrisy

Philip Tetlock’s new paper on political hypocrisy re thought crimes:

The ability to read minds raises the specter of punishment of thought crimes and preventive incarceration of those who harbor dangerous thoughts. … Our participants were highly educated managers participating in an executive education program who had extensive experience inside large business organizations and held diverse political views. … We asked participants to suppose that scientists had created technologies that can reveal attitudes that people are not aware of possessing but that may influence their actions nonetheless.

In the control condition, the core applications of these technologies (described as a mix of brain-scan technology and the IAT’s reaction-time technology) were left unspecified. In the two treatment conditions, these technologies were to be used … to screen employees for evidence of either unconscious racism (UR) against African Americans or unconscious anti-Americanism (UAA). … Liberals were consistently more open to the technology, and to punishing organizations that rejected its use, when the technology was aimed at detecting UR among company managers; conservatives were consistently more open to the technology, and to punishing organizations that rejected its use, when the technology was aimed at detecting UAA among American Muslims.

Virtually no one was ready to abandon that [harm] principle and endorse punishing individuals for unconscious attitudes per se. … When directly asked, few respondents saw it as defensible to endorse the technology for one type of application but not for the other—even though there were strong signs from our experiment that differential ideological groups would do just that when not directly confronted with this potential hypocrisy. …

Liberal participants were [more] reluctant to raise concerns about researcher bias as a basis for opposition, a reluctance consistent [with the] finding that citizens tend to believe that scientists hold liberal rather than conservative political views. …

This experiment confronted the more extreme participants with a choice between defending a double standard (explaining why one application is more acceptable) and acknowledging that they may have erred initially (reconsidering their support for the ideologically agreeable technology). … Those with more extreme views were more disposed to … backtrack from their initial position. (more; ungated)

So if we oppose thought crime in general, but support it when it serves our partisan purposes, that probably means that we will have it in the long run. There will be thought crime.

Your Honesty Budget

Kira Newman runs The Honesty Experiment:

30 days. Complete honesty. Can they survive it? — Follow their journey and read about honesty in life, love, and business.

She interviewed me recently. One excerpt:

Honesty Experiment: How do we solve this conundrum?

Hanson: I think the first thing you’ll have to come to terms with is wondering why you think you want to be otherwise. We’re clearly built to be two-faced – we’re built to, on one level, sincerely want to and believe that we are following these standard norms – and at the other level, actually evading them whenever it’s in our interest to get away with it. And since we are built that way, you should expect to have a part of yourself that feels like it sincerely wants to follow the norms, and you should expect another part of you that consistently avoids having to do that.

And so, if you observe this part of yourself that wants to be good (according to the norms), that’s what you should expect to see. It’s not evidence that you’re different from everybody else. So a real hard question is: how different do you want to be, actually? How different are your desires to be different? . . . Overall, you should expect yourself to be roughly as hypocritical as everybody else.

I later recommend compromise:

It would be simply inhuman to actually try to be consistently honest, because we’re so built for hypocrisy on so many levels. But what you can hope for is perhaps a better compromise between the parts of you that want to be honest and the parts of you that don’t. Think more in terms of: you have a limited budget of honesty, and where you should spend it.

Sleep Signaling

We sleep less well when we sleep together:

Our collective weariness is the subject of several new books, some by professionals who study sleep, others by amateurs who are short of it. David K. Randall’s “Dreamland: Adventures in the Strange Science of Sleep” belongs to the latter category. It’s a good book to pick up during a bout of insomnia. …

Research studies consistently find … that adults “sleep better when given their own bed.” One such study monitored couples over a span of several nights. Half of these nights they spent in one bed and the other half in separate rooms. When the subjects woke, they tended to say that they’d slept better when they’d been together. In fact, on average they’d spent thirty minutes more a night in the deeper stages of sleep when they were apart. (more)

In 2001, the National Sleep Foundation reported that 12% of American couples slept apart with that number rising to 23% in 2005. … Couples experience up to 50% more sleep disturbances when sleeping with their spouse. (more)

Why do we choose to sleep together, and claim that we sleep better that way, when in fact we sleep worse? This seems an obvious example of signaling aided by self-deception. It looks bad to your spouse to want to sleep apart. In the recent movie Hope Springs, sleeping apart is seen as a big sign of an unhealthy relationship; most of us have internalized this association. So to be able to send the right sincere signal, we deceive ourselves into thinking we sleep better.
