Author Archives: Stuart Armstrong

Do moral systems have to make sense?

It’s often taken as a disadvantage if your moral values are a hodgepodge of impressions, rules of thumb, and narrow case studies. In fact, universality seems the most desirable attribute of any moral system, with coherence hot on its tail.

Even extreme relativism exalts “there is no truth” to the level of a universal principle, and the “realistic” moral systems solve the issue by saying that moral values are eternal, but that how they balance and how they apply depends on circumstances in the world.

I found myself defending this universality and coherence in recent comments, where I argued that future progress in understanding the brain will cause us to question all our free will assumptions. However, I didn’t want to update my own moral system yet, because the implications hadn’t been organized into anything coherent.

Why did I feel like that? Maybe there’s a natural human urge to make our moral values clear and absolute, while admitting reality is flawed. We’d much prefer “murder is wrong, but may be allowed to prevent other crimes” to “murder is quite likely to be wrong”, even if they end up being similar in practice.

Continue reading "Do moral systems have to make sense?" »


Biases, by and large

There is an old book by Thomas Schelling from 1978, entitled Micromotives and Macrobehavior. In it he describes a computer model to explain segregated neighbourhoods, but it can also describe self-sustaining biased communities. In the model, every agent is grey or black. They prefer their own colour, but not strongly – they will move only if more than 45% of their neighbours are of a different colour. Very quickly, however (within "two moves" in the original computer model) the result is a completely segregated society. More intriguingly, he finds that

increased tolerance does not necessarily make a stable mixed result more likely.

Now ‘colour’ can stand in for many things – race, religion, politics, social class, and biases/opinions.  That last category is where it becomes relevant for us. Modeling ‘neighbours’ as the people you choose to interact with, this shows how a slight preference for avoiding disagreement ("I don’t want the majority of my friends to be of opposite political views") can result in the clumping together of groups with similar biases.

And once the groups are formed, social factors then act to reinforce the biases of the agents – when all those you interact with have similar biases, it becomes very unlikely you will change your mind. Even if you decide to investigate something impartially, all those you know will be pulling in the same direction, meaning that you are unlikely to make a clean break. And stronger biases then feed into the clumping, guaranteeing the stability of the segregation.

But that’s just standard social biases. The new piece in this model is that simply reducing biases (or at least reducing public announcement of biases, since those cause the group clumping) may not reduce the clumping at all. They may need to nearly vanish before that happens. Another relevant new point in the model is that individuals did not want or seek out the biased ‘mono-culture’ they ended up with – it was just the sum of their interactions.
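The dynamic described above is easy to reproduce. What follows is a minimal sketch of a Schelling-style simulation, not Schelling's original program: the grid size, empty-cell fraction, and the 45% threshold are illustrative parameters, and the movement rule (unhappy agents jump to a random empty cell) is one common simplification.

```python
import random

def neighbours(grid, size, r, c):
    """Colours of the occupied cells adjacent to (r, c)."""
    cells = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size and grid[nr][nc] is not None:
                cells.append(grid[nr][nc])
    return cells

def unhappy(grid, size, r, c, threshold=0.45):
    """An agent wants to move if more than `threshold` of its
    occupied neighbours have the other colour."""
    nbrs = neighbours(grid, size, r, c)
    if not nbrs:
        return False
    different = sum(1 for n in nbrs if n != grid[r][c])
    return different / len(nbrs) > threshold

def step(grid, size):
    """Move every unhappy agent to a random empty cell; return how many moved.
    (A sketch: agents are scanned in fixed order, so one agent may be
    re-examined after relocating within the same pass.)"""
    empties = [(r, c) for r in range(size) for c in range(size)
               if grid[r][c] is None]
    moved = 0
    for r in range(size):
        for c in range(size):
            if grid[r][c] is not None and unhappy(grid, size, r, c) and empties:
                er, ec = empties.pop(random.randrange(len(empties)))
                grid[er][ec] = grid[r][c]
                grid[r][c] = None
                empties.append((r, c))
                moved += 1
    return moved

def segregation(grid, size):
    """Average share of same-colour neighbours across all agents."""
    shares = []
    for r in range(size):
        for c in range(size):
            if grid[r][c] is None:
                continue
            nbrs = neighbours(grid, size, r, c)
            if nbrs:
                same = sum(1 for n in nbrs if n == grid[r][c])
                shares.append(same / len(nbrs))
    return sum(shares) / len(shares)

random.seed(0)
SIZE = 20
# Roughly a third of cells start empty; the rest are split between colours.
grid = [[random.choice(["grey", "black", None]) for _ in range(SIZE)]
        for _ in range(SIZE)]
before = segregation(grid, SIZE)
for _ in range(30):
    if step(grid, SIZE) == 0:  # stop once nobody wants to move
        break
after = segregation(grid, SIZE)
```

Starting from a random mix, each agent's same-colour neighbour share averages about one half; after a handful of passes the average climbs sharply, even though no individual agent asked for a segregated neighbourhood.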

Continue reading "Biases, by and large" »


White-collar crime and moral freeloading

Steven Levitt’s book Freakonomics has a section where he tries to gain insights into white-collar crime via the experiences of a bagel-seller called Paul F:

early in the morning, [Paul] would deliver some bagels and a cash basket to a company’s snack room; he would return before lunch to pick up the money and the leftovers. It was an honor-system commerce scheme, and it worked.

There was a lot of fascinating data coming out of his scheme, especially as to when and why people would cheat and not pay for their bagels; however, for those with an eye to bias, the most relevant observation was that:

the same people who routinely steal more than 10 percent of his bagels almost never stoop to stealing his money box.

Continue reading "White-collar crime and moral freeloading" »


More Lying

The subject of lying has come up here recently, with some tips on how to detect liars. You’d expect that those whose job involves ferreting out liars, such as police officers or immigration judges, would be better at it than the rest of us. Yet this study claims that Swedish judges on the Migration Board (MB) are about as good at recognising the signs of lying as students:

Overall, the beliefs held by MB personnel were not more in tune with research findings on objective cues to deception than were the beliefs of the students. In addition, MB personnel often refrained from taking a stand concerning the relation between specific behaviours and deception; they exhibited substantial within-group disagreement […]

In fact, we all seem to make similar stereotypical mistakes when judging lies, implying that we believe our lie-detecting abilities are much better than they actually are:

[…] there is a lack of overlap between the cues research has shown to be associated with deception (objective cues) and the cues people believe to be associated with deception (subjective cues) […] Generally, these subjective cues to deception are indicators of nervousness. It seems as if people believe that a liar will feel nervous and act accordingly; however, far from all liars do […]

But is there any group that is actually good at detecting lies? Indeed there is: convicted criminals. Why? The most likely hypothesis seems to be that criminals have much more experience in deception, and, crucially, have feedback: when they’re lied to, they generally discover it (to their cost) later.

So, unless you have a criminal record or great experience in being lied to, it is most definitely best not to trust your gut.
