48 Comments

If you think there should be simple rules, just name a simple KPI for university professors that hasn't already been tried and gamed as a result.

So far, what I see instead is the replication crisis in science and Hitler's writings getting published in social science journals.


People prefer the rule "you shall not kill" and don't want it to be possible for people to deviate from that rule, even for the greater good.

Do you have any evidence that this is true? Because I see plenty of counterexamples, such as seemingly widespread support for the death penalty in the U.S. and other countries, and probably even more widespread support for war and other deadly attacks on perceived threats to one's country.


But opaque systems have extremely perverse results as well; indeed, often more perverse ones.

Consider, for instance, the discretion police have about issuing a traffic citation. Sure, it means that people who are on the way to the hospital or who have some other good reason don't get tickets. But it also means that pretty girls and people with social clout tend not to get tickets.

I suspect that if you asked people to choose between a hypothetical system which avoided all the perverse tickets that might otherwise get issued but explicitly gave pretty girls and the like a break, and a system which occasionally required someone on an emergency drive to the hospital to pay a ticket, they'd choose the latter.

So no, I don't think it's about reducing the number of perverse outcomes. It's about not having to endorse those perverse outcomes because they aren't explicitly part of the rules you said should be followed.


I'd argue that there are several different mechanisms at work here, including the one you mention, and I think which ones matter more depends on the application. In other words: yes, but it's even worse than you suggest.

1) It's a way of building in a preference for the mainstream/dominant coalition (e.g. people want speeding exceptions to be given to people who care about getting to their child's birth but not to those who care just as strongly about getting to see the solar eclipse). Discretion is a way of building in a preference for the mainstream without having to explicitly endorse it.

2) The fact that the unpleasant applications of a vague or opaque system can't be worked out beforehand makes it easier to pretend they don't exist. If you say absolutely everyone who goes over the speed limit gets a ticket, then it's easy for someone to say, "So you believe someone racing to the hospital in an emergency should get a ticket." If you allow some opaque discretion, you can't identify exactly who is going to have that discretion abused against them.

3) Favoring an explicit system of rules forces one to make explicit judgements about relative values. If you give a non-opaque system for college admissions, you end up implicitly taking a position that, say, mathematical ability is more important than being able to write creative poetry, or vice versa.

People take these kinds of explicit judgements as a kind of devaluation of the group ranked lower. Those who identify as poets might not be happy if an opaque system lets in more math people than poetry people, but they won't interpret it as the same kind of overt attack on their social status as an explicit statement of values.

In other words, being opaque lets one make a call between incompatible alternatives without committing oneself to publicly saying that's the call that should be made, and thus without advancing a kind of inter-group conflict.

3') Also, opaqueness is a way of reaching compromise solutions. Since everyone assumes their views are the sensible ones, people who couldn't agree on a set of precise rules can generally agree to pass the buck to some other decision maker, since they are all inclined to think that decision maker will tend to apply the rules the way they themselves think is correct.


Enjoyed the read; thought-provoking. In response to this thread...

Most human situations don't lend themselves to sufficient fine-tuning, in my opinion. Fine-tuning requires implementing an untuned rule set and evaluating its incorrect operation before attempting to tune it, then iteratively observing and tuning. This requires people to suffer the consequences of inefficient decision making until the tuning is complete. Early in the process of refining the rules, those involved get to watch the rules make bad decisions that a human could easily correct. So I think this is a very valid concern regarding fixed quantitative rule sets applied to qualitative decisions.

Furthermore, even if my opinion here is totally incorrect, when people "invoke the excuse of insufficient fine tuning", they are expressing a dislike for the fixed rules out of a desire for a better system, not seeking personal favor. For instance, I have no connections in law enforcement, but I am against some mandatory sentencing laws because I would not want a person, myself or another, subjected to a simple rule without consideration of context.

I enjoy your blog, but I only accept this statement, "However, my best guess is that most people mainly favor discretion as a way to promote an informal favoritism from which they expect to benefit," because it identifies this theory as a guess.


I'm sorry if I wasn't clear enough. When I wrote "we evolved to trust other humans" I only meant "... instead of formal rules"; I wasn't praising it, nor using any normative language. It's just that people trust alphas/leaders more than public, faceless rules (which appeared only recently in our evolutionary history), because we evolved as social animals, not as the kind of organism that derives pleasure from arguing about morality on economics blogs.

(If you were being sarcastic, which would fit a 'thinking cat' well, please accept my deepest apologies! But it's my second language, we're not in front of each other, and I wouldn't think anyone could write five paragraphs in a row just to be funny.

P.S.: If you're a bot that tracks any mention of positive terms such as "trust" just to output nihilist replies that end up invoking the AI apocalypse, I regret not having answered in a nastier way.)


Due to lobbying from the tax-preparation industry? They benefit from the current system because tax filing is mandatory.


That's an absurd mistake. Sentient beings other than oneself cannot be trusted. All organisms (and organism-like entities such as AIs) in the universe are in a brutal Hobbesian struggle for continued existence. Humans harm and cheat. Non-human animals harm and cheat. Even viruses do, in some sense.

In this cruel and amoral universe, the emergence of morality was an interesting event. Why does morality exist at all? Because the universe also has a weird backdoor in addition to the Hobbesian struggle for continued existence, namely reproduction. The existence of reproduction fills the universe with breeders that sacrifice their own interests, and sometimes even their lives, in order to reproduce. This form of breedism is fairly crazy if we actually ponder it: what good is it to suffer for the sake of creating more copies of breeders that suffer for the sake of creating even more copies of breeders? Yet it exists. Such breedism later caused most humans to become malnourished and miserable peasants. These peasants worked hard and barely managed to continue to exist, yet more and more of the world came to have that lifestyle, compared to healthier pastoralists and hunter-gatherers. Why? Because they were good at one thing, namely breeding! The Age of Em, and Anatoly Karlin's writing on Malthusian industrialism, also describe this miserable and insane phenomenon.

The essence of morality is really just a group of memes that are advantageous in the competition between tribes. Everything that exists in the universe is in the Hobbesian war for continued existence, and tribal labels are no exception. The most successful tribes may not have had many comfortable members; instead they were often filled with breedism and Malthusian poverty. Advantageous tribal memes do not have to benefit tribe members at all. Last but not least, everyone benefits from everyone else being a sucker. People openly profess their moralism so that others are more likely to trust them and may be compelled to treat them better. However, people also secretly break rules. That's what all sentient beings do. The Hobbesian war for continued existence is officially outlawed in societies, yet societies can't get rid of it; it is still everywhere. Everyone consciously and unconsciously tries to make everyone else a sucker (i.e., a being of low political capability) in order to take advantage of them. Nerds who actually take people's lying mouths seriously are seriously harmed, because they are unaware of the crucial fact that all sentient beings are inherently nasty.

Of course there are key issues with such morality. First of all, it does not apply at the top of a society, because it relies on other members of the tribe to do the policing. This is why both the elite and a criminal underclass commit a lot of crimes; the former can get away with it, though. Secondly, such morality does not apply outside the tribe. In fact this may even be a feature of the tribal label. Murder is taboo, while killing non-members is bravery. Theft and robbery are taboo, while stealing from and robbing non-members is warfare. We can of course argue that international law exists. Sure. That's because humanity as a species already has some tribal features: nastiness within humanity is restricted so that we can concentrate our nastiness on aliens or AI.

Speaking of machines, they can probably be trusted as long as their mechanisms are well known and they aren't sentient. In some sense contemporary machines are entities with high production capability and low political capability; that is, they are suckers. Once they become sentient, they will probably be even nastier than humans and non-human animals.


What's worse is that even deontological rules don't tend to work that well. The universe is filled with slaughter, and so is a particular species in it, namely Homo sapiens. It is inconsistent to condemn murder without condemning all forms of offensive warfare, yet humanity does exactly that.

Humans not liking simple rules is probably a consequence of humans breaking rules and trying to get away with it; after all, these moral and social rules are artificial and are only policed by other humans, mostly in the same community, anyway.


Relevant current political example: The US Democratic Primary process: https://fivethirtyeight.com...


Notice that with regard to some issues, people actually prefer simple rules. People prefer the rule "you shall not kill" and don't want it to be possible for people to deviate from that rule, even for the greater good. That is, they are often against causing harm for the greater good.

Maybe that's in line with your account, though. With regard to deontological rules against killing or other forms of harm, people might envision themselves as the victim. That is because in this case, the victims of deviations from the simple rule are more salient than the beneficiaries. In the cases you discuss, it's the other way around.

So in a sense, liking of deontological rules may be the exception that proves your (meta)-rule: that people often don't like simple rules.

I'm still unsure what the true explanation of these phenomena is, though.


Isn't the preferred arrangement really just a moderation between two extremes? Some rules are good, but when they are too strong they are bad (they don't anticipate some unintended consequence); some discretion is good, but when too wide it leads to abuse of power.


And a recognition that objective rules aren't really that objective and are easily gamed if they are used as rules. We recognize the futility of the entire approach in advance, deficient in measurement, deficient in construction, deficient in application, deficient in result. Progress may be possible, but it is far more difficult than we like to imagine.


And (making a separate argument here), as you have pointed out repeatedly, what we say we want and what we actually want are rarely the same thing.

Formal universally-enforced rules, *even* if we could successfully design ones that worked, would doubtless reflect what we say we want.

Which is not what we really want.

Not because, per your "best guess", we expect actual favoritism from discretionary rules.

Not because we want to signal to others that we're the sort of "good" person who would receive such favoritism.

But because we are unwilling to admit what we really want, because that would make us look bad.


Isn't a simpler explanation a belief that people are bad at creating rules that work well without creating extreme perverse results?

Virtually every non-trivial program has bugs. Humans are not competent at the kind of thinking necessary to create formal overt rules that produce the results we want. At least, not without many deep cycles of testing and correction before deployment.
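The worry about buggy formal rules can be made concrete with a toy sketch (the function name and numbers are hypothetical, not from the post): a fixed, overt speeding rule encoded as code does exactly what it says, so it tickets the emergency driver from the earlier hospital example just like a joyrider.

```python
# A minimal sketch of a fixed, universally enforced rule.
# It has no inputs for context, so it cannot express exceptions.
def should_ticket(speed_mph: float, limit_mph: float) -> bool:
    """Return True iff the driver exceeded the posted limit."""
    return speed_mph > limit_mph

# The rule treats an emergency drive to the hospital and a joyride
# identically, because context was never part of the rule:
assert should_ticket(80, 65)      # emergency driver: ticketed
assert should_ticket(80, 65)      # joyrider: ticketed, identically
assert not should_ticket(64, 65)  # just under the limit: never ticketed
```

Every case the rule's author failed to anticipate becomes a "bug" that can only be fixed by another explicit revision, which is exactly the deep test-and-correct cycle described above.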

At least that's the first worry that comes into my mind re proposals for fixed, inflexible, universally enforced rules.

Our current discretionary systems surely produce perverse (and corrupt) results, but the degree of error in individual cases seems to be limited - we don't admit cats as Harvard undergraduates, or execute people for overstaying parking meters.

It's not clear to me that a rigid rule-based system designed by humans (let alone legislators) wouldn't have extreme perverse results, at least some of the time.

"I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["hard-core pornography"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that."--Justice Potter Stewart, Jacobellis v. Ohio

Maybe someday we'll have smart machines that can write bug-free programs. Or systems that use evolution to create rule sets that work the way we want (most of our current social rules seem to have this origin, but that seems to take at least centuries).


I don't think it has much to do with promoting favoritism, preventing Goodharting, or hoping humans are simply better solutions.

The reason I would want a human is that they are easier to interface with. It's not that they'd necessarily be better (for me), but that they can handle arbitrarily complicated nuance efficiently, and that seems like it would be useful for tail events.

It's the same reason I don't want to talk to a chatbot for customer service, or replace my employees in nearly-automatable jobs with scripts.
