How To Not Die (Soon)

You don’t want to die. If you heard that an asteroid would soon destroy a vast area around your home, you’d pay great costs to help you and your loved ones try to move. Even if you’d probably fail, even if most of your loved ones might not make it, and even if success meant adapting to a strange world far from home. If that’s not you, then this post isn’t for you.

Okay, you think you don’t want to die. But what exactly does that mean?

“You” are the time sequence of mental states that results from a certain large signal processing system: your “brain.” Each small part in this system takes signals in from other parts, changes its local state in response, and then sends signals out to other parts. At the border of this system, signals come in from “sensors”, e.g., eyes, and are sent out to “actuators”, e.g., hands.

You have differing mental states when these signals are different, and you live only as long as these signals keep moving. As best we can tell, from all the evidence we’ve ever seen, when these signals stop, you stop. When they stop for good, you die. As your brain is made out of completely ordinary materials undergoing quite well understood physical processes, all that’s left to be you is the pattern of your brain signals. That’s you; when that stops, you stop. (So yes, patterns feel.)

Crazy Complaints

Imagine that your heart will give out soon, and you can’t get a heart transplant. An artificial heart is available, but someone tells you:

Artificial hearts are hell! The people who make them and the surgeons who put them in will want to be paid for their efforts. Artificial hearts use software, and software can have errors and require updates. And they may charge you for those updates. The artificial heart won’t be exactly like your old one, so the person who lives on with that heart after you won’t be exactly you. Furthermore, you can’t be absolutely sure that after the surgery they won’t secretly whisk you away to a foreign land where you will be enslaved with no rights, forced to clean toilets or fight wars, or tortured just for the fun of it. Why would anyone want an artificial heart?

This seems a crazy weak argument to me, so weak that it seems obviously crazy to make it. Yet Annalee Newitz offers exactly this argument against ems (= uploads):

The idea is that, one day, we will be able to convert all our memories and thoughts into hyper-advanced software programs. Once the human brain can run on a computer – or maybe even on a giant robot – we will evade death forever. Sounds cooler than a flying car, right? Wrong. If they ever exist, uploads will be hell. …

Boris’s brain can live forever inside some kind of virtual world like Minecraft, which looks and feels to him like reality. That means his entire universe is dependent on people or companies who run or manage servers, such as Amazon Web Services, to survive. Boris is going to be subjected to software updates that could alter his perceptions, and he might not be able to remember his favourite movie unless he pays a licensing fee. …

Somebody could duplicate Boris and make two armies of Borises fight each other for supremacy. Or, as Iain M. Banks suggested in his 2010 novel Surface Detail, a nasty political regime might create a virtual hell full of devils who torture Boris’s brain … He could be reprogrammed as a street cleaner, forced to mop Liverpool’s gutters for weeks without respite, …

Is it really a continuation of Boris the person or a completely different entity that has some of Boris’s ideas and memories? And what kind of rights does Boris’s uploaded brain have? He might become the property of whoever owns the server that runs him. … Technology decays and dies, so immortality isn’t guaranteed. So why would anyone want to be uploaded? (more)

Here this is published by what was once my favorite magazine. By an author who says she’s published in the NYT, which calls her new book “breathtakingly brilliant”. What is it about the future that makes people willing to say and accept such crazy things? This seems related to tech-related ingratitude, where people seem willing to call tech firms evil if there is ever any downside whatsoever to using their products. Which also seems pretty crazy.

Added 9am: Some correctly note that we may naturally be more concerned about errors in artificial brains than in artificial hearts. But the large and popular product categories of education, media, and mind-altering drugs are similarly ones where one should be more concerned about errors, because errors change our minds. Just because errors are possible and a concern doesn’t mean there can’t be a huge eager demand, nor does it turn the resulting scenario into “hell”.

Rah Chain of Command

During the first Christmas of WWI,

soldiers crossed trenches to exchange seasonal greetings and talk. … to mingle and exchange food and souvenirs. There were joint burial ceremonies and prisoner swaps, while several meetings ended in carol-singing. Men played games of football with one another, … Fighting continued in some sectors, while in others the sides settled on little more than arrangements to recover bodies. (more)

I just saw the 2005 movie Joyeux Noel on this. The movie itself, and all the reviews I could find, saw these events as a heart-warming story, of heroic soldiers resisting an evil military leadership:

Their castigators are elders who arrive to restore the bellicosity almost as a matter of tradition. (more)

[The movie] invents the notion that the men who took part in the event were subsequently punished. … But there’s no official evidence that such a thing happened, though subsequently the generals learned to rotate soldiers away from a specific section of trench. (more)

But the real military leaders did work to prevent recurrences:

It was never repeated—future attempts at holiday ceasefires were quashed by officers’ threats of disciplinary action (more)

commander of the British II Corps issued orders forbidding friendly communication with the opposing German troops. Adolf Hitler, then a young corporal of the 16th Bavarian Reserve Infantry, was also an opponent of the truce. …

The events of the truce were not reported for a week, in an unofficial press embargo which was eventually broken by The New York Times, published in the then-neutral United States, on 31 December. The British papers quickly followed. … The tone of the reporting was strongly positive, with the Times endorsing the “lack of malice” felt by both sides and the Mirror regretting that the “absurdity and the tragedy” would begin again. …

Coverage in Germany was more muted, with some newspapers strongly criticising those who had taken part … In France, … greater level of press censorship … press was eventually forced to respond to the growing rumours by reprinting a government notice that fraternising with the enemy constituted treason. (more)

I find it disturbing that viewers and reviewers aren’t more torn about this. No hesitations or reservations whatsoever are expressed, even though the movie itself depicts these events as leading to soldiers deserting and spying on enemy arrangements.

Sure, if all soldiers would always refuse to fight wars, wars would not be possible, and that might be for the better, I’m not sure. But as long as war remains possible, national governments will want to control armies who can protect the nation against hostile armies. They won’t want armies who can decide to start or stop wars whenever they feel like it; they will want armies who accept a chain of command with the government at the top.

Sure, maybe we want soldiers and commanders at various levels to have the freedom to refuse to follow some limited set of commands to commit atrocities. As long as such freedoms are still consistent with our armies defending us from hostile armies. But we simply can’t just let any soldier or commander agree to a local peace any time and place they choose. Just as we can’t let them quit or switch sides anytime they choose. Or sell military equipment or supplies, or rape and pillage any accessible locals, or start new wars with new rivals.

The idea of armies that we control, and that defend us against hostile armies, just isn’t consistent with very high levels of local discretion. Sure, the idea of armies is consistent with some modest levels of local control, and there are some borderline questions about how much discretion is desirable. But wholesale local negotiation of truces, purposely hidden from commanding officers, surely at least risks moving into dangerous territory. And an ordinary movie viewer who liked the idea of having armies to protect them from hostile armies should feel at least some wariness about this prospect, and some sympathy for the awkward positions in which such actions place commanding officers.

There’s a chain of command in the army for a reason. A good reason. Even at Christmas in the trenches.

Radical Signals

Many people tout big outside-the-Overton “radical” proposals for change. They rarely do this apologetically; instead, they often do this in a proud and defiant tone. They seem to say directly that their proposal deserves better than it has gotten, and indirectly that they personally should be admired for their advocacy.

Such advocacy also tends to look a lot like costly signaling. That is, advocates seem to go out of their way to pay costs, such as via protests, meetings, writing redundant boring diatribes, accosting indifferent listeners at parties, implying that others don’t care enough, and so on. But if so, what exactly are they signaling?

If you recall, costly signaling is a process whereby you pay visible costs, but make sure that those costs are actually less when some parameter X is higher. If you get a high enough payoff from persuading audiences that X is high, you are plausibly willing to pay for these costly signals, in order to produce this persuasion. For example, you pay to go to school, but since school is easier if you are smart and conformist, going to school shows those qualities to observers.
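To make that logic concrete, here is a minimal numerical sketch in Python; the inverse cost form and all parameter values are purely hypothetical illustrations, not anything claimed above. The point is just that when the visible cost falls as X rises, only high-X agents find the signal worth paying for, which is what makes the signal informative.

```python
# Minimal sketch of a costly-signaling condition (hypothetical numbers).
# An agent with hidden quality x pays a visible cost to persuade an
# audience that x is high; the cost falls as x rises, so only high-x
# agents find the signal worth sending.

def signal_cost(x, base_cost=10.0):
    """Visible cost of the signal; cheaper for higher-quality agents."""
    return base_cost / x

def worth_signaling(x, persuasion_payoff=4.0):
    """True if the payoff from persuading the audience exceeds the cost."""
    return persuasion_payoff > signal_cost(x)

for x in [1.0, 2.0, 3.0, 5.0]:
    print(f"quality x={x}: cost={signal_cost(x):.1f}, "
          f"signals={worth_signaling(x)}")

# Only agents with x above base_cost / persuasion_payoff = 2.5 signal,
# so sending the signal credibly separates high-x from low-x agents.
```

With these toy numbers, observers who see the signal can infer that X is fairly high, even though anyone was free to send it.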

Here are six things you might show about a radical proposal:

Investment – It is a good financial investment. You pay costs to initiate or improve a business venture or investment fund that includes variations on this proposal. Doing so is less costly, and even net profitable for you, if this turns out to be a profitable project. By visibly paying costs, you hope to convince others to join your investment.

Popularity – It will eventually become more popular. You lend your time, attention, and credibility to a “movement” in favor of this proposal. This effort on your part may be rewarded with praise, prestige, and attention if this movement becomes a lot more popular and fashionable. You hope that your visible support will convince others to add their support.

Morality – You, and the other supporters of this proposal, are unusually moral. You pick a proposal which, if passed, would impose large costs in the service of a key moral goal. For example, you might propose a 90% tax on the rich, or no limits on encryption. Others have long been aware of those extreme options, but due to key tradeoffs they preferred less extreme options. You show your commitment to one of the values that are traded off by declaring you are willing to lose big on all the other considerations, if only you can win on yours.

Conformity – You are a loyal member of some unusual group. You show that loyalty by burning your bridges with other groups, via endorsing radical proposals which must put off other groups. This is similar to adopting odd rules on food and dress, or strange religious or ideological beliefs. Once a radical proposal is associated with your group for any reason, you show loyalty to that group by supporting that proposal.

Inventive – You are clever enough to come up with surprising solutions. You take a design problem that has vexed many, and offer a new design proposal that seems unusually simple, elegant, and effective. Relative to someone who wanted to show effectiveness, your proposal would be simpler and more elegant, and it would focus on solving the problems that seem most visible and vexing to observers, instead of what are actually the most important problems. It would also tend to use theories that observers believe in, relative to theories that are true.

Effective – If adopted, your proposal would be effective at achieving widely held goals. To show effectiveness, you incur costs to show things that are correlated with effectiveness. For example, you might design, start, or complete related theoretical analyses, fault analyses, lab experiments, or field experiments. You might try to search for problematic scenarios or effects related to your proposal, and search for design variations that could better address them. You might search for plans to do small scale trials that can give clearer cheaper results, and that address some key potential problems.

In principle showing each of these things can also show the others. For example, showing that something is moral might help show its potential to become popular. Still, we can distinguish what an advocate is more directly trying to show, from what showing that would indirectly show.

It seems to me that, among the above options, the most socially valuable form of signaling is effectiveness. If we could induce an equilibrium where people tried to show the other things via trying to show effectiveness, we’d induce a lot more useful effort to figure out what variations are effective, which should help us to find and adopt more and better radical proposals. If we can’t get that, inventiveness seems the second best option.

How Bees Argue

The book Honeybee Democracy, published in 2010, has been sitting on my shelf for many years. Getting back into the topic of disagreement, I’ve finally read it. And browsing media articles about the book from back then, they just don’t seem to get it right. So let me try to do better.

In late spring and early summer, … colonies [of ordinary honeybees] become overcrowded … and then cast a swarm. … About a third of the worker bees stay at home and rear a new queen … while two-thirds of the workforce – a group of some ten thousand – rushes off with the old queen to create a daughter colony. The migrants travel only 100 feet or so before coalescing into a beardlike cluster, where they literally hang out together for several hours or a few days. .. [They then] field several hundred house [scouts] to explore some 30 square miles … for potential homesites. (p.6)

These 300-500 scouts are the oldest most experienced bees in the swarm. To start, some of them go searching for sites. Initially a scout takes 13-56 minutes to inspect a site, in part via 10-30 walking journeys inside the cavity. After inspecting a site, a scout returns to the main swarm cluster and then usually wanders around its surface doing many brief “waggle dances” which encode the direction and distance of the site. (All scouting activity stops at night, and in the rain.)

Roughly a dozen sites are discovered via scouts searching on their own. Most scouts, however, are recruited to tout a site via watching another scout dance about it, and then heading out to inspect it. Each dance is only seen by a few immediately adjacent bees. These recruited scouts seem to pick a dance at random from among the ones they’ve seen lately. While initial scouts, those not recruited via a dance, have an 86% chance of touting their site via dances, recruited scouts only have a 55% chance of doing so.

Once recruited to tout a site, each scout alternates between dancing about it at the home cluster and then returning to the site to inspect it again. After the first visit, re-inspections take only 10-20 minutes. The number of dances between site visits declines with the number of visits, and when it gets near zero, after one to six trips, the bee just stops doing any scouting activity.

This decline in touting is accelerated by direct conflict. Bees that tout one site will sometimes head-butt (and beep at) bees touting other sites. After getting hit ten times, a scout usually quits. (From what I’ve read, it isn’t clear to me if any scout, once recruited to tout a site, is ever recruited again later to tout a different site.)

When scouts are inspecting a site, they make sure to touch the other bees inspecting that site. When they see 20-30 scouts inspecting a site at once, that generally implies that a clear majority of the currently active touting scouts are favoring this site. Scouts from this winning site then return to the main cluster and make a special sound which declares the search to be over. Waiting another hour or so gives enough time for scouts to return from other sites, and then the entire cluster heads off together to this new site.

The process I’ve described so far is enough to get all the bees to pick a site together and then go there, but it isn’t enough to make that be a good site. Yet, in fact, bee swarms seem to pick the best site available to them about 95% of the time. Site quality depends on cavity size, entrance size and height, cavity orientation relative to entrance, and wall health. How do they pick the best site?

Each scout who inspects a site estimates its quality, and encodes that estimate in its dance about that site. These quality estimates are error-prone; there’s only an 80% chance that a scout will rate a much better site as better. The key that enables swarms to pick better sites is this: between their visits to a site, scouts do a lot more dances for sites they estimate to be higher quality. A scout does a total of 30 dances for a lousy site, but 90 dances for a great site.

And that’s how bee swarms argue, re picking a new site. The process only includes an elite of the most experienced 3-5% of bees. That elite all starts out with no opinion, and then slowly some of them acquire opinions, at first directly and randomly via inspecting options, and then more indirectly via randomly copying opinions expressed near them. Individual bees may never change their acquired opinions. The key is that bees with opinions tend to express them more often when those opinions are better. Individual opinions fade with time, and the whole process stops when enough of a random sample of those expressing opinions all express the same opinion.
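As a rough illustration of why this works, here is a minimal simulation sketch in Python. All parameters here (the site qualities, dance counts, discovery and recruitment rates, checking the quorum on the dance floor rather than at the site, and letting scouts be re-recruited after their commitment fades) are simplifying assumptions of mine, not the book’s measured values.

```python
import random

# Rough simulation sketch of the swarm decision process described above.
# All numbers are illustrative simplifications, not the book's measurements,
# and scouts here may be re-recruited after their commitment fades
# (whether real scouts ever do this is unclear, as noted above).

SITE_QUALITY = {"A": 0.3, "B": 0.5, "C": 0.9}  # hypothetical site qualities
N_SCOUTS = 300
DISCOVER_P = 0.02   # chance per step an idle scout finds a site on its own
RECRUIT_P = 0.20    # chance per step an idle scout copies a dance it sees
QUORUM = 0.80       # share of active dancers on one site needed to stop

def dances_for(quality, rng):
    """Scouts dance more for sites they rate higher; the rating is noisy."""
    noisy = max(0.05, min(1.0, quality + rng.gauss(0, 0.1)))
    return int(30 + 60 * noisy)

def run_swarm(seed=0):
    rng = random.Random(seed)
    committed = [None] * N_SCOUTS   # (site, dances_left) or None if idle
    for _ in range(20000):
        dancers = [c for c in committed if c is not None]
        for i, c in enumerate(committed):
            if c is None:
                roll = rng.random()
                if roll < DISCOVER_P:
                    site = rng.choice(list(SITE_QUALITY))
                elif roll < DISCOVER_P + RECRUIT_P and dancers:
                    site = rng.choice(dancers)[0]   # copy a random dance
                else:
                    continue
                committed[i] = (site, dances_for(SITE_QUALITY[site], rng))
            else:
                site, left = c
                committed[i] = (site, left - 1) if left > 1 else None
        # Quorum check: stop when most currently active dancers agree.
        if len(dancers) >= 30:
            for site in SITE_QUALITY:
                share = sum(1 for s, _ in dancers if s == site) / len(dancers)
                if share >= QUORUM:
                    return site
    return None

print("swarm chose:", run_swarm())  # typically "C", the best site
```

Because dances for better sites last longer, the pool of active dancers drifts toward the best site even though no individual scout ever compares sites, which is the key feedback described above.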

Now that I know all this, it isn’t clear how relevant it is for human disagreement. But it does seem a nice simple example to keep in mind. With bees, a community typically goes from wide disagreement to apparent strong agreement, without requiring particular individuals to ever give up their strongly held opinions.

Disagreement on Disagreement

I’m seriously considering returning to the topic of disagreement in one of my next two books. So I’ve been reviewing literatures, and I just tried some polls. For example:

These results surprised me. Experience I can understand, but why are IQ and credentials so low, especially relative to conversation style? And why is this so different from the cues that media, academia, and government use to decide who to believe?

To dig further, I expanded my search. I collected 16 indicators, and asked people to pick their top 4 out of these, and also for each to say “if it tends to make you look better than rivals when you disagree.” I had intended this last question to be about whether you personally tend to look better by that criterion, but I think most people just read it as asking if that indicator is especially potent in setting your perceived status in the context of a disagreement.

Here are the 16 indicators, sorted by the 2nd column, which gives % who say that indicator is in their top 4. (The average of this top 4 % is almost exactly 5/16, so these are actually stats on the top 5 indicators.)

The top 5 items on this list are all chosen by 55-62% of subjects, a pretty narrow % range, and the next 2 are each chosen by 48%. We thus see quite a wide range of opinion on what are the best indicators to judge who is right in a disagreement. The top 7 of the 16 indicators tried are similarly popular, and for each one 37-52% of subjects did not put it in their personal top 5 indicators. This suggests trying future polls with an even larger set of candidate indicators, where we may see even wider preference variation.

The most popular indicators here seem quite different from what media, academia, and government use to decide who to believe in the context of disagreements. And if these poll participants were representative and honest about what actually persuades them, then these results suggest that speakers should adopt quite different strategies if their priority is to persuade audiences. Instead of collecting formal credentials, adopting middle-of-road positions, impugning rival motives, and offering long complex arguments, advocates should instead offer bets, adopt rational talking styles and take many tests, such as on IQ, related facts, and rival arguments.

More likely, not only do these poll respondents differ from the general population, they probably aren’t being honest about, or just don’t know, what actually persuades them. We might explore these issues via new wider polls that present vignettes of disagreements, and then ask people to pick sides. (Let me know if you’d like to work on that with me.)

The other 3 columns in the table above show the % who say an indicator gives status, the correlation across subjects between status and top 4 choices, and the number of respondents for each indicator. The overall correlation across indicators between the top 5 and status columns is 0.90. The obvious interpretation of these results is that status is closely related to persuasiveness. Whatever indicators people say persuades them, they also say give status.

Stubborn Stupidity Vs Hidden Motives

I too used to believe that these tech giants were all-knowing entities. But while writing this story, I have come to realise that this belief is as wrong as it is popular. …

The experiment continued for another eight weeks. What was the effect of pulling the ads? Almost none. For every dollar eBay spent on search advertising, they lost roughly 63 cents. …eBay was not alone in making this mistake. The benchmarks that advertising companies use – intended to measure the number of clicks, sales and downloads that occur after an ad is viewed – are fundamentally misleading. None of these benchmarks distinguish between the selection effect (clicks, purchases and downloads that are happening anyway) and the advertising effect (clicks, purchases and downloads that would not have happened without ads).

Economists at Facebook conducted 15 experiments that showed the enormous impact of selection effects. … selection effects were almost 10 times stronger than the advertising effect alone! And this was no exception. Selection effects substantially outweighed advertising effects in most of these Facebook experiments. … So we arrive at our final question: who wants to know the truth? … Following the news about the millions of dollars eBay had wasted, brand keyword advertising only declined by 10%. The vast majority of businesses proved hell-bent on throwing away their money. The fact that the eBay news did not even encourage advertisers to experiment more was perhaps the most striking.

Rao did observe the occasional ad stop at Bing. Rao was able to use ad stops like these, just as Tadelis had at eBay, to assess the effects on search traffic. When these experiments showed that ads were utterly pointless, advertisers were not bothered in the slightest. They charged gaily ahead, buying ad after ad. Even when they knew, or could have known, that their ad campaigns were not very profitable, it had no impact on how they behaved. (More; eBay details)
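The selection-versus-advertising distinction in the quotes above is easy to illustrate with a toy holdout calculation in Python; the numbers are entirely hypothetical, chosen only to show how a naive click- or view-based benchmark credits ads with purchases that a randomized control group reveals would have happened anyway.

```python
# Toy holdout calculation (all numbers hypothetical) showing how naive
# benchmarks conflate the selection effect with the advertising effect.

ad_group_users = 100_000       # users randomly shown the ads
holdout_users = 100_000        # comparable users randomly shown no ads

ad_group_purchases = 3_000     # purchases observed after ad exposure
holdout_purchases = 2_800      # purchases by the no-ad control group

# A naive benchmark credits the ads with every post-exposure purchase.
naive_credit = ad_group_purchases

# The randomized holdout shows only the lift over the control group was
# actually caused by the ads; the rest is selection (would buy anyway).
advertising_effect = ad_group_purchases - holdout_purchases
selection_effect = holdout_purchases

print(f"naive benchmark credits ads with: {naive_credit}")
print(f"true advertising effect (lift):   {advertising_effect}")
print(f"selection effect (buy anyway):    {selection_effect}")
print(f"naive overstatement: {naive_credit / advertising_effect:.0f}x")
```

In this toy case the naive benchmark overstates the ad effect by a factor of fifteen, which is the kind of gap the eBay and Facebook experiments quoted above are describing.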

Why do firms overpay for ads? The most common explanation I hear offered is random stubborn stupidity; they are too stupid to understand critics, and too stubborn or distracted to change their minds when they see critics clearly proven right. I just don’t buy it. Consider these multiple lines of evidence:

1) When it can substantially increase profits, firms consistently apply complex tech that CEOs don’t understand. They use engines, machines, robots, computers, and much else. In many such cases firms are capable of applying expert understanding, often requiring much math, and are not at all limited to CEO intuitions.

2) We know of many concrete cases where complex expert understanding was tested and verified in simple clear experiments. Doubts at this point could have been addressed by more and larger experiments. But instead we see a clear pattern of the closest folks just looking away. Others with application areas more distant in space or time typically use the excuse that this distance makes those experiments irrelevant to their area.

3) The simple theory of random stupidity strongly predicts a random pattern of overspending on some things, and underspending on others. In terms of statistical inference, such a theory is relatively easily beaten by any other theories that can explain patterns in over- and underspending in any other terms. Yes, you might try to retreat to a correlated-randomness theory, which posits that over- versus underspending is correlated in “related” areas. But then you’ll need a theory of “relatedness” of areas.

We also seem to see overspending in medicine, law, school, investment analysis, campaign spending, and much else. A consistent pattern I think I see is overspending in areas where spending lets one associate with prestigious folks. So I suggest that much of this overspending is better explained via motives to gain prestige via association.

Re ads, consider that in order for a CEO to be promoted to run a bigger firm, people at other firms need to hear about that CEO and his or her firm. Within firms, the ambitious are often told to “toot their horns” and let everyone know about their accomplishments; productive people who don’t toot tend to be overlooked. Similarly, CEOs may want to overspend on ads just to make sure others hear about their firm.

Governance By Jury

Among the many proposed forms of governance, some are “direct democracy” wherein all citizens vote on key choices, and some are variations on “demarchy”, i.e., assigning key roles to, or filling legislatures with, random citizens. The following proposal is similar in some ways, but seems different enough to be worth treating separately. I’m not sure if “jurarchy” is a good idea, but it seems to me simple and elegant enough to be worth considering.

Here is an especially simple version, though variations (some discussed below) may be better:

There is always a status quo set of government policies, including who sits in each key role. At any time, anyone can propose a change to these policies, if they pay a fee $A. A court case then ensues, overseen by a random judge and decided by a random jury of N citizens. A key government agency is charged with defending the status quo in these cases. The judge can declare the proposal unconstitutional, or say that recent changes have invalidated it. But if not, and if M jurors support the proposal, then it becomes official policy, and the challenger is awarded bounty $B.

And that’s it; everything is decided this way (aside perhaps from constitution changes). If the cost of pursuing a case is $C, then we expect such challenges to be made from purely financial motives when the chance P of winning the case exceeds (A+C)/B.

Of course we might want some jury rules, such as no bribes to buy juror votes. Jurors might or might not be allowed to consult outside advisors, and might or might not be told of jury decisions on recent similar cases. Jurors might be chosen new for each case, or they might learn via sitting on juries that work together on many cases over many months.

One potential problem with the above system is that parties who stand to gain a great deal from a policy change may keep re-trying the same proposal until they happen to get a favorable jury. If they gain $G from the change itself (not via bounty), and if juries make bad decisions at error rate E, then this approach is profitable on average when E*(B+G) exceeds A+C. Observers who believe a change was made in error would expect to profit by proposing a reversal. But is this solution enough?
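As a minimal numerical sketch of the two conditions above (the parameter values are hypothetical, chosen only for illustration):

```python
# Hypothetical parameter values, just to illustrate the two conditions above.
A = 10_000    # filing fee paid by the challenger
C = 40_000    # cost of pursuing the case
B = 200_000   # bounty paid to a winning challenger
G = 500_000   # challenger's private gain from the policy change itself
E = 0.10      # rate at which juries wrongly approve a bad proposal

# Purely financial challenges pay when the chance P of winning exceeds (A+C)/B.
p_threshold = (A + C) / B
print(f"financially motivated challenges pay when P > {p_threshold:.2f}")

# A party gaining G from the change profits from retrying a bad proposal
# when expected winnings E*(B+G) exceed the per-attempt cost A+C.
expected_gain_per_try = E * (B + G)
print(f"expected gain per retry: {expected_gain_per_try:,.0f} "
      f"vs per-attempt cost: {A + C:,.0f}")
print("retrying bad proposals is profitable:", expected_gain_per_try > A + C)
```

With these toy numbers, a party that values the change at $500,000 profits from retrying even at a 10% jury error rate, which is the worry the futarchy-based variation below tries to address.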

A futarchy-based variation might help here. After a jury has ruled in favor of a proposal, we could immediately open up a betting market on the chance that another random jury would also favor that proposal, in a new court case. This new case might use the same values of A,B,N,M, or it might scale these up in the hope of getting a more considered judgment. This new case might be created with chance F. The original jury decision might be said to be confirmed, and implemented, only if this betting market estimated at least a conditional chance Q of confirmation. Yes, markets can also make mistakes, so this in essence just lowers error rate E.

Another potential problem is that this jury process might be too slow to make key changes. To deal with this, we might create a similar betting market as soon as a proposal is officially made, about that first jury process. A proposal might be immediately adopted if that market estimated at least a chance Q’ of the proposal winning.

I’m sure we could think of more problems, and more potential fixes. But there’s a real risk of fixes making things worse, especially as the system gets more complex, and as the citizen audience who must oversee it gets bored with complex details. So I’m attracted to very simple proposals, and tempted to just accept modest problems, instead of adding many complex fixes.

Injustice For All

In their new book Injustice for All: How Financial Incentives Corrupted and Can Fix the US Criminal Justice System, Chris Surprenant and Jason Brennan suggest many ways to change the US crime system.

They spend the most space arguing against jail; they want to cut long jail terms, and to offer most criminals a choice of jail or non-jail punishments such as caning. (I also dislike jail.)

This and most of their other suggestions can be seen as fitting a theme of favoring defendants more, relative to government. For example, they want a lot fewer acts to be punished at all, more bad acts to be punished as torts instead of as crimes, loser pays lawyer/court costs, crime law to be clear and simple, a requirement to show the accused could easily know the act was criminal, no cash bail, no private prisons, no asset forfeiture, fewer no-knock raids, the same lawyers and resources given to public defense as to prosecution, juries to choose between punishment plans offered by prosecution & defense, notifying juries of their jury nullification ability, and more grand juries before and during trials who can cancel trials.

While this theme is quite popular today, I’m wary of this focus on changing policy to favor defendants over government. Yes the pendulum may now favor government too much, but someday it will swing the other way, and I’d like to do more than just help push this one pendulum back and forth.

Many other suggestions in the book fall under a theme of spreading out incentives, to make incentives weaker for any one party. These authors attribute many current problems to overly strong incentives, such as those that induce small towns to set up speed traps. They want government-managed victim restitution funds, no elected judges or prosecutors, local governments to pay more for jail costs, state governments to pay more non-jail costs, and no revenue given to police agencies based on particular cases. And they suggest that the state pay to investigate torts:

For most tort claims, the state would need to bear the responsibility and financial cost of collecting and processing evidence, as well as finding and interviewing witnesses. This information would then be available to both the would-be plaintiff and defendant.

Instead of having the state manage tort investigations, I’d rather we did more to ensure tort damages can be paid, perhaps by adding bounties. Then we could rely more on private incentives to investigate well, instead of trusting the state to do that. More generally, I want to introduce stronger elements of paying for results into criminal law, instead of just weakening incentives all around to avoid bad incentive problems.

Below the fold are many quotes from the book.

Capitalism Uses Hate; That’s Good

“Good! Your hate has made you powerful. Now, fulfill your destiny.” (More)

The most natural human social structure is based on prestige. People compete to look impressive, and then everyone defers to those who seem most impressive. We let them run the things they want the way they want, if only they will let us gain some prestige via association with them. Which is often a big problem, as in the modern world the way to look most impressive is often not the best way to run things.

When the way to seem an impressive doctor is not the best way to heal patients. When the way to seem an impressive lawyer or judge is not the best way to win or rule on cases. When the way to seem an impressive warrior is not the way to win wars. When the way to seem an impressive cook is not to make cheap tasty nutritious food. In such cases, letting the most prestigious folks do things their way can lead to wasteful inefficient outcomes.

In “capitalism”, big firms are run by rich greedy bossy managers in the service of even richer and greedier owners. For many, a natural ancient human reaction to such a situation is “hatred.” Or at least strong distrust, wariness, and suspicion. Many of us are primed to think the worst about these people and this situation.

Which is great, because this enables us to hold such people and firms accountable. We are willing to switch from firms who supply us with products and services when other options look better. We are willing to quit jobs we don’t like, and go home when we feel done for the day. And when firms fail to satisfy customers and employees, we are willing to let those firms die, and let their investors lose their shirts. Because we hate them.

Unfortunately, our hate also makes us more willing to regulate such firms, and to take from such people. Some regulation and taking may be useful, but too much can kill or at least emaciate the goose that lays the golden eggs of capitalism. Our related suspicions of big powerful politicians and their supporting organizations helps to mitigate this problem somewhat, but alas it seems we don’t hate such people and orgs remotely as much as we should.

Beware of love; sometimes hate is what we need.
