Monthly Archives: February 2019

Classic Style Intellectual Worlds

The photo above is one I recently took at the Vatican. For me, it illustrates some key principles of classic artistic style. This style tends to be a fractal collection of structures at different scales, structures that frame spaces of many different sizes. Each structure doesn’t use up all local space, but instead leaves open holes for other works of art. Most all space is used by art of some sort, but items that help define larger structures are more homogeneous.

For example, the key arches in this picture have some patterns within them, but these vary less in texture, style, color, and theme, so that the arch itself can be more clearly visible. A similar pattern happens for the art in the spaces between the arches: works in larger spaces can be more complex, with more variations in textures, styles, colors, and themes. In contrast, works in smaller spaces are more constrained to fit well into the patterns around them.

Higher status artists are allocated to fill the larger spaces, where artists are allowed more discretion. This is a sense in which status is correlated with creativity. But it’s not at all that all artists are being as creative as they can, with the highest status artists capable of the most creativity. Instead, artists are only allowed to be more creative when they are higher status. That is, we don’t so much like high status because it indicates an ability to be creative; we instead like to see creativity because it indicates that the creator was high status.

Now consider this as a metaphor for academia and related intellectual worlds. These worlds make many products that fit into many structures on many different scales, including fields, subfields, topics, theories, and methods. Imagine that the world mainly wants all these products to fit into a pleasing aesthetic structure with an overall classic artistic style. When an individual makes a product and proposes to put it in some particular place within this overall structure, that proposal is accepted or rejected largely on the basis of how well it improves the overall artistic composition.

If so, most individuals will be rewarded for making impressive small variations. For example, if lots of people are talking about how AI will soon take all the jobs, and how a UBI could solve that, then aspiring intellectuals are rewarded for talking about modest variations on such topics. It is not that hard to be very critical of such scenarios, or to talk about very different future problems and solutions. And yes that might create more social value in terms of intellectual progress. But talking about those things clashes with the rest of the conversation, and so doesn’t make as aesthetically pleasing a whole. So you’ll instead want to be the most witty, clever, articulate, inspiring, rigorous person who talks about relatively small variations on what others are talking about.

That is, as an aspiring intellectual, you should mainly imagine yourself bidding to make one of the tiny artworks in the picture above. You’ll need both to stand out in some way and to make something that fits very closely with nearby works. You may be capable of far greater creativity, which could lead to greater intellectual progress. At least if anyone were to listen to you, and be tempted to build on your work. But if you aren’t one of the most impressive people doing lots of stuff that fits aesthetically with what others are doing nearby, it’s quite likely that no one will listen to you.

If you succeed on that usual path, you may someday be allowed more creativity, to contribute something bigger. Perhaps even something that produces real intellectual progress. And then future historians may say yay, what a great system that gives the biggest rewards to the most creative, surely it must be designed to maximize intellectual progress. Which if you are paying attention, you’ll know to be bull. Though you’ll probably also know to keep quiet about it, as most everyone around you prefers a more flattering view of how their world contributes to intellectual progress.


Why Weakly Enforced Rules?

While some argue that we should change our laws to open our borders, it is more common for pro-immigrant folks to argue for weaker enforcement of anti-immigration laws. They want fewer government agencies to be authorized to help enforcement, fewer resources to go into finding violators, and weaker punishment of violators. Similar things happen regarding prostitution and adultery; many complain about enforcement of such laws, and yet don’t support eliminating them.

The recently celebrated “criminal justice reform” didn’t make fewer things illegal, or substitute more efficient forms of punishment (eg torture, exile) for less efficient prison. It mainly just reduced jail sentence durations. When I probed supporters, they confirmed they didn’t want fewer things illegal or more efficient enforcement.

The policing reforms that many want are not to substitute more cost-effective enforcers such as bounty hunters, or stronger punishments against police misconduct, but to instead just have police do less: pull over fewer drivers, investigate fewer suspects, etc.

When I claim that stronger norm enforcement is a big advantage of legalized blackmail, many people say that’s exactly the problem; they want less enforcement of common norms. For example, Scott Sumner:

Great literature and great films often turn people violating society’s norms into sympathetic characters, especially when they are ground down by “the machine”. I suspect that the almost universal public opposition to legalizing blackmail reflects society’s view (subconscious to be sure) that enforcing these norms (especially for non-criminal activities) requires a “light touch”, and that turning shaming into an highly profitable industry will do more harm than good. It will turn society into a mean, backstabbing culture. The people hurt most will be sensitive good people who made a mistake, not callous gang members who don’t care if others think they are evil.

On the surface, all of these positions seem puzzling to me; if a norm or law isn’t worth enforcing well, why not eliminate it? Some possible explanations:

  1. People like the symbolism of being against things they don’t really want to stop. It is more about wanting to look like the sort of person who doesn’t fully approve of such things.
  2. Having more rules that are only weakly enforced allows the usual systems more ways to arbitrarily punish some folks via selective enforcement. You might like this if you share such systems’ tastes regarding who to arbitrarily punish. Or if you want to signal submission to authorities who want to use such power.
  3. If these things were actually legal and licit, people might sometimes publicly suggest that you are engaging in them. But if they are illicit or illegal, there’s a norm against accusing someone of doing them without substantial evidence. So if you want to discourage others from lightly accusing you of such things, you may want those activities to be officially disapproved, even if you don’t actually want to discourage them.
  4. We mainly want these norms and laws to help us deal with some disliked “criminal class” out there, a class that we don’t actually interact with much. So when we see real cases in our familiar world, they seem like they are not in that class, and thus we don’t want our norms or laws to apply to them. We only want less enforcement for folks in our world.
  5. What else?

Added 26Feb: I clearly didn’t communicate well in this post, as many commenters and this responding post saw me as arguing that all punishment, conditional on being caught and convicted, should either be zero or max extreme (eg death). Yes of course it is often reasonable to use intermediate punishments.

But enforcement also includes a chance of being caught, not just a degree of punishment, and there are issues of the cost-effectiveness of the processes to catch and punish people. There are many who want less punishment if caught, and less chance of catching, for most all offenses, and don’t want more cost-effective catching or punishment, for fear that this might lead to more catching or punishing. To me, this seems hard to explain via just thinking that we’ve overestimated the optimal punishment level for some particular offenses.

Added 3Mar: A striking example is how in WWI recruits were supposed to be age 19 or older, but it was easy to lie and get in at younger ages, and most everyone knew of someone who had done this. We tsk tsk about child soldiers elsewhere, but don’t seem much ashamed of our own.


Enforce Common Norms On Elites

In my experience, elites tend to differ in how they adhere to social norms: their behavior is more context-dependent. Ordinary people use relatively simple strategies of being generally nice, tough, silly, serious, etc., strategies that depend on relatively few context variables. That is, they are mostly nice or tough overall. In contrast, elite behavior is far more sensitive to context. Elites are often very nice to some people, and quite mean to others, in ways that can surprise and seem strange to ordinary people.

The obvious explanation is that context-dependence gives higher payoffs when one has the intelligence, experience, and social training to execute this strategy well. When you can tell which norms will tend to be enforced how, when, and by whom, then you can adhere strongly to the norms most likely to be enforced, and neglect the others. And skirt right up to the edge of enforcement boundaries. For weakly enforced norms, your power as an elite gives you more ways to threaten retaliation against those who might try to enforce them on you. And for norms that your elite associates are not particularly eager to enforce, you are more likely to be given the benefit of the doubt, and also second and third chances even when you are clearly caught.

One especially important human norm says that we should each do things to promote a general good when doing so is cheap/easy, relative to the gains to others. Applied to our systems, this norm says that we should all do cheap/easy things to make the systems that we share more effective and beneficial to all. This is a weakly enforced norm that elite associates are not particularly eager to enforce.

And so elites do typically neglect this system-improving norm more. Ordinary people look at a broken system, talk a bit about how it might be improved, and even make a few weak moves in such directions. But ordinary people know that elites are in a far better position to make such moves, and they tend to presume that elites are doing what they can. So if nothing is happening, probably nothing can be done. Which often isn’t remotely close to true, given that elites usually see the system-improving norm as one they can safely neglect.

Oh, elites tend to be fine with getting out in front of a popular movement for change, if that will help them personally. They’ll even take credit and pretend to have started such a movement, pushing aside the non-elites who actually did. And they are also fine with taking the initiative to propose system changes that are likely to personally benefit themselves and their allies. But otherwise elites give only lip service to the norm that says to make mild efforts to seek good system changes.

This is one of the reasons that I favor making blackmail legal. That is, while one might have laws like libel against making false claims, and laws against privacy invasions such as posting nude pics or stealing your passwords, if you are going to allow people to tell true negative info that they gain through legitimate means, then you should also let them threaten to not tell this info in trade for compensation.

Legalized blackmail of this sort would have only modest effects on ordinary people, who don’t have much money, and who others aren’t that interested in hearing about. But it would have much stronger effects on elites; elites would be found out much more readily when they broke common social norms. They’d be punished for such violations either by the info going public, or by their having to pay blackmail to keep it quiet. Either way, they’d learn to adhere much more strongly to common norms.

Yes, this would cause harm in some areas where popular norms are dysfunctional. Such as norms to never give in to terrorists, or to never consider costs when deciding whether to save lives. Elites would have to push harder to get the public to accept norm changes in such areas, or they’d have to follow dysfunctional norms. But elites would also be pushed to adhere better to the key norm of working to improve systems when that is cheap and easy. Which could be a big win.

Yes, trying to improve systems can hurt when proposed improvements are evaluated via naive public impressions of what behavior works well. But improving via new small-scale trials that are scaled up only when smaller versions work well is much harder to screw up. We need a lot more of that.

Norms aren’t norms if most people don’t support them, via at least not disputing the claim that society is better off when they are enforced. If so, most people must say they expect society to be better off when we find more cost-effective ways to enforce current norms. Such as legalizing blackmail. This doesn’t necessarily result in our choosing to enforce norms more strictly, though this may often be the result. Yes, better norm enforcement can be bad when norms are bad. But in that case it seems better to persuade people to change norms, rather than throwing monkey-wrenches into the gears of norm enforcement.

So let’s hold our elites more accountable to our norms, listen to them when they suggest that we change norms, and especially enforce the norm of working to improve systems. Legalized blackmail could help with getting elites to adhere more closely to common norms.


Aliens Need Not Wait To Be Active

In April 2017, Anders Sandberg, Stuart Armstrong, and Milan Cirkovic released this paper:

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: This can produce a 10^30 multiplier of achievable computation. We hence suggest the “aestivation hypothesis”: The reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyses the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis. (more)

That is, they say that if you have a resource (like a raised weight, charged battery, or tank of gas), you can get a lot (~10^30 times!) more computing steps out of that if you don’t use it today, but instead wait until the cosmological background temperature is very low. So, they say, there may be lots of aliens out there, all quiet and waiting to be active later.

Their paper was published in JBIS a few months later, their theory now has its own Wikipedia page, and they have attracted at least 15 news articles. Problem is, they get the physics of computation wrong. Or so say physics-of-computation pioneer Charles Bennett, quantum-info physicist Jess Riedel, and I, in our new paper:

In their article, ‘That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox’, Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer’s principle implies that a civilization can in principle perform far more (∼10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting.

Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error. This can be done at any time, and is not improved by waiting for a low cosmic background temperature. Thus aliens need not wait to be active. As Sandberg et al. do not provide a concrete model of the effect they assert, we construct one and show where their informal argument goes wrong. (more)

That is, the key resource is negentropy, and if you have some of that you can use it at any time to correct computing-generated bit errors at the constant ideal rate of one bit of negentropy per one bit of error corrected. There is no advantage in waiting until the distant future to do this.

Now you might try to collect negentropy by running an engine on the temperature difference between some local physical system that you control and the distant cosmological background. And yes, that process may go better if you wait until the background gets colder. (And that process can be very slow.) But the negentropy that you already have around you now, you can use that at any time without any penalty for early withdrawal.

There’s also (as I discuss in Age of Em) an advantage in running your computers more slowly; the negentropy cost per gate operation is roughly inverse to the time you allow for that operation. So aliens might want to run slow. But even for this purpose they should want to start that activity as soon as possible. Defensive considerations also suggest that they’d need to maintain substantial activity to watch for and be ready to respond to attacks.
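To make the disputed arithmetic concrete, here is a minimal Python sketch of my own, not from either paper. The Landauer bound and today’s ~2.7 K background are standard figures; the ~10^-30 K far-future temperature, the stored-negentropy figure, and the device constant in the last function are illustrative assumptions only.

    import math

    k_B = 1.380649e-23        # Boltzmann constant, J/K

    def erasures_per_joule(T):
        # Maximum irreversible bit erasures per joule at temperature T,
        # using the Landauer bound of k_B * T * ln(2) joules per erasure.
        return 1.0 / (k_B * T * math.log(2))

    T_now = 2.7               # current cosmic background temperature, K
    T_future = 1e-30          # assumed far-future background temperature, K

    # Aestivation framing: a fixed *energy* budget buys roughly 10^30 times
    # more erasures if spent when the background is colder.
    multiplier = erasures_per_joule(T_future) / erasures_per_joule(T_now)
    print(f"energy-based multiplier from waiting: {multiplier:.1e}")

    # Counter-framing: the key resource is negentropy, and one bit of
    # negentropy corrects one bit of error at any temperature, so a fixed
    # negentropy store buys the same number of operations now or later.
    negentropy_bits = 1e45    # hypothetical stored negentropy, in bits
    erasures_now = erasures_later = negentropy_bits

    # Slow-computing point: negentropy cost per gate operation falls roughly
    # as 1/t_op, a reason to run slowly, but not a reason to delay starting.
    def cost_per_op_bits(t_op, c=1.0):
        # Rough cost in bits of one operation taking time t_op;
        # c is an unspecified device-dependent constant (assumed).
        return c / t_op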


Checkmate On Blackmail

Often in chess, at least among novices, one player doesn’t know that they’ve been checkmated. When the other player declares “checkmate”, this first player is surprised; that claim contradicts their intuitive impression of the board. So they have to check each of their possible moves, one by one, to see that none allow an escape.

The same thing sometimes happens in analysis of social policy. Many people intuitively want to support policy X, and they usually want to believe that this is due to the good practical consequences of X. But if the policy is simple enough, one may be able to iterate through all the possible consequential arguments for X and find that they all fail. Or perhaps more realistically, iterate through hundreds of the most promising actual consequential arguments that have been publicly offered so far, and both find them all wanting, and find that almost all of them are repetitions, suggesting that few new arguments are to be found.

That is, it is sometimes possible with substantial effort to say that policy X has been checkmated, at least in terms of known consequentialist supporting arguments. Yes, many social policy chess boards are big, and so it can take a lot of time and expertise to check all the moves. But sometimes a person has done that checking on policy X, and then frequently encounters others who have not so checked. Many of these others will defend X, basically randomly sampling from the many failed arguments that have been offered so far.

In chess, when someone says “checkmate”, you tend to believe them, even if you have enough doubt that you still check. But in public debates on social policy, few people accept a claim of “checkmate”, as few such debates ever go into enough depth to go through all the possibilities. Typically many people are willing to argue for X, even if they haven’t studied in great detail the many arguments for and against X, and even when they know they are arguing with someone who has studied them in such detail. Because X just feels right. When such a supporter makes a particular argument, and is then shown how that doesn’t work, they usually just switch to another argument, and then repeat that process until the debate clock runs out. Which feels pretty frustrating to the person who has taken the time to see that X is in fact checkmated.

We need a better social process for together identifying such checkmated policies X. Perhaps a way that a person can claim such a checkmate status, be tested sufficiently thoroughly on that claim, and then win a reward if they are right, and lose a stake if they are wrong. I’d be willing to help to create such a process. Of course we could still keep policies X on our books; we’d just have to admit we don’t have good consequential arguments for them.

As an example, let me offer blackmail. I’ve posted seven times on this blog on the topic, and in one of my posts I review twenty related papers that I’d read. I’ve argued many times with people on the topic, and I consistently hear them repeat the same arguments, which all fail. So I’ll defend the claim that not only don’t we have good strong consequential arguments against blackmail, but that this fact can be clearly demonstrated to smart reasonable people willing to walk through all the previously offered arguments.

To review and clarify, blackmail is a threat that you might gossip about someone on a particular topic, if they don’t do something else you want. The usual context is that you are allowed to gossip or not on this topic, and if you just mention that you know something, they are allowed to offer to compensate you to keep quiet, and you are allowed to accept that offer. You just can’t be the person who makes the first offer. In almost all other cases where you are allowed to do or not do something, at your discretion, you are allowed to make and accept offers that compensate you for one of these choices. And if a deal is legal, it rarely matters who proposes the deal. Blackmail is a puzzling exception to these general rules.

Most ancient societies simply banned salacious gossip against elites, but modern societies have deviated and allowed gossip. People today already have substantial incentives to learn embarrassing secrets about associates, in order to gain social rewards from gossiping about those to others. Most people suffer substantial harm from such gossip; it makes them wary about who they let get close to them, and induces them to conform more to social pressures regarding acceptable behaviors.

For most people, the main effect of allowing blackmail is to mildly increase the incentives to learn embarrassing secrets, and to not behave in ways that result in such secrets. This small effect makes it pretty hard to argue that for gossip incentives the social gains outweigh the losses, but for the slightly stronger blackmail incentives, the losses outweigh the gains. However, for elites these incentive increases are far stronger, making elite dislike plausibly the main consequentialist force pushing to keep blackmail illegal.

In a few recent twitter surveys, I found that respondents declared themselves against blackmail at a 3-1 rate, evenly split between consequential and other reasons for this position. However, they said blackmail should be legal in many particular cases I asked about, depending on what exactly you sought in exchange for your keeping someone’s secret. For example, they 12-1 supported getting your own secret kept, 3-2 getting someone to treat you fairly, and 1-1 getting help with child care in a medical crisis.

These survey results are pretty hard to square with consequential justifications, as the consequential harm from blackmail should mainly depend on the secrets being kept, not on the kind of compensation gained by the blackmailer. Which suggests that non-elite opposition to blackmail is mainly because blackmailers look like they have bad motives, not because of social consequences to others. This seems supported by the observation that women who trash each other’s reputations via gossip tend to consciously believe that they are acting helpfully, out of concern for their target.

As examples of weak arguments, Tyler Cowen just offered four. First, he says even if blackmail has good consequences, given current world opinion it would look bad to legalize it. (We should typically not do the right thing if that looks bad?) Second, he says negotiating big important deals can be stressful. (Should most big deals be banned?) Third, it is bad to have social mechanisms (like gossip?) that help enforce common social norms on sex, gender and drugs, as those are mistaken. Fourth, making blackmail illegal somehow makes it easier for your immediate family to blackmail you, and that’s somehow better (both somehows are unexplained).

I’d say the fact that Tyler is pushed to such weak tortured arguments supports my checkmate claim: we don’t have good strong consequential arguments for making gossiper-initiated blackmail offers illegal, relative to making gossip illegal or allowing all offers.

Added 18Feb: Some say a law against negative gossip is unworkable. But note, not only did the Romans manage it, we now have slander/libel laws that do the same thing except we add an extra complexity that the gossip must be false, which makes those laws harder to enforce. We can and do make laws against posting nude pictures of a person who disapproves, or stealing info such as via hidden bugs or hacking into someone’s computer.


How Lumpy AI Services?

Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)

Many intellectuals and ordinary people found such views quite plausible then, and still do; these are the concerns most often voiced to justify redistribution and regulation. Wealth inequality is said to be bad for social and political health, and big firms are said to be bad for the economy, workers, and consumers, especially if they are not loyal to our nation, or if they coordinate behind the scenes.

Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us. Many people either want these big firms broken up, or heavily constrained by presumed-friendly regulators.

In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.

In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task. In Drexler’s words: Continue reading "How Lumpy AI Services?" »


Conditional Harberger Tax Games

Baron Georges-Eugène Haussmann … transformed Paris with dazzling avenues, parks and other lasting renovations between 1853 and 1870. … Haussmann… resolved early on to pay generous compensation to [Paris] property owners, and he did. … [He] hoped to repay the larger loans he obtained from the private sector by capturing some of the increased value of properties lining along the roads he built. … [He] did confiscate properties on both sides of his new thoroughfares, and he had their edifices rebuilt. … Council of State … forced him to return these beautifully renovated properties to their original owners, who thus captured all of their increased value. (more)

In my last post I described abstractly how a system of conditional Harberger taxes (CHT) could help deal with zoning and other key city land use decisions. In this post, let me say a bit more about the behaviors I think we’d actually see in such a system. (I’m only considering here such taxes for land and property tied to land.)

First, while many property owners would personally manage their official declared property values, many others would have them set by an agent or an app. Agents and apps may often come packaged with insurance against various things that can go wrong, such as losing one’s property.

Second, yes, under CHT, sometimes people would (be paid well to) lose their property. This would almost always be because someone else credibly demonstrated that they expect to gain more value from it. Even if owners strategically or mistakenly declare values too low, the feature I suggested of being able to buy back a property by paying a 1% premium would ensure that pricing errors don’t cause property misallocations. The highest value uses of land can change, and one of the big positive features of this system is that it makes the usage changes that should then result easier to achieve. In my mind that’s a feature, not a bug. Yes, owners could buy insurance against the risk of losing a property, though that needn’t result in getting their property back.
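As a rough illustration of that buy-back feature, here is a minimal sketch of my own. The details not specified in this post are assumptions for illustration only: I assume the challenger buys at the declared value, the 1% premium is computed on that sale price, the reclaimed property’s new declared value equals the buy-back price, and taxes are ignored.

    from dataclasses import dataclass

    @dataclass
    class Property:
        owner: str
        declared_value: float    # owner's self-declared, taxable value

    def attempt_takeover(prop, challenger, owner_reclaims, premium=0.01):
        # Challenger buys at the declared value; the prior owner may then
        # reclaim by paying that price plus a small premium (assumed 1% of
        # the sale price), so a too-low declaration need not cost the property.
        sale_price = prop.declared_value
        if owner_reclaims:
            buy_back_price = sale_price * (1 + premium)
            # Assumed: the new declared value is set to the buy-back price.
            return Property(owner=prop.owner, declared_value=buy_back_price)
        return Property(owner=challenger, declared_value=sale_price)

    # Example: an owner who under-declared at 100 keeps the property for 101.
    home = Property(owner="Alice", declared_value=100.0)
    home = attempt_takeover(home, challenger="Bob", owner_reclaims=True)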

In the ancient world, it was common for people to keep the same marriage, home, neighbors, job, family, and religion for their entire life. In the modern world, in contrast, we expect many big changes during our lifetimes. While we can mostly count on family and religion remaining constant, we must accept bigger chances of change to marriages, neighbors, and jobs. Even our software environments change in ways we can’t control when new versions are issued. Renters today accept big risks of home changes, and even home “owners” face big risks due to job and financial risks. All of which seems normal and reasonable. Yes, a few people seem quite obsessed with wanting absolute guarantees on preservation of old property usage, but I can’t sympathize much with such fetishes for inefficient stasis. Continue reading "Conditional Harberger Tax Games" »
