MacAskill on Value Lock-In

Will MacAskill has a new book out today, What We Owe The Future, most of which I agree with, even if that doesn’t exactly break new ground. Yes, the future might be very big, and that matters a lot, so we should be willing to do a lot to prevent extinction, collapse, or stagnation. I hope his book induces more careful future analysis, such as I tried in Age of Em. (FYI, MacAskill suggested that book’s title to me.) I also endorse his call for more policy and institutional experimentation. But, as is common in book reviews, I now focus on where I disagree.

Aside from the future being important, MacAskill's main concern in his book is “value lock-in”, by which he means a future point in time when the values that control actions stop changing. But he actually mixes up two very different processes by which this result might arise. First, an immortal power with stable values might “take over the world”, and prevent deviations from its dictates. Second, in a stable universe, decentralized competition between evolving entities might pick out some most “fit” values to be most common.

MacAskill’s most dramatic predictions are about this first “take over” process. He claims that the next century or so is the most important time in all of human history:

We hold the entire future in our hands. … By choosing wisely, we can be pivotal in putting humanity on the right course. … The values that humanity adopts in the next few centuries might shape the entire trajectory of the future. … Whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, exploratory or rigid, might well be determined by what happens this century.

His reason: we will soon create AGI, or ems, who, being immortal, have forever stable values. Some org will likely use AGI to “take over the world”, and freeze in their values forever:

Advanced artificial intelligence could enable those in power to lock in their values indefinitely. … Since [AGI] software can be copied with high fidelity, an AGI can survive changes in the hardware instantiating it. AGI agents are potentially immortal. These two features of AGI – potentially rapid technological progress and in-principle immortality – combine to make value lock-in a real possibility. …

Using AGI, there are a number of ways that people could extend their values much farther into the future than ever before. First, people may be able to create AGI agents with goals closely aligned with their own which would act on their behalf. … [Second,] the goals of an AGI could be hard-coded: someone could carefully specify what future they want to see and ensure that the AGI aims to achieve it. … Third, people could potentially “upload”. …

International organizations or private actors may be able to leverage AGI to attain a level of power not seen since the days of the East India Company, which in effect ruled large areas of India. …

A single set of values could emerge. … The ruling ideology could in principle persist as long as civilization does. AGI systems could replicate themselves as many times as they wanted, just as easily as we can replicate software today. They would be immortal, freed from the biological process of aging, able to create back-ups of themselves and copy themselves onto new machines. … And there would no longer be competing value systems that could dislodge the status quo. …

Bostrom’s book Superintelligence. The scenario most closely associated with that book is one in which a single AI agent … quickly developing abilities far greater than the abilities of all of humanity combined. … It would therefore be incentivized to take over the world. … Recent work has looked at a broader range of scenarios. The move from subhuman intelligence to superintelligence need not be ultrafast or discontinuous to pose a risk. And it need not be a single AI that takes over; it could be many. …

Values could become even more persistent in the future if a single value system were to become globally dominant. If so, then the absence of conflict and competition would remove one reason for change in values over time. Conquest is the most dramatic pathway … and it may well be the most likely.

Now mere immortality seems far from sufficient to create either a takeover or value stability. On takeover: not only is a decentralized world of competing immortals easy to imagine, but in fact until recently individual bacteria, who very much compete, were thought to be immortal.

On values, immortality also seems far from sufficient to induce stable values. Human organizations like firms, clubs, cities, and nations seem to be roughly immortal, and yet their values often greatly change. Individual humans change their values over their lifetimes. Computer software is immortal, and yet its values often change, and it consistently rots. Yes, as I mentioned in my last post, some imagine that AGIs have a special value modularity that can ensure value stability. But we have many good reasons to doubt that scenario.

Thus MacAskill must be positing that a power who somehow manages to maintain stable values takes over and imposes its will everywhere forever. Yet the only scenario he points to that seems remotely up to this task is Bostrom’s foom scenario. MacAskill claims that other scenarios are also relevant, but doesn’t even try to show how they could produce this result. For reasons I’ve given many times before, I’m skeptical of foom-like scenarios.

Furthermore, let me note that even if one power came to dominate Earth’s civilization for a very long time, it would still have to face competition from other grabby aliens in roughly a billion years. If so, forever just isn’t at issue here.

While MacAskill doesn’t endorse any regulations to deal with this stable-AGI-takes-over scenario, he does endorse regulations to deal with the other path to value stability: evolution. He wants civilization to create enough of a central power that it could stop change for a while, and also limit competition between values.

The theory of cultural evolution explains why many moral changes are contingent. … the predominant culture tends to entrench itself. … results in a world increasingly dominated by cultures with traits that encourage and enable entrenchment and thus persistence. …

If we don’t design our institutions to govern this transition well – preserving a plurality of values and the possibility of desirable moral progress. …

A second way for a culture to become more powerful is immigration [into it]. … A third way in which a cultural trait can gain influence is if it gives one group greater ability to survive or thrive in a novel environment. … A final way in which one culture can outcompete another is via population growth. … If the world converged on a single value system, there would be much less pressure on those values to change over time.

We should try to ensure that we have made as much moral progress as possible before any point of lock-in. … As an ideal, we could aim for what we could call the long reflection: a stable state of the world in which we are safe from calamity and can reflect on and debate the nature of the good life, working out what the most flourishing society would be. … It would therefore be worth spending many centuries to ensure that we’ve really figured things out before taking irreversible actions like locking in values or spreading across the stars. …

We would need to keep our options open as much as possible … a reason to prevent smaller-scale lock-ins … would favor political experimentation – increasing cultural and political diversity, if possible. …

That one society has greater fertility than another or exhibits faster economic growth does not imply that society is morally superior. In contrast, the most important mechanisms for improving our moral views are reason, reflection, and empathy, and the persuasion of others based on those mechanisms. … Certain forms of free speech would therefore be crucial to enable better ideas to spread. …

International norms or laws preventing any single country from becoming too populous, just as anti-trust regulation prevents any single company from dominating a market. … The lock-in paradox. We need to lock-in some institutions and ideas in order to prevent a more thorough-going lock-in of values. … If we wish to avoid the lock-in of bad moral views, an entirely laissez-faire approach would not be possible; over time, the forces of cultural evolution would dictate how the future goes, and the ideologies that lead to the greatest military power and that try to eliminate their competition would suppress all others.

I’ve recently described my doubts that expert deliberation has been a large force in value change so far. So I’m skeptical that it will be a large force in the future. And the central powers (or global mobs) sufficient to promote a long reflection, or to limit nations competing, seem to risk creating value stability via the central dominance path discussed above. MacAskill doesn’t even consider this kind of risk from his favored regulations.

While competition may produce a value convergence in the long run, my guess is that convergence will happen a lot faster if we empower central orgs or mobs to regulate competition. I think that a great many folks prefer that latter scenario because they believe we know what are the best values, and fear that those values would not win an evolutionary competition. So they want to lock in current values via regs to limit competition and value change.

To his credit, MacAskill is less confident that currently popular values are in fact the best values. And his favored solution of more deliberation probably wouldn’t hurt. I just don’t think he realizes just how dangerous are central powers able to regulate to promote deliberation and limit competition. And he seems way too confident about the chance of anything like foom soon.


AGI Is Sacred

Sacred things are especially valuable, sharply distinguished, and idealized as having less decay, messiness, inhomogeneities, or internal conflicts. We are not to mix the sacred (S) with the non-sacred (NS), nor to trade S for NS. Thus S should not have clear measures or money prices, and we shouldn’t enforce rules that promote NS at S expense.

We are to desire S “for itself”, understand S intuitively not cognitively, and not choose S based on explicit calculation or analysis. We didn’t make S; S made us. We are to trust “priests” of S, give them more self-rule and job tenure, and their differences from us don’t count as “inequality”. Objects, spaces, and times can become S by association. (More)

When we treat something as sacred, we acquire the predictably extreme related expectations and values characteristic of our concept of “sacred”. This biases us in the usual case where such extremes are unreasonable. (To minimize such biases, try math as sacred.)

For example, most ancient societies had a great many gods, with widely varying abilities, features, and inclinations. And different societies had different gods. But while the ancients treated these gods as pretty sacred, Christians (and Jews) upped the ante. They “knew” from their God’s recorded actions that he was pretty long-lasting, powerful, and benevolent. But they moved way beyond those “facts” to draw more extreme, and thus more sacred, conclusions about their God.

For example, Christians came to focus on a single uniquely perfect God: eternal, all-powerful, all-good, omnipresent, all-knowing (even re the future), all-wise, never-changing, without origin, self-sufficient, spirit-not-matter, never lies nor betrays trust, and perfectly loving, beautiful, gracious, kind, and pretty much any other good feature you can name. The direction, if not always the magnitude, of these changes is well predicted by our sacredness concept.

It seems to me that we’ve seen a similar process recently regarding artificial intelligence. I recall that, decades ago, the idea that we could make artificial devices who could do many of the kinds of tasks that humans do, even if not quite as well, was pretty sacred. It inspired much reverence, and respect for its priests. But just as Christians upped the ante regarding God, many recently have upped the AI ante, focusing on an even more sacred variation on AI, namely AGI: artificial general intelligence.

The default AI scenario, the one that most straightforwardly projected past trends into the future, would go as follows. Many kinds of AI systems would specialize in many different tasks, each built and managed by different orgs. There’d also be a great many AI systems of each type, controlled by competing organizations, of roughly comparable cost-effectiveness.

Overall, the abilities of these AI would improve at roughly steady rates, with rate variations similar to what we’ve seen over the last seventy years. Individual AI systems would be introduced, rise in influence for a time, and then decline in influence, as they rot and become obsolete relative to rivals. AI systems wouldn’t work equally well with all other systems, but would instead have varying degrees of compatibility and integration.

The fraction of GDP paid for such systems would increase over time, and this would likely lead to econ growth rate increases, perhaps very large ones. Eventually many AI systems would reach human level on many tasks, but then continue to improve. Different kinds of system abilities would reach human level at different times. Even after this point, most all AI activity would be doing relatively narrow tasks.

The upped-ante version of AI, namely AGI, instead changes this scenario in the direction of making it more sacred. Compared to AI, AGI is idealized, sharply distinguished from other AI, and associated with extreme values. For example:

1) Few discussions of AGI distinguish different types of them. Instead, there is usually just one unspecialized type of AGI, assumed to be at least as good as humans at absolutely everything.

2) AGI is not a name (like “economy” or “nation”) for a diverse collection of tools run by different orgs, tools which can all in principle be combined, but not always easily. An AGI is instead seen as a highly integrated system, fully and flexibly able to apply any subset of its tools to any problem, without substantial barriers such as ownership conflicts, different representations, or incompatible standards.

3) An AGI is usually seen as a consistent and coherent ideal decision agent. For example, its beliefs are assumed all consistent with each other, fully updated on all its available info, and its actions are all part of a single coherent long-term plan. Humans greatly deviate from this ideal.

4) Unlike most human organizations, and many individual humans, AGIs are assumed to have no internal conflicts, where different parts work at cross purposes, struggling for control over the whole. Instead, AGIs can last forever maintaining completely reliable internal discipline.

5) Today virtually all known large software systems rot. That is, as they are changed to add features and adapt to outside changes, they gradually become harder to usefully modify, and are eventually discarded and replaced by new systems built from scratch. But an AGI is assumed to suffer no such rot. It can instead remain effective forever.

6) AGIs can change themselves internally without limit, and have sufficiently strong self-understanding to apply this ability usefully to all of their parts. This ability does not suffer from rot. Humans and human orgs are nothing like this.

7) AGIs are usually assumed to have a strong and sharp separation between a core “values” module and all their other parts. It is assumed that value tendencies are not in any way encoded into the other many complex and opaque modules of an AGI system. The values module can be made frozen and unchanging at no cost to performance, even in the long run, and in this way an AGI’s values can stay constant forever.

8) AGIs are often assumed to be very skilled, even perfect, at cooperating with each other. Some say that is because they can show each other their read-only values modules. In this case, AGI value modules are assumed to be small, simple, and standardized enough to be read and understood by other AGIs.

9) Many analyses assume there is only one AGI in existence, with all other humans and artificial systems at the time being vastly inferior. In fact this AGI is sometimes said to be more capable than the entire rest of the world put together. Some justify this by saying multiple AGIs cooperate so well as to be in effect a single AGI.

10) AGIs are often assumed to have unlimited powers of persuasion. They can convince humans, other AIs, and organizations of pretty much any claim, even claims that would seem to be strongly contrary to their interests, and even if those entities are initially quite wary and skeptical of the AGI, and have AI advisors.

11) AGIs are often assumed to have unlimited powers of deception. They could pretend to have one set of values but really have a completely different set of values, and completely fool the humans and orgs that developed them ever since they grew up from a “baby” AI. Even when those had AI advisors. This super power of deception apparently applies only to humans and their organizations, but not to other AGIs.

12) Many analyses assume a “foom” scenario wherein this single AGI in existence evolves very quickly, suddenly, and with little warning out of far less advanced AIs who were evolving far more slowly. This evolution is so fast as to prevent the use of trial and error to find and fix its problematic aspects.

13) The possible sudden appearance, in the not-near future, of such a unique powerful perfect creature, is seen by many as an event containing overwhelming value leverage, for good or ill. To many, trying to influence this event is our most important and praise-worthy action, and its priests are the most important people to revere.

I hope you can see how these AGI idealizations and values follow pretty naturally from our concept of the sacred. Just as that concept predicts the changes that religious folks seeking a more sacred God made to their God, it also predicts that AI fans seeking a more sacred AI would change it in these directions, toward this sort of version of AGI.

I’m rather skeptical that actual future AI systems, even distant future advanced ones, are well thought of as having this package of extreme idealized features. The default AI scenario I sketched above makes more sense to me.

Added 7a: In the above I’m listing assumptions commonly made about AGI, not just applying a particular definition of AGI.


Is Nothing Sacred?

“is nothing sacred?” is used to express shock when something you think is valuable or important is being changed or harmed (more)

Human groups often unite via agreeing on what to treat as “sacred”. While we don’t all agree on what is how sacred, almost all of us treat some things as pretty sacred. Sacred things are especially valuable, sharply distinguished, and idealized, so they have less decay, messiness, inhomogeneities, or internal conflicts.

We are not to mix the sacred (S) with the non-sacred (NS), nor to trade S for NS. Thus S should not have clear measures or money prices, and we shouldn’t enforce rules that promote NS at S expense. We are to desire S “for itself”, understand S intuitively not cognitively, and not choose S based on explicit calculation or analysis. We didn’t make S; S made us. We are to trust “priests” of S, give them more self-rule and job tenure, and their differences from us don’t count as “inequality”. Objects, spaces, and times can become S by association.

Treating things as sacred will tend to bias our thinking when such things do not actually have all these features, or when our values regarding them don’t actually justify all these sacred valuing rules. Yes, the benefits we get from uniting into groups might justify paying the costs of this bias. But even so, we might wonder if there are cheaper ways to gain such benefits. In particular, we might wonder if we could change what things we see as sacred, so as to reduce these biases. Asked another way: is there anything that is in fact, naturally sacred, so that treating it as such induces the least bias?

Yes, I think so. And that thing is: math. We do not create math; we find it, and it describes us. Math objects are in fact quite idealized and immortal, mostly lacking internal messy inhomogeneities. Yes, proofs can have messy details, but their assumptions and conclusions are much simpler. Math concepts don’t even suffer from the cultural context-dependence or long-term conceptual drift suffered by most abstract language concepts.

We can draw clear lines distinguishing math vs. non-math objects. Usually no one can own math, avoiding the vulgarity of associated prices. And while we think about math cognitively, the value we put on any piece of math, or on math as a whole, tends to come intuitively, even reverently, not via calculation.

Compared to other areas, math seems at an extreme of ease of evaluation of abilities and contributions, and thus math can suppress factionalism and corruption in such evaluations. This helps us to use math to judge mental ability, care, and clarity, especially in the young. So we use math tests to sort and assign prestige early in life.

As math is so prestigious and reliable to evaluate, we can more just let math priests tell us who is good at math, and then use that as a way to choose who to hire to do math. We can thus avoid using vulgar outcome-based forms of payment to compensate math workers. It doesn’t work so badly to give math priests self-rule and long job tenures. Furthermore, so many want to be math priests that their market wages are low, making math inequality feel less offensive.

The main thing that doesn’t fit re math as sacred is that today treating math as sacred doesn’t much help us unite some groups in contrast to other groups. Though that did happen long ago (e.g., among ancient Greeks). However, I don’t at all mind this aspect of math today.

The main bias I see is that treating math as sacred induces us to treat it as more valuable than it actually is. Many academic fields, for example, put way too high a priority on math models of their topics. Which distracts from actually learning about what is important. But, hey, at least math does in fact have a lot of uses, such as in engineering and finance. Math was even crucial to great advances in many areas of science.

Yes, many over-estimate math’s contributions. But even so, I can’t think of something else that is in fact more naturally “sacred” than math. If we all in fact have a deep need to treat some things as sacred, this seems the least biased target. If something must be sacred, let it be math.


Moral Progress Is Not Like STEM Progress

In this post I want to return to the question of moral progress. But before addressing that directly, I first want to set up two reference cases for comparison.

My first comparison case is statistics. Statistics is useful, and credit for the value that statistics adds to our discussions goes to several sources: to the statisticians who develop stat tests and estimates, to the teachers who transmit those tools to others, and to the problem specialists who find useful places to apply stats.

We can tell that statisticians deserve credit because we can usually identify the particular tests and estimates being used (e.g., “chi-squared test”) in each case, and can trace those back to the teachers who taught them, and the researchers who developed them. New innovations are novel combinations of stat details whose effectiveness depends greatly on those details. We can see the first use cases of each such structure, and then see how a habit of its use spread.

Similar stories apply to many STEM areas, where we can distinguish particular design elements and analysis tools, and trace them back to their teachers and innovators. We can thus credit those innovators with their contributions, and verify that we have in fact seen substantial progress in these areas. We can see many cases where new tools let us improve on the best we could do with old tools.

My second comparison case is the topic area of home arrangement: what things to put in what drawers and rooms in our homes, and what activities to do in what parts of what rooms at what times of the day or week. Our practices in these areas result from copying the choices of our parents, friends, TV shows, and retailers, and also from experimenting with personal variations to see what we like. Over our lifetimes, we each tend to get more satisfied with our choices.

It is less clear, however, how much humanity as a whole improves in this area over time. Oh, we prefer our homes to homes of centuries ago. But this is most clearly because we have bigger, nicer homes, which we fill with more and nicer things than our ancestors had or could afford.

As new items become available, our plans for which things go where, and what we do with them when, have adapted over time. But it isn’t clear that humanity learns much after an early period of adaptation to each new item. Yes, for each choice we make, we can usually offer an argument for why that choice is better, and sometimes we can remember where we heard that argument. But the general set of arguments used in this area doesn’t seem to expand or improve much over time.

It is possible and even plausible that, even so, we are slowly getting better in general at knowing where to put things and what to do when in homes. Even if we don’t learn new general principles, we may be slowly getting better at reducing our case specific errors relative to our constant general principles.

But if so, the value of this progress seems to be modest, compared to our other related sources of progress, such as bigger houses, better items, and more free time to spend on them. And it seems pretty clear that little of the progress that we have seen here is to be credited to researchers specializing in home arrangement or personal activity scheduling. We don’t share much general abstract knowledge about this area, and haven’t added much lately to whatever of that we once had.

We see similar situations in many other areas where there is widespread practice, but few research specialists or teachers of newly researched tools. There might be progress in reducing errors where practice deviates from widely accepted stable principles, but if so that progress seems modest relative to progress due to other factors, such as better technology, increased wealth, and larger populations.

With these two reference cases in mind, STEM tools and home arrangement, let us now consider moral progress. The world seems to many to be getting more moral over time. But that could be because we have been getting richer and safer, which makes morality more affordable to us. Or it could be due to random correlated drift in our practices and standards, combined with our habit of judging past practices by current standards.

However, it also seems possible, at least at first glance, that our world is getting more apparently moral because of improved moral abilities, holding constant our wealth and knowledge about non-moral topics. For example, moral researchers might be acquiring more objective general knowledge about morality, knowledge which morality teachers then spread to the rest of us, who then apply those improved moral tools to particular cases.

In support of this theory, many people point to particular moral arguments when they defend the morality of particular behaviors, and they often point to particular human sources for those arguments. Furthermore, many of those sources are new and canonical, so that a great many people in each era point to the same few sources, sources that are different from those to which prior generations pointed. Does this show progress?

If you look carefully at the specific moral arguments that people cite to support their behavior, it turns out that those arguments look pretty similar to arguments that were known long before. While each new generation’s canonical sources have some unique examples, styles, and argument details, those differences don’t seem to matter much to the practices of the ordinary people who cite them.

This situation seems in sharp contrast to the case of progress in statistics, for example, where the details of each new statistical test or estimate show up clearly and matter greatly to applications of those stats. It seems more consistent with moral arguments being used to justify behavior that would have happened anyway, rather than having moral arguments cause changes in behavior.

Yes, some old moral arguments may well have been forgotten for a time, and thus need to be reinvented by newer sources. For example, while ancient sources plausibly expressed thoughtful critiques of slavery and gender inequality, recent critics of such things may well have not read such ancient sources.

Even so, progress in morality looks to me much more like progress in home arrangement, and much less like progress in STEM. Even though locally new home arrangement choices continually appear, they don’t appear to add up to much overall progress relative to other sources of progress. Similarly, while it is possible that there is some moral progress due to slowly learning to have lower local error rates relative to constant general principles, I think we can pretty clearly reject the STEM-analogue hypothesis that morality researchers invent new detailed morality structures which then diffuse via teachers to greatly change typical practice.

Thus an examination of the details of moral change suggests that little of it can be credited to moral researchers, and only modest amounts to practitioners slowly learning to cut errors relative to stable principles. Thus most apparent progress is plausibly due to our getting richer and safer, or to drift combined with a habit of judging past practices by current standards.


A Portrait of Civil Servants

Our choices of the areas of life where governments will more regulate or directly provide services are some of our most important policy choices. But while on the surface we hear a great many different arguments on these topics, an awful lot of them seem to come down to this claim:

Government agencies can do better than private orgs because (A) they are more accountable to citizens via the voting channel, and (B) their employees more prioritize public welfare, due both to selecting nicer people, and to embedding them in a supportive work culture.

My Caltech Ph.D. in formal political theory prepared me to dispute the (A) part, but I honestly haven’t paid that much attention to the (B) part. Until now. Here is what I’ve just learned from a quick search about how civil servants differ from other workers.

First, I couldn’t quickly find stats on how govt workers differ from others in age, gender, race, or political orientation. (If someone can find those, I’ll edit this to include those here.) But I did find that they are better educated than other workers, and even controlling for that they are paid more. Furthermore, public sector workers had a median 6.5 years tenure, compared to 3.7 years in the private sector.

It’s not crazy to think that having a relatively secure well-paid job for an employer with a noble mission might incline one toward being a better person who makes job decisions more generously, i.e., more for the public good. But if that were true, what would you predict about their relative rates of workplace absenteeism, fraud, bullying, and violent events at work? You’d predict those to be lower, right?

Across nations, government workers have 10% to 84% higher work absenteeism rates; 40% for the U.S. Out of 22 industries, govt workers are #2 in work fraud rates. While govt workers are only 15% of U.S. workers, they were reported to have 24.7% and 26% of fraud cases. And while bullying and violent victimizations happen respectively at rates of  3.7% and 0.47% in private jobs, they happen at rates of 5.6% and 0.87% in public jobs.

This looks pretty damning so far. But what about direct measures of productivity, comparing public and private orgs doing the same task? It seems they do about the same on prisons, and private does better on schools and catching fugitives. In medicine, they do about the same re health and cost, but private seems better on timing and satisfaction. Even private military contractors seem to perform similarly.

Bottom line: I find little support for the idea that we can trust govt agencies more than private orgs due to their having or inspiring more trustworthy employees.


Violent Offense Under Bounties & Vouchers

I recently talked to some smart high school students about the voucher and bounty crime reform scenario. They imagined bounty hunters spending most of their time in chases and gun fights, as in cowboy or Star Wars movies. So they were against the scenario, preferring such violence roles to be filled by government employees.

But in fact bounty hunters today spend almost no time in chases or fights. And that was true throughout history; bounty hunters have been widely used in Rome and England for thousands of years. (I’ll discuss that history more below.) Movies emphasize rare scenarios to create conflict and drama. The main job of most bounty hunters was to collect evidence, and then to sue in a court trial. As lawyers have always done to prepare for and engage in lawsuits.

Okay, you might ask, but in a world of vouchers and bounty hunters, sometimes there would be gun fights or car chases, right? So who would be authorized to participate in such activities, and what powers would they have or need? That is, who would do violence in this scenario?

First, many parties, maybe even everyone, could be allowed to stand ready to defend themselves violently. Okay, you might say, but won’t offensive violence also be needed sometimes? If so, who is authorized to do that?

Well, note that a person found to lack a voucher would need to be assigned one immediately. Perhaps a “public option” voucher who keeps clients temporarily in a detention center. And offensive force might be needed to move such a newly found client to such a detention center.

Actually, this isn’t a special case, as in general vouchers and their representatives would be the main parties authorized to use offensive force. After all, vouchers would often be authorized by their client contracts to physically punish their clients. And if a client seems to be about to hurt others, perhaps via force, their voucher is usually the party with the strongest interest in stopping them. As they have to pay for any resulting damages.

Thus voucher-client contracts will pretty much always authorize the voucher to use offensive force against their client, both to punish them, and to prevent clients from causing harm. And the rest of us don’t need to decide what kinds of force should be allowed there, if those two are the only parties affected by their choice.

However, what if a third party ends up getting hurt when a voucher uses offensive force on their client? In this case, either the voucher or their client is likely guilty of a crime, and the voucher is on the hook either way to pay damages. To avoid these losses, vouchers would likely make deals to help each other in such situations, and have their clients agree to such behavior in their voucher-client contracts. Thus in the general bounty-voucher scenario, most offensive violence would happen between parties who had agreed by contract beforehand on how violence is to be handled.

Vouchers who have made such voucher-voucher deals also seem well-placed to handle people discovered to be without a voucher. Thus a simple solution for this case might be to hold a fast auction to see which nearby voucher is willing to take on this person as a client at the lowest price. This voucher would then have the job of transferring this client to a public option detention center, after which that detention center would become the client’s official voucher. At least until that client could arrange for a new voucher.

Note that under this voucher-bounty system, as long as everyone has a voucher then there is no need for any other party besides a voucher to forcibly detain anyone, either to ensure that they appear in court or to ensure that they can be punished. As vouchers are fully liable for such failures, such tasks can be delegated to them.

As I said above, fights and chases have not actually been the main complaints about bounty hunters in history. The main complaint in the last few centuries, which led to cuts in their usage, seems to be that bounty hunters were typically for-profit agents, whereas many thought government employees could be better trusted to promote the general welfare.

Here are the other main complaints about bounty hunters that I find in this article on the history of their usage (called “qui tam”) in England. Bounty hunters have at times made false accusations, committed perjury, coerced witnesses, faked evidence, tempted people to commit crimes, threatened jurors who ruled against them, and enforced the letter of laws against the spirit of the law.

Bounty hunters have also at times filed their claims in distant expensive-to-travel-to courts, and detained the accused before delayed trials, and used the threat of such treatments to extort concessions. They have accepted private settlements (i.e., plea bargains and bribes) instead of going to court. And they have accepted payments from guilty folks to do a bad job at trial, when such efforts prevent future trials from being held on the same accusations.

However, the government employee police who replaced bounty hunters have also done all these things. Some assume that such employees will do such things less often than would bounty hunters. But I don’t know of evidence that supports this claim. And remember that government police can much more effectively maintain a “blue wall of silence” that prevents the reporting and prosecution of such things. Whereas bounty hunters will happily turn on each other, just as one can easily hire a lawyer today to sue another lawyer, or a P.I. to investigate another P.I.

Note that we can greatly cut the harm of private settlements via keeping the bounty and fine levels close to each other. And no one besides vouchers need to detain anyone.


Who Should Be Our “Adults”?

Adult: “a mature, fully developed person. An adult has reached the age when they are legally responsible for their actions.”
“to attend to the ordinary tasks required of a responsible adult” “children should be accompanied by an adult” “responsibility, independent decision-making, and financial independence”

Mature: “fully grown physically” “developed mentally and emotionally and behave in a responsible way” “a lot of careful thought”

Responsible: “liable to be called to account” “able to answer for one’s conduct and obligations; trustworthy” “involving important duties, independent decision-making, or control over others.”

The usual concept of “adult” combines both a style in a role, “mature, responsible, independent”, and a description of who we let fill that role, “fully grown human”. In this post I want to reconsider who should fill that role.

The main role of an “adult” is to think carefully about what to do, and then do it reliably, with action choices that account well for their effects on others. That is, an adult has autonomy, self-control, and intelligence to make choices well and reliably, but also faces social incentives adequate to make them play well with others. Or at least play similarly well to the other available adults. Adults can be relied on to do the important things that need doing, and yet can be given great autonomy to decide what to do how and why.

A key subsidiary adult role is to manage “dependents” who are not up to filling this role. Such as children, animals, machines, the mentally ill, and the infirm. Not all adults need take this role, but those who do take this role must be adults. We match each such dependent to an adult “guardian”, allow that guardian to limit dependent behavior, and hold that guardian responsible for such behavior. In order to limit guardian mistreatment of dependents, sufficiently able dependents may be allowed to choose their guardians.

The prototype for this relation is that between human parents and their children. Parents limit their children, and are responsible for them to outsiders. Compared to their children, parents are more free to choose their actions and relations, are more held responsible for their actions, and are more trusted to do important things.

A common “libertarian” vision is to treat all fully grown humans as “adults” in this sense. But in fact such humans have usually not been fully trusted, free, or responsible. Among foragers, the band as a whole, discussing together, was more of an “adult”, trusted to limit the behavior of band members. Later on, during the farmer/herder era, family clans were more the “adults”, held responsible for member behavior and able to limit those individuals. Larger nations and empires have also been treated by the world as “adults”, free to choose and to be destroyed. And at times such units have decided to limit the freedoms of particular family clans, treating them as less than fully adult.

Such higher level “adult” social units have at times treated particular fully grown humans as also “adult”, judging them to be sufficiently reliable and responsible to be treated in that way. But many other fully grown humans have been treated more as dependents. And the usual rule has been that such dependents must be associated with particular controlling adults who were more reliable and could be held more responsible.

The industrial revolution was primarily driven by the rise of new larger orgs, such as for-profits, non-profits, and government agencies. (Science & tech were side effects of those new orgs.) And once such orgs became available, we soon came to treat them as the main “adults” of our world. Such orgs are arguably just smarter and more thoughtful and reliable than individual humans. They are now trusted to manage our most important activities, and are allowed to make deals and relations with each other quite freely, with almost no regulations.

Today we do not treat most fully grown humans as fully “adult”; we instead require each such human to pair up with a nation-state. Nation-states then limit the choices of their fully grown human members, and are held responsible by other nation-states for the actions of such members. We also usually support a norm that humans should be free to switch nations, if the new nation will take them. Nations don’t always play well with each other, but no other orgs at that level can force them to behave better.

However, I propose that we seriously consider instead treating smaller organizations (for-profits and non-profits) as the main responsible “adults” with which we pair each fully grown human. These smaller orgs are arguably on average even smarter, more thoughtful, and more reliable than are nations, they arguably play better with each other, and we are more willing and able to hold them strictly responsible.

Furthermore, these are the orgs that we actually trust to do most of our important activities. Competition between such orgs is what mainly ensures adaptation and innovation in our world, far more than does competition between nation-states. And allowing humans to choose between these as their adults gives them far more effective choice than when choosing between nations.

Today employers are in part treated as “adults” relative to their employees. And requiring each fully grown human to pair up with a sufficiently responsible firm is the essence of my “vouching” proposal for criminal law reform. The main formal requirement to be a voucher is having enough money to pay client fines, which makes such an org much easier to hold responsible for its own and its clients’ actions. In addition, I expect most to be for-profit firms, and thus smarter and more reliable than are most fully-grown humans. With vouchers responsible for individual behavior, and able to regulate that behavior, we’d have less need for government regulation to limit individual behavior.

Compared to themselves, children see their parents taking on more important roles in the world, being held more responsible for their actions, being more careful in their choices, and being more free to choose as they like. While most children eventually grow into such roles, many are disappointed to learn that few fully grown humans are treated fully as ideal “adults”. In our world, that role is reserved for nation-states.

Some are so disappointed to learn this that they propose “libertarian” reforms to make fully grown humans be the “adults” of our world, mostly unregulated and strongly responsible for their actions. If you ask them why children should not also be treated this way, a few will bite that bullet, but most will point to children being less reliable, thoughtful, and knowledgeable, and to our being less willing to hold them fully responsible for their actions.

But even though my intuitions pull libertarian, I have to admit that many fully grown humans also look this way, at least compared to our larger orgs. (These two recent movies brought this point home to me.) Such humans can also be pretty random, unreliable, and unthoughtful, and knowing this fact most people aren’t willing to hold them fully responsible for their actions, and are willing to authorize regulation instead to greatly limit their behavior.

However, even though we aren’t willing to treat most children as ideal “adults”, this doesn’t mean that nation-states must directly manage them. Instead we all understand that it probably works better to tie each young human to a fully grown human, who is more thoughtful than, and can be held more responsible than, that child.

So similarly, even if we also aren’t willing to treat most fully grown humans as ideal “adults”, this also doesn’t mean that they should be directly subject to limitations by nation-states. As we can instead tie each fully grown human to a larger voucher org, who we are in fact willing to treat as an ideal “adult”. Because such orgs are in fact more thoughtful, reliable, and able to be held responsible, and we are more willing to actually hold them strictly responsible.

To review, the concept “adult” has two parts, a social role that can be filled, and a description of who fills that role. The role is that of the thoughtful reliable responsible party, who can be trusted to do important things, who can be given great discretion re how to do them, and who can manage non-adults. In the context of small families, compared to their children, that adult role can to a first approximation be filled by fully grown parents.

However, in our larger society we do not in fact trust most fully grown humans to fully fill that role, as we have available to us more thoughtful, reliable, and responsible orgs. We have so far been putting nation-states into the ideal adult role.

But I argue that we’d do better to put smaller orgs in that role. That is, I propose to require each fully grown human to pair up with a “responsible adult” org, ready to pay for all they do wrong, and able to limit their behavior. To avoid mistreatment and allow adaptation to varying context, allow those fully grown humans the freedom to choose a mutually-agreeable adult, but require them to pick one.

If someone can find a voucher willing to back their being treated fully as an adult, well then I’m okay with that person being treated that way. But if no voucher is willing to back that stance, I don’t see why I should back it either. This may be as libertarian as I’m willing to go.

Added 11a: As Stefan Schubert notes, we can also see adult-dependent status in the ways that parties talk to each other. Complaining “kids” talk differently.


Beware Upward Reference Classes

Sometimes when I see associates getting attention, I wonder, “do they really deserve more attention than me?” I less often look at those who get less attention than me, and ask whether I deserve more. Because they just don’t show up in my field of view as often; attention makes you more noticeable.

If I were to formalize my doubts, I might ask, “Among tenured econ professors, how much does luck and org politics influence who gets more funding, prestige, and attention?” And I might find many reasons to answer “lots”, and so suggest that such things be handed out more equally or randomly. Among tenured econ professors, that is. And if an economist with a lower degree, or a professor from another discipline, asked why they aren’t included in my comparison and suggested redistribution, I might answer, “Oh I’m only talking about econ researchers here.”

Someone with a college econ degree might well ask if those with higher credentials like M.S., Ph.D., or a professor position really deserve the extra money, influence, and attention that they get. And if someone with only a high school degree were to ask why they aren’t included in this comparison, the econ degree person might say “oh, I’m only talking about economists here”, presuming that you can’t be considered an economist if you have no econ degree of any sort.

The pattern here is: “envy up, scorn down”. When considering fairness, we tend to define our comparison group upward, as everyone who has nearly as many qualifications as we do or more, and then we ask skeptically if those in this group with more qualifications really deserve the extra gains associated with their extra qualifications. But we tend to look downward with scorn, assuming that our qualifications are essential, and thus should be baked into the definition of our reference class. That is, we prefer upward envy reference classes to justify our envying those above us, while rejecting others envying us from below.

Life on Earth has steadily increased in its abilities over time, allowing life to spread into more places and niches. We have good reasons to think that this trend may long continue, eventually allowing our descendants to spread through the universe, until they meet up with other advanced life, resulting in a universe dense with advanced life.

However, many have suggested that this view of the universe makes us today seem suspiciously early among what they see as the relevant comparison group. And thus they suggest we need a Bayesian update toward this view of the universe being less likely. But what exactly is a good comparison group? For example, if you said “We’d be very early among all creatures with access to quantum computers”, I think we’d all get that this is not so puzzling, as the first quantum computers only appeared a few years ago.

We would also appear very early among all creatures who could knowingly ask the question “How many creatures will ever appear with feature X”, if the concept X applies to us but has only been recently introduced. We’d also be pretty early among all creatures who can express any question in language, if language was only invented in the last million years. It isn’t much better to talk about all creatures with self-awareness, if you say only primates and a few other animals count as having that, and they’ve only been around for a few million more years.

Thus in general in a universe where abilities improve over time, creatures that consider upward defined reference classes will tend to find themselves early. Often very early, if they insist that their class members have some very recently acquired abilities. But once you see this tendency to pick upward reference classes, the answers you get to such questions need no longer suggest updates against the hypothesis of long increasing abilities.

Furthermore, in any universe that will eventually fill up, creatures who find themselves well before that point in time can estimate that they are very early relative to even very neutral reference classes.

It seems to me that something similar is going on when people claim that this coming century will be uniquely important, the most important one ever, as computers are the most powerful tech we have ever seen, and as the next century is plausibly when we will make most of the big choices re how to use computers.  If we generally make the most important choices about each new tech soon after finding it, and if increasingly powerful new techs keep appearing, then this sort of situation should be common, not unique, in history.

So this next century will only be the most important one (in this way) if computers are the last tech to appear that is more powerful than prior techs. But if we expect that even more important techs will continue to be found, then we shouldn’t expect this one to be the most important tech ever. No, I can’t describe these more important yet-to-be-found future techs. But I do believe they exist.


Hating On Personal Equity

The New Yorker has a new article called “Is Selling Yourself The Wave of the Future?”, purportedly on entrepreneurs Daniil and David Liberman’s efforts to finance their careers via equity (i.e., shares of future income) instead of debt or self-funding, and to entice others to do likewise. But like most New Yorker articles, most of its 8300 words are a profile with many personal details, with hardly any of the article actually discussing this idea.

But as I’ve often written on this concept, a concept close to others I’m fond of, let me take this chance to revisit the topic. In this article, I find five complaints voiced about financing careers via equity:

Their model isn’t so much digging young people out of their predicament as replacing one kind of weight with another. The vulnerable are still vulnerable, and it remains a long way from the bottom to the top.

Yes, equity doesn’t eliminate poverty. But equity might still improve on debt or no funding, approaches that many vulnerable now use.

“Yes, if you’re the kind of person who wants to work at a job you love and it’s predictable how much money you’re going to make, it’s a bad instrument,” Sam Lessin, the venture capitalist, told me. “It works only when someone can squint and say, O.K., you’ll probably fail, but if you work we’re going to make a ton of money.

If the prices of such instruments are set by market forces, I don’t see how they would be bad investments on average. Yes there are transaction costs to create equity, perhaps adverse selection in who sells them, and selling your equity can cut your incentive to work. But on the other side are two key gains. First, equity might fund good career investments that would not otherwise happen. And second, equity can help to align the incentives of advisors and promoters, as we already do now with most career agents.

(Note that in my favorite equity variation, wherein the government auctions off the rights to receive fractions of the stream of tax revenue that it would otherwise get from a taxpayer, there is no added disincentive to work, and in fact an improved incentive to work when that taxpayer wins auctions to buy their own revenue streams.)

Investors give money to promising youths—usually through middleman companies such as Upstart—in exchange for a percentage of their future incomes. The traditional knock against such schemes has been that they’re exploitative or worse, a form of indentured servitude.

This seems like empty muckraking slander. What exactly makes equity exploitive or bad? I think the following two complaints get to the heart of what most people really object to, as I’ve also heard them often when I’ve discussed related proposals:

If the young have to present themselves in a particular way to the older generations so that they will find their life trajectory appealing, I could totally see how there could be a social hierarchy you typically just have between benefactors and those who receive those funds. …

One economist told me he doubts that normal people, even with technical protections, could be free of shareholder influence. (“There is reason to expect that a system that starts out that way will evolve under pressure from investors,” he said. “We saw this with changes in bankruptcy law in 2005 that gave the holders of credit-card debt more power vis-à-vis credit-card debtors by making it harder to file for bankruptcy under Chapter 7.”) As most C.E.O.s know, not even success brings freedom from shareholder pressure.

That is, the key complaint is that whoever buys your equity might try to lobby you or your associates to influence your behavior. Or they might try to lobby the government for favorable treatment and powers. For example, they might ask you to agree to changes as a condition of their investment. Or, as investors, they might try to cozy up to your associates to get them to lobby you toward behavior they prefer.

Note that your associates, in addition to wanting to promote and help you, also have various ways in which they want your behavior to change. And they already coordinate with each other to lobby you about these. Such associates include family, friends, lovers, employers, landlords, fellow club members, and business partners. Note further that we already allow you to borrow money, by which former strangers acquire a financial interest in both promoting and lobbying you re your future income.

Once we see that your debt holders, employers, and many other associates already want to promote and lobby you, and to lobby the government, re your behavior, I find it very hard to see how letting you sell shares in your future income adds much to this problem, especially if we let you decide who, if anyone, can buy such shares. This objection seems to me more like a simple anti-capitalist instinct that just isn’t very sensitive to the specifics of this situation.


Cook’s Critique of Our Earliness Argument

Tristan Cook has posted an impressive analysis, “Replicating and extending the grabby aliens model.” We are grateful for his detailed and careful work. Cook’s main focus is on indexical inference, showing how various estimates depend on different approaches to indexical analysis. But he has an appendix, “Updating n on the time remaining”, wherein he elaborates a claim that some of our analysis is “problematic”, “a possible error”, and “incompatible” with his, and that:

“These results fail to replicate Hanson et al.’s (2021) finding that (the implicit use of) SSA implies the existence of GCs in our future.”

In this post I respond to that critique.

Cook quotes our claim:

If life on Earth had to achieve n “hard steps” to reach humanity’s level, then the chance of this event rose as time to the n-th power. Integrating this over habitable star formation and planet lifetime distributions predicts >99% of advanced life appears after today, unless n < 3 and max planet duration <50Gyr. That is, we seem early.

He also replicates this key diagram of ours:

This shows that humanity looks very early unless we have a low value of either n (the number of hard steps needed to create advanced life like us) or Lmax (max habitable planet duration). We suggest that this be explained via a grabby-aliens-fill-the-universe deadline coming soon, though we admit that very low n or very low Lmax are other possible explanations.

But Cook claims instead that “large n and large Lmax … are incompatible.” Why? He offers a simple Bayesian model with a uniform prior over n, equal numbers of two types of planets all born at the same time with lifetimes of 5 and 100 billion years, and updating on the fact that humans appeared on one of these planets after 4.5 billion years. He shows (correctly) that the Bayesian posterior then overwhelmingly favors n=1, with almost no weight on n>2.
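To see why, here is a minimal sketch of this kind of calculation, under my reading of Cook’s simple model; the likelihood form and the n range of 1 to 10 are my assumptions, not his code. In the hard-steps model, a planet’s chance of finishing n steps by time t rises as t^n, so a planet type’s share of observers goes as its lifetime to the n-th power, and the arrival-time density of an observer on such a planet goes as n times t^(n-1).

    # Sketch of Cook's two-planet-type update (my reading, not his code).
    # Equal numbers of planets with lifetimes 5 and 100 Gyr, all born at once.
    lifetimes = [5.0, 100.0]  # Gyr
    t_obs = 4.5               # Gyr, when we appeared
    ns = range(1, 11)         # uniform prior over n = 1..10

    def likelihood(n):
        # P(a random observer appears at t_obs | n), up to a constant:
        # each planet type's share of observers goes as L**n, and the
        # arrival-time density given success goes as n * t**(n-1).
        total_weight = sum(L**n for L in lifetimes)
        return sum(n * t_obs**(n - 1) for L in lifetimes if t_obs <= L) / total_weight

    raw = [likelihood(n) for n in ns]
    for n, p in zip(ns, raw):
        print(f"n={n:2d}  posterior={p / sum(raw):.4f}")
    # Nearly all posterior weight lands on n=1, with almost none above n=2.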

But this seems to me to just repeat our point above: without a grabby aliens deadline, one needs to assume either low n or low Lmax. If you allow large Lmax with no deadline, that will force you to conclude low n; no surprise. (Also, it seems to me that Cook’s n estimates do not update on all of the varied evidence that has led other authors to estimate higher n.)

The body of Cook’s paper describes a much more elaborate Bayesian model, one which does include the deadline effect. And the posteriors on Lmax there also very strongly favor low Lmax, for all the indexical reasoning cases that he considers. Does this show that large Lmax is “incompatible” with large n?

No, because this result is easily attributed to the fact that his priors strongly favor both low n and low Lmax. Cook considers three priors on n, with medians of 0, 1, and 3. And while he allows Lmax to range from 5 to 20,000 Gyr, the median of his prior is ~10 Gyr, even though the actual median planet lifetime is 5,000 Gyr. An analysis that won’t allow large Lmax or large n can’t tell us if those two are compatible.

Note that the priors in Cook’s main Bayesian analysis are not designed to express great ignorance, but instead to agree with estimates from several prior papers that Cook likes. So Cook’s main priors exclude the possibilities that grabby alien civs might expand slowly, or that there are a great many non-grabby civs for each grabby one. And he tunes his prior to ensure a median of exactly one intelligent civilization per observable universe volume.

However, in another appendix of Cook’s paper, “Varying the prior on Lmax”, he also considers a wider prior on Lmax: a lognormal with a median of 500 Gyr and a one-sigma range of 110 to 2200 Gyr. (He retains all his other prior choices, including a prior on n with median 1.) His posterior from this has a median Lmax of 7 Gyr, and a 90th percentile at ~100 Gyr. So compared to Cook’s prior on Lmax, his posterior puts substantially more weight on lower values of Lmax. Does this prove his claim that high Lmax is incompatible with high n?

I think not, because 60% of this posterior is on cases with less than one grabby civ per observable universe volume, and it takes a much higher density of such civs to create a grabby aliens deadline effect.
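For a rough sense of how large that posterior shift is, one can compare the quoted prior’s quantiles with the quoted posterior. Here I assume the one-sigma range is meant in log space, which is my reading rather than something stated in the passage quoted above.

    # Quantiles of a lognormal prior with median 500 Gyr and a one-sigma
    # range of 110 to 2200 Gyr, taking that range as +/- one sigma in logs.
    import math
    from statistics import NormalDist

    median = 500.0
    sigma = math.log(2200.0 / 500.0)  # ~1.48; log(500/110) gives ~1.51, roughly consistent

    def quantile(p):
        return median * math.exp(sigma * NormalDist().inv_cdf(p))

    print(round(quantile(0.10)), round(quantile(0.50)), round(quantile(0.90)))
    # roughly 75, 500, 3300 Gyr

Against a prior whose 90th percentile sits above 3000 Gyr, a posterior median of 7 Gyr and 90th percentile near 100 Gyr is indeed a big downward shift; the next paragraphs consider why.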

Look, the fact that we now find ourselves on a planet that has lasted only 4.5 Gyr should boost low Lmax hypotheses in two ways. The first, and weaker, effect is that the lower Lmax is, the fewer planets there are below Lmax, and thus the higher the prior weight on our particular planet. This is a count effect, which boosts our planet’s posterior by a factor of ten for every factor of one hundred by which Lmax falls. As the total dynamic range of Lmax under consideration here is a factor of 4000, that’s a real but modest effect.
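In symbols (my notation), the factor-of-ten-per-factor-of-one-hundred rate stated above corresponds to a count of habitable planets that grows roughly as the square root of Lmax:

\[
N(L \le L_{\max}) \propto L_{\max}^{1/2}
\quad\Rightarrow\quad
\frac{P(\text{our planet} \mid L_{\max}/100)}{P(\text{our planet} \mid L_{\max})}
\approx \left(\frac{L_{\max}}{L_{\max}/100}\right)^{1/2} = 10 .
\]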

The second effect is much larger. Without a grabby aliens deadline, for n=1 a planet that lasts 4000 times longer becomes 4000 times more likely to birth an intelligent civilization. For n=2, it becomes sixteen million times more likely. And this factor gets even bigger for larger n. Thus observing that we appear on a planet that has lasted only 4.5 Gyr can force a huge additional update toward lower Lmax; absent a deadline, that is the only way to explain how we appear on such a short-lived planet. This strong effect plausibly explains the strong Lmax updating we see in Cook’s wider-Lmax-prior analysis, as most of the posterior weight there is on scenarios with no deadline effect.
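The arithmetic behind those factors, assuming the standard hard-steps scaling in which the chance of completing n steps within a lifetime L rises as L^n:

\[
\frac{P(\text{life on a planet of lifetime } 4000\,L)}{P(\text{life on a planet of lifetime } L)}
\approx 4000^{n},
\]

which is 4000 for n = 1 and sixteen million for n = 2.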

Bottom line: I happily admit there is a count effect that prefers lower Lmax in a posterior compared to a prior. But this effect is weak: a factor of ten in posterior per factor of one hundred in Lmax. And it happens regardless of whether a grabby aliens deadline effect applies. The other, much stronger Lmax update effect is cancelled by a grabby aliens deadline. Yes, if aliens are so rare that there’s no deadline effect, the update toward low Lmax is strong. But there is an important sense in which such a deadline is an alternative explanation of human earliness. This is what we claimed in our paper, and I don’t see that Cook’s analysis changes this conclusion.

P.S. Cook doesn’t actually simulate a stochastic model where alien civs arise and then block each other. He instead uses a simple formula following Olson (2015). So his distributions over civ size only include variance over time, not other kinds of variance. I worry that this formula assumes an independence of alien volume locations that isn’t true. Though I doubt the errors from this simplification make that big of a difference.
