Monthly Archives: August 2022

We See The Sacred From Afar, To See It Together

I’ve recently been trying to make sense of our concept of the “sacred”, by puzzling over its many correlates. And I think I’ve found a way to make more sense of it in terms of near-far (or “construal level”) theory, a framework that I’ve discussed here many times before.

When we look at a scene full of objects, a few of those objects are big and close up, while a lot more are small and far away. And the core idea of near-far is that it makes sense to put more mental energy into analyzing each object up close, objects that matter to us more, by paying more attention to their detail, detail often not available about stuff far away. And our brains do seem to be organized around this analysis principle.

That is, we do tend to think less, and think more abstractly, about things far from us in time, space, social connection, or hypotheticality. Furthermore, the more abstractly we think about something, the more distant we tend to assume are its many aspects. In fact, the more distant something is in any way, the more distant we tend to assume it is in other ways.

This all applies not just to dates, colors, sounds, shapes, sizes, and categories, but also to the goals and priorities we use to evaluate our plans and actions. We pay more attention to detailed complexities and feasibility constraints regarding actions that are closer to us, but for far away plans we are content to think about them more simply and abstractly, in terms of relatively general values and principles that depend less on context. And when we think about plans more abstractly, we tend to assume that those actions are further away and matter less to us.

Now consider some other ways in which it might make sense to simplify our evaluation of plans and actions where we care less. We might, for example, just follow our intuitions, instead of consciously analyzing our choices. Or we might just accept expert advice about what to do, and care little about experts’ incentives. If there are several relevant abstract considerations, we might assume they do not conflict, or just pick one of them, instead of trying to weigh multiple considerations against each other. We might simplify an abstract consideration from many parameters down to one factor, down to a few discrete options, or even all the way down to a simple binary split.

It turns out that all of these analysis styles are characteristic of the sacred! We are not supposed to calculate the sacred, but just follow our feelings. We are to trust priests of the sacred more. Sacred things are presumed to not conflict with each other, and we are not to trade them off against other things. Sacred things are idealized in our minds, by simplifying them and neglecting their defects. And we often have sharp binary categories for sacred things; things are either sacred or not, and sacred things are not to be mixed with the non-sacred.

All of which leads me to suggest a theory of the sacred: when a group is united by valuing something highly, they value it in a style that is very abstract, having the features usually appropriate for quickly evaluating things relatively unimportant and far away, even though this group in fact tries to value this sacred thing highly. Of course, depending on what they try to value, such attempts may have only limited success.

For example, my society (US) tries to value medicine sacredly. So ordinary people are reluctant to consciously analyze or question medical advice; they are instead to just trust its priests, namely doctors, without looking at doctors’ incentives or track records. Instead of thinking in terms of multiple dimensions of health, we boil it all down to a single health dimension, or even a binary of dead or alive.

Instead of seeing a continuum of cost-effectiveness of medical treatments, along which the rich would naturally go further, we want a binary of good vs bad treatments, where everyone should get the good ones no matter what their cost, and regardless of any other factors besides a diagnosis. We are not to make trades of non-sacred things for medicine, and we can’t quite believe it is ever necessary to trade medicine against other sacred things. Furthermore, we want there to be a sharp distinction between what is medicine and what is not medicine, and so we struggle to classify things like mental therapy or fresh food.

Okay, but if we see sacred things as especially important to us, why ever would we want to analyze them using styles that we usually apply to things that are far away and the least important to us? Well one theory might be that our brains find it hard to code each value in multiple ways, and so typically code our most important values as more abstracted ones, as we tend to apply them most often from a distance.

Maybe, but let me suggest another theory. When a group unites itself by sharing a key “sacred” value, then its members are especially eager to show each other that they value sacred things in the same way. However, when group members hear about and observe how an associate makes key sacred choices, they will naturally evaluate those choices from a distance. So each group member also wants to look at their own choices from afar, in order to see them in the same way that others will see them.

In this view, it is the fact that groups tend to be united by sacred values that is key to explaining why they treat such values in the style usually appropriate for relatively unimportant things seen from far away, even though they actually want to value those things highly, and even though such a from-a-distance treatment will probably lead to a great many errors and misjudgments when actually trying to promote that thing.

You see, it may be more important to groups to pursue a sacred value together than to pursue it effectively. Such as the way the US spends 18% of GDP on medicine, as a costly signal of how sacred medicine is to us, even though the marginal health benefit of our medical spending seems to be near zero. And we show little interest in better institutions that could make such spending far more cost effective.

Because at least this way we all see each other’s ineffective medical choices in the same way. We agree on what to do. And after all, that’s the important thing about medicine, not whether we live or die.

Added 10Sep: Other dual process theories of brains give similar predictions.


Bizarre Accusations

Imagine that you planned a long hike through a remote area, and suggested that it might help to have an experienced hunter-gatherer along as a guide. Should listeners presume that you intend to imprison and enslave such guides to serve you? Or is it more plausible that you propose to hire such people as guides?

To me, hiring seems the obvious interpretation. But, to accuse me of advancing a racist slavery agenda, Audra Mitchell and Aadita Chaudhury make the opposite interpretation in their 2020 International Relations article “Worlding beyond ‘the’ ‘end’ of ‘the world’: white apocalyptic visions and BIPOC futurisms”.

In a chapter “Catastrophe, Social Collapse, and Human Extinction” in the 2008 book Global Catastrophic Risks I suggested that we might protect against human extinction by populating underground refuges with people skilled at surviving in a world without civilization:

A very small human population would mostly have to retrace the growth path of our human ancestors; one hundred people cannot support an industrial society today, and perhaps not even a farming society. They might have to start with hunting and gathering, until they could reach a scale where simple farming was feasible. And only when their farming population was large and dense enough could they consider returning to industry.

So it might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right. Perhaps such people could be rotated periodically from a well protected region where they practiced simple lifestyles, so they could keep their skills fresh. And perhaps we should test our refuge concepts, isolating real people near them for long periods to see how well particular sorts of refuges actually perform at returning their inhabitants to a simple sustainable lifestyle.

On this basis, Mitchell and Chaudhury call me a “white futurist” and “American settler economist” seeking to preserve existing Euro-centric power structures:

Indeed, many contributors to ‘end of the world’ discourses offer strategies for the reconstruction and ‘improvement’ of existing power structures after a global catastrophe. For example, American settler economist Robin Hanson calculates that if 100 humans survived a global catastrophic disaster that killed all others, they could eventually move back through the ‘stages’ of ‘human’ development, returning to the ‘hunter-gatherer stage’ within 20,000 years and then ‘progressing’ from there to a condition equivalent to contemporary society (defined in Euro-centric terms). …

some white futurists express concerns about the ‘de-volution’ of ‘humanity’ from its perceived pinnacle in Euro-centric societies. For example, American settler economist Hanson describes the emergence of ‘humanity’ in terms of four ‘progressions’

And solely on the basis of my book chapter quote above, Mitchell and Chaudhury bizarrely claim that I “quite literally” suggest imprisoning and enslaving people of color “to enable the future re-generation of whiteness”:

To achieve such ideal futures, many writers in the ‘end of the world’ genre treat [black, indigenous, people of color] as instruments or objects of sacrifice. In a stunning display of white possessive logic, Hanson suggests that, in the face of global crisis, it

‘might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course, such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right.

In this imaginary, Hanson quite literally suggests the (re-/continuing)imprisonment, (re-/continuing)enslavement and biopolitical (re-/continuing) instrumentalization of living BIPOC in order to enable the future re-generation of whiteness. This echoes the dystopian nightmare world described in …

And this in an academic journal article that supposedly passed peer review! (I was not one of the “peers” consulted.)

To be very clear, I proposed to hire skilled foragers and subsistence farmers to serve in such roles, compensating them as needed to gain their consent. I didn’t much care about their race, nor about the race of the world that would result from their repopulating the world. And presumably someone with substantial racial motivations would in fact care more about that last part; how exactly does repopulating the world with people of color promote “whiteness”?


MacAskill on Value Lock-In

Will MacAskill has a new book out today, What We Owe The Future, most of which I agree with, even if that doesn’t exactly break new ground. Yes, the future might be very big, and that matters a lot, so we should be willing to do a lot to prevent extinction, collapse, or stagnation. I hope his book induces more careful future analysis, such as I tried in Age of Em. (FYI, MacAskill suggested that book’s title to me.) I also endorse his call for more policy and institutional experimentation. But, as is common in book reviews, I now focus on where I disagree.

Aside from the future being important, MacAskill’s main concern in his book is “value lock-in”, by which he means a future point in time when the values that control actions stop changing. But he actually mixes up two very different processes by which this result might arise. First, an immortal power with stable values might “take over the world”, and prevent deviations from its dictates. Second, in a stable universe decentralized competition between evolving entities might pick out some most “fit” values to be most common.

MacAskill’s most dramatic predictions are about this first “take over” process. He claims that the next century or so is the most important time in all of human history:

We hold the entire future in our hands. … By choosing wisely, we can be pivotal in putting humanity on the right course. … The values that humanity adopts in the next few centuries might shape the entire trajectory of the future. … Whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, exploratory or rigid, might well be determined by what happens this century.

His reason: we will soon create AGI, or ems, who, being immortal, have forever stable values. Some org will likely use AGI to “take over the world”, and freeze in their values forever:

Advanced artificial intelligence could enable those in power to lock in their values indefinitely. … Since [AGI] software can be copied with high fidelity, an AGI can survive changes in the hardware instantiating it. AGI agents are potentially immortal. These two features of AGI – potentially rapid technological progress and in-principle immortality – combine to make value lock-in a real possibility. …

Using AGI, there are a number of ways that people could extend their values much farther into the future than ever before. First, people may be able to create AGI agents with goals closely aligned with their own which would act on their behalf. … [Second,] the goals of an AGI could be hard-coded: someone could carefully specify what future they want to see and ensure that the AGI aims to achieve it. … Third, people could potentially “upload”. …

International organizations or private actors may be able to leverage AGI to attain a level of power not seen since the days of the East India Company, which in effect ruled large areas of India. …

A single set of values could emerge. … The ruling ideology could in principle persist as long as civilization does. AGI systems could replicate themselves as many times as they wanted, just as easily as we can replicate software today. They would be immortal, freed from the biological process of aging, able to create back-ups of themselves and copy themselves onto new machines. … And there would no longer be competing value systems that could dislodge the status quo. …

Bostrom’s book Superintelligence. The scenario most closely associated with that book is one in which a single AI agent … quickly developing abilities far greater than the abilities of all of humanity combined. … It would therefore be incentivized to take over the world. … Recent work has looked at a broader range of scenarios. The move from subhuman intelligence to superintelligence need not be ultrafast or discontinuous to pose a risk. And it need not be a single AI that takes over; it could be many. …

Values could become even more persistent in the future if a single value system were to become globally dominant. If so, then the absence of conflict and competition would remove one reason for change in values over time. Conquest is the most dramatic pathway … and it may well be the most likely.

Now mere immortality seems far from sufficient to create either value stability or a takeover. On takeover: not only is a decentralized world of competing immortals easy to imagine, but in fact until recently individual bacteria, which very much compete, were thought to be immortal.

On values, immortality also seems far from sufficient to induce stable values. Human organizations like firms, clubs, cities, and nations seem to be roughly immortal, and yet their values often greatly change. Individual humans change their values over their lifetimes. Computer software is immortal, and yet its values often change, and it consistently rots. Yes, as I mentioned in my last post, some imagine that AGIs have a special value modularity that can ensure value stability. But we have many good reasons to doubt that scenario.

Thus MacAskill must be positing that a power who somehow manages to maintain stable values takes over and imposes its will everywhere forever. Yet the only scenario he points to that seems remotely up to this task is Bostrom’s foom scenario. MacAskill claims that other scenarios are also relevant, but doesn’t even try to show how they could produce this result. For reasons I’ve given many times before, I’m skeptical of foom-like scenarios.

Furthermore, let me note that even if one power came to dominate Earth’s civilization for a very long time, it would still have to face competition from other grabby aliens in roughly a billion years. If so, forever just isn’t at issue here.

While MacAskill doesn’t endorse any regulations to deal with this stable-AGI-takes-over scenario, he does endorse regulations to deal with the other path to value stability: evolution. He wants civilization to create enough of a central power that it could stop change for a while, and also limit competition between values.

The theory of cultural evolution explains why many moral changes are contingent. … the predominant culture tends to entrench itself. … results in a world increasingly dominated by cultures with traits that encourage and enable entrenchment and thus persistence. …

If we don’t design our institutions to govern this transition well – preserving a plurality of values and the possibility of desirable moral progress. …

A second way for a culture to become more powerful is immigration [into it]. … A third way in which a cultural trait can gain influence is if it gives one group greater ability to survive or thrive in a novel environment. … A final way in which one culture can outcompete another is via population growth. … If the world converged on a single value system, there would be much less pressure on those values to change over time.

We should try to ensure that we have made as much moral progress as possible before any point of lock-in. … As an ideal, we could aim for what we could call the long reflection: a stable state of the world in which we are safe from calamity and can reflect on and debate the nature of the good life, working out what the most flourishing society would be. … It would therefore be worth spending many centuries to ensure that we’ve really figured things out before taking irreversible actions like locking in values or spreading across the stars. …

We would need to keep our options open as much as possible … a reason to prevent smaller-scale lock-ins … would favor political experimentation – increasing cultural and political diversity, if possible. …

That one society has greater fertility than another or exhibits faster economic growth does not imply that society is morally superior. In contrast, the most important mechanisms for improving our moral views are reason, reflection, and empathy, and the persuasion of others based on those mechanisms. … Certain forms of free speech would therefore be crucial to enable better ideas to spread. …

International norms or laws preventing any single country from becoming too populous, just as anti-trust regulation prevents any single company from dominating a market. … The lock-in paradox. We need to lock in some institutions and ideas in order to prevent a more thorough-going lock-in of values. … If we wish to avoid the lock-in of bad moral views, an entirely laissez-faire approach would not be possible; over time, the forces of cultural evolution would dictate how the future goes, and the ideologies that lead to the greatest military power and that try to eliminate their competition would suppress all others.

I’ve recently described my doubts that expert deliberation has been a large force in value change so far. So I’m skeptical that it will be a large force in the future. And the central powers (or global mobs) sufficient to promote a long reflection, or to limit nations competing, seem to risk creating value stability via the central dominance path discussed above. MacAskill doesn’t even consider this kind of risk from his favored regulations.

While competition may produce a value convergence in the long run, my guess is that convergence will happen a lot faster if we empower central orgs or mobs to regulate competition. I think that a great many folks prefer that latter scenario because they believe we know what are the best values, and fear that those values would not win an evolutionary competition. So they want to lock in current values via regs to limit competition and value change.

To his credit, MacAskill is less confident that currently popular values are in fact the best values. And his favored solution of more deliberation probably wouldn’t hurt. I just don’t think he realizes how dangerous are central powers able to regulate to promote deliberation and limit competition. And he seems way too confident about the chance of anything like foom soon.


AGI Is Sacred

Sacred things are especially valuable, sharply distinguished, and idealized as having less decay, messiness, inhomogeneities, or internal conflicts. We are not to mix the sacred (S) with the non-sacred (NS), nor to trade S for NS. Thus S should not have clear measures or money prices, and we shouldn’t enforce rules that promote NS at S expense.

We are to desire S “for itself”, understand S intuitively not cognitively, and not choose S based on explicit calculation or analysis. We didn’t make S; S made us. We are to trust “priests” of S, give them more self-rule and job tenure, and their differences from us don’t count as “inequality”. Objects, spaces, and times can become S by association. (More)

When we treat something as sacred, we acquire the predictably extreme related expectations and values characteristic of our concept of “sacred”. This biases us in the usual case where such extremes are unreasonable. (To minimize such biases, try math as sacred.)

For example, most ancient societies had a great many gods, with widely varying abilities, features, and inclinations. And different societies had different gods. But while the ancients treated these gods as pretty sacred, Christians (and Jews) upped the ante. They “knew” from their God’s recorded actions that he was pretty long-lasting, powerful, and benevolent. But they moved way beyond those “facts” to draw more extreme, and thus more sacred, conclusions about their God.

For example, Christians came to focus on a single uniquely perfect God: eternal, all-powerful, all-good, omnipresent, all-knowing (even re the future), all-wise, never-changing, without origin, self-sufficient, spirit-not-matter, never lies nor betrays trust, and perfectly loving, beautiful, gracious, kind, and pretty much any other good feature you can name. The direction, if not always the magnitude, of these changes is well predicted by our sacredness concept.

It seems to me that we’ve seen a similar process recently regarding artificial intelligence. I recall that, decades ago, the idea that we could make artificial devices who could do many of the kinds of tasks that humans do, even if not quite as well, was pretty sacred. It inspired much reverence, and respect for its priests. But just as Christians upped the ante regarding God, many recently have upped the AI ante, focusing on an even more sacred variation on AI, namely AGI: artificial general intelligence.

The default AI scenario, the one that most straightforwardly projected past trends into the future, would go as follows. Many kinds of AI systems would specialize in many different tasks, each built and managed by different orgs. There’d also be a great many AI systems of each type, controlled by competing organizations, of roughly comparable cost-effectiveness.

Overall, the abilities of these AI would improve at roughly steady rates, with rate variations similar to what we’ve seen over the last seventy years. Individual AI systems would be introduced, rise in influence for a time, and then decline in influence, as they rotted and became obsolete relative to rivals. AI systems wouldn’t work equally well with all other systems, but would instead have varying degrees of compatibility and integration.

The fraction of GDP paid for such systems would increase over time, and this would likely lead to econ growth rate increases, perhaps very large ones. Eventually many AI systems would reach human level on many tasks, but then continue to improve. Different kinds of system abilities would reach human level at different times. Even after this point, most all AI activity would be doing relatively narrow tasks.

The upped-ante version of AI, namely AGI, instead changes this scenario in the direction of making it more sacred. Compared to AI, AGI is idealized, sharply distinguished from other AI, and associated with extreme values. For example:

1) Few discussions of AGI distinguish different types of them. Instead, there is usually just one unspecialized type of AGI, assumed to be at least as good as humans at absolutely everything.

2) AGI is not a name (like “economy” or “nation”) for a diverse collection of tools run by different orgs, tools which can all in principle be combined, but not always easily. An AGI is instead seen as a highly integrated system, fully and flexibly able to apply any subset of its tools to any problem, without substantial barriers such as ownership conflicts, different representations, or incompatible standards.

3) An AGI is usually seen as a consistent and coherent ideal decision agent. For example, its beliefs are assumed all consistent with each other, fully updated on all its available info, and its actions are all part of a single coherent long-term plan. Humans greatly deviate from this ideal.

4) Unlike most human organizations, and many individual humans, AGIs are assumed to have no internal conflicts, where different parts work at cross purposes, struggling for control over the whole. Instead, AGIs can last forever maintaining completely reliable internal discipline.

5) Today virtually all known large software systems rot. That is, as they are changed to add features and adapt to outside changes, they gradually become harder to usefully modify, and are eventually discarded and replaced by new systems built from scratch. But an AGI is assumed to suffer no such rot. It can instead remain effective forever.

6) AGIs can change themselves internally without limit, and have sufficiently strong self-understanding to apply this ability usefully to all of their parts. This ability does not suffer from rot. Humans and human orgs are nothing like this.

7) AGIs are usually assumed to have a strong and sharp separation between a core “values” module and all their other parts. It is assumed that value tendencies are not in any way encoded into the other many complex and opaque modules of an AGI system. The values module can be made frozen and unchanging at no cost to performance, even in the long run, and in this way an AGI’s values can stay constant forever.

8) AGIs are often assumed to be very skilled, even perfect, at cooperating with each other. Some say that is because they can show each other their read-only values modules. In this case, AGI value modules are assumed to be small, simple, and standardized enough to be read and understood by other AGIs.

9) Many analyses assume there is only one AGI in existence, with all other humans and artificial systems at the time being vastly inferior. In fact this AGI is sometimes said to be more capable than the entire rest of the world put together. Some justify this by saying multiple AGIs cooperate so well as to be in effect a single AGI.

10) AGIs are often assumed to have unlimited powers of persuasion. They can convince humans, other AIs, and organizations of pretty much any claim, even claims that would seem to be strongly contrary to their interests, and even if those entities are initially quite wary and skeptical of the AGI, and have AI advisors.

11) AGIs are often assumed to have unlimited powers of deception. They could pretend to have one set of values but really have a completely different set of values, and completely fool the humans and orgs that developed them ever since they grew up from a “baby” AI. Even when those had AI advisors. This super power of deception apparently applies only to humans and their organizations, but not to other AGIs.

12) Many analyses assume a “foom” scenario wherein this single AGI in existence evolves very quickly, suddenly, and with little warning out of far less advanced AIs who were evolving far more slowly. This evolution is so fast as to prevent the use of trial and error to find and fix its problematic aspects.

13) The possible sudden appearance, in the not-near future, of such a unique powerful perfect creature, is seen by many as an event containing overwhelming value leverage, for good or ill. To many, trying to influence this event is our most important and praise-worthy action, and its priests are the most important people to revere.

I hope you can see how these AGI idealizations and values follow pretty naturally from our concept of the sacred. Just as that concept predicts the changes that religious folks seeking a more sacred God made to their God, it also predicts that AI fans seeking a more sacred AI would change it in these directions, toward this sort of version of AGI.

I’m rather skeptical that actual future AI systems, even distant future advanced ones, are well thought of as having this package of extreme idealized features. The default AI scenario I sketched above makes more sense to me.

Added 7a: In the above I’m listing assumptions commonly made about AGI in AI risk discussions, not applying a particular definition of AGI.


Is Nothing Sacred?

“is nothing sacred?” is used to express shock when something you think is valuable or important is being changed or harmed (more)

Human groups often unite via agreeing on what to treat as “sacred”. While we don’t all agree on what is how sacred, almost all of us treat some things as pretty sacred. Sacred things are especially valuable, sharply distinguished, and idealized, so they have less decay, messiness, inhomogeneities, or internal conflicts.

We are not to mix the sacred (S) with the non-sacred (NS), nor to trade S for NS. Thus S should not have clear measures or money prices, and we shouldn’t enforce rules that promote NS at S expense. We are to desire S “for itself”, understand S intuitively not cognitively, and not choose S based on explicit calculation or analysis. We didn’t make S; S made us. We are to trust “priests” of S, give them more self-rule and job tenure, and their differences from us don’t count as “inequality”. Objects, spaces, and times can become S by association.

Treating things as sacred will tend to bias our thinking when such things do not actually have all these features, or when our values regarding them don’t actually justify all these sacred valuing rules. Yes, the benefits we get from uniting into groups might justify paying the costs of this bias. But even so, we might wonder if there are cheaper ways to gain such benefits. In particular, we might wonder if we could change what things we see as sacred, so as to reduce these biases. Asked another way: is there anything that is in fact naturally sacred, so that treating it as such induces the least bias?

Yes, I think so. And that thing is: math. We do not create math; we find it, and it describes us. Math objects are in fact quite idealized and immortal, mostly lacking internal messy inhomogeneities. Yes, proofs can have messy details, but their assumptions and conclusions are much simpler. Math concepts don’t even suffer from the cultural context-dependence or long-term conceptual drift suffered by most abstract language concepts.

We can draw clear lines distinguishing math vs. non-math objects. Usually no one can own math, avoiding the vulgarity of associated prices. And while we think about math cognitively, the value we put on any piece of math, or on math as a whole, tends to come intuitively, even reverently, not via calculation.

Compared to other areas, math seems at an extreme of ease of evaluation of abilities and contributions, and thus math can suppress factionalism and corruption in such evaluations. This helps us to use math to judge mental ability, care, and clarity, especially in the young. So we use math tests to sort and assign prestige early in life.

As math is so prestigious and reliable to evaluate, we can more just let math priests tell us who is good at math, and then use that as a way to choose who to hire to do math. We can thus avoid using vulgar outcome-based forms of payment to compensate math workers. It doesn’t work so badly to give math priests self-rule and long job tenures. Furthermore, so many want to be math priests that their market wages are low, making math inequality feel less offensive.

The main thing that doesn’t fit re math as sacred is that today treating math as sacred doesn’t much help us unite some groups in contrast to other groups. Though that did happen long ago (e.g., among ancient Greeks). However, I don’t at all mind this aspect of math today.

The main bias I see is that treating math as sacred induces us to treat it as more valuable than it actually is. Many academic fields, for example, put way too high a priority on math models of their topics. Which distracts from actually learning about what is important. But, hey, at least math does in fact have a lot of uses, such as in engineering and finance. Math was even crucial to great advances in many areas of science.

Yes, many over-estimate math’s contributions. But even so, I can’t think of something else that is in fact more naturally “sacred” than math. If we all in fact have a deep need to treat some things as sacred, this seems a least biased target. If something must be sacred, let it be math.


Moral Progress Is Not Like STEM Progress

In this post I want to return to the question of moral progress. But before addressing that directly, I first want to set up two reference cases for comparison.

My first comparison case is statistics. Statistics is useful, and credit for the value that statistics adds to our discussions goes to several sources: to the statisticians who develop stat tests and estimates, to the teachers who transmit those tools to others, and to the problem specialists who find useful places to apply stats.

We can tell that statisticians deserve credit because we can usually identify the particular tests and estimates being used (e.g., “chi-squared test”) in each case, and can trace those back to the teachers who taught them, and the researchers who developed them. New innovations are novel combinations of stat details whose effectiveness depends greatly on those details. We can see the first use cases of each such structure, and then see how a habit of its use spread.

Similar stories apply to many STEM areas, where we can distinguish particular design elements and analysis tools, and trace them back to their teachers and innovators. We can thus credit those innovators with their contributions, and verify that we have in fact seen substantial progress in these areas. We can see many cases where new tools let us improve on the best we could do with old tools.

My second comparison case is the topic area of home arrangement: what things to put in what drawers and rooms in our homes, and what activities to do in what parts of what rooms at what times of the day or week. Our practices in these areas result from copying the choices of our parents, friends, TV shows, and retailers, and also from experimenting with personal variations to see what we like. Over our lifetimes, we each tend to get more satisfied with our choices.

It is less clear, however, how much humanity as a whole improves in this area over time. Oh, we prefer our homes to homes of centuries ago. But this is most clearly because we have bigger, nicer homes, which we fill with more and nicer things than our ancestors had or could afford.

As new items become available, our plans for which things go where, and what we do with them when, have adapted over time. But it isn’t clear that humanity learns much after an early period of adaptation to each new item. Yes, for each choice we make, we can usually offer an argument for why that choice is better, and sometimes we can remember where we heard that argument. But the general set of arguments used in this area doesn’t seem to expand or improve much over time.

It is possible and even plausible that, even so, we are slowly getting better in general at knowing where to put things and what to do when in homes. Even if we don’t learn new general principles, we may be slowly getting better at reducing our case specific errors relative to our constant general principles.

But if so, the value of this progress seems to be modest, compared to our other related sources of progress, such as bigger houses, better items, and more free time to spend on them. And it seems pretty clear that little of the progress that we have seen here is to be credited to researchers specializing in home arrangement or personal activity scheduling. We don’t share much general abstract knowledge about this area, and haven’t added much lately to whatever of that we once had.

We see similar situations in many other areas where there is widespread practice, but few research specialists or teachers of newly researched tools. There might be progress in reducing errors where practice deviates from widely accepted stable principles, but if so that progress seems modest relative to progress due to other factors, such as better technology, increased wealth, and larger populations.

With these two reference cases in mind, STEM tools and home arrangement, let us now consider moral progress. The world seems to many to be getting more moral over time. But that could be because we have been getting richer and safer, which makes morality more affordable to us. Or it could be due to random correlated drift in our practices and standards, combined with our habit of judging past practices by current standards.

However, it also seems possible, at least at first glance, that our world is getting more apparently moral because of improved moral abilities, holding constant our wealth and knowledge about non-moral topics. For example, moral researchers might be acquiring more objective general knowledge about morality, knowledge which morality teachers then spread to the rest of us, who then apply those improved moral tools to particular cases.

In support of this theory, many people point to particular moral arguments when they defend the morality of particular behaviors, and they often point to particular human sources for those arguments. Furthermore, many of those sources are new and canonical, so that a great many people in each era point to the same few sources, sources that are different from those to which prior generations pointed. Does this show progress?

If you look carefully at the specific moral arguments that people cite to support their behavior, it turns out that those arguments look pretty similar to arguments that were known long before. While each new generation’s canonical sources have some unique examples, styles, and argument details, those differences don’t seem to matter much to the practices of the ordinary people who cite them.

This situation seems in sharp contrast to the case of progress in statistics, for example, where the details of each new statistical test or estimate show up clearly and matter greatly to applications of those stats. It seems more consistent with moral arguments being used to justify behavior that would have happened anyway, rather than having moral arguments cause changes in behavior.

Yes, some old moral arguments may well have been forgotten for a time, and thus need to be reinvented by newer sources. For example, while ancient sources plausibly expressed thoughtful critiques of slavery and gender inequality, recent critics of such things may well have not read such ancient sources.

Even so, progress in morality looks to me much more like progress in home arrangement, and much less like progress in STEM. Even though locally new home arrangement choices continually appear, they don’t appear to add up to much overall progress relative to other sources of progress. Similarly, while it is possible that there is some moral progress due to slowly learning to have lower local error rates relative to constant general principles, I think we can pretty clearly reject the STEM-analogue hypothesis that morality researchers invent new detailed morality structures which then diffuse via teachers to greatly change typical practice.

Thus an examination of the details of moral change suggests that little of it can be credited to moral researchers, and only modest amounts to practitioners slowly learning to cut errors relative to stable principles. Thus most apparent progress is plausibly due to our getting richer and safer, or to drift combined with a habit of judging past practices by current standards.


A Portrait of Civil Servants

Our choices of the areas of life where governments will more regulate or directly provide services are some of our most important policy choices. But while on the surface we hear a great many different arguments on these topics, an awful lot of them seem to come down to this claim:

Government agencies can do better than private orgs because (A) they are more accountable to citizens via the voting channel, and (B) their employees more prioritize public welfare, due both to selecting nicer people, and to embedding them in a supportive work culture.

My Caltech Ph.D. in formal political theory prepared me to dispute the (A) part, but I honestly haven’t paid that much attention to the (B) part. Until now. Here is what I’ve just learned from a quick search about how civil servants differ from other workers.

First, I couldn’t quickly find stats on how govt workers differ from others in age, gender, race, or political orientation. (If someone can find those, I’ll edit this to include those here.) But I did find that they are better educated than other workers, and even controlling for that they are paid more. Furthermore, public sector workers had a median 6.5 years tenure, compared to 3.7 years in the private sector.

It’s not crazy to think that having a relatively secure well-paid job for an employer with a noble mission might incline one toward being a better person who makes job decisions more generously, i.e., more for the public good. But if that were true, what would you predict about their relative rates of workplace absenteeism, fraud, bullying, and violent events at work? You’d predict those to be lower, right?

Across nations, government workers have 10% to 84% higher work absenteeism rates; 40% for the U.S. Out of 22 industries, govt workers are #2 in work fraud rates. While govt workers are only 15% of U.S. workers, they were reported to have 24.7% and 26% of fraud cases. And while bullying and violent victimizations happen respectively at rates of 3.7% and 0.47% in private jobs, they happen at rates of 5.6% and 0.87% in public jobs.

This looks pretty damning so far. But what about direct measures of productivity, comparing public and private orgs doing the same task? It seems they do about the same on prisons, and private does better on schools and catching fugitives. In medicine, they do about the same re health and cost, but private seems better on timing and satisfaction. Even private military contractors seem to perform similarly.

Bottom line: I find little support for the idea that we can trust govt agencies more than private orgs due to their having or inspiring more trustworthy employees.


Violent Offense Under Bounties & Vouchers

I recently talked to some smart high school students about the voucher and bounty crime reform scenario. They imagined bounty hunters spending most of their time in chases and gun fights, as in cowboy or Star Wars movies. So they were against the scenario, preferring that such violent roles be filled by government employees.

But in fact bounty hunters today spend almost no time in chases or fights. And that was true throughout history; bounty hunters have been widely used in Rome and England for thousands of years. (I’ll discuss that history more below.) Movies emphasize rare scenarios to create conflict and drama. The main job of most bounty hunters was to collect evidence, and then to sue in a court trial. As lawyers have always done to prepare for and engage in lawsuits.

Okay, you might ask, but in a world of vouchers and bounty hunters, sometimes there would be gun fights or car chases, right? So who would be authorized to participate in such activities, and what powers would they have or need? That is, who would do violence in this scenario?

First, many parties, maybe even everyone, could be allowed to stand ready to defend themselves violently. Okay, you might say, but won’t offensive violence also be needed sometimes? If so, who is authorized to do that?

Well, note that a person found to lack a voucher would need to be assigned one immediately. Perhaps a “public option” voucher who keeps clients temporarily in a detention center. And offensive force might be needed to move such a newly found client to such a detention center.

Actually, this isn’t a special case, as in general vouchers and their representatives would be the main parties authorized to use offensive force. After all, vouchers would often be authorized by their client contracts to physically punish their clients. And if a client seems to be about to hurt others, perhaps via force, their voucher is usually the party with the strongest interest in stopping them. As they have to pay for any resulting damages.

Thus voucher-client contracts will pretty much always authorize the voucher to use offensive force against their client, both to punish them, and to prevent clients from causing harm. And the rest of us don’t need to decide what kinds of force should be allowed there, if those two are the only parties affected by their choice.

However, what if a third party ends up getting hurt when a voucher uses offensive force on their client? In this case, either the voucher or their client is likely guilty of a crime, and the voucher is on the hook either way to pay damages. To avoid these losses, vouchers would likely make deals to help each other in such situations, and have their clients agree to such behavior in their voucher-client contracts. Thus in the general bounty-voucher scenario, most offensive violence would happen between parties who had agreed by contract beforehand on how violence is to be handled.

Vouchers who have made such voucher-voucher deals also seem well-placed to handle people discovered to be without a voucher. Thus a simple solution for this case might be to hold a fast auction to see which nearby voucher is willing to take on this person as a client at the lowest price. This voucher would then have the job of transferring this client to a public option detention center, after which that detention center would become the client’s official voucher. At least until that client could arrange for a new voucher.
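
To make the auction mechanics concrete, here is a minimal sketch in Python. This is my illustration, not part of any formal proposal; it assumes the simplest rule, where the lowest bid just wins, and all names and numbers here are hypothetical.

```python
# Hypothetical sketch: a fast reverse auction that assigns a voucher to a
# person discovered without one. Nearby vouchers bid the price they would
# charge to take this person on as a client; the lowest bid wins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    voucher_id: str  # the voucher org making the offer
    price: float     # what it would charge to take on this client

def assign_voucher(bids: list[Bid]) -> Optional[Bid]:
    """Return the lowest-priced bid, or None if no voucher bid at all.

    With no bids, the person would default to the "public option"
    detention-center voucher described above.
    """
    if not bids:
        return None
    return min(bids, key=lambda b: b.price)

# Example: three nearby vouchers bid; the cheapest one wins the client.
bids = [Bid("Acme Vouching", 950.0),
        Bid("SafeCo", 1200.0),
        Bid("CivicBond", 1010.0)]
print(assign_voucher(bids))  # Bid(voucher_id='Acme Vouching', price=950.0)
```

The winning voucher would then owe the transfer duties described above, until the client arranges a replacement voucher.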

Note that under this voucher-bounty system, as long as everyone has a voucher then there is no need for any other party besides a voucher to forcibly detain anyone, either to ensure that they appear in court or to ensure that they can be punished. As vouchers are fully liable for such failures, such tasks can be delegated to them.

As I said above, fights and chases have not actually been the main complaints about bounty hunters in history. The main complaint in the last few centuries, which led to cuts in their usage, seems to be that bounty hunters were typically for-profit agents, whereas many thought government employees could be better trusted to promote the general welfare.

Here are the other main complaints about bounty hunters that I find in this article on the history of their usage (called “qui tam”) in England. Bounty hunters have at times made false accusations, committed perjury, coerced witnesses, faked evidence, tempted people to commit crimes, threatened jurors who ruled against them, and enforced the letter of laws against the spirit of the law.

Bounty hunters have also at times filed their claims in distant expensive-to-travel-to courts, and detained the accused before delayed trials, and used the threat of such treatment to extort concessions. They have accepted private settlements (i.e., plea bargains and bribes) instead of going to court. And they have accepted payments from guilty folks to do a bad job at trial, when such efforts prevent future trials from being held on the same accusations.

However, the government employee police who replaced bounty hunters have also done all these things. Some assume that such employees will do such things less often than would bounty hunters. But I don’t know of evidence that supports this claim. And remember that government police can much more effectively maintain a “blue wall of silence” that prevents the reporting and prosecution of such things. Whereas bounty hunters will happily turn on each other, just as one can easily hire a lawyer today to sue another lawyer, or a P.I. to investigate another P.I.

Note that we can greatly cut the harm of private settlements by keeping the bounty and fine levels close to each other, as sketched below. And no one besides vouchers needs to detain anyone.
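
To see why a small bounty-fine gap defangs private settlements, here is a minimal sketch, again my own illustration, assuming a sure conviction and ignoring trial costs: a hunter who could win bounty B at trial accepts a settlement S only if S ≥ B, while an accused facing fine F accepts only if S ≤ F, so mutually agreeable settlements exist only in the gap between B and F.

```python
# Hypothetical sketch: the range of private settlements that both sides
# prefer to trial, assuming a sure conviction and ignoring trial costs.
# The hunter accepts S only if S >= bounty (what trial would pay them);
# the accused accepts S only if S <= fine (what trial would cost them).
from typing import Optional

def settlement_range(bounty: float, fine: float) -> Optional[tuple[float, float]]:
    """Return the (low, high) range of viable settlements, or None if empty."""
    if bounty >= fine:
        return None  # no surplus to split, so no incentive to settle privately
    return (bounty, fine)

print(settlement_range(bounty=500.0, fine=1000.0))   # (500.0, 1000.0): wide gap invites settling
print(settlement_range(bounty=1000.0, fine=1000.0))  # None: bounty = fine kills the deal
```

Real cases add uncertain convictions and trial costs, which shift this range but do not change the basic logic: shrinking the bounty-fine gap shrinks the gains from settling privately.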
