Beware Profane Priests

We humans evolved a way to take some of the things that are important to us, and bind our groups together by seeing those things as “sacred”. That is, by seeing them in the same way, via always seeing them from a distance. Such things are seen more abstractly and intuitively, with less conscious calculation, and less attending to details. Sacred things are idealized, and not to be mixed with or traded off against other things. Sacred thinking can be less competent, but induces more effort, and can keep us from being overwhelmed by strong passions.

Let us call the experts associated with a sacred area “priests”. The possibility of priests raises two issues for the sacred. First, if ordinary people saw a sacred area as one where they could personally gain expertise, and where they needed to think to judge the relative expertise of others, this would seem to induce conscious calculation about the details of this sacred topic, which is a no-no. Second, those who are most expert would think a lot about the topic, and often see it up close, which would make it harder for them to see it as sacred.

Humans seem to solve the first issue by treating all sacred topics as being at one of two extremes. At one extreme, e.g., medicine, there are highly expert sacred priests, whom the rest of us are not to second-guess nor evaluate. At the other extreme, e.g., politics or friendship, expertise via thinking is seen as not possible, making everyone’s opinions nearly as good as anyone else’s. In neither case does thinking help ordinary people much, either to form opinions or to choose experts.

On the second issue, experts who only rarely directly confront the most sacred versions of their subject up close, like soldiers, police, or doctors, can drill and practice in a far mode, so that they can perform well intuitively and without much thought in the rare big stakes cases. But what about the other priests, who confront their sacred subjects more often?

When we think about this question in a sacred mode, intuitively and using a few abstract associations, our minds usually conclude that as the sacred is good and ideal, contact with it makes people more good and ideal. Thus we can trust priests to act in our collective interest. But the norm that the rest of us are not to judge such experts, and are to defer strongly to their judgement, gives them a lot of collective discretion. And it seems to me that near mode engagement with the topic means we can’t count much on their reverence for it to restrain them from using their discretion for selfish advantage.

Thus in fact priests will often act profanely, a fact that the rest of us are often unwilling to see. Beware profane priests.

Sacred Inquiry

The reason I first started to study the sacred was that “sacred cows” kept getting in my way; our treating things as sacred often blocks sensible reforms. But now that I have a plausible theory of how and why we treat some things as sacred, I have to admit: I too treat some things as sacred. Maybe I should learn to stop that, but it seems hard. So perhaps we should accept the sacred as a permanent feature of human thought, and instead try to change which things we see as how sacred, or how exactly we do that.

So it seems worth my trying to describe in more detail how I see something as sacred, not just habitually but even after I notice this fact. In this post, that thing will be: intellectual inquiry. Here I’ll mostly try to describe how I revere this, and not so much ask whether I should.

All the thinking and talking that happens in the world helps us to do many things, and to figure out many things. And while some of those things are pretty concrete and context-dependent, others are less so, helping us to learn more general stuff whose usefulness plausibly extends further into the future. And this I call “intellectual progress”.

In general, all of the thinking and talking that we do contributes to this progress, even though it is done for a wide variety of motives, and via many different forms of social organization. I should welcome and celebrate it all. And while abstractly, I do, I notice that, emotionally, I don’t.

It seems that I instead deeply want to distinguish and revere a particular more sacred sort of thinking and talking from the rest. And instead of assuming that my favored type is just very rare, hardly of interest to anyone but me, I instead presume that a great many of us are trying to produce my favored type, even if most fail at it. Which can let me presume that most must know how to do better, and thus justify my indignant stance toward those who fail to meet my standards.

This sort of thinking and talking that I revere is that which actually achieves substantial and valuable progress in abstract understanding, and is done in a way to effectively and primarily achieve this goal. Thus I see as “profane” work that appears to be greatly influenced by other purposes, such as showing off one’s impressiveness, or assuring associates of loyalty.

That is, I have a sacred purity norm, where I don’t like my pure sacred stuff mixed up with other stuff. Good stuff not only has to achieve good outcomes, it also has to be done the right way for the right reasons. I tend to simplify this category and its boundary, and presume that it can be distinguished clearly. I feel bound to others who share my norms, even if I can’t actually name any of them. I don’t calculate most of this; it instead comes intuitively, and seems aesthetically elegant. And I can’t recall ever choosing all this; it feels like I was always this way.

Now on reflection this has a lot of specific implications re what I find more sacred or profane, as I have a lot of beliefs about which intellectual topics are more valuable, and what are more effective methods. And I’ll get to those soon here.

But first let me note that while many intellectuals also see their professional realm as sacred, and have many similar sacred norms about how their work should be done, most of them don’t apply such norms nearly as strongly to their personal lives. In contrast, I extend this to all my thinking and talking. That is, while I’m okay with engaging in many kinds of thinking and talking, I want to sharply distinguish some sacred versions, where all these sacred norms apply, and try to actually use them often in my personal life.

Okay, I can think of a lot of specific implications this has for what I respect and criticize. The following is a somewhat random list of what occurs to me at the moment.

For example, I take academic papers to be implicitly claiming to promote intellectual progress. This implies that they should try to be widely available for others to critique and build on. So I dislike papers that are less available, or that use needlessly difficult language or styles. Or that aren’t as forthcoming or concise as they could be re what theses they argue, to allow readers to judge interest on that basis. I dislike intentional use of vague terms when clearer terms were available, and switching between word meanings to elude criticism.

I feel that a paper which cites another is claiming that it got some particular key input from that other paper, and a paper that cites nothing is claiming to have not needed such inputs. So I disapprove of papers that fail to cite key inputs, or that substitute a more prestigious source for the less prestigious source from which they actually got their input.

I see a paper on a topic as implicitly claiming that the topic is some rough approximation to the best topic they could have chosen, and a paper using a method as claiming that the method is some rough approximation to the best method. So it bothers me when it seems obvious the topic isn’t so good, or when the method seems poorly chosen. I’m also bothered when the length of some writing seems poorly matched to the thesis presented. For example, if a thesis could have been adequately argued in a paper, then I’m bothered if it’s in a book with lots of tangential stuff added on to fill out the space.

I find it profane when authors seem to be pushing an agenda via selective choice of arguments, evidence, or terminology. They should acknowledge weak points and rebuttals of which they are aware without making readers or critics find them. I dislike when authors form mutual admiration societies designed to praise each other regardless of the quality of particular items. That is, I find the embrace of bias profane. Which maybe shouldn’t be too surprising given my blog name.

Now I have to admit that it isn’t clear how effective these stances are at promoting this sacred goal of mine. While they might happen to help, it seems more plausible that they result more from a habit of treating this area as sacred, rather than from some careful calculation of their effects on intellectual progress. So it remains for me to reconsider my sacred stances in light of this criticism.

Evaluating “Feminism”

My close friend and colleague Bryan Caplan has a new book, “Don’t Be a Feminist”. In general, I’m reluctant to embrace or oppose vague political slogan terms like “feminist”, preferring instead to stick to terms that are better defined. But I accept that his definition isn’t greatly wide of how I’ve seen the term usually used:

Feminism is the view that society generally treats men more fairly than women.

His summary assessment on fairness is:

What then is the big picture? The fairness of the treatment that men and women receive in our society is remarkably equal. And if there is a disparity, it is probably in women’s favor. This is especially true if we ponder one last gender gap: Men endure far more false accusations of unfairness than women do.

Caplan’s essay seems to reasonably summarize what we know about the ways in which men and women are favored or not, and I agree that over all things look roughly equal. I’m more skeptical that including false accusations against men changes this overall assessment; I’d say we still don’t know which side is favored more overall. And given how close things seem, I find it hard to care much about the overall sign.

Here’s another key Caplan claim:

Feminism is so rhetorically dominant that critics fear opening their mouths. … Most intellectual movements make an effort to distinguish wrong-doers from bystanders. … Feminist thinkers, in contrast, routinely and self-righteously do otherwise. … Most self-identified feminists are probably just regular people … Unfortunately, most vocal feminists are fanatics – and rank-and-file feminists tend to defer to them.

Here I also mostly agree, and can in fact attest via personal experience. Most of my “cancellation” (which has substantially harmed my career) has been due to people who saw themselves as feminists aggressively misinterpreting a few neutral things I said as anti-feminist, and most observers going along with that move. A great many have disagreed with me over the years, but few others have treated me this way.

Caplan didn’t directly address what I see as the most common “feminist” issue raised: is it okay to have, and act on, gender-conditional expectations about behavior? Seems to me that this is okay when such expectations are based primarily on observed behaviors. This implies that it can be okay to have gender roles, if these result from gendered expectations.

Yes, one should be open to the possibility of seeing outlier cases, of behaviors changing with time or context, and that gender-behavior correlations might result from gendered expectations. That is, we should look out for ways to change our matching sets of behaviors and expectations. Which is to say, we should look for ways to switch to superior game theory equilibria.
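To make that last idea concrete, here is a minimal sketch of a two-player coordination game, with payoff numbers that are purely illustrative assumptions of mine (not from Caplan or this post). Both “everyone keeps the old convention” and “everyone adopts the new convention” are equilibria, but one pays both players more; the catch is that no one gains by switching alone.

```python
# Minimal coordination-game sketch; payoff numbers are illustrative assumptions only.
import itertools

strategies = ["old", "new"]  # each player follows the old or the new convention

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("old", "old"): (2, 2),   # matched on the old convention
    ("new", "new"): (3, 3),   # matched on the new convention: better for both
    ("old", "new"): (0, 0),   # mismatched conventions are costly
    ("new", "old"): (0, 0),
}

def is_nash(row, col):
    """A profile is a pure Nash equilibrium if neither player gains by deviating alone."""
    r_pay, c_pay = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= r_pay for r in strategies)
    col_ok = all(payoffs[(row, c)][1] <= c_pay for c in strategies)
    return row_ok and col_ok

for profile in itertools.product(strategies, strategies):
    if is_nash(*profile):
        print(profile, payoffs[profile])
# Prints both ('old', 'old') and ('new', 'new'): the superior equilibrium exists,
# but reaching it requires changing expectations together, not one person at a time.
```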

But that needn’t require us denying observed facts about behaviors in the equilibria that we have seen so far. Bryan is roughly right, both on the overall balance of gender unfairness, and on feminist rhetorical aggression.

Hail Industrial Organization

(ngrams)

Economists know many useful things about human social behavior, and about how to improve it. And the world would probably be better off if it listened to economists more. But while the world respects economists enough to mention when their analyses support favored policies, people are much less interested in deciding what to favor based on econ analyses. What could get people to listen more?

There are many relevant factors, but a big one where we might do better is: a track record for being useful. For example, the world listens to chemists, computer scientists, and engineers in part because of their widely-known reputations for having long track records of being directly and simply useful to diverse clients.

Yes, econ majors in college are among the best paid outside of computers and engineering. But that may only show that learning our methods is an impressive feat, not that we produce reliable results. And the fact that people like to point to our analyses to support their policies only shows that we have prestige, not that we are right. What we want is a track record of being, not just impressive, but directly and clearly right, and useful because of that.

Now it turns out that we economists have actually found a way to be frequently and directly useful to diverse clients, and via being right, not just impressive. But we’ve failed to claim sufficient credit for this, and now we seem to be dropping the ball in pursuing it. This place is: business strategy.

When a firm considers what products or services to make, what customers to seek and how, and what prices to charge, it can help to have a theory of that firm’s industry. A theory of its customer demands and producer costs. A theory that says who wants what, who can take what actions when, who knows what when doing what, and how each actor tends to respond to their expectations re other actions. With such a theory, one can predict which actions might be how profitable, and choose accordingly.

Firms today regularly debate key business choices, and hire management consultants to advise those decisions. In addition, new firms pitch their plans to investors, and frequently revise such plans. And while all these choices might seem to be done without theories, that is an illusion. In fact, all such analyses are based on at least implicit theories of how local industries work. Such theories might be simple, or wrong, but they are there.

Now many aspects of useful industry theories are quite context dependent. But other aspects are more general. There are in fact many common patterns in key industry features, and in the ways that industries compete. And in the last century, the world has made great progress in developing better general theories of how firms compete in industries. Furthermore, economics has been central to that story.

In particular, game theory has become a robust general account of how social decisions are made. And we’ve identified dozens of key factors that influence industrial competition. Key ways in which industries differ, that result in different styles of competition. And we’ve worked out a great many specific models of how small sets of these factors work together to create distinctive patterns of industry competition. And much, perhaps even most, of this has happened within the econ field of “industrial organization.”
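As one tiny example of the kind of model this field supplies, here is a sketch of the classic Cournot duopoly, where two firms each pick a quantity given the other’s choice; the demand and cost numbers are invented for illustration and are not from this post.

```python
# Cournot duopoly sketch; demand and cost parameters are illustrative assumptions only.
# Inverse demand: price = a - b * (q1 + q2); each firm has constant marginal cost c.
a, b, c = 100.0, 1.0, 20.0

def best_response(q_other):
    """Profit-maximizing quantity given the rival's quantity,
    from maximizing (a - b*(q + q_other) - c) * q over q."""
    return max((a - c - b * q_other) / (2 * b), 0.0)

# Iterate simultaneous best responses until quantities settle at the Nash equilibrium.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

price = a - b * (q1 + q2)
profit = (price - c) * q1
print(f"q1={q1:.2f} q2={q2:.2f} price={price:.2f} profit per firm={profit:.2f}")
# Each firm ends up near (a - c) / (3b) ≈ 26.67, with a price between the monopoly
# and competitive levels; changing costs, demand, or the number of firms shifts the
# predicted quantities and profits in ways such models make explicit.
```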

Today, most who discuss business strategy do so using concepts and distinctions that are well integrated into this rich well-developed and useful econ account of how firms in industries compete. And firms are in fact constantly reconsidering their business strategies using such concepts. So we economists have in fact developed powerful tools that are very useful, and are widely being used.

But, alas, we economists are failing to take credit for it. We don’t teach courses in business strategy, and we don’t recommend students who take our industrial organization courses for such roles. We’ve instead allowed business schools to do that teaching, and to take that credit. And even to take most of the consulting gigs.

Furthermore, academic economists have drifted away from industrial organization; it is no longer in fashion. It mostly uses old-fashioned game theory, instead of now popular behaviorism or machine learning. It isn’t well suited for controlled experiments, which are so much the fashion in econ these days that all other kinds of data are considered unclean. And it doesn’t give many chances to promote woke agendas. So few people publish in industrial organization, and few students take classes in it. I know, as I still teach it, but to few students, and nearby universities don’t even offer it.

As usual, academic research priorities are mostly set by internal coalition politics, not by what would be good for the world as a whole, or even each field as a whole.

Sacred Distance Hides Motives

My book with Kevin Simler describes many hidden human motives, common in our everyday lives. But that raises the question: how exactly can we humans hide our motives from ourselves?

Consider that we humans are constantly watching and testing our and others’ words and deeds for inconsistency, incoherence, and hypocrisy. As our rivals are eager to point out such flaws, we each try to adjust our words and deeds to cut and smooth the flaws we notice. Furthermore, we habitually adjust our words and deeds to match those of our associates, to make remaining flaws be shared flaws. After a lifetime of such smoothing, how could much personal incoherence remain?

One way to keep motives hidden is to hide your most questionable actions, those where you feel you least control or understand them. If you can’t hide such actions, then try not to make strong claims about related motives. And I think we do follow this strategy for our strongest feelings, such as lust, envy, or social anxiety. We often try to hide such feelings even from ourselves, and when we do notice them we often fall silent; we fear to speak on them.

How easy it is to check deeds and words for coherence depends in part on how dense and clear are their connections. And as all deeds are concrete, and as concrete words tend to be clearer and more densely connected, it seems easier to check concrete priorities, relative to abstract ones.

For example, it is easiest to check the motives that lead to our conscious, often written, calculations of detailed plans. Our time schedules, spatial routes and layouts, and our spending habits are often visible and full of details that make it hard to hide priorities. For example, if you go out of your way to drive past the home of your ex  on your way home from work, it will be hard to pretend you don’t care about her.

We have more room to maneuver, however, regarding our more hidden and infrequent concrete choices. And when we are all in denial in similar ways on similar topics, then we can all be reluctant to “throw stones” at our shared “glass houses”. This seems to apply to our hidden motives re schools and medicine, for example; we apparently all want to pretend together that school is for learning job skills and hospitals are for raising health.

Compared to our concrete priorities, our abstract priority expressions (e.g., “family is everything”) are less precise, and so are harder to check against each other for consistency. And abstract expressions can be even harder to check against concrete actions; large datasets of deeds may be required to check for such coherence.

We ground most abstract concepts, like “fire”, “sky”, “kid” or “sleep”, by reference to concrete examples with which we have had direct experience. So when we are confused about their usage, we can turn to those examples to get clear. But we ground other more “sacred” abstract concepts, like “love”, “justice”, or “capitalism”, more by reference to other abstract concepts. These are more like “floating abstractions.” And this habit makes it even harder to check our uses of such sacred concepts for coherence.

This potential of abstract concepts to allow more evasion of coherence checking is greatly enhanced by the fact that our brains have two rather different systems for thinking. First, our “near” system is designed to look at important-to-us close-up things, by attending to their details. This system is better integrated with our conscious thoughts. For example, we often first do a kind of calculation slowly and consciously, and then later by habit we learn to do such calculations unconsciously. This integration supports coherence checking, as we can respond to explicit challenges by temporarily returning to conscious calculation, to find explanations for our choices.

Our “far” system, in contrast, is designed to look at less-important-to-us far-away things, about which we usually know only a few more abstract descriptors. This system uses many opaque quick and dirty heuristics, including intuitive emotional and aesthetic associations, crude correlations, naive trust, and social approval. If someone else is using this system in their head to think about a topic, and then you use this system in your head to try to check their thinking, you will have a hard time judging much more than whether your system gives the same answers as theirs. If you get different answers, it will be hard to say exactly why.

As our minds tend to invoke our far systems for thinking about more abstract topics, that makes it even harder to check abstract thoughts for coherence. But, you might respond, if that system is designed for dealing with relatively unimportant things, won’t the other near system get invoked for important topics, limiting this problem of being harder to check coherence to unimportant topics?

Alas, no, due to the sacred. Our sacred things are our especially important things, described via floating abstractions, where our norm is to think about them only using our far systems. We are not to calculate them, consider their details, or mix them with or trade them off against other things. Our intuitions there are sacred, and beyond question.

This makes it hard to check the coherence of related deeds and words. The main thing we can do there is to intuit our own answer and compare it to others’ answers. If we get the same answers, that confirms that they share our sense of the sacred, and are from our in-group. If not, we can conclude they are from an out-group, and thus suspect; they didn’t learn the “right” sense of the sacred.

And that’s some of the ways that our minds tend to hide our motives, even given the widespread practice of trying to expose incoherence in rivals’ words and deeds. Floating abstractions help, and the sacred helps even more. And maybe we go further and coordinate to punish those who try to expose our sacred hypocrisies.

Note that I’m not claiming that all these habits and structures were designed primarily for this effect of making it harder to check our words and deeds for coherence. I’m mainly pointing out that they have this effect.

We See The Sacred From Afar, To See It Together

I’ve recently been trying to make sense of our concept of the “sacred”, by puzzling over its many correlates. And I think I’ve found a way to make more sense of it in terms of near-far (or “construal level”) theory, a framework that I’ve discussed here many times before.

When we look at a scene full of objects, a few of those objects are big and close up, while a lot more are small and far away. And the core idea of near-far is that it makes sense to put more mental energy into analyzing each object up close, objects that matter to us more, by paying more attention to their detail, detail often not available about stuff far away. And our brains do seem to be organized around this analysis principle.

That is, we do tend to think less, and think more abstractly, about things far from us in time, distance, social connection, or hypothetically. Furthermore, the more abstractly we think about something, the more distant we tend to assume are its many aspects. In fact, the more distant something is in any way, the more distant we tend to assume it is in other ways.

This all applies not just to dates, colors, sounds, shapes, sizes, and categories, but also to the goals and priorities we use to evaluate our plans and actions. We pay more attention to detailed complexities and feasibility constraints regarding actions that are closer to us, but for far away plans we are content to think about them more simply and abstractly, in terms of relatively general values and principles that depend less on context. And when we think about plans more abstractly, we tend to assume that those actions are further away and matter less to us.

Now consider some other ways in which it might make sense to simplify our evaluation of plans and actions where we care less. We might, for example, just follow our intuitions, instead of consciously analyzing our choices. Or we might just accept expert advice about what to do, and care little about experts incentives. If there are several relevant abstract considerations, we might assume they do not conflict, or just pick one of them, instead of trying to weigh multiple considerations against each other. We might simplify an abstract consideration from many parameters down to one factor, down to a few discrete options, or even all the way down to a simple binary split.

It turns out that all of these analysis styles are characteristic of the sacred! We are not supposed to calculate the sacred, but just follow our feelings. We are to trust priests of the sacred more. Sacred things are presumed to not conflict with each other, and we are not to trade them off against other things. Sacred things are idealized in our minds, by simplifying them and neglecting their defects. And we often have sharp binary categories for sacred things; things are either sacred or not, and sacred things are not to be mixed with the non-sacred.

All of which leads me to suggest a theory of the sacred: when a group is united by valuing something highly, they value it in a style that is very abstract, having the features usually appropriate for quickly evaluating things relatively unimportant and far away. Even though this group in fact tries to value this sacred thing highly. Of course, depending on what they try to value, such attempts may have only limited success.

For example, my society (US) tries to value medicine sacredly. So ordinary people are reluctant to consciously analyze or question medical advice; they are instead to just trust its priests, namely doctors, without looking at doctor incentives or track records. Instead of thinking in terms of multiple dimensions of health, we boil it all down to a single health dimension, or even a binary of dead or alive.

Instead of seeing a continuum of cost-effectiveness of medical treatments, along which the rich would naturally go further, we want a binary of good vs bad treatments, where everyone should get the good ones no matter what their cost, and regardless of any other factors besides a diagnosis. We are not to make trades of non-sacred things for medicine, and we can’t quite believe it is ever necessary to trade medicine against other sacred things. Furthermore, we want there to be a sharp distinction between what is medicine and what is not medicine, and so we struggle to classify things like mental therapy or fresh food.

Okay, but if we see sacred things as especially important to us, why ever would we want to analyze them using styles that we usually apply to things that are far away and the least important to us? Well one theory might be that our brains find it hard to code each value in multiple ways, and so typically code our most important values as more abstracted ones, as we tend to apply them most often from a distance.

Maybe, but let me suggest another theory. When a group unites itself by sharing a key “sacred” value, then its members are especially eager to show each other that they value sacred things in the same way. However, when group members hear about and observe how an associate makes key sacred choices, they will naturally evaluate those choices from a distance. So each group member also wants to look at their own choices from afar, in order to see them in the same way that others will see them.

In this view, it is the fact that groups tend to be united by sacred values that is key to explaining why they treat such values in the style usually appropriate for relatively unimportant things seen from far away, even though they actually want to value those things highly. Even though such a from-a-distance treatment will probably lead to a great many errors and misjudgments when actually trying to promote that thing.

You see, it may be more important to groups to pursue a sacred value together than to pursue it effectively. Such as the way the US spends 18% of GDP on medicine, as a costly signal of how sacred medicine is to us, even though the marginal health benefit of our medical spending seems to be near zero. And we show little interest in better institutions that could make such spending far more cost effective.

Because at least this way we all see each other’s ineffective medical choices in the same way. We agree on what to do. And after all, that’s the important thing about medicine, not whether we live or die.

Added 10Sep: Other dual process theories of brains give similar predictions.

Bizarre Accusations

Imagine that you planned a long hike through a remote area, and suggested that it might help to have an experienced hunter-gather along as a guide. Should listeners presume that you intend to imprison and enslave such guides to serve you? Or is it more plausible that you propose to hire such people as guides?

To me, hiring seems the obvious interpretation. But, to accuse me of advancing a racist slavery agenda, Audra Mitchell and Aadita Chaudhury make the opposite interpretation in their 2020 International Relations article “Worlding beyond ‘the’ ‘end’ of ‘the world’: white apocalyptic visions and BIPOC futurisms”.

In a chapter “Catastrophe, Social Collapse, and Human Extinction” in the 2008 book Global Catastrophic Risks I suggested that we might protect against human extinction by populating underground refuges with people skilled at surviving in a world without civilization:

A very small human population would mostly have to retrace the growth path of our human ancestors; one hundred people cannot support an industrial society today, and perhaps not even a farming society. They might have to start with hunting and gathering, until they could reach a scale where simple farming was feasible. And only when their farming population was large and dense enough could they consider returning to industry.

So it might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right. Perhaps such people could be rotated periodically from a well protected region where they practiced simple lifestyles, so they could keep their skills fresh. And perhaps we should test our refuge concepts, isolating real people near them for long periods to see how well particular sorts of refuges actually perform at returning their inhabitants to a simple sustainable lifestyle.

On this basis, Mitchell and Chaudhury call me a “white futurist” and “American settler economist” seeking to preserve existing Euro-centric power structures:

Indeed, many contributors to ‘end of the world’ discourses offer strategies for the reconstruction and ‘improvement’ of existing power structures after a global catastrophe. For example, American settler economist Robin Hanson calculates that if 100 humans survived a global catastrophic disaster that killed all others, they could eventually move back through the ‘stages’ of ‘human’ development, returning to the ‘hunter-gatherer stage’ within 20,000 years and then ‘progressing’ from there to a condition equivalent to contemporary society (defined in Euro-centric terms). …

some white futurists express concerns about the ‘de-volution’ of ‘humanity’ from its perceived pinnacle in Euro-centric societies. For example, American settler economist Hanson describes the emergence of ‘humanity’ in terms of four ‘progressions’

And solely on the basis of my book chapter quote above, Mitchell and Chaudhury bizarrely claim that I “quite literally” suggest imprisoning and enslaving people of color “to enable the future re-generation of whiteness”:

To achieve such ideal futures, many writers in the ‘end of the world’ genre treat [black, indigenous, people of color] as instruments or objects of sacrifice. In a stunning display of white possessive logic, Hanson suggests that, in the face of global crisis, it

‘might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course, such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right.

In this imaginary, Hanson quite literally suggests the (re-/continuing)imprisonment, (re-/continuing)enslavement and biopolitical (re-/continuing) instrumentalization of living BIPOC in order to enable the future re-generation of whiteness. This echoes the dystopian nightmare world described in …

And this in an academic journal article that supposedly passed peer review! (I was not one of the “peers” consulted.)

To be very clear, I proposed to hire skilled foragers and subsistence farmers to serve in such roles, compensating them as needed to gain their consent. I didn’t much care about their race, nor about the race of the world that would result from their repopulating the world. And presumably someone with substantial racial motivations would in fact care more about that last part; how exactly does repopulating the world with people of color promote “whiteness”?

MacAskill on Value Lock-In

Will MacAskill has a new book out today, What We Owe The Future, most of which I agree with, even if that doesn’t exactly break new ground. Yes, the future might be very big, and that matters a lot, so we should be willing to do a lot to prevent extinction, collapse, or stagnation. I hope his book induces more careful future analysis, such as I tried in Age of Em. (FYI, MacAskill suggested that book’s title to me.) I also endorse his call for more policy and institutional experimentation. But, as is common in book reviews, I now focus on where I disagree.

Aside from the future being important, MacAskill’s main concern in his book is “value lock-in”, by which he means a future point in time when the values that control actions stop changing. But he actually mixes up two very different processes by which this result might arise. First, an immortal power with stable values might “take over the world”, and prevent deviations from its dictates. Second, in a stable universe decentralized competition between evolving entities might pick out some most “fit” values to be most common.

MacAskill’s most dramatic predictions are about this first “take over” process. He claims that the next century or so is the most important time in all of human history:

We hold the entire future in our hands. … By choosing wisely, we can be pivotal in putting humanity on the right course. … The values that humanity adopts in the next few centuries might shape the entire trajectory of the future. … Whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, exploratory or rigid, might well be determined by what happens this century.

His reason: we will soon create AGI, or ems, who, being immortal, have forever stable values. Some org will likely use AGI to “take over the world”, and freeze in their values forever:

Advanced artificial intelligence could enable those in power to lock in their values indefinitely. … Since [AGI] software can be copied with high fidelity, an AGI can survive changes in the hardware instantiating it. AGI agents are potentially immortal. These two features of AGI – potentially rapid technological progress and in-principle immortality – combine to make value lock-in a real possibility. …

Using AGI, there are a number of ways that people could extend their values much farther into the future than ever before. First, people may be able to create AGI agents with goals closely aligned with their own which would act on their behalf. … [Second,] the goals of an AGI could be hard-coded: someone could carefully specify what future they want to see and ensure that the AGI aims to achieve it. … Third, people could potentially “upload”. …

International organizations or private actors may be able to leverage AGI to attain a level of power not seen since the days of the East India Company, which in effect ruled large areas of India. …

A single set of values could emerge. … The ruling ideology could in principle persist as long as civilization does. AGI systems could replicate themselves as many times as they wanted, just as easily as we can replicate software today. They would be immortal, freed from the biological process of aging, able to create back-ups of themselves and copy themselves onto new machines. … And there would no longer be competing value systems that could dislodge the status quo. …

Bostrom’s book Superintelligence. The scenario most closely associated with that book is one in which a single AI agent … quickly developing abilities far greater than the abilities of all of humanity combined. … It would therefore be incentivized to take over the world. … Recent work has looked at a broader range of scenarios. The move from subhuman intelligence to superintelligence need not be ultrafast or discontinuous to pose a risk. And it need not be a single AI that takes over; it could be many. …

Values could become even more persistent in the future if a single value system were to become globally dominant. If so, then the absence of conflict and competition would remove one reason for change in values over time. Conquest is the most dramatic pathway … and it may well be the most likely.

Now mere immortality seems far from sufficient to create either value stability or a takeover. On takeover, immortality is insufficient. Not only is a decentralized world of competing immortals easy to imagine, but in fact until recently individual bacteria, who very much compete, were thought to be immortal.

On values, immortality also seems far from sufficient to induce stable values. Human organizations like firms, clubs, cities, and nations seem to be roughly immortal, and yet their values often greatly change. Individual humans change their values over their lifetimes. Computer software is immortal, and yet its values often change, and it consistently rots. Yes, as I mentioned in my last post, some imagine that AGIs have a special value modularity that can ensure value stability. But we have many good reasons to doubt that scenario.

Thus MacAskill must be positing that a power who somehow manages to maintain stable values takes over and imposes its will everywhere forever. Yet the only scenario he points to that seems remotely up to this task is Bostrom’s foom scenario. MacAskill claims that other scenarios are also relevant, but doesn’t even try to show how they could produce this result. For reasons I’ve given many times before, I’m skeptical of foom-like scenarios.

Furthermore, let me note that even if one power came to dominate Earth’s civilization for a very long time, it would still have to face competition from other grabby aliens in roughly a billion years. If so, forever just isn’t at issue here.

While MacAskill doesn’t endorse any regulations to deal with this stable-AGI-takes-over scenario, he does endorse regulations to deal with the other path to value stability: evolution. He wants civilization to create enough of a central power that it could stop change for a while, and also limit competition between values.

The theory of cultural evolution explains why many moral changes are contingent. … the predominant culture tends to entrench itself. … results in a world increasingly dominated by cultures with traits that encourage and enable entrenchment and thus persistence. …

If we don’t design our institutions to govern this transition well – preserving a plurality of values and the possibility of desirable moral progress. …

A second way for a culture to become more powerful is immigration [into it]. … A third way in which a cultural trait can gain influence is if it gives one group greater ability to survive or thrive in a novel environment. … A final way in which one culture can outcompete another is via population growth. … If the world converged on a single value system, there would be much less pressure on those values to change over time.

We should try to ensure that we have made as much moral progress as possible before any point of lock-in. … As an ideal, we could aim for what we could call the long reflection: a stable state of the world in which we are safe from calamity and can reflect on and debate the nature of the good life, working out what the most flourishing society would be. … It would therefore be worth spending many centuries to ensure that we’ve really figured things out before taking irreversible actions like locking in values or spreading across the stars. …

We would need to keep our options open as much as possible … a reason to prevent smaller-scale lock-ins … would favor political experimentation – increasing cultural and political diversity, if possible. …

That one society has greater fertility than another or exhibits faster economic growth does not imply that society is morally superior. In contrast, the most important mechanisms for improving our moral views are reason, reflection, and empathy, and the persuasion of others based on those mechanisms. … Certain forms of free speech would therefore be crucial to enable better ideas to spread. …

International norms or laws preventing any single country from becoming too populous, just as anti-trust regulation prevents any single company from dominating a market. … The lock-in paradox. We need to lock-in some institutions and ideas in order to prevent a more thorough-going lock-in of values. … If we wish to avoid the lock-in of bad moral views, an entirely laissez-faire approach would not be possible; over time, the forces of cultural evolution would dictate how the future goes, and the ideologies that lead to the greatest military power and that try to eliminate their competition would suppress all others.

I’ve recently described my doubts that expert deliberation has been a large force in value change so far. So I’m skeptical that will be a large force in the future. And the central powers (or global mobs) sufficient to promote a long reflection, or to limit nations competing, seem to risk creating value stability via the central dominance path discussed above. MacAskill doesn’t even consider this kind of risk from his favored regulations.

While competition may produce a value convergence in the long run, my guess is that convergence will happen a lot faster if we empower central orgs or mobs to regulate competition. I think that a great many folks prefer that latter scenario because they believe we know what are the best values, and fear that those values would not win an evolutionary competition. So they want to lock in current values via regs to limit competition and value change.

To his credit, MacAskill is less confident that currently popular values are in fact the best values. And his favored solution of more deliberation probably wouldn’t hurt. I just don’t think he realizes just how dangerous are central powers able to regulate to promote deliberation and limit competition. And he seems way too confident about the chance of anything like foom soon.

AGI Is Sacred

Sacred things are especially valuable, sharply distinguished, and idealized as having less decay, messiness, inhomogeneities, or internal conflicts. We are not to mix the sacred (S) with the non-sacred (NS), nor to trade S for NS. Thus S should not have clear measures or money prices, and we shouldn’t enforce rules that promote NS at S expense.

We are to desire S “for itself”, understand S intuitively not cognitively, and not choose S based on explicit calculation or analysis. We didn’t make S; S made us. We are to trust “priests” of S, give them more self-rule and job tenure, and their differences from us don’t count as “inequality”. Objects, spaces, and times can become S by association. (More)

When we treat something as sacred, we acquire the predictably extreme related expectations and values characteristic of our concept of “sacred”. This biases us in the usual case where such extremes are unreasonable. (To minimize such biases, try math as sacred.)

For example, most ancient societies had a great many gods, with widely varying abilities, features, and inclinations. And different societies had different gods. But while the ancients treated these gods as pretty sacred, Christians (and Jews) upped the ante. They “knew” from their God’s recorded actions that he was pretty long-lasting, powerful, and benevolent. But they moved way beyond those “facts” to draw more extreme, and thus more sacred, conclusions about their God.

For example, Christians came to focus on a single uniquely perfect God: eternal, all-powerful, all-good, omnipresent, all-knowing (even re the future), all-wise, never-changing, without origin, self-sufficient, spirit-not-matter, never lies nor betrays trust, and perfectly loving, beautiful, gracious, kind, and pretty much any other good feature you can name. The direction, if not always the magnitude, of these changes is well predicted by our sacredness concept.

It seems to me that we’ve seen a similar process recently regarding artificial intelligence. I recall that, decades ago, the idea that we could make artificial devices who could do many of the kinds of tasks that humans do, even if not quite as well, was pretty sacred. It inspired much reverence, and respect for its priests. But just as Christians upped the ante regarding God, many recently have upped the AI ante, focusing on an even more sacred variation on AI, namely AGI: artificial general intelligence.

The default AI scenario, the one that most straightforwardly projected past trends into the future, would go as follows. Many kinds of AI systems would specialize in many different tasks, each built and managed by different orgs. There’d also be a great many AI systems of each type, controlled by competing organizations, of roughly comparable cost-effectiveness.

Overall, the abilities of these AI would improve at roughly steady rates, with rate variations similar to what we’ve seen over the last seventy years. Individual AI systems would be introduced, rise in influence for a time, and then decline in influence, as they rotted and became obsolete relative to rivals. AI systems wouldn’t work equally well with all other systems, but would instead have varying degrees of compatibility and integration.

The fraction of GDP paid for such systems would increase over time, and this would likely lead to econ growth rate increases, perhaps very large ones. Eventually many AI systems would reach human level on many tasks, but then continue to improve. Different kinds of system abilities would reach human level at different times. Even after this point, most all AI activity would be doing relatively narrow tasks.

The upped-ante version of AI, namely AGI, instead changes this scenario in the direction of making it more sacred. Compared to AI, AGI is idealized, sharply distinguished from other AI, and associated with extreme values. For example:

1) Few discussions of AGI distinguish different types of them. Instead, there is usually just one unspecialized type of AGI, assumed to be at least as good as humans at absolutely everything.

2) AGI is not a name (like “economy” or “nation”) for a diverse collection of tools run by different orgs, tools which can all in principle be combined, but not always easily. An AGI is instead seen as a highly integrated system, fully and flexibly able to apply any subset of its tools to any problem, without substantial barriers such as ownership conflicts, different representations, or incompatible standards.

3) An AGI is usually seen as a consistent and coherent ideal decision agent. For example, its beliefs are assumed all consistent with each other, fully updated on all its available info, and its actions are all part of a single coherent long-term plan. Humans greatly deviate from this ideal.

4) Unlike most human organizations, and many individual humans, AGIs are assumed to have no internal conflicts, where different parts work at cross purposes, struggling for control over the whole. Instead, AGIs can last forever maintaining completely reliable internal discipline.

5) Today virtually all known large software systems rot. That is, as they are changed to add features and adapt to outside changes, they gradually become harder to usefully modify, and are eventually discarded and replaced by new systems built from scratch. But an AGI is assumed to suffer no such rot. It can instead remain effective forever.

6) AGIs can change themselves internally without limit, and have sufficiently strong self-understanding to apply this ability usefully to all of their parts. This ability does not suffer from rot. Humans and human orgs are nothing like this.

7) AGIs are usually assumed to have a strong and sharp separation between a core “values” module and all their other parts. It is assumed that value tendencies are not in any way encoded into the other many complex and opaque modules of an AGI system. The values module can be made frozen and unchanging at no cost to performance, even in the long run, and in this way an AGI’s values can stay constant forever.

8) AGIs are often assumed to be very skilled, even perfect, at cooperating with each other. Some say that is because they can show each other their read-only values modules. In this case, AGI value modules are assumed to be small, simple, and standardized enough to be read and understood by other AGIs.

9) Many analyses assume there is only one AGI in existence, with all other humans and artificial systems at the time being vastly inferior. In fact this AGI is sometimes said to be more capable than the entire rest of the world put together. Some justify this by saying multiple AGIs cooperate so well as to be in effect a single AGI.

10) AGIs are often assumed to have unlimited powers of persuasion. They can convince humans, other AIs, and organizations of pretty much any claim, even claims that would seem to be strongly contrary to their interests, and even if those entities are initially quite wary and skeptical of the AGI, and have AI advisors.

11) AGIs are often assumed to have unlimited powers of deception. They could pretend to have one set of values but really have a completely different set of values, and completely fool the humans and orgs that developed them ever since they grew up from a “baby” AI. Even when those had AI advisors. This super power of deception apparently applies only to humans and their organizations, but not to other AGIs.

12) Many analyses assume a “foom” scenario wherein this single AGI in existence evolves very quickly, suddenly, and with little warning out of far less advanced AIs who were evolving far more slowly. This evolution is so fast as to prevent the use of trial and error to find and fix its problematic aspects.

13) The possible sudden appearance, in the not-near future, of such a unique powerful perfect creature, is seen by many as an event containing overwhelming value leverage, for good or ill. To many, trying to influence this event is our most important and praise-worthy action, and its priests are the most important people to revere.

I hope you can see how these AGI idealizations and values follow pretty naturally from our concept of the sacred. Just as that concept predicts the changes that religious folks seeking a more sacred God made to their God, it also predicts that AI fans seeking a more sacred AI would change it in these directions, toward this sort of version of AGI.

I’m rather skeptical that actual future AI systems, even distant future advanced ones, are well thought of as having this package of extreme idealized features. The default AI scenario I sketched above makes more sense to me.

Added 7a: In the above I’m listing assumptions commonly made about AGI in AI risk discussions, not applying a particular definition of AGI.

Is Nothing Sacred?

“is nothing sacred?” (spoken) used to express shock when something you think is valuable or important is being changed or harmed (more)

Human groups often unite via agreeing on what to treat as “sacred”. While we don’t all agree on what is how sacred, almost all of us treat some things as pretty sacred. Sacred things are especially valuable, sharply distinguished, and idealized, so they have less decay, messiness, inhomogeneities, or internal conflicts.

We are not to mix the sacred (S) with the non-sacred (NS), nor to trade S for NS. Thus S should not have clear measures or money prices, and we shouldn’t enforce rules that promote NS at S expense. We are to desire S “for itself”, understand S intuitively not cognitively, and not choose S based on explicit calculation or analysis. We didn’t make S; S made us. We are to trust “priests” of S, give them more self-rule and job tenure, and their differences from us don’t count as “inequality”. Objects, spaces, and times can become S by association.

Treating things as sacred will tend to bias our thinking when such things do not actually have all these features, or when our values regarding them don’t actually justify all these sacred valuing rules. Yes, the benefits we get from uniting into groups might justify paying the costs of this bias. But even so, we might wonder if there are cheaper ways to gain such benefits. In particular, we might wonder if we could change what things we see as sacred, so as to reduce these biases. Asked another way: is there anything that is, in fact, naturally sacred, so that treating it as such induces the least bias?

Yes, I think so. And that thing is: math. We do not create math; we find it, and it describes us. Math objects are in fact quite idealized and immortal, mostly lacking internal messy inhomogeneities. Yes, proofs can have messy details, but their assumptions and conclusions are much simpler. Math concepts don’t even suffer from the cultural context-dependence or long-term conceptual drift suffered by most abstract language concepts.

We can draw clear lines distinguishing math vs. non-math objects. Usually no one can own math, avoiding the vulgarity of associated prices. And while we think about math cognitively, the value we put on any piece of math, or on math as a whole, tends to come intuitively, even reverently, not via calculation.

Compared to other areas, math seems to be at an extreme of ease of evaluating abilities and contributions, and thus math can suppress factionalism and corruption in such evaluations. This helps us to use math to judge mental ability, care, and clarity, especially in the young. So we use math tests to sort and assign prestige early in life.

As math is so prestigious and reliable to evaluate, we can more just let math priests tell us who is good at math, and then use that as a way to choose who to hire to do math. We can thus avoid using vulgar outcome-based forms of payment to compensate math workers. It doesn’t work so badly to give math priests self-rule and long job tenures. Furthermore, so many want to be math priests that their market wages are low, making math inequality feel less offensive.

The main thing that doesn’t fit re math as sacred is that today treating math as sacred doesn’t much help us unite some groups in contrast to other groups. Though that did happen long ago (e.g., among ancient Greeks). However, I don’t at all mind this aspect of math today.

The main bias I see is that treating math as sacred induces us to treat it as more valuable than it actually is. Many academic fields, for example, put way too high a priority on math models of their topics. Which distracts from actually learning about what is important. But, hey, at least math does in fact have a lot of uses, such as in engineering and finance. Math was even crucial to great advances in many areas of science.

Yes, many over-estimate math’s contributions. But even so, I can’t think of something else that is in fact more naturally “sacred” than math. If we all in fact have a deep need to treat some things as sacred, this seems a least biased target. If something must be sacred, let it be math.
