Monthly Archives: February 2022

Rationality As Rules

Our world is complex: we try many things, with varying success. And when we look at the distribution of our successes and failures, we often notice patterns. Patterns we then summarize in terms of models and rules of thumb regarding ideal arrangements.

For example, an ideal sentence has a verb and a subject, starts with a capital letter, and ends in a period. An ideal essay has an intro and a conclusion, and each paragraph has an ideal sequence of sentence types. In my old world of Lisp, ideal computer code has no global variables, each function has only a few lines of code and is documented with text, and functions form a near-tree structure with few distant connections.

Ideal rooms have little dust or dirt, few loose items around, and big items are where they appear in a designed floor plan. Ideal job performance follows ideal work schedules and agreed-on procedures. Ideal communication uses clear precise language and is never false or misleading. And so on.

Such simple and easily expressed rules and ideal descriptions can be quite helpful when we are learning how to do things. But eventually, if we are primarily focused on some other outcomes, we usually find that we want to sometimes deviate from the rules, and produce structures that differ from our simple ideals. Not every best sentence has a verb, not every best code function is documented, and the furniture isn’t always most useful when placed exactly according to the floor plan.

However, when we are inclined to suspect each other’s motives, we often turn rules of thumb into rules of criticism. That is, we turn them into norms or laws or explicit rules of home, work, etc. And at that point such rules discourage us from deviating, even when we expect such deviations to improve on rule-following. Yes, it is sometimes possible to apply for permission to deviate, or to deviate first and then convince others to accept this. But even so, enforced rules change our behavior.

With sufficient distrust, it can make sense to enforce a limited set of such rules. We lose something, as rules keep us from the very best outcomes when people have good motivations, but we gain by cutting the harm that can result from poor motivations. At least when, on average, poor motivations tend to move choices away from our usual ideals.

For example, we humans are complex and twisted enough that sometimes we are better off when we lie to others, and when they lie to us. Even so, we can still want to enforce rule systems that detect and punish lies. Yes, this will discourage some useful lies. But that loss may be more than compensated by also discouraging damaging lies from people with conflicting interests. We can want rules that tend to push behavior toward the zero-lie ideal even when that is not actually the best scenario for us, all things considered.

I recently realized that “rationality” is mostly about such ideal-pushing rules. We say that we are more “rational” when we avoid contradictions, when our arguments follow valid logical structures, when our degrees of belief satisfy probability axioms, and when we update according to Bayes’ rule. But this is not because we can prove that such things are always better.
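
(For readers who want the formula: Bayes’ rule says that after seeing evidence E, your degree of belief in a hypothesis H should become P(H|E) = P(E|H) P(H) / P(E).)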

Oh sure, we can prove such things are better in some ideal contexts, but we also know that our real situations differ. For example, we know that we can often improve our situation via less accurate beliefs that influence how others think of us. (Our book Elephant in the Brain is all about this.) And even accuracy can often be improved via calculations that ignore key relevant information. Our minds and the problems we think about are complex, heuristic, and messy.

Yet if we sufficiently fear being maliciously persuaded by motivated reasoning from others with conflicting motives, we may still think ourselves better off if we support rationality norms, laws, and other rules that punish incoherence, inconsistency, and other deviations from simple rationality ideals.

The main complication comes when people want to argue for us to accept limited deviations from rationality norms on special topics. Such as God, patriotism, romance, or motherly love. It is certainly possible that we are built so as to be better off with certain “irrational” beliefs on such topics. But how exactly can we manage the debates in which we consider such claims?

If we apply the usual debate norms on these topics, they will tend to support the usual rational conclusions on what are the most accurate beliefs, even if they also suggest that other beliefs might be more beneficial. But can people be persuaded to adopt such beneficial beliefs if this is the status of the official public debates on those topics?

However, if we are to reject the usual rationality standards for those debates, what new standards can we adopt instead to protect us from malicious persuasion? Or what other methods can we trust to make such choices, if those methods are not to be disciplined by debates? I’m not yet seeing a way out.

Self-Set Legal Liability

Today a big fraction of “constitutional law” issues concern our many awkward, incoherent, and inefficient collective choices regarding crime detection, punishment, co-liability, and freedoms of movement and privacy. My vouching proposal would instead privatize all of these choices, hopefully inducing more innovative, adaptive, and efficient versions. But it would not change how we decide what is a crime, how we judge particular accusations, or how we set priority levels for crime avoidance and detection.

In my vouching proposal, each kind of crime has a fine and a bounty. The fine sets how hard injurers and their vouchers will try to avoid causing the harm, while the bounty sets how hard bounty-hunters will work to detect who caused the harm if it happens. In this post, I’d like to consider further privatizing the choices of these two priority levels. I don’t have a fully worked out proposal here; instead I mainly want to frame the issues, and think out loud. Here goes.

Consider kinds of harms, like murder, rape, robbery, defamation, etc., where particular victims can be identified. We might want to let such victims set personal fine and bounty levels for each kind of harm that they might suffer, and to which others might contribute. If everyone were required to have an RFID tag that returns a pointer to a voucher, to prove that they are in fact vouched, then that pointer could also tell about that person’s personal fine and bounty levels, to help others better take those into account in their interactions.

For concreteness, consider the “harm” of being insulted. (I choose this example because it isn’t obvious whether this is in fact a harm that should be discouraged by law.) A potential victim of insults would seem to be well-placed to choose what fraction of the fine paid should go to pay for a bounty, as opposed to compensation to that victim. But setting a higher fine level would impose costs on others who might want to insult this victim. So we want the victim to pay a cost for raising their personal fine level. Ideally with the right cost, they’d set the level to match the actual harm they suffer from this event. Then others who faced this fine might make efficient choices regarding how hard to try to avoid insulting this victim.

Property taxes based on self-set property values can give property owners good incentives when those self-set values become legal property sales offers. Similarly, it seems to me that it might work to charge victims some fee in proportion to the insult fine levels that they set. Then the higher they set their insult fine, the more others will avoid insulting them, but the more they will have to pay in fees.
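
To make the victim’s tradeoff concrete, here is a minimal sketch; the deterrence curve, the fee ratio, and all the numbers are invented purely for illustration, and compensation paid to the victim out of collected fines is ignored:

```python
# Minimal sketch of a victim choosing a personal insult fine level.
# The deterrence curve and all numbers are invented for illustration;
# compensation paid to the victim out of collected fines is ignored here.

def insult_rate(fine):
    """Assumed annual rate of insults, falling as the posted fine rises."""
    return 2.0 / (1.0 + fine / 1000.0)

def expected_annual_cost(fine, harm_per_insult=300.0, fee_ratio=0.05):
    """Victim's cost: annual fee plus expected harm from insults suffered."""
    fee = fee_ratio * fine                       # fee proportional to the self-set fine
    harm = insult_rate(fine) * harm_per_insult   # expected harm from insults
    return fee + harm

# The victim picks the fine level that minimizes their expected annual cost.
best_fine = min(range(0, 10001, 100), key=expected_annual_cost)
print(best_fine, round(expected_annual_cost(best_fine), 2))
```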

The key parameter here is the ratio of the personal annual fee paid to the personal fine level. This parameter may need to be set differently for each different kind of crime. How can we get such parameters near reasonable values?

For property taxes, it seems reasonable to add up all the expenses required to support property, such as the cost of roads, and set the property tax level so that total tax revenue is near what is needed to cover those property-supporting expenses. Similarly, my intuition is that the total amount of fees spent to set insult fine levels should be near the total amount of actual fines paid by those found guilty of insulting victims. These two numbers should be within a factor of ten of each other, I’d guess, and setting them exactly equal wouldn’t be a terrible choice. (At least compared to our status quo.)

Now if these numbers are set to be similar, then the total amount of fees collected from victims would be near the total fines paid by injurers, which would in turn be near the total premiums paid by voucher clients to their vouchers. Thus the fees victims pay to set fine levels could, on average, nearly fund subsidies that cover all of the voucher premiums! So we needn’t worry about bankrupting injurers on average by forcing them to pay for vouchers.
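
And here is a toy version of that flow-of-funds claim, with all figures invented only so the totals roughly balance:

```python
# Toy aggregate accounting for the claim that victim fees could roughly
# fund voucher premiums. All figures below are invented for illustration.

population      = 1_000_000
avg_victim_fee  = 200                           # average annual fee paid to set fine levels
total_fees      = population * avg_victim_fee   # $200M collected from victims

total_fines     = total_fees                    # assume total fines paid by injurers are set near this
total_premiums  = total_fines                   # vouchers price premiums to cover expected fines

coverage = total_fees / total_premiums
print(f"Victim fees could fund about {coverage:.0%} of voucher premiums")
```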

Though, yes, those who seem, to vouchers, to have a much higher risk than average of hurting others would have to pay much higher premiums. (Those with lower than average risks might get cash rebates.) And that might well force such high risk clients to make big compromises via accepting unattractive co-liability, freedom, and punishment arrangements. Which we could think a just consequence of their risky inclinations, or we might feel sorry for some of them and subsidize their voucher premiums.

Yes, we might still worry about those who are too poor to afford large fines. Others would feel more free to insult them, or to cause them other harms. This is what efficiency requires, though again we could subsidize their fees if we felt sorry for them.

So far, I’ve focused on harms concentrated in particular victims; it makes sense for them to set personal fine levels. Other harms can be more diffuse, however, and harm a wider set of people together. For these, we’d want ways to help such groups to pay together to raise the fine levels regarding the harms that they might suffer together. But we have many promising “public goods mechanisms” for this purpose. And we still probably want to allow such fines to vary by group and context; setting a single level for all groups and contexts seems quite inefficient.

And that’s it, my out-loud thoughts on how to let people set personal priority levels regarding the harms that might befall them, in the context of my prior vouching proposal.

J. Phil. Critique of Ems

The third most prestigious journal in philosophy, Journal of Philosophy, will publish this paper by Eric Mandelbaum:

Everything and More: The Prospects of Whole Brain Emulation. Whole Brain Emulation (WBE) … optimism may be misplaced. … [It] is, at best, no more compelling than any of the other far-flung routes to achieving superintelligence. Similarly skeptical conclusions are found regarding immortality.

Now as the paper never says anything on other routes to superintelligence, both of these claims seem pretty vague. However, reading the paper I think I can identify three key less vague claims. I’ll now argue that two of these claims are true but long known, while the third claim is false.

The paper’s first key claim is that WBE (or “uploads” or “ems”), can’t make us immortal if they are not conscious, and no one knows much about what things are conscious (assuming creatures could have exactly the same behavior yet be conscious or not):

Biological Theory posits that the coding and interchange of information between electrical and chemical formats gives rise to consciousness, and that the specific neural hardware we use is essential to phenomenal consciousness. … The explanatory gap is the thesis that we do not have any idea of how a subjective state (such as seeing red, or hearing middle C on a piano) could be identical to an objective state (such as having a certain pattern of neuronal activation). … it is a theory about our current epistemic position, one which claims that at this moment we have no clue how psychophysical identities could be true. The idea is that we do not yet possess the concepts to bridge this gap (although one day we may).

To which I respond: yes of course, we’ve long known this. The only data we seem to have about consciousness is the fact that many of us feel compelled to believe that some part of us is at that current moment conscious, even as we each feel unsure re the status of everyone and everything else, of ourselves in the past or future, or even of other parts of ourselves at this moment. So even though by assumption a WBE would also feel compelled to believe that part of it was conscious, the rest of us would feel unsure of that. And until we find some other data (and we can’t even imagine what such data could be), this is how the situation must remain. But we’ve long known this.

The paper’s second claim is that creating WBE gets harder the more brain cell details that we need to scan and emulate:

We have good evidence that some sub-connectomic properties do matter for psychology. … increases in (e.g.,) testosterone plainly do affect a wide range of behavior, … the idea that neural properties are not the functional realizers of the mind is, at the very least, very surprising. … subneural functionalist is also rather destructive to the idea that WBE is the best chance to achieve Superintelligence or immortality. … the more low-level the functional properties are, the more we will need to know (and the more information we would need to upload), meaning we would be much further away from achieving uploading. … If more than the connectome matters, if instead lower-level, finer grained details, such as ones that involve neurochemical elements, or other substances that correspond to our “hardware” are germane, then the road to emulation is much less clear.

Every discussion of WBE that I’ve ever seen (and I’ve seen them for over three decades) considers how much within-neuron structure will need to be scanned and emulated, and all such discussions have accepted that more detail requires more work and delays the likely arrival date of WBE. Most everyone has also expected that the topology of neuron connections would be insufficient. So I don’t see why that claim is at all “surprising”.

The paper’s third claim is that an unconscious WBE would be useless as a superintelligence:

As human capital is the central driver of economic growth, having large amounts of readily available human-level intelligences will make for enormous technological and societal enhancement. … Say the Biological Theory is only true for phenomenal consciousness. Could the rest of cognition then be captured by the connectome, in which case WBE could still lead to superintelligence? The question turns, in part, on whether there can be intentionality without phenomenology. …. But could there also motivation, or desire, without any phenomenology? … If they have no motivations, then they will not do anything on their own. … If uploads lacked beliefs and desires, then they would just be giant calculators that we neither know how to control nor understand the mechanics of. … if uploads don’t have the normal attitudes, we will have no idea how motivate them to do anything … One may argue that cars and calculators do things without being motivated, but they do so at the behest of intelligent, motivated designers and users.

By definition, a WBE is a device with the same input-output behavior as a source human brain, and thus can be hooked up to artificial eyes, hands etc. to seem to act just as would its source human in the same situation. So it seems that employers could hire such a WBE to do jobs, just as they would have done with the source human. They could, if they wanted, select, train, instruct, incentivize, and monitor such WBE employees in exactly the same ways that they would have done with the source human as an employee. (They also have new options, that I discuss in my book Age of Em.)

This paper complains, however, that such WBE employees suffer from a fatal flaw: even though they seem to be able to do their jobs via having beliefs and motivations, they would actually only be using fake-beliefs and fake-motivations. Yet I fail to see how this prevents society from using WBE as effectively as we use other humans today, as a substitute for human capital to drive economic growth. Today, employers never know, nor need to know, if their employees have real or fake beliefs or motivations. Teachers never need to know if their students have real or fake beliefs. Armies never know if their soldiers have real or fake motivations. And so on. By definition WBE would claim to feel, and whether they really feel seems irrelevant to whether they can function in society.

QED.

Can We Tame Political Minds?

Give me a firm spot on which to stand, and I shall move the earth. (Archimedes)

A democracy … can only exist until the voters discover that they can vote themselves largesse from the public treasury. (Tytler)

Politics is the mind killer. (Yudkowsky)

The world is a vast complex of interconnected subsystems. Yes, this suggests that you can influence most everything else via every little thing you do. So you might help the world by picking up some trash, saying a kind word, or rating a product on Yelp.

Even so, many are not satisfied to have some effect; they seek a max effect. For this reason, they say, they seek max personal popularity, wealth, or political power. Or they look for the most neglected people to help, like via African bed nets. Or they seek dramatic but plausibly neglected disaster scenarios to prevent, such as malicious foreigners, eco-apocalypse, or rampaging robots.

Our future is influenced by a great many things, including changes in tech, wealth, education, political power, military power, religion, art, culture, public opinion, and institutional structures. But which of these offers the strongest lever to influence that future? Note that if we propose to change one factor in order to induce changes in all the others, critics may reasonably question our ability to actually control that factor, since in the past such changes seem to have been greatly influenced by other factors.

Thus a longtime favorite topic in “serious” conversation is: where are the best social levers, i.e. factors which do sometimes change, which people like us (this varies with who is in the conversation) can somewhat influence, and where the effects of this factor on other factors seem lasting and stronger than reverse-direction effects.

When I was in tech, the consensus there saw tech as the strongest lever. I’ve heard artists make such claims about art. And I presume that priests, teachers, activists, and journalists are often told something similar about their factors.

We economists tend to see strong levers in the formal mechanisms of social institutions, which we happen to be well-placed to study. And in fact, we have seen big effects of such formal institutions in theory, the lab, and the field. Furthermore, we can imagine actually changing these mechanisms, because they tend to be stable, are sometimes changed, and can be clearly identified and concisely described. Even stronger levers are found in the higher level legal, regulatory, and political institutions that control all the other institutions.

My Ph.D. in social science at Caltech focused on such controlling institutions, via making formal game theory models, and testing them in the lab and field. This research finds that institution mechanisms and rules can have big effects on outcomes. Furthermore, we seem to see many big institutional failures in particular areas like telecom, transport, energy, education, housing, and medicine, wherein poor choices of institutions, laws, and regulations in such areas combine to induce large yet understandable waste and inefficiency. Yes institutions do matter, a lot.

However, an odd thing happens when we consider higher level models. When we model the effects of general legal and democratic institutions containing rational agents, we usually find that such institutions work out pretty well. Common fears of concentrated interests preying on diffuse interests, or of the poor taxing the rich to death, are not usually borne out. While the real world does seem full of big institutional problems at lower levels, our general models of political processes do not robustly predict these common problems. Even when such models include voters who are quite ignorant or error prone. What are such models missing?

Bryan Caplan’s book Myth of the Rational Voter gets a bit closer to the truth with his concept of “rational irrationality”. And I was heartened to see Alex Tabarrok [AT] and Ezra Klein [EK], who have quite different political inclinations, basically agree on the key problem in their recent podcast:

[AT:] Mancur Olson thought he saw … more and more of these distributional coalitions, which are not just redistributing resources to themselves, but also slowing down… change. … used to be that we required three people to be on the hiring committee. This year, we have nine … Now, we need [more] rules. … we’ve created this more bureaucratic, kind of rule-bound, legalistic and costly structure. And that’s not a distributional coalition. That’s not lobbying. That’s sort of something we’ve imposed upon ourselves. …

[EK:] it’s not that I want to go be part of slowing down society and an annoying bureaucrat. Everybody’s a hero of their own story. So how do you think the stories people tell themselves in our country have changed for this to be true? …

[AT:] an HOA composed of kind of randos from the community telling you what your windows can look like, it’s not an obvious outcome of a successful society developing coalitions who all want to pursue their own self-interest. … naked self-interest is less important than some other things. And I’ll give you an example which supports what you’re saying. And that is, if you look at renters and the opinions of renters, and they are almost as NIMBY, Not In My Backyard, as owners, right, which is crazy.… farmers get massive redistribution in their favor. … But yet, if you go to the public … They’re, oh, no, we’ve got to protect the family farm. …

[EK:] a lot of political science … traditionally thought redistribution would be more powerful than it has proven to be … as societies get richer, they begin emphasizing what he calls post-materialist values, these moral values, these identity values, values about fairness. (More)

That is, our larger political and legal systems induce, and do not fix, many more specific institutional failures. But not so much because of failures in the structure of our political or legal institutions. Instead, the key problem seems to lie in voters’ minds. In political contexts, minds that are usually quite capable of being reasonable and pragmatic, and attending to details, instead suffer from some strange problematic mix of confused, incoherent, and destructive pride, posturing, ideology, idealism, loyalty, and principles. For want of a better phrase, let’s just call these “political minds.”

Political minds are just not well described by the usual game theory or “rational” models. But they do seem to be a good candidate for a strong social lever to move the future. Yes, political minds are probably somewhat influenced by political institutions, and by communications structures of who talks to and listens to whom. And by all the other systems in the world. Yet it seems much clearer how they influence other systems than how the other systems influence them. In particular, it is much clearer how political minds influence institution mechanisms than how those mechanisms influence political minds.

In our world today, political minds somehow induce and preserve our many more specific institutional failures. And also the accumulation of harmful veto players and added procedures discussed by [AT] and [EK]. Even so, as strong levers, these political minds remain gatekeepers of change. It seems hard to fix the problems they cause without somehow getting their buy-in. But can we tame political minds?

This is surely one of the greatest questions to be pondered by those aware enough to see just how big a problem this is. I won’t pretend to answer it here, but I can at least review six possibilities.

War – One ancient solution was variation and selection of societies, such as via war and conquest. These can directly force societies to accept truths that they might not otherwise admit. But such processes are now far weaker, and political minds fiercely oppose strengthening them. Furthermore, the relevant political minds are in many ways now integrated at a global level.

Elitism – Another ancient solution was elitism: concentrate political influence into fewer higher quality hands. Today influence is not maximally distributed; we still don’t let kids or pets vote. But the trend has definitely been in that direction. We could today limit the franchise more, or give more political weight to those who pass various quality tests. But gains there seem limited, and political minds today mostly decry such suggestions.

Train – A more modern approach is to try to better train minds in general, in the hope that this will also improve minds in political contexts. And perhaps universal education has helped somewhat there, though I have doubts. It would probably help to replace geometry with statistics in high school, and to teach more economics and evolutionary biology earlier. But remember that the key problem is reasonable minds turning unreasonable when politics shows up; none of these seem to do much there.

Teach – A more commonly “practiced” approach today is just to try to slowly persuade political minds person by person and topic by topic, to see and comprehend their many particular policy mistakes. And do this faster than new mistakes accumulate. That has long been a standard “educational” approach taken by economists and policy makers. It seems especially popular because one can pretend to do this while really just playing the usual political games. Yes, there are in fact people like Alex and Ezra who do see and call attention to real institutional failures. But overall this approach doesn’t seem to be going very well. Even so, it may still be our best hope.

Privatize – A long shot approach is to try to convince political minds to not trust their own judgements as political minds, and thus to try to reduce the scope for politics to influence human affairs. That is, push to privatize and take decisions away from large politicized units, and toward more local units who face stronger selection and market pressures, and induce less politicized minds. Of course many have been trying to do exactly this for centuries. Even so, this approach might still be our best hope.

Futarchy – My proposed solution is also to try to convince political minds to not trust their own judgements, but only regarding matters of fact, and only relative to the judgements of speculative markets. Speculative market minds are in fact vastly more informed and rational than the usual political minds. And cheap small scale trials are feasible that could lead naturally to larger scale trials that could go a long way toward convincing many political minds of this key fact. It is quite possible to adopt political institutions that put speculative markets in charge of estimating matters of fact. At which point we’d only be subject to political mind failures regarding values. I have other ideas for this, but let’s tackle one problem at a time.

Politics is indeed the mind killer. But once we know that, what can we do? War could force truths, though at great expense. Elitism and training could improve minds, but only so far. Teaching and privatizing are being tried, but are progressing terribly slowly, if at all.

While it might never be possible to convince political minds to distrust themselves on facts, relative to speculative markets, this approach has hardly been tried, and seems cheap to try. So, world, why not try it?

Want Your Complaint Heard?

People like to complain; social media is full of it. But such complaints seem less than fully satisfying, perhaps because we usually complain to third parties. Maybe what we really want is to know that the target of our complaint heard and understood it. If so, let’s make that possible.

Imagine that you feel a complaint coming on. So you go to YouHurtMe.Com, and navigate down a hierarchy of possible complaint target groups, to reach specific options like “White people who think they aren’t racist”, “Women who think they are too good for a man like me”, or “Students who grade grub to improve a B+”. You could pick larger encompassing target groups, or define some even more specific targets.

Once you find your target group, you next pick your specific complaint, such as “You actually are racist”, “You aren’t as good as you think”, or “Be grateful for your B+”. If you don’t see your complaint listed there, you can add one. Once you’ve declared yourself a complainer of this type, you can browse some essays expressing that complaint, and vote on which essay looks best.

Or you might add your own new essay for consideration. But your essay must be civil, and include at least one multiple-choice comprehension test question at the end.

Targets of complaints can also come to the website. They may sincerely want to hear complaints made against people like them, and want to show those who make such complaints that they’ve heard and understood them. The system asks each new visitor some questions designed to quickly identify complaints targeted at them. (They can refuse to answer some questions.)

They then select a matching complaint, like “You actually are racist”, read the top voted essay, and take the ending comprehension test. Having passed the test, they are now publicly listed among targets who have heard and understood the complaint.
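
For what it’s worth, the whole workflow above fits in a very small data model. Here is a minimal sketch; the class names, fields, and passing threshold are my own invented assumptions, not a spec for any actual site:

```python
# Minimal sketch of the YouHurtMe.Com workflow described above.
# All class and field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Essay:
    author: str
    text: str
    quiz: list          # multiple-choice comprehension questions at the end
    votes: int = 0      # complainers vote on which essay best expresses the complaint

@dataclass
class Complaint:
    target_group: str   # e.g. "Students who grade grub to improve a B+"
    summary: str        # e.g. "Be grateful for your B+"
    essays: list = field(default_factory=list)
    heard_by: list = field(default_factory=list)   # targets publicly listed as having heard it

def mark_heard(complaint: Complaint, target: str, quiz_score: float) -> None:
    """Publicly list a target who read the top essay and passed its quiz."""
    if quiz_score >= 1.0 and target not in complaint.heard_by:
        complaint.heard_by.append(target)

# Hypothetical usage:
c = Complaint("Students who grade grub to improve a B+", "Be grateful for your B+")
c.essays.append(Essay("some_complainer", "An essay text...", quiz=["Which grade was earned?"]))
mark_heard(c, "some_student", quiz_score=1.0)
print(c.heard_by)   # ['some_student']
```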

We might want to allow those who’ve heard complaints about them to respond in some way, though perhaps that risks too much acrimony. Less problematically, we might allow compliments as well as complaints to be created and heard via this same structure.

But, bottom line: it seems quite feasible to let complainers know that the targets of their complaints have heard and understood them. Which is what complainers often say is the main thing they want: to be heard.

From a conversation with Agnes Callard.

Losing My Religion

To a few of my associates, I gave the xmas present of a blog post on a topic they pick. Bryan Caplan just finally made his choice: the story of how I became an atheist.

My immediate family is very religious. My dad (now dead) was a part-time pastor for decades, my mom (still alive) wrote many Christian tween novels, one brother is now a pastor, and the other brother is the music director at what was my dad’s church. As a tween, I myself joined what my parents considered a Christian “cult”, and within a year my parents forbade me from associating with it.

In college I drifted slowly away, eventually to full atheism. (At a similar speed to most people’s biggest view changes.) But my change had little to do with disagreeing with church doctrines or with difficulties explaining evil. And I never resented nor confronted my parents for teaching me something with which I later came to disagree. This wasn’t about my relation to them either.

No, the main issue for me was that in college I became greatly persuaded by and deeply immersed in a physics view of the universe. It was not just one set of lenses through which one might look to gain insight. No, it purported to offer a complete (if not fully fleshed-out) description of the reality accessible to me. It offered me many detailed ways to test that claim, and it passed those tests as far as I could tell. So far as I could see then, and now, the world immediately around me *IS* in fact the world of photons, electrons, protons, and neutrons described by the physics I learned.

But that world just offers few openings for hidden powers to be listening to or influencing my thoughts and feelings, or changing how my life goes according to my sins and prayers. Sure my family, coworkers, or governments might try to do those things. But I at least see many traces of their existence around me. It is the idea of completely hidden powers doing such things that seems crazy to me. Not logically impossible, but quite implausible given our evidence.

Now I must admit that those who know physics better than most believe in the god of prayer at about the same rate as everyone else. So what else explains how physics influenced me, compared to them? It might be that I just know physics better than most of them. But modesty forces me to consider other possibilities.

Those of us who are different in the head tend to need some convincing of that fact. You see, we assume we are normal, and relevant evidence tends to be ambiguous. For example, most people I’ve seen doing their homework were doing it alone, in a library, on the bus, or in their bedroom. So I assumed most people were used to thinking by themselves. But I was wrong.

In seventh grade, my English teacher assigned me an unusual lesson plan: go to the library every day and just write. No particular topics, just on whatever I wanted. I loved it, and learned lots. My favorite class in high school was physics because it didn’t ask me to just accept things on faith; we could check claimed results in lab experiments.

In college as a physics major, I discovered that in the last two years we went over exactly the same topics as the first two years, this time with more math. I instead wanted to really understand those topics. So I stopped doing the homework and instead spent the time playing with the equations. I’d ace the exams. I also began to browse libraries for interesting things, think about interesting questions that occurred to me, and work on my own self-invented projects.

I bailed from my grad program in philosophy of science when it seemed I’d found answers to the main questions I’d had there. And after two years of working full time at Lockheed I switched to thirty hours per week so I could spend the rest of my time studying things on my own. And I’ve since changed fields many times when it seemed I was learning less where I was than where I could switch to.

I often meet people who ask how to study a topic, or what school they should go to, and I say: aren’t you old enough to just go learn stuff by yourself? Most researchers are terrible at explaining why their projects offer the world the best progress bang for their effort buck, but I have no problem offering such explanations.

All of this I think suggests that I’m unusually willing to fully own all of my main opinions and research choices, instead of inheriting them from others. So perhaps that’s another explanation for my atheism. Most people accept the usual beliefs of others around them and assume they must have good reasons. I’m instead enough of a think-for-myself polymath that I have to see such reasons for myself, and know enough tools from enough fields to be able to follow most relevant arguments. And I just don’t see good reasons to believe in hidden powers influencing the thoughts, feelings, and life outcomes of most humans.

Merry Christmas, Bryan.

On Disagreement, Again

The usual party chat rule says to not spend too long on any one topic, but instead to flit among topics unpredictably. Many thinkers also seem to follow a rule where if they think about a topic and then write up an opinion, they are done and don’t need to ever revisit the topic again. In contrast, I have great patience for returning again and again to the most important topics, even if they seem crazy hard. And for spending a lot of time on each topic, even if I’m at a party.

A long while ago I spent years studying the rationality of disagreement, though I haven’t thought much about it lately. But rereading Yudkowsky’s Inadequate Equilibria recently inspires me to return to the topic. And I think I have a new take to report: unusual for me, I adopt a mixed intermediate position.

This topic forces one to try to choose between two opposing but persuasive sets of arguments. On the one side there is formal theory, to which I’ve contributed, which says that rational agents with different information and calculation strategies can’t have a common belief in, nor an ability to foresee, the sign of the difference in their opinions on any “random variable”. (That is, a parameter that can be different in each different state of the world.) For example, they can’t say “I expect your next estimate of the chance of rain here tomorrow to be higher than the estimate I just now told you.”

Yes, this requires that they’d have the same ignorant expectations given a common belief that they both knew nothing. (That is, the same “priors”.) And they must be listening to and taking seriously what the other says. But these seem reasonable assumptions.

An informal version of the argument asks you to imagine that you and someone similarly smart, thoughtful, and qualified each become aware that your independent thoughts and analyses on some question had come to substantially different conclusions. Yes, you might know things that they do not, but they may also know things that you do not. So as you discuss the topic and respond to each other’s arguments, you should expect to on average come to more similar opinions near some more intermediate conclusion. Neither has a good reason to prefer your initial analysis over the other’s.

Yes, maybe you will discover that you just have a lot more relevant info and analysis. But if they see that, they should then defer more to you, as you would if you learned that they are more expert than you. And if you realized that you were more at risk of being proud and stubborn, that should tell you to reconsider your position and become more open to their arguments.

According to this theory, if you actually end up with common knowledge of or an ability to foresee differences of opinion, then at least one of you must be failing to satisfy the theory assumptions. At least one of you is not listening enough to, and taking seriously enough, the opinions of the other. Someone is being stubbornly irrational.

Okay, perhaps you are both afflicted by pride, stubbornness, partisanship, and biases of various sorts. What then?

You may find it much easier to identify more biases in them than you can find in yourself. You might even be able to verify that you suffer less from each of the biases that you suspect in them. And that you are also better able to pass specific intelligence, rationality, and knowledge tests of which you are fond. Even so, isn’t that roughly what you should expect even if the two of you were similarly biased, but just in different ways? On what basis can you reasonably conclude that you are less biased, even if stubborn, and so should stick more to your guns?

A key test is: do you in fact reliably defer to most others who can pass more of your tests, and who seem even smarter and more knowledgeable than you? If not, maybe you should admit that you typically suffer from accuracy-compromising stubbornness and pride, and so for accuracy purposes should listen a lot more to others. Even if you are listening about the right amount for other purposes.

Note that in a world where many others have widely differing opinions, it is simply not possible to agree with them all. The best that could be expected from a rational agent is to not consistently disagree with some average across them all, some average with appropriate weights for knowledge, intelligence, stubbornness, rationality, etc. But even our best people seem to consistently violate this standard.
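
Just to illustrate what that standard might look like, here is a tiny sketch; the names, estimates, weights, and the simple linear weighting are all invented, and are only one of many ways such an average could be formed:

```python
# Tiny sketch of "not consistently disagreeing with a weighted average" of others' views.
# Names, probability estimates, and weights are all invented for illustration.

estimates = {"expert_a": 0.70, "expert_b": 0.55, "general_public": 0.40}
weights   = {"expert_a": 3.0,  "expert_b": 2.0,  "general_public": 1.0}  # e.g. knowledge, rationality

consensus = sum(weights[k] * estimates[k] for k in estimates) / sum(weights.values())
print(f"weighted consensus estimate: {consensus:.2f}")
# The standard above: your own estimates shouldn't predictably sit on one side of this number.
```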

All that we’ve discussed so far has been regarding just one of the two opposing but persuasive sets of arguments I mentioned. The other argument set centers around some examples where disagreement seems pretty reasonable. For example, fifteen years ago I said to “disagree with suicide rock”. A rock painted with words to pretend it was a sentient creature listening carefully to your words, but offering no evidence that it actually listened, should be treated like a simple painted rock. In that case, you have strong evidence to down-weight its claims.

A second example involves sleep. While we are sleeping we don’t usually have an opinion on if we are sleeping, as that issue doesn’t occur to us. But if the subject does come up, we often mistakenly assume that we are awake. Yet a person who is actually awake can have high confidence in that fact; they can know that while a dreaming mind is seriously broken, their mind is not so broken.

An application to disagreement comes when my wife awakes in the night, hears me snoring, and tells me that I’m snoring and should turn my head. Responding half asleep, I often deny that I’m snoring, as I then don’t remember hearing myself snore recently, and I assume that I’d hear such a thing. In this case, if my wife is in fact awake, she can comfortably disagree with me. She can be pretty sure that she did hear me snore and that I’m just less reliable due to being only half awake.

Yudkowsky uses a third example, which I also find persuasive, but at which many of you will balk. That is the majority of people who say they have direct personal evidence for God or other supernatural powers. Evidence that’s mainly in their feelings and minds, or in subtle patterns in how their personal life outcomes are correlated with their prayers and sins. Even though most people claim to believe in God, and point to this sort of evidence, Yudkowsky and I think that we can pretty confidently say that this evidence just isn’t strong enough to support that conclusion. Just as we can similarly say that personal anecdotes are usually insufficient to support the usual confidence in the health value of modern medicine.

Sure, it’s hard to say with much confidence that there isn’t a huge smart power somewhere out there in the universe. And yes, if this power did more obvious stuff here on Earth back in the day, that might have left a trail of testimony and other evidence, to which advocates might point. But there’s just no way that either of those considerations can remotely support the usual level of widespread confidence in a God meddling in detail with their heads and lives.

The most straightforward explanation I can see here is social desirability bias: a bias that not only introduces predictable errors, but also cuts one’s willingness to notice and correct such errors. By attributing their belief to “faith”, many of them do seem to acknowledge quite directly that their argument won’t stand up to the usual evaluation standards. They are instead believing because they want to believe. Because their social world rewards them for the “courage” and “affirmation” of such a belief.

And that pretty closely fits a social desirability bias. Their minds have turned off their rationality on this topic, and are not willing to consider the evidence I’d present, or the fact that the smartest most accomplished intellectuals today tend to be atheists. Much like the sleeper who just can’t or won’t see that their mind is broken and unable to notice that they are asleep.

In fact, it seems to me that this scenario matches a great many of the disagreements I’m willing to have with others. As I tend to be willing to consider hypotheses that others find distasteful or low status. Many people tell me that the pictures I paint in my two books are ugly, disrespectful, and demotivating, but far fewer offer any opposing concrete evidence. Even though most people seem able to notice the fact that social desirability would tend to make them less willing to consider such hypotheses, they just don’t want to go there.

Yes, there is an opposite problem: many people are especially attracted to socially undesirable hypotheses. A minority of folks see themselves as courageous “freethinkers” who by rights should be celebrated for their willingness to “think outside the box” and embrace a large fraction of the contrarian hypotheses that come their way. Alas, by being insufficiently picky about the contrarian stories they embrace, they encourage, not discourage, everyone else to embrace social desirability biases. On average, social desirability only causes modest biases in the social consensus, and thus only justifies modest disagreements from those who are especially rational. Going all in on a great many contrarian takes at once is a sign of an opposite problem.

Yes, the stance I’m taking implies that contrarian views, i.e., views that seem socially undesirable to embrace, are on average neglected, and thus more likely than the consensus is willing to acknowledge. But that is of course far from endorsing most of them with high confidence. For example, UFOs as aliens are indeed more likely than the usual prestigious consensus will admit, but could still be pretty unlikely. And assigning a somewhat higher chance to claims like that the moon landings were faked is not at all the same as endorsing such claims.

So here’s my new take on the rationality of disagreement. When you have a similar level of expertise to others, you can justify disagreeing with an apparent social consensus only if you can identify a particularly strong way that the minds of most of those who think about the topic tend to get broken by the topic. Such as due to being asleep or suffering from a strong social desirability bias. (A few weak clues won’t do.)

I see this position as mildly supported by polls showing that people think that those in certain emotional states are less likely to be accurate in the context of a disagreement; different emotions plausibly trigger different degrees of willingness to be fair or rational. (Here are some other poll results on what people think predicts who is right in a disagreement.)

But beware of going too wild embracing most socially undesirable views. And you can’t just in general presume that others disagree with each of your many positions due to their minds being broken in some way that you can’t yet see. That way lies unjustified arrogance. You instead want specific concrete evidence of strongly broken minds.

Imagine that you specialize in a topic so much that you know nearly as much as the person in the world who knows the most, but do not have the sort of credentials or ways to prove your views that the world would easily accept. And this is not the sort of topic where insight can be quickly and easily translated into big wins, wins in either money or status. So if others had come to your conclusions before, they would not have gained much personally, nor found easy ways to persuade many others.

In this sort of case, I think you should feel more free to disagree. Though you should respect base rates, and try to test your views as fast and strongly as possible. As the world is just not listening to you, you can’t expect them yet to credit what you know. Just also don’t expect the world to reward you or pay you much attention, even if you are right.

Brainwashing is Sorcery

Can’t bring yourself to slaughter a nearby village, or a long-time associate? Mysticism can help you believe they already attacked you first, and that the stakes are so much higher than your personal gain. (More)

Most states have breach-of-the-peace laws that criminalize … obscene or abusive language in a public place, engaging in noisy behaviors, fighting in a public place, resisting lawful arrest, and disrupting a lawful assembly or meeting. … vagrancy, loitering, and public intoxication. (More)

Most laws are defined in relatively objective ways, so that society can truthfully say “no one is above the law”. Those who violate the law can be found guilty and punished, while others remain free.

But most societies have also included a few less objective and more “flexible” offenses, flexible enough to let the powerful more arbitrarily punish disliked parties. For example many ancient societies let you retaliate directly against someone who previously attacked you via “sorcery”. And many societies today allow punishment for vague crimes like “vagrancy” and “loitering”.

The key difference is that such “flexible offenses” tend to be defined more in terms of how someone important doesn’t like an outcome, and less in terms of what specifically someone did to induce that resulting dislike. And a big problem is that this flexibility often lies dormant for long periods, so that those offenses don’t appear to be applied very flexibly in practice. Until, in a new period of conflict, potential flexibility gets realized and weaponized.

Our world of talk, conversation, and debate is policed by some official laws, such as on “fraud” and “libel”, and by many more informal norms. These norms are often complex, and vary in complex ways with context. We academics have an especially rich and powerful set of such norms.

While most of these norms are relatively objective and helpful, we also seem to include some more flexible offenses, such as “brainwashing”, “propaganda”, “manipulation”, “deception”, “misinformation”, “harassment”, and “gaslighting”. Again the key is that these tend to be defined less in terms of what exactly was done wrong, and more in terms of a disliked result. For example, someone is said to be “brainwashed” if they afterward adopted disliked beliefs or actions. But if exactly the same process results in approved beliefs or actions, there are no complaints.

In times of relative peace and civility, such offenses are applied flexibly only rarely and inconsistently, when particular powerful people find an opening to bludgeon particular opponents. So we don’t much notice their flexibility. But at other times of more severe, aligned, and polarized conflict, they become key weapons in the great battles. We today live in such a time.

The problem isn’t with the general idea of laws or norms, with the idea of enforcing laws, nor with the idea of shunning or shaming those who violate norms. The problem is with a small subset of especially vague norms, offering “loopholes big enough to drive a truck through”, as they say. And with periods when passions become enflamed so much that people become willing to wield any available weapons, such as flexible laws and norms.

The main solution that I can see is to work harder to make our laws and norms less flexible. That is, to more explicitly and clearly express and define them. To more clearly say what exactly are the disapproved behaviors, independent of the disliked beliefs that result. This isn’t as easy as many think, as our social norms do actually tend to be subtler, more context dependent, and less widely understood than we think. Even so, it is quite possible, and often worth the bother. Especially in times like ours.

Another complementary solution is to switch from norm to law enforcement, as I’ve previously suggested. Legal norms are reluctant to allow flexible laws, and legal process is less prone to mistaken rushes to judgement.

Can Combined Agents Limit Drugs?

Using pre-covid stats, a new J. Law & Econ paper tries to account for all U.S. crime costs, i.e., costs due to not everyone fully obeying all laws. These costs include prevention efforts, opportunity costs, and risks to life and health. The annual social loss is estimated at $2.9T, comparable to the $2.7T we spend on food and shelter, the $3.8T on medicine, and a significant fraction of our $21T GDP. One of the biggest contributions is $1.1T from 104K lives lost in 2018 at $10.6M each, including $0.7T from 67K drug overdoses deaths.

But such drug deaths have been roughly doubling every decade since 1980, and in the year up to April 2021, there were 100K US drug overdose deaths, making that loss by itself $1T, at least if you accepted a $10M per life estimate, which I do think is too high. Even so, drug overdose deaths are clearly a huge problem, worth thinking about. What can we do?
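
The arithmetic behind those totals is easy to check; a quick sketch, using the paper’s $10.6M per life figure and a round $10M for the more recent year:

```python
# Quick check of the crime-cost and overdose-cost arithmetic above.

vsl_paper      = 10.6e6      # value per statistical life used in the paper
deaths_2018    = 104_000     # lives lost to crime counted for 2018
overdoses_2018 = 67_000      # drug overdose deaths in 2018
overdoses_2021 = 100_000     # overdose deaths in the year to April 2021

print(f"2018 crime deaths: ${deaths_2018 * vsl_paper / 1e12:.1f}T")     # ~ $1.1T
print(f"2018 overdoses:    ${overdoses_2018 * vsl_paper / 1e12:.1f}T")  # ~ $0.7T
print(f"2021 overdoses:    ${overdoses_2021 * 10e6 / 1e12:.1f}T")       # ~ $1.0T at $10M each
```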

Reading up on the topic, I see a lot of conflicting theories on what would work best. But a big part of the problem seems to me to be that it isn’t clear who exactly owns this problem. We might see it as a family problem, an employer problem, a medical problem, or a legal problem. Yet each of those groups resists taking responsibility, and we don’t fully empower any of them to deal well with the problem.

Now I’m no expert on drug overdosing, but I do fancy myself a bit of an expert on getting organizations to own problems. So let me try my hand at that.

I’ve previously suggested that people choose health agents, who pay for and choose medicine but who lose lots of money if their clients become disabled, in pain, or die. I’ve also suggested that people choose crime vouchers, who must pay cash fines when their clients are found guilty of crimes, but who have client-voucher contracts able to set client co-liability and to choose punishments and freedoms of association, movement, and privacy. I’ve also suggested having agents who insure you against hard times, career agents who get some fraction of your future income, and that parents get such a fraction to compensate for raising you.

So as a man with all these hammers staring at this tough nail of drug overdoses, I’m tempted to merge them into one big hammer and take a swing. That is, how would a merged agent who had all these incentives try to deal with a potential drug problem?

Imagine a for-profit experienced expert org approved by the client’s parents when they are a kid, or by the client when they are an adult. In a world with few legal constraints on the contracts that this agent can agree to with clients. An org who probably also represents many of this client’s friends and family. An org who gains from client income, but who must pay when a client is found guilty of a crime, or suffers hard times, pain, disability, or death. An org able to limit client freedoms of privacy, movement, and association, and able to set client punishments for verified events, and to make associated clients co-liable, so that they are all punished together re events involving any one of them.

Such an agent might make sure to get addicts a reliable drug supply, or to have overdose drugs readily available. Or they might forbid clients from mixing with drug types. Or they might test clients regularly, or encourage athletics that conflict with drug use. Or any of a thousand other possible approaches. The whole point is that I don’t have to figure that out; it would be their job to figure out what works.

Now if an org with incentives and powers like that can’t find a way to get clients to avoid becoming drug addicts, or to not overdose if they do, then that would probably either be due to some larger social context that they couldn’t change, or because many individuals just like drugs so much that they are willing to take substantial chances of overdosing.

What if a larger social policy related to drugs or users was a key problem? For example, maybe drug laws are too strict, or too lax. If so, I’d expect these orgs to figure out which and lobby for changes. And given their expertise and incentives, I’d be tempted to listen to them. If you didn’t trust them so much, well then you might consider using futarchy to choose. But honestly I expect such combined agents could handle the problem regardless of larger policies.

In sum, I suggest that the key underlying problem with drug overdoses is that no expert org owns the problem, by being approved by clients yet given clear abilities and incentives to solve the problem. Yes this is a big ask, and this is my generic solution to many problems. Doesn’t mean it won’t work.

The Planiverse

I recently praised Planiverse as peak hard science fiction. But as I hadn’t read it in decades, I thought maybe I should reread it to see if it really lived up to my high praise.

The basic idea is that a computer prof and his students in our universe create a simulated 2D universe, which then somehow becomes a way to view and talk to one particular person in a real 2D universe. This person is contacted just as they begin a mystical quest across their planet’s one continent, which lets the reader see many aspects of life there. Note there isn’t a page-turning plot nor interesting character development; the story is mainly an excuse to describe its world.

The book seems crazy wrong on how its mystical quest ends, and on its assumed connection to a computer simulation in our universe. But I presume that the author would admit to those errors as the cost of telling his story. However, the book does very well on physics, chemistry, astronomy, geology, and low level engineering. That is, on noticing how such things change as one moves from our 3D world to this 2D world, including via many fascinating diagrams. In fact this book does far better than most “hard” science fiction. Which isn’t so surprising as it is the result of a long collaboration between dozens of scientists.

But alas, no social scientists seem to have been included, as the book seems laughably wrong there. Let me explain.

On Earth, farming started when humans had a world population of ten million, and industry when that population was fifty times larger. Yet even with a big fraction of all those people helping to innovate, it took several centuries to go from steam engines to computers. Compared to that, progress in this 2D world seems crazy fast relative to its population. There people live about 130 years, and our hero rides in a boat, balloon, and plane, meets the guy who invented the steam engine, meets another guy who invented a keyboard-operated computer, and hears about a space station to which rockets deliver stuff every two weeks.

Yet the entire planet has only 25,000 people, the biggest city has 6000 people, and the biggest research city has 1000 people supporting 50 scientists. Info is only written in books, which have a similar number of pages as ours but only one short sentence per page. Each building has fewer than ten rooms, and each room can fit only a couple of people standing up, plus a handful of books or other items. In terms of the space to store stuff, their houses make our “tiny houses” look like warehouses by comparison. (Their entire planet has fewer book copies than did our ancient Library at Alexandria.)
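A rough back-of-envelope comparison, using only the numbers above plus guesses where the book is silent, shows how stark the gap is. Treat every figure in this sketch as an order-of-magnitude illustration of my own, not a careful estimate.

```python
# Rough person-years of potential innovators available during a
# steam-engine-to-computer transition. All numbers are order-of-magnitude
# guesses for illustration only.

earth_population = 500_000_000       # ~50x the 10M at farming
earth_transition_years = 300         # "several centuries"
earth_person_years = earth_population * earth_transition_years

planet_population = 25_000           # Planiverse total
planet_transition_years = 130        # at most one long lifetime
planet_person_years = planet_population * planet_transition_years

print(f"Earth:      {earth_person_years:.1e} person-years")
print(f"Planiverse: {planet_person_years:.1e} person-years")
print(f"Ratio:      {earth_person_years / planet_person_years:,.0f}x")
# Roughly 46,000x fewer person-years of effort, yet comparable technical progress.
```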

There are only 20 steam engines on their planet, and only one tiny factory that makes them. Only one tiny factory makes steel. In fact, nearly every kind of thing is made in a single small factory of that type, which produces only a modest number of units of whatever it makes. Most machines shown have only a tiny number of parts.

Their 2D planet has a 1D surface, with one continent divided into two halves by one mountain peak. The two ends of that continent are its two shores, and on each shore the fishing industry consists of ~6 boats that each fit two people and an even smaller mass of fish. I have a hard time believing that enough fish would drift near enough to shore to fill even these boats once a day.

As the planet surface is 1D, everyone must walk over or under everything and everyone else, including every rock and plant, in order to travel any nontrivial distance. So our hero basically has to pass near everyone and everything on his journey from one shore to the mountain peak. Homes are buried underground, and must close their top doors against the rivers that wash over them periodically.

So in sum, the first problem with Planiverse is that it has far too few people to support an industrial economy, especially one developing at the rate claimed for it. Each industry is too small to support much in the way of learning, scale economies, or a division of labor. It is all just too small.

So why not just assume a much larger world? Because then transport costs get crazy big. If there’s only one factory that makes a kind of thing, then to get one of them to everyone, each item has to be moved on average past half of everything and everyone, a cost that grows linearly with how many things and people there are. Specialization and transportation are in conflict.
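A toy calculation makes the transport-cost point concrete. Assume one factory per kind of good, placed at one end of a 1D line of N evenly spaced households; that setup is my simplification, not the book’s.

```python
# Average delivery distance when a single factory serves everyone on a 1D line.
# Households sit at positions 1..N; the factory sits at one end (position 0).
# A toy model of my own, for illustration only.

def avg_delivery_distance(n_households: int) -> float:
    positions = range(1, n_households + 1)
    return sum(positions) / n_households   # = (N + 1) / 2, i.e. ~N/2

for n in (100, 1_000, 10_000):
    print(n, avg_delivery_distance(n))
# The average item passes roughly half the planet's households on its way,
# so per-item delivery cost grows linearly with population. Putting the
# factory in the middle gives ~N/4 instead of ~N/2, which doesn't change
# the scaling.
```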

A second, lesser problem is that the systems shown seem too small and simple to actually function. Two dimensions just don’t seem to offer enough room to hold all the needed subsystems, nor can they support as much modularity in subsystem design. Yet modularity is central to system design in our world. Let me explain.

In our 3D world, systems such as cells, organisms, machines, buildings, and cities consist of subsystems, each of which achieves a different function. For example, each of our buildings may have at least 16 separate subsystems. These deal with: structural support, fresh air, temperature control, sunlight, artificial light, water, sewage, gas, trash, security surveillance, electricity, internet, ambient sound, mail transport, human activities, and human transport. Most such subsystems have a connected volume dedicated to that function, a volume that reaches close to every point in the building. For example, the electrical power system has connected wires that go near every part of the building, and also connect to an outside power source.

In 2D, however, at most two subsystems can each have a connected volume that reaches near every point. To have more subsystem volumes, you have to break them up, alternating control over key connecting volumes. For example, in a flat array of streets, you can’t have both north-south streets and east-west streets without intersections that alternate, halting flow in one direction to allow flow in the other.

If you wanted to also have two more arrays of streets, going NW-SE and NE-SW, you’d need over twice as many intersections, or each intersection would need twice as many roads going in and out of it. With more subsystems you’d need even more numerous or conflicting intersections, making such subsystems even more limited and dependent on each other.
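To put rough numbers on that, here is a small sketch counting crossing points between families of parallel streets laid over the same region. The model (m equally spaced streets per direction, every pair of non-parallel streets crossing once) is my own simplification.

```python
from math import comb

# Crossing points needed when k families of parallel streets, m streets each,
# overlay one 2D region. Any two non-parallel streets cross about once,
# so crossings ~ C(k, 2) * m^2. A toy model, not from the book.

def crossings(k_directions: int, m_streets: int) -> int:
    return comb(k_directions, 2) * m_streets * m_streets

m = 10
for k in (2, 3, 4, 6):
    print(k, "directions ->", crossings(k, m), "intersections")
# 2 directions -> 100, 4 directions -> 600: adding the NW-SE and NE-SW
# arrays multiplies the flow-interrupting intersections several times over,
# unless each existing intersection instead juggles twice as many roads.
```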

Planiverse presents some designs with a few such subsystem intersections, such as “zipper” organs inside organisms that let volumes alternate between being used for structural support and for transporting fluids, and a similar mechanism in buildings. It also shows how switches can let signal wires cross each other. But it doesn’t really take seriously the difficulty of having 16 or more subsystem volumes, all of which need to cross each other to function. The designs shown describe only a few subsystems.

If I look at the organisms, machines, buildings, and cities in my world, most of them just have far more parts with much more detail than I see in Planiverse design sketches. So I think that in a real 2D world these would all just have to be a lot more intricate and complicated, a complexity that would be much harder to manage because of all these intersection-induced subsystem dependencies. I’m not saying that life or civilization there is impossible, but we’d need to be looking at far larger and more complicated designs.

Thinking about this did make me consider how one might minimize such design complexity. And one robust solution is: packets. For example, in Planiverse, instead of moving electricity via wires, it is moved via batteries, which can use a general transport system that moves many other kinds of objects. And instead of air pipes, they use air bottles. So the more kinds of subsystems that can be implemented via packets, all transported via the same generic transport system, the less you have to worry about subsystem intersections. Packets are what allow many kinds of signal systems to share the same internet communication network. Even compression structural support can in principle be implemented via mass packets flying back and forth.
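As a sketch of the packet idea: instead of one dedicated network volume per subsystem, tag each payload with its subsystem type and push everything through a single shared transport network. The classes and names below are mine, just to illustrate the multiplexing.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    subsystem: str   # which subsystem this payload serves
    payload: str     # e.g. a charged battery, an air bottle, a letter, bits

class SharedTransport:
    """One generic network moving packets for many subsystems at once."""
    def __init__(self):
        self.line = deque()

    def send(self, packet: Packet):
        self.line.append(packet)

    def deliver(self):
        while self.line:
            p = self.line.popleft()
            print(f"[{p.subsystem}] delivered: {p.payload}")

net = SharedTransport()
net.send(Packet("electricity", "charged battery"))
net.send(Packet("fresh air", "air bottle"))
net.send(Packet("mail", "letter"))
net.send(Packet("internet", "data frame"))
net.deliver()
# Only this one network needs a connected volume reaching every point;
# the other subsystems ride on it as packet types, so their volumes
# never need to cross each other.
```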

In 1KD dimensions, there is plenty of volume for different subsystems to each have their own connected volume. The problem there is that it is crazy expensive to put walls around such volumes. Each subsystem might have its own set of wires along which signals and materials are moved, but then the problem is to keep these wires from floating away and bumping into each other. It seems better to share a few common sets of wires, with each subsystem using its own kind of packets moving along those wires. Thus outside of our 3D world, the key to designing systems with many different kinds of subsystems seems to be packets.

In low D, one pushes different kinds of packets through tubes, while in high D, one drags different kinds of packets along, attached to wires. Packets moving along wires win for 1KD. Though as of yet I have no idea how to attach packets so they can move along a structure of wires in 1KD. Can anyone figure that out please?
