Why Not Wait On AI Risk?

Years ago when the AI risk conversation was just starting, I was a relative skeptic, but I was part of the conversation. Since then, the conversation has become much larger, but I seem no longer part of it; it has been years since others in this conversation engaged me on it.

Clearly most who write on this do not sit close to my views, though I may sit closer to most who’ve considered getting into this topic but instead found better things to do. (Far more resources are available to support advocates than skeptics.) So yes, I may be missing something that they all get. Furthermore, I’ve admittedly read only a small fraction of the huge amount written in this area since. Even so, I feel I should periodically try again to explain my reasoning, and ask others to please help show me what I’m missing.

The future AI scenario that treats “AI” most like prior wide tech categories (e.g., “energy” or “transport”) goes as follows. AI systems are available from many competing suppliers at similar prices, and their similar abilities increase gradually over time. Abilities don’t increase faster than customers can usefully apply them. Problems are mostly dealt with as they appear, instead of anticipated far in advance. Such systems slowly displace humans on specific tasks, and are on average roughly as task specialized as humans are now. AI firms distinguish themselves via the different tasks their systems do.

The places and groups who adopt such systems first are those flexible and rich enough to afford them, and who hold other complementary capital. Those who invest in AI capital on average gain from their investments. Those who invested in displaced capital may lose, though over the last two decades more-automated jobs have seen no average effect on their wages or numbers of workers. AI today makes only a rather minor contribution to our economy (<5%), and it has quite a long way to go before it can make a large one. And we today have only vague ideas of what AIs that made a much larger contribution would look like.

Today most of the ways that humans help and harm each other are via our relations. Such as: customer-supplier, employer-employee, citizen-politician, defendant-plaintiff, friend-friend, parent-child, lover-lover, victim-criminal-police-prosecutor-judge, army-army, slave-owner, and competitors. So as AIs replace humans in these roles, the main ways that AIs help and hurt humans are likely to also be via these roles.

Our usual story is that such hurt is limited by competition. For example, each army is limited by all the other armies that might oppose it. And your employer and landlord are limited in exploiting you by your option to switch to other employers and landlords. So unless AI makes such competition much less effective at limiting harms, it is hard to see how AI makes role-mediated harms worse. Sure, smart AIs might be smarter than humans, but they will have other AI competitors, and humans will have AI advisors. Humans don’t seem much worse off recently as firms and governments, which are far more intelligent than individual humans, have taken over many roles.

AI risk folks are especially concerned with losing control over AIs. But consider, for example, an AI hired by a taxi firm to do its scheduling. If such an AI stopped scheduling passengers to be picked up where they waited and delivered to where they wanted to go, the firm would notice quickly, and could then fire and replace this AI. But what if an AI who ran such a firm became unresponsive to its investors? Or an AI who ran an army became unresponsive to its overseeing government? In both cases, while such investors or governments might be able to cut off some outside supplies of resources, the AI might do substantial damage before such cutoffs bled it dry.

However, our world today is well acquainted with the prospect of “coups” wherein firm or army management becomes unresponsive to its relevant owners. Not only do our usual methods usually seem sufficient to the task, we don’t see much of an externality re these problems. You try to keep your firm under control, and I try to keep mine, but I’m not especially threatened by your losing control of yours. We care a bit more about others losing control of their cars, planes, or nuclear power plants, as those might hurt bystanders. But we care much less once others show us sufficient liability, and liability insurance, to cover our losses in such cases.

I don’t see why I should be much more worried about your losing control of your firm, or army, to an AI than to a human or group of humans. And liability insurance also seems a sufficient answer to your possibly losing control of an AI driving your car or plane. Furthermore, I don’t see why it’s worth putting much effort into planning how to control AIs far in advance of seeing much detail about how AIs actually do concrete tasks where loss of control matters. Knowing such detail has usually been key to controlling past systems, and money invested now, instead of spent on analysis now, gives us far more to spend on analysis later.

All of the above has been based on assuming that AI will be similar to past techs in how it diffuses and advances. Some say that AI might be different, just because, hey, anything might be different. Others, like my ex-co-blogger Eliezer Yudkowsky, and Nick Bostrom in his book Superintelligence, say more about why they expect advances at the scope of AGI to be far more lumpy than we’ve seen for most techs.

Yudkowsky paints a “foom” picture of a world full of familiar weak stupid slowly improving computers, until suddenly and unexpectedly a single super-smart uncontrolled AGI with very powerful general abilities appears and is able to decisively overwhelm all other powers on Earth. Alternatively, he claims (quite implausibly I think) that all AGIs naturally coordinate to merge into a single system to defeat competition-based checks.

These folks seem to envision a few key discrete breakthrough insights that allow the first team that finds them to suddenly catapult their AI into abilities far beyond all other then-current systems. Big breakthroughs relative to the broad category of “mental tasks”, even bigger than if we found big breakthroughs relative to the less broad tech categories of “energy”, “transport”, or “shelter”. Yes of course change is often lumpy if we look at small tech scopes, but lumpy local changes aggregate into smoother change over wider scopes.

As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen of innovation in general, and more particularly of tools, computers, AI, and even machine learning. And to postulate much more of a lumpy conceptual essence re the keys to “betterness” than I find plausible. Machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.

I don’t mind groups with small relative budgets exploring scenarios with proportionally small chances, but I lament such a large fraction of those willing to take the long term future seriously using this as their default AI scenario. And while I get why people like Yudkowsky focus on scenarios in which they fervently believe, I am honestly puzzled why so many AI risk experts seem to repudiate his extreme scenarios, and yet still see AI risk as a terribly important project to pursue right now. If AI isn’t unusually lumpy, then why are early efforts at AI control design especially valuable?

So far I’ve mentioned two widely expressed AI concerns. First, AIs may hurt human workers by displacing them, and second, AIs may start coups wherein they wrest control of some resources from their owners. A third widely expressed concern is that the world today may be stable, and contain value, only due to somewhat random and fragile configurations of culture, habits, beliefs, attitudes, institutions, values, etc. If so, our world may break if this stuff drifts out of a safe and stable range for such configurations. AI might be or facilitate such a change, and by helping to accelerate change, AI might accelerate the rate of configuration drift.

Similar concerns have often been expressed about allowing too many foreigners to immigrate into a society, or allowing the next youthful generation too much freedom to question and change inherited traditions. Or allowing many other specific transformative techs, like genetic engineering, fusion energy, social media, or space. Or other big social changes, like gay marriage.

Many have deep and reasonable fears regarding big long-term change. And some seek to design AI so that it won’t allow excessive change. But this issue seems to me much more about change in general than about AI in particular. People focused on these concerns should be looking to stop or greatly limit and slow change in general, and not focus so much on AI. Big change can also happen without AI.

So what am I missing? Why would AI advances be so vastly more lumpy than prior tech advances as to justify very early control efforts? Or if not, why are AI risk efforts a priority now?


Re An Accused, Tell The Truth

Agnes Callard says we should not fight her cancellation:

Within the mob there is no justice and no argument and no reasoning, no space for inquiry or investigation. The only good move is not to play. … If I am being canceled I want my friends … to stand by, remain silent, and do nothing. If you care about me, let them eat me alive. … The expectation that one’s friends exhibit the “courage” to speak up on one’s behalf, the inclination to see the cancellation as a test of the friendship, which suddenly requires proofs of loyalty — these are the first step on the road to the friend purge.

Here is how it goes: a few of the cancelee’s friends meet the expectation to speak up in support, but those who remain silent — which is most of them — become suspect. New, publicly aligned friends are acquired to take their place. The beleaguered cancelee now feels she sees who her “real friends” are, but in fact she has no friends anymore. All she has are allies. First she turned her friends, and perhaps even her family members, into allies; and then she acquired more allies to fill the ranks of the purged friends. The end result is a united front, but what I would call real friendship has gone missing in the bargain. I do not want any of that. I want friends who feel free to disagree with me both publicly and privately.

If I were accused of a crime, I wouldn’t want my friends to protest outside the courthouse, at least at first; I’d want to give the legal system a chance. But if my associates were called on to testify about me, I’d want them to comply, and to tell the truth as they saw it. Not to say whatever would seem to “support” me, but just to tell the truth.

Humans have only had legal systems for the last ten thousand years or so. For a million years before that, we had mob justice, which worked better than no justice, even if not as well as legal justice. (If you doubt this, consider the lack of justice among non-human primates.) Today we still handle some kinds of accusations and punishments via mobs. I’d rather we handled them via law, but given that some accusations are handled by mobs, I’d still want to help mob justice to work as well as possible. Mob justice is in fact possible, and legitimate.

Under mob justice, there is no central authority to subpoena witnesses. So people must instead volunteer their relevant testimony. But such testimony still functions as in legal trials to appropriately influence mob jury verdicts. Thus if I were accused under mob justice (as has in fact happened to me in the past), I’d want my associates to offer testimony relevant to that accusation. Not loyal ally support, but to just tell the relevant truth.

For example, many recent mob justice accusations have been of the form that someone’s statement is a “dog whistle”, purposely done to express nefarious beliefs or allegiances. Thus intent is relevant here, and intent is something on which close associates are often especially qualified to testify. The mob jury can thus reasonably want to hear associates’ take. Given what you know about this person’s views and styles, how plausible is it that their statement was in fact intended to express the alleged beliefs or connections?

We humans are often far more willing to say positive than negative things about associates. But this can work out okay, as we commonly infer negative things from the unwillingness to say positive things. For example, when asked for a recommendation re a previous worker, many employers are willing to express honest positive opinions, but will decline to say anything if their opinion is negative.

I have at times had private contact with people who actually hold views that, at least in a technical sense, might reasonably be labeled as racist or sexist. So if I had to answer the question of whether an expression of theirs might plausibly express such views, my honest answer would have to be yes. But if I had the option, I’d try to instead just say nothing about the subject. But for most of my associates, I’d happily say that such an interpretation is quite implausible, given what I know about them.

In this sort of context, Callard’s request for silence from her friends would hinder mob justice, and make it more likely to go awry against her. The silence of her friends (among whom I count myself) would likely, and reasonably, be taken by the mob jury as evidence against her. I get that she is willing to accept this cost, for the cause of preventing the friend purge process that she reasonably detests. But I will hold my friends to a higher standard: don’t just support me unconditionally, but instead tell relevant truths.

If you don’t know anything relevant to the accusation, then yes stay silent. But if you have testimony relevant to the accusations against me, then speak up. Politely, calmly, and with appropriate qualifiers and doubts, but truthfully. Please friends, enemies, and others, in any trial, done at court or before a mob, just tell the relevant truth.


You Owe Your Parents Grandkids

Humans have long respected a reciprocity norm: after A does something nice for B, then B is expected to do something nice for A. Yes, how nice a response, and how strongly or visibly we expect this, varies with context. For example, it depends on the prior relation between A and B, on who is aware of these nice things, on the relative costs to A and B of their nice things, on the kinds of nice things done, on if A was authorized to do their nice thing, on if A did their thing hoping for a reciprocal response, on if A could have or did propose an explicit trade, and on if B accepted such a proposal.

At one extreme we have legally enforced debts, which are excused only in extreme situations (e.g., bankruptcy), while at the other extreme we have only weakly enforced social norms. For example, often when two people in sequence enter two doors in quick succession, the first person holds open the first door for the second person, and then that second person is expected to hold open the second door for the first person. They won’t be arrested if they fail, and observers might excuse them if they seem to be in an unusual rush, or less physically able to open doors. But otherwise observers may tend to think a bit less of them.

The Hare Krishna religion once famously gamed this effect by offering flowers to passersby in airports, and then holding out bags for reciprocal donations. This worked, at least for a while.

Note that in many kinds of relations we can prefer that A and B have other motives for doing nice things for each other, besides the threat of censure for violating reciprocity norms. Even so, observers may still disapprove if they see a very lopsided relation, with one doing far more for the other than vice versa.

Note also that A explicitly asking B for a trade of favors is not a strong requirement here, even for legally enforced debts. For example, hospitals can charge for the help they give to people brought to them unconscious. Rescuers can charge for rescuing those who didn’t ask to be rescued. If, while you were on vacation, a contractor accidentally replaced the wrong house’s roof, and did yours instead, you can still be forced to pay for it, if you benefited thereby. This happens under the ancient and well-established law against “unjust enrichment”.

All of which brings me to my Father’s Day theme: you owe your parents some grandkids. They did something very nice for you by creating you. (Yes, a few of you might be exceptions who were hurt by this, but only a few.) Yes, they didn’t ask you first, but they couldn’t have asked you first. (That will be different for ems, who can be asked before making a new copy.) And not being asked first is only one of many factors that can weaken, but not usually eliminate, your debt.

Most parents did in fact hope that creating kids might lead to grandkids, and grandkids are one of the things parents most often and greatly hope for from their kids. Yes, you might be excused if your parents were especially mean to you in other ways, or if having grandkids would be a special hardship for you. Yes, we might not want to make this debt legally enforceable, and yes it might be better if you did such things out of generosity or gratitude, rather than out of feelings of obligation. But even so, you do owe them grandkids, even if only a bit.

You exist only because of an unbroken chain of parents that goes back to the origin of life. I like to compare this to the “human chains” often used to rescue people drowning in the ocean. If you are in such a chain, you have an obligation not to let go of those next to you in the chain, as everyone after you may then drown. I tend to think that you have similar obligations to your chain of ancestors and descendants; if you don’t have kids, all of the chain after you won’t exist.

Yes, many people (falsely) think that you can’t have obligations to people who don’t yet exist. Which is why I like to point them to this obligation to their parents, who very much do or did exist. Most of your parents, and their parents, all the way back along the chain, wanted you to continue the chain. And you owe them this, at least a bit.

Added: In the case of the two doors, I’d say you have an obligation to put substantial weight in your decision-making on trying to reciprocate. But if even with adding that weight, it turns out that you are better off not opening that second door, you are excused. This is why we excuse those who are in a rush, or who have less physical ability to open the door.


UFOs as US PsychOp

UFOs are objects in the sky that often seem to display amazing physical abilities. We have four main categories of theories to explain them. Under two kinds of theories, these abilities are illusory: (A) They could be errors, delusions, and misunderstandings, like seeing Venus or swamp gas. Or (B), they could be lies and hoaxes, intended to fool others. Under the other two kinds of theories, these abilities are real: (C) They are from aliens. Or (D) they are from Earth orgs that have been hiding their amazing abilities.

To estimate the chance of each kind of theory, one should multiply a prior chance for each type, a chance which ignores any concrete evidence of actual sightings, times the chance that, if that theory were true, we’d see the sort of sightings evidence that we do. (Renormalize these products to get posteriors.) A year ago I argued that the prior chance on the aliens theory wasn’t nearly as low as many seem to think; it’s much higher than in a typical murder trial, for example. So you have to actually look at the sightings evidence to judge its posterior chance, just as you would in a murder trial.
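
To make that recipe concrete, here is a minimal Python sketch of the multiply-then-renormalize step. All the priors and likelihoods here are hypothetical placeholders, not the estimates argued for in these posts:

```python
# A minimal sketch of the estimate-each-theory recipe above.
# All priors and likelihoods are hypothetical placeholders.
priors = {"error": 0.95, "hoax": 0.01, "aliens": 0.002, "hidden_org": 0.038}
# Chance of seeing the actual sightings evidence, if each theory were true:
likelihoods = {"error": 0.05, "hoax": 0.40, "aliens": 0.30, "hidden_org": 0.10}

products = {t: priors[t] * likelihoods[t] for t in priors}
total = sum(products.values())
posteriors = {t: round(p / total, 3) for t, p in products.items()}
print(posteriors)  # renormalized products; they sum to ~1
```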

Today I want to consider the hoax category. In particular I want to consider the following hoax conspiracy theory: some part of the US government has, since the 1940s, had a long-term campaign to pay people to lie about seeing UFOs, and to make fake evidence to fool others into thinking they saw objects with amazing abilities. Of course once enough people heard about these events, then most reported sightings after that might be errors, delusions, and misunderstandings.

During the Second World War, the US government managed some pretty large and effective conspiracies, such as the Manhattan project and the many ways it misled Germany about our D-Day invasion. Thus they seem to have been roughly capable back then of managing a large UFO conspiracy. But, yes, the duration of this purported conspiracy would be much longer than in these prior examples.

Those prior successful conspiracies were also more closely related to military activities. Do we have evidence of a US ability or inclination regarding conspiracies more distant from military operations? Yes, in fact. The US had large and successful efforts, kept hidden for many decades, to move the fashion in art and writing away from Soviet styles, toward “modern” US styles. To make the US seem more prestigious relative to the USSR in the world’s eyes.

Okay, but what might the US government see itself as standing to gain from such a conspiracy? First, it is already known that the US government has promoted UFO groups in particular areas, and fed them false info on UFOs. They did this to “muddy the waters” regarding new tech that the US was developing and testing in the skies. Spies of foreign powers might plausibly hang around near US testing facilities, and ask around for reports of strange sightings. Such spies would get less useful info if local UFO groups are inclined to report many strange things unrelated to US tech testing.

In the Cold War, a big priority of the US military was to discourage enemies from launching a nuclear war against the US. And as an enemy is more likely to attack when they feel more confident of the consequences of their attack, one way to discourage such attacks is to muddy the waters re US military abilities, and re other possible powers who might react to such an attack. So if the US could get enemy leaders to take UFO reports seriously, it could get those leaders to worry that they have underestimated US abilities, or that there are other hidden powers around.

Many UFO reports and interpretations have given the impression that the powers behind UFOs are especially interested in nuclear power and nuclear weapons, and that they fear or disapprove of such things. Enemy leaders who give credence to such reports might then fear that, if they initiated a nuclear attack, they’d suffer retaliation from such powers. Or maybe they’d just step in to take control after such a war weakened all of its participants.

I estimate roughly a one percent prior for this scenario, which is substantially higher than the prior I assigned to UFOs as aliens. Furthermore, this theory seems to quite naturally account for the key puzzles I struggled to explain regarding an aliens theory, namely that they are here while the universe looks empty, and that they stay near the limits of our vision, neither making themselves clearly visible nor completely invisible. This hoax category thus has the strongest posterior, in my view. (Yes I haven’t discussed the other two theory categories much; maybe I’ll say more on those some other day.)

Note that conditional on this UFO as US psychop theory being true, we should give more credence to other US conspiracy theories, such as that the US faked the moon landings. I thus now give more credence to this, even if I still see it as less likely than not. And conditional on believing other such theories, this UFO as US psychop theory becomes more believable.

Added 10a: As the two US WWII secrets I mentioned were kept for only a few years, some say large orgs can’t keep secrets longer than that. But the US kept secret for 41 years that it faked its excuse for the Vietnam war, and for 46 years that it spied on citizen phone calls via Project Minaret. KFC has kept its recipe secret for 70 years, and Coca-Cola has kept its secret for 130 years. Venice kept its glass-blowing methods secret for many centuries, and China kept secret its methods of making both silk and porcelain for over a thousand years.

Yes, there’s a difference between hidden and known secrets, but all else equal known secrets should be more vulnerable, as the curious can focus their efforts on revealing them. Much harder to focus efforts when searching for hidden secrets.


Why Be Romantic

Romantic Beliefs Scale … [from] four beliefs: … Love Finds a Way, One & Only, Idealization, & Love at First Sight. Men were generally more romantic than women, & femininity was a stronger predictor of romanticism than was masculinity. (more)

Classical understanding is rational, scientific, unemotional, cerebral, and technologically savvy. … A romantic, oppositely, is intuitive, emotional, creative, and artistically inclined. He is more concerned with immediate appearances than underlying forms—he values aesthetics over utility. (more)

Romanticism was characterized by its emphasis on emotion and individualism, idealization of nature, suspicion of science and industrialization, and glorification of the past with a strong preference for the medieval rather than the classical. It was partly a reaction to the Industrial Revolution, the social and political norms of the Age of Enlightenment, and the scientific rationalization of nature—all components of modernity. It was embodied most strongly in the visual arts, music, and literature, … emphasized intense emotion as an authentic source of aesthetic experience, placing new emphasis on such emotions as fear, horror and terror, and awe — especially that experienced in confronting the new aesthetic categories of the sublime and beauty of nature. It elevated folk art and ancient custom to something noble, but also spontaneity as a desirable characteristic. (more)

Regarding mating, a romantic embraces their immediate strong hopeful feelings re particular partners and the mating process. A realist might warn against such embraces, suggesting that the guy you are with might not be so great for you, or care so much about you, and warning against future consequences of excess trust. But the romantic is less willing to embrace abstract thought or analysis regarding the consequences of following their feelings.

Outside of mating relations, a romantic tends to embrace immediate strong feelings regarding other things, and resists abstract analysis contrary to such feelings. So, for example, if their immediate feelings say that we help people via a min wage or rent control, or via their fav feel-good charity, they resist abstract analysis to the contrary. If their immediate feelings say that some human enhancement is an abomination against nature/God, or say either to trust, or not trust, foreigners, depending on the framing, then they are inclined to just stick with such initial judgments.

I can see three kinds of situations where this stance makes sense. First, when your priority is to show loyalty to associates who have limited ability or inclination to attend to abstract analysis, then you can want to stick with your initial feelings on topics to a similar extent as you expect from them. Hesitating or reasoning abstractly may be taken by them as an excuse to evade your initial feelings, or to hide the fact that they were other than they should have been.

Second, you might not be very good at abstract analysis, at least relative to distrusted parties who might try to trick you with misleading analysis, and you might not trust the systems of analysis they use. Wary of such misleading guidance, you might want to just stick with your initial feelings. Or you might see your initial feelings as so reliable that there’s little point in considering more. Or you might think the topic of so little importance that it is worth little more consideration.

Third, you might want to hold fast to your motivations. Often the world is so eager to be and seem practical and reasonable that we suppress our contrary feelings and then lose track of what we care about. Even in areas of art where there might seem to be no reason not to fully embrace our feelings, we often find reasons to want our art to seem more reasonable. We can then go through the motions of doing stuff without really knowing how or if it matters to us. You might avoid this if you hold fast to your strong feelings and act on them. Overturning the inclinations of your feelings based on abstract analysis risks your suppressing, and eventually losing your grip on, those feelings.

One puzzle about romanticism is why the recent past tends to seem the most romantic. For example, objects and stories intended to evoke romantic feelings often try to evoke our few-generations-previous past. Perhaps this is because we naturally have strong feelings toward our “elders”, i.e., our parents, teachers, and mentors, and toward the worlds toward which they had strong feelings. Maybe we embrace our attachment to their worlds as a way to hold and show loyalty to these elders.

Also, compared to the future and the more distant past, this few-gen past has the most available detail to which we can become attached and anchored. So the romanticism puzzle is really limited to comparing that few-gen past to the present and its more recent past.


Beware Cosmic Errors

Imagine that you came across an enormous dry grassland, continuously covered with dense grass. It seems to go on for thousands of miles in all directions, and historical records suggest that it has been in this same dry state for millions of years. You conclude that if a spark had touched it anywhere anytime during that period, a fire would have begun that would eventually spread across the entire grassland. 

In this situation you either have to believe that sparks are extremely unlikely, so that for example lightning is just a very rare thing in this world. Or you have to conclude that appearances are deceiving; there are many wide barriers that limit the spread of fire in space, or there are serious defects in your historical record. Either sparks almost never happen, fire starting in one place does not spread to the entire grasslands, or fires do periodically spread everywhere but quickly burn out and then their historical records are quickly erased. 

Now imagine that you came across an enormous pleasantly-wet mildly-windy barren land; it seems to be a millions-of-years stable continuum of sand that goes on for thousands of miles. You can tell from lab tests that this wet sand could serve as fertile soil. That is, it has sufficient nutrients, water, sunlight, temperature, pressure, etc. to enable some kinds of grass seed to grow into grass plants that send out more seeds. And yet this land has apparently remained empty and barren for millions of years; it holds neither grass nor other life that might evolve from grass.

In this situation, you either have to believe that almost no grass seed has ever fallen on this land for millions of years, or that the appearance of a stable continuum of sand is seriously misleading. Perhaps there are wide strong hidden barriers to the dispersal of seeds, such as wide barrier regions of no wind. Or perhaps some big disaster happens periodically to kill basically all seeds across this entire connected land, and then later all historical records of both the prior seeds and the event that killed them are erased.

Imagine further in these situations that we the observers making these observations and drawing these conclusions are in fact made out of, or closely connected to, fire in the first case, and seeds in the second. We would then have to believe either that our origins are extremely, crazily rare, that we will remain permanently isolated behind travel barriers, or that soon we will suffer a quite thorough death that erases most all records of our existence.

These imaginary scenarios seem close analogues to humanity’s actual situation in the cosmos today, except that now we are talking about a period of fourteen billion years and a scope billions of lightyears wide. We seem to be close to becoming part of a fire or seed that would be capable of spreading across the cosmos, burning most all, or turning most all to grass or to some descendant life. And yet our historical records seem to be good enough to tell us that no such fire has yet happened, or that almost none of it has been turned to grass or descendant life.

We must then conclude that either (A) we are not remotely as close to these expansion abilities as we think, (B) the appearance of life like us is extremely rare, or we are seriously mistaken about either (C) the feasibility of long-distance travel, or (D) the absence of frequent cosmos-wide disasters that kill everything. Yes we do know of substantial obstacles to our future evolution and long-distance travel, and of periodic large disasters that would kill many things. But our best understanding is that these evolution and travel obstacles can be plausibly overcome, and that these large disasters have a quite limited scope.

Our grabby aliens analysis suggests aliens who spread across the cosmos as would a fire or grass are in fact quite rare. They appear roughly once per million galaxies, and appear in time according to a power law that emphasizes later times which we are less able to see from here now; we’ll meet them in roughly a billion years if we expand. But in this post I want to remind us of other possibilities; maybe our future evolution or long-distance travel are much harder than they seem, or maybe there are hidden disasters much more severe and frequent than we suspect. Beware. 


Why Allow Line Cutting?

People often cut in line:

“May I use the Xerox machine?”—enabled them to cut 60% of the time. Adding that they were rushed allowed them to cut 94% of the time. And “May I use the Xerox machine, because I need to make copies?” was almost as effective, despite its flimsiness. …

The person directly behind an intrusion usually gets to decide whether to allow it. … If that person doesn’t object, other queuers tend to stay quiet. (more)

A person cutting in line has a 54% chance that others in the line will object. With two people cutting in line, there is a 91.3% chance that someone will object. The proportion of people objecting from anywhere behind the cutter is 73.3%, with the person immediately behind the point of intrusion objecting most frequently. Nevertheless, physical altercation resulting from cutting is rare. …

Some passengers who do not normally use a wheelchair request one, to pass through security checks quickly and to be among the first to board an aircraft. At the conclusion of the flight, these passengers walk off the aircraft, instead of waiting for a wheelchair and thus being among the last to disembark. (more)

Here are three related examples I’ve witnessed:

On a freeway traffic is moving swiftly, but at a particular exit there is a line of cars twenty long waiting to exit. But a third of the cars skip the line, go up to the front of the exit, and then try to cut in. Even if a given car won’t let them in, one of the next two cars in line usually will.

On an airplane, when it is time to disembark, as soon as the seatbelt light goes off some passengers jump out of their seats and rush as far forward as they can, before others have gotten up out of their seats to block such movement.

At the front of an airport, three rows of cars are basically parked waiting to take away passengers on arriving flights. They sit there for up to thirty minutes, blocking traffic, and once their passengers arrive they take up to ten minutes more in a happy reunion. Airports have rules against this, and officials often blow a whistle at such cars to move on, but are satisfied if they just move down a car length or two. None are arrested or penalized in any way.

Why do people let others cut in line? The main explanation I can find offered is that people are nice to those with stronger needs:

Experimenters equipped with small bills approached 500 people in lines and offered a cash payment of up to $10 to cut in. … line-holders allowed the person to cut in but most wouldn’t accept the money in return. … took this to mean that people will allow cuts if they perceive the queue jumper has a real need to save time. (more)

When customers play the game just once, the only possible priority rule that can emerge is first in, first out; cut-ins must be rejected. But when players engage in repeated games, the pattern changes. Individuals in the line give way to those who appear to have more urgent needs or will require only a minimum of service time. (more)

This all seems to me more likely an example of hidden motives. While we like to claim that we are being nice, I suggest that we are avoiding confrontation. When someone makes an apparently aggressive move at our expense, we can either oppose them and risk a confrontation, or give in and avoid confrontation. Giving in is much easier for us when we have the excuse of how doing so is in fact us being nice.

We will often let people walk all over us as long as we can pretend we are thereby being nice. Even those tasked with enforcing rules against line cutting prefer to avoid confrontation. We all somehow seem to embrace the norm that those willing to risk confrontation should get their way, even if at others’ expense. We accept the dominance of those willing to try to dominate.


Argument Selection Bias

One strategy to decide what to believe about X is to add up all the pro and con arguments that one is aware of regarding X, weighing each by its internal strength. Yes, it might not be obvious how to judge and combine arguments. But this strategy has a bigger problem; it risks a selection bias. What if the process that makes you aware of arguments has selected non-randomly from all the possible arguments?
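
As an illustration, here is a toy Python simulation of this worry, under assumptions added purely for the example: the underlying pool of arguments is truly balanced between pro-X and con-X, but pro arguments come to one’s attention more often:

```python
# A toy simulation of argument selection bias: the pool of arguments
# is truly balanced, but awareness is skewed toward pro-X arguments.
import random

random.seed(0)

def perceived_balance(aware_pro=0.9, aware_con=0.3, n=100_000):
    """Average signed strength of the arguments one becomes aware of."""
    total = 0.0
    for _ in range(n):
        strength = random.random()    # internal strength of one argument
        pro = random.random() < 0.5   # truly balanced pro/con mix
        if random.random() < (aware_pro if pro else aware_con):
            total += strength if pro else -strength
    return total / n

print(perceived_balance())          # ~ +0.15: X looks well supported
print(perceived_balance(0.5, 0.5))  # ~ 0: unbiased awareness
```

With symmetric awareness the weighed-up sum hovers near zero, but with skewed awareness X looks clearly supported, even though the full pool of arguments is balanced.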

One solution is to focus on very simple arguments. You might be able to exhaustively consider all arguments below some threshold of simplicity. However, here you still have to worry that simple arguments tend to favor a particular side of X. For example, if the question is “Is there some complex technical solution to simple problem X”, it may not work well to exclude all complex technical solution proposals.

We often see situations where far more effort seems to go into finding, honing, and publicizing pro-X arguments, relative to anti-X arguments. In this case the key question is what processes induced those asymmetric efforts. For example, as the left tends to dominate the high end of academia, very academic policy arguments strongly favor left policies. So the question is: what process induced such people to become left?

If new academics started out equally distributed on the left and right, and then searched among academic arguments, becoming more left only as they discovered mainly only left arguments in that space, then we wouldn’t have so much of a selection bias to worry about. However, if the initial distribution of academics leans heavily left for non-argument reasons, then there could be a big selection bias among very academic arguments, even if not perhaps among the arguments that induced people to become academics in the first place.

Often there are claims X where not only does most everyone support X, most everyone is also eager to repeat arguments favoring X, to identify and repudiate any who oppose X, and to ridicule their supporting arguments. In these cases, there is far less energy and effort available to find, hone, and express anti-X claims. For example, consider topics related to racism, sexism, pedophilia, inequality, IQ, genes, or the value of school and medicine. In these cases we should expect strong selection biases favoring X, and thus for weight-of-argument purposes we should adjust our opinions to less favor these X.

However, sometimes there are contrarian claims X where far more effort goes into finding, honing, and expressing arguments supporting X. Consider the claims of 911-truthers, for example. Here we should expect a bias against X among the simple arguments that most people would use to justify their dismissing X, but a bias favoring X among the more complex arguments that 911-truthers would find when studying the many details close to the issue.

What if a topic is local, of interest only to your immediate associates? In this case you should expect a bias favoring those who are more motivated to want others to believe X, and favoring those who are just generally better at finding, honing, and expressing arguments. Thus being known to be good at arguing should generally make one less effective at persuading associates.

In larger social worlds, however, where arguments can pass through many intermediaries, it won’t work as well to discount arguments by the abilities of their sources. In that case one will have to discount arguments based on overall features of the communities who favor and oppose X. Here those who are especially good at arguing will be especially tempted to join such discussions, as their audience is less able to apply personal discounts regarding their arguing abilities.

In all of these cases, we would ideally adjust our standards for discounting beliefs continuously, with the many parameters by which we estimate context-dependent selection biases. But we may sometimes instead feel constrained in our abilities to make such adjustments. Our lower level mental processes may just weigh up the arguments they hear without applying enough discounts.

In which case we might just want to limit our exposure to the sources that we expect to be unusually subject to favorable selection biases. This may sometimes justify common practices of sticking one’s head in the sand, and fingers in one’s ears, regarding suspect sources. And we might also reasonably show a “perverse” forbidden-fruit fascination with hearing arguments that favor forbidden views.


You Choose Inequality

A simple but reasonable definition of inequality says that moving any part of a distribution toward its median value (while holding the rest of the distribution constant) reduces the inequality in that distribution. Moving a part away from the median value increases inequality.
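
As a quick check on this definition, here is a small Python sketch, using mean absolute deviation from the median as one simple (illustrative, not uniquely correct) inequality measure:

```python
# Mean absolute deviation from the median, as one illustrative measure
# consistent with the median-based definition above.
from statistics import median

def inequality(xs):
    m = median(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

incomes = [500, 1000, 1000, 4000, 20000]  # hypothetical; median = 1000
richer  = [500, 1000, 1000, 4000, 30000]  # one above-median value moves up
print(inequality(incomes) < inequality(richer))  # True: inequality rose
```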

The median adult income worldwide is ~$1000 ($3000 per household), and median wealth is $7500. If you make/have more than those amounts, and if you are trying to increase your personal income and wealth, then if successful your efforts will increase inequality in those distributions. The same applies for any distribution where you are above the median; your efforts to increase your personal value are efforts to increase inequality.

Thus you are trying to increase inequality if you try to increase your number of Twitter followers but have more than 200 now. And that’s compared to other people on Twitter. Compared to the median human, even going from zero to one Twitter follower increases inequality.

The median firm has four employees, so if your firm is larger, and you try to grow your firm, you are trying to increase inequality across firms. The median publication has one citation, and the median human has zero publications, so your trying to increase either of those numbers regarding yourself is trying to increase inequality.

Medians for the US are an IQ of 98, reading comprehension at the 7th/8th grade level, 4 books read per year, and 6.3/4.3 lifetime sex partners (M/F, ages 25-49). So if you are in the US, your personal figures are higher, and you try to increase those figures, then you are trying to increase US inequality. And if US medians are higher than world medians, you also increase world inequality, and that’s even true for many lower personal values.

You might try to justify improving any one above-median X by pointing to other Y on which you are below median, saying that you are a loser overall trying to improve your overall position. But are you really a loser compared to all humans alive today, or all humans ever so far, or all creatures ever so far?

Sometimes people try to justify their above-median efforts by claiming that they mainly fight against those who are even higher in the distribution than they. For example, their firm competes mainly with even larger firms, or their publications compete mainly with even more popular publications. But this just can’t be true for as many people as try to claim this justification. So how can we judge who are in fact the rare above-median Robin Hoods, taking from the even richer?

For everyone else, it seems you should admit that either (A) you count for more than others, so that your increases are more worthwhile than theirs, or (B) while reducing inequality is a nice goal, you have judged that it is just not as worthy a goal as just increasing these numbers in general, for anyone and everyone.

Added 6am: Sure, if your efforts to raise yourself happen to also raise the entire rest of the distribution by the same proportion or amount, or cause especially big rises for some below-median folks, then that may not increase inequality. But if your efforts raise yourself more than they raise others, the inequality effect issue remains.


Decision Market Math

Let me share a bit of math I recently figured out regarding decision markets. And let me illustrate it with Fire-The-CEO markets.

Consider two ways that we can split $1 cash into two pieces. One way is: $1 = “$1 if A” + “$1 if not A”, where A is 1 or 0 depending on whether a firm CEO stays in power till the end of the current quarter. Once we know the value of A, exactly one of these two assets can be exchanged for $1; the other is worthless. The chance a of the CEO staying is revealed by trades exchanging one unit of “$1 if A” for a units of $1.

The other way to split is $1 = “$x” + “$(1-x)”, where x is a real number in [0,1], representing the stock price of that firm at quarter end, except rescaled and clipped so that x is always in [0,1]. Once we know the value of x, then one unit of “$x” can be exchanged for x units of $1, while one unit of “$(1-x)” can be exchanged for 1-x units of $1. The expected value x of the stock is revealed by trades exchanging one unit of “$x” for x units of $1.

We can combine this pair of two-way splits into a single four-way split:
$1 = “$x if A” + “$x if not A” + “$(1-x) if A” + “$(1-x) if not A”.
A simple combinatorial trading implementation would keep track of the quantities each user has of these four assets, and allow them to trade some of these assets for others, as long as none of these quantities became negative. The min of these four quantities is the cash amount that a user can walk away with at any time. And at quarter’s end, the rest turn into some amount of cash, which the user can then walk away with.
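
Here is a minimal Python sketch of such an implementation; the asset names and structure are mine, for illustration only, not any actual market’s code:

```python
# A minimal ledger for the four-way split above: each user holds
# quantities of the four assets; trades apply only if no quantity
# would go negative; the min of the four is withdrawable as cash.
ASSETS = ("x_if_A", "x_if_notA", "1mx_if_A", "1mx_if_notA")

class Account:
    def __init__(self, cash):
        # Depositing $1 cash yields one unit of each of the four assets.
        self.q = {asset: float(cash) for asset in ASSETS}

    def trade(self, deltas):
        """Apply a trade (dict of quantity changes) if all stay >= 0."""
        new = {a: self.q[a] + deltas.get(a, 0.0) for a in ASSETS}
        if min(new.values()) < 0:
            raise ValueError("trade would make a quantity negative")
        self.q = new

    def withdrawable_cash(self):
        return min(self.q.values())

acct = Account(cash=10)
# E.g., sell 1 unit of "$1 if A" for a = 0.6 units of $1:
a = 0.6
acct.trade({"x_if_A": a - 1, "1mx_if_A": a - 1,
            "x_if_notA": a, "1mx_if_notA": a})
print(acct.withdrawable_cash())  # 9.6
```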

To advise the firm board on whether to fire the CEO, we are interested in the value that the CEO adds to the firm value. We can define this added value as x1-x2, where
x1 = E[x|A] is revealed by trades exchanging 1 unit of “$x if A” for x1 units of “$1 if A”
x2 = E[x|not A] is revealed by trades exchanging 1 unit of “$x if not A” for x2 units of “$1 if not A”.

In principle users could trade any bundle of these four assets for any other bundle. But three kinds of trades have the special feature of supporting maximal use of user assets in the following sense: when users make trades of only that type, two of their four asset quantities will reach zero at the same time. Reaching zero sets the limit of how far a user can trade in that direction.

To see this, let us define:
d1 = change in quantity of “$x if A”,
d2 = change in quantity of “$x if not A”,
d3 = change in quantity of “$(1-x) if A”,
d4 = change in quantity of “$(1-x) if not A”.

Two of these special kinds of trades correspond to the simple A and x trades that we described above. One kind exchanges 1 unit of “$1 if A” for a units of $1, so that d1=d3, d2=d4, -a*d1=(1-a)*d2. The other kind exchanges 1 unit of “$x” for x units of $1, so that d1=d2, d3=d4, -x*d1=(1-x)*d3.
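
As a sanity check, here is a short Python script, with hypothetical prices, confirming that both of these trade types have zero expected value at the stated ratios:

```python
# Check that the two simple special trades are fair (zero expected
# value) at the stated ratios, using hypothetical prices a, x1, x2.
a, x1, x2 = 0.7, 0.6, 0.5  # hypothetical market prices
E = {  # expected value of one unit of each of the four assets
    "x_if_A": a * x1, "x_if_notA": (1 - a) * x2,
    "1mx_if_A": a * (1 - x1), "1mx_if_notA": (1 - a) * (1 - x2),
}
x = a * x1 + (1 - a) * x2  # E[x]

def ev(deltas):
    return sum(deltas[k] * E[k] for k in deltas)

# Sell 1 unit of "$1 if A" for a units of $1: d1 = d3 = a-1, d2 = d4 = a.
a_trade = {"x_if_A": a - 1, "1mx_if_A": a - 1,
           "x_if_notA": a, "1mx_if_notA": a}
# Sell 1 unit of "$x" for x units of $1: d1 = d2 = x-1, d3 = d4 = x.
x_trade = {"x_if_A": x - 1, "x_if_notA": x - 1,
           "1mx_if_A": x, "1mx_if_notA": x}
assert abs(ev(a_trade)) < 1e-12 and abs(ev(x_trade)) < 1e-12
```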

The third special trade bundles the diagonals of our 2×2 array of assets, so that d1=d4, d2=d3, -q*d1=(1-q)*d2. But what does q mean? That’s the math I worked out: q = (1-a) + (2a-1)*x + 2a(1-a)*r*x, where r = (x1-x2)/x, and x = a*x1 + (1-a)*x2. So when we have market prices a,x from the other two special markets, we can describe trade ratios q in this diagonal market in terms of the more intuitive parameter r, i.e., the percent value the CEO adds to this firm.
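
And here is a quick numeric check of this q formula; the compact form q = a*x1 + (1-a)*(1-x2) used for comparison is an algebraic simplification I’m adding (it is the expected value of the d1=d4 diagonal bundle):

```python
# Numeric check that the q formula above matches the direct
# expected-value computation for the d1=d4 diagonal bundle.
import random

random.seed(1)
for _ in range(10_000):
    a, x1, x2 = (random.uniform(0.01, 0.99) for _ in range(3))
    x = a * x1 + (1 - a) * x2
    r = (x1 - x2) / x
    q_formula = (1 - a) + (2 * a - 1) * x + 2 * a * (1 - a) * r * x
    q_direct = a * x1 + (1 - a) * (1 - x2)  # E["$x if A"] + E["$(1-x) if not A"]
    assert abs(q_formula - q_direct) < 1e-9
print("q formula agrees with direct computation")
```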

When you subsidize markets with many possible dimensions of trade, you don’t have to subsidize all the dimensions equally. So in this case you could subsidize the diagonal q type trades, i.e., those that reveal r, much more than you do the a or x type trades. This would let you take a limited subsidy budget and direct it as much as possible toward the main dimension of interest: this CEO’s added value.
