Tag Archives: Competition

Earth: A Status Report

In a universe that is (so far) almost entirely dead, we find ourselves to be on a rare planet full not only of life, but now also of human-level intelligent self-aware creatures. This makes our planet roughly a once-per-million-galaxy rarity, and if we ever get grabby we can expect to meet other grabby aliens in roughly a billion years.

We see that our world, our minds, and our preferences have been shaped by at least four billion years of natural selection. And we see evolution going especially fast lately, as we humans pioneer many powerful new innovations. Our latest big thing: larger scale organizations, which have induced our current brief dreamtime, wherein we are unusually rich.

For preferences, evolution has given us humans a mix of (a) some robust general preferences, like wanting to be respected and rich, (b) some less robust but deeply embedded preferences, like preferring certain human body shapes, and (c) some less robust but culturally plastic preferences, such as which particular things each culture finds more impressive.

My main reaction to all this is to feel grateful to be a living intelligent creature, who is compatible enough with his world to often get what he wants. Especially to be living in such a rich era. I accept that I and my descendants will long continue to compete (in part by cooperating of course), and that as the world changes evolution will continue to change my descendants, including as needed their values.

Many see this situation quite differently from me, however. For example, “anti-natalists” see life as a terrible crime, as the badness of our pains outweighs the goodness of our pleasures, resulting in net negative value lives. They thus want life on Earth to go extinct. Maybe, they say, it would be okay to only create really-rich better-emotionally-adjusted creatures. But not the humans we have now.

Many kinds of “conservatives” are proud to note that their ancestors changed in order to win prior evolutionary competitions. But they are generally opposed to future such changes. They want only limited changes to our tech, culture, lives, and values; bigger changes seem like abominations to them.

Many “socialists” are furious that some of us are richer and more influential than others. Furious enough to burn down everything if we don’t switch soon to more egalitarian systems of distribution and control. The fact that our existing social systems won difficult prior contests does not carry much weight with them. They insist on big radical changes now, and disavow any failures associated with prior attempts made under their banner. None of that was “real” socialism, you see.

Due to continued global competition, local adoption of anti-natalist, conservative, or socialist agendas seems insufficient to ensure these as global outcomes. Now most fans of these things don’t care much about long term outcomes. But some do. Some of those hope that global social pressures, via global social norms, may be sufficient. And others suggest using stronger global governance.

In fact, our scales of governance, and level of global governance, have been increasing over centuries. Furthermore, over the last half century we have created a world community of elites, wherein global social norms and pressures have strong power.

However, competition at the largest scales has so far been our only robust solution to system rot and suicide, problems that may well apply to systems of global governance or norms. Furthermore, centralized rulers may be reluctant to allow civilization to expand to distant places which they would find it harder to control.

This post resulted from Agnes Callard asking me to comment on Scott Alexander’s essay Meditations On Moloch, wherein he takes similarly stark positions on these grand issues. Alexander is irate that the world is not adopting various utopian solutions to common problems, such as ending corporate welfare, shrinking militaries, and adopting common hospital medical record systems. He seems to blame all of that, and pretty much anything else that has ever gone wrong, on something he personalizes into a monster “Moloch.” And while Alexander isn’t very clear on what exactly that is, my best read is that it is the general phenomenon of competition (at least the bad sort); that at least seems central to most of the examples he gives.

Furthermore, Alexander fears that, in the long run, competition will force our descendants to give up absolutely everything that they value, just to exist. Now he has no empirical or theoretical proof that this will happen; his post is instead mostly a long passionate primal scream expressing his terror at this possibility.

(Yes, he and I are aware that cooperation and competition systems are often nested within each other. The issue here is about the largest outer-most active system.)

Alexander’s solution is:

Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans. … Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

By which Alexander means: start with a tiny weak AI, induce it to “foom” (sudden growth from tiny to huge), resulting in a single “super-intelligent” AI who rules our galaxy with an iron fist, but wrapped in the velvet glove of being “friendly” = “aligned”. By definition, such a creature makes the best possible utopia for us all. Sure, Alexander has no idea how to reliably induce a foom or to create an aligned-through-foom AI, but there are some people pondering these questions (who are generally not very optimistic).

My response: yes of course if we could easily and reliably create a god to manage a utopia where nothing ever goes wrong, maybe we should do so. But I see enormous risks in trying to induce a single AI to grow crazy fast and then conquer everything, and also in trying to control that thing later via pre-foom design. I also fear many other risks of a single global system, including rot, suicide, and preventing expansion.

Yes, we might take this chance if we were quite sure that in the long term all other alternatives result in near zero value, while this remained the only scenario that could result in substantial value. But that just doesn’t seem remotely like our actual situation to me.

Because: competition just isn’t as bad as Alexander fears. And it certainly shouldn’t be blamed for everything that has ever gone wrong. More like: it should be credited for everything that has ever gone right among life and humans.

First, we don’t have good reasons to expect competition, compared to an AI god, to lead more reliably to the extinction either of life or of creatures who value their experiences. Yes, you can fear those outcomes, but I can as easily fear your AI god.

Second, competition has so far reigned over four billion years of Earth life, and at least a half billion years of Earth brains, and on average those seem to have been brain lives worth living. As have been the hundred billion human brain lives so far. So empirically, so far, given pretty long time periods, competition has just not remotely destroyed all value.

Now I suspect that Alexander might respond here thus:

The way that evolution has so far managed to let competing creatures typically achieve their values is by having those values change over time as their worlds change. But I want descendants to continue to achieve their values without having to change those values across generations.

However, I’ve predicted that, relatively soon on evolutionary timescales, given further competition, our descendants will come to just directly and abstractly value reproduction. And then after that, no descendant need ever change their values. But I think even that situation isn’t good enough for Alexander; he wants our (his?) current human values to be the ones that continue and never change.

Now taken very concretely, this seems to require that our descendants never change their tastes in music, movies, or clothes. But I think Alexander has in mind only keeping values the same at some intermediate level of abstraction. Above the level of specific music styles, but below the level of just wanting to reproduce. However, not only has Alexander not been very clear regarding which exact value abstraction level he cares about, I’m not clear on why the rest of us should agree with him about this level, or care as much as he does about it.

For example, what if most of our descendants get so used to communicating via text that they drop talking via sound, and thus also get less interested in music? Oh they like artistic expressions using other mediums, such as text, but music becomes much more of a niche taste, mainly of interest to that fraction of our descendants who still attend a lot to sound.

This doesn’t seem like such a terrible future to me. Certainly not so terrible that we should risk everything to prevent it by trying to appoint an AI god. But if this scenario does actually seem that terrible to you, I guess maybe you should join Alexander’s camp. Unless all changes seem terrible to you, in which case you might join the conservative camp. Or maybe all life seems terrible to you, in which case you might join the anti-natalists.

Me, I accept the likelihood and good-enough-ness of modest “value drift” due to future competition. I’m not saying I have no preferences whatsoever about my descendants’ values. But relative to the plausible range I envision, I don’t feel greatly at risk. And definitely not so much at risk as to make desperate gambles that could go very wrong.

You might ask: if I don’t think making an AI god is the best way to get out of bad equilibria, what do I suggest instead? I’ll give the usual answer: innovation. For most problems, people have thought of plausible candidate solutions. What is usually needed is for people to test those solutions in smaller scale trials. With smaller successes, it gets easier to entice people to coordinate to adopt them.

And how do you get people to try smaller versions? Dare them, inspire them, lead them, whatever works; this isn’t something I’m good at. In the long run, such trials tend to happen anyway, by accident, even when no one is inspired to do them on purpose. But the goal is to speed up that future, via smaller trials of promising innovation concepts.

Added 5Jan: While I was presuming that Alexander had intended substantial content to his claims about Moloch, many are saying no, he really just meant to say “bad equilibria are bad”. Which is just a mood well-expressed, but doesn’t remotely support the AI god strategy.


Super Hostile Takeovers

For a brief period in the late ’50s, until the mid-’60s, when modern hostile takeover techniques were perfected, we had a pretty much unregulated market for corporate control. Shareholders received on average 40% over the pre-bid price for their shares. But… 1968 … Williams Act … made it vastly more expensive for outsiders to mount successful tender offers. The highly profitable element of surprise was removed entirely.

The even stronger inhibition on takeovers resulted from actions taken by state legislatures and state courts in the ’80s. The number of hostile tender offers dropped precipitously and with it the most effective device for policing top managers of large, publicly held companies. … now, with the legal power to shift control in the hands of the incumbent [managers], they, rather than shareholders, will receive any premium paid for control. … It should come as no surprise then that, as hostile takeovers declined to 4% from 14% of all mergers, executive compensation started a steep climb. (more)

As this quote shows, current laws make it crazy hard to buy public firms, which has the effect of greatly entrenching CEO power and raising their compensation. Like blackmail laws, this is another way in which law goes out of its way to favor powerful elites. Law pretends to dislike and oppose elite dominance, but key details show otherwise.

Even during the U.S. historical period when takeovers were easiest, still “shareholders received on average 40% over the pre-bid price for their shares.” That means those trying to take over in essence faced a 40% tax; no point in taking over a firm if you can’t make it worth at least this much more. So this most effective device for policing top management would be even more effective if we could cut this tax, so takeovers could help in more cases.

The key problem is that when a takeover attempt starts to buy up lots of stock in a firm, people start to notice and then bid up their prices, expecting that a takeover will improve the value of the firm. Can we fix this problem?

Yes, consider that when the government wants to buy a bunch of land properties to build a project like a road, it faces a similar problem, that after the first few purchases the other property owners will greatly raise their price, knowing that the government can’t do its project without all the needed properties. 

The standard solution to this problem is eminent domain, where the government forces them all to sell at some official “market price”. But, as I’ve discussed, a better solution is to use a Harberger tax, where each property owner must always declare a value for their property, a value which is used both to set their property tax and to serve as an always-available sales price for the property. These values will generally be reasonable, due to owners trying to avoid paying high taxes, allowing the government or any other party to quickly assemble large property bundles for any big project without needing any special powers.

We could use the same trick for stocks. Tax stock ownership, and require every stock owner to declare a value for their stock, a value used both to set their tax, and also available to takeover attempts as a sales price. Then a takeover could happen overnight, as 51% of the stock is suddenly purchased at its declared Harberger tax value.

Most speculators might want to declare a value just above the current stock price, and we’d make it easy for them to just declare a percent increment, like say “My value is always 10% over the current market price.” If most did that, a takeover might only face a 10% tax, instead of the 40% tax described above.
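To make this concrete, here is a minimal Python sketch of how Harberger-taxed shares might work. The tax rate, share counts, and the 10% declared premium are purely hypothetical illustrations, not proposed parameters:

```python
# A hypothetical sketch of Harberger-taxed shares: each holder declares a
# per-share price at which anyone may buy, and pays an annual tax on that
# declared value. All names and numbers here are illustrative.

from dataclasses import dataclass

TAX_RATE = 0.01  # hypothetical annual tax rate on declared value


@dataclass
class Holding:
    shares: int
    declared_price: float  # self-declared, always-available sale price per share

    def annual_tax(self) -> float:
        # Tax owed on this holding's declared value.
        return TAX_RATE * self.shares * self.declared_price


def takeover_cost(holdings, target_fraction=0.51):
    """Cost to buy target_fraction of all shares, cheapest declarations first."""
    total_shares = sum(h.shares for h in holdings)
    needed = target_fraction * total_shares
    cost = 0.0
    for h in sorted(holdings, key=lambda h: h.declared_price):
        buy = min(h.shares, needed)
        cost += buy * h.declared_price
        needed -= buy
        if needed <= 0:
            break
    return cost


# Example: market price $100/share; every holder declares a 10% premium.
market_price = 100.0
holdings = [Holding(shares=1000, declared_price=1.10 * market_price) for _ in range(10)]

print(takeover_cost(holdings))                 # cost of 51% of shares at ~10% over market
print(sum(h.annual_tax() for h in holdings))   # total annual tax paid on declarations
```

The incentive balance is the point: declaring a high price raises your tax bill, while declaring a low one invites a cheap buyout, so declared values should stay near honest ones, and a 51% stake can then be assembled overnight at those declared prices.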

I gotta admit that cases like current policy discouraging hostile takeovers make me despair of trying to introduce any more complex or less effective innovations. The case for allowing more hostile takeovers seems to me especially simple and strong. If even a change this valuable and simple can’t be done, what hope is there for other policy changes?

Added 3p: The tax seems to be about the same size today, so the main extra problem now is that we allow far fewer takeovers:

In large-sample studies, the winning offer premium typically averages approximately 40%–50% relative to the target price two calendar months before the initial bid announcement. (more)

Of course we should also make it easier for someone who owns 51% of stock to actually control the firm. So not using poison pills, staggered boards, supermajority voting rules, voting vs. non-voting stock, required prior notice of purchase plans, etc.


Prediction Machines

One of my favorite books of the dotcom era was Information Rules, by Shapiro and Varian in 1998. At the time, tech boosters were saying that all the old business rules were obsolete, and anyone who disagreed “just doesn’t get it.” But Shapiro and Varian showed in detail how to understand the new internet economy in terms of standard economic concepts. They were mostly right, and Varian went on to become Google’s chief economist.

Today many tout a brave new AI-driven economic revolution, with some touting radical change. For example, a widely cited 2013 paper said:

47% of total US employment is in the high risk category … potentially automatable over … perhaps a decade or two.

Five years later, we haven’t yet seen changes remotely this big. And a new book is now a worthy successor to Information Rules:

In Prediction Machines, three eminent economists recast the rise of AI as a drop in the cost of prediction. With this single, masterful stroke, they lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.

As with Information Rules, these authors mostly focus on guessing the qualitative implications of such prediction machines. That is, they don’t say much about likely rates or magnitudes of change, but instead use basic economic analysis to guess likely directions of change. (Many example quotes below.) And I can heartily endorse almost all of these good solid guesses about change directions. A change in the cost of prediction is a fine way to frame recent tech advances, and if you want to figure out what they imply for your line of business, this is the book for you.

However, the book does at times go beyond estimating impact directions. It says “this time is different”, suggests “extraordinary changes over the next few years”, says an AI-induced recession might result from a burst of new tech, and suggests the eventual impact of this tech will be similar to that of computers in general so far:

Everyone has had or will soon have an AI moment. We are accustomed to a media saturated with stories of new technologies that will change our lives. … Almost all of us are so used to the constant drumbeat of technology news that we numbly recite that the only thing immune to change is change itself. Until we have our AI moment. Then we realize that this technology is different. p.2

In various ways, prediction machines can “use language, form abstractions and concepts, solve the kinds of problems now [as of 1955] reserved for humans, and improve themselves.” We do not speculate on whether this process heralds the arrival of general artificial intelligence, “the Singularity”, or Skynet. However, as you will see, this narrower focus on prediction still suggests extraordinary changes over the next few years. Just as cheap arithmetic enabled by computers proved powerful in ushering in dramatic change in business and personal lives, similar transformations will occur due to cheap prediction. p.39

Once an AI is better than humans at a particular task, job losses will happen quickly. We can be confident that new jobs will arise within a few years and people will have something to do, but that will be little comfort for those looking for work and waiting for those new jobs to appear. An AI-induced recession is not out of the question. p.212

And they offer a motivating example that would require pretty advanced tech:

At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them. p.16
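One way to unpack the threshold in that example: preemptive shipping pays off once the expected gain on correctly predicted items outweighs the expected cost of returns on mispredicted ones. Here is a minimal sketch of that break-even logic, with purely hypothetical margin, return-cost, and accuracy numbers:

```python
# Hypothetical break-even check for "ship before you order": preemptive
# shipping pays off when p * margin > (1 - p) * return_cost, where p is the
# predicted probability the customer wants the item. All numbers are made up.

def prefer_preemptive_shipping(p_want: float, margin: float, return_cost: float) -> bool:
    """True when expected profit on wanted items beats expected return losses."""
    return p_want * margin > (1 - p_want) * return_cost


margin, return_cost = 10.0, 15.0  # hypothetical dollars per item
for p in (0.5, 0.6, 0.7):
    print(p, prefer_preemptive_shipping(p, margin, return_cost))
# With these numbers the model flips to preemptive shipping once p exceeds 0.6.
```

Nothing in this sketch says how soon, if ever, prediction accuracy gets that high; it only shows why a smooth improvement in accuracy can flip a business model at a sharp threshold.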

I can’t endorse any of these suggestions about magnitudes and rates of change. I estimate much smaller and slower change. But the book doesn’t argue for any of these claims, it more assumes them, and so I won’t bother to argue the topic here either. The book only mentions radical scenarios a few more times:

But is this time different? Hawking’s concern, shared by many, is that this time might be unusual because AI may squeeze out the last remaining advantages humans have over machines. How might an economist approach this question? … If you favor free trade between countries, then you … support developing AI, even if it replaces some jobs. Decades of research into the effect of trade show that other jobs will appear, and overall employment will not plummet. p.211

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve. p.222

Yes, research is underway to make prediction machines work in broader settings, but the breakthrough that will give rise to general artificial intelligence remains undiscovered. Some believe that AGI is so far out that we should not spend cycles worrying about it. … As with many AI-related issues, the future is highly uncertain. Is this the end of the world as we know it? Not yet, but it is the end of this book. Companies are deploying AIs right now. In applying the simple economics that underpin lower-cost prediction and higher-value complements to prediction, your business can make ROI-optimizing choices and strategic decisions with regard to AI. When we move beyond prediction machines to general artificial intelligence or even superintelligence, whatever that may be, then we will be at a different AI moment. That is something everyone agrees upon. p.223

As you can see, they don’t see radical scenarios as coming soon, nor see much urgency regarding them. A stance I’m happy to endorse. And I also endorse all those insightful qualitative change estimates, as illustrated by these samples:
