Mysterious Motivation

Our lives are full of evidence that we don’t understand what motivates us. Kevin Simler and I recently published a book arguing that even though we humans are built to readily and confidently explain our motivations regarding pretty much everything we do, we in fact greatly misjudge our motives in ten big specific areas of life. For example, even though we think we choose medical treatments mainly to improve our health, we actually use medicine more to show concern about others, and to let them show concern about us. But a lot of other evidence also suggests that we don’t understand our motivations.

For example, when advertisers and sales-folk try to motivate us to buy products and services, they pay great attention to many issues that we would deny are important to us. We often make lists of the features we want in friends, lovers, homes, and jobs, and then find ourselves drawn to options that don’t score well on these lists. Managers struggle to motivate employees, and often attend to issues different from those that employees say motivate them.

While books on how to write fiction say motivation is central to characters and plot, most fiction attempts focused on the motives we usually attribute to ourselves fall flat, and feel unsatisfying. We are bothered by scenes showing just one level of motivation, such as a couple simply enjoying a romantic meal without subtext, as we expect multiple levels. 

While most people see their own lives as having meaning, they also find it easy to see lives different from theirs as empty and meaningless, without motivation. Teens often see this about most adult lives, and adults often see retired folks this way. Many see the lives of those with careers that don’t appeal to them, such as accounting, as empty and meaningless. Artists see non-artists this way. City dwellers often see those who live in suburbia this way, and many rural folks see city folks this way. Many modern people see the lives of most everyone before the industrial era as empty. We even sometimes see our own lives as meaningless, when our lives seem different enough from the lives we once had, or hoped to have.

Apparently, an abstract description of a life can easily seem empty. Lives seem meaningful, with motivation, when we see enough concrete details about them that we can relate to, either via personal experience or compelling stories. I think this is why many have called the world I describe in Age of Em a hell, even though to me it seems an okay world compared to most in history. They just don’t see enough relatable detail.

Taken together, this all suggests great error in our abstract thinking about motivations. We find motivation in our own lives and in some fictional lives. And if our subconscious minds can pattern-match with enough detail of a life description, we might see it as similar enough to what we would find motivating to agree that such a life is likely motivating. But without sufficiently detailed pattern-matching, few abstract life descriptions seem motivating or meaningful to us. In the abstract, we just don’t understand why people with such lives get up in the morning, or don’t commit suicide. 

Motivation is pretty central to human behavior. If you don’t know the point of what you do, how can you calculate whether to do more or less, or something different? And how can you offer useful advice to others on what to do if you don’t know why they do what they do? So being told that you don’t actually understand your motives and those of others should be pretty shocking, and grab your attention. But in fact, it usually doesn’t.

It seems that, just as we are built to assume that we automatically know local norms, without needing much thought, we are also built to presume that we know our motives. We make decisions and, if asked, we have motives to which we attribute our behavior. But we don’t care much about abstract patterns of discrepancies between the two. We care about specific discrepancies, which could make us vulnerable to specific accusations that our motives violate norms in specific situations. Otherwise, as long as we believe that our behavior is achieving our actual motives, we don’t much care what those motives are. Whatever we want must be a good thing to want, and following intuition is good enough to get it; we don’t need to consciously think about it.  

I guess I’m weird, because I find the idea that I don’t know my motives, or what would motivate myself or others, quite disturbing.


Prediction Machines

One of my favorite books of the dotcom era was Information Rules, by Shapiro and Varian in 1998. At the time, tech boosters were saying that all the old business rules were obsolete, and anyone who disagreed “just doesn’t get it.” But Shapiro and Varian showed in detail how to understand the new internet economy in terms of standard economic concepts. They were mostly right, and Varian went on to become Google’s chief economist.

Today many tout a brave new AI-driven economic revolution, with some touting radical change. For example, a widely cited 2013 paper said:

47% of total US employment is in the high risk category … potentially automatable over … perhaps a decade or two.

Five years later, we haven’t yet seen changes remotely this big. And a new book is now a worthy successor to Information Rules:

In Prediction Machines, three eminent economists recast the rise of AI as a drop in the cost of prediction. With this single, masterful stroke, they lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.

As with Information Rules, these authors mostly focus on guessing the qualitative implications of such prediction machines. That is, they don’t say much about likely rates or magnitudes of change, but instead use basic economic analysis to guess likely directions of change. (Many example quotes below.) And I can heartily endorse almost all of these good solid guesses about change directions. A change in the cost of prediction is a fine way to frame recent tech advances, and if you want to figure out what they imply for your line of business, this is the book for you.

However, the book does at times go beyond estimating impact directions. It says “this time is different”, suggests “extraordinary changes over the next few years”, says an AI-induced recession might result from a burst of new tech, and says the eventual impact of this tech will be similar to that of computers in general so far:

Everyone has had or will soon have an AI moment. We are accustomed to a media saturated with stories of new technologies that will change our lives. … Almost all of us are so used to the constant drumbeat of technology news that we numbly recite that the only thing immune to change is change itself. Until we have our AI moment. Then we realize that this technology is different. p.2

In various ways, prediction machines can “use language, form abstractions and concepts, solve the kinds of problems now [as of 1955] reserved for humans, and improve themselves.” We do not speculate on whether this process heralds the arrival of general artificial intelligence, “the Singularity”, or Skynet. However, as you will see, this narrower focus on prediction still suggests extraordinary changes over the next few years. Just as cheap arithmetic enabled by computers proved powerful in ushering in dramatic change in business and personal lives, similar transformations will occur due to cheap prediction. p.39

Once an AI is better than humans at a particular task, job losses will happen quickly. We can be confident that new jobs will arise within a few years and people will have something to do, but that will be little comfort for those looking for work and waiting for those new jobs to appear. An AI-induced recession is not out of the question. p.212

And they offer a motivating example that would require pretty advanced tech:

At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them. p.16
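The threshold logic here is simple expected-value arithmetic (my formalization, not the book’s): if $p$ is the predicted chance you keep an unsolicited item, $m$ the profit when you keep it, and $r$ the net cost of a return, preemptive shipping pays once

$$p \, m > (1-p)\, r, \qquad \text{i.e.} \qquad p > \frac{r}{m+r}.$$

Better prediction raises $p$ past this threshold, flipping the profitable mode from ship-after-order to ship-before-order.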

I can’t endorse any of these suggestions about magnitudes and rates of change. I estimate much smaller and slower change. But the book doesn’t argue for any of these claims so much as assume them, and so I won’t bother to argue the topic here either. The book only mentions radical scenarios a few more times:

But is this time different? Hawking’s concern, shared by many, is that this time might be unusual because AI may squeeze out the last remaining advantages humans have over machines. How might an economist approach this question? … If you favor free trade between countries, then you … support developing AI, even if it replaces some jobs. Decades of research into the effect of trade show that other jobs will appear, and overall employment will not plummet. p.211

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve. p.222

Yes, research is underway to make prediction machines work in broader settings, but the breakthrough that will give rise to general artificial intelligence remains undiscovered. Some believe that AGI is so far out that we should not spend cycles worrying about it. … As with many AI-related issues, the future is highly uncertain. Is this the end of the world as we know it? Not yet, but it is the end of this book. Companies are deploying AIs right now. In applying the simple economics that underpin lower-cost prediction and higher-value complements to prediction, your business can make ROI-optimizing choices and strategic decisions with regard to AI. When we move beyond prediction machines to general artificial intelligence or even superintelligence, whatever that may be, then we will be at a different AI moment. That is something everyone agrees upon. p.223

As you can see, they don’t see radical scenarios as coming soon, nor see much urgency regarding them. A stance I’m happy to endorse. And I also endorse all those insightful qualitative change estimates.


How Best Help Distant Future?

I greatly enjoyed Charles Mann’s recent book The Wizard and the Prophet. It contained the following stat, which I find to be pretty damning of academia:

Between 1970 and 1989, more than three hundred academic studies of the Green Revolution appeared. Four out of five were negative. p.437

Mann just did a related TED talk, which I haven’t seen, and posted this related article:

The basis for arguing for action on climate change is the belief that we have a moral responsibility to people in the future. But this is asking one group of people to make wrenching changes to help a completely different set of people to whom they have no tangible connection. Indeed, this other set of people doesn’t exist. There is no way to know what those hypothetical future people will want.

Picture Manhattan Island in the 17th century. Suppose its original inhabitants, the Lenape, could determine its fate, in perfect awareness of future outcomes. In this fanciful situation, the Lenape know that Manhattan could end up hosting some of the world’s great storehouses of culture. All will give pleasure and instruction to countless people. But the Lenape also know that creating this cultural mecca will involve destroying a diverse and fecund ecosystem. I suspect the Lenape would have kept their rich, beautiful homeland. If so, would they have wronged the present?

Economists tend to scoff at these conundrums, saying they’re just a smokescreen for “paternalistic” intellectuals and social engineers “imposing their own value judgments on the rest of the world.” (I am quoting the Harvard University economist Martin Weitzman.) Instead, one should observe what people actually do — and respect that. In their daily lives, people care most about the next few years and don’t take the distant future into much consideration. …

Usually economists use 5 percent as a discount rate — for every year of waiting, the price goes down 5 percent, compounded. … The implications for climate change are both striking and, to many people, absurd: at a 5 percent discount rate, economist Graciela Chichilnisky has calculated, “the present value of the earth’s aggregate output discounted 200 years from now is a few hundred thousand dollars.” … Chichilnisky, a major figure in the IPCC, has argued that this kind of thinking is not only ridiculous but immoral; it exalts a “dictatorship of the present” over the future.

Economists could retort that people say they value the future, but don’t act like it, even when the future is their own. And it is demonstrably true that many — perhaps most — men and women don’t set aside for retirement, buy sufficient insurance, or prepare their wills. If people won’t make long-term provisions for their own lives, why should we expect people to bother about climate change for strangers many decades from now? …

In his book, Scheffler discusses Children of Men … The premise of both book and film is that humanity has become infertile, and our species is stumbling toward extinction. … Our conviction that life is worth living is “more threatened by the prospect of humanity’s disappearance than by the prospect of our own deaths,” Scheffler writes. The idea is startling: the existence of hypothetical future generations matters more to people than their own existence. What this suggests is that, contrary to economists, the discount rate accounts for only part of our relationship to the future. People are concerned about future generations. But trying to transform this general wish into specific deeds and plans is confounding. We have a general wish for action but no experience working on this scale, in this time-frame. …

Overall, climate change asks us to reach for higher levels on the ladder of concern. If nothing else, the many misadventures of foreign aid have shown how difficult it is for even the best-intentioned people from one culture to know how to help other cultures. Now add in all the conundrums of working to benefit people in the future, and the hurdles grow higher. Thinking of all the necessary actions across the world, decade upon decade — it freezes thought. All of which indicates that although people are motivated to reach for the upper rungs, our efforts are more likely to succeed if we stay on the lower, more local rungs.

I side with economists here. The fact that we can relate emotionally to Children of Men hardly shows that people would actually react as it depicts. Fictional reactions often differ greatly from real ones. And I’m skeptical of Mann’s theory that we really do care greatly about helping the distant future, but are befuddled by the cognitive complexity of the task. Consider two paths to helping the distant future:

  1. Lobby via media and politics for collective strategies to prevent global warming now.
  2. Save resources personally now to be spent later to accommodate any problems then.

The saving path seems much less cognitively demanding than the lobby path, and in fact quite feasible cognitively. Resources will be useful later no matter what the actual future problems and goals turn out to be. Yes, the saving path faces agency costs, to control distant future folks tasked with spending your savings. But the lobby path also has agency costs, to control government as an agent.

Yes, the value of the saving path relative to the lobby path is reduced to the degree that prevention is cheaper than accommodation, or collective action more effective than personal action. But the value of the saving path increases enormously with time, as investments typically grow about 5% per year. And cognitive complexity costs of the lobby path also increase exponentially with time, as it becomes harder to foresee the problems and values of the distant future. (Ems wouldn’t be grateful for your global warming prevention, for example.)
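To see the scale of that compounding (my arithmetic, not a figure from the sources above): at a steady 5% annual return,

$$1.05^{200} = e^{200 \ln 1.05} \approx e^{9.76} \approx 17{,}000,$$

so each dollar saved now becomes roughly seventeen thousand dollars of resources available to address problems two centuries hence.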

Wait long enough to help and the relative advantage of the saving path should become overwhelming. So the fact that we see far more interest in the lobby path, relative to the saving path, really does suggest that people just don’t care that much about the distant future, and that global warming concern is a smokescreen for other policy agendas. No matter how many crocodile tears people shed regarding fictional depictions.

Added 5a: The posited smokescreen motive would be hidden, and perhaps unconscious.

Added 6p: I am told that in a half dozen US states it is cheap to create trusts and foundations that can accumulate assets for centuries, and then turn to helping with problems then, all without paying income or capital gains taxes on the accumulating assets.


Between Property and Liability

Last October I posted on Eric Posner and Glen Weyl’s proposal to generalize self-assessed property taxes. For many items, such as land and buildings, you’d pay an annual tax that is a standard percentage of your self-set sale-offer price for the item. This would avoid administrative property valuations, discourage people from sitting on stuff they don’t use, and make it much easier to assemble property into large units. Eminent domain would no longer be needed. They have a new book, Radical Markets, coming out in a few weeks, that I will review soon.
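As a toy worked example (my numbers, not the book’s): at a 2% annual rate, declaring a $400,000 sale-offer price on your house costs

$$0.02 \times \$400{,}000 = \$8{,}000 \text{ per year},$$

while anyone remains free to buy the house at that declared price. Declare too high and you overpay in tax; declare too low and you invite a forced sale, so the mechanism pushes you toward your honest valuation.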

Some libertarian types disapprove on the grounds that this weakens property rights. Which it can, relative to a simple absolute property right. But simple property and liability have long been two quite different, and extreme, solutions to legal problems. Neither one is always best. In this post I want to point out that this alternate approach can be used not only to change traditional property to be more like liability, it can also be used to change traditional liability to be more like property. It is an interesting intermediate form between traditional property and liability. One I expect libertarian types to look on more favorably when applied to liability.

Today if someone smashes their car into yours, you can sue them for damages. But even if you convince the court that the event happened and that the party you sued was at fault, the amount of the damages will be set by a court’s judgement. They will mostly look at your demonstrable financial costs, and mostly ignore your value of leisure time, disability, pain, etc. You can’t do much to convince them that you suffer a higher cost from such events than others do.

To apply self-assessment to liability, we’d ask each person to estimate a function that outputs their loss in dollars, and takes as input different scenarios of events that could hurt them. The function would say how much they suffer in each scenario. (The function might interpolate between a set of concrete scenarios which the person rated.) We’d convolve this function with an official distribution over how often such events happen, and a tax rate function, to find each person’s total tax. This is like paying a tax for each property item you hold, but is instead adding up a tax for each possible scenario where you might be hurt.

Then if someone actually hurts you in some event, you could sue for the amount of damages your function declares for that event. Once the court was persuaded that the event happened and that the person you sued was at fault, the court could mostly just believe your estimate of harm, instead of trying to estimate it themselves. In this way the court could cheaply and accurately account for losses of limbs, time, pain, etc. As you’d set the damage levels yourself, this approach makes traditional liability more like property.
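Here is a minimal sketch of these mechanics in Python, discretizing the scenario function into a few concrete scenarios. The scenario names, dollar figures, probabilities, and the flat rate standing in for the tax rate function are all my own hypothetical illustrations, not details from the proposal.

```python
# Hypothetical illustration of self-assessed liability, not the actual proposal.

# Each person declares the dollar harm they would suffer in each scenario.
declared_harm = {
    "lost_thumb": 250_000.0,
    "month_off_work": 40_000.0,
    "totaled_car": 25_000.0,
}

# Official estimates of how often each scenario befalls a person per year.
annual_probability = {
    "lost_thumb": 1e-5,
    "month_off_work": 3e-3,
    "totaled_car": 2e-3,
}

TAX_RATE = 0.10  # flat stand-in for the proposal's tax rate function

def annual_tax(harm, prob, rate):
    """Tax = sum over scenarios of (frequency x declared harm x rate)."""
    return sum(prob[s] * harm[s] * rate for s in harm)

def damages_award(harm, scenario):
    """Once fault is established, the award is just the victim's own
    pre-declared harm figure; the court need not estimate it."""
    return harm[scenario]

print(f"annual tax: ${annual_tax(declared_harm, annual_probability, TAX_RATE):,.2f}")
print(f"award if car totaled: ${damages_award(declared_harm, 'totaled_car'):,.2f}")
```

Declaring higher harms raises both your potential awards and your tax, which is what disciplines the self-assessments.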

Added 15Apr: A reminder: this doesn’t have to produce any net tax revenue. It could just take from those who declare larger than average values of harms done to them, and rebate to those who declare lower than average values.


Like the Ancients, We Have Gods. They’ll Get Greater.

Here’s a common story about gods. Our distant ancestors didn’t understand the world very well, and their minds contained powerful agent detectors. So they came to see agents all around them, such as in trees, clouds, mountains, and rivers. As these natural things vary enormously in size and power, our ancestors had to admit that such agents varied greatly in size and power. The big ones were thus “gods”, and to be feared. While our forager ancestors were fiercely egalitarian, and should thus naturally resent the existence of gods, gods were at least useful in limiting status ambitions of local humans; however big you were, you weren’t as big as gods. All-seeing powerful gods were also useful in enforcing norms; norm violators could expect to be punished by such gods.

However, once farming era war, density, and capital accumulation allowed powerful human rulers, these rulers co-opted gods to enforce their rule. Good gods turned bad. Rulers claimed the support of gods, or claimed to be gods themselves, allowing their decrees to take priority over social norms. However, now that we (mostly) know that there just isn’t a spirit world, and now that we can watch our rulers much more closely, we know that our rulers are mere humans without the support of gods. So we much less tolerate strong rulers, their claims of superiority, or their norm violations. Yay us.

There are some problems with this story, however. Until the Axial revolution of roughly 2500 years ago, most gods were local to a social group. For our forager ancestors, this made them VERY local, and thus typically small. Such gods cared much more that you show them loyalty than what you believed, and they weren’t very moralizing. Most gods had limited power; few were all-powerful, all-knowing, and immortal. People mostly had enough data to see that their rulers did not have vast personal powers. And finally, rather than reluctantly submitting to gods out of fear, we have long seen people quite eager to worship, praise, and idolize gods, and also their leaders, apparently greatly enjoying the experience.

Here’s a somewhat different story. Long before they became humans, our ancestors deeply craved both personal status, and also personal association with others who have high status. This is ancient animal behavior. Forager egalitarian norms suppressed these urges, via emphasizing the also ancient envy and resentment of the high status. Foragers came to distinguish dominance, the bad status that forces submission via power, from prestige, the good status that invites you to learn and profit by watching and working with those who hold it. As part of their larger pattern of hidden motives, foragers often pretended that they liked leaders for their prestige, even when they really also accepted and even liked their dominance.

Once foragers believed in spirits, they also wanted to associate with high status spirits. Spirits increased the supply of high status others to associate with, which people liked. But foragers also preferred to associate with local spirits, to show local loyalties. With farming, social groups became larger, and status ambitions could also rise. Egalitarian norms were suppressed. So there came a demand for larger gods, encompassing the larger groups.

In this story the fact that ancient gods were spirits who could sometimes violate ordinary physical rules was incidental, not central. The key driving force was a desire to associate with high status others. The ability to violate physical rules did confer status, but it wasn’t a different kind of status than that held by powerful humans. So very powerful humans who claimed to be gods weren’t wrong, in terms of the essential dynamic. People were eager to worship and praise both kinds of gods, for similar reasons.

Thus today even if we don’t believe in spirits, we can still have gods, if we have people who can credibly acquire very high status, via prestige or dominance. High enough to induce not just grudging admiration, but eager and emotionally-unreserved submission and worship. And we do in fact have such people. We have people who are the best in the world at the abilities that the ancients would recognize for status, such as physical strength and coordination, musical or story telling ability, social savvy, and intelligence. And in addition, technology and social complexity offer many new ways to be impressive. We can buy impressive homes, clothes, and plastic surgery, and travel at impressive speeds via impressive vehicles. We can know amazing things about the universe, and about our social world, via science and surveillance.

So we today do in fact have gods, in effect if not in name. (Though actors who play gods on screen can be seen as ancient-style gods.) The resurgence of forager values in the industrial era makes us reluctant to admit it, but a casual review of celebrity culture makes it very clear, I’d say. Yes, we mostly admit that our celebrities don’t have supernatural powers, but that doesn’t much detract from the very high status that they have achieved, or our inclination to worship them.

While it isn’t obviously the most likely scenario, one likely and plausible future scenario that has been worked out in unusual detail is the em scenario, as discussed in my book Age of Em. Ems would acquire many more ways to be individually impressive, acquiring more of the features that made the mythical ancient gods so impressive. Ems could be immortal, occupy many powerful and diverse physical bodies, move around the world at the speed of light, think very very fast, have many copies, and perhaps even somewhat modify their brains to expand each copy’s mental capacity. Automation assistants could expand their abilities even more.

As most ems are copies of the few hundred most productive ems, there are enormous productivity differences among typical ems. By any reasonable measure, status would vary enormously. Some would be gods relative to others. Not just in a vague metaphorical sense, but in a deep gut-grabbing emotional sense. Humans, and ems, will deeply desire to associate with them, via praise, worship and more.

Our ancestors had gods, we have gods, and our descendants will likely have even greater, more compelling gods. The phenomenon of gods is quite far from dead.


Toward Reality TV MBAs

The quality of firm managers matters enormously for firm productivity. How can we get better managers? We already select the best people in terms of simple features like intelligence, conscientiousness, etc. But apparently there is still huge variation in quality, even after controlling for such things. Typical MBA programs teach people some business basics, but don’t seem to help much; they mainly serve to select elites and connect them to each other.

I recently had dinner with a few San Francisco tech startup CEOs, who were worth high sums. They weren’t obviously that much smarter etc. than others. Their high value came from having actually navigated difficult business waters, successfully enough. That sort of experience and track record is gold. Some said that business success came from making the right decision at a half dozen key points; any wrong move would have killed them.

Some had first gained experience via being a personal assistant to someone else in such a role. Such an assistant goes to all meetings and sees pretty much everything that manager does, over a several year period. Apparently children learn similar things via their parents’ dinner conversations:

The majority of male entrepreneurs in Norway start a firm in an industry closely related to the one in which their father is employed. These entrepreneurs outperform others in the same industry. … ‘Dinner table human capital’ – that is, industry knowledge learned through their parents – is an important factor.… the effect of parents helping out, although possibly quite important, is smaller. (more; HT Alex T)

If one can learn much from just watching the inside story of real firms over several years, that suggests a big win: record the full lives of many rising managers over several years, and show a mildly compressed and annotated selection of such recordings to aspiring managers. Such recordings could be compressed by deleting sleep and non-social periods. They could be annotated to identify key decisions and ask viewers to make their own choices, before they see actual choices. Recordings might be selected 2/3 from the most successful, and 1/3 from a sampling of others.

Yes, there are issues of privacy and business secrets. But these are already issues for personal assistants and others who attend key business meetings. Waiting five years could take away many business secret concerns. And we don’t have to make these videos available to the world; making manager experiences visible to only 100 times more people might increase our pool of good manager candidates by a factor of 100. And that could be worth trillions to the world economy.


Toward Micro-Likes

Long ago when electricity and phones were new, they were largely unregulated, and privately funded. But then as the tech (and especially the interfaces) stopped changing so fast, and showed big scale and network economies, regulation stepped in. Today social media still seems new. But as it hasn’t been changing as much lately, and it also shows large scale and network economies, many are talking now about heavier regulation. In this post, let me suggest that a lot more change is possible; we aren’t near the sort of stability that electricity and phones reached when they became heavily regulated.

Back in the early days of the web and internet people predicted many big radical changes. Yet few then mentioned social media, the application now most strongly associated with this new frontier. What did we miss? The usual story, which I find plausible, is that we missed just how much people love to get many frequent signals of their social connections: likes, retweets, etc. Social media gives us more frequent “attaboy” and “we see & like you” signals. People care more than we realized about the frequency, relative to the size, of such signals.

But if that’s the key lesson, social media should be able to move a lot further in this direction. For example, today Facebook has two billion monthly users and produces four million likes per minute, for an average of about three likes per day per monthly user. Twitter has 300 million monthly users, who send 500 million tweets per day, for less than two tweets per day per monthly user. (I can’t find stats on Twitter likes or retweets.) Which I’d say is actually a pretty low rate of positive feedback.
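Checking those per-user rates against the quoted totals (my arithmetic):

$$\frac{4\times10^{6}\ \text{likes/min} \times 1440\ \text{min/day}}{2\times10^{9}\ \text{users}} \approx 2.9\ \text{likes/day}, \qquad \frac{5\times10^{8}\ \text{tweets/day}}{3\times10^{8}\ \text{users}} \approx 1.7\ \text{tweets/day}.$$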

Imagine you had a wall-sized screen, full of social media items, and that while you browsed this wall the direction of your gaze was tracked continuously to see which items your gaze was on or near. From that info, one could give the authors or subjects of those items far more granular info on who is paying how much attention to them. Not only on how often and how much your stuff is watched, but also on the mood and mental state of those watchers. If some of those items were continuous video feeds from other people, then those others could be producing many more social media items to which others could attend.

Also, so far we’ve usually just naively counted likes, retweets, etc., as if everyone counted the same. But we could instead use non-uniform weights based on popularity or other measures. And given how much people like to participate in synchronized rituals, we could also create and publicize statistics on what groups of people are how synchronized in their social media actions. And offer new tools to help them synchronize more finely.
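As one toy illustration of such non-uniform weighting (the log-of-audience rule here is my own hypothetical choice, not any platform’s actual scheme):

```python
import math

def weighted_score(liker_follower_counts):
    """Count each like in proportion to log10 of the liker's audience,
    so a like from a widely-followed account signals more than a bare +1."""
    return sum(math.log10(n + 1) for n in liker_follower_counts)

plain = [50, 120, 80]                  # likes from three ordinary accounts
with_celebrity = [50, 120, 2_000_000]  # same, but one big account likes too

print(weighted_score(plain))           # ~5.7
print(weighted_score(with_celebrity))  # ~10.1: one popular like outweighs several plain ones
```

Any such weighting trades the simplicity of raw counts for signals that are harder to game and richer to synchronize around.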

My point here isn’t to predict or recommend specific changes for future social media. I’m instead just trying to make the point that a lot of room for improvement remains. Such gains might be delayed or prevented by heavy regulation.


The Uploaded

In this post I again contrast my analysis of future ems in Age of Em with a fictional depiction of ems, and find that science fiction isn’t very realistic, having other priorities. Today’s example: The Uploaded, by Ferrett Steinmetz:

The world is run from the afterlife, by the minds of those uploaded at the point of death. Living is just waiting to die… and maintaining the vast servers which support digital Heaven. For one orphan that just isn’t enough – he wants more for himself and his sister than a life of servitude. Turns out he’s not the only one who wants to change the world.

The story is set 500 years and 14 human generations after a single genius invented ems. While others quickly found ways to copy this tech, his version was overwhelmingly preferred. (In part due to revelations of “draconian” competitor plans.) So much so that he basically was able to set the rules of this new world, and to set them globally. He became an immortal em, and so still rules the world. His rules, and the basic tech and econ arrangement, have remained stable for those 500 years, during which there seems to have been vastly less tech change and economic growth than we’ve seen in the last 500 years.

His rules are these: typically, when a biological human dies, one emulation of them is created, who is entitled to eternal leisure in luxurious virtual realities. That one em runs at ordinary human speed, no other copies of it are allowed, ems never inhabit android physical bodies, and ems are never created of still living biological humans. By now there are 15 times as many ems as humans, and major decisions are made by vote, which ems always win. Ems vote to divert most resources to their servers, and so biological humans are poor, their world is run down, and diseases are killing them off.

Virtual realities are so engaging that em parents can’t even be bothered to check in on their young children now in orphanages. But a few ems get bored and want to do useful jobs, and they take all the nice desk jobs. Old ems are stuck in their ways and uncreative, preventing change. Biological humans are only needed to do physical jobs, which are boring and soul-crushing. It is illegal for them to do programming. Some ems also spend lots of time watching via surveillance cameras, so biological humans are watched all the time.

Every day every biological human’s brain is scanned and evaluated by a team of ems, and put into one of five status levels. Higher levels are given nicer positions and privileges, while the lowest levels are not allowed to become ems. Biological humans are repeatedly told they need to focus on pleasing their em bosses so they can get into em heaven someday. To say more, I must give spoilers; you are warned.


How Deviant Recent AI Progress Lumpiness?

I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I can see to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even though there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

I emphasize how all the mighty human edifice of Go knowledge … was entirely discarded by AlphaGo Zero with a subsequent performance improvement. … Sheer speed of capability gain should also be highlighted here. … you don’t even need self-improvement to get things that look like FOOM. … the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.

I replied that, just as seeing an unusually large terror attack like 9-11 shouldn’t much change your estimate of the overall distribution of terror attacks, nor seeing one big earthquake change your estimate of the overall distribution of earthquakes, seeing one big AI research gain like AlphaGo Zero shouldn’t much change your estimate of the overall distribution of AI progress. (Seeing two big lumps in a row, however, would be stronger evidence.) In his recent podcast with Sam Harris, Eliezer said:

Y: I have claimed recently on facebook that now that we have seen Alpha Zero, Alpha Zero seems like strong evidence against Hanson’s thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. …

H: What’s the best version of his argument, and then why is he wrong?

Y: Nothing can prepare you for Robin Hanson! Ha ha ha. Well, the argument that Robin Hanson has given is that these systems are still immature and narrow, and things will change when they get general. And my reply has been something like, okay, what changes your mind short of the world actually ending. If your theory is wrong do we get to find out about that at all before the world does.

(Sam didn’t raise the subject in his recent podcast with me.)

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

The citation distributions of papers published in the same discipline and year lie on the same curve for most disciplines, if the raw number of citations c of each paper is divided by the average number of citations c0 over all papers in that discipline and year. The dashed line is a lognormal fit. …

The probability of citing a paper grows with the number of citations that it has already collected. Such a model can be augmented with … decreasing the citation probability with the age of the paper, and a fitness parameter, unique to each paper, capturing the appeal of the work to the scientific community. Only a tiny fraction of papers deviate from the pattern described by such a model.

It seems to me quite reasonable to expect that fields where real research progress is lumpier would also display a lumpier distribution of citations. So if CS, AI, or ML research is much lumpier than in other areas, we should expect to see that in citation data. Even if your hypothesis is that only ML research is lumpier, and only in the last 5 years, we should still have enough citation data to see that. My expectation, of course, is that recent ML citation lumpiness is not much bigger than in most research fields through history.
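To sketch how such a test might run (with simulated placeholder data, since I haven’t run it on real citation records; the lognormal width and top-percentile share are my own choices of lumpiness measures):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder citation counts; a real test would use field-tagged citation
# records, not these simulated draws.
fields = {
    "recent_ML": rng.lognormal(mean=0.0, sigma=1.1, size=5000),
    "physics":   rng.lognormal(mean=0.0, sigma=1.1, size=5000),
}

for name, citations in fields.items():
    normalized = citations / citations.mean()         # c / c0, as in the Science paper
    width = stats.lognorm.fit(normalized, floc=0)[0]  # fitted lognormal shape
    top1 = np.sort(normalized)[-len(normalized) // 100:].sum() / normalized.sum()
    print(f"{name}: lognormal width={width:.2f}, top-1% citation share={top1:.1%}")

# If recent ML research were much lumpier, its fitted width and top-percentile
# share should stand well above those of other fields and eras.
```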

Added 24Mar: You might save the hypothesis that research areas vary greatly in lumpiness by postulating that the number of citations of each research advance goes as the rank of the “size” of that advance, relative to its research area. The distribution of ranks is always the same, after all. But this would be a surprising outcome, and hence seems unlikely; I’d want to see clear evidence that the distribution of lumpiness of advances varies greatly across fields.

Added 27Mar: More directly relevant might be data on distributions of patent value and citations. Do these distributions vary by topic? Are CS/AI/ML distributed more unequally?


Growth Is Change. So Is Death.

The very readable book The Wizard and the Prophet tells the story of environmental prophet William Vogt investigating the apocalypse-level deaths of guano-making birds near Peru. When he discovered the cause in the El Niño weather cycle, his policy recommendations were to do nothing to mitigate this natural cause; he instead railed against many much smaller human influences, demanding their reversal. A few years later his classic 1948 screed Road To Survival, which contained pretty much all the standard environmental advice and concepts used today, continued to warn against any but small human-caused changes to the environment, while remaining largely indifferent to even huge natural changes.

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after a yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
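For concreteness, such a drifting parameter can be modeled as

$$x_{t+1} = x_t + \mu + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2),$$

where the expected change over a horizon $T$ grows as $\mu T$ and the uncertainty as $\sigma\sqrt{T}$; unlike a mean-reverting parameter, it never tends back toward its old values.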

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

But of course few are very good at resolving their near versus far incoherences. And so the positions people take end up depending a lot on how they first framed the key issues, as in terms of short or long term changes.
