Big Impact Isn’t Big Data

A common heuristic for estimating the quality of something is: what has it done for me lately? For example, you could estimate the quality of a restaurant via a sum or average of how much you’ve enjoyed your meals there. Or you might weight recent visits more, since quality may change over time. Such methods are simple and robust, but they aren’t usually the best. For example, if you know of others who ate at that restaurant, their meal enjoyment is also data, data that can improve your quality estimate. Yes, those other people might have different meal priorities, and that may be a reason to give their meals less weight than your meals. But still, their data is useful.

Consider an extreme case where one meal, say your wedding reception meal, is far more important to you than the others. If you weight your meal experiences in proportion to meal importance, your whole evaluation may depend mainly on one meal. Yes, if meals of that important type differ substantially from other meals, then this method best avoids biases from using unimportant types of meals to judge important types. But the noise in your estimate will be huge; individual restaurant meals can vary greatly for many random reasons even when the underlying quality stays the same. You just won’t know much about meal quality.

I mention all this because many seem eager to give the recent presidential election (and the recent Brexit vote) a huge weight in their estimate of the quality of various prediction sources. Sources that did poorly on those two events are judged to be poor sources overall. And yes, if these were by far the most important events to you, this strategy avoids the risk that familiar prediction sources have a different accuracy on events like this than they do on other events. Even so, this strategy mostly just puts you at the mercy of noise. If you use a small enough set of events to judge accuracy, you just aren’t going to be able to see much of a difference between sources; you will have little reason to think that those sources that did better on these few events will do much better on other future events.

Me, I don’t see much reason to think that familiar prediction sources have an accuracy that is very different on the most important events, relative to other events, and so I mainly trust comparisons that use a lot of data. For example, on large datasets prediction markets have shown a robustly high accuracy compared to other sources. Yes, you might find other particular sources that seem to do better in particular areas, but you have to worry about selection effects – how many similar sources did you look at to find those few winners? And if prediction market participants became convinced that these particular sources had high accuracy, they’d drive market prices to reflect those predictions.


Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve done before; for a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such people estimate 5-10x slower progress in their subfield in the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly estimate brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – we will more quickly find models and algorithms that substitute for brains than we will those that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most body organs, what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.


Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks. Or at least do so longer and more strongly. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would be more often replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference is less clear. You’d just want efficient effective software, and so want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus humans minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given quality of ordinary software, efficiency-oriented system designers will rely more on ems and less on ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will stay matched to humans for longer than it would have under the first scenario.

Ems would also seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to have more things like typical humans for longer, you should prefer a scenario where ems appear before ordinary software displaces almost all humans on jobs.


In Praise of Low Needs

We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.

Of course few individuals today focus on filling the universe with life. Most attend to their individual needs. And as we’ve been getting rich over the last few centuries, our needs have changed. Many cite Maslow’s Hierarchy of Needs:

[Figure: Maslow’s Hierarchy of Needs]

While few offer much concrete evidence for this, most seem to accept it or one of its many variations. Once our basic needs are met, our attention switches to “higher” needs. Wealth really does change humans. (I see this in part as our returning to forager values with increasing wealth.)

It is easy to assume that what is good for you is good overall. If you are an artist, you may assume the world is better when people consume more art. If you are a scientist, you may assume the world is better if it gives more attention and funding to science. Similarly, it is easy to assume that the world gets better if more of us get more of what we want, and thus move higher into Maslow’s Hierarchy.

But I worry: as we attend more to higher needs, we may grow and innovate less regarding lower needs. Can the universe really get filled by creatures focused mainly on self-actualization? Why should they risk or tolerate disruptions from innovations that advance low needs if they don’t care much for that stuff? And many today see their higher needs as conflicting with more capacity to fill low needs. For example, many see more physical capacities as coming at the expense of less nature, weaker indigenous cultures, larger more soul-crushing organizations, more dehumanizing capitalism, etc. Rich nations today do seem to have weaker growth in raw physical capacities because of such issues.

Yes, it is possible that even rich societies focused on high needs will consistently grow their capacities to satisfy low needs, and that will eventually lead to a universe densely filled with life. But still I worry about all those unknown obstacles yet to be seen as our descendants try to grow through another three to ten factors as large as humanity’s leap. At some of those obstacles, will a focus on high needs lead them to turn away from the grand growth path? To a comfortable “sustainable” stability without all that disruptive innovation? How much harder would it become to restart growth again later?

Pretty much all the growth that we have seen so far has been in a context where humans, and their ancestors, were focused mainly on low needs. Our current turn toward high needs is quite new, and thus relatively unproven. Yes, we have continued to grow, but more slowly. That seems worth at least a bit of worry.

Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are about 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leaps is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
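For those who want the arithmetic spelled out, here is a minimal sketch checking the order-of-magnitude figures above (the leap factor, star count, and atom count are the rough numbers from the text, not precise estimates):

```python
# Rough arithmetic behind the "three to ten more leaps" claim.
leap = 10**7          # humanity's growth factor so far (rough figure from the text)
humans_now = 10**10   # current human population, order of magnitude
stars = 10**24        # stars in the observable universe, order of magnitude
atoms = 10**80        # atoms in the observable universe, order of magnitude

# Three more leaps: total population after growing by leap**3.
pop_after_three = humans_now * leap**3        # 10**31 creatures
stars_needed = pop_after_three / humans_now   # 10**21 stars at Earth-like population each
print(stars_needed / stars)                   # ~0.001, i.e. one in a thousand stars

# Ten more leaps: total population after growing by leap**10.
pop_after_ten = humans_now * leap**10         # 10**80 creatures
print(pop_after_ten / atoms)                  # ~1, i.e. one creature per atom
```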


Seduced by Tech

We think about tech differently when we imagine it beforehand, versus when we’ve personally seen it deployed. Obviously we have more data afterward, but this isn’t the only or even main difference.

Having more data puts us into more of a near, relative to far, mental mode. In far mode we think abstractly, allowing fewer exceptions to our moral and value principles, and we are less likely to let messy details reduce our confidence in our theories. Most imagined techs will fail, leaving little chance that we’ll be embarrassed by having opposed them. We also know that they have fewer allies who might retaliate against us for opposing them. And we are more easily seen as non-conformist for opposing a widely adopted tech, compared to opposing a possible future tech.

The net effect is that we are much more easily persuaded by weak arguments that a future tech may have intolerable social or moral consequences. If we thought more about the actual tech in the world around us, we’d realize that much of it also has serious moral and social downsides. But we don’t usually think about that.

A lot of tech fits this pattern. Initially it faces widespread opposition or skepticism, or would if a wider public were asked. Sometimes such opposition prevents a tech from even being tried. But when a few people can try it, others nearby can see if it offers personal concrete practical benefits, relative to costs. Then, even though more abstract criticisms haven’t been much addressed, the tech may be increasingly adopted. Sometimes it takes decades to see wider social or moral consequences, and sometimes those are in fact bad. Even so, the tech usually stays, though new versions might be prevented. And for some consequences, no one ever really knows.

This is actually a general pattern of seduction. Often we have abstract concerns about possible romantic partners, jobs, products to buy, etc. Usually such abstract concerns are not addressed very well. Even so, we are often seduced via vivid exposure to attractive details to eventually set aside these abstract concerns. As most good salespeople know very well.

For example, if our political systems had been asked directly to approve Uber or AirBnB, they’d have said no. But once enough people used them without legal permission, politicians became reluctant to stop them. Opponents of in vitro fertilization (IVF), first done in 1978, initially suggested that it would deform babies and degrade human dignity, but after decades of use this tech faces little opposition, even though it still isn’t clear if it degrades dignity.

Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions. But opposition quickly faded with passenger experience. Even though those last three more abstract concerns seem to have been confirmed.

Many indigenous peoples have strongly opposed cameras upon first exposure, fearing not only cameras “stealing souls”, but also extracting vital fluids like blood and fat. But by now such people mostly accept cameras, even though we still have little evidence on that soul thing. Some have feared that ghosts can travel through telephone lines, and while there’s little evidence to disprove this, few now seem concerned.

Consider the imagined future tech of the Star Trek type transporter. While most people might have heard some vague description of how it might work, such as info being read and transmitted to construct a new body, what they mainly know is that you would walk in at one place and the next thing you know you walk out apparently unchanged at another place far away. While it is possible to describe internal details such that most people would dislike such transport, without such details most people tend to assume it is okay.

When hundreds of ordinary people are asked if they’d prefer to commute via transporter, about 2/3 to 4/5 say they’d do it. Their main concern seems to be not wanting to get to work too fast. In a survey of 258 of my twitter contacts, 2/3 agreed. But if one asks 932 philosophers, who are taught abstract concerns about whether transporters preserve identity, only 36.2% think they’d survive, 31.1% think they’d die and be replaced by someone else, and 32.7% think something else.

Philosopher Mark Walker says that he’s discussed such identity issues with about a thousand students so far. If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.

Given our observed tech history, I’m pretty sure that few would express much concern if real transporters had actually been reliably used by millions of people to achieve great travel convenience without apparent problems. Even though that would actually offer little evidence regarding key identity concerns.

Yes, some might become reluctant if they focused attention on abstract concerns about human dignity, community ties, or preservation of identity. Just as some today can similarly become abstractly concerned that IVF hurts human dignity, fast transport hurts morals and communities, or even that cameras steal souls (where no contrary evidence has ever been presented).

In my debate with Bryan Caplan last Monday in New York City, I said he’s the sort of person who is reluctant to get into a transporter, and he agrees. He is also confident that ems lack consciousness, and thinks almost everyone would agree with him so strongly that humans would enslave ems and treat any deviation from extreme em docility very harshly, preventing ems from ever escaping slavery.

I admit that today, long before ems exist, it isn’t that hard to get many people into an abstract frame of mind where they doubt ems would be conscious, or doubt an em of them would be them. In that mental state, they are reluctant to move via destructive scanning from being a human to an em. Just as today many can get into a frame of mind where they fear a transporter. But even from an abstract view many others are attracted to the idea of becoming an em.

Once ems actually became possible, however, humans could interact directly and concretely with them, and see their beautiful worlds, beautiful bodies, lack of pain, hunger, disease, or grime, and articulate defense of their value and consciousness. These details would move most people to see ems in a far more concrete mental mode.

Once ems were cheap and began to become the main workers in the economy, a significant number of humans would accept destructive scanning to become ems. Those humans would ask for and mostly get ways to become non-slave ems. And once some of those new ems started to have high influence and status, other humans would envy them and want to follow, to achieve such concrete status ends. Abstract concerns would greatly fade, just as they would if we had real Star Trek transporters.

The debate proposition that I defended was “Robots will eventually dominate the world and eliminate human abilities to earn wages.” Initially the pro/con percentage was 22.73/60.23; finally it was 27.27/64.77. Each side gained the same added percentage. Since my side started out 3x smaller I gained a 3x larger fractional increase, but as I said when I debated Bryan before, the underdog side actually usually gains more in absolute terms.
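For the curious, a quick check of those debate numbers, using only the percentages reported above:

```python
# Pro/con percentages before and after the debate, as reported above.
pro_before, con_before = 22.73, 60.23
pro_after, con_after = 27.27, 64.77

print(pro_after - pro_before)                  # 4.54 points gained by pro
print(con_after - con_before)                  # 4.54 points gained by con
print((pro_after - pro_before) / pro_before)   # ~0.20 fractional gain for pro
print((con_after - con_before) / con_before)   # ~0.075 fractional gain for con
print(con_before / pro_before)                 # ~2.65, roughly the "3x smaller" starting side
```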

So yes, attitudes today are not on net that favorable to ems. But neither were related attitudes before cameras, steam trains, or IVF. Such attitudes mostly reflect an abstract view that could be displaced by concrete details once the tech was actually available and offered apparently large concrete personal gains. Yes, sometimes we can be hurt by our human tendency to neglect abstract concerns when concrete gains seduce us. But thankfully, not, I think, usually.


Chronicle Review Profile

I’m deeply honored to be the subject of a cover profile this week in The Chronicle Review:

[Cover image: The Chronicle Review, 17 Oct 2016]

By David Wescott, the profile is titled Is This Economist Too Far Ahead of His Time?, October 16, 2016.

In academic journal articles where the author has an intended answer to a yes or no question, that answer is more often yes, and I think that applies here as well. The profile includes a lot about my book The Age of Em on a far future, and its title suggests that anyone who’d study a far future must be too far ahead of their time. But, when else would one study the far future other than well ahead of time? It seems to me that even in a rational world where everyone was of their time, some people would study other times. But perhaps the implied message is that we don’t live in such a world.

I’m honored to have been profiled, and broad-ranging profiles tend to be imprecisely impressionistic. I think David Wescott did a good job overall, but since these impressions are about me, I’ll bother to comment on some (and signal my taste for precision). Here goes.

You inhabit a robotic body, and you stand roughly two millimeters tall. This is the world Robin Hanson is sketching out to a room of baffled undergraduates at George Mason University on a bright April morning.

Honestly, “baffled” is how most undergrads look to most professors during lectures.

Hanson is .. determined to promote his theories in an academy he finds deeply flawed; a doggedly rational thinker prone to intentionally provocative ideas that test the limits of what typically passes as scholarship.

Not sure I’m any more determined to self-promote than a typical academic. I try to be rational, but of course I fail. I seek the possibility of new useful info, and so use the surprise of a claim as a sign of its interestingness. Surprise correlates with “provocative”, and my innate social-cluelessness means I’ll neglect the usual social signs to “avoid this topic!” I question if I’m “intentionally provocative” beyond these two factors.

Hanson, deeply skeptical of conventional intellectual discourse,

I’m deeply skeptical of all discourse, intellectual or not, conventional or not.

At Caltech he found that economists based their ideas on simple models, which worked well in experiments but often failed to capture the complexities of the real world.

That is true of simple models in all fields, not just economics, and it is a feature not a bug. Models can be understood, while the full complexity of reality cannot.

But out of 3600 words, that’s all I have to correct, so good job David Wescott.


Smart Sincere Contrarian Trap

We talk as if we pick our beliefs mainly for accuracy, but in fact we have many social motives for picking beliefs. In particular, we use many kinds of beliefs as group affiliation/conformity signals. Some of us also use a few contrarian beliefs to signal cleverness and independence, but our groups have a limited tolerance for such things.

We can sometimes win socially by joining impressive leaders with the right sort of allies who support new fashions contrary to the main current beliefs. If enough others also join these new beliefs, they can become the new main beliefs of our larger group. At that point, those who continue to oppose them become the contrarians, and those who adopted the new fashions as they were gaining momentum gain more relative to latecomers. (Those who adopt fashions too early also tend to lose.)

As we are embarrassed if we seem to pick beliefs for any reason other than accuracy, this sort of new fashion move works better when supported by good accuracy-oriented reasons for changing to the new beliefs. This produces a weak tendency, all else equal, for group-based beliefs to get more accurate over time. However, many of our beliefs are about what actions are effective at achieving the motives we claim to have. And we are often hypocritical about our motives. Because of this, workable fashion moves need not just good reasons to believe claims about the efficacy of actions for stated motives, but also enough of a correspondence between the outcomes of those actions and our actual motives. Many possible fashion moves are unworkable because we don’t actually want to pursue the motives we proclaim.

Smarter people are better able to identify beliefs better supported by reasons, which all else equal makes those beliefs better candidates for new fashions. So those with enough status to start a new fashion may want to listen to smart people in the habit of looking for such candidates. But reasonably smart people who put in the effort are capable of finding a great many places where there are good reasons for picking a non-status-quo belief. And if they also happen to be sincere, they tend to visibly support many of those contrarian beliefs, even in the absence of supporting fashion movements with a decent chance of success. Which results in such high-effort smart sincere people sending bad group affiliation/conformity signals. So while potential leaders of new fashions want to listen to such people, they don’t want to publicly affiliate with them.

I fell into this smart sincere conformity trap long ago. I’ve studied many different areas, and when I’ve discovered an alternate belief that seems to have better supporting reasons than a usual belief, I have usually not hesitated to publicly embrace it. People have told me that it would have been okay for me to publicly embrace one contrarian belief. I might then have had enough overall status to plausibly lead that as a new fashion. But the problem is that I’ve supported many contrarian beliefs, not all derived from a common core principle. And so I’m not a good candidate to be a leader for any of my groups or contrarian views.

Which flags me as a smart sincere person. Good to listen to behind the scenes to get ideas for possible new fashions, but bad to embrace publicly as a loyal group member. I might gain if my contrarian views eventually became winning new fashions, but my early visible adoption of those views probably discourages others from trying to lead them, as they can less claim to have been first with those views.

If the only people who visibly supported contrarian views were smart sincere people who put in high effort, then such views might become known for high accuracy. This wouldn’t necessarily induce most people to adopt them, but it would help. However, there seem to be enough people who visibly adopt contrarian views for other reasons to sufficiently muddy the waters.

If prediction markets were widely adopted, the visible signals of which beliefs were more accurate would tend to embarrass more people into adopting them. Such people do not relish this prospect, as it would have them send bad group affiliation signals. Smart sincere people might relish the prospect, but there are not enough of them to make a difference, and even the few there are mostly don’t seem to relish it enough to work to get prediction markets adopted. Sincerely holding a belief isn’t quite the same as being willing to work for it.


Idea Talkers Clump

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

Now the prospect of human-level ordinary AI is definitely what more people are talking about today – the topic is in fashion. There are AI companies, demos, conferences, media articles, and more serious intellectual discussion. In fact, I’d estimate that there is now at least one hundred times as much attention given to the scenario of human level AI based on explicit coding (including machine learning code) than to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.
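To spell out the implied odds, here is a minimal sketch of when each side of that bet has positive expected value; the framing (treating the two scenarios as exhaustive and ignoring timing and inflation) is my simplification, not something from the post:

```python
def expected_values(p_em_first, em_stake=1000, other_stake=100):
    """Expected value of the bet to each side, given the probability that
    em-based AI arrives before other human-level AI. The other bettor pays
    em_stake if em-AI comes first; I pay other_stake if other AI comes first."""
    p_other_first = 1 - p_em_first
    my_ev = em_stake * p_em_first - other_stake * p_other_first
    return my_ev, -my_ev

# Break-even is at 10-to-1 odds against em-AI, i.e. p_em_first = 1/11.
print(expected_values(1 / 11))   # (~0, ~0)
print(expected_values(0.20))     # positive for me above the 1-in-11 threshold
print(expected_values(0.05))     # positive for the other side below it
```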

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less traveled. All those fellow travelers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

So instead of praising my attention to a neglected if less-likely topic, those who think em-AI less likely mostly criticize me for studying the wrong scenario. And continue to define topics of articles, conferences, special journal issues, etc. to exclude em-AI scenarios.

And this is how it tends to work in general in the world of ideas. Idea talkers tend to clump onto the topics that others have discussed lately, leaving topics outside the fashionable clumps with less attention relative to their importance. So if you are a thinker with the slack and independence to choose your own topics, an easy way to make disproportionate intellectual progress is to focus on neglected topics.

Of course most intellectuals already know this, and choose otherwise.

Added: Never mind my claim that effort should be spread more evenly than chances; Owen Cotton-Barratt reminded me that if value diminishes with the log of effort, optimal scenario effort is proportional to probability.
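A minimal sketch of that result, under the assumption that the value of studying scenario i is its probability p_i times the log of the effort e_i devoted to it, with a fixed total effort budget E:

```latex
% Maximize probability-weighted log value of effort, subject to an effort budget.
\max_{e_i \ge 0} \; \sum_i p_i \log e_i \quad \text{s.t.} \quad \sum_i e_i = E.

% First-order conditions of the Lagrangian:
\mathcal{L} = \sum_i p_i \log e_i - \lambda \Big( \sum_i e_i - E \Big), \qquad
\frac{\partial \mathcal{L}}{\partial e_i} = \frac{p_i}{e_i} - \lambda = 0
\;\Rightarrow\; e_i = \frac{p_i}{\lambda}.

% Summing over i and using \sum_i p_i = 1 gives \lambda = 1/E, hence
e_i = p_i \, E, \quad \text{i.e. optimal effort is proportional to probability.}
```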

Added 11Oct: Anders Sandberg weighs in.


Play Blindness

I’ve recently come to see play as a powerful concept for analyzing our behaviors. As I explained recently, play is a very old and robust capacity in many kinds of animals, apparently rediscovered several times.

In non-social play, an animal might play with their body or an object. When they feel safe and satisfied, they carve out a distinct space and time, within which they feel a deep pleasure from just trying out many random ways to interact, all chosen from a relatively safe space of variations. Often animals seek variations wherein they and their play objects reach some sort of interaction equilibrium, as when dribbling a ball. In such equilibria, they can successfully adjust to random interaction variations. Animals may end play abruptly if a non-play threat or opportunity appears.

In social play, an animal again waits until safe and satisfied, and feels pleasure from a large variety of safe behavior within a distinct space and time. The difference is that now they explore behavior that interacts with other animals, seeking equilibria that adjust well to changes in other animals’ behavior. Babies and mothers interact this way, and predators and prey act out variations on chasing and evading. Cats may play with mice before killing them.

These sorts of play can serve many functions, including learning, practice, and innovation. In addition, social play requires social skills of boundary management. That is, animals must develop ways to invite others to play, to indicate the kind of play intended, to assure others when play continues, and to indicate when play has ended. As with grooming, who one plays with becomes a signal of affiliation. Animals can work out their relative status via who tends to “win” inside play games, and communicate other things (such as flirtation) indirectly via play.

As humans have developed more kinds of social behavior, better ways to communicate, and a tendency to extend youthful behaviors into our whole lives, we have more ways to play. We can nest some types of play within others, and can create new types of play on the fly. Common features of most human play are some sort of safety prerequisites, a bounded space in which play happens, a feeling of pleasure from being included, a habit of exploring a wide range of options within play, limits on acceptable behavior, and special signals to initiate, continue, and end each play period.

For example, in mild-insult verbal banter play, we must each feel safe enough to focus on the banter, we and our allies are not supposed to threaten or interfere except via the banter, we are supposed to create each new response individually without help, responses are supposed to vary widely instead of repeating predictably, and some types of insults remain off limits. People may get quite emotionally hurt by such banter, but play can only continue while they pretend otherwise.

Another key feature of most human play is that we are supposed to only play for fun, instead of for gains outside of play. So we aren’t supposed to play golf to suck up to the boss, or to join a band to attract dates. Thus we typically suppress awareness of benefits outside of play. Most people find it hard to give coherent explanations of functions of play outside “fun.”

This seems to be one of humanity’s main blind spots regarding our motives. In general we try to explain most of our behaviors using the “highest” plausible motives we can find, and try to avoid violating social norms about appropriate motives. So we can be quite consciously clueless about why we play. That hardly means, however, that play serves no important functions in our lives. Far from it, in fact.


No One Rules The World

I’ve given talks on my book Age of Em 79 times so far (#80 comes Saturday in Pisa, Italy). As the book relies a lot on economics, while I mostly talk to non-econ audiences, I’ve been exposed a lot to how ordinary people react to economics. As I posted recently, one big thing I see is low confidence that any sort of social science can say anything generalizable about anything.

But the most common error I see is a lack of appreciation that coordination is hard. I hear things like:

If you asked most people today if they want a future like this, they’d say no. So how could it happen if most people don’t like it?

Their model seems to be that social outcomes are a weighted average of individual desires. If so, an outcome most people dislike just can’t happen. If you ask for a mechanism the most common choice is revolution: if there was some feature of the world that most people didn’t like, well of course they’d have a revolution to fix that. And then the world would be fixed. And not just small things: changes as big as the industrial or farming revolutions just wouldn’t happen if most people didn’t want them.

Now people seem to be vaguely aware that revolutions are hard and rare, that many attempted revolutions have failed, or succeeded but failed to achieve their stated aims, and that the world today has many features that majorities dislike. The world today has even more features where majorities feel unsure, not knowing what to think, because things are so complicated that it is hard to understand the feasible options and action consequences. Yet people seem to hold the future to a different standard, especially the far future.

Near-far theory (aka construal level theory) offers a plausible explanation for this different attitude toward the future. As we know a lot less detail about the future, we see it in a far mode, wherein we are more confident in our theories, see fewer relevant distinctions, and emphasize basic moral values relative to practical constraints. Even if the world around us seems too complex to understand and evaluate, issues and choices seem simpler and clearer regarding a distant future where in fact we can barely envision its outlines.

But of course coordination is actually very hard. Not only do most of us only dimly understand the actual range of options and consequences of our actions today, even when we do understand we find it hard to coordinate to achieve such outcomes. It is easier to act locally to achieve our local ends, but the net effect of local actions can result in net outcomes that most of us dislike. Coordination requires that we manage large organizations which are often weak, random, expensive, and out of control.

This seems especially true regarding the consequences of new tech. So far in history tech has mostly appeared whenever someone somewhere has wanted it enough, regardless of what the rest of the world thought. Mostly, no one has been driving the tech train. Sometimes we like the result, and sometimes we don’t. But no one rules the world, so these results mostly just happen either way.
