Monthly Archives: November 2016

My Play

In social play, an animal again waits until safe and satisfied, and feels pleasure from a large variety of safe behavior within a distinct space and time. The difference is that now they explore behavior that interacts with other animals, seeking equilibria that adjust well to changes in other animals’ behavior. (more)

Over the course of their lives Kahneman and Tversky don’t seem to have actually made many big decisions. The major trajectories of their lives were determined by historical events, random coincidences, their own psychological needs and irresistible impulsions. .. Their lives weren’t so much shaped by decisions as by rapture. They were held rapt by each other’s minds. (more)

When tested in national surveys against such seemingly crucial factors as intelligence, ability, and salary, level of motivation proves to be a more significant component in predicting career success. While level of motivation is highly correlated with success, importantly, the source of motivation varies greatly among individuals and is unrelated to success. (more)

In recent posts I said that play is ancient and robust, and I outlined what play consists of. I claimed that play is a powerful concept, but I haven’t supported that claim much. Today, I’ll consider some personal examples.

As a kid I was a severe nerd. I was beaten up sometimes, and for years spent each recess being chased around the school yard. This made me quite cautious and defensive socially. Later I was terrified of girls and acted cautiously toward them too, which they didn’t take as a positive sign. In college I gave up on girls for a while, and then was surprised to find women attracted by my chatting sincerely about physics at the physics club.

Being good at schoolwork, I was more willing to take chances there, and focused more on what interested me. In college, when I learned that the second two years of physics covered the same material as the first two years, just with more math, I stopped doing homework and played with the equations instead, and aced the exams. I went to grad school in philosophy of science because that interested me at the time, and then switched back to physics because I’d found good enough answers to my philosophy questions.

I left school for Silicon Valley when topics out there sounded more interesting, and a few years later switched to working only 30 hours a week so I could spend more time studying what I wanted. I started a PhD program at age 34, with two kids aged 0 and 2, and allowed myself to dabble in many topics not on the shortest path to tenure. Post tenure I’ve paid even less attention to the usual career rewards. I chose as my first book topic not the most marketable, impressive, or important topic, but the one that would most suck me in with fascinating detail. (I’d heard half the authors with a book contract don’t finish a book.)

So I must admit that much of my personal success in life has resulted less from econ-style conscious calculation, and more from play. Feeling safe enough to move into play mode freed me enough from anxiety to get things done. And even though my goals in more playful modes tended more to cuteness, curiosity, and glory, my acts there better achieved my long term goals than has conscious planning toward such ends. Yes, I did moderate my playful urges based on conscious thought, and that probably helped overall. Even so, I must admit that my personal experience raises doubts about the value of conscious planning.

My experience is somewhat unusual, but I still see play helping a lot in the successes of those I know and respect. While conscious planning can at times be important, what tends to matter more is finding a strong motivation, any strong motivation, to really get into whatever it is you are doing, and feeling comfortable enough to just explore even when none of your options seem especially promising and you face real career and resource pressures.

Playful motives are near and myopic but strong, while conscious planning can be accurate but far. Near beats far it seems. I’ll continue to ponder play, and hopefully find more to say.

Careful Who You Call ‘Racist’

Imagine that you manage a restaurant, and suddenly during the evening shift a middle-aged woman stands up, points to another diner, and yells “Murderer!” She loudly appeals to everyone to help her restrain and punish this supposed murderer. (Think Catelyn seizing Tyrion in GoT.) When other diners are shy, she demands that you expel this murderer from your restaurant. She says that in a civilized society it is every good person’s duty to oppose murder, and explains her belief that her husband went to an early grave because this older man, his boss, worked him too hard. Sure, her husband could have quit his job instead, but he just wasn’t that sort of person.

Will you expel this customer as requested? Probably not. Yes there is a plausible meaning of the word “murder” that applies, but the accused must satisfy a narrower meaning for such an appeal to move you. In this post I will suggest that we take a similar restricted attitude toward “racism” in politics. Let me explain.

Humans have many ways to persuade one another. We can make deals, or we can appeal to self-interest, mutual reciprocity, or shared loyalties. In addition, we can appeal to shared moral/social norms. This last sort of appeal draws on our unique human capacity to enforce what Boehm calls a “reverse dominance hierarchy.” Foragers coordinated to express norms, to monitor for violations, to agree on who was guilty, and then to punish those violators. Such norms covered only a limited range of behaviors, those worth the trouble of invoking this expensive, corruptible, and error-prone mechanism.

With farming and civilization, we introduced law. With law, we added a formal specialized process to support a subset of our especially shared, important, clear, and enforceable norms. Foragers would entertain most any argument against most anyone that most any behavior was a norm violation. For example, a band could declare a disliked forager guilty of using sorcery, even if no concrete physical evidence were offered. But farmer law usually limited accusations to clearly expressed pre-existing law, and limited the kinds of evidence that could be offered.

For example, multiple witnesses were often required, and instead of relying on median public opinion, a special judge or jury looked into more detail to make a decision. Negligence standards were made extra forgiving, due to the chance of honest mistakes. To be a good candidate for enforcement by farmer law, a norm needed especially wide support, and to be especially clear and easy to prove, even by those unfamiliar with the details of a particular person’s habits and life. And the norm needed to be important enough to be worth paying the extra costs of legal enforcement, including a substantial expected level of error and corruption.

In the last few centuries governments have mostly taken over the “criminal” area of law, where it is now they who investigate and prosecute accusations, and punish the guilty. Because such governments can be more corruptible, error-prone, and inefficient, the criminal law process is only applied to an especially important subset of law. And even more restrictions are placed on government law, such as juries, statutes of limitations, prison as punishment, proportionate punishment, and a “beyond a reasonable doubt” standard of proof. To avoid costs of error and enforcement, we often try to catch fewer violators and punish them more strongly to compensate.

Today, many kinds of political arguments are offered for and against people, organizations, and policies. While many arguments appeal to self-interest and shared loyalties, others demand priority because of norm violations. The claim is that whatever other different interests we may have and pursue, it is essential that we set those aside to coordinate to punish key norm violations. And since many of these norms are, for various reasons, not enforced by formal law, we depend on other good people and organizations to respond to such moral calls to action.

And this all makes sense so far. But in the last half century in the West, preferences against “racism” have risen to at least near the level of moral norms. (We have related feelings on “sexism” and other “isms” but in this post I’ll focus on racism for concreteness.) Whatever else we may disagree on, we are told, we must coordinate to oppose racists, boycotting their businesses and drumming them out of public office. Which could make sense if enough of us agree strongly enough to make this a priority, and if we share an effective way to collectively identify such violations.

One problem, however, is that our commonly used concepts of “racism” seem more appropriate to ordinary conversation and persuasion than to usefully enforceable strong norms and law. Some favor concepts where most everyone is at least a bit racist, and others favor concepts based on hard-to-observe dispositions. But while such concepts may be useful in ordinary conversation or academic analysis, they are poorly suited for enforcing strong norms and law.

For example, many today claim that Trump is clearly racist, and invoke a shared norm against racism in their appeal for everyone to oppose Trump. No good person, they suggest, should cooperate in any way with Trump or his supporters. A good person can’t treat this as politics as usual, not when a norm violator stands among us unpunished! It is even hinted that people with positions of influence in important institutions, such as in media, academia, journalism, law, and governance, should deviate from their usual practice of following institutional norms of political neutrality, and instead tip the scales against Trump supporters, now that everything is at stake.

But as Scott Alexander recently tried to argue, the evidence offered for Trump racism doesn’t yet seem sufficient to hold up in a legal court, not at least if that court used a “racism” concept of the sort law prefers. If your concept of “racist” applies to a third of the population, or requires a subjective summing up of everything you’ve ever heard about the accused, it just won’t do for law.

Yes, people are trying Trump in a court of public opinion, not in a court of law. But my whole point here is that there is a continuum of cases, and we should hold a higher, more restrictive, more law-like standard for enforcing strong norms than we should in ordinary conversation and persuasion. Higher standards are also needed for larger, more varied communities, when there are stronger possibilities of bias and corruption, and when the enforcing audience pays less attention to its job. So we should be a lot more careful with who we call “racist” than who we call “hot” or “smart”, for example. For those latter judgements, which are not the basis of calls to enforce shared strong norms, it is more okay to just use your judgement based on everything you’ve heard.

Now I haven’t studied Trump or his supposed racism in much detail. So maybe in fact if you look carefully enough there is enough evidence to convict, even with the sort of simple clear-cut definition of “racism” that would make sense and be useful in law. But this appeal to norm enforcement should and will fail if that evidence can’t be made clear and visible enough to the typical audience member to whom this appeal is targeted. We must convict together or not at all; informal norm enforcement requires a strong consensus among its participants.

Maybe it is time to enshrine our anti-racism norm more formally in law. Then we could gain the benefits of law and avoid the many costs of informal mob enforcement of our anti-racism norms. I really don’t know. But I have a stronger opinion that if you are going to appeal to our sense of a strong shared norm against something like racism, you owe it to us all to hold yourself to a high standard: a clear, important, and visible violation of a nearly-law-appropriate concept. Because that is how law and norm enforcement need to work.

Yes we are limited in our ability to enforce norms and laws, and this limits our ability to encourage good behavior. And it may gall you to see bad behavior go unpunished due to these limitations. But wishes don’t make horses, and these costs are real. So until we can lower such costs, please do be careful who you call a “racist.”

10 Year Blog Anniversary

Ten years ago today this blog began with this post. Since then we’ve had 3,772 posts, 104 thousand comments, & over 15 million page views. This started as a group blog, and later became my personal blog, and I’ve been posting less the last few years as I focused on writing books.

I still have mixed feelings about putting in effort to write blog posts, relative to longer, more academic articles and books. I agree that a blog post can communicate a useful and original insight in just a few paragraphs to thousands, while an academic article or book might be read by only tens or hundreds. But a much higher fraction of academic readers will try to build on my insight in a way that becomes part of our shared accumulating edifice of human insight. My hope is that even if the fraction of blog readers who also do this is small, it is large enough to produce a comparable total number. Because if not, I fear blogging is mostly a waste.
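
The hope here is just arithmetic: a large audience times a small build-on rate can match a small audience times a large one. Here is a minimal sketch of that arithmetic, with all numbers made up purely for illustration:

```python
# Back-of-envelope arithmetic for the hope above; all numbers are
# made-up illustrations, not actual readership statistics.
blog_readers = 10_000        # readers of a typical blog post
blog_build_rate = 0.005      # fraction who build on the insight
paper_readers = 100          # readers of a typical academic article
paper_build_rate = 0.5       # fraction who build on the insight

print(blog_readers * blog_build_rate)    # 50.0 builders via the blog
print(paper_readers * paper_build_rate)  # 50.0 builders via the article
```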

Dial It Back

In a repeated game, where the same people play the same game over and over, cooperation can more easily arise than in a one-shot version of the game, where such people play only once and then never interact again. This sort of cooperation gets easier the more that players care about the many future iterations of the game, compared to the current iteration.
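
One standard way to make this precise is the grim-trigger strategy in a repeated prisoner’s dilemma, where cooperation is sustainable only if players weigh the future heavily enough. Below is a minimal sketch; the payoff numbers are textbook illustrations, not anything from this post:

```python
# Grim trigger in a repeated prisoner's dilemma: cooperate until the
# other player defects, then defect forever. Payoffs are illustrative.
def cooperation_sustainable(reward, temptation, punishment, discount):
    """Is mutual cooperation an equilibrium under grim trigger?

    Cooperating forever pays reward/(1 - discount). Defecting pays
    temptation once, then the discounted value of mutual punishment,
    discount * punishment/(1 - discount), thereafter.
    """
    coop_value = reward / (1 - discount)
    defect_value = temptation + discount * punishment / (1 - discount)
    return coop_value >= defect_value

# Cooperation holds only when players weigh future iterations heavily.
for discount in (0.2, 0.5, 0.8):
    print(discount, cooperation_sustainable(3, 5, 1, discount))
# 0.2 False, 0.5 True, 0.8 True
```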

When a group repeats the same game, but some iterations count much more than others, then defection from cooperation is most likely at a big “endgame” iteration. For example, spies who are moles in enemy organizations will usually hide and behave just as that organization wants and expects, waiting for a very big event so important that it can be worth spending their entire career investment to influence that event.

Many of our institutions function well because most participants set aside immediate selfish aims in order to conform to social norms, thereby gaining more support from the organization in the long term. But when one faces a single very important “endgame” event, one is then most tempted to deviate from the norms. And if many other participants also see that event as very important, then your knowing that they are tempted more to deviate tempts you more to deviate. So institutions can unravel when faced with very big events.

I’ve been disturbed by rising US political polarization over recent decades, with each election accompanied by more extreme rhetoric saying “absolutely everything is now at stake!” And I’ve been worried that important social institutions could erode when more people believe such claims. And now with Trump’s election, this sort of talk has gone off the charts. I’m hearing quite extreme things, even from quite powerful, important people.

Many justify their extreme stance by saying Trump has said things suggesting he is less than fully committed to existing institutions, so they must oppose him strongly to save those institutions. But I’m also worried that such institutions are threatened by this never-compromise, never-forget, take-no-prisoners, fight-fight-fight mood. If the other side decides that your side will no longer play by the usual institutional norms of fairness, they won’t feel inclined to play fair either. And this really all might go to hell.

So please everyone, dial it back a bit. Yes, if for you what Trump has already done is so bad that no compromise is tolerable, well then you are lost to me. But for the rest of you, I’m not saying to forget, or to not watch carefully. But wait until Trump actually does something concrete that justifies loudly saying this time is clearly different and now everything is at stake. Yeah, that may happen, but surely you want Trump folks to know that isn’t the only possible outcome. There need to be some things Trump folks could do to pursue some of their agendas that would be politics as usual. Politics where your side doesn’t run the presidency, and so you have to expect to lose on things where you would have won had Clinton become president. But still, politics where our existing institutions can continue to function without everyone expecting everyone else to defect from the usual norms because now everything is at stake.

Added 21Nov: Apparently before the election more people on Trump’s side talked about presuming the election was rigged if their side lost. Without concrete evidence to support such accusations, that also seems a lamentable example of defecting from existing institutions because now everything is at stake. HT Carl Shulman.

Trump, Political Innovator

People are complicated. Not only can each voter be described by a very high-dimensional space of characteristics; the space of possible sets of voters is even larger. Because of this, coalition politics is intrinsically complex, making innovation possible and relevant.

That is, at any one time the existing political actors in some area use an existing set of identified political coalitions, and matching issues that animate them. However, these existing groups are but a tiny part of the vast space of possible groups and coalitions. And even if one had exhaustively searched the entire space and found the very best options, over time those would become stale, making new better options possible.
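
To get a feel for how vast that space is, here is a quick back-of-envelope sketch; the voter count is an arbitrary assumption:

```python
# With n voters there are 2**n possible coalitions (subsets of voters).
# Even a few hundred voters yields more coalitions than the roughly
# 10**80 atoms in the observable universe; n here is arbitrary.
from math import log10

n_voters = 300
digits = int(n_voters * log10(2))
print(f"about 10^{digits} possible coalitions")  # about 10^90
```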

As usual in innovation, each actor can prefer to free-ride on the efforts of others, and wait to make use of new coalitions that others have worked to discover. But some political actors will explore new possible coalitions and issues more. Most will probably try for a resurgence of old combinations that worked better in the past than they have recently. But some will try out more truly new combinations.

We expect those who innovate politically to differ in predictable ways. They will tend to be outsiders looking for a way in, and their personal preferences will less well match existing standard positions. Because innovators must search the space of possibilities, their positions and groups will be vaguer and vary more over time, and they will hew less to existing rules and taboos on such things. They will more often work their crowds on the fly to explore their reactions, relative to sticking to prepared speeches. Innovators will tend to arise more when power is more up for grabs, with many contenders. Successful innovation tends to be a surprise, and is more likely the longer it has been since a major innovation, or “realignment,” with more underlying social change during that period. When an innovator finds a new coalition to represent, that coalition will be less attracted to this politician’s personal features and more to the fact that someone is offering to represent them.

The next US president, Donald Trump, seems to be a textbook political innovator. During a period when his party was quite up for grabs with many contenders, he worked his crowds, taking a wide range of vague positions that varied over time, and often stepped over taboo lines. In the process, he surprised everyone by discovering a new coalition that others had not tried to represent, a group that likes him more for this representation than his personal features.

Many have expressed great anxiety about Trump’s win, saying that he is bad overall because he induces greater global and domestic uncertainty. In their mind, this includes higher chances of wars, coups, riots, collapse of democracy, and so on. But overall these seem to be generic consequences of political innovation. Innovation in general is disruptive and costly in the short run, but can aid adaptation in the long run.

So you can dislike Trump for two very different reasons. First, you can dislike innovation on the other side of the political spectrum, as you see that coming at the expense of your side. Or you can dislike political innovation in general. But if innovation is the process of adapting to changing conditions, it must be mostly a question of when, not if. And less frequent innovations probably result in bigger changes, which is probably more disruptive overall.

So what you should really be asking is: what were the obstacles to smaller past innovations in Trump’s new direction? And how can we reduce such obstacles?

Get A Grip; There’s A Much Bigger Picture

Many seem to think the apocalypse is upon us – I hear oh so much wailing and gnashing of teeth. But if you compare the policies, attitudes, and life histories of the US as it will be under Trump, to how they would have been under Clinton, that difference is very likely much smaller than the variation in such things around the world today, and also the variation within the US so far across its history. And all three of these differences are small compared to the variation in such things across the history of human-like creatures so far, and also compared to that history yet to come.

That is, there are much bigger issues at play, if only you will stand back to see them. Now you might claim that pushing on the Trump vs. Clinton divide is your best way to push for the future outcomes you prefer within that larger future variation yet to come. And that might even be true. But if you haven’t actually thought about the variation yet to come and what might push on it, your claim sure sounds like wishful thinking. You want this thing that you feel so emotionally invested in at the moment to be the thing that matters most for the long run. But wishes don’t make horses.

To see the bigger picture, read more distant history. And maybe read my book, or any similar books you can find, that try seriously to see how strange the long-term future might be, and what its issues may be. And then you can more usefully reconsider just what about this Trump vs. Clinton divide that so animates you now has much of a chance of mattering in the long run.

When you are in a frame of mind where Trump (or Clinton) equals the apocalypse, you are probably mostly horrified by most past human lives, attitudes, and policies, and also by likely long-run future variations. In such a mode you probably thank your lucky stars you live in the first human age and place not to be an apocalyptic hell-hole, and you desperately want to find a way to stop long-term change, to find a way to fill the next trillion years of the universe with something close to liberal democracies, suburban comfort, elites chosen by universities, engaging TV dramas, and a few more sub-genres of rock music. I suspect that this is the core emotion animating most hopes to create a friendly AI superintelligence to rule us all. But most likely, the future will be even stranger than the past. Get a grip, and deal with it.

Needed: Social Innovation Adaptation

This is the point during the electoral cycle when people are most willing to consider changing political systems. The nearly half of voters whose candidates just lost are now most open to changes that might have let their side win. But even in an election this acrimonious, that interest is paper thin, and blows away in the slightest breeze. Because politics isn’t about policy – what we really want is to feel part of a political tribe via talking with them about the same things. So if the rest of your tribe isn’t talking about system change, you don’t want to talk about that either.

So I want to tell or remind everyone that if you actually did care about outcomes instead of feeling part of a big tribe, large social gains wait untapped in better social institutions. In particular, very large gains await detailed field trials of institutional innovations. Let me explain.

Long ago when I was a physicist turned computer researcher who started to study economics, I noticed that it seemed far easier to design new better social institutions than to design new better computer algorithms or physical devices. This helped inspire me to switch to economics.

Once I was in a graduate program with a thesis advisor who specialized in institution/mechanism design, I seemed to see a well-established path for social innovations, from vague intuitions to theoretical analysis to lab experiments to simplified field experiments to complex practice. Of course, as with most innovation paths, as costs rose along the path most candidates fell by the wayside. And yes, designing social institutions was harder than it looked at first, though it still seems easier than for computers and physical devices.

But it took me a long time to learn that this path is seriously broken near the end. Organizations with real problems do in fact sometimes allow simplified field trials of institutional alternatives that social scientists have proposed, but only in a very limited range of areas. And usually they mainly just do this to affiliate with prestigious academics; most aren’t actually much interested in adopting better institutions. (Firms mostly outsource social innovation to management consultants, who don’t actually endorse much. Yes startups explore some innovations, but relatively few.)

So by now academics have accumulated a large pile of promising institution ideas, many of which have supporting theory, lab experiments, and even simplified field trials. In addition, academics have even larger literatures that measure and theorize about existing social institutions. But even after promising results from simplified field experiments, much work usually remains to adapt such new proposals to the many complex details of existing social worlds. Complex worlds can’t usefully digest abstract academic ideas without such adaptation.

And the bottom line is that we very much lack organizations willing to do that work for social innovations. Organizations do this work more often for computer or device innovations, and sometimes social innovations get smuggled in via that route. A few organizations sometimes work on social innovations directly, but mostly to affiliate with prestigious academics, so if you aren’t such an academic you mostly can’t participate.

This is the point where I’ve found myself stuck with prediction & decision markets. There has been prestige and funding to prove theorems, do lab experiments, analyze field datasets, and even do limited simplified field trials. But there is little prestige or funding for that last key step of adapting academic ideas to complex social worlds. It’s hard to apply rigorous general methods in such efforts, and so hard to publish on that academically. (Even interested blockchain folks have mainly been writing general code, not working with messy organizations.)

So if you want to make clubs, firms, cities, nations, and the world more effective and efficient, a highly effective strategy is to invest in widening the neglected bottleneck of the social innovation pathway. Get your organization to work on some ideas, or pay other organizations to work on them. Yes, some ideas can only be tried out at large scales, but for most there are smaller-scale analogues that it makes sense to work on first. I stand ready to help organizations do this for prediction & decision markets. But alas, to most organizations I lack sufficient prestige for such associations.

Big Impact Isn’t Big Data

A common heuristic for estimating the quality of something is: what has it done for me lately? For example, you could estimate the quality of a restaurant via a sum or average of how much you’ve enjoyed your meals there. Or you might weight recent visits more, since quality may change over time. Such methods are simple and robust, but they aren’t usually the best. For example, if you know of others who ate at that restaurant, their meal enjoyment is also data, data that can improve your quality estimate. Yes, those other people might have different meal priorities, and that may be a reason to give their meals less weight than your meals. But still, their data is useful.

Consider an extreme case where one meal, say your wedding reception meal, is far more important to you than the others. If you weight your meal experiences in proportion to meal importance, your whole evaluation may depend mainly on one meal. Yes, if important meals differ substantially from other meals, then this method best avoids biases from using unimportant types of meals to judge important types. But the noise in your estimate will be huge; individual restaurant meals can vary greatly for many random reasons even when the underlying quality stays the same. You just won’t know much about meal quality.

I mention all this because many seem eager to give the recent presidential election (and the recent Brexit vote) a huge weight in their estimates of the quality of various prediction sources. Sources that did poorly on those two events are judged to be poor sources overall. And yes, if these were by far the more important events to you, this strategy avoids the risk that familiar prediction sources have a different accuracy on events like this than they do on other events. Even so, this strategy mostly just puts you at the mercy of noise. If you use a small enough set of events to judge accuracy, you just aren’t going to be able to see much of a difference between sources; you will have little reason to think that those sources that did better on these few events will do much better on other future events.
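
A quick simulation can make the noise point concrete. The accuracies and event counts below are made-up assumptions; the point is only how often a truly better source even looks better when judged on a handful of events:

```python
# Simulate ranking two prediction sources on few vs. many events.
# True accuracies (0.70 vs 0.55) are made-up illustrations.
import random

def hits(accuracy, n_events, rng):
    """Number of correct calls out of n_events."""
    return sum(rng.random() < accuracy for _ in range(n_events))

def better_source_looks_better(n_events, trials=10_000, seed=0):
    """Fraction of trials where the truly better source scores strictly higher."""
    rng = random.Random(seed)
    wins = sum(hits(0.70, n_events, rng) > hits(0.55, n_events, rng)
               for _ in range(trials))
    return wins / trials

for n in (2, 20, 200):
    print(n, better_source_looks_better(n))
# With 2 events the better source often fails to even look better;
# with 200 events it almost always does.
```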

Me, I don’t see much reason to think that familiar prediction sources have an accuracy that is very different on the most important events, relative to other events, and so I mainly trust comparisons that use a lot of data. For example, on large datasets prediction markets have shown a robustly high accuracy compared to other sources. Yes, you might find other particular sources that seem to do better in particular areas, but you have to worry about selection effects – how many similar sources did you look at to find those few winners? And if prediction market participants became convinced that these particular sources had high accuracy, they’d drive market prices to reflect those predictions.

Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)
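
As a toy illustration of this input/state/output framing, consider a leaky integrate-and-fire cell, a textbook simplification; all constants here are arbitrary stand-ins, and real emulation-grade models would need far more chemical and structural detail:

```python
# Toy leaky integrate-and-fire cell: take input, update internal state,
# emit an output spike past a threshold. Constants are arbitrary.
def step(voltage, input_current, leak=0.9, threshold=1.0, reset=0.0):
    """One time step of the cell model; returns (new_voltage, spike)."""
    voltage = leak * voltage + input_current  # input changes internal state
    if voltage >= threshold:
        return reset, 1  # output signal sent; state resets
    return voltage, 0

v = 0.0
for current in [0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]:
    v, spike = step(v, current)
    print(round(v, 3), spike)
```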

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve done before; for a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such people estimate 5-10x slower progress in their subfield in the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly forecast brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – we will more quickly find models and algorithms that substitute for brains than we will those that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.

Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks. Or at least do so for longer and stronger. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would be more often replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference is less clear. You’d just want efficient effective software, and so want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus humans minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of the quality of software, efficiency-oriented system designers will rely more on ems instead of ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will be more matched to humans for longer than they would have under the first scenario.

Ems would also seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and they prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to have more things like typical humans for longer, you should prefer a scenario where ems appear before ordinary software displaces almost all humans on jobs.

Added 15Dec: In this book chapter I expand a bit on this post.
