Tag Archives: Future

Better Babblers

You can think of knowing how to write as knowing how to correlate words. Given no words, what first word should you write? Then given one word, what second word best correlates with that? Then given two words, what third word best fits with those two? And so on. Thus your knowledge of how to write can be broken into what you know at these different correlation orders: one word, two words, three words, and so on. Each time you pick a new word you can combine knowledge at these different orders, by weighing all their different recommendations for your next word.

This correlation order approach can also be applied at different scales. For example, given some classification of your first sentence, what kind of second sentence should follow? Given a classification of your first chapter, what kind of second chapter should follow? Many other kinds of knowledge can be similarly broken down into correlation orders, at different scales. We can do this for music, paintings, interior decoration, computer programs, math theorems, and so on.

Given a huge database, such as of writings, it is easy to get good at very low orders; you can just use the correlation frequencies found in your dataset. After that, simple statistical models applied to this database can give you good estimates for correlations to use at somewhat higher orders. And if you have enough data (roughly ten million examples per category I’m told) then recently popular machine learning techniques can improve your estimates at a next set of higher orders.
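To make "low order correlations" concrete, here is a minimal sketch, in Python, of a next-word recommender that does nothing but count word-sequence frequencies in a corpus and then weighs the recommendations of each order, as described above. The corpus, weights, and function names are invented for illustration; this is not how any particular system works.

```python
from collections import Counter, defaultdict

def train_orders(tokens, max_order=3):
    """Count how often each word follows each (order-1)-word context."""
    counts = {n: defaultdict(Counter) for n in range(1, max_order + 1)}
    for i, word in enumerate(tokens):
        for n in range(1, max_order + 1):
            if i >= n - 1:
                context = tuple(tokens[i - n + 1:i])
                counts[n][context][word] += 1
    return counts

def next_word_scores(counts, history, weights):
    """Blend each order's recommendation for the next word, using fixed weights."""
    scores = Counter()
    for n, weight in weights.items():
        context = tuple(history[-(n - 1):]) if n > 1 else ()
        total = sum(counts[n][context].values())
        if total == 0:
            continue  # this order has never seen this context
        for word, c in counts[n][context].items():
            scores[word] += weight * c / total
    return scores

# Toy usage with a made-up corpus and arbitrary per-order weights.
corpus = "the cat sat on the mat and the cat ran to the mat".split()
model = train_orders(corpus, max_order=3)
print(next_word_scores(model, ["the", "cat"], {1: 0.1, 2: 0.3, 3: 0.6}).most_common(3))
```

Everything this sketch knows is in its counts; it has no deeper structure to fall back on when it meets a context it has never seen.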

There are some cases where this is enough; either you can get enormous amounts of data, or learning low order correlations well is enough to solve your problem. These cases include many games with well defined rules, many physical tasks where exact simulations are feasible, and some kinds of language translation. But there are still many other cases where this is far from enough to achieve human level proficiency. In these cases an important part of what we know can be described as very high order correlations produced by “deep” knowledge structures that aren’t well reduced to low order correlations.

After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from the answer you’d guess looking at low order correlations, most students usually give the wrong answer.

Simple correlations also seem sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party. Simple correlations are also most of what I see in inspirational TED talks, and when public intellectuals and talk show guests pontificate on topics they really don’t understand, such as quantum mechanics, consciousness, postmodernism, or the need always for more regulation everywhere. After all, media entertainers don’t need to understand deep structures any better than do their audiences.

Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.

As we slowly get better at statistics and machine learning, our machines will slowly get better at babbling. The famous Eliza chatbot went surprisingly far using very low order correlations, and today chatbots best fool us into thinking they are human when they stick to babbling style conversations. So what does a world of better babblers look like?

First, machines will better mimic low quality student essays, so schools will have to try harder to keep such students from using artificial babblers.

Second, the better machines get at babbling, the more humans will try to distinguish themselves from machines via non-babbling conversational styles. So expect less use of simple easy-to-understand-and-predict speech in casual polite conversation, inspirational speeches, and public intellectual talk.

One option is to put a higher premium on talk that actually makes deep sense, in terms of deep concepts that experts understand. That would be nice for those of us who have always emphasized such things. But alas there are other options.

A second option is to put a higher premium on developing very distinctive styles of talking. This would be like how typical popular songs from two centuries ago could be sung and enjoyed by most anyone, compared to how popular music today is matched in great detail to the particular features of particular artists. Imagine most all future speakers having as distinct a personal talking style.

A third option is more indirect, ironic, and insider style talk, such as we tend to see on Twitter today. People using words and phrases and cultural references in ways that only folks very near in cultural space can clearly accept as within recent local fashion. Artificial babblers might not have enough data to track changing fashions in such narrow groups.

Bottom line: the more kinds of conversation styles that simple machines can manage, the more humans will try to avoid talking in those styles, at least when not talking to machines.


The Robot Protocol

Talking with a professor of robotics, I noticed a nice approachable question at the intersection of social science, computer science, and futurism.

Someday robots will mix with humans in public, walking our streets, parks, hospitals, and stores, driving our streets, swimming our waterways, and perhaps flying our skies. Such public robots may vary enormously in their mental and physical capacities, but if they are to mix smoothly with humans in public then we will probably expect them to maintain a minimal set of common social capacities. Such as responding sensibly to “Who are you?” and “Get out of my way.” And the rest of us would have a new modified set of social norms for dealing with public robots via these capacities.

Together these common robot capacities and matching human social norms would become a “robot protocol.” Once ordinary people and robots makers have adapted to it, this protocol would be a standard persisting across space and time, and relatively hard to change. A standard that diverse robots could also use when interacting with each other in public.

Because it would be a wide and persistent standard, the robot protocol can’t be matched in much detail to the specific local costs of implementing various robot capacities. Instead, it could at best be matched to broad overall trends in such costs. To allow robots to walk among us, we’d try to be forgiving and only expect robots to have capacities that we especially value, and that are relatively cheap to implement in a wide range of contexts.

(Of course this general robot protocol isn’t the only thing that would coordinate robot and human interactions. There’d also be many other more context-dependent protocols.)

One simple option would be to expect each public robot to be “tethered” via fast robust communication to a person on call who can rapidly respond to all queries that the robot can’t handle itself. But it isn’t clear how sufficient this approach will be for many possible queries.

Robots would probably be expected to find and comply with any publicly posted rules for interacting in particular spaces, such as the rules we often post for humans on signs. Perhaps we will simplify such rules for robots. In addition, here are some things that people sometimes say to each other in public where we might perhaps want robots to have analogous capacities:

Who are you? What are you doing here? Why are you following me? Please don’t record me. I’m serving you with this legal warrant. Stop, this is the police! You are not allowed to be here; leave. Non-authorized personnel must evacuate this area immediately. Get out of my way. You are hurting me. Why are you calling attention to me? Can you help me? Can you take our picture? Where is the nearest bathroom? Where is a nearby recharging station? (I may add more here.)
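To make such a minimal set of capacities concrete, here is a small hypothetical sketch of a handler that maps a few of these standard public queries to responses, and falls back to a tethered human operator for anything else. The query set, class names, and tether interface are invented for illustration, not a proposed standard.

```python
class PublicRobotProtocol:
    """Hypothetical sketch of a minimal public-robot protocol handler."""

    def __init__(self, robot_id, owner, operator):
        self.robot_id = robot_id
        self.owner = owner
        self.operator = operator  # tethered human on call, reachable via operator.ask()

    def handle(self, query: str) -> str:
        q = query.lower().strip(" ?!.")
        if q == "who are you":
            return f"I am robot {self.robot_id}, operated by {self.owner}."
        if q == "get out of my way":
            self.yield_right_of_way()
            return "Understood, moving aside now."
        if q == "please don't record me":
            self.stop_recording()
            return "I have stopped recording you."
        if q == "where is the nearest bathroom":
            return self.lookup_posted_map("bathroom")
        # Anything unrecognized is passed to the tethered human operator.
        return self.operator.ask(self.robot_id, query)

    # Placeholders: physical and sensor behaviors are outside this sketch.
    def yield_right_of_way(self): ...
    def stop_recording(self): ...
    def lookup_posted_map(self, target): return f"Consulting posted rules and maps for: {target}"
```

The real design work lies in deciding which such capacities are valuable and cheap enough to require of every public robot, and which can be left to the tether.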

It seems feasible to start now to think about the design of such a robot protocol. Of course in the end a robot protocol might be just a social convention without the force of law, and it may result more from decentralized evolution than centralized design. Even so, we may now know enough about human social preferences and the broad outlines of the costs of robot capacities to start to usefully think about this problem.


Big Software Firm Bleg

I haven’t yet posted much on AI as Software. But now I’ll say more, as I want to ask a question.

Someday ems may replace humans in most jobs, and my first book talks about how that might change many things. But whether or not ems are the first kind of software to replace humans wholesale in jobs, eventually non-em software may plausibly do this. Such software would replace ems if ems came first, but if not then such software would directly replace humans.

Many people suggest, implicitly or explicitly, that non-em software that takes over most jobs will differ in big ways from the software that we’ve seen over the last seventy years. But they are rarely clear on what exact differences they foresee. So the plan of my project is to just assume our past software experience is a good guide to future software. That is, to predict the future, one may 1) assume current distributions of software features will continue, or 2) project past feature trends into future changes, or 3) combine past software feature correlations with other ways we expect the future to differ.
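As a toy illustration of option 2 (the metric, data, and horizon are invented, not a real projection): fit a simple trend to some past software feature and extend it forward.

```python
import numpy as np

# Invented historical values of some software feature, one observation per decade.
years = np.array([1970, 1980, 1990, 2000, 2010, 2020])
feature = np.array([0.05, 0.10, 0.18, 0.30, 0.45, 0.55])  # made-up numbers

# Option 2: project the past trend into the future with a simple linear fit.
slope, intercept = np.polyfit(years, feature, 1)
for future_year in (2040, 2060):
    print(future_year, round(slope * future_year + intercept, 2))
```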

This effort may encourage others to better clarify how they think future software will differ, and help us to estimate the consequences of such assumptions. It may also help us to more directly understand a software-dominated future, if there are many ways that future software won’t greatly change.

Today, each industry makes a kind of stuff (product or service) we want, or a kind of stuff that helps other industries to make stuff. But while such industries are often dominated by a small number of firms, the economy as a whole is not so dominated. This is mainly because there are so many different industries, and firms suffer when they try to participate in too many industries. Will this lack of concentration continue into a software dominated future?

Today each industry gets a lot of help from humans, and each industry helps to train its humans to better help that industry. In addition, a few special industries, such as schooling and parenting, change humans in more general ways, to help better in a wide range of industries. In a software dominated future, humans are replaced by software, and the schooling and parenting industries are replaced by a general software industry. Industry-independent development of software would happen in the general software industry, while specific adaptations for particular industries would happen within those industries.

If so, the new degree of producer concentration depends on two key factors: what fraction of software development is general as opposed to industry-specific, and how concentrated is this general software industry. Regarding this second factor, it is noteworthy that we now see some pretty big players in the software industry, such as Google, Apple, and Microsoft. And so a key question is the source of this concentration. That is, what exactly are the key advantages of big firms in today’s software market?

There are many possibilities, including patent pools and network effects among customers of key products. Another possibility, however, is one where I expect many of my readers to have relevant personal experience: scale economies in software production. Hence this bleg – a blog post asking a question.

If you are an experienced software professional who has worked both at a big software firm and also in other places, my key question for you is: by how much was your productive efficiency as a software developer increased (or decreased) due to working at a big software firm? That is, how much more could you get done there that wasn’t attributable to having a bigger budget to do more, or to paying more for better people, tools, or resources. Instead, I’m looking for the net increase (or decrease) in your output due to software tools, resources, security, oversight, rules, or collaborators that are more feasible and hence more common at larger firms. Ideally your answer will be in the form of a percentage, such as “I seem to be 10% more productive working at a big software firm.”

Added 3:45p: I meant “productivity” in the economic sense of the inputs required to produce a given output, holding constant the specific kind of output produced. So this kind of productivity should ignore the number of users of the software, and the revenue gained per user. But if big vs small firms tend to make different kinds of software, which have different costs to make, those differences should be taken into account. For example, one should correct for needing more man-hours to add a line of code in a larger system, or in a more secure or reliable system.
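As a worked illustration of the kind of answer being asked for, with invented numbers: suppose a comparable feature, after correcting for the bigger firm's larger codebase and stricter reliability requirements, took 100 man-hours elsewhere and 80 at the big firm.

```python
# Invented numbers, only to illustrate the requested "percent more productive" form.
hours_elsewhere = 100.0  # corrected man-hours for a comparable feature at a small firm
hours_big_firm = 80.0    # corrected man-hours for the same kind of feature at a big firm

# Productivity is output per unit input; output is held fixed at one comparable feature.
gain = hours_elsewhere / hours_big_firm - 1.0
print(f"I seem to be {gain:.0%} more productive working at a big software firm.")  # 25%
```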


On Homo Deus

Historian Yuval Harari’s best-selling book Sapiens mostly talked about history. His new book, Homo Deus, won’t be released in the US until February 21, but I managed to find a copy at the Istanbul airport – it came out in Europe last fall. This post is about the book, and it is long and full of quotes; you are warned.


Avoid “Posthuman” Label

Philosophy is mainly useful in inoculating you against other philosophy. Else you’ll be vulnerable to the first coherent philosophy you hear. (source)

Long ago (’81-83 at U Chicago) I studied Conceptual Foundations of Science (mainly philosophy of science) because I wanted to really understand this “science” thing, and the main thing I learned was to avoid the word “science”. If necessary, the word can refer to obvious social groups and how they maintain boundaries, but beyond that other words and concepts are more useful.

I’ve always felt similarly wary of “transhuman” and “posthuman”, because it isn’t clear what they can or do mean. In the latest Bioethics, David Lawrence elaborates an argument for such wariness:

Human is itself a greatly abused term, especially in the context of the enhancement/posthuman debate, and the myriad of meanings ascribed to it could give posthuman a very different slant depending on one’s understanding. .. There are, perhaps, three main senses in which the term human is frequently employed: the biological, the moral, and the self- (or other-) idealizing. In the first of these, human .. refer[s] to our taxonomic species. In the second sense, human generally refers to a community of beings which qualify as having a certain moral value or status; and the third .. denoting .. what matters about those who matter. ..

It is a mistake to envisage the posthuman as a different species. It is a mistake to imagine traits such as immortality or godlike powers as being changes that indicate a significant discontinuity. .. The mere act of assigning terminology is inherently one of division. .. The use of these terms is designed to classify and separate. As I hope to have shown, this is precisely the problem with the notional posthuman. ..

The commentators on both sides of the debate concerning the meaning of posthuman do so as if it had currency. .. To use the term to imply species or value change, or a radical transition (the meaning of which is unclear in any case), there needs to be justification in a way which does not seem to have been delivered within the existing dialogue. Here, I have argued that this is not a plausible understanding, and furthermore that it is based in error. The analogous changes we have undergone throughout our history have not been thought to signal a qualitative change, or at least, not to any significant degree. We are, today, post-internet age humans; we are post-neolithic, post-bronze age, post-iron age. These transitions have not changed our value or the nature of our being: machine-age man, Homo augmentus, is still man. The touted posthuman is, in general, overhyped and unwarranted by the evidence – either factual, or conceptual – and does not seem to have been subject to a close analysis until now.

Here’s what Lawrence suggests we say instead:

Enhancement technologies exist, are used, and will continue to develop; and it is idle to claim that we ought avoid them wholesale. .. It is important that we find a way to reconcile ourselves with the beings we may become, since they and we are products of the same process. .. To be posthuman is in truth to be more human than human – more successful at embodying these traits than we, who consider ourselves the model of humanity, do. It is not, as critics may claim, to be beyond, to be something to fear, something fundamentally different.

A habit of talking as if there will be a natural progression from “human” to “transhuman” to “posthuman” makes our descendants by default into “others” less worthy of our help and allegiance, without specifying the key traits on which they will be deficient. Yes, it is possible that our descendants will in fact have traits we dislike so much as to make us reject them as no longer part of the “us” that matters. But this is hardly inevitable, and those who argue that it will happen should have to specify the particular key traits they expect will cause such a divergence.

Only half those who imagine entering a Star Trek transporter see the person who exits as themselves, but all those who imagine exiting see the person entering as themselves. Similarly, we tend to see all our ancestors for the last million years as part of the “us” that matters, even though many of them might reject us as being part of the “us” that matters to them. And so our descendants are more likely to see us today as part of the “us” that matters to them, compared to our seeing them in that way.

So let us talk first of the various kinds of descendants we may have, the traits by which they may differ from us, and which of those traits matter most to us in deciding who matters. After that, perhaps, we might argue about which descendants will become a “them” who matter much less to us. We could perhaps call such folks “posthuman,” but know that they will probably reject such a label.


Beware Futurism As Political Allegory

Imagine that you are a junior in high school who expects to attend college. At that point in your life you have opinions related to frequent personal choices, such as whether blue jeans feel comfortable or whether you prefer vanilla to chocolate ice cream. And you have opinions on social norms in your social world, like how much money it is okay to borrow from a friend, how late one should stay at a party, or what are acceptable excuses for breaking up with a boy/girlfriend. And you know you will soon need opinions on imminent major life choices, such as what college to attend, what major to have, and whether to live on campus.

But at that point in life you will have less need of opinions on what classes to take as a college senior, and where to live then. You know you can wait and learn more before making such decisions. And you have even less need of opinions on borrowing money, staying at parties, or breaking up as a college senior. Social norms on those choices will come from future communities, who may not yet have even decided on such things.

In general, you should expect to have more sensible and stable opinions related to choices you actually make often, and less coherent and useful opinions regarding choices you will make in the future, after you learn many new things. You should have less coherent opinions on how your future communities will evaluate the morality and social acceptability of your future choices. And your opinions on collective choices, such as via government, should be even less reliable, as your incentives to get those right are even weaker.

All of this suggests that you be wary of simply asking your intuition for opinions about what you or anyone else should do in strange distant futures. Especially regarding moral and collective choices. Your intuition may dutifully generate such opinions, but they’ll probably depend a lot on how the questions were framed, and the context in which questions were asked. For more reliable opinions, try instead to chip away at such topics.

However, this context-dependence is gold to those who seek to influence others’ opinions. Warriors attack where an enemy is weak. When seeking to convert others to a point of view, you can have only limited influence on topics where they have accepted a particular framing, and have incentives to be careful. But you can more influence how a new topic is framed, and when there are many new topics you can emphasize the few where your preferred framing helps more.

So legal advocates want to control how courts pick cases to review and the new precedents they set. Political advocates want to influence which news stories get popular and how those stories are framed. Political advocates also seek to influence the choices and interpretations of cultural icons like songs and movies, because being less constrained by facts such things are more open to framing.

As with the example above of future college choices, distant future choices are less thoughtful or stable, and thus more subject to selection and framing effects. Future moral choices are even less stable, and more related to political positions that advocates want to push. And future moral choices expressed via culture like movies are even more flexible, and thus more useful. So newly-discussed culturally-expressed distant future collective moral choices create a perfect storm of random context-dependent unreliable opinions, and thus are ideal for advocacy influence, at least when you can get people to pay attention to them.

Of course most people are usually reluctant to think much about distant future choices, including moral and collective ones. Which greatly limits the value of such topics to advocates. But a few choices related to distant futures have engaged wider audiences, such as climate change and, recently, AI risk. And political advocates do seem quite eager to influence such topics, due to their potency. They seem to select such topics from a far larger set of similarly important issues, in part for their potency at pushing common political positions. The science-fiction truism really does seem to apply: most talk on the distant future is really indirect talk on our world today.

Of course the future really will happen eventually, and we should want to consider choices today that importantly influence that future, some of those choices will have moral and collective aspects, some of these issues can be expressed via culture like movies, and at some point such issue discussion will be new. But as with big hard problems in general, it is probably better to chip away at such problems.

That is: Anchor your thoughts to reality rather than to fiction. Make sure you have a grip on current and past behavior before looking at related future behavior. Try to stick with analyzing facts for longer before being forced to make value choices. Think about amoral and decentralized choices carefully before considering moral and collective ones. Avoid feeling pressured to jump to strong conclusions on recently popular topics. Prefer robust and reliable methods even when they are less easy and direct. Mostly the distant future doesn’t need action today – decisions will wait a bit for us to think more carefully.


This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.
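Here is a toy numeric version of that comparison, with made-up numbers: when most pipe value sits in small pipes, even a dramatic improvement confined to the largest pipes only matches a modest improvement applied to all pipes.

```python
# Made-up split of total pipe value by pipe size.
value_by_size = {"small": 70.0, "medium": 20.0, "large": 10.0}
total_value = sum(value_by_size.values())

gain_general = 0.05 * total_value               # 5% improvement to every pipe
gain_big_only = 0.50 * value_by_size["large"]   # 50% improvement, but only to the largest pipes

print(gain_general, gain_big_only)  # 5.0 vs 5.0: a 10x bigger improvement, same total gain
```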

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt many errors are made in choices of when to analyze what data how much and by whom. Sometimes they will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods are “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine more when data is plentiful, and they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to help themselves and their organization appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.
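A hedged toy sketch of that mismatch (synthetic data, illustrative only; which method wins depends on the problem): compare a cheap linear model against a fashionable, more expensive neural network on a small dataset by cross-validation. On small problems the cheap method often does about as well, at far lower cost in skill and compute.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# A small synthetic problem, standing in for a typical modest firm dataset.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

simple = LogisticRegression(max_iter=1000)  # cheap, "low order" method
fancy = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=2000, random_state=0)  # fashionable method

print("simple:", cross_val_score(simple, X, y, cv=5).mean())
print("fancy: ", cross_val_score(fancy, X, y, cv=5).mean())
```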

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that the recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.
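The rough arithmetic behind that claim, using round numbers that are assumptions for illustration rather than careful statistics:

```python
# Round, assumed-for-illustration numbers (not careful statistics).
us_jobs = 150e6        # total US employment, order of magnitude
driving_jobs = 3.5e6   # drivers of trucks, taxis, buses, etc., roughly

one_big_category = driving_jobs / us_jobs
print(f"drivers alone: ~{one_big_category:.1%} of jobs")                 # a few percent
print(f"ten categories that big: ~{10 * one_big_category:.0%} of jobs")  # still well short of 47%
```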

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly only useful on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.


Get A Grip; There’s A Much Bigger Picture

Many seem to think the apocalypse is upon us – I hear oh so much wailing and gnashing of teeth. But if you compare the policies, attitudes, and life histories of the US as it will be under Trump, to how they would have been under Clinton, that difference is very likely much smaller than the variation in such things around the world today, and also the variation within the US so far across its history. And all three of these differences are small compared to the variation in such things across the history of human-like creatures so far, and also compared to that history yet to come.

That is, there are much bigger issues at play, if only you will stand back to see them. Now you might claim that pushing on the Trump vs. Clinton divide is your best way to push for the future outcomes you prefer within that larger future variation yet to come. And that might even be true. But if you haven’t actually thought about the variation yet to come and what might push on it, your claim sure sounds like wishful thinking. You want this thing that you feel so emotionally invested in at the moment to be the thing that matters most for the long run. But wishes don’t make horses.

To see the bigger picture, read more distant history. And maybe read my book, or any similar books you can find, that try seriously to see how strange the long term future might be, and what their issues may be. And then you can more usefully reconsider just what about this Trump vs. Clinton divide that so animates you now has much of a chance of mattering in the long run.

When you are in a frame of mind where Trump (or Clinton) equals the apocalypse, you are probably mostly horrified by most past human lives, attitudes, and policies, and also by likely long-run future variations. In such a mode you probably thank your lucky stars you live in the first human age and place not to be an apocalyptic hell-hole, and you desperately want to find a way to stop long-term change, to find a way to fill the next trillion years of the universe with something close to liberal democracies, suburban comfort, elites chosen by universities, engaging TV dramas, and a few more sub-genres of rock music. I suspect that this is the core emotion animating most hopes to create a friendly AI super intelligence to rule us all. But most likely, the future will be even stranger than the past. Get a grip, and deal with it.


Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks. Or at least do so for longer and stronger. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would be more often replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference is less clear. You’d just want efficient effective software, and so want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus humans minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of the quality of software, efficiency-oriented system designers will rely more on ems instead of ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will be more matched to humans for longer than they would have under the first scenario.

In addition, ems would seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to have more things like typical humans for longer, you prefer a scenario where ems appear before ordinary software displaces most all humans on jobs.

Added 15Dec: In this book chapter I expand a bit on this post.


In Praise of Low Needs

We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.

Of course few individuals today focus on filling the universe with life. Most attend to their individual needs. And as we’ve been getting rich over the last few centuries, our needs have changed. Many cite Maslow’s Hierarchy of Needs:

[Figure: Maslow's Hierarchy of Needs]

While few offer much concrete evidence for this, most seem to accept it or one of its many variations. Once our basic needs are met, our attention switches to “higher” needs. Wealth really does change humans. (I see this in part as our returning to forager values with increasing wealth.)

It is easy to assume that what is good for you is good overall. If you are an artist, you may assume the world is better when consumers consume more art. If you are a scientist, you may assume the world is better if it gives more attention and funding to science. Similarly, it is easy to assume that the world gets better if more of us get more of what we want, and thus move higher into Maslow’s Hierarchy.

But I worry: as we attend more to higher needs, we may grow and innovate less regarding lower needs. Can the universe really get filled by creatures focused mainly on self-actualization? Why should they risk or tolerate disruptions from innovations that advance low needs if they don’t care much for that stuff? And many today see their higher needs as conflicting with more capacity to fill low needs. For example, many see more physical capacities as coming at the expense of less nature, weaker indigenous cultures, larger more soul-crushing organizations, more dehumanizing capitalism, etc. Rich nations today do seem to have weaker growth in raw physical capacities because of such issues.

Yes, it is possible that even rich societies focused on high needs will consistently grow their capacities to satisfy low needs, and that will eventually lead to a universe densely filled with life. But still I worry about all those unknown obstacles yet to be seen as our descendants try to grow through another three to ten factors as large as humanity’s leap. At some of those obstacles, will a focus on high needs lead them to turn away from the grand growth path? To a comfortable “sustainable” stability without all that disruptive innovation? How much harder would it become to restart growth again later?

Pretty much all the growth that we have seen so far has been in a context where humans, and their ancestors, were focused mainly on low needs. Our current turn toward high needs is quite new, and thus relatively unproven. Yes, we have continued to grow, but more slowly. That seems worth at least a bit of worry.

Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leap is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
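For readers who want to check the note's arithmetic, here it is spelled out; nothing is assumed beyond the numbers already given above.

```python
leap = 10**7          # humanity's leap factor so far
humans_now = 10**10   # humans on Earth today
stars = 10**24        # stars in the observable universe
atoms = 10**80        # atoms in the observable universe

# Filling one in a thousand stars with Earth's current population:
future_pop = (stars // 1000) * humans_now
print(future_pop // humans_now == leap**3)   # True: a growth factor of 10^21, three more leaps

# One human-like creature per atom:
print(atoms // humans_now == leap**10)       # True: a growth factor of 10^70, ten more leaps
```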
