
This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt many errors are made in choices of when to analyze what data how much and by whom. Sometimes they will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods have become “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine more when data is plentiful, and they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to help themselves and their organizations appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that a recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly only useful on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.


Get A Grip; There’s A Much Bigger Picture

Many seem to think the apocalypse is upon us – I hear oh so much wailing and gnashing of teeth. But if you compare the policies, attitudes, and life histories of the US as it will be under Trump, to how they would have been under Clinton, that difference is very likely much smaller than the variation in such things around the world today, and also the variation within the US so far across its history. And all three of these differences are small compared to the variation in such things across the history of human-like creatures so far, and also compared to that history yet to come.

That is, there are much bigger issues at play, if only you will stand back to see them. Now you might claim that pushing on the Trump vs. Clinton divide is your best way to push for the future outcomes you prefer within that larger future variation yet to come. And that might even be true. But if you haven’t actually thought about the variation yet to come and what might push on it, your claim sure sounds like wishful thinking. You want this thing that you feel so emotionally invested in at the moment to be the thing that matters most for the long run. But wishes don’t make horses.

To see the bigger picture, read more distant history. And maybe read my book, or any similar books you can find, that try seriously to see how strange the long term future might be, and what its issues may be. And then you can more usefully reconsider just what about this Trump vs. Clinton divide that so animates you now has much of a chance of mattering in the long run.

When you are in a frame of mind where Trump (or Clinton) equals the apocalypse, you are probably mostly horrified by most past human lives, attitudes, and policies, and also by likely long-run future variations. In such a mode you probably thank your lucky stars you live in the first human age and place not to be an apocalyptic hell-hole, and you desperately want to find a way to stop long-term change, to find a way to fill the next trillion years of the universe with something close to liberal democracies, suburban comfort, elites chosen by universities, engaging TV dramas, and a few more sub-genres of rock music. I suspect that this is the core emotion animating most hopes to create a friendly AI superintelligence to rule us all. But most likely, the future will be even stranger than the past. Get a grip, and deal with it.


Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks. Or at least do so for longer and stronger. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would be more often replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference is less clear. You’d just want efficient effective software, and so want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus humans minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of the quality of software, efficiency-oriented system designers will rely more on ems instead of ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will be more matched to humans for longer than they would have under the first scenario.

In addition, ems would seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to contain things like typical humans for longer, you should prefer a scenario where ems appear before ordinary software displaces almost all humans on jobs.


In Praise of Low Needs

We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.

Of course few individuals today focus on filling the universe with life. Most attend to their individual needs. And as we’ve been getting rich over the last few centuries, our needs have changed. Many cite Maslow’s Hierarchy of Needs:

[Figure: Maslow's Hierarchy of Needs]

While few offer much concrete evidence for this, most seem to accept it or one of its many variations. Once our basic needs are met, our attention switches to “higher” needs. Wealth really does change humans. (I see this in part as our returning to forager values with increasing wealth.)

It is easy to assume that what is good for you is good overall. If you are an artist, you may assume the world is better when consumers consume more art. If you are a scientist, you may assume the world is better if it gives more attention and funding to science. Similarly, it is easy to assume that the world gets better if more of us get more of what we want, and thus move higher into Maslow’s Hierarchy.

But I worry: as we attend more to higher needs, we may grow and innovate less regarding lower needs. Can the universe really get filled by creatures focused mainly on self-actualization? Why should they risk or tolerate disruptions from innovations that advance low needs if they don’t care much for that stuff? And many today see their higher needs as conflicting with more capacity to fill low needs. For example, many see more physical capacities as coming at the expense of less nature, weaker indigenous cultures, larger more soul-crushing organizations, more dehumanizing capitalism, etc. Rich nations today do seem to have weaker growth in raw physical capacities because of such issues.

Yes, it is possible that even rich societies focused on high needs will consistently grow their capacities to satisfy low needs, and that will eventually lead to a universe densely filled with life. But still I worry about all those unknown obstacles yet to be seen as our descendants try to grow through another three to ten factors as large as humanity’s leap. At some of those obstacles, will a focus on high needs lead them to turn away from the grand growth path? To a comfortable “sustainable” stability without all that disruptive innovation? How much harder would it become to restart growth again later?

Pretty much all the growth that we have seen so far has been in a context where humans, and their ancestors, were focused mainly on low needs. Our current turn toward high needs is quite new, and thus relatively unproven. Yes, we have continued to grow, but more slowly. That seems worth at least a bit of worry.

Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leap is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
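The arithmetic above can be checked directly (a quick sketch; all figures are the rough orders of magnitude used in the post):

```python
# Rough orders of magnitude from the post.
leap = 10**7      # humanity's growth factor so far
stars = 10**24    # stars in the observable universe
atoms = 10**80    # atoms in the observable universe
humans = 10**10   # humans on Earth today

# Three leaps: a 10**21 growth factor, matching one in a thousand
# stars (10**21 stars) each holding an Earth's worth of people.
assert leap**3 == 10**21
assert humans * leap**3 == (stars // 1000) * humans

# Ten leaps: a 10**70 growth factor, matching one creature per atom.
assert humans * leap**10 == atoms
```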


Idea Talkers Clump

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

Now the prospect of human-level ordinary AI is definitely what more people are talking about today – the topic is in fashion. There are AI companies, demos, conferences, media articles, and more serious intellectual discussion. In fact, I’d estimate that there is now at least one hundred times as much attention given to the scenario of human level AI based on explicit coding (including machine learning code) as to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.
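The implied odds of that bet can be made explicit with a small sketch (the helper name here is mine, not from the post): a bet that wins $1000 against a risk of $100 is fair exactly when the event's probability is 1/11, matching the offered factor of ten.

```python
def break_even_probability(win, lose):
    """Probability of winning at which the bet's expected value is zero:
    p * win - (1 - p) * lose == 0  =>  p == lose / (win + lose)."""
    return lose / (win + lose)

# Hanson wins $1000 if em-AI comes first and loses $100 otherwise, so
# he expects to profit whenever P(em-AI first) > 1/11, about 9.1%.
p = break_even_probability(1000, 100)
print(round(p, 3))  # -> 0.091
```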

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less traveled. All those fellow travelers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

So instead of praising my attention to a neglected if less-likely topic, those who think em-AI less likely mostly criticize me for studying the wrong scenario. And continue to define topics of articles, conferences, special journal issues, etc. to exclude em-AI scenarios.

And this is how it tends to work in general in the world of ideas. Idea talkers tend to clump onto the topics that others have discussed lately, leaving topics outside the fashionable clumps with less attention relative to their importance. So if you are a thinker with the slack and independence to choose your own topics, an easy way to make disproportionate intellectual progress is to focus on neglected topics.

Of course most intellectuals already know this, and choose otherwise.

Added: Never mind the claim above that effort should be spread more evenly than probabilities; Owen Cotton-Barratt reminded me that if value diminishes with the log of effort, optimal scenario effort is proportional to probability.
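Cotton-Barratt's point can be checked numerically (a sketch with made-up probabilities, not anyone's actual estimates): if the value of studying scenario i is p_i * log(effort_i), then splitting a fixed budget in proportion to the p_i maximizes total value.

```python
import math
import random

def total_value(probs, efforts):
    # Total value when scenario i yields p_i * log(effort_i).
    return sum(p * math.log(e) for p, e in zip(probs, efforts))

probs = [0.9, 0.1]  # e.g. "other AI first" vs "em-AI first"
budget = 100.0
proportional = [p * budget for p in probs]  # [90.0, 10.0]

# No other split of the same budget does better.
random.seed(0)
best = total_value(probs, proportional)
for _ in range(1000):
    x = random.uniform(1e-6, budget - 1e-6)
    assert total_value(probs, [x, budget - x]) <= best + 1e-9
```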

Added 11Oct: Anders Sandberg weighs in.


Change Favors The Robust, Not The Radical

There are futurists who like to think about the non-immediate future, and there are radicals who advocate for unusual policies, such as on work, diet, romance, governance, etc. And the intersection between these groups is larger than you might have expected by chance; futurists tend to be radicals and radicals tend to be futurists. This applies to me, in that I’ve both proposed a radical futarchy, and have a book on future ems.

The usual policies that we adopt in our usual world have a usual set of arguments in their favor, arguments usually tied to the details of our usual world. So those who want to argue instead for radical policies must both argue against the usual pro-arguments, and then also offer a new set of arguments in favor of their radical alternatives, arguments also tied to the details of our world. This can seem like a heavy burden.

So many who favor radical policies prefer to switch contexts and reject the relevance of the usual details of our world. By invoking a future where many things change, they feel they can just dismiss the usual arguments for the usual policies based on the usual details of our world. And at this point they usually rest, feeling their work is done. They like being in a situation where, even if they can’t argue very strongly for their radical policies, others also can’t argue very strongly against such policies. Intellectual stalemate can seem a big step up from the usual radical’s situation of being at a big argumentative disadvantage.

But while this may help to win (or at least not lose) argument games, it should not actually make us favor radical policies more. It should instead shift our attention to robust arguments, ones that can apply over a wide range of possibilities. We need to hear positive arguments for why we should expect radical policies to work well robustly across a wide range of possible futures, relative to our status quo policies.

In my recent video discussion with James Hughes, he criticized me for assuming that many familiar elements of our world, such as property, markets, inequality, sexuality, and individual identities, continue into an em age. He instead foresaw an enormous hard-to-delimit range of possibilities. But then he seemed to think this favored his radical solution of a high-regulation high-redistribution strong global socialist government which greatly limits and keeps firm control over autonomous artificial intelligences. Yet he didn’t offer arguments for why this is a robust solution that we should expect to work well in a very wide variety of situations.

It seems to me that if we are going to focus on the axis of decentralized markets vs. more centralized and structured organizations, it is markets that have proven themselves to be the more robust mechanism, working reasonably well in a very wide range of situations. It is structured organizations that are more fragile, and fail more quickly as situations change. Firms often go out of business when their organizations fail to keep up with changing environments; it is far rarer for decentralized markets to disappear because they fail to serve participants.


No Third AI Way

A few days ago in the Post:

Bryan Johnson .. wants to .. find a way to supercharge the human brain so that we can keep up with the machines. .. His science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain. .. Top neuroscientists who are building the chip .. hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks. .. In an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern.

In a MeaningOfLife.tv video discussion between James Hughes and me, just posted today, Hughes said:

One of the reasons why I’m skeptical about the [em] scenario that you’ve outlined, is that I see a scenario where brains extending themselves through AI and computing tools basically slaved to the core personal identity of meat brains is a more likely scenario than one where we happily acknowledge the rights and autonomy of virtual persons. .. We need to have the kind of AI in our brain which is not just humans 1.0 that get shuffled off to the farm while the actual virtual workers do all the work, as you have imagined.

Many hope for a “third way” alternative to both ems and more standard AI software taking all the jobs. They hope that instead “we” can keep our jobs via new chips “in” or closely integrated with our brain. This seems to me mostly a false hope.

Yes of course if we have a strong enough global political coordination we could stake out a set of officially human jobs and forbid machines from doing them, no matter how much better machines might be at them. But if we don’t have such strong coordination, then the key question is whether there is an important set of jobs or tasks where ordinary human brains are more productive than artificial hardware. Having that hardware be located in server racks in distant data centers, versus in chips implanted in human brains, seems mostly irrelevant to this.

If artificial hardware can be similarly effective at such tasks, then it can have enormous economic advantages relative to human brains. Even today, the quantity of artificial hardware can be increased very rapidly in factories. And eventually, artificial hardware can be run at much faster speeds, using much less energy. Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive. It is very hard to see humans outcompeting artificial hardware at such tasks unless the artificial hardware is just very bad at such tasks. That is in fact the case today, but it would not at all be the case with ems, nor with other AI with similar general mental abilities.


Economic Singularity Review

The Economic Singularity: Artificial intelligence and the death of capitalism .. This new book from best-selling AI writer Calum Chace argues that within a few decades, most humans will not be able to work for money.

A strong claim! This book mentions me by name 15 times, especially on my review of Martin Ford’s Rise of the Robots, wherein I complain that Ford’s main evidence for saying “this time is different” is all the impressive demos he’s seen lately. Even though this was the main reason given in each previous automation boom for saying “this time is different.” This seems to be Chace’s main evidence as well:

Faster computers, the availability of large data sets, and the persistence of pioneering researchers have finally rendered [deep learning] effective this decade, leading to “all the impressive computing demos” referred to by Robin Hanson in chapter 3.3, along with some early applications. But the major applications are still waiting in the wings, poised to take the stage. ..

It’s time to answer the question: is it really different this time? Will machine intelligence automate most human jobs within the next few decades, and leave a large minority of people – perhaps a majority – unable to gain paid employment? It seems to me that you have to accept that this proposition is at least possible if you admit the following three premises: 1. It is possible to automate the cognitive and manual tasks that we carry out to do our jobs. 2. Machine intelligence is approaching or overtaking our ability to ingest, process and pass on data presented in visual form and in natural language. 3. Machine intelligence is improving at an exponential rate. This rate may or may not slow a little in the coming years, but it will continue to be very fast. No doubt it is still possible to reject one or more of these premises, but for me, the evidence assembled in this chapter makes that hard.

Well of course it is possible for this time to be different. But, um, why can’t these three statements have been true for centuries? It will eventually be possible to automate tasks, and we have been slowly but exponentially “approaching” that future point for centuries. And so we may still have centuries to go. As I recently explained, exponential tech growth is consistent with a relatively constant rate at which jobs are displaced by automation.
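One way to see this (a toy model of my own construction, not from the review): if task difficulties are spread log-uniformly across many orders of magnitude, then a machine capability that grows exponentially, say by a factor of ten per "year", automates a roughly constant share of all tasks each year.

```python
import random

random.seed(0)
# Task difficulties spread log-uniformly over 30 orders of magnitude.
difficulties = [10 ** random.uniform(0, 30) for _ in range(100_000)]

def share_automated(capability):
    # Fraction of tasks a machine of this capability can do.
    return sum(d <= capability for d in difficulties) / len(difficulties)

# Capability grows tenfold per year; the newly automated share each
# year stays near 1/30 (about 3.3%), despite exponential growth.
shares = [share_automated(10 ** t) for t in range(30)]
yearly = [b - a for a, b in zip(shares, shares[1:])]
print(round(min(yearly), 3), round(max(yearly), 3))
```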

Chace makes a specific claim that seems to me quite wrong.

Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. .. Facebook has declared its ambition to make Hinton’s prediction come true. To this end, it established a basic research unit in 2013 called Facebook Artificial Intelligence Research (FAIR) with 50 employees, separate from the 100 people in its Applied Machine Learning team. So within a decade, machines are likely to be better than humans at recognising faces and other images, better at understanding and responding to human speech, and may even be possessed of common sense. And they will be getting faster and cheaper all the time. It is hard to believe that this will not have a profound impact on the job market.

I’ll give 50-1 odds against full human level common sense AI within a decade! Chace, I offer my $5,000 against your $100. Also happy to bet on “profound” job market impact, as I mentioned in my review of Ford. Chace, to his credit, sees value in such bets:

The economist Robin Hanson thinks that machines will eventually render most humans unemployed, but that it will not happen for many decades, probably centuries. Despite this scepticism, he proposes an interesting way to watch out for the eventuality: prediction markets. People make their best estimates when they have some skin in the forecasting game. Offering people the opportunity to bet real money on when they see their own jobs or other people’s jobs being automated may be an effective way to improve our forecasting.

Finally, Chace repeats Ford’s error in claiming economic collapse if median wages fall:

But as more and more people become unemployed, the consequent fall in demand will overtake the price reductions enabled by greater efficiency. Economic contraction is pretty much inevitable, and it will get so serious that something will have to be done. .. A modern developed society is not sustainable if a majority of its citizens are on the bread line.

Really, an economy can do fine if average demand is high and growing, even if median demand falls. It might be ethically lamentable, and the political system may have problems, but markets can do just fine.


World Basic Income

Joseph said .. Let Pharaoh .. appoint officers over the land, and take up the fifth part of the land of Egypt in the seven plenteous years. .. And that food shall be for store to the land against the seven years of famine, which shall be in the land of Egypt; that the land perish not through the famine. And the thing was good in the eyes of Pharaoh. (Genesis 41)

[Medieval Europe] public authorities were doubly interested in the problem of food supplies; first, for humanitarian reasons and for good administration; second, for reasons of political stability because hunger was the most frequent cause of popular revolts and insurrections. In 1549 the Venetian officer Bernardo Navagero wrote to the Venetian senate: “I do not esteem that there is anything more important to the government of cities than this, namely the stocking of grains, because fortresses cannot be held if there are not victuals and because most revolts and seditions originate from hunger.” (p42, Cipolla, Before the Industrial Revolution)

63% of Americans don’t have enough saved to cover even a $500 financial setback. (more)

Even in traditional societies with small governments, protecting citizens from starvation was considered a proper role of the state. Both to improve welfare, and to prevent revolt. Today it could be more efficient if people used modern insurance institutions to protect themselves. But I can see many failing to do that, and so can see governments trying to insure their citizens against big disasters.

Of course rich nations today face little risk of famine. But as I discuss in my book, eventually when human level artificial intelligence (HLAI) can do almost all tasks cheaper, biological humans will lose pretty much all their jobs, and be forced to retire. While collectively humans will start out owning almost all the robot economy, and thus get rich fast, many individuals may own so little as to be at risk of starving, if not for individual or collective charity.

Yes, this sort of transition is a long way off; “this time isn’t different” yet. There may be centuries still to go. And if we first achieve HLAI via the relatively steady accumulation of better software, as we have been doing for seventy years, we may get plenty of warning about such a transition. However, if we instead first achieve HLAI via ems, as elaborated in my book, we may get much less warning; only five years might elapse between seeing visible effects and all jobs lost. Given how slowly our political systems typically change state redistribution and insurance arrangements, it might be wiser to just set up a system far in advance that could deal with such problems if and when they appear. (A system also flexible enough to last over this long time scale.)

The ideal solution is global insurance. Buy insurance for citizens that pays off only when most biological humans lose their jobs, and have this insurance pay enough so these people don’t starve. Pay premiums well in advance, and use a stable insurance supplier with sufficient reinsurance. Don’t trust local assets to be sufficient to support local self-insurance; the economic gains from an HLAI economy may be very concentrated in a few dense cities of unknown locations.

Alas, political systems are even worse at preparing for problems that seem unlikely anytime soon. Which raises the question: should those who want to push for state HLAI insurance ally with folks focused on other issues? And that brings us to “universal basic income” (UBI), a topic in the news lately, and about which many have asked me in relation to my book.

Yes, there are many difficult issues with UBI, such as how strongly the public would favor it relative to traditional poverty programs, whether it would replace or add onto those other programs, and if replacing how much that could cut administrative costs and reduce poverty targeting. But in this post, I want to focus on how UBI might help to insure against job loss from relatively sudden unexpected HLAI.

Imagine a small “demonstration level” UBI, just big enough for one side to say “okay we started a UBI, now it is your turn to lower other poverty programs, before we raise UBI more.” Even such a small UBI might be enough to deal with HLAI, if its basic income level were tied to the average income level. After all, an HLAI economy could grow very fast, allowing very fast growth in the incomes that biological humans gain from owning most of the capital in this new economy. Soon only a small fraction of that income would be needed to cover a low but starvation-averting UBI.

For example, a UBI set to x% of average income can be funded via a less than x% tax on all income over this UBI level. Since average US income per person is now $50K, a 10% version gives a UBI of $5K. While this might not let one live in an expensive city, a year ago I visited a 90-adult rural Virginia commune where this was actually their average income. Once freed from regulations, we might see more innovations like this in how to spend UBI.
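The arithmetic above can be sketched in a few lines. This is a back-of-envelope illustration only, using the simplifying assumption of a flat tax on all income (the text's proposal taxes only income above the UBI level); the function names and figures are illustrative, not from any actual proposal.

```python
def ubi_level(avg_income: float, x: float) -> float:
    """UBI per person when pegged at fraction x of average income."""
    return x * avg_income

def flat_tax_rate(avg_income: float, x: float) -> float:
    """Flat tax rate on all income that exactly funds the UBI.
    Revenue per person = rate * avg_income; cost per person = x * avg_income,
    so under this simplification the required rate is just x."""
    return ubi_level(avg_income, x) / avg_income

avg_us_income = 50_000  # per-person US income figure used in the text
x = 0.10                # the 10% version discussed above

print(ubi_level(avg_us_income, x))      # 5000.0
print(flat_tax_rate(avg_us_income, x))  # 0.1
```

Because the UBI is pegged to average income, the required rate stays fixed at x even as the economy grows, which is why a fast-growing HLAI economy could cover a fixed-fraction UBI with ease.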

However, I do see one big problem. Most UBI proposals are funded out of local general tax revenue, while the income of an HLAI economy might be quite unevenly distributed around the globe. The smaller the political unit considering a UBI, the worse this problem gets. Better insurance would come from a UBI that is funded out of a diversified global investment portfolio. But that isn’t usually how governments fund things. What to do?

A solution that occurs to me is to push for a World Basic Income (WBI). That is, try to create and grow a coalition of nations that implement a common basic income level, supported by a shared set of assets and contributions. I’m not sure how to set up the details, but citizens in any of these nations should get the same untaxed basic income, even if they face differing taxes on incomes above this level. And this alliance of nations would commit somehow to sharing some pool of assets and revenue to pay for this common basic income, so that everyone could expect to continue to receive their WBI even after an uneven disruptive HLAI revolution.

Yes, richer member nations of this alliance could achieve less local poverty reduction, as the shared WBI level couldn’t be above what the poor member nations could afford. But a common basic income should make it easier to let citizens move within this set of nations. You’d have to worry less about poor folks moving to your nation to take advantage of your poverty programs. And the more that poverty reduction were implemented via WBI, the bigger would be this advantage.

Yes, this seems a tall order, probably too tall. Probably nations won’t prepare, and will then respond to an HLAI transition slowly, and only with whatever resources they have at their disposal, which in some places will be too little. Which is why I recommend that individuals and smaller groups try to arrange their own assets, insurance, and sharing. Yes, it won’t be needed for a while, but if you wait until the signs that something big is coming soon are clear, it might then be too late.


Star Trek As Fantasy

Frustrated that science fiction rarely makes economic sense, I just wrote a whole book trying to show how much consistent social detail one can offer, given key defining assumptions on a future scenario. Imagine my surprise then to learn that another book, Trekonomics, published exactly one day before mine, promises to make detailed economic sense out of the popular Star Trek shows. It seems endorsed by top economists Paul Krugman and Brad DeLong, and has lots of MSM praise. From the jacket:

Manu Saadia takes a deep dive into the show’s most radical and provocative aspect: its detailed and consistent economic wisdom. .. looks at the hard economics that underpin the series’ ideal society.

Now Saadia does admit the space stuff is “hogwash”:

There will not be faster-than-light interstellar travel or matter-anti-matter reactors. Star Trek will not come to pass as seen on TV. .. There is no economic rationale for interstellar exploration, manned or unmanned. .. Settling a minuscule outpost on a faraway world, sounds like complete idiocy. .. Interstellar exploration .. cannot happen until society is so wealthy that not a single person has to waste his or her time on base economic pursuits. .. For a long while, there is no future but on Earth, in the cities of Earth. (pp. 215-221)

He says Trek is instead a sermon promoting social democracy.
