Tag Archives: Future

Em Software Results

After requesting your help, I should tell you what it added up to. The following is an excerpt from my book draft, illustrated by this diagram:

[Figure: Software Intensity]

In our world, the cost of computing hardware has been falling rapidly for decades. This fall has forced most computer projects to be short term, so that products can be used before they are made obsolete. The increasing quantity of software purchased has also led to larger software projects, which involve more engineers. This has shifted the emphasis toward more communication and negotiation, and also more modularity and standardization in software styles.

The cost of hiring human software engineers has not fallen much in decades. The increasing divergence between the cost of engineers and the cost of hardware has also led to a decreased emphasis on raw performance, and increased emphasis on tools and habits that can quickly generate correct if inefficient performance. This has led to an increased emphasis on modularity, abstraction, and on high-level operating systems and languages. High-level tools insulate engineers more from the details of hardware, and from distracting tasks like type checking and garbage collection. As a result, software is less efficient and less well adapted to context, but more valuable overall. An increasing focus on niche products has also increased the emphasis on modularity and abstraction.

Em software engineers would be selected for very high productivity, and use the tools and styles preferred by the highest productivity engineers. There would be little interest in tools and methods specialized to be useful “for dummies.” Since em computers would tend to be more reversible and error-prone, em software would be more focused on those cases as well. Because the em economy would be larger, its software industry would be larger as well, supporting more specialization.

The transition to an em economy would greatly lower wages, thus inducing a big one-time shift back toward an emphasis on raw context-dependent performance, relative to abstraction and easier modifiability. The move away from niche products would add to this tendency, as would the ability to save copies of the engineer who just wrote the software, to help later with modifying it. On the other hand, a move toward larger software projects could favor more abstraction and modularity.

After the em transition, the cost of em hardware would fall at about the same speed as the cost of other computer hardware. Because of this, the tradeoff between performance and other considerations would change much less as the cost of hardware fell. This should greatly extend the useful lifetime of programming languages, tools, and habits matched to particular performance tradeoff choices.

After an initial period of large rapid gains, the software and hardware designs for implementing brain emulations would probably reach diminishing returns, after which there would only be minor improvements. In contrast, non-em software will probably improve about as fast as computer hardware improves, since algorithm gains in many areas of computer science have for many decades typically remained close to hardware gains. Thus after ems appear, em software engineering and other computer-based work would slowly get more tool-intensive, with a larger fraction of value added by tools. However, the intensity of use of non-computer-based tools (e.g., bulldozers), and the fraction of value added by such tools, would probably fall, since those tools probably improve less quickly than would em hardware.

For over a decade now, the speed of fast computer processors has increased at a much lower rate than the cost of computer hardware has fallen. We expect this trend to continue long into the future. In contrast, the em hardware cost will fall with the cost of computer hardware overall, because the emulation of brains is a very parallel task. Thus ems would see an increasing sluggishness of software that has a large serial component, i.e., which requires many steps to be taken one after the other, relative to more parallel software. This sluggishness would directly reduce the value of such software, and also make such software harder to write.
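To make this concrete, here is a minimal numeric sketch using purely illustrative assumed rates (parallel capacity per dollar doubling every 1.5 years, serial speed gaining only about 5% per year): an em whose speed tracks the parallel trend would experience serial software as ever more sluggish.

```python
# Illustrative sketch with assumed numbers, not data: parallel capacity per
# dollar doubles every 1.5 years, while serial (clock) speed gains ~5%/year.
# An em whose speed tracks the parallel trend experiences serial software
# as increasingly sluggish in subjective time.

def subjective_slowdown(years, parallel_doubling_years=1.5, serial_gain_per_year=1.05):
    em_speedup = 2 ** (years / parallel_doubling_years)  # em speed tracks parallel gains
    serial_speedup = serial_gain_per_year ** years       # serial software speeds up slowly
    return em_speedup / serial_speedup                   # subjective wait per serial task

for years in (5, 10, 20):
    print(f"after {years:2d} years: serial tasks feel ~{subjective_slowdown(years):,.0f}x slower")
```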

Thus over time serial software will become less valuable, relative to ems and parallel software. Em software engineers would come to rely less on software tools with a big serial component, and would instead emphasize parallel software, and tools that support that emphasis. Tools like automated type checking and garbage collection would tend to be done in parallel, or not at all. And if it ends up being too hard to write parallel software, then the value of software more generally may be reduced relative to the value of having ems do tasks without software assistance.

For tasks where parallel software and tools suffice, and where the software doesn’t need to interact with slower physical systems, em software engineers could be productive even when sped up to the top cheap speed. This would often make it feasible to avoid the costs of coordinating across engineers, by having a single engineer spend an entire subjective career creating a large software system. For example, an engineer who spent a subjective century at one million times human speed would be done in less than one objective hour. When such a short delay is acceptable, parallel software could be written by a single engineer taking a subjective lifetime.
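The arithmetic behind that example is simple enough to check directly:

```python
# Check of the example above: a subjective century at one million times
# human speed takes well under an objective hour.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

subjective_years = 100
speedup = 1_000_000

objective_minutes = subjective_years * SECONDS_PER_YEAR / speedup / 60
print(f"{objective_minutes:.1f} objective minutes")  # ~52.6 minutes
```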

When software can be written quickly via very fast software engineers, product development could happen quickly, even when very large sums were spent. While today investors may spend most of their time tracking current software development projects, those who invest in em software projects of this sort might spend most of their time deciding when is the right time to initiate such a project. A software development race, with more than one team trying to get to market first, would only happen if the same sharp event triggered more than one development effort.

A single software engineer working for a lifetime on a project could still have trouble remembering software that he or she wrote decades before. Because of this, shorter-term copies of this engineer might help him or her to be more productive. For example, short-term em copies might search for and repair bugs, and end or retire once they have explained their work to the main copy. Short-term copies could also search among many possible designs for a module, and end or retire after reporting on their best design choice, to be re-implemented by the main copy. In addition, longer-term copies could be created to specialize in whole subsystems, and younger copies could be revived to continue the project when older copies reached the end of their productive lifetime. These approaches should allow single em software engineers to create far larger and more coherent software systems within a subjective lifetime.
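Purely as an illustrative analogy (my mapping, not the book’s), this spin-off-and-report pattern resembles a familiar fan-out/reduce structure in ordinary parallel code: many short-lived workers each evaluate one candidate design, and only the best report survives to be re-implemented.

```python
# Illustrative analogy only: short-lived "copies" as worker processes that
# each evaluate one hypothetical candidate module design and report back;
# the main copy keeps only the best design. Names and scoring are made up.
from concurrent.futures import ProcessPoolExecutor

def evaluate_design(candidate: int) -> tuple[float, int]:
    """Stand-in for a copy spending its short career on one design.
    Returns (score, candidate); higher scores are better."""
    score = -abs(candidate - 42)  # dummy objective, just for the sketch
    return score, candidate

if __name__ == "__main__":
    candidates = range(100)
    with ProcessPoolExecutor() as pool:     # the short-term copies
        results = pool.map(evaluate_design, candidates)
    best_score, best_design = max(results)
    print(f"main copy re-implements design {best_design} (score {best_score})")
```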

Fast software engineers who focus on taking a lifetime to build a large software project, perhaps with the help of copies of themselves, would likely develop more personal and elaborate software styles and tools, and rely less on tools and approaches that help them to coordinate with other engineers with differing styles and uncertain quality. Such lone fast engineers would require local caches of relevant software libraries. When in distantly separated locations, such caches could get out of synch. Local copies of library software authors, available to update their contributions, might help reduce this problem. Out of synch libraries would increase the tendency toward divergent personal software styles.

When different parts of a project require different skills, a lone software engineer might have different young copies trained with different skills. Similarly, young copies could be trained in the subject areas where some software is to be applied, so that they can better understand what variations will have value there.

However, when a project requires different skills and expertise that is best matched to different temperaments and minds, then it may be worth paying extra costs of communication to allow different ems to work together on a project. In this case, such engineers would likely promote communication via more abstraction, modularity, and higher level languages and module interfaces. Such approaches also become more attractive when outsiders must test and validate software, to certify its appropriateness to customers. Enormous software systems could be created with modest sized teams working at the top cheap speed, with the assistance of many spurs. There may not be much need for even larger software teams.

The competition for higher status among ems would tend to encourage faster speeds than would otherwise be efficient. This tendency of fast ems to be high status would tend to raise the status of software engineers.


Hope For A Lumpy Filter

The great filter is the sum total of all of the obstacles that stand in the way of a simple dead planet (or similar sized material) proceeding to give rise to a cosmologically visible civilization. As there are 2^80 stars in the observable universe, and 2^60 within a billion light years, a simple dead planet faces at least roughly 60 to 80 factors of two obstacles to birthing a visible civilization within 13 billion years. If there is panspermia, i.e., a spreading of life at some earlier stage, the other obstacles must be even larger by the panspermia life-spreading factor.

We know of a great many possible candidate filters, both in our past and in our future. The total filter could be smooth, i.e. spread out relatively evenly among all of these candidates, or it could be lumpy, i.e., concentrated in only one or a few of these candidates. It turns out that we should hope for the filter to be lumpy.

For example, imagine that there are 15 plausible filter candidates, 10 in our past and 5 in our future. If the filter is maximally smooth, then given 60 total factors of two, each candidate would have four factors of two, leaving twenty in our future, for a net chance for us now of making it through the rest of the filter of only one in a million. On the other hand, if the filter is maximally lumpy, and all concentrated in only one random candidate, then we have a 2/3 chance of facing no filter at all in our future. Thus a lumpy filter gives us a much better chance of making it.
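The arithmetic in that example can be spelled out explicitly:

```python
# Worked version of the smooth-vs-lumpy example above.
total_factors_of_two = 60
past_candidates, future_candidates = 10, 5
candidates = past_candidates + future_candidates

# Maximally smooth: the factors are spread evenly over all candidates.
per_candidate = total_factors_of_two / candidates        # 4 factors of two each
future_factors = per_candidate * future_candidates       # 20 factors still ahead of us
p_smooth = 0.5 ** future_factors
print(f"smooth: P(pass future filter) = {p_smooth:.1e}")  # ~1e-6, one in a million

# Maximally lumpy: the whole filter sits in one random candidate.
p_no_future_filter = past_candidates / candidates
print(f"lumpy:  P(no future filter)   = {p_no_future_filter:.2f}")  # 2/3
```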

For “try-try” filters, a system can keep trying over and over until it succeeds. If a set of try-try steps must all succeed within the window of life on Earth, then the actual times to complete each step must be drawn from the same distribution, and so take similar times. The time remaining after the last step must also be drawn from a similar distribution.
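A quick Monte Carlo sketch can illustrate this claim; the window, rates, and step count below are arbitrary assumptions of mine, chosen only to show the pattern.

```python
# Monte Carlo sketch (parameters are arbitrary assumptions): condition two
# hard "try-try" steps on both finishing within a window, and the step
# durations plus the leftover time end up similarly distributed, even
# though one step is five times harder than the other.
import random

WINDOW = 1.0              # the window of life on Earth, normalized to 1
RATES = [0.1, 0.02]       # two hard steps; expected times are 10 and 50 windows
TRIALS = 2_000_000

kept = []
for _ in range(TRIALS):
    times = [random.expovariate(r) for r in RATES]
    if sum(times) <= WINDOW:                  # keep only histories that succeeded in time
        kept.append(times + [WINDOW - sum(times)])

n = len(kept)
means = [sum(s[i] for s in kept) / n for i in range(len(RATES) + 1)]
print(f"{n} successful histories; mean step and leftover times:",
      [round(m, 2) for m in means])
# All three means come out near WINDOW / 3, despite the very different step rates.
```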

A year ago I reported on a new study estimating that 1.75 to 3.25 billion years remains for life on Earth. This is a long time, and implies that there can’t be many prior try-try filter steps within the history of life on Earth. Only one or two, and none in the last half billion years. This suggests that the try-try part of the great filter is relatively lumpy, at least for the parts that have and will take place on Earth. Which according to the analysis above is good news.

Of course there can be other kinds of filter steps. For example, perhaps life has to hit on the right sort of genetic code right from the start; if life hits on the wrong code, life using that code will entrench itself too strongly to let the right sort of life take over. These sort of filter steps need not be roughly evenly distributed in time, and so timing data doesn’t say much about how lumpy or uniform are those steps.

It is nice to have some good news. Though I should also remind you of the bad news that anthropic analysis suggests that selection effects make future filters more likely than you would have otherwise thought.


Great Filter TEDx

This Saturday I’ll speak on the great filter at TEDx Limassol in Cyprus. Though I first wrote about the subject in 1996, this is actually the first time I’ve been invited to speak on it. It only took 19 years. I’ll post links here to slides and video when available.

Added 22Sep: A preliminary version of the video can be found here starting at minute 34.


Regulating Infinity

As a professor of economics in the GMU Center for the Study of Public Choice, I am, like my colleagues, well aware of the many long detailed disputes on the proper scope of regulation.

On the one hand, the last few centuries have seen increasing demands for and expectations of government regulation. A wider range of things that might happen without regulation is seen as intolerable, and our increasing ability to manage large organizations and systems of surveillance is seen as making us increasingly capable of discerning relevant problems and managing regulatory solutions.

On the other hand, some don’t see many of the “problems” regulations are set up to address as legitimate ones for governments to tackle. And others see and fear regulatory overreach, wherein perhaps well-intentioned regulatory systems actually make most of us worse off, via capture, corruption, added costs, and slowed innovation.

The poster-children of regulatory overreach are 20th century totalitarian nations. Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and nations who did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation. So many accepted and even encouraged their nations to create vast intrusive organizations and regulatory systems. These are now largely seen to have gone too far.

Of course there have no doubt been other cases of regulatory under-reach; I don’t presume to settle this debate here. In this post I instead want to introduce jaded students of regulatory debates to something a bit new under the sun, namely a newly-prominent rationale and goal for regulation that has recently arisen in a part of the futurist community: stopping preference change.

In history we have seen change not only in technology and environments, but also in habits, cultures, attitudes, and preferences. New generations often act not just like the same people thrust into new situations, but like new kinds of people with new attitudes and preferences. This has often intensified intergenerational conflicts; generations have argued not only about who should consume and control what, but also about which generational values should dominate.

So far, this sort of intergenerational value conflict has been limited due to the relatively mild value changes that have so far appeared within individual lifetimes. But at least two robust trends suggest the future will have more value change, and thus more conflict:

  1. Longer lifespans – Holding other things constant, the longer people live the more generations will overlap at any one time, and the more different will be their values.
  2. Faster change – Holding other things constant, a faster rate of economic and social change will likely induce values to change faster as people adapt to these social changes.
  3. Value plasticity – It may become easier for our descendants to change their values, all else equal. This might be via stronger ads and schools, or direct brain rewiring. (This trend seems less robust.)

These trends robustly suggest that toward the end of their lives future folk will more often look with disapproval at the attitudes and behaviors of younger generations, even as these older generations have a smaller proportional influence on the world. There will be more “Get off my lawn! Damn kids got no respect.”

The futurists who most worry about this problem tend to assume a worst possible case. (Supporting quotes below.) That is, without a regulatory solution we face the prospect of quickly sharing the world with daemon spawn of titanic power who share almost none of our values. Not only might they not like our kind of music, they might not like music. They might not even be conscious. One standard example is that they might want only to fill the universe with paperclips, and rip us apart to make more paperclip materials. Futurists’ key argument: the space of possible values is vast, with most points far from us.

This increased intergenerational conflict is the new problem that tempts some futurists today to consider a new regulatory solution. And their preferred solution: a complete totalitarian takeover of the world, and maybe the universe, by a new super-intelligent computer.

You heard that right. Now to most of my social scientist colleagues, this will sound bonkers. But like totalitarian advocates of a century ago, these new futurists have a two-pronged argument. In addition to suggesting we’d be better off ruled by a super-intelligence, they say that a sudden takeover by such a computer will probably happen no matter what. So, since we will have to figure out how to control it anyway, we might as well use it to solve the intergenerational conflict problem.

Now I’ve already discussed at some length why I don’t think a sudden (“foom”) takeover by a super intelligent computer is likely (see here, here, here). Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn. But I do grant that increasing lifespans and faster change are likely to result in more intergenerational conflict. And I can also believe that as we continue to learn just how strange the future could be, many will be disturbed enough to seek regulation to prevent value change.

Thus I accept that our literatures on regulation should be expanded to add one more entry, on the problem of intergenerational value conflict and related regulatory solutions. Some will want to regulate infinity, to prevent the values of our descendants from eventually drifting away from our values to parts unknown.

I’m much more interested here in identifying this issue than in solving it. But if you want my current opinion it is that today we are just not up to the level of coordination required to usefully control value changes across generations. And even if we were up to the task I’m not at all sure gains would be worth the quite substantial costs.

Added 8a: Some think I’m unfair to the fear-AI position to call AIs our descendants and to describe them in terms of lifespan, growth rates and value plasticity. But surely AIs being made of metal or made in factories aren’t directly what causes concern. I’ve tried to identify the relevant factors but if you think I’ve missed the key factors do tell me what I’ve missed.

Added 4p: To try to be even clearer, the standard worrisome foom scenario has a single AI that grows in power very rapidly and whose effective values drift rapidly away from ones that initially seemed friendly to humans. I see this as a combination of such AI descendants having faster growth rates and more value plasticity, which are two of the three key features I listed.

Added 15Sep: A version of this post appeared as:

Robin Hanson, Regulating Infinity, Global Government Venturing, pp.30-31, September 2014.



Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.


I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.


Will Rituals Return?

Many social trends seem to have lasted for centuries. Some of these plausibly result from the high spatial densities, task specialization, and work coordination needed by industry production methods. Other industry-era trends plausibly result from increasing wealth weakening the fear that made us farmers, so that we revert to forager ways.

An especially interesting industry-era trend is the great fall in overt rituals – we industry folks have far fewer overt rituals than did foragers or farmers. From Randall Collins’ Interaction Ritual Chains:

Only around the nineteenth century, when mansions were built with separate entrance corridors (instead of one room connecting to the next) and back stairways for servants, did the fully private peerless introvert become common. … Until the beginning of the nineteenth century there is no distinctive ideology of intellectuals as withdrawn and at odds with the world. … The marketing of cultural products … put a premium on innovativeness, forcing periodic changes in fashion, and concentrating a new level of attention on the distinctive personality of the writer, musician, or artist. … The political ideology of individual freedom – which arose in a movement concerned largely to break into the aristocratic monopoly on power rather than to withdraw from it – was often blended with the ideology of the freelance writer, musician, or artist. … Alienation, rebellion, glorification of the inward, autonomous self, an oppositional self taking dominant society as its foil – this has become part of intellectual discourse. …

The daily and annual rounds of activity in premodern societies were permeated with rituals that we would easily recognize as such by their formality; living in a patrimonial household in a medieval community (not to mention living in a tribal society) would have been something like what our lives would be if Christmas or Thanksgiving happened several times a month, along with many lesser ceremonies that punctuated every day. … Modern life has its points of focused attention and emotional entrainment largely where we choose to make them, and largely in informal rituals, so that it takes a sociologist to point out that they are indeed rituals. (pp. 362-368)

We can plausibly attribute our industry-era loss of rituals to many factors. Increasing wealth has given us more spatial privacy. Innovation has become increasingly important, and density and wealth are high enough to support fashion cycles, all of which raise the status of people with unusual behavior. These encourage us to signal our increasing wealth with more product and behavioral variety, instead of with more stuff. With increasing wealth our values have consistently moved away from conformity and tradition and toward self-direction and tolerance. Also, more forager-like egalitarianism has made us less ok with the explicit class distinctions that supported many farmer-era rituals. And our suppression of family clans has also suppressed many related rituals.

These factors seem likely to continue while per-capita wealth continues to increase. In that case overt ritual is likely to continue to decline. But there is no guarantee that wealth will always increase. If we find ways (as with ems) to increase the population faster than we can increase wealth, wealth per person will fall. And if wealth falls, we may well see a revival of overt ritual.

I can’t think of a historical novel that makes clear not only how common ritual and conformity were in farmer or forager societies, but how well they comforted and satisfied people. Nor can I think of science fiction stories portraying a future full of beloved ritual. Or any stories that show how lonely and disconnected we modern folks often feel because we lack the rituals that gave deep meaning to so many humans before us. We tend to love novels that celebrate the values we hold dear, but that can blind us to seeing how others held different values dear.

Perhaps the closest examples are war stories, where soldiers find comfort in finding distinct roles and statuses that relate them to each other, and where they act out regular intense synchronized actions that lead to their security and protection. But that is usually seen as applying only to the special case of war, rather than to life more generally.


Trustworthy Telepresence

In a recent Ipsos/Reuters poll, which questioned 11,383 people in 24 countries, about half believed that they would be at a disadvantage in earning promotions because of the lack of face-to-face contact. Previous research suggests part-time telecommuters do not communicate less frequently with managers. … After four years of experience, the average male telecommuter will earn about 6.9% less than a non-telecommuter. (more)

Telecommuting requires the use of various types of media to communicate, such as the telephone and email. Emails have a time lag that does not allow for immediate feedback; telephone conversations make it harder to decipher the emotions of the person or team on the phone; and both of these forms of communication do not allow one to see the other person. Typical organization communication patterns are thus altered in telecommuting. For instance, teams using computer-mediated communication with computer conferencing take longer to make group decisions than face-to-face groups. (more)

Decades ago many futurists predicted that many workers would soon telecommute, and empty out cities. Their argument seemed persuasive: workers who work mainly on computers, or who don’t have to move much physical product, seem able to achieve enough coordination to do their jobs via phone, email, and infrequent in-person meetings. And huge cost savings could come from avoiding central city offices, homes near them, and commuting between the two. (For example, five firms might share the same offices, with each firm using them one day per week.)

But it hasn’t remotely happened that way. And the big question is: why?

Some say telecommuters would shirk and not work as much, but it is hard to see that would remain much of a problem with a constant video feed watching them. Bryan Caplan favors a signaling explanation: that we show up in person to show our commitment to the firm. But a firm should prefer employees who show devotion via more total work, instead of wasting hours on the road. Yes inefficient signaling equilibria can exist, but firms have many ways to push for this alternate equilibrium.

The standard proximate cause, described in the quote above, is that workers and their bosses get a lot of detailed emotional info via frequent in-person meetings. Such detailed emotional info can help to build stronger feelings of mutual trust and affiliation. But the key question is, why are firms willing to pay so much for that? How does it help firm productivity enough to pay for its huge costs?

My guess: frequent detailed emotional info helps political coalitions, even if not firms. Being able to read detailed loyalty signals is central to maintaining political coalitions. The strongest coalitions take over firms and push policies that help them resist their rivals. If a firm part adopted local policies that weakened the abilities of locals to play politics, that part would be taken over by coalitions from other parts of the firm, who would then push for policies that help them. A lack of telecommuting is only one of a long list of examples of inefficient firm policies that can reasonably be attributed to coalition politics.

Some people hope that very high resolution telepresence could finally give enough detailed emotional info to make telecommuting workable. And that might indeed give enough info to build strong mutual trust and loyalty. But it is hard to make very high resolution telepresence feel natural, and we are still far from having enough bandwidth to cheaply send that much info.

Furthermore, by the time we do we may also have powerful robust ways to fake that info. That is, we might have software that takes outgoing video and audio feeds and edits them to remove signs of disloyalty, to make people seem more trusting and trustworthy than they actually are. And if we all know this is possible, we won’t trust what we see in telepresence.

So, for telepresence to actually foster enough loyalty and trust to make telecommuting viable, not only does it need to feel comfortable and natural and give very high bandwidth info, but the process would need to be controlled by some trusted party, who ensures that people aren’t faking their appearances in ways that make it hard to read real feelings. Setting up a system like that would be much more challenging than just distributing something like Skype software.

Of course eventually humans might have chips under their skin to manipulate their sight and sound in real physical meetings. And then they might want ways to assure others aren’t using those. But that is probably much further off. (And of course ems might always “fake” their physical appearance.)

Again, I have hopes, but only weak hopes, for telepresence allowing for mass human telecommuting.

Added 3July: Perhaps I could have been clearer. The individual telecommuter could clearly be at a political disadvantage by not being part of informal gossip and political conversation. He would have fewer useful allies, and they would thus prefer that he or she not telecommute.


Auto-Auto Deadline Looms

It is well-known that while electricity led to big gains in factory productivity, few gains were realized until factories were reorganized to take full advantage of the new possibilities which electric motors allowed. Similarly, computers didn’t create big productivity gains in offices until work flow and tasks were reorganized to take full advantage.

Auto autos, i.e., self-driving cars, seem similar: while there could be modest immediate gains from reducing accident rates and lost productive time commuting, the biggest gains should come from reorganizing our cities to match them. Self-driving cars could drive fast close together to increase road throughput, and be shared to eliminate the need for parking. This should allow for larger higher-density cities. For example, four times bigger cities could plausibly be twenty-five percent more productive.
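For what it’s worth, that example implies a productivity elasticity of city size of roughly 0.16, a back-of-the-envelope figure rather than anything from the post:

```python
# The city-size elasticity implied by "four times bigger -> twenty-five
# percent more productive" (a back-of-the-envelope check, nothing more).
import math

size_ratio = 4.0
productivity_ratio = 1.25
elasticity = math.log(productivity_ratio) / math.log(size_ratio)
print(f"implied productivity elasticity of city size: {elasticity:.2f}")  # ~0.16
```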

But to achieve most of these gains, we must make new buildings with matching heights and locations. And this requires that self-driving cars make their appearance before we stop making so many new buildings. Let me explain.

Since buildings tend to last for many decades, one of the main reasons that cities have been adding many new buildings is that they have had more people who need buildings in which to live and work. But world population growth is slowing down, and may peak around 2055. It should peak earlier in rich nations, and later in poor nations.

Cities with stable or declining population build a lot fewer buildings; it would take them a lot longer to change city organization to take advantage of self-driving cars. So the main hope for rapidly achieving big gains would be in rapidly growing cities. What we need is for self-driving cars to become available and cheap enough in cities that are still growing fast enough, and which have legal and political support for driving such cars fast close together, so they can achieve high throughput. That is, people need to be sufficiently rewarded for using cars in ways that allow more road throughput. And then economic activity needs to move from old cities to the new more efficient cities.

This actually seems like a pretty challenging goal. China and India are making lots of buildings today, but those buildings are not well-matched to self-driving cars. Self-driving cars aren’t about to explode there, and by the time they are cheap the building boom may be over. Google announced its self-driving car program almost four years ago, and that hasn’t exactly sparked a tidal wave of change. Furthermore, even if self-driving cars arrive soon enough, city-region politics may well not be up to the task of coordinating to encourage such cars to drive fast close together. And national borders, regulation, etc. may not let larger economies be flexible enough to move much activity to the new cities who manage to support auto autos well.

Alas, overall it is hard to be very optimistic here. I have hopes, but only weak hopes.


Robot Econ in AER

In the May 2014 American Economic Review, Fernald & Jones mention that having computers and robots replace human labor can dramatically increase growth rates:

Even more speculatively, artificial intelligence and machine learning could allow computers and robots to increasingly replace labor in the production function for goods. Brynjolfsson and McAfee (2012) discuss this possibility. In standard growth models, it is quite easy to show that this can lead to a rising capital share—which we intriguingly already see in many countries since around 1980 (Karabarbounis and Neiman 2013)—and to rising growth rates. In the limit, if capital can replace labor entirely, growth rates could explode, with incomes becoming infinite in finite time.

For example, drawing on Zeira (1998), assume the production function is

Y = A · X1^α1 · X2^α2 · … · Xn^αn, with the αi summing to one.

Suppose that over time, it becomes possible to replace more and more of the labor tasks with capital. In this case, the capital share will rise, and since the growth rate of income per person is 1/(1 − capital share) × growth rate of A, the long-run growth rate will rise as well.

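A small numeric sketch of the quoted claim, with an illustrative growth rate of A of 2% per year (my assumption, not the paper’s): as the capital share approaches one, per-capita income growth g_A/(1 − capital share) blows up.

```python
# Numeric sketch of the quoted growth formula: per-capita income growth is
# g_A / (1 - capital_share), so growth explodes as the capital share -> 1.
# The parameter values below are illustrative assumptions, not from the paper.
g_A = 0.02  # assumed growth rate of A: 2% per year

for capital_share in (0.3, 0.5, 0.8, 0.95, 0.99):
    g_y = g_A / (1.0 - capital_share)
    print(f"capital share {capital_share:.2f} -> income growth {g_y:.1%} per year")
```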

Of course the idea isn’t new; but apparently it is now more respectable.
