Katja Grace and I recorded two more podcasts:
This adds to our nine previous podcasts:
While I’ve been part of grants before, and have had research support, I’ve never had dedicated support for my futurist work, including during the years I spent writing Age of Em. That now changes:
The Open Philanthropy Project awarded a grant of $264,525 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome. .. This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. (more)
Who is Open Philanthropy? From their summary:
Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. .. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. .. The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.
A key paragraph from my proposal:
Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.
Both they and I see value in such an analysis even if AI software ends up differing systematically from the software we’ve seen so far:
While we do not believe that the class of scenarios that Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute a significant amount of useful data collection and analysis that might be valuable to our thinking about AI more generally, as well as provide a model for other people to follow when performing similar analyses of other AI scenarios of interest.
My idea is to extract from our decades of experience with software a more detailed description of the basic economics of software production and use. To distinguish, as time allows, many different kinds of inputs to production, styles of production, parts of produced products, and types of uses. And then to sketch out different rough “production functions” appropriate to different cases. That is, to begin to translate basic software engineering insight into economics language.
The simple assumption that software doesn’t fundamentally change in the future is the baseline scenario, to be fed into standard economic models to see what happens when such a more richly described software sector slowly grows to take over the economy. But a richer more detailed description of software economics can also give people a vocabulary for describing their alternative hypotheses about how software will change. And then this analysis framework can be adjusted to explore such alternative hypotheses.
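The baseline idea above can be sketched concretely. The following is a minimal toy model, not anything from the actual proposal: it assumes a standard Cobb-Douglas production function for each of two sectors, and assumes (purely for illustration) that the software sector’s productivity grows faster than the rest of the economy, so its output share slowly rises. All parameter values and function names here are hypothetical.

```python
# Illustrative sketch only: a toy two-sector model in which a "software"
# sector with Cobb-Douglas production slowly takes over the economy.
# Parameter values (growth rates, alpha) are invented for illustration.

def cobb_douglas(capital, labor, tfp, alpha):
    """Standard Cobb-Douglas output: Y = A * K^alpha * L^(1-alpha)."""
    return tfp * capital**alpha * labor**(1 - alpha)

def software_share_path(years, software_growth=0.10, other_growth=0.02):
    """Software's share of total output over time, assuming (hypothetically)
    that software productivity grows faster than the rest of the economy."""
    software_tfp, other_tfp = 1.0, 1.0  # start the sectors at equal size
    shares = []
    for _ in range(years):
        software_out = cobb_douglas(1.0, 1.0, software_tfp, alpha=0.3)
        other_out = cobb_douglas(1.0, 1.0, other_tfp, alpha=0.3)
        shares.append(software_out / (software_out + other_out))
        software_tfp *= 1 + software_growth
        other_tfp *= 1 + other_growth
    return shares

shares = software_share_path(50)
# Starting from a 50% share, the faster-growing software sector's share
# rises monotonically toward 100% over the simulated decades.
```

A richer version of this kind of model is where alternative hypotheses would enter: someone who thinks AI software will differ from past software could, for example, swap in a different production function or change how inputs combine, and then rerun the same growth exercise.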
So right from the start I’d like to offer this challenge:
Do you believe that the software that will let machines eventually do pretty much all jobs better than humans (or ems) will differ in foreseeable systematic ways from the software we have seen in the last seventy years of software practice? If so, please express your difference hypothesis as clearly as possible in terminology that would be understandable and familiar to software engineers and/or economists.
I will try to stretch the economic descriptions of software that I develop in the direction of encompassing the most common such hypotheses I find.
Folks near New York City, Washington DC, or the California Bay Area, consider seeing an upcoming Age of Em talk. (I’ll add more specific links as I get them.)
CA Bay Area
July 9, 10a-7p, Oakland, BIL Oakland
Aug 1, 1p, Mountain View, Tech Talk, Google
Aug 2, 5p, Mountain View, RethinkDB
Aug 3, 7p, Oakland, Oakland Futurists
Aug 5-7, Berkeley, Effective Altruism Global
Aug 8, 7p, Palo Alto, Stanford Effective Altruism
New York City
My life has been, in part, a series of crusades. First I just wanted to understand as much as possible. Then I focused on big problems, wondering how to fix them. Digging deeper I was persuaded by economists: our key problems are institutional. Yes we can have lamentable preferences and cultures. But it is hard to find places to stand and levers to push to move these much, or even to understand the effects of changes. Institutions, in contrast, have specific details we can change, and economics can say which changes would help.
I learned that the world shows little interest in the institutional changes economists recommend, apparently because they just don’t believe us. So I focused on an uber institutional problem: what institutions can we use to decide together what to believe? A general solution to this problem might get us to believe economists, which could get us to adopt all the other economics solutions. Or to believe whomever happens to be right, when economists are wrong. I sought one ring to rule them all.
Of course it wasn’t obvious that a general solution exists, but amazingly I did find a pretty general one: prediction markets. And it was also pretty simple. But, alas, mostly illegal. So I pursued it. Trying to explain it, looking for everyone who had said something similar. Thinking and hearing of problems, and developing fixes. Testing it in the lab, and in the field. Spreading the word. I’ve been doing this for 28 years now. (Began at age 29.)
And I will keep at it. But I gotta admit it seems even harder to interest people in this one uber solution than in more specific solutions. Which leads me to think that most who favor specific solutions probably do so for reasons other than the ones economists give; they are happy to point to economist reasons when those support them, and to ignore economists otherwise. So in addition to pursuing this uber fix, I’ve been studying human behavior, trying to understand why we seem so uninterested.
Many economist solutions share a common feature: a focus on outcomes. This feature is shared by experiments, incentive contracts, track records, and prediction markets, and people show a surprising lack of interest in all of them. And now I finally think I see a common cause: an ancient human habit of strong deference to the prestigious. As I recently explained, we want to affiliate with the prestigious, and feel that an overly skeptical attitude toward them taints this affiliation. So we tend to let the prestigious in each area X decide how to run area X, which they tend to arrange more to help them signal than to be useful. This happens in school, law, medicine, finance, research, and more.
So now I enter a new crusade: I am against prestige. I don’t yet know how, but I will seek ways to help people doubt and distrust the prestigious, so they can be more open to focusing on outcomes. Not to doubt that the prestigious are more impressive, but to doubt that letting them run the show produces good outcomes. I will be happy if other competent folks join me, though I’m not especially optimistic. Yet.
Over the next week I’ll give these talks on Age of Em:
I’ll also talk in Paris May 18, but that is by invitation only.
While last week I talked at U Rochester, the next three weeks I talk at:
All these talks are, of course, on my upcoming book The Age of Em.
I’ll do three public talks at U Rochester next week:
I leave Friday on a nine day trip to give six talks, all but one on Age of Em:
Imagine that one person, or a small group, wants to do something, like watch pornography, do uncertified medical procedures, have gay sex, worship Satan, shoot guns, drink raw milk, etc. Imagine further that many other people outside that small group don’t want them to do this. They instead want the government to make a law prohibiting similar groups from doing similar things.
In this prototypical situation, libertarians tend to say “let them do it” while others say “have the government make them stop.” If we take a cost-benefit perspective, then the key question is whether this small group gains more from their activity (or an added increment of it) than others lose (including losing via their “altruistic” concern for the small group). Since this small group would choose to do it if allowed, we can presume they expect to gain something. And if others complain and try to make them stop (or cut back), we can presume they expect to lose. So we are trying to estimate the relative magnitude of these two effects.
I see three considerations that, all else equal, lean this choice in the libertarian direction.
Again, each of these considerations leans the conclusion in a libertarian direction, all else equal. Yes, they can collectively be overcome by strong enough other considerations that lean the other way. For example, I’ll grant that for the case of air pollution, we plausibly have strong enough evidence of large harms on outsiders, harms insufficiently discouraged by local coordination and lawsuits. So yes in this case central government might be an attractive solution, if it can act cheaply and efficiently enough.
But the main point here is that the three considerations above justify a libertarian default that must be overcome by specific arguments to the contrary. If outsiders complain about an activity, but aren’t willing to buy less of it via contract, or to sue for less of it in court, maybe they aren’t really being hurt that much. There is an asymmetry here: if we don’t ban an activity and might get too much, contract & law could reduce it a lot, but if we ban an activity and might get too little, contract & law can’t increase it much.
Yes, other persuasive contrary considerations might be found, including considerations not based on the net harm of the disputed actions. But the less you think you know about these other considerations, the more your choice will be influenced by these three basic considerations, all of which seem to me pretty solid.
While I have said before that I am not a libertarian according to common strict definitions, I still usually tend to lean libertarian, because in fact arguments based on further considerations often seem to me pretty weak. While one can often make clever arguments, it is often hard to have much confidence in them; the world seems just too complex. And so I often have to fall back on simple defaults. Which, as I’ve argued above, are libertarian.