I’ve long described the following as the most obviously helpful policy response to the possibility of advanced AI. But even though I’ve long known many folks who say they are very worried about AI, I have yet to motivate any of them to actually pursue this response. Let me now try again.
While far from obvious, it is plausible that someday machines may displace most all humans from their jobs. It is also not crazy to think that this might happen relatively suddenly, and without much warning. Which seems a pretty big problem, given that most people don't own much besides their ability to earn wages. Without some sort of charity or insurance, they might starve. Thus each of us seems well-advised to try to set up some insurance re this risk, instead of just hoping to rely on charity.
One needs to set up insurance well before problems are realized or revealed. Yet it can be hard to motivate people to insure against risks that seem too unlikely or remote in time. Which might make the current moment an ideal time to consider this. Many are now worrying loudly about AI, saying that AI might soon take all the jobs, or worse. And yet I’m pretty sure that most investors see this risk as actually still pretty remote and unlikely. Maybe making now a great time to set this up.
I say “insurance” but, as this risk is easily measured and widely shared, we wouldn’t need to use the usual insurance industry. Here is a simple plan:
A) Carefully define the key event E of “automation suddenly takes most jobs”. Maybe “labor force participation rate falls from >35% to <10% in <10 years by date D”.
B) Collect some diversified financial assets A likely to retain substantial value both after such an event, and also if that event never happens. Such as global stock, bond, and real estate index funds.
C) Split these assets A into “A if E” and “A if not E”. This split can be done with no risk. We should always see prices satisfy p(A) = p(A if E) + p(A if not E).
D) Sell “A if E” assets to workers. The more they buy, the better insured they are. They could buy these slowly over time, instead of all at once.
E) Sell “A if not E” assets to any willing investors. Buyers of this are in effect selling robots-take-most-jobs insurance.
F) When the event E happens, “A if E” assets turn into A assets, which can then be sold off to pay for ex-worker living expenses. If E seems likely to happen soon, “A if E” assets can also be sold then to pay for living expenses of early job losers.
G) As date D approaches, workers switch to buying assets with later dates D’.
H) If date D comes without E ever having happened, “A if not E” assets turn into A assets, giving their investors a higher return than if they’d just bought A.
I) At all times, the price ratios p(A if E)/p(A) for various dates D warn us all, via a probability distribution over time, of when robots might take most jobs.
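To make steps (C) and (I) concrete, here is a minimal sketch with hypothetical prices. The no-arbitrage identity says the two split pieces must sum to the whole, and the price ratio p(A if E)/p(A) then reads as a rough (risk-neutral) probability of E; the numbers below are invented for illustration, and real prices would also reflect risk premia and any correlation between E and A's value.

```python
def implied_prob_of_E(price_A: float, price_A_if_E: float) -> float:
    """Rough probability of E implied by conditional-asset prices.

    Treats p(A if E)/p(A) as P(E), which ignores risk premia and
    any correlation between the event E and the value of A.
    """
    return price_A_if_E / price_A

# Hypothetical quotes for one date D:
price_A = 100.0          # one share of the diversified asset A
price_A_if_E = 4.0       # pays a share of A only if E happens by D
price_A_if_not_E = 96.0  # pays a share of A only if E never happens by D

# No-arbitrage check: the split pieces should sum to the whole,
# i.e. p(A) = p(A if E) + p(A if not E).
assert abs(price_A - (price_A_if_E + price_A_if_not_E)) < 1e-9

print(implied_prob_of_E(price_A, price_A_if_E))  # 0.04
```

Repeating this read-out across contracts with different dates D is what yields the probability distribution over time mentioned in (I).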
And that’s it. If people vary in their tolerance for the risk of E, then there are gains from trade in having some people hold “A if E”, while others hold “A if not E”. And thus there is value to be released by splitting assets A into “A if E” and “A if not E”. Yes, regulations may now stupidly prevent selling such split assets to workers; most of the work here may be to overturn such regulations.
Sure, there would be some work to do to advise workers of how well they could expect to live when holding how much of each kind of A asset. Workers should prefer global portfolio assets A, to insure against regional risks, and should consider the risk-return tradeoff re different kinds of assets A.
But those are minor issues; the main priority is to get workers to hold such assets. Maybe tech firms could signal their concern about AI by buying such assets for their employees. And then maybe cities or regions could buy some on behalf of their citizens. (Note: due to regional risks, planning to tax some local citizens to support others can fail badly.)
Note that whether this plan is a good idea doesn’t depend much on the chances or timing of machines taking most jobs. The lower are these chances, the cheaper is this insurance, making it still a wise precaution.
Added April 1: Zvi complains that "if mass unemployment transforms society in bad ways, you still get hit." But the more people get insured, the less badly society gets hit.
Added June 4: The cost of insuring against a 1% per year risk of losing a $30K US median-wage job is ~1% of that wage, or $300/yr.
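The arithmetic behind that figure, an actuarially fair premium of (annual risk) × (wage at stake), can be checked directly:

```python
annual_risk = 0.01   # 1% per year chance of losing the job
median_wage = 30_000  # $30K US median wage, as in the text

# Actuarially fair annual premium = expected annual loss
expected_loss = annual_risk * median_wage
print(expected_loss)  # 300.0
```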
The thing I find fascinating about AI-induced unemployment is that it flips the usual class analysis totally on its head. It's the middle-class white-collar workers (perhaps even the extremely privileged few in creative industries who long believed their work was more "human" than anyone else's) whose jobs are under the most immediate threat. I've heard multiple times that the job safest from automation is being a plumber.
This is really likely to change politics in the near future. It is true that UBI is already somewhat popular in liberal elite circles, but currently that feels more like a luxury belief than a deeply held conviction - the sort of thing people say in order to sound sophisticated. Well, soon it's going to be deadly serious, as these people unexpectedly find everything they've worked for washed away as the skilled manual labourers inherit the earth.
Isn't this analysis based on a fallacy of sorts? If AI results in everybody losing their jobs, what does that mean? It sounds to me like it means the costs of production have declined to near zero, and thus the cost of goods will have declined to near zero - maybe too cheap to measure. And since goods cost nothing, charity costs nothing too. Sounds like utopia come true - with all the good and bad that implies.