
The thing I find fascinating about AI-induced unemployment is that it flips the usual class analysis completely on its head. It's the middle-class white-collar workers (perhaps especially the extremely privileged few in creative industries, who long believed their work was more "human" than anyone else's) whose jobs are under the most immediate threat. I've heard multiple times that the job safest from automation is being a plumber.

This is likely to change politics in the near future. UBI is already somewhat popular in liberal elite circles, but currently that feels more like a luxury belief than a deeply held conviction - the sort of thing people say in order to sound sophisticated. Soon it's going to be deadly serious, as these people unexpectedly find everything they've worked for washed away while the skilled manual labourers inherit the earth.


I think you mean, "as the owners of the AI companies inherit the earth."


You are right. The safest jobs I can think of are blue-collar jobs in wealthy countries. But how long will wealthy countries stay wealthy relative to other countries once AI takes off? And how sustainable will blue-collar wages be once everyone tries to retrain into a blue-collar job?


Isn't this analysis based on a fallacy of sorts? If AI results in everybody losing their jobs, what does that mean? It sounds to me like it means the costs of production have declined to near zero, and thus the cost of goods will have declined to near zero - maybe too cheap to measure. Since goods cost nearly nothing, charity costs nearly nothing too. Sounds like utopia come true - with all the good and bad that implies.

author

There is a huge difference between zero and "near zero".


By comparative advantage, human labor will still be worth *something*. (Assuming the robots consider us worthy of having property rights, that is.)

And if goods are very cheap, that *something* may be sufficient for a good life.

After all, in extremis the humans can just trade with each other (as we do now!).


Not really. Even street beggars take in a few dollars. When goods and services are near zero in price, even the poor won't have much to worry about.


The cost of labor and produced capital would fall to near zero, but non-produced capital would remain scarce. So non-produced capital would gobble up all of the income. Georgism seems quite resilient to that scenario.


By non-produced capital do you mean real estate? If so, it strikes me that even real estate might drop in value. Under the human-centric production model that has existed since the dawn of civilization, concentrating humans together has been important. But in a world where humans are not involved in production (a world where the employment rate drops below 10%), vast amounts of real estate that are currently too uneconomical to develop would be freed up.


Yes, real estate is the dominant form of non-produced capital - specifically, the land itself; the buildings are produced capital. I'm uncertain about humans spreading out: some will prefer to do that, but many will want to live in cities, because that's where you get the critical mass of people needed for a wide choice of social activities. It might be that land becomes less valuable relative to other kinds of natural resources, but if labor costs drop to the energy cost of running GPUs, then only natural resources and relative social status will be scarce, so all the value will be in those things.


It will likely take a few years to a decade to develop. Our favorite AI optimist, Tim Worstall, firmly believes, against all common sense, that the brain machines will release humans to go and do other, more productive things. Wherever the "released" humans go, they will find a robot doing the job cheaper, faster and better. Faced with mass unemployment and angry workers marching and waving baguettes, the state must either outlaw AI (in selected occupations) or institute income replacement. In the latter case, who will pay for the income replacement? The owners of the bots. This presents a further dilemma. A bot-owning entrepreneur has risked megabucks in the hope of making a profit. The entrepreneur might lose it all, as many new ventures do. And if the entrepreneur makes a profit, the taxes to pay for the income benefit kick in. I see this as a disincentive.

Tim Worstall imagines humanity lying in the lap of idle luxury while the bots do all the work for us. Never mind that for many of us, what we do is who we are. Utilitarianism is the default human mindset. Most of us get more pleasure from mental activities than physical. One of these activities is hunting and gathering and returning each day with some tasty roots or a delicious antelope. Without work, humans will decline mentally and morally very quickly.

A century from now, humans will no longer be the dominant species. A few unusually attractive specimens may be kept as pets for the robots.


I'd assume that the reason people aren't interested in buying such insurance is that a world where the value of human labour goes to zero is a world where we get millions of machine slaves producing stuff for us. It would be weird if we suddenly had all this abundance but people were worse off.


You need to make sure you own the machines that produce the abundance before the value of your labour goes to zero permanently and irrevocably. The decisions we make now will affect us more than we can imagine.


I don't see why I need to own the machines. Consider something like food: if machines get really good at agriculture and create an abundance of food, I just can't imagine how food could become less accessible to me. It's not like the humans who own the machines are going to colossally increase the amount they eat each day to absorb the surplus.

And if the people who own the machines don't want to use them for farming, then that leaves a window open for human labour to become more valuable again, bringing us back in the direction of where we are right now.

author

Your imagination seems to be too limited.


Yes, but her second point is valid, I think. Assuming humans retain property rights, even if human labor has zero value to the machines (by comparative advantage, it shouldn't go all the way to 0.0), there's nothing preventing sscer from running her own farm, growing her own food, and trading that with other humans for other things.

author

To run a farm your way, you need to own a farm.


Do you have a particular scenario in mind for how things could play out such that I need the insurance? I can imagine machine owners becoming so powerful that people without any capital can't eat, but in a world with that level of injustice, shouldn't democracy kick in to fix the problem? And if not (because the machine owners take over the government), why should I expect my insurance to be paid out?

author

Capitalists tend to prefer to enforce property rights. So if you own capital, you can keep its returns. Counting instead on the goodness of voters to save you from starvation seems pretty risky to me.


I asked ChatGPT: Which financial assets are likely to retain value in the event that automation suddenly takes most jobs (labor force participation rate falls below 10%)?

It replied: Real Estate, Precious Metals, Cryptocurrencies, Blue-Chip Stocks and Infrastructure Assets

But I am a bit skeptical. Advanced AI should be able to produce abundant housing, create tech to extract gold from sea water, create better crypto, and make current industries and infrastructure obsolete.


As long as live humans are around, some kinds of real estate will have special value, even if housing is too cheap to meter. There is only so much oceanfront property, property within walking distance of great universities, property in exceptionally scenic locations, etc.

Stocks are probably a good bet too, if only because managers will at least try to shift capital into whatever is actually profitable over time.


Oceanfront property can be created by dredging.


Only up to a point - if you make the oceanfront all fractal, at some point it becomes less attractive.

The point remains - some things are, and will remain, scarce.


You'd need A to be large enough to create UBI-like flows in perpetuity, since D will only come once, if it comes at all, and labor will be obsolete forever if E happens (assuming away the issue of fertility above 2.1).

How many workers can afford to purchase UBI-in-perpetuity today, even mitigated by the probability of E? I estimate A at some $10M per person, so a premium of (in present value) $100k to $1M, assuming the chance of E is 1% to 10%.

If, on account of the above, A is provided to all at a subsidy, how is that different from a sovereign fund financing a UBI + VAT complex? Nothing wrong with that, of course.
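
A quick back-of-envelope check of those numbers. This is a minimal sketch using only the commenter's own assumptions (a payout A of roughly $10M per person and a 1% to 10% chance of E), not figures from the post:

```python
# Back-of-envelope check of the premium estimate above. All figures are the
# commenter's assumptions, not data: a payout A of ~$10M per person if event E
# (robots take most jobs) occurs, and a 1%-10% chance of E.
payout_A = 10_000_000  # assumed payout needed to fund UBI-like flows in perpetuity

for p_E in (0.01, 0.10):
    fair_premium = payout_A * p_E  # actuarially fair premium, ignoring discounting
    print(f"P(E) = {p_E:.0%} -> premium ~ ${fair_premium:,.0f}")

# Prints ~$100,000 at 1% and ~$1,000,000 at 10%, matching the 100k-1M range above.
```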

author

You don't have to buy this insurance all at once. You can instead slowly accumulate it over time. If workers can't collectively afford to buy sufficient insurance, then their only other option is to try to steal more from others. But that may not end well.
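
As a rough illustration of what accumulating the insurance slowly might look like, here is a sketch; the target premium, horizon, and return below are hypothetical, not figures from the post:

```python
# Minimal sketch of accumulating the insurance gradually rather than buying it
# all at once. The target premium, horizon, and return are hypothetical.
target = 200_000       # assumed total premium to accumulate
years = 20             # assumed accumulation horizon
annual_return = 0.03   # assumed real return on the accumulating assets

months = years * 12
r = annual_return / 12
# Monthly contribution whose future value equals `target` (ordinary annuity formula).
monthly = target * r / ((1 + r) ** months - 1)
print(f"~${monthly:,.0f}/month for {years} years at {annual_return:.0%} real return")
```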


Agreed, but this is a bit of a Pascal's Wager situation: when risks are remote and you can afford to insure against them cheaply, there are too many to count. By the time you can see them on the horizon, it's too late, of course.

Re stealing: yes, that's how it's going to end in practice. But combining a UBI with an equivalent VAT would be a decent way to steal in this scenario (even better if you can sweeten it with a sovereign fund).

https://mendimeterastit.blogspot.com/2022/01/on-merging-ubi-and-vat-schemes.html


Even a one-time lump-sum payment is better than nothing. If urban tech workers, for instance, actually expected to lose their jobs in three to five years, at the same time as their local housing markets go pear-shaped, it would be better to have something. At least that way, perhaps they could each afford a van to live in, with some savings left over for food while they retrain.

This is how life insurance used to be structured: in the event of a man's death, his widow and children got a lump sum that he hoped they would use to get by until the children were old enough to work. And people actually did buy life insurance. The interesting thing is why nobody is buying or offering this kind of policy.


"The interesting thing is why nobody is buying this kind or offering this." The recent AI doom-and-gloom is far too abstract for the average person to buy into. The reality people see around them is virtually no impact of AI on their lives, and relatively low unemployment.

Phrased another way: from history, our prior probability that the Luddites are correct should be very low. Elevating that probability to something large would take very compelling evidence, and I don't believe a living-in-the-present person sees such evidence around them today. Buying into doom and gloom requires some form of projection into the future, which introduces uncertainties of its own and in general weakens the case that AI is a threat worth worrying about.


Yes, but the technological unemployment scenario presupposes the end of most work, not just this or that industry, which is why this is so hard to solve: you have no one to share the risk with, even though Robin is trying to share it with those who don't take the scenario seriously.


More important, I think Robin's scheme shares risk with people who (a) don't take the risk seriously and (b) have capital (i.e., will own the machines that make stuff).

The core problem Robin is trying to solve is that most people don't have *any* capital - even highly-paid people in rich countries. They live hand-to-mouth with no net savings.

But if they did have even a little capital and invested it in AI machines, they'd own some of the output of those machines - given their likely vast productivity, even a tiny investment might produce enough for a lavish lifestyle (by current standards).

Of course this assumes (a) humans will retain enforceable property rights and (b) the machines will consent to being owned.


Can I also have "AI killed me and everyone I care about"-insurance?


Why aren't such insurance products and financial assets already available? Obvious answers include a) there isn't sufficient real demand to offset production costs in such a way as to produce supernormal or even comparatively normal profits, or b) the policy regime precludes it. In the absence of decisive evidence either way, I would presume a good chance of both causes playing an important role.

author

We already know that what I've proposed above is now illegal.


Illegal in the US? Can you give me a source, please? I am about to draft a mix of your model with a future-skills investment fund that could also be used for re-/upskilling.


"as this risk is easily measured and widely shared" - is it, though? Even if we somehow agree on what is the proper measure and even if by some miracle it won't get Goodharted to all seven hells, calculating it sounds like something beyond most people's capabilities.

author

Labor force participation has long been measured well, and should remain pretty easy to measure even through this sort of disruption.


For an economist/statistician, maybe, but I don't think you engage with the core of the argument: this suggests putting the risk calculations on, well, the average person.


Are you sure the SEC or whichever regulatory body oversees insurance won’t want to get involved?

author

The SEC does now regulate "derivatives", which is what "A if E" assets are. As I said, we need them to allow the sale of such assets to ordinary workers.
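
A minimal sketch of that payoff structure, with the trigger E operationalized as the labor force participation rate falling below a threshold; the 10% figure simply echoes earlier comments and is an illustration, not the post's official definition:

```python
# Sketch of an "A if E" asset: it pays the full amount A only if the trigger
# event E has occurred. Here E is illustrated as the labor force participation
# rate falling below 10%, echoing figures used in the comments above.
THRESHOLD = 0.10

def event_e_occurred(participation_rates):
    """True if any observed labor force participation rate falls below the threshold."""
    return any(rate < THRESHOLD for rate in participation_rates)

def a_if_e_payout(A, participation_rates):
    """Payout of the 'A if E' asset: A if E has occurred, otherwise nothing."""
    return A if event_e_occurred(participation_rates) else 0.0

print(a_if_e_payout(10_000_000, [0.62, 0.61, 0.60]))  # 0.0 - no trigger
print(a_if_e_payout(10_000_000, [0.45, 0.20, 0.08]))  # 10000000.0 - E occurred
```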


When you say "collect these assets", is that presumably done by the government? How exactly would they handle this investing infrastructure?


In the UK this is perfectly possible using a mutual club with membership classes for investors and workers.

Comment deleted
author

There are so many AI-related companies. Without a convenient index fund, it is hard to invest in them all.


Such ETFs apparently already exist, for instance First Trust Nasdaq Artificial Intelligence ETF; iShares Robotics and Artificial Intelligence ETF; Global X Robotics & Artificial Intelligence ETF.

author

It makes sense for robots-took-most-jobs insurance assets to emphasize such index funds.


When I took a look at four leading AI ETFs, I found that all had underperformed the market, and fees didn't help. A low-fee, Vanguard-style AI ETF should do better. It made me think that a better approach would be either to incorporate a wider range of companies in the AI index or to add them to your own portfolio. The reason is that it is not necessarily the AI companies themselves that will do best; it may well be the companies that use AI products to greatly boost their productivity. I suppose where the biggest gains land depends partly on how much competition there is between AI-creating companies (compared with the companies that use their products) and how much of the value they create they can capture. In other words, companies defined as "AI companies" may not capture the most value from the technology.
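
For anyone who wants to reproduce that comparison, here is a rough sketch. The tickers (ROBT, IRBO, BOTZ, with SPY as the market proxy) and the use of the yfinance library are my assumptions; the comment doesn't name the four funds it looked at.

```python
# Rough sketch comparing AI/robotics ETFs against a broad-market proxy.
# Tickers and the yfinance dependency are assumptions, not from the comment.
import yfinance as yf

tickers = ["ROBT", "IRBO", "BOTZ", "SPY"]  # SPY as the market benchmark
prices = yf.download(tickers, start="2019-01-01", auto_adjust=True)["Close"].dropna()

total_return = prices.iloc[-1] / prices.iloc[0] - 1  # cumulative return over the window
print(total_return.sort_values(ascending=False))
```

Since expense ratios are already reflected in fund prices, a comparison like this also picks up the fee drag the comment mentions.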
