It's not especially futuristic, as it has roots going back a couple of centuries. It's one of the items on the list of Enlightenment ideals that hasn't been achieved yet. Enlightenment intellectuals in 1790 certainly understood that slavery wasn't acceptable in a just liberal democratic society, even if it wasn't clear how it would be eliminated.
The ongoing project of progressively rolling out the Enlightenment includes recognizing some sort of Georgist ideal that everyone ought to have a share in common resources, which in political economy are called "land". IP, EM spectrum, etc. are "land".
This isn't a matter of redistribution as much as correcting the glaring maldistribution or injustice of allowing private interests to completely control assets and income streams that rightfully belong to everyone.
Since all three processes are active areas of R&D, why isn't the discussion about how they will interact, rather than this 'there can only be one' argument?
I now think of this as the Highlander Fallacy.
"The expected probability of lossage depends on your priors and life experience." Yes! This suggests a little reverse engineering: ask what the priors and life experiences are of those who support ems (and the like). And I am left thinking that it is the "Less Wrong" contingent. Mostly young rationalists. And they (based on my priors and life experience) don't know anything.
Up until now, your "third way" is pretty much the only way machines have contributed to economic growth. So it seems premature to predict that it won't play a role in the future also.
Relatedly, I feel a little uncomfortable with how commonly those arguing that technological unemployment is coming advocate not just wealth redistribution, but a basic income scheme specifically. It seems to me that whether it will be necessary to redistribute wealth to large numbers of people who cannot get work is a separate question from how exactly that redistribution ought to be done, if it is done.
Basic income as a redistribution scheme has some interesting arguments in favor of it, but it just seems implausible to me that all the futurist folk advocating it would have first realized that redistribution would be necessary, and then separately weighed up the options and decided basic income is the best scheme. I can't help but think the fact that it just sounds futuristic and Star Trek-y has something to do with the focus on this particular approach to redistribution. Basic income just sounds like something the future will have, like flying cars.
Certainly. My crusade is limited to "technological unemployment" as a sloppy idea tied to too many wildly different concepts.
Definitely!
To what extent do we expect there to be high bandwidth in and out of existing brains during the transition? If we expect slice/scan uploading, then there's not really much time to check whether we've left anything behind.
If we find a more gradual transition, it could be experienced as a series of increased capabilities with the available opportunity to evaluate retrospective regret.
The added expense is figuring out how to get much higher two-way bandwidth through the skull in ways that can interact with distributed or silicon-based services. But since it means we get some kind of capability increase sooner (even if it's not yet a full em), it seems likely that we'll have some measure of progress here before the full phase change occurs.
But why can't chips in far away server racks be still below the latency of sensing and thought for assisting with most practical purposes?
I presume you will grant that my book http://ageofem.com does present a future scenario where a great many important things change, not just unemployment for humans.
In the limit of time that's correct, but I don't think that people are most concerned with t->inf, they're concerned with the prospects for their own lives.
One heuristic for minimizing value drift is maintaining continuity, and that's almost certainly what people are trying to suggest with the third way. I expect that while we're still trying to get our hardware and software built to sustain ems, we'll be growing ourselves into beings that can more comfortably make the leap.
If we put an agency-rich substrate near human brains (below the latency horizon of sensing and thought) and let the brains do what they do best, at least some parts of the human beings will grow into the new substrate. At some point, unless there's really an unsurpassable serial biological bottleneck, we'll end up as ems anyway, as you suggest.
But we should be aware of the possibility that the process is not inherently lossless (and the expected probability of lossage depends on your priors and life experience). This scenario helps to frame the question: what might we care deeply about that could be forgotten and left behind even in such a gradual migration?
Is what a commenter recently called "technical AI" a third way?
"Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive." This is of course a feature, not a bug. Economic analysis makes it a bug. I feel foolish when I say "Sorry" to Siri when it has bungled a request or command. In my endeavor to remain human, I no longer use Siri.
(cont.) The post is otherwise perfectly fine and interesting. It's just jarring to me when I see people talking about a vastly different world (even if it's fast approaching) in terms of just one minor aspect of it.
And it's fine to look at one aspect at a time, but jobs isn't really the central point here; it's that humans aren't going to compete with ems or AIs in general. They won't just be out-working us. They'll be out-strategizing us and out-consuming us and out-politicizing us and a bunch of other things.
Technological unemployment is a 200-year-old meme that happens to be really handy right now for governments that want more control over the internet and don't want to be blamed for unemployment. Baxter the robot is not taking your job.
When AI or ems come along and disrupt things enough to put us out of work, they'll also be destroying what we currently recognize as government, the global military balance of power, the economy, what it means to be human, and a bunch of other things. Unemployment will be low on the list of things to worry about.