You keep using this word "adaptive" in reference to culture, without ever being clear what you mean. You claim it is just the same word used in biology, but the technical definition used in biology makes reference to DNA-heritable traits and generations. Neither of those are compatible with "cultural evolution," since culture has no generations or DNA, and often changes in ways other than random variation followed by selection, namely social persuasion and dialectic.
If you mean a culture that successfully propagates itself into the future, then pre-modern cultures were *not* adaptive, because they aren't around anymore. They lost to modern culture, which did a better job of propagating through its environment - that environment being human minds.
Modern culture won't last forever either. No culture lasts forever - they all eventually lose to some successor culture that wins the battle of minds. That means no culture is adaptive.
No, there really is a bio definition of "adaptive" that doesn't refer to DNA. Don't you feel it is a bit tiresome to object every time I use this word?
Well, give that definition then, instead of just claiming it exists. Most of your arguments rest on this term, so it should be the first thing you straighten out.
https://chatgpt.com/share/6998a731-bf30-8009-8e90-e4976a62f66e
Give me the definition *you yourself* endorse and want to use, not four or five different definitions generated by ChatGPT which may or may not be what you meant. I could dissect and talk about these, but it would be a waste of time because they're all slightly different. If you didn't take the time to write it, why should I take the time to read it?
All these different ChatGPT-generated definitions are vague about whether a lineage is a culture, or the DNA genes of those having the culture. The most apropos one is the one starting "a norm encouraging high fertility..." but what if a norm encourages low fertility *and also* groups that adopt it are copied more? It's failing to disambiguate about the exact thing I want you to disambiguate.
> A trait, behavior, or information pattern is adaptive to the extent that its presence causes the lineage carrying it to become more prevalent over time in its relevant population.
> A trait is adaptive if, in the environment where it operates: It increases the expected rate at which copies of itself (or its lineage) persist, spread, or are imitated, relative to alternatives.
> A norm encouraging high fertility is adaptive culturally if groups that adopt it grow faster or are copied more, causing the norm to spread across groups or generations.
> A transmissible trait is adaptive if it increases the expected future prevalence of the lineage carrying that trait, in its relevant population and environment.
The increase in the frequency of some lineage is an interesting thing to think about wrt culture.
The global emergent monoculture is by this definition adaptive, since its frequency within the population has increased over time (hence monoculture).
However if we have a prolonged fertility decline then this frequency could drop as other, more fertile, groups ascend. In that sense the dominant culture will become non-adaptive. This seems well-enough defined to me.
Culture does have some interesting features though: Unlike DNA it can hop between individuals. Not everyone born Amish, or devout Catholic, remains so. One can imagine a scenario where the low-fertility culture continues to thrive by siphoning off people from the high-fertility subcultures. Essentially thriving as a parasite.
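The siphoning scenario can be sketched with a toy growth model (all parameter values are illustrative assumptions, not estimates): a culture with zero net fertility can still hold a stable majority share, so long as its conversion intake outpaces its fertility deficit.

```python
# Toy model of the "parasite" scenario: a low-fertility culture persisting
# by converting members away from a high-fertility subculture.
# All parameter values are illustrative assumptions, not empirical estimates.

def simulate(steps=500, f_high=0.03, f_low=0.0, convert=0.02):
    """Yearly update: each group reproduces, then a fixed fraction
    of the high-fertility group defects to the low-fertility one."""
    high, low = 1.0, 1.0
    for _ in range(steps):
        high *= 1 + f_high          # high-fertility subculture reproduces
        low *= 1 + f_low            # low-fertility culture has zero net fertility
        defectors = convert * high  # some members are siphoned off each year
        high -= defectors
        low += defectors
    return high, low

high, low = simulate()
share_low = low / (high + low)  # low-fertility culture's population share
```

With these assumed rates the low-fertility culture's share settles near two-thirds despite zero net fertility, because conversion offsets its growth deficit; raise `f_high` or cut `convert` enough and its share instead shrinks toward zero.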
Culture does usually have generations: generations of copies. It is pretty similar to copies of DNA, though some quibble about copying fidelity in some cases.
> there has always been a risk that as we change stuff about our bodies and its environment, we might stop being conscious. Such as by changing foods, brain tools, etc.
I’m not sure what foods or brain tools this could refer to. A clearer example would be drugs, eg sedatives or hallucinogens.
In about 400 years North America will be Amish due to adaptiveness regarding fertility; this seems straightforward.
Unlikely. It takes modern technology to support modern food supply and distribution. Pre-industrial population levels were stagnant for a reason. Most of the Amish already aren't farmers anymore, branching into other trades and reliant on the industrial American economy. If, hypothetically, the Amish population continued to boom without constraint for 400 years (already unlikely), they would have to branch into industrial manufacturing to support themselves, which would change their culture in the same way the Puritan culture was changed.
Also, without the American government to keep them peaceful, they would eventually divide into warring factions, just as Enlightenment-era Europeans did. And the most successful warring factions would be those that embraced modern weapons technology. We'd see a replay of warring pre-modern states.
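The "already unlikely" aside can be backed by quick arithmetic. Assuming (hypothetically) roughly 380,000 Amish today and a population doubling time of about 20 years, 400 years of unconstrained growth compounds far past the current world population:

```python
# Back-of-envelope extrapolation; both figures below are rough assumptions.
population_now = 380_000      # assumed current Amish population
doubling_time_years = 20      # assumed doubling time
years = 400

doublings = years // doubling_time_years            # 20 doublings
population_then = population_now * 2 ** doublings   # roughly 4 x 10^11 people
```

That comes to around 400 billion people, about fifty times the present world population, which is why the trend cannot continue unconstrained for 400 years.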
My day was going just fine until I read number 4. Something in my head tells me "this is not possible", probably the fact that I think cows are conscious in a meaningful way so we're talking spectrum not some kind of binary. But still, crazy to think about. Numbers 5 and 6 are shocking too. Good one, Robin.
You say: "In the last thousand years, we have not much increased the rate at which we can grow the human population, but we have greatly increased the rate at which we can grow wealth. As a result, we’ve seen increasing wealth per person. But we should expect this situation to end eventually, with a return to subsistence wages, once we find better techs for growing population faster." And I realize the idea of "population increase" is important to your way of thinking about how the future will unfold and which groups or cultures will "dominate."

I am hoping, though, that it will not be so. Living at "subsistence wages" does not benefit the individuals so involved, however great their numbers may be. Not having a gigantic population would, since there would be more wealth to go around. If enough people could realize that, the pressure to have as much population as possible should diminish to the near vanishing point, or be concentrated in so few individuals that control of them by others not so obsessed should be feasible.

Another factor that I see being significant is radical life extension. That, coupled with radical abundance from advancing technology, should result in new social driving forces. These should steer us away from a kind of future where "advances" are measured, like I think Malthus predicted long ago, in how many poor people you have rather than how well off the average person is.

Looking a little beyond that, I don't think that "endless advances" in "living standards" are wanted either. You want to have an indefinite, joyful and meaningful existence for everybody, with arguably a slow, not rapid, growth in the number of people involved. Maybe we could live in something like em-cities, especially on orbiting platforms in space where we wouldn't have to compete with the terrestrial ecosphere.
Maybe it would then be relatively easy to secure a very long (and happy) existence for, at least by today's standards, quite a few people, without straining the solar system's resources too much.
You seem focused on what you want to happen, I'm just using standard econ to predict what will happen.
The question for the futurist is always how far can you extrapolate a trend before you hit a point where something qualitatively changes.
Regarding a return to subsistence wages, don't you think we'd get a revolution before then? At some point the average person will correctly perceive it's in their best interest to storm the Bastille. Historically when that happens the mob murders everyone in charge and society gets a reset.
Storming the Bastille would make wages less, not more.
Do the people doing the storming believe that? That's what counts. Mobs with pitchforks murder first and ask questions later. Any reading of human history will come to that conclusion.
This is especially so if the drop to subsistence wages happens quickly. The typical person doesn't benefit much from returns to capital. If you have too many people on the outside looking in, it seems rather obvious to me they won't take that lying down. It probably wouldn't come to pitchforks but rather the election of a populist leader eager to do their bidding.
OK, but then Storming the Bastille events don't contradict my predictions. I never claimed this would all play out peacefully.
When a model predicts an outcome and you have reason to think the reaction forces against that outcome will be powerful – that becomes a hint that the model as-is is being extrapolated too far.
I'm a physicist, so my analogy would be electric fields inside a conductor. The proof that electric fields are (close to) zero inside a conductor is simply that if they aren't, the conduction electrons will redistribute in such a way as to reduce the electric field, so in equilibrium the electric field is driven to zero. "Subsistence wages for most humans" is tantamount to a strong electric field in a conductor: a situation that cannot persist. I assume that if you, Robin Hanson, were driven to a subsistence existence, you would be building Molotov cocktails along with the rest of us.
EDIT: To be clear I don't disagree with your model predictions; they seem well-grounded. But to my mind what your model is predicting is "TBD political instability and/or economic reconfiguration", not subsistence wages.
Human: Please don't let your predator robot eat me!
Robin: But the main difference between a predator and a parasite is the speed at which it kills you.
Human: Please get your hungry predator robot out of here!
Some people are better at using AI compared to others. Some people are better at dealing with the future than others. The acceleration isn't uniform.
Without question "most issues that people have with AI are actually issues they have with the future, even without AI" ; we have to talk about destinations and ways of navigating much more... and we're still creating meaningful infrastructure to do so. (JOPRO Futures Center is in the works)
I’m interested in hearing more about how you see AI driving human wages to subsistence levels, because I’m not seeing the connection yet. I see that LLMs are really useful now, but there are things like flexible, real-time object recognition that they just can’t do and that humans still need to do (e.g. evaluating the performance of a novel robot prototype). Maybe if we get human population growth debottlenecked. But also, from my understanding of the state of brain emulation research, ems are really far out. We haven’t been able to capture the control complexity of even simple worms, because biological neurons are so complicated.
I made no claim above about when AI would reach human level, or when ems will be feasible.
Fair enough
The fear many people have, including myself, is that these issues will become relevant faster than society can adapt. Society will adapt one way or another, so perhaps I should clarify: adaptation where the outcome seems acceptable to most people. A future where I'm forced to live on AI subsistence wages is one I'd go a long way to prevent. But as far as I know, there is still no one selling insurance against job disruption from AI, while reputable forecasters are saying AI could replace most white-collar workers in 5-20 years.
Yes, as I said, the main difference with AI is that it might make some things happen sooner, that were going to happen anyway even without AI.
In the last century, horses lost their transport jobs, resulting in a vast reduction in their population. Horses can act purposefully.
Moving on here.