A common immature attitude to desire is to assume that you are entitled to what you passionately desire, and then to cry if you don't get it or to pressure others to give it to you.
Regarding the second attitude, expressed indifference is often just a defense mechanism against the intuition that our cherished values lack adaptive fitness.
I doubt that it's true apathy.
Is the professional field of Cultural Preservation an institutionalized expression of this attitude?
This is startlingly aligned with this recent piece on John Rawls: https://open.substack.com/pub/persuasion1/p/the-unexpected-persistence-of-john?utm_source=share&utm_medium=android&r=cd06p
Tl;dr: particular moral beliefs are somewhat undecidable, but his answer to first-level meta-morality - how to get along with people who don't share your entire object-level morality - is simple and hard to argue against.
I couldn't comment over there (I'm not a paid subscriber), but his opening paragraph seems wrong. He provided no actual data (I would link to some tweets by Phil Magness on citations, but since Magness blocked me recently the Twitter search function doesn't include him); he merely offered his own impressions of his own field, which is hardly the same as academia as a whole. Tyler Harper just made a big splash with his article on the Mellon Foundation dominating grants for the humanities, and that perspective is hardly limited to Mellon. DEI got extended into hiring for STEM, and Heath's fellow political philosophers preferring Rawls doesn't contradict that at all. Even pointing out that the commonly heard variants of CRT don't match what originated in law schools doesn't contradict the fact that these heretical versions (https://zermatist.medium.com/on-pretentious-rhetoric-bf034a25bd41) have spread to administrators, who can impose that ideology on other parts of the university.
That seems prima facie reasonable, but it minimizes the very real cases when 'the best feasible options' are just not good enough for you, and you'd rather be indifferent or see it all burn down. Culture is a good example: if our replacer is sufficiently distant from us, why care about them at all? I don't see this as an issue of maturity. There are sometimes choices between two unacceptable options, e.g., would you rather be tortured with this implement or with this other one?
I believe that the right kind of AGI deployment is our best chance of fostering the kind of culture we want -- because it is rational.
"All of an AGI’s advantages culminate in its ability to learn to ... dependably discover objective knowledge or truths. This will be of tremendous help not only to create material abundance for everyone, but also to improve our individual ability to think more clearly. A dedicated personal AGI (you own it, and it serves your purpose, not some megacorporation’s) can be like a little angel on your shoulder to help you make better choices and to flourish."
https://petervoss.substack.com/p/agi-intelligence-is-different-in
If we see a problem that afflicts us, and will also afflict future AI, why not start working on it now, instead of waiting for AI to fix it?
Personally, I can do a *lot* more by focusing on AGI. I believe that with our approach we're only low single-digit years away from AGI.
I'm more inclined to believe that with all the focus on 'alignment' we are more likely to align with AIs that are aligned with our group, only to find the looming spectre of Dunbar's Number. Those of us who cannot survive that relative amount of anonymity -- well, we're outside the bonds of the plurality of humanity. This only puts digital steel around our bubbles, for which we will be called to perform, because it remembers everything.
All this is to say that perhaps we're making too much of culture itself. We need it too much because we've submitted our moral priorities to 'the world', and believe ourselves to be greatly capable of multicultural respect and flexibility.