Most people assimilate the values of their local culture, some of which they can express abstractly, and mix that with their genetic personality, individual experiences, and maybe also some thoughts, to produce the values they use to make decisions over their lifetimes.
Such values usually have enough detail to adequately address the actual world that people will typically face, even though they are quite vaguely specified relative to the set of all possible worlds one could face. That is, given a random choice from the space of all possible choices, most people would hardly know what to do, and if forced to choose would mostly react rather randomly to random surface features of the situation. We were not evolved or socialized to live in random worlds.
This is a problem for philosophers and futurists who make it their job to think about worlds very different from their own. How can we figure out what we want (or equivalently, “see as good”) in such strange contexts? To my eyes, most make relatively naive and shallow projections from their current culture-specific values onto such strange worlds, projections that don’t account well for the historical contingencies that led their local cultures to have the values that they have.
Can we do better? Here are some of the sources we can draw on to estimate our values:
Actual Actions. We can look at our past actions, and presume that in the future we’d want to do something like what we did in the past.
Action Intuitions. We can imagine various possible choice situations, and see which choices our intuitions favor then.
Outcome Intuitions. We can imagine various possible choice outcomes, and see which of those outcomes our intuitions like.
Pattern Intuitions. We often notice simple patterns in the choices or outcomes we like, and ask our intuitions if we approve of those patterns.
Associates’ Data. We can include the actions and intuitions of our associates, not just ourselves, as part of the data we use.
Interpolation. We can use a simplicity prior to interpolate from actual cases of acts or intuitions to other cases.
Curve Fitting. We can use a simplicity prior to fit an all-case “curve” that doesn’t exactly match our actual acts and intuitions (see the sketch after this list).
Principles. Theorists often work to find less obvious patterns which roughly fit actions or intuitions, and propose “principles” to account for them.
Causal Origins. Our theories of the causal processes that made us can suggest deeper simpler “curves” to fit to our acts/intuitions.
Others’ Synthesis. We can look to how others synthesize all these sources into their best guess values, and try to do like they do.
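As a loose analogy for the difference between interpolation and curve fitting, consider fitting noisy numerical data: an exact interpolant honors every past data point, while a simpler fit treats some deviations as errors, and the two can extrapolate very differently to “strange” cases far from past experience. The sketch below uses made-up numbers purely for illustration; nothing about the specific values is meant literally.

```python
# A loose numerical analogy: "interpolation" vs. "curve fitting" over noisy data.
# The exact interpolant honors every data point; the simpler fit treats some
# deviations as errors to be smoothed away. All numbers here are hypothetical.
import numpy as np

# Hypothetical data: 7 past choice situations (x) and intuition strengths (y).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.3, 4.9, 6.2])  # roughly linear, with noise

exact = np.polyfit(x, y, deg=len(x) - 1)  # degree-6 polynomial: fits every point
simple = np.polyfit(x, y, deg=1)          # straight line: a "simplicity prior"

x_new = 10.0  # a "strange" situation well outside past experience
print("exact interpolant predicts:", np.polyval(exact, x_new))
print("simple fit predicts:       ", np.polyval(simple, x_new))
# The exact fit typically extrapolates wildly; the simple fit stays near the trend.
```

The choice of a degree-one fit here stands in for any simplicity prior; the point is only that honoring every past data point and smoothing some away as errors lead to quite different answers far from the data.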
Most people seem to endorse some of their pattern intuitions, even when those are at odds with specific acts and intuitions. They thus endorse at least some error theory, whereby undesired errors have influenced their specific past actions and intuitions. Such error theories justify not only rejecting specific intuitions in favor of curve-fitting alternatives, but also favoring deeper “curves” that account more for our causal origins.
The key hard question here is this: which aspects of the causal influences that led to you do you now embrace, and which do you instead reject as “random” errors that you want to cut out? Consider two extremes.
At one extreme, one could endorse absolutely every random element that contributed to any prior choice or intuition. When there is any possible way to interpret apparent inconsistencies as actually consistent according to some complex tortured theory, you embrace such a theory. You could also embrace any random framing or context that led to any particular choice. Sure, you admit that you might randomly have made a different choice, if that context had been different, but now that you’ve made that choice, you become the new you who values it, and you intend to keep future choices consistent with all random past choices. Or maybe just embrace all future random choices, regardless of consistency with past choices.
At the other extreme, you might see yourself as primarily the result of natural selection, both of genes and of memes, and see your core non-random value as that of doing the best you can to continue to “win” at that game. You can see the many contingent choices made so far in constructing you, and the choices that you have made during your life, as all being random best guesses about how to win at that game, guesses that you are willing to reject as errors if you learn that they no longer seem so effective at winning. In this view, everything about you that won’t help your descendants be selected in the long run is a random error that you want to detect and reject.
In between these two extremes, you embrace as part of the “core” you some aspects that were once evolution’s best guess at winning the natural selection game, even if they later came to seem less plausibly winning strategies. Yet you reject other random influences on you as errors that get in the way of that “core” you trying to achieve its core ends. But how exactly can you choose where to draw this line, between historical influences that you embrace, and those you reject?
As I discussed above, the usual approach is to just assimilate the values of your culture without reflecting much on what line that implies. If the other people around you act in a certain way, you can feel pretty safe assuming that you won’t be overly criticized for acting in similar ways. But the more that you try to extrapolate such values to strange possible situations, the less sure you can be that others would extrapolate in the same way. Yes, you could just join some particular community that specializes in such topics, and conform to the recent choices of its most prestigious members. But while this is a reasonable recipe for social acceptance, it seems an especially poor proxy for what I really value regarding strange future scenarios. (Like re AI risk.)
Without a satisfactory principle by which to pick an intermediate position, I’m tempted to go to one or the other extreme. And of these two extremes, embracing all random influences seems hard to swallow. Which pushes me to see my core more as trying to win the natural selection game. I can see why that might feel awkward, but what other principled options can you offer?
I feel like this whole discussion is a bit confused or at least ambiguous. I mean, are you suggesting there is some kind of normative fact you can discover about how you should extrapolate your values to very different futures?
Sure, the kinds of considerations you list are ones that I happen to find persuasive when I engage in reflection about my values. However, it doesn't seem like there is anything even slightly inconsistent about saying:
"Yes, my values are a totally contingent feature of my biology and the particular time and place I live but so what. Those are my values so I'll just apply them to very different futures w/o modification even though I know if I lived there I'd have different values"
Unless you think there is something like objective (mind-independent) moral facts, there isn't any truth to be discerned here...only a choice to be made.
It's no different than deciding whether you (all things considered) want to eat a piece of cake. Maybe thinking about the fact that in the same situation you'd advise a friend not to eat it moves you to decide you don't really want to eat it. But maybe it just doesn't, and neither response is any more correct than the other, since there is no external standard against which your judgements are being measured.
> "...Which pushes me to see my core more as trying to win the natural selection game."
Consider scenario A: your descendants evolved/were engineered to become small non-sapient mouse-like rodents, but these rodents were highly successful and managed to spread around the galaxy aboard spaceships of an alien race. The aliens like the rodents and will never let them go extinct for millions of years. You have no sapient descendants.
Now consider scenario B: your descendants are sapient, intelligent, but of much smaller number and biomass, and the aliens restrict your descendants to Earth, though we may suppose the descendants still don't go extinct for millions of years.
Which would you pick, A or B? The rodents in A are "winning the natural selection game" much better than the people in B. But I think you would prefer B over A. This demonstrates that you do not simply want to reproduce as much as possible, but you wish to reproduce certain *phenotypes* that you prefer over others, such as sapience.
If you grant that, it now becomes a question not simply of reproducing as much as possible, but of which phenotypes you prefer among your descendants. Is it merely sapience, or would you like your descendants to be good and high-value people in other ways as well, such as being honest or kind?