I feel like this whole discussion is a bit confused, or at least ambiguous. I mean, are you suggesting there is some kind of normative fact you can discover about how you should extrapolate your values to very different futures?

Sure, the kind of considerations you list are ones that I happen to find persuasive when I engage in reflection about my values. However, it doesn't seem like there is anything even slightly inconsistent about saying:

"Yes, my values are a totally contingent feature of my biology and the particular time and place I live but so what. Those are my values so I'll just apply them to very different futures w/o modification even though I know if I lived there I'd have different values"

Unless you think there are objective (mind-independent) moral facts of some kind, there isn't any truth to be discerned here...only a choice to be made.

It's no different than deciding whether you (all things considered) want to eat a piece of cake. Maybe thinking about the fact that in the same situation you'd advise a friend not to eat it moves you to decide you don't really want to eat it. But maybe it just doesn't, and neither response is any more correct than the other, since there is no external standard against which your judgements are being measured.


> "...Which pushes me to see my core more as trying to win the natural selection game."

Consider scenario A: your descendants evolved/were engineered to become small non-sapient mouse-like rodents, but these rodents were highly successful and managed to spread around the galaxy aboard the spaceships of an alien race. The aliens like the rodents and won't let them go extinct for millions of years. You have no sapient descendants.

Now consider scenario B: your descendants are sapient, intelligent, but of much smaller number and biomass, and the aliens restrict your descendants to Earth, though we may suppose the descendants still don't go extinct for millions of years.

Which would you pick, A or B? The rodents in A are "winning the natural selection game" much better than the people in B. But I think you would prefer B over A. This demonstrates that you do not simply want to reproduce as much as possible, but you wish to reproduce certain *phenotypes* that you prefer over others, such as sapience.

If you grant that, it now becomes a question not simply of reproducing as much as possible, but of which phenotypes you prefer among your descendants. Is it merely sapience, or would you like your descendants to be good and high-value people in other ways as well, such as being honest or kind?

May 19, 2023·edited May 19, 2023

We start with a disorganized set of beliefs and values, granted to us by nature and random circumstance, many of them irrational, unfounded, and self-contradictory. Then we try to organize these beliefs and values so that:

1. The beliefs and values are justified by more fundamental or basic beliefs and values, in a directed acyclic graph.

2. The beliefs and values are not in contradiction with themselves. In every hypothetical or actual scenario we consider, our organized beliefs and values will yield a single consistent judgment.

3. The most fundamental beliefs and values - the roots of the directed acyclic graph - seem "self-evident," and are irreducible to simpler beliefs or values.

In order to organize our beliefs and values this way, we'll have to discard and revise a lot of them. The theoretical endpoint of this process - which no human can actually reach - is the beliefs and values that we "ought to" hold, contingent on the beliefs and values that we initially held. These are the beliefs and values that we would persuade ourselves to hold if we thought about the issues for long enough.
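
To make that structure concrete, here's a minimal sketch of the two structural constraints; the example beliefs and the mechanical check are purely my own illustration, not anything from the post:

```python
# Minimal sketch of the organization described above: beliefs/values as nodes,
# with edges pointing from each one to the more basic ones that justify it.
# The example beliefs are invented purely for illustration.

from graphlib import TopologicalSorter, CycleError

justified_by = {
    "I shouldn't eat this cake": ["excess sugar harms my health",
                                  "harming my health is bad"],
    "excess sugar harms my health": [],   # root: empirical, taken as given
    "harming my health is bad": [],       # root: seems "self-evident"
}

def is_acyclic(graph):
    """Constraint 1: justification forms a directed acyclic graph."""
    try:
        TopologicalSorter(graph).prepare()
        return True
    except CycleError:
        return False

def roots(graph):
    """Constraint 3: the most fundamental items have no further justification.
    (Constraint 2, scenario-by-scenario consistency, needs a semantic model
    of what the beliefs mean, so this structural sketch omits it.)"""
    return [b for b, basis in graph.items() if not basis]

print(is_acyclic(justified_by))  # True
print(roots(justified_by))       # the two "self-evident" roots
```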

You mention "predicting" what we would value in different situations by statistical curve-fitting. The problem with such an approach is that it's not persuasive. If I predict that in a certain situation I could be induced to insanity, this does not persuade me to adopt this insanity in the present. What *does* persuade me is noticing that some of my beliefs are in contradiction with each other, or noticing that some of them are insufficiently justified.

May 23, 2023·edited May 23, 2023

This seems to resolve the question ("which of my origins?") best in my view.

And thinking about the issues can (should!) be a very formal process of isolating premises, developing logical proofs, calculating probabilities, etc. It's computation, in short, and has an objectively correct answer (at least within the limits of computable systems, and based on subjectively-held values).
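
As a toy illustration of that "computation" framing (every premise and number below is an invented subjective input), probabilities can be chained through explicitly isolated premises:

```python
# Toy illustration of deliberation as computation; every premise and number
# below is an invented subjective input, not a claim about the real issue.

p_survives_reflection = 0.7    # premise: this value survives further reflection
p_applies_if_survives = 0.9    # premise: if it survives, it extends to far futures
p_applies_if_discarded = 0.1   # premise: if discarded, it rarely extends

# Law of total probability over whether the value survives reflection.
p_extends = (p_survives_reflection * p_applies_if_survives
             + (1 - p_survives_reflection) * p_applies_if_discarded)

print(f"P(value extends to far futures) = {p_extends:.2f}")  # 0.66
```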


The standard answer in moral philosophy is "reflective equilibrium", i.e. using your best judgment about how to revise plausible-seeming principles and/or intuitions about cases to bring them into mutual coherence.

I don't find it very appealing to defer to "causal origins": even granting that our values are shaped by natural selection, why would that give us any reason to value "winning the natural selection game", given that that does not well match the actual *content* of our values? I'm much more inclined to ignore origins completely and instead try to identify principles about what is (most plausibly) worth valuing. These judgments necessarily stem from our actual perspectives, but can go beyond them in various ways. For example, thinking about the badness of suffering in our own lives, and those we care about, can lead us to value the avoidance of suffering no matter who (or what) is experiencing it. And similarly, I think, for reflections about the good things in life.

author

I don't see how just "thinking about" the issue helps you choose which of your historically contingent causes to embrace and which to reject.


TBF this is usually embraced by philosophers who are some kind of realists about morals.


I'd push back a bit on that. Reflective equilibrium is something you can appeal to if you are some kind of moral realist (at least a kind who thinks moral claims reflect some external facet of reality) who also believes that intuitions offer some kind of evidence about moral facts (despite the access worries).

I don't think it makes as much sense on an anti-realist view (which I take Robin to be adopting here). It may be that some philosophers defend an anti-realist reflective equilibrium as determining the meaning of our moral talk in some way, but that's just a boring fact about linguistic meaning and doesn't have any normative force.

Indeed, as I'll argue below, I feel this question is kinda confused on an anti-realist understanding. I mean, you can interrogate your own meta-preferences, but you can't really get to any discoveries about how you should make the choice...it's just a process of internal deliberation, no different than asking yourself if you want to eat that piece of cake.

(tho I'm sure some philosopher will have advocated every possible position, so there's probably someone who's a moral anti-realist who accepts normative facts about your meta-ethical deliberations...if you can defend true contradictions...)


Reflective equilibrium doesn't take moral realism as a premise. It's an empirical idea about a person's state of mind. It is the same kind of idea as the notion of equilibrium in physics, or an attractor of a system of differential equations.

Your mind is in reflective equilibrium if further reflection on your state of mind does not bring up contradictions/inconsistencies that you want to resolve. You want to reach reflective equilibrium to the extent that you don't like having contradictions/inconsistencies in your beliefs and values.
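
A minimal sketch of that attractor picture, with an invented revision rule standing in for real reflection:

```python
# Sketch of reflective equilibrium as a fixed point of a revision process.
# The starting beliefs and the crude revision rule are invented stand-ins;
# a real reviser would weigh which side of each contradiction to keep.

def revise(beliefs):
    """One reflection step: drop any belief whose direct negation is also held."""
    contradicted = {b for b in beliefs if ("not " + b) in beliefs
                    or (b.startswith("not ") and b[4:] in beliefs)}
    return beliefs - contradicted

def reflective_equilibrium(beliefs):
    """Iterate until further reflection changes nothing: the attractor."""
    while True:
        revised = revise(beliefs)
        if revised == beliefs:
            return beliefs   # equilibrium: no inconsistencies left to resolve
        beliefs = revised

start = {"the cake is worth eating",
         "not the cake is worth eating",
         "kindness matters"}
print(reflective_equilibrium(start))  # {'kindness matters'}
```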


The fact that we sometimes reflect on our beliefs and modify them as a result isn't a claim anyone disputes. The reason it is raised in philosophical contexts is primarily to confer justification or resolve access worries (we are justified in accepting the results of reflective equilibrium).

I think this is backed up by the Stanford Encyclopedia of Philosophy entry here: https://plato.stanford.edu/entries/reflective-equilibrium/#DisNarWidRefEqu

Note that there is no empirical claim identified in the article; rather, it all focuses on whether this method provides some kind of evidentiary support, justification, or related epistemic concepts.

But that isn't really that relevant if there are no underlying moral facts to be right or wrong about.

May 20, 2023·edited May 20, 2023

The SEP article says clearly in the introduction: "Viewed most generally, a 'reflective equilibrium' is the end-point of a deliberative process in which we reflect on and revise our beliefs about an area of inquiry, moral or non-moral." This definition uses only empirical language about what happens in a mental process. This definition makes no reference to what is objectively right or wrong.

Skimming the rest of the SEP article, I'm having trouble finding anything that clearly presumes moral realism; they talk about coherence and agreement and beliefs, all empirical mental concepts without need for moral realism. "Evidentiary support" and "justification" should also not be interpreted as necessarily in reference to some final objective truth; justification is just a mental process of linking two ideas together.

The concept of reflective equilibrium is useful because, if you are a rational person, you are highly concerned with eliminating contradictions and inconsistencies in your thinking. Reflective equilibrium is simply your hypothetical state of mind after you have finished doing so. Because I try to be a rational person, I would like to hold whatever beliefs I would hold in reflective equilibrium. That is the relevance to me, without needing to mention anything about moral realism (yet).


Maybe an analogy would help:

If someone says: the solution to how we know whether to believe in God is that we go with our feelings.

The implication, and the interest of the claim, is that going with our feelings is an epistemically acceptable way to reach that conclusion.

Yes, there is also an empirical claim buried in there (in fact, that's how we do reach such decisions), but it's neither controversial nor the philosophically interesting claim. And the use of the term "solution" in the statement clearly indicates that what's being claimed goes beyond a mere description of how beliefs are in fact formed.

--

In other words, if the claim here is merely "ppl do decide what to believe by reflecting on them," the right response is "well, obviously, but why does that solve anything?"

May 20, 2023·edited May 20, 2023

Why start a second thread? Last time that happened there was an issue with finding things from the other thread.

No, the interesting part of "reflective equilibrium" is not, "ppl do decide what to believe by reflecting on them." People *never reach* reflective equilibrium about their moral views. Reflective equilibrium is about the *ideal endpoint* after an endless amount of hypothetical discussion and review of evidence. Nobody has the time or resources to do that on most issues. Thus, reflective equilibrium is not merely a description of what people already do.

Reflective equilibrium is to moral discussion, as "perfect play" is to chess. It is an important concept, but not often reached in practice. (In fact, "perfect play" in chess is a specific example of reflective equilibrium; perfect play is the move a chess master would arrive at if he had endless time and resources to reflect on the question and correct defects and inconsistencies in his prior reasoning.)
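
To illustrate, here's toy "perfect play" by exhaustive reflection in Nim rather than chess (chess is far too large to search, and everything here is just an illustration):

```python
# Toy "perfect play" via exhaustive search, in Nim rather than chess.
# Rules: players alternate taking 1-3 stones; whoever takes the last stone wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win with perfect play."""
    if stones == 0:
        return False  # the previous player just took the last stone
    # Winning iff some move leaves the opponent in a losing position.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

def perfect_move(stones):
    """The move that unlimited, error-free reflection settles on."""
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # losing position: no move helps against perfect play

print(wins(10), perfect_move(10))  # True 2 (take 2, leaving a multiple of 4)
```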


No one disputes the claim that we do sometimes reflect on things. The end of the first paragraph makes the issue clear: "Even though it is part of our everyday practice, is this approach to deliberating about what is right and finding justification for our views defensible?"

We all accept that this IS a part of our everyday practice. No one says people don't reflect on their beliefs. But why is that raised in philosophical discussions? And why would it be a 'solution' to the issues raised here?

For the point referenced in the quote: as a claim to some kind of justification!

If you think there is an interesting empirical claim here, then make it. If it's nothing but "sometimes we reflect on things," why should we care? Why would that solve anything, any more than the fact that we sometimes reevaluate our beliefs when someone pretty asks us to?


I feel like I have already answered all your questions.

- The notion of justification for moral views does not inherently presume moral realism. (I went into that already)

- My last paragraph in the preceding post is all about explaining why you should care about reflective equilibrium without needing to subscribe to moral realism.

You may find it easier to talk about reflective equilibrium of what you *want*, rather than reflective equilibrium of what "is moral," if you don't like the idea of labeling things as moral or non-moral. It amounts to the same thing, because what you want comprises your values, and includes what conditions you desire for the world at large. I would call a desire for the world at large to take on a certain condition, to be a moral position, but if you don't like that label you don't have to use it; it is enough to acknowledge you have such desires, and seek reflective equilibrium among them because you want to avoid inconsistencies in what you want.


Agree on the second option, but I think the level of selection you treat as fundamental makes a big difference - i.e. is the 'game' maximising your genetics, your memetics, or 'something else' that captures the knowledge in both of those (plus the knowledge elsewhere, in the institutions that allow me to exist)?

I would probably go with the latter and say that my governing principle is the growth of knowledge and complexity in the universe. Sounds a bit pompous, though.


Isn't this the opposite of the takeaway from your AI risk post, which seems to argue that it's overly parochial to be concerned about humans and our current values being replaced by potentially random and arbitrary AI values? I find it hard to see how you can self-consistently arrive at, "It's the right conclusion to prefer your own culture's/genome's/life history's values over all others within the scope of values humans might hold, but also the wrong conclusion to be opposed to a world where all past and present human values are overwritten by entities that do things to which we assign no value whatsoever."

To be clear, I think your conclusion in this post is mostly the right one.


Value is impersonal; it is The Good. There is also the good-for-me, the good-for-my-genetic-line, the good-for-my-culture, etc. My culture tends to inculcate precepts about how we had best act to maximize the good-for-my-culture; if my culture interacts little with other cultures, this will in practice be negligibly different from The Good. Natural selection inclines me to try to maximize the good-for-my-genetic-line (but natural selection is a crude workman--I often behave differently); nobody thinks this is The Good.
