Philosophy triumphs easily over past evils and future evils; but present evils triumph over it. – François de La Rochefoucauld
Katja Grace’s translation: We nobly analyse distant things, and in the present do whatever the hell we want.
If you stopped an articulate person who had just passed a homeless bum and asked why she did not help, she'd probably explain that this isn't a simple question. She might mention ethical complexities, but she'd probably focus on the complex social context. Is the bum mentally ill, sick, stupid, lazy, or faking? Does he have family who should help first? Did he arrive in this area recently? Who is best placed to know what he needs?
At my Georgetown lecture last night on our robot future, the smart econ students focused their questions almost entirely on ethics. They seemed to assume they understood enough about the social situation, and were obsessed with the ethical ways for humans to treat robots, robots to treat humans, and so on. I'll bet they'd be just as quick to condemn the ethics of Roman centurions, again figuring they understood enough about that social situation. But I think they'd need to learn a lot more about either of these worlds before they could begin to offer useful ethics advice.
Some of my young idealistic friends like to talk about figuring out what they could do to most help the world, and might go to Burma to see how the really poor live. I tell them one has to learn lots of details about a place to figure out how to improve it, and they’d do better to try this on a part of the world they understand better. But that doesn’t sound nearly as fun as saving the whole world all at once.
Humans overwhelmed by the social complexities of helping a bum nearby think they know enough about societies far away, so that ethics becomes the main concern there. I see the same thing in discussions of future biotech or nanotech: ethics becomes the main frame, even though we have only the faintest idea of how future societies might integrate those technologies. Beware the easy confidence of advising worlds far from your knowledge or consequence.
Added 29Oct: The obvious way to help poor folk far away without relying on your poor understanding of their world is to rely on the one thing you know best about their world: it is poor. Invite them to move to your rich world, to share in its riches. If your neighbors hinder you, use what you know about them to change that.
I've mentioned it before on this blog, but if you take the view that "morality" or ethics is just an attempt to rationalize genetically based group cooperation strategies, then you should not expect to be able to perform calculus with morality; it is not a logical construct, it is simply feelings. Any attempt to perform moral calculations (I should help this person rather than that other person) is doomed to fail if approached analytically. Go with your feelings: if you want to help A rather than B, then do so.
Eyal, Liberman, and Trope (2008) have a paper (pdf) applying near-far theory (aka construal level theory) to morality. The abstract:
We propose that people judge immoral acts as more offensive and moral acts as more virtuous when the acts are psychologically distant than near. This is because people construe more distant situations in terms of moral principles, rather than attenuating situation-specific considerations. Results of four studies support these predictions. Study 1 shows that more temporally distant transgressions (e.g., eating one’s dead dog) are construed in terms of moral principles rather than contextual information. Studies 2 and 3 further show that morally offensive actions are judged more severely when imagined from a more distant temporal (Study 2) or social (Study 3) perspective. Finally, Study 4 shows that moral acts (e.g., adopting a disabled child) are judged more positively from temporal distance. The findings suggest that people more readily apply their moral principles to distant rather than proximal behaviors.