15 Comments

Does anyone know why Huck does what he does? I'm looking for examples of belief-driven characters in fiction, and of how belief drives a character to action. Are there any meaty essays, websites, or reference books on fictional characters and how their beliefs drive their actions?

Sorry to be slow responding! Robin, I agree that these discrepancies exist. But aren't they usually considered as cases of self-deception or cognitive dissonance? People still think they believe what they claim to. (Admittedly this gets into some slippery philosophical distinctions.) My e-zombies were meant to have Overcome Bias and to be free of self-deception, but still to act similarly to self-deceived humans.

Hal, consider people who vigorously argue for their claims but tend to find excuses not to bet on those claims, or act on them in other ways. To some extent, such people already are the "zombies" you find it hard to imagine.

It's hard to imagine a society of beings who don't believe in the theories and positions they argue for. I can't help thinking that, pending a vast realignment of human psychology, belief is a necessary ingredient for vigorous pursuit and advocacy of ideas.

One could imagine what we might call "epistemological zombies", beings who act just like people who have strong beliefs, but who don't actually hold those beliefs at all. (They would mostly all privately believe what seems to be the consensus belief.) These e-zombies would continue to theorize, argue, debate, look for evidence favoring their positions, but they would not actually believe that what they are arguing for was likely to be true. Their motivation in doing all this would presumably be the same things that seem to motivate humans to adopt diverse beliefs: evolutionary competition, sexual selection, etc. Champion an outlandish but ultimately successful idea and you receive great rewards. You don't have to actually believe, in order to do all this.

Now I don't think Robin means to go this far. My reading is that he envisions beings who are sort of halfway between ordinary humans and e-zombies. These beings would not claim to believe what they don't, but they would still act similarly to humans who are motivated by their belief in diverse and unusual ideas. For example, they would still pursue evidence for certain theories despite disbelieving in them, as vigorously as today's humans do who are motivated by belief. Again their motivation would be the rewards of finding new and surprising evidence for out-of-favor ideas.

Doesn't the fact that people have evolved with belief highly correlated with their actions suggest that this semi-e-zombie behavior is not consistent with human cognitive limitations? Otherwise, why don't people behave this way already?

First, a general point: random search should generally be less effective than belief-driven search, assuming that the group of searchers possesses more than zero information regarding what they are searching for. Beliefs are almost never formed without at least some supporting evidence. Thus, a search space defined by searcher-belief will generally be smaller than one not so constrained, and should usually contain the answer being sought (or at least some approximation thereof).
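
To make that concrete, here is a minimal sketch (the sizes are invented, and `expected_draws` is a hypothetical helper, not anything from the thread): if the searchers' beliefs narrow attention to a 50-candidate subset that still contains the answer, the expected number of guesses drops roughly in proportion to the subset's size.

```python
import random

def expected_draws(space_size, trials=2_000, seed=0):
    """Average number of uniform random guesses (with replacement)
    needed to hit a single target hidden in a space of the given size."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        target = rng.randrange(space_size)
        draws = 1
        while rng.randrange(space_size) != target:
            draws += 1
        total += draws
    return total / trials

# Unconstrained search over 1,000 candidates versus a belief-guided
# search over the 50 candidates the searchers' evidence points to
# (assuming, as above, that the answer is actually among them).
print(expected_draws(1000))  # roughly 1000 guesses on average
print(expected_draws(50))    # roughly 50 guesses on average
```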

Robin: in theory, I agree with you. The social rewards "need not" depend on beliefs; we could, for instance, make a social choice to set up a large array of X-Prizes for the falsification of certain commonly believed propositions of fact. Looking at the world we actually inhabit, however, social rewards (and especially self-satisfaction) do seem to be closely connected with the falsifier's belief state. Few people cruise around the internet critiquing others' assertions at random; rather, they seek out those with whom they disagree. Even academia, where the rewards of falsification should be highest, devotes depressingly little attention to systematic devil's advocacy. Rather, academics, like everyone else, mostly attack the statements of those with whom they disagree.

Until it becomes apparent that the social rewards of finding holes in analyses are, in fact, large enough to generate a substantial amount of argumentation, even in cases where there is no disagreement, I think it will be hard to make the case that diversity of belief does not benefit us. Maybe it would be possible to create a world in which that was no longer the case, but that only shows that diversity of belief isn't necessarily valuable in all possible worlds; it does not show that it isn't valuable in the world we actually inhabit.

In other words, "need not depend" != "do not depend."

Belief affects the allocation of scarce resources during problem solving. If there is no cost, you can try every solution method; if you can afford only one or two approaches, belief is critical.

During a course in complex analysis, my prof taught from a book he was writing, testing his problem sets on his students. During the semester, several of the statements we were given to prove were incorrect, and I found counterexamples in two such cases. During the final exam I thought I had found another counterexample on an exam question; in that case, I was mistaken. Given the time constraints, my beliefs strongly affected my likelihood of solving the problems. It was important to intuit the correct initial belief.

The professor, using the class to proofread his new text, needed both students who believed the statements were true and students who believed they were false. Diversity of beliefs helped him.
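
Here is a toy expected-value sketch of that exam scenario (all probabilities invented, and `expected_solved` is a hypothetical helper): with time for only one attempt per problem, the student's belief dictates whether to attempt a proof or hunt for a counterexample, and a belief matching reality resolves well over twice as many problems in this example.

```python
def expected_solved(p_statement_true, attempt_proof, p_skill=0.8):
    """Chance of resolving one problem in a single attempt.
    p_statement_true: fraction of assigned statements that really are true.
    attempt_proof: the strategy the student's belief dictates.
    p_skill: chance the attempt succeeds when it targets the right side.
    A proof can only succeed on a true statement, a counterexample
    only on a false one, so a mismatched belief wastes the attempt."""
    p_right_side = p_statement_true if attempt_proof else 1 - p_statement_true
    return p_right_side * p_skill

# Suppose 70% of the problem-set statements are actually true.
# A student who correctly believes this attempts proofs; a student
# convinced the draft is riddled with errors hunts counterexamples.
print(expected_solved(0.7, attempt_proof=True))   # 0.56 of problems resolved
print(expected_solved(0.7, attempt_proof=False))  # 0.24 of problems resolved
```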

nerdbound, a vast amount of empirical game theory work suggests that people can indeed choose randomly among large sets of possibilities.

Mark, the social rewards from finding holes in analyses need not depend on whether you believe the analyses' conclusions are wrong.

Robin: I think you are missing something here. Argument is a knowledge-enhancing social practice. Having to defend our true beliefs when they are attacked causes us to aggregate further evidence for them, and examine how justifiable they are based on our evidence. Using devil's advocacy to serve that function works only up to a point, because a devil's advocate isn't as motivated as a true believer to find weaknesses in our arguments.

To be sure, if we happen to know (using our patented True Belief Machine) that a belief is correct, it is better for everyone to hold it. But in the absence of a TBM, we need some diversity of belief, and indeed, some argument in favor of false beliefs, in order to ensure that dominant beliefs are subject to continuing scrutiny and (potential) revision.

For more discussion on this point, see my paper here, particularly Part III.

"Our beliefs are the conclusions we form from our analyzes about the world (including ourselves)."

How would one share the product of info and analysis, other than in the form of conclusions (i.e. "beliefs")?

The tree of increasing probability supports branches of increasing possibility.

[Not being mystical, just short of time.]

Robin, this seems abstract to the point where I'm not really sure what you actually want. What do you plan to do about it in practice? How do you propose the pool of beliefs should be shrunk? How would a shrunken pool of beliefs be enforced? Who decides which beliefs remain and which have to go?

And lastly, remember that nobody is stopping you from trying to annihilate any belief you want through persuasive argument.

When people are trying to outargue others, they look closely at their own positions in order to best defend them.

When everyone agrees, people do not feel the need to closely examine their positions, and turning a critical eye upon them is likely to earn the ire of our peers. As a consequence, those are the conditions under which flaws in those positions are most likely to go unnoticed.

The solution is not to promote a diversity of thought in itself. The solution is to promote critical thinking as a valuable thing in itself, not just as a rhetorical weapon for beating down our enemies.

I'm not sure that this is right. I'm thinking of Feyerabend's position in the philosophy of science. While his position is pretty radical, some of his thoughts seem to be on the right track. In particular, he argues that a diversity of beliefs leads to progress in science.

A prominent example, which I hope I'm not getting wrong, is that of Hans Christian Ørsted, who discovered electromagnetism. His interest in connections between electricity and magnetism was inspired by a number of false beliefs about the 'unity of nature', which (as I understand it) he treated in a rather mystical way. Per the Wikipedia page, it was his conversations about possible connections between electricity and magnetism, along with his belief in the unity of nature, that led him to become a physics professor, which in turn led him to make the discovery. Thus, a scientific discovery was made earlier because someone believed the wrong things and so found a weak, untested hypothesis exciting.

Regardless of the truth of the example, the general point is that false beliefs can lead someone to care about some issue more than, on the evidence, the issue deserves to be cared about. Science proceeds in roughly two stages: hypothesis generation and hypothesis testing. While the latter is perhaps a process we would like to standardize, the former occasionally rewards bizarre beliefs and creativity. It may be easier to come up with a bizarre new belief if it's tangentially related to something you read in Aristotle (or Husserl or Marx or...). Thus, if we're researching sociology, a community with no Marxists seems to be at a disadvantage, even though Marx was wrong about many things.

'The best strategy is to choose randomly among the possibilities'? That implies that (a) all possible hypotheses can be listed, and (b) we can choose randomly. Both implications are surely false. The most basic sociology of science should be enough to tell us that there are usually only a few hypotheses being considered at a time, and that almost everyone is on the bandwagon of the dominant hypothesis. Newtonian physicists before Einstein were not considering Einstein's theory and rejecting it; they had never considered it until Einstein thought it up. And even ideal rational agents (who perhaps could choose randomly from a list) cannot choose uniformly at random over a list of infinite size (and surely the set of all possible scientific hypotheses to explain a phenomenon is infinite?). We need to generate good hypotheses to choose among, and that may demand differences in belief. Even beliefs that are totally wrong may be good metaphors that help people grasp something else in a new, unexpected way.
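
One way to make the bandwagon point concrete (a made-up one-dimensional landscape; `quality` and `hill_climb` are invented for illustration): model inquiry as greedy local search over hypotheses. Searchers who all start at the dominant hypothesis climb the same local peak, while searchers with diverse starting beliefs reach other, often better, peaks.

```python
import math
import random

def quality(h):
    """A rugged, made-up 'truthlikeness' landscape over hypotheses 0..99."""
    return math.sin(h / 5.0) + 0.02 * h

def hill_climb(start, steps=200):
    """Greedy local search: move to a neighboring hypothesis only if it
    looks better, so the searcher gets stuck on the nearest local peak."""
    h = start
    for _ in range(steps):
        n = max(0, min(99, h + random.choice([-1, 1])))
        if quality(n) > quality(h):
            h = n
    return h

random.seed(1)
# Bandwagon: twenty searchers all start at the dominant hypothesis.
bandwagon = {hill_climb(10) for _ in range(20)}
# Diverse community: starting beliefs spread across the whole space.
diverse = {hill_climb(random.randrange(100)) for _ in range(20)}
print(max(quality(h) for h in bandwagon))  # all stuck near one local peak
print(max(quality(h) for h in diverse))    # typically finds a higher peak
```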

This can only ever be a temporary observation, though, Robin. Eventually we will share our observations and evidence, and we all know where that leads in terms of diverse beliefs. If a single piece of evidence can lead different people to different posteriors, well, that's a human problem, not a probability problem.

Surely the important distinction is that a diversity of *weakly held* beliefs is valuable? The more beliefs we have, the greater the chance that some are useful; the downside is that some will be positively harmful, but if we only believe them weakly, good ones should outcompete bad ones.
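
A quick simulation of that intuition (all parameters invented; `share_of_best` is a hypothetical helper): seed a population with beliefs of varying usefulness and let agents imitate better-paying peers at a rate set by how weakly beliefs are held. Weak attachment lets the useful belief take over; strong attachment lets harmful ones linger.

```python
import random

def share_of_best(switch_rate, rounds=100, n=200, seed=0):
    """Fraction of agents holding the most useful belief after simple
    imitation dynamics. Each belief has a fixed payoff; each round every
    agent meets a random peer and, with probability switch_rate (how
    weakly beliefs are held), adopts the peer's belief if it pays better."""
    rng = random.Random(seed)
    payoffs = {'useful': 1.0, 'neutral': 0.0, 'harmful': -1.0}
    pop = [rng.choice(list(payoffs)) for _ in range(n)]
    for _ in range(rounds):
        for i in range(n):
            peer = pop[rng.randrange(n)]
            if payoffs[peer] > payoffs[pop[i]] and rng.random() < switch_rate:
                pop[i] = peer
    return pop.count('useful') / n

print(share_of_best(0.5))   # weakly held: the useful belief dominates
print(share_of_best(0.01))  # strongly held: harmful beliefs persist
```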
