One strategy to decide what to believe about X is to add up all the pro and con arguments that one is aware of regarding X, weighing each by its internal strength. Yes, it might not be obvious how to judge and combine arguments. But this strategy has a bigger problem: it risks a selection bias. What if the process that makes you aware of arguments has selected non-randomly from all the possible arguments?
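To make this worry concrete, here is a minimal sketch (my own toy numbers, not anything from the post) in which the underlying pool of arguments about X is balanced, but pro-X arguments are much more likely to come to our attention; naively summing only the arguments we notice then skews belief toward X.

```python
import random

random.seed(0)

# Each argument is (direction, strength): +1 favors X, -1 opposes X.
# The full pool is balanced by construction.
true_pool = ([(+1, random.random()) for _ in range(500)]
             + [(-1, random.random()) for _ in range(500)])

def naive_belief(pool, p_see_pro=0.9, p_see_con=0.3):
    """Sum signed strengths of only the arguments we happen to notice,
    where noticing is biased toward pro-X arguments."""
    seen = [(d, s) for (d, s) in pool
            if random.random() < (p_see_pro if d > 0 else p_see_con)]
    return sum(d * s for d, s in seen)

print("balance of the full pool:    ", round(sum(d * s for d, s in true_pool), 1))
print("balance of noticed arguments:", round(naive_belief(true_pool), 1))
```

The gap between the two numbers is the selection bias: the arguments we happen to see are a good guide only if we see them roughly in proportion to what exists.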
One solution is to focus on very simple arguments. You might be able to exhaustively consider all arguments below some threshold of simplicity. However, here you still have to worry that simple arguments tend to favor a particular side of X. For example, if the question is “Is there some complex technical solution to simple problem X”, it may not work well to exclude all complex technical solution proposals.
We often see situations where far more effort seems to go into finding, honing, and publicizing pro-X arguments, relative to anti-X arguments. In this case the key question is what processes induced those asymmetric efforts. For example, as the left tends to dominate the high end of academia, very academic policy arguments strongly favor left policies. So the question is: what process induced such people to become left?
If new academics started out equally distributed between left and right, and then searched among academic arguments, becoming more left only as they discovered mainly left arguments in that space, then we wouldn’t have so much of a selection bias to worry about. However, if the initial distribution of academics leans heavily left for non-argument reasons, then there could be a big selection bias among very academic arguments, even if perhaps not among the arguments that induced people to become academics in the first place.
Often there are claims X where not only does most everyone support X, most everyone is also eager to repeat arguments favoring X, to identify and repudiate any who oppose X, and to ridicule their supporting arguments. In these cases, there is far less energy and effort available to find, hone, and express anti-X claims. For example, consider topics related to racism, sexism, pedophilia, inequality, IQ, genes, or the value of school and medicine. In these cases we should expect strong selection biases favoring X, and thus for weight-of-argument purposes we should adjust our opinions to less favor these X.
However, sometimes there are contrarian claims X where far more effort goes into finding, honing, and expressing arguments supporting X. Consider the claims of 9/11 truthers, for example. Here we should expect a bias against X among the simple arguments that most people would use to justify their dismissing X, but a bias favoring X among the more complex arguments that 9/11 truthers would find when studying the many details close to the issue.
What if a topic is local, of interest only to your immediate associates? In this case you should expect the arguments you hear to be biased toward the views of those most motivated to make others believe X, and of those who are just generally better at finding, honing, and expressing arguments. Since your associates can see who is pushing each argument, they can discount accordingly. Thus being known to be good at arguing should generally make one less effective at persuading associates.
In larger social worlds, however, where arguments can pass through many intermediaries, it won’t work as well to discount arguments by the abilities of their sources. In that case one will have to discount arguments based on overall features of the communities who favor and oppose X. Here those who are especially good at arguing will be especially tempted to join such discussions, as their audience is less able to apply personal discounts regarding their arguing abilities.
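As a rough illustration of such community-level discounting (again my own toy framing, not the post’s), one could down-weight each argument by an estimate of how likely an argument of its kind was to reach us at all, so that over-supplied kinds of arguments count for less:

```python
def discounted_belief(seen_args, visibility, floor=0.05):
    """seen_args: list of (direction, strength, community) tuples for the
    arguments we actually encountered (+1 favors X, -1 opposes X).
    visibility: our estimate, per (community, direction), of how likely such
    an argument was to be found, honed, and pushed our way.
    Dividing by that estimate counteracts the over-supply of well-promoted
    arguments (a crude inverse-probability weighting)."""
    total = 0.0
    for direction, strength, community in seen_args:
        p_reach = max(visibility.get((community, direction), 0.5), floor)
        total += direction * strength / p_reach
    return total

# Toy example: very academic policy arguments, where we suspect pro-left
# arguments are far more likely to be produced and publicized.
args = [(+1, 0.8, "academic"), (+1, 0.6, "academic"), (-1, 0.7, "academic")]
visibility = {("academic", +1): 0.9, ("academic", -1): 0.3}
print(round(discounted_belief(args, visibility), 2))
```

Naively summing the same three arguments would favor X (0.8 + 0.6 − 0.7 > 0); the discount flips the sign because the lone anti-X argument had to survive a much stronger filter to reach us.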
In all of these cases, we would ideally adjust how much we discount the arguments we hear continuously, using the many parameters by which we estimate context-dependent selection biases. But we may sometimes feel constrained in our ability to make such adjustments. Our lower-level mental processes may just weigh up the arguments they hear without applying enough discounts.
In which case we might just want to limit our exposure to the sources that we expect to be unusually subject to favorable selection biases. This may sometimes justify common practices of sticking one’s head in the sand, and fingers in one’s ears, regarding suspect sources. And we might also reasonably show a “perverse” forbidden-fruit fascination with hearing arguments that favor forbidden views.
> For example, if the question is “Is there some complex technical solution to simple problem X”, it may not work well to exclude all complex technical solution proposals.

This is contradictory. If a simple problem X could have a complex technical solution, then it would not be a simple problem after all.
And if you cannot, through simple reasoning, come to the strong, internally consistent conclusion that a complex technical solution could not be applicable or superior, you should upgrade X to "complex and technical". You should rationally decide that you do not understand X, and therefore should not believe anything about it.
> we might just want to limit our exposure to the sources that we expect to be unusually subject to favorable selection biases
It seems to me the meta-danger of such an approach is that we might develop a selection bias toward sources.