The answer you give about Hofstadter's Law doesn't seem relevant to the question you're asking. I agree with your explanation that, if I expect a task to take 10 weeks but am given 15 weeks to do it, it will take me 15 weeks. But Hofstadter's Law says that, given a task that will take 15 weeks, people will estimate it at 10 weeks, even when they think they are being generous. These are logically distinct issues.

Given that this site is about bias, you could have mentioned superiority bias (thinking, "This would take the average person 15 weeks, but of course I'm better than average, so it will only take me 10"), beneffectance effects on memory, i.e. egocentric attribution errors ("In the past this took me 15 weeks, but the delay was due to circumstances beyond my control; when I have control, it will take 10"), or some variation of the availability heuristic (remembering the days when you powered through the task rather than the days when you were distracted by minutiae).

Personally, I find arrogant, self-inflated people to be jerks and honest, self-critical people to be better company. I don't think I'm alone in that. Neither my preference nor your preference counts as a scientific argument.

As for bias being evolutionarily advantageous (in the context of deception): maybe, but so is violence against out-groups. That doesn't mean it's morally good or socially desirable to have these biases. Your final point seems to be that lying is a more "fundamental" human behaviour than others. That sounds arbitrary, to say the least.


I don't understand how a bunch of neurons obeying the laws of physics (through chemistry) generates my consciousness, yet I'm also certain that my consciousness is intrinsically related to my brain and will extinguish with it.

People have been 'certain' of a great number of things throughout history, and many of them ended up being provably wrong. It is probably a good thing to be 'certain' of as few things as possible, and thereby be open to finding out new information that might modify or expand or even invalidate parts of one's worldview.

I like what Jerry Fodor says on this topic:

Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness. . .


The philosopher David Chalmers, a student of Hofstadter, was on Bloggingheads.tv last weekend. He mentioned that toasters have a certain amount of consciousness. I haven't smoked pot since college, so he lost me there.

I don't understand how a bunch of neurons obeying the laws of physics (through chemistry) generates my consciousness, yet I'm also certain that my consciousness is intrinsically related to my brain and will extinguish with it. But if you look at the brain as an evolutionary creation with all its deficiencies, you see it as an aggressor (predation), a deceiver (camouflage), and a persuader (seduction, selling). Getting the math right (unbiasedness) is merely one of several 'good' qualities in our brains.


It's not entirely meaningless, as it states that consciousnesses that we do know exist are composed of these systems of dynamic patterns, as opposed to something else, and it implies that any way of creating similar patterns (say, mechanical computation) would also create consciousness.


Not directly related to your argument, but beware the word "sufficiently" in arguments or definitions. "Consciousness happens spontaneously after a system of dynamic patterns is sufficiently complex" is meaningless, as you can just note that any non-conscious system is clearly not sufficiently complex.

My favorite example of this is the contrapositive to Clarke's Law: Any technology distinguishable from magic is insufficiently advanced.
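For what it's worth, this is plain contraposition; a sketch of the logical form (the predicate names are mine, not Clarke's):

```latex
% Clarke's Law: any sufficiently advanced technology is
% indistinguishable from magic.
\forall t:\ \mathrm{Advanced}(t) \implies \neg\,\mathrm{Distinguishable}(t)
% Contrapositive (logically equivalent): any technology
% distinguishable from magic is insufficiently advanced.
\forall t:\ \mathrm{Distinguishable}(t) \implies \neg\,\mathrm{Advanced}(t)
```

The joke works precisely because the contrapositive is guaranteed to be just as true as the original, yet "sufficiently" lets either version absorb any counterexample.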


Incompletely Understanding the Unknowable

I'm now glad that our search for understanding will never come to an end. Gödel's theorem ensured there would always be a job for mathematicians.


Part of the problem with all of this is the unexpected appearance of inevitable infinite regresses that must be artificially shut off. This was a problem first recognized by Luce and Raiffa, thinking about the problem of bounded rationality.

So, the problem is: in a world of imperfect information, how much effort should we spend gathering information about the problem we wish to solve? But, ah, there is the higher-level problem: how much time should we spend thinking about how long we should gather information? Which implies yet the next level up, and so forth without limit, in principle.

Well, for most of us it is obvious that one does not want to get too caught up in such loops, because one risks spending far too much time thinking about thinking about thinking about... if one climbs too high in the levels. So we tend to cut things off artificially at fairly low levels, using rules of thumb or other intuitive mechanisms. We have committees on committees, but I have never heard of a committee on committees on committees, although some of the planification exercises in the more navel-gazing central planning agencies began to resemble this.

But, getting back to the original problem: getting caught up in such escalating levels of analysis is more likely for someone who is in the depths of solving a problem (or completing a major project, or whatever). One runs into trouble, and then one finds oneself worrying about how long, or in what way, one should solve it, and then...
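The rule-of-thumb cutoff described above can be sketched numerically. This is only an illustration with made-up numbers: assume each extra meta-level of deliberation ("how long should I think about how long to think...") costs a fixed amount of time and shrinks the remaining estimation error by a constant factor, and stop the regress as soon as the marginal gain no longer covers the cost of another level.

```python
def plan_with_cutoff(base_error=10.0, level_cost=1.0,
                     improvement=0.5, max_levels=10):
    """Return (levels_used, total_cost, residual_error).

    Hypothetical model: going one meta-level deeper multiplies the
    residual error by `improvement` but adds `level_cost` of
    deliberation time. Stop when the error removed by one more
    level is no longer worth its cost -- the artificial cutoff
    that keeps the regress finite.
    """
    error, cost, levels = base_error, 0.0, 0
    while levels < max_levels:
        gain = error * (1 - improvement)  # error removed by one more level
        if gain <= level_cost:            # not worth another level: stop
            break
        error *= improvement
        cost += level_cost
        levels += 1
    return levels, cost, error

# With these made-up numbers the regress halts after three levels.
levels, cost, error = plan_with_cutoff()
```

The point of the sketch is only that any stopping rule of this kind is itself a rule of thumb, chosen at some level rather than derived at the level above it.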
