12 Comments

No. Some die and some don't. Those are really different levels! And as (most) biological organisms age, it's internal differences that become increasingly important in determining which of those outcomes occurs.


Alternate scenario: a small fraction of families are deemed functional, and all the rest dysfunctional.

Average minds are all considered functional, but why not say Stephen Hawking's is the functional one instead, and that all the average folk suffer from the same dysfunction? Hell, average people might all have some minor aphasia but have learned to correct for it, so it doesn't seem as interesting.

One grandmother's pessary seems different enough from the other's hysterectomy to me.


But happy, high-functioning families are quite varied, ranging from childless to broods of 15, with low, medium, and high incomes, indulging a great variety of interests. Whether broken families are more varied is highly debatable.

The conclusion I draw is that Tolstoy may have been unable to resist starting a book with a glib phrase, whether or not it was really true.


Internal factors are more important in that they strongly influence capabilities toward the same low level, which was the premise.


Biological bodies, on the other hand, become less varied in their capabilities as they break down. For example, with increased age the contributions of other factors, like social status, to human mortality rates become weaker.

This is evidence against your premise. If external factors become less important to mortality, then internal factors are *more* important, not less.


The purchasers of supercomputers are vastly more competent, and more willing to learn what is needed to make a good purchasing decision. That effect might swamp the effect Robin describes in this post.


In which capability dimensions do they vary more?


1) I am trying, but I still can't see a distinction between biological and non-biological systems. Between birth and death (i.e., minimum and maximum broken parts), humans, for example, show a maximum in variation of capabilities, just like the three-pillar structure. What am I missing?

2) Are you basically saying that between maximum and minimum entropy, a system's variation in "capabilities" is maximized somewhere in between? What is wrong with that statement? (A toy sketch at the end of this comment tries to make this concrete.)

3) Thinking of the universe, is this basically the anthropic principle, assuming #2 is correct?

4) Aside from entropy the first thing that came to mind was financial markets. "Fat tails" seem to max out at an intermediate level of "broken parts". Where am I wrong on this one?

BTW, this is such a fascinating question/puzzle, thanks for the proposition.
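Here is the toy sketch promised under (2). It is not the platform model from the post, just an assumed stand-in: capability is the sum of the contributions of whichever parts still work, and the part count, contribution values, and trial counts are all made up for illustration.

```python
# Toy sketch (assumed model, not the post's): capability = sum of the
# contributions of whichever parts still work.  For each number of broken
# parts k, break a random subset of size k many times and measure how much
# the resulting capability varies.
import random
import statistics

random.seed(0)
n_parts = 20
contributions = [random.uniform(0.5, 1.5) for _ in range(n_parts)]  # fixed part "strengths"

for k in range(n_parts + 1):  # k = number of broken parts
    capabilities = []
    for _ in range(2000):
        broken = set(random.sample(range(n_parts), k))
        capability = sum(c for i, c in enumerate(contributions) if i not in broken)
        capabilities.append(capability)
    print(f"{k:2d} broken parts: mean capability {statistics.mean(capabilities):5.2f}, "
          f"spread (stdev) {statistics.pstdev(capabilities):.2f}")
```

Under that assumption the spread is zero with no broken parts and with all parts broken, and peaks at an intermediate number, which at least matches the post's claim; whether real systems look like a sum of independent part contributions is exactly the question.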


> systems whose capabilities vary the most are systems with an intermediate number of broken parts. In contrast, systems with few or many broken parts vary less in their capabilities

This seems to work if and only if the parts break independently; the variance of the average of a large number of independent random variables falls in proportion to the number of variables (so its standard deviation falls as the square root of that number). If you have what superficially looks like a large number of "independent" parts that are all correlated with each other, this won't work.

For example, you said:

"Biological bodies and minds, in contrast, have very many broken parts, and so when versions have fewer breaks they are both more capable and more varied in their capabilities."

If the brokenness of one part causes other parts to break, then we might see a "cascade" of correlated random variables changing value together, and see great variation in broken bodies or minds. The healthcare and welfare system seems to limit this to an extent, but if you removed that you might see a steady stream of people with multiple injuries and morbidities.
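A minimal sketch of that scaling point, with a single shared "shock" standing in for the cascade described above; the weights and trial counts are assumptions, not anything from the post. With independent parts the standard deviation of the average shrinks like 1/sqrt(n); with a shared shock it levels off instead.

```python
# Independent vs. correlated ("cascading") parts: how the spread of the
# average brokenness behaves as the number of parts grows.
import random
import statistics

random.seed(0)

def avg_std(n_parts, shared_weight, trials=2000):
    """Std dev of the average 'brokenness' of n_parts parts.

    Each part = shared_weight * common_shock + (1 - shared_weight) * own_noise.
    shared_weight = 0 means fully independent parts; > 0 means correlated failures.
    """
    averages = []
    for _ in range(trials):
        common = random.gauss(0, 1)  # a system-wide shock, e.g. a cascade trigger
        parts = [shared_weight * common + (1 - shared_weight) * random.gauss(0, 1)
                 for _ in range(n_parts)]
        averages.append(sum(parts) / n_parts)
    return statistics.pstdev(averages)

for n in (1, 10, 100, 1000):
    print(f"n={n:4d}  independent: {avg_std(n, 0.0):.3f}   correlated: {avg_std(n, 0.5):.3f}")
```

So the "intermediate breaks vary most" logic leans on the breaks being roughly independent, as the comment says.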


Am I missing something? It would seem that once a system (car, computer, etc.) fails, it's just as capable as any other nonfunctioning item, much as one dead body is as dead as another.

In this sense, reliability matters more as a dynamic function. Something has to be going on, a continuous state of change, for 'breaking' to be meaningful.

The platform example makes me think of how my P Chem professor described entropy: "At a very basic level, a function of geometry." There are simply more states where two things are separate than when they're together. Something needs to be keeping them together. If the universe is such a case, then does that mean the universe varies more or less with declining capability?
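A tiny counting sketch of that "geometry" remark, on an assumed 10x10 grid with orthogonal adjacency standing in for "together": placements where the two things sit apart vastly outnumber the ones where they sit next to each other.

```python
# Count placements of two distinguishable things on a small grid,
# split into "adjacent" (together) vs. "separated" configurations.
from itertools import permutations

side = 10
cells = [(x, y) for x in range(side) for y in range(side)]

together = 0  # the two things on orthogonally adjacent cells
apart = 0
for (x1, y1), (x2, y2) in permutations(cells, 2):  # ordered pairs of distinct cells
    if abs(x1 - x2) + abs(y1 - y2) == 1:
        together += 1
    else:
        apart += 1

print(f"adjacent placements: {together}, separated placements: {apart}")
# prints: adjacent placements: 360, separated placements: 9540
```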


> For what other interesting types of systems do we know if the more capable systems vary more or less in particular capabilities?

For some reason, this makes me think of regular computers versus supercomputers.

The former fail in gazillions of ways in every hardware part, every bit of software, and the exponential number of interactions between them.

The latter, though, hardly ever fail, even though it is perfectly normal for hundreds of nodes to be out of action or performing poorly. (I heard once that Google doesn't even bother to remove dead computers from its datacenters.)

I suppose this illustrates your point: supercomputers are massively more complex and more expensive, but are much more reliable than 'intermediate' desktop computers (and actually, the same point is probably true of smaller-than-desktop computers: what was the last embedded computer I used that failed? I can't think of one).
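A back-of-envelope sketch of that reliability point, with made-up numbers (the node count and uptime probabilities below are assumptions, not real desktop or Google figures): a job that only needs "enough" of many flaky nodes can beat a single machine that is itself quite reliable.

```python
# Binomial sketch: availability of one fairly reliable desktop vs. a cluster
# of flaky nodes where the job survives as long as enough nodes are up.
from math import comb

def prob_at_least(n, k, p_up):
    """Probability that at least k of n nodes are up, each up independently with probability p_up."""
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i) for i in range(k, n + 1))

p_desktop_up = 0.99                # assumed: one desktop that is up 99% of the time
n_nodes, p_node_up = 1000, 0.95    # assumed: a cluster of flakier nodes, 95% each
needed = 900                       # the job still runs if any 900 of the 1000 nodes are up

print(f"desktop available:                 {p_desktop_up:.6f}")
print(f"cluster with >= {needed}/{n_nodes} nodes up: {prob_at_least(n_nodes, needed, p_node_up):.6f}")
```

This leans on node failures being roughly independent; a shared power, network, or software failure would be exactly the kind of correlated break an earlier comment worries about.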


> For what other interesting types of systems do we know if the more capable systems vary more or less in particular capabilities?

University departments vary more as they are more broken.
