9 Comments

Reminds me of the "demonstration farms" model used by the early Extension Service (which was briefly described in Gawande's recent New Yorker article as a model for innovation in health care).

But according to T.R. Reid's book on health care, both France and Germany use smart cards to carry a person's health care history, an example of an innovation that must have been centrally managed. That's a reminder that what seems true for innovation in an American context is not necessarily universally true.


Of course he didn't. He meant that he hoped we could trust the experts, because that's a pretty critical component of the division of labor: the generation of expertise is expensive.

The problem comes with the fact that there's no economic incentive in the current climate for an expert to be right. There's economic incentive for an expert to be convincing, and there's incentive for an expert to push his particular expertise. And so there's counterpressure against experts, because we don't trust that they're telling us what's best for us, but rather what's best for them.

There's a good and entertaining analysis of this here.


"One might hope that we had central technology experts..."

You don't have to be an axiomatic libertarian to see what's wrong with that. Heck, James Scott could tell you.


Your post focuses on the Benefits side of the equation.

I can see analogous models for the Cost side.

Specifically: having someone accessible (nearby) to learn from, having an authority figure to learn from, and having local institutional knowledge to adopt.

Most innovations require a number of tweaks to get them to be profitable changes. Failure to make these tweaks and understand the innovation leads to trend-chasing PHBs who give innovations a bad name.

Just my .02


It's been my impression that a lot of bad, faddish ideas spread fairly often, especially in the business world. That being the case, and since slowness of spread allows more opportunity for good ideas to be distinguished from bad, speeding up the spread of ideas overall might mean we see disproportionately more bad ideas spread, and that would not be a good thing. Since there are vastly more bad ideas generated than good ones, speeding up the spread of ideas could have a massive downside.
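To make that selection argument concrete, here is a toy simulation (my own sketch, not anything from the post or the AER paper; the 10% good-idea share and 50% per-round catch rate are made-up numbers). If scrutiny before adoption is what filters out bad ideas, then speeding up spread amounts to cutting the number of scrutiny rounds, and the adopted pool fills with bad ideas:

```python
# Toy Monte Carlo sketch (illustrative only; all parameters are assumptions):
# assume 10% of ideas are good and each round of pre-adoption scrutiny
# catches a bad idea half the time. Faster spread = fewer scrutiny rounds.
import random

random.seed(0)

P_GOOD = 0.10          # assumed share of ideas that are actually good
CATCH_PER_ROUND = 0.5  # assumed chance one round of scrutiny rejects a bad idea
N_IDEAS = 100_000

def adopted_counts(scrutiny_rounds):
    """Count (good, bad) ideas adopted when each idea faces the given
    number of scrutiny rounds before everyone copies it."""
    good_adopted = bad_adopted = 0
    for _ in range(N_IDEAS):
        good = random.random() < P_GOOD
        # A bad idea spreads only if every scrutiny round misses it;
        # with zero rounds (instant spread) it always gets through.
        spreads = good or all(
            random.random() > CATCH_PER_ROUND for _ in range(scrutiny_rounds)
        )
        if spreads:
            if good:
                good_adopted += 1
            else:
                bad_adopted += 1
    return good_adopted, bad_adopted

for rounds in (0, 1, 3):  # fast spread -> 0 rounds; slow spread -> more
    g, b = adopted_counts(rounds)
    print(f"{rounds} scrutiny rounds: {g} good vs {b} bad ideas adopted")
```

Under these made-up numbers, with zero rounds roughly nine adopted ideas in ten are bad; with three rounds the adopted pool is close to half good, which is the trade-off the comment is pointing at.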


Actually, the question that should be asked is what impedes diffusion and acceptance of ideas within organizations.

And there, what is interesting is the sociology of organizations. For example, if a manager does not understand the technological solution that a subordinate presents, or does not have the technical skills to evaluate it, the manager is, in effect, threatened by the subordinate, because the only way to implement the solution is to give more power to that subordinate. Not goin' to happen.

So, what might be necessary is to train managers in the relevant technical skills, or to have managers whose technical skills are sufficient to understand what the subordinate is saying. Otherwise, change is a threat, and change is a status changer.


This statement gave me pause:

One might hope that we had central technology experts, and once they approved a new tech, everyone would adopt it.

Oh my. I sure hope you meant those words in a manner more analogous to setting voluntary IEEE-style standards, rather than Government-imposed regulations, or even worse, centralized Soviet-style planning?!

/axiomatic libertarian


I have been watching another innovation process up close and personal: the slow diffusion of virtual worlds among educational institutions, corporations, and government agencies. One point from this post and one from the prior post seem particularly relevant.

First, virtual worlds are a new product, but, consistent with the last post, the real returns have gone to those focusing on the process innovations--the people figuring out how to use the new technology more effectively, whether for virtual conferences, telecommuting, product prototyping, etc., or (one of my favorite examples) behavioral therapy for people with brain trauma or autism spectrum disorder.

Second, the AER article refers to "conformity" in talking about social influence, but at least in the case of virtual worlds, it is even more useful to think about credentialism. People with credentials (impressive job titles or a history of past success with tech innovation) have far more success in having their proposals accepted, and then others with less impressive credentials argue their own case by saying, in effect, "this highly credentialed person is doing X, so it is a reasonable idea that we could pursue as well."

Credentialism may be particularly influential in this domain because virtual worlds don't yet have much credibility (serious enterprises tend to shun "games"). It might be less important for agriculture or other innovations that are more familiar and already respectable.


People don’t believe something works until they’ve seen it work in something pretty close to their situation.

Wait, so people are rational after all?
