Category Archives: Innovation

Slow Potatoes

The Spanish transplanted the spud to Europe in the 16th century, by way of the Canary Islands. Growing underground — bulbous, white, and strange — potatoes had image problems on the Continent at first. … The subterranean bizarreness of tuberous growth compared unfavorably to the airy, sunlit wholesomeness of the familiar cereal grains — barley, rye, oats and wheat — that had sustained Europe for centuries.

The spud did not become a staple food in Europe until the 17th and 18th centuries, when warfare was widespread and frequent. Reader argues that this was no coincidence: Disruptions and upheavals inflicted by marauding armies changed the diet and tastes of the Continent, with massive demographic and economic consequences. When grain fields weren’t being torched or requisitioned, armies were camping on them or marching through them. It wasn’t a matter of choice but a lack of options that really dropped the potato onto Europe’s plate around 1700. While cereal grains were exposed to the ravages of war, potatoes were safely hidden in the ground and, when the tides of war receded, could be harvested and stored. This was when Europe discovered that the potato may be monotonous, but it is also extraordinarily nutritious, yielding four times more calories per acre than grain.

That is from a Post review of the book Potato.  Now my historian colleague John Nye tells me that since the potato took a lot more labor, it wasn’t really four times more productive.  And he’s pretty skeptical of the above story.  But still, I find it interesting that in what was basically a farming economy, a freely available, more productive farm technology took so long to catch on, and even then perhaps only because of something largely incidental to its productivity.  The fact that such a thing is so hard to imagine today shows just how dramatically the industrial revolution has changed how we innovate.


The Growth Groove Game

My head is full of big history questions after spending two days at a related workshop.

How did humanity become so influential and powerful?  Apparently we found a growth groove that let us keep on accumulating power-enhancing innovations.  But what key features make this growth groove possible?

A great many features get mentioned, including brains, language, culture, fire, tools, large tribes, mind-reading, trade, specialization, domestication, trust, capital, machines, artificial power, cities, science, writing, printing, leisure, property, law, marriage, patents, and signaling.  They can all seem like plausible candidates, at least for some places and times.  But which features were how important?

It turns out that we just don't know.  But we do have some strong clues.  So a fun armchair game is to guess which were the key features.  But before you play, remember the game's key rule: your story must fit history as we know it.  So let's review that history.

A good measure of humanity's overall influence/power is "world product," and history is reasonably well summarized as:

Continue reading "The Growth Groove Game" »


A New Day

Somewhere in the vastnesses of the Internet and the almost equally impenetrable thicket of my bookmark collection, there is a post by someone who was learning Zen meditation…

Someone who was surprised by how many of the thoughts that crossed his mind, as he tried to meditate, were old thoughts – thoughts he had thunk many times before.  He was successful in banishing these old thoughts, but did he succeed in meditating?  No; once the comfortable routine thoughts were banished, new and interesting and more distracting thoughts began to cross his mind instead.

I was struck, on reading this, how much of my life I had allowed to fall into routine patterns.  Once you actually see that, it takes on a nightmarish quality:  You can imagine your fraction of novelty diminishing and diminishing, so slowly you never take alarm, until finally you spend until the end of time watching the same videos over and over again, and thinking the same thoughts each time.

Sometime in the next week – January 1st if you have that available, or maybe January 3rd or 4th if the weekend is more convenient – I suggest you hold a New Day, where you don't do anything old.

Don't read any book you've read before.  Don't read any author you've read before.  Don't visit any website you've visited before.  Don't play any game you've played before.  Don't listen to familiar music that you already know you'll like.  If you go on a walk, walk along a new path even if you have to drive to a different part of the city for your walk.  Don't go to any restaurant you've been to before, order a dish that you haven't had before.  Talk to new people (even if you have to find them in an IRC channel) about something you don't spend much time discussing.

And most of all, if you become aware of yourself musing on any thought you've thunk before, then muse on something else.  Rehearse no old grievances, replay no old fantasies.

If it works, you could make it a holiday tradition, and do it every New Year.


Abstraction, Not Analogy

I’m not that happy with framing our analysis choices here as “surface analogies” versus “inside views.”  More useful, I think, to see this as a choice of abstractions.  An abstraction neglects some details to emphasize others.  While random abstractions are useless, we have a rich library of useful abstractions, tied to specific useful insights.

For example, consider the oldest known tool, the hammer.  To understand how well an ordinary hammer performs its main function, we can abstract from details of shape and materials.  To calculate the kinetic energy it delivers, we need only look at its length, head mass, and recoil energy percentage (given by its bending strength).  To check that it can be held comfortably, we need the handle’s radius, surface coefficient of friction, and shock absorption ability.  To estimate error rates we need only consider its length and head diameter.
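To make that first abstraction concrete, here is a minimal sketch of the energy calculation, with assumed, illustrative numbers (swing speed standing in for the handle length that would determine it):

```python
def hammer_kinetic_energy(head_mass_kg, swing_speed_mps, recoil_fraction):
    """Energy delivered to the target: kinetic energy of the head,
    minus the fraction lost to recoil (handle bending)."""
    kinetic = 0.5 * head_mass_kg * swing_speed_mps ** 2
    return kinetic * (1.0 - recoil_fraction)

# Assumed numbers, purely for illustration: a 0.5 kg head swung at
# 8 m/s with 10% recoil loss delivers roughly 14 joules.
print(hammer_kinetic_energy(0.5, 8.0, 0.1))
```

The point of the abstraction is exactly that these three numbers suffice: shape, color, and handle material never enter the calculation.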

For other purposes, we can use other abstractions:

  • To see that it is not a good thing to throw at people, we can note it is heavy, hard, and sharp.
  • To see that it is not a good thing to hold high in a lightning storm, we can note it is long and conducts electricity.
  • To evaluate the cost to carry it around in a tool kit, we consider its volume and mass.
  • To judge its suitability as decorative wall art, we consider its texture and color balance.
  • To predict who will hold it when, we consider who owns it, and who they know.
  • To understand its symbolic meaning in a story, we use a library of common hammer symbolisms.
  • To understand its early place in human history, we consider its easy availability and frequent gains from smashing open shells.
  • To predict when it is displaced by powered hammers, we can focus on the cost, human energy required, and weight of the two tools.
  • To understand its value and cost in our economy, we can focus on its market price and quantity.
  • [I’m sure we could extend this list.]

Continue reading "Abstraction, Not Analogy" »


Intelligence in Economics

Followup to: Economic Definition of Intelligence?

After I challenged Robin to show how economic concepts can be useful in defining or measuring intelligence, Robin responded by – as I interpret it – challenging me to show why a generalized concept of "intelligence" is any use in economics.

Well, I’m not an economist (as you may have noticed) but I’ll try to respond as best I can.

My primary view of the world tends to be through the lens of AI.  If I talk about economics, I’m going to try to subsume it into notions like expected utility maximization (I manufacture lots of copies of something that I can use to achieve my goals) or information theory (if you manufacture lots of copies of something, my probability of seeing a copy goes up).  This subsumption isn’t meant to be some kind of challenge for academic supremacy – it’s just what happens if you ask an AI guy an econ question.

So first, let me describe what I see when I look at economics:

I see a special case of game theory in which some interactions are highly regular and repeatable:  You can take 3 units of steel and 1 unit of labor and make 1 truck that will transport 5 units of grain between Chicago and Manchester once per week, and agents can potentially do this over and over again.  If the numbers aren’t constant, they’re at least regular – there’s diminishing marginal utility, or supply/demand curves, rather than rolling random dice every time.  Imagine economics if no two elements of reality were fungible – you’d just have a huge incompressible problem in non-zero-sum game theory.

This may be, for example, why we don’t think of scientists writing papers that build on the work of other scientists in terms of an economy of science papers – if you turn an economist loose on science, they may measure scientist salaries paid in fungible dollars, or try to see whether scientists trade countable citations with each other.  But it’s much less likely to occur to them to analyze the way that units of scientific knowledge are produced from previous units plus scientific labor.  Where information is concerned, two identical copies of a file are the same information as one file.  So every unit of knowledge is unique, non-fungible, and so is each act of production.  There isn’t even a common currency that measures how much a given paper contributes to human knowledge.  (I don’t know what economists don’t know, so do correct me if this is actually extensively studied.)

Since "intelligence" deals with an informational domain, building a bridge from it to economics isn’t trivial – but where do factories come from, anyway?  Why do humans get a higher return on capital than chimpanzees?

Continue reading "Intelligence in Economics" »


‘Anyone who thinks the Large Hadron Collider will destroy the world is a t**t.’


This week is Big Bang Week at the BBC, with various programmes devoted to the switch-on of CERN’s Large Hadron Collider (LHC) on Wednesday morning.  Many of these programmes are covered in this week’s issue of the Radio Times—the BBC’s listings magazine—which also features a short interview with Professor Brian Cox, chair of particle physics at the University of Manchester. Asked about concerns that the LHC could destroy the earth, he replies:

‘The nonsense you find on the web about “doomsday scenarios” is conspiracy theory rubbish generated by a small group of nutters, primarily on the other side of the Atlantic.  These people also think that the Theory of Relativity is a Jewish conspiracy and that America didn’t land on the Moon.  Both are more likely, by the way, than the LHC destroying the world.  I’m slightly irritated, because this non-story is symptomatic of a larger mistrust in science, particularly in the US, which includes things like intelligent design. [… A]nyone who thinks the LHC will destroy the world is a t**t.’ (Final word censored by Radio Times.) [1]

Who counts as a nutter and a t**t on this reckoning?  It is true that anyone who thinks there is a 100% chance that the LHC will definitely destroy the world is confused—but it’s probably also true that not many people really think this.  On the other hand, if anyone who thinks that it is worth taking seriously the (admittedly very slim) possibility that the LHC will destroy the world is a t**t, then there are many apparently very clever t**ts knocking about in our universities.  Among these are several of my colleagues: Nick Shackel has previously blogged about the risks of turning on the LHC, as has Toby Ord; and Rafaela Hillerbrand, Toby Ord, and Anders Sandberg presented on this topic at the recent Future of Humanity Institute-hosted conference on Global Catastrophic Risks. And, despite having chatted to each of these people about the LHC at some point or another, I’ve never heard any of them express sympathy for the view that the Theory of Relativity is a Jewish conspiracy or that nobody landed on the Moon.  So, are they t**ts or not?

Continue reading "‘Anyone who thinks the Large Hadron Collider will destroy the world is a t**t.’" »


In Innovation, Meta is Max

Building on my intro to innovation, which summarized previous work, let me now offer a new insight: the max net-impact innovations, by far, have been meta-innovations, i.e., innovations that changed how fast other innovations accumulated. 

Concrete studies of specific innovations in biology, tech, and business suggest that most innovation value comes from bazillions of small innovations.  Yes, once in a while a large innovation appears, like computers or a-bombs, but even these require scores of supporting innovations to fulfill their potential.  And if we consider innovations that improve not just one industry or region, but the entire world overall, then it is hard to identify any innovations responsible for identifiable deviations from the steady exponential growth (plus noise) that comes from the usual bazillions of small innovations.

Yet we know of perhaps four innovations responsible for deviations that were not only identifiable, but overwhelmingly so!  As I reviewed in my singularity econ article, these are the innovations that allowed animal brains, human brains, farming, and industry.  These innovations do not seem to have directly increased the level of the relevant economy much — instead within less than a previous doubling time each innovation increased the rate of innovation accumulation by a factor of sixty or more.  (If several innovations appeared together, they apparently had a single common cause.)
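To see where a factor like sixty comes from, note that a growth rate is inversely proportional to its doubling time, so each transition's speedup is just the ratio of successive doubling times.  A small sketch, using rounded illustrative stand-ins for the era doubling times rather than figures from the article:

```python
# Rough doubling times of world product, in years (assumed round
# numbers for illustration, not measured figures).
doubling_times = {
    "hunting": 230_000,
    "farming": 900,
    "industry": 15,
}

# Growth rate ~ 1 / doubling_time, so the speedup at each transition
# is the ratio of the old doubling time to the new one.
eras = list(doubling_times)
for prev, new in zip(eras, eras[1:]):
    speedup = doubling_times[prev] / doubling_times[new]
    print(f"{prev} -> {new}: growth rate up ~{speedup:.0f}x")
```

On numbers like these, each transition multiplies the growth rate by a factor of sixty or (much) more, which is why the deviations are overwhelmingly identifiable rather than lost in the noise.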

The first human minds, the first farmers, and the first industrialists were actually not much better hunters, gatherers, or builders than their immediate ancestors — they were mainly just better at getting better faster. 

So if you want to worry about big disruptive future innovations, you should definitely consider meta-innovations.  Admittedly, since the above data covers only innovations that improved the world overall, it does not include predatory innovations that mainly let one group gain at the expense of other groups.  So this data allows the possibility of very large predatory innovations.  But if you are thinking about non-predatory innovations, history suggests you should be most concerned about a single new meta-innovation. 


Intro to Innovation

The topic of innovation comes up often here, so let us review some basics:

  1. Systems are parts in a structure; innovations are better part or structure designs. An innovation embodies insights whose value depends on a context, and so changes with that context.
  2. Most net growth in the number or size of large systems has been due to collecting innovations.
  3. Wars, quakes, and diseases may be distributed so most impact comes from the few largest instances.  In contrast, in large systems most innovation value comes from many small innovations.  Even big innovations require many matching small innovations to be viable.
  4. Innovation rates increase when early innovations make it easier to pursue later innovations, and decrease when the most valuable easiest innovations tend to be pursued first.  Steady (exponential) growth suggests that these factors roughly cancel.  Since growth rates commonly increase then decrease, usually the second factor eventually wins.   
  5. Innovation in large systems comes mostly from part innovation, so system innovation is steadier than part innovation, and the largest systems grow steadiest.
  6. System structures vary in how well they encourage and test innovations locally and then distribute the best ones widely.  Better structures for this are meta-innovations.
    • Good modularity reduces the need to match innovations in differing parts. 
    • Good abstraction puts similar innovation problems within the same part.
  7. If a barrier isolates two systems, the faster growing one eventually dominates.  A system that better promotes innovation can lose to a system with a larger source of innovation. 
  8. In large innovation pools, similar innovations commonly arise from several semi-independent sources at nearly the same time.  No single source was essential.
  9. Current human society can give incentives to innovate too much, when innovation is used to signal, and to innovate too little, when innovators are not paid the full value gained by others.
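Point 4 above can be sketched as a toy model (entirely illustrative: the base rate and both factors are assumptions, not anything measured):

```python
def simulate(steps, boost=0.05, depletion=0.05):
    """Toy model of point 4.  Each round, the innovation rate is
    pushed up because earlier innovations ease later ones (`boost`),
    and pulled down because the easiest, most valuable innovations
    tend to be pursued first (`depletion`)."""
    capability = 1.0
    for t in range(steps):
        rate = 0.10 * (1 + boost * t) / (1 + depletion * t)
        capability *= 1 + rate
    return capability

# When the two factors roughly cancel, growth is steadily exponential;
# when depletion outpaces the boost, growth eventually slows.
print(simulate(20, boost=0.05, depletion=0.05))  # equals 1.1 ** 20
print(simulate(20, boost=0.0, depletion=0.05))   # smaller: growth slows
```

With the factors balanced the rate stays constant, matching the steady exponential growth of point 4; tilting toward depletion reproduces the common pattern of growth rates that rise and then fall.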

I learned this stuff long ago, so I have little idea how commonly known all this is.
