My last post quoted Drexler on science vs. engineering. Here he is on exploratory engineering:
Exploring, not the time-bound consequences of human actions, but the timeless implications of known physical law. … Call it “exploratory engineering”; as applied by Tsiolkovsky a century ago, this method of study showed that rocket technology could open a world beyond the bounds of the Earth. Applied today, this method shows that atomically precise technologies can open a world beyond the bounds of the Industrial Revolution.
Drexler’s most famous book was his ’86 Engines of Creation, but his best was his ’92 Nanosystems, which explored nanotech engineering. The book shows impressive courage, venturing far beyond familiar intellectual shores, impressive breadth, requiring mastery of a wide range of science and engineering, and impressive accomplishment, as little in there is likely to be very wrong. This makes Drexler one of my heroes, and an inspiration in my current efforts to think through the social implications of ems.
Alas, Drexler also deserves some criticism. His latest book, Radical Abundance, like several prior books, goes well beyond physical science and engineering to discuss social implications at length. Alas, though his impressive breadth doesn’t extend much into social science, like most “hard” sci/tech folks Drexler seems mostly unaware of this. He seems to toss together his own seat-of-the-pants social reasoning as he can, and then figure that anything he can’t work out must be unknown to all. Sometimes this goes badly. Continue reading "My Critique Of Drexler" »
My second interview with economist Robin Hanson was by far the most vigorous debate ever on Singularity 1 on 1. I have to say that I have rarely disagreed more with any of my podcast guests before. … I believe that it is ideas like Robin’s that may, and often do, have a direct impact on our future. … On the one hand, I really like Robin a lot: He is that most likeable fellow … who like me, would like to live forever and is in support of cryonics. In addition, Hanson is also clearly a very intelligent person with a diverse background and education in physics, philosophy, computer programming, artificial intelligence and economics. He’s got a great smile and, as you will see throughout the interview, is apparently very gracious to my verbal attacks on his ideas.
On the other hand, after reading his book draft on the [future] Em Economy I believe that some of his suggestions have much less to do with social science and much more with his libertarian bias and what I will call “an extremist politics in disguise.”
So, here is the gist of our disagreement:
I say that there is no social science that, in between the lines of its economic reasoning, can logically or reasonably suggest details such as: policies of social discrimination and collective punishment; the complete privatization of law, detection of crime, punishment and adjudication; that some should be run 1,000 times faster than others, while at the same time giving them 1,000 times more voting power; that emulations who can’t pay for their storage fees should be either restored from previous back-ups or be outright deleted (isn’t this like saying that if you fail to pay your rent you should be shot dead?!)…
Suggestions like the above are no mere details: they are extremist bias for Laissez-faire ideology while dangerously masquerading as (impartial) social science. … Because not only that he doesn’t give any justification for the above suggestions of his, but also because, in principle, no social science could ever give justification for issues which are profoundly ethical and political in nature. (Thus you can say that I am in a way arguing about the proper limits, scope and sphere of economics, where using its tools can give us any worthy and useful insights we can use for the benefit of our whole society.) (more)
You might think that Danaylov’s complaint is that I use the wrong social science, one biased too far toward libertarian conclusions. But in fact his complaint seems to be mainly against the very idea of social science: an ability to predict social outcomes. He apparently argues that since 1) future social outcomes depend on many billions of individual choices, 2) ethical and political considerations are relevant to such choices, and 3) humans have free will to be influenced by such considerations in making their choices, that therefore 4) it should be impossible to predict future social outcomes at a rate better than random chance.
For example, if allowing some ems to run faster than others might offend common ethical ideals of equality, it must be impossible to predict that this will actually happen. While one might be able to use physics to predict the future paths of bouncing billiard balls, as soon as a human with free will enters the picture, making a choice where ethics is relevant, all must fade into an opaque cloud of possibilities; no predictions are possible.
Now I haven’t viewed them, but I find it extremely hard to believe that out of 90 interviews on the future, Danaylov has always vigorously complained whenever anyone even implicitly suggested that they could do any better than random chance in guessing future outcomes in any context influenced by a human choice where ethics or politics might have been relevant. I’m in fact pretty sure he must have nodded in agreement with many explicit forecasts. So why complain more about me then?
It seems to me that the real complaint here is that I forecast that human choices will in fact result in outcomes that violate the ethical principles Danaylov holds dear. He objects much more to my predicting a future of more inequality than if I had predicted a future of more equality. That is, I’m guessing he mostly approves of idealistic, and disapproves of cynical, predictions. Social science must be impossible if it would predict non-idealistic outcomes, because, well, just because.
FYI, I also did this BBC interview a few months back.
Scenario planning is a popular way to think about possible futures. In scenario planning, one seeks a modest number of scenarios that are each internally consistent, story-like, describe equilibrium rather than transitory situations, and are archetypal in representing clusters of relevant driving forces. The set of scenarios should cover a wide range of possibilities across key axes of uncertainty and disagreement.
Ask most “hard” science folks about scenario planning and they’ll roll their eyes, seeing it as hopelessly informal and muddled. And yes, one reason for its popularity is probably that insiders can usually make it say whatever they want it to say. Nevertheless, when I try to think hard about the future I am usually drawn to something very much like scenario planning. It does in fact seem a robustly useful tool.
It often seems useful to collect a set of scenarios defined by reference to a “baseline” scenario. For example, macroeconomic scenarios are often defined in terms of deviations from baseline projections of constant growth, stable market shares, etc.
If one chooses a most probable scenario as a baseline, as in macroeconomic projections, then variations on that baseline may conveniently have similar probabilities to one another. However, it seems to me that it is often more useful to instead pick baselines that are simple, i.e., where they and simple variations can be more easily analyzed for their consequences.
For example even if a major war is likely sometime in the next century, one may prefer to use as a baseline a scenario where there are no such wars. This baseline will make it easier to analyze the consequences of particular war scenarios, such as adding a war between India and Pakistan, or between China and Taiwan. Even if a war between India and Pakistan is more likely than not within a century, using the scenario of such a war as a baseline will make it harder to define and describe other scenarios as variations on that baseline.
Of course the scenario where an asteroid destroys all life on Earth is extremely simple, in the sense of making it very easy to forecast socially relevant consequences. So clearly you usually don’t want the simplest possible scenario. You instead want a mix of reasons for choosing scenario features.
Some features will be chosen because they are central to your forecasting goals, and others will be chosen because they seem far more likely than alternatives. But still other baseline scenario features should be chosen because they make it easier to analyze the consequences of that scenario and of simple variations on it.
In economics, we often use competitive baseline scenarios, i.e., scenarios where supply and demand analysis applies well. We do this not so much because we believe that this is the usual situation, but because such scenarios make great baselines. We can more easily estimate the consequences of variations by seeing them as situations where supply or demand changes. We also consider variations where supply and demand applies less well, but we know it will be harder to calculate the consequences of such scenarios and variations on them.
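To make this concrete, here is a minimal sketch (with entirely made-up numbers) of why competitive scenarios make convenient baselines: with linear supply and demand the equilibrium has a closed-form solution, so a variation scenario can be analyzed simply as a deviation from the baseline equilibrium.

```python
# Toy illustration with hypothetical parameters: a competitive baseline
# (linear supply Qs = a + b*P against linear demand Qd = c - d*P) is easy
# to solve, so a "variation" scenario is just a shifted demand curve.

def equilibrium(supply_intercept, supply_slope, demand_intercept, demand_slope):
    """Solve a + b*P = c - d*P for the market-clearing price and quantity."""
    price = (demand_intercept - supply_intercept) / (supply_slope + demand_slope)
    quantity = supply_intercept + supply_slope * price
    return price, quantity

# Baseline scenario: chosen for tractability, not necessarily likelihood.
p0, q0 = equilibrium(supply_intercept=0, supply_slope=2,
                     demand_intercept=12, demand_slope=1)

# Variation scenario: a demand shock, described as a deviation from baseline.
p1, q1 = equilibrium(supply_intercept=0, supply_slope=2,
                     demand_intercept=15, demand_slope=1)

print(p0, q0)            # baseline price and quantity: 4.0 8.0
print(p1 - p0, q1 - q0)  # consequences of the variation, relative to baseline
```

The point is not the particular numbers, but that the baseline’s simplicity is what lets each variation be characterized compactly as a change relative to it.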
Yes, it is often a good idea to first look for your keys under the lamppost. Your keys are probably not there, but that is a good place to anchor your mental map of the territory, so you can plan your search of the dark.
It is one of the most fundamental questions in the social and human sciences: how culturally plastic are people? Many anthropologists have long championed the view that humans are very plastic; with matching upbringing people can be made to behave a very wide range of ways, and to want a very wide range of things. Others say human nature is far more constrained, and collect descriptions of “human universals” (See Brown’s 1991 book.)
This dispute has been politically potent. For example, in gender relations some have said that social institutions should reflect the fact that men and women have certain innate differences, while others say that we can pick most any way we want the genders to relate, and then teach our children to be like that.
But let’s set those issues aside, look to the distant future, and ask: do varying degrees of human cultural plasticity make different predictions about the future?
The easiest predictions are at the extremes. For example, if human nature is extremely rigid, and hard to change, then humans will most likely just go extinct. Eventually environments will change, and other creatures will evolve or be designed that are better adapted to those new environments. Humans won’t adapt very well, by assumption, so they lose.
At the other extreme, if human nature is very plastic, then it will adapt to most changes, and change to embody whatever innovations are required for such adaptation. But then there would be very little left of us by the end; our descendants would become whatever any initially very plastic species would have become in such an environment.
So if you want some distinctive human features to last, you’ll have to hope for an intermediate level of plasticity. Human nature has to be flexible enough to not be outcompeted by a more flexible design platform, but inflexible enough to retain some of its original features.
For example, consider the programming language FORTRAN:
Originally developed by IBM … in the 1950s for scientific and engineering applications, Fortran came to dominate this area of programming early on and has been in continual use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, computational physics and computational chemistry. It is one of the most popular languages in the area of high-performance computing and is the language used for programs that benchmark and rank the world’s fastest supercomputers. (more)
FORTRAN isn’t the best possible programming language, but because it was first, it collected a powerful installed base well adapted to it. It has been flexible enough to stick around, but it isn’t infinitely flexible — one can very much recognize early FORTRAN features in current versions.
Similarly, humans have the advantage of being the first species to master culture in a powerful way. We have slowly accumulated many powerful innovations we call civilization, and we’ve invested a lot in adapting those innovations to the particulars of humanity. This installed base of ways in which civilization is matched well to humans gives us an advantage over creatures with substantially different designs.
If humans are flexible enough, but not too flexible, we may become the FORTRAN of future minds, clunky but still useful enough to keep around, noticeably retaining many of our original features.
I should note that some hope to preserve humanity by ending decentralized competition; they hope a central power will ensure that human features survive regardless of their local efficiency in future environments. I have a lot of concerns about that, but yes it should be included on the list of possibilities.
Both Nature and New Scientist recently covered the work of Peter Turchin, who suggests, based on prior trends, that the US is in for a new period of political instability peaking around 2020. He finds that historically US instability has peaked about every fifty years:
He also found this 50-year cycle in Roman and French history, but not in Chinese history. This evidence seems sufficient to mildly raise my expectation of instability at that time, relative to what I would have otherwise thought. Turchin also sees a 150-year cycle in six (de-trended) parameters that suggest instability:
This suggests a US peak in the decades surrounding 2040. Other civilizations have had such long waves, but with widely varying periods. This also mildly raises my expectation of instability in that period.
Even so, the strongest trend we see is a long term worldwide decline in such things. So my strongest expectation is for a continued long term decline in instability. But yes, let’s watch out for the US in 2020.
Don’t be thrown by a bit of silence at the start of the m4a one. We also don’t have the time right now to figure out how to put it in better formats. Sorry about that. If anyone else does, and posts such files, I’ll link to them.
A new NBER working paper suggests that similar venture capitalists (VCs) are worse at making or managing shared investments:
This paper explores two broad questions on collaboration between individuals. First, we investigate what personal characteristics affect people’s desire to work together. Second, given the influence of these personal characteristics, we analyze whether this attraction enhances or detracts from performance. Addressing these problems in the venture capital syndication setting, we show that venture capitalists exhibit strong detrimental homophily in their co-investment decisions. We find that individual venture capitalists choose to collaborate with other venture capitalists for both ability-based characteristics (e.g., whether both individuals in a dyad obtained a degree from a top university) and affinity-based characteristics (e.g., whether individuals in a pair share the same ethnic background, attended the same school, or worked for the same employer previously). Moreover, frequent collaborators in syndication are those venture capitalists who display a high level of mutual affinity. We find that while collaborating for ability-based characteristics enhances investment performance, collaborating for affinity-based characteristics dramatically reduces the probability of investment success. A variety of tests show that the cost of affinity is not driven by selection into inferior deals; the effect is most likely attributable to poor decision-making by high-affinity syndicates post investment. Taken together, our results suggest that non-ability-based “birds-of-a-feather-flock-together” effects in collaboration can be costly.
Given that homophily rather than heterophily remains the norm, it seems these investors are not learning this lesson, or value working and affiliating with similar peers over maximising profits. All very well for them. But if you have a project that you truly want to succeed, you may be better off doing it with a talented stranger rather than the college mates you clicked with on day one. And if you are letting others invest on your behalf, you should beware of handing your money over to a homogeneous friendship group.
I wonder if this kind of research influences the institutional investors who often fund VCs? If not, it would suggest that even this highly competitive investment market is falling short of its potential to fund and grow promising new companies.
Some research suggests that corporations with more female board members perform better, though the direction of causality is disputed. I doubt females are innately more talented board members, so the causation, if real, could be the result of female ‘outsiders’ generating better management than a clique of natural friends. Shareholders don’t share the benefits of board members enjoying each other’s company, so if they had effective control of the companies they owned you might expect them to appoint a diverse ‘team of rivals’ to the board to closely scrutinise one another’s ideas. My impression is that precisely the opposite is the norm.
Paul Krugman and I disagree on some things, but those disagreements are small compared to our common ground as fellow economists. Especially on the future, where most folks rely way too much on their intuitive naive social science. So it was a pleasure to read Krugman being reasonable on the future:
Maybe by the 24th century it’ll be different again, but I’m not so sure about that optimistic view of Captain Picard. One thing I think we see is that greed has a way of breaking through, no matter what we do on other fronts. … I think we’re probably going to have something like a market as far as the eye can see, although actually by the 24th century, since the artificial intelligences will probably be doing everything … I don’t know how they’ll do it, but we don’t need to know because they’ll do it. …
You’d like to imagine that we could eventually get to a point where we really are post-scarcity. But it’s a hard road. John Maynard Keynes wrote an optimistic essay called “Economic Possibilities for our Grandchildren” [PDF] in the ’30s where he talked about once the world was four times or eight times as rich as it was when he was writing, at that point we would no longer be concerned about material things and we could get past all of this striving and greed. And actually we are about as much richer as we were supposed to be according to [Keynes] projection, and somehow the striving and greed is still with us. So it’s a further away goal than we’d like to imagine. …
When I’m having a bad day, I try to think, “What are the possible routes by which we don’t turn into a dystopian society?” I mean, we’ve got the environmental threat, … [and] there’s real echoes of the 1930s in a lot of what’s going on politically, mostly in Europe, but there’s some of it here too. And information technology has been so far by and large a force for liberation, but it’s not too hard to see how it could turn into a force for the opposite. …
It’s quite possible that the long run state, that the natural state, except for special episodes, is one of extreme inequality. … I was asked to write something … written as if looking backwards from the year 2096. … I wrote of a society where basically not just the middle class was gone but education was devalued and wealth came largely just from owning resources — back to the old days of a resource-based aristocracy. We still think of that “Ozzie and Harriet society,” … that we had for a generation after World War II as being somehow the natural end state of modern technology, modern development, and I guess the balance of the evidence says, no, that’s not how it works. …
I’m not sure exactly how major media organizations are going to survive in the long run. … We thought for a while that it was going to be very democratizing, and it turns out not to be. … You end up with what is a very hierarchical system, in which a few people really do garner the great bulk of the attention in any particular area of discussion. (more; HT Tyler)
It seems here that for Krugman a society with extreme inequality must be dystopian, no matter what its other features. With that view I heartily disagree. But that’s a political value judgment, not economics.