Tag Archives: Social Science

Teaching Ignorance

It ain’t ignorance causes so much trouble; it’s folks knowing so much that ain’t so. – Josh Billings

Economics is important. So the world could use more of it. In the same sense, ignorance of economics is even more important. That is, the world could even more use a better understanding of how ignorant it is about economics. Let me explain.

Lately I’ve had a chance to see how folks like computer scientists, philosophers, futurists, and novelists think (in separate situations) when their work overlaps with areas where economists have great expertise. And what usually happens is that such folks just apply their ordinary intuitions on social behavior, without even noticing that they could ask or read economists to get more expert views. Which often leads them to make big avoidable mistakes, as these intuitions are often badly mistaken.

Yes, even folks who do realize that economists know more may not have the time to ask about or learn economics. But it seems that usually most people don’t even notice that they don’t know. Their subconscious quickly and naturally supplies them with subtly varying expectations on a wide range of social behaviors, and they don’t even notice that these intuitions might be wrong or incomplete. Which leads me to wonder: how do people ever realize that they don’t know physics, or accounting, or medicine?

Most people throw and move objects often, and have strong intuitions about such things. And if physics were only about such mechanics, I’d guess most people also wouldn’t realize that they don’t know physics. So it seems that a key is that “physics” is also associated with a bunch of big words and strange complex objects with which people don’t feel familiar. People hear words like “voltage” or “momentum”, or see inside cars or refrigerators, and they realize they don’t know what these words mean, or how those devices work.

Similarly for accounting and medicine, I’d guess that it is a wide use and awareness of strange and complex accounting terms and calculations, and strange and complex medical devices and treatments, that suggest to people that there must be experts in those fields. And even in economics, when people realize that they don’t know where money comes from, or which of many possible auction designs is better, they do turn to economists to learn more.

Kids often learn early on of the existence of specialized knowledge, from the existence of specialized language and complex devices. Kids like to show off by finding excuses to use specialized words, and showing that they can do unusual things with complex devices. And then other kids learn to see the related areas as those with specialized expertise.

So I’d guess that what the world most needs on economics is to get more kids to show off by using specialized concepts like “diminishing returns” and complex devices like auctions. And then they need to hear that this same “economics” can be used to work out good ways to do lots of social things, from buying and selling to voting to law to marriage. It is not so much that the world actually needs more kids using these concepts and devices. The important thing is to create a general impression that there are specialists for these topics.
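To make the “complex devices” point concrete, consider one of the auction designs economists analyze, the second-price (Vickrey) auction. This is a minimal illustrative sketch, not from the post; the names and numbers are made up:

```python
def second_price_auction(bids):
    """Run a sealed-bid second-price auction.

    The highest bidder wins, but pays the second-highest bid. This
    pricing rule is what makes bidding one's true value a dominant
    strategy, a distinctly non-obvious result of auction theory.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

print(second_price_auction({"alice": 30, "bob": 25, "carol": 10}))
# ('alice', 25)
```

The counterintuitive part, that the winner pays someone else’s bid, is exactly the kind of specialized device that signals there are experts behind it.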

The biggest obstacle to this plan, I’d guess, is that naive social science infuses too much of the rest of what kids are taught. Various history and “social studies” classes use naive social intuitions to explain major world events, and novels are read and discussed as if the naive social science they use is reasonable. Those who like using these things to push social agendas would object strongly to teaching instead that, e.g., you usually can’t figure out who are the bad guys in key historical events without complex economic analysis.

So the bottom line is that people don’t use enough econ because econ tends to conflict with the things people want to believe about the social world. Even teaching people that they are ignorant of econ conflicts, alas.

Math: Useful & Over-Used

Paul Krugman:

Noah Smith … on the role of math in economics … suggests that it’s mainly about doing hard stuff to prove that you’re smart. I share much of his cynicism about the profession, but I think he’s missing the main way (in my experience) that mathematical models are useful in economics: used properly, they help you think clearly, in a way that unaided words can’t. Take the centerpiece of my early career, the work on increasing returns and trade. The models … involved a fair bit of work to arrive at what sounds in retrospect like a fairly obvious point. … But this point was only obvious in retrospect. … I … went through a number of seminar experiences in which I had to bring an uncomprehending audience through until they saw the light.

Bryan Caplan:

I am convinced that most economath badly fails the cost-benefit test. … Out of the people interested in economics, 95% clearly have a comparative advantage in economic intuition, because they can’t understand mathematical economics at all. … Even the 5% gain most of their economic understanding via intuition. … Show a typical economist a theory article, and watch how he “reads” it: … If math is so enlightening, why do even the mathematically able routinely skip the math? … When mathematical economics contradicts common sense, there’s almost always mathematical sleight of hand at work – a sneaky assumption, a stilted formalization, or bad back-translation from economath to English. … Paul[’s] … seminar audiences needed the economath because their economic intuition was atrophied from disuse. I can explain Paul’s models to intelligent laymen in a matter of minutes.

Krugman replies:

Yes, there’s a lot of excessive and/or misused math in economics; plus the habit of thinking only in terms of what you can model creates blind spots. … So yes, let’s critique the excessive math, and fight the tendency to equate hard math with quality. But in the course of various projects, I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.

For most questions, the right answer has a simple intuitive explanation. The problem is: so do many wrong answers. Yes we also have intuitions for resolving conflicting intuitions, but we find it relatively easy to self-deceive about such things. Intuitions help people who do not think or argue in good faith to hold to conclusions that fit their ideology, and to not admit they were wrong.

People who instead argue using math are more often forced to admit when they were wrong, or that the best arguments they can muster only support weaker claims than those they made. Similarly, students who enter a field with mistaken intuitions often just do not learn better intuitions unless they are forced to learn to express related views in math. Yes, this typically comes at a huge cost, but it does often work.

We wouldn’t need to pay this cost as much if we were part of communities that argued in good faith. And students (like maybe Bryan) who enter a field with good intuitions may not need as much math to learn more good intuitions from teachers who have them. So for the purpose of drawing accurate and useful conclusions on economics, we could use less math if academics had better incentives for accuracy, such as via prediction markets. Similarly, we could use less math in teaching economics if we better selected students and teachers for good intuitions.

But in fact academic research and teaching put a low priority on accurate, useful conclusions, relative to showing off, and math is very helpful for that purpose. So the math stays. Indeed, I find it plausible, though hardly obvious, that moving to less math would increase useful accuracy even without better academic incentives or student selection. But groups who do this are likely to lose out in the contest to seem impressive.

A corollary is that if you personally just want to better understand some particular area of economics where you think your intuitions are roughly trustworthy, you are probably better off mostly skipping the math and instead reasoning intuitively. And that is exactly what I’ve found myself doing in my latest project to foresee the rough outlines of the social implications of brain emulations. But once you find your conclusions, then if you want to seem impressive, or to convince those with poor intuitions to accept your conclusions, you may need to put in more math.

That Old SF Prejudice

Back when I was a physics student in the late 1970s, my physics teachers were pretty unified in and explicit about their dislike for so-called social “sciences.” Not only is there no science there, they said, there is no useful knowledge of any sort – it was all “pseudo” science as useless as astrology. Lots of “hard” scientists are taught to think pretty much the same thing today, but since our world is so much more politically sensitive, they also know to avoid saying so directly.

Old school science fiction authors were taught pretty much the same thing and sometimes they say so pretty directly. Case in point, Arthur C. Clarke [ACC]:

TM: Why has science fiction seemed so prescient?

ACC: Well, we mustn’t overdo this, because science fiction stories have covered almost every possibility, and, well, most impossibilities — obviously we’re bound to have some pretty good direct hits as well as a lot of misses. But, that doesn’t matter. Science fiction does not attempt to predict. It extrapolates. It just says, “What if?” not what will be? Because you can never predict what will happen, particularly in politics and economics. You can to some extent predict in the technological sphere — flying, space travel, all these things, but even there we missed really badly on some things, like computers. No one imagined the incredible impact of computers, even though robot brains of various kinds had been — my late friend, Isaac Asimov, for example, had — but the idea that one day every house would have a computer in every room and that one day we’d probably have computers built into our clothing, nobody ever thought of that. …

To be a science fiction writer you must be interested in the future and you must feel that the future will be different and hopefully better than the present. …

TM: What’s a precondition for being a science fiction writer other than an interest in the future?

ACC: Well, an interest — at least an understanding of science, not necessarily a science degree but you must have a feeling for the science and its possibilities and its impossibilities, otherwise you’re writing fantasy. …

TM: Is it fair to call some science fiction writers prophets in a way?

ACC: Yes, but accidental prophets, because very few attempt to predict the future as they expect it will be. They may in some cases, and I’ve done this myself, write about — try to write about — futures as they hope they will be, but I don’t know of anyone that’s ever said this is the way the future will be. … I don’t think there is such a thing as a real prophet. You can never predict the future. We know why now, of course; chaos theory, which I got very interested in, shows you can never predict the future. (more)

You see? The reason to be interested in science fiction is an interest in what will actually happen in the future, and the reason fantasy isn’t science fiction is that it gets the future wrong because it doesn’t appreciate scientific possibilities like flying, space travel, and computers. But chaos theory says you can’t predict anything about politics or economics because that’s all just random. Sigh.

Of course folks like Doug Engelbart were in fact predicting things about the social implications of computers back when Clarke made his famous movie 2001, but Clarke apparently figures that if the physics and sf folks he talked to didn’t know something, no one knew. Today’s science fiction authors also know better than to say such things directly, but it is really what many of them think: our tech future is predictable, but our social future is not, because physical science exists and social science does not.

Added 10a: Note how it is easy to entice commenters to say they agree with the claim that there is no social science, but it is much harder to get a prominent physics or sf blogger to say so in a post. Lots of them think similarly, but know not to say so publicly.

Impressive Power

Monday I attended a conference session on the metrics academics use to rate and rank people, journals, departments, etc.:

Eugene Garfield developed the journal impact factor a half-century ago based on a two-year window of citations. And more recently, Jorge Hirsch invented the h-index to quantify an individual’s productivity based on the distribution of citations over one’s publications. There are also several competing “world university ranking” systems in wide circulation. Most traditional bibliometrics seek to build upon the citation structure of scholarship in the same manner that PageRank uses the link structure of the web as a signal of importance, but new approaches are now seeking to harness usage patterns and social media to assess impact. (agenda; video)
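The h-index mentioned above has a simple definition: the largest h such that an author has h publications with at least h citations each. As a minimal sketch (the example citation counts are made up):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: a single blockbuster paper can't raise h
```

Note how the metric deliberately discounts both uncited papers and runaway citation counts on a single paper, a design choice worth keeping in mind for the discussion below.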

Session speakers discussed such metrics in an engineering mode, listing good features metrics should have, and searching for metrics with many good features. But it occurred to me that we can also discuss metrics in social science mode, i.e., as data to help us distinguish social theories. You see, many different conflicting theories have been offered about the main functions of academia, and about the preferences of academics and their customers, such as students, readers, and funders. And the metrics that various people prefer might help us to distinguish between such theories.

For example, one class of theories posits that academia mainly functions to increase innovation and intellectual progress valued by the larger world, and that academics are well organized and incentivized to serve this function. (Yes such theories may also predict individuals favoring metrics that rate themselves highly, but such effects should wash out as we average widely.) This theory predicts that academics and their customers prefer metrics that are good proxies for this ultimate outcome.

So instead of just measuring the influence of academic work on future academic publications, academics and customers should strongly prefer metrics that also measure wider influence on the media, blogs, business practices, ways of thinking, etc. Relative to other kinds of impact, such metrics should focus especially on relevant innovation and intellectual progress. This theory also predicts that, instead of just crediting the abstract thinkers and writers in an academic project, there are strong preferences for also crediting supporting folks who write computer programs, built required tools, do tedious data collection, give administrative support, manage funding programs, etc.

My preferred theory, in contrast, is that academia mainly functions to let outsiders affiliate with credentialed impressive power. Individual academics show exceptional impressive abstract mental abilities via their academic work, and academic institutions credential individual people and works as impressive in this way, by awarding them prestigious positions and publications. Outsiders gain social status in the wider world via their association with such credentialed-as-impressive folks.

Note that I said “impressive power,” not just impressiveness. This is the new twist that I’m introducing in this post. People clearly want academics to show not just impressive raw abilities, but also to show that they’ve translated such abilities into power over others, especially over other credentialed-as-impressive folks. I think we also see similar preferences regarding music, novels, sports, etc. We want people who make such things to show not only that they have impressive abilities in music, writing, athletics, etc., we also want them to show that they have translated such abilities into substantial power to influence competitors, listeners, readers, spectators, etc.

My favored theory predicts that academics will be uninterested in and even hostile to metrics that credit the people who contributed to academic projects without thereby demonstrating exceptional abstract mental abilities. This theory also predicts that while there will be some interest in measuring the impact of academic work outside academia, this interest will be mild relative to measuring impact on other academics, and will focus mostly on influence on other credentialed-as-impressives, such as pundits, musicians, politicians, etc. This theory also predicts little extra interest in measuring impact on innovation and intellectual progress, relative to just measuring a raw ability to change thoughts and behaviors. This is a theory of power, not progress.

Under my preferred theory of academia, innovation and intellectual progress are mainly side-effects, not main functions. They may sometimes be welcome side effects, but they mostly aren’t what the institutions are designed to achieve. Thus proposals that would tend to increase progress, like promoting more inter-disciplinary work, are rejected if they make it substantially harder to credential people as mentally impressive.

You might wonder: why would humans tend to seek signals of the combination of impressive abilities and power over others? Why not signal these things separately? I think this is yet another sign of homo hypocritus. For foragers, directly showing off one’s power is quite illicit, and so foragers had to show power indirectly, with strong plausible deniability. We humans evolved to lust after power and those who wield power, but to pretend our pursuit of power is accidental; we mainly just care about beauty, stories, exciting contests, and intellectual progress. Or so we say.

So does anyone else have different theories of academia, with different predictions about which metrics academics and their customers will prefer? I look forward to the collection of data on who prefers which metrics, to give us sharper tests of these alternative theories of the nature and function of academia. And theories of music, stories, sport, etc.

Drexler Responds

Three weeks ago I critiqued Eric Drexler’s book Radical Abundance. Below the fold is his reply, and my response.

My Critique Of Drexler

My last post quoted Drexler on science vs. engineering. Here he is on exploratory engineering:

Exploring, not the time-bound consequences of human actions, but the timeless implications of known physical law. …. Call it “exploratory engineering”; as applied by Tsiolkovsky a century ago, this method of study showed that rocket technology could open a world beyond the bounds of the Earth. Applied today, this method shows that atomically precise technologies can open a world beyond the bounds of the Industrial Revolution.

Drexler’s most famous book was his ’86 Engines of Creation, but his best was his ’92 Nanosystems, which explored nanotech engineering. The book shows impressive courage, venturing far beyond familiar intellectual shores, impressive breadth, requiring mastery of a wide range of science and engineering, and impressive accomplishment, as little in there is likely to be very wrong. This makes Drexler one of my heroes, and an inspiration in my current efforts to think through the social implications of ems.

Alas, Drexler also deserves some criticism. His latest book, Radical Abundance, like several prior books, goes well beyond physical science and engineering to discuss social implications at length. Alas, though his impressive breadth doesn’t extend much into social science, like most “hard” sci/tech folks Drexler seems mostly unaware of this. He seems to toss together his own seat-of-the-pants social reasoning as best he can, and then figure that anything he can’t work out must be unknown to all. Sometimes this goes badly.

Is Social Science Extremist?

I recently did two interviews with Nikola Danaylov, aka “Socrates”, who has so far done ~90 Singularity 1 on 1 video podcast interviews. Danaylov says he disagreed with me the most:

My second interview with economist Robin Hanson was by far the most vigorous debate ever on Singularity 1 on 1. I have to say that I have rarely disagreed more with any of my podcast guests before. … I believe that it is ideas like Robin’s that may, and often do, have a direct impact on our future. … On the one hand, I really like Robin a lot: He is that most likeable fellow … who like me, would like to live forever and is in support of cryonics. In addition, Hanson is also clearly a very intelligent person with a diverse background and education in physics, philosophy, computer programming, artificial intelligence and economics. He’s got a great smile and, as you will see throughout the interview, is apparently very gracious to my verbal attacks on his ideas.

On the other hand, after reading his book draft on the [future] Em Economy I believe that some of his suggestions have much less to do with social science and much more with his libertarian bias and what I will call “an extremist politics in disguise.”

So, here is the gist of our disagreement:

I say that there is no social science that, in between the lines of its economic reasoning, can logically or reasonably suggest details such as: policies of social discrimination and collective punishment; the complete privatization of law, detection of crime, punishment and adjudication; that some should be run 1,000 times faster than others, while at the same time giving them 1,000 times more voting power; that emulations who can’t pay for their storage fees should be either restored from previous back-ups or be outright deleted (isn’t this like saying that if you fail to pay your rent you should be shot dead?!)…

Suggestions like the above are no mere details: they are extremist bias for Laissez-faire ideology while dangerously masquerading as (impartial) social science. … Because not only that he doesn’t give any justification for the above suggestions of his, but also because, in principle, no social science could ever give justification for issues which are profoundly ethical and political in nature. (Thus you can say that I am in a way arguing about the proper limits, scope and sphere of economics, where using its tools can give us any worthy and useful insights we can use for the benefit of our whole society.) (more)

You might think that Danaylov’s complaint is that I use the wrong social science, one biased too far toward libertarian conclusions. But in fact his complaint seems to be mainly against the very idea of social science: an ability to predict social outcomes. He apparently argues that since 1) future social outcomes depend on many billions of individual choices, 2) ethical and political considerations are relevant to such choices, and 3) humans have free will to be influenced by such considerations in making their choices, that therefore 4) it should be impossible to predict future social outcomes at a rate better than random chance.

For example, if allowing some ems to run faster than others might offend common ethical ideals of equality, it must be impossible to predict that this will actually happen. While one might be able to use physics to predict the future paths of bouncing billiard balls, as soon as a human with free will enters the picture, making a choice where ethics is relevant, all must fade into an opaque cloud of possibilities; no predictions are possible.

Now I haven’t viewed them, but I find it extremely hard to believe that out of 90 interviews on the future, Danaylov has always vigorously complained whenever anyone even implicitly suggested that they could do any better than random chance in guessing future outcomes in any context influenced by a human choice where ethics or politics might have been relevant. I’m in fact pretty sure he must have nodded in agreement with many explicit forecasts. So why complain more about me then?

It seems to me that the real complaint here is that I forecast that human choices will in fact result in outcomes that violate the ethical principles Danaylov holds dear. He objects much more to my predicting a future of more inequality than if I had predicted a future of more equality. That is, I’m guessing he mostly approves of idealistic, and disapproves of cynical, predictions. Social science must be impossible if it would predict non-idealistic outcomes, because, well, just because.

FYI, I also did this BBC interview a few months back.

Rah Simple Scenarios

Scenario planning is a popular way to think about possible futures. In scenario planning, one seeks a modest number of scenarios that are each internally consistent, story-like, describe equilibrium rather than transitory situations, and are archetypal in representing clusters of relevant driving forces. The set of scenarios should cover a wide range of possibilities across key axes of uncertainty and disagreement.

Ask most “hard” science folks about scenario planning and they’ll roll their eyes, seeing it as hopelessly informal and muddled. And yes, one reason for its popularity is probably that insiders can usually make it say whatever they want it to say. Nevertheless, when I try to think hard about the future I am usually drawn to something very much like scenario planning. It does in fact seem a robustly useful tool.

It often seems useful to collect a set of scenarios defined in terms of their reference to a “baseline” scenario. For example, macroeconomic scenarios are often defined in terms of deviation from baseline projections of constant growth, stable market shares, etc.

If one chooses a most probable scenario as a baseline, as in microeconomic projections, then variations on that baseline may conveniently have similar probabilities to one another. However, it seems to me that it is often more useful to instead pick baselines that are simple, i.e., where they and simple variations can be more easily analyzed for their consequences.

For example even if a major war is likely sometime in the next century, one may prefer to use as a baseline a scenario where there are no such wars. This baseline will make it easier to analyze the consequences of particular war scenarios, such as adding a war between India and Pakistan, or between China and Taiwan. Even if a war between India and Pakistan is more likely than not within a century, using the scenario of such a war as a baseline will make it harder to define and describe other scenarios as variations on that baseline.

Of course the scenario where an asteroid destroys all life on Earth is extremely simple, in the sense of making it very easy to forecast socially relevant consequences. So clearly you usually don’t want the simplest possible scenario. You instead want a mix of reasons for choosing scenario features.

Some features will be chosen because they are central to your forecasting goals, and others will be chosen because they seem far more likely than alternatives. But still other baseline scenario features should be chosen because they make it easier to analyze the consequences of that scenario and of simple variations on it.

In economics, we often use competitive baseline scenarios, i.e., scenarios where supply and demand analysis applies well. We do this not so much because we believe that this is the usual situation, but because such scenarios make great baselines. We can more easily estimate the consequences of variations by seeing them as situations where supply or demand changes. We also consider variations where supply and demand applies less well, but we know it will be harder to calculate the consequences of such scenarios and variations on them.
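A minimal sketch of why competitive baselines are easy to work with: with linear curves, a variation like a demand shift changes just one parameter, and the new equilibrium falls out of the same two-line calculation. The curves and numbers here are illustrative assumptions, not from the post:

```python
def equilibrium(a, b, c, d):
    """Competitive equilibrium for linear demand Q = a - b*p
    and linear supply Q = c + d*p.

    Setting quantity demanded equal to quantity supplied,
    a - b*p = c + d*p, gives p* = (a - c) / (b + d).
    """
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

# Baseline: demand Q = 100 - 2p, supply Q = 10 + p.
p0, q0 = equilibrium(100, 2, 10, 1)   # p* = 30.0, q* = 40.0
# Variation: demand shifts out by 30 units; only one parameter changes.
p1, q1 = equilibrium(130, 2, 10, 1)   # p* = 40.0, q* = 50.0
```

In a non-competitive variation (monopoly, externalities, strategic behavior) no such closed form is available, which is exactly the sense in which those scenarios are harder to analyze as deviations from the baseline.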

Yes, it is often a good idea to first look for your keys under the lamppost. Your keys are probably not there, but that is a good place to anchor your mental map of the territory, so you can plan your search of the dark.

Grace-Hanson Podcast

Katja and I recorded a new podcast, this time on Relations (wmv, mp3)

Can Humans Be The FORTRAN Of Creatures?

It is one of the most fundamental questions in the social and human sciences: how culturally plastic are people? Many anthropologists have long championed the view that humans are very plastic; with matching upbringing people can be made to behave a very wide range of ways, and to want a very wide range of things. Others say human nature is far more constrained, and collect descriptions of “human universals” (See Brown’s 1991 book.)

This dispute has been politically potent. For example, in gender relations some have said that social institutions should reflect the fact that men and women have certain innate differences, while others say that we can pick most any way we want the genders to relate, and then teach our children to be like that.

But let’s set those issues aside, look to the distant future, and ask: do varying degrees of human cultural plasticity make different predictions about the future?

The easiest predictions are at the extremes. For example, if human nature is extremely rigid, and hard to change, then humans will most likely just go extinct. Eventually environments will change, and other creatures will evolve or be designed that are better adapted to those new environments. Humans won’t adapt very well, by assumption, so they lose.

At the other extreme, if human nature is very plastic, then it will adapt to most changes, and change to embody whatever innovations are required for such adaptation. But then there would be very little left of us by the end; our descendants would become whatever any initially very plastic species would have become in such an environment.

So if you want some distinctive human features to last, you’ll have to hope for an intermediate level of plasticity. Human nature has to be flexible enough to not be outcompeted by a more flexible design platform, but inflexible enough to retain some of its original features.

For example, consider the programming language FORTRAN:

Originally developed by IBM … in the 1950s for scientific and engineering applications, Fortran came to dominate this area of programming early on and has been in continual use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, computational physics and computational chemistry. It is one of the most popular languages in the area of high-performance computing and is the language used for programs that benchmark and rank the world’s fastest supercomputers. (more)

FORTRAN isn’t the best possible programming language, but because it was first, it collected a powerful installed base well adapted to it. It has been flexible enough to stick around, but it isn’t infinitely flexible — one can very much recognize early FORTRAN features in current versions.

Similarly, humans have the advantage of being the first species to master culture in a powerful way. We have slowly accumulated many powerful innovations we call civilization, and we’ve invested a lot in adapting those innovations to the particulars of humanity. This installed base of ways civilization is matched well to humans gives us an advantage over creatures with a substantially differing design.

If humans are flexible enough, but not too flexible, we may become the FORTRAN of future minds, clunky but still useful enough to keep around, noticeably retaining many of our original features.

I should note that some hope to preserve humanity by ending decentralized competition; they hope a central power will ensure that human features survive regardless of their local efficiency in future environments. I have a lot of concerns about that, but yes it should be included on the list of possibilities.
