A Tangled Task Future

Imagine that you want to untangle a pile of cables. It wasn’t tangled on purpose; tangling just resulted naturally from how these cables were used. You’d probably look for the least tangled cable in the least tangled part of the pile, and start to work there. In this post I will argue that, in a nutshell, this is how we are slowly automating our world of work: we are un- and re-tangling it.

This has many implications, including for the long-term future of human-like creatures in a competitive world. But first we have a bit of explaining to do.


A Call To Adventure

I turn 58 soon, and I’m starting to realize that I may not live long enough to finish many of my great life projects. So I want to try to tempt younger folks to continue them. Hence this call to adventure.

One way to create meaning for your life is to join a grand project. Or start a new one. A project that is both obviously important, and that might also bring you personal glory, if you were to make a noticeable contribution to it.

Yes, most don’t seek meaning this way. But many of our favorite fictional characters do. If you are one of the few who find grand adventures irresistibly romantic, then this post is for you. I call you to adventure.

Two great adventures actually, in this post. Both seem important, and in the ballpark of doable, at least for the right sort of person.

ADVENTURE ONE: The first adventure is to remake collective decision-making via decision markets (a.k.a. futarchy). Much of the pain and loss in the world results from bad decisions by key organizations, such as firms, clubs, cities, and nations. Some of these bad decisions result because actors with the wrong mix of values hold too much power. But most result from our not aggregating info well; people who could have or did know better were not enticed enough to share what they know. Or others didn’t believe them.

We actually know of a family of simple robust mechanisms that typically do much better at aggregating info. And we have a rough idea of how organizations could use such mechanisms. We even have a large academic literature testing and elaborating these mechanisms, resulting in a big pile of designs, theorems, software, computer simulations, lab tests, and field tests. We don’t need more of these, at least for now.
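To make this mechanism family concrete, here is a minimal illustrative sketch (in Python) of a logarithmic market scoring rule (LMSR), a standard automated market maker from that literature, applied to a binary decision-relevant question. The liquidity parameter, class name, and tiny trading interface are assumptions for illustration, not the design of any deployed system.

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) maker for a binary
    question, e.g. "will revenue rise if we adopt policy X?". Parameter
    names and interface here are illustrative assumptions only."""

    def __init__(self, b=100.0):
        self.b = b          # liquidity: higher b makes prices harder to move
        self.q_yes = 0.0    # outstanding YES shares
        self.q_no = 0.0     # outstanding NO shares

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * ln(e^{q_yes/b} + e^{q_no/b})
        return self.b * math.log(math.exp(q_yes / self.b) +
                                 math.exp(q_no / self.b))

    def price_yes(self):
        # Current YES price doubles as the market's implied probability.
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        # Cost to buy `shares` of `outcome`; buying moves the price, so a
        # trader profits only by correcting a mispriced probability.
        new_yes = self.q_yes + (shares if outcome == "YES" else 0.0)
        new_no = self.q_no + (shares if outcome == "NO" else 0.0)
        cost = self._cost(new_yes, new_no) - self._cost(self.q_yes, self.q_no)
        self.q_yes, self.q_no = new_yes, new_no
        return cost

m = LMSRMarket(b=100.0)
print(round(m.price_yes(), 2))   # starts at an even 0.5
m.buy("YES", 50.0)
print(round(m.price_yes(), 2))   # informed buying pushes the price up
```

The point of the design is that anyone who knows better is enticed to trade, and the resulting price aggregates what they know into a single decision-relevant number.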

What we need is concrete evolution within real organizations. Like most good abstract ideas, what this innovation most needs are efforts to work out variations that can fit well in particular existing organization contexts. That is, design and try out variations that can avoid the several practical obstacles that we know about, and help identify more such obstacles to work on.

This adventure needs fewer intellectuals, and more sharp folks willing to get their hands dirty dealing with the complexities of real organizations, with enough pull to get real organizations near them to try new and disruptive methods.

Since these mechanisms have great potential in a wide range of organizations, we first need to create versions that are seen to work reliably over a substantial time in concrete contexts where substantial value is at stake. With such a concrete track record, we can then push to get related versions tried in related contexts. Eventually such diffusion could result in better collective decision making worldwide, for many kinds of organizations and decisions.

And you might have been one of the few brave far-sighted heroes who made it happen.

ADVENTURE TWO: The second adventure is to figure out real typical human motives in typical familiar situations. You might think we humans would have figured this out long ago. But as Kevin Simler and I argue in our new book The Elephant in the Brain: Hidden Motives in Everyday Life, we seem to be quite mistaken about our basic motives in many familiar situations.

Kevin and I don’t claim that our usual stated motives aren’t part of the answer, only that they are much less than we like to think. We also don’t claim to have locked down the correct answer in all these situations. We instead offer plausible enough alternatives to suggest that the many puzzles with our usual stories are due to more than random noise. There really are systematic hidden motives behind our behaviors, motives substantially different from the ones we claim.

A good strategy for uncovering real typical human motives is to triangulate the many puzzles in our stated motives across a wide range of areas of human behavior. In each area specialists tend to think that the usual stated motive deserves to be given a strong prior, and they rarely think we’ve acquired enough “extraordinary evidence” to support the “extraordinary claims” that our usual stated motives are wrong. And if you only ever look at evidence in a narrow area, it can be hard to escape this trap.

The solution is to expect substantial correlations between our motives in different areas. Look for hidden motive explanations of behaviors that can simultaneously account for puzzles in a wide range of areas, using only a few key assumptions. By insisting on a high ratio of apparently different puzzles explained to new supporting assumptions made, you can keep yourself disciplined enough not to be fooled by randomness.
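The ratio discipline above can be caricatured in a few lines of code: score each candidate account by puzzles explained per new assumption made, and prefer the higher ratio. The account names and counts below are entirely hypothetical, just to show the test.

```python
# Toy sketch of the triangulation discipline: prefer accounts that
# explain many cross-area puzzles per new assumption. The candidate
# accounts and their counts here are hypothetical illustrations.

def explanatory_score(puzzles_explained, assumptions_made):
    """Ratio of distinct puzzles explained to new assumptions made;
    a crude guard against being fooled by randomness."""
    if assumptions_made == 0:
        raise ValueError("every account makes at least one assumption")
    return puzzles_explained / assumptions_made

candidates = {
    # name: (puzzles explained across areas, new assumptions required)
    "stated motives, patched per area": (12, 10),
    "one hidden-motive account": (12, 2),
}

best = max(candidates, key=lambda k: explanatory_score(*candidates[k]))
print(best)  # the account with more puzzles per assumption wins
```

A patchwork of area-specific excuses explains the same puzzles only by spending an assumption on each, so it scores far worse than one account that spans many areas.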

This strategy is most effective when executed over a lifetime. The more different areas that you understand well enough to see the key puzzles and usual claims, the better you can triangulate their puzzles to find common explanations. And the more areas that you have learned so far, the easier it becomes to learn new areas; areas and methods used to study them tend to have many things in common.

This adventure needs more intellectual heroes. While these heroes may focus for a time on studying particular areas, over the long run their priority is to learn and triangulate many areas. They seek simple coherent accounts that explain diverse areas of human behavior. To figure out what the hell most humans are actually up to most of the time. Which we do not actually know now. And which would enable better policy; today policy reform efforts are often wasted due to mistaken assumptions about actual motives.

Wouldn’t someone who took a lifetime to help work that out be a hero of the highest order?

Come, adventures await. For the few, the brave, the determined, the insightful. Might that be you?


Conformity Excuses

From a distance it seems hard to explain a lot of human behavior without presuming that we humans have strong desires to conform to the behaviors of others. But when we look at our conscious thoughts and motivations regarding our specific behaviors, we find almost no conformity pressures. We are only rarely aware that we do anything, or avoid doing other things, because we want to conform.

The obvious explanation is that we make many excuses for our conformity – we make up other mostly-false explanations for why we like the same things that others like, and dislike other things. And since we do a lot of conforming, there must be a lot of bias here. So we can uncover and understand a lot of our biases if we can identify and understand these excuses. Here are a few possibilities that come to mind. I expect there are many others.

I picked my likes first, my group second. We like to point out that we are okay with liking many things that many others in the world don’t like. Yes, the people around us tend to like those same things, but that isn’t us conforming to those social neighbors, because we picked the things we like first, and then picked those people around us as a consequence. Or so we say. But we conform far more to our neighbors than can plausibly be explained by our limited selection power.

I just couldn’t be happy elsewhere. We tend to tell ourselves that we couldn’t be happy in a different profession, city, or culture, in part to excuse our reluctance to deviate from the standard practices of such things. We’d actually adjust fine to much larger moves than we are willing to consider.

I actually like small differences. We notice that we don’t like to come to a party in the exact same dress as someone else. We also want different home decorations and garden layouts, and we don’t want to be reading the exact same book as everyone else at the moment. We then extrapolate and think we don’t mind being arbitrarily different.

In the future, this will be more popular. We are often okay with doing something different today because we imagine that it will become much more popular later. Then we can be celebrated for being one of the first to like it. If we were sure that few would ever like it, we’d be much less willing to like it now.

Second tier folks aren’t remotely as good. While we personally can tell the difference between someone who is very bad and someone who is very good, we usually just don’t have the discernment to tell the difference in quality between the most popular folks and second tier folks who are much less popular. But we tell ourselves that we can tell the difference, to justify our strong emphasis on those most popular folks.

Unpopular things are objectively defective. We probably make many specific excuses about unpopular things, to justify our neglect of them.


Design A Better Chess

Friday the Wall Street Journal published my review of Garry Kasparov’s new book Deep Thinking. I end with:

I’ve always been a bit skeptical of the high status of chess champions, whom many consider intellectuals (rather than, say, sports stars). But in “Deep Thinking,” Mr. Kasparov has changed my mind. He praises Mikhail Botvinnik, the founder of the Soviet chess school where he trained, for practicing an “intense regime of self-criticism.” Chess champions are rewarded for brutal honesty about their habits and strategies. If only most tenured professors and business executives were this conscious of their limitations and blind spots.

“Few young stars in any discipline are aware of why they excel,” Mr. Kasparov writes. Like Mr. Kasparov, I don’t know why he was great. But I know now why I’m glad we have him. We need at least a few of our most celebrated minds to be this intellectually honest with themselves, and with us.

While all sports reward honesty and self-criticism about your sports performance, in more intellectual sports that honesty can more influence your opinions on more important topics. Which raises the question: can we design a game that promotes even more useful honesty? As I spent some of my youth doing game design, and had a friend who shared that interest, I know that designing games is hard; there are many relevant constraints of which most players are unaware (see the usual literature). For this game design task, all those usual constraints apply, and we must attend to some added criteria:

  • Relevant: We’d like the topics where this game rewards insight and understanding to be closer to the topics that matter, where brutal honesty would be more useful to the world.
  • Fair: Even with relevant topics, the game can’t seem to greatly favor people who by class or culture get much more direct personal info and experience regarding those relevant topics. Anyone should be able to learn the game by playing it.
  • Fragmented: Performance must be broken into many little games, where winning one game gives little or no direct advantage in future games. Thus consistent wins allow strong inferences on underlying ability.
  • Isolated: Players can’t easily get help from hidden allies outside the game.
  • Status: Chess is seen as very high status, because so many high status people have treated it as high status for so long. Somehow this new game needs to have a shot at achieving a status that high.

If these criteria could be met, high capability people might try to achieve status by consistently winning at this game, the opinions they generate on relevant topics might be more honest and accurate, and the rest of us might then be more inclined to listen to those accurate and relevant opinions.


I’m Not Seaing It

Imagine you run a small business in an area where a criminal organization runs a protection racket. “Nice shop here, shame if something were to happen to it.” So you pay.

Someone tells you that they’ve never seen payment demanded from the homeless guy who sells pencils on the corner. Nor the shady guy who sells watches in the alley. And maybe not even from food trucks.

So this person suggests that you relocate your small business to a truck. Or at least a trailer park. Because then criminals might not bother you. And if they do you can more easily move to another town. You should also move your home to an RV, or a trailer park, for the same reason.

Enough of you together might even create whole mobile towns that better evade both local criminals and local governments. If locals don’t treat you right, you’ll be outta there. Your group could then govern itself more, instead of having to do what locals say. And that would create more experiments in governance, which would help the world to innovate and improve our mechanisms of governance.

This isn’t fantasy because trucks, RVs, and trailer parks already exist. Oh and have you heard of all the great ideas for improving trucks? There are ideas for how trucks could be used to make energy, food, and potable water, and how they could clean up pollution and pull CO2 from the air. Anything you think is expensive on a truck might soon be cheap. What are you waiting for!?

Not persuaded? That’s how I feel about Joe Quirk and Patri Friedman’s new book Seasteading: How Floating Nations will Restore the Environment, Enrich the Poor, Cure the Sick, and Liberate Humanity from Politicians.

They argue that cruise ships and oil rig platforms prove that we already know how to live on the ocean. And we have so many great new related ideas — there are ways to make ocean houses, things ocean machines could do, and products and services that ocean living people could sell. The book is mostly about all those great ocean ideas, for food, energy, clean water, CO2, etc.

Presumably, in time the usual profit motives would get all that ocean tech developed without your help. The reason Quirk and Friedman say they wrote this book, to entice you to help, is that they think sea-living folks could create more experiments in governance, because nations don’t officially claim control over people far from shore. And offshore mobility would enable a different better set of experiments. They are hoping you care enough about that to go live on the ocean.

In 366 pages the authors are careful to never say which particular governance variations they are so eager to try, variations that are today blocked by all land governments everywhere. Somewhat suspiciously like blockchain folks eager for “commerce” without government interference. (They just want to trade “stuff,” okay?)

The book talks about seeking approval from governments for early experiments, and wanting to keep good relations with neighboring nations. Seasteads won’t be used to evade taxes, they say. And whatever products and services they sell to land-based customers must meet regulations that those customers must live by.

Long ago people who didn’t like local governments tended to head for mountains and jungles, where they were harder to find and tax. That doesn’t work as well today, as governments can now find people much more easily, even on the ocean.

The book suggests that seastead mobility would make governance different and better for them. But one must pay a big added cost for mobility, both on land and sea. And the cost of moving large seasteads seems to me comparable to the cost to move a home or business located in a trailer on land. Yet the existence of trailer parks hasn’t obviously unleashed much great land governance.

The book claims that nations won’t interfere w/ seasteads because “China has not invaded Hong Kong. Malaysia has not invaded Singapore .. The Cayman Islands .. adopts a spiteful stance toward US and EU regulator policies” (p.270). Yet as recently as 1982 an international treaty UNCLOS extended national powers out to 200+ miles, within which nations “reserve the right to regulate `artificial islands, installations, and structures.’” (p.13) It seems to me that when there is enough economic activity in the oceans, nations would get around to trying to control it.

Yeah nations can be slow to act, so maybe there’d be some interim period when seasteads could experiment. But even then I find it hard to imagine that seasteads would substantially increase the total governance experimentation on Earth, even for an interim.

The world is full of families, firms, clubs, churches, group homes able to try many governance variations. Apparently, “there are close to 600,000 cities, towns, villages, hamlets etc. in the world.” Some of these are “intentional communities” that experiment with many social variations, in far easier environments than the ocean.

Yes, many governance variations do not seem to have been tried much, but that seems mostly due to a lack of interest. I can’t get people to do futarchy experiments, even though it could be tried in organizations of most any size. Scholars have proposed many as-yet-untried governance mechanisms, such as voting rules, that could also be tried in organizations of any size. US libertarians can’t even get enough of them to move to New Hampshire to make a big governance difference there.

Yes, there are far fewer such polities in the world that could try experiments on governance issues that only apply to polities containing at least a million people. But I find it hard to imagine a million people all going to live on the sea just so they can do experiments at that scale. And even if they did, it would only create a small percentage change in the number of such polities.

Maybe if ocean tech advances as fast as some hope, many will eventually live on the ocean, just for the economic benefits. But in that case I expect the usual nations to extend control over this new activity. And any new governance units that do form would only add a small fraction to Earth entities able to experiment with governance variations.

My guess is that the real appeal here is related to why people find pirate stories “romantic.” They just like the abstract idea that pirates are “free”, even if they don’t have any particular forbidden action in mind to do as a pirate. And just as most who enjoy reading pirate stories would never actually choose to be a pirate, most seastead supporters like the idea of supporting sea “freedom”, even if there’s no way they’d go live on the ocean, and even if they have no particular usually-forbidden thing they want “free” sea folks to try.

Seasteading, I’m just not seaing it.


When to Parrot, Pander, or Think for Yourself

Humans are built to argue and persuade. We tend to win when we endorse arguments that others accept, and win even more when we can generate new arguments that others will accept. This is both because people notice who originated the arguments that they accept, and because this ability helps us to move others toward opinions that favor our policies and people.

All of this is of course relative to some community who evaluates our arguments. Sometimes the larger world defers to a community of experts, and then it is that community who you must persuade. In other cases, people insist on deciding for themselves, and then you have to persuade them directly.

Consider three prototypical discussions:

  1. Peers in a car, talking about which route to drive to reach an event for which they are late.
  2. Ordinary people, talking about whether and how black holes leak information.
  3. Parents, talking about how Santa Claus plans to deliver presents on Christmas Eve.

In case #1, it can be reasonable for peers to think sincerely, in the sense of looking for arguments to persuade themselves, and then offering those same arguments to each other. It can be reasonable here to speak clearly and directly, to find and point out flaws in others’ arguments, and to believe that the net result is to find better approximations to truth.

In case #2, most people are wise to mostly parrot what they hear experts say on the topic. The more they try to make up their own arguments, or even to adapt arguments they’ve heard to particular contexts, the more they risk looking stupid. Especially if experts respond. On such topics, it can pay to be abstract and somewhat unclear, so that one can never be clearly shown to be wrong.

In case #3, parents gain little from offering complex new arguments, or even finding flaws in the usual kid arguments, at least when only parents can understand these. Parents instead gain from finding variations on the usual kid arguments that kids can understand, variations that get kids to do what parents want. Parents can also gain from talking at two levels at once, one discussion at a surface visible to kids, and another at a level visible only to other parents.

These three cases illustrate the three general cases, where your main audience is 1) about as capable as you, 2) more capable, or 3) less capable than you in generating and evaluating arguments on the topic. Your optimal argumentation strategy depends on which of these cases you find yourself in.

When your audience is about the same as you, you can most usefully “think for yourself”, in the sense that if an argument persuades you it will probably persuade your audience as well, at least if it uses popular premises. So you can be more comfortable in thinking sincerely, searching for arguments that will persuade you. You can be eager to find fault w/ arguments and criticize them, and to listen to such criticisms to see if they persuade you. And you can more trust the final consensus after your discussion.

The main exception here is where you tend to accept premises that are unpopular with your audience. In this case, you can either disconnect with that audience, not caring to try to persuade them, or you can focus less on sincerity and more on persuasion, seeking arguments that will convince them given their different premises.

When your audience is much more capable than you, then you can’t trust your own argument generation mechanism. You must instead mostly look to what persuades your superiors and try to parrot that. You may well fail if you try to adapt standard arguments to particular new situations, or if you try to evaluate detailed criticisms of those arguments. So you try to avoid such things. You instead seek generic positions that don’t depend as much on context, expressed in not entirely clear language that lets you decide at the last minute what exactly you meant.

When your audience is much less capable than you, then arguments that persuade you tend to be too complex to persuade them. So you must instead search for arguments that will persuade them, even if they seem wrong to you. That is, you must pander. You are less interested in rebuttals or flaws that are too complex to explain to your audience, though you are plenty interested in finding flaws that your audience can understand. You are also not interested in finding complex fixes and solutions to such flaws.

You must attend not only to the internal coherence of your arguments, but also to the many particular confusions and mistakes to which your audience is inclined. You must usually try arguments out to see how well they work on your audience. You may also gain by using extra layers of meaning to talk more indirectly to impress your more capable sub-audience.

What if, in addition to persuading best, you want to signal that you are more capable? To show that you are not less capable than your audience, you might go out of your way to show that you can sincerely, on the fly and without assistance, and without studying or practicing on your audience, construct new arguments that plausibly apply to your particular context, and identify flaws with new arguments offered by others. You’d be sincerely argumentative.

To suggest that you are more capable than your audience, you might instead show that you pay attention to the detailed mistakes and beliefs of your audience, and that you first try arguments out on them. You might try to show that you are able to find arguments by which you could persuade that audience of a wide range of conclusions, not just the conclusions you privately find the most believable. You might also show that you can simultaneously make persuasive arguments to your general audience, while also discreetly making impressive comments to a sub-audience that is much more capable. Sincerely “thinking for yourself” can look bad here.

In a world where people follow the strategies I’ve outlined above, the quality of general opinion on each topic probably depends most strongly on something near the typical capability of the relevant audience that evaluates arguments on that topic. (I’d guess roughly the 80th percentile matters most on average.) The less capable mostly parrot up, and the more capable mostly pander down. Thus firms tend to be run in ways that make sense to that rank employee or investor. Nations are run in ways that make sense to that rank citizen. Stories make sense to that rank reader/viewer. And so on. Competition between elites pandering down may on net improve opinion, as may selective parroting from below, though neither seems clear to me.

If we used better institutions for key decisions (e.g., prediction/ decision markets), then the audience that matters might become much more capable, to our general benefit. Alas that initial worse audience usually decides not to use better institutions. And in a world of ems typical audiences also become much more capable, to their benefit.


Compelling ≠ Accurate

Bryan Caplan:

As a rule, I don’t care for “hard sci-fi.”  In fact, artistically speaking, I normally dislike true stories of any kind.  And I barely care about continuity errors.  When I read novels or watch movies, I crave what I call “emotional truth.” ..  “it’s the idea of becoming someone else for a little while. Being inside another skin. Moving differently, thinking differently, feeling differently.” .. When creators spend a lot of mental energy on the accuracy of their physics or the historical sequence of events, they tend to lose sight of their characters’ inner lives.  A well-told story is designed to maximize the audiences’ identification with the characters .. you know a creator has succeeded when you temporarily lose yourself in the story.

Many have said similar things. For example, Jerome Bruner:

There are two modes of cognitive functioning, two modes of thought, each providing distinctive ways of ordering experience, of constructing reality. The two (though complementary) are irreducible to one another. .. Each .. has operating principles of its own and its own criteria of well-formedness. They differ radically in their procedures for verification. A good story and a well-formed argument are different natural kinds. Both can be used as means for convincing another. Yet what they convince of is fundamentally different: arguments convince one of their truth, stories of their lifelikeness. The one verifies by eventual appeal to procedures for establishing formal and empirical proof. The other establishes not truth but verisimilitude. ..

“Great” storytelling, inevitably, is about compelling human plights that are “accessible” to readers. But at the same time, the plights must be set forth with sufficient subjunctivity to allow them to be rewritten by the reader, rewritten so as to allow play for the reader’s imagination.

Yes, readers (or viewers) value stories where readers lose themselves, feel like they are inside character inner lives, and identify with those characters. To readers, such stories feel “lifelike” — in some important way “like” real and true events. And yes, surely this is because these best stories do in fact match some template in reader minds, a template knitted in part from the many details of the world that readers have witnessed during their lives.

But, such stories are much better described as “compelling” than “true.” As a large literature has shown, the stories that we like differ in many big and systematic ways from real life events. Stories differ not only in external physical and social environments, but also in the personalities and preferences of individuals. Furthermore, even conditional on those things, stories also differ in the feelings that individuals have and the choices that they make.

We understand some but not all things about why people are built to prefer unrealistic stories. But there seems little doubt that the stories we like are in fact unrealistic. Compelling but not “true.”

I’m not denying that some stories are more realistic, I’m doubting that the stories that we get more lost in are in fact mainly those more realistic stories.


What TED Needs

Most people want, and gain value from, religious-like communities, strongly bonded by rituals, mutual aid, and implausible beliefs. (Patriotism and political ideologies can count here.) I once embraced that deeply and fully. But then I acquired a strong self-identity as an honest intellectual, which often conflicts with common religious practices. However, I get that my sort of intellectual identity is never going to be common. So religion will continue, even with ems. Realistically, the best widespread religion I’m going to get is one that at least celebrates intellectuals and their ideals, even if it doesn’t fully embrace them, and does so in a form that is accessible to a wide public.

I’ve given four TEDx talks so far, and will give another in two weeks. Ten days ago I had the honor of giving a talk on Age of Em at the annual TED conference in Vancouver (video not yet posted). And I have to say that the TED community seems to come about as close as I can realistically expect to my ideal religion. It is high status, accessible to a wide public, and has a strong sense of a shared community, and of self-sacrifice for community ideals. It has lots of ritual, music, and art, and it celebrates innovation and intellectuals. It even gives lip service to many intellectual virtues. If borderline religious elements sometimes make me uncomfortable, well that’s my fault, not theirs.

The main TED event differs from other TEDx events. Next year the price will be near $10K just for registration, and even then you have to submit an application, and some are rejected. At that high price the main attendees are investors and CEOs looking to network with each other. As a result, it isn’t really a place to geek out talking ideas. But that seems mainly a result of TED’s great success, and overall it does seem to help the larger TED enterprise. Chris Anderson deserves enormous credit for shepherding all this success.

The most encouraging talk I heard at TED 2017 was by David Brenner on his efforts to disinfect human spaces. Apparently there are frequencies of ultraviolet (UV) light that don’t penetrate skin past the top layer of dead skin cells, but still penetrate all the way through almost all bacteria and viruses in the air and on smooth-enough surfaces. So we should be able to use special UV lights to easily disinfect surfaces around humans. For example, we might cheaply sterilize whole hospitals. And maybe also airports during pandemics. This seems an obvious no-brainer that should have been possible anytime in the last century (assuming they’ve done penetration-depth vs. frequency measurements right). Yet Brenner has been working on this for five years and still seems far from getting regulatory approval. This seems to me a bad case of civilization and regulatory failure. Even so, the potential remains great.

The most discouraging talk I heard was by Jim Yong Kim, President of the World Bank Group. He talked about how he fought the World Bank for years, because they insisted on using cost-effectiveness criteria to pick medical investments. He showed us pictures of particular people helped by less cost-effective treatments, daring us to say they were not worth helping. And he said people in poor nations have status-based “aspirations” for the same sort of hospitals and schools found in rich nations, even if they aren’t cost-effective, and who are we to tell them no. Now that he runs the World Bank (nominated by Obama in 2012), his priorities can win more. The audience cheered. 🙁

All strong religions seem to need some implausible beliefs, and perhaps for TED one of them is the idea that we need only point out problems to good people to have those problems solved. But if not, then what I think TED audiences most need to hear are basic reviews on the topics of market failure and regulatory costs.

At TED 2017 I heard many talks where speakers pointed out a way that our world is not ideal. For example, speakers talked about how tech firms compete to entice users to just pay attention to them, how cities seem to be spread out more than is ideal, and how inner city grocery stores have less fresh food. But speakers never attributed problems to a particular standard kind of market failure, much less suggested a particular institutional solution matched to the kind of market failure it was meant to address. While speakers tended to imply government regulation and redistribution as solutions, they never considered the many ways that regulation and redistribution can go wrong and be costly.

It is as if TED audiences, who hear talks on a great many specialized areas of science and tech, were completely unaware of key long-established and strongly-relevant areas of scholarship. If TED audiences were instead well informed about institution design, market failures, and regulatory costs, then a speaker who pointed out a problem would be expected to place it within our standard classifications of ways that things can go wrong. They’d be expected to pick the standard kind of institutional solution to each kind of problem, or explain why their particular problem needs an unusual solution. And they’d be expected to address the standard ways that such a solution could be costly or go wrong. Perhaps even adjust their solution to deal with case-specific costs and failure modes.

None of this is about left vs. right; it is just about good policy analysis. But perhaps this is just a bridge too far. Until the wider public becomes informed about these things, maybe TED speakers must assume that their audience is ignorant of them as well. But if TED wants to better help the world to actually solve its problems, this is what its audience most needs to hear.

GD Star Rating
loading...
Tagged as: ,

Steven Levy’s Generic Skepticism

Steven Levy praises TED to the heavens:

Not every talk is one for the ages, but the TED News Feed is in sync with Ezra Pound’s insufficiently famous quote that “literature is news that stays news.” In TED’s world, at least when it’s working well, the news that stays news is science — as well as the recognizable truths of who we are as a species, and what we are capable of, good or evil. .. Much of the TED News Feed was an implicit rebuke of the politics of the day. Generally, TED speakers are believers in the scientific method. There were even a couple of talks this year whose very point was that there is a thing called truth.

Well, except for my talk:

Still, the TED News Feed was not free of potentially fake news, albeit of the scientific kind. A speaker named Robin Hanson (a George Mason professor and a guru of prediction markets) gave what he described as a data-driven set of predictions of a world where super-intelligent robots would rule the earth after forcing humans to “retire.” It seemed to me that he simply labeled his sci-fi fantasy as non-fiction. Plus, when I checked his website later, I learned he “invented a new form of government called futarchy,” and that his favorite musician was Vangelis. (When I later asked Anderson about that talk, he explained, without necessarily endorsing my criticism, that it was “a roll of the dice,” and that generally it was a good thing when talks took risks.)

That is all of Steven Levy’s critique; there is no more. He actually came up to me after my talk, saying something generically skeptical. I pointed out that I’d written a whole book full of analysis detail, and I asked him to pick out anything specific I had said that he doubted, offering to explain my reasoning on that. But he instead just walked away.

Maybe Mr. Levy comes from a part of science I’m not familiar with, but in the parts of science I know, a critic of a purported scientific analysis is expected to offer specific criticisms, in addition to any general negative rating. The 130 words he devoted here were enough space to at least hint at which of my claims he doubted. And for the record, in my books and talks I’m very clear that my analysis is theory-driven, not data-driven, and that it is conditional on my key technology assumptions.

GD Star Rating
loading...
Tagged as:

Superhumans Live Among Us

Computers are impressive machines, and they get more impressive every year, as hardware gets cheaper and software gets better. But while they are substantially better than humans on many important tasks, still overall humans earn far more income from using their smarts than do computers. And at past rates of progress it looks like it will take centuries before computers earn more income overall.

The usual explanation for why humans are so much more capable is their flexibility, which probably results mainly from their breadth. A computer doing a task usually has available to it a far smaller range of methods, knowledge, and data. When what it has are good enough, a computer can be far more accurate and cheaper than a human. But when a computer lacks important relevant methods, knowledge, and data, then you just can’t do without that human flexibility and breadth. You might hire a human to work with a computer, but still you need that human on the team.

In our world today, most people are specialists; they spend years learning the methods, knowledge, and data relevant to an existing recognized specialty area. And when your problem falls well within such an existing area, that is exactly the sort of person you want to work on it.

But often we face problems that don’t fall well within existing specialty areas. If we can give a short list of specialty areas that cover our problem, then we can collect a team with members in all those areas. Because talking between people is much less efficient than communication within one person, this team will take a lot longer to solve our problem. But still, eventually such teams are usually up to the task.

However, sometimes we face problems where we don’t know which kinds of expertise are relevant. In such cases what we really need is a person who is expert in far more areas than are most people. Let me call such people “polymaths”, though that word is often used for people who have wide interests but not wide expertise. A polymath with expertise in enough areas has a far better chance of solving broad hard-to-classify problems. A polymath is to an ordinary human as that human is to a computer. At least in terms of relative flexibility and breadth, and thus generality.

Quite often a specialist will see that some of their tools apply to a problem, and not realize that there are tools from other areas that also apply. And if specialists from other areas tell them that other tools do apply, they will usually not have sufficient expertise to directly evaluate that claim. And so the usual human arrogance will often lead them to disagree. Specialists from each area will say that they can help, and discount the possibility of help from other kinds of specialists.

Now a clear long track record showing that teams that include several kinds of specialists tend to solve a certain kind of problem better may convince many specialists that other specialists are relevant. But we often lack such clear long track records. In such cases, we often get stuck in a pattern of having a particular kind of expert deal with a particular kind of problem, even when other kinds of experts could help.

The same thing applies when humans know more than computers. Usually there’s nothing the human could say to prove to the computer that it is missing important relevant tools and knowledge. The computer just doesn’t understand these other tools well enough. So the computer has to just be told to defer to the human when the human thinks it knows better.

Bottom line: superhumans really live among us, whose better abilities compared to us really are analogous to the way we are so much better than computers: they have more flexibility, due to more breadth of expertise. But without clear track records, they usually don’t have ways to convince us to listen to them. Once we’ve found one kind of expert relevant to a problem, those experts tend to tell us that other kinds aren’t needed, and we tend to believe them.

Superhumans walk among us, but don’t get the respect they deserve. We reserve our highest honors for those who are best at specific recognized specialty areas, and mainly only recognize polymaths when they are good enough at one such area.

Added 22Apr: Actually, someone with multiple expertise areas isn’t what I meant if they haven’t worked to integrate them. Compared to computers, the human mind can not only do many things, it has integrated those tools together well. When areas overlap, one needs a common representation to accommodate them both. Is one a special case of the other? Do they focus on different parameters in a common parameter space? I mean to refer to a polymath who has successfully integrated their many areas of expertise.

GD Star Rating
loading...
Tagged as: