Tag Archives: AI

Wanted: Elite Crowds

This weekend I attended the AAAI (Association for the Advancement of Artificial Intelligence) Fall Symposium on Machine Aggregation of Human Judgment. It was my job to give a short summary of our symposium to the eight co-located symposia. Here is what I said.

In most of AI, data is input and judgments are output. But here humans turn data into judgments, and then machines and institutions combine those judgments. This work is often inspired by the “wisdom of crowds” idea that we rely too much on arrogant, overrated experts instead of the underrated insight of everyone else. Boo elites; rah ordinary folks!

Many of the symposium folks are part of the IARPA ACE project, which is structured as a competition between four teams, each of which must collect several hundred participants to answer the same real-time intelligence questions, with roughly a hundred active questions at any one time. Each team uses a different approach. The two most common ways are to ask many people for estimates, and then average them somehow, or to have people trade in speculative betting markets. ACE is now in its second of four years. So, what have we learned?

First, we’ve learned that it helps to transform probability estimates into log-odds before averaging them. Weights can then correct well for predictable over- or under-confidence. We’ve also learned better ways to elicit estimates. For example, instead of asking for a 90% confidence interval on a number, it is better to ask for an interval, and then for a probability. It works even better to ask about an interval someone else picked. Also, instead of asking people directly for their confidence, it is better to ask them how much their opinion would change if they knew what others know.
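
A minimal sketch of this kind of pooling in Python (the individual estimates and the extremizing weight `w` below are illustrative assumptions, not any ACE team's actual numbers):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pool(probs, w=1.0):
    """Average forecasts in log-odds space, then rescale by w.

    w = 1 is a plain log-odds (geometric-odds) mean; w > 1 'extremizes'
    the pooled forecast, correcting for the predictable under-confidence
    of simple averages; w < 1 moderates it.
    """
    mean_logit = sum(logit(p) for p in probs) / len(probs)
    return inv_logit(w * mean_logit)

estimates = [0.6, 0.7, 0.8]              # hypothetical individual forecasts
print(round(pool(estimates), 2))         # plain log-odds average: 0.71
print(round(pool(estimates, w=2.0), 2))  # extremized version: 0.85
```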

Our DAGGRE team is trying to improve accuracy by breaking down questions into a set of related correlated questions. ACE has also learned how to make people better at estimating, both by training them in basic probability theory, and by having them work together in teams.
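
As a toy illustration of that decomposition idea (the question and all numbers here are hypothetical, not DAGGRE data): rather than eliciting P(A) directly, one can elicit a related question B plus conditional estimates, then combine them. Linking correlated questions this way lets information contributed on one question update estimates on the others.

```python
# Decompose "X wins the general election" (A) via the related question
# "X secures the nomination" (B). All probabilities are made up.
p_B = 0.70              # P(B)
p_A_given_B = 0.55      # P(A | B)
p_A_given_not_B = 0.05  # P(A | not B)

p_A = p_A_given_B * p_B + p_A_given_not_B * (1 - p_B)
print(round(p_A, 2))  # 0.55*0.70 + 0.05*0.30 = 0.40
```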

But the biggest thing we’ve learned is that people are unequal – the best way to get good crowd wisdom is to have a good crowd. Contributions that most improve accuracy are more extreme, more recent, by those who contribute more often, and come with more confidence. In our DAGGRE system, most value comes from a few dozen of our thousands of participants. True, these elites might not be the same folks you’d have picked via resumes, and tracking success may give better incentives. But still, what we’ve most learned about the wisdom of crowds is that it is best to have an elite “crowd.”

Miller’s Singularity Rising

James Miller, who posted once here at OB, has a new book, Singularity Rising, out Oct 2. I’ve read an advance copy. Here are my various reactions to the book.

Miller discusses several possible paths to super-intelligence, but never says which paths he thinks likely, nor when any might happen. However, he is confident that one will happen eventually, he calls Kurzweil’s 2045 forecast “robust”, and he offers readers personal advice as if something will happen in their lifetimes.

I get a lot of coverage in chapter 13, which discusses whole brain emulations. (And Katja is mentioned on pp.213-214.) While Miller focuses mostly on what emulations imply for humans, he does note that many ems could die from poverty or obsolescence. He makes no overall judgment on the scenario, however, other than to once use the word “dystopian.”

While Miller’s discussion of emulations is entirely of the scenario of a large economy containing many emulations, his discussion of non-emulation AI is entirely of the scenario of a single “ultra AI”. He never considers a single ultra emulation, nor an economy of many AIs. Nor does he explain these choices.

On ultra AIs, Miller considers only an “intelligence explosion” scenario where a human level AI turns itself into an ultra AI “in a period of weeks, days, or even hours.” His arguments for this extremely short timescale are:

  1. Self-reproducing nanotech factories might double every hour,
  2. On a scale of all possible minds, a chimp isn’t far from von Neumann in intelligence, and
  3. Evolution has trouble coordinating changes, but an AI could use brain materials and structures that evolution couldn’t.

I’ve said before that I don’t see how these imply a weeks timescale for one human level AI to make itself more powerful than the entire rest of the world put together. Miller explains my skepticism:

As Hanson told me, the implausibility of some James Bond villains illustrates a reason to be skeptical of an intelligence explosion. A few of these villains had their own private islands on which they created new powerful weapons. But weapons development is a time and resource intensive task, making it extremely unlikely that the villain’s small team of followers could out-innovate all of the weapons developers in the rest of the world by producing spectacularly destructive instruments that no other military force possessed. Thinking that a few henchmen, even if led by an evil genius, would do a better job at weapons development than a major defense contractor is as silly as believing that the professor on Gilligan’s Island really could have created his own coconut-based technology. …

Think of an innovation race between a single AI and the entirety of mankind. For an intelligence explosion to occur, the AI has to not only win the race, but finish before humanity completes its next stride. A sufficiently smart AI could certainly do this, but an AI only a bit brighter than von Neumann would not have the slightest chance of achieving this margin of victory. (pp.215-216)

As you can tell from this quotation, Miller’s book often reads like the economics textbook he wrote. He is usually content to be a tutor, explaining common positions and intuitions behind common arguments. He does, however, explain some of his personal contributions to this field, such as his argument that preventing the destruction of the world can be a public good undersupplied by private firms, and that development might slow down just before an anticipated explosion, if investors think non-investors will gain or lose just as much as investors from the change.

I’m not sure this book has much of a chance to get very popular. The competition is fierce, Miller isn’t already famous, and while his writing quality is good, it isn’t at the blockbuster popular book level. But I wish his book all the success it can muster.

AI Progress Estimate

From ’85 to ’93 I was an AI researcher, first at Lockheed AI Center, then at the NASA Ames AI group. In ’91 I presented at IJCAI, the main international AI conference, on a probability-related paper. Back then this was radical – one questioner at my talk asked “How can this be AI, since it uses math?” Probability specialists created their own AI conference, UAI, to have a place to publish.

Today probability math is well accepted in AI. The long AI battle between the neats and scruffs was won handily by the neats – math and theory are very accepted today. UAI is still around though, and a week ago I presented another probability-related paper there (slides, audio), on our combo prediction market algorithm. And listening to all the other talks at the conference let me reflect on the state of the field, and its progress in the last 21 years.

Overall I can’t complain much about emphasis. I saw roughly the right mix of theory vs. application, of general vs. specific results, etc. I doubt the field would progress more than a factor of two faster if such parameters were exactly optimized. The most impressive demo I saw was Video In Sentences Out, an end-to-end integrated system for writing text summaries of simple videos. Their final test stats:

Human judges rated each video-sentence pair to assess whether the sentence was true of the video and whether it described a salient event depicted in that video. 26.7% (601/2247) of the video-sentence pairs were deemed to be true and 7.9% (178/2247) of the video-sentence pairs were deemed to be salient.

This is actually pretty impressive, once you understand just how hard the problem is. Yes, we have a long way to go, but are making steady progress.

So how far have we come in the last twenty years, compared to how far we have to go to reach human level abilities? I’d guess that relative to the starting point of our abilities of twenty years ago, we’ve come about 5-10% of the distance toward human level abilities. At least in probability-related areas, which I’ve known best. I’d also say there hasn’t been noticeable acceleration over that time. Over a thirty year period, it is even fair to say there has been deceleration, since Pearl’s classic ’88 book was such a big advance.

I asked a few other folks at UAI who had been in the field for twenty years to estimate the same things, and they roughly agreed – about 5-10% of the distance has been covered in that time, without noticeable acceleration. It would be useful to survey senior experts in other areas of AI, to get related estimates for their areas. If this 5-10% estimate is typical, as I suspect it is, then an outside view calculation suggests we probably have at least a century to go, and maybe a great many centuries, at current rates of progress.
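
As a rough worked version of that outside-view extrapolation (the 5-10% figures come from the estimates above; the assumption of simple linear extrapolation at the current rate is mine):

```python
# If ~20 years of work covered 5-10% of the distance to human-level
# abilities, how long would the remaining distance take at the same
# (non-accelerating) rate?
years_elapsed = 20
for fraction_done in (0.05, 0.10):
    rate = fraction_done / years_elapsed      # fraction of distance per year
    years_left = (1 - fraction_done) / rate
    print(fraction_done, round(years_left))   # 0.05 -> 380, 0.1 -> 180
```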

Added 21Oct: At the recent Singularity Summit, I asked speaker Melanie Mitchell to estimate how far we’ve come in her field of analogical reasoning in the last twenty years. She estimated 5 percent of the way to human level abilities, with no noticeable acceleration.

Added 11Dec: At the Artificial General Intelligence conference, Murray Shanahan says that looking at his twenty years experience in the knowledge representation field, he estimates we have come 10% of the way, with no noticeable acceleration.

Added 4Oct’13: At an NSF workshop on social computing, Wendy Hall said that in her twenty years in computer-assisted training, we’ve moved less than 1% of the way to human level abilities. Claire Cardie said that in her twenty years in natural language processing, we’ve come 20% of the way. Boi Faltings says that in his field of solving constraint satisfaction problems, they were past human level abilities twenty years ago, and are even further past that today.

Let me clarify that I mean to ask people about progress in a field of AI as it was conceived twenty years ago. Looking backward one can define areas in which we’ve made great progress. But to avoid selection biases, I want my survey to focus on areas as they were defined back then.

Added 21May’14: At a private event, after Aaron Dollar talked on robotics, he told me that in twenty years we’ve come less than 1% of the distance to human level abilities in his subfield of robotic grasping manipulation. But he has seen noticeable acceleration over that time.

Added 28Aug’14: After coming to a talk of mine, Peter Norvig told me that he agrees with both Claire Cardie and Boi Faltings, that on speech recognition and machine translation we’ve gone from not usable to usable in 20 years, though we still have far to go on deeper question answering, and for retrieving a fact or page that is relevant to a search query we’ve far surpassed human ability in recall and do pretty well on precision.

Added 14Sep’14: At a closed academic workshop, Timothy Meese, who researches early vision processing in humans, told me he estimates about 5% progress in his field in the last 20 years, with a noticeable deceleration.

Robot ethics returns

People are often interested in robot ethics. I have argued before that this is strange. I offered two potential explanations:

  1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI
  2. People vastly misjudge how much ethics contributes to the total value society creates

A more obvious explanation now: people are just more interested in ethics when the subject is far away, for instance in the future. This is the prediction of construal level theory. It says thinking about something far away makes you think more abstractly, and in terms of goals and ideals rather than low level constraints. Ethics is all this.

So a further prediction would be that when we come to use robots a lot, expertise from robot ethicists will be in as little demand as expertise from washing machine ethicists is now.

Some other predictions, to help check this theory:

  • Emerging or imagined technologies should arouse ethical feelings more than present technologies do in general
  • International trade should prompt more ethical feelings than local trade
  • Stories of old should be more moralizing than stories of now
  • Historical figures should be seen in a more moral light than present-day celebrities
  • Space travel should be discussed in terms of more moral goals than Earth travel.
  • Ethical features of obscure cultures should be relatively salient compared to familiar cultures

More? Which of these are actually true?

There is definitely some conflicting evidence; for instance, people feel more compelled to help people in front of them than those in Africa (there was an old OB post on this, but I can’t find it). There are also many other reasons the predictions above may be true. Emerging technologies might prompt more ethical concerns because they are potentially more dangerous, for instance. The ethical dimension to killing everyone is naturally prominent. Overall, construal level theory still seems to me a promising model for variations in ethical concern.

Added: I’m not confident that there is disproportionate interest compared to other topic areas. I seem to have heard about it too much, but this could be a sampling bias.

Hutter on Singularity

Back in July I posted my response to Chalmers’ singularity essay; the response was published in the Journal of Consciousness Studies (JCS), the journal where his paper appeared. A paper copy of a JCS issue with thirteen responses recently showed up in my mail, though no JCS electronic copy is yet available. [Added 4Mar: it is now here.] Reading through the responses, I found the best (besides mine) to be by Marcus Hutter.

I didn’t learn much new, but compared to the rest, Hutter is relatively savvy on social issues. He isn’t sure if it is possible to be much more intelligent than a human (as opposed to just thinking faster), but he is sure there is lots of room for improvement overall:

The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. …

When building AIs or tinkering with our virtual selves, we could try out a lot of different goals. … But ultimately we will lose control, and the AGIs themselves will build further AGIs. … Some aspects of this might be independent of the initial goal structure and predictable. Probably this initial vorld is a society of cooperating and competing agents. There will be competition over limited (computational) resources, and those virtuals who have the goal to acquire them will naturally be more successful. … The successful virtuals will spread (in various ways), the others perish, and soon their society will consist mainly of virtuals whose goal is to compete over resources, where hostility will only be limited if this is in the virtuals’ best interest. For instance, current society has replaced war mostly by economic competition. … This world will likely neither be heaven nor hell for the virtuals. They will “like” to fight over resources, and the winners will “enjoy” it, while the losers will “hate” it. …

In the human world, local conflicts and global war is increasingly replaced by economic competition, which might itself be replaced by even more constructive global collaboration, as long as violaters can quickly and effectively (and non-violently?) be eliminated. It is possible that this requires a powerful single (virtual) world government, to give up individual privacy, and to severely limit individual freedom (cf. ant hills or bee hives).

Hutter noted (as have I) that cheap life is valued less:

Unless a global copy protection mechanism is deliberately installed, … copying virtual structures should be as cheap and effortless as it is for software and data today. The only cost is developing the structures in the first place, and the memory to store and the comp to run them. … One consequence … [is] life becoming much more diverse. …

Another consequence should be that life becomes less valuable. … Cheap machines decreased the value of physical labor. … In games, we value our own life and that of our opponents less than real life, … because games can be reset and one can be resurrected. … Why not participate in a dangerous fun activity. … It may be ethically acceptable to freeze, duplicate, slow-down, modify (brain experiments), or even kill (oneself or other) AIs at will, if they are abundant and/or backups are available, just what we are used to doing with software. So laws preventing experimentation with intelligences for moral reasons may not emerge.

Hutter also tried to imagine what such a society would look like from outside:

Imagine an inward explosion, where a fixed amount of matter is transformed into increasingly efficient computers until it becomes computronium. The virtual society like a well-functioning real society will likely evolve and progress, or at least change. Soon the speed of their affairs will make them beyond comprehension for the outsiders. … After a brief period, intelligent interaction between insiders and outsiders becomes impossible. …

Let us now consider outward explosion, where an increasing amount of matter is transformed into computers of fixed efficiency. … Outsiders will soon get into resource competition with the expanding computer world, and being inferior to the virtual intelligences, probably only have the option to flee. This might work for a while, but soon … escape becomes impossible, ending or converting the outsiders’ existence.

When foragers were outside of farmer societies, or farmers outside of industrial cities, change was faster on the inside, and the faster change got, the harder it was for outsiders to understand. But there was no sharp boundary at which understanding became “impossible.” While farmers were greedy for more land, and displaced foragers on farmable (or herdable) land quickly in farming doubling time terms, industry has been much less expansionary. While eventually industry might displace all farming, farming modes of production can continue to use land for many industry doubling times into an industrial revolution.

Similarly, a new faster economic growth mode might well continue to let old farming and industrial modes of production continue for a great many doubling times of the new mode. If land area is not central to the new mode of production, why expect old land uses to be quickly displaced?

Debating Yudkowsky

On Wednesday I debated my ex-co-blogger Eliezer Yudkowsky at a private Jane Street Capital event (crude audio here, from 4:45; better video here, transcript here).

I “won” in the sense of gaining more audience votes — the vote was 45-40 (him to me) before, and 32-33 after the debate. That makes me two for two, after my similar “win” over Bryan Caplan (42-10 before, 25-20 after). This probably says little about me, however, since contrarians usually “win” such debates.
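
Reading “won” as an increase in my share of the audience vote, the arithmetic behind those counts looks like this (a sketch using only the raw vote totals quoted above):

```python
def share_shift(before, after):
    """Change in my share of the audience vote, before vs. after the debate."""
    b_me, b_other = before
    a_me, a_other = after
    return a_me / (a_me + a_other) - b_me / (b_me + b_other)

# (my votes, opponent's votes)
print(round(share_shift(before=(40, 45), after=(33, 32)), 2))  # Yudkowsky debate: +0.04
print(round(share_shift(before=(10, 42), after=(20, 25)), 2))  # Caplan debate:    +0.25
```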

Our topic was: Compared to the farming and industrial revolutions, intelligence explosion first-movers will quickly control a much larger fraction of their new world. He was pro, I was con. We also debated this subject here on Overcoming Bias from June to December 2008. Let me now try to summarize my current position.

The key issue is: how chunky and powerful are as-yet-undiscovered insights into the architecture of “thinking” in general (vs. on particular topics)? Assume there are many such insights, each requiring that brains be restructured to take advantage. (Ordinary humans couldn’t use them.) Also assume that the field of AI research reaches a pivotal level of development. And at that point, imagine some AI research team discovers a powerful insight, and builds an AI with an architecture embodying it. Such an AI might then search for more such insights more efficiently than all the other AI research teams who share their results, put together.

This new fast AI might then use its advantage to find another powerful insight, restructure itself to take advantage of it, and so on until it was fantastically good at thinking in general. (Or if the first insight were super-powerful, it might jump to this level in one step.) How good? So good that it could greatly out-compete the entire rest of the world at the key task of learning the vast ocean of specific knowledge and insights useful for functioning in the world. So good that even though it started out knowing almost nothing, after a few weeks it knows more than the entire rest of the world put together.

(Note that the advantages of silicon and self-modifiable code over biological brains do not count as relevant chunky architectural insights — they are available to all competing AI teams.)

In the debate, Eliezer gave six reasons to think very powerful brain architectural insights remain undiscovered:

  1. Human mind abilities have a strong common IQ factor.
  2. Humans show many specific mental failings in reasoning.
  3. Humans have completely dominated their chimp siblings.
  4. Chimps can’t function as “scientists” in human society.
  5. “Science” was invented, allowing progress in diverse fields.
  6. AGI researchers focus on architectures, share little content.

My responses: Continue reading "Debating Yudkowsky" »

Chalmers Reply #2

In April 2010 I commented on David Chalmers’ singularity paper:

The natural and common human obsession with how much [robot] values differ overall from ours distracts us from worrying effectively. … [Instead:]
1. Reduce the salience of the them-us distinction relative to other distinctions. …
2. Have them and us use the same (or at least similar) institutions to keep peace among themselves and ourselves as we use to keep peace between them and us.

I just wrote a new 3000-word comment on this paper for a journal. Mostly I complain that Chalmers didn’t say much beyond what we should have already known. But my conclusion is less meta:

The most robust and promising route to low cost and mutually beneficial mitigation of these [us vs. superintelligence] conflicts is strong legal enforcement of retirement and bequest contracts. Such contracts could let older generations directly save for their later years, and cheaply pay younger generations to preserve old loyalties. Simple, consistent, and broad-based enforcement of these and related contracts seems our best chance to entrench the enforcement of such contracts deep in legal practice. Our descendants should be reluctant to violate deeply entrenched practices of contract law for fear that violations would lead to further unraveling of contract practice, which threatens larger social orders built on contract enforcement.

As Chalmers notes in footnote 19, this approach is not guaranteed to work in all possible scenarios. Nevertheless, compare it to the ideal Chalmers favors:

AI systems such that we can prove they will always have certain benign values, and such that we can prove that any systems they will create will also have those values, and so on … represents a sort of ideal that we might aim for (p.35).

Compared to the strong and strict controls and regimentation required to even attempt to prove that values disliked by older generations could never arise in any later generations, enforcing contracts where older generations pay younger generations to preserve specific loyalties seems to me a far easier, safer and more workable approach, with many successful historical analogies on which to build.

Stross on Singularity

I’ve long enjoyed the science fiction novels of Charlie Stross, so I’m honored that he linked to my Betterness Explosion from his Three arguments against the singularity:

I periodically get email from folks who, having read “Accelerando”, assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. … It’s time to set the record straight. … Santa Claus doesn’t exist. …

(Economic libertarianism is based on … reductionist … 19th century classical economics — a drastic over-simplification of human behaviour. … If acted upon, would result in either failure or a hellishly unpleasant state of post-industrial feudalism.) …

I can’t prove that there isn’t going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won’t work, or that we are or aren’t living in a simulation. … However, … the prospects aren’t good.

First: super-intelligent AI is unlikely because … human-equivalent AI is unlikely. … We’re likely to leave out … needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own. … We clearly want machines that perform human-like tasks. … But whether we want them to be conscious and volitional is another question entirely.

Uploading … is not obviously impossible. … Imagine most of the inhabited universe has been converted to a computer network, … programs live side by side with downloaded human minds and accompanying simulated human bodies. … A human mind would lumber about in a massively inappropriate body simulation. … I strongly suspect that the hardest part of mind uploading … [is] the body and its interactions with its surroundings. …

Moving on to the Simulation Argument: … anyone capable of creating an ancestor simulation wouldn’t be focussing their attention on any ancestors as primitive as us. … This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment … We may eventually see mind uploading, but … our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it. …

The simulation hypothesis … we can’t actually prove anything about it. …. Any way you cut these three ideas, they don’t provide much in the way of referent points for building a good life. … It’s unwise to live on the assumption that they’re coming down the pipeline within my lifetime.

Alas Stross’s post is a bit of a rant – strong on emotion, but weak on argument. Maybe Stross did or will explain more elsewhere, but while he makes clear that he doesn’t want to associate with singularity fans, Stross doesn’t make clear that he actually disagrees much. Most thoughtful singularity fans probably agree that where possible hand-coded AI would be designed to be solicitous and avoid human failings, that simple unmodified upload minds are probably not competitive creatures in the long run, and that only a tiny fraction of our distant descendants would be interested in simulating us. (We libertarian-leaning economists even agree that classical econ greatly simplifies.)

But the fact that hand-coded AIs would differ in many ways from humans says little on the key issues of when AI will appear, how fast they’d improve, how local would be that growth, and how fast the world economy would grow as a result. The fact that eventually unmodified human uploads would not be competitive says little on the key issues of whether uploads come before powerful hand-coded AI, how long nearly unmodified uploads would dominate, or just how far from humans would be the most competitive creatures. And the fact that few descendants would simulate ancestor humans says little on the key question of how that small fraction multiplied by the vast number of descendants compares to the actual number of ancestor humans. (And the fact that classical econ greatly simplifies says little on the pleasantness of libertarian policies.)

Stross seems smart and well-read enough to have interesting things to say on these key questions, if only he can overcome his personal revulsion against affiliating with singularity fans enough to engage them directly.

The Betterness Explosion

We all want the things around us to be better. Yet today billions struggle year after year to make just a few things a bit better. But what if our meagre success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory? Well then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc. More important, that might help him make his betterness theory even better.

After several iterations this better person might have a much better betterness theory. Then he might quickly make everything around him much better. Not just better looking hair, better jokes, or better sleep. He might start a better business, and get better at getting investors to invest, customers to buy, and employees to work. Or he might focus on making better investments. Or he might run for office and get better at getting elected, and then make his city or nation run better. Or he might create a better weapon, revolution, or army, to conquer any who oppose him.

Via such a “betterness explosion,” one way or another this better person might, if so inclined, soon own, rule, or conquer the world. Which seems to make it very important that the first person who discovers the first good theory of betterness be a very nice generous person who will treat the rest of us well. Right?

OK, this might sound silly. After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?

But a bunch of smart well-meaning folks actually do worry about a scenario that seems pretty close to this one. Except they talk about “intelligence” instead of “betterness.” They imagine an “intelligence explosion,” by which they don’t just mean that eventually the future world and many of its creatures will be more mentally capable than us in many ways, or even that the rate at which the world makes itself more mentally capable will speed up, similar to how growth rates have sped up over the long sweep of history. No, these smart well-meaning folks instead imagine that once someone has a powerful theory of “intelligence,” that person could create a particular “intelligent” creature which is good at making itself more “intelligent,” which then lets that creature get more “intelligent” about making itself “intelligent.” Within a few days or weeks, the story goes, this one creature could get so “intelligent” that it could do pretty much anything, including taking over the world.

I put the word “intelligence” in quotes to emphasize that the way these folks use this concept, it pretty much just means “betterness.” (Well, mental betterness, but most of the betterness we care about is mental.) And this fits well with common usage of the term “intelligence.” When we talk about machines or people or companies or even nations being “intelligent,” we mainly mean that such things are broadly mentally or computationally capable, in ways that are important for their tasks and goals. That is, an “intelligent” thing has a great many useful capabilities, not some particular specific capability called “intelligence.” To make something broadly smarter, you have to improve a wide range of its capabilities. And there is generally no easy or fast way to do that.

Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities. For example, if you drug a person so that they can hardly think, then getting rid of that drug can suddenly improve a great many of their mental abilities. But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare.

All of which is to say that fearing that a new grand unified theory of intelligence will let one machine suddenly take over the world isn’t that different from fearing that a grand unified theory of betterness will let one better person suddenly take over the world. This isn’t to say that such a thing is impossible, but rather that we’d sure want some clearer indications that such a theory even exists before taking such a fear especially seriously.

Econ of AI on BHTV

Karl Smith of Modeled Behavior and I did a Bloggingheads.tv show on the economics of artificial intelligence:

It was a pleasure to talk with Karl, since he is that rare combination: someone who both takes powerful future technologies seriously, and who understands social science. (Watching it now, I suspect that if you counted minutes you’d find I talked too much – sorry Karl.)

I made an analogy between three ways to grow a nation, and to grow a mind. Growing nations:

  1. Play the usual game of trading with other nations, etc.
  2. Develop good internal support for investment & innovation.
  3. Move all your people to become part of a rich nation.

Growing minds:

  1. Play the usual game of writing code to do more things well.
  2. Develop a super learning algorithm to grow from “scratch.”
  3. Copy an existing human brain, via whole brain emulation.

When possible, I favor approach #3.

I also made the point that while people like to justify having fewer kids in terms of giving each kid more help, the factors that seem to influence the choice of zero vs. one kid seem pretty similar to the factors that influence the choice of some vs. more kids. This fits better with the choice really being about more for parents vs. more for the kids. Anyone know of hard data on factors that influence zero vs. one kid relative to some vs. more kids?
