Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context-specific. Putting all of our best theories together usually doesn’t let us make exact predictions about most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high (negentropy) cost to learn precise info about the state of that system (see the sketch just after this list). If thermodynamics is right, there will never be a general theory that lets one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters to predict the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading on it.
  • Cryptography – A well devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead code breaking is a matter of knowing lots of specific things about codes and ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.
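
As a rough illustration of the thermodynamics point above (my own gloss, using the standard Landauer/Szilard bound, not an argument made in the list itself): learning or erasing information about a system at temperature T has a floor cost in free energy.

```latex
% Hedged illustration (the standard Landauer/Szilard bound), not a claim from the list above:
% acquiring or erasing one bit of information about a system at temperature T requires
% dissipating at least k_B T ln 2 of free energy, so pinning down a state with N bits of
% missing information costs at least
\[
  E_{\min}(N) \;=\; N \, k_B T \ln 2 .
\]
```

At room temperature k_B T ln 2 is only about 3 × 10⁻²¹ joules, but the number of bits needed to pin down a macroscopic microstate is astronomically large, which is one way to see why exact prediction of future states is prohibitively expensive.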

In thermodynamics, finance, cryptography, innovations, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important irreducible incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more and better modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories to be discovered to revolutionize future AI, and give an overwhelming advantage to the first project to discover them. Which is the main reason that I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.


Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.


Lost For Words, On Purpose

When we use words to say how we feel, the more relevant concepts and distinctions that we know, the more precisely we can express our feelings. So you might think that the number of relevant distinctions we can express on a topic rises with a topic’s importance. That is, the more we care about something, the more distinctions we can make about it.

But consider the two cases of food and love/sex (which I’m lumping together here). It seems to me that while these topics are of comparable importance, we have a lot more ways to clearly express distinctions on foods than on love/sex. So when people want to express feelings on love/sex, they often retreat to awkward analogies and suggestive poetry. Two different categories of explanations stand out here:

1) Love/sex is low dimensional. While we care a lot about love/sex, there are only a few things we care about. Consider money as an analogy. While money is important, and finance experts know a great many distinctions, for most people the key relevant distinction is usually more vs. less money; the rest is detail. Similarly, evolution theory suggests that only a small number of dimensions about love/sex matter much to us.

2) Clear love/sex talk looks bad. Love/sex is supposed to involve lots of non-verbal communication, so a verbal focus can detract from that. We have a norm that love/sex is to be personal and private, a norm you might seem to violate via comfortable impersonal talk that could easily be understood if quoted. And if you only talk in private, you learn fewer words, and need them less. Also, a precise vocabulary used clearly could make it seem like what you wanted from love/sex was fungible – you aren’t so much attached to particular people as to the bundle of features they provide. Precise talk could make it easier for us to consciously know what we want when, which makes it harder to self-deceive about what we want. And having more precise words available about our love/sex relations could force us to acknowledge smaller changes in relation status — if “love” is all there is, you can keep “loving” someone even as many things change.

It seems to me that both kinds of things must be going on. Even when we care greatly about a topic, we may not care about many dimensions, and we may be better off not being able to express ourselves clearly.


Conflicting Abstractions

My last post seems an example of an interesting general situation: when abstractions from different fields conflict on certain topics. In the case of my last post, the topic was the relative growth rate feasible for a small project hoping to create superintelligence, and the abstractions that seem to conflict are the ones I use, mostly from economics, and abstractions drawn from computer practice and elsewhere used by Bostrom, Yudkowsky, and many other futurists.

What typically happens when it seems that abstractions from field A suggest X, while abstractions from field B suggest not X? Well first, since X and not X can’t both be true, each field would likely see this as a threat to their good reputation. If they were forced to accept the existence of the conflict, then they’d likely try to denigrate the other field. If one field is higher status, the other field would expect to lose a reputation fight, and so they’d be especially eager to reject the claim that a conflict exists.

And in fact, it should usually be possible to reject a claim that a conflict exists. The judgement that a conflict exists would come from specific individuals studying the questions of whether A suggests X and whether B suggests not X. One could just suggest that some of those people were incompetent at analyzing the implications of the abstractions of particular fields. Or that they were talking past each other and misunderstanding what X and not X mean to the other. So one would need especially impeccable credentials to publicly make these claims and make them stick.

The ideal package of expertise for investigating such an issue would be expertise in both fields A and B. This would position one well to notice that a conflict exists, and to minimize the chance of problems arising from misunderstandings on what X means. Unfortunately, our institutions for crediting expertise don’t do well at encouraging combined expertise. For example, often patrons are interested in the intersection between fields A and B, and sponsor conferences, journal issues, etc. on this intersection. However, seeking maximal prestige they usually prefer people with the most prestige in each field, over people who actually know both fields simultaneously. Anticipating this, people usually choose to stay within each field.

Anticipating this whole scenario, people will usually avoid seeking out or calling attention to such conflicts. To seek out or pursue a conflict, you’d have to be especially confident that your field would back you up in a fight, because your credentials are impeccable and the field thinks it could win a status conflict with the other field. And even then you’d have to waste some time studying a field that your field doesn’t respect. Even if you win the fight you might lose prestige in your field.

This is unfortunate, because such conflicts seem especially useful clues to help us refine our important abstractions. By definition, abstractions draw inferences from reduced descriptions, descriptions which ignore many details. Usually that is useful, but sometimes it leads to errors, when the dropped details are especially relevant. Intellectual progress would probably be promoted if we could somehow induce more people to pursue apparent conflicts between the abstractions from different fields.


I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain. Continue reading "I Still Don’t Get Foom" »


Tegmark’s Vast Math

I recently had a surprise chance to meet Max Tegmark, and so I first quickly read his enjoyable new book Our Mathematical Universe. It covers many foundations of physics topics that he correctly says are unfairly neglected. Since I’ve collected many opinions on the foundations of physics over decades, I can’t resist mentioning the many ways I agree and disagree with him.

Let me start with what Tegmark presents as his main point, which is that the total universe is BIG, almost as big as it could possibly be. There’s a vast universe out there that we can’t see, and will never see. That is, not only does space extend far beyond our cosmological horizon, but out there are places where fundamental physics sits in very different equilibria (e.g., with a different number of useful dimensions), and nearby are the different “many worlds” of quantum mechanics.

Furthermore, and this is Tegmark’s most distinctive point, there are whole different places “out there” completely causally (and spatially) disconnected from our universe, which follow completely different fundamental physics. In fact, all such mathematically describable places really exist, in the sense that any self-aware creatures there actually feel. Tegmark seems to stop short, however, of David Lewis, who said that all self-consistent possible worlds really exist.

Tegmark’s strongest argument for his distinctive claim, I think, is that we might find that the basic math of our physics is rare in allowing for intelligent life. In that case, the fact of our existence should make us suspect that many places with physics based on other maths are out there somewhere: Continue reading "Tegmark’s Vast Math" »


SciCast Pays Big Again

Back in May I said that while SciCast hadn’t previously been allowed to pay participants, we were finally running a four-week experiment to reward random activities. That experiment paid big and showed big effects; we saw far more activity on days when we paid cash.

In the next four weeks we’ll run another experiment that pays even more:

SciCast is running a new special! For four weeks, you can win prizes on some days of the week:

  • On Tuesdays, win a $25 Amazon gift card with activity.
  • On Wednesdays, win an activity badge for your profile.
  • On Thursdays, win a $25 Amazon gift card with accurate forecasting.
  • On Fridays, win an accuracy badge for your profile.

On each activity prize day, up to 80 valid forecasts and comments made that day will be randomly selected to win. On each accuracy prize day, your chance of winning any of 80 prizes is proportional to your forecasting accuracy. Be sure to use SciCast from July 22 to August 15!

So this time we’ll compare activity incentives to accuracy incentives. Will we get more activity on days when we reward activity, and more accuracy on days when we reward accuracy? Now our accuracy incentives are admittedly weak, in that we’ll evaluate the accuracy of each trade/edit via price changes over only a few weeks after the trade. But hey, it’s something. Hopefully we can do a better experiment next year.
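
For concreteness, here is a minimal sketch, under my own assumptions rather than the actual SciCast mechanism or code, of scoring a trade by the price change over the following weeks and then drawing prize winners with probability proportional to that accuracy score; all function names and numbers are illustrative:

```python
import random

def trade_score(price_at_trade, price_weeks_later, direction):
    """Crude accuracy proxy: how far the market price later moved in the
    direction of the trade (moves against the trade score zero)."""
    move = price_weeks_later - price_at_trade
    return max(0.0, move if direction == "up" else -move)

def pick_winners(traders, scores, n_prizes=80):
    """Draw prize winners with probability proportional to accuracy score.
    Zero-score traders cannot win, and each trader wins at most once."""
    pool = {t: s for t, s in zip(traders, scores) if s > 0}
    winners = []
    for _ in range(min(n_prizes, len(pool))):
        total = sum(pool.values())
        r = random.uniform(0, total)
        cum = 0.0
        chosen = None
        for trader, score in pool.items():
            cum += score
            if r <= cum:
                chosen = trader
                break
        if chosen is None:           # guard against float rounding at the top end
            chosen = next(reversed(pool))
        winners.append(chosen)
        del pool[chosen]             # remove winner from future draws
    return winners

# Illustrative example with three traders and made-up prices.
traders = ["alice", "bob", "carol"]
scores = [
    trade_score(0.40, 0.55, "up"),    # alice traded "up" and the price rose
    trade_score(0.40, 0.35, "up"),    # bob traded "up" but the price fell -> 0
    trade_score(0.70, 0.50, "down"),  # carol traded "down" and the price fell
]
print(pick_winners(traders, scores, n_prizes=2))
```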

SciCast now has 532 questions on science and technology, and you can make conditional forecasts on most of them. Come!


Bets As Loyalty Signals

Why do men give women engagement rings? A standard story is that a ring shows commitment; by paying a cost that one would lose if the marriage fails, one shows that one places a high value on the marriage.

However, as a signal the ring has two problems. On the one hand, if the ring is easy to sell for its purchase price, then it detracts from the woman’s signal of the value she places on the marriage. Accepting a ring makes her look mercenary. On the other hand, if the ring can’t be sold for near its purchase price, and if the woman values the ring itself at less than its price, then the couple destroys value in order to allow the signal.

These are common problems with loyalty signals – either value is destroyed, or stronger signals on one side weaken signals from other sides. Value-destroying loyalty signals are very common in couples, clubs, churches, firms, professions, and nations. For example, we might give up poker nights for a spouse, pork for a religion, casual clothes to be a manager, or old-world customs for a new nation.

A few days ago I had an idea for a more efficient loyalty signal. Imagine that when he was twenty a man made a $5000 bet that he would never marry before the age of fifty. Then when he is thirty-five and wants to marry, he can send a strong signal of his desire to marry just by his willingness to lose this bet. Since the bet is lost to a third party, it doesn’t hinder the bride’s ability to signal her loyalty. And assuming the bet is made at fair odds, the lost bets are on average paid to versions of this man in alternative scenarios where he doesn’t marry by fifty. So he retains the value, which is not destroyed.
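
A quick way to see the “value is retained” claim, as a hedged worked example with made-up numbers: suppose at twenty he thinks the chance he marries before fifty is p, and the $5000 stake is placed at fair odds with a third party.

```latex
% Hedged worked example (my arithmetic, illustrative numbers only):
% he stakes 5000 that he will never marry before fifty, at fair odds,
% where p is his probability (at age twenty) of marrying by then.
% Fair odds set the winning payout W so that his expected net payoff is zero:
\[
  p \cdot (-5000) + (1 - p) \cdot W = 0
  \qquad\Longrightarrow\qquad
  W = 5000 \, \frac{p}{1 - p} .
\]
```

If he marries he pays the $5000; if he stays unmarried he collects W. Averaged over his possible futures nothing is destroyed, unlike a ring bought for more than its resale value; the loss in the “marries” branch is just a transfer to the “never marries” branch.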

Today this approach probably suffers from being weird, so doing this would also send an unwelcome signal of weirdness. But it is only a signal of one’s weirdness when one made the bet – maybe one can credibly claim to be less weird later when marrying. And the bet would remain potent as a signal of devotion.

There are many related applications. For example, a young person who bet that they would never join a religion might later credibly signal their devotion to that religion, and perhaps avoid having to eat and dress funny to show such devotion. Also, someone who bet that they would never change countries might signal their loyalty when they moved to a new nation. To let my future self signal his devotion to his political party, perhaps I should bet today that I’ll never join a political party. Do I have any takers?

Added 20July: Of course the need to lose a bet to get married would discourage some from getting married. But the same harm happens for any expectation of needing to send a loyalty signal if one gets married. This effect isn’t particular to bets as loyalty signals; it happens for all kinds of loyalty signals.

Mechanically one way to implement marriage bets as loyalty signals would be for parents to buy their sons male spinster insurance, which pays money to the son when he is fifty if he never marries, and otherwise gives him a nice visible cheap pin/brooch when he gets married. His new wife can wear the pin to brag about his devotion. The pin might be color coded to indicate how much money he sacrificed.


More Stories As Religion

Most people who say they are atheist or agnostic still believe in supernatural powers:

In the United States, 38% of people who identified themselves as atheist or agnostic went on to claim to believe in a God or a Higher Power. While the UK is often defined as an irreligious place, a recent survey … found that … only 13 per cent of adults agreed with the statement “humans are purely material beings with no spiritual element”. …

When researchers asked people whether they had taken part in esoteric spiritual practices such as having a Reiki session or having their aura read, the results were almost identical (between 38 and 40%) for people who defined themselves as religious, non-religious or atheist.

This is plausibly reinforced by fiction, which (as I’ve said) serves similar functions to religion:

In almost all fictional worlds, God exists, whether the stories are written by people of religious, atheist or indeterminate beliefs.

It’s not that a deity appears directly in tales. It is that the fundamental basis of stories appears to be the link between the moral decisions made by the protagonists and the same characters’ ultimate destiny. The payback is always appropriate to the choices made. An unnamed, unidentified mechanism ensures that this is so, and is a fundamental element of stories—perhaps the fundamental element of narratives.

In children’s stories, this can be very simple: the good guys win, the bad guys lose. In narratives for older readers, the ending is more complex, with some loose ends left dangling, and others ambiguous. Yet the ultimate appropriateness of the ending is rarely in doubt. If a tale ended with Harry Potter being tortured to death and the Dursley family dancing on his grave, the audience would be horrified, of course, but also puzzled: that’s not what happens in stories. Similarly, in a tragedy, we would be surprised if King Lear’s cruelty to Cordelia did not lead to his demise.

Indeed, it appears that stories exist to establish that there exists a mechanism or a person—cosmic destiny, karma, God, fate, Mother Nature—to make sure the right thing happens to the right person. Without this overarching moral mechanism, narratives become records of unrelated arbitrary events, and lose much of their entertainment value. In contrast, the stories which become universally popular appear to be carefully composed records of cosmic justice at work.

In manuals for writers (see “Screenplay” by Syd Field, for example) this process is often defined in some detail. Would-be screenwriters are taught that during the build-up of the story, the villain can sin (take unfair advantages) to his or her heart’s content without punishment, but the heroic protagonist must be karmically punished for even the slightest deviation from the path of moral rectitude. The hero does eventually win the fight, not by being bigger or stronger, but because of the choices he makes.

This process is so well-established in narrative creation that the literati have even created a specific category for the minority of tales which fail to follow this pattern. They are known as “bleak” narratives. An example is A Fine Balance, by Rohinton Mistry, in which the likable central characters suffer terrible fates while the horrible faceless villains triumph entirely unmolested.

While some bleak stories are well-received by critics, they rarely win mass popularity among readers or moviegoers. Stories without the appropriate outcome mechanism feel incomplete. The purveyor of cosmic justice is not just a cast member, but appears to be the hidden heart of the show. (more)


Boston Talks This Week

Monday: Why is Abstraction both Statusful and Silly? 7:00p, 98 Elm St Apt 1, Somerville.
Tuesday: Shall We Vote On Values, But Bet On Beliefs? noon, 206 Lake Hall, Northeastern Univ.
Wednesday: Factoring Geopolitical Risk Into Decision-Making, 12:20p, Global ICON Conf.
