Tag Archives: Web/Tech

Why Not Friend Match?

Bryan Caplan recently pointed out to a few of us that while many dating web sites offer to help you find matching romantic mates, there are far fewer friend-finding helpers.  We tend to collect friends informally, by liking the people we meet for other reasons, and especially friends of friends. But for mating purposes we are more willing to choose folks based on a list of their interests, an intro paragraph, a picture, etc.  Why the difference?

The explanation that occurs to me is:  We need mates more for their simple surface features, while we need friends more to serve as social allies in our existing social network.  Since we need friends in substantial part to serve as allies in our social world, supporting us against opposing coalitions, it makes sense to draw our friends from our existing social world.  And since we need mates more for their personal quality, e.g., good genes, youth, wealth, smarts, mood, etc., it makes sense to pick them more via such features.

Now if the personal qualities we sought in mates were difficult to discern and describe, dating web sites wouldn’t be very useful; we’d more want to rely on personal experience and on folks who know us well recommending others who they thought would match us well.  And we do like to think that our mate (and friend) preferences are complex and subtle, not easily captured in a few match website entries. But in fact, I suspect, the truth is that we are more mating simpletons than we care to admit; we can actually find much of what we need to know about potential mates in a few simple items, especially the picture.

Added 1p: Many suggest the explanation is that friends are worth much less than mates because we can have many friends.  But I value top friends similarly to top mates – am I unusual?

Added 15Aug: Al Roth weighs in.

Added 12Nov 2013: Now there is at least one friend match site: bigfriendo.


Two Movies

I have two movies to recommend.

  1. Nobody Knows is terribly touching, and for exactly that reason, hard to watch.  It depicts dramatic story-like events, but it doesn’t give the usual cues to suggest you process it in a story-like far mode.  The main characters are children, whom you see in near mode, up close and personal, mostly without words.  If you love children, you will love these children.  Things happen to them, but slowly, and without clear “here is a key event” markers.  So you process the events as near, with less story-mode emotional distance; you are more naked to the full terror of bad possibilities.  It makes me wonder what other stories would feel like, if we felt them as nearby.  And if I would dare to watch them.
  2. The Third & The Seventh, a free ten-minute entirely CG (computer graphics) clip, is a truly spectacular demo of what CG can do today.  I’ve watched it daily for two weeks now and still marvel at its details. See the hi-def version if you can.  If you doubt at all that virtual reality could really be as detailed and vivid as our reality, take a look. (HT Rob Wiblin).

AI In Far And Near View

Looking far into the distance, your eyes often see a sharp boundary between earth and sky. But if you were to travel to that furthest part of earth your eye can now see, you may not find a sharp boundary there.  Far mode simplifies, not only suppressing detail, but making you think detail is unimportant.  If you saw two ships battling on the horizon, you’d be too tempted to expect the bigger ship to win.

From a distance, future techs also seem overly simple and hence disruptive.  If in 1672 you had seen Verbiest’s steam-powered vehicle, you might have imagined that the first nation with cheap capable cars could conquer the world.  After all, they might build tanks and troop transports, and literally run circles around enemy troops.  But while having somewhat better cars did sometimes help some nations, it was far from an overwhelming advantage. Cars slowly improved in cost and ability, and grew in number; there was no particular day when one nation had vastly more capable cars.

Similar scenarios have played out for a great many techs, like rockets, radios, lasers, or computers.  While one might imagine from afar that the difference between none of a tech and a “full” version would give a dramatic advantage, actual progress was more incremental, reducing team differences in tech levels.  Overall differences in wealth and tech capability were usually better explanations for the advantages some nations had over others.

The first far images of nanotech were also simple, stark, and disruptive.  They imagined one team could quickly and reliably assemble, from cheap plentiful feedstocks, large quantities of a large set of big atom arrangements, while other teams had near-current capabilities.  In this scenario, the first team might well conquer the world, or accidentally destroy it via “grey goo.”

The nanotech transition seems less disruptive, however, if we see more detail, and imagine a series of incrementally more capable assemblers, able to build larger objects, faster, more reliably, from more types of feedstocks, using more kinds of local chemical bonds, at a wider range of assembler-assembled angles, and so on.  After all, we already have ribosome assemblers, with a very limited range of feeds, bonds, angles, etc.  Each new type of assembler would lower the cost of making a new class of objects.

Far images of artificial intelligence (AI) can also be overly stark.  If you saw minds as having a single relevant “intelligence” parameter, with humans unable but machines able to change their parameter, you might well rue the day a machine whizzed past the human level.  Especially if you thought God-levels might follow a month later, and if you thought this parameter’s typical value was what determined a team’s power. Continue reading "AI In Far And Near View" »


Privacy Is Far

The British government has decided to go ahead with its plans under what it calls the Intercept Modernisation Programme to force every telecommunication company and Internet service provider to keep a record of all its customers’ personal communications, showing whom they have contacted and when and where, as well as the Web sites they have visited. … The information … will be accessible to 653 public bodies, “including police, local councils, the Financial Services Authority, the ambulance service, fire authorities and even prison governors.”

“They will not require the permission of a judge or a magistrate to obtain the information, but simply the authorisation of a senior police officer or the equivalent of a deputy head of department at a local authority,” The Telegraph says.

The only bit of good news, if you can call it that, is that the information won’t be held in a central database … and the full rollout will be delayed until after the next election. If the Tories or Liberal Democrats win, they say that the intercept program will be changed in scope and function. However, as happened in the United States after the last election, once politicians are in power, promises about privacy and spying on citizens seem to become less important.

More here.  Two decades ago when wonks discussed the coming brave new web/internet world, privacy was a huge concern.  In contrast, today when people choose what to reveal on the web, privacy seems a minor concern.  Together, these suggest that privacy is far – we care about privacy as a high noble social concern, but not as a personal practical matter.  (At least not until someone close in our social world starts to see our private info.)

But if so, why do politicians prefer to schedule to invade your privacy in the future, instead of now?  Won’t that make us all the more concerned about it?

My guess: a broad national policy today is near in time, but far in social scope, so still invokes a substantially far view.  So politicians are still held to ideals on it.  But the far view makes us idealize our future politicians more than today’s; we think our side is more likely to win, and future politicians will act more ideally.  So we don’t expect future politicians to let such privacy invasions go forward.  And since all far events tend to seem less likely, there is less to worry about.  When it actually happens later, they can say move along, there’s no news here, this was scheduled long ago.

Many said Bush’s privacy invasions revealed his evilness, but few care that Obama has no plans to reverse those invasions.  Even if UK and US governments don’t misuse this info, their policies will give cover for similar policies elsewhere.  From afar, big brother epitomizes evil and must be resisted.  Up close, he seems tame, until he doesn’t, when it’s too late.


Bad Emulation Advance

You may recall my guess is that within a century or so, human whole brain emulations (ems) will induce a change so huge as to be in the top four changes in the last hundred million years. So major advances toward such ems are big news:

IBM’s Almaden Research Center … announced … they have created the largest brain simulation to date on a supercomputer. The number of neurons and synapses in the simulation exceed those in a cat’s brain; previous simulations have reached only the level of mouse and rat brains. … C2 … re-create[s] 1 billion neurons connected by 10 trillion individual synapses. C2 runs on “Dawn,” a BlueGene/P supercomputer. …  DARPA … is spending at least US $40 million to develop an electronic processor that mimics the mammalian brain’s function, size, and power consumption. The DARPA project … was launched late last year and will continue until 2015 with a goal of a prototype chip simulating 10 billion neurons connected via 1 trillion synapses. The device must use 1 kilowatt or less (about what a space heater uses) and take up less than 2 liters in volume. …

“Each neuron in the network is a faithful reproduction of what we now know about neurons,” he says. This in itself is an enormous step forward for neuroscience, … Dawn … takes 500 seconds for it to simulate 5 seconds of brain activity, and it consumes 1.4 MW.

“Enormous step” seems a bit too much, but even so Randal Koene agrees this is big news:

This recent demonstration of computing power in simulations of biologically inspired neuronal networks is a good measure to indicate how far we have come and when it will be possible to emulate the necessary operations of a complete human brain. Given the storage capacity that was used in the simulation, at least some relevant information could be stored for each updatable synapse in the experiment. That makes this markedly different than the storageless simulations carried out by Izhikevich.
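As a rough sense of “how far we have come,” the quoted figures support a simple back-of-envelope sketch. The human-brain counts below are common outside estimates, not from the article, and the linear scaling in synapse count is my own deliberate simplification:

```python
# Rough scale check on the quoted C2/Dawn figures.
# Human-brain counts are common textbook estimates (assumptions),
# and linear power scaling in synapse count is a simplification.

sim_neurons = 1e9       # neurons simulated (quoted)
sim_synapses = 1e13     # synapses simulated (quoted)
sim_power_w = 1.4e6     # watts consumed by the run (quoted)
slowdown = 500 / 5      # 500 s of wall clock per 5 s of brain time (quoted)

human_neurons = 8.6e10  # ~86 billion (assumption)
human_synapses = 1e14   # ~100 trillion (assumption)

scale = human_neurons / sim_neurons
naive_power_mw = sim_power_w * (human_synapses / sim_synapses) * slowdown / 1e6

print(f"slowdown: {slowdown:.0f}x real time")
print(f"human/simulated neuron ratio: ~{scale:.0f}x")
print(f"naive human-scale, real-time power: ~{naive_power_mw:.0f} MW")
```

Even under these generous linear assumptions, a real-time human-scale run of this sort of simulation would need roughly a thousand times Dawn’s 1.4 MW, which gives a sense of how aggressive DARPA’s 1 kW hardware target is.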

Even if big news, this is not good news.  You see, ems require three techs, and we have clear preferences over which tech is ready last:

  1. Computing power – As a steadily and gradually advancing tech, this makes the em transition more gradual and predictable.  Here, at first, only expensive ems are available, and then they slowly take over jobs as their costs fall.  Since it is a large industry with many competing producers, we need worry less about disruptions from unequal tech access.
  2. Brain scanning – As this is also a relatively gradually advancing tech, it should also make for a more gradual predictable transition.  But since it is now a rather small industry, surprise investments could make for more development surprise.  Also, since the use of this tech is very lumpy, we may get billions, even trillions, of copies of the first scanned human.  And the first team to make that successful scan might gain much power, if it hasn’t made cooperative deals with other teams. By the time a second, or hundredth, human is scanned most of the economic niches may be filled with copies of the first few ems.
  3. Cell modeling – This sort of progress may be more random and harder to predict – a sudden burst of insight is more likely to create an unexpected and sudden em transition.  This could induce large disruptive inequality in economic and military power, both among teams trying to succeed and among ordinary folks displaced by em labor.

This new DARPA project seems focused more on advancing special computing hardware than cell-modeling.  If so, it makes scenario #1 less likely, which is bad.  Can someone please tell these DARPA knuckle-heads that they are funding exactly the wrong research?


Take Both Econ, Techies Seriously

Martin Ford, software CEO and author of a new (bad) book on how automation may destroy our economy, is right about one thing:

Among people who work in the field of computer technology, it is fairly routine to speculate about the likelihood that computers will someday approach, or possibly even exceed, human beings in general capability and intelligence. … While technologists are actively thinking about, and writing books about, intelligent machines, the idea that technology will ever truly replace a large fraction of the human workforce and lead to permanent, structural unemployment is, for the majority of economists, almost unthinkable. For mainstream economists, at least in the long run, technological advancement always leads to more prosperity and more jobs.

Yes, techies agree on the long term plausibility of machines doing almost all jobs at a cost below human subsistence wages, thereby gaining almost all income, while economists ignore this scenario.  E.g., Tyler Cowen:

In the longer run … computers will free up lots of human labor — but in the meantime it will have drastic implications for income redistribution, across both individuals and across economic sectors. … Robin Hanson believes we are headed back toward a Malthusian equilibrium; in contrast I believe that machines will never outcompete humans across the board.

Arnold Kling agreed:

I agree that Singularians are far too optimistic about artificial intelligence. It is a variation of the “fatal conceit” problem. Most of human intelligence is tacit knowledge, consisting of elaborate metaphors that are originally acquired from sensory experience. Artificial intelligence is an attempt to arrive at the same point through top-down design. … Computers and robots will be economically significant but not paradigm-shifting.

Economists should listen more to techies on what techs will be feasible at what costs, but techies should also listen more to economists on the social implications of tech costs.  Alas, just as economists prefer to rely on their intuitive folk tech forecasts, techies prefer to rely instead on their intuitive folk economics.  E.g., Martin Ford’s misguided intuitions: Continue reading "Take Both Econ, Techies Seriously" »


Prefer Law To Values

On Tuesday I asked my law & econ undergrads what sort of future robots (AIs, computers, etc.) they would want, if they could have any sort they wanted.  Most seemed to want weak vulnerable robots that would stay lower in status, e.g., short, stupid, short-lived, easily killed, and without independent values. When I asked “what if I chose to become a robot?”, they said I should lose all human privileges, and be treated like the other robots.  I winced; it seems anti-robot feelings are even stronger than anti-immigrant feelings, which portends a stormy robot transition.

At a workshop following last weekend’s Singularity Summit, two dozen thoughtful experts mostly agreed that it is very important that future robots have the right values.  It was heartening that most were willing to accept high-status robots, with vast impressive capabilities, but even so I thought they missed the big picture.  Let me explain.

Imagine that you were forced to leave your current nation, and had to choose another place to live.  Would you seek a nation where the people there were short, stupid, sickly, etc.?  Would you select a nation based on what the World Values Survey says about typical survey question responses there?

I doubt it.  Besides wanting a place with people you already know and like, you’d want a place where you could “prosper”, i.e., where they valued the skills you had to offer, had many nice products and services you valued for cheap, and where predation was kept in check, so that you didn’t much have to fear theft of your life, limb, or livelihood.  If you similarly had to choose a place to retire, you might pay less attention to whether they valued your skills, but you would still look for people you knew and liked, low prices on stuff you liked, and predation kept in check. Continue reading "Prefer Law To Values" »


The First Tech Bubble: 1720

The first global financial bubble in stock prices occurred in 1720 …  Using newly collected stock prices for British and Dutch firms in 1720, we find evidence against indiscriminate irrational exuberance and evidence in favor of speculation about two factors:  the Atlantic trade and the incorporation of insurance companies. The fundamentals of both sectors may have led to high expectations of future growth.  Our findings are consistent with the hypothesis that financial bubbles require a plausible story to justify investor optimism. …

Although 1720 is not generally viewed as a period of technological novelty, we argue in this paper that there were at least three critical innovations that took place in a very short span of time; two of which were financial innovations, the other was a major potential shift in the configuration of global trade.  The first innovation was financial engineering at a national scale. The Mississippi Company and the South Sea Company issued equity shares in exchange for government debt; in effect converting the national debt into corporate stock. …

The second innovation was an incipient shift in global trade. Both of the companies were set up to exploit trade in the Americas. … The third innovation was also financial. The first publicly traded insurance corporations were chartered in Great Britain 1720, as a result of the Act. As such, they represented a new model of capital formation for maritime insurance firms – in a nation built on maritime trade.

More here.  These sure do seem like big innovations, which eventually did have large implications.  The general lesson: it is easy to over-estimate the profits to be gained by first-movers exploiting even very large innovations.


How Is Our Era Unique?

We are entering an era where most anyone can quickly talk to most anyone else who can talk.  Talking will get easier as more people speak English, and perhaps as automatic translation is improved.  Easy talking wasn’t true before the widespread use of telephones, and it won’t be true after our descendants spread across the stars, or think billions of times faster.  The next few centuries will contain the easiest talking era in all of history.

For similar reasons, our current era is likely unique in having the least contact with strange cultures.  Our distant ancestors heard rumors from travelers about distant strange cultures.  Our descendants may also have contact with strange cultures when they re-engineer themselves and fragment Cambrian-explosion-style into a vast space of possible creatures, grouped into local cultures.  Or they may spread across space, and diverge culturally due to the rare slow contact across such vast distances.

I also suspect our era is uniquely rich, in terms of thinking-talking folks having a median income so far above their subsistence levels. (This goes with a uniquely high econ growth rate and low-vs-median income inequality.)  Most animals have always been pretty close to subsistence level, and until the industrial revolution so were most humans.  Today median world income is roughly five times subsistence level and rising.  But eventually incomes must fall, as we may learn to make people much faster (as in brain ems), or when econ growth rates fall below feasible population growth rates.

In what other ways is our era likely unique?  You will of course have diverse opinions, but I’m most interested in analyses based on assumptions I share: our lineage probably won’t go extinct, we’ll keep growing, spread across space, redesign our minds and bodies, and eventually learn all tech, all within a mostly competitive framework.


Space Storm Insurance

Within 90 seconds, the entire eastern half of the US is without power.  … A year later and millions of Americans are dead and the nation’s infrastructure lies in tatters. … An extraordinary report funded by NASA and issued by the US National Academy of Sciences (NAS) in January this year claims [the Sun] could do just that.  … A severe space weather event in the US could induce ground currents that would knock out 300 key transformers within about 90 seconds, cutting off the power for more than 130 million people. … this whole situation would not improve for months, maybe years: melted transformer hubs cannot be repaired, only replaced. …  Within a month, then, the handful of spare transformers would be used up. The rest will have to be built to order, something that can take up to 12 months. … According to the NAS report, the impact of what it terms a “severe geomagnetic storm scenario” could be as high as $2 trillion. And that’s just the first year after the storm. The NAS puts the recovery time at four to 10 years.

That is from a recent New Scientist.  Here is that NAS report, and here is a 2000 article in IEEE Spectrum.  This sort of disaster could come from a solar storm about as strong as one we saw in 1859.   I just consulted for a prestigious government consulting firm, who told me they are trying but have yet to convince US government agencies to take this problem seriously.  Apparently it would just take about ten million dollars to protect the US power industry from a huge solar flare, and this would also help protect against a nuke EMP.  But apparently too many crazies support the idea for US bureaucrats to want to take the idea seriously.  Hat tip to Robert Koslover.
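The numbers above invite a quick expected-value check. A minimal sketch, using the loss and mitigation figures quoted in the post; the annual storm probability is an illustrative assumption of mine (1859-scale storms are sometimes guessed at roughly once per century or two), not a figure from the post or the NAS report:

```python
# Expected-value sketch of the quoted $10M grid-hardening proposal.
# Loss and mitigation-cost figures are from the post; the annual
# storm probability is an illustrative assumption.

mitigation_cost = 10e6   # one-time hardening cost quoted in the post
storm_loss = 2e12        # NAS first-year loss estimate
annual_prob = 1 / 150    # assumed: one 1859-scale storm per ~150 years

expected_annual_loss = annual_prob * storm_loss
payoff_ratio = expected_annual_loss / mitigation_cost

print(f"expected annual loss: ${expected_annual_loss / 1e9:.1f}B")
print(f"expected annual loss / mitigation cost: ~{payoff_ratio:.0f}x")
```

On these assumptions the expected annual loss alone is more than a thousand times the quoted hardening cost, so the conclusion is insensitive to the exact probability: even at one storm per several thousand years the insurance would still pay for itself many times over each year.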
