Tag Archives: Ems

Philosophy Vs. Duck Tests

Philosophers, and intellectuals more broadly, love to point out how things might be more complex than they seem. They identify more and subtler distinctions, suggest more complex dependencies, and warn against relying on “shallow” advisors less “deep” than they are. Subtlety and complexity are basically what they have to sell.

I’ve often heard people resist such sales pressure by saying things like “if it looks like a duck, walks like a duck, and quacks like a duck, it’s a duck.” Instead of using complex analysis and concepts to infer and apply deep structures, they prefer to use such a “duck test,” judging by adding up many weak surface clues. When a deep analysis disagrees with a shallow appearance, they usually prefer to go shallow.

Interestingly, this whole duck example came from philosophers trying to warn against judging from surface appearances:

In 1738 a French automaton maker fooled the world into thinking he’d replicated life, and accidentally created a flippant philosophical conundrum we are still using. You’ve heard the phrase “if it looks like a duck, walks like a duck and quacks like a duck, then it’s a duck” haven’t you? .. People were saying this .. in the 18th century about a certain mechanical duck. And they were being very serious.

That mechanical duck was built to astound audiences, by quacking, moving its head to eat some grain which the mechanical marvel seemingly digested and then after a short time, the machine would round things off by plopping out a dollop of, what has been described as, foul smelling sh*t. ..The “looks like a duck” phrase (or Duck Test as some call it) is now thought of as a mildly amusing philosophical argument but back in the 18th century would certainly have been more akin to the way the Turing Test challenges artificial intelligence systems to fool the assessor into believing the system is a real human and not a computer. (more)

Philosophers had lectured saying, “See, you wouldn’t want to be fooled by surface appearances to call this automaton a duck would you?” But then others defiantly embraced this example, saying, “We plan to do exactly that; if it appears in enough ways to be a duck, that’s good enough for us.”

Philosophy of mind topics, such as classifying and judging minds, are ones where many intellectuals offer deep analysis. Imagine you have a wide range of creatures, and of objects with various similarities to creatures. For each one you want to estimate many capacities. Does it have a distinct personality? Can it plan, remember, communicate, think, desire, exercise self-control, or judge right and wrong? Does it get embarrassed, proud, or enraged? Can it guess how others feel? Does it feel fear, hunger, joy, pain, or pleasure? Is it conscious? You also want to judge: if you had two such characters, which one would you more try to make happy, save from destruction, avoid harming, or punish for causing a death? And which is more likely to have a soul?

This is a huge range of topics, on which learned intellectuals have written many thousands of books and articles, arguing for the relevance of a great many distinctions, to be taken into account in many subtle ways. But if ordinary people use simple-minded duck tests on such topics, they’d tend to judge each one by simply adding up many weak clues. And if people were especially simple-minded, they might even judge them all using roughly the same set of weak clues, even though some of the above capacities (e.g., plan, remember) are ones that many machines have today, if weakly, while other capacities (e.g., conscious, soul) are especially abstract and contentious.

Amazingly, as I posted a few days ago, this extreme scenario looks pretty close to the truth! At least as a first approximation. When 2400 people compared 13 diverse characters on the above 18 capacities and 6 judgements, one factor explained 88% of the total variance in capacities, while a second factor explained 8%, leaving only 4% unexplained. The study found some weak correlations with political and religious attitudes, but otherwise its main result is that survey responses on these many mind topics are mostly made using the same simple duck test (plus noise of course).

Now this study is hardly the last word. I’d love to see a survey with even more characters, and the judgements should be included in the factor analysis. And we also know that people are capable of “dehumanization”, i.e., using motivated reasoning to give lower scores to humans when they want to avoid blame for mistreatment.

But if these basic results continue to hold, they have big implications for how most people will treat various possible future creatures, including aliens, chimeras, alters, robots, AI, and ems. We don’t need a subtle analysis to predict how people will treat such things. We need only predict a wide range of apparent capacities for such creatures, and perhaps also a degree of motivated reasoning. The more such capacities creatures have, at higher levels, and the weaker the motivated reasoning, the higher people will rate them. And when people are motivated to rate creatures lower, they will do so by rating them lower on many capacities at once, as slave-owners have often done with slaves.

If you believe that such ratings will often be influenced by whether creatures are made out of silicon or biochemicals, or whether they are natural or artificial, then you either have to believe that such factors will only work indirectly via a broad influence over all of these capacities and judgements together, or that the factor analysis of a bigger survey will find big factors associated with such things. I’ve offered to bet that a new bigger survey will not find such big factors.

Bryan Caplan says that he disagrees about how future ems will be treated, but calls survey factor analyses irrelevant, and so won’t bet on them. He is instead very impressed that subjects gave a low rating on the main factor to a character called “robot” in the paper, and described this way to survey participants:

Kismet is part of a new class of “sociable” robots that can engage people in natural interaction. To do this, Kismet perceives a variety of natural social signals from sound and sight, and delivers his own signals back to the human partner through gaze direction, facial expression, body posture, and vocal babbles.

Apparently Bryan is confident that ems and all future artificial creatures will be rated as lowly as this character, so he offers to bet on how a survey will rank an “em” character. Alas it is unlikely that the next few surveys would include such a character, in part because it is a pretty unfamiliar concept for most people.

I just don’t see “robot” as a useful category here, such that we should expect most everything given this label to rate the same. That seems to me like expecting all bipedal creatures to rate low if a bipedal Barbie doll rates low.

The survey above suggests instead that what matters is how creatures are rated on many specific capacities. I expect that most people correctly estimated, from their experience with many other machines they’ve seen and heard of, that Kismet is in fact pretty bad at most of the listed capacities. In contrast, when most people are presented with fictional “robots” portrayed as being quite good at many of these capacities, such people consistently rate those “robots” relatively high on most other capacities and judgements. I’d bet that a survey would show this too, if such characters were included.

Because while people are often impressed with intellectuals’ subtle analysis, they still usually judge creature mental capacities via a simple duck test. If it quacks like a mind, it’s a mind.


“Human” Seems Low Dimensional

Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most of the variance in such task abilities, and no other factor explains much variance. If one main factor explains most variation, and no other factors do, then variation in this area is basically one dimensional, plus local noise. So to estimate performance on any one focus task, you’d usually want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use IQ to estimate ability on that focus task.
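
As a minimal sketch of this averaging logic (my own illustration, with made-up loadings and noise levels, not numbers from any study):

```python
import numpy as np

# Simulate core task scores driven by a single latent "IQ" factor plus
# noise, then predict a focus task from the average of the other tasks.
rng = np.random.default_rng(0)
n_people, n_tasks = 1000, 20
iq = rng.normal(size=n_people)                  # the one latent dimension
loadings = rng.uniform(0.6, 0.9, size=n_tasks)  # assumed factor loadings
scores = iq[:, None] * loadings + 0.5 * rng.normal(size=(n_people, n_tasks))

focus = scores[:, 0]                           # the focus task we care about
iq_hat = scores[:, 1:].mean(axis=1)            # estimate IQ: average other tasks
print(np.corrcoef(iq_hat, focus)[0, 1])        # the average predicts well,
print(np.corrcoef(scores[:, 1], focus)[0, 1])  # better than any single task
```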

Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad at many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic on A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance, making this world higher dimensional. So this claim about A and B might be true, but your prior is against it.

Now consider the question of how “human-like” something is. Many indicators may be relevant to judging this, and one may draw many implications from such a judgment. In principle this concept of “human-like” could be high dimensional, so that there are many separate packages of indicators relevant for judging matching packages of implications. But anecdotally, humans seem to have a tendency to “anthropomorphize,” that is, to treat non-humans as if they were somewhat human in a simple low-dimensional way that doesn’t recognize many dimensions of difference. That is, things just seem more or less human. So the more ways in which something is human-like, the more you can reasonably guess that it will be human like in other ways. This tendency appears in a wide range of ordinary environments, and its targets include plants, animals, weather, planets, luck, sculptures, machines, and software.

We feel more morally responsible for how we treat more human-like things. We are more inclined to anthropomorphize things that seem more similar to humans in their actions or appearance, when we more desire to make sense of our environment, and when we more desire social connection. When these conditions are less met, we are more inclined to “dehumanize”, that is, to treat human-like things as less than fully human. We also dehumanize to feel less morally responsible for our treatment of out-groups.

One study published in Science in 2007 asked 2400 people to make 78 pair-wise comparisons between 13 characters (a baby, chimp, dead woman, dog, fetus, frog, girl, God, man, vegetative man, robot, woman, you) on 18 mental capacities and 6 evaluation judgements. An “experience” factor explained 88% of capacity variation, being correlated with capacities for hunger, fear, pain, pleasure, rage, desire, personality, consciousness, pride, embarrassment, and joy. This factor had a strong 0.85 correlation with a desire to avoid harm to the character. A second “agency” factor explained 8% of the variance, being correlated with capacities for self-control, morality, memory, emotion recognition, planning, communication, and thought. This factor had a strong 0.82 correlation with a desire to punish for wrongdoing. Both factors correlated with liking a character, wanting it to be happy, and seeing it as having a soul (Gray et al. 2007).
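
For readers who want the mechanics, here is a hedged sketch of this style of analysis, using PCA as a stand-in for the paper’s factor analysis; the ratings matrix below is placeholder random data, where the real input was mean survey ratings of the 13 characters on the 18 capacities:

```python
import numpy as np
from sklearn.decomposition import PCA

# rows = 13 characters, columns = 18 capacity ratings (placeholder data)
rng = np.random.default_rng(1)
ratings = rng.normal(size=(13, 18))

pca = PCA(n_components=2)
pca.fit(ratings)
# For the real survey data the paper reports shares near [0.88, 0.08];
# random placeholder data will of course give much lower shares.
print(pca.explained_variance_ratio_)
```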

Though it would be great to get more data, especially on more than 13 characters, this study does confirm the usual anecdotal description that anthropomorphizing is essentially a low dimensional phenomenon. And if true, this fact has implications for how biological humans would treat ems.

My colleague Bryan Caplan insists that because ems would not be made out of familiar squishy carbon-based biochemicals, humans would feel confident that ems have no conscious feelings, and thus would eagerly enslave and harshly treat them; Bryan says our moral reluctance is the main reason most humans today are not harshly treated slaves. However, this in essence claims the existence of a big added factor explaining judgements related to “human-like”, a factor beyond those seen in the above survey.

After all, “consciousness” is already one of the items included in the above survey. But it was just one among many contributors to the main experience factor; it wasn’t overwhelming compared to the rest. And I’m pretty sure that if one tried to add being made of biochemicals as a predictor of this main factor, it would help but remain only one weak predictor among many. You might think that these survey participants are wrong, of course, but we are trying to estimate what typical people will think in the future, not what is philosophically correct.

I’m also pretty sure that while the “robot” in the study was rated low on experience, that was because it was rated low on capacities such as pain, pleasure, rage, desire, and personality. Ems, being more articulate and expressive than most humans, could quickly convince most biological humans that they act very much like creatures with such capacities. You might claim that humans will all insist on rating anything not made of biochemicals very low on all such capacities, but that is not what we see in the above survey, nor what we see in how people react to fictional robot characters, such as those from Westworld or Battlestar Galactica. When such characters act very much like creatures with these key capacities, they are seen as creatures that we should avoid hurting. I offer to bet $10,000 at even odds that this is what we will see in an extended survey like the one above that includes such characters.

Bryan also says that an ability to select most ems from scans of the few best suited humans implies that ems are extremely docile. While today when we select workers we often value docility, we value many other features more, and tradeoffs between available features result in the most desired workers being far from the most docile. Bryan claims that such tradeoffs will disappear once you can select from among a billion or more humans. But today when we select the world’s best paid actors, musicians, athletes, and writers, a few workers can in fact supply the entire world in related product categories, and we can in fact select from everyone in the world to fill those roles. Yet those roles are not filled with extremely docile people. I don’t see why this tradeoff shouldn’t continue in an age of em.

Added July 17: Bryan rejects my bet because:

I don’t put much stock in any one academic paper, especially on a weird topic. .. Robin’s interpretation of the paper .. is unconvincing to me. .. How so? Unfortunately, we have so little common ground here I’d have to go through the post line-by-line just to get started. .. a survey .. is probably a “far” answer that wouldn’t predict much about concrete behavior.

That is, nothing anyone says can be trusted on this topic, except Bryan’s intuition. He instead proposes a bet where I pay him up front, and he might pay me at the end of our lives.

Seems to me Bryan disagrees not just with me, but also with the authors of this Science paper, as well as its editors and referees at Science, about what the survey means. But he seems to accept that a similar survey would show what I claim. And since he’s on record saying there isn’t that much difference between a survey and a vote, it seems he must accept this for predicting vote outcomes.

Added July 19: I offer to bet anyone $10K at even odds that in the next published survey with a similar size and care to the one above, but with at least twice as many characters, over 80% of the variance will be explained by two factors, neither of which is focused on the substance (e.g., carbon, silicon) out of which a character is made.


Boost For Being Best

The fraction of a normal distribution that is six or more standard deviations above the mean is about one in a billion. But the world has almost eight billion people in it. So in principle we should be able to get six standard deviations in performance gain by selecting the world’s best person at something, compared to using an average person.
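
A quick check of these magnitudes (a sketch using standard normal tail functions, not numbers from the post):

```python
from scipy.stats import norm

# Fraction of a normal distribution at least 6 sd above the mean,
# and the typical size of the maximum among ~8 billion draws.
print(norm.sf(6))       # ~9.9e-10, i.e., about one in a billion
n = 8e9
print(norm.isf(1 / n))  # ~6.3 sd: roughly where the sample maximum sits
```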

I’m revising Age of Em for a paperback edition, expected in April. The rest of this post is from a draft of new text elaborating that point, and its implication for em leisure:

Em workers also earn wage premiums when they are the very best in the world at what they do. Even under the most severe wage competition, a best em can earn an extra wage equal to the difference between their productivity and the productivity of the second best em. When clans coordinate internally on wage negotiations, this is the difference in productivity between clans. (Clans who can’t coordinate internally are selected out of the em world, as they don’t cover their fixed costs, such as for training and marketing.)

Out of 10 billion independent and identically distributed (IID) normal samples, the maximum is on average about 6.4 standard deviations above the mean. Average spacings between the first and second, second and third, and third and fourth highest samples are roughly 0.147, 0.075, and 0.05 standard deviations respectively (Branwen 2017). So when ems are selected out of 10 billion humans, the best em clan may be this much better than other em clans on normally distributed parameters. Using the log-normal wage distribution observed in our world (Provenzano 2015), this predicts that the best human in the world at any particular task is four to five times more productive than the median person, over three percent more productive than the second most productive person, and five percent more productive than the third most productive person.
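
These order statistics can be checked without simulating 10 billion draws, e.g., via Blom’s standard approximation for normal order statistics (a sketch; Branwen 2017 derives them more carefully):

```python
from scipy.stats import norm

# Blom's approximation: E[k-th largest of n] ~ isf((k - 0.375) / (n + 0.25))
n = 10_000_000_000
tops = [norm.isf((k - 0.375) / (n + 0.25)) for k in (1, 2, 3, 4)]
print(round(tops[0], 2))                                  # ~6.4 sd maximum
print([round(a - b, 3) for a, b in zip(tops, tops[1:])])  # ~[0.147, 0.075, 0.05]
```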

If em clan relative productivity is drawn from this same distribution, if maximum em productivity comes at a 70 hour workweek, and if the best and second best em clans do not coordinate on the wages they accept, then even under the strongest wage competition between clans, the best clan could take an extra 20 minutes of leisure per day, or two minutes per work hour, in addition to the six minutes per hour and other work breaks they take to be maximally productive.
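
A rough check of that figure, assuming the roughly three percent productivity edge above is captured entirely as extra leisure out of a 10 hour workday:

```python
# ~3% edge over the second best clan, taken as leisure from a 600 minute day
edge = 0.03
print(edge * 600)  # ~18, i.e., roughly 20 extra leisure minutes per day
print(edge * 60)   # ~2 extra minutes of leisure per work hour
```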

This 20 minute figure is an underestimate for four reasons. First, the effective sample size of ems is smaller due to age limits on desirable ems. Second, most parameters are distributed so that the tails are thicker than in the normal distribution (Reed and Jorgensen 2004).

Third, differing wealth effects may add to differing productivity effects. On average over the last 11 years, the five richest people on Earth have each been about 10 percent richer than the next richest person. If future em income ratios were like this current wealth ratio, then the best em worker could afford roughly an extra hour per day of leisure, or an additional six minutes per hour.

Fourth, competition probably does not take the strongest possible form, and the best few ems can probably coordinate to some extent. For example, if the best two em clans coordinate completely on wages, but compete strongly with the third best clan, then instead of the best and second best taking 20 and zero minutes of extra leisure per day, they could take 30 and 10 extra minutes, respectively.

Plausibly then, the best em workers can afford to take an additional two to six minutes of leisure per hour of work in a ten hour work day, in addition to the over six minutes per hour of break needed for maximum productivity.


A Post-Em-Era Hint

A few months ago I noticed a pattern across the past eras of forager, farmer, and industry: each era has a major cycle (ice ages, empires rise & fall, business cycle) with a period of about one third of that era’s doubling time. So I tentatively suggested that an em future might also have a major cycle of roughly one third of its doubling time. If that economic doubling time is about a month, the em major cycle period might be about a week.

Now I report another pattern, to be treated similarly. In roughly the middle of each past era, a pair of major innovations in calculating and communicating appeared, and gradually went from barely existing to having big social impacts.

  • Forager: At unknown periods during the roughly two million year forager era, humanoids evolved reasoning and language. That is, we became able to think about and say many complex things to each other, including our reasons for and against claims.
  • Farmer: While the farming era lasted roughly 7 to 10 millennia, the first known writing was 5 millennia ago, and the first known math textbooks 4 millennia ago. About 2.5 millennia ago writing became widespread enough to induce major religious changes worldwide.
  • Industry: While the industry era has lasted roughly 16 to 24 decades, depending on how you count, the telegraph was developed 18 decades ago, and the wholesale switch from mechanical to digital electronic communication happened 4 to 6 decades ago. The idea of the computer was described 20 decades ago, the first digital computer was made 7 decades ago, and computers became widespread roughly 3 decades ago.

Note that innovations in calculation and communication were not independent, but instead intertwined with and enabled each other. Note also that these innovations did not change the growth rate of the world economy at the time; each era continued doubling at the same rate as before. But these innovations still seem essential to enabling the following era. It is hard to imagine farming before language and reasoning, industry before math and writing, or ems before digital computers and communication.

This pattern weakly suggests that another pair of key innovations in calculation and communication may appear and then grow in importance across a wide middle of the em era. This era may only last a year or two in objective time, though typical ems may experience millennia during this time.

This innovation pair would be interdependent, not change the growth rate, and perhaps enable a new era to follow. I can think of two plausible candidates:

  1. Ems might discover a better language for expressing and manipulating something like brain states. This could help ems to share their thoughts and use auxiliary hardware to help calculate useful thoughts.
  2. Ems might develop analogues to combinatorial prediction markets, and thus better share beliefs and aggregate information on a wide range of topics.

(Or maybe the innovation produces some combination of these.) Again, these are crude speculations based on a weak inference from a rough pattern in only three data points. But even so, they give us a vague hint about what an age after ems might look like. And such hints are actually pretty hard to find.
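
To make candidate 2 above more concrete, here is a minimal sketch of a logarithmic market scoring rule (LMSR) market maker, the usual building block for combinatorial prediction markets; a combinatorial version would run this over combinations of outcomes, and all parameters here are illustrative:

```python
import math

def cost(q, b=100.0):
    """LMSR cost function over outstanding shares q; b sets liquidity."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def price(q, i, b=100.0):
    """Current probability-like price of outcome i."""
    z = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / z

def buy(q, i, amount, b=100.0):
    """Charge a trader for buying `amount` shares of outcome i."""
    q2 = list(q)
    q2[i] += amount
    return cost(q2, b) - cost(q, b), q2

q = [0.0, 0.0]           # two-outcome market starts at even odds
print(price(q, 0))       # 0.5
fee, q = buy(q, 0, 50)   # a trader buys 50 shares of outcome 0
print(fee, price(q, 0))  # price of outcome 0 rises above 0.5
```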


Fuller on Age of Em

I’d heard that an academic review of Age of Em was forthcoming from the new Journal of Posthuman Studies. And after hearing about Baum’s review, Steve Fuller, the author of this second academic review (which won’t be published for a few months), gave me permission to quote from it here. First, some praise: …


Baum on Age of Em

In the journal Futures, Seth Baum gives the first academic review of Age of Em. First, some words of praise: …


Future Gender Is Far

What’s the worst systematic bias in thinking on the future? My guess: too much abstraction. The far vs. near mode distinction was first noticed in future thinking, because the effect is so big there.

I posted a few weeks ago that the problem with the word “posthuman” is that it assumes our descendants will differ somehow in a way that makes them “other,” without specifying any particular change to do that. It abstracts from particular changes to just embody the abstract idea of othering-change. And I’ve previously noted there are taboos against assuming that something we see as a problem won’t be solved, and even against presenting such a problem without proposing a solution.

In this post let me point out that a related problem plagues thinking about future gender relations. While many hope that future gender relations will be “better”, most aren’t at all clear on what specifically that entails. For some, all differing behaviors and expectations about genders should disappear, while for others only “legitimate” differences would remain, with little agreement on which are legitimate. This makes it hard to describe any concrete future of gender relations without violating our taboo against failing to solve problems.

For example, at The Good Men Project, Joseph Gelfer discusses the Age of Em. He seems to like or respect the book overall:

Fascinating exploration of what the world may look like once large numbers of computer-based brain emulations are a reality.

But he less likes what he reads on gender:

Hanson sees a future where an em workforce mirrors the most useful and productive forms of workforce that we experience today. .. likely choose [to scan] workaholic competitive types. Because such types tend to be male, Hanson imagines an em workforce that is disproportionately male (these workers also tend to rise early, work alone and use stimulants).

This disproportionately male workforce has implications for how sexuality manifests in em society. First, because the reproductive impetus of sex is erased in the world of ems, sexual desire will be seen as less compelling. In turn, this could lead to “mind tweaks” that have the effect of castration, .. [or] greater cultural acceptance of non-hetero forms of sexual orientation, or software that make ems of the same sex appear as the opposite sex. .. [or] paying professional em sex workers.

It is important to note that Hanson does not argue that this is the way em society should look, rather how he imagines it will look by extrapolating what he identifies in society both today and through the arc of human history. So, if we can identify certain male traits that stretch back to the beginning of the agricultural era, we should also be able to locate those same traits in the em era. What might be missing in this methodology is a full application of exponential change. In other words, Hanson rightly notes how population, technology and so forth have evolved with increasing speed throughout history, yet does not apply that same speed of evolution to attitudes towards gender. Given how much perceptions around gender have changed in the past 50 years, if we accept a pattern of exponential development in such perceptions, the minds that are scanned for first generation ems will likely have a very different attitude toward gender than today, let alone thousands of years past. (more)

Obviously Gelfer doesn’t like something about the scenario I describe, but he doesn’t identify anything particular he disagrees with, nor offer any particular arguments. His only contrary argument is a maximally abstract “exponential” trend, whereby everything gets better. Therefore gender relations must get better, therefore any future gender relations feature that he or anyone doesn’t like is doubtful.

For the record, I didn’t say the em world selects for “competitive types”, that people would work alone, or that there’d be more men. Instead I have a whole section on a likely “Gender Imbalance”:

Although it is hard to predict which gender will be more in demand in the em world, one gender might end up supplying proportionally more workers than the other.

Though I doubt Gelfer would be any happier with a future with many more women than men; any big imbalance probably sounds worse to most people, and thus can’t happen according to the better future gender relations principle.

I suspect Gelfer’s errors about my book are consistently in the direction of incorrectly attributing features to the scenario that he likes less. People usually paint the future as a heaven or a hell, and so if my scenario isn’t Gelfer’s heaven, it must be his hell.


Imagine A Mars Boom

Most who think they like the future really just like where their favorite stories took place. As a result, much future talk focuses on space, even though prospects for much activity beyond Earth in the foreseeable future seem dim. Even so, consider the following hypothetical, with three key assumptions:

Mars boom: An extremely valuable material (anti-matter? glueballs? negative mass?) is found on Mars, justifying huge economic efforts to extract it, process it, and return it to Earth. Many orgs compete strongly against one another in all of these stages to profit from the Martian boom.

A few top workers: As robots just aren’t yet up to the task, a thousand humans must be sent to and housed on Mars. The cost of this is so great that all trips are one-way, at least for a while, and it is worth paying extra to get the very highest quality workers possible. So Martians are very impressive workers, and Mars is “where the action is” in terms of influencing the future. As slavery is rare on Earth, most all Mars workers must volunteer for the move.

Martians as aliens: Many, perhaps even most, people on Earth see those who live on Mars as aliens, for whom the usual moral rules do not apply – morality is to protect Earthlings only. Such Earth folks are less reluctant to enslave Martians. Martians undergo some changes to their body, and perhaps also to their brain, but when seen in films or tv, or when talked to via (20+min delayed) Skype, Martians act very human.

Okay, now my question for you is: Are most Martians slaves? Are they selected for and trained into being extremely docile and servile?

Slavery might let Martian orgs make Martians work harder, and thereby extract more profit from each worker. But an expectation of being enslaved should make it much harder to attract the very best human workers to volunteer. Many Earth governments may even not allow free Earthlings to volunteer to become enslaved Martians. So my best guess is that in this hypothetical, Martians are free workers, rich and high status celebrities followed and admired by most Earthlings.

I’ve created this Mars scenario as an allegory of my em scenario, because someone I respect recently told me they were persuaded by Bryan Caplan’s claim that ems would be very docile slaves. As with these hypothesized Martians, the em economy would produce enormous wealth and be where the action is, and it would result from competing orgs enticing a thousand or fewer of the most productive humans to volunteer for an expensive one-way trip to become ems. When viewed in virtual reality, or in android bodies, these ems would act very human. While some like Bryan see ems as worth little moral consideration, others disagree.


Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)
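
As a hedged illustration of how simple such cell-level models can be, here is a textbook leaky integrate-and-fire neuron, one of the simplest stand-ins for “take input signals, change internal states, send output signals”; real emulation would surely need richer models, and these parameters are illustrative only:

```python
import numpy as np

# Leaky integrate-and-fire neuron: input current -> internal voltage -> spikes.
dt, tau, v_rest, v_thresh, v_reset = 1.0, 20.0, -65.0, -50.0, -65.0

def simulate(input_current, v=v_rest):
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau  # leak toward rest, add input
        if v >= v_thresh:     # threshold crossed:
            spikes.append(t)  # emit an output spike,
            v = v_reset       # then reset internal state
    return spikes

print(simulate(np.full(200, 20.0)))  # steady input -> regular output spikes
```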

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different,” just as they’ve done before. For a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such people estimate 5-10x slower progress in their subfields over the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly estimate brain emulations coming soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – that we will more quickly find models and algorithms that substitute for brains than we will find ones that substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.


Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks, or at least do so longer and more strongly. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would be more often replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference would be less clear. You’d just want efficient, effective software, and so would want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus humans minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of the quality of software, efficiency-oriented system designers will rely more on ems instead of ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will be more matched to humans for longer than they would have under the first scenario.

In addition, ems would seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and they prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to have more things like typical humans, for longer, you should prefer a scenario where ems appear before ordinary software displaces most all humans on jobs.

Added 15Dec: In this book chapter I expand a bit on this post.
