Fading Past Blocks Simulation Argument

The simulation argument was famously elaborated by Nick Bostrom. The idea is that our descendants may be able to create simulated creatures like you, and put them in simulated environments that look like the one you now find yourself in. If so, you can’t be sure that you are not now one of these future simulated people. The chance that you should assign to this possibility depends on the number of such future creatures, relative to the number of real creatures like you today.

More precisely, let P be the fraction of descendant civilizations that become able to create these ancestor simulations, I the fraction of those that actually do so, N the average number of ancestors simulated by each such civilization per ancestor who once existed, and S the chance that you are now such an ancestor sim. Bostrom says that S = P*I*N/(P*I*N+1), and that N is very large, which implies that either P or I is very small, or that S is near 1. That is, if the future will simulate many ancestors, then you are most likely one of them.
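
Bostrom's formula, as stated above, is easy to put in code; the particular parameter values below are just illustrative, not estimates from the post.

```python
# Bostrom's simulation-argument formula: S = P*I*N / (P*I*N + 1).
# When P*I*N is large, S approaches 1; when P*I*N is small, S ~ P*I*N.
def sim_chance(P, I, N):
    """Chance S that you are now an ancestor simulation."""
    x = P * I * N
    return x / (x + 1)

# Illustrative values only: many sims -> S near 1; tiny P -> S small.
assert sim_chance(0.5, 0.5, 1e6) > 0.999
assert sim_chance(1e-9, 1.0, 1e6) < 0.01
```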

However, I will now show that this argument collapses if we allow the inclination to simulate ancestors to depend on the time duration that has elapsed between those ancestors and the descendants who might simulate them. My main claim is that our interest in the past generally seems to fall away with time faster than the rate at which the population grows with time. For example, while over the last century world population has doubled roughly every 40 to 60 years, this graph shows much faster declines in how often books mention each of these specific past years: 1880, 1900, 1920, 1940, 1960.

Let us now include this fading past effect in a simple formal model. Let t denote a cultural “time” (not necessarily clock time), relative to which population (really a density of observer-moments) grows exponentially forever, while interest in the past declines exponentially. More formally, assume that it is already possible to create ancestor sims, that population grows as e^(g*t), that a constant fraction a of this population is turned into simulated ancestors, and that the relative fraction of these simulated ancestors associated with simulating a time t units into the past goes as e^(-b*g*t). Thus for b>1, per-person interest in past people falls as e^(-(b-1)*g*t).

Given these assumptions, the ratio of future ancestor simulations of the current population to that actual current population is F = a*b/(b-1), and S = F/(F+1). So, for example, if at any one time 10% of people are ancestor simulations, and if interest in the past falls by 12% every time population rises by 10%, then a = 0.1, b = 1.2, and F = 0.6, giving each person who seems to be real an S = 3/8 chance of instead being an ancestor simulation. If a = 0.001 instead, then F = 0.006, and each person should estimate an S ≈ 0.6% chance of being an ancestor simulation.
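
These closed forms can be checked directly against the post's two numeric examples:

```python
# The fading-interest model's closed forms, checked on the two examples above.
def F(a, b):
    """Ratio of future ancestor sims of the current population to its size."""
    return a * b / (b - 1)

def S(a, b):
    """Chance that a person who seems real is an ancestor simulation."""
    f = F(a, b)
    return f / (f + 1)

assert abs(F(0.1, 1.2) - 0.6) < 1e-9      # a = 0.1, b = 1.2 -> F = 0.6
assert abs(S(0.1, 1.2) - 0.375) < 1e-9    # -> S = 3/8
assert 0.0059 < S(0.001, 1.2) < 0.0060    # a = 0.001 -> S ~ 0.6%
```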

The above assumed that ancestor sims are possible and are being done now. If instead sims can’t start being created until c time units in the future, then we instead have F = a*(b/(b-1))*e^(-(b-1)*g*c), giving an even smaller chance S of your being an ancestor simulation.
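
This delayed-start formula follows from integrating sims of the present made at each future time t > c, where such sims go as a*b*g*e^(-(b-1)*g*t) (future population e^(g*t), sim fraction a, normalized interest weight b*g*e^(-b*g*t)). A crude numerical integration confirms the closed form; the parameter values below are arbitrary illustrations.

```python
import math

def F_delayed(a, b, g, c):
    """F when sims can't start until c time units in the future."""
    return a * (b / (b - 1)) * math.exp(-(b - 1) * g * c)

def F_numeric(a, b, g, c, dt=0.01, t_max=1000.0):
    """Riemann sum of a*b*g*exp(-(b-1)*g*t) over t in [c, t_max]."""
    total, t = 0.0, c
    while t < t_max:
        total += a * b * g * math.exp(-(b - 1) * g * t) * dt
        t += dt
    return total

closed = F_delayed(0.1, 1.2, 0.05, 30.0)
assert abs(closed - F_numeric(0.1, 1.2, 0.05, 30.0)) / closed < 1e-2
# c = 0 recovers the undelayed F = a*b/(b-1); larger c only shrinks F.
assert F_delayed(0.1, 1.2, 0.05, 30.0) < F_delayed(0.1, 1.2, 0.05, 0.0)
```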

By the way, these calculations can also be done in terms of rank. If all people in history are ordered in time, with r=0 being the first person ever, and all others having r>0, then we could assume that a fraction a of people are always ancestor simulations, and that interest in past people falls as r^(-b), and we’d again get the same result F = a*b/(b-1).

Thus given the realistic tendency to have less interest in past people the further away they are in time, and the likely small fraction of future economies that could plausibly be devoted to simulating ancestors, I feel comfortable telling you: you are most likely not an ancestor simulation.

Added 9pm: See this more careful analysis by Anders Sandberg of falling interest in year names. Seems to me that fall in interest is in fact faster than the population growth rate, even a century after the date.


SETI Optimism is Human Future Pessimism

If one takes the hard steps model of evolution seriously, humans seem to be early in the history of the universe. We can explain this by postulating that grabby aliens set an early deadline; humans couldn’t have shown up after aliens had filled the universe. As our grabby aliens model has three free parameters, each of which can be estimated from data, we are forced to conclude that such aliens are quite rare; if we survive long enough, we should meet them in roughly a billion years.

This next diagram shows distributions over how many galaxies each grabby civilization controls when they meet each other. The distributions shown are for expansion speed s=c; more generally this goes as (s/c)^3. (The likelihood ratio for not seeing big alien volumes today is only ~1 above s/c ~ 3/4.)

As you can see, for the best estimate power of n=6, each one typically comes to control millions of galaxies. (We avoid making assumptions about what happens after GCs meet. All our distributions depend on hard steps power n. All were made with help of my coauthors Daniel Martin, Calvin McCarter, and Johnathan Paulson.)

Assume that each grabby civilization (GC) arises “soon” (within 10Myr) from a non-grabby civilization (NGC). As GCs by definition keep expanding fast and change the appearance of their volumes, NGCs that don’t become GCs don’t expand much or long, or don’t change their volume appearances. As NGCs are much harder to see, and don’t much block GC behavior, there could be far more of them than there are GCs. 

Thus a key question about aliens is: what is the ratio R between NGCs and GCs? And this ratio R is at the heart of a key conflict: you need to expect a high ratio R to be optimistic about SETI success anytime soon, but you need to expect a low ratio R to be optimistic about the future prospects of our descendants. (I described this conflict abstractly in my original great filter paper; here I discuss specific numbers.)


Sim Argument Confidence

Nick Bostrom once argued that you must choose between three options regarding the possibility that you are now actually living in and experiencing a simulation created by future folks to explore their past: (A) it’s true, you are most likely a sim person living in a sim, either of this sort or another; (B) future folk will never be able to do this, because it just isn’t possible, they die first, or they never get rich and able enough; or (C) future folk can do this, but they do not choose to do it much, so that most people experiencing a world like yours are real humans now, not future sim people.

This argument seems very solid to me: future folks either do it, can’t do it, or choose not to. If you ask folks to pick from these options you get a simple pattern of responses:

Here we see 40% in denial, hoping for another option, and the others about equally divided among the three options. But if you ask people to estimate the chances of each option, a different picture emerges. Lognormal distributions (which ignore the fact that chances can’t exceed 100%) are decent fits to these distributions, and here are their medians:

So when we look at the people who are most confident that each option is wrong, we see a very different picture. Their strongest confidence, by far, is that they can’t possibly be living in a sim, and their weakest confidence, by a large margin, is that the future will be able to create sims. So if we go by confidence, poll respondents’ favored answer is that the future will either die soon or never grow beyond limited abilities, or that sims are just impossible.

My answer is that the future mostly won’t choose to sim us:

I doubt I’m living in a simulation, because I doubt the future is that interested in simulating us; we spend very little time today doing any sort of simulation of typical farming or forager-era folks, for example. (More)

If our descendants become better adapted to their new environment, they are likely to evolve to become rather different from us, so that they spend much less of their income on sim-like stories and games, and what sims they do like should be overwhelmingly of creatures much like them, which we just aren’t. Furthermore, if such creatures have near subsistence income, and if a fully conscious sim creature costs nearly as much to support as future creatures cost, entertainment sims containing fully conscious folks should be rather rare. (More)

If we look at all the ways that we today try to simulate our past, such as in stories and games, our interest in sims of particular historical places and times fades quickly with our cultural distance from them, and especially with their declining influence on our culture. We are especially interested in Ancient Greece, Rome, China, and Egypt, because those places were most like us and most influenced us. But even so, we consume very few stories and games about those eras. And regarding all the other ancient cultures even less connected to us, we show far less interest.

As we look back further in time, we can track declines in both world population, and in our interest in stories and games about those eras. During the farming era population declined by about a factor of two every millennium, but it seems to me that our interest in stories and games of those eras declines much faster. There’s far less than half as much interest in 500AD as in 1500AD, and that pattern continues for each 1000-year step backward.

So even if future folk make many sims of their ancestors, people like us probably aren’t often included. Unless perhaps we happen to be especially interesting.


Simple Sims On Pandemic Variance

I’ve said it isn’t crazy to consider cutting pandemic deaths via more infection inequality, including via deliberate exposure. Some have said I’m evil to suggest that, while others have said it just can’t work. In this post, I address those latter doubts, by offering specific sim models wherein variance and deliberate exposure save lives. 

Of course, these models can’t prove that we should now adopt such policies. Every model makes specific assumptions that may not be true. The goal here is instead to show that these ideas aren’t crazy. If they work and make sense in specific plausible situations, then we can’t dismiss them without knowing enough about our actual specific situation.

First, let me point you all to this Javascript sim model done by Zach Hess. He built this at my suggestion, but I haven’t yet learned enough Javascript to figure it all out. (Anyone want to translate it to pseudo-code?) It distinguishes 6 disease states: never-sick, exposed, recovered, asymptomatic sick, symptomatic sick, and in-intensive-care, and 3 kinds of workers: medical, critical, and general. It allows people to be put into quarantine.  

I think, but am not sure, that this model enforces a constraint on the total number of people who can fit into quarantine, and that having more available critical and medical workers makes sick folks less likely to die. Zach finds, for his default parameter values, that deliberately exposing & quarantining critical and medical workers early ends up saving lives. I presume he’s right. 

Over the last few days, I put together this simple spreadsheet model. (Feel free to copy, change, etc.) It doesn’t distinguish critical vs. medical vs. general workers, and so doesn’t capture gains from treating those differently. My baseline model starts with one contagious person in a US-sized population of 327M uninfected. 

After 7 days each contagious person becomes visibly sick, 10% of these sick need an average of 7 ICU days of help, and after 7 days of sickness some fraction of sick folks die, while the rest recover and become immune. Sick folks are added onto the usual 10K people who need ICU help each day, and their death rate goes as the logarithm of the daily total number of people who need ICU help. If only the usual 10K people need ICU help per day, only 0.4% of sick folks die, but if 50K people per day need ICU help, then 3% of them die.

The number of infected people who become contagious each day is proportional to the product of the uninfected count times the contagious count. Except that there is a quarantine that always holds 10M people, with a proportion of contagious vs. uninfected the same as the larger population. People in quarantine have only 2% of the usual rate of infecting others. The infection rate parameter is set so that, early on, the death so far count doubles about every 6 days. 
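
The daily dynamics just described can be sketched as a small difference-equation model. This is a re-implementation sketch, not the actual spreadsheet: the infection-rate constant beta and the log-linear death-rate curve are my own guesses fit to the anchors in the text (0.4% deaths at the usual 10K ICU demand per day, 3% at 50K), so its totals will differ from the spreadsheet's.

```python
import math

POP = 327e6        # initial uninfected, US-sized
DUR = 7            # days contagious before visibly sick; also days spent sick
BASE_ICU = 10_000  # usual daily ICU demand
Q_POP = 10e6       # people always held in quarantine
Q_RATE = 0.02      # relative rate of infecting others while quarantined

def death_frac(icu_demand):
    """Death rate of sick folks grows with the log of daily ICU demand:
    0.4% at 10K/day, 3% at 50K/day, extrapolated linearly in the log."""
    lo, hi = math.log(10_000), math.log(50_000)
    t = (math.log(max(icu_demand, 1.0)) - lo) / (hi - lo)
    return max(0.004 + t * (0.03 - 0.004), 0.0)

def run(days=365, beta=8e-10):
    uninfected, contagious, sick, dead = POP - 1, 1.0, 0.0, 0.0
    for _ in range(days):
        # quarantine holds a population-proportional mix, at 2% infectivity
        q = min(Q_POP / max(uninfected + contagious, 1.0), 1.0)
        new_contagious = min(
            beta * uninfected * contagious * (1 - q + q * Q_RATE), uninfected)
        new_sick = contagious / DUR
        new_dead = death_frac(BASE_ICU + 0.10 * sick) * sick / DUR
        uninfected -= new_contagious
        contagious += new_contagious - new_sick
        sick += new_sick - sick / DUR   # non-dying leavers recover, immune
        dead += new_dead
    return dead
```

With the illustrative beta = 8e-10, the contagious count doubles roughly every 6-7 days early on and the epidemic peaks around half a year in, qualitatively matching the baseline described below.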

In that baseline model, 14.3M people die within a year. The number of contagious people peaks on day 168 and daily deaths peak on day 177, when 9.7% of sick folks die. I compare that baseline model with three variations. 

  1. Here, the infection rate is cut uniformly by 5%, from 1.0 to 0.95. As a result, 11.9M people die, with 16% fewer deaths than baseline. Contagious and deaths peak on days 195 and 205, and the peak death % is 9.2%.
  2. Here, instead of having one uniform population all with the same infection constant of 1.0, they are split into two initially equal-sized types, for whom these constants are 0.6 and 1.4. So while they together initially produce the same number of infected, one type gets infected 2.3 times as easily as the other type. In this variation, 10.4M people die, with 27% fewer deaths than baseline. Contagion and deaths peak on days 167 and 175, when the peak death % is 9.2%.
  3. Here, for the first 30 days 1.3M people per day are deliberately infected and then immediately placed into quarantine for 7 days until they get sick. They displace random people who would otherwise have been in quarantine. In this variation, 11.3M die, with 21% fewer deaths than baseline. The contagious count peaks on day 53, and deaths on day 40, when the death rate is 8.5%.
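
The mechanism behind variation 2 can be seen in an even simpler two-group toy model (again, not the spreadsheet): holding average susceptibility fixed, a population split into 0.6 and 1.4 halves suffers a smaller total epidemic than a uniform one, because the easily infected burn out early and shield the rest. The rate constants here (beta = 0.3/day, 7-day recovery) are illustrative.

```python
# Two-group SIR-style toy: same mean susceptibility, different variance.
def total_infected(rel_susc, pop=327e6, beta=0.3, gamma=1/7, days=3000):
    n = len(rel_susc)
    S = [pop / n] * n        # susceptible, per group
    I = [1.0 / n] * n        # infectious, per group
    for _ in range(days):
        force = beta * sum(I) / pop            # shared infectious pressure
        for k in range(n):
            new_inf = min(rel_susc[k] * force * S[k], S[k])
            S[k] -= new_inf
            I[k] += new_inf - gamma * I[k]     # the rest recover, immune
    return pop - sum(S)                        # everyone ever infected

uniform = total_infected([1.0, 1.0])
split = total_infected([0.6, 1.4])
assert split < uniform   # more variance in susceptibility -> smaller epidemic
```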

These simple models show that, to cut deaths, deliberate exposure can make sense, as can ways to cut infection rates and increase variance in who is more vs. less easily infected. For more details, these 3 graphs show # contagious, death % of sick, and # newly dead, all vs. days:

Of course there might be bugs in my spreadsheet; please do point them out.

Added 8am: Let me also note that in such simple models it does not help society to deliberately infect yourself, if once infected your chance of infecting others is the same as that of an average person who was infected accidentally. In that case you just pull all the curves forward in time a bit, and by increasing the rate of new sick folks slightly you increase their death rate slightly, and thus increase total deaths.

Added 09Mar: I found a small error in my spreadsheet, and so replaced the numbers and graphs above with corrected versions.

Added 17Mar: See more sims which select the old or young for deliberate exposure here.


Who Likes Simple Rules?

Some puzzles:

  • People are often okay with having either policy A or policy B adopted as the standard policy for all cases. But then they object greatly to a policy of randomly picking A or B in particular cases in order to find out which one works better, and then adopting it for everyone.
  • People don’t like speed and red-light cameras; they prefer human cops who will use discretion. On average people don’t think that speeding enforcement discretion will be used to benefit society, but 3 out of 4 expect that it will benefit them personally. More generally people seem to like a crime law system where at least a dozen different people are authorized to in effect pardon any given person accused of any given crime; most people expect to benefit personally from such discretion.
  • In many European nations citizens send their tax info to the government, which then tells them how much tax they owe. But in the US and many other nations, many people oppose this policy. The most vocal opponents think they benefit personally from being able to pay less than what the government would say they owe.
  • The British National Health Service gets a lot of criticism for choosing treatments by estimating their cost per quality-adjusted-life-year. US folks wouldn’t tolerate such a policy. Critics lobbying to get exceptional treatment say things like “one cannot assume that someone who is wheel-chair bound cannot live as or more happily. … [set] both false limits on healthcare and reducing freedom of choice. … reflects an overly utilitarian approach”
  • There’s long been opposition to using an official value of life parameter in deciding government policies. Juries have also severely punished firms for using such parameters to make firm decisions.
  • In academic departments like mine, we tell new professors that to get tenure they need to publish enough papers in good journals. But we refuse to say how many is enough or which journals count as how good. We’d rather keep the flexibility to make whatever decision we want at the last minute.
  • People who hire lawyers rarely know their track record at winning vs. losing court cases. The info is public, but so few are interested that it is rarely collected or consulted. People who hire do know the prestige of their schools and employers, and decide based on that.
  • When government leases its land to private parties, sometimes it uses centralized, formal mechanisms, like auctions, and sometimes it uses decentralized and informal mechanisms. People seem to intuitively prefer the latter sort of mechanism, even though the former seems to work better. In one study “auctioned leases generate 67% larger up-front payments … [and were] 44% more productive”.
  • People consistently invest in managed investment funds, which after the management fee consistently return less than index funds, which follow a simple clear rule. Investors seem to enjoy bragging about personal connections to people running prestigious investment funds.
  • When firms go public via an IPO, they typically pay a bank 7% of their value to manage the process, which is supposedly spent on lobbying others to buy. Google famously used an auction to cut that fee, but banks have succeeded in squashing that rebellion. When firms try to sell themselves to other firms to acquire, they typically pay 10% if they are priced at less than $1M, 6-8% if priced $10-30M, and 2-4% if priced over $100M.
  • Most elite colleges decide who to admit via opaque and frequently changing criteria, criteria which allow much discretion by admissions personnel, and criteria about which some communities learn much more than others. Many elites learn to game such systems to give their kids big advantages. While some complain, the system seems stable.
  • In a Twitter poll, the main complaints about my fire-the-CEO decision markets proposal were that respondents don’t want a simple clear mechanical process to fire CEOs, and they don’t want to explicitly say that the firm makes such choices in order to maximize profits. They instead want some people to have discretion on CEO firing, and they want firm goals to be implicit and ambiguous.

The common pattern here seems to me to be a dislike of clear formal overt rules, mechanisms, and criteria, relative to informal decisions and negotiations. Especially disliked are rules based on explicit metrics that might reject or disapprove people. To the extent that there are rules, there seems to be a preference for authorizing some people to have discretion to make arbitrary choices, regarding which they are not held strongly to account.

To someone concerned about bribes, corruption, and self-perpetuating cabals of insiders, a simple clear mechanism like an auction might seem an elegant way to prevent all of that. And most people give lip service to being concerned about such things. Also, yes explicit rules don’t always capture all subtleties, and allowing some discretion can better accommodate unusual details of particular situations.

However, my best guess is that most people mainly favor discretion as a way to promote an informal favoritism from which they expect to benefit. They believe that they are unusually smart, attractive, charismatic, well-connected, and well-liked, just the sort of people who tend to be favored by informal discretion.

Furthermore, they want to project to associates an image of being the sort of person who confidently supports the elites who have discretion, and who expects in general to benefit from their discretion. (This incentive tends to induce overconfidence.)

That is, the sort of people who are eager to have a fair neutral objective decision-making process tend to be losers who don’t expect to be able to work the informal system of favors well, and who have accepted this fact about themselves. And that’s just not the sort of image that most people want to project.

This whole equilibrium is of course a serious problem for us economists, computer scientists, and other mechanism and institution designers. We can’t just propose explicit rules that would work if adopted, if people prefer to reject such rules to signal their social confidence.


Reversible Simulations 

Physicist Sabine Hossenfelder is irate that non-physicists use the hypothesis that we live in a computer simulation to intrude on the territory of physicists:

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately it primarily speaks for their lacking knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature. If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. ..

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation. (more)

Video games today typically only compute visual and auditory details of scenes that players are currently viewing, and then only to a resolution players are capable of noticing. The physics, chemistry, etc. is also made only as consistent and exact as typical players will notice. And most players don’t notice enough to bother them.

What if it were physicists playing a video game? What if they recorded a long video game period from several points of view, and were then able to go back and spend years scouring their data carefully? Mightn’t they then be able to find deviations? Of course, if they tried long and hard enough. And all the more so if the game allowed players to construct many complex measuring devices.

But if the physicists were entirely within a simulation, then all the measuring, recording, and computing devices available to those physicists would be under full control of the simulators. If devices gave measurements showing deviations, the output of those devices could just be directly changed. Or recordings of previous measurements could be changed. Or simulators could change the high level output of computer calculations that study measurements. Or they might perhaps more directly change what the physicists see, remember, or think.

In addition, within a few decades computers in our world will typically use reversible computation (as I discuss in my book), wherein costs are low to reverse previous computations. When simulations are run on reversible computers, it becomes feasible and even cheap to wait until a simulation reveals some problem, then reverse the simulation back to an earlier point, make some changes, and run the simulation forward again to see if the problem is avoided. And repeat until the problem is in fact avoided.

So those running a simulation containing physicists who could detect deviations from some purported physics of the simulated world could actually wait until some simulated physicist claimed to have detected a deviation. Or even wait until an article based on their claim was accepted for peer review. And then back up the simulation and add more physics detail to try to avoid the problem.
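
The reverse-and-retry loop just described can be sketched abstractly. Everything below is invented for illustration (a toy "world" whose discretization error shrinks as detail is added, and simulated physicists whose scrutiny sharpens over time); the point is only the control flow: checkpoint, step, check for detection, rewind and refine.

```python
import copy

def run_with_rollback(world, config, steps, step_fn, anomaly_fn, refine_fn,
                      max_retries=10):
    """Advance `world` step by step; on a detected anomaly, restore the last
    checkpoint, refine the simulation's detail level, and redo the step."""
    for _ in range(steps):
        checkpoint = copy.deepcopy(world)    # cheap on a reversible computer
        for _ in range(max_retries):
            step_fn(world, config)
            if not anomaly_fn(world):
                break                        # no simulated physicist noticed
            world.clear()
            world.update(copy.deepcopy(checkpoint))   # "reverse" the step
            refine_fn(config)                # add physics detail, then retry
    return world

# Toy dynamics: coarser detail means bigger discretization error, and the
# simulated physicists' scrutiny sharpens over time (tolerable error ~ 1/t).
world, config = {"t": 0, "error": 0.0}, {"detail": 1}
def step(w, cfg):
    w["t"] += 1
    w["error"] = 1.0 / cfg["detail"]
def anomaly(w):
    return w["error"] > 1.0 / max(w["t"], 1)  # would physicists detect it?
def refine(cfg):
    cfg["detail"] *= 2                        # simulate this span more finely
final = run_with_rollback(world, config, 20, step, anomaly, refine)
```

Note that the checkpoint restores the world's state but not the simulators' settings, so added detail persists across retries.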

Yes, to implement a strategy like this those running the simulation might have to understand the physics issues as well as did the physicists in the simulation. And they’d have to adjust the cost of computing their simulation to the types of tests that physicists inside examined. In the worst case, if the simulated universe seemed to allow for very large incompressible computations, and the simulators couldn’t find a way to fudge that by changing high level outputs, they might have to find an excuse to kill off the physicists, to directly change their thoughts, or to end the simulation.

But overall it seems to me that those running a simulation containing physicists have many good options short of ending the simulation. Sabine Hossenfelder goes on to say:

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford raising one or two artificial consciousnesses and then consult them on questions of importance.

Here I just don’t see what Sabine can be thinking. Today we can quickly make many copies of most any item that we can make in factories from concise designs. Yes, quantum states have a “no-cloning theorem”, but even so if we knew of a good quantum state to start a system in, we should be able to create many such systems that start in that same state. And I know of no serious claim that human minds make important use of unclonable quantum states, or that this would prevent creating many such systems fast.

Yes, biological systems today can be hard to copy fast, because they are so crammed with intricate detail. But as with other organs like bones, hearts, ears, eyes, and skin, most of the complexity in biological brain cells probably isn’t used directly for the function that those cells provide the rest of the body, in this case signal processing. So just as emulations of bones, hearts, ears, eyes, and skin can be much simpler than those organs, a brain emulation should be much simpler than a brain.

Maybe Sabine will explain her reasoning here.


Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve done before; for a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such people estimate 5-10x slower progress in their subfield in the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly estimate brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – we will more quickly find models and algorithms that substitute for brains than we will those that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.


All Is Simple Parts Interacting Simply

In physics, I got a BS in ’81 and an MS in ’84, and published two peer-reviewed journal articles in ’03 & ’06. I’m not tracking the latest developments in physics very closely, but what I’m about to tell you is very old standard physics that I’m quite sure hasn’t changed. Even so, it seems to be something many people just don’t get. So let me explain it.

There is nothing that we know of that isn’t described well by physics, and everything that physicists know of is well described as many simple parts interacting simply. Parts are localized in space, have interactions localized in time, and interaction effects don’t move in space faster than the speed of light. Simple parts have internal states that can be specified with just a few bits (or qubits), and each part only interacts directly with a few other parts close in space and time. Since each interaction is only between a few bits on a few sides, it must also be simple. Furthermore, all known interactions are mutual in the sense that the state on each side is influenced by the states of the other sides.

For example, ordinary field theories have a limited number of fields at each point in space-time, with each field having a limited number of degrees of freedom. Each field has a few simple interactions with other fields, and with its own space-time derivatives. With limited energy, this latter effect limits how fast a field changes in space and time.

As a second example, ordinary digital electronics is made mostly of simple logic units, each with only a few inputs, a few outputs, and a few bits of internal state. Typically: two inputs, one output, and zero or one bits of state. Interactions between logic units are via simple wires that force the voltage and current to be almost the same at matching ends.
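To make the “few inputs, few outputs, few bits of state” point concrete, here is a toy sketch of my own (not from the post): a NAND unit takes two input bits and produces one output bit, and wiring just a few such units together yields any other logic function, such as XOR.

```python
def nand(a: int, b: int) -> int:
    """The simplest universal logic unit: two inputs, one output, no state."""
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    """XOR built from four NAND units wired together by simple 'wires'."""
    c = nand(a, b)          # intermediate signal shared by both branches
    return nand(nand(a, c), nand(b, c))
```

Each unit here interacts with only a few neighbors, yet composing many of them builds up arbitrarily complex behavior, which is the post’s larger point.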

As a third example, cellular automatons are often taken as a clear simple metaphor for typical physical systems. Each such automaton has a discrete array of cells, each of which has a few possible states. At discrete time steps, the state of each cell is a simple standard function of the states of that cell and its neighbors at the last time step. The famous “game of life” uses a two dimensional array with one bit per cell.
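That “simple standard function” fits in a few lines. Here is a minimal sketch of a game-of-life update step (my own illustration, not from the post), where each cell’s next state depends only on itself and its eight neighbors:

```python
def life_step(grid):
    """One synchronous game-of-life update; grid is a list of lists of 0/1."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Each cell interacts only with its eight nearest neighbors
            # (edges wrap around, making the array a torus).
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            # The simple standard rule: a cell is born with 3 live
            # neighbors, and survives with 2 or 3.
            new[r][c] = 1 if live == 3 or (live == 2 and grid[r][c]) else 0
    return new

# A "blinker": three live cells that oscillate between a row and a column.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

Applying `life_step` to the blinker twice returns it to its starting pattern: simple local interactions, yet rich global behavior.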

This basic physics fact, that everything is made of simple parts interacting simply, implies that anything complex, able to represent many different possibilities, is made of many parts. And anything able to manage complex interaction relations is spread across time, constructed via many simple interactions built up over time. So if you look at a disk of a complex movie, you’ll find lots of tiny structures encoding bits. If you look at an organism that survives in a complex environment, you’ll find lots of tiny parts with many non-regular interactions.

Physicists have learned that we only ever get empirical evidence about the state of things via their interactions with other things. When such interactions create correlations between the state of one thing and the state of another, we can use that correlation, together with knowledge of one state, as evidence about the other state. If a feature or state doesn’t influence any interactions with familiar things, we could drop it from our model of the world and get all the same predictions. (Though we might include it anyway for simplicity, so that similar parts have similar features and states.)
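A small worked sketch of my own (the setup and numbers are made up for illustration) makes this concrete: suppose an interaction copies a hidden bit X into a sensor bit Y with some reliability. Observing Y then shifts our belief about X by Bayes’ rule, and when the interaction creates no correlation (reliability 1/2), observing Y tells us nothing, so X could be dropped from the model without changing any prediction.

```python
def posterior(prior_x1: float, reliability: float) -> float:
    """P(X=1 | Y=1), given P(Y=1|X=1) = reliability and P(Y=1|X=0) = 1 - reliability."""
    # Total probability of observing Y=1 under either hidden state.
    p_y1 = reliability * prior_x1 + (1 - reliability) * (1 - prior_x1)
    # Bayes' rule: evidence flows only through the correlation.
    return reliability * prior_x1 / p_y1
```

For example, with a 50/50 prior and 90% reliability the posterior on X=1 rises to 0.9, while at reliability 0.5 the posterior always equals the prior.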

Not only do we know that in general everything is made of simple parts interacting simply, but for pretty much everything that happens here on Earth we know those parts and interactions in great precise detail. Yes, there are still some areas of physics we don’t fully understand, but we also know that those uncertainties have almost nothing to say about ordinary events here on Earth. For humans and their immediate environments on Earth, we know exactly what all the parts are, what states they hold, and all of their simple interactions. Thermodynamics assures us that there can’t be a lot of hidden states around holding many bits that interact with familiar states.

Now it is true that when many simple parts are combined into complex arrangements, it can be very hard to calculate the detailed outcomes they produce. This isn’t because such outcomes aren’t implied by the math, but because it can be hard to calculate what the math implies. When we can find quantities that are easier to calculate, then as long as the parts and interactions we think are going on are in fact the only things going on, we usually see those quantities just as calculated.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising, in fact, as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

Thus it seems hard to square a belief in this extra feeling stuff with standard physics in either case, whether feeling stuff does or does not have strong interactions with ordinary stuff. The obvious conclusion: extra feeling stuff just doesn’t exist.

Note that even if we are only complex arrangements of interacting parts, as social creatures it makes sense for us to care in a certain sense about each others’ “feelings.” Creatures like us maintain an internal “feeling” state that tracks how well things are going for us, with high-satisfied states when things are going well and low-dissatisfied states when things are going badly. This internal state influences our behavior, and so social creatures around us want to try to infer this state, and to influence it. We may, for example, try to notice when our allies have a dissatisfied state and look for ways to help them to be more satisfied. Thus we care about others’ “feelings”, are wary of false indicators of them, and study behaviors in some detail to figure out what reliably indicates these internal states.

In the modern world we now encounter a wider range of creature-like things with feeling-related surface appearances. These include video game characters, movie characters, robots, statues, paintings, stuffed animals, and so on. And so it makes sense for us to apply our careful-study habits to ask which of these are “real” feelings, in the sense of being those where it makes sense to apply our evolved feeling-related habits. But while it makes sense to be skeptical that any particular claimed feeling is “real” in this sense, it makes much less sense to apply this skepticism to “mere” physical systems. After all, as far as we know all familiar systems, and all the systems they interact with to any important degree, are mere physical systems.

If everything around us is explained by ordinary physics, then a detailed examination of the ordinary physics of familiar systems will eventually tell us everything there is to know about the causes and consequences of our feelings. It will say how many different feelings we are capable of, what outside factors influence them, and how our words and actions depend on them.

What more is or could there be to know about feelings than this? For example, you might ask: does a system have “feelings” if it has some of the same internal states as a human, but where those states have no dependence on outside factors and no influence on the world? But questions like this seem to me less about the world and more about what concepts are the most valuable to use in this space. While crude concepts served us well in the past, as we encounter a wider range of creature-like systems than before, we will need to refine our concepts for this new world.

But, again, that seems to be more about what feeling concepts are useful in this new world, and much less about where feelings “really” are in the world. Physics can tell us all there is to say about that.

(This post is a followup to my prior post on Sean Carroll’s Big Picture.)


Assimilated Futures

I’ve long said that it is backwards to worry that technology will change faster than society can adapt, because the ability of society to adapt is one of the main constraints on how fast we adopt new technologies. This insightful 2012 post by Venkatesh Rao elaborates on a related theme:

Both science fiction and futurism … fail to capture the way we don’t seem to notice when the future actually arrives. … The future always seems like something that is going to happen rather than something that is happening. …

Futurists, artists and edge-culturists … like to pretend that they are the lonely, brave guardians of the species who deal with the “real” future and pre-digest it for the rest of us. But … the cultural edge is just as frozen in time as the mainstream, … people who seek more stimulation than the mainstream, and draw on imagined futures to feed their cravings rather than inform actual future-manufacturing. …

When you are sitting on a typical modern jetliner, you are traveling at 500 mph in an aluminum tube that is actually capable of some pretty scary acrobatics. … Yet a typical air traveler never experiences anything that one of our ancestors could not experience on a fast chariot or a boat. Air travel is manufactured normalcy. …

This suggests that only those futures arrive for which there is human capacity to cope. This conclusion is not true, because a future can arrive before humans figure out whether they have the ability to cope. For instance, the widespread problem of obesity suggests that food-abundance arrived before we figured out that most of us cannot cope. And this is one piece of the future that cannot be relegated to specialists. …

Successful products are precisely those that do not attempt to move user experiences significantly, even if the underlying technology has shifted radically. In fact the whole point of user experience design is to manufacture the necessary normalcy for a product to succeed and get integrated. … What we get is a Darwinian weeding out of those manifestations of the future that break the continuity of technological experience. …

What about edge-culturists who think they are more alive to the real oncoming future? … The edge today looks strangely similar to the edge in any previous century. It is defined by reactionary musical and sartorial tastes and being a little more outrageous than everybody else in challenging the prevailing culture of manners. … If it reveals anything about technology or the future, it is mostly by accident. …

At a more human level, I find that I am unable to relate to people who are deeply into any sort of cyberculture or other future-obsessed edge zone. There is a certain extreme banality to my thoughts when I think about the future. Futurists as a subculture seem to organize their lives as future-experience theaters. These theaters are perhaps entertaining and interesting in their own right, as a sort of performance art, but are not of much interest or value to people who are interested in the future in the form it might arrive in, for all.

It is easy to make the distinction explicit. Most futurists are interested in the future beyond the [manufactured normalcy field]. I am primarily interested in the future once it enters the Field, and the process by which it gets integrated into it. This is also where the future turns into money, so perhaps my motivations are less intellectual than they are narrowly mercenary. …

This also explains why so few futurists make any money. They are attracted to exactly those parts of the future that are worth very little. They find visions of changed human behavior stimulating. Technological change serves as a basis for constructing aspirational visions of changed humanity. Unfortunately, technological change actually arrives in ways that leave human behavior minimally altered. … The mainstream never ends up looking like the edge of today. Not even close. The mainstream seeks placidity while the edge seeks stimulation. (more)

Yes, I’m a guilty-as-charged futurist focused on changes far enough distant that there’s little money to be made understanding them now. But I share Rao’s emotional distance from the future-obsessed cultural edge. I want to understand the future not as a morality tale to validate my complaints against today’s dominant culture; I instead want to foresee the assimilated future. That is, I want to see how future people will actually see their own world, after they’ve found ways to see it banally as a minimal change from the past.

Cultural futurists have complained that the future I describe in my upcoming book The Age of Em is too conservative in presuming the continuation of supply and demand, inequality, big organizations, status seeking, and so on. Don’t I know that tech will change everything, and soon? No, actually I don’t know that.

Added: To be clear, eventually fundamentals may well change. But the rate of such changes is low enough that in a medium term future most fundamental features probably haven’t changed yet.


Error Is Not Simple

At her Rationally Speaking podcast, Julia Galef talked to me about signaling as a broad theory of human behavior.

Julia is smart and thoughtful, and fully engaged the idea. Even so, I’m not sure I convinced her. I might have had a better chance if we’d dived quickly into detailed summaries of related datums. Instead we talked more abstractly about her concern that signaling seems a complex theory, and shouldn’t we look to simpler theories first. For example, on the datums that we see little correlation between medicine and health, and that people show little interest in private info on medicine effectiveness, Julia said:

Like the fact that humans are bad at probability and are pretty scope insensitive, and don’t really feel the difference between a 5% chance of failure versus an 8% chance of failure. Also the fact that humans are superstitious thinkers, that on some level, it feels like if we don’t think about risks, they can’t hurt us, or something like that. … It feels like that I would have put a significant amount of weight, even in the absence of signaling caring, that people would fail to purchase that useful information.

Yes, the fact that we follow heuristics does predict that our actions deviate from those of perfectly rational agents. It predicts that instead of spending just the right amount on something like medicine, we may spend too much or too little. Similarly, it predicts we might get too much or too little info on medical quality.

But by itself that doesn’t predict that we will spend too much on medicine, and too little on medical quality info. In fact, we see a great many other areas, such as buying more energy efficient light bulbs, where people seem to spend too little. And we see a great many other areas where people seem too eager to gain and apply quality info; we eagerly consume news media full of info with little practical application.

As I said in the podcast, but perhaps didn’t explain well enough, we are often tempted to explain otherwise-puzzling behaviors in terms of simple error theories: the world is complex, so people just can’t get it right. But this won’t explain why we tend to do the same things as others who are socially near, which we often like to explain via social copying and conformity; we try to do what others do so we won’t look weird, and maybe others know something.

But even conformity, by itself, won’t explain the particular choices that a group of socially adjacent people make. It doesn’t predict that elderly women in Miami tend to spend too much on medicine, for example. It is these patterns across space, time, group, industry, etc. that I try to explain via signaling. For example, relative to other products and services, people have consistently spent too much on medicine all through history, especially in rich societies, and for women and the elderly.

I’ve offered a signaling story to try to simultaneously explain these and many other details, and yes it takes a few pages to explain. That may sound more complex than “it’s all just random mistakes”, but to explain any specific dataset of choices, that basic error story must be augmented with a great many specific ad hoc hypotheses of the form “and in this case, the particular mistake these people tend to make happens to be this.”

The combination of “it’s just error” and all those specific hypotheses is what makes that total hypothesis actually a lot more complex and a priori unlikely than the sorts of signaling stories that I offer. Which is why I’d say such signaling hypotheses are favored more by the data, at least when they fit reasonably well and are generated by a relatively small set of core hypotheses.
