My Kind of Atheist

I think I’ve mentioned somewhere in public that I’m now an atheist, even though I grew up in a very Christian family, and I even joined a “cult” at a young age (against disapproving parents). The proximate cause of my atheism was learning physics in college. But I don’t think I’ve ever clarified in public what kind of an “atheist” or “agnostic” I am. So here goes.

The universe is vast and most of it is very far away in space and time, making our knowledge of those distant parts very thin. So it isn’t at all crazy to think that very powerful beings exist somewhere far away out there, or far before us or after us in time. In fact, many of us hope that we now can give rise to such powerful beings in the distant future. If those powerful beings count as “gods”, then I’m certainly open to the idea that such gods exist somewhere in space-time.

It also isn’t crazy to imagine powerful beings that are “closer” in space and time, but far away in causal connection. They could be in parallel “planes”, in other dimensions, or in “dark” matter that doesn’t interact much with our matter. Or they might perhaps have little interest in influencing or interacting with our sort of things. Or they might just “like to watch.”

But to most religious people, a key emotional appeal of religion is the idea that gods often “answer” prayer by intervening in their world. Sometimes intervening in their head to make them feel different, but also sometimes responding to prayers about their test tomorrow, their friend’s marriage, or their aunt’s hemorrhoids. It is these sorts of prayer-answering “gods” in which I just can’t believe. Not that I’m absolutely sure they don’t exist, but I’m sure enough that the term “atheist” fits much better than the term “agnostic.”

These sorts of gods supposedly intervene in our world millions of times daily to respond positively to particular prayers, and yet they do not noticeably intervene in world affairs. Not only can we find no physical trace of any machinery or system by which such gods exert their influence, even though we understand the physics of our local world very well, but the history of life and civilization shows no obvious traces of their influence. They know of terrible things that go wrong in our world, but instead of doing much about those things, these gods instead prioritize not leaving any clear evidence of their existence or influence. And yet for some reason they don’t mind people believing in them enough to pray to them, as they often reward such prayers with favorable interventions.

Yes, the space of possible minds is vast, as is the space of possible motivations. So yes somewhere in that space is a subspace of minds who would behave in exactly this manner, if they were powerful enough to count as “gods”. But the relative size of that subspace seems to me rather small, relative to that total space. And so the prior probability that all or most nearby gods have this sort of strange motivation also seems to me quite small. It seems a crazy implausible hypothesis.

Yes, the fact that people claim to feel that gods answer their prayers is, all else equal, evidence for that hypothesis. But the other obvious hypothesis to consider here is that people claim this because it comforts them to believe so, not because they’ve carefully studied their evidence. Long ago people had much less evidence on physics and the universe, and for them it was both plausible and socially functional to believe in powerful gods who sometimes responded to humans, including their prayers. This belief became deeply embedded in cultures, cultures which just do not respond very quickly or strongly to recent changes in our best evidence on physics and the universe. (Though they respond quickly enough to make up excuses like “God wants you to believe in him for special reasons.”) And so many still believe that gods answer prayers.

In conclusion, it isn’t crazy to think there are powerful gods far away in space or time, and perhaps close but far in causal connection. But it does seem to me crazy to believe in gods nearby who favorably answer prayers, but who also hide and don’t intervene much in world affairs. That hypothesis seems vastly less likely than the obvious alternative, of slowly updating cultures.

I expect my position to be pretty widely held among thoughtful intellectuals; can we find a good name for it? Prayer-atheists perhaps?


Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Saul Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Shapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn?


Compulsory Licensing Of Backroom IT?

We now understand one of the main reasons that many leading firms have been winning relative to others, resulting in higher markups, profits, and wage inequality:

The biggest companies in every field are pulling away from their peers faster than ever, sucking up the lion’s share of revenue, profits and productivity gains. Economists have proposed many possible explanations: top managers flocking to top firms, automation creating an imbalance in productivity, merger-and-acquisition mania, lack of antitrust regulation and more. But new data suggests that … IT spending that goes into hiring developers and creating software owned and used exclusively by a firm is the key competitive advantage. It’s different from our standard understanding of R&D in that this software is used solely by the company, and isn’t part of products developed for its customers.

Today’s big winners went all in. …Tech companies such as Google, Facebook, Amazon and Apple—as well as other giants including General Motors and Nissan in the automotive sector, and Pfizer and Roche in pharmaceuticals—built their own software and even their own hardware, inventing and perfecting their own processes instead of aligning their business model with some outside developer’s idea of it. … “IT intensity,” is relevant not just in the U.S. but across 25 other countries as well. …

When new technologies were developed in the past, they would diffuse to other firms fast enough so that productivity rose across entire industries. … But imagine instead of power looms, someone is trying to copy and reproduce Google’s cloud infrastructure itself. … Things have just gotten too complicated. The technologies we rely on now are massive and inextricably linked to the engineers, workers, systems and business models built around them. … While in the past it might have been possible to license, steal or copy someone else’s technology, these days that technology can’t be separated from the systems of which it’s a part. … Walmart built an elaborate logistics system around bar code scanners, which allowed it to beat out smaller retail rivals. Notably, it never sold this technology to any competitors. (more)

A policy paper goes into more detail. First, why is the IT of some firms so much better?

Proprietary IT thus provides a specific mechanism that can help explain the reallocation to more productive firms, rising industry concentration, also growing productivity dispersion between firms within industries, and growing profit margins. … There is a significant literature that identifies IT-related differences in productivity arising from complementary skills, managerial practices, and business models that are themselves unevenly distributed. Skills and managerial knowledge needed to use major new technologies have often been unevenly distributed initially because much must be learned through experience, which tends to differ substantially from firm to firm.

Yes, skills vary, but there are also just big random factors in the success of large IT systems, even for similar skills. What can we do about all this?

While there may be other reasons to question antitrust policies, the general rise in industry concentration does not appear to raise troubling issues for antitrust enforcement at this point by itself. …

Both IP law and antitrust law pay heed to … balancing innovation incentives against the need for disclosure and competition, balancing concerns about market power against considerations of efficiency. … This balance has been lost with regard to information technology … the policy challenge is to offset this trend. … This problem might require some lessening of innovation incentives. … The challenge both today and in the future for both IP and antitrust policy is to facilitate the diffusion of new technical knowledge and right now the trend seems to be in the wrong direction. …

To the extent that rising use of employee noncompete agreements limits the ability of technical employees to take their skills to new firms, diffusion is slowed. Similarly, for extensions of trade secrecy law to cover knowhow or the presumption of inevitable disclosure. Patents are required to disclose the technical information needed to “enable” the invention, but perhaps these requirements are ineffective, especially in IT fields. And if patents are not licensed, they become a barrier to diffusion. Perhaps some forms of compulsory licensing might overcome this problem. Moreover, machine learning technologies portend even greater difficulties encouraging diffusion in the future because use of these technologies requires not only skilled employees, but also access to critical large datasets.

It seems that making good backroom software, to use internally, has become something of a natural monopoly. Creating such IT has large fixed costs and big random factors. So an obvious question is whether we can usefully regulate this natural monopoly. And one standard approach to regulating monopolies is to force them to sell to everyone at regulated prices. Which in this context we call “compulsory licensing”; firms could be forced to lease their backroom IT to other firms in the same industry at regulated prices.
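One standard regulated-price rule for a natural monopoly like this is average-cost pricing: each licensee pays an equal share of the fixed development cost plus the marginal cost of serving it, so the developer just breaks even. A minimal sketch of that rule (the function name and the dollar figures are my own illustration, not from the sources quoted above):

```python
def average_cost_license_price(fixed_cost, marginal_cost, n_licensees):
    """Regulated per-firm lease price that just recovers the IT developer's
    total cost: an equal share of the fixed development cost, plus the
    marginal cost of supporting one more licensee."""
    assert n_licensees > 0
    return fixed_cost / n_licensees + marginal_cost

# Hypothetical numbers: $100M to build the backroom IT system,
# $2M per year to support each licensee, 10 firms licensing it.
print(average_cost_license_price(100e6, 2e6, 10))  # 12000000.0 per firm
```

Note the usual tradeoff: the more licensees the regulator can line up, the lower the regulated price each pays, but some price above marginal cost is needed to preserve the developer's incentive to build the system at all.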

Note that while compulsory licensing of patents is rare in the US, it is common worldwide, and it is one of the reasons that US drug firms get proportionally less of their revenue from outside the US; other nations force them to license their patents at particular low prices. So worldwide there is a lot of precedent for compulsory licensing.

The article above claimed that backroom IT is:

inextricably linked to the engineers, workers, systems and business models built around them. … While in the past it might have been possible to license, steal or copy someone else’s technology, these days that technology can’t be separated from the systems of which it’s a part.

I’m not yet convinced of this, and so I want to hear independent IT folks weigh in on this key question. I can see that different IT subsystems could be mixed up with each other, but I’m less convinced that the total set of backroom IT of a firm depends that much on its particular products and services. Maybe other firms in an industry would have to take the entire backroom IT bundle of the leading firm, rather than being able to pick and choose among subsystems. But when the leading IT bundle is so much better, I could see this option being attractive to the other firms.

The leading firm might incur some costs in making its IT package modular enough to separate it from its particular products and services. But such modularity is a good design discipline, and a compulsory licensing regime could compensate firms for such costs.

Note that I’m not saying that it is obvious that this is a good solution. I’m just saying that this is a standard obvious policy response to consider, so someone should be looking into it. At the moment I’m not seeing other good options, aside from just accepting the increased IT-induced firm inequality and its many consequences.

Added 12:30: Okay, so far the pretty consistent answer I’ve heard is that it is very hard to take software written for internal use and make it available for outside use. Even if you insist outsiders do things your way.

So assuming we are stuck with industry leaders winning big compared to others due to better IT, one worry for the future is what happens when leaders of different industries start to coordinate their IT with each other. Like phone firms are now coordinating with car firms. Such firms might merge to encourage their synergies. Then we might have single firms as big winning leaders in larger economic sectors.


Dalio’s Principles

When I write and talk about hidden motives, many respond by asking how they could be more honest about their motives. I usually emphasize that we have limited budgets for honesty, and that it is much harder to be honest about yourself than others. And it is especially hard to be honest about the life areas that are the most sacred to you. But some people insist on trying to be very honest, and our book can make them unhappy when they see just how far they have to go.

It is probably easier to be honest if you have community support for honesty. And that makes it interesting to study the few groups who have gone the furthest in trying to create such community support. An interesting example is the hedge fund Bridgewater, as described in Dalio’s book Principles:

An idea meritocracy where people can speak up and say what they really think. (more)

#1 New York Times Bestseller … Ray Dalio, one of the world’s most successful investors and entrepreneurs, shares the unconventional principles that he’s developed, refined, and used over the past forty years to create unique results in both life and business—and which any person or organization can adopt to help achieve their goals. … Bridgewater has made more money for its clients than any other hedge fund in history and grown into the fifth most important private company in the United States. … Along the way, Dalio discovered a set of unique principles that have led to Bridgewater’s exceptionally effective culture. … It is these principles … that he believes are the reason behind his success. … are built around his cornerstones of “radical truth” and “radical transparency,” … “baseball cards” for all employees that distill their strengths and weaknesses, and employing computerized decision-making systems to make believability-weighted decisions. (more)
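The “believability-weighted decisions” mentioned above can be illustrated with a simple weighted vote, where each person’s vote counts in proportion to their track record on similar questions. This is my own toy illustration of the general idea, not Bridgewater’s actual system:

```python
def believability_weighted_vote(votes, weights):
    """Aggregate yes/no votes (1 = yes, 0 = no), each weighted by the
    voter's 'believability' score; returns True if the weighted average
    exceeds one half."""
    assert len(votes) == len(weights) and sum(weights) > 0
    avg = sum(v * w for v, w in zip(votes, weights)) / sum(weights)
    return avg > 0.5

# Three junior staff vote yes; one senior person with a strong track
# record (weight 5) votes no, and outweighs them: 3/8 < 1/2.
print(believability_weighted_vote([1, 1, 1, 0], [1, 1, 1, 5]))  # False
```

The hard part, of course, is not the arithmetic but agreeing on the weights; Dalio’s “baseball cards” are his answer to that.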

This book seems useful if you were the absolute undisputed ruler of a firm, so that you could push a culture of your choice and fire anyone who seems to resist. And were successful enough to have crowds eager to join, even after you’d fired many. And didn’t need to coordinate strongly with customers, suppliers, investors, and complementors. Which I guess applies to Dalio.

But he has little advice to offer those who don’t sit in an organization or social network that consistently rewards “radical truth.” He offers no help in thinking about how to trade honesty against the other things your social contexts will demand of you. Dalio repeatedly encourages honesty, but he admits that it is often painful, and that many aren’t suited for it. He mainly just says to push through the pain, and get rid of people who resist it, and says that these big visible up-front costs will all be worth it in the long run.

Dalio also seems to equate conflict and negative opinions with honesty. That is, he seeks a culture where people can say things that others would rather not hear, but doesn’t seem to consider that such negative opinions need not be “honest” opinions. The book makes hundreds of claims, but doesn’t cite outside sources, nor compare itself to other writings on the subject. Dalio doesn’t point to particular evidence in support of particular claims, nor give them any differing degrees of confidence, nor credit particular people as the source of particular claims. It is all just stuff he’s all sure of, that he endorses, all supported by the evidence of his firm’s success.

I can believe that the firm Bridgewater is full of open conflict, with negative opinions being frequently and directly expressed. And it would be interesting to study social behavior in such a context. I accept that this firm functions doing things this way. But I can’t tell if it succeeds because of or in spite of this open conflict. Yes this firm succeeds, but then so do many others with very different cultures. The fact that the top guy seems pretty self-absorbed and not very aware of the questions others are likely to ask of his book is not a good sign.

But if it’s a bad sign it’s not much of one; plenty of self-absorbed people have built many wonderful things. What he has helped to build might in fact be wonderful. It’s just too bad that we can’t tell much about that from his book.


General Evolvable Brains

Human brains today can do a remarkably wide range of tasks. Our mental capacities seem much more “general” than those of all the artificial systems we’ve ever created. Those who are trying to improve such systems have long wondered: what is the secret of human general intelligence? In this post I want to consider what we can learn about this from the fact that the brain evolved. How would an evolved brain be general?

A key problem faced by single-celled organisms is how to make all of their materials and processes out of the available sources of energy and materials. They do this mostly via metabolism, which is mostly a set of enzymes that encourage particular reactions converting some materials into others. Together with cell-wall containers to keep those enzymes close to each other. Some organisms are more general than others, in that they can do this key task in a wider range of environments.

Most single-celled organisms use an especially evolvable metabolism design space. That is, their basic overall metabolism system seems especially well-suited to finding innovations and adaptations mostly via blind random search, in a way that avoids getting stuck in local maxima. As I explained in a recent post, natural metabolisms are evolvable in part because they have genotypes that are highly redundant relative to phenotypes: many sets of enzymes can map any given set of inputs into any given set of outputs. And this redundancy requires a substantial overcapacity; the metabolism needs to contain many more enzymes than are strictly needed to create any given mapping.

The main way that such organisms are general is that they have metabolisms with a large library of enzymes. Not just a large library of genes that could code for enzymes if turned on, but an actual large set of enzymes usually created. They make many more enzymes than they actually need in each particular environment where they find themselves. This comes at a great cost; making all those enzymes and driving their reactions doesn’t come cheap.

A relevant analogous toy problem is that of logic gates mapping input signals onto output signals:

[In] a computer logic gate toy problem, … there are four input lines, four output lines, and sixteen binary logic gates between. The genotype specifies the type of each gate and the set of wires connecting all these things, while the phenotype is the mapping between input and output gates. … All mappings between four inputs and four outputs can be produced using only four internal gates; sixteen gates is a factor of four more than needed. But in the case of four gates the set of genotypes is not big enough compared to the set of phenotypes to allow easy evolution. For [evolvable] innovation, sixteen gates is enough, but four gates is not. (more)
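The redundancy in this toy problem is easy to sketch in code: a genotype is a list of gates and wiring, and a phenotype is the truth table the circuit computes. The gate types and wiring scheme below are my own assumptions for illustration, beyond what the quoted description specifies:

```python
import itertools
import random

# All gate types here are symmetric in their two inputs.
GATE_FUNCS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

def random_genotype(n_gates=16, n_inputs=4, rng=random):
    """A genotype: for each gate, a type and two source signals, drawn
    from the primary inputs and earlier gates (feed-forward wiring)."""
    geno = []
    for g in range(n_gates):
        n_sources = n_inputs + g
        geno.append((rng.choice(list(GATE_FUNCS)),
                     rng.randrange(n_sources), rng.randrange(n_sources)))
    return geno

def phenotype(geno, n_inputs=4, n_outputs=4):
    """The phenotype: the mapping from every input pattern to the values
    of the last n_outputs gates."""
    table = []
    for bits in itertools.product((0, 1), repeat=n_inputs):
        signals = list(bits)
        for gtype, s1, s2 in geno:
            signals.append(GATE_FUNCS[gtype](signals[s1], signals[s2]))
        table.append(tuple(signals[-n_outputs:]))
    return tuple(table)

# Demo of genotype/phenotype redundancy: swapping the two inputs of the
# first gate changes the genotype but not the phenotype, since these gate
# types are symmetric. Many genotypes per phenotype means mutations can
# wander along neutral networks instead of getting stuck at local maxima.
g1 = random_genotype(rng=random.Random(0))
t, a, b = g1[0]
g2 = [(t, b, a)] + g1[1:]
print(phenotype(g1) == phenotype(g2))  # True
```

With sixteen gates there are vastly more genotypes than the roughly 2^64 possible truth tables, which is the overcapacity the quoted passage says evolvability requires.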

Note that evolution doesn’t always use such highly evolvable design spaces. For example, our skeletal structure doesn’t have lots of extra bones sitting around ready to be swapped into new roles in new environments. In such cases, evolution chose not to pay large extra costs for generality and evolvability, because the environment seemed predictable enough to stay close to a good enough design. As a result, innovation and adaptation of skeletal structure is much slower and more painful, and could fail badly in novel enough environments.

Now let’s consider brains. It may be that for some tasks, evolution found such an effective structure that it chose to commit to that structure, betting that its solution was stable and reliable enough across future environments to let it forgo the big extra costs of more general and evolvable designs. But if we are looking to explain a surprising generality, flexibility, and rapid evolution in human brains, it makes sense to consider the possibility that human brain design took a different path, one more like that of single-celled metabolism.

That is, one straightforward way to design a general evolvable brain is to use an extra large toolbox of mental modules that can be connected together in many different ways. While each tool might be a carefully constructed jewel, the whole set of tools would have less of an overall structure. Like a pile of logic gates that can be connected many ways, or metabolism sub-networks that can be connected together into many networks. In this case, the secret to general evolvable intelligence would be less in the particular tools and more in having an extra large set of tools, plus some simple general ways to search in the space of tool combinations. A tool set so large that the brain can do most tasks in a great many different ways.

Much of the search for brain innovations and adaptations would then be a search in the space of ways to connect these tools together. Some aspects of this search could happen over evolutionary timescales, some could happen over the lifetime of particular brains, and some could happen on the timescale of cultural evolution, once that got started.

On the timescale of an individual brain lifetime, a search for tool combinations would start with brains that are highly connected, and then prune long term connections as particular desired paths between tools are found. As one learned how to do a task better, one would activate smaller brain volumes. When some brain parts were damaged, brains would often be able to find other combinations of the remaining tools to achieve similar functions. Even losing a whole half of a brain might not greatly reduce performance. And these are all in fact common patterns for human brains.

Yes, something important happened early in human history. Some key event changed the growth rate of human abilities, though not immediate ability levels, and it did this without much changing brain modules and structures, which remain quite close to those of other primates. Plausibly, we had finally collected enough hard-wired tools, or refined them well enough, to let us start to reliably copy each others’ behaviors. And that allowed cultural evolution, a much-faster-than-evolutionary search in the space of practices. Such practices included choices of which combinations of brain modules to activate in which contexts.

What can this view say about the future of brains? On ems, it suggests that human brains have a lot of extra capacity. We can probably go far in taking an em that can do a job task and throwing away brain modules not needed for that task. At some point cutting hurts performance too much, but for many job tasks you might cut 50% to 90% before then.

Regarding other artificial intelligence, it suggests that if we still have a lot to learn via substantially random search, with no grand theory to integrate it all, then we’ll have to focus on collecting more and better tools. Machines would gradually get better as we collect more tools. There may be thresholds where you need enough tools to do certain jobs well, and while most tools would make only small contributions, perhaps there are a few bigger tools that matter more. So key thresholds would come from the existence of key jobs, and from the lumpiness of tools. We should expect progress to be relatively continuous, except perhaps due to the discovery of especially lumpy tools, or to passing thresholds that enable key jobs to be done.


Economists Rarely Say “Nothing But”

Imagine someone said:

Those physicists go too far. They say conservation of momentum applies exactly at all times to absolutely everything in the universe. And yet they can’t predict whether I will raise my right or left hand next. Clearly there is more going on than their theories can explain. They should talk less and read more literature. Maybe then they’d stop saying immoral things like Earth’s energy is finite.

Sounds silly, right? But many literary types really don’t like economics (in part due to politics), and they often try to justify their dislike via a similar critique. They say that we economists claim that complex human behavior is “nothing but” simple economic patterns. For example, in the latest New Yorker magazine, journalist and novelist John Lanchester tries to make such a case in an article titled:

Can Economists and Humanists Ever Be Friends? One discipline reduces behavior to elegantly simple rules; the other wallows in our full, complex particularity. What can they learn from each other?

He starts by focusing on our book Elephant in the Brain. He says we make reasonable points, but then go too far:

The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. … Erving Goffman’s “The Presentation of Self in Everyday Life,” or … Pierre Bourdieu’s masterpiece “Distinction” … are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.

This intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). … Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve … or mb=mc. … These are powerful tools, which can be taken too far.

You might think that Lanchester would support his claim that we overreach by pointing to particular large claims and then offering evidence that they are false in particular ways. Oddly, you’d be wrong. (Our book mentions no math nor rules of any sort.) He actually seems to accept most specific claims we make, even pretty big ones:

Many of the details of Hanson and Simler’s thesis are persuasive, and the idea of an “introspective taboo” that prevents us from telling the truth to ourselves about our motives is worth contemplating. … The writers argue that the purpose of medicine is as often to signal concern as it is to cure disease. They propose that the purpose of religion is as often to enhance feelings of community as it is to enact transcendental beliefs. … Some of their most provocative ideas are in the area of education, which they believe is a form of domestication. … Having watched one son go all the way through secondary school, and with another who still has three years to go, I found that account painfully close to the reality of what modern schooling is like.

While Lanchester does argue against some specific claims, these are not claims that we actually made. For example:

“The Elephant in the Brain”… has moments of laughable wrongness. We’re told, “Maya Angelou … managed not to woo Bill Clinton with her poetry but rather to impress him—so much so that he invited her to perform at his presidential inauguration in 1993.” The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.

But we said nothing like “Angelou’s career amounts to nothing more than.” Saying that she impressed Clinton with her poetry is not remotely to imply there was “nothing more” to her career. Also:

More generally, Hanson and Simler’s emphasis on signalling and unconscious motives suggests that the most important part of our actions is the motives themselves, rather than the things we achieve. … The last sentence of the book makes the point that “we may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.” With that one observation, acknowledging that the consequences of our actions are more important than our motives, the argument of the book implodes.

We emphasize “signalling and unconscious motives” because it is the topic of our book. We don’t ever say motives are the most important part of our actions, and as he notes, in our conclusion we suggest the opposite. Just as a book on auto repair doesn’t automatically claim auto repair to be the most important thing in the world, a book on hidden motives needn’t claim motives are the most important aspect of our lives. And we don’t.

In attributing “overreach” to us, Lanchester seems to rely most heavily on a quick answer I gave in an interview, where Tyler Cowen asked me to respond “in as crude or blunt terms as possible”:

Wait, though—surely signalling doesn’t account for everything? Hanson … was asked to give a “short, quick and dirty” answer to the question of how much human behavior “ultimately can be traced back to some kind of signalling.” His answer: “In a rich society like ours, well over ninety per cent.” … That made me laugh, and also shake my head. … There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.

That quote is not from our book, and is from a context where you shouldn’t expect it to be easy to see exactly what was meant. And saying that a signaling motive is on average one of the strongest (if often unconscious) motives in an area of life is to say that this motive importantly shapes some key patterns of behavior in this area of life; it is not remotely to claim that this fact explains most of the details of human behavior in this area! So shaping key patterns in 90% of areas explains far less than 90% of all behavior details. Saying that signaling is an important motive doesn’t at all say that human behavior is “nothing more” than signaling. Other motives contribute, we vary in how honest and conscious we are of each motive, there are usually a great many ways to signal any given thing in any given context, and many different cultural equilibria can coordinate individual behavior. There remains plenty of room for complexity, as people like Goffman and Bourdieu illustrate.

Saying that an abstraction is important doesn’t say that the things to which it applies are “nothing but” that abstraction. For example, conservation of momentum applies to all physical behavior, yet it explains only a tiny fraction of the variance in behavior of physical objects. Natural selection applies to all species, yet most species details must be explained in other ways. If most roads try to help people get from points A to B, that simple fact is far from sufficient to predict where all the roads are. The fact that a piece of computer code is designed to help people navigate roads explains only a tiny fraction of which characters are where in the code. Financial accounting applies to nearly 100% of firms, yet it explains only a small fraction of firm behavior. All people need air and food to survive, and will have a finite lifespan, and yet these facts explain only a tiny fraction of their behavior.

Look, averaging over many people and contexts there must be some strongest motive overall. Economists might be wrong about what that is, and our book might be wrong. But it isn’t overreach or oversimplification to make a tentative guess about it, and knowing that strongest motive won’t let you explain most details of human behavior. As an analogy, consider that every nation has a largest export commodity. Knowing this commodity will help you understand something about this nation, but it isn’t remotely reasonable to say that a nation is “nothing more” than its largest export commodity, nor to think this fact will explain most details of behavior in this nation.

There are many reasonable complaints one can make about economics. I’ve made many myself. But this complaint that we “overreach” by “reducing complexity to simple rules” seems to me mostly rhetorical flourish without substance. For example, most models we fit to data have error terms to accommodate everything else that we’ve left out of that particular model. We economists are surely wrong about many things, but to argue that we are wrong about a particular thing you’ll actually need to talk about details related to that thing, instead of waving your hands in the general direction of “complexity.”


Today, Ems Seem Unnatural

The main objections to “test tube babies” weren’t about the consequences for mothers or babies, they were about doing something “unnatural”:

Given the number of babies that have now been conceived through IVF — more than 4 million of them at last count — it’s easy to forget how controversial the procedure was during the time when, medically and culturally, it was new. … They weren’t entirely sure how IVF was different from cloning, or from the “ethereal conception” that was artificial insemination. They balked at the notion of “assembly-line fetuses grown in test tubes.” … For many, IVF smacked of a moral overstep — or at least of a potential one. … James Watson publicly decried the procedure, telling a Congressional committee in 1974 that … “All hell will break loose, politically and morally, all over the world.” (more)

Similarly, for most ordinary people, the problem with ems isn’t that the scanning process might kill the original human, or that the em might be an unconscious zombie due to its new hardware not supporting consciousness. In fact, people more averse to death have fewer objections to ems, as they see ems as a way to avoid death. The main objections to ems are just that ems seem “unnatural”:

In four studies (including pilot) with a total of 952 participants, it was shown that biological and cultural cognitive factors help to determine how strongly people condemn mind upload. … Participants read a story about a scientist who successfully transfers his consciousness (uploads his mind) onto a computer. … In the story, the scientist injects himself with nano-machines that enter his brain and substitute his neurons one-by-one. After a neuron has been substituted, the functioning of that neuron is copied (uploaded) on a computer; and after each neuron has been copied/uploaded the nano-machines shut down, and the scientist’s body falls on the ground completely limp. Finally, the scientist wakes up inside the computer.

The following variations made NO difference:

[In Study 1] we modified our original vignette by changing the target of mind upload to be either (1) a computer, (2) an android body, (3) a chimpanzee, or (4) an artificial brain. …

[In Study 2] we changed the story in a manner that the scientist merely ingests the nano-machines in a capsule form. Furthermore, we used a 2 × 2 experimental set-up to investigate whether the body dying on a physical level [heart stops or the brain stops] impacts the condemnation of the scientist’s actions. We also investigated whether giving participants information on how the transformation feels for the scientist once he is in the new platform has an impact on the results.

What did matter:

People who value purity norms and have higher sexual disgust sensitivity are more inclined to condemn mind upload. Furthermore, people who are anxious about death and condemn suicidal acts were more accepting of mind upload. Finally, higher science fiction literacy and/or hobbyism strongly predicted approval of mind upload. Several possible confounding factors were ruled out, including personality, values, individual tendencies towards rationality, and theory of mind capacities. (paper; summary; HT Stefan Schubert)

As with IVF, once ems are commonplace they will probably also come to seem less unnatural; strange never-before-seen possibilities evoke more fear and disgust than common things, unless those common things seem directly problematic.


Yay Marriage

This Saturday I acquire my first kid-in-law, when one of my two sons marries. I’m supposed to be happy for the couple, and I am indeed happy. Not only that, I’m happy to participate in a ceremony wherein many of their associates create common knowledge about our willingness to spend resources to collectively declare our happiness about this marriage. But I wonder: what does this fact say?

We often celebrate general symbols, as with holidays. When we celebrate particular people we know, we often celebrate accomplishments, as in elections, graduations, sport wins, and retirements. Sometimes we celebrate nothing in particular, as with birthdays, just to have an excuse to get together.

But I see our more heart-felt collective celebrations as choices to commit: marriages, baby showers, baptisms, citizenship, and commitments to join groups as doctors, soldiers, and nuns do. It makes sense to celebrate commitments together, if a community is supposed to be part of the commitment. Committing to each other seems one of the most heart-felt things we ever do.

It seems to me that our most hopeful and heart-felt commitment celebrations are marriages and baby showers, which are of course related. And this suggests that these are among the most important commitments we make, not just as individuals, but as communities offering our support to individuals.

Our society today doesn’t support monogamy and marriage as strongly as did ancestral societies. We have far weaker legal and social sanctions against those who divorce, don’t marry, or cheat on marriages. When some express strong criticisms of marriage, others usually don’t take much offense or argue against them very vigorously. We even allow and often encourage experiments with other arrangements.

But the unparalleled joy and hope we feel at weddings, and perhaps baby showers, and our eagerness to participate in them, are real data, not to be ignored. These feelings say that we see these events as very important, and we guess that getting married or having kids is on average a better choice than staying single or childless. We accept that people must make their own choices for their lives, but on average we hope for marriage and kids. Especially we parents.

Commitments are choices to neglect future preferences. Staying with a spouse or a child for only as long as you feel in the mood in the moment is not a commitment, and our deep hope and celebration of these commitments says that we see such neglect as often wise. You may not always be happy with such choices, but a commitment to them can bring deep satisfying meaning to your life.

We don’t often say these things directly or out loud. But you can see us saying them in the way we stand with you at your wedding, beaming with hope and pride.


Maps of Meaning

Like many folks recently, I decided to learn more about Jordan Peterson. Not being eager for self-help or political discussion, I went to his most well-known academic book, Maps of Meaning. Here is Peterson’s summary: 

I came to realize that ideologies had a narrative structure – that they were stories, in a word – and that the emotional stability of individuals depended upon the integrity of their stories. I came to realize that stories had a religious substructure (or, to put it another way, that well-constructed stories had a nature so compelling that they gathered religious behaviors and attitudes around them, as a matter of course). I understood, finally, that the world that stories describe is not the objective world, but the world of value – and that it is in this world that we live, first and foremost. … I have come to understand what it is that our stories protect us from, and why we will do anything to maintain their stability. I now realize how it can be that our religious mythologies are true, and why that truth places a virtually intolerable burden of responsibility on the individual. I know now why rejection of such responsibility ensures that the unknown will manifest a demonic face, and why those who shrink from their potential seek revenge wherever they can find it. (more)

In his book, Peterson mainly offers his best-guess description of common conceptual structures underlying many familiar cultural elements, such as myths, stories, histories, rituals, dreams, and language. He connects these structures to cultural examples, to a few psychology patterns, and to rationales of why such structures would make sense. 

But while he can be abstract at times, Peterson doesn’t go meta. He doesn’t tell readers how certain he is of his claims, nor distinguish the claims in which he’s more confident. He doesn’t say how widely others agree with him, he doesn’t mention any competing accounts to his own, and he doesn’t consider examples that might go against his account. He seems to presume that the common underlying structures of past cultures embody great wisdom for human behavior today, yet he doesn’t argue for that explicitly, he doesn’t consider any other forces that might shape such structures, and he doesn’t consider how fast their relevance declines as the world changes. The book isn’t easy to read, with overly long and obscure words, and way too much repetition. He shouldn’t have used his own voice for his audiobook.

In sum, Peterson comes across as pompous, self-absorbed, and not very self-aware. But on the one key criterion by which such a book should most be judged, I have to give it to him: the book offers insight. The first third of the book felt solid, almost self-evident: yes such structures make sense and do underlie many cultural patterns. From then on the book slowly became more speculative, until at the end I was less nodding and more rolling my eyes. Not that most things he said even then were obviously wrong, just that it felt too hard to tell if they were right. (And alas, I have no idea how original this book’s insight is.)

Let me finish by offering a small insight I had while reading the book, one I haven’t heard from elsewhere. A few weeks ago I talked about how biological evolution avoids local maxima via highly redundant genotypes:

There are of course far more types of reactions between molecules than there are types of molecules. So using Wagner’s definitions, the set of genotypes is vastly larger than the set of phenotypes. Thus a great many genotypes result in exactly the same phenotype, and in fact each genotype has many neighboring genotypes with that same exact phenotype. And if we lump all the connected genotypes that have the same phenotype together into a unit (a unit Wagner calls a “genotype network”), and then look at the network of one-neighbor connections between such units, we will find that this network is highly connected.

That is, if one presumes that evolution (using a large population of variants) finds it easy to make “neutral” moves between genotypes with exactly the same phenotype, and hence the same fitness, then large networks connecting genotypes with the same phenotype imply that it only takes a few non-neutral moves between neighbors to get to most other phenotypes. There are no wide deep valleys to cross. Evolution can search large spaces of big possible changes, and doesn’t have a problem finding innovations with big differences. (more) 
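The quoted claim can be checked in a toy model (my own construction for illustration, not Wagner’s actual model): take genotypes to be bit strings, with a deliberately redundant phenotype map so that many one-bit mutations are “neutral”. Then each phenotype’s genotypes form one connected network, and every such network borders another, so after free neutral drift a single non-neutral mutation reaches a new phenotype:

```python
from itertools import product
from collections import deque

L = 8   # genome length (toy value)
K = 3   # phenotype = popcount // K: a deliberately redundant map

def phenotype(g):
    # Many genotypes share a phenotype, so many one-bit mutations are "neutral".
    return sum(g) // K

def neighbors(g):
    # All genotypes one point mutation (bit flip) away.
    for i in range(len(g)):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

genotypes = list(product((0, 1), repeat=L))
classes = {}
for g in genotypes:
    classes.setdefault(phenotype(g), set()).add(g)

def is_connected(nodes):
    # BFS within one phenotype class, stepping only between same-class neighbors.
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        for n in neighbors(cur):
            if n in nodes and n not in seen:
                seen.add(n)
                queue.append(n)
    return seen == nodes

# 1. Each phenotype's genotypes form one connected "genotype network":
#    evolution can drift across it via fitness-neutral mutations.
assert all(is_connected(c) for c in classes.values())

# 2. Every network touches another: after free neutral drift, a single
#    non-neutral mutation suffices to reach a new phenotype.
for p, c in classes.items():
    assert any(phenotype(n) != p for g in c for n in neighbors(g))

print(f"{len(genotypes)} genotypes collapse to {len(classes)} phenotypes")
```

Of course real genotype-phenotype maps are vastly larger and messier than this popcount sketch; the point is only that redundancy plus neutral drift leaves no wide, deep valleys to cross.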

It occurs to me that this is also an advantage of traditional ways of encoding cultural values. An explicit formal encoding of values, such as found in modern legal codes, is far less redundant. Most random changes to such an abstract formal encoding create big bad changes to behavior. But when values are encoded in many stories, histories, rituals, etc., a change to any one of them needn’t much change overall behavior. So the genotype can drift until it is near a one-step change to a better phenotype. This allows culture to evolve more incrementally, and avoid local maxima. 

Implicit culture seems more evolvable, at least to the extent slow evolution is acceptable. We today are changing culture quite rapidly, and often based on pretty abstract and explicit arguments. We should worry more about getting stuck in local maxima.  


Responses to Sex Inequality Critics

As I promised yesterday, here are specific responses to the nine mass media articles that mentioned my sex redistribution post in the eight most popular media outlets, as measured by SemRush “organic traffic”. (For example, the note (21M) means 21 million in monthly traffic.) Quotes are indented; my responses are not.

My responses are somewhat repetitive, as most critics seem content to claim that self-labeled “incels” advocating for sex redistribution are deeply icky people, and especially that they are women-hating. Even if that were true, however, it doesn’t say much to me about the wisdom or value of sex redistribution. I’m much more interested in general sex inequality than in the issues of the tiny fraction of self-labeled “incel” activists.  Continue reading "Responses to Sex Inequality Critics" »
