Oxford To Publish The Age Of Em

Eighteen months ago I asked here for readers to criticize my Em Econ book draft, then 62K words. (137 of you sent comments – thanks!) Today I announce that Oxford University Press will publish its descendant (now 212K words) in Spring 2016. Tentative title, summary, outline:

The Age Of Em: Envisioning Brain Emulation Societies

Author Robin Hanson takes an oft-mentioned disruptive future tech, brain emulations, and expertly analyzes its social consequences in unprecedented breadth and detail. His book is intended to prove that we can foresee our social future, not just by projecting trends, but also by analyzing the detailed social consequences of particular disruptive future technologies.

I. Basics
1. Start: Contents, Preface, Introduction, Summary
2. Modes: Precedents, Factors, Dreamtime, Limits
3. Mechanics: Emulations, Opacity, Hardware, Security
II. Physics
4. Scales: Time, Space, Reversing
5. Infrastructure: Climate, Cooling, Buildings
6. Existence: Virtuality, Views, Fakery, Copying, Darkness
7. Farewells: Fragility, Retirement, Death
III. Economics
8. Labor: Wages, Selection, Enough
9. Efficiency: Competition, Eliteness, Spurs, Power
10. Business: Institutions, Growth, Finance, Manufacturing
11. Lifecycle: Careers, Age, Preparation, Training
IV. Organization
12. Clumping: Cities, Speeds, Transport
13. Extremes: Software, Inequality, War
14. Groups: Clans, Nepotism, Firms, Teams
15. Conflict: Governance, Law, Innovation
V. Sociology
16. Connection: Mating, Signaling, Identity, Ritual
17. Collaboration: Conversation, Synchronization, Coalitions
18. Society: Profanity, Divisions, Culture, Stories
19. Minds: Humans, Unhumans, Intelligence, Psychology
VI. Implications
20. Variations: Trends, Alternatives, Transition, Aliens
21. Choices: Evaluation, Policy, Charity, Success
22. Finale: Critics, Conclusion, References, Thanks
23. Appendix: Motivation, Method, Biases


Advice Shows Status

When we give and seek advice, we think and talk as if we mainly just want to exchange useful information on the topic at hand. But seeking someone’s advice shows them respect, especially if that advice is followed. And in fact, a lot of our advice giving and taking behavior can be better understood in such status terms:

When making decisions together, we tend to give everyone an equal chance to voice their opinion. To make the best decisions, however, each opinion must be scaled according to its reliability. Using behavioral experiments and computational modelling, we tested (in Denmark, Iran, and China) the extent to which people follow this latter, normative strategy. We found that people show a strong equality bias: they weight each other’s opinion equally regardless of differences in their reliability, even when this strategy was at odds with explicit feedback or monetary incentives. (more)
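
The normative strategy in the quoted study, scaling each opinion by its reliability, is just precision-weighted averaging. A minimal simulation (with made-up reliabilities, not the study’s data) shows why it beats the equality bias:

```python
import random

random.seed(0)

TRUTH = 10.0
SIGMA_A, SIGMA_B = 1.0, 3.0  # advisor A is much more reliable than B

def trial():
    a = random.gauss(TRUTH, SIGMA_A)
    b = random.gauss(TRUTH, SIGMA_B)
    equal = (a + b) / 2                      # equality bias: weight opinions equally
    wa, wb = 1 / SIGMA_A**2, 1 / SIGMA_B**2  # normative: weight by precision
    weighted = (wa * a + wb * b) / (wa + wb)
    return (equal - TRUTH) ** 2, (weighted - TRUTH) ** 2

errors = [trial() for _ in range(20000)]
mse_equal = sum(e for e, _ in errors) / len(errors)
mse_weighted = sum(w for _, w in errors) / len(errors)
print(mse_equal > mse_weighted)  # True: reliability weighting has lower error
```

Here equal weighting pays a large accuracy cost whenever reliabilities differ much, which is the cost the subjects in the study kept paying despite feedback and incentives.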

Individuals in powerful positions are the worst offenders. According to one experimental study, they feel competitive when they receive advice from experts, which inflates their confidence and leads them to dismiss what the experts are telling them. High-power participants in the study ignored almost two-thirds of the advice they received. Other participants (the control and low-power groups) ignored advice about half as often. … Research shows that they value advice more if it comes from a confident source, even though confidence doesn’t signal validity. Conversely, seekers tend to assume that advice is off-base when it veers from the norm or comes from people with whom they’ve had frequent discord. (Experimental studies show that neither indicates poor quality.) Seekers also don’t embrace advice when advisers disagree among themselves. And they fail to compensate sufficiently for distorted advice that stems from conflicts of interest, even when their advisers have acknowledged the conflicts and the potential for self-serving motives. … Though many people give unsolicited advice, it’s usually considered intrusive and seldom followed. Another way advisers overstep is to chime in when they’re not qualified to do so. … many advisers take offense when their guidance isn’t accepted wholesale, curtailing further discussion. (more)


Bowing To Elites

Imagine that you are a politically savvy forager in a band of size thirty, or a politically savvy farmer near a village of size thousand. You have some big decisions to make, including who to put in various roles, such as son-in-law, co-hunter, employer, renter, cobbler, or healer. Many people may see your choices. How should you decide?

Well first you meet potential candidates in person and see how much you intuitively respect them, get along with them, and can agree on relative status. It isn’t enough for you to have seen their handiwork, you want to make an ally out of these associates, and that won’t work without respect, chemistry, and peace. Second, you see what your closest allies think of candidates. You want to be allies together, so it is best if they also respect and get along with your new allies.

Third, if there is a strong leader in your world, you want to know what that leader thinks. Even if this leader says explicitly that you can do anything you like and that they don’t care, if you get any hint whatsoever that they do care, you’ll look closely to infer their preferences. And you’ll avoid doing anything they’d dislike too much, unless your alliance is ready to mount an overt challenge.

Fourth, even if there is no strong leader, there may be a dominant coalition encompassing your band or town. This is a group of people who tend to support each other, get deference from others, and win in conflicts. We call these people “elites.” If your world has elites, you’ll want to treat their shared opinions like those of a strong leader. If elites would gossip disapproval of a choice, maybe you don’t want it.

What if someone sets up objective metrics to rate people in suitability for the roles you are choosing? Say an archery contest for picking hunters, or a cobbler contest to pick cobblers. Or public track records of how often healer patients die, or how long cobbler shoes last. Should you let it be known that such metrics weigh heavily in your choices?

You’ll first want to see what your elites or leader think of these metrics. If they are enthusiastic, then great, use them. And if elites strongly oppose, you’d best only use them when elites can’t see. But what if elites say, “Yeah you could use those metrics, but watch out because they can be misleading and make perverse incentives, and don’t forget that we elites have set up this whole other helpful process for rating people in such roles.”

Well in this case you should worry that elites are jealous of this alternative metric displacing their advice. They like the power and rents that come from advising on who to pick for what. So elites may undermine this metric, and punish those who use it.

When elites advise people on who to pick for what, they will favor candidates who seem loyal to elites, and punish those who seem disloyal, or who aren’t sufficiently deferential. But since most candidates are respectful enough, elites often pick those they think will actually do well in the role. All else equal, that will make them look good, and help their society. While their first priority is loyalty, looking good is often a close second.

Since humans evolved to be unconscious political savants, this is my basic model to explain the many puzzles I listed in my last post. When choosing lawyers, doctors, real estate agents, pundits, teachers, and more, elites put many obstacles in the way of objective metrics like track records, contests, or prediction markets. Elites instead suggest picking via personal impressions, personal recommendations, and school and institution prestige. We ordinary people mostly follow this elite advice. We don’t seek objective metrics, and instead use elite endorsements, such as the prestige of where someone went to school or now works. In general we favor those who elites say have the potential to do X, over those who actually did X.

This all pushes me to more favor two hypotheses:

  1. We choose people for roles mostly via evolved mental modules designed mainly to do well at coalition politics. The resulting system does often pick people roughly well for their roles, but more as a side effect than as a direct one.
  2. In our society, academia reigns as a high elite, especially on advice for who to put in what roles. When ordinary people see another institution framed as competing directly with academia, that other institution loses. Pretty much all prestigious institutions in our society are seen as allied with academia, not as competing with it. Even religions, often disapproved by academics, rely on academic seminary degrees, and strongly push kids to gain academic prestige.

We like to see ourselves as egalitarian, resisting any overt dominance by our supposed betters. But in fact, unconsciously, we have elites and we bow to them. We give lip service to rebelling against them, and they pretend to be beaten back. But in fact we constantly watch out for any actions of ours that might seem to threaten elites, and we avoid them like the plague. Which explains our instinctive aversion to objective metrics in people choice, when such metrics compete with elite advice.

Added 8am: I’m talking here about how we intuitively react to the possibility of elite disapproval; I’m not talking about how elites actually react. Also, our intuitive reluctance to embrace track records isn’t strong enough to prevent us from telling specific stories about our specific achievements. Stories are way too big in our lives for that. We already have norms against bragging, and yet we still manage to make ourselves look good in stories.


Dissing Track Records

Years ago I was surprised to learn that patients usually can’t pick docs based on track records of previous patient outcomes. Because, people say, that would invade privacy and create bad incentives for docs picking patients. They suggest instead relying on personal impressions, wait times, “bedside” manner, and prestige of doc med school or hospital. (Yeah, those couldn’t possibly make bad incentives.) Few ever study if such cues correlate with patient outcomes, and we actively prevent the collection of patient satisfaction track records.

For lawyers, most trials are in the public record, so privacy shouldn’t be an obstacle to getting track records. So people pick lawyers based on track records, right? Actually no. People who ask are repeatedly told: no, practically speaking you can’t get lawyer track records, so just pick lawyers based on personal impressions or the prestige of their law firm or school. (Few study if those correlate with client outcomes.)

A new firm Premonition has been trying to change that:

Despite being public record, court data is surprisingly inaccessible in bulk, nor is there a unified system to access it, outside of the Federal Courts. Clerks of courts refused Premonition requests for case data. Resolved to go about it the hard way, Unwin … wrote a web crawler to mine courthouse web sites for the data, read it, then analyze it in a database. …

Many publications run “Top Lawyer” lists, people who are recognized by their peers as being “the best”. Premonition analyzed the win rates of these attorneys, it turned out most were average. The only way that they stood out was a disproportionate number of appealed and re-opened cases, i.e. they were good at dragging out litigation. They discovered that even the law firms themselves were poor at picking litigators. In a study of the United Kingdom Court of Appeals, it found a slight negative correlation of -0.1 between win rates and re-hiring rates, i.e. a barrister 20% better than their peers was actually 2% less likely to be re-hired! … Premonition was formed in March 2014 and expected to find a fertile market for their services amongst the big law firms. They found little appetite and much opposition. …

The system found an attorney with 22 straight wins before the judge – the next person down was 7. A bit of checking revealed the lawyer was actually a criminal defense specialist who operated out of a strip mall. … The firm claims such outliers are far from rare. Their web site … shows an example of an attorney with 32 straight wins before a judge in Orange County, Florida. (more)
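
The quoted “2% less likely” figure is just the standardized regression reading of a -0.1 correlation: a deviation 20% above the mean predicts a deviation of 20% × 0.1 = 2% below it. A small sketch with synthetic data (the win/re-hire numbers here are simulated, not Premonition’s):

```python
import random

random.seed(1)

# Simulated standardized win rates, and re-hire rates built to have
# a weak -0.1 correlation with them (illustrative numbers only).
n = 100_000
win = [random.gauss(0, 1) for _ in range(n)]
rehire = [-0.1 * w + random.gauss(0, (1 - 0.1**2) ** 0.5) for w in win]

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

r = pearson(win, rehire)
# With both variables standardized, the regression slope equals r, so a
# barrister 20% better than peers is predicted to be 0.1 * 20% = 2% less re-hired.
print(r)
```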

As a society we supposedly coordinate in many ways to make medicine and law more effective, such as via funding med research, licensing professionals, and publishing legal precedents. Yet we don’t bother to coordinate to create track records for docs or lawyers, and in fact our public representatives tend to actively block such things. And strikingly: customers don’t much care. A politician who proposed to dump professional licensing would face outrage, and lose. A politician who proposed to post public track records would instead lose by being too boring.

On reflection, these examples are part of a larger pattern. For example, I’ve mentioned before that a media firm had a project to collect track records of media pundits, but then abandoned the project once it realized that this would reduce reader demand for pundits. Readers are instead told to pick pundits based on their wit, fame, and publication prestige. If readers really wanted pundit track records, some publication would offer them, but readers don’t much care.

Attempts to publish track records of school teachers based on student outcomes have produced mostly opposition. Parents are instead encouraged to rely on personal impressions and the prestige of where the person teaches or went to school. No one even considers doing this for college teachers; we at most survey student satisfaction just after a class ends (and don’t even do that right).

Regarding student evaluations, we coordinate greatly to make standard widely accessible tests for deciding who to admit to schools. But we have almost no such measures of students when they leave school for work. Instead of showing employers a standard measure of what students have learned, we tell employers to rely on personal impressions and the prestige of the school from which the student came. Some have suggested making standard what-I-learned tests, but few are interested, including employers.

For researchers like myself, publications and job position are measures of endorsements by prestigious authorities. Citations are a better measure of the long term impact of research on intellectual progress, but citations get much less attention in evaluations of researchers. Academics don’t put their citation count on their vita (= resume), and when a reporter decides which researcher to call, or a department decides who to hire, they don’t look much at citations. (Yes, I look better by citations than by publications or jobs, and my prestige is based more on the latter.)

Related is the phenomenon of people being more interested in others said to have the potential to achieve X, than in people who have actually achieved X. Related also is the phenomenon of firms being reluctant to use formulaic measures of employee performance that aren’t mediated mostly by subjective boss evaluations.

It seems to me that there are striking common patterns here, and I have in mind a common explanation for them. But I’ll wait to explain that in my next post. Till then, how do you explain these patterns? And what other data do we have on how we treat track records elsewhere?

Added 22Mar: Real estate sales are also technically in the public record, and yet it is hard for customers to collect comparable sales track records for real estate agents, and few seem to care enough to ask for them.


Ford’s Rise of Robots

In the April issue of Reason magazine I review Martin Ford’s new book Rise of the Robots:

Basically, Ford sees a robotic catastrophe coming soon because he sees disturbing signs of the times: inequality, job loss, and so many impressive demos. It’s as if he can feel it in his bones: Dark things are coming! We know robots will eventually take most jobs, so this must be now. … [But] In the end, it seems that Martin Ford’s main issue really is that he dislikes the increase in inequality and wants more taxes to fund a basic income guarantee. All that stuff about robots is a distraction. (more)

I’ll admit Ford is hardly alone, and he ably summarizes what are quite common views. Even so, I’m skeptical.


The Data We Need

Almost all research into human behavior focuses on particular behaviors. (Yes, not extremely particular, but also not extremely general.) For example, an academic journal article might focus on professional licensing of dentists, incentive contracts for teachers, how Walmart changes small towns, whether diabetes patients take their medicine, how much we spend on xmas presents, or if there are fewer modern wars between democracies. Academics become experts in such particular areas.

After people have read many articles on many particular kinds of human behavior, they often express opinions about larger aggregates of human behavior. They say that government policy tends to favor the rich, that people would be happier with less government, that the young don’t listen enough to the old, that supply and demand is a good first approximation, that people are more selfish than they claim, or that most people do most things with an eye to signaling. Yes, people often express opinions on these broader subjects before they read many articles, and their opinions change suspiciously little as a result of reading many articles. But even so, if asked to justify their more general views academics usually point to a sampling of particular articles.

Much of my intellectual life in the last decade has been spent in the mode of collecting many specific results, and trying to fit them into larger simpler pictures of human behavior. So both I and the academics I’m describing above in essence present ourselves as using the many results presented in academic papers about particular human behaviors as data to support our broader inferences about human behavior. But we do almost all of this informally, via our vague impressionistic memories of what has been the gist of the many articles we’ve read, and our intuitions about how consistent more general claims seem with those particulars.

Of course there is nothing especially wrong with intuitively matching data and theory; it is what we humans evolved to do, and we wouldn’t be such a successful species if we couldn’t at least do it tolerably well sometimes. It takes time and effort to turn complex experiences into precise sharable data sets, and to turn our theoretical intuitions into precise testable formal theories. Such efforts aren’t always worth the bother.

But most of these academic papers on particular human behaviors do in fact pay the bother to substantially formalize their data, their theories, or both. And if it is worth the bother to do this for all of these particular behaviors, it is hard to see why it isn’t worth the bother for the broader generalizations we make from them. Thus I propose: let’s create formal data sets where the data points are particular categories of human behavior.

To make my proposal clearer let’s for now restrict attention to explaining government regulatory policies. We could create a data set where the data points are particular kinds of products and services that governments now provide, subsidize, tax, advise, restrict, etc. For such data points we could start to collect features about them into a formal data set. Such features could say how long that sort of thing has been going on, how widely it is practiced around the world, how variable has been that practice over space and time, how familiar are ordinary people today with its details, what sort of justifications do people offer for it, what sort of emotional associations do people have with it, how much do we spend on it, and so on. We might also include anything we know about how such things correlate with age, gender, wealth, latitude, etc.

Generalizing to human behavior more broadly, we could collect a data set of particular behaviors, many of which seem puzzling at least to someone. I often post on this blog about puzzling behaviors. Each such category of behaviors could be one or more data points in this data set. And relevant features to code about those behaviors could be drawn from the features we tend to invoke when we try to explain those behaviors. Such as how common is that behavior, how much repeated experience do people have with it, how much do they get to see about the behavior of others, how strong are the emotional associations, how much would it make people look bad to admit to particular motives, and so on.

Now all this is of course much easier said than done. It is a lot of work to look up various papers and summarize their key results as entries in this data set, or even just to look at real world behaviors and put them into simple categories. It is also work to think carefully about how to usefully divide up the space of actions and features. First efforts will no doubt get it wrong in part, and have to be partially redone. But this is the sort of work that usually goes into all the academic papers on particular behaviors. Yes it is work, but if those particular efforts are worth the bother, then this should be as well.

As a first cut, I’d suggest just picking some more limited category, such as perhaps government regulations, collecting some plausible data points, making some guesses about what useful features might be, and then just doing a quick survey of some social scientists where they each fill in the data table with their best guesses for data point features. If you ask enough people, you can average out a lot of individual noise, and at least have a data set about what social scientists think are features of items in this area. With this you could start to do some exploratory data analysis, and start to think about what theories might well account for the patterns you see.
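
The survey-and-average step above can be sketched quite simply. In this sketch every item, score, and noise level is made up for illustration; the point is only that averaging many noisy expert guesses recovers something close to the underlying consensus value:

```python
import random
import statistics

random.seed(2)

# Hypothetical items and underlying "consensus" familiarity scores (0-10 scale);
# these numbers are invented, not real survey data.
true_scores = {"occupational licensing": 4.0, "drug approval": 6.0, "zoning": 3.0}

def one_guess(item):
    # each rater's guess = underlying consensus value + idiosyncratic noise
    return true_scores[item] + random.gauss(0, 2.0)

n_raters = 50
consensus = {
    item: statistics.mean(one_guess(item) for _ in range(n_raters))
    for item in true_scores
}
print(consensus)  # averages land near the underlying values; the noise mostly cancels
```

With 50 raters and noise of standard deviation 2, the averaged guess has a standard error of about 2/sqrt(50) ≈ 0.28, small enough to start exploratory analysis on.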

Now one obvious problem with my proposal is that while it looks time consuming and tedious, it isn’t obviously impressive. Researchers who specialize in particular areas will complain about your data entries related to their areas, and you won’t be able to satisfy them all. So you will end up with a chorus of critics saying your data is all wrong, and your efforts will look too low brow to cow them with your impressive tech. So I can see why this hasn’t been done much. Even so, I think this is the data set we need.


Life Before Earth

This paper is two years old now, but still seems big news to me:

[Figure: linear regression of genetic complexity (log scale) vs. time of origin]

Genetic complexity, roughly measured by the number of non-redundant functional nucleotides … Linear regression of genetic complexity (on a log scale) extrapolated back to just one base pair suggests the time of the origin of life = 9.7 ± 2.5 billion years ago. … There was no intelligent life in our universe at the time of the origin of Earth, because the universe was 8 billion years old at that time, whereas the development of intelligent life requires ca. 10 billion years of evolution. (source; discussion; HT Stuart LaForge)

That seems remarkably close to the age of the universe, 13.8 billion years. Yes it might be a coincidence, but we have other reasons to suspect life began before Earth. So I take this as a substantial if hardly overwhelming confirmation.
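
The paper’s method is a simple linear extrapolation: fit log genetic complexity against time, then solve for when the fit falls to one base pair. A minimal sketch, using rough illustrative numbers rather than the paper’s actual data:

```python
import math

# Rough illustrative data points (billions of years ago, functional genome
# size in base pairs); these only approximate the paper's figure.
points = [(3.5, 5e5), (2.0, 3e6), (0.7, 7e7), (0.45, 3e8), (0.1, 5e8)]

xs = [-ago for ago, _ in points]           # time axis, 0 = today
ys = [math.log10(bp) for _, bp in points]  # log10 of genetic complexity

# Ordinary least squares fit of log-complexity vs. time.
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Origin of life = time when the fit hits one base pair (log10 = 0).
origin_gya = intercept / slope
print(round(origin_gya, 1))  # lands near the paper's 9.7 +/- 2.5 estimate
```

Even these crude numbers put the extrapolated origin well before the Earth formed, which is the paper’s headline point; the wide error bars come from how far back the line must be extended.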


Student Status Puzzle

Grad students vary in their research autonomy. Some students are very willing to ask for advice and to listen to it carefully, while others put a high priority on generating and pursuing their own research ideas their own way. This varies with personality, in that more independent people pick more independent strategies. It varies over time, in that students tend to start out deferring at first, and then later in their career switch to making more independent choices. It also varies by topic; students defer more in more technical topics, and where topic choices need more supporting infrastructure, such as with lab experiments. It also varies by level of abstraction; students defer more on how to pursue a project than on which project ideas to pursue.

Many of these variations seem roughly explained by near-far theory, in that people defer more when near, and less when far. These variations seem at least plausibly justifiable, though doubts make sense too. Another kind of variation is more puzzling, however: students at top schools seem more deferential than those at lower rank schools.

Top students expect to get lots of advice, and they take it to heart. In contrast, students at lower ranked schools seem determined to generate their own research ideas from deep in their own “soul”. This happens not only for picking a Ph.D. thesis, but even just for picking topics of research papers assigned in classes. Students seem as averse to getting research topic advice as they would be to advice on with whom to fall in love. Not only are they wary of getting research ideas from professors, they even fear that reading academic journals will pollute the purity of their true vision. It seems a moral matter to them.

Of course any one student might be correct that they have a special insight into what topics are neglected by their local professors. But the overall pattern here seems perverse; people who hardly understand the basics of a field see themselves as better qualified to identify feasible interesting research topics than those nearby with higher status, and who have been in the fields for decades.

One reason may be overconfidence; students think their profs deserve their lower rank school more than they themselves do, and so estimate a lower quality difference between them and their profs. More data supporting this is that students also seem to accept the relative status ranking of profs at their own school, and so focus most of their attention on the locally top status profs. It is as if each student thinks that they personally have so far been assigned too low of a status, but thinks most others have been correctly assigned.

Another reason may be like our preferring potential to achievement; students try to fulfill the heroic researcher stories they’ve heard, wherein researchers get much credit for embracing ideas early that others come to respect later. Which can make some sense. But these students are trying to do this way too early in their career, and they go way too far with it. Being smarter and knowing more, students at top schools understand this better.


Yawning At Utopia

The prospect of better physical devices, such as logic gates or solar cells, often generates huge interest and investment. Of course there are many more physical devices where improvements generate much less interest, because we haven’t yet found nearly as much use for those devices. But even so, for devices we often use, small improvements can be very big news.

Similarly, there are many widely used computer algorithms where small improvements also generate big interest and financial investments. Of course most gains aren’t like this. For example, there is less interest in techniques tied to very narrow contexts, such as ways to reorganize particular programs. But when wide use is plausible, algorithm gains can be big news.

We can do engineering and design not only with physical and software systems, but also with social systems. There should of course be less interest in designs tied to very particular contexts, such as reorganizing the management of a particular firm. But we often repeatedly use some simple social mechanisms, like voting. So we should have a lot more interest in improving the designs of these.

I started out in engineering, moved to physics, then to software, and then finally to economics. That last move was very much inspired by big apparent gains from better social institutions. I knew that in physical and software engineering we put in huge efforts to scour the vast space of possible designs to find even small gains on devices of moderate generality. Yet in economics it seemed that big gains could be found from very simple easy to find innovations on general mechanisms of wide applicability.

Over two decades later, I must admit that the world shows far less interest in better designs for institutions and social mechanisms, relative to better designs for physical and software systems. Few talk about them, and even fewer business ventures pursue them. Some say that physics and software designs are far more valuable because we know far less about economics; these proposed social designs just don’t work. But this claim seems just wrong to me.

Yes of course any particular argument for any particular social design will make convenient but questionable assumptions. But this is also true for our main arguments for physical or software designs. They also almost always neglect relevant considerations. Tractable analysis simplifies.

I recently posted on a new voting mechanism. Voting is a very general process whose main purposes are also pretty general. I’ve also posted for years about the very general advantages of prediction markets for the problem of info aggregation, which is a very general problem. (Scott Sumner sees their gains as so obvious he calls anything else “Stone Age Economics”.) I just heard a nice talk on better political institutions to promote urban density. And economic journals are full of articles describing new institution designs, and testing the effects of institutions that are not widely adopted.

Yes, proposed new social mechanisms often fail along the path from simple theory models to complex models to lab experiments to small field experiments to large field trials. But physical and software designs also often fail along this path. I don’t see social designs as failing much more often, except for the key failing of not generating much enthusiasm or interest. That is, most people just don’t seem to care how well social designs do in theory or lab or field tests. Even most social scientists don’t care much about design innovations outside their specialty areas.

Yes in the last decade or so there has been more enthusiasm for social innovations embodied in physical and software innovations, like smart phones or block chains. But this enthusiasm seems to be mainly an accidental side effect of tech enthusiasm. For example, while many are excited by Uber achieving new value in cheaper-if-nominally-illegal cab services, most of those gains could have come decades ago from just deregulating cabs, an option in which there was little interest. As another example, there is far more interest today in prediction markets built on block chains than in ordinary prediction markets, even though far more value could be achieved by the latter.

I should admit that this all confirms Bryan Caplan’s claim that few people can generate much emotional enthusiasm for efficiency. Bryan says people are far more engaged by moral arguments. I’d say people are also far more engaged by following fashion and by us vs. them coalition politics. Most apparent interest in innovation in social designs can be attributed to these three sources; we explain little more by positing an additional direct interest in helping us all get more of what we want.

This seems mostly also true at the level of smaller organizations like firms. While people give lip service to increasing the efficiency or effectiveness of the organization as a whole, that in fact generates little passion. The passion we do see in the name of efficiency mostly advances particular factions and individual careers. Homo hypocritus is quite skilled at saying that he serves the great good, while actually serving far more personal ends.

Added 9a: Many of you seem to be stuck on the idea that social innovations can’t be tested unless the entire world agrees to adopt them. Or an entire nation, or city. Yes, some innovations are like that. (There are also physical and software innovations like that.) But a great many social innovations can be tried out on very small scales, where regulations do not block them. And there is very little interest in pursuing these innovations.


Growth Could Slow

Human history has seen accelerating growth, via a sequence of faster growth modes. First humans grew faster than other primates, then farmers grew faster than foragers, and recently industry has grown faster than farming. Most likely, another even faster growth mode lies ahead. But it is worth remembering that this need not happen. For a very concrete historical analogue, the Cambrian Explosion of multi-cellular life seems to have resulted from an accelerating series of key transitions. But then around 520 million years ago, after life had explored most multi-cellular variations, change slowed way down:

In just a few tens of millions of years – a geological instant – almost every major animal group we know made its first appearance in the fossil record, and the ecology of the planet was transformed forever. …

Scientists have struggled to explain what sparked this sudden burst of innovation. Until recently, most efforts tried to find a single trigger, but over the past year or two, a different explanation has begun to emerge. The Cambrian explosion appears to have been life’s equivalent of the perfect storm. Instead of one trigger, there was a whole array of them amplifying one another to generate a hotbed of animal evolution the likes of which the world has never seen before or since. …

The first sign of multicellular animals is in rocks about 750 million years old, which contain fossilised biomolecules found today only in sponges. Then another 150 million apparently uneventful years passed before the appearance of the Ediacaran fauna. This enigmatic group of multicellular organisms of uncertain affinities to other lifeforms flourished in the oceans up to the beginning of the Cambrian. Then [110 million years later] all hell broke loose. … Studies of “molecular clocks” – which use the gradual accumulation of genetic changes to estimate when particular evolutionary branches diverged – suggest that animal complexity emerged before the Cambrian. …

Two huge ecological innovations make their debut in the Cambrian fossil record. … The first is the ability to burrow into the sea floor. … The second innovation was predation. … What else were these early creatures waiting for? One intriguing possibility is that they were waiting for fertiliser. Geological evidence suggests that rising sea levels during the Cambrian could have increased erosion, boosting levels of nutrients such as calcium, phosphate and potassium in the oceans. …

Atmospheric oxygen levels crept up gradually. … The crucial threshold seemed to be between 1 and 5 per cent of present oxygen levels. Geochemists’ best guess at when the ancient oceans reached this point is about 550 million years ago – just in time to kick off predation and its resulting ecological feedback. …

Precambrian oceans were full of single-celled algae and bacteria. When these small cells died, they would have started to sink, decomposing quickly as they went – and because decomposition consumes oxygen, this would have kept ocean waters anoxic. Filter-feeding sponges, which evolved sometime before the Ediacaran, then started clearing these cells out of the water column before they died and decomposed. The sponges themselves, being larger, were more likely to be buried in the sediment after death, allowing oxygen to remain in the water. Over time, this would have led ever more of the ocean to become oxygenated. (more)

So it remains possible that growth will slow down now, or after the next transition, even if a new series of accelerating transitions lies far ahead.
