Monthly Archives: February 2013

What Predicts Growth?

I just heard a fascinating talk by Enrico Spolaore on this paper about what predicts local growth rates over the very long run. He considers three periods: before the farming revolution, from farming to 1500, and from 1500 to today. The results:

  1. The first regions to adopt farming tended to be non-tropical (but not too cold) coastal (but not island) regions with lots of domesticable animals (table 2, column 1).
  2. The regions that had the most people in 1500 were those that first adopted farming, and also tended to be tropical inland regions (table 2, column 4).
  3. The regions that were richest per person in 2005 bear no overall relation to the populous regions of 1500 (table 3, column 1), yet tend to be populated by folks whose ancestors came from places where farming and big states first started. Rich places also tend to be cool (i.e., toward the poles) coasts or islands (table 5), filled with people who are culturally and genetically more related to the industry-era leaders of the US and Europe (tables 6, 7).

These results tend to support the idea that innovation sharing was central. The first farming innovations were shared along coasts in mild environments, i.e., neither too cold nor tropical. During the farming era, sharing happened more via inland invasions of peoples, which tropics aided. Industry first thrived in islands better insulated from invasion; industry-era travel and trade were more sea-based; and industry was shared more via peoples who could relate culturally to one another.

Changing technologies of travel seem to have made a huge difference. When travel was very hard, innovation sharing happened first along coasts in mild climates. As domesticated animals made long-distance land travel easier, inland invasions dominated. Then, when ships made sea travel far easier and invasions got harder, cultural barriers mattered most.

Who is setting global priorities?

In a situation where different activities have very different benefit-to-cost ratios, it is important to set priorities, and to finish those with the highest values first. Any individual who didn't set priorities would achieve much less than they could; they might end up malnourished because they were busy reading their junk mail. While it is relatively easy to set priorities for a single human's personal life – not that we always follow them – setting priorities for humanity as a whole is very difficult and requires in-depth study.

The central limit theorem suggests that the cost effectiveness of different projects ought to have a 'log normal' distribution, if not an even fatter-tailed one: cost effectiveness is plausibly the product of many roughly independent factors, so its logarithm is a sum to which the theorem applies. Furthermore, there is no reason to think that (e.g.) political reform, different environmental causes, R&D for various technologies, conflict resolution, poverty reduction and so on are all in the same ball-park of cost effectiveness, so we should anticipate a large variance in the distribution. This would leave some causes orders of magnitude more important than others. What research on this topic has been done, by groups like J-PAL, GiveWell, the WHO, and so on, indeed finds that the value of different methods of improving the world varies dramatically, with some doing enormous amounts of good and others achieving next to nothing. Unfortunately, as far as I am aware – and I would love to be informed otherwise – there is no one who has taken on the role of picking out and promoting the most important tasks we face.
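To see how extreme this can get, here is a minimal simulation of the multiplicative logic behind that claim (the number of projects, the number of factors, and their spread are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Model each project's cost effectiveness as the product of many roughly
# independent factors (scale, tractability, neglectedness, ...). By the
# central limit theorem the log of the product is approximately normal,
# so the product itself is roughly log-normal and heavy-tailed.
n_projects, n_factors = 10_000, 12
factors = rng.lognormal(mean=0.0, sigma=0.5, size=(n_projects, n_factors))
effectiveness = factors.prod(axis=1)

median = np.median(effectiveness)
print(f"best project is ~{effectiveness.max() / median:,.0f}x the median")
```

With these made-up numbers the best project beats the median by hundreds of times; the point is the shape of the distribution, not the specific figures.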

The Copenhagen Consensus set out to fill this gap in 2003, and produced reports that were of mixed quality, though excellent value for money and a substantial improvement on what existed before. Sadly, it is not currently planning another round of research because it is out of funding (though still taking donations). In the absence of a comprehensive and broad comparison of different causes, resources naturally flow to the most powerful or vocal interest groups, or the approaches that people intuitively guess are best. Given our terrible instincts for risks and magnitudes we don’t have regular direct experience with, it would be an extraordinary coincidence if these actually were the most valuable projects to be embarking on.

The natural home for a properly-funded and ongoing global prioritisation research project would be the World Bank, or alternatively the OECD or a university. If anyone is reading this and has some influence: global prioritisation looks like a cost effective cause to hop on. Though given the lack of research on the topic, I'll admit it is hard to be sure!

Is most research a waste?

Over at 80,000 Hours we have been looking into which research questions are most important or most prone to neglect. As part of that, I was recently lucky enough to have dinner with Iain Chalmers, one of the founders of the Cochrane Collaboration. He let me know about this helpful summary of reasons to think most clinical research is predictably wasteful:

“Worldwide, over US$100 billion is invested every year in supporting biomedical research, which results in an estimated 1 million research publications per year. …

… a recently updated systematic review of 79 follow-up studies of research reported in abstracts estimated the rate of publication of full reports after 9 years to be only 53%.

An efficient system of research should address health problems of importance to populations and the interventions and outcomes considered important by patients and clinicians. However, public funding of research is correlated only modestly with disease burden, if at all.6–8 Within specific health problems there is little research on the extent to which questions addressed by researchers match questions of relevance to patients and clinicians. In an analysis of 334 studies, only nine compared researchers’ priorities with those of patients or clinicians.9 The findings of these studies have revealed some dramatic mismatches. For example, the research priorities of patients with osteoarthritis of the knee and the clinicians looking after them favoured more rigorous evaluation of physiotherapy and surgery, and assessment of educational and coping strategies. Only 9% of patients wanted more research on drugs, yet over 80% of randomised controlled trials in patients with osteoarthritis of the knee were drug evaluations.10 This interest in non-drug interventions in users of research results is reflected in the fact that the vast majority of the most frequently consulted Cochrane reviews are about non-drug forms of treatment.

New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence. Many researchers do not do this—for example, Cooper and colleagues13 found that only 11 of 24 responding authors of trial reports that had been added to existing systematic reviews were even aware of the relevant reviews when they designed their new studies.

New research is also too often wasteful because of inadequate attention to other important elements of study design or conduct. For example, in a sample of 234 clinical trials reported in the major general medical journals, concealment of treatment allocation was often inadequate (18%) or unclear (26%).16 In an assessment of 487 primary studies of diagnostic accuracy, 20% used different reference standards for positive and negative tests, thus overestimating accuracy, and only 17% used double-blind reading of tests.17

More generally, studies with results that are disappointing are less likely to be published promptly,19 more likely to be published in grey literature, and less likely to proceed from abstracts to full reports.2 The problem of biased under-reporting of research results mainly from decisions taken by research sponsors and researchers, not from journal editors rejecting submitted reports.20 Over the past decade, biased under-reporting and over-reporting of research have been increasingly acknowledged as unacceptable, both on scientific and on ethical grounds.

Although their quality has improved, reports of research remain much less useful than they should be. Sometimes this is because of frankly biased reporting—eg, adverse effects of treatments are suppressed, the choice of primary outcomes is changed between trial protocol and trial reports,21 and the way data are presented does not allow comparisons with other, related studies. But even when trial reports are free of such biases, there are many respects in which reports could be made more useful to clinicians, patients, and researchers. We select here just two of these. First, if clinicians are to be expected to implement treatments that have been shown in research to be useful, they need adequate descriptions of the interventions assessed, especially when these are non-drug interventions, such as setting up a stroke unit, offering a low fat diet, or giving smoking cessation advice. Adequate information on interventions is available in around 60% of reports of clinical trials;22 yet, by checking references, contacting authors, and doing additional searches, it is possible to increase to 90% the proportion of trials for which adequate information could be made available.22

Although some waste in the production and reporting of research evidence is inevitable and bearable, we were surprised by the levels of waste suggested in the evidence we have pieced together. Since research must pass through all four stages shown in the figure, the waste is cumulative. If the losses estimated in the figure apply more generally, then the roughly 50% loss at stages 2, 3, and 4 would lead to a greater than 85% loss, which implies that the dividends from tens of billions of dollars of investment in research are lost every year because of correctable problems.”
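To spell out the arithmetic in that last sentence, here is a toy calculation using the paper's rough 50%-per-stage figures:

```python
# Toy version of the paper's cumulative-waste arithmetic: if roughly
# half of research value is lost at each of stages 2, 3, and 4, the
# losses compound multiplicatively.
surviving = 1.0
for stage in (2, 3, 4):
    surviving *= 0.5  # ~50% loss at this stage, per the paper's figure
    print(f"after stage {stage}: {surviving:.1%} of value remains")
# -> 12.5% remains overall, i.e. a cumulative loss of 87.5% (> 85%).
```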

His assessment was that the research profession could not be expected to fix these problems internally, as it had not done so already despite widespread knowledge of them, and had no additional incentive to do so now. It needs external intervention, and some options are proposed in the paper.

There is a precedent for this. The US recently joined a growing list of countries that have helped their researchers coordinate to weaken the academic publishing racket, by insisting that publicly-funded research be made freely and openly available within a year. So long as academics are permitted to publish publicly-funded research in pay-for-access journals, established and prestigious journals can earn big rents by selling their prestige to researchers – to help them advance their careers – in exchange for copyright on their publicly-funded research. Now that researchers aren't permitted to sell that copyright, an individual who would refuse to do so out of principle won't be outcompeted by less scrupulous colleagues.

Likewise, rules that require everyone receiving public money to do the public-spirited thing, for instance by checking for systematic reviews, publishing null results, pre-registering their approach to data analysis, opening their data to scrutiny by colleagues, and so on, would make it harder for unscrupulous researchers to get ahead with corner-cutting or worse chicanery.

Bike Helmet Laws Fail

Two years ago I posted on evidence that called into question the effectiveness of bike helmet laws. A new NBER paper confirms this skepticism:

Using hospital-level panel data and triple difference models. … We consider the effects of the [US bike helmet] laws directly on [’91-’08 US] bicycle related head injuries, bicycle related non-head injuries, and injuries as a result of participating in other wheeled sports (primarily skateboarding, roller skates and scooters). For 5-19 year olds, we find the helmet laws are associated with a 13 percent reduction in bicycle head injuries, but the laws are also associated with a 9 percent reduction in non-head bicycle related injuries and an 11 percent increase in all types of injuries from the wheeled sports. …

The estimated reduction in head injuries resulting from helmet laws is robust to changes in the definition of the control group, to changes in the type of fixed effects included (state versus hospital), and to changes in the samples of states and hospitals evaluated. … Considering the different offsetting results, we run our preferred specification on injury counts for 1) all head injuries and 2) total (all head and body) injuries arising from cycling and wheeled sports. The net effects of the helmet laws are small and are not statistically different from zero. (more)
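For readers unfamiliar with the method, here is a minimal sketch of roughly what a triple-difference specification of this kind could look like. The data file, column names, and the choice of a Poisson count model are all hypothetical illustrations, not the paper's actual code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical hospital-year-activity panel (all names invented):
#   injuries - injury count
#   law      - 1 if a helmet law was in force in that state-year
#   head     - 1 if the row counts head (vs. non-head) injuries
#   cycling  - 1 if the activity is bicycling (vs. other wheeled sports)
df = pd.read_csv("injuries.csv")

# Triple difference: the law:head:cycling coefficient isolates the effect
# of helmet laws on cycling head injuries, netting out their effects on
# non-head cycling injuries and on other wheeled-sports injuries.
result = smf.poisson(
    "injuries ~ law * head * cycling + C(hospital) + C(year)",
    data=df,
).fit()
print(result.params["law:head:cycling"])
```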

Goldilocks Disruptions

A society’s history of climatic shocks shaped the timing of its adoption of farming. Specifically, as long as climatic disturbances did not lead to a collapse of the underlying resource base, the rate at which foragers were climatically propelled to experiment with their habitats determined the accumulation of tacit knowledge complementary to farming. Thus, differences in climatic volatility across hunter-gatherer societies gave rise to the observed spatial variation in the timing of the adoption of agriculture. …

Conducting a comprehensive empirical investigation at both cross-country and cross-archaeological site levels, the analysis establishes that, conditional on biogeographic endowments, climatic volatility has a non-monotonic effect on the timing of the transition to agriculture. Farming was adopted earlier in regions characterized by intermediate levels of climatic volatility, with regions subject to either too high or too low intertemporal variability systematically transiting later. Reassuringly, the results hold at different levels of aggregation and using alternative sources of climatic sequences. (more)
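A toy picture of the claimed inverted-U (the functional form and all numbers are invented, not the paper's estimates):

```python
import numpy as np

# Illustrative non-monotonic effect: farming is adopted earliest at
# intermediate climatic volatility, later at either extreme.
volatility = np.linspace(0.0, 1.0, 5)
delay_years = 3_000 + 20_000 * (volatility - 0.5) ** 2
for v, d in zip(volatility, delay_years):
    print(f"volatility={v:.2f}: adoption delayed ~{d:,.0f} years")
```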

For the industrial revolution, the analogous disturbance might have been war and invasion. Were the first adopters of the industrial revolution the places that suffered an intermediate level of war and invasion? Enough to keep folks from getting too comfy in their old ways, but not so much that everything gets destroyed all the time. I’m not sure, but it sounds plausible.

Today the main disruptions are economic; societies rise and fall due to changes in the economic fortunes of particular industries or economic styles. Thus a lesson for the next great revolution might be that it will first benefit the societies that have adapted to dealing with an intermediate level of economic disruption. Which ones are those?

Farmers’ New Rituals

A theory of ritual says the calm bookish kinds of rituals we are most familiar with started with farming; forager rituals were much more intense. There seems to be lots of supporting data:

Whitehouse believes rituals are always about building community — which arguably makes them central to understanding how civilization itself began. … Whitehouse’s theory [is] that rituals come in two broad types, which have different effects on group bonding. Routine actions such as prayers at church, mosque or synagogue, or the daily pledge of allegiance recited in many US elementary schools, are rituals operating in what Whitehouse calls the ‘doctrinal mode’. He argues that these rituals, which are easily transmitted to children and strangers, are well suited to forging religions, tribes, cities and nations — broad-based communities that do not depend on face-to-face contact.

Rare, traumatic activities such as beating, scarring or self-mutilation, by contrast, are rituals operating in what Whitehouse calls the ‘imagistic mode’. “Traumatic rituals create strong bonds among those who experience them together,” he says, which makes them especially suited to creating small, intensely committed groups such as cults, military platoons or terrorist cells. “With the imagistic mode, we never find groups of the same kind of scale, uniformity, centralization or hierarchical structure that typifies the doctrinal mode,” he says. …

Is Social Science Extremist?

I recently did two interviews with Nikola Danaylov, aka “Socrates”, who has so far done ~90 Singularity 1 on 1 video podcast interviews. Danaylov says he disagreed with me the most:

My second interview with economist Robin Hanson was by far the most vigorous debate ever on Singularity 1 on 1. I have to say that I have rarely disagreed more with any of my podcast guests before. … I believe that it is ideas like Robin’s that may, and often do, have a direct impact on our future. … On the one hand, I really like Robin a lot: He is that most likeable fellow … who like me, would like to live forever and is in support of cryonics. In addition, Hanson is also clearly a very intelligent person with a diverse background and education in physics, philosophy, computer programming, artificial intelligence and economics. He’s got a great smile and, as you will see throughout the interview, is apparently very gracious to my verbal attacks on his ideas.

On the other hand, after reading his book draft on the [future] Em Economy I believe that some of his suggestions have much less to do with social science and much more with his libertarian bias and what I will call “an extremist politics in disguise.”

So, here is the gist of our disagreement:

I say that there is no social science that, in between the lines of its economic reasoning, can logically or reasonably suggest details such as: policies of social discrimination and collective punishment; the complete privatization of law, detection of crime, punishment and adjudication; that some should be run 1,000 times faster than others, while at the same time giving them 1,000 times more voting power; that emulations who can’t pay for their storage fees should be either restored from previous back-ups or be outright deleted (isn’t this like saying that if you fail to pay your rent you should be shot dead?!)…

Suggestions like the above are no mere details: they are extremist bias for Laissez-faire ideology while dangerously masquerading as (impartial) social science. … Because not only that he doesn’t give any justification for the above suggestions of his, but also because, in principle, no social science could ever give justification for issues which are profoundly ethical and political in nature. (Thus you can say that I am in a way arguing about the proper limits, scope and sphere of economics, where using its tools can give us any worthy and useful insights we can use for the benefit of our whole society.) (more)

You might think that Danaylov’s complaint is that I use the wrong social science, one biased too far toward libertarian conclusions. But in fact his complaint seems to be mainly against the very idea of social science: an ability to predict social outcomes. He apparently argues that since 1) future social outcomes depend on many billions of individual choices, 2) ethical and political considerations are relevant to such choices, and 3) humans have free will to be influenced by such considerations in making their choices, therefore 4) it should be impossible to predict future social outcomes at a rate better than random chance.

For example, if allowing some ems to run faster than others might offend common ethical ideals of equality, it must be impossible to predict that this will actually happen. While one might be able to use physics to predict the future paths of bouncing billiard balls, as soon as a human with free will enters the picture, making a choice where ethics is relevant, all must fade into an opaque cloud of possibilities; no predictions are possible.

Now I haven’t viewed them, but I find it extremely hard to believe that, out of 90 interviews on the future, Danaylov has always vigorously complained whenever anyone even implicitly suggested that they could do better than random chance in guessing future outcomes in any context influenced by a human choice where ethics or politics might have been relevant. I’m in fact pretty sure he must have nodded in agreement with many explicit forecasts. So why complain more about me, then?

It seems to me that the real complaint here is that I forecast that human choices will in fact result in outcomes that violate the ethical principles Danaylov holds dear. He objects much more to my predicting a future of more inequality than if I had predicted a future of more equality. That is, I’m guessing he mostly approves of idealistic, and disapproves of cynical, predictions. Social science must be impossible if it would predict non-idealistic outcomes, because, well, just because.

FYI, I also did this BBC interview a few months back.

Which biases matter most? Let’s prioritise the worst!

As part of our self-improvement program at the Centre for Effective Altruism I decided to present a lecture on cognitive biases and how to overcome them. Trying to put this together reminded me of a problem I have long had with the self-improvement literature on biases, along with those for health, safety and nutrition: they don’t prioritise. Kahneman’s book Thinking, Fast and Slow is an excellent summary of the literature on biases and heuristics, but risks overwhelming or demoralising the reader with the number of errors they need to avoid. Other sources are even less helpful at highlighting which biases are most destructive.

You might say ‘avoid them all’, but it turns out that clever and effort-consuming strategies are required to overcome most biases; mere awareness is rarely enough. As a result, it may not be worth the effort in many cases. Even if it were usually worth it, most folks will only ever put a limited effort into reducing their cognitive biases, so we should guide their attention towards the strategies which offer the biggest ‘benefit to cost ratio’ first.
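As a toy illustration of ranking debiasing strategies by benefit-to-cost ratio (the bias list and all scores below are invented for the sake of the example):

```python
# Rank debiasing strategies by (harm avoided) / (effort required).
# Both scores are made-up placeholders on a 1-10 scale.
biases = {
    "overconfidence": (8, 4),
    "confirmation bias": (9, 6),
    "sunk cost fallacy": (5, 2),
    "anchoring": (4, 5),
}

ranked = sorted(biases.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (benefit, cost) in ranked:
    print(f"{name}: benefit/cost = {benefit / cost:.2f}")
```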

There is a bias underlying this scattershot approach to overcoming bias: we are inclined to allocate equal time or value to each category or instance of something we are presented with, even when those categories are arbitrary, or at least a poor signal of importance. Expressions of this bias include:

  • Allocating equal or similar migrant places or development aid funding to different countries out of ‘fairness’, even if they vary in size, need, etc.
  • Making a decision by weighing the number, or length, of ‘pro’ and ‘con’ arguments on each side.
  • Offering similar attention or research funding to different categories of cancer (breast, pancreas, lung), even though some kill ten times as many people as others.
  • Providing equal funding for a given project to every geographic district, even if the boundaries of those districts were not drawn with reference to need for the project.

Fortunately, I don’t think we need to tackle most of the scores of cognitive biases out there to significantly improve our rationality. My guess is that some kind of Pareto or '80-20' principle applies, in which case a minority of our biases are doing most of the damage. We just have to work out which ones! Unfortunately, as far as I can tell this hasn’t yet been attempted by anyone, even the Centre for Applied Rationality, and there are a lot to sift through. So, I’d appreciate your help to produce a shortlist. You can have input through the comments below, or by voting on this Google form. I’ll gradually cut out options which don’t attract any votes.

Ultimately, we are seeking biases that have a large and harmful impact on our decisions. Some correlated characteristics I would suggest are that it:

  • potentially influences your thinking on many things
  • is likely to change your beliefs a great deal
  • doesn’t have many redeeming ‘heuristic’ features
  • disproportionately influences major choices
  • has a large effect substantiated by many studies, and so is less likely the result of publication bias.

We face the problem that more expansive categories can make a bias look like it has a larger impact (e.g. ‘cancer’ would look really bad but none of ‘pancreatic cancer’, ‘breast cancer’, etc would stand out individually). For our purposes it would be ideal to group and rate categories of biases after breaking them down by ‘which intervention would neutralise this.’ I don’t know of such a categorisation and don’t have time to make one now. I don’t expect that this problem will be too severe for a first cut.

Foom Debate, Again

My ex-co-blogger Eliezer Yudkowsky last June:

I worry about conversations that go into “But X is like Y, which does Z, so X should do reinterpreted-Z”. Usually, in my experience, that goes into what I call “reference class tennis” or “I’m taking my reference class and going home”. The trouble is that there’s an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don’t work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future – to him, the obvious analogy for the advent of AI was “nature red in tooth and claw”, and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn’t like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That’s his one true analogy and I’ve never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore’s Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, “things that go by Moore’s Law” is his favorite reference class.

I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera. (more)

When we shared this blog, Eliezer and I had a long debate here on his “AI foom” claims. Later, we debated in person once. (See also slides 34 and 35 of this 3-year-old talk.) I don’t accept the above as characterizing my position well. I’ve written up summaries before, but let me try again, this time trying to more directly address the above critique.

Eliezer basically claims that the ability of an AI to change its own mental architecture is such a potent advantage as to make it likely that a cheap, unnoticed, and initially low-ability AI (a mere “small project machine in a basement”) could, without warning and over a short time (e.g., a weekend), become so powerful as to be able to take over the world.

As this would be a sudden, big, sustainable increase in the overall growth rate of the broad capacity of the world economy, I do find it useful to compare this hypothesized future event to the past events that produced similar outcomes, namely big sudden sustainable global broad capacity rate increases. The last three were the transitions to humans, farming, and industry.

I don’t claim there is some hidden natural law requiring such events to have the same causal factors or structure, or to appear at particular times. But I do think these events suggest a useful if weak data-driven prior on the kinds of factors likely to induce such events, on the rate at which they occur, and on their accompanying inequality in gains. In particular, they tell us that such events are very rare, that over the last three events gains have been spread increasingly equally, and that these three events seem mainly due to better ways to share innovations.
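For a sense of scale, here is a minimal sketch using rough doubling-time figures for these eras from the economic growth literature (e.g., "Long-Term Growth As A Sequence of Exponential Modes"); treat them as order-of-magnitude illustrations only, since the exact estimates are contested:

```python
from math import log

# Approximate world-economy doubling times for the three growth eras
# (order-of-magnitude figures only; precise estimates are debated).
doubling_years = {"foraging": 224_000, "farming": 909, "industry": 6.3}
rates = {era: log(2) / t for era, t in doubling_years.items()}  # per year

eras = list(rates)
for prev, nxt in zip(eras, eras[1:]):
    print(f"{prev} -> {nxt}: growth rate up ~{rates[nxt] / rates[prev]:.0f}x")
# Each transition raised growth rates by roughly two orders of magnitude.
```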

Eliezer sees the essence of his scenario as being a change in the “basic” architecture of the world’s best optimization process, and he sees the main prior examples of this as the origin of natural selection and the arrival of humans. He also sees his scenario as differing enough from the other studied growth scenarios as to make analogies to them of little use.

However, since most global bio or econ growth processes can be thought of as optimization processes, this comes down to his judgement on what counts as a “basic” structure change, and on how different such scenarios are from other scenarios. And in my judgement the right place to get and hone our intuitions about such things is our academic literature on global growth processes.

Economists have a big literature on processes by which large economies grow, increasing our overall capacities to achieve all the things we value. There are of course many other growth literatures, and some of these deal with growth in capacities, but they usually deal with far more limited systems. Of these many growth literatures it is the economic growth literature that comes closest to dealing with the broad capability growth posited in a fast-growing-AI scenario.

It is this rich literature that seems to me the right place to find and hone our categories for thinking about growing broadly capable systems. One should review many formal theoretical models, and many less formal applications of such models to particular empirical contexts, collecting “data” points of what is thought to increase or decrease growth of what in what contexts, and collecting useful categories for organizing such data points.

With such useful categories in hand one can then go into a new scenario such as AI foom and have a reasonable basis for saying how similar that new scenario seems to old scenarios, which old scenarios it seems most like if any, and which parts of that new scenario are central vs. peripheral. Yes of course if this new area became mature it could also influence how we think about other scenarios.

But until we actually see substantial AI self-growth, most of the conceptual influence should go the other way. Relying instead primarily on newly made up categories and similarity maps between them, concepts and maps which have not been vetted or honed in dealing with real problems, seems to me a mistake. Yes of course a new problem may require one to introduce some new concepts to describe it, but that is hardly the same as largely ignoring old concepts.

So, I fully grant that the ability of AIs to intentionally change mind designs would be a new factor in the world, and it could make a difference for AI ability to self-improve. But while the history of growth over the last few million years has seen many dozens of factors come and go, or increase and decrease in importance, it has only seen three events in which overall growth rates greatly increased suddenly and sustainably. So the mere addition of one more factor seems unlikely to generate foom, unless our relevant categories for growth causing factors suggest that this factor is unusually likely to have such an effect.

This is the sense in which I long ago warned against over-reliance on “unvetted” abstractions. I wasn’t at all trying to claim there is one true analogy and all others are false. Instead, I argue for preferring to rely on abstractions, including categories and similarity maps, that have been found useful by a substantial intellectual community working on related problems. On the subject of an AI growth foom, most of those abstractions should come from the field of economic growth.

What motivates cognition?

When I was a teenager, I think I engaged in a lot of motivated cognition. At least in an absolute sense; I don’t know how much of it is common. Much was regarding trees. Before I thought about this in detail, I assumed that motivated cognition mostly worked like this: I wanted to believe X, and so believed X regardless of the evidence. I looked for reasons to justify my fixed beliefs, while turning a blind eye to this dubious behavior.

On closer in(tro)spection, this is what I think really happened. I felt strongly that X was true because many good and smart adults had told me so. I also explicitly believed I should believe whatever my reasoning told me, and I was inclined to change my beliefs when the information changed. However, though I knew that I did this, I feared that my reasoning was fallible, and I was terrified that I would come to believe not-X even though X was the truth. Then the truth would come out, or at least more evidence (and obviously the truth would be X), and all the good people who knew X would consider me evil, which was equivalent to being evil. They would also consider me stupid, for not seeing the proper counterarguments. So it was sickening to not be able to come up with a counterargument, because such a failure would immediately turn me into an evil and stupid person. Needless to say, I was quite an expert, especially on counterarguments.

So unlike in my usual model of motivated cognition, my arguments were directed at persuading myself of things I feared doubting, rather than justifying fixed beliefs to others. How often is this really what’s going on?
