Monthly Archives: April 2021

Motive/Emotive Blindspot

In this short post what I try to say is unusually imprecise, relative to what I usually try to say. Yet it seems important enough to try to say it anyway.

I’ve noticed a big hole in my understanding, which I think is shared by most economists, and perhaps also most social scientists: details about motives and emotions are especially hard to predict. Consider:

  1. Most of us find it hard to predict how we, or our associates, will feel in particular situations.
  2. We care greatly about how we & associates feel, yet we usually only influence feelings in rather indirect ways.
  3. Even when we have an inkling about how we feel now, we are usually pretty reluctant to tell details on that.
  4. Organizations find it hard to motivate, and to predict the motives of, employees and associates.
  5. Marketers find it hard to motivate, and to predict the motives of, customers.
  6. Movie makers find it very hard to predict which movies people will like.
  7. It is hard for authors, even good ones, to imagine how characters would feel in various situations.
  8. It is hard for even good actors to believably portray what characters feel in situations.
  9. We poorly understand the declining motive power of religion & ideology, or which ones motivate what.
  10. We poorly understand the declining emotional power of rituals, or which ones induce which emotions.

We seem to be built to find it hard to see and predict both our and others’ motives and emotions. Oh we can, from a distance, see some average tendencies well enough to predict a great many overall social tendencies. But when we get to details, up close, our vision fails us.

In many common situations, the motive/emotive variance that we find it hard to predict isn’t much correlated across people or time, and so doesn’t much get in the way of aggregate predictions. But in other common situations, that puzzling variance can be quite correlated.


Shoulda-Listened Futures

Over the decades I have written many times on how prediction markets might help the intellectual world. But usually my pitch has been to those who want to get better actionable info out of intellectuals, or to help the world to make better intellectual progress in the long run. Problem is, such customers seem pretty scarce. So in this post I want to outline an idea that is a bit closer to a business proposal, in that I can better identify concrete customers who might pay for it.

For every successful intellectual there are (at least) hundreds of failures. People who started out along a path, but then were not sufficiently rewarded or encouraged, and so then either quit or persisted in relative obscurity. And a great many of these (maybe even a majority) think that the world done them wrong, that their intellectual contributions were underrated. And no doubt many of them are right. Such malcontents are my intended customers.

These “world shoulda listened to me” customers might pay to have some of their works evaluated by posterity. For example, for every $1 saved now that gains a 3% real rate of return, $19 in real assets are available in a century to pay historians for evaluations. At a 6% rate of return, that’s $339 (and 3% over two centuries yields about $369). Furthermore, if future historians needed only to randomly evaluate 1% of the works assigned them, then if malcontents paid $10 per work to be maybe evaluated, historians could spend $20K (or $339K) per work they evaluate. Considering all the added knowledge and tools to which future historians may have access, that seems enough to do a substantial evaluation, especially if they evaluate several related works at the same time.
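As a sanity check on these figures, here is a minimal sketch in Python of the compounding and the implied per-evaluation budget; the function name and the specific fee and chance values are just illustrations taken from the post:

```python
def future_fund(present, rate, years):
    """Real value of `present` dollars compounded annually at real rate `rate`."""
    return present * (1 + rate) ** years

# $1 saved now, per the post's figures:
print(round(future_fund(1, 0.03, 100)))  # 19: a century at 3% real
print(round(future_fund(1, 0.06, 100)))  # 339: a century at 6% real

# If historians randomly evaluate only 1% of submitted works, each $10 fee
# supports a per-evaluation historian budget (at 3% real) of roughly:
fee, eval_chance = 10, 0.01
print(round(future_fund(fee, 0.03, 100) / eval_chance))  # 19219, i.e. ~$20K
```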

Given a substantial chance (1% will do) that a work might be evaluated by historians in a century or two, we could then create (conditional) prediction markets now estimating those future evaluations. So a customer might pay their $20 now, and get an immediate prediction market estimate of that future evaluation for their work. That $20 might pay $10 for the (chance of a) future evaluation and another $10 to establish and subsidize a prediction market over the coming centuries until resolution.

Finally, if customers thought market estimates regarding their works looked too low, then they could of course try to bet to raise those estimates. Skeptics would no doubt lie in wait to bet against them, and on average this tendency of authors to bet to support their works would probably subsidize these markets, and so lower the fees that the system needs to charge.

Of course even with big budgets for evaluations, if we want future historians to make reliable enough formal estimates that we can bet on in advance, then we will need to give them a well-defined-enough task to accomplish. And we need to define this task in a way that discourages future historians from expressing their gratitude to all these people who funded their work by giving them all an A+.

I suggest we have future historians estimate each work’s ideal attention: how much attention each particular work should have been given during some time period. So we should pick some measure of attention, a measure that we can calculate for works when they are submitted, and track over time. This measure should weigh whether the dissertation was approved, whether the paper was published and where, how many cites it got, etc. If we add up all the initial attention for submitted works, then we can assign historians the task of (counterfactually) reallocating this total attention across all the submitted works. So to give more attention to some, they’d have to take away attention from others.
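The zero-sum constraint above can be sketched as follows. This Python toy (the work names, the scores, and the proportional-scaling rule are all my own illustrative assumptions, not part of the proposal) rescales historians’ merit judgments so that total attention is conserved:

```python
def reallocate(initial_attention, historian_scores):
    """Rescale historians' merit scores so total shoulda-been attention equals
    total initial attention: giving one work more means taking from others."""
    total = sum(initial_attention.values())
    score_sum = sum(historian_scores.values())
    return {work: total * s / score_sum for work, s in historian_scores.items()}

initial = {"paper_a": 50.0, "paper_b": 40.0, "paper_c": 10.0}  # measured attention
scores = {"paper_a": 1.0, "paper_b": 1.0, "paper_c": 8.0}      # historians' judgments
print(reallocate(initial, scores))
# {'paper_a': 10.0, 'paper_b': 10.0, 'paper_c': 80.0} -- total is still 100
```

Here paper_c gains attention only because the others lose it, so historians cannot hand every work an A+.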

Okay, so now they can’t give every work an A+. (And we ensure that bet assets have bounded values.) But our job isn’t done. We also need to give them a principle to follow when allocating attention among all these prior works. What objective would they be trying to accomplish via this reallocation of attention?

I suggest that the objective just be intellectual progress, toward the world having access to more accurate and useful beliefs. A set of works should have gotten more attention if, with that attention, the world would have been more likely to come more quickly to appreciate valuable truths. And this task is probably easier if we ask future historians to use their future values in this task, instead of asking them to try to judge according to our values today.

These evaluation tasks probably get easier if historians randomly pick related sets of works to evaluate together, instead of independently picking each work to evaluate. And this system can probably offer scaled fees, wherein the chance that your work gets evaluated rises linearly with the price you paid for that chance. There are probably a lot more details to work out, but I expect I’ve already said enough for most people to decide roughly how much they like this idea.

Once there were many works in this system, and many prediction markets estimating their shoulda-been attention, then we could look to see if market speculators see any overall biases in today’s intellectual worlds. That is, topics, methods, disciplines, genders, etc. to which speculators estimate that the world today is giving too little attention. That could be pretty dramatic and damning evidence of bias, by someone, evidence to which we’d all be wise to attend.

One obvious test of this approach would be to assign historians today the task of reallocating attention among papers published a century or two ago. Perhaps assign multiple independent groups, and see how correlated are their evaluations, and how that correlation varies across topic areas. Perhaps repeating in a decade or two, to see how much evaluations drift over time.

Showing these correlations to potential customers might convince them that there’s a good enough chance that such a system will later correctly vindicate their neglected contributions. And these tests may show good scopes to use, for related works and time periods to evaluate together, and how narrow or broad should be the expertise of the evaluators.

This whole shoulda-listened-futures approach could of course also be applied to many other kinds of works, not just intellectual works. You’d just have to establish your standards for how future historians are to allocate shoulda attention, and trust them to actually follow those standards. Doing tests on works from centuries ago here could also help to show if this is a viable approach for these kinds of works.

Added 7am 28Apr: On average more assets will be available to pay for future evaluations if the fees paid are invested in risky assets. So instead of promising a particular percentage chance of evaluation, it may make more sense to specify how fees will be invested, set the (real) amount to be spent on each evaluation, and then promise that the chance of evaluation for each work will be set by the investment return relative to the initial fee paid. Yes, that induces more evaluations in states of the world where investments do better, but customers are already accepting a big chance that their work will never be directly evaluated.
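Here is one minimal way this variant might work, sketched in Python. The linear rule (a work’s chance equals its grown fee divided by a fixed real evaluation cost, capped at one) is my assumption about what “set by the investment return” could mean, and all numbers are illustrative:

```python
def eval_chance(fee, gross_return, cost_per_evaluation):
    """Chance a work gets evaluated: its invested fee, grown by the realized
    gross return, divided by the fixed real cost of one evaluation (capped at 1)."""
    return min(1.0, fee * gross_return / cost_per_evaluation)

# A $10 fee against a $20,000 real cost per evaluation:
print(eval_chance(10, 19.2, 20_000))   # 0.0096: about 1% if returns ran near 3%
print(eval_chance(10, 339.3, 20_000))  # ~0.17: about 17% if returns ran near 6%
```

Better realized returns then automatically buy every customer a higher evaluation chance, with no fixed percentage promised up front.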


Schulze-Makuch & Bains on The Great Filter

In their 2016 journal article “The Cosmic Zoo: The (Near) Inevitability of the Evolution of Complex, Macroscopic Life”, Dirk Schulze-Makuch and William Bains write:

An important question is … whether there exists what Robin Hanson calls “The Great Filter” somewhere between the formation of planets and the rise of technological civilizations. …

Our argument … is that the evolution of complex life [from simple life] is likely … [because] functions found in complex organisms have evolved multiple times, an argument we will elaborate in the bulk of this paper … [and] life started as a simple organism, close to [a] “wall” of minimum complexity … With time, the most complex life is therefore likely to become more complex. … If the Great Filter is at the origin of life, we live in a relatively empty universe, but if the origin of life is common, we live in a Cosmic Zoo where such complex life is abundant.

Here they seem to say that the great filter must lie at the origin of life, and seem unclear on whether it could also lie in our future.

In the introduction to their longer 2017 book, The Cosmic Zoo: Complex Life on Many Worlds, Schulze-Makuch and Bains write:

We see no examples of intelligent, radio-transmitting, spaceship-making life in the sky. So there must be what Robin Hanson calls ‘The Great Filter’ between the existence of planets and the occurrence of a technological civilisation. That filter could, in principle, be any of the many steps that have led to modern humanity over roughly the last 4 billion years. So which of those major steps or transitions are highly likely and which are unlikely? …

if the origin of life is common and habitable rocky planets are abundant then life is common, and we live in a Cosmic Zoo. … Our hypothesis is that all major transitions or key innovations of life toward higher complexity will be achieved by a sufficient large biosphere in a semi-stable habitat given enough time. There are only two transitions of which we have little insight and much speculation—the origin of life itself, and the origin (or survival) of technological intelligence. Either one of these could explain the Fermi Paradox – why we have not discovered (yet) any sign of technologically advanced life in the Universe.

So now they add that (part of) the filter could lie at the origin of human-level language & tech. In the conclusion of their book they say:

There is strong evidence that most of the key innovations that we discussed in… this book follow the Many Paths model. … There are, however, two prominent exceptions to our assessment. The first exception is the origin of life itself. … The second exception … is the rise of technologically advanced life itself. …The third and least attractive option is that the Great Filter still lies ahead of us. Maybe technological advanced species arise often, but are then almost immediately snuffed out.

So now they make clear that (part of) the filter could also lie in humanity’s future. (Though they don’t make it clear to me if they accept that we know the great filter is huge and must lie somewhere; the only question is where it lies.)

In the conclusion of their paper, Schulze-Makuch and Bains say:

We find that, with the exception of the origin of life and the origin of technological intelligence, we can favour the Critical Path [= fixed time delay] model or the Many Paths [= independent origins] model in most cases. The origin of oxygenesis, may be a Many Paths process, and we favour that interpretation, but may also be Random Walk [= long expected time] events.

So now they seem to also add the ability to use oxygen as a candidate filter step. And earlier in the paper they also say:

We postulate that the evolution of a genome in which the default expression status was “off” was the key, and unique, transition that allowed eukaryotes to evolve the complex systems that they show today, not the evolution of any of those control systems per se. Whether the evolution of a “default off” logic was a uniquely unlikely, Random Walk event or a probable, Many Paths, event is unclear at this point.

(They also discuss this in their book.) Which adds one more candidate: the origin of the eukaryote “default off” gene logic.

In their detailed analyses, Schulze-Makuch and Bains look at two key indicators: whether a step was plausibly essential for the eventual rise of advanced tech, and whether we can find multiple independent origins of that step in Earth’s fossil record. These seem to me to both be excellent criteria, and Schulze-Makuch and Bains seem to expertly apply them in their detailed discussion. They are a great read and I recommend them.

My complaint is with Schulze-Makuch and Bains’ titles, abstracts, and other summaries, which seem to arbitrarily drop many viable options. By their analysis criteria, Schulze-Makuch and Bains find five plausible candidates for great filter steps along our timeline: (1) life origin ~3.7Gya, (2) oxygen processing ~3.1Gya, (3) eukaryote default-off genetic control ~1.8Gya, (4) human-level language/tech ~0.01Gya, and (5) future obstacles to our becoming grabby. With five plausible hard steps, it seems unreasonable to claim that “if the origin of life is common, we live in a Cosmic Zoo where such complex life is abundant”.

Schulze-Makuch and Bains seem to justify dropping some of these options because they don’t “favour” them. But I can find no explicit arguments or analysis in their article or book for why these are less viable candidates. Yes, a step being essential and only having been seen once in our history only suggests, but hardly assures, that it is a hard step. Maybe other independent origins happened, but have not yet been seen in our fossil record. Or maybe it did only happen once, but that was just random luck and it could easily have happened a bit later. But these caveats are just as true of all of Schulze-Makuch and Bains’ candidate steps.

I thus conclude that we know of four plausible and concrete candidates for great filter steps before our current state. Now I’m not entirely comfortable with postulating a step very recently, given the consistent trend in increasing brain sizes over the last half billion years. But Schulze-Makuch and Bains do offer plausible arguments for why this might in fact have been an unlikely step. So I accept that they have found four plausible hard great filter steps in our past.

The total number of hard steps in the great filter sets the power in our power law model for the origin of grabby aliens. This number includes not only the hard filter steps that we’ve found in the fossil record of Earth until now, but also any future steps that we may yet encounter, any steps on Earth that we haven’t yet noticed in our fossil record, and any steps that may have occurred on a prior “Eden” which seeded Earth via panspermia. Six steps isn’t a crazy middle estimate, given all these considerations.
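That power law can be checked directly: if each of n hard steps has an expected waiting time much longer than the window available, the chance that all n finish by time t scales as t^n, so the number of hard steps is the exponent. A small Python sketch (the Erlang/Poisson tail identity is standard; the particular n and deadlines are illustrative):

```python
import math

def p_done(n, x):
    """P(sum of n iid Exp(1) waiting times <= x), via the Erlang CDF
    written as a Poisson tail sum (avoids subtracting nearly-equal numbers)."""
    return sum(math.exp(-x) * x**k / math.factorial(k) for k in range(n, n + 60))

# With 6 hard steps and deadlines tiny relative to expected step times,
# doubling the deadline multiplies the success chance by about 2**6 = 64:
ratio = p_done(6, 0.02) / p_done(6, 0.01)
print(ratio)  # about 63.5, close to 64
```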


Explaining Regulation

During this pandemic, elites have greatly enjoyed getting to feel important by weighing in on big pandemic policy questions, such as masks, lockdowns, travel restrictions, vaccine tests, vaccine distribution, etc. Each elite can feel self-righteous in their concern for others, and morally outraged when the world doesn’t follow their recommendations. Don’t people know that this is too important for XYZ to get in the way of the world believing that they are right? Unconsciously, they seek to signal that they are in fact elites, by the facts that they agree with elites, that other elites listen to them, and that the world does what elites say.

Imagine that these key pandemic policy choices had been made instead by private actors. Such as vaccine makers testing, pricing, and distributing as they wished, airlines limiting travel as they wished, and legal liability via tracking discouraging overly risky behavior. Government could have influenced these choices indirectly via subsidies and taxes, but the key specific choices would still have been made privately.

In this scenario, talking head elites would have been a lot more frustrated, as they’d have to direct their advice to these private actors, who are much less visibly eager than public officials to slavishly follow elite advice. So elites could less clearly show that they are elites by the fact that the world promptly and respectfully obeys their commands.

When these private actors made choices that later seemed like mistakes in retrospect, then elites who resented their neglect would make passionate calls to change legal standards in order to rain down retribution and punishment upon these private actors, to “hold them to account,” even though they were not at fault according to prior legal standards. However, when private decisions seemed right in retrospect, there’d be few passionate calls to rain down extra rewards on them. As we’ve seen recently in the “opioid crisis”, or earlier with subprime loans, cigarettes, and nuclear power.

In contrast, when government authorities do exactly what elites tell them, and yet in retrospect those decisions look mistaken, there are few calls to hold to account these authorities, or the elites and media who goaded them on. We then hear all about how uncertainty is a real thing, and even good decisions can look bad in retrospect. Given this sort of “heads I win, tails we flip again” standard, it is no surprise that private actors would often rather that key decisions be made by government officials. Even if those decisions will be made worse, private actors can avoid frequent retribution for in-hindsight mistakes.

In principle, elites could argue at higher levels of abstraction, not about specific mask or travel rules, but about how best to structure the general institutions and systems of information and incentives in which various choices are made. Then elites could respond to a crisis by reevaluating and refining these more abstract systems. But, alas, most elites don’t know enough to argue at this level. Some people with doctorates in economics or computer science are up to this task, but in our world we use a great many weak indicators to decide who counts as “elites”, and the vast majority of those who qualify simply don’t know how to think about abstract institution design questions. But masks, etc., they think they understand.

Yes, there are many other topics which require great expertise, such as for example designing nuclear reactors. In many such cases, elites realize that they don’t know enough to offer judgments on details, and so don’t express opinions at detail levels. When something goes wrong, they instead may just say “more must be done”, even though they almost never say “less must be done” after a long period without things going wrong. Or they may respond to a problem by saying “government-authorized authorities must oversee more of these details”, though again they hardly ever suggest overseeing fewer details in other situations.

So the problem with regulation is more fundamentally that elites focus on reacting to concrete failures, instead of looking for missed opportunities, and that they understand little more than “do more” and “oversee more” as possible institutional responses to the concrete problems they see as needing expertise. Nor do they understand much about how to design better institutions other than to respond in these ways to more particular observed problems.

And that’s my simple theory of most regulation. Elites love to pontificate on the problems of the day, and want whatever consensus they produce to be quickly enacted by authorities. As government officials are far more prompt and subservient in such responses, elites prefer government authorities to have strong regulatory powers. Elites enforce this preference via asymmetric pressures on private actors, punishing failure but not rewarding success, yet doing neither for public actors and their elite supporters.

Elon Musk is in for a world of pain if any of his many quite risky ventures ever stumbles, as elites are mad at him for ignoring their advice that none of his ventures ever had a chance. Zuckerberg is already being credibly threatened with punishment for supposed missteps by Facebook, even though it isn’t at all clear what they did wrong, and with no gratitude shown for all the social value they’ve contributed thus far.

All this gives me mixed feelings when I see smart people offer good advice in elite discussions on concrete topics like masks, vaccines, etc. Yes, given that this is how decisions are going to be made, it is better to make good than bad choices. But I wish such advisors more often and visibly said that this isn’t how such decisions should be made. We should instead design good general institutions we can trust to deal with each crisis without needing constant elite micromanagement.


Try-Two Contest Board

Imagine that a restaurant wants to ask its associates (cooks, servers, etc.) what are the best two menu items to put on its menu as specials on a particular night. They have a large set of possible menu items to consider, the measure of success is menu item sales revenue, and they want a mechanism that is both fun and easy. (Which rules out conditional prediction markets, at least for now.)

Here’s an idea. Start with a contest board like this, on a wall near associates:

Continue reading "Try-Two Contest Board" »


Do Your Thoughts Scale?

Most intellectuals don’t pick their topics based on fundamental value. They instead opportunistically read the many clues around them regarding on which topics they are more likely to be rewarded. Now if you, in contrast, have the slack and inclination to instead pursue what seems fundamentally important, I salute you. And to help you, I now review some related considerations that you might overlook:

  • Rewards: You don’t want to focus only on topics where others offer rewards, but that does help, so don’t ignore it.
  • Impressive: In particular, if your work can help you look impressive, that can help you get more support later.
  • Generality: The more general your topic, the more different useful applications you and others might later find.
  • Approachable: It is not enough for insights on X to be valuable, you need some ideas for how to get insights on X.
  • Pioneering: Due to diminishing returns, the 10th insight in an area offers more gains relative to costs than the 1000th.
  • Advantage: If you will compete with others on your topic, seek some sort of comparative advantage relative to them.
  • Actionable: Cosmically big topics are insufficient; you also need key concrete actions which your results could inform.
  • Near-term: The sooner that relevant actions could be taken the better; actions in a century matter a lot less.
  • Scales-well: You want to join an intellectual community that will achieve big scale economies in accumulating insights.

This last consideration is so important, and so oft overlooked, that I will now spend the rest of this post on it. The world gains vastly more when intellectuals can organize themselves via a division of labor to each look into different topics and then combine all their efforts into a unified total perspective. So that over time their efforts accumulate into progress. Most intellectuals pretend that their usual habits ensure this, but this isn’t remotely true. Continue reading "Do Your Thoughts Scale?" »


Managed Competition or Competing Managers?

Competition and cooperation [as] opposites, with vice on one side and virtue on the other … is a false dichotomy … The market-based competition envisioned in economics is disciplined by rules and reputations. … Just as competition is not a shorthand for “anything goes,” the quick and thoughtless inference that cooperation is necessarily virtuous is often unjustified. In many cases, cooperation is a tool for an in-group to take advantage of those outside the group. …

Competition refers to a situation in which people or organizations (such as firms) apply their efforts and talents toward a certain goal, and they receive results based substantially on their performance relative to each other. … Cooperation refers to a situation in which the participants seek out win-win outcomes from working together. (More)

Raw unconstrained competition looks scary; lies, betrayal, predation, starvation, war; so many things can go wrong! Which makes “managed competition” sound so comforting; whew, someone will limit the problems. Someone like a boss, police officer, sports referee, or government regulator.

However, raw unconstrained management also looks scary; that’s tyranny, which can go wrong in so so many ways! Such as via incompetence, exploitation, and rot. And so we can be comforted to hear that managers must compete. For example, when individual managers compete for jobs, firms compete for customers, or politicians compete for votes.

But who will guard the guardians? If we embed competitions within larger systems of managers, and also embed managers within larger systems of competition, won’t they all sit within some maximally-encompassing system, which must then be either competition, management, or some awkward mix of the two? This is the fundamental hard problem of design and governance, from which there is no easy escape. Continue reading "Managed Competition or Competing Managers?" »


A Zoologist’s Guide to Our Past

In his new book The Zoologist’s Guide to the Galaxy: What Animals on Earth Reveal About Aliens–and Ourselves, Cambridge zoologist Arik Kershenbaum purports to tell us what intelligent aliens will be like when we meet them:

This book is about how we can use that realistic scientific approach to draw conclusions, with some confidence, about alien life – and intelligent life in particular. (p.1)

Now, that won’t be for a long time, and they will even then be far more advanced than us:

We are absolutely in the infancy of our technological development, and that makes it exceptionally likely that any aliens we encounter will be more advanced than us. (p.160)

The chances of us encountering intelligent aliens [anytime soon] is so remote as to be almost dismissed. (p.320)

Even so, this is what aliens will be like:

One way to prepare ourselves mentally and practically for First Contact is … to reconcile ourselves to the fact that there are certain properties that intelligent life must have. … their behavior, how they move and feed and come together in societies, will be similar to ours. …

[Aliens and us] both have families and pets, read and write books, and care for our children and our relatives. … this situation is actually very likely. Those evolutionary forces that push us to be the way we are must also be pushing life on other planets to be like us. (pp.322-323)

And this will be their origin story: Continue reading "A Zoologist’s Guide to Our Past" »


Real Vs. Fake Stories: Complements or Substitutes?

Regarding meaningful stories and narratives, I see two huge trends over the last century or so.

  1. First, we’ve seen a great increase in the amount of fiction consumed. People now spend many hours a day watching TV and movies, reading novels, etc. Centuries ago this fraction of time was far lower. An important fraction of these stories take place in universes which make a lot more emotional and moral sense than our real world seems to, especially on larger historical and cosmological scales.
  2. Second, we’ve seen a great decline in passions regarding grand historical and cosmological narratives. Religion, nationalism, and ideology all seem to have waned. Yes many people still care a lot about such things today, but centuries ago people eagerly and repeatedly went to war over such things. (We even instituted “freedom of speech” to cut back on their destructive enthusiasm.)

Note that I’m not saying that these “real” narratives are true, just that many people treat them as true. (Or as more true.) This is in stark contrast to stories that inspire and engage people, but which people don’t even pretend are true. (Trekkies love Star Trek, but don’t claim it really happened.)

One simple interpretation of these two trends is that “fake” stories are a substitute for “real” ones. To review, A and B are substitutes when you less want A the more you have of B, while A and B are complements when you more want A the more you have of B. So the theory here would be that we less want “real” stories the more “fake” stories we consume.

One problem with my theory is that most people seem to think fake and real stories are complements.

Now if we just look at random stories, and ignore their types, it seems clear that individual stories are on net substitutes. We only have so many hours a day to consume stories, so if we spend another hour on a particular story, that leaves fewer hours for other stories. So if individual stories are substitutes, it seems plausible that so are categories of stories.

But then why would all these poll respondents be wrong? I suggest: social desirability bias. Stories are seen as good things, and good things are seen to be even better if they are complements. (E.g., exercise and healthy eating.) So I suggest poll respondents are saying that story types are complements mainly to show their support for the good thing of stories.

So if fake and real stories are substitutes, from which side were recent changes driven? A simple tech theory would be that we have improved our ability to tell and share fake stories far more than we’ve improved our ability to construct grand historical and cosmological narratives.


Position Vs. Topic Contrarians

[You can take] an authority defying position [that you] can share with like-minded folks and which might later lead to glory, while avoiding most of the accuracy-reducing costs of disagreement: be contrarian on questions, not answers. (More)

People love to discuss and argue, but usually not on topics where everyone expects everyone to agree. Instead, it is the prospect of disagreement that gives energy and life to most conversation. Even if your conversation partners nod in enthusiastic agreement, they expect that others out there would not so easily agree.

Sometimes people agree with majorities and at other times they agree with minorities. When they take the latter route they often proudly claim that this shows they are motivated mainly by truth, as that explains their willingness to suffer disapproval from a majority.

But in fact, taking a minority position can show your independence and defiance, and it can often get you more attention, which you can use to show how likeable, clever, and articulate you are in the way that you take your contrarian position. Also, sometimes a minority is especially grateful for your show of loyalty to them. And you may hope for larger reputation gains if you are later proved right for taking a minority, relative to a majority, stance. Thus it isn’t at all obvious that being contrarian in this way reliably shows one’s truth-orientation.

As the above quote indicates, there is another kind of contrarian, who instead of taking unusual positions on familiar questions, focuses on unusual questions. Contrarians of this sort are less likely to be wrong and to cause the larger world to go wrong in listening to them. And they contribute more to an intellectual division of labor, wherein we all specialize on different mixes of topics, and then share our conclusions with each other.

But while a topic contrarian seems to contribute more to our all becoming better informed on everything, topic contrarians gain far fewer advantages from their stance. Human conversations tend to follow a norm of sticking to whatever are the current common topics, and so those who speak to other topics are mostly ignored. For example, in policy worlds, there’s a saying that there’s no point in releasing a white paper on a topic that hasn’t been in the news in the last two weeks.

So while audiences often listen especially attentively to position contrarians, they may not even hear a topic contrarian. Which means they are much less likely to notice how likable, clever, or articulate you are about that. Few will see your talking about a weird topic as showing loyalty to them. Yes, you might later gain reputation if your topic later becomes more popular, but usually folks will just see you as bad at following fashion.

I thus conclude that topic contrarians can better argue that their stance suggests a truth orientation, as they gain so much less in other ways.
