Tell Me Your Politics and I Can Tell You What You Think About Nanotechnology

Ronald Bailey has a column in Reason where he describes the results of the paper Affect, Values, and Nanotechnology Risk Perceptions by Dan M. Kahan, Paul Slovic, Donald Braman, John Gastil, and Geoffrey L. Cohen. The conclusion is that views on the risks of nanotechnology are readily elicited even when people know that they do not know much about the subject, and that these views harden along ideological lines as more facts are supplied. Facts do not matter as much as values: people appear to make a quick gut-feeling decision (probably by reacting to the word "technology"), which is then shaped by their ideological outlook. Individualists tend to see the risks as smaller than communitarians do. There are similar studies showing the same thing about biotechnology, and in my experience the same thing happens when the public is exposed to discussions about human enhancement.
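
To see how extra facts can harden rather than soften such positions, consider a toy sketch of "biased assimilation". It is purely illustrative; the update rule, weights, and starting beliefs below are my own assumptions, not anything estimated in the paper. Each agent nudges its belief about the risk toward each new fact, but heavily discounts facts that clash with its current position:

    import random

    def update(belief, evidence, bias=0.9, rate=0.1):
        """Nudge belief toward a piece of evidence, but heavily
        discount evidence that disagrees with the current position."""
        agrees = (evidence > 0.5) == (belief > 0.5)
        weight = rate if agrees else rate * (1.0 - bias)
        return belief + weight * (evidence - belief)

    random.seed(1)
    individualist, communitarian = 0.45, 0.55  # nearly identical gut reactions
    for _ in range(200):
        fact = random.random()                 # both see the same mixed evidence
        individualist = update(individualist, fact)
        communitarian = update(communitarian, fact)

    # Identical facts, diverging conclusions: roughly 0.3 versus 0.7.
    print(round(individualist, 2), round(communitarian, 2))

Both agents see exactly the same stream of facts, yet the tiny initial difference in gut feeling decides which pole the "facts" push them toward.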

The authors claim that this result does not fit with "rational weigher" models where people try to maximize their utility, nor with "irrational weigher" models where cognitive biases and bounded rationality dominate. Rational individualists and communitarians ought not to differ in their risk evaluations, and the authors claim it is unlikely that different cultural backgrounds would cause differing biases. They suggest a "cultural weigher" model where individuals don't simply weigh risks, but rather evaluate what one position or another on those risks will signify about how society should be organized. When people learn about nanotechnology or something similar, they do not update instrumental risk probabilities but develop a position with respect to the technology that will best express their cultural identities.

This does not bode well for public deliberations on new technologies (or political decisions on them), since it seems to suggest that the only thing deliberation will achieve is a fuller understanding of how to express already-settled cultural/ideological identities with regard to the technology. It does suggest that storytelling around technologies, in particular stories about how they will fit various social projects, will have much more impact than commonly believed. That is not good for rational discussion or decision-making, unless we can find ways of removing the cultural/ideological assumptions of participants, which is probably hard work in deliberations and impossible in public decision-making.

  • http://byrneseyeview.com Byrne

    Sounds like people could be deciding whether to become individualists or communalists based on their risk preference and their assessment of other people’s ability to foresee risk: an individualist might say “People love gambling, and everybody knows it, so we ought to structure our relationships so that everyone can take as much risk as they want, so long as they accept the consequences,” while communalists might claim that “Everyone loves to take risks, but nobody likes the consequences, so we should probably socialize the costs and benefits so they all even out.” In which case individualists will pursue innovation and assume that catastrophes will be contained by the self-interest of those at risk, while communalists will be as static as they can afford to be.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    This is an important result for us; thanks Anders for bringing it to our attention! It unfortunately confirms my impression, as I described in December:

    Most people just don’t care much about the future itself; they mainly like the future as a dramatic backdrop for admiring impressive people and cool gadgets, or for taking sides in current ideological battles.

    This is a big reason that I hesitate to write more for the public about the social implications of future technologies.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    This comment over at Reason Online describes similar data on global warming.

  • http://dirtsimple.org/ Phillip J. Eby

    It seems to me that this is just an example of the general principle that when a person is faced with a choice they don’t have any personal knowledge of or involvement in, they fall back on the model of “what would someone like me (i.e., people I identify with) do?”. I doubt it has anything in particular to do with either technology or risks; it’s simply that an absence of personal knowledge causes one to fall back on general principles.

  • Jef Allbright

    This item is remarkable not for its content, but for its meta-statement that social scientists still want evidence that human decision-making isn’t and can’t be objectively rational. This unsurprisingly negative outlook will flip to positive when we accept a more realistic model and implement systems for collaborative decision-making that exploit increasing awareness of our present fine-grained values, promoted by increasingly effective principles of scientific/instrumental knowledge in an intentional process with real-world feedback driving an increasingly coherent model. Wash, rinse, repeat.

    The emphasis on principles rather than ends provides a basis for agreement increasing with the applicability of the principle over increasing scale. It removes from the formulation any spurious conflict over perceived desired ends with their entailments of unanticipated and unintended consequences, and keeps the focus on promotion of our present values into the future (to the extent that they actually work, i.e., they represent a coherent model of observed “reality”.)

    As for “individualists” versus “communalists”, any agent is necessarily an individualist as a decision-maker. A more accurate distinction would not be so either-or, but rather would indicate the identification of the decision-making Self over an extended context (e.g., individual, family, tribe/organization, humanity, sentient creatures…).

    Fundamentally, decisions are seen as increasingly “right” to the extent that they are seen to implement principles promoting an increasing context of increasingly coherent values over increasing scope of consequences.

    It has never been about facts and truth, but about subjective awareness of how to promote one’s values, whatever they might be. We can be thankful that the universe provides a consistent ratchet effect, selecting for “what works.” We can be even more thankful when more of us recognize this and contribute to putting a longer lever on the ratchet handle, heck, maybe even some gears.

  • http://www.aleph.se/andart/ Anders Sandberg

    I couldn’t resist commenting a bit further on this on my blog,
    http://www.aleph.se/andart/archives/2007/06/politics_nanotechnology_and_palestina.html
    bringing up another tremendously interesting paper:
    http://www.pnas.org/cgi/content/abstract/104/18/7357
    Sacred bounds on rational resolution of violent political conflict by Jeremy Ginges, Scott Atran, Douglas Medin and Khalil Shikaki (PNAS, May 1, 2007 vol. 104 no. 18 7357-7360). They show that opposition to compromise over sacred issues is increased by offering incentives to compromise while it is decreased when the adversary makes a symbolic compromise about their own sacred values.

    In this context, it might suggest a way to deal with disagreements about emerging technology. But it will still largely be an issue of bootstrapping constructive compromises rather than making people rational.

  • Hopefully Anonymous

    Super-interesting. Pragmatically, your (Anders) OP suggests that we can reduce cognitive bias in the public when evaluating risks of “emerging technologies” by naming them strategically. For example, by not using the word “technology”.

  • TGGP

    The authors claim that this result does not fit […] with “irrational weigher” models where cognitive biases and bounded rationality dominate.
    I don’t see what’s wrong with the “irrational weigher” model here. Maybe I just don’t understand it well enough. Nor does it seem as obvious to me that “it is unlikely that different cultural backgrounds would cause differing biases”, especially given that behavioral studies have found communal cultures to have differing biases (like the Fundamental Attribution Error) from more individualist cultures.

  • http://www2.blogger.com/profile/15437653761628928730 Kevembuangga

    “Tell Me Your Politics and I Can Tell You What You Think About Nanotechnology”

    Can you do the reverse?

    I think nanotechnology is just junk, a childish dream about extrapolating folkmechanics to the nanoscale, and that it is dangerous, but ONLY in the same way that polluting chemicals are; no goo of whatever color.

    So what are my political inclinations?

  • http://www.aleph.se/andart/ Anders Sandberg

    Hmm Kevembuangga, I see an interesting contradiction in the statement that it is both a childish dream based on folkmechanics (implying that it won’t work at all) and dangerous like pollution. Not much belief in authority, plus concern about safety: I think that would pretty clearly place you in the egalitarian-communitarian group:
    http://research.yale.edu/culturalcognition/index.php?option=content&task=view&id=45

  • http://profile.typekey.com/jefallbright/ Jef Allbright

    Anders wrote:

    “They show that opposition to compromise over sacred issues is increased by offering incentives to compromise while it is decreased when the adversary makes a symbolic compromise about their own sacred values.

    In this context, it might suggest a way to deal with disagreements about emerging technology. But it will still largely be an issue of bootstrapping constructive compromises rather than making people rational.”

    Compromise can be warranted as a tactic, but not as a strategy. It’s a zero-sum approach settling for less than growth. Bootstrapping, yes, but that’s not compromise.

    A bigger picture approach to such issues — which we might expect to become practical with the increasing awareness afforded by technology — is to show that the tree of the probable, with its roots firmly grounded in what we think of as the physics of reality, supports increasingly diverse branches of the possible, representing the values of individual agents. Agreement between divergent branches is increasingly achievable in terms of the support provided by principles convergent toward the trunk.

    Staying in the Red Queen’s Race requires better than compromise. It requires a positive-sum strategy promoting growth of our common branch of the tree of the probable, thriving on competition between diverse branches of the possible. At this time, the tree-tips are only just becoming aware of their tree, but they need not be rational, only increasingly aware of their values and their supporting principles, and new growth will continue to surprise.

    [Apologies for the heavy metaphor; it seems the only way to convey a rather large concept in a rather small space.]

  • Hopefully Anonymous

    Jef, I think you may have missed that Anders was suggesting “symbolic compromise”, with the emphasis on symbolic. I think it’s implied that people dumb enough not to be able to overcome their own self-harming biases might also be dumb enough to be fooled by symbolic compromises. I hope that’s true, to the extent it can provide an easy solution for managing one element of existential risk.

  • http://www.baseballprospectus.com guy in the veal calf office

    I agree that it’s pretty natural for people surveyed on items they know little about to take a view or position from within a framework that they understand well. People like to talk, debate, and pontificate about things that they know incompletely. It’s fun and passes the time. But I disagree that this matters much or, as Anders says, “This does not bode well for public deliberations on new technologies (or political decisions on them)….”

    Most policy arises from the contest of informed, interested and organized groups that will have the necessary expertise. Most technological advances occur without public discussion. Not always, but usually. A survey of random people will elicit off-the-cuff responses from people far away from the actual deliberation of policy or the advancement of technology.

    This view is borne out by tax policy. Everyone surveyed will have an opinion about progressivity, optimum tax systems, etc, but only a few will enter the sausage factory that is the Joint Committee on Taxation. The opinions of the many are usually too diffuse to matter.

    As the previously cited comment in Reason points out, Climate Change mitigation may become an exception to the foregoing when the next president is inaugurated.

  • http://www2.blogger.com/profile/15437653761628928730 Kevembuangga

    Anders Sandberg : I see an interesting contradiction in the statement that it is both a childish dream based on folkmechanics (implying that it won’t work at all)

    A truly “biased” view! 😉
    Don’t “imply” on my behalf!
    I am not saying that it won’t work at all; I am saying that it’s just a hype name for fancy chemicals, and that if nanotech ever comes to anything like self-replication or “autonomous gremlins” it will be more on the side of synthetic biology than akin to “mechanical assembly” or nano-robots.
    This mechanical approach is THE idiotic idea behind nanotech: molecules are not “bricks and cogs”, they have much more interesting properties, and overlooking those just to pander to the crude folkmechanics fantasies of the layman is childish (and unrealistic), SciFi for morons.
    And fancy chemicals of any kind are dangerous because you never know what they will interact with; even very “primitive” nanotech can be dangerous: a type of buckyball has been shown to cause brain damage in fish.
    So there is no contradiction in my statement.

    As for your assessment of my political position, it is ridiculously harebrained, as is the article it is based on:
    ONE axis of classification, “egalitarian and solidaristic” versus “hierarchical and individualistic” !!!
    I am solidaristic or individualistic depending on the specific case, non-hierarchical, and especially non-egalitarian, since having an even modestly above-average IQ has meant being bugged by morons all my life.

  • michael vassar

    Hmm. People with modestly above average IQ and love of exclamation marks seem to be trolling here. Probably time for a moderator to do something about that if the quality of the site is to be maintained.

  • aaron davies

    Hopefully, it has been said that the “N” was dropped from “NMRI” because stupid people were afraid of anything “nuclear”.

  • http://www2.blogger.com/profile/15437653761628928730 Kevembuangga

    michael vassar : People with modestly above average IQ and love of exclamation marks seem to be trolling here.

    modestly above average IQ = Mensa level = about 130 million people worldwide, so yes, above and modestly.

    seem to be trolling here
    Oh! Sorry, I forgot: the slightest insinuation that any singularity-related topic is delusional junk is trolling!
    Of course! Belief in the Singularity is no bias, sure!

    Probably time for a moderator to do something about that if the quality of the site is to be maintained.

    Would not the “quality” of the site be better maintained by allowing valid criticism, like noticing that the pretense of “modeling” a complex opinion like political stance along a single axis is ridiculous?

    I know singularitarians LOVE censorship; is it because they cannot back their faith with valid arguments?

  • Hopefully Anonymous

    Michael, Kevembuangga’s first two posts were far from trolling. Particularly his second post, which in my opinion was refreshing skepticism (at least in this forum) regarding the viability of elements of nanotechnology. Unfortunately his most recent 4:10pm post is moving in a trolling direction, with unnuanced comments like “I know singularitarians LOVE censorship”, as ridiculous as some of the statements Kevembuangga has criticized in this thread. So Kevembuangga, I request that you don’t give up the moral high ground so quickly. More nuanced criticism, less unnuanced ad hominem, please.

  • Doug S.

    My theory on nanotechnology: if there were a more efficient form of “grey goo” than bacteria, it probably would have evolved already. What’s the difference between a bacterium and the so-called “nanobot” of science fiction?

  • Nick Tarleton

    Who says diamondoid replicators are capable of arising without intelligent help?

  • http://www.aleph.se Anders Sandberg

    Does it matter at all for this discussion whether nanotechnology can work? I am pretty certain that if the study had asked people about their views of nonexistent technologies like tautotechnology and hexatechnology, or classical but not widely known technologies like fluidistors, people would have given the same answers. The issue here is that for many people facts don’t matter as much as putting concepts they encounter into the ideological/cultural frameworks they already have.

    Hopefully we (and relevant decision-makers) care about the facts of the matter, but when it comes to debate, Overcoming Bias is probably better for discussing the biases in thinking that occur when new technology is considered than actually considering the technology itself.

  • William Newman

    (There must be some more appropriate place to discuss this question, and I for one will be happy to leave if someone points it out, and might even try to bite my tongue until someone does. But first…)

    Just because evolution is very clever with carbon and water doesn’t mean that bacteria are anywhere near the end of the line in tiny self-reproducing forms.
    Look at the macroscopic technical stuff we’ve come up with that never seems to appear in macroscopic organisms: radio and radar, for instance, or all sorts of structural materials that neither plants nor animals use, or fast transistor switches instead of the crazy tricks the brain uses to try to compensate for neuron slowness in important subsystems (for stereo hearing, for example). Do you think that radio wouldn’t be a competitive advantage to some species of animals, or that it is fundamentally chemically impractical to grow a radio organ? I think a more likely explanation is that evolution can’t get from here to there in any reasonable time.

    Also, even accepting a limitation to carbon and water, various fundamental things in the design of life look to me like local optima far from global optima. Ribosomes built largely out of RNA, for example. Something radically different could be more efficient. How does one place a bound on “much more efficient”?

    (Chomp.)

  • http://profile.typekey.com/hollerith/ Richard Hollerith

    Anders Sandberg: nice entry! My compliments to you.

  • Hopefully Anonymous

    William,
    It’s a good question. I don’t find the argument “because it hasn’t already evolved, it can’t exist” very compelling either. Although I do think there are species with functional sonar (bats), so that may not be the best example.

    I think overcomingbias could adopt a trick from dailykos and have regular “open threads”, or sponsor a message board where we could develop our own threads rather than hijack existing ones when interesting side topics come up.

  • http://profile.typekey.com/jefallbright/ Jef Allbright

    Anders wrote:

    Does it matter at all for this discussion whether [x] can work?

    Anders, this is basic to my points above. Fundamentally, people come to agree, and cooperate, on the basis of similar values in the present. Goals are second-order, dependent on a framework of expectation, defined in terms of values, and set in the uncertain future.

  • http://www2.blogger.com/profile/15437653761628928730 Kevembuangga

    Anders Sandberg: Overcoming Bias is probably better for discussing the biases in thinking that occur when new technology is considered than actually considering the technology itself.

    From your own contributions in this thread and its comments, it doesn’t look that way.
    Trying to shoehorn the pro/con debate about nanotech into a simplistic, one-axis value statement, democratic/individualistic (good) versus authoritarian/collectivist (baaad, commies and fascists…), is not very enlightening.
    It sounds more like plain pro-nanotech propaganda, or could it be the result of some bias of yours?
    Furthermore, bickering about nonexistent and implausible technologies (advanced nanotech, not just buckyballs, smart paints, or photocells) is reminiscent of “how many angels can dance on the head of a pin”.

    Hopefully Anonymous : More nuanced criticism, less unnuanced ad hominem, please.

    Oh! Yeah?
    So, calling me a troll isn’t ad hominem?
    And asking for censorship is “nuanced criticism”?
    Michael Vassar is a well-known singularitarian.
    Am I wrong on this? Is “singularitarian” an epithet?
    Denouncing the obvious love of singularitarians for censorship and their lack of solid arguments about the plausibility of the Singularity is “unnuanced”?
    This reeks of double standards!!!

  • http://www.aleph.se/ Anders Sandberg

    Hmm, who is trying to shoehorn whom here? While I personally take a pro-nanotech, individualistic stance, I think the issue brought up by the paper is valid regardless of your political or nanotech stance. If its conclusions hold, good technologies might go undeveloped and bad ones might be developed because of how they fit with dominant cultural schemata. Isn’t that something everybody who wants to overcome bias would like to work against?

    If the acceptance or rejection of technology is done without reference to the actual content of the technology in question, it is almost certainly an irrational act. There might be some diffusely encoded information in cultural systems that actually does contribute some rational information to this kind of decision-making (e.g. the often true observation that a fix that doesn’t correct an underlying problem will be less efficient than a fix that does, so any technology that sounds like a superficial fix should be suspect), but I think it tends to be rather limited. Even by the standards of social values it is irrational not to examine the content of a largely unknown technology to see how good or bad it is; the collectivist/individualist relevance of nanotechnology or cognition enhancement is pretty nontrivial when examined, and just assuming without examination that it will be good or bad for whatever goal one has is just as irrational as making assumptions about its safety.

    The problems with doing a rational, careful examination of a new technology are of course sizeable in themselves. That is why I’m not too keen on the kind of loose throwing around of statements we would get in this thread if we tried to apply it to analyse nanotechnology per se. There are far better forums for that elsewhere, or we could create a dedicated thread for trying to understand the biases affecting nanotechnology evaluation per se.

  • Hopefully Anonymous

    Kevembuangga,
    This is the response I get for saying that you’re not a troll and that you add value to the thread?
    1. It wasn’t an ad hominem for Michael Vassar to say you were trolling on the thread. It was just an incorrect assessment, in my opinion: a real troll would be as annoying for you as for me or him, because a real troll is essentially spamming and wasting all of our time. It’s usefully descriptive, not attacking, to point out real trolls. I just agree with you that he was wrong in identifying you as a troll.
    2. That Michael Vassar is a singularitarian and wanted you removed from the thread as a troll does not mean that “singularitarians love censorship”.
    3. It is unnuanced to claim that “singularitarians love censorship”.
    4. This sort of stuff distracts from your very interesting analysis and critiques. More baby, less bathwater, please.

  • http://www.acceleratingfuture.com/michael/blog Michael Anissimov

    Nanotech debate! Time to pick a side of the rope and pull on it.

  • http://www2.blogger.com/profile/15437653761628928730 Kevembuangga

    Anders Sandberg : who is trying to shoehorn whom here?

    You are trying to reduce the pro/con nanotech issue to a matter of “irrationality” of the opponents, having its source in political bias; what did I miss?

    I think the issue brought up by the paper is valid regardless of your political or nanotech stance.

    I think not, since, as I said, collapsing political position into the “classic” left/right distinction is simplistic.
    Though this does capture a lot of the variance in political opinions, it is too crude a criterion if any real discussion is to be had. Isn’t Overcoming Bias a place where such debates are supposed to happen (and not only about nanotech)?

    If the acceptance or rejection of technology is done without reference to the actual content of the technology in question, it is almost certainly an irrational act.

    To use your own words, isn’t this an “interesting contradiction”?
    Because that’s what you are actually doing: “Does it matter at all for this discussion whether nanotechnology can work?”

    Even by the standards of social values it is irrational not to examine the content of a largely unknown technology to see how good or bad it is

    Didn’t I explain why I think “nanotechnology” is a misnomer (a polite word for hype…) for parts of physics and chemistry?
    Didn’t I say that throwing around fancy molecules is dangerous, and why: asbestos, CFCs, pesticides…
    The fact that such warnings are actually supported by many people on the sole basis of their political positions does not detract from the rationality of arguments derived from previous experiences with the same kind of carelessness.

    That is why I’m not too keen on the kind of loose throwing around of statements we would get in this thread if we tried to apply it to analyse nanotechnology per se.

    More self-contradictions in your arguments.

    Hopefully Anonymous : This is the response I get for saying that you’re not a troll and that you add value to the thread?

    Wasn’t your statement “less unnuanced ad hominem, please” addressed to me?

    That Michael Vassar is a singularitarian and wanted you removed from the thread as a troll does not mean that “singularitarians love censorship”.

    Oh! It was just an “incorrect assessment”, yeah?
    Though my view that singularitarians love censorship is an ad hominem?
    Can you explain the “rationale” in your distinction?

    I certainly stand by my assertion that singularitarians love censorship and are unable to provide solid arguments for the plausibility of the Singularity.
    I have been banned from Michael Anissimov’s blog on the grounds that “this is not the place to criticize the Singularity, love it or leave it”, LOL…
    I also had fruitless discussions with Kaj Sotala, though I was not banned; some singularitarians are more honest than others.
    You’ll probably see another ad hominem in the above sentence, so please tell me which word I can use to call attention to the fact that refusing to engage in argumentation on contentious points is a conspicuous feature of intellectual dishonesty.

  • Hopefully Anonymous

    Kevembuangga, the examples you provided don’t add up to “singularitarians love censorship”. And I’m neither God nor your daddy; it’s not my job to call out everyone who uses an ad hominem in this thread or elsewhere. “Refusing to engage in argumentation on contentious points is a conspicuous feature of intellectual dishonesty” works on its own as a phrase. “X loves censorship” is a poor replacement, in my opinion. Even better: “You’re failing to optimize our mutual persistence odds, in my opinion, by refusing to engage in discussion on this particular contentious point with me.” Because intellectual honesty is only valuable to the degree that it optimizes our mutual persistence odds, right?

  • Stuart Armstrong

    If it is concluded that the risks of nanotech outweigh the benefits, what sop can we give the individualistic people (the hierarchical ones will follow the scientific opinion)? Some ideas:

    1) Make the limitations on nanotech market-based in some way, in design and/or in enforcement
    2) Aggressively open up other areas of research where the risks are lower but irrational prejudice against them is high (maybe biotech for instance)
    3) Pay researchers or companies to stop work on nanotech. Get those opposed to nanotech to contribute, as individuals, to the fund that pays for this
    4) Clear away a lot of regulation in other domains – even some that is worthwhile and justifiable (but not vital)
    5) Incorporate elements that individualists would like – such as betting markets – into the risk assessment for nanotech. Incorporate them into future risk assessments for other technologies

    If it is concluded that the benefits of nanotech outweigh the risks, what sop can we give the egalitarian and communitarian people? Some ideas:

    1) Set up panels, boards of experts, and other bodies to oversee the research. Invite prominent critics to be part of them
    2) Restrict the power and duration of patents on nanotech
    3) Force nanotech companies to contribute to some public good
    4) Set the safety bar on nanotech higher than it rationally should be
    5) Subsidise nanotech research that contributes to egalitarian goals. Pay for the subsidy through general taxation

  • Stuart Armstrong

    Anders Sandberg: nice entry! My compliments to you.
    I second that.

    I don’t find the argument “because it hasn’t already evolved, it can’t exist” very compelling either.
    However, the argument “it can exist, hence we can build it” is very unconvincing too…

    unless we can find ways of removing the cultural/ideological assumptions of participants,
    We need people to take more responsibility for their decisions, and not just behave as unaccountable ideological purists whose opinions won’t change the debate anyway. Apart from betting markets, how about referendums? The Swiss seem much more rational about their political issues than most…

  • http://www2.blogger.com/profile/15437653761628928730 Kevembuangga

    Hopefully Anonymous : And I’m neither God nor your daddy

    You try your best nevertheless, it seems; it’s getting boring and off-topic.

    Because intellectual honesty is only valuable to the degree that it optimizes our mutual persistence odds, right?

    Because intellectual honesty is a prerequisite for avoiding dead ends in iterated prisoner’s dilemma games.
    Iterated prisoner’s dilemma games are the rational way to establish trust between competing parties.
    A “dishonest” move cripples the game (through decreased trust) for quite a while, as the sketch below shows.
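
    A minimal sketch of that crippling effect, assuming two tit-for-tat players and a standard payoff matrix (all numbers here are illustrative only):

        # Two tit-for-tat players in an iterated prisoner's dilemma.
        # Standard payoffs: mutual cooperation 3/3, mutual defection 1/1,
        # sucker 0, temptation 5.
        PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                  ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

        def play(rounds=30, cheat_round=10):
            a_hist, b_hist = [], []
            a_score = b_score = 0
            for t in range(rounds):
                # Tit-for-tat: cooperate first, then copy the opponent's last move.
                a = 'C' if t == 0 else b_hist[-1]
                b = 'C' if t == 0 else a_hist[-1]
                if t == cheat_round:
                    a = 'D'  # a single "dishonest" move
                pa, pb = PAYOFF[(a, b)]
                a_score, b_score = a_score + pa, b_score + pb
                a_hist.append(a)
                b_hist.append(b)
            return a_score, b_score, ''.join(a_hist), ''.join(b_hist)

        sa, sb, ha, hb = play()
        print(ha)      # CCCCCCCCCCDCDCDCDC... the retaliation never dies out
        print(sa, sb)  # 80 80, versus the 90 each that unbroken cooperation pays

    One defection in round 10 echoes as retaliation for all the remaining rounds: naive reciprocity never recovers the lost trust by itself.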

    Stuart Armstrong : (the hierarchical ones will follow the scientific opinion).

    Ahem…
    You mean just like with Global Warming or Evolution versus Creationism?

  • Hopefully Anonymous

    Kevembuangga: “Because intellectual honesty is a prerequisite for avoiding dead ends in iterated prisoner’s dilemma games.
    Iterated prisoner’s dilemma games are the rational way to establish trust between competing parties.
    A “dishonest” move cripples the game (through decreased trust) for quite a while.”

    Establishing trust between competing parties is only valuable to the degree that it optimizes our mutual persistence odds, right?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    The discussion here is too far away from the topic of the post. I’m sure there are plenty of other places to have these other debates.