But they have Top Men working on the problem. Top Men.
That has to be a context dependent result.
Wasn't it already shown long ago that providing the same info to people who already disagree causes them to move even further apart?
For rational agents, learning new different info separately causes divergence, but learning about the views of others causes convergence, and the latter process overwhelms the former. For humans, it depends of course on how they actually reason and listen.
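A toy sketch of that two-force dynamic, under the (strong) assumption that agents are just estimating one common quantity and pool views by simple averaging: private signals spread the estimates apart, while repeatedly learning each other's views pulls them back together, and the pooling wins.

```python
import random

random.seed(0)
TRUE_VALUE = 10.0

def spread(xs):
    """Crude disagreement measure: range of the agents' estimates."""
    return max(xs) - min(xs)

# Divergence: each agent separately learns a different noisy private signal.
estimates = [TRUE_VALUE + random.gauss(0, 2.0) for _ in range(5)]
spread_before = spread(estimates)

# Convergence: agents repeatedly learn each other's views and move halfway
# toward the group mean (DeGroot-style pooling, not full Bayesian updating).
for _ in range(10):
    mean = sum(estimates) / len(estimates)
    estimates = [(e + mean) / 2 for e in estimates]
spread_after = spread(estimates)

print(spread_before, spread_after)  # pooling shrinks disagreement ~1000x
```

Since the group mean is unchanged by each pooling step and every deviation from it halves per round, disagreement decays geometrically, which is the "latter process overwhelms the former" part of the claim in miniature.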
Call my argument the Keynesian argument against cognitive convergence. The invisible hand of the market(-place-of-ideas) will cause consensus when we're all dead. Waiting for it works only if Aubrey de Grey's plans to overcome the longevity problem work. So while I'm hopeful, I'm not betting on it.
Reverse fight club (machine learning) is fun but oversold as a panacea.
Common sense... I don't even... actually, I might. Your ideas aren't self-evident just because you said they are. Stop being lazy and explain yourself.
We are living in a very large state space of possibilities and we already have inferential-distance-type problems (ever tried to explain conservation of energy to a non-STEM type?). Spreading knowledge exacerbates divisions among us by increasing those inferential distances yet further. Perhaps once we have a good way of explaining the edges of the possible to people (and actually use it when we educate), then in like a thousand years we'll have convergence, but right now homeopathy is still a thing...
The size of the problem, and the rate at which the cutting edge shifts, exceed the rate of knowledge dissemination by, um, a lot.
Don't get me wrong about the spread of knowledge: it is an unambiguous good, but it does have side effects that warrant contemplation.
In machine learning and in common sense it produces convergence. Venkatesh Rao has claimed otherwise, but very non-credibly.
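A minimal illustration of the machine-learning half of that claim: for a convex loss, learners that start from very different initial guesses converge to the same answer as training proceeds. (The data and learning rate here are made up for the example.)

```python
# 1-D least-squares fit y ~ w*x by gradient descent. Because the loss is
# convex, any starting point ends up at the same minimizer.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]  # roughly y = 2x, with noise

def fit(w, lr=0.01, steps=2000):
    """Gradient descent on mean squared error, starting from weight w."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

w_a = fit(w=-5.0)  # pessimistic initial guess
w_b = fit(w=+5.0)  # optimistic initial guess
print(w_a, w_b)    # both runs agree to many decimal places
```

The point being: more training data and more training steps push independent learners toward the same hypothesis, not apart.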
Learning doesn't inherently cause conflict, but it provides the opportunity for it, like this: I go online, I read a post on a strange non-mainstream idea, then I try to relate this new idea to someone I know. If I do a good job, they get it (cool), but I'm not perfect (far from it, yada yada), so sometimes I say something either bombastic or offensive while relating the idea and BAM, I have alienated one person against that idea (wow, that description makes me sound bad at talking /sigh).
Things don't always go down like this, but it happens often enough (5%-ish?).
So new idea + bad (or too excitable) explainer == possibility for conflict; possibility x frequency of exchange == amount of conflict.
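Spelled out with numbers (the 5% is the guess from above; the exchange count is purely hypothetical):

```python
p_conflict = 0.05          # rough chance one exchange goes badly (the "5%-ish")
exchanges_per_year = 200   # hypothetical: how often one relates new ideas

# possibility x frequency of exchange == amount of conflict
expected_conflicts = p_conflict * exchanges_per_year
print(expected_conflicts)  # 10.0 alienated listeners a year, on these guesses
```

So even a small per-exchange failure rate, multiplied by the internet-scale frequency of exchanges, adds up.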
I think it's a very 'change-is-scary-and-hard' type thing, and one must be exceptionally careful or feathers get ruffled (or maybe it's because I'm a New Englander).
Or put another way: the herd (of people) only gets new ideas one person at a time, and the path of an individual mind from never-heard-of-it to interesting-tell-me-more or I-agree is long and littered with many off-ramps to F-you-that's-insane or back-away-slowly-while-smiling-and-nodding.
But I am parroting (poorly) an article I can't find right now, so sorry that it's coming out sort of garbled.
I don't see why learning would produce more conflict by itself. Nor even why it would make more divergence in individual views.
Pretty much that /\ /\ /\. But as much fun as "gate-keepers-bad, grr grr /shake-fist" is (go Pirate Party go), I like the more nuanced view of the current increased/increasing division as just the natural outcome of faster cultural growth.
Or said another way: if the internet sped up communication (it did), and if communication allows/causes cultural growth/learning, then the internet sped that up too. So the price of faster cultural learning/growing is an increase in cultural volatility (if volatility = division). The upshot being that no amount of tweaking will decrease the division unless* there is some natural end-point to cultural learning, some inevitable idea-set that everyone will trend toward given enough time. I don't think there is one; Robin sometimes posits "forager values" as what people lean toward when they get rich (i.e. freedom of choice), but I don't really buy that one myself...
Idk, Robin: do you buy my communication -> learning, learning == growth, and division == volatility, when one translates idea-words into market-words?
*Side note: maybe killing the internet would stop the accelerating-learning -> more-division train, but the last time the elites tried that the Arab Spring happened, so I'm guessing it will be some time before they try it again.
Your critique seems pretty compelling to me. Which makes me eager to see a response.
It's pretty clear that the Internet brought forward a revolution akin to that of the printing press, and the ruling ideology is struggling to contain the rise in contrarian viewpoints and their wider spread. The counter-revolution via a fenced Internet under the control of Google, Facebook, Twitter and Apple is working pretty well -- the next logical step is to make the current ad-hoc systems for regulating discussion a permanent fixture. That's where the Science story comes in. It's about control, not about trustworthiness, obviously: if it were, there's no doubt in the world that Robin's system would be ten times better.