Blog comments vary greatly in quality, and low-quality comments often drive away both readers and high-quality comments. This blog is no exception. We now have a “like” button in our comments section. If the people willing to like a comment have, on average, better taste than the people willing to write a comment, readers and authors could avoid low-quality comments by focusing on the most-liked comments. It isn’t obvious why this assumption should hold, but I figured likes probably couldn’t make comments much worse, so why not give it a try?
We Have Comment Likes
It's part of it for sure, but the explanation lies here:
http://blog.disqus.com/post...
Peace mate.
This might explain where the "high rep" I was referred to came from.
I think this is what Victor was referring to.
My main complaint, reading comments without JavaScript, has been fixed. I think this is the only Disqus site I have seen that does it. Thanks!
(PS - it remembered my name this time.)
That's a big part of it.
Another part is that people avoid posting redundant comments. "Liking" lets people keep adding emphasis to a good comment, instead of thinking "oh, someone already said what I wanted to" and leaving. And since good comments are a narrower target than bad comments, this redundancy-aversion lets more bad comments through, because they're bad in different ways.
A simple model: suppose that there are 10 possible good ideas about what to say in response to a post, and 90 possible bad ideas. Each reader has 1 idea, 50/50 whether it's good or bad. If readers comment with their idea only if it hasn't already been said, and there's just a small number of commenters, the post will get a few comments with nearly 50% of them good. If there's a large number of commenters, you'll get a lot of comments with a bit over 10% of them good - a much worse signal:noise ratio. But if readers vote on comments, upvoting the comment that expresses the idea they had, then each good idea is held by about 5% of readers (50% spread over 10 ideas) versus about 0.6% for each bad idea (50% spread over 90), so good comments will get upvoted 9x as much as bad comments and a larger readership will improve the signal:noise ratio.
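The toy model above is easy to check with a quick simulation. This is a sketch, not anything from the original post; the function names and the specific reader counts are my own assumptions, but the parameters (10 good ideas, 90 bad ideas, a 50/50 draw per reader, commenting only on unposted ideas) follow the comment's setup:

```python
import random

# Toy model from the comment: 10 possible good ideas, 90 possible bad
# ideas; each reader draws one idea, 50/50 good vs. bad.
GOOD_IDEAS, BAD_IDEAS = 10, 90

def draw_idea(rng):
    """Return one reader's idea as ('good', i) or ('bad', i)."""
    if rng.random() < 0.5:
        return ('good', rng.randrange(GOOD_IDEAS))
    return ('bad', rng.randrange(BAD_IDEAS))

def comment_fraction_good(n_readers, rng):
    """Readers comment only if their idea hasn't been posted yet;
    return the fraction of posted comments that are good."""
    posted = set()
    for _ in range(n_readers):
        posted.add(draw_idea(rng))
    good = sum(1 for kind, _ in posted if kind == 'good')
    return good / len(posted)

rng = random.Random(0)
# Average over many trials for a small and a large readership.
small = sum(comment_fraction_good(5, rng) for _ in range(2000)) / 2000
large = sum(comment_fraction_good(2000, rng) for _ in range(50)) / 50
print(f"5 readers:    {small:.0%} of comments good")
print(f"2000 readers: {large:.0%} of comments good")
```

With 5 readers the good fraction stays near 50%, while with 2000 readers essentially all 100 ideas get posted and the fraction collapses toward 10/100, matching the comment's argument that more commenters worsens the signal:noise ratio without voting.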
I wonder how well a voting system with "vote for" and "vote against" would work.
I suspect that a lot of malicious comments have strong narrow appeal to people who feel the same way. So someone says something that is hated by 15% of readers, disliked by 75% of readers, liked by 5% of readers and loved by 5% of readers. Let's say that only people who "love" comments bother to upvote them. If 100 people read the comment, in spite of the fact that the overwhelming majority hated or disliked the comment, it still shows up as +5.
That in no way accurately reflects the community's views on the comment.
I would suggest that this is true of some people, whereas the reverse is true of others, and by and large input from the first type is more valuable.
That is not true - comments and whole posts have been deleted by the moderators.
The karma system of LW is just an elaborate scheme to confirm the world-view of Eliezer Yudkowsky and boost his status.
The whole EY/LW world-view consists of a few extreme one-dimensional ideas with little to no real-world justification. Then everyone is expected to nod along and get 'points' by 'confirming' this nonsense with long-winded impressive sounding jargon. It's really quite bizarre.
On the other hand, LW itself is a 'honey-trap' set up to stop smart people working on AI... think about it... all the smart guys who would have been working on potentially dangerous AI are diverted into reading and writing addictive LW posts all day... in that sense it's brilliant.
I said I know it's only rock 'n roll but I like it
I said I know it's only rock 'n roll but I like it
I said I know it's only rock 'n roll but I like it, like it, yes, I do
Oh, well, I like it, I like it, I like it...
-It's Only Rock 'n Roll (Rolling Stones)
No comments are censored on LessWrong. Or at least, not purely because of downvoting. You can set your personal settings to totally ignore how other people have voted if you like, when you view comments.
Interesting thought in the last paragraph. I suspect the assumption holds because we are biased in various ways towards over-estimating the quality of our own comments. Some of it's egotism, some of it's just Illusion of Transparency (i.e. it's hard for us to tell how clear we're being.)
Also, I participate in LessWrong, and in retrospect it often seems to me that my strongly upvoted comments are my better comments.
I think "like" buttons are adequate and elegant in their simplicity. Comments that are irrelevant/malicious will be identifiable by their lack of likes. I find the decision of whether or not to like relatively easy compared with e.g. the decision of whether or not to downvote on LW.
It looks like there is filtering; you can choose to sort by rating (dropdown box at the top of the comments).
The presence of 'dislike' may result in doubling the effects of votes (upvote one and downvote another).
Like - more people should see this
Flag - irrelevant or malicious
No Action - no strong opinion
With no dislike option, you can never decide "this is relevant but fewer people should see this". It's a reasonable design decision.
Yes, that makes good sense.