Long ago, when electricity and phones were new, they were largely unregulated and privately funded. But as the tech (and especially the interfaces) stopped changing so fast, and showed big scale and network economies, regulation stepped in. Today social media still seems new. But as it hasn't been changing as much lately, and it also shows large scale and network economies, many are now talking about heavier regulation. In this post, let me suggest that a
> "Social media gives us more frequent “attaboy” and “we see & like you” signals. People care more than we realized about the frequency, relative to the size, of such signals."
When all you have is a hammer, everything looks like a nail, and it makes you stupid. It's hard to be more idiotic than to view all social media through Hanson's signaling lens, and it leads to such moronic ideas as his wall-sized screen devoted to measuring popularity.
As for "regulation", it's the ISPs who are doing the regulating and stifling ... that's why (government enforced) net neutrality is needed.
> "Yet few then mentioned social media, the application now most strongly associated with this new frontier."
I was a developer of the early network (ARPANET). The purpose was, and still is, sharing and communication. The number one application prior to Mosaic was email, but we were aware of ideas like Ted Nelson's Xanadu. What Hanson missed is neither here nor there.
A lot of people, having glimpsed only a fraction of what can be done with the data gathered about them so far, already feel disturbed. Although I do think it was naïve to trust social media platforms with an abundance of intimate data despite all the warning voices, I think it is worthwhile for people in general to think about and discuss the potential ways to use and abuse the data gathered and combined from their behavior.
Centralized systems can and will abuse power given to them; I don't believe such systems will lead to more freedom if they aren't regulated. It isn't nearly as bad as saying we should let governments prone to cementing their power do as they like, since regulation would stop their plans halfway through, but it points in the same direction.
The obvious counterargument to this is that we shouldn't be so concerned about whether someone will abuse data people give them of their own free will, since we are simply giving people more of what they want; this is close to Robin's point. However, by the same logic we might say we should in no way regulate legal or illegal drugs, in no way tax harmful substances (alcohol, cigarettes, etc.), and in no way intervene when two people are beating the spit out of each other, since we'd just be preventing them from getting what they want.
That particular reductio ad absurdum isn't entirely fair, nor does it hold up under close scrutiny, but it does convey the general idea that giving people more of what they want is not necessarily good.
I would assume all decent social media networks are doing this, just not showing them to you: extracting implicit feedback from events like scrolls and link clicks. Even if a user doesn't explicitly click 'like', taking 15s to scroll to the next item rather than 1s is a strong signal, hovering with a mouse is a further signal, and highlighting or clicking on a link in it is an even stronger signal. (For example, Google uses a ton of implicit signals in ranking search results, like the "long click": if you click on a link and you don't come back to the search results or it takes you many seconds, that implies you found what you wanted. No need for 1-5 stars!)
These signals then feed into decisions about whether to boost the relevant items into more feeds, show them for longer, or suggest them to other people. So in a way, all these 'micro-likes' are already being collected, recorded, aggregated, and exposed to the creator as macro-likes (since the more micro-likes you accumulate, the larger the chance of someone enacting a macro-like). So social media already covers a lot of the possible micro-liking.
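The aggregation described above can be sketched in a few lines. Everything here is illustrative: the signal names, weights, and boost threshold are made-up assumptions for the sake of the sketch, not any platform's actual ranking system.

```python
# Hypothetical sketch: aggregating implicit "micro-like" signals
# (dwell time, hovers, link clicks) into a per-item engagement score
# that decides which items get boosted into more feeds.
# All weights and thresholds below are invented for illustration.

from dataclasses import dataclass

@dataclass
class ItemStats:
    dwell_seconds: float = 0.0   # total time the item was on screen
    hovers: int = 0              # mouse hovered over the item
    link_clicks: int = 0         # strongest signal: user clicked through
    impressions: int = 0         # times the item was shown at all

def engagement_score(s: ItemStats) -> float:
    """Weighted sum of implicit signals, normalized per impression."""
    if s.impressions == 0:
        return 0.0
    raw = 1.0 * s.dwell_seconds + 5.0 * s.hovers + 20.0 * s.link_clicks
    return raw / s.impressions

def boost_candidates(stats, threshold=10.0):
    """Items whose micro-likes clear a threshold get boosted, best first."""
    return sorted(
        (item for item, s in stats.items() if engagement_score(s) > threshold),
        key=lambda item: engagement_score(stats[item]),
        reverse=True,
    )

stats = {
    "post_a": ItemStats(dwell_seconds=150, hovers=12, link_clicks=4, impressions=10),
    "post_b": ItemStats(dwell_seconds=8, hovers=0, link_clicks=0, impressions=10),
}
print(boost_candidates(stats))  # ['post_a'] — post_a scores 29.0, post_b only 0.8
```

The point of the normalization is the one the comment makes: an item that holds your gaze for 15 seconds per impression accumulates micro-likes even if nobody ever clicks an explicit "like".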
Where likes, micro or macro, really seem to be missing is offline: https://www.newyorker.com/m...
> "Imagine you had a wall-sized screen, full of social media items, and that while you browsed this wall the direction of your gaze was tracked continuously to see which items your gaze was on or near."
And of course Facebook's Oculus and VR in general is working hard on eyetracking - for foveated rendering, mostly, but it'll be useful for a lot of other stuff, heatmaps being the most obvious... Replaces crude JS-based estimation and mouse-tracking.
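Mechanically, the gaze-to-micro-like idea in the quoted passage is simple: sample gaze coordinates at a fixed rate and accumulate dwell time per on-screen item. A minimal sketch, where the layout rectangles, sample rate, and item names are all invented assumptions:

```python
# Illustrative sketch: turning a continuous gaze stream into per-item
# dwell time ("micro-likes") for a wall of social media items.
# Layout, sampling rate, and item names are made up for this example.

from collections import defaultdict

# Wall layout: item id -> (x0, y0, x1, y1) screen rectangle in pixels.
layout = {
    "item_1": (0, 0, 400, 300),
    "item_2": (400, 0, 800, 300),
}

def gaze_dwell(samples, layout, sample_dt=0.05):
    """Accumulate seconds of gaze per item from (x, y) samples taken
    every sample_dt seconds; samples outside every rectangle are ignored."""
    counts = defaultdict(int)
    for x, y in samples:
        for item, (x0, y0, x1, y1) in layout.items():
            if x0 <= x < x1 and y0 <= y < y1:
                counts[item] += 1
                break
    return {item: n * sample_dt for item, n in counts.items()}

# 20 samples (1 second) on item_1, 5 on item_2, 2 off any item:
samples = [(100, 100)] * 20 + [(500, 150)] * 5 + [(900, 400)] * 2
print(gaze_dwell(samples, layout))  # {'item_1': 1.0, 'item_2': 0.25}
```

A heatmap is just this table rendered spatially; the hard part is the eye tracking itself, not the bookkeeping.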
How would this be an improvement?
Check out the 'Black Mirror' episode 'Nosedive.'
Twitter lets you see interaction metrics for your own tweets, including views and the number of times people expand your tweet; and Snapchat lets you see who viewed your stories, too.
Also, you can "react" to posts on Facebook.
Instagram sometimes tells you how many people viewed a post, and you can always see how many people watched one of your stories. Not quite as far as you're suggesting, but it's in the same direction.