This is our monthly place to discuss related topics that have not appeared in recent posts.
I apologize if this idea/subject has already been beaten to death on this particular blog, but I'd like to know what people here think about "low-cost political involvement," and what they categorize as "low cost" relative to the potential benefit. It seems to me that most people have only a cursory knowledge of political engagement, without ever having really participated in the political organization of any libertarian group.
Thus, they don't know how easy or difficult taking market share from totalitarianism actually is. As the USA spirals toward totalitarianism, it might actually be a good thing to find this out, or at least to take some "off the cuff" estimates and measurements. What if it's not that hard, but all the people currently involved are abject idiots, or FBI guys who don't want liberty to be popular?
Might it not make sense to actually find out how hard creating a new industrial revolution --the way the first one was created-- actually is?
The reason I ask is because it's become pretty obvious to me that the Libertarian Party is controlled by a couple of FBI agents, or some similarly perversely-motivated equivalent. (Those who are not aware of the government's long history of infiltrating and externally controlling those who threaten it might dismiss this as idiotic conspiracy babble, but before doing so, it would be good to read "The Man Who Killed Kennedy: The Case Against LBJ" by Roger Stone; "In The Spirit of Crazy Horse" by Peter Matthiessen, http://www.dickshovel.com/d... ; "Hell's Angels" by Hunter S. Thompson; "Mindhunter" by John Douglas; http://www.greenisthenewred... ; and many other similar online articles.) It seems to me that highly rational people would be able to quickly figure out how to make the Libertarian Party far more effective, given some interaction with it.
Also, they might actually see how technological solutions could solve the problems inherent in the LP. The LP now has easy ballot access in all 50 states, so the controllers of the LP are forced to burn a lot of money and hire anti-liberty idiots to do the minimum necessary ballot-access work, lest libertarian activists achieve "accidental success" by "double circulating" to put state-legislative candidates on the ballot simultaneously with the statewide (usually delusional) candidates. Those few candidates who are not delusional are starting to realize that they should run for state legislature.
Moreover, the LP has low "barriers to entry." Sure, it's a bunch of do-nothings, but being involved with it shows one how access to the ballot is gained, how skewed the ballot-access laws are against minor parties, and what means minor parties have at their disposal. (Because the media is largely conformist and controlled, and has been trained by the major parties, controversy is necessary to gain media attention. But it's better to simply count on receiving no media attention and to expend all energies on door-to-door efforts, which can be paid for, etc.)
There are a lot of attempts to make the world more intelligent and more friendly, but the only tried and true way is to make the world more free. This is a known quantity; it's what made America smarter and better than the Soviet Union and Communist China of the 1950s, and better than Germany from 1934-1945. Complaining about America now only makes sense out of that context, which shows that political situations aren't hopeless; they are just perversely incentivized when smart people have "better things to do." It's far easier to attend a Libertarian Party meeting and figure out what's going on than to attend a Republican or Democratic meeting and figure out what's going on. The Libertarian meeting is like a "sandbox" on Wikipedia, and there may actually be some people there who are smart enough to clearly see the problems in terms of both strategy and philosophy.
I'm not suggesting that building friendly AI should be abandoned: I think it's a great idea, and I think it will make the world a far more benevolent place. But I also think there's a lot to be learned about benevolence from actually directly engaging with the systems that currently determine the "friendliness" level in the current system.
I'm also not suggesting that anyone who is a productive AI programmer stop what they're doing to waste tons of time "taking over" a libertarian party, and running it the right way. However, I'm suggesting that a few hours per week would be enough for an intelligent person to figure out what's going on, and that such information might make a huge difference when it's time to instantiate a hundred thousand "political robots," or when politics dictates that one is finally forced to "get involved" (due to an American "Kristallnacht" or something similar.)
Many of the political "strategy" comments I read in places like this and LessWrong are very uninformed, to the point where I might call them stupid or "unwittingly self-destructive." However, the people making such comments are often very smart. (Eliezer's comments at Cato strike me as highly intelligent, and they make me wonder if he'd be good at strategic political engagement.)
My criticism is not of the people; it's of their comprehension of politics, which is due solely to their lack of experience in politics. I'm also not suggesting that politics is anything less than the shitpile it appears to be, but I am suggesting that it might be possible to spray some sort of bacteria on that shitpile that makes it rapidly disappear. Granted, that's an "outside the box" analogy, but it beats trying to pick up the shitpile and move it, then falling into it and getting your good clothes messed up. To extend the analogy: if you never see the shitpile, you can't even estimate what sort of interaction with it might accomplish one's goal of "getting it to go away."
The western world owes modern jury trials to John Lilburne, AKA "Freeborn John." Jury trials were already a feature of the common law, and they were used to reduce tolerance of arbitrary punishment for "speech" in 1670 (Penn) and 1735 (Zenger). In 1702, Nicholas Bayard was sentenced to death for speech, using a Dutch jury that couldn't speak English (the conviction was later overturned). Since that time, juries have been both successfully controlled by the police-state sociopaths (judges, cops, prosecutors) and unsuccessfully controlled, or "independent." However, the attempt to control the jury is always there.
The Libertarian Party's best-educated activists understand that "freedom vs. tyranny" ultimately always plays out in the courts, and that legislative attempts don't generally expand freedom; at best they can only slow its erosion, and only then when they are totally optimal and elections are won.
But right now, the Libertarian Party doesn't even do the things its institutional memory indicates it should be doing. It doesn't, for example, replicate Randolph's victory in Alaska. It doesn't build outward (winnable races) instead of upward (unwinnable races). It doesn't use libertarian initiative and referendum (I&R) to offset the cost of ballot access. It allows one man on the LNC to be a failure point for the entire party. It allows fatal ideas to be promoted as if they were viable at various meetings and events. Etc.
The Libertarian Party also wastes a lot of money hiring Democrats to do its work. Such people then fail to advertise properly for the LP, and shrink rather than grow support. Longtime activists such as myself are marginalized, intentionally.
It could all be a lot smarter, and more effective.
Also, it could be a lot smarter technologically. The LP is using ten-year-old technology. Libertarian activists often don't have useful communication networks, and can't disseminate vital information about why campaigns failed or succeeded, or how infiltration was used to defeat an entire state party. There is still no intelligent grassroots software for "taking territory" that corresponds to a politically subdivided map of the USA (no measurement-and-feedback incremental-improvement plan).
The D and R parties are pretty well controlled by totalitarians. I still think that the LP could be used to outflank them.
If, tomorrow, every brain at LessWrong, this site, Cato, and Reason simply began working intelligently (without treating failure as a viable option) a few hours per week at civic engagement, the Libertarian Party (or its viable replacement) would sweep every election.
Because we're not yet at the point where it's certain death to oppose the government, and because the historical record of worldwide democide shows where such trends end, I think this is a good idea.
This is an assertion, supported by few hard facts, but my estimation of the importance of multiple relevant variables makes me believe I'm right. Also: If you actually go out and engage, you may find that simply being among the people who call themselves libertarians will bring the problems inherent in my thinking to light. Then, at least, there would be a strong basis for saying "Jake Witmer is full of shit, he totally gets it wrong, and here's why: _____."
I could never go to a LessWrong meetup and feel like I had anything to contribute. Like Temple Grandin, I find that my algebra skills aren't what they should be. That said, the lowest and most inconsistent brick mason should be able to support his own individual freedom, or the system itself is designed incorrectly.
A lot of people here claim to be libertarians. It does no good to claim that from inside a death camp, or inside a giant Federal Reserve plantation, or even inside a semi-free country where one is forced to accept degrading stupidity like being a party to tax-financed drug prohibition.
We'll never be better than that if we don't fight that, because right now, brutality is paying a lot of sociopaths' bills.
So, how hard is it to fight clear tyranny?
I pretty much guarantee that most of you on this board don't know the answer to that question. Have you ever walked door-to-door with a political cause you're advocating, and seen how many people you could seemingly convince? (It doesn't matter what that cause is, just smile at people, interact with them, and see who you can convince.) I've found that I can convince or "accurately place on a scale of 1-5" about 50 in one evening (this is all that is necessary to intelligently allocate further political resources). At this rate, given a worthy libertarian cause, there are many areas of the country where I would achieve some measure of political success after 300 or so evenings. This is what allows dedicated activists to slowly gain notoriety, and "take territory."
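To make the back-of-the-envelope arithmetic above explicit, here is a minimal sketch. The ~50 contacts per evening and ~300 evenings come from the text; the size of a "winnable" race is a purely hypothetical placeholder:

```python
# Canvassing arithmetic from the figures above.
# contacts_per_evening and evenings are stated in the text;
# votes_needed is a hypothetical figure for illustration only.

contacts_per_evening = 50   # voters convinced or scored 1-5 per evening (from text)
evenings = 300              # evenings of door-to-door effort (from text)

total_contacts = contacts_per_evening * evenings
print(total_contacts)       # 15000 voters personally contacted and scored

# If a winnable local race turns on roughly 5,000 votes (hypothetical),
# one dedicated canvasser can score about three times that many voters.
votes_needed = 5000
print(total_contacts / votes_needed)  # 3.0
```

The point of scoring contacts on a 1-5 scale, as described above, is that the resulting list tells you exactly where to allocate follow-up resources.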
Your local calculations might be different, and your experience might range from interesting or "good" to "horrible." That said, even if you didn't learn anything about politics and had a horrible experience, you'd learn to what extent your future involvement in it is even possible, and you might also be able to see and profile other people who, like yourself, had no hope of influencing local elections.
Right now, the Libertarian Party isn't even as intelligent as a mid-sized corporation. Its members don't want freedom as badly as paycheck-incentivized employees want one year of wages at around $20/hour. (This can actually be calculated once you know how much they spend per signature on ballot access, and once you know how much their candidates have spent per vote.) To me, if we lose all freedom under those conditions, we are proving that we are barely deserving of being called moral humans. After all, we are at the stage in the decline of a republic that precedes our American "Kristallnacht." They haven't taken away our guns, nor totally eliminated jury trials. (Jury trials are just rigged by "voir dire.")
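The calculation hinted at above is straightforward once the inputs are known. A sketch follows; only the $20/hour wage benchmark comes from the text, and every campaign figure below is a hypothetical placeholder:

```python
# Comparing what the party effectively "pays" per vote and per signature
# against one year of wages at $20/hour (the benchmark in the text).
# All campaign spending and vote/signature totals are hypothetical.

wage_per_hour = 20                        # from the text
year_of_wages = wage_per_hour * 40 * 50   # 40 hrs/week, 50 weeks
print(year_of_wages)                      # 40000 dollars

campaign_spend = 100_000   # hypothetical total candidate spending
votes_received = 8_000     # hypothetical vote total
print(campaign_spend / votes_received)    # 12.5 dollars per vote

signature_budget = 50_000  # hypothetical ballot-access budget
signatures = 25_000        # hypothetical signatures gathered
print(signature_budget / signatures)      # 2.0 dollars per signature
```

With real figures from party filings in place of the placeholders, the comparison the text describes (how badly members "want" freedom versus how badly an employee wants a year of wages) becomes a simple division.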
In any event, that's a call for more political engagement, even if that means less philosophical discussion of politics among those who are "already in agreement" or "diametrically opposed." (Preaching to the converted, or the impossible-to-convert.)
What do you call Australia?
Mass marketing such a product would be heavily restricted by health care laws. There's simply no way to get it out to a large audience (60 million is 20% of the US) if a licensed surgeon is legally required to perform the procedure in person. I expect brain ems to be in the prototyping stages by the time that gets worked out, which will quickly overshadow it.
I want to know how probable it is that they will implement hell if we don't stop them by force first.
I think it's safe to say that most people who do not belong to a religious denomination with a hell (or at least the ones who would visit this blog; I doubt there are many Buddhists or animists here) do not belong to such a denomination in large part precisely because of their personal aversion to the concept of hell. Of course, you can never be too sure... Then again, maybe Hedonic Treader does want to know how common it is for religious people to rationalize a cruel god.
I think the unspoken rationale for asking about religious people is "Religious people believe in Hell either because they're cruel themselves, or because they are rationalizing away a cruel God. If I ask them if they would implement a Hell without God, that should show that either they are cruel, if they want to, or that they are rationalizing, if they don't." It would not even occur to someone who thinks that way, to ask if non-religious people would implement a Hell.
I'm confused about why your question pertained specifically to religious people. I would find it (mildly) interesting to know whether those who believe in a real hell are more or less disposed to believe in an artificial one.
[It would only be interesting if there's a big difference. Religious people are more likely to think hell is (was) a good idea, but nonreligious are more likely to think we need one (because we don't have one already).]
I know of it, but haven't read it yet. Tad Williams's Otherland series has some similar themes (the ethics of world-building technology).
I'm specifically interested in a predictive model of which humans would do what if they had such technology.
Even more precisely, I'm interested in raising general moral interest in such a model.
Retribution is pointless, especially if it's eternal; it's just a waste of resources. Deterrence and reform can actually be useful, and the emotion of wanting retribution probably evolved only to compel social organisms to perform actions that contribute to deterrence and reform.
There might be some small advantages in the first life (small because most people don't need this stuff to be okayish human beings), but it would be cruel and unusual punishment in the second life for those who fail the test. And the importance of the quality of the first life will of course diminish once people know for sure there is a much longer second life (plus murder would become effectively a transfer of locale rather than the end of life). In fact, I'm not sure what the point of still making babies in the first life would be.
You'd also run into all the age-old philosophical knots like "doesn't the authority presiding over heaven and hell deserve to go to hell him/her/itself for allowing personalities doomed to go to hell to come into existence?"
You know that's the plot of Iain M. Banks's Surface Detail, right?
I think there is use to a selective reward philosophy, that is, selfish agents might be incentivized to act prosocially if they can expect - or just suspect - to be rewarded selectively for it.
Same for punishment, except that punishment is harm while reward is benefit.
A problem is indeed that the rules shouldn't be dumb, which is all the more reason to focus on reward rather than punishment - at least, if the rules are dumb, someone still gets a benefit from the existence of reward, while no one is harmed by unreasonable punishment.
I think the desire for heaven/hell implementation is ethically relevant because there is a small probability that posthumanity can actually achieve the technology for it, and knowledge of humanity's preferences about what to do with it matters in deciding whether, and to whom, to give power, and how much, all else equal.
Maybe a little later than that, but: throw away those primitive smartphones, smart watches, Google Glass, etc., and instead use (locally wireless) electronics-to-brain connections, thus enabling (via networking or similar) practical high-speed mind-to-machine and mind-to-mind communications. Share your thoughts, literally, with another person. And not just your conscious thoughts. Record (or supplement) your dreams, or even share your thoughts, dreams, and -- how shall I put this -- "sensory" experiences (ahem) with another person (or a group of Facebook-like friends). Participate in "social media," or watch movies, or play online games, or blog, etc., all while you are, quite literally, asleep. Or awake. Or in between, if you prefer. Oh, and if you think your privacy has been disappearing? Well, after you get that minor little brain implant (and hey, maybe you won't even need one, if a way is found to read from and write to enough of your neurons via a purely external transducer), you'll never feel alone again. Anywhere. Anytime. Ever. Or maybe that won't happen at all. But I wouldn't bet against it.
Do you have a tentative release date for the popular-audience book?
Like most people, whether religious or not, I am in favour of implementing a version of purgatory. Specifically, one of the major purposes of imprisonment is retribution. Admittedly, this is not normally how prisons are described, but I think it fits within the general ambit of your comment. I am not hubristic enough to think I could implement a heaven.
Thanks for the reply -
Yeah - so your view, that these differences represent different kinds of signalling rather than levels, is what I understood to be the common understanding. And I certainly don't think there aren't different forms of signalling - but I wonder if the overall story wouldn't benefit from the introduction of contexts that have variant costs.
One reason it might is that you then don't have to posit signalling strategies that matter (are worth it) in contexts where, intuitively, they don't seem to matter at all.
The example that comes to mind is Robin's discussion of why nerds like games:
And the last suggestion that he makes is that nerds enjoy games in order to signal their selfishness and competitiveness in real life. What strikes me about this suggestion is the way it posits a high-stakes signalling explanation for a social context which is much more intuitively understood as low-stakes.
The value of such a low-signal context would be not in signalling to your intimate peers that you're capable of selfishness - but in having a low-cost environment that allows you to practice/simulate these behaviours so you can later deploy them in the real world.
Think of a professional tennis player who has a practice partner (also a professional). They practice together not to signal to each other that they are going to win in real competition, but simply to get a chance to hone their skills in a context where failure isn't going to cost them (as it would in a tournament).
This just strikes me as a much more plausible and intuitive explanation - one that is made possible by allowing for low-cost contexts.
This doesn't mean you could always avoid positing the existence of covert/unconscious signals in various contexts (in fact, wouldn't you expect to see more covert/unconscious signalling in high-cost signalling contexts?). But the principle of parsimony does suggest to me that where we can avoid positing these sorts of covert signalling behaviours, we should.