46 Comments

Reminds me of being in seminar break-out group at Cornell's School of Industrial and Labor Relations. Faced with a stack of Tinker Toys(TM), a time limit, and a model, one of the men in my group made assignments and we started assembly. After a round or two, I said, "I can't do this, let's break it down this way." My group was the only one to use up ALL the Tinker Toys(TM) with lots of time left over for discussion. Only one of the observers (also a woman) - watching from the larger class - brought up the point of my courage in calling out the flaw in the assembly line - the one 'correction' that guaranteed our extraordinary success.

@f26939f398e5b2e21ea353b06370c426:disqus Probably the Kochs' "ideology" is just whatever happens to be favorable to their financial interests.

TGGP:

A lot of the Koch-critical arguments I heard from Cato types actually depicted it as a split over tactics rather than ideology.

Isn't that because no one quite knows what the Kochs think ideologically?

But their ideological nebulousness invites the question of what they're hiding. To me, the Objectivist-neocon axis points to militarism, which is an inconvenient position among libertarian intellectuals.

A lot of the Koch-critical arguments I heard from Cato types actually depicted it as a split over tactics rather than ideology. Cato thinks they are playing a "long game" by changing ideas. The Koch brothers (who admittedly had been alienated from Ed Crane for inscrutable reasons for years) decided that Tea Party politicians had more immediate prospects for success and also decided that it was important to defeat Obama. Ed Crane's replacement, John Allison, is himself an Objectivist and has said some things on foreign policy disturbing to others at Cato. The end result of the Kochs stacking the board may have been a shift toward neoconservatism, but I don't know if the Kochs are particularly interested in that angle. Neoconservatives are dominant within the Republican party, and the Republican party is most closely aligned with their main priorities (including a favorable tax/regulatory regime for Koch Industries).

@f26939f398e5b2e21ea353b06370c426:disqus According to Wikipedia: http://en.wikipedia.org/wik...

Typical libertarians, notably the Cato Institute, have traditionally been critical of Objectivism until very recent years. I wonder whether the Kochs have anything to do with this change of position.

dmytryl:

Yet, they seem to take themselves very seriously, some people dropping out of school and moving to the guru, to work on friendly AI.

You probably have better information than I do. My impression was that this circle is egoistic. No self-sacrifice for these guys and gals! Truly Objectivism 2.0 (VV.) Yudkowsky reported that he draws a salary of $140,000 (so he can live in a spacious home in Berkeley). His digs serve la causa. Muehlhauser doesn't work for free, and the chronology doesn't suggest that the "executive director" position he holds was what motivated him to leave school when his degree was nigh.

I would think the norms outwardly governing the leaders would set the style for the followers. That many are now working to promote themselves commercially as rationality trainers also suggests that this isn't about self-sacrifice for the AI. 

But I have no inside knowledge, and since I'm not a software engineer, their thought forms simply seem alien. (As often do yours, although I often agree with you. :))

As to whether a different kind of nutcase is likely to join them, I'd have to say I don't really have an opinion and am not really concerned. I really don't think one should have to moderate one's views because of how nutcases might construe them. 

TGGP: The Koch brothers' Ayn Rand-derived militarism became most apparent in the struggle for control of the Cato Institute, where they sought to appoint some neocons to the board. (Also consider their donations in 2008 and 2012 to the Romney campaign--while Thiel was supporting Paul.)

Given that some of the people the Koch brothers want to put on CATO's board are firmly in the neocon camp, one doesn't have to be a rocket scientist to see that a Koch-controlled CATO Institute is one big win for unbridled militarism and one big setback for those advocating small government and individual liberty. - http://www.huffingtonpost.c...

See also (from libertarians):

http://kochvcato.com/2012/0...

and

http://www.richardcyoung.co...

I had not heard the Kochs are militarist hawks. I don't know if they've said much about foreign policy. I know one explicitly left the LP for the GOP, but said it was because the LP insisted on a platform of abolishing taxation.

Make no mistake, both Yudkowsky and Hanson are very very dangerous.

No way! Everyone knows that the Koch brothers wouldn't know a political plot from a plot of grass and that their hearts are with the "47%." And that Peter Thiel is deeply concerned that his vast wealth have no disproportionate influence on American political and intellectual life.

Everyone knows, after all, that the powerful use outwardly powerful people to be their pawns. (And that reality is exactly as it seems to the utterly naive.)

Make no mistake, both Yudkowsky and Hanson are very very dangerous. But let's not forget Nick Bostrom! This outwardly mild-mannered, genial fellow is in fact every bit as dangerous, and no doubt a devious plotter.

>But was Goertzel actually mailed threats?

I recall him complaining about that somewhere on the OpenCog pages. I don't think he was mailed threats in paper mail, no. Probably just general green-ink stuff about how he would kill everyone and how he must stop; maybe it's unfair to describe that as threats. In any case, presently they simply believe that AI researchers are too dull.

Originally he stated pointedly that in any conflict between humanity and the AI, his sympathies lie with the AI! This kind of 180-degree turn doesn't suggest seriousness--nor does the ability of his followers to ignore the switch suggest that they're serious. (Which is, of course, fortunate.)

This only suggests that they ought not to take themselves and each other seriously. Yet, they seem to take themselves very seriously, some people dropping out of school and moving to the guru, to work on friendly AI. Also, when you think you're the brightest AI researcher, you think that others' sympathies would lie with the AI, and that they would be unable to change their minds.

edit: The question is, actually: do you think extremists would be put off from joining by something? I mean, consider the evolution of today's potential Unabomber. His views gradually get more extreme and he gets crazier. Wouldn't such a person discover this club and join it (perhaps not sharing the optimism about the FAI)?

dmytryl:

Back in the day he'd talk about how Novamente would kill everyone, and no one mailed Ben Goertzel a bomb (some threats but not a bomb).

As crazy and reactionary as this (correctly termed) doomsday cult is, I wouldn't imply (in today's climate) that it even potentially inspires terrorist acts based only on the kind of stupid, tasteless talk that's been cited. But was Goertzel actually mailed threats? That would indeed suggest a level of dangerous seriousness that I hadn't foreseen. Do you recall from what source you learned of the threatening letters?

Yudkowsky himself hasn't even been consistent about whose side he's on. Originally he stated pointedly that in any conflict between humanity and the AI, his sympathies lie with the AI! This kind of 180-degree turn doesn't suggest seriousness--nor does the ability of his followers to ignore the switch suggest that they're serious. (Which is, of course, fortunate.) Even in a wholly imaginary war, someone who doesn't know what side he's on isn't a person to be taken seriously, except in his capacity to divert and confuse.

VV: I remember the one about stopping Moore's law. The original version used the words 'we' and 'sabotage' rather than 'singleton world government' and 'military action'. Either way, it takes significant insanity to put time into this work. Also, keep in mind that these folks are having meetings offline, and that as of now Yudkowsky is simply sure that other AI researchers are too dull, something that cannot really last long as the other AI researchers make software that does pretty awesome things, in ways that are not only difficult to comprehend but even difficult to deceive yourself into thinking you understood. The 'Novamente is going to kill us all' incident is rather alarming as an example. And the rage is dangerous when there are insane acolytes around.

Mother nature is part of the male high-status conspiracy: https://www.nationalreview....

"It is a curious scientific fact (explained in evolutionary biology by the Trivers-Willard hypothesis — Willard, notice) that high-status animals tend to have more male offspring than female offspring, which holds true across many species, from red deer to mink to Homo sap. The offspring of rich families are statistically biased in favor of sons — the children of the general population are 51 percent male and 49 percent female, but the children of the Forbes billionaire list are 60 percent male."
