Economist turned politician Andrew Leigh writes on “Five science breakthroughs that will transform politics”: In this article, I’ve focused on ideas that are just over the horizon for most of us. So green roofs, LED lights, genetically modified crops, 3D printers and geo-engineering are important, but improvements are likely to be steady rather than seismic. Instead, I’ve chosen “disruptive ideas” that could radically affect the way our society operates. …
1. Driverless electric cars …
2. Space elevators …
3. Nanotechnology …
4. Ubiquitous Data …
5. Machine Intelligence …
Vern Ehlers was a physicist who represented Michigan from 1993 until 2011. The arrogance and ignorance of people in other countries about the "stupidity" of Americans never ceases to amuse me.
Thanks Robin. I loved reading your materials on ems, and your comments about the speech are spot on. Aside from the fact that the policy implications of ems are really tough, they're also politically controversial. I had some more radical policy ideas in an earlier draft, but scrapped them because I didn't want to attract tabloid silliness ('Slavery should be reinstated, MP says'). One of the striking things about politics compared with academia is that the rewards to producing interesting ideas in politics are not always positive...
On another topic entirely, entering politics has gotten me considerably more interested in your work on bias and error. In his EconTalk interview on antifragility, Nassim Taleb argued that politics is particularly bad at admitting and correcting mistakes, and I'm keen to think hard about how to correct this.
Why would anyone hire people when enhanced Ems can outperform any human? We are, as a whole, ambitious. We want to improve our position. Our Ems would be infused with this trait and do the same, and wouldn't need humans slowing them down. Unemployment runs wild. By the time politicians realize this and try to rein in the Ems it'll be too late. The Ems will fight back. They are, after all, only human. Sort of.
If you've ever read the novel Dune by Frank Herbert, their history includes the Butlerian Jihad, which is a war between people and machines. Humans eventually won and came up with a new commandment: "Thou shalt not make a machine in the likeness of a human mind." Upon penalty of death.
I don't really think the Dune scenario is likely, though. I don't see how humans could win that war.
After spending years getting expert in thinking about good policy for this our industrial era, Leigh can see that ems are a whole new era where policy must be re-thought, starting back from basics. He doesn’t want to do that – he’d rather build on the expertise he has acquired to attack our many important industrial era problems.

Robin, I can't help but wonder if this sort of attitude explains some of your hostility to the SIAI. If they are successful in creating a powerful FAI singleton it would lead to an era where your kind of economic analysis was less useful. In particular it would be an era where, rather than creating institutions that channel human self interest into socially productive channels, we simply gave control over to creatures that directly desired to be socially productive, creatures that don't need to be manipulated into constructive actions by social institutions. It would make a lot of current economic policy thinking obsolete.
By suggesting this I don't mean to undermine your criticisms of the SIAI; I find some of them, such as your criticism of the AI Foom concept, extremely thought provoking. I just think it's something to consider.
"I’m much more convinced of enhanced human intelligence and strong AI (or one of those) occurring closer to 2030 as Kurzweil predicts."
This prediction depends on two crucial premises:
1) the realization of quantum computing;
2) the simultaneous survival of humanity AND of the specific socio-political and moral structures that label scientific and technological progress as valuable.
While I do not want to speculate about number 1, number 2 seems highly unlikely. We will either exterminate ourselves with ever more powerful nanoweapons, or, after the first nanoterrorist incident, the bioluddite movement that Kurzweil labels "shallow and powerless" will simply seize control and ban technological progress. Taking into account both our tendency to overreact to outlier events (9/11 and the Patriot Act, Vietnam and pacifism) and our increasing ability to monitor and control everything that happens anywhere, I think that option number 2 seems like the obvious and - to be honest - the best chance humanity gets.
Can anyone imagine someone who can talk this intelligently about science and its impact on society ever being elected to the US Congress?
Bill Foster comes to mind. He was a particle physicist at Fermilab for 22 years before representing Illinois's 14th congressional district from 2008 to 2011.
I don't think Leigh is being closed-minded here. His objections are valid, and Robin has no answer to them either. There's probably no perfect solution: making EMs (or copying them) will be banned, or EMs will be slaves, or the world will devolve into social Darwinism and end up like a dystopian sci-fi movie. Either way, someone will have their freedoms restricted. In two of the possibilities it would be all the EMs (millions, maybe even billions of them) or almost the entire human population who lose most of their freedoms; in one (the first) it's a handful of rich people who will be restricted from getting richer really, really, really fast (they'll still get richer really, really fast). So to me the choice seems very simple, but I'm sure <del>greedy selfish bastards</del> libertarians will find a way to disagree.
EMs as a serious policy issue?
Hanson expects the EM thing to occur sometime between, say, 2030 and 2100, right?
I'm much more convinced of enhanced human intelligence and strong AI (or one of those) occurring closer to 2030 as Kurzweil predicts. Say one of those occurs in 2040; that is still much sooner than an average EM possibility of 2060.
I'm biased in the sense that I think something like augmented human intelligence would have to come before EMs -- if they ever happen.
While it's fun to consider the morality of nanobots beginning to enter the public's brains in 2027, as Kurzweil said in Newsweek in 2007, even that discussion probably wouldn't amount to much until the 2020s.
Can anyone imagine someone who can talk this intelligently about science and its impact on society ever being elected to the US Congress? As an Aussie it makes me proud.
I am exploring some of these moral issues about EMs - making copies, turning them off, etc. - in my new novel Newton's Ark (available on Amazon). It's not a heavily philosophical treatment, more an attempt to have real characters confront these issues and deal with them.