49 Comments

Bias on TED: "Dan Gilbert: Exploring the frontiers of happiness"

For those wondering how to define or create intelligence:

http://www.sciam.com/articl...

Eliezer, can you give us a review of Wendell Wallach's recent book on machine ethics?

Most readers of this site are probably from the US or other English-speaking countries, and probably find most of the acronyms (which are heavily used here) rather obvious. For me it was not obvious, at least at first sight, that AFAICT means "as far as I can tell". So I've found this link, which I hope others who, like myself, are not native English speakers, will find useful:

http://www.acronymfinder.com/

Parallelism

The human mind is (at least in one way!) similar to a modern home computer. The computer can do "general" tasks only serially (or with a low level of parallelism if you have a dual/quad core), but is able to do graphics tasks with a high level of parallelism (GPUs have the equivalent of many hundreds of cores).

Geniuses often describe the process of reaching their discoveries visually. Daniel Tammet, the savant who could do large-number multiplication in his head, described the process as seeing two shapes, and then a third shape (the answer) emerging before his eyes. Einstein imagined riding on a beam of light.

By transforming the problem from words into pictures, they are transferring it from their CPU to their GPU, from the serial part of their brain to the parallel part, and exploiting the extra power there. Visual analogies may be an important tool geniuses use.
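The serial-vs-parallel distinction can be made concrete with a toy sketch. This is only an illustration of the *shape* of the two computations, not a real GPU program; the `brighten` function, the pixel values, and the thread pool are all my own invented example.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # Each pixel is independent of the others, so this map is
    # "embarrassingly parallel": the kind of work a GPU spreads
    # across hundreds of cores at once.
    return min(pixel + 50, 255)

pixels = [10, 200, 128, 255, 0]

# Serial version: one "core" walks the data element by element.
serial = [brighten(p) for p in pixels]

# Data-parallel version: the same map, fanned out to workers.
# (Threads in CPython only approximate true parallelism; the
# point here is the structure of the computation, not speed.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(brighten, pixels))

assert serial == parallel == [60, 250, 178, 255, 50]
```

The graphics-style task decomposes into independent per-element work, which is exactly what makes it parallelizable; the "general" serial loop and the parallel map compute the same answer.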

So my question is (obviously, since I am posting it on OB): What is a good visual analogy for thinking itself?

There is a fairly old meme floating around; I don't know where it originated, but I have recently seen it in open source circles, and long before that in science fiction: "It's amazing what one person can accomplish if they don't care who gets the credit."

Did the whole "hypocrisy" thread just get deleted?

random link: objectivists vs. subjectivists, fight! (not upper-case objectivists).

Friendly AI.

Why should we want it to be friendly? If humans could create something that would outlast our Sun, would that be worth the price of the destruction of other life on Earth now?

Thinking on that made me imagine an AI which was only concerned for its own survival, and increasing its control over more and more stars. A monster, which would kill everything in its path.

So how could you program in a sense of wonder, a delight in diversity, valuing life as a Good in Itself, in machine code?

@talisman:

"So I take it that you think Eliezer's predictions are right"

Yes; not that I'm an expert.

"... but will not be believed?"

So far, there seems to be little uptake.

@Joshua Fox re Cassandra:

So I take it that you think Eliezer's predictions are right but will not be believed?

Or are you misusing the reference?

Will - Pick the field that your ideas fit into. Estimate earnings increase due to PhD vs. cost of PhD in dollars and foregone salary.

I don't know how it works in the social sciences. In the "hard sciences", getting a PhD is of much less benefit if you don't go to an Ivy League school, since you're unlikely to get a job as a professor or a lab head. In computer science, people who don't get PhDs often (maybe usually) get paid more than people who do: they have more years of experience, they're more likely to learn high-paying skills (DB management, Java Enterprise) than interesting ones, and the field has a history of caring less about degrees.

I think the best way to do research is to start a clothing store or something equally mundane; run it for 20 years; retire; then do your research. (Good AI option: Get an MD in neurology, work 20 years, retire, work on AI.) It's very hard to do basic research in a research job. Grants are very application-oriented.
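The back-of-the-envelope estimate suggested above ("earnings increase vs. cost in dollars and foregone salary") can be sketched as a toy calculation. Every number below is hypothetical, invented purely for illustration; it ignores discounting, raises, and all non-monetary considerations.

```python
def phd_net_value(years_in_program, stipend, industry_salary,
                  post_phd_salary, working_years_after):
    """Rough net dollar value of a PhD vs. going straight to industry.

    All inputs are placeholders; plug in figures for your own field.
    """
    # Cost: salary you give up during the program, net of stipend.
    foregone = years_in_program * (industry_salary - stipend)
    # Benefit: the post-PhD salary premium over your working years.
    gain = working_years_after * (post_phd_salary - industry_salary)
    return gain - foregone

# Illustrative numbers only, not data about any real field:
net = phd_net_value(years_in_program=5, stipend=30_000,
                    industry_salary=90_000, post_phd_salary=110_000,
                    working_years_after=30)
print(net)  # 300000 under these made-up assumptions
```

With these made-up inputs the degree comes out ahead, but the sign flips easily: shrink the salary premium or shorten the remaining career and the foregone-salary term dominates, which is the commenter's point about computer science.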

"Weapons of math destruction" have come up before.

This is wishing you the best of luck, Will. (Sorry if my previous comment came off the wrong way.)

Eliezer - Noted. I figured as much from the outside looking in, but you might have thought of something I didn't.

Davis - I care very little about what people think of me in a personal sense. I'm actually a pretty weird guy. I just want my ideas, which I believe to be correct, to have the best chance of being taken seriously. There is a reason people get PhDs, after all; it has to signal something (plus I'd love to be a professor, for hedonic reasons). I can definitely learn a lot from whatever field I pick, but for the most part I am an autodidact, and that is not my only concern when it comes to choosing a program.
