61 Comments

I'm curious if anyone has any pre-formed riff they could give on the relationship of instinct to general intelligence. That is, I would assume by Occam, that the brain is primarily a repetition of a basic simple design, yet there is clearly some method of encoding design patterns that translate into fairly specific predictable skills in the organism.

This would seem somewhat analogous to a kernel of friendliness controlling behavior for an AI as it passed from newborn to well-trained.

Is there any research on how well you can read off a person's IQ and personality traits from his or her appearance?

Thank you. The topic seems like ideal blog fodder; it would be pretty dense reading as a book. I do well to keep up with a few pages' worth a day.

Came aboard with the evolution topics a month or two back and had no idea what I'd missed or where it was all heading, so the Future Salon talk, the Singularity Summit audios, and forthcoming book chapters have helped bring me up to speed enough to put what I've read here in context.

Still think I need to go back and read your posts here from the start to catch up though.

Hopefully some of your question is answered by Knowability of Friendly AI, a (temporarily?) abandoned work-in-progress - my getting bogged down in this sort of document is why I now blog.

Relatively new to the forum and just watched the 2 1/2 hour Yudkowsky video on Google. Excellent talk that really helped frame some of the posts here for me, though the audience questions were generally a distraction. My biggest disappointment was that the one question that popped into my mind while watching, and was actually posed, went unanswered because it would have taken about five minutes. The man who asked was told to pose it again at the end of the talk, but did not.

This was the question about the friendly AI: "Why are you assuming it knows the outcome of its modifications?"

Any pointer to the answer would be much appreciated.

Robin, at the risk of sounding very ill-informed, I do not see all of the legal and economic barriers to creating decision markets of which you speak.

Maybe it would help if I tell you what I do see: a successful Hollywood Stock Exchange, heavy trading in a variety of stock derivatives, and now betting at Paddy Power on which CEO will be fired next. To the extremely untrained eye it looks like combining these is possible and would give the result you're looking for.

I certainly have not put in the time you have, I'm not sure if anyone else has, and I think a post highlighting the barriers you've faced and still face would be interesting.

Are there any useful interactive worksheets or online training programs for improving thinking, or calibrating your probability assessments? If not, maybe a Call for Volunteers to see if someone is willing to create an online training application for probability calibration. (Assuming we believe the claims that such training is useful; I have no particular insight whether these claims are true.)
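For anyone tempted by that Call for Volunteers, the scoring core of such a calibration trainer is tiny. Here is a minimal sketch using the Brier score, which rewards stating probabilities that match your actual hit rate; the function name and the example numbers are hypothetical, not from any existing tool.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    0.0 is perfect; always answering 0.5 scores 0.25, so doing worse
    than 0.25 means your confidence is actively miscalibrated.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: three trivia questions answered with 90%, 70%, and 60%
# confidence, of which the first two turned out to be correct.
print(round(brier_score([0.9, 0.7, 0.6], [1, 1, 0]), 3))  # -> 0.153
```

A real training application would mostly be question plumbing around a scoring rule like this one, plus a plot of stated confidence against observed frequency.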

"If I run the game of Life on my computer, does it really generate waste heat that it wouldn't have if I ran some cellular automaton with no self-organization?"

Does it matter? Organization->heat doesn't mean no_organization->less_heat. No matter what you use it for, your computer will generate vastly more waste heat than is thermodynamically necessary for what it's computing.
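A back-of-envelope check of that claim: Landauer's bound says erasing one bit must dissipate at least kT ln 2 of heat, so we can compare that thermodynamic floor with what a real machine puts out. The CPU figures below (50 W, 10^18 bit-erasures per second) are illustrative assumptions, not measurements.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, K

# Landauer's bound: minimum heat per bit erased, about 2.87e-21 J at 300 K.
landauer_joules_per_bit = k * T * math.log(2)

cpu_watts = 50.0                    # assumed desktop CPU dissipation
bit_erasures_per_second = 1e18      # assumed, generous for a desktop CPU
thermo_minimum_watts = landauer_joules_per_bit * bit_erasures_per_second

print(f"Landauer minimum: {thermo_minimum_watts:.2e} W vs actual {cpu_watts} W")
```

The minimum works out to a few milliwatts against tens of actual watts, so whether the program being run is self-organizing or not makes no practical difference to the heat coming off the chip.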

"self-organizing systems have to generate a little waste heat"

If I run the game of Life on my computer, does it really generate waste heat that it wouldn't have if I ran some cellular automaton with no self-organization?

There is a very accessible presentation of Pearl's theory of causality available here.

Unit, try this old writing of mine. Roughly, self-organizing systems have to generate a little waste heat, but that's all.

1. The ratio (value of one life) / (value of one dollar) may have very different values in (say) the US and Somalia, but I don't see why you should assume that it's only the numerator that varies.

2. Typical human lives in very poor countries are arguably much worse than typical human lives in rich ones: they're liable to be shorter, less enjoyable, less productive of things that other people value, and so on. It's somewhat taboo to say that this means those lives are "less valuable", but I think the taboo is mostly the result of sloppy thinking. (Note that "this person's life is less valuable than that person's" and "this person's interests count for less than that person's" are entirely different propositions.)

3. In very poor countries, quality and length of life are often very badly affected by things that could be fixed cheaply (measuring cost in dollars). You could save, or extend, or improve, many many lives in Somalia for $10000. Not so many in the USA.
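Point 3 can be made concrete with round, made-up numbers: the same budget buys very different numbers of lives depending on the local cost of an intervention. Both cost figures below are assumptions for the sake of the arithmetic, not data.

```python
budget = 10_000  # dollars

# Assumed cost to save one life via the cheapest available intervention.
cost_per_life_saved = {
    "Somalia (cheap interventions)": 1_000,
    "USA (marginal medical care)": 1_000_000,
}

for place, cost in cost_per_life_saved.items():
    print(f"{place}: about {budget / cost:g} lives per ${budget:,}")
```

Under these assumptions the identical $10,000 saves ten lives in one place and one-hundredth of a life in the other, which is the whole force of the point.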

Steven: who's "we"? Empirically, I think those studies mean third-worlders value their own lives less than first-worlders value their own, at least monetarily (presumably the third-worlders value a given amount of money more than the first-worlders). You can bet that both groups value foreign lives considerably less (and are scope-insensitive about them). Normatively, it would seem we should value a person's life as much as that person does, which supports #3. This isn't as repugnant as it sounds, both because the difference in real value is less (possibly much less) when you consider the differing utility of money, and because a third-worlder genuinely can expect fewer QALYs than a first-worlder. However, in practice #1 may be better, at least because advocating #3 (a) sounds evil to most people and (b) could genuinely lead to evil behavior in people just looking for an excuse to assign third-world lives zero or near-zero value.

(See also this comment by Michael Vassar, saying "we should value a particular human life at the lower of total preference for the continuation of that life and replacement cost for that life" and "because economical thinking is confusing or corrupting to people below a very high IQ threshold, we maintain a convenient fiction of infinite value.")

Here's a riddle that's been bugging me. If I understand correctly, economists have some different methods they use to calculate how much people value a human life, and in the western world that ends up being several million dollars. If you did the same analysis in the third world, you would probably get a much lower number. So do we 1) value both western and third-world lives at the western amount? 2) value both western and third-world lives at the third-world amount (or something in between)?, or 3) value western lives at a much greater amount than third-world lives, so the life of 1 westerner is worth the lives of N third-worlders? 2 seems absurd and 3 seems morally wrong, so we're left with 1, valuing both western and third-world lives at the western amount. But in a wealthier future, we will probably value human lives at a much greater amount of money still. Does that mean we should value today's lives at future amounts of money (billions, say)? That doesn't seem feasible either.

I've probably mixed up "is" and "ought" a bit, and I suppose I could have added 4) stop attempting to think rationally about money/lives tradeoffs... but I hope you can see the riddle here.
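For concreteness, one standard method behind the several-million-dollar figure is the wage-risk ("revealed preference") calculation: if workers demand an extra wage premium to accept a small extra annual fatality risk, the implied value of a statistical life is the premium divided by the risk increment. The numbers below are illustrative, not from any particular study, and they show why the same method yields wildly different figures in rich and poor countries.

```python
def value_of_statistical_life(wage_premium, extra_annual_risk):
    """Implied VSL from a wage-risk tradeoff: dollars per unit of risk."""
    return wage_premium / extra_annual_risk

# Assumed: a rich-country worker demands $700/year for an extra
# 1-in-10,000 annual fatality risk; a poor-country worker demands $20.
vsl_rich = value_of_statistical_life(700, 1 / 10_000)  # about $7,000,000
vsl_poor = value_of_statistical_life(20, 1 / 10_000)   # about $200,000

print(f"rich: ${vsl_rich:,.0f}  poor: ${vsl_poor:,.0f}")
```

The divergence comes entirely from the wage premium people can afford to demand, which is the sense in which the measured "value of life" is partly a measurement of the value of a dollar.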

I have a question regarding causality, statistical inference, and confounding factors:

Is it reasonable to say that cigarette lighters cause cancer?

(If you know of a formal mathematical model/definition of causation, what answer does that formalization give?)
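In Pearl's framework the answer is no: smoking is a confounder that causes both lighter ownership and cancer, so P(cancer | lighter) is high while P(cancer | do(lighter)) equals the base rate. A toy simulation, with all the probabilities invented for illustration, shows the marginal association appearing and then vanishing once you condition on smoking status.

```python
import random

random.seed(0)

def person():
    """One simulated person: smoking causes both lighters and cancer;
    lighters themselves have no causal effect on cancer."""
    smoker = random.random() < 0.3
    lighter = random.random() < (0.9 if smoker else 0.05)
    cancer = random.random() < (0.20 if smoker else 0.01)
    return smoker, lighter, cancer

people = [person() for _ in range(100_000)]

def p_cancer(rows):
    rows = list(rows)
    return sum(c for _, _, c in rows) / len(rows)

# Marginally, lighter ownership strongly "predicts" cancer...
print("P(cancer | lighter)    =", round(p_cancer(r for r in people if r[1]), 3))
print("P(cancer | no lighter) =", round(p_cancer(r for r in people if not r[1]), 3))

# ...but among smokers alone the association disappears.
smokers = [r for r in people if r[0]]
print("P(cancer | lighter, smoker)    =", round(p_cancer(r for r in smokers if r[1]), 3))
print("P(cancer | no lighter, smoker) =", round(p_cancer(r for r in smokers if not r[1]), 3))
```

So under a formalization like Pearl's, lighters are associated with cancer but do not cause it: intervening to hand out lighters would not change anyone's cancer risk, because the arrow runs from smoking to both variables.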

Any chance of a post of (or a link to) a practical 'Newbie Guide to the Prediction Market Scene'? The reason for asking is that I'd like to start participating in a 'prediction market' later this month, but don't yet know where to start.
