It seems to me that the theory over there is still incomplete - a sort of 'halfway house' on the way to a possible proof of universal values. As I stated:

the notion of a ‘timeless decision theory’ is oxymoronic; what exists on the platonic level is no more a ‘decision theory’ than mechanics is ‘timeless thermodynamics’ or algebra is ‘timeless statistics’

If you carry their ideas further, you will see the distinction between preferences and decision making start to blur. Remember, the theory assumes a platonic (timeless) reality, and whatever is on the platonic level is by definition universal. If anything resembling a set of preferences over all minds appears on that level, then universal values will be proved.

Let's see if I'm finally right about this and at last 'universal values' are proven.


Your description does not sound like what I read over there. The Smoking Lesion and Murder Lesion both take for granted the desirability of activities most readers personally don't approve of. So it is subjective and depends on a unique utility function. The issue of disposition vs decision is distinct from one of determinism vs probability. I predict you will continue to be universally ridiculed.


I'm looking forward to getting confirmation that there are 'universal values'. Remember, this was a core claim of mine going back to 2002, for which I've been 'universally ridiculed' on transhumanist lists ever since ;)

Now look at the 'Less Wrong' folks: they are talking about a 'timeless decision theory' (an analogy of decision theory applied to the platonic level of reality).


Firstly, the notion of a 'timeless decision theory' is oxymoronic: what exists on the platonic level is no more a 'decision theory' than mechanics is 'timeless thermodynamics' or algebra is 'timeless statistics'. No.

Nonetheless, the basic idea of a timeless analogy to decision theory seems a good one (in fact I implied it myself in the ontology matrix I once posted to this blog).

So what actually is this new theory? Well, it actually blurs the distinction between preferences and decision making, because it's actually a theory of 'dispositions' rather than 'probability'. And remember, it's timeless (universally valid). You may as well just call it 'platonic consequentialism', or 'universal morality' for short. So there are universal values after all, it seems!

My congratulations to the 'Less Wrong' crowd for proving true what I've been claiming all these years :D


I looked at the RAND experiment, and, indeed, extra routine care didn't seem to help. However, everyone who participated in the RAND experiment did, in fact, get some health insurance, even if it was only "catastrophic" coverage. And the level of expenses at which "catastrophic" coverage kicked in varied by income. Every person in the RAND study, even the "control group", would have been able to pay for, say, a heart transplant if they happened to need one.

In other words, this wouldn't have happened to anyone in the study.


In a recent study of back treatment vs. placebo, there was no difference in effectiveness--



Wrong, Doug. We would expect spending to be correlated with health because the wealthy are healthier; what's disputed is whether the difference in health care causes those health differences. The RAND experiment actually intervened to give people extra health insurance and compared the outcomes to a control population, in order to determine the effect of healthcare on the margin. Honestly, I've seen you commenting here before, but it's almost as if you haven't been reading.


One thing frequently mentioned on this blog is that health care spending is uncorrelated with health, and, therefore, health care isn't effective at producing health.

Clearly, lowering interest rates doesn't reduce unemployment. Just look at the correlations!


I want to ask everyone and myself: In the unpredictably-near-or-far future, what level of supposed safety and minimization of risk will you accept before seriously using chemical nootropics and brain-enhancing surgery to increase your mental functions?

The brain is so much more complex than any other part of our anatomy. I'm wondering whether there will be a time in our lives when we will be willing to make the decision to radically alter our brains on a hardware level, now that we're past fetal development, in the hope of increasing cognitive functions or of competing with those who do (assuming it achieves some critical mass in society).

No one even really understands "mental functions" anyway.

Will a "medical study" showing no side-effects of some Hormone K (a Ted Chiang allusion) be enough?

Will people surviving a surgery and making more money be enough?

What about the unknown changes? Will you wait years and years for evidence on the effects of more subtle cognitive functions like creativity? Can you measure these? Will the stats mean anything?

I have great (and I'll name it as it is here) faith in the benefits of technological advances, especially computer- and medicine-related ones. But personally, whether from cowardice or prudence, I am very hesitant to mess with my brain (though I do mess with my mind, introspection-wise), even with something as innocuous as pot, which I haven't used, even though I'm pretty sure that casual use doesn't do anything bad to you. But it might, and I might never notice. So I'm asking for personal, subjective levels of acceptable risk and acceptable ignorance for trying to physically (with chemicals and surgery) improve one's cognitive abilities (language acquisition, working memory, memorization, visualization, associativity, creativity, quick thinking, empathy, etc.).


Has anyone ever noticed anything peculiar about the number 27?

Some facts that have struck me as a little odd:

(1) There are 27 bones in the human hand
(2) The orbital period of the moon is 27 days
(3) There are far more references to the number '27' in pop culture than would reasonably be expected by chance

What's going on here folks? Evidence of a simulation overlord/AGI/alien conspiracy? Nonsense? Or does anyone see some special mathematical import to the number 27?


It's irrational, yet it's the standard response. People - at least modern liberal Westerners and ancient Buddhists - always prioritize eliminating suffering over increasing enjoyment.


TGGP and ao, yes.


If the machine produces an indefinitely large number of worlds filled with suffering, why mightn't it produce an indefinitely large number of worlds filled with happiness? If happiness is an integer counter in a program, it's as easy to make the sign bit positive as negative...

Even if you assume there's an infinite amount of suffering, why would it be a higher infinity than the infinity of happiness?
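To make the symmetry concrete, here's a minimal sketch, in which the "welfare counter" and the toy `simulate_world` function are purely hypothetical illustrations (nothing here is from any actual simulation program): if welfare is tracked as a signed integer, negating it is a single cheap operation, so generating a happy world costs the machine no more than generating a suffering one.

```python
# Hypothetical toy model: per-world welfare as a signed integer counter.
# Flipping the sign is one operation, so suffering-worlds and
# happiness-worlds are equally cheap to produce.

def simulate_world(welfare_per_being: int, beings: int) -> int:
    """Return the total welfare of a simulated world (toy model)."""
    return welfare_per_being * beings

suffering_world = simulate_world(-1, 10**6)  # a million units of suffering
happiness_world = simulate_world(+1, 10**6)  # same cost, opposite sign

assert happiness_world == -suffering_world
```

The point of the sketch is only the symmetry: nothing in the arithmetic privileges the negative sign, so an appeal to "infinite suffering" needs some further argument for why the suffering side would dominate.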


I think the point only stands if the additional beings experienced more suffering than happiness (using some appropriate "conversion factor" between positive and negative emotional states)... if suffering is measured without regard to happiness, then killing everyone who isn't orgasmically happy at all times is really the only morally correct course of action!


Indeed, I would run that machine solely to produce infinite suffering because it would be funny.


You mean like people for whom the military life appeals because it supplies discipline?


Hopefully Anonymous,

Referring to the paternalistic constraint of a PhD program, do you mean a PhD program is a mechanism for people to force themselves to study harder and learn more than they would on their own? If so, then I completely agree. In fact that would be one of the primary reasons I am in a PhD program.

Autodidacts like Robin, in contrast, don't need such a mechanism, and so they discount it in their theories of education.

In the same vein, I think economics PhD programs attract many individuals who have personality types very similar to rational actors, which makes them much more likely to accept that model as reasonably representative. It takes a person with a lot of self-control and discipline to make it through a PhD econ program, and to the extent that our introspection causes us to find models that represent our own personal characteristics more intuitively appealing, we would expect rational actor models to be overemphasized in economics. This would explain why economists don't have as strong a gut dislike of rational actor models as, for instance, sociologists do.... And yes, I am suggesting you can make it through a sociology program more easily than an econ one.
