This is our monthly place to discuss appropriate topics that haven’t appeared in recent posts.
One of the two reasons you list for pursuing a PhD was explicitly to signal (to gain credentials, as you call it). How important are your own motivations for education in shaping your theory of education as signaling? Do you think you are being unduly influenced by introspection?
Your friend Tyler seems to do this as well. For instance, a reader says he wants to spend more time consuming high-brow and intellectual stuff, but he is too tired or lazy and instead watches the TV show Friends. He considers this irrational. Tyler’s advice is that he doesn’t REALLY want to consume high-brow and intellectual things, and that the irrational desires should be enjoyed and cultivated. But I think this is because Tyler’s impulses drive him to consume things he feels good about consuming, and so he can’t empathize. Because Tyler has no impulses he dislikes, he is too dismissive of the existence of such impulses in others. In the same way, your lack of any non-signalling motivation for education makes you too quick to dismiss the existence of such motivations.
My simplified point is this: are you and Tyler such outliers in your human characteristics that introspection leads you astray in your theories of how humans work?
A (just for fun?) thought I had regarding the “what truth would one advocate covering up?” question from not too long ago:
Suppose someone discovers that physics is actually non-computable, but that the brain does not take advantage of this. Specifically, suppose this is discovered: “consciousness is computable, but full physics is not.”
Further, suppose that this non-computability can be leveraged to actually build a hypercomputer, a halting oracle: something that actually performs an infinite number of computation steps in finite time and space (and energy, and so on)…
The knowledge of how to build such a halting oracle, or even that it’s possible, is something that, near as I can determine, humanity in its current state should not have. At all. Not “except for well-regulated secret gov’t agencies,” but… no one. That’s power we shouldn’t have.
Simple proof: the “run all possible programs” program is relatively simple. Feeding it into such an oracle, given the assumption that consciousness is computable, would produce immense levels of suffering. Infinite levels, actually. (That is, an infinite number of conscious beings would exist inside the running program, given the assumptions above, right? And many of those worlds would be really bad ones.)
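The “run all possible programs” program really is simple: it’s the standard dovetailer, which interleaves execution so that every program eventually gets unboundedly many steps. A minimal sketch, where finite bit strings stand in for program encodings and `step` is a placeholder for one interpreter step (both names are illustrative, not from any real interpreter):

```python
from itertools import count

def all_programs():
    """Enumerate all finite bit strings in shortlex order --
    a stand-in for enumerating 'all possible programs'."""
    for n in count():
        length, index = 0, n
        # map n to the n-th bit string: '', '0', '1', '00', '01', ...
        while index >= (1 << length):
            index -= (1 << length)
            length += 1
        yield format(index, f'0{length}b') if length else ''

def dovetail(step, rounds):
    """Interleaved execution: in round k, start the k-th program and
    give one step to every program started so far. Every program
    eventually receives unboundedly many steps, even though there are
    infinitely many programs."""
    states = []              # (program, state) pairs started so far
    gen = all_programs()
    for _ in range(rounds):
        states.append((next(gen), 0))                   # start next program
        states = [(p, step(p, s)) for p, s in states]   # one step each
    return states
```

With an oracle that completes infinitely many such rounds, every program in the enumeration runs to completion, which is what the argument above relies on.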
Admittedly, I don’t at all expect this to be _likely_. That is, nothing I know of even hints that physics is genuinely non-computable. But as a thought experiment for “what truth would I imagine lying about/covering up/suppressing”…
Some have speculated that we are living in just such a “run all possible programs” program.
Indeed, I would run that machine solely to produce infinite suffering because it would be funny.
If the machine produces an indefinitely large number of worlds filled with suffering, why mightn’t it produce an indefinitely large number of worlds filled with happiness? If happiness is an integer counter in a program, it’s as easy to make the sign bit positive as negative…
Even if you assume there’s an infinite amount of suffering, why would it be a higher infinity than the infinity of happiness?
It’s irrational, yet it’s the standard response. People (at least modern liberal Westerners and ancient Buddhists) always prioritize eliminating suffering over increasing enjoyment.
Psy-Kosh: Why are you weighing suffering experienced in an existing computer more heavily than suffering experienced in a nonexisting computer?
James: I think I understand what you’re asking, and that’s where we get to the “There’s a bunch of stuff I am rather confused about with regards to consciousness.”
However, if we assume the Oracle device would _ADD_ additional beings, then the point stands.
Either way, from observation, it seems way too many of my perceptions are correlated with other aspects of my perceptions. More precisely, we’ve got the Born stats as opposed to experiencing each configuration with equal probability.
That suggests that it takes a little bit more than “all possible computations” if they’re not, in some sense, actually “computed”… But I concede that on this matter I’m confused. Especially since I’m invoking a hypothetical device that, by definition, transcends what we normally think of as computation.
I think the point only stands if the additional beings experienced more suffering than happiness (using some appropriate “conversion factor” between positive and negative emotional states)… if suffering is measured without regard to happiness, then killing everyone who isn’t orgasmically happy at all times is really the only morally correct course of action!
Further reflection on my ontology of mind led me to the conclusion that ‘reflective decision theory’ IS ‘algorithmic information theory’. I’m convinced they are one and the same: a stunning insight. In short, I’m convinced consciousness is reflective decision making via information integration.
I also came up with a good analogy:
I think universal values do exist in platonic space, but only direct conscious experience can get at them; consciousness is what provides a ‘symbolic map’ of the platonic value space. Intelligence, on the other hand, represents the ‘motive power’ of the agent, the enabling factor that lets an agent move rapidly through the goal space towards the destination. In short, I’m sure there can be no reflective decision making without consciousness.
I also want to suggest that the real cognitive strength of you high-IQ folks on this blog and other transhumanist blogs is not what you think it is. You may pride yourselves on your high intelligence, but I think your reflective consciousness is the greater power, and may ultimately prove more important for overcoming bias.
I am impressed with Jurgen Schmidhuber’s algorithmic information theory of beauty, as the beginnings of a possible basis for developing these general ideas.
Jurgen’s Ideas on Art/Beauty
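Schmidhuber’s proposal, roughly, is that interestingness tracks compression progress: an observation is rewarding to the extent that it lets the observer compress its experience better than before. A toy sketch of that idea, using `zlib` as a crude stand-in for the observer’s compressor (the function names are mine, and a general-purpose compressor is only a loose proxy for the learning compressor Schmidhuber actually describes):

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Length of data under a fixed, off-the-shelf compressor."""
    return len(zlib.compress(data, 9))

def compression_progress(history: bytes, observation: bytes) -> int:
    """Toy compression progress: how many bytes does compressing the
    observation TOGETHER with prior history save, versus compressing
    the two separately? Positive progress means the observation shares
    regularities with what the observer has already seen."""
    joint = compressed_size(history + observation)
    separate = compressed_size(history) + compressed_size(observation)
    return separate - joint
```

On this measure, an observation that echoes the structure of past experience scores high, while pure noise scores near zero, which at least gestures at why familiar-but-patterned stimuli can feel “beautiful” in Schmidhuber’s framework.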
So much is happening in the cognitive sciences/IT. Other technologies and science may have stalled, but there’s no denying the explosive ferment of ideas and insights still happening in IT; this is why I’m still predicting a near-term Singularity (within 24 years).
I see a common thread in your apt critique of Profs. Hanson and Cowen. I think both may underweight the benefits of paternalistic constraints because their self-control abilities meet their aspirations.
I think paternalistic constraints (to force subject matter literacy) are probably a strong not-always-transparent part of the appeal of Ph.D. programs for many applicants, which is separable from simply signalling credentials.
You mean like people for whom the military life appeals because it supplies discipline?
Referring to the paternalistic constraint of a PhD program, do you mean a PhD program is a mechanism for people to force themselves to study harder and learn more than they would on their own? If so, then I completely agree. In fact that would be one of the primary reasons I am in a PhD program.
Autodidacts like Robin, in contrast, don’t need such a mechanism, and so they discount it in their theories of education.
In the same vein, I think economics PhD programs attract many individuals whose personality types are very similar to rational actors, which makes them much more likely to accept that model as reasonably representative. It takes a person with a lot of self-control and discipline to make it through a PhD econ program, and to the extent that introspection causes us to find models reflecting our own personal characteristics more intuitively appealing, we would expect rational actor models to be overemphasized in economics. This would explain why economists don’t have as strong a gut dislike of rational actor models as, for instance, sociologists do… And yes, I am suggesting you can make it through a sociology program more easily than an econ program.
TGGP and ao, yes.
Has anyone ever noticed anything peculiar about the number 27?
Some facts that have struck me as a little odd:
(1) There are 27 bones in the human hand
(2) The orbital period of the moon is about 27 days
(3) There are far more references to the number ’27’ in pop culture than would reasonably be expected by chance
What’s going on here folks? Evidence of a simulation overlord/AGI/alien conspiracy? Nonsense? Or does anyone see some special mathematical import to the number 27?
I want to ask everyone and myself: In the unpredictably-near-or-far future, what level of supposed safety and minimization of risk will you accept before seriously using chemical nootropics and brain-enhancing surgery to increase your mental functions?
The brain is so much more complex than any other part of our anatomy. I’m wondering whether there will be a time in our lives when we will be willing to make the decision to radically alter our brains on a hardware level, now that we’ve left the fetal stage and developed, in the hope of increasing cognitive functions or competing with those who do (assuming it achieves some critical mass in society).
No one even really understands “mental functions” anyway.
Will a “medical study” showing no side effects of some Hormone K (a Ted Chiang allusion) be enough?
Will people surviving a surgery and making more money be enough?
What about the unknown changes? Will you wait years and years for evidence on the effects of more subtle cognitive functions like creativity? Can you measure these? Will the stats mean anything?
I have great (and I’ll name it as it is here) faith in the benefits of technological advances, especially computer- and medicine-related ones. But personally, whether from cowardice or prudence, I am very hesitant to mess with my brain (though I do mess with my mind, introspection-wise), even with something as innocuous as pot, which I haven’t used, even though I’m pretty sure that casual use doesn’t do anything bad to you. But it might, and I might never notice. So I’m asking for personal, subjective levels of acceptable risk and acceptable ignorance for trying to physically (with chemicals and surgery) improve one’s cognitive abilities (language acquisition, working memory, memorization, visualization, associativity, creativity, quick-thinking, empathy, etc.).
One thing frequently mentioned on this blog is that health care spending is uncorrelated with health, and, therefore, health care isn’t effective at producing health.
Clearly, lowering interest rates doesn’t reduce unemployment. Just look at the correlations!
Wrong, Doug. We would expect spending to be correlated with health because the wealthy are healthier; what’s disputed is whether the difference in health care causes those health differences. The RAND experiment actually intervened to give people extra health insurance and compared the outcomes to a control population, in order to determine the effect of healthcare on the margin. Honestly, I’ve seen you commenting here before, but it’s almost as if you haven’t been reading.
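The confounding story here is easy to make concrete with a toy simulation (all numbers invented): let wealth raise both health and health-care spending, while spending has ZERO causal effect on health by construction. The two still come out strongly correlated, which is exactly why an intervention like the RAND experiment, rather than raw correlations, is needed:

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def simulate_confound(n=10_000, seed=0):
    """Wealth drives both variables; spending never enters the
    health equation, so any correlation is pure confounding."""
    rng = random.Random(seed)
    spending, health = [], []
    for _ in range(n):
        wealth = rng.gauss(0, 1)
        spending.append(wealth + rng.gauss(0, 1))  # richer -> spend more
        health.append(wealth + rng.gauss(0, 1))    # richer -> healthier
    return pearson(spending, health)
```

In this setup the true correlation is 0.5 despite a causal effect of exactly zero, so the observed spending–health correlation tells you nothing by itself about whether marginal care works.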
I looked at the Rand experiment, and, indeed, extra routine care didn’t seem to help. However, everyone who participated in the RAND experiment did, in fact, get some health insurance, even if it was only “catastrophic” coverage. And the level of expenses at which “catastrophic” coverage kicked in varied by income. Every person in the RAND study, even the “control group”, would have been able to pay for, say, a heart transplant if they happened to need one.
In other words, this wouldn’t have happened to anyone in the study.
In a recent study of back treatment vs. placebo, there was no difference in effectiveness.
I’m looking forward to getting confirmation that there are ‘universal values’. Remember, this was a core claim of mine going back to 2002, for which I’ve been ‘universally ridiculed’ on transhumanist lists ever since 😉
Now look at the ‘Less Wrong’ folks: they are talking about a ‘timeless decision theory’ (an analogue of decision theory applied to the platonic level of reality).
Firstly, the notion of a ‘timeless decision theory’ is oxymoronic; what exists on the platonic level is no more a ‘decision theory’ than mechanics is ‘timeless thermodynamics’ or algebra is ‘timeless statistics’. No.
Nonetheless, the basic idea of a timeless analogue of decision theory seems a good one (in fact, I implied it myself in the ontology matrix I once posted to this blog).
So what actually is this new theory? Well, it actually blurs the distinction between preferences and decision making, because it’s actually a theory of ‘dispositions’ rather than ‘probability’. And remember, it’s timeless (universally valid). You may as well just call it ‘platonic consequentialism’, or ‘universal morality’ for short. So there are universal values after all, it seems!
My congratulations to the ‘Less Wrong’ crowd for proving true what I’ve been claiming all these years 😀
Your description does not sound like what I read over there. The Smoking Lesion and Murder Lesion both take for granted the desirability of activities most readers personally don’t approve of. So it is subjective and depends on a unique utility function. The issue of disposition vs decision is distinct from one of determinism vs probability. I predict you will continue to be universally ridiculed.
It seems that the theory over there so far is still incomplete: a sort of ‘halfway house’ on the way to a possible proof of universal values. As I stated:
the notion of a ‘timeless decision theory’ is oxymoronic; what exists on the platonic level is no more a ‘decision theory’ than mechanics is ‘timeless thermodynamics’ or algebra is ‘timeless statistics’
If you carry their ideas further, you will see the distinction between preferences and decision making start to blur. Remember, the theory assumes a platonic (timeless) reality, and whatever is on the platonic level is by definition universal. If anything resembling a set of preferences over all minds appears on that level, then universal values will be proved.
Let’s see if I’m finally right about this, and ‘universal values’ are at last proven.
… be a charity angel.