Category Archives: Meta

Last Post Requests?

I’m strongly tempted to quit blogging for a while, to free up time for more ambitious projects. My two-year anniversary is coming up in a few weeks.  If I quit then, it would be nice to have some sense of completion.  Toward that end, are there any post topics you’d like to request?  Perhaps posts I once promised or at least suggested I might someday make?


Crisis of Faith

Followup to: Make an Extraordinary Effort, The Meditation on Curiosity, Avoiding Your Belief’s Real Weak Points

"It ain’t a true crisis of faith unless things could just as easily go either way."
        – Thor Shenkel

Many in this world retain beliefs whose flaws a ten-year-old could point out, if that ten-year-old were hearing the beliefs for the first time.  These are not subtle errors we are talking about.  They would be child’s play for an unattached mind to relinquish, if the skepticism of a ten-year-old were applied without evasion. As Premise Checker put it, "Had the idea of god not come along until the scientific age, only an exceptionally weird person would invent such an idea and pretend that it explained anything."

And yet skillful scientific specialists, even the major innovators of a field, even in this very day and age, do not apply that skepticism successfully.  Nobel laureate Robert Aumann, of Aumann’s Agreement Theorem, is an Orthodox Jew:  I feel reasonably confident in venturing that Aumann must, at one point or another, have questioned his faith.  And yet he did not doubt successfully.  We change our minds less often than we think.

This should scare you down to the marrow of your bones.  It means you can be a world-class scientist and conversant with Bayesian mathematics and still fail to reject a belief whose absurdity a fresh-eyed ten-year-old could see.  It shows the invincible defensive position which a belief can create for itself, if it has long festered in your mind.

What does it take to defeat an error which has built itself a fortress?

But by the time you know it is an error, it is already defeated.  The dilemma is not "How can I reject long-held false belief X?" but "How do I know if long-held belief X is false?"  Self-honesty is at its most fragile when we’re not sure which path is the righteous one.  And so the question becomes:

How can we create in ourselves a true crisis of faith, that could just as easily go either way?



Bay Area Meetup for Singularity Summit

Posted on behalf of Mike Howard:

This is a call for preferences on the proposed Bay Area meetup to coincide with the Singularity Summit on 24-25 October. It’s not just for Singularitarians; all aspiring rationalists are welcome. From the replies so far it’s likely to be in San Jose.

Eliezer, I, and probably most Summit attendees would really rather avoid the night between the Friday Workshop and Saturday Summit, so maybe either Saturday evening, or sometime Thursday or Sunday?

Please comment below or email me (cursor_loop 4t yahoo p0int com) if you might want to come, and if you have any preferences such as when and where you can come, when and where you’d prefer to come, and any recommendations for a particular place to go.  (Comments preferred to emails.) We need to pick a date ASAP before everyone books travel.


Singularity Summit 2008

FYI all:  The Singularity Summit 2008 is coming up, 9am-5pm October 25th, 2008 in San Jose, CA.  This is run by my host organization, the Singularity Institute.  Speakers this year include Vernor Vinge, Marvin Minsky, the CTO of Intel, and the chair of the X Prize Foundation.

Before anyone posts any angry comments: yes, the registration costs actual money this year.  The Singularity Institute has run free events before, and will run free events in the future.  But while past Singularity Summits have been media successes, they haven’t been fundraising successes up to this point.  So Tyler Emerson et al. are trying it a little differently.  TANSTAAFL.

Lots of speakers talking for short periods this year.  I’m intrigued by that format.  We’ll see how it goes.



Brief Break

I’ve been feeling burned out on Overcoming Bias lately, meaning that I take too long to write my posts, which decreases the amount of recovery time, making me feel more burned out, etc.

So I’m taking at most a one-week break.  I’ll post small units of rationality quotes each day, so as to not quite abandon you.  I may even post some actual writing, if I feel spontaneous, but definitely not for the next two days; I have to enforce this break upon myself.

When I get back, my schedule calls for me to finish up the Anthropomorphism sequence, and then talk about Marcus Hutter’s AIXI, which I think is the last brain-malfunction-causing subject I need to discuss.  My posts should then hopefully go back to being shorter and easier.

Hey, at least I got through over a solid year of posts without taking a vacation.


Setting Up Metaethics

Followup to: Is Morality Given?, Is Morality Preference?, Moral Complexities, Could Anything Be Right?, The Bedrock of Fairness, …

Intuitions about morality seem to split up into two broad camps: morality-as-given and morality-as-preference.

Some perceive morality as a fixed given, independent of our whims, about which we form changeable beliefs.  This view’s great advantage is that it seems more normal up at the level of everyday moral conversations: it is the intuition underlying our everyday notions of "moral error", "moral progress", "moral argument", or "just because you want to murder someone doesn’t make it right".

Others choose to describe morality as a preference – as a desire in some particular person; nowhere else is it written.  This view’s great advantage is that it has an easier time living with reductionism – fitting the notion of "morality" into a universe of mere physics.  It has an easier time at the meta level, answering questions like "What is morality?" and "Where does morality come from?"

Both intuitions must contend with seemingly impossible questions.  For example, Moore’s Open Question:  Even if you come up with some simple answer that fits on a T-shirt, like "Happiness is the sum total of goodness!", you would need to argue the identity.  It isn’t instantly obvious to everyone that goodness is happiness, which seems to indicate that happiness and rightness were different concepts to start with.  What was that second concept, then, originally?

Or if "Morality is mere preference!" then why care about human preferences?  How is it possible to establish any "ought" at all, in a universe seemingly of mere "is"?

So what we should want, ideally, is a metaethic that:

  1. Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;
  2. Fits naturally into a non-mysterious universe, postulating no exception to reductionism;
  3. Does not oversimplify humanity’s complicated moral arguments and many terminal values;
  4. Answers all the impossible questions.



Changing Your Metaethics

Followup to: The Moral Void, Joy in the Merely Real, No Universally Compelling Arguments, Where Recursive Justification Hits Bottom, The Gift We Give To Tomorrow, Does Your Morality Care What You Think?, Existential Angst Factory, …

If you say, "Killing people is wrong," that’s morality.  If you say, "You shouldn’t kill people because God prohibited it," or "You shouldn’t kill people because it goes against the trend of the universe", that’s metaethics.

Just as there’s far more agreement on Special Relativity than there is on the question "What is science?", people find it much easier to agree "Murder is bad" than to agree what makes it bad, or what it means for something to be bad.

People do get attached to their metaethics.  Indeed they frequently insist that if their metaethic is wrong, all morality necessarily falls apart.  It might be interesting to set up a panel of metaethicists – theists, Objectivists, Platonists, etc. – all of whom agree that killing is wrong; all of whom disagree on what it means for a thing to be "wrong"; and all of whom insist that if their metaethic is untrue, then morality falls apart.

Clearly a good number of people, if they are to make philosophical progress, will need to shift metaethics at some point in their lives.  You may have to do it.

At that point, it might be useful to have an open line of retreat – not a retreat from morality, but a retreat from Your-Current-Metaethic.  (You know, the one that, if it is not true, leaves no possible basis for not killing people.)

And so I’ve been setting up these lines of retreat, in many and various posts, summarized below.  For I have learned that to change metaethical beliefs is nigh-impossible in the presence of an unanswered attachment.



Posting May Slow

Greetings, fearless readers:

Due to the Oxford conference on Global Catastrophic Risk, I may miss some posts – possibly quite a few.

Or possibly not.

Just so you don’t think I’m dead.


2 of 10, not 3 total

There is no rule against commenting more than 3 times in a thread.  Sorry if anyone has gotten this impression.

However, among the 10 "Recent Comments" visible in the sidebar at right, usually no more than 2, rarely 3, and never 4, should be yours.  This is meant to ensure no one person dominates a thread; it gives others a chance to respond to others’ responses.  One-line comments that quickly correct an error may be common-sensically excepted from this rule.

You need not refrain from commenting; just wait a bit.
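The rule above is quantitative enough to sketch as a check. Here is a minimal Python sketch; the function name, the input format (a newest-first list of comment authors), and the thresholds encoded as defaults are illustrative assumptions, not part of any actual blog software:

```python
def comment_count_status(recent_authors, you, soft_limit=2, hard_limit=3):
    """Check the '2 of 10' rule: among the 10 most recent comments,
    usually at most 2 (rarely 3, never 4) should be by one person.

    recent_authors: hypothetical list of author names, newest first.
    Returns (your_count, over_soft_limit, over_hard_limit).
    """
    your_count = recent_authors[:10].count(you)
    return (your_count, your_count > soft_limit, your_count > hard_limit)
```

For example, an author with 3 of the last 10 comments would be over the usual limit of 2 but not yet at the never-4 threshold.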


The Conversation So Far

(I paraphrase.)

After a year of Robin pestering co-blogger Eliezer with "Can we talk about the singularity on the blog now, can we?" and Eliezer saying "Not yet," Robin speaks up on the occasion of his IEEE Spectrum singularity article:

Robin: Hey Eliezer, I see you’ve been talking for years about an AI-singularity.  Have a look; I’ve analyzed the history of previous "singularities" (as Vinge defines the term) and can use that to forecast the timing, speedup, and transition inequalities of the next singularity.  I can also find a tech that looks pretty likely to appear within the predicted time-frame, and an economic analysis suggests it could plausibly deliver the forecasted speedup.  And this tech is a kind of AI! 

  Eliezer:  I really don’t have time to talk, but you are looking at untrustworthy surface analogies, not reliable deep causes.  My deep insight is that optimization processes are more powerful the smaller and better their protected meta-level is, and that history is divided into epochs according to the arrival of new long-term optimization processes, and to a lesser extent their meta-level innovations, after each of which ordinary innovation rates speed up.  The two optimization processes so far were natural selection and cultured brains, and key meta-innovations were cells, sex, writing, and scientific thinking.  I’m talking about a future singularity due to a transistor-based machine with no (and therefore the best) protected meta-level.  My deep insight suggests this would have an extremely large speedup and transition inequality.

Robin:  This history of when innovation rates sped up by how much just doesn’t seem to support your claim that the strongest speedups are caused by and coincide with new optimization processes, and to a lesser extent protected meta-level innovations.  There is some correlation, but it seems weak.  And since you don’t argue for a timing for your postulated singularity, why can’t we think yours will happen after the singularity I outline? 

Eliezer:  Sorry, no time to talk.

To be continued. 
