70 Comments

hmmmm.... really? But why though, haha. Yes, really, you? But why though, haha. But why though? But why though, haha ...? :D

> Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?

You haven't looked?

He might be talking about thermodynamic time asymmetry. 

See e.g. his post 'Scandalous Heat'. Also see the follow-up.

> Standard physics equations, when simply projected backwards in time, give very wrong estimates of past features. To get accurate estimates you have to add in an "arbitrary" and hard to adequately formalize constraint that entropy was very low in the distant past.

That sounds strange. Can you please elaborate?

As far as I know, the second law of thermodynamics is part of the "standard physics equations". Do you mean reversible physics equations?

> Similarly, part of what growth experts know is which models to use, and how, in order to get reasonable growth features.

Correct me if I'm wrong, but mathematical models of economic growth typically don't fit empirical data very well (except perhaps World3, which economists regard with disgust). Of course, this doesn't give Yudkowsky a free pass to make things up.

@5dcdf28d944831f2fb87d48b81500c66:disqus

"But in any realistic scenario, you can't make the assumption that the other player will behave like you, even if at some point in the past you were copies."

I know they would share my values, unless there is too much change in between.

I know that I don't change so much over short periods of time.

I would cooperate with copies of myself, so unless they were brainwashed, they would do the same.

"Keep in mind that other people are already almost perfect copies of yourself, given the overall genetic similarity between all humanity, and also the cultural similarity between you and those you normally interact with."

Lol no.

Most people care about completely different things than I do.

"1. Why should you expect AGI to be much more rational than humans? Humans are only as rational as evolution made them. An AGI would be only as rational as its maker made it."

Intelligent design > evolution.

This will be the first time intelligent design makes minds that can reason.

"2. You don't need to change terminal values to achieve a failure to cooperate. In Prisoner's Dilemma, the two players have the same payoff matrix, that is, the same terminal values. Nevertheless, unless the two players are equal, they will not cooperate."

Payoff matrix != terminal values.

A terminal value is an expression of how you want the world to be structured.

I want the world to be structured differently than most people do, but identical to how copies of me would.

That makes cooperating instead of defecting rational, and a functional (not insane) AI would do the same.
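The point about copies can be put in code. Both players below face the identical payoff matrix, yet the outcome turns on whether each can expect the other to run the same decision procedure. A minimal sketch (the agents and payoff numbers are illustrative, not anyone's actual proposal):

```python
# Standard Prisoner's Dilemma payoffs: (my_move, their_move) -> my payoff.
# C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def copy_agent(opponent_policy):
    # Cooperate exactly when facing my own decision procedure,
    # since a copy's choice is guaranteed to mirror mine.
    return "C" if opponent_policy is copy_agent else "D"

def defector(opponent_policy):
    # Always defect, regardless of the opponent.
    return "D"

def play(p1, p2):
    m1, m2 = p1(p2), p2(p1)
    return PAYOFF[(m1, m2)], PAYOFF[(m2, m1)]

print(play(copy_agent, copy_agent))  # (3, 3): mutual cooperation
print(play(copy_agent, defector))    # (1, 1): no exploitation possible
```

Same payoff matrix in both games; only the expectation of correlated decisions changes the rational move.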

Actually I think Yudkowsky's right about this one (shame he got everything else wrong though).

Ultimately, matters will not be resolved by internet debates, but by empirical facts on the ground. The debate is resolved when there is an empirical #win. So we should emphasize again the importance of #winning.

Hackers' Maxim #11: 'The one true alpha move is #winning'

AnotherScaryRobot, note that bigger brains don’t correlate well with intelligence:

Research suggests that bigger animals may need bigger brains simply because there is more to control -- for example they need to move bigger muscles and therefore need more and bigger nerves to move them.

Also see this article:

[...] animal brains [...], which can vary in size by more than a hundred-fold—in mass, number of neurons, number of synapses, take your pick—and yet not be any smarter. Brains get their size not primarily because of the intelligence they’re carrying, but because of the size of the body they’re dragging. I’ve termed this the “big embarrassment of neuroscience”, and the embarrassment is that we currently have no good explanation for why bigger bodies have bigger brains. If we can’t explain what a hundred times larger brain does for its user, then we should moderate our confidence in any attempt we might have for building a brain of our own.

ETA: See also here.

Beware of the choice of the arrows in a Bayesian network. Except in special cases, the direction of an arrow is arbitrary to some extent.
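For two variables this is easy to see concretely: the factorizations P(A)P(B|A) and P(B)P(A|B) encode exactly the same joint distribution, so the arrow between A and B can point either way. A sketch with made-up numbers:

```python
# Network 1: arrow A -> B, specified by P(A) and P(B|A).
# (The probabilities are hypothetical, chosen only for illustration.)
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

# Joint distribution implied by the A -> B factorization.
joint = {(a, b): P_A[a] * P_B_given_A[a][b]
         for a in (True, False) for b in (True, False)}

# Network 2: flip the arrow to B -> A by computing P(B) and P(A|B).
P_B = {b: sum(joint[(a, b)] for a in (True, False)) for b in (True, False)}
P_A_given_B = {b: {a: joint[(a, b)] / P_B[b] for a in (True, False)}
               for b in (True, False)}

# Both factorizations reproduce the identical joint distribution.
for a in (True, False):
    for b in (True, False):
        assert abs(joint[(a, b)] - P_B[b] * P_A_given_B[b][a]) < 1e-12
```

The special cases where direction does matter involve three or more variables (e.g. colliders), where flipping an arrow changes the conditional independencies the network asserts.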

We don't know that anything as elaborate as developing better algorithms is required. The brain would seem to be significantly resource-limited as a consequence of the caloric energy available in the human ancestral environment. There are also some very real practical physical problems associated with fitting the thing into your head — problems which make humans very susceptible to head injury, and require babies to be born in an extremely premature state.

The brain contains many repetitive structures. It may well be the case that given an emulated human brain free from the above constraints, simply adding many more copies of these repetitive structures could enhance cognition significantly, with no "intelligent" changes to underlying architecture and no development of new algorithms.

Standard physics equations, when simply projected backwards in time, give very wrong estimates of past features. To get accurate estimates you have to add in an "arbitrary" and hard to adequately formalize constraint that entropy was very low in the distant past. Part of what a physics expert knows is which models to use, and how, in order to get reasonable estimates from them.

Similarly, part of what growth experts know is which models to use, and how, in order to get reasonable growth features. Since it is easy for models to foom but apparently hard for reality to foom, they reasonably avoid foom models. Why isn't that attitude toward models just as reasonable as the physicists' attitude toward models that estimate the past?
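The first paragraph can be illustrated with a toy model, using an unbiased random walk as a stand-in for statistically time-symmetric dynamics: naively projecting the dynamics "backwards" from the present retrodicts a high-entropy past, even though the true past was low-entropy.

```python
import math
import random
from collections import Counter

def entropy(positions, bin_size=4):
    """Coarse-grained Shannon entropy (nats) of a particle configuration."""
    counts = Counter(p // bin_size for p in positions)
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def step(positions):
    """One step of an unbiased random walk: the statistics look the same
    whether the dynamics are read forwards or backwards in time."""
    return [p + random.choice((-1, 1)) for p in positions]

random.seed(0)

# True history: 1000 particles start concentrated at the origin (low
# entropy) and diffuse for 200 steps to reach the "present".
past = [0] * 1000
present = past
for _ in range(200):
    present = step(present)

# Naive retrodiction: apply the same time-symmetric dynamics to the
# present state for 200 steps, as if projecting it backwards.
retrodicted_past = present
for _ in range(200):
    retrodicted_past = step(retrodicted_past)

true_past_S = entropy(past)                # 0.0: everything in one bin
present_S = entropy(present)               # higher
retrodicted_S = entropy(retrodicted_past)  # higher still, not lower
print(true_past_S, present_S, retrodicted_S)
```

The model retrodicts an even more spread-out past; recovering the true concentrated past requires imposing the low-entropy boundary condition by hand, which is the constraint the comment describes.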

Thank you for making this polite enquiry, Abram. It seems that my comments have now been uncensored.

I actually don't believe that the empirical and theoretical sides of economics relate to one another significantly. Theoretical econ appears to me to predict foom in general, and then to be modified ad hoc to pretend that it doesn't.

The serial speed plateau is only because they keep trying to keep a larger and larger number of components in synchrony.

Reference?

I said we were seeing a "technology explosion" - not a "utility explosion".  We can't really measure utilities on an absolute scale.

But we can measure, or at least estimate, utilities on a relative, personal scale. Talking about a "technological explosion" without reference to utility seems rather vague.

The serial speed plateau is only because they keep trying to keep a larger and larger number of components in synchrony.  I said we were seeing a "technology explosion" - not a "utility explosion".  We can't really measure utilities on an absolute scale.

"You can sort by oldest first."

Oh, I hadn't seen that.

This is useful.
