47 Comments

Looking forward to the day I can walk into my local Walmart and get the family AI. I am certainly up for the AI taking the kids to their activities, helping with homework, preparing the kids for exams, walking the dogs, changing the cat box, cleaning the fish tank, doing the housework, mowing the lawn, working on the yard, planning our meals, doing the shopping, repainting my house, folding the laundry... ahhh... the possibilities are endless!

Karen


Phil, that point actually supports Eliezer's position that the problem of AGI is simply an issue of software.

Of course, unfortunately for Eliezer, this also means that there is very little evidence regarding his proposed timeframe: Roger Schank and Daniel Dennett could easily turn out to be right.


How do old-timers address Kurzweil's argument about how exponential growth in computing power will make AGI feasible for the first time in the mid-21st century?

If we had a computer today that had infinite memory, and could give the results to any terminating computation in zero time, we would not know how to build an AGI with it. (Some people are of the opinion that some type of lookup table or theorem prover could succeed in this case, but I disagree. There is not enough data for a lookup table, and we wouldn't know how to formalize the world for the theorem prover.)


Cyan, thanks for the references; I am tracking those down as well.

To clarify (not that anybody cares), when I wrote "defining what 'A' and 'B' are in P(A|B)" what I meant is that I want to see how this way of looking at reasoning doesn't fail for the same reasons Eliezer (accurately, IMO) refers to GOFAI as "suggestively named lisp tokens". Bayesian updating may be more sophisticated than pure deduction, but the reference issue is what I'm really keen on understanding.


Poke: you have to go back further than behaviorism to find a time when it was scientifically plausible to suppose that introspection is infallible, regardless of whether some philosophers of mind may have held the opinion more recently than that. Behaviorism itself was a reaction to the introspective methods of late 19th century psychology.

I agree that there was extreme overoptimism about understanding cognition in the early days of AI, but the researchers quickly realized things were not as simple as they seemed when their efforts failed so miserably. And even in the early days, nobody would have accepted the much stronger belief you describe, that "introspection is infallible," which was my point.

I think I agree with your sentiment, but expressing it as "introspection is infallible" is profoundly misleading.


Joseph Knecht,

The infallibility of introspection was a central belief in philosophy for hundreds of years. Most people nowadays don't explicitly endorse the belief but their beliefs about how we should approach and understand the mind are clearly shaped by people who did.


Section five of 'Artificial Intelligence as a Positive and a Negative Factor in Global Risk' talks about using theorem provers in the design of silicon chips. I recognised the software in question: ACL2, A Computational Logic for Applicative Common Lisp. I'm interested in it as part of my vision of the medium-term future of computer programming languages. I've downloaded the software and tried to learn to drive it.

Notice that the vision I sketched goes too far. Provers such as ACL2 can show that two algorithms compute the same function, but they do not prove results about space and time requirements. They cannot express the idea that one algorithm is faster, or that another uses less memory. (Well, actually they can: you code an instrumented interpreter for the algorithms and prove results about the interpreter, but that is my point: there is another level required.) So my vision is not the next step; since it builds on stuff that ACL2 cannot do, it is two steps on from current research. Also, my vision falls well short of general AI.
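To make the gap concrete, here is a toy sketch in Python (not ACL2; the functions and step counter are my own illustration, not anything from the ACL2 distribution). An equivalence proof of the kind ACL2 produces establishes the assertion below; saying anything about the step counts requires modelling execution explicitly, which is the extra level I mean.

```python
import sys
sys.setrecursionlimit(10000)  # the naive version recurses once per unit of n

def sum_recursive(n, steps=0):
    """Sum 0 + 1 + ... + n by naive recursion, counting calls as 'steps'."""
    if n == 0:
        return 0, steps + 1
    total, steps = sum_recursive(n - 1, steps + 1)
    return total + n, steps

def sum_closed_form(n, steps=0):
    """The same sum via the closed form n*(n+1)//2, in a single 'step'."""
    return n * (n + 1) // 2, steps + 1

for n in (10, 1000):
    v1, s1 = sum_recursive(n)
    v2, s2 = sum_closed_form(n)
    assert v1 == v2  # same function: the kind of fact a prover can establish
    print(f"n={n}: recursive steps={s1}, closed-form steps={s2}")
```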

My opinion is that we are conducting AI research three or more conceptual levels below where the action is, and can therefore make no direct progress. We can only enlarge and deepen the computing culture, with the hope of moving up a level at some time in the future. Meanwhile you can download ACL2 from the University of Texas and get a feel for the state of the art. Then you can have an opinion too!


Steven Pinker mentions a putative 'language of thought' in his new book 'The Stuff of Thought'.

I sent Pinker an e-mail saying that it sounded like he was looking for a general purpose 'Upper Ontology':

Upper Ontology

Pinker's comment:

"Yes, I agree that what I am calling a language of thought is closely related to what computer scientists call an ontology."

I also raised the concept of a putative 'Universal Parser'. Pinker's comment:

"I’m not sure whether a universal parser is feasible (in practice – in principle, I’d insist that it is). As I note in chapter 8 (and in the “Talking Heads” chapter in The Language Instinct), sentence interpretation in context requires considerable knowledge about the speaker’s intentions, which may require duplicating a good part of the speaker’s social and cultural knowledge base. That doesn’t seem to be that easy to implement, especially if it is meant to apply cross-culturally, to any language. But perhaps some day."

Upper Ontology. Universal Parser. Hmm. Sounds like a possible new 'big insight' into AGI.

An Upper Ontology for General Purpose Reality Modelling.

Hee hee...


Re: "Was Google the first search engine?"

No, but look at Microsoft or Intel.

Of course the first seed AI being the ancestor of the last AI is far from a certain outcome.

E.g. maybe the builders of the first seed AI will cripple it with takeoff constraints - and so inadvertently allow a subsequent AI to take over before its air supply can get cut off, and its lunch can be eaten.

Also, the rise to power of these things may take more than "a few months".


And Robin: I'm 47, so it will not work to reply to me that after I turn 40 I will probably have some other theory of how scientific ability varies with age, one which favors people in their 40s.


Heh, I was just going to mention the age of 40 as the point past which the brain is too old to wield the knowledge necessary to make the kind of predictions Schank is trying to make. So for example, while Schank can read E. T. Jaynes's Probability Theory: The Logic of Science as soon as it is published, just as Eliezer can, Schank is over 40 when it is published, so he cannot rotate and transpose the material in his head like Eliezer can.


I wish I had known at age 15 what I know now. I think I could have made better use of it then.

I figure it takes around ten years to produce an FAI researcher. I figure I hit my expiration date at forty - that's when my father suddenly turned old. So I'm writing a series of letters to my successors, who will probably be around 15-17 when they read them, and then they have ten years to follow the path, with more of a boost than I ever got.

The possibility is never far from my mind, that whatever I do, and however far I or any other FAI researchers manage to get, it's going to have to be transmitted to someone who learns the theory at 18, before it can be used natively.

Maybe us Old Fogeys will still have a reservoir of experience that makes us strong and valuable. Maybe not.

No, I don't think the young are generically smarter than the old, or generically more trustworthy. But I don't trust many people at all, old warriors or young. When I take a stand on a Singularity issue, I'm standing in the direct center of my expertise. Anyone who wants to argue with that can argue with my arguments; it would be silly for me to trust their authority. I wouldn't argue with Sebastian Thrun about mobile robotics, or with Peter Norvig about search. If they want to argue with me about recursive self-enhancement, they're welcome to present arguments.

By default, I have to assume that their knowledge of my professional sphere is at the standard bright-informed-amateur level, because that's what it usually is with AIfolk.


...rethink my not-very-carefully-obtained view that Bayesianism is rarely useful, because in actual reasoning about the world priors are almost always inaccurate to the point of uselessness, and defining what 'A' and 'B' are in P(A|B) is almost always hopeless.

I suggest "Bayesian Data Analysis, 2nd ed." by Gelman et. al. If you're tackling Pearl on causality, then this is the right book for you on practical applications of the Bayesian approach. (If you want fundamental justification, I suggest reading the first two chapters of Jaynes's "Probability Theory: The Logic of Science"; they cover the Cox Theorems that provide grounds for using the Bayesian approach.)


See the addendum I added to the post.


poke, are you saying that vision is hard because the output isn't in any clean format, but is part of our representation of the world? And, moreover, that there's feedback from our knowledge of the world, telling us what to expect to see, so that we might as well be working on general intelligence?


@Poke: your first point with regard to misconceptions is a straw man: *nobody* in their right mind would ever say that introspection is infallible, nor have I ever read of a non-crank who held such a position.
