
Parts of current machine learning systems are still coded by humans, but my point is that it's no longer the "content" of intelligence that is coded, but just a general learning framework.

For instance, consider the DeepMind system that can play ~50 Atari games. In traditional machine learning, humans would have to define a bunch of features, and the learning algorithm would take those feature values as input. Figuring out the best features was difficult work that required a lot of human labor and insight. For Atari, an example feature might be "is any moving object on a course that will collide with my character in the next 2 seconds?" You could train an Atari-playing system by defining and manually coding up hundreds or thousands of such features, hoping that the combination is enough for your model to learn to play the game well.
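To make that concrete, here is a hypothetical sketch of one such hand-coded feature (my own illustration, not code from any real system), assuming object positions and velocities have already been extracted by yet more hand-written code:

```python
# One hypothetical hand-crafted feature of the kind described above.
def collision_within_2s(player_x, player_y, obj_x, obj_y, vx, vy,
                        fps=60, radius=5.0):
    """Return 1.0 if the object's straight-line path passes within
    `radius` pixels of the player at some point in the next 2 seconds."""
    for t in range(2 * fps):                     # step through 2 seconds of frames
        fx, fy = obj_x + vx * t, obj_y + vy * t  # extrapolated object position
        if (fx - player_x) ** 2 + (fy - player_y) ** 2 <= radius ** 2:
            return 1.0
    return 0.0
```

Multiply that by hundreds or thousands of features, each needing its own design, debugging, and tuning, and you get a sense of the labor involved.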

How DeepMind's Atari system actually works is that the only inputs to its learning algorithm are the pixel values on the screen. It is trivial to write the code that gives the learning system the pixel values, and the input "features" are identical for every Atari game. So none of the intelligence about how to play the game is hand-coded. (I think the only other hand-coded part is a function that extracts the score from the screen.) The amount of work saved by not having to manually define features is huge.
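For contrast, a rough sketch of the pixels-only interface (shapes follow the Atari screen; the single linear layer is a stand-in for DeepMind's convolutional network, and none of this is their actual code):

```python
import numpy as np

def act(weights, frame):
    """Map raw screen pixels to one of up to 18 joystick actions.

    frame: uint8 array of shape (210, 160, 3) -- the entire input.
    """
    x = frame.astype(np.float32).ravel() / 255.0  # flatten and rescale pixels
    q_values = weights @ x                        # one score per action
    return int(np.argmax(q_values))               # pick the highest-scoring action

# The same interface works unchanged for every game:
weights = np.zeros((18, 210 * 160 * 3))           # learned, not hand-coded
frame = np.zeros((210, 160, 3), dtype=np.uint8)   # a (blank) screen frame
print(act(weights, frame))
```

Everything game-specific ends up inside `weights`, which is learned rather than written.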

This is a continuation of a shift in how AI systems are built. Before machine learning, humans would specify both the "features" and how the features should interact to produce intelligence. With traditional ML, you let the system learn the interactions and only hand-code the features. Now we can let the system learn both: instead of defining features, you let the system 'perceive' raw input. This is the distinction I see you not acknowledging when you talk about non-em AI involving "hand-coding" intelligence.


The future can be far different from anything we can know. For example, our quasi-intelligent agents will probably remove any need for ems, though I think the challenge of immortality would still be a great incentive, enough to outweigh its being uneconomic. And knowing enough to do it may still leave a biological solution preferred.


No, Tyler hasn't commented AFAIK, which is why it made sense for me to make a prediction about what he would say.


Has Tyler Cowen actually commented much on the topic of superintelligence? I didn't find anything clear on the first page of Google results. Elon Musk, Stephen Hawking, and Bill Gates, as scientists or entrepreneurs, each have reputations for being geniuses, but that's because of the status they've earned, not because of expertise in artificial intelligence. Either way, have you asked Dr. Cowen whom he would consider an authority on ems?

He said the neuroscientists he talks to never mention it, but it's not as if neuroscientists would ever go out of their way to talk about ems. Brain-scan computer uploads happening in one hundred years aren't relevant to their research field, or on their horizon as career scientists. Why would they bring it up for no reason? Also, IIRC, Daniel Levitin is a neuroscientist Dr. Cowen cited in his post, but Dr. Levitin already said ems could or would happen, just later than you predicted. What other experts might Dr. Cowen consider authorities on ems?

If there isn't special reason to think they'd be experts, i.e., they're not computer scientists or neuroscientists, is he just waiting for a sufficiently high-status "smart-person/genius" to come forward praising your book?


I don't think non-destructiveness is required. There are enough people who wouldn't conceive of their biological destruction as death.


I think one challenge is that there are so many assumptions, it's hard to know which will be the hardest. To me, ems will require several advances I can't yet imagine. I believe the most challenging is the non-destructive brain inspection required to create an em. So the assumption on p. 47, that it will be possible to scan a brain at the proper level of resolution, leaves me lost as to the overall believability of the rest of the book, much as my objection to the "transporter" ruins Star Trek for me.

It will require several major advances in physics, with uncertainty-principle-like paradoxes to overcome in observing the central nervous system without changing its state. If the relative and continuous levels of electrochemical activity are important, which they look to be, the entire observation may have to be made at a single moment.

To me, a more interesting route is the concept of cognitive enhancement, where our computational capacity is expanded via additions to our brain, wired to the nervous system. As these become more sophisticated, what it means to be us would come to reside more in the peripherals than in our meat brain, even the motivation for our behavior (simplistically, things like the release of dopamine). These peripherals could even have a shared component, dissolving the concept of individual identity. The meat brain could then die off without losing all of what it is to be us. In that case, the things motivating the actions of those brains would likely be wholly different from our current motivations, which gets me to some of the scenarios you consider.


Except it doesn't seem to me a matter of topic. It isn't unsafe to express an opinion on ems. It's not the topic but the opinion itself that is taboo.


Maybe the DAO hacker is really the "AI Foom" making its first actions in the physical world by funding itself through ETH puts?

:)

Yes, I see neither "AI Foom" nor Ems as being remotely likely. I am, however, quite partial to a variation of the simulation argument.


Now this is some serious (cynical, yes, but rings-true accurate) wisdom: "One does not express serious opinions on topics not yet authorized by the proper prestigious people." You know, I've experienced this. But worse, I suspect that I have also contributed to enforcing it, on at least some occasions.


On the old AI foom question... I'm not sure I understand exactly what it is you have long been referring to as 'content'. Is the (vast quantity of) training data fed into machine learning algorithms the content? Or is content more a matter of which implementation of which algorithm to use for each problem, what assumptions are made, what simplifications, etc.? Or, a third possibility, is content something more like the tools and techniques that we, or an AI system, use to interact with the world?

On a related note, I was interested to see this recent talk by Peter Norvig, in which he argues for the need to develop techniques for modifying machine learning software in a precise, targeted, understandable way, as traditional software can be modified. He specifically mentions machine learning's non-modularity as a source of difficulty here. I'm not sure whether increasing modularity in ML will be possible or feasible, but if it is, perhaps it will lead to more of the code reuse, standards-following, interoperability, and so on that you have argued you expect to see in a world where AI is as widely used as traditional software is today.
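To illustrate the non-modularity point with a toy example of my own (nothing from Norvig's talk):

```python
import numpy as np

# Traditional software is modular: each line below implements one rule,
# and any rule can be patched precisely without touching the others.
def spam_score(subject):
    score = 0.0
    if "free money" in subject.lower():
        score += 1.0
    if subject.isupper():
        score += 0.5
    return score

# In a learned model, equivalent behavior is smeared across a weight
# vector fit to data. No single weight *is* the "free money" rule, so
# a targeted fix means changing the data and refitting, which moves
# every weight at once.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # stand-in feature matrix
y = (X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) > 0).astype(float)
w, *_ = np.linalg.lstsq(X, y, rcond=None)         # all weights shift together
```

Whether ML can ever be patched the way the first function can is exactly the open question.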


Machine learning systems are also hand-coded, at least for now. In almost any AI system, the system itself is capable of creating things that a human could also have added directly. But there is always some part of the system that it couldn't have added to itself.


Good post. One tangential comment:

"we have far more reasons to doubt we will eventually know how to hand-code software as broadly smart as humans"

I see you often refer to creating general AI without ems as "hand-coding" intelligence. I've commented before about how your writing on AI doesn't distinguish between machine learning and 'good old-fashioned AI', which focused on explicitly coding rules for intelligence. It used to be that machine learning systems required humans to hand-craft "features" for the computer to learn from. The most modern machine learning methods no longer require humans to create features.

So it's misleading to describe the kind of machine-learned systems that might lead to general AI as needing to have their intelligence "hand-coded." The thing that is coded will be the learning algorithm; all the actual intelligence will arise from data and experience.
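As a minimal sketch of that division of labor (hypothetical data, with logistic regression standing in for a real learning algorithm):

```python
import numpy as np

def train(X, y, steps=1000, lr=0.1):
    """Logistic regression via gradient descent: the hand-coded part."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # current predictions
        w -= lr * X.T @ (p - y) / len(y)     # generic update rule
    return w                                 # everything learned lives here

# The same hand-coded loop learns whatever rule the data contains:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(float)    # a rule never written in code
print(train(X, y))                           # ...yet recovered in the weights
```

The `train` function is the only thing a human writes; the behavior comes from `w`.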


If Cowen is looking for a "scientific consensus" for a multi-disciplinary series of claims about some point in the future, I think he'll have to wait until that future unfolds. Possibly longer.

Maybe I'm out of my league, but as far as I've gotten in the book, Age of Em seems very plausible. I'd also like to know what in particular Cowen doubts.
