From a wide-ranging profile of me in the Cyprus Mail (a newspaper):
I’ve learned a bit about his lifestyle – [Hanson] reads widely; he goes biking; he likes movies, and peruses ‘100 Best Films’ lists to check how many he’s seen – but not very much. A profile is supposed to be personal, I remind him. But he shakes his head.
When interviewers talk to a musician or an athlete (or indeed a well-known academic), he points out, they’re forever asking them to ‘tell me about the rest of your life’ – yet “the way people become famous musicians or athletes is to focus so much of their energy on this professional thing, [so] there usually isn’t much of a ‘rest of their life’. And that’s not a message people usually want to hear, so they make up silly things in order to seem personal.”
Okay, can somebody explain this to me? You say in the article that we'd talk to the em like a regular human, but just because we have an exact model of the brain doesn't mean we would have a model of the ears, mouth, and eyes, does it? And even if we scanned those too, we wouldn't know how to hook them up to the computer, would we? I would expect the way we communicate with the first ems to be very different from normal speech.
Ears, mouth, eyes are much easier to model than the brain. We already have decent ear and eye emulations.