Discussion about this post

Donald Hobson:

Wouldn't security risks also be a large barrier to the sharing of raw cognitive content? Verifying that raw cognitive content does not contain any malicious tricks inserted by a smart adversary is not necessarily easy.

"The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy." Doesn't the existence of the AIXI algorithm disagree? I don't doubt that to be a good doctor you need to know about human biology; I just don't see why you can't get that information out of raw medical scans and DNA sequences.
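
For concreteness, here is the textbook AIXI action-selection rule (Hutter's expectimax formula, stated from memory rather than taken from the post): it is defined entirely over raw action/observation/reward histories, with no domain knowledge beyond a universal Turing machine $U$. At cycle $k$, with horizon $m$ and $\ell(q)$ the length of program $q$, the agent picks

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

which is the sense in which "raw data into the right math-inspired architecture" at least has an (uncomputable) existence proof.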

If you insist the AI needs some extra cognitive content, where does that come from? And why can't whatever device produces it be part of the AI?

Overcoming Bias Commenter:

In response to Robin:

"Understanding low-level brain processes enough to aid em corner-cutting need not help much with understanding high-level architecture." Certainly this could be true given what we know now, but I'm pretty confident it is unlikely, based on a fairly large number of examples of how people are trying and the tools they need.

I guess to you it seems likely, but I don't know why.

If we want to pursue this, probably the only way to pin down where we diverge is to get into the specifics of how we judge where the probability mass lies in this domain. I can't do that right now, but I'm willing to later if you want.
