Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.
Wheels actually weren't used that much before a thousand years ago. Legs have been vastly more useful until recently.
Having spent a few semesters in a lab focused on simulating biological neural networks, I can say that at the time we did consider the network of neurons simpler to model than detailed models of individual neurons. E.g., a network of neurons based on something like Hebbian theory (1949) for neuronal activation and plasticity is much simpler than trying to actually model the wide variety of real neurons, their complex electro-chemical processes, the effect of neurotransmitter levels, etc.
I'm new to this, so pardon me if this has been said before, but hasn't the trend been replacing humans with task-specific automated processes? And it makes sense: that's where immediate results are achieved, and you are not wasting resources emulating all the extra stuff that's only ever needed in a general-purpose agent.
And doesn't that, in turn, mean that we will skip right past the human-like ems who switch on their love for music or whatever during their spare cycles, and go straight for the non-sentient, optimized-for-productivity end state?
The general trend of technology seems to me to be increasing decentralization, interdependence, and specialization, allowing gains via scale economies.
Consider just how many tasks a human leg is designed to perform, and under what limitations. Then look at wheels, and the environments they are useful in. Seems to me that rather than wheels lacking a whole bunch of unnecessary features of legs, they still do have those features, only they're outsourced rather than performed in-house. So wheels don't include functionality for repairing themselves - instead they are designed to be easily removed and replaced, with the creation of new wheels performed in factories. They don't need to handle as many diverse environments as legs do - they run on long flat surfaces that are purposefully created for them to run on. They don't improve by recombination of internally-contained blueprints - instead, design is handled by external organizations that can make improvements, both small and incremental (as genes do) and larger, riskier, long-shot changes.
Without any of this supporting infrastructure, wheels just don't work, and aren't even remotely competitive with self-growing self-maintaining broadly useful legs.
"So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates."
There are general problem-solving algorithms such as AIXI that look simple when written mathematically, but they tend to be uncomputable, and therefore far too complex in the relevant sense. In mathematical notation it is all too easy to say "and then repeat for an infinite number of cases".
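As a toy illustration of that point (my numbers, not anything from AIXI's actual definition): even a brutally simplified AIXI-style planner that maximizes expected reward by enumerating every deterministic environment consistent with its interface faces a candidate set that explodes doubly exponentially in the planning horizon.

```python
# Toy illustration (hypothetical model, for scale only): count the
# deterministic environments an exhaustive AIXI-style planner would
# have to sum over. An "environment" here maps every non-empty action
# history of length <= horizon to one of |O| observations.

def num_environments(num_actions: int, num_observations: int, horizon: int) -> int:
    # Number of non-empty action histories of length 1..horizon.
    histories = sum(num_actions ** t for t in range(1, horizon + 1))
    # Each history can independently be answered with any observation.
    return num_observations ** histories

# With just 2 actions, 2 observations, and a 5-step horizon there are
# 2 + 4 + 8 + 16 + 32 = 62 histories, hence 2**62 candidate environments.
print(num_environments(2, 2, 5))  # 4611686018427387904
```

The math fits on one line; the enumeration does not fit in the universe, which is the sense in which "simple when written mathematically" is not the relevant kind of simple.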
> It's possible that general intelligence is analogous to arithmetic.
Which is easier to do depends on where you are starting from. Also, organisms don't just have legs; they have self-repairing legs.
Which is simpler, a leg or a wheel? We were able to build internal combustion engines long before we built viable robotic legs.
The brain is over-engineered?
We managed to figure out how to use computers for arithmetic without understanding everything the brain does. I'd guess a calculator uses algorithms significantly simpler than the ones the brain uses. It's possible that general intelligence is analogous to arithmetic.
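For instance, a calculator's core addition routine is a few lines of carry propagation, with no resemblance to whatever the brain does when we add. A minimal sketch (illustrative only, not any particular calculator's firmware):

```python
def add_binary(a: str, b: str) -> str:
    """Schoolbook binary addition with a single carry bit."""
    i, j, carry, out = len(a) - 1, len(b) - 1, 0, []
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total += int(a[i])
            i -= 1
        if j >= 0:
            total += int(b[j])
            j -= 1
        out.append(str(total % 2))   # current bit
        carry = total // 2           # propagate carry to next column
    return "".join(reversed(out))

print(add_binary("1011", "110"))  # 11 + 6 = 17, i.e. "10001"
```

If general intelligence turns out to be like this - a simple mechanism that merely happens to be implemented messily in neurons - the "understand whole brains" requirement gets much cheaper.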
Seems like you're making the same claim Mark Bahner more explicitly states above: that you believe neurons, or perhaps only the kind of neurons found in the neocortex, will spontaneously produce useful functionality when a bunch of them are clumped together; that there is no meaningful superstructure existing across many or all neurons in aggregate. Your evidence for this is the uniformity and flexibility of the neurons in the neocortex.
But memory addresses in a computer are also uniform and flexible. Yet computers don't spontaneously perform useful activity; they need actual software to be written for them. Their uniformity allows them to store and run many different programs, but does not for a moment mean they behave the same under any configuration.
Are you sure the same isn't true for the brain - that there is no 'software' analog existing in the brain at a higher conceptual level than the level of individual neurons?
> Second, even if you do use an instrument which is capable of detecting the data, you would assume it is random variation unless you already knew about the encoding
There are ways of making a formal judgement about how random something is.
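One crude but concrete version of such a judgement (my sketch, not something from the thread): true Kolmogorov complexity is uncomputable, but a general-purpose compressor gives an upper bound on it, so data that compresses well is demonstrably *not* random variation.

```python
# Sketch: use compression ratio as a cheap upper-bound proxy for
# randomness. Structured data compresses far below ratio 1.0;
# high-entropy data does not compress at all.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size; near (or above) 1.0 suggests high entropy."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the quick brown fox " * 200   # highly patterned, 4000 bytes
random_ish = os.urandom(4000)                # high-entropy bytes

print(compression_ratio(structured))  # far below 1.0
print(compression_ratio(random_ish))  # roughly 1.0 or slightly above
```

So a scan that surfaced an unknown encoding wouldn't necessarily look like noise under this kind of test, even before anyone knew how to decode it.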
> And I don't think physical scanning is going to help you copy the contents of the hard drive, until you also know what method of encoding the hard drive uses.
The encoding isn't something separate from the decoding. (I.e., it's not like a book written in an unknown language.)
If you do, a sufficiently fine-grained scan of a brain or a PC will capture both at once.
Of course there is a real catch about "sufficiently fine grained". The less you know about brains, the more you would have to brute-force the issue, leading to quantitative problems.
I'm always amazed by the ability of some people to make confident predictions about the nature and behaviour of AIs of unknown architecture that haven't been built yet.
If we explicitly programme in ethics to our AIs for safety reasons, they will have ethics.
If we implicitly train them into ethics through socialisation, the way we train ethics into human children, they will have both ethics and, as a necessary prerequisite, social skills.
I think the best way to approach not knowing what level we need to read at is to look at what we are constrained by in physics/instrumentation/present experimental research results, with some napkin numbers:
speed of light in grey matter (taking relative permittivity ε_r ≈ 4.23 × 10^4 and μ_r ≈ 1):
με ≈ 4.70650973710659 × 10^-13 = (4.23 × 10^4) × (4π × 10^-7) × (8.854187817620 × 10^-12)
v ≈ 1,457,640.81124346 m/s = 1/sqrt(4.70650973710659 × 10^-13)
published spatial resolution with EEG beamforming techniques: ~1 mm
max sampling rate on BioSemi hardware: ~60 kHz, so an effective (Nyquist) ~30 kHz
calculating the distance (in mm, rough spatial resolution) that light (or information) would travel in this environment:
0.0485880270414487 mm = 1,457,640.81124346 / 1000 / 30000
Seeing as ~0.05 mm is still within the cavity of the brain (no information has escaped at the hardware sampling rates), we would probably be oversampling for nearly everything except that which is ~0.05 mm from the electrodes - theoretically speaking, since to my knowledge nothing has been published with such resolution in EEG.
However, in my lab we sampled at 2048 Hz (which most in the field considered to be too high anyway), which puts us at:
~0.711738677364971 mm = 1,457,640.81124346 / 1000 / 2048 (this is ~0.160 GB/minute at 24-bit resolution)
Which can still be considered oversampling as far as information content escaping the brain goes, depending on the ROI, but it is pretty close to what I think would be acceptable for exploration now, within the bounds of published work.
The minimum size of neuron I've seen thrown around is about 0.002 mm, so even theoretically speaking, given what the hardware can do now, we are still about 25x off from measuring changes across individual neurons with EEG.
But considering that the timescale of reaction times in motor tasks is greater than 150 ms, I think there is considerable leeway in classifying neural states to sufficient accuracy (as the field has done thus far with non-spatial techniques). Not to mention that even measuring at 2048 Hz you see sinusoidal fluctuations from bin to bin, so you are going to have to average somewhere anyway, and the level of a single neuron is not likely to be very helpful without the context of what every other neuron is doing at that time in the brain (or in the body).
Human brains depend on the details of how neurons actually work. If we simplify what nature did, then we won't be able to get uploads to run on the result. Simplifying either of these gives an advantage to ordinary AI (OAI?).