7 Comments

"I’m happy to accept neuroscientist expertise, but mainly on in how hard it is to scan brain cells and model them on computers."

I'm skeptical about the "scan" part, not the "model" part.

Eventually, I expect that modeling brain cells will be possible to whatever degree of accuracy we need. We already use simulated neural nets for economically viable applications.

To my mind, the question is whether it will be cheaper to train neural nets or to scan an existing person's brain in order to acquire economically useful expertise. I think training will be cheaper. We already do it and find it useful, while scanning requires developing a whole new technology with very fine-grained resolution. In principle, we could develop a scanning technology. In practice, I think it will lose the race with training neural nets (and other AI techniques providing similar capabilities).


I agree with Dr. Hanson that it's a matter of when, not if, and I think that most neuroscientists would agree, too. As pointed out, we already have done something like this for the human auditory system with cochlear implants. For space, a number of things I originally wrote in my review got edited out. Regarding the timing of the age of em, I wrote "I'm only quibbling about the time scale not the substance of the prediction."

I also mentioned a practical, personal reason for wanting the age to come in my lifetime. "I've been struggling with a certain guitar passage. I've seen other people play it, I've watched instructional videos. If I could know what it feels like for Stephen Stills to play it, I could jump start my fingers and brain to actually do it."


Another plausible hypothesis.

Robin in his post today (http://www.overcomingbias.c...) offered two more:

* "we often have academics who visit for lunch and take the common academic stance of reluctance to state opinions which they can’t back up with academic evidence"

* "One does not express serious opinions on topics not yet authorized by the proper prestigious people."

Of course, some of these hypotheses reinforce each other.


I would suggest that most "neuroscientists" aren't really in a position to say what is or isn't possible even within the field of neuroscience, because most of those I have worked with, or whose talks I have attended, have little grasp of the physics that underlies these systems or the physics of the tooling they rely on, and aren't even able to process the raw data collected from such systems. It is a joke. And this is coming from someone who does not have a degree yet managed to work in a research lab and get published in NeuroImage earlier this year, just because I could do the work while not having to worry about my living situation or my next meal, and thought it would be interesting (how often does one get the opportunity to write software for brain waves?).

- The cells in the human body are at minimum ~2 microns in size. Beamforming techniques with EEG and MNI head models can allow for ~1 mm spatial precision (http://dx.doi.org.sci-hub.c...) with a high enough sensor count and optimal positioning for ROI locations, not even counting other techniques for decorrelating signal from noise, which can drop you into the 10s-100s of microns range today.

- Depending on sampling rate and sensor count (at least in EEG), you collect anywhere between 10s of MB and 100s of GB of data per minute, but I think that is a red herring: depending on how you represent or process the data in real time, you can store far less while still getting very accurate and precise measurements for any given threshold. I've constructed beamformers from datasets where the raw data was around 2 GB per minute by "training" on higher-order moments, where I only needed to store about 10 MB of it to get similar weights (see the first sketch after this list). I think fMRI is good for structural imaging, and plenty of that data is already open to the public from some universities, which is useful for setting up the problem, but its temporal resolution is shit and it is way too expensive and impractical for future everyday use (where you'd want to be collecting data to model such human behaviors anyway).

- Any talk of algorithms/libs for computing any of the above will be lost on most neuroscientists, who are far more worried about their next paper, their next grant, or talking to some "hot" tech company like Magic Leap (who haven't shipped a product yet but pitch all the amazing things they can do…) than about actually trying to achieve some tangible goal that might matter to most people. For example, even in the paper I cited above they talk about how "hard" it is to do PCA or ICA for noise decorrelation, yet even in 2007 you had algorithms that could do this efficiently, like the Jacobi-Davidson method (http://netlib.org/utk/peopl...), and today we have computing libraries in which the Jacobi-Davidson method is used, like Armadillo (http://arma.sourceforge.net/), which you can link against libs like OpenBLAS for parallelism out of the box; stuff like this has been around for years! (See the second sketch after this list.) People in the computational physics or chemistry community are much better placed to tackle some of these problems, but they are more concerned with their respective fields.

- I think there needs to be more experimentation that's not driven by academia status games, but the tooling is expensive now. I'm working on open-source hardware/software (https://github.com/cinquemb... for a project that is a fork of a system (https://open-ephys.atlassia... priced between $2-5k, where I want to bring such capabilities down to around $500 (including electrode costs, but no cap), which should put it at a level where hobbyists can explore more unguided/freely with it. I do this mostly in my free time when I don't feel burned out by life (it would be awesome if others could help). The only market force I can see now is people trying to upsell headphones with one or two electrodes in them by marketing them as making people feel "calm" or whatever new age bullshit, and a lot of academics see money and are in conversations with such companies. Personally I see it as a distraction, because most people aren't interested in paying for such products at those prices now, there's no demand for them now, and nothing suggests there will be in the future. But people pay a lot for video games, and I can see that people would want a hands-free experience while playing those, which is why I'm more interested in applying the technology to the gaming market vs the "old" people market of meditation retreats and such (too in person, too fractured, doesn't scale, doesn't help people collect data in the "real" world to better try to understand these processes).
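A rough sketch of the covariance-accumulation idea from the second bullet, not my actual pipeline: the channel count, sampling rate, and random "leadfield" below are made-up stand-ins. The point is that you keep a running second moment (an n_sensors x n_sensors matrix) instead of the raw recording, then build LCMV-style beamformer weights from it.

```cpp
// Sketch: accumulate the sensor covariance online so only an
// n_sensors x n_sensors matrix is stored instead of the raw recording,
// then derive LCMV beamformer weights for one source location.
// The "leadfield" here is random stand-in data; a real one comes from a head model.
#include <armadillo>

int main()
{
    const arma::uword n_sensors = 64;
    const arma::uword n_samples = 250 * 60;   // e.g. one minute at 250 Hz (stand-in values)

    // Running second moment: R += x * x', updated sample by sample.
    arma::mat R(n_sensors, n_sensors, arma::fill::zeros);
    for (arma::uword t = 0; t < n_samples; ++t)
    {
        arma::vec x = arma::randn<arma::vec>(n_sensors);  // stand-in for one EEG sample
        R += x * x.t();
    }
    R /= static_cast<double>(n_samples);

    // Diagonal loading so the inverse stays well conditioned.
    R += 1e-6 * arma::trace(R) / static_cast<double>(n_sensors)
         * arma::eye<arma::mat>(n_sensors, n_sensors);

    // Fixed-orientation leadfield for one region of interest (stand-in values).
    arma::vec L = arma::randn<arma::vec>(n_sensors);

    // LCMV weights: w = R^{-1} L / (L' R^{-1} L)
    arma::vec RinvL = arma::solve(R, L);
    arma::vec w = RinvL / arma::as_scalar(L.t() * RinvL);

    // The source estimate for any new sample is then just dot(w, x).
    w.print("beamformer weights:");
    return 0;
}
```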
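And a similarly rough sketch of the PCA-style noise decorrelation mentioned in the third bullet, here using Armadillo's dense eig_sym on a stand-in covariance; for large problems you would swap in an iterative solver like Jacobi-Davidson to get only the leading eigenpairs. Treating the top 3 components as artifact is purely illustrative.

```cpp
// Sketch: PCA-style noise decorrelation by projecting out the strongest
// components of a sensor covariance. Uses Armadillo's dense eig_sym;
// an iterative eigensolver would compute only the leading pairs instead.
#include <armadillo>

int main()
{
    const arma::uword n_sensors  = 64;
    const arma::uword n_artifact = 3;   // assume the top 3 components are artifact/noise

    // Stand-in symmetric covariance; in practice this is the accumulated R above.
    arma::mat A = arma::randn<arma::mat>(n_sensors, n_sensors);
    arma::mat R = A * A.t() / static_cast<double>(n_sensors);

    arma::vec eigval;
    arma::mat eigvec;
    arma::eig_sym(eigval, eigvec, R);   // eigenvalues returned in ascending order

    // Eigenvectors of the largest eigenvalues (last columns) span the artifact subspace.
    arma::mat U = eigvec.cols(n_sensors - n_artifact, n_sensors - 1);

    // Projector that removes the artifact subspace from each sensor sample.
    arma::mat P = arma::eye<arma::mat>(n_sensors, n_sensors) - U * U.t();

    arma::vec x = arma::randn<arma::vec>(n_sensors);  // one raw sample
    arma::vec x_clean = P * x;                         // sample with top components removed

    x_clean.print("cleaned sample:");
    return 0;
}
```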

I could go on and on, but I'll stop here today :P


I don't find Tyler's argument very convincing, partially for the reasons you mention, and partially because I think people (me included) aren't very good at predicting the future.

I'd like to see evidence that market prices showed signs of the advent of the internet (surely an economically significant event) 100 years ahead of said advent. In fact, the markets hadn't caught up even in, say, '93, as far as I can tell. But Vannevar Bush told us it would happen, even if he was wrong about some of the details. So sometimes visionaries are right, and markets lag. Figuring out which visionaries to believe (they're mostly cranks) is an unsolved problem.

Will we make ems? I don't think I know enough to say, either way.

That said, I don't think ems will be the _next_ fundamental change on the order of the industrial revolution, etc. Ems are speculative. Machine learning is progressing so quickly now that I expect it to radically transform things over the next few decades.

Machine learning is happening now. I see it happening, but I don't know what I ought to bet on to take advantage of seeing it happen, so what I see does not enter the market as information, yet. Maybe I'm a crank ;).


"They have a lot to lose as experts if the speculations turn out wrong, and by their nature speculations are...speculative."

Long-term speculation is hard to falsify until its propounders are safely dead. I suspect this is the reason for reluctance: it may seem a cheap way to get acclaim without empirical responsibility or consequences.


Experts in any field seem to be very unwilling to speculate, or even endorse speculation, about long-term developments in their field.

I'm not sure why, but here are some ideas:

1 - They have a lot to lose as experts if the speculations turn out wrong, and by their nature speculations are...speculative.

2 - They are very focused on immediate problems and progress. This is what they're paid to do, and where they get their professional prestige.

3 - They are more keenly aware than non-experts of the many difficulties there will be in the actual implementation of speculative ideas. While they may know intellectually that these difficulties are not insurmountable in principle, as an expert they're overwhelmed by the amount of work yet to be done, and tend to assume it'll never happen.

4 - Even if they think the speculations are reasonable and will turn out correct in the long run, because of #3 they fear losing professional respect within their field - other experts may be discouraged by the amount of work yet to be done, and so consider as "crazy" anyone who takes a longer-term view.

Supporting these ideas is the observation that those few experts who are willing to engage in speculation tend to be from the very top (Nobel laureates, etc.) or very bottom of their field.

Those, in other words, who are either so respected they don't fear a loss of status, or who have no status to lose in the first place.
