19 Comments
Berder:

"We might similarly set a prior over parameter settings and track changes in those prior values."

Practically speaking, you can't do that. Every time you train a network you're going to get wildly different values of almost all of your parameters. The value of an individual parameter is meaningless and random, so tracking it would serve no purpose.

Here's something you probably don't know. In practice, the "fully trained" network has edge weights almost identical to those of the initial "randomly initialized" network. Training makes only small changes, essentially just fine-tuning the random initialization. This is because the weight space (the vector space of all the weights) is so high-dimensional: wherever you are in weight space, you're *close* to one of the very many viable solutions. So it only takes a small adjustment from the random initialization to reach the closest viable solution.

This "small adjustment" makes a big difference in the network's behavior, but it is small in terms of the absolute sizes of the weight updates.

Robin Hanson:

I meant high level parameters that define the learning process, not the specific parameter values that are learned from data.

Spencer Marlen-Starr:

Those are typically called tuning parameters, or alternatively, hyperparameters.
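
For what it's worth, a minimal illustration of the distinction (scikit-learn assumed; `alpha` and `coef_` are that library's names): the hyperparameter is fixed before training, while the parameters are what training learns from the data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

model = Ridge(alpha=1.0)  # alpha: a hyperparameter, chosen before training
model.fit(X, y)
print(model.coef_)        # coef_: parameter values learned from the data
```

Robin's suggestion, as I read it, is about tracking beliefs over things like `alpha`, not over `coef_`.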

Robin Hanson:

yes

Peter Gerdes:

I feel like the measure of "economy dominated by AI" is a very flawed one, since if AI is cheap, whatever it does won't be a large fraction of economic activity.

I mean, suppose that in 40 years virtually no human is physically involved in any sort of manufacturing, resource extraction, farming, routine janitorial work, or construction, because AI supplies that labour really cheaply. Does that count or not? If we get really good at programming AI without much effort, then it seems likely that those aspects of production become a relatively small share of economic activity (in dollar terms).

Doesn't AI only come to dominate economic activity if it's both very useful and dependent on some relatively limited resource (e.g., huge amounts of power)? Unless you just mean the amount of economic activity it's involved with, which seems like a bad measure, since it would be satisfied by everyone wearing AI-enabled smart glasses even if the glasses only add a bit of value.

Robin Hanson:

Whatever we pay the most for at the time will be what we care the most about then on the margin. Economic growth rates, if they changed, would be a measure of our having found a way to gain more value faster.

Peter Gerdes:

What we want to measure here is something more like the total value a monopoly on that technology would command.

Peter Gerdes:

Yes, I agree with both of those claims on an appropriate interpretation, but neither supports the claim that the total economic activity devoted to a good is a decent measure of how important that good is to us or of how much it has changed our lives.

The marginal value of an extra unit of artificial fertilizer (or antibiotics) is quite low, because we can make it pretty cheaply, so it's quite plentiful. But that doesn't tell us anything about the total value the invention of artificial fertilizer (or antibiotics) added to the economy, or about how much smaller the economy would be without it.

One could easily imagine AI becoming like artificial fertilizer or antibiotics: responsible for a huge change in our lives and a big increase in economic productivity, yet not valuable on the margin, because it's cheap to make so we've got plenty.

Charles Niswander:

I think it may upend our traditional ideas of economy, labor and many other things. I understand what you mean about the terminology though, at least from a semantic point of view.

Joe Canimal:

Here's an alternative idea: let's allow marketing hype and emotion-laden doomerism untethered by any prior or indeed any rational governing process to dictate our feelings about AI, and rush to pass sweeping legislation/regulations that will squelch any newcomers in the space to the benefit of big incumbent lobbyists.

Mike Randolph:

I think that large language models are more promising and powerful than Robin suggests. I also think that the fraction of world income that goes to pay for AI systems is not the best metric to measure their progress and impact. Bing tells me Tyler would say a better metric would be how much they increase economic productivity, which has been stagnant for decades despite previous technological innovations.

Bjarte Rundereim:

Questions from a medium size man:

Is it possible to imagine an AI system which is more complicated than human knowledge is capable of evaluating?

If so, is it possible to imagine that AI can run out of the scope of human control?

If only the very most capable AI experts will be able to control AI, who can control the controllers?

Peter Gerdes:

Ha, "medium sized man" amuses me (it's what my wife calls me whenever I object about her calling our dog little when she's cute).

Anyway, what do you mean by evaluating? We will certainly be able to evaluate it in many ways (how much electricity it uses, how fast it solves various problems, etc.). Indeed, the scenario kinda seems fundamentally impossible to me: if you've described clearly enough what kind of evaluation you mean, you've thereby told me how to evaluate it in that way.

Bjarte Rundereim:

Not exactly what I meant.

I was aiming more at evaluating or controlling the processes that are initiated by the programs involving AI. AI means that the hardware run by those processes, whatever they may be, will run without the detailed outcome being either intended or controlled by the programmer. Else, what is the I in AI?

So suppose some setup begins to produce results far outside what was intended, maybe even dangerous materials or processes, or materials and processes that were unknown to the instigators (the I in AI, you know), and in that sense outside the previous knowledge of the instigators, and even outside human competence.

The word-producing toys that we hear about seem to produce all sorts of new combinations of facts and structures, and even counterfactual and "new" "facts" (all four quotation marks intended), all by themselves, on the basis of very few and common parameters as input.

It seems to me that we are headed for a lot of unplowed and possibly unsafe ground.

Peter Gerdes:

Yes, that's basically the whole issue of AI alignment. Given that we want AI to perform quite complex and loosely defined tasks (e.g., give people good advice about how to solve a problem) in ways that go beyond what a human could do, can we ensure that the AI doesn't subtly trick us and actually pursue some goal we find distasteful (especially given that it's hard to specify just what kind of behavior is acceptable)?

Personally, I'm not very concerned about alignment in that sense. I think a more plausible concern is mentally ill AI. Sure, sometimes our 'software' fails in blunt ways (an epileptic cascade, etc.), but it can also have complex failure modes like schizophrenia, and an AI that exhibited that kind of failure could potentially do a great deal of damage. (Obviously an AI's failures will probably look different; I just mean that complex failure modes worry me more than an AI secretly scheming out its diabolical goal like a Bond villain.)

Bjarte Rundereim:

"mentally ill AI" - That's a good one.

You mean "badly programmed"??

Or it has internalized fake news or skewed facts?

Both possibilities are humanly possible and part of what is, of course.

Garbage in - Garbage out.

One should never equate intelligence and wisdom.

Especially not in programmers.

Jake:

The bigger problem is that we just don't know how to encode motivation or objectives in an AI system. We can only do pattern matching: given these inputs, generate these outputs. We can be "relatively" confident of how such a system will behave on inputs very similar to those we trained the system on, but the whole exercise is about applying the system to novel inputs. We have increasingly good techniques for better generalization, but we still get plenty of unexpected results on unanticipated inputs.
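
A toy sketch of that last point (numpy assumed; the sine target and polynomial fit are arbitrary choices of mine): fit a flexible pattern-matcher on one input range, then query it outside that range.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)           # inputs "very similar to training"
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=200)

coeffs = np.polyfit(x_train, y_train, deg=9)   # a flexible pattern-matcher

x_in = np.linspace(0.0, 1.0, 5)                # familiar inputs
x_out = np.linspace(2.0, 3.0, 5)               # novel inputs
print("in-distribution:     ", np.polyval(coeffs, x_in).round(2))
print("out-of-distribution: ", np.polyval(coeffs, x_out).round(2))
```

In range the fit tracks the sine wave; out of range the predictions blow up. That's the unexpected-results-on-unanticipated-inputs problem in miniature.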

Bjarte Rundereim:

Bigger than what?

Bigger than the obvious danger of misleading and confusing the public with skewed facts and fake news?

Bigger than the danger of wasting large amounts of resources on something that is both uncontrollable and unsafe?

Perhaps there is another goal and perspective here?

Pure science?

Why not make sure of the output, before unleashing it on the public?

Adam V:

Robin, do you see your view as being in tension with the community Metaculus prediction (https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/)? If so, what do you think explains your differing opinions?
