19 Comments

"We might similarly set a prior over parameter settings and track changes in those prior values."

Practically speaking, you can't do that. Every time you train a network you're going to get wildly different values of almost all of your parameters. The value of an individual parameter is meaningless and random, so tracking it would serve no purpose.

Here's something you probably don't know. In practice, the "fully trained" network has almost identical edge weights to the initial "randomly initialized" network. The training makes only small changes, essentially just fine-tuning the random initialization. This is because the weight space (the vector space of all the weights) is so high-dimensional that wherever you are in weight space, you're *close* to one of the very many viable solutions. So it only takes a small adjustment from the random initialization to reach the nearest viable solution.

This "small adjustment" makes a big difference in the network's behavior, but it is small in terms of the absolute sizes of the weight updates.
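The closeness-to-initialization claim can be illustrated with a toy overparameterized model (my own sketch, not from the comment): run plain gradient descent on a linear model with far more parameters than data points, and the training loss drops to essentially zero while the weights move only a short relative distance from their random starting point. All the numbers and names here are illustrative.

```python
import math
import random

random.seed(0)
n, d = 10, 500  # 10 data points, 500 parameters: heavily overparameterized

# Hypothetical synthetic task: targets generated by a random linear map.
w_true = [random.gauss(0, 1) for _ in range(d)]
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [sum(wt * xi for wt, xi in zip(w_true, row)) for row in X]

w0 = [random.gauss(0, 1) for _ in range(d)]  # random initialization
w = list(w0)

lr = 0.001
for _ in range(1000):  # plain gradient descent on mean squared error
    grad = [0.0] * d
    for row, target in zip(X, y):
        err = sum(wi * xi for wi, xi in zip(w, row)) - target
        for j in range(d):
            grad[j] += 2.0 * err * row[j] / n
    w = [wj - lr * gj for wj, gj in zip(w, grad)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

# How far did training move us, relative to where we started?
rel_change = norm([a - b for a, b in zip(w, w0)]) / norm(w0)
loss = sum((sum(wi * xi for wi, xi in zip(w, row)) - t) ** 2
           for row, t in zip(X, y)) / n
print(f"relative weight change: {rel_change:.3f}")
print(f"final training loss:    {loss:.2e}")
```

With these settings the loss converges to roughly zero while the relative weight change stays well under 1: gradient descent can only move the weights within the (low-dimensional) span of the data, so most of the random initialization is simply left in place. This is a linear toy, not a deep network, but it shows the same geometric effect the comment describes.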

I feel like "economy dominated by AI" is a very flawed measure: if AI is cheap, whatever it does won't be a large fraction of economic activity.

I mean, suppose that in 40 years virtually no human is physically involved in any sort of manufacturing, resource extraction, farming, routine janitorial work, or construction, because AI supplies that labour really cheaply. Does that count or not? If we get really good at programming AI without much effort, it seems likely that those aspects of production become a relatively small share of economic activity (in dollar terms).

Doesn't AI only come to dominate economic activity if it's both very useful and also requires some relatively limited resource (e.g., huge amounts of power)? Unless you just mean the amount of economic activity it's involved with, which seems like a bad measure, since then it would be true if everyone wore AI-enabled smart glasses even if the glasses only added a bit of value.

Here's an alternative idea: let's allow marketing hype and emotion-laden doomerism untethered by any prior or indeed any rational governing process to dictate our feelings about AI, and rush to pass sweeping legislation/regulations that will squelch any newcomers in the space to the benefit of big incumbent lobbyists.

I think that large language models are more promising and powerful than Robin suggests. I also think that the fraction of world income that goes to pay for AI systems is not the best metric to measure their progress and impact. Bing tells me Tyler would say a better metric would be how much they increase economic productivity, which has been stagnant for decades despite previous technological innovations.

Questions from a medium-sized man:

Is it possible to imagine an AI system which is more complicated than human knowledge is capable of evaluating?

If so, is it possible to imagine that AI can run beyond the scope of human control?

If only the very most capable AI experts will be able to control AI, who can control the controllers?

Robin, do you see your view as being in tension with the community Metaculus prediction: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/ ? If so, what do you think explains your differing opinions?
