Let's say human-assisted FOOM is HAFOOM. The flow of power from human designers to AIs isn't as one-way as it might seem. If AI improvements are coming faster, why is that? Is there a big stock of humans rushing to join the AI field? (Wouldn't that be HAFOOM?) Is it that AI experts are becoming better faster? (Wouldn't that be HAFOOM?) My specific sense of HAFOOM is that it's becoming easier to play with fundamental AI design ideas (there are modular kits now), and easier to test the AIs because of raw performance increases and data availability, so it's becoming easier for humans to advance AI tech. The technology of AI is AI-reflective and improvement-assistive. So the human input (and I don't mean there aren't geniuses involved, just that there always have been and progress wasn't this fast) is becoming like checking off one skill category at a time.
You apparently think that you have evidence against my position - that there's been considerable corporate progress over time. However, your cite is all about the significance of management practices, and it offers little-to-no temporal data. In support, you argue that "experiments like Bloom's on the big benefits from bringing in better management and consultants strongly imply that there is no such trend". It makes it seem as though you don't have any relevant data and so are reduced to making things up. I don't see how the idea that there is often room for improved management practices in any way refutes the idea of extensive corporate progress over time. That's just a mistaken inference on your part.
Those firms 100 years ago were in a world which was much smaller and which lacked most of the people and resources corporations can use today.
I don't think corporate progress is at all obvious, as I just pointed out, and you should have much better evidence than 'well, corporations are bigger than they were a century ago' (when *everything* is bigger than a century ago).
Giving corporate evolution credit for things like the invention of semiconductors or the increase in global population is pretty strange; if corporations have evolved productivity 'at quite a rate', we should see them making far better use of inputs than an old corporation would have for the same outputs (pretty much the definition of 'productivity'), whereas, as my link shows, we don't really seem to see any convergence or change over time.
Individual companies - like individual organisms - are quite likely to get worse over time, due to senescence and decay. That does not preclude evolutionary progress, though. 100 years ago, most firms were tiny and impotent compared to the huge and powerful corporations we have today. Corporate progress seems obvious. I am not sympathetic to the idea that notions of corporate progress should necessarily exclude progress made by humans or technology. If the constituent parts get better, the whole gets better as well.
What evidence is there that corporations systematically on average get better at 'quite a rate' (as opposed to the humans or the capital & technologies they incorporate or create, which are available to non-corporate entities as well, and are not the corporations themselves)? When I look at the economics literature on dispersion of productivity (eg https://www.gwern.net/docs/... ), what they usually note for even the most standardized and comparable activities is shockingly large, persistent, long-term differences in productivity across corporations; and experiments like Bloom's on the big benefits from bringing in better management and consultants strongly imply that there is no such trend.
Corporations are large, self-improving systems. Many of them include machine intelligence as components. They do get better at quite a rate. Many of them left individual human performance behind long ago. They do still include some humans, but there is considerable variation in how much automation they employ. "FOOM" might be silly - but self-improving systems are real and important. Will one organization draw ahead of the pack? It seems as though that already happened - with governments. Will one government draw ahead of the pack? It hasn't really happened so far - but these are still early days. It seems difficult to say with much certainty.
Isn't it progress towards recursive self-improvement if these language models are now better at writing code?
I see "value drift" as appropriate name for scenario of a value specification that works fine in familiar situations but then becomes deadly when the system is much more capable. With a system that was widely used, you'd usually get many warnings of such specification problems as a system gradually became more capable.
FOOM is and has always been irrelevant. Any highly intelligent agent, or perhaps just a reasonably intelligent agent, that can clone itself, works decisively towards an underspecified outcome, doesn't require rest or sleep, has no time preference, and has no moral constraints is gonna kill almost everybody. This is so obvious to me that I don't understand why it's contested.
I agree with a fair amount of this post: foom is not too likely, and I'm unsurprised that disagreements remain the same after new evidence.
It's misleading to imply that value drift is the top concern related to foom. My main concern if we get foom is about mistakes in the initial values or rules about how to generate values. I expect that most people who worry about foom think such early mistakes are more likely to doom us than later value drift.
I've recently gotten a better understanding of where I disagree with Eliezer. He expects we're on the verge of discovering a "core of general intelligence" that humans have but which current AI lacks. Whereas I expect that anything I'd call such a core is already at least partly understood and implemented, and that further advances will come from features that I'd classify as being added on top of such a core - loosely resembling how humans advanced by producing culture, writing, etc.
Yeah, I think the complete lack of progress on System II type thinking is my biggest disconnect with the AI risk folks. Maybe it will turn out to be much easier than we predict, but I don't think recent progress should cause us to move any predictions of improvement earlier.