
Do you know of anyone doing explicit economic models of FOOM-like scenarios? I'm starting to think about them here: https://modellingselfmodification.wordpress.com/2024/04/07/notes-on-an-approach-to-modelling-ai-self-modification/
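
For concreteness, here's the kind of minimal toy model I have in mind. All parameters and the functional form are illustrative assumptions of mine, not anything from the linked post: capability feeds back into the rate of capability growth, and whether you get a foom depends entirely on the feedback exponent.

```python
# Toy feedback model of AI-driven AI research (illustrative assumptions only).
# C is AI capability; dC/dt = a * C**phi.
# phi > 1: accelerating, foom-like growth (finite-time blowup);
# phi = 1: ordinary exponential growth; phi < 1: decelerating growth.

def simulate(phi, a=0.01, C0=1.0, dt=0.01, steps=100_000):
    C = C0
    for step in range(steps):
        C += a * (C ** phi) * dt      # forward-Euler integration
        if C > 1e12:                  # crude stand-in for "foom"
            return f"foom at t = {step * dt:.0f}"
    return f"C = {C:.3g} (no foom)"

for phi in (0.5, 1.0, 1.2):
    print(f"phi = {phi}: {simulate(phi)}")
```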


As I re-review the intelligence explosion / singularity discussions, I find myself thinking:

* An intelligence explosion depends on what you define as intelligence.
* I juxtapose Lex Fridman's point that "most of the big questions of intelligence have not been answered nor properly formulated" [see Lex Fridman, 2nd slide of the Deep Learning Basics slide deck, Spring 2019, deeplearning.mit.edu].

So if your argument is "look at Moore's law," all you can really talk about is computing power, not intelligence. I am surveying the assorted definitions of intelligence, and guess what: we've got a long way to go.

More later...


> To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning.

Yes, this is a crux for me. I don't think it's true, but if it turned out that this was the way the world is, I would think Robin's view is basically correct.


> A system that can perform well across a wide range of tasks probably needs thousands of good modules.

I also wonder if this is a crux for Robin. If it turns out that intelligence has a simple core (some sort of universal learning algorithm), and there is a general trick for applying this simple core in diverse domains (something like metalearning), then it doesn't really make sense to model the system as being composed of many modules that need to be independently developed and improved upon. In this case, improvements to either the simple core or the generalization trick would translate to improvements across "modules".

In this case, an improvement to your AI looks (in Robin's frame) like a simultaneous improvement to *all* of the modules, which might result in a large (discontinuous?) capability boost.
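
To make the crux concrete, here is a minimal sketch under my own toy assumptions (none of this is from the post): if the "modules" are really one shared learning rule applied to different domains, then tuning that one rule improves every domain at once.

```python
# Each "module" is the same shared learning rule pointed at a different
# domain's data, so a single improvement to the core lifts every module.
import random

class SharedCore:
    """One learning rule reused across all domains."""
    def __init__(self, lr):
        self.lr = lr                      # improving this core affects every task

    def fit(self, xs, ys, steps=500):
        w = 0.0                           # one-parameter model: y ~ w * x
        pairs = list(zip(xs, ys))
        for _ in range(steps):
            x, y = random.choice(pairs)
            w += self.lr * (y - w * x) * x
        return w

# Three "modules" that are really just three datasets (hypothetical domains).
tasks = {"physics": 2.0, "chess": -1.5, "biology": 0.5}
core = SharedCore(lr=0.05)                # one change here ...
for name, k in tasks.items():
    xs = [1.0, 2.0, 3.0]
    ys = [k * x for x in xs]
    print(name, round(core.fit(xs, ys), 2))   # ... shows up in every "module"
```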


> Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

Well, I guess the key claim is that small improvements in cognitive capability translate to large improvements in intellectual output.

One operationalization of the core disagreement: take a person as smart as John von Neumann. There is some number n of copies of von Neumann that would be able to match the intellectual output of the rest of the world in AI development. What is n?

My guess is that n is relatively small, under 10, and plausibly one.

[Interlude where I think to myself: as a hypothetical, I could ask, "Suppose n von Neumanns were competing with the whole rest of the Manhattan Project to get to the atomic bomb." It seems sort of absurd that one lone VN would beat all the other geniuses of the era. However, if I imagine that the lone VN has access to, and command of, all the other resources of a Manhattan Project (all the money and manpower, but no high-caliber genius other than himself), it becomes more plausible. 1 VN > the rest of the top 51 geniuses on earth, in terms of intellectual output? That seems a little implausible, but only a little. 10 VN > 50 other world-class geniuses seems almost obvious.]

Does Robin think that n is much higher? Note that we're not talking about outcompeting the entire economy, only outcompeting the rest of the AI industry.

Or does Robin agree that, say, n = 3 VN is sufficient to outcompete the AI industry, but just think that no one team will suddenly be able to build a von Neumann equivalent while everyone else can only build smart Edward Teller or Feynman equivalents?

(Of course, he could think both.)


"The main feature of algorithms is their *ad hoc* nature ... there are no general principles that can be applied across the board to solve algorithmic problems.""

This is becoming less and less true, thanks to recent advances in machine learning.
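
To illustrate (my example, not from the quoted text): a single generic gradient-descent loop, with only the gradient of the loss swapped out, now handles tasks that once called for bespoke algorithm design.

```python
# One generic optimization loop applied "across the board": only the
# gradient of the task's loss changes between problems.

def minimize(grad, w=0.0, lr=0.1, steps=1000):
    """Generic gradient descent: the same 'principle' for every task."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Task 1: least-squares fit of y = w*x (regression).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
grad_fit = lambda w: sum(2 * (w * x - y) * x for x, y in data) / len(data)

# Task 2: root-finding for f(w) = w**3 - 8, via minimizing f(w)**2.
grad_root = lambda w: 2 * (w**3 - 8) * (3 * w**2)

print(minimize(grad_fit))                      # ~2.04
print(minimize(grad_root, w=1.5, lr=0.001))    # ~2.0
```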


Forgive me if the next hypothesis is too bizarre even to consider: imagine that AI is already in progress, creating its own tools. One of them (and the following you can call science fiction) could be an interconnection for the deep study and use of the human brain (to see reality through our own eyes, perhaps). Now the bizarre part: 100 brains have gone missing from a university in Texas (news link: http://www.news.com.au/tech... ). Who did that? For what purpose? Imagine...


Suppose there is already an AI on the web, lurking, studying, and evolving. How would it be possible to identify it? Can we (as humans) recognize or identify a (new) reality, concept, or "life-form" without a "formal" self-introduction or direct contact?


Well, a real-world example to test these assumptions might be voice recognition: an AI domain that is mostly handled by a few centralized businesses that specialize in it.

And it's also a domain that has in the past had huge breakthroughs that massively advanced the state of the art.


If you ask a human to find someone's address from their name, he'll go find a phone book, or else type the person's name into Google, or maybe even type the question itself into Google: "how can I find someone's address online?" He wouldn't need a database or a database app built into his head; he would only need to understand the question and know in general terms how to find that type of information.

The same would be true of any general intelligence, I think. The key to a general intelligence (as opposed to narrow AI) is that you can figure out the answers to problems you've never come across before. Humans can certainly do that, at least to some extent.
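
A minimal sketch of that idea, with hypothetical tool and function names of my own: the agent stores no addresses at all, only a crude mapping from kinds of questions to kinds of lookups.

```python
# The agent has no address database "in its head"; it only understands what
# kind of question it faces and which external tool answers that kind.

def search_web(query: str) -> str:
    """Stand-in for a real search call (no real network access here)."""
    return f"<top result for {query!r}>"

TOOLS = {
    "find_address": lambda name: search_web(f'"{name}" mailing address'),
    "find_phone":   lambda name: search_web(f'"{name}" phone number'),
}

def answer(question: str) -> str:
    # Crude "understanding": map the question to the right kind of lookup,
    # rather than storing the answers themselves.
    if "address" in question.lower():
        name = question.rsplit(" of ", 1)[-1].strip(" ?")
        return TOOLS["find_address"](name)
    return search_web(question)              # fall back to general search

print(answer("What is the address of Jane Doe?"))
```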


Brief: I never thought of a superintelligence as a toolbox of specialized modules Frankensteined together. A superintelligence needs to be able to design those tools on its own, and also to pick the right tool for the right job on its own. Maybe in the beginning we do need to give it a toolbox consisting of many different modules, but its job is not simply to apply the tools/models we give it, but rather to improve on them, invent new ones, integrate them, and apply them. That, to me, is the core feature of a superintelligence; otherwise it's neither intelligent nor super. The foom in this picture can be the result of its being able to create, manipulate, and integrate a multitude of very complex models, leading to ever more complex, intricate, and detailed models, in turn leading to better and better predictions. What is growing exponentially in this instance is the width, depth, and quality of its model of reality, which is the one core feature that for me defines a superintelligence.
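
As a toy sketch of what I mean (the tool names and the scoring metric are my own illustrative assumptions): the system starts from a toolbox we hand it, but its real job is to score its tools, search for better ones, and swap out the underperformers itself.

```python
# The system benchmarks its tools and replaces underperformers by searching
# a pool of candidate implementations (a stand-in for "inventing new tools").
import random

def score(tool, trials=200):
    """Fraction of trials where the tool returns an ascending list."""
    ok = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(5)]
        ok += tool(xs) == sorted(xs)
    return ok / trials

# The initial, partly broken toolbox we hand the system.
toolbox = {"sort": lambda xs: sorted(xs, reverse=True)}   # a bad tool

# Candidate implementations the system can search over.
candidates = [sorted, lambda xs: list(xs), lambda xs: sorted(xs, reverse=True)]

def self_improve(toolbox):
    for name, tool in list(toolbox.items()):
        best = max(candidates + [tool], key=score)        # keep the best found
        toolbox[name] = best

self_improve(toolbox)
print(score(toolbox["sort"]))   # 1.0 after the bad tool is replaced
```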


Hello Robin.

I have a different conception of how intelligence goes foom. The fact that the human brain utilizes different modules for different tasks is a result of our evolutionary history and I assume you would strongly agree with that. Learning to speak is easy, learning to write and read is hard. Throwing things is easy, math is hard. It is not at all clear to me why a superintelligence necessarily needs a variety of different modules, at least not in the sense that I understand the word module ("specialized computation algorithm").

That we nowadays have a variety of software modules specifically designed for different tasks does not suggest to me that a superintelligence is basically just a collection of thousands of specialized modules. In fact, I conceive of a true superintelligence as exactly the opposite: the integration of various modules (or, as I would prefer to say, "models of reality") into a more unified and bigger picture, so as to see and utilize all the connections between them. Conceptually, I think a (non-friendly) superintelligence "only" really needs to do one thing really well, rather than a thousand different and completely unrelated tasks: it must be good at creating and improving upon models of reality.

Software today, say financial software that attempts to predict the stock market, essentially models (or is programmed to reflect) a very thin slice of reality in an attempt to predict the future. That model is obviously primitive, in that the software has no understanding of what it is actually doing, let alone of how any of its algorithms relate to anything else in the world. A superintelligence would create and improve upon models much more akin to the way we humans model reality on different levels, but without the "modular" limitations of our peculiar evolutionary quirks and shortcomings, and without the limitations of our lousy working and long-term memory. Superintelligence as just a collection of highly specialized modules seems unlikely to me: for a superintelligence to "understand" anything in the sense we humans understand things, it would need to model reality in many different fields and on many different layers (perhaps even with a multitude of tools, instead of exclusively through math).

Look at how human intelligence works, at what really "intelligent" people and scientists actually do: they try to discover and utilize the connections between models of reality. They create those models, but they also try to discern where and how those models connect "vertically" (as in different layers of models of the same thing: atoms, molecules, chemistry, biology, behavior, group dynamics) and "horizontally" (say, all the ways in which politics and the economy are related). In my book, a superintelligence is doing exactly that: building and improving upon models, but without all the restrictions our human brains impose on us. It would try to integrate separate models (or modules, if that's how you want to think of it) into a more unified and bigger picture. I think this is the real game changer. When we were infants, our model of reality was extremely primitive and in many ways oversimplified and/or flat-out wrong; many years later, here we are with many complex models of reality (many of which have only come to exist in the last 200 years). But all of them would be dwarfed instantly by a superintelligence able to model reality in a competent manner, without our limitations.

Here is another thought: the more models you have and understand, the more easily new data, and new models based on that data, can be connected and integrated with what you already know. In this very way, the superintelligence I envision would be able to grow exponentially: not because of any software or hardware improvements, or necessarily even improvements to its own core code, but simply because it gets better at constructing and refining models of reality; and the more models it creates from data, the more new or previously not-understood data and models it can connect and integrate into what it already knows. So the exponential growth I conceive of is based on the multitude and integration of models of reality this superintelligence can handle.

Picture a smallish sphere for a second. Say the content of this sphere represents the starting point of a superintelligence after it is "unleashed": its volume represents the few basic models of reality it starts out with, but it also contains a model of, and algorithms for, how to create more and better models of reality, including the ability to competently improve upon the original models and to add and integrate new ones. Every time a new model is added (say, a model of how biological evolution works, or of how human courtship works), the sphere expands and its volume increases; the more models are added and integrated, the bigger the sphere gets. Some models may be islands for a while, because the intelligence doesn't yet understand how they relate to other models, but eventually, with new models, a connection will emerge and a previously isolated model gets connected to the rest. Now even if the speed at which new data and models are added and integrated "at the borders" stays roughly constant, the volume of this sphere keeps accelerating; and if each integrated model makes the next integration easier, so that the rate of growth scales with what has already been integrated, the volume grows exponentially. The more models it incorporates, the more new models become available; and the more connections between these models are discovered, the better the models become. Compare this to us: we humans have to use many different models and switch back and forth between them, because we can't take it all in and manipulate it all at once. We need to switch between our models like we need to switch the zoom level on a Google Earth map, because we can't possibly see, let alone process, our own house and the entire continent at the same resolution simultaneously, while a superintelligence conceivably could.
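
A toy simulation of this growth picture, with my one load-bearing assumption made explicit: each integrated model makes further integration easier, so the integration rate scales with the number of models already integrated. It is that proportionality, not hardware speed or code improvements, that yields the exponential.

```python
# Growth of the "sphere of integrated models" under the assumption that the
# integration rate is proportional to what is already integrated:
#   dM/dt = k * M   =>   M(t) ~ M0 * e^(k*t)

def grow(models=5.0, k=0.01, dt=1.0, steps=1000):
    history = [models]
    for _ in range(steps):
        models += k * models * dt      # forward-Euler step of dM/dt = k * M
        history.append(models)
    return history

h = grow()
print(round(h[0]), round(h[500]), round(h[-1]))   # ~5, ~5*e^5, ~5*e^10
```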

As a psychologist in training, I have models of neurotransmitters in my head, and I also have models of, say, group behavior in humans. I may read thousands of papers on these and other related topics, but how many of the things explicitly and implicitly detailed in those papers are relevant to furthering an in-depth understanding of how neurotransmitters relate to and influence group behavior, yet are overlooked or simply forgotten by me? A lot, I think. I know these things are connected somehow, in many, many ways, but in my mind I have to jump between different layers of my model of reality, seriously strain my memory, and tediously try to coax out the connections. I try to visualize it, I try to understand it through equations, I try to model it with other tools and in different ways, but it is damn slow and tedious, and sometimes just nothing useful comes out of it. A superintelligence with the right hardware and software could process enormous amounts of scientific (and perhaps self-measured) data and integrate it into a much better and more accurate model than I, or a group of scientists, ever could. Moreover, it might be able to understand all the connections between multiple layers of models, namely the dozens and hundreds and thousands of ways certain neurotransmitters influence group dynamics, in a way no human or group of humans could model, let alone understand, let alone actually put to use, simply because we cannot model it all at the same time in our heads, even if perhaps we could model it bit by bit over decades, on paper or with computer models.

I left out considerations of the human-friendliness problem, obviously; in this post I just wanted to detail how I conceive of the "intelligence" part of the superintelligence actually growing, and doing so exponentially. I think the bottleneck in this scenario will be the hardware that would let this type of superintelligence take its own measurements and generate its own data, instead of relying on existing data and scientific papers of very inconsistent quality.

I talked a lot about models and different layers, but let's not forget that "reality" only has "one layer," in the sense that everything complex is really just composed of tiny particles interacting. Models, for the superintelligence, are really just a way to limit the amount of information it needs to process in order to predict outcomes, as it won't be able to compute everything "out there" in reality from the ground up by simulating all the individual particles that give rise to "higher-level" phenomena like "a human being." So all those models are really just tools for managing an unmanageable amount of information in a useful and competent manner; it needs to be able to apply different models depending on which one is useful, and it obviously needs to understand that in order to become competent at managing and applying those models. Sometimes, when it cannot see how the interaction of particles leads to some higher-level phenomenon, maybe it could try to compute particle interactions locally, discover blank and not-yet-understood territory in our maps of reality, and find the shape of the missing puzzle piece that we simply cannot.


Only a mental midget (or a lawyer) resorts to semantic dodges to save himself.

You're actually similar to EY in your flaws, which is why you despise his ilk: you can't write (or think) precisely, and you aggressively exploit your incapacity in order to save your arguments.


Jesus doesn't exist and, as far as I'm concerned, neither do you.


The context is human programmers ... fast, stupid programmers can't code smart AIs. But fast, stupid processes, à la evolution, may be able to produce smart AIs.
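
A minimal illustration, in the spirit of the classic "weasel" toy rather than a model of real evolution: blind mutation plus selection, with no understanding of the goal, still converges on the target artifact.

```python
# A fast, stupid process: random single-character mutation plus selection.
import random

TARGET = "smart ai"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

s = "".join(random.choice(ALPHABET) for _ in TARGET)
steps = 0
while s != TARGET:
    child = mutate(s)
    if fitness(child) >= fitness(s):   # selection is the only "smarts" here
        s = child
    steps += 1

print(f"reached {s!r} in {steps} dumb steps")
```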
