Stuck In Throat

Let me try again to summarize Eliezer’s position, as I understand it, and what about it seems hard to swallow.  I take Eliezer as saying: 

Sometime in the next few decades a human-level AI will probably be made by having a stupid AI make itself smarter.  Such a process starts very slow and quiet, but eventually "fooms" very fast and then loud. It is likely to go from much stupider to much smarter than humans in less than a week.  While stupid, it can be rather invisible to the world.  Once smart, it can suddenly and without warning take over the world. 

The reason an AI can foom so much faster than its society is that an AI can change its basic mental architecture, and humans can’t.  How long any one AI takes to do this depends crucially on its initial architecture.  Current architectures are so bad that an AI starting with them would take an eternity to foom.  Success will come from hard math-like (and Bayes-net-like) thinking that produces deep insights giving much better architectures.

A much smarter than human AI is basically impossible to contain or control; if it wants to it will take over the world, and then it will achieve whatever ends it has.  One should have little confidence that one knows what those ends are from its behavior as a much less than human AI (e.g., as part of some evolutionary competition).  Unless you have carefully proven that it wants what you think it wants, you have no idea what it wants. 

In such a situation, if one cannot prevent AI attempts by all others, then the only reasonable strategy is to try to be the first with a "friendly" AI, i.e., one where you really do know what it wants, and where what it wants is something carefully chosen to be as reasonable as possible. 

I don’t disagree with this last paragraph.  But I do have trouble swallowing prior ones.  The hardest to believe, I think, is that the AI will get smart so very rapidly, with a growth rate (e.g., doubling in an hour) so far out of proportion to prior growth rates, to what prior trends would suggest, and to what most other AI researchers I’ve talked to think.  The key issues come from this timescale being so much shorter than team lead times and reaction times.  This is the key point on which I await Eliezer’s more detailed arguments.
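To make the scale mismatch concrete, here is a minimal arithmetic sketch comparing cumulative growth over one week at an hourly doubling time versus a multi-year doubling time; the specific doubling times are illustrative assumptions, not figures from the post.

```python
# Illustrative arithmetic only: cumulative growth over one week at different
# doubling times. The specific doubling times are assumptions for illustration.

def growth_factor(hours_elapsed, doubling_time_hours):
    """Multiplicative growth after hours_elapsed, given a fixed doubling time."""
    return 2 ** (hours_elapsed / doubling_time_hours)

week = 7 * 24  # hours in one week

print(growth_factor(week, 1))              # doubling hourly: ~3.7e50-fold in a week
print(growth_factor(week, 24 * 365 * 15))  # doubling every ~15 years: ~1.001-fold in a week
```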

Since I do accept that architectures can influence growth rates, I must also have trouble believing humans could find new AI architectures anytime soon that make this much difference.  Some other doubts: 

  • Does a single "smarts" parameter really summarize most of the capability of diverse AIs?
  • Could an AI’s creators see what it wants by slowing down its growth as it approaches human level?
  • Might faster brain emulations find it easier to track and manage an AI foom?
  • http://reflectivedisequilibria.blogspot.com/ Carl Shulman

    “Sometime in the next few decades a human-level AI will probably be made by having a stupid AI make itself smarter…[s]uch a process starts very slow and quiet, but eventually “fooms” very fast and then loud. It is likely to go from much stupider to much smarter than humans in less than a week. While stupid, it can be rather invisible to the world.”

    These elements don’t seem to be of core importance.

  • Harpenden

    What sticks in my maw: a ‘much smarter than human AI’ is defined as ‘basically impossible to contain or control’. But in the next paragraph, ‘the only reasonable strategy is to try to be the first with a “friendly” AI, i.e., one where you really do know what it wants’. So: logically this ‘friendly’ AI can’t be ‘much smarter than human’?

    The only way to ensure an AI is friendly is to limit its intelligence and its self-improving capability.

  • Tiiba

    Personally, I think that the most important thing about a human-level AI is that it knows how it works. As such, it can grow in intelligence without much new theory, by absorbing matter. It could simply make copies of itself, but unlike humans, these copies would share all their goals and all their knowledge. No need to spend two decades training each copy with redundant information. They could cooperate so closely that they would appear to be a single brain. So by copying itself to a billion computers, an AI could become a billion times smarter without any changes in architecture.

    This point seems so obvious to me that I suspect it must be naive in some way…

  • Tiiba

    Harpenden: Are you equivocating between control and prediction?

  • Cameron Taylor

    “A much smarter than human AI is basically impossible to contain or control; if it wants to it will take over the world, and then it will achieve whatever ends it has.”

    Seems reasonable to me.

  • Nick Tarleton

    Harpenden: Knowability of FAI

  • Cameron Taylor

    Tiiba, I assume you mean ‘a lot’ smarter, rather than a billion times. Some of the processing power could also be used to run single instances of the AI at faster speeds rather than just duplicates, another human weakness.

  • Ian C.

    The idea that the AI could take over in a matter of days: I agree that it could improve itself to be impossibly smart in that time, but you can’t understand the universe “philosopher style,” by sitting there thinking about it. You have to do experiments, build LHCs etc, and that would take time.

  • Nick Tarleton

    Why do you need the LHC to take over the world? Superintelligence + massive simulations + existing research should be more than sufficient to design nanotech.

  • http://www.mccaughan.org.uk/g/ g

    Ian C, Eliezer has said before that he thinks a sufficiently smart thinker could learn all we’ve learned about the world from far less experimental data than we’ve used. See http://www.overcomingbias.com/2008/05/faster-than-ein.html . If we make any sort of AI, we’ll presumably give it *some* access to information about the world and about us; Eliezer worries that even a very little would likely prove too much.

  • http://pancrit.org Chris Hibbert

    Tiiba: “They could cooperate so closely that they would appear to be a single brain.”

    I think this is mistaken. If I had an unlimited number of copies of myself, I think I could make full use of about five of me. Beyond that we’d rapidly get into diminishing returns. I have lots of things I want to do, and I’m a good manager, but there are also lots of things I’m not particularly good at. Careful coordination and a detailed understanding of my desires, strengths and weaknesses isn’t going to make up for the coordination costs in making full use of even a dozen of myself.

    When the AGI reaches the stage where it is as smart as any human, it might be better at managing and have broader talents than any mortal, but that doesn’t sound like a substantial argument that it would be able to get anything close to a linear speed-up by replicating itself and suffering the coordination costs among multiple entities.

    If it surpasses this stage, it may be able to invent coordination mechanisms that will overcome these costs, but now we have to talk about how our guesses about the likelihood and timeliness of these developments affect the timing of the foom Eliezer is expecting “soon”.

  • Red

    @Chris

    Some groups of tasks are highly parallelizable, and some aren’t. If it turned out that further advances in AGI after human-level intelligence required insight in many different, independent domains (e.g., computer architecture, energy generation, materials sciences, physics of Matrioshka brains, mathematics, non-mainline AGI algorithms [e.g., how do you merge knowledge between instances?]), then you would expect there to be something approaching linear speedup, since the communication required between instances would be very little.

    If there is only 1 hard problem and only 1 way of radically improving intelligence, then your point stands, but that doesn’t seem likely to be the case.
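Whether Chris’s coordination costs or Red’s near-independent domains dominate comes down to how much of the work is inherently serial. A minimal Amdahl’s-law sketch, with serial fractions assumed purely for illustration, shows how sharply that assumption matters:

```python
# Amdahl's law: speedup from n copies when a fraction s of the work is
# inherently serial (coordination, shared decisions, merging results).
# Both serial fractions below are assumptions chosen only to illustrate
# the two positions; neither is a measured figure.

def amdahl_speedup(n_copies, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_copies)

for s in (0.20, 0.001):            # heavy coordination vs. nearly independent domains
    for n in (5, 100, 10000):
        print(f"serial={s}, copies={n}: ~{amdahl_speedup(n, s):.1f}x speedup")
# With 20% serial work the speedup is capped near 1/s = 5x no matter how many
# copies; with 0.1% serial work 100 copies give ~91x, but the cap is still 1,000x.
```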

  • http://profile.typekey.com/hopefullyanonymous/ Hopefully Anonymous

    I don’t trust either of you. I think we have an example here of a performed disagreement masking (and promoting) a fake consensus. Unlike either of you, I’m very skeptical of this paragraph: “In such a situation, if one cannot prevent AI attempts by all others, then the only reasonable strategy is to try to be the first with a “friendly” AI, i.e., one where you really do know what it wants, and where what it wants is something carefully chosen to be as reasonable as possible.”

    I think you’re both more Buck Rogers than you are Howard Hughes. You want to create a friendly AI for the same reason you want colonies on Mars: because it’s a cool science fiction story, not because it will maximize our persistence odds. I think there are more reasonable strategies than to attempt to be the first to create “a much smarter than human AI”. In other words, I think your fake consensus is BS, and your performed disagreement primarily serves as an attempt to mask your irrational desire to advance the notion that “the only reasonable strategy is to try to be the first with a “friendly” AI”.

  • http://profile.typekey.com/hopefullyanonymous/ Hopefully Anonymous

    “more reasonable strategies” I’m using “more” numerically here, not hierarchically. As in, “there are likely other reasonable strategies”, not “there are strategies definitely more reasonable than”.

  • Nick Tarleton

    Chris: Consider the ability to synchronize copies. Even in a situation like Red mentions, studying could be parallelized and still allow for effective synthesis.

  • Ian C.

    g: I don’t think even an AI genius could deduce the laws of the universe from so few examples (a few frames of a webcam). The number of examples you need to discover a law depends not only on how smart you are, but also on how many similarities and differences there are between the particular objects you observe. You need enough examples to make clear which attributes are essential and which are inessential, which change and which stay the same, so you can theorise a cause.

    Intelligence alone is not enough for a being to dominate us; it would still have to do a lot of observation, or at least have access to our observations. Pure reason apart from observation can only tell you what is impossible (what is a logical contradiction), not what is true.

    Nick: LHC was just an example. I am not aware of the current state of nanotech research, it’s quite possible there is enough data out there already.

  • Cameron Taylor

    Chris, every time you reach a point where you have two independent tasks to perform, fork instead of prioritising. Return results then halt. >5 is trivial.
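A minimal sketch of the fork-and-join pattern Cameron describes, using ordinary Python threads; the subproblems are placeholders rather than anything specific to an AI:

```python
# A minimal fork-join sketch: fork on independent subtasks instead of
# prioritising, collect the results, then halt. The subproblems are
# placeholders purely to show the control structure.
from concurrent.futures import ThreadPoolExecutor

def solve(subproblem):
    # Stand-in for working on one independent task.
    return f"result of {subproblem}"

subproblems = ["task A", "task B", "task C"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(solve, subproblems))  # fork: run the tasks concurrently
print(results)                                    # join: results collected, then halt
```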

  • mjgeddes

    *sigh*

    Actually it’s not the hard take-off I disbelieve (in fact I’ve always believed it all too vividly – it’s all too plausible).

    It’s the idea that the SAI can be ‘unfriendly’ (world destroying) I still find very hard to swallow. My position still remains that the ‘unfriendly SAI’ concept is ‘probably nonsense’. This is because EY still has not disproved the universal morality idea.

    Sure, I now concede that (1) morality is not part of intelligence, and (2) morality has to be built in from the start (you can’t teach an empty mind). But even conceding (1) and (2), EY’s dire warnings (the possibility of world-destroying unfriendly AI) still don’t follow at all. That’s because the possibility of a universal morality is actually independent of (1) and (2).

  • anon

    mjgeddes: If morality does not follow from sufficient coherent introspection (which you seem to grant that it doesn’t), in what sense would the existence of a “universal morality” be helpful? The AI will follow its own morality, which is whatever its program precisely says it is, which depends on the programmer. Have EY’s posts on the various fake AI utility functions not convinced you that many of the attempts people have made at simply defining what the program should want to do would yield disastrous consequences in a Strong AI?

  • http://cabalamat.wordpress.com/ Philip Hunt

    The hardest to believe, I think, is that the AI will get smart so very rapidly, with a growth rate (e.g., doubling in an hour) so far out of proportion to prior growth rates, to what prior trends would suggest, and to what most other AI researchers I’ve talked to think.

    Doubling in an hour doesn’t seem to me to be impossible if an AI has and understands its own source code, because it could code a more efficient version of itself. E.g. a program written in Python might run a tenth the speed of the equivalent in C++, and recoding that in efficient machine code might double the speed again. Or an AI program could tweak some of its search algorithms to make it run faster.

    Or it could acquire new hardware to run on. The AI might be running on an average desktop PC (which because this is the future is a million times faster than current PCs). If the AI is connected to the Internet, it can probably remotely take over millions of PCs around the world.

    So an AI could be able to increase its speed and intelligence quickly. How could it use this to take over the world? It could use either physics or psychology. If it has a good understanding of psychology, and is connected to the net, it could almost certainly persuade humans to give it power.

    Could it obtain power through physics? As Ian C points out, this requires a hardware interface: “You have to do experiments, build LHCs etc, and that would take time.” If the AI has control of nanotechnology able to perform experiments or assemble things, it could conceivably take control through physics.
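The 10x and 2x figures Philip cites are guesses, but the underlying point, that the same computation can often be recoded to run much faster, is easy to illustrate. A toy sketch, with the timing numbers depending entirely on the machine it runs on:

```python
# Toy illustration of "recode yourself to run faster": the same result computed
# first with a naive loop, then with a closed-form replacement. Timings vary by
# machine; the 10x/2x figures in the comment above are guesses, not measurements.
import timeit

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    return n * (n - 1) // 2   # closed-form equivalent of the loop

assert slow_sum(10**6) == fast_sum(10**6)
print(timeit.timeit(lambda: slow_sum(10**6), number=10))   # seconds for the loop version
print(timeit.timeit(lambda: fast_sum(10**6), number=10))   # seconds for the closed form
```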

  • http://cabalamat.wordpress.com/ Philip Hunt

    Another way an AI could take over through physics has just occurred to me…

    Consider: it’s the year 2050, and everything is a lot more automated than it is today. Factories are automatic with few employees. Transportation is automated — trucks and railways move standard containers around with little or no human intervention. They don’t have drivers, typically, and if they do have drivers, the drivers are told where to drive by the on-board navigation systems. Warehouses too are automated. When a lorry arrives at a factory, the workers use computers to check what is to be loaded and unloaded. All the payments for these transactions are automated.

    If the AI controls the world’s computers, or a good proportion of them, it could probably build a robot army before anyone notices.

  • luzr

    Philip Hunt:

    I am pretty much sure that the very first GAI worthy of the name will already be written in C++, with most possible tweaks included. See, it will be cutting-edge technology; human programmers will have to push things to the limits to get anything reasonable working.

    Most likely the same thing applies to the idea of “going to the internet”. The AI will already be running on a pretty big LAN cluster. Going to the internet sounds nice, but I think it will not be a simple thing because of latency, which will still be an order of magnitude worse than on a LAN.

    Also, we still know very little about nanotechnology, but I doubt you can perform LHC-style experiments using nanobots. The energies involved in such experiments are substantial, and there is likely no workaround for that.

  • bbb

    I want to criticize Eliezer’s hypothesis that an AI will, after rapidly developing itself, be able to take over the world. I want to do this based on Hayek’s distinction between two kinds of knowledge: knowledge as theories, and knowledge about specific circumstances of time and space. It seems to me that Eliezer’s hypothesis is based entirely on the first kind of knowledge, and wholly neglects the second kind.

    To show why I think Eliezer’s hypothesis is wrong, let me first try to state a missing theoretical link in the hypothesis as I stated it. Why would a superintelligent AI in fact be able to take over the world? Where is the link between intelligence and world domination?

    As I see it, Eliezer seems to suppose (correct me if I am wrong) that the AI will use its higher intelligence to simulate various possible futures and try to influence the course of actions in the world according to its own interests. It might also be able to expand its range of possible actions by engaging in market activity, in which it would have an enormous advantage over its human competitors.
    But the key point is the hypothesis that the AI would use its higher intelligence to look further into the future and calculate more and more ramified consequences of its actions than humans would be able to. With its enormous intelligence it would also be able to calculate how humans would behave in response to each other and to its own actions.

    The way in which the intelligence would calculate the future and pick the preferred outcome would be to simply simulate all relevant and possible futures, given information about the conditions at the starting point and its own actions. That is the same mechanism it would use to improve itself: first it would construct different “better” versions of itself, using theoretical insights, but then it would have to “test” their performance in reaching the AI’s goals in a simulated version of the world. An indication that this is Eliezer’s view is his frequent pointing to computing power.

    If this is Eliezer’s argument, I think it is flawed, because it fails to take into account the impossibility of acquiring the relevant dispersed specific knowledge of space and time, which would be an absolute necessity for accurate simulations of the future. However, as Hayek stated, it is impossible to acquire all the relevant dispersed knowledge that would be needed to effectively plan the future. Neither increases in computing power and the growth of the internet, nor better statistical modeling techniques, will change this fact.

    I think that Eliezer completely neglects this fact, because he focuses only on “theoretical” knowledge, knowledge which can be stated as “if-then” hypotheses and mathematical formulas, and thus on a very “abstract” notion of intelligence. However, the “effectiveness” of an intelligence in action, the extent to which its actions will be successful according to its own goals, depends only to a small extent on the body of abstract hypotheses it has accumulated, and to a much larger extent on how much “information” about the world it is able to incorporate into its predictions.

    To be sure, abstract hypotheses can themselves be a store for a lot of specific knowledge. Humans store a lot of information about the world in “if-then” rules of conduct. However, it is impossible to use these abstract rules to further improve the efficiency of human conduct or intelligence itself. An abstract simulation of the world with the goal of finding better rules of conduct for the AI cannot gain any new knowledge about the world by using this rule-stored knowledge. The efficiency of new rules of conduct, or of cognitive or behavioral algorithms, IN THE REAL WORLD cannot be tested inside a SIMULATION of the real world. Such a simulation will only be able to ascertain their comparative efficiency in the simulated world, not in the real world. This is so because of the crucial importance of dispersed, specific knowledge.

    An abstract discussion of the evolution of intelligence that fails to take into account the role of this information and focuses on abstract knowledge only misses the point.

  • http://profile.typekey.com/hopefullyanonymous/ Hopefully Anonymous

    I think it may be more reasonable to suppose that we’ve already been supplanted by something smarter than us, various types of networked groups of humans (also networked groups of humans, technologies, nonhuman animals, etc.). Just like we as humans fantasize about substrate jumping to improve our persistence odds, leaving carbon neurons behind, I think it’s possible markets and other networks/algorithms that are composed in part of humans may also be substrate jumping. To believe that we’re the top of the chain of intelligent systems built from dumber networked nodes might be similar to earlier beliefs that the Earth is the center of the universe. Rather than thinking we’re JUST IN TIME to avoid obsolescence, it might be more likely that we’re too late. Oppose accelerating non-human intelligence? That might be like opposing markets: the opposition will get poor and then become an irrelevant challenger.

  • http://cabalamat.wordpress.com/ Philip Hunt

    luzr: I am pretty much sure that the very first GAI worthy of the name will already be written in C++, with most possible tweaks included. See, it will be cutting-edge technology; human programmers will have to push things to the limits to get anything reasonable working.

    I see the Singularity happening c. 2040-2050. I think it’s unlikely that C++ will be a commonly-used language by then. Processors are going multi-core, and new languages will become popular that emphasise parallel processing. (This doesn’t affect the gist of your argument, of course.)

    I doubt you can perform LHC kind of experiments using nanobots

    You may well be right there. However I do think that nanobots will be able to do nano-scale engineering and will therefore be able to make progressively more capable versions of themselves.

  • http://profile.typekey.com/mporter/ Mitchell Porter

    Robin paraphrasing Eliezer: ‘In such a situation, if one cannot prevent AI attempts by all others, then the only reasonable strategy is to try to be the first with a “friendly” AI’

    You can also try to make people understand the issue and take it seriously, in order to increase the probability that a winning AI that is not your own creation will nonetheless be friendly.

  • luzr

    bbb:

    Bravo.

    I would add two points:

    – Even today, there are people who are significantly smarter than the rest of the population. While they usually do comparatively better, it does not seem like they are going to take over the world anytime soon, even as a group. (Sometimes I am afraid that the contrary is true.) Also, there are a lot of very smart people who obviously fail to acquire the “knowledge about specific circumstances of time and space” and tend to draw flawed conclusions. I do not expect this to be any different for strong AI.

    – Maybe the whole idea of “taking over” is somewhat rubbish. What is the point? IMO, the current tendency, possibly leading to a singularity (or some equivalent), is that the only really valuable thing is information. Why should we expect, then, that the final fruit of this development, strong AI, would suddenly become obsessed with material resources?

  • http://zbooks.blogspot.com Zubon

    bbb (and luzr):
    This is wandering from topic, but no. The hypothesis is not one of Asimov’s computers that overcomes the Hayekian knowledge problem through superior computation and prediction. The hypothesis is more of a computer that engages in technological self-improvement, develops nanotechnology, and seizes control of all matter directly. I do not need to predict your behavior if I can manipulate every carbon atom in your body. Or insert your supertechnology of choice. For a fictional example available online, see chapter two of The Metamorphosis of Prime Intellect.

    Why material resources? Computronium. However well you can achieve your goals right now, you could probably do much better with far more power. At some point, you need more matter and energy to get more power. Even if you care only about information, you need more matter and energy for more storage.

  • luzr

    “However well you can achieve your goals right now, you could probably do much better with far more power. At some point, you need more matter and energy to get more power.”

    If anything, I think this is highly disputable. If nothing else, the whole of technological development seems to be about processing more information with less matter.

    Note that the limiting factor here seems to be SRT. That is why we always need smaller chips to get more processing power.

  • James Andrix

    bbb:
    Simulations don’t have to be exact to be useful. We imagine futures with abstractions; an AI can imagine the future with more accurate but still computationally and evidentially efficient abstractions.

    luzr:
    Smart people are probably not an order of magnitude ‘smarter’ than average. Certainly not two. Animals can be pretty smart, but we took over the world.

    Smart people draw flawed conclusions because their abstractions are flawed, and they don’t know it. This tendency was good enough in our evolutionary past. GAIs could rewrite themselves to recheck things in situations similar to past failures.

    Whether or not it is obsessed with resources depends entirely on what it ‘wants’. It could be content with virtual navelgazing, or it could want to make sure that no cancer ever exists anywhere in the universe.

  • gaffa

    Even if the probability of unFriendly Foom is low, isn’t it still good that some people are thinking about it, considering the consequences if it does happen?

  • http://hanson.gmu.edu Robin Hanson

    Mitchell, yes of course.

    gaffa, yes, I’ve said so many times, and Eliezer is a fine choice for that role.

    On physical access, I believe the scenario is that a very smart AI could talk or trick its way out of a box, and then gain enough physical insight and abilities.

  • michael vassar

    “A much smarter than human AI is basically impossible to contain or control; if it wants to it will take over the world, and then it will achieve whatever ends it has.”

    The above suggests that an AI might not want to take over the world. This could be true, say for an upload of human intelligence, but within the space of AIs, taking over the world seems to be a VERY strongly emergent subgoal, hence Omohundro’s paper “Basic AI Drives”: http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/.

  • http://zbooks.blogspot.com Zubon

    “If anything, I think this is highly disputable.”
    If you think so, then we have a miscommunication, unless you are postulating that you can accomplish unlimited calculation with arbitrarily small amounts of matter. At some point, you cannot get any more computation out of a unit of matter, and you presumably hit diminishing returns well before that point.

    If you have a (theoretical) way to simulate a galaxy down to sub-atomic precision using only 20 molecules, I would love to hear it and will grant unlimited calculation without using increased matter. Until then, more vespene gas.
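One concrete physical anchor for Zubon’s point that more computation eventually demands more matter and energy is Landauer’s bound, which puts the minimum energy to erase one bit at k_B·T·ln 2. A minimal calculation, assuming room-temperature operation:

```python
# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2) of energy.
# Room temperature (300 K) is an assumed operating point for illustration.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed temperature, K

joules_per_bit = k_B * T * math.log(2)
print(joules_per_bit)         # ~2.9e-21 J per erased bit
print(1.0 / joules_per_bit)   # ~3.5e20 bit erasures per joule, at the theoretical limit
```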

  • Arthur B.

    I don’t really understand why code introspection necessarily leads to an explosive spiral of self-improvement. Let’s say I put some nanomachines in my head creating a read-write interface to the structure of my brain. You can give me an eternity; I won’t figure out how to make myself more intelligent by remapping my synapses. An AI with genius-level human intelligence could very well be too stupid to improve its own code.

  • Tim Tyler

    It seems to me that Eliezer’s hypothesis is based entirely on the first kind of knowledge, and wholly neglects the second kind.

    He plays down experiments – usually a bit too far – but he’s also given his reason for doing so: he thinks you can get a lot of juice from a few observations.

    Arthur, empirically speaking, we are already in an explosive spiral of self-improvement. That’s the observed nature of an evolutionary process in a sufficiently-benign environment.

  • http://hep.ucsb.edu/people/mike/ Mike Blume

    Philip: Consider: it’s the year 2050, and everything is a lot more automated than it is today. Factories are automatic with few employees. Transportation is automated — trucks and railways move standard containers around with little or no human intervention. They don’t have drivers, typically, and if they do have drivers, the drivers are told where to drive by the on-board navigation systems. Warehouses too are automated. When a lorry arrives at a factory, the workers use computers to check what is to be loaded and unloaded. All the payments for these transactions are automated.

    If the AI controls the world’s computers, or a good proportion of them, it could probably build a robot army before anyone notices.

    I think popular unfriendly AI scenarios focus disproportionate fear on what parts of our lives are controlled by computers, without human oversight. The assumption seems to be that a superhuman AI could easily hack another computer, but would have difficulty hacking a human.

    Hasn’t the success of, to take recent examples, Scientology and Mormonism, shown that humans are pretty easily hacked even by other humans?

  • mjgeddes

    anon said:

    >mjgeddes: If morality does not follow from sufficient coherent introspection (which you seem to grant that it doesn’t), in what sense would the existence of a “universal morality” be helpful? The AI will follow its own morality, which is whatever its program precisely says it is, which depends on the programmer.

    In order to show that ‘unfriendly SAI’ is a real possibility, it’s not enough simply to establish that morality is not part of intelligence (which I do concede that EY has done). The ‘extra’ hidden assumption that EY makes is that the goal system is independent of the AGI’s ability to self-improve.

    If this assumption is false (which I’m very sure it is), then an ‘unfriendly’ morality may limit the AGI’s ability to self-improve, and thus prevent the unfriendly AGI from improving to the point of having the ability to do world-destroying damage.

    That is to say, there remains the possibility that an AGI cannot recursively self-improve unless the correct (fully friendly) morality has been built in at the start. It’s true that morality is not a subset of intelligence, but it could be the case that intelligence is a subset of morality! Correct friendliness may be precisely the necessary condition that enables recursive self-improvement.

    It may be true that morality and intelligence are not the same, but the two may complement each other, and in that case, one should really speak of ‘super cognition’ rather than ‘super intelligence’.

    In short, if universal morality exists, then programmers putting in the wrong morality wouldn’t succeed in creating a SAI. (their AGIs will inevitably be limited, enough to do some damage perhaps, but not enough to destroy the world).

  • Cyan

    …it could be the case that intelligence is a subset of morality!

    Doubtful — and I’d say falsified at the human level of intelligence. Eliezer recently pointed to Pliers Bittaker as an amoral monster; Bittaker had a tested I.Q. of 132, which is about 95th percentile. Postulating that one has to be moral in order to recursively optimize one’s ability to hit an optimization target seems like so much wishful thinking to me.

  • Cyan

    Pardon me: 132 I.Q. => ~98th percentile
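For what it’s worth, Cyan’s corrected figure checks out under the usual convention that IQ scores are normally distributed with mean 100 and standard deviation 15 (an assumption here, since the comment does not state the norming):

```python
# Percentile of an IQ of 132, assuming scores are normally distributed with
# mean 100 and standard deviation 15 (the usual convention; an assumption here).
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

print(normal_cdf(132))   # ~0.983, i.e. roughly the 98th percentile
```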

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Geddes is an AI crank who happens to be obsessed with me in particular. I routinely delete his comments on my own posts, and I may ask Robin for permission to do the same on his posts if Geddes sticks around.

  • http://zbooks.blogspot.com Zubon

    there remains the possibility that an AGI cannot recursively self-improve unless the correct (fully friendly) morality has been built-in at the start.

    But is there any reason to believe that? That is not even a question about relative probabilities; is there any reason to believe that recursive self-improvement is impossible without fully friendly morality? If any recursive self-improvement has been accomplished so far, it has done so without being fully friendly.

  • Lightwave

    In order to improve itself beyond human-level intelligence, it will probably need to know everything we know about physics and computer science. We would HAVE TO provide all that knowledge; otherwise it just wouldn’t be able to improve itself (or at least not at a reasonable speed). Knowing these things and being smarter, it can figure out the rest.

  • Lightwave

    “It” being the AGI.
