Giant cheesecake fallacy

From Lesswrongwiki

One often hears, in futurism, a line of reasoning that goes something like this. Someone says: "When technology advances far enough, we'll be able to build minds far surpassing human intelligence. Now, it's clear that how large a cheesecake you can bake depends on your intelligence. A superintelligence could build enormous cheesecakes - cheesecakes the size of cities. And Moore's Law keeps dropping the cost of computing power. By golly, the future will be full of giant cheesecakes!" I call this the Giant Cheesecake Fallacy. It happens whenever an argument leaps directly from capability to actuality, without considering the necessary intermediate of motive.

Here are two examples of reasoning that include a Giant Cheesecake Fallacy:

  • A sufficiently powerful Artificial Intelligence could overwhelm any human resistance and wipe out humanity. (And the AI would decide to do so.) Therefore we should not build AI.
  • Or: A sufficiently powerful AI could develop new medical technologies capable of saving millions of human lives. (And the AI would decide to do so.) Therefore we should build AI.

Once you understand the Giant Cheesecake Fallacy, the natural next mistake is to ask: "What will an Artificial Intelligence want?" The question is a mistake because no general answer exists: we cannot make any general statement about Artificial Intelligences, since the design space is far too large. People talk about "AIs" as if all AIs formed a single tribe, an ethnic stereotype. It might make sense to talk about "the human species" as a natural category, because all humans share essentially the same brain architecture - limbic system, cerebellum, visual cortex, prefrontal cortex, and so on. But the term "Artificial Intelligence" refers to a vastly larger space of possibilities than this. When we talk about "AIs" we are really talking about minds-in-general. Imagine a map of mind design space: in one corner, a tiny circle contains all humans, and all the rest of that huge map is the space of minds in general.