Search results

Page title matches

  • 45 bytes (4 words) - 11:20, 25 June 2009
  • #REDIRECT [[Basic AI drives]]
    29 bytes (4 words) - 05:17, 14 June 2012
  • {{arbitallink|https://arbital.com/p/AI_boxing/|AI boxing}} An '''AI Box''' is a confined computer system in which an [[Artificial General Intel
    4 KB (630 words) - 06:14, 14 June 2023
  • 48 bytes (4 words) - 21:06, 30 December 2011
  • ...e them on a computer. It is considered an [[Unfriendly AI|unsafe]] form of AI, because its goals are unknown.
    519 bytes (79 words) - 06:42, 15 June 2023
  • #REDIRECT [[AI boxing]]
    23 bytes (3 words) - 17:47, 10 July 2012
  • 48 bytes (4 words) - 13:29, 17 June 2017
  • #REDIRECT [[Nanny AI]]
    22 bytes (3 words) - 21:23, 13 July 2012
  • '''AI advantages''' are various factors that might favor AIs in case there was ev ...rior processing power:''' Having more serial processing power would let an AI think faster than humans, while having more parallel processing power and m
    3 KB (479 words) - 06:11, 14 June 2023
  • A '''tool AI''' is a type of Artificial Intelligence which is built to be used as a tool ...dangerous, it was an unnecessary path of development. His example of tool AI behavior was Google Maps, which uses complex algorithms and data to plot a
    1 KB (232 words) - 07:43, 15 June 2023
  • ...' is a regularly proposed solution to the problem of developing [[Friendly AI]]. It is conceptualized as a super-intelligent system which is designed for ...curity (“boxing”), as well as problems involved with trying to program the AI to only answer questions. In the end, the paper reaches the cautious conclu
    5 KB (797 words) - 06:48, 15 June 2023
  • #REDIRECT [[AI takeoff]]
    24 bytes (3 words) - 20:30, 29 June 2012
  • A problem is considered '''AI-complete''' or '''AI-hard''' if solving it is equivalent to creating [[AGI|Artificial General In ...P-complete, a class of problems in [[complexity theory]]. Problems labeled AI-complete like graceful degradation or computer vision tend to be framed at
    2 KB (267 words) - 06:15, 14 June 2023
  • '''AI takeoff''' refers to a point in the future where [[Artificial General Intel ...intaining control of the AGI's ascent it should be easier for a [[Friendly AI]] to emerge.
    4 KB (536 words) - 06:54, 13 June 2023
  • 46 bytes (4 words) - 07:39, 4 May 2009
  • ...o delay the [[Singularity]] while protecting and nurturing humanity. Nanny AI was proposed as a means of reducing the [[Existential risk|risks]] associat ...tzel has suggested a number of preliminary components for building a Nanny AI:
    3 KB (355 words) - 06:40, 15 June 2023
  • 48 bytes (4 words) - 21:09, 30 December 2011
  • ...o human-equivalent or even trans-human reasoning. The key for successful [[AI takeoff]] would lie in creating adequate starting conditions. |url=http://www.cs.ru.ac.za/courses/Honours/ai/HybridSystems/P2.pdf}}
    4 KB (554 words) - 07:24, 15 June 2023
  • 32 bytes (4 words) - 04:17, 27 June 2012
  • ...elligence architecture or goal content. It also analyzes the ways in which AI and human psychology are likely to differ, and the ways in which those diff [[FAI]] refers to “benevolent” AI systems, that have advanced at least to the point of making real-world plan
    14 KB (2,068 words) - 04:56, 15 June 2023

Page text matches

  • ...r', especially where the human's level of consumption is concerned. If the AI is able and willing to change its master's choices through social engineeri
    483 bytes (79 words) - 06:13, 15 June 2023
  • ...e them on a computer. It is considered an [[Unfriendly AI|unsafe]] form of AI, because its goals are unknown.
    519 bytes (79 words) - 06:42, 15 June 2023
  • A problem is considered '''AI-complete''' or '''AI-hard''' if solving it is equivalent to creating [[AGI|Artificial General In ...P-complete, a class of problems in [[complexity theory]]. Problems labeled AI-complete like graceful degradation or computer vision tend to be framed at
    2 KB (267 words) - 06:15, 14 June 2023
  • ...ally designed to have a positive effect on humanity is called a [[Friendly AI]]. *[[Basic AI drives]]
    2 KB (222 words) - 06:15, 14 June 2023
  • {{arbitallink|https://arbital.com/p/ai_arms_race/|AI arms races}} An '''AI arms race''' is a situation where multiple parties are trying to be the fir
    2 KB (268 words) - 06:14, 14 June 2023
  • .../lesswrong.com/lw/vv/logical_or_connectionist_ai/ Logical or Connectionist AI?] People who don't work in AI, who hear that Eliezer works in AI, often ask him: "Do you build neural networks or expert systems?" This tu
    790 bytes (127 words) - 06:48, 15 June 2023
  • ...that since [[whole brain emulation]] seems feasible then human-level [[AGI|AI]] must also be feasible. There are many underlying assumptions in the argum *(iii) If we emulate this machine, there will be AI.
    940 bytes (134 words) - 05:09, 15 June 2023
  • #REDIRECT [[Nanny AI]]
    22 bytes (3 words) - 21:23, 13 July 2012
  • #REDIRECT [[AI takeoff]]
    24 bytes (3 words) - 14:41, 12 June 2012
  • #REDIRECT [[AI takeoff]]
    24 bytes (3 words) - 20:30, 29 June 2012
  • ...ntially dangerous]], due to considerations such as [[Basic AI drives|basic AI drives]] possibly causing behavior which is in conflict with humanity's val ...//lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/ Reply to Holden on Tool AI], [[Eliezer Yudkowsky]] argues that, since all intelligences select correct
    2 KB (241 words) - 04:22, 15 June 2023
  • #REDIRECT [[Basic AI drives]]
    29 bytes (4 words) - 05:17, 14 June 2012
  • #redirect [[Basic AI drives]]
    29 bytes (4 words) - 15:21, 20 June 2017
  • #REDIRECT [[Basic AI drives]]
    29 bytes (4 words) - 06:40, 17 June 2012
  • ...Friendly AI-complete''' if solving it is equivalent to creating [[Friendly AI]]. *[[Oracle AI]] - an AGI which takes no action besides answering questions.
    3 KB (380 words) - 05:28, 15 June 2023
  • A '''tool AI''' is a type of Artificial Intelligence which is built to be used as a tool ...dangerous, it was an unnecessary path of development. His example of tool AI behavior was Google Maps, which uses complex algorithms and data to plot a
    1 KB (232 words) - 07:43, 15 June 2023
  • #REDIRECT [[On Designing AI Sequence]]
    38 bytes (5 words) - 15:20, 11 March 2017
  • #redirect [[Road to AI Safety Excellence]]
    42 bytes (6 words) - 19:00, 4 July 2017
  • #REDIRECT [[Basic AI drives#Instrumental convergence thesis]]
    61 bytes (7 words) - 05:19, 14 June 2012
  • #redirect [[Accelerating AI Safety Adoption in Academia]]
    57 bytes (7 words) - 13:32, 17 June 2017