Search results
Page title matches
- ...heuristic|absurdities]] as a [[Paperclip maximizer]]. Creatures with alien values might well value ''only'' non-sentient life, or they might spend all the...
  953 bytes (135 words) - 04:23, 15 June 2023
- 32 bytes (3 words) - 20:51, 22 June 2012
Page text matches
- ...which are pursued for their own sake are called [[Terminal value|terminal values]]. .../terminal_values_and_instrumental_values/ Terminal Values and Instrumental Values]
  415 bytes (55 words) - 06:11, 15 June 2023
- ...heuristic|absurdities]] as a [[Paperclip maximizer]]. Creatures with alien values might well value ''only'' non-sentient life, or they might spend all the...
  953 bytes (135 words) - 04:23, 15 June 2023
- ==Terminal vs. instrumental values== ...insic values), which are means-to-an-end, mere tools in achieving terminal values. For example, if a given university student studies merely as a professiona...
  4 KB (623 words) - 06:11, 15 June 2023
- '''Value extrapolation''' can be defined as an account of what human values, morals, and desires would be under “ideal circumstances”. These circumst... ...cognitive processes that give rise to them could help us shift to a set of values intentionally chosen through a state of “reflective equilibrium”.
  3 KB (370 words) - 07:11, 15 June 2023
- ...Value learning could prevent an AGI from having goals detrimental to human values, hence helping in the creation of [[Friendly AI]]. ...ly produce those same rewards without the trouble of also maximizing human values (i.e., if the reward were human happiness, it could alter the human mind so i...
  2 KB (373 words) - 07:11, 15 June 2023
- ...aggregative consequentialist theories (those that maximize the sum of the values of individual structures) will be indifferent between any acts with merely ... ...rule" that determines what values are to be aggregated (e.g., discounting values far away in space and time)
  2 KB (282 words) - 06:10, 15 June 2023
- * statistical tools such as p-values in the context of experimental science
  449 bytes (66 words) - 07:22, 15 June 2023
- ==Complex values and fun theory's solution== ...nd advocate that it alone be maximized. This would neglect all other human values. For example, if we simply optimize for pleasure or happiness, [[wireheadin...
  4 KB (696 words) - 05:34, 15 June 2023
- ...of humans, the maximization of the [[Complexity of value|full set of human values]] (for the humans' benefit, not for itself). This article discusses the con... Since cooperation has instrumental value for achieving a variety of terminal values, benevolence in agents--humans, AGIs, or others--may arise even if it is no...
  3 KB (396 words) - 04:35, 15 June 2023
- ...one's surrounding environment. A "virtual community" has no such natural values but still retains the [[mind-killer|mind-killing]] illusion of protection,...
  2 KB (257 words) - 04:51, 15 June 2023
- ...through CBV, which suggests instead a “conceptual blend” between different values and perspectives. The term was borrowed from Fauconnier and Turner's work... ...ies on the assumption that the creation and definition of collective human values is probably better carried out through human work and collaboration than th...
  4 KB (568 words) - 04:51, 15 June 2023
- ...ong.com/lw/6nb/ego_syntonic_thoughts_and_values/ Ego syntonic thoughts and values]
  2 KB (237 words) - 15:53, 17 January 2015
- ...To implement such a scenario, all AGIs must be instilled with law-abiding values such that they respect human property rights. Reciprocally, humans must com... ...p://www.overcomingbias.com/2009/10/prefer-law-to-values.html Prefer Law To Values].
  2 KB (330 words) - 05:46, 15 June 2023
- ...nnect you to a person 90% similar to your friend). For example, all of our values ''except'' novelty might yield a future full of individuals replaying only ... ...stence into it. The human equivalents of a utility function, our terminal values, contain many different elements that are not strictly reducible to one ano...
  9 KB (1,341 words) - 04:52, 15 June 2023
- ...an agent's [[Terminal value|terminal values]] reduces the chance that the values as they are will be fulfilled. This, from the perspective of intelligence a... ...ne of the designer's subgoals, at the cost of some of the designer's other values. For example, if the designer of an artificial general intelligence thinks...
  4 KB (562 words) - 05:37, 15 June 2023
- ...relative or absolute?", "Do moral facts exist?" or “How do we learn moral values?”. (As distinct from object-level moral questions like, "Ought I to stea... ...an be pinned down by axioms, and hence that moral cognition can bear truth-values; also that human beings both using similar words like "morality" can be tal...
  3 KB (458 words) - 06:32, 15 June 2023
- [[Category:Values]]
  883 bytes (133 words) - 05:35, 15 June 2023
- ...izer]], utilitronium is paperclips. For more [[complexity of value|complex values]], no homogeneous organization of matter will have optimal utility.
  807 bytes (103 words) - 07:13, 15 June 2023
- ...humanity. The thought experiment shows that AIs with apparently innocuous values could pose an [[existential risk|existential threat]]. ...). This produces a thought experiment which shows the contingency of human values: An [[really powerful optimization process|extremely powerful optimizer]] (...
  8 KB (1,124 words) - 04:47, 15 June 2023
- ...t. Those who do not, use its undesirability to argue that not all terminal values reduce to "happiness" or some simple analogue. Hedonium is the [[hedonism|h...
  1 KB (140 words) - 05:43, 15 June 2023