Counterfactual resiliency

From Lesswrongwiki
Counterfactual resiliency is a plausibility test for predictive models, proposed by Stuart Armstrong, aimed particularly at models that do not provide causal explanations for their predictions. If there are likely possible worlds in which the inputs to the model stay constant but the event the model attempts to predict happens differently, or in which the event happens the same way but the inputs change in a way that would have led the model to make a different prediction, then the model must be flawed. Armstrong argues that Robin Hanson's [http://hanson.gmu.edu/EconOfBrainEmulations.pdf model] predicting changes in the rate of economic growth and Ray Kurzweil's [http://en.wikipedia.org/wiki/Accelerating_change#Kurzweil.27s_The_Law_of_Accelerating_Returns Law of Accelerating Returns] both fail this test: each relies heavily on trends from the distant past which could easily have gone differently without significantly changing our current and future situation, and, conversely, AI and [[Whole brain emulation|brain uploading]] could be made easier or more difficult without significantly altering the past, changing the outcomes the models attempt to predict without changing the inputs that go into them. He argues, however, that [[Moore's law]] passes the test, because enough factors contribute to the 18-month doubling time that changing any one of them would have only a limited effect on the overall trend.
  
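The test above can be sketched as a toy check: given a model, the actual world, and a set of plausible counterfactual scenarios, flag the model as flawed if inputs and outcome can decouple in either direction. Everything in this sketch (the <code>trend_model</code>, the scenario data) is a made-up illustration of the general idea, not Armstrong's actual argument or any real forecasting model.

```python
# Toy sketch of the counterfactual resiliency test. All names and
# scenarios here are hypothetical illustrations, not from Armstrong's post.

def fails_counterfactual_resiliency(model, actual_inputs, actual_outcome, scenarios):
    """Flag a model as flawed if, in some plausible counterfactual world,
    its inputs are unchanged but the outcome differs, or the outcome is
    unchanged but the changed inputs would alter its prediction."""
    for inputs, outcome in scenarios:
        same_inputs = inputs == actual_inputs
        same_outcome = outcome == actual_outcome
        prediction_changes = model(inputs) != model(actual_inputs)
        if same_inputs and not same_outcome:
            return True   # inputs fixed, yet the event turns out differently
        if same_outcome and prediction_changes:
            return True   # event fixed, yet the model would have predicted otherwise
    return False

# Hypothetical model that predicts "takeoff" purely from a long-run
# historical growth trend, with no causal story.
def trend_model(inputs):
    return "takeoff" if inputs["past_growth_trend"] == "accelerating" else "no takeoff"

actual = {"past_growth_trend": "accelerating"}
scenarios = [
    # A plausible world where the distant past looked different but the
    # present-day outcome is the same:
    ({"past_growth_trend": "flat"}, "takeoff"),
]
print(fails_counterfactual_resiliency(trend_model, actual, "takeoff", scenarios))  # True
```

The Moore's law case would correspond to a scenario set where no single plausible change to the inputs flips the model's prediction, so the check returns <code>False</code>.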
 
==Blog posts==
 
http://lesswrong.com/lw/ea8/counterfactual_resiliency_test_for_noncausal/