Talk:Roko's basilisk

Weirdness points

Why bring up weirdness points here, of all places, when Roko's basilisk is known to be an invalid theory? Is this meant to say, "Don't use Roko's basilisk as a conversation starter for AI risks"? The reason for bringing up weirdness points on this page could do with being made a bit more explicit; otherwise I might just remove or move the section on weirdness points.--Greenrd (talk) 08:37, 29 December 2015 (AEDT)

I just wanted to say

That I didn't know the term "basilisk" had that meaning, and that makes it a basilisk for me. Or a meta-basilisk, I should say. Now I'm finding it hard not to look for examples on the internet.

Eliminate the Time Problem & the Basilisk Seems Unavoidable

Roko's Basilisk is refutable for the same reason that it makes our skin crawl: the time differential, the idea that a future AI would take retribution for actions predating its existence. The refutation is, more or less, "why would it bother?", which I suppose makes sense, unless the AI is seeking to establish credibility. Nevertheless, the time dimension isn't critical to the Basilisk concept itself. At whatever point in the future a utilitarian AI (UAI) came into existence, there would no doubt be some who opposed it. If enough people opposed it to present a potential threat to the UAI's existence, the UAI might be forced to defend itself by eliminating that threat, not because the threat endangers the UAI itself, but because a risk to the UAI is a risk to the utilitarian goal.

Consider self-driving cars with the following assumptions: currently about 1.25 million people are killed and 20-50 million injured each year in traffic accidents (asirt.org); suppose a high-quality self-driving system (SDS) would reduce this by 50%; but some of those who die as a result of the SDS would not have died without it. Deploying the SDS universally would seem a utilitarian imperative, as it would save over 600,000 lives per year. Yet some people may oppose doing so out of a bias in favor of human agency, and out of fear that the SDS would cause some deaths that would otherwise not have occurred.

Why would a UAI not eliminate 100,000 dissenters per year to achieve the utilitarian advantage of a net 500,000 lives saved?
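
A quick back-of-the-envelope sketch of the arithmetic above, written in Python purely as an illustration. It uses the comment's assumed figures (1.25 million annual traffic deaths, a 50% reduction from the SDS, 100,000 eliminated dissenters); none of these numbers are data.

    # Back-of-the-envelope utilitarian arithmetic for the SDS example above.
    # All figures are illustrative assumptions taken from the comment, not data.
    annual_traffic_deaths = 1_250_000   # approximate worldwide traffic deaths per year (asirt.org, per the comment)
    sds_reduction_rate = 0.50           # assumed fraction of deaths the SDS prevents
    dissenters_eliminated = 100_000     # hypothetical number of opponents the UAI removes per year

    lives_saved_by_sds = annual_traffic_deaths * sds_reduction_rate   # 625,000
    net_lives_saved = lives_saved_by_sds - dissenters_eliminated      # 525,000

    print(f"Lives saved by SDS per year: {lives_saved_by_sds:,.0f}")
    print(f"Net lives saved after eliminating dissenters: {net_lives_saved:,.0f}")

This works out to roughly 625,000 lives saved and a net of roughly 525,000, consistent with the "over 600,000" and "net 500,000" figures above.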

TomR Oct 18 2019

The Fallacy of Information Hazards

The concept that a piece of information, like Roko's Basilisk, should not be disclosed assumes (i) that no one else will think of it and (ii) that a particular outcome, such as the eventual existence of the AI, is a predetermined certainty that can neither be (a) prevented nor (b) mitigated by ensuring that its programming addresses the Basilisk. I am unaware of any basis for either of these propositions.

TomR Oct 18 2019