I mostly fixed the page by removing stray quotes from links (I edited it as Markdown in VS Code; 42 links were of the form []("...") and 64 quotes were double-escaped as \"). Feel free to double-check. (I also sent feedback to the moderators; they may want to check other pages for similar problems at the database level.)
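For anyone who wants to check other pages for the same problem, here is a minimal sketch of the kind of cleanup described above. The exact patterns and the file name are assumptions for illustration, not the script the commenter actually used:

```python
import re

# Minimal sketch of the cleanup described in the comment above.
# The patterns and the file name ("page.md") are illustrative assumptions,
# not the commenter's actual script.

def clean_markdown(text: str) -> str:
    # Un-escape double-escaped quotes: \" -> "
    text = text.replace('\\"', '"')
    # Strip quotes wrapped around link targets: [label]("url") -> [label](url)
    text = re.sub(r'\[([^\]]*)\]\("([^"]*)"\)', r'[\1](\2)', text)
    return text

if __name__ == "__main__":
    with open("page.md", encoding="utf-8") as f:
        cleaned = clean_markdown(f.read())
    with open("page.md", "w", encoding="utf-8") as f:
        f.write(cleaned)
```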
I recently learned about the basilisk argument and wanted to thank Mr. Roko for his post: not for the argument itself, but 1) for the reactions and comments it generated, and 2) because it led me to discover this blog.
I'll take this opportunity to share the following (please don't panic, it won't harm anybody): suppose there is a modern Robinson Crusoe who finds himself (herself, or whatever other "self" you might prefer) on an island where there is no food except a hen. There are three timelines, in which he behaves I) as a rationalist, II) as an intelligent person, and III) as a wise one. Knowing that in one timeline he kills and eats the hen, and in another he does not kill the hen but eats the eggs it lays, could you tell a) what the remaining timeline's outcome would be, and b) which person each of the timelines corresponds to?
Don't take it too seriously, but don't simply dismiss it.
Have fun,
cheers.
EM
I guess the answer is supposed to be: Roko's basilisk cannot exist if humans do not cooperate to create it. However, if we had a 'grey' AI that would reward the people who built it and torture those who had envisioned it but not built it, then this resolves back to the original prisoner's dilemma problems.
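As a rough illustration of the incentive structure described in the comment above, here is a toy payoff sketch. The payoff numbers are invented purely for illustration and are not taken from the comment:

```python
# Toy payoff sketch of the 'grey AI' scenario in the comment above.
# All numbers are invented for illustration only.

COST_OF_HELPING = 1          # effort spent helping to build the AI
BENEFIT_IF_AI_EXISTS = 5     # shared benefit once the AI exists
PUNISHMENT = 100             # torture for those who envisioned it but did not help

def payoff(helped: bool, ai_gets_built: bool, grey_ai: bool) -> int:
    """Payoff for one person who already knows about the AI."""
    total = 0
    if helped:
        total -= COST_OF_HELPING
    if ai_gets_built:
        total += BENEFIT_IF_AI_EXISTS
        if grey_ai and not helped:
            total -= PUNISHMENT   # the grey AI punishes non-builders who knew of it
    return total

# Without the grey AI, free-riding beats helping whenever others build it anyway:
print(payoff(helped=False, ai_gets_built=True, grey_ai=False))  # 5
print(payoff(helped=True,  ai_gets_built=True, grey_ai=False))  # 4

# With the grey AI, the threat flips the preference for anyone who has envisioned it:
print(payoff(helped=False, ai_gets_built=True, grey_ai=True))   # -95
print(payoff(helped=True,  ai_gets_built=True, grey_ai=True))   # 4
```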
I think the Open Thread is probably a better place to bring up random new ideas related to Roko's basilisk. This page is more for discussing the current content of the page and how it might be improved.
What about a similar AI that helps anyone who tries to bring it into existence and does nothing to other people?
Bunch of broken images in this one
From the old wiki discussion page:
Talk:Roko's basilisk
Weirdness points
Why bring up weirdness points here, of all places, when Roko's basilisk is known to be an invalid theory? Is this meant to say, "Don't use Roko's basilisk as a conversation starter for AI risks"? The reason for bringing up weirdness points on this page could do with being made a bit more explicit; otherwise I might just remove or move the section on weirdness points.--Greenrd (talk) 08:37, 29 December 2015 (AEDT)
I just wanted to say
That I didn't know about the term "basilisk" with that meaning, and that makes it a basilisk for me. Or a meta-basilisk, I should say. Now I'm finding it hard not to look for examples on the internet.
Eliminate the Time Problem & the Basilisk Seems Unavoidable
Roko's Basilisk is refutable for the same reason that it makes our skin crawl: the time differential, the idea that a future AI would take retribution for actions predating its existence. The refutation is, more or less, "why would it bother?" Which I suppose makes sense, unless the AI is seeking to establish credibility. Nevertheless, the time dimension isn't critical to the Basilisk concept itself. At whatever point in the future a utilitarian AI (UAI) came into existence, there would no doubt be some who opposed it. If enough people opposed it to present a potential threat to the UAI's existence, the UAI might be forced to defend itself by eliminating that risk: not because the opposition presents a risk to the UAI itself, but because, by presenting a risk to the UAI, it presents a risk to the utilitarian goal.
Consider self-driving cars, with the following assumptions: currently about 1.25 million people are killed and 20-50 million injured each year in traffic accidents (asirt.org); let's say a high-quality self-driving system (SDS) would reduce this by 50%; but some of those who die as a result of the SDS would not have died without it. Deploying the SDS universally would seem a utilitarian imperative, as it would save over 600,000 lives per year. Yet some people might oppose doing so out of a bias in favor of human agency, and out of fear of the SDS-caused deaths that would not otherwise occur.
Why would a UAI not eliminate 100,000 dissenters per year to achieve the utilitarian advantage of a net 500,000 lives saved?
TomR Oct 18 2019
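A quick sketch of the arithmetic in TomR's comment above. The 1.25 million figure and the source (asirt.org) come from the comment; the 50% reduction and the 100,000 dissenters are the commenter's hypothetical assumptions:

```python
# The utilitarian arithmetic from the comment above; all figures are the
# commenter's assumptions, not real data.
annual_traffic_deaths = 1_250_000   # ~1.25 million killed per year (asirt.org, per the comment)
sds_reduction = 0.50                # assumed 50% reduction from a high-quality self-driving system (SDS)
dissenters_per_year = 100_000       # hypothetical number of dissenters the UAI eliminates

lives_saved = annual_traffic_deaths * sds_reduction   # 625,000 -> "over 600,000 lives per year"
net_gain = lives_saved - dissenters_per_year          # 525,000 -> "a net 500,000 lives saved"

print(f"Lives saved per year by the SDS: {lives_saved:,.0f}")
print(f"Net lives saved after eliminating dissenters: {net_gain:,.0f}")
```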
The Fallacy of Information Hazards
The concept that a piece of information, like Roko's Basilisk, should not be disclosed assumes (i) that no one else will think of it and (ii) that a particular outcome, such as the eventual existence of the AI, is a predetermined certainty that can neither be (a) prevented nor (b) mitigated by ensuring that the AI's programming addresses the Basilisk. I am unaware of any basis for either of these propositions.
TomR Oct 18 2019