Comment author: Unknowns 27 November 2014 08:09:49AM 3 points

Eliezer has said he would be willing to make one more bet like this (but not more, since he needs to ensure his ability to pay if he loses). I don't think anyone has taken him up on it. Robin Hanson was going to do it but backed out, so as far as I know the offer is still open.

Comment author: wedrifid 27 November 2014 01:27:43PM 1 point

I want the free $10. The $1k is hopeless, and were I to turn out to lose that side of the bet I'd still be overwhelmingly happy to be alive against all expectations.

Comment author: pragmatist 27 November 2014 11:13:15AM * -5 points

I assign a very high probability (>90%) to Azathoth123 being Eugine_Nier. Given the latter's history, I wouldn't be surprised if Azathoth were involved in voting shenanigans. But I think it would be better if you take this to a mod (Viliam_Bur, I believe) for confirmation/action, rather than speculating in public.

ETA: Just realized that this comment is doing exactly what it was advising against. Slightly embarrassed that I didn't notice while I was writing it.

Comment author: wedrifid 27 November 2014 01:11:48PM * -3 points

I consider this social policy proposal harmful and reject it as applied to myself or others. You may of course continue to refrain from speaking out against this kind of behaviour if you wish.

In the unlikely event that the net positive votes (at that time) given to Azathoth123 reflect the actual attitudes of the lesswrong community, the 'public' should be made aware so they can choose whether to continue to associate with the site. At least one prominent user has recently disaffiliated himself (and deleted his account) over a far less harmful socio-political concern. On the other hand, other people who embrace alternative lifestyles may be relieved to see that Azathoth's prejudiced rabble-rousing is unambiguously rejected here.

Comment author: Liso 27 November 2014 06:25:27AM * 1 point

It seems that the unfriendly AI is in a slightly unfavourable position. First, it has to preserve the information content of its utility function or other value representation, in addition to the information content possessed by the friendly AI.

There are two sorts of unsafe AI: one that cares and one that doesn't care.

The ignorant one is fastest: it only calculates the answer and doesn't care about anything else.

Friend and enemy both have to analyse additional things...

Comment author: wedrifid 27 November 2014 08:01:09AM -1 points

The ignorant one is fastest: it only calculates the answer and doesn't care about anything else.

Just don't accidentally give it a problem that is more complex than you expect. Only caring about solving such a problem means tiling the universe with computronium.

Comment author: wedrifid 27 November 2014 06:22:15AM 3 points

Wow. I want the free money too!

Comment author: Azathoth123 22 November 2014 05:03:54AM 1 point

What's the in-practice difference between, say, a polyamorous group raising children together in a stable situation and a large, extended family with various cousins and so on?

The fact that their internal dynamics are completely different.

Or to make it even simpler, I see no strong reason to say "you shouldn't be gay" when you could be saying "Hey gay guys, you should form a monogamous pairbond and raise children together for 18 years".

Because:

1) The child is deprived of a mother (or father). And yes, the two play different roles in bringing up children.

2) Gays aren't monogamous. One obvious way to see this is to note how much gay culture is based around gay bathhouses. Another way is to image-search pictures of gay pride parades.

Comment author: wedrifid 26 November 2014 10:05:59PM 1 point

2) Gays aren't monogamous. One obvious way to see this is to note how much gay culture is based around gay bathhouses. Another way is to image-search pictures of gay pride parades.

This user seems to be spreading an agenda of ignorant bigotry against homosexuality and polyamory. It doesn't even temper the hostile stereotyping with much pretense of just referring to trends in the evidence.

Are the upvotes this account is receiving here done by actual lesswrong users (who, frankly, ought to be ashamed of themselves) or has Azathoth123 created sockpuppets to vote itself up?

Comment author: Brillyant 26 November 2014 09:32:38PM -2 points

I see.

by threatening or hypnotising a human

This is the gist of the AI Box experiment, no?

Comment author: wedrifid 26 November 2014 09:51:48PM 0 points

This is the gist of the AI Box experiment, no?

No. Bribes and rational persuasion are fair game too.

Comment author: Jiro 26 November 2014 03:18:39PM 0 points

In this case "be blackmailed" means "contribute to creating the damn AI".

To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI that decided the good from bringing an FAI into the world a few days earlier (saving ~150,000 lives per day earlier it gets here)". The AI acausally blackmails people into building it sooner, not into building it at all. So failing to give in to the blackmail results in the AI still being built, just later, and it is still capable of punishing people.

Comment author: wedrifid 26 November 2014 09:46:08PM -2 points

To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI

I don't know who you are quoting but they are someone who considers AIs that will torture me to be friendly. They are confused in a way that is dangerous.

The AI acausally blackmails people into building it sooner, not into building it at all.

It applies to both - causing itself to exist at a different point in time or causing itself to exist at all. I've explicitly mentioned elsewhere in this thread that merely refusing blackmail is insufficient when there are other humans who can defect and create the torture-AI anyhow.

You asked "How could it?". You got an answer. Your rhetorical device fails.

Comment author: MrMind 26 November 2014 11:02:36AM 0 points

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Comment author: wedrifid 26 November 2014 12:34:07PM 2 points

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma even if you put them in the same room and let them talk to each other. They aren't going to be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
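To make the point concrete, here is a minimal Python sketch (the payoff numbers and the helper name cdt_choice are my own illustration, nothing canonical): a CDT agent best-responds while holding the other player's action causally fixed, and under that rule defection dominates no matter how confident any amount of talk makes it that the other player will cooperate.

    # One-shot prisoner's dilemma payoffs for "me", indexed by
    # (my_action, their_action). "C" = cooperate, "D" = defect.
    # The numbers are illustrative; any standard PD ordering works.
    PAYOFF = {
        ("C", "C"): 3,  # mutual cooperation
        ("C", "D"): 0,  # I cooperate, they defect
        ("D", "C"): 5,  # I defect, they cooperate
        ("D", "D"): 1,  # mutual defection
    }

    def cdt_choice(p_other_cooperates):
        """CDT: treat the other's action as causally fixed, best-respond."""
        ev = {
            a: p_other_cooperates * PAYOFF[(a, "C")]
               + (1 - p_other_cooperates) * PAYOFF[(a, "D")]
            for a in ("C", "D")
        }
        return max(ev, key=ev.get)

    # Talking in the same room can push p_other_cooperates all the
    # way to 1.0, but the causal best response is "D" at every value:
    for p in (0.0, 0.5, 1.0):
        assert cdt_choice(p) == "D"

A TDT-style agent facing a known copy of itself instead treats both choices as outputs of the same computation, so choosing to cooperate logically (not physically) determines that the copy cooperates too. That extra step is what mutual emulation buys, and it isn't reducible to communication bolted onto CDT.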

Comment author: TheAncientGeek 26 November 2014 10:36:15AM 1 point

Um, actually we do; the issue is that progressives want to do the latter.

Evidence?

Frankly even teaching about it will turn some kids into wife murderers,

Evidence?

Comment author: wedrifid 26 November 2014 12:06:38PM -4 points

Evidence?

Start here.

Comment author: ThisSpaceAvailable 26 November 2014 09:09:59AM 0 points

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the inforhazard? For the former, whatever causes you to not worry about it protects you from it.

Comment author: wedrifid 26 November 2014 11:46:57AM -1 points

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the inforhazard? For the former, whatever causes you to not worry about it protects you from it.

Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).
