
In response to comment by mwengler on AI Box Log
Comment author: moridinamael 27 January 2012 05:56:49AM 10 points [-]

It's worse than that. The AI could say, "Look, here is a proof of FAI. Here is my code showing that I have implemented the friendliness modification." The proof and the code are utterly convincing, except erroneous in a subtle way that the gatekeeper is not smart enough to detect. Game over.

In response to comment by moridinamael on AI Box Log
Comment author: Snowyowl 03 June 2015 02:32:03AM 0 points [-]

Three years late, but: there doesn't even have to be an error. The Gatekeeper still loses for letting out a Friendly AI, even if it actually is Friendly.

Comment author: [deleted] 23 November 2014 02:35:31PM *  2 points [-]

I don't know how to estimate it myself, so are these kinds of depictions almost certainly because of Eliezer's influence?

Comment author: Snowyowl 24 November 2014 10:09:09PM 0 points [-]

There have been other sci-fi writers talking about AI and the singularity. Charles Stross, Greg Egan, arguably Cory Doctorow... I haven't seen the episode in question, so I can't say who I think they took the biggest inspiration from.

In response to Initiation Ceremony
Comment author: Dave_Orr 28 March 2008 11:18:34PM 1 point [-]

Maybe this is a dumb question, but where did 1/6 come from? I mean, when they asked the question I did the math and came up with 2/11, and I don't even see how you might get 1/6.

Comment author: Snowyowl 17 July 2013 11:57:39AM 2 points [-]

9/16ths of the people present are female Virtuists, and 2/16ths are male Virtuists. If you correctly calculate that 2/(9+2) of Virtuists are male, but mistakenly add 9 and 2 to get 12, you'd get one-sixth as your final answer. There might be other equivalent mistakes, but that seems the most likely to lead to the answer given.
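
Spelling out both calculations from the numbers above:

$$
P(\text{male}\mid\text{Virtuist}) = \frac{2/16}{9/16 + 2/16} = \frac{2}{11} \approx 0.18,
\qquad\text{versus the mis-summed}\qquad
\frac{2}{12} = \frac{1}{6} \approx 0.17.
$$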

Of course, it's irrelevant what the actual mistake was since the idea was to see if you'll let your biases sway you from the correct answer.

In response to comment by Snowyowl on Causal Universes
Comment author: cousin_it 29 November 2012 11:12:20AM *  0 points [-]

These are pretty strong arguments, but maybe the idea can still be rescued by handwaving :-)

In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn't seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn't receive anything.

Seconding A113's recommendation of "Be Here Now"; that story, along with the movie Primer, was my main inspiration for the model.

In response to comment by cousin_it on Causal Universes
Comment author: Snowyowl 30 November 2012 01:44:49PM *  0 points [-]

The later Ed Stories were better.

In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn't seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn't receive anything.

Good point, but not actually answering the question. I guess what I'm asking is: given a single use of the time machine (Primer-style, you turn it on and receive an object, then later turn it off and send an object), make a list of all the objects you can receive and what each of them can lead to in the next iteration of the loop. This structure is called a Markov chain. Given the entire structure of the chain, can you deduce what probability you have of experiencing each possibility?

Taking your original example, there are only 2 states the timeline can be in:
* A: Nothing arrives from the future. You toss a coin to decide whether to go back in time. Next state: A (50% chance) or B (50% chance).
* B: A murderous future self arrives from the future. You and he get into a fight, and don't send anything back. Next state: A (100% chance).

Is there a way to calculate from this what the probability of actually getting a murderous future self is when you turn on the time machine?

I'm inclined to assume it would be a stationary distribution of the chain, if one exists. That is to say, one where the probability distribution of the "next" timeline is the same as the probability distribution of the "current" timeline. In this case, that would be (A: 2/3, B: 1/3). (Your result of (A: 4/5, B: 1/5) seems strange to me: half of the people in A will become killers, and each killer corresponds to exactly one victim in B, so B should be half the size of A rather than a quarter.)
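
Here's a quick numerical check of that (a minimal sketch, just power iteration on the two-state transition matrix from the example above):

```python
import numpy as np

# Transition matrix for the two-state chain above.
# Rows = current timeline, columns = next timeline, state order [A, B].
P = np.array([[0.5, 0.5],   # A: the coin flip sends you to A or B
              [1.0, 0.0]])  # B: the fight means nothing goes back, so A next

# A stationary distribution satisfies pi = pi @ P; state A has a self-loop,
# so the chain is aperiodic and simple power iteration converges.
pi = np.array([1.0, 0.0])
for _ in range(1000):
    pi = pi @ P

print(pi)  # ~[0.667, 0.333], i.e. (A: 2/3, B: 1/3)
```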

There are certain conditions that a Markov chain needs to have for a stationary distribution to exist. I looked them up. A chain with a finite number of states (so no infinitely dense CDs for me :( ) fits the bill as long as every state eventually leads to every other, possibly indirectly (i.e. it's irreducible). So in the first scenario, I'll receive a CD with a number between 0 and N distributed uniformly. The second scenario isn't irreducible (if the "first" timeline has a CD with value X, it's impossible to ever get a CD with value Y in any subsequent timeline), so I guess there needs to be a chance of the CD becoming corrupted to a different value or the time machine exploding before I can send the CD back or something like that.
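
For concreteness, here's a sketch of the first scenario under two assumptions the setup above doesn't actually specify: the number wraps around after N, and there's a small probability eps of failing to send the CD back, so the "nothing arrived" state stays reachable:

```python
import numpy as np

# Assumptions not in the setup above: the number wraps around after N, and
# each timeline fails to send the CD back with small probability eps.
N, eps = 9, 0.001
NO_CD = N + 1                           # extra state: nothing arrived
P = np.zeros((N + 2, N + 2))
for x in range(N + 1):
    P[x, (x + 1) % (N + 1)] = 1 - eps   # received X, so X+1 goes back
    P[x, NO_CD] = eps                   # ...unless the send fails
P[NO_CD, 0] = 1 - eps                   # nothing arrived, so 0 goes back
P[NO_CD, NO_CD] = eps

# Stationary distribution = left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi.round(3))  # ~0.1 for each of 0..N, ~0.001 for "no CD arrived"
```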

Teal deer: This model works, but the probability of experiencing each outcome can easily depend on the tiny chance of an unexpected outcome. I like it a lot: it's more intuitive than the NSCP, while its structure makes more sense than a branching multiverse. I may have to steal it if I ever write a time-travel story.

In response to comment by Snowyowl on Causal Universes
Comment author: A113 29 November 2012 08:55:25AM 4 points [-]

The Novikov Self-Consistency Principle can help answer that. It is one of my favorite things. I don't think it was named in the post, but the concept was there.

The idea is that contradictions have probability zero. So the first scenario, the one with the paradox, doesn't happen. It's like the Outcome Pump if you hit the Emergency Regret Button. Instead of saying "do the following," it should say "attempt the following." If it is one self-consistent timeline, then you will fail. I don't know why you'll fail, probably just whatever reason is least unlikely, but the probability of success is zero. The probability distribution is virtually all at "you send the same number you received." (With other probability mass for "you misread" and "transcription error" and stuff).

If your experiment succeeds, then you are not dealing with a single, self-consistent universe. The Novikov principle has been falsified. The distribution of X depends on how many "previous" iterations there were, which depends on the likelihood that you do this sequence given that you receive such a CD. I think it would be a geometric distribution?
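
A minimal simulation of that guess, where q, an assumed per-timeline chance of failing to send the CD back, stands in for the likelihood mentioned above:

```python
import random
from collections import Counter

def final_timeline(q=0.2):
    """Value received in the last of the sequential timelines, or None.

    Assumes each timeline independently fails to send the CD back with
    probability q; that failure is what ends the chain of iterations.
    """
    sends = 0
    while random.random() > q:   # the CD actually makes it back this time
        sends += 1
    # The last timeline receives whatever the previous one managed to send.
    return None if sends == 0 else sends - 1

counts = Counter(final_timeline() for _ in range(100_000))
for k in [None, 0, 1, 2, 3, 4]:
    print(k, counts[k] / 100_000)
# P(nothing) ~ q, P(X = k) ~ (1-q)**(k+1) * q: a geometric tail, as guessed.
```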

The second one is also interesting. Any number is self-consistent. So (back to Novikov) none of them are vetoed. If a CD arrives, the distribution is whatever distribution you would get if you were asked "Write a number." More likely, you don't receive a CD from the future. That's what happened today. And yesterday. And the day before. If you resolve to send the CD to yourself the previous day, then you will fail if self-consistency applies.

Have you read HPMoR yet? I also highly recommend this short story.

In response to comment by A113 on Causal Universes
Comment author: Snowyowl 30 November 2012 01:18:39PM 0 points [-]

I wasn't reasoning under NSCP, just trying to pick holes in cousin_it's model.

Though I'm interested in knowing why you think that one outcome is "more likely" than any other. What determines that?

In response to Causal Universes
Comment author: Bugmaster 28 November 2012 06:19:48PM 8 points [-]

So far, we've just generated all 2^N possible bitstrings of size N, for some large N; nothing more. You wouldn't expect this procedure to generate any people or make any experiences real, unless enumerating all finite strings of size N causes all lawless universes encoded in them to be real.

Wait, why not? If people can be encoded as bit strings -- which is the prerequisite for any kind of Singularity -- then what's the difference between a bit string that I obtained by scanning a person, and a completely identical bit string that I just happened to randomly generate?

In response to comment by Bugmaster on Causal Universes
Comment author: Snowyowl 29 November 2012 08:14:45AM 7 points [-]

You make a surprisingly convincing argument for people not being real.

In response to comment by DaFranker on Causal Universes
Comment author: cousin_it 28 November 2012 11:39:42PM *  0 points [-]

Yeah. Basically, I'm trying to figure out how subjective probabilities would work consistently in the "parallel timelines" model, e.g. the probability of meeting a time traveler vs becoming that time traveler, when everyone's decisions can also be probabilistic. The question interests me because in some cases it seems to have a unique but non-obvious answer.

In response to comment by cousin_it on Causal Universes
Comment author: Snowyowl 29 November 2012 08:11:10AM *  0 points [-]

Last time I tried reasoning on this one I came up against an annoying divide-by-infinity problem.

Suppose you have a CD with infinite storage space - if this is not possible in your universe, use a normal CD with N bits of storage; it just makes the maths more complicated. Do the following:

  • If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.

  • If a CD arrives from the future, read the number on it. Call this number X. Write X+1 on your own CD and send it back in time.

What is the probability distribution of the number on your CD? What is the probability that you didn't receive a CD from the future?

Once you've worked that one out, consider this similar algorithm:

  • If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.

  • If a CD arrives from the future, read the number on it. Call this number X. Write X on your own CD and send it back in time.

What is the probability distribution of the number on your CD? What is the probability that you didn't receive a CD from the future?

Comment author: ikrase 30 September 2012 08:23:31AM 0 points [-]

Death does not go on. The humans are immortal.

The flaw I see is: why could the super happies not make separate decisions for humanity and the baby eaters? And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.

Comment author: Snowyowl 27 November 2012 04:32:49PM 1 point [-]

The flaw I see is: why could the super happies not make separate decisions for humanity and the baby eaters?

I don't follow. They waged a genocidal war against the babyeaters and signed an alliance with humanity. That looks like separate decisions to me.

And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.

For one, because they're symmetrists. They asked something of humanity, so it was only fair that they should give something of equal value in return. (They're annoyingly ethical in that regard.) And I do mean equal value - humans became partly superhappy, and superhappies became partly human. For two, because shared culture and psychology make it possible to have meaningful dialogue between species: even with the Cultural Translator, everyone got headaches after five minutes. Remember that to the superhappies, meaningful communication is literally as good as sex.

In response to comment by Kindly on Causal Reference
Comment author: Armok_GoB 26 October 2012 06:30:31PM 4 points [-]

After concluding that it had no harmful results, your brain incorporates the effect into a consumer device in which it is mildly useful; the device becomes near-ubiquitous and spreads all over the world.

This'd have the makings of a great SCP if not for the obvious problem.

In response to comment by Armok_GoB on Causal Reference
Comment author: Snowyowl 05 November 2012 10:39:46AM 0 points [-]

I'd say it would make a better creepypasta than an SCP. Still, if you're fixed on the SCP genre, I'd try inverting it.

Say the Foundation discovers an SCP which appears to have mind-reading abilities. Nothing too outlandish so far; they deal with this sort of thing all the time. The only slightly odd part is that it's not totally accurate. Sometimes the thoughts it reads seem to come from an alternate universe, or perhaps the subject's deep subconscious. It's only after a considerable amount of testing that they determine the process by which the divergence is caused - and it's something almost totally innocuous, like going to sleep at an altitude of more than 40,000 feet.

Comment author: Eliezer_Yudkowsky 02 November 2012 12:29:52PM 11 points [-]

If this were true, the ancient Greeks would've had science.

Comment author: Snowyowl 02 November 2012 01:57:23PM 12 points [-]

They came impressively close considering they didn't have any giant shoulders to stand on.
