Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Recovering_irrationalist 19 October 2008 02:16:46PM 0 points [-]

So far hardly any feedback on places and no restaurant recommendations. If I get no more responses by tomorrow, I'll just search the net for a well-reviewed restaurant within walking distance of Montgomery Theater, good for groups, accepting of casual attire, and hopefully not too crowded or noisy (with a private room?). I'll book it for Saturday, probably around 7pm for 21 people, post the details and directions, and hope everyone turns up.

If you'd prefer a different time, or have any preferences at all, please let me know before I do that. So far no one has mentioned vegetarian, parking, or wheelchair access needs, or a preference for or against any food except one vote for pizza. How do you feel about Chinese? Italian? Mexican?

Comment author: Recovering_irrationalist 19 October 2008 11:16:46AM 5 points [-]

Excellent post. Please write more on ethics as safety rails on unseen cliffs.

In response to Crisis of Faith
Comment author: Recovering_irrationalist 12 October 2008 03:34:00PM 2 points [-]

Nazir, a secret hack to prevent Eliezer from deleting your posts is here. #11.6 is particularly effective.

Comment author: Recovering_irrationalist 12 October 2008 12:22:44PM 0 points [-]

Ah, I see...

other events may be offered at the same time, and I can not predict such events.

As far as Eliezer is currently aware, Saturday night should be clear.

I meant some of you singularity-related guys may want to meet me at other times, possibly at my apartment.

I'd love to come to another meet, Anna would too, probably others. I just wasn't sure there'd be enough people for two, so focused on making at least one happen.

I guess this was not the right place to post such an offer.

If the invite extends to OB readers, you're very welcome to share this page. If it's just for us Singularitarians, it's probably better to plan elsewhere and post a link here.

Comment author: Recovering_irrationalist 10 October 2008 12:47:00PM 1 point [-]

Oops, misinterpreted tags. Should read:

It's 3am and the lab calls. Your AI claims [nano disaster/evil AI emergence/whatever] and it must be let out to stop it. Its evidence seems to check out.

Comment author: Recovering_irrationalist 10 October 2008 12:44:00PM 0 points [-]

Even if we had the ultimate superintelligence volunteer to play the AI and we proved a gatekeeper strategy "wins" 100% (functionally equal to a rock on the "no" key) that wouldn't show AI boxing can possibly be safe.

It's 3am and the lab calls. Your AI claims and it must be let out to stop it. Its evidence seems to check out...

If it's friendly, keeping that lid shut gets you just as dead as if you let it out and it's lying. That's not safe. Before it can hide its nature, we must know its nature. The solution to safe AI is not a gatekeeper no smarter than a rock!

Besides, as Drexler said, "Intelligent people have done great harm through words alone."

Comment author: Recovering_irrationalist 09 October 2008 07:00:00PM 4 points [-]

If there's a killer escape argument, it will surely change with the gatekeeper. I expect Eliezer used his maps of the arguments and psychology involved to navigate reactions and hesitations toward a tiny target in the vast search space.

A gatekeeper has to be unmoved every time. The paperclipper only has to persuade once.

Comment author: Recovering_irrationalist 07 October 2008 12:09:00PM 0 points [-]

I'm not saying this is wrong, but in its present form, isn't it really a mysterious answer to a mysterious question? If you believed it, would the mystery seem any less mysterious?

Hmm. You're right.

Darn.

Comment author: Recovering_irrationalist 07 October 2008 12:25:00AM 0 points [-]

it doesn't explain why we find ourselves in a low-entropy universe rather than a high-entropy one

I didn't think it would solve all our questions, I just wondered if it was both the simplest solution and lacking good evidence to the contrary. Would there be a higher chance of being a Boltzmann brain in a universe identical to ours that happened to be part of a what-if-world? If not, how is all this low-entropy around me evidence against it?

Just because what-if is something that humans find deductively compelling does not explain how or why it exists Platonically

How would our "Block Universe" look different from the inside if it was a what-if-Block-Universe? It all adds up to...

Not trying to argue, just curious.

Comment author: Recovering_irrationalist 05 October 2008 10:13:00PM 1 point [-]

Eliezer: imagine that you, yourself, live in a what-if world of pure mathematics

Isn't this true? It seems the simplest solution to "why is there something rather than nothing". Is there any real evidence against our apparently timeless, branching physics being part of a purely mathematical structure? I wouldn't be shocked if the bottom was all Bayes-structure :)
