Comment author: Wei_Dai2 06 February 2009 07:54:54PM 10 points [-]

But the tech in the story massively favors the defense, to the point that a defender who is already prepared to fracture his starline network if attacked is almost impossible to conquer (you’d need to advance faster than the defender can send warnings of your attack while maintaining perfect control over every system you’ve captured). So an armed society would have a good chance of being able to cut itself off from even massively superior aliens, while pacifists are vulnerable to surprise attacks from even fairly inferior ones.

I agree, and that's why in my ending humans conquer the Babyeaters only after we develop a defense against the supernova weapon. The fact that the humans can see the defensive potential of this weapon, but the Babyeaters and the Superhappies can't, is a big flaw in the story. The humans sacrificed billions in order to allow the Superhappies to conquer the Babyeaters, but that makes sense only if the Babyeaters can't figure out the same defense that the humans used. Why not?

Also, the Superhappies' approach to negotiation made no game-theoretic sense. What they did was offer a deal to the other side. If the other side doesn't accept, impose the deal on them by force anyway. If they do accept, trust that they will carry out the deal without trying to cheat. Given these incentives, why would anyone facing a Superhappy in negotiation not accept and then cheat? I don't see any plausible way in which this morality/negotiation strategy could have become common in Superhappy society.
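The incentive problem can be sketched as a toy payoff comparison. The specific numbers below are my own illustrative assumptions, not anything from the story; only their ordering matters:

```python
# Toy model of the negotiation protocol described above: refusal gets the
# deal imposed by force, and accepted deals are trusted without verification.
# Payoff values are made-up assumptions; only their relative order matters.

refuse = -10            # deal imposed by force anyway, plus the cost of resisting
accept_and_honor = 0    # you bear the full cost of the deal
accept_and_cheat = 5    # you take the deal's benefits, skip its costs, unpunished

best = max(refuse, accept_and_honor, accept_and_cheat)
assert best == accept_and_cheat  # "accept, then cheat" dominates both alternatives
```

Under any payoffs with this ordering, "accept and cheat" strictly dominates, which is the point above: the protocol only works among agents who cannot or will not defect.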

Lastly, I note that the Epilogue of the original ending could be named Atonement as well. After being modified by the Superhappies (like how the Confessor was "rescued"?), the humans would now be atoning for having forced their children to suffer pain. What does this symmetry tell us, if anything?

Comment author: accolade 21 January 2016 10:56:45PM 0 points [-]

why would anyone facing a Superhappy in negotiation not accept and then cheat?

The SH cannot lie. So they also cannot claim to follow through on a contract while plotting to cheat instead.

They may have developed their negotiation habits only facing honest, trustworthy members of their own kind. (For all we know, this was the first alien encounter the SH faced.)

Comment author: accolade 02 December 2015 03:00:28AM 0 points [-]

Been there, loved it!

Comment author: accolade 11 November 2015 11:12:09AM *  3 points [-]

Thank you so much for providing and super-powering this immensely helpful work environment for the community, Malcolm!

Let me chip in real quick... :-9

There - ✓ 1 year subscription GET. I can has a complice nao! \o/
"You're Malcolm" - and awesome! :)

Comment author: Yosarian2 21 January 2014 02:26:00AM 4 points [-]

That's not the idea that really scares Less Wrong people.

Here's a more disturbing one: try to picture a world where all the rationality skills you're learning on Less Wrong are actually somehow flawed, and actually make it less likely that you'll discover the truth, or make you correct less often, for whatever reason. What would that look like? Would you be able to tell the difference?

I must say, I have trouble picturing that, but I can't prove it's not true (we are basically tinkering with the way our mind works without a software manual, after all).

Comment author: accolade 30 September 2015 07:48:27PM 0 points [-]
In response to Trying to Try
Comment author: subod_83 15 April 2010 07:45:14PM 9 points [-]

There's a familiar story - maybe you’ve heard it - a story about a proud young man who came to Socrates asking for knowledge. He walked up to the muscular philosopher and said, "O great Socrates, I come to you for knowledge."

Socrates led the young man through the streets of the town - down to the sea - and chest deep into water. Then he asked, "What do you want?"

"Knowledge, O wise Socrates," said the young man with a smile.

Socrates put his strong hands on the man's shoulders and pushed him under. Thirty seconds later Socrates let him up. "What do you want?" he asked again.

"Knowledge," the young man sputtered, "O great and wise Socrates."

Socrates pushed him under again. Thirty seconds passed, thirty-five. Forty. Socrates let him up. The man was gasping. "What do you want, young man?"

Between heavy, heaving breaths the fellow wheezed, "Knowledge, O wise and wonderful..."

Socrates jammed him under again. Forty seconds passed. Fifty. "What do you want?"

"Air!" he screeched. "I need air!"

"When you want knowledge as you have just wanted air, then you will have knowledge."

Can you choose to try harder than you actually are? Isn't that like choosing to believe? I always thought you either believe or you don't. We don't have a choice in the matter. Do we?

In response to comment by subod_83 on Trying to Try
Comment author: accolade 27 September 2013 01:23:57AM *  1 point [-]

[ TL;DR keywords in bold ]

Assuming freedom of will in the first place, why should you not be able to choose to try harder? Doesn't that just mean allocating more effort to the activity at hand?

Did you mean to ask "Can you choose to do better than your best?"? That would indeed seem similar to the doubtful idea of selecting beliefs arbitrarily. By definition of "best", you cannot do better than it. But that can be 'circumvented' by introducing different points in time: Let's say at t=1 your muscle capacity enables you to lift up to 10 kg. You cannot actually choose to lift more. You can try, but would fail. But you can choose to do weight training, with the effect that by t=2 you have raised your lifting power to 20 kg. So you can do better (at t=2) than your best (at t=1).

But Eliezer's point was a different one, to my understanding: He suggested that when you say (and more or less believe) that you "try your best", you are automatically wrong. (But you are only lying to the extent that you are aware of this wrongness.) This is because you do better when setting out to "succeed" rather than to "try": these different mindsets influence your chances of success.

About belief choice: Believing is not a simply choosable action like any other. But I can imagine ways to alter one's own beliefs (indirectly), at least in theory:

  • Influencing reality: one example is the aforementioned weight training, which is a device for changing the belief "I am unable to lift 20 kg" by changing the actual state of reality over time.
  • Reframing a topic, concentrating on different (perspectives on) parts of the available evidence, could alter your conclusion.
  • Self-fulfilling prophecy effects, when you are aware of them, create cases where you may be able to select your belief. Quoting Henry Ford:

    If you think you can do a thing or think you can't do a thing, you're right.

    If you believe this quote, then you can select whether to believe in yourself, since you know you will be right either way.

  • (Possibly a person who has developed a certain kind of mastery over her own mind can spontaneously program herself to believe something.)

(More examples of manipulating one's own beliefs, there in the form of "expectancy", can be found under "Optimizing Optimism" in How to Beat Procrastination. You can also Google "change beliefs" for self-help approaches to the question. Beware of pseudoscience, though.)

Comment author: David_Gerard 12 September 2013 10:58:29AM 1 point [-]

And the mouseovers. And the alt text, which is different again.

Comment author: accolade 26 September 2013 04:53:43AM *  2 points [-]

And the mock ads at the bottom.

ETA: Explanation: Sometimes the banner at the bottom will contain an actual (randomized) ad, but many of the comics have their own funny mock ad associated. (When I noticed this, I went through all the ones I had already read again, to not miss out on that content.)

(I thought I'd clarify this, because this comment got downvoted - possibly because the downvoter misunderstood it as sarcasm?)

Comment author: gwern 27 January 2013 05:17:20PM 1 point [-]

You're a bit late.

Comment author: accolade 27 January 2013 08:30:19PM 0 points [-]

Never too late to upboat a good post! \o/ (…and dispense some bias at the occasion…)

Comment author: accolade 27 January 2013 02:26:49PM 0 points [-]

Upvoted.

Comment author: [deleted] 22 January 2013 05:33:27PM 0 points [-]

Maybe they changed their mind about that halfway through (and they were particularly resistant to the sunk cost effect). I agree that's not very likely, though (probability < 10%).

(BTW, the emphasis looks random to me. I'm not a native speaker, but if I was saying that sentence aloud in that context, the words I'd stress definitely mostly wouldn't be those ones.)

Comment author: accolade 22 January 2013 07:50:08PM *  0 points [-]

Thanks for the feedback on the bold formatting! It was supposed to highlight keywords, sort of a TL;DR. But as that is not clear, I shall state it explicitly.

Comment author: Qiaochu_Yuan 22 January 2013 10:28:42AM *  4 points [-]

If I really wanted to fake the experiment in order to convince people about the dangers of failing gatekeepers, wouldn't it be better for me to say I had won? After all, I lost this experiment.

If you really had faked this experiment, you might have settled on a lie which is not maximally beneficial to you, and then you might use exactly this argument to convince people that you're not lying. I don't know if this tactic has a name, but it should. I've used it when playing Mafia, for example; as Mafia, I once attempted to lie about being the Detective (who I believe was dead at the time), and to do so convincingly I sold out one of the other members of the Mafia.

Comment author: accolade 22 January 2013 12:07:33PM *  0 points [-]

If the author assumes that most people would put considerable (probabilistic) trust in his assertion of having won, he would not maximize his influence on general opinion by employing the bluff of stating he has almost won. This is amplified by the fact that a claim of an actual AI win would spread more virally.

Lying is further discouraged by the risk that the other party will sing.
