
Comment author: Chriswaterguy 22 February 2017 05:21:07AM 0 points [-]

Dead link, FWIW.

Comment author: accolade 24 May 2017 02:49:06AM *  0 points [-]

Pretty much deader than disco, but my inet-fu was able to dig up the following excerpts from the original article (from http://newsinfo.inquirer.net/25019/overcoming-procrastination):

“Too many people set goals that are simply unrealistic. Too big, they want it too soon, and they wonder why they don’t have any results in their life. What happens to a person who is consistently setting big goals that are outside of their scope, outside of their belief system, and they keep coming short of them? What kind of pattern does it set up in their mind? That sort of person starts to say, ‘Why do I bother with this goal setting stuff—I don’t ever achieve anything.’

“Set yourself a goal that is realistic, something you can see that isn’t too far and isn’t overpowering, not too far away, but at the same time, giving you a bit of a stretch, getting you out of your comfort zone. And once you’ve done that, and you’ve built your belief, you’ve built your power, then you set yourself another realistic goal, with another stretch factor. And once you’ve done that, another one. So it’s like a series of stepping stones, still getting you in the same direction, but having a staggered approach. Also, the wrong goal is something that’s too low. It doesn’t stimulate you, drive you, because you’ve done it before or you can do it or it’s simple. It doesn’t give you that drive, to give you that ‘take action step,’ to beat procrastination and help you as well.”

Also, since I evidently have no life, I mini-doxed Sam in case someone would like to ask him whether he still has a copy of the whole article, lol:
https://www.linkedin.com/in/sam-tornatore-7b87b911a/
https://www.facebook.com/sam.tornatore.9

Comment author: ChristianKl 23 June 2016 01:58:33PM 0 points [-]

Sure, your address will start to receive more spam, but it will be filtered just like the spam you already receive.

If you have an extension that sends false responses, spammers will have an incentive to avoid messaging those email addresses.
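
As a rough sketch of the kind of filtering this relies on, here is a minimal naive-Bayes-style spam scorer; the per-token probabilities are made-up placeholders, not taken from any real filter:

    from math import log

    # Assumed per-token spam probabilities P(spam | token) -- illustrative only.
    SPAM_PROB = {"viagra": 0.95, "free": 0.80, "meeting": 0.10}

    def spam_score(message, prior=0.5):
        """Combine per-token evidence into a log-odds spam score."""
        score = log(prior / (1 - prior))
        for token in message.lower().split():
            p = SPAM_PROB.get(token)
            if p is not None:
                score += log(p / (1 - p))
        return score

    print(spam_score("free viagra now"))   # strongly positive -> likely spam
    print(spam_score("meeting tomorrow"))  # negative -> likely ham

More incoming spam mostly just shifts more messages into the first bucket, which is the comment's point.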

Comment author: accolade 05 April 2017 09:32:39PM 0 points [-]

But they could still use/sell your address for spam that doesn’t rely on an email response but on clicking a link. (E.g. shopping for C1/\L|S.)

Comment author: accolade 02 April 2017 11:21:46AM 0 points [-]

• Everett branches where Eliezer Yudkowsky wasn’t born have been deprecated. (Counterfactually optimizing for them is discouraged.)

Comment author: accolade 02 April 2017 10:59:32AM *  0 points [-]

"That which can be destroyed by being a motherfucking sorceror should be"

Brilliant!! x'D x'D

(This might make a good slogan for pure NUs …)

Comment author: accolade 17 March 2017 08:56:56PM *  0 points [-]

“Effective Hedonism”
“Effective Personal Hedonism”
“Effective Egoistic Hedonism”
“Effective Egocentric Hedonism”
“Effective Ego-Centered Hedonism”
“Effective Self-Centric Hedonism”
“Effective Self-Centered Hedonism”

In response to Timeless Identity
Comment author: Roland2 03 June 2008 09:03:28AM 9 points [-]

Where can I sign up for cryonics if I live outside the United States and Europe?

In response to comment by Roland2 on Timeless Identity
Comment author: accolade 17 November 2016 03:57:56AM 0 points [-]
Comment author: Wei_Dai2 06 February 2009 07:54:54PM 10 points [-]

But the tech in the story massively favors the defense, to the point that a defender who is already prepared to fracture his starline network if attacked is almost impossible to conquer (you’d need to advance faster than the defender can send warnings of your attack while maintaining perfect control over every system you’ve captured). So an armed society would have a good chance of being able to cut itself off from even massively superior aliens, while pacifists are vulnerable to surprise attacks from even fairly inferior ones.

I agree, and that's why in my ending humans conquer the Babyeaters only after we develop a defense against the supernova weapon. The fact that the humans can see the defensive potential of this weapon, but the Babyeaters and the Superhappies can't, is a big flaw in the story. The humans sacrificed billions in order to allow the Superhappies to conquer the Babyeaters, but that makes sense only if the Babyeaters can't figure out the same defense that the humans used. Why not?

Also, the Superhappies' approach to negotiation made no game-theoretic sense. What they did was: offer a deal to the other side; if they don't accept, impose the deal on them anyway by force; if they do accept, trust that they will carry out the deal without trying to cheat. Given these incentives, why would anyone facing a Superhappy in negotiation not accept and then cheat? I don't see any plausible way in which this morality/negotiation strategy could have become a common one in Superhappy society.
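
A toy payoff table makes the incentive problem explicit; the numbers below are assumptions for illustration, not anything from the story:

    # Toy model of facing the Superhappy strategy described above:
    # refuse -> deal imposed by force; accept -> trusted and never verified.
    # Payoff numbers are illustrative assumptions.
    PAYOFFS = {
        "refuse": -10,          # deal imposed by force, plus the costs of losing
        "accept_and_honor": 0,  # deal carried out as agreed
        "accept_and_cheat": 5,  # keep the benefits, skip the costs, never punished
    }

    best = max(PAYOFFS, key=PAYOFFS.get)
    print("Best response:", best)  # -> accept_and_cheat dominates

Under any payoffs with this ordering, "accept and cheat" strictly dominates, which is exactly the objection raised above.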

Lastly, I note that the Epilogue of the original ending could be named Atonement as well. After being modified by the Superhappies (like how the Confessor was "rescued"?), the humans would now be atoning for having forced their children to suffer pain. What does this symmetry tell us, if anything?

Comment author: accolade 21 January 2016 10:56:45PM 0 points [-]

why would anyone facing a Superhappy in negotiation not accept and then cheat?

The SH cannot lie. So they also cannot claim they will follow through on a contract while actually plotting to cheat.

They may have developed their negotiation habits while only facing honest, trustworthy members of their own kind. (For all we know, this was the first alien encounter the SH had.)

Comment author: accolade 02 December 2015 03:00:28AM 0 points [-]

Been there, loved it!

Comment author: accolade 11 November 2015 11:12:09AM *  3 points [-]

Thank you so much for providing and super-powering this immensely helpful work environment for the community, Malcolm!

Let me chip in real quick... :-9

There - ✓ 1 year subscription GET. I can has a complice nao! \o/
"You're Malcolm" - and awesome! :)

Comment author: Yosarian2 21 January 2014 02:26:00AM 5 points [-]

That's not the idea that really scares Less Wrong people.

Here's a more disturbing one: try to picture a world where all the rationality skills you're learning on Less Wrong are somehow flawed, and actually make it less likely that you'll discover the truth, or make you correct less often, for whatever reason. What would that look like? Would you be able to tell the difference?

I must say, I have trouble picturing that, but I can't prove it's not true (we are basically tinkering with the way our minds work without a software manual, after all).

Comment author: accolade 30 September 2015 07:48:27PM 0 points [-]
