
Comment author: WoodSwordSquire 29 December 2015 07:02:39AM 2 points [-]

The one improvement that I'm fairly certain I can attribute to lesswrong/HPMOR/etc is getting better at morality. First, being introduced to and convinced of utilitarianism helped me get a grip on how to reason about ethics. Realizing that morality and "what I want the world to be like, when I'm at my best" are really similar, possibly the same thing, was also helpful. (And from there, HPMOR's Slytherins and the parts of Objectivism that EAs tend to like were the last couple of ideas I needed to learn how to have actual self-esteem.)

But as to the kinds of improvements you're interested in: I'm better at thinking strategically, often just from using some estimation in decision making. (If I built this product, how many people would I have to sell it to, at what price, to make it worth my time? That often results in not building the thing.) But the time since I discovered lesswrong includes my last two years of college, and listening to startup podcasts to cope with a boring internship, so it's hard to attribute credit.
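The kind of back-of-the-envelope estimation described above can be sketched in a few lines. Everything here (the function name, the numbers) is a made-up placeholder for illustration, not anything from the comment:

```python
# Rough break-even estimate for "is this product worth building?"
# All numbers below are hypothetical placeholders.

def break_even_units(dev_hours, hourly_rate, price, unit_cost):
    """How many units you must sell for revenue to cover your time."""
    time_cost = dev_hours * hourly_rate
    margin_per_unit = price - unit_cost
    return time_cost / margin_per_unit

# e.g. 200 hours of work valued at $50/h, selling at $20 with $5 unit costs
units = break_even_units(200, 50, 20, 5)
print(round(units))  # 667 units just to break even
```

Seeing a number like that next to a realistic guess at sales volume is usually what "often results in not building the thing."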

My memory isn't better, but I haven't gone out of my way to improve it. I'm pretty sure that programming and reading about programming are much better ways of improving at programming, than reading about rationality is. The sanity waterline is already pretty high in programming, so practicing and following best practices is more efficient than trying to work them out yourself from first principles.

It didn't surprise me at all to see that someone had made a post asking this question. The sequences are a bit over-hyped, in that they suggest that rationality might make the reader a super-human, and then it usually doesn't happen. I think I still got a lot of useful brain-tools from them, though. It's like a videogame that was advertised as the game to end all games, and then it turns out to just be a very good game with a decent chance of becoming a classic. (For the record, my expectations didn't go quite that high, as far as I can remember, but it's not surprising that some people's did. It's possible mine did and I just take disappointment really well.)

Comment author: sboo 30 December 2015 04:39:16AM *  2 points [-]

I'm pretty sure that programming and reading about programming are much better ways of improving at programming, than reading about rationality is.

right, that's what motivated the post. I feel like spending time learning "domain specific knowledge" is much more effective than "general rationality techniques". like even if you want to get better at three totally different things over the course of a few years, the time spent on the general technique (that could help all three) might not help as much as on exclusively specific techniques.

still, I tend to have faith in abstractions/generality, as my mind has good long-term memory and bad short-term memory. I guess this is... a crisis of faith, if you will. in "recursive personal cognitive enhancement" (lol).

Comment author: Raziel123 29 December 2015 08:39:28PM *  9 points [-]

The most important benefit from Less Wrong is that before LW I had a very fixed mindset about the things I knew and the things I didn't, as if these were properties of the things themselves, and when I wanted to improve at something I just did it in a very vague, directionless way.

A more concrete example: I always liked modding video games, but modding is very limited in what you can do compared to coding, so at least once a year I made a half-hearted attempt to get better at modding, which resulted in nothing, because the next step was always to learn to code (which was in the "I can't" bin). After reading posts here by people doing awesome stuff, and internalizing that the map is not the territory and so on, I realized that I could likely learn to code, and then the "I can't" bin broke. Exactly two years later, I'm now fairly good with Python and Java, and know some Haskell just for the fun of it. I'm currently close to releasing an Android game.

A life-changing benefit I gained was "curing" my social anxiety. It was mostly thanks to a post made here linking to Mark Manson, but it totally changed the way I interact with people: from being all fear and uneasiness to flow and actually enjoying being around people (especially women).

Other, less direct benefits: it cleared up a lot of philosophical confusion, saved me from a couple of death spirals, I have the memorization problem mostly solved with spaced repetition, I change my mind more often, and I picked up strategic thinking, meta-thinking, and more stuff that gets increasingly abstract and that I don't think is in the spirit of the question.
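For what it's worth, the spaced repetition mentioned above usually amounts to some variant of expanding review intervals. A minimal sketch, not the algorithm of any particular app (the 2.5 multiplier is a made-up placeholder, loosely in the spirit of SM-2):

```python
# Minimal expanding-interval scheduler for flashcard reviews.
# The 2.5 multiplier is illustrative, not taken from any real app.

def next_interval(interval_days, recalled):
    """Return days until the next review of a card."""
    if not recalled:
        return 1                              # forgot: reset to tomorrow
    return max(1, int(interval_days * 2.5))   # remembered: spread reviews out

# A card recalled successfully four reviews in a row:
days = 1
schedule = []
for _ in range(4):
    days = next_interval(days, recalled=True)
    schedule.append(days)
print(schedule)  # [2, 5, 12, 30]
```

The point is just that each success pushes the next review further out, so well-learned cards cost almost no time while shaky ones come back soon.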

To answer the question, I DO think that my past self was dumber than I am now, so in a way I've gotten smarter.

Comment author: sboo 30 December 2015 04:26:50AM -2 points [-]

Haskell <3

has anyone actually gotten smarter, in the past few years, by studying rationality?

7 sboo 28 December 2015 06:34PM

I feel I've learned a lot about problem solving from less wrong (and HPMOR in particular). be concrete, hold off on proposing solutions, et cetera. The effect*, unfortunately, doesn't seem to be as much as I had hoped.

Small increases in intelligence are extremely general and even recursive, so it feels worth the effort. But I found alternatives much more effective (though still modest) than studying/discussing/applying the sequences, like meditation or smart drugs.

I'm interested in other lesswrongers' experiences in cognitive enhancement.

* by "smarter" I mean "better at problem solving", where examples of "problems" are writing a program, finding the right thing to say to resolve interpersonal conflict, memorizing some random fact quickly and then recalling it quickly/vividly. let me know if you want further clarification. 

Comment author: lacker 19 September 2014 06:51:57AM 3 points [-]

Ah, it's just like "What Would Jesus Do" bracelets.

Comment author: sboo 10 April 2015 06:15:11AM 2 points [-]

Rational!Jesus

We have the next HPMOR.

Comment author: buybuydandavis 08 August 2014 03:23:10AM 3 points [-]

I'd recommend a persona over a role. Make it a person to inhabit, instead of an idea to correspond to.

Comment author: sboo 04 April 2015 07:08:31AM 0 points [-]

can you explain? Sounds interesting.

Comment author: DavidM 06 May 2011 12:33:01AM *  4 points [-]

I have no idea whether an experiment like one you describe would show a difference. I think there is probably some kind of bottleneck in the data the retina sends to the brain, and I imagine that could stand in the way of testing my attention in the way you describe. But the basic point is interesting.

I could imagine lots of experiments which I believe would likely show a difference in perception and attention between people who meditate in the way I've described and people who don't. For example, I claimed that mode four perception has "wide attentional width". It seems likely that this implies that a person in that mode of perception would be much better at attentional tasks involving simultaneous recognition of objects in different parts of the visual field. (For example, imagine watching a large computer screen that flashes two images, at the far left and right sides simultaneously, and having to explicitly say something about what properties those images had.) And since I can get into mode four perception when I want to, I should be better at these tasks.

On the other hand, most low-level cognitive processes are inaccessible to introspection, so without any knowledge of cognitive psychology, I have no idea whether the feature of experience I call "wide attentional width" would translate into this particular finding, or not do so because of some detail about human cognition that cognitive psychologists know about but I don't.

So, ultimately, I expect that a variety of tests along these lines would find obvious differences, but I don't have enough knowledge to pick out any particular one.

Risto Saarelma mentions EEG readings, and I imagine that meditating in the way I describe would produce obvious effects there, though I don't know enough about EEG readings to predict what they would be.

In general I worry that this is not a helpful line of thinking to pursue. Finding these effects would show that the time I've invested in meditating has affected the functioning of my brain with respect to attention and perception. Would this really be a surprising result to you? I would expect that a person who pursues any exercise in attention and perception is likely to show differences in attention and perception compared to a person who doesn't, simply due to neuroplasticity, even if they aren't exercises in attention and perception that would ever lead to enlightenment. I don't see that being able to demonstrate these differences would have a very large bearing on the claims I've made that people here have found controversial (though it would have some bearing on them).

Comment author: sboo 02 November 2014 09:44:13AM 3 points [-]

anecdote: David Ingram (who claims to be enlightened) came to a cogsci lab at my school, and was able to perceive some normally-imperceptible "subliminal" visual stimuli (e.g. a flash X milliseconds long, or whatever). I heard it from a friend who administered the test; I don't have the raw data or an article, grains of salt and all that.

Comment author: Eliezer_Yudkowsky 04 February 2009 08:50:30AM 11 points [-]

This is the original ending I had planned for Three Worlds Collide.

After writing it, it seemed even more awful than I had expected; and I began thinking that it would be better to detonate Sol and fragment the human starline network, guaranteeing that, whatever happened in the future, true humans would continue somewhere.

Then I realized I didn't have to destroy the Earth - that, like so many other stories I'd read, my very own plot had a loophole. (I might have realized earlier, if I'd written part 5 before part 6, but the pieces were not written in order.)

Tomorrow the True Ending will appear, since it was indeed guessed in the comments yesterday.

If anyone wonders why the Normal Ending didn't go the way of the True Ending - it could be because the Superhappy ambassador ship got there too quickly and would have been powerful enough to prevent it. Or it could be because the highest decision-makers of humankind, like Akon himself, decided that the Superhappy procedure was the categorically best way to resolve such conflicts between species. The story does not say.

Comment author: sboo 22 April 2014 03:58:30AM *  3 points [-]

"... must relinquish bodily pain, embarrassment, and romantic troubles."

that's worse than letting billions of children be tortured to death every year. that's worse than dying from a supernova. that's worse than dying from mass suicide. that's worse than dying because you can't have sex with geniuses to gain their minds and thus avert the cause of death that you die from.

you really think existence without pain is that bad? do you really think they are not "true humans"?

what about the 3WC humans? are they not "true humans" either? only us?

what about those with CIP? what about cold people? are they not "true humans"?

do you think there should be less but non-zero pain in our minds? how much?

ignore the loophole. explain why this superhappy ending is worse than the supernova ending.

literally unbelievable.

Comment author: ChrisHallquist 19 July 2013 06:18:29PM 1 point [-]

There are a couple of potential advantages to having a horcrux in Harry's possession: Harry's then guarding it, and it might be used to possess Harry later on. Though that's less helpful if, as in canon, Harry is a horcrux. But even then, I'd hardly be shocked to see Voldemort creating an additional horcrux on a whim.

Comment author: sboo 19 April 2014 07:00:02AM 1 point [-]

"You can never have enough big white belts, remember that."

Comment author: ChrisHallquist 19 July 2013 12:46:45AM 5 points [-]

Blaming the Pioneer Plaque for the progressive degradation sounds like it makes sense at first, but the point of the Pioneer Plaque thing is that this Voldemort is supposed to be smarter than canon Voldemort, and a Pioneer Plaque horcrux superior. That theory makes the Pioneer Plaque horcrux inferior. Also, I'm pretty sure Voldemort has other horcruxes, including Roger Bacon's diary and quite possibly ones hidden in the other locations Harry suggested when discussing how to get rid of a Dementor.

Comment author: sboo 19 April 2014 06:44:02AM 0 points [-]

maybe that's why he hasn't killed harry after hearing the prophecy.

he needs help in finding his horcrux.

which he won't, because space is huge.

Comment author: Alejandro1 18 July 2013 01:40:22PM *  65 points [-]

I am amazed that Eliezer managed to take Rowling's most corny idea and made it non-corny: "The power that the Dark Lord knows not" is, after all, none other than the power of true love. And it is a mighty power not because of a hokey magical force attached to it, but because someone who feels it in addition to being rational is motivated to reshape the universe. "Power comes from having something to protect."

Comment author: sboo 19 April 2014 06:15:57AM 0 points [-]

quirrel wants to protect himself from death. and gains the power to do it.
