Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: JesseGalef 17 May 2013 02:09:54AM *  20 points

Regarding the music: I found video game soundtracks to be especially perfect - after all, they're designed to be background music. But I think there's more to it than that. I've had years of conditioning such that when I hear the Warcraft II soundtrack I immediately get into a mindset of intense concentration and happiness.

Obviously it depends on your tastes and whether you have attachments to particular video games, but here are my favorites:

(non-video game music that goes into the rotation)

Comment author: incariol 18 May 2013 12:09:56PM 1 point

Here's another one: Skyrim soundtrack (a bit over 3.5 hours of epic fantasy music, with the last ~40 minutes being purely atmospheric/ambient).

Comment author: incariol 18 March 2013 07:47:03PM 6 points

Choice of attention - to pay attention to this and ignore that - is to the inner life what choice of action is to the outer. In both cases, a man is responsible for his choice and must accept the consequences, whatever they may be.

W. H. Auden

Comment author: incariol 23 January 2013 12:10:28AM *  5 points

Apart from Numerical Analysis and Parallel Computing which seem a bit out of place here (*), and swapping Bishop's Pattern Recognition for Murphy's ML: A Probabilistic Perspective or perhaps Barber's freely available Bayesian Reasoning and ML, this is actually quite a nice list - if complemented with Vladimir Nesov's. ;)

(*) We're still in a phase that's not quite philosophy in the standard sense of the word, but nonetheless light years away from even starting to program the damn thing. And although learning functional programming from SICP is all well and good due to its mind-expanding effects, going into the specifics of designing programs for parallel architectures, or learning about various techniques for numerical integration, is ... well, I'd rather invest my time in going through the Princeton Companion to get a nice bird's eye view of math, or grab Pearl's Probabilistic Reasoning in Intelligent Systems and Causality to get a feel for what a formal treatment/reduction of an intuitive concept looks like, and leave numerics and other software design issues for a time when they become relevant.

Comment author: incariol 26 December 2012 06:06:31PM 2 points

Given these recent logic-related posts, I'm curious how others "visualize" this part of math, e.g. what do you "see" when you try to understand Goedel's incompleteness theorem?

(And don't tell me it's kittens all the way down.)

Things like derivatives or convex functions are really easy in this regard, but when someone starts talking about models, proofs and formal systems, my mental paintbrush starts doing some pretty weird stuff. In addition to ordinary imagery like bubbles of half-imagined objects, there is also something machine-like in the concept of a formal system, for example, like it was imbued with a potential to produce a specific universe of various thingies in a larger multiverse (another mental image)...
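
(To make that "machine-like" image concrete, here's a minimal sketch of my own - it's not from the post, and it uses Hofstadter's MIU system rather than anything Goedel-specific: a formal system really is just an axiom plus rewrite rules, and running it mechanically generates its little "universe" of theorems.)

```python
# A formal system as a theorem-producing machine: Hofstadter's MIU system.
# Axiom: "MI". Theorems are whatever the four rewrite rules can reach.

def miu_successors(s):
    """All strings derivable from s by one rule application."""
    out = set()
    if s.endswith("I"):          # Rule 1: xI -> xIU
        out.add(s + "U")
    out.add(s + s[1:])           # Rule 2: Mx -> Mxx (every string starts with M)
    for i in range(len(s) - 2):  # Rule 3: replace any "III" with "U"
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):  # Rule 4: delete any "UU"
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def theorems(axiom="MI", depth=3):
    """The 'universe' of theorems reachable within `depth` rule applications."""
    frontier, seen = {axiom}, {axiom}
    for _ in range(depth):
        frontier = {t for s in frontier for t in miu_successors(s)} - seen
        seen |= frontier
    return seen

print(sorted(theorems(depth=2)))
# -> ['MI', 'MII', 'MIIII', 'MIIU', 'MIU', 'MIUIU']
```

The famous point - "MU" is never produced, no matter the depth - is exactly the kind of fact you see by stepping *outside* the machine and reasoning about it, rather than by cranking its rules.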

Anyway, this is becoming quite hard to describe - and it's not all due to me being a non-native speaker, so... if anyone is prepared to share her mind's roundabouts, that would be really nice, but apart from that - is there a book, by a professional mathematician if possible, where one can find such revelations?

Comment author: [deleted] 15 November 2012 05:24:17PM *  5 points

I don't think you'd be likely to find yourself in a relationship despite not wanting to by going to parties with lots of pretty girls around, let alone by walking on a street where girls also walk rather than through a forest. And not developing social skills may make things much harder should you ever decide to try and get into a relationship later in your life.

In response to comment by [deleted] on Checklist of Rationality Habits
Comment author: incariol 16 November 2012 12:35:38PM 0 points

Well, it has happened to me before - girls really can be pretty insistent. :) But this is not actually what concerns me - it's the distraction/wasted time induced by a pretty-girl-contact event, as apotheon explained below.

Comment author: incariol 12 November 2012 07:42:41PM 9 points

When someone proposes what "we" should do, where by "we" he implicitly refers to a large group of people he has no real influence over (as in the proposal to ban AGI & hardware development), I wonder what the value of this kind of speculation is - other than amusing oneself with a picture of "what would this button do" on a simulation of Earth under one's hands.

As I see it, there's no point in thinking about these kinds of "large scale" interventions that are closely interwoven with politics. Better to focus on what relatively small groups of people can do (this includes, e.g., influencing a few other AGI development teams to work on FAI), and in this context, I think our best hope is in deeply understanding the mechanics of intelligence and thus having at least a chance at creating FAI before some team that doesn't care in the least about safety dooms us all - and there will be such teams, regardless of what we do today; just take a look at some of the "risks from AI" interviews...

Comment author: incariol 11 November 2012 11:42:09PM 1 point

What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention for a few minutes on the issue at hand"?

It sounds so very simple, yet I routinely fail to do it: when, e.g., I try to solve some Project Euler problem or another and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down and solve the bloody thing.

Comment author: Kaj_Sotala 07 November 2012 01:26:39PM 28 points

Very nice list! I feel like this one in particular is one of the most important ones:

I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)

To give my own example: I try to be vegetarian, but occasionally the temptation of meat gets the better of me. At some point I realized that whenever I walked past a certain hamburger place - which was something that I typically did on each working day - there was a high risk of me succumbing. Obvious solution: modify my daily routine to take a slightly longer route which avoided any hamburger places. Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

Comment author: incariol 11 November 2012 12:55:37AM 3 points

Another example: as I don't feel like getting in a relationship for the foreseeable future, I try to avoid circumstances with lots of pretty girls around, e.g. not going to certain parties, taking walks in those parts of the forest where I don't expect to meet any, and in general, trying to convince other parts of my brain that the only girl I could possibly be with exists somewhere in the distant future or not at all (if she can't do a spell or two and talk to dragons, she won't do ;-)).

It also helps being focused on math, programming and abstract philosophy - and spending time on LW, it seems. :)

In response to Logical Pinpointing
Comment author: incariol 11 November 2012 12:28:41AM 0 points

Due to all this talk about logic I've decided to take a closer look at Goedel's theorems and related issues, and found this nice LW post that did a really good job of dispelling confusion about completeness, incompleteness, SOL semantics etc.: Completeness, incompleteness, and what it all means: first versus second order logic

If there's anything else along these lines to be found here on LW - or, for that matter, anywhere - I'm all ears.

In response to Logical Pinpointing
Comment author: incariol 11 November 2012 12:21:37AM *  6 points

So this is where (one of the inspirations for) Eliezer's meta-ethics comes from! :)

A quick refresher from a former comment:

Cognitivism: Yes, moral propositions have truth-value, but not all people are talking about the same facts when they use words like "should", thus creating the illusion of disagreement.

... and now from this post:

Some people might dispute whether unicorns must be attracted to virgins, but since unicorns aren't real - since we aren't locating them within our universe using a causal reference - they'd just be talking about different models, rather than arguing about the properties of a known, fixed mathematical model.

(This little realization also holds a key to resolving the last meditation, I suppose.)

I've heard people say the meta-ethics sequence was more or less a failure, since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be at least a bit easier to understand where Eliezer's coming from.
