Comment author: shminux 21 July 2014 10:54:57PM *  19 points

I don't think "Politics is hard mode" conveys the point.

Any mention of politics is a minefield of unintended triggers. In the "politics is the mind-killer" post, Eliezer refers to the mind-killing effect that politically charged examples have on any discussion, precisely because of these triggers. That's the reason that

political examples should not be used in a non-political discussion.

Just as any trigger-heavy example should not be used unless it is explicitly intended to trigger people. (I used one in one of my posts for that purpose.)

TL;DR: the original meaning of "politics is the mind-killer" is "avoid unintended triggers in your arguments".

Unfortunately, this slogan has degenerated into a catch-all "boo! politics" attitude. Maybe what is needed is a post on "How to discuss politics (race/gender/...) rationally". Perhaps one has been written already, though I came up empty after a cursory look.

Comment author: roryokane 23 July 2014 07:25:03PM 4 points

A better slogan for that purpose might simply be "Politics makes for bad examples". Straight to the point. It needs explanation, just like the "mind-killer" slogan, but after the explanation it is easy to remember the reasoning behind it.

Comment author: Daniel_Starr 22 March 2012 01:30:04PM *  3 points

Ironically, I suspect the "cultlike" problem is that LessWrong/SI's key claims lack falsifiability.

Friendly AI? In the far future.

Self-improvement? Any mental self-improvement program is suspected of being a cult, unless it trains a skill that outsiders are confident they can measure.

If I have a program for teaching people math, outsiders feel they know how they can check my claims - either my graduates are good at math or not.

But if I have a program for "putting you in touch with your inner goddess", how are people going to check my claims? For all outsiders know I'm making people feel good, or feel good about me, without actually making them meaningfully better.

Unfortunately, the external falsifiability of LW/SI's merits is more like the second case than the first. Especially, I suspect, for people who aren't already big fans of mathematics, information theory, probability, and potential AI.

Organization claims to improve a skill anyone can easily check = school. Organization claims to improve a quality that outsiders don't even know how to measure = cult.

If and when LW/SI can headline more easily falsifiable claims, it will be less cultlike.

I don't know if this is an immediately solvable problem, outside of developing other aspects of LW/SI that are more obviously useful/impressive to outsiders, and/or developing a generation of LW/SI fans who are indeed "winners" as rationalists ideally would be.

Comment author: roryokane 24 May 2014 06:18:57AM 1 point

PredictionBook might help with measuring improvement, in a limited way. You can use it to measure how often your predictions are correct, and whether you are getting better over time. And you could theoretically ask LW-ers and non-LW-ers to make some predictions on PredictionBook, and then compare their accuracy to see if Less Wrong helped. Making accurate predictions of likelihood is a real skill that can certainly be very useful – though it depends on what you’re predicting.
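The "measure how often your predictions are correct" idea can be made concrete with a calibration metric such as the Brier score. A minimal sketch, with made-up prediction data (the numbers below are hypothetical, not from PredictionBook):

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Each pair is (stated probability, actual outcome: 1 = happened, 0 = didn't).
early = [(0.9, 0), (0.8, 1), (0.7, 0), (0.6, 1)]  # overconfident period
later = [(0.9, 1), (0.8, 1), (0.3, 0), (0.6, 1)]  # better-calibrated period

print(brier_score(early))  # 0.375
print(brier_score(later))  # 0.075 – lower score = improvement over time
```

Comparing the score across time periods, or between two groups of predictors, is exactly the kind of check the comment describes.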

Comment author: roryokane 18 April 2014 01:26:11AM *  20 points

“If only there were irrational people somewhere, insidiously believing stupid things, and it were necessary only to separate them from the rest of us and mock them. But the line dividing rationality and irrationality cuts through the mind of every human being. And who is willing to mock a piece of his own mind?”

(With apologies to Solzhenitsyn).

– Said Achmiz, in a comment on Slate Star Codex’s post “The Cowpox of Doubt”

Comment author: terasinube 03 March 2014 09:47:03PM 1 point

In non-Toastmasters settings, these skills have been useful when I’m trying to talk to people who have different interests, or when I’m put on the spot to talk about something I feel like I don’t know a lot about.

This sounds like you became more sociable. Now I'm curious: what would a sociable person look like to you? I mean, where is the line that separates the sociable from the unsociable, in your view?

Comment author: roryokane 15 March 2014 03:17:28AM *  1 point

I would think the difference is that sociable people feel comfortable even in a less formal gathering, when you don’t know of anyone you would particularly like to talk to and nobody has asked you to talk. Even in such a situation, a sociable person could find something interesting to do, involving other people, and be reasonably confident that they are not being rude or boring, and end up enjoying whatever they find to do.

Comment author: roryokane 30 November 2013 07:20:49PM *  9 points

I took the survey.

I chose to Defect on the monetary reward prize question. Why?

  • I realized that the prize money is probably contributed by Yvain. And if $60-or-less were to be distributed between a random Less Wrong member and Yvain, I would rather as much of it as possible go to Yvain. This is because I know Yvain is smart and writes interesting posts, so the money could help him contribute something to the world that another person could not. Answering Defect lowers the amount of prize money, letting Yvain keep more of it.
  • Also, I would rather I have the $60-or-less than any other Less Wrong member, and answering Defect gives me a bigger chance of that happening.

Edit: pgbh had the same reasoning.

Comment author: DanielLC 02 November 2013 12:07:37AM 3 points

At first I thought this would be suggesting horror stories that are particularly horrifying to rationalists, such as the one Eliezer mentioned on the HP:MoR author's notes about the Friendship is Optimal fanfiction.

Comment author: roryokane 02 November 2013 12:32:55AM 3 points

Link to the story: Friendship is Optimal. Though I wouldn’t call the story as a whole a horror story; rather, it has some fridge horror. And it is particularly horrifying to those interested in the singularity, rather than to rationalists in general.

Comment author: erratio 29 October 2013 05:53:22PM 1 point

Same experience here. I eventually switched to EmotionSense because TagTime felt like too much bookkeeping. (Basically, I have long stretches during the day when I don't want to be distracted by my phone. Since TagTime was sampling every 45 minutes on average, that left me with a bunch of tags to fill in at any given time, and I'm really obsessive about not leaving tags unfilled.) EmotionSense bugs me a maximum of five times a day and IMO has a more intuitive interface for mood sampling.

Comment author: roryokane 01 November 2013 11:06:25PM *  0 points

7. … I think of several things related to work that I really want to remember … as I'm trying to fall asleep …

I use my smartphone (Android) in cases similar to this, though it’s not usually work-related stuff that I think of. I have the sound recording app WAVE Recorder in my dock / quick launch area. If I want to note something for later with the minimum of fuss, it’s easy to unlock my phone, open the app, hit record, and briefly describe whatever it is that I thought of (or hum it, if it’s a piece of music). Then I just hit stop and lock the phone again.

However, the downside of recording audio is that it’s harder to review later. You can’t skim a recording the way you can skim written notes to remind yourself what you said; you have to listen through the whole thing. This can be mitigated somewhat by giving the recorded audio file a relevant name. But sometimes I value being able to read the whole thing easily later more than noting the idea as quickly as possible right now.

In those cases, I write the idea down in a new Evernote note. Evernote is also in my phone’s dock. I use the SwiftKey Keyboard to write the idea quickly, and Lux to turn the screen’s brightness below the built-in minimum so that the screen doesn’t hurt my eyes or wake me up too much in the dark room.

Comment author: roryokane 29 October 2013 02:40:07AM 0 points

The flaw in the argument is simply that it assumes E(X/Y) > 1 implies that E(X) > E(Y).

I didn’t understand this sentence very well at first, because the inequality on the right is two steps removed from the one on the left. I find this version clearer:

The flaw in the argument is simply that it assumes E(X/Y) > 1 implies that E(X) / E(Y) > 1. (If E(X) / E(Y) > 1, that would imply that E(X) > E(Y).)
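To see why the assumed implication fails, it may help to work through one concrete counterexample (the numbers here are invented for illustration): a joint distribution where E(X/Y) > 1 even though E(X) < E(Y).

```python
# Counterexample: E(X/Y) > 1 does not imply E(X) > E(Y).
# Two equally likely outcomes for the pair (X, Y).
outcomes = [(2, 1), (1, 100)]

e_ratio = sum(x / y for x, y in outcomes) / len(outcomes)  # E(X/Y)
e_x = sum(x for x, _ in outcomes) / len(outcomes)          # E(X)
e_y = sum(y for _, y in outcomes) / len(outcomes)          # E(Y)

print(e_ratio)   # 1.005  -> E(X/Y) > 1
print(e_x, e_y)  # 1.5 50.5  -> yet E(X) < E(Y)
```

The ratio is big exactly when Y is small, so a few small-Y outcomes can push E(X/Y) above 1 while contributing almost nothing to E(X) relative to E(Y).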

Comment author: BraydenM 10 September 2013 07:59:24AM 6 points

"And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity."

Can I have some specific examples that might help illustrate this point?

Comment author: roryokane 20 September 2013 03:02:14AM *  7 points

A hypothetical based on an amalgamation of my own experiences during a co-op:

You work as a programmer at a company that writes websites with the programming languages VBScript and VB.Net. You have learned enough about those languages to do your job, but you think the Ruby language is much more efficient, and you write your personal programming projects in Ruby. You occasionally go to meetings in your city for Ruby programmers, which talk about new Ruby-related technologies and techniques.

You are nearing the deadline for the new feature you were assigned to write. You had promised you would get the web page looking good in all browsers by today’s followup meeting about that feature. Fifteen minutes before the meeting, you realize that you forgot to test in Internet Explorer 8. You open the page in IE8 and find that it looks all messed up. You spend fifteen rushed minutes frantically looking up the problem you see and trying out code fixes, and you manage to fix the problem just before the meeting.

It’s just you, the technical lead, and the project manager at the meeting. You explain that you’ve finished your feature, and the project manager nods, congratulates you, and makes a note of it in his project tracker. Then he tells you what he wants you to work on next: an XML reformatter. The XML documents used internally in one of the company’s products are poorly formatted and organized, with incorrect indentation and XML elements in random order. He suggests that you talk to the technical lead about how to get started, and leaves the meeting.

This project sounds like something that will be run only once – a one-time project. You have worked with XML in Ruby before, and are excited at the idea of being able to use your Ruby expertise in this project. You suggest to the technical lead that you write this program in Ruby.

“Hmm… no, I don’t think we should use Ruby for this project. We’re going to be using this program for a long time – running it periodically on our XML files. And all of our other programmers know VB.Net. We should write it in VB.Net, because I am pretty sure that another programmer is going to have to make a change to your program at some point.”

If you’re not thinking straight, at this point, you might complain, “I could write this program so much faster in Ruby. We should use Ruby anyway.” Yet that does not address the technical lead’s point, and ignores the fact that one of your assumptions has been revealed to be wrong.

If you are aware enough of your emotions to notice that you’re still on adrenaline from your last-minute fix, you might instead think, I don’t like the sound of missing this chance to use Ruby, but I might not be thinking straight. I’ll just accept that reasoning for now, and go back and talk to the technical lead in his office later if I think of a good argument against that point.

This is a contrived example. It is based on my experiences, but I exaggerate the situation and “your” behavior. The fact that I had to change the real situation so much to produce a somewhat believable example suggests that the specific tip you quoted isn’t applicable very often – in my life, at least.
