Comment author: IlyaShpitser 19 June 2015 05:34:33PM *  5 points [-]

Eliezer wants to be a guru. No one calls him on it. There is an enormous amount of unhealthy hero worship. What did you expect, exactly?

Eliezer is constitutionally incapable of doing anything without coming across as hilariously over-the-top arrogant, and at some point instead of fighting it he just turned it into his style so that now it’s kind of hard to tell when he’s joking or not.

-- Yvain on EY.

Comment author: ahbwramc 19 June 2015 05:56:28PM 15 points [-]

I don't know, it feels like I see more people criticizing perceived hero worship of EY than I see actual hero worship. If anything the "in" thing on LW these days seems to be signalling how evolved one is by putting down EY or writing off the sequences as "just a decent popular introduction to cognitive biases, nothing more" or whatever.

In response to comment by [deleted] on In praise of gullibility?
Comment author: SolveIt 18 June 2015 11:47:39AM *  12 points [-]

I disagree with the premise that LW tears half-baked ideas to shreds. My experience (which, admittedly, is limited to open threads) is that you'll be fine if you're clear that what you're presenting is a work in progress, and you don't overreach with your ideas.

By overreach, I mean something like this:

This is an attempt to solve happiness. Several factors, such as health, genetics, and social environment, affect happiness. So happiness = health × genetics × social environment.

You can see what's wrong with the post above. It's usually not this blatant, but I see this sort of thing too often, and such posts are invariably ripped to shreds. On the other hand, something like this:

This is an attempt to solve happiness. First, I'd like to identify the factors that affect happiness. I can think of health, genetics, and social environment. Can we break this down further? Am I missing any important factors?

Probably won't be ripped to shreds. It has its fair share of problems, so I wouldn't expect an enthusiastic response from the community, but it won't be piled upon either.

Frankly speaking, the first type of post reeks of cargo cult science (big equations, formal style (often badly executed), and references that may or may not help the reader). I'm not too unhappy to see those posts being ripped to shreds.

Comment author: ahbwramc 18 June 2015 01:22:06PM 4 points [-]

I agree with this. "Half-baked" was probably the wrong phrase to use - I didn't mean "idea that's not fully formed or just a work in progress," although in retrospect that's exactly what half-baked would convey. I just meant an idea that's seriously flawed in one way or another.

Comment author: ThisSpaceAvailable 13 June 2015 02:55:56AM 6 points [-]

I realize that no analogy is perfect, but I don't think your sleeper cell hypothetical is analogous to AI. It would be a more accurate analogy if someone were to point out that, gee, a sleeper cell would be quite effective, and it's just a matter of time before the enemy realizes this and establishes one. There is a vast amount of Knightian uncertainty that exists in the case of AI, and does not exist in your hypothetical.

Comment author: ahbwramc 13 June 2015 03:32:59AM 0 points [-]

Well, it depends on what you mean, but I do think that almost any AGI we create will be unfriendly by default, so to the extent that we as a society are trying to create AGI, I don't think it's exaggerating to say that the sleeper cell "already exists". I'm willing to own up to the analogy to that extent.

As for Knightian uncertainty: either the AI will be an existential threat, or it won't. I already think that it will be (or could be), so I think I'm already being pretty conservative from a Knightian point of view, given the stakes at hand. Worst case is that we waste some research money on something that turns out to be not that important.

(Of course, I'm against wasting research money, so I pay attention to arguments for why AI won't be a threat. I just haven't been convinced yet.)

In response to The Fallacy of Gray
Comment author: ahbwramc 12 June 2015 10:08:30PM *  2 points [-]

When I first read this post back in ~2011 or so, I remembered a specific scene in a book I had read that talked about this error and even gave it the same name. I intended to find the quote and post it here, but never bothered. Anyway, seeing this post on the front page again prompted me to finally pull out the book and look up the quote (mostly for the purpose of testing my memory of the scene to see if it actually matched what was written).

So, from Star Wars X-Wing: Isard's Revenge, by Michael A. Stackpole (page 149 of the paperback edition):

Tycho stood. "It's called the gray fallacy. One person says white, another says black, and outside observers assume gray is the truth. The assumption of gray is sloppy, lazy thinking. The fact that one person takes a position that is diametrically opposed to the truth does not then skew reality so the truth is no longer the truth. The truth is still the truth."

So maybe not exactly the same sentiment as this post, but not a bad rationality lesson for a Star Wars book, really.

(for those interested: my memory of the scene was pretty much accurate, although it occurred much later in the book than I had thought)

Comment author: FrameBenignly 09 June 2015 04:36:29PM 2 points [-]

Your approach looks quite unscientific to me. What empirical evidence do you have to support this? How would you go about codifying these ideas into a proper scientific theory?

Comment author: ahbwramc 09 June 2015 05:14:50PM 3 points [-]

I mean, I don't really disagree; it's not a very scientific theory right now. It was just a blog post, after all. But if I was trying to test the theory, I would probably take a bunch of people who varied widely in writing skill and get them to write a short piece, and then get an external panel to grade the writing. Then I would get the same people to take some kind of test that judged ability to recognize rather than generate good writing (maybe get some panel of experts to provide some writing samples that were widely agreed to vary in writing quality, and have the participants rank them). Then I would see how much of the variation in writing skill was explained by the variation in ability to recognize good writing. If it was all or most of the variation, that would probably falsify the theory - the theory would say the most difficult part of "guess and check" is the guessing part, but those results would say it's the checking.

That's the first thing to come to mind, anyway.
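
For what it's worth, here's a minimal sketch of that last step - checking how much of the variation in the panel's writing grades is accounted for by the recognition-test scores. All the numbers and variable names below are invented purely for illustration:

```python
import numpy as np

# Hypothetical data, one entry per participant (values are made up for illustration).
recognition_score = np.array([3, 5, 7, 4, 8, 6, 2, 9, 5, 7])  # score on the "rank these samples" test
writing_score     = np.array([4, 5, 8, 3, 9, 6, 2, 8, 4, 6])  # panel grade of the participant's own writing

# Fraction of variance in writing skill explained by recognition ability
# (R^2 of a simple linear fit, i.e. the squared correlation).
r = np.corrcoef(recognition_score, writing_score)[0, 1]
print(f"R^2 = {r**2:.2f}")
```

An R^2 near 1 would say the checking side accounts for nearly all of the skill, which is the result that would count against the theory.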

Comment author: ahbwramc 09 June 2015 01:36:45PM *  9 points [-]

I wrote a couple posts on my personal blog a while ago about creativity. I was considering cross-posting them here but didn't think they were LessWrong-y enough. Quick summary: I think because of the one-way nature of most problems we face (it's easier to recognize a solution than it is to generate it), pretty much all of the problem solving we do is guess-and-check. That is, the brain kind of throws up solutions to problems blindly, and then we consciously check to see if the solutions are any good. So what we call "creativity" is just "those algorithms in the brain that suggest solutions to problems, but that we lack introspective access to".

The lack of introspective access means it's difficult to pass creative skills on - think of a writer trying to explain how to write well. They can give a few basic rules of thumb, but most of their skill is contained within a black box that suggests possible sentences. The actual writing process is something like "wait for brain to come up with some candidate next sentence", and then "for each sentence, make a function call to 'is-sentence-good?' module of brain" (in other words, guess and check). Good writers/creative people are just those people who have brain algorithms that are unusually good at raising the correct solution to attention out of the vast possible space of solutions we could be considering.

Of course, sometimes one has insights into a rule or process that generates some of the creative suggestions of the brain. When that happens you can verbalize explicitly how the creative skill works, and it stops being "creative" - you can just pass it on to anyone as a simple rule or procedure. This kind of maps nicely onto the art/science divide, as in "more of an art than a science". Skills are "arts" if they are non-proceduralizable because the algorithms that generate the skill are immune to introspection, and skills are "sciences" if the algorithms have been "brought up into consciousness", so to speak, to the point where they can be explicitly described and shared (of course, I think art vs science is a terrible way to describe this dichotomy, because science is probably the most creative, least proceduralizable thing we do, but what are you gonna do?)
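
To make the guess-and-check loop concrete, here's a toy sketch. The generator and the "is-sentence-good?" check are stand-ins I made up on the spot - the point is only the shape of the loop, not a claim about how the brain implements either step:

```python
import random

WORDS = ["the", "quiet", "storm", "arrived", "slowly", "without", "warning"]

def generate_candidates(n=50):
    # The "guess" step: blindly throw up candidate sentences.
    # (Here it's just random word salad - we have no introspective access to the real generator.)
    return [" ".join(random.sample(WORDS, k=5)) for _ in range(n)]

def is_sentence_good(sentence):
    # The "check" step: a conscious filter we *can* apply even when we can't
    # articulate how to generate good sentences directly. Toy heuristic only.
    return sentence.startswith("the") and "storm" in sentence

def write_next_sentence():
    candidates = generate_candidates()                      # guess
    good = [s for s in candidates if is_sentence_good(s)]   # check
    return good[0] if good else None

print(write_next_sentence())
```

On this picture a better writer isn't running a different loop - their generator just surfaces candidates that pass the check far more often.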

Anyway, I don't know if all of this is just already obvious to everyone here, but I've found it a very useful way to think about creativity.

Edit: I missed your last sentence somehow. The above is definitely just plausible and/or fun to read.

Comment author: ChristianKl 31 May 2015 04:24:06PM 8 points [-]

(Note: my motivation for this is almost exclusively "I want to look like a genius in front of my friends when some contingency I planned for comes to pass", which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)

Often being prepared simply means that nobody notices anything being at odds. Don't optimize for flashy solutions.

Comment author: ahbwramc 31 May 2015 04:35:56PM 3 points [-]

Fair.

Comment author: ahbwramc 31 May 2015 04:06:28AM 10 points [-]

What contingencies should I be planning for in day to day life? HPMOR was big on the whole "be prepared" theme, and while I encounter very few dark wizards and ominous prophecies in my life, it still seems like a good lesson to take to heart. I'd bet there's some low-hanging fruit that I'm missing out on in terms of preparedness. Any suggestions? They don't have to be big things - people always seem to jump to emergencies when talking about being prepared, which I think is both good and bad. Obviously certain emergencies are common enough that the average person is likely to face one at some point in their life, and being prepared for it can have a very high payoff in that case. But there's also a failure mode that people fall into of focusing only on preparing for sexy-but-extremely-low-probability events (I recall a reddit thread that discussed how to survive in case an airplane that you're on breaks up, which...struck me as not the best use of one's planning time). So I'd be just as interested in mundane, everyday tips.

(Note: my motivation for this is almost exclusively "I want to look like a genius in front of my friends when some contingency I planned for comes to pass", which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)

Comment author: ahbwramc 21 May 2015 02:43:53PM 2 points [-]

I feel like there are interesting applications here for programmers, but I'm not exactly sure what. Maybe you could link up a particular programming language's syntax to our sense of grammar, so programs that wouldn't compile would seem as wrong to you as the sentence "I seen her". Experienced programmers probably already have something like this, I suppose, but it could make learning a new programming language easier.

Comment author: JonahSinick 21 May 2015 04:25:26AM 1 point [-]

You are making the assumption that one's self-worth needs to be tied to one's status. Status is a part of what you are. This is not correct. You can keep your ego separate from it. Status can be a tool, it is what you have, not what you are.

No, I wasn't making such an assumption, I was trying to guess what was going on in your mind: a lot of people do attach their self-worth to their social status. I'm trying to get calibrated.

At first, I thought "LWers will be like me and not care about their relative status on an emotional level," then I thought "LWers care a huge amount about their relative status, that's why they all got angry when I wrote a strong criticism of Eliezer and SIAI in 2010," then I thought "maybe LWers don't care that much about their status after all."

If LWers weren't emotionally invested in relative status, we wouldn't be having this conversation :-). There's clearly some sort of issue of self-worth being tied to status. I just don't know how large the effect size is, and in what contexts I should and shouldn't expect it to show up. Can you help me understand?

The initial clash on LW wasn't really even directly about status. It was about rudeness. Regardless of whether one wants to play status games or not, there are social norms of politeness and etiquette.

I'm aware of this; I was intentionally departing from these norms, in an attempt to support Less Wrong's stated purpose as "a community blog devoted to refining the art of rationality."

Up until recently, my attitude had been "these people are all hypocrites who don't actually care about rationality." I now know that I had been overly cynical. But taken seriously, the view "when Jonah writes things on Less Wrong, he should be careful to refrain from saying true things when they might offend other participants" corresponds to "Less Wrong is not a community for someone like Jonah whose focus is on refining the art of rationality."

Note that I do adhere to standards of polite discourse except to the extent that I express my views when I think that they're important.

No, you are mistaken about that. You would become very useful and possibly well-compensated, but just by itself the possession of valuable information will not grant you much status. It just doesn't work this way.

I meant in expectation, not necessarily.

And untangle your own ego from your ability to freely say "I'm smarter than all y'all, peasants!"

You're doing it again :D. You seem to think that I'm coming across as arrogant because I'm egotistical. This isn't at all the case – it would be a relief for me if someone else was writing about the things that I want to communicate. I've found myself in the difficult position of having important information to communicate that other people aren't communicating.

Ok, here's the situation. I believe that I know how people in our broad reference class can systematically increase their productivity by 10x-100x. I've done this by using what I learned in data science to aggregate the common wisdom of great historical figures, the best mathematicians in the world, the most knowledgeable LWers and the most knowledgeable people in the EA movement. Just saying "you can make yourselves ~10x more productive" pattern matches very heavily with a crackpot.

I have a cold start problem: in order for people to understand the importance of the information that I have to convey, they need to spend a fair amount of time thinking about it, but without having seen the importance of the information, they're not able to distinguish me from being a crackpot.

That's why I've been pushing for the importance of putting a lot of time into understanding substantive things: because I've had the perception that people have dug themselves into a sort of epistemic rabbit hole where it's in principle impossible for me to signal that I'm right, independently of whether or not I am.

What I want to convey is really hard (and perhaps impossible) to convey succinctly: that's why nobody's been able to do it successfully before! There are tens or hundreds of thousands of people who have known it. Bill Gates knows it, Warren Buffett knows it, Bill Clinton knows it, Freeman Dyson knows it. But it comes close to being impossible to externalize – historically people have learned how to do it by carefully observing others who can do it, generally as mediated through in-person interactions, and failing that, very careful reading of historical documents by great thinkers from the past.

Certainly the odds are against me being able to communicate it, when nobody else has been able to :D. But I still think that there's some hope. I'm at something of a loss as to how to proceed.

Comment author: ahbwramc 21 May 2015 01:44:37PM *  1 point [-]

I have a cold start problem: in order for people to understand the importance of the information that I have to convey, they need to spend a fair amount of time thinking about it, but without having seen the importance of the information, they're not able to distinguish me from being a crackpot.

For what it's worth, these recent comments of yours have been working on me, at least sort of. I used to think you were just naively arrogant, but now it's seeming more plausible that you're actually justifiably arrogant. I don't know if I buy everything you're saying, but I'll be paying more attention to you in the future anyway.

I've tried to convey certain hard-to-explain LessWrong concepts to people before and failed miserably. I'm recognizing the same frustration in you that I felt in those situations. And I really don't want to be on the wrong side of another LW-sized epistemic gap.
