
Comment author: John_Maxwell_IV 04 November 2017 03:19:54AM 1 point [-]

I suppose that, all else being equal, more recent textbooks are better, since they're more up-to-date (modulo editions) and are also written to address a perceived flaw in the existing books? Though there are some great older textbooks: I think I remember reading that Turing Award winner Richard Hamming added probability to his calculus textbook because he agreed with students that calculus was too often presented without any motivating applications.

If you're an autodidact, having answers available for the problems in the book seems pretty valuable as a way to make sure you're actually learning the material correctly.

Comment author: gwern 07 October 2017 02:18:32AM 3 points [-]

"Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill."

You know, you could do that. By giving them the money.

Comment author: John_Maxwell_IV 09 October 2017 12:50:53AM 3 points [-]

The Future of Life Institute thinks that a portfolio approach to AI safety, where different groups pursue different research agendas, is best. It's plausible to me that we've hit the point of diminishing returns on resources allocated to MIRI's approach, and that marginal resources are best directed toward starting new research groups.

Comment author: fowlertm 08 October 2017 04:35:41PM *  5 points [-]

I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn't rigorous enough or something, despite the fact that on the MIRIx webpage it says:

"A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together."

Which is exactly what I was proposing.

I have tried for years to network with people in the futurist/rationalist movement, by offering to write for various websites and blogs (and being told no each and every single time), or by trying to discuss novel rationality techniques with people positioned to provide useful feedback (and being ignored each and every single time).

While I may not be Eliezer Yudkowsky, the evidence indicates that I'm at least worth casually listening to; yet I have had no luck getting even that far.

I left a cushy job in Asia because I wanted to work toward making the world a better place, and I'm not content simply giving money to other people to do so on my behalf. I have a lot of talent and energy which could be going towards that end; for whatever reason, the existing channels have proven to be dead ends for me.

But even if the above were not the case, there is an extraordinary amount of technical talent in the Front Range which could be going toward more future-conscious work. Most of these people probably haven't heard of LW or don't care much about it (as evinced by the moribund LW meetup in Boulder and the very, very small one in Denver), but they might take notice if there were a futurist institution within driving distance.

Approaching from the other side, I've advertised futurist-themed talks on LW numerous times and gotten, like, three people to attend.

I'll continue donating to CFAR/MIRI because they're doing valuable work, but I also want to work on this stuff directly, and I haven't been able to do that with existing structures.

So I'm going to build my own. If you have any useful advice for that endeavor, I'd be happy to hear it.

Comment author: John_Maxwell_IV 09 October 2017 12:43:07AM 1 point [-]

Maybe your mistake was to write a book about your experience of self-study instead of making a series of LW posts. Nate Soares took the latter approach, and he is now the executive director of MIRI :P

Comment author: Elo 21 September 2017 06:22:25PM 3 points [-]

As a heads up - my email was in spam.

In response to comment by Elo on LW 2.0 Open Beta Live
Comment author: John_Maxwell_IV 28 September 2017 03:22:53AM 2 points [-]

If this happens, be sure to mark "not spam" so your email provider (Gmail/Yahoo/etc.) will count that as a point of positive reputation for the lesserwrong.com domain.

(For the team behind lesserwrong, it might be wise to send emails from lesswrong.com for the time being, since lesswrong.com presumably already has a good domain reputation. Feel free to talk to me if you have more questions -- I used to work in email marketing.)
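To give a rough sense of what "domain reputation" hangs on: mailbox providers weigh engagement signals like "not spam" clicks alongside the authentication records (SPF, DKIM) a sending domain publishes in DNS. Below is a minimal sketch of checking a domain's SPF record, assuming Python with the third-party dnspython package; the domain and output are purely illustrative, not a claim about lesswrong.com's actual DNS.

```python
# Illustrative only: look up a domain's published SPF policy, one of the
# DNS records mailbox providers consult when scoring a sender.
# Assumes dnspython >= 2.0 (older versions use dns.resolver.query
# instead of dns.resolver.resolve).
import dns.resolver


def get_spf_record(domain):
    """Return the SPF TXT record for `domain`, or None if none is published."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode("utf-8", errors="replace")
        if text.lower().startswith("v=spf1"):
            return text
    return None


if __name__ == "__main__":
    # Hypothetical usage; prints something like "v=spf1 include:... ~all"
    print(get_spf_record("lesswrong.com"))
```

None of this substitutes for warming the domain up with engaged recipients, which is why the "not spam" clicks matter.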

Comment author: IlyaShpitser 17 September 2017 03:05:01PM 2 points [-]

It's not PageRank that worked, it's anti-induction that worked. PageRank stopped working as soon as it faced resistance.

Comment author: John_Maxwell_IV 18 September 2017 07:54:42AM 0 points [-]

You really are a "glass half empty" kind of guy, aren't you?

Comment author: IlyaShpitser 17 September 2017 02:05:36AM *  0 points [-]

Vaniver, I sympathize with the desire to automate figuring out who the experts are via point systems, but consider that even in academia (with a built-in citation pagerank), people still rely on names. That's evidence that pagerank systems aren't great on their own. People game the hell out of citations.
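For concreteness, here is a minimal sketch (Python/numpy, purely illustrative, not anyone's actual ranking system) of the kind of citation pagerank being discussed, and of why it is gameable: a node's score is driven by in-links, so manufacturing in-links inflates the score.

```python
# Minimal power-iteration PageRank over a citation graph -- a sketch of a
# "point system", not a real implementation.
# adj[i, j] = 1 means node i cites (links to) node j.
import numpy as np


def pagerank(adj, damping=0.85, iters=100):
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1        # avoid divide-by-zero for dangling nodes
    transition = adj / out_degree          # row-stochastic citation matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (rank @ transition)
    return rank


# Three honest nodes that all cite each other.
honest = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
], dtype=float)

# Two sock-puppet nodes (3 and 4) that do nothing but cite node 0.
gamed = np.zeros((5, 5))
gamed[:3, :3] = honest
gamed[3, 0] = 1
gamed[4, 0] = 1

print(pagerank(honest))  # all three honest nodes tie
print(pagerank(gamed))   # node 0 now outranks nodes 1 and 2
```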


You should probably weigh my opinion of rationality stuff quite low; I am neither a practitioner nor a historian of rationality. I have gotten gradually more pessimistic about the whole project.

Comment author: John_Maxwell_IV 18 September 2017 06:19:19AM 0 points [-]

"People game the hell out of citations."

Is there anyone who makes it their business to guard against this?

Comment author: IlyaShpitser 17 September 2017 03:33:17PM *  3 points [-]

"Maybe I'm just not very good at doing literature searches. I did a search on Google Scholar for 'reddit karma' and found only one paper which focuses on reddit karma."

You can't do lit searches with Google. Here's one paper with a bunch of references on attacks on reputation systems, and on reputation systems more generally:

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36757.pdf

You are right that lots of folks outside of academia do research on this, in particular game companies (due to toxic players in multiplayer games). This is far from a solved problem -- Valve, Riot and Blizzard spend an enormous amount of effort on reputation systems.


"I don't see why the existence of such research is a compelling reason to not spend 5 minutes thinking about the question from first principles on my own."

I don't think there is a way to write this in a way that doesn't sound mean: because you are an amateur. Imo, the best way for amateurs to proceed is to (a) trust experts, (b) read expert stuff, and (c) mostly not talk. Chances are, your 5-minute thoughts on the matter are only adding noise to the discussion. In principle, taking expert consensus as the prior is a part of rationality. In practice, people ignore this part because it is not a practice that is fun to follow. It's much more fun to talk than to read papers.

LW's love affair with amateurism is one of the things I hate most about its culture.


My favorite episode in the history of science is how science "forgot" what the cure for scurvy was. In order for human civilization not to forget things, we need to be better about (a), (b), and (c) above.

Comment author: John_Maxwell_IV 18 September 2017 06:05:16AM 5 points [-]

I appreciate the literature pointer.

"taking expert consensus as the prior"

What expert consensus are you referring to? I see an unsolved engineering problem, not an expert consensus.


My view of amateurism has been formed, in large part, from reading experts on the topic:

The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you'll probably see problems that software could solve. In fact, you're doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don't even know what the status quo is to take it for granted.

Paul Graham

Introspection, and an examination of history and of reports of those who have done great work, all seem to show typically the pattern of creativity is as follows. There is first the recognition of the problem in some dim sense. This is followed by a longer or shorter period of refinement of the problem. Do not be too hasty at this stage, as you are likely to put the problem in the conventional form and find only the conventional solution.

Richard Hamming

Synthesize new ideas constantly. Never read passively. Annotate, model, think, and synthesize while you read, even when you’re reading what you conceive to be introductory stuff.

Edward Boyden

This past summer I was working at a startup that does predictive maintenance for internet-connected devices. The CEO has a PhD from Oxford and did his postdoc at Stanford, so probably not an amateur. But working over the summer, I was able to provide a different perspective on the problems that the company had been thinking about for over a year, and a big part of the company's proposed software stack ended up getting re-envisioned and written from scratch, largely due to my input. So I don't think it's ridiculous for me to wonder whether I'd be able to make a similar contribution at Valve/Riot/Blizzard.

The main reason I was able to contribute as much as I did was because I had the gumption to consider the possibility that the company's existing plans weren't very good. Basically by going in the exact opposite direction of your "amateurs should stay humble" advice.

Here are some more things I believe:

  • If you're solving a problem that is similar to a problem that has already been solved, but is not an exact match, sometimes it takes as much effort to re-work an existing solution as to create a new solution from scratch.

  • Noise is a matter of place. A comment that is brilliant by the standards of Yahoo Answers might justifiably be downvoted on Less Wrong. It doesn't make sense to ask that people writing comments on LW try to reach the standard of published academic work.

  • In computer science, industry is often "ahead" of academia in the sense that important algorithms get discovered in industry first, then academics discover them later and publish their results.

Interested to learn more about your perspective.

Comment author: IlyaShpitser 17 September 2017 04:34:02PM *  0 points [-]

"aggregating lots of individual estimates of quality sure can help discover the quality."

I guess we fundamentally disagree. Lots of people with no clue about something aren't going to magically transform into a method for discerning clue, regardless of the aggregation method -- garbage in, garbage out. For example: aggregating learners in machine learning can work, but it requires strong conditions.
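To illustrate the "strong conditions" point with a toy example (a rough sketch in Python/numpy, not a claim about any particular karma or voting system): majority vote over independent voters only sharpens the signal when each voter is individually better than chance; with clueless voters the aggregate stays garbage.

```python
# Toy simulation: how often does a simple majority of independent voters
# get a binary judgment right, as a function of per-voter accuracy?
import numpy as np

rng = np.random.default_rng(0)


def majority_accuracy(per_voter_accuracy, n_voters, n_trials=10_000):
    """Fraction of trials in which a strict majority of voters is correct."""
    correct_votes = rng.random((n_trials, n_voters)) < per_voter_accuracy
    return (correct_votes.sum(axis=1) > n_voters / 2).mean()


for p in (0.5, 0.6, 0.7):
    print(p, [round(majority_accuracy(p, n), 3) for n in (1, 11, 101)])
# p = 0.5 stays around 0.5 no matter how many voters are aggregated;
# p = 0.6 and 0.7 climb toward 1.0 as the electorate grows (Condorcet's
# jury theorem), but only because the voters are independent and
# individually better than chance.
```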

Comment author: John_Maxwell_IV 18 September 2017 05:10:07AM 2 points [-]

Do you disagree with Kaj that higher-voted comments are consistently more insightful and interesting than low-voted ones?

It sounds like you are making a different point: that no voting system is a substitute for having a smart, well-informed userbase. While that is true, that is also not really the problem that a voting system is trying to solve.

Comment author: richardbatty 17 September 2017 06:55:47PM 6 points [-]

You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside of the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building lesswrong 2.0 for the current rationality community, rather than thinking about what kinds of people you want contributing to it and learning from it, and then building it for them. So it seems important to do some user interviews with people outside of the community whom you'd like to join it.

On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I'm concerned that less wrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This then gives us two types of weirdness: weirdness that comes out of the intellectual content of the community is important to keep - ideas such as existential risk fit in here. Weirdness that comes more out of the cultural community seems unnecessary - such as references to HPMOR.

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I'd like to see lesswrong 2.0 be more like this, i.e. an intellectual community rather than a subculture.

Comment author: John_Maxwell_IV 18 September 2017 05:05:16AM *  4 points [-]

"We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds."

I'm not persuaded that this is substantially more true of scientists than of people in the LW community.

Notably, the range of different kinds of expertise that one finds on LW is much broader than that of a typical academic department (see "Profession" section here).

"They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture."

I don't think people usually become scientists unless they like the culture of academic science.

"I'd like to see lesswrong 2.0 be more like this, i.e. an intellectual community rather than a subculture."

I think "intellectual communities" are just a high-status kind of subculture. "Be more high status" is usually not useful advice.

I think it might make sense to see academic science as a culture that's optimized for receiving grant money. Insofar as it is bland and respectable, that could be why.

If you feel that receiving grant money and accumulating prestige is the most important thing, then you probably also don't endorse spending a lot of time on internet fora. Internet fora have basically never been a good way to do either of those things.

Comment author: Elo 16 September 2017 07:50:13AM 1 point [-]

Yes, it will probably cause people to devalue the site. If you pay a dollar, it will tend to "feel like" the entire endeavour is worth a dollar.

Comment author: John_Maxwell_IV 17 September 2017 02:06:48AM 1 point [-]

I was talking about paying people to contribute, not having people pay for membership.
