Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Davidmanheim 23 May 2017 11:27:14AM 2 points [-]

I really like the idea here, but think it's important to be more careful about recommendations. There are community members (Gwern, Scott of SSC) who have done significant research on many areas discussed here, and have fantastic guides to some parts. Instead of compiling a lot of advice, perhaps you could find which things aren't covered well already, link to those that are, and try to investigate others more thoroughly.

Comment author: lifelonglearner 23 May 2017 03:01:04PM 0 points [-]

Yep! Romeo Stevens also has some very well-explained articles here on LW: this one and this one.

Comment author: lifelonglearner 22 May 2017 09:16:30PM 1 point [-]

The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three chapters on probability theory, then enters what Bayesian inference is. Unfortunately, due to mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples. This can leave the user with a so-what feeling about Bayesian inference. In fact, this was the author's own prior opinion.

Bayesian Methods for Hackers is designed as an introduction to Bayesian inference from a computational/understanding-first, and mathematics-second, point of view. Of course as an introductory book, we can only leave it at that: an introductory book. For the mathematically trained, they may cure the curiosity this text generates with other texts designed with mathematical analysis in mind. For the enthusiast with less mathematical-background, or one who is not interested in the mathematics but simply the practice of Bayesian methods, this text should be sufficient and entertaining.

Just started reading this text, and I currently find it very instructive for someone trying to get a handle on Bayesianism from a CS perspective.
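As a taste of the computation-first style the blurb describes, here's a minimal sketch that isn't from the book (the book itself uses PyMC, not shown here): approximating a coin-flip posterior on a grid of candidate bias values instead of doing the integral analytically.

```python
# Grid of candidate values for the coin's bias p.
grid = [i / 100 for i in range(101)]

# Observed data: 7 heads out of 10 flips.
heads, flips = 7, 10

# Unnormalized posterior under a uniform prior: just the binomial likelihood kernel.
weights = [p**heads * (1 - p)**(flips - heads) for p in grid]
total = sum(weights)
posterior = [w / total for w in weights]

# Posterior mean over the grid; analytically this is the Beta(8, 4) mean, 8/12.
posterior_mean = sum(p * w for p, w in zip(grid, posterior))
print(round(posterior_mean, 3))
```

Nothing artificial about the model, but the computation is just loops and sums — which is roughly the "understanding-first" pitch: you can see every step of Bayes' rule happening before reaching for MCMC.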

[Link] Probabilistic Programming and Bayesian Methods for Hackers

1 lifelonglearner 22 May 2017 09:15PM
Comment author: philh 22 May 2017 10:02:31AM 1 point [-]

I used to buy melatonin from a place called Puritan's Pride. They ship to the UK, and (shipping included) I'd pay about £15 for 720x200µg. But they've stopped selling it in 200µg; their lowest dose is now 1mg.

Does anyone know any other sites that ship low-dose melatonin to the UK and aren't loads more expensive? (If you don't know whether they ship to the UK, I don't mind checking that myself. I know amazon.com doesn't.)

https://www.vitacost.com seems to be an option; it's more expensive, but not prohibitively so.

Comment author: lifelonglearner 22 May 2017 05:37:51PM 3 points [-]

Perhaps check out Piping Rock? They ship worldwide and their prices seem pretty good. I've gotten l-theanine from there before.

Comment author: Lumifer 19 May 2017 03:06:22AM 5 points [-]


Except in a very few matches, usually with world-class performers, there is a point in every match (and in some cases it's right at the beginning) when the loser decides he's going to lose. And after that, everything he does will be aimed at providing an explanation of why he will have lost. He may throw himself at the ball (so he will be able to say he's done his best against a superior opponent). He may dispute calls (so he will be able to say he's been robbed). He may swear at himself and throw his racket (so he can say it was apparent all along he wasn't in top form). His energies go not into winning but into producing an explanation, an excuse, a justification for losing.

C. TERRY WARNER, Bonds That Make Us Free

Comment author: lifelonglearner 19 May 2017 04:09:12AM 0 points [-]

This was what I had in mind as well!

Comment author: gwern 17 May 2017 01:26:44AM 1 point [-]

RL is extremely active now and methods are improving considerably, but it's hard to keep up with research since it's spread out so much - RL stuff often shows up in places like /r/machinelearning but only intermittently as it's not the major focus. This is despite RL being one of the most important applications of the deep learning revolution and the single most relevant area to AI risk. I've been submitting most of the important papers and news for the past half-year or so, so this might be a useful place to subscribe to.

Comment author: lifelonglearner 19 May 2017 04:07:21AM 0 points [-]

Thanks for sharing this! I notice the RL subreddit doesn't have a wiki. I'm very new to ML (on week 6 of Ng's Coursera class), and I'm wondering if there are good shallow overviews of RL from the perspective you're pointing at (i.e. with regard to being a strong application for deep learning and relevance to AI risk).
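Not an answer to the overview question, but for anyone else as new to this as I am, here's the single idea most shallow RL overviews build on: the tabular Q-learning update, sketched on a made-up toy environment (a short corridor where the agent gets reward 1 for reaching the rightmost state — the environment and all numbers here are purely illustrative).

```python
import random

random.seed(0)
n_states, actions = 4, (-1, +1)      # 4-state corridor; move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
goal = n_states - 1
q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for _ in range(500):
    s = random.randrange(goal)       # exploring starts: random non-goal state
    for _step in range(100):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), goal)
        r = 1.0 if s_next == goal else 0.0
        # The core Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = r + gamma * max(q[(s_next, b)] for b in actions)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s_next
        if s == goal:
            break

greedy = [max(actions, key=lambda act: q[(s, act)]) for s in range(goal)]
print(greedy)  # learned greedy policy: move right in every non-goal state
```

Deep RL (the stuff on the subreddit) mostly replaces the `q` table with a neural network, but the update target is the same shape.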

Instrumental Rationality Sequence Update (Drive Link to Drafts)

2 lifelonglearner 19 May 2017 04:01AM

Hey all,

Following my post on my planned Instrumental Rationality sequence, I thought it'd be good to give the LW community an update on where I am.

1) Currently collecting papers on habits. Planning to go through a massive sprint of the papers tomorrow. The papers I'm using are available in the Drive folder linked below.

2) I have a publicly viewable Drive folder here of all relevant articles and drafts and things related to this project, if you're curious to see what I've been writing. Feel free to peek around everywhere, but the most relevant docs are this one which is an outline of where I want to go for the sequence and this one which is the compilation of currently sorta-decent posts in a book-like format (although it's quite short right now at only 16 pages).

Anyway, yep, that's where things are at right now.


Comment author: lifelonglearner 16 May 2017 01:35:03PM 0 points [-]

With regards to this part:

The action cannot causally affect the state, but somehow taking a1 gives us evidence that we’re in the preferable state s1. That is, P(s1|a1)>P(s1|a2) and u(a1,s1)>u(a2,s2).

I'm actually unsure if CDT theorists take this as true. If you're only looking at the causal links from your actions, P(s1|a1) and P(s1|a2) are actually unknown to you. In that case, you're deciding under uncertainty about the probabilities, so you just strive to maximize expected payoff. (I think this is roughly correct?)

I think the reason why many people think one should go to the doctor might be that while asserting P(s1|a1,K) > P(s1|a2,K), they don’t upshift the probability of being sick when they sit in the waiting room.

Does s1 refer to the state of being sick, a1 to going to the doctor, and a2 to not going to the doctor? Also, I think most people are not afraid of going to the doctor? (Unless this is from another decision theory's view)?

Comment author: Lumifer 15 May 2017 03:26:06PM 0 points [-]

The problem with basic, object-level advice is that it has to be specific to a particular person and a particular situation. There are not that many generic universal solutions applicable to all and sundry -- in most cases, "it depends".

It is useful to give people tools, but it also is useful to give people insights which will allow them to make and modify tools of their own.

Comment author: lifelonglearner 15 May 2017 03:48:15PM 0 points [-]

Sure, I think that's totally fair.

However, I do think that most insight-based posts are also fairly specific (i.e. not super generalizable).

Also, maybe I'm on the denser end here, but it took me a long time to make the connection between taking insights and finding ways to adapt them to things that would help me. I'm thinking that articles which explicitly harp on this might be useful.

Comment author: lifelonglearner 15 May 2017 03:22:33PM *  1 point [-]

Not a big suggestion, merely a cosmetic one: For future posts, putting the actual URL linked to the text like this is probably a little easier on the eyes.

In general, I think this is really good, and I like how you have summaries that go along with every link. I think this is very helpful to keep track of the conversations happening across these communities! Thank you for putting this together!

(Also, I write mindlevelup, as additional context.)
