Disclaimer: I don’t have any particular expertise in AI safety; I just had some thoughts and this seemed like the place to put them. The bleak outlook that Eliezer presents here and elsewhere seems in large part to be driven by the expectation that AGI will be developed before the...
I am certainly not an expert, but what I have read in the field of AI alignment focuses almost exclusively on the question of how to ensure that a general AI does not start optimizing for outcomes that humans do not want. This is, to...
Epistemic Status: I'm highly confident this is a phenomenon that occurs with a lot of advice people give, but I'm quite uncertain about the best way to deal with it when trying to give advice to more than one person. The main thing people fail to consider when giving advice...
Note: This falls under the category of self-help advice, the usefulness of which varies between individuals. This technique has noticeably improved my day-to-day feelings of well-being, but people are often very different and your mileage may vary. It is notoriously difficult to increase your...
The two systems of cognition à la Kahneman each seem to have roles that they fulfill better than the other, and in which, for optimal performance, the other system ought not to interfere. Until recently, I had only really thought about when System 2 ought to override System 1,...
Rationalists often find difficult, important challenges to work on, and they become very excited and passionate about their causes. I expect it is common (it happened to me, and I have heard others describe similar episodes) that such causes seem so important that aspiring rationalists set unreasonably...
Gendlin’s technique of Focusing primarily focuses (hehe) on problems or negative felt senses. Something I have not seen discussed much is that one can apply the concepts of Focusing to many felt senses that are not problems, or even negative in any way. At least in my experience, you...