If you want people to ask you stuff, reply to this post with a comment to that effect.
More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.
If you want to talk about this post, you can reply to my comment below that says "Discussion of this post goes here.", or not.
If you go through my posts on LW, you can read most of the questions that I've been thinking about in the last few years. I don't think any of the problems that I raised have been solved, so I'm still attempting to answer them. To give a general idea, these include questions in philosophy of mind, philosophy of math, decision theory, normative ethics, meta-ethics, and meta-philosophy. And to give a specific example I've just been thinking about again recently: What is pain, exactly (e.g., in a mathematical or algorithmic sense), and why is it bad? For example, can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or bad just because people prefer not to be in pain?
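To make that last question concrete, here is a toy sketch of the kind of "simple decision algorithm" one might ask it about. The names and setup are purely illustrative assumptions, not anything from actual work on the problem: the agent just picks whichever action minimizes a scalar "damage" signal.

```python
# A deliberately trivial "decision algorithm" that acts to minimize a
# scalar damage signal. (Purely illustrative; all names are made up.)

def choose_action(state, actions, damage_of):
    # Pick whichever available action is predicted to minimize damage.
    return min(actions, key=lambda a: damage_of(state, a))

# Hypothetical predicted-damage table for a single situation.
damage = {
    ("hand_on_stove", "keep_hand_there"): 1.0,
    ("hand_on_stove", "withdraw_hand"): 0.0,
}

action = choose_action(
    "hand_on_stove",
    ["keep_hand_there", "withdraw_hand"],
    lambda s, a: damage[(s, a)],
)
print(action)  # -> "withdraw_hand"
```

The open question is whether the damage signal here plays enough of the functional role of pain to count as pain at all, and if it does, whether that is bad in itself or only bad insofar as something prefers to avoid it.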
As a side note, I don't know if it's good from a productivity perspective to jump around amongst so many different questions. It might be better to focus on just a few, with the others in the back of one's mind. But now that I have so many unanswered questions, all of which I'm very interested in, it's hard to stay on any one of them for very long. So reader beware. :)
Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it's hard to see how that could be good for me regardless of what my values are or ought to be. I make an exception to this for people who might be in a position to build an FAI, since if they're too confident about altruism then they're likely to be too confident about many other philosophical problems as well, but even then I don't stress it too much.
I guess there is a spectrum of concern over the philosophical problems involved in building an FAI/AGI, and I'm on the far end of that spectrum. I think most people building AGI mainly want short-term benefits like profits or academic fame, and do not care as much about the far reaches of time and space, in which case they'd naturally focus more on the immediate engineering issues.
As for people working on FAI, I guess they either have not thought as much about philosophical problems as I have, and therefore don't have a strong sense of how difficult those problems are, or they are simply overconfident about their solutions. For example, when I started in 1997 to think about certain seemingly minor problems about how minds that can be copied should handle probabilities (within a seemingly well-founded Bayesian philosophy), I certainly didn't foresee how difficult those problems would turn out to be. This and other similar experiences made me revise upward my estimate of how difficult philosophical problems are in general.
BTW I would not describe myself as "working on FAI" since that seems to imply that I endorse the building of an FAI. I like to use "working on philosophical problems possibly relevant to FAI".
Pretty much just here. I do read a bunch of other blogs, but tend not to comment much elsewhere since I like having an archive of my writings for future reference, and it's too much trouble to do that if I distribute them over many different places. If I change my main online hangout in the future, I'll note that on my home page.
Pain isn't reliably bad, or at least some people (possibly a fairly large proportion) seek it out in some contexts. I'm including very spicy food, BDSM, deliberately reading things that make one sad and/or angry without it leading to any useful action, horror fiction, pushing one's limits for its own sake, and staying attached to losing sports teams.
I think this leads to the question of what people are trying to maximize.