In case you missed them, I recently published:
http://lesswrong.com/lw/np5/adversity_to_success/
http://lesswrong.com/lw/nsf/should_you_change_where_you_live_also_a_worked/
http://lesswrong.com/lw/nsn/the_problem_tm_analyse_a_conversation/
These posts collect bot-voted down-spam from our resident troll Eugine (who has never really said why he bothers), which pushes them off the discussion list. Spending time solving this problem is less important to me than posting more, so it might be around for a while. I am sorry if anyone missed these posts. But troll's gonna troll.
Feel free to check them out! LW needs content. I'm trying to solve that problem right now while ignoring the downvoting, because I know you love me and would never actually downvote my writing (or you would actually tell me why, as everyone else already does when they have a problem).
Wow, that is a lot of downvotes on neutral-to-good comments. Your posts aren't great, but they don't seem like -10 territory, either.
I thought we had something for this now?
Nope, no current solution. I have no doubt what the cause is, because real LessWrongers write comments with their downvotes.
I've written an essay criticizing the claim that computational complexity means a Singularity is impossible because of bad asymptotics: http://www.gwern.net/Complexity%20vs%20AI
Thanks for helping me out in some tough times, LessWrong crew. Please keep supporting one another and being positive rather than negative.
X-risk prevention groups are disproportionately concentrated in San Francisco and around London; they are more concentrated than the possible sources of risk themselves. So in the event of a devastating earthquake in SF, our ability to prevent x-risks may be greatly reduced.
I slightly edited the opening formula to reflect the fact that one can no longer post to Main. I also added the instruction to unflag the submission options.
Just a note that someone went through and apparently downvoted as many of my comments as they could, without regard for content.
LW spoilt me. I now watch The Mickey Mouse Club as a story of an all-powerful, all-knowing AI and knowledge sent back from the future...
Who are the moderators here, again? I don't see where to find that information. It's not on the sidebar or About page, and search doesn't yield anything for 'moderator'.
In a comment on my map of biases in x-risk research, I got a suggestion to add collective biases, so I tried to make a preliminary list and would welcome any suggestions:
"Some biases result from the collective behaviour of groups of people, such that each person seems to be rational but the result is irrational or suboptimal.
Well-known examples are the tragedy of the commons, the prisoner's dilemma, and other non-optimal Nash equilibria.
Different forms of natural selection may result in such group behaviour; for example, psychopaths may more easily reach higher status...
What are rationalist presumptions?
I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).
Epistemic rationality
I suppose we do presume things, like that we are not dreaming / under a global and permanent illusion by a demon / a brain in a vat / in a Truman show / in a matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalis...
Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).
Rationalists often fail to compartmentalize, even when it would be highly useful.
Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot; also, just knowing about a bias is not enough to unbias you).
Rationalists don't even lift, bro.
Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power-leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)
Rationalists often presume that others are being stupidly irrational when really the other people just have significantly different values, and/or operate largely in domains where there aren't strong reinforcement mechanisms for systematic thought, or are stuck in a local maximum in an area where crossing a chasm is very costly.
The mainstream LW idea seems to be that the right to life is based on sentience.
At the same time, killing babies is the go-to example of something awful.
Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?
Does anyone have any reasoning on abortion besides "not a sentient being, so killing it is okay, QED" (wouldn't that apply to newborns, too)?
How would you write a better "Probability Theory: The Logic of Science"?
Brainstorming a bit:
accounting for the corrections and rederivations of Cox's theorem
more elementary and intermediate exercises
regrouping and expanding the sections on methods "from problem formulation to prior": uniform, Laplace, group invariance, maxent and its evolutions (MLM and minxent), Solomonoff (a maxent sketch follows this list)
regrouping and reducing all the "orthodox statistics is shit" sections
a chapter about anthropics
a chapter about Bayesian networks and causality, which flows into...
an introduction to machine learning
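For instance, the maxent entry could lead with the textbook derivation sketch: maximizing entropy subject to expectation constraints yields an exponential-family distribution. This is standard material rather than anything from this thread, shown only as the kind of worked step such a chapter might include:

```latex
% Maximize H(p) = -\int p(x)\,\log p(x)\,dx subject to normalization
% and the moment constraints \int p(x)\,f_k(x)\,dx = F_k.
% Lagrange multipliers \lambda_k give the exponential family:
\[
  p(x) = \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big),
  \qquad
  Z(\lambda) = \int \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big)\,dx,
\]
% where each \lambda_k is chosen so that the constraint F_k is satisfied.
```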
If you are not a Boltzmann Brain, then sentience produced by evolution or simulation is likely more common than sentience produced by random quantum fluctuations.
If sentience produced by evolution or simulation is more common than sentience produced by random quantum fluctuations, and given an enormous universe available as simulation or alien resources, then the number of sentient aliens or simulations is high.
Therefore, P(Sentient Aliens or Simulation) and P(You are a Boltzmann Brain) move in opposite directions when updated with new evidence.
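A minimal numerical sketch of that inverse relationship (the hypotheses and numbers here are invented purely for illustration): if "Boltzmann Brain" and "evolved or simulated" are treated as exclusive, exhaustive origins of your sentience, then any evidence that raises the posterior of one must lower the other.

```python
# Hypothetical Bayes update over two exclusive, exhaustive hypotheses:
#   BB = sentience from a random quantum fluctuation (Boltzmann Brain)
#   ES = sentience from evolution or simulation
prior = {"BB": 0.5, "ES": 0.5}  # illustrative priors, not real estimates

# Likelihood of an observation E (say, a long coherent lawful history)
# under each hypothesis -- again, made-up numbers.
likelihood = {"BB": 0.01, "ES": 0.9}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # {'BB': 0.0109..., 'ES': 0.989...}
# P(ES) went up, so P(BB) went down: they move in opposite directions.
```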
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "