In case you missed them, I recently published:
http://lesswrong.com/lw/np5/adversity_to_success/
http://lesswrong.com/lw/nsf/should_you_change_where_you_live_also_a_worked/
http://lesswrong.com/lw/nsn/the_problem_tm_analyse_a_conversation/
These posts collect bot-voted down-spam from our resident troll Eugine (who has never really said why he bothers), which pushes them off the discussion list. Spending time solving this problem is less important to me than posting more, so it might be around for a while. I am sorry if anyone missed these posts, but troll's gonna troll.
Feel free to check them out! LW needs content, and I'm trying to solve that problem right now while ignoring the downvoting, because I know you love me and would never actually downvote my writing (or you would tell me why, as everyone else already does when they have a problem).
Wow, that is a lot of downvotes on neutral-to-good comments. Your posts aren't great, but they don't seem like -10 territory, either.
I thought we had something for this now?
Nope, no current solution. I have no doubt what the cause is, because real LessWrongers write comments with their downvotes.
I've written an essay criticizing the claim that computational complexity means a Singularity is impossible because of bad asymptotics: http://www.gwern.net/Complexity%20vs%20AI
Thanks for helping me out in some tough times, LessWrong crew. Please keep supporting one another and being positive rather than negative.
X-risk prevention groups are disproportionately concentrated in San Francisco and around London, more so than the possible sources of risk themselves. So in the event of a devastating earthquake in SF, our ability to prevent x-risks could be greatly reduced.
I slightly edited the opening formula to reflect the fact that one can no longer post to Main. I also added the instruction to unflag the submission options.
Just a note that someone went through and apparently downvoted as many of my comments as they could, without regard for content.
LW spoilt me. I now watch The Mickey Mouse Club as a story of an all-powerful, all-knowing AI and knowledge sent back from the future...
Who are the moderators here, again? I don't see where to find that information. It's not on the sidebar or About page, and search doesn't yield anything for 'moderator'.
In a comment on my map of biases in x-risk research I got a suggestion to add collective biases, so I made a preliminary list and would welcome any suggestions:
"Some biases result from collective behaviour of group of peoples, so that each person seems to be rational but the result is irrational or suboptimal.
Well-known examples are the tragedy of the commons, the prisoner's dilemma, and other suboptimal Nash equilibria (a concrete payoff matrix follows below).
Different forms of natural selection may result in such group behaviour; for example, psychopaths may more easily reach higher status...
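For concreteness, here is the standard prisoner's dilemma payoff matrix (a textbook illustration I am adding, not part of the quoted list; entries are (row player's, column player's) payoffs, e.g. negated years in prison):

$$
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\
\hline
\text{Cooperate} & (-1,\,-1) & (-10,\,0) \\
\text{Defect} & (0,\,-10) & (-5,\,-5)
\end{array}
$$

Defect strictly dominates Cooperate for each player, so (Defect, Defect) is the unique Nash equilibrium, even though both players would be better off at (Cooperate, Cooperate): each person is individually rational, yet the collective outcome is suboptimal.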
What are rationalist presumptions?
I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).
Epistemic rationality
I suppose we do presume things, like that we are not dreaming / under a global and permanent illusion by a demon / a brain in a vat / in a Truman show / in a matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalis...
Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).
Rationalists often fail to compartmentalize, even when it would be highly useful.
Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot; also, just knowing about a bias is not enough to unbias you).
Rationalists don't even lift, bro.
Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)
Rationalists often presume that others are being stupidly irrational when really the other people just have significantly different values, operate largely in domains where there aren't strong reinforcement mechanisms for systematic thought, or are stuck in a local maximum in an area where crossing a chasm is very costly.
The mainstream LW idea seems to be that the right to life is based on sentience.
At the same time, killing babies is the go-to example of something awful.
Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?
Does anyone have any reasoning on abortion besides "not a sentient being, so killing it is okay, QED" (wouldn't that apply to newborns, too)?
How would you write a better "Probability theory, the logic of science"?
Brainstorming a bit:
accounting for the corrections and re-derivations of Cox's theorem
more elementary and intermediate exercises
regrouping and expanding the sections on methods "from problem formulation to prior": uniform, Laplace, group invariance, maxent and its evolutions (MLM and minxent), Solomonoff (a sketch of the maxent recipe follows this list)
regrouping and reducing all the "orthodox statistics is shit" sections
a chapter about anthropics
a chapter about Bayesian network and causality, that flows into...
an introduction to machine learning
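As a sample of what the "problem formulation to prior" section could cover, here is a minimal sketch of the maxent recipe, in notation I am supplying (the constraint functions $f_k$ and expectation values $F_k$ are placeholders, not from the original comment). Maximize the entropy of the distribution subject to normalization and the known expectations:

$$
\max_{p}\; H(p) = -\sum_i p_i \ln p_i
\quad \text{subject to} \quad
\sum_i p_i = 1, \qquad \sum_i p_i\, f_k(x_i) = F_k ,
$$

and Lagrange multipliers $\lambda_k$ give the exponential-family solution

$$
p_i = \frac{1}{Z(\lambda)} \exp\!\Bigl(-\sum_k \lambda_k f_k(x_i)\Bigr),
\qquad
Z(\lambda) = \sum_i \exp\!\Bigl(-\sum_k \lambda_k f_k(x_i)\Bigr).
$$

The uniform prior (Laplace's principle of indifference) drops out as the no-constraint special case, which is one reason it makes sense to group these methods in a single section.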
If you are not a Boltzmann Brain, then sentience produced by evolution or simulation is likely more common than sentience produced by random quantum fluctuations.
If sentience produced by evolution or simulation is more common than sentience produced by random quantum fluctuations, and given an enormous universe available as simulation or alien resources, then the amount of sentient aliens or simulations is high.
Therefore, P(Sentient Aliens or Simulation) and P(You are a Boltzmann Brain) move in opposite directions when updated with new evidence.
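To make the "opposite directions" claim explicit, here is a minimal sketch in notation I am introducing (S = "sentient aliens or simulations exist in large numbers", B = "you are a Boltzmann brain"), using the law of total probability:

$$
P(S) = P(S \mid B)\,P(B) + P(S \mid \lnot B)\,\bigl(1 - P(B)\bigr).
$$

The premises above say $P(S \mid \lnot B) > P(S \mid B)$, so, holding the conditionals fixed, $dP(S)/dP(B) = P(S \mid B) - P(S \mid \lnot B) < 0$: any update that raises $P(B)$ lowers $P(S)$, and vice versa.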
Very close, but not quite. (Or, at least not quite my understanding. I haven’t dug too deep.)
A reply to Presuppositionalism
Rationalists presume Occam's razor because it proves itself; rationalists presume induction because it proves itself; etc.
I wouldn't say that we should presume anything because it proves itself. Emotionally, we may have a general impulse to accept things because of evidence, and so it is natural to accept induction using inductive reasoning. That's likely why the vast majority of people actually accept some form of induction. However, this is not self-consistent, according to Löb's theorem. We must either accept induction without being able to make a principled argument for doing so, or we must reject it, also without a principled reason.
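For reference, the theorem being invoked is standardly stated in provability logic (notation mine, not the original commenter's): if a system proves $\Box P \rightarrow P$ ("if $P$ is provable, then $P$"), then it proves $P$ outright; internalized,

$$
\vdash \Box(\Box P \rightarrow P) \rightarrow \Box P .
$$

So a consistent system cannot establish its own soundness and then lean on it: "induction works because induction says so" is exactly the kind of self-certification the theorem blocks.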
So, Presuppositionalism appears to be logically false, according to Löb's theorem.
I could leave it at that, but it's bad form to fight a straw man rather than the strongest possible form of an argument. The steel man of Presuppositionalism might instead take certain propositions as a matter of faith, and make no attempt to prove them. One might then build much more complex philosophies on top of those assumptions.
Brief detour
Before I reply to that, let me back up for a moment. I Agree Denotationally But Object Connotationally with most of the rest of what you said above. (It seems to me to be technically true, but phrased in such a way that it would be natural to draw false inferences from it.)
If I had merely posited that induction was valid, I suspect it wouldn’t have been disconcerting, even if I didn’t offer any explanation as to why we should start there and not at “I am not dreaming” or any of the examples you listed. You were happy to accept some starting place, so long as it felt reasonable. All I did was add a little rigor to the concept of a starting point.
However, by additionally pointing out the problems with asserting anything from scratch, I've weakened my own case, albeit for the larger goal of epistemic rationality. But since all useful philosophies must be based in something, they also can't prove their own validity. The falling tide lowers all ships, but doesn't change their hull draft or mast height.
So, we still can't say "the moon is made of blue cheese, because the moon is made of blue cheese". If we just assume random things to be true, eventually some of them might start to contradict one another. Even if they didn't, we'd still have made multiple random assertions when it was possible to make fewer. It's not practically possible to avoid using induction, so every practical philosophy uses it. However, adding further assertions beyond that is unnecessary.
So, I agree only denotationally when you say "The best that we can do is to get a non-contradicting collection of self-referential statement that covers the epistemology and axiology". This implies that all possible sets of starting points are equally valid, which I don't agree with. I'll concede that induction is as valid as total epistemic nihilism (the position that nothing is knowable, not to be confused with moral nihilism, which has separate problems). I can't justify accepting induction over rejecting it. However, once I accept at least 1 thing, I can use that as a basis for judging other tools and axioms.
A reply to the Presuppositionalism steel man
Let's go back to the Presuppositionalism steel man. Rather than making a self-referential statement as a proof, it merely accepted certain claims without proof. Any given Presuppositionalist must accept induction to function in the real world. If they also use that induction and accept things that induction proves, then we can claim to have a simpler philosophy. (Simpler being closer to the truth, according to Occam's razor.)
They might accept induction, but reject Occam’s razor, though. I haven’t thought through the philosophical implications of trying to reject Occam’s Razor, but at first glance it seems like it would make life impractically complicated. It doesn’t necessarily lead to being unable to conclude that one should continue breathing, since it’s always worked in the past. So, it’s not instant death, like truly rejecting induction, but I suspect that truly rejecting Occam’s razor, and completely following through with all the logical implications, would cause problems nearly as bad.
For example, overfitting might prevent drawing meaningful conclusions about how anything works, since trillions of arbitrarily complex functions can all be fit to any given data set. (For example, sums of different sine waves; see the sketch below.) It may be possible to substitute some other principle for Occam's razor to minimize this problem, but I suspect it would then be possible to compare that method against Occam's razor (well, Solomonoff induction) and demonstrate that one produced more accurate results. There may already be a proof that Solomonoff induction is the best possible set of Bayesian priors, but I honestly haven't looked into it. It may merely be the best set of priors known so far. (Either way, it's only the best assuming infinite computing power is available, so the question is more academic than practical.)
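A minimal sketch of the sine-wave point (the data, frequencies, and five-term models here are arbitrary choices of mine, assuming only numpy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five arbitrary data points to "explain".
x = np.linspace(0.1, 1.0, 5)
y = rng.normal(size=5)

# Build several interpolating models, each a sum of five sine waves
# with randomly chosen frequencies. Solving the 5x5 linear system for
# the amplitudes makes every model pass through all five points
# exactly; yet the models disagree wildly away from the data.
for trial in range(3):
    freqs = rng.uniform(1.0, 50.0, size=5)
    basis = np.sin(np.outer(x, freqs))  # basis[i, j] = sin(x[i] * freqs[j])
    amps = np.linalg.solve(basis, y)    # exact fit through all five points

    x_new = 2.0  # a point outside the observed range
    prediction = np.sin(x_new * freqs) @ amps
    print(f"model {trial}: fits all 5 points exactly, predicts f(2.0) = {prediction:+.2f}")
```

Every model reproduces the data perfectly, yet their predictions off the data diverge; without a simplicity criterion like Occam's razor there is no principled way to prefer one of them.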
General conclusions
So, it looks like this is the least bad possible philosophy, or at least quite close. It’s a shame we can’t reject epistemic nihilism, but pretty much everything else seems objectively suboptimal, even if some things may hold more aesthetic appeal or be more intuitive or easy to apply. (This is really math heavy, and almost nothing in mathematics is intuitive. So, in practice we need lots of heuristics and rules of thumb to make day to day decisions. None of this is relevant except when these more practical methods fail us, like on really fundamental questions. The claim is just that all such practical heuristics seem to work by approximating Solomonoff induction. This allows aspiring rationalists to judge potential heuristics by this measure, and predict what circumstances the heuristic will work or fail in.)
It is NOT a guarantee that we’re right about everything. It is NOT an excuse to make lots of arbitrary presuppositions in order to get the conclusions we want. Anything with any assumptions is NOT perfect, but this is just the best we have, and if we ever find something better we should switch to that and never look back.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "