In case you missed it:
Recently I published:
http://lesswrong.com/lw/np5/adversity_to_success/
http://lesswrong.com/lw/nsf/should_you_change_where_you_live_also_a_worked/
http://lesswrong.com/lw/nsn/the_problem_tm_analyse_a_conversation/
These posts collect bot-voted down-spam from our resident troll Eugine (who has never really said why he bothers), which pushes them off the discussion list. Spending time solving this problem is less important to me than posting more, so it might be around for a while. I am sorry if anyone missed these posts, but trolls gonna troll.
Feel free to check them out! LW needs content. I'm trying to solve that problem right now while ignoring the downvoting, because I know you love me and would never actually downvote my writing. (Or you would actually tell me why, as everyone else does when they have a problem.)
Wow, that is a lot of downvotes on neutral-to-good comments. Your posts aren't great, but they don't seem like -10 territory, either.
I thought we had something for this now?
Nope, no current solution. I have no doubt about the cause, because real lesswrongers write comments with their downvotes.
I've written an essay criticizing the claim that computational complexity means a Singularity is impossible because of bad asymptotics: http://www.gwern.net/Complexity%20vs%20AI
Thanks for helping me out in some tough times, LessWrong crew. Please keep supporting one another and being positive rather than negative.
X-risk prevention groups are disproportionately concentrated in San Francisco and around London; they are more concentrated than the possible sources of risk themselves. So in the event of a devastating earthquake in SF, our ability to prevent x-risks could be greatly reduced.
I slightly edited the opening formula to reflect the fact that one can no longer post to Main. Also added the instruction to unflag the submission options.
Just a note that someone just went through and apparently downvoted as many of my comments as they could, without regard for content.
LW spoilt me. I now watch The Mickey Mouse Club as a story of an all-powerful, all-knowing AI and knowledge sent back from the future...
Who are the moderators here, again? I don't see where to find that information. It's not on the sidebar or About page, and search doesn't yield anything for 'moderator'.
In a comment on my map of biases in x-risks research I got a suggestion to add collective biases, so I made a preliminary list and would welcome any suggestions:
"Some biases result from the collective behaviour of groups of people, such that each person seems rational but the result is irrational or suboptimal.
Well-known examples are the tragedy of the commons, the prisoner's dilemma, and other non-optimal Nash equilibria.
Different forms of natural selection may also produce such group behaviour; for example, psychopaths may more easily reach higher status...
What are rationalist presumptions?
I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/)
Epistemic rationality
I suppose we do presume things: that we are not dreaming, not under a global and permanent illusion by a demon, not a brain in a vat, not in a Truman Show, not in a matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalis...
Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).
Rationalists often fail to compartmentalize, even when it would be highly useful.
Rationalists are often overconfident (see: the SSC calibration questions) but believe they are well calibrated (bias blind spot; also, just knowing about a bias is not enough to unbias you).
Rationalists don't even lift bro.
Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)
Rationalists often presume that others are being stupidly irrational when really the other people just have significantly different values and/or operate largely in domains where there aren't strong reinforcement mechanisms for systematic thought or are stuck in a local maximum in an area where crossing a chasm is very costly.
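The overconfidence point above can be made concrete with a calibration score. One standard choice is the Brier score: the mean squared error between your stated probability and the actual outcome. This is a minimal sketch with invented example data, not anyone's actual calibration results:

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and 0/1 outcome.

    Lower is better: a well-calibrated forecaster saying 70% should be
    right about 70% of the time, which minimizes this score.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Invented data: (stated probability, actual outcome)
answers = [(0.9, 1), (0.9, 0), (0.7, 1), (0.7, 1), (0.5, 0)]
print(brier_score(answers))  # 0.25
```

A score of 0.25 is what you would get from always guessing 50%, so these invented answers show no useful calibration at all, despite the confident-sounding 90% claims.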
The mainstream LW idea seems to be that the right to life is based on sentience.
At the same time, killing babies is the go-to example of something awful.
Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?
Does anyone have any reasoning on abortion besides "not a sentient being, so killing it is okay, QED" (wouldn't that apply to newborns, too)?
How would you write a better "Probability theory, the logic of science"?
Brainstorming a bit:
accounting for the corrections and re-derivations of Cox's theorem
more elementary and intermediate exercises
regrouping and expanding the sections on methods "from problem formulation to prior": uniform, Laplace, group invariance, maxent and its evolutions (MLM and minxent), Solomonoff
regroup and reduce all the "orthodox statistics is shit" sections
a chapter about anthropics
a chapter about Bayesian networks and causality, that flows into...
an introduction to machine learning
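The "from problem formulation to prior" item above includes Laplace's method, whose classic result is the rule of succession: starting from a uniform prior on a Bernoulli parameter, the posterior mean after s successes in n trials is (s + 1) / (n + 2). A minimal sketch of that formula (the function name is my own, not from the book):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior mean of a Bernoulli parameter under a uniform prior.

    With a Beta(1, 1) (uniform) prior, the posterior after observing
    `successes` out of `trials` is Beta(s + 1, n - s + 1), whose mean
    is (s + 1) / (n + 2) -- Laplace's rule of succession.
    """
    return Fraction(successes + 1, trials + 2)

print(rule_of_succession(2, 2))   # 3/4, not certainty, after two successes
print(rule_of_succession(0, 0))   # 1/2, the uniform prior's mean
```

Note how the rule never assigns probability 0 or 1 from finite data, which is exactly the kind of behaviour Jaynes uses to contrast Bayesian estimates with naive frequency counts.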
If you are not a Boltzmann Brain, then sentience produced by evolution or simulation is likely more common than sentience produced by random quantum fluctuations.
If sentience produced by evolution or simulation is more common than sentience produced by random quantum fluctuations, and given an enormous universe available as simulation or alien resources, then the number of sentient aliens or simulations is likely high.
Therefore, P(Sentient Aliens or Simulation) and P(You are a Boltzmann Brain) move in opposite directions when updated with new evidence.
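The opposite-direction claim above falls out of Bayes' theorem whenever the two hypotheses are treated as competing, exhaustive explanations: normalization forces one posterior down as the other goes up. A toy sketch with invented numbers (the priors and likelihoods here are placeholders, not estimates anyone has defended):

```python
def posterior(priors, likelihoods):
    """Bayes' theorem over a set of mutually exclusive, exhaustive hypotheses."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

priors = {"boltzmann": 0.5, "evolved_or_simulated": 0.5}
# Hypothetical evidence (say, an orderly and persistent environment),
# assumed far more likely under the evolved/simulated hypothesis:
likelihoods = {"boltzmann": 0.01, "evolved_or_simulated": 0.99}

post = posterior(priors, likelihoods)
print(post)  # boltzmann drops toward 0.01, evolved_or_simulated rises toward 0.99
```

Because the posteriors must sum to one, any evidence that raises one hypothesis necessarily lowers the other, which is the sense in which the two probabilities "move in opposite directions."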
Well, in the normal course of life, on the object level, some things are more probable than others.
If you push me on whether I REALLY know they're true, then I admit that my reasoning and data could be confounded by a Matrix or whatever.
Maybe it's clearer like so:
Colloquially, I know how to judge relative probabilities.
Philosophically (strictly), I don't know the probability that any of my conclusions are true (because they rest on concepts I don't pretend to know are true).
About the moral values thing, it sounds kinda like you haven't read the sequence on metaethics. If not, then I'm glad to be the one to introduce you to the idea, and I can give you the broad strokes in a few sentences in a comment, but you might want to ponder the sequence if you want more.
Morality is a set of things humans care about. Each person has their own set, but as humans with a common psychology, those sets greatly overlap, creating a general morality.
But humans don't have access to our own source code. We can't see all that we care about. Figuring out the specific values, and how much to weigh them against each other, is just the old game of thought experiments, considering trade-offs, etcetera.
There is nothing that can be reduced to a one-word or one-sentence idea that sums it all up. So we don't know what all the values are or how they're weighted. You might read about "Coherent Extrapolated Volition," if you like.
Morality is not arbitrary any more than circularity is arbitrary. Both refer to a specific thing with specific qualities. If you change the qualities of the thing, that doesn't change morality or change circularity, it just means that the thing you have no longer has morality, no longer has circularity.
A great example is Alexander Wales' short story "The Last Christmas" (particularly chapter 2 and 3). See below.
The elves care about Christmas Spirit, not right and wrong, or morality, or fairness.
When it's pointed out that what they're doing isn't fair, they don't protest, they just say "We don't care. Fairness isn't part of the Christmas Spirit."
And we might say, "Santa being fat? We don't care, that's not part of morality. We don't deny that it's part of the Christmas Spirit; we just don't care that it is."
If aliens care about different things, it's not about our morality versus "their" morality. It would be about THE morality versus THE Glumpshizzle. The paper-clipper is also used as an example: it doesn't care about morality; it cares about clippiness.
The moral thing and the clippy thing to do are both fixed calculations. Once you know the answer, it's a feature of your mind if you happen to respond to morality, or clippiness, or Glumpshizzle, or Christmas Spirit.
If anybody thinks I've misunderstood part of this, please, do let me know. I've tried to understand, and would like to correct any mistakes if I have them.
“You wouldn’t even make any arguments for why you should live?” asked Charles.
“My life is meaningless in the face of the Christmas spirit,” said Matilda.
“But if it didn’t matter to the Christmas spirit,” said Charles, “If I just wanted to see you die for fun?”
“Allowing you to satisfy your desires is part of maintaining the Christmas spirit, Santa,”
“It’s unfair,” said Charles.
“Life is unfair,” said Matilda.
“Does it have to be?” asked Charles. “Is that the Christmas spirit?”
“I don’t know,” said Matilda. “Fairness doesn’t enter into it, I don’t think. Why should Christmas be fair if life isn’t fair?”
Colloquially, I know how to judge relative probabilities.
Philosophically (strictly), I don't know the probability that any of my conclusions are true (because they rest on concepts I don't pretend to know are true).
Again, my point is that to do justice to philosophical doubt, you need to avoid high probabilities in practical reasoning, à la Taleb. But not everyone gets that. A lot of people think that using probability alone is sufficient.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "