In case you missed them, I recently published:
http://lesswrong.com/lw/np5/adversity_to_success/
http://lesswrong.com/lw/nsf/should_you_change_where_you_live_also_a_worked/
http://lesswrong.com/lw/nsn/the_problem_tm_analyse_a_conversation/
These posts collect bot-voted down-spam from our resident troll Eugine (who has never really said why he bothers), which pushes them off the discussion list. Spending time solving this problem is less important to me than posting more, so it might be around for a while. I am sorry if anyone missed these posts, but trolls gonna troll.
Feel free to check them out! LW needs content, and I am trying to solve that problem right now while ignoring the downvoting, because I know you love me and would never actually downvote my writing. (Or you would actually tell me why, as everyone else already does when they have a problem.)
Wow, that is a lot of downvotes on neutral-to-good comments. Your posts aren't great, but they don't seem like -10 territory, either.
I thought we had something for this now?
Nope, no current solution. I have no doubt about the cause, because real LessWrongers leave comments with their downvotes.
I've written an essay criticizing the claim that computational complexity means a Singularity is impossible because of bad asymptotics: http://www.gwern.net/Complexity%20vs%20AI
Thanks for helping me out in some tough times, LessWrong crew. Please keep supporting one another and being positive rather than negative.
X-risk prevention groups are disproportionately concentrated in San Francisco and around London, more concentrated than the possible sources of risk themselves. So in the event of a devastating earthquake in SF, our ability to prevent x-risks may be greatly reduced.
I slightly edited the opening formula to reflect the fact that one can no longer post to Main. I also added the instruction to unflag the submission options.
Just a note that someone went through and apparently downvoted as many of my comments as they could, without regard for content.
LW spoilt me. I now watch The Mickey Mouse Club as a story of an all-powerful, all-knowing AI and knowledge sent back from the future...
Who are the moderators here, again? I don't see where to find that information. It's not on the sidebar or About page, and search doesn't yield anything for 'moderator'.
In a comment on my map of biases in x-risk research I got a suggestion to add collective biases, so I tried to make a preliminary list and would welcome any suggestions:
"Some biases result from the collective behaviour of groups of people, such that each person seems to be rational but the result is irrational or suboptimal.
Well-known examples are the tragedy of the commons, the prisoner's dilemma (illustrated below), and other suboptimal Nash equilibria.
Different forms of natural selection may result in such group behaviour; for example, psychopaths may more easily reach higher status...
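To make the prisoner's dilemma point concrete, here is the standard payoff matrix (my illustration, not part of the quoted list; the specific numbers are conventional textbook values):

% Payoffs are (years in prison for Row, years for Column); lower is better.
\begin{array}{c|cc}
                 & \text{Cooperate} & \text{Defect} \\ \hline
\text{Cooperate} & (1,1)            & (3,0)         \\
\text{Defect}    & (0,3)            & (2,2)
\end{array}
% Defecting is individually rational for each player (0 < 1 and 2 < 3),
% so the unique Nash equilibrium is (Defect, Defect) at (2,2), even though
% mutual cooperation at (1,1) is better for both: each person's choice is
% rational, yet the collective result is suboptimal.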
What are rationalist presumptions?
I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).
Epistemic rationality
I suppose we do presume things: that we are not dreaming, not under a global and permanent illusion by a demon, not a brain in a vat, not in a Truman Show, not in a Matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalis...
Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).
Rationalists often fail to compartmentalize, even when it would be highly useful.
Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot; also, just knowing about a bias is not enough to unbias you).
Rationalists don't even lift, bro.
Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)
Rationalists often presume that others are being stupidly irrational, when really the other people just have significantly different values, and/or operate largely in domains where there aren't strong reinforcement mechanisms for systematic thought, or are stuck in a local maximum in an area where crossing a chasm is very costly.
The mainstream LW idea seems to be that the right to life is based on sentience.
At the same time, killing babies is the go-to example of something awful.
Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?
Does anyone have any reasoning on abortion besides "not a sentient being, so killing it is okay, QED" (and wouldn't that apply to newborns, too?)?
How would you write a better "Probability theory, the logic of science"?
Brainstorming a bit:
- accounting for the corrections and rederivations of Cox's theorem
- more elementary and intermediate exercises
- regrouping and expanding the sections on methods "from problem formulation to prior": uniform, Laplace, group invariance, maxent and its evolutions (MLM and minxent), Solomonoff
- regrouping and reducing all the "orthodox statistics is shit" sections
- a chapter about anthropics
- a chapter about Bayesian networks and causality, that flows into...
- an introduction to machine learning
If you are not a Boltzmann Brain, then sentience produced by evolution or simulation is likely more common than sentience produced by random quantum fluctuations.
If sentience produced by evolution or simulation is more common than sentience produced by random quantum fluctuations, and given an enormous universe available as simulation or alien resources, then the number of sentient aliens or simulations is high.
Therefore, P(Sentient Aliens or Simulation) and P(You are a Boltzmann Brain) move in opposite directions when updated with new evidence. As...
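A minimal way to formalize the "opposite directions" claim (my own sketch, not from the comment above; it assumes "you are a Boltzmann Brain" (B) and "your sentience was produced by evolution or simulation" (E) are exclusive and jointly exhaustive hypotheses):

% Assumption (for this sketch): B and E partition the possibilities, so
% P(B) + P(E) = 1, and likewise P(B|D) + P(E|D) = 1 after any evidence D.
P(B \mid D) = \frac{P(D \mid B)\,P(B)}{P(D)}, \qquad
P(E \mid D) = \frac{P(D \mid E)\,P(E)}{P(D)}
% Since P(D) = P(D|B)P(B) + P(D|E)P(E) lies between P(D|B) and P(D|E),
% any evidence D with P(D|E) > P(D|B) raises P(E|D) above P(E); and
% because the posteriors still sum to 1, P(B|D) falls below P(B).
% The two credences therefore move in opposite directions, as claimed.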
About the moral values thing, it sounds kinda like you haven't read the sequence on metaethics.
More a case of read but not believed.
Morality is a set of things humans care about. Each person has their own set, but as humans with a common psychology, those sets greatly overlap, creating a general morality.
That isn't saying anything cogent. If moral values are some specific subset of human values, you haven't said what the criterion for inclusion in that subset is. On the other hand, if you are saying all human values are moral values, that is incredible:
Human values can conflict.
Morality is a decision theory, it tells you what you should do.
A ragbag of conflicting values cannot be used to make a definitive decision.
Therefore morality is not a ragbag of conflicting values.
Perhaps you think CEV solves the problem of value conflict. But if human morality is broadly defined, then the CEV process will be doing almost all the lifting, and CEV is almost entirely unspecified. On the other hand, if you narrow down the specification of human values, you increase the amount of arbitrariness.
Morality is not arbitrary any more than circularity is arbitrary. Both refer to a specific thing with specific qualities. If you change the qualities of the thing, that doesn't change morality or change circularity, it just means that the thing you have no longer has morality, no longer has circularity.
Your theory of morality is arbitrary because you are not explaining why only human (twenty-first century? Western?) values count as morality. Rather, you are using "morality" as something like a place name or personal name. No reason need be given why Istanbul is Istanbul; that's just a label someone put on an area of Earth's surface.
But morality cannot be a matter of arbitrary labeling, because it is about having a principled reason why you should do one thing and not another... and no such reason could be founded on an arbitrary naming ceremony, any more than everyone should obey me just because I dub myself the King of the World! To show that human values are morality, you have to show that they should be followed, which you don't do just by calling them morality. That doesn't remove the arbitrariness in the right way.
Because the map is not the territory, normative force does not come from labels or naming ceremonies. You can't change what is by relabelling it, and you can't change what ought to be that way either.
Note how we have different rules for proper names and meaningful terms. You can name things as you wish, because nothing follows from it; names are labels, not contentful terms. You can make inferences from contentful terms, but you should apply them carefully, since argument from tendentiously applied terms is a common form of bad argument. Follow the rules and you have no causal series going from map to territory. Choose one from column A and one from column B, and you do.
Morality is a fixed equation.
What you are describing isn't fixed in the expected sense of being derivable from first principles.
If aliens care about different things, it's not about our morality versus "their" morality. It would be about THE morality versus THE Glumpshizzle. The paper-clipper is also used as an example: it doesn't care about morality; it cares about clippiness.
How does that pan out in practice? If (1) humans have the one true morality, then we should apply it, and even force it on others. If (2) morality is just a set of arbitrary values, there is little reason humans should follow it, and even less justification for imposing it.
These are contradictory ideas, yet you are asserting both of them!
BTW, denial of your claim that morality is a unique but arbitrary thing doesn't entail believing that Clipping is morality. You can have N things that are morality, according to some criteria, without Clipping being amongst them.
Moreover, alternative theories don't have to disclaim any connection between morality and human values.
[Disclaimer: My ethics and metaethics are not necessarily the same as those of Bound_up; in fact I think they are not. More below.]
Human values can conflict. Morality [...] tells you what you should do. A ragbag of conflicting values cannot be used to make a definitive decision. Therefore morality is not a ragbag of conflicting values.
I think this argument, in order to work, needs some further premise to the effect that a decision only counts as "definitive" if it is universal, if in some suitable sense everyone would/should arrive at the sam...
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "