If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
(Trigger warnings: mention of rape, harassment, and hostile criticism of Less Wrong.)
A lesson on politics as mindkiller —
There's a thread on Greta Christina's FTB blog about standards of evidence in discussions of rape and harassment. One of her arguments:
Extraordinary claims require extraordinary evidence. But claims of sexual harassment, abuse, assault, and rape are not extraordinary. They are depressingly ordinary. So the level of evidence we should need to believe a claim about sexual harassment, abuse, assault, or rape is substantially lower than the level of evidence we should need to believe a claim about, say, Bigfoot.
This is straight Bayes — since the prior for rape is higher than the prior for Bigfoot, it requires less evidence to raise our credence above 0.5 in any given case of a claimed occurrence. In the comments, one person points out the connection to Bayes, in part remarking:
“Bayesian updating” is a good method for using evidence rationally to change your mind. If someone requires extraordinary evidence to believe a depressingly common event, they are not being rational.
In response, another commenter, apparently triggered by the mention of Bayes, goes on a ...
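The Bayesian point can be made concrete with a toy calculation (the numbers below are made up purely for illustration; they are not estimates of anything):

```python
def posterior(prior, likelihood_ratio):
    """Posterior via Bayes' rule in odds form: posterior odds =
    prior odds * P(evidence | claim true) / P(evidence | claim false)."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

# The same piece of evidence (say, a first-hand report, with an assumed
# likelihood ratio of 10) applied to very different priors:
common_event = posterior(prior=0.20, likelihood_ratio=10)    # ~0.71
rare_event = posterior(prior=0.0001, likelihood_ratio=10)    # ~0.001

# The common claim clears 0.5 on that evidence alone; the rare claim
# would need far stronger evidence to get anywhere near it.
```

This is exactly the "extraordinary claims" asymmetry: the evidence threshold scales with how low the prior starts.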
I've decided to live less on the internet (a.k.a. the world's most popular superstimulus) and more in real life. I pledge to give $75 to MIRI if I make any more posts on this account or on my reddit account before October 13 (two months from now).
On a related note, I was thinking about how to solve the problem of the constant temptation to waste time on the internet. For most superstimuli, the correct action is to cut yourself off completely, but that's not really an option at all here. Even disregarding the fact that it would be devastatingly impractical in today's world, the internet is an instant connection to all the information in the world, making it incredibly useful. Ideally one would use the internet purely instrumentally - you would have an idea of what you want to do, open up the browser, do it, then close the browser.
To that end, I have an idea for a Chrome extension. You would open up the browser, and a pop-up would appear prompting you to type in your reason for using the internet today. Then, your reason would be written in big black letters at the top of the page while you're browsing, and only go away when you close Chrome. This would force you to rem...
Perhaps a stupid question, or, more accurately, not even a question - but I don't understand this attitude. If you enjoy going on the Internet, why would you want to stop? If you don't enjoy it, why would it tempt you?
Wanting is mediated by dopamine. Liking is mostly about opioids. The two are (unfortunately) not always in sync.
It reminds me, and I mean no offence by this, of the attitude addicts have towards drugs. But it really stretches plausibility to say that the Internet could be something like a drug.
It really doesn't stretch plausibility. The key feature here is "has addictive potential". It doesn't matter to the brain whether the reward is endogenous dopamine released in response to a stimulus or something that came in a pill.
There was a post on Slashdot today arguing that "Aging is a disease and we should try to defeat it or at least slow it down".
The comments are full of deathism: many people apparently sincerely coming out in favour of not just death (limited lifespan) but aging and deterioration.
Everyone who doesn't feel in their gut that many (most?) normal people truly believe aging and death are good, and will really try to stop you from curing it if they can, should go and read through all the comments there. It's good rationality training if (like me) you haven't ever discussed this in person with your friends (or if they all happened to agree). It's similar to how someone brought up by and among atheists (again, me) may not understand religion emotionally without some interaction with it.
Someone marked the appeal to worse problems article on Wikipedia for prospective deletion, for lack of sourcing - it appears to have mostly been written from the TVTropes page. I've given it its proper name and added "whataboutery" as another name for it - but it needs more, and preferably from a suitably high-quality source.
A fact about industrial organization that recently surprised me:
Antimonopoly rules prevent competitors from coordinating. One exemption in the US is political lobbying: executives can meet at their political action committee. Joint projects in some industries are organized as for-profit companies owned by (nonprofit) political action committees.
My girlfriend taught me how to dive this past weekend. I'm 26. I had fully expected to go my entire life without learning how to dive, I guess because I unconsciously thought it was "too late" to learn, somehow. Now I'm wondering what other skills I never learned at the typical age and could just as easily learn now.
(if you're looking for object-level takeaways, just start out with kneeling dives - they're way easier and far less intimidating - then gradually try standing up more and more)
Two roads diverged in a woods, and I
Stepped on the one less traveled by
Yet stopped, and pulled back with a cry
For all those other passers-by
Who had this road declined to try
Might have a cause I knew not why
What dire truths might that imply?
I feared that road might make me die.
And so with caution to comply
I wrung my hands and paced nearby
My questions finding no reply
Until a traveller passed nigh
With stronger step and focused eye
I bid the untouched road goodbye
And followed fast my new ally.
The difference made I'll never know
'Till down that other path you go.
I couldn't find a place to mention this sort of thing at the wiki, so I'm mentioning it here.
The search box should be near the top of the page.
It's one of the most valuable things on a lot of websites, especially wikis, and I don't want to have to look for it.
What are the relative merits of using one's real name vs. a pseudonym here?
When I first started reading LessWrong, I was working in an industry obsessed with maintaining very mainstream appearances, so I chose to go with a pseudonym. I have since changed industries and have no intention of going back, so my original reason for using a pseudonym is probably irrelevant now.
I continue running into obstacles (largely-but-not-exclusively of an accessibility nature) when it comes to the major crowdfunding websites. It seems not to be just me; the major platforms (Kickstarter/Indiegogo) could stand to be much more screen reader-friendly, and the need for images (and strong urging to use videos) is an obstacle to any blind person seeking funding who doesn't have easy access to sighted allies/minions.
My present thoughts are that I'd rather outsource setting up crowdfunding campaigns to someone for whom these would not be serious ob...
Here's an interesting article that argues for using (GPL-protected) open source strategies to develop strong AI, and lays out reasons why AI design and opsec should be pursued more at the modular implementation level (where mistakes can be corrected based on empirical feedback) rather than attempted at the algorithmic level. I would be curious to see MIRI's response.
I searched and it doesn't look like anyone has discussed this criticism of LW yet. It's rather condescending but might still be of interest to some: http://plover.net/~bonds/cultofbayes.html
I don't think "condescending" accurately captures what is going on here. This seems to be politics being the mindkiller pretty heavily (ironically, one of the things they apparently think is stupid or hypocritical). They've apparently taken some of the, for lack of a better term, "right-wing" posts and used them as a general portrayal of LW. Heck, I'm in many ways in the same political/tribal group as this author and think most of what they said is junk. Examples include:
...Members of Lesswrong are adept at rationalising away any threats to their privilege with a few quick "Bayesian Judo" chops. The sufferings caused by today's elites — the billions of people who are forced to endure lives of slavery, misery, poverty, famine, fear, abuse and disease for their benefit — are treated at best as an abstract problem, of slightly lesser importance than nailing down the priors of a Bayesian formula. While the theories of right-wing economists are accepted without argument, the theories of socialists, feminists, anti-racists, environmentalists, conservationists or anyone who might upset the Bayesian worldview are subjected to extended empty "rationalist"...
I'd more go with "incoherent ranting" than "condescending".
Does anyone have any opinions on this paper? [http://arxiv.org/pdf/1207.4913.pdf]
It is a proof of Bell's Inequality using counterfactual language. The idea is to explore links between counterfactual causal reasoning and quantum mechanics. Since these are both central topics on Less Wrong, I'm guessing there are people on this website who might be interested.
I don't have any background in Quantum Mechanics, so I cannot evaluate the paper myself, but I know two of the authors and have very high regard for their intelligence.
Does anybody think that there might be another common metaethical theory to go along with deontology, consequentialism, and virtue? I think it's only rarely codified, usually used implicitly or as a folk theory, in which morality consists of bettering one's own faction and defeating opposing factions, and as far as I can see it's most common in radical politics of all stripes. Is this distinguishable from very myopic consequentialism or mere selfishness?
It depends on the reasons why one considers it right to benefit one's own faction and defeat opposing ones, I guess. Or are you proposing that this is just taken as a basic premise of the moral theory? If so, I'm not sure you can justifiably attribute it to many political groups. I doubt a significant number of them want to defeat opposing factions simply because they consider that the right thing to do (irrespective of what those factions believe or do).
Also, deontology, consequentialism and virtue ethics count as object-level ethical theories, I think, not meta-ethical theories. Examples of meta-ethical theories would be intuitionism (we know what is right or wrong through some faculty of moral intuition), naturalism (moral facts reduce to natural facts) and moral skepticism (there are no moral facts).
I often write things out to make them clear in my own mind. This works particularly well for detailed planning. Just as some people "don't know what they think until they hear themselves say it", I don't know what I think until I write it down. (Fast typing is an invaluable skill.)
Sometimes I use the same approach to work out what I think, know or believe about a subject. I write a sort of evolving essay laying out what I think or know.
And now I wonder: how much of that is true for other people? For instance, when Eliezer set out to write the Seq...
Idle curiosity / possibility of post being deleted:
At one point in LessWrong's past (some time in the last year, I think), I seem to recall replying to a post regarding matters of a basilisk nature. I believe that the post I replied to was along these lines:
Given that the information has been leaked, what is the point of continuing to post discussions of this matter?
I believe my response was along the lines of:
...I hate to use silly reflective humor, but given that the information has been leaked, what is the point of censoring discussions of this matter
Problem:
Inspiration:
Proposal:
Dismiss comment button
Bob writes a comment that doesn't carry its weight. Alice, a LW reader, can choose to up-vote, down-vote, or Dismiss Bob's comment. Dismiss advise...
Researchers have found that people experiencing Nietzschean angst tend to cling to austere ethical codes, in the hopes of reorienting themselves.
That quote is from this Slate article - the article is mostly about social stigma surrounding mental illness.
The quote is plausible, in an untrustworthy common-sense kind of way. It also seems to align with my internal perspective of my moral life. Does anyone know if it is actually true? What research is out there?
EDIT: In case it isn't clear, I'm asking if anyone knows anything about the (uncited) resea...
I'm a CFAR alumnus looking to learn how to code for the very first time. When I met Luke Muehlhauser, he said that as far as skills go, coding is very good for learning quickly whether one is good at it or not. He said that Less Wrong has some resources for learning and assessing my own natural talent or skill for coding, and he told me to come here to find it.
So, where or what is this resource which will assess my own coding skills with tight feedback loops? Please and thanks.
I've set up a prediction tracking system for personal use. I'm assigning confidence levels to each prediction so I can check for areas of under- or over-confidence.
My question: If I predicted X, and my confidence in X changes, will it distort the assessment of my overall calibration curve if I make a new prediction about X at the new confidence level, keep the old prediction, and score both predictions later? Is that the "right" way to do this?
More generally, if my confidence in X fluctuates over time, does it matter at all what criterion I use ...
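One workable answer is to treat every dated (confidence, outcome) pair as its own forecast, so restating a prediction at a new confidence just adds a second data point instead of distorting the earlier bucket. Here's a minimal sketch of that bucketed calibration-curve recipe (my own sketch, not an established standard; the sample predictions are made up):

```python
from collections import defaultdict

def calibration_curve(predictions):
    """predictions: iterable of (confidence, came_true) pairs.
    Every stated probability is scored as its own forecast, so a
    prediction restated later at a new confidence simply contributes
    a second data point to a different bucket.
    Returns {confidence_bucket: (observed_frequency, n_predictions)}."""
    buckets = defaultdict(lambda: [0, 0])   # bucket -> [hits, total]
    for confidence, came_true in predictions:
        bucket = round(confidence, 1)       # group to the nearest 10%
        buckets[bucket][0] += bool(came_true)
        buckets[bucket][1] += 1
    return {b: (hits / total, total) for b, (hits, total) in buckets.items()}

preds = [
    (0.9, True), (0.9, True), (0.9, False),   # three 90% predictions
    (0.6, True), (0.6, False), (0.6, False),  # one of these restates an
]                                             # earlier claim at lower confidence
curve = calibration_curve(preds)
# curve[0.9] -> (0.666..., 3): "90%" claims came true 2 times out of 3
```

Under this scheme, how often you restate a prediction doesn't matter for calibration, only that each stated probability gets scored against the outcome.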
This has probably been mentioned before, but I didn't feel like searching the entire comment archive of Less Wrong to find discussion on it: Can functionality be programmed into the website to sort the comments on posts from the Overcoming Bias days by "Best" or at least "Top" ("New" would be nice as well!!)? Those posts are still open for commenting, and sometimes I find comments from years later more insightful. Plus, I'm sick and tired of scrolling through arguments with trolls.
And, given that this probably has been discussed before - why hasn't it been done yet?
Just a few questions for some of you:
Running simulations with sentient beings is generally accepted as bad here at LW; yes or no?
If you assign a high probability of reality being simulated, does it follow that most people with our experiences are simulated sentient beings?
I don't have an opinion yet, but I find the combination of answering yes to both questions extremely unsettling. It's like the whole universe conspires against your values. Surprisingly, each idea encountered by itself doesn't seem too bad. It's when simultaneously being agai...
I've decided to start a blog, and I kind of like the name "Tin Vulcan", but I suspect that would be bad PR. Thoughts? (I don't intend it to be themed, but I would expect most of the posts to be LW-relevant.)
(Name origins: Fbzr pbzovangvba bs "orggre guna n fgenj ihypna" naq gur Gva Jbbqzna.)
I've heard the idea of adaptive screen brightness mentioned here a few times. I know fluxgui on Linux does it, and it seems that Windows 7 and 8 come equipped. One of my computers runs Windows XP; how do I get it to lose brightness automatically during late hours?
Socks: Traditionally I've worn holeproof explorers. Last time I went shopping for new socks, I wanted to try something new but was overwhelmed by choice and ended up picking some that turned out to be rather bad. My holeproofs and the newer ones are both coming to the end of their lives, and I'll need to replace them all soon. Where should I go to learn about what types of sock would be best?
A quick google for best socks or optimal socks leads me to lots of shops, and pages for sports socks, and pages for sock fashion, but nothing about picking out a comfo...
I would appreciate some advice. I've been trying to decide what degree to get. I've already taken a bunch of general classes and now I need to decide what to focus on. There are many fields that I think I would enjoy working in, such as biotechnology, neuroscience, computer science, molecular manufacturing, alternative energy, etc. Since I'm not sure what I want to go into, I was thinking of getting a degree with a wide range of applications, such as physics or math. I plan on improving my programming skills in my spare time, which should widen my prospects.
On...
Do any programmers or web developers have an opinion about getting training on Team Treehouse? Has anyone else done this?
Does anyone have a working definition of "forgiveness"? Given that definition, do you find it to be a useful thing to do?
There was a recent post or comment about making scientific journal articles more interesting by imagining the descriptions (of chemical interactions?) as being GIGANTIC SPECIAL EFFECTS. Anyone remember it well enough to give a link?
Does anyone else have problems with the appearance of Less Wrong? My account is somehow at the bottom of the site and the text of some posts overflows the white background. I noticed the problem about two days ago. I didn't change my browser (Safari) or anything else. Here are two screenshots:
http://i.imgur.com/OO5UHPX.png http://i.imgur.com/0Il8TeJ.png
Rhodiola is apparently the bomb, but I've read somewhere it suffers from poor quality supplements. Here in CEE in pharmacies the brand name they sell is called Vitango. Any experiences? http://www.vitango-stress.com/
In programming, you can "call" an argumentless function and get a value. But in mathematics, you can't. WTF?
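For what it's worth, the mismatch is mostly notational rather than fundamental; a pure nullary function is just a function from the one-element set, which mathematics can model too (it just usually writes the constant directly). A toy sketch of the distinction:

```python
import random

def roll():
    """Nullary and impure: takes no arguments, yet may return a
    different value on each call -- this is the case math has no
    notation for, since the 'function' depends on hidden state."""
    return random.randint(1, 6)

def five():
    """A *pure* nullary function is observationally just a constant:
    mathematically, a map from the one-element set {()} to the integers."""
    return 5
```

So the programming-only part isn't "argumentless functions" per se, but argumentless functions whose results vary between calls.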
I had an idea for Wei Dai's "What is Probability, Anyway?," but after actually typing up I became rather unsure that I was actually saying anything new. Is this something that hasn't been brought up before, or did I just write up a "durr"? (If it's not, I'll probably expand it into a full Discussion post later.)
The fundamental idea is, imagining a multiverse of parallel universes, define all identical conscious entities as a single cross-universal entity, and define probability of an observation E as (number of successors to the entity...
There's nothing inherently wrong with simulating intelligent beings, so long as you don't make them suffer. If you simulate an intelligent being and give it a life significantly worse than you could, well, that's a bit ethically questionable. If we had the power to simulate someone, and we chose to simulate him in a world much like our own, including all the strife, trouble, and pain of this world, when we could have just as easily simulated him in a strictly better world, then I think it would be reasonable to say that we, the simulators, are morally responsible for all that additional suffering.
Agree, but I'd like to point out that "just as easily" hides some subtlety in this claim.