If it's worth saying, but not worth its own post, even in Discussion, it goes here.
One part of my brain keeps being annoyed by the huge banner saying “Less Wrong will be undergoing maintenance in 1 day, 9 hours” and wishes there were a button to hide it away; another part knows perfectly well that if I did that, I would promptly forget about the maintenance.
Does anyone else believe in deliberate alienation? Forums and organizations like Less Wrong often strive to be, and claim to want to be, ever more inclusive, but I think excluding people can be very useful in terms of social utilons and conversation quality, if not so good for $$$. There's a lot of value in having a pretty good picture of who you're talking to in a given social group, both for making effective use of jargon and references and for making appeals to emotion that actually appeal. I think careful thought should be given to exactly who you let in or keep out with any given form of inclusiveness or insensitivity.
On a more personal note, I think looking deliberately weird is a great way to make your day to day happenstance interactions more varied and interesting.
Yes, insufficient elitism is a failure mode of people who were excluded at some point in their life.
This seems like a good time to link the Five Geek Social Fallacies, one of my favorite subculture sociology articles.
(Insufficient elitism as a failure mode is #1.)
Has anyone tried using the LessWrong web software for their own website? I would like to try it, but I looked at the source code and instructions, and it seemed rather difficult, probably because I have no experience with Python, Ruby, or server configuration (a non-trivial dose of all three seems necessary).
If someone would help me install it, that would be awesome. A list of steps describing exactly what needs to be done, and what (and which version) needs to be installed on the server, would also be helpful.
The idea is: I would like to start a rationalist community in Slovakia, and a website would be helpful to attract new people. Although I will recommend that all readers visit LW, reading in a foreign language is a significant inconvenience; I expect the localized version to have at least 10 times more readers. Also I would like to discuss local events and coordinate local meetups or other activities.
It seemed to me it would be best to reuse the LW software and just localize the texts; but now it seems the installation is more complicated than the discussion software I have used before (e.g. phpBB). But I really like the LW features (Markdown syntax, karma). I just have no experience with the underlying technologies, and don't want to spend my next five weekends learning them. So I hope someone already having the skills will help me.
Are there any mechanisms on this site for dealing with mental health issues triggered by posts/topics (specifically, the forbidden Roko post)? I would really appreciate any interested posters getting in touch by PM for a talk. I don't really know who to turn to.
Sorry if this is an inappropriate place to post this, I'm not sure where else to air these concerns.
So, there are hundreds of diseases, genetic and otherwise, with an incidence of less than 1%. That means that the odds of you having any one of them are pretty low, but the odds of you having at least one of them are pretty good. The consequence of this is that you're less likely to be correctly diagnosed if you have one of these rare conditions, which again, you very well might. If you have a rare disorder whose symptoms include frequent headaches and eczema, doctors are likely to treat the headaches and the eczema separately, because, hey, it's pretty unlikely that you have that one really rare condition!
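To put rough numbers on the "at least one" effect, here's a sketch with made-up incidence figures; the point survives the caveat (raised in a reply below) that real conditions aren't independent:

```python
# Illustrative only: suppose 500 rare conditions, each with 0.1% incidence,
# and (unrealistically) treat them as independent.
n_conditions = 500
p_each = 0.001

p_specific = p_each                                  # one particular condition
p_at_least_one = 1 - (1 - p_each) ** n_conditions    # any of the 500

print(f"P(that one rare condition) = {p_specific:.1%}")      # 0.1%
print(f"P(at least one of the 500) = {p_at_least_one:.1%}")  # ~39.4%
```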
For example, I was diagnosed by several doctors with "allergies to everything" when I actually have a relatively rare condition, histamine intolerance; my brother was diagnosed by different doctors as having Celiac disease, severe anxiety, or ulcers, when he actually just had lactose intolerance, which is pretty common, and I still cannot understand how they systematically got that one wrong. In both cases, these repeated misdiagnoses led to years of unnecessary, significant suffering. In my brother's case, at one point they actually prescribed him drugs with signi...
but the odds of you having at least one of them are pretty good.
The odds of you having any particular disease are not independent of your odds of having other diseases.
this gross example of irrationality
soren, please don't take this the wrong way, but based on what I've seen you post so far, you are not a strong enough rationalist to say things like this yet. You are using your existing knowledge of biases to justify your other biases, and this is dangerous.
Doctors have a limited amount of time and other resources. Any time and other resources they put into considering the possibility that a patient has a rare disease are time and other resources they can't put into treating their other patients with common diseases. In the absence of a certain threshold of evidence suggesting it's time to consider a rare disease (with a large space of possible rare diseases, most of the work you need to do goes into getting enough evidence to bring a given rare disease to your attention at all), it is entirely rational to assume by default that patients have common diseases.
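A toy Bayes calculation, with invented numbers, shows how big the head start for the common diagnosis is:

```python
# Invented numbers: a symptom that occurs in 80% of patients with some rare
# disease (base rate 0.1%) and in 30% of patients with a common one (base
# rate 10%). Which hypothesis does the symptom support?
p_rare, p_sym_given_rare = 0.001, 0.80
p_common, p_sym_given_common = 0.10, 0.30

posterior_odds = (p_common * p_sym_given_common) / (p_rare * p_sym_given_rare)
print(f"common : rare = {posterior_odds:.1f} : 1")  # 37.5 : 1
```

Even with the symptom being far more diagnostic of the rare disease, the base rates leave the common diagnosis dozens of times more probable; it takes strong, specific evidence to overcome that.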
At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.", rather than (paraphrasing for humor) "That's getting erased from the internet. No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?"
The real irony is that Eliezer is now a fantastic example of the commitment/sunk cost effect which he has warned against repeatedly: having made an awful decision, and followed it up with further awful decisions over years (including at least 1 Discussion post deleted today and an expansion of topics banned on LW; incidentally, Eliezer, if you're reading this, please stop marking 'minor' edits on the wiki which are obviously not minor), he is trapped into continuing his disastrous course of conduct and escalating his interventions or justifications.
And now the basilisk and the censorship are an established part of the LW or MIRI histories which no critic could possibly miss, and which pattern-matches on religion. (Stross claims that it indicates that we're "Calvinist", which is pretty hilarious for anyone who hasn't drained the term of substantive meaning and turned it into a buzzword for people they don't like.) A pity.
While we're on the topic, I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey, it would be much easier to go around to all the places discussing it and say something like 'this is solely Eliezer's problem; 98% disagree with censoring it'. But he didn't, and so just as I predicted, we have lost a powerful method of damage control.
It sucks being Cassandra.
It's too late. This poll is in the wrong place (attracting only those interested in it), will get too few responses (certainly not >1000), and is now obviously in reaction to much more major coverage than before so the responses are contaminated.
The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit,
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.
can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?
The basilisk is harmless. Eliezer knows this. The Streisand effect was the intended consequence of the censorship. The hope is that people who become aware of the basilisk will increase their priors for the existence of real information hazards, and will in the future be less likely to read anything marked as such. It's all a clever memetic inoculation program!
Disclaimer: I don't actually believe this.
Another possibility: Eliezer doesn't object to the meme that anyone who doesn't donate to SIAI/MIRI will spend eternity in hell being spread in a deniable way.
We are the hollow men / we are the stuffed men / Leaning together / Headpiece filled with straw. Alas! / Our dried comments when / we discuss together / Are quiet and meaningless / As median-cited papers / or reports of supplements / on the Internet.
Just to be charitable to Eliezer, let me remind you of this quote. For example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?
No. I have watched Eliezer make this unforced error now for years, sliding into an obvious and common failure mode, with mounting evidence that censorship is, was, and will be a bad idea, and I have still not seen any remotely plausible explanation for why it's worthwhile.
Just to take this most recent Stross post: he has similar traffic to me as far as I can tell, which means that since I get ~4000 unique visitors a day, he gets as many and often many more. A good chunk will be to his latest blog post, and it will go on being visited for years on end. If it hits the front page of Hacker News as more than a few of his blog posts do, it will quickly spike to 20k+ uniques in just a day or two. (In this case, it didn't.) So we are talking, over the next year, easily 100,000 people being exposed to this presentation of the basilisk (just need average 274 uniques...
The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?
There's now the impression that a community of aspiring rationalists — or, at least, its de-facto leaders — are experiencing an ongoing lack of clue on the subject of the efficacy of censorship on online PR.
The "reputational damage" is not just "Eliezer or LW have this kooky idea."
It is "... and they think there is something to be gained by shutting down discussion of this kooky idea, when others' experience (Streisand Effect, DeCSS, etc.) and their own (this very thread) are strong evidence to the contrary."
It is the apparent failure to update — or to engage with widely-recognized reality at all — that is the larger reputational damage.
It is, for that matter, the apparent failure to realize that saying "Don't talk about this because it is bad PR" is itself horrible PR.
The idea that LW or its leadership dedicate nontrivial attention to encircling and defending against this kooky idea makes it appear that the idea is central to LW. Some folks on the thread on Stross's forum seem to think that Roko discovered the hidden se...
having one's work cruelly mischaracterized and held up to ridicule is a whole bunch of no fun.
Thank you for appreciating this. I expected it before I got started on my life, I'm already accustomed to it by now, I'm sure it doesn't compare to the pain of starving to death. Since I'm not in any real trouble, I don't intend to angst about it.
The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?
The basilisk is now being linked on Marginal Revolution. Estimated site traffic: >3x gwern.net; per above, that is >16k uniques daily to the site.
What site will be next?
The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?
Answering the rhetorical question because the obvious answer is not what you imply [EDIT: I notice that J Taylor has made a far superior reply already]: Yes, it limits the ongoing reputational damage.
I'm not arguing with the moderation policy. But I will argue with bad arguments. Continue to implement the policy. You have the authority to do so, Eliezer has the power on this particular website to grant that authority, most people don't care enough to argue against that behavior (I certainly don't), and you can always delete the objections with only minimal consequences. But once you choose to make arguments that appeal to reason rather than to the preferences of the person with legal power, then you can be wrong.
At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.
I've had people come to me who are traumatised by basilisk considerations. From what I can tell almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (ie. direct self reports that are coherent) that a significant reason that they "take the basilisk seriously" is because Eliezer considers it a sufficiently big deal that he takes such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game theory question to which there are multiple practical answers.
I get the people who've been frightened by it because EY seems to take it seriously too. (Dmytry also gets them, which is part of why he's so perpetually pissed off at LW. He does his best to help, as a decent person would.) More generally, people distressed by it feel they can't talk about it on LW, so they come to RW contributors - addressing this was why it was made a separate article. (I have no idea why Warren Ellis then Charlie Stross happened to latch onto it - I wish they hadn't, because it was totally not ready, so I had to spend the past few days desperately fixing it up, and it's still terrible.) EY not in fact thinking it's feasible or important is a point I need to address in the last section of the RW article, to calm this concern.
The article is still terrible, but it's better than it was when Stross linked it. The greatest difficulty is describing the thing and the fuss accurately while explaining it to normal intelligent people without them pattern matching it to "serve the AI God or go to Hell". This is proving the very hardest part. (Let's assume for a moment 0% of them will sit down with 500K words of sequences.) I'm trying to leave it for a bit, having other things to do.
At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.
As far as I can tell the entire POINT of LW is to talk about various mental pathologies and how to avoid them or understand them even if they make you very uncomfortable to deal with or acknowledge. The reasons behind talking about the basilisk or basilisks in general (apart from metashit about censorship) are just like the reasons for talking about trolley problems even if they make people angry or unhappy. What do you do when your moral intuitions seem to break down? What do you do about compartmentalization or the lack of it? Do you bite bullets? Maybe the mother should be allowed to buy acid.
To get back to meta shit: If people are complaining about the censorship and you are sick of the complaints, the simplest way to stop them is to stop the censorship. If someone tells you there's a problem, the response of "Quit your bitching, it's annoying" is rarely appropriate or even reasonable. Being annoying is the point of even lameass activism like this. I personally think any discussion of the actual basilisk has reached ev...
The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?
I hate to use silly symmetrical rhetoric, however:
The secret has been leaked and the reputational damage is ongoing. Is there really anything to be gained by continuing the current moderation policy?
Stross is widely read, cited, and quoted approvingly, on his blog and off (eg. Hacker News). He is a trusted source for many geeks.
White coat hypertension is a phenomenon in which patients exhibit elevated blood pressure in a clinical setting (doctor's office, hospital, etc.) but not in other settings, apparently due to anxiety caused by being in the clinical setting.
Stereotype threat is the experience of anxiety or concern in a situation where a person has the potential to confirm a negative stereotype about their social group. Since most people have at least one social identity which is negatively stereotyped, most people are vulnerable to stereotype threat if they encounter a situation in which the stereotype is relevant. Although stereotype threat is usually discussed in the context of the academic performance of stereotyped racial minorities and women, stereotype threat can negatively affect the performance of European Americans in athletic situations as well as men who are being tested on their social sensitivity.
Math anxiety is anxiety about one's ability to do mathematics, independent of skill. Highly anxious math students will avoid situations in which they have to perform mathematical calculations. Math avoidance results in less competency, exposure and math practice, leaving students more anxious an...
Over the last month Bitcoin has nearly doubled in value. It's now close to its historical high. http://bitcoincharts.com/charts/mtgoxUSD#tgMzm1g10zm2g25zv
Does anybody know what drives the latest Bitcoin price development?
drug use, pedophilia, (...), and rampant amateur speculation
Hey, that's almost 2.5% of the world GDP! Can't go wrong with a market this size.
Any bored nutritionists out there? I've put together a list of nutrients, with their USDA recommended quantities/amounts, and scoured amazon for the best deals, in trying to create my own version of Soylent. My search was complicated by the following goals:
I am also worried about possible unforeseen consequences of eating bad diets, but one of those bad diets is my current one, so...
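Incidentally, hitting the nutrient targets at minimum cost is the textbook "diet problem" from linear programming, so a solver can do the price-comparison arithmetic. A minimal sketch with scipy; the products, prices, and nutrient contents below are placeholders, not recommendations:

```python
from scipy.optimize import linprog

# Hypothetical data: 3 products, 2 nutrients (protein in g, calcium in mg).
cost = [2.50, 1.20, 3.00]              # $ per unit of each product (invented)
nutrients = [[20.0,  5.0, 30.0],       # protein per unit of each product
             [100.0, 300.0, 50.0]]     # calcium per unit of each product
minimums = [56.0, 1000.0]              # daily targets (illustrative)

# Minimize cost subject to nutrients @ x >= minimums, written as -A x <= -b.
res = linprog(c=cost,
              A_ub=[[-a for a in row] for row in nutrients],
              b_ub=[-m for m in minimums],
              bounds=[(0, None)] * len(cost))
print(res.x, f"-> ${res.fun:.2f}/day")
```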
How much do actors know about body language? Are they generally taught to use body language in a way consistent with what they're saying and expressing with their faces? (If so, does this mean that watching TV shows or movies muted could be a good way to practice reading body language?)
I do not believe it would be a good way to practice, because even if actors act the way they are supposed to (consistent body language and facial expressions), let's say, conservatively, 90% of the time, you are left with 10% wrong data. This 10% wouldn't be so bad except that it is actors trying to act correctly, meaning you would learn to read what a fabricated emotion looks like as a real emotion. This could be detrimental to many uses of reading body language, such as telling when other people are lying.
My preferred method has been to watch court cases on YouTube where it has come out afterward whether the person was guilty or innocent. I watch these videos before I know the truth, make a prediction, and then read what the truth is. In this way I am able to get situations where the person is feeling real emotions and is likely to hide what they're feeling with fake emotions.
After practicing like this for about a week, I found that I could more easily discern whether people were telling the truth or lying, and it was easier to see what emotions they truly felt.
This may not be extremely applicable to the real world because emotions felt in court room...
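If you try this, I'd recommend scoring your predictions explicitly instead of trusting the feeling of improvement; a minimal tracker (the data here is hypothetical):

```python
# Each record: (confidence assigned to "they are lying", truth: 1 = lying).
predictions = [(0.8, 1), (0.6, 0), (0.9, 1), (0.7, 1), (0.3, 0)]

hits = sum((p >= 0.5) == bool(truth) for p, truth in predictions)
brier = sum((p - truth) ** 2 for p, truth in predictions) / len(predictions)

print(f"accuracy: {hits}/{len(predictions)}")
print(f"Brier score: {brier:.3f}  (0 = perfect; 0.25 = always guessing 50%)")
```

A falling Brier score over the week would be better evidence of improvement than introspection.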
Dude. Seriously. Spoilers.
This comment is a little less sharp than it would have been had I not gone to the gym first; but unless you (and the apparent majority in this thread) actively want to signal contempt for those who disagree with you, please remember that there are some people here who do not want to read about the fucking basilisk.
It's been suggested to me that since I don't blog, I start an email newsletter. I ignored the initial suggestions, but following the old maxim*, I began to seriously consider it on the third or fourth suggestion (whose author also mentioned they'd even pay for it, which would be helpful for my money woes).
My basic idea is to compile, once a month: everything I've shared on Google+, articles excerpted in Evernote or on IRC, interesting LW comments**, and a consolidated version of the changes I've made to gwern.net that month. Possibly also include media I've consumed, with reviews for books, anime, music, etc., akin to the media thread.
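Most of the compiling could probably be scripted; a rough sketch, assuming each source exposed an RSS/Atom feed (the URLs below are placeholders, not real endpoints):

```python
import time
import feedparser  # pip install feedparser

# Placeholder feeds; Google+/Evernote would need real export endpoints.
FEEDS = {
    "Google+": "https://example.org/gplus.atom",
    "gwern.net changes": "https://example.org/changes.rss",
}

def monthly_digest(year, month):
    """Collect the month's entries from every feed into a Markdown digest."""
    lines = [f"# Digest {year}-{month:02d}"]
    for source, url in FEEDS.items():
        lines.append(f"\n## {source}")
        for entry in feedparser.parse(url).entries:
            stamp = entry.get("published_parsed")
            if stamp and (stamp.tm_year, stamp.tm_mon) == (year, month):
                lines.append(f"- [{entry.title}]({entry.link})")
    return "\n".join(lines)

print(monthly_digest(*time.localtime()[:2]))
```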
I am interested in whether LWers would subscribe:
[pollid:415]
If I made it a monthly subscription, what does your willingness-to-pay look like? (Please be serious and think about what you would actually do.)
[pollid:416]
Thanks to everyone voting.
* "Once is chance; twice is coincidence; three times is enemy action." Or in Star Wars terms: "If someone calls you a Hutt, ignore them; if two people call you a Hutt, begin to wonder; and if three do, buy a slobber-pail and start stockpiling glitterstim."
** For example, my recent comments on the SAT (Harvar...
Conditional Spam (something we could use a better word for, but this will do for now)
In short: Conditional Spam is information that is worthless to 99 percent of people but valuable to the remaining 1 percent.
A huge proportion of the content generated and shared on the internet is in this category, and this becomes more and more the case as a greater percentage of the population writes to the internet as well as reading it. In this category are things like people's photos of their cats, day-to-day anecdotes, baby pictures, but ALSO, and importantly, things like most scientific studies, news articles, and political arguments. People criticize Twitter for encouraging seemingly narcissistic, pointless microblogging, but in reality it's the perfect engine for distributing conditional spam: anyone who cares about your dog can follow you, and anyone who doesn't can NOT.
When your Twitter or Facebook or RSS feed is full of things that neither inform nor entertain you (this applies to fun as well as usefulness), this isn't a failing of the internet. It's a failing of your filter. The internet is a tool optimized to distribute conditional spam as widely as possible, and you can tune your use of it to try and make th...
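A toy version of the point, with invented tags: the same feed is spam or signal depending only on the reader's filter:

```python
# Every item is spam or signal only relative to a particular reader.
feed = [
    {"title": "My dog learned a trick",   "tags": {"pets"}},
    {"title": "New working-memory study", "tags": {"science", "psychology"}},
    {"title": "Baby's first steps!",      "tags": {"family"}},
]
my_interests = {"science", "psychology", "statistics"}

signal = [item["title"] for item in feed if item["tags"] & my_interests]
print(signal)  # only the study survives this reader's filter
```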
In the short story/paper "Sylvan's Box" by Graham Priest the author tries to argue that it's possible to talk meaningfully about a story with internally inconsistent elements. However, I realized afterward that if one truly was in possession of a box that was simultaneously empty and not empty there would be no way to keep the inconsistency from leaking out. Even if the box was tightly closed it would both bend spacetime according to its empty weight and also bend spacetime according to its un-empty weight. Opening the box would cause photons ...
Persson (Uehiro Fellow, Gothenburg) has jokingly said that we are neglecting an important form of altruistic behavior.
http://www.youtube.com/watch?v=sKmxR1L_4Ag&feature=player_detailpage#t=1481s
We have a duty not to kill
We have a duty not to rape
but we do not have a duty, at least not a strong duty, to save lives
or to have sex with someone who is sexually-starved
It's a good joke.
What worries me is that it makes Effective Altruism of the GWWC and 80000h kind analogous to "fazer um feio feliz", an expression we use in Portuguese meaning, roughly, "to make an ugly one happy"...
Where on LW is it okay to request advice? (The answers I would expect -- are these right? -- are: maybe, just maybe, in open threads, probably not in Discussion if you don't want to get downvoted into oblivion, and definitely not in Main; possibly (20-ish percent sure) nowhere on the site.)
I'm asking because, even if the discussions themselves probably aren't on-topic for LW, maybe some would rather hear opinions formulated by people with the intelligence, the background knowledge and the debate style common around here.
It's definitely okay to post in open threads. It might be acceptable to post to discussion, if your problem is one that other users may face or if you can make the case that the subsequent discussion will produce interesting results applicable to rational decisionmaking generally.
Two papers from last week: "The universal path integral" and "Quantum correlations which imply causation".
The first defines a quantum sum-over-histories "over all computable structures... The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures."
The second, despite its bland title, is actually experimenting with a new timeless formalism, a "pseudo-density matrix which treats space and time indiscriminate...
I've been reading the Sequences, but I've realized that less of it has sunk in than I would have hoped. What is the best way to make the lessons sink in?
With all the mental health issues coming up recently, I thought I'd link Depression Quest, a text simulation of what it's like to live with depression.
Trigger warning: Please read the introduction page thoroughly before clicking Start. If you are or have been depressed, continue at your own risk.
I'm looking for information about chicken eye perfusion, as a possible low-cost cryonics research target. Anyone here doing small animal research?
Following up on my comment in the February What are You Working On thread, I've posted an update to my progress on the n-back game. The post might be of interest to those who want to get into mobile game/app development.
I have recently tried playing the Monday-Tuesday game with people three times. The first time it worked okay, but the other two times the person I was trying to play it with assumed I was (condescendingly!) making a rhetorical point, refused to play the game, and instead responded to what they thought the rhetorical point I was making was. Any suggestions on how to get people to actually play the game?
I decided I want to not see my karma or the karma of my comments and posts. I find that if anyone ever downvotes me it bothers me way more than it should, and while "well, stop letting it bother you" is a reasonable recommendation, it seems harder to implement for me than a software solution.
So, to that end, I figured out how the last posted version of the anti-kibitzer script works, and remodeled it to instead hide only my own karma (which took embarrassingly long to figure out, since my javascript skills can be best described with terms like "...
A couple guys argue that quantum fluctuations are relevant to most macroscopic randomness, including ordinary coin tosses and the weather. (I haven't read the original paper yet.)
The link discusses normal human flips as being quantum-influenced by cell-level events; a mechanical flipper doesn't seem relevant.
Could someone write a post (or I suppose we could create a thread here) about the Chelyabinsk meteorite?
It's very relevant for a variety of reasons:
connection to existential risk
the unlikely media report that the meteorite is 'independent' of the asteroid that passed by this day
any observations people have (I haven't any) on global communication and global rational decision making at this time, before it was determined that the damage and integrated risk was limited
the unlikely media report that the meteorite is 'independent' of the asteroid that passed by this day
It came from a different region of space, on an entirely different orbit. 2012 DA14 approached Earth from the south on a northward trajectory, whereas the Chelyabinsk meteorite was on what looks like a much more in-plane, east-west orbit. As unlikely as it sounds, there is no way they could have been fragments of the same asteroid (unless they broke apart years ago and were subsequently separated further by more impacts or the chaotic gravitational influences of other objects in the solar system).
The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.
I've seen several references to a theory that the English merchant class outbred both the peasants and the nobles, with major societal implications (causing the Industrial Revolution), but now I can't find them. Does anyone know what I'm talking about?
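(The mechanism itself is easy to toy with; a sketch with invented per-generation growth rates, not the actual historical estimates:)

```python
# Toy model: a modest fertility edge for one class compounds over generations.
share  = {"merchants": 0.05, "peasants": 0.80, "nobles": 0.15}
growth = {"merchants": 1.30, "peasants": 1.00, "nobles": 0.95}  # invented

for _ in range(10):  # ten generations
    share = {k: share[k] * growth[k] for k in share}
    total = sum(share.values())
    share = {k: v / total for k, v in share.items()}  # renormalize to 100%

print({k: round(v, 2) for k, v in share.items()})
# Merchants go from 5% of the population to roughly 44% in ten generations.
```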
A bit of a meta question / possibly suggestion:
Has the idea of showing or counting karma-per-reader ratios been discussed before? The idea just occurred to me, but I'd rather not spend time thinking at length about it (I've not noticed any obvious disadvantages, so if you see some please tell me) if multiple other LWers have already discussed or thought about it.
Deleted. Don't link to possible information hazards on Less Wrong without clear warning signs.
E.g. this comment for a justified user complaint. I don't care if you hold us all in contempt, please don't link to what some people think is a possible info hazard without clear warning signs that will be seen before the link is clicked. Treat it the same way you would goatse (warning: googling that will lead to an exceptionally disgusting image).
Working on my n-back meta-analysis again, I experienced a cute example of how prior information is always worth keeping in mind.
I was trying to incorporate the Chinese thesis Zhong 2011; not speaking Chinese, I've been relying on MrEmile to translate bits (thanks!) and I discovered tonight that I had used the wrong table. I couldn't access the live thesis version because the site was erroring so I flipped to my screenshotted version... and I discovered that one line (the control group for the kids who trained 15 days) was cut off:
「META」: Up-votes represent desirable contributions, and down-votes undesirable ones. Once one amasses a large corpus of comments, noticing which of one's comments have been up-voted or down-voted becomes nontrivially difficult. It seems it would be incredibly difficult to code a feature that helped one find those comments; on the off chance it isn't, consider this a feature request.
Use Wei Dai's script. Use the 'sort by karma' feature.
Link: Obama Seeking to Boost Study of Human Brain
It's still more-or-less rumors with little in the way of concrete plans. It would, at the least, be exciting to see funding of a US science project on the scale of the human genome project again.
I don't see why the "pattern matching" is invalid.
It is the things that tend to go with it that are the problem. Such as the failure to understand which facets are different and similar and the missing of the most important part of the particular case due to distraction by thoughts relevant to a different scenario.
What's wrong with embracing foreign cultures, uploadings, upliftings, and so on?
Maybe I am biased by my personal history, having embraced what, as far as I can tell, is the very cutting edge of Western Culture (i.e. the less-wrong brand of secular humanism), and feeling rather impatient for my origin cultures to follow a similar path, which they are violently reticent to. Maybe I've got a huge blind spot of some other sort.
But when the Superhappies demand that we let them eradicate suffering forever, or when CelestAI offers us all our own personal paradise...
An overview of political campaigns
Once a new president is in power, he forgets that voters who preferred him to the alternative did not necessarily comprehend or support all of his intentions. He believes his victory was due to his vision and goals; he underestimates how much the loss of credibility for the previous president helped him and overestimates how much his own party supports him.
Is there a better way to look at someone's comment history, other than clicking next through pages of pages of recent comments? I would like to jump to someone's earliest posts.
I've been trying to correct my posture lately. Anyone have thoughts or advice on this?
Some things:
Advice from reddit; if you spend lots of time hunched over books or computers, this looks useful and here are pictures of stretches.
You can buy posture braces for like $15-$50. I couldn't find anything useful about their efficacy in my 5 minutes of searching, other than someone credible-sounding saying that they'd weaken your posture muscles (sounds reasonable) and one should thus do stretches instead.
Searching a certain blog, I found this which says t...
A rationalist, mathematical love song
I got a totally average woman stands about 5’3”
I got a totally average woman she weighs about 153
Yeah she’s a mean, mean woman by that I mean statistically mean
Y’know average
I just watched Neil Tyson, revered by many, on The Daily Show answer the question "What do you think is the biggest threat to mankind's existence?" with "The threat of asteroids."
Now, there is a much better case to be made for the danger from future AI as an x-risk than for asteroids; by the transitive property, Neil Tyson's beliefs would pattern-match to Xenu even better than MIRI's beliefs do - a fiery death from the sky.
Yet we give NdGT the benefit of the doubt about how he came to his conclusion; why don't you do the same with MIRI?
The pattern matching's conclusions are wrong because the information it is matching on is misleading. The article implied that there was widespread belief that the future AI should be assisted, and this was wrong. Last I looked it still implied widespread support for other beliefs incorrectly.
This isn't an indictment of pattern matching so much as a need for the information to be corrected.
This article got me rather curious.
Extracts:
AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters' worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.
"Reality is so complex, we must move away from dogma, whether it's conspiracy theories or free-market," says James Glattfelder. "Our analysis is reality-based."
Now that's the kind ...
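For anyone curious, the flavor of the analysis (finding disproportionately central nodes in an ownership network) is easy to reproduce on toy data; a sketch using networkx, with an invented ownership graph and PageRank as a crude stand-in for the paper's control metric:

```python
import networkx as nx  # pip install networkx

# Invented toy data: an edge A -> B means A holds a stake in B.
G = nx.DiGraph([
    ("BankCore", "FirmA"), ("BankCore", "FirmB"), ("BankCore", "FundX"),
    ("FundX", "FirmC"), ("FirmA", "FirmC"), ("FirmB", "FirmD"),
])

# PageRank on the reversed graph pushes weight toward (transitive) owners.
influence = nx.pagerank(G.reverse())
for node, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} {score:.3f}")  # the bank dominates, as in the paper
```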
Just sharing an unpretentious but (IMO) interesting post from a blog I regularly read.
In commenting on an article about the results of an experiment aimed at "simulating" a specific case of traumatic brain injury and measuring its supposed effects on solving a particularly difficult problem, economist/game theorist Jeffrey Ely asked whether a successful intervention could ever be designed to give people certain unusual, circumstantially useful skills.
...It could be that we have a system that takes in tons of sensory information all of which is
So I was planning on doing the AI gatekeeper game as discussed in a previous thread.
My one stipulation as Gatekeeper was that I could release the logs after the game; however, my opponent basically backed out after we had barely started.
Is it worth releasing the logs still, even though the game did not finish?
Ideally I could get some other AI to play against me, so that I have more logs to release. I will give you up to two hours on Skype, IRC, or some other easy method of communication. I am estimating my resounding victory with 99%+ probability. We can put karma, small money, or nothing on the line.
Is anyone up for this?
I enjoy your posts, and I have been a consumer of your G+ posts and your blog for some time now, even though I don't much comment and just lurk about. While I would want some sort of syndication of your stuff, I am wondering whether the external expectation of meeting a monthly compilation target, or knowing for sure that there is now a definite large audience for your posts, will affect their quality. I realize there is likely no answer possible beforehand, but I'd like to know if you've considered this.
I was reading Buss' Evolutionary Psychology, and came across a passage on ethical reasoning, status, and the cognitive bias associated with the Wason Selection Task. The quote:
...Cummins (1998) marshals several forms of evidence to support the dominance theory. The first pertains to the early emergence in a child's life of reasoning about rights and obligations, called deontic reasoning. Deontic reasoning is reasoning about what a person is permitted, obligated, or forbidden to do (e.g., Am I old enough to be allowed to drink alcoholic beverages?). This for
Assuming by "it" you refer to the decision theory work, that UFAI is a threat, Many Worlds Interpretation, things they actually have endorsed in some fashion, it would be fair enough to talk about how the administrators have posted those things and described them as conclusions of the content, but it should accurately convey that that was the extent of "pushing" them. Written from a neutral point of view with the beliefs accurately represented, informing people that the community's "leaders" have posted arguments for some unus...
I was wondering about the LW consensus regarding molecular nanotechnology. Here's a little poll:
How many years do you think it will take until molecular nanotechnology comes into existence? [pollid:417]
What is the probability that molecular nanotechnology will be developed before superhuman Artificial General Intelligence? [pollid:418]
Omega appears and makes you the arbiter over life and death. Refuse, and everybody dies.
The task is this: You are presented with n (say, 1000) individuals and have to select a certain number who are to survive.
You can query Omega for their IQ, their life story and most anything that comes to mind, you cannot meet them in person. You know none of them personally.
You cannot base your decision on their expected life span. (Omega matches them in life expectancy brackets.)
You also cannot base your decision on their expected charitable donations, or a proxy thereof.
What do?
Can you please stop with this meta discussion?
I banned the last discussion post on the Basilisk, not Eliezer. I'll let this one stand for now as you've put some effort into this post. However, I believe that these meta discussions are as annoyingly toxic as anything at all on Less Wrong. You are not doing yourself or anyone else any favors by continuing to ride this.