Links passing through api.viglink.com?
Visiting Less Wrong after being absent for a while can be a major time sink. The sidebar recent-posts and recent-comments links (which I usually have blocked, but not always; I haven't installed the relevant extensions on the system I'm on yet) draw me into interesting discussions, which frequently link back to other discussions, and so on.
To limit how deep I get drawn in, I try to hold back from reflexively clicking links in comments and posts. Instead I just hover over them (or press and hold on a touchscreen) to view the address, hoping to get a general idea of what they're about and whether I'm familiar with them (and occasionally saving them to a folder if I think I might want them later).
Recently, though, I've noticed that LW is replacing off-site links with indirect links routed through the domain api.viglink.com. This means I can't just glance at the URL to see where it points; I have to either open the link or paste it into the address bar and scroll through it looking for the embedded URL of the actual destination. Is it important for the site to do that? Is there a way to turn that feature off, or a browser extension (preferably Android-compatible) that reverses it?
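As a stopgap, the embedded destination can usually be recovered mechanically. This is a minimal sketch assuming the target URL is carried in a query parameter named "u" (as these api.viglink.com links appear to do); the parameter name is an assumption and may need adjusting:

```python
# Unwrap a viglink-style redirect link by pulling out the embedded
# destination URL. Assumes the target sits in a query parameter
# (default "u"); parse_qs percent-decodes the value for us.
from urllib.parse import urlparse, parse_qs

def unwrap_redirect(url, param="u"):
    """Return the embedded URL from a redirect link, or None if absent."""
    query = parse_qs(urlparse(url).query)
    values = query.get(param)
    return values[0] if values else None

wrapped = ("http://api.viglink.com/api/click?format=go&key=abc123"
           "&u=http%3A%2F%2Fexample.com%2Fpage")
print(unwrap_redirect(wrapped))  # -> http://example.com/page
```

A bookmarklet or userscript doing the same thing would answer the "reverse it in the browser" part of the question.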
(Initially posted about here in the current open thread, but I decided I wanted it to be more visible.)
Optimal rudeness
On LessWrong, we often get cross, and then rude, with each other. Sometimes, someone then observes this rudeness is counterproductive.
Is it?
As a general rule, emotional responses are winning strategies (at least for your genes). That's why you have those emotions.
Granted, insulting someone during your rebuttal of their argument makes it less likely that they will see your point. But it appears to be an effective tactic when carrying on an argument in public.
It's my impression that on LessWrong, a comment or a post written with a certain amount of disdain is more likely to get voted up than a completely objective comment. A good way to obtain upvotes, if that is your goal, is to make other readers wish to identify with you and disassociate themselves from whomever you're arguing against. A great many upvoted comments, including some of my own, suggest, subtly or not so subtly, with or without evidence, that the person being responded to is ignorant or stupid.
The correct amount of derision appears to be slight, and to depend on status. Someone with more status should be more rude. Retaliations against rudeness may really be retaliations against an attempt to claim high status.
What's the optimal response if someone says something especially rude to you? Is a polite or a rude response to a rude comment more likely to be upvoted/downvoted? Not ideally, but in reality. I think, in general, when dealing with humans, responding to skillful rudeness, and especially humorous rudeness, with politeness, is a losing strategy.
My expectation is that rudeness is a better strategy for poor and unpopular arguments than for good or popular ones, because rudeness adds noise. The lower a comment's expected karma, the ruder it should be.
You jerk.
How to Evaluate Data?
What I'm trying to figure out is: how do I determine whether a source I'm looking at is telling the truth? As an example, let's take this page from MetaMed: http://www.metamed.com/vital-facts-and-statistics
At first glance, I see some obvious things I ought to consider. The page often gives numbers for how many patients die in hospitals per year, but for my purposes I ought to interpret those in light of how many hospitals there are in the US, and how many patients each hospital sees. I also notice that since they are trying to promote their site, they probably selected the data that best serves that purpose.
So where do I go from here? Evaluating each source they reference seems like a waste of time. I do not think it would be wrong to trust that they are not actively lying to me. But how do I move from here to an accurate picture of general doctor competence?
Differential reproduction for men and women
There's an idea I've seen a number of times that 80% of women have had descendants, but only 40% of men. A little research tracked it back to this, but the speech doesn't have a cite and I haven't found a source.
The reproduction rates for men and women (possibly for the whole history of the species) seem like the sort of thing that could be found out, but I'd like more solid information.
New Canon!HP cover art similarity
In fairness, this is almost certainly a coincidence. But it's interesting how similar the new HP cover art looks to Dinosaurusgede's "Shopping With Minerva" piece.
http://dinosaurusgede.deviantart.com/art/shopping-with-minerva-174358965

http://io9.com/5984599/the-harry-potter-books-are-finally-getting-decent-covers

Boring Advice Repository
This is an extension of a comment I made that I can't find and also a request for examples. It seems plausible that, when giving advice, many people optimize for deepness or punchiness of the advice rather than for actual practical value. There may be good reasons to do this - e.g. advice that sounds deep or punchy might be more likely to be listened to - but as a corollary, there could be valuable advice that people generally don't give because it doesn't sound deep or punchy. Let's call this boring advice.
An example that's been discussed on LW several times is "make checklists." Checklists are great. We should totally make checklists. But "make checklists" is not a deep or punchy thing to say. Other examples include "google things" and "exercise."
I would like people to use this thread to post other examples of boring advice. If you can, provide evidence and/or a plausible argument that your boring advice actually is useful, but I would prefer that you err on the side of boring but not necessarily useful in the name of more thoroughly searching a plausibly under-searched part of advicespace.
Upvotes on advice posted in this thread should be based on your estimate of the usefulness of the advice; in particular, please do not vote up advice just because it sounds deep or punchy.
Need some psychology advice
I started going out with a fantastic girl a couple of weeks ago. Everything is great, except that whenever I've sent her a text message or email requesting something and haven't received a response yet, I experience significant dysphoric anxiety, fearing that her response will be not just "no" but "no and I don't want to date you any more". This is due to brain chemistry or personal history, take your pick—either seems like a possible explanation to me. But there's certainly no evidence supporting the idea that this is likely to happen, nor is the anxiety helping me prevent it or helping me in any other way.
Does anyone have evidence-based advice, or pointers to same, on dealing with this kind of issue? It is the only splotch on what have otherwise been the best two weeks of my life.
AI box: AI has one shot at avoiding destruction - what might it say?
Eliezer proposed in a comment:
>More difficult version of AI-Box Experiment: Instead of having up to 2 hours, you can lose at any time if the other player types AI DESTROYED. The Gatekeeper player has told their friends that they will type this as soon as the Experiment starts. You can type up to one sentence in your IRC queue and hit return immediately, the other player cannot type anything before the game starts (so you can show at least one sentence up to IRC character limits before they can type AI DESTROYED). Do you think you can win?
This spawned a flurry of ideas on what the AI might say. I think there's a lot more ideas to be mined in that line of thought, and the discussion merits its own thread.
So, give your suggestion - what might an AI say to save or free itself?
(The AI-box experiment is explained here)
EDIT: one caveat to the discussion: it should go without saying, but you probably shouldn't come out of this thinking, "Well, if we can just avoid X, Y, and Z, we're golden!" This should hopefully be a fun way to get us thinking about the broader issue of superintelligent AI in general. (Credit goes to Eliezer, RichardKennaway, and others for the caveat)
Suggestion: site-wide taboos
Every so often, someone on Less Wrong uses a word wrong.
What does it mean to use a word wrong? Can't we use language however we want, as long as we manage to successfully communicate? Well, yes, we can, but we shouldn't. Jargon terms, in particular, are used by professionals in a certain field in order to communicate concepts that are applicable chiefly in that field. They often have very precise definitions—"incunable", for example, means "book printed in Europe before the year 1501", and "sweet crude oil" means "petroleum with a sulfur content less than 0.42%".
The thing about precisely-defined terms like these is that if you use one of them in a way that's at odds with its official definition, you can cause people to have more misunderstandings later on. I admit I can't think of a great example, but "obsessive–compulsive disorder" seems like a decent one: people often say "I'm so OCD" to mean that messy things annoy them, which seems like it could lead people to misunderstand when people actually have obsessive–compulsive disorder.
There are just two words I don't really like LW's usage of:
- "Signaling". I'm not actually sure exactly what "signaling" means—which is arguably reason enough for us not to use it. I get the impression that it's usually used to mean exactly the same thing as "indicating". If that's the case, we should stop using it (or else only use it when everyone knows exactly what we mean by it), and just say "indicating" instead. Or perhaps we don't use "signaling" to mean exactly the same thing as "indicating", but if that's the case, I don't know what the difference is, and I don't know whether or not it matches the "real" meaning of the word.
- "Affect" (the noun). Wiktionary defines it as "a subjective feeling experienced in response to a thought or other stimulus; mood, emotion, especially as demonstrated in external physical signs". LW seems to use it as an exact synonym of "emotion".