If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Of course, for "every Monday", the last one should have been dated July 22-28. *cough*
An interesting story -- about science, what gets published, and what the incentives for scientists are. But really it is about whether you ought to believe published research.
The summary has three parts (I am quoting from the story).
Part 1 : We were inspired by the fast growing literature on embodiment that demonstrates surprising links between body and mind (Markman & Brendl, 2005; Proffitt, 2006) to investigate embodiment of political extremism. Participants from the political left, right and center (N = 1,979) completed a perceptual judgment task in which words were presented in different shades of gray. Participants had to click along a gradient representing grays from near black to near white to select a shade that matched the shade of the word. We calculated accuracy: How close to the actual shade did participants get? The results were stunning. Moderates perceived the shades of gray more accurately than extremists on the left and right (p = .01). Our conclusion: political extremists perceive the world in black-and-white, figuratively and literally. Our design and follow-up analyses ruled out obvious alternative explanations such as time spent on task and a tendency to s...
Ugh. I am generally in the unsympathetic-to-PUA-thinking camp, so I offer the following not to bring up a controversial subject again, but because I think publicly acknowledging when one encounters inconvenient evidence for one's priors is a healthy habit to be in...
Recently I added the following (truthful) text to my OkCupid! profile:
Note, July 2013 -- I can't claim to be in a relationship yet, but I have had a couple of dates with someone who had me totally enthralled within 30 minutes of meeting her. I'm flattered by the wave of other letters that have come in the past month, but I've put responding to anyone else on hold while I devote myself to worshiping the ground she walks on.
Having noted that I am a) unavailable and b) getting lots of competing offers (a high status combination), the result is... in three days, the number of women rating my profile highly has gone from 61 to 113.
OTOH I wouldn't at all be shocked to find out that profiles rated highly and profiles most often responded to are significantly different sets. Signalling preferences vs revealed preference yada yada.
Funny, I read your post and my initial reaction was that this evidence cuts against PUA. (Now I'm not sure whether it supports PUA or not, but I lean towards support).
PUA would predict that this phrase
...while I devote myself to worshiping the ground she walks on.
is unattractive.
I dunno, in the context it sounds clearly tongue-in-cheek -- though you usually can't countersignal to people who don't know you (see also).
I've noticed a few times how surprisingly easy it is to be in the upper echelon of some narrow area with a relatively small amount of expenditure (for an upper middle class American professional). This is easy to see in various entertainment hobbies: an American professional adult who puts, say, 10% of his salary into Legos will have a massive collection by the standards of most people who own Legos. Similarly, putting 10% of a professional's salary into buying gadgets means that you would be buying a new one or two every month.
I recently came across an article on political donations and saw the same effect: to be in the top .01% of American political donors, it only takes about $11k an election cycle (more in presidential years, less in legislative only years). Again, at 10% of income, that only takes an income of ~$55k a year (since the cycles occur every two years), which is comparable to the median American salary (and lower than the starting salaries for most of my friends who graduated with STEM bachelor's degrees). (See the quick check after this comment.)
It's not clear to me what percentage of people do this. It's the sort of thing that you could only do for a few narrow niches, since buying a ton of Legos impedes ...
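A quick sanity check of that arithmetic in Python (a minimal sketch; the $11k-per-cycle threshold is the figure cited in the comment, not something verified here):

```python
# Back-of-the-envelope check of the "top .01% of donors" arithmetic above.
threshold_per_cycle = 11_000   # dollars per two-year election cycle (figure from the comment)
years_per_cycle = 2
donation_fraction = 0.10       # donating 10% of annual income

income_needed = threshold_per_cycle / (donation_fraction * years_per_cycle)
print(f"Annual income needed: ${income_needed:,.0f}")   # -> $55,000
```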
What are the 'benefits' you allude to?
Mostly access to exceptional people / opportunities, and admiration / social status. For example, become a major donor to a wildlife rescue center, and you get invited to play with the tigers. I would be surprised if major MIRI donors who live in the Bay Area don't get invited to dinner parties / similar social events with MIRI people.
For the status question, I think it's better to be high status in a narrow niche than medium status in many niches. It's not clear to me how the costs compare, though.
After a short discussion on IRC regarding basilisks I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way to please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it. Please don't let my potential harm discourage you.
After a short discussion on IRC regarding basilisks I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way to please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it.
We almost need a list for this. This makes half a dozen people I've seen making the same declaration.
Please don't let my potential harm discourage you.
Without endorsing the reasoning at all, I note that those with information-suppressing inclinations put only a little weight on harm caused to you and even less on your preferences. If they believe that the basilisk is worthy of the name they will expect giving it to you to result in you spreading it to others and thereby causing all sorts of unspeakable misery and so forth. It'd be like infecting a bat with Ebola.
Memetic/Information Hazards - the term comes from here. Basically anything that makes you significantly worse off after you know it than before. Giving someone wrong instructions for how to build a bomb wouldn't count, for example, as I can just never build a bomb or just use other instructions, etc.
Warning: Could be dangerous to look into it
B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."
So you're asking: what new medical sensory hazards may be developed in the future?
Well, the example of photosensitive epilepsy, where no trigger is mentioned which could have existed before the 19th century or so, suggests you should be very wary of thinking the risk of new sensory hazards is close to zero. Flash grenades are another visual example of a historically novel sensation which badly damages ordinary people. Infrasound is another plausible candidate for future deliberate or accidental weaponization. And so on...
epilepsy is rare and is rarely adult-onset for the non-elderly.
There, see, you're doing it again! Why would you exclude the elderly? Keep in mind that you yourself should aspire to become elderly one day (after all, consider the most likely alternative...).
I think the most obvious semi-basilisk example is certain strains of religion. Insofar as it makes you believe you might go to hell, and all your friends are going to hell, these religions will make you feel bad and also make you want to spread them to everyone you know. Feeling bad is not the same as death or mental breakdown or other theoretical actual basilisk consequences, but in essence there are meme complexes that contain elements that demand you spread the whole complex. If someone's in possession of such a concept but has defeated it or is in some way immune, it may still be correct for them not to tell you for fear you are not and will spread it to others once it has worked its will on you.
So according to this article, a large factor in rising tuition costs at American universities is attributable to increases in administration and overhead costs. For example,
Over the past four decades, though, the number of full-time professors or “full-time equivalents”—that is, slots filled by two or more part-time faculty members whose combined hours equal those of a full-timer—increased slightly more than 50 percent. That percentage is comparable to the growth in student enrollments during the same time period. But the number of administrators and administrative staffers employed by those schools increased by an astonishing 85 percent and 240 percent, respectively.
Certainly some of these increases are attributable to the need for more staff supporting new technological infrastructure such as network/computer administration, but those needs don't explain the magnitude of the increases seen.
The author also highlights examples of excess and waste in administrative spending such as large pay hikes for top administrators in the face of budget cuts and the creation of pointless committees. How much these incidents contribute to the cost of tuition is somewhat questionable as the evi...
Also worth considering is the idea that increased administration is needed to deal with new regulations and/or norms. For example, many schools have added positions dealing with diversity, sexual assault, and disability accommodations.
Anyway, this was surprising to me because I would naively predict that, if we were talking about almost any other product, we would begin to see less bureaucratically bloated competitors offering it for cheaper and driving the price down. What's unique about university that stops this from happening?
We do see competition.
ETA: Two additional points:
A lot of the spending/waste is on prestige projects like new buildings, rather than on administrators.
If you're wondering why nobody is challenging the top schools, I have three responses:
1) It would require too high an initial investment.
2) It would require attracting top students, which is more difficult given scholarships and lack of reputation.
3) This college is trying to do so.
Question: where can I upload jailbroken PDFs so they're public & Google-visible?
For a job, I compiled ~100MB of lipreading research, some of it extremely obscure & hard to find (I also have some Japanese literature PDFs in a similar situation); while I have no personal interest in the topic and do not want to host the PDFs indefinitely on gwern.net, I feel it would be a massive waste to simply delete them.
I cannot simply put them in a Dropbox public folder because they wouldn't show up in Google, and Scribd is an abomination I despise.
(crosspost from Google+)
Open comment thread, Monday July 29th
If it's worth saying, but not worth its own top-level comment in the open thread, it goes here.
Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)
Does anyone know why GiveWell is registered with the IRS under a different name (Clear Fund)? I am including a link to their recommendation for the AMF on a wedding registry and have already gotten a question about why their name differs.
Any LW readers living in India? I recently moved here (specifically, New Delhi) from the United States and I'm interested in the possibility of a local meet-up.
The usual suggestion for cases like this is to unilaterally announce a meetup in a public place, and bring a book in case no one shows up. Best case: awesome people doing awesome things. Worst case: you spend a couple hours reading.
I typed up the below message before discovering that the term I was looking for is "data dredging" or "hypothesis fishing." Still decided to post below so others know.
Is there a well-known term for the kind of error that pre-registration of scientific studies is meant to avoid? I mean the error where an experiment is designed to test something like "This drug cures the common cold," but then when the results show no effect, the researchers repeatedly do the analysis on smaller slices of the data from the experiment, until eventually they have the results "This drug cures the common cold in males aged 40-60, p<.05," when of course that result is just due to random chance (because if you do the statistical tests on 20 subsets of the data, chances are one of them will show an effect with p<.05).
It's similar to the file drawer effect, except it's within a single experiment, not many.
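A minimal simulation of that subgroup-slicing failure mode, assuming numpy and scipy are available (the "20 subsets" number is taken from the comment above; for independent tests the analytic answer is 1 - 0.95^20 ≈ 0.64):

```python
# Simulate "no true effect, but analysts test 20 subgroups and report any p < .05".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = 2000
false_positive_runs = 0

for _ in range(trials):
    # 20 subgroups, drug vs. placebo, no real effect in any of them
    pvals = [
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(20)
    ]
    if min(pvals) < 0.05:
        false_positive_runs += 1

print(f"Experiments with at least one 'significant' subgroup: {false_positive_runs / trials:.2f}")
print(f"Analytic approximation 1 - 0.95**20 = {1 - 0.95**20:.2f}")
```

In a real study the subgroups overlap and the tests aren't independent, so the exact rate differs, but the qualitative point (you will "find" effects that aren't there) survives.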
As I understand applying Bayes to science, the aim is to direct research into areas that make sense. However, sometimes valuable discoveries are made by accident.
Is there any way to tell whether your research is over-focused? To improve the odds of noticing valuable anomalies?
In the past, people like Eliezer Yudkowsky and, I think, Luke Muehlhauser have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined? I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
Does anyone know of a good textbook on public relations (PR), or a good resource/summary of the state of the field? I think it would be interesting to know about this, especially with regards to school clubs, meetups, and online rationality advocacy.
I have a question about the Simulation Argument.
Suppose that it's some point in the future, and we're able to run conscious simulations of our ancestors. We're considering whether or not to run such a simulation.
We are also curious about whether we are in a simulation ourselves, and we know that knowledge that civilizations like ours run ancestor simulations would be evidence for the proposition that we ourselves are in a simulation.
Could the choice at this point whether or not to run a simulation be used as a form of acausal control over the probability that we ourselves are living in a simulation?
I strongly recommend not using "stupid". It's less distracting to just point out mistakes without using insults.
Waffled between putting this here and putting this in the Stupid Questions thread:
Why is the default assumption that a superintelligence of any type will populate its light cone?
I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).
But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has the opp...
Seems prudent to do.
Unless it values the existence of stars more than it values other things it could do with that energy.
If I believe that automation causing mass unemployment is around the corner (10-20 years), what do I do or invest in now to prepare for it?
Warning: politics, etc., etc.
What do conservative political traditions squabble over?
My upbringing and social circles are moderately left-wing. There's a well-observed failure mode in these circles, not entirely dissimilar to what's discussed in Why Our Kind Can't Cooperate, where participants sabotage cooperation by going out of their way to find things to disagree about, presumably for moral posturing and virtue-signalling reasons.
In recent years I have become fairly sceptical of intrinsic differences between political groups, which leads me to my opening question: what do conservative political traditions squabble over? I find it hard to imagine what form this sort of self-sabotaging moral posturing might take. Can anyone who grew up on the other side of the fence offer any insight?
We used to nutshell it as Trads vs Libertarians in college. Here are the relevant strawmen each group has of the other. (Hey, you asked what the fights look like!)
Trads see libertarians as: Just as prone to utopian thinking as those wretched liberals, or else shamelessly callous. Either they really do believe that people will just be naturally good without laws or institutions (what piffle!) or they just don't care about the casualties and trust that they themselves will rise to the top of their brutal, anarchic meritocracy. Not to mention that some of them could be more accurately described as libertines and just want an excuse for license.
Libertarians see trads as: Hidebound sticks-in-the-mud. They'd rather have people following arbitrary rules than thinking critically. They despise modernity, but don't actually have a positive vision of what they want instead (they're prone to ruefully shaking their heads and saying "Everything went downhill after the 1950s, or the American Revolution, or the Fall of Man"). By proposing ridiculous schemes (a surprising number have monarchist sympathies!) and washing their hands of governance in a show of 'epistemological modesty' and 'subsidiarity' they wriggle out of putting principles into practice.
I believe I've encountered a problem with either Solomonoff induction or my understanding of Solomonoff induction. I can't post about it in Discussion, as I have less than 20 karma, and the stupid questions thread is very full (I'm not even sure if it would belong there).
I've read about SI repeatedly over the last year or so, and I think I have a fairly good understanding of it. Good enough to at least follow along with informal reasoning about it, at least. Recently I was reading Rathmanner and Hutter's paper, and Legg's paper, due to renewed interest in ...
(link) Effective Altruism: Professionals donate expertise. Toyota sends some industrial engineers to improve NYC's Food Bank charity.
Does anyone else have problems with the appearance of Less Wrong? My account is somehow at the bottom of the site and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn't change my browser (Safari) or anything else. I think.
I'm looking for a reading recommendation on the topic of perverse incentives, especially incentives that cause people to do unethical things. Yes, I checked "The Best Textbooks on Every Subject" thread and have recorded all the economics recommendations of interest. However, as interested as I am in reading about economics in general, my specific focus is on perverse incentives, especially ones that cause people to do unethical things. I was wondering if anyone has explored this in depth or happens to know a term for "perverse incentives ...
You can't act on any object. You change its environment, and the object will flow.
Kate Stone, TED talk, paper with electronics
This seems like an interesting half truth since you can't change the environment without acting on objects. However, it's possible that the environment is a richer tool of influence than acting directly, and also possible that people are less apt to resent the environment for not doing what they want, therefore less likely to try to force it.
Random idea for the Lobian obstacle that turned out not to work, but I decided to post anyway on the off chance someone can salvage it:
Inspired by the human brain's bicameral system: Split the system into two, A and B. A has ((B proves C) -> C), B has ((A proves C) -> C). A, trusting B, can build B' as strong as B; B, trusting A, can build A' as strong as A.
Obvious flaw: A has ((B proves ((A proves C) -> C)) -> ((A proves C) -> C)), so A has ((A proves C) -> C), and vice versa.
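Spelling that flaw out slightly more formally (a sketch, writing □_A φ for "A proves φ"; the final Löb step is the standard reason this is fatal, not something stated in the comment):

```latex
\begin{align*}
&\text{A's schema: } \Box_B\varphi \to \varphi
  \qquad\qquad \text{B's schema: } \Box_A\varphi \to \varphi \\
&\text{B proves its own schema instance } \Box_A C \to C,
  \text{ and A can verify that it does:} \\
&\qquad A \vdash \Box_B(\Box_A C \to C) \\
&\text{Instantiating A's schema at } \varphi := (\Box_A C \to C)\text{:} \\
&\qquad A \vdash \Box_B(\Box_A C \to C) \to (\Box_A C \to C) \\
&\text{Hence } A \vdash \Box_A C \to C \text{ for every } C,
  \text{ so by L\"ob's theorem } A \vdash C \text{ for every } C.
\end{align*}
```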
Recent effective altruism job openings (all close within the next 10 days):
Careers analyst at 80,000 Hours
Director of Communications, Director of Community and Director of Research at Giving What We Can
Researcher and Community Manager at Effective Animal Altruism
I'm planning on taking Algorithms Part 1 and Part 2 through Coursera to complement my first year computer science (software engineering) courses. I am very much interested in collaborating with other LWers. The first course in the sequence starts August 23. Please let me know if you are interested, and if so, what form of collaboration you would be most comfortable with (weekly "book club" posts in Discussion, IRC study hall, etc.).
About the course:
...An introduction to fundamental data types, algorithms, and data structures, with emphasis on
My priors tell me that the chance of someone with an IQ of 100 netting $100k a year from statistical arbitrage opportunities in online poker is less than 2%, and likely to diminish quickly as the years go by.
A few reasons include: Bots are confirmed to be winning players, in full ring and NL games. Online poker is mature and has better players. Rake. The new "fish" to grinder ratio is getting smaller.
Does anyone have thoughts to the contrary? Perhaps more sophisticated software to catch botters? Or new regulations legalizing online poker to increase new fish?
Is there a word processing program for Windows that's similar to TextEdit on a Mac? I always preferred TextEdit over programs like Microsoft Word or Pages because it loads quickly and you can easily fit it in a small window for writing quick notes. In other words, it's "small", I guess you would say.
Right now I'm using CopyWriter, which is pretty good, but it has two problems: 1) no spell check, and 2) no autosave. Mostly I just use Evernote and Google Docs though.
Any suggestions?
A few months ago, I decided to try a "gather impossible problems, hold off on proposing solutions until we've thoroughly understood them, then solve them" 'campaign'. The problems I came up with focused on blindness, so I started the discussion here rather than LW. I was surprised when I looked it up today and found that it only lasted for four days--I had been sure it had managed to drag on a little longer than that.
I recall someone tried something similar on LW, though considerably less focused and more willing to take things they couldn't be e...
Has anyone else's inbox icon been behaving erratically (i.e., turning red even when there were no new messages or comment replies)?
You might be confused because pressing the "back" button to a time when the message was unread will make the symbol turn red.
This is a call for Less Wrong users who do not wish to personally identify as rationalists, or do not perfectly relate to the community at a cultural level:
What do you use Less Wrong for? And, what are some reasons for why you do not identify as a rationalist? Are there some functions that you wish the community would provide which it otherwise does not?
...Apparently you can't delete your own comments with no replies anymore.
Whoops, accidentally hit comment instead of show help. Disregard for now.
What do conservative political traditions squabble over?
At least in the US since the '60s, another way to divide conservatives has been along the party's three big issues: economic classical liberalism, social conservatism, and foreign-policy neo-conservatism. The moderate, short-term goals of these groups are sometimes in alignment, but their desired end-states look very different:
Neo-conservatives want a big military and an aggressive foreign policy, whereas classical liberals hate war and want to shrink the military, along with the rest of the government; and religious conservatives (generally- the prevalence