This post is shameless bragging:
I donated two days of pay to the Schistosomiasis Control Initiative. As always, this is incredibly easy to do. If you would like to do so, here is a link:
Should we listen to music? This seems like a high-value thing to think about.* Some considerations:
Music masks distractions. But we can get the same effect through alternatives such as white noise, calming environmental noise, or ambient social noise.
Music creates distractions. It causes interruptions. It forces us to switch our attention between tasks. For instance, listening to music while driving increases the risk of accidents.
We seem to enjoy listening to music. Anecdotally, when I've gone on "music fasts", music starts to sound much better and I develop cravings for music. This may indicate that music enjoyment is a treadmill system, such that listening to music does not produce lasting improvements in mood. (That is, if enjoyment stems from relative change in quality/quantity of music and not from absolute quality/quantity, then we likely cannot obtain a lasting benefit.)
Frequency of music-listening correlates (.18) with conscientiousness. I'd guess the causation's in the wrong direction, though.
Listening to random music (e.g. a multi-genre playlist on shuffle) will randomize emotion and mindstate. Entropic influences on sorta-optimized things (e.g. mindstate) are usually harmful.
I went through the literature on background music in September 2012; here is a dump of 38 paper references. Abstracts can be found by searching here and I can provide full texts on request.
Six papers that I starred in my reference manager (with links to full texts):
Get RescueTime or something similar and flip a coin every day to decide whether or not to listen to music. After a while patterns might emerge.
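A minimal sketch of how such a self-experiment could be run and analyzed, assuming a hypothetical hand-kept log file `music_log.csv` (columns `date`, `music`, `productive_minutes`, the last copied from RescueTime's daily report):

```python
import csv
import random
import statistics

# Morning: flip the coin that decides whether today is a music day.
print("Music today?", "yes" if random.random() < 0.5 else "no")

# Later: compare productivity on music vs. no-music days.
# Assumes a hand-kept log "music_log.csv" with columns date, music (0/1),
# productive_minutes (copied from RescueTime's daily productive-time report).
music_days, silent_days = [], []
with open("music_log.csv") as f:
    for row in csv.DictReader(f):
        minutes = float(row["productive_minutes"])
        (music_days if row["music"] == "1" else silent_days).append(minutes)

print("mean productive minutes on music days: ", statistics.mean(music_days))
print("mean productive minutes on silent days:", statistics.mean(silent_days))
```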
Yeah. I may not feel as strongly as you about this, but I still feel music is something intrinsically valuable to me. At least something about it is, and I haven't yet found a better substitute for it. If I stop listening to music entirely, I feel like the world is a bit more devoid of value to me. It might make sense to talk about this for those who don't feel strongly about the matter, but for me personally this starts to drift into Straw Vulcan territory.
App Academy has been discussed here before and several Less Wrongers have attended (such as ChrisHallquist, Solvent, Curiouskid, and Jack).
I am considering attending myself during the summer and am soliciting advice pertaining to (i) maximizing my chance of being accepted to the program and (ii) maximizing the value I get out of my time in the program given that I am accepted. Thanks in advance.
EDIT: I ended up applying and just completed the first coding test. Wasn't too difficult. They give you 45 minutes, but I only needed < 20.
EDIT2: I have reached the interview stage. Thanks everyone for the help!
EDIT3: Finished the interview. Now awaiting AA's decision.
EDIT4: Yet another interview scheduled...this time with Kush Patel.
EDIT5: Got an acceptance e-mail. Decision time...
EDIT6: Am attending the August cohort in San Francisco.
I work at App Academy, and I'm very happy to discuss App Academy and other coding bootcamps with anyone who wants to talk about them with me.
I have previously Skyped LWers to help them prepare for the interview.
Contact me at bshlegeris@gmail.com if interested (or in comments here).
A public service announcement.
If you only rarely peek out from underneath your rock and don't know about Heartbleed, you should bother to find out. Additional info e.g. here. A pretty basic tool to check servers is here.
Notable vulnerable services were, for example, Gmail and Yahoo Mail.
List of affected sites with recommendations on which to change your password. Unfortunately, you should probably also change your password on any other site where you reuse the same one.
It's a good time to do an Expected Utility calculation!
If you think that p(having your accounts compromised) × (pain if accounts are compromised) > 1 × (inconvenience of changing passwords), then change 'em! (A toy version of this calculation is sketched below.)
Also, might be a good opportunity for you to start using a password manager like LastPass
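To make the comparison above concrete, here is a toy version of that expected-utility check. Every number is made up purely for illustration; plug in your own estimates:

```python
# Toy expected-utility comparison for changing a password after Heartbleed.
# All numbers below are invented for illustration.
p_compromise = 0.02          # chance the account gets compromised if you do nothing
pain_if_compromised = 500.0  # disutility of a compromised account (arbitrary units)
inconvenience = 2.0          # disutility of changing the password (same units)

expected_loss_if_ignored = p_compromise * pain_if_compromised  # 10.0
certain_cost_of_changing = 1.0 * inconvenience                 # 2.0

if expected_loss_if_ignored > certain_cost_of_changing:
    print("Change the password.")
else:
    print("Skipping it is defensible.")
```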
Inspired by economics lolcats, I guess we should have some rationality lolcats. Here are a few quick ideas:
Two big cats next to each other, a third smaller cat in front of them or hiding somewhere aside. "Consider the third alternative"
One cat standing on hind legs, other cat crouching. "If P(H|E) > P(H) ... then P(H|~E) < P(H)" (the identity behind this caption is spelled out just after the list)
Cat examining a computer mouse. "Iz mouse 'by definishun' ... still can't eat"
Cat ripping apart paper boxes. "Stop compartmentalizing"
Cat ripping apart a map. "The map is not the territory"
Cat riding a vacuum cleaner ... something about Friendly AI.
Kittens riding a dog. "Burdensome details"
Cat looking suspiciously at a whirlpool in a bathtub. "Resist the affective spiral"
Or simply a picture of some smart cat (cat with glasses?) and some applause-light texts, like "All your Bayes are belong to us"
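For anyone wondering about the caption on the second idea above: it is the conservation-of-expected-evidence identity, which follows in one line from the law of total probability.

```latex
% P(H) is a weighted average of the two conditional probabilities:
%   P(H) = P(H | E) P(E) + P(H | ~E) P(~E),  with  P(E) + P(~E) = 1.
% If P(H | E) > P(H), the average can only come out to P(H) if the other term
% pulls in the opposite direction, i.e. P(H | ~E) < P(H)  (assuming 0 < P(E) < 1).
P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E)
```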
I am not sure what the proper procedure is for creating these: specifically, whether there is some good source of legally usable cat images, what the correct font is, and whether there are tools for conveniently adding text to pictures. Does anyone have experience with this?
Does anyone else here have bizarre/hacky writing habits?
I discovered Amphetype, a learn-to-type application that allows you to type passages from anything that you get as a text file. But I've started to use it to randomly sample excerpts from my own writing. The process of re-typing it word for word makes me actually re-process it, mentally speaking, and I often find myself compelled to actually re-write something upon having re-typed it.
Something similar that I've had positive results with is to print out a draft, open a new file, and make myself transcribe the printed draft into that new file.
Are there any listings of rationalist houses and/or Less Wrong users looking for an apartment?
If not, do you guys think it should be a feature that is added to lesswrong.com? IMO, it's something that has the potential to improve a lot of lives, and doesn't take that much effort to implement. So ROI-wise, it seems like something worth doing.
What's so special about HPMoR?
Some people seem to think that it is more than just a decent read: that it is genre-breaking, that it transcends the rules of ordinary fiction. Some people change their life-pattern after reading HPMoR. Why?
For some context on who is asking this question: I've read 400 pages or more of HPMoR; as well as pretty much everything else that Eliezer has written.
I can't speak for others, but I love HPMoR. I honestly believe it's one of the best pieces of fiction I've ever read, so I'll try to describe my own reasons.
Tropes and Plot Devices: I've read a lot of sci-fi and fantasy and HPMoR avoids a lot of the downfalls of the genre such as dei ex machina, whiny/angsty heroes, and phlebotinum/unobtainium. Eliezer is familiar enough with common tropes that he does a great job of applying them in the right contexts, subverting them interestingly, and sometimes calling them out and making fun of them directly.
The First Law of Rationalist Fiction: Roughly, that characters should succeed by thinking in understandable, imitable ways, not by inexplicable powers or opaque "bursts of insight" that don't really explain anything. After hearing this ideal stated outright and seeing it in practice, a lot of other fiction I've read (and, unfortunately, written) seems a lot less satisfying. Eliezer does a fantastic job at giving a look into the characters' minds and letting you follow their thought patterns. This makes it even more satisfying when they succeed and even more crushing when they fail.
Application of the Sequences: As someo
I'm also somewhat confused by this. I love HPMoR and actively recommend it to friends, but to the extent Eliezer's April Fools' confession can be taken literally, characterizing it as "you-don't-have-a-word genre" and coming from "an entirely different literary tradition" seems a stretch.
Some hypotheses:
One more hypothesis after reading other comments:
HPMoR is a new genre where every major character either has no character flaws or is capable of rapid growth. In other words, the diametric opposite of Hamlet, Anna Karenina, or The Corrections. Rather than "rationalist fiction", a better term would be "paragon fiction". Characters have rich and conflicting motives, so life isn't a walk in the park despite their strengths. Still, everyone acts completely unrealistically relative to life-as-we-know-it by never doing anything dumb or against their interests. Virtues aren't merely labels and obstacles don't automatically dissolve, so readers could learn to emulate these paragons through observation.
This actually does seem at odds with the western canon, and off-hand I can't think of anything else that might be described in this way. Perhaps something like Hikaru no Go? Though I haven't read them, maybe Walter Jon Williams' Aristoi or Iain Banks' Culture series?
It's one of the only fictional works I can read without having to constantly ignore obvious things the protagonists should be doing. It's really, really funny.
"Atlas Shrugged is the greatest novel that has ever been written, in my judgment, so let's let it go at that."
— Nathaniel Branden, quoted in a 1971 interview in Reason magazine.
why some people are induced to change their life by it (perhaps only because it piques their interest for other material on LessWrong)
I'm also kind of surprised by this, but... actually, how rare is it really for people to say "X caused me to change my life"?
I do know for a fact that people have changed their lives based on canon Harry Potter, with hundreds of people becoming obsessed with different character pairings etc. So maybe it isn't too surprising that it would happen with HPMoR, at least to a few people.
A weird fact about humanity and mass society is that virtually anything that reaches a large enough audience will wind up with some obsessive fans. As an example, dozens of women pledged their undying devotion to Richard Ramirez, the "Night Stalker."
why Eliezer has described his own work as "fictional literature from what looks like an entirely different literary tradition."
Characters in HPMOR do things for rational reasons. Smart characters are smart and make their decisions based on careful thought instead of unexplained flashes of insight.
That's not something that happens in normal fiction. If you think that's not new, which works of fiction do you consider to have the same quality?
why some people are induced to change their life by it (perhaps only because it piques their interest for other material on LessWrong)
Because HPMOR often has morals to teach.
There are a lot of atheists who are essentially like Harry's father. They wouldn't run experiments to test whether magic exists but simply assume that it doesn't exist and get angry with everyone who claims magic exists. Through a well-written story, they might update in the direction of empiricism.
It teaches a version of science that is about experiments and not about reading authoritative papers. That might raise, for at least a few readers, the question of why they aren't doing science in their lives.
The narrative about taking heroic responsibility is strong. ...
If I had to guess, I'd guess that it targets a particular kind of audience that most fiction isn't targeted at, and consequently appeals to that audience more than other excellent books targeted elsewhere.
An idea: a rationality hackathon.
From what I see, it seems like rationalists don't act on ideas often enough. To help people get the motivation to act on ideas, I sense that a hackathon would be effective. People would talk and group together to prototype different ideas, and at the end of the hackathon, participants would vote on the best ideas, and hopefully this would spark some action.
I guess what makes this different from the typical hackathon is:
1) Participants would be Less Wrong readers, or people part of other rationality-minded communities.
2) The goal would be to start things that are as beneficial as possible to the world (people at hackathons usually just want to build something "cool").
Thoughts?
A nice puzzle which I found in this Math Overflow page: Is there a position with a finite number of chess pieces on an infinite chessboard, such that White has a forced win in ω moves? The meaning of this is that White has a move such that, for every possible response of Black, White has a guaranteed checkmate in a number of moves bounded by a finite number N; but before Black's first move, we cannot put a bound on how large N might be.
The thread gives a solution, and also links to this paper, where higher ordinals and questions of computability in infinite chess are also considered.
Since one big problem with neural nets is their lack of analyzability, this geometric approach to deep learning neural networks seems likely to be useful.
A career question, asked with EA aims in mind, that will hopefully be relevant to many other LW members.
I am considering CS research as a career path, probably in one of AI/ML/distributed systems. I'm currently working as a software developer and I have done extensive MOOC work to pick up a CS background in terms of coursework, but my undergraduate degree is in math and I have no published research.
If I decide that getting a PhD is worthwhile and want to apply to good programs, where should I start building my resume and skills? An independent research project? Sufficiently impressive projects within my current company? Should I just get a master's and see how that goes?
Alternately, is it possible to get involved with industry research without a PhD? What would such a career path look like?
Thoughts on any or all of the above questions, suggestions for people to talk to, etc. would be much appreciated.
This article proposes a plausible mathematical model for the subjective perception of (long) time spans:
http://www.stochastik.uni-freiburg.de/~rueschendorf/papers/BrussRueSep3:Geron.pdf
This is interesting for the following reasons:
It is general and robust to definitions of time perception; in particular, it doesn't rely on a specific measurement or definition of events.
It is analogous to models of the perception of other stimuli.
The derived relationship suggests that time perception is logarithmic with age, and thus at age a, time seems to proceed only at a rate proportional to 1/a.
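To spell out the arithmetic behind that last point (a sketch of what a logarithmic relationship implies, not a reproduction of the paper's derivation):

```latex
% If cumulative subjective time grows logarithmically with age a,
%   S(a) = c \log a   for some constant c and a above early childhood,
% then the perceived rate of time at age a is
%   dS/da = c / a,
% so a year at 40 feels about half as long as a year at 20, and the span
% from age 10 to 20 feels roughly as long as the span from 20 to 40.
S(a) = c \log a \quad \Longrightarrow \quad \frac{dS}{da} = \frac{c}{a}
```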
This is not a Rationality Quote, but it might be about transhumanism if you squint. From a short Iron Man fanfic, Skybreak, cut for relevance:
...He tells the recruits that technology will never replace them. He tells them that flight will always be there for them, that flight has to be there for them, because they are masters of the sky and what the hell were humans meant to do, except fly?
…
He knows men will never stop flying. Not because the machines will stop coming, because they won't. Not because the future's gonna step aside for him, because it won't.
I am looking for resources related to meditation. I've made the same call before, but this time I am aiming to check the evidence more thoroughly.
I am particularly interested in (1) decent studies in general, (2) information relating to the dangers of meditation, and (3) resources that outline the differences between the different types of meditation.
(No need to send me the wiki page on meditation research, but if you are particularly impressed by a study there, that could be useful to me.)
I don't know whether this observation has been made before (if it has, certainly more succinctly), but I've noticed something about arguments with particularly irrational people (AGW deniers, Holocaust deniers, creationists, etc.): the required length of each subsequent reply explaining why they're wrong grows exponentially as the argument goes on, while the irrational side's replies can stay roughly constant in length. Entangled truths?
I'm at that point in life where I have to make a lot of choices about my future life. I'm considering doing a double major in biochemistry and computer science. I find both of these topics to be fascinating, but I'm not sure if that's the most effective way to help the world. I am comfortable in my skills as an autodidact, and I find myself to be interested in comp sci, biochemistry, physics, and mathematics. I believe that regardless which I actually major in, I could learn any of the others quite well. I have a nagging voice in my head saying that I shou...
I have a nagging voice in my head saying that I shouldn't bother learning biochemistry, because it won't be useful in the long term because everything will be based on nanotech and we will all be uploads. Is that a valid point?
Keeping in mind the biases (EDIT: but also the expertise) that my username indicates, I would say that is nearly exactly backwards: modification and engineering of biochemistry and biochemistry-type systems will actually occur (and already does), while what most people around here think of when they say 'nanotech' is a pipe dream. Biochemistry is the result of 4 gigayears of evolution showing the sorts of things that can actually be accomplished with atoms robustly, rather than as expensive, delicate, one-off demonstrations, and the most successful fine-scale engineering in the future will resemble it closely, if not be it.
Link: Appeals to evidence may decrease donations among donors primarily seeking warm fuzzies: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2421943 hat tip http://marginalrevolution.com/marginalrevolution/2014/04/does-greater-charitable-effectiveness-spur-more-donations.html
Moore’s law is no longer expected to deliver improved transistor cost scaling at or below the 20nm node...
For decades, semiconductor engineers have come to broad agreement about which technologies represented the best and most reliable scaling opportunities for future manufacturing...
If EUV and 450mm wafers don’t happen at 10nm, the “what happens next?” roadmap is a grab-bag of unresolved difficulties and potentially terrible economics.
Just wanted to say it is nice to see MIRI has a GitHub presence - https://github.com/machine-intelligence
Looking forward to seeing more.
I asked this in the last open thread and got no reply so here it is again:
Have there been any studies on how effective things like MOOCs and Khan Academy and so on are at teaching people?
Thought experiment. Imagine a machine that can create an identical set of atoms to the atoms that comprise a human's body. This machine is used to create a copy of you, and a copy of a second person, whom you have never met and know nothing about.
After the creation of the copy, 'you' will have no interaction with it. In fact, it's going to be placed into a space ship and fired into outer space, as is the copy of Person 2. Unfortunately, one spaceship is going to be very painful to be in. The other is going to be very pleasant. So a copy of you will experie...
Feyerabend's counterinduction and Bayesianism. Has anyone here thought about how these two views of science bear on each other?
Have LWers ever used Usenet? By that, I mean: connected to an NNTP server (not Google Groups) with a newsreader to read discussions and perhaps comment (not solely to download movies & files).
[pollid:666]
Your age is:
[pollid:667]
I am curious about the age distribution of Usenet use: I get the feeling that there is a very sharp fall-off in use with age, such that all nerds who grew up in the '70s-'80s used Usenet, but nerd teens from the mid-'90s to now have zero usage of it, except for a rare few who know it as a better BitTorrent.
(This was posted in the welcome thread, and I received a PM suggesting I post it here.)
I am looking for someone to help me with the Quantum Physics sequence. I have little background in physics and mathematics. For purposes of the sequence, you could probably consider me "intelligent but uninformed" or something like that.
To indicate the level on which I am having difficulties, take as an example the Configurations and Amplitude post.
I want opinions: Is Neal Stephenson's Anathem a work of rationalist fiction?
I think it is (I don't think it was written with the rationalist label in mind, it just meets the qualifying standards).
Science and the scientific method are core plot points.
Technology is central.
There is a transhumanist theme.
The main character is a scholar/scientist. He seems approximately realistic in his behavior and intellect. Maybe this makes him more of a traditional hero in the midst of far more rational and intelligent people than himself.
From "Bayes' Theorem":
...In front of you is a bookbag containing 1,000 poker chips. I started out with two such bookbags, one containing 700 red and 300 blue chips, the other containing 300 red and 700 blue. I flipped a fair coin to determine which bookbag to use, so your prior probability that the bookbag in front of you is the red bookbag is 50%. Now, you sample randomly, with replacement after each chip. In 12 samples, you get 8 reds and 4 blues. What is the probability that this is the predominantly red bag?
... a blue chip is exactly the
What getting a ratio of 1000004:1000000 tells you is that you're looking at the wrong hypotheses.
If you know absolutely-for-sure (because God told you, and God never lies) that you have either a (700,300) bag or a (300,700) bag and are sampling whichever bag it is uniformly and independently, and the only question is which of those two situations you're in, then the evidence does indeed favour the (700,300) bag by the same amount as it would if your draws were (8,4) instead of (1000004,1000000).
But the probability of getting anything like those numbers in either case is incredibly tiny and long before getting to (1000004,1000000) you should have lost your faith in what God told you. Your bag contains some other numbers of chips, or you're drawing from it in some weirdly correlated way, or the devil is screwing with your actions or perceptions.
("Somewhere close to 50:50" is correct in the following sense: if you start with any sensible probability distribution over the number of chips in the bags that does allow something much nearer to equality, then Pr((700,300)) and Pr((300,700)) are far closer to one another than either is to Pr(somewhere nearer to equality) and the latter is what you should be focusing on because you clearly don't really have either (700,300) or (300,700).)
Belief & double-blind randomized control group studies: response to IlyaShpitser
In a previous thread, IlyaShpitser said: "According to your blog, you don't believe in RCTs, right? What do you believe in?"
This is part of the problem I'm trying to address. Belief/non-belief are inappropriate locutions to use in terms not only of the double-blind randomized control group method (DBRCGM), but of models and methods of science in general. "Belief in" any scientific method is not even remotely relevant to science or the philosophy of science. ...
I'm a current App Academy student who started two weeks ago on Monday. I'd be happy to talk as well.