If it's worth saying, but not worth its own post, even in Discussion, it goes here.
Would LessWrong readers be interested in an intuitive explanation of special relativity?
Of course any scifi fan knows about Mazer Rackham's very own "There and Back Again." Why does that work? "Special relativity!" I hear you say. But what does that actually mean? It probably makes you feel all science-like to say that out loud, but maybe you want a belief more substantial than a password. I did.
Relativity also has philosophical consequences. Metaphysics totally relies on concepts of space and time, yet most philosophers never learn relativity. One of my favorite quotes...
"... in the whole history of science there is no greater example of irony than when Einstein said he did not know what absolute time was, a thing which everyone knew." - J. L. Synge.
If I were to teach relativity to a group of people who were less interested in passing the physics GRE and more interested in actually understanding space and time, I would do things a lot differently from how I learned them. I'd focus on visualizing the Lorentz transformations rather than calculating them. I'd focus on the spacetime interval, Minkowski spacetime, and the easy conversion factor between space and time (it's called c).
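For a taste of the approach, here's the kind of formula I'd build the visuals around (just a sketch, using one common sign convention):

```latex
% The invariant spacetime interval between two events, in the (-,+,+,+) convention.
% All inertial observers compute the same \Delta s^2, even though they disagree
% about \Delta t and \Delta x separately; c is the unit conversion between them.
\Delta s^2 = -(c\,\Delta t)^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
```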
I love to teach and write and doodle, but I'm not sure whether LessWrong is an appropriate forum for this topic. I don't want to dance in an empty or hostile theater, dontchaknow.
Do people think superrationality, TDT, and UDT are supposed to be useable by humans?
I had always assumed that these things were created as sort of abstract ideals, things you could program an AI to use (I find it no coincidence that all three of these concepts come from AI researchers/theorists to some degree) or something you could compare humans to, but not something that humans can actually use in real life.
But having read the original superrationality essays, I realize that Hofstadter makes no mention of using this in an AI framework and instead thinks about humans using it. And in HPMoR, Eliezer has two eleven-year-old humans using a bare-bones version of TDT to cooperate (I forget the chapter this occurs in). In the TDT paper, Eliezer still makes no mention of AIs, instead talking about "causal decision theorists" and "evidential decision theorists" as though they were just people walking around with opinions about decision theory, not platonic formalized abstractions of decision theories. (I don't think he uses the phrase "timeless decision theorists.")
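For concreteness, the bare-bones version amounts to something like this (my own toy sketch, not Hofstadter's or Eliezer's formalism):

```python
# A toy sketch of the "bare-bones" cooperation move, not a formal decision theory:
# cooperate exactly when you know the other party runs the same decision
# procedure you do, so that your two choices are logically linked.

def barebones_tdt(my_procedure: str, their_procedure: str) -> str:
    """Return 'C' (cooperate) or 'D' (defect) in a one-shot prisoner's dilemma."""
    if my_procedure == their_procedure:
        return "C"  # symmetric agents land on (C, C) or (D, D); (C, C) pays more
    return "D"      # no symmetry argument available, so fall back to defection
```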
I think part of the rejection people have to these decision theories might be from ho...
The idea of risk compensation says that if you have a seatbelt in your car, you take more risks while driving. There seem to be many similar "compensation" phenomena that are not related to risk:
Building more roads might not ease congestion because people switch from public transport to cars.
Sending aid might not alleviate poverty because people start having more kids.
Throwing money at a space program might not give you Star Trek because people create make-work.
Having more free time might not make you more productive because you'll just w
I may be missing something here, but I haven't seen anyone connect utility function domain to simulation problems in decision theory. Is there a discussion I missed, or an obvious flaw here?
Basically: I can simply respond to the AI that my utility function does not include a term for the suffering of simulated me. Simulated me (which I may have trouble telling is not the "me" making the decision) may end up in a great deal of pain, but I don't care about that. The logic is the same logic that compels me to, for example, attempt to actually save the ...
Less Wrong frequently suggests that people become professional programmers, since it's a fun job that pays decently. If you're already a programmer, but want to get better, you should consider Hacker School, which is now accepting applications for its fall batch. It doesn't cost anything, and there are even grants available for living expenses.
Full disclosure: it's run by friends of mine, and my wife attended.
Inspired by the relatively recent discussions of Parfit's Repugnant Conclusion, I started to wonder how many of us actually hold that, ceteris paribus, a world with more happy people is better than a world with fewer happy people. I am not that much interested in the answer generated by the moral philosophy you endorse, but rather in the intuitive gut feeling: if you learned from a sufficiently trustworthy source about the existence of a previously unknown planet (1) with a billion people living on it, all of them reasonably (2) happy, would it feel like a go...
Upvote this if learning about the new planet full of happy people feels like good news to you.
Not sure if this is acceptable in an open thread but oh well.
I am currently a university student and all of my expenses are paid for by government aid and my parents. This fall I will start tutoring students and earn some money from it. Now, what should I do with it? Should I save it for later in life? Should I spend it on toys or whatnot? Part of both? I would like your opinions on that.
You should probably spend it on experiences that will improve you and that you will remember throughout your life. Going to see shows, or joining activities such as martial arts (I favor Capoeira) or juggling, can give you fun skills you can use indefinitely as well as introduce you to large numbers of potentially awesome people. Not only are friendships and relationships super important for long-term happiness, but spending money on experiences as opposed to possessions is also linked to fonder memories.
If you want to buy toys, I recommend spending money on things you will use a lot, such as a new phone, a better computer, or something like a kindle.
In general I approve of saving behavior, but to be honest, the money you make tutoring kids is not gonna be a super relevant amount for your long-term financial security.
I call this the EverQuest Savings Algorithm when I do it. The basis is that in EverQuest, and in most games in general, the amount of money you can make at a given level is insignificant compared to the income you will be making in a few more levels, so it never really seems to make sense to save unless you've maxed out your level. The same thing happens in real life: all your pre-first-job savings are rendered insignificant by your first-job savings, and subsequently your pre-first-post-college-job savings are obsoleted by your first post-college job.
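With made-up numbers, purely for illustration:

```python
# Purely illustrative figures: two years of tutoring savings measured
# against a hypothetical first post-college salary.
tutoring_per_month = 200                       # hypothetical tutoring income
tutoring_savings = tutoring_per_month * 24     # 4,800 saved over two years

first_job_per_month = 3500                     # hypothetical starting salary
print(tutoring_savings / first_job_per_month)  # ~1.4 months of salary
```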
What's that site where you can precommit to things and then if you don't do them it gives your money to $hated-political-party?
This was inspired by the recent Pascal's mugging thread, but it seems like a slightly more general and much harder question. It's sufficiently hard that I'm not even sure where to start looking for the answer, but I guess my first step is to try to formalize the question.
From a computer programming perspective, it seems like a decision AI might need a few notations for probabilities and utilities which do not map to actual numbers. For instance, assume a decision AI capable of assessing probability and utility uses RAM to do so, and has a finite amount...
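One way such a notation might look (a sketch of my own, since I haven't formalized this):

```python
# A sketch of a probability notation with an explicit "below resolution" value:
# the AI can notice that a probability underflowed its representation instead
# of silently rounding it to zero or to the smallest representable float.
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Exact:
    value: float  # an ordinary representable probability

@dataclass
class BelowResolution:
    pass          # smaller than anything the finite RAM can represent

Probability = Union[Exact, BelowResolution]

def expected_utility(p: Probability, utility: float) -> Optional[float]:
    if isinstance(p, Exact):
        return p.value * utility
    return None   # undefined under this notation; must be handled as a special case
```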
I have never really used a budget. I want to try, even though I make enough and spend little enough that it's not an active problem. I've been pointed to YNAB... but one review says "YNAB is not for you if ... [you’re] not in debt, you don’t live paycheck to paycheck and you save money fast enough. If it ain’t broke, don’t fix it." I have data on Mint for a year, so I have a description of my spending. The part I'm confused about is the specifics of deciding what normatively I "should" spend in various categories. My current plan is pro...
Has anyone from CfAR contacted the authors of "Giving Debiasing Away"? They at least claim to be interested in implementing debiasing programs, and CfAR is a bit short on people with credentials in psychology.
More well-done rationality lite from Cracked, this time on generalizing from fictional evidence and narrative bias.
I have a question about a nagging issue I have in probability:
The conditional probability can be expressed thus: p(A|B) = p(A ∩ B) / p(B). However, the proofs I've seen of this rely on restricting your initial sample space to B. Doesn't this limit the use of this equivalency to cases where you are, in fact, conditioning on B - that is, you can't use this to make inferences about B's conditional probability given A? Or am I misunderstanding the proof? (Or is there another proof I haven't seen?)
(I can't think of a case where you can't make inferences about B given A, but I'm having trouble ascertaining whether the proof actually holds.)
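For concreteness, here is the symmetric version I have in mind (it assumes p(A) > 0 and p(B) > 0):

```latex
% The definition applies symmetrically:
p(A \mid B) = \frac{p(A \cap B)}{p(B)}, \qquad
p(B \mid A) = \frac{p(A \cap B)}{p(A)}
% Eliminating p(A \cap B) between the two gives Bayes' theorem,
% which is what licenses inference about B given A:
p(B \mid A) = \frac{p(A \mid B)\, p(B)}{p(A)}
```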
I've been pondering a game: an iterated prisoner's dilemma with extended rules revolving around trading information.
Utility points can be used between rounds for one of several purposes: sending messages to other agents in the game, reproducing, storing information (information is cheap to store, but must be re-stored every round), hacking, and securing against hacking.
There are two levels of iteration; round iteration and game iteration. A hacked agent hands over its source code to the hacker; if the hacker uses its utility to store this information unti...
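A minimal skeleton of the between-round phase as I imagine it (action names and costs are placeholders, not a finished design):

```python
# A skeleton of the between-round spending phase; the action names and
# costs below are illustrative placeholders only.
from dataclasses import dataclass, field

COSTS = {"message": 1, "reproduce": 10, "store": 1, "hack": 5, "secure": 3}

@dataclass
class Agent:
    source_code: str
    utility: int = 0
    stored: dict = field(default_factory=dict)  # must be re-paid for every round

def between_rounds(agent: Agent, actions: list) -> None:
    """Spend utility on chosen between-round actions, skipping any the agent
    cannot afford. Assumes every name in `actions` appears in COSTS."""
    for name in actions:
        cost = COSTS[name]
        if agent.utility >= cost:
            agent.utility -= cost
            # ... dispatch to the handler for `name` here ...
```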
9 months ago, I designed something like a rationality test (as in biological rationality, although parts of it depend on prior knowledge of concepts like expected value). I'll copy it here; I'm curious whether all my questions will get answered correctly. Some of the questions might be logically invalid - please tell me if they are and explain your arguments (I didn't intend any question to be logically invalid). Also, certain bits might be vague - if you don't understand something, it's likely my fault. Feel free to skip any number of questions and sele...
I've been working on candidates to replace the home page text, about page text, and FAQ. I've still got more polishing I'd like to do, but I figured I'd go ahead and collect some preliminary feedback.
Candidate home page blurb vs current home page blurb (starts with "Thinking and deciding...").
Feel free to edit the candidate pages on the wiki, or send me suggestions via personal message. Harsh criticism is fine. It's possible that the existing versions are better...
Do any LWers have familiarity with speed reading, and any recommendations or cautions about it?
Is it possible to embed JavaScript code into articles? If yes, how? I was thinking about doing some animations to illustrate probability.
FMA fans: for no particular reason I've written an idiosyncratic bit of fanfiction. I don't think I got Ed & Al's voice right, and if you don't mind reading bad fanfiction, I'd appreciate suggestions on improving the dialogue.
It's getting close to a year since we did the last census of LW (Results). (I actually thought it had been longer until I checked.) Is it time for another one? I think about once a year is right, but we may be growing or changing fast enough that more often than that is appropriate. Ergo, a poll:
Edit: If you're rereading the results and have suggestions for how to improve the census, it might be a good idea to reply to this comment.
I was planning to do one in October of this year (though now that it's been mentioned, I might wait till January as a more natural "census point").
If someone else wants to do one first, please get in contact with me so we can make it as similar to the last one as possible while also making the changes that we agreed were needed at the time.
What (if anything) really helps to stop a mosquito bite from itching? And are there any reliable methods for avoiding bites, apart from DEET? I'll use DEET if I have to, but I'd rather use something less poisonous.
Does anyone have any recommendations on learning formal logic? Specifically natural deduction and the background to Gödel's incompleteness theorem.
I have a lot of material on the theory, but I find it a very difficult thing to learn; it doesn't respond well to standard learning techniques because of the mixture of specificity and deep concepts you need to understand to move forward.
Would LessWrong readers be interested in an intuitive explanation of special relativity?
I think intuitive explanations of physics are awesome, though there already seem to be several pretty great ones on the internet for special relativity. For example, see here, here, and here.
Are you aware of these other explanations? What would you do differently/better than them? Maybe there's another topic not as well covered, and you could fill that gap? (These are just rhetorical questions to spark your thinking; no need to actually answer me.)
If you do pursue this project, then do let us know. Best of luck!
(Disclaimer: I'm not a physicist. My univ...