Luke wrote a detailed description of his approach to beating procrastination (here if you missed it).
Does anyone know if he's ever given an update anywhere as to whether or not this same algorithm works for him to this day? He seems to be very prolific and I'm curious about whether his view on procrastination has changed at all.
Yvain has started a nootropics survey: https://docs.google.com/forms/d/1aNmqagWZ0kkEMYOgByBd2t0b16dR029BoHmR_OClB7Q/viewform
Background: http://www.reddit.com/r/Nootropics/comments/1xglcg/a_survey_for_better_anecdata/ http://www.reddit.com/r/Nootropics/comments/1xt0zn/rnootropics_survey/
I hope a lot of people take it; I'd like to run some analyses on the results.
I wrote a logic puzzle, which you may have seen on my blog. It has gotten a lot of praise, and I think it is a really interesting puzzle.
Imagine the following two player game. Alice secretly fills 3 rooms with apples. She has an infinite supply of apples and infinitely large rooms, so each room can have any non-negative integer number of apples. She must put a different number of apples in each room. Bob will then open the doors to the rooms in any order he chooses. After opening each door and counting the apples, but before he opens the next door, Bob must accept or reject that room. Bob must accept exactly two rooms and reject exactly one room. Bob loves apples, but hates regret. Bob wins the game if the total number of apples in the two rooms he accepts is as large as possible. Equivalently, Bob wins if the single room he rejects has the fewest apples. Alice wins if Bob loses.
Which of the two players has the advantage in this game?
This puzzle is a lot more interesting than it looks at first, and the solution can be seen here.
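(If you want to poke at the mechanics before looking at the solution, here is a minimal sketch of the game loop in Python. The finite 0-99 apple range, the random door order, and the deliberately naive "accept the first two rooms" Bob are simplifications for illustration only; none of this is the intended solution.)

```python
import random

def random_alice():
    """An illustrative Alice: three distinct apple counts drawn from 0..99
    (the real Alice may use any distinct non-negative integers)."""
    return random.sample(range(100), 3)

def naive_bob(count, opened_so_far, n_accepted, n_rejected):
    """A deliberately naive Bob: accept the first two rooms he opens.
    Must return 'accept' or 'reject', ending with two accepts and one reject."""
    if n_rejected == 1:
        return "accept"
    if n_accepted == 2:
        return "reject"
    return "accept"

def play(alice=random_alice, bob=naive_bob):
    rooms = alice()
    opened, accepted, rejected = [], [], []
    for count in random.sample(rooms, len(rooms)):  # door order (here: random)
        opened.append(count)
        choice = bob(count, opened, len(accepted), len(rejected))
        (accepted if choice == "accept" else rejected).append(count)
    # Bob wins iff the single rejected room had the fewest apples
    return rejected[0] == min(rooms)

wins = sum(play() for _ in range(10_000))
print(f"Naive Bob wins about {wins / 10_000:.0%} of rounds against this Alice")
```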
I would also like to see some of your favorite logic puzzles. If you have any puzzles that you really like, please comment and share.
2.5 years ago I made an attempt to calculate an upper bound for the complexity of the currently known laws of physics. Since the issue of physical laws and complexity keeps coming up, and my old post is hard to find with google searches, I'm reposting it here verbatim.
...I would really like to see some solid estimates here, not just the usual hand-waving. Maybe someone better qualified can critique the following.
By "a computer program to simulate Maxwell's equations" EY presumably means a linear PDE solver for initial boundary value problems. The same general type of code should be able to handle the Schroedinger equation. There are a number of those available online, most written in Fortran or C, with the relevant code size about a megabyte. The Kolmogorov complexity of a solution produced by such a solver is probably of the same order as its code size (since the solver effectively describes the strings it generates), so, say, about 10^6 "complexity units". It might be much lower, but this is clearly an upper bound.
One wrinkle is that the initial and boundary conditions also have to be given, and the size of the relevant data heavily depends on the desired pre
I've written a game (also on github) that tests your ability to assign probabilities to yes/no events accurately, using a logarithmic scoring rule (called a Bayes score on LW, apparently).
For example, in the subgame "Coins from Urn Anise," you'll be told: "I have a mysterious urn labelled 'Anise' full of coins, each with possibly different probabilities. I'm picking a fresh coin from the urn. I'm about to flip the coin. Will I get heads? [Trial 1 of 10; Session 1]". You can then adjust a slider to select a number a in [0,1]. As you adjust a, you adjust the payoffs that you'll receive if the outcome of the coin flip is heads or tails. Specifically you'll receive 1+log2(a) points if the result is heads and 1+log2(1-a) points if the result is tails. This is a proper scoring rule in the sense that you maximize your expected return by choosing a equal to the posterior probability that, given what you know, this coin will come out heads. The payouts are harshly negative if you have false certainty. E.g. if you choose a=0.995, you'd only stand to gain 0.993 if heads happens but would lose 6.644 if tails happens. At the moment, you don't know much about the coin, but as...
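(For anyone who wants to check the arithmetic, the scoring rule is easy to reproduce; here's a quick sketch of the payoff formula quoted above and the 0.995 example. This is just an illustration, not the game's actual code.)

```python
import math

def score(a: float, heads: bool) -> float:
    """Logarithmic (proper) scoring rule described above:
    1 + log2(a) if heads, 1 + log2(1 - a) if tails."""
    return 1 + math.log2(a if heads else 1 - a)

def expected_score(a: float, p: float) -> float:
    """Expected score when the true heads probability is p and you report a."""
    return p * score(a, True) + (1 - p) * score(a, False)

print(round(score(0.995, True), 3))    # 0.993  (gain if heads)
print(round(score(0.995, False), 3))   # -6.644 (loss if tails)

# Properness: for any true probability p, expected score is maximized
# by reporting a = p (checked here on a coarse grid).
p = 0.7
best = max((expected_score(a / 1000, p), a / 1000) for a in range(1, 1000))
print(best[1])                          # 0.7
```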
Brought to mind by the recent post about dreaming on Slate Star Codex:
Has anyone read a convincing refutation of the deflationary hypothesis about dreams - that is, that there aren't any? In the sense of nothing like waking experience ever happening during sleep; just junk memories with backdated time-stamps?
My brain is attributing this position to Dennett in one of his older collections - maybe Brainstorms - but it probably predates him.
Stimuli can be incorporated into dreams - for example, if someone in a sleep lab sees you are in REM sleep and sprays water on you, you're more likely to report having had a dream it was raining when you wake up. Yes, this has been formally tested. This provides strong evidence that dreams are going on during sleep.
More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.
I wrote a piece for work on quota systems and affirmative action in employment ("Fixing Our Model of Meritocracy"). It's politics-related, but I did get to cite a really fun natural experiment and talk about quotas as a tool for countering the availability heuristic.
An interesting quote, I wonder what people here will make of it...
...True rationalists are as rare in life as actual deconstructionists are in university English departments, or true bisexuals in gay bars. In a lifetime spent in hotbeds of secularism, I have known perhaps two thoroughgoing rationalists—people who actually tried to eliminate intuition and navigate life by reasoning about it—and countless humanists, in Comte’s sense, people who don’t go in for God but are enthusiasts for transcendent meaning, for sacred pantheons and private chapels. They hav
Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.
My standard reading speed is about 200 WPM (based on my eReader statistics; it varies by content). I can push myself to maybe 240, but it is not enjoyable (I wouldn't read fiction at this speed), and I get 450-500 WPM with RSVP.
My aim this year is to get myself to a 500+ WPM base speed (i.e. usable also for leisure reading and without RSVP). Is this even possible? Claims seem to be contradictory.
Does anybody have recommendations on systems th...
Something I recently noticed: steelmanning is popular on LessWrong. But the sequences contain a post called Against Devil's Advocacy, which argues strongly against devil's advocacy, and steelmanning often looks a lot like devil's advocacy. What, if anything, is the difference between the two?
Steelmanning is about fixing errors in an argument (or otherwise improving it), while retaining (some of) the argument's assumptions. As a result, the argument becomes better, even if you disagree with some of the assumptions. The conclusion of the argument may change as a result; what is fixed is only the question that it needs to address. Devil's advocacy is about finding arguments for a given conclusion, including fallacious but convincing ones.
So the difference is in the direction of reasoning and intent regarding epistemic hygiene. Steelmanning starts from (somewhat) fixed assumptions and looks for more robust arguments following from them that would address a given question (careful hypothetical reasoning), while devil's advocacy starts from a fixed conclusion (not just a fixed question that the conclusion would judge) and looks for convincing arguments leading to it (rationalization with allowed use of dark arts).
A bad aspect of a steelmanned argument is that it can be useless: if you don't accept the assumptions, there is often little point in investigating their implications. A bad aspect of a devil's advocate's argument is that it may be misleading, acting as filtered evidence for the chosen conclusion. In this sense, devil's advocates exercise the skill of coming up with misleading arguments, which might be bad for their ability to reason carefully in other situations.
An article on samurai mental tricks. Most of them will not be that surprising to LWers, but it is nice to see modern results have a long history of working.
Does anyone have advice for getting an entry-level software development job? I'm finding that a lot of them seem to want several years of experience, or a degree, while I'm self-taught.
Ignore what they say on the job posting, apply anyway with a resume that links to your Github, websites you've built, etc. Many will still reject you for lack of experience, but in many cases it will turn out the job posting was a very optimistic description of the candidate they were hoping to find, and they'll interview you anyway in spite of not meeting the qualifications on the job listing.
I got to design my first infographic for work and I'd really appreciate feedback (it's here: "Did We Mess Up on Mammograms?").
I'm also curious about recommendations for tools. I used Easel.ly, which is a WYSIWYG editor, but it was annoying in that I couldn't just tell it I wanted an m×n block of people icons, evenly spaced; I had to place them by hand instead.
A TEDx video about teaching mathematics: "Mathematics as a source of joy". It's in Slovak, so you have to select English subtitles. I had to share it, but I am afraid the video does not explain very much, and there is not much material in English to link to -- I only found two articles. So here is a bit more info:
The video is about an educational method of the Czech math teacher Vít Hejný; it is told by his son. Prof. Hejný created an educational methodology based mostly on Piaget, but specifically applied to the domain of teaching mathematics (elementary- and...
Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).
For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly neg...
You know how when you see a kid about to fall off a cliff, you shrug and don't do anything because the standards of discourse aren't as high as they could be?
Me neither.
A task with a better expected outcome is still better (in expected outcome), even if it's hopeless, silly, not as funny as some of the failure modes, not your responsibility or in some way emotionally less comfortable.
All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect whether something else is conscious, or to measure its degree of consciousness, assuming consciousness admits of degrees?
I have spent my life figuring other humans are probably conscious purely on an Occam's razor kind of argument that I am conscious and the most straightforward explanation for my similarities and grouping with all these other people is that they are in relevant respects just like me. But I have always thought that increasingly complex simulations of hu...
Is this going to become an even harder distinction to make as tech continues to get better?
Wei once described an interesting scenario in that vein. Imagine you have a bunch of human uploads, computer programs that can truthfully say "I'm conscious". Now you start optimizing them for space, compressing them into smaller and smaller programs that have the same outputs. Then at some point they might start saying "I'm conscious" for reasons other than being conscious. After all, you can have a very small program that outputs the string "I'm conscious" without being conscious.
So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It's not clear where the cutoff happens, or even if it's meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.
Also this scenario reopens the question of whether uploads are conscious in the first place! After all, the process of uploading a human mind to a computer can also be viewed as a compression step, which can fold constant computations into literal constants, etc. The usual justification says that "it preserves behavior at every step, therefore it preserves consciousness", but as the above argument shows, that justification is incomplete and could easily be wrong.
Paraphrased from #lesswrong: "Is it wrong to shoot everyone who believes Tegmark level 4?" "No, because, according to them, it happens anyway". (It's tongue-in-cheek, for you humorless types.)
I am still seeking players for a multiplayer game of Victoria 2: Heart of Darkness. We have converted from an earlier EU3 game, itself converted from CK2; the resulting history is very unlike our own. We are currently in 1844:
BBC Radio: Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.
I don't think it has already been posted here on LW, but SMBC has a wonderful little strip about UFAI: http://www.smbc-comics.com/?id=3261#comic
I'm interested in learning pure math, starting from precalculus. Can anyone give advice on what textbooks I should use? Here's my current list (a lot of these textbooks were taken from the MIRI and LW best textbook lists):
I'm w...
I advise that you read the first 3 books on your list, and then reevaluate. If you do not know any more math than what is generally taught before calculus, then you have no idea how difficult math will be for you or how much you will enjoy it.
It is important to ask what you want to learn math for. The last four books on your list are categorically different from the first four (or at least three of the first four). They are not a random sample of pure math, they are specifically the subset of pure math you should learn to program AI. If that is your goal, the entire calculus sequence will not be that useful.
If your goal is to learn physics or economics, you should learn calculus, statistics, analysis.
If you want to have a true understanding of the math that is built into rationality, you want probability, statistics, logic.
If you want to learn what most math PhDs learn, then you need things like algebra, analysis, topology.
I am going to organize a coaching course to learn JavaScript + Node.js.
My particular technology of choice is node.js because:
Has anyone else had one of those odd moments when you've accidentally confirmed reductionism (of a sort) by unknowingly responding to a situation almost identically to the last time or times you encountered it? For my part, I once gave the same condolences to an acquaintance who was living with someone we both knew to be very unpleasant, and also just attempted to add the word for "tomato" in Lojban to my list of words after seeing the Pomodoro technique mentioned.
After a brain surgery, my father developed anterograde amnesia. Think Memento by Chris Nolan. His reactions to different comments/situations were always identical. If I were to mention a certain word, it would always invoke the same joke. Seeing his wife wearing a certain dress always produced the same witty comment. He was also equally amused by his wittiness every time.
For several months after the surgery he had to be kept under tight watch, as he was prone to just doing something that had been routine pre-op, so we found a joke he found extremely funny and which he hadn't heard before the surgery, and we would tell it every time we wanted him to forget where he was going. So, he would laugh for a good while, get completely disoriented, and go back to his sofa.
For a long while, we were unable to convince him that he had a problem, or even that he had had the surgery (he would explain the scar away through some fantasy). And even when we managed, it lasted only for a minute or two. Since then, I've developed several signals I would use if I found myself in an isomorphic situation. I had already read HPMoR by that time, but have discarded Harry's lip-biting as mostly pointless in real life.
Are there any reasons for becoming utilitarian, other than to satisfy one's empathy?
EDIT: This particular site does margin trading differently to how I thought margin trading normally works. So... disregard everything I just said?
Bitcoin economy and a possible violation of the efficient market hypothesis. With the growing maturity of the Bitcoin ecosystem, there has appeared a website which allows leveraged trading, meaning that people who think they know which way the price is going can borrow money to increase their profits. At the time of writing, the bid-ask spread for the rates offered is 0.27% - 0.17% per day, which is 166% - 86% per ...
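(For reference, the per-year figures look like simple daily compounding; here's a quick sketch. The exact outputs differ slightly from the quoted numbers depending on how the daily rates were rounded.)

```python
def daily_to_annual(daily_rate, days=365):
    """Convert a daily interest rate to an annual rate by compounding."""
    return (1 + daily_rate) ** days - 1

print(f"{daily_to_annual(0.0027):.0%}")  # roughly 168%, close to the quoted figure
print(f"{daily_to_annual(0.0017):.0%}")  # roughly 86%
```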
How does solipsism change one's pattern of behavior, compared to believing that the people around you are actually sentient? I noticed that when you take enlightened self-interest into account, it seems that many behaviors don't change regardless of whether the people around you are sentient or not.
For example, if you steal from your neighbor, you can observe that you run the risk of him catching you, and thus you having to deal with consequences that will be painful or unpleasant. Similarly, assuming you're a healthy person, you have a conscience that makes you feel bad about certain thin...
I participated in an economics experiment a few days ago, and one of the tasks was as follows. Choose one of the following gambles, where each outcome has 50% probability:
Option 1: $4 definitely
Option 2: $6 or $3
Option 3: $8 or $2
Option 4: $10 or $1
Option 5: $12 or $0
I chose option 5, as it has the highest expected value. Asymptotically this is the best option, but for a single trial is it still the best option?
Technically, it depends on your utility function. However, even without knowing your utility function, I can say that for such a low amount of money, your utility function is very close to linear, and option 5 is the best.
Here's one interesting way of viewing it that I once read:
Suppose that the option you chose, rather than being a single trial, were actually 1,000 trials. Then, risk averse or not, Option 5 is clearly the best approach. The only difficulty, then, is that we're considering a single trial in isolation. However, when you consider all such risks you might encounter in a long period of time (e.g. your life), then the situation becomes much closer to the 1,000 trial case, and so you should always take the highest expected value option (unless the amounts involved are absolutely huge, as others have pointed out).
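(A quick sketch of that intuition, assuming the five options as listed: the single-trial expected values already rank option 5 first, and averaging over many independent gambles of this kind shows the realized average concentrating near the expected value.)

```python
import random
import statistics

options = {                      # (payoff if "heads", payoff if "tails"), 50/50 each
    1: (4, 4),
    2: (6, 3),
    3: (8, 2),
    4: (10, 1),
    5: (12, 0),
}

for k, (hi, lo) in options.items():
    print(f"Option {k}: expected value = {(hi + lo) / 2}")

def average_payoff(option: int, trials: int) -> float:
    """Average realized payoff over many independent plays of one option."""
    hi, lo = options[option]
    return statistics.mean(random.choice((hi, lo)) for _ in range(trials))

# With many independent gambles, the realized average for option 5
# concentrates around its expected value of 6:
print(average_payoff(5, 1_000))
```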
An Iterated Prisoner's Dilemma variant I've been thinking about —
There is a pool of players, who may be running various strategies. The number of rounds played is randomly determined. On each round, players are matched randomly, and play a one-shot PD. On the second and subsequent rounds, each player is informed of its opponent's previous moves; but players have no information about what move was played against them last round, nor whether they have played the same opponent before.
In other words, as a player you know your current opponent's move history — ...
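(Here's a minimal sketch of the tournament structure described above: random pairing each round, a randomly determined number of rounds, and each player seeing only its current opponent's public move history. The payoff matrix and the two placeholder strategies are illustrative choices, not part of the variant itself.)

```python
import random

def always_cooperate(opponent_history):
    return "C"

def grim_on_record(opponent_history):
    """Cooperate with opponents whose public record is clean, defect otherwise.
    Note: it reacts to the opponent's *public* history, not to what was played
    against you (which, in this variant, you never learn)."""
    return "D" if "D" in opponent_history else "C"

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tournament(strategies, max_rounds=200, continue_prob=0.98):
    players = [{"strategy": s, "history": [], "score": 0} for s in strategies]
    rounds = 0
    # Number of rounds is random (geometric stopping, capped at max_rounds).
    while rounds < max_rounds and random.random() < continue_prob:
        rounds += 1
        random.shuffle(players)                          # random matching
        for a, b in zip(players[0::2], players[1::2]):
            move_a = a["strategy"](b["history"])
            move_b = b["strategy"](a["history"])
            pay_a, pay_b = PAYOFFS[(move_a, move_b)]
            a["score"] += pay_a
            b["score"] += pay_b
            a["history"].append(move_a)
            b["history"].append(move_b)
    return sorted(((p["score"], p["strategy"].__name__) for p in players),
                  reverse=True)

print(tournament([always_cooperate] * 10 + [grim_on_record] * 10))
```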
Self-driving cars had better use (some approximation of) some form of acausal decision theory, even more so than a singleton AI, because the former will interact in PD-like and Chicken-like ways with other instantiations of the same algorithm.
I have been reviewing FUE hair transplants, and I would like LWers' opinion. I'm actually surprised this isn't covered, as it seems relevant to many users.
As far as I can tell, the downsides are:
The scarring is basically covered if you have two days' hair growth there, and I am fine with tha...
I'm looking into Bayesian Reasoning and trying to get a basic handle on it and how it differs from traditional thinking. When I read about how it (apparently) takes into account various explanations for observed things once they are observed, I was immediately reminded of Richard Feynman's opinion of Flying Saucers. Is Feynman giving an example of proper Bayesian thinking here?
Since people were pretty encouraging about the quest to do one's part to help humanity, I have a follow-up question. (Hope it's okay to post twice on the same open thread...)
Perhaps this is a false dichotomy. If so, just let me know. I'm basically wondering if it's more worthwhile to work on transitioning to alternative/renewable energy sources (i.e. we need to develop solar power or whatever else before all the oil and coal run out, and to avoid any potential disastrous climate change effects) or to work on changing human nature itself to better address ...
I am not sure if this deserves its own post. I figured I would post here and then add it to discussion if there is sufficient interest.
I recently started reading Learn You A Haskell For Great Good. This is the first time I have attempted to learn a functional language, and I am only a beginner in imperative languages (Java). I am looking for some exercises that could go along with the e-book. Ideally, the exercises would encourage learning new material in a similar order to how the book is presented. I am happy to substitute/complement with a different re...
Modafinil is prescription-only in the US, so to get it you have to do illegal things. However, I note that (presumably due to some legislative oversight?) the related drug Adrafinil is unregulated, you can buy it right off Amazon. Does anyone know how Adrafinil and Modafinil compare in terms of effectiveness and safety?
Andy Weir's "The Martian" is absolutely fucking brilliant rationalist fiction, and it was published in paper book format a few days ago.
I pre-ordered it because I love his short story The Egg, not knowing I'd get a super-rationalist protagonist in a radical piece of science porn that downright worships space travel. Also, fart jokes. I love it, and if you're an LW type of guy, you probably will too.
Would you prefer that one person be horribly tortured for eternity without hope or rest, or that 3^^^3 people die?
One person being horribly tortured for eternity is equivalent to that one person being copied infinite times and having each copy tortured for the rest of their life. Death is better than a lifetime of horrible torture, and 3^^^3, despite being bigger than a whole lot of numbers, is still smaller than infinity.
You are committing the nirvana fallacy. How many native speakers of English never make mistakes or never "pick an unnatural choice"?
For example, I know a woman who immigrated to the US as an adult and is fully bilingual. As an objective measure, I think she got a perfect score on the verbal section of the LSAT. She speaks better English than most "natives". She is not unusual.
Tell your French linguist to go into the countryside and listen to the French of the uneducated native speakers. Do they make mistakes?
I'm not talking about performance errors in general. I'm talking about the fact that it is extremely hard to acquire native-like competence wrt the semantics and pragmatics of the ways in which English allows one to express something about the future.
Your utterance of this sentence severely damages your credibility with respect to any linguistic issue. The proper way to say this is: she speaks ...