I am looking to set up a morning routine for myself and wanted to hear whether you have any unusual components in your morning routine that other people might benefit from.
One thing I might start experimenting with is a version of morning contemplation. Ancient Stoicism seems to recommend reflecting on one's principles in the morning, Christian tradition has morning prayers, and Benjamin Franklin reviewed his virtues every morning, so why not do a little personalised version of it? Things like the serenity prayer or Tarski's litany.
I have a terrible problem where I wake up from my alarm, turn off the alarm, then go back to sleep (I've missed several morning lectures this way). The solution I've been trialing is to put a glass of water and some caffeine pills on my bedside table when I go to sleep. That way, when I wake up I can turn off the alarm, take the pill and give in to the urge to put my head back on the pillow, confident that the caffeine will wake me up again a few minutes later. This has worked every time I've remembered to put out the pills.
I got this idea from someone else on LW but I've forgotten who, so credit to whoever it was.
My preferred solution for this problem is to have the alarm on the other side of the room so I have to actively get out of bed and walk over to it.
Then one day you're ill, or in the shower, or you go away on business and forget to disable the cron job, and BOOM no more files.
In the newest episode of Person of Interest there is an explicit mention of Friendly Artificial Intelligence, and the main character seems to have an attitude towards AI similar to the one found on LessWrong. The exact quote is: "Even if I had succeeded in creating a benevolent machine, as if any such thing could exist, never forget that even a so-called 'Friendly Artificial Super Intelligence' would be every bit as dangerous as an unfriendly one."
Since the LessWrongWiki is the second Google hit (at least for me) for both "Friendly Artificial Intelligence" and "Friendly Artificial Super Intelligence" (the first being this Wikipedia page, which explicitly mentions Eliezer in the second sentence), this might lead to an increased number of visitors? It should also be noted that in this series almost everything said about artificial intelligence is compatible with the ideas found on LessWrong (sometimes tweaked a little to make the series more interesting), so the series might well bring some of LessWrong's beliefs about AI to the attention of a broader audience.
For a family gift exchange game, I am giving a gift of the form: "25 dollars will be donated to a charity of your choice from this list"
Please help me form that list. My goal is to make the list feel as diverse as possible, so it feels like there is a meaningful decision, without sacrificing very much effectiveness.
My current plan is to take the 8 charities on GiveWell's top charities list and remove a couple that have the same mission as other charities on the list, maybe adding MIRI or other x-risk charities (give me ideas) whose effectiveness is very difficult to compare with the GiveWell charities.
What would you put on the list?
For those who are intending to have children: A recent study shows a substantial drop in IQ among children who were exposed to phthalates in the womb, a pattern that lasts at least until age 7. This is the first such study, so it may not hold up, but it seems worth noting, and it underscores how much low-hanging fruit there may still be in terms of environmental impacts on intelligence (similar to what happened with iodine in the 1920s and 30s).
Do we have a catalog of non-LessWrong rationality guides?
I know we have the list of rationality blogs, but I'm asking about a collection of material that teaches formalized rationality at an entry level and sits at a lower inferential distance than the Sequences.
I find it annoying that there is no way to view all the posts at once -- you can see either Main or Discussion posts, but not both at once. I think it would be good to add an "All" tab next to the tabs for Main and Discussion.
There is an undocumented way: all posts, all comments. But those URLs are not linked from anywhere.
When you go to GiveWell's Donate page, one of the questions is,
How should we use your gift? We may use unrestricted gifts to support our operations or to make grants, at our discretion:
And you can choose the options:
Grants to recommended charities
Unrestricted donation
I notice I'm reluctant to pick "Unrestricted," fearing my donation might be "wasted" on GiveWell's operations instead of going right to the charity. But that seems kind of strange. Choosing "Unrestricted" gives GiveWell strictly more options than choosing "Grants to recommended charities."
I think the nasty part of the Hard Problem of Consciousness is probably in finding a naturalistic explanation for how things come to seem subjectively objective: for why the wavelength of red feels from the inside like a built-in quality of the world rather than a perception generated by a mind in response to a stimulus. I think the "social processing theory of consciousness" doesn't quite explain this, at least not to my satisfaction.
Of course, the random thoughts I record in Open Thread are not liable to be high-quality.
Thinking about a quote from HPMOR (the podcast is quite good, if anyone is interested):
...But human beings had four times the brain size of a chimpanzee. 20% of a human's metabolic energy went into feeding the brain. Humans were ridiculously smarter than any other species. That sort of thing didn't happen because the environment stepped up the difficulty of its problems a little. Then the organisms would just get a little smarter to solve them. Ending up with that gigantic outsized brain must have taken some sort of runaway evolutionary process, something
Dark Triad people who excel at getting others to do their bidding might have higher than average IQs, but they generally don't go into STEM fields, where a high IQ can pay off.
If you're smart and "excel at getting others to do [your] bidding", you don't want to go into STEM, you want to go into management.
Is there some resource (for instance, in the LW sequences) where I can redirect people to get a quick, clean view into the Bayesian worldview of doing things, especially in science? When I read people say things like "Consensus doesn't matter in science!" I want to respond with "Well, consensus isn't everything, but being informed about the agreement in opinion of a large number of authorities in a subject should make you update your beliefs" but I find it hard to do that without then having to explain what "update your beliefs" actually means.
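For what it's worth, here is a toy worked example of that kind of update, as a minimal sketch; the prior and the likelihoods below are made-up numbers, purely for illustration:

```python
# A toy Bayes update on learning that most experts in a field agree
# with hypothesis H (all numbers are invented for illustration).
prior_H = 0.3                   # credence in H before hearing about the consensus
p_consensus_given_H = 0.8       # how likely broad expert agreement is if H is true
p_consensus_given_not_H = 0.2   # how likely broad expert agreement is if H is false

# Bayes' rule: P(H | consensus) = P(consensus | H) P(H) / P(consensus)
evidence = prior_H * p_consensus_given_H + (1 - prior_H) * p_consensus_given_not_H
posterior_H = prior_H * p_consensus_given_H / evidence
print(posterior_H)  # ~0.63: consensus isn't everything, but it should move you
```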
I note that in my profile I can see posts that I have up or downvoted under "Liked" and "Disliked." Is there a way to get a similar list of comments that I've up and downvoted?
There's lots of material here in comment threads that I think is worth remembering but is tricky to re-find.
When turning in their final papers by email, a few of my students added positive comments about my class, such as "Thanks for a great semester, I really enjoyed this class". All of my students will do an anonymous (to me) evaluation of my class. I imagine that both of these occurrences are commonplace among college students. It would be interesting to see if students who compliment the class also give high anonymous evaluations, and this might tell us something about the honesty of praise. This would be an easy study for a college to conduct.
I'm considering trying out a paleo diet. I'm not totally convinced by all the arguments for it, but are there arguments against paleo that say it is actually bad for you? By "bad" I mean are there worthwhile arguments that switching to paleo is worse for your weight and longevity than your typical American diet?
Link to the podcast, with transcript.
The mention of MIRI, regarding (bad) AI forecasts:
Russ Roberts: [It] seems to me that there are a lot of people in AI who think [strong AI development is] only a matter of time, and that the consequences are going to be enormous. They're not going to just be like a marginal improvement or marginal challenge. They "threaten the human race."
Gary Marcus: Before we get to those consequences, which I actually do think are important, I'll just say that there's this very interesting [?] by a place called MIRI in Berkeley, MIRI (Machine Intelligence Research Institute). And what they found is that they traced people's prediction of how far away AI is. And the first thing to know is what they found is, the central prediction, I believe it was the modal prediction, close to the median prediction, was 20 years away. But what's really interesting is that they then went back and divided the data by year, and it turns out that people have always been saying it's 20 years away. And they were saying it was 20 years away in 1955 and they're saying it now. And so people always think it's just around the corner. The joke in the field is that if you say it's 20 years away, you can get a grant to do it. If you said it was 5 years away, you'd have to deliver it; and if 100 years, nobody's going to talk to you.
The Anti-Drug
I've seen that a lot of drugs seem to act like "gratification borrowers": they take gratification/happiness from the future and spend it all on the present, sometimes extremely quickly, then leave you feeling miserable for a certain duration, the "low" or "hangover".
I was wondering whether there was any drug that did the opposite, that functioned like delayed gratification: a drug that makes you feel utterly miserable at first, then eventually leaves you with a long-lasting feeling of satisfaction, accomplishment, and joy.
Does anyone here know of such a thing?
Thought: I think Pascal's Mugging can't harm boundedly rational agents. If an agent is bounded in its computing power, then what it ought to do is draw some bounded number of samples from its mixture model of possible worlds, and then evaluate the expected value of its actions in the sample rather than across the entire mixture. As the available computing power approaches infinity, the sample size approaches infinity, and the sample more closely resembles the true distribution, thus causing the expected utility calculation to approach the true expected utility.
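As a minimal sketch of that sampling idea (the mixture of worlds, the utilities, and the sample size below are all invented for illustration, not a claim about real probabilities):

```python
import random

# Hypothetical mixture model over possible worlds: almost all probability
# mass is on an "ordinary" world; a vanishingly small amount is on a
# "mugger" world with an astronomical payoff for paying up.
WORLDS = [
    {"name": "ordinary", "prob": 1 - 1e-9, "pay": -5.0, "refuse": 0.0},
    {"name": "mugger",   "prob": 1e-9,     "pay": 1e15, "refuse": 0.0},
]

def sample_world(rng):
    """Draw one world from the mixture according to its probability."""
    r = rng.random()
    cumulative = 0.0
    for world in WORLDS:
        cumulative += world["prob"]
        if r < cumulative:
            return world
    return WORLDS[-1]

def sampled_expected_utilities(n_samples, seed=0):
    """Estimate expected utility of 'pay' vs 'refuse' from a bounded sample."""
    rng = random.Random(seed)
    sample = [sample_world(rng) for _ in range(n_samples)]
    pay = sum(w["pay"] for w in sample) / n_samples
    refuse = sum(w["refuse"] for w in sample) / n_samples
    return pay, refuse

# A bounded agent with a modest sample almost never draws the mugger world,
# so the sampled expected utility of paying is just the small cost of paying.
print(sampled_expected_utilities(n_samples=10_000))  # pay ~ -5, refuse ~ 0
```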
It seems to me that Islamist terrorists are trying to maximize defection from the larger society, and they're even able to recruit Kurds. Admittedly, they're only getting a tiny proportion of people, but why are they getting anyone at all?
Would anyone care to take a crack at whether there are conditions under which this makes sense in terms of game theory?
An AI researcher with good academic credentials is wanted. If you are interested, please send me a private message.
I was unsure where to post this question; let me know whether it merits its own thread. I also apologize if this post is a bit messy.
If I had to title this post, I might name it, "Optimizing College Activities for a Future Programmer".
I'm a college student at an American school. It's quite a lot of work -- more than I can do in the time given, and I have a study routine that's more efficient than a lot of people I know. I was handling it relatively well last year and still getting enough sleep, exercise, socializing, etc. -- basically all the thi...
In the same way that we have other periodic threads, how good or bad an idea would it be to have a periodically posted thread where we could post our recent rationality failures?
Predictionbook has been linked to and discussed here before. I'm one of the (few) active users, and I'm curious why more people who are regulars here don't use it or don't use it frequently. People who don't use Predictionbook, why don't you? Part of why I am curious is that if there are interface or similar issues then now might be a good time to speak up since Jayson Virissimo is working on a similar service here.
People who don't use Predictionbook, why don't you?
I'm not really sure how to answer this. Predictionbook is just one of thousands of websites I don't use.
Trends at 2050?
How many linear or logarithmic trends can be forecast out as far as the year 2050? I once found some graphs of CPU speed and storage per dollar per year, but seem to have lost them; and now I'm curious what other trend-lines might be worth thinking about.
(I've been writing a story, and would be happy to make whatever such details I include as plausible as possible, while also acknowledging that relying on such trends is a mug's game.)
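For what it's worth, here is a minimal sketch of projecting one such exponential (log-linear) trend out to 2050; the observation points below are invented placeholders, not real data:

```python
import math

# Hypothetical (year, value) observations of some exponentially growing
# quantity, e.g. storage per dollar; these numbers are placeholders only.
observations = [(2000, 1.0), (2005, 8.0), (2010, 60.0), (2014, 250.0)]

# Fit log(value) = a + b * year by ordinary least squares.
xs = [year for year, _ in observations]
ys = [math.log(value) for _, value in observations]
n = len(observations)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

def extrapolate(year):
    """Project the fitted log-linear trend out to the given year."""
    return math.exp(a + b * year)

print(extrapolate(2050))  # wildly uncertain, as the parent comment notes
```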
Anyone here know about the International High IQ Society? I'm wondering if it would be worthwhile for me to try to get into it. Several of the free online IQ tests I've taken have put me at a few points below 120, so I probably wouldn't be able to get into Mensa, but I might be able to squeeze out ~6-7 more points from practice in order to qualify for the IHIQS, which only requires a score of 124 (95th percentile).
How do Bostrom type simulation arguments normally handle nested simulations? If our world spins off simulation A and B, and B spins off C and D, then how do we assign the probabilities of finding ourselves in each of those? Also troubling to me is what happens if you have a world that simulates itself, or simulations A and B that simulate each other. Is there a good way to think about this?
Request for help: I'm looking for a study on scope insensitivity to use in one of my college entrance essays. If I recall correctly, the study showed something like: when asked how much people would pay to save one girl, out of a group of eight in a tribe, from a disease, the amount was over double what people would pay to save all eight.
I checked the standard scope insensitivity post, and tried my google-fu, but can't remember where I originally heard it.
Every night as I'm lying in bed trying to fall asleep, I think of five or six things I want to remember habitually or in the short run, so I get up and write them down. This costs me at least 25 minutes of sleep. I'm sure I'm not the only one with this problem; does anyone know good ways to store or record these ideas?
The self-indication assumption seems to violate some pretty basic probability. Suppose that there are two possible universes: universe A has one person, and universe B has 99 people. Suppose that the prior probability for each is 50%. Under SIA, you are 1% likely to be each one of these 100 possible people. But that means that universe B has 99% probability, even though we just assigned it 50% probability. It can't change without updating on evidence, which we never did. What happened?
We updated on the fact that we exist. SSA does this a little too: specifically, the fact that you exist means that there is at least one observer. One way to look at it is that there is initially a constant number of souls that get used to fill in the observers of a universe. In this formulation, SIA is the result of the normal Bayesian update on the fact that soul-you woke up in a body.
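A worked version of that update, using the numbers from the example above:

```python
# Prior over universes, before conditioning on waking up in a body.
prior = {"A": 0.5, "B": 0.5}
observers = {"A": 1, "B": 99}

# SIA: weight each universe by how many observers it contains,
# i.e. update on the fact that soul-you woke up in a body at all.
weighted = {u: prior[u] * observers[u] for u in prior}
total = sum(weighted.values())
posterior = {u: weighted[u] / total for u in weighted}

print(posterior)  # {'A': 0.01, 'B': 0.99}
```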
Would it be possible for me to buy an actual printed version of HPMoR in some way? (Or at least, of the first X chapters of HPMoR, where X is at least a significant part of the story).
Thanks!
Is there a significant difference between the mathematical universe hypothesis and Hegelian absolute idealism? Both seem to claim the primacy of ideas over matter (mind in the case of Hegel, and math in the case of the MUH), and conclude that matter should follow the laws of ideas. The MUH just goes one step further and says that if there are different kinds of math, there should be different kinds of universes, while Hegel never claimed the same about different minds.
Reduced consumption of animal products, more specifically meat, should help my health, and should help both my purse and the global poor through reduced food prices. For reducing meat consumption in general, it seems easy to just replace the meat in a lot of dishes with cheese, or to substitute some scrambled eggs for meaty dishes. What can I do for variety? I am especially looking for cheap, fast, and/or convenient methods to put together a meal. I am very willing to trade off fast for the other two, as I can listen to audiobooks or similar while preparing food.
To coin a phrase, "What has government done to our money?"
Because this seems to conflict with the "Austrian" doomsaying that the U.S. dollar would "collapse." Now it looks like the U.S. dollar has gone into the opposite of a "collapse" because a dollar can buy a lot more oil today than it could a few months back.
I want to open up the debate again whether to split donations or to concentrate them in one place.
One camp insists on donating all your money to a single charity with the highest current marginal effectiveness. The other camp claims that you should split donations for various reasons, ranging from concerns like "if everyone thought like this" to "don't put all your eggs in one basket." My position is firmly in the second camp, as it seems to me obvious that you should split your donations just as you split your investments, because of risk aversion.
Your utility function need not be completely linear, just locally linear. If your utility function measures the total good done in the world, your effect on the world will be small enough for the function to be effectively linear over that range.
Most people don't want to optimize the total good done, but instead care about the amount of good they do. People donate to charity until the marginal utility they derive from purchasing moral satisfaction falls below the marginal utility they derive from purchasing other things. In this case, diversification makes sense, because utility you assign to good you're responsible for is very non-linear.
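As a toy illustration of that last point: assume two hypothetical charities with different effectiveness and a square-root "moral satisfaction" utility (both assumptions are mine, purely for the example, not anything the parent comment commits to):

```python
# Good done per dollar by two hypothetical charities (made-up numbers).
effectiveness = {"charity_1": 3.0, "charity_2": 1.0}
budget = 100.0

def total_good(x1):
    """Linear utility in total good done."""
    x2 = budget - x1
    return effectiveness["charity_1"] * x1 + effectiveness["charity_2"] * x2

def moral_satisfaction(x1):
    """Concave (square-root) utility in the good *you* did at each charity."""
    x2 = budget - x1
    return (effectiveness["charity_1"] * x1) ** 0.5 + \
           (effectiveness["charity_2"] * x2) ** 0.5

allocations = [float(i) for i in range(101)]  # dollars given to charity_1
print(max(allocations, key=total_good))          # 100.0: everything to the better charity
print(max(allocations, key=moral_satisfaction))  # 75.0: an interior split is optimal
```

With the linear objective the optimum is a corner solution (give everything to the more effective charity), while the concave objective makes splitting optimal, which is the sense in which non-linear "good I'm responsible for" utility justifies diversification.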
If you're giving to charity at all, that's awesome. Do what motivates you.
Horrible news!!! Organic molecules have just been found on Mars. It appears that the great filter is ahead of us.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Previous Open Thread
Next Open Thread
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.