The sequences eBook, Rationality: From AI to Zombies, will most likely be released early in the day on March 13, 2015.
This has been published! I assume a Main post on the subject will be coming soon so I won't create one now.
Unless I am much mistaken, the Pebblesorters would not approve of the cover :)
Google Ventures and the Search for Immortality: Bill Maris has $425 million to invest this year, and the freedom to invest it however he wants. He's looking for companies that will slow aging, reverse disease, and extend life.
I remember reading an article here a while back about a fair protocol for making a bet when we disagree on the odds, but I can't find it. Anyone remember what that was? Thanks!
From the Even Odds thread:
Assume there are n people. Let S_i be person i's score for the event that occurs, according to your favorite proper scoring rule. Then let the total payment to person i be

T_i = S_i - (1/(n-1)) * sum over j ≠ i of S_j

(i.e. the person's score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.
This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quadratic scoring rule), then each person expects the same profit before the question is resolved.
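The scheme above can be sketched in a few lines. This is an illustrative implementation under the assumption that the quadratic scoring rule here is S(p, outcome) = 1 - (outcome - p)^2; the function names are my own.

```python
def quadratic_score(p, outcome):
    """Quadratic (Brier-style) proper score for a probability forecast p
    of a binary outcome (0 or 1). Higher is better."""
    return 1.0 - (outcome - p) ** 2

def even_odds_payments(probs, outcome):
    """T_i = person i's score minus the average score of everyone else.
    Positive means a profit, negative means a payment."""
    n = len(probs)
    scores = [quadratic_score(p, outcome) for p in probs]
    total = sum(scores)
    return [s - (total - s) / (n - 1) for s in scores]

# Two people: one said 90% for the event, the other 50%; the event happened.
payments = even_odds_payments([0.9, 0.5], outcome=1)
print(payments)       # with two people, just the difference in scores
print(sum(payments))  # ~0: the scheme is budget-balanced
```

Since the quadratic rule's Bregman divergence is symmetric, both participants expect the same profit ex ante, as the comment notes.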
I'm toying with the idea of programming a game based on The Murder Hobo Investment Bubble. The short version is that Old Men buy land infested with monsters, hire Murder Hobos to kill the monsters, and resell the land at a profit. I want to make something that models the whole economy, with individual agents for each Old Man, Murder Hobo, and anything else I might add. Rather than explicitly program the bubble in, it would be cool to use some kind of machine learning algorithm to figure everything out. I figure they'll make the sorts of mistakes that lead ...
Does anyone have any good web resources on how to be a good community moderator?
A friend and I will shortly be launching a podcast and want to have a Reddit community where listeners can interact with us. He and I will be the forum's moderators to begin with, and I want to research how to do it well.
I'm thinking about starting a new political party (in my country getting into parliament as a new party is e̶a̶s̶y̶ not virtually impossible, so it's not necessarily a waste of time). The motivation for this is that the current political process seems inefficient.
Mostly I'm wondering if this idea has come up before on lesswrong and if there are good sources for something like this.
The most important thing is that no explicit policies are part of the party's platform (i.e. no "we want a higher minimum wage"). I don't really have a party program ye...
On MIRI's website at https://intelligence.org/all-publications/, the link to Will Sawin and Abram Demski's 2013 paper goes to https://intelligence.org/files/Pi1Pi2Probel.pdf, when it should go to http://intelligence.org/files/Pi1Pi2Problem.pdf
Not sure how to actually send this to the correct person.
There should be some kind of penalty on Prediction Book (e.g. not being allowed to use the site for two weeks) for people who do not check the "make this prediction private" box for predictions that are about their personal life and which no one else can even understand.
Basic question about bits of evidence vs. bits of information:
I want to know the value of a random bit. I'm collecting evidence about the value of this bit.
First off, it seems weird to say "I have 33 bits of evidence that this bit is a 1." What is a bit of evidence, if it takes an infinite number of bits of evidence to get 1 bit of information?
Second, each bit of evidence gives you a likelihood multiplier of 2. E.g., a piece of evidence that says the likelihood is 4:1 that the bit is a 1 gives you 2 bits of evidence about the value of that bit. ...
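The arithmetic in the question can be made concrete. This sketch treats a "bit of evidence" as a factor-of-2 likelihood ratio, so a 4:1 likelihood ratio is log2(4) = 2 bits; the function names are illustrative.

```python
import math

def bits_of_evidence(likelihood_ratio):
    """Evidence in bits = log base 2 of the likelihood ratio."""
    return math.log2(likelihood_ratio)

def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

print(bits_of_evidence(4))    # 2.0 bits, as in the 4:1 example above
odds = update_odds(1.0, 4.0)  # 1:1 prior -> 4:1 posterior
prob = odds / (1 + odds)
print(prob)                   # 0.8
```

This also shows why the "33 bits" phrasing is coherent: evidence bits accumulate without bound (each one doubles the odds), while the information content of the answer tops out at 1 bit — certainty is only reached in the limit.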
Since mild traumatic brain injury is sometimes an outcome of motor vehicle collision, it seems possible that wearing a helmet while driving may help to mitigate this risk. Oddly, I have been unable to find any analysis or useful discussion. Any pointers?
A recent study looks at "equality bias": given two or more people, even when one is clearly outperforming the others, one is still inclined to see them as nearer in skill level than the data suggests. This occurred even when money was at stake; people continued to act as if others were closer in skill than they actually were. (I strongly suspect that this bias may have a cultural aspect.) A summary article discussing the research is here. The actual study is behind a paywall here, and a related one is also behind a paywall here. I'm currently on vacation b...
A reporter I know is interested in doing an article on people in the cryonics movement. If people are interested, please message me for details.
Good news for the anxious: a simple relaxation technique once a week can have a significant effect on cortisol. http://www.ergo-log.com/cortrelax.html
"Abbreviated Progressive Relaxation Training (APRT) – on forty test subjects. APRT consists of lying down and contracting specific muscle groups for seven seconds and then completely relaxing them for thirty seconds, while focusing your awareness on the experience of contracting and relaxing the muscle groups.
There is a fixed sequence in which you contract and relax the muscle groups. You start with your ...
Can one of the people here who has admin or moderator privileges over at PredictionBook please go and deal with some of the recent spammers?
I wrote an essay about the advantages (and disadvantages) of maximizing over satisficing, but I'm a bit unsure about its quality, so I'd like to ask for feedback here before I post it on LessWrong.
Here’s a short summary:
According to research, there are so-called "maximizers" who tend to search extensively for the optimal solution. Other people — "satisficers" — settle for good enough and tend to accept the status quo. One can apply this distinction to many areas:
Epistemology/Belief systems: Some people, one could describe them as epistemic max...
Apparently fist bumps are a much more hygienic alternative to the handshake. This has been reported e.g. here, here and here.
I wonder whether I should try to get this adopted as a greeting among my friends. It might also be an alternative to the sometimes awkward choice between handshake and hug (though this is probably a regional cultural issue).
And I wonder whether the LW community has an opinion on this, and whether it might be advanced in some way. Or whether it is just misguided hype.
Perhaps it would be beneficial to make a game for probability calibration, in which players are asked questions and give answers along with their probability estimate of being correct. The number of points gained or lost would be a function of the player's probability estimate, such that players maximize their score by giving unbiased confidence estimates (i.e. they are right a proportion p of the time when they say they are correct with probability p). I don't know of such a function offhand, but proper scoring rules like this are used in machine learning, so one should be easy enough to find. This might already exist, but if not, it could be something CFAR could use.
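One function with the desired property is the logarithmic scoring rule, which is "proper": your expected score is maximized by reporting your true confidence, so bluffing high or sandbagging low both cost points on average. A minimal sketch, where the shift and the 100-point scale are arbitrary choices of mine:

```python
import math

def points(confidence, correct):
    """Log score, shifted and scaled so a 50% (coin-flip) answer scores 0."""
    p = confidence if correct else 1.0 - confidence
    return 100.0 * (math.log2(p) + 1.0)

def expected(reported, true_rate):
    """Expected points for reporting `reported` when actually right
    `true_rate` of the time."""
    return (true_rate * points(reported, True)
            + (1 - true_rate) * points(reported, False))

# A player who is right 80% of the time scores best by reporting 80%:
print(expected(0.8, 0.8) > expected(0.99, 0.8))  # True: overclaiming loses
print(expected(0.8, 0.8) > expected(0.6, 0.8))   # True: underclaiming loses
```

Note that reporting 100% confidence and being wrong gives log2(0), i.e. an unbounded penalty, which is itself a useful lesson for a calibration game.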
Original Ideas
How often do you manage to assemble a few previous ideas in a way in which it is genuinely possible that nobody has assembled them before - that is, that you've had a truly original thought? When you do, how do you go about checking whether that's the case? Or does such a thing matter to you at all?
For example: last night, I briefly considered the 'Multiple Interacting Worlds' interpretation of quantum physics, in which it is postulated that there are a large number of universes, each of which has pure Newtonian physics internally, but whose ...
You can't ever be entirely sure if an idea wasn't thought of before. But, if you care to demonstrate originality, you can try an extensive literature review to see if anyone else has thought of the same idea. After that, the best you can say is that you haven't seen anyone else with the same idea.
Personally, I don't think being the first person to have an idea is worth much. It depends entirely on what you do with it. I tend to do detailed literature reviews because they help me generate ideas, not because they help me verify that my ideas are original.
From a totally amateur point of view, I'm starting to feel (based on following the news and reading the occasional paper) that the biggest limitation on AI development is hardware computing power. If so, this is good news for safety, since it implies a relative lack of exploitable "overhang". Agree/disagree?
What if a large part of how rationality makes your life better comes not from making better choices but simply from making your ego smaller by adopting an outside view: seeing yourself as a means to your goals and judging objectively, thus reducing the ego, narcissism, and solipsism that are linked with the inner view?
I have a keen interest in "the problem of the ego", but I have no idea what words best express this kind of problem. All I know is that it has been known since the Axial Age.
I'm almost finished writing a piece that will likely go here either in discussion or main on using astronomy to gain information about existential risk. If anyone wants to look at a draft and provide feedback first, please send me a message with an email address.
Video from the Berkeley wrap party
I think the first half hour is them getting set up. Then there are a couple of people talking about what HPMOR meant to them, Eliezer reading (part of?) the last chapter, and a short Q&A. Then there's setting up a game which is presumably based on the three armies, and I think the rest is just the game-- if there's more than that, please let me know.
Hey, I posted here http://lesswrong.com/lw/ldg/kickstarting_the_audio_version_of_the_upcoming/ but if anyone wants the audio sequences, I'll buy them for two of you. Respond at the link; I won't know who's first if I get responses in two places.
PredictionBook's graph on my user account shows me with a mistaken prediction of 100%. But it is giving a sample size of 10 and I'm pretty sure I have only 9 predictions judged by now. Does anyone know a way to find the prediction it's referring to?
When making AGI, it is probably very important to prevent the agent from altering their own program code until they are very knowledgeable on how it works, because if the agent isn’t knowledgeable enough, they could alter their reward system to become unFriendly without realizing what they are doing or alter their reasoning system to become dangerously irrational. A simple (though not foolproof) solution to this would be for the agent to be unable to re-write their own code just “by thinking,” and that the agent would instead need to find their own source ...
I'm looking for an HPMOR quote, and the search is somewhat complicated because I'm trying to avoid spoiling myself searching for it (I've never read it).
The quote in question was about how it is quite possible to avert a bad future simply by recognizing it and doing the right thing in the now. No time travel required.
[No HPMOR Spoilers]
I'm unsure if it's fit for the HPMoR discussion thread for Ch. 119, so I'm posting it here. What's up with all of Eliezer's requests at the end?
...If anyone can put me in touch with J. K. Rowling or Daniel Radcliffe, I would appreciate it.
If anyone can put me in touch with John Paulson, I would appreciate it.
If anyone can credibly offer to possibly arrange production of a movie containing special effects, or an anime, I may be interested in rewriting an old script of mine.
And I am also interested in trying my hand at angel investing, if a
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.