Every now and then I like to review my old writings so I can cringe at all the wrong things I wrote, and say "oops" for each of them. Here we go...
There was once a time when the average human couldn't expect to live much past age thirty. (Jul 2012)
That's probably wrong. IIRC, previous eras' low life expectancy was mostly due to high child mortality.
We have not yet mentioned two small but significant developments leading us to agree with Schmidhuber (2012) that "progress toward self-improving AIs is already substantially beyond what many futurists and philosophers are aware of." These two developments are Marcus Hutter's universal and provably optimal AIXI agent model... and Jürgen Schmidhuber's universal self-improving Gödel machine models... (May 2012)
This sentence is defensible for certain definitions of "significant," but I think it was a mistake to include this sentence (and the following quotes from Hutter and Schmidhuber) in the paper. AIXI and Gödel machines probably aren't particularly important pieces of progress toward AGI worth calling out like that. I added those paragraphs to section 2.4 not long before the submission deadline, and re...
On September 26, 1983, Soviet officer Stanislav Petrov saved the world. (Nov 2011)
Eh, not really.
The Wiki link in the linked LW post seems to be closer to "Stanislav Petrov saved the world" than "not really":
Petrov judged the report to be a false alarm, and his decision is credited with having prevented an erroneous retaliatory nuclear attack
...
His colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile strike if they had been on his shift.
...
Petrov, as an individual, was not in a position where he could single-handedly have launched any of the Soviet missile arsenal. ... But Petrov's role was crucial in providing information to make that decision. According to Bruce Blair, a Cold War nuclear strategies expert and nuclear disarmament advocate, formerly with the Center for Defense Information, "The top leadership, given only a couple of minutes to decide, told that an attack had been launched, would make a decision to retaliate."
A closely related article says:
...Petrov's responsibilities included observing the satellite early warning network and notifying his sup
previous eras' low life expectancy was mostly due to high child mortality.
I have long thought that the very idea of "life expectancy at birth" is a harmful one, because it encourages exactly that sort of confusion. It lumps together two things (child mortality and life expectancy once out of infancy) with sufficiently different causes and sufficiently different effects that they really ought to be kept separate.
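To see how much the lumping matters, here's a toy calculation (all numbers are invented for illustration, not historical estimates):

```python
# Toy model: how child mortality drags down "life expectancy at birth".
# All numbers below are made up for illustration.

child_mortality = 0.30        # fraction dying in childhood
avg_age_of_child_death = 2    # assumed mean age of those deaths
adult_expectancy = 55         # assumed expectancy for childhood survivors

at_birth = (child_mortality * avg_age_of_child_death
            + (1 - child_mortality) * adult_expectancy)
print(at_birth)  # 39.1, even though surviving adults expect to reach 55
```

So a society where most adults live into their fifties can still report a "life expectancy at birth" under 40, which is exactly the confusion the statistic invites.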
I've been systematically downvoted for the past 16 days. Every day or two, I lose about 10 karma; so far, I've lost a total of about 160 karma.
It's not just somebody going through my comments and downvoting the ones they disagree with. Even a comment where I said "thanks" when somebody pointed out a formatting error in my comments is now at -1.
I'm not sure what can/should be done about this, but I thought I should post it here. And if the person who did this is here and there is a reason, I would appreciate it if you would say it here.
A quick look at the first page of your recent comments shows that most of your activity has been in the recent "Is Less Wrong too scary to marginalized groups?" firestorm.
One of the most recent users to complain about mass downvoting also cited participation in flame-bait topics (specifically gender).
Robin Hanson on Facebook:
Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why "we" know a lot less about the future than we could.
Consider the case of Willard Wells and his Springer-published book Apocalypse When?: Calculating How Long the Human Race Will Survive (2009). From a UCSD news story about a talk Wells gave about the book:
Larry Carter, a UCSD emeritus professor of computer science, didn’t mince words. The first time he heard about Wells’s theories, he thought, “Oh my God, is this guy a crackpot?”
But persuaded by Wells's credentials, which include a PhD from Caltech in math and theoretical physics, a career that led him to L-3 Photonics and the Caltech/Jet Propulsion Laboratory, and an invention under his belt, Carter gave the ideas a chance. And was intrigued.
For a taste of the book, here is Wells' description of one specific risk:
...When advanced robots arrive... the serious threat [will be] h
Some names familiar to LWers seem to have just made their fortunes (again, in some cases): http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/ (via HN)
Google is shelling out $400 million to buy a secretive artificial intelligence company called DeepMind....Based in London, DeepMind was founded by games prodigy and neuroscientist Demis Hassabis, Skype & Kazaa developer Jaan Tallinn and researcher Shane Legg.
I liked Legg's blog & papers and was sad when he basically stopped in the interests of working on his company, but one can hardly argue with the results.
EDIT: bigger discussion at http://lesswrong.com/r/discussion/lw/jks/google_may_be_trying_to_take_over_the_world/#comments - new aspects: $500m, not $400m; DeepMind proposes an ethics board
I'm going to do the unthinkable: start memorizing mathematical results instead of deriving them.
Okay, unthinkable is hyperbole. But I've noticed a tendency within myself to regard rote memorization as unbecoming of a student of mathematics and physics. An example: I was recently going through a set of practice problems for a university entrance exam, and calculators were forbidden. One of the questions required a lot of trig, and half the time I spent solving the problem was just me trying to remember or re-derive simple things like the arcsin of 0.5 and so on. I knew how to do it, but since I only have a limited amount of working memory, actually doing it was very inefficient and led to a lot of backtracking and fumbling. In the same sense, I know how to derive all of my multiplication tables, but doing so every time I need to multiply two numbers is obviously wrong.

I don't know how widespread this is, but at least in my school, memorization was left to the lower-status, less able people who couldn't grasp why certain results were true. I had gone along with this idea without thinking about it critically.
So these are the things I'm ...
In my experience memorization often comes for free when you strive for fluency through repetition. You end up remembering the quadratic formula after solving a few hundred quadratic equations. Same with the trig identities. I probably still remember all the most common identities years out of school, owing to the thousands (no exaggeration) of trig problems I had to solve in high school and uni. And can derive the rest in under a minute.
Memorization through solving problems gives you much more than anki decks, however: you end up remembering the roads, not just the signposts, so to speak, which is important for solving test problems quickly.
You are right that "the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems", but I am not sure that Anki is the best way to achieve this reduction, though it is certainly worth a try.
In this article, Eliezer says:
Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.
Recently, a similar phrase popped into my head, which I found quite useful:
Confusion gets curiosity. Does not get anger, disgust or fear. Never. Never ever never for ever.
That's all.
PSA: You can download from Scribd without paying; you just need to upload a file first (apparently any file -- it can be a garbage PDF or even a PDF that's already on Scribd). They say this at the very bottom of their pricing page, but I didn't notice until just now.
Hello, we are organizing monthly rationality meetups in Vienna. We have previously used the account of one of our members (ratcourse) but would like to switch to this account (rationalityvienna). Please upvote this account so it can create Rationality Vienna meetup posts.
Reason #k Why I <3 Pomodoros:
They really help me get over akrasia. I beemind how many pomodoros I do per week, so I'll do tasks I would otherwise procrastinate on if I can do 20 minutes of them (yes, I do short pomodoros) and get to enter a data point at the end. Often I find that the task is much shorter/less awful than it felt in the abstract.
Example: I just moved today, and didn't have that much to unpack, but decided I'd do it tomorrow, because I felt tired and it would presumably be long and unpleasant. But then I realized I could get a pomodoro out of it (plus permission from myself to stop after 20 min and go to bed). Turns out it took 11 minutes and now I'm all set up!
Even if you know that signaling is stupid, that doesn't let you escape the cost of not signaling.
It's a longstanding trope that Eliezer gets a lot of flak for having no formal education. Formal education is not the only way to gain knowledge, but it is a way of signaling knowledge, and it's not very easy to fake (if it were easy to fake, it would fall apart as a credential). Has anyone toyed around with the idea of sending him off to get a math degree somewhere? He might learn something, and if not, it's a breezy recap of what he already knows. He comes out the other side without the eternal "has no formal education" tagline, and with a whole new slew of acquaintances.
Now, I understand that there may be good reasons not to, and I'd very much appreciate someone pointing me to any previous discussion in which this has been ruled out. Otherwise, how feasible does it sound to crowdfund a "Here's your tuition and an extra sum of money to cover the opportunity cost of your time, I don't care how unfair it is that people won't take you seriously without credentials, go study something useful, make friends with your professors, and get out with the minimum number of credits possible" scholarship?
Has anyone toyed around with the idea of sending him off to get a math degree somewhere?
I think the bigger issue with people not taking EY seriously is that he does not communicate (e.g. publish peer-reviewed papers). A Facebook stream of consciousness does not count. Conditional on great papers, credentials don't mean that much (otherwise people would never move up the academic status chain).
Yes, it is too bad that writing things down clearly takes a long time.
Somehow I doubt I will ever persuade Eliezer to write in a style fit for a journal, but even so, I'll briefly mention that Eliezer is currently meeting with a "mathematical exposition aimed at math researchers" tutor. I don't know yet what the effects will be, but it seemed (to Eliezer and me) a worthwhile experiment.
I don't think you understand signaling well.
Eliezer managed signaling well enough to get a billionaire to fund his project. A billionaire who systematically funds people who drop out of college, through projects like his 20 Under 20 program.
Trying to go the traditional route wouldn't fit into the highly effective image that he already signals.
Put another way, the purpose of signaling isn't so nobody will give you crap. It's so somebody will help you accomplish your goals.
People will give you crap, especially if they can get paid to do so. See gossip journalists, for instance. They are not paid to give boring and unsuccessful people crap; they are paid to give interesting and successful people crap.
A recent experience reminded me that basics are really important. On LW we talk a lot about advanced aspects of rationality.
If you had to describe the basics, what would you say? What things are so obvious to you about rationality that they usually go without saying?
You can frequently make your life better by paying attention to what you're doing, looking for possible improvements, trying your ideas, and observing whether the improvements happen.
My meditation blog from a (somewhat) rationalist perspective is now past 40 posts:
John_Maxwell_IV and I were recently wondering about whether it's a good idea to try to drink more water. At the moment my practice is "drink water ad libitum, and don't make too much of an effort to always have water at hand". But I could easily switch to "drink ad libitum, and always have a bottle of water at hand". Many people I know follow the second rule, and this definitely seems like something that's worth researching more because it literally affects every single day of your life. Here are the results of 3 minutes of googling:
http://www.sciencedirect.com/science/article/pii/S0002822399000486:
...Dehydration of as little as 1% decrease in body weight results in impaired physiological and performance responses (4), (5) and (6), and is discussed in more detail below. It affects a wide range of cardiovascular and thermoregulatory responses (7), (8), (9), (10), (11), (12), (13) and (14).
The Nationwide Food Consumption Surveys indicate that a portion of the population may be chronically mildly dehydrated. Several factors may increase the likelihood of chronic, mild dehydration, including a poor thirst mechanism, dissatisfaction with the taste of water, common co
Repeating my post from the last open thread, for better visibility:
I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take in the university. The problem is, my mathematical education isn't very good (on the level of Calculus 101). I'm not afraid of math, but so far all the books I could find are either about pure application, with barely any explanations, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.
I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample. And it's even worse with variance...
Any ideas how to proceed?
I too spent a few years with a similar desire to understand probability and statistics at a deeper level, but we might have been stuck on different things. Here's an explanation:
Suppose you have 37 numbers. Purchase a massless ruler and 37 identical weights. For each of your numbers, find the number on the ruler and glue a weight there. You now have a massless ruler with 37 weights glued onto it.
Now try to balance the ruler sideways on a spike sticking out of the ground. The mean of your numbers will be the point on the ruler where it balances.
Now spin the ruler on the spike. It's easy to speed up or slow down the spinning ruler if the weights are close together, but more force is required if the weights are far apart. The variance of your numbers is proportional to the amount the ruler resists changes to its angular velocity -- how hard you have to twist the ruler to make it spin, or to make it stop spinning.
"I'd like to understand this more deeply" is a thought that occurs to people at many levels of study, so this explanation could be too high or low. Where did my comment hit?
When you have thousands of different pieces of data, to grasp it mentally, you need to replace them with some simplification. For example, instead of a thousand different weights you could imagine a thousand identical weights, such that the new set is somehow the same as the original set; and then you would focus on the individual weight from the new set.
What precisely does "somehow the same as the original set" mean? Well, it depends on what the numbers in the original set do; how exactly they join together.
For example, if we speak about weights, the natural way of "joining together" is to add their weights. Thus the new set of identical weights is equivalent to the original set if the sum of the new set is the same as the sum of the old set. The sum of the new set = number of pieces × weight of one piece. Therefore the weight of each piece in the new set is the sum of the pieces in the original set divided by their number; the "sum/n".
Specifically, if addition is the natural thing to do, the set 3, 4, 8 is equivalent to 5, 5, 5, because 3 + 4 + 8 = 5 + 5 + 5. Saying that "5 is the mean of the original set" means "the original set b...
A few years back, the Amanda Knox murder case was extensively discussed on LW.
Did someone here ask about the name of a fraud where the fraudster makes a number of true predictions for free, then says "no more predictions, I'm selling my system"? There's no system; instead, the fraudster divides the potential victims into groups, and each group gets different predictions. Eventually, a few people are left with the impression of an unbroken accurate series.
Anyway, the scam is called the Inverted Pyramid, and the place I'd seen it described was in the thoroughly charming "Adam Had Three Brothers" by R.A. Lafferty.
Edited to add: It...
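The mechanics of the scam are simple enough to simulate (a sketch with made-up numbers): each round, tell half the remaining marks "up" and half "down", then keep whichever half just saw a correct call.

```python
# Inverted-pyramid scam: the pool halves each round, but the survivors
# have witnessed nothing except correct predictions.
marks = 1024   # initial pool of potential victims (illustrative number)
rounds = 6     # consecutive "correct" predictions to demonstrate

for _ in range(rounds):
    marks //= 2  # one half got "up", the other "down"; keep the winners

print(marks)  # 16 people left, each having seen 6 straight hits
```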
People often ask why MIRI researchers think decision theory is relevant to AGI safety. I, too, often wonder whether it's as likely to be relevant as, say, program synthesis. But the basic argument for the relevance of decision theory was explained succinctly in Levitt (1999):
...If robots are to be put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable physical and visual events routinely occur. It will not be practical, or even safe, t
A year ago, I was asked to follow up on my post about the January 2013 CFAR workshop in a year. The time to write that post is fast approaching. Are there any issues / questions that people would be particularly interested in seeing this post address / answer?
Somewhere I saw the claim that in choosing sperm donors the biggest factor turns out to be how cute the baby pictures are, but at this point it's just a cached thought. Looking now I'm not able to substantiate it. Does anyone know where I might have seen this claim?
Does anyone else experience the feeling of alienation? And does anyone have a good strategy for dealing with it?
Is it always correct to choose that action with the highest expected utility?
Suppose I have a choice between action A, which grants -100 utilons with 99.9% chance and +1000000 utilons with 0.1% chance, or action B, which grants +1 utilon with 100% chance. A has an expected utility of +900.1 utilons, while B has an expected utility of +1 utilon. This decision will be available to me only once, and all future decisions will involve utility changes on the order of a few utilons.
Intuitively, it seems like action A is too risky. I'll almost certainly end up with ...
I think the non-intuitive nature of the A choice is because we naturally think of utilons as "things". For any valuable thing (money, moments of pleasure, whatever), anybody who is even minimally risk averse would choose B. But utilons are not things; they are abstractions defined by one's preferences. So the claim that A is the rational choice is a tautology, in the standard versions of utility theory.
It may help to think of it the other way around, starting from the actual preference. You would choose a 99.9% chance of losing ten cents and a 0.1% chance of winning 10,000 dollars over winning one cent with certainty, right? So then perhaps, as long as we don't think of other bets and outcomes, we can map winning 1 cent to +1 utilon, losing 10 cents to -100 utilons, and winning 10,000 dollars to +1,000,000 utilons. Then we can refine and extend the "outcomes <=> utilons" map by considering your actual preferences under more and more bets. As long as your preferences are self-consistent in the sense of the VNM axioms, there will be a mapping that can be constructed.
ETA: of course, it is possible that your preferences are not self-consistent. The Allais paradox is an example where many people's intuitive preferences are not self-consistent in the VNM sense. But constructing such a case is more complicated than just considering risk aversion on a single bet.
Also, it's quite possible that your utility function doesn't evaluate to +1,000,000 for any value of its argument, i.e. it's bounded above.
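For concreteness, here is the thread's arithmetic as a quick sketch (the utilon assignments are the commenter's illustrative mapping, not anything canonical):

```python
def expected_utility(lottery):
    """lottery: list of (probability, utilons) pairs."""
    return sum(p * u for p, u in lottery)

# Original problem: A = 99.9% of -100, 0.1% of +1,000,000; B = certain +1.
A = [(0.999, -100), (0.001, 1000000)]
B = [(1.0, 1)]
print(expected_utility(A), expected_utility(B))  # 900.1 1.0

# The reversed construction: if you do prefer the cents-and-dollars bet to a
# sure cent, the mapping above (1 cent -> +1, -10 cents -> -100,
# $10,000 -> +1,000,000) rationalizes that preference.
assert expected_utility(A) > expected_utility(B)
```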
Daniel Dennett quote to share, on an argument in Sam Harris' book Free Will:
... he has taken on a straw man, and the straw man is beating him
From: http://www.samharris.org/blog/item/reflections-on-free-will
Just thought that was pretty damn funny.
The MIRI course list bashes "higher and higher forms of calculus" as not being useful for their purposes, and calculus is not on the list at all. However, I know that at least some kind of calculus is needed for things like probability theory.
So imagine a person wanted to work their way through the whole MIRI course list and deeply understand each topic. How much calculus is needed for that?
Not much. The kind of probability relevant to MIRI's interests is not the kind you need calculus to understand (the random variables are usually discrete, etc.). The closest thing to needing a calculus background is maybe numerical analysis (I suspect it would be helpful to at least have the intuition that derivatives measure the sensitivity of a function to changes in its input), but even then I think that's more about algorithms. I'm not an expert on numerical analysis by any means, though.
If you have a general interest in mathematics, I still recommend that you learn some calculus because it's an important foundation for other parts of mathematics and because people, when explaining things to you, will often assume that you know calculus after a certain point and use that as a jumping-off point.
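As a tiny illustration of the derivative-as-sensitivity intuition mentioned above (my sketch, nothing from the MIRI list itself):

```python
def sensitivity(f, x, h=1e-6):
    """Central-difference estimate of f'(x): how strongly f responds
    to a small nudge in its input."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 3
print(sensitivity(f, 2.0))  # ~12.0: near x = 2, a small input change
                            # is amplified roughly 12x in the output
```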
Last night we had a meetup in Ljubljana. It was a good debate, but quite a heretical one by LW standards, especially after the organizers left us, which was unfortunate. We mostly don't see ourselves as particularly bonded to LW at all. Especially me.
We discussed personal identity, possible near super-intelligence (sudden hack, if you wish), Universe transformation following this eventuality, and some lighter topics like fracking for gas and oil, language revolutions throughout history, neo-reactionaries and their points, Einstein's brains (whether they were l...
Is there a reasonably well researched list of behaviors that correlate positively with lifespan? I'm interested in seeing if there are any low hanging fruit I'm missing.
I found this previously posted, and a series of posts by gwern, but I was wondering whether there is anything else.
A quick Google search will give you a lot of lists, but most of them are from news sources that I don't trust.
Has anyone had experiences with virtual assistants? I've been aware of the concept for many years but have always been wary of what I perceive to be the risks involved in letting a fundamentally unknown party read my email.
I'd like to hear about any positive or negative experiences.
One problem with searching for information about the trustworthiness of entities like these is that one suspects any positive reports one finds via Googling to be astroturfing, and if one finds negative reports, well, negatives are always over-reported in consumer services. That's why I'm asking here.
Has anyone paired Beeminder and Project Euler? I'd like to be able to set a goal of doing x problems per week and have it update automatically, instead of entering the data manually. Has anyone cobbled together a way to do it that I could piggyback off of?
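I haven't built this, but a minimal sketch might look like the following. The Beeminder half uses its documented datapoints API; the Project Euler half is the real work, since your solved count sits behind a login, so it's stubbed out here. USERNAME, GOAL, and AUTH_TOKEN are placeholders.

```python
import requests

# Placeholders: fill in your own Beeminder details.
USERNAME = "your_beeminder_username"
GOAL = "project-euler"
AUTH_TOKEN = "your_beeminder_auth_token"

def solved_count():
    # Stub: Project Euler's progress page requires a login, so you'd
    # fetch it with your session cookie and parse out the solved count.
    raise NotImplementedError

def post_datapoint(value):
    # Beeminder's datapoint-creation endpoint (documented API v1).
    url = ("https://www.beeminder.com/api/v1/users/%s/goals/%s/datapoints.json"
           % (USERNAME, GOAL))
    resp = requests.post(url, data={
        "auth_token": AUTH_TOKEN,
        "value": value,
        "comment": "auto-update from Project Euler",
    })
    resp.raise_for_status()

if __name__ == "__main__":
    post_datapoint(solved_count())
```

Run from cron once a day, something like this would keep the goal's datapoints current without manual entry.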
I hadn't realised before that Max Tegmark's work was actually funded by a massive grant from the Templeton Foundation: $9 million to found FQXi.
The purpose of the Templeton Foundation is to spray around more money than most academics could dream of - $9 million for philosophy! - in an effort to blur the lines between science and religion and corrupt the public discourse. The best interpretation that can reasonably be put on taking the Templeton shilling is that one is doing so cynically.
This is not pleasing news, not at all.
Any book recommendations for a good intro to evolutionary psychology? I remember Eliezer suggested The Moral Animal, but I also vaguely remember some other people recommending against it. I'll probably just go with TMA unless some other book gets suggested multiple times.
I don't understand why wireheading is almost universally considered worse than death, or at least really really negative.
I keep looping through the same crisis lately, which comes up any time someone points out that I'm pretentious / an idiot / full of shit / lebensunwertes Leben ("life unworthy of life") / etc.:
Is there a good way for me to know if I'm actually any good at anything? What are appropriate criteria to determine whether I deserve to have pride in myself and my abilities? And what are appropriate criteria to determine whether I have the capacity to determine whether I've met those criteria?
Following up on http://lesswrong.com/lw/jij/open_thread_for_january_17_23_2014/af90 :
This is a minimally-viable update on account of recent travel and imminent job interviews, but the precommitments seem to be succeeding in at least forcing something like progress and keeping some attention on the problem.
http://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement
Some of these ideas are very poorly thought out. Some are interesting.
I'm in art school and I have a big problem with achieving precision and avoiding "sloppiness" in my work. I'm sort of hesitant to try to improve in this area, however, because I suspect it reflects some sort of biological limit - maybe the size of some area in the cerebellum or something, I don't know. Am I right in thinking this?
Seems to me that that's likely a self-fulfilling prophecy, which I subjectively estimate is at least as likely to prevent you from doing better as an actual biological problem. Maybe try to think of more ways to get better at it - perhaps some different kind of exercises - and do your best at those, before drawing any conclusions about your fundamental limits... because those conclusions themselves will limit you even more.
I'm recalling a Less Wrong post about how rationality only leads to winning if you "have enough of it". Like if you're "90% rational", you'll often "lose" to someone who's only "10% rational". I can't find it. Does anyone know what I'm talking about, and if so can you link to it?
I'm quite new to LW, and find myself wondering whether hidden Markov models (HMMs) are underappreciated as a formal reasoning tool in the rationalist community, especially compared to Bayesian networks.
Perhaps it's because HMMs seem more difficult to grasp?
Or because formally HMMs are just a special case of Bayesian networks (i.e. dynamic Bayes nets)? Still, HMMs are widely used in science in their own right.
For comparison, Google search "bayes OR bayesian network OR net" site:lesswrong.com gives 1,090 results.
Google search hidden markov model site:lesswrong.com gives 91 results.
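For anyone curious how little machinery an HMM actually needs, here's a bare-bones forward algorithm (parameters invented for the example: two hidden weather states, binary umbrella observations):

```python
# Toy HMM: hidden states {0: rainy, 1: sunny}, observations
# {0: umbrella, 1: no umbrella}. All parameters are made up.
init = [0.5, 0.5]            # P(state at t=0)
trans = [[0.7, 0.3],         # P(next state | current state)
         [0.3, 0.7]]
emit = [[0.9, 0.1],          # P(observation | state)
        [0.2, 0.8]]

def likelihood(obs_seq):
    """P(observation sequence) via the forward recursion."""
    alpha = [init[s] * emit[s][obs_seq[0]] for s in range(2)]
    for obs in obs_seq[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(2)) * emit[s][obs]
                 for s in range(2)]
    return sum(alpha)

print(likelihood([0, 0, 1]))  # P(umbrella, umbrella, no umbrella) ~= 0.12
```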
Is there a good way of finding what kind of job might fit a person? Common advice such as "do what you like to do" or "do what you're good at" is relatively useless for finding a specific job or even a broader category of jobs.
I've done some reading on 80,000 Hours, and most of the advice there is about how to choose between a couple of possible jobs, not about finding a fitting one from scratch.
Does anyone have a simple, easily understood definition of "logical fallacy" that can be used to explain the concept to people who have never heard of it before?
I was trying to explain the idea to a friend a few days ago but since I didn't have a definition I had to show her www.yourlogicalfallacyis.com. She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.
She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.
You think she would've understood the concept even more quickly if you had a definition? I think people underestimate the value of showing people examples as a way of communicating a concept (and overestimate the value of definitions).
Can anyone recommend a good replacement for flagfic.com? This was a site that could download stories from various archives (fanfiction.net, fimfiction.net, etc.), transform them into various e-reader formats, and email them to you. I used it to email fanfics I wanted to read directly to my Kindle as .mobi files.
Repost as there were no answers:
Has anyone here done Foundation Training? How strong is the evidence supporting it?
I'd like to know how many of the techniques you were taught at the meetup you still use regularly, and which has had the largest effect on your life.