Comment author: Locaha · 25 January 2014 03:14:59PM · 5 points
Repeating my post from the last open thread, for better visibility:
I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take at university. The problem is, my mathematical education isn't very good (on the level of Calculus 101). I'm not afraid of math, but so far all the books I could find are either about pure application, with barely any explanations, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.
I want a deeper understanding of the basic concepts. Like, the mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample. And it's even worse with variance...
Comment author: solipsist · 25 January 2014 04:09:00PM · 14 points
I too spent a few years with a similar desire to understand probability and statistics at a deeper level, but we might have been stuck on different things. Here's an explanation:
Suppose you have 37 numbers. Purchase a massless ruler and 37 identical weights. For each of your numbers, find the number on the ruler and glue a weight there. You now have a massless ruler with 37 weights glued onto it.
Now try to balance the ruler sideways on a spike sticking out of the ground. The mean of your numbers will be the point on the ruler where it balances.
Now spin the ruler on the spike. It's easy to speed up or slow down the spinning ruler if the weights are close together, but more force is required if the weights are far apart. The variance of your numbers is proportional to the amount the ruler resists changes to its angular velocity -- how hard you have to twist the ruler to make it spin, or to make it stop spinning.
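The ruler picture translates directly into a few lines of code. The following is my own sketch of solipsist's analogy, not part of the comment: unit masses glued at each number, the balance point computed as a center of mass, and resistance to spinning as a moment of inertia about that point.

```python
def mean(xs):
    return sum(xs) / len(xs)

def population_variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def center_of_mass(positions, masses):
    # Balance point of weights glued to a massless ruler.
    return sum(m * x for m, x in zip(masses, positions)) / sum(masses)

def moment_of_inertia(positions, masses, pivot):
    # Resistance to changes in angular velocity when spun about `pivot`.
    return sum(m * (x - pivot) ** 2 for m, x in zip(masses, positions))

xs = [3.0, 4.0, 8.0]
unit_masses = [1.0] * len(xs)

com = center_of_mass(xs, unit_masses)        # balance point = mean = 5.0
I = moment_of_inertia(xs, unit_masses, com)  # = len(xs) * variance = 14.0
```

With unit masses the center of mass equals the mean exactly, while the moment of inertia equals n times the (population) variance, which is why it is only proportional to the variance, with the number of points as the constant.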
"I'd like to understand this more deeply" is a thought that occurs to people at many levels of study, so this explanation could be pitched too high or too low. Where did my comment hit?
If you are frustrated with hand waving in calculus, read a Real Analysis textbook. The magic words which explain how the heck you can have probability distributions over the real numbers are "measure theory".
How does that answer the question?
It's true that the center of gravity is a mean, but the moment of inertia is not a variance. It's one thing to call something "proportional to a variance" when the constant is 2 or pi, but when the constant is the number of points, I think it's missing the statistical point.
But the bigger problem is that these are not statistical examples! Means and sums of squares occur in many places, but why are they a good choice for the central tendency and the tendency to be central? Are you suggesting that we think of a random variable as a physical rod? Why? Does trying to spin it have any probabilistic or statistical meaning?
Comment author: solipsist · 25 January 2014 06:15:43PM · 1 point
I wasn't aiming to answer Locaha's question as much as figure out what question to answer. The range of math knowledge here is wide, and I don't know where Locaha stands. I mean,
But why [is the mean calculated as] sum/n?
That could be a basic question about the meaning of averages -- the sort of knowledge I internalized so deeply that I have trouble forming it into words.
But maybe Locaha's asking a question like:
Why is an unbiased estimator of population mean a sum/n, but an unbiased estimator of population variance a sum/(n-1)?
That's a less philosophical question. So if Locaha says "means are like the centers of mass! I never understood that intuition until now!", I'll have a different follow up than if Locaha says "Yes, captain obvious, of course means are like centers of mass. I'm asking about XYZ".
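If it turns out to be the second question, a simulation makes the bias visible. This is my own sketch, not part of solipsist's comment: dividing by n systematically underestimates the variance of the population you sampled from, while dividing by n - 1 does not.

```python
import random

def sample_variance(xs, ddof):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

random.seed(0)
n, trials = 5, 200_000
total_biased = total_unbiased = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # true variance is 1
    total_biased += sample_variance(xs, ddof=0)      # divide by n
    total_unbiased += sample_variance(xs, ddof=1)    # divide by n - 1

avg_biased = total_biased / trials      # hovers near (n-1)/n = 0.8
avg_unbiased = total_unbiased / trials  # hovers near 1.0
```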
Comment author: spxtr · 25 January 2014 08:47:22PM · 0 points
Mean and variance are closely related to center of mass and moment of inertia. This is good intuition to have, and it's statistical. The only difference is that the first two are moments of a probability distribution, and the second two are moments of a mass distribution.
I don't have a good resource for you - I've had too much math education to pin down exactly where I picked up this kind of logic. I'd recommend set theory in general for getting an understanding of how math works and how to talk and read precisely in mathematics.
For your specific question about the mean: it's the only number such that the sum of all the (sample - mean) deviations equals zero. Go ahead and play with the algebra to show it to yourself. What that means is that if you measure deviations from anything other than the mean, they won't cancel: you'll come out net positive or net negative.
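That zero-sum property is quick to check numerically; a small sketch of mine:

```python
def mean(xs):
    return sum(xs) / len(xs)

xs = [3.0, 4.0, 8.0]
m = mean(xs)  # 5.0

# Deviations from the mean cancel exactly...
from_mean = sum(x - m for x in xs)    # 0.0
# ...but deviations from any other point do not.
from_four = sum(x - 4.0 for x in xs)  # 3.0, i.e. net positive
```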
http://intelligence.org/courses/ has information on set theory. I also enjoyed reading Whitehead and Russell's "Principia Mathematica", but haven't evaluated it as a source for learning set theory.
Not really - but I do agree that it's absolutely vital to understand the basic concepts or terms. I think that's a major reason why people fail to learn - they just don't really grasp the most vital concepts. That's especially true of fields with lots of technical terms. If you don't understand the terms you'll struggle to follow even basic lines of reasoning.
For this reason I sometimes provide students with a list of central terms, together with comprehensive explanations of what they mean, when I teach.
Comment author: Viliam_Bur · 25 January 2014 05:03:05PM · 9 points
When you have thousands of different pieces of data, to grasp it mentally, you need to replace them with some simplification. For example, instead of a thousand different weights you could imagine a thousand identical weights, such that the new set is somehow the same as the original set; and then you would focus on the individual weight from the new set.
What precisely does "somehow the same as the original set" mean? Well, it depends on what the numbers from the original set do; how exactly they join together.
For example, if we speak about weights, the natural way of "joining together" is to add their weight. Thus the new set of identical weights is equivalent to the original set if the sum of the new set is the same as the sum of the old set. The sum of the new set = number of pieces × weight of one piece. Therefore the weight of each piece in the new set is the sum of the pieces in the original set divided by their number; the "sum/n".
Specifically, if addition is the natural thing to do, the set 3, 4, 8 is equivalent to 5, 5, 5, because 3 + 4 + 8 = 5 + 5 + 5. Saying that "5 is the mean of the original set" means "the original set behaves (with regards to the natural thing to do, i.e. addition) as if it was composed of the 5's".
There are situations where some other operation is the natural thing to do. Sometimes it is multiplication. For example, if you multiply some original value by 2, and then you multiply it by 8, the result of these two operations is the same as if you had multiplied it twice by 4. In this case it's called the geometric mean, and it's the n-th root of the product.
It can be even more complicated, so it doesn't necessarily have a name, but the idea is always replacing the original set with a set of identical values such that in the original context they would behave the same way. For example, the example above could be described as 100% growth (multiplication by 2) and 700% growth (multiplication by 8), for which you need the result 300% (multiplication by 4); in which case it would be "root of (product of (Xi + 100%)) - 100%".
If there is no meaningful operation in the original set, if the set can be ordered, we can pick the median. If the set can't even be ordered, if there are discrete values, we can pick the most frequent value as the best approximation of the original set.
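The additive and multiplicative cases above can be sketched in a few lines (my own illustration, using the same numbers):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # n-th root of the product: the right "identical value" when the
    # natural way of joining numbers is multiplication.
    return math.prod(xs) ** (1.0 / len(xs))

# Additive context: 3 + 4 + 8 == 5 + 5 + 5, so the set behaves like 5's.
am = arithmetic_mean([3.0, 4.0, 8.0])   # 5.0

# Multiplicative context: growing x2 then x8 == growing x4 twice.
gm = geometric_mean([2.0, 8.0])         # 4.0
```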
Comment author: [deleted] · 25 January 2014 05:53:08PM · 1 point
I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample.
The mean of the sum of two random variables is the sum of the means (ditto with the variances, provided the variables are independent); there's no similarly simple formula for the median. (See ChristianKl's comment for why you'd care about the sum.)
The mean is the value of x that minimizes SUM_i (x - x_i)^2; if you have to approximate all elements in your sample with the same value, and the cost of an imperfect approximation is the squared distance from the exact value (and any smooth cost function looks like the square when you're sufficiently close to its minimum), then you should use the mean.
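Both characterizations are easy to verify numerically. A sketch of mine, with a crude grid search standing in for the calculus; the same search with an absolute-value cost lands on the median instead:

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def squared_cost(xs, c):
    return sum((x - c) ** 2 for x in xs)

def absolute_cost(xs, c):
    return sum(abs(x - c) for x in xs)

xs = [1.0, 2.0, 2.0, 3.0, 100.0]         # one outlier
grid = [c / 10 for c in range(0, 1001)]  # candidate values 0.0 .. 100.0

best_sq = min(grid, key=lambda c: squared_cost(xs, c))    # ~mean = 21.6
best_abs = min(grid, key=lambda c: absolute_cost(xs, c))  # ~median = 2.0
```

Note how the outlier drags the squared-cost minimizer far from most of the data, which is the mean-versus-median income point made elsewhere in the thread.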
Comment author: [deleted] · 26 January 2014 11:39:58AM · 0 points
(Of course, all this means that if you're more likely to multiply things together than add them, the badness of an approximation depends on the ratio between it and the true value rather than the difference, and things are distributed log-normally, you should use the geometric mean instead. Or just take the log of everything.)
Comment author: edanm · 25 January 2014 09:18:16PM · 0 points
Is this a good book to start with? I know it's the standard "Bayes" intro around here, but is it good for someone with, let's say, zero formal probability/statistics training?
Comment author: [deleted] · 25 January 2014 10:02:32PM · 0 points
I think it's even better if you're not familiar with frequentist statistics because you won't have to unlearn it first, but I know many people here disagree.
I suppose it's better than to never have suffered through frequentist statistics at all, but I think you appreciate the right way a lot more after you've had to suffer through the wrong way for a while.
Comment author: [deleted] · 26 January 2014 09:42:46AM · 0 points
Well, Jaynes does point out how bad frequentism is as often as he can get away with. I guess the main thing you're missing out on if you weren't previously familiar with it is knowing whether he's attacking a strawman.
Comment author: Kaj_Sotala · 26 January 2014 02:45:09PM · 3 points
I was under the impression that the "this is definitely not a book for beginners" was the standard consensus here: I seem to recall seeing some heavily-upvoted comments saying that you should be approximately at the level of a math/stats graduate student before reading it. I couldn't find them with a quick search, but here's one comment that explicitly recommends another book over it.
Comment author: maia · 25 January 2014 08:24:29PM · 0 points
Attending a CFAR workshop and session on Bayes (the 'advanced' session) helped me understand a lot of things in an intuitive way. Reading some online stuff to get intuitions about how Bayes' theorem and probability mass work was helpful too. I took an advanced stats course right after doing these things, and ended up learning all the math correctly, and it solidified my intuitions in a really nice way. (Other students didn't seem to have as good a time without those intuitions.) So that might be a good order to do things in.
Some multidimensional calc might be helpful, but other than that, I think you don't need too much other math to support learning more probability and stats.
Comment author: Benito · 25 January 2014 10:47:15PM · 4 points
I asked a similar question a while back, and I was directed to this book, which I found to be incredibly useful. It is written at an elementary level, has minimal maths, yet is still technical, and brings across many central ideas in very clear, Bayesian terms. It is also on Lukeprog's CSA book recommendations for 'Become Smart Quickly'.
Note: this is the only probability textbook I have read. I've glanced through the openings of others, and they've tended to be above my level. I am sixteen.
Comment author: Lumifer · 26 January 2014 02:43:36AM · 0 points
This isn't at introductory level, but try exploring the ideas around Fisher information -- it basically ties together information theory and some important statistical concepts.
Fisher Information is hugely important in that it lets you go from just treating a family of distributions as a collection of things to treating them as a space with its own meaningful geometry. The wikipedia page doesn't really convey it but this write-up by Roger Grosse does. This has been known for decades but the inferential distance to what folks like Amari and Barndorff-Nielsen write is vast.
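As a concrete first taste (my own sketch, not from Grosse's write-up): for a Bernoulli(p) variable the Fisher information is 1/(p(1-p)), and it can be recovered empirically as the variance of the score, i.e. the derivative of the log-likelihood.

```python
import random

def score(x, p):
    # d/dp log f(x; p) for the Bernoulli density f(x; p) = p^x (1-p)^(1-x)
    return x / p - (1 - x) / (1 - p)

def fisher_information_mc(p, trials=200_000, seed=0):
    # Fisher information = variance of the score (whose mean is zero).
    rng = random.Random(seed)
    scores = [score(1 if rng.random() < p else 0, p) for _ in range(trials)]
    m = sum(scores) / trials
    return sum((s - m) ** 2 for s in scores) / trials

p = 0.3
estimate = fisher_information_mc(p)
exact = 1.0 / (p * (1.0 - p))   # about 4.76
```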
The problem with most Probability and Statistics courses is the axiomatic approach. Pure formalism. Here are the rules - you can play by them if you want to.
Jaynes was such a revelation for me, because he starts with something you want, not arbitrary rules and conventions. He builds probability theory on basic desiderata of reasoning that make sense. He had reasons for my "whys?".
Also, standard statistics classes always seemed a bit perverse to me - logically backward. They always just felt wrong. Jaynes's approach replaced that tortured backward thinking with clear, straight lines going forward. You're always asking the same basic question: "What is the probability of A given that I know B?"
And he also had the best notation. Even if I'm not going to do any math, I'll often formulate a problem using his notation to clarify my thinking.
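For what it's worth, that one basic question is just Bayes' rule. A minimal sketch with made-up numbers (the disease-testing scenario and its rates are my own illustration, not from the comment):

```python
def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    # P(A|B) = P(B|A) P(A) / P(B), expanding P(B) over A and not-A.
    evidence = p_b_given_a * prior_a + p_b_given_not_a * (1.0 - prior_a)
    return p_b_given_a * prior_a / evidence

# A = has the disease (1% base rate); B = tests positive.
# Sensitivity 90%, false-positive rate 9%.
p_disease_given_positive = posterior(0.01, 0.90, 0.09)  # about 0.092
```

Even with a 90%-sensitive test, the low base rate keeps the posterior near 9%, which is the kind of "why" a formula-first course rarely dwells on.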
Comment author: pragmatist · 26 January 2014 06:36:57AM · 3 points
As a first step, I suggest Dennis Lindley's Understanding Uncertainty. It's written for the layperson, so there's not much in the way of mathematical detail, but it is very good for clarifying the basic concepts, and covers some surprisingly sophisticated topics.
ETA: Ah, I didn't notice that Benito had already recommended this book. Well, consider this a second opinion then.
Comment author: Qiaochu_Yuan · 27 January 2014 07:33:13PM · 4 points
I don't think that's really what means are. That intuition might fit the median better. One reason means are nice is that they have really nice properties, e.g. they're linear under addition of random variables. That makes them particularly easy to compute with and/or prove theorems about. Another reason means are nice is related to betting and the interpretation of a mean as an expected value; the theorem justifying this interpretation is the law of large numbers.
Nevertheless in many situations the mean of a random variable is a very bad description of it (e.g. mean income is a terrible description of the income distribution and median would be much more appropriate).
Edit: On the other hand, here's one very undesirable property of means: they're not "covariant under increasing changes of coordinates," which on the other hand is true of medians. What I mean is the following: suppose you decide to compute the mean population of all cities in the US, but later decide this is a bad idea because there are some really big cities. If you suspect that city populations grow multiplicatively rather than additively (e.g. the presence of good thing X causes a city to be 1.2x bigger than it otherwise would, as opposed to 200 people bigger), you might decide that instead of looking at population you should look at log population. But the mean of log population is not the log of mean population!
On the other hand, because log is an increasing function, the median of log population is still the log of median population. So taking medians is in some sense insensitive to these sorts of decisions, which is nice.
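That asymmetry is easy to demonstrate; a sketch of mine with made-up city populations (an odd-length list, so the median is an actual data point):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]  # odd-length lists only, for this sketch

populations = [300.0, 5_000.0, 20_000.0, 80_000.0, 8_000_000.0]
logs = [math.log(x) for x in populations]

mean_gap = abs(mean(logs) - math.log(mean(populations)))        # large
median_gap = abs(median(logs) - math.log(median(populations)))  # zero
```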
I tried the video at the url, and it seemed a lot more like straining (little pun about the mistaken url), but that might not be a fair test.
The basic idea of getting hip mobility seems sound, but I recommend Scott Sonnon's Ageless Mobility and IntuFlow, and the The Five Tibetan Rites -- sorry for the cheesy name on the latter, but they're a cross between yoga and calisthenics with a lot of emphasis on getting backwards/forwards pelvis mobility.
I'm in art school and I have a big problem with precision and lack of "sloppiness" in my work. I'm sort of hesitant to try to improve in this area, however, because I suspect it reflects some sort of biological limit - maybe the size of some area in the cerebellum or something, I don't know. Am I right in thinking this?
Comment author: Manfred · 25 January 2014 08:29:21PM · 0 points
I think it's a metaphor thing. Like, in writing, if you say "The shadow of a lamppost lay on the ground like a spear. He walked and it pierced him like a spear." What more description of the scene do you need than that? In fact, talking about the color of the path or what kind of trousers our character was wearing would be counterproductive to the quality of the writing.
One could view sloppiness in art in the same way - use of metaphor that captures the scene without the need for detail.
Comment author: maia · 25 January 2014 08:28:32PM · 8 points
Seems to me that that's likely a self-fulfilling prophecy, which I subjectively estimate is at least as likely to prevent you from doing better as an actual biological problem. Maybe try to think of more ways to get better at it - perhaps some different kind of exercises - and do your best at those, before drawing any conclusions about your fundamental limits... because those conclusions themselves will limit you even more.
Comment author: ChristianKl · 25 January 2014 09:05:23PM · 1 point
I would guess that you try to exert too much control. The kind of "sloppiness" that's useful for creativity is about letting things go.
Meditation might help.
As you are female, dancing a partner dance where you have to follow and can't control everything might be useful. Letting go of trying to control is lesson 101 for a lot of women who pick up Salsa dancing.
I would guess that you try to exert too much control. The kind of "sloppiness" that's useful for creativity is about letting things go.
I'm already good at this part of creativity, but precision is also pretty important. Right now I'm working on a project where I have to trace in pen (can't erase, flaws are obvious) photographs that I took. Letting things go won't help here.
As a lead, you learn that you aren't really controlling much of anything in Salsa either. You're setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don't expect.
But I'm guessing that you've hit on the right direction of interpretation of sloppiness as letting go of control. I'd extend that to too much self-conscious control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest of your capabilities for the same intention.
When the self monitoring person in your head tries to do too much, he gets in the way of the rest of you doing it right.
Comment author: [deleted] · 26 January 2014 10:13:07AM · 0 points
But I'm guessing that you've hit on the right direction of interpretation of sloppiness as letting go of control. I'd extend that to too much self conscious* control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest of your capabilities for the same intention.
That seems related to the common observation that it's easier to speak a foreign language when drunk than when sober: in the latter case I feel so worried about saying something grammatically incorrect that I end up speaking in very simple sentences and very haltingly. (And the widespread use of drugs among rock musicians is well-known.)
Comment author: ChristianKl · 26 January 2014 11:43:01AM · 0 points
As a lead, you learn that you aren't really controlling much of anything in Salsa either. You're setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don't expect.
For advanced dancing that's true. For beginners, not so much. At the beginning Salsa is the guy leading a move and the woman following.
If you are a guy and want to learn dancing for the sake of letting go control I wouldn't recommend Salsa. I think it took me 1 1/2 years to get to that point.
A whole 1 1/2 years? Took me a lot longer than that. I've been at Salsa mainly for about a decade.
Yes, the unfortunate fact is that most leads are taught to "lead moves" when they start. If they were taught to lead movement, they'd make faster progress, IMO. Leading should be leading, to the point of manipulation, and not signaling a choreographed maneuver. I've seen a West Coast instructor teach a beginning class that way, and thought it was the best beginning class I had ever seen.
Comment author: ChristianKl · 27 January 2014 12:42:53PM · 0 points
A whole 1 1/2 years? Took me a lot longer than that.
I think one of the turning points for me was my first Bachata Congress in Berlin. I didn't know too many Bachata patterns, and after hours of dancing the brain just switches off and lets the body do its thing.
But you are right that it might well take longer for the average guy. That means it's not a good training exercise for a man who wants to pick up the skill of letting go of control.
For women, on the other hand, it's something to be learned at the beginning.
Yes, the unfortunate fact is that most leads are taught to "lead moves" when they start.
At the beginning I mainly thought that I didn't understand what teaching dance is all about, and that the teachers have something like real expertise.
The more I dance, the more I think that their teaching is very suboptimal. A local Salsa teacher teaches mainly patterns in her lessons. On the other hand, she writes on her blog about how it's all in the technique and in traits like confidence. And it's not like she didn't study dance in formal university courses for 5 years, so she should know a bit.
Things like telling a guy who dances at a bit of a distance from the girl to dance closer just aren't good advice when the girl isn't comfortable with dancing closer. Yes, if they danced closer things would be nicer, but there's usually a reason why a pair has the distance it has.
Leading should be leading, to the point of manipulation, and not signaling a choreographed maneuver.
Manipulation is an interesting choice of words. What do you mean by it?
I remember a Kizomba dance a year ago, when I didn't know much Kizomba. I did have a lot of partner perception from Bachata. I picked up enough information from my dance partner that I could just follow her movements, in a way where she didn't think she was leading, but I was certainly dancing a bunch of steps with her that I hadn't learned in a lesson.
To borrow something like what "manipulation" means in osteopathy, I think you could call that nonmanipulative leading. In Bachata I think there are a lot of situations where a movement is there in the body but suppressed, and things get good if the lead can "free" the movement and stabilize it. I think such nonmanipulative dancing is quite beautiful.
Unfortunately I'm not good enough to do that in Salsa, and even in Bachata I don't always have good enough perception.
Comment author: EStokes · 25 January 2014 09:08:30PM · 1 point
Some guesses on my part-
Maybe your tendency towards precision is at the wrong times? If practicing, for example, it might be counterproductive since you probably want quantity instead of quality, or maybe you're trying to get everything down precisely too early on and it's making your work stiff.
Manfred's point is good- "metaphor that captures the scene without the need for detail."... If you render background details overmuch, they can distract the viewer from the focal point of the work. Maybe put some effort into looking at how the "metaphors" of different things work? For example, how more skilled artists draw/paint grass in the distance, or whatnot.
I think it's a common thing to sort of notice something wrong in an area, and to spend a lot of time on that area in hopes of fixing it, which would make it less sloppy... Maybe sketch that thing a lot for practice.
If you're drawing from life, it's possible that lack of sloppiness comes from not making sense of the gestalt, so to speak. I'd think that understanding the form of the subject and how the lighting on it works means you can simplify things away. I don't do much (read: any) figure drawings from life, but I'd imagine that understanding the figure and what's important and what isn't would be helpful. Maybe doing some master copies of skilled, more abstract drawings of the figure would help. Maybe look up a comic artist or cartoonist you like and look at what they do.
ETA:
To address your actual question, I'd say I don't know any particular evidence for why that should be so.
Rationality-technique-wise: It's good that you asked people, since that would bring you evidence of the idea being true or false. In the future it might be even more useful to suppress hypothesizing until some more investigating has gone on- "biological limit" is the sort of thing that feels true if you don't understand how to do something or how to understand how to do something. I think there's a post about this, or something; let me see if I can find it... ETA2: The exact anecdote I was thinking of doesn't apply as much as I thought it did, but maybe the post "Fake Explanations" or something applies?
I have never biked twenty miles in one go.
It could be that this reflects some inherent limit.
Or it could be that I just haven't tried yet.
If I believe that it is an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and succeed, then I will update.
If I believe that it is not an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and fail, then I will update.
In either case, the test of my ability
Is not in contemplating what mechanisms of self might limit me,
But in trying anyway, when I have the opportunity to do so,
And seeing what happens.
Be careful not to find yourself 7 miles away from home on your bike and too tired to keep on cycling.
If avoiding that means arranging with a friend to pick you up in their car if you have to bail out, or picking a circular route that never takes you that far from home, or any other way of handling the contingency, then do that. Going "but suppose I fail!" and not trying is an even worse piece of wormtonguing than the one fubarobfusco is addressing.
Comment author: Ishaan · 26 January 2014 07:55:13AM · 0 points
If other people working the same craft have managed to achieve precision, it's very unlikely to be a biological limit, right? The resolution of human fine motor skills is really high.
You didn't mention what the craft is or the nature of the sloppiness, but have you considered using simple tools to augment technical skills? Perhaps a magnifying glass, rulers, pieces of string/clay or other suitably shaped objects to guide the hand, etc.?
Comment author: memoridem · 28 January 2014 03:06:26AM · 0 points
You could try doing something that gives immediate feedback for sloppiness, like simple math problems for example. You might gain some generalizable insight like that speed affects sloppiness. Since you already practice meditation, it should be easier to become aware of the specific failure modes that contribute to sloppiness, which doesn't seem to be a well defined thing in itself.
Even if you know that signaling is stupid, you don't escape the cost of not signaling.
It's a longstanding trope that Eliezer gets a lot of flak for having no formal education. Formal education is not the only way to gain knowledge, but it is a way of signaling knowledge, and it's not very easy to fake (not so easy to fake that it falls apart as a credential on its own). Has anyone toyed around with the idea of sending him off to get a math degree somewhere? He might learn something, and if not, it's a breezy recap of what he already knows. He comes out the other side without the eternal "has no formal education" tagline, and a whole new slew of acquaintances.
Now, I understand that there may be good reasons not to, and I'd very much appreciate someone pointing me to any previous discussion in which this has been ruled out. Otherwise, how feasible does it sound to crowdfund a "Here's your tuition and an extra sum of money to cover the opportunity cost of your time, I don't care how unfair it is that people won't take you seriously without credentials, go study something useful, make friends with your professors, and get out with the minimum number of credits possible" scholarship?
Comment author: ChristianKl · 25 January 2014 09:02:00PM · 9 points
I don't think you understand signaling well.
Eliezer managed signaling well enough to get a billionaire to fund him on his project. A billionaire who systematically funds people who drop out of college, in projects like his 20 Under 20 program.
Trying to go the traditional route wouldn't fit into the highly effective image that he already signals.
Put another way, the purpose of signaling isn't so nobody will give you crap. It's so somebody will help you accomplish your goals.
People will give you crap, especially if they can get paid to do so. See gossip journalists, for instance. They are not paid to give boring and unsuccessful people crap; they are paid to give interesting and successful people crap.
Comment author: David_Gerard · 25 January 2014 10:40:51PM · 1 point
Your last para would imply that not getting crap from gossip journalists means you are not interesting or successful. Eliezer/MIRI gets almost no press. Are you sure that's what you meant?
Well, yes, there is going to be some inevitable crap, but the purpose of signalling is so that you could impress a much larger pool of people. So it might not be much help for gossip journalists, but it might help with the marginal professional ethicist, mathematician, or public figure. In that area, you might get some additional "Anybody who can do that must be damn impressive.". Does the additional damn-impressive outweigh the cost? I don't know, that's why I'm asking.
Comment author: jimmy · 25 January 2014 10:42:28PM · 2 points
In addition "getting flak" isn't necessarily a bad thing.
It can be counter-signaling if you can get flak and stay standing.
It can also polarize people and separate those who can evaluate the inside arguments to realize that you're good from those who can't and have to just write you off for having no formal education.
Eddie has some math talent. He can invest some time, money, and effort C to get a degree, which allows other people to discern that he has a higher probability of having that math talent. This higher probability confers some benefit in that other people will more readily take his advice in mathematical matters, or talk with him about his math.
The fun twist is that Eddie lives in a society with many other individuals with varying degrees of math talent, each of whom can expend C to get a degree and the associated benefits. People with almost no mathematical talent have a prohibitively high C, because even if they can pony up the time and money, they have to work very hard to fake their way through. But people with high math ability often choose to stand out by getting the degree, because their C is relatively lower, and a very high proportion of them get degrees. This creates a high association between degrees and mathematical ability, and makes it unlikely to see high mathematical ability in the absence of a degree.
That's the basic idea, plus degrees signal other things which may be completely unrelated to math, but are still nice. Even in the case where the degree has no causal effect on math ability, there are benefits to having one, in that the other math people can judge very quickly that they're interested in talking to you.
Hopefully that demonstrates that I understand signalling. My question is about the costs and benefits of a particular signal.
Comment author:ChristianKl
26 January 2014 01:14:32AM
-2 points
[-]
It demonstrates that you don't. Humans make decisions via something called the availability heuristic.
If you bring into the awareness of the person you are talking to that you are a mathematician with only a bachelor's, no master's, no PhD, and no professorship, you aren't bringing expertise into his mind.
If you are, however, a self-taught person who managed to publish multiple papers, among them a paper titled "Complex Value Systems in Friendly AI" in Artificial General Intelligence, Lecture Notes in Computer Science, and who has his own research institute, that's a better picture.
Published papers are a lot more relevant to the relevant experts than a degree that verifies basic understanding. If a person really cares whether Eliezer has a math degree, he has already lost that person.
Impressing Thiel is independent of a future degree or not, because he's already impressed. Where's the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel? Maybe MIRI doesn't need another billionaire, but I don't think they'd turn one away.
Comment author:ChristianKl
26 January 2014 01:06:33AM
5 points
[-]
Impressing Thiel is independent of a future degree or not, because he's already impressed.
I think the deal that Eliezer has with Thiel is that Eliezer does MIRI full time. Switching focus to getting a degree might violate the deal. Given that Thiel has a lot of money, impressing Thiel more might also be very useful if they want more money from him.
Where's the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel?
Do you really think that someone who isn't contrarian will put his money into MIRI?
The present set up is quite okay. Those who want people with academic credentials can give their money to FHI. Those who want more contrarian people can give their money to MIRI.
Whether or not Eliezer has a degree doesn't change that he's the kind of person who has a public Okcupid profile detailing his sexual habits and the fact that he's polyamorous.
When Steve Jobs was alive and ran around in a sweater, he didn't cause people to disregard him because he wasn't wearing a suit.
People respect the contrarian who's okay with not everyone liking him. The contrarian who tries to get everyone to like them, on the other hand, gets no respect.
Comment author:drethelin
26 January 2014 10:19:38PM
0 points
[-]
On the other hand if he decides to get a degree and pulls it off in a year or something impressive like that it could just feed into the contrarian genius image.
Comment author:ChristianKl
26 January 2014 10:40:59PM
0 points
[-]
Yes, but that would probably mean either paying someone else to do your homework, which means that you are vulnerable to attack, or making studying the sole focus for a year.
I'm not certain that getting a degree now counts as the traditional route. Also, I don't think that an additional degree is particularly damaging to his image. People aren't going to lose interest in FAI if he sells out and gets a traditional degree. Or they are and I have no idea what kind of people are involved.
Comment author:James_Miller
26 January 2014 05:46:10AM
*
3 points
[-]
Peter Thiel (the billionaire) has the proven ability to spot talent, which is why he is a billionaire. Eliezer has traits that Thiel values, and this is probably much more important than any signal Eliezer sent.
Comment author:IlyaShpitser
25 January 2014 11:06:50PM
*
19 points
[-]
Has anyone toyed around with the idea of sending him off to get a math degree somewhere?
I think the bigger issue w/ people not taking EY seriously is he does not communicate (e.g. publish peer reviewed papers). Facebook stream of consciousness does not count. Conditional on great papers, credentials don't mean that much (otherwise people would never move up the academic status chain).
Yes it is too bad that writing things down clearly takes a long time.
True. It seems like the great-papers avenue is being pursued full-steam these days with MIRI, but I wonder if they're going to run out of low-hanging fruit to publish, or if mainstream academia is going to drag their heels replying to them.
Comment author:lukeprog
26 January 2014 07:55:11AM
*
12 points
[-]
Somehow I doubt I will ever persuade Eliezer to write in a style fit for a journal, but even still, I'll briefly mention that Eliezer is currently meeting with a "mathematical exposition aimed at math researchers" tutor. I don't know yet what the effects will be, but it seemed (to Eliezer and me) a worthwhile experiment.
Comment author:jsteinhardt
26 January 2014 06:56:12AM
*
5 points
[-]
4 years (or even 1 year if you are super hard-core) of time is a pretty non-trivial investment. I was 2 classes away from a second degree and declined to take them, because the ~100 hours of work it would have taken wasn't worth the additional letters after my name. I also just really don't know anyone relevant who thinks that a college degree or lack thereof particularly matters (although the knowledge and skills acquired in the course of pursuing said degree may matter a lot). Good people will judge you by what you've done to demonstrate skill, not based on a college diploma.
I think IlyaShpitser's comment pretty much nails it.
Comment author:djm
27 January 2014 12:16:58AM
0 points
[-]
I came to the same conclusion, and in general the lack of a degree has not impacted me, as I get employment based on demonstrated skill. The main limitation is that any formal postgrad study is impossible without a degree, and this was a regret for me prior to getting access to the Coursera-type courses.
Comment author:drethelin
26 January 2014 09:56:22PM
1 point
[-]
This might have been a good call 10 years ago, but nowadays Eliezer participates in regular face-to-face meetings with skilled mathematicians and scientists in the context of constructing and analyzing theorems and decision strategies. This means that for a large number of the people who are most important to convince, he gets to screen out all the "evidence" of not having a degree. And to a large extent, having the respect of a bunch of math PhDs is a more important qualifier of talent than having a PhD oneself.
There's theoretically still the problem of selling Eliezer to the muggles but I don't think that's anywhere near as important as getting serious thinkers on board.
Comment author:Viliam_Bur
27 January 2014 09:59:11AM
*
0 points
[-]
Different target groups may use different signals.
For example, for a scientist the citations may be more important than formal education. For an ordinary person with a university diploma who never published anything anywhere, formal education will probably remain the most important signal, because that's what they use. A smart sponsor may instead consider the ability of getting things done. And the New Age fans will debate about how much Eliezer fits the definitions of an "indigo child".
If the goal is to impress people for whom having a university diploma is the most important signal (they are a majority of the population), the best way would be to find a university which gives the diploma for the minimum time and energy spent. Perhaps one where you just pay some money (hopefully not too much), take a few easy exams, and that's it; you don't have to spend time at the lessons. After this, no one can technically say that Eliezer has "no formal education". (And if they start discussing the quality of the university, then Eliezer can point to his citations.) The idea is to do this as easily as possible... assuming it's even worth doing.
There are also other things to consider, such as the fact that other people working with Eliezer do have formal education... so why exactly is it a problem if Eliezer doesn't? Does MIRI seem from the outside like a one-man show? Maybe that should be fixed.
Comment author:banx
26 January 2014 12:27:13AM
*
5 points
[-]
Is it always correct to choose that action with the highest expected utility?
Suppose I have a choice between action A, which grants -100 utilons with 99.9% chance and +1000000 utilons with 0.1% chance, or action B, which grants +1 utilon with 100% chance. A has an expected utility of +900.1 utilons, while B has an expected utility of +1 utilon. This decision will be available to me only once, and all future decisions will involve utility changes on the order of a few utilons.
Intuitively, it seems like action A is too risky. I'll almost certainly end up with a huge decrease in utility, just because there's a remote chance of a windfall. Risk aversion doesn't apply here, since we're dealing in utility, right? So either I'm failing to truly appreciate the chance at getting 1M utilons -- I'm stuck thinking about it as I would money -- or this is a case where there's reason to not take the action that maximizes expected value. Help?
EDIT: Changed the details of action A to what was intended
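For what it's worth, the arithmetic above checks out; a quick sketch using only the numbers stated in the comment:

```python
def expected_utility(lottery):
    """Expected utility of a lottery given as (probability, utilons) pairs."""
    return sum(p * u for p, u in lottery)

action_a = [(0.999, -100), (0.001, 1_000_000)]
action_b = [(1.0, 1)]

print(expected_utility(action_a))  # ~ +900.1 utilons
print(expected_utility(action_b))  # +1 utilon
```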
Comment author:Alejandro1
26 January 2014 01:00:27AM
*
14 points
[-]
I think the non-intuitive nature of the A choice is because we naturally think of utilons as "things". For any valuable thing (money, moments of pleasure, whatever), anybody who is minimally risk averse would choose B. But utilons are not things; they are abstractions defined by one's preferences. So that A is the rational choice is a tautology, in the standard versions of utility theory.
It may help to think of it the other way around, starting from the actual preference. You would choose a 99.9% chance of losing ten cents and a 0.1% chance of winning 10000 dollars over winning one cent with certainty, right? So then perhaps, as long as we don't think of other bets and outcomes, we can map winning 1 cent to +1 utilon, losing 10 cents to -100 utilons, and winning 10000 dollars to +10000 utilons. Then we can refine and extend the "outcomes <=> utilons" map by considering your actual preferences under more and more bets. As long as your preferences are self-consistent in the sense of the VNM axioms, there will be a mapping that can be constructed.
ETA: of course, it is possible that your preferences are not self-consistent. The Allais paradox is an example where many people's intuitive preferences are not self-consistent in the VNM sense. But constructing such a case is more complicated than just considering risk aversion on a single bet.
Comment author:jsteinhardt
26 January 2014 07:04:43AM
-1 points
[-]
Yes, this seems almost certainly true (and I think is even necessary if you want to satisfy the VNM axioms, otherwise you violate the continuity axiom).
Comment author:jsteinhardt
27 January 2014 08:41:18AM
1 point
[-]
Yes, I'm quite aware... note that if there's a sequence of outcomes whose values increase without bound, then you could construct a lottery that has infinite value by appropriately mixing the lotteries together, e.g. put probability 2^-k on the outcome with value 2^k. Then this lottery would be problematic from the perspective of continuity (or even of having an evaluable utility function).
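The divergence is easy to see numerically: each outcome contributes 2^-k · 2^k = 1 to the expected value, so truncating the lottery at n outcomes gives expected value exactly n. A small sketch (the truncation is mine, for illustration):

```python
def truncated_value(n):
    """Expected utility of the lottery cut off after its first n outcomes.

    Outcome k has probability 2**-k and value 2**k utilons, so each
    term contributes exactly 1 and the partial sums grow without bound.
    """
    return sum(2 ** -k * 2 ** k for k in range(1, n + 1))

for n in (10, 20, 40):
    print(n, truncated_value(n))  # the value equals n
```

(The truncated probabilities sum to 1 - 2^-n; the leftover mass can be put on any single outcome without changing the conclusion.)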
Comment author:[deleted]
27 January 2014 09:03:51AM
0 points
[-]
Are lotteries allowed to have infinitely many possible outcomes? (The Wikipedia page about the VNM axioms only says "many"; I might look it up on the original paper when I have time.)
Comment author:jsteinhardt
27 January 2014 09:14:24AM
0 points
[-]
I'm not sure, although I would expect VNM to invoke the Hahn-Banach theorem, and it seems hard to do that if you only allow finite lotteries. If you find out I'd be quite interested. I'm only somewhat confident in my original assertion (say 2:1 odds).
There are versions of the VNM theorem that allow infinitely many possible outcomes, but they either
1) require additional continuity assumptions so strong that they force your utility function to be bounded
or
2) they apply only to some subset of the possible lotteries (i.e. there will be some lotteries for which your agent is not obliged to define a utility).
I might look it up on the original paper when I have time.
The original statement and proof given by VNM are messy and complicated. They have since been neatened up a lot. If you have access to it, try "Follmer H., and Schied A., Stochastic Finance: An Introduction in Discrete Time, de Gruyter, Berlin, 2004"
I'd flip that around. Whatever action you end up choosing reveals what you think has highest utility, according to the information and utility function you have at the time. It's almost a definition of what utility is - if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong.
If the utility function you think you have prefers B over A, and you prefer A over B, then there's some fact that's missing from the utility function you think you have (probably related to risk).
I've recently come to terms with how much fear/anxiety/risk avoidance is in my revealed preferences. I'm working on working with that to do effective long-term planning -- the best trick I have so far is weighing "unacceptable status quo continues" as a risk. That, and making explicit comparisons between anticipated and experienced outcomes of actions (consistently over-estimating risks doesn't help any, and I've been doing that).
Comment author:TylerJay
26 January 2014 07:48:33PM
0 points
[-]
I sometimes have the same intuition as banx. You're right that the problem is not in the choice, but in the utility function and it most likely stems from thinking about utility as money.
Lets examine the previous example and make it into money (dollars):
-100 [dollars] with 99.9% chance and +10,000 [dollars] with 0.1% vs 100% chance at +1 [dollar]
When doing the math, you have to take future consequences into account as well. For example, if you knew you would be offered 100 loaded bets with an expected payoff of $0.50 in the future, each of which only cost you $1 to participate in, then you have to count this in your original payoff calculation if losing the $100 would prohibit you from being able to take these other bets.
Basically, you have to think through all the long-term consequences when calculating expected payoff, even in dollars.
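A toy version of that calculation, with the -100/+10,000 bet from above and the 100 follow-on bets (the modeling choice that a loss forfeits all follow-on bets is my own simplifying assumption):

```python
# Bet from above: 99.9% chance of -$100, 0.1% chance of +$10,000.
P_LOSE, LOSS, WIN = 0.999, -100.0, 10_000.0
ev_bet = P_LOSE * LOSS + (1 - P_LOSE) * WIN  # ~ -89.9 in isolation

# Follow-on opportunities: 100 future bets, each with an expected profit
# of $0.50, but only affordable if you didn't just lose the $100.
future_profit = 100 * 0.50  # $50 of expected profit at stake

ev_take = ev_bet + (1 - P_LOSE) * future_profit  # keep future bets only on a win
ev_skip = future_profit                          # decline: keep all future bets

print(ev_take, ev_skip)  # skipping dominates once downstream bets are counted
```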
Then when you try to convert this to utility, it's even more complicated. Is the utility per dollar gained in the +$10,000 case equivalent to the utility per dollar lost in the -$100 case? Would you feel guilty and beat yourself up afterwards if you took a bet that you had a 99.9% chance of losing? Even though a purely rational agent probably shouldn't feel this, it's still likely a factor in most actual humans' utility functions.
TrustVectoring summed it up well above:
If the utility function you think you have prefers B over A, and you prefer A over B, then there's some fact that's missing from the utility function you think you have.
If you still prefer picking the +1 option, then most likely your assessment that the first choice only gives a negative utility of 100 is probably wrong. There are some other factors that make it a less attractive choice.
Comment author:Qiaochu_Yuan
27 January 2014 07:29:36PM
*
0 points
[-]
Depending on your preferred framework, this is in some sense backwards: utility is, by definition, that thing which it is always correct to choose the action with the highest expected value of (say, in the framework of the von Neumann-Morgenstern theorem).
Comment author:[deleted]
26 January 2014 04:28:54PM
2 points
[-]
I don't know what you mean precisely by confusion, but I personally can't always control what my immediate primal-level response to certain situations is. If I try to strictly avoid certain feelings, I usually end up convincing myself that I'm not feeling that way when actually I am. I'd rather notice what I'm feeling and then move on from there; it's probably easier to control your thinking that way. Just because you're angry doesn't mean you have to act angry.
Comment author:Stabilizer
26 January 2014 05:57:30PM
1 point
[-]
That's basically what I meant. The move is to notice the anger, fear or disgust and then realize that this emotion isn't useful and can be actively detrimental. Then consciously try to switch to curiosity.
Of course, I couldn't condense the full messiness of reality into a pithy saying.
Comment author:falenas108
26 January 2014 07:56:14AM
14 points
[-]
I've been systematically downvoted for the past 16 days. Every day or two, I'd lose about 10 karma. So far, I've lost a total of about 160 karma.
It's not just somebody just going through my comments and downvoting the ones they disagree with. Even a comment where I said "thanks" when somebody pointed out a formatting error in my comments is now at -1.
I'm not sure what can/should be done about this, but I thought I should post it here. And if the person who did this is here and there is a reason, I would appreciate it if you would say it here.
Comment author:CAE_Jones
26 January 2014 08:49:10AM
12 points
[-]
A quick look at the first page of your recent comments shows most of your recent activity to have been in the recent "Is Less Wrong too scary to marginalized groups?" firestorm.
One of the most recent users to complain about mass downvoting also cited participation in flame-bait topics (specifically gender).
Comment author:drethelin
26 January 2014 09:48:54PM
2 points
[-]
How is this victim blaming? As I interpret it the claim is that the person was probably NOT the victim of systematic downvoting but instead made a lot of comments that are counter to what people like to hear, creating the illusion of same.
Comment author:gjm
26 January 2014 10:12:13PM
3 points
[-]
Hard to explain getting downvoted for
a comment where I said "thanks" when somebody pointed out a formatting error in my comments
as being about saying things "counter to what people like to hear". Which is why I didn't interpret CAE_Jones as suggesting that that's what was going on.
Comment author:CAE_Jones
27 January 2014 09:48:56AM
0 points
[-]
For what it's worth, I agree with gjm that "flame-bait" was a poor choice of words on my part, and I understand how it could have been taken as victim-blaming in spite of my intentions.
Comment author:pragmatist
26 January 2014 09:11:47AM
*
4 points
[-]
Gah... This is becoming way too common, and it seems like there's pretty good evidence (further supported in this instance) regarding the responsible party. I wish someone with the power to do so would do something about it.
Comment author:Dias
26 January 2014 07:23:47PM
-3 points
[-]
a comment where I said "thanks" when somebody pointed out a formatting error in my comments is now at -1.
That sounds like a pretty low value comment. Is it beneficial to third parties to be able to read it? If not, just make the correction and PM your thanks. Otherwise you're unnecessarily wasting everyone's time in the guise of politeness.
Comment author:ChristianKl
26 January 2014 10:16:10PM
3 points
[-]
Acknowledging that there is indeed an error and that you weren't doing something intentionally is helpful.
If you make the correction and don't mention that the comment pointing out the issue is right, it will seem to anybody who reads the discussion later as if the comment points out a nonexistent problem.
Comment author:Dias
27 January 2014 02:35:30AM
0 points
[-]
It is not necessary that the argument I gave be right. All that is necessary for the g'grandparent to be wrong is for there to be a plausible reason why someone would want to downvote such a comment, other than malice.
Comment author:Vulture
27 January 2014 06:48:26PM
*
4 points
[-]
I got a seemingly one-time hit of this about a week ago. For what it's worth I had just been posting comments on the subject of rape, but a whole bunch of my unrelated comments got it too.
(Since then it's been having an obnoxious deterrent effect on my commenting, because I feel so precariously close to just accumulating negative karma every time I post, leaving readers with the impression that my ideas have all been identified as worthless by someone probably cleverer than themselves. I'm now consciously trying to avoid thinking like this)
Comment author:lukeprog
26 January 2014 08:59:03AM
*
36 points
[-]
Every now and then I like to review my old writings so I can cringe at all the wrong things I wrote, and say "oops" for each of them. Here we go...
There was once a time when the average human couldn't expect to live much past age thirty. (Jul 2012)
That's probably wrong. IIRC, previous eras' low life expectancy was mostly due to high child mortality.
We have not yet mentioned two small but significant developments leading us to agree with Schmidhuber (2012) that "progress toward self-improving AIs is already substantially beyond what many futurists and philosophers are aware of." These two developments are Marcus Hutter's universal and provably optimal AIXI agent model... and Jurgen Schmidhuber's universal self-improving Godel machine models... (May 2012)
This sentence is defensible for certain definitions of "significant," but I think it was a mistake to include this sentence (and the following quotes from Hutter and Schmidhuber) in the paper. AIXI and Godel machines probably aren't particularly important pieces of progress to AGI worth calling out like that. I added those paragraphs to section 2.4. not long before the submission deadline, and regretted it a couple months later.
one statistical prediction rule developed in 1995 predicts the price of mature Bordeaux red wines at auction better than expert wine tasters do. (Jan 2011)
The Wiki link in the linked LW post seems to be closer to "Stanislav Petrov saved the world" than "not really":
Petrov judged the report to be a false alarm, and his decision is credited with having prevented an erroneous retaliatory nuclear attack
...
His colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile strike if they had been on his shift.
...
Petrov, as an individual, was not in a position where he could single-handedly have launched any of the Soviet missile arsenal. ... But Petrov's role was crucial in providing information to make that decision. According to Bruce Blair, a Cold War nuclear strategies expert and nuclear disarmament advocate, formerly with the Center for Defense Information, "The top leadership, given only a couple of minutes to decide, told that an attack had been launched, would make a decision to retaliate."
Petrov's responsibilities included observing the satellite early warning network and notifying his superiors of any impending nuclear missile attack against the Soviet Union. If notification was received from the early warning systems that inbound missiles had been detected, the Soviet Union's strategy was an immediate nuclear counter-attack against the United States (launch on warning), specified in the doctrine of mutual assured destruction.
That he didn't literally have his finger on the "Smite!" button, or that the SU might still not have retaliated if he'd raised the alarm, is not the point.
Comment author:gjm
26 January 2014 10:29:46AM
10 points
[-]
previous eras' low life expectancy was mostly due to high child mortality.
I have long thought that the very idea of "life expectancy at birth" is a harmful one, because it encourages exactly that sort of confusion. It lumps together two things (child mortality and life expectancy once out of infancy) with sufficiently different causes and sufficiently different effects that they really ought to be kept separate.
Comment author:TylerJay
26 January 2014 07:18:11PM
2 points
[-]
Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live? Or even at a past time?
Comment author:Lumifer
26 January 2014 07:33:42PM
5 points
[-]
Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live?
Sure, there is the concept of life expectancy at specific age. For example, there is the "default" life expectancy at birth, there is the life expectancy for a 20 year-old, life expectancy for a 60-year-old, etc. Just google it up.
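To illustrate the concept, here is a toy life table (the ages below are made up for illustration, not real demographic data); conditioning on surviving childhood removes the early deaths that drag the at-birth figure down:

```python
# Toy cohort: ages at death for ten people (made-up numbers, not real data).
deaths = [1, 2, 2, 55, 60, 65, 70, 70, 75, 80]

def life_expectancy(at_age):
    """Mean age at death among those who survived to `at_age`."""
    survivors = [d for d in deaths if d >= at_age]
    return sum(survivors) / len(survivors)

print(life_expectancy(0))   # 48.0 -- at birth, dragged down by the child deaths
print(life_expectancy(5))   # ~67.9 -- conditional on surviving childhood
```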
On the AIXI and such... you see, it's just hard to appreciate how much training it takes to properly understand something like that. Very intelligent people, with very high mental endurance, train for decades to be able to mentally manipulate the relevant concepts at their base level. Now, let's say someone only spent a small fraction of that time, either because they pursued the wrong topic through the most critical years, or because they have low mental endurance. Unless they're impossibly intelligent, they have no chance of forming even a merely good understanding.
Comment author:Thomas
26 January 2014 09:51:32AM
*
5 points
[-]
Last night we had a meetup in Ljubljana. It was a good debate, but quite a heretical one by LW standards, especially after the organizers left us. Which was unfortunate. We mostly don't see ourselves as particularly bonded to LW at all. Especially I.
We discussed personal identity, possible near-term superintelligence (a sudden hack, if you wish), Universe transformation following this eventuality, and some lighter topics like fracking for gas and oil, language revolutions throughout history, neo-reactionaries and their points, and Einstein's brain (whether it was lighter or heavier than average; I am quite sure it was heavier, but it seems that the Cathedral says otherwise).
We discussed Three Worlds Collide, IBM brain simulations, MIRI endeavors and progress, genetics ...
Heretical? Well, considering that 'heretic' means 'someone who thinks on their own', I'm not sure how we're supposed to interpret that negatively.
I assume however that you meant 'disagreeing with common positions displayed on LW' - which of those common positions did you differ on, and why, and just how homogeneous do you think LW is on those?
Comment author:Thomas
26 January 2014 04:20:15PM
*
0 points
[-]
I can speak mostly for myself. Still, we locals go back a decade and more, discussing some topics.
It is kind of clear to me that there is a race toward superintelligence, as there has always been a race toward some future technology, be it flying, be it the atomic bomb, be it the Moon race... you name it.
Except that this is the final, most important race ever. What can you expect, then, from the competitors? You can expect them to claim that the Singularity/Transcendence is still far, far away. You can expect that the competition will try to persuade you to abandon your own project, if you have any. For example, by saying that an uncontrollable monster is lurking in the dark, named UFAI. They will say just about anything to persuade you to quit.
This works both ways, between almost any two competitors, to be clear.
My view is the following. If you are clever and daring enough, you can write a computer program of 10000 lines or thereabouts, and there will be the Singularity the very next month.
I am not sure if there is a human (or group) currently able to accomplish this. There very well might be. It's likely NOT THAT difficult.
We discussed Marilyn vos Savant's toying with Paul Erdos. A smartass against a top scientist is occasionally like a cat-and-mouse game, where the mouse mistakenly thinks he's the cat. There are many other examples, like Ballard against all the historians and archeologists. Or Moldbug against Dawkins.
Of course, that does not automatically mean another smartass is preying upon MIRI and AI academia combined in the real AI case. But it's not impossible. There may be several different big cats in the wild who keep a low profile for the time being. There might be a lion with his pride inhabiting academia also.
The most interesting outcome would be no Singularity for a few decades.
Comment author:Lumifer
26 January 2014 05:03:13PM
3 points
[-]
If you are clever and dare enough, you can write a 10000 lines or there about long computer program, and there will be the Singularity the very next month.
That seems an... unusual view. Have you actually tried writing code that exhibits something related to intelligence?
It depends on your language and coding style, doesn't it? I've seen C style guides that require you to stretch out onto 15 lines what I'd hope to take 4, and in a good functional language shouldn't take more than 2.
Comment author:Lumifer
27 January 2014 04:18:16PM
1 point
[-]
Yes, and the number of lines is a ridiculously bad metric of the code's complexity anyway.
There was a funny moment when someone I know was doing a Java assignment, I got curious, and it turned out that a full page of Java code is three lines in Perl :-)
That really depends on coding style, again. I find that common Java coding styles are hideously decompressed, and become far more readable if you do a few things per line instead of maybe half a thing. Even they aren't as bad as the worst C coding styles I've seen, though, where it takes like 7 lines to declare a function.
As for Perl vs Java... was it solved in Perl by a Regex? That's one case where if you don't know what you're doing, Java can end up really bloated but it usually doesn't need to be all that bad.
Comment author:Kawoomba
26 January 2014 04:18:43PM
4 points
[-]
Yes, although it would help if you could be a bit more specific, the term is somewhat overloaded.
As for the strategy, depends. Find a better community (than the one you feel alienated from) in the sense of better matching values? We both seem to feel quite at home in this one (for me, if not for the suffocating supremacy of EA).
I meant alienated from society at large, not from LW, although the influence of society at large obviously affects discussion on LW.
One aspect of my feeling is that I increasingly suspect that the fundamental reason people believe things in the political realm is that they feel a powerful psychological need to justify hatred. The naive view of political psychology is that people form ideological beliefs out of their experience and perceptions of the world, and those beliefs suggest that a certain category of people is harming the world, and so therefore they are justified in feeling hatred against that category of people. But my new view is that causality flows in the opposite direction: people feel hatred as a primal psychological urge, and so their conscious forebrain is forced to concoct an ideology that justifies the hatred while still allowing the individual to maintain a positive pro-social self-image.
This theory is partially testable, because it posits that a basic prerequisite of an ideology is that it identifies an out-group and justifies hatred against that out-group.
Comment author:Viliam_Bur
26 January 2014 07:23:59PM
*
3 points
[-]
The part where the emotional needs come first, and the ideological belief comes later as a way of expressing and justifying them, that feels credible. I just don't think that everyone starts from the position of hatred (or, in the naive view, not everyone ends with hatred). There are other emotions, too.
But maybe the people motivated by hatred make up a large part of the most mindkilled crowd, because other emotions can be expressed legitimately outside of politics as well.
Tentatively: Look for what "and therefore" you've got associated with the feeling. Possibilities that come to my mind-- and therefore people are frightening, or and therefore I should be angry at them all the time, or and therefore I should just hide, or and therefore I shouldn't be seeing this.
In any case, if you've got an "and therefore" and you make it conscious, you might be able to think better about the feeling.
Comment author:fubarobfusco
27 January 2014 06:06:41AM
*
3 points
[-]
There is a quote commonly mis-attributed to August Bebel and indeed to Marx: "Antisemitismus ist der Sozialismus des dummen Kerls." ("Antisemitism is the socialism of the stupid guy", or perhaps colloquially, "Antisemitism is a dumb-ass version of socialism") That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker's view, anyway — would be better remedied by changing the political-economic structure.
Jay Smooth recently put out a video, "Moving the Race Conversation Forward", discussing recent research to the effect that mainstream-media discussions of racial issues tend to get bogged down in talking about whether an individual did or said something racist, as opposed to whether institutions and social structures produce racially biased outcomes.
There are probably other sources for similar ideas from around the political spectra. (I'll cheerfully admit that the above two sources are rather lefter than I am, and I just couldn't be arsed to find two rightish ones to fit the politesse of balance.) People do often look for individuals or out-groups to blame for problems caused by economic conditions, social structures, institutions, and so on. The individuals blamed may have precious little to do with the actual problems.
That said, if someone's looking to place blame for a problem, that does suggest the problem is real. It's not that they're inventing the problem in order to have something to pin on an out-group. (It also doesn't mean that a particular structural claim, Marxist or whatever, is correct on what that problem really is — just that the problem is not itself confabulated.)
Sure, obviously there are real problems in the world. Your examples seem to support my thesis that people believe in ideologies not because those ideologies are capable of solving the problems, but because the ideologies justify their feelings of hatred.
I suppose I see it as more a case of biased search: people have actual problems, and look for explanations and solutions to those problems, but have a bias towards explanations that have to do with blaming someone. The closer someone studies the actual problems, though, the less credibility blame-based explanations have.
There is a quote commonly mis-attributed to August Bebel and indeed to Marx: "Antisemitismus ist der Sozialismus des dummen Kerls." ("Antisemitism is the socialism of the stupid guy", or perhaps colloquially, "Antisemitism is a dumb-ass version of socialism") That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker's view, anyway — would be better remedied by changing the political-economic structure.
Does that make socialism the anti-semitism of the smart? Or perhaps of the ambitious -- they're attracted to it because it gives them an enemy big enough to justify taking over everything?
Comment author:ChristianKl
26 January 2014 10:11:31PM
2 points
[-]
Feelings usually become a problem when you resist them.
My general approach with feelings:
Find someone to whom you can express the content behind the feeling. This works best in person; online communication isn't good for resolving feelings. Speak openly about whatever comes to mind.
Track the feeling down in your body. Be aware of where it happens to be. Then release it.
I think that feeling is more common than you might think. Especially if you deviate enough from the societal norm (which Less Wrong generally does).
My general strategy for dealing with it is social interaction with people who'll probably understand. Just talk it over with them. It's best if you do this with people you care about. It doesn't have to be in person; if you've got someone relevant on Skype, that works as well.
Hmm, this is probably good advice. Part of my problem is that my entire family is made up of people who are both 1) Passionate advocates of an American political tribe and 2) Not very sophisticated philosophically.
A common condition with geeks in general and aspiring rationalists in particular, I'd say.
I've recently been expanding my network of like-minded people, both by going to the local meetups and by being invited to a Skype group for tumblr rationalists.
I know that a feeling of alienation isn't conducive to meeting new people, so I'm not sure I can offer other advice. Contact some friends who might be open to new ideas? I'd offer to help myself, but I'm not sure if I'm the right person to talk to. (In any case, I've PM'd you my Skype name if you do need a complete stranger to talk to.)
Comment author:memoridem
28 January 2014 02:12:15AM
*
2 points
[-]
I think this feeling arises from social norms feeling unnatural to you. This feeling should be expected if your interests are relevant to this site, since people are not trying to be rational by default.
The difference between a pathetic misfit and an admirable eccentric is their level of awesomeness. If you become good enough at anything relevant to other people, you don't have to live by their social expectations. Conform to the norms or rise above them.
Note that I think most social norms are nice to have, but this doesn't mean there aren't enough of the kind that make me feel alienated. It could be that the feeling of alienation is a necessary side effect of some beneficial cognitive change, in which case I'd try to cherish the feeling. I've found that rising to a leadership position diminishes the feeling significantly, however.
Comment author:bramflakes
26 January 2014 05:09:55PM
*
17 points
[-]
I'm going to do the unthinkable: start memorizing mathematical results instead of deriving them.
Okay, unthinkable is hyperbole. But I've noticed a tendency within myself to regard rote memorization of things to be unbecoming of a student of mathematics and physics. An example: I was recently going through a set of practice problems for a university entrance exam, and calculators were forbidden. One of the questions required a lot of trig, and half the time I spent solving the problem was just me trying to remember or re-derive simple things like the arcsin of 0.5 and so on. I knew how to do it, but since I only have a limited amount of working memory, actually doing it was very inefficient because it led to a lot of backtracking and fumbling. In the same sense, I know how to derive all of my multiplication tables, but doing it every time I need to multiply two numbers together is obviously wrong. I don't know how widespread this is, but at least in my school, memorization was something that was left to the lower-status, less able people who couldn't grasp why certain results were true. I had gone along with this idea without thinking about it critically.
So these are the things I'm going to add to my anki decks, with the obligatory rule that I'm only allowed to memorize results if I could theoretically re-derive them (or if the know-how needed to derive them is far beyond my current ability). These will include common trig results, derivatives and integrals of all basic functions, most physical formulae relating heat, motion, pressure and so on. I predict that the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems, though I can't think of a way to measure this. Also, recommendations for other things to memorize are welcome.
Comment author:shminux
26 January 2014 05:44:58PM
10 points
[-]
In my experience memorization often comes for free when you strive for fluency through repetition. You end up remembering the quadratic formula after solving a few hundred quadratic equations. Same with the trig identities. I probably still remember all the most common identities years out of school, owing to the thousands (no exaggeration) of trig problems I had to solve in high school and uni. And can derive the rest in under a minute.
Memorization through solving problems gives you much more than anki decks, however: you end up remembering the roads, not just the signposts, so to speak, which is important for solving test problems quickly.
You are right that "the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems", I am not sure that anki is the best way to achieve this reduction, though it is certainly worth a try.
Comment author:whales
26 January 2014 08:56:14PM
*
1 point
[-]
Nice, and good luck! I'm glad to see that my post resonated with someone. For rhetorical purposes, I didn't temper my recommendations as much as I could have -- I still think building mental models through deliberate practice in solving difficult problems is at the core of physics education.
I treat even "signpost" flashcards as opportunities to rehearse a web of connections rather than as the quiz "what's on the other side of this card?" If an angle-addition formula came up, I'd want to recall the easy derivation in terms of complex exponentials and visualize some specific cases on the unit circle, at least at first. I also use cards like that in addition to cards which are themselves mini-problems.
Comment author:ChristianKl
26 January 2014 11:04:45PM
2 points
[-]
In general, the core principle of spaced repetition is that you don't put something into the system that you don't already understand.
When trying to memorize mathematical results, make sure that you only add cards when you really have a mental understanding. Using Anki to avoid forgetting basic operations is great. If, however, you add a bunch of complex information, you will forget it and waste a lot of time.
Comment author:whales
26 January 2014 11:56:19PM
*
4 points
[-]
That's true if you're just using spaced repetition to memorize, although I'd add that it's still often helpful to overlearn definitions and simple results just past the boundaries of your understanding, along the lines of Prof. Ravi Vakil's advice for potential students:
Here's a phenomenon I was surprised to find: you'll go to talks, and hear various words, whose definitions you're not so sure about. At some point you'll be able to make a sentence using those words; you won't know what the words mean, but you'll know the sentence is correct. You'll also be able to ask a question using those words. You still won't know what the words mean, but you'll know the question is interesting, and you'll want to know the answer. Then later on, you'll learn what the words mean more precisely, and your sense of how they fit together will make that learning much easier. The reason for this phenomenon is that mathematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you'll never get anywhere. Instead, you'll have tendrils of knowledge extending far from your comfort zone. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning "forwards". (Caution: this backfilling is necessary. There can be a temptation to learn lots of fancy words and to use them in fancy sentences without being able to say precisely what you mean. You should feel free to do that, but you should always feel a pang of guilt when you do.)
The second point I'd make is that the spacing effect (distributed practice) works for complex learning goals as well, although it will help if your practice consists of more than rote recall.
Comment author:ChristianKl
27 January 2014 12:16:41AM
0 points
[-]
If you learn definitions it's important to sit down and actually understand the definition. If you write a card before you understand it, that will lead to problems.
Comment author:palladias
26 January 2014 10:23:31PM
3 points
[-]
Has anyone paired Beeminder and Project Euler? I'd like to be able to set a goal of doing x problems per week and have it automatically update, instead of me entering the data in manually. Has anyone cobbled together a way to do it, which I could piggyback off of?
Comment author:Lumifer
27 January 2014 01:12:33AM
2 points
[-]
"Sexy" isn't signaling -- it's a characteristic that people (usually) try to signal, more or less successfully. "I'm sexy" basically means "You want me" : note the difference in subjects :-)
Comment author:Lumifer
27 January 2014 02:12:57AM
*
1 point
[-]
Pretty much the same thing. Regardless of an, um, widespread misunderstanding :-D sexy behavior does NOT signal either promiscuity or sexual availability. It signals "I want you to desire me" and being desired is a generally advantageous position to be in.
Comment author:ChristianKl
27 January 2014 12:11:08PM
1 point
[-]
If a man succeeds in signaling high sexuality to a woman, the woman might still treat him as a creep. Especially if there's no established trust, signaling really high amounts of sexuality doesn't result in "You want me".
In my own interactions with professional dancers there are plenty of situations where the woman succeeds in signaling a high amount of sexiness. However, I know that I'm dancing with a professional dancer who's going to send that signal to a lot of guys, so she doesn't enter my mental category of potential mates.
I think people frequently go wrong when they confuse impressions of characteristics with goals.
Comment author:ChristianKl
27 January 2014 04:06:51PM
1 point
[-]
It depends on how you define the term.
For a reasonable definition of sexy, the term refers to letting a woman feel sexual tension. If you talk about social interactions it's useful to have a word that refers to making another person feel sexual tension.
Of course you can define beautiful, attractive and sexy all the same way. Then you get a one-dimensional model where Bob wants Alice with utility rating X. I don't think that model is very useful for understanding how humans behave in mating situations.
Comment author:Lumifer
27 January 2014 04:14:57PM
1 point
[-]
It depends on how you define the term.
I define it as "arousing sexual interest and desire in people of appropriate gender and culture". Note that this is quite different from "beautiful" and is a narrow subset of "attractive".
the term refers to letting a woman feel sexual tension.
"Tension" generally implies conflict or some sort of a counterforce.
Comment author:ChristianKl
27 January 2014 10:36:13PM
*
-1 points
[-]
"Tension" generally implies conflict or some sort of a counterforce.
Testosterone, which is commonly associated with sexiness in males, is about dominance. It has something to do with power, and that does create tension.
Of course a woman can decide to have sex with a shy guy because he's nice and she thinks that he's intelligent or otherwise a good match. Given that there are shy guys who do have sex, that's certainly happening in reality.
Does that mean that the behavior of that guy deserves the label "sexy"? I don't think he's commonly given that label.
There are also words like sensual and empathic. A guy can get laid by being very empathic and making a woman feel really great by interacting with her in a sensual way. I think it's useful to separate that mentally from the kind of testosterone-driven behavior that commonly gets called sexy.
If you read an exciting thriller you also feel tension, even though you aren't in conflict with the book and there's no counterforce. Building up tension and then releasing it is a way for humans to feel pleasure.
Comment author:Torello
27 January 2014 01:41:31AM
1 point
[-]
Being sexy signals health, youth, and fertility. This is quite well supported by evidence and discussed in many books and articles.
I would agree with what Lumifer says below, but I think sexy can be signalling when many people are involved: look at the sexy people I hang out with. Being with sexy people brings high status because it's high status.
Comment author:ChristianKl
27 January 2014 12:12:54PM
2 points
[-]
Sexy is quite a broad word that's probably used by different people in different ways. I think for most people it's about what they feel when looking at the person. Those feelings were shaped by evolution over large time frames.
Evolution doesn't really care about whether you get a fun intercourse partner.
But it's not only evolution. It also has a lot to do with culture. Culture also doesn't care about whether you get a fun intercourse partner. People who watch a lot of TV get taught that certain characteristics are sexy.
For myself I would guess that most of my cultural imprint regarding what I find sexy comes from dancing interactions.
If a woman moves in a way that suggests that she doesn't dance well, that will reduce her sex appeal to me more than it probably does with the average male.
Comment author:PECOS-9
27 January 2014 12:22:39AM
*
13 points
[-]
PSA: You can download from scribd without paying, you just need to upload a file first (apparently any file -- it can be a garbage pdf or even a pdf that's already on scribd). They say this at the very bottom of their pricing page, but I didn't notice until just now.
Google is shelling out $400 million to buy a secretive artificial intelligence company called DeepMind....Based in London, DeepMind was founded by games prodigy and neuroscientist Demis Hassabis, Skype & Kazaa developer Jaan Tallin and researcher Shane Legg.
I liked Legg's blog & papers and was sad when he basically stopped in the interests of working on his company, but one can hardly argue with the results.
Comment author:TylerJay
27 January 2014 02:06:19AM
5 points
[-]
The MIRI course list bashes on "higher and higher forms of calculus" as not being useful for their purposes and calculus is not on the list at all. However, I know that at least some kind of calculus is needed for things like probability theory.
So imagine a person wanted to work their way through the whole MIRI course list and deeply understand each topic. How much calculus is needed for that?
Comment author:Qiaochu_Yuan
27 January 2014 07:25:49PM
*
7 points
[-]
Not much. The kind of probability relevant to MIRI's interests is not the kind of probability you need calculus to understand (the random variables are usually discrete, etc.). The closest thing to needing a calculus background is maybe numerical analysis (I suspect it would be helpful to at least have the intuition that derivatives measure the sensitivity of a function to changes in its input), but even then I think that's more algorithms. Not an expert on numerical analysis by any means, though.
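That sensitivity intuition is easy to illustrate numerically -- a toy finite-difference sketch (the function and numbers here are made up for illustration, not taken from the course list):

```python
# Derivative as sensitivity: how much does f's output move per unit
# nudge of its input? Estimate with a symmetric finite difference.
def sensitivity(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2
print(sensitivity(f, 3.0))  # ~6.0: near x=3, the output moves ~6x any small input nudge
```

The same idea underlies conditioning in numerical analysis: a function with a large derivative at a point amplifies small input errors there.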
If you have a general interest in mathematics, I still recommend that you learn some calculus because it's an important foundation for other parts of mathematics and because people, when explaining things to you, will often assume that you know calculus after a certain point and use that as a jumping-off point.
Comment author:TylerJay
27 January 2014 08:02:19PM
1 point
[-]
Thanks. I took single-variable calculus, differential equations, and linear algebra in college, but it's been four years since then and I haven't really used any of it since (and I think I really only learned it in context, not deeply). I've just been trying to figure out how much of my math foundations I'm going to need to re-learn.
Has anyone had experiences with virtual assistants? I've been aware of the concept for many years but always been wary of what I perceive to be the risks involved in letting a fundamentally unknown party read my email.
I'd like to hear about any positive or negative experiences.
One problem with searching for information about the trustworthiness of entities like these is that one suspects any positive reports one finds via Googling to be astroturfing, and if one finds negative reports, well, negatives are always over-reported in consumer services. That's why I'm asking here.
Comment author:adamzerner
28 January 2014 03:15:01AM
*
0 points
[-]
I don't, but in Tim Ferris' book Four-Hour Work Week, I think I recall him recommending them. I think this was the one he recommended: https://www.yourmaninindia.com/.
Let me know if you come across some good findings on this. If effective, virtual assistants could be very useful, and thus they're something I'm interested in. On that note, it'd probably be worth writing a post about them.
Is there a good way of finding what kind of job might fit a person? Common advice such as "do what you like to do" or "do what you're good at" is relatively useless for finding a specific job or even a broader category of jobs.
I've done some reading on 80,000 Hours, and most of the advice there is on how to choose between a couple of possible jobs, not on finding a fitting one from scratch.
Comment author:ChristianKl
27 January 2014 10:58:39PM
2 points
[-]
Is there a good way of finding what kind of job might fit a person?
That's a strange question.
Either you want to know how to pick up the skill of being a career adviser, or you want to find a job for yourself. You might also be a parent trying to find a job that fits your child instead of letting the child decide for themselves.
I think the answers to those three possibilities are very different.
Comment author:memoridem
28 January 2014 02:46:30AM
*
2 points
[-]
I think for most people who ask this question, the range of fitting jobs is much wider than they think. You learn to like what you become good at.
If I were to pick a career right now, I'd take a long list of reasonably complex jobs and remove any that contain an obvious obstacle, like a skill requirement I'm unlikely to improve at. From what is left, I'd narrow the choice by criteria other than perceived fit -- income and future employment prospects, for example -- and then pick one either by some additional criterion or randomly. I'm confident I'd learn to like almost any job chosen this way.
If you make money you can do whatever you like in the future even if you chose your job poorly in the first place. So please don't choose to become an English major.
Comment author:gedymin
27 January 2014 01:23:58PM
*
2 points
[-]
I'm quite new to LW, and find myself wondering whether Hidden Markov models (HMM) are underappreciated as a formal reasoning tool in the rationalist community, especially compared to Bayesian networks?
Perhaps it's because HMM seem to be more difficult to grasp?
Or it's because formally HMM are just a special case of Bayesian networks (i.e. dynamic Bayes nets)? Still, HMM are widely used in science on their own.
For comparison, Google search "bayes OR bayesian network OR net" site:lesswrong.com gives 1,090 results.
Google search hidden markov model site:lesswrong.com gives 91 results.
Comment author:Qiaochu_Yuan
27 January 2014 07:21:49PM
*
0 points
[-]
There's a proliferation of terminology in this area; I think a lot of these are in some sense equivalent and/or special cases of each other. I guess "Bayesian network" is more consistent with the other Bayes-based vocabulary around here.
Hello, we are organizing monthly rationality meetups in Vienna - we have previously used the account of one of our members (ratcourse) but would like to switch to this account (rationalityvienna). Please upvote this account for creating rationality vienna meetups.
Comment author:pan
27 January 2014 06:33:04PM
5 points
[-]
Is there a reasonably well researched list of behaviors that correlate positively with lifespan? I'm interested in seeing if there are any low hanging fruit I'm missing.
I found this previously posted, and a series of posts by gwern, but was wondering if there is anything else?
A quick google will give you a lot of lists but most of them are from news sources that I don't trust.
Comment author:Qiaochu_Yuan
27 January 2014 07:18:08PM
*
3 points
[-]
I found this list of causes of death by age and gender enlightening (it doesn't necessarily tell you that a particular action will increase your lifespan, but then again neither do correlations). For example, I was surprised by how often people around my age or a bit older die of suicide and "poisoning" (not sure exactly what this covers but I think it covers stuff like alcohol poisoning and accidentally overdosing on medicine?).
Comment author:Qiaochu_Yuan
27 January 2014 07:15:00PM
*
5 points
[-]
A year ago, I was asked to follow up on my post about the January 2013 CFAR workshop in a year. The time to write that post is fast approaching. Are there any issues / questions that people would be particularly interested in seeing this post address / answer?
Comment author:adamzerner
28 January 2014 03:12:28AM
2 points
[-]
I'm recalling a Less Wrong post about how rationality only leads to winning if you "have enough of it". Like if you're "90% rational", you'll often "lose" to someone who's only "10% rational". I can't find it. Does anyone know what I'm talking about, and if so can you link to it?
Comment author:D_Malik
28 January 2014 04:32:22AM
*
5 points
[-]
John_Maxwell_IV and I were recently wondering about whether it's a good idea to try to drink more water. At the moment my practice is "drink water ad libitum, and don't make too much of an effort to always have water at hand". But I could easily switch to "drink ad libitum, and always have a bottle of water at hand". Many people I know follow the second rule, and this definitely seems like something that's worth researching more because it literally affects every single day of your life. Here are the results of 3 minutes of googling:
Dehydration of as little as 1% decrease in body weight results in impaired physiological and performance responses (4), (5) and (6), and is discussed in more detail below. It affects a wide range of cardiovascular and thermoregulatory responses (7), (8), (9), (10), (11), (12), (13) and (14).
The Nationwide Food Consumption Surveys indicate that a portion of the population may be chronically mildly dehydrated. Several factors may increase the likelihood of chronic, mild dehydration, including a poor thirst mechanism, dissatisfaction with the taste of water, common consumption of the natural diuretics caffeine and alcohol, participation in exercise, and environmental conditions. Dehydration of as little as 2% loss of body weight results in impaired physiological and performance responses. New research indicates that fluid consumption in general and water consumption in particular can have an effect on the risk of urinary stone disease; cancers of the breast, colon, and urinary tract; childhood and adolescent obesity; mitral valve prolapse; salivary gland function; and overall health in the elderly. Dietitians should be encouraged to promote and monitor fluid and water intake among all of their clients and patients through education and to help them design a fluid intake plan.
The effect of dehydration on mental performance has not been adequately studied, but it seems likely that as physical performance is impaired with hypohydration, mental performance is impaired as well (62) and (63). Gopinathan et al (29) studied variation in mental performance under different levels of heat stress-induced dehydration in acclimatized subjects. After recovery from exercise in the heat, subjects demonstrated significant and progressive reductions in the performance of arithmetic ability, short-term memory, and visuomotor tracking at 2% or more body fluid deficit compared with the euhydrated state.
So how much is 2% dehydration? http://en.wikipedia.org/wiki/Dehydration#Differential_diagnosis : "A person's body, during an average day in a temperate climate such as the United Kingdom, loses approximately 2.5 litres of water.[citation needed]" http://en.wikipedia.org/wiki/Body_water quotes Arthur Guyton 's Textbook of Medical Physiology: "the total amount of water in a man of average weight (70 kilograms) is approximately 40 litres, averaging 57 percent of his total body weight." So effects on cognition become apparent after 40l*2%=800ml of water has been lost, which takes roughly 800ml/(2.5l/24h) = 8 hours. Now, this assumes water is lost at a constant rate, which is false, but it still seems like it would take a while to lose a full 800ml. Which implies that you don't have to make a conscious effort to drink more water because everybody gets at least mildly thirsty after, say, half an hour of walking around outside on a warm day, which seems like it would be a lot less than 800ml.
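The back-of-envelope arithmetic above can be laid out explicitly (using the same rough figures quoted from Wikipedia and Guyton):

```python
# Rough estimate: hours of typical water loss before a 2% body-fluid deficit.
total_body_water_l = 40.0   # ~57% of a 70 kg man (Guyton, quoted above)
deficit_fraction = 0.02     # 2% body-fluid deficit threshold from the studies
daily_loss_l = 2.5          # typical daily loss, temperate climate (Wikipedia)

deficit_l = total_body_water_l * deficit_fraction      # 0.8 L
hours_to_deficit = deficit_l / (daily_loss_l / 24.0)   # ~7.7 hours

print(deficit_l, round(hours_to_deficit, 1))  # 0.8 7.7
```

As noted, this assumes a constant loss rate and no intake at all, so it's an upper-bound-flavored sketch rather than a physiological model.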
http://freebeacon.com/michelle-obamas-drink-more-water-campaign-based-on-faulty-science/ : “There really isn’t data to support this,” said Dr. Stanley Goldfarb of the University of Pennsylvania. “I think, unfortunately, frankly, they’re not basing this on really hard science. It’s not a very scientific approach they’ve taken. … To make it a major public health effort, I think I would say it’s bizarre.” Goldfarb, a kidney specialist, took particular issue with White House claims that drinking more water would boost energy. ”The idea drinking water increases energy, the word I’ve used to describe it is: quixotic,” he said. “We’re designed to drink when we’re thirsty. … There’s no need to have more than that.”
http://ask.metafilter.com/166600/Drinking-more-water-should-make-me-less-thirsty-right : When you don't drink a lot of water your body retains liquid because it knows it's not being hydrated. It will conserve and reabsorb liquid. When you start drinking enough water to stay more than hydrated your body will start using the water and then dispensing of it as needed. Your acuity for thirst will be activated in a different way and in a sense work better.
Some thoughts:
More frequent water-drinking makes you urinate more often, which is probably a bad thing for productivity.
There might be negative effects with chronic mild dehydration at levels less severe than in the studies above.
There might also be hormetic effects. (As in, your body functions best under frequent mild dehydration because that's what happened in the EEA, and always giving it as much water as it wants will be bad.)
Thoughts? Please post your own opinion if you're knowledgeable about this or if you've researched it.
Repeating my post from the last open thread, for better visibility:
I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take in the university. The problem is, my mathematical education isn't very good (on the level of Calculus 101). I'm not afraid of math, but so far all the books I could find are either about pure application, with barely any explanations, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.
I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample. And it's even worse with variance...
Any ideas how to proceed?
I too spent a few years with a similar desire to understand probability and statistics at a deeper level, but we might have been stuck on different things. Here's an explanation:
Suppose you have 37 numbers. Purchase a massless ruler and 37 identical weights. For each of your numbers, find the number on the ruler and glue a weight there. You now have a massless ruler with 37 weights glued onto it.
Now try to balance the ruler sideways on a spike sticking out of the ground. The mean of your numbers will be the point on the ruler where it balances.
Now spin the ruler on the spike. It's easy to speed up or slow down the spinning ruler if the weights are close together, but more force is required if the weights are far apart. The variance of your numbers is proportional to the amount the ruler resists changes to its angular velocity -- how hard you have to twist the ruler to make it spin, or to make it stop spinning.
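The analogy checks out numerically: for unit point masses, the balance point is the mean, and the moment of inertia about that point is exactly n times the (population) variance. A quick sketch with arbitrary numbers:

```python
# Ruler analogy: unit masses glued at positions xs.
xs = [1.0, 2.0, 2.0, 5.0, 10.0]
n = len(xs)

mean = sum(xs) / n                           # center of mass = balance point
var = sum((x - mean) ** 2 for x in xs) / n   # population variance

# Moment of inertia of unit masses spinning about the balance point:
inertia = sum(1.0 * (x - mean) ** 2 for x in xs)

print(mean, var, inertia)  # 4.0 10.8 54.0 -- inertia == n * var
```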
"I'd like to understand this more deeply" is a thought that occurs to people at many levels of study, so this explanation could be too high or low. Where did my comment hit?
A different level explanation, which may or may not be helpful:
Read up on affine space, convex combinations, and maybe this article about torsors.
If you are frustrated with hand-waving in calculus, read a Real Analysis textbook. The magic words that explain how the heck you can have probability distributions over real numbers are "measure theory".
Moments of mass in physics is a good intro to moments in stats for people who like to visualize or "feel out" concepts concretely. Good post!
How does that answer the question?
It's true that the center of gravity is a mean, but the moment of inertia is not a variance. It's one thing to say something is "proportional to a variance" when the constant is 2 or pi, but when the constant is the number of points, I think it's missing the statistical point.
But the bigger problem is that these are not statistical examples! Means and sums of squares occur in many places, but why are they a good choice for the central tendency and the tendency to be central? Are you suggesting that we think of a random variable as a physical rod? Why? Does trying to spin it have any probabilistic or statistical meaning?
I wasn't aiming to answer Locaha's question so much as to figure out what question to answer. The range of math knowledge here is wide, and I don't know where Locaha stands. I mean,
That could be a basic question about the meaning of averages -- the sort of knowledge I internalized so deeply that I have trouble forming it into words.
But maybe Locaha's asking a question like:
That's a less philosophical question. So if Locaha says "means are like the centers of mass! I never understood that intuition until now!", I'll have a different follow up than if Locaha says "Yes, captain obvious, of course means are like centers of mass. I'm asking about XYZ".
Mean and variance are closely related to center of mass and moment of inertia. This is good intuition to have, and it's statistical. The only difference is that the first two are moments of a probability distribution, and the second two are moments of a mass distribution.
Using the word "distribution" doesn't make it statistical.
I don't have a good resource for you - I've had too much math education to pin down exactly where I picked up this kind of logic. I'd recommend set theory in general for getting an understanding of how math works and how to talk and read precisely in mathematics.
For your specific question about the mean, it's the only number such that the sum of all (samples - mean) equals zero. Go ahead and play with the algebra to show it to yourself. What it means is that if you pick any point other than the mean, the deviations on one side won't cancel the deviations on the other: you'll be more positive than negative, or more negative than positive.
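That cancellation property is easy to check numerically. A minimal sketch (sample values are made up):

```python
# The mean is the unique point where deviations cancel.
samples = [2, 5, 6, 11]
mean = sum(samples) / len(samples)  # 6.0

# Deviations from the mean sum to (numerically) zero...
assert abs(sum(x - mean for x in samples)) < 1e-9

# ...but from any other point they don't: shifting the reference
# point by d changes the sum of deviations by -n * d.
off = mean + 1
assert abs(sum(x - off for x in samples)) > 0
```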
Can you recommend a place to start learning about set theory?
http://intelligence.org/courses/ has information on set theory. I also enjoyed reading Bertrand Russell's "Principia Mathematica", but haven't evaluated it as a source for learning set theory.
Not really - but I do agree that it's absolutely vital to understand the basic concepts or terms. I think that's a major reason why people fail to learn - they just don't really grasp the most vital concepts. That's especially true of fields with lots of technical terms. If you don't understand the terms you'll struggle to follow even basic lines of reasoning.
For this reason I sometimes provide students with a list of central terms, together with comprehensive explanations of what they mean, when I teach.
When you have thousands of different pieces of data, to grasp them mentally you need to replace them with some simplification. For example, instead of a thousand different weights you could imagine a thousand identical weights, such that the new set is somehow the same as the original set; and then you would focus on the individual weight from the new set.
What precisely does "somehow the same as the original set" mean? Well, it depends on what the numbers from the original set do; how exactly they join together.
For example, if we speak about weights, the natural way of "joining together" is to add their weight. Thus the new set of the identical weights is equivalent to the original set if the sum of the new set is the same as sum of the old set. The sum of the new set = number of pieces × weight of one piece. Therefore the weight of the piece in the new set is the sum of the pieces in the original set divided by their number; the "sum/n".
Specifically, if addition is the natural thing to do, the set 3, 4, 8 is equivalent to 5, 5, 5, because 3 + 4 + 8 = 5 + 5 + 5. Saying that "5 is the mean of the original set" means "the original set behaves (with regards to the natural thing to do, i.e. addition) as if it was composed of the 5's".
There are situations where some other operation is the natural thing to do. Sometimes it is multiplication. For example, if you multiply some original value by 2, and then you multiply it by 8, the result of these two operations is the same as if you had multiplied it twice by 4. In this case it's called the geometric mean, and it's the nth root of the product.
It can be even more complicated, so it doesn't necessarily have a name, but the idea is always replacing the original set with a set of identical values such that in the original context they would behave the same way. For example, the example above could be described as 100% growth (multiplication by 2) and 700% growth (multiplication by 8), and you need to get a result of 300% growth (multiplication by 4); in which case it would be "root of (product of (Xi + 100%)) - 100%".
If there is no meaningful operation on the original set but the set can be ordered, we can pick the median. If the set can't even be ordered, but there are discrete values, we can pick the most frequent value as the best approximation of the original set.
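The progression above (pick the "mean" that matches the set's natural operation) can be sketched concretely; the numbers are the ones from the examples in the text, plus made-up ones for the median and mode:

```python
import math
from collections import Counter

# Addition is the natural operation -> arithmetic mean.
data = [3, 4, 8]
arith = sum(data) / len(data)                     # 3 + 4 + 8 == 5 + 5 + 5

# Multiplication is the natural operation -> geometric mean
# (the nth root of the product).
factors = [2, 8]
geom = math.prod(factors) ** (1 / len(factors))   # 2 * 8 == 4 * 4

# Only an ordering, no meaningful operation -> median.
median = sorted([7, 1, 9])[1]

# Not even an ordering, just discrete values -> mode (most frequent).
mode = Counter(["red", "blue", "red"]).most_common(1)[0][0]
```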
The mean of the sum of two random variables is the sum of the means (ditto with the variances); there's no similarly simple formula for the median. (See ChristianKl's comment for why you'd care about the sum.)
The mean is the value of x that minimizes SUM_i (x - x_i)^2; if you have to approximate all elements in your sample with the same value and the cost of an imperfect approximation is the square distance from the exact value (and any smooth function looks like the square when you're sufficiently close to the minimum), then you should use the mean.
The mean and variance are jointly sufficient statistics for the normal distribution.
Possibly something else which doesn't come to my mind at the moment.
(Of course, all this means that if you're more likely to multiply things together than add them, the badness of an approximation depends on the ratio between it and the true value rather than the difference, and things are distributed log-normally, you should use the geometric mean instead. Or just take the log of everything.)
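The minimization claim in the first bullet, and the log-everything remark in the parenthetical, can both be checked numerically. A hedged sketch with my own toy numbers:

```python
import math

# The mean minimizes the sum of squared deviations.
samples = [1.0, 2.0, 7.0]
mean = sum(samples) / len(samples)

def sq_loss(c):
    """Total cost of approximating every sample by the single value c."""
    return sum((c - x) ** 2 for x in samples)

# Any other candidate value does at least as badly as the mean.
assert all(sq_loss(mean) <= sq_loss(c) for c in [0.0, 1.0, 3.0, 5.0])

# And the geometric mean is just the arithmetic mean after taking logs,
# matching the "just take the log of everything" remark.
geom = math.exp(sum(math.log(x) for x in samples) / len(samples))
assert math.isclose(geom, (1.0 * 2.0 * 7.0) ** (1 / 3))
```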
Here, have a book!
http://www-biba.inrialpes.fr/Jaynes/prob.html
Actually, I started reading that one and found it too hard.
Is this a good book to start with? I know it's the standard "Bayes" intro around here, but is it good for someone with, let's say, zero formal probability/statistics training?
I think it's even better if you're not familiar with frequentist statistics because you won't have to unlearn it first, but I know many people here disagree.
I suppose it's better than to never have suffered through frequentist statistics, but I think you appreciate the right way a lot more after you've had to suffer through the wrong way for a while.
I agree, that's why I'm glad I learned Bayes first. Makes you appreciate the good stuff more.
Did you misread the comment you're replying to, are you sarcastic, or am I missing something?
Well, Jaynes does point out how bad frequentism is as often as he can get away with. I guess the main thing you're missing out on if you weren't previously familiar with it is knowing whether he's attacking a strawman.
I was under the impression that the "this is definitely not a book for beginners" was the standard consensus here: I seem to recall seeing some heavily-upvoted comments saying that you should be approximately at the level of a math/stats graduate student before reading it. I couldn't find them with a quick search, but here's one comment that explicitly recommends another book over it.
Attending a CFAR workshop and session on Bayes (the 'advanced' session) helped me understand a lot of things in an intuitive way. Reading some online stuff to get intuitions about how Bayes' theorem and probability mass work was helpful too. I took an advanced stats course right after doing these things, and ended up learning all the math correctly, and it solidified my intuitions in a really nice way. (Other students didn't seem to have as good a time without those intuitions.) So that might be a good order to do things in.
Some multidimensional calc might be helpful, but other than that, I think you don't need too much other math to support learning more probability and stats.
I asked a similar question a while back, and I was directed to this book, which I found to be incredibly useful. It is written at an elementary level, has minimal maths, yet is still technical, and brings across so many central ideas in very clear, Bayesian terms. It is also on Lukeprog's CSA book recommendations for 'Become Smart Quickly'.
Note: this is the only probability textbook I have read. I've glanced through the openings of others, and they've tended to be above my level. I am sixteen.
This isn't at introductory level, but try exploring the ideas around Fisher information -- it basically ties together information theory and some important statistical concepts.
Fisher Information is hugely important in that it lets you go from just treating a family of distributions as a collection of things to treating them as a space with its own meaningful geometry. The wikipedia page doesn't really convey it but this write-up by Roger Grosse does. This has been known for decades but the inferential distance to what folks like Amari and Barndorff-Nielsen write is vast.
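As a small concrete entry point (my own illustration, not from the linked write-up): for a Bernoulli(p) variable the Fisher information works out to 1/(p(1-p)), and you can verify it directly as the expected squared score.

```python
# Fisher information of Bernoulli(p), computed as the expected squared
# score E[(d/dp log f(x; p))^2] over the two outcomes x = 1 and x = 0.
def fisher_information_bernoulli(p):
    score_1 = 1 / p         # d/dp log(p)     for x = 1
    score_0 = -1 / (1 - p)  # d/dp log(1 - p) for x = 0
    return p * score_1 ** 2 + (1 - p) * score_0 ** 2

# Matches the closed form 1/(p(1-p)); information blows up near 0 and 1,
# where a single observation is most informative about p.
p = 0.3
assert abs(fisher_information_bernoulli(p) - 1 / (p * (1 - p))) < 1e-9
```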
Read Edwin Jaynes.
The problem with most Probability and Statistics courses is the axiomatic approach. Purely formalism. Here are the rules - you can play by them if you want to.
Jaynes was such a revelation for me, because he starts with something you want, not arbitrary rules and conventions. He builds probability theory on basic desiredata of reason that make sense. He had reasons for my "whys?".
Also, standard statistics classes always seemed a bit perverse to me - logically backward. They always just felt wrong. Jaynes approach replaced that tortured backward thinking with clear, straight lines going forward. You're always asking the same basic question "What is the probability of A given that I know B?"
And he also had the best notation. Even if I'm not going to do any math, I'll often formulate a problem using his notation to clarify my thinking.
I think this is a most awesome mistype of desiderata.
As a first step, I suggest Dennis Lindley's Understanding Uncertainty. It's written for the layperson, so there's not much in the way of mathematical detail, but it is very good for clarifying the basic concepts, and covers some surprisingly sophisticated topics.
ETA: Ah, I didn't notice that Benito had already recommended this book. Well, consider this a second opinion then.
I don't think that's really what means are. That intuition might fit the median better. One reason means are nice is that they have really nice properties, e.g. they're linear under addition of random variables. That makes them particularly easy to compute with and/or prove theorems about. Another reason means are nice is related to betting and the interpretation of a mean as an expected value; the theorem justifying this interpretation is the law of large numbers.
Nevertheless in many situations the mean of a random variable is a very bad description of it (e.g. mean income is a terrible description of the income distribution and median would be much more appropriate).
Edit: On the other hand, here's one very undesirable property of means: they're not "covariant under increasing changes of coordinates," which on the other hand is true of medians. What I mean is the following: suppose you decide to compute the mean population of all cities in the US, but later decide this is a bad idea because there are some really big cities. If you suspect that city populations grow multiplicatively rather than additively (e.g. the presence of good thing X causes a city to be 1.2x bigger than it otherwise would, as opposed to 200 people bigger), you might decide that instead of looking at population you should look at log population. But the mean of log population is not the log of mean population!
On the other hand, because log is an increasing function, the median of log population is still the log of median population. So taking medians is in some sense insensitive to these sorts of decisions, which is nice.
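The asymmetry described above is easy to verify numerically. A minimal sketch with made-up city populations (odd count, so the median is an actual element of the list):

```python
import math
import statistics

# Medians commute with increasing transformations; means don't.
pops = [1_000, 10_000, 8_000_000]

# Median of the logs IS the log of the median...
assert math.isclose(statistics.median(math.log(p) for p in pops),
                    math.log(statistics.median(pops)))

# ...but the mean of the logs is NOT the log of the mean: the one
# huge city drags the plain mean (and hence its log) way up.
mean_of_log = statistics.mean(math.log(p) for p in pops)
log_of_mean = math.log(statistics.mean(pops))
assert abs(mean_of_log - log_of_mean) > 1
```

(By Jensen's inequality, since log is concave, the log of the mean always exceeds the mean of the logs unless all values are equal.)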
Repost as there were no answers:
Has anyone here done Foundation Training? How is the evidence supporting them?
(I think there's a typo in the URL.)
You are right. Fixed.
Corrected url: Foundation Training
I tried the video at the url, and it seemed a lot more like straining (little pun about the mistaken url), but that might not be a fair test.
The basic idea of getting hip mobility seems sound, but I recommend Scott Sonnon's Ageless Mobility and IntuFlow, and The Five Tibetan Rites -- sorry for the cheesy name on the latter, but they're a cross between yoga and calisthenics with a lot of emphasis on getting backwards/forwards pelvis mobility.
I'm in art school and I have a big problem with precision and lack of "sloppiness" in my work. I'm sort of hesitant to try to improve in this area, however, because I suspect it reflects some sort of biological limit - maybe the size of some area in the cerebellum or something, I don't know. Am I right in thinking this?
Just to be clear: you're worried that you aren't sloppy enough?
If so, for us non-artists, can you explain how 'sloppiness' can be a good thing?
I think it's a metaphor thing. Like, in writing, if you say "The shadow of a lamppost lay on the ground like a spear. He walked and it pierced him like a spear." What more description of the scene do you need than that? In fact, talking about the color of the path or what kind of trousers our character was wearing would be counterproductive to the quality of the writing.
One could view sloppiness in art in the same way - use of metaphor that captures the scene without the need for detail.
And no, of course it's not a biological limit.
Sorry, I communicated poorly; that's not what I meant. I meant [introducing] a lack of sloppiness into my work. I'm too sloppy.
You should edit the original question. People seem to be answering the wrong question below.
Seems to me that that's likely a self-fulfilling prophecy, which I subjectively estimate is at least as likely to prevent you from doing better as an actual biological problem. Maybe try to think of more ways to get better at it - perhaps some different kind of exercises - and do your best at those, before drawing any conclusions about your fundamental limits... because those conclusions themselves will limit you even more.
I would guess that you try to exert too much control. The kind of "sloppiness" that's useful for creativity is about letting things go.
Meditation might help.
As you are female, dancing a partner dance where you have to follow and can't control everything might be useful. Letting go of trying to control is lesson 101 for a lot of women who pick up Salsa dancing.
He isn't.
I'm already good at this part of creativity, but precision is also pretty important. Right now I'm working on a project where I have to trace in pen (can't erase, flaws are obvious) photographs that I took. Letting things go won't help here.
I already do meditate.
I'm not, sorry.
Swing classes are pretty good about letting either gender learn to follow, if you'd like.
As a lead, you learn that you aren't really controlling much of anything in Salsa either. You're setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don't expect.
But I'm guessing that you've hit on the right direction of interpretation of sloppiness as letting go of control. I'd extend that to too much self-conscious control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest of your capabilities for the same intention.
When the self monitoring person in your head tries to do too much, he gets in the way of the rest of you doing it right.
That seems related to the common observation that it's easier to speak a foreign language when drunk than when sober: in the latter case I feel I'm so worried about saying something grammatically incorrect that I end up speaking in very simple sentences and very haltingly. (And the widespread use of drugs among rock musicians is well-known.)
For advanced dancing that's true. For beginners, not so much. At the beginning Salsa is the guy leading a move and the woman following.
If you are a guy and want to learn dancing for the sake of letting go of control, I wouldn't recommend Salsa. I think it took me 1 1/2 years to get to that point.
A whole 1 1/2 years? Took me a lot longer than that. I've been at Salsa mainly for about a decade.
Yes, the unfortunate fact is that most leads are taught to "lead moves" when they start. If they were taught to lead movement, they'd make faster progress, IMO. Leading should be leading, to the point of manipulation, and not signaling a choreographed maneuver. I've seen a West Coast instructor teach a beginning class that way, and thought it was the best beginning class I had ever seen.
I think one of the turning events for me was my first Bachata Congress in Berlin. I didn't know too many Bachata patterns, and after hours of dancing the brain just switches off and lets the body do its thing.
But you are right that it might well take longer for the average guy. That means it's not a good training exercise for men to pick up the skill of letting go of control.
For women, on the other hand, it's something to be learned at the beginning.
At the beginning I mainly thought I didn't understand what teaching dance is all about and that a bunch of teachers have something like real expertise.
The more I dance, the more I think that their teaching is very suboptimal. A local Salsa teacher teaches mainly patterns in her lessons. On the other hand she writes on her blog about how it's all in the technique and about traits like confidence. It's not as if she didn't study dance in formal university courses for 5 years, so she should know a bit.
Things like telling a guy who dances with a bit of distance from the girl to dance closer just aren't good advice when the girl isn't comfortable with dancing closer. Yes, if they danced closer things would be nicer, but there's usually a reason why a pair has the distance it has.
Manipulation is an interesting choice of words. What do you mean by it?
I remember a Kizomba dance a year ago where I didn't know much Kizomba. I did have a lot of partner perception from Bachata. I picked up enough information from my dance partner that I could just follow her movements in a way where she didn't think she was leading, but I was certainly dancing a bunch of steps with her that I hadn't learned in a class.
To use something like the sense of "manipulation" from osteopathy, I think you could call that nonmanipulative leading. In Bachata I think there are a lot of situations where a movement is there in the body but suppressed, and things get good if the lead can "free" the movement and stabilize it. I think such nonmanipulative dancing is quite beautiful.
Unfortunately I'm not good enough to do that in Salsa, and even in Bachata I don't always have good enough perception.
Some guesses on my part-
Maybe your tendency towards precision is at the wrong times? If practicing, for example, it might be counterproductive since you probably want quantity instead of quality, or maybe you're trying to get everything down precisely too early on and it's making your work stiff.
Manfred's point is good- "metaphor that captures the scene without the need for detail."... If you render background details overmuch, they can distract the viewer from the focal point of the work. Maybe put some effort into looking at how the "metaphors" of different things work? For example, how more skilled artists draw/paint grass in the distance, or whatnot.
I think it's a common thing to sort of notice something wrong in an area, and to spend a lot of time on that area in hopes of fixing it, which would make it less sloppy... Maybe sketch that thing a lot for practice.
If you're drawing from life, it's possible that lack of sloppiness comes from not making sense of the gestalt, so to speak. I'd think that understanding the form of the subject and how the lighting on it works means you can simplify things away. I don't do much (read: any) figure drawings from life, but I'd imagine that understanding the figure and what's important and what isn't would be helpful. Maybe doing some master copies of skilled, more abstract drawings of the figure would help. Maybe look up a comic artist or cartoonist you like and look at what they do.
ETA:
To address your actual question, I'd say I don't know any particular evidence for why that should be so.
Rationality-technique-wise: It's good that you asked people, since that would bring you evidence of the idea being true or false. In the future it might be even more useful to suppress hypothesizing until some more investigating has gone on; "biological limit" is the sort of thing that feels true if you don't understand how to do something or how to understand how to do something. I think there's a post about this, or something; let me see if I can find it... ETA2: The exact anecdote I was thinking of doesn't apply as much as I thought it did, but maybe the post "Fake Explanations" or something applies?
I have never biked twenty miles in one go.
It could be that this reflects some inherent limit.
Or it could be that I just haven't tried yet.
If I believe that it is an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and succeed, then I will update.
If I believe that it is not an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and fail, then I will update.
In either case, the test of my ability
Is not in contemplating what mechanisms of self might limit me,
But in trying anyway, when I have the opportunity to do so,
And seeing what happens.
Be careful not to find yourself 7 miles away from home on your bike and too tired to keep on cycling.
Whether that means arranging with a friend to pick you up in their car if you have to bail out, picking a circular route that never takes you that far from home, or any other way of handling the contingency, plan for it. Going "but suppose I fail!" and not trying is an even worse piece of wormtonguing than the one fubarobfusco is addressing.
If other people working the same craft have managed to achieve precision, it's very unlikely to be a biological limit, right? The resolution of human fine motor skills is really high.
You didn't mention what the craft was or the nature of the sloppiness, but have you considered using simple tools to augment technical skills? Perhaps a magnifying glass, rulers, pieces of string/clay or other suitably shaped objects to guide the hand, etc?
You could try doing something that gives immediate feedback for sloppiness, like simple math problems for example. You might gain some generalizable insight like that speed affects sloppiness. Since you already practice meditation, it should be easier to become aware of the specific failure modes that contribute to sloppiness, which doesn't seem to be a well defined thing in itself.
Even if you know that signaling is stupid, that doesn't let you escape the cost of not signaling.
It's a longstanding trope that Eliezer gets a lot of flak for having no formal education. Formal education is not the only way to gain knowledge, but it is a way of signaling knowledge, and it's not very easy to fake (hard enough to fake that it holds up as a credential on its own). Has anyone toyed around with the idea of sending him off to get a math degree somewhere? He might learn something, and if not it's a breezy recap of what he already knows. He comes out the other side without the eternal "has no formal education" tagline, and a whole new slew of acquaintances.
Now, I understand that there may be good reasons not to, and I'd very much appreciate someone pointing me to any previous discussion in which this has been ruled out. Otherwise, how feasible does it sound to crowdfund a "Here's your tuition and an extra sum of money to cover the opportunity cost of your time, I don't care how unfair it is that people won't take you seriously without credentials, go study something useful, make friends with your professors, and get out with the minimum number of credits possible" scholarship?
I don't think you understand signaling well.
Eliezer managed signaling well enough to get a billionaire to fund him on his project. A billionaire who systematically funds people who drop out of college, in projects like his 20 Under 20 program.
Trying to go the traditional route wouldn't fit into the highly effective image that he already signals.
Put another way, the purpose of signaling isn't so nobody will give you crap. It's so somebody will help you accomplish your goals.
People will give you crap, especially if they can get paid to do so. See gossip journalists, for instance. They are not paid to give boring and unsuccessful people crap; they are paid to give interesting and successful people crap.
Your last para would imply that not getting crap from gossip journalists means you are not interesting or successful. Eliezer/MIRI gets almost no press. Are you sure that's what you meant?
Eliezer gets a lot more press than I do, which is just fine with me.
Well, yes, there is going to be some inevitable crap, but the purpose of signalling is so that you can impress a much larger pool of people. So it might not be much help with gossip journalists, but it might help with the marginal professional ethicist, mathematician, or public figure. In that area, you might get some additional "Anybody who can do that must be damn impressive." Does the additional damn-impressive outweigh the cost? I don't know; that's why I'm asking.
The discussion about mean vs variance in this post may be relevant.
In addition "getting flak" isn't necessarily a bad thing.
It can be counter-signaling if you can get flak and stay standing.
It can also polarize people and separate those who can evaluate the inside arguments to realize that you're good from those who can't and have to just write you off for having no formal education.
Eddie has some math talent. He can invest some time, money, and effort C to get a degree, which allows other people to discern that he has a higher probability of having that math talent. This higher probability confers some benefit in that other people will more readily take his advice in mathematical matters, or talk with him about his math.
The fun twist is that Eddie lives in a society with many other individuals with varying degrees of math talent, each of whom can expend C to get a degree and the associated benefits. People with almost no mathematical talent have a prohibitively high C, because even if they can pony up the time and money, they have to work very hard to fake their way through. But people with high math ability often choose to stand out by getting the degree, because their C is relatively lower, and a very high proportion of them get degrees. This creates a high association between degrees and mathematical ability, and makes it unlikely to see high mathematical ability in the absence of a degree.
That's the basic idea, plus degrees signal other things which may be completely unrelated to math, but are still nice. Even in the case where the degree has no causal effect on math ability, there are benefits to having one, in that the other math people can judge very quickly that they're interested in talking to you.
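The separating equilibrium described above can be sketched as a toy model (the names, costs, and benefit value are all made up for illustration): because the cost C of earning the degree is lower for high-talent people, mostly high-talent people choose to get it, which is exactly what makes the degree informative.

```python
# Toy separating-equilibrium sketch: an agent gets the degree
# only when its benefit exceeds that agent's personal cost C.
DEGREE_BENEFIT = 10

def gets_degree(cost):
    return cost < DEGREE_BENEFIT

population = [
    ("high_talent", 4),   # (type, cost C of earning the degree)
    ("high_talent", 6),
    ("low_talent", 15),   # faking through is prohibitively costly
    ("low_talent", 20),
]

degreed = [t for t, c in population if gets_degree(c)]
assert degreed == ["high_talent", "high_talent"]  # the degree separates the types
```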
Hopefully that demonstrates that I understand signalling. My question is about the costs and benefits of a particular signal.
It demonstrates that you don't. Humans make decisions via something called the availability heuristic.
If you bring into the awareness of the person you are talking to that you are a mathematician who has only a bachelor's, no master's, no PhD, and no professorship, you aren't bringing expertise into his mind.
If you are however a self-taught person who managed to publish multiple papers, among them a paper titled "Complex Value Systems in Friendly AI" in the Artificial General Intelligence volume of Lecture Notes in Computer Science, and who has his own research institute, that's a better picture.
Published papers are a lot more relevant to the relevant experts than a degree that verifies basic understanding. If a person really cares whether Eliezer has a math degree, he has already lost that person.
Impressing Thiel is independent of a future degree or not, because he's already impressed. Where's the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel? Maybe MIRI doesn't need another billionaire, but I don't think they'd turn one away.
I think the deal that Eliezer has with Thiel is that Eliezer does MIRI full time. Switching focus to getting a degree might violate the deal. Given that Thiel has a lot of money, impressing Thiel more might also be very useful if they want more money from him.
Do you really think that someone who isn't contrarian will put his money into MIRI? The present setup is quite okay. Those who want people with academic credentials can give their money to FHI. Those who want more contrarian people can give their money to MIRI.
Whether or not Eliezer has a degree doesn't change that he's the kind of person who has a public Okcupid profile detailing his sexual habits and the fact that he's polyamorous.
When Steve Jobs was alive and ran around in a sweater, he didn't cause people to disregard him because he wasn't wearing a suit.
People respect the contrarian who's okay with not everyone liking him. The contrarian who tries to get everyone to like them, on the other hand, gets no respect.
On the other hand if he decides to get a degree and pulls it off in a year or something impressive like that it could just feed into the contrarian genius image.
Yes, but that would probably mean either paying someone else to do your homework, which means you are vulnerable to attack, or making studying the sole focus for a year.
I'm not certain that getting a degree now counts as the traditional route. Also, I don't think that an additional degree is particularly damaging to his image. People aren't going to lose interest in FAI if he sells out and gets a traditional degree. Or they are and I have no idea what kind of people are involved.
Yes, the autodidact signal can be tremendously effective, particularly in tech/libertarian company.
Peter Thiel (the billionaire) has the proven ability to spot talent, which is why he is a billionaire. Eliezer has traits that Thiel values, and this is probably much more important than any signal Eliezer sent.
I think the bigger issue with people not taking EY seriously is that he does not communicate (e.g. publish peer-reviewed papers). Facebook stream of consciousness does not count. Conditional on great papers, credentials don't mean that much (otherwise people would never move up the academic status chain).
Yes it is too bad that writing things down clearly takes a long time.
True. It seems like the great-papers avenue is being pursued full-steam these days with MIRI, but I wonder if they're going to run out of low-hanging fruit to publish, or if mainstream academia is going to drag their heels replying to them.
Somehow I doubt I will ever persuade Eliezer to write in a style fit for a journal, but even still, I'll briefly mention that Eliezer is currently meeting with a "mathematical exposition aimed at math researchers" tutor. I don't know yet what the effects will be, but it seemed (to Eliezer and me) a worthwhile experiment.
Presumably if MIRI were awash with funding you'd pay experts to make papers out of Eliezer's work, freeing Eliezer up for other things?
That's basically what another of our ongoing experiments is.
If you buy into the “crunch time” narrative, that's a lot of opportunity cost.
4 years (or even 1 year if you are super hard-core) of time is a pretty non-trivial investment. I was 2 classes away from a second degree and declined to take them, because the ~100 hours of work it would have taken wasn't worth the additional letters after my name. I also just really don't know anyone relevant who thinks that a college degree or lack thereof particularly matters (although the knowledge and skills acquired in the course of pursuing said degree may matter a lot). Good people will judge you by what you've done to demonstrate skill, not based on a college diploma.
I think IlyaShpitser's comment pretty much nails it.
I came to the same conclusion, and in general a lack of degree has not impacted me, as I get employment based on demonstrated skill. The main limitation is that any formal postgrad study is impossible without a degree; this was a regret for me prior to getting access to Coursera-type courses.
This might have been a good call 10 years ago, but nowadays Eliezer is participating in regular face-to-face meetings with skilled mathematicians and scientists in the context of constructing and analyzing theorems and decision strategies. This means that for a large number of the people who are most important to convince, he gets to screen out all the "evidence" of not having a degree. And to a large extent, having the respect of a bunch of math PhDs is a more important qualifier of talent than having that PhD oneself.
There's theoretically still the problem of selling Eliezer to the muggles but I don't think that's anywhere near as important as getting serious thinkers on board.
Would getting more citations partly nullify the lack of formal education?
Different target groups may use different signals.
For example, for a scientist the citations may be more important than formal education. For an ordinary person with a university diploma who never published anything anywhere, formal education will probably remain the most important signal, because that's what they use. A smart sponsor may instead consider the ability of getting things done. And the New Age fans will debate about how much Eliezer fits the definitions of an "indigo child".
If the goal is to impress people for whom having a university diploma is the most important signal (they are a majority of the population), the best way would be to find a university which gives the diploma for minimum time and energy spent. Perhaps one where you just pay some money (hopefully not too much), take a few easy exams, and that's it; you don't have to spend time at the lessons. After this, no one can technically say that Eliezer has "no formal education". (And if they start discussing the quality of the university, then Eliezer can point to his citations.) The idea is to do this as easily as possible... assuming it's even worth doing.
There are also other things to consider, such as the fact that other people working with Eliezer do have formal education... so why exactly is it a problem if Eliezer doesn't? Does MIRI seem from outside like a one-man show? Maybe that should be fixed.
Is it always correct to choose that action with the highest expected utility?
Suppose I have a choice between action A, which grants -100 utilons with 99.9% chance and +1000000 utilons with 0.1% chance, or action B which grants +1 utilon with 100% chance. A has an expected utility of +900.1 utilons, while B has an expected utility of +1 utilon. This decision will be available to me only once, and all future decision will involve utility changes on the order of a few utilons.
Intuitively, it seems like action A is too risky. I'll almost certainly end up with a huge decrease in utility, just because there's a remote chance of a windfall. Risk aversion doesn't apply here, since we're dealing in utility, right? So either I'm failing to truly appreciate the chance at getting 1M utilons -- I'm stuck thinking about it as I would money -- or this is a case where there's reason to not take the action that maximizes expected value. Help?
EDIT: Changed the details of action A to what was intended
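The arithmetic in the example above can be sketched as follows (`expected_utility` is a hypothetical helper name; the numbers are the ones from the example):

```python
def expected_utility(outcomes):
    """Expected utility of a gamble given (probability, utilons) pairs."""
    return sum(p * u for p, u in outcomes)

# Action A: 99.9% chance of -100 utilons, 0.1% chance of +1,000,000 utilons
action_a = [(0.999, -100), (0.001, 1_000_000)]
# Action B: +1 utilon with certainty
action_b = [(1.0, 1)]

print(expected_utility(action_a))  # approximately 900.1
print(expected_utility(action_b))  # 1.0
```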
I think the non-intuitive nature of the A choice is because we naturally think of utilons as "things". For any valuable thing (money, moments of pleasure, whatever) anybody who is minimally risk-averse would choose B. But utilons are not things; they are abstractions defined by one's preferences. So that A is the rational choice is a tautology, in the standard versions of utility theory.
It may help to think of it the other way around, starting from the actual preference. You would choose a 99.9% chance of losing ten cents and a 0.1% chance of winning 10000 dollars over winning one cent with certainty, right? So then perhaps, as long as we don't think of other bets and outcomes, we can map winning 1 cent to +1 utilon, losing 10 cents to -100 utilons and winning 10000 dollars to +10000 utilons. Then we can refine and extend the "outcomes <=> utilons" map by considering your actual preferences under more and more bets. As long as your preferences are self-consistent in the sense of the VNM axioms, there will be a mapping that can be constructed.
ETA: of course, it is possible that your preferences are not self-consistent. The Allais paradox is an example where many people's intuitive preferences are not self-consistent in the VNM sense. But constructing such a case is more complicated than just considering risk-aversion on a single bet.
Also, it's well possible that your utility function doesn't evaluate to +10000 for any value of its argument, i.e. it's bounded above.
Yes, this seems almost certainly true (and I think is even necessary if you want to satisfy the VNM axioms, otherwise you violate the continuity axiom).
An unbounded function is one that can take arbitrarily large finite values, not necessarily one that actually evaluates to infinity somewhere.
Yes, I'm quite aware... note that if there's a sequence of outcomes whose values increase without bound, then you could construct a lottery that has infinite value by appropriately mixing the lotteries together, e.g. put probability 2^-k on the outcome with value 2^k. Then this lottery would be problematic from the perspective of continuity (or even having an evaluable utility function).
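The construction above is the St. Petersburg-style lottery; numerically, each term contributes exactly 1 to the expectation, so truncating the lottery at any finite number of outcomes gives an expected value that grows without bound (a sketch; any leftover probability mass can be put on a bounded outcome without changing the point):

```python
# A lottery putting probability 2^-k on an outcome worth 2^k utilons.
# Each term contributes 2^-k * 2^k = 1, so the truncated expectation
# grows linearly with the number of outcomes included.
def truncated_expectation(num_outcomes):
    return sum((2 ** -k) * (2 ** k) for k in range(1, num_outcomes + 1))

print(truncated_expectation(10))   # 10.0
print(truncated_expectation(100))  # 100.0
```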
Are lotteries allowed to have infinitely many possible outcomes? (The Wikipedia page about the VNM axioms only says "many"; I might look it up on the original paper when I have time.)
I'm not sure, although I would expect VNM to invoke the Hahn-Banach theorem, and it seems hard to do that if you only allow finite lotteries. If you find out I'd be quite interested. I'm only somewhat confident in my original assertion (say 2:1 odds).
There are versions of the VNM theorem that allow infinitely many possible outcomes, but they either
1) require additional continuity assumptions so strong that they force your utility function to be bounded
or
2) they apply only to some subset of the possible lotteries (i.e. there will be some lotteries for which your agent is not obliged to define a utility).
The original statement and proof given by VNM are messy and complicated. They have since been neatened up a lot. If you have access to it, try "Follmer H., and Schied A., Stochastic Finance: An Introduction in Discrete Time, de Gruyter, Berlin, 2004"
EDIT: It's online.
I'd flip that around. Whatever action you end up choosing reveals what you think has highest utility, according to the information and utility function you have at the time. It's almost a definition of what utility is - if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong.
If the utility function you think you have prefers B over A, and you prefer A over B, then there's some fact that's missing from the utility function you think you have (probably related to risk).
I've recently come to terms with how much fear/anxiety/risk avoidance is in my revealed preferences. I'm working on working with that to do effective long-term planning -- the best trick I have so far is weighing "unacceptable status quo continues" as a risk. That, and making explicit comparisons between anticipated and experienced outcomes of actions (consistently over-estimating risks doesn't help any, and I've been doing that).
I sometimes have the same intuition as banx. You're right that the problem is not in the choice, but in the utility function and it most likely stems from thinking about utility as money.
Let's examine the previous example and make it into money (dollars): -100 [dollars] with 99.9% chance and +10,000 [dollars] with 0.1% chance, vs a 100% chance of +1 [dollar].
When doing the math, you have to take future consequences into account as well. For example, if you knew you would be offered 100 loaded bets with an expected payoff of $0.50 in the future, each of which only cost you $1 to participate in, then you have to count this in your original payoff calculation if losing the $100 would prohibit you from being able to take these other bets.
Basically, you have to think through all the long-term consequences when calculating expected payoff, even in dollars.
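That reasoning can be sketched numerically. The numbers below are illustrative assumptions (the bet from the example, plus the hypothetical 100 future bets, affordable only if you stay solvent), not anything stated authoritatively in the thread:

```python
# Illustrative sketch: whether to take a bet depends on the downstream
# opportunities that losing would forfeit.
P_LOSE, LOSS = 0.999, 100
P_WIN, WIN = 0.001, 10_000
FUTURE_BETS, FUTURE_EV_EACH = 100, 0.50  # only affordable if you don't go broke

# If we lose the big bet, we can no longer afford the future bets.
ev_take = P_LOSE * (-LOSS + 0) + P_WIN * (WIN + FUTURE_BETS * FUTURE_EV_EACH)
ev_decline = FUTURE_BETS * FUTURE_EV_EACH

print(ev_take)     # approximately -89.85
print(ev_decline)  # 50.0
```

Under these assumptions, declining the "favorable-looking" bet is better once the foregone future bets are counted in.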
Then when you try to convert this to utility, it's even more complicated. Is the utility per dollar gained in the +$10,000 case equivalent to the utility per dollar lost in the -$100 case? Would you feel guilty and beat yourself up afterwards if you took a bet that you had a 99.9% chance of losing? Even though a purely rational agent probably shouldn't feel this, it's still likely a factor in most actual humans' utility functions.
TrustVectoring summed it up well above: If the utility function you think you have prefers B over A, and you prefer A over B, then there's some fact that's missing from the utility function you think you have.
If you still prefer picking the +1 option, then most likely your assessment that the first choice only gives a negative utility of 100 is wrong. There are some other factors that make it a less attractive choice.
Um, A actually has a utility of -89.9.
That explains why it seems less appealing!
People who play with money don't like high variance, and sometimes trade off some of the mean to reduce variance.
Depending on your preferred framework, this is in some sense backwards: utility is, by definition, that thing which it is always correct to choose the action with the highest expected value of (say, in the framework of the von Neumann-Morgenstern theorem).
In this article, Eliezer says:
Recently, a similar phrase popped into my head, which I found quite useful:
That's all.
I don't know what you mean precisely by confusion, but I personally can't always control what my immediate primal-level response to certain situations is. If I try to strictly avoid certain feelings, I usually end up convincing myself that I'm not feeling that way when actually I am. I'd rather notice what I'm feeling and then move on from there; it's probably easier to control your thinking that way. Just because you're angry doesn't mean you have to act angry.
That's basically what I meant. The move is to notice the anger, fear or disgust and then realize that this emotion isn't useful and can be actively detrimental. Then consciously try to switch to curiosity.
Of course, I couldn't condense the full messiness of reality into a pithy saying.
I've been systematically downvoted for the past 16 days. Every day or two, I'd lose about 10 karma. So far, I've lost a total of about 160 karma.
It's not just somebody just going through my comments and downvoting the ones they disagree with. Even a comment where I said "thanks" when somebody pointed out a formatting error in my comments is now at -1.
I'm not sure what can/should be done about this, but I thought I should post it here. And if the person who did this is here and there is a reason, I would appreciate it if you would say it here.
A quick look at the first page of your recent comments shows most of your recent activity to have been in the recent "Is Less Wrong too scary to marginalized groups?" firestorm.
One of the most recent users to complain about mass downvoting also cited participation in flame-bait topics (specifically gender).
I would prefer to see a little less victim-blaming here.
(I'm not sure whether you intended it as such -- but that phrase "participation in flame-bait topics" sounds like it.)
That was not my intention. (If it's any consolation, I participated in the same firestorm.)
How is this victim blaming? As I interpret it the claim is that the person was probably NOT the victim of systematic downvoting but instead made a lot of comments that are counter to what people like to hear, creating the illusion of same.
Hard to explain getting downvoted for
as being about saying things "counter to what people like to hear". Which is why I didn't interpret CAE_Jones as suggesting that that's what was going on.
For what it's worth, I agree with gjm that "flame-bait" was a poor choice of words on my part, and I understand how it could have been taken as victim-blaming in spite of my intentions.
Gah... This is becoming way too common, and it seems like there's pretty good evidence (further supported in this instance) regarding the responsible party. I wish someone with the power to do so would do something about it.
I have blindly upvoted your 10 most recent comments. This is meant as consolation but likely a one-time action.
For context, link to past discussion of mass-downvoting.
I got a seemingly one-time hit of this about a week ago. For what it's worth I had just been posting comments on the subject of rape, but a whole bunch of my unrelated comments got it too.
(Since then it's been having an obnoxious deterrent effect on my commenting, because I feel so precariously close to just accumulating negative karma every time I post, leaving readers with the impression that my ideas have all been identified as worthless by someone probably cleverer than themselves. I'm now consciously trying to avoid thinking like this)
Every now and then I like to review my old writings so I can cringe at all the wrong things I wrote, and say "oops" for each of them. Here we go...
That's probably wrong. IIRC, previous eras' low life expectancy was mostly due to high child mortality.
This sentence is defensible for certain definitions of "significant," but I think it was a mistake to include this sentence (and the following quotes from Hutter and Schmidhuber) in the paper. AIXI and Godel machines probably aren't particularly important pieces of progress to AGI worth calling out like that. I added those paragraphs to section 2.4. not long before the submission deadline, and regretted it a couple months later.
No, that's a misreading of the study.
Eh, not really.
Silly. Donor-advised funds basically always fund as the donor wishes.
The Wiki link in the linked LW post seems to be closer to "Stanislav Petrov saved the world" than "not really":
...
...
A closely related article says:
That he didn't literally have his finger on the "Smite!" button, or that the SU might still not have retaliated if he'd raised the alarm, is not the point.
I have long thought that the very idea of "life expectancy at birth" is a harmful one, because it encourages exactly that sort of confusion. It lumps together two things (child mortality and life expectancy once out of infancy) with sufficiently different causes and sufficiently different effects that they really ought to be kept separate.
Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live? Or even at a past time?
Sure, there is the concept of life expectancy at specific age. For example, there is the "default" life expectancy at birth, there is the life expectancy for a 20 year-old, life expectancy for a 60-year-old, etc. Just google it up.
Thanks. Interestingly, my numbers never matched up between any two sources.
The US SSA's actuarial tables give me a number that's 5 years different from their own "additional life expectancy" calculator.
It's kind of important to the life insurance business ....
On the AIXI and such... you see, it's just hard to appreciate how much training it takes to properly understand something like that. Very intelligent people, with very high mental endurance, train for decades to be able to mentally manipulate the relevant concepts at their base level. Now, suppose someone only spent a small fraction of that time, either because they pursued the wrong topic through the most critical years or because they have low mental endurance. Unless they're impossibly intelligent, they have no chance of forming even a merely good understanding.
Smart move not only to review but to post the results. Shows humility and at the same time prevents being called on it later.
This is an approach I'd like to see more often. Maybe you should add it to the http://lesswrong.com/lw/h7d/grad_student_advice_repository/ or some such.
Huh. I followed the link to the correction of the Petrov story, and found I'd already upvoted it.
But if you'd asked me yesterday for examples of heroes, I'd have cited Petrov immediately. Shows how hard it is to unlearn false information once you've learned it.
Last night we had a meetup in Ljubljana. It was a good debate, but quite a heretical one by LW standards, especially after the organizers left us. Which was unfortunate. We mostly don't see ourselves as particularly bonded to LW at all. Especially me.
We discussed personal identity, possible near superintelligence (a sudden hack, if you wish), the transformation of the Universe following this eventuality, and some lighter topics like fracking for gas and oil, language revolutions throughout history, neo-reactionaries and their points, and Einstein's brain (whether it was lighter or heavier than average - I am quite sure it was heavier, but it seems that the Cathedral says otherwise).
We discussed Three Worlds Collide, IBM brain simulations, MIRI endeavors and progress, genetics ...
More than 5 hours of an interesting debate.
Heretical? Well, considering that 'heretic' means 'someone who thinks on their own', I'm not sure how we're supposed to interpret that negatively.
I assume however that you meant 'disagreeing with common positions displayed on LW' - which of those common positions did you differ on, and why, and just how homogeneous do you think LW is on those?
I can speak mostly for myself. Still, we locals go back a decade and more, discussing some of these topics.
It is kind of clear to me that there is a race toward superintelligence, just as there has always been a race toward some future technology, be it flying, the atomic bomb, the Moon race... you name it.
Except that this is the final, most important race ever. What can you expect, then, from the competitors? You can expect them to claim that the Singularity/Transcendence is still far, far away. You can expect the competition to try to persuade you to abandon your own project, if you have any - for example, by saying that an uncontrollable monster named UFAI is lurking in the dark. They will say just about anything to persuade you to quit.
This works both ways, between almost any two competitors, to be clear.
My view is the following. If you are clever and daring enough, you can write a computer program of 10,000 lines or thereabouts, and there will be the Singularity the very next month.
I am not sure if there is a human (or group) currently able to accomplish this. There very well might be. It's likely NOT THAT difficult.
We discussed Marilyn vos Savant's toying with Paul Erdős. A smartass against a top scientist is occasionally like a cat-and-mouse game, where the mouse mistakenly thinks he's the cat. There are many other examples, like Ballard against all the historians and archeologists, or Moldbug against Dawkins.
Of course, that does not automatically mean another smartass is preying upon MIRI and AI academia combined in the real AI case. But it's not impossible. There may be several different big cats in the wild keeping a low profile for the time being. There might even be a lion with his pride inhabiting academia.
The most interesting outcome would be no Singularity for a few decades.
That seems an... unusual view. Have you actually tried writing code that exhibits something related to intelligence?
10K lines is not a big program.
I have certain abilities. This is a product of mine from 10 years ago.
Smartass I am. Probably not smart enough to really make a difference, though.
Smartass is good. Saying things which are clearly not true without a hidden smartassy implication behind them -- not so much :-)
It depends on your language and coding style, doesn't it? I've seen C style guides that require you to stretch out onto 15 lines what I'd hope to take 4, and in a good functional language shouldn't take more than 2.
Yes, and the number of lines is a ridiculously bad metric of the code's complexity anyway.
Was a funny moment when someone I know was doing a Java assignment, I got curious, and it turned out that a full page of Java code is three lines in Perl :-)
That really depends on coding style, again. I find that common Java coding styles are hideously decompressed, and become far more readable if you do a few things per line instead of maybe half a thing. Even they aren't as bad as the worst C coding styles I've seen, though, where it takes like 7 lines to declare a function.
As for Perl vs Java... was it solved in Perl by a Regex? That's one case where if you don't know what you're doing, Java can end up really bloated but it usually doesn't need to be all that bad.
I don't remember the details by now, but I think that yes, there was a regexp and a map, and a few of Perl's shortcuts turned out to be useful...
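For what it's worth, the kind of task described (a regexp plus a map) stays short in most scripting languages; a hypothetical Python version of such a task, with made-up input:

```python
import re

# Extract the number from each line and double it: a regexp plus a map,
# a few lines in most scripting languages.
lines = ["a=1", "b=2", "c=3"]
doubled = [2 * int(re.search(r"\d+", s).group()) for s in lines]
print(doubled)  # [2, 4, 6]
```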
Does anyone else experience the feeling of alienation? And does anyone have a good strategy for dealing with it?
Yes, although it would help if you could be a bit more specific, the term is somewhat overloaded.
As for the strategy, depends. Find a better community (than the one you feel alienated from) in the sense of better matching values? We both seem to feel quite at home in this one (for me, if not for the suffocating supremacy of EA).
I meant alienated from society at large, not from LW, although the influence of society at large obviously affects discussion on LW.
One aspect of my feeling is that I increasingly suspect that the fundamental reason people believe things in the political realm is that they feel a powerful psychological need to justify hatred. The naive view of political psychology is that people form ideological beliefs out of their experience and perceptions of the world, and those beliefs suggest that a certain category of people is harming the world, and so therefore they are justified in feeling hatred against that category of people. But my new view is that causality flows in the opposite direction: people feel hatred as a primal psychological urge, and so their conscious forebrain is forced to concoct an ideology that justifies the hatred while still allowing the individual to maintain a positive pro-social self-image.
This theory is partially testable, because it posits that a basic prerequisite of an ideology is that it identifies an out-group and justifies hatred against that out-group.
Do you have an in-person community that you feel close to?
What I'm trying to get at is, does it bother you specifically that you are alienated from "society at large," or do you feel alienated in general?
The part where the emotional needs come first, and the ideological belief comes later as a way of expressing and justifying them, that feels credible. I just don't think that everyone starts from the position of hatred (or, in the naive view, not everyone ends with hatred). There are other emotions, too.
But maybe the people motivated by hatred make a large part of the most mindkilled crowd. Because other emotions can be expressed legitimately also outside of the politics.
Tentatively: Look for what "and therefore" you've got associated with the feeling. Possibilities that come to my mind-- and therefore people are frightening, or and therefore I should be angry at them all the time, or and therefore I should just hide, or and therefore I shouldn't be seeing this.
In any case, if you've got an "and therefore" and you make it conscious, you might be able to think better about the feeling.
There is a quote commonly mis-attributed to August Bebel and indeed to Marx: "Antisemitismus ist der Sozialismus des dummen Kerls." ("Antisemitism is the socialism of the stupid guy", or perhaps colloquially, "Antisemitism is a dumb-ass version of socialism") That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker's view, anyway — would be better remedied by changing the political-economic structure.
Jay Smooth recently put out a video, "Moving the Race Conversation Forward", discussing recent research to the effect that mainstream-media discussions of racial issues tend to get bogged down in talking about whether an individual did or said something racist, as opposed to whether institutions and social structures produce racially biased outcomes.
There are probably other sources for similar ideas from around the political spectra. (I'll cheerfully admit that the above two sources are rather lefter than I am, and I just couldn't be arsed to find two rightish ones to fit the politesse of balance.) People do often look for individuals or out-groups to blame for problems caused by economic conditions, social structures, institutions, and so on. The individuals blamed may have precious little to do with the actual problems.
That said, if someone's looking to place blame for a problem, that does suggest the problem is real. It's not that they're inventing the problem in order to have something to pin on an out-group. (It also doesn't mean that a particular structural claim, Marxist or whatever, is correct on what that problem really is — just that the problem is not itself confabulated.)
Sure, obviously there are real problems in the world. Your examples seem to support my thesis that people believe in ideologies not because those ideologies are capable of solving the problems, but because the ideologies justify their feelings of hatred.
I suppose I see it as more a case of biased search: people have actual problems, and look for explanations and solutions to those problems, but have a bias towards explanations that have to do with blaming someone. The closer someone studies the actual problems, though, the less credibility blame-based explanations have.
I've seen it phrased as "Anti-semitism is the socialism of fools".
Does that make socialism the anti-semitism of the smart? Or perhaps of the ambitious -- they're attracted to it because it gives them an enemy big enough to justify taking over everything?
But of course.
Accept that you're not average and not even typical.
Feelings usually become a problem when you resist them.
My general approach with feelings:
Find someone to whom you can express the content behind the feeling. This works best in person; online communication isn't good for resolving feelings. Speak openly about whatever comes to mind.
Track the feeling down in your body. Be aware where it happens to be. Then release it.
I think that feeling is more common than you might think. Especially if you deviate enough from the societal norm (which Less Wrong generally does).
My general strategy for dealing with it is social interaction with people who'll probably understand. Just talk it over with them. It's best if you do this with people you care about. It doesn't have to be in person; if you've got someone relevant on Skype, that works as well.
Hmm, this is probably good advice. Part of my problem is that my entire family is made up of people who are both 1) Passionate advocates of an American political tribe and 2) Not very sophisticated philosophically.
A common condition with geeks in general and aspiring rationalists in particular, I'd say.
I've recently been expanding my network of like-minded people both by going to the local meetups and also by being invited in a Skype group for tumblr rationalists.
I know that a feeling of alienation isn't conducive to meeting new people, so I'm not sure I can offer other advice. Contact some friends who might be open to new ideas? I'd offer to help myself, but I'm not sure if I'm the right person to talk to. (In any case, I've PM'd my Skype name if you do need a complete stranger to talk to.)
I think this feeling arises from social norms feeling unnatural to you. This feeling should be expected if your interests are relevant to this site, since people are not trying to be rational by default.
The difference between a pathetic misfit and an admirable eccentric is their level of awesomeness. If you become good enough at anything relevant to other people, you don't have to live by their social expectations. Conform to the norms or rise above them.
Note that I think most social norms are nice to have, but this doesn't mean there aren't enough of the kind that make me feel alienated. It could be that the feeling of alienation is a necessary side effect of some beneficial cognitive change, in which case I'd try to cherish the feeling. I've found that rising to a leadership position diminishes the feeling significantly, however.
I'm going to do the unthinkable: start memorizing mathematical results instead of deriving them.
Okay, unthinkable is hyperbole. But I've noticed a tendency within myself to regard rote memorization of things to be unbecoming of a student of mathematics and physics. An example: I was recently going through a set of practice problems for a university entrance exam, and calculators were forbidden. One of the questions required a lot of trig, and half the time I spent solving the problem was just me trying to remember or re-derive simple things like the arcsin of 0.5 and so on. I knew how to do it, but since I only have a limited amount of working memory, actually doing it was very inefficient because it led to a lot of backtracking and fumbling. In the same sense, I know how to derive all of my multiplication tables, but doing it every time I need to multiply two numbers together is obviously wrong. I don't know how widespread this is, but at least in my school, memorization was something that was left to the lower-status, less able people who couldn't grasp why certain results were true. I had gone along with this idea without thinking about it critically.
So these are the things I'm going to add to my anki decks, with the obligatory rule that I'm only allowed to memorize results if I could theoretically re-derive them (or if the know-how needed to derive them is far beyond my current ability). These will include common trig results, derivatives and integrals of all basic functions, most physical formulae relating heat, motion, pressure and so on. I predict that the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems, though I can't think of a way to measure this. Also, recommendations for other things to memorize are welcome.
Also, relevant
In my experience memorization often comes for free when you strive for fluency through repetition. You end up remembering the quadratic formula after solving a few hundred quadratic equations. Same with the trig identities. I probably still remember all the most common identities years out of school, owing to the thousands (no exaggeration) of trig problems I had to solve in high school and uni. And can derive the rest in under a minute.
Memorization through solving problems gives you much more than anki decks, however: you end up remembering the roads, not just the signposts, so to speak, which is important for solving test problems quickly.
You are right that "the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems". I am not sure that Anki is the best way to achieve this reduction, though it is certainly worth a try.
Nice, and good luck! I'm glad to see that my post resonated with someone. For rhetorical purposes, I didn't temper my recommendations as much as I could have -- I still think building mental models through deliberate practice in solving difficult problems is at the core of physics education.
I treat even "signpost" flashcards as opportunities to rehearse a web of connections rather than as the quiz "what's on the other side of this card?" If an angle-addition formula came up, I'd want to recall the easy derivation in terms of complex exponentials and visualize some specific cases on the unit circle, at least at first. I also use cards like that in addition to cards which are themselves mini-problems.
In general, a core principle of spaced repetition is that you don't put something into the system that you don't already understand.
When trying to memorize mathematical results, make sure you only add cards once you really have a mental understanding. Using Anki to avoid forgetting basic operations is great. If, however, you add a bunch of complex information, you will forget it and waste a lot of time.
Yeah, I'm wary of that, and I've learned its downsides through experience :)
That's true if you're just using spaced repetition to memorize, although I'd add that it's still often helpful to overlearn definitions and simple results just past the boundaries of your understanding, along the lines of Prof. Ravi Vakil's advice for potential students:
The second point I'd make is that the spacing effect (distributed practice) works for complex learning goals as well, although it will help if your practice consists of more than rote recall.
If you learn definitions, it's important to sit down and actually understand the definition first. If you write a card before you understand it, that will lead to problems.
Has anyone paired Beeminder and Project Euler? I'd like to be able to set a goal of doing x problems per week and have it automatically update, instead of me entering the data in manually. Has anyone cobbled together a way to do it, which I could piggyback off of?
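I haven't seen a ready-made integration, but Beeminder does expose an HTTP API for posting datapoints, so a small script run on a schedule is one plausible route. Here is a rough sketch; the datapoints endpoint and `auth_token` parameter come from Beeminder's public API, while `USERNAME`, `GOAL`, and `TOKEN` are placeholders, and the solved-problem count is a stand-in you'd have to obtain yourself (Project Euler has no official API, so you'd scrape your progress page or enter it by hand):

```python
# Hypothetical glue between Project Euler progress and a Beeminder goal.
# Beeminder's "create datapoint" endpoint and auth_token parameter are real;
# how you obtain cur_solved is up to you (Project Euler has no official API).
import urllib.parse
import urllib.request

BEEMINDER_URL = "https://www.beeminder.com/api/v1/users/{user}/goals/{goal}/datapoints.json"

def new_solves(prev_solved, cur_solved):
    """Datapoint value: problems solved since the last run (never negative)."""
    return max(cur_solved - prev_solved, 0)

def post_datapoint(user, goal, token, value):
    """POST one datapoint to Beeminder; returns the HTTP status code."""
    data = urllib.parse.urlencode({"auth_token": token, "value": value}).encode()
    req = urllib.request.Request(BEEMINDER_URL.format(user=user, goal=goal), data=data)
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (placeholders only): post_datapoint("USERNAME", "GOAL", "TOKEN", new_solves(120, 123))
```

Run from cron or a scheduled task, storing the previous count in a file between runs, and the goal updates itself.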
Is being "sexy" basically signaling promiscuity plus signaling being a fun intercourse partner?
"Sexy" isn't signaling -- it's a characteristic that people (usually) try to signal, more or less successfully. "I'm sexy" basically means "You want me" : note the difference in subjects :-)
Ok, I may have been too vague. I was thinking of the exhibition of sexy behavior, e.g. clothes, dancing/gestures, sex-related language.
Pretty much the same thing. Regardless of an, um, widespread misunderstanding :-D sexy behavior does NOT signal either promiscuity or sexual availability. It signals "I want you to desire me" and being desired is a generally advantageous position to be in.
If a man succeeds in signaling high sexuality to a woman, she might still treat him as a creep. Especially when there's no established trust, signaling very high sexuality doesn't result in "You want me".
In my own interactions with professional dancers, there are plenty of situations where the woman succeeds in signaling a high amount of sexiness. However, I know that I'm dancing with a professional dancer who is going to send that signal to a lot of guys, so she doesn't enter my mental category of potential mates.
I think people frequently go wrong when they confuse impressions of characteristics with goals.
In which case he failed to signal "sexy" and (a common failure mode) signaled "creepy" instead.
It depends on how you define the term.
For a reasonable definition of sexy, the term refers to making a woman feel sexual tension. If you talk about social interactions, it's useful to have a word that refers to making another person feel sexual tension.
Of course you can define beautiful, attractive and sexy all the same way. Then you get a one-dimensional model where Bob wants Alice with utility rating X. I don't think that model is very useful for understanding how humans behave in mating situations.
I define it as "arousing sexual interest and desire in people of appropriate gender and culture". Note that this is quite different from "beautiful" and is a narrow subset of "attractive".
"Tension" generally implies conflict or some sort of a counterforce.
Testosterone, which is commonly associated with sexiness in males, is about dominance. It has to do with power, and that does create tension.
Of course a woman can decide to have sex with a shy guy because he's nice and she thinks he's intelligent or otherwise a good match. Given that there are shy guys who do have sex, that's certainly happening in reality.
Does that mean that the behavior of that guy deserves the label "sexy"? I don't think he's commonly given that label.
There are also words like sensual and empathic. A guy can get laid by being very empathic and simply making a woman feel really great by interacting with her in a sensual way. I think it's useful to separate that mentally from the kind of testosterone-driven behavior that commonly gets called sexy.
If you read an exciting thriller, you also feel tension, even though you aren't in conflict with the book and there's no counterforce. Building up tension and then releasing it is a way for humans to feel pleasure.
Being sexy signals health, youth, and fertility. This is quite well supported by evidence and discussed in many books and articles.
I would agree with what Lumifer says below, but I think sexy can be signalling when many people are involved: look at the sexy people I hang out with. Being with sexy people brings high status because it's high status.
I think you're confusing the label "sexy" with the label "attractive". As far as my reading goes, few articles use the term sexy.
Sexy is a quite broad word that is probably used by different people in different ways. I think for most people it's about what they feel when looking at the person. Those feelings were set up by evolution over long time frames.
Evolution doesn't really care about whether you get a fun intercourse partner.
But it's not only evolution. It also has a lot to do with culture. Culture also doesn't care about whether you get a fun intercourse partner. People who watch a lot of TV get taught that certain characteristics are sexy.
For myself, I would guess that most of my cultural imprint regarding what I find sexy comes from dancing interactions. If a woman moves in a way that suggests she doesn't dance well, that will reduce her sex appeal to me more than it probably does for the average male.
PSA: You can download from scribd without paying, you just need to upload a file first (apparently any file -- it can be a garbage pdf or even a pdf that's already on scribd). They say this at the very bottom of their pricing page, but I didn't notice until just now.
Some names familiar to LWers seem to have just made their fortunes (again, in some cases); http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/ (via HN)
I liked Legg's blog & papers and was sad when he basically stopped in the interests of working on his company, but one can hardly argue with the results.
EDIT: bigger discussion at http://lesswrong.com/r/discussion/lw/jks/google_may_be_trying_to_take_over_the_world/#comments - new aspects: $500m, not $400m; DeepMind proposes an ethics board
My meditation blog from a (somewhat) rationalist perspective is now past 40 posts:
http://meditationstuff.wordpress.com/
The MIRI course list bashes on "higher and higher forms of calculus" as not being useful for their purposes and calculus is not on the list at all. However, I know that at least some kind of calculus is needed for things like probability theory.
So imagine a person wanted to work their way through the whole MIRI course list and deeply understand each topic. How much calculus is needed for that?
Not much. The kind of probability relevant to MIRI's interests is not the kind of probability you need calculus to understand (the random variables are usually discrete, etc.). The closest thing to needing a calculus background is maybe numerical analysis (I suspect it would be helpful to at least have the intuition that derivatives measure the sensitivity of a function to changes in its input), but even then I think that's more algorithms. Not an expert on numerical analysis by any means, though.
If you have a general interest in mathematics, I still recommend that you learn some calculus because it's an important foundation for other parts of mathematics and because people, when explaining things to you, will often assume that you know calculus after a certain point and use that as a jumping-off point.
Thanks. I took single variable calculus, differential equations, and linear algebra in college, but it's been four years since then and I haven't really used any of it since (and I think I really only learned it in context, not deeply). I've just been trying to figure out how much of my math foundations I'm going to need to re-learn.
This was helpful.
Has anyone had experiences with virtual assistants? I've been aware of the concept for many years but always been wary of what I perceive to be the risks involved in letting a fundamentally unknown party read my email.
I'd like to hear about any positive or negative experiences.
One problem with searching for information about the trustworthiness of entities like these is that one suspects any positive reports one finds via Googling to be astroturfing, and if one finds negative reports, well, negatives are always over-reported in consumer services. That's why I'm asking here.
I don't, but in Tim Ferriss's book The 4-Hour Workweek, I think I recall him recommending them. I think this was the one he recommended: https://www.yourmaninindia.com/.
Let me know if you come across some good findings on this. If effective, virtual assistants could be very useful, and thus they're something I'm interested in. On that note, it'd probably be worth writing a post about them.
Daniel Dennett quote to share, on an argument in Sam Harris' book Free Will;
From: http://www.samharris.org/blog/item/reflections-on-free-will#sthash.5OqzuVcX.dpuf
Just thought that was pretty damn funny.
Is there a good way of finding what kind of job might fit a person? Common advice such as "do what you like to do" or "do what you're good at" is relatively useless for finding a specific job or even a broader category of jobs.
I've done some reading on 80,000 Hours, and most of the advice there is on how to choose between a couple of possible jobs, not on finding a fitting one from scratch.
That's a strange question.
Either you want to pick up the skill of being a career adviser, or you want to find a job for yourself. You might also be a parent trying to find a job that fits your child instead of letting the child decide for themselves.
I think the answers to those three possibilities are very different.
I think for most people who ask this question, the range of fitting jobs is much wider than they think. You learn to like what you become good at.
If I were to pick a career right now, I'd take a long list of reasonably complex jobs and remove any that contain an obvious obstacle, like a skill requirement I'm unlikely to meet. Then, from what is left, I'd narrow the choice by criteria other than perceived fit (income and future employment prospects, for example) and pick one either by some additional criterion or randomly. I'm confident I'd learn to like almost any job chosen this way.
If you make money you can do whatever you like in the future even if you chose your job poorly in the first place. So please don't choose to become an English major.
I'm quite new to LW, and find myself wondering whether Hidden Markov models (HMM) are underappreciated as a formal reasoning tool in the rationalist community, especially compared to Bayesian networks?
Perhaps it's because HMM seem to be more difficult to grasp?
Or it's because formally HMM are just a special case of Bayesian networks (i.e. dynamic Bayes nets)? Still, HMM are widely used in science on their own.
For comparison, Google search "bayes OR bayesian network OR net" site:lesswrong.com gives 1,090 results.
Google search hidden markov model site:lesswrong.com gives 91 results.
There's a proliferation of terminology in this area; I think a lot of these are in some sense equivalent and/or special cases of each other. I guess "Bayesian network" is more consistent with the other Bayes-based vocabulary around here.
Hidden Markov models are a reasoning model for a specific class of problems. If you don't face that kind of problem, they are of no use.
Most of the problems we discuss aren't modeled well with HMMs.
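For anyone who wants a concrete picture of what an HMM computes: a hidden state evolves as a Markov chain and emits observations, and the forward algorithm sums over all hidden paths to get the probability of an observation sequence. A minimal sketch with a made-up two-state weather model (all probabilities are illustrative, not taken from any source):

```python
# Forward algorithm for a toy HMM: hidden weather states emit observed activities.
def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(observation sequence) by summing over all hidden state paths."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

# Made-up parameters for illustration.
states = ("rainy", "sunny")
start_p = {"rainy": 0.6, "sunny": 0.4}
trans_p = {"rainy": {"rainy": 0.7, "sunny": 0.3},
           "sunny": {"rainy": 0.4, "sunny": 0.6}}
emit_p = {"rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

p = forward(("walk", "shop", "clean"), states, start_p, trans_p, emit_p)  # ~0.0336
```

This also shows the sense in which an HMM is a special case of a Bayesian network: unroll the hidden state over time steps and you get a dynamic Bayes net with the same joint distribution.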
Hello, we are organizing monthly rationality meetups in Vienna - we have previously used the account of one of our members (ratcourse) but would like to switch to this account (rationalityvienna). Please upvote this account for creating rationality vienna meetups.
Is there a reasonably well researched list of behaviors that correlate positively with lifespan? I'm interested in seeing if there are any low hanging fruit I'm missing.
I found this previously posted, and a series of posts by gwern, but was wondering if there is anything else?
A quick google will give you a lot of lists but most of them are from news sources that I don't trust.
Depends on what you'd call "well-researched" but, unfortunately, most of it is fuzzy platitudes. For example:
and most importantly
I found this list of causes of death by age and gender enlightening (it doesn't necessarily tell you that a particular action will increase your lifespan, but then again neither do correlations). For example, I was surprised by how often people around my age or a bit older die of suicide and "poisoning" (not sure exactly what this covers but I think it covers stuff like alcohol poisoning and accidentally overdosing on medicine?).
A year ago, I was asked to follow up on my post about the January 2013 CFAR workshop in a year. The time to write that post is fast approaching. Are there any issues / questions that people would be particularly interested in seeing this post address / answer?
I'm recalling a Less Wrong post about how rationality only leads to winning if you "have enough of it". Like if you're "90% rational", you'll often "lose" to someone who's only "10% rational". I can't find it. Does anyone know what I'm talking about, and if so can you link to it?
John_Maxwell_IV and I were recently wondering about whether it's a good idea to try to drink more water. At the moment my practice is "drink water ad libitum, and don't make too much of an effort to always have water at hand". But I could easily switch to "drink ad libitum, and always have a bottle of water at hand". Many people I know follow the second rule, and this definitely seems like something that's worth researching more because it literally affects every single day of your life. Here are the results of 3 minutes of googling:
http://www.sciencedirect.com/science/article/pii/S0002822399000486:
So how much is 2% dehydration? http://en.wikipedia.org/wiki/Dehydration#Differential_diagnosis : "A person's body, during an average day in a temperate climate such as the United Kingdom, loses approximately 2.5 litres of water.[citation needed]" http://en.wikipedia.org/wiki/Body_water quotes Arthur Guyton's Textbook of Medical Physiology: "the total amount of water in a man of average weight (70 kilograms) is approximately 40 litres, averaging 57 percent of his total body weight." So effects on cognition become apparent after 40l*2% = 800ml of water has been lost, which takes roughly 800ml/(2.5l/24h) ≈ 8 hours. Now, this assumes water is lost at a constant rate, which is false, but it still seems like it would take a while to lose a full 800ml. This implies that you don't have to make a conscious effort to drink more water, because everybody gets at least mildly thirsty after, say, half an hour of walking around outside on a warm day, well before losing anything close to 800ml.
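The arithmetic above can be sanity-checked in a few lines (the figures are the ones quoted from Wikipedia and Guyton, under the same unrealistic constant-loss assumption):

```python
# Back-of-envelope check of the dehydration timeline quoted above.
total_body_water_l = 40.0   # 70 kg man, per the Guyton quote
dehydration_frac = 0.02     # ~2% dehydration, where cognitive effects reportedly start
daily_loss_l = 2.5          # approximate daily loss in a temperate climate

deficit_l = total_body_water_l * dehydration_frac   # 0.8 L
hours = deficit_l / (daily_loss_l / 24)             # 7.68 h, i.e. roughly 8
```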
http://freebeacon.com/michelle-obamas-drink-more-water-campaign-based-on-faulty-science/ : “There really isn’t data to support this,” said Dr. Stanley Goldfarb of the University of Pennsylvania. “I think, unfortunately, frankly, they’re not basing this on really hard science. It’s not a very scientific approach they’ve taken. … To make it a major public health effort, I think I would say it’s bizarre.” Goldfarb, a kidney specialist, took particular issue with White House claims that drinking more water would boost energy. ”The idea drinking water increases energy, the word I’ve used to describe it is: quixotic,” he said. “We’re designed to drink when we’re thirsty. … There’s no need to have more than that.”
http://ask.metafilter.com/166600/Drinking-more-water-should-make-me-less-thirsty-right : When you don't drink a lot of water your body retains liquid because it knows it's not being hydrated. It will conserve and reabsorb liquid. When you start drinking enough water to stay more than hydrated your body will start using the water and then dispensing of it as needed. Your acuity for thirst will be activated in a different way and in a sense work better.
Some thoughts:
Thoughts? Please post your own opinion if you're knowledgeable about this or if you've researched it.
Thanks for writing this up.
Lots of things fall in to this category :)
In case it's not obvious: this probably means in the absence of food/fluid consumption. You can't go on losing 2.5 litres of water a day indefinitely.