Hi, I've been following LW and the occasional article for a couple years but have never posted any comments. Now I'd like to post an article but I need 20 karma to do so. If you don't mind up-voting this comment to allow me to do that, please do!
If it helps, the post I've drafted is about some of the altruistic eating arguments I've seen in the broader LW network such as:
I use one neoclassical economic argument and one intuitive appeal to argue that these arguments could all be simpler and more accurate by leaving out elasticities. LW seems like the perfect forum for this discussion!
Happy to share the draft privately before it's posted if that affects your desire to up-vote this comment, or any recommendations as to whether it's more appropriate for Main or Discussion.
Thanks!
I’m looking for a mentor who is in the software industry.
About me: I’m studying math at a university in Ohio, and I’ll graduate in May. I’m a mostly self-taught programmer, but I’ve also taken a few CS classes at school and on Coursera. My most developed skills are in Python and Django, though I’ve also used C, C#, Haskell, SQL, JavaScript and a few other technologies.
My goal is to find a job as a software developer, but I face several challenges:
So I’m looking for somebody who can answer some questions and give me advice on getting started in the industry. I know it’s a long shot, but there is no downside to asking. If anybody is willing to help, please PM me.
Instead of using a name like a_lurker and asking for PM's, I would suggest trying to be a little more public - your goal should be to display to potential employers that you can code. This is actually harder than it sounds, as programmers (especially self-taught ones) are more likely to be introverted and don't like marketing themselves. [from my personal perspective]
Some suggestions would be:
Pick some sort of professional sounding name for yourself (doesn't have to be a business name) that you want to be known as - better if it is rare on google. You will use this name to promote your knowledge and collaboration on many websites
Register a domain - even if it is a .info with a simple About page saying you are looking for work, plus your resume. This website should be in your email signature and plugged on the other sites below.
Start a github account (learn git first) and publish something - anything that you think is good code [as long as it isn't the answer to any of your course assignments]
Answer questions on Stackoverflow with your professional name - and ask questions. Don't spam it, but don't be afraid to ask stupid questions.
Get a linkedin account and grow your network there
Happy to answer any PM's you have, but you should think about promoting yourself if you want work.
Cheers,
Duncan
I believe that “the problem of the nerdy heterosexual male” is surely one of the worst social problems today that you can’t even acknowledge as being a problem—the more so, if you weight the problems by how likely academics like me are to know the sufferers and to feel a personal stake in helping them. How to help all the young male nerds I meet who suffer from this problem, in a way that passes feminist muster, and that triggers the world’s sympathy rather than outrage, is a problem that interests me as much as P vs. NP, and that right now seems about equally hard.
Reading debates like this makes me sad. I realize that just like everything else, feminism also is a tool that different people can use for different purposes. Some people can use it to support empathy towards other human beings. Some people use it to deny empathy towards other human beings. Somehow the latter seem more prominent on the internet.
There is something about impersonal online communication that emphasises sociopathic traits in people. When in real life you see a suffering man, you usually don't see feminists running to him screaming "actually, I have it much worse!". But when you see a man writing about his suffering online, this is what often happens. Maybe it's because behaving like an asshole in real life is likely to bring you a punch in the face or at least a loss of status, while doing the same on the internet generates page views and income, if you do it properly.
Like polymathwannabe wrote, there is a meaningful definition of "privilege" (although I would prefer the word "advantage"), which is: "Maybe you are just a lucky person who doesn't have this problem, so you don't think about this problem, and you may even find it unbelieva...
You're not supposed to feel liable for the many ancient crimes that gave you your present advantages, but you're expected to be mindful of those who still suffer as a consequence.
What does this being mindful look like, in concrete terms?
surely must be an extremely uncommon problem
Aaronson's description felt very familiar to me, describing my middle and early high school years pretty well. In my case this didn't involve reading radical feminist writing, just paying attention to what adults said about how people were to treat each other.
(And despite having had several relationships and now being married I've still never initiated a relationship, mostly out of really not wanting to come off as creepy.)
Scott was probably one of the few people to actually believe what he was told about sexual harassment. For example, if you tell 18-year-old men that they are "bad" if they stare at a beautiful woman whom they are not in a relationship with, most will think you're being silly. If Scott, however, thought this was a commonly held belief I can understand why it would cause him extreme anxiety.
I don't think he read radfem literature at 12. Fear of being scorned when a girl finds out that a low-status guy loves her doesn't need any radfem literature.
He read that literature because from his perspective it was the obvious way to deal with the problem.
That's sad, but it surely must be an extremely uncommon problem.
How sure are you of that claim? What percentage would you guess we'd see if we asked a similar question in an LW census?
That's sad, but it surely must be an extremely uncommon problem.
It sounds like Aaronson had an uncommonly severe version, but the general form of the problem doesn't seem exceptionally rare, among the subpopulation in question.
Figuring out what we can usefully do about it, without trading one problem for another, that's the hard part.
(Part of me also wants to point out that exactly how uncommon it is doesn't matter very much, due to a perhaps irrational fear that someone wants to say "well, it's not common" and then forget the problem exists.)
I figured I would throw this out even though it seems exceedingly obvious in retrospect, but took me a while to figure out:
If you use Anki (or other SRS software) you can save a lot of time adding new cards by using screenshots. Not whole-screen shots, but selecting only the important paragraphs/pictures/equations from an ebook or website. On a Mac this is command-control-shift-4; then drag over the part of the screen you want to copy to the clipboard. Just paste it into Anki.
This saves me so much time when making cards out of books/papers that I almost exclusively read on my computer now.
I disrecommend this practice for text and equations. It will save you a small amount of time up front, but may cost you more overall. First, your ability to search is compromised depending on how much is a graphic. I search fairly frequently, especially when I want to update a card. That's the second disadvantage: you can't update a card so easily now. There's a third disadvantage: Typing up a paragraph from a book (in your own words) has additional learning benefits.
A fourth disadvantage is that different sources may have different notation. I've struggled to keep consistent notation for a number of my decks. You don't have much of a choice in the notation if you just take screenshots. It is a good idea to have multiple notations in your decks, but my suggestion would be to have consistent notation for the main cards and have extra cards about the less common notations.
In my experience, deciding what to add takes much more time than adding it. I make this as easy as possible by noting what I think is important when reading, reviewing, etc. I have lists of text files of things to add along with where to find more information. I have a Beeminder goal to add 3 cards daily that I don'...
A question for specialists on EA.
If I live in a place where I can choose between a standard mix of electricity sources consisting of hydrocarbons, nuclear and renewables, and a "green" mix of renewables exclusively that costs more, should I buy the green mix or buy the cheaper/cheapest mix and donate the difference to GiveWell?
I'd break this down into two questions:
For the first question, subsidising renewable energy is probably a good thing, but there's no reason to expect this particular opportunity to be up there with the world's best organizations. For the second it doesn't seem to me that it matters. So I'd say buy the normal stuff and give the difference to the best organization you can find.
Does anyone want some free MealSquares?
This obviously requires sending me your address. But I will ship 3 packs (18 total) of MealSquares to your house free of charge to (up to) the first 4 people who want them. I ordered more MealSquares than I really want for myself, and I think some people on this site would like to try them.
edit:
Seems I have four takers, so the offer is probably closed now that enough people have said yes.
Any LWers in NYC want to do some pair programming with me? It's okay if you don't have much experience and just want to learn -- I like teaching, and learning by working directly on a project with someone more experienced is a great way to learn. Or if you're more experienced, that's great too.
I'm a software developer, but I have a repetitive stress injury so I can't type much.
I have a few ideas for projects we could work on, but I'm also open to other ideas.
So I had one of those typical mind fallacy things explode on me recently, and it's caused me to re-evaluate a whole lot of stuff.
Is there a list of high-impact questions people tend to fail to ask about themselves somewhere?
I think the "logical probability" problem, of "how do I quantify my beliefs about the trillionth digit of \pi" and whatnot, is probably just an issue of domain-theoretic maximum-entropy distributions: each step of computation can give us more information that can be used to concentrate the measure better, and domain theory says how computational results are built out of other results from computations that may not have finished yet.
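(For what it's worth, the trivial limiting case of "more computation concentrates the measure" is easy to play with; the toy script below is mine and involves no domain theory, with digit 10 standing in for the trillionth digit, which is far beyond float precision.)

import math

def kth_digit_of_pi(k: int) -> int:
    # k-th digit of pi after the decimal point, for k <= 15 (limited by float precision)
    assert 1 <= k <= 15
    return int(("%.15f" % math.pi).split(".")[1][k - 1])

# Maximum-entropy distribution over the unknown digit, before spending any computation:
prior = {d: 0.1 for d in range(10)}
prior_entropy = -sum(p * math.log2(p) for p in prior.values())

k = 10
print(f"prior entropy over digit {k}: {prior_entropy:.2f} bits")                    # ~3.32
print(f"digit {k} of pi is {kth_digit_of_pi(k)}; after computing it, entropy is 0 bits")

The interesting part, which the script doesn't touch, is the intermediate states: partial computations that narrow the digit down without pinning it, which is where the domain-theoretic picture of partial results would have to do the work.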
Cognitive bias research gives us a long list of situations where System 1 may fail to give you the best answer, giving a biased answer instead. Therefore, learning about cognitive biases teaches one to notice when one is in a dangerous situation where one should "halt" System 1 and offload the decision to System 2. Naturally, the next question is: how does one train System 1 and System 2 themselves?
How does one train one's System 1? If you spend a lot of time analyzing data in your field, you can develop a domain specific intui...
Just finished the first chapter of Superintelligence. It was a great summary of the history of AI. I thought this was a funny thing:
...In the summer of 1956 at Dartmouth College, ten scientists sharing an interest in neural nets, automata theory, and the study of intelligence convened for a six-week workshop. This Dartmouth Summer Project is often regarded as the cockcrow of artificial intelligence as a field of research. Many of the participants would later be recognized as founding figures. The optimistic outlook among the delegates is reflected in the pr
Vitalik Buterin mentions LW in his latest, On Silos:
I consider economics and game theory to be a key part of cryptoeconomic protocol analysis, and consider the primary academic deficit of the cryptocurrency community to be not ignorance of advanced computer science, but rather economics and philosophy. We should reach out to http://lesswrong.com/ more.
From what I understand, there is a debate in epistemology / philosophy of science regarding the concept of simplicity ("Occam's Razor"). Some hold that there is a justifiable basis for the concept in the sense that it is an indicator of which of a set of possible theories is more likely to be true. Others dispute this and say that there is no justified basis for simplicity arguments in this sense.
In a recent conversation I made the following assertion (more or less):
Those who say that simplicity arguments are unjustified are actually saying that ...
I'm trying to install a new habit, wondering if anyone had relevant feedback, or if my description of it would be useful to anyone else.
Background: I experience situations where I feel like I "should" try to do X (for example, it's a habit that would produce good results if I kept it up), but feel a lot of resistance to doing X. The conflict between the part of myself enforcing the should and opposing the should isn't very fun. When I don't end up doing X I start down a spiral of self criticism that leads to feeling bad about myself, which leads ...
Scott Alexander and Scott Aaronson have described their experiences as shy nerdy heterosexual guys growing up. Both of them felt a crippling paralysis and fear at the thought of asking a girl out.
Since LessWrong fits this demographic pretty well, I'd like to know: how well do their experiences match yours? Only answer if you are a nerdy heterosexual male*. [pollid:806]
Feel free to elaborate.
*For this purpose I would roughly define nerdy as having two of the following characteristics: poor social skills; a high IQ; or intense non-mainstream interests (e...
I answered "not at all", even though I was for some years very shy, anxious and fearful about asking girls out, because I never felt anything like the specific fears both Scotts wrote about, of being labelled a creep, sexist, gross, objectifier, etc. It was just "ordinary" shyness and social awkwardness, not related at all to the tangled issues about feminism and gender relations that the current discussion centers on. I interpreted the question as being interested specifically in the intersection of shyness with these issues, otherwise I might have answered "sort of".
I ran into a political/sociological hypothesis that was entirely new to me and strangely convincing, although not rigorous. Maybe somebody can point me to relevant research?
It goes like this. After a revolutionary change of government, many things will be worse than before for ten to twenty years, and the rewards will only really outweigh this after. So revolutions are carried out by people who are young enough to live past that bad period and make a net gain. And industrialized societies don't have revolutions because they're too full of people who are too old ...
The map is not the territory in terms of AI
Since AIs will be mapping entities like humans, it is interesting to ponder how they will scientifically verify facts vs fiction. You could imagine a religious AI that read a lot of religious texts and then wants to meet or find god, or maybe replace god or something else. To learn that this is not possible, it would need instruments and real-time data from the world to build a realistic world model, but even then, it might not be enough to dissuade it from believing in religion. In fact I'd say there are millions...
A list of ramblings that could prove useful in extending Solomonoff Induction, or that could well be all false:
Kolmogorov complexity is a way to assign to each explanation a natural number
assigning a natural number to a program is a way to pigeonhole the totality of programs into a well-ordered countable set, in such a way that no pigeonhole has infinitely many pigeons in it
if every partition of [k]^n into m parts has a homogeneous set of size j, then k --> (j)^n_m
let w be omega and n, m finite; then w --> (w)^n_m (Ramsey's theorem)
w -/-> (w)^n_w, on this y
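(For anyone not used to the arrow notation above, it is the standard partition-calculus shorthand; the LaTeX below states only the textbook definition and the two facts already listed.)

% k --> (j)^n_m: every colouring of the n-element subsets of k with m colours
% admits a homogeneous set of size j.
\[
  k \longrightarrow (j)^{n}_{m}
  \quad\iff\quad
  \forall f \colon [k]^{n} \to m \;\; \exists H \subseteq k,\ |H| = j,\ f \text{ is constant on } [H]^{n}.
\]
% Infinite Ramsey theorem: for finite n and m,
\[
  \omega \longrightarrow (\omega)^{n}_{m},
\]
% but with infinitely many colours it fails:
\[
  \omega \not\longrightarrow (\omega)^{n}_{\omega}.
\]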
If I understood the story correctly, Scott Aaronson was attacked mostly for paying too little attention to the feminist (well, not only theirs) concept of "privilege". I will try to paraphrase the concept of "privilege" (if I understand it correctly) using the terms of statistics, in a way that, I imagine, might lead someone to accept the concept. This way, hopefully, I will be able to express myself more clearly.
Suppose you can quantify suffering (let's use the word "suffering" even though in everyday language it is quite a strong word, whereas I'll use it to describe even very small annoyances). And suppose you are trying to create a statistical model that could predict the total suffering of an individual without actually measuring his/her suffering and without paying attention to any particular situation (just some kind of "total average suffering"), using explanatory variables that are easy to measure. Suppose you decide that you will use membership in specific social groups as your explanatory variables. As you can see, nothing in this setup guarantees that the model will actually be good (e.g. whether the error terms are symmetric, etc.), because, for example, it is not clear whether explanatory variables denoting whether a person belongs to a certain group are actually enough to make the model good, etc.
If you try, for example, linear regression, you will obtain something like this: S = a + b_1*x_1 + ... + b_n*x_n + e. In addition to that, you can have additional variables of the form x_i*x_j or x_i*(1-x_j) to model interactions between different variables. Here S is total suffering, a is an intercept term, and x_i is an explanatory Boolean variable denoting whether a person belongs to the i-th social group (some groups are mutually exclusive, some aren't; for example, let's say that we assign 1 to blue-eyed people and 0 to others). If b_i is negative, then b_i could be said to measure the "privilege" of people who belong to the i-th group. If I understand correctly, people who employ this concept use it this way. Let's denote Ŝ = a + b_1*x_1 + ... + b_n*x_n and call it "predicted suffering".
As you can see, claims that privilege is very important and thus everyone must pay a lot of attention to it depend on many assumptions.
The model itself might be unsatisfactory if it does not account for many important explanatory variables that are as important as (or even more important than) those already in the model. Few people are interested in "testing" the model and justifying the variables; most people simply choose several variables and use them.
Modeling total average suffering without paying attention to a specific situation may be misleading if the values of b_i vary a lot depending on the situation.
Another thing is that it is not clear whether the error terms e are actually small. If your model of total suffering fails to account for many sources of suffering, then the error term probably dwarfs the predicted suffering. It is my impression that, when people see a linear model, their default assumption is that the error terms are smaller (perhaps much smaller) than the conditional mean, unless explicitly stated otherwise. Therefore saying that a model has predictive power without saying that it has huge error terms might mislead a lot of people about what the model says.
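To make that last point concrete, here is a toy simulation (the numbers and variable names are made up by me purely for illustration): the group coefficient b is real and recoverable from the data, yet the residual variance dwarfs the variance explained by group membership, so Ŝ tells you very little about any individual's S.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.integers(0, 2, size=n)         # Boolean group-membership variable (x_i)
a, b = 10.0, -1.0                      # intercept and a real "privilege" coefficient
e = rng.normal(0.0, 10.0, size=n)      # individual variation: a large error term
S = a + b * x + e                      # "observed suffering"

# Ordinary least squares with an intercept: recover a and b from the data.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, S, rcond=None)
S_hat = X @ coef                       # predicted suffering, i.e. Ŝ

print("estimated a, b:", coef.round(2))                               # close to 10, -1
print("variance explained by groups:", np.var(S_hat).round(2))        # about 0.25
print("residual (individual) variance:", np.var(S - S_hat).round(2))  # about 100

Here b is perfectly real, but group membership explains well under one percent of the variation between individuals, which is exactly the regime where talking about Ŝ while ignoring S throws away almost everything.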
Some people might claim that they, for some reason, are only interested in specific types of suffering, e.g. suffering from prejudice, biased institutions, politics, laws, conventions of public life or something like that. That doesn't mean that individual variation and error terms are small. If they aren't small, then you cannot neglect their importance.
The values of coefficients b_i may be hard to determine.
But the problem I want to talk about the most is this. If you can observe the value of the response variable S (the total average suffering of an individual, or the total average suffering of an individual that is caused by a specific set of reasons), then focusing on the predicted value Ŝ is a mistake, since observation of the response variable S screens off the whole point of making a prediction Ŝ. For example, you can use university degrees to predict the qualifications of a job applicant, but if you can already observe their qualifications, you do not need to make predictions based on those degrees. It is my impression that most people who talk about privilege sometimes pay little attention to actual suffering S, but, due to mental habits obtained, perhaps, by reading the literature about the topic, pay a lot of attention to predicted suffering Ŝ. For example, Scott Aaronson describes his individual S in his comment here and gets a response here. The author says she empathizes with Scott Aaronson's story (S), then starts blaming Scott for not talking about Ŝ, and proceeds to talk about the average (Ŝ) female and male nerds. Ŝ is not what any individual feels, but it seems to be the only thing some people are able to talk about. If the size of Ŝ does not dwarf the model error terms e, then by talking about Ŝ and not talking about S they are throwing away the reality.
In addition to that, there is, if I understand correctly, another source of confusion, and it is the ambiguity of the vague concepts "institutional" and "structural". If we are talking, e.g., about suffering from biased institutions, prejudices, structures in society etc. (if for some reason we are paying more attention to only this specific type of suffering), then S (and not Ŝ) is what actually measures it. However, it is my impression that some people use these words to refer to Ŝ only, without the error terms e. In this case, they should remember that S is what actually exists in the world and, if the error terms are huge, then there might be very few situations where neglecting them and talking about Ŝ instead actually illuminates anything. It is my impression that some people who are interested in things like "privilege" tend to overestimate the size of Ŝ and underestimate the size of e, perhaps due to the availability heuristic.
Many people who argue against feminists tend to claim that the latter estimate Ŝ incorrectly. This may or may not be true, but I don't think that it is a good way to convince them to pay attention to problems that are different from what they are used to dealing with. Instead, I think that there might be a chance to convince them by emphasizing that S, and not Ŝ, is what exists in the real world, emphasizing that the error terms e may be huge, and not allowing them to change the topic from S to Ŝ. If you make them concede that a problem X, which their model does not use as an explanatory variable, exists, and that person_1, person_2, ..., person_n suffer from problem X, perhaps then they will not be hostile to the idea of noticing the pattern. To sum up, it seems to me that feminism tends to explain things in a top-down fashion and to model its enemies as being top-down as well. My guess is that making them think in "bottom-up" terms may make their thinking somewhat less rigid.
Of course, all this is an attempt to guess how a specific part of a solution (stopping feminists from trying to complicate any kind of solution) might look.
One root pattern in the set of issues (race, gender, religion) is of between-group variance attracting more attention than within-group variance.
I suspect this pattern has deeper roots than a simple neglect of variance: At least some participants seem to fully accept that a model of suffering based only on group membership may involve too much noise to apply to individuals, but still feel very concerned about the predicted group differences, and don't feel a pressing need to develop better models of individual suffering.
(BTW, this is the heart of my crit...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.