The Open Thread posted at the beginning of the month has exceeded 500 comments – new Open Thread posts may be made here.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


Anybody else think the modern university system is grossly inefficient? Most of the people I knew in undergrad spent most of their time drinking to excess and skipping classes. In addition, barely half of undergraduates earn their B.A. within 6 years of starting. The whole system is hugely expensive in both direct subsidies and opportunity costs.

I think that society would benefit from switching to computer based learning systems for most kinds of classes. For example, I took two economics courses that incorporated CBL elements, and I found them vastly more engrossing and much more time-efficient than the lecture sections. Instead of applying to selective universities (which gain status by denying more students entry than others) people could get most of their prerequisites out of the way in a few months with standard CBL programs administered at a marginal cost of $0.

9wedrifid14y
Only if I consider the modern university system (or education institutions in general) to have a primary purpose of conveying knowledge.
9SilasBarta14y
Yep. They mainly persist as a way to sort workers: those that can get through, and with a degree in X at university Y, are good enough to be trusted to job Z (even though, as is usually the case, nothing in X actually pertains to Z -- you're just signaling your general qualifications for being taken on to do job Z). Having the degree is a good proxy for certain skills like intelligence, diligence, etc. Why not test for intelligence directly? Because in the US and most industrialized countries, it's illegal, so they have to test you by proxy -- let the university give you an IQ test as a standard for admission, but not call it that. Shifting to a system that actually makes sense is going to require overcoming a lot of inertia.
2thomblake14y
I agree with this analysis to some extent. I'm not sure I'm willing to grant that the primary purpose of universities is a way to sort workers, but that is a major thing they're used for, and I tend to argue at length that they should get out of that business. I argue as much as possible against student evaluation, grading, and granting degrees. One of the first arguments that pops up tends to be, "But how will people know who to hire / let into grad school?" But I don't think it's the University's job to answer that question.
8mattnewport14y
Clearly universities are grossly inefficient at teaching, but as Robin Hanson would say, School isn't about Learning. The education system in general in most Western countries is grossly inefficient but that is largely because it is not structured in a way that rewards educating efficiently, and that is exactly how most of the participants want it.
6SilasBarta14y
Oh, and to add to my earlier comment, another major problem with the system is the difficulty of dismissing employees, which extends through most industrialized countries. This makes it much harder to take a chance on anyone, significantly restricting the set of who has a chance at any job, and thus requiring much more proof in advance. And what frustrates me the most is that most such regulations/legal environments are called "pro-worker" and the debate on them framed from the assumption that if you want to help workers you must want these laws. No, no, no! These laws make labor markets much more rigid. Remember, whatever requirement you force on employers as a surprise, they will soon take into account when looking to hire their next albatross. There's no free lunch! These benefits can only be transient and favor only people lucky enough to be working at a particular time. As time goes by, you just see more and more roundabout, wasteful ways to get around the restrictions. (Note the analogy to "push the fat guy off the trolley" problems...)
2Douglas_Knight14y
Are you talking about the US? The statistic suggests that you're talking about somewhere specific. I'll assume the US. You have several claims that are not obviously related. That's not to say that I disagree with any of them, though I probably would disagree with the implicit claims that relate them, if I had to guess what they were. One red flag is the conflation of public and private schools, which have different goals and methods. The 6 year graduation rate is really about public schools, right? But then you invoke selective schools in the last paragraph.
2knb14y
The six-year rate is a nationwide average for the United States.
2Kaj_Sotala14y
Thank you, this was a quite useful link for me. (Finnish colleges currently charge no tuition fees, and some are arguing for their introduction on the basis that this would make people graduate faster; those statistics show that US students don't really graduate any faster than Finnish ones.)
2Douglas_Knight14y
I stand by my statement.
1SilasBarta14y
Well, then I guess I'm triple special for getting a degree straight from high school in 2.5 years. In engineering. [/toots horn]
2Strange714y
I certainly agree that CBL is useful, and the system as a whole is riddled with inefficiencies and perverse incentives. However, I think a lot of the problem there is actually a matter of cultural context. Prior to entering college, those undergrads learned that drinking is something fun grownups are allowed to do, whereas listening to the teacher and doing homework are trials to be either grimly endured, or minimized by good behavior in other areas.
1Kevin14y
College is often a way for 18 year olds to delay social adulthood for 4-6 years. This American Life did a very good episode on the drinking culture at the USA's #1 party school, Penn State, that proves this point beyond a reasonable doubt. Time and time again binge drinking students say that the reason they are doing it and the reason they love Penn State is because this is the only chance in their lives they are going to have to live this lifestyle. TAL sells the MP3 of the show or it's widely available on torrent sites with a simple Google search.
0[anonymous]14y
It's very interesting that Penn State was ranked a number 1 party school, since it's probably one of America's most respected schools!
5Kevin14y
It's not that meaningful of a ranking; Penn State was anointed the #1 party school by an online poll done by the Princeton Review. It did however prove that out of all of the schools with strong school spirit and insane binge drinking cultures, the students at Penn State are the best at rigging online polls. In other words, Penn State is the #1 party school because the students decided they wanted to be considered the #1 party school.
0Daniel_Burfoot14y
I think you are confusing Penn State with the University of Pennsylvania.
0[anonymous]14y
U. Penn is also a highly respected school. Penn State is considered a Public Ivy.
0Kevin14y
Penn is more respected than Penn State, but Penn State is one of the top public schools in the USA -- #15 on US News's rather controversial list. http://colleges.usnews.rankingsandreviews.com/best-colleges/national-top-public
0RobinZ14y
Do you have a statistic to back up the 6-years figure? The graduation rate appears higher than that to me.
2knb14y
This is the figure I was referencing. 53% graduate in 6 years. Charles Murray (of The Bell Curve fame) believes that most people just aren't smart enough for college-level work. Based on my experience, "college level work" isn't very difficult, so I remain skeptical.
0Douglas_Knight14y
6 year graduation rates You're from Illinois, right? Its graduation rate of 59% is barely higher than the US average of 56%. UIUC's rate is 80%, ISU 60%, and NEIU 20%. NEIU isn't very big, but there might be lots of similar schools. (ETA: actually NEIU+CSU are already pretty close to canceling out UIUC.)
1RobinZ14y
Am I from Illinois? No, actually - Maryland. Checking the data, it seems I'm in a very strange statistical anomaly: 82% in 6 years. At a state university. No wonder my impressions were skewed.
2Karl_Smith14y
You are at the state flagship. 82% at College Park is roughly equal to Urbana-Champaign's 80%. The point is that top schools pick students who can get through and/or do a better job of getting students through.

Repost from last open thread in the desperate hope that the lack of interest was only due to people not seeing it all the way at the bottom:

I'll be in London on April 4th and very interested in meeting any Less Wrongers who might be in the area that day. If there's a traditional LW London meetup venue, remind me what it is; if not, someone who knows the city suggest one and I'll be there. On an unrelated note, sorry I've been and will continue to be too busy/akratic to do anything more than reply to a couple of my PMs recently.

0arundelo14y
Good to hear from you!
0Roko14y
If I am around I'll come. CipherGoth will probably be interested too.
0Scott Alexander14y
Why don't you PM me your phone number and/or email address and we can try to arrange something?
0Paul Crowley14y
I have email addresses for a few UK people. Mail me - paul at ciphergoth.org - and I'll send an email that copies everyone in.
3JoshuaZ14y
The results of that study seem to be a bit more complicated, in that they suggest that part of the cause of the distinction is a common belief in the population that is very close to witchcraft (where intending or desiring harm can cause harm), and thus intent matters as much as action and there isn't a clear dividing line between the two.
1Perplexed14y
A rather ironic take on Hauser's Trolley Problem.
1simplicio14y
These "act" trolley problems have the same difficulty as the original. It's so implausible that the only way to stop a runaway truck/trolley would be to make it run over a person, that one doesn't know if one's intuition is reacting against the sheer implausibility or the moral dimension. IMO, telling the subject that "pushing the fat man is the only way" is not helpful. We can't imagine ourselves in that epistemic position. The best "fat man" scenario is the Unwilling Transplant Donor, but sadly it does not have a good omission counterpart.
6JGWeissman14y
Suppose the potential organ donor is choking to death, and you have the opportunity to perform the Heimlich maneuver and save him.

Request for help: I can do classroom programming, but not "real-world" programming. If the problem is to, e.g. take in a huge body of text, collect aggregate statistics, and generate new output based on those stats, I can write it. (My background is in C++.)

However, in terms of writing apps with a graphical user interface, take input in real-time, make use of existing code libraries, etc., I'm at a loss. I'd like to know what would be a good introduction to this more practical level.

To better explain where I am, here is what I have tried so far: I've downloaded a lot of simple open source programs that have a lot of source files. But strangely, whenever I compile them myself and get them to run, it just runs on the command screen blindingly fast and then closes, as if I'm missing some important step. (How are you normally expected to compile open-source programs?)

I've also worked with graphics libraries and read a book (IIRC, Zen and the Art of Direct3D Game Programming) and was able to use that for writing algorithms that determine the motion of 3D objects, given particular user inputs, but it was pretty limited in domain.

I've downloaded Visual C# Express, which was...

8cousin_it14y
You want to do user-facing stuff? Then don't bother with desktop programming, write webapps. HTML and JavaScript are much easier than C++. You don't even have to learn the server side at first, a lot of useful stuff can be written as a standalone html file with no server support. For example you could make your own draggable interface to the free map tiles from http://openstreetmap.org - basically it's all just cleverly positioned image elements named like this, within a rectangular element that responds to mouse events. Or, if a little server-side coding doesn't scare you, you could make a realtime chat webpage. Stuff like that. If you need any help at all, my email is vladimir.slepnev at gmail and I'm often online in gtalk.
5Morendil14y
"Easy" is one goal you can have when learning to program. "Soundly written and maintainable" is another. Unfortunately these two goals are sometimes at odds. Language and platform don't really matter a whole lot, in the grand scheme of things; learning how to write maintainable programs does matter. Having had lots of experience extending or modifying source code written by others, I wish more novice programmers would make that their number one goal.
4cousin_it14y
I disagree! A (real) novice programmer's number one worry should be getting paid. Why should they divert their attention and spend extra effort on writing maintainable code, just so you have an easier time afterward? That's awfully selfish advice. You might claim writing maintainable code will pay off for them, but to properly evaluate that we need to weigh the marginal utilities. What's better, an extra hour improving the maintainability of your code, or an extra hour spent empathizing with the client? Ummm... And you can't say both of those things are first priority, that's not how it works. I've been coding for money for half of my life so listen to my words, ye lemmings: ship the thing, make the client happy, get paid. That's number one. Maintainability ain't number one, it ain't even in the top ten.

Maintainability ain't number one, it ain't even in the top ten.

What are your other nine?

6Morendil14y
(Edited) For one thing, that doesn't sound like something that's actionable for Silas in the context of his request for advice, compared to advising him to learn some specific techniques, such as MVC, which make for more maintainable code. For another, your worry should be "getting paid" after you have reached a reasonable level of proficiency. A medical student's first concern isn't getting paid, it's learning how not to harm patients. Similarly if you're learning programming, as opposed to confident enough of your chops to go on the market, you have a responsibility to learn how not to harm future owners of your code through negligent design practices. That a majority of programmers today fail to fulfill that basic responsibility doesn't absolve you of it.
5cousin_it14y
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn't have to wait and learn without getting paid, his current skill level is already in demand. But that's tangential. More importantly, whenever I hear the word "maintainability" I feel like "uh oh, they wanna sell me some doctrinaire bullshit". Maintainability is one of those things everyone has a different idea of. In my opinion you should just try to solve each problem in the most natural manner, and maintainability will happen automatically. Allow me to illustrate with an example. One of my recent projects was a user interface for IPTV set-top boxes. Lots and lots of stuff like "my account", "channels I've subscribed to", et cetera. Now, the natural way to solve this problem is to have a separate file (a "page") for each screen that the user sees, and ignore small amounts of code duplication between pages. If you get this right, it's pretty much irrelevant how crappily each individual page is coded, because it's only five friggin' kilobytes and a maintenance programmer will easily find and change any functionality they want. On the other hand, if you get this wrong... and it's really fucking distressing how many experienced programmers manage to get this wrong... making a Framework with a big Architecture that separates each page into small reusable chunks, perfectly Factored, with shiny and impeccable code... maintenance tasks become hell. And so it is with other kinds of projects too, in fact with most projects I've faced in my life. Focus on finding the stupid, straightforward, natural solution, and it will be maintainable with no effort.
3thomblake14y
I wasn't with you on the importance of maintainability until you said this. Yes, programming well and naturally is automatically maintainable.
5cousin_it14y
Right on. Another way to put it: if you have to spend extra effort on maintainability, you've probably screwed up somewhere. My name for this kind of behavior is "fetish". For example, some people have a Law of Demeter fetish. Some people have a short function fetish. And so on, all kinds of little cargo cults around. Allow me to illustrate with another example. One of my recent projects is mostly composed of small functions, but there's this one function that is three screens long. What does it do? It draws a pie chart with legend. The only pie chart in the whole application. There's absolutely no use refactoring it because it's all unique code that doesn't repeat and isn't used anywhere else in the app. Pick the colors, draw the slices, draw the legend, stop. All very clear and straightforward, very easy to read and modify. A fetishist would probably throw a fit and start factoring it into small chunks, giving them descriptive names, maybe making it a "class" with some bullshit "parameters" that actually only ever take one value, etc, etc.
6Morendil14y
Well, that's merely labeling, not actually advancing an argument. What kind of predictions are we talking about here? Where is our substantial disagreement, if any? When I talk about maintainability I'm referring to specific sequences of events. In one of the most common negative scenarios, I'm asked to make one change to the functionality of a program, and I find that it requires me to make many coordinated edits in distinct source chunks (files, classes, functions, whatever). This is called "coupling" and is a quantifiable property of a program relative to some functional change specification. "Maintainable" relative to that change means (among other things) low coupling. You want to change the pie chart to use dotted lines instead of solid inside the pie, and you find that this requires a change in only one code location - that's low coupling. Now what often happens is that someone needs a program that's able to do both dotted-line pies and solid-line pies. And many times the "most natural" thing (by which I only mean, "what I see many programmers do") is then to copy the pie-chart function, paste it elsewhere with a different name, and change the line style from solid to dotted. That copy-paste programming "move" has introduced coupling, in the sense that if you want to make a change that affects all pie charts (dotted and solid alike) you'll have to make the corresponding source change twice. Someone who programs that way is eventually going to drive coupling through the roof (by repeated applications of this maneuver). At this point the program has become so difficult to change that it has to be rewritten from scratch. Plus, high coupling is also correlated with higher incidence of defects. Now you may call a "fetishist" someone whose coding style discourages copy-paste programming, that doesn't change the fact that it is a style which results in lower overall costs for the same total quantity of "delta functionality" integrated over the life of the program.
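To make the copy-paste "move" concrete, here is a minimal Python sketch (the function names and drawing stubs are illustrative, not from either commenter's actual project):

# Stub drawing primitives so the sketch runs on its own.
def draw_slice(s, line_style):
    print("slice %s with %s border" % (s, line_style))

def draw_legend(slices):
    print("legend for %d slices" % len(slices))

# The copy-paste "move": two nearly identical functions. Any change meant
# to affect all pie charts now has to be made in both places (coupling).
def draw_solid_pie(slices):
    for s in slices:
        draw_slice(s, line_style="solid")
    draw_legend(slices)

def draw_dotted_pie(slices):  # pasted copy, one token changed
    for s in slices:
        draw_slice(s, line_style="dotted")
    draw_legend(slices)

# Lower-coupling alternative: one function, one extra parameter. A
# chart-wide change (say, a new legend layout) now touches one location.
def draw_pie(slices, line_style="solid"):
    for s in slices:
        draw_slice(s, line_style=line_style)
    draw_legend(slices)

draw_pie(["a", "b"], line_style="dotted")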
1cousin_it14y
I agree with most of your comment, except the idea that you can anticipate in what directions your software is going to grow. That's never actually worked for me. Whenever I tried designing for future requirements instead of current simplicity, clients found a way to throw me a curveball that made me go "oops, this new request screws up my whole design!" If my program ever needs a second pie chart, it's better to factor the functionality out then instead of now. Less guesswork, plus a three-screen-long function is way easier to factor than a set of small chunks is to refactor.
0Morendil14y
It's ironic that I should be suspected of claiming that. Let me reassure you on this point: we agree as well. (It's looking more and more as if we have no substantial disagreement.) My point is that the risk is perhaps lowest if you are going to add the second pie chart, but if someone else is, the three-screens-long function could be riskier than a slightly more factored version. Or not: there is no general rule involving only length. If you want to make a pastie with that function I could give you an actual opinion. ;)
1CronoDAS14y
Object-oriented design is overrated. ;)
3wnoise14y
I wouldn't say "on the job", necessarily. But it is only learned by programming, not by thinking about programming, attending lectures on programming, etc. Programming for class assignments can count for this. Well, there is some benefit to reading good code, but you have to already have a reasonable idea what good code is for that to help.
3Morendil14y
That happens to take a significant amount of skill and learning. Read a site like the Daily WTF and you see what too often comes out of letting untrained, untaught programmers do what they're naturally inclined to do. One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them. In practice you're right: people have different ideas of maintainability. That is precisely the problem.
3cousin_it14y
But I don't know of any way to acquire this "programming common sense" except on the job. Do you? Oh, no. What a terrible idea. If you do this without actually pushing through real-world projects of your own, you'll come up with a lot of bullshit "principles" that will take forever to dislodge. In general, the ratio of actual work to abstract thinking about "principles" should be quite high.
3Vladimir_Nesov14y
Open source.
0SilasBarta14y
Another case of "let them eat cake". The very gap in my understanding is the jump between writing input once/output once algorithms, to multi-resource complex-UI programs, when existing open source applications have source files that don't make sense to me and no one on the project finds it worth their time to bring me up to speed.
2wnoise14y
Between one-input, one-output programs and complex UIs are simple UIs, such as a program that loops in reading input and output, and maintains state while doing so. The complex UIs are mostly a matter of wrapping this sort of "event loop" around a given framework or UI library. Some frameworks instead have their own event loop that does this, and instead you write callbacks and other code that the event loop calls at the appropriate times.
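A minimal sketch of that kind of stateful input/output loop, in Python (the toy commands are invented for illustration):

# A tiny read-eval loop: the middle ground between one-input, one-output
# programs and full GUIs. State (a running total) persists across inputs;
# a GUI framework wraps this same shape around widgets and callbacks.
def main():
    total = 0
    while True:
        line = input("> ").strip()
        if line == "quit":
            break
        if line == "total":
            print(total)
            continue
        try:
            total += int(line)  # update state based on the input
        except ValueError:
            print("unrecognized input: " + repr(line))

if __name__ == "__main__":
    main()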
0SilasBarta14y
Thanks, that helps. Now I just need to learn the nuts-and-bolts of particular libraries.
2SilasBarta14y
Sorry for not reading the follow-up discussion earlier. What do you mean by this? How can I be hired for programming based on just what I have now? Who hires people at my level, and how would they know whether I'm lying about my abilities? (Yes, I know, interviews, but they have to thin the field first.) Is there some major job-finding trick I'm missing? My degree isn't in comp sci (it's in mech. engineering and I work in structural), and my education in C++ is just high school AP courses and occasional times when I need automation. Also, I've looked at the requests on e.g. rent-a-coder and they're universally things I can't get to a working .exe (though of course could write the underlying algorithms for).
5thomblake14y
The best 'trick' for job-finding is to get one from someone you know. I'm not sure what you can do with that. Generally speaking, there are a lot of people who aren't good at thinking but have training in programming, and comparatively not a lot of people who are good at thinking but not good at programming, and the latter are more valuable than the former. If I were looking for someone entry-level for webdev (and I'm not), I'd be likely to hire you over a random person with a master's degree in computer science and some experience with webdev.
2SilasBarta14y
Heh, that's what I figured, and that's my weak point. (At least you didn't say, "Pff, just find one on the internet!" as some have been known to do.) Thanks. I don't doubt people would hire me if they knew me, but there is a barrier to overcome.
3Morendil14y
I'm sorry to be the one to break the news to you, but the IT industry has appallingly low standards for hiring. For instance, you may be able to get a programming job without at any point being asked to produce a code portfolio or to program in front of an interviewer. I'd still be keen, by the way, to help you through a specific example that's giving you trouble compiling. I believe that when smart people get confused by things which their designers ought to have made simple, it's an opportunity to learn about improving similar designs.

A quick solution to the FizzBuzz quiz:

HAI
CAN HAS STDIO?
I HAS A VAR
IM IN YR LOOP
    UP VAR!!1
    IZ VAR LEFTOVER 15 LIEK 0?
    YARLY VISIBLE "FizzBuzz"
    NOWAI IZ VAR LEFTOVER 3 LIEK 0?
    YARLY VISIBLE "Fizz"
    NOWAI IZ VAR LEFTOVER 5 LIEK 0?
    YARLY VISIBLE "Buzz"
    NOWAI VISIBLE VAR
    KTHX
    IZ VAR NOT SMALR THAN 100? KTHXBYE
IM OUTTA YR LOOP
KTHXBYE
3AdeleneDawner14y
*in ur LessWrong, upvotin' ur memez*
1Morendil14y
For the first time here I'm having a Buridan moment - I don't know whether to upvote or downvote the above.
1AdeleneDawner14y
It might help to note that dialects - and I don't see any reason not to consider both the various kinds of 'netspeak and the various programming languages as such, in most cases of human-to-human interaction - are almost exclusively used as methods of signaling cultural affiliation. In this case, I parsed Bogus' use of 'netspeak as primarily an avoidance of affiliation with formal programming culture (which tends to linger even when programs are set out in standard English, in my experience), and secondarily a way of bringing in the emotional affect of the highly-social 'netspeak culture. It is 'mammal stuff', but it seems to be appropriate in this instance, to me.
0Morendil14y
Thanks. I was mostly kidding, but I appreciate the extra perspective. (Signalling my own affiliation as a true geek, I actually attempted to download a LOLCODE interpreter and run it on the above, but the ones I could get my hands on seem to be broken. I would upvote it if I could run it, and it gave the right answer.)
4AdeleneDawner14y
integer var
while(1) {
    ++var
    if (var % 15 == 0)
        output "FizzBuzz"
    else if (var % 3 == 0)
        output "Fizz"
    else if (var % 5 == 0)
        output "Buzz"
    else
        output var
    if !(var<100)
        return
}

Looks right to me, though I wound up reformatting the loop a little. That's most likely a result of me being in the habit of using for loops for everything, and forgetting the proper formatting for other kinds, rather than being an actual flaw in the code - I'm willing to give bogus the benefit of the doubt about it, in any case.
2gregconen14y
Pretty much. Both you and bogus apparently forgot to put an initial value into var (unless your language of choice automatically initializes them as 0). Using while(1) with a conditional return is a little bizarre, when you can just go while(var<100). Of course, my own draft used if(var % 3 == 0 && var % 5 == 0) instead of the more reasonable x%15.
2AdeleneDawner14y
Mine does, but I'm aware that it's good coding practice to specify anyway. I was maintaining his choice. Yep, but I don't remember how else to signify an intrinsically infinite loop, and bogus' code seems to use an explicit return (which I wanted to keep for accuracy's sake) rather than checking the variable as part of the loop. My method of choice would be for(var=0; var<100; ++var){} (using LSL format), which skips both explicitly returning and explicitly incrementing the variable.
3JGWeissman14y
Jeff Atwood also makes this meta point about blogging about fizzbuzz: Somehow, the other responses to this comment reminded me of that.
0RobinZ14y
Somehow, I believe this is my fault for having mentioned trying it myself, and for that, I apologize.
2wnoise14y
main = putStr . unlines $ fizzbuzz 100

fizzbuzz m = map f [1..m]
  where f n | n `mod` 15 == 0 = "FizzBuzz"
        f n | n `mod` 3 == 0 = "Fizz"
        f n | n `mod` 5 == 0 = "Buzz"
        f n = show n
1wedrifid14y
If all you have is regex s/.+/nail/

until(m/j{100}/){s/(j*)$/\1\n\1j/};
s/^(j{15})*$/fizzbuzz/gm;
s/^(j{3})*$/fizz/gm;
s/^(j{5})*$/buzz/gm;
s/^(j+)$/length($1)/gme;
print;

Warning: Do not try this (or any other perl coding) at home!
0ata14y
I think anyone who applies to a programming job and can't write this (in whatever language) deserves something worse than being politely turned down.

for i in range(1, 101):
    if i % 15 == 0:
        print 'fizzbuzz'
    elif i % 3 == 0:
        print 'fizz'
    elif i % 5 == 0:
        print 'buzz'
    else:
        print i
0RobinZ14y
I tested myself with MATLAB (which makes it quite easy) out of some unnecessary curiosity - it took me about seven minutes, a fair part of which was debugging. I feel rather ashamed of that, actually.
0RobinZ14y
As everyone else seems to be posting their code:

% FizzBuzz - print all numbers from 1 to 100, replacing multiples of 3 with
% "fizz", multiples of 5 with "buzz", and multiples of 3 and 5 with
% "fizzbuzz".
clear
clc
for i = 1:100
    fb = '';
    if length(find(factor(i)==3)) > 0
        fb = [fb 'fizz'];
    end
    if length(find(factor(i)==5)) > 0
        fb = [fb 'buzz'];
    end
    if length(fb) > 0
        fprintf([fb '\n'])
    else
        fprintf('%5.0f\n', i)
    end
end

A better program (by which I mean "faster", not "clearer" or "easier to modify" or "easier to maintain") would replace the tests with something less intensive - for example, incrementing two counters (one for 3 and one for 5) and zeroing them when they hit their respective desired factors.
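A sketch of that counter-based approach, in Python rather than MATLAB (an illustration of the idea just described, not RobinZ's actual code):

# FizzBuzz without factoring or division: bump two counters and reset
# each one when it reaches its target, as described above.
three, five = 0, 0
for i in range(1, 101):
    three += 1
    five += 1
    out = ""
    if three == 3:
        out += "fizz"
        three = 0
    if five == 5:
        out += "buzz"
        five = 0
    print(out if out else i)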
0Morendil14y
I wouldn't be; I'd take it as (anecdotal) evidence that the craft of programming is systematically undertaught. By which I mean, the tiny, nano-level rules of how best to interact with this strange medium that is code. (Recently added to my growing backlog of possibly-top-level-post-worthy topics is "how and why programming may be a useful skill for rationalists to pick up"...)
1RobinZ14y
I have to admit, I was looking up functions in the docs, too - I would have been a bit faster working in pseudocode on paper. Edit: Also, my training is in engineering, not comp. sci. - the programming curriculum at my school consists of one MATLAB course. Querying my brain for cached thoughts:
1. Programming encourages clear thinking - like evolution, it is immune to rationalization.
2. Thinking in terms of algorithms rather than problem-answer pairs - the former generalize.
5mattnewport14y
That depends on your incentive structure. You may well be right if you work as a contract programmer. If you work as a salaried employee in a large company the calculation could look different.
-1cousin_it14y
Yes, absolutely. The former path (working or contracting for many small companies) is the one I'd heartily recommend to novices. The latter path... scares me.
2murat14y
Maybe you are scared because you are aware that writing maintainable code is harder than writing code without that constraint?
3cousin_it14y
I write maintainable code anyway, and I'm friends with several people who maintain my past code and don't seem to complain. No, working at BigCo scares me because it tends to be a very one-sided activity. Employees at small companies and contractors face much more variety in what they have to do every day.
3BenAlbahari14y
Bad design choices are much more expensive to fix down the road than when they were created. You seem to be saying that any time spent addressing this issue is worthless in comparison to spending more time empathizing with the customer.
0SilasBarta14y
Thanks for the advice and generous offer of help!
4CronoDAS14y
If you're doing them in Windows, open the command prompt using "cmd" and run them from the command line. They'll run in the CMD window, which will stay open after the program finishes doing whatever it does, leaving the output visible.
3wedrifid14y
Find a specific programming problem you need (want) to solve. That, for me at least, makes the task of learning almost automatic. I (also) recommend Ruby+Rails for practical purposes. If you want to learn how to program, for example, 3D games then I have no particular recommendations. I only got as far as 2D bitbliting on that path! ;)
3Kevin14y
Try Python+Django, Ruby+Rails, or PHP+CakePHP depending on your preference, but the pragmatic difference is much smaller than language zealots pretend. If you plan on making something with millions of users, PHP is faster than Python or Ruby. Graphic programming is harder than using generated HTML for your GUI, and there seem to be a lot more real world applications with a web GUI than anything that uses local OS graphics.
3mattnewport14y
Unfortunately this about sums up the current state of 'real world' programming. It is helpful to have a concrete goal to work towards rather than merely coding for the sake of learning. Learning 'on the job' is helpful in this regard as there is usually a somewhat defined set of requirements and there is added motivation and supervision that comes with being paid to write code. If you are trying to learn on your own I'd suggest trying to set yourself the task of writing a simple program to do something fairly clearly defined and then work towards that. Simply reading through open source code (or any third party code) is not something I've found terribly helpful as a learning exercise. More useful is to set yourself the task of fixing a specific bug or adding a specific feature as this will help direct your investigation. Learning how to use the debugging tools available to you is also important. Understanding how software is put together can be greatly aided by stepping through code in a good debugger. C# is pretty good for 'real world'/GUI development. Personally I think it is the best option overall at the moment for that kind of programming but you will find language choice is a bit of a religious war issue.
4wedrifid14y
I second that recommendation for (non-web) GUI development. Even as someone who had never programmed in C# I found learning the language the simplest option when I needed to create a visual desktop application. (Of course, given that I knew both Java and C++ it wasn't exactly a steep learning curve.)
0Furcas14y
Can you recommend a tutorial on GUI development with C#?
0wedrifid14y
I'm afraid not. I just kind of winged it.
0SilasBarta14y
Well, I don't think I described it correctly. "Circuitous", I can actually handle -- I thrive on it, in fact. But e.g. setting text in a box to bold, when the package is designed to make that easy, following the book's exact instructions, and getting plain text ... that part bothers me, especially when it's followed up with all the alternate methods that don't work, etc. But it was a long time ago so I don't remember all the details. The task I was working on was to make a WYSIWYG HTML editor, but allowing redefinition and addition of tags, and adding features HTML can't currently do. (Examples: 1. A tag that adds a specified superscript to the tagged text. 2. A tag that generates an arrow that points to some other text.) I eventually hired someone to write it, but still couldn't understand from it how the code works, and the Visual C# book only touched on the outlines of this, and I ran into the problems I listed earlier. I also tried to work through some of their existing program examples, like the blackjack one, but I don't remember where that went.
2djcb14y
If you want to write UIs, Lisp and friends would probably not be the first choice, but since you mentioned it... For Lisp, you can of course install Emacs, which (apart from being an editor) is a pretty convenient way to play around with Lisp. Emacs Lisp may not be a state-of-the-art Lisp implementation, but it is certainly good enough to get started. And because of the full integration with the editor, there is instant gratification when you can use some Lisp to glue existing things together into something useful. Emacs is available for just about any self-respecting computer system. You can also try Scheme (a Lisp dialect); there is the excellent freely available Structure and Interpretation of Computer Programs, which uses Scheme as the vehicle to explain many programming concepts. Guile is a nice, free-software implementation. When you're really into a more mathematical approach, Haskell is pretty nice. For UI stuff, I find it rather painful though (same is true for Lisp and, to some extent, Scheme).
2[anonymous]14y
Most open-source programs are made to be easy to compile on Unix platforms. If you're using OS X or Linux, great; if you're on Windows, download Cygwin and you'll have a Unix environment. Given all that, read the INSTALL file; it should give you step-by-step instructions for compiling and installing. Most commonly, you run ./configure, then make, then (as root) make install. That said, platforms with package managers are really nice because you can download, build, and install many programs in a single step; Debian has APT, OS X has MacPorts and Fink, and Haskell (a programming language, not an operating system) has the Cabal. In general, if running something causes a terminal to open and immediately close, try running it on a command line instead of double-clicking it. For Windows, open Command Prompt, drag the executable onto the terminal window, and hit enter.
3arundelo14y
One way to do that is to open the "Start" menu, select "Run", type cmd, and press Enter.
1CannibalSmith14y
Got Skype, microphone, etc?
0SilasBarta14y
Yes.
1CannibalSmith14y
ಠ_ಠ ....ashdkfrflguhhhhhhhhh Debug output: when I first saw your request, I was in a very, what's the word, eager(?) mood and started writing, then realized it would be very long, then I wanted to chat and brag about coding skills, then later my mood was lower than average, and you said "yes", and I was like, groan, and... aaanyway, my Skype is cannibalsmith. If you catch me, I'll probably will be delighted to talk about programming. Yeah, so... uh...
0SilasBarta14y
:-) Thanks!
1Morendil14y
What category of app are you looking to write, narrowing down the class "app with a GUI" a little? Can you name a specific example of one you've tried to compile and run, and you've been confused at the result? One general hint is that a good way to learn how to code up significant programs from scratch is to, first, get a significant program that works and modify or extend it in some way. Also, be aware that there are several competing design philosophies when it comes to writing GUI programs, with very different outcomes in terms of maintainability and adherence to sound design principles. The "Visual" approach exemplified by the Microsoft line of tools leaves much to be desired in my experience, leading to spaghetti code too easily. I prefer approaches in which graphical components are created programmatically, and where design principles such as MVC then serve to further structure the resulting code and drive the design toward high levels of abstraction. The various Smalltalk environments are a good illustration of that philosophy.
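For readers unfamiliar with MVC, here is a toolkit-free Python sketch of the separation being described (a toy counter "app"; the class names are generic, not tied to any particular framework):

# Model: application state and logic; knows nothing about display.
class CounterModel:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

# View: rendering only; a console stand-in for a real GUI widget.
class CounterView:
    def render(self, value):
        print("count = %d" % value)

# Controller: turns user events into model updates, then refreshes the
# view. Swapping the console view for a GUI leaves the model untouched.
class CounterController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_click(self):
        self.model.increment()
        self.view.render(self.model.value)

app = CounterController(CounterModel(), CounterView())
app.on_click()  # count = 1
app.on_click()  # count = 2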
3BenAlbahari14y
Spaghetti code is primarily a function of the programmer, not the tools. This isn't to say the tools don't matter; they do; but the various competing tools each have their pros and cons, and it's a bit glib to suggest the Microsoft stack is obviously behind here. ASP.NET MVC, which you can use for web development in C#, is quite orthogonality-friendly.
0SilasBarta14y
I don't think this should matter for your answer, since it's just a barrier toward a broad class of programming I'm trying to overcome. All of them ;-) but I'll give you a specific example when I get back to my home computer. Well, that's kind of hard when they don't run even when you compile them. But on top of that, I haven't found any multi-source-file program in which it's easy to jump to just the part of the code that implements a particular feature, usually because of poor documentation.
0Psy-Kosh14y
Well, if you want to play with a Lisp, maybe consider PLT-Scheme? That one has a really nice environment, etc etc... (it's a Scheme rather than a Common Lisp, though.)
0Paul Crowley14y
I've found wxPython a relatively pleasant way to write GUI programs.

How should rationalists do therapy?

As a community, we should have resources to help people who might otherwise be helped by clerics, quacks, or psychics. We should certainly cover things like minor depression and grief at the death of a loved one.

Should we just look at what therapies have the best outcome for various situations and recommend those?

Should we use what we know about cognition to suggest new therapies? Should we make a "Grief Sequence"?

4Rain14y
When I expressed problems that I have with my life, I found that this community is not very well versed in the emotional aspect of the situation. At least, that is how I felt (heh) when they swarmed and attacked in an effort to other-optimize. I'm sure they wanted to help, but it was a very direct, blunt experience, with little regard for the difficulties inherent in the situation, or the knowledge I already possessed. "Get therapy" is a solution, but one that I've known about for a very long time. Alicorn's post on problems vs. tasks comes to mind. It felt almost tautological: "You're depressed? You should take an action which cures depression." At least, until it ended with me telling people to please stop, and getting called sad, pitiful, and a jerk.
9Morendil14y
The key observation is that, as far as I can tell, you never actually asked for help. I call the behaviour you're commenting on "inflicting help". This is a very, very common mistake that even very smart people make. One of the basic tools in a good consultant's toolkit is to be able to recognize actual requests for help, and fulfill those strictly within the bounds of what has been requested. The good news is, this is a community of people who want to be skilled at updating on the evidence. Hopefully this negative result will be counted as evidence and people here will, in future, tend to refrain from inflicting help.
0Rain14y
My favorite part was how the person who most directly insulted me was voted up (2 rating as of now), whereas my requests to stop were voted down (-1 and 0 now, both were -2). It was very strong fuel for my martyrdom complex. I actually laughed aloud.
0FAWS14y
I started writing a devil's-advocate sort of reply after reading the first link, but for the life of me I can't think of any good reason to vote "No, thank you. I'd rather suffer where I am." down in context. If I was voting based on the current score (I try my best not to do that) I'd vote it back up to 0.
3Morendil14y
To take a stab at what I know of that topic:
* offer help, but don't inflict help that isn't requested
* verify that the helpee is "serious" about using your help: help can't be for free
* an intervention is also a test of a hypothesis: update on the results
* as a corollary, effective help requires forming a theory or model of the situation
* the best way to get entangled with the situation is to listen to the "helpee"
* listening requires an open mind (i.e. often changing your mind)
* the helpee's situation is a system, with many entangled components, which can include other people
* your help and intentions in helping can become part of that system, for good or ill
* your help, intentions, approach and results should always be a legitimate topic of discussion with the helpee
* you should always be clear about why you're helping
* because of that, it's often a good idea to have someone in turn helping you help others
That's from my general approach to consulting, i.e. helping people, or more precisely "influencing people at their request". It's not specific to grief or depression counseling, and thus should perhaps be taken with a grain of salt.
1zero_call14y
Both ideas sound good. Any analysis and commentary or recommendations would be useful for people, I'm sure.
0Thomas14y
This list is a therapy already for the majority of its readers, commentators and posters.

If I have no memory of some period in my past, then should I be pleased to discover that I was happy during that period? Or is it that past experiences are valuable only through the pleasure their memories give us in the present?

You should be at least as pleased as you would be to discover that someone else was happy during that period.

3gwillen14y
There is something I find very satisfying about this answer. Possibly this is related to the fact that I like to think of people-over-time as being a succession of distinct, but closely related, identities.
0Zvi14y
Given that the other person you discover to be happy may be benefiting from the memory of that time, does that have to be true?
4grouchymusicologist14y
If you are a utilitarian, I think you should be pleased. Imagine you happened to find out that a person on the other side of the world, whose life has never and will never affect yours in any way, is happy right now. You'd be pleased about that, right? Now imagine you knew instead that that person was happy last week. Since this affects you not at all, there's no real difference between these: you're just pleased about the fact of someone's happiness at some point in time. If you buy my argument up to this point, then you may as well be pleased if that mystery person from the past was actually your own past self. And that's not even to mention Kevin's argument which does take into account the ways in which your past self influences your future self.
2BenPS14y
Here is one possible reason for being pleased to discover that one was unhappy in the past: Times of apparent unhappiness can lead to great personal growth. For instance, the hardest, most stressful time of my life was studying for my physics honors exams. However, now that the exams are over, I am glad to have both the knowledge I gained in studying, and the self knowledge that I am capable of pushing myself as hard as I did. (Would skills learned during the missing time be retained? Even if they weren't, the latter reason above would still apply). It would be devastating to lose the memory of any part of one's life, but I think there would be some satisfaction in learning that one had spent the missing time doing something difficult but worthwhile, even if one was not happy during that time.
2timtyler14y
It sounds as though you now have some information about those past events. Hopefully, it is a sign that your goals were being met during that period. Also, if you managed to learn that, maybe you will also learn something more useful about the period. So: I would say it is normally a good sign.
1RobinZ14y
I vote "pleased", for the rather weak reason that this makes my preferences time-symmetric*. * Edit: This is poorly-worded - what I was referring to was time shift symmetry.
2grouchymusicologist14y
But nothing else about the universe is time-symmetric, manifestly including our own revealed preferences -- I would rather be happy in the future but not in the past than be happy in the past but not in the future, if you gave me the choice right now. So this is the only argument I can think of to vote "not pleased" (of course, not displeased either) about one's past, but unremembered, happiness. (I actually do vote "pleased," though, for the reason I argued here.)
2RobinZ14y
I'm not sure that I'd prefer unrecalled happiness in the past to in the future, but I was thinking of (and should have named) time-shift symmetry, which the fundamental laws of physics are. I actually agree with your argument for voting "pleased", though, so we might be simply in agreement.
0grouchymusicologist14y
Well then, I'm sure that addresses my objection. But a couple of minutes' googling isn't giving me a good sense of what time-shift symmetry is -- and my physics background is lousy. Could you give me a quick definition?
1RobinZ14y
The laws of physics are invariant in time. Edit: Clarification - if you write the laws of physics, nowhere do you invoke the absolute time; only changes in time. The outcome of any experiment cannot change just because the time coordinate changes; it can only change because other parameters in the situation change.
2grouchymusicologist14y
Thanks for that.
0Jordan14y
I remember hearing that there have been some hints that physical constants have changed over time. If they have then the laws of physics wouldn't be time invariant. Anyone else recall anything along those lines? Wikipedia isn't terribly helpful.
1RobinZ14y
I have not heard of any such theory becoming a credible candidate for acceptance, although I see no logical contradiction in such - my impression is that discovering a time-varying term would be as surprising as discovering energy is not conserved. For fairly fundamental reasons, actually.
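Those "fairly fundamental reasons" are essentially Noether's theorem, which ties time-shift symmetry directly to energy conservation. A standard textbook statement of the link (added here for illustration): for a system with Lagrangian $L(q, \dot{q}, t)$, the energy function

E = \dot{q} \, \frac{\partial L}{\partial \dot{q}} - L,
\qquad
\frac{dE}{dt} = -\frac{\partial L}{\partial t}

is conserved exactly when $L$ has no explicit time dependence, i.e. when the dynamics are invariant under time shifts. A physical "constant" that varied with time would put explicit $t$-dependence into $L$, breaking time-shift symmetry and energy conservation together.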
1wnoise14y
Note that in GR defining energy consistently is tough. Doing it so it is globally conserved is even harder. We only really have local conservation, and the changing background of GR in cosmology is in some sense effectively the same thing as changing physical law.
0SilasBarta14y
Yes, they tend to be invariant in factors that don't exist ;-P
0Nick_Tarleton14y
This seems very unlikely. If the experience of remembering pleasurable events is valuable in itself, why can't other experiences be valuable in themselves?
0scav14y
Yeah why not. It is better to be pleased than not, all else being equal.
0Kevin14y
That past experience is valuable in the sense that it did not damage your psyche in the way that a traumatic experience could have.

Poll: Do you have older siblings, or are you an only child?

karma balance

Vote this up if you are the oldest child with siblings.

Vote this up if you have older siblings.

Vote this up if you are an only child.

5steven046114y
I'm pretty sure that in the general population, there are at least as many people with older siblings as there are people with only younger siblings. But in this poll, it's 6 vs 19. That looks like a humongous effect (which we also found in SIAI-associated people, and which this poll was intended to further check). I could see some sort of self-selection bias and the like, and supposedly oldest children have slightly higher IQs on average, but on the whole I'm stumped for an explanation. Anyone? ETA: Here's a claim that "it is consistently found that being first-born is particularly favourable to high levels of scientific creativity". See also this.
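As a rough check of how surprising that is: under a simplified null hypothesis where each respondent with siblings is equally likely to be oldest or to have an older sibling (the true population proportion of "has older siblings" should be at least 50%, per the above, which would only make the result more extreme), the one-sided tail probability of 6 or fewer out of 25 is under 1%:

from math import comb

# One-sided binomial tail: chance of seeing 6 or fewer "has older
# siblings" responses out of 25 under a fair 50/50 split.
n, k = 25, 6
p_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
print(p_tail)  # ~0.0073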
-47JustinShovelain14y

This is a draft of a post I'm planning to send to my everything-list, partly to invite them to join Less Wrong. I'd appreciate comments and feedback on it.

Recently I heard the news that Max Tegmark has joined the Advisory Board of SIAI (The Singularity Institute for Artificial Intelligence, see http://www.singinst.org/blog/2010/03/03/mit-professor-and-cosmologist-max-tegmark-joins-siai-advisory-board/). This news was surprising to me, but in retrospect perhaps shouldn't have been. Out of the three authors of papers I cited in the original everything-list c...

4Document14y
I'm still with Jack that pointing new readers to the entirety of the sequences is non-optimal. I'm waiting for the day when we can at least say "Start here (link) and keep clicking Next, and skim as much as you like", but you probably don't want to wait that long to send the post, so I don't know.
2RobinZ14y
It doesn't look bad to me - if you believe it would be well-received, I see no problem with sending it.
0Eliezer Yudkowsky14y
Looks fine to me.

Frank Lantz: The Truth in Game Design

Players keep complaining about the random number generators being "unfair" in games that involve randomness, so game developers have started tweaking the generators to behave according to the gambler's fallacy. Now results that are adverse to the player increase the chance of beneficial future results. Lantz notes that making game systems conform to common fallacies might not be that good an idea, since games could also be used as great teaching devices on how all sorts of complex systems really work. Of course...
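The post doesn't spell out a mechanism, but a common implementation of this kind of tweak is a "bad-luck protection" roll, where each miss raises the next attempt's chance and a hit resets it; a minimal Python sketch (the base chance and step size are illustrative assumptions):

import random

# A nominal 25% proc that creeps upward after each failure and resets on
# success. Long miss streaks become much rarer than under independent
# 25% rolls, matching the gambler's-fallacy intuition players expect.
class ProtectedRoll:
    def __init__(self, base=0.25, step=0.10):
        self.base, self.step = base, step
        self.chance = base

    def roll(self):
        if random.random() < self.chance:
            self.chance = self.base   # hit: reset to the base chance
            return True
        self.chance += self.step      # miss: raise the next chance
        return False

r = ProtectedRoll()
hits = sum(r.roll() for _ in range(10000))
print(hits)  # effective hit rate ends up well above the nominal 25%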

This Nature article ("Quantum ground state and single-phonon control of a mechanical resonator") is making headlines in various media, and seems to be about large-scale quantum superposition, but it's always hard to tell what's getting lost in translation when you're not an expert. I'd prefer to put my trust in people here who think they're qualified to comment. Anyone?

1Psy-Kosh14y
I was about to post this here. If they actually verified that the resonator was in superposition, if they actually got interference effects out of it, well, that's it then, collapse isn't just dead, it's, well... I need a word for "dead" that's more emphatic than "dead". It's... ahem... collapsed. :P (at least such is my thought.)
0prase14y
I can't access the article now, so could you explain in more detail what it implies for the quantum collapse?

Hearing that Max Tegmark joined SIAI's board reminded me of a top-level post I was thinking of doing. In it, I would present what I think is a very strong but heretofore underemphasized argument for the Mathematical Universe/Level IV Multiverse hypothesis — specifically, an argument for why it is actually a satisfactory answer to the ultimate question of why anything bothers to exist at all — particularly targeted at people who aren't familiar with it or are skeptical of it (I was in the latter category when I first learned of it, remained there for a coup...

0Vladimir_Nesov14y
I think it's an aesthetically appealing way of looking at what's going on, but that it doesn't help with understanding what's going on (or what to do with it) in any way.
0ata14y
If you're referring to the fact that it doesn't give us any useful information about the contents or laws of this universe, then I agree completely. (If I write this post, then I do intend to acknowledge that, and to discourage calling it a "theory of everything" for that reason.) Shall I take this as a "no" vote for the "would that be considered sufficiently on-topic?" question? In its defense in that respect, it could be taken as a discussion about the outer limits of what anybody/anything anywhere can understand, and aside from that, it raises some interesting questions about anthropic reasoning.
70[anonymous]14y

Is anyone familiar with a possible evolutionary explanation of the placebo effect? It seems strange to me that the body would have a limit to the degree it heals itself, and that this limit gets bypassed by the belief that one is receiving treatment.

The only explanation I could string together is that the body limits how much it heals itself because it's conserving energy/resources/whatever it might need for other things (periods of scarcity, danger, etc.) Receiving medicine sends the signal that the person is being taken care of and thus at a much lower r...

5wedrifid14y
Yes, that the original papers advocating the placebo effect were misleading in their reports, and the popularisations thereof grossly exaggerated. Placebos can be shown to reliably have an effect on:
* Experience of symptoms.
* Even more so on reports of symptoms (that is, the presence of an expectant experimenter messes with people's heads big time.)
* Psychological state.
* Things that are significantly influenced by psychological state.
The main two actual physical conditions that I can recall being genuinely altered by placebo (as opposed to being perceived to be altered) are ulcers and herpes virus (cold sores). Basically, two conditions that you more or less get from being stressed. (I am not criticising the use of placebo controls here. But I am asserting that the primary benefit from such controls is in 'balancing out' other biases rather than because of direct effect of placebos on healing.)
3Kevin14y
http://news.ycombinator.com/item?id=567913
0wedrifid14y
Now that is just freaky.
2Strange714y
A self-administered placebo might still be effective for evolutionary reasons. It would signal that a reduced activity level is related to tending your injuries, rather than, say, waiting in ambush or 'freezing' to avoid notice by motion-sensitive predators, so it's safe to divert resources toward repair or antibody production at the expense of sensory and muscular readiness. Same reason people have a hard time getting to sleep in unfamiliar circumstances, but focusing on a token reminder of home dispels the feeling.
2NancyLebovitz14y
People are very much affected by what they imagine is going on. For the unbendable arm you don't tell people to extend their arm efficiently, you have them imagine the arm extending out to infinity, or imagine the arm as a firehose. I'm not sure why any of this works -- it may have something to do with activating one's own mirror neurons, but I do think the placebo effect should be viewed as a special case rather than a thing in itself.

This will be preaching to the converted here, but worthy of note: "Odds Are, It's Wrong".

It's about the continued use of frequentist significance tests.

ETA: I've found the web site flaky today. Here's Google's cached copy.

3Richard_Kennaway14y
More -- much more! -- discussion of the article by statisticians here.
0Jack14y
This is the best introduction to the subject I've seen yet. I highly recommend it to the mathphobic.

I haven't seen this posted yet and it seems it might be of interest, from a link on Hacker News:

Odds Are, It's Wrong

Science fails to face the shortcomings of statistics

For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.

During the past century, though, a mutant form of math has deflected science’s heart

... (read more)

A survey on cryonics laws:

  1. Should it become legal for a person with a degenerative disease (Alzheimer's, etc.) to choose to be cryonically preserved before physiological death, so as to preserve the brain's information before it deteriorates further? Should a patient's family be able to make such a choice for them, if their mind has already degenerated enough that they are incapable of making such a decision, or if they are in a coma or some other unconscious or uncommunicative state?

  2. Should it become legal for a person to choose to be cryonically preser

... (read more)
3Morendil14y
Yes to 1, 2, 3, 5, 6. Undecided on 4. I've been wondering about 7 for some time now. I'm against the death penalty, but given that some countries have it, it seems so obvious that people who are now being executed should be preserved instead. The probability of a wrongful conviction being non-trivial, $30K seems like a paltry sum to invest in the possibility, however slight, of later reviving someone who was wrongfully executed. I have looked at the figures for the cost to society of the legal process leading to execution, and it is shockingly high. People on death row should at least have the option, given how much is otherwise spent on them.
1ata14y
My answers:

1. Yes and yes. I know what this is like because my grandmother spent the last years of her life with Alzheimer's, in a nursing home. When she finally died, my mom didn't cry; she explained to me that she had already done her mourning years ago. It made sense, insofar as it can ever make sense to "get over" the annihilation of a loved one: my grandmother, the person, had already effectively died long before her body did. None of us knew about cryonics at the time, and we likely wouldn't have done it even if we had known about it, but I know that people are in this situation all the time, and as awareness and acceptance of cryonics grows, people should definitely have this option.
2. I'm inclined to say that it should be discouraged but legal as an individual choice. A person could already achieve a similar (though riskier) effect by calling an ambulance, making sure their bracelet and necklace are prominently visible, and killing themselves in a relatively non-destructive way.
3. Yes. I don't know much about the pros and cons of neuro/whole-body other than the cost, but I think I'd go with the latter, to err on the side of caution.
4. Yes. I'd say it should be opt-out, or opt-in if that is absolutely necessary for getting the law passed.
5. Yes. The laws should treat it like an instance of killing someone in a coma, the laws for which are presumably the same as the laws for killing someone in general. Of course it should vary depending on whether it is accidental, negligent, or intentional.
6. I'm not quite sure. I'd think that if you cause someone's bodily death, and they are able to be preserved perfectly, then it should be treated as a non-fatal assault or accident or whatever, but something doesn't seem right about that. I think, rather than having the laws against causing someone's bodily-but-not-information-theoretic death less severe, I'd prefer to have laws against causing someone's information-theoretic death more severe.
7. I'd prefer to a

Mentally Subtracting Positive Events Improves People’s Affective States, Contrary to Their Affective Forecasts

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2746912/

Fermi's Lack-of-a-Paradox:

http://xkcd.com/718/

5Jack14y
I love how some xkcds aren't even comics or particularly funny, just hand-drawn Less Wrong posts.

Charity is not about helping:

"Even if the event’s nearly $200,000 worth of tickets sell out, less than $8,000 from the sales will go to the cause."

"No hard and fast guidelines exist on how much money raised in a benefit should go for expenses, and it is not unusual for galas to raise little money or even lose it."

"Overhead at Carnegie accounts for about one-third of the expenses. The hall costs $13,785 to rent. Then there is $6,315 for ushers; $2,300 for security; and $42,535 for stagehand labor, long recognized as a major cost o

... (read more)

This afternoon I identified a way in that I strongly need to be more rational, and I wondered if there has been anything written about it on Less Wrong.

A few hours ago, I was picking up my two children from their school. They're at a very young age so my heuristic is: near a parking lot, hang on to them.

While we were exiting the school building, another small child ran from his mother and slipped through the door between me and my youngest child. I feebly tried to grab the boy's shirt but he tugged away and then I just watched as he ran into the parking l... (read more)

4Morendil14y
If it had been me in that situation, I might have reacted pretty much as you did, because I have a heuristic to leave other people's kids alone when the parents are around. Nothing riles me quite like seeing someone else interact with my child in a bossy way, and I have noticed that others often react the same. Near a school I would expect adults (including in cars) to be more on the lookout for kids running around, and so my awareness of danger would be lowered relative to my awareness of etiquette and the rule to look after my own kids.

No, the term akrasia should be reserved for when you have already computed what you want to do, and fail to carry through with the want. What you describe seems more like a matter of doing the best with limited computing resources.

Making what in retrospect appears to be the wrong decision should, if it has not had dire consequences, be good news: you get to adjust the internal "weights" you assign to the relevant rules, and so prepare yourself for right decisions in future. Don't beat yourself up for not "thinking faster"; simply reflect on your repertoire of relevant actions in similar contexts, and perhaps try to expand it. For instance you may want to practice shouting "stop" so that it works. ;)
1RobinZ14y
It appears to me that you simply ran into a situation for which you were not prepared. If there are general rules you can implement that will work, that is good, but the only cure I can think of is anticipating and considering in advance many possible scenarios.
0NancyLebovitz14y
Let Every Breath, Systema, and Rmax International are related systems based on the idea of learning to maintain mental focus under stress. I haven't worked with them myself, but the approach seems safe and plausible, and probably at least worth investigating.

Buying someone on the internet a pizza seems to be a cheap and easy way of buying a lot of fuzzies. Behold, Mr. wongwong, the most generous man in the world.

http://www.reddit.com/r/reddit.com/comments/bd3fb/dear_reddit_can_you_order_me_a_pizza/

Poll: when making a new substantive top-level post, what kinds of summary are acceptable?

This is a checkbox poll, and therefore votes for multiple options may be entered - for each option, a separate karma balance will be offered. In the event that some important option is immediately noticed to be missing, another poster may offer an option-karma balance pair without destroying the poll.

9RobinZ14y
Formal abstract, consisting of one or a few paragraphs, indented. Karma balance.
-7RobinZ14y
8RobinZ14y
Précis, consisting of one or two sentences, italics. Karma balance.
-8RobinZ14y
7RobinZ14y
"tl;dr" summary, indicated by "tl;dr" abbreviation. Karma balance.
-5RobinZ14y
6RobinZ14y
No summary. Karma balance.
-6RobinZ14y
4Jack14y
If I think style choices are best left up to the poster I should vote for all the options?
1RobinZ14y
Yes. Edit: Elaborating in light of the downvote - if there exists some large fraction who think it isn't important enough to imply a policy on, then that will be reflected by the percentage differences between the options growing small.

QALYs and how they are arrived at. "Quality Adjusted Life Years" are the measure used by UK drug approval bodies in deciding which treatments to approve. They aim to spend no more than £30,000 per QALY.
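To make the threshold arithmetic concrete, here is a minimal Python sketch; the treatment figures are hypothetical, and only the £30,000-per-QALY threshold comes from the comment above.

    # Minimal sketch of a cost-per-QALY check. The treatment numbers are
    # hypothetical; only the 30,000 GBP threshold comes from the text above.
    THRESHOLD_GBP_PER_QALY = 30_000

    def qalys_gained(extra_years, quality_weight):
        # QALYs = life-years gained x quality weight (0 = dead, 1 = full health)
        return extra_years * quality_weight

    def cost_per_qaly(cost_gbp, extra_years, quality_weight):
        return cost_gbp / qalys_gained(extra_years, quality_weight)

    # Hypothetical treatment: 50,000 GBP buys 2 extra years at quality 0.7.
    ratio = cost_per_qaly(50_000, 2, 0.7)
    print(round(ratio))  # ~35,714 GBP per QALY, over the threshold
    print("approve" if ratio <= THRESHOLD_GBP_PER_QALY else "reject")  # reject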

Has anybody else wished that the value of the symbol pi were doubled? It becomes far more intuitive this way -- it may even affect the uptake of trigonometry in school. This rates up there with declaring the electron's charge negative rather than positive.

4RobinZ14y
I read an argument to that effect on the Internet, but I don't have any strong feelings - maybe if I were writing a philosophical conlang I would make the change, but not normally. You may as well argue for base four arithmetic.
0Jack14y
Huh. Would that actually be easier? I always figured ten fingers...
4JGWeissman14y
I figure each finger can be up or down, 2 states, so binary. And then base 16 is just assigning symbols to sequences of 4 binary digits, a good, manageable compression for speaking and writing. (When I say I could count something on one hand, it means there are up to 31 of them.)
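A quick Python sketch of the counting scheme described above, treating each finger as one bit; the thumb-first ordering is an arbitrary assumption, not part of the comment.

    # Binary finger counting: each finger is one bit.
    # The thumb-first ordering is an arbitrary choice.
    def hand_value(fingers_up):
        # fingers_up: sequence of 0/1 flags, least significant finger first
        return sum(bit << i for i, bit in enumerate(fingers_up))

    print(hand_value([1, 0, 0, 0, 0]))      # thumb only -> 1
    print(hand_value([1, 1, 1, 1, 1]))      # one full hand -> 31
    print(hand_value([1, 1, 1, 1, 1] * 2))  # both hands -> 1023
    print(hex(1023))                        # base 16 groups the bits: '0x3ff'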
2RobinZ14y
1. Fewer symbols to memorize.
2. Smaller multiplication table to memorize.
3. Direct compatibility with binary computers.

The cost in number length is not large - 3*10^8 is roughly 1*4^14 - and the cost in factorization likewise - divisibility by 2, 3, and 5 remains simple; only 11 becomes difficult. If you want to argue from number of fingers, though, six beats ten. ;)
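A sketch checking the number-length claim; to_base is a helper written for this example, not a standard-library function.

    # Check that 3*10^8 is not much longer in base 4 than in base 10.
    def to_base(n, base):
        digits = []
        while n:
            n, r = divmod(n, base)
            digits.append("0123456789abcdef"[r])
        return "".join(reversed(digits)) or "0"

    n = 3 * 10**8
    print(len(str(n)))         # 9 digits in base 10
    print(len(to_base(n, 4)))  # 15 digits in base 4 (roughly 1 * 4^14)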
0Alicorn14y
I could see eight, but why six?
2RobinZ14y
Six works because you don't need a figure for the base. Thus, zero to five fingers on one hand, then drop all five and raise one on the other to make six. (Plus, you get easy divisibility by seven, which beats easy divisibility by eleven.) Edit: Binary, the logical extension of the above principle, has the problem that the ring finger and pinky have a mechanical connection, besides the obvious 132 (decimal) issue. ;) I don't see how eight comes in, though.
2Alicorn14y
Eight would be if you counted your fingers with the thumb of the same hand.
0RobinZ14y
I see - I count by raising fingers, so that method didn't occur to me.
1blogospheroid14y
There are websites dedicated to making Base 12 the standard. Same principle as making Base 6. See Nature's Numbers and the Dozenal Society. Simplest explanation: it's possible to divvy 12 up into more whole fractions than the number 10.
1Singularity733714y
I don't see myself with ten fingers as a posthuman anyway.
3rortian14y
e^(pi*i) = -1

Anything else: lame.
2Singularity733714y
Uh, how is e^(pi*i) = 1 lame?
1dclayh14y
Maybe because e^0 = 1?
3simplicio14y
Well, making pi = 2pi would just mean the complex exponential function repeats itself every pi radians instead of every 2pi radians. e^0 would still equal 1 in either case. Note that with the current definition, e^(j*2*pi*n) = 1 for any integer n.
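A quick numerical check of that periodicity using Python's cmath (Python conveniently already writes the imaginary unit as j):

    import cmath

    # e^(j*2*pi*n) = 1 for any integer n; renaming pi would only relabel
    # where the function repeats, not the value of e^0.
    for n in range(3):
        print(cmath.exp(1j * 2 * cmath.pi * n))  # ~(1+0j) up to rounding

    print(cmath.exp(0))  # (1+0j) in either convention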
0wnoise14y
e^(2*Pi*i) - 1 = 0. Hah. I fit in more numbers.
3zero_call14y
No. This is nowhere near the metric vs. English units debate. (If you want to talk about changing units, you should put your weight on that boat instead, as it's much more of a serious issue.) Pi is already well defined, anyways. It's defined according to its historical contextual meaning, regarding diameter, for which the factor of 2 does not appear.
6Sniffnoy14y
Pi is well-defined, yes, and that's not going to change. But some notation is better than others. It would be better notation if we had a symbol that meant 2pi, and not necessarily any symbol that meant pi, because the number 2pi is just usually more relevant. There's all sorts of notation we have that is perfectly well-defined, purely mathematical, not dependent on any system of units, but is not optimal for making things intuitive and easy to read, write and generally process. The gamma function is another good example. I really fail to see why metric vs. english units is much more serious; neither metric nor english units is particularly suggestive of anything these days. Neither is more natural. The quantities being measured with them aren't going to be nice clean numbers like pi/2, they're going to be messy no matter what system of units you measure them with.
0Singularity733714y
What about the gamma function is bad? Is it the offset relation to the factorial?
1Sniffnoy14y
Yeah. It's artificially introduced (why the s-1 power?) and is basically just confusing. Gamma function isn't really something I've had reason to use myself, so I'm just going on the fact that I've heard lots of people complain about this and never anyone defending it, to conclude that it really is as dumb as it looks.
1Douglas_Knight14y
The t^(s-1) in the gamma function should be thought of as the product of t^s dt/t. This is a standard part of the Mellin transform. The dt/t is invariant under multiplication, which is a sensible thing to ask for since the domain of integration (0,infinity) is preserved by scaling, but not by the translations that preserve dt. In other words, dt/t = d(log t) and it's telling you to change variables: the gamma function is the Laplace (or Fourier) transform of exp(-exp(u)).
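Writing out that change of variables explicitly (a standard step, included here for convenience): substituting t = e^u, so that dt/t = du and (0, infinity) maps to the whole real line,

    \Gamma(s) = \int_0^\infty t^{s-1} e^{-t}\,dt
              = \int_0^\infty t^{s}\, e^{-t}\, \frac{dt}{t}
              = \int_{-\infty}^{\infty} e^{su}\, e^{-e^{u}}\, du,

which, up to the sign convention in the exponent, is the two-sided Laplace transform of exp(-exp(u)), as the comment says.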
2Thomas14y
http://www.math.utah.edu/~palais/pi.pdf
2simplicio14y
One can dream. :) Pi relates to diameter; it'd be much nicer if it related to radius directly instead. Personally, I want to replace the kg in the mks system with a new symbol and name: I want to go back to calling it the "grave" (as it was called at one time in France), having the symbol capital gamma. Then we wouldn't have the annoying fact of a prefixed unit as a basic unit of the system.
2RobinZ14y
Embarrassingly, my first reaction was to think, "how about cgs units? Those don't use kilograms!"
1simplicio14y
Hehehe. Cgs units... it really amuses me that it seems to be astronomers who like them best. Of course, if we were really uber-cool, we'd use natural units, but somehow I can't see Kirstie Alley going on TV talking about how she lost 460 million Planck-masses on Jenny.
0Sniffnoy14y
Definitely. 2pi appears so much more often than pi.
0wnoise14y
Meh. 2 Pi shows up a lot, but so does Pi, and so does Pi/2. I think I'd rather cut it in half, actually, as fractions are more painful than integer multiples.
7Sniffnoy14y
Think about the context here, though. Having a symbol for 2pi would be much more convenient because it would make things consistent. 2pi is the number that you typically cut into fractions. Let's say we define, say, rho to mean 2pi. Then we have rho, rho/2, rho/3, rho/4... whereas with pi, we have 2pi, 2pi/2, 2pi/3, 2pi/4... the problem is those even numbers. Writing 2pi/4 looks ugly, you want to simplify, but writing pi/2 means that you no longer see the number "4" there, which is what's important, that it's a quarter of 2pi. You see the "2" on the bottom so you think it's half of 2pi. It's a mistake everyone makes every now and then - seeing pi/n and thinking it's 2pi/n. If we just had a symbol for 2pi, this wouldn't occur. Other mistakes would, sure, but as commonly as this one does? If we were to define, say, xi=pi/2, then 4xi, 2xi, 4xi/3, xi, 4xi/5... well, that's just awful.
1zero_call14y
What? Like who, 6th graders?
5LucasSloan14y
I find that unfair. I have made the mistake Sniffnoy describes many times, all of them after I was in 6th grade.
2wedrifid14y
Easy solution. Pi is half a circle. Pie is the whole one. Then there is a smooth transition from grade 3 to university.
2thomblake14y
I've been looking for a good thing to call 2*Pi - this might cut it.
0wedrifid14y
Nice one! ;)
2Sniffnoy14y
No, like anyone who isn't watching out for traps caused by bad notation. It's much easier to copy down numbers than it is to alter them appropriately. If you see "e^(pi i/3)", what stands out is the 3 in the denominator. Except oops, pi actually only means half a circle, so this is a sixth root of unity, not a third one. Part of why I like to just write zeta_n instead of e^(2pi i/n). Sure, this can be avoided with a bit of thought, but thought shouldn't be required here; notation that forces you to think about something so trivial, is not good notation.
0wnoise14y
omega_n is the notation I most often run across.
0Sniffnoy14y
Hm, I've generally just seen omega for zeta_3.
0wnoise14y
I've certainly used it for that -- but I pattern it with dropping the subscript n, when it is clear that there is only one particular root of unity we're basing off of. I've never ever seen zeta used.
3sketerpot14y
Pi/3 shows up a lot as well. If you halve pi, then you'd have to write that as 2*pi/3, which is more irritating still.

An amusing view of charity and utility, as told by Monty Python: Merchant Banker. I was trying to remember what thought experiment it reminded me of, but I couldn't find it...

This is totally irrelevant, but I just had to share it.

I use the Tony Marloshkovips system for memorizing numbers, such as phone numbers, Social Insurance Numbers, physical constants, product codes at the grocery store, etc. It's very handy.

Anyway, I had to identify myself with my SIN today on the phone for loan purposes. But there was no record of the number in their database. I repeated it - still wrong. I finally got through by telling the chap on the phone my date of birth.

Turns out the number I was telling him was the speed of light in m/s (299 792 4... (read more)
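For anyone curious: "Tony Marloshkovips" appears to be a mnemonic name for the classic Major system, whose consonants encode the digits 1234567890. Assuming that standard digit-to-consonant mapping, here is a minimal Python sketch (the helper is written for this example):

    # Major-system digit -> consonant-sound mapping (standard assignment;
    # vowels carry no digits and are added freely to form memorable words).
    MAJOR = {"0": "s/z", "1": "t/d", "2": "n", "3": "m", "4": "r",
             "5": "l", "6": "sh/ch/j", "7": "k/g", "8": "f/v", "9": "p/b"}

    def consonant_skeleton(number):
        return " ".join(MAJOR[d] for d in str(number))

    # The speed of light, 299792458 m/s, as in the anecdote above:
    print(consonant_skeleton(299792458))
    # -> n p/b p/b k/g p/b n r l f/v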

An interesting dialogue at BHTV about transhumanism between cishumanist Massimo Pigliucci and transhumanist Mike Treder. Pigliucci is, among other things, blogging at Rationally Speaking. This BHTV dialogue is partly a follow-up to Pigliucci's earlier blog-post, "the problems with transhumanism". As I (tonyf, July 16, 2009 8:29 PM) commented then, despite the title of his blog-post, it was more of a (I think) misleading generalisation from an article by some Munkittrick than an actual study of the "transhumanist" community that was the basis fo... (read more)

4zero_call14y
No, Pigliucci agrees that it might be possible to get an intelligence (e.g., that passes the Turing test) through the computer system. He just does not think that you can call it a human intelligence. He thinks the concept of "mind uploading" is silly because the human mind (and intelligence) is therefore fundamentally different from this computer mind. He also argues that the human mind is inseparable from the biological construction. I have to admit I am not surprised that this argument is coming from a biologist. To a physicist or an engineer, almost all problems and constructs are computational, and it's just a matter of figuring out the proper model. As a biologist, it is more difficult to see how living entities follow similar sorts of fundamental rules. In objecting to the computational theory of mind, Pigliucci objects to the computational theory of reality, and in essence, he contradicts himself. He reveals himself to be a dualist. I think he is confusing the mathematical or logical abstraction of a system (not dualistic) with the physical or material abstraction (dualistic).
0zero_call14y
Good link. Question: In one part of the discussion, Pigliucci mentions that we know how chess players seem to think (and it's not at all like chess-playing computer programs). Does anyone have any good references about how chess players think?

I'd appreciate some feedback on a brain dump I did on economics and technology. Nothing revolutionary here. Just want people with more experience on the tech side to check my thinking.

Thanks in advance

http://modeledbehavior.com/2010/03/11/the-economics-of-really-big-ideas/

0timtyler14y
Re: "Already we have computer programs which can re-write existing to programs to run faster. These programs can also re-write themselves to run faster. However, they cannot rewrite themselves to become better at re-writing themselves faster." You mean that they can't do that alone? Refactoring programs help speed up their own development, and make it easier and faster to make improvements in a set of programs that often includes their own source code. It's not total automation - but partial automation is still very significant progress.
0Karl_Smith14y
Tim, thanks - input like this helps me try to think about the economic issues involved. Can you talk a little about the depth of recursion already possible? How much assistance are these refactoring programs providing? Can the results be used to speed up other programs, or can they only improve their own development, etc.?
0timtyler14y
To quote from my essay relating to this: "Refactoring: Refactoring involves performing rearrangements of code which preserve its function, and improve its readability and maintainability - or facilitate future improvements. Much refactoring is done by daemons - and their existence massively speeds up the production of working code. Refactoring daemons enable tasks which would previously have been intractable." * http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/ Refactoring programs are indispensable for most application programmers in Java, and other machine readable languages. They are of limited use for C/C++ because of preprocessor mangling. When refactoring hit the mainstream in Eclipse, years ago, many programmers found their productivity increased dramatically, and they also found they could easily perform refactorings that would have been practically impossible to perform manually. Refactoring is a fairly general tool. I am not sure about your "recursion" question. Modeling this as some kind of recursive function that bottoms out somewhere does not seem particularly appropriate to me. Rather, it represents the partial automation of programming. Similarly, unit tests are the automation of testing, and compilers are the automation of assembly. Computer programming and software development have many places where automation is possible, and the opportunities are gradually being taken up.
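As a toy illustration of the kind of function-preserving rearrangement being described, here is an "extract method" refactoring sketched in Python; the code is invented for illustration, not taken from the essay.

    # Before: the computation is buried inline.
    def report_before(prices):
        total = round(sum(prices), 2)
        return "total: " + str(total)

    # After an "extract method" refactoring: behavior is unchanged,
    # but the shared computation now has a name and can be reused.
    def _total(prices):
        return round(sum(prices), 2)

    def report_after(prices):
        return "total: " + str(_total(prices))

    assert report_before([1.25, 2.5]) == report_after([1.25, 2.5])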
0RobinZ14y
It looks correct to me, but I'm not an experienced judge of such things.
[-][anonymous]14y20

One career path I'm sort of musing about is working to create military robots. After all, the goals in designing a military robot are similar to those in designing Friendly AI: the robot must know somehow who it's okay to harm and what "harm" is.

Does this seem like a good sort of career path for someone interested in Friendly AI?

If you work on AGI and you make actual progress, then you have a moral obligation to keep it away from people who can't be trusted with it. You cannot satisfy this obligation while working for a military or a military contractor.

I'm not an expert, but I don't think there is much more overlap with FAI than other domain AI projects have. The problems for military robots probably are more of the machine vision kind than of the meta-ethics kind.

9cousin_it14y
Am I the only one to think that no, creating military robots isn't a "good career path" towards friendly AI, because creating military robots is inherently unfriendly to humanity? Especially if you live in the US and know that your robots will be used in aggressive wars against poorer countries. It's some kind of crazy ethical blindness that most Americans seem to have for some reason, where "our guys" are human beings, but arbitrarily chosen foreigners deserve whatever they get... Just like this incident I saw on HN when one guy asked about career prospects working for the occupation force in Iraq, and another answered that it'll be an "amazing and unique experience". You'll note my reply there was much more concise.

It's some kind of crazy ethical blindness that most Homo sapiens seem to have for some reason, where "our guys" are human beings, but arbitrarily chosen foreigners deserve whatever they get

Fixed it for you.

And the reason is evolved psychological instincts with pretty obvious selection benefits.

0FAWS14y
I don't think that's an accurate correction. Because America is the current hegemonic power, Americans can get away with feeling that other nations aren't "real" in the sense the USA is. For example, when considering some hypothetical situation that would concern the whole planet, an American might only consider how the USA would react, while anyone else in the same situation would, in addition to the reaction of their own nation, at the very least also have to consider how the USA reacts, and might even consider other nations, since their situation is more obviously symmetrical to their own.
2Jack14y
I'm afraid I don't know what this means. There might be pragmatic realities that force non-Americans to consider the reactions of foreigners more than Americans must. Americans have two oceans and the world's strongest military to keep a lot of foreign troubles far away; other people do not. But this isn't evidence that Americans care less about foreigners than those from other countries do. It sounds like you're talking about a political blindness instead of an ethical blindness. Besides, there is equally good reason to think America's hegemonic status makes Americans more worried about foreign goings-on, since American lives and American business concerns are more often at stake.
-1FAWS14y
Not "real" is the best description I have. You could say having the same sort of attitude towards other nations you might have towards Oz, Middle Earth or the Empire from Star Wars even though you intellectually know that they really exist, but that only comes close to what I mean. I must stress that not all Americans have this attitude, but some seem to do, and thats enough to influence the discourse. I was thinking more of e. g. first contact situations in SF stories and things like that, not necessarily normal international politics, but I think it extends to all fields: Domestic politics (the amount and the kind of consideration the fact that a policy seems to work well somewhere else gets), pop culture, sports, science, language learning, wherever one might consider other nations Americans have more leeway not to do so. This doesn't by necessity have to extend to ethical considerations, but when cousin_it observes that it appears to it seems inappropriate to me to "correct" that out.
4Jack14y
Exactly zero evidence has been presented that Americans have this ill-defined attitude at a higher rate than non-Americans. No reason given to think this is the case on balance. The obvious and straightforward interpretation of cousin_it's comment was that he was referring to American nationalism: a real and quite common phenomenon in which Americans don't give a lick about people who don't live in their country (in civilized places this is referred to as racism). I've met plenty of people with this view. It is a disgusting and immoral attitude. That said, it is a near ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We're good at it. Really good. We do it like it's our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He's been commenting elsewhere throughout this conversation anyway. (Not my downvote, btw)
5Clippy14y
Yes! Thank you! Finally, a human user says what I've been trying to say all along! (See for example here.) On my first visit to Earth (or perhaps the first visit of one of my copies before a reconciliation), my reaction was (translated from the language of my logs): "The Alpha species [i.e. humans] inflicts disutility on its members based on relative skin redness. I'm silver. Exit!"
2FAWS14y
While all you say about nationalism is true, it's not obvious to me that it explains what cousin_it was talking about, at least not to its full extent. Degradation of other people through nationalism usually evokes hate ("those damned X!"), while the linked comment seemed too cheerful for that; it's not like it encouraged anyone to "help show it to those stinkin' Arabs" or anything like that. As if the fact that someone might be hurt simply didn't occur to them. There has been plenty of that in other historical cases of nationalism, but I think usually only in similarly asymmetrical situations. Nationalism in symmetrical situations seems to be of the plain hate kind.
3Jack14y
Nationalism almost always displays as willful ignorance or apathy about the condition of those outside the nation. It's nation-centrism, in other words. Hatred is an extreme case (thus the moniker "ultra-nationalism"). This just isn't true. At all. I'm not even sure where you would get it. There are nationalists all around the world who do not express hate toward other nations, even in cases of power symmetries. More importantly: Why are we arguing about this? Cousin_it isn't some old philosopher or public intellectual who we can't reach for clarification. If he wants to correct my understanding of his comment let him do it.
1cousin_it14y
Sorry for taking so much time to reply. FAWS is right, I'm not saying Americans hate foreigners. It's more like a blindness or deafness. See my link above to the "amazing and unique experience" guy. The ethical angle of the situation simply doesn't occur to him, it's as if Iraqis were videogame characters. America's fighting an aggressive war and killed umpteen thousand people?... uh, okay man, I got a career to advance and I wanna go someplace exotic, like expand my horizons and shit. I've never heard anything like that from Russians or anyone else except Americans, though I'd be the first to agree that we Russians are quite nationalistic.
0FAWS14y
The original disagreement wasn't about the term nationalism (and I never claimed that nationalism didn't explain it, only that what you said about nationalism up to that point didn't), so you seem to be arguing my point here: For the reasons I described it's easier for Americans to be "ignorant about the condition of those outside the nation". You can't keep hurting someone and not even notice you do in a symmetrical conflict because they will hurt you back, and then you will want revenge in turn. You seem to be of the opinion that you can't even coherently/rationally (?) think a certain thing and I disagree. That disagreement is independent of the question whether anyone had actually been thinking that. EDIT: Nation-centrism is close to what I meant with not feeling that other nations are "real".
0Jack14y
"willful" ignorance... Do we really need to spend time distinguishing nationalism from the fact that the US gets the NBA? So what you want to claim is that asymmetrical conflict is more likely than symetrical conflict to lead to people in one country being ignorant of the animosity against them in the other country. This is plausible though several counterexamples come to mind and I'm not sure it applies since a large portion of American nationalists appear to conceive of the conflict as a symmetrical one (this has been a minor issue in American politics, of course). I'm not sure I see how this issue relates to nationalism exactly and what it's relevance is. But as you can see below I'm not sure I understand what you're claiming at this point. WHAA? This is incredibly vague and confusing. I honestly have no idea what you're talking about.
-1FAWS14y
And the fact that you neither need to make any significant sacrifices nor engage in double-think doesn't make willful ignorance easier? Not really. The term nationalism is unhelpful. There seem to be at least two kinds: the we're-great-don't-care-about-anyone-else nation-centric one, and the unite-against-the-enemy-us-or-them kind. My point is that being a hegemonic power facilitates the nation-centric kind. The sub-point is that a hot symmetric conflict turns nationalism into the second kind pretty much by necessity, even if it started out as the first kind. An asymmetric conflict of course allows either kind in the stronger party; presumably that's what your counter-examples show. Presumably you detected a feature that made the post knowably correctable. If that feature wasn't an incoherent or irrational (in light of further evidence you have available) opinion, what was it?
2mattnewport14y
That sounds like nationalism rather than racism to me. The country you live in has only a loose correlation with the colour of your skin. If people favoured countries which had a strong majority of people of a particular ethnicity that might be evidence for racism.
3Jack14y
I was speaking loosely in the parenthetical. Nationalism has a strong tendency to manifest as racism and racism has a similar tendency to manifest as nationalism. They're highly correlated but yes, conceptually distinct.
1FAWS14y
Because I thought it would be obvious enough. Americans are less likely to learn foreign languages; most Americans don't even have a passport; it's easier to write a science paper without referencing any non-American research (not that I think this is done at a significant rate, but the equivalent would be unthinkable elsewhere); foreign movies are generally either ignored or remade (and set in the USA if possible); foreign trade is a smaller percentage of GDP than in just about any other developed nation; it's possible to "buy American" for a greater range of products than the equivalent anywhere else; America has the top leagues for the sports it cares about. (It's not just that America cares for different sports than the rest of the world: for almost all countries, the top level of the sport that country cares most about is at least in part played elsewhere, so a soccer fan in e.g. Romania has to pay attention to the English Premier League, the Spanish Primera División, etc. [And even the English and Spanish fans have incentive to pay attention to each other's league, because they are at roughly equal level and the top teams regularly play each other.] If America cared about soccer, the top league would be there, so Americans still wouldn't have any reason to pay attention to foreign sports.)
3thomblake14y
I think most of those things could be expected regardless of whether America has any such putative hegemonic status. Most Americans don't have passports because they can't afford to travel to another continent, and the number is rising now that passports are required to visit other countries in North America. Getting a passport in the US is a fairly annoying, expensive process, so I'm not surprised most people haven't bothered. Ditto with the foreign languages - most Americans don't meet or talk to people who don't speak American. I haven't been able to find a source online - do most Chinese people speak foreign languages and have passports? Are they required?
6FAWS14y
Getting a passport is a bother everywhere; the point is that Americans don't really need a passport because their country is huge, rich and powerful, and they can take a vacation in whatever climate they like without ever leaving their borders. People in other developed nations would have to make much greater sacrifices to never travel abroad. That's exactly my point! They can do that without missing all that much, unlike most of the planet. IIRC compulsory foreign language instruction (mostly in English) starts in third grade, and many educated Chinese learn a third/fourth language later. For many Chinese, Mandarin is effectively an L2 language, so they know their native dialect, Mandarin and some English. The state of English learning is mostly horrible and only a minority can communicate effectively, but I'd think that Chinese on average speak better English than non-native-speaker Americans speak Spanish, and the difficulty is much greater. I'm not all that clear about the passport situation/foreign travel, and China is a bad example anyway because it is itself an enormous country and very "nation-centric", but a huge number of Chinese study abroad, while there is no comparable reason for Americans to do so because they already have many of the most prestigious universities.
0FAWS14y
Again, why the down-vote? Is there any factual error or is giving evidence when asked not welcome here?
1FAWS14y
Why was this voted down? Was there anything in this post that isn't either objectively true (Americans have more leeway to ignore other nations) or clearly marked as speculation ("seem to")? Is it inherently irrational to consider the hypothesis that cousin_it's observation was meant exactly as stated, and then to speculate about what might be behind this observation?
7Rain14y
"War is bad, the military industrial complex is evil," sounds good, and it hits all the right emotional buttons (care for humanity, etc.), but it is not necessarily true when all of the costs and benefits are taken into account. A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack. Destruction of infrastructure can open the way for rebuilding into a far better environment, and massive war spending can push the boundaries of technology. Reshaping political landscapes can cause huge culture shifts through decades which may result in much more open, and better, societies. Suffering is terrible; death is abhorrent; and the benefits are uncertain enough, they should not be used as arguments to start an otherwise preventable war. But I do not see how we can appropriately judge the complex results of "war in general" on the timeline of decades or centuries. What I can certainly agree with is that contributing to the military is bad on the margins, since it's already getting more than its share of resources thanks to others of a more bloodthirsty bent.
1cousin_it14y
At this point I laughed with a kind of sad laugh. Everyone who thinks America will use military robots for self-defense, raise your hands! On the other hand, you've made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I'm sure you didn't mean to.
2Rain14y
They will use them for defense as well as for offense. I've seen several articles already of American cities ready to purchase military drones for law enforcement purposes, and I would be very surprised if they were not also added to strategic military bases within America to defend against potential attackers. At the very least, when countries are making strategy decisions that may involve the military, the mere existence of drones will serve as a deterrence. My point was to state the necessity of defense. If there are strong, warlike countries with military drones, such as the United States, then other countries had better start developing countermeasures to protect themselves. That, or ally themselves with the strong country in the hopes of falling under their protection rather than their ire. As such, staying ahead of the other countries is a valid strategy. And I would certainly agree that US aggressiveness is stifling those very things in Iraq, Afghanistan, Iran, etc. The word 'fear' was poorly chosen. I was thinking more of what happened to Tibet and all those pacifists when they failed to muster an appropriate military defense: actual invasion and displacement or destruction.

I can picture in my mind a world without war, a world without hate. And I can picture us attacking that world, because they'd never expect it.

-- Jack Handey's Deep Thoughts

0thomblake14y
Oddly I don't seem to have a reference handy, but several US cities already use robots in law enforcement. iRobot and Foster-Miller really took off after the success of their robot volunteers at the WTC.
7Rain14y
How much harm do you contribute by working to enable military robots? How much harm do you contribute by paying taxes to the US government, part of which are used to fund military robots? How much harm do you contribute by existing, living in the US, and absorbing a huge amount of electricity and other natural resources?
4Rain14y
Well, that was voted down pretty rapidly :) However, I was being honest with my questions. I'd like to know what sort of utilon adjustments people assign to these different situations, even if it's just a general weighting like 'high' or 'low'.
2Kevin14y
My decision to not work for the military industrial complex is all about fuzzies, not utilons.
6wedrifid14y
It can be useful to separate 'fuzzies' from 'practical benefit' but they can both be considered sources of utilons.
0AdeleneDawner14y
As I see it, it's less about how much harm those specific things do, and more about how viable the alternatives are. I expect that all governments make tax avoidance/evasion difficult, and I suspect that paying taxes to any government will support a military. The lifestyle changes involved in actually living sustainably (as opposed to being 'slightly better than the US average' or applying greenwash) seem pretty significant and possibly unattainable for most of us, as well. (I could be wrong on the latter in a general sense; I haven't looked into it, since I'm already relatively sure that it's beyond what I, personally, could manage.) Given that Warrigal was asking about the career move, though, I expect that he does have other viable options that could be pursued without completely turning his life upside down, and that's a significant difference between this decision and the other two.
0wnoise14y
Costa Rica's constitution forbids a military, and they seem to mean it, though one can quibble about whether their police count. http://en.wikipedia.org/wiki/Military_of_Costa_Rica
0Rain14y
How viable, given that you want to live in relative comfort and ease. But if a true valuation is made, then perhaps that should not be taken as given, considering the costs.
0RobinZ14y
I have not assigned numbers - it is not a simple question.
0[anonymous]14y
I live in Russia and have refused numerous invitations to migrate to the US.
4thomblake14y
There are various arguments that building military robots is bad, but I don't think you've touched on any good ones. When you look at how unreliable human soldiers are on the field, creating military robots just seems like an obvious way to make things better for everyone involved. Fewer American casualties because we're using robots, and fewer civilian casualties because the robots are better at not shooting at civilians. Also, FWIW, most military robots currently aren't the sort that shoot people - they do things like look around corners, draw fire, perform aerial surveillance, and detect/defuse bombs.
0cousin_it14y
This is ironic. I wrote: Then you wrote: This happens to pixel-perfectly demonstrate my point about ethical blindness. Reread my quote again, then your quote, then mine, then yours again. Notice anything wrong? Anything missing? You see, you omitted one pretty important group: everyone America calls "enemy combatants". If you think all of them are bad people and deserve to die, then you obviously don't get it. Repeat after me: America Starts Aggressive Wars. Then say it again because it's true and truth won't suffer from repetition. Say it as many times as you need to make it sink in, then come back and we will resume this discussion.
7thomblake14y
America will be killing those people with or without robots. We already have ways of wiping all of the enemy combatants off the map if we want to (for example nukes). Military technology is primarily about finding ways to 1) kill fewer of our own soldiers and 2) kill fewer people who aren't enemy combatants.
6jimrandomh14y
Not necessarily. All else equal, the less it costs to wage a war (in money, American lives, and good will), the more likely leaders are to actually start one.
2FAWS14y
Ignoring the question whether that's desirable or not (politics is the mindkiller) reducing the cost of killing those people will lead to more of those people killed in marginal situations where such considerations matter.
1thomblake14y
Yes, that's one of the good arguments against robot soldiers I mentioned above. We're more likely to not care about the fate of our robot soldiers, and so would be less hesitant to send them into battle. Though it's still an open question whether that effect would trump any increased monetary cost per soldier (if any) and whether the other benefits outweigh such concerns. Human soldiers perform horribly in terms of following the rules of war, and beyond that do absolutely horrible things sometimes.
4thomblake14y
Also, this is definitely not the place to debate this, and you have to know a lot of people won't agree with you, so stop with the flamebait.
1wnoise14y
You don't even have to go as far as "America Starts Aggressive Wars" -- "Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered." Look, I get the "Politics is the Mind Killer" mantra, and I agree that it would be fruitless to start a debate about something like abortion here -- it comes down to definitions and conventions about what is moral. But when something is actually, demonstrably, true, refusing to look at and examine the truth because it is painful to do so is not compelling. It doesn't even trigger most of the reasons in "politics is the mindkiller" -- both major U.S. political parties are just fine with most of the examples. The only two teams that can credibly be put in opposition here are "U.S.A." and "Everyone else".
2Jack14y
It is worth noting that to complete the argument someone needs to show that America starting aggressive wars is bad. The people starting such wars, it turns out, have their reasons.
-1CronoDAS14y
[half-ironic] Yep. Some countries are just in desperate need of a good ol' fashioned ass-kicking. [/half-ironic]
1cousin_it14y
Why flamebait? I stated a very well-known fact. http://en.wikipedia.org/wiki/Bay_of_Pigs_Invasion http://en.wikipedia.org/wiki/Operation_Power_Pack http://en.wikipedia.org/wiki/Operation_Urgent_Fury http://en.wikipedia.org/wiki/Operation_Just_Cause More here: http://en.wikipedia.org/wiki/CIA_sponsored_regime_change ETA: to tell the truth, until I dug up that last Wikipedia page just now for purposes of argument, I still had no clear idea how much this happened. And give these people autonomous killer robots? In the name of developing Friendly Intelligence?
6SilasBarta14y
1) Politics is the mind killer, 2) Agree denotationally but not connotationally
2Jack14y
Bay of Pigs? Really? How about nailing us on the Philippines while you're at it. :-) It isn't like there aren't recent examples to choose from.
1thomblake14y
That's why. Folks will disagree that's something that the US does, and pointing to things the US might have done decades ago won't convince them. There's no way to even debate this point without going down a potentially mind-killing rabbit hole, and I find it hard to believe you weren't aware of this when you posted it. In case you weren't aware of it: I live in the US, and I've talked to a number of ordinary folks and a number of scholarly folks about it, and I don't tend to encounter people who would grant that the US starts aggressive wars. You should be able to see why someone who thinks that would be angry and vocal about the accusation.
1cousin_it14y
Ooh... I thought we were having a factual disagreement. I apologize. Maybe this won't work as flamebait here :-)
2JGWeissman14y
Creating military robots can be friendly, if: Lbh fryy gur ebobgf gb nyy fvqrf, ercynpvat uhzna nezvrf, naq unir gurz evttrq gb abg npghnyyl svtug rnpu bgure, ohg vafgrnq gnxr njnl gur rssrpgvir cbjre bs gur tbireazragf gung jnagrq nyy gur jnef. (Rot13)
0cousin_it14y
Unfortunately, this isn't a realistic option if you're an employee at a big military contractor, which is the most likely scenario...
0JGWeissman14y
Well, yeah, there is no way someone at standard human level would pull off what happened in that story.
5Peter_de_Blanc14y
The difference between specialized FAI and general FAI is like the difference between adaptation executors and fitness maximizers. It's a big difference.
0FAWS14y
Is specialized FAI even a meaningful term? ISTM that to implement actual friendliness even in a specialized application an AI needs capabilities that imply AGI.
2Peter_de_Blanc14y
It's a nonstandard term that seemed appropriate to the discussion. By specialized FAI, I mean an AI that reliably does the thing it was made to do in a specific context.
2FAWS14y
Isn't that the same as specialized AI? I don't think anybody deliberately makes specialized AIs that don't work.
5SilasBarta14y
Sounds like a good idea, but here are my reservations/warnings: 1) For the kind of work you describe, you would probably need a high-level security clearance and continued scrutiny on your life (to make sure you don't share it with the wrong people), and you probably wouldn't be able to publicly discuss your work. (i.e., where SIAI can hear it.) 2) What are your chances you'll actually get to work on the aspect of the problem that relates to Friendliness?

The scrutiny isn't so bad. They're mainly looking for illegality or potential for corruption. And even if you've committed illegal acts, so long as you own up to it, and it wasn't in the recent past (5 to 7 years), it's generally OK. Felonies are a different matter, of course.

Getting a secret clearance involves an interview, fingerprinting, interviews of family, friends, and neighbors, and a credit check, and will likely require drug testing. Top secret clearances and above lead to polygraphs and heavy grilling, with monitoring for new developments. They're renewed every few years, going through the process again.

Most of the military drone programs would be given to one large contractor like Lockheed Martin or NGIT, with lots of smaller subcontractors. A security clearance at secret level or above takes up to 9 months, costs the company over $10,000, and adds that much or more to that person's annual salary potential, so it's not something they hand out lightly.

Most contracting agencies put a small, already-cleared team on the activities that require it, and farm out most of the work (documentation, mundane code, etc.) to people without clearances. If they need more people with clearances, they tend to get temporary waivers for the duration of the work (90 days or less, for example). Most only see a small part of the whole, and you don't choose your projects; your company does.

These are not good environments to learn complex, high-level things like Friendliness.

7SilasBarta14y
It wasn't so much the background scrutiny I'm worried about so much as, "Alright, it's been fun doing this research on human-level intelligent robots. Oh, hey, I'm going to go to an AI conference in Shanghai..." "Hahahahahaha! Good one! Um ... were you being serious?"
0Rain14y
Yeah, that could get you in big trouble.
3SilasBarta14y
Yep. And so could the appearance on the internet of an e-book about "How to build a human-level armed android, by Warrigal", when Warrigal has worked at such a job. And if you go to a potentially hostile country without telling them ... well, I guess you'll get the option of a PMITA federal prison, or solitary.
5Vladimir_Nesov14y
No. FAI is about figuring out how to implement precise preference, not an approximation of it appropriate for non-magical environments. Requires completely different tools. It seems that to work on FAI, one has to become mathematician and theoretical computer scientist (whatever the actual career).
2[anonymous]14y
What do you mean by "non-magical environments"?
3Vladimir_Nesov14y
I gave a link! A non-magical environment gives limited expressive power, so there are few surprising situations that given heuristics don't capture. With enough testing and debugging, you may get your weakly intelligent robot to behave. Where more possibilities are open, you have to get preference exactly, or the decisions will be obviously wrong (see The Hidden Complexity of Wishes).
2RobinZ14y
Your terminology was unclear but this definition is not - I would tend to call it an "organic" environment.
1Kevin14y
I have very little in the way of morality, but I personally draw the line at supporting the military industrial complex. I don't think helping the military make robots that make kill decisions themselves has much to do with provable mathematical Friendliness.
1wedrifid14y
It seems you are morally obliged to at least investigate possible mechanisms for tax evasion. But then, morality doesn't have all that much to do with consequences.
-1Kevin14y
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate. Also, I draw a distinction between something I am comfortable doing, and the likely future progress of society as a whole. Killer robots aren't going away anytime soon, and except for the extra wars they will allow us to have, killer robots result in fewer US deaths and more effective military tactics than troops on the ground. I expect that US killer robots will be making kill decisions, or at least very strong kill suggestions that are followed 99% of the time, within 10 years. There's just too much data coming in too fast for a single human operator to be able to process. If the African totalitarians are still around in 25 years, the possibility of being conquered by an army of killer robots may make them more amenable to internationally monitored elections. So good and bad things will come about as a result of the killer robot armies of the future. It's really the military industrial complex as a whole I object to; robots making kill decisions is one of the less objectionable things within the military industrial complex.
3mattnewport14y
Uh, that's a pretty dumb thing to say. For one, starting a startup and selling it has rather broader consequences than a typical tax avoidance strategy. That's like suggesting moving to a third world country to cut down on your daily living expenses - your food and accommodation costs may indeed decrease but it significantly changes your life in all kinds of other ways as well. For another this would not be tax evasion but tax avoidance which has the rather significant difference of being entirely legal.
2Kevin14y
I'm fully aware of the distinction; I was playing with the ambiguous distinction between evasion and avoidance (as you say, the distinction being that avoidance is legal) by using the language of the person I replied to. I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
0mattnewport14y
I assumed wedrifid knew the difference and was suggesting you were morally bound to evade rather than merely avoid taxes if you draw the line at supporting the military industrial complex. I don't necessarily agree with that but I took that to be his point. I would have thought that maximizing tax avoidance is something that any aspiring rationalist ought to be doing as a matter of course. The fact that you can go to jail for tax evasion seems like a pretty profound difference from tax avoidance to me. The whole tax structure is 'just' the definitions given by the rule of law.
3RobinZ14y
I don't particularly want to avoid taxes, either - I like living in a country with a government.
3Kevin14y
I like living in a country with a government compared to Somalian anarchism, but not compared to libertarian utopia. This is getting close to politics.
1RobinZ14y
As good a reason as any to drop the subject of tax avoidance.
0Kevin14y
Yes, Less Wrong could use some sort of Godwin's law analog, where a thread is declared dead or at least discouraged once it hits politics.
2mattnewport14y
I think the general consensus is that we tread carefully when straying into political territory and tend to avoid explicitly political (certainly party political) discussion but that we don't entirely avoid discussion that has a political dimension. Taken to an extreme that would seem to preclude most topics of any interest or significance. Generally the standard of discourse is fairly high here and political slanging matches are avoided. And I still don't consider it a political point that you basically fail at instrumental rationality if you overpay on your taxes.
1mattnewport14y
I don't see the contradiction. The government creates the tax code with at least the stated intention of encouraging or subsidizing certain behaviours over others. That only works if people respond rationally to the incentives.

From the individual rationalist's point of view one should aim to optimize one's resources. In the context of taxes that generally means arranging your financial affairs to minimize the taxes paid without breaking the law. You can then choose how to best meet your own goals by allocating the money you save as you see fit.

It is only rational to not avoid taxes if you either believe the effort required to avoid them is not worth the money saved or if you believe that the optimal use of the money is to give it to the government. It seems unlikely in the latter case that the optimal amount to give to the government just happens to be the very amount they take from you, so you should probably be voluntarily donating a larger portion of your income to the government. If you live in the US you should go here.
5orthonormal14y
Since we were talking about choice of career among other things, it's worth stating that your actual incentive here more closely resembles "maximizing your after-tax income" than "minimizing your taxes paid".
0mattnewport14y
True - I was focusing slightly more narrowly on minimizing your tax burden at your current income level, without making major changes in your career, country of residence, etc. On a longer timescale, or in the context of broader life goals, you are aiming to maximize your after-tax income rather than minimize the taxes you pay.
2Kevin14y
I don't think I'm morally bound to evade taxes, for the same reason I'm not morally bound to stop the world's massive amounts of animal suffering: my utility function breaks if I take my morality too seriously. As you say, I am somewhat morally bound to try to evade taxes, or even to actively stage an insurrection against my government. Both of those seem like very bad ideas, as the state will just crush me. Not working for the government, in lieu of trying to bring down the government, is similar to my decision to eat less meat rather than trying to make the whole world eat less meat. Yes, I am aware that these are not anywhere close to perfectly analogous decisions.
1rwallace14y
I'd say yes, go for it. The value would be in gaining experience in designing AI systems that have to work in the real world -- a very different proposition from systems that only have to work in the laboratory or in the imagination.
0Mitchell_Porter14y
Cognitive neuroscience and cognitive psychology are far more relevant. A Friendly AI is a moral agent; it's more like a judge than a cruise missile. A killer robot must inflict harm appropriately but it does not need to know what "harm" is; that's for politicians, generals, and other strategists. We have to extract the part of the human cognitive algorithm which, on reflection, encodes the essence of rational and moral judgment and action. That's the sort of achievement which FAI will require.
0thomblake14y
The problems involved in creating ethical military robots are vastly different from those involved in general AI. Ron Arkin's Governing Lethal Behavior in Autonomous Robots does a good job of describing how one should think when building such a thing. Basically, there are rules for war, and the trick is to just implement those in the robot, and there's very little judgement left over. To hear him explain it, it doesn't even sound like a very hard problem.
2SilasBarta14y
Then I'm not sure he understands the problem. How does the robot tell the difference between an enemy soldier and a noncombatant? When they're surrendering? When they're dead/severely wounded? The rules of war themselves are fairly algorithmic, but applying them is a different story.
3thomblake14y
Well there's a bit of bracketing at work here. Distinguishing between an enemy soldier and a noncombatant isn't an ethical problem. He does note that determining when a soldier is surrendering is difficult, and points out the places where there really is an ethical difficulty (for example, someone who surrenders and then seems to be aggressive).
0Daniel_Burfoot14y
This is a good question, I would appreciate more discussion of it on LW. I am wondering about similar issues: my research involves computer vision, the most obvious applications of which are for surveillance and security. One does not need to be a science fiction author or devotee to imagine powerful computer vision tools or military robots being used for evil.
4Hook14y
Whether something can be used for evil or not is the wrong question. It's better to ask "How much does computer vision decrease the cost of evil?" Many of the bad things that could be done with CV can be done with a camera, a fast network connection, and an airman in Nevada, just as many of the good medical applications can be done by a patient postdoc or technician.
1RobinZ14y
Better still is to ask, "What are the benefits and harms of doing this rather than something else, including cascading consequences on to the indefinite future?" Which, of course, is murderously hard to answer in cases this far removed from direct consequences. Which is what I meant when I said computer vision research was not distinguished. Although upon consideration I would weaken the claim to "not strongly distinguished", which might still be enough to justify doing something else.
0RobinZ14y
People can use anything for evil if they want - I don't see how computer vision is distinguished on that metric.
3cousin_it14y
You just succumbed to the fallacy of gray. Computer vision is more easily used for evil than e.g. water purification technology.
0RobinZ14y
Fair enough.

This is from the friendly AI document:

Unity of will occurs when deixis is eliminated; that is, when speaker-dependent variables are eliminated from cognition. If a human simultaneously suppresses her adversarial attitude, and also suppresses her expectations that the AI will make observer-biased decisions, the result is unity of will. Thinking in the third person is natural to AIs and very hard for humans; thus, the task for a Friendship programmer is to suppress her belief that the AI will think about verself in the first person (and, to a lesser exte

... (read more)
1PhilGoetz14y
Actually this may be a better link. Part of the problem is that 3rd-person representations have extensional semantics. If Mary Doe represents her knowledge about herself internally as a set of propositions about Mary Doe, and then meets someone else named Mary Doe, or marries John Deer and changes her name, confusion results.

A more severe problem becomes apparent when you represent beliefs about beliefs. If you ask, "What would agent X do in this situation?", and you represent agent X's beliefs using a 3rd-person representation, you have a lot of trouble keeping straight what you know about who is who, and what agent X knows about who is who. If you just put a tag on something and call it Herbert, you don't know if that means that you think the entity represented is the same entity named Herbert somewhere else, or that agent X thinks that (or thought that).

An even more severe problem becomes apparent when you try to build robots. Agre & Chapman's paper on Pengi is a key paper in the behavior-based robotics movement of the 1990s. If you want to use 3rd-person representations, you need to give your robot a whole lot of knowledge and do a whole lot of calculation just to get it to, say, dodge a rock aimed at its head. Using deictic representations makes it much simpler.

We could perhaps summarize the problem by saying that, when you use a 3rd-person representation, every time you use that representation you need to invoke, or at least trust, a vast and complicated system for establishing a link between the functional thing being represented and an identity in some giant lookup table of names. Whereas often you don't care about all that, and it's nothing but an opportunity to introduce error into the system.
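A toy illustration of the contrast, as I read it (the dictionary structures and keys below are hypothetical illustrations, not from the Pengi paper):

```python
# Third-person (extensional) style: beliefs keyed by a global name.
# This breaks when names collide or change (two Mary Does, a marriage),
# and nesting "what agent X believes about who is who" compounds the problem.
beliefs_3p = {
    ("Mary Doe", "employer"): "Acme",
    ("Herbert", "location"): "warehouse",  # *which* Herbert? In whose naming?
}

# Deictic (indexical) style: beliefs keyed by a functional role relative
# to the agent, so no global identity lookup is needed.
beliefs_deictic = {
    ("the-person-I-am-talking-to", "employer"): "Acme",
    ("the-rock-flying-at-my-head", "heading"): "toward-me",
}

# A reflex can key directly off the role, with no name resolution at all:
if beliefs_deictic.get(("the-rock-flying-at-my-head", "heading")) == "toward-me":
    print("dodge!")
```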

The prince of one hundred thousand leaves is, among other things, a sort of fictionalized open-source project for horrifying eutopias. It might provide useful insights about that which we are least willing to consider.

[-][anonymous]14y00

Nanotech robots deliver gene therapy through blood

http://www.reuters.com/article/idUSTRE62K1BK20100321

Einstein's Gravity Confirmed on a Cosmic Scale

http://news.nationalgeographic.com/news/2010/03/100310-einstein-theory-general-relativity-gravity-dark-matter-proof/

or

Confirmation of general relativity on large scales from weak lensing and galaxy velocities

http://www.nature.com/nature/journal/v464/n7286/full/nature08857.html

Has there been any activity on the Craigslist charity idea? If people are pursuing it, is there someplace to post updates, or an email list to join?

Spirit on the Brain is a blog and a book in progress by Geoffrey Falk about the neurophysical sources of religion, which will make interesting reading for anyone wanting to know about the aetiology of the religious pathology.

I have a program that estimates the chances that one gene has the same function as another gene, based on their similarity. This is estimated from the % identity of amino acids between the proteins, and from the % of the larger protein that is covered by an alignment with the shorter protein.

For various reasons, this is done by breaking %id and %len into bins, eg 20-30%id, 30-40%id, 40-50%id, ... 30-40%len, 40-50%len, ... and estimating a probability for each bin that two proteins matched in that way have the same function.

What I want to do is to reduce the... (read more)
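For concreteness, a minimal Python sketch of the setup as described; the bin edges, sample data, and add-one smoothing are illustrative assumptions, not details of the actual program:

```python
import bisect
from collections import defaultdict

# Hypothetical calibration data: (%id, %len, same_function) per protein pair.
pairs = [
    (72.0, 85.0, True),
    (34.0, 55.0, False),
    (48.0, 91.0, True),
]

EDGES = [20, 30, 40, 50, 60, 70, 80, 90, 100]  # illustrative 10%-wide bins

def bin_index(value, edges=EDGES):
    """Index of the bin [edges[i], edges[i+1]) containing value, clamped."""
    i = bisect.bisect_right(edges, value) - 1
    return min(max(i, 0), len(edges) - 2)

# Count same/different outcomes per (id-bin, len-bin) cell.
counts = defaultdict(lambda: [0, 0])
for pct_id, pct_len, same in pairs:
    cell = (bin_index(pct_id), bin_index(pct_len))
    counts[cell][0 if same else 1] += 1

def p_same(cell):
    """Per-cell estimate, with add-one smoothing so sparse cells stay off 0/1."""
    same, diff = counts[cell]
    return (same + 1) / (same + diff + 2)

print(p_same((5, 6)))  # the cell containing the 72%id, 85%len pair
```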

0wnoise14y
To make sure I'm interpreting this correctly: the calibration data is a list of pairs of genes, along with their %id and %len, tagged as either "same function" or "different function"? And currently these are binned, and the probabilities estimated from the statistics known in that bin? You want to change this - in particular, reduce the number of bins. Before we get to "how", may I ask why you want to do this? It doesn't seem as if it would reduce the computational cost. It would up the number of samples and possibly get a better discrimination, but at the same time it spreads the gene pairs being compared against over larger regions of parameter space, meaning your inference is now based more on genes that should have less relevance to your case...
0PhilGoetz14y
Yes. Not enough samples for a large number of bins. Ideally I'd use a different method that would use the numbers directly, in regression or some machine-learning technique. I may do that someday. But there are institutional barriers to doing that.
2wnoise14y
So would I, but that would be a research project.

There is no direct Bayesian prescription for the best way of binning, though the motto of "use every scrap of information: throw nothing away" implies to me that the proper thing to do is minimize the information left out once we know the bin. A bin is most informative if the statistics of the bin have the least entropy. So select a binning that does this, and obeys whatever other reasonable constraints you want, such as being contiguous, or dividing directly into 9 by the cross product of 3 on each axis.

A natural measure of the entropy is just -p log p - (1-p) log (1-p), where p is the revealed frequency, but it's not the right one. I'm going to argue that instead we want to use a different measure of entropy: that of an underlying posterior distribution. This is essentially what information we're still lacking once we have the bin. For no prior information, and data of the counts, this is a Beta distribution, with parameters of the number in the bin judged to be the "same" + 1, and the number judged to be "different" + 1. There is an entropy formula in the Wikipedia article. EDIT: be careful about signs though; it appears to be the negative of the entropy currently.

Because we're concerned about the gain per gene pair, each bin's entropy should naturally be weighted by how often it comes up -- that is, the number of samples in the bin (perhaps +1).

Does this seem like a reasonable procedure? Note that it doesn't directly maximize differing bins getting differing predictions. Instead it minimizes uncertainty in each bin. In practice, I believe it will have the same effect. A slightly more ad-hoc thing to try would be minimizing the variance in each bin, rather than the entropy.
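A sketch of this scoring procedure in Python, assuming SciPy's betaln and digamma for the standard Beta-distribution differential entropy (with the sign handled correctly), and weighting each bin by its raw sample count; the "perhaps +1" variant would use n + 1 instead, and the function names are mine:

```python
from scipy.special import betaln, digamma

def beta_entropy(a, b):
    """Differential entropy of Beta(a, b):
    ln B(a,b) - (a-1)*psi(a) - (b-1)*psi(b) + (a+b-2)*psi(a+b)."""
    return (betaln(a, b)
            - (a - 1) * digamma(a)
            - (b - 1) * digamma(b)
            + (a + b - 2) * digamma(a + b))

def binning_score(bin_counts):
    """Sample-weighted posterior entropy of a candidate binning.

    bin_counts: iterable of (n_same, n_diff) per bin.  Each bin's
    Beta(n_same + 1, n_diff + 1) posterior entropy is weighted by the
    bin's sample count; a lower total means a more informative binning.
    """
    return sum((s + d) * beta_entropy(s + 1, d + 1) for s, d in bin_counts)

# Choose the candidate binning with the lowest score, e.g.
# (counts_for is a hypothetical helper that tallies same/diff per bin):
# best = min(candidates, key=lambda edges: binning_score(counts_for(edges)))
```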
2PhilGoetz14y
You know what's funny - My bosses have a "research project bad" reaction. If I say that fixing a problem requires finding a new solution, they usually say, "That would be a research project", and nix it. But if I say, "Fixing this would require changing the underlying database from Sybase to SQLite", or, "Fixing this would require using NCBI's NRAA database instead of the PANDA database", that's easier for people to accept, even if it requires ten times as much work.
2wnoise14y
Doing some simulations on a similar problem (1-d, with p(x) = x), I'm getting results indicating that this isn't working well at all. Reducing the entropy by means of having large numbers in one bin seems to override the reduction in entropy by having the probabilities be more skewed, at least for this case. I am surprised, and a bit perplexed.

EDIT: I was hoping that a better measure, such as the mutual information I(x; y) between bin (x) and probability of being the same function (y), would work. But this boils down to the measure I suggested last time: I(x; y) = H(y) - H(y|x). H(y) is fixed just by the distribution of same vs. not. H(y|x) = - sum_x p(x) ∫ p(y|x) log p(y|x) dy, so maximizing the mutual information is the same as minimizing the measure I suggested last time.
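Written out cleanly (just a restatement of the formulas in the EDIT above):

```latex
\begin{aligned}
I(X;Y) &= H(Y) - H(Y \mid X),\\
H(Y \mid X) &= -\sum_{x} p(x) \int p(y \mid x)\,\log p(y \mid x)\, dy .
\end{aligned}
% H(Y) depends only on the overall same/different frequencies, not on the
% binning, so maximizing I(X;Y) over binnings = minimizing H(Y|X).
```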
0PhilGoetz14y
What's the similar problem? "1-d, with p(x) = x" doesn't mean much to me. It sounds like you're looking for bins on the region 1 to d, with p(x) = x. I think that if you used the -p log p - q log q entropy, it would work fine.
0wnoise14y
"1-d", meaning one-dimensional. n bins between 0 and 1, samples drawn uniformly in the space X = [0,1], with probability p(x) = x of being considered the same.
0PhilGoetz14y
That's a good idea. I'm glad you said that, since that was what I immediately thought of doing. I'll read up on the beta distribution, thanks!
2wnoise14y
I still think it's not a great choice, though clearly my other choices haven't worked well. But please do try it. Given that the probability is a continuous distribution, the Fisher information might instead be a reasonable thing to look at. For a single distribution, maximizing it corresponds to minimizing the variance, so my suggestion for that wasn't as ad-hoc as I thought. I'm not sure the equivalence holds for multiple distributions.
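A minimal reconstruction of the 1-d toy setup described above, assuming equal-width bins (which candidate binnings were compared isn't stated); it only shows the per-bin posterior estimates recovering p(x) = x, not the full binning-selection experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_samples=10_000, n_bins=10):
    """x ~ Uniform[0,1]; each sample is 'same' with probability p(x) = x."""
    x = rng.uniform(0.0, 1.0, n_samples)
    same = rng.uniform(0.0, 1.0, n_samples) < x
    bins = np.minimum((x * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        n, k = int(mask.sum()), int(same[mask].sum())
        # Posterior mean of Beta(k+1, n-k+1); should track the bin midpoint.
        print(f"bin {b}: n={n:5d}  p_hat={(k + 1) / (n + 2):.3f}  "
              f"midpoint={(b + 0.5) / n_bins:.3f}")

simulate()
```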

I shared this argument against cryonics here, but cyphergoth, the original poster of that thread, noted that he prefers that discussion to focus on the technical feasibility of cryonics. This is not my actual opinion, just my solution to an intellectual puzzle: how can a rational person skip cryonics, even if he believes in its technical feasibility?

Let us first assume that I don't care too much about my future self, in the simple sense that I don't exercise, I eat unhealthy food, etc. Most of us are like that, and this is not irrational behavior: We simply... (read more)

From the guy who brought us the Creative Commons license:

http://www.fixcongressfirst.org/

[-][anonymous]14y00

If you want to write UIs, Lisp and friends would probably not be the first choice, but since you mentioned it...

For Lisp, you can of course install Emacs, which (apart from being an editor) is a pretty convenient way to play around with Lisp. Emacs Lisp may not be a state-of-the-art Lisp implementation, but it is certainly good enough to get started. And because of the full integration with the editor, there is this instant gratification when you can use some Lisp to glue existing things together into something useful. Emacs is available for just about any self... (read more)