Open Thread: March 2010, part 2
The Open Thread posted at the beginning of the month has exceeded 500 comments – new Open Thread posts may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Anybody else think the modern university system is grossly inefficient? Most of the people I knew in undergrad spent most of their time drinking to excess and skipping classes. In addition, barely half of undergraduates earn their B.A. within six years of starting. The whole system is hugely expensive in both direct subsidies and opportunity costs.
I think that society would benefit from switching to computer based learning systems for most kinds of classes. For example, I took two economics courses that incorporated CBL elements, and I found them vastly more engrossing and much more time-efficient than the lecture sections. Instead of applying to selective universities (which gain status by denying more students entry than others) people could get most of their prerequisites out of the way in a few months with standard CBL programs administered at a marginal cost of $0.
Yep. They mainly persist as a way to sort workers: those who can get through, and with a degree in X at university Y, are good enough to be trusted with job Z (even though, as is usually the case, nothing in X actually pertains to Z -- you're just signaling your general qualifications for being taken on to do job Z).
Having the degree is a good proxy for certain skills like intelligence, diligence, etc. Why not test for intelligence directly? Because in the US and most industrialized countries, it's illegal, so they have to test you by proxy -- let the university give you an IQ test as a standard for admission, but not call it that.
Shifting to a system that actually makes sense is going to require overcoming a lot of inertia.
I agree with this analysis to some extent. I'm not sure I'm willing to grant that the primary purpose of universities is a way to sort workers, but that is a major thing they're used for, and I tend to argue at length that they should get out of that business. I argue as much as possible against student evaluation, grading, and granting degrees. One of the first arguments that pops up tends to be, "But how will people know who to hire / let into grad school?"
But I don't think it's the University's job to answer that question.
Oh, and to add to my earlier comment, another major problem with the system is the difficulty with which you can dismiss employees, which extends through most industrialized countries. This makes it much harder to take a chance on anyone, significantly restricting the set of who has a chance at any job, and thus requiring much more proof in advance.
And what frustrates me the most is that most such regulations/legal environments are called "pro-worker" and the debate on them is framed from the assumption that if you want to help workers you must want these laws. No, no, no! These laws make labor markets much more rigid.
Remember, whatever requirement you force on employers as a surprise, they will soon take into account when looking to hire their next albatross. There's no free lunch! These benefits can only be transient, and they favor only people lucky enough to be working at a particular time. As time goes by, you just see more and more roundabout, wasteful ways to get around the restrictions. (Note the analogy to "push the fat guy off the trolley" problems...)
Clearly universities are grossly inefficient at teaching, but as Robin Hanson would say, School isn't about Learning.
The education system in general in most Western countries is grossly inefficient but that is largely because it is not structured in a way that rewards educating efficiently, and that is exactly how most of the participants want it.
Only if I consider the modern university system (or education institutions in general) to have a primary purpose of conveying knowledge.
Are you talking about the US? The statistic suggests that you're talking about somewhere specific. I'll assume the US.
You have several claims that are not obviously related. That's not to say that I disagree with any of them, though I probably would disagree with the implicit claims that relate them, if I had to guess what they were. One red flag is the conflation of public and private schools, which have different goals and methods. The 6 year graduation rate is really about public schools, right? But then you invoke selective schools in the last paragraph.
The six-year rate is a nationwide average for the United States.
Thank you, this was a quite useful link for me. (Finnish colleges currently charge no tuition fees, and some are arguing for their introduction on the basis that this would make people graduate faster; those statistics show that US students don't really graduate any faster than Finnish ones.)
I stand by my statement.
Well, then I guess I'm triple special for getting a degree straight from high school in 2.5 years. In engineering. [/toots horn]
College is often a way for 18-year-olds to delay social adulthood for 4-6 years. This American Life did a very good episode on the drinking culture at the USA's #1 party school, Penn State, which proves this point beyond a reasonable doubt. Time and time again, binge-drinking students say that the reason they are doing it, and the reason they love Penn State, is that this is the only chance in their lives they are going to have to live this lifestyle.
TAL sells the MP3 of the show or it's widely available on torrent sites with a simple Google search.
I certainly agree that CBL is useful, and the system as a whole is riddled with inefficiencies and perverse incentives.
However, I think a lot of the problem there is actually a matter of cultural context. Prior to entering college, those undergrads learned that drinking is something fun grownups are allowed to do, whereas listening to the teacher and doing homework are trials to be either grimly endured, or minimized by good behavior in other areas.
How should rationalists do therapy?
As a community, we should have resources to help people who might otherwise be helped by clerics, quacks, or psychics. We should certainly cover things like minor depression and grief at the death of a loved one.
Should we just look at what therapies have the best outcome for various situations and recommend those?
Should we use what we know about cognition to suggest new therapies? Should we make a "Grief Sequence"?
To take a stab at what I know of that topic:
That's from my general approach to consulting, i.e. helping people, or more precisely "influencing people at their request". It's not specific to grief or depression counseling, and thus should perhaps be taken with a grain of salt.
When I expressed problems that I have with my life, I experienced that this community is not very well versed in the emotional aspect of the situation. At least, that is how I felt (heh) when they swarmed and attacked in an effort to other-optimize. I'm sure they wanted to help, but it was a very direct, blunt experience, with little regard for the difficulties inherent in the situation, or the knowledge I already possessed.
"Get therapy" is a solution, but one that I've known about for a very long time. Alicorn's post on problems vs. tasks comes to mind. It felt almost tautological: "You're depressed? You should take an action which cures depression." At least, until it ended with me telling people to please stop, and getting called sad, pitiful, and a jerk.
The key observation is that, as far as I can tell, you never actually asked for help.
I call the behaviour you're commenting on "inflicting help". This is a very, very common mistake that even very smart people make. One of the basic tools in a good consultant's toolkit is to be able to recognize actual requests for help, and fulfill those strictly within the bounds of what has been requested.
The good news is, this is a community of people who want to be skilled at updating on the evidence. Hopefully this negative result will be counted as evidence and people here will, in future, tend to refrain from inflicting help.
Both ideas sound good. Any analysis, commentary, or recommendations would be useful for people, I'm sure.
Repost from last open thread in the desperate hope that the lack of interest was only due to people not seeing it all the way at the bottom:
Frank Lantz: The Truth in Game Design
Players keep complaining about random number generators being "unfair" in games that involve randomness, so game developers have started tweaking the generators to behave according to the gambler's fallacy. Now results that are adverse to the player increase the chance of favorable future results. Lantz notes that making game systems conform to common fallacies might not be that good an idea, since games could also be used as great teaching devices for how all sorts of complex systems really work. Of course the reasoning is a bit different when your bottom line depends on players not canceling their subscriptions when they think they are being shafted by unfair game code.
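A minimal sketch of the kind of tweak being described, assuming a simple hit-chance mechanic (the class name and the numbers here are made up):

    import random

    class PityRoller:
        """Hit rolls tuned toward the gambler's fallacy: each miss
        raises the next hit chance, and a hit resets the bonus."""

        def __init__(self, base_chance=0.25, pity_step=0.05):
            self.base_chance = base_chance
            self.pity_step = pity_step
            self.bonus = 0.0

        def roll(self):
            hit = random.random() < self.base_chance + self.bonus
            # Adverse results increase the chance of favorable future ones.
            self.bonus = 0.0 if hit else self.bonus + self.pity_step
            return hit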
Messing with the random number generator feels unpleasant: it both makes the game less real and makes the game better able to keep the player in a dull trance, free of unexpected novelty. On the other hand, you could say that platform games where the main character can move back and forth in the air after jumping and do a double jump off thin air teach the players a wrong model of physics, but these features seem to generally make platform games more fun. So why would one be bad and the other not? One difference is that the physically improbable jumping capabilities give the players more options, while the gambler's-fallacy RNG just affects events independent of player actions. Another idea is that probability feels like a more fundamental aspect of reality than the details of how the laws of physics work.
This other article called Truth in Game Design by Scott Brodie linked from the comments of the first one is also interesting.
A new study shows rural Mayans failing to exhibit the act-omission distinction in a variant on the trolley problem.
The results of that study seem to be a bit more complicated, in that they suggest that part of the cause of the distinction is a common belief in the population that is very close to witchcraft (where intending or desiring harm can itself cause harm), so that intent matters as much as action and there isn't a clear dividing line between the two.
A rather ironic take on Hauser's Trolley Problem.
These "act" trolley problems have the same difficulty as the original.
It's so implausible that the only way to stop a runaway truck/trolley would be to make it run over a person, that one doesn't know if one's intuition is reacting against the sheer implausibility or the moral dimension.
IMO, telling the subject that "pushing the fat man is the only way" is not helpful. We can't imagine ourselves in that epistemic position.
The best "fat man" scenario is the Unwilling Transplant Donor, but sadly it does not have a good omission counterpart.
Suppose the potential organ donor is choking to death, and you have the opportunity to perform the Heimlich maneuver and save him.
If I have no memory of some period in my past, then should I be pleased to discover that I was happy during that period? Or are past experiences valuable only through the pleasure their memories give us in the present?
You should be at least as pleased as you would be to discover that someone else was happy during that period.
There is something I find very satisfying about this answer. Possibly this is related to the fact that I like to think of people-over-time as being a succession of distinct, but closely related, identities.
If you are a utilitarian, I think you should be pleased.
Imagine you happened to find out that a person on the other side of the world, whose life has never and will never affect yours in any way, is happy right now. You'd be pleased about that, right? Now imagine you knew instead that that person was happy last week. Since this affects you not at all, there's no real difference between these: you're just pleased about the fact of someone's happiness at some point in time.
If you buy my argument up to this point, then you may as well be pleased if that mystery person from the past was actually your own past self. And that's not even to mention Kevin's argument which does take into account the ways in which your past self influences your future self.
Here is one possible reason for being pleased to discover that one was unhappy in the past:
Times of apparent unhappiness can lead to great personal growth. For instance, the hardest, most stressful time of my life was studying for my physics honors exams. However, now that the exams are over, I am glad to have both the knowledge I gained in studying and the self-knowledge that I am capable of pushing myself as hard as I did. (Would skills learned during the missing time be retained? Even if they weren't, the latter reason above would still apply.)
It would be devastating to lose the memory of any part of one's life, but I think there would be some satisfaction in learning that one had spent the missing time doing something difficult but worthwhile, even if one was not happy during that time.
I vote "pleased", for the rather weak reason that this makes my preferences time-symmetric*.
* Edit: This is poorly-worded - what I was referring to was time shift symmetry.
But nothing else about the universe is time-symmetric, manifestly including our own revealed preferences -- I would rather be happy in the future but not in the past than be happy in the past but not in the future, if you gave me the choice right now. So this is the only argument I can think of to vote "not pleased" (of course, not displeased either) about one's past, but unremembered, happiness.
(I actually do vote "pleased," though, for the reason I argued here.)
I'm not sure that I'd prefer unrecalled happiness in the past to happiness in the future, but I was thinking of (and should have named) time-shift symmetry, which the fundamental laws of physics do exhibit.
I actually agree with your argument for voting "pleased", though, so we might be simply in agreement.
It sounds as though you now have some information about those past events. Hopefully, it is a sign that your goals were being met during that period. Also, if you managed to learn that, maybe you will also learn something more useful about the period. So: I would say it is normally a good sign.
Request for help: I can do classroom programming, but not "real-world" programming. If the problem is to, e.g. take in a huge body of text, collect aggregate statistics, and generate new output based on those stats, I can write it. (My background is in C++.)
However, when it comes to writing apps with a graphical user interface, taking input in real time, making use of existing code libraries, etc., I'm at a loss. I'd like to know what would be a good introduction to this more practical level.
To better explain where I am, here is what I have tried so far: I've downloaded a lot of simple open source programs that have a lot of source files. But strangely, whenever I compile them myself and get them to run, it just runs on the command screen blindingly fast and then closes, as if I'm missing some important step. (How are you normally expected to compile open-source programs?)
I've also worked with graphics libraries and read a book (IIRC, Zen and the Art of Direct3D Game Programming) and was able to use that for writing algorithms that determine the motion of 3D objects, given particular user inputs, but it was pretty limited in domain.
I've downloaded Visual C# Express, which was actually pretty helpful in terms of showing how you can create GUIs and then jump to the corresponding code that it calls. I wrote simple programs with that and even bought a book on how to use it, but it turned out to require very circuitous routes to do simple things.
Finally, because it's so highly recommended, and I've read Douglas Hofstadter's introduction to it, I thought about programming in Lisp, but the only programming environment for it that I could get to work was the plain old b/w command line, whereas I figured I'd need more functionality than that, and also the libraries to do more than just computation. (I'm experienced with Mathematica, which seems similar in a lot of ways to Lisp.)
So, any specific suggestions on where I should go from here?
You want to do user-facing stuff? Then don't bother with desktop programming, write webapps. HTML and JavaScript are much easier than C++. You don't even have to learn the server side at first; a lot of useful stuff can be written as a standalone HTML file with no server support. For example, you could make your own draggable interface to the free map tiles from http://openstreetmap.org - basically it's all just cleverly positioned image elements within a rectangular element that responds to mouse events. Or, if a little server-side coding doesn't scare you, you could make a realtime chat webpage. Stuff like that.
If you need any help at all, my email is vladimir.slepnev at gmail and I'm often online in gtalk.
"Easy" is one goal you can have when learning to program. "Soundly written and maintainable" is another. Unfortunately these two goals are sometimes at odds.
Language and platform don't really matter a whole lot, in the grand scheme of things; learning how to write maintainable programs does matter. Having had lots of experience extending or modifying source code written by others, I wish more novice programmers would make that their number one goal.
I disagree!
A (real) novice programmer's number one worry should be getting paid. Why should they divert their attention and spend extra effort on writing maintainable code, just so you have an easier time afterward? That's awfully selfish advice.
You might claim writing maintainable code will pay off for them, but to properly evaluate that we need to weigh the marginal utilities. What's better, an extra hour improving the maintainability of your code, or an extra hour spent empathizing with the client? Ummm... And you can't say both of those things are first priority, that's not how it works. I've been coding for money for half of my life so listen to my words, ye lemmings: ship the thing, make the client happy, get paid. That's number one. Maintainability ain't number one, it ain't even in the top ten.
That depends on your incentive structure. You may well be right if you work as a contract programmer. If you work as a salaried employee in a large company the calculation could look different.
(Edited)
For one thing, that doesn't sound like something that's actionable for Silas in the context of his request for advice, compared to advising him to learn some specific techniques, such as MVC, which make for more maintainable code.
For another, your worry should be "getting paid" after you have reached a reasonable level of proficiency. A medical student's first concern isn't getting paid, it's learning how not to harm patients. Similarly if you're learning programming, as opposed to confident enough of your chops to go on the market, you have a responsibility to learn how not to harm future owners of your code through negligent design practices. That a majority of programmers today fail to fulfill that basic responsibility doesn't absolve you of it.
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn't have to wait and learn without getting paid, his current skill level is already in demand.
But that's tangential. More importantly, whenever I hear the word "maintainability" I feel like "uh oh, they wanna sell me some doctrinaire bullshit". Maintainability is one of those things everyone has a different idea of. In my opinion you should just try to solve each problem in the most natural manner, and maintainability will happen automatically.
Allow me to illustrate with an example. One of my recent projects was a user interface for IPTV set-top boxes. Lots and lots of stuff like "my account", "channels I've subscribed to", et cetera. Now, the natural way to solve this problem is to have a separate file (a "page") for each screen that the user sees, and ignore small amounts of code duplication between pages. If you get this right, it's pretty much irrelevant how crappily each individual page is coded, because it's only five friggin' kilobytes and a maintenance programmer will easily find and change any functionality they want. On the other hand, if you get this wrong... and it's really fucking distressing how many experienced programmers manage to get this wrong... making a Framework with a big Architecture that separates each page into small reusable chunks, perfectly Factored, with shiny and impeccable code... maintenance tasks become hell. And so it is with other kinds of projects too, in fact with most projects I've faced in my life. Focus on finding the stupid, straightforward, natural solution, and it will be maintainable with no effort.
I wasn't with you on the importance of maintainability until you said this. Yes, programming well and naturally is automatically maintainable.
Right on. Another way to put it: if you have to spend extra effort on maintainability, you've probably screwed up somewhere.
My name for this kind of behavior is "fetish". For example, some people have a Law of Demeter fetish. Some people have a short function fetish. And so on, all kinds of little cargo cults around.
Allow me to illustrate with another example. One of my recent projects is mostly composed of small functions, but there's this one function that is three screens long. What does it do? It draws a pie chart with legend. The only pie chart in the whole application. There's absolutely no use refactoring it because it's all unique code that doesn't repeat and isn't used anywhere else in the app. Pick the colors, draw the slices, draw the legend, stop. All very clear and straightforward, very easy to read and modify. A fetishist would probably throw a fit and start factoring it into small chunks, giving them descriptive names, maybe making it a "class" with some bullshit "parameters" that actually only ever take one value, etc, etc.
Well, that's merely labeling, not actually advancing an argument. What kind of predictions are we talking about here? Where is our substantial disagreement, if any?
When I talk about maintainability I'm referring to specific sequences of events. In one of the most common negative scenarios, I'm asked to make one change to the functionality of a program, and I find that it requires me to make many coordinated edits in distinct source chunks (files, classes, functions, whatever). This is called "coupling" and is a quantifiable property of a program relative to some functional change specification.
"Maintainable" relative to that change means (among other things) low coupling. You want to change the pie chart to use dotted lines instead of solid inside the pie, and you find that this requires a change in only one code location - that's low coupling.
Now what often happens is that someone needs a program that's able to do both dotted-line pies and solid-line pies. And many times the "most natural" thing (by which I only mean, "what I see many programmers do") is then to copy the pie-chart function, paste it elsewhere with a different name, and change the line style from solid to dotted.
That copy-paste programming "move" has introduced coupling, in the sense that if you want to make a change that affects all pie charts (dotted and solid alike) you'll have to make the corresponding source change twice.
Someone who programs that way is eventually going to drive coupling through the roof (by repeated applications of this maneuver). At this point the program has become so difficult to change that it has to be rewritten from scratch. Plus, high coupling is also correlated with higher incidence of defects.
Now you may call a "fetishist" someone whose coding style discourages copy-paste programming, that doesn't change the fact that it is a style which results in lower overall costs for the same total quantity of "delta functionality" integrated over the life of the program.
My contention is that functions which are three screens long are, other things equal, more likely to result in copy-paste parametrizations than smaller functions. (More generally, code that exhibits a higher degree of composability is less susceptible to design mistakes of this kind, at the cost of being slightly harder to understand for a novice programmer.)
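To make the contrast concrete, a hypothetical Python sketch (not anyone's actual project code):

    # The copy-paste move: two code locations must now change in
    # lockstep whenever pie charts in general change.
    def draw_solid_pie(data):
        ...  # three screens of drawing code, solid lines

    def draw_dotted_pie(data):
        ...  # the same three screens pasted, "solid" edited to "dotted"

    # The lower-coupling alternative: one location, one parameter.
    def draw_pie(data, line_style="solid"):
        ...  # the same drawing code, consulting line_style where lines are drawn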
I'd probably look hard at this pie chart thingy and consider chopping it up, if I felt the risk mitigation was worth the effort. Or I might agree with you and decide to leave it alone. I would consider it stupid to have a "corporate policy" or a "project rule" or even a "personal preference" of keeping all functions under a screenful. That wouldn't work, because more forces are in play than just function length.
Rather, I assess all the code I write against the criterion of "a small functional change is going to result in a small code change", and improve the structure as needed. I have a largish bag of tricks for doing that, in several languages and programming paradigms, and I'm always on the lookout for more tricks to pick up.
What, specifically, do you disagree with in the above?
I agree with most of your comment, except the idea that you can anticipate in what directions your software is going to grow. That's never actually worked for me. Whenever I tried designing for future requirements instead of current simplicity, clients found a way to throw me a curveball that made me go "oops, this new request screws up my whole design!"
If my program ever needs a second pie chart, it's better to factor the functionality out then instead of now. Less guesswork, plus a three-screen-long function is way easier to factor than a set of small chunks is to refactor.
Object-oriented design is overrated. ;)
I wouldn't say "on the job", necessarily. But it is only learned by programming, not by thinking about programming, attending lectures on programming, etc. Programming for class assignments can count for this.
Well, there is some benefit to reading good code, but you have to already have a reasonable idea what good code is for that to help.
That happens to take a significant amount of skill and learning. Read a site like the Daily WTF and you see what too often comes out of letting untrained, untaught programmers do what they're naturally inclined to do. One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them.
In practice you're right: people have different ideas of maintainability. That is precisely the problem.
But I don't know of any way to acquire this "programming common sense" except on the job. Do you?
Oh, no. What a terrible idea. If you do this without actually pushing through real-world projects of your own, you'll come up with a lot of bullshit "principles" that will take forever to dislodge. In general, the ratio of actual work to abstract thinking about "principles" should be quite high.
Open source.
Another case of "let them eat cake". The very gap in my understanding is the jump from writing input-once/output-once algorithms to multi-resource, complex-UI programs, when existing open source applications have source files that don't make sense to me and no one on the project finds it worth their time to bring me up to speed.
Sorry for not reading the follow-up discussion earlier.
What do you mean by this? How can I be hired for programming based on just what I have now? Who hires people at my level, and how would they know whether I'm lying about my abilities? (Yes, I know, interviews, but they have to thin the field first.) Is there some major job-finding trick I'm missing?
My degree isn't in comp sci (it's in mech. engineering and I work in structural), and my education in C++ is just high school AP courses and occasional times when I need automation.
Also, I've looked at the requests on e.g. rent-a-coder, and they're universally things I can't get to a working .exe (though I could of course write the underlying algorithms).
The best 'trick' for job-finding is to get one from someone you know. I'm not sure what you can do with that.
Generally speaking, there are a lot of people who aren't good at thinking but have training in programming, and comparatively not a lot of people who are good at thinking but not good at programming, and the latter are more valuable than the former. If I were looking for someone entry-level for webdev (and I'm not), I'd be likely to hire you over a random person with a master's degree in computer science and some experience with webdev.
Heh, that's what I figured, and that's my weak point. (At least you didn't say, "Pff, just find one on the internet!" as some have been known to do.)
Thanks. I don't doubt people would hire me if they knew me, but there is a barrier to overcome.
I'm sorry to be the one to break the news to you, but the IT industry has appallingly low standards for hiring.
For instance, you may be able to get a programming job without at any point being asked to produce a code portfolio or to program in front of an interviewer.
I'd still be keen, by the way, to help you through a specific example that's giving you trouble compiling. I believe that when smart people get confused by things which their designers ought to have made simple, it's an opportunity to learn about improving similar designs.
A quick solution to the FizzBuzz quiz:
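(One possible version, in Python:)

    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)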
*in ur LessWrong, upvotin' ur memez*
For the first time here I'm having a Buridan moment - I don't know whether to upvote or downvote the above.
Jeff Atwood also makes this meta point about blogging about fizzbuzz:
Somehow, the other responses to this comment reminded me of that.
Somehow, I believe this is my fault for having mentioned trying it myself, and for that, I apologize.
If all you have is regex s/.+/nail/
Warning: Do not try this (or any other perl coding) at home!
I think anyone who applies to a programming job and can't write this (in whatever language) deserves something worse than being politely turned down.
I tested myself with MATLAB (which makes it quite easy) out of some unnecessary curiosity - it took me about seven minutes, a fair part of which was debugging.
I feel rather ashamed of that, actually.
As everyone else seems to be posting their code:
A better program (by which I mean "faster", not "clearer" or "easier to modify" or "easier to maintain") would replace the tests with something less intensive - for example, incrementing two counters (one for 3 and one for 5) and zeroing them when they hit their respective desired factors.
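A sketch of that counter-based version, in Python (the counters replace the i % 3 and i % 5 tests; whether it's actually faster would depend on the language and compiler):

    three, five = 0, 0
    for i in range(1, 101):
        three += 1
        five += 1
        if three == 3 and five == 5:
            print("FizzBuzz")
        elif three == 3:
            print("Fizz")
        elif five == 5:
            print("Buzz")
        else:
            print(i)
        # Zero each counter when it hits its factor.
        if three == 3:
            three = 0
        if five == 5:
            five = 0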
I wouldn't be; I'd take it as (anecdotal) evidence that the craft of programming is systematically undertaught. By which I mean, the tiny, nano-level rules of how best to interact with this strange medium that is code.
(Recently added to my growing backlog of possibly-top-level-post-worthy topics is "how and why programming may be a useful skill for rationalists to pick up"...)
What are your other nine?
Bad design choices are much more expensive to fix down the road than when they were created. You seem to be saying that any time spent addressing this issue is worthless in comparison to spending more time empathizing with the customer.
If you're doing them in Windows, open the command prompt using "cmd" and run them from the command line. They'll run in the CMD window, which will stay open after the program finishes doing whatever it does, leaving the output visible.
Most open-source programs are made to be easy to compile on Unix platforms. If you're using OS X or Linux, great; if you're on Windows, download Cygwin and you'll have a Unix environment. Given all that, read the INSTALL file; it should give you step-by-step instructions for compiling and installing. Most commonly, you run ./configure, then make, then (as root) make install.
That said, platforms with package managers are really nice because you can download, build, and install many programs in a single step; Debian has APT, OS X has MacPorts and Fink, and Haskell (a programming language, not an operating system) has the Cabal.
In general, if running something causes a terminal to open and immediately close, try running it on a command line instead of double-clicking it. For Windows, open Command Prompt, drag the executable onto the terminal window, and hit enter.
One way to do that is to open the "Start" menu, select "Run", type cmd, and press <Enter>.

Find a specific programming problem you need (want) to solve. That, for me at least, makes the task of learning almost automatic.
I (also) recommend Ruby+Rails for practical purposes. If you want to learn how to program, for example, 3D games then I have no particular recommendations. I only got as far as 2D bitbliting on that path! ;)
Try Python+Django, Ruby+Rails, or PHP+CakePHP depending on your preference, but the pragmatic difference is much smaller than language zealots pretend. If you plan on making something with millions of users, PHP is faster than Python or Ruby.
Graphic programming is harder than using generated HTML for your GUI, and there seem to be a lot more real world applications with a web GUI than anything that uses local OS graphics.
Unfortunately this about sums up the current state of 'real world' programming.
It is helpful to have a concrete goal to work towards rather than merely coding for the sake of learning. Learning 'on the job' is helpful in this regard as there is usually a somewhat defined set of requirements and there is added motivation and supervision that comes with being paid to write code.
If you are trying to learn on your own I'd suggest trying to set yourself the task of writing a simple program to do something fairly clearly defined and then work towards that. Simply reading through open source code (or any third party code) is not something I've found terribly helpful as a learning exercise. More useful is to set yourself the task of fixing a specific bug or adding a specific feature as this will help direct your investigation.
Learning how to use the debugging tools available to you is also important. Understanding how software is put together can be greatly aided by stepping through code in a good debugger.
C# is pretty good for 'real world'/GUI development. Personally I think it is the best option overall at the moment for that kind of programming but you will find language choice is a bit of a religious war issue.
I second that recommendation for (non-web) GUI development. Even as someone who had never programmed in C# I found learning the language the simplest option when I needed to create a visual desktop application. (Of course, given that I knew both Java and C++ it wasn't exactly a steep learning curve.)
What category of app are you looking to write, narrowing down the class "app with a GUI" a little?
Can you name a specific example of one you've tried to compile and run, and you've been confused at the result?
One general hint is that a good way to learn how to code up significant programs from scratch is to, first, get a significant program that works and modify or extend it in some way.
Also, be aware that there are several competing design philosophies when it comes to writing GUI programs, with very different outcomes in terms of maintainability and adherence to sound design principles. The "Visual" approach exemplified by the Microsoft line of tools leaves much to be desired in my experience, leading to spaghetti code too easily.
I prefer approaches in which graphical components are created programmatically, and where design principles such as MVC then serve to further structure the resulting code and drive the design toward high levels of abstraction. The various Smalltalk environments are a good illustration of that philosophy.
Spaghetti code is a primarily a function of the programmer, not the tools. This isn't to say the tools don't matter; they do; but the various competing tools each have their pros and cons, and it's a bit glib to suggest the Microsoft stack is obviously behind here. ASP.NET MVC, which you can use for web development in C#, is quite orthogonality-friendly.
If you want to write UIs, Lisp and friends would probably not be the first choice, but since you mentioned it...
For Lisp, you can of course install Emacs, which (apart from being an editor) is a pretty convenient way to play around with Lisp. Emacs Lisp may not be a state-of-the-art Lisp implementation, but it is certainly good enough to get started. And because of the full integration with the editor, there is instant gratification when you can use some Lisp to glue two existing things together into something useful. Emacs is available for just about any self-respecting computer system.
You can also try Scheme (a Lisp dialect); there is the excellent, freely available Structure and Interpretation of Computer Programs, which uses Scheme as the vehicle to explain many programming concepts. Guile is a nice free-software implementation.
If you're really into a more mathematical approach, Haskell is pretty nice. For UI stuff, I find it rather painful, though (the same is true for Lisp and, to some extent, Scheme).
Got Skype, microphone, etc?
Well, if you want to play with a Lisp, maybe consider PLT Scheme? That one has a really nice environment, etc. etc... (it's a Scheme rather than a Common Lisp, though.)
I've found wxPython a relatively pleasant way to write GUI programs.
This is a draft of a post I'm planning to send to my everything-list, partly to invite them to join Less Wrong. I'd appreciate comments and feedback on it.
Recently I heard the news that Max Tegmark has joined the Advisory Board of SIAI (The Singularity Institute for Artificial Intelligence, see http://www.singinst.org/blog/2010/03/03/mit-professor-and-cosmologist-max-tegmark-joins-siai-advisory-board/). This news was surprising to me, but in retrospect perhaps shouldn't have been. Of the three authors of papers I cited in the original everything-list charter/invitation, the other two had already effectively declared themselves to be Singularitarians (see http://en.wikipedia.org/wiki/Singularitarianism): Nick Bostrom has been on SIAI's Advisory Board for a while, and Juergen Schmidhuber spoke at the Singularity Summit 2009. I was also recently invited to visit SIAI for a decision theory mini-workshop, where I found the ultimate ensemble idea to be very well received. It turns out that many SIAI people have been following the everything-list for years.
There seems to be a very strong correlation between interest in the kind of ideas we discuss here, and interest in the technological singularity. (I myself have been interested in the Singularity even before starting this mailing list.) So the main point of this post is to let the list members who are not already familiar with the Singularity know that there is another set of ideas out there that they are likely to find fascinating.
Another reason for this post is to let you know that I've been spending most of my online discussion time at Less Wrong (http://lesswrong.com/lw/1/about_less_wrong/, "a community blog devoted to refining the art of human rationality", which is sponsored by the Future of Humanity Institute, founded by Nick Bostrom, and effectively "owned" by Eliezer Yudkowsky, founder of SIAI). There I wrote a sequence of posts summarizing my current thoughts about decision theory, interpretations of probability, anthropic reasoning, and the ultimate ensemble theory.
I initially wanted to reach a different audience with these ideas, but found that the Less Wrong format has several advantages: both posts and comments can be voted upon, the site's members uphold fairly strict standards of clarity and logic, and the threaded presentation of comments makes discussions much easier to follow. So I plan to continue to spend most of my time there, and invite other everything-list members to join me. But please note that the site has a different set of customs and emphases in topics. New members are also expected to have a good grasp of the current state of the art in human rationality in general (Bayesianism, heuristics and biases, Aumann agreement, etc., see http://wiki.lesswrong.com/wiki/Sequences) before posting, and especially before getting into disagreements and arguments with others.
I'm still with Jack that pointing new readers to the entirety of the sequences is non-optimal. I'm waiting for the day when we can at least say "Start here (link) and keep clicking Next, and skim as much as you like", but you probably don't want to wait that long to send the post, so I don't know.
It doesn't look bad to me - if you believe it would be well-received, I see no problem with sending it.
This Nature article ("Quantum ground state and single-phonon control of a mechanical resonator") is making headlines in various media, and seems to be about large-scale quantum superposition, but it's always hard to tell what's getting lost in translation when you're not an expert. I'd prefer to put my trust in people here who think they're qualified to comment. Anyone?
I was about to post this here. If they actually verified that the resonator was in superposition, if they actually got interference effects out of it, well, that's it then, collapse isn't just dead, it's, well... I need a word for "dead" that's more emphatic than "dead".
It's... ahem... collapsed. :P
(at least such is my thought.)
I can't access the article right now, so could you explain in more detail what it implies for quantum collapse?
Hearing that Max Tegmark joined SIAI's board reminded me of a top-level post I was thinking of doing. In it, I would present what I think is a very strong but heretofore underemphasized argument for the Mathematical Universe/Level IV Multiverse hypothesis — specifically, an argument for why it is actually a satisfactory answer to the ultimate question of why anything bothers to exist at all — particularly targeted at people who aren't familiar with it or are skeptical of it (I was in the latter category when I first learned of it, remained there for a couple years, forgot about it, and then unexpectedly convinced myself of it). However: Are there enough people here in either of those categories that this would make for worthwhile discussion, and would that be considered sufficiently on-topic?
I think it's an aesthetically appealing way of looking at what's going on, but that it doesn't help with understanding what's going on (or what to do with it) in any way.
If you're referring to the fact that it doesn't give us any useful information about the contents or laws of this universe, then I agree completely. (If I write this post, then I do intend to acknowledge that, and to discourage calling it a "theory of everything" for that reason.)
Shall I take this as a "no" vote for the "would that be considered sufficiently on-topic?" question?
In its defense in that respect, it could be taken as a discussion about the outer limits of what anybody/anything anywhere can understand, and aside from that, it raises some interesting questions about anthropic reasoning.
Poll: Do you have older siblings, or are you an only child?
karma balance
Vote this up if you are the oldest child with siblings.
Vote this up if you have older siblings.
Vote this up if you are an only child.
I'm pretty sure that in the general population, there are at least as many people with older siblings as there are people with only younger siblings. But in this poll, it's 6 vs 19. That looks like a humongous effect (which we also found in SIAI-associated people, and which this poll was intended to further check). I could see some sort of self-selection bias and the like, and supposedly oldest children have slightly higher IQs on average, but on the whole I'm stumped for an explanation. Anyone?
ETA: Here's a claim that "it is consistently found that being first-born is particularly favourable to high levels of scientific creativity". See also this.
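For what it's worth, a quick back-of-the-envelope check (my own arithmetic, not from the poll itself) of how surprising a 6-vs-19 split is if both categories were equally likely:

    from math import comb

    # One-sided binomial tail: probability of 6 or fewer out of 25
    # under a 50/50 null.
    n, k = 25, 6
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    print(f"P(X <= {k} | n = {n}, p = 0.5) = {p:.4f}")  # about 0.007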
Is anyone familiar with a possible evolutionary explanation of the placebo effect? It seems strange to me that the body would have a limit to the degree it heals itself, and that this limit gets bypassed by the belief that one is receiving treatment.
The only explanation I could string together is that the body limits how much it heals itself because it's conserving energy/resources/whatever it might need for other things (periods of scarcity, danger, etc.). Receiving medicine sends the signal that the person is being taken care of and thus at a much lower risk of needing its "reserves", so the body goes ahead and diverts them to repairing whatever is wrong with it.
However, this would suggest that a self-administered placebo would be ineffective, whereas treatment but no medicine by a doctor/caregiver would be effective. As far as I know, this isn't how the placebo effect works, but I'm not exactly up to date on the subject.
Has anyone seen a better explanation?
Yes: the original papers advocating the placebo effect were misleading in their reports, and the popularisations thereof grossly exaggerated.
Placebos can be shown to reliably have an effect on:
(I am not criticising the use of placebo controls here. But I am asserting that the primary benefit from such controls is in 'balancing out' other biases rather than because of direct effect of placebos on healing.)
http://news.ycombinator.com/item?id=567913
People are very much affected by what they imagine is going on. For the unbendable arm, you don't tell people to extend their arm efficiently; you have them imagine the arm extending out to infinity, or imagine the arm as a firehose.
I'm not sure why any of this works-- it may have something to do with activating one's own mirror neurons, but I do think the placebo effect should be viewed as a special case rather than a thing in itself.
A self-administered placebo might still be effective for evolutionary reasons. It would signal that a reduced activity level is related to tending your injuries, rather than, say, waiting in ambush or 'freezing' to avoid notice by motion-sensitive predators, so it's safe to divert resources toward repair or antibody production at the expense of sensory and muscular readiness.
Same reason people have a hard time getting to sleep in unfamiliar circumstances, but focusing on a token reminder of home dispels the feeling.
Fermi's Lack-of-a-Paradox:
http://xkcd.com/718/
I love how some xkcds aren't even comics or particularly funny, just hand drawn Less Wrong posts.
Charity is not about helping:
I haven't seen this posted yet and it seems it might be of interest, from a link on Hacker News:
Odds Are, It's Wrong
Follow the link for the full article, there's even mention of Bayes' Theorem.
A survey on cryonics laws:
Should it become legal for a person with a degenerative disease (Alzheimer's, etc.) to choose to be cryonically preserved before physiological death, so as to preserve the brain's information before it deteriorates further? Should a patient's family be able to make such a choice for them, if their mind has already degenerated enough that they are incapable of making such a decision, or if they are in a coma or some other unconscious or uncommunicative state?
Should it become legal for a person to choose to be cryonically preserved before physiological death regardless of medical circumstances?
Should hospitals be required to cryonically preserve unidentified dead bodies, assuming cryonics is still possible given whatever condition the patient's body is in? Should the default be neuropreservation or whole-body suspension?
Should your country's national health care system (if it has one; if not, imagine it does, and that its existence is not up for debate) cover cryonics for anyone who wants it? Should it be opt-in or opt-out (or not optional)?
Should laws against mishandling human remains be more severe in the case of cryonics patients?
Should murder/homicide/manslaughter laws result in more severe punishment if the victim cannot be cryonically preserved (whether because the body was not found for too long, they were shot repeatedly in the head, they were drowned or burned or buried in a ditch, etc.)? Assume the victim would have been preserved otherwise.
How would greater legal recognition of cryonics interact with the death penalty? For example, if you are for the death penalty: what should happen if a death-row inmate is signed up for cryonics (or living in a country with a national health system that covers it, per #4)? If you are against it, but living in a country that has it, could you support any cryonics-based compromise (e.g. replacing execution with cryonic suspension until, hypothetically, our understanding of psychology has advanced enough that it is possible to rehabilitate even the most evil of criminals)?
Finally, a question about social and medical attitudes rather than laws: When cryonics is widely known and relatively socially acceptable, and the evidence for its possibility is well-accepted in the mainstream (or when people have already started being revived), should opting out of it be viewed as comparable or equivalent to being suicidal?
Yes to 1, 2, 3, 5, 6. Undecided on 4.
I've been wondering about 7 for some time now. I'm against the death penalty, but given that some countries have it, it seems so obvious that people who are now being executed should be preserved instead. The probability of a wrongful conviction being non-trivial, $30K seems like a paltry sum to invest in the possibility, however slight, of later reviving someone who was wrongfully executed. I have looked at the figures for the cost to society of the legal process leading to execution, and it is shockingly high. People on death row should at least have the option, given how much is otherwise spent on them.
My answers:
Yes and yes.
I know what this is like because my grandmother spent the last years of her life with Alzheimer's, in a nursing home. When she finally died, my mom didn't cry; she explained to me that she had already done her mourning years ago. It made sense, insofar as it can ever make sense to "get over" the annihilation of a loved one: my grandmother, the person, had already effectively died long before her body did. None of us knew about cryonics at the time, and we likely wouldn't have done it even if we had known about it, but I know that people are in this situation all the time, and as awareness and acceptance of cryonics grows, people should definitely have this option.
I'm inclined to say that it should be discouraged but legal as an individual choice. A person could already achieve a similar (though riskier) effect by calling an ambulance, making sure their bracelet and necklace are prominently visible, and killing themselves in a relatively non-destructive way.
Yes. I don't know much about the pros and cons of neuro/whole-body other than the cost, but I think I'd go with the latter, to err on the side of caution.
Yes. I'd say it should be opt-out, or opt-in if that is absolutely necessary for getting the law passed.
Yes. The laws should treat it as they would killing someone in a coma, which is presumably treated the same as killing someone in general. Of course it should vary depending on whether it is accidental, negligent, or intentional.
I'm not quite sure. I'd think that if you cause someone's bodily death, and they are able to be preserved perfectly, then it should be treated as a non-fatal assault or accident or whatever, but something doesn't seem right about that. I think, rather than having the laws against causing someone's bodily-but-not-information-theoretic death less severe, I'd prefer to have laws against causing someone's information-theoretic death more severe.
I'd prefer to abolish the death penalty altogether. If the compromise I gave as an example were politically feasible, I would support it, but I doubt it would garner much more support than abolishing the death penalty; it seems like too many people in the US view the criminal justice system as a tool of punishment/revenge rather than of rehabilitation.
When it seems like a relatively mainstream thing to do (not necessarily common, but common enough that your friends don't think you're crazy for opting in to it) — when society has outgrown its rationalizations of death and its resistance to immortality (religious objections; the idea that there is some spiritual essence that will be destroyed; the luddite/"science has gone too far!" response; the idea that having people die against their will is a morally permissible means of avoiding overpopulation; et cetera) — then can we start questioning the mental health of people who still object to it.
Has anybody else wished that the value of the symbol pi were doubled? It becomes far more intuitive that way--this may even affect the uptake of trigonometry in school. It ranks with the blunder of declaring the electron's charge negative rather than positive.
I read an argument to that effect on the Internet, but I don't have any strong feelings - maybe if I were writing a philosophical conlang I would make the change, but not normally. You may as well argue for base four arithmetic.
http://www.math.utah.edu/~palais/pi.pdf
One can dream. :) Pi relates to diameter; it'd be much nicer if it related to radius directly instead.
Personally, I want to replace the kg in the mks system with a new symbol and name: I want to go back to calling it the "grave" (as it was called at one time in France), having the symbol capital gamma. Then we wouldn't have the annoying fact of a prefixed unit as a basic unit of the system.
Embarrassingly, my first reaction was to think, "how about cgs units? Those don't use kilograms!"
Hehehe. Cgs units... it really amuses me that it seems to be astronomers who like them best.
Of course, if we were really uber-cool, we'd use natural units, but somehow I can't see Kirstie Alley going on TV talking about how she lost 460 million Planck-masses on Jenny.
No. This is nowhere near like the metric vs. english units debate. (If you want to talk about changing units, you should put your weight on that boat instead, as it's much more of a serious issue.) Pi is already well defined, anyways. It's defined according to its historical contextual meaning, regarding diameter, for which the factor of 2 does not appear.
Pi is well-defined, yes, and that's not going to change. But some notation is better than others. It would be better notation if we had a symbol that meant 2pi, and not necessarily any symbol that meant pi, because the number 2pi is just usually more relevant. There's all sorts of notation we have that is perfectly well-defined, purely mathematical, not dependent on any system of units, but is not optimal for making things intuitive and easy to read, write and generally process. The gamma function is another good example.
I really fail to see why metric vs. english units is much more serious; neither metric nor english units is particularly suggestive of anything these days. Neither is more natural. The quantities being measured with them aren't going to be nice clean numbers like pi/2, they're going to be messy no matter what system of units you measure them with.
What about the gamma function is bad? Is it the offset relation to the factorial?
Yeah. It's artificially introduced (why the s-1 power?) and is basically just confusing. Gamma function isn't really something I've had reason to use myself, so I'm just going on the fact that I've heard lots of people complain about this and never anyone defending it, to conclude that it really is as dumb as it looks.
The t^(s-1) in the gamma function should be thought of as the product of t^s dt/t. This is a standard part of the Mellin transform. The dt/t is invariant under multiplication, which is a sensible thing to ask for since the domain of integration (0,infinity) is preserved by scaling, but not by the translations that preserve dt.
In other words, dt/t = d(log t) and it's telling you to change variables: the gamma function is the Laplace (or Fourier) transform of exp(-exp(u)).
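Spelled out, that substitution is (a sketch of the standard identity):

    \Gamma(s) = \int_0^\infty t^{s-1} e^{-t}\, dt
              = \int_0^\infty t^s e^{-t}\, \frac{dt}{t}
              = \int_{-\infty}^{\infty} e^{su}\, e^{-e^u}\, du,
    \qquad\text{where } t = e^u.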
e^(pi*i) = -1
Anything else: lame.
Uh, how is e^(pi*i) = 1 lame?
Maybe because e^0 = 1?
Well, making pi = 2pi would just mean the complex exponential function would repeat itself every pi radians instead of every 2pi radians. e^0 would still = 1 in either case. Note that under the current definition, e^(j*2pi*n) = 1 for any integer n.
Meh. 2 Pi shows up a lot, but so does Pi, and so does Pi/2. I think I'd rather cut it in half, actually, as fractions are more painful than integer multiples.
Think about the context here, though. Having a symbol for 2pi would be much more convenient because it would make things consistent. 2pi is the number that you typically cut into fractions. Say we define rho to mean 2pi. Then we have rho, rho/2, rho/3, rho/4... whereas with pi, we have 2pi, 2pi/2, 2pi/3, 2pi/4... the problem is those even numbers. Writing 2pi/4 looks ugly, so you want to simplify, but writing pi/2 means you no longer see the number 4 there, which is what's important: that it's a quarter of 2pi. You see the 2 on the bottom, so you think it's half of 2pi. It's a mistake everyone makes every now and then - seeing pi/n and thinking it's 2pi/n. If we just had a symbol for 2pi, this wouldn't occur. Other mistakes would, sure, but as commonly as this one does?
If we were to define, say, xi=pi/2, then 4xi, 2xi, 4xi/3, xi, 4xi/5... well, that's just awful.
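For comparison, here are the same fractions of a full circle under all three conventions (my own tabulation of the examples above):

    \begin{tabular}{cccc}
    \text{fraction of circle} & \text{with } \rho = 2\pi & \text{with } \pi & \text{with } \xi = \pi/2 \\
    1   & \rho   & 2\pi   & 4\xi   \\
    1/2 & \rho/2 & \pi    & 2\xi   \\
    1/3 & \rho/3 & 2\pi/3 & 4\xi/3 \\
    1/4 & \rho/4 & \pi/2  & \xi    \\
    1/6 & \rho/6 & \pi/3  & 2\xi/3
    \end{tabular}

Only in the rho column does the denominator you see always equal the fraction of the circle you mean.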
What? Like who, 6th graders?
I find that unfair. I have made the mistake Sniffnoy describes many times, all of them after I was in 6th grade.
Easy solution. Pi is half a circle. Pie is the whole one. Then there is a smooth transition from grade 3 to university.
I've been looking for a good thing to call 2*Pi - this might cut it.
Nice one! ;)
No, like anyone who isn't watching out for traps caused by bad notation. It's much easier to copy down numbers than it is to alter them appropriately. If you see e^(pi * i/3), what stands out is the 3 in the denominator. Except, oops, pi actually only means half a circle, so this is a sixth root of unity, not a third one. That's part of why I like to just write zeta_n instead of e^(2pi * i/n). Sure, this can be avoided with a bit of thought, but thought shouldn't be required here; notation that forces you to think about something so trivial is not good notation.
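A quick sanity check of that exact trap (illustrative Python, nothing more):

    import cmath

    # e^(i*pi/3): the 3 in the denominator suggests "cube root of unity",
    # but since pi is only half a turn, this is a primitive SIXTH root.
    z = cmath.exp(1j * cmath.pi / 3)

    print(abs(z**3 - (-1)))  # ~0: the cube is -1, not 1
    print(abs(z**6 - 1))     # ~0: only the sixth power returns to 1

    # With the full turn spelled out, the subscript is honest: zeta_6 = e^(2*pi*i/6)
    zeta6 = cmath.exp(2j * cmath.pi / 6)
    print(abs(z - zeta6))    # ~0: same number, clearer notation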
Pi/3 shows up a lot as well. If you halve pi, then you'd have to write that as 2*pi/3, which is more irritating still.
Mentally Subtracting Positive Events Improves People’s Affective States, Contrary to Their Affective Forecasts
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2746912/
This will be preaching to the converted here, but worthy of note: "Odds Are, It's Wrong".
It's about the continued use of frequentist significance tests.
ETA: I've found the web site flaky today. Here's Google's cached copy.
More -- much more! -- discussion of the article by statisticians here.
This is the best introduction to the subject I've seen yet. I highly recommend it to the mathphobic.
Poll: when making a new substantive top-level post, what kinds of summary are acceptable?
This is a checkbox poll, and therefore votes for multiple options may be entered - for each option, a separate karma balance will be offered. In the event that some important option is immediately noticed to be missing, another poster may offer an option-karma balance pair without destroying the poll.
Formal abstract, consisting of one or a few paragraphs, indented.
Karma balance.
"tl;dr" summary, indicated by "tl;dr" abbreviation.
Karma balance.
Précis, consisting of one or two sentences, italics.
Karma balance.
No summary.
Karma balance.
If I think style choices are best left up to the poster, should I vote for all the options?
This afternoon I identified a way in which I strongly need to be more rational, and I wondered whether anything has been written about it on Less Wrong.
A few hours ago, I was picking up my two children from their school. They're at a very young age so my heuristic is: near a parking lot, hang on to them.
While we were exiting the school building, another small child ran from his mother and slipped through the door between me and my youngest child. I feebly tried to grab the boy's shirt but he tugged away and then I just watched as he ran into the parking lot. I was in the middle of a decision algorithm to chase after him when he finally settled in a safe spot beside his family's car.
After about a full minute of playing the moment over and over in my head, I felt deeply disturbed that I hadn't instinctively grabbed the boy firmly enough to actually catch him, and then hadn't run after him in time to save him if a car had been coming. I was fully culpable: the only reason the door was open was that I was holding it for my kids, I knew he was running into a parking lot, and I was standing between him and his mother. But I just didn't think fast enough. My heuristic was 'hang on to my kids', which I did.
This seems to have been a matter of not computing fast enough. How could I have thought faster, in a way that would have resulted in a useful action? There have been several times in the past year where I just want to kick myself for not doing the right thing at the right time. Is this a form of akrasia?
If it had been me in that situation, I might have reacted pretty much as you did, because I have a heuristic to leave other people's kids alone when the parents are around. Nothing riles me quite like seeing someone else interact with my child in a bossy way, and I have noticed that others often react the same.
Near a school I would expect adults (including in cars) to be more on the lookout for kids running around and so my awareness of danger would be lowered relative to my awareness of etiquette and the rule to look after my own kids.
No, the term akrasia should be reserved for when you have already computed what you want to do, and fail to carry through with the want.
What you describe seems more like a matter of doing the best with limited computing resources. Making what in retrospect appears to be the wrong decision should, if it has not had dire consequences, be good news: you get to adjust the internal "weights" you assign to the relevant rules, and so prepare yourself for right decisions in future.
Don't beat yourself up for not "thinking faster"; simply reflect on your repertoire of relevant actions in similar contexts, and perhaps try to expand it. For instance, you may want to practice shouting "stop" so that it works. ;)
It appears to me that you simply ran into a situation for which you were not prepared. If there are general rules you can implement that will work, that is good, but the only cure I can think of is anticipating and considering in advance many possible scenarios.
Let Every Breath, Systema, and Rmax International are related systems based on the idea of learning to maintain mental focus under stress.
I haven't worked with them myself, but the approach seems safe and plausible, and probably at least worth investigating.
QALYs and how they are arrived at. "Quality Adjusted Life Years" are the measure used by UK drug approval bodies in deciding which treatments to approve. They aim to spend no more than £30,000 per QALY.
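For the curious, the arithmetic behind that threshold is an incremental cost-effectiveness ratio. A minimal sketch with invented treatment numbers (only the £30,000 figure comes from the article):

    # Incremental cost-effectiveness ratio (ICER): the basic QALY arithmetic.
    # All treatment figures below are made up for illustration.

    THRESHOLD_GBP_PER_QALY = 30_000  # the stated UK ceiling

    def icer(cost_new, qalys_new, cost_old, qalys_old):
        """Extra pounds spent per extra quality-adjusted life year gained."""
        return (cost_new - cost_old) / (qalys_new - qalys_old)

    # Hypothetical: a new drug costs 18,000 GBP and yields 2.1 QALYs,
    # versus standard care at 4,000 GBP and 1.5 QALYs.
    ratio = icer(18_000, 2.1, 4_000, 1.5)
    print(round(ratio))  # ~23333 GBP per QALY
    print("fundable" if ratio <= THRESHOLD_GBP_PER_QALY else "too expensive")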
Buying someone on the internet a pizza seems to be a cheap and easy way of buying a lot of fuzzies. Behold, Mr. wongwong, the most generous man in the world.
http://www.reddit.com/r/reddit.com/comments/bd3fb/dear_reddit_can_you_order_me_a_pizza/
An interesting BHTV dialogue about transhumanism between cishumanist Massimo Pigliucci and transhumanist Mike Treder. Pigliucci, among other things, blogs at Rationally Speaking. The dialogue is partly a follow-up to Pigliucci's earlier blog post, "the problems with transhumanism". As I (tonyf, July 16, 2009 8:29 PM) commented then, despite the title of his post, his rather sweeping criticism was based more on a (I think misleading) generalisation from an article by one Munkittrick than on any actual study of the "transhumanist" community. The present dialogue had a rather different tone, and Pigliucci and Treder seemed to understand each other rather well. (As of now I see no mention of the dialogue on Rationally Speaking; it would be interesting to see whether he makes any further comment.)
I don't have time to comment on the dialogue in detail, but I will say that neither Pigliucci nor Treder distinguished between consciousness and intelligence. Pigliucci pointed out very clearly that the concept of "mind uploading" presupposes the "computational hypothesis of consciousness", and that (at least from a materialistic point of view) it is not at all clear why that hypothesis should be true. But from there he tacitly drew the conclusion (it seemed to me, at least after a single viewing of the dialogue) that [general] intelligence also depends on that assumption, which I cannot see why it should. Isn't the connection (or lack of one) between consciousness and intelligence a so-far open question?
No, Pigliucci agrees that it might be possible to get an intelligence (e.g., one that passes the Turing test) out of a computer system. He just does not think that you can call it a human intelligence.
He thinks the concept of "mind uploading" is silly because, on his view, the human mind (and intelligence) is fundamentally different from such a computer mind. He also argues that the human mind is inseparable from its biological construction. I have to admit I am not surprised that this argument comes from a biologist. To a physicist or an engineer, almost all problems and constructs are computational, and it's just a matter of figuring out the proper model. For a biologist, it is more difficult to see how living entities follow similar sorts of fundamental rules. In objecting to the computational theory of mind, Pigliucci objects to the computational theory of reality, and in essence he contradicts himself: he reveals himself to be a dualist. I think he is confusing the mathematical or logical abstraction of a system (not dualistic) with a physical or material abstraction (dualistic).
An amusing view of charity and utility, as told by Monty Python: Merchant Banker. I was trying to remember what thought experiment it reminded me of, but I couldn't find it...
This is totally irrelevant, but I just had to share it.
I use the Tony Marloshkovips system for memorizing numbers, such as phone numbers, Social Insurance Numbers, physical constants, product codes at the grocery store, etc. It's very handy.
Anyway, I had to identify myself with my SIN today on the phone for loan purposes. But there was no record of my SIN number in their database. I repeated it - still wrong. Got through finally by telling the chap on the phone my date of birth.
Turns out the number I was telling him was the speed of light in m/s (299 792 458 - "nippy back pain relief"). It's not my fault they have the same number of digits!
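For anyone unfamiliar with the system: each digit maps to a consonant sound and vowels are free filler, so a number becomes a pronounceable phrase. A rough letter-level sketch (the real system works on sounds, not letters, so this is only an approximation that happens to handle the example):

    # Letter-level sketch of the Major ("Tony Marloshkovips") system decoding.
    # Real decoding is phonetic; this approximation ignores that (e.g. it
    # would wrongly collapse the two l's of "lily"), but it handles the phrase.
    MAJOR = {
        's': 0, 'z': 0,                   # 0
        't': 1, 'd': 1,                   # 1
        'n': 2,                           # 2
        'm': 3,                           # 3
        'r': 4,                           # 4
        'l': 5,                           # 5
        'j': 6,                           # 6 (also sh/ch/soft g)
        'k': 7, 'c': 7, 'q': 7, 'g': 7,   # 7 (hard sounds; soft c/g mis-handled)
        'f': 8, 'v': 8,                   # 8
        'p': 9, 'b': 9,                   # 9
    }

    def decode_word(word):
        digits = []
        for ch in word.lower():
            d = MAJOR.get(ch)
            if d is None:
                continue     # vowels and unmapped letters carry no digit
            if digits and digits[-1] == d:
                continue     # a doubled sound within a word counts once ("nippy" -> 2,9)
            digits.append(d)
        return digits

    def decode(phrase):
        return ''.join(str(d) for w in phrase.split() for d in decode_word(w))

    print(decode("nippy back pain relief"))  # -> 299792458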
One career path I'm sort of musing about is working to create military robots. After all, the goals in designing a military robot are similar to those in designing Friendly AI: the robot must somehow know whom it's okay to harm and what "harm" is.
Does this seem like a good sort of career path for someone interested in Friendly AI?
I'm not an expert, but I don't think there is much more overlap with FAI than other domain AI projects have. The problems for military robots probably are more of the machine vision kind than of the meta-ethics kind.
Sounds like a good idea, but here are my reservations/warnings:
1) For the kind of work you describe, you would probably need a high-level security clearance and continued scrutiny on your life (to make sure you don't share it with the wrong people), and you probably wouldn't be able to publicly discuss your work. (i.e., where SIAI can hear it.)
2) What are your chances you'll actually get to work on the aspect of the problem that relates to Friendliness?
The scrutiny isn't so bad. They're mainly looking for illegality or potential for corruption. And even if you've committed illegal acts, so long as you own up to it, and it wasn't in the recent past (5 to 7 years), it's generally OK. Felonies are a different matter, of course.
A secret clearance involves an interview, fingerprinting, interviews of family and friends, interviews of neighbors, a credit check, and likely drug testing. Top secret clearances and above add polygraphs and heavy grilling, with ongoing monitoring for new developments. Clearances are renewed every few years, repeating the whole process.
Most of the military drone programs would be given to one large contractor like Lockheed Martin or NGIT, with lots of smaller subcontractors. A security clearance at secret level or above takes up to 9 months, costs the company over $10,000, and adds that much or more to that person's annual salary potential, so it's not something they hand out lightly.
Most contracting agencies put a small, already-cleared team on the activities that require it, and farm out most of the work (documentation, mundane code, etc.) to people without clearances. If they need more people with clearances, they tend to get temporary waivers for the duration of the work (90 days or less, for example). Most only see a small part of the whole, and you don't choose your projects; your company does.
These are not good environments to learn complex, high-level things like Friendliness.
It wasn't so much the background scrutiny I was worried about as:
"Alright, it's been fun doing this research on human-level intelligent robots. Oh, hey, I'm going to go to an AI conference in Shanghai..."
"Hahahahahaha! Good one! Um ... were you being serious?"
No. FAI is about figuring out how to implement precise preference, not an approximation of it appropriate for non-magical environments. Requires completely different tools.
It seems that to work on FAI, one has to become mathematician and theoretical computer scientist (whatever the actual career).
What do you mean by "non-magical environments"?
I gave a link! A non-magical environment offers limited expressive power, so there are few surprising situations that the given heuristics don't capture. With enough testing and debugging, you may get your weakly intelligent robot to behave. Where more possibilities are open, you have to get preference exactly right, or the decisions will be obviously wrong (see The Hidden Complexity of Wishes).
Your terminology was unclear but this definition is not - I would tend to call it an "organic" environment.
The difference between specialized FAI and general FAI is like the difference between adaptation executors and fitness maximizers. It's a big difference.
If you work on AGI and you make actual progress, then you have a moral obligation to keep it away from people who can't be trusted with it. You cannot satisfy this obligation while working for a military or a military contractor.
Am I the only one to think that no, creating military robots isn't a "good career path" towards friendly AI, because creating military robots is inherently unfriendly to humanity? Especially if you live in the US and know that your robots will be used in aggressive wars against poorer countries. It's some kind of crazy ethical blindness that most Americans seem to have for some reason, where "our guys" are human beings, but arbitrarily chosen foreigners deserve whatever they get... Just like this incident I saw on HN when one guy asked about career prospects working for the occupation force in Iraq, and another answered that it'll be an "amazing and unique experience". You'll note my reply there was much more concise.
Fixed it for you.
And the reason is evolved psychological instincts with pretty obvious selection benefits.
There are various arguments that building military robots is bad, but I don't think you've touched on any good ones. When you look at how unreliable human soldiers are on the field, creating military robots just seems like an obvious way to make things better for everyone involved. Fewer American casualties because we're using robots, and fewer civilian casualties because the robots are better at not shooting at civilians.
Also, FWIW, most military robots currently aren't the sort that shoot people - they do things like look around corners, draw fire, perform aerial surveillance, and detect/defuse bombs.
This is ironic. I wrote:
Then you wrote:
This happens to pixel-perfectly demonstrate my point about ethical blindness. Reread my quote again, then your quote, then mine, then yours again. Notice anything wrong? Anything missing?
You see, you omitted one pretty important group: everyone America calls "enemy combatants". If you think all of them are bad people and deserve to die, then you obviously don't get it. Repeat after me: America Starts Aggressive Wars. Then say it again because it's true and truth won't suffer from repetition. Say it as many times as you need to make it sink in, then come back and we will resume this discussion.
America will be killing those people with or without robots. We already have ways of wiping all of the enemy combatants off the map if we want to (for example nukes). Military technology is primarily about finding ways to 1) kill fewer of our own soldiers and 2) kill fewer people who aren't enemy combatants.
Not necessarily. All else equal, the less it costs to wage a war (in money, American lives, and good will), the more likely leaders are to actually start one.
Ignoring the question of whether that's desirable or not (politics is the mindkiller), reducing the cost of killing those people will lead to more of them being killed in marginal situations where such considerations matter.
Yes, that's one of the good arguments against robot soldiers I mentioned above. We're more likely not to care about the fate of our robot soldiers, and so would be less hesitant to send them into battle. Though it's still an open question whether that effect would trump any increased monetary cost per soldier, and whether the other benefits outweigh such concerns.
Human soldiers perform horribly in terms of following the rules of war, and above that do absolutely horrible things sometimes.
Also, this is definitely not the place to debate this, and you have to know a lot of people won't agree with you, so stop with the flamebait.
You don't even have to go as far as "America Starts Aggressive Wars" -- "Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered."
Look, I get the "Politics is the Mind Killer" mantra, and I agree that it would be fruitless to start a debate about something like abortion here -- it comes down to definitions and conventions about what is moral.
But when something is actually, demonstrably true, refusing to look at and examine the truth because it is painful to do so is not compelling. It doesn't even trigger most of the reasons in "politics is the mindkiller": both major U.S. political parties are just fine with most of the examples. The only two teams that can credibly be put in opposition here are "U.S.A." and "everyone else".
It is worth noting that to complete the argument someone needs to show that America starting aggressive wars is bad. The people starting such wars, it turns out, have their reasons.
Why flamebait? I stated a very well-known fact.
http://en.wikipedia.org/wiki/Bay_of_Pigs_Invasion
http://en.wikipedia.org/wiki/Operation_Power_Pack
http://en.wikipedia.org/wiki/Operation_Urgent_Fury
http://en.wikipedia.org/wiki/Operation_Just_Cause
More here: http://en.wikipedia.org/wiki/CIA_sponsored_regime_change
ETA: to tell the truth, until I dug up that last Wikipedia page just now for purposes of argument, I still had no clear idea how much this happened. And give these people autonomous killer robots? In the name of developing Friendly Intelligence?
1) Politics is the mind killer, 2) Agree denotationally but not connotationally
Bay of Pigs? Really? How about nailing us on the Philippines while you're at it. :-)
It isn't like there aren't recent examples to choose from.
That's why. Folks will disagree that's something that the US does, and pointing to things the US might have done decades ago won't convince them. There's no way to even debate this point without going down a potentially mind-killing rabbit hole, and I find it hard to believe you weren't aware of this when you posted it.
In case you weren't aware of it: I live in the US, and I've talked to a number of ordinary folks and a number of scholarly folks about it, and I don't tend to encounter people who would grant that the US starts aggressive wars. You should be able to see why someone who thinks that would be angry and vocal about the accusation.
Ooh... I thought we were having a factual disagreement. I apologize. Maybe this won't work as flamebait here :-)
"War is bad, the military industrial complex is evil," sounds good, and it hits all the right emotional buttons (care for humanity, etc.), but it is not necessarily true when all of the costs and benefits are taken into account. A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack. Destruction of infrastructure can open the way for rebuilding into a far better environment, and massive war spending can push the boundaries of technology. Reshaping political landscapes can cause huge culture shifts through decades which may result in much more open, and better, societies.
Suffering is terrible; death is abhorrent; and the benefits are uncertain enough that they should not be used as arguments to start an otherwise preventable war. But I do not see how we can appropriately judge the complex results of "war in general" on a timeline of decades or centuries.
What I can certainly agree with is that contributing to the military is bad on the margins, since it's already getting more than its share of resources thanks to others of a more bloodthirsty bent.
At this point I laughed with a kind of sad laugh. Everyone who thinks America will use military robots for self-defense, raise your hands! On the other hand, you've made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I'm sure you didn't mean to.
They will use them for defense as well as for offense. I've already seen several articles about American cities ready to purchase military drones for law enforcement purposes, and I would be very surprised if they were not also added to strategic military bases within America to defend against potential attackers. At the very least, when countries are making strategy decisions that may involve the military, the mere existence of drones will serve as a deterrent.
My point was to state the necessity of defense. If there are strong, warlike countries with military drones, such as the United States, then other countries had better start developing countermeasures to protect themselves. That, or ally themselves with the strong country in the hopes of falling under their protection rather than their ire. As such, staying ahead of the other countries is a valid strategy.
And I would certainly agree that US aggressiveness is stifling those very things in Iraq, Afghanistan, Iran, etc. The word 'fear' was poorly chosen. I was thinking more of what happened to Tibet and all those pacifists when they failed to muster an appropriate military defense: actual invasion and displacement or destruction.
-- Jack Handey's Deep Thoughts
How much harm do you contribute by working to enable military robots?
How much harm do you contribute by paying taxes to the US government, part of which are used to fund military robots?
How much harm do you contribute by existing, living in the US, and absorbing a huge amount of electricity and other natural resources?
Well, that was voted down pretty rapidly :)
However, I was being honest with my questions. I'd like to know what sort of utilon adjustments people assign to these different situations, even if it's just a general weighting like 'high' or 'low'.
My decision to not work for the military industrial complex is all about fuzzies, not utilons.
It can be useful to separate 'fuzzies' from 'practical benefit' but they can both be considered sources of utilons.
Creating military robots can be friendly, if:
Lbh fryy gur ebobgf gb nyy fvqrf, ercynpvat uhzna nezvrf, naq unir gurz evttrq gb abg npghnyyl svtug rnpu bgure, ohg vafgrnq gnxr njnl gur rssrpgvir cbjre bs gur tbireazragf gung jnagrq nyy gur jnef.
(Rot13)
I'd say yes, go for it. The value would be in gaining experience in designing AI systems that have to work in the real world -- a very different proposition from systems that only have to work in the laboratory or in the imagination.
I have very little in the way of morality, but I personally draw the line at supporting the military industrial complex. I don't think helping the military make robots that make kill decisions themselves has much to do with provable mathematical Friendliness.
It seems you are morally obliged to at least investigate possible mechanisms for tax evasion. But then, morality doesn't have all that much to do with consequences.
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate.
Also, I draw a distinction between something I am comfortable doing and the likely future progress of society as a whole. Killer robots aren't going away anytime soon, and except for the extra wars they will allow us to have, killer robots result in fewer US deaths and more effective military tactics than troops on the ground. I expect that within 10 years, US killer robots will be making kill decisions, or at least very strong kill suggestions that are followed 99% of the time. There's just too much data coming in too fast for a single human operator to process.
If the African totalitarians are still around in 25 years, the possibility of being conquered by an army of killer robots may make them more amenable to internationally monitored elections.
So good and bad things will come about as a result of the killer robot armies of the future. It's really the military industrial complex as a whole I object to; robots making kill decisions is one of the less objectionable things within the military industrial complex.
Uh, that's a pretty dumb thing to say. For one, starting a startup and selling it has rather broader consequences than a typical tax avoidance strategy. That's like suggesting moving to a third world country to cut down on your daily living expenses - your food and accommodation costs may indeed decrease but it significantly changes your life in all kinds of other ways as well. For another this would not be tax evasion but tax avoidance which has the rather significant difference of being entirely legal.
I'm fully aware of the distinction; I was playing with the ambiguous distinction between evasion and avoidance (as you say, the distinction being that avoidance is legal) by using the language of the person I replied to. I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
Cognitive neuroscience and cognitive psychology are far more relevant. A Friendly AI is a moral agent; it's more like a judge than a cruise missile. A killer robot must inflict harm appropriately but it does not need to know what "harm" is; that's for politicians, generals, and other strategists.
We have to extract the part of the human cognitive algorithm which, on reflection, encodes the essence of rational and moral judgment and action. That's the sort of achievement which FAI will require.
This is from the friendly AI document:
Actually, thinking in the third person is unnatural to humans and computers. It's just that writing logic programs in the third person is natural to programmers. Many difficult representational problems, however, become much simpler when you use deictic representations. There's an overview of this literature in the book Deixis in Narrative: A cognitive science perspective (Duchan et al. 1995). For a shorter introduction, see A logic of arbitrary and indefinite objects.
Actually this may be a better link.
Part of the problem is that 3rd person representations have extensional semantics. If Mary Doe represents her knowledge about herself internally as a set of propositions about Mary Doe, and then meets someone else named Mary Doe, or marries John Deer and changes her name, confusion results.
A more severe problem becomes apparent when you represent beliefs about beliefs. If you ask, "What would agent X do in this situation?", and you represent agent X's beliefs using a 3rd-person representation, you have a lot of trouble keeping straight what you know about who is who, and what agent X knows about who is who. If you just put a tag on something and call it Herbert, you don't know whether that means that you think the entity represented is the same entity named Herbert somewhere else, or that agent X thinks that (or thought that).
An even more severe problem becomes apparent when you try to build robots. Agre & Chapman's paper on Pengi is a key paper in the behavior-based robotics movement of the 1990s. If you want to use 3rd-person representations, you need to give your robot a whole lot of knowledge and do a whole lot of calculation just to get it to, say, dodge a rock aimed at its head. Using deictic representations makes it much simpler.
We could perhaps summarize the problem by saying that, when you use a 3rd-person representation, every time you use that representation you need to invoke or at least trust a vast and complicated system for establishing a link between the functional thing being represented, and an identity in some giant lookup table of names. Whereas often you don't care about all that, and it's nothing but an opportunity to introduce error into the system.
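A toy illustration of that name-table failure mode (my own sketch, not from any of the papers above): a third-person store keyed on names collides as soon as two referents share a name, while a deictic handle's identity is the handle itself.

    # Third-person store: facts are keyed by a global name.
    third_person = {}
    third_person[("Mary Doe", "employer")] = "Acme"
    third_person[("Mary Doe", "employer")] = "Globex"  # a second Mary Doe silently
    # overwrites the first; renaming (marriage) would orphan the old facts.

    # Deictic handles: identity is the handle itself, the description is a label.
    class DeicticRef:
        def __init__(self, description):
            self.description = description  # e.g. "the person in front of me"
            self.facts = {}

    mary_a = DeicticRef("the colleague I met on Tuesday")
    mary_b = DeicticRef("the Mary Doe from accounting")
    mary_a.facts["employer"] = "Acme"
    mary_b.facts["employer"] = "Globex"  # no collision: two handles, two referents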
The prince of one hundred thousand leaves is, among other things, a sort of fictionalized open-source project for horrifying eutopias. It might provide useful insights about that which we are least willing to consider.
I'd appreciate some feedback on a brain dump I did on economics and technology. Nothing revolutionary here. Just want people with more experience on the tech side to check my thinking.
Thanks in advance
http://modeledbehavior.com/2010/03/11/the-economics-of-really-big-ideas/
Einstein's Gravity Confirmed on a Cosmic Scale
http://news.nationalgeographic.com/news/2010/03/100310-einstein-theory-general-relativity-gravity-dark-matter-proof/
or
Confirmation of general relativity on large scales from weak lensing and galaxy velocities
http://www.nature.com/nature/journal/v464/n7286/full/nature08857.html
Has there been any activity on the Craigslist charity idea? If people are pursuing it, is there someplace to post updates, or an email list to join?