Open Thread: March 2010, part 2
The Open Thread posted at the beginning of the month has exceeded 500 comments – new Open Thread comments may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
One career path I'm sort of musing about is working to create military robots. After all, the goals in designing a military robot are similar to those in designing Friendly AI: the robot must know somehow who it's okay to harm and what "harm" is.
Does this seem like a good sort of career path for someone interested in Friendly AI?
I'm not an expert, but I don't think there is much more overlap with FAI than other domain AI projects have. The problems for military robots probably are more of the machine vision kind than of the meta-ethics kind.
No. FAI is about figuring out how to implement precise preference, not an approximation of it appropriate for non-magical environments. Requires completely different tools.
It seems that to work on FAI, one has to become a mathematician and theoretical computer scientist (whatever the actual career).
What do you mean by "non-magical environments"?
I gave a link! A non-magical environment gives limited expressive power, so there are few surprising situations that the given heuristics fail to capture. With enough testing and debugging, you may get your weakly intelligent robot to behave. Where more possibilities are open, you have to get preference exactly right, or the decisions will be obviously wrong (see The Hidden Complexity of Wishes).
Your terminology was unclear but this definition is not - I would tend to call it an "organic" environment.
Sounds like a good idea, but here are my reservations/warnings:
1) For the kind of work you describe, you would probably need a high-level security clearance and continued scrutiny on your life (to make sure you don't share it with the wrong people), and you probably wouldn't be able to publicly discuss your work. (i.e., where SIAI can hear it.)
2) What are your chances you'll actually get to work on the aspect of the problem that relates to Friendliness?
The scrutiny isn't so bad. They're mainly looking for illegality or potential for corruption. And even if you've committed illegal acts, so long as you own up to them, and they weren't in the recent past (5 to 7 years), it's generally OK. Felonies are a different matter, of course.
A secret clearance involves an interview, fingerprinting, interviews of family and friends, interviews of neighbors, and a credit check, and will likely require drug testing. Top secret clearances and above lead to polygraphs and heavy grilling, with monitoring for new developments. They're renewed every few years, going through the process again.
Most of the military drone programs would be given to one large contractor like Lockheed Martin or NGIT, with lots of smaller subcontractors. A security clearance at secret level or above takes up to 9 months, costs the company over $10,000, and adds that much or more to that person's annual salary potential, so it's not something they hand out lightly.
Most contracting agencies put a small, already-cleared team on the activities that require it, and farm out most of the work (documentation, mundane code, etc.) to people without clearances. If they need more people with clearances, they tend to get temporary waivers for the duration of the work (90 days or less, for example). Most only see a small part of the whole, and you don't choose your projects; your company does.
These are not good environments to learn complex, high-level things like Friendliness.
It wasn't so much the background scrutiny I'm worried about as this:
"Alright, it's been fun doing this research on human-level intelligent robots. Oh, hey, I'm going to go to an AI conference in Shanghai..."
"Hahahahahaha! Good one! Um ... were you being serious?"
Yeah, that could get you in big trouble.
Yep. And so could the appearance on the internet of an e-book about "How to build a human-level armed android, by Warrigal", when Warrigal has worked at such a job.
And if you go to a potentially hostile country without telling them ... well, I guess you'll get the option of a PMITA federal prison, or solitary.
I'd say yes, go for it. The value would be in gaining experience in designing AI systems that have to work in the real world -- a very different proposition from systems that only have to work in the laboratory or in the imagination.
I have very little in the way of morality, but I personally draw the line at supporting the military industrial complex. I don't think helping the military make robots that make kill decisions themselves has much to do with provable mathematical Friendliness.
It seems you are morally obliged to at least investigate possible mechanisms for tax evasion. But then, morality doesn't have all that much to do with consequences.
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate.
Also, I draw a distinction between something I am comfortable doing, and the likely future progress of society as a whole. Killer robots aren't going away anytime soon, and except for the extra wars they will allow us to have, killer robots result in fewer US deaths and more effective military tactics than troops on the ground. I expect that US killer robots will be making kill decisions, or at least very strong kill suggestions that are followed 99% of the time, within 10 years. There's just too much data coming in too fast for a single human operator to be able to process.
If the African totalitarians are still around in 25 years, the possibility of being conquered by an army of killer robots may make them more amenable to internationally monitored elections.
So good and bad things will come about as a result of the killer robot armies of the future. It's really the military industrial complex as a whole I object to; robots making kill decisions is one of the less objectionable things within the military industrial complex.
Uh, that's a pretty dumb thing to say. For one, starting a startup and selling it has rather broader consequences than a typical tax avoidance strategy. That's like suggesting moving to a third world country to cut down on your daily living expenses - your food and accommodation costs may indeed decrease, but it significantly changes your life in all kinds of other ways as well. For another, this would not be tax evasion but tax avoidance, which has the rather significant difference of being entirely legal.
I'm fully aware of the distinction; I was playing with the ambiguous distinction between evasion and avoidance (as you say, the distinction being that avoidance is legal) by using the language of the person I replied to. I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
I assumed wedrifid knew the difference and was suggesting you were morally bound to evade rather than merely avoid taxes if you draw the line at supporting the military industrial complex. I don't necessarily agree with that but I took that to be his point.
I would have thought that maximizing tax avoidance is something that any aspiring rationalist ought to be doing as a matter of course.
The fact that you can go to jail for tax evasion seems like a pretty profound difference from tax avoidance to me. The whole tax structure is 'just' the definitions given by the rule of law.
I don't think I'm morally bound to evade taxes for the same reason I'm not morally bound to stop the world's massive amounts of animal suffering. My utility function breaks if I take my morality too seriously. As you say, I am somewhat bound morally to try and evade taxes or even actively stage insurrection against my government. Both of those seem like very bad ideas, as the state will just crush me.
Not working for the government in lieu of trying to bring down the government is similar to my decision to eat less meat rather than trying to make the whole world eat less meat. Yes, I am aware that these are not anywhere close to perfectly analogous decisions.
I don't particularly want to avoid taxes, either - I like living in a country with a government.
I like living in a country with a government compared to Somalian anarchism, but not compared to libertarian utopia. This is getting close to politics.
As good a reason as any to drop the subject of tax avoidance.
I don't see the contradiction. The government creates the tax code with at least the stated intention of encouraging or subsidizing certain behaviours over others. That only works if people respond rationally to the incentives.
From the individual rationalist's point of view one should aim to optimize one's resources. In the context of taxes that generally means arranging your financial affairs to minimize the taxes paid without breaking the law. You can then choose how to best meet your own goals by allocating the money you save as you see fit.
It is only rational to not avoid taxes if you either believe the effort required to avoid them is not worth the money saved or if you believe that the optimal use of the money is to give it to the government. It seems unlikely in the latter case that the optimal amount to give to the government just happens to be the very amount they take from you so you should probably be voluntarily donating a larger portion of your income to the government. If you live in the US you should go here.
Since we were talking about choice of career among other things, it's worth stating that your actual incentive here more closely resembles "maximizing your after-tax income" than "minimizing your taxes paid".
Am I the only one to think that no, creating military robots isn't a "good career path" towards friendly AI, because creating military robots is inherently unfriendly to humanity? Especially if you live in the US and know that your robots will be used in aggressive wars against poorer countries. It's some kind of crazy ethical blindness that most Americans seem to have for some reason, where "our guys" are human beings, but arbitrarily chosen foreigners deserve whatever they get... Just like this incident I saw on HN when one guy asked about career prospects working for the occupation force in Iraq, and another answered that it'll be an "amazing and unique experience". You'll note my reply there was much more concise.
How much harm do you contribute by working to enable military robots?
How much harm do you contribute by paying taxes to the US government, part of which are used to fund military robots?
How much harm do you contribute by existing, living in the US, and absorbing a huge amount of electricity and other natural resources?
Well, that was voted down pretty rapidly :)
However, I was being honest with my questions. I'd like to know what sort of utilon adjustments people assign to these different situations, even if it's just a general weighting like 'high' or 'low'.
I have not assigned numbers - it is not a simple question.
My decision to not work for the military industrial complex is all about fuzzies, not utilons.
It can be useful to separate 'fuzzies' from 'practical benefit' but they can both be considered sources of utilons.
As I see it, it's less about how much harm those specific things do, and more about how viable the alternatives are. I expect that all governments makes tax avoidance/evasion difficult, and I suspect that paying taxes to any government will support a military. The lifestyle changes involved in actually living sustainably (as opposed to being 'slightly better than the US average' or applying greenwash) seem pretty significant and possibly unattainable for most of us, as well. (I could be wrong on the latter in a general sense; I haven't looked into it, since I'm already relatively sure that it's beyond what I, personally, could manage.) Given that Warrigal was asking about the career move, though, I expect that he does have other viable options that could be pursued without completely turning his life upside down, and that's a significant difference between this decision and the other two.
Creating military robots can be friendly, if:
Lbh fryy gur ebobgf gb nyy fvqrf, ercynpvat uhzna nezvrf, naq unir gurz evttrq gb abg npghnyyl svtug rnpu bgure, ohg vafgrnq gnxr njnl gur rssrpgvir cbjre bs gur tbireazragf gung jnagrq nyy gur jnef.
(Rot13)
Unfortunately, this isn't a realistic option if you're an employee at a big military contractor, which is the most likely scenario...
Well, yeah, there is no way someone at standard human level would pull off what happened in that story.
Fixed it for you.
And the reason is evolved psychological instincts with pretty obvious selection benefits.
I don't think that's an accurate correction. Because America is the current hegemonic power, Americans can get away with feeling that other nations aren't "real" in the sense the USA is. For example, when considering some hypothetical situation that would concern the whole planet, an American might only consider how the USA would react, while anyone else in the same situation would, in addition to the reaction of their own nation, at the very least also have to consider how the USA reacts, and might even consider other nations, since their situation is more obviously symmetrical to their own.
I'm afraid I don't know what this means.
There might be pragmatic realities that force non-Americans to consider the reactions of foreigners more than Americans must. Americans have two oceans and the world's strongest military to keep a lot of foreign troubles far away; other people do not. But this isn't evidence that Americans care less about foreigners than those from other countries do. It sounds like you're talking about a political blindness instead of an ethical blindness. Besides, there is equally good reason to think America's hegemonic status makes Americans more worried about foreign goings-on, since American lives and American business concerns are more often at stake.
Not "real" is the best description I have. You could say having the same sort of attitude towards other nations you might have towards Oz, Middle Earth or the Empire from Star Wars even though you intellectually know that they really exist, but that only comes close to what I mean. I must stress that not all Americans have this attitude, but some seem to do, and thats enough to influence the discourse.
I was thinking more of e.g. first contact situations in SF stories and things like that, not necessarily normal international politics, but I think it extends to all fields: domestic politics (the amount and the kind of consideration given to the fact that a policy seems to work well somewhere else), pop culture, sports, science, language learning - wherever one might consider other nations, Americans have more leeway not to do so. This doesn't by necessity have to extend to ethical considerations, but when cousin_it observes that it appears to, it seems inappropriate to me to "correct" that out.
Why was this voted down? Was there anything in this post that isn't either objectively true (Americans have more leeway to ignore other nations) or clearly marked as speculation ("seem to")? Is it inherently irrational to consider the hypothesis that cousin_it's observation was meant exactly as stated, and then to speculate about what might be behind this observation?
Exactly zero evidence has been presented that Americans have this ill-defined attitude at a higher rate than non-Americans.
No reason given to think this is the case on balance.
The obvious and straightforward interpretation of cousin_it's comment was that he was referring to American nationalism. A real and quite common phenomenon in which Americans don't give a lick about people who don't live in their country (in civilized places this is referred to as racism). I've met plenty of people with this view. It is a disgusting and immoral attitude. That said, it is a near ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We're good at it. Really good. We do it like it's our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He's been commenting elsewhere throughout this conversation anyway.
(Not my downvote, btw)
Yes! Thank you! Finally, a human user says what I've been trying to say all along! (See for example here.)
On my first visit to Earth (or perhaps the first visit of one of my copies before a reconciliation), my reaction was (translated from the language of my logs):
"The Alpha species [i.e. humans] inflicts disutility on its members based on relative skin redness. I'm silver. Exit!"
Because I thought it would be obvious enough. Americans are less likely to learn foreign languages; most Americans don't even have a passport; it's easier to write a science paper without referencing any non-American research (not that I think this is done at a significant rate, but the equivalent would be unthinkable elsewhere); foreign movies are generally either ignored or remade (and set in the USA if possible); foreign trade is a smaller percentage of GDP than in just about any other developed nation; it's possible to "buy American" for a greater range of products than the equivalent anywhere else; and America has the top leagues for the sports it cares about. (It's not just that America cares for different sports than the rest of the world: for almost all countries, the top level of the sport that country cares most about is at least in part played elsewhere, so a soccer fan in e.g. Romania has to pay attention to the English Premier League, the Spanish Primera División, etc. Even the English and Spanish fans have incentive to pay attention to each other's leagues, because they are at roughly equal level and the top teams regularly play each other. If America cared about soccer, the top league would be there, so Americans still wouldn't have any reason to pay attention to foreign sports.)
Again, why the down-vote? Is there any factual error or is giving evidence when asked not welcome here?
I think most of those things could be expected regardless of whether America has any such putative hegemonic status. Most Americans don't have passports because they can't afford to travel to another continent, and the number is rising now that passports are required to visit other countries in North America. Getting a passport in the US is a fairly annoying, expensive process, so I'm not surprised most people haven't bothered. Ditto with the foreign languages - most Americans don't meet or talk to people who don't speak American.
I haven't been able to find a source online - do most Chinese people speak foreign languages and have passports? Are they required?
Getting a passport is a bother everywhere, the point is that Americans don't really need a passport because their country is huge, rich and powerful and they can take a vacation in whatever climate they like without ever leaving their borders. People in other developed nations would have to make much greater sacrifices to never travel abroad.
That's exactly my point! They can do that without missing all that much, unlike most of the planet.
IIRC compulsory foreign language instruction (mostly in English) starts in third grade, and many educated Chinese learn a third/fourth language later. For many Chinese, Mandarin is effectively an L2 language, so they know their native dialect, Mandarin, and some English. The state of English learning is mostly horrible and only a minority can communicate effectively, but I'd think that Chinese on average speak better English than non-native-speaker Americans speak Spanish, and the difficulty is much greater.
I'm not all that clear about the passport situation/foreign travel and China is a bad example anyway because it is itself an enormous country and very "nation-centric", but a huge number of Chinese study abroad, while there is no comparable reason for Americans to do so because they already have many of the most prestigious universities.
That sounds like nationalism rather than racism to me. The country you live in has only a loose correlation with the colour of your skin. If people favoured countries which had a strong majority of people of a particular ethnicity that might be evidence for racism.
I was speaking loosely in the parenthetical. Nationalism has a strong tendency to manifest as racism and racism has a similar tendency to manifest as nationalism. They're highly correlated but yes, conceptually distinct.
While everything you say about nationalism is true, it's not obvious to me that it explains what cousin_it was talking about, at least not to its full extent. Degradation of other people through nationalism usually evokes hate ("those damned X!"), while the linked comment seemed too cheerful for that; it's not like it encouraged anyone to "help show it to those stinkin' Arabs" or anything like that. As if the fact that someone might be hurt simply didn't occur to them. There has been plenty of that in other historical cases of nationalism, but I think usually only in similarly asymmetrical situations. Nationalism in symmetrical situations seems to be of the plain hate kind.
Nationalism almost always displays as willful ignorance or apathy about the condition of those outside the nation. It's nation-centrism, in other words. Hatred is an extreme case (thus the moniker "ultra-nationalism").
This just isn't true. At all. I'm not even sure where you would get it. There are nationalists all around the world who do not express hate toward other nations, even in cases of power symmetries.
More importantly: Why are we arguing about this? Cousin_it isn't some old philosopher or public intellectual who we can't reach for clarification. If he wants to correct my understanding of his comment let him do it.
The original disagreement wasn't about the term nationalism (and I never claimed that nationalism didn't explain it, only that what you said about nationalism up to that point didn't), so you seem to be arguing my point here: For the reasons I described it's easier for Americans to be "ignorant about the condition of those outside the nation".
You can't keep hurting someone in a symmetrical conflict and not even notice you're doing it, because they will hurt you back, and then you will want revenge in turn.
You seem to be of the opinion that you can't even coherently/rationally (?) think a certain thing and I disagree. That disagreement is independent of the question whether anyone had actually been thinking that.
EDIT: Nation-centrism is close to what I meant with not feeling that other nations are "real".
"War is bad, the military industrial complex is evil," sounds good, and it hits all the right emotional buttons (care for humanity, etc.), but it is not necessarily true when all of the costs and benefits are taken into account. A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack. Destruction of infrastructure can open the way for rebuilding into a far better environment, and massive war spending can push the boundaries of technology. Reshaping political landscapes can cause huge culture shifts through decades which may result in much more open, and better, societies.
Suffering is terrible; death is abhorrent; and the benefits are uncertain enough that they should not be used as arguments to start an otherwise preventable war. But I do not see how we can appropriately judge the complex results of "war in general" on the timeline of decades or centuries.
What I can certainly agree with is that contributing to the military is bad on the margins, since it's already getting more than its share of resources thanks to others of a more bloodthirsty bent.
At this point I laughed with a kind of sad laugh. Everyone who thinks America will use military robots for self-defense, raise your hands! On the other hand, you've made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I'm sure you didn't mean to.
They will use them for defense as well as for offense. I've seen several articles already about American cities ready to purchase military drones for law enforcement purposes, and I would be very surprised if they were not also added to strategic military bases within America to defend against potential attackers. At the very least, when countries are making strategy decisions that may involve the military, the mere existence of drones will serve as a deterrent.
My point was to state the necessity of defense. If there are strong, warlike countries with military drones, such as the United States, then other countries had better start developing countermeasures to protect themselves. That, or ally themselves with the strong country in the hopes of falling under their protection rather than their ire. As such, staying ahead of the other countries is a valid strategy.
And I would certainly agree that US aggressiveness is stifling those very things in Iraq, Afghanistan, Iran, etc. The word 'fear' was poorly chosen. I was thinking more of what happened to Tibet and all those pacifists when they failed to muster an appropriate military defense: actual invasion and displacement or destruction.
Oddly I don't seem to have a reference handy, but several US cities already use robots in law enforcement. iRobot and Foster-Miller really took off after the success of their robot volunteers at the WTC.
-- Jack Handey's Deep Thoughts
There are various arguments that building military robots is bad, but I don't think you've touched on any good ones. When you look at how unreliable human soldiers are on the field, creating military robots just seems like an obvious way to make things better for everyone involved. Fewer American casualties because we're using robots, and fewer civilian casualties because the robots are better at not shooting at civilians.
Also, FWIW, most military robots currently aren't the sort that shoot people - they do things like look around corners, draw fire, perform aerial surveillance, and detect/defuse bombs.
This is ironic. I wrote:
Then you wrote:
This happens to pixel-perfectly demonstrate my point about ethical blindness. Reread my quote again, then your quote, then mine, then yours again. Notice anything wrong? Anything missing?
You see, you omitted one pretty important group: everyone America calls "enemy combatants". If you think all of them are bad people and deserve to die, then you obviously don't get it. Repeat after me: America Starts Aggressive Wars. Then say it again because it's true and truth won't suffer from repetition. Say it as many times as you need to make it sink in, then come back and we will resume this discussion.
America will be killing those people with or without robots. We already have ways of wiping all of the enemy combatants off the map if we want to (for example nukes). Military technology is primarily about finding ways to 1) kill fewer of our own soldiers and 2) kill fewer people who aren't enemy combatants.
Ignoring the question of whether that's desirable (politics is the mindkiller), reducing the cost of killing those people will lead to more of those people being killed in marginal situations where such considerations matter.
Yes, that's one of the good arguments against robot soldiers I mentioned above. We're more likely to not care about the fate of our robot soldiers, and so would be less hesitant to send them into battle. Though it's still an open question whether that effect would trump any increased monetary cost per soldier (if any) and whether the other benefits outweigh such concerns.
Human soldiers perform horribly in terms of following the rules of war, and above that do absolutely horrible things sometimes.
Not necessarily. All else equal, the less it costs to wage a war (in money, American lives, and good will), the more likely leaders are to actually start one.
Also, this is definitely not the place to debate this, and you have to know a lot of people won't agree with you, so stop with the flamebait.
Why flamebait? I stated a very well-known fact.
http://en.wikipedia.org/wiki/Bay_of_Pigs_Invasion
http://en.wikipedia.org/wiki/Operation_Power_Pack
http://en.wikipedia.org/wiki/Operation_Urgent_Fury
http://en.wikipedia.org/wiki/Operation_Just_Cause
More here: http://en.wikipedia.org/wiki/CIA_sponsored_regime_change
ETA: to tell the truth, until I dug up that last Wikipedia page just now for purposes of argument, I still had no clear idea how much this happened. And give these people autonomous killer robots? In the name of developing Friendly Intelligence?
1) Politics is the mind killer, 2) Agree denotationally but not connotationally
That's why. Folks will disagree that's something that the US does, and pointing to things the US might have done decades ago won't convince them. There's no way to even debate this point without going down a potentially mind-killing rabbit hole, and I find it hard to believe you weren't aware of this when you posted it.
In case you weren't aware of it: I live in the US, and I've talked to a number of ordinary folks and a number of scholarly folks about it, and I don't tend to encounter people who would grant that the US starts aggressive wars. You should be able to see why someone who thinks that would be angry and vocal about the accusation.
Ooh... I thought we were having a factual disagreement. I apologize. Maybe this won't work as flamebait here :-)
Bay of Pigs? Really? How about nailing us on the Philippines while you're at it. :-)
It isn't like there aren't recent examples to choose from.
You don't even have to go as far as "America Starts Aggressive Wars" -- "Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered."
Look, I get the "Politics is the Mind Killer" mantra, and I agree that it would be fruitless to start a debate about something like abortion here -- it comes down to definitions and conventions about what is moral.
But when something is actually, demonstrably, true, refusing to look at and examine the truth because it is painful to do so is not compelling. It doesn't even trigger most of the reasons in "politics is the mindkiller" -- both major U.S. Political parties are just fine with most of the examples. The only two teams that can credibly be put in opposition here are "U.S.A." and "Everyone else".
It is worth noting that to complete the argument someone needs to show that America starting aggressive wars is bad. The people starting such wars, it turns out, have their reasons.
[half-ironic] Yep. Some countries are just in desperate need a good ol' fashioned ass-kicking. [/half-ironic]
If you work on AGI and you make actual progress, then you have a moral obligation to keep it away from people who can't be trusted with it. You cannot satisfy this obligation while working for a military or a military contractor.
This is a good question, I would appreciate more discussion of it on LW. I am wondering about similar issues: my research involves computer vision, the most obvious applications of which are for surveillance and security. One does not need to be a science fiction author or devotee to imagine powerful computer vision tools or military robots being used for evil.
People can use anything for evil if they want - I don't see how computer vision is distinguished on that metric.
You just succumbed to the fallacy of gray. Computer vision is more easily used for evil than e.g. water purification technology.
Fair enough.
Whether something can be used for evil or not is the wrong question. It's better to ask "How much does computer vision decrease the cost of evil?" Many of the bad things that could be done with CV can be done with a camera, a fast network connection, and an airman in Nevada, just as many of the good medical applications can be done by a patient postdoc or technician.
Better still is to ask, "What are the benefits and harms of doing this rather than something else, including cascading consequences on to the indefinite future?" Which, of course, is murderously hard to answer in cases this far removed from direct consequences.
Which is what I meant when I said computer vision research was not distinguished. Although upon consideration I would weaken the claim to "not strongly distinguished", which might still be enough to justify doing something else.
The difference between specialized FAI and general FAI is like the difference between adaptation executors and fitness maximizers. It's a big difference.
Is specialized FAI even a meaningful term? ISTM that to implement actual friendliness even in a specialized application an AI needs capabilities that imply AGI.
It's a nonstandard term that seemed appropriate to the discussion. By specialized FAI, I mean an AI that reliably does the thing it was made to do in a specific context.
Isn't that the same as specialized AI? I don't think anybody deliberately makes specialized AIs that don't work.
The problems involved in creating ethical military robots are vastly different from those involved in general AI. Ron Arkin's Governing Lethal Behavior in Autonomous Robots does a good job of describing how one should think when building such a thing. Basically, there are rules for war, and the trick is to just implement those in the robot, and there's very little judgement left over. To hear him explain it, it doesn't even sound like a very hard problem.
Then I'm not sure he understands the problem. How does the robot tell the difference between an enemy soldier and a noncombatant? When they're surrendering? When they're dead/severely wounded?
The rules of war themselves are fairly algorithmic, but applying them is a different story.
Well there's a bit of bracketing at work here. Distinguishing between an enemy soldier and a noncombatant isn't an ethical problem. He does note that determining when a soldier is surrendering is difficult, and points out the places where there really is an ethical difficulty (for example, someone who surrenders and then seems to be aggressive).
I'd appreciate some feedback on a brain dump I did on economics and technology. Nothing revolutionary here. Just want people with more experience on the tech side to check my thinking.
Thanks in advance
http://modeledbehavior.com/2010/03/11/the-economics-of-really-big-ideas/
It looks correct to me, but I'm not an experienced judge of such things.
Re: "Already we have computer programs which can re-write existing to programs to run faster. These programs can also re-write themselves to run faster. However, they cannot rewrite themselves to become better at re-writing themselves faster."
You mean that they can't do that alone? Refactoring programs help speed up their own development, and make it easier and faster to make improvements in a set of programs that often includes their own source code.
It's not total automation - but partial automation is still very significant progress.
Tim,
Thanks, input like this helps me try to think about the economic issues involved.
Can you talk a little about the depth of recursion already possible? How much assistance are these refactoring programs providing? Can the results be used to speed up other programs, or can they only improve their own development, etc.?
To quote from my essay relating to this:
"Refactoring: Refactoring involves performing rearrangements of code which preserve its function, and improve its readability and maintainability - or facilitate future improvements. Much refactoring is done by daemons - and their existence massively speeds up the production of working code. Refactoring daemons enable tasks which would previously have been intractable."
Refactoring programs are indispensable for most application programmers in Java and other machine-readable languages. They are of limited use for C/C++ because of preprocessor mangling. When refactoring hit the mainstream in Eclipse, years ago, many programmers found their productivity increased dramatically, and they also found they could easily perform refactorings that would have been practically impossible to perform manually.
Refactoring is a fairly general tool. I am not sure about your "recursion" question. Modeling this as some kind of recursive function that bottoms out somewhere does not seem particularly appropriate to me. Rather, it represents the partial automation of programming. Similarly, unit tests are the automation of testing, and compilers are the automation of assembly.
Computer programming and software development have many places where automation is possible, and the opportunities are gradually being taken up.
Repost from last open thread in the desperate hope that the lack of interest was only due to people not seeing it all the way at the bottom:
Why don't you PM me your phone number and/or email address and we can try to arrange something?
Good to hear from you!
Anybody else think the modern university system is grossly inefficient? Most of the people I knew in undergrad spent most of their time drinking to excess and skipping classes. In addition, barely half of undergraduates get their B.A. within 6 years of starting. The whole system is hugely expensive in both direct subsidies and opportunity costs.
I think that society would benefit from switching to computer based learning systems for most kinds of classes. For example, I took two economics courses that incorporated CBL elements, and I found them vastly more engrossing and much more time-efficient than the lecture sections. Instead of applying to selective universities (which gain status by denying more students entry than others) people could get most of their prerequisites out of the way in a few months with standard CBL programs administered at a marginal cost of $0.
Clearly universities are grossly inefficient at teaching, but as Robin Hanson would say, School isn't about Learning.
The education system in general in most Western countries is grossly inefficient but that is largely because it is not structured in a way that rewards educating efficiently, and that is exactly how most of the participants want it.
I certainly agree that CBL is useful, and the system as a whole is riddled with inefficiencies and perverse incentives.
However, I think a lot of the problem there is actually a matter of cultural context. Prior to entering college, those undergrads learned that drinking is something fun grownups are allowed to do, whereas listening to the teacher and doing homework are trials to be either grimly endured, or minimized by good behavior in other areas.
Yep. They mainly persist as a way to sort workers: those that can get through, and with a degree in X at university Y, are good enough to be trusted to job Z (even though, as is usually the case, nothing in X actually pertains to Z -- you're just signaling your general qualifications for being taken on to do job Z).
Having the degree is a good proxy for certain skills like intelligence, diligence, etc. Why not test for intelligence directly? Because in the US and most industrialized countries, it's illegal, so they have to test you by proxy -- let the university give you an IQ test as a standard for admission, but not call it that.
Shifting to a system that actually makes sense is going to require overcoming a lot of inertia.
I agree with this analysis to some extent. I'm not sure I'm willing to grant that the primary purpose of universities is a way to sort workers, but that is a major thing they're used for, and I tend to argue at length that they should get out of that business. I argue as much as possible against student evaluation, grading, and granting degrees. One of the first arguments that pops up tends to be, "But how will people know who to hire / let into grad school?"
But I don't think it's the University's job to answer that question.
Do you have a statistic to back up the 6-years figure? The graduation rate appears higher than that to me.
6 year graduation rates
You're from Illinois, right? Its graduation rate of 59% is barely higher than the US average of 56%. UIUC's rate is 80%, ISU 60%, and NEIU 20%. NEIU isn't very big, but there might be lots of similar schools. (ETA: actually NEIU+CSU are already pretty close to canceling out UIUC.)
Am I from Illinois? No, actually - Maryland. Checking the data, it seems I'm in a very strange statistical anomaly: 82% in 6 years. At a state university.
No wonder my impressions were skewed.
You are at the state flagship. 82% at College Park is roughly equal to Urbana-Champaign's 80%. The point is that top schools pick students who can get through and/or do a better job of getting students through.
This is the figure I was referencing. 53% graduate in 6 years. Charles Murray (of The Bell Curve fame) believes that most people just aren't smart enough for college-level work. Based on my experience, "college level work" isn't very difficult, so I remain skeptical.
Only if I consider the modern university system (or education institutions in general) to have a primary purpose of conveying knowledge.
Are you talking about the US? The statistic suggests that you're talking about somewhere specific. I'll assume the US.
You have several claims that are not obviously related. That's not to say that I disagree with any of them, though I probably would disagree with the implicit claims that relate them, if I had to guess what they were. One red flag is the conflation of public and private schools, which have different goals and methods. The 6 year graduation rate is really about public schools, right? But then you invoke selective schools in the last paragraph.
The six-year rate is a nationwide average for the United States.
I stand by my statement.
Thank you, this was a quite useful link for me. (Finnish colleges currently charge no tuition fees, and some are arguing for their introduction on the basis that this would make people graduate faster; those statistics show that US students don't really graduate any faster than Finnish ones.)
Well, then I guess I'm triple special for getting a degree straight from high school in 2.5 years. In engineering. [/toots horn]
College is often a way for 18 year olds to delay social adulthood for 4-6 years. This American Life did a very good episode on the drinking culture at the USA's #1 party school, Penn State, that proves this point beyond a reasonable doubt. Time and time again binge drinking students say that the reason they are doing it and the reason they love Penn State is because this is the only chance in their lives they are going to have to live this lifestyle.
TAL sells the MP3 of the show or it's widely available on torrent sites with a simple Google search.
I think you are confusing Penn State with the University of Pennsylvania.
Penn is more respected than Penn State, but Penn State is one of the top public schools in the USA -- #15 on US News's rather controversial list. http://colleges.usnews.rankingsandreviews.com/best-colleges/national-top-public
It's not that meaningful of a ranking; Penn State was anointed the #1 party school by an online poll done by the Princeton Review. It did however prove that out of all of the schools with strong school spirit and insane binge drinking cultures, the students at Penn State are the best at rigging online polls. In other words, Penn State is the #1 party school because the students decided they wanted to be considered the #1 party school.
Oh, and to add to my earlier comment, another major problem with the system is the difficulty with which you can dismiss employees, which extends through most industrialized countries. This makes it much harder to take a chance on anyone, significantly restricting the set of who has a chance at any job, and thus requiring much more proof in advance.
And what frustrates me the most is that most such regulations/legal environments are called "pro-worker" and the debate on them is framed from the assumption that if you want to help workers you must want these laws. No, no, no! These laws make labor markets much more rigid.
Remember, whatever requirement you force on employers as a surprise, they will soon take into account when looking to hire their next albatross. There's no free lunch! These benefits can only be transient and favor only people lucky enough to be working at a particular time. As time goes by, you just see more and more roundabout, wasteful ways to get around the restrictions. (Note the analogy to "push the fat guy off the trolley" problems...)
Request for help: I can do classroom programming, but not "real-world" programming. If the problem is to, e.g. take in a huge body of text, collect aggregate statistics, and generate new output based on those stats, I can write it. (My background is in C++.)
However, in terms of writing apps with a graphical user interface, taking input in real time, making use of existing code libraries, etc., I'm at a loss. I'd like to know what would be a good introduction to this more practical level.
To better explain where I am, here is what I have tried so far: I've downloaded a lot of simple open source programs that have a lot of source files. But strangely, whenever I compile one myself and get it to run, it just runs on the command screen blindingly fast and then closes, as if I'm missing some important step. (How are you normally expected to compile open-source programs?)
I've also worked with graphics libraries and read a book (IIRC, Zen and the Art of Direct3D Game Programming) and was able to use that for writing algorithms that determine the motion of 3D objects, given particular user inputs, but it was pretty limited in domain.
I've downloaded Visual C# Express, which was actually pretty helpful in terms of showing how you can create GUIs and then jump to the corresponding code that it calls. I wrote simple programs with that and even bought a book on how to use it, but it turned out to require very circuitous routes to do simple things.
Finally, because it's so highly recommended, and I've read Douglas Hofstadter's introduction to it, I thought about programming in Lisp, but the only programming environment for it that I could get to work was the plain old b/w command line, and I figured I'd need more functionality than that, and also the libraries to do more than just computation. (I'm experienced with Mathematica, which seems similar in a lot of ways to Lisp.)
So, any specific suggestions on where I should go from here?
Unfortunately this about sums up the current state of 'real world' programming.
It is helpful to have a concrete goal to work towards rather than merely coding for the sake of learning. Learning 'on the job' is helpful in this regard as there is usually a somewhat defined set of requirements and there is added motivation and supervision that comes with being paid to write code.
If you are trying to learn on your own I'd suggest trying to set yourself the task of writing a simple program to do something fairly clearly defined and then work towards that. Simply reading through open source code (or any third party code) is not something I've found terribly helpful as a learning exercise. More useful is to set yourself the task of fixing a specific bug or adding a specific feature as this will help direct your investigation.
Learning how to use the debugging tools available to you is also important. Understanding how software is put together can be greatly aided by stepping through code in a good debugger.
C# is pretty good for 'real world'/GUI development. Personally I think it is the best option overall at the moment for that kind of programming but you will find language choice is a bit of a religious war issue.
I second that recommendation for (non-web) GUI development. Even as someone who had never programmed in C# I found learning the language the simplest option when I needed to create a visual desktop application. (Of course, given that I knew both Java and C++ it wasn't exactly a steep learning curve.)
Can you recommend a tutorial on GUI development with C#?
I'm afraid not. I just kind of winged it.
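If it helps, the basic skeleton is small enough to show from memory. Here's a minimal Windows Forms sketch (treat the details as my assumptions rather than tutorial-grade code; MainForm and the button text are just names I made up):

```csharp
using System;
using System.Windows.Forms;

// Minimal Windows Forms sketch: one window, one button, one event handler.
class MainForm : Form
{
    public MainForm()
    {
        Text = "Hello";
        var button = new Button { Text = "Click me", Dock = DockStyle.Top };
        button.Click += (sender, e) => MessageBox.Show("Hello, world!");
        Controls.Add(button);
    }

    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.Run(new MainForm());
    }
}
```

Most C# GUI work I've seen is elaboration on that pattern: create controls, set properties, wire up event handlers.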
Well, I don't think I described it correctly. "Circuitous", I can actually handle -- I thrive on it, in fact. But e.g. setting text in a box to bold, when the package is designed to make that easy, following the book's exact instructions, and getting plain text ... that part bothers me, especially when it's followed up with all the alternate methods that don't work, etc. But it was a long time ago so I don't remember all the details.
The task I was working on was to have a WYSIWYG HTML editor, but allowing redefinition and addition of tags, and adding features HTML can't currently do. (Examples: 1. A tag that adds a specified superscript to the tagged text. 2. A tag that generates an arrow that points to some other text.)
I eventually hired someone to write it, but still couldn't understand from it how the code works, and the Visual C# book only touched on the outlines of this, and I ran into the problems I listed earlier.
I also tried to work through some of their existing program examples, like the blackjack one, but I don't remember where that went.
Try Python+Django, Ruby+Rails, or PHP+CakePHP depending on your preference, but the pragmatic difference is much smaller than language zealots pretend. If you plan on making something with millions of users, PHP is faster than Python or Ruby.
Graphic programming is harder than using generated HTML for your GUI, and there seem to be a lot more real world applications with a web GUI than anything that uses local OS graphics.
You want to do user-facing stuff? Then don't bother with desktop programming, write webapps. HTML and JavaScript are much easier than C++. You don't even have to learn the server side at first, a lot of useful stuff can be written as a standalone html file with no server support. For example you could make your own draggable interface to the free map tiles from http://openstreetmap.org - basically it's all just cleverly positioned image elements named like this, within a rectangular element that responds to mouse events. Or, if a little server-side coding doesn't scare you, you could make a realtime chat webpage. Stuff like that.
If you need any help at all, my email is vladimir.slepnev at gmail and I'm often online in gtalk.
"Easy" is one goal you can have when learning to program. "Soundly written and maintainable" is another. Unfortunately these two goals are sometimes at odds.
Language and platform don't really matter a whole lot, in the grand scheme of things; learning how to write maintainable programs does matter. Having had lots of experience extending or modifying source code written by others, I wish more novice programmers would make that their number one goal.
I disagree!
A (real) novice programmer's number one worry should be getting paid. Why should they divert their attention and spend extra effort on writing maintainable code, just so you have an easier time afterward? That's awfully selfish advice.
You might claim writing maintainable code will pay off for them, but to properly evaluate that we need to weigh the marginal utilities. What's better, an extra hour improving the maintainability of your code, or an extra hour spent empathizing with the client? Ummm... And you can't say both of those things are first priority, that's not how it works. I've been coding for money for half of my life so listen to my words, ye lemmings: ship the thing, make the client happy, get paid. That's number one. Maintainability ain't number one, it ain't even in the top ten.
That depends on your incentive structure. You may well be right if you work as a contract programmer. If you work as a salaried employee in a large company the calculation could look different.
Yes, absolutely. The former path (working or contracting for many small companies) is the one I'd heartily recommend to novices. The latter path... scares me.
Maybe you are scared because you are aware that writing maintainable code is harder than writing code without that constraint?
I write maintainable code anyway, and I'm friends with several people who maintain my past code and don't seem to complain. No, working at BigCo scares me because it tends to be a very one-sided activity. Employees at small companies and contractors face much more variety in what they have to do every day.
What are your other nine?
For one thing, that doesn't sound like something that's actionable for Silas in the context of his request for advice, compared to advising him to learn some specific techniques, such as MVC, which make for more maintainable code.
For another, your worry should be "getting paid" after you have reached a reasonable level of proficiency. A medical student's first concern isn't getting paid, it's learning how not to harm patients. Similarly if you're learning programming, as opposed to confident enough of your chops to go on the market, you have a responsibility to learn how not to harm future owners of your code through negligent design practices. That a majority of programmers today fail to fulfill that basic responsibility doesn't absolve you of it.
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn't have to wait and learn without getting paid, his current skill level is already in demand.
But that's tangential. More importantly, whenever I hear the word "maintainability" I feel like "uh oh, they wanna sell me some doctrinaire bullshit". Maintainability is one of those things everyone has a different idea of. In my opinion you should just try to solve each problem in the most natural manner, and maintainability will happen automatically.
Allow me to illustrate with an example. One of my recent projects was a user interface for IPTV set-top boxes. Lots and lots of stuff like "my account", "channels I've subscribed to", et cetera. Now, the natural way to solve this problem is to have a separate file (a "page") for each screen that the user sees, and ignore small amounts of code duplication between pages. If you get this right, it's pretty much irrelevant how crappily each individual page is coded, because it's only five friggin' kilobytes and a maintenance programmer will easily find and change any functionality they want. On the other hand, if you get this wrong... and it's really fucking distressing how many experienced programmers manage to get this wrong... making a Framework with a big Architecture that separates each page into small reusable chunks, perfectly Factored, with shiny and impeccable code... maintenance tasks become hell. And so it is with other kinds of projects too, in fact with most projects I've faced in my life. Focus on finding the stupid, straightforward, natural solution, and it will be maintainable with no effort.
That happens to take a significant amount of skill and learning. Read a site like the Daily WTF and you see what too often comes out of letting untrained, untaught programmers do what they're naturally inclined to do. One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them.
In practice you're right: people have different ideas of maintainability. That is precisely the problem.
But I don't know of any way to acquire this "programming common sense" except on the job. Do you?
Oh, no. What a terrible idea. If you do this without actually pushing through real-world projects of your own, you'll come up with a lot of bullshit "principles" that will take forever to dislodge. In general, the ratio of actual work to abstract thinking about "principles" should be quite high.
Open source.
I wouldn't say "on the job", necessarily. But it is only learned by programming, not by thinking about programming, attending lectures on programming, etc. Programming for class assignments can count for this.
Well, there is some benefit to reading good code, but you have to already have a reasonable idea what good code is for that to help.
I wasn't with you on the importance of maintainability until you said this. Yes, programming well and naturally is automatically maintainable.
Right on. Another way to put it: if you have to spend extra effort on maintainability, you've probably screwed up somewhere.
My name for this kind of behavior is "fetish". For example, some people have a Law of Demeter fetish. Some people have a short function fetish. And so on, all kinds of little cargo cults around.
Allow me to illustrate with another example. One of my recent projects is mostly composed of small functions, but there's this one function that is three screens long. What does it do? It draws a pie chart with legend. The only pie chart in the whole application. There's absolutely no use refactoring it because it's all unique code that doesn't repeat and isn't used anywhere else in the app. Pick the colors, draw the slices, draw the legend, stop. All very clear and straightforward, very easy to read and modify. A fetishist would probably throw a fit and start factoring it into small chunks, giving them descriptive names, maybe making it a "class" with some bullshit "parameters" that actually only ever take one value, etc, etc.
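If it helps, here is roughly the shape of such a function, as a minimal Python sketch (hypothetical names and data, not the actual project code, and with the drawing reduced to computing what a renderer would draw):

    def draw_pie_chart(data, palette):
        # data: list of (label, value) pairs; palette: colors to cycle through.
        total = sum(value for _, value in data)

        # Pick the colors: one per slice, cycling through the palette.
        colors = [palette[i % len(palette)] for i in range(len(data))]

        # Draw the slices: convert each value into an angular span.
        slices = []
        angle = 0.0
        for (label, value), color in zip(data, colors):
            span = 360.0 * value / total
            slices.append((angle, angle + span, color))
            angle += span

        # Draw the legend: one (color, label, percentage) row per slice.
        legend = [(color, label, 100.0 * value / total)
                  for (label, value), color in zip(data, colors)]
        return slices, legend

    slices, legend = draw_pie_chart([("news", 3), ("sports", 1)], ["red", "blue"])

One straight run of steps: pick the colors, compute the slices, build the legend, stop.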
Object-oriented design is overrated. ;)
Well, that's merely labeling, not actually advancing an argument. What kind of predictions are we talking about here? Where is our substantial disagreement, if any?
When I talk about maintainability I'm referring to specific sequences of events. In one of the most common negative scenarios, I'm asked to make one change to the functionality of a program, and I find that it requires me to make many coordinated edits in distinct source chunks (files, classes, functions, whatever). This is called "coupling" and is a quantifiable property of a program relative to some functional change specification.
"Maintainable" relative to that change means (among other things) low coupling. You want to change the pie chart to use dotted lines instead of solid inside the pie, and you find that this requires a change in only one code location - that's low coupling.
Now what often happens is that someone needs a program that's able to do both dotted-line pies and solid-line pies. And many times the "most natural" thing (by which I only mean, "what I see many programmers do") is then to copy the pie-chart function, paste it elsewhere with a different name, and change the line style from solid to dotted.
That copy-paste programming "move" has introduced coupling, in the sense that if you want to make a change that affects all pie charts (dotted and solid alike) you'll have to make the corresponding source change twice.
Someone who programs that way is eventually going to drive coupling through the roof (by repeated applications of this maneuver). At this point the program has become so difficult to change that it has to be rewritten from scratch. Plus, high coupling is also correlated with higher incidence of defects.
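In code, the two moves look something like this (a minimal Python sketch with hypothetical names; the fifty-line bodies are elided):

    # The copy-paste move: near-identical twins. A change that affects
    # all pie charts now has to be edited in two places (coupling).
    def draw_solid_pie(data):
        ...  # fifty lines of drawing code, solid lines

    def draw_dotted_pie(data):
        ...  # the same fifty lines pasted, with dotted lines

    # The low-coupling alternative: the one difference becomes a parameter,
    # so chart-wide changes still live in a single code location.
    def draw_pie(data, line_style="solid"):
        ...  # one copy of the drawing code; line_style picks solid/dotted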
Now you may call someone whose coding style discourages copy-paste programming a "fetishist", but that doesn't change the fact that it is a style which results in lower overall costs for the same total quantity of "delta functionality" integrated over the life of the program.
My contention is that functions which are three screens long are, other things equal, more likely to result in copy-paste parametrizations than smaller functions. (More generally, code that exhibits a higher degree of composability is less susceptible to design mistakes of this kind, at the cost of being slightly harder to understand for a novice programmer.)
I'd probably look hard at this pie chart thingy and consider chopping it up, if I felt the risk mitigation was worth the effort. Or I might agree with you and decide to leave it alone. I would consider it stupid to have a "corporate policy" or a "project rule" or even a "personal preference" of keeping all functions under a screenful. That wouldn't work, because more forces are in play than just function length.
Rather, I assess all the code I write against the criterion of "a small functional change is going to result in a small code change", and improve the structure as needed. I have a largish bag of tricks for doing that, in several languages and programming paradigms, and I'm always on the lookout for more tricks to pick up.
What, specifically, do you disagree with in the above?
I agree with most of your comment, except the idea that you can anticipate in what directions your software is going to grow. That's never actually worked for me. Whenever I tried designing for future requirements instead of current simplicity, clients found a way to throw me a curveball that made me go "oops, this new request screws up my whole design!"
If my program ever needs a second pie chart, it's better to factor the functionality out then instead of now. Less guesswork, plus a three-screen-long function is way easier to factor than a set of small chunks is to refactor.
Bad design choices are much more expensive to fix down the road than when they were created. You seem to be saying that any time spent addressing this issue is worthless in comparison to spending more time empathizing with the customer.
Thanks for the advice and generous offer of help!
Find a specific programming problem you need (want) to solve. That, for me at least, makes the task of learning almost automatic.
I (also) recommend Ruby+Rails for practical purposes. If you want to learn how to program, for example, 3D games, then I have no particular recommendations. I only got as far as 2D bit-blitting on that path! ;)
What category of app are you looking to write, narrowing down the class "app with a GUI" a little?
Can you name a specific example of one you've tried to compile and run, and you've been confused at the result?
One general hint is that a good way to learn how to code up significant programs from scratch is to, first, get a significant program that works and modify or extend it in some way.
Also, be aware that there are several competing design philosophies when it comes to writing GUI programs, with very different outcomes in terms of maintainability and adherence to sound design principles. The "Visual" approach exemplified by the Microsoft line of tools leaves much to be desired in my experience, leading to spaghetti code too easily.
I prefer approaches in which graphical components are created programmatically, and where design principles such as MVC then serve to further structure the resulting code and drive the design toward high levels of abstraction. The various Smalltalk environments are a good illustration of that philosophy.
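To make "programmatically created, MVC-structured" concrete, here is a minimal Python sketch (hypothetical names; print() stands in for a real widget toolkit):

    class CounterModel:
        # Holds application state; knows nothing about widgets.
        def __init__(self):
            self.count = 0
            self.listeners = []

        def increment(self):
            self.count += 1
            for notify in self.listeners:
                notify(self.count)

    class CounterView:
        # Renders state; print() stands in for drawing a widget.
        def render(self, count):
            print("Count:", count)

    class CounterController:
        # Wires user actions to the model, and model changes to the view.
        def __init__(self, model, view):
            self.model = model
            model.listeners.append(view.render)

        def on_button_click(self):
            self.model.increment()

    controller = CounterController(CounterModel(), CounterView())
    controller.on_button_click()  # prints "Count: 1"

The point is the separation: the model never touches the view, so you can swap the print-based view for a real widget without touching the rest.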
Spaghetti code is primarily a function of the programmer, not the tools. This isn't to say the tools don't matter (they do), but the various competing tools each have their pros and cons, and it's a bit glib to suggest the Microsoft stack is obviously behind here. ASP.NET MVC, which you can use for web development in C#, is quite orthogonality-friendly.
I don't think this should matter for your answer, since it's just a barrier toward a broad class of programming I'm trying to overcome.
All of them ;-) but I'll give you a specific example when I get back to my home computer.
Well, that's kind of hard when they don't even run after you compile them. But on top of that, I haven't found any multi-file codebase in which it's easy to jump to just the part of the code that implements a particular feature, usually because of poor documentation.
Got Skype, microphone, etc?
Yes.
Most open-source programs are made to be easy to compile on Unix platforms. If you're using OS X or Linux, great; if you're on Windows, download Cygwin and you'll have a Unix environment. Given all that, read the INSTALL file; it should give you step-by-step instructions for compiling and installing. Most commonly, you run ./configure, then make, then (as root) make install.
That said, platforms with package managers are really nice because you can download, build, and install many programs in a single step; Debian has APT, OS X has MacPorts and Fink, and Haskell (a programming language, not an operating system) has the Cabal.
In general, if running something causes a terminal to open and immediately close, try running it on a command line instead of double-clicking it. For Windows, open Command Prompt, drag the executable onto the terminal window, and hit enter.
One way to do that is to open the "Start" menu, select "Run", type cmd, and press <Enter>.
If you want to write UIs, Lisp and friends would probably not be the first choice, but since you mentioned it...
For Lisp, you can of course install Emacs, which (apart from being an editor) is a pretty convenient way to play around with Lisp. Emacs Lisp may not be a state-of-the-art Lisp implementation, but it is certainly good enough to get started. And because of the full integration with the editor, there is instant gratification when you can use some Lisp to glue existing things together into something useful. Emacs is available for just about any self-respecting computer system.
You can also try Scheme (a Lisp dialect); there is the excellent, freely available Structure and Interpretation of Computer Programs, which uses Scheme as the vehicle to explain many programming concepts. Guile is a nice, free-software implementation.
If you're really into a more mathematical approach, Haskell is pretty nice. For UI stuff, I find it rather painful, though (the same is true for Lisp and, to some extent, Scheme).
If you're doing them in Windows, open the command prompt using "cmd" and run them from the command line. They'll run in the CMD window, which will stay open after the program finishes doing whatever it does, leaving the output visible.
An amusing view of charity and utility, as told by Monty Python: Merchant Banker. I was trying to remember what thought experiment it reminded me of, but I couldn't find it...
Mentally Subtracting Positive Events Improves People’s Affective States, Contrary to Their Affective Forecasts
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2746912/
If I have no memory of some period in my past, then should I be pleased to discover that I was happy during that period? Or are past experiences valuable only through the pleasure their memories give us in the present?
It sounds as though you now have some information about those past events. Hopefully, it is a sign that your goals were being met during that period. Also, if you managed to learn that, maybe you will also learn something more useful about the period. So: I would say it is normally a good sign.
That past experience is valuable in the sense that it did not damage your psyche in the way that a traumatic experience could have.
I vote "pleased", for the rather weak reason that this makes my preferences time-symmetric*.
* Edit: This is poorly-worded - what I was referring to was time shift symmetry.
But nothing else about the universe is time-symmetric, manifestly including our own revealed preferences -- I would rather be happy in the future but not in the past than be happy in the past but not in the future, if you gave me the choice right now. So this is the only argument I can think of to vote "not pleased" (of course, not displeased either) about one's past, but unremembered, happiness.
(I actually do vote "pleased," though, for the reason I argued here.)
I'm not sure that I'd prefer unrecalled happiness in the past to unrecalled happiness in the future, but I was thinking of (and should have named) time-shift symmetry, which the fundamental laws of physics do obey.
I actually agree with your argument for voting "pleased", though, so we might be simply in agreement.
Well then, I'm sure that addresses my objection. But a couple of minutes' googling isn't giving me a good sense of what time-shift symmetry is -- and my physics background is lousy. Could you give me a quick definition?
The laws of physics are invariant in time.
Edit: Clarification - when you write down the laws of physics, you never invoke absolute time, only changes in time. The outcome of an experiment cannot change merely because the time coordinate changes; it can only change because other parameters in the situation change.
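Spelled out (standard textbook material, nothing beyond the claim above), in LaTeX notation:

    % Time-shift symmetry: t appears only through derivatives, so
    % shifting any solution in time yields another solution.
    m\,\ddot{x}(t) = F(x(t))
    \quad\Longrightarrow\quad
    x(t+\tau)\ \text{satisfies the same equation for every shift}\ \tau.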
Yes, they tend to be invariant in factors that don't exist ;-P
Thanks for that.
I remember hearing that there have been some hints that physical constants have changed over time. If they have then the laws of physics wouldn't be time invariant.
Anyone else recall anything along those lines? Wikipedia isn't terribly helpful.
I have not heard of any such theory becoming a credible candidate for acceptance, although I see no logical contradiction in such - my impression is that discovering a time-varying term would be as surprising as discovering energy is not conserved. For fairly fundamental reasons, actually.
Note that in GR defining energy consistently is tough. Doing it so it is globally conserved is even harder. We only really have local conservation, and the changing background of GR in cosmology is in some sense effectively the same thing as changing physical law.
Yeah why not. It is better to be pleased than not, all else being equal.
If you are a utilitarian, I think you should be pleased.
Imagine you happened to find out that a person on the other side of the world, whose life has never and will never affect yours in any way, is happy right now. You'd be pleased about that, right? Now imagine you knew instead that that person was happy last week. Since this affects you not at all, there's no real difference between these: you're just pleased about the fact of someone's happiness at some point in time.
If you buy my argument up to this point, then you may as well be pleased if that mystery person from the past was actually your own past self. And that's not even to mention Kevin's argument which does take into account the ways in which your past self influences your future self.
You should be at least as pleased as you would be to discover that someone else was happy during that period.
There is something I find very satisfying about this answer. Possibly this is related to the fact that I like to think of people-over-time as being a succession of distinct, but closely related, identities.
Here is one possible reason for being pleased to discover that one was unhappy in the past:
Times of apparent unhappiness can lead to great personal growth. For instance, the hardest, most stressful time of my life was studying for my physics honors exams. However, now that the exams are over, I am glad to have both the knowledge I gained in studying, and the self knowledge that I am capable of pushing myself as hard as I did. (Would skills learned during the missing time be retained? Even if they weren't, the latter reason above would still apply).
It would be devastating to lose the memory of any part of one's life, but I think there would be some satisfaction in learning that one had spent the missing time doing something difficult but worthwhile, even if one was not happy during that time.
This seems very unlikely. If the experience of remembering pleasurable events is valuable in itself, why can't other experiences be valuable in themselves?
Is anyone familiar with a possible evolutionary explanation of the placebo effect? It seems strange to me that the body would have a limit to the degree it heals itself, and that this limit gets bypassed by the belief that one is receiving treatment.
The only explanation I could string together is that the body limits how much it heals itself because it's conserving energy/resources/whatever it might need for other things (periods of scarcity, danger, etc.). Receiving medicine sends the signal that the person is being taken care of and thus at a much lower risk of needing its 'reserves', so the body goes ahead and diverts them to repairing whatever is wrong with it.
However, this would suggest that a self-administered placebo would be ineffective, whereas treatment but no medicine by a doctor/caregiver would be effective. As far as I know, this isn't how the placebo effect works, but I'm not exactly up to date on the subject.
Has anyone seen a better explanation?
People are very much affected by what they imagine is going on. For the unbendable arm, you don't tell people to extend their arm efficiently; you have them imagine the arm extending out to infinity, or imagine the arm as a firehose.
I'm not sure why any of this works; it may have something to do with activating one's own mirror neurons, but I do think the placebo effect should be viewed as a special case rather than a thing in itself.
A self-administered placebo might still be effective for evolutionary reasons. It would signal that a reduced activity level is related to tending your injuries, rather than, say, waiting in ambush or 'freezing' to avoid notice by motion-sensitive predators, so it's safe to divert resources toward repair or antibody production at the expense of sensory and muscular readiness.
Same reason people have a hard time getting to sleep in unfamiliar circumstances, but focusing on a token reminder of home dispels the feeling.
Yes: that the original papers advocating the placebo effect were misleading in their reports, and that the popularisations thereof grossly exaggerated it.
Placebos can be shown to reliably have an effect on:
(I am not criticising the use of placebo controls here. But I am asserting that the primary benefit from such controls is in 'balancing out' other biases rather than because of direct effect of placebos on healing.)
http://news.ycombinator.com/item?id=567913
Now that is just freaky.
Has anybody else wished that the value of the symbol pi were doubled? It becomes far more intuitive that way; this might even improve the uptake of trigonometry in school. It rates up there with the decision to declare the electron's charge negative rather than positive.
I read an argument to that effect on the Internet, but I don't have any strong feelings - maybe if I were writing a philosophical conlang I would make the change, but not normally. You may as well argue for base four arithmetic.
Huh. Would that actually be easier? I always figured ten fingers...
I don't see myself with ten fingers as a posthuman anyway.
The cost in number length is not large - 3*10^8 is roughly 1*4^14 - and the cost in factorization likewise - divisibility by 2, 3, and 5 remain simple, only 11 becomes difficult.
If you want to argue from number of fingers, though, six beats ten. ;)
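Checking the number-length arithmetic above (a routine verification, in LaTeX notation):

    4^{14} = 2^{28} = 268{,}435{,}456 \approx 2.7 \times 10^{8},
    \qquad
    \frac{\log 10}{\log 4} \approx 1.66

so base-4 numerals run only about two-thirds longer than their decimal counterparts.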
I could see eight, but why six?
Six works because you don't need a figure for the base. Thus, zero to five fingers on one hand, then drop all five and raise one on the other to make six. (Plus, you get easy divisibility by seven, which beats easy divisibility by eleven.)
Edit: Binary, the logical extension of the above principle, has the problem that the ring finger and pinky have a mechanical connection, besides the obvious 132-in-decimal issue. ;)
I don't see how eight comes in, though.
Eight would be if you counted your fingers with the thumb of the same hand.
I see - I count by raising fingers, so that method didn't occur to me.
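For anyone who wants to fiddle with the six-counting scheme described above, a toy Python sketch (purely illustrative):

    def base6_hands(n):
        # One hand holds the sixes digit, the other the units; max is 35.
        sixes, units = divmod(n, 6)
        assert 0 <= sixes <= 5, "two five-fingered hands max out at 35"
        return "left hand: %d, right hand: %d" % (sixes, units)

    print(base6_hands(6))   # left hand: 1, right hand: 0
    print(base6_hands(35))  # left hand: 5, right hand: 5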
There are websites dedicated to making Base 12 the standard. Same principle as making Base 6.
Nature's Numbers
Dozenal Society
Simplest explanation: it's possible to divvy 12 up into more whole fractions than 10.
I figure each finger can be up or down, 2 states, so binary. And then base 16 is just assigning symbols to sequences of 4 binary digits, a good, manageable, compression for speaking and writing.
(When I say I could count something on one hand, it means there are up to 31 of them.)
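The one-hand-binary claim as a toy Python check (purely illustrative):

    def fingers_to_number(fingers):
        # Each finger is one bit (raised = 1), most significant first.
        n = 0
        for bit in fingers:
            n = 2 * n + bit
        return n

    print(fingers_to_number([1, 1, 1, 1, 1]))  # 31: one hand counts 0..31
    print(hex(0b1111))  # '0xf': each group of 4 bits is one hex digit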
Meh. 2 Pi shows up a lot, but so does Pi, and so does Pi/2. I think I'd rather cut it in half, actually, as fractions are more painful than integer multiples.
Pi/3 shows up a lot as well. If you halve pi, then you'd have to write that as 2*pi/3, which is more irritating still.
Think about the context here, though. Having a symbol for 2pi would be much more convenient because it would make things consistent: 2pi is the number that you typically cut into fractions. Say we define rho to mean 2pi. Then we have rho, rho/2, rho/3, rho/4... whereas with pi, we have 2pi, 2pi/2, 2pi/3, 2pi/4... the problem is those even numbers. Writing 2pi/4 looks ugly, you want to simplify, but writing pi/2 means that you no longer see the number "4" there, which is what's important: that it's a quarter of 2pi. You see the "2" on the bottom, so you think it's half of 2pi. It's a mistake everyone makes every now and then, seeing pi/n and thinking it's 2pi/n. If we just had a symbol for 2pi, this wouldn't occur. Other mistakes would, sure, but as commonly as this one does?
If we were to define, say, xi=pi/2, then 4xi, 2xi, 4xi/3, xi, 4xi/5... well, that's just awful.
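Laying the three notations side by side (rho and xi being the hypothetical symbols proposed above, in LaTeX notation):

    \text{full turn} = \rho = 2\pi = 4\xi, \qquad
    \tfrac{1}{4}\ \text{turn} = \tfrac{\rho}{4} = \tfrac{\pi}{2} = \xi, \qquad
    \tfrac{1}{3}\ \text{turn} = \tfrac{\rho}{3} = \tfrac{2\pi}{3} = \tfrac{4\xi}{3}.

Only with rho does the denominator always match the fraction of the turn.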
Definitely. 2pi appears so much more often than pi.
No. This is nowhere near like the metric vs. English units debate. (If you want to talk about changing conventions, you should throw your weight behind that one instead, as it's much more of a serious issue.) Pi is already well defined, anyway. It's defined according to its historical contextual meaning, regarding diameter, for which the factor of 2 does not appear.
Pi is well-defined, yes, and that's not going to change. But some notation is better than others. It would be better notation if we had a symbol that meant 2pi, and not necessarily any symbol that meant pi, because the number 2pi is just usually more relevant. There's all sorts of notation we have that is perfectly well-defined, purely mathematical, not dependent on any system of units, but is not optimal for making things intuitive and easy to read, write and generally process. The gamma function is another good example.
I really fail to see why metric vs. English units is much more serious; neither metric nor English units are particularly suggestive of anything these days. Neither is more natural. The quantities being measured with them aren't going to be nice clean numbers like pi/2; they're going to be messy no matter what system of units you measure them with.
From the guy who brought us the Creative Commons license:
http://www.fixcongressfirst.org/
How should rationalists do therapy?
As a community, we should have resources to help people who might otherwise be helped by clerics, quacks, or psychics. We should certainly cover things like minor depression and grief at the death of a loved one.
Should we just look at what therapies have the best outcome for various situations and recommend those?
Should we use what we know about cognition to suggest new therapies? Should we make a "Grief Sequence"?
The Prince of One Hundred Thousand Leaves is, among other things, a sort of fictionalized open-source project for horrifying eutopias. It might provide useful insights about that which we are least willing to consider.