Open Thread June 2010, Part 3

6 Post author: Kevin 14 June 2010 06:14AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

The thrilling conclusion of what is likely to be an inaccurately named trilogy of June Open Threads.

 

Comments (606)

Comment author: NancyLebovitz 15 June 2010 12:25:02PM *  17 points [-]

How to Keep Someone with You Forever.

This is a description of "sick systems"-- jobs and relationships which destructively take over people's lives.

I'm posting it here partly because it may be of use-- systems like that are fairly common and can take a while to recognize, and partly because it leads to some general questions.

One of the marks of a sick system is that the people running it convince the victims that they (the victims) are both indispensable and incompetent-- and it can take a very long time to recognize the contradiction. It's plausible that the crises, lack of sleep, and frequent interruptions are enough to make people not think clearly about what's being done to them, but is there any more to it than that?

One of the commenters to the essay suggests that people are vulnerable to sick systems because raising babies and small children is a lot like being in a sick system. This is somewhat plausible, but I suspect that a large part of the stress is induced by modern methods of raising small children-- the parents are unlikely to have a substantial network of helpers, they aren't sharing a bed with the baby (leading to more serious sleep deprivation), and there's a belief that raising children is almost impossible to do well enough.

Also, it's interesting that people keep spontaneously inventing sick systems. It isn't as though there's a manual. I'm guessing that one of the drivers is feeling uncomfortable at seeing the victims feeling good and/or capable of independent choice, so that there are short-run rewards for the victimizers for piling the stress on.

On the other hand, there's a commenter who reports being treated better by her family after she disconnected from the craziness.

Comment author: xamdam 16 June 2010 02:13:31PM *  15 points [-]

Message from Warren Buffett to other rich Americans

http://money.cnn.com/2010/06/15/news/newsmakers/Warren_Buffett_Pledge_Letter.fortune/index.htm?postversion=2010061608

I find super-rich people's level of rationality specifically interesting because, unless they are heirs or entertainers, it takes quite a bit of instrumental rationality to 'get there'. Nevertheless it seems many of them do not make the same deductions as Buffett, which seem pretty clear:

My wealth has come from a combination of living in America, some lucky genes, and compound interest. Both my children and I won what I call the ovarian lottery. (For starters, the odds against my 1930 birth taking place in the U.S. were at least 30 to 1. My being male and white also removed huge obstacles that a majority of Americans then faced.)

My luck was accentuated by my living in a market system that sometimes produces distorted results, though overall it serves our country well. I've worked in an economy that rewards someone who saves the lives of others on a battlefield with a medal, rewards a great teacher with thank-you notes from parents, but rewards those who can detect the mispricing of securities with sums reaching into the billions. In short, fate's distribution of long straws is wildly capricious.

In this sense they are sort of 'natural experiments' of cognitive biases at work.

Comment author: pjeby 16 June 2010 03:44:02PM 5 points [-]

My wealth has come from a combination of living in America, some lucky genes, and compound interest. Both my children and I won what I call the ovarian lottery. (For starters, the odds against my 1930 birth taking place in the U.S. were at least 30 to 1. My being male and white also removed huge obstacles that a majority of Americans then faced.)

My luck was accentuated by my living in a market system that sometimes produces distorted results, though overall it serves our country well. I've worked in an economy that rewards someone who saves the lives of others on a battlefield with a medal, rewards a great teacher with thank-you notes from parents, but rewards those who can detect the mispricing of securities with sums reaching into the billions. In short, fate's distribution of long straws is wildly capricious.

Wow. That is some seriously clear thinking. Too bad Mr. Buffett isn't here to get the upvote himself, so I upvoted you instead. ;-)

Comment author: xamdam 16 June 2010 03:55:49PM 4 points [-]

I think in Buffett's case this is not an accident; I venture to claim that his wealth is the result of fortune combined with an unusual dose of rationality (even if he calls it 'genes'). My strongest piece of evidence is that his business partner for the past 40 years, Charlie Munger, was one of the earliest outspoken adopters of the good parts of modern psychology, such as the ideas of Cialdini and of Tversky and Kahneman on decision-making under uncertainty.

http://vinvesting.com/docs/munger/human_misjudgement.html

Comment author: pjeby 17 June 2010 01:21:51AM 3 points [-]

http://vinvesting.com/docs/munger/human_misjudgement.html

Oh wow, I think I have a new role model. Any chance we can get these two (Buffett and Munger) to open a rationality dojo? (Who knows, they might be impressed, given that most people ask them for wealth advice instead...)

Comment author: multifoliaterose 14 June 2010 09:18:52PM *  14 points [-]

I made a couple of comments here http://lesswrong.com/lw/1kr/that_other_kind_of_status/255f at Yvain's post titled "That Other Kind of Status." I messed up in writing my first comment in that it did not read as I had intended it to. Please disregard my first comment (I'm leaving it up to keep the responses in context).

I clarified in my second comment. My second comment seems to have gotten buried in the shuffle and so I thought I would post again here.

I've been a lurker in this community for three months and I've found that it's the smartest community that I've ever come across outside of parts of the mathematical community. I recognize a lot of the posters as similar to myself in many ways and so have some sense of having "arrived home."

At the same time, the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they're sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.

I don't want to get involved in a debate about this point now (although I'd be happy to elaborate and give my thoughts in detail if there's interest).

What I want to do is to draw attention to the remarks that I made in my second comment at the link. From what I've read (several hundred assorted threads), I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you're a part of these groups (*).

My drawing attention to this question is not out of malice toward any of you - as I indicated above, I feel more comfortable with Less Wrong than I do with almost any other large group that I've ever come across. I like you people and if some of you are suffering from the issue (*) I see this as understandable and am sympathetic - we're all only human.

But I am concerned that I haven't seen much evidence of serious reflection about the possibility of (*) on Less Wrong. The closest that I've seen is Yvain's post titled "Extreme Rationality: It's Not That Great". Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, (*) is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of (*).

Any thoughts? I'd also be interested in any relevant references.

[Edited in response to cupholder's comment, deleted extraneous words.]

Comment author: Eneasz 16 June 2010 07:02:02AM 17 points [-]

At the same time, the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they're sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.

I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you're a part of these groups (*).

You know what... I'm going to come right out and say it.

A lot of people need their clergy. And after a decade of denial, I'm finally willing to admit it - I am one of those people.

The vast majority of people do not give their 10% tithe to their church because some rule in some "holy" book demands it. They don't do it because they want a reward in heaven, or to avoid hell, or because their utility function assigns all such donated dollars 1.34 points of utility up to 10% of gross income.

They do it because they want their priests to kick more ass than the OTHER group's priests. OUR priests have more money, more power, and more intellect than YOUR sorry-ass excuse for a holy-man. "My priest bad, cures cancer and mends bones; your priest weak, tell your priest to go home!"

So when I give money to the SIAI (or FHI or similar causes) I don't do it because I necessarily think it's the best/most important possible use of my fungible resources. I do it because I believe Eliezer & Co are the most like-me actors out there who can influence the future. I do it because of all the people out there with the ability to alter the flow of future events, their utility function is the closest to my own, and I don't have the time/energy/talent to pursue my own interests directly. I want the future to look more like me, but I also want enough excess time/money to get hammered on the weekends while holding down an easy accounting job.

In short - I want to be able to just give a portion of my income to people I trust to be enough like me that they will further my goals simply by pursuing their own interests. Which is to say: I want to support my priests.

And my priests are Eliezer Yudkowsky and the SIAI fellows. I don't believe they leech off of me; I feel they earn every bit of respect and funding they get. But that's beside the point. The point is that even if the funds I gave were spent sub-optimally, I would STILL give them this money, simply because I want other people to see that MY priests are better taken care of than THEIR priests.

The Vatican isn't made out of gold because the Pope is greedy; it's made out of gold because the peasants demand that it be so. And frankly, I demand that the Vatican be put to fucking shame when it compares itself to us.

Standard Disclaimer, but really... some enthusiasm is needed to fight Azathoth.

Comment author: blogospheroid 18 June 2010 06:24:53AM 1 point [-]

Voted up for honesty.

Comment author: cupholder 14 June 2010 09:53:19PM *  4 points [-]

Comment on markup: I saw the first version of your comment, where you were using "(*)" as a textual marker, and I see you're now using "#" because the asterisks were messing with the markup. You should be able to get the "(*)" marker to work by putting a backslash before the asterisk (and I preferred the "(*)" indicator because that's more easily recognized as a footnote-style marker).

Feels weird to post an entire paragraph just to nitpick someone's markup, so here's an actual comment!

From what I've read (several hundred assorted threads), I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you're a part of these groups

Let me try and rephrase this in a way that might be more testable/easier to think about. It sounds like the question here is what is causing the correlation between being a member of LW/SIAI and agreeing with LW/SIAI that future AI is one of the most important things to worry about. There are several possible causes:

  1. group membership causes group agreement (agreement with the group)
  2. group agreement causes group membership
  3. group membership and group agreement have a common cause (or, more generally, there's a network of causal factors that connect group membership with group agreement)
  4. a mix of the above

And we want to know whether #1 is strong enough that we're drifting towards a cult attractor or some other groupthink attractor.

I'm not instantly sure how to answer this, but I thought it might help to rephrase this more explicitly in terms of causal inference.

Comment author: multifoliaterose 15 June 2010 01:45:17AM *  3 points [-]

I'm not sure that your rephrasing accurately captures what I was trying to get at. In particular, strictly speaking (*) doesn't require that one be a part of a group, although being part of a group often plays a role in enabling (*).

Also, I'm not only interested in possible irrational causes for LW/SIAI members' belief that future AI is one of the most important things to worry about, but also possible irrational causes for each of:

(1) SIAI members' belief that donating to SIAI in particular is the most leveraged way to reduce existential risks. Note that it's possible to devote one's life to a project without believing that it's the best project for additional funding - see GiveWell's blog posts on Room For More Funding.

For reference, PeerInfinity says

A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks.

(2) The belief that refining the art of human rationality is very important.

On (2), I basically agree with Yvain's post Extreme Rationality: It's Not That Great.

My own take is that the Less Wrong community has been very enriching in some of its members' lives on account of allowing them the opportunity to connect with people similar to themselves, and that their very positive feelings connected with their Less Wrong experience have led some of them to overrate the overall importance of Less Wrong's stated mission. I can write more about this if there's interest.

Comment author: JoshuaZ 14 June 2010 10:02:09PM *  3 points [-]

I'm not aware of anyone here who would claim that LW is one of the most important things in the world right now, but I think a lot of people here would agree that improving human reasoning is important if we can have those improvements apply to lots of different people across many different fields.

There is a definite group of people here who think that SIAI is really important. If one thinks that a near Singularity is a likely event then this attitude makes some sense. It makes a lot of sense if you assign a high probability to a Singularity in the near future and also assign a high probability to the possibility that many Singularitarians either have no idea what they are doing or are dangerously wrong. I agree with you that the SIAI is not that important. In particular, I think that a Singularity is not a likely event for the foreseeable future, although I agree with the general consensus here that a large fraction of Singularity proponents are extremely wrong at multiple levels.

Keep in mind that for any organization or goal, the people you hear from the most about it are the people who think that it is important. That's the same reason that a lot of the general public thinks that tokamak fusion reactors will be practical in the next fifty years: the physicists and engineers who think that are going to loudly push for funding. The ones who don't are going to generally just go and do something else. Thus, in any given setting it can be difficult to estimate the general communal attitude towards something, since the strongest views will be the views that are most apparent.

Comment author: Vladimir_Nesov 14 June 2010 10:24:49PM *  13 points [-]

I don't think an intelligence explosion is imminent either. But I believe it's certain to eventually happen, absent the end of civilization before that. And I believe that its outcome depends exclusively on the values of the agents driving it, hence we need to be ready, with a good understanding of preference theory at hand when the time comes. To get there, we need to start somewhere. And right now, almost nobody is doing anything in that direction; there is a very poor level of awareness of the problem, and poor intellectual standards for discussing it where surface awareness is present.

Either right now, or 50, or 100 years from now, a serious effort has to be undertaken, but the later it starts, the greater the risk of being too late to guide the transition in a preferable direction. The problem itself, as a mathematical and philosophical challenge, sounds like something that could easily take at least 100 years to reach clear understanding, and that is the deadline we should worry about: starting 10 years too late means finishing too late, 100 years from now.

Comment author: Benquo 15 June 2010 01:04:15AM -1 points [-]

"But I believe it's certain to eventually happen, absent the end of civilization before that."

And I will live 1000 years, provided I don't die first.

Comment author: Vladimir_Nesov 15 June 2010 01:26:46AM *  2 points [-]

But I believe it's certain to eventually happen, absent the end of civilization before that.

And I will live 1000 years, provided I don't die first.

(As opposed to gradual progress, of course. I could also make a case that your analogy faces an unexpected distinction: what happens if you get overrun by a Friendly intelligence explosion and persons don't prove to be a valuable pattern? "Death" doesn't adequately describe that transition either, as value doesn't get lost.)

Comment author: Nick_Tarleton 17 June 2010 06:07:39AM *  12 points [-]

Ladies and gentlemen, the human brain: acetaminophen reduces the pain of social rejection.

Comment author: MichaelBishop 14 June 2010 04:22:40PM *  12 points [-]

I'd like to share introductory level posts as widely as possible. There are only three with this tag. Can people nominate more of these posts, perhaps messaging the authors to encourage them to tag their posts "introduction"?

We should link to, stumble on, etc. accessible posts as much as possible. The sequences are great, but intimidating for many people.

Added: Are there more refined tags we'd like to use to indicate who the articles are appropriate for?

Comment author: RobinZ 15 June 2010 04:23:26AM 9 points [-]

There are a few scattered posts in Eliezer's sequences which do not, I believe, have strong dependencies (I steal several from the About page, others from Kaj_Sotala's first and second lists) - I separate out the ones which seem like good introductory posts specifically, with a separate list of others I considered but do not think are specifically introductory.

Introductions:

Not introductions, but accessible and cool:

Comment author: SilasBarta 15 June 2010 01:05:13PM 3 points [-]

As usual, I'll have to recommend Truly Part of You as an excellent introductory post, given the very little background required, and the high insight per unit length.

Comment author: blogospheroid 15 June 2010 05:27:09AM 3 points [-]

Thanks for this list.

Comment author: khafra 14 June 2010 12:37:58PM 8 points [-]

Wikipedia says the term "Synthetic Intelligence" is a synonym for GAI. I'd like to propose a different use: as a name for the superclass encompassing things like prediction markets. This usage occurred to me while considering 4chan as a weakly superintelligent optimization process with a single goal; something along the lines of "producing novelty;" something it certainly does with a paperclippy single-mindedness we wouldn't expect out of a human.

It may be that there's little useful to be gained by considering prediction markets and chans as part of the same category, or that I'm unable to find all the prior art in this area because I'm using the wrong search terms--but it does seem somewhat larger and more practical than gestalt intelligence.

Comment author: timtyler 15 June 2010 08:48:18PM *  4 points [-]

That is usually called "collective intelligence":

http://en.wikipedia.org/wiki/Collective_intelligence

Calling it "synthetic Intelligence" would be bad, IMO.

Comment author: Peter_Lambert-Cole 18 June 2010 12:12:26AM 7 points [-]

I have an idea that I would like to float. It's a rough metaphor that I'm applying from my mathematical background.

Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps. You might think that this is a silly distinction, but there are a few reasons why it may not be.

First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each very accurately describe a small area but become more and more distorted the farther out we go. Each ancient city-state might have accurate maps of the surrounding farms for tax purposes but wildly guess at what lies beyond a mountain range or desert. A map might also accurately describe the territory at one level of distance but simplify much smaller scales. The yellow pixel in a map of the US is actually an entire town, with roads and buildings and rivers and topography, not perfectly flat fertile farmland.

Or take another example. Suppose you have a virtual reality machine, one with a portable helmet with a screen and speakers, in a large warehouse, so that you can walk around this giant floor as if you were walking around this virtual world. Now, suppose two people are inserted into this virtual world, but at different places, so that when they meet in the virtual world, their bodies are actually a hundred yards apart in the warehouse, and if their bodies bump into each other in the warehouse, they think they are a hundred yards apart in the virtual world.

Thus, when we as rationalists are evaluating our maps and those of others, an argument by contradiction does not always work. That two maps disagree does not invalidate the maps. Instead, it should cause us to see where our maps are reliable and where they are not, where they overlap with each other or agree and are interchangeable, and where only one will do. Even more controversially, we should examine maps that are demonstrably wrong in some places to see whether and where they are good maps. Moreover, it might be more useful to add an entirely new map to our atlas instead of trying to improve the resolution on one we already have, or moving the lines around ever so slightly as we bring it asymptotically closer to truth.

My lesson for the rationality dojo would thus be: be comfortable that your atlas is not consistent. Learn how to use each map well and how they fit together. Recognize when others have good maps and figure out how to incorporate those maps into your atlas, even if they might seem inconsistent with what you already have.

As you may have noticed, this idea comes from differential geometry, where you use a collection ("atlas") of overlapping charts/local homeomorphisms to R^n ("maps") as a suitable structure for discussing manifolds.
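For readers who haven't seen the formalism, a minimal sketch of the standard definition (textbook material, nothing specific to this post): an atlas on a manifold M is a collection of charts \varphi_\alpha : U_\alpha \to \mathbb{R}^n whose domains cover M. Overlapping charts need not agree; they are only required to have well-behaved (e.g. smooth) transition maps

    \varphi_\beta \circ \varphi_\alpha^{-1} : \varphi_\alpha(U_\alpha \cap U_\beta) \to \varphi_\beta(U_\alpha \cap U_\beta)

on the overlaps. No single chart has to cover the whole manifold; the compatibility conditions on the overlaps are what make the atlas usable.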

Comment author: Perplexed 27 July 2010 11:21:58PM 1 point [-]

I tend to agree that we frequently would do better to make do with an atlas of charts rather than seeking the One True Map. But I'm not sure I like the differential geometry metaphor. It is not the location on the globe which makes the use of one chart more fruitful than another. It is the question of scale, or as a computer nerd might express it, how zoomed in you are. And I would prefer to speak of different models rather than different maps.

For example, at one level of zoom, we see the universe as non-deterministic due to QM. Zoom out a bit and you have billiard-ball atoms in a Newtonian billiard room. Zoom out a bit more and you find non-deterministic fluctuations. Out a bit more and you have deterministic chemical thermodynamics (unless you are dealing with a Brusselator or some such).

But I would go farther than this. I would also claim that we shouldn't imagine that these maps (as you zoom in) necessarily become better and better maps of the One True Territory. We should remain open to the idea that "It's maps (or models, or turtles) all the way down".

Comment author: Douglas_Knight 18 June 2010 09:53:50PM 1 point [-]

But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps.

What's an example of people doing this?

Comment author: Peter_Lambert-Cole 20 June 2010 05:49:40PM 2 points [-]

I think one place to look for this phenomenon is when in a debate, you seize upon someone's hidden assumptions. When this happens, it usually feels like a triumph, that you have successfully uncovered an error in their thinking that invalidates a lot of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is both embarrassing and means you have to redo a lot of your thinking.

But hidden assumptions aren't bad. You have to make some assumptions to think through a problem anyway. You can only reason from somewhere to somewhere else. It's a transitive operation. There has to be a starting point. Moreover, assumptions make thinking and computation easier. They decrease the complexity of the problem, which means you can figure out at least part of the problem. Assuming pi is 3.14 is good if you want an estimate of the volume of the Earth. But that is useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.
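To make the arithmetic concrete, here is a quick sketch in Python (my numbers, not Peter's; the radius is the usual approximate figure):

    import math

    r = 6371  # Earth's mean radius in km (approximate)
    v_approx = (4 / 3) * 3.14 * r**3     # sphere volume using pi ~= 3.14
    v_better = (4 / 3) * math.pi * r**3  # same, using full-precision pi
    rel_error = abs(v_better - v_approx) / v_better
    print(f"relative error from rounding pi: {rel_error:.4%}")  # about 0.05%

The rounding costs about 0.05%, far less than the uncertainty in the radius itself - fine for an estimate, useless for a proof.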

When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others' assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.

This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side's assumptions to see how they fit.

Comment author: SilasBarta 20 June 2010 06:04:08PM *  1 point [-]

Mostly agree. It's really irritating and unproductive (and for me, all too frequent) when someone thinks they've got you nailed because they found a hidden assumption in your argument, but that assumption turns out to be completely uncontroversial, or irrelevant, or something your opponent relies on anyway.

Yes, people need to watch for the hidden assumptions they make, but they shouldn't point out the assumptions others make unless they can say why they're unreasonable and how weakening them would hurt the argument they're being used for. "You're assuming X!" is not, by itself, a relevant counterargument.

Comment author: timtyler 16 June 2010 08:42:28AM 6 points [-]

A question: Do subscribers think it would be possible to make an open-ended self-improving system with a perpetual delusion - e.g. that Jesus loves them?

Comment author: Mitchell_Porter 16 June 2010 09:14:52AM *  7 points [-]

Yes, in that it could be open-ended in any "direction" independent of the delusion. However, that might require contrived initial conditions or cognitive architecture. You might also find the delusion becoming neutralized for all practical purposes, e.g. the delusional proposition is held to be true in "real reality" but all actual actions and decisions pertain to some "lesser reality", which turns out to be empirical reality.

ETA: Harder question: are there thinking systems which can know that they aren't bounded in such a way?

Comment author: nhamann 14 June 2010 09:00:14PM *  6 points [-]

“There is no scientist shortage,” declares Harvard economics professor Richard Freeman, a pre-eminent authority on the scientific work force. Michael Teitelbaum of the Alfred P. Sloan Foundation, a leading demographer who is also a national authority on science training, cites the “profound irony” of crying shortage — as have many business leaders, including Microsoft founder Bill Gates — while scores of thousands of young Ph.D.s labor in the nation’s university labs as low-paid, temporary workers, ostensibly training for permanent faculty positions that will never exist.

The Real Science Gap

ETA: Here's a money quote from near the end of the article:

The main difference between postdocs and migrant agricultural laborers, he jokes, is that the Ph.D.s don’t pick fruit.

(Ouch)

Comment author: Houshalter 15 June 2010 12:34:29AM *  0 points [-]

I'm not sure I see what the problem is. Capitalism works? It makes it seem like this system is unsustainable or bound to collapse, but I'm not sure I see how two and two fit together. I am particularly confused with this quote:

Obviously, the “pyramid paradigm can’t continue forever,” says Susan Gerbi, chair of molecular biology at Brown University and one of the relatively small number of scientists who have expressed serious concern about the situation. Like any Ponzi scheme, she fears, this one will collapse when it runs out of suckers — a stage that appears to be approaching. “We need to have solutions for some new steady-state model” that will limit the production of new scientists and offer them better career prospects, she adds.

First of all, how is it a Ponzi scheme that is bound to collapse? Also, limiting the number of scientists is not going to make the system better, except that individuals may face less competition and thus have more opportunities - which is a benefit to the individual, not to the whole system.

EDIT: Fixed spelling.

Comment author: SilasBarta 15 June 2010 01:01:33AM *  25 points [-]

I'm not sure if it meets the Ponzi scheme model, but the problem is this: lots of students are going deeper into debt to get an education that has less and less positive impact on their earning power. So the labor force will be saturated with people having useless skills (given lack of demand, government-driven or otherwise, for people with a standard academic education) and being deep in undischargeable debt.

The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.

On top of that, to the extent that people do pick up on this, the sciences will continue to be starved of the people who can bring about advances -- this past generation they were lured away to produce deceptive financial instruments that hid dangerous risk, and which (governments claim) put the global financial system at the brink of collapse.

My take? The system of go-to-college/get-a-job needs to collapse and be replaced, for the most part, by apprenticeships (or "internships" as we fine gentry call them) at a younger age, which will give people significantly more financial security and enhance the economy's productivity. But this will be bad news for academics.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

So the slack will have to be picked up by people "outside the system". Yes, they'll be starved for funds and rely on rich people and donations to non-profits, but they'll mostly make up for it by their ability to get much more insight out of much less data: knowing what data-mining techniques to use, spotting parallels across different fields, avoiding the biases that infect academia, and generally automating the kind of inference currently believed to require a human expert to perform.

In short: this, too, shall pass -- the only question is how long we'll have to suffer until the transition is complete.

Sorry, [/rant].

Comment author: fiddlemath 15 June 2010 03:51:51AM *  7 points [-]

I agree that college as an institution of learning is a waste for most folks - they will "never use this," most disregard the parts of a liberal arts education that they're force-fed, and neither they nor their jobs benefit. Maybe students gain something from networking with each other. But yes, Goodhart's Law applies. Employers appear to use a diploma as an indicator of diligence and intelligence. So long as that's true, students will fritter away four years of their lives and put themselves deep in debt to get a magic sheet of paper.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

It's been broken forever, in basically the same way it is now. Most working scientists are trying to prove their idea, because negative results don't carry nearly so much prestige as positive results, and the practice of science is mostly about prestige. I'm sure I could find citations for peer review being "pal review" throughout its lifetime. (Ooh. I'll try this in a moment.)

To the extent that science has ever worked, it's because the social process of science has worked - scientists are just open-minded enough to, as a whole, let strong evidence change their collective minds. I'm not really convinced that the social process of science has changed significantly over the last decades, and I can imagine these assertions being rooted in generalized nostalgia. Do you have reasons to assert this?

(Are you just blowing off steam about this? I can totally support that, because argh argh argh the publication treadmill in my field headdesk headdesk expletives. But if you have evidence, I'd love to hear it.)

Comment author: SilasBarta 15 June 2010 04:20:34AM *  11 points [-]

I mainly have evidence for the absolute level, not necessarily for the trend (in science getting worse). For the trend, I could point to Goodhart phenomena like the publications-per-unit-time metric being gamed, and getting worse as time progresses.

I also think that in this context, the absolute level is evidence of the trend, when you consider that the number of scientists has increased; if the quality of science in general has not increased with more people, it's getting worse per unit person.

For the absolute level, I've noticed scattered pieces of the puzzle that, against my previous strong presumption, support my suspicions. I'm too sleepy to go into detail right now, but briefly:

  • There's no way that all the different problems being attacked by researchers can be really, fundamentally different: the function space is too small for a unique one to exist for each problem, so most should be reducible to a mathematical formalism that can be passed to mathematicians who can tell if it's solvable.

  • There is evidence that such connections are not being made. The example I use frequently is ecologists and the method of adjacency matrix eigenvectors. That method has been around since the 1960s and forms the basis of Google's PageRank, allowing it to identify crucial sites. Ecologists didn't apply it to the problem of identifying critical ecosystem species until a few years ago. (A sketch of the method follows this list.)

  • I've gone into grad school myself and found that existing explanations of concepts are a scattered mess: it's almost like they don't want you to understand papers or break into advanced topics that are the subject of research. Whenever I understand such a topic, I find myself able to explain it in much less time than it took experts in the field to explain it to me. This creates a fog over research, allowing big mistakes to last for years, with no one ever noticing because too few eyeballs are on it. (This explanation barrier is the topic of my ever-upcoming article "Explain yourself!")
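As promised above, a minimal sketch of the eigenvector method in Python (a toy graph and plain power iteration; not how Google or any ecologist actually implements it):

    # Power iteration on an adjacency matrix: the dominant eigenvector
    # scores each node by how strongly it connects to other high-scoring
    # nodes - the core idea behind PageRank and eigenvector centrality.
    A = [[0, 1, 1, 0],  # toy symmetric adjacency matrix:
         [1, 0, 1, 0],  # A[i][j] = 1 if nodes i and j interact
         [1, 1, 0, 1],
         [0, 0, 1, 0]]

    n = len(A)
    v = [1.0 / n] * n  # start from uniform scores
    for _ in range(100):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        total = sum(w)
        v = [x / total for x in w]

    print(v)  # node 2 (the hub) scores highest, peripheral node 3 lowest

In the ecological application, nodes would be species; roughly speaking, the high-scoring ones are candidates for being critical to the web.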

As an example of what a mess it is (and at risk of provoking emotions that aren't relevant to my point), consider climate science. This is an issue where they have to convince LOTS of people, most of whom aren't as smart. You would think that in documenting the evidence supporting their case, scientists would establish a solid walkthrough: a runnable, editable model with every assumption traceable to its source and all inputs traceable to the appropriate databases.

Yet when climate scientists were in the hot seat last fall and wanted to reaffirm the strength of their case, they had no such site to point anyone to. RealClimate.org made a post saying basically, "Um, anyone who's got the links to the public data, it'd be nice if you could post them here..."

To clarify, I'm NOT trying to raise the issue about AGW being a scam, etc. I'm saying that no matter how good the science is, here we have a case where it's of the utmost importance to explain research to the masses, and so you would expect it to have the most thorough documentation and traceability. Yet here, at the top of the hill, no one bothered to trace out the case from start to finish, fully connecting this domain to the rest of collective scientific knowledge.

Comment author: fiddlemath 15 June 2010 05:54:27AM *  8 points [-]

If the quality of science in general has not increased with more people, it's getting worse per unit person.

Er, I'd just expect to see more science being done. I know of no one studying overall mechanisms of science-as-it-is-realized (little-s "science"), and thereby seriously influencing it. Further, that's not something current science is likely to worry about, unless someone can somehow point to irrefutable evidence that science is underperforming.

All of the points you list are real issues; I watch them myself, to constant frustration. I think they have common cause in the incentive structure of science. The following account has been hinted at many times over around Less Wrong, but spelling it out may make it clear how your points follow:

Researchers focus on churning out papers that can actually get accepted at some highly-rated journal or conference, because the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants. This quantity has a strong effect on scientists' individual futures and their reputations. For all but the most well-established or idealistic scientists, this pressure overrides the drive to promote general understanding, increase the world's useful knowledge, or satisfy curiosity[*].

This pressure means that scientists seek the next publication and structure their investigations to yield multiple papers, rather than telling a single coherent story from what might be several least publishable units. Thus, you should expect little synthesis - a least publishable unit is very nearly the author's research minus the current state of knowledge in a specialized subfield. Thus, as you say, existing explanations are a scattered mess.

Since these explanations are scattered and confusing, it's brutally difficult to understand the cutting edge of any particular subfield. Following publication pressure, papers are engineered to garner acceptance from peer reviewers. Those reviewers are part of the same specialized subfield as the author. Thus, if the author fails to use a widely-known concept from outside his subfield to solve a problem in his paper, the reviewers aren't likely to catch it, because it's hard to learn new ideas from other subfields. Thus, the author has no real motivation to investigate subfields outside of his own expertise, and we have a stable situation. Thus, your first and second points.

All this suggests to me that, if we want to make science better, we need to somehow twiddle its incentive structure. But changing longstanding organizational and social trends is, er, outside of my subfield of study.

[*] This demands substantiation, but I have no studies to point to. It's common knowledge, perhaps, and it's true in the research environments I've found myself in. Does it ring true for everyone else reading this, with appropriate experience of academic research?

Comment author: NancyLebovitz 16 June 2010 04:46:51PM 5 points [-]

I think you've got an example of generalizing from one example, and perhaps the habit of thinking of oneself as typical-- you're unusually good at finding clear explanations, and you think that other people could be about as good if they'd just try a little.

I suspect they'd have to try a lot.

As far as I can tell, most people find it very hard to imagine what it's like to not understand knowledge they've assimilated, which is another example of the same mistake.

Comment author: SilasBarta 16 June 2010 05:08:38PM 1 point [-]

Well, I appreciate the compliment, but keep in mind you haven't personally put me to the test on my claim to have that skill at explaining.

As far as I can tell, most people find it very hard to imagine what it's like to not understand knowledge they've assimilated, which is another example of the same mistake.

But I don't understand why this would be hard -- people make quite a big deal about how "I was a little boy/girl like you too one time". Certainly a physics professor would generally remember what it was like to take their first physics class, what confused them, what way of thinking made it clearer, etc.

(I remember one of my professors, later my grad school advisor (bless his heart), was a master at explaining and achieving Level 2 understanding on topics. He was always able to connect it back to related topics, and if students had trouble understanding something, he was always able to identify what the knowledge deficit was and jump in with an explanation of the background info needed.)

To the extent that your assessment is accurate, this problem people have can still be corrected by relatively simple changes in practice. For example, instead of just learning the next class up and moving on, people could make a habit of checking for how it connects to the previous class's knowledge, to related topics, to introductory class knowledge, and to layperson knowledge. It wouldn't help current people, as you have to make it an ongoing effort, but it doesn't sound like it's hard.

Also, is it really that hard for people to ask themselves, "Assume I know nothing. What would I have to be told to be able to do this?"

Comment author: sketerpot 13 July 2010 09:43:04PM *  5 points [-]

Certainly a physics professor would generally remember what it was like to take their first physics class, what confused them, what way of thinking made it clearer, etc.

I remember that it was all pretty straightforward and intuitive. This was not a typical experience, and it also means that I don't really know what average students have trouble with in basic Newtonian physics. Physics professors tend to be people who were unusually good at introductory physics classes. (Meanwhile, I can't seem to find an explanation of standard social skills that doesn't assume a lot of intuitions that I find non-obvious. Fucking small talk, how does it work?!)

Most professors weren't typical students, so why would their recollections be a good guide to what problems typical students have when learning a subject for the first time?

Comment author: SilasBarta 13 July 2010 10:22:26PM *  3 points [-]

I remember intro physics being straightforward and intuitive, and I had no trouble explaining it to others. In fact, the first day we had a substitute teacher who just told us to read the first chapter, which was just the basics like scientific notation, algebraic manipulation, unit conversion, etc. I ended up just teaching the others when something didn't make sense.

If there was any pattern to it, it was that I was always able to "drop back a level" to any grounding concept. "Wait, do you understand why dividing a variable by itself cancels it out?" "Do you understand what multiplying by a power of 10 does?"

That is, I could trace back to the beginning of what they found confusing. I don't think I was special in having this ability -- it's just something people don't bother to do, or don't themselves possess the understanding to do, whether it's teaching physics or social skills (for which I have the same complaint as you).

Someone who really understands sociality (i.e., level 2, as mentioned above) can fall back to the questions of why people engage in small talk, and what kind of mentality you should have when doing so. But most people either don't bother to do this, or have only an automatic (level 1) understanding.

Do you ever have trouble explaining physics to others? Do you find any commonality to the barriers you encounter?

Comment author: steven0461 13 July 2010 11:51:00PM *  6 points [-]

In mathy fields, how much of it is caused by insufficiently deep understanding and how much of it is caused by taboos against explicitly discussing intuitive ways of thinking that can't be defended as hard results? The common view seems to be that textbooks/lectures are for showing the formal structure of whatever it is you're learning, and to build intuitions you have to spend a lot of time doing exercises. But I've always thought such effort could be partly avoided if instead of playing dignified Zen master, textbooks were full of low-status sentences like "a prisoner's dilemma means two parties both have the opportunity to help the other at a cost that's smaller than the benefit, so it's basically the same thing as trade, where both parties give each other stuff that they value less than the other, so you should imagine trade as people lobbing balls of stuff at each other that grow in the lobbing, and if you zoom out it's like little fountains of stuff coming from nowhere". (ETA: I mean in addition to the math itself, of course.) It's possible that I'm overrating how much such intuitions can be shared between people, maybe because of learning-style issues.
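(For concreteness, here is the structure being gestured at, in my notation rather than steven0461's: each party can Help the other at cost c for the other's benefit b, with b > c > 0, and payoffs listed as (row player, column player):

    \begin{array}{c|cc}
                 & \text{Help}    & \text{Don't} \\ \hline
    \text{Help}  & (b-c,\ b-c)    & (-c,\ b)     \\
    \text{Don't} & (b,\ -c)       & (0,\ 0)
    \end{array}

Mutual help pays each party b - c > 0 - the "fountain of stuff" in the zoomed-out trade picture - even though Don't dominates for each party individually.)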

Comment author: JoshuaZ 13 July 2010 11:57:24PM 3 points [-]

That is, I could trace back to the beginning of what they found confusing. I don't think I was special in having this ability -- it's just something people don't bother to do, or don't themselves possess the understanding to do, whether it's teaching physics or social skills (for which I have the same complaint as you).

This demonstrates a highly developed theory of mind. In order to do this one needs both a good command of the material and a good understanding of what people are likely to understand or not understand. This is often very difficult.

Comment author: Will_Newsome 08 September 2010 05:49:51PM 2 points [-]

Fucking small talk, how does it work?!

For me, a small but significant hack suggested by Anna Salamon was to try to act (and later, to actually be) cheerful and engaged instead of wittily laconic and 'intelligent'. That said, it's rare that I remember to even try. Picking up habits is difficult.

Comment author: Will_Newsome 18 June 2010 08:12:30AM 3 points [-]

So what is the realistic alternative for those who have no other marketable skills, such as myself? (I specifically don't have a high school diploma, though I suppose it would be trivially easy to nab a GED.)

Comment author: SilasBarta 18 June 2010 02:11:59PM 4 points [-]

Until the adjustment happens, there won't be a common way, because most people are still in the current inefficient mentality, so you don't get scaling effects. Whatever internships friends and family can offer would probably be the best alternative.

In the future, there will probably be some standardized test you'll have to take at age 16-18 to show that you're reasonably competent and your education wasn't a sham. (The SAT tests could probably be used as they stand for this purpose.) Then, most people will go straight to unpaid or low-paid internships in the appropriate field, during which they may have to take classes to get a better theoretical background in their field (like college, but more relevant).

After a relatively short time, they will either prove their mettle and have contacts, experience, and opportunities, or realize it was a bad idea, cut their losses, and try something else. It sounds like a big downside, until you compare it to college today.

Comment author: Vive-ut-Vivas 15 June 2010 07:40:50PM 6 points [-]

The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.

I wish I could vote this comment up a hundred times. This insane push toward college without much thought about the quality of the education is extremely harmful. People are more focused on slips of paper that signal status versus the actual ability to do things. Not only that, but people are spending tens of thousands of dollars for degrees that are, let's be honest, mostly worthless. Liberal arts and humanities majors are told that their skill set lies in the ability to "think critically"; this is a necessary but not sufficient skill for success in the modern world. (Aside from the fact that their ability to actually "think critically" is dubious in the first place.) In reality, the entire point is networking, but there has to be a more efficient way of doing this that isn't crippling an entire generation with personal debt.

Comment author: wedrifid 18 June 2010 11:53:05AM *  4 points [-]

I wish I could vote this comment up a hundred times.

I would settle for just 10 times if it were in the form of a post. ;)

Liberal arts and humanities majors are told that their skill set lies in the ability to "think critically";

Evidently the ability to think critically is instilled after the propaganda is spread.

Comment author: SilasBarta 17 June 2010 07:09:00PM 5 points [-]

Liberal arts and humanities majors are told that their skill set lies in the ability to "think critically"; this is a necessary but not sufficient skill for success in the modern world.

Wow, now that is what I would call fraud. It's something the students should be able to detect right off the bat, given the lack of liberal arts success stories they can point to. It's like they just think, "I like history, so I'll study that", with no consideration of how they'll earn a living in four years (or seven). That can't last.

In reality, the entire point is networking, but there has to be a more efficient way of doing this that isn't crippling an entire generation with personal debt.

And I wish I could vote that up a hundred times. I wouldn't mind as much if colleges were more open about "hey, the whole point of being here is networking", but I guess that's something no one can talk about in polite company.

Comment author: realitygrill 17 June 2010 04:37:01AM 2 points [-]

Tell my parents this one.

On the other hand, is 'success' an existentialist concept (in that you have to define it yourself)? I would think it'd be near impossible to come to a consensus as to what is necessary and sufficient for success.

Comment author: Mass_Driver 17 June 2010 05:13:52AM 5 points [-]

Sure, it's vague. The point is that, for any plausible, conventional definition of success you might be able to come up with, a typical liberal arts degree is definitely insufficient and probably unnecessary to meet that definition's criteria.

Comment author: Mass_Driver 17 June 2010 05:15:31AM 2 points [-]

In short: this, too, shall pass -- the only question is how long we'll have to suffer until the transition is complete.

Or, it may not pass, and the American educational system may continue to gather detritus until it collapses. Anybody familiar enough with the Chinese Ming dynasty to rationally assess the similarities? I'm not.

Comment author: SilasBarta 17 June 2010 07:10:13PM 1 point [-]

Or, it may not pass, and the American educational system may continue to gather detritus until it collapses.

Not to be pedantic, but that would be passing. I made no pretense that the passing would be quick.

Comment author: nhamann 15 June 2010 05:00:57AM 2 points [-]

I'm not sure I see what the problem is.

From the article:

Paid out of the grant, these highly skilled employees might earn $40,000 a year for 60 or more hours a week in the lab. A lucky few will eventually land faculty posts, but even most of those won’t get traditional permanent spots with the potential of tenure protection. The majority of today’s new faculty hires are “soft money” jobs with titles like “research assistant professor” and an employment term lasting only as long as the specific grant that supports it.

I'm not sure how typical this experience is, but assuming it is as common as the article suggests: you don't see a problem with the fact that huge numbers of highly trained people (~4 years for a bachelor's, 5-7 for a Ph.D.) are getting paid very little to work in conditions with almost no long-term job security? You see that as being perfectly fine, and comment that "capitalism works?" I'm not sure what to say. Such job prospects are decidedly unappealing (some might say intolerable), and I think it's reasonable to suggest that such conditions will result in a substantial decrease in the number of smart, dedicated young people interested in becoming scientists. This, to put it bluntly, is a fucking shame.

Comment author: Houshalter 15 June 2010 01:27:08PM 2 points [-]

I'm not sure how typical this experience is, but assuming it is as common as the article suggests: you don't see a problem with the fact that huge numbers of highly trained people (~4 years for a bachelor's, 5-7 for a Ph.D.) are getting paid very little to work in conditions with almost no long-term job security? You see that as being perfectly fine, and comment that "capitalism works?" I'm not sure what to say. Such job prospects are decidedly unappealing (some might say intolerable), and I think it's reasonable to suggest that such conditions will result in a substantial decrease in the number of smart, dedicated young people interested in becoming scientists. This, to put it bluntly, is a fucking shame.

Maybe that was a little harsh. But the question is, why are "huge numbers of highly trained people (~4 years for a bachelor's, 5-7 for a Ph.D.) [...] getting paid very little to work in conditions with almost no long-term job security?" The article suggests it's because we have a surplus. But if those people weren't so highly trained, would they then get those better jobs? Probably not; people don't discriminate against you because you're "highly trained".

Comment author: simplicio 17 June 2010 12:03:02AM 5 points [-]

An idea I had: an experiment in calibration. Collect, say, 10 (preferably more) occasions on which a weather forecaster said "70% chance of rain/snow/whatever," and note whether or not these conditions actually occurred. Then find out if the actual fraction is close to 0.7.

I wonder whether they actually do care about being well calibrated. Probably not; I suppose their computers just spit out a number and they report it. But it would be interesting to find out.

I will report my findings here, if you are interested, and if I stay interested.
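A minimal sketch of the tallying in Python, assuming each observation is recorded as a (stated probability, did-it-rain) pair (the data below is made up for illustration):

    from collections import defaultdict

    # (stated probability of rain, whether it actually rained)
    forecasts = [(0.7, True), (0.7, True), (0.7, False),
                 (0.3, False), (0.3, False), (0.3, True)]

    buckets = defaultdict(list)
    for p, rained in forecasts:
        buckets[p].append(rained)

    # A well-calibrated forecaster's observed frequencies should come out
    # close to the stated probabilities, given enough occasions.
    for p, outcomes in sorted(buckets.items()):
        observed = sum(outcomes) / len(outcomes)
        print(f"said {p:.0%}: rained {observed:.0%} of {len(outcomes)} times")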

Comment author: JoshuaZ 17 June 2010 12:07:56AM *  1 point [-]

Note that this sort of thing has been done a bit before. See for example this analysis.

Edit: The linked analysis has a lot of problems. See discussion below.

Comment author: simplicio 17 June 2010 12:22:31AM *  4 points [-]

Cool, but hold on a minute though. I quote:

In measuring precipitation accuracy, the study assumed that if a forecaster predicted a 50 percent or higher chance of precipitation, they were saying it was more likely to rain than not. Less than 50 percent meant it was more likely to not rain.

That prediction was then compared to whether or not it actually did rain...

Isn't something wrong here? If you say "60% chance of rain," and it doesn't rain, you are not necessarily a bad forecaster. Not unless it actually rained on less (or more!) than 60% of those occasions. It should rain on ~60% of occasions on which you say "60% chance of rain."

Am I just confused about this fellow's methodology?

Comment author: Kevin 16 June 2010 08:47:29PM 5 points [-]

IBM's Watson AI trumps humans in "Jeopardy!"

http://news.ycombinator.com/item?id=1436625

Comment author: cousin_it 16 June 2010 09:10:18PM *  1 point [-]

Thanks a lot for the link. I remember Eliezer arguing with Robin about whether AI will advance explosively via a few big insights, or incrementally by amassing encoded knowledge and many small insights. Watson seems to constitute evidence in favor of Robin's position, as it has no single key insight:

Ferrucci says his team will continue to fine-tune Watson, but improving its performance is getting harder. “When we first started, we’d add a new algorithm and it would improve the performance by 10 percent, 15 percent,” he says. “Now it’ll be like half a percent is a good improvement.”

Comment author: JoshuaZ 15 June 2010 04:12:58AM 5 points [-]

I'm thinking of writing a top-post on the difficulties of estimating P(B) in real-world applications of Bayes' Theorem. Would people be interested in such a post?
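
(For concreteness, here is the kind of calculation I mean, as a toy sketch with invented numbers; P(B) is expanded via the law of total probability, and small errors in the hard-to-estimate term P(B|~A) swing the posterior considerably:)

    # Bayes' theorem with P(B) expanded by the law of total probability:
    # P(B) = P(B|A)*P(A) + P(B|~A)*P(~A). All numbers are invented.
    p_A = 0.01             # prior probability of hypothesis A
    p_B_given_A = 0.95     # likelihood of evidence B if A is true
    p_B_given_notA = 0.10  # likelihood of B if A is false -- the hard part

    p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
    p_A_given_B = p_B_given_A * p_A / p_B
    print(p_B)          # 0.1085
    print(p_A_given_B)  # ~0.088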

Comment author: Christian_Szegedy 16 June 2010 07:40:02PM *  3 points [-]

Funny, I've been entertaining the same idea for a few weeks.

Every time I read statements like "... and then I update the probabilities, based on this evidence ...", I think to myself: "I wish I had the time (or processing power) he thinks he has. ;)"

Comment author: h-H 15 June 2010 02:11:17AM *  5 points [-]

Yay! Music composition AI.

We've had them for a while, though. But who knows, we might have our first narrowly focused AI band pretty soon.

Good business opportunity there... maybe this is how the SIAI will guarantee unlimited funding in the future :)?

Comment author: NancyLebovitz 16 June 2010 09:09:18AM 5 points [-]

Thanks for the link.

If a machine could write a Mozart sonata every bit as good as the originals, then what was so special about Mozart?

Mozart developed the Mozart sonata.

Comment author: SilasBarta 15 June 2010 06:06:24PM 0 points [-]

Good music isn't about good music. It's about which music the authorities have approved of.

Comment author: NancyLebovitz 18 June 2010 03:25:27PM 1 point [-]

Wade's breakthrough came after his real-life child was born. The duties of fatherhood limited the time he could spend playing the game, so he replaced the "computer" with a much simpler pattern called an "instruction tape", made up of smaller patterns known as "gliders". By placing these at precise intervals, he created a program that feeds into the constructor and dictates its actions, much like the punched rolls of tape once used to control the first computers.

One of Eliezer's posts talks about realizing that conventional science is content with an intolerably slow pace. Here we have an example of less time leading to a better solution.
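
(For anyone who hasn't played with Life: a minimal sketch of the update rule these patterns run on, the standard birth-on-3 / survive-on-2-or-3 rule, with an ordinary glider as the test pattern:)

    from collections import Counter

    # One generation of Conway's Life on a sparse set of live cells:
    # a dead cell with exactly 3 live neighbours is born; a live cell
    # with 2 or 3 live neighbours survives; everything else dies.
    def step(live):
        neighbours = Counter((x + dx, y + dy)
                             for x, y in live
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same shape, shifted one cell diagonally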

Comment author: Blueberry 18 June 2010 07:46:52AM *  1 point [-]

Apparently it doesn't replicate itself any more than a glider does; the old copy is destroyed as it creates a new copy.

Comment author: Morendil 18 June 2010 08:05:15AM 1 point [-]

Reading the conwaylife.com thread gives a better sense of this thingie's importance than the comparison with a glider. ;)

Comment author: MartinB 17 June 2010 11:47:52PM *  3 points [-]

These days I sometimes bump into great new ideas[tm], at times well proven, or at least workable and useful -- only to remember that I already used that idea some years ago with great success and then dumped it for no good reason whatsoever. A simple example: in language-learning write-ups I repeatedly find the idea of an SRS, a program that does spaced repetitions at nice intervals and consistently helps in memorizing not only language items but all other kinds of facts (a toy sketch of the scheduling idea follows below). Programs and data collections are now freely available -- but I programmed my own about 14 years ago as a nice entry-level programming exercise, used it quite extensively and successfully for about 2 years in school, till I suddenly stopped. That made me wonder which other great ideas I have already used and discarded, and why former me would do such a thing; and, to make it a public question: which great things might LWers have tried and discarded for no particular reason?

Another obvious example from my own stack would be the use of checklists to pack for holidays. Worked great for years and still does.
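
(The core of an SRS really is small enough for an entry-level exercise. A toy sketch of the scheduling rule -- a simple doubling heuristic, not any particular published algorithm:)

    from datetime import date, timedelta

    # Toy spaced-repetition scheduler: double the interval after each
    # successful recall, reset to one day after a failure.
    def next_interval(days, recalled):
        return days * 2 if recalled else 1

    interval, due = 1, date.today()
    for recalled in (True, True, True, False, True):
        interval = next_interval(interval, recalled)
        due += timedelta(days=interval)
        print(f"next review in {interval} days, on {due}")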

Comment author: cousin_it 15 June 2010 10:12:44PM *  3 points [-]

Apologies for posting so much in the June Open Threads. For some reason I'm getting many random ideas lately that don't merit a top-level post, but still lead to interesting discussions. Here's some more.

  1. How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.

  2. How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.

Of course, both those arguments fall apart if the deception equipment is "unusually clever" at deceiving you. In that case both questions are probably hopeless.
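
(A minimal sketch of test 1, assuming the allegedly-external computer has the sympy library installed:)

    import random
    from sympy import factorint  # assumes sympy is available

    # Make up a number too large to factor in your head, factor it by
    # machine, then verify the product "by hand" (recomputed here).
    n = random.randrange(10**12, 10**13)
    factors = factorint(n)  # returns {prime: exponent, ...}
    product = 1
    for prime, exponent in factors.items():
        product *= prime ** exponent
    assert product == n  # the pen-and-paper step
    print(n, "=", " * ".join(f"{p}^{e}" for p, e in factors.items()))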

Comment author: zero_call 17 June 2010 02:17:34AM 3 points [-]

How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.

No, there's no way of knowing that you're not being tricked. If your perception changes, and your perception of your brain changes, that just means the vat is tricking the brain into perceiving that.

The "brain in the vat" idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.

Comment author: JoshuaZ 15 June 2010 10:21:24PM 4 points [-]

The first one fails terribly. I've had dreams in which I thought I'd proven some statement I was thinking about, and on waking could remember most of the "proof" -- clearly incoherent. No, subconscious, the fact that Martin van Buren was the 8th President of the United States does not tell me anything about zeros of L-functions. (I've had other dream-proofs that were valid, though, so I don't want the subconscious to stop working completely.)

The second one seems more viable. May I suggest using something like electromagnetic stimulation of specific areas of the brain rather than deliberately damaging sections? For that matter, the fact that drugs can alter thought processes, not just perception, also argues strongly against being a brain in a vat, by the same sort of logic.

Comment author: Mass_Driver 17 June 2010 05:18:57AM 2 points [-]

How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer,

Do you have access to the computer software of your choice in your dreams? That sounds unusually vivid to me, maybe even lucid. I'm lucky if I can find a working pen and a desk that obeys the laws of physics in my dreams.

Comment author: wedrifid 17 June 2010 03:28:26PM 3 points [-]

Do you have access to the computer software of your choice in your dreams?

I know I do. In the last couple of years I have gone from almost never remembering a dream to having dreams that are sometimes even more vivid than my memories of real life. I even had to check my computer one day to see whether what I remembered doing was 'real'.

Comment author: Morendil 17 June 2010 06:49:27AM 1 point [-]

Heck, I'm lucky if I can find trousers in my dreams.

Comment author: wedrifid 17 June 2010 03:20:46PM 1 point [-]

Depends on how you define 'lucky' I guess. ;)

Comment author: humpolec 18 June 2010 09:50:48AM 1 point [-]

How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.

A similar method was used by the protagonist of Solaris to check whether he was hallucinating.

Comment author: cousin_it 18 June 2010 10:06:49AM *  1 point [-]

Ouch! I read Solaris long ago. It seems the idea stuck in my head and I forgot its origin. And it does make much more sense if you substitute "hallucinating" for "dreaming".

Comment author: wedrifid 17 June 2010 03:33:33PM 1 point [-]

How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.

The trick, then, is to instill in yourself a habit of regularly checking whether you are asleep (i.e. even when you are awake). A habit of thinking "am I awake? let me check" is the hard part, and without that habit your sleeping mind isn't likely to question itself. The literature on lucid dreaming talks a lot about such tests. In fact, combined with "write dreams down as soon as you wake up" and "consume X substance", that more or less summarizes the techniques.

Comment author: Risto_Saarelma 18 June 2010 01:48:41PM 2 points [-]

The odd thing is that despite reading stuff about reality tests and trying to build a habit from doing them while awake, on the rare occasions I've had a lucid dream I've just spontaneously become aware that I'm presently dreaming. I don't remember ever having a non-lucid dream where I've done a reality test.

Instead of fancy stuff like determining prime factors, one consistent dream sign I've had is utter incompetence in telling time from digital watches and clocks. This generally doesn't tip me off that I'm dreaming though, and doesn't occur often enough that I could effectively condition myself to recognize it.

Comment author: humpolec 18 June 2010 09:43:15AM 1 point [-]

In fact, combined with "write dreams down as soon as you wake up" and "consume X substance", that more or less summarizes the techniques.

There are also trance/self-hypnosis methods, like WILD; some people seem to be very successful with them.

Comment author: wedrifid 18 June 2010 11:32:42AM 1 point [-]

Interesting. And personally I find experimenting with trance and self-hypnosis by themselves to be even more fascinating than vivid dreaming. If only I did not come with the apparent built-in feature of inoculating myself against any particular method of trance or self-hypnosis after a few successful uses.

Comment author: Dagon 16 June 2010 07:14:14PM *  1 point [-]

I think "unusually clever" should be "sufficiently clever" in your caveat. I have very wide error bars on what I think would be usual, but I suspect that it's almost guaranteed to defeat those tests if it's defeated the overall test you've already applied of "have only memories of experiences consistent with a believable reality".

In which case both questions are indeed hopeless.

Comment author: Emile 15 June 2010 12:44:41PM *  8 points [-]

I don't know the ins and outs of the Summers case, but that article has the smell of a straw man. Especially this (emphasis mine):

You see, there's a shifty little game that proponents of gender discrimination are playing. They argue that high SAT scores are indicative of success in science, and then they say that males tend to have higher math SAT scores, and therefore it is OK to encourage more men in the higher ranks of science careers…but they never get around to saying what their SAT scores were. Larry Summers could smugly lecture to a bunch of accomplished women about how men and women were different and having testicles helps you do science, but his message really was "I have an intellectual edge over you because some men are incredibly smart, and I am a man", which is a logical fallacy.

From what I understand (and a quick check on Wikipedia confirms this), what got Larry Summers in trouble wasn't that he said we should use gender as a proxy for intelligence, but merely that he suggested gender differences in ability could explain the observed under-representation of women in science.

The whole article is attacking a position that, as far as I know, nobody holds in the West any more: that women should be discriminated against because they are less good at science.

Well, he also seems to be attacking a second group that does exist (those who say that there are fewer women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.

Comment author: Douglas_Knight 23 June 2010 12:21:08AM *  3 points [-]

The whole article is attacking a position that, as far as I know, nobody holds in the West any more : that women should be discriminated against because they are less good at science.

Well, I think PZ Myers is a liar who has never heard of such people, but they do exist. Robin Hanson, for one. More representative is conchis's claim early in the comments that

some [Oxford] admissions fellows were discounting female students’ grades on the basis that they were more likely to reflect conscientiousness than talent.

Rewritten: I've heard hints along these lines in America, where girls get better grades, in both high school and college, than boys with the same SATs. This is suggested to be about conscientiously doing homework. If American colleges don't want to reward conscientiousness, they could change their grading to avoid homework.

That would make them like my understanding of Oxford, where I believe grades are based on high-stakes testing, not on homework. But I thought admissions was based only on high-stakes testing, too. That is, I don't even know what the quoted claim means by "grades," nor have I been able to track down people openly admitting anything like it.

Do British students get grades other than A-levels? Are there sex divergences between the grades and A-levels? A-levels and predictions? I hear that Oxbridge grades are lower variance for girls than boys. I also hear that boys do better on the math SATs than on the math A-levels, which seems like it should be a condemnation of one of the tests.

Comment author: Nick_Tarleton 16 June 2010 05:06:09PM *  2 points [-]

Well, he also seems to be attacking a second group that does exist (those who say that there are fewer women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.

Which makes a kind of instrumental sense, in that advocacy of this position aids the first group by innocently explaining away gender inequalities. (I think it's obvious that most people don't distinguish well, in political situations, between incidental aid and explicit support.) Also, if evaluating individual intelligence is costly and/or inevitably noisy, it is (selfishly) rational for evaluators to give significant weight to gender, i.e. discriminate. And given how little people understand statistics, and the extent to which judgments of status/worth are tied to intelligence and to group membership, it seems inevitable that belief in group differences will lead people to discriminate far more than would be rational.

Comment author: Emile 17 June 2010 10:16:05AM 2 points [-]

Which makes a kind of instrumental sense, in that advocacy of this position aids the first group by innocently explaining away gender inequalities. (I think it's obvious that most people don't distinguish well, in political situations, between incidental aid and explicit support.)

Can't this be said of just about all straw men? Yes, setting up a straw man may be instrumentally rational, but is it the kind of thing we should be applauding?

Say we have two somewhat similar positions:

  • Position A, which is false and maybe evil (in this case "we should discriminate against women when hiring scientists, because they aren't as likely to be very smart")
  • Position B, which is maybe true (in this case, "the lack of female scientists could be due to the fact that they aren't as likely to be very smart")

A straw man is pretending that people arguing B are arguing A, or pretending that there's no difference between the two - which seems to be what P.Z. Myers is doing.

You're saying that position B gives support for position A, and, yes, it does. That can be a good reason to attack people who support position B (especially if you really don't like position A), but that holds even if position B is true.

Comment author: Nick_Tarleton 22 June 2010 06:44:49PM *  2 points [-]

Can't this be said of just about all straw men? Yes, setting up a straw man may be instrumentally rational, but is it the kind of thing we should be applauding?

Agreed. I don't necessarily approve of this sort of rhetoric, but I think it's worth trying to figure out what causes it, and recognize any good reasons that might be involved. (I also don't mean to say that people who use this rhetoric are calculating instrumental rationalists — mostly, I think they, as I alluded to, don't recognize the possibility of saying things representative of and useful to an outgroup without being allied with it.)

Comment author: Alexandros 14 June 2010 06:10:17PM *  3 points [-]

Off That (Rationalist Anthem) - Baba Brinkman

More about skeptics than rationalists, but still quite nice. Enjoy

Comment author: Vladimir_Nesov 18 June 2010 04:39:59PM *  6 points [-]

I've noticed a surprising conclusion about the moral value of three outcomes: (1) existential disaster that terminates civilization, leaving no rational singleton behind ("Doom"), (2) Unfriendly AI ("UFAI"), and (3) FAI. It now seems that although the most important factor in optimizing the value of the world (according to your personal formal preference) is increasing the probability of FAI (no surprise here), all else equal UFAI is much preferable to Doom. That is, if you have the option of trading Doom for UFAI while forsaking only a negligible probability of FAI, you should take it.

The main argument (known as Rolf Nelson's AI deterrence) can be modeled as counterfactual mugging: a UFAI will give up a (small) portion of the control over its world to the FAI's preference (pay the $100) if there is a (correspondingly small) probability that the FAI could've been created had circumstances played out differently (which corresponds to the coin landing differently in counterfactual mugging), in exchange for the FAI (counterfactually) giving up a portion of control to the UFAI (the reward from Omega).

As a result, having a UFAI in the world is better than having no AI (at any point in the future), because the UFAI can work as a counterfactual trading partner to an FAI that could've existed under other circumstances, which makes the FAI stronger (improves the value of the possible worlds). Of course, the negative effect of decreasing the probability of FAI is much stronger than the positive effect of increasing the probability of UFAI to the same extent, which means that if the choice is purely between UFAI and FAI, the balance is conclusively in FAI's favor. That there are FAIs in the possible worlds also shows that the Doom outcome is not completely devoid of moral value.

More arguments and a related discussion here.
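
(A toy numerical rendering of the comparison, with entirely made-up placeholder numbers, just to show the ordering the argument produces:)

    # Value is measured by your formal preference, normalizing a fully
    # FAI-controlled world to 1. All numbers are invented placeholders.
    p_counterfactual_fai = 0.01  # how probable FAI counterfactually was

    # A trading UFAI cedes control roughly in proportion to that probability.
    value_fai = 1.0
    value_ufai = p_counterfactual_fai * value_fai  # the ceded, FAI-optimized slice
    value_doom = 0.0  # to first order; nothing survives to trade

    assert value_fai > value_ufai > value_doom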

Comment author: Morendil 17 June 2010 06:00:14AM 4 points [-]

Looks like LW briefly switched over to its backup server today, one with a database a week out of date. That, or a few of us suffered a collective hallucination. Or, for that matter, just me. ;)

Just in case you were wondering too.

Comment author: AndyWood 17 June 2010 07:31:55AM 3 points [-]

I was wondering indeed. That was surreal.

Comment author: cousin_it 14 June 2010 09:59:06PM *  4 points [-]

Any LessWrongers understand basic economics? This could be another great topic set for all of us. Let's kick things off with a simple question:

I'm renting an apartment for X dollars a month. My parents have a spare apartment that they rent out to someone else for Y dollars a month. If I moved into that apartment instead, would that help or hurt the country's economy as a whole? Consider the cases X>Y, X<Y, X=Y.

ETA: It's fascinating how tricky this question turned out to be. Maybe someone knowledgeable in economics could offer a simpler question that does have a definite answer?

Comment author: James_K 15 June 2010 05:53:39AM 7 points [-]

An interesting question. Here are some initial thoughts:

In terms of broad economic aggregates, it won't make any difference. If you rent the apartment from your parents at the market rate, GDP is exactly unaffected; people are paying the same money to different people. If you rent it for less than the market rate, measured GDP is lower, but this reflects deficiencies in measured GDP, since GDP uses market prices as a proxy for the value of a transaction (this is fine for the most part, but doing your child a favour is an exception the conventional methodology can't deal with). So from a macroeconomic perspective I'd say it's a wash either way.

Microeconomically, there could be some efficiencies in your renting from your parents. If they trust you more than a random stranger (and let's hope they do), they will spend less time monitoring your behaviour (property inspections and the like) than they would a stranger, and the value of your familial relationship should constrain you from taking advantage of that lax monitoring the way a stranger might. This means your parents save time (which makes their life easier) and no one should be worse off (I assume the current tenant of their apartment would find adequate accommodation elsewhere).

However, one note of caution. If you were to get into a dispute of some sort with your parents over the tenancy, this could damage your relationship with your parents. If you value this relationship (and I assume you do), this is a potential downside that doesn't exist under the status quo. Also, some people might see renting from your parents as little different to living with your parents which (depending on your age) may cost you status in your day-to-day life (even if you pay a market rate). If you value status, you should be aware of this drawback.

So in summary, the most efficient outcome depends on three variables: 1) How much time and effort do your parents spend monitoring their tenant at the moment? 2) How likely is it that your relationship with them could be strained as a result of you living there? 3) How many friends / acquaintances / colleagues do you have who would think less of you for renting from your parents (and how much do you care)?

I hope that helps.

Comment author: MichaelBishop 15 June 2010 03:30:32AM 4 points [-]

I think that a majority of economists agree that in many downturns, it helps the economy if people, on the margin, spend a little more. This justifies Keynesian stimulus. Therefore, the economy would be helped if your choice increases the total amount of money changing hands, presumably if you rent the apartment for $X when X>Y. My impression is that in good economic times, marginal spending is not considered to improve economic welfare.

Comment author: SilasBarta 16 June 2010 03:48:18PM *  2 points [-]

I think that a majority of economists agree that in many downturns, it helps the economy if people, on the margin, spend a little more. This justifies Keynesian stimulus. Therefore, the economy would be helped if your choice increases the total amount of money changing hands ...

Imagine that the "economy" is sluggish, and that a widget maker currently profits $1 on each widget sale. Now, consider these two scenarios:

a) I buy 100 widgets that I don't want, in order "to help the economy".
b) I give the widget-maker $100. Then, I lie and say, "OMG!!! I just heard that demand for widgets is SURGING, you've GOT to make more than usual!" (Assume they trust me.)

In both cases, the widget-maker is $100 richer, the real resources in the economy are unchanged, and the widget-maker has gotten a false signal that more widgets should be produced. Yet one of those "helps the economy", while the other doesn't? How does that make sense?

If you believe that either one of those "helps the economy", your whole view of "the economy" took a wrong turn somewhere.

Comment author: SilasBarta 15 June 2010 03:02:22PM *  12 points [-]

If I moved into that apartment instead, would that help or hurt the country's economy as a whole?

Good question, not because it's hard to answer, but because of how pervasive the wrong answer is, and the implications for policy for economists getting it wrong.

  • If your parents prefer you being in their apartment to the forgone income, they benefit; otherwise they don't.

  • If you prefer being in their apartment to the alternative rental opportunities, you benefit; otherwise, you don't.

  • If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.

ANYTHING beyond that -- anything whatsoever -- is Goodhart-laden economist bullsh**. Things like GDP and employment and CPI were picked long ago as good correlates of general economic health. Today, they are taken to define economic health, irrespective of how well people's wants are being satisfied, which is supposed to be what we mean by a "good economy".

Today, economists equate growing GDP -- irrespective of measuring artifacts that make it deviate from what we want it to measure -- with a good economy. If the economy isn't doing well enough, well, we need more "aggregate demand" -- you see, people aren't buying enough things, which must be bad.

Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure. No, instead, we have come to define success by the number of money-based market exchanges, rather than whether people are getting the combination of work, consumption, and leisure (all broadly defined) that they want.

This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.

Now, it's true there are prisoner's dilemma-type situations where people have to cooperate and endure some pain to be better off in the aggregate. But the corresponding benefit that economists expect from this collective sacrifice is ... um ... more pointless work that doesn't satisfy real demand ... but hey, it keeps up "aggregate demand", so it must be what a sluggish economy needs.

Are you starting to see how skewed the standard paradigm is? If people found a more efficient, mutualist way to care for their children rather than make cash payments to day care, this would be regarded as a GDP contraction -- despite most people being made better off and efficiency improving. If people work longer hours than they'd like, to produce stuff no one wants, well, that shows up as more GDP, and it's therefore "good".

How the **** did we get into this mindset?

Sorry, [/another rant].

Comment author: NancyLebovitz 16 June 2010 08:54:25AM 5 points [-]

What isn't reflected in the GDP is huge.

There's the underground economy-- I've seen claims about the size of it, but how would you check them?

There's everything people do for each other without it going through the official economy.

And there's what people do for themselves-- every time you turn over in bed, you are presumably increasing value. If you needed paid help, it would be adding to the GDP.

Comment author: James_K 16 June 2010 05:28:40AM *  5 points [-]

I don't understand where you acquired this view of economists. I am an economist, and I assure you economists don't subscribe to the "measured GDP is everything" view you attribute to them.

This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.

This is not an accurate portrayal of what Keynesians believe. The Keynesian theory of depressions and recessions is that excessive pessimism leads people to avoid investing or starting businesses, which lowers economic activity further, which promotes more pessimism, and so on.

The goal of stimulus is effectively to trick people into thinking the economy is better than it is, which then becomes a self-fulfilling prophecy; low-quality spending by government drives high-quality spending by the private sector.

If you wish to be sceptical of this story (I'm fairly dubious about it myself), then fine, but Keynesians aren't arguing what you think they're arguing.

Comment author: SilasBarta 16 June 2010 02:44:51PM *  6 points [-]

If you wish to be sceptical of this story (I'm fairly dubious about it myself), then fine, but Keynesians aren't arguing what you think they're arguing.

No, that's precisely what I assumed they're arguing, and I believe my points were completely responsive. I will address the position you describe in the context of the criticism in my rant.

The Keynesian theory of depressions and recessions is that excessive pessimism leads people to avoid investing or starting businesses, which lowers economic activity further, which promotes more pessimism, and so on.

The goal of stimulus is effectively to trick people into thinking the economy is better than it is, which then becomes a self-fulfilling prophecy;

Now, unpack the meaning of all of those terms, back to the fundamentals we really care about, and what is all that actually saying? Well, first of all, have you played rationalist taboo with this and tried to phrase everything without economics jargon, so as to fully break down exactly what all the above means at the layperson level? To me, economists seem to talk as if they have not done so.

I would like for you to tell me whether you have done so in the past, and write up the phrasing you get before reading further. You've already tabooed a lot, but I think you need to go further, and remove the terms: recession, depression, stimulus, excessive, pessimism, invest, and economic activity. (What's left? Terms like prefer, satisfaction, wants, market exchange, resources, working, changing actions.)

Now, here's what I get: (bracketed phrases indicate a substitution of standard economic jargon)

"People [believe that future market interactions with others will be less capable of satisfyng their wants], which leads them to [allocate resources so as to anticipate lower gains from such activity]. As people do this, the combined effect of their actions is to make this suspicion true, [increasing the relative benefit of non-market exchanges or unmeasured market exchanges].

"The government should therefore [purchase things on the market] in order to produce a [false signal of the relative merit of selling certain goods], and facilitate production of [goods people don't want at current prices or that they previously couldn't justify asking their government to provide]. This, then, becomes a self-fulfilling prophecy: once people [sell unwanted goods due to this government action], it actually becomes beneficial for others to sell goods people do want on the market, [preventing a different kind of adjustment to conditions from happening]."

Phrased in these terms, does it even make sense? Does it even claim to do something people might want?

Comment author: James_K 17 June 2010 08:22:57AM 7 points [-]

People [believe that future market interactions with others will be less capable of satisfying their wants]

That was a very useful exercise, since it helped me identify the key point of disagreement between you and Keynesianism. If I'm right, you're coming at this from a goods-market perspective, i.e. "I, a typical consumer, am not interested in any of these goods at these prices, so I'm not going to buy so much", whereas the Keynesians are blaming this kind of attitude: "I, a typical consumer, am fearful of the future. While I want to buy stuff, I'd better start saving for the future instead in case I lose my job" -- and it's the saving that triggers the recession (money flows out of the economy into savings; this fools people into thinking they are poorer, and the death spiral begins).

A couple of other contextual points: 1) The fiscal stimulus that Keynes recommended was based on governments running deficits, not necessarily spending more. Cutting taxes works just as well.

2) Keynes was trying to reduce the magnitude of boom-bust swings, not increase trend economic growth rates. As such, he prescribed the opposite behaviour in boom times: have the government run surpluses to tamp down consumer exuberance. This is less widely known, since politicians only ever talk about Keynes during recessions, when he gives them intellectual cover to spend lots of money.

3) The Keynesian consensus is not universal. Arnold Kling's "recalculation" story is much closer to your picture, and you'll notice he doesn't advocate stimulus, but rather waiting to see how people adjust to the new economic circumstances.

4) GDP is the preoccupation of macroeconomists. Microeconomists (like me) care much more about allocative efficiency, which is to say: to what extent are things in the hands of the people who value them most? So there's a whole branch of the profession to which your initial GDP-centrism comment does not apply.

It's points 3 and 4 in particular that lead me to object to your claim that economists are obsessed with GDP. To my way of thinking, it's politicians who are obsessed with GDP, because they believe their chances of re-election are tied to economic growth and unemployment figures. So they spend a lot of time asking economists how to increase GDP, and therefore economists more often than not discuss GDP when they appear in public.

Comment author: SilasBarta 17 June 2010 02:13:47PM 1 point [-]

That was a very useful exercise, since it helped me identify the key point of disagreement between you and Keynesianism. If I'm right, you're coming at this from a goods-market perspective, i.e. "I, a typical consumer, am not interested in any of these goods at these prices, so I'm not going to buy so much", whereas the Keynesians are blaming this kind of attitude: "I, a typical consumer, am fearful of the future. While I want to buy stuff, I'd better start saving for the future instead in case I lose my job" -- and it's the saving that triggers the recession (money flows out of the economy into savings; this fools people into thinking they are poorer, and the death spiral begins).

It's still not clear to me that you've done what I asked (taboo your model's predicates down to fundamentals laypeople care about), or that you have the understanding that would result from having done what I asked.

  • What's the difference between the "goods market" perspective and the "blaming this kind of attitude"/Keynesian perspective? Why is one wrong or less helpful, and what problems would result from using it?

  • Why is it bad for people to believe they are poorer when they are in fact poorer?

  • Why is it bad for more money to go into savings? Why does "the economy" entirely hinge on money not doing this?

Until you can answer (or avoid assuming away) those problems, it's not clear to me that your understanding is fully grounded in what we actually care about when we talk about a "good economy", and so you're making the same oversights I mentioned before.

Comment author: James_K 18 June 2010 08:50:43PM 3 points [-]

you're making the same oversights I mentioned before.

No, I'm not making those oversights, because I am a) not a Keynesian and b) not a macroeconomist. My offering defences of this position should not be construed as fundamental agreement with that position.

This is quickly turning into a debate about the merits of Keynesianism, which is not a debate I am interested in; stabilisation policy is not my field and I don't find it very interesting -- I got enough of it at university. I'm going to touch on a few points here, but I'm not going to engage fully with your argument; you really need to talk to a Keynesian macroeconomist if you want to discuss most of this stuff. For one thing, my ability to taboo certain words is limited by the fact that I don't have a very solid grip on the theory and don't spend much of my time thinking about high-level aggregates like GDP.

Now here's the best I can do on your bullet-point questions; sorry if it doesn't help much, but it's all I've got: 1) The difference is that Keynesians believe savings reduce the money supply by taking money out of circulation; this makes people think they are poorer, which makes them act like they're poorer, which makes other people poorer.

2) Because it starts with an illusion of poverty. The first cause of recessions in a Keynesian model is "animal spirits", or in layman's terms, irrational fear of financial collapse. Viewed from this perspective, stimulus is a hack that undoes the irrationality that caused the problem in the first place (and because it's caused by irrationality they can feel confident it is a problem).

3) This is actually one of my biggest problems with Keynesian theory. If it strikes you as counter-intuitive or silly, I'm not going to dissuade you.

One final point: The reason I replied to your initial comment in the first place, was your suggestion that all economists are obsessed with maximising measured GDP over everything else.

But many economists don't deal with GDP at all. When I was learning labour market theory we were taught that once people's wage rate gets high enough, one could expect them to work fewer hours since the demand for leisure time increases with income. There was never a suggestion that this was anything to be concerned about, the goal is utility, not income.

In environmental economics I recall reading a paper by Robert Solow (the seminal figure in the theory of economic growth) arguing that it was important to consider changes in environmental quality along with GDP, to get a better picture of how well off people really are.

I look at what I have been taught in economics, and I simply can't square it with your view of the profession. Some kinds of economists tend to be obsessed with growth, but they tend to be economists who specialise in economic growth. The rest of us have other pursuits, and other obsessions.

Comment author: Vladimir_M 16 June 2010 08:12:29PM *  1 point [-]

James_K:

I am an economist and I assure you economists don't ascribe to the "measured GDP is everything" view you attribute to them.

Aside from the standard arguments about the shortcomings of GDP, my principal objection to the way economists use it is that only the nominal GDP figures are well defined. To make sensible comparisons between GDP figures for different times and places, you must convert them to "real" figures using price indexes. These indexes, however, are impossible to define meaningfully. They are produced in practice using complicated, but ultimately arbitrary number games (and are often additionally slanted by the political and bureaucratic incentives operating in the institutions whose job is to come up with them).

In fact, when economists talk about "nominal" vs. "real" figures, it's a travesty of language. The "nominal" figures are the only ones that measure an actual aspect of reality (even if one that's not particularly interesting per se), while the "real" figures are fictional quantities with only a tenuous connection to reality.

Comment author: realitygrill 17 June 2010 04:14:17AM *  3 points [-]

It's pretty easy to get this sort of view just reading books. In my (limited) experience, there are a fair percentage of divergent types that are not like this - and they tend to be the better economists.

You may like Morgenstern's book On the Accuracy of Economic Observations. How I rue the day I saw this in a used bookstore in NY and didn't have the cash to buy it...

EDIT: fixed title name

Comment author: Vladimir_M 17 June 2010 11:34:04PM *  3 points [-]

I'm going through Morgenstern's book right now, and it's really good. It's the first economic text I've ever seen that tries to address, in a systematic and no-nonsense way, the crucial question of whether various sorts of numbers routinely used by economists (and especially macroeconomists) make any sense at all. That this book hasn't become a first-rank classic, and is instead out of print and languishing in near-total obscurity, is an extremely damning fact about the intellectual standards of the economic profession.

I've also looked at some other texts by Morgenstern I found online. I knew about his work in game theory, but I had no idea that he was such an insightful contrarian on the issues of economic statistics and aggregates. He even wrote a scathing critique of the concept of GNP/GDP (a more readable draft is here). Unfortunately, while this article sets forth numerous valid objections to the use of these numbers, it doesn't discuss the problems with price indexes that I pointed out in this thread.

Comment author: Vladimir_M 17 June 2010 06:29:19AM 1 point [-]

realitygrill:

It's pretty easy to get this sort of view just reading books. In my (limited) experience, there are a fair percentage of divergent types that are not like this - and they tend to be the better economists.

Could you please list some examples? Aside from Austrians and a few other fringe contrarians, I almost always see economists talking about the "real" figures derived using various price indexes as if they were physicists talking about some objectively measurable property of the universe that has an existence independent of them and their theories.

You may like Morgenstern's book On the Accuracy of Economic Measurements. How I rue the day I saw this in a used bookstore in NY and didn't have the cash to buy it..

Thanks for the pointer! Just a minor correction: apparently, the title of the book is On the Accuracy of Economic Observations. It's out of print, but a PDF scan is available (warning -- 31MB file) in an online collection hosted by Stanford University.

I just skimmed a few pages, and the book definitely looks promising. Thanks again for the recommendation!

Comment author: James_K 17 June 2010 08:50:23AM 1 point [-]

It's not so much a matter of being overconfident as of not listing the disclaimers at every opportunity. The Laspeyres price index (the usual type of price index) has well-understood limitations (specifically, that it overestimates consumer price growth, since it doesn't deal with technological improvement and substitution effects very well), but since we don't have anything better, we use it anyway.

"Real" is a term of art in economics. It's used to reflect inflation-adjusted figures because all nominal GDP tells you is how much money is floating around, which isn't all that useful. real GDP may be less certain, but it's more useful.

Bear in mind that everything economists use is an estimate of a sort, even nominal GDP. Believe it or not, they don't actually ask every business in the country how much it produced and/or received in income (which is why the income and expenditure methods of calculating GDP give slightly different numbers, although in theory they should give exactly the same result). The reason this may not be readily apparent is that most non-technical audiences start to black out the moment you talk about calculating a price index (hell, it makes me drowsy), and technical audiences already understand the limitations.
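
(For the curious, a minimal worked sketch of a Laspeyres-style calculation, with made-up prices and base-period quantities:)

    # Laspeyres index: today's prices weighted by the base period's basket,
    # P_L = sum(p_t * q_0) / sum(p_0 * q_0). All figures are invented.
    base_prices = {"bread": 2.00, "fuel": 1.00, "rent": 800.0}
    base_qty    = {"bread": 50,   "fuel": 100,  "rent": 1}
    new_prices  = {"bread": 2.20, "fuel": 1.30, "rent": 820.0}

    laspeyres = (sum(new_prices[g] * base_qty[g] for g in base_qty)
                 / sum(base_prices[g] * base_qty[g] for g in base_qty))
    print(laspeyres)  # 1.06, i.e. ~6% measured price growth

If consumers substitute away from fuel after its price rises, the fixed base-period basket overstates the true cost-of-living increase -- which is exactly the overestimation mentioned above.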

Comment author: Vladimir_M 17 June 2010 05:45:17PM *  2 points [-]

James_K:

"Real" is a term of art in economics. It's used to reflect inflation-adjusted figures because all nominal GDP tells you is how much money is floating around, which isn't all that useful. real GDP may be less certain, but it's more useful.

You're talking about the "real" figures being "less certain," as if there were some objective fact of the matter that these numbers are trying to approximate. But in reality, there is no such thing, since there exists no objective property of the real world that would make one way to calculate the necessary price index correct, and others incorrect.

The most you can say is that some price indexes would be clearly absurd (e.g. one based solely on the price of paperclips), while others look fairly reasonable (primarily those based on a large, plausible-looking basket of goods). However, even if we limit ourselves to those that look reasonable, there is still an infinite number of different procedures that can be used to calculate a price index, all of which will yield different results, and there is no objective way whatsoever to determine which one is "more correct" than others. If all the reasonable-looking procedures led to the same results, that would indeed make these results meaningful, but this is not the case in reality.

Or to put it differently, an "objective" price index is a logical impossibility, for at least two reasons. First, there is no objective way to determine the relevant basket of goods, and different choices yield wildly different numbers. Second, the set of goods and services available in different times and places is always different, and perfect equivalents are normally not available, so different baskets must be used. Therefore, comparisons of "real" variables invariably involve arbitrary and unwarranted assumptions about the relative values of different things to different people. Again, of course, different arbitrary choices of methodology yield different numbers here.

(By the way, I find it funny how neoclassical economists, who hold it as a fundamental axiom that value is subjective, unquestioningly use price indexes without stopping to think that the basic assumption behind the very notion of a price index is that value is objective and measurable after all.)

Comment author: Clippy 18 June 2010 08:24:59PM *  6 points [-]

The most you can say is that some price indexes would be clearly absurd (e.g. one based solely on the price of paperclips), while others look fairly reasonable (primarily those based on a large, plausible-looking basket of goods)

Very true. A good general measure in human economic systems should NOT merely look at the ease of availability of finished paperclips. It should also include, in the "basket", such things as extrudable metal, equipment for detecting and extracting metal, metallic wire extrusion machines, equipment for maintaining wire extrusion machines, bend radius blocks, and so forth.

Thank you for pointing this out; you are a relatively good human.

By the way, I find it funny how neoclassical economists, who hold it as a fundamental axiom that value is subjective

That is a very poor inference on their part.

Comment author: NancyLebovitz 18 June 2010 01:17:36PM 4 points [-]

Here's a crude metric I use for gauging the relative goodness of societies as places to live: Immigration vs. emigration.

It's obviously fuzzy-- you can't get exact numbers on illegal migration, and the barriers (physical, legal, and cultural) to relocation matter, but have to be estimated. So does the possibility that one country may be better than another, but a third may be enough better than either of them to get the immigrants.

For example, the evidence suggests that the EU and the US are about equally good places to live.

Comment author: Vladimir_M 18 June 2010 07:16:19PM *  3 points [-]

I don't think that's a good metric. Societies that aren't open to mass immigration can have negligible numbers of immigrants regardless of the quality of life their members enjoy. Japan is the prime example.

Moreover, in the very worst places, emigration can be negligible because people are too poor to pay for a ticket out, or are prohibited from leaving.

Comment author: [deleted] 19 June 2010 11:58:23AM 1 point [-]

But "given perfect knowledge of all market prices and individual preferences at every time and place, as well as unlimited computing power", you could predict how people would choose if they were not faced with legal and moving-cost barriers - e.g. imagine a philanthropist willing to pay the moving costs. So your objection to this metric seems to be a surmountable one, in principle, assuming perfect knowledge etc. The main remaining barrier to migration may be sentimental attachment - but given perfect knowledge etc. one could predict how the choices would change without that remaining barrier.

Applying this metric to Europa versus Earth, presumably Europans would choose to stay on Europa and humans would choose to stay on Earth even with legal, moving-cost, and sentimental barriers removed, indeed both would pay a great deal to avoid being moved.

In contrast to Europans versus humans, humans-of-one-epoch are not very different from humans-of-another-epoch.

Comment author: NancyLebovitz 18 June 2010 07:24:16PM 1 point [-]

A fair point, though I think societies like that are pretty rare. Any other notable examples?

Comment author: Vladimir_M 18 June 2010 07:42:35PM *  2 points [-]

Off the top of my head, I know that Finland had negligible levels of immigration until a few years ago. Several Eastern European post-Communist countries are pretty decent places to live these days (I have in mind primarily the Czech Republic), but still have no mass immigration. As far as I know, the same holds for South Korea.

Regarding emigration, the prime examples were the communist countries, which strictly prohibited emigration for the most part (though, rather than looking at the numbers of emigrants, we could look at the efforts and risks many people were ready to undertake to escape, which often included dodging snipers and crawling through minefields).

Comment author: James_K 18 June 2010 08:13:22PM 1 point [-]

First, there is no objective way to determine the relevant basket of goods, and different choices yield wildly different numbers.

The basket used is based on a representation of what people are currently consuming. This means we don't have to second-guess people's preferences. Unique goods like houses pose a problem, but there's not really anything we can do about that, so the normal process is to take an average of existing houses.

Second, the set of goods and services available in different times and places is always different, and perfect equivalents are normally not available, so different baskets must be used.

Which is a well understood problem. Every economist knows this, but what would you have us do? It is necessary to inflation-adjust certain statistics, and if the choice is between doing it badly and not doing it at all, then we'll do it badly. Just because we don't preface every sentence with this fact doesn't mean we're not aware of it.

Comment author: SilasBarta 18 June 2010 09:15:47PM *  1 point [-]

Just to avoid confusion among readers, I want to distance myself from part of Vladimir_M's position. While I agree with many of the points he's made, I don't go so far as to say that CPI is a fundamentally flawed concept, and I agree with you that we have to pick some measure and go with it; and that the use of it does not require its caveats to be restated each time.

However, I do think that, for the specific purpose it is used for, it is horribly flawed in noticeable, fixable ways, and that economists don't make these changes because of lost-purpose syndrome -- they get so focused on this or that variable that they become disconnected from the fundamental it's supposed to represent. They're doing the economic equivalent of suggesting to generals that their living soldiers be burned to ashes so that the media will stop broadcasting images of dead soldiers' bodies being brought home.

Comment author: James_K 18 June 2010 09:53:29PM 1 point [-]

I wouldn't be in a good position to determine if it's lost purpose syndrome since I'm an insider, but I would suggest that path dependence has a lot to do with it.

Price indices are produced by governments, which are notoriously averse to change. And what's worse, the broad methodology is dictated by international standards, so if an economist or some other intelligent person comes up with a better price index, they have to convince the body of economists and statisticians that they have a good idea, and then convince the majority of OECD countries (at a minimum) that their method is worth the considerable effort of changing every country's methodology.

That's a high hurdle to cross.

Comment author: [deleted] 17 June 2010 11:42:32PM 1 point [-]

If some price indexes are "clearly absurd", then they apparently have some value to us - for if they were valueless, then why call any particular one "absurd"? If they yield different results, then so be it - let us simply be open about how the different indexes are defined and what result they yield. The absence of a canonical standard will of course not be useful to people primarily interested in such things as pissing contests between nations, but the results should be useful nonetheless.

We commonly talk about tradeoffs, e.g., "if I do this then I will benefit in one way but lose in another". We can do the same thing with price indexes. "In this respect things have improved but in this other respect things have gotten worse."

Comment author: NancyLebovitz 16 June 2010 08:56:29AM 1 point [-]

I've heard that the trick works less well each time it's used (perhaps within a limited time period). Is this plausible?

Comment author: MichaelBishop 15 June 2010 04:23:47PM *  1 point [-]

Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure.

  1. Really? Because I hear economists talk about the value of leisure time quite frequently.
  2. IMO, most economists don't fetishize GDP the way you suggest they do.
  3. You seem to be denying the benefits of Keynesian stimulus in a downturn. That position is not indefensible, but you're not defending it; you're just claiming it.

Comment author: SilasBarta 15 June 2010 04:42:39PM *  0 points [-]

Really? Because I hear economists talk about the value of leisure time quite frequently. ...IMO, most economists don't fetishize GDP the way you suggest they do.

Both of these are contradicted by the fact that no economist, in discussion of the recent economic troubles, has suggested that letting the economy adjust to a lower level of output/work would be an acceptable solution.

Yes, they recognize that leisure is good in the abstract, but when it comes to proposals for "what to do" about the downturn, the implicit, unquestioned assumption is that we must must must get GDP to keep going up, no matter how many make-work projects or useless degrees that involves.

You seem to be denying the benefits of Keynesian stimulus in a downturn. That position is not indefensible, but you're not defending it; you're just claiming it.

I most certainly am defending it -- by showing the errors in the classification of what counts as a benefit. If the argument is that stimulus will get GDP numbers back up, then yes, I didn't provide counterarguments. But my point was that the effect of the stimulus is to worsen that which we really mean by a "good economy".

The stimulus is getting people to blow resources doing (mostly) useless things. Whether or not it's effective at getting these numbers where they need to be, the numbers aren't measuring what we really want to know about. Success would mean the useless make-work jobs eventually lead to jobs satisfying real demand, yet no metric that they focus on captures this.

Comment author: CronoDAS 22 September 2010 12:41:54AM 1 point [-]

Both of these are contradicted by the fact that no economist, in discussion of the recent economic troubles, has suggested that letting the economy adjust to a lower level of output/work would be an acceptable solution.

This is because it isn't. A "lower level of output/work" means that people, on average, are going to be poorer. And the way our economy is set up (in the United States at least), reducing output/work by 1% doesn't mean that each person works 1% less, produces 1% less, and consumes 1% less, it means that 1 in 100 people lose their job, can't find another one, and become poor, while the rest keep going on as they have been. So, when output/work falls, you don't get more leisure, you get more poverty.

And I disagree that most stimulus spending ends up being directed to "worthless" projects. Maybe they're not the best value for money, but even completely worthless make-work projects are still effective at wealth redistribution. Furthermore, if people are willing to lend the government money at really, really low interest rates (as demonstrated by prices of U.S. Treasury securities), then isn't that a signal that it's an unusually good time for the U.S. government to borrow and spend - that the economy wants more of what the government produces and less of what private industry produces?

Comment author: SilasBarta 22 September 2010 06:30:12PM *  2 points [-]

This is because it isn't. A "lower level of output/work" means that people, on average, are going to be poorer. And the way our economy is set up (in the United States at least), reducing output/work by 1% doesn't mean that each person works 1% less, produces 1% less, and consumes 1% less,

This I think reflects a status-quo bias. When the per capita GDP was lower in 2000, or 1990, the economy managed to employ a higher percentage of people. While you're right that current institutions, inertia, and laws prevent shorter workweeks, that is an argument for removing these barriers, not an argument for trying to game the GDP numbers in the (false) hope that this will somehow translate into sustainable employment because of the historical correlation.

And I disagree that most stimulus spending ends up being directed to "worthless" projects. Maybe they're not the best value for money, but even completely worthless make-work projects are still effective at wealth redistribution.

Okay, but that still looks like a case of lost purposes and fake utility functions. If you're spending money to redistribute, then spend the money to redistribute! Don't spend it on a project that hogs up real resources just to get a small side-effect of transferring money to people you want to help. ("What's your real objection" and all.) If it's important that they feel they earn the paycheck, then require that they take job training.

And the reason I call the projects worthless is this (and it doesn't require an ideological commitment to being against government projects): people couldn't justify asking the government to provide these things before the recession. But if the recession is a contraction of productive capacity, then the projects we commit to should also contract -- it should look like an even worse deal.

The fact that the government can issue debt cheaper doesn't change this fact. The reduced productive capacity is a real (i.e. non-nominal) phenomenon. The greater ease with which government can procure resources does not mean our aggregate ability to produce them has increased; it just means the government can more easily increase its share of the shrinking pie. That still implies that our "choice set" is being reduced, and the newer, larger wastefulness of these projects will have to show up somewhere.

If the fundamental determinant of reduced unemployment is whether the economy has entered into (as Arnold Kling says) sustainable patterns of specialization and trade, then temporary stimulus projects can't accelerate this, because they're by definition not sustainable: after they're over, we'll just have to readjust again.

I must emphasize, as I did in this blog post, that this does not mean we should give suffering families the finger because "it would be inefficient and all" -- the fact that they (under a stimulus project) are working, feeling productive, and getting a paycheck is very significant, and definitely counts as a benefit. It's just that you should help them in a way that doesn't inhibit the economy's search for efficient use of factors of production, nor (significantly) favor these families over the ones that are going to be screwed again when the projects have to stop, and the hunt for re-coordination starts anew.

Comment author: SilasBarta 16 June 2010 03:23:09AM 1 point [-]

Downvote explanation requested. This looks like a reasoned reply to MichaelBishop's criticism, and I'm interested in knowing how it errs and how Michael's comment doesn't, and how this is so obvious.

Comment author: CarlShulman 16 June 2010 03:25:34AM *  3 points [-]

Yes, they recognize that leisure is good in the abstract, but when it comes to proposals for "what to do" about the downturn, the implicit, unquestioned assumption is that we must must must get GDP to keep going up, no matter how many make-work projects or useless degrees that involves.

[Didn't downvote.] This is silly. The 'leisure' of unemployment is concentrated on a few, and comes with elevated rates of low status, depression, suicide, divorce, degradation of employability, etc.

Comment author: SilasBarta 16 June 2010 03:48:28AM *  2 points [-]

That's a misinterpretation of what I was suggesting as the alternative. Lower output + more leisure doesn't mean the "leisure" is concentrated entirely in a few workers, making them full-time leisurists who starve. Rather, it means that anyone who wants to work for money would work fewer hours and have a lower level of consumption, not zero consumption.

Furthermore, the lower consumption is only consumption of goods purchased with money; with significant restructuring, labor with predictable demand (like babysitting) can be handled by cooperatives that avoid the need to pay for it out of cash reserves.

I don't deny that make-work programs allow workers to show off and practice their skills, retaining employability. I criticize economists who miss this benefit. But if you're going to spend money to get this benefit, you should spend it in a way that directly targets the achievement of this benefit to the workers, rather than on make-work projects that only achieve this benefit as a side effect, and which waste capital goods and distort markets in the process.

Comment author: CronoDAS 22 September 2010 12:50:39AM *  1 point [-]

That's a misinterpretation of what I was suggesting as the alternative. Lower output + more leisure doesn't mean the "leisure" is concentrated entirely in a few workers, making them full-time leisurists who starve. Rather, it means that anyone who wants to work for money would work fewer hours and have a lower level of consumption, not zero consumption.

Unfortunately, in the United States, you really would end up with much more of the former and less of the latter. Europe would be better off, though, thanks to different labor laws; would you suggest that the United States adopt something like France's maximum 35 hour workweek, or Germany's subsidies to part-time workers?

Currently, hours worked per week is positively correlated with hourly wages; one person working 80 hours a week usually makes more money than two people who both work 40 hours a week. Also, specifically wanting to do part-time work is a bad signal to employers. It signals that you're not committed to your job, that you're probably lazy, and that you're weird. So, absent government intervention, you probably won't see people voluntarily reducing their working hours.

Comment author: thomblake 15 June 2010 04:07:03PM 0 points [-]

Nice to see this kind of thinking from a capitalistish.

Comment author: AlephNeil 15 June 2010 11:29:04AM 3 points [-]

Here's another question to chew on:

Suppose you're in a country that grows and consumes lots of cabbages, and all the cabbages consumed are home-grown. Suppose that one year people suddenly, for no apparent reason, decide that they like cabbages a lot more than they used to, and the price doubles. But at least to begin with, rates of production remain the same throughout the economy. Does this help or harm the economy, or have no effect?

In one sense it 'obviously' has no effect, because the same quantities of all goods and services are produced 'before' and 'afterwards'. So whether we're evaluating them according to the 'earlier' or the 'later' utility function, the total value of what we're producing hasn't changed. (Presumably the prices of non-cabbages would decline to some extent, so it's at least consistent that GDP wouldn't change, though I still can't see anything resembling a mathematical proof that it wouldn't.)
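Here is one toy version of that missing proof (all numbers invented; non-cabbage prices held fixed for simplicity, which is exactly what the parenthetical caveat above is about): with quantities unchanged, real GDP valued at base-year prices literally cannot move, so the GDP deflator absorbs the whole price change --

```python
# Toy two-good economy (numbers invented); quantities unchanged year to year.
q = {"cabbage": 100, "other": 50}
p_before = {"cabbage": 2.0, "other": 10.0}
p_after  = {"cabbage": 4.0, "other": 10.0}   # cabbage price doubles

nominal_before = sum(p_before[g] * q[g] for g in q)   # 700.0
nominal_after  = sum(p_after[g]  * q[g] for g in q)   # 900.0

# GDP deflator: current-price output over base-price output (same quantities).
deflator = nominal_after / nominal_before             # 9/7

real_after = nominal_after / deflator                 # back to 700.0
print(nominal_before, nominal_after, real_after)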

Comment author: Vladimir_M 14 June 2010 10:16:45PM 2 points [-]

would that help or hurt the country's economy as a whole?

What exact metric do you have in mind?

Comment author: NancyLebovitz 18 June 2010 11:48:28AM 2 points [-]

Sometimes I try to catch up on Recent Comments, but it seems as though the only way to do it is one page at a time. To make matters slightly worse, the link for the Next page going pastwards is at the bottom of the page, but the page loads at the top, so I have to scroll down for each page.

Is there any more efficient way to do it?

Comment author: wedrifid 18 June 2010 12:16:59PM 2 points [-]

Use the RSS feed that appears on the recent comments page. I use reader.google.com to read my RSS feeds. This will allow you to scroll back in bulk using just the scrollbar then read at leisure. It also shows comments as 'read' or 'unread' based on where you are up to.

Comment author: Houshalter 18 June 2010 07:19:48PM 3 points [-]

Hmm... I don't know about Recent Comments; I just go to the posts I'm following. Hit control+F, type (or copy/paste) "load more comments", and go through and hit each one. Then erase it and type the current date or yesterday's date in the format "date month" (18 June), and it will highlight all of those comments. (If you use YouTube a lot, you might already use this method on the "see all comments" page, except there you have to type "hour" or "minute" instead of an exact time, which is actually more convenient.) When you're done checking all the new comments, you can erase that and put in "continue this thread" (is that right? I forget what it is exactly).

Hope that helps.

Comment author: rhollerith_dot_com 18 June 2010 02:03:10PM *  1 point [-]

The only measure I know of that might make it more efficient to catch up on recent comments is for you to go to your preferences page, and where it says "Display 50 comments by default," change the "50" to some larger number. I have been using "200" on a very slow (33.6 K bits/sec) connection.

Are there periods in your life when you read or at least skim every comment made on Less Wrong? The reason I ask is that I am a computer programmer, and every now and then I imagine ways of making the software behind Less Wrong easier to use. To do that effectively, I need to know things about how people use Less Wrong.

Comment author: NancyLebovitz 18 June 2010 04:42:33PM *  3 points [-]

Here's my wishlist:

As much trn functionality as seems worth coding -- in particular:

  1. The ability to default to seeing only unread comments while reading a post (or at least a Recent Comments page for individual posts, not just the whole site), with easy access to the old comments.
  2. The ability to default to hiding chosen threads and sub-threads.
  3. Tree navigation.

If you want to find out how people generally use the site, I think a top level post asking about it is the only way to get the questions noticed. If you post it, I'll upvote it.

Comment author: Mass_Driver 17 June 2010 07:30:14PM 2 points [-]

Can anyone recommend a good book or long article on bargaining power? Note that I am NOT looking for biographies, how-to books, or self-help books that teach you how to negotiate. Biographies tend to be outliers, and how-to books tend to focus on the handful of easily changeable independent variables that can help you increase your bargaining power at the margins.

I am instead looking for an analysis of how people's varying situations cause them to have more or less bargaining power, and possibly a discussion of what effects this might have on psychology, society, or economics.

By "bargaining power" I mean the ability to steer transactions toward one's preferred outcome within a zone of win-win agreements. For example, if we are trapped on a desert island and I have a computer with satellite internet access and you have a hand-crank generator and we have nothing else on the island except that and our bathing suits and we are both scrupulously honest and non-violent, we will come to some kind of agreement about how to share our resources...but it is an open question whether you will pay me something of value, I will pay you something, or neither. Whoever has more bargaining power, by definition, will come out ahead in this transaction.

Comment author: Lonnen 18 June 2010 02:00:25PM *  3 points [-]

I'm currently reading Thomas Schelling's Strategy of Conflict and it sounds like what you're looking for here. From this Google Books Link to the table of contents you can sample some chapters.

Comment author: Lonnen 17 June 2010 02:39:23PM 2 points [-]

Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would a FAI?

Comment author: Dagon 17 June 2010 07:29:55PM 3 points [-]

Calling them "dark arts" is itself a tactic for framing that only affects the less-rational parts of our judgement.

A purely rational agent will (the word "should" isn't necessary here) of course use rhetoric, outright lies, and other manipulations to get irrational agents to behave in ways that further its goals.

The question gets difficult when there are no rational agents involved. Humans, for instance, even those who want to be rational most of the time, are very bad at judging when they're wrong. For these irrational agents, it is good general advice not to lie or mislead anyone, at least if you have any significant uncertainty on the relative correctness of your positions on the given topic.

Put another way, persistent disagreement indicates mutual contempt for each others' rationality. If the disagreement is resolvable, you don't need the dark arts. If you're considering the dark arts, it's purely out of contempt.

Comment author: cousin_it 17 June 2010 06:49:04PM *  2 points [-]

Dark arts, huh? Sometime ago I put forward the following scenario:

Bob wants to kill a kitten. The FAI wants to save the kitten because it's a good thing according to our CEV. So the FAI threatens Bob with 50 years of torture unless Bob lets the kitten go. The FAI has two distinct reasons why threatening Bob is okay: a) Bob will comply and there will be no need to torture him, b) the FAI is lying anyway. Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?

(Yes, this is yet another riff on consequentialism, deontologism and lying. Should FAIs follow deontological rules? For that matter, should humans?)

Comment author: Nick_Tarleton 17 June 2010 06:54:13PM *  5 points [-]

Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?

Expected utility reasoning with a particular utility function says the FAI is right. If we disagree, our preferences might be described by some other utility function.

Comment author: NancyLebovitz 18 June 2010 01:39:06PM *  2 points [-]

Is that actually the FAI's only or best technique?

Off the top of my non-amplified brain:

Reward Bob for not torturing kittens.

Give Bob simulated kittens to torture and deny him access to real kittens.

Give Bob something harmless to do which he likes better than torturing kittens.

ETA: Convince Bob that torturing kittens is wrong.

Comment author: wedrifid 17 June 2010 03:10:21PM *  1 point [-]

Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents.

Yes.

For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it?

Yes. (When we say 'rational agent' or 'rational AI' we are usually referring to "instrumental rationality". To a rational agent, words are simply symbols to use to manipulate the environment. Speaking the truth, and even believing the truth, are only loosely related concepts.)

Would a FAI?

Almost certainly, but this may depend somewhat on who exactly it is 'friendly' to and what that person's preferences happen to be.

Comment author: Lonnen 17 June 2010 04:32:29PM 2 points [-]

That agrees with my intuitions. I had a series of ideas developing around the notion that exploiting biases is sometimes necessary, and then I found:

Eliezer on Informers and Persuaders

I finally note, with regret, that in a world containing Persuaders, it may make sense for a second-order Informer to be deliberately eloquent if the issue has already been obscured by an eloquent Persuader - just exactly as elegant as the previous Persuader, no more, no less. It's a pity that this wonderful excuse exists, but in the real world, well...

It would seem that in trying to defend others against heuristic exploitation it may be more expedient to exploit heuristics yourself.

Comment author: wedrifid 17 June 2010 06:41:33PM 5 points [-]

I'm not sure where Eliezer got the 'just exactly as elegant as the previous Persuader, no more, no less' part from. That seems completely arbitrary. As though the universe somehow decrees that optimal informing strategies must be 'fair'.

Comment author: [deleted] 16 June 2010 02:45:03PM 2 points [-]

Does anyone happen to know the status of Eliezer's rationality book?

Comment author: Alicorn 16 June 2010 06:27:07PM 1 point [-]

The first draft is in progress.

Comment author: cousin_it 15 June 2010 06:03:55PM *  2 points [-]

Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it has to solve a certain math/physics problem, deposit the correct solution in a specified place and stop. If a solution can't be found, stop after a specified number of cycles. Don't talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?

ETA: absent other suggestions, I'm going to call such devices "AI bombs".
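The harness for such a device is at least easy to sketch. Here is a runnable toy with a deliberately dumb stand-in "AI" (trial-division factoring, substituting for whatever math/physics problem the boxed AI is actually posed), just to show the shape: clean slate, bounded cycle budget, an externally checkable deposited answer, then a halt --

```python
# Runnable toy of the "AI bomb" shape: a sealed solver with a bounded cycle
# budget and an externally checkable deposited answer. The "problem" here is
# deliberately trivial, standing in for the real math/physics problem.

def ai_bomb(n, max_cycles=10**6):
    """Try to factor n; deposit an answer or halt when the budget runs out."""
    candidate = 2
    for _ in range(max_cycles):          # hard bound: it stops no matter what
        if candidate * candidate > n:
            return None                  # n is prime; nothing to deposit
        if n % candidate == 0:
            return (candidate, n // candidate)   # deposited solution
        candidate += 1
    return None                          # budget exhausted: stop anyway

answer = ai_bomb(1234567)
if answer is not None:                   # cheap check from outside the box
    assert answer[0] * answer[1] == 1234567
print(answer)                            # (127, 9721)
```

None of the hard parts appear here, of course -- actually sealing the simulated world and specifying the bounded utility function are the open problems.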

Comment author: timtyler 15 June 2010 09:16:14PM 2 points [-]
Comment author: MichaelBishop 14 June 2010 03:44:53PM *  2 points [-]

Whole Brain Emulation: The Logical Endpoint of Neuroinformatics? (google techtalk by Anders Sandberg)

I assume someone has already linked to this but I didn't see it so I figured I'd post it.

Comment author: Yoreth 14 June 2010 08:10:24AM 5 points [-]

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.

Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.

You can probably see where this is going. The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself because this would require it to understand, in a deep, level-spanning way, how it itself works. Of course, it could just add complexity and hope that it works, but that's just evolution, not intelligence explosion.

So: do you know any counterarguments or articles that address either of these points?

Comment author: IsaacLewis 14 June 2010 05:55:40PM 10 points [-]

Two counters to the majoritarian argument:

First, it is being mentioned in the mainstream - there was a New York Times article about it recently.

Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought - nuclear war. I've been reading Bertrand Russell's autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (UK's upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons, until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.

Looking back even further, for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed amongst physicists.

I think your second point is stronger. However, I don't think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you've got something that's like a human brain, but faster. Let it replicate itself, and you've got the equivalent of a team of humans, but which have the advantages of shared memory and instantaneous communication.

Now, if humans can design an AI, surely a team of 1,000,000 human equivalents running 1000x faster can design an improved AI?

Comment author: cousin_it 14 June 2010 11:49:07AM *  10 points [-]

The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind.

If your argument is based on information capacity alone, it can be knocked down pretty easily. An AI can understand some small part of its design and improve that, then pick another part and improve that, etc. For example, if the AI is a computer program, it has a sure-fire way of improving itself without completely understanding its own design: build faster processors. Alternatively you could imagine a population of a million identical AIs working together on the problem of improving their common design. After all, humans can build aircraft carriers that are too complex to be understood by any single human. Actually I think today's humanity is pretty close to understanding the human mind well enough to improve it.

Comment author: Houshalter 14 June 2010 09:11:24PM 3 points [-]

I don't think the number of AIs actually matters. If multiple AIs can do a job, then a single AI should be able to simulate them as though it were multiple AIs (or better yet, just figure out how to do it on its own) and then do the job as well. Another thing to note is that if the AI makes a copy of its program and puts it in external storage, it doesn't add any extra complexity to itself. It can then run its optimization process on the copy, although I do agree that it would be more practical if it only improved parts of itself at a time.

Comment author: cousin_it 14 June 2010 09:20:58PM *  4 points [-]

You're right, I used the million AIs as an intuition pump, imitating Eliezer's That Alien Message.

Comment author: DanArmak 14 June 2010 02:54:39PM 8 points [-]

The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas.

This is strictly true if you're talking about the working memory that is part of a complete model of your "mind". But a mind can access an unbounded amount of externally stored data, where a complete self-representation can be stored.

A Turing Machine of size N can run on an unbounded-size tape. A von Neumann PC with limited main memory can access an unbounded-size disk.

Although we can only load a part of the data into working memory at a time, we can use virtual memory to run any algorithm written in terms of the data as a whole. If we had an AI program, we could run it on today's PCs and while we could run out of disk space, we couldn't run out of RAM.
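There is also a classical witness for this point: by Kleene's recursion theorem, a program can contain a complete representation of itself with no capacity problem at all. The standard Python quine, for illustration:

```python
# A quine: running this prints the program's own complete source code,
# so the program carries an exact self-representation at no extra cost.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

This doesn't settle whether a mind can usefully reason about its own design, but it does dispose of the pure information-capacity form of the objection.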

Comment author: Morendil 14 June 2010 08:51:37AM 5 points [-]

I'd just forget the majoritarian argument altogether, it's a distraction.

The second question does seem important to me, I too am skeptical that an AI would "obviously" have the capacity to recursively self-improve.

The counter-argument is summarized here: whereas we humans are stuck with an implementation substrate that was never designed for understandability, an AI could be endowed with both a more manageable internal representation of its own capacities and a specifically designed capacity for self-modification.

It's possible - and I find it intuitively plausible - that there is some inherent general limit to a mind's capacity for self-knowledge, self-understanding and self-modification. But an intuition isn't an argument.

Comment author: AlanCrowe 14 June 2010 12:34:07PM 6 points [-]

I see Yoreth's version of the majoritarian argument as ahistorical. The US Government did put a lot of money into AI research and became disillusioned. Daniel Crevier wrote a book AI: The tumultuous history of the search for artificial intelligence. It is a history book. It was published in 1993, 17 years ago.

There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today's belief that AI is around the corner from yesterday's belief that AI is around the corner. Wrong then, right now, because...

Alternatively one might argue that scaling died at 90 nanometers, practical computer science is just turning out Java monkeys, the low hanging fruit has been picked, there is no road map, theoretical computer science is a tedious sub-field of pure mathematics, partial evaluation remains an esoteric backwater, theorem provers remain an esoteric backwater, the theorem proving community is building the wrong kind of theorem provers and will not rejuvenate research into partial evaluation,...

The lack of mainstream interest in explosive developments in AI is due to getting burned in the past. Noticing that the scars are not fading is very different from being unaware of AI.

Comment author: SilasBarta 14 June 2010 01:21:54PM 2 points [-]

There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today's belief that AI is around the corner from yesterday's belief that AI is around the corner. Wrong then, right now, because...

I'm reminded of a historical analogy from reading Artificial Addition. Think of it this way: a society that believes addition is the result of adherence to a specific process (or a process isomorphic thereto), and understands part of that process, is closer to creating "general artificial addition" than one that tries to achieve "GAA" by cleverly avoiding the need to discover this process.

We can judge our own distance to artificial general intelligence, then, by the extent to which we have identified constraints that intelligent processes must adhere to. And I think we've seen progress on this in terms of a more refined understanding of e.g. how to apply Bayesian inference. For example, consider Sebastian Thrun's work on seamlessly aggregating knowledge across sensors to create a coherent picture of the environment, which has produced tangible results (navigating the desert).
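For a flavor of what that aggregation looks like in the simplest case -- this is the textbook Gaussian special case, not Thrun's actual system -- fusing two noisy readings of one quantity is just a precision-weighted average:

```python
# Minimal Bayesian sensor fusion: two Gaussian measurements of one quantity.
# Posterior precision is the sum of the precisions; the posterior mean is
# precision-weighted, so the sharper sensor counts for more.

def fuse(mean1, var1, mean2, var2):
    w1, w2 = 1 / var1, 1 / var2             # precisions
    mean = (w1 * mean1 + w2 * mean2) / (w1 + w2)
    var = 1 / (w1 + w2)                      # always <= min(var1, var2)
    return mean, var

# A coarse GPS-like reading and a finer odometry-like reading (invented):
print(fuse(10.0, 4.0, 12.0, 1.0))  # -> (11.6, 0.8)
```

A Kalman filter, the workhorse of robot navigation, iterates essentially this update as new sensor readings arrive.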

Comment deleted 14 June 2010 01:49:25PM *  [-]
Comment author: CarlShulman 14 June 2010 02:41:36PM 6 points [-]

10% is a low bar; it would require a dubiously high level of confidence to rule out AI over a 90-year time frame (longer than the time since Turing and Von Neumann and the like got going, with a massively expanding tech industry, improved neuroimaging and neuroscience, superabundant hardware, and perhaps biological intelligence enhancement for researchers). I would estimate the average of the group you mention as over 1/3rd by 2100. Chalmers says AI is more likely than not by 2100, I think Robin and Nick are near half, and I am less certain about the others (who have said that it is important to address AI or AI risks but not given unambiguous estimates).

Here's Ben Goertzel's survey. I think that Dan Dennett's median estimate is over a century, although at the 10% level by 2100 I suspect he would agree. Dawkins has made statements that suggest similar estimates, although perhaps with somewhat shorter timelines. Likewise for Doug Hofstadter, who claimed at the Stanford Singularity Summit to have raised his estimate of time to human-level AI from the 21st century to mid-to-late millennium, although he weirdly claimed to have done so for non-truth-seeking reasons.

Comment author: timtyler 15 June 2010 09:01:05PM *  2 points [-]

Dan Dennett and Douglas Hofstadter don't think machine intelligence is coming anytime soon. Those folk actually know something about machine intelligence, too!

Comment author: JoshuaZ 14 June 2010 02:07:37PM 3 points [-]

None of those people are AI theorists, so it isn't clear that their opinions should get much weight on a question outside their area of expertise (incidentally, I'd be curious what citation you have for the Hawking claim). From the computer scientists I've talked to, the impression I get is that they see AI as such a failure that most of them just aren't bothering to do much research in it, except for narrow-purpose machine learning or expert systems. There's also an issue of sampling bias: the people who think a technology is going to work are generally louder about it than the people who think it won't. For example, a lot of physicists are very skeptical of tokamak fusion reactors being practical anytime in the next 50 years, but the people who talk about them a lot are the people who think they will be practical.

Note also that nothing in Yoreth's post actually relied on or argued that there won't be moderately smart AI, so it doesn't go against what he's said to point out that some experts think there will be very smart AI (although certainly some people on that list, such as Chalmers and Hanson, do believe that some form of intelligence-explosion-like event will occur). Indeed, Yoreth's second argument applies roughly to any level of intelligence. So overall, I don't think the point about those individuals does much to address the argument.

Comment deleted 14 June 2010 03:01:10PM *  [-]
Comment author: JoshuaZ 14 June 2010 03:07:49PM 10 points [-]

That's a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn't likely to have much general knowledge. However, that does mean the AI individual has a better understanding of how many different approaches to AI have failed miserably. But that's just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth are clearly people who have knowledge bases closer to that of the AI prof than to the physics grad student. Hanson certainly has looked a lot at various failed attempts at AI. I think I'll withdraw this argument. You are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.

Comment author: SilasBarta 14 June 2010 09:19:45PM 3 points [-]

What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.

So people with no experience programming robots but who know the equations governing them would just be able to, on the spot, come up with comparable code to AI profs? What do they teach in AI courses, if not the kind of thing that would make you better at this?

Comment author: Vladimir_Nesov 14 June 2010 03:07:50PM *  3 points [-]

What does an average AI prof know that a physics graduate who can code doesn't know?

Machine learning, more math/probability theory/belief networks background?

Comment author: timtyler 15 June 2010 09:05:43PM *  2 points [-]

Re: "What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy."

A very odd opinion. We have 60 years of study of the field, and have learned quite a bit, judging by things like the state of translation and speech recognition.

Comment author: Daniel_Burfoot 14 June 2010 08:46:16PM 2 points [-]

I disagree with this, basically because AI is a pre-paradigm science.

I am gratified to find that someone else shares this opinion.

What does an average AI prof know that a physics graduate who can code doesn't know?

A better way to phrase the question might be: what can an average AI prof. do that a physics graduate who can code, can't?

Comment deleted 14 June 2010 10:47:12PM [-]
Comment author: CarlShulman 15 June 2010 12:46:08PM 4 points [-]

I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science, it is just statistics correctly applied.

Statistics vs machine learning: FIGHT!

Comment author: MatthewW 14 June 2010 07:10:44PM 2 points [-]

I think Hofstadter could fairly be described as an AI theorist.

Comment author: Emile 17 June 2010 02:14:59PM 2 points [-]

So could Robin Hanson.

Comment author: timtyler 15 June 2010 08:52:56PM 3 points [-]

Re: "can a mind understand itself?"

That is no big deal: copy the mind a few billion times, and then it will probably collectively manage to grok its construction plans well enough.

Comment author: NancyLebovitz 15 June 2010 01:34:28PM *  2 points [-]

Another argument against the difficulties of self-modeling point: It's possible to become more capable by having better theories rather than by having a complete model, and the former is probably more common.

An AI could notice inefficiencies in its own functioning, check to see if the inefficiencies are serving any purpose, and clean them up without having a complete model of itself.

Suppose a self-improving AI is too cautious to go mucking about in its own programming, and too ethical to muck about in the programming of duplicates of itself. It still isn't trapped at its current level, even aside from the reasonable approach of improving its hardware, though that may be a more subtle problem than generally assumed.

What if it just works on having a better understanding of math, logic, and probability?

Comment author: xamdam 14 June 2010 03:33:00PM 2 points [-]

In addition to theoretical objections, I think the majoritarian argument is factually wrong. Remember, 'future is here, just not evenly distributed'.

http://www.google.com/trends?q=singularity shows a trend

http://www.nytimes.com/2010/06/13/business/13sing.html?pagewanted=all - this week in NYT. Major MSFT and GOOG involvement.

http://www.acceleratingfuture.com/michael/blog/2010/04/transhumanism-has-already-won/

Comment author: timtyler 15 June 2010 08:58:18PM 2 points [-]

Re: "http://www.google.com/trends?q=singularity shows a trend"

Not much of one - and also, this is a common math term - while:

"Your terms - "technological singularity" - do not have enough search volume to show graphs."

Comment author: NancyLebovitz 18 June 2010 01:04:47PM 2 points [-]

Creeping rationality: I just heard a bit on NPR about a proposed plan to distribute the returns from newly found mineral wealth in Afghanistan to the general population. This wasn't terribly surprising. What delighted and amazed me was the follow-up: the hope is that such a plan would lead to a more responsive government, but all that's actually known is that such plans have worked in societies that were already democratic -- it isn't known whether the causality can be run in reverse, using such a plan to make a society more democratic.

Comment author: knb 18 June 2010 09:26:13PM *  3 points [-]

Such plans work in societies with rule of law, and fail miserably in societies that are clan-based and tribal. A quarter of Afghanistan's GDP may go to bribes and shakedowns. A more honest description from NPR would be that, historically, mineral wealth controlled by deeply corrupt governments like Afghanistan's is primarily used for graft and nepotism, benefiting a few elites in government and industry while funding the oppression of everyone else.

In other words, Afghanistan is more like Nigeria than Norway.

Comment author: xamdam 16 June 2010 10:15:21PM 2 points [-]
Comment author: Psy-Kosh 17 June 2010 04:00:30AM *  2 points [-]

The causal-set line of physics research has been (very lightly) touched on here before. (I believe it was Mitchel Porter that had linked to one or two things related to that, though I may be misremembering). But recently I came across something that goes a bit farther: rather than embedding a causal set in a spacetime or otherwise handing it the spacetime structure, it basically just goes "here's a directed acyclic graph... we're going to add on a teensy weensy few extra assumptions... and out of it construct the minkowski metric, and relativistic transformations"

I'm slowly making my way through this paper (partly slowed by the fact that I'm not all that familiar with order theory), but the reason I mention the paper (A Derivation of Special Relativity from Causal Sets) is because I can't help but wonder if it might give us a hook to go in the other direction. That is, if this line of research might let us bring the mathematical machinery of much of physics to help us analyze stuff like Bayes nets and decision theory and give us a (potentially) really powerful mathematical tool.
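For anyone similarly slowed by the order theory: the object the paper starts from is just a locally finite partial order, which a directed acyclic graph encodes directly. A toy sketch of the structure only -- the paper's derivation of the Minkowski metric from it is the hard part this sketch says nothing about:

```python
# A causal set as a DAG: events plus a "can causally influence" relation.
# x precedes y iff there is a directed path from x to y ("x is in y's past").
events = ["a", "b", "c", "d"]
edges = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}  # a causal diamond

def precedes(x, y, edges):
    """Reachability in the DAG: is there a directed path from x to y?"""
    frontier = {x}
    while frontier:
        frontier = {w for (v, w) in edges if v in frontier}
        if y in frontier:
            return True
    return False

print(precedes("a", "d", edges))  # True: a is in d's causal past
print(precedes("b", "c", edges))  # False: b and c are "spacelike" separated
```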

Maybe I'm completely wrong and nothing interesting will come of trying to "reverse" the causal-set line of research (causal-set stuff is neat anyway, so at least I get some fun from reading and thinking about it), but it does seem potentially worth looking into.

Besides, if this does end up being a useful tool, it would be perhaps one of the biggest and subtlest punchlines the universe pulled on us: since causal-sets are an approach to quantum gravity, if it ended up helping with the rationality/AI/etc stuff...

That would mean that Penrose was right about quantum gravity being a key to mind... BUT IN A WAY ENTIRELY DIFFERENT THAN HE INTENDED! bwahahahaha. :)

Comment author: Houshalter 15 June 2010 06:16:34PM *  2 points [-]

Is anyone else concerned about the possibility of nuclear terrorist attacks? No, I don't mean what you usually hear on the news about dirty bombs or Iran/North Korea. I mean actual terrorists with an actual nuclear bomb. There are a surprising number of nuclear weapons on the bottom of the ocean. Has it occurred to anyone that someone with enough funding and determination could actually retrieve one of them? Maybe they already have?

In its campaign to discredit General Lebed’s revelations, the Russian government insisted that the loss of a nuclear weapon was unthinkable. No responsible party could lose something so important. But to the contrary, we know that not only the Soviet Union, but also the United States, lost numbers of nuclear weapons. At least four Soviet submarines, armed with a total of 40 nuclear weapons, sank during the Cold War. According to press reports, one of these was partially recovered from the Pacific Ocean floor by a unique deep-water submarine, the Glomar Explorer, owned by the reclusive billionaire Howard Hughes. Three nuclear missiles and two nuclear torpedoes were recovered. The Department of Defense has acknowledged a number of what it calls “Broken Arrows” (nuclear weapons lost by U.S. forces), although it has never said how many. The confirmed reports include a 1965 case where an aircraft loaded with a B43 nuclear bomb rolled off a carrier stationed near Japan. Neither the aircraft nor the weapon was ever recovered. A year later, the U.S. Air Force accidentally dropped a 20-megaton nuclear bomb in the Mediterranean Sea during a high-altitude refueling mission near Palomares, Spain. After three months of frantic searching, it was found. Given the sensitivity of such events, it is reasonable to infer that the few official confirmations are merely the tip of the iceberg.

And here is a public list of known nuclear accidents

Comment author: gwern 15 June 2010 09:09:31PM 2 points [-]

I am not. To even suggest that this is a possibility anywhere near the level of a sovereign actor giving terrorists nukes is to dramatically overestimate terrorist groups' technical competence, and also to ascribe basic instrumental rationality to them (a mistake; see my Terrorism is not about Terror).

Even if a terrorist group could muster the interest, assemble the millions necessary, hire a world-class submersible, and actually find the wreckage of a bomb in the scant days it could afford, the bomb would probably be useless. US nukes are designed to fail safe, so corroded wiring or misaligned explosives mean no detonation. And that's ignoring issues with radioactive decay. (Was the bomb a tritium-pumped H-bomb? Well, given tritium's extremely short half-life, I'm afraid that bomb is now useless.)

Comment author: Houshalter 15 June 2010 10:40:42PM *  1 point [-]

Maybe, although remember there are a lot more players interested in obtaining nuclear weapons than just a few terrorists. And the best crimes are the ones no one knew were committed; unsuccessful criminals are overrepresented relative to the ones that got away. I suspect the same is true of terrorists. Blowing up a building isn't going to achieve your goals, but blowing up a city might. After all, it's ended a war once, and just the threat stopped another from ever happening. Also, even if the bomb itself is useless, it is probably worth quite a bit of money, more than the millions it would take to retrieve it (maybe thousands as technology improves? There are some in shallower water. In 1958 the government was prepared to retrieve a lost bomb, but never located it). I don't honestly know a lot about nuclear weapons, but the materials in one, maybe even the design itself, would be worth something to somebody. Maybe said organization has the resources to salvage it; after all, it already had enough money to get it in the first place.

Even if no bombs go off, I wouldn't be surprised if the government eventually gets around to searching for them and finds they're not there. And there are other nuclear threats too. Although I can't find anywhere to confirm it, it was floating around the internet that up to 80 "suitcase nukes" are missing. This quote from Wikipedia particularly disturbed me:

The highest-ranking GRU defector Stanislav Lunev claimed that such Russian-made devices do exist and described them in more detail. These devices, "identified as RA-115s (or RA-115-01s for submersible weapons)" weigh from fifty to sixty pounds. They can last for many years if wired to an electric source. In case there is a loss of power, there is a battery backup. If the battery runs low, the weapon has a transmitter that sends a coded message—either by satellite or directly to a GRU post at a Russian embassy or consulate. According to Lunev, the number of "missing" nuclear devices (as found by General Lebed) "is almost identical to the number of strategic targets upon which those bombs would be used."

Lunev suggested that suitcase nukes might be already deployed by the GRU operatives at the US soil to assassinate US leaders in the event of war. He alleged that arms caches were hidden by the KGB in many countries for the planned terrorism acts. They were booby-trapped with "Lightning" explosive devices. One of such cache, which was identified by Vasili Mitrokhin, exploded when Swiss authorities tried to remove it from woods near Berne. Several others caches were removed successfully. Lunev said that he had personally looked for hiding places for weapons caches in the Shenandoah Valley area and that "it is surprisingly easy to smuggle nuclear weapons into the US" either across the Mexican border or using a small transport missile that can slip undetected when launched from a Russian airplane.

I will leave it at that for now; I'm not one of those paranoid people who goes around ranting about nuclear proliferation or whatever. If there really is a problem, there's not much we can do (except maybe try to get to those lost bombs first, or take anti-terrorism more seriously).

Comment author: gwern 15 June 2010 10:47:13PM 2 points [-]

I don't take Lunev seriously. Defectors are notoriously unreliable sources of information (as I think Iraq should have proven. Again.).

The problem with nuclear terrorism is that atomic bombs come with return addresses - the US has always collected isotopic samples (eg. with aerial collecting missions in international airspace) precisely to make sure this is the case. (Ironically, invading Afghanistan and Iraq may've helped deter nuclear terrorism: 'If the US invaded both these countries over just a few thousand dead, then it's plausible they will nuke us even if we cry to the heavens that we just carelessly lost that bomb.')

Comment author: NancyLebovitz 17 June 2010 01:08:01AM 2 points [-]

I prefer spending my precious mental CPUs on worrying about the US government going really bad.

Admittedly, a terrorist nuke (especially if exploded in the US) would be likely to cause the US government to take a lot more control.

Comment author: SilasBarta 17 June 2010 03:48:02PM *  1 point [-]

Amanda Knox update: Someone claims he knows the real killer, and is being taken seriously enough to give Knox and Sollecito a chance of being released. Of course, he's probably lying, since Guede most likely is the killer, and it's not who this new guy claims. But what can you do against the irrational?

I found this on a Slashdot discussion as a result of -- forgive me -- practicing the dark arts. (Pretty depressing I got upmodded twice on net.)

Comment author: gwern 21 June 2010 02:01:02AM *  3 points [-]

"I know [he was involved] because my brother confessed to me that he had killed Meredith and he asked me to hide a blood-stained knife and set of keys," he said, according to an attachment to Knox's appeal documents.

"I had everything under a little wall behind my house," he said. "I am happy to stand up in court and confirm all this and wrote to the court several times to tell them but was never questioned."

Should be easy to test his claims...

We "can't simply investigate in the course of a trial every claim that comes up," Mignini told CNN.

I sometimes wonder, is the Italian judicial system really that lousy or is there some sort of linguistic or cultural barrier there.

Comment author: simplicio 18 June 2010 10:12:31PM 3 points [-]

You were arguing against your real opinion, as a fifth columnist? May I ask why?

(Well done, by the way, in a technical sense. Just the right amount of character assassination: "Sollecito and Knox were known to be practitioners of dangerous sex acts.")

Just don't kill the younglings, Anakin!

Comment author: SilasBarta 18 June 2010 10:49:41PM 3 points [-]

I thought it would get modded down and then provoke someone as well-informed as komponisto to thoroughly refute it, and make people realize how stupid those arguments were.

Damn ... now that's starting to sound like a fake justification!

Eh, I guess I just like trolling too :-/

Comment author: simplicio 18 June 2010 11:13:37PM 2 points [-]

...and make people realize how stupid those arguments were.

Internet, Silas. Silas, Internet. ;)

I think you will find an ample number of inspiringly bad arguments out there, without adding to their number. I believe this is called cutting off one's nose to spite one's face.

Comment author: JoshuaZ 18 June 2010 10:16:35PM 2 points [-]

Slashdot threads have a bad enough signal to noise ratio as is. Please don't do that sort of thing.

Comment author: kodos96 17 June 2010 09:56:18PM *  1 point [-]

FYI, this was discussed previously here

Comment author: Risto_Saarelma 18 June 2010 06:37:21AM 1 point [-]

Aaron Swartz: That Sounds Smart

Comment author: ciphergoth 17 June 2010 11:40:12AM 1 point [-]

I recently read a fascinating paper that argued, based on what we know about cognitive bias, that our capacity for higher reason actually evolved as a means to persuade others of what we already believe, rather than as a means to reach accurate conclusions. In other words, rationalization came first and reason second.

Unfortunately I can't remember the title or the authors. Does anyone remember this paper? I'd like to refer to it in this talk. Thanks!

Comment author: Morendil 17 June 2010 11:43:58AM 4 points [-]

That would probably be "Why do humans reason" by Mercier and Sperber, which I covered in this post.

Comment author: Kevin 16 June 2010 08:01:18PM 1 point [-]