Open Thread June 2010, Part 3

6 Post author: Kevin 14 June 2010 06:14AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

The thrilling conclusion of what is likely to be an inaccurately named trilogy of June Open Threads.

 

Comments (606)

Comment author: Kevin 14 June 2010 07:56:37AM 0 points [-]
Comment author: cupholder 14 June 2010 08:24:47AM 0 points [-]

Looking forward to the inevitable 'Could video playdates be making your child vulnerable to cyberpredators?' follow-up.

Comment author: Kevin 14 June 2010 08:32:10AM 1 point [-]

Chatrouletteforkids.com

Comment author: Yoreth 14 June 2010 08:10:24AM 5 points [-]

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.

Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.

You can probably see where this is going. The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself, because this would require it to understand in a deep, level-spanning way how it itself works. Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.

So: do you know any counterarguments or articles that address either of these points?

Comment author: Morendil 14 June 2010 08:51:37AM 5 points [-]

I'd just forget the majoritarian argument altogether; it's a distraction.

The second question does seem important to me; I too am skeptical that an AI would "obviously" have the capacity to recursively self-improve.

The counter-argument is summarized here: whereas we humans are stuck with an implementation substrate which was never designed for understandability, an AI could be endowed with both a more manageable internal representation of its own capacities and a specifically designed capacity for self-modification.

It's possible - and I find it intuitively plausible - that there is some inherent general limit to a mind's capacity for self-knowledge, self-understanding and self-modification. But an intuition isn't an argument.

Comment author: Will_Newsome 14 June 2010 09:26:57AM 1 point [-]

Why is the word obviously in quotes?

Comment author: Morendil 14 June 2010 09:45:20AM 1 point [-]

Because I am not just saying it's not obvious an AI would recursively self-improve, I'm also referring to Eliezer's earlier claims that such recursive self-improvement (aka FOOM) is what we'd expect given our shared assumptions about intelligence. I'm sort-of quoting Eliezer as saying FOOM obviously falls out of these assumptions.

Comment author: Will_Newsome 14 June 2010 09:49:53AM 2 points [-]

I'm worried about the "sort-of quoting" part. I get nervous when people put quote marks around things that aren't actually quotations of specific claims.

Comment author: Morendil 14 June 2010 09:53:19AM 3 points [-]

Noted, and thanks for asking. I'm also somewhat over-fond of scare quotes to denote my using a term I'm not totally sure is appropriate. Still, I believe my clarification above is sufficient that there isn't any ambiguity left now as to what I meant.

Comment author: AlanCrowe 14 June 2010 12:34:07PM 6 points [-]

I see Yoreth's version of the majoritarian argument as ahistorical. The US Government did put a lot of money into AI research and became disillusioned. Daniel Crevier wrote a book AI: The tumultuous history of the search for artificial intelligence. It is a history book. It was published in 1993, 17 years ago.

There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today's belief that AI is around the corner from yesterday's belief that AI is around the corner. Wrong then, right now, because...

Alternatively one might argue that scaling died at 90 nanometers, practical computer science is just turning out Java monkeys, the low hanging fruit has been picked, there is no road map, theoretical computer science is a tedious sub-field of pure mathematics, partial evaluation remains an esoteric backwater, theorem provers remain an esoteric backwater, the theorem proving community is building the wrong kind of theorem provers and will not rejuvenate research into partial evaluation,...

The lack of mainstream interest in explosive developments in AI is due to getting burned in the past. Noticing that the scars are not fading is very different from being unaware of AI.

Comment author: SilasBarta 14 June 2010 01:21:54PM 2 points [-]

There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today's belief that AI is around the corner from yesterday's belief that AI is around the corner. Wrong then, right now, because...

I'm reminded of a historical analogy from reading Artificial Addition. Think of it this way: a society that believes addition is the result of adherence to a specific process (or a process isomorphic thereto), and understands part of that process, is closer to creating "general artificial addition" than one that tries to achieve "GAA" by cleverly avoiding the need to discover this process.

We can judge our own distance to artificial general intelligence, then, by the extent to which we have identified constraints that intelligent processes must adhere to. And I think we've seen progress on this in terms of more refined understanding of e.g. how to apply Bayesian inference. For example, the work by Sebastian Thrun on how to seamlessly aggregate knowledge across sensors to create a coherent picture of the environment, which has produced tangible results (navigating the desert).
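For a flavor of what that sensor aggregation looks like at its smallest (a hedged sketch in Python; `fuse` is an invented name, and real systems like Thrun's use full Kalman or particle filters rather than a one-shot update): with Gaussian noise models, Bayes' rule reduces to a precision-weighted average of the readings.

```python
# Minimal sketch: fusing two noisy Gaussian readings of the same quantity
# via Bayes' rule. With Gaussian likelihoods, the posterior mean is the
# precision-weighted average, and the posterior variance shrinks.

def fuse(mu1, var1, mu2, var2):
    """Combine two independent Gaussian estimates (mean, variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2      # precisions (inverse variances)
    var = 1.0 / (w1 + w2)                # fused estimate is more certain
    mu = var * (w1 * mu1 + w2 * mu2)     # weighted toward the sharper sensor
    return mu, var

# A noisy fix (mean 10.0, variance 4.0) and a sharper range reading
# (mean 12.0, variance 1.0) of the same position:
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
# The posterior lies closer to the sharper reading, with variance below
# either sensor's alone.
```

This is the one-dimensional core of the "coherent picture from many sensors" idea: each new reading is just another factor multiplied into the posterior.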

Comment author: whpearson 14 June 2010 01:35:01PM 0 points [-]

Can you point me to an overview of this understanding? I would like to apply it to the problem of detecting different types of data in a raw binary file.

Comment author: SilasBarta 14 June 2010 01:51:05PM *  2 points [-]

I don't know of a good one. You could try this, but it's light on the math. I'm looking through Thrun's papers to find a good one that gives a simple overview of the concepts, and through the CES documentation.

I was introduced to this advancement in EY's Selling nonapples article.

And I'm not sure how this helps for detecting file types. I mean, I understand generally how they're related, but not how it would help with the specifics of that problem.

Comment author: whpearson 14 June 2010 03:03:47PM 0 points [-]

Thanks, I'll have a look. I'm looking for general-purpose insights. Otherwise you could use the same sort of reasoning to argue that the technology behind Deep Blue was on the right track.

Comment author: SilasBarta 14 June 2010 03:52:42PM 0 points [-]

True, the demonstration of Thrun's that I referred to was specific to navigating a terrestrial desert environment, but it was a much more general problem than chess, and had to deal with probabilistic data and uncertainty. The techniques detailed in Thrun's papers easily generalize beyond robotics.

Comment author: whpearson 14 June 2010 04:22:04PM 0 points [-]

I've had a look, and I don't see much that would make the techniques easily generalize to my problems (or any problem with similar characteristics, such as very large amounts of possibly relevant data). Oh, I am planning to use Bayesian techniques. But "easy" is not how I would characterize translating the problem.

Comment author: SilasBarta 14 June 2010 04:28:32PM *  3 points [-]

Now that you mention it, one of the reasons I'm trying to get acquainted with the methods Thrun uses is to see how much they rely on advance knowledge of exactly how the sensor works (i.e. its true likelihood function). Then, I want to see if it's possible to infer enough relevant information about the likelihood function (such as through unsupervised learning) so that I can design a program that doesn't have to be given this information about the sensors.

And that's starting to sound more similar to what you would want to do.

Comment author: rwallace 14 June 2010 02:43:47PM 1 point [-]

I know of partial evaluation in the context of optimization, but I hadn't previously heard of much connection between that and AI or theorem provers. What do you see as the connection?

Or, more concretely: what do you think would be the right kind of theorem provers?

Comment author: whpearson 14 June 2010 02:57:15PM *  2 points [-]

Partial evaluation is interesting to me in an AI sense. If you haven't already, have a look at the three Futamura projections.

But instead of compilers and language specifications you have learning systems and problem specifications. Or something along those lines.
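A toy illustration of the underlying idea (Python; `specialize_power` is a made-up example, and a real partial evaluator such as mix works on program texts rather than building lambdas): specializing a general function to a known static argument yields a residual program with that argument compiled away. The first Futamura projection is the same move applied to an interpreter paired with a fixed source program.

```python
# General power function: both arguments dynamic.
def power(x, n):
    result = 1
    for _ in range(n):
        result *= x
    return result

# Toy specializer: n is static, so the loop is unrolled away at
# specialization time, leaving a residual program in x alone.
def specialize_power(n):
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return eval(f"lambda x: {body}")  # residual program, e.g. "x * x * x"

cube = specialize_power(3)
assert cube(4) == power(4, 3) == 64
```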

Comment author: rwallace 14 June 2010 04:15:23PM 1 point [-]

Right, that's optimization again. Basically the reason I'm asking about this is that I'm working on a theorem prover (with the intent of applying it to software verification), and if Alan Crowe considers current designs the wrong kind, I'm interested in ideas about what the right kind might be, and why. (The current state of the art does need to be extended, and I have some ideas of my own about how to do that, but I'm sure there are things I'm missing.)

Comment author: AlanCrowe 14 June 2010 04:13:11PM 6 points [-]

I think I made a mistake in mentioning partial evaluation. It distracts from my main point. The point I'm making a mess of is that Yoreth asks two questions:

If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI?

I read (mis-read?) the rhetoric here as containing assumptions that I disagree with. When I read/mis-read it I feel that I'm being slipped the idea that governments have never been interested in AI. I also pick up a whiff of "the mainstream doesn't know, we must alert them." But mainstream figures such as John McCarthy and Peter Norvig know and are refraining from sounding the alarm.

So partial evaluation is a distraction and I only made the mistake of mentioning it because it obsesses me. But it does! So I'll answer anyway ;-)

Why am I obsessed? My Is Lisp a Blub post suggests one direction for computer programming language research. Less speculatively, three important parts of computer science are compiling (ie hand compiling), writing compilers, and tools such as Yacc for compiling compilers. The three Futamura projections provide a way of looking at these three topics. I suspect it is the right way to look at them.

Lambda-the-ultimate had an interesting thread on the type-system feature-creep death-spiral. Look for the comment By Jacques Carette at Sun, 2005-10-30 14:10 linking to Futamura's papers. So there is the link to having a theorem proving inside a partial evaluator.

Now partial evaluation looks like it might really help with self-improving AI. The AI might look at its source, realise that the compiler it is using to compile itself is weak because it is a Futamura-projection-based compiler with an underpowered theorem prover, prove some of the theorems itself, re-compile, and start running faster.

Well, maybe, but the overviews I've read of the classic text by Jones, Gomard, and Sestoft make me think that the state of the art only offers linear speed-ups. If you write a bubble sort and use partial evaluation to compile it, it stays order n squared. The theorem prover will never transform it to an n log n algorithm.

I'm trying to learn ACL2. It is a theorem prover and you can do things such as proving that quicksort and bubble sort agree. That is a nice result and you can imagine it fitting into a bigger picture. The partial evaluator wants to transform a bubble sort into something better, and the theorem prover can anoint the transformation as correct. I see two problems.

First, the state of the art is a long way from being automatic. You have to lead the theorem prover by the hand. It is really just a proof checker. Indeed the ACL2 book says

You are responsible for guiding it, usually by getting it to prove the necessary lemmas. Get used to thinking that it rarely proves anything substantial by itself.

It is a long way from proving (bubble sort = quick sort) on its own.

Second, that doesn't actually help. There is no sense of performance here. It only says that they agree, without saying which is faster. I can see a way to fix this. ACL2 can be used to prove that interpreters conform to their semantics. Perhaps it can be used to prove that an instrumented interpreter performs a calculation in fewer than n log n cycles. Thus lifting the proofs from proofs about programs to proofs about interpreters running programs would allow ACL2 to talk about performance.

This solution to problem two strikes me as infeasible. ACL2 cannot cope with the base level without hand holding, which I have not managed to learn to give. I see no prospect of lifting the proofs to include performance without adding unmanageable complications.

Could performance issues be built in to a theorem prover, so that it natively knows that quicksort is faster than bubble sort, without having to pass its proofs through a layer of interpretation? I've no idea. I think this is far ahead of the current state of computer science. I think it is preliminary to, and much simpler than, any kind of self-improving artificial intelligence. But that is what I had in mind as the right kind of theorem prover.
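The instrumented-measurement idea can at least be shown in miniature (Python; `bubble_sort` and `merge_sort` here are illustrative stand-ins, not ACL2 artifacts): give each program an abstract step counter, and the quadratic-versus-n-log-n gap shows up as data, even though nothing is proved.

```python
import random

# Each sort takes a one-element list as a mutable comparison counter,
# a crude stand-in for an instrumented interpreter counting cycles.

def bubble_sort(xs, count):
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            count[0] += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs, count):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], count)
    right = merge_sort(xs[mid:], count)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        count[0] += 1
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

data = [random.random() for _ in range(512)]
b, m = [0], [0]
assert bubble_sort(data, b) == merge_sort(data, m)
# b[0] is n(n-1)/2 comparisons; m[0] is at most about n log2 n,
# a measured gap of more than an order of magnitude at n = 512.
```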

There is a research area of static analysis and performance modelling. One of my Go playing buddies has just finished a PhD in it. I think that he hopes to use the techniques to tune up the performance of the TCP/IP stack. I think he is unaware of and uninterested in theorem provers. I see computer science breaking up into lots of little specialities, each of which takes half a life time to master. I cannot see the threads being pulled together until the human lifespan is 700 years instead of 70.

Comment author: rwallace 14 June 2010 04:42:31PM 2 points [-]

Ah, thanks, I see where you're coming from now. So ACL2 is pretty much state-of-the-art from your point of view, but as you point out, it needs too much handholding to be widely useful. I agree, and I'm hoping to build something that can perform fully automatic verification of nontrivial code (though I'm not focusing on code optimization).

You are right, of course, that proving quicksort is faster than bubble sort is considerably more difficult than proving the two equivalent.

But the good news is, there is no need! All we need to do to check which is faster, is throw some sample inputs at each and run tests. To be sure, that approach is fallible, but what of it? The optimized version only needs to be probably faster than the original. A formal guarantee is only needed for equivalence.
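A sketch of that fallible-but-cheap checking (Python; `check_equivalent` and `faster` are hypothetical helpers, not part of any verifier): random testing makes equivalence probable, and a stopwatch makes the speedup probable, with no proof anywhere in sight.

```python
import random
import time

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def check_equivalent(f, g, trials=200):
    # Fallible: passing only makes equivalence probable, never certain.
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 30))]
        if f(xs) != g(xs):
            return False
    return True

def faster(f, g, xs):
    # Crude single-run timing; fine when the gap is asymptotic.
    t0 = time.perf_counter()
    f(xs)
    t1 = time.perf_counter()
    g(xs)
    t2 = time.perf_counter()
    return (t1 - t0) < (t2 - t1)

assert check_equivalent(sorted, bubble_sort)
# On a few thousand elements, faster(sorted, bubble_sort, data) holds
# comfortably in practice.
```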

Comment author: mindviews 14 June 2010 11:04:31AM 1 point [-]

Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.

The critical aspect of a "major-impact intelligence-explosion singularity" isn't the method for improvement but the rate of improvement. If computer processing power continues to grow at an exponential rate, even an inefficiently improving AI will have the growth in raw computing power behind it.

So: do you know any counterarguments or articles that address either of these points?

I don't have any articles but I'll take a stab at counterarguments.

A Majoritarian counterargument: AI turned out to be harder and further away than originally thought. The general view is still tempered by the failure of AI to live up to those expectations. In short, the AI researchers cried "wolf!" too much 30 years ago and now their predictions aren't given much weight because of that bad track record.

A mind can't understand itself counterargument: Even accepting as a premise that a mind can't completely understand itself, that's not an argument that it can't understand itself better than it currently does. The question then becomes which parts of the AI mind are important for reasoning/intelligence and can an AI understand and improve that capability at a faster rate than humans.

Comment author: cousin_it 14 June 2010 11:49:07AM *  10 points [-]

The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind.

If your argument is based on information capacity alone, it can be knocked down pretty easily. An AI can understand some small part of its design and improve that, then pick another part and improve that, etc. For example, if the AI is a computer program, it has a sure-fire way of improving itself without completely understanding its own design: build faster processors. Alternatively you could imagine a population of a million identical AIs working together on the problem of improving their common design. After all, humans can build aircraft carriers that are too complex to be understood by any single human. Actually I think today's humanity is pretty close to understanding the human mind well enough to improve it.

Comment author: whpearson 14 June 2010 01:10:35PM 0 points [-]

It depends upon what designing a mind is like. How much minds intrinsically rely on interactions between parts and how far those interactions reach.

In the brain, most of the interesting stuff, such as science and the like, is done by culturally created components. The evidence for this is the stark variety of worldviews that exist in the world and have existed in history (with mostly the same genes), and the ways those views shape how their holders interact with the world.

Making a powerful AI, in this view, is not just a problem of making a system with lots of hardware or the right algorithms from birth; it is a problem of making a system with the right ideas. And ideas interact heavily in the brain. They can squash or encourage each other. If one idea goes, others that rely on it might go as well.

I suspect that we might be close to making the human mind able to store more ideas or to process ideas more quickly. How much that will lead to the creation of better ideas I don't know. That is, will we get a feedback loop? We might just get better at storing gossip and social information.

Comment author: Houshalter 14 June 2010 09:11:24PM 3 points [-]

I don't think the number of AIs actually matters. If multiple AIs can do a job, then a single AI should be able to simulate them as though it were multiple AIs (or better yet, just figure out how to do the job on its own). Another thing to note is that if the AI makes a copy of its program and puts it in external storage, it doesn't add any extra complexity to itself. It can then run its optimization process on the copy, although I do agree that it would be more practical if it only improved parts of itself at a time.

Comment author: cousin_it 14 June 2010 09:20:58PM *  4 points [-]

You're right, I used the million AIs as an intuition pump, imitating Eliezer's That Alien Message.

Comment deleted 14 June 2010 01:49:25PM *  [-]
Comment author: JoshuaZ 14 June 2010 02:07:37PM 3 points [-]

None of those people are AI theorists so it isn't clear that their opinions should get that much weight given that it is outside their area of expertise (incidentally, I'd be curious what citation you have for the Hawking claim). From the computer scientists I've talked to, the impression I get is that they see AI as such a failure that most of them just aren't bothering to do much in the way of research in it except for narrow purpose machine learning or expert systems. There's also an issue of a sampling bias: the people who think a technology is going to work are generally more loud about that than people who think it won't. For example, a lot of physicists are very skeptical of Tokamak fusion reactors being practical anytime in the next 50 years, but the people who talk about them a lot are the people who think they will be practical.

Note also that nothing in Yoreth's post actually relied on or argued that there won't be moderately smart AI so it doesn't go against what he's said to point out that some experts think there will be very smart AI (although certainly some people on that list, such as Chalmers and Hanson do believe that some form of intelligence explosion like event will occur). Indeed, Yoreth's second argument applies roughly to any level of intelligence. So overall, I don't think the point about those individuals does much to address the argument.

Comment deleted 14 June 2010 03:01:10PM *  [-]
Comment author: JoshuaZ 14 June 2010 03:07:49PM 10 points [-]

That's a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn't likely to have much general knowledge. However, that does mean the AI individual has a better understanding of how many different approaches to AI have failed miserably. But that's just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth are clearly people who have knowledge bases closer to that of the AI prof than to the physics grad student. Hanson certainly has looked a lot at various failed attempts at AI. I think I'll withdraw this argument. You are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.

Comment author: Vladimir_Nesov 14 June 2010 03:07:50PM *  3 points [-]

What does an average AI prof know that a physics graduate who can code doesn't know?

Machine learning, more math/probability theory/belief networks background?

Comment deleted 14 June 2010 03:15:02PM [-]
Comment author: Vladimir_Nesov 14 June 2010 03:33:51PM *  2 points [-]

There is a ton of knowledge about probabilistic processes defined by networks in various ways, numerical methods for inference in them, clustering, etc. All the fundamental stuff in this range has applications to physics, and some of it was known in physics before getting reinvented in machine learning, so in principle a really good physics grad could know that stuff, but it's more than the standard curriculum requires. On the other hand, it's much more directly relevant to probabilistic methods in machine learning. Of course both should have a good background in statistics and Bayesian probability theory, but probabilistic analysis of nontrivial processes adds unique intuitions that a physics grad won't necessarily possess.

Comment author: whpearson 14 June 2010 03:08:14PM 1 point [-]

The AI prof is more likely to know more things that don't work and the difficulty of finding things that do. Which is useful knowledge when predicting the speed of AI development, no?

Comment deleted 14 June 2010 03:15:39PM [-]
Comment author: whpearson 14 June 2010 03:22:49PM 0 points [-]

Trying to model the world as crisp logical statements a la block worlds for example.

Comment deleted 14 June 2010 04:13:51PM [-]
Comment author: whpearson 14 June 2010 04:25:51PM 0 points [-]

Yup... which things were you asking for? Examples of things that do work? You don't actually need to find them to know that they are hard to find!

Comment author: Daniel_Burfoot 14 June 2010 08:46:16PM 2 points [-]

I disagree with this, basically because AI is a pre-paradigm science.

I am gratified to find that someone else shares this opinion.

What does an average AI prof know that a physics graduate who can code doesn't know?

A better way to phrase the question might be: what can an average AI prof. do that a physics graduate who can code can't?

Comment deleted 14 June 2010 10:47:12PM [-]
Comment author: SilasBarta 15 June 2010 12:20:17AM 0 points [-]

Could you clarify exactly what Hutter has done that has advanced the frontier? I used to be very nearly a "Hutter enthusiast", but I eventually concluded that his entire work is:

"Here's a few general algorithms that are really good, but take way too long to be of any use whatsoever."

Am I missing something? Is there something of his I should read that will open my eyes to the ease of mechanizing intelligence?

Comment deleted 15 June 2010 08:33:33AM [-]
Comment author: CarlShulman 15 June 2010 12:46:08PM 4 points [-]

I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science, it is just statistics correctly applied.

Statistics vs machine learning: FIGHT!

Comment author: SilasBarta 14 June 2010 09:19:45PM 3 points [-]

What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.

So people with no experience programming robots but who know the equations governing them would just be able to, on the spot, come up with comparable code to AI profs? What do they teach in AI courses, if not the kind of thing that would make you better at this?

Comment author: MatthewW 14 June 2010 07:10:44PM 2 points [-]

I think Hofstadter could fairly be described as an AI theorist.

Comment author: CarlShulman 14 June 2010 02:41:36PM 6 points [-]

10% is a low bar; it would require a dubiously high level of confidence to rule out AI over a 90-year time frame (longer than the time since Turing and von Neumann and the like got going, with a massively expanding tech industry, improved neuroimaging and neuroscience, superabundant hardware, and perhaps biological intelligence enhancement for researchers). I would estimate the average of the group you mention as over one third by 2100. Chalmers says AI is more likely than not by 2100, I think Robin and Nick are near half, and I am less certain about the others (who have said that it is important to address AI or AI risks but have not given unambiguous estimates).

Here's Ben Goertzel's survey. I think that Dan Dennett's median estimate is over a century, although at the 10%-by-2100 level I suspect he would agree. Dawkins has made statements that suggest similar estimates, although perhaps with somewhat shorter timelines. Likewise for Doug Hofstadter, who claimed at the Stanford Singularity Summit to have raised his estimate of the time to human-level AI from the 21st century to the mid-to-late millennium, although he weirdly claimed to have done so for non-truth-seeking reasons.

Comment author: DanArmak 14 June 2010 02:54:39PM 8 points [-]

The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessary has some blank areas.

This is strictly true if you're talking about the working memory that is part of a complete model of your "mind". But a mind can access an unbounded amount of externally stored data, where a complete self-representation can be stored.

A Turing Machine of size N can run on an unbounded-size tape. A von Neumann PC with limited main memory can access an unbounded-size disk.

Although we can only load a part of the data into working memory at a time, we can use virtual memory to run any algorithm written in terms of the data as a whole. If we had an AI program, we could run it on today's PCs and while we could run out of disk space, we couldn't run out of RAM.
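A minimal sketch of that move (Python; `external_sort` is an invented example, not anyone's actual system): sort a stream using a fixed-size working buffer, spilling sorted runs to disk and merging them, so "RAM" stays bounded while the data does not.

```python
import heapq
import random
import tempfile

def external_sort(numbers, ram_limit=100):
    """Sort a stream holding at most ram_limit items in memory at once."""
    runs, buf = [], []
    for x in numbers:                 # stream the input; never hold it all
        buf.append(x)
        if len(buf) >= ram_limit:     # "working memory" full: spill a run
            buf.sort()
            f = tempfile.TemporaryFile(mode="w+")
            f.writelines(f"{v}\n" for v in buf)
            f.seek(0)
            runs.append(f)
            buf = []
    buf.sort()
    # k-way merge of the on-disk runs plus the in-memory tail.
    streams = [map(float, f) for f in runs] + [iter(buf)]
    return list(heapq.merge(*streams))

data = [random.random() for _ in range(1000)]
assert external_sort(data) == sorted(data)
```

The merge only ever touches one element per run at a time, which is the whole point: the algorithm is defined over the data as a whole even though the working set is tiny.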

Comment author: xamdam 14 June 2010 03:33:00PM 2 points [-]

In addition to theoretical objections, I think the majoritarian argument is factually wrong. Remember, 'future is here, just not evenly distributed'.

http://www.google.com/trends?q=singularity shows a trend

http://www.nytimes.com/2010/06/13/business/13sing.html?pagewanted=all - this week in NYT. Major MSFT and GOOG involvement.

http://www.acceleratingfuture.com/michael/blog/2010/04/transhumanism-has-already-won/

Comment author: IsaacLewis 14 June 2010 05:55:40PM 10 points [-]

Two counters to the majoritarian argument:

First, it is being mentioned in the mainstream - there was a New York Times article about it recently.

Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought - nuclear war. I've been reading Bertrand Russell's autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (UK's upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons, until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.

Looking back even further, for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed amongst physicists.

I think your second point is stronger. However, I don't think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you've got something that's like a human brain, but faster. Let it replicate itself, and you've got the equivalent of a team of humans, but with the advantages of shared memory and instantaneous communication.

Now, if humans can design an AI, surely a team of 1,000,000 human equivalents running 1000x faster can design an improved AI?

Comment author: NancyLebovitz 15 June 2010 12:05:49PM 0 points [-]

How do we know that governments aren't secretly working on AI?

Is it worth speculating about the goals which would be built into a government-designed AI?

Comment author: NancyLebovitz 15 June 2010 01:34:28PM *  2 points [-]

Another argument against the difficulty-of-self-modeling point: it's possible to become more capable by having better theories rather than a complete model, and the former is probably more common.

It could notice inefficiencies in its own functioning, check to see if the inefficiencies are serving any purpose, and clean them up without having a complete model of itself.

Suppose a self-improving AI is too cautious to go mucking about in its own programming, and too ethical to muck about in the programming of duplicates of itself. It still isn't trapped at its current level, even aside from the reasonable approach of improving its hardware, though that may be a more subtle problem than generally assumed.

What if it just works on having a better understanding of math, logic, and probability?

Comment author: khafra 14 June 2010 12:37:58PM 8 points [-]

Wikipedia says the term "Synthetic Intelligence" is a synonym for GAI. I'd like to propose a different use: as a name for the superclass encompassing things like prediction markets. This usage occurred to me while considering 4chan as a weakly superintelligent optimization process with a single goal; something along the lines of "producing novelty;" something it certainly does with a paperclippy single-mindedness we wouldn't expect out of a human.

It may be that there's little useful to be gained by considering prediction markets and chans as part of the same category, or that I'm unable to find all the prior art in this area because I'm using the wrong search terms--but it does seem somewhat larger and more practical than gestalt intelligence.

Comment author: NancyLebovitz 15 June 2010 01:41:21PM 0 points [-]

Could you expand on what would be included and excluded from Synthetic Intelligence?

Would a free market count?

Comment author: khafra 15 June 2010 02:11:30PM 0 points [-]

Good question. I didn't mean to take ownership of the term, but I'd consider the "invisible hand" part to be the synthetic intelligence; and the rest of the market's activities to be other synthetic appendages and organs.

Comment author: SilasBarta 14 June 2010 01:06:44PM *  0 points [-]

I'd like to pose a sort of brain-teaser about Relativity and Mach's Principle, to see if I understand them correctly. I'll post my answer in rot13.

Here goes: Assume the universe has the same rules it currently does, but instead consists of just you and two planets, which emit visible light. You are standing on one of them and looking at the other, and can see the surface features. It stays at the same position in the sky.

As time goes by, you gradually get a rotationally-shifted view of the features. That is, the longitudinal centerline of the side you see gradually shifts. This change in view could result from the other planet rotating, or from your planet revolving around it while facing it. (Remember, both planets emit light, so you don't see a different portion being in a shadow like the moon's phases.)

Question: What experiment could you do to determine whether the other planet is spinning, or your planet is revolving around it while facing it?

My answer (rot13): Gurer vf ab jnl gb qb fb, orpnhfr gurer vf ab snpg bs gur znggre nf gb juvpu bar vf ernyyl unccravat, naq vg vf yvgreny abafrafr gb rira guvax gung gurer vf n qvssrerapr. Gur bayl ernfba bar zvtug guvax gurer'f n qvssrerapr vf sebz orvat npphfgbzrq gb n havirefr jvgu zber guna whfg gurfr gjb cynargf, juvpu sbez n onpxtebhaq senzr ntnvafg juvpu bar bs gurz pbhyq or pbafvqrerq fcvaavat be eribyivat.

Comment deleted 14 June 2010 01:36:05PM *  [-]
Comment author: SilasBarta 14 June 2010 01:38:05PM *  0 points [-]

How would you measure the centrifugal force?

ETA: I'm not asking because I don't know the standard ways to measure centrifugal force, I'm asking because the standard measurement methods don't work when the universe is just two planets.

Comment author: prase 14 June 2010 07:31:21PM 0 points [-]

Calculate the gravitational force on the surface of a planet of the same size and mass as yours and compare with what you actually measure.

Comment author: SilasBarta 14 June 2010 08:41:39PM 1 point [-]

What do you calibrate your equipment against?

Comment author: prase 14 June 2010 08:55:46PM 0 points [-]

The equipment is already calibrated. You have said that everything works in the same way as today, except the universe consists of two planets. I have interpreted that to mean the observer already knows the value of the gravitational constant in units he can use. If the gravitational constant has to be independently measured first, then it is more complicated, of course.

Comment author: SilasBarta 14 June 2010 09:14:10PM 1 point [-]

The equipment is already calibrated. You have said that everything works in the same way as today, except the universe consists of two planets.

Right: you know the laws of physics. You don't know your mass though, and you don't know any object that has a known mass. I posit this because, in the history of science, they made certain measurements that aren't possible in a two-planet universe, and to assume you can calibrate to those measurements would assume away the problem.

Comment author: prase 14 June 2010 10:27:20PM 0 points [-]

But still, in the rotating scenario the attractive force wouldn't be perpendicular to the planet's surface, and this can be established without knowing the gravitational constant. If the planet is spherical and you already know what is perpendicular, of course.
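A rough Newtonian sketch of the size of this effect (function and numbers are my own, for illustration only): the centrifugal term tilts apparent gravity away from the radial direction, so a plumb line on a rotating planet is deflected, most strongly at mid-latitudes.

```python
import math

def plumb_deflection_deg(omega, R, g, latitude_deg):
    """Angle between a plumb line and the true radial direction on a
    rotating spherical planet (Newtonian, ignores oblateness)."""
    lat = math.radians(latitude_deg)
    # Component of centrifugal acceleration tangent to the surface:
    a_tan = omega**2 * R * math.sin(lat) * math.cos(lat)
    # Radial component of apparent gravity:
    a_rad = g - omega**2 * R * math.cos(lat)**2
    return math.degrees(math.atan2(a_tan, a_rad))

# Earth-like values: about a tenth of a degree at 45 degrees latitude.
deflection = plumb_deflection_deg(7.292e-5, 6.371e6, 9.81, 45.0)
```

For an Earth-like planet this comes out to roughly 0.1 degree, measurable in principle without knowing the gravitational constant, which is prase's point.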

Comment author: SilasBarta 14 June 2010 01:43:49PM *  0 points [-]

The universe adheres to General Relativity, not Newton's laws. What does GR say about the effect of spinning and revolving bodies?

Comment author: wnewman 14 June 2010 03:02:59PM 2 points [-]

Relativity says that as motion becomes very much slower than the speed of light, behavior becomes very similar to Newton's laws. Everyday materials (and planetary systems) and energies give rise to motions very very much slower than the speed of light, so it tends to be very very difficult to tell the difference. For a mechanical experimental design that can be accurately described in a nontechnical blog post and that you could reasonably imagine building for yourself (e.g., a Foucault-style pendulum), the relativistic predictions are very likely to be indistinguishable from Newton's predictions.

(This is very much like the "Bohr correspondence principle" in QM, but AFAIK this relativistic correspondence principle doesn't have a special name. It's just obvious from Einstein's equations, and those equations have been known for as long as ordinary scientists have been thinking about (speed-of-light, as opposed to Galilean) relativity.)

Examples of "see, relativity isn't purely academic" tend to involve motion near the speed of light (e.g., in particle accelerators, cosmic rays, or inner-sphere electrons in heavy atoms), superextreme conditions plus sensitive instruments (e.g., timing neutron stars or black holes in close orbit around each other), or extreme conditions plus supersensitive instruments (e.g., timing GPS satellites, or measuring subtle splittings in atomic spectroscopy).

Comment author: SilasBarta 14 June 2010 03:15:18PM *  1 point [-]

And the example I posited is a superextreme condition: the two bodies in question make up the entire universe, which amplifies the effects that are normally only observable with sensitive instruments. See frame-dragging.

Comment author: prase 14 June 2010 05:09:57PM 0 points [-]

Amplifies? The Schwarzschild spacetime (which behaves like a Newtonian gravitational field in the large-distance limit) needs only one point-like massive object. What do you expect as a non-negligible difference made by (non-)existence of distant objects?

Comment author: SilasBarta 14 June 2010 05:18:51PM 1 point [-]

What do you expect as a non-negligible difference made by (non-)existence of distant objects?

The fact that there's no longer a frame against which to measure local rotation in any sense other than its rotation relative to the frame of the other body. So it makes a big difference what counts as "the rest of the universe".

Comment author: prase 14 June 2010 07:16:41PM 1 point [-]

People believed for quite a long period of time that the distant stars don't provide a stable reference frame. That it is the Earth which rotates was shown by the Foucault pendulum or similar experiments, without referring to the outer stellar frame.

Comment author: wnewman 15 June 2010 01:37:36PM 0 points [-]

(two points, one about your invocation of frame-dragging upstream, one elaborating on prase's question...)

point 1: I've never studied the kinds of tensor math that I'd need to use the usual relativistic equations; I only know the special relativistic equations and the symmetry considerations which constrain the general relativistic equations. But it seems to me that special relativity plus symmetry suffice to justify my claim that any reasonable mechanical apparatus you can build for reasonable-sized planets in your example will be practically indistinguishable from Newtonian predictions.

It also seems to me that your cited reference to wikipedia "frame-dragging" supports my claim. E.g., I quote: "Lense and Thirring predicted that the rotation of an object would alter space and time, dragging a nearby object out of position compared with the predictions of Newtonian physics. The predicted effect is small --- about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or build an instrument that is very sensitive."

You seem to be invoking the authority of standard GR to justify an informal paraphrase of Mach's principle (which has its own wikipedia article). I don't know GR well enough to be absolutely sure, but I'm about 90% sure that by doing so you misrepresent GR as badly as one misrepresents thermodynamics by invoking its authority to justify the informal entropy/order/whatever paraphrases in Rifkin's Entropy or in various creationists' arguments of the form "evolution is impossible because the second law of thermo prevents order from increasing spontaneously."

point 2: I'll elaborate on prase's "What do you expect as a non-negligible difference made by (non-)existence of distant objects?" IIRC there was an old (monastic?) thought experiment critique of the Aristotelian "heavy bodies fall faster": what happens when you attach an exceedingly thin thread between two cannonballs before dropping them? Similarly, what happens to the rotational physics of two bodies alone in the universe when you add a single neutrino very far away? Does the tiny perturbation cause the two cannonballs discontinuously to have doubly-heavy-object falling dynamics, or the rotation of the system to discontinuously become detectable?

Comment author: Wei_Dai 14 June 2010 03:35:02PM 1 point [-]

If the two planets aren't revolving around each other, wouldn't gravity pull them together? But maybe space is expanding at precisely the rate necessary to keep them at the same distance despite gravity? To test that, build a rocket on your planet and push it (the planet) slightly, either toward the other planet or away from it. If the planets are revolving around each other, you've just changed a circular orbit into an elliptical one, so you should see an oscillation in the distance between the two planets. If they are not revolving around each other, then they'll either keep getting closer together or further apart, depending on which direction you made the push.

(This is all based on my physics intuition. Somebody who knows the math should write down the two equations and check if they're isomorphic. :)
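In the Newtonian limit this push experiment can be checked numerically. A sketch, not the GR calculation: equal masses, simulation units with G times the total mass set to 2 and initial separation 1, integrating the relative two-body coordinate.

```python
import math

def separation_range(v_tangential, v_radial, dt=1e-4, steps=200000):
    """Integrate the relative two-body problem (G*M_total = 2 units,
    initial separation 1); return (min, max) separation seen."""
    x, y = 1.0, 0.0
    vx, vy = v_radial, v_tangential
    rmin = rmax = 1.0
    for _ in range(steps):
        r = math.hypot(x, y)
        if r < 0.01:  # bodies have effectively collided
            break
        rmin, rmax = min(rmin, r), max(rmax, r)
        # Semi-implicit Euler: kick (a = -G*M_total * r_vec / r^3), then drift.
        vx -= 2.0 * x / r**3 * dt
        vy -= 2.0 * y / r**3 * dt
        x += vx * dt
        y += vy * dt
    return rmin, rmax

v_circ = math.sqrt(2.0)  # circular orbital speed at separation 1

# Circular orbit plus a small radial push: separation oscillates but stays bounded.
rmin_orbit, rmax_orbit = separation_range(v_circ, 0.1 * v_circ)

# No orbital motion at all: the planets simply fall together.
rmin_fall, rmax_fall = separation_range(0.0, 0.0)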

Comment author: SilasBarta 14 June 2010 03:47:46PM *  1 point [-]

If the two planets aren't revolving around each other, wouldn't gravity pull them together?

Gravity would pull, yes, but the rotation of a body also distorts space in such a way as to produce another effect you have to consider.

ETA: Look at a similar scenario. Same as the one I proposed, but you always see the same portion of the other planet. How do you know how fast the two planets are revolving around each other? Isn't this the same as asking how fast the entire universe is rotating?

Comment author: Wei_Dai 14 June 2010 04:35:02PM *  0 points [-]

Here's another possible experiment. Send a robot to the other planet, cut it in half, and then build a beam to push the two halves apart. If that planet is rotating, then due to conservation of angular momentum, this should cause its rotation to slow down, and you'd see that. If the two planets are just revolving around each other, then you won't observe such a slowdown in the apparent rotation of the other planet.

ETA: I'm pretty curious what the math actually says. Do we have any GR experts here?
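In the Newtonian limit at least, the size of the predicted slowdown is just conservation of angular momentum. A toy sketch (function is mine; it treats the pushed-apart halves as point masses and ignores GR entirely):

```python
def spin_after_split(R, d, omega0):
    """Spin rate after a rigid solid sphere of radius R spinning at omega0
    is split in half and the halves pushed to separation d.
    I * omega is conserved; the mass M cancels out."""
    I0 = 2.0 / 5.0 * R**2   # (2/5) M R^2 for a solid sphere, per unit mass
    I1 = (d / 2.0)**2       # 2 * (M/2) * (d/2)^2 for two point halves, per unit mass
    return omega0 * I0 / I1
```

Pushing the halves to twice the original radius (d = 2R) already cuts the spin rate to 40% of its original value, so the effect would be easy to see from the other planet.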

Comment author: prase 14 June 2010 05:15:38PM *  0 points [-]

Exactly as fast as needed to keep them on a circular orbit (assuming you don't experience a change in the distance to the second planet). For this, you can quite safely use Newton's laws.

In general-relativistic language, what exactly do you mean by "how fast the entire universe is rotating"?

Comment author: SilasBarta 14 June 2010 05:35:57PM *  -1 points [-]

In general-relativistic language, what exactly do you mean by "how fast the entire universe is rotating"?

I mean nothing. In GR, the very question is nonsense. The universe does not have a position, just relative positions of objects.

The universe does not have a velocity, just relative velocities of various objects.
The universe does not have an acceleration, just relative accelerations of various objects.
The universe does not have a rotational orientation, just relative rotational orientations of various objects.
The universe does not have a rotational velocity, just relative rotational velocities of various objects.

There is no way in this universe to distinguish between a bucket rotating vs. the rest of the universe rotating around the bucket. There is also no such thing as how fast the universe "as a whole" is rotating.

Comment author: Vladimir_M 14 June 2010 06:53:54PM 3 points [-]

I'm not sure if what you write makes sense. Take one simple example: a flat Minkowski spacetime, empty except for a few light particles (so that their influence on the metric is negligible). This means that special relativity applies, and it's clearly consistent with GR.

Accelerated motions are not going to be relative in this universe, just like they aren't in Newton's theory. You can of course observe an accelerating particle and insist on using coordinates in which it remains in the origin (which is sometimes useful, as in e.g. the Rindler coordinates), but in this coordinate system, the universe will not have the above listed properties in any meaningful sense.

Comment author: mkehrt 14 June 2010 04:08:46PM 2 points [-]

Couldn't you tell whether your planet is revolving or rotating using a Foucault's pendulum? I'm not sure whether you can get all the information about the planets' relations with a complex set of Foucault's pendula or not, but you could get some.

Also, I think your answer is a map-territory confusion. While GR does not distinguish certain types of motion from each other, and while GR seems to be the best model of macroscopic behavior we have, to claim that this means that there is really no fact of the matter seems a little overconfident.

Comment author: SilasBarta 14 June 2010 05:12:45PM 0 points [-]

Couldn't you tell whether your planet is revolving or rotating using a Foucault's pendulum? I'm not sure whether you can get all the information about the planets' relations with a complex set of Foucault's pendula or not, but you could get some.

The Foucault pendulum is able to measure earth's rotation in part because of the frame established by the rest of the universe. But in the scenario I described, the frame dragging effect of one or both planets blows up your ability to use the standard equations. Would the corrections introduced by including frame-dragging show a solution that varies depending on which of the planets is "really" moving?

Also, I think your answer is a map-territory confusion. While GR does not distinguish certain types of motion from each other, and while GR seems to be the best model of macroscopic behavior we have, to claim that this means that there is really no fact of the matter seems a little overconfident.

It's the other way around. The fact that there is no test that would distinguish your location along a dimension means that no such dimension exists, and any model requiring such a distinction is deviating from the territory.

Yes, GR could be wrong, but for it to be wrong in a way such that e.g. you actually can distinguish acceleration from gravity would require more than just a refinement of our models; it would mean the universe up to this point was a lie.

Comment author: mkehrt 14 June 2010 06:00:14PM *  0 points [-]

The Foucault pendulum is able to measure earth's rotation in part because of the frame established by the rest of the universe. But in the scenario I described, the frame dragging effect of one or both planets blows up your ability to use the standard equations. Would the corrections introduced by including frame-dragging show a solution that varies depending on which of the planets is "really" moving?

I must admit I'm a little baffled by this. I'm pretty ignorant of GR, but I was strongly under the impression that

(a) the frame dragging effect was miniscule, and

(b), that Foucault's pendulum works simply because there is no force acting on the pendulum to change the plane of its swing. Thus, a perfect polar pendulum on a planet in a universe with no other bodies in it will never have any force exerted on it other than gravity and will continue to swing in the same plane. If the planet is rotating, an observer on the planet will be able to tell this by observing the pendulum, even in the absence of any other body in the universe. Similarly, in the above paradox, an observer can tell whether their planet is revolving around the other planet while remaining oriented towards it because the pendulum will rotate over the course of a "year".
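For reference, the standard Newtonian prediction being relied on here: the swing plane precesses once per sidereal day divided by the sine of the latitude. A sketch (function name is mine; GR corrections are ignored, being minuscule for planet-sized masses):

```python
import math

def foucault_period_hours(latitude_deg, sidereal_day_hours=23.934):
    """Precession period of a Foucault pendulum's swing plane:
    one sidereal day divided by sin(latitude). Infinite at the equator."""
    s = math.sin(math.radians(latitude_deg))
    return float('inf') if s == 0 else sidereal_day_hours / abs(s)
```

At a pole the plane comes back around in one sidereal day; at 30 degrees latitude it takes about 48 hours, and at the equator the pendulum shows nothing at all.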

Comment author: SilasBarta 14 June 2010 06:09:17PM *  0 points [-]

To appreciate how different things are when you remove the rest of the universe, consider this: what if the universe is just one planet with the people on it? How will a Foucault pendulum behave in that universe? Shouldn't it behave quite differently, given that the rotation of the planet means the rotation of the entire universe, which is meaningless?

Comment author: prase 14 June 2010 08:25:22PM 1 point [-]

Rotation of the planets doesn't mean rotation of the universe; don't forget there are not only the planets, but also the gravitational field.

Comment author: Vladimir_M 14 June 2010 08:29:59PM *  1 point [-]

To appreciate how different things are when you remove the rest of the universe, consider this: what if the universe is just one planet with the people on it?

As Prase said above, that depends on the boundary conditions. As the clearest example, if you imagine a flat empty Minkowski space and then add a lightweight sphere into it, then special relativity will hold and observers tied to the sphere's surface would be able to tell whether it's rotating by measuring the Coriolis and centrifugal forces. There would be a true anti-Machian absolute space around them, telling them clearly if they're rotating/accelerating or not. This despite the whole scenario being perfectly consistent with GR.

Comment author: Vladimir_M 14 June 2010 07:05:58PM *  2 points [-]

SilasBarta:

Yes, GR could be wrong, but for it to be wrong in a way such that e.g. you actually can distinguish acceleration from gravity would require more than just a refinement of our models; it would mean the universe up to this point was a lie.

This isn't really true. In GR, you can in principle always distinguish acceleration from gravity over finite stretches of spacetime by measuring the tidal forces. There is no distribution of mass that would produce an ideally homogeneous gravitational field free of tidal forces whose effect would perfectly mimic uniform acceleration in flat spacetime. The equivalence principle holds only across infinitesimal regions of spacetime.

See here for a good discussion of what the equivalence principle actually means, and the overview of various controversies it has provoked:
http://www.mathpages.com/home/kmath622/kmath622.htm
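The size of the tidal signal can be estimated with elementary Newtonian gravity. A sketch, assuming a spherical mass: the leading-order difference in GM/r^2 across a detector of height h.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_accel(M, r, h):
    """Approximate difference in gravitational acceleration between the
    top and bottom of a detector of height h at distance r from mass M:
    the leading term of the gradient of GM/r^2, i.e. 2*G*M*h / r^3."""
    return 2 * G * M * h / r**3

# Earth at the surface, 1 m tall detector: roughly 3e-6 m/s^2 of tide,
# tiny but nonzero, which is why the equivalence principle is only local.
earth_tide = tidal_accel(5.97e24, 6.371e6, 1.0)
```

This is what lets you distinguish a real gravitational field from uniform acceleration over any finite region, given a sufficiently sensitive gradiometer.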

Comment author: SilasBarta 14 June 2010 08:51:39PM 1 point [-]

This isn't really true. In GR, you can in principle always distinguish acceleration from gravity over finite stretches of spacetime by measuring the tidal forces. ...

Yes, I was just listing an offhand example of an implication of GR and I didn't bother to specify it to full precision. My point was just that in order for a certain implication to be falsified (specifically, that there is no fact of the matter as to e.g. what the velocity of the universe is), you would need the laws of the universe to change, not just a refinement in the GR model.

Comment author: Vladimir_M 14 June 2010 06:35:53PM *  2 points [-]

I have only a superficial understanding of GR, but nevertheless, your question seems a bit unclear and/or confused. A few important points:

  • Whether GR is actually a Machian theory is a moot point, because it turns out that Mach's principle is hard to formulate precisely enough to tackle that question. See e.g. here for an overview of this problem: http://arxiv.org/abs/gr-qc/9607009

  • According to Mach's original idea -- whose relation with GR is still not entirely clear, and which is certainly not necessarily implied by GR -- a necessary assumption for the "normal" behavior of rotational and other non-inertial motions is the large-scale isotropy of the universe, and the fact that enormous distant masses exist in every direction. If the only other mass in the universe is concentrated nearby, you'd see only weak inertial forces, and they would behave differently in different directions.

  • The geometry of spacetime in GR is not uniquely determined by the distribution of matter. You can have various crazy spacetime geometries for any distribution of matter. (As a trivial example, imagine you're living in the usual Minkowski or Schwarzschild metric, and then a powerful gravitational wave passes by.) In this sense, GR is deeply anti-Machian.

  • That said, assuming nothing funny's going on, in the scenario you describe, the classical limit applies, and the planets would move pretty much according to Newton's laws. This means they'd both be orbiting around their common center of mass, so it's not clear to me that the observations you listed would be possible. [ETA: please ignore this last point, my typing was faster than my thinking here. See the replies below.]

Therefore, the only way I can make sense of your example would be to assume that the other planet is much heavier than yours, and that the Schwarzschild metric applies and gives approximately Newtonian results, so we get something similar to the Moon's rotation around the Earth. Is that what you had in mind?

Comment author: prase 14 June 2010 07:29:08PM 0 points [-]

it's not clear to me that the observations you listed would be possible. ... the only way I can make sense of your example would be to assume that the other planet is much heavier than yours

I don't understand. The listed observations are in accordance with Newton, whatever the masses of the planets.

Comment author: Vladimir_M 14 June 2010 08:21:30PM *  0 points [-]

Yes, you're right. It was my failure of imagination. I thought about it again, and yes, even with similar or identical masses, the rotations of individual planets around their own axes could be set so as to provide the described view.

Comment author: prase 14 June 2010 06:41:31PM *  4 points [-]

Imagine a simplified scenario: only one planet. Is the planet rotating or not? You could construct a Foucault pendulum and see. It will show you a definite answer: either its plane of oscillation moves relatively to the ground or not. This doesn't depend on distant stars. If your planet is heavy and dense like hell, you could see the difference between a "rotating" Kerr metric and a "static" Schwarzschild metric.

Of course, general relativity is generally covariant, and any motion can be interpreted as a free fall in some gravitational field; moreover, there is no absolute background spacetime with respect to which to measure acceleration. So you can likely find coordinates in which the planet is static and the pendulum's movement is explained by a changing gravitational field. The price paid is that it will be necessary to postulate weird boundary conditions at infinity. It is possible that more versions of boundary conditions are acceptable in the absence of distant objects, and the question whether the planet is rotating is then less well defined.

Carlo Rovelli in his Quantum Gravity (once I downloaded it from arXiv, now it seems unavailable, but probably it could still be found somewhere on the net) considers eight versions of Mach principle (MP). This is what he says (he has discussed the parabolic water surface of a rotating bucket before instead of two planets or Foucault pendula):

  • MP1: Distant stars can affect local inertial frame. True. Because matter affects the gravitational field.
  • MP2: The local inertial frame is completely determined by the matter content of the universe. False. The gravitational field has independent degrees of freedom.
  • MP3: The rotation of the inertial frame inside the bucket is in fact dragged by the bucket, and this effect increases with the mass of the bucket. True. This is the Lense-Thirring effect: a rotating mass drags the inertial frames in the vicinity.
  • MP4: In the limit in which the mass is large, the internal inertial reference frame rotates with the bucket. Depends on the details of the way the limit is taken.
  • MP5: There can be no global rotation of the universe. False. Einstein believed this to be true in GR, but Goedel's solution is a counter-example.
  • MP6: In the absence of matter, there would be no inertia. False. There are vacuum solutions of the Einstein equations.
  • MP7: There is no absolute motion, only motion relative to something else, therefore the water in the bucket does not rotate in absolute terms, it rotates with respect to some dynamical physical entity. True. This is the basic physical idea of GR.
  • MP8: The local inertial frame is completely determined by the dynamical fields of the universe. True. In fact, this is precisely Einstein key idea.

I think number 4 is especially relevant here. The boundary conditions or the global topology of the universe have to be taken into account, else the two-planet scenario is not entirely defined.

Edit: The last remark doesn't make much sense after all. The planets aren't thought to be too heavy and the dragging effect shouldn't be too big, and its relation to boundary conditions isn't straightforward. Nevertheless, the boundary conditions still play an important role (see my subcomment here).

Comment author: SilasBarta 14 June 2010 08:37:38PM *  0 points [-]

Imagine a simplified scenario: only one planet. Is the planet rotating or not? You could construct a Foucault pendulum and see. It will show you a definite answer: either its plane of oscillation moves relatively to the ground or not. This doesn't depend on distant stars.

Sure it does. If the rest of the objects in the universe were rotating in unison around the earth while the earth was still, that would be observationally indistinguishable from the earth rotating. The GR equations (so I'm told[1]) account for this in that, if the rest of the universe were treated as rotating, that would send gravitational waves that would jointly cause the earth to be still in that frame of reference.

Remove that external mass, and you've removed the gravitational waves. Nothing cancels the gravitational wave generated by the motion of the planets.

It is possible that more versions of boundary conditions are acceptable in the absence of distant objects and the question whether the planet is rotating is then less defined.

Yes, I think that agrees with my answer to the question.

[1] See here:

Einstein's theory further had the property that moving matter would generate gravitational waves, propagating curvatures. Einstein suspected that if the whole universe was rotating around you while you stood still, you would feel a centrifugal force from the incoming gravitational waves, corresponding exactly to the centripetal force of spinning your arms while the universe stood still around you. So you could construct the laws of physics in an accelerating or even rotating frame of reference, and end up observing the same laws - again freeing us of the specter of absolute space.

Comment author: prase 14 June 2010 09:18:02PM *  1 point [-]

if the rest of the universe were treated as rotating, that would send gravitational waves that would jointly cause the earth to be still in that frame of reference

This is not so simple. The force of the gravitational waves depends on the mass of the rest of the universe. One can easily imagine the same observable rest of the universe with a very different mass (just remove all the dark matter, say). Both can't generate the same gravitational waves, but there would be no significant observable effect on Earth. The metric around here would be still more or less Schwarzschild (or Kerr). The fact that a steady state can be interpreted as rotation whose effects are cancelled by gravitational waves has not necessarily much to do with the existence of other objects in the universe. Even in empty space, the gravitational waves can come from infinity.

So, while it's true that there is no absolute space with respect to which one measures the acceleration, there are still Foucault pendula. Because there is no absolute space, to define what constitutes rotation using any particular coordinates would be absurd. But we can still quite reasonably define rotation (extend our present definition of rotation) by use of the pendulum, or bucket, or whatever similar device. Even in single-planet universes, there can be buckets with both flat and parabolic surfaces.

Comment author: prase 14 June 2010 10:06:35PM *  2 points [-]

Let me write one more reply since I think my first one wasn't entirely clear.

Let's put all this into a thought experiment: Universe A contains only a light observer with a round bottle half full of water. Universe B contains all that, and moreover a lot of uniformly, isotropically distributed distant massive stars. In both universes the spacetime region around the observer can be described by the Minkowski metric. At the beginning, the observer sees that the water is spread near the walls of the bottle with a round vacuum bubble in the middle; this minimises the energy due to surface tension. Now, the observer gives the bottle some spin. Will the observation in universe A be different from that in universe B?

If GR is right, then no, it won't. In both, the observers will see the water concentrated in the regions most distant from a specific straight line, which it is reasonable to call the axis of rotation. To see that, it is enough to realise that the distant stars influence the bottle only by means of the gravitational field, and it remains almost the same in both cases - approximately Minkowskian, assuming that the bottle and the observer aren't of black hole proportions.

Of course one can then change the coordinates to those in which the bottle is static. With respect to these coordinates, the stars in universe B would rotate, and in universe A, well, nothing much can be said. But in both universes, we will find a gravitational field which creates precisely the effects of the rotation of the now static bottle. The stars are there only to distract the attention.

We can almost do the coordinate change in the Newtonian framework: it amounts to use of centrifugal force, which can be thought of as a gravitational force (it is universal in the same way as the gravitational force; of course, this is the equivalence principle). There are only two "minor" problems in Newtonian physics: first, orthodox Newtonianism recognises only gravitational force emanating from massive objects in the way described by Newton's gravitational law, which is why the centrifugal force has to be treated differently, and second, there is the damned velocity dependent Coriolis force.

Edit: some formulations changed

Comment author: SilasBarta 15 June 2010 12:08:31AM 0 points [-]

Okay, I give up. I don't know the math well enough to speak confidently on this issue. I was just taking the Machian principles in the article I linked and extrapolating them to the scenario I envisioned, using some familiarity with frame-dragging effects.

Still, I think it's an interesting exercise in finding the implications of a universe without the background mass, and not as easy to answer as some initially assumed.

Comment author: prase 15 June 2010 05:51:29AM 0 points [-]

Yes, it's interesting, I was confused for quite a while, still the answer is simpler than what I initially assumed, which makes it a good brain teaser.

Comment author: MichaelBishop 14 June 2010 03:44:53PM *  2 points [-]

Whole Brain Emulation: The Logical Endpoint of Neuroinformatics? (google techtalk by Anders Sandberg)

I assume someone has already linked to this but I didn't see it so I figured I'd post it.

Comment author: MichaelBishop 14 June 2010 04:22:40PM *  12 points [-]

I'd like to share introductory level posts as widely as possible. There are only three with this tag. Could people nominate more of these posts, perhaps messaging the authors to encourage them to tag their posts "introduction"?

We should link to, stumble on, etc. accessible posts as much as possible. The sequences are great, but intimidating for many people.

Added: Are there more refined tags we'd like to use to indicate who the articles are appropriate for?

Comment author: RobinZ 15 June 2010 04:23:26AM 9 points [-]

There are a few scattered posts in Eliezer's sequences which do not, I believe, have strong dependencies (I steal several from the About page, others from Kaj_Sotala's first and second lists) - I separate out the ones which seem like good introductory posts specifically, with a separate list of others I considered but do not think are specifically introductory.

Introductions:

Not introductions, but accessible and cool:

Comment author: blogospheroid 15 June 2010 05:27:09AM 3 points [-]

Thanks for this list.

Comment author: SilasBarta 15 June 2010 01:05:13PM 3 points [-]

As usual, I'll have to recommend Truly Part of You as an excellent introductory post, given the very little background required, and the high insight per unit length.

Comment author: Alexandros 14 June 2010 06:10:17PM *  3 points [-]

Off That (Rationalist Anthem) - Baba Brinkman

More about skeptics than rationalists, but still quite nice. Enjoy

Comment author: magfrump 14 June 2010 06:43:28PM 0 points [-]

I could have sworn that I'd seen this posted somewhere before, for example in this thread. Maybe it was on StumbleUpon...

Comment author: magfrump 14 June 2010 06:57:39PM 1 point [-]

Looking through a couple of posts on young rationalists, it occurred to me to ask the question, how many murderers have a loving relationship with non-murderer parents?

Is there a way to get these kinds of statistics? Is there a way to filter them for accuracy? Accuracy both of 'loving relationship' and of 'guilty of murder' (i.e. plea bargains, false charges, etc.)

Comment author: Dagon 14 June 2010 07:39:47PM 1 point [-]

I started to write: The probabilities in my priors are so low that I don't expect any update to occur, even if you could accurately measure. Then I thought: Wait, that's what 'prior' means: of course I don't expect any update to occur! Rationality is hard.

So instead, I'll phrase my confusion this way: I have a hard time stating a belief for which even a surprising result to this measurement would matter. There are so many other reasons to recommend being raised by loving parents that "increased likelihood of murder from near-zero to still-near-zero" is unlikely to change such a preference.

And the overall murder rate is already so low that the reverse isn't true either: you shouldn't worry significantly less about an acquaintance murdering someone just because they have loving parents. Because in most cases you CANNOT worry less than you already should, which is near-zero.

Comment author: magfrump 15 June 2010 03:51:07AM 0 points [-]

I'm not really thinking in terms of particular issues, the more interesting questions in my mind are the issues that would arise in collecting such data.

Comment author: Liron 14 June 2010 08:57:56PM 0 points [-]

Physics question: Is it physically possible to take any given mass, like the moon, and annihilate the mass in a way that yields usable energy?

Comment author: DanArmak 14 June 2010 09:00:11PM *  0 points [-]

Yes, if you collide it with the same mass of antimatter. Edit: I don't know enough to say if there are other ways.

This may not be very practical to do to the whole moon at once though :-)

Comment author: DanArmak 14 June 2010 09:18:18PM *  0 points [-]

This may not be very practical to do to the whole moon at once though :-)

Well, I shouldn't speak before checking. Taking numbers from Wikipedia (ETA fixed numbers):

  • The moon has a mass of 7.36e22 kg; converting it entirely to energy would yield about 6.62e39 J.
  • The Sun's total output is about 3.86e26 J/s, so this is the equivalent of roughly 540,000 years of the Sun's energy (if you have a Dyson sphere).
  • A nova releases ~1e34-1e37 J over a few days - at most roughly 1/600 as much as converting the moon to energy. A core-collapse supernova releases 1e44-1e46 J of energy in about 10 seconds - a lot more. (Ranges are according to different Google results.)

ETA: the numbers were completely wrong before and I corrected them.
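The arithmetic is quick to check; a sketch using standard constants (so treat the output as approximate):

```python
C = 2.998e8           # speed of light, m/s
MOON_MASS = 7.36e22   # kg
SUN_OUTPUT = 3.86e26  # W (total solar luminosity)
YEAR = 3.156e7        # seconds per year

energy = MOON_MASS * C ** 2          # rest energy of the moon, ~6.6e39 J
years_of_sun = energy / SUN_OUTPUT / YEAR

print(f"{energy:.3g} J, {years_of_sun:.3g} years of solar output")
```

This gives roughly 6.6e39 J and a bit over half a million years of the Sun's total output.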

Comment author: Christian_Szegedy 14 June 2010 10:46:35PM *  1 point [-]

Your numbers seem to be off (e.g. 4.26e9 J/sec would be truly minuscule). You probably meant 4.29e29 J/sec, but then 5.74e5 years is wrong. According to Wikipedia, the Sun's energy output is 1.2e34 J/s, which is still at odds with both of your numbers.

Comment author: Liron 15 June 2010 08:36:13AM 0 points [-]

Yeah but does it require a lot of energy/negentropy to get ahold of the necessary antimatter? I'm wondering whether the moon's mass makes it analogous to a charged capacitor or an uncharged capacitor.

Comment author: Mitchell_Porter 15 June 2010 09:19:55AM *  3 points [-]

Antimatter is expensive to make. It would require the whole world GDP to make one anti-Liron. Conservation of energy says that to make an antiparticle, you need a collision with kinetic energy equal to the rest mass of the antiparticle you're making. Solar flares make some antimatter as they punch through the solar atmosphere, but good luck getting hold of it before it annihilates.

The standard cosmological model says that shortly after the big bang, matter and antimatter existed in equal quantities, but there were interactions which favored the production of matter, and so all the antimatter was annihilated, leaving an excess of matter, which then in the next stage formed the first atomic nuclei. Antimatter is therefore rare in the universe. There are probably no natural antistars, for example. So it is expensive to come by, but (for a cosmic civilization) it might be a good way to store energy.
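A back-of-the-envelope sketch of the scale of that claim - the person's mass, the electricity price, and the production efficiency are all rough assumptions, not measured figures:

```python
C = 2.998e8                        # speed of light, m/s
mass = 70.0                        # kg, assumed mass of one person
rest_energy = mass * C ** 2        # ~6.3e18 J of antimatter needed

kwh = rest_energy / 3.6e6          # joules per kilowatt-hour
energy_cost = kwh * 0.10           # USD, at an assumed $0.10/kWh

# Accelerator production of antiprotons is famously inefficient;
# ~1e-9 overall is a commonly quoted ballpark (assumption).
efficiency = 1e-9
total_cost = energy_cost / efficiency
```

Even the bare energy bill comes to on the order of $100 billion, and dividing by any plausible production efficiency pushes the total far past world GDP (around $6e13 in 2010), consistent with the "whole world GDP" claim being, if anything, an understatement.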

Comment author: DanArmak 15 June 2010 02:03:08PM 2 points [-]

There are probably no natural antistars, for example.

And if there are, we don't know how to identify them from far away, do we?

BTW, can there be antimatter black holes? My limited understanding of physics is that matter/antimatter falling into a black hole passes the event horizon before it can interact with anything that fell into the hole in the past; and once it passes the event horizon, even if it mutually annihilates with something already in the black hole, the results can't escape outside. So from the outside there's no difference between matter, antimatter, and mixed black holes.

Comment author: nhamann 14 June 2010 09:00:14PM *  6 points [-]

“There is no scientist shortage,” declares Harvard economics professor Richard Freeman, a pre-eminent authority on the scientific work force. Michael Teitelbaum of the Alfred P. Sloan Foundation, a leading demographer who is also a national authority on science training, cites the “profound irony” of crying shortage — as have many business leaders, including Microsoft founder Bill Gates — while scores of thousands of young Ph.D.s labor in the nation’s university labs as low-paid, temporary workers, ostensibly training for permanent faculty positions that will never exist.

The Real Science Gap

ETA: Here's a money quote from near the end of the article:

The main difference between postdocs and migrant agricultural laborers, he jokes, is that the Ph.D.s don’t pick fruit.

(Ouch)

Comment author: Houshalter 15 June 2010 12:34:29AM *  0 points [-]

I'm not sure I see what the problem is. Capitalism works? The article makes it seem like this system is unsustainable or bound to collapse, but I'm not sure I see how the two fit together. I am particularly confused by this quote:

Obviously, the “pyramid paradigm can’t continue forever,” says Susan Gerbi, chair of molecular biology at Brown University and one of the relatively small number of scientists who have expressed serious concern about the situation. Like any Ponzi scheme, she fears, this one will collapse when it runs out of suckers — a stage that appears to be approaching. “We need to have solutions for some new steady-state model” that will limit the production of new scientists and offer them better career prospects, she adds.

First of all, how is it a Ponzi scheme that is bound to collapse? Also, limiting the number of scientists is not going to make the system better, except that maybe individuals will have less competition and thus more opportunities, which is a benefit to the individual, not to the whole system.

EDIT: Fixed spelling.

Comment author: SilasBarta 15 June 2010 01:01:33AM *  25 points [-]

I'm not sure if it meets the Ponzi scheme model, but the problem is this: lots of students are going deeper into debt to get an education that has less and less positive impact on their earning power. So the labor force will be saturated with people having useless skills (given lack of demand, government-driven or otherwise, for people with a standard academic education) and being deep in undischargeable debt.

The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.

On top of that, to the extent that people do pick up on this, the sciences will continue to be starved of the people who can bring about advances -- this past generation they were lured away to produce deceptive financial instruments that hid dangerous risk, and which (governments claim) put the global financial system at the brink of collapse.

My take? The system of go-to-college/get-a-job needs to collapse and be replaced, for the most part, by apprenticeships (or "internships" as we fine gentry call them) at a younger age, which will give people significantly more financial security and enhance the economy's productivity. But this will be bad news for academics.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

So the slack will have to be picked up by people "outside the system". Yes, they'll be starved for funds and rely on rich people and donations to non-profits, but they'll mostly make up for it by their ability to get much more insight out of much less data: knowing what data-mining techniques to use, spotting parallels across different fields, avoiding the biases that infect academia, and generally automating the kind of inference currently believed to require a human expert to perform.

In short: this, too, shall pass -- the only question is how long we'll have to suffer until the transition is complete.

Sorry, [/rant].

Comment author: Houshalter 15 June 2010 02:31:23AM 0 points [-]

I'm not sure if it meets the Ponzi scheme model, but the problem is this: lots of students are going deeper into debt to get an education that has less and less positive impact on their earning power. So the labor force will be saturated with people having useless skills (given lack of demand, government-driven or otherwise, for people with a standard academic education) and being deep in undischargeable debt.

The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.

I suppose that's true, but there is such a thing as an equilibrium where the factors balance each other out. I do fear that it might be too high, but again, when the price becomes unreasonable, people look for other options that are cheaper.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

That's kind of sad actually, but no amount of government regulation can fix that. Unfortunately there is little actual incentive for real science in a purely capitalist society, though we've been doing well so far.

Comment author: fiddlemath 15 June 2010 03:51:51AM *  7 points [-]

I agree that college as an institution of learning is a waste for most folks - they will "never use this," most disregard the parts of a liberal arts education that they're force-fed, and neither they nor their jobs benefit. Maybe students gain something from networking with each other. But yes, Goodhart's Law applies. Employers appear to use a diploma as an indicator of diligence and intelligence. So long as that's true, students will fritter away four years of their lives and put themselves deep in debt to get a magic sheet of paper.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

It's been broken forever, in basically the same way it is now. Most working scientists are trying to prove their idea, because negative results don't carry nearly so much prestige as positive results, and the practice of science is mostly about prestige. I'm sure I could find citations for peer review being "pal review" throughout its lifetime. (ooh. I'll try this in a moment.)

To the extent that science has ever worked, it's because the social process of science has worked - scientists are just open-minded enough to, as a whole, let strong evidence change their collective minds. I'm not really convinced that the social process of science has changed significantly over the last decades, and I can imagine these assertions being rooted in generalized nostalgia. Do you have reasons to assert this?

(Are you just blowing off steam about this? I can totally support that, because argh argh argh the publication treadmill in my field headdesk headdesk expletives. But if you have evidence, I'd love to hear it.)

Comment author: SilasBarta 15 June 2010 04:20:34AM *  11 points [-]

I mainly have evidence for the absolute level, not necessarily for the trend (of science getting worse). For the trend, I could point to Goodhart phenomena like the publications-per-unit-time metric being gamed, which gets worse as time progresses.

I also think that in this context, the absolute level is evidence of the trend, when you consider that the number of scientists has increased; if the quality of science in general has not increased with more people, it's getting worse per unit person.

For the absolute level, I've noticed scattered pieces of the puzzle that, against my previous strong presumption, support my suspicions. I'm too sleepy to go into detail right now, but briefly:

  • There's no way that all the different problems being attacked by researchers can be really, fundamentally different: the function space is too small for a unique one to exist for each problem, so most should be reducible to a mathematical formalism that can be passed to mathematicians, who can tell if it's solvable.

  • There is evidence that such connections are not being made. The example I use frequently is ecologists and the method of adjacency matrix eigenvectors. That method has been around since the 1960s and forms the basis of Google's PageRank, allowing it to identify crucial sites. Ecologists didn't apply it to the problem of identifying critical ecosystem species until a few years ago.

  • I've gone into grad school myself and found that the existing explanations of concepts are a scattered mess: it's almost as if they don't want you to understand papers or break into the advanced topics that are the subject of research. Whenever I do understand such a topic, I find myself able to explain it in much less time than the experts in the field took to explain it to me. This creates a fog over research, allowing big mistakes to last for years, with no one ever noticing them because too few eyeballs are on the work. (This explanation barrier is the topic of my ever-upcoming article "Explain yourself!")
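For anyone curious, the adjacency-eigenvector method mentioned above is only a few lines: the principal eigenvector of the adjacency matrix scores each node by how strongly it connects to other well-connected nodes (PageRank adds damping and normalization on top of the same idea). A toy sketch on a made-up five-node network:

```python
import numpy as np

# Symmetric adjacency matrix of a small undirected network
# (think species interactions, or links between sites);
# node 0 is a hub connected to everything.
A = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
], dtype=float)

# eigh returns eigenvalues in ascending order for symmetric matrices;
# the eigenvector of the largest eigenvalue is the centrality score.
eigvals, eigvecs = np.linalg.eigh(A)
centrality = np.abs(eigvecs[:, np.argmax(eigvals)])

# The hub (node 0) gets the highest centrality score.
ranking = np.argsort(centrality)[::-1]
```

The same calculation, applied to a food-web adjacency matrix, picks out the species whose removal would most disrupt the network - which is exactly the critical-species question ecologists took decades to connect to this formalism.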

As an example of what a mess it is (and at risk of provoking emotions that aren't relevant to my point), consider climate science. This is an issue where they have to convince LOTS of people, most of whom aren't as smart. You would think that in documenting the evidence supporting their case, scientists would establish a solid walkthrough: a runnable, editable model with every assumption traceable to its source and all inputs traceable to the appropriate databases.

Yet when climate scientists were in the hot seat last fall and wanted to reaffirm the strength of their case, they had no such site to point anyone to. RealClimate.org made a post saying basically, "Um, anyone who's got the links to the public data, it'd be nice if you could post them here..."

To clarify, I'm NOT trying to raise the issue of AGW being a scam, etc. I'm saying that no matter how good the science is, here we have a case where it's of the utmost importance to explain research to the masses, and so you would expect it to have the most thorough documentation and traceability. Yet here, at the top of the hill, no one bothered to trace out the case from start to finish, fully connecting this domain to the rest of collective scientific knowledge.

Comment author: fiddlemath 15 June 2010 05:54:27AM *  8 points [-]

If the quality of science in general has not increased with more people, it's getting worse per unit person.

Er, I'd just expect to see more science being done. I know of no one studying overall mechanisms of science-as-it-is-realized (little-s "science"), and thereby seriously influencing it. Further, that's not something current science is likely to worry about, unless someone can somehow point to irrefutable evidence that science is underperforming.

All of the points you list are real issues; I watch them myself, to constant frustration. I think they have common cause in the incentive structure of science. The following account has been hinted at many times over around Less Wrong, but spelling it out may make it clear how your points follow:

Researchers focus on churning out papers that can actually get accepted at some highly-rated journal or conference, because the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants. This quantity has a strong effect on scientists' individual futures and their reputations. For all but the most well-established or idealistic scientists, this pressure overrides the drive to promote general understanding, increase the world's useful knowledge, or satisfy curiosity[*].

This pressure means that scientists seek the next publication and structure their investigations to yield multiple papers, rather than telling a single coherent story from what might be several least publishable units. Thus, you should expect little synthesis - a least publishable unit is very nearly the author's research minus the current state of knowledge in a specialized subfield. Thus, as you say, existing explanations are a scattered mess.

Since these explanations are scattered and confusing, it's brutally difficult to understand the cutting edge of any particular subfield. Following publication pressure, papers are engineered to garner acceptance from peer reviewers. Those reviewers are part of the same specialized subfield as the author. Thus, if the author fails to use a widely-known concept from outside his subfield to solve a problem in his paper, the reviewers aren't likely to catch it, because it's hard to learn new ideas from other subfields. Thus, the author has no real motivation to investigate subfields outside of his own expertise, and we have a stable situation. Thus, your first and second points.

All this suggests to me that, if we want to make science better, we need to somehow twiddle its incentive structure. But changing longstanding organizational and social trends is, er, outside of my subfield of study.

[*] This demands substantiation, but I have no studies to point to. It's common knowledge, perhaps, and it's true in the research environments I've found myself in. Does it ring true for everyone else reading this, with appropriate experience of academic research?

Comment author: Morendil 15 June 2010 06:22:55AM *  0 points [-]

At the conclusion of the interview, Pierre deduces one general lesson: "You can't be inhibited, you must free yourself of the psychological obstacle that consists in being tied to something." Oh no, our friend Pierre is not inhibited; look how for the past twenty years he has jumped from subject to subject, from boss to boss, from country to country, bringing into action all the differences of potential, seizing polypeptides, selling them off as soon as they begin declining, betting on Monod and then dropping him as soon as he gets bogged down; and here he is, ready to pack his bags again for the West Coast, the title of professor, and a new laboratory. What thing is he accumulating? Nothing in particular, except perhaps the absence of inhibition, a sort of free energy prepared to invest itself anywhere. Yes, this is certainly he, the Don Juan of knowledge. One will speak of "intellectual curiosity," a "thirst for truth," but the absence of inhibition in fact designates something else: a capital of elements without use value, which can assume any value at all, provided the cycle closes back on itself while always expanding further. Pierre Kernowicz capitalizes the jokers of knowledge.

-- Bruno Latour, Portrait of a Biologist as Wild Capitalist

(ETA: see also.)

Comment author: Douglas_Knight 15 June 2010 04:29:44AM 1 point [-]

On top of that, to the extent that people do pick up on this, the sciences will continue to be starved of the people who can bring about advances -- this past generation they were lured away to produce deceptive financial instruments that hid dangerous risk, and which (governments claim) put the global financial system at the brink of collapse.

If you're not happy with what they did in finance, why do you think they would have been useful in science?

Comment author: SilasBarta 15 June 2010 04:38:19AM 1 point [-]

They're smart. They're capable of figuring out a creative solution. And the financial instruments they designed were creative, for what they were intended, which was to hide risk and allow banks to offload mortgages to someone else. Someone benefited from the creativity, just not the average worker or consumer.

Comment author: Douglas_Knight 15 June 2010 04:48:23AM 1 point [-]

Yes, capable of figuring out a creative solution to maximizing their goals when faced with the incentive structure of science. You think that the people who remain fail to do science when faced with these incentives, so why do you expect these others to be more altruistic?

Comment author: nhamann 15 June 2010 05:00:57AM 2 points [-]

I'm not sure I see what the problem is.

From the article:

Paid out of the grant, these highly skilled employees might earn $40,000 a year for 60 or more hours a week in the lab. A lucky few will eventually land faculty posts, but even most of those won’t get traditional permanent spots with the potential of tenure protection. The majority of today’s new faculty hires are “soft money” jobs with titles like “research assistant professor” and an employment term lasting only as long as the specific grant that supports it.

I'm not sure how typical this experience is, but assuming it is as common as the article suggests: you don't see a problem with the fact that huge numbers of highly trained people (~4 years for a bachelor's, 5-7 for a Ph.D.) are getting paid very little to work in conditions with almost no long-term job security? You see that as being perfectly fine, and comment that "capitalism works?" I'm not sure what to say. Such job prospects are decidedly unappealing (some might say intolerable), and I think it's reasonable to suggest that such conditions will result in a substantial decrease in the number of smart, dedicated young people interested in becoming scientists. This, to put it bluntly, is a fucking shame.

Comment author: Houshalter 15 June 2010 01:27:08PM 2 points [-]

I'm not sure how typical this experience is, but assuming it is as common as the article suggests: you don't see a problem with the fact that huge numbers of highly trained (~4 years for a bachelors, 5-7 for a Ph.D) are getting paid very little to work in conditions with almost no long-term job security? You see that as being perfectly fine, and comment that "capitalism works?" I'm not sure what to say. Such job prospects are decidedly unappealing (some might say intolerable), and I think it's reasonable to suggest that such conditions will result in a substantial decrease in the number of smart, dedicated young people interested in becoming scientists. This, to put it bluntly, is a fucking shame.

Maybe that was a little harsh. But the question is, why are "huge numbers of highly trained [people] (~4 years for a bachelor's, 5-7 for a Ph.D.) [...] getting paid very little to work in conditions with almost no long-term job security?" The article suggests it's because we have a surplus. But if those people weren't so highly trained, would they then get those better jobs? Probably not; people don't discriminate against you because you're "highly trained".

Comment author: multifoliaterose 14 June 2010 09:18:52PM *  14 points [-]

I made a couple of comments here http://lesswrong.com/lw/1kr/that_other_kind_of_status/255f at Yvain's post titled "That Other Kind of Status." I messed up in writing my first comment in that it did not read as I had intended it to. Please disregard my first comment (I'm leaving it up to keep the responses in context).

I clarified in my second comment. My second comment seems to have gotten buried in the shuffle and so I thought I would post again here.

I've been a lurker in this community for three months and I've found that it's the smartest community that I've ever come across outside of parts of the mathematical community. I recognize a lot of the posters as similar to myself in many ways and so have some sense of having "arrived home."

At the same time, the degree of confidence that many posters have in their beliefs about the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they're sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.

I don't want to get involved in a debate about this point now (although I'd be happy to elaborate and give my thoughts in detail if there's interest).

What I want to do is to draw attention to the remarks that I made in my second comment at the link. From what I've read (several hundred assorted threads), I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe it because you're a part of these groups (*).

My drawing attention to this question is not out of malice toward any of you - as I indicated above, I feel more comfortable with Less Wrong than I do with almost any other large group that I've ever come across. I like you people and if some of you are suffering from the issue (*) I see this as understandable and am sympathetic - we're all only human.

But I am concerned that I haven't seen much evidence of serious reflection about the possibility of (*) on Less Wrong. The closest that I've seen is Yvain's post titled "Extreme Rationality: It's Not That Great". Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, (*) is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of (*).

Any thoughts? I'd also be interested in any relevant references.

[Edited in response to cupholder's comment, deleted extraneous words.]

Comment author: Vladimir_Nesov 14 June 2010 09:42:32PM *  0 points [-]

#: I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe it because you're a part of these groups.

Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, # is almost certainly at least occasionally present and I think that the community would benefit from a higher level of vigilance concerning the possibility #.

# refers to a pattern of incorrect (intuitive) reasoning. This pattern is potentially dangerous specifically because it leads to incorrect beliefs. But if you are saying that there is no significant distortion in beliefs (in particular about the importance of Less Wrong or SIAI's missions*), doesn't this imply that the role of this potential bias is unimportant? Either # isn't important, because it doesn't significantly distort beliefs, or it does significantly distort beliefs and is therefore important.


* Although I should note that I don't remember there being a visible position about the importance of Less Wrong.

Comment author: multifoliaterose 15 June 2010 12:36:56AM 1 point [-]

Either # isn't important, because it doesn't significantly distort beliefs, or it does significantly distort beliefs and therefore important.

There's no single point at which distortion of beliefs becomes sufficiently large to register as "significant" - it's a gradualist thing.

Although I should note that I don't remember there being a visible position about the importance of Less Wrong.

Probably I've unfairly conflated Less Wrong and SIAI. But at this post Kevin says "We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win." This seemed to me to carry the connotation of ascribing extremely high significance to Less Wrong and I (quite possibly incorrectly) interpreted the fact that nobody questioned the statement or asked for clarification as an indication that the rest of the community is in agreement with the idea that Less Wrong is extremely significant. I will respond to the post asking Kevin to clarify what he was getting at.

Comment author: Vladimir_Nesov 15 June 2010 12:48:50AM 0 points [-]

There's no single point at which distortion of beliefs becomes sufficiently large to register as "significant" - it's a gradualist thing

But to avoid turning this into a fallacy of gray, you still need to take notice of the extent of the effect. Neither working on a bias, nor ignoring the bias, are "defaults" - it necessarily depends on the perceived level of significance.

Comment author: multifoliaterose 15 June 2010 12:57:42AM *  0 points [-]

I think I agree with you. My suggestion is that Less Wrong and SIAI are, at the margin, not paying enough attention to the bias (*).

Comment author: JoshuaZ 15 June 2010 01:30:16AM *  2 points [-]

"We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win." This seemed to me to carry the connotation of ascribing extremely high significance to Less Wrong and I (quite possibly incorrectly) interpreted the fact that nobody questioned the statement or asked for clarification as an indication that the rest of the community is in agreement with the idea that Less Wrong is extremely significant.

Would you respond differently if someone else talked about every single person who becomes an amateur astronomer and searches for dangerous asteroids? There are lots of potential existential threats. Unfriendly or rogue AIs are certainly one of them. Nuclear war is another. And I think a lot of people would agree that most humans don't pay nearly enough attention to existential threats. So one aspect of improving rational thinking should be a net reduction in existential threats of all types, not just those associated with AI. Kevin's statement thus isn't intrinsically connected to SIAI at all (although I'd be inclined to argue that, even given all that, Kevin's statement is possibly a tad hyperbolic).

Comment author: multifoliaterose 15 June 2010 01:52:15AM *  4 points [-]

Would you respond differently if someone else talked about every single person who becomes an amateur astronomer and searches for dangerous asteroids?

The parallel is a good one. I would think it sort of crankish if somebody went around trying to get people to engage in amateur astronomy and search for dangerous asteroids on the grounds that any new amateur astronomer may be the one to save us from being killed by a dangerous asteroid. Just because an issue is potentially important doesn't mean that one should attempt to interest as many people as possible in it. There's an issue of opportunity cost.

Comment author: JoshuaZ 15 June 2010 01:55:05AM 4 points [-]

Sure there's an opportunity cost, but how large is that opportunity cost? What if someone has good data that suggests that the current number of asteroid seekers is orders of magnitude below the optimum?

Comment author: multifoliaterose 15 June 2010 02:11:12AM 2 points [-]

improving rational thinking should be a net reduction in existential threats of all types

Two points:

(1) It's not clear that improving rational thinking matters much. The factors limiting human ability to reduce existential risk seem to me to have more to do with politics, marketing and culture than with rationality proper. Devoting oneself to refining rationality may come at the cost of increasing one's ability to engage in politics and marketing and influence culture. I guess what I'm saying is that rationalists should win, and consciously aspiring toward rationality may interfere with one's ability to win.

(2) It's not clear how much it's possible to improve rational thinking. It may be that beyond a certain point, attempts to improve rational thinking are self-defeating (e.g. combating one bias may cause another bias).

Comment author: Vladimir_Nesov 15 June 2010 02:29:53AM 3 points [-]

It's not clear how much it's possible to improve rational thinking.

On the level of society, there seems to be tons of low-hanging fruit.

Comment author: multifoliaterose 15 June 2010 07:47:37AM 0 points [-]

What are some examples of this low-hanging fruit that you have in mind?

Comment author: magfrump 15 June 2010 09:31:37AM *  3 points [-]

Fact-checking in political discussions (i.e. senate politics), parenting and teaching methods, keeping a clean desk or being happy at work (see here), getting effective medical treatments rather than unproven treatments (sometimes this might require confronting your doctor), and maintaining budgets seem like decent examples (in no particular order, and of course these are at various heights but well within the reach of the general public).

Not sure if Vladimir would have the same types of things in mind.

Comment author: JoshuaZ 15 June 2010 02:34:39AM *  6 points [-]

Part of influencing culture should include the spreading of rationality. This is actually related to why I think that the rationality movement has more in common with organized skepticism than is generally acknowledged. Consider what would happen if the general public had enough epistemic rationality to recognize that homeopathy was complete nonsense. In the United States alone, people spend around three billion dollars a year on homeopathy (source). If that went away, and only 5% of that ended up getting spent on things that actually increase general utility, that means around $150 million is now going into useful things. And that's only a tiny example. The US spends about 30 to 40 billion dollars a year on alternative medicine, much of which is also a complete waste. We're not talking here about a Hansonian approach where much medicine is only of marginal use or only helps the very sick who are going to die soon. We're talking about "medicine" that does zero. And many of the people taking those alternatives will take those alternatives instead of taking medicine that will improve their lives. Improving the general population's rationality will be a net win for everyone. And if some tiny set of those freed resources goes to dealing with existential risk? Even better.
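(The arithmetic above is easy to check; both figures are JoshuaZ's rough estimates, not established numbers:)

```python
# JoshuaZ's back-of-the-envelope figures - both are rough estimates.
homeopathy_spending = 3e9   # ~$3 billion/year US spending on homeopathy
useful_fraction = 0.05      # suppose just 5% were redirected to useful ends

freed_up = homeopathy_spending * useful_fraction
print(f"${freed_up / 1e6:.0f} million")  # → $150 million
```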

Comment author: multifoliaterose 15 June 2010 07:46:27AM 1 point [-]

Part of influencing culture should include the spreading of rationality. This is actually related to why I think that the rationality movement has more in common with organized skepticism than is generally acknowledged. Consider what would happen if the general public had enough epistemic rationality to recognize that homeopathy was complete nonsense.

Okay, but now the rationality that you're talking about is "ordinary rationality" rather than "extreme rationality", and the general public rather than the Less Wrong community. What is the Less Wrong community doing to spread ordinary rationality within the general public?

The US spends about 30 to 40 billion dollars a year on alternative medicine much of which is also a complete waste [...] We're talking about "medicine" that does zero.

Are you sure that the placebo effects are never sufficiently useful to warrant the cost?

Comment author: JoshuaZ 15 June 2010 01:08:17PM *  3 points [-]

Okay, but now the rationality that you're talking about is "ordinary rationality" rather than "extreme rationality" and the general public rather than the Less Wrong community. What is Less Wrong community doing to spread ordinary rationality within the general public?

A lot of the aspects of "extreme rationality" are aspects of rationality in general (understanding the scientific method and the nature of evidence, trying to make experiments to test things, being aware of serious cognitive biases, etc.) Also, I suspect (and this may not be accurate) that a lot of the ideas of extreme rationality are ones which LWers will simply spread in casual conversation, not necessarily out of any deliberate attempt to spread them, but because they are really neat. For example, the representativeness heuristic is an amazing form of cognitive bias. Similarly, the 2-4-6 game is independently fun to play with people and helps them learn better.

Are you sure that the placebo effects are never sufficiently useful to warrant the cost?

I was careful to say that much, not all. Placebos can help. And some of it involves treatments that will eventually turn out to be helpful when they get studied. There are entire subindustries that aren't just useless but downright harmful (chelation therapy for autism would be an example). And large parts of the alternative medicine world involve claims that are emotionally damaging to patients (such as claims that cancer is a result of negative beliefs). And when one isn't talking about something like homeopathy, which is just water, but rather remedies that involve chemically active substances, the chance that actual complications will occur from them grows.

Deliberately giving placebos is of questionable ethical value, but if we think it is ok we can do it with cheap sugar pills delivered at a pharmacy. Cheaper, safer and better controlled. And people won't be getting the sugar pills as an alternative to treatment when treatment is possible.

Comment author: blogospheroid 15 June 2010 08:46:04AM *  3 points [-]

Anything we seek to do is a function of our capabilities and how important the activity is. Less Wrong is aimed mainly as a pointer towards increasing the capabilities of those who are interested in improving their rationality and Eliezer has mentioned in one of the sequences that there are many other aspects of the art that have to be developed. Epistemic rationality is one, luminosity as mentioned by Alicorn is another, so on and so forth.

Who knows - in the future, we may get many rational offshoots of lesswrong - lessshy, lessprocrastinating, etc.

Now, getting back to my statement. Function of capabilities and Importance.

Importance - Existential risk is the most important problem that is not getting sufficient attention. Capability - The singinst is a group of powerless, poor and introverted geeks who are doing the best that they think they can do to reduce existential risk. This may include things that improve their personal ability to affect the future positively. It may include charisma and marketing, also. For all the time that they have thought on the issue, the singinst people consider raising the sanity waterline as really important to the cause. Unless and until you have specific data that that avenue is not the best use of their time, it is a worthwhile cause to pursue.

Before reading the paragraph below, please answer this simple question - What is your marginal time unit, taking into account necessary leisure, being used for?

If your capability is great, then you can contribute much more than SIAI. All you need to see is whether on the margin, your contribution is making a greater difference to the activity or not. Even Singinst cannot absorb too much money without losing focus. You, as a smart person know that. So, stop contributing to Singinst when you think your marginal dollar gets better value when spent elsewhere.

It is not whether you believe that singinst is the best cause ever. Honestly assess and calculate where your marginal dollar can get better value. Are you better off being the millionth voice in the climate change debate or the hundredth voice in the existential risk discussion?

EDIT : Edited the capability para for clarity

Comment author: cupholder 14 June 2010 09:53:19PM *  4 points [-]

Comment on markup: I saw the first version of your comment, where you were using "(*)" as a textual marker, and I see you're now using "#" because the asterisks were messing with the markup. You should be able to get the "(*)" marker to work by putting a backslash before the asterisk (and I preferred the "(*)" indicator because that's more easily recognized as a footnote-style marker).

Feels weird to post an entire paragraph just to nitpick someone's markup, so here's an actual comment!

From what I've read (several hundred assorted threads), I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe so because you're a part of these groups

Let me try and rephrase this in a way that might be more testable/easier to think about. It sounds like the question here is what is causing the correlation between being a member of LW/SIAI and agreeing with LW/SIAI that future AI is one of the most important things to worry about. There are several possible causes:

  1. group membership causes group agreement (agreement with the group)
  2. group agreement causes group membership
  3. group membership and group agreement have a common cause (or, more generally, there's a network of causal factors that connect group membership with group agreement)
  4. a mix of the above

And we want to know whether #1 is strong enough that we're drifting towards a cult attractor or some other groupthink attractor.

I'm not instantly sure how to answer this, but I thought it might help to rephrase this more explicitly in terms of causal inference.
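To make cause 3 concrete, here's a toy simulation (all numbers invented) in which a single common cause - say, prior interest in AI risk - drives both joining the group and agreeing with it, so membership correlates with agreement even though neither directly causes the other:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Common-cause toy model: 'interest' raises both membership and agreement."""
    tallies = {True: [0, 0], False: [0, 0]}  # member? -> [agree count, total]
    for _ in range(n):
        interest = random.random() < 0.5                        # common cause
        member = random.random() < (0.3 if interest else 0.02)  # interest -> joins
        agrees = random.random() < (0.6 if interest else 0.1)   # interest -> agrees
        tallies[member][0] += agrees
        tallies[member][1] += 1
    return tuple(a / t for a, t in (tallies[True], tallies[False]))

members_rate, nonmembers_rate = simulate()
# Members agree far more often, despite no causal arrow between the two variables.
print(f"P(agree | member) = {members_rate:.2f}, "
      f"P(agree | non-member) = {nonmembers_rate:.2f}")
```

The point of the sketch: observing the correlation alone can't distinguish this structure from cause 1 (groupthink), which is why the question is hard to settle from inside the group.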

Comment author: multifoliaterose 15 June 2010 01:45:17AM *  3 points [-]

I'm not sure that your rephrasing accurately captures what I was trying to get at. In particular, strictly speaking (*) doesn't require that one be a part of a group, although being part of a group often plays a role in enabling (*).

Also, I'm not only interested in possible irrational causes for LW/SIAI members' belief that future AI is one of the most important things to worry about, but also possible irrational causes for each of:

(1) SIAI members' belief that donating to SIAI in particular is the most leveraged way to reduce existential risks. Note that it's possible to devote one's life to a project without believing that it's the best project for additional funding - see Givewell's blog posts on Room For More Funding:

For reference, PeerInfinity says

A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks.

(2) The belief that refining the art of human rationality is very important.

On (2), I basically agree with Yvain's post Extreme Rationality: It's Not That Great.

My own take is that the Less Wrong community has been very enriching in some of its members' lives on account of allowing them the opportunity to connect with people similar to themselves, and that their very positive feelings connected with their Less Wrong experience have led some of them to overrate the overall importance of Less Wrong's stated mission. I can write more about this if there's interest.

Comment author: h-H 15 June 2010 02:03:12AM 0 points [-]

I can write more about this if there's interest.

I'm interested. I've been thinking about this issue myself for a bit, and something like an 'internal review' would greatly help in bringing any potential biases the community holds to light.

Comment author: cupholder 15 June 2010 10:26:19AM 0 points [-]

Thank you for clarifying. I don't think I really have an opinion on this, but I figure it's good to have someone bring it up as a potential issue.

Comment author: JoshuaZ 14 June 2010 10:02:09PM *  3 points [-]

I'm not aware of anyone here who would claim that LW is one of the most important things in the world right now but I think a lot of people here would agree that improving human reasoning is important if we can have those improvements apply to lots of different people across many different fields.

There is a definite group of people here who think that SIAI is really important. If one thinks that a near Singularity is a likely event then this attitude makes some sense. It makes a lot of sense if you assign a high probability to a Singularity in the near future and also assign a high probability to the possibility that many Singularitarians either have no idea what they are doing or are dangerously wrong. I agree with you that the SIAI is not that important. In particular, I think that a Singularity is not a likely event for the foreseeable future, although I agree with the general consensus here that a large fraction of Singularity proponents are extremely wrong at multiple levels.

Keep in mind that for any organization or goal, the people you hear the most about it are the people who think that it is important. That's the same reason that a lot of the general public thinks that tokamak fusion reactors will be practical in the next fifty years: The physicists and engineers who think that are going to loudly push for funding. The ones who don't are going to generally just go and do something else. Thus, in any given setting it can be difficult to estimate the general communal attitude towards something since the strongest views will be the views that are most apparent.

Comment author: Vladimir_Nesov 14 June 2010 10:24:49PM *  13 points [-]

I don't think intelligence explosion is imminent either. But I believe it's certain to eventually happen, absent the end of civilization before that. And I believe that its outcome depends exclusively on the values of the agents driving it, hence we need to be ready, with a good understanding of preference theory at hand when the time comes. To get there, we need to start somewhere. And right now, almost nobody is doing anything in that direction, there is a very poor level of awareness of the problem, and there are poor intellectual standards for discussing it where surface awareness is present.

Either right now, or 50, or 100 years from now, a serious effort has to be undertaken, but the later it starts, the greater the risk of being too late to guide the transition in a preferable direction. The problem itself, as a mathematical and philosophical challenge, sounds like something that could easily take at least 100 years to reach clear understanding, and that is the deadline we should worry about, starting 10 years too late to finish in time 100 years from now.

Comment author: multifoliaterose 15 June 2010 12:10:43AM 0 points [-]

Vladimir, I agree with you that people should be thinking about intelligence explosion, that there's a very poor level of awareness of the problem, and that the intellectual standards for discourse about this problem in the general public are poor.

I have not been convinced but am open toward the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy "paper clip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.

SIAI seems to have focused on the existential risk of "unfriendly intelligence explosion" and it's not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.

Comment author: Vladimir_Nesov 15 June 2010 01:02:29AM 6 points [-]

the dichotomy "paper clip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.

Mainly Complexity of value. There is no way for human values to magically jump inside the AI, so if it's not specifically created to reflect them, it won't have them, and whatever the AI ends up with won't come close to human values, because human values are too complex to be resembled by any given structure that happens to be formed in the AI.

The more AI's preference diverges from ours, the more we lose, and this loss is on astronomic scale (even if preference diverges relatively little). The falloff with imperfect reflection of values might be so sharp that any ad-hoc solution turns the future worthless. Or maybe not, with certain classes of values that contain a component of sympathy that reflects values perfectly while giving them smaller weight in the overall game, but then we'd want to technically understand this "sympathy" to have any confidence in the outcome.

Comment author: multifoliaterose 15 June 2010 08:36:09AM *  1 point [-]

There is no way for human values to magically jump inside the AI, so if it's not specifically created to reflect them, it won't have them, and whatever the AI ends up with won't come close to human values, because human values are too complex to be resembled by any given structure that happens to be formed in the AI.

I'm not convinced by the claim that human values have high Kolmogorov complexity.

In particular, Eliezer's article Not for the Sake of Happiness (Alone) is totally at odds with my own beliefs. In my mind, it's incoherent to give anything other than subjective experiences ethical consideration. My own preference for real science over imagined science is entirely instrumental and not at all terminal.

Now, maybe Eliezer is confused about what his terminal values are, or maybe I'm confused about what my terminal values are, or maybe our terminal values are incompatible. In any case, it's not obvious that an AI should care about anything other than the subjective experiences of sentient beings.

Suppose that it's okay for an AI to exclude everything but subjective experience from ethical consideration. Is there then still reason to expect that human values have high Kolmogorov complexity?

I don't have a low complexity description to offer, but it seems to me that one can get a lot of mileage out of the principles "if an individual prefers state A to state B whenever he/she/it is in either of state A or state B, then state A is superior for that individual to state B" and "when faced with two alternatives, the moral alternative is the one that you would prefer if you were going to live through the lives of all sentient beings involved."

Of course "sentient being" is ill-defined and one would have to do a fair amount of work to frame the things that I just said in more formal terms, but anyway, it's not clear to me that there's a really serious problem here.

The more AI's preference diverges from ours, the more we lose, and this loss is on astronomic scale (even if preference diverges relatively little).

I totally agree that if the creation of a superhuman AI is going to precede all other existential threats then we should focus all of our resources on trying to get the superhuman AI to be as friendly as possible.

Comment author: multifoliaterose 15 June 2010 08:41:44AM *  0 points [-]

But I would qualify the last sentence of my reply by saying that the best way to get a superhuman AI to be as friendly as possible may not be to work on friendly AI or advocate for friendly AI. For example, it may be best to work toward geopolitical stability to minimize the chances of some country rashly creating a potentially unsafe AI out of a sense of desperation during wartime.

Comment author: khafra 15 June 2010 10:28:27AM 3 points [-]

Have you read the Heaven post by denisbider and the two follow-ups constituting a mini-wireheading series? There have been other posts on the difference between wanting and liking; but it illustrates a fairly strong problem with wireheading: Even if all we're worried about is "subjective states," many people won't want to be put in that subjective state, even knowing they'll like it. Forcing them into it or changing their value system so they do want it are ethically suboptimal solutions.

So, it seems to me that if anything other than maximized absolute wireheading for everyone is the AI's goal, it's gonna start to get complicated.

Comment author: Vladimir_Nesov 15 June 2010 11:08:40AM 0 points [-]

I totally agree that if the creation of a superhuman AI is going to precede all other existential threats then we should focus all of our resources on trying to get the superhuman AI to be as friendly as possible.

(?) I never said that.

Comment author: Vladimir_Nesov 15 June 2010 11:16:42AM 1 point [-]

Maybe you should start with what's linked from fake fake utility functions then (the page on the wiki wasn't organized quite as I expected).

Comment author: Vladimir_Nesov 15 June 2010 01:58:31AM *  5 points [-]

SIAI seems to have focused on the existential risk of "unfriendly intelligence explosion" and it's not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.

Not clear to me either that unfriendly AI is the greatest risk, in the sense of having the most probability of terminating the future (though "resource shortage" as existential risk sounds highly implausible - we are talking about extinction risks, not merely potential serious issues; and "world war" doesn't seem like something particularly relevant for the coming risks, dangerous technology doesn't need war to be deployed).

But Unfriendly AI seems to be the only unavoidable risk, something we'd need to tackle in any case if we get through the rest. On other problems we can luck out, not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).

Comment author: multifoliaterose 15 June 2010 04:44:00AM 3 points [-]

"resource shortage" as existential risk sounds highly implausible - we are talking about extinction risks, not merely potential serious issues;

I mean "existential risk" in a broad sense.

Suppose we run out of a source of, oh, say, electricity too fast to find a substitute. Then we would be forced to revert to a preindustrial society. This would be a permanent obstruction to technological progress - we would have no chance of creating a transhuman paradise or populating the galaxy with happy sentient machines and this would be an astronomical waste.

Similarly if we ran out of any number of things (say, one of the materials that's currently needed to build computers) before finding an adequate substitute.

"world war" doesn't seem like something particularly relevant for the coming risks, dangerous technology doesn't need war to be deployed.

My understanding is that a large scale nuclear war could seriously damage infrastructure. I could imagine this preventing technological development as well.

But Unfriendly AI seems to be the only unavoidable risk, something we'd need to tackle in any case if we get through the rest. On other problems we can luck out, not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).

On the other hand, it's equally true that if another existential risk hits us before we build friendly AI, all of our friendly AI directed efforts will be for naught.

Comment author: Strange7 15 June 2010 08:52:42AM 3 points [-]

Suppose we run out of a source of, oh, say, electricity too fast to find a substitute.

That's not how economics works. If one source of electricity becomes scarce, that means it's more expensive, so people will switch to cheaper alternatives. All the energy we use ultimately comes from either decaying isotopes (fission, geothermal) or the sun; neither of those will run out in the next thousand years.

Modern computer chips are doped silicon semiconductors. We're not going to run out of sand any time soon, either. Of course, purification is the hard part, but people have been thinking up clever ways to purify stuff since before they stopped calling it 'alchemistry.'

Comment author: cupholder 15 June 2010 11:06:08AM 2 points [-]

That's not how economics works. If one source of electricity becomes scarce, that means it's more expensive, so people will switch to cheaper alternatives.

I would have thought that those 'cheaper alternatives' could still be more expensive than the initial cost of the original source of electricity...? In which case losing that original source of electricity could still bite pretty hard (albeit maybe not to the extent of being an existential risk).

Comment author: khafra 15 June 2010 01:37:45PM 3 points [-]

The energy requirements for running modern civilization aren't just a scalar number--we need large amounts of highly concentrated energy, and an infrastructure for distributing it cheaply. The normal economics of substitution don't work for energy.

A "tradeoff" exists between using resources (including energy and material inputs of fossil origin) to feed the growth of material production (industry and agriculture) and to support the economy’s structural transformation.

As the substitution of renewable for nonrenewable (primarily fossil) energy continues, nature exerts resistance at some point; the scale limit begins to bind. Either economic growth or transition must halt. Both alternatives lead to severe disequilibrium. The first because increased pauperization and the apparent irreducibility of income differentials would endanger social peace. Also, since an economic order built on competition among private firms cannot exist without expansion, the free enterprise system would flounder.

The second alternative is equally untenable because the depletion of nonrenewable resources, proceeding along a rising marginal cost curve or, equivalently, along a descending Energy Return on Energy Invested (EROI) schedule, increases production costs across the entire spectrum of activities. Supply curves shift upwards.

It's entirely possible that failure to create a superintelligence before the average EROI drops too low for sustainment would render us unable to create one for long enough to render other existential risks inevitabilities.
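A crude way to see why a descending EROI schedule bites nonlinearly (illustrative numbers only): the fraction of gross energy output left over for the rest of the economy is 1 - 1/EROI, which barely moves between EROI 100 and 20 but collapses as EROI approaches 1.

```python
def net_energy_fraction(eroi: float) -> float:
    """Fraction of gross energy left for society after the energy
    sector reinvests 1/EROI of its output in extraction itself."""
    return 1.0 - 1.0 / eroi

for eroi in (100, 20, 5, 2, 1.25):
    print(f"EROI {eroi:>6}: {net_energy_fraction(eroi):.0%} of gross output is net")
```

So falling from EROI 100 to 20 costs society only a few percent of gross output, while falling from 2 to 1.25 costs it more than half of what remains - the "scale limit begins to bind" well before EROI reaches 1.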

Comment author: Vladimir_Nesov 15 June 2010 11:02:17AM 1 point [-]

On the other hand, it's equally true that if another existential risk hits us before we build friendly AI, all of our friendly AI directed efforts will be for naught.

Yes.

Comment author: Craig_Morgan 15 June 2010 03:35:31AM *  4 points [-]

I have not been convinced but am open toward the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy "paper clip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.

I believed similarly until I read Steve Omohundro's The Basic AI Drives. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.

Comment author: multifoliaterose 15 June 2010 06:35:54AM 1 point [-]

Thanks Craig, I'll check it out!

Comment author: Benquo 15 June 2010 01:04:15AM -1 points [-]

"But I believe it's certain to eventually happen, absent the end of civilization before that."

And I will live 1000 years, provided I don't die first.

Comment author: Vladimir_Nesov 15 June 2010 01:26:46AM *  2 points [-]

But I believe it's certain to eventually happen, absent the end of civilization before that.

And I will live 1000 years, provided I don't die first.

(As opposed to gradual progress, of course. I could make a case with your analogy facing an unexpected distinction also, as in what happens if you got overrun by a Friendly intelligence explosion, and persons don't prove to be a valuable pattern, but death doesn't adequately describe the transition either, as value doesn't get lost.)

Comment author: multifoliaterose 15 June 2010 12:08:21AM *  1 point [-]

Keep in mind that for any organization or goal, the people you hear the most about it are the people who think that it is important.

---this is a good point, thanks.

Comment author: cousin_it 14 June 2010 09:59:06PM *  4 points [-]

Any LessWrongers understand basic economics? This could be another great topic set for all of us. Let's kick things off with a simple question:

I'm renting an apartment for X dollars a month. My parents have a spare apartment that they rent out to someone else for Y dollars a month. If I moved into that apartment instead, would that help or hurt the country's economy as a whole? Consider the cases X>Y, X<Y, X=Y.

ETA: It's fascinating how tricky this question turned out to be. Maybe someone knowledgeable in economics could offer a simpler question that does have a definite answer?

Comment author: MichaelBishop 14 June 2010 10:15:03PM *  0 points [-]

There is other information you want to consider. Tax rates, for example, and whether the economy is in the sort of downturn that would benefit from stimulus.

Regardless, the effects on aggregate supply and demand will be tiny. How much you and your parents value these alternatives is what matters most.

Comment author: cousin_it 14 June 2010 10:24:01PM *  0 points [-]

I'm not asking about what I should decide, I'm asking about the sign of those tiny effects on the country as a whole. Is it actually a difficult question in disguise? Why? I know next to nothing about economics, but the question sounds to me like it should be really easy for anyone qualified.

Comment author: Houshalter 15 June 2010 02:46:25AM 2 points [-]

I think the best way to measure it meaningfully would be to consider the same scenario with millions of people doing it instead of just one, but even then it doesn't look like it makes much of a difference.

Comment author: MichaelBishop 15 June 2010 04:33:02PM 1 point [-]

This is a good point. What happens in this individual case would be dominated by random facts about the individuals directly involved. If you imagine the same situation repeated many times (100 should be plenty), the randomness cancels out.

Comment author: Vladimir_M 14 June 2010 10:16:45PM 2 points [-]

would that help or hurt the country's economy as a whole?

What exact metric do you have in mind?

Comment author: cousin_it 14 June 2010 10:22:18PM 0 points [-]

I'd be about equally happy if offered a solution in terms of GDP or some more abstract metric like "sum of happiness".

Comment author: Vladimir_M 14 June 2010 11:06:11PM *  3 points [-]

Trouble is, all these macroeconomic metrics that can be precisely defined have only a vague and tenuous link to the actual level of prosperity and quality of life, which is impossible to quantify precisely in a satisfactory manner. Moreover, predicting the future consequences of economic events reliably is impossible, despite all the endless reams of macroeconomic literature presenting various models that attempt to do so.

Thus, if you want to ask how your choice will affect the nominal GDP for the current year or some such measure, that's a well-defined question (though not necessarily easy to answer). However, if you want to interpret the result as "helping" or "hurting" the economy, it requires a much more difficult, controversial, and often inevitably subjective judgment.

Comment author: MichaelBishop 15 June 2010 03:30:39AM 1 point [-]

Of course, GDP only measures goods and services sold, not "household production."

Comment author: Vladimir_M 15 June 2010 06:28:26AM *  2 points [-]

That's only one of the main problems with GDP. Here's a fairly decent critique of the concept written from a libertarian perspective (but the main points hold regardless of whether you agree with the author's ideological assumptions):
http://www.econlib.org/library/Columns/y2010/HendersonGDP.html

In addition to these criticisms, I would point out the impossibility of defining meaningful price indexes that would be necessary for sensible comparisons of GDP across countries, and even across different time periods in the same country. The way these numbers are determined now is a mixture of arbitrariness and politicized number-cooking masquerading as science.

Comment author: SilasBarta 15 June 2010 03:39:58PM *  0 points [-]

Thanks for that link. I hadn't realized Henderson had written that, let alone just a few months ago! Its recency means he could critique the stimulus arguments of the last two years, making basically the same arguments I do.

My only complaint is that he noted that leaving off non-market exchanges (e.g. a maid becoming a wife) causes GDP to be understated, when he should have discussed its impact on the rate of change in GDP, which is more important.

Comment author: MichaelBishop 15 June 2010 04:09:04PM 1 point [-]

It is certainly true that some people make too much of GDP, but those numbers can be pretty helpful for answering certain research questions. Let's not throw the baby out with the bath water.

Comment author: SilasBarta 15 June 2010 04:42:35PM 0 points [-]

If we're going to do metaphors, then yes, you're right, but we also have to make sure we're not drinking the bathwater. The bathwater is for bathing, not for drinking. GDP should be used as a very rough cross-country comparison, not as a measure of how the economy's general ability to satisfy wants changes over short intervals.

Interestingly enough, I was arguing roughly your position a few years ago. But now, seeing how economists deliberately prioritize GDP over the fundamentals it's supposed to measure, I can't even justify defending it for purposes other than, "The US economy is more productive than Uganda's."

Comment author: MichaelBishop 15 June 2010 03:30:32AM 4 points [-]

I think that a majority of economists agree that in many downturns, it helps the economy if people, on the margin, spend a little more. This justifies Keynesian stimulus. Therefore, the economy would be helped if your choice increases the total amount of money changing hands, presumably if you rent the apartment for $X when X>Y. My impression is that in good economic times, marginal spending is not considered to improve economic welfare.

Comment author: James_K 15 June 2010 05:53:39AM 7 points [-]

An interesting question. Here are some initial thoughts:

In terms of broad economic aggregates, it won't make any difference. If you rent the apartment from your parents at the market rate, GDP is exactly unaffected: people are paying the same money to different people. If you rent it for less than market rate, GDP is lower, but this reflects deficiencies in measured GDP, since GDP uses market prices as a proxy for the value of a transaction (this is fine for the most part, but doing your child a favour is an exception conventional methodology can't deal with). So from a macroeconomic perspective I'd say it's a wash either way.
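The accounting point above can be sketched with made-up numbers (the rents and apartment names here are hypothetical; this is an illustration of how GDP counts rents, not a real national-accounts model):

```python
# Toy illustration: GDP's rental component only sums rents actually paid,
# regardless of who pays whom.
market_rents = {"your_old_apartment": 1000, "parents_apartment": 800}

gdp_before = sum(market_rents.values())  # stranger pays 800, you pay 1000

# You swap in at the market rate; every apartment still rents at its
# market price, so the total is identical.
gdp_after_swap = sum(market_rents.values())
assert gdp_after_swap == gdp_before

# But if your parents give you a family discount (say, half price), measured
# GDP falls even though the same housing services are being consumed --
# the favour is real value that market-price accounting can't see.
discounted = dict(market_rents, parents_apartment=400)
gdp_with_discount = sum(discounted.values())
assert gdp_with_discount < gdp_before
```

The second case is the "deficiency in measured GDP" mentioned above: the drop is a measurement artifact, not lost wealth.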

Microeconomically, there could be some efficiencies in you renting from your parents. If they trust you more than a random stranger (and let's hope they do) they will spend less time monitoring your behaviour (property inspections and the like) than they would a random stranger, but the value of your familial relationship should constrain you from taking advantage of that lax monitoring in the way a stranger would. This means that your parents save time (which makes their life easier) and no one should be worse off (I assume the current tenant of their apartment would find adequate accommodation elsewhere).

However, one note of caution. If you were to get into a dispute of some sort with your parents over the tenancy, this could damage your relationship with your parents. If you value this relationship (and I assume you do), this is a potential downside that doesn't exist under the status quo. Also, some people might see renting from your parents as little different to living with your parents which (depending on your age) may cost you status in your day-to-day life (even if you pay a market rate). If you value status, you should be aware of this drawback.

So in summary, the most efficient outcome depends on three variables: 1) How much time and effort do your parents spend monitoring their tenant at the moment? 2) How likely is it that your relationship with them could be strained as a result of you living there? 3) How many friends / acquaintances / colleagues do you have that would think less of you for renting from your parents (and how much do you care)?

I hope that helps.

Comment author: AlephNeil 15 June 2010 10:35:29AM *  0 points [-]

My (admittedly simple-minded) answer would be "other things being equal it has no effect at all".

Each day you and your parents do whatever it is you do, creating a given amount of wealth (albeit perhaps in such a way that it's impossible to say exactly how much of this wealth you personally created, rather than your colleagues, or the equipment you use). Then a bunch of wealth gets redistributed in a funny way (through wages and rents being paid). But changing the way that wealth is redistributed doesn't affect the 'total rate of wealth-generation' which is what GDP is trying (sometimes unsuccessfully, as James_K says) to measure. In just the same way, getting a pay rise doesn't in itself help the economy (but it may have been caused by you doing more valuable work, which does help).

Comment author: cousin_it 15 June 2010 11:25:53AM *  0 points [-]

I'm pretty sure this is wrong. If I have a spare apartment and start renting it out, I'm creating wealth, not just redistributing it. So changing the pattern of who rents from whom should influence the total amount of wealth created.

Comment author: AlephNeil 15 June 2010 11:43:57AM 0 points [-]

But we're not talking about someone renting a previously empty apartment, we're talking about a change of occupier. The 'wealth' of the apartment is merely being 'consumed' by someone else.

Suppose without loss of generality (?) that the person who was previously in your parents' apartment is now in your old apartment. Then we can describe the change as follows:

  1. Two people have swapped apartments.
  2. They may be paying different rents from before.

Neither 1 nor 2 in itself changes the size of the economy. (Although, if a rent goes up because an apartment is more desirable, then that changes the size of the economy.)

Comment author: cousin_it 15 June 2010 11:51:57AM *  0 points [-]

Apartments don't have a single intrinsic "desirability" value. Different people assign different values to the same apartment. If you think about it, the fact that different people can value a thing differently is the only reason any deals happen at all. The sum you agree to pay is a proxy for the value you place on the thing.

No, you can't assume without loss of generality that the person who was previously in my parents' apartment will be willing or able to move to mine. It depends on the relationship between X and Y.

Comment author: AlephNeil 15 June 2010 12:07:21PM 0 points [-]

No, you can't assume without loss of generality that the person who was previously in my parents' apartment will be willing or able to move to mine. It depends on the relationship between X and Y.

But the set of living spaces is the same as before. Can't we assume for simplicity that, even if it's not as simple as two people swapping places with each other, what we have is a 'permutation' such that all previously occupied houses and apartments remain occupied?

Then once again we can factor the change into (1) a permutation and (2) a change of rent, and ask whether either of them changes the wealth of the nation. I'm pretty sure that (2) in itself has no effect - it's just a 'redistribution' between landlords and their tenants. Whether (1) has an effect depends on whether or not we're including the fact that different people may make different assessments of desirability (i.e. whether different people have different preferences about the kind of apartment they'd like to live in.)

Of course you're quite right that different people do have different preferences - I was merely ignoring this for simplicity - but in any case the statement of the problem says nothing explicit about your or anyone else's preferences, it only talks about X and Y. Are your apartment-preferences supposed to change depending on the values of X and Y?

Comment author: cousin_it 15 June 2010 12:19:46PM *  0 points [-]

You're right that (2) has no effect, but (1) probably does have an effect. I thought we could somehow guess the effect of (1) by looking at X and Y, but now I see it's not easy.
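Why the permutation (1) can matter once preferences differ can be sketched with a toy example (the valuations below are entirely hypothetical): rents are pure transfers between tenant and landlord, so they cancel out of the total, but reassigning people to the apartments they value more creates value.

```python
# Two people, two apartments, with different (made-up) monthly valuations.
valuations = {
    ("you", "parents_apt"): 1200, ("you", "other_apt"): 900,
    ("tenant", "parents_apt"): 1000, ("tenant", "other_apt"): 1000,
}

def total_value(assignment):
    """Sum of each occupant's valuation of the apartment they live in.
    Rents are transfers, so they drop out of this total."""
    return sum(valuations[(person, apt)] for person, apt in assignment.items())

before = total_value({"you": "other_apt", "tenant": "parents_apt"})  # 900 + 1000
after = total_value({"you": "parents_apt", "tenant": "other_apt"})   # 1200 + 1000

# The swap creates 200/month of value; the rents X and Y only decide
# how that value is split, which is point (2) above.
assert after > before
```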

Comment author: AlephNeil 15 June 2010 11:51:32AM *  0 points [-]

Though I should clarify that when I talk about "the size of the economy" I'm talking about something intangible - the 'wealth of the nation', or more precisely the 'nation's rate of wealth-creation' - rather than simply GDP. Perhaps GDP will reflect the changing rents, perhaps not, depending on which type of GDP we're talking about (I seem to recall that there are several, including a 'spending' measure and an 'income' measure.)

Comment author: AlephNeil 15 June 2010 11:29:04AM 3 points [-]

Here's another question to chew on:

Suppose you're in a country that grows and consumes lots of cabbages, and all the cabbages consumed are home-grown. Suppose that one year people suddenly, for no apparent reason, decide that they like cabbages a lot more than they used to, and the price doubles. But at least to begin with, rates of production remain the same throughout the economy. Does this help or harm the economy, or have no effect?

In one sense it 'obviously' has no effect, because the same quantities of all goods and services are produced 'before' and 'afterwards'. So whether we're evaluating them according to the 'earlier' or the 'later' utility function, the total value of what we're producing hasn't changed. (Presumably the prices of non-cabbages would decline to some extent, so it's at least consistent that GDP wouldn't change, though I still can't see anything resembling a mathematical proof that it wouldn't.)
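The "obviously no effect" intuition corresponds to the distinction between nominal and real GDP, which a toy two-good economy makes concrete (quantities and prices below are made up):

```python
# Production is unchanged; only the price of cabbages doubles.
qty = {"cabbages": 100, "other": 50}
p_before = {"cabbages": 1.0, "other": 4.0}
p_after = {"cabbages": 2.0, "other": 4.0}

def gdp(quantities, prices):
    return sum(quantities[g] * prices[g] for g in quantities)

nominal_before = gdp(qty, p_before)  # 100*1 + 50*4 = 300
nominal_after = gdp(qty, p_after)    # 100*2 + 50*4 = 400

# "Real" GDP values the same quantities at a fixed base year's prices,
# so it is unchanged -- matching the intuition that nothing extra was
# produced. Nominal GDP rises unless non-cabbage prices fall to offset.
real_after = gdp(qty, p_before)
assert real_after == nominal_before
assert nominal_after > nominal_before
```

Whether measured GDP changes therefore depends on which deflator is used, which is part of the price-index arbitrariness complained about earlier in the thread.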

Comment author: SilasBarta 15 June 2010 03:02:22PM *  12 points [-]

If I moved into that apartment instead, would that help or hurt the country's economy as a whole?

Good question, not because it's hard to answer, but because of how pervasive the wrong answer is, and the implications for policy for economists getting it wrong.

  • If your parents prefer you being in their apartment to the forgone income, they benefit; otherwise they don't.

  • If you prefer being in their apartment to the alternative rental opportunities, you benefit; otherwise, you don't.

  • If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.

ANYTHING beyond that -- anything whatsoever -- is Goodhart-laden economist bullsh**. Things like GDP and employment and CPI were picked long ago as good correlates of general economic health. Today, they are taken to define economic health, irrespective of how well people's wants are being satisfied, which is supposed to be what we mean by a "good economy".

Today, economists equate growing GDP -- irrespective of measuring artifacts that make it deviate from what we want it to measure -- with a good economy. If the economy isn't doing well enough, well, we need more "aggregate demand" -- you see, people aren't buying enough things, which must be bad.

Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure. No, instead, we have come to define success by the number of money-based market exchanges, rather than whether people are getting the combination of work, consumption, and leisure (all broadly defined) that they want.

This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.

Now, it's true there are prisoner's dilemma-type situations where people have to cooperate and endure some pain to be better off in the aggregate. But the corresponding benefit that economists expect from this collective sacrifice is ... um ... more pointless work that doesn't satisfy real demand .. but hey, it keeps up "aggregate demand", so it must be what a sluggish economy needs.

Are you starting to see how skewed the standard paradigm is? If people found a more efficient, mutualist way to care for their children rather than make cash payments to day care, this would be regarded as a GDP contraction -- despite most people being made better off and efficiency improving. If people work longer hours than they'd like, to produce stuff no one wants, well, that shows up as more GDP, and it's therefore "good".

How the **** did we get into this mindset?

Sorry, [/another rant].

Comment author: Vladimir_Nesov 15 June 2010 03:09:47PM *  1 point [-]

There could be indirect consequences of the decision in question, resulting from counter-intuitive effects on the existing economic process and on the lives of other people not directly involved in the decision. The relevant question is about estimating those indirect consequences. However imprecise economic indicators are, you can't just replace them with a presumption of a total lack of consequences, and only consider the obvious.

Comment author: SilasBarta 15 June 2010 03:12:33PM *  0 points [-]

I didn't ignore the indirect consequences:

If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.

To the extent that the indirect effects go beyond this, standard mainstream metrics in economics don't measure them, because they are essentially independent of how well off others have become as a result of these rental decisions.

Comment author: Vladimir_Nesov 15 June 2010 03:36:07PM 0 points [-]

To the extent that the indirect effects go beyond this, standard mainstream metrics in economics don't measure them, because they are essentially independent of how well off others have become as a result of these rental decisions.

Well, maybe there are no such consequences (which is not obvious to me), but that's what I meant.

Comment author: thomblake 15 June 2010 04:07:03PM 0 points [-]

Nice to see this kind of thinking from a capitalistish.

Comment author: MichaelBishop 15 June 2010 04:23:47PM *  1 point [-]

Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure.

  1. Really? Because I hear economists talk about the value of leisure time quite frequently.
  2. IMO, most economists don't fetishize GDP the way you suggest they do.
  3. You seem to be denying the benefits of Keynesian stimulus in a downturn. That position is not indefensible, but you're not defending it, you're just claiming it.
Comment author: SilasBarta 15 June 2010 04:42:39PM *  0 points [-]

Really? Because I hear economists talk about the value of leisure time quite frequently. ...IMO, most economists don't fetishize GDP the way you suggest they do.

Both of these are contradicted by the fact that no economist, in discussion of the recent economic troubles, has suggested that letting the economy adjust to a lower level of output/work would be an acceptable solution.

Yes, they recognize that leisure is good in the abstract, but when it comes to proposals for "what to do" about the downturn, the implicit, unquestioned assumption is that we must must must get GDP to keep going up, no matter how many make-work projects or useless degrees that involves.

You seem to be denying the benefits of Keynesian stimulus in a downturn. That position is not indefensible, but you're not defending it, you're just claiming it.

I most certainly am defending it -- by showing the errors in the classification of what counts as a benefit. If the argument is that stimulus will get GDP numbers back up, then yes, I didn't provide counterarguments. But my point was that the effect of the stimulus is to worsen that which we really mean by a "good economy".

The stimulus gets people to blow resources on (mostly) useless things. Whether or not it's effective at getting these numbers where they need to be, the numbers aren't measuring what we really want to know about. Success would mean the useless, make-work jobs eventually lead to jobs satisfying real demand, yet no metric that they focus on captures this.

Comment author: Kevin 14 June 2010 11:50:43PM 1 point [-]

Feds under pressure to open US skies to drones

http://news.yahoo.com/s/ap/20100614/ap_on_bi_ge/us_drones_over_america

Comment author: RobinZ 15 June 2010 01:37:56AM 3 points [-]
Comment author: Emile 15 June 2010 12:44:41PM *  8 points [-]

I don't know the ins and the outs of the Summers case, but that article has a smell of straw man. Especially this (emphasis mine):

You see, there's a shifty little game that proponents of gender discrimination are playing. They argue that high SAT scores are indicative of success in science, and then they say that males tend to have higher math SAT scores, and therefore it is OK to encourage more men in the higher ranks of science careers…but they never get around to saying what their SAT scores were. Larry Summers could smugly lecture to a bunch of accomplished women about how men and women were different and having testicles helps you do science, but his message really was "I have an intellectual edge over you because some men are incredibly smart, and I am a man", which is a logical fallacy.

From what I understand (and a quick check on Wikipedia confirms this), what got Larry Summers in trouble wasn't that he said we should use gender as a proxy for intelligence, but merely his suggestion that gender differences in ability could explain the observed under-representation of women in science.

The whole article is attacking a position that, as far as I know, nobody holds in the West any more: that women should be discriminated against because they are less good at science.

Well, he also seems to be attacking a second group that does exist (those that say that there are fewer women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.

Comment author: h-H 15 June 2010 02:11:17AM *  5 points [-]

yay! music composition AI

we've had them for a while though, but who knows, we might have our first narrowly focused AI band pretty soon.

good business opportunity there... maybe this is how the SIAI will guarantee unlimited funding in the future? :)

Comment author: JoshuaZ 15 June 2010 04:12:58AM 5 points [-]

I'm thinking of writing a top-post on the difficulties of estimating P(B) in real-world applications of Bayes' Theorem. Would people be interested in such a post?
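One way to frame the difficulty: in applications, P(B) usually isn't observed directly but assembled via the law of total probability from conditional estimates, so errors in those estimates propagate straight into the posterior. A minimal sketch with hypothetical medical-test numbers:

```python
# Bayes' theorem with P(B) built from the law of total probability:
# P(B) = P(B|A)P(A) + P(B|~A)P(~A). All numbers here are made up.
p_a = 0.01              # prior: P(disease)
p_b_given_a = 0.95      # sensitivity: P(positive test | disease)
p_b_given_not_a = 0.05  # false positive rate: P(positive | no disease)

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
posterior = p_b_given_a * p_a / p_b

# A modest error in the false-positive rate swings the answer a lot,
# because that term dominates P(B) when the prior is small.
p_b_misestimated = p_b_given_a * p_a + 0.10 * (1 - p_a)
posterior_misestimated = p_b_given_a * p_a / p_b_misestimated
assert posterior > posterior_misestimated
```

With these numbers the posterior is about 0.16, but doubling the false-positive estimate to 0.10 roughly halves it, which is the sort of sensitivity a post on estimating P(B) could explore.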

Comment author: NancyLebovitz 15 June 2010 12:25:02PM *  17 points [-]

How to Keep Someone with You Forever.

This is a description of "sick systems"-- jobs and relationships which destructively take people's lives over.

I'm posting it here partly because it may be of use-- systems like that are fairly common and can take a while to recognize, and partly because it leads to some general questions.

One of the marks of a sick system is that the people running it convince the victims that they (the victims) are both indispensable and incompetent-- and it can take a very long time to recognize the contradiction. It's plausible that the crises, lack of sleep, and frequent interruptions are enough to make people not think clearly about what's being done to them, but is there any more to it than that?

One of the commenters to the essay suggests that people are vulnerable to sick systems because raising babies and small children is a lot like being in a sick system. This is somewhat plausible, but I suspect that a large part of the stress is induced by modern methods of raising small children-- the parents are unlikely to have a substantial network of helpers, they aren't sharing a bed with the baby (leading to more serious sleep deprivation), and there's a belief that raising children is almost impossible to do well enough.

Also, it's interesting that people keep spontaneously inventing sick systems. It isn't as though there's a manual. I'm guessing that one of the drivers is feeling uncomfortable at seeing the victims feeling good and/or capable of independent choice, so that there are short-run rewards for the victimizers for piling the stress on.

On the other hand, there's a commenter who reports being treated better by her family after she disconnected from the craziness.

Comment author: cousin_it 15 June 2010 06:03:55PM *  2 points [-]

Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it has to solve a certain math/physics problem, deposit the correct solution in a specified place and stop. If a solution can't be found, stop after a specified number of cycles. Don't talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?

ETA: absent other suggestions, I'm going to call such devices "AI bombs".