In response to A Rational Education
Comment author: mindviews 24 June 2010 01:45:41AM 2 points [-]

I got an amazing amount of use out of Order of Magnitude Physics. It gets you in the habit of estimating everything in terms of numbers, and I've found that relentlessly calculating estimates greatly reduces the number of biased intuitive judgments I make. A good class will include a lot of interaction and thinking out loud about the assumptions your estimates rest on. As an alternative (or in addition), a high-level engineering design course can provide many of the same experiences within the context of a particular domain. (Aerospace/architecture/transportation/economic systems can all provide good design problems for this type of thinking - oddly, I haven't yet seen a computer science design problem that works as well.)
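To illustrate the habit (my own toy sketch, not course material - every input number is a guess), here's the classic "how many piano tuners?" Fermi problem with all the assumptions made explicit:

    # Back-of-the-envelope Fermi estimate, Order of Magnitude Physics style.
    # Every input is a rough guess; the goal is an answer good to a factor of 10.
    population = 3e6                 # people in a large city (guess)
    people_per_household = 2.5       # guess
    pianos_per_household = 0.1       # 1 in 10 households owns a piano (guess)
    tunings_per_year = 1             # each piano tuned about once a year (guess)
    tunings_per_tuner = 4 * 5 * 50   # 4/day, 5 days/week, 50 weeks/year (guess)

    pianos = population / people_per_household * pianos_per_household
    tuners = pianos * tunings_per_year / tunings_per_tuner
    print(f"roughly {tuners:.0f} piano tuners")   # ~120 for these inputs

The exact answer doesn't matter; the value is in being forced to write your assumptions down where they can be argued with.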

Also, I'll second recommendations for just about any psychology course. And anywhere you see a course cross-listed between psychology and economics you'll have a good chance of learning about human bias.

Comment author: wedrifid 21 June 2010 09:41:28AM *  7 points [-]

Close, but the tricky part is that the universe can expand at greater than the speed of light. Nothing that carries causal influence (photons, for example) can travel faster than c, but the fabric of spacetime itself can expand faster than the speed of light. Models of the first 10^-30 seconds highlight this to an extreme degree. Even now, some of the galaxies visible to us are receding from us at more than a light year per year, which means the light they are currently emitting (if any) will never reach us.
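To put a rough scale on that (a sketch with round present-day numbers - the exact calculation of which light ever reaches us in an accelerating universe is more subtle): Hubble's law gives a recession velocity v = H_0 d, so recession exceeds c beyond the Hubble radius:

    d_H = \frac{c}{H_0} \approx \frac{3 \times 10^5 \ \mathrm{km/s}}{70 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 4300 \ \mathrm{Mpc} \approx 14 \ \text{billion light years}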

To launch an AI out of our future light cone, you must send it past the point beyond which the expansion of the universe carries things away from us faster than c. Such a point sits on the edge of our future light cone, and beyond it the AI can never touch us.

Comment author: mindviews 22 June 2010 03:46:33AM 0 points [-]

So you're positing a technique that takes advantage of the metric expansion of space to permanently get rid of an AI. Thermite - very practical. Launching the little AI box across the universe at near light-speed for a few billion years until the expansion takes it beyond our horizon - not practical.

To bring this thread back onto the LW Highway...

It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement - probably out of a desire to preserve the appearance of consistency, to avoid being wrong in public, etc. (Do we have a list of these failure modes somewhere? I couldn't find examples in the LW wiki.) A better response to "I hope that was a joke..." than "You are mistaken" would have been "Yeah, it was hyperbole for effect." or something along those lines.

A better initial comment from me would have been to phrase it as an actual question, because I thought you might have had a genuine misunderstanding about light cones and world lines. Instead it came off as hostile, which wasn't what I intended.

Comment author: wedrifid 21 June 2010 08:12:37AM 1 point [-]

You are mistaken.

Comment author: mindviews 21 June 2010 08:56:55AM 1 point [-]

I'm pretty sure I'm not mistaken. At the risk of driving this sidetrack off a cliff...

Once an object (in this case, a potentially dangerous AI) is in our past light cone, the only way for its world line to stay outside our future light cone forever (besides terminating it with thermite, as mentioned above) is for it to travel at the speed of light or faster. That was the physics nitpick I was making. In short: destroy it, because you cannot send it far enough away fast enough to keep it from coming back and eating us.

Comment author: SilasBarta 20 June 2010 07:36:30PM 0 points [-]

Gravitomagnetism -- what's up with that?

It's a formulation of how gravity works, using equations that have the same form as Maxwell's equations. And frankly, it's pretty neat: writing the laws of gravity this way recovers Newtonian gravity while approximately accounting for general relativity (how approximate it is, and what it leaves out, I'm not sure).

When I first found out about this, it blew my mind that gravity acts just like electromagnetism, but with different source properties. We all know about the parallel between Coulomb's law and Newton's law of gravitation, but the gravitoelectromagnetism (GEM) equations show that the parallel goes a lot deeper.
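For reference, the GEM field equations in one common convention look like this (treat this as a sketch - sign conventions and factors of 2 or 4 vary between authors, partly because gravity is a spin-2 field):

    \nabla \cdot \mathbf{E}_g = -4\pi G \rho
    \nabla \cdot \mathbf{B}_g = 0
    \nabla \times \mathbf{E}_g = -\frac{\partial \mathbf{B}_g}{\partial t}
    \nabla \times \mathbf{B}_g = \frac{1}{c^2} \left( -4\pi G \, \mathbf{J} + \frac{\partial \mathbf{E}_g}{\partial t} \right)

The minus signs relative to Maxwell's equations are what encode the fact that like "charges" (masses) attract rather than repel.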

Besides being a good way to ease into an intuitive understanding of the Einstein field equations, to me it suggests that gravity and EM both obey some more general law. Does anyone know if work has been done on unifying gravity and EM this way? All I hear is that the strong, weak, and EM forces unify relatively easily while gravity is the stumbling block, so you'd think this would be something theorists would want to explore more.

Yet when you go to investigate "gravitational induction" to find out how the gravitational parallel to magnetic fields works, you find this gravitomagnetic field being called the torsion field, with its existence (at least approximately) implied by general relativity - but then the Wikipedia page says that the torsion field is a pseudoscientific concept. Hm...

So, does anyone have an understanding of the GEM analogy who can make sense of this? Does it suggest a way to unify gravity and EM? Or how to create a coil of mass flow that can "gravitize" a region (as a coil of current magnetizes a metal bar)?

Comment author: mindviews 21 June 2010 08:16:06AM 5 points [-]

it's basically saying that gravity and EM are both obeying some more general law

No, what's happening is that under certain approximations (weak fields, velocities well below c) the two are described by similar math. The trick is knowing when the approximations break down and what the math actually translates to physically.

Does it suggest a way to unify gravity and EM?

No.

Keep in mind that EM has two signs of charge while gravity has only one. Also, like electric charges repel, while like gravitational "charges" (masses) attract. This messes with your expectations about the sign of an interaction when you move between the two, which means your intuitive understanding of EM doesn't map well onto gravity.
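To make the sign flip concrete, compare the two inverse-square force laws:

    \mathbf{F}_{\mathrm{EM}} = \frac{1}{4\pi\varepsilon_0} \frac{q_1 q_2}{r^2} \, \hat{\mathbf{r}}
    \qquad
    \mathbf{F}_{\mathrm{grav}} = -G \frac{m_1 m_2}{r^2} \, \hat{\mathbf{r}}

Like charges give q_1 q_2 > 0 and hence repulsion, while masses are always positive, so gravity always attracts. Intuitions carried over from EM silently flip signs like this one.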

Comment author: wedrifid 20 June 2010 05:49:46PM 1 point [-]

Another case I'd like to be considered more is "if we can't/shouldn't control the AIs, what can we do to still have influence over them?"

Thermite. Destroying or preventing them is the ONLY option in that situation. (Well, I suppose you could launch them out of our future light cone.)

Comment author: mindviews 21 June 2010 07:13:58AM 3 points [-]

Well, I suppose you could launch them out of our future light cone.

I hope that was a joke because that doesn't square with our current understanding of how physics works...

Comment author: PhilGoetz 20 June 2010 04:41:42AM 5 points [-]

An AI that "valued" keeping the world looking roughly the way it does now, that was specifically instructed never to seize control of more than X number of each of several thousand different kinds of resources, and whose principal intended activity was to search for, hunt down, and destroy AIs that seemed to be growing too powerful too quickly might be an acceptable risk.

This would not be acceptable to me, since I hope to be one of those AIs.

The morals of FAI theory don't mesh well at all with the morals of transhumanism. This is surprising, since the people talking about FAI are well aware of transhumanist ideas. It's as if people compartmentalize them and think about only one or the other at a time.

Comment author: mindviews 21 June 2010 06:58:03AM 1 point [-]

The morals of FAI theory don't mesh well at all with the morals of transhumanism.

It's not clear to me that a "transhuman" AI would have the same properties as a "synthetic" AI. I'm assuming a transhuman AI would be based on scanning a human brain and then running a simulation of it, while a synthetic AI would be more declaratively algorithmic. In that scenario, proving that a given self-modification would be an improvement is much more difficult for a transhuman AI. Because of that, I'd expect a transhuman AI to be orders of magnitude slower to adapt, and thus less dangerous, than a synthetic AI - so I think it's reasonable to treat the two classes differently.

Comment author: Yoreth 14 June 2010 08:10:24AM 5 points [-]

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.

Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.

You can probably see where this is going. The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself, because doing so would require it to understand, in a deep, level-spanning way, how it itself works. Of course, it could just add complexity and hope that it works, but that's just evolution, not intelligence explosion.

So: do you know any counterarguments or articles that address either of these points?

Comment author: mindviews 14 June 2010 11:04:31AM 1 point [-]

Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.

The critical aspect of a "major-impact intelligence-explosion singularity" isn't the method of improvement but the rate of improvement. If computer processing power continues to grow exponentially, even an inefficiently self-improving AI will have that growth in raw computing power behind it.
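As a toy model (my numbers are made up for illustration, not predictions): let hardware follow a Moore's-law-style doubling while the AI's own self-improvement adds only a modest few percent a year, and the combined growth still compounds dramatically:

    # Toy model: hardware doubling x slow self-improvement still compounds fast.
    # All parameters are illustrative guesses, not forecasts.
    def capability(years, hw_doubling_years=1.5, sw_gain_per_year=1.05):
        hardware = 2 ** (years / hw_doubling_years)  # Moore's-law-style growth
        software = sw_gain_per_year ** years         # "inefficient" self-improvement
        return hardware * software

    for t in (0, 10, 20, 30):
        print(f"year {t:2d}: relative capability {capability(t):,.0f}")

Even with the software term set to 1.0 (no self-improvement at all), the hardware term alone grows about a million-fold over 30 years at that doubling rate.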

So: do you know any counterarguments or articles that address either of these points?

I don't have any articles but I'll take a stab at counterarguments.

A majoritarian counterargument: AI turned out to be harder and further off than originally thought, and the general view is still tempered by AI's failure to live up to those early expectations. In short, AI researchers cried "wolf!" too often 30 years ago, and their predictions aren't given much weight now because of that bad track record.

A "mind can't understand itself" counterargument: even accepting as a premise that a mind can't completely understand itself, that's not an argument that it can't understand itself better than it currently does. The question then becomes which parts of an AI's mind are important for reasoning/intelligence, and whether an AI can understand and improve those capabilities at a faster rate than humans can.

Comment author: gwern 30 May 2010 11:31:09PM *  9 points [-]

Others complain that the existence of an easy medical solution prevents people from learning personal responsibility. But here we see the status-quo bias at work, and so can apply a preference reversal test. If people really believe learning personal responsibility is more important than being not addicted to heroin, we would expect these people to support deliberately addicting schoolchildren to heroin so they can develop personal responsibility by coming off of it. Anyone who disagrees with this somewhat shocking proposal must believe, on some level, that having people who are not addicted to heroin is more important than having people develop whatever measure of personal responsibility comes from kicking their heroin habit the old-fashioned way.

Now that's a good use of the reversal test!

Comment author: mindviews 31 May 2010 02:40:02AM *  3 points [-]

I don't think that's a good example. For the status-quo bias to be at work, we would need to think it worse for people to have either less or more personal responsibility than they currently do (i.e., the status quo is a local optimum). I'm not sure anyone would argue that having more personal responsibility is bad, so the status-quo bias isn't in play and the preference reversal test doesn't apply. (A similar argument shows that the current rate of heroin addiction isn't a local optimum, either.)

I think the problem with the example is that it mixes the axes of our preferences: how much personal responsibility we want people to have, and how much we want them not to be addicted to heroin. So we have a space with at least these two dimensions. But I'll claim that personal responsibility and heroin use are not orthogonal.

I think the real argument is about the coupling between personal responsibility and heroin addiction: should there be more coupling or less? The drug in this example would mean less coupling. So let's run the preference reversal test: if there were a drug that made your chances of heroin addiction more strongly coupled to your personal responsibility, would you take it? I think that would be a valid preference reversal test here, if you think the current degree of coupling is a local optimum.

Comment author: mindviews 17 May 2010 08:18:27AM 1 point [-]

Thoughts I found interesting:

The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

Interesting because I don't think it's true. I think the problem is more about the need of AI builders to show results: providing a solution (or a partial solution, or a path to a solution) in a narrow context is a way to do that when your tools aren't yet powerful enough for more general or mixed approaches. Given the variety of identifiable structures in the human brain that give us intelligence, I strongly expect that an AI will be built by combining many specialized parts, probably drawing on multiple research areas we'd recognize today.

One obvious influence comes from computer science, since presumably AI will eventually be built using software. But this fact appears irrelevant to me, and so the influence of computer science on AI seems like a disastrous historical accident.

Interesting because it forced me to consider what I think AI is outside the context of computer science - something I don't normally do.

In my view, AI can and must become a hard, empirical science, in which researchers propose, test, refine, and often discard theories of empirical reality.

Interesting because I'm very curious to see what this means in the context of your coming proposal.

Comment author: mindviews 16 May 2010 08:15:12AM 3 points [-]

Hi all - been lurking since LW started and followed Overcoming Bias before that, too.
