
Comment author: Simulation_Brain 08 April 2014 08:10:23PM 1 point [-]

I think the example is weak; the software was not that dangerous, the researchers were idiots who broke a vial they knew was insanely dangerous.

I think it dilutes the argument to broaden it to software in general; software could be very dangerous under exactly those circumstances (with terrible physical safety measures), but the dangers of superhuman AGI are vastly larger IMHO and deserve to remain the focus, particularly in the ultra-reduced bullet points.

I think this is as crisp and convincing a summary as I've ever seen; nice work! I also liked the book, but condensing it even further is a great idea.

Comment author: Michaelos 09 April 2014 02:51:19PM 2 points [-]

> I think the example is weak; the software was not that dangerous, the researchers were idiots who broke a vial they knew was insanely dangerous.

As a side note, I was more convinced by my example at the time, but on rereading this I realized that I hadn't properly remembered how poorly I had expressed the context that substantially weakened the argument (the researchers accidentally breaking the vial).

Which actually suggests a simpler method for improving rhetoric: have someone tell you (or pretend someone has told you) that you're wrong, and then reread your original point. Rereading it under the impression that you screwed up gives you a fresher perspective than you had while writing it. I should take this as evidence that I need to do that more often with my posts.

Comment author: Michaelos 08 April 2014 04:13:21PM 1 point [-]

I think one standard method of improving the rhetorical value of your bullet points is to come up with a scenario that generally agrees with you but disagrees with your bullet points, and then imagine that scenario being presented to you by someone else.

Example Opposition Steel Man: Imagine researchers are attempting to use a very dumb piece of software to cycle through ways of generating bacteria that clean up oil spills. The software starts cycling through possible bacteria, and it turns out that, as a side effect, one of the generated bacteria spreads incredibly quickly and devours lipids in living cells in the controlled training setting. The researchers decide not to use that one, since they don't want it devouring organic lipids, but they accidentally break the vial, and a worldwide pandemic ensues. When asked why they didn't institute AI safety measures, the researchers reply that they didn't think the software was smart enough for AI safety measures to matter, since it basically just brute-forced through the boring parts of the research they would have done anyway.

Example Opposition Steel Man (cont.): This would seem to falsify the idea that a dangerous AI will be motivated to seem safe in any controlled training setting, since the AI was too dumb to have anything resembling purposeful motivation and was still extremely dangerous. And because the researchers thought of it as not even an AI, they did not think they had to consider the idea that not enough effort is currently being put into designing safe AIs. I would instead say not enough effort is currently being put into designing safe software.

Then, attempt to turn that Steel Man's argument into a bullet point:

- Not enough effort is currently being put into designing safe software.

Then ask yourself: Do I have any reasons to not use this bullet point, as opposed to the bullet points the Example Opposition Steel Man disagreed with?

Comment author: Michaelos 26 March 2014 01:37:51PM *  2 points [-]

In terms of little details, I think "Everything that can go wrong, will go wrong" must be specified right away, because if you let rationalists try to think "How bad could it be at maximum badness?" it will get very bad, very quickly.

For instance, Situation 1: Imagine that on every day you spend mostly outside, you get struck by lightning, and on every day you spend mostly inside, there is an earthquake and whatever structure you are in collapses on you.

I can see rationalists attempting to build, and spend most of their time in, structures made mostly out of pillows: they collapse, oh well, they get rebuilt in 30 minutes. It turns the pain into a daily chore.

On the other hand, imagine Situation 2: every day, through hellish quantum mechanics, enough antimatter appears in contact with your skin to cause a nonfatal, but excruciating, matter-antimatter explosion.

Now, at this point, the rationalist might realize something like "Okay, well, I'll arrange things in such a way that any explosion will fit into one of two categories: It will be fatal, or it won't actually cause me pain."

And while the rationalist is attempting to build the arrangement that does this, a giant bear comes by, breaks it, and painfully (but nonfatally) claws them to pieces.

Situation 3: Rationalists can be rationalist all they want, but they've been captured by the giant bears and had all of their limbs systematically clawed off, plus they've been blindfolded, gagged, earplugged, and are periodically used as claw sharpeners.

Of course, if some parts of hell are like situation 1, and some parts of hell are like situation 2, and some are like situation 3, I expect rationalists to attempt to figure out why that is, unless you want to have Situation 4:

Situation 4: There's one constant rule of Hell: every time someone figures out all of the other rules of Hell, those rules change.

Ergo: once someone figures out "Oh, well, I can avoid the Lightning and the Earthquakes with pillow structures," then the Giant Bears and Antimatter Skin Explosions come. Once you figure out how to get used to being used as a Giant Bear claw sharpener, something else happens, and that thing is even worse.

Basically, there is a range of darkness you can have here, in terms of writing. In terms of difficulty levels, this might be expressed as:

1: Hard.

2: Impossible.

3: You're helpless.

4: Struggling can only make it worse.

I was writing a story about a character starting at rock bottom and working their way up, and I actually had the entity setting this up mention to the character that there had been previous versions of the character who went irrevocably insane and were deleted and reset, because previous versions of 'rock bottom' had been set too low to ever get out of.

Comment author: Michaelos 19 March 2014 04:03:51PM *  0 points [-]

After refining my thoughts, I think I see the problem:

1: The Banner AI must ban all transmissions of naughty Material X.

1a: Presumably, the Banner must also ban all transmissions of encrypted naughty Material X.

2: The people the Banner AI is trying to ban from sending naughty transmissions have an entire field of thought (knowledge of human values) that the AI is not allowed to take into account: it is secret.

3: Presumably, the Banner AI has to allow some transmissions. It can't just shut down all communications.

Edit: 4: The Banner AI needs a perfect success rate. High numbers like 97.25% recognition are not sufficient. I was previously presuming this without stating it, hence my edit.

I think fulfilling all of these criteria against sufficiently clever people is impossible or nearly so. If it is possible, it strikes me as highly difficult, unless I'm making a fundamental error in my understanding of encryption theory.
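
To make 1a and 4 concrete, here is a minimal sketch (my own illustration, not anything from the original post) of why a Banner that recognizes Material X by pattern matching fails once the senders encrypt it: a one-time pad ciphertext is statistically indistinguishable from random bytes, so whatever signal the filter keyed on is gone. The `naive_banner` function and the placeholder strings are hypothetical.

```python
import os

FORBIDDEN = b"material X"  # hypothetical stand-in for the banned content

def naive_banner(transmission: bytes) -> bool:
    """A toy Banner: block any transmission containing the forbidden pattern."""
    return FORBIDDEN in transmission

def one_time_pad_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: the ciphertext is statistically indistinguishable from random bytes."""
    key = os.urandom(len(plaintext))  # shared out of band, invisible to the Banner
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

plain = b"here is the material X you wanted"
cipher, _key = one_time_pad_encrypt(plain)

print(naive_banner(plain))   # True  -- the plaintext transmission is caught
print(naive_banner(cipher))  # False -- the encrypted copy slips through, violating 1a
```

A real Banner would of course be far more sophisticated than substring matching, but the shape of the problem is the same: criterion 4 demands a perfect success rate, while criterion 3 rules out the blunt response of blocking anything that looks like random noise.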

Comment author: Michaelos 11 March 2014 02:54:23PM 0 points [-]

I think I agree with the denotation of a lot of this, but I don't quite agree with the connotation of the use of the word 'sad'.

For instance, when I was having problems finding a romantic partner, getting sad alone was useless. Being determined to not make the same mistakes again and continuing to try was helpful. But I distinctly recall that being separate and distinct from the sadness, and the sadness was something which I was overdoing to my detriment.

I mean, I feel like you could say "Don't worry about things you can't change. Worry about things you can change in such a way to make your future better without unnecessary worry." and that would have a similar denotation to the above points, but it would seem to have a substantially better connotation to me.

Comment author: listic 02 January 2014 02:53:07PM *  1 point [-]

I.D. - That Indestructible Something is a My Little Pony fanfiction somewhat along these lines.

It's the kind of fanfiction that I like and believe all fanfiction writers should aspire to, in the sense that it doesn't require familiarity with the canon, but is self-sufficient and shows and explains everything that should be shown and explained.

Acknowledgements for this story are numerous and include Franz Kafka, Nick Bostrom and Ludwig Eduard Boltzmann.

Comment author: Michaelos 27 February 2014 04:38:40PM *  0 points [-]

I am reading this, and it is surprisingly good so far, thank you for posting it.

Edit: I finished reading it. I'm not sure the middle or end are quite as good for me as the beginning. It feels a bit like there is a genre shift at some point that took me out of the story and I never quite got back in.

Comment author: advancedatheist 25 February 2014 03:33:23PM 12 points [-]

Figuring out a non-eugenics technology to raise IQs would go a long way towards solving other problems. Nick Bostrom, in one of his talks, argues that raising everyone's IQ by ten points would revolutionize the world for the better, not by making the smartest people marginally smarter, but by "uplifting" billions of dullards above a threshold where they become more educable, more employable, more law-abiding, more likely to save money and plan for the future, and so forth.

Psychologist Linda Gottfredson of the University of Delaware would probably agree with this outcome:

http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf

Comment author: Michaelos 25 February 2014 03:56:41PM 1 point [-]

Note: I'm not sure if I have a core point, but I did find this thought provoking and wanted to post what I had worked out so far.

Based on the Wikipedia page about iodine deficiency ( http://en.wikipedia.org/wiki/Iodine_deficiency ), it sounds like figuring out a way to distribute iodine to everyone more effectively, so that no one experiences an iodine deficiency in childhood, would be an example of a non-eugenics method to raise IQ. Although I suppose that implies the problem is not "The technology to solve this problem doesn't exist" but "The technology to solve this problem isn't getting to everyone who would benefit from it." And that can be the case with other transformative technologies that Stuart_Armstrong mentioned: some areas don't have access to vaccines, some areas do have access to vaccines but are opposed to them, some people do not let women in their area use some contraceptive methods, etc.

I guess one way to describe the problem is that "How do we get everyone in the world access to the technological developments we have already generated?" has several cases that are not low-hanging fruit. Even if I came up with a new nanopill that provided intellectual benefits similar to resolving a childhood iodine deficiency, and that could be stacked on top of it for even more gains, I'd still have to find a way of getting that nanopill to everyone, and that would be the same kind of problem I would face getting iodine tablets, iodine-rich food, or even iodized salt to everyone.

In response to Identity and Death
Comment author: Michaelos 18 February 2014 04:22:38PM 1 point [-]

One thing I notice about some of the philosophical quandaries raised above about both teleportation and enhancements is that they only consider a single life, without linking it to others.

For instance, assume you are attempting to save your child from a burning building. You can either teleport in, grab your child, and teleport out with a near perfect success rate (although both you and your child will have teleported, you twice, your child once) or you can attempt to run into the building to do a manual rescue at some lower percent success rate X. Other incidental costs and risks are roughly the same and are trivial.

The obvious answer to me appears to be "I pick the Teleporter instead of the lower X."

And if I consider the alternative:

You are attempting to save your child from a burning building. You can either take standard enhancements, and then run in, grab your child, enhance them, and then run out with a near perfect success rate (although both you and your child will be enhanced, permanently), or you can attempt to run into the building to do a manual rescue at some lower percent success rate X. Other incidental costs and risks are roughly the same and are trivial.

The obvious answer to me still appears to be "I pick the enhancements instead of the lower X."

It seems like if a person were worried about either teleportation or enhancements, they would have to have a counterargument such as "Well, X is lower, but it's still pretty high and above some threshold level, so in a case like that I think I'm going to either not have me and my child teleport or not have me and my child take the enhancements: I'll go for the manual rescue."

That argument just doesn't seem convincing to me. I've tried mentally steelmanning it to get a better one, but I can't seem to get anywhere, particularly when considering the perspective of the person inside the building who needs help, and the possibility that, given a strong enough stance against the procedures, the person outside the building could plausibly think within their value system, "It would be better to let this person burn to death than for me to risk my life to save them at such a low X, or to use procedures that will harm us both according to my values."

Am I missing something that makes these types of counterargument more persuasive than I am giving them credit for?

Comment author: Michaelos 14 February 2014 03:22:10PM 3 points [-]

How easy would it be to put downvotes on some kind of timer, where you could only downvote once every N minutes? (The value of N is arbitrary and could be adjusted based on experimentation.)

This seems as if it would prevent someone from trivially and systematically going through and downvoting every post by a particular poster, but would still allow someone to read something and downvote it in the normal course of browsing.
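
For concreteness, here is a minimal sketch of the cooldown logic I have in mind. It is purely hypothetical and not based on the actual LessWrong codebase; the ten-minute figure is just a placeholder for N.

```python
import time

COOLDOWN_SECONDS = 10 * 60               # "once every N minutes"; placeholder value
_last_downvote: dict[str, float] = {}    # user id -> timestamp of that user's last downvote

def try_downvote(user_id: str) -> bool:
    """Record a downvote if the user's cooldown has expired; return whether it counted."""
    now = time.time()
    last = _last_downvote.get(user_id)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False                     # still on cooldown; the downvote is rejected
    _last_downvote[user_id] = now
    return True
```

A determined mass-downvoter could still work through someone's posting history, but only at one post per N minutes, which slows the attack enough to be noticed.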

Comment author: Michaelos 03 February 2014 01:46:14PM 2 points [-]

How are your savings for retirement?

If you have no retirement savings, you can set some up at an easy-to-use online brokerage: your early twenties are a great time to start, managing your retirement account doesn't really have to take a large amount of time, and 50 dollars a day should cover initial expenditures.
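
As a rough worked example of why the early twenties matter (my own illustrative numbers and an assumed 7% average annual return, not anything from this thread or a recommendation): the same monthly contribution simply compounds for far longer.

```python
def future_value(monthly: float, years: int, annual_rate: float = 0.07) -> float:
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_rate / 12
    n = years * 12
    return monthly * ((1 + r) ** n - 1) / r

print(round(future_value(200, 43)))  # start contributing at 22, retire at 65: roughly $655,000
print(round(future_value(200, 30)))  # start at 35 instead: roughly $244,000
```

In this toy model, starting thirteen years earlier with the same monthly contribution ends with well over twice the balance.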

Also, at 29, I personally enjoy fiddling around with my retirement account... although it took me a while to figure out the right settings for myself, and I did have some initial panics when the account was smaller, I wasn't as familiar with the pros and cons of various investment types, and one of my stocks had gone down quite a bit. Now that it is bigger and much better diversified, it's more fun.
