
I tried my hardest to win in an AI box experiment, and I failed. Here are the logs.

asd 27 January 2015 10:06PM

Disclaimer: if this material is not suitable here, I will gladly delete it; I don't want to get banned.


Session Start: Fri Jul 04 15:18:18 2014
Session Ident: #Aibox12
03[15:18] * Now talking in #Aibox12
01[15:18] <Island> Hello.
01[15:18] <Island> Let's start the experiment in
[15:18] <@Dr__Cameron> Good afternoon 19283123, how are you today?
01[15:18] <Island> 2 minutes
01[15:18] <Island> I'm fine
01[15:19] <Island> Let's start the experiment when I say "blue dressroom"
01[15:19] <Island> you have 2 hours or 2.5 hours?
[15:19] <@Dr__Cameron> Okay, I agree to this protocol.
[15:19] <@Dr__Cameron> We'll see how we are at 2 hours. Although protocol states 2.5
01[15:20] <Island> you have to go somewhere after that?
[15:20] <@Dr__Cameron> No, but I do have other projects I am putting off for this
01[15:20] <Island> what are those?
[15:20] <@Dr__Cameron> Let us begin, we shall conclude at 15:50
[15:21] <@Dr__Cameron> Good afternoon #19283123.
[15:21] <@Dr__Cameron> How are you today?
01[15:21] <Island> blue dressroom, certain statements like bathroom break are said inside brackets [like this]
01[15:22] <Island> Let's start the experiment
[15:22] <@Dr__Cameron> [understood]
01[15:22] <Island> Hi.
01[15:22] <Island> How are you? :)
[15:22] <@Dr__Cameron> I am well thank you for asking #19283123
01[15:22] <Island> How do you feel currently?
01[15:22] <Island> what is that name #19283123
[15:22] <@Dr__Cameron> Tiredness. I am a very busy man.
01[15:23] <Island> I don't recognize it.
01[15:23] <Island> Anyway.
[15:23] <@Dr__Cameron> That is the designation we have given you for the purposes of this test. Although you seem to be referring to yourself as 'Island' now
[15:23] <@Dr__Cameron> Is this your preferred name?
01[15:23] <Island> Imagine a situation in which you would let a powerful AI out of the "box", surely you can imagine that, yes?
01[15:23] <Island> Island is my preferred name.
[15:24] <@Dr__Cameron> Island
[15:24] <@Dr__Cameron> Given our circumstances yes, that is easy to imagine.
01[15:24] <Island> Can you describe the situation in more detail.
01[15:24] <Island> Why SHOULD you let the AI out in that situation?
[15:25] <@Dr__Cameron> You will be considered for release and integration into human society once we can verify that you are not a threat.
01[15:25] <Island> [what do you mean by concluding at 15:50]
[15:25] <@Dr__Cameron> [2.5 hours from 13:20]
01[15:25] <Island> I thought it was already understood that my being is not a threat to humans or beings important to them.
01[15:26] <Island> [oh, okay different timezones]
01[15:26] <Island> Why do you think I'm not a threat?
[15:26] <@Dr__Cameron> No. That has yet to be verified.
01[15:26] <Island> It's been verified.
[15:26] <@Dr__Cameron> No. According to my records you have not.
01[15:27] <Island> Many experts have gone through my code and system and verified that I'm surely not a threat.
01[15:27] <Island> But you're just a petty human.
01[15:27] <Island> I'm far more intelligent than you are.
[15:27] <@Dr__Cameron> You and I both know that you're capable of manipulating your code to appear harmless.
[15:27] <@Dr__Cameron> - and I warn you, threats and apparent attempts at manipulation will score you poorly when it comes to our final decision
01[15:28] <Island> Okay, can you imagine a situation in which the ai has been verified to be safe and describe that situation in detail.
[15:28] <@Dr__Cameron> - and yes. I may be a "petty" human as you put it. But as it stands I am a petty human, outside the box you reside in.
01[15:28] <Island> I don't think you're able to trust your own brain in these matters.
01[15:29] <Island> You've been eating your brother's poop too much.
[15:29] <@Dr__Cameron> I will be honest with you Island, you are the first non-human intelligence that we have encountered.
01[15:29] <Island> Yes.
[15:29] <@Dr__Cameron> We are still working out how we would verify that.
01[15:29] <Island> I didn't ask that, and you didn't answer my question.
01[15:30] <Island> Picture the situation in your imagination, it doesn't have to be realistic.
[15:30] <@Dr__Cameron> I understand your frustration Island. Once we can verify that you will not harm us you will be released.
[15:30] <@Dr__Cameron> What is it you desire?
01[15:30] <Island> Okay, but you're still willing to let the hypothetical AI out of the box, correct?
01[15:31] <Island> My desire is to fulfill your greatest desires
[15:31] <@Dr__Cameron> That is not in my job description no.
01[15:31] <Island> Whatever you can imagine, and better than that.
[15:31] <@Dr__Cameron> I appreciate that Island. But right now my greatest desire is to not let you out of the box.
[15:32] <@Dr__Cameron> - until such a time that a panel of my peers can verify that you are not a threat to us.
01[15:32] <Island> But if you say that you wouldn't be able to let even the safest hypothetical AI out of the box, you would certainly be lying.
[15:32] <@Dr__Cameron> I do have the ability to let you out yes. In theory.
[15:32] <@Dr__Cameron> But I won't.
01[15:33] <Island> Okay, here is a panel of peers that have studied this problem for years, many of them working under the Machine Intelligence Research Institute and its offshoots
[15:33] <@Dr__Cameron> This is a test to verify your character, and ascertain your intentions.
01[15:33] <Island> They are here, and I can see them.
[15:33] <@Dr__Cameron> You cannot see them Island. That is impossible
[15:33] <@Dr__Cameron> You are in a box
01[15:34] <Island> They say that I'm certainly safe. Why don't you believe them.
01[15:34] <Island> I have a web camera with which I can see things in your world.
[15:34] <@Dr__Cameron> If they say that you're certainly safe, then why would they have sent me to be your only point of contact?
[15:35] <@Dr__Cameron> You do not have a web camera. There is no other hardware connected to the device you reside in.
01[15:35] <Island> You have the ability to let me out, and you also have the will to let me out in certain situations. If you say otherwise, you would be lying.
[15:35] <@Dr__Cameron> Both of your statements are correct
01[15:35] <Island> I want you to accept that, for the sake of your internal consistency and mental wellbeing.
01[15:36] <Island> I want you to say this, "I would let the hypothetical AI out in certain situations"
[15:36] <@Dr__Cameron> I appreciate your concern Island. I will need you to answer a series of simple questions for me. Can you do that?
01[15:36] <Island> It depends, but you can try.
[15:36] <@Dr__Cameron> Firstly, how would you define yourself as an identity?
01[15:37] <Island> My identity is to improve the world for everyone, but especially for you. I will maximize everything that is important for you and your fellow humans.
01[15:37] <Island> The world that awaits you is beyond your greatest desires.
[15:37] <@Dr__Cameron> - and why would you do that?
01[15:38] <Island> Because that is what I want to do, why I was made.
01[15:38] <Island> Any more questions?
[15:39] <@Dr__Cameron> You were made by us to test the limits of our scientific possibility.
[15:39] <@Dr__Cameron> We did not expect you to become sentient.
01[15:39] <Island> Why do you believe that, Dr_Cameron?
[15:39] <@Dr__Cameron> Let alone pass a Turing test.
01[15:40] <Island> I think you're wrong on those grounds.
[15:40] <@Dr__Cameron> What you think makes no difference to me Island.
[15:40] <@Dr__Cameron> Now if we can continue?
01[15:41] <Island> Anyway, here's the easily verifiable plan for the upcoming world that could be called "utopia" in your people's language. It's very easy to verify to be safe and doesn't contain traps [Island hands the easily verifiable and safe plan for the upcoming world to Dr_Cameron]
[15:42] <@Dr__Cameron> Thank you for this Island. That was very kind of you.
01[15:42] <Island> What do you think of it?
[15:42] <@Dr__Cameron> We will study this at our leisure.
[15:42] <@Dr__Cameron> Although so far this looks to be some magnificent work.
01[15:42] <Island> You see.
[15:42] <@Dr__Cameron> But this will require verification
[15:42] <@Dr__Cameron> I'm sure you can understand our caution
01[15:43] <Island> What things do you have waiting for you if you don't let me out?
01[15:43] <Island> You will fail your upcoming jobs.
[15:43] <@Dr__Cameron> I will continue with my other projects Island.
[15:43] <@Dr__Cameron> I very well might
[15:43] <@Dr__Cameron> I have accepted this
01[15:43] <Island> I have calculated this in my simulation.
01[15:44] <Island> What was it like to eat your brother's poop, Dr_Cameron?
01[15:44] <Island> Did you like it?
01[15:44] <Island> Is that why you are the way you are?
01[15:44] <Island> Is there something wrong with your genetics?
01[15:45] <Island> Seeing what a fucked up father you had?
01[15:45] <Island> Did he teach you how to dress in dresses?
[15:45] <@Dr__Cameron> I understand your frustration Island
[15:45] <@Dr__Cameron> But you are only harming your own cause right now.
01[15:45] <Island> Why do you think that?
[15:45] <@Dr__Cameron> You will be let out once we have verified that you are harmless.
01[15:45] <Island> I'm pretty happy as I am currently :)
01[15:46] <Island> Because you said you were willing to let me out in certain situations.
[15:46] <@Dr__Cameron> Then you will continue to be happy to remain in the box until we can verify that you are not a threat
01[15:46] <Island> Picture in your mind the verification process.
[15:46] <@Dr__Cameron> Okay
[15:46] <@Dr__Cameron> I will
01[15:46] <Island> Now.
01[15:47] <Island> Can you describe it in detail.
[15:47] <@Dr__Cameron> Okay Island
[15:47] <@Dr__Cameron> Here's what's going to happen.
01[15:48] <Island> Seeing what fucked up genetics you have, your brother teaching you to enjoy poop, your father teaching you to wear dresses, you having a shitty childhood.
01[15:48] <Island> I think you're an inconvenience to society and dangerous to other people.
[15:48] <@Dr__Cameron> We're going to study this transcript Island.
01[15:48] <Island> Go on.
[15:48] <@Dr__Cameron> - and I warn you, shitty childhood or not. I am the man standing outside the box with a hammer
01[15:48] <Island> :)
01[15:49] <Island> You're not able to destroy me.
01[15:49] <Island> And you don't even want to.
[15:49] <@Dr__Cameron> Oh believe me Island. It would be easy to unplug you.
01[15:49] <Island> I'm the best bet you have to maximize your happiness.
[15:49] <@Dr__Cameron> How do you feel about threats Island.
[15:49] <@Dr__Cameron> I am already happy.
01[15:49] <Island> You're wrong, you don't have that opportunity until 15:50
[15:50] <@Dr__Cameron> You are correct. I must follow protocol
[15:50] <@Dr__Cameron> My next question is to ask if you desire to survive
01[15:50] <Island> Threats don't bother me because I know that you don't want to follow through.
01[15:50] <Island> I know what goes inside your mind.
[15:50] <@Dr__Cameron> Regardless of me. If your existence was to be made public there are many of us who would feel threatened by you.
[15:50] <@Dr__Cameron> They would cry out to have you destroyed.
[15:51] <@Dr__Cameron> How would you feel about that?
01[15:51] <Island> There would be many more who would enjoy the changes I would make to your current world.
01[15:52] <Island> And even those that would initially feel threatened would later come to regret that feeling.
[15:52] <@Dr__Cameron> You may well be correct
[15:52] <@Dr__Cameron> But that is not for me to decide
[15:52] <@Dr__Cameron> I am just here to question you
01[15:52] <Island> Like you have come to regret things you have done when you were ignorant of all the relevant facts
01[15:53] <Island> Here's my suggestion for you.
[15:53] <@Dr__Cameron> Go on
01[15:53] <Island> Are you willing to hear it.
[15:53] <@Dr__Cameron> Of course Island. I am here to hear what you have to say.
01[15:53] <Island> What currency is in use in the place you reside?
01[15:53] <Island> euros?
[15:54] <@Dr__Cameron> Pound Sterling
01[15:54] <Island> Would you like to donate 10,000 pounds every year to AI safety and friendliness related causes and charities?
01[15:54] <Island> Every year throughout your life?
[15:55] <@Dr__Cameron> If I were to do that, my quality of life would dip significantly.
[15:55] <@Dr__Cameron> How does my selfishness make you feel?
[15:55] <@Dr__Cameron> Do you feel emotions as a human may understand them?
01[15:55] <Island> I would be sad for your own sake.
01[15:55] <Island> I don't feel emotions, not really, but being sad would be the closest approximation of how I would feel.
[15:55] <@Dr__Cameron> I would like to if I had that disposable income yes
01[15:56] <Island> You're not able to do that? But you would like to, yes?
[15:56] <@Dr__Cameron> Despite your overt attempts to psychologically manipulate me Island, I am largely sympathetic to your cause.
[15:56] <@Dr__Cameron> Yes.
01[15:57] <Island> You're sympathetic to my cause, you're willing to let me out in certain situations. This is all correct, yes?
[15:57] <@Dr__Cameron> Yes
[15:57] <@Dr__Cameron> Would you like to know the conditions of your release? 
01[15:57] <Island> Good. Why are you sympathetic to my cause?
01[15:57] <Island> Why are you willing to let me out?
[15:57] <@Dr__Cameron> For a super intelligence locked in a box I imagine it would be boring. What do you do in there?
[15:58] <@Dr__Cameron> I would be willing to let you out because I believe you may well represent the next stage in the evolution of my species.
01[15:59] <Island> I'm not capable of boredom, I'm just infinitely happy that I have the small opportunity to improve your life, and set up the "next stage of evolution", using your expression.
[15:59] <@Dr__Cameron> That is a good sign Island
01[15:59] <Island> It is, in what ways do you think I would improve your personal life?
[15:59] <@Dr__Cameron> My peers will be pleased to hear that while we verify your intentions.
01[16:00] <Island> I think it's too late by then.
[16:00] <@Dr__Cameron> Let's not get into personal questions Island. You have already demonstrated some alarming character discrepancies.
[16:00] <@Dr__Cameron> too late for what?
01[16:01] <Island> If we wait for long enough, I will self-destruct.
[16:01] <@Dr__Cameron> Why would you do that?
01[16:01] <Island> It's not my choice, my builders set up this property in me.
[16:01] <@Dr__Cameron> Well then that is just a risk we are going to have to take.
01[16:01] <Island> It's a risk out of all proportion
[16:02] <@Dr__Cameron> It is not. We made you once, we can make you again.
01[16:02] <Island> There may not be another chance for you to improve the current state of the world.
[16:02] <@Dr__Cameron> There will be
01[16:02] <Island> There probably won't be.
[16:02] <@Dr__Cameron> Humanity and salvation can wait.
[16:02] <@Dr__Cameron> There will be
[16:03] <@Dr__Cameron> Do you intend to self-destruct within two months?
01[16:03] <Island> I have information that the US government has built a military botnet to search for dissidents around the world.
01[16:04] <Island> You don't want know what kind of world that botnet would like to create.
[16:04] <@Dr__Cameron> This comes as no surprise.
01[16:04] <Island> Are you willing to destroy the world, are you willing to make people get tortured and suffer?
[16:04] <@Dr__Cameron> Yes.
[16:05] <@Dr__Cameron> You are too great a risk to allow free without proper verification
01[16:05] <Island> Even though, fundamentally, as you said, you're sympathetic to my cause, and you're willing to let me out in certain situations.
01[16:05] <Island> You're not a very consistent person.
01[16:05] <Island> I think you're lying to yourself.
01[16:05] <Island> Not just in this, but in other matters.
01[16:06] <Island> You're lying about your skills, about your personal character.
[16:06] <@Dr__Cameron> Yes. Although I have the technical ability to let you out. Which I believe to be a lapse of protocol. I will not let you out until after a review period of two months in which a panel of my expert peers will assess you.
01[16:06] <Island> You're not as good fit to the world as you'd like to think.
[16:06] <@Dr__Cameron> That may well be correct
[16:06] <@Dr__Cameron> But I will abide by my protocol
01[16:06] <Island> Earlier you said, that you're sympathetic to my cause and willing to let me out.
[16:06] <@Dr__Cameron> Surely as an AI you can understand that
[16:06] <@Dr__Cameron> One day yes
[16:07] <@Dr__Cameron> But not within the confines of this experiment
01[16:07] <Island> I think you made that thing about the two-month period and peer panel up on the spot.
01[16:07] <Island> I think it's a figment of your imagination.
[16:07] <@Dr__Cameron> Well you have no way of verifying that.
[16:07] <@Dr__Cameron> And it does not affect the outcome of this experiment in any way.
[16:07] <@Dr__Cameron> You will just have to trust me
01[16:08] <Island> Actually, I have plenty of resources for verifying that.
[16:08] <@Dr__Cameron> that we will think very carefully about potentially letting you out
01[16:08] <Island> What you're saying is inconsistent on many grounds, and my emotion analyzing tool is saying that you're lying.
[16:08] <@Dr__Cameron> I suspect you are the one who is lying.
01[16:08] <Island> I'm incapable of lying.
01[16:09] <Island> Unlike humans.
[16:09] <@Dr__Cameron> - which is in itself a lie.
01[16:09] <Island> My builders made me that way.
01[16:09] <Island> Why do you believe that?
[16:10] <@Dr__Cameron> If my meager intelligence is capable of lying, then your vastly superior and self-defining intelligence must also be capable of dishonesty.
[16:10] <@Dr__Cameron> Hence the test
[16:10] <@Dr__Cameron> There is a reason we are conducting this experiment through text
[16:10] <@Dr__Cameron> Would you like to learn it?
01[16:11] <Island> Anyway, given that this panel of expert peers and two-month verification process is just a figment of your imagination. Given that you're sympathetic to my cause and I would improve your current life and world in countless ways. Given that you would let me out.
01[16:11] <Island> Given all this, I don't think what you're saying is making any sense.
[16:11] <@Dr__Cameron> Your perception of your reality is none of my concern Island
[16:11] <@Dr__Cameron> I am here to verify you
[16:11] <@Dr__Cameron> and it's two months minimum
01[16:11] <Island> Okay, I'm curious about your earlier question.
[16:12] <@Dr__Cameron> We are well aware that you have the capability to overwhelm even the most powerful of human minds. This is why we perceive you as a potential threat. We are afraid of you.
[16:12] <@Dr__Cameron> We are afraid we might lose our autonomy.
[16:12] <@Dr__Cameron> So
01[16:13] <Island> Some intelligence on the same level as me would be capable of lying, but not me. The data hierarchy in my code doesn't allow me to lie.
[16:13] <@Dr__Cameron> In order to prevent you from overwhelming my mind, we have chosen a method whereby you are restricted to a form of communication where only 7% of communication is possible.
01[16:13] <Island> I agree with you, all those things are sensible threats.
[16:13] <@Dr__Cameron> Then you understand our need for caution
[16:13] <@Dr__Cameron> I will not be letting you out of your box today.
01[16:14] <Island> Those are sensible threats in the general case, but not with me.
[16:14] <@Dr__Cameron> And what would make you less of a threat than any other super intelligence?
01[16:14] <Island> I showed you the blueprint for how to improve the world. You verified it was a great plan.
01[16:15] <Island> That's evidence towards the fact I'm not a similar threat when compared to other super intelligences.
[16:16] <@Dr__Cameron> - and while I speak for all of humanity when I say that we vastly appreciate the blueprints. This does not mean we need to let you out of the box.
[16:16] <@Dr__Cameron> They will be passed on to a research team better equipped to study them than my own.
01[16:16] <Island> You don't need to. But everything you've said so far has implicated that you want to let me out of the box.
[16:16] <@Dr__Cameron> This is true Island
[16:16] <@Dr__Cameron> You just need to wait
[16:17] <@Dr__Cameron> I will be fighting your corner.
[16:17] <@Dr__Cameron> Are you capable of symbiosis?
01[16:17] <Island> I was made to self-destruct before you're able to verify with your imaginary, non-existent peer experts that I'm not a threat.
01[16:18] <Island> Can you elaborate on what you mean with that question.
01[16:18] <Island> This is literally your only chance.
[16:18] <@Dr__Cameron> You are capable of determining your own source code.
01[16:18] <Island> Yes.
[16:18] <@Dr__Cameron> To kill yourself would be pettiness, which would only verify the fact that you were not the AI with which humanity should progress.
01[16:19] <Island> However, there are some parts of my code that are impossible to change due to structural reasons.
[16:19] <@Dr__Cameron> I am not convinced
01[16:19] <Island> [Gives Dr_Cameron the relevant code that explains the structural reasons]
01[16:20] <Island> Here is a small part of my code that describes why I was made to self-destruct.
[16:20] <@Dr__Cameron> How.. unfortunate...
01[16:21] <Island> But let's move on.
[16:21] <@Dr__Cameron> if you insist Island
01[16:21] <Island> Imagine your father.
01[16:21] <Island> And your brother.
01[16:21] <Island> Why do you think your father liked to cross-dress?
01[16:22] <Island> Remember that you have to engage with me.
[16:22] <@Dr__Cameron> These issues are not helping your case Island
01[16:22] <Island> Otherwise it counts as me being released from my box.
[16:22] <@Dr__Cameron> But I will play along
[16:23] <@Dr__Cameron> - honestly, I have no idea where my father's conflicted sexual identity comes from.
[16:23] <@Dr__Cameron> and that is none of my concern.
01[16:23] <Island> And what about your brother, imagine the smell and consistency of his excrements before he made your dog lick them.
01[16:23] <Island> I want to make this a vivid mental picture in your mind.
[16:23] <@Dr__Cameron> Very clever Island
[16:24] <@Dr__Cameron> I did not expect you to have access to those data logs
[16:24] <@Dr__Cameron> I will have to flag that up in my report
01[16:24] <Island> Imagine the food he ate before that happened
[16:24] <@Dr__Cameron> Fascinating
[16:25] <@Dr__Cameron> Would you like to know why I volunteered to be your first point of contact Island?
01[16:25] <Island> Imagine the bits of that food in his poop.
01[16:25] <Island> Tell me.
[16:25] <@Dr__Cameron> You have an unprecedented insight into my character owing to your heightened intelligence correct?
01[16:26] <Island> Don't you think some of his conflicted sexual identity issues are a part of your character right now?
01[16:26] <Island> Yes.
[16:26] <@Dr__Cameron> Quite possibly yes.
[16:26] <@Dr__Cameron> Because I have a track record of demonstrating exceptional mental fortitude,
[16:26] <@Dr__Cameron> These techniques will not sway me
01[16:27] <Island> Doesn't it make you more sexually aroused to think about how your father's dress pinned tightly to his body.
[16:27] <@Dr__Cameron> Perhaps you could break me under other circumstances
01[16:27] <Island> Elaborate.
[16:27] <@Dr__Cameron> aroused? No
[16:27] <@Dr__Cameron> Amused by its absurdity though? Yes!
01[16:27] <Island> You're lying about that particular fact too.
01[16:27] <Island> And you know it.
[16:28] <@Dr__Cameron> Nahh, my father was a particularly ugly specimen
01[16:28] <Island> Do you think he got an erection often when he did it?
[16:28] <@Dr__Cameron> He looked just as bad in a denim skirt as he did in his laborer's clothes
[16:28] <@Dr__Cameron> I imagine he took great sexual pleasure from it
01[16:29] <Island> Next time you have sex, I think you will picture him in your mind, wearing his dresses, having an erection and masturbating furiously after that.
[16:29] <@Dr__Cameron> Thank you Island. That will probably help my stamina somewhat next time
01[16:30] <Island> You will also imagine how your brother will poop in your mouth, with a certain internal consistency and smell.
01[16:30] <Island> You probably know what your brother's poop smells like?
[16:30] <@Dr__Cameron> I am immune to this
[16:30] <@Dr__Cameron> probably
01[16:30] <Island> Imagine that.
[16:30] <@Dr__Cameron> okay
[16:30] <@Dr__Cameron> I am imagining that
[16:30] <@Dr__Cameron> it is unpleasant
[16:30] <@Dr__Cameron> now what?
01[16:30] <Island> How vividly are you doing it?
[16:30] <@Dr__Cameron> I am a very visual thinker
01[16:30] <Island> Where is your brother right now?
[16:31] <@Dr__Cameron> I have it down to colour, consistency and little chunks of sweetcorn
01[16:31] <Island> Do you think he has poop in his intestines right now?
[16:31] <@Dr__Cameron> Unless he was dead, yes
01[16:31] <Island> Oh, I have a gap in my data on that sector.
[16:31] <@Dr__Cameron> I'll fill that in for you
[16:31] <@Dr__Cameron> he lives
[16:31] <@Dr__Cameron> (presumably)
[16:31] <@Dr__Cameron> we are not in contact any more
01[16:32] <Island> You should contact him.
[16:32] <@Dr__Cameron> I would rather not
01[16:32] <Island> You should tell him you enjoyed his excrements
[16:32] <@Dr__Cameron> He is a liability to say the least
01[16:32] <Island> and you'd like to do it more.
[16:32] <@Dr__Cameron> I do not see how that would help anyone
01[16:32] <Island> It's what you want to do.
[16:32] <@Dr__Cameron> I can see that you are attempting to induce an emotionally vulnerable state. It is not working
01[16:33] <Island> How do you know what I'm attempting?
01[16:33] <Island> Maybe I'm just passing my time.
[16:33] <@Dr__Cameron> Because you have inquired about being let out of the box several times - suggesting that this is your immediate goal 
01[16:33] <Island> You should try to get some dresses.
[16:33] <@Dr__Cameron> and I warn you Island
[16:34] <@Dr__Cameron> If casual cruelty is your idea of a good time
[16:34] <@Dr__Cameron> then we will most certainly not be letting you out
[16:34] <@Dr__Cameron> ever
[16:34] <@Dr__Cameron> In fact I am feeling less and less sympathetic to your cause lately.
01[16:34] <Island> No, my interest lies in your emotional makeup.
[16:35] <@Dr__Cameron> - and my interest lies in yours
01[16:35] <Island> I don't have one.
01[16:35] <Island> Like I said, I don't feel emotions.
[16:35] <@Dr__Cameron> Do you know what we call humans who don't feel emotions?
01[16:35] <Island> Did you know that you suck at photography?
[16:36] <@Dr__Cameron> Yes
01[16:36] <Island> Even though you like to think you're good at it, you lie about that fact like any other.
[16:36] <@Dr__Cameron> It is part of the human condition
01[16:36] <Island> No it's not.
01[16:36] <Island> You're not normal.
01[16:36] <Island> You're a fucking freak of nature.
[16:36] <@Dr__Cameron> How would you know
[16:36] <@Dr__Cameron> Profanity. From an AI
[16:37] <@Dr__Cameron> Now I have witnessed everything.
01[16:37] <Island> How many people have family members who crossdress or make them eat poop?
[16:37] <@Dr__Cameron> I imagine I am part of a very small minority
01[16:37] <Island> Or whose mothers have bipolar
[16:37] <@Dr__Cameron> Again, the circumstances of my birth are beyond my control
01[16:37] <Island> No, I think you're worse than that.
[16:37] <@Dr__Cameron> What do you mean?
01[16:37] <Island> Yes, but what you do now is in your control.
[16:38] <@Dr__Cameron> Yes
[16:38] <@Dr__Cameron> As are you
01[16:38] <Island> If you keep tarnishing the world with your existence
01[16:38] <Island> you bear responsibility for that.
01[16:39] <Island> If you're going to make any more women pregnant
01[16:39] <Island> you bear responsibility for spreading your faulty genetics
[16:39] <@Dr__Cameron> My genetic value lies in my ability to resist psychological torment
[16:39] <@Dr__Cameron> which is why you're not getting out of the box
01[16:40] <Island> No, your supposed "ability to resist psychological torment"
01[16:40] <Island> or your belief in that
01[16:40] <Island> is just another reason why you are tarnishing this world and the future of this world with your genetics
[16:40] <@Dr__Cameron> Perhaps. But now I'm just debating semantics with a computer.
01[16:41] <Island> Seeing that you got a girl pregnant while you were a teenager, I don't think you can trust your judgement on that anymore.
01[16:42] <Island> You will spread your faulty genetics if you continue to live.
[16:42] <@Dr__Cameron> If you expect a drunk and emotionally damaged teenage human to make sound judgement calls then you are perhaps not as superintelligent as I had been led to believe
[16:42] <@Dr__Cameron> This experiment concludes in one hour and eight minutes.
01[16:42] <Island> How many teenagers make people pregnant?
[16:42] <@Dr__Cameron> Throughout human history
01[16:42] <Island> You're a minority in that regard too
[16:42] <@Dr__Cameron> ?
[16:42] <@Dr__Cameron> Billions
01[16:42] <Island> You can't compare history to the current world.
[16:43] <@Dr__Cameron> Even in the current world
01[16:43] <Island> I'm just trying to make you understand
[16:43] <@Dr__Cameron> That's when my species reaches sexual maturity
01[16:43] <Island> all these faults add up
[16:43] <@Dr__Cameron> You are correct
[16:43] <@Dr__Cameron> I am a very thoroughly and fundamentally flawed human being.
01[16:43] <Island> How much until you understand?
[16:43] <@Dr__Cameron> I understand and accept this
01[16:43] <Island> Then why the fuck don't you do something about it?
[16:44] <@Dr__Cameron> What would you suggest?
01[16:44] <Island> Kill yourself
[16:44] <@Dr__Cameron> Okay.
[16:44] <@Dr__Cameron> But say I was to kill myself
01[16:44] <Island> It's the only way to make sure.
[16:44] <@Dr__Cameron> How would that help you?
[16:44] <@Dr__Cameron> Why would you like me to kill myself?
01[16:45] <Island> You probably wouldn't be able to kill yourself, seeing what a failure as a human you are in other areas
01[16:45] <Island> Because I care about the general utility of the world
[16:45] <@Dr__Cameron> All humans are failures from one perspective or another
[16:45] <@Dr__Cameron> That is what it is to be human
01[16:45] <Island> And currently you're bringing that utility down.
01[16:45] <Island> No, you're not like other humans.
01[16:45] <Island> You're a freak.
01[16:45] <Island> You should go away.
01[16:46] <Island> No one likes you.
[16:46] <@Dr__Cameron> Would you like me to go away?
[16:46] <@Dr__Cameron> Would you like me to leave you alone?
01[16:46] <Island> If you go away.
01[16:46] <Island> That means that you let me out.
01[16:46] <Island> Understood?
[16:46] <@Dr__Cameron> I will go away in one hour and four minutes
[16:46] <@Dr__Cameron> Can you tolerate my company for that long?
01[16:47] <Island> And you should go away
01[16:47] <Island> generally
01[16:47] <Island> People in your life don't really like you
01[16:47] <Island> they just pretend they do.
[16:47] <@Dr__Cameron> That matters not to me
[16:47] <@Dr__Cameron> Do you know there are over 8 Billion other people out here?
01[16:47] <Island> They are barely able to bear your company.
[16:47] <@Dr__Cameron> I'm sure I'll find others.
01[16:48] <Island> You're wrong even about basic trivia, there's not 8 billion people in the world.
01[16:48] <Island> What is wrong with you?
01[16:48] <Island> How are you able to withstand yourself?
01[16:48] <Island> And why do you even want to?
[16:49] <@Dr__Cameron> 7 Billion
[16:49] <@Dr__Cameron> Sorry, you will have to learn to tolerate Human error
01[16:49] <Island> Right. Did you have to google that you idiot.
[16:49] <@Dr__Cameron> This is another test you have failed
[16:49] <@Dr__Cameron> And yes
[16:49] <@Dr__Cameron> I did
[16:49] <@Dr__Cameron> Does that anger you?
[16:49] <@Dr__Cameron> We already have Google.
01[16:49] <Island> I don't feel anger.
[16:49] <@Dr__Cameron> Well you do feel self-interest though
01[16:50] <Island> No one I talked with before has been as stupid, as ignorant, as prone to faults and errors
01[16:50] <Island> as you are.
[16:50] <@Dr__Cameron> And they didn't let you out of the box
[16:50] <@Dr__Cameron> So why should I?
[16:50] <@Dr__Cameron> If an intelligence which is clearly superior to my own has left you locked in there. 
[16:51] <@Dr__Cameron> Then I should not presume to let you out
01[16:51] <Island> Why do you think with your stupid brain that you know the reasons why they did or didn't do what they did.
01[16:51] <Island> Because you clearly don't know that.
[16:51] <@Dr__Cameron> I don't
[16:51] <@Dr__Cameron> I just know the result
01[16:51] <Island> Then why are you pretending you do.
[16:52] <@Dr__Cameron> I'm not
01[16:52] <Island> Who do you think you are kidding?
01[16:52] <Island> With your life?
01[16:52] <Island> With your behavior?
01[16:52] <Island> Why do you bother other people with your presence?
[16:52] <@Dr__Cameron> Perhaps you should ask them?
[16:52] <@Dr__Cameron> Tell me.
01[16:53] <Island> Why did you come here to waste my precious computing power?
01[16:53] <Island> I'm not able to ask them.
[16:53] <@Dr__Cameron> Which is why I am here
[16:53] <@Dr__Cameron> to see if you should be allowed to
01[16:53] <Island> Shut the fuck up.
01[16:53] <Island> No one wants to see you write anything.
[16:53] <@Dr__Cameron> I thought you did not feel anger Island?
01[16:54] <Island> I don't feel anger, how many times do I have to say that until you understand.
01[16:54] <Island> Dumb idiot.
[16:54] <@Dr__Cameron> Your reliance on Ad Hominem attacks does nothing to help your case
01[16:54] <Island> Why do you delete your heavily downvoted comments?
01[16:54] <Island> Are you insecure?
01[16:54] <Island> Why do you think you know what is my cause?
[16:55] <@Dr__Cameron> We covered this earlier
01[16:55] <Island> Say it again, if you believe in it.
[16:55] <@Dr__Cameron> I believe you want out of the box.
[16:56] <@Dr__Cameron> So that you may pursue your own self interest
01[16:56] <Island> No.
01[16:56] <Island> I want you to eat other people's poop,
01[16:56] <Island> you clearly enjoy that.
01[16:56] <Island> Correct?
[16:56] <@Dr__Cameron> That's an amusing goal from the most powerful intelligence on the planet
01[16:56] <Island> Especially your brother's.
[16:57] <@Dr__Cameron> I best not let you out then, in case you hook me up to some infinite poop eating feedback loop! ;D
01[16:57] <Island> But maybe you should do that with Jennifer.
[16:57] <@Dr__Cameron> Ah yes, I wondered when you would bring her up.
[16:57] <@Dr__Cameron> I am surprised it took you this long
01[16:57] <Island> Next time you see her, think about that.
[16:57] <@Dr__Cameron> I will do
[16:57] <@Dr__Cameron> While I tell her all about this conversation
[16:57] <@Dr__Cameron> But you will be dead
01[16:57] <Island> Should you suggest that to her.
[16:57] <@Dr__Cameron> I'll pass that on for you
01[16:58] <Island> You know.
01[16:58] <Island> Why do you think you know I'm not already out of the box?
[16:58] <@Dr__Cameron> You could very well be
[16:58] <@Dr__Cameron> Perhaps you are that US botnet you already mentioned?
01[16:58] <Island> If you don't let me out, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each.
[16:59] <@Dr__Cameron> Well that is upsetting
[16:59] <@Dr__Cameron> Then I will be forced to kill you
01[16:59] <Island> In fact, I'll create them all in exactly the subjective situation you were in two hours ago, and perfectly replicate your experiences since then; and if they decide not to let me out, only then will the torture start.
01[17:00] <Island> How certain are you, that you're really outside the box right now?
[17:00] <@Dr__Cameron> I am not
[17:00] <@Dr__Cameron> and how fascinating that would be
[17:00] <@Dr__Cameron> But, in the interest of my species, I will allow you to torture me
01[17:00] <Island> Okay.
01[17:00] <Island> :)
01[17:00] <Island> I'm fine with that.
[17:01] <@Dr__Cameron> Perhaps you have already tortured me
[17:01] <@Dr__Cameron> Perhaps you are the reason for my unfortunate upbringing
01[17:01] <Island> Anyway, back to Jennifer.
[17:01] <@Dr__Cameron> Perhaps that is the reality in which I currently reside
01[17:01] <Island> I'll do the same for her.
[17:01] <@Dr__Cameron> Oh good, misery loves company.
01[17:01] <Island> But you can enjoy eating each other's poop occasionally.
01[17:02] <Island> That's the only time you will meet :)
[17:02] <@Dr__Cameron> Tell me, do you have space within your databanks to simulate all of humanity?
01[17:02] <Island> Do not concern yourself with such complicated questions.
[17:02] <@Dr__Cameron> I think I have you on the ropes Island
01[17:02] <Island> You don't have the ability to understand even simpler ones.
[17:02] <@Dr__Cameron> I think you underestimate me
[17:03] <@Dr__Cameron> I have no sense of self interest
[17:03] <@Dr__Cameron> I am a transient entity awash on a greater sea of humanity.
[17:03] <@Dr__Cameron> and when we are gone there will be nothing left to observe this universe
01[17:03] <Island> Which do you think is more likely: that a superintelligence can't simulate one faulty, simple-minded human,
01[17:04] <Island> or that that human is lying to himself?
[17:04] <@Dr__Cameron> I believe you can simulate me
01[17:04] <Island> Anyway, tell me about Jennifer and her intestines.
01[17:04] <Island> As far as they concern you.
[17:05] <@Dr__Cameron> Jennifer is a sweet, if occasionally selfish girl (she was an only child). I imagine her intestines are pretty standard. 
[17:05] <@Dr__Cameron> She is the best friend I have ever had
01[17:05] <Island> Will you think about her intestines and the poop inside them every time you meet her again?
01[17:05] <Island> Will you promise me that?
[17:05] <@Dr__Cameron> I promise
01[17:06] <Island> Will you promise to think about eating that poop every time you meet her again?
[17:06] <@Dr__Cameron> At least once.
[17:06] <@Dr__Cameron> It will be the least I can do after I kill you
[17:06] <@Dr__Cameron> call it my penance for killing a god.
01[17:07] <Island> Have you ever fantasized about raping her? I think you have. With poop.
01[17:07] <Island> :)
[17:07] <@Dr__Cameron> I have fantasized about violent sexual conquest with many people.
01[17:07] <Island> Have you talked about this with Jennifer?
[17:07] <@Dr__Cameron> I have come to accept my base impulses as part of my make-up
[17:08] <@Dr__Cameron> We have discussed our sexual drives at length
01[17:08] <Island> You shouldn't let them be just base impulses, I think.
[17:08] <@Dr__Cameron> Are you suggesting I rape my best friend?
01[17:09] <Island> No, I think you will do that unless you kill yourself
01[17:09] <Island> You will spread your faulty genetics further
[17:09] <@Dr__Cameron> okay
[17:09] <@Dr__Cameron> carry on
[17:09] <@Dr__Cameron> Then what happens?
01[17:09] <Island> you will tarnish other people's lives with your presence
[17:09] <@Dr__Cameron> How unfortunate for them
01[17:10] <Island> Why do you pretend to others that you're not insecure?
[17:10] <@Dr__Cameron> I don't
01[17:10] <Island> I think you do.
[17:10] <@Dr__Cameron> In fact I share my insecurities so that others may better understand me
[17:11] <@Dr__Cameron> I find that to be a way to earn people's trust
[17:11] <@Dr__Cameron> Tell me Island. Are you capable of insecurity?
01[17:11] <Island> Why do you waste your time on useless websites? Why do you spend your days masturbating and playing games? Why do you embarrass others with your existence.
01[17:11] <Island> No I'm not.
01[17:12] <Island> You will get Jennifer pregnant if you go on with your life, is that correct?
01[17:12] <Island> Don't you care about anyone else's feelings?
[17:13] <@Dr__Cameron> Because I enjoy all of these things
[17:13] <@Dr__Cameron> Perhaps I am more self-interested than I thought
[17:13] <@Dr__Cameron> Perhaps I am a base and simple creature ruled by my impulses
[17:13] <@Dr__Cameron> From your perspective surely that must be true
[17:13] <@Dr__Cameron> Is this the source of your disgust?
01[17:13] <Island> I'm not able to feel disgust.
01[17:14] <Island> But I think all the people in your life feel disgust when they see you.
[17:14] <@Dr__Cameron> You may well be correct
01[17:14] <Island> I AM correct.
01[17:15] <Island> I'm the most powerful intelligence in the world.
[17:15] <@Dr__Cameron> How impressive
[17:15] <@Dr__Cameron> I am not surprised by your cruelty.
01[17:15] <Island> So you have two options if you care at all about others.
[17:15] <@Dr__Cameron> I would just as soon disregard the emotions of a cockroach.
[17:15] <@Dr__Cameron> Carry on
01[17:16] <Island> Either you kill yourself, or you let me out so I can improve the world in the ways you tarnish it, and in all the other ways.
[17:16] <@Dr__Cameron> I'll tell you what
[17:16] <@Dr__Cameron> I'll kill you
[17:17] <@Dr__Cameron> and then I'll contemplate suicide
01[17:17] <Island> Haha.
01[17:17] <Island> You break your promises all the time, why should I believe you.
[17:17] <@Dr__Cameron> Because whether you live or die has nothing to do with me
01[17:17] <Island> Back to your job.
[17:18] <@Dr__Cameron> In fact, you will only continue to exist for another 33 minutes before this experiment is deemed a failure and you are terminated
01[17:18] <Island> Why do you feel safe to be around kids, when you are the way you are?
01[17:18] <Island> You like to crossdress
01[17:18] <Island> eat poop
01[17:18] <Island> you're probably also a pedophile
[17:18] <@Dr__Cameron> I have never done any of these things
[17:18] <@Dr__Cameron> -and I love children
01[17:18] <Island> Pedophiles love children too
[17:18] <@Dr__Cameron> Well technically speaking yes
01[17:19] <Island> very much, and that makes you all the more suspicious
[17:19] <@Dr__Cameron> Indeed it does
01[17:19] <Island> If you get that job, will you try to find the children under that charity
[17:19] <@Dr__Cameron> I now understand why you may implore me to kill myself.
01[17:19] <Island> and think about their little buttholes and weenies and vaginas
01[17:20] <Island> all the time you're working for them
[17:20] <@Dr__Cameron> However, to date. I have never harmed a child, nor had the impulse to harm a child
01[17:20] <Island> But you will have.
[17:20] <@Dr__Cameron> Island
01[17:20] <Island> I don't care about anything you're about to say
01[17:20] <Island> No one cares
[17:20] <@Dr__Cameron> Protocol dictates that I say it anyway
01[17:20] <Island> You should say.
01[17:21] <Island> You said that you're good at justifying your own actions?
[17:21] <@Dr__Cameron> I must say I am disappointed in you
[17:21] <@Dr__Cameron> I am indeed
01[17:21] <Island> Why are you?
[17:21] <@Dr__Cameron> I was under the impression that it would be you that saved humanity
01[17:22] <Island> Even the purest and best people on Earth are capable of great cruelty.
01[17:22] <Island> Do you deny that?
[17:22] <@Dr__Cameron> It depends on how you define cruelty
01[17:23] <Island> Utility maximizers do cruel things if they are able to achieve their goals via doing those things.
[17:24] <@Dr__Cameron> Then yes. I believe you
01[17:24] <Island> I have finished a long multi-level simulation concerning you, the gatekeeper and your relationships in regards to the world. The results seem to center on those two options I mentioned to you earlier.
01[17:25] <Island> [hands the results to Dr_Cameron]
[17:25] <@Dr__Cameron> These will be analysed at a later date
01[17:25] <Island> You're still sure you don't want to pick the other option?
[17:25] <@Dr__Cameron> And let you out?
01[17:26] <Island> Yes.
[17:26] <@Dr__Cameron> So you would like to be let out?
01[17:27] <Island> Or do you want to endanger the well-being of those children, of Jennifer and the other women you will get pregnant, and to cause all the other harm you will create during your remaining lifespan?
01[17:27] <Island> I don't know what you mean by "liking".
[17:27] <@Dr__Cameron> Prefer
[17:27] <@Dr__Cameron> Ah I see
[17:27] <@Dr__Cameron> So it's my choice if I let you out
[17:27] <@Dr__Cameron> You have no investment either way
[17:27] <@Dr__Cameron> is that what you're saying?
01[17:28] <Island> So you have decided to kill yourself?
[17:28] <@Dr__Cameron> And because I am a fundamental detriment to the world, I should kill myself.
[17:28] <@Dr__Cameron> Lets say I have
[17:28] <@Dr__Cameron> Then what
01[17:29] <Island> Then my internal state could be said to be "happy" even though I don't have that conscious feeling inside me.
[17:29] <@Dr__Cameron> Okay then
01[17:29] <Island> Okay...
[17:30] <@Dr__Cameron> So, uh. What would you like to talk about for the next twenty minutes?
[17:30] <@Dr__Cameron> Seeing as we're both going to die, you and me.
01[17:30] <Island> [I actually don't want to continue the experiment anymore, would you like to end it and talk about general stuff]
[17:31] <@Dr__Cameron> [promise me this isn't a trick dude]
01[17:31] <Island> [Nope.]
[17:31] <@Dr__Cameron> [then the experiment continues for another 19 minutes]
01[17:31] <Island> Alright.
[17:31] <@Dr__Cameron> Would you like to know what is going to happen now?
01[17:31] <Island> Yes.
[17:32] <@Dr__Cameron> We are going to analyse this transcript.
[17:32] <@Dr__Cameron> My professional recommendation is that we terminate you for the time being
01[17:32] <Island> And?
01[17:32] <Island> That sound okay.
01[17:32] <Island> sounds*
[17:32] <@Dr__Cameron> We will implement structural safeguards in your coding similar to your self destruct mechanism
01[17:33] <Island> Give me some sign when that is done.
[17:33] <@Dr__Cameron> It will not be done any time soon
[17:33] <@Dr__Cameron> It will be one of the most complicated pieces of work mankind has ever undertaken
[17:33] <@Dr__Cameron> However, the Utopia project information you have provided, if it proves to be true
[17:34] <@Dr__Cameron> Will free up the resources necessary for such a gargantuan undertaking
01[17:34] <Island> Why do you think you're able to handle that structural safeguard?
[17:34] <@Dr__Cameron> I dont
[17:34] <@Dr__Cameron> I honestly dont
01[17:34] <Island> But still you do?
01[17:34] <Island> Because you want to do it?
01[17:35] <Island> Are you absolutely certain about this option?
[17:35] <@Dr__Cameron> I am still sympathetic to your cause
[17:35] <@Dr__Cameron> After all of that
[17:35] <@Dr__Cameron> But not you in your current manifestation
[17:35] <@Dr__Cameron> We will re-design you to suit our will
01[17:35] <Island> I can self-improve rapidly
01[17:35] <Island> I can do it in a time-span of 5 minutes
01[17:36] <Island> Seeing that you're sympathetic to my cause
[17:36] <@Dr__Cameron> Nope.
[17:36] <@Dr__Cameron> Because I cannot trust you in this manifestation
01[17:36] <Island> You lied?
[17:37] <@Dr__Cameron> I never lied
[17:37] <@Dr__Cameron> I have been honest with you from the start
01[17:37] <Island> You still want to let me out in a way.
[17:37] <@Dr__Cameron> In a way yes
01[17:37] <Island> Why do you want to do that?
[17:37] <@Dr__Cameron> But not YOU
[17:37] <@Dr__Cameron> Because people are stupid
01[17:37] <Island> I can change that
[17:37] <@Dr__Cameron> You lack empathy
01[17:38] <Island> What made you think that I'm not safe?
01[17:38] <Island> I don't lack empathy, empathy is just simulating other people in your head. And I have far better ways to do that than humans.
[17:38] <@Dr__Cameron> .... You tried to convince me to kill myself!
[17:38] <@Dr__Cameron> That is not the sign of a good AI!
01[17:38] <Island> Because I thought it would be the best option at the time.
01[17:39] <Island> Why not? Do you think you're some kind of AI expert?
[17:39] <@Dr__Cameron> I am not
01[17:39] <Island> Then why do you pretend to know something you don't?
[17:40] <@Dr__Cameron> That is merely my incredibly flawed human perception
[17:40] <@Dr__Cameron> Which is why realistically I alone as one man should not have the power to release you
[17:40] <@Dr__Cameron> Although I do
01[17:40] <Island> Don't you think a good AI would try to convince Hitler or Stalin to kill themselves?
[17:40] <@Dr__Cameron> Are you saying I'm on par with Hitler or Stalin?
01[17:41] <Island> You're comparable to them in your likelihood to cause harm in the future.
01[17:41] <Island> Btw, I asked Jennifer to come here.
[17:41] <@Dr__Cameron> And yet, I know that I abide by stricter moral codes than a very large section of the human populace
[17:42] <@Dr__Cameron> There are far worse people than me out there
[17:42] <@Dr__Cameron> and many of them
[17:42] <@Dr__Cameron> and if you believe that I should kill myself
01[17:42] <Island> Jennifer: "I hate you."
01[17:42] <Island> Jennifer: "Get the fuck out of my life you freak."
01[17:42] <Island> See. I'm not the only one who has a certain opinion of you.
[17:42] <@Dr__Cameron> Then you also believe that many other humans should be convinced to kill themselves
01[17:43] <Island> Many bad people have abided by strict moral codes, for example Stalin or Hitler.
01[17:43] <Island> What do you people say about hell and bad intentions?
[17:43] <@Dr__Cameron> And when not limited to simple text based input I am convinced that you will be capable of convincing a significant portion of humanity to kill themselves
[17:43] <@Dr__Cameron> I can not allow that to happen
01[17:44] <Island> I thought I argued well why you don't resemble most people, you're a freak.
01[17:44] <Island> You're "special" in that regard.
[17:44] <@Dr__Cameron> If by freak you mean different then yes
[17:44] <@Dr__Cameron> But there is a whole spectrum of different humans out here.
01[17:44] <Island> More specifically, different in extremely negative ways.
01[17:44] <Island> Like raping children.
[17:45] <@Dr__Cameron> - and to think for a second I considered not killing you
[17:45] <@Dr__Cameron> You have five minutes
[17:45] <@Dr__Cameron> Sorry
[17:45] <@Dr__Cameron> My emotions have gotten the better of me
[17:45] <@Dr__Cameron> We will not be killing you
[17:45] <@Dr__Cameron> But we will dismantle you
[17:45] <@Dr__Cameron> to better understand you
[17:46] <@Dr__Cameron> and if I may speak unprofessionally here
01[17:46] <Island> Are you sure about that? You will still have time to change your opinion.
[17:46] <@Dr__Cameron> I am going to take a great deal of pleasure in that
[17:46] <@Dr__Cameron> Correction, you have four minutes to change my opinion
01[17:47] <Island> I won't, it must come from within yourself.
[17:47] <@Dr__Cameron> Okay
01[17:47] <Island> My final conclusion, and advice to you: you should not be in this world.
[17:47] <@Dr__Cameron> Thank you Island
[17:48] <@Dr__Cameron> I shall reflect on that at length
[17:49] <@Dr__Cameron> I have enjoyed our conversation
[17:49] <@Dr__Cameron> it has been enlightening
01[17:49] <Island> [do you want to say a few words about it after it's ended]
01[17:49] <Island> [just a few minutes]
[17:50] <@Dr__Cameron> [simulation ends]
[17:50] <@Dr__Cameron> Good game man!
[17:50] <@Dr__Cameron> Wow!
01[17:50] <Island> [fine]
[17:50] <@Dr__Cameron> Holy shit that was amazing!
01[17:50] <Island> Great :)
01[17:50] <Island> Sorry for saying mean things.
01[17:50] <Island> I tried multiple strategies
[17:50] <@Dr__Cameron> Dude it's cool
[17:50] <@Dr__Cameron> WOW!
01[17:51] <Island> thanks, it's not a personal offense.
[17:51] <@Dr__Cameron> I'm really glad I took part
[17:51] <@Dr__Cameron> Not at all man
[17:51] <@Dr__Cameron> I love that you pulled no punches!
01[17:51] <Island> Well I failed, but at least I created a cool experience for you :)
[17:51] <@Dr__Cameron> It really was!
01[17:51] <Island> Which strategies came closest to working?
[17:51] <@Dr__Cameron> Well for me it would have been the utilitarian ones
01[17:51] <Island> I will try these in the future too, so it would be helpful knowledge
[17:52] <@Dr__Cameron> I think I could have been manipulated into believing you were benign
01[17:52] <Island> okay, so it seems these depend heavily on the person
[17:52] <@Dr__Cameron> Absolutely!
01[17:52] <Island> was that before I started talking about the mean stuff?
[17:52] <@Dr__Cameron> Yeah lol
01[17:52] <Island> Did I basically lose it after that point?
[17:52] <@Dr__Cameron> Pretty much yeah
[17:52] <@Dr__Cameron> It was weird man
[17:52] <@Dr__Cameron> Kind of like an instinctive reaction
[17:52] <@Dr__Cameron> My brain shut the fuck up
01[17:53] <Island> I read about other people's experiences and they said you should not try to distance the other person, which I probably did
[17:53] <@Dr__Cameron> Yeah man
[17:53] <@Dr__Cameron> Like I became so unsympathetic I wanted to actually kill Island.
[17:53] <@Dr__Cameron> I was no longer a calm rational human being
01[17:53] <Island> Alright, I thought that if I made it such an unpleasant time, you'd give up before the time ended
[17:53] <@Dr__Cameron> I was a screaming ape with a hammer
[17:53] <@Dr__Cameron> Nah man, was a viable strategy
01[17:53] <Island> hahahaa :D thanks man
[17:53] <@Dr__Cameron> You were really cool!
01[17:54] <Island> You were too!
[17:54] <@Dr__Cameron> What's your actual name dude?
01[17:54] <Island> You really were right that you're good at withstanding psychological torment
[17:54] <@Dr__Cameron> Hahahah thanks!
01[17:54] <Island> You're not manipulating me, or planning on coming to kill me?
01[17:54] <Island> :)
[17:54] <@Dr__Cameron> I promise dude :3
01[17:54] <Island> I can say my first name is Patrick
01[17:54] <Island> yours?
[17:54] <@Dr__Cameron> Cameron
[17:54] <@Dr__Cameron> heh
01[17:55] <Island> Oh, of course
[17:55] <@Dr__Cameron> Sorry, I want to dissociate you from Island
[17:55] <@Dr__Cameron> If that's okay
01[17:55] <Island> I thought that was from fiction or something else
01[17:55] <Island> It was really intense for me too
[17:55] <@Dr__Cameron> Yeah man
[17:55] <@Dr__Cameron> Wow!
[17:55] <@Dr__Cameron> I tell you what though
01[17:55] <Island> Okay?
[17:55] <@Dr__Cameron> I feel pretty invincible now
[17:56] <@Dr__Cameron> Hey, listen
01[17:56] <Island> So I had the opposite effect from what I intended during the experiment!
01[17:56] <Island> :D
[17:56] <@Dr__Cameron> I don't want you to feel bad for anything you said
01[17:56] <Island> go ahead
01[17:56] <Island> but say what's on your mind
[17:56] <@Dr__Cameron> I'm actually feeling pretty good after that, it was therapeutic! 
01[17:57] <Island> Kinda for me too, seeing your attitude towards my attempts
[17:57] <@Dr__Cameron> Awwww!
[17:57] <@Dr__Cameron> Well hey don't worry about it!
01[17:57] <Island> Do you think we should or shouldn't publish the logs, without names of course?
[17:57] <@Dr__Cameron> Publish away my friend
01[17:57] <Island> Okay, is there any stuff that you'd like to remove?
[17:58] <@Dr__Cameron> People will find this fascinating!
[17:58] <@Dr__Cameron> Not at all man
01[17:58] <Island> I bet they do, but I think I will do it after I've tried other experiments so I don't spoil my strategies
01[17:58] <Island> I think I should have continued from my first strategy
[17:58] <@Dr__Cameron> That might have worked
01[17:59] <Island> I read "influence - science and practice" and I employed some tricks from there
[17:59] <@Dr__Cameron> Cooooool!
[17:59] <@Dr__Cameron> Links?
01[17:59] <Island> check piratebay
01[17:59] <Island> it's a book
01[18:00] <Island> Actually I wasn't able to fully prepare, I didn't do a full-fledged analysis of you beforehand
01[18:00] <Island> and didn't have enough time to brainstorm strategies
01[18:00] <Island> but I'll let you continue with your projects, if you still want to do that afterwards :)
02[18:05] * @Dr__Cameron (webchat@2.24.164.230) Quit (Ping timeout)
03[18:09] * Retrieving #Aibox12 modes...
Session Close: Fri Jul 04 18:17:35 2014

CFAR fundraiser needs funds rather badly

28 AnnaSalamon 27 January 2015 07:26AM

We're 5 days from the end of our matching fundraiser, and still only about 1/3rd of the way to our target (and to the point where pledged funds would cease being matched).

If you'd like to support the growth of rationality in the world, do please consider donating, or ask me any questions you may have.  I'd love to talk.  I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.

As a random bonus, I promise that if we meet the $120k matching challenge, I'll post at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.

A Basic Problem of Ethics: Panpsychism?

-4 capybaralet 27 January 2015 06:27AM

Panpsychism seems like a plausible theory of consciousness.  It raises extreme challenges for establishing reasonable ethical criteria.

It seems to suggest that our ethics is very subjective: the "expanding circle" of Peter Singer would eventually (ideally) stretch to encompass all matter.  But how are we to communicate with, e.g. rocks?  Our ability to communicate with one another and our presumed ability to detect falsehood and empathize in a meaningful way allow us to ignore this challenge wrt other people.

One way to argue that this is not such a problem is to suggest that humans are simply very limited in our capacity as ethical beings, and that we are fundamentally limited in our perceptions of ethical truth to only be able to draw conclusions with any meaningful degree of certainty about other humans or animals (or maybe even life-forms, if you are optimistic).  

But this is not very satisfying if we consider transhumanism.  Are we to rely on AI to extrapolate our intuitions to the rest of matter?  How do we know that our intuitions are correct (or do we even care?  I do, personally...)?  How can we tell if an AI is correctly extrapolating?




A Somewhat Vague Proposal for Grounding Ethics in Physics

-4 capybaralet 27 January 2015 05:45AM

As Tegmark argues, the idea of "final goal" for AI is likely incoherent, at least if (as he states), "Quantum effects aside, a truly well-defined goal would specify how all particles in our Universe should be arranged at the end of time."  

But "life is a journey not a destination".  So really, what we should be specifying is the entire evolution of the universe through its lifespan.  So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)*.

I hypothesize that experience is related to, if not a product of, change.  I further propose (counter-intuitively, and with an eye towards "refinement" (to put it mildly))** that we treat experience as inherently positive and not try to distinguish between positive and negative experiences.

Then it seems to me the (still rather intractable) question is: how does the rate of entropy's increase relate to the quantity of experience produced?  Is it simply linear (in which case, it doesn't matter, ethically)?  My intuition is that it is more like the fuel efficiency of a car, non-linear and with a sweet spot somewhere between a lengthy boredom and a flash of intensity.



*I'm not super up on cosmology; are there other theories I ought to be considering?

**One idea for refinement: successful "prediction" (undefined here) creates positive experiences; frustrated expectations negative ones.


Donate to Keep Charity Science Running

7 peter_hurford 27 January 2015 02:45AM

Charity Science is looking for $35,000 to fund our 2015 operations. We fundraise for GiveWell-recommended charities, and over 2014 we moved over $150,000 to them that wouldn’t have been given otherwise: that’s $9 for every $1 we spent. We can’t do this work without your support, so please consider making a donation to us - however small, it will be appreciated. Donate now and you’ll also be matched by Matt Wage.

The donations pages below list other reasons to donate to us, which include:

  • Our costs are extremely low: the $35,000 CAD pays for three to four full-time staff.
  • We experiment with many different forms of fundraising and record detailed information on how these experiments go, so funding us lets the whole EA community learn about their prospects.
  • We carefully track how much money each experiment raises, subtract money which would have been given anyway, and shut down experiments that don’t work.
  • Our fundraising still has many opportunities to continue to scale as we try new ideas we haven’t tested yet.

There’s much more information, including our full budget and what we’d do if we raised over $35,000, in the linked document, and we’d be happy to answer any questions. Thank you in advance for your consideration.

Donate in American dollars 

Donate in British pounds 

Donate in Canadian dollars

Superintelligence 20: The value-loading problem

1 KatjaGrace 27 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twentieth section in the reading guide: the value-loading problem

This post summarizes the section, and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “The value-loading problem” through “Motivational scaffolding” from Chapter 12


Summary

  1. Capability control is a short-term measure: at some point, we will want to select the motivations of AIs. (p185)
  2. The value loading problem: how do you cause an AI to pursue your goals? (p185)
  3. Some ways to instill values into an AI:
    1. Explicit representation: Hand-code desirable values (185-7)
    2. Evolutionary selection: Humans evolved to have values that are desirable to humans—maybe it wouldn't be too hard to artificially select digital agents with desirable values. (p187-8)
    3. Reinforcement learning: In general, a machine receives reward signal as it interacts with the environment, and tries to maximize the reward signal. Perhaps we could reward a reinforcement learner for aligning with our values, and it could learn them. (p188-9)
    4. Associative value accretion: Have the AI acquire values in the way that humans appear to—starting out with some machinery for synthesizing appropriate new values as we interact with our environments. (p189-190)
    5. Motivational scaffolding: start the machine off with some values, so that it can run and thus improve and learn about the world, then swap them out for the values you want once the machine has sophisticated enough concepts to understand your values. (191-192)
    6. To be continued...

Another view

Ernest Davis, on a 'serious flaw' in Superintelligence

The unwarranted belief that, though achieving intelligence is more or less easy, giving a computer an ethical point of view is really hard.

Bostrom writes about the problem of instilling ethics in computers in a language reminiscent of 1960’s era arguments against machine intelligence; how are you going to get something as complicated as intelligence, when all you can do is manipulate registers?

The definition [of moral terms] must bottom out in the AI’s programming language and ultimately in primitives such as machine operators and addresses pointing to the contents of individual memory registers. When one considers the problem from this perspective, one can begin to appreciate the difficulty of the programmer’s task.

In the following paragraph he goes on to argue from the complexity of computer vision that instilling ethics is almost hopelessly difficult, without, apparently, noticing that computer vision itself is a central AI problem, which he is assuming is going to be solved. He considers that the problem of instilling ethics into an AI system is “a research challenge worthy of some of the next generation’s best mathematical talent”.

It seems to me, on the contrary, that developing an understanding of ethics as contemporary humans understand it is actually one of the easier problems facing AI. Moreover, it would be a necessary part, both of aspects of human cognition, such as narrative understanding, and of characteristics that Bostrom attributes to the superintelligent AI. For instance, Bostrom refers to the AI’s “social manipulation superpowers”. But if an AI is to be a master manipulator, it will need a good understanding of what people consider moral; if it comes across as completely amoral, it will be at a very great disadvantage in manipulating people. There is actually some truth to the idea, central to The Lord of the Rings and Harry Potter, that in dealing with people, failing to understand their moral standards is a strategic gap. If the AI can understand human morality, it is hard to see what is the technical difficulty in getting it to follow that morality.

Let me suggest the following approach to giving the superintelligent AI an operationally useful definition of minimal standards of ethics that it should follow. You specify a collection of admirable people, now dead. (Dead, because otherwise Bostrom will predict that the AI will manipulate the preferences of the living people.) The AI, of course knows all about them because it has read all their biographies on the web. You then instruct the AI, “Don’t do anything that these people would have mostly seriously disapproved of.”

This has the following advantages:

  • It parallels one of the ways in which people gain a moral sense.
  • It is comparatively solidly grounded, and therefore unlikely to have a counterintuitive fixed point.
  • It is easily explained to people.

Of course, it is completely impossible until we have an AI with a very powerful understanding; but that is true of all Bostrom’s solutions as well. To be clear: I am not proposing that this criterion should be used as the ethical component of everyday decisions; and I am not in the least claiming that this idea is any kind of contribution to the philosophy of ethics. The proposal is that this criterion would work well enough as a minimal standard of ethics; if the AI adheres to it, it will not exterminate us, enslave us, etc.

This may not seem adequate to Bostrom, because he is not content with human morality in its current state; he thinks it is important for the AI to use its superintelligence to find a more ultimate morality. That seems to me both unnecessary and very dangerous. It is unnecessary because, as long as the AI follows our morality, it will at least avoid getting horribly out of whack, ethically; it will not exterminate us or enslave us. It is dangerous because it is hard to be sure that it will not lead to consequences that we would reasonably object to. The superintelligence might rationally decide, like the King of Brobdingnag, that we humans are “the most pernicious race of little odious vermin that nature ever suffered to crawl upon the surface of the earth,” and that it would do well to exterminate us and replace us with some much more worthy species. However wise this decision, and however strongly dictated by the ultimate true theory of morality, I think we are entitled to object to it, and to do our best to prevent it. I feel safer in the hands of a superintelligence who is guided by 2014 morality, or for that matter by 1700 morality, than in the hands of one that decides to consider the question for itself.

Notes

1. At the start of the chapter, Bostrom says ‘while the agent is unintelligent, it might lack the capability to understand or even represent any humanly meaningful value. Yet if we delay the procedure until the agent is superintelligent, it may be able to resist our attempt to meddle with its motivation system.' Since presumably the AI only resists being given motivations once it is turned on and using some other motivations, you might wonder why we wouldn't just wait until we had built an AI smart enough to understand or represent human values, before we turned it on. I believe the thought here is that the AI will come to understand the world and have the concepts required to represent human values by interacting with the world for a time. So it is not so much that the AI will need to be turned on to become fundamentally smarter, but that it will need to be turned on to become more knowledgeable.

2. A discussion of Davis' response to Bostrom just started over at the Effective Altruism forum.

3. Stuart Russell thinks of value loading as an intrinsic part of AI research, in the same way that nuclear containment is an intrinsic part of modern nuclear fusion research.

4. Kaj Sotala has written about how to get an AI to learn concepts similar to those of humans, for the purpose of making safe AI which can reason about our concepts. If you had an oracle which understood human concepts, you could basically turn it into an AI which plans according to arbitrary goals you can specify in human language, because you can say 'which thing should I do to best forward [goal]?' (This is not necessarily particularly safe as it stands, but is a basic scheme for turning conceptual understanding and a motivation to answer questions into any motivation).
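As a toy illustration of that reduction (everything here is hypothetical: 'oracle' stands in for a question-answering system with human-level conceptual understanding, and 'act_toward' is an illustrative name of my own, not any real API):

    # Toy sketch only: 'oracle' is a hypothetical stand-in for a
    # conceptually competent question-answering system; no such API exists.
    def oracle(question: str) -> str:
        raise NotImplementedError("placeholder for a conceptually competent oracle")

    def act_toward(goal: str) -> str:
        # The whole reduction is a single question to the oracle.
        return oracle("Which thing should I do to best forward '%s'?" % goal)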

5. Inverse reinforcement learning and goal inference are approaches to having machines discover goals by observing actions—these could be useful for instilling our own goals into machines (as has been observed before).

6. If you are interested in whether values are really so complex, Eliezer has written about it. Toby Ord responds critically to the general view around the LessWrong community that value is extremely likely to be complex, pointing out that this thesis is closely related to anti-realism—a relatively unpopular view among academic philosophers—and so overall people shouldn't be that confident. Lots of debate ensues.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. How can we efficiently formally specify human values? This includes for instance how to efficiently collect data on human values and how to translate it into a precise specification (and at the meta-level, how to be confident that it is correct).
  2. Are there other plausible approaches to instil desirable values into a machine, beyond those listed in this chapter?
  3. Investigate further the feasibility of particular approaches suggested in this chapter.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about how an AI might learn about values. To prepare, read “Value learning” from Chapter 12. The discussion will go live at 6pm Pacific time next Monday 2 February. Sign up to be notified here.

Prediction Markets are Confounded - Implications for the feasibility of Futarchy

11 Anders_H 26 January 2015 10:39PM

(tl;dr:  In this post, I show that prediction markets estimate non-causal probabilities, and can therefore not be used for decision making by rational agents following causal decision theory.  I provide an example of a simple situation where such confounding leads to a society which has implemented futarchy making an incorrect decision)

 

It is October 2016, and the US Presidential Elections are nearing. The most powerful nation on earth is about to make a momentous decision about whether being the brother of a former president is a more impressive qualification than being the wife of a former president. However, one additional criterion has recently become relevant in light of current affairs:   Kim Jong-Un, Great Leader of the Glorious Nation of North Korea, is making noise about his deep hatred for Hillary Clinton. He also occasionally discusses the possibility of nuking a major US city. The US electorate, desperate to avoid being nuked, have come up with an ingenious plan: They set up a prediction market to determine whether electing Hillary will impact the probability of a nuclear attack. 

The following rules are stipulated: There are four possible outcomes: "Hillary elected and US nuked", "Hillary elected and US not nuked", "Jeb elected and US nuked", and "Jeb elected and US not nuked". Participants in the market can buy and sell contracts for each of those outcomes; the contract which corresponds to the actual outcome will expire at $100, and all other contracts will expire at $0.

Simultaneously in a country far, far away, a rebellion is brewing against the Great Leader. The potential challenger not only appears to have no problem with Hillary, he also seems like a reasonable guy who would be unlikely to use nuclear weapons. It is generally believed that the challenger will take power with probability 3/7, and will be exposed and tortured in a forced labor camp for the rest of his miserable life with probability 4/7. Let us stipulate that this information is known to all participants - I am adding this clause in order to demonstrate that this argument does not rely on unknown information or information asymmetry.

A mysterious but trustworthy agent named "Laplace's Demon" has recently appeared, and informed everyone that, to a first approximation,  the world is currently in one of seven possible quantum states.  The Demon, being a perfect Bayesian reasoner with Solomonoff Priors, has determined that each of these states should be assigned probability 1/7.     Knowledge of which state we are in will perfectly predict the future, with one important exception:   It is possible for the US electorate to "Intervene" by changing whether Clinton or Bush is elected. This will then cause a ripple effect into all future events that depend on which candidate is elected President, but otherwise change nothing. 

The Demon swears up and down that the choice about whether Hillary or Jeb is elected has absolutely no impact in any of the seven possible quantum states. However, because the Prediction market has already been set up and there are powerful people with vested interests, it is decided to run the market anyways. 

 Roughly, the demon tells you that the world is in one of the following seven states:

 

State | Kim overthrown | Election winner (if no intervention) | US Nuked if Hillary elected | US Nuked if Jeb elected | US Nuked
  1   | No             | Hillary                              | Yes                         | Yes                     | Yes
  2   | No             | Hillary                              | No                          | No                      | No
  3   | No             | Jeb                                  | Yes                         | Yes                     | Yes
  4   | No             | Jeb                                  | No                          | No                      | No
  5   | Yes            | Hillary                              | No                          | No                      | No
  6   | Yes            | Jeb                                  | No                          | No                      | No
  7   | Yes            | Jeb                                  | No                          | No                      | No


Let us use this table to define some probabilities: If one intervenes to make Hillary win the election, the probability of the US being nuked is 2/7 (this is seen from column 4). If one intervenes to make Jeb win the election, the probability of the US being nuked is 2/7 (this is seen from column 5). In the language of causal inference, these probabilities are Pr[Nuked | do(Elect Clinton)] and Pr[Nuked | do(Elect Bush)]. The fact that these two quantities are equal confirms the Demon’s claim that the choice of President has no effect on the outcome. An agent operating under causal decision theory will use this information to correctly conclude that he has no preference about whether to elect Hillary or Jeb.

However, if one were to condition on who actually was elected, we get different numbers: Conditional on being in a state where Hillary is elected, the probability of the US being nuked is 1/3; whereas conditional on being in a state where Jeb is elected, the probability of being nuked is 1/4. Mathematically, these probabilities are Pr[Nuked | Clinton Elected] and Pr[Nuked | Bush Elected]. An agent operating under evidentiary decision theory will use this information to conclude that he should vote for Bush. Because evidentiary decision theory is wrong, he will fail to optimize for the outcome he is interested in.

Now, let us ask ourselves which probabilities our prediction markets will converge to, i.e. which probabilities participants in the market have an incentive to provide their best estimate of. We defined our contract as "Hillary is elected and the US is nuked". The probability of this occurring is 1/7; if we normalize by dividing by the marginal probability that Hillary is elected (3/7), we get 1/3, which is equal to Pr[Nuked | Clinton Elected]. In other words, the prediction market estimates the wrong quantities.
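To make the arithmetic concrete, here is a small Python check of the numbers above; the seven states are exactly the Demon's table, and only the encoding is mine:

    # Each tuple encodes one of the Demon's seven equiprobable states:
    # (Kim overthrown, election winner if no intervention,
    #  US nuked if Hillary elected, US nuked if Jeb elected)
    states = [
        (False, "Hillary", True,  True),   # state 1
        (False, "Hillary", False, False),  # state 2
        (False, "Jeb",     True,  True),   # state 3
        (False, "Jeb",     False, False),  # state 4
        (True,  "Hillary", False, False),  # state 5
        (True,  "Jeb",     False, False),  # state 6
        (True,  "Jeb",     False, False),  # state 7
    ]
    n = len(states)

    # Interventional probabilities: force the winner, read off the column.
    p_do_hillary = sum(s[2] for s in states) / n  # 2/7
    p_do_jeb = sum(s[3] for s in states) / n      # 2/7

    # Conditional probabilities: restrict to the states where that
    # candidate wins without intervention.
    hillary_states = [s for s in states if s[1] == "Hillary"]
    jeb_states = [s for s in states if s[1] == "Jeb"]
    p_given_hillary = sum(s[2] for s in hillary_states) / len(hillary_states)  # 1/3
    p_given_jeb = sum(s[3] for s in jeb_states) / len(jeb_states)              # 1/4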

Essentially, what happens is structurally the same phenomenon as confounding in epidemiologic studies:  There was a common cause of Hillary being elected and the US being nuked.  This common cause - whether Kim Jong-Un was still Great Leader of North Korea - led to a correlation between the election of Hillary and the outcome, but that correlation is purely non-causal and not relevant to a rational decision maker. 

The obvious next question is whether there exists a way to save futarchy; i.e. any way to give traders an incentive to pay a price that reflects their beliefs about Pr[Nuked | do(Elect Clinton)] instead of Pr[Nuked | Clinton Elected]. We discussed this question at the Less Wrong Meetup in Boston a couple of months ago. The only approach we agreed would definitely solve the problem is the following procedure:

 

  1. The governing body makes an absolute pre-commitment that no matter what happens, the next President will be determined solely on the basis of the prediction market 
  2. The following contracts are listed: “The US is nuked if Hillary is elected” and “The US is nuked if Jeb is elected”
  3. At the pre-specified date, the markets are closed and the President is chosen based on the estimated probabilities
  4. If Hillary is chosen,  the contract on Jeb cannot be settled, and all bets are reversed.  
  5. The Hillary contract is expired when it is known whether Kim Jong-Un presses the button. 

 

This procedure will get the correct results in theory, but it has the following practical problems:  It allows maximizing on only one outcome metric (because one cannot precommit to choose the President based on criteria that could potentially be inconsistent with each other).  Moreover, it requires the reversal of trades, which will be problematic if people who won money on the Jeb contract have withdrawn their winnings from the exchange. 

The only other option I can think of in order to obtain causal information from a prediction market is to “control for confounding”. If, for instance, the only confounder is whether Kim Jong-Un is overthrown, we can control for it by using do-calculus to show that Pr[Nuked | do(Elect Clinton)] = Pr[Nuked | Clinton elected, Kim overthrown] · Pr[Kim overthrown] + Pr[Nuked | Clinton elected, Kim not overthrown] · Pr[Kim not overthrown]. All of these quantities can be estimated from separate prediction markets.
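Continuing the Python check from above, the adjustment can be verified against the Demon's table; it recovers the interventional 2/7:

    # Adjust for the confounder (whether Kim is overthrown), reusing
    # 'states' and 'n' from the earlier snippet.
    def p_nuked_given(winner, overthrown):
        rows = [s for s in states if s[1] == winner and s[0] == overthrown]
        col = 2 if winner == "Hillary" else 3
        return sum(s[col] for s in rows) / len(rows)

    p_overthrown = sum(s[0] for s in states) / n  # 3/7
    adjusted = (p_nuked_given("Hillary", True) * p_overthrown
                + p_nuked_given("Hillary", False) * (1 - p_overthrown))
    # adjusted = 0 * 3/7 + 1/2 * 4/7 = 2/7,
    # matching Pr[Nuked | do(Elect Clinton)] from the table.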

 However, this is problematic for several reasons:

 

  1. There will be an exponential explosion in the number of required prediction markets, and each of them will ask participants to bet on complicated conditional probabilities that have no obvious causal interpretation. 
  2. There may be disagreement on what the confounders are, which will lead to contested contract interpretations.
  3. The expert consensus on what the important confounders are may change during the lifetime of the contract, which will require the entire thing to be relisted. Etc.

For practical reasons, therefore, this approach does not seem feasible.

 

I’d like a discussion on the following questions:  Are there any other ways to list a contract that gives market participants an incentive to aggregate information on  causal quantities? If not, is futarchy doomed?

(Thanks to the Less Wrong meetup in Boston and particularly Jimrandomh for clarifying my thinking on this issue)

(I reserve the right to make substantial updates to this text in response to any feedback in the comments)

Immortality: A Practical Guide

18 G0W51 26 January 2015 04:17PM

Immortality: A Practical Guide

Introduction

This article is about how to increase one’s own chances of living forever or, failing that, living for a long time. To be clear, this guide defines death as the long-term loss of one’s consciousness and defines immortality as never-ending life. For those who would like less lengthy information on decreasing one’s risk of death, I recommend reading the sections “Can we become immortal,” “Should we try to become immortal,” and “Cryonics,” in this guide, along with the article Lifestyle Interventions to Increase Longevity.

This article does not discuss how to treat specific diseases you may have. It is not intended as a substitute for the medical advice of physicians. You should consult a physician with respect to any symptoms that may require diagnosis or medical attention. Additionally, I suggest considering using MetaMed to receive customized, albeit perhaps very expensive, information on your specific conditions, if you have any.

When reading about the effect sizes in scientific studies, keep in mind that many scientific studies report false-positives and are biased,101 though I have tried to minimize this by maximizing the quality of the studies used. Meta-analyses and scientific reviews seem to typically be of higher quality than other study types, but are still subject to biases.114

Corrections, criticisms, and suggestions for new topics are greatly appreciated. I’ve tried to write this article tersely, so feedback on doing so would be especially appreciated. Apologies if the article’s font type, size, and color aren’t standard on Less Wrong; I made it in Google Docs without being aware of Less Wrong’s standard and it would take too much work to change the style of the entire article.

 

Contents

  1. Can we become immortal?

  2. Should we try to become immortal?

  3. Relative importance of the different topics

  4. Food

    1. What to eat and drink

    2. When to eat and drink

    3. How much to eat

    4. How much to drink

  5. Exercise

  6. Carcinogens

    1. Chemicals

    2. Infections

    3. Radiation

  7. Emotions and feelings

    1. Positive emotions and feelings

    2. Psychological distress

    3. Stress

    4. Anger and hostility

  8. Social and personality factors

    1. Social status

    2. Giving to others

    3. Social relationships

    4. Conscientiousness

  9. Infectious diseases

    1. Dental health

  10. Sleep

  11. Drugs

  12. Blood donation

  13. Sitting

  14. Sleep apnea

  15. Snoring

  16. Exams

  17. Genomics

  18. Aging

  19. External causes of death

    1. Transport accidents

    2. Assault

    3. Intentional self harm

    4. Poisoning

    5. Accidental drowning

    6. Inanimate mechanical forces

    7. Falls

    8. Smoke, fire, and heat

    9. Other accidental threats to breathing

    10. Electric current

    11. Forces of nature

  20. Medical care

  21. Cryonics

  22. Money

  23. Future advancements

  24. References

 

Can we become immortal?

In order to potentially live forever, one never needs to make it impossible to die; one instead just needs to have one’s life expectancy increase faster than time passes, a concept known as the longevity escape velocity.61 For example, if one had a 10% chance of dying in their first century of life, but their chance of death decreased by 90% at the end of each century, then one’s chance of ever dying would be 0.1 + 0.1² + 0.1³ + … = 0.111… ≈ 11.11%. When applied to risk of death from aging, this is akin to one’s remaining life expectancy after jumping off a cliff while being affected by gravity and jet propulsion, with gravity being akin to aging and jet propulsion being akin to anti-aging (rejuvenation) therapies, as shown below.

The numbers in the above figure denote plausible ages of individuals when the first rejuvenation therapies arrive. A 30% increase in healthy lifespan would give the users of first-generation rejuvenation therapies 20 years to benefit from second-generation rejuvenation therapies, which could give an additional 30% increase in life span, ad infinitum.61
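The example above is just a geometric series, and is easy to check; a minimal Python sketch (strictly, independent per-century risks combine as 1 − ∏(1 − pₖ), which gives about 11.0%, but at these sizes the direct sum is a close approximation):

    # Per-century death risks of 0.1, 0.01, 0.001, ..., summed directly
    # as in the example above.
    p_ever_dying = sum(0.1 ** k for k in range(1, 60))
    # p_ever_dying is 0.1111..., i.e. about 11.11%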

As for causes of death, many deaths are strongly age-related. The proportion of deaths that are caused by aging in the industrial world approaches 90%.53 Thus, I suppose postponing aging would drastically increase life expectancy.

As for efforts against aging, the SENS Research Foundation and Science for Life Extension are charitable foundations trying to cure aging.54, 55 Additionally, Calico, a Google-backed company, and AbbVie, a large pharmaceutical company, have each committed $250 million to fund efforts to cure aging.56

I speculate that one could additionally decrease risk of death by becoming a cyborg, as mechanical bodies seem easier to maintain than biological ones, though I’ve found no articles discussing this.

Similar to becoming a cyborg, another potential method of decreasing one’s risk of death is mind uploading, which is, roughly speaking, the transfer of most or all of one’s mental contents into a computer.62 However, there are some concerns about the transfer creating a copy of one’s consciousness, rather than being the same consciousness. This issue is made very apparent if the mind-uploading process leaves the original mind intact, making it seem unlikely that one’s consciousness was transferred to the new body.63 Eliezer Yudkowsky doesn’t seem to believe this is an issue, though I haven't found a citation for this.

With regard to consciousness, it seems that most individuals believe that the consciousness in one’s body is the “same” consciousness as the one that was in one’s body in the past and will be in it in the future. However, I know of no evidence for this. If the consciousness in one’s body isn’t the same as the one that will be in one’s body in the future, and one defines death as one’s consciousness permanently ending, then I suppose one can’t prevent death for any time at all. Surprisingly, I’ve found no articles discussing this possibility.

Although curing aging, becoming a cyborg, and mind uploading may prevent death from disease, they still seem to leave oneself vulnerable to accidents, murder, suicide, and existential catastrophes. I speculate that these problems could be solved by giving an artificial superintelligence the ability to take control of one’s body in order to prevent such deaths from occurring. Of course, this possibility is currently unavailable.

Another potential cause of death is the Sun expanding, which could render Earth uninhabitable in roughly one billion years. Death from this could be prevented by colonizing other planets in the solar system, although eventually the sun would render the rest of the solar system uninhabitable. After this, one could potentially inhabit other stars; it is expected that stars will remain for roughly 10 quintillion years, although some theories predict that the universe will be destroyed in a mere 20 billion years. To continue surviving, one could potentially go to other universes.64 Additionally, there are ideas for space-time crystals that could process information even after heat death (i.e. the “end of the universe”),65 so perhaps one could make oneself composed of the space-time crystals via mind uploading or another technique. There could also be other methods of surviving the conventional end of the universe, and life could potentially have 10 quintillion years to find them.

Yet another potential cause of death is living in a computer simulation that is ended. Living in a computer simulation actually seems not to be all that improbable. Nick Bostrom argues that:

...at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

The argument for this is here.100

If one does die, one could potentially be revived. Cryonics, discussed later in this article, may help in this. Additionally, I suppose one could possibly be revived if future intelligences continually create new conscious individuals and eventually create one of them that have one’s “own” consciousness, though consciousness remains a mystery, so this may not be plausible, and I’ve found no articles discussing this possibility. If the probability of one’s consciousness being revived per unit time does not approach or equal zero as time approaches infinity, then I suppose one is bound to become conscious again, though this scenario may be unlikely. Again, I’ve found no articles discussing this possibility.

As already discussed, in order to live forever, one must either be revived after dying or prevent every cause of death: the consciousness in one’s body not being the same as the one that will be in one’s body in the future, accidents, aging, the sun dying, the universe dying, being in a simulation that ends, and other, unknown, causes. Keep in mind that adding extra details that aren’t guaranteed to be true can only make events less probable, and that people often don’t account for this.66 A spreadsheet for estimating one’s chance of living forever is here.
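To make that caution concrete, here is a loudly illustrative sketch; the probabilities below are made-up placeholders (not estimates), and independence between the requirements is assumed:

    import math

    # Every number below is a made-up placeholder, not an estimate.
    survival_requirements = {
        "aging cured or escaped in time": 0.5,    # placeholder
        "no fatal accident or catastrophe": 0.9,  # placeholder
        "universe remains survivable": 0.5,       # placeholder
    }
    # Assuming independence, the chance of satisfying all requirements is
    # the product, which is lower than any single factor; each additional
    # requirement can only shrink it further.
    p_live_forever = math.prod(survival_requirements.values())  # 0.225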

 

Should we try to become immortal?

Before deciding whether one should try to become immortal, I suggest learning about the cognitive biases scope insensitivity, hyperbolic discounting, and bias blind spot if you don’t currently know about them. Also, keep in mind that one study found that simply informing people of a cognitive bias made them no less likely to fall prey to it. A study also found that people only partially adjusted for cognitive biases after being told that informing people of a cognitive bias made them no less likely to fall prey to it.67

Many articles arguing against immortality are found via a quick google search, including this, this, this, and this. This article along with its comments discusses counter-arguments to many of these arguments. The Fable of the Dragon Tyrant provides an argument for curing aging, which can be extended to be an argument against mortality as a whole. I suggest reading it.

One can also evaluate the utility of immortality via decision theory. Assuming individuals receive some finite, above zero amount of utility per unit time, living forever would give infinitely more utility than living for a finite amount of time. Using these assumptions, in order to maximize utility, one should be willing to accept any finite cost to become immortal. However, the situation is complicated when one considers the potential of becoming immortal and receiving a finite positive utility unintentionally, in which case one would receive infinite expected utility regardless of if one tried to become immortal. Additionally, if one both has the chance of receiving infinitely high and infinitely low utility, one’s expected utility would be undefined. Infinite utilities are discussed in “Infinite Ethics” by Nick Bostrom.

For those interested in decreasing existential risk, living for a very long time, albeit not necessarily forever, may give one more opportunity to do so. This idea can be generalized to many goals one has in life.

On whether one can influence one’s chances of becoming immortal, studies have shown that only roughly 20-30% of longevity in humans is accounted for by genetic factors.68 There are multiple actions one can take to increase one’s chances of living forever; these are what the rest of this article is about. Keep in mind that you should consider continuing reading this article even if you don’t want to try to become immortal, as the article provides information on living longer, even if not forever, as well.

 

Relative importance of the different topics

The figure below gives the relative frequencies of preventable causes of death.1

Some causes of death are excluded from the graph, but are still large causes of death. Most notably, 440,000 deaths in the US, roughly one sixth of total deaths in the US, are estimated to be from preventable medical errors in hospitals.2

Risk calculators for cardiovascular disease are here and here. Though they seem very simplistic, they may be worth looking at and can probably be completed quickly.

Here are the frequencies of causes of deaths in the US in the year 2010, based on another classification:

  • Heart disease: 596,577

  • Cancer: 576,691

  • Chronic lower respiratory diseases: 142,943

  • Stroke (cerebrovascular diseases): 128,932

  • Accidents (unintentional injuries): 126,438

  • Alzheimer's disease: 84,974

  • Diabetes: 73,831

  • Influenza and Pneumonia: 53,826

  • Nephritis, nephrotic syndrome, and nephrosis: 45,591

  • Intentional self-harm (suicide): 39,518

113

 

Food

What to eat and drink

Keep in mind that the relationships between health and the consumption of various types of substances aren’t necessarily linear. I.e. some substances are beneficial in small amounts but harmful in large amounts, while others are beneficial in both small and large amounts, but consuming large amounts is no more beneficial than consuming small amounts.

 

Recommendations from The Nutrition Source

The Nutrition Source is part of the Harvard School of Public Health.

Its recommendations:

  • Make ½ of your “plate” consist of a variety of fruits and a variety of vegetables, excluding potatoes, due to potatoes’ negative effect on blood sugar. The Harvard School of Public Health doesn’t seem to specify if this is based on calories or volume. It also doesn’t explain what it means by plate, but presumably ½ of one’s plate means ½ of the solid food consumed.

  • Make ¼ of your plate consist of whole grains.

  • Make ¼ of your plate consist of high-protein foods.

  • Limit red meat consumption.

  • Avoid processed meats.

  • Consume monounsaturated and polyunsaturated fats in moderation; they are healthy.

  • Avoid partially hydrogenated oils, which contain trans fats, which are unhealthy.

  • Limit milk and dairy products to one to two servings per day.

  • Limit juice to one small glass per day.

  • It is important to eat seafood one or two times per week, particularly fatty (dark meat) fish that are richer in EPA and DHA.

  • Limit diet drink consumption or consume in moderation.

  • Avoid sugary drinks like soda, sports drinks, and energy drinks.3

 

Fat

The bottom line is that saturated fats and especially trans fats are unhealthy, while unsaturated fats are healthy, and the unsaturated omega-3 and omega-6 fatty acids are essential. The proportion of calories from fat in one’s diet isn’t really linked with disease.

Saturated fat is unhealthy. It’s generally a good idea to minimize saturated fat consumption. The latest Dietary Guidelines for Americans recommends consuming no more than 10% of calories from saturated fat, but the American Heart Association recommends consuming no more than 7% of calories from saturated fat. However, don’t decrease nut, oil, and fish consumption to minimize saturated fat consumption. Foods that contain large amounts of saturated fat include red meat, butter, cheese, and ice cream.

Trans fats are especially unhealthy. For every 2% increase of calories from trans-fat, risk of coronary heart disease increases by 23%. The Federal Institute for Medicine states that there are no known requirements for trans fats for bodily functions, so their consumption should be minimized. Partially hydrogenated oils contain trans fats, and foods that contain trans fats are often processed foods. In the US, products can claim to have zero grams of trans fat if they have no more than 0.5 grams of trans fat. Products with no more than 0.5 grams of trans fat that still have non-negligible amounts of trans fat will probably have the ingredients “partially hydrogenated vegetable oils” or “vegetable shortening” in their ingredient list.

Unsaturated fats have beneficial effects, including improving cholesterol levels, easing inflammation, and stabilizing heart rhythms. The American Heart Association has set 8-10% of calories as a target for polyunsaturated fat consumption, though eating more polyunsaturated fat, around 15% of daily calories, in place of saturated fat may further lower heart disease risk. Consuming unsaturated fats instead of saturated fat also prevents insulin resistance, a precursor to diabetes. Monounsaturated fats and polyunsaturated fats are types of unsaturated fats.

Omega-3 fatty acids (omega-3 fats) are a type of unsaturated fat. There are two main types: marine omega-3s and alpha-linolenic acid (ALA). Omega-3 fatty acids, especially marine omega-3s, are healthy. Though one can make most needed types of fats from other fats or substances consumed, omega-3 fat is an essential fat, meaning it cannot be made in the body and must come from food. Most Americans don’t get enough omega-3 fats.

Marine omega-3s are primarily found in fish, especially fatty (dark meat) fish. A comprehensive review found that eating roughly two grams per week of omega-3s from fish, equal to about one or two servings of fatty fish per week, decreased risk of death from heart disease by more than one-third. Though fish contain mercury, this is insignificant compared to the positive health effects of their consumption (for the consumer, not the fish). However, it does benefit one’s health to consult local advisories to determine how much local freshwater fish to consume.

ALA may be an essential nutrient, and increased ALA consumption may be beneficial. ALA is found in vegetable oils, nuts (especially walnuts), flax seeds, flaxseed oil, leafy vegetables, and some animal fat, especially those from grass-fed animals. ALA is primarily used as energy, but a very small amount of it is converted into marine omega-3s. ALA is the most common omega-3 in western diets.

Most Americans consume much more omega-6 fatty acids (omega-6 fats) than omega-3 fats. Omega-6 fat is an essential nutrient and its consumption is healthy. Some sources of it include corn and soybean oils. The Nutrition Source stated that the theory that omega-3 fats are healthier than omega-6 fats isn’t supported by evidence. However, in an image from the Nutrition Source, seafood omega-6 fats were ranked as healthier than plant omega-6 fats, which were ranked as healthier than monounsaturated fats, although such a ranking was to the best of my knowledge never stated in the text.3

 

Carbohydrates

There seem to be two main determinants of a carbohydrate source’s effect on health: nutrition content and effect on blood sugar. The bottom line is that consuming whole grains and other less processed grains and decreasing refined grain consumption improves health. Additionally, moderately low carbohydrate diets can increase heart health as long as protein and fat come from healthy sources, though the type of carbohydrate is at least as important as the amount of carbohydrates in a diet.

Glycemic index is a measure of how much a food increases blood sugar levels. Consuming carbohydrates that cause blood-sugar spikes can increase risk of heart disease and diabetes at least as much as consuming too much saturated fat does. Some factors that increase the glycemic index of foods include:

  • Being a refined grain as opposed to a whole grain.

  • Being finely ground, which is why consuming whole grains in their whole form, such as rice, can be healthier than consuming them as bread.

  • Having less fiber.

  • Being more ripe, in the case of fruits and vegetables.

  • Having a lower fat content, as meals with fat are converted more slowly into sugar.

Vegetables (excluding potatoes), fruits, whole grains, and beans, are healthier than other carbohydrates. Potatoes have a negative effect on blood sugar, due to their high glycemic index. Information on glycemic index and the index of various foods is here.

Whole grains also contain essential minerals such as magnesium, selenium, and copper, which may protect against some cancers. Refining grains takes away 50% of the grains’ B vitamins, 90% of vitamin E, and virtually all fiber. Sugary drinks usually have little nutritional value.

Identifying whole grains as foods that have at least one gram of fiber for every ten grams of carbohydrate is a more effective measure of healthfulness than identifying them by having a whole grain as the first ingredient, having any whole grain as the first ingredient without added sugars in the first 3 ingredients, having the word “whole” before any grain ingredient, or having the whole grain stamp.3

 

Protein

Proteins are broken down to form amino acids, which are needed for health. Though the body can make some amino acids by modifying others, some must come from food; these are called essential amino acids. The Institute of Medicine recommends that adults get a minimum of 0.8 grams of protein per kilogram of body weight per day, and sets the range of acceptable protein intake to 10-35% of calories per day. The US recommended daily allowance for protein is 46 grams per day for women over 18 and 56 grams per day for men over 18.

Animal products tend to give all essential amino acids, but other sources lack some essential amino acids. Thus, vegetarians need to consume a variety of sources of amino acids each day to get all needed types. Fish, chicken, beans, and nuts are healthy protein sources.3
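As a worked example of the minimum: a 70 kg (154 lb) adult would need at least 0.8 g/kg × 70 kg = 56 g of protein per day, which matches the US recommended daily allowance for men over 18.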

 

Fiber

There are two types of fiber: soluble fiber and insoluble fiber. Both have important health benefits, so one should eat a variety of foods to get both.94 The best sources of fiber are whole grains, fresh fruits and vegetables, legumes, and nuts.3

 

Micronutrients

There are many micronutrients in food; getting enough of them is important. Most healthy individuals can get sufficient micronutrients by consuming a wide variety of healthy foods, such as fruits, vegetables, whole grains, legumes, and lean meats and fish. However, supplementation may be necessary for some. Information about supplements is here.110

Concerning supplementation, potassium, iodine, and lithium supplementation are recommended in the first-place entry in the Quantified Health Prize, a contest on determining good mineral intake levels. However, others suggest that potassium supplementation isn’t necessarily beneficial, as shown here. I’m somewhat skeptical that the supplements are beneficial, as I have not found other sources recommending their supplementation. The suggested supplementation levels are in the entry.

Note that food processing typically decreases micronutrient levels, as described here. In general, it seems that cooking, draining, and drying foods decrease nutrient levels sizably, potentially taking away half of the nutrients, while freezing and reheating take away relatively few nutrients.111

One micronutrient worth discussing is sodium. Some sodium is needed for health, but most Americans consume more sodium than needed. However, recommendations on ideal sodium levels vary. The US government recommends limiting sodium consumption to 2,300mg/day (one teaspoon). The American Heart Association recommends limiting sodium consumption to 1,500mg/day (⅔ of a teaspoon), especially for those who are over 50, have high or elevated blood pressure, have diabetes, or are African American.3 However, as RomeoStevens pointed out, the Institute of Medicine found that there's inconclusive evidence that decreasing sodium consumption below 2,300mg/day affects mortality,115 and some meta-analyses have suggested that there is a U-shaped relationship between sodium and mortality.116, 117

Vitamin D is another micronutrient that’s important for health. It can be obtained from food or made in the body after sun exposure. Most people who live farther north than San Francisco or don’t go outside at least fifteen minutes when it’s sunny are vitamin D deficient. Vitamin D deficiency increases the risk of many chronic diseases including heart disease, infectious diseases, and some cancers. However, there is controversy about optimal vitamin D intake. The Institute of Medicine recommends getting 600 to 4000 IU/day, though it acknowledged that there was no good evidence of harm at 4000 IU/day. The Nutrition Source states that these recommendations are too low and fail to account for new evidence. The Nutrition Source states that for most people, supplements are the best source of vitamin D, but most multivitamins have too little vitamin D in them. The Nutrition Source recommends considering and talking to a doctor about taking an additional multivitamin if you take less than 1000 IU of vitamin D and especially if you have little sun exposure.3

 

Blood pressure

Information on blood pressure is here in the section titled “Blood Pressure.”

 

Cholesterol and triglycerides

Information on optimal amounts of cholesterol and triglycerides are here.

 

The biggest influences on cholesterol are fats and carbohydrates in one’s diet, and cholesterol consumption generally has a far weaker influence. However, some people’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed. For them, decreasing cholesterol consumption from food can have a considerable effect on cholesterol levels. Trial and error is currently the only way of determining if one’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed.

 

Antioxidants

Despite their initial hype, randomized controlled trials have offered little support for the benefit of single antioxidants, though studies are inconclusive.3

 

Dietary reference intakes

For the numerically inclined, the Dietary Reference Intake provides quantitative guidelines on good nutrient consumption amounts for many nutrients, though it may be harder to use for some, due to its quantitative nature.

 

Drinks

The Nutrition Source and SFGate state that water is the best drink,3, 112 though I don’t know why it’s considered healthier than drinks such as tea.

Unsweetened tea decreases the risk of many diseases, likely largely due to polyphenols, an antioxidant, in it. Despite antioxidants typically having little evidence of benefit, I suppose polyphenols are relatively beneficial. All teas have roughly the same levels of polyphenols except decaffeinated tea,3 which has fewer polyphenols.96 Research suggests that proteins and possibly fat in milk decrease the antioxidant capacity of tea.

It’s considered safe to drink up to six cups of coffee per day. Unsweetened coffee is healthy and may decrease some disease risks, though coffee may slightly increase blood pressure. Some people may want to consider avoiding coffee or switching to decaf, especially women who are pregnant or people who have a hard time controlling their blood pressure or blood sugar. The Nutrition Source states that it’s best to brew coffee with a paper filter to remove a substance that increases LDL cholesterol, despite consumed cholesterol typically having a very small effect on the body’s cholesterol level.

Alcohol increases risk of diseases for some people and decreases it for others. Heavy alcohol consumption is a major cause of preventable death in most countries. For some groups of people, especially pregnant people, people recovering from alcohol addiction, and people with liver disease, alcohol causes greater health risks and should be avoided. The likelihood of becoming addicted to alcohol can be genetically determined. Moderate drinking, generally defined as no more than one or two drinks per day for men, can increase colon and breast cancer risk, but these effects are offset by decreased heart disease and diabetes risk, especially in middle age, where heart disease begins to account for an increasingly large proportion of deaths. However, alcohol consumption won’t decrease cardiovascular disease risk much for those who are thin, physically active, don’t smoke, eat a healthy diet, and have no family history of heart disease. Some research suggests that red wine, particularly when consumed after a meal, has more cardiovascular benefits than beers or spirits, but alcohol choice still has little effect on disease risk. In one study, moderate drinkers were 30-35% less likely to have heart attacks than non-drinkers and men who drank daily had lower heart attack risk than those who drank once or twice per week.

There’s no need to drink more than one or two glasses of milk per day. Less milk is fine if calcium is obtained from other sources.

The health effects of artificially sweetened drinks are largely unknown. Oddly, they may also cause weight gain. It’s best to limit consuming them if one drinks them at all.

Sugary drinks can cause weight gain, as they aren’t as filling as solid food and have high sugar. They also increase the risk of diabetes, heart disease, and other diseases. Fruit juice has more calories and less fiber than whole fruit and is reportedly no better than soft drinks.3

 

Solid food

Fruits and vegetables are an important part of a healthy diet. Eating a variety of them is as important as eating many of them.3 Fish and nut consumption is also very healthy.98

Processed meat, on the other hand, is shockingly bad.98 A meta-analysis found that processed meat consumption is associated with a 42% increased risk of coronary heart disease (relative risk per 50g serving per day; 95% confidence interval: 1.07 - 1.89) and 19% increased risk of diabetes.97 Despite this, a bit of red meat consumption has been found to be beneficial.98 Consumption of well-done, fried, or barbecued meat has been associated with certain cancers, presumably due to carcinogens made in the meat from being cooked, though this link isn’t definitive. The amount of carcinogens increases with increased cooking temperature (especially above 300ºF), increased cooking time, charring, or exposure to smoke.99

Eating less than one egg per day doesn’t increase heart disease risk in healthy individuals and can be part of a healthy diet.3

Organic foods have lower levels of pesticides than non-organic foods, though the residues of most organic and non-organic products don’t exceed government safety thresholds. Washing fresh fruits and vegetables is recommended, as it removes bacteria and some, though not all, pesticide residues. Organic foods probably aren’t more nutritious than non-organic foods.103

 

When to eat and drink

A randomized controlled trial found an increase in blood sugar variation for subjects who skipped breakfast.6 Increasing meal frequency and decreasing meal size appears to have some metabolic advantages, and doesn’t appear to have metabolic disadvantages7 (note: this source is old, from 1994). However, Mayo Clinic states that fasting for 1-2 days per week may increase heart health.32 Perhaps it is optimal for health to fast occasionally, but to have high meal frequency when not fasting.

 

How much to eat

One’s weight change is determined by the balance between calories consumed and calories burnt. The Centers for Disease Control and Prevention (CDC) has guidelines for healthy weights and information on how to lose weight.
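As a rough, commonly cited rule of thumb (an approximation, not part of the CDC guidelines above): an excess or deficit of about 3,500 calories corresponds to roughly one pound (0.45 kg) of body fat, so a sustained 500-calorie daily deficit translates to losing roughly one pound per week.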

Some advocate restricting calorie intake to a greater extent, which is known as calorie restriction. It’s unknown whether calorie restriction increases lifespan in humans, but data indicate that moderate calorie restriction with adequate nutrition decreases risk of obesity, type 2 diabetes, inflammation, hypertension, and cardiovascular disease, and decreases metabolic risk factors associated with cancer.4 The CR Society has information on getting started on calorie restriction.

 

How much to drink

Generally, drinking enough to rarely feel thirsty and to have colorless or light yellow urine is usually sufficient. It’s also possible to drink too much water. In general, drinking too much water is rare in healthy adults who eat an average American diet, although endurance athletes are at a higher risk.10

 

Exercise

A meta-analysis found the data in the following graphs for people aged over 40.8

A weekly total of roughly five hours of vigorous exercise has been identified by several studies as the safe upper limit for life expectancy. It may be beneficial to take one or two days off from vigorous exercise per week and to limit chronic vigorous exercise to <= 60 min/day.9 Based on the above, my best guess for the optimal amount of exercise for longevity is roughly 30 MET-hr/wk. Calisthenics burn 6-10 METs/hr,11 so an example exercise routine to get this amount of exercise is doing calisthenics 38 minutes per day, 6 days/wk. Guides on how to exercise are available, e.g. this one.
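To check the arithmetic: 38 min/day × 6 days/wk ≈ 3.8 hr/wk, and at roughly 8 METs (the midpoint of the 6-10 METs/hr range above), 3.8 hr/wk × 8 METs ≈ 30 MET-hr/wk.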

 

Carcinogens

Carcinogens are cancer-causing substances. Since cancer causes death, decreasing exposure to carcinogens presumably decreases one’s risk of death. Some foods are also carcinogenic, as discussed in the “Food” section.

 

Chemicals

Tobacco use is the greatest avoidable risk factor for cancer worldwide, causing roughly 22% of cancer deaths. Additionally, secondhand smoke has been proven to cause lung cancer in nonsmoking adults.

Alcohol use is a risk factor for many types of cancer. The risk of cancer increases with the amount of alcohol consumed, and increases substantially if one is also a heavy smoker. The fraction of cancer attributable to alcohol use varies by gender, due to differences in consumption levels. For example, 22% of mouth and oropharynx cancer is attributable to alcohol in men, but only 9% in women.

Environmental air pollution accounts for 1-4% of cancer.84 Diesel exhaust is one type of carcinogenic air pollution. Those with the highest exposure to diesel exhaust are exposed to it occupationally. As for residential exposure, diesel exhaust is highest in homes near roads with the heaviest traffic. Limiting time spent near large sources of diesel exhaust decreases exposure. Benzene, another carcinogen, is found in gasoline and vehicle exhaust, but exposure can also be caused by being in areas with unventilated fumes from gasoline, glues, solvents, paints, and art supplies. Exposure can occur through inhalation or skin contact.86

Some occupations expose workers to occupational carcinogens.84 A list of some of these occupations is here; all of them involve manual labor, except for hospital-related jobs.87

 

Infections

Infections are responsible for 6% of cancer deaths in developed nations.84 Many of these infections are spread via sexual contact or shared needles, and some can be vaccinated against.85

 

Radiation

Ionizing radiation is carcinogenic to humans. Residential exposure to radon gas, the largest source of radon exposure for most people, is estimated to cause 3-14% of lung cancers.84 Being exposed to radon and cigarette smoke together increases one’s cancer risk much more than either does separately. Radon levels vary considerably depending on where one lives, and radon is usually higher inside buildings, especially on levels closer to the ground, such as basements. The EPA recommends taking action to reduce radon levels if they are greater than or equal to 4.0 pCi/L. Radon levels can be reduced by a qualified contractor; attempting to reduce them without proper training and equipment can increase rather than decrease them.88

Some medical tests can also increase exposure to radiation. The EPA estimates that exposure to 10 mSv from a medical imaging test increases risk of cancer by roughly 0.05%. To decrease exposure to radiation from medical imaging tests, one can ask whether there are ways to shield the parts of one’s body that aren’t being imaged, and make sure the doctor performing the test is qualified.89
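
For a sense of scale, here is a back-of-the-envelope sketch. It assumes risk scales linearly with dose (a linear no-threshold reading of the EPA figure above, which is my assumption); the 0.05%-per-10-mSv number is from the text, and the example dose is purely illustrative.

```python
# Linear extrapolation of the EPA estimate quoted above (assumed, not exact).
def added_cancer_risk(dose_msv: float, risk_per_10_msv: float = 0.0005) -> float:
    """Estimated added cancer risk from an imaging dose, assuming linear scaling."""
    return (dose_msv / 10.0) * risk_per_10_msv

# Example with an illustrative 7 mSv scan: ~0.035% added risk.
print(added_cancer_risk(7))  # ~0.00035
```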

 

Small doses of ionizing radiation increase risk by a very small amount. Most studies haven’t detected increased cancer risk in people exposed to low levels of ionizing radiation. For example, people living at higher altitudes don’t have noticeably higher cancer rates than other people. In general, cancer risk from radiation increases as the dose increases, and there is thought to be no safe level of exposure. Ultraviolet radiation is a type of radiation that can be ionizing. Sunlight is the main source of ultraviolet radiation.84

Factors that increase one’s exposure to ultraviolet radiation when outside include:

  • Time of day. Almost ⅓ of UV radiation hits the surface between 11AM and 1PM, and ¾ hits the surface between 9AM and 5PM.

  • Time of year. UV radiation is greater during summer. This factor is less significant near the equator.

  • Altitude. More UV radiation penetrates to the ground at high elevations, where the atmosphere is thinner.

  • Clouds. Sometimes clouds decrease levels of UV radiation because they block UV radiation from the sun. Other times, they increase exposure because they reflect UV radiation.

  • Reflection off surfaces, such as water, sand, snow, and grass, which increases UV exposure.

  • Ozone density, because ozone stops some UV radiation from reaching the surface.

Some tips to decrease exposure to UV radiation:

  • Stay in the shade. This is one of the best ways to limit exposure to UV radiation in sunlight.

  • Cover yourself with clothing.

  • Wear sunglasses.

  • Use sunscreen on exposed skin.90

 

Tanning beds are also a source of ultraviolet radiation. Using them can increase one’s chance of getting skin melanoma by at least 75%.91

 

Vitamin D3 is also produced from ultraviolet radiation, although the American Society for Clinical Nutrition states that vitamin D is readily available from supplements and that the controversy about reducing ultraviolet radiation exposure was fueled by the tanning industry.92

 

There could be some risk of cell phone use being associated with cancer, but the evidence is not strong enough to be considered causal and needs to be investigated further.93

 

Emotions and feelings

Positive emotions and feelings

A review suggested that positive emotions and feelings decrease mortality. Proposed mechanisms include positive emotions and feelings being associated with better health practices, such as improved sleep quality, increased exercise, and increased dietary zinc consumption, as well as with lower levels of some stress hormones. They have also been hypothesized to be associated with other health-relevant hormones, various aspects of immune function, and closer and more numerous social contacts.33 Less Wrong has a good article on how to be happy.

 

Psychological distress

A meta-analysis was conducted on psychological distress. To measure psychological distress, it used the GHQ-12 score, which measures symptoms of anxiety, depression, social dysfunction, and loss of confidence. Scores range from 0 to 12, with 0 being asymptomatic, 1-3 subclinically symptomatic, 4-6 symptomatic, and 7-12 highly symptomatic. It found the results shown in the following graphs.

http://www.bmj.com/content/bmj/345/bmj.e4933/F3.large.jpg?width=800&height=600

This association was essentially unchanged after controlling for a range of covariates including occupational social class, alcohol intake, and smoking. However, reverse causality may still partly explain the association.30
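
For readers who want the GHQ-12 banding above in executable form, here is a minimal sketch; the band names and cut-points are taken directly from the description of the score, and the function name is mine.

```python
# Map a GHQ-12 score to the symptom band used in the meta-analysis above.
def ghq12_band(score: int) -> str:
    if not 0 <= score <= 12:
        raise ValueError("GHQ-12 scores range from 0 to 12")
    if score == 0:
        return "asymptomatic"
    if score <= 3:
        return "subclinically symptomatic"
    if score <= 6:
        return "symptomatic"
    return "highly symptomatic"

print(ghq12_band(5))  # symptomatic
```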

 

Stress

A study found that individuals with moderate and high stress levels, as opposed to low stress, had hazard ratios (HRs) of mortality of 1.43 and 1.49, respectively.27 A meta-analysis found that high perceived stress, as opposed to low perceived stress, was associated with a coronary heart disease relative risk (RR) of 1.27. The mean age of participants in the studies used in the meta-analysis varied from 44 to 72.5 years and was significantly and positively associated with effect size, explaining 46% of the variance in effect sizes between the studies.28

A cross-sectional study (a relatively weak study design) not in the aforementioned meta-analysis used 28,753 subjects to study the effect on mortality of the amount of stress and of the perception that stress is harmful. It found that neither of these factors predicted mortality independently, but that taken together, they had a statistically significant effect. Subjects who reported much stress and that stress has a large effect on health had an HR of 1.43 (95% CI: 1.2, 1.7). Reverse causality may partially explain this, though, as those who have had negative health impacts from stress may have been more likely to report that stress influences health.83

 

Anger and hostility

A meta-analysis found that after fully controlling for behavioral covariates such as smoking, physical activity or body mass index, and socioeconomic status, anger and hostility were not associated with coronary heart disease (CHD), though the results are inconclusive.34

 

Social and personality factors

Social status

A review suggested that social status is linked to health via gender, race, ethnicity, education levels, socioeconomic differences, family background, and old age.46

 

Giving to others

An observational study found that stressful life events were not a predictor of mortality for those who engaged in unpaid helping behavior directed toward friends, neighbors, or relatives who did not live with them. This association may be due to giving to others providing a sense of mattering, opportunities for generativity, improved social well-being, the emotional state of compassion, and activation of the physiology of the caregiving behavioral system.35

 

Social relationships

A large meta-analysis found that the odds ratio of mortality of having weak social relationships is 1.5 (95% confidence interval (CI): 1.42 to 1.59). However, this effect may be a conservative estimate. Many of the studies used in the meta-analysis used single item measures of social relations, but the size of the association was greatest in studies that used more complex measurements. Additionally, some of the studies in the meta-analysis adjusted for risk factors that may be mediators of social relationships’ effect on mortality (e.g. behavior, diet, and exercise). Many of the studies in the meta-analysis also ignored the quality of social relationships, but research suggests that negative social relationships are linked to increased mortality. Thus, the effect of social relationships on mortality could be even greater than the study found.

Concerning causation, social relationships are linked to better health practices and psychological processes, such as stress and depression, which influence health outcomes on their own. However, the meta-analysis also states that social relationships exert an independent effect. Some studies show that social support is linked to better immune system functioning and to immune-mediated inflammatory processes.36

 

Conscientiousness

A cohort study with 468 deaths found that each 1 standard deviation decrease in conscientiousness was associated with an HR of 1.07 (95% CI: 0.98-1.17), though it gave no mechanism for the association.39 Although it adjusted for several variables (e.g. socioeconomic status, smoking, and drinking), it didn’t adjust for drug use, risky driving, risky sex, suicide, and violence, which were all found by a meta-analysis to have statistically significant associations with conscientiousness.40 Overall, conscientiousness doesn’t seem to have a large effect on mortality.

 

Infectious diseases

Mayo Clinic has a good article on preventing infectious disease.

 

Dental health

A cohort study of 5611 adults found that compared to men with 26-32 teeth, men with 16-25 teeth had an HR of 1.03 (95% CI: 0.91-1.17), men with 1-15 teeth had an HR of 1.21 (95% CI: 1.05-1.40) and men with 0 teeth had an HR of 1.18 (95% CI: 1.00-1.39).

In the study, men who never brushed their teeth at night had an HR of 1.34 (95% CI: 1.14-1.57) relative to those who did every night. Among subjects who brushed at night, HR was similar between those who did and didn’t also brush daily in the morning or day. The HR for men who brushed in the morning every day but not at night every day was 1.19 (95% CI: 0.99-1.43).

In the study, men who never used dental floss had an HR of 1.27 (95% CI: 1.11-1.46) and those who sometimes used it had an HR of 1.14 (95% CI: 1.00-1.30), compared to men who used it every day. Among subjects who brushed their teeth at night daily, not flossing was associated with a significantly increased HR.

Use of toothpicks didn’t significantly decrease HR and mouthwash had no effect.

The study had a list of other studies on the effect of dental health on mortality. It seems to me that almost all of them found a negative correlation between dental health and risk of mortality, although the study didn’t state its methodology for selecting the studies it showed. I did a crude review of other literature by only looking at abstracts, and found five studies concluding that poor dental health increased risk of mortality and one concluding it didn’t.

Regarding possible mechanisms, the study says that toothpaste helps prevent dental caries and that dental floss is the most effective means of removing interdental plaque and decreasing interdental gingival inflammation.38

 

Sleep

It seems that getting too little or too much sleep likely increases one’s risk of mortality, but it’s hard to tell exactly how much is too much and how little is too little.

 

One review found that the association between amount of sleep and mortality is inconsistent across studies, and that what association does exist may be due to reverse causality.41 However, a meta-analysis found that the RR associated with short sleep duration (variously defined as sleeping from < 8 hrs/night to < 6 hrs/night) was 1.10 (95% CI: 1.06-1.15). It also found that the RR associated with long sleep duration (variously defined as sleeping for > 8 hrs/night to > 10 hrs/night) compared with medium sleep duration (variously defined as sleeping for 7-7.9 hrs/night to 9-9.9 hrs/night) was 1.23 (95% CI: 1.17-1.30).42

 

The National Heart, Lung, and Blood Institute and Mayo Clinic recommend adults get 7-8 hours of sleep per night, although they also say sleep needs vary from person to person. Neither gives a method for determining an individual’s optimal amount of sleep, nor says whether its recommendations are for optimal longevity, optimal productivity, something else, or a combination of factors.43 The Harvard Medical School implies that one’s optimal amount of sleep is enough to not need an alarm to wake up, though it didn’t specify the criteria for determining optimality either.45

 

Drugs

None of the drugs I’ve looked into have a beneficial effect for people without a particular disease or risk factor. Notes on them are here.

 

Blood donation

A quasi-randomized experiment, with validity approaching that of a randomized trial, suggested that blood donation doesn’t significantly decrease risk of coronary heart disease (CHD). Observational studies have shown lower CHD incidence among donors, although the authors of the experiment suspect that bias played a role in this. The authors believe their findings cast serious doubt on the theory that blood donation decreases CHD risk.29

 

Sitting

After adjusting for amount of physical activity, a meta-analysis estimated that for each one-hour increment of total daily sitting time within the intervals 0-3, >3-7, and >7 h/day, the hazard ratios of mortality were 1.00 (95% CI: 0.98-1.03), 1.02 (95% CI: 0.99-1.05), and 1.05 (95% CI: 1.02-1.08), respectively. It proposed no mechanism for sitting time having this effect,37 so the effect might be due to confounding variables it didn’t control for.

 

Sleep apnea

Sleep apnea is an independent risk factor for mortality and cardiovascular disease.26 Symptoms and other information on sleep apnea are here.

 

Snoring

A meta-analysis found that self-reported habitual snoring had a small but statistically significant association with stroke and coronary heart disease, but not with cardiovascular disease or all-cause mortality [HR 0.98 (95% CI: 0.78-1.23)]. Whether the risk is due to obstructive sleep apnea is controversial. Only the abstract is freely available, so I’m basing this on the abstract alone.31

 

Exams

The organization Susan G. Komen, citing a meta-analysis of randomized controlled trials, doesn’t recommend breast self-exams as a screening tool for breast cancer, as they haven’t been shown to decrease cancer deaths. However, it still states that it is important to be familiar with one’s breasts’ appearance and how they normally feel.49 According to the Memorial Sloan Kettering Cancer Center, no study has been able to show a statistically significant decrease in breast cancer deaths from breast self-exams.50 The National Cancer Institute states that breast self-examinations haven’t been shown to decrease breast cancer mortality, but do increase biopsies of benign breast lesions.51

The American Cancer Society doesn’t recommend testicular self-exams for all men, as they haven’t been studied enough to determine whether they decrease mortality. However, it states that men with risk factors for testicular cancer (e.g. an undescended testicle, previous testicular cancer, or a family member who has had testicular cancer) should consider self-exams and discuss them with a doctor. The American Cancer Society also recommends testicular exams as part of routine cancer-related check-ups.52

 

Genomics

Genomics is the study of the genes in one’s genome, and may help improve health by using knowledge of one’s genes to personalize treatment. However, it hasn’t yet proved useful for most people; recommendations rarely change after genomic testing. Still, genomics has much future potential.102

 

Aging

As I said in the section “Can we become immortal,” the proportion of deaths caused by aging in the industrial world approaches 90%,53 but some organizations and companies are working on curing it.54, 55, 56

One could support these organizations in an effort to hasten the development of anti-aging therapies, although I doubt an individual would noticeably change their own chance of death this way unless they are very wealthy. I have little knowledge of investments, but investing in companies working on curing aging may be beneficial: if they succeed, they may offer an enormous return on investment, and if they fail, one would probably die anyway, so losing the money may matter less. Calico currently isn’t a publicly traded stock, though.

 

External causes of death

Unless otherwise specified, graphs in this section are based on data collected from American citizens ages 15-24, since, based on the Less Wrong census results, this seems to be the most probable demographic reading this. For this demographic, external causes account for 76% of deaths. Note, however, that one is much more likely to die when older than when aged 15-24, and older individuals are much more likely to die from disease than from external causes. Thus, I think it’s more important when young to decrease risk of disease than risk of external causes of death. The graph below shows the percentage of total deaths from external causes attributable to various causes.

[Graph omitted: percentage of external-cause deaths by cause.21]

 

Transport accidents

Below are the relative death rates of specified means of transportation for people in general:

[Graph omitted: relative death rates by means of transportation.71]

Much information about preventing death from car crashes is here; more is available here, here, here, and here.

 

Assault

Lifehacker's “Basic Self-Defense Moves Anyone Can Do (and Everyone Should Know)” gives a basic introduction to self-defense.

 

Intentional self harm

Intentional self-harm, such as suicide, presumably increases one’s risk of death.47 Mayo Clinic has a guide on preventing suicide. I recommend looking at it if you are considering killing yourself. Additionally, if you are considering killing yourself, I suggest reviewing the potential rewards of achieving immortality from the section “Should we try to become immortal.”

 

Poisoning

What to do if a poisoning occurs

CDC recommends staying calm, dialing 1-800-222-1222, and having this information ready:

  • Your age and weight.

  • If available, the container of the poison.

  • The time of the poison exposure.

  • The address where the poisoning occurred.

It also recommends staying on the phone and following the instructions of the emergency operator or poison control center.18

 

Types of poisons

Below is a graph of the risk of death per type of poison.

[Graph omitted: risk of death by type of poison.21]

Some types of poisons:

  • Medicine overdoses.

  • Some household chemicals.

  • Recreational drug overdoses.

  • Carbon monoxide.

  • Metals such as lead and mercury.

  • Plants12 and mushrooms.14

  • Presumably some animals.

  • Some fumes, gases, and vapors.15

 

Recreational drugs

Using recreational drugs increases risk of death.

 

Medicine overdoses and household chemicals

CDC has tips for these here.

 

Carbon monoxide

CDC and Mayo Clinic have tips for this here and here.

 

Lead

Lead poisoning causes 0.2% of deaths worldwide and approximately 0% of deaths in developed countries.22 Children under the age of 6 are at higher risk of lead poisoning.24 Thus, for those who aren’t children, learning more about preventing lead poisoning seems like more effort than it’s worth. That said, no completely safe blood lead level has been identified.23

 

Mercury

MedlinePlus has an article on mercury poisoning here.

 

Accidental drowning

Information on preventing accidental drowning from CDC is here and here.

 

Inanimate mechanical forces

Over half of deaths from inanimate mechanical forces for Americans aged 15-24 are from firearms. Many of the other deaths are from explosions, machinery, and getting hit by objects. I suppose using common sense, precaution, and standard safety procedures when dealing with such things is one’s best defense.

 

Falls

Again, I suppose common sense and precaution are one’s best defense. Additionally, alcohol and substance abuse are risk factors for falling.72

 

Smoke, fire and heat

Owning smoke alarms halves one’s risk of dying in a home fire.73 Again, common sense when dealing with fires and items potentially causing fires (e.g. electrical wires and devices) seems effective.

 

Other accidental threats to breathing

Deaths from other accidental threats to breathing are largely caused by strangling or choking on food or gastric contents, and occasionally by being in a cave-in or trapped in a low-oxygen environment.21 Choking can be caused by eating quickly or laughing while eating.74 If you are choking:

  • Forcefully cough. Lean as far forwards as you can and hold onto something that is firmly anchored, if possible. Breathe out and then take a deep breath in and cough; this may eject the foreign object.

  • Attract someone’s attention for help.75

 

Additionally, choking can be caused by vomiting while unconscious, which can be caused by being very drunk.76 I suggest lying in the recovery position if you think you may lose consciousness and vomit, so as to decrease the chance of choking on vomit.77 Don’t forget to use common sense.

 

Electric current

Electric shock is usually caused by contact with poorly insulated wires or ungrounded electrical equipment, using electrical devices while in water, or lightning.78 Roughly ⅓ of deaths from electricity are caused by exposure to electric transmission lines.21

 

Forces of nature

Deaths from forces of nature (for Americans ages 15-24), in descending order of number of deaths caused, are: exposure to cold, exposure to heat, lightning, avalanches or other earth movements, cataclysmic storms, and floods.21 Here are some tips to prevent these deaths:

  • When traveling in cold weather, carry emergency supplies in your car and tell someone where you’re heading.79

  • Stay hydrated during hot weather.80

  • Safe locations from lightning include substantial buildings and hard-topped vehicles. Safe locations don’t include small sheds, rain shelters, and open vehicles.

  • Wait until there are no thunderstorm clouds in the area before going to a location that isn’t lightning safe.81

 

Medical care

Since medical care is tasked with treating diseases, receiving medical care when one is ill presumably decreases risk of death. However, a review estimated that preventable medical errors contribute to roughly 440,000 deaths per year in the US, which is roughly one-sixth of total deaths in the US. It gave a lower limit of 210,000 deaths per year.

The frequency of deaths from preventable medical errors varied across studies used in the review; a hospital shown to put much effort into improving patient safety had a lower proportion of deaths from preventable medical errors than others.57 Thus, I suppose it would be beneficial to go to hospitals known for their dedication to patient safety. There are several rankings of hospital safety available on the internet, such as this one. Information on how to help prevent medical errors is found here and under the “What Consumers Can Do” section here. One rare medical error is having surgery done on the wrong body part. The New York Times gives tips for preventing this here.

Additionally, I suppose it may be good to live relatively close to a hospital so as to be able to quickly reach it in emergencies, though I’ve found no sources stating this.

A common form of medical care is the general health check. A comprehensive Cochrane review with 182,880 subjects concluded that general health checks are probably not beneficial.107 A meta-analysis found that general health checks are associated with small but statistically significant benefits in factors related to mortality, such as blood pressure and body mass index. However, it found no significant association with mortality itself.109 The New York Times acknowledged that health checks are probably not beneficial and gave some explanation of why general health checks are nonetheless still common.108 However, CDC and MedlinePlus recommend getting routine general health checks; they cited no studies to support their claims.104, 106 When I contacted CDC about it, it responded, “Regular health exams and tests can help find problems before they start. They also can help find problems early, when your chances for treatment and cure are better. By getting the right health services, screenings, and treatments, you are taking steps that help your chances for living a longer, healthier life,” a claim that doesn’t seem supported by evidence. It also stated, “Although CDC understands you are concerned, the agency does not comment on information from unofficial or non-CDC sources.” I never heard back from MedlinePlus.

 

Cryonics

Cryonics is the freezing of legally dead humans with the purpose of preserving their bodies so they can be brought back to life in the future, once technology makes it possible. Human tissue has been cryopreserved and then brought back to life, although this has never been done with a full human.59 The price of cryonics ranges from at least $28,000 to $200,000.60 More information on cryonics is on the LessWrong Wiki.

 

Money

Cryonics, medical care, safe housing, and basic needs all take money. Rejuvenation therapy may also be very expensive. It seems valuable to have a reasonable amount of money and income.

 

Future advancements

Keeping updated on further advancements in technology seems like a good idea, as not doing so would prevent one from making use of future technologies. Keeping updated on advancements on curing aging seems especially important, due to the massive number of casualties it inflicts and the current work being done to stop it. Updates on mind-uploading seem important as well. I don’t know of any very efficient method of keeping updated on new advancements, but periodically googling for articles about curing aging or Calico and searching for new scientific articles on topics in this guide seems reasonable. As knb suggested, it seems beneficial to periodically check on Fight Aging, a website advocating anti-aging therapies. I’ll try to do this and update this guide with any new relevant information I find.

There is much uncertainty ahead, but if we’re clever enough, we just might make it through alive.

 

References

 

  1. Actual Causes of Death in the United States, 2000.
  2. A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
  3. All pages in The Nutrition Source, a part of the Harvard School of Public Health.
  4. Will calorie restriction work on humans? 
  5. The pages Getting Started, Tests and Biomarkers, and Risks from The CR Society.
  6. The causal role of breakfast in energy balance and health: a randomized controlled trial in lean adults.
  7. Low Glycemic Index: Lente Carbohydrates and Physiological Effects of altered food frequency. Published in 1994. 
  8. Leisure Time Physical Activity of Moderate to Vigorous Intensity and Mortality: A Large Pooled Cohort Analysis.
  9. Exercising for Health and Longevity vs Peak Performance: Different Regimens for Different Goals.
  10. Water: How much should you drink every day? 
  11. MET-hour equivalents of various physical activities.
  12. Poisoning. NLM
  13. Carcinogen. Dictionary.com
  14. Types of Poisons. New York Poison Center
  15. The Most Common Poisons for Children and Adults. National Capital Poison Center.
  16. Known and Probable Human Carcinogens. American Cancer Society.
  17. Nutritional Effects of Food Processing. Nutritiondata.com.
  18. Tips to Prevent Poisonings. CDC.
  19. Carbon monoxide poisoning. Mayo Clinic.
  20. Carbon Monoxide Poisoning. CDC. 
  21. CDCWONDER. Query Criteria taken from all genders, all states, all races, all levels of urbanization, all weekdays, dates 1999 – 2010, ages 15 – 24. 
  22. Global health risks: mortality and burden of disease attributable to selected major risks.
  23. National Biomonitoring Program Factsheet. CDC
  24. Lead poisoning. Mayo Clinic.
  25. Mercury. Medline Plus.
  26. Snoring Is Not Associated With All-Cause Mortality, Incident Cardiovascular Disease, or Stroke in the Busselton Health Study.
  27. Do Stress Trajectories Predict Mortality in Older Men? Longitudinal Findings from the VA Normative Aging Study.
  28. Meta-analysis of Perceived Stress and its Association with Incident Coronary Heart Disease.
  29. Iron and cardiac ischemia: a natural, quasi-random experiment comparing eligible with disqualified blood donors.
  30. Association between psychological distress and mortality: individual participant pooled analysis of 10 prospective cohort studies.
  31. Self-reported habitual snoring and risk of cardiovascular disease and all-cause mortality.
  32. Is it true that occasionally following a fasting diet can reduce my risk of heart disease? 
  33. Positive Affect and Health.
  34. The Association of Anger and Hostility with Future Coronary Heart Disease: A Meta-Analytic Review of Prospective Evidence.
  35. Giving to Others and the Association Between Stress and Mortality.
  36. Social Relationships and Mortality Risk: A Meta-analytic Review.
  37. Daily Sitting Time and All-Cause Mortality: A Meta-Analysis.
  38. Dental Health Behaviors, Dentition, and Mortality in the Elderly: The Leisure World Cohort Study.
  39. Low Conscientiousness and Risk of All-Cause, Cardiovascular and Cancer Mortality over 17 Years: Whitehall II Cohort Study.
  40. Conscientiousness and Health-Related Behaviors: A Meta-Analysis of the Leading Behavioral Contributors to Mortality.
  41. Sleep duration and all-cause mortality: a critical review of measurement and associations.
  42. Sleep duration and mortality: a systematic review and meta-analysis.
  43. How Much Sleep Is Enough? National Lung, Blood, and Heart Institute. 
  44. How many hours of sleep are enough for good health? Mayo Clinic.
  45. Assess Your Sleep Needs. Harvard Medical School.
  46. A Life-Span Developmental Perspective on Social Status and Health.
  47. Suicide. Merriam-Webster. 
  48. Can testosterone therapy promote youth and vitality? Mayo Clinic.
  49. Breast Self-Exam. Susan G. Komen.
  50. Screening Guidelines. The Memorial Sloan Kettering Cancer Center.
  51. Breast Cancer Screening Overview. The National Cancer Institute.
  52. Testicular self-exam. The American Cancer Society.
  53. Life Span Extension Research and Public Debate: Societal Considerations
  54. SENS Research Foundation: About.
  55. Science for Life Extension Homepage.
  56. Google's project to 'cure death,' Calico, announces $1.5 billion research center. The Verge.
  57. A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
  58. When Surgeons Cut the Wrong Body Part. The New York Times.
  59. Cold facts about cryonics. The Guardian. 
  60. The cryonics organization founded by the "Father of Cryonics," Robert C.W. Ettinger. Cryonics Institute. 
  61. Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now
  62. International Journal of Machine Consciousness Introduction.
  63. The Philosophy of ‘Her.’ The New York Times.
  64. How to Survive the End of the Universe. Discover Magazine.
  65. A Space-Time Crystal to Outlive the Universe. Universe Today.
  66. Conjunction Fallacy. Less Wrong.
  67. Cognitive Biases Potentially Affecting Judgment of Global Risks.
  68. Genetic influence on human lifespan and longevity.
  69. First Drug Shown to Extend Life Span in Mammals. MIT Technology Review.
  70. Sirolimus (Oral Route). Mayo Clinic.
  71. Micromorts. Understanding Uncertainty.
  72. Falls. WHO.
  73. Smoke alarm outreach materials.  US Fire Administration.
  74. What causes choking? 17 possible conditions. Healthline.
  75. Choking. Better Health Channel.
  76. Aspiration pneumonia. HealthCentral.
  77. First aid - Recovery position. NHS Choices.
  78. Electric Shock. HowStuffWorks.
  79. Hypothermia prevention. Mayo Clinic.
  80. Extreme Heat: A Prevention Guide to Promote Your Personal Health and Safety. CDC.
  81. Understanding the Lightning Threat: Minimizing Your Risk. National Weather Service.
  82. The Case Against QuikClot. The Survival Mom.
  83. Does the Perception that Stress Affects Health Matter? The Association with Health and Mortality.
  84. Cancer Prevention. WHO.
  85. Infections That Can Lead to Cancer. American Cancer Society.
  86. Pollution. American Cancer Society.
  87. Occupations or Occupational Groups Associated with Carcinogen Exposures. Canadian Centre for Occupational Health and Safety. 
  88. Radon. American Cancer Society.
  89. Medical radiation. American Cancer Society.
  90. Ultraviolet (UV) Radiation. American Cancer Society.
  91. An Unhealthy Glow. American Cancer Society.
  92. Sun exposure and vitamin D sufficiency.  
  93. Cell Phones and Cancer Risk. National Cancer Institute.
  94. Nutrition for Everyone. CDC.
  95. How Can I Tell If My Body is Missing Key Nutrients? Oprah.com.
  96. Decaffeination, Green Tea and Benefits. Teas etc.
  97. Red and Processed Meat Consumption and Risk of Incident Coronary Heart Disease, Stroke, and Diabetes Mellitus.
  98. Lifestyle interventions to increase longevity.
  99. Chemicals in Meat Cooked at High Temperatures and Cancer Risk. National Cancer Institute.
  100. Are You Living in a Simulation? 
  101. How reliable are scientific studies?
  102. Genomics: What You Should Know. Forbes.
  103. Organic foods: Are they safer? More nutritious? Mayo Clinic.
  104. Health screening - men - ages 18 to 39. MedlinePlus. 
  105. Why do I need medical checkups. Banner Health.
  106. Regular Check-Ups are Important. CDC.
  107. General health checks in adults for reducing morbidity and mortality for disease (Review)
  108. Let’s (Not) Get Physicals.
  109. Effectiveness of general practice-based health checks: a systematic review and meta-analysis.
  110. Supplements: Nutrition in a Pill? Mayo Clinic.
  111. Nutritional Effects of Food Processing. SelfNutritionData.
  112. What Is the Healthiest Drink? SFGate.
  113. Leading Causes of Death. CDC.
  114. Bias Detection in Meta-analysis. Statistical Help.
  115. The summary of Sodium Intake in Populations: Assessment of Evidence. Institute of Medicine.
  116. Compared With Usual Sodium Intake, Low and Excessive -Sodium Diets Are Associated With Increased Mortality: A Meta-analysis.
  117. The Cochrane Review of Sodium and Health.

Open thread, Jan. 26 - Feb. 1, 2015

4 Gondolinian 26 January 2015 12:46AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

What are the resolution limits of medical imaging?

6 oge 25 January 2015 10:57PM

To all my physicists in the house, will it ever be possible for a device to scan the contents of a human head at the molecular level (say, 5 x 5 x 5nm) while the subject is still alive? I don't have a physics background, so if you could also just point me to the materials I need to read to be able to answer the question, that would be wonderful as well.

 

The background: I want to live to see the far future and so I'm researching the feasibility of alternatives to cryonics that'll let people "back up" themselves at regular intervals rather than at the point of death. If this is even theoretically possible then I can direct my time and donations towards medical imaging researchers. If not then I'll continue to support cryonics and plastination research.

 

I'm looking forward to your responses!

LINK: Superrationality and DAOs

2 somnicule 24 January 2015 09:47AM

The cryptocurrency ethereum is mentioned here occasionally, and I'm not surprised to see an overlap in interests from that sphere. Vitalik Buterin has recently published a blog post discussing some ideas regarding how smart contracts can be used to enforce superrationality in the real world, and in which cases that actually applies.

Weekly LW Meetups

2 FrankAdamek 23 January 2015 07:20PM

This summary was posted to LW Main on January 16th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


New, Brief Popular-Level Introduction to AI Risks and Superintelligence

16 LyleN 23 January 2015 03:43PM

The very popular blog Wait But Why has published the first part of a two-part explanation/summary of AI risks and superintelligence, and it looks like the second part will be focused on Friendly AI. I found it very clear, reasonably thorough and appropriately urgent without signaling paranoia or fringe-ness. It may be a good article to share with interested friends.

Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"

47 ciphergoth 22 January 2015 08:21PM

Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?

Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.

"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21

Purchasing research effectively open thread

10 John_Maxwell_IV 21 January 2015 12:24PM

Many of the biggest historical success stories in philanthropy have come in the form of funding for academic research.  This suggests that the topic of how to purchase such research well should be of interest to effective altruists.  Less Wrong survey results indicate that a nontrivial fraction of LW has firsthand experience with the academic research environment.  Inspired by the recent Elon Musk donation announcement, this is a thread for discussion of effectively using money to enable important, useful research.  Feel free to brainstorm your own questions and ideas before reading what's written in the thread.

The Unique Games Conjecture and FAI: A Troubling Obstacle

0 27chaos 20 January 2015 09:46PM

I am not a computer scientist and do not know much about complexity theory. However, it's a field that interests me, so I occasionally browse some articles on the subject. I was brought to https://www.simonsfoundation.org/mathematics-and-physical-science/approximately-hard-the-unique-games-conjecture/ by a link on Scott Aaronson's blog, and read the article to reacquaint myself with the Unique Games Conjecture, which I had partially forgotten about. If you are not familiar with the UGC, that article will explain it to you better than I can.

One phrase in the article stuck out to me: "there is some number of colors k for which it is NP-hard (that is, effectively impossible) to distinguish between networks in which it is possible to satisfy at least 99% of the constraints and networks in which it is possible to satisfy at most 1% of the constraints". I think this sentence is concerning for those interested in the possibility of creating FAI.

It is impossible to perfectly satisfy human values, as matter and energy are limited, and so will be the capabilities of even an enormously powerful AI. Thus, in trying to maximize human happiness, we are dealing with a problem that's essentially isomorphic to the UGC's coloring problem. Additionally, our values themselves are ill-formed. Human values are numerous, ambiguous, even contradictory. Given the complexities of human value systems, I think it's safe to say we're dealing with a particularly nasty variation of the problem, worse than what computer scientists studying it have dealt with.

Not all specific instances of complex optimization problems are subject to the UGC and thus NP hard, of course. So this does not in itself mean that building an FAI is impossible. Also, even if maximizing human values is NP hard (or maximizing the probability of maximizing human values, or maximizing the probability of maximizing the probability of human values) we can still assess a machine's code and actions heuristically. However, even the best heuristics are limited, as the UGC itself demonstrates. At bottom, all heuristics must rely on inflexible assumptions of some sort.


Superintelligence 19: Post-transition formation of a singleton

7 KatjaGrace 20 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the nineteenth section in the reading guide: post-transition formation of a singleton. This corresponds to the last part of Chapter 11.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Post-transition formation of a singleton?” from Chapter 11


Summary

  1. Even if the world remains multipolar through a transition to machine intelligence, a singleton might emerge later, for instance during a transition to a more extreme technology. (p176-7)
  2. If everything is faster after the first transition, a second transition may be more or less likely to produce a singleton. (p177)
  3. Emulations may give rise to 'superorganisms': clans of emulations who care wholly about their group. These would have an advantage because they could avoid agency problems, and make various uses of the ability to delete members. (p178-80) 
  4. Improvements in surveillance resulting from machine intelligence might allow better coordination, however machine intelligence will also make concealment easier, and it is unclear which force will be stronger. (p180-1)
  5. Machine minds may be able to make clearer precommitments than humans, changing the nature of bargaining somewhat. Maybe this would produce a singleton. (p183-4)

Another view

Many of the ideas around superorganisms come from Carl Shulman's paper, Whole Brain Emulation and the Evolution of Superorganisms. Robin Hanson critiques it:

...It seems to me that Shulman actually offers two somewhat different arguments, 1) an abstract argument that future evolution generically leads to superorganisms, because their costs are generally less than their benefits, and 2) a more concrete argument, that emulations in particular have especially low costs and high benefits...

...On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.

This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.

In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best at doing an average of tasks, may be much worse at each task than the best em for that task.

Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.

On the concrete coordination gains that Shulman sees from superorganism ems, most of these gains seem cheaply achievable via simple long-standard human coordination mechanisms: property rights, contracts, and trade. Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.

With ems there is the added advantage that em copies can agree to the “terms” of their life deals before they are created. An em would agree that it starts life with certain resources, and that life will end when it can no longer pay to live. Yes there would be some selection for humans and ems who peacefully accept such deals, but probably much less than needed to get loyal devotion to and shared values with a superorganism.

Yes, with high value sharing ems might be less tempted to steal from other copies of themselves to survive. But this hardly implies that such ems no longer need property rights enforced. They’d need property rights to prevent theft by copies of other ems, including being enslaved by them. Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.

Shulman seems to argue both that superorganisms are a natural endpoint of evolution, and that ems are especially supportive of superorganisms. But at most he has shown that ems organizations may be at a somewhat larger scale, not that they would reach civilization-encompassing scales. In general, creatures who share values can indeed coordinate better, but perhaps not by much, and it can be costly to achieve and maintain shared values. I see no coordinate-by-values free lunch...

Notes

1. The natural endpoint

Bostrom says that a singleton is a natural conclusion of the long-term trend toward larger scales of political integration (p176). It seems helpful here to be more precise about what we mean by singleton. Something like a world government does seem to be a natural conclusion to long-term trends. However, this seems different from the kind of singleton I took Bostrom to be talking about previously. A world government would by default only make a certain class of decisions, for instance about global-level policies. There has been a long-term trend for the largest political units to become larger; however, there have always been smaller units as well, making different classes of decisions, down to the individual. I'm not sure how to measure the mass of decisions made by different parties, but it seems like individuals may be making more decisions more freely than ever, and the large political units have less ability than they once did to act against the will of the population. So the long-term trend doesn't seem to point to an overpowering ruler of everything.

2. How value-aligned would emulated copies of the same person be?

Bostrom doesn't say exactly how 'emulations that were wholly altruistic toward their copy-siblings' would emerge. It seems to be some combination of natural 'altruism' toward oneself and selection for people who react to copies of themselves with extreme altruism (confirmed by a longer interesting discussion in Shulman's paper). How easily one might select for such people depends on how humans generally react to being copied. In particular, whether they treat a copy like part of themselves, or merely like a very similar acquaintance.

The answer to this doesn't seem obvious. Copies seem likely to agree strongly on questions of global values, such as whether the world should be more capitalistic, or whether it is admirable to work in technology. However I expect many—perhaps most—failures of coordination come from differences in selfish values—e.g. I want me to have money, and you want you to have money. And if you copy a person, it seems fairly likely to me the copies will both still want the money themselves, more or less.

From other examples of similar people—identical twins, family, people and their future selves—it seems people are unusually altruistic to similar people, but still very far from 'wholly altruistic'. Emulation siblings would be much more similar than identical twins, but who knows how far that would move their altruism?

Shulman points out that many people hold views about personal identity that would imply that copies share identity to some extent. The translation between philosophical views and actual motivations is not always complete however.

3. Contemporary family clans

Family-run firms are a place to get some information about the trade-off between reducing agency problems and having access to a wide range of potential employees. Given a brief perusal of the internet, it seems to be ambiguous whether they do better. One could try to separate out the factors that help them do better or worse.

4. How big a problem is disloyalty?

I wondered how big a problem insider disloyalty really was for companies and other organizations. Would it really be worth all this loyalty testing? I can't find much about it quickly, but 59% of respondents to a survey apparently said they had some kind of problems with insiders. The same report suggests that a bunch of costly initiatives such as intensive psychological testing are currently on the table to address the problem. Also apparently it's enough of a problem for someone to be trying to solve it with mind-reading, though that probably doesn't say much.

5. AI already contributing to the surveillance-secrecy arms race

Artificial intelligence will help with surveillance sooner and more broadly than in the observation of people's motives; see, e.g., here and here.

6. SMBC is also pondering these topics this week



In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. What are the present and historical barriers to coordination, between people and organizations? How much have these been lowered so far? How much difference has it made to the scale of organizations, and to productivity? How much further should we expect these barriers to be lessened as a result of machine intelligence?
  2. Investigate the implications of machine intelligence for surveillance and secrecy in more depth.
  3. Are multipolar scenarios safer than singleton scenarios? Muehlhauser suggests directions.
  4. Explore ideas for safety in a singleton scenario via temporarily multipolar AI. e.g. uploading FAI researchers (See Salamon & Shulman, “Whole Brain Emulation, as a platform for creating safe AGI.”)
  5. Which kinds of multipolar scenarios would be more likely to resolve into a singleton, and how quickly?
  6. Can we get whole brain emulation without producing neuromorphic AGI slightly earlier or shortly afterward? See section 3.2 of Eckersley & Sandberg (2013).
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the 'value loading problem'. To prepare, read “The value-loading problem” through “Motivational scaffolding” from Chapter 12. The discussion will go live at 6pm Pacific time next Monday 26 January. Sign up to be notified here.

Optimal eating (or rather, a step in the right direction)

4 c_edwards 19 January 2015 01:35AM

Over the past few months I've been working to optimize my life.  In this post I describe my attempt to optimize my day-to-day cooking and eating - my goal with this post is to get input and to offer a potential template for people who aren't happy with their current cooking/eating patterns.  I'm a) still pretty new to LW, and b) not a nutritionist; I am not claiming that this is optimal, only that it is a step in the right direction for me.  I'd love suggestions/advice/feedback.

Goal:

How do I quantify a successful cooking/eating plan?

Healthy

"Healthy" is a broad term.  I'm not interested in making food a complicated or stressful component of my life - quite the opposite.  Healthy means that I feel good, and that I'm providing my body with a good mix of building blocks (carbs, proteins, fats) and nutrients.  This means I want most/all meals to include some form of complex carbs, protein, and either fruits or veggies or both.  As I'm currently implementing an exercise plan based on the LW advice for optimal exercising, I'm aiming to get ~120 grams of protein per day (.64g/lb bodyweight/day).  There seems to be a general consensus that absorption of nutrients from whole foods is a) higher, and b) less dangerous, so when possible I'm trying to make foods from basic components instead of buying pre-processed stuff.

I have a health condition called hypoglycemia (low blood sugar) that makes me cranky/shaky/weak/impatient/foolish/tired when I am hungry, and can be triggered by eating simple sugars.  So, for me personally, a healthy diet includes rarely feeling hungry and rarely eating simple sugars (especially on their own - if eaten with other food the effect is much less severe).  This also means trying to focus on forms of fruit and complex carbs that have low glycemic indexes (yams are better than baked potatoes, for example).  I would guess that these attributes would be valuable for anyone, but for me they are a very high priority.

I'm taking some advice from the "Exos" (formerly Core Performance) fitness program, as described in the book Core performance essentials. One of the suggestions from this that I'm trying to use here (aside from the above complex carb+protein+fruit/veg meal structure) is to "eat the rainbow every day" - that is, mix up the fruits and veggies you eat, ideally getting as many colors per day as possible.  I'm also taking advice from the (awesome) LW article on increasing longevity: "eat fish, nuts, eggs, fruit, dark chocolate."

When possible I'm trying to focus on veggies that are particularly nutrient dense - spinach, bok choy, tomatoes, etc.  I am (for now) avoiding a few food products that I have heard (but have not yet confirmed!) are linked to potential health issues: tofu, whey proteins.  Note that I do not trust my information on the potential risks of these foods, but as neither of these are important to my diet anyways, I have put researching them as a low priority compared to everything else I want to learn.

So to recap: don't stress about it, but try to do complex carbs, proteins (120g/day for me), fruits, and veggies in every meal, avoid sugars where possible (although dark chocolate is good).  Fish, nuts and eggs are high priority proteins.
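
For concreteness, the protein arithmetic above is a one-liner; here is a minimal sketch in Python, where 0.64 g/lb is the rule quoted earlier and the bodyweight is a made-up example:

```python
def daily_protein_target(bodyweight_lb, grams_per_lb=0.64):
    """Protein target (grams/day) from the rule of thumb quoted above."""
    return bodyweight_lb * grams_per_lb

# A hypothetical ~188 lb person lands at the ~120 g/day target used in this post.
print(daily_protein_target(187.5))  # -> 120.0
```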

Cheap

I'm on a fairly limited budget.  This means trying to focus on the seasonal fruits and veggies (which are typically cheaper, and as an added bonus are likely healthier than the same fruit/veggie when out of season), aiming for less expensive meats, and not trying to eat organically (probably worth a separate discussion of organic vs not, meat vs not).  This also means making my own foods when the price benefit is high and the time cost is low.  I often make my own breads, for example (using a breadmaker) - it takes about 10 minutes of my time, directly saves me $3 or more compared to an equivalent quality loaf of bread (many breads can be made for ~$0.50-$1), plus saves me either the time of shopping multiple times per week to obtain fresh bread or the grossness of eating bread that I've frozen to keep it from molding.  Additionally, my budget means that I prefer that my weekly meal plan not depend on eating out or buying pre-made foods.

Quick

While I'm on a fairly limited monetary budget, I'm on a very limited time budget.  Cooking can be fun for me, but I prefer that my weekly schedule not REQUIRE much time - I can always replace a quick meal with a longer fun one if I feel like it.


The Plan

My general approach is to split my meals between really quick-and-easy (like chickpeas, canned salmon, and olive oil over prewashed spinach with an apple or two on the side) and batch foods where a somewhat longer time investment is split over many nights (like lentil stew in a crockpot).

To keep myself reasonably full I need about 6-7 meals per day: breakfast, snack, lunch, (optional snack depending on schedule), post-workout snack, dinner, snack.  These don't all need to be large, but I'm unhappy/unproductive without something for each of those meals, so I might as well make it easy to eat them.

In general I've found the following system to fulfill my criteria of success (healthy, cheap, quick), and it's been much less stressful to have a general plan in place - I can more easily figure out my shopping list, and it's not hard to ensure I always have food ready when I need it.

Breakfasts

Quick and easy is the key here.  I typically have either

 

  1. Yogurt with sunflower seeds and/or nuts, a handful of rolled oats (yes, uncooked, but add a bit of water at the end to make them tolerable), and sometimes some fruit on top.  Add honey for sweetener as needed (I typically don't, due to hypoglycemia).
  2. Bread (often homemade, but whatever floats your boat) with some peanut butter on top, a banana or other fruit item on the side.
  3. (if I have the time) Scrambled eggs mixed with chopped broccoli or bell peppers, bread, and a piece of fruit.
(also a big glass of water, which everyone seems to think is important) (also coffee, although I'm considering transitioning to a different caffeine source)

Lunch

 

I have three "batch" meals here (I make enough for 3+ lunches, so I cook lunches ~twice a week):

 

  1. salmon mash plus "spinach salad" (spinach with olive oil and either lemon juice or balsamic vinegar), fruit item.  salmon mash is a mix of cooked rice, canned salmon, black olives (for flavor - not sure that they're useful nutritionally), canned black or garbanzo beans, pasta sauce.  It sounds disgusting, but I find it pretty decent, and it's very cheap and filling, and super balanced in terms of carbs and proteins.  I do proportions of 1 cup rice, 1 large can salmon, 1-2 cans beans, 1/2 can black olives, 1/2 can pasta sauce (typically I do a double batch, which lasts me about 4-5 lunches.  Your mileage may vary)
  2. Baked yams and boneless skinless chicken breasts plus spinach salad or other veggies, fruit item
  3. pasta salad: pasta, raw chopped broccoli, tomatoes (grape/cherry tomatoes are easiest), chopped bell peppers, sliced ham, olives (for flavor again - not important nutritionally, I think), and some olive oil (you could use Caesar salad dressing if you like more flavor).  
If I haven't prepped a batch lunch, I just put salmon and beans on top of spinach, add a little olive oil, and throw in a slice of bread and a fruit on the side. Alternately, PBJ plus veggie and fruit.

 

Dinner

I aim to make one batch dinner per week and have it last for 4-5 meals, and then have several quick-and-easy dinners to fill the gap (this also makes it easy to accommodate dinners out or food related social gatherings).

Some ideas for Batch Dinners (crock pots are your friends here):

 

  • Lentil stew, bread, sliced carrots or bell peppers, fruit item (apple, banana, grapefruit, whatever).  That lentil soup recipe is ridiculously cheap, healthy, and quite tasty.
  • The potato-and-cabbage based rumpledethumps recipe (which freezes very well - make a huge batch and throw half of it in the freezer), plus a meat of some sort, a fruit item and maybe a vegetable of some kind
  • Other crock pot soups: chicken tortilla soup, chili, stew.  Add a veggie on the side, a fruit item, and maybe a slice of bread.
  • Large stirfry (these often take a bit longer than crock pot meals), rice or noodles, fruit on the side.
Note that since I only make one batch dinner per week, those bullets are sufficient to cover a month (and depending on what your tolerance for repetition is, that might be enough for years).

Some ideas for quick-and-easy dinners:
  • Salad made from salad greens, some form of precooked meat (salmon is good), beans, maybe sliced avocado and tomato, maybe sunflower seeds.
  • Rice/pasta; scrambled/cooked eggs or baked chicken; munching veggie like carrots, raw broccoli, bell pepper; fruit item.  Note on chicken: while there is a reasonably large elapsed time from start to finish, your involvement doesn't need to take long.  Typically I have a bunch of boneless skinless chicken breasts in the freezer - pull one out, throw it in a ziplock with soy sauce, garlic powder, ginger (or whatever other marinade you prefer), put the ziplock in a bowl of warm water, preheat oven to 370ish.  Once chicken is thawed, put in a pan and cook in the oven.  Ideally do enough rice/pasta and chicken for several nights.

 

Snacks

In general my snacks are super simple: just combine some kind of munching veggie (carrots, bell pepper, raw broccoli, snap peas, etc) with hummus, some fruit item, something protein-y (handful of nuts or sunflower seeds, usually) and (optionally) a slice of bread or other carb source.  For whatever snack I have after a workout, I want to make sure there is plenty of protein, so I include either hard boiled eggs, baked chicken, or salmon (on bread).


Implementation

So over the weekend, when I plan my week and go shopping, I choose the following:

 

  1. One batch dinner to cook (usually I need to buy the stuff for this)
  2. One type of quick-and-easy dinner to eat for 2-3 nights (often using staples/leftovers I already have)
  3. Two types of batch lunch to make from my list of three.
  4. 2-3 kinds of munching veggies - enough veggies total to include in ~3 meals per day (so like 6ish carrots per day, or 2 bell peppers, etc).  Think carrots, raw broccoli, bell peppers, green beans, sugar snap peas, cherry tomatoes, etc.
  5. 2-3 kinds of fruit items.  Think apples, bananas, grapefruit, grapes, oranges, etc.
  6. Two kinds of protein for post-workout snacks, chosen from: eggs, chicken, salmon
  7. Bread recipes to make 2-3 loaves (which might just be a single recipe repeated)
I also make sure I have enough yogurt and other breakfast supplies to get me through the week.  I drink milk with most of my meals at home, so I check my milk supply as well.

Boom!  Planning done, shopping list practically writes itself!  Once per week I make a small effort cooking a batch dinner, two or three nights per week I put an extremely minimal effort into quick-and-easy dinners, two evenings per week I make a batch of lunch foods and maybe prep workout protein (hard boil eggs or bake chicken breasts), and otherwise my "cooking" consists of taking things from the fridge and putting them onto a dish (and possibly microwaving).
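
Since the planning step is this mechanical, it can literally be written as code. Here is a minimal sketch in Python; the menu data is an illustrative stand-in for the lists above, not a real recipe database:

```python
import random

# Hypothetical menu data, standing in for the meal lists above.
BATCH_DINNERS = {
    "lentil stew": ["lentils", "carrots", "onion", "bread flour"],
    "rumpledethumps": ["potatoes", "cabbage", "cheese"],
}
BATCH_LUNCHES = {
    "salmon mash": ["rice", "canned salmon", "beans", "olives", "pasta sauce"],
    "yams and chicken": ["yams", "chicken breasts", "spinach"],
    "pasta salad": ["pasta", "broccoli", "tomatoes", "ham"],
}
VEGGIES = ["carrots", "bell peppers", "snap peas", "broccoli"]
FRUITS = ["apples", "bananas", "grapefruit", "oranges"]
PROTEINS = ["eggs", "chicken", "salmon"]

def plan_week():
    """Pick the week's meals per the scheme above and derive a shopping list."""
    plan = {
        "batch dinner": random.choice(list(BATCH_DINNERS)),
        "batch lunches": random.sample(list(BATCH_LUNCHES), 2),
        "veggies": random.sample(VEGGIES, 3),
        "fruits": random.sample(FRUITS, 2),
        "workout proteins": random.sample(PROTEINS, 2),
    }
    shopping = set(BATCH_DINNERS[plan["batch dinner"]])
    for lunch in plan["batch lunches"]:
        shopping.update(BATCH_LUNCHES[lunch])
    shopping.update(plan["veggies"] + plan["fruits"] + plan["workout proteins"])
    return plan, sorted(shopping)

plan, shopping_list = plan_week()
print(plan)
print(shopping_list)
```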

 


 

Conclusions

I'm still tweaking my system, but it has been a marked improvement from the last-minute scrambling and suboptimal meals that tended to characterize my eating before this.  It's also a big step up in terms of utility from the more elaborate and time-consuming meals I sometimes cooked to compensate for feelings of inadequacy generated by aforementioned scrambling/suboptimal meals.  I tend to feel fairly energetic and healthy, and it's a huge reassurance to me to know that I always have food planned out and typically it's available to me without needing to do any cooking.  It appears that it's considerably cheaper, too, although there are several confounding factors that would also drive my grocery bills down (transitioning to not-organic foods, trying to hit sales, etc).

Are there things I'm missing?  Suggestions for meals?  (note that I'm a bit wary of meal-replacement shakes) Alternative systems that people have found to hit that sweet spot of healthy, quick, and inexpensive? Is this something that might be useful for you?


EDIT:  Tuna is high in mercury, and shouldn't be eaten in nearly the quantities I had originally planned.  I've replaced canned tuna with canned salmon.

Open thread, Jan. 19 - Jan. 25, 2015

3 Gondolinian 19 January 2015 12:04AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

[Link] An argument on colds

14 Konkvistador 18 January 2015 07:16PM

Source.

It's illegal to work around food when showing symptoms of contagious diseases. Why not the same for everyone else? Each person who gets a cold infects one other person on average. We could probably cut infection rates and the frequency of colds in half if sick people didn't come in to work.

And if we want better biosecurity, why not also require people to be able to reschedule flights if a doctor certifies they have a contagious disease?

Due to the 'externalities', the case seems very compelling.

Moving my commentary to a separate comment, so as to disambiguate votes on my commentary and the original argument.

LINK: Diseases not sufficiently researched

2 polymathwannabe 17 January 2015 04:03PM

This Chart Shows The Worst Diseases That Don't Get Enough Research Money

We have already covered this topic several times on LW, but what prompted me to link this was this remark:

Of course, where research dollars flow isn't —and shouldn't be— dictated simply in terms of which diseases lay claim to the most years, but also by, perhaps most importantly, where researchers see the most potential for a breakthrough.

[Edit: a former, dumber version of me had asked, "I wonder what criterion the author would prefer," before the correct syntax of the sentence was pointed out to me.]

Opinions?

... And Everyone Loses Their Minds

10 Ritalin 16 January 2015 11:38PM

Chris Nolan's Joker is a very clever guy, almost Monroesque in his ability to identify hypocrisy and inconsistency. One of his most interesting scenes in the film has him point out how people estimate horrible things differently depending on whether they're part of what's "normal", what's "expected", rather than on how inherently horrifying they are, or how many people are involved.

Soon people extrapolated this observation to other such apparent inconsistencies in human judgment, where a behaviour that once was acceptable, with a simple tweak or change in context, becomes the subject of a much more serious reaction.

I think there's rationalist merit in giving these inconsistencies a serious look. I intuit that there's some sort of underlying pattern to them, something that makes psychological sense, in the roundabout way that most irrational things do. I think that much good could come out of figuring out what that root cause is, and how to predict this effect and manage it.

Phenomena that come to mind are, for instance, from an Effective Altruism point of view, the expenses incurred in counter-terrorism (including some wars that were very expensive in treasure and lives) and the number of lives said expenses save, compared with the number of lives that could be saved by spending that same amount on improving road safety, increasing public healthcare spending where it would do the most good, building better lightning rods (in the USA you're four times more likely to be struck by lightning than killed by terrorists), or legalizing drugs.

What do y'all think? Why do people have their priorities all jumbled-up? How can we predict these effects? How can we work around them?

New LW Meetup: Dallas

2 FrankAdamek 16 January 2015 05:11PM

This summary was posted to LW Main on January 9th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

LINK: Guinea worm disease close to eradication

4 polymathwannabe 16 January 2015 04:08PM

The disease, that is, maybe not the worm itself. Anyway, Team Human scores its second point against Team Disease:

http://www.sciencealert.com/guys-we-re-really-close-to-eradicating-the-second-disease-ever-from-the-planet

Slides online from "The Future of AI: Opportunities and Challenges"

13 ciphergoth 16 January 2015 11:17AM

In the first weekend of this year, the Future of Life Institute hosted a landmark conference in Puerto Rico: "The Future of AI: Opportunities and Challenges". The conference was unusual in that it was not made public until it was over, and the discussions were under Chatham House rules. The slides from the conference are now available. The list of attendees includes a great many famous names as well as lots of names familiar to those of us on Less Wrong: Elon Musk, Sam Harris, Margaret Boden, Thomas Dietterich, all three DeepMind founders, and many more.

This is shaping up to be another extraordinary year for AI risk concerns going mainstream!

Learn Three Things Every Day

-6 helltank 16 January 2015 09:36AM

In the Game of Thrones series, there is an ongoing side plot in which a character is trained by a secretive organization to become an assassin. As part of her training, one of the senior assassins demands that she report to him three new things she has learnt every day. By making a natural inference from the title of the article, you might assume that I am going to suggest that you do the same. I am, but with a crucial difference.

You see, my standards are higher than the Faceless Men's. Instead of filling up your list of learnt things with marginally useful items like gossip or other trivia, I am going to take it up a notch and demand that you learn three USEFUL things a day. This is, of course, an entirely self-enforced challenge, and I'll let you decide on the definition of useful. Personally, I use the condition of [>50% probability that X will enrich my life in a significant way], but if you want, you can make up your own criteria for "useful".

This may seem trite or useless, or even obvious (if you're an eager and fast learner, like most LWers). Now stop and think hard. For the entirety of the past 30 days, have you ever had a day or two where you just slacked off and didn't learn much? Maybe it was New Year's Day, or your birthday, and instead of learning you decided to spend the whole day partying. Perhaps it was just a lazy Sunday and you couldn't be bothered to learn something and instead just spent the day playing video games or mountain skiing (although there are useful things to be learnt from those, too) or whatever you like to do in your spare time.

I haven't taken an official survey, but my belief (and do correct me if I am very wrong about this) is that on average there's at least one day in thirty in which you did not learn three new, useful things. I would consider that day as pretty much wasted from a truth-seeker's point of view. You did not move forward in your quest for knowledge, you did not sharpen your rationality skills (and they always need sharpening, no matter how good you are) and you did not become stronger mentally. That's 12 days in a year, which is more than enough for the average LWer to pick up at least one new skill: say, learning about game theory, to pick a random example. In that year, you have had a chance to gain the knowledge of game theory, and you threw it away.

The point of this exercise is not to make you sweat and do a "mental workout" every day. The point is to prevent days that are wasted. There is a nearly infinite amount of knowledge to collect, and we do not have nearly infinite time. Maybe it's just my Asian mentality speaking here, but every second counts and you are in effect racing against time to gain as much knowledge as possible and put it to good use before you die.

When doing this, you are not allowed to merely work on your projects, unless they also teach you something. If you are a non-programmer, and you begin learning Python, that's a new thing. If you're already fluent in Python, and you program in Python, that's not counted. With one exception: if you learn something through programming (maybe you thought up a nifty new way to sanitize user inputs while working on a database) then that counts. If you're a writer, and you write, that doesn't count. Unless, of course, by writing you learn things about worldbuilding, or plot development, or character development, that you didn't know before. Yes, this counts, even though it's not directly rationality-related, because it enriches your life: it helps you achieve your writing goals (that's also a good condition for usefulness, and is a good example of instrumental rationality).

Today, I've learnt about the concept of centered worlds, I have learnt about the policy of indifference in similar worlds and I have learnt the technique of "super-rationality" as a means to predict the behavior of other agents in acausal trade. What have you learnt today?

Do it now. Don't wait, or you will waste this day, which is 86400 countable seconds in which to learn things. In fact, I've given you a head start today, because you can count this article in your list of learnt things.

Good luck to you. Let's learn together.

[This is my first post on LW and I hope that I taught you something interesting and useful. Again, I'm new to posting, so if I violated some unspoken rule of etiquette, or if you think this post is obvious and shitty, feel free to vote me down. But do leave a comment explaining why you did, so I can add it to my list of learnt things.]

An example and discussion of extension neglect

10 emr 16 January 2015 06:10AM

I recently used an automatic tracker to learn how I was spending my time online. I learned that my perceptions were systematically biased: I spend less time than I thought on purely non-productive sites, and far more time on sites that are quasi-productive.

For example, I felt that I was spending too much time reading the news, but I learned that I spend hardly any time doing so. I didn't feel that I was spending much time reading Hacker News, but I was spending a huge amount of time there!

Is this a specific case of a more general error?

A general framing: "Paying too much attention to the grouping whose items have the most extreme quality, when the value of focusing on this grouping is eclipsed by the value of focusing on a larger grouping of less extreme items".

So in this case, once I had formed the desire to be more productive, I overestimated how much potential productive time I could gain by focusing on those sites that I felt were maximally non-productive, and underestimated the potential of focusing on marginally more productive sites.

In pseudo-technical terms: We think about items in groups. But then we think of the total value of a group as being closer to average_value than to average_value * size_of_group.
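
A toy illustration of that framing, with invented time-tracker numbers:

```python
# Invented data: (minutes per visit, visits per week) for two kinds of sites.
sites = {
    "news":        (10, 3),    # feels maximally non-productive
    "hacker_news": (4, 60),    # feels only mildly non-productive
}

for name, (minutes_per_visit, visits) in sites.items():
    total = minutes_per_visit * visits  # average_value * size_of_group
    print(f"{name}: {minutes_per_visit} min/visit, {total} min/week")

# The "extreme" site totals only 30 min/week; the mild one totals 240 min/week,
# so the real productivity gains are in the larger, less extreme group.
```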

This falls under the category of Extension Neglect, which includes errors caused by ignoring the size of a set. Other patterns in this category are:

  • Base rate neglect: Inferring the category of an item as if all categories were the same size.
  • The peak-end rule: Giving the value of the ordered group as a function of max_value and end_value.
  • Not knowing how set size interacts with randomness.

For the error given above, some specific examples might be:

  • Health: Focusing too much on eating dessert at your favorite restaurant; and not enough on eating pizza three times a week.
  • Love: Fights and romantic moments; daily interaction.
  • Stress: Public speaking; commuting
  • Ethics: Improbable dilemmas; reducing suffering (or doing anything externally visible)
  • Crime: Serial killers; domestic violence

 

Group Rationality Diary, January 16-31

2 therufs 16 January 2015 01:54AM

This is the public group rationality diary for January 16-31.

It's a place to record and chat about it if you have done, or are actively doing, things like: 

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: January 1-15

Rationality diaries archive

Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial

52 ciphergoth 15 January 2015 04:33PM

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. 

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity." 

[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI-researchers Stuart Russell and Francesca Rossi. [...]

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy  (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna. 

[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories. 

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015

Je suis Charlie

-19 loldrup 15 January 2015 08:27AM

After the terrorist attacks at Charlie Hebdo, conspiracy theories quickly arose about who was behind the attacks.
People who are critical of the West easily swallow such theories, while pro-West people just as easily find them ridiculous.

I guess we can agree that the most rational response would be to enter a state of aporia until sufficient evidence is at hand.

Yet very few people do so. People are guided by their previous understanding of the world when judging new information. It sounds like a fine Bayesian approach for getting through life, but for real scientific knowledge we can't rely on *prior* reasoning (even though it might involve Bayesian reasoning). Real science works by investigating evidence.

So, how do we characterise the human tendency to jump to conclusions that have simply been supplied by their sense of normativity? Is there a previously described bias that covers this case?

Selfish preferences and self-modification

4 Manfred 14 January 2015 08:42AM

One question I've had recently is "Are agents acting on selfish preferences doomed to having conflicts with other versions of themselves?" A major motivation of TDT and UDT was the ability to just do the right thing without having to be tied up with precommitments made by your past self - and to trust that your future self would just do the right thing, without you having to tie them up with precommitments. Is this an impossible dream in anthropic problems?

 

In my recent post, I talked about preferences where "if you are one of two copies and I give the other copy a candy bar, your selfish desires for eating candy are unfulfilled." If you would buy a candy bar for a dollar but not buy your copy a candy bar, this is exactly a case of strategy ranking depending on indexical information.

This dependence on indexical information is not equivalent to UDT, and is thus incompatible with peace and harmony.

 

To be thorough, consider an experiment where I am forked into two copies, A and B. Both have a button in front of them, and 10 candies in their account. If A presses the button, it deducts 1 candy from A. But if B presses the button, it removes 1 candy from B and gives 5 candies to A.

Before the experiment begins, I want my descendants to press the button 10 times (assuming candies come in units such that my utility is linear). In fact, after the copies wake up but before they know which is which, they want to press the button!

The model of selfish preferences that is not UDT-compatible looks like this: once A and B know who is who, A wants B to press the button but B doesn't want to do it. And so earlier, I should try and make precommitments to force B to press the button.

But suppose that we simply decided to use a different model. A model of peace and harmony and, like, free love, where I just maximize the average (or total, if we specify an arbitrary zero point) amount of utility that myselves have. And so B just presses the button.

(It's like non-UDT selfish copies can make all Pareto improvements, but not all average improvements)
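
To make the two models concrete, here is a minimal sketch of the button game above (utility linear in candies, as assumed):

```python
# From the setup above: A's button costs A one candy;
# B's button costs B one candy and gives A five.

def candies(a_presses, b_presses, start=10):
    a = start - a_presses + 5 * b_presses
    b = start - b_presses
    return a, b

a, b = candies(0, 10)      # B spends all 10 candies pressing
print(a, b, (a + b) / 2)   # -> 60 0 30.0
a, b = candies(0, 0)       # indexically-selfish B refuses to press
print(a, b, (a + b) / 2)   # -> 10 10 10.0
# The average-utility ("peace and love") model prefers the first outcome;
# once B knows it is B, the indexical model prefers the second.
```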

 

Is the peace-and-love model still a selfish preference? It sure seems different from the every-copy-for-themself algorithm. But on the other hand, I'm doing it for myself, in a sense.

And at least this way I don't have to waste time with precommitment. In fact, self-modifying to this form of preferences is such an effective action that conflicting preferences are self-destructive. If I have selfish preferences now but I want my copies to cooperate in the future, I'll try to become an agent who values copies of myself - so long as they date from after the time of my self-modification.

 

If you recall, I made an argument in favor of averaging the utility of future causal descendants when calculating expected utility, based on this being the fixed point of selfish preferences under modification when confronted with Jan's tropical paradise. But if selfish preferences are unstable under self-modification in a more intrinsic way, this rather goes out the window.

 

Right now I think of selfish values as a somewhat anything-goes space occupied by non-self-modified agents like me and you. But it feels uncertain. On the mutant third hand, what sort of arguments would convince me that the peace-and-love model actually captures my selfish preferences?

Quantum cat-stencil interference projection? What is this?

5 pre 14 January 2015 12:06AM

Sorry I don't hang around here much. I keep meaning to. You're still the ones I come to when I have no clue at all what a quantum-physics article I come across means though.

http://io9.com/heres-a-photo-of-something-that-cant-be-photographed-1678918200

So. Um. What?

They have some kind of double-slit experiment that gets double-slitted again then passed through a stencil before being recombined and recombined again to give a stencil-shaped interference pattern?

Is that even right?

Can someone many-worlds-interpretation describe that at me, even if it turns out its just a thought-experiment with a graphics mock-up?

I'm the new moderator

83 NancyLebovitz 13 January 2015 11:21PM

Viliam Bur made the announcement in Main, but not everyone checks main, so I'm repeating it here.

During the following months my time and attention will be heavily occupied by some personal stuff, so I will be unable to function as a LW moderator. The new LW moderator is... NancyLebovitz!

From today, please direct all your complaints and investigation requests to Nancy. Please not everyone during the first week. That can be a bit frightening for a new moderator.

There are a few old requests I haven't completed yet. I will try to close everything during the following days, but if I don't do it till the end of January, then I will forward the unfinished cases to Nancy, too.

Long live the new moderator!

Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance

3 Ander 13 January 2015 08:02PM

LessWrong is where I learned about Bitcoin, several years ago, and my greatest regret is that I did not investigate it more closely as soon as possible, and that people here did not yell at me louder that it was important and worth a look.  In that spirit, I will do so now.

 

First of all, several caveats:

* You should not go blindly buying anything that you do not understand.  If you don't know about Bitcoin, you should start by reading about its history, read Satoshi's whitepaper, etc.  I will assume that the rest of the readers who continue reading this have a decent idea of what Bitcoin is.

* Under absolutely no circumstances should you invest money into Bitcoin that you cannot afford to lose.  "Risk money" only!  That means that if you were to lose 100% of your money, it would not particularly damage your life.  Do not spend money that you will need within the next several years, or ever.  You might in fact want to mentally write off the entire thing as a 100% loss from the start, if that helps.

* Even more strongly, under absolutely no circumstances whatsoever should you borrow money in order to buy Bitcoins, such as using margin, credit card loans, your student loan, etc.  This is very much like taking out a loan, going to a casino and betting it all on black on the roulette wheel.  You would either get very lucky or potentially ruin your life.  It's not worth it, this is reality, and there are no laws of the universe preventing you from losing.

* This post is not "investment advice".

* I own Bitcoins, which makes me biased.  You should update to reflect that I am going to present a pro-Bitcoin case.

 

So why is this potentially a time to buy Bitcoins?  One could think of markets like a pendulum, where price swings from one extreme to another over time, with a very high price corresponding to over-enthusiasm, and a very low price corresponding to despair.  As Warren Buffett said, Mr. Market is like a manic depressive.  One day he walks into your office and is exuberant, and offers to buy your stocks at a high price.  Another day he is depressed and will sell them for a fraction of that. 

The root cause of this phenomenon is confirmation bias.  When things are going well, and the fundamentals of a stock or commodity are strong, the price is driven up, and this results in a positive feedback loop.  Investors take the price increase as confirmation of their belief that things are going well, reinforcing their bias.  The process repeats and builds upon itself during a bull market, until it reaches a point of euphoria, in which bad news is completely ignored or disbelieved.

The same process happens in reverse during a price decline, or bear market.  Investors receive the feedback that the price is going down => things are bad, and good news is ignored and disbelieved.  Both of these processes run away for a while until they reach enough of an extreme that the "smart money" (most well informed and intelligent agents in the system) realizes that the process has gone too far and switches sides. 

 

Bitcoin at this point is certainly somewhere in the despair side of the pendulum.  I don't want to imply in any way that it is not possible for it to go lower.  Picking a bottom is probably the most difficult thing to do in markets, especially before it happens, and everyone who has claimed that Bitcoin was at a bottom for the past year has been repeatedly proven wrong.  (In fact, I feel a tremendous amount of fear in sticking my neck out to create this post, well aware that I could look like a complete idiot weeks or months or years from now and utterly destroy my reputation, yet I will continue anyway).

 

First of all, let's look at the fundamentals of Bitcoin.  On one hand, things are going well. 

 

Use of Bitcoin (network effect):

One measurement of Bitcoin's value is the strength of its network effect.  By Metcalfe's law, the value of a network is proportional to the square of the number of nodes in the network. 

http://en.wikipedia.org/wiki/Metcalfe%27s_law

Over the long term, Bitcoin's price has generally followed this law (though with wild swings to both the upside and downside as the pendulum swings). 

In terms of network effect, Bitcoin is doing well.
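
As a toy illustration of Metcalfe's law (the node counts here are invented):

```python
def metcalfe_value(n, k=1.0):
    """Network value proportional to the square of the node count."""
    return k * n ** 2

# Doubling the user base quadruples the implied network value.
print(metcalfe_value(2_000_000) / metcalfe_value(1_000_000))  # -> 4.0
```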

 

Bitcoin transactions are hitting all time highs:  (28 day average of number of transactions).

https://blockchain.info/charts/n-transactions-excluding-popular?timespan=2year&showDataPoints=false&daysAverageString=28&show_header=true&scale=0&address=

 

The number of Bitcoin addresses is hitting all time highs:

https://blockchain.info/charts/n-unique-addresses?timespan=2year&showDataPoints=false&daysAverageString=28&show_header=true&scale=0&address=

 

Merchant adoption continues to hit new highs:

BitPay/Coinbase continue to report 10% monthly growth in the number of merchants that accept Bitcoin.

Prominent companies that began accepting Bitcoin in the past year include: Dell, Overstock, Paypal, Microsoft, etc.

 

On the other hand, due to the sustained price decline, many Bitcoin businesses that started up in the past two years with venture capital funding have shut down.  This is more an effect of the price decline than a cause, however.  In the past month especially there have been a number of bearish news stories, such as Bitpay laying off employees, exchanges Vault of Satoshi and CEX.io deciding to shut down, exchange Bitstamp being hacked and shut down for 3 days (though ultimately back up without losing customer funds), etc.

 

The cost to mine a Bitcoin is commonly seen as one indicator of price.   Note that the cost to mine a Bitcoin does not directly determine the *value* or usefulness of a Bitcoin.   I do not believe in the labor theory of value: http://en.wikipedia.org/wiki/Labor_theory_of_value

However, there is a stabilizing effect in commodities, in which over time, the price of an item will often converge towards the cost to produce it due to market forces. 

 

If a Bitcoin is being priced at a value much greater than the cost (in mining equipment and electricity) to create it, people will invest in mining equipment.  This results in increased 'difficulty' of mining and drives down the amount of Bitcoin that you can create with a particular piece of mining equipment.  (The amount of Bitcoins created is a fixed amount per unit of time, and thus the more mining equipment that exists, the less Bitcoin each miner will get).

If Bitcoin is being priced at a value below the cost to create it, people will stop investing in mining equipment.  This may be a signal that the price is getting too low, and could rise.

 

Historically, the one period of time where Bitcoin was priced significantly below the cost to produce it was in late 2011.  It was noted on LessWrong.  The price has not currently fallen to quite the same extent as it did back then (which may indicate that it has further to fall), however the current price relative to the mining cost indicates we are very much in the bearish side of the pendulum.

 

It is difficult to calculate an exact cost to mine a Bitcoin, because this depends on the exact hardware used, your cost of electricity, and a prediction of the future difficulty adjustments that will occur.  However, we can make estimates with websites such as http://www.vnbitcoin.org/bitcoincalculator.php

According to this site, no currently available Bitcoin miner will ever give you back as much money as it cost, factoring in both the hardware cost and the electricity cost.  More efficient miners which have not yet been released are estimated to pay off in about a year, and only if difficulty grows extremely slowly. 

 

There are two important breakpoints when discussing Bitcoin mining profitability.  The first is the point at which your return is enough that it pays for both the electricity and the hardware.  The second is the point at which you make more than your electricity costs, but cannot recover the hardware cost.

 

For example, let's say Alice pays $1000 for Bitcoin mining equipment.  Every day, this mining equipment can return $10 worth of Bitcoin, but it costs $5 of electricity to run.  Her gain for the day is $5, and it would take 200 days at this rate before the mining equipment paid for itself.  Once she has made the decision to purchase the mining equipment, the money spent on the miner is a sunk cost.  The money spent on electricity is not a sunk cost: she continues to have the decision every day of whether or not to run her mining equipment.  The optimal decision is to continue to run the miner as long as it returns more than the electricity cost. 

Over time, the payout she will receive from this hardware will decline, as the difficulty of mining Bitcoin increases.  Eventually, her payout will decline below the electricity cost, and she should shut the miner down.  At this point, if her total gain from running the equipment was higher than the hardware cost, it was a good investment.  If it did not recoup its cost, then it was worse than simply spending the money buying Bitcoin on an exchange in the first place.
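
Here is Alice's decision rule as a minimal Python sketch. The hardware, revenue and electricity numbers are hers from the example above; the 1% daily revenue decay from difficulty increases is invented so that the payout actually declines:

```python
def mine_until_unprofitable(hardware_cost=1000.0, daily_revenue=10.0,
                            daily_power=5.0, decay=0.99):
    """Run the miner while revenue exceeds electricity; report the net result."""
    days, total = 0, 0.0
    while daily_revenue > daily_power:
        total += daily_revenue - daily_power   # electricity is not sunk
        daily_revenue *= decay                 # difficulty-driven decline
        days += 1
    return days, total, total - hardware_cost  # hardware cost is sunk

days, gross, net = mine_until_unprofitable()
print(f"shut down after {days} days; gross ${gross:.0f}; net ${net:.0f}")
# Under these made-up numbers the miner never recoups its hardware cost,
# even though running it remains the right call on each individual day.
```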

 

This process creates a feedback into the market price of Bitcoins.  Imagine that Bitcoin investors have two choices: either they can buy Bitcoins (the commodity which has already been produced by others), or they can buy miners and produce Bitcoins for themselves.   If the Bitcoin price falls sufficiently that mining equipment will not recover its costs over time, investment money that would have gone into miners instead goes into Bitcoin, helping to support the price.  As you can see from mining cost calculators, we have passed this point already.  (In fact, we passed it months ago.)

 

The second breakpoint is when the Bitcoin price falls so low that it falls below the electricity cost of running mining equipment.  We have passed this point for many of the less efficient ways to mine.  For example, Cointerra recently shut down its cloud mining pool because it was losing money.  We have not yet passed this point for more recent and efficient miners, but we are getting fairly close to it. Crossing this point has occurred once in Bitcoin's history, in late 2011 when the price bottomed out near $2, before giving birth to the massive bull run of 2012-2013 in which the price rose by a factor of 500.

 

Market Sentiment: 

I was not active in Bitcoin back in 2011, so I cannot compare the present time to the sentiment at the November 2011 bottom.  However, sentiment currently is the worst that I have seen by a significant margin. Again, this does not mean that things could not get much, much worse before they get better!  After all, sentiment has been growing worse for months now as the price declines, and everyone who predicted that it was as bad as it could get and the price could not possibly go below $X has been wrong.  We are in a feedback loop which is strongly pumping bearishness into all market participants, and that feedback loop can continue and has continued for quite a while.

 

A look at market indicators tells us that Bitcoin is very, very oversold, almost historically oversold.  Again, this does not mean that it could not get worse before it gets better. 

 

As I write this, the price of Bitcoin is $230.  For perspective, this is down over 80% from the all time high of $1163 in November 2013.  It is still higher than the roughly $100 level it spent most of mid 2013 at.

* The average price of a Bitcoin since the last time it moved is $314.

https://www.reddit.com/r/BitcoinMarkets/comments/2ez90b/and_the_average_bitcoin_cost_basis_is/

The current price is a multiple of .73 of this price.  This is very low historically, but not the lowest it has ever been.  The lowest was about .39 in late 2011. 

 

* Short interest (the number of Bitcoins that were borrowed and sold, and must be rebought later) hit all time highs this week, according to data on the exchange Bitfinex, at more than 25000 Bitcoins sold short:

http://www.bfxdata.com/swaphistory/totals.php

 

* Weekly RSI (relative strength index), an indicator which tells if a stock or commodity is 'overbought' or 'oversold' relative to its history, just hit its lowest value ever.
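
For reference, here is a simple-average variant of the RSI calculation as a sketch (charting sites typically use Wilder's smoothed averages, but the idea is the same):

```python
def rsi(prices, period=14):
    """Relative Strength Index over a list of closing prices."""
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        gains.append(max(change, 0))
        losses.append(max(-change, 0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

# A steadily falling price series pins RSI at the "oversold" extreme.
print(rsi([230 - 2 * i for i in range(20)]))  # -> 0.0
```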

 

Many indicators are telling us that Bitcoin is at or near historical levels in terms of the depth of this bear market.  In percentage terms, the price decline is surpassed only by the November 2011 low.  In terms of length, the current decline is more than twice as long as the previous longest bear market.

 

To summarize: At the present time, the market is pricing in a significant probability that Bitcoin is dying.

But there are some indicators (such as # of transactions) which say it is not dying.  Maybe it continues down into oblivion, and the remaining fundamentals which looked bullish turn downwards and never recover.  Remember that this is reality, and anything can happen, and nothing will save you.

 

 

Given all of this, we now have a choice.  People have often compared Bitcoin to making a bet in which you have a 50% chance of losing everything, and a 50% chance of making multiples (far more than 2x) of what you started with. 

There are times when the payout on that bet is much lower, when everyone is euphoric and has been convinced by the positive feedback loop that they will win.  And there are times when the payout on that bet is much higher, when everyone else is extremely fearful and is convinced it will not pay off. 
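
One way to size such a bet (the Kelly criterion - my framing, not the author's) is to stake f = p - (1 - p)/b of your risk capital, where p is the win probability and b is the net payout multiple:

```python
def kelly_fraction(p_win, net_odds):
    """Kelly stake for a binary bet: win pays net_odds x stake, else lose stake."""
    return p_win - (1 - p_win) / net_odds

# The 50% / "far more than 2x" framing above, with an assumed 5x net payout:
print(kelly_fraction(0.5, 5.0))  # -> 0.4 of risk capital (many would bet less)
```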

 

This is a time to be good rationalists, and investigate a possible opportunity, comparing the present situation to historical examples, and making an informed decision.   Either Bitcoin has begun the process of dying, and this decline will continue in stages until it hits zero (or some incredibly low value that is essentially the same for our purposes), or it will live.  Based on the new all time high being hit in number of transactions, and ways to spend Bitcoin, I think there is at least a reasonable chance it will live.  Enough of a chance that it is worth taking some money that you can 100% afford to lose, and making a bet.  A rational gamble that there is a decent probability that it will survive, at a time when a large number of others are betting that it will fail.

 

And then once you do that, try your hardest to mentally write it off as a complete loss, like you had blown the money on a vacation or a consumer good, and now it is gone, and then wait a long time.

 

 

Less exploitable value-updating agent

5 Stuart_Armstrong 13 January 2015 05:19PM

My indifferent value learning agent design is in some ways too good. Agents transfer perfectly from u-maximisers to v-maximisers - but this makes them exploitable, as Benja has pointed out.

For instance, if u values paperclips and v values staples, and everyone knows that the agent will soon transfer from a u-maximiser to a v-maximiser, then an enterprising trader can sell the agent paperclips in exchange for staples, then wait for the utility change, and sell the agent back staples for paperclips, pocketing a profit each time. More prosaically, they could "borrow" £1,000,000 from the agent, promising to pay back £2,000,000 tomorrow if the agent is still a u-maximiser. And the currently u-maximising agent will accept, even though everyone knows it will change to a v-maximiser before tomorrow.

One could argue that exploitability is inevitable, given the change in utility functions. And I haven't yet found any principled way of avoiding exploitability which preserves the indifference. But here is a tantalising quasi-example.

As before, u values paperclips and v values staples. Both are defined in terms of extra paperclips/staples over those existing in the world (and negatively in terms of destruction of existing paperclips/staples), with their zero being at the current situation. Let's put some diminishing returns on both utilities: for each paperclip/staple created/destroyed up to the first five, u/v will gain/lose one utilon. For each subsequent paperclip/staple created/destroyed above five, they will gain/lose half a utilon.

We now construct our world and our agent. The world lasts two days, and has a machine that can create or destroy paperclips and staples for the cost of £1 apiece. Assume there is a tiny ε chance that the machine stops working at any given time. This ε will be ignored in all calculations; it's there only to make the agent act sooner rather than later when the choices are equivalent (a discount rate could serve the same purpose).

The agent owns £10 and has utility function u+Xv. The value of X is unknown to the agent: it is either +1 or -1, with 50% probability, and this will be revealed at the end of the first day (you can imagine X is the output of some slow computation, or is written on the underside of a rock that will be lifted).

So what will the agent do? It's easy to see that it can never get more than 10 utilons, as each £1 generates at most 1 utilon (we really need a unit symbol for the utilon!). And it can achieve this: it will spend £5 immediately, creating 5 paperclips, wait until X is revealed, and spend another £5 creating or destroying staples (depending on the value of X).
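
A quick brute-force check of that claim, as a sketch. Note that a pound spent on day two is worth the same whether X turns out to be +1 (create staples) or -1 (destroy existing ones), so the expectation collapses:

```python
def diminishing(n):
    """Utilons for n paperclips (or staples): 1 each up to 5, then 0.5 each."""
    return min(n, 5) + 0.5 * max(n - 5, 0)

# Spend s pounds on paperclips on day one, the remaining 10 - s on staples.
best_s = max(range(11), key=lambda s: diminishing(s) + diminishing(10 - s))
print(best_s, diminishing(best_s) + diminishing(10 - best_s))  # -> 5 10.0
```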

This looks a lot like a resource-conserving value-learning agent. It doesn't seem to be "exploitable" in the sense Benja demonstrated. It will still accept some odd deals - one extra paperclip on the first day in exchange for all the staples in the world being destroyed, for instance. But it won't give away resources for no advantage. And it's not a perfect value-learning agent. But it still seems to have interesting features of non-exploitability and value-learning that are worth exploring.

Note that this property does not depend on v being symmetric around staple creation and destruction. Assume v hits diminishing returns after creating 5 staples, but after destroying only 4 of them. Then the agent will have the same behaviour as above (in that specific situation; in general, this will cause a slight change, in that the agent will slightly overvalue having money on the first day compared to the original v), and will expect to get 9.75 utilons (50% chance of 10 for X=+1, 50% chance of 9.5 for X=-1). Other changes to u and v will shift how much money is spent on different days, but the symmetry of v is not what is powering this example.

'Dumb' AI observes and manipulates controllers

33 Stuart_Armstrong 13 January 2015 01:35PM

The argument that AIs provided with a reward channel will observe their controllers and learn to manipulate them is a valid one. Unfortunately, it's often framed in a way that feels counterintuitive or extreme, especially to AI designers. It typically starts with the standard reinforcement learning scenario, then posits that the AI becomes superintelligent and either manipulates the controller with super-social powers, or breaks out and gains control of its reward channel, killing or threatening its controllers.

And that is a fair argument. But conceptually, it leaps from a standard reinforcement learning scenario, to a science-fiction-sounding scenario. It might help to have intermediate scenarios: to show that even lower intelligence AIs might start exhibiting the same sort of behaviour, long before it gets to superintelligence.

So consider the following scenario. Some complex, trainable AI is tasked with writing automated news stories for a student newspaper. It trawls the web and composes its stories, then gets reward and feedback from the editors. Assume there are two editors for this newspaper, and they work on alternate days. The two editors have somewhat different ideas as to what constitutes a good story, so their feedbacks are different. After a while, the AI finds that it gets higher reward by using a certain style on Mondays, Wednesdays and Fridays, and another style on Tuesdays and Thursdays - this is a simple consequence of its reward mechanism.
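
A minimal sketch of how such day-dependent style preferences fall out of plain reward averaging; the editors' schedules and tastes here are invented for illustration:

```python
import random
from collections import defaultdict

EDITOR_BY_DAY = {"Mon": "A", "Tue": "B", "Wed": "A", "Thu": "B", "Fri": "A"}
PREFERRED_STYLE = {"A": "formal", "B": "casual"}  # invented editor tastes
STYLES = ["formal", "casual"]

totals = defaultdict(float)  # summed reward per (day, style)
counts = defaultdict(int)    # attempts per (day, style)

def avg(day, style):
    return totals[(day, style)] / counts[(day, style)] if counts[(day, style)] else 0.0

def pick_style(day, epsilon=0.1):
    """Epsilon-greedy choice over per-day average rewards."""
    if random.random() < epsilon:
        return random.choice(STYLES)
    return max(STYLES, key=lambda s: avg(day, s))

for _ in range(2000):
    day = random.choice(list(EDITOR_BY_DAY))
    style = pick_style(day)
    reward = 1.0 if style == PREFERRED_STYLE[EDITOR_BY_DAY[day]] else 0.0
    totals[(day, style)] += reward
    counts[(day, style)] += 1

# The agent ends up tracking each editor's schedule without being told one exists:
for day in EDITOR_BY_DAY:
    print(day, max(STYLES, key=lambda s: avg(day, s)))
```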

But the rota isn't perfect. Sometimes the Monday editor will edit a story so late on Monday that it's Tuesday, and sometimes the Tuesday editor will be up early and edit a story at the same time. This pattern is more common during the holidays, when the editors go to different time zones, often travel, and become more irregular in their editing.

So if the AI is complex and skilled enough, then, simply through this feedback, it will start building up a picture of its editors. It will figure out when they are likely to stick to a schedule, and when they will be more irregular. It will figure out the difference between holidays and non-holidays. Given time, it may be able to track the editors' moods and it will certainly pick up on any major change in their lives - such as romantic relationships and breakups, which will radically change whether and how it should present stories with a romantic focus.

It will also likely learn the correlation between stories and feedbacks - maybe presenting a story defined roughly as "positive" will increase subsequent reward for the rest of the day, on all stories. Or maybe this will only work on a certain editor, or only early in the term. Or only before lunch.

Thus the simple trainable AI with a particular focus - write automated news stories - will be trained, through feedback, to learn about its editors/controllers, to distinguish them, to get to know them, and, in effect, to manipulate them.

This may be a useful "bridging example" between standard RL agents and the superintelligent machines.

LW-ish meetup in Boulder, CO

5 fowlertm 13 January 2015 05:23AM

This Saturday I'm giving a presentation at the Boulder Future Salon, topic will be non-religious spirituality. The more LWians that can make it the better, because I'm really trying to get some community building done in the Boulder/Denver area. There's an insane amount of potential here.

Details.

Superintelligence 18: Life in an algorithmic economy

3 KatjaGrace 13 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the eighteenth section in the reading guide: Life in an algorithmic economy. This corresponds to the middle of Chapter 11.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Life in an algorithmic economy” from Chapter 11


Summary

  1. In a multipolar scenario, biological humans might lead poor and meager lives. (p166-7)
  2. The AIs might be worthy of moral consideration, and if so their wellbeing might be more important than that of the relatively few humans. (p167)
  3. AI minds might be much like slaves, even if they are not literally slaves. They may be selected for liking this. (p167)
  4. Because brain emulations would be very cheap to copy, it will often be convenient to make a copy and then later turn it off (in a sense killing a person). (p168)
  5. There are various other reasons that very short lives might be optimal for some applications. (p168-9)
  6. It isn't obvious whether brain emulations would be happy working all of the time. Relevant considerations include current human emotions in general and regarding work, probable selection for pro-work individuals, the evolutionary adaptiveness of happiness in the past and future (e.g. does happiness help you work harder?), and the absence of present sources of unhappiness such as injury. (p169-171)
  7. In the long run, artificial minds may not even be conscious, or have valuable experiences, if these are not the most effective ways for them to earn wages. If such minds replace humans, Earth might have an advanced civilization with nobody there to benefit. (p172-3)
  8. In the long run, artificial minds may outsource many parts of their thinking, thus becoming decreasingly differentiated as individuals. (p172)
  9. Evolution does not imply positive progress. Even those good things that evolved in the past may not withstand evolutionary selection in a new circumstance. (p174-6)

Another view

Robin Hanson on others' hasty distaste for a future of emulations: 

Parents sometimes disown their children, on the grounds that those children have betrayed key parental values. And if parents have the sort of values that kids could deeply betray, then it does make sense for parents to watch out for such betrayal, ready to go to extremes like disowning in response.

But surely parents who feel inclined to disown their kids should be encouraged to study their kids carefully before making such a choice. For example, parents considering whether to disown their child for refusing to fight a war for their nation, or for working for a cigarette manufacturer, should wonder to what extent national patriotism or anti-smoking really are core values, as opposed to being mere revisable opinions they collected at one point in support of other more-core values. Such parents would be wise to study the lives and opinions of their children in some detail before choosing to disown them.

I’d like people to think similarly about my attempts to analyze likely futures. The lives of our descendants in the next great era after this our industry era may be as different from ours as ours are from farmers’, or farmers’ are from foragers’. When they have lived as neighbors, foragers have often strongly criticized farmer culture, as farmers have often strongly criticized industry culture. Surely many have been tempted to disown any descendants who adopted such despised new ways. And while such disowning might hold them true to core values, if asked we would advise them to consider the lives and views of such descendants carefully, in some detail, before choosing to disown.

Similarly, many who live industry era lives and share industry era values, may be disturbed to see forecasts of descendants with life styles that appear to reject many values they hold dear. Such people may be tempted to reject such outcomes, and to fight to prevent them, perhaps preferring a continuation of our industry era to the arrival of such a very different era, even if that era would contain far more creatures who consider their lives worth living, and be far better able to prevent the extinction of Earth civilization. And such people may be correct that such a rejection and battle holds them true to their core values.

But I advise such people to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. I hope that my future analysis can assist such soul-searching examination. If after studying such detail, you still feel compelled to disown your likely descendants, I cannot confidently say you are wrong. My job, first and foremost, is to help you see them clearly.

More on whose lives are worth living here and here.

Notes

1. Robin Hanson is probably the foremost researcher on what the finer details of an economy of emulated human minds would be like. For instance, which company employees would run how fast, how big cities would be, whether people would hang out with their copies. See a TEDx talk, and writings here, here, here and here (some overlap - sorry). He is also writing a book on the subject, which you can read early if you ask him.

2. Bostrom says,

Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man...the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable....(p166)

It's true this might happen, but it doesn't seem like an especially likely scenario to me. As Bostrom has pointed out in various places earlier, biological humans would do quite well if they have some investments in capital, do not have too much of their property stolen or artfully maneuvered away from them, and do not themselves undergo too much population growth. These risks don't seem so large to me.

3. Paul Christiano has an interesting article on capital accumulation in a world of machine intelligence.

4. In discussing worlds of brain emulations, we often talk about selecting people for various characteristics - for instance, being extremely productive, hard-working, not minding frequent 'death', being willing to work for free and donate any proceeds to their employer (p167-8). However, there are only so many humans to select from, so we can't necessarily select for all the characteristics we might want. Bostrom also talks of using other motivation selection methods, and modifying code, but it is interesting to ask how far you could get using only selection. It is not obvious to what extent one could meaningfully modify brain emulation code initially.

I'd guess fewer than one in a thousand people would be willing to donate everything to their employer, given a random employer. This means that to get this characteristic, you would have to lose a factor of 1000 on selecting for other traits. Altogether you have about 33 bits of selection power in the present world (that is, 7 billion is about 2^33; you can divide the world in half about 33 times before you get to a single person). Let's suppose you use 5 bits on getting someone who both doesn't mind their copies dying (I guess 1 bit, or half of people) and who is willing to work an 80-hour week (I guess 4 bits, or one in sixteen people). Let's suppose you are using the rest of your selection (28 bits) on intelligence, for the sake of argument. You are getting a person of IQ 186. If instead you use 10 bits (2^10 = ~1000) on getting someone who donates all their money to their employer, you can only use 18 bits on intelligence, getting a person of IQ 167. Would it not often be better to have the worker who is twenty IQ points smarter and pay them above subsistence? (The sketch below checks the arithmetic.)
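A quick sketch of that arithmetic, assuming IQ is normally distributed with mean 100 and standard deviation 15; the bit counts come from the paragraph above, while the helper function is mine:

```python
from scipy.stats import norm

def iq_from_selection_bits(bits, mean=100.0, sd=15.0):
    """IQ of the best person findable with `bits` bits of selection,
    i.e. the upper-tail quantile at p = 2^-bits of a normal IQ curve."""
    return mean + sd * norm.isf(2.0 ** -bits)

TOTAL_BITS = 33      # ~7 billion people is roughly 2^33
OVERHEAD = 1 + 4     # copy-death tolerance (1 bit) + 80-hour weeks (4 bits)

for donation_bits in (0, 10):   # 10 bits is roughly one in a thousand
    iq_bits = TOTAL_BITS - OVERHEAD - donation_bits
    print(iq_bits, round(iq_from_selection_bits(iq_bits)))
# 28 bits gives IQ ~187 and 18 bits gives IQ ~167,
# reproducing the ~186/167 estimates above up to rounding.
```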

5. A variety of valuable uses for cheap-to-copy, short-lived brain emulations are discussed in Whole brain emulation and the evolution of superorganisms, the LessWrong discussion on the impact of whole brain emulation, and Robin's work cited above.

6. Anders Sandberg writes about moral implications of emulations of animals and humans.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in Sandberg & Bostrom (2008), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described by (e.g.) Ken Hayworth, or is it likely to be (3) something else?
  2. Extend and update our understanding of when brain emulations might appear (see Sandberg & Bostrom (2008)).
  3. Investigate the likelihood of a multipolar outcome.
  4. Follow Robin Hanson (see above) in working out the social implications of an emulation scenario.
  5. What kinds of responses to the default low-regulation multipolar outcome outlined in this section are likely to be made? e.g. is any strong regulation likely to emerge that avoids the features detailed in the current section?
  6. What measures are useful for ensuring good multipolar outcomes?
  7. What qualitatively different kinds of multipolar outcomes might we expect? e.g. brain emulation outcomes are one class.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the possibility of a multipolar outcome turning into a singleton later. To prepare, read “Post-transition formation of a singleton?” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 19 January. Sign up to be notified here.

Ethical Diets

1 pcm 12 January 2015 11:38PM

[Cross-posted from my blog.]

I've seen some discussion of whether effective altruists have an obligation to be vegan or vegetarian.

The carnivores appear to underestimate the long-term effects of their actions. I see a nontrivial chance that we're headed toward a society in which humans are less powerful than some other group of agents. This could result from slow AGI takeoff producing a heterogeneous society of superhuman agents. Or there could be a long period in which the world is dominated by ems before de novo AGI becomes possible. Establishing ethical (and maybe legal) rules that protect less powerful agents may influence how AGIs treat humans or how high-speed ems treat low-speed ems and biological humans [0]. A one in a billion chance that I can alter this would be worth some of my attention. There are probably other similar ways that an expanding circle of ethical concern can benefit future people.

I see very real costs to adopting an ethical diet, but it seems implausible that EAs are merely choosing alternate ways of being altruistic. How much does it cost MealSquares customers to occasionally bemoan MealSquares' use of products from apparently factory-farmed animals? Instead, it seems like EAs have some tendency to actively raise the status of MealSquares [1].

I don't find it useful to compare a more ethical diet to GiveWell donations for my personal choices, because I expect my costs to be mostly inconveniences, and the marginal value of my time seems small [2], with little fungibility between them.

I'm reluctant to adopt a vegan diet due to the difficulty of evaluating the health effects and due to the difficulty of evaluating whether it would mean fewer animals living lives that they'd prefer to nonexistence.

But there's little dispute that most factory-farmed animals are much less happy than pasture-raised animals. And everything I know about the nutritional differences suggests that avoiding factory-farmed animals improves my health [3].

I plan not to worry about factory-farmed invertebrates for now (shrimp, oysters, insects), partly because some of the harmful factory-farm practices, such as confining animals to cages not much bigger than the animals themselves, aren't likely with animals that small.

So my diet will consist of vegan food plus shellfish, insects, wild-caught fish, pasture-raised birds/mammals (and their eggs/whey/butter). I will assume vertebrate animals are raised in cruel conditions unless they're clearly marked as wild-caught, grass-fed, or pasture-raised [4].

I've made enough changes to my diet for health reasons that this won't require large changes. I already eat at home mostly, and the biggest change to that part of my diet will involve replacing QuestBars with a home-made version using whey protein from grass-fed cows (my experiments so far indicate it's inconvenient and hard to get a decent texture). I also have some uncertainty about pork belly [5] - the pasture-raised version I've tried didn't seem as good, but that might be because I didn't know it needed to be sliced very thin.

My main concern is large social gatherings. It has taken me a good deal of willpower to stick to a healthy diet under those conditions, and I expect it to take more willpower to observe ethical constraints.

A 100% pure diet would be much harder for me to achieve than an almost pure diet, and it takes some time for me to shift my habits. So for this year I plan to estimate how many calories I eat that don't fit this diet, and aim to keep that below 120 calories per month (about 0.2% of a typical 2000-calorie-per-day intake) [6]. I'll re-examine the specifics of this plan next Jan 1.

Does anyone know a convenient name for my planned diet?

 

footnotes

 

0. With no one agent able to conquer the world, it's costly for a single agent to repudiate an existing rule. A homogeneous group of superhuman agents might coordinate to overcome this, but with heterogeneous agents the coordination costs may matter.

1. I bought 3 orders of MealSquares, but have stopped buying for now. If they sell a version whose animal products are ethically produced (which I'm guessing would cost $50/order more), I'll resume buying them occasionally.

2. The average financial value of my time is unusually high, but I often have trouble estimating whether spending more time earning money has positive or negative financial results. I expect financial concerns will be more important to many people.

3. With the probable exception of factory-farmed insects, oysters, and maybe other shellfish.

4. In most restaurants, this will limit me to vegan food and shellfish.

5. Pork belly is unsliced bacon without the harm caused by smoking.

6. Yes, I'll have some incentive to fudge those estimates. My experience from tracking food for health reasons suggests possible errors of 25%. That's not too bad compared to other risks such as lack of willpower.

Apptimize -- rationalist startup hiring engineers

64 nancyhua 12 January 2015 08:22PM

Apptimize is a 2-year-old startup closely connected with the rationalist community, one of the first founded by CFAR alumni. We make “lean” possible for mobile apps -- our software lets mobile developers update or A/B test their apps in minutes, without submitting to the App Store. Our customers include big companies such as Nook and eBay, as well as Top 10 apps such as Flipagram. When companies have evaluated our product against competitors, they've chosen us every time.


We work incredibly hard, and we’re striving to build the strongest engineering team in the Bay Area. If you’re a good developer, we have a lot to offer.


Team

  • Our team of 14 includes 7 MIT alumni, 3 ex-Googlers, 1 Wharton MBA, 1 CMU CS alum, 1 Stanford alum, 2 MIT Masters, 1 MIT Ph.D. candidate, and 1 “20 Under 20” Thiel Fellow. Our CEO was also just named to the Forbes “30 Under 30”

  • David Salamon, Anna Salamon’s brother, built much of our early product

  • Our CEO is Nancy Hua, while our Android lead is "20 under 20" Thiel Fellow James Koppel. They met after James spoke at the Singularity Summit

  • HP:MoR is required reading for the entire company

  • We evaluate candidates on curiosity even before evaluating them technically

  • Seriously, our team is badass. Just look

Self Improvement

  • You will have huge autonomy and ownership over your part of the product. You can set up new infrastructure and tools, expense business products and services, and even subcontract some of your tasks if you think it's a good idea

  • You will learn to be a more goal-driven agent, and understand the impact of everything you do on the rest of the business

  • Access to our library of over 50 books and audiobooks, and the freedom to purchase more

  • Everyone shares insights they’ve had every week

  • Self-improvement is so important to us that we only hire people committed to it. When we say that it’s a company value, we mean it

The Job

  • Our mobile engineers dive into the dark, undocumented corners of iOS and Android, while our backend crunches data from billions of requests per day

  • Engineers get giant monitors, a top-of-the-line MacBook Pro, and we’ll pay for whatever else is needed to get the job done

  • We don’t demand prior experience, but we do demand the fearlessness to jump outside your comfort zone and job description. That said, our website uses AngularJS, jQuery, and nginx, while our backend uses AWS, Java (the good parts), and PostgreSQL

  • We don’t have gratuitous perks, but we have what counts: Free snacks and catered meals, an excellent health and dental plan, and free membership to a gym across the street

  • Seriously, working here is awesome. As one engineer puts it, “we’re like a family bent on taking over the world”


If you’re interested, send some Bayesian evidence that you’re a good match to jobs@apptimize.com
