If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "


As a cryonicist, I'm drafting a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

9turchin
These two lines seem contradictory to me. It is not clear to me whether I should upload you or preserve your brain.

* I don't understand how the cells of the brain produce qualia and consciousness, and have a certain concern that an attempt at uploading my mind into digital form may lose important parts of my self. If you haven't solved those fundamental problems of how brains produce minds, I would prefer to be revived as a biological, living being, rather than have my mind uploaded into software form.

* I understand that all choices contain risk. However, I believe that the "information" theory of identity is a more useful guide than theories of identity which tie selfhood to a physical brain. I also suspect that there will be certain advantages to being one of the first minds turned into software, and certain disadvantages. In order to try to gain those advantages, and minimize those disadvantages, I am willing to volunteer to let my cryonically-preserved brain be used for experimental mind-uploading procedures, provided that certain preconditions are met, including:
1DataPacRat
The intended meaning, which it seems I will need to rephrase to clarify: "If you are experimenting with uploading, and can meet these minimal common-sense standards, then I'm willing to volunteer ahead of time to be your guinea pig. If you can't meet them, then I'd rather stay frozen a little longer. Just FYI."
2WhySpace_duplicate0.9261692129075527
This is potentially quite important. MIRI, OpenAI, FHI, etc. are focusing largely on artificial paths to superintelligence, since that leads to the value loading problem. While this is likely the biggest concern in terms of expected utility, neuron-level simulations of minds may provide another route. This might actually be where the bulk of the probability of superintelligence resides, even if the bulk of the expected utility lies in preventing things like paperclip maximizers. Robin Hanson has some persuasive arguments that uploading may actually occur years before artificial intelligence becomes possible. (See The Age of Em.) If this is the case, then it may be highly valuable to have the first uploads be very familiar with the risks of the alignment problem. This could prevent two paths to misaligned AI:

1. Uploads running at faster subjective speeds greatly accelerating the advent of true AI by developing it themselves. Imagine a thousand copies of the smartest AI researcher running at 1000x human speed, collaborating with him or herself on the first AI.

2. The uploads themselves are likely to be significantly modifiable. Since it would always be possible to be reset to backup, it becomes much easier to experiment with someone's mind. Even if we start out only knowing how neurons are connected, but not much about how they function, we may quickly develop the ability to massively modify our own minds. If we mess with our utility functions, whether intentionally or unintentionally, this starts to raise concerns like AI alignment and value drift.

The obvious solution is to hand Bostrom's Superintelligence out like candy to cryonicists. Maybe even get Alcor to try and revive FAI researchers first. However, given a first-in-last-out policy, this may not be as important for us as for future generations. We obviously have a lot of time to sort this out, so this is likely a low priority this decade/century.
6DataPacRat
New version of the draft text: https://www.datapacrat.com/temp/Cryo Revival Preferences - draft 0.1.2.txt
2DataPacRat
Today's version: https://www.datapacrat.com/temp/Cryo Revival Preferences - draft 0.1.3.txt The change: Added new paragraph: There is no such thing as being able to have 100% certainty that a piece of software is without flaws or errors. One of the few methods for detecting a large proportion of any program's bugs is to allow many people, with all their varied perspectives and skills, to examine it, by proclaiming that the program is free and open source and releasing both the source code and binaries for inspection. Without that strategy, not only are bugs much more likely to remain, but when someone does manage to find a bug, it is likely to remain secret and uncorrected. Such uncorrected bugs can be used by unscrupulous people to do just about anything to any data stored on a computer. This is bad enough when that data is merely personal email, or even a bank's financial records; when the data is a sapient mind, the possibilities are horrifying. Given the possible downsides, I find it difficult to trust the motives of anyone who wishes to run an uploaded mind on a computer that uses closed-source software. Therefore, if there is a choice between uploading my mind using uninspectable, closed-source software and not being revived, I would choose not to be uploaded in that fashion, even if doing so increases the risk of never being revived at all. If the choice is instead between not being revived and uploading my mind using closed-source software that the uploaded mind can inspect, along with all the documentation necessary for the uploaded mind to learn how to understand the software, then I would reluctantly agree to the uploading procedure as preferable to risking never being revived at all.
2Lumifer
That's a claim often made ("with enough eyes, all bugs are shallow"), but it's not so clear-cut in practice. In real life a lot of open-source projects are very buggy and remain very buggy (and open to 'sploits) for a very long time. At the same time there is closed-source software which is considerably more bug-free (but very expensive) -- e.g. the code in fly-by-wire airplanes. Besides, physical control, generally speaking, trumps all. If your mind is running on top of, say, open-source Ubuntu 179.5 Zooming Zazzle but I have access to your computing substrate, that is, the physical machine which runs the code, the fact that the machine runs an open-source OS is quite irrelevant. You're looking for impossible guarantees. And remember that you are not making choices, but requests. You can't "trust the motives" or not -- if someone revives you with malicious intent, he can ignore your requests easily enough.
0DataPacRat
Yep. Yep. I'm not looking for guarantees at all. (Put another way, I'm well aware that 0 and 1 are not probabilities.) What I am doing is trying to gauge the odds; and given my own real-world experience, open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software, to the extent that I'm willing to make an important choice based on whether or not a piece of software is open-source. True, as far as it goes. However, this document I'm writing is also something of a letter to anyone who is considering reviving me, and given how history goes, they are very likely going to have to take into account factors that I currently can't even conceive of. Thus, I'm writing this doc in a fashion that not only lists my specific requests in regards to particular items, but also describes the reasoning behind the requests, so that the prospective reviver has a better chance of being able to extrapolate what my preferences about the unknown factors would likely be. If someone revives me with malicious intent, then all bets are off, and this document will nigh-certainly do me no good at all. So I'm focusing my attention on scenarios involving at least some measure of non-malicious intent.
0Lumifer
On the basis of this "tends" you make a rather drastic request to NOT revive you if you'll be running on top of some closed-source layer. Not to mention that you're assuming that "open-source" and "closed-source" concepts will still make sense in that high-tech future. As an example, let's say I give you a trained neural net. It's entirely open source, you can examine all the nodes, all the weights, all the code, everything. But I won't tell you how I trained that NN. Are you going to trust it?
0DataPacRat
That's true. But given the various reasonably-possible scenarios I can think of, making this extreme of a request seems to be the only way to express the strength of my concern. I'll admit it's not a common worry; of course, this isn't a common sort of document. (If you want to know more about what leads me to this conclusion, you could do worse than to Google one of Cory Doctorow's talks or essays on 'the war on general-purpose computation'.) You provide insufficient data about your scenario for me to make a decent reply. Which is why I included the general reasoning process leading to my requests about open- and closed-source - and in the latest version of the doc, have mentioned part of the reason for going into that detail is to let revivalists have some data to extrapolate what my choices would be in unknown scenarios. (In this particular case, the whole point of differentiating between open- and closed-source software is the factor of /trust/ - and in your scenario, you don't give any information on how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted.)
0Lumifer
I am well aware of the war on general computation, but I fail to see how it's relevant here. If you are saying you don't want to be alive in a world where this war has been lost, that's... a rather strong statement. To make an analogy, we're slowly losing the ability to fix, modify, and, ultimately, control our own cars. I think that is highly unfortunate, but I'm unlikely to declare a full boycott of cars and go back to horses and buggy whips. Since you're basically talking about security, you might find it useful to start by specifying a threat model. What do you mean by "such NNs"? Neural nets are basically general-purpose models and your question is similar to asking how trustworthy computers have been at performing their intended functions properly -- it's too general for a meaningful answer. In any case, the point is that the preference for open-source relies on it being useful, that is, the ability to gain helpful information from examining the code, and the ability to modify it to change its behaviour. You can examine a sufficiently complex trained NN all you want, but the information you'll gain from this examination is very limited and your ability to modify it is practically non-existent. It is effectively a black box even if you can peer at all the individual components and their interconnects.
0DataPacRat
I thought I had; it's the part around the word 'horrifying'. We actually already have a lot of the fundamental software required to run an "emulate brain X" program - stuff that accesses hardware, shuffles swap space around, arranges memory addresses, connects to networking, models a virtual landscape and avatars within, and so on. Some scientists have done extremely primitive emulations of neurons or neural clusters, so we've got at least an idea of what software is likely to need to be scaled up to run a full-blown human mind. None of this software has any particular need for neural-nets. I don't know how such NNs as you propose would be necessary to emulate a brain; I don't know what service they would add, how fundamental they would be, what sort of training data would be used, and so on. Put another way, as best as I can interpret your question, it's like saying "And what if future cars required an algae system?", without even saying whether the algae tubing is connected to the fuel, or the exhaust, or the radiator, or the air conditioner. You're right that NNs are general-purpose; that is, in fact, the issue I was trying to raise. Alright. In this model, in which it appears that the training data is unavailable, that the existing NN can't be retrained or otherwise modified, and that there doesn't seem to be any mention of being able to train up a replacement NN with different behaviours, then it appears to match the relevant aspects of "closed-source" software much more closely than "open-source", in that if a hostile exploiter finds a way to, say, leverage increased access and control of the computer through the NN, there is little-to-no chance of detecting or correcting the aspects of the NN's behaviour which allow that. I'll spend some time today seeing if I can rework the relevant paragraphs so that this conclusion can be more easily derived.
0Lumifer
That's not a threat model. A threat model is basically a list of adversaries and their capabilities. Typically, defensive measures help against some of them, but not all of them -- a threat model helps you figure out the right trade-offs and estimate who you are (more or less) protected from, and who you are vulnerable to. That stuff usually goes by the name of "operating system". Why do you think that brain emulations will run on top of something that's closely related to contemporary operating systems? You seem to worry a lot about your brain emulation being hacked from the outside, but you don't worry as much about what the rightful owner of the hardware and the software on top of which your em lives might do?
0DataPacRat
I'm merely a highly-interested amateur. Would you be willing to help me work out the details of such a model? Because even as a scifi fan, I can only make so many guesses about alternatives, and it seems at least vaguely plausible that the same info-evolutionary pressures that led to the development of contemporary operating systems will continue to exist for at least the next couple of decades. At least, plausible enough that I should cover it as a possibility in the request-doc. Without getting into the whole notion of property rights versus the right to revolution: if I thought whoever was planning to run a copy of me on a piece of hardware was fully trustworthy, why would I have included the 'neutral third-party' clause?
0Lumifer
You are writing, basically, a living will for a highly improbable situation. Conditional on that situation happening, I think that since you have no idea what conditions you will wake up into, it's best to leave the decision to future-you. Accordingly, the only thing I would ask for is the ability for future-you to decide his own fate (notably, including his right to suicide if he makes this choice).
0DataPacRat
In the latest draft, I've rewritten at least half from scratch, focusing on the reasons why I want to be revived in the first place, and thus under which circumstances reviving me would help those reasons. The whole point about being worried about hostile entities taking advantage of vulnerabilities hidden in closed-source software is that future-me might be even less trustable to work towards my values than the future-self of a dieter can be trusted not to grab an Oreo if any are left in their home. Note to self: include the word 'precommitment' in version 0.2.1.
0Lumifer
If whoever revives you deliberately modifies you, you're powerless to stop it. And if you're worried that future-you will be different from past-you, well, that's how life works. A future-you in five years will be different from current-you who is different from the past-you of five years ago. As to precommitment, I don't think you have any power to precommit, and I don't think it's a good idea either. Imagine if a seven-year-old past-you somehow found a way to precommit the current-you to eating a pound of candy a day, every day...
0DataPacRat
True, which is why I'm assuming a certain minimal amount of good-will on the part of whoever revives me. However, just because the reviver has control over the technology allowing my revival doesn't mean they're actually technically competent in matters of computer security - I've seen too many stories in /r/talesfromtechsupport of computer-company executives being utterly stupid in fundamental ways for that. The main threat I'm trying to hold off is, roughly, "good-natured reviver leaves the default password in my uploaded self's router unchanged, script-kiddie running automated attacks on the whole internet gains access, script turns me into a sapient bitcoin-miner-equivalent for that hacker's benefit". That's just one example of a large class of threats. No hostile intent by the reviver is required, just a manager-level understanding of computer security. Yes, I know. This is one reason that I am trying not to specify /what/ it is I value in the request-doc, other than 1) instrumental goals that are good for achieving many terminal goals, and 2) valuing my own life both as an instrumental and a terminal goal, which I confidently expect to remain as one of my fundamental values for quite some time to come. I'll admit that I'm still thinking on this one. Socially, precommitting is mainly useful as a deterrence, and I'm working out whether trying to precommit to work against anyone who modifies my mind without my consent, or any other variation of the tactic, would be worthwhile even if I /can/ follow through.
2Lumifer
Imagine a Faerie Queen popping into existence near you and saying: Yo, I have a favour to ask. See, a few centuries ago a guy wished to live in the far future, so I thought why not? it's gonna be fun! and I put him into stasis. It's time for him to wake up, but I'm busy so can you please reanimate him? Here is the scroll which will do it, it comes with instructions. Oh, and the guy wrote a lengthy letter before I froze him -- he seemed to have been very concerned about his soul being tricked by the Devil -- here it is. Cheers, love, I owe you one! ...and she pops out of existence again. You look at the letter (which the Faerie Queen helpfully translated into more or less modern English) and it's full of details about consecrated ground, and wards against evil eyes, and witch barriers, and holy water, and what kind of magic is allowed anywhere near his body, and whatnot. How seriously are you going to take this letter?
0DataPacRat
Language is a many-splendored thing. Even a simple shopping list contains more information than a mere list of goods; a full letter is exponentially more valuable. As one fictional character once put it, it's worth looking for the "underneath the underneath"; as another one put it, it's possible to deduce much of modern civilization from a cigarette butt. If you need a specific reason to pay attention to such a letter spelled out for you, then it could be looked at for clues as to how likely the reanimated fellow would need to spend time in an asylum before being deemed competent to handle his own affairs and released into modern society, or if it's safe to plan on just letting him crash on my couch for a few days. And that's without even touching the minor detail that, if a Faerie Queen is running around, then the Devil may not be far behind her, and the resurrectee's concerns may, in fact, be completely justified. :) PS: I like this scenario on multiple levels. Is there any chance I could convince you to submit it to /r/WritingPrompts, or otherwise do more with it on a fictional level? ;)
0gjm
It looks like you've changed the subject a bit -- from whether the letter should be taken seriously in the sense of doing what it requests, to whether it should be taken seriously in the sense of reading it carefully.
0DataPacRat
Why can't we have both?
0Lumifer
Oh, I'm sure the letter is interesting, but the question is whether you will actually set up wards and have a supply of holy water on hand before activating the scroll. Though the observation that the existence of the Faerie Queen changes things is a fair point :-) I don't know if the scenario is all that exciting, it's a pretty standard trope, a bit tarted-up. If you want to grab it and run with it, be my guest.
0DataPacRat
I'm still working out various aspects, details, and suchlike, but so you can at least see what direction my thoughts are going (before I've hammered these into good enough shape to include in the revival-request doc), here's a few paragraphs I've been working on: Sometimes, people will, with the best of intentions, perform acts that turn out to be morally reprehensible. As one historical example in my home country, with the stated justification of improving their lives, a number of First Nations children were sent to residential schools where the efforts to eliminate their culture ranged from corporal punishment for speaking the wrong language to instilling lessons that led the children to believe that Indians were worthless. While there is little I, as an individual, can do to make up for those actions, I can at least try to learn from them, to try to reduce the odds of more tragedies being done with the claim of "it was for their own good". To that end, I am going to attempt a strategy called "precommitment". Specifically, I am going to do two things: I am going to precommit to work against the interests of anyone who alters my mind without my consent, even if, after the alteration, I agree with it; and I am going to give my consent in advance to certain sharply-limited alterations, in much the way that a doctor can be given permission to do things to a body that would be criminal without that permission. I value future states of the universe in which I am pursuing things I value more than I value futures in which I pursue other things. I do not want my mind to be altered in ways that would change what I value, and the least hypocritical way to do that is to discourage all forms of non-consensual mind-alteration. I am willing to agree, that I, myself, should be subject to such forms of discouragement, if I were to attempt such an act. I have been able to think of one, single moral justification for such acts - if there is clear evidence that doing so will reduce
0Lumifer
And how are you going to do this? Precommitment is not a promise, it's making it so that you are unable to choose in the future.
0DataPacRat
Well, if you don't mind my tweaking your simple and absolute "unable" into something more like "unable, at least without suffering significant negative effects, such as a loss of wealth", then I am aware of this, yes. Precommitment for something on this scale is a big step, and I'm taking a bit of time to think the idea over, so that I can become reasonably confident that I want to precommit in the first place. If I do decide to do so, then one of the simpler options could be to, say, pre-authorize whatever third-party agents have been nominated to act in my interests and/or on my behalf to use some portion of edited-me's resources to fund the development of a version of me without the editing.
1Lumifer
If you're unable to protect yourself from being edited, what makes you think your authorizations will have any force or that you will have any resources? And if you actually can "fund the development of a version of me without the editing", don't you just want to do it unconditionally?
0DataPacRat
I think we're bumping up against some conflicting assumptions. At least at this stage of the drafting process, I'm focusing on scenarios where at least some of the population of the future has at least some reason to pay at least minimal attention to whatever requests I make in the letter. If things are so bad that someone is going to take my frozen brain and use it to create an edited version of my mind without my consent, and there isn't a neutral third-party around with a duty to try to act in my best interests... then, in such a future, I'm reasonably confident that it doesn't matter what I put in this request-doc, so I might as well focus my writing on other futures, such as ones in which a neutral third-party advocate might be persuadable to set up a legal instrument funneling some portion of my edited-self's basic-guaranteed-income towards keeping a copy of the original brain-scan safely archived until a non-edited version of myself can be created from it.
5moridinamael
If I were going to make such a document, I would make it minimally restrictive. I would rather be brought back even in less-than-ideal circumstances, so that I could observe how the world had developed, and then decide whether I wanted to stay. At least then I would have a me-like agent operating on my own behalf. If they bring me back as a qualia-less em, then at least there's a chance that the em will be able to say, "Hey, this is cool and everything, but this isn't actually what my predecessor wanted. So even though I don't have qualia, I'll make it my personal mission to try to bring myself back with qualia." Precommitting to such an attitude now, while you're alive, boosts the odds of this. At worst, if it turns out to be impossible to revive the "observer", there's a thing-like-you running around in the future spreading your values, even if it doesn't have your consciousness, and I can't see that as a bad thing.
0Houshalter
Well what if suicide is illegal in the future? And even if it isn't, suicide is really hard to go through with. A lot of people have preferences that they would prefer not to be revived with brain damage, but people with brain damage do not commonly kill themselves.
3Dagon
I see this combination of expressed preference and actions (would prefer not to live with brain damage, but then actually choose to live with brain damage) as a failure of imagination and incorrect far-mode statements, NOT as an indication that the prior statement was true but was thwarted by some outside force. Future-me instances have massively more information about what they're experiencing in the future than present-me has now. It's ludicrous for present-me to try to constrain future-me's decisions, and even more so to try to identify situations where present-me's wishes will be honored but future-me's decisions won't. You can prevent adverse revival by cremation or burial (in which case you also prevent felicitous revival). If an evil regime wants you, any contract language is useless. If an individual-respecting regime considers your revival, future-you would prefer to be revived and asked, rather than being held to a past-you document that cannot predict the details of the current situation very well.
1Lumifer
More to the point, what if suicide is impossible? It's not hard at all to prevent an em from committing suicide and, of course, if you have copies and backups, he can suicide all he wants...
4ChristianKl
You don't seem to describe what you would consider as a revived copy of you. How much of your personality has to stay intact?
4turchin
I would add lines about whether you would prefer to be revived together with your friends and family members, before them, or after. Maybe I would add a secret question to check whether you have been restored properly. I would also add all my digital-immortality backup information, which could be used to fill gaps in case some information is lost. I also expect that revival may happen maybe 20-30 years after my death, so I should add some kind of will about how to manage my property during my absence.
0DataPacRat
I'm afraid that none of my friends or family are interested in cryo. I already created one recognition protocol, but it's more for multiple copies of myself meeting. I suppose it would be easy enough to include an MD5 hash of a keyphrase in this doc. I already have provisions in place for my other data, which will end up in that "perpetual storage drawer" I mentioned. Preserving assets while I'm dead is an entirely different kettle of fish, and assumes that I will have any worth preserving, which, given my financial situation, I don't expect to be the case.
2ChristianKl
I think MD5 hashes are likely broken by the time of any resurrection. MD5 already has collision problems today.
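As a rough illustration of the fix, here is a minimal sketch using SHA-256 instead, which (unlike MD5) currently has no known practical collision or preimage attacks; the keyphrase below is obviously a placeholder for a privately kept one:

```python
import hashlib

# Hypothetical recognition keyphrase; the real one would never be written down publicly.
keyphrase = "correct horse battery staple".encode("utf-8")

# SHA-256 has no known practical collision or preimage attacks today,
# unlike MD5, whose collision resistance is already broken.
digest = hashlib.sha256(keyphrase).hexdigest()
print(digest)  # this hex digest is what would be stored in the revival document
```

Whether any present-day hash survives decades of cryptanalysis is its own gamble; committing to several different hash functions at once would hedge against any single one breaking.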
3scarcegreengrass
Interesting idea! I guess you could add a 'when in doubt' clause for whether you'd rather be revived in an early period (e.g., if resurrection is possible with an 80% success rate) or be downprioritized until resurrection is very mature and safe.
1DataPacRat
It shouldn't be too hard to add some quantitative numbers, or at least to indicate which numbers I'd like potential revivers to consider.
2pcm
My equivalent of this document focused more on the risks of unreasonable delays in uploading me. Cryonics organizations have been designed to focus on preservation, which seems likely to bias them toward indefinite delays. This might be especially undesirable in an "Age of Em" scenario. Instead of your request for a "neutral third-party", I listed several specific people, who I know are comfortable with the idea of uploading, as people whose approval would be evidence that the technology is adequate to upload me. I'm unclear on how hard it would be to find a genuinely neutral third party. My document is 20 years old now, and I don't have a copy handy. I suppose I should update it soon.
2siIver
Great idea. I will probably do a similar thing myself at some point, and it will probably look similar to yours. The only thing I see that might be missing is advice for a scenario in which the odds of revival go down with time, creating pressure to revive you sooner rather than later. In that case your wishes may contradict each other (since later revival could still increase the odds of living indefinitely). That seems far-fetched but not entirely impossible. Other than that, I'd say be more specific to avoid any possible misinterpretation. You never know how much bureaucracy will be involved in the process when it finally happens.
[anonymous]120

Astrobiology bloggery got interrupted by a SEVERE bout of a sleep disorder, developing systems to measure metabolic states of single yeast cells in order to freaking graduate soonish, and having a bit of a life for a while.

Astrobiology bloggery resumes within 1 week, with my blog moved from thegreatatuin.blogspot.com to thegreatatuin.wordpress.com, blogger being completely unusable when it comes to inserting graphs and the like. Dear gods I'm excited, the last year has seen a massive explosion in origin of life research and study of certain outer solar system bodies. To the point that I'm pretty sure the metabolism of the last universal common ancestor has been figured out and the origin of the ribosome (and therefore protein-coding genetics) as well.

Advice on running a personal WordPress account is welcome.

For a while now, I have been working on a potentially impactful project. The main limiting factor is my own personal productivity- a great deal of the risk is frontloaded in a lengthy development phase. Extrapolating the development duration based on progress so far does not yield wonderful results. It appears I should still be able to finish it in a not-absurd timespan, it will just be slower than ideal.

I've always tried to improve my productivity, and I've made great progress in that compared to ten or even five years ago, but at this point I've picked m... (read more)

Have you ever taken Adderall? I greatly suspect you have not.

People who fight chronic akrasia because of various degrees of ADHD and related mental disorders have a different response to stimulants than "normal" individuals. For me, Adderall puts me into cool, calm, clear focus. The kind of productive mode of being that most people get into by drinking a cup of coffee (except coffee makes me jittery and unfocused). Being on Adderall is just... "normal." Indeed, the first time I tried it I thought the dose was too low because I didn't feel a thing... until 8 hours later, when I realized I was still cranking away at good code and able to focus instead of my normal bouts of mid-day akrasia. I could probably count on my hands the number of times I had a full day of highly focused work without feeling stress or burn-out afterwards... now it's the new normal :)

For such people, low-dose amphetamines don't provide any high, nor are they accompanied by some sort of berserker productivity binge like popular media displays. In the correct dosages they also don't seem to come with any addiction or withdrawal -- I go off of it without any problems, other than reverting to the normal,... (read more)

1Throawey
You are correct that I have not taken Adderall, or any other amphetamines. I would probably be less hesitant if I already knew how I reacted to them. I do fully recognize ADD/ADHD as real, though. I have spent a great deal of time around people with it. Some are very, very severely impacted. (I have to laugh a bit whenever I see implications that it's somehow 'fake'- it can be about as subtle as a broken bone.) But my familiarity with it is also part of the reason why I have never really considered the possibility of having it. Even measured against 'normal' people, I seem to be very productive, and when I compare my difficulties with those of people I know with ADHD... It seems like mine would have to be a relatively mild case, or there would need to be some factor that is mitigating its impact. That said, from a hereditary perspective it would be a little weird if I don't have it to some degree. The situation and low cost of asking basically demand that I give it further investigation, at least.
6hg00
Drugs are prescribed based on a cost-benefit analysis. In general, the medical establishment is pretty conservative (there's little benefit to the doctor if your problem gets solved, but if they hurt you they're liable to get sued). In the usual case for amphetamines, the cost is the risk of side effects and the benefit is helping someone manage their ADHD. For you, the cost is the same but it sounds like the benefit is much bigger. So even by the standards of the risk-averse medical establishment, this sounds like a risk you should take. You're an entrepreneur. A successful entrepreneur thinks and acts for themselves. This could be a good opportunity to practice being less scrupulous. Paul Graham on what makes founders successful: I'd recommend avoiding Adderall as a first option. I've heard stories of people whose focus got worse over time as tolerance to the drug's effects developed. Modafinil, on the other hand, is a focus wonder drug. It's widely used in the nootropics community and bad experiences are quite rare. (/r/nootropics admin: "I just want to remind everyone that this is a subreddit for discussing all nootropics, not just beating modafinil to death.") The legal risks involved with Modafinil seem pretty low. Check out Gwern's discussion. My conclusion is that buying some Modafinil and trying it once could be really valuable, if only for comfort zone expansion and value of information. I have very little doubt that this is the right choice for you. Check out Gwern's discussion of suppliers. (Lying to your doctor is another option if you really want to practice being naughty.)
0fr00t
If they don't give me what I want after I say the correct sequence of words I won't be returning to them. It's easy to find a doctor who will work with you.
0Throawey
Thanks for the links. I do notice that the idea of trying modafinil does not result in nearly the same degree of automatic internal 'no' as amphetamines. That would suggest my inhibitions are somehow related to the relative perceived potency, or potential health effects... or I'm disinclined to do something that could signal 'drug abuser', which I associate much more strongly with amphetamines than modafinil. Hm. I've also been going around and asking the more conservative people in my circle about this situation as well, to try to give a more coherent voice to my subverbal objections. So far I've found that they actually support me trying things, which suggests I really should try to recalibrate those gut reactions a bit. Upon reflection, I think I could actually get modafinil completely legitimately. I feel a bit dumb for not resolving to do this sooner, given that I was fully aware of modafinil- even to the point of very nearly purchasing some a while ago, before I knew it was schedule 4- and given that I was fully aware of what modafinil was often used to treat. At this point, the choice is pretty massively overdetermined.
0Douglas_Knight
Amphetamine is officially more dangerous than modafinil (for good reason), but doctors actually respond worse to patients asking for modafinil than asking for amphetamine because it's weird. The easiest way to get modafinil is probably to start with amphetamine and later ask for modafinil because it's weaker and safer.
0Throawey
That's... pretty goofy. I would hope sleep specialists, at least, would tend to reach for modafinil before amphetamines.
0Douglas_Knight
Yes, I'm sure that narcoleptics are referred to sleep specialists who know that it is on-label for narcolepsy. Probably that makes them more likely to prescribe it off-label. But few people go to sleep specialists. Scott Alexander has written many times about how as a psychiatry resident he sees patients who need a stimulant, but can't take amphetamine. He brainstorms with his supervisor and suggests modafinil and even in this perfect setup, he gets pushback. But I wasn't talking about sleep problems, which includes the approved use of modafinil. I was talking about using it in place of amphetamine for ADHD, which is further off-label.
0hg00
Glad I could help :D
0ChristianKl
The idea that doctors who prescribe Adderall to ADHD patients are conservative about prescribing it seems an extraordinary claim. How many doctors do you think get sued for giving patients Adderall? There is a lot of money from drug companies lobbying so that drugs like Adderall don't get prescribed in a conservative fashion.
0hg00
I'm assuming you think the answer is "not many". If so, this shows it's not a very risky drug -- it rarely causes side effects that are nasty enough for a patient to want to sue their doctor. From what I've read about pharmaceutical lobbying, it consists primarily of things like buying doctors free meals in exchange for using the company's drug instead of a competitor's drug. I doubt many doctors are willing to run a serious risk of losing their career over some free meals.
1ChristianKl
No. It also consists of lobbying the relevant politicians to make it hard to sue doctors, and generally lobbying against policies that reduce harms caused by drugs. See "Drugmakers fought state opioid limits amid crisis."

That argument assumes that the only side effects worth worrying about are those that can be proven bad in court. Given that establishing causation of drug effects usually takes millions of dollars for well-controlled studies published in leading medical journals, and that those journals in practice let drug companies publish studies that don't follow the scientific standards the journals pledged to honor (the CONSORT standards), it's not easy to prove causation.
4moridinamael
This is not intended to be snarky or backhanded or anything. You did ask for insights. It sounds like you're seeking some kind of complex justification to do something that you want to do anyway. Currently your reasons are not-necessarily-rational and maybe not fully consciously acknowledged, but you feel the desire/compulsion anyway. I say just go ahead and do what your gut is suggesting, while keeping in mind that you can always go back. This isn't an irrevocable decision, so you lose almost nothing by trying.
0Throawey
There is probably some of that going on. More potent nootropics have long been a kind of forbidden fruit to me.
2Gurkenglas
Perhaps you expect to in the future be in a position where your expected impact is significantly larger, and so your gut tells you to be careful with anything whose long-term effects are not clear?
0Throawey
Possibly. I don't know if my gut is that smart and forward thinking, but that is a bit of a conscious concern.
Elo70

I have updated the list of common human goals.
http://lesswrong.com/r/discussion/lw/mnz/list_of_common_human_goals/

social looked like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people. Do you have an established social network? Do you have intimacy?

and now looks like:

Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people.

... (read more)
0ChristianKl
Those are interesting semantics.
4Elo
not necessarily in lw jargon, but it appeals to some.

Let's say I have a set of students, and a set of learning materials for an upcoming test. My goal is to run an experiment to see which learning materials are correlated with better scores on the test via multiple linear regression. I'm also going to make the simplifying assumption that the effects of the learning materials are independent.

I'm looking for an experimental protocol with the following conditions:

  1. I want to be able to give each student as many learning materials as possible. I don't want a simple RCT, but a factorial experiment where student

... (read more)
3gwern
You want some sort of adaptive or sequential design (right?), so the optimal design not being terribly helpful is not surprising: they're more intended for fixed up-front designing of experiments. They also tend to be oriented towards overall information or reduction of variance, which doesn't necessarily correspond to your loss function. Having priors affects the optimal design somewhat (usually, you can spend fewer datapoints on the variables with prior information; for a Bayesian experimental design, you can simulate a set of parameters from your priors and then simulate drawing n datapoints with a particular experimental design, fit the model, find your loss or your entropy/variance, record the loss/design, and repeat many times; then find the design with the best average loss).

If you are running the learning-material experiment indefinitely and want to maximize cumulative test scores, then it's a multi-armed bandit, and Thompson sampling on a factorial Bayesian model will work well & handle your 3 desiderata: you set your informative priors on each learning material, model as a linear model (with interactions?), and Thompson sample from the model+data.

If you want to find what set of learning materials is optimal as fast as possible by the end of your experiment, then that's the 'best-arm identification' multi-armed bandit problem. You can do a kind of Thompson sampling there too, best-arm Thompson sampling:

http://imagine.enpc.fr/publications/papers/COLT10.pdf
https://www.escholar.manchester.ac.uk/api/datastream?publicationPid=uk-ac-man-scw:227658&datastreamId=FULL-TEXT.PDF
http://nowak.ece.wisc.edu/bestArmSurvey.pdf
http://arxiv.org/pdf/1407.4443v1.pdf
https://papers.nips.cc/paper/4478-multi-bandit-best-arm-identification.pdf

One version goes: with the full posteriors, find the action A with the best expected loss; for all the other actions B..Z, Thompson sample their possible value; take the action with the best loss out of A..Z. This explores the othe
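Gwern's Thompson-sampling suggestion can be sketched concretely. The following is a minimal, illustrative Python loop assuming a conjugate Bayesian linear model with known noise variance; the arms, priors, and "true" effect sizes are invented placeholders, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Factorial design: each arm is a 0/1 assignment of two learning materials.
# Feature vector: [bias, material_A, material_B].
arms = [np.array([1.0, a, b]) for a in (0, 1) for b in (0, 1)]
d, sigma2 = 3, 25.0            # assumed known test-score noise variance

Lambda = np.eye(d) * 0.01      # prior precision: a weak N(0, 100*I) prior on weights
b_vec = np.zeros(d)            # accumulates X^T y / sigma2

def simulated_score(x):
    # Stand-in for a real student's test score; the weights are invented.
    return x @ np.array([50.0, 5.0, 2.0]) + rng.normal(0.0, np.sqrt(sigma2))

for student in range(500):
    cov = np.linalg.inv(Lambda)                 # posterior covariance
    mean = cov @ b_vec                          # posterior mean
    w = rng.multivariate_normal(mean, cov)      # Thompson draw from the posterior
    x = max(arms, key=lambda a: float(a @ w))   # assign the materials best under the draw
    y = simulated_score(x)
    Lambda += np.outer(x, x) / sigma2           # conjugate posterior update
    b_vec += x * y / sigma2

print(np.linalg.inv(Lambda) @ b_vec)  # posterior-mean estimate of material effects
```

For the best-arm variant gwern describes, you would instead compare the expected-best arm against Thompson draws for the remaining arms and pick the winner of that comparison.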
1MattG2
So after looking at the problem I'm actually working on, I realize an adaptive/sequential design isn't really what I'm after. What I really want is a fractional factorial model that takes a prior (and minimizes regret between information learned and cumulative score). It seems like the goal of multi-armed bandit is to do exactly that, but I only want to do it once, assuming a fixed prior which doesn't update over time. Do you think your monte-carlo Bayesian experimental design is the best way to do this, or can I utilize some of the insights from Thompson sampling to make this process a bit less computationally expensive (which is important for my particular use case)?
1gwern
I still don't understand what you're trying to do. If you're trying to maximize test scores by increasing them through picking textbooks and this is done many times, you want a multi-armed bandit to help you find what is the best textbook over the many students exposed to different combinations. If you are throwing out the information from each batch and assuming the interventions are totally different each time, then your decision is made before you do any learning and your optimal choice is simply whatever your prior says: the value of information is the subsequent decisions it affects, except you're not updating your prior so the information can't change any decisions after the first one and is worthless. Dunno. Simulation is the most general way of tackling the problem, which will work for just about anything, but can be extremely computationally expensive. There are many special cases which can reuse computations or have closed-form solutions, but must be considered on a case by case basis.
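The simulation recipe gwern outlines (draw parameters from the prior, simulate data under a candidate design, fit, record the loss, repeat) can likewise be sketched; the two candidate designs, prior, and loss below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma2, n = 3, 25.0, 60

# Two hypothetical designs: which feature vector each of n students receives.
designs = {
    "balanced": np.array([[1, a, b] for a in (0, 1) for b in (0, 1)] * (n // 4), float),
    "all_on":   np.tile(np.array([1.0, 1.0, 1.0]), (n, 1)),
}

def expected_loss(X, n_sims=2000):
    losses = []
    for _ in range(n_sims):
        w = rng.normal(0.0, 10.0, size=d)         # draw "true" parameters from the prior
        y = X @ w + rng.normal(0.0, np.sqrt(sigma2), size=len(X))
        # Fit by ridge regression (small penalty for numerical stability).
        w_hat = np.linalg.solve(X.T @ X + 0.01 * np.eye(d), X.T @ y)
        losses.append(np.sum((w_hat - w) ** 2))   # squared-error loss on the parameters
    return np.mean(losses)

for name, X in designs.items():
    print(name, expected_loss(X))  # choose the design with the lowest average loss
```

Here the "all_on" design gives everyone everything, so the individual material effects are unidentifiable and its average loss comes out far worse, which is exactly what this kind of simulation is meant to reveal before running the real experiment.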

I have a question for LWers who are non-native English speakers.

I am working on a software system for linguistically sophisticated analysis of English text. At the core of the system is a sentence parser. Unlike most other research in NLP, a central goal of my work is to develop linguistic knowledge and then build that knowledge into the parser. For example, my system knows that the verb ask connects strongly to subjectized infinitive phrases ("I asked him to take out the trash"), unlike most other verbs.

The system also has a nice parse visualiz... (read more)
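As a rough stand-in for the system being described (not the author's parser), an off-the-shelf dependency parser shows the kind of structure involved. A minimal sketch with spaCy, using its standard small English model on the example sentence above:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # off-the-shelf English pipeline
doc = nlp("I asked him to take out the trash")

# Print each token's dependency relation and its head. "take" should attach
# to "asked" as an open clausal complement, illustrating the strong
# verb-to-infinitive connection described above.
for tok in doc:
    print(f"{tok.text:>6} --{tok.dep_}--> {tok.head.text}")
```

spaCy's displacy module (spacy.displacy.render(doc)) draws the corresponding tree, which is roughly the kind of parse visualization being discussed.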

6Gunnar_Zarncke
Poll for it:
I'm a native speaker [pollid:1163]
Such a tool to visualize parse trees would be/have been helpful. [pollid:1164]
2Daniel_Burfoot
Thanks to Gunnar for setting up the poll and also to all who answered.
2Gunnar_Zarncke
You should look at the correlations in the raw data really. Also: Polls are easy. You should have created your own really. See https://wiki.lesswrong.com/wiki/Comment_formatting#Polls
0MrMind
How would you use the grammar visualization tool to aid study? Many people answered "unsure" to the poll because it's not clear how it should be used, or "Not really" because the first uses they thought about were not helpful. You should give the user the guidelines on how to better consume your product. Usually needs --> tools. Yours seems a case of inverted implication.

Did Zuckerberg make the right choice by letting a Berkeley, Stanford, and University of California collaboration decide how to spend his money? I guess BioHub will be similar to the NIH in how it allocates funding.

Zuckerberg could also have funded Aubrey de Grey. They could have funded research on how to make medical research better the way the Laura and John Arnold Foundation does.

TechCrunch:

The technologies Zuckerberg listed were “AI software to help with imaging the brain…to make progress on neurological diseases, machine learning to analyze large databa

... (read more)
4turchin
He did not. Also, the Buck Institute for Research on Aging is underfunded.
0ChristianKl
Having read a few more sources besides TechCrunch, I'm a bit more optimistic. Chan/Zuckerberg won't judge applications, and tool building is a valid goal. The Cell Atlas also looks like a valid project.

Six plant extracts delay yeast chronological aging through different signaling pathways

"Our recent study has revealed six plant extracts that slow yeast chronological aging more efficiently than any chemical compound yet described."

http://www.impactjournals.com/oncotarget/index.php?journal=oncotarget&page=article&op=view&path[]=10689&path[]=33840

article http://www.kurzweilai.net/these-six-plant-extracts-could-delay-aging

Sleep Learning: Your Brain Continues to Process Simple Tasks, Classify Words Subconsciously

"The experiment showed that when people were subjected to simple word classification tasks before sleeping, the brain continues to unconsciously make classifications even in sleep."

http://www.medicaldaily.com/sleep-learning-your-brain-continues-process-simple-tasks-classify-words-subconsciously-302746

Source: Kouider S, Andrillon T, Barbosa L, et al. Inducing task-relevant responses to speech in the sleeping brain. Current Biology, 2014.

UK posts their guidelines for robotic ethics, pay-walled at 200 bucks tho. Article follows

http://www.digitaltrends.com/computing/bsi-robot-ethics-guidelines/

edit: and robot disarms entrenched shooter by stealing his rifle

http://www.latimes.com/local/lanow/la-me-ln-robot-barricaded-suspect-lancaster-20160915-snap-story.html

Here is a real-world control problem: self-driving cars. Companies are currently taking dashcam footage of people driving and using it to train AIs to drive cars.

There is a serious problem with this. The AIs can learn to predict exactly what a human would do. But humans aren't actually optimal drivers. They make tons of mistakes. They have slow reaction times. They fail to notice things. They don't apply the optimal braking or acceleration, they speed, they don't make optimal turns, etc.

AIs trained on human data end up mimicking all of these imperfection... (read more)
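What's being described is essentially behavioral cloning: supervised regression from camera frames to the human's recorded controls. A minimal sketch of that training setup (PyTorch, with fake stand-in data and invented tensor shapes, not any real system's code):

```python
import torch
import torch.nn as nn

# Toy stand-in for a dashcam dataset: each sample is a camera frame plus the
# steering angle the human actually chose (mistakes and all).
frames = torch.randn(256, 3, 64, 64)   # fake 64x64 RGB frames
human_steering = torch.randn(256, 1)   # fake recorded human actions

model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 13 * 13, 1),        # predict the steering angle
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    opt.zero_grad()
    # The target is "what the human did", not "what a safe driver should do";
    # the model therefore learns human imperfections along with human skill.
    loss = loss_fn(model(frames), human_steering)
    loss.backward()
    opt.step()
```

Because the training target is the human's behavior rather than the human's intent, a cloned policy inherits slow reactions and suboptimal braking by construction.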

2ChristianKl
It's not my impression that self-driving cars simply try to copy what a human does in any case. The AIs don't violate speed limits and generally try to drive with as little risk as possible. Humans drive very differently.
0Houshalter
You might be thinking of Google's self-driving car, which seems like it was designed from the ground up with traditional programming. I am thinking of systems like Comma.ai's, which use machine learning to train self-driving cars by predicting what a human driver would do. Of course you can put a regulator on the gas pedal and prevent the AI from speeding. But other issues are more difficult to control. How do you enforce that the AI should "try to drive with as little risk as possible"? We have very few training examples of accidents, and we can't let the car experiment under real conditions. My guess on how to solve this issue is to develop a way to "speak" with the AI, so we can see what it is thinking and tell it what we would prefer it to do. But this is difficult, and there is little research on methods to do this yet.
0ChristianKl
The Google car also uses machine learning. That still doesn't mean that it tries to emulate a human driver. The article doesn't say that the car predicts what a human driver would do. There's the example of the Google car waiting for the woman in the wheelchair who chased ducks. That's behavior you get from the way Google's algorithm cares about safety, which you wouldn't get from emulating human drivers.
0Houshalter
Google uses machine learning, but its system isn't based on it. There is a difference between a special "stop sign detector" function and an "end-to-end" approach where a single algorithm learns everything. Comma.ai's business model is to pay people to upload their dashcam footage and to train neural networks on it. As far as I know, what I described is their approach.
0ChristianKl
I would be surprised if they set up their system in a way where they can't tell a car to approach a red light using less fuel than human drivers use. As far as accidents go, the idea that automatic braking should take over in emergency situations is already implemented in many cars on the road. It's unlikely that the system would react the way a human-driven car would have reacted a decade ago.

I have a name that I want to give my new product. That name is already trademarked for an unrelated use. Is it a bad idea to go ahead and use that product name? Is a trademark comprehensive enough that I should just pick a different name?

6ChristianKl
Registering a trademark doesn't cost that much. You can simply apply to register a trademark for your product for your usage and see whether the government will grant you a trademark for that usage.
4Gunnar_Zarncke
The cost depends on the number of classes and countries you want to register it in. For reference: there are 45 internationally agreed-upon classes.
5Elo
Is it googleable? If you google the name, will you show up easily? That's what having a name is all about, right?
0ChristianKl
You also don't want to get sued and be forced to change your name.

UN declares antibiotic resistance largest global threat

“Antimicrobial resistance poses a fundamental threat to human health, development, and security,”

http://news.nationalgeographic.com/2016/09/antibiotic-resistance-bacteria-disease-united-nations-health/?linkId=29137110

There seems to be a sizable number of people in the census who consider that there's a decent probability of another intelligent civilisation in our universe.

When it comes to existential risk discussions, there is often the argument that existential risk is important for the future of intelligent life. If there's other intelligent life out there, is existential risk still as important?

0[anonymous]
Important to who? Any intelligent system wants to stick around as long as possible in an indifferent universe.

It would be interesting to run a null experiment, consisting only of two control groups, so that we would know the typical difference between two equal groups. It would also be interesting to add two control groups to each experiment, so we could see how strong the effect really is.

For example, if we have a 10 percent difference between the main and control groups, it could look like a strong result. But if we have a second control group, and it has a 7 percent difference from the first control group, our result is not so strong after all.

I think that it is clear that we can't do... (read more)

0gwern
You can. Cross-validation, the bootstrap, permutation tests - these rely on that sort of procedure. They generate an empirical distribution of differences between groups or effect sizes which replace the assumption of being two normal distributions etc. It would be better to do those with both the experimental and control data, though.
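As a minimal sketch of the permutation-test version of gwern's point (fake data; the test statistic is the difference in group means):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two groups that really do come from the same distribution: "two control groups".
group_a = rng.normal(100.0, 15.0, size=50)
group_b = rng.normal(100.0, 15.0, size=50)

observed = group_a.mean() - group_b.mean()

# Permutation test: shuffle the pooled labels many times to build the
# empirical distribution of mean differences between two equal groups.
pooled = np.concatenate([group_a, group_b])
diffs = []
for _ in range(10_000):
    rng.shuffle(pooled)
    diffs.append(pooled[:50].mean() - pooled[50:].mean())

diffs = np.array(diffs)
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed diff = {observed:.2f}, two-sided p = {p_value:.3f}")
```

This captures turchin's intuition directly: a 10-point gap between main and control stops looking strong if shuffled labels routinely produce 7-point gaps between two equal groups.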