If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other hi-tech anti-aging companies? Are they good places to work, are they making progress, etc?

2Lumifer7y
I expect them to be nice places to work (because they are not subject to the vulgar and demeaning necessity of turning a profit), but I don't expect them to be making much progress in the near future.
6[anonymous]7y
I have spoken to someone who has spoken to some of the scientific higher-ups at Calico, and they are certainly excited about the longer-term funding models for biomedical research they think they can get there. I have also seen a scientific talk about a project taken up by Calico, given by a researcher who visited my university. I'm honestly not sure how much detail I should or can go into before looking up how much of what I saw has been published (I haven't thought about it in a while), but I saw very preliminary data from mice on the effects of a small molecule, found in a broad screen, in slowing the progression of neurodegenerative disease and traumatic brain injury. Having had no new information for ~2 years, but having seen what I saw and knowing what I know about cell biology, I suspect it probably will slow these diseases, probably does not affect lifespan much (especially for the healthy), and, in my estimation, has a good chance of increasing the rate of cancer progression (which needs more research; it hasn't been demonstrated). That trade-off would still be worth it for the diseases involved. EDIT: Alright, found press releases. https://www.calicolabs.com/news/2014/09/11/ http://www.cell.com/cell/abstract/S0092-8674(14)00990-8 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4163014/
6username27y
That assessment ("likely leads to more cancers") is actually quite common for approaches to radical longevity. I am encouraged about the long-term prospects of SENS in particular because its "regular maintenance" approach doesn't necessarily require mucking around with internal cellular processes, at least not as much as the more radical approaches do.

I just came up with a funny argument for thirdism in the sleeping beauty problem.

Let's say I'm sleeping beauty, right? The experimenter flips a coin, wakes me up once in case of heads or twice in case of tails, then tells me the truth about the coin and I go home.

What do I do when I get home? In case of tails, nothing. But in case of heads, I put on some scented candles, record a short message to myself on the answering machine, inject myself with an amnesia drug from my own supply, and go to sleep.

...The next morning, I wake up not knowing whether I'm still in the experiment or not.
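Below is a minimal Monte Carlo sketch of the counting this argument licenses. It assumes, as the argument does, that every awakening you cannot initially distinguish (the lab awakenings and the candle-lit morning at home) gets equal weight, and then conditions on having verified you are still in the experiment; the function name and trial count are purely illustrative.

```python
import random

def fraction_heads_given_in_experiment(trials=100_000):
    """Weight every indistinguishable awakening equally, then condition on
    Beauty confirming she is still in the experiment (no scented candles)."""
    results = []  # coin outcome for each awakening confirmed to be in the lab
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        if coin == "heads":
            # woken once in the lab, plus the self-induced amnesia morning at home
            awakenings = ["experiment", "home"]
        else:
            # woken twice in the lab, nothing staged at home
            awakenings = ["experiment", "experiment"]
        for place in awakenings:
            if place == "experiment":
                results.append(coin)
    return sum(r == "heads" for r in results) / len(results)

print(fraction_heads_given_in_experiment())  # ~0.33: the thirder answer
```

A halfer who rejects the equal weighting of awakenings will of course reject the simulation's sampling rule; that is where the disagreement in the replies below lives.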

2Thomas7y
You have changed the initial conditions. The initial conditions say nothing about any external memory.
0cousin_it7y
I'm not using any external memory during the experiment. Only later, at home. What I do at home is my business.
0Thomas7y
Then, it's not the experiment's business.
0cousin_it7y
If you deny that indistinguishable states of knowledge can be created, the sleeping beauty problem is probably meaningless to you anyway.
1Thomas7y
There are (at least) two (meaningful) versions of the sleeping beauty problem. One is yours. But they are two different problems.
0Xianda_GAO_duplicate0.53215057823957197y
This argument is essentially Cian Dorr's version with a weaker amnesia drug. In that experiment, a weaker amnesia drug is used on Beauty in the case of Heads, which only delays the recollection of memory for a few minutes, just as in your case the memory is delayed until the message is checked. That argument was published in 2002, before the majority of the literature on the topic. Suffice to say it has not convinced halfers. Even supporters like Terry Horgan admit the argument is only suggestive and runs a serious risk of a slippery slope.
0cousin_it7y
Thank you for the reference! Indeed it's very similar; the only difference is that my version relies on the beauty's precommitment rather than the experimenter's, but that probably doesn't matter. Shame on me for not reading enough.
0Xianda_GAO_duplicate0.53215057823957197y
Nothing shameful about that. Similar arguments, which Jacob Ross categorized as "hypothetical priors" (adding another waking in the case of Heads), have not been a main focus of discussion in the literature in recent years, and I imagine most people haven't read them. In fact, you should take it as a compliment: an academic who probably spent a lot of time on the problem came up with the same argument you did.
0entirelyuseless7y
I agree with Thomas -- even if this proved that thirdism is right when you are planning to do this, it would not prove that it is right if you are not planning to do this. In fact it suggests the opposite: since the update is necessary, thirdism is false without the update.
0cousin_it7y
The following principle seems plausible to me: creating any weird situation X outside the experiment shouldn't affect my beliefs, if I can verify that I'm in the experiment and not in situation X. Disagreeing with that principle seems like a big bullet to bite, but maybe that's just because I haven't found any X that would lead to anything except thirdism (and I've tried). It's certainly fair to scrutinize the idea because it's new, and I'd love to learn about any strange implications.
1entirelyuseless7y
"The next morning, I wake up not knowing whether I'm still in the experiment or not. " By creating a situation outside the experiment which is originally indistinct from being in the experiment, you affect how the experiment should be evaluated. The same thing is true, for example, if the whole experiment is done multiple times rather than only once.
0cousin_it7y
Yeah, if the whole experiment is done twice, and you're truthfully told "this is the first experiment" or "this is the second experiment" at the beginning of each day (a minute after waking up), then I think your reasoning in the first experiment (an hour after waking up) should be the same as though the second experiment didn't exist. Having had a minute of confusion in your past should be irrelevant.
0entirelyuseless7y
I disagree. I have presented arguments on LW in the past that if the experiment is run once in the history of the universe, you should reason as a halfer, but if the experiment is run many times, you will assign a probability in between 1/2 and 1/3, approaching one third as the number of times approaches infinity. I think that this applies even if you know the numerical identity of your particular run.
0cousin_it7y
Interesting! I was away from LW for a long time and probably missed it. Can you give a link, or sketch the argument here?
0entirelyuseless7y
Actually, I was probably mistaken. I think I was thinking of this post and in particular this thread and this one. (I was previously using the username "Unknowns".) I think I confused this with Sleeping Beauty because of the similarity of Incubator situations with Sleeping Beauty. I'll have to think about it but I suspect there will be similar results.

For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?

1Turgurth7y
I saw this same query in the last open thread. I suspect you aren't getting any responses because the answer is long and involved. I don't have time to give you the answer in full either, so I'll give you the quick version: I am in the process of signing up with Alcor, because after ten years of both observing cryonics organizations myself and reading what other people say about them, Alcor has given a series of cues that they are the more professional cryonics organization. So, the standard advice is: if you are young and healthy with a long life expectancy, and are not wealthy, choose C.I., because they are less expensive. If those criteria do not apply to you, choose Alcor, as they appear to be the more serious, professional organization. In other words: choose C.I. as the type of death insurance you want to have but probably won't use, or choose Alcor as the type of death insurance you probably will use.
2ImmortalRationalist7y
If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?

Up LW's alley: A Pari-mutuel like Mechanism for Information Aggregation: A Field Test Inside Intel

Abstract:

A new information aggregation mechanism (IAM), developed via laboratory experimental methods, is implemented inside Intel Corporation in a long-running field test. The IAM, incorporating features of pari-mutuel betting, is uniquely designed to collect and quantize as probability distributions dispersed, subjectively held information. IAM participants’ incentives support timely information revelation and the emergence of consensus beliefs over future

...
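The IAM in the paper is more elaborate than plain pari-mutuel betting, but for readers unfamiliar with the underlying idea, here is a minimal sketch of an ordinary pari-mutuel payout rule; the outcome buckets, participants, and stakes are made up for illustration.

```python
def parimutuel_payouts(bets, winning_outcome):
    """bets: list of (participant, outcome, stake). Winners split the whole
    pool in proportion to their stake on the winning outcome.
    Assumes at most one bet per participant on the winning outcome."""
    pool = sum(stake for _, _, stake in bets)
    winning_stake = sum(stake for _, outcome, stake in bets if outcome == winning_outcome)
    if winning_stake == 0:
        return {}  # nobody backed the realised outcome
    return {
        participant: stake / winning_stake * pool
        for participant, outcome, stake in bets
        if outcome == winning_outcome
    }

# e.g. forecasting next-quarter unit sales, bucketed into ranges:
bets = [("alice", "10-20k", 30), ("bob", "20-30k", 50), ("carol", "20-30k", 20)]
print(parimutuel_payouts(bets, "20-30k"))  # {'bob': 71.4..., 'carol': 28.5...}
```

Roughly, the final distribution of stakes across the outcome buckets is what gets read off as a consensus probability distribution, which is the aggregation feature the abstract points to; the mechanism described in the paper adds incentive features on top of this basic rule.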
[anonymous]7y20

Update on Instrumental Rationality sequence: about 40% done with a Habits 101 post. Turns out habits are denser than planning and have more intricacies. Plus, the techniques for creating / breaking habits are less well-defined and not as strong, so I'm still trying to "technique-ify" some of the more conceptual pieces.

0Kaj_Sotala7y
You might already be aware of them / their contents, but I found these two papers useful in creating a habit workshop:
* Wood & Rünger (2016) Psychology of Habit
* Wood & Neal (in press) Habit-Based Behavior Change Interventions
0[anonymous]7y
Thanks for the links! I'd bumped into both papers a little while back, and I'll indeed be citing them!

From Gwern's newsletter: did you know that algorithms can already obtain legal personhood?
Not scary at all.

5WalterL7y
How encouraging. Truly we are making great strides in respecting ever more threatened minorities. Algorithmic-Americans have much to contribute, once the biophobes cease their persecution.
2turchin7y
What worries me is that if a ransomware virus could own money, it could pay humans to install it on other people's computers, and also pay programmers to find new exploits and even to improve the virus itself. But for such a development legal personhood is not needed, only an illegal one.
0Kaj_Sotala7y
Malware developers already invest in R&D and buy exploits; I'm not sure what key difference it makes whether the malware or its owners do the investing.
0turchin7y
It worries me because it looks like a Seed AI with clearly non-human goals. The difference from current malware developers may be subtle, but it may grow once such a narrow-AI virus passes some threshold. You can't turn the virus off, whereas malware creators are localised and can be caught. They also share most human values with other humans and will stop at some level of possible destruction.
0lmn7y
I don't think this would work. It requires some way for the virus to keep the human it has entrusted with editing its programming from modifying it to simply send him all the money it acquires.
0turchin7y
The human has to leave part of the money with the virus, as the virus needs to pay for installing its ransomware and for other services. If the human takes all the money, the virus will be ineffective and will replicate more slowly. Thus some form of natural selection will favor viruses that give only part of their money (and future revenues) to programmers in exchange for modification.
0username27y
With bitcoin botnet mining this was briefly possible. Also see "google eats itself."
0lmn7y
I don't think this could work. Where would the virus keep its private key?
0username27y
On a central command and control server it owns, and pays bitcoin to maintain.
0lmn7y
Ok, so where does it store the administrator password to said server?
0username27y
It ... doesn't? That's where it works from. No external access.
0turchin7y
why "was briefly possible"? - Was the botnet closed?
1philh7y
They may be referring to the fact that bitcoin mining is unprofitable on most people's computers.
4Lumifer7y
It is profitable for a botnet -- that is, if someone else pays for electricity.
0username27y
You need to earn minimum amounts before you can receive a payout share or, worse, solo-mine a block. Given the asymmetric advantage of optimized hardware, your expected time to find enough shares to earn a payout using CPU mining is in the centuries-to-millennia range. This is without considering rising fees, which raise the bar even higher.
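As a rough sanity check on the "centuries to millennia" claim, here is a back-of-envelope estimate using the standard approximation that a Bitcoin block takes about difficulty × 2^32 expected hashes. The difficulty and per-CPU hash-rate figures below are assumptions chosen only for illustration.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def expected_years_to_block(difficulty, hashrate_per_second):
    """Expected time for a solo miner to find one block, in years."""
    expected_hashes = difficulty * 2**32
    return expected_hashes / hashrate_per_second / SECONDS_PER_YEAR

# Assumed figures: difficulty ~1e12 (roughly the 2017 era), ~10 MH/s per botnet CPU.
single_cpu = expected_years_to_block(1e12, 1e7)
botnet_100k = expected_years_to_block(1e12, 1e7 * 100_000)
print(f"one CPU:         ~{single_cpu:.1e} years")   # on the order of 1e7 years
print(f"100k-CPU botnet: ~{botnet_100k:.0f} years")  # still on the order of a century
```

Even granting the assumed numbers a couple of orders of magnitude of slack in either direction, solo CPU mining stays hopeless, which is the point being made about pooled-payout thresholds above.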
0[anonymous]7y
What with the way the ASIC mining chips keep upping the difficulty, can a CPU botnet even pay for the developer's time to code the worm that spreads it any more?
0turchin7y
It's already happening via market mechanisms.
0username27y
Um, no, it isn't. Not in Bitcoin. Botnets had an effect in the early days, but the only ones around in this ASIC age are lingering zombie botnets that are still mining only because no one has bothered to turn them off.
0Lumifer7y
The difficulty keeps going up, but so does the Bitcoin price (at least recently) :-) Plus there are now other cryptocurrencies you can mine (e.g. Ethereum) with different proofs-of-work.
0username27y
The difficulty has gone up 12 orders of magnitude. The bitcoin price hasn't had that good of a return.
0Oscar_Cunningham7y
A different question in the same vein:
0Thomas7y
The collision of two "infinitely small" points is quite another problem, though it has some similarities. For two points on a colliding path, the action and reaction forces are present, equal in size, and opposite in direction. My example can have finite-size balls or zero-size mass points, but there is no reaction force to be seen. At least, I don't see one.
0Manfred7y
Note that your force grows unboundedly in N, so close to zero you have things that are arbitrarily heavy compared to their distance. So what this paradox is really about is alternating series whose terms grow with N, and whether we can say that they add up to zero. If we call the force between the first two bodies f12, then the series of internal forces on this system of bodies (using negative to denote the vector component towards zero) looks like -f12+f12-f23+f23-f13+f13-f34..., where, again, each new term is bigger than the last. If you split this sum up by interactions, it's (-f12+f12)+(-f23+f23)+(-f13+f13)..., so "obviously" it adds up to zero. But if you split this sum up by bodies, each term is negative (and growing!), so the sum must be negative infinity. The typical physicist solution is to say that open sets aren't physical, and that to get the best answer we should take the limit of compact sets.
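For concreteness, the two regroupings described above can be written out as follows (same f_ij notation as the comment; the ordering of terms is taken from the comment, not derived independently):

```latex
% The alternating series of internal forces as ordered above
% (its terms grow in magnitude, so it has no sum in the ordinary sense):
S = -f_{12} + f_{12} - f_{23} + f_{23} - f_{13} + f_{13} - f_{34} + \dots

% Grouped by interactions: every action/reaction pair cancels.
S = (-f_{12} + f_{12}) + (-f_{23} + f_{23}) + (-f_{13} + f_{13}) + \dots = 0

% Grouped by bodies (as in the comment): every regrouped bracket is strictly
% negative and growing in magnitude, so the partial sums run off to -\infty.
```

Because the individual terms grow without bound, the series is not absolutely convergent, so different groupings can legitimately give different answers; that is the sense in which the paradox lives in the series manipulation rather than in the physics.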
0Gurkenglas7y
The same can be said of unit masses at every negative integer. The arrow that points to the right sits in the same place the additional guest in Hilbert's Hotel goes. Such unintuitiveness is just part of life when infinities/singularities, such as the diverging forces acting on your points, are involved.
0Lumifer7y
I think the point is still that infinities are bad and can screw you up in imaginative ways.
0Thomas7y
Agree.

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large well-ordered ordinal exists. Are there yet any ways to justify belief in either of these two things that do not require faith?

1drethelin7y
You can justify a belief in "Induction works" by induction over your own life.
0ImmortalRationalist7y
Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first problem is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need either to show that the prior probability is sufficiently large, or to come up with a new method of calculating probabilities that does not depend on priors. The second problem is that you also need to justify that your memories are reliable. This could be done using induction, with a sufficiently large prior probability that memory works, but that has the same problems mentioned previously.
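To make the first problem concrete, here is a small numerical sketch of how an arbitrarily small prior swamps a fixed amount of evidence; the prior values and the 10^6 likelihood ratio are invented purely for illustration.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability after observing evidence with the given
    likelihood ratio P(evidence | H) / P(evidence | not-H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A lifetime of induction-seems-to-work observations, summarised (generously)
# as a likelihood ratio of 1e6 in favour of "Induction works":
for prior in (0.5, 1e-3, 1e-12, 1e-30):
    print(f"prior {prior:.0e} -> posterior {posterior(prior, 1e6):.3e}")
# prior 5e-01 -> posterior ~1.000
# prior 1e-03 -> posterior ~0.999
# prior 1e-12 -> posterior ~1e-06
# prior 1e-30 -> posterior ~1e-24
```

Pushing the prior low enough defeats any fixed likelihood ratio, which is the sense in which a sufficiently small prior "cancels out" the evidence; the dispute is then entirely about which prior is reasonable.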
0g_pepper7y
Wouldn't that be question begging?
0hairyfigment7y
Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality. Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'

How do I contact a mod or site administrator on Lesswrong?

0Elo7y
PM me.

Can anyone point me to any good arguments for, or at least redeeming qualities of, Integrated Information Theory?

0ImmortalRationalist7y
Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.

Is there something that lets you search all the rationality/EA blogs at once? I could have sworn I've seen something - maybe a web app made by chaining a bunch of terms together in Google - but I can't remember where or how to find it.

never mind this was stupid

[This comment is no longer endorsed by its author]
1WalterL7y
The reliable verification methods are a dream, of course, but the 'forbidden from sharing this information with non-members' is even more fanciful.
0madhatter7y
Is there no way to actually delete a comment? :)
0Viliam7y
Not after someone already replied to it, I think. Without replies, you need to retract it, then refresh the page, and then there is a Delete button.
0turchin7y
In your case, force is needed to actually push most organisations to participate in such a project, and the worst ones (those which want to build AI first and take over the world) will not participate in it. The IAEA is an example of such an organisation, but it was not able to stop North Korea from creating its nukes. Because of the above, you need a powerful enforcement agency above your AI agency. It could use conventional weapons (mostly nukes), or some form of narrow AI to predict where strong AI is being created, or both. Basically, it means the creation of a world government designed especially to contain AI. That is improbable in the current world, as nobody will create a world government mandated to nuke AI labs based only on reading Bostrom's and EY's books. The only chance for its creation is if some very spectacular AI accident happens, like the hacking of 1000 airplanes and crashing them into 1000 nuclear plants using narrow AI with some machine-learning capabilities. In that case, a global ban on AI seems possible.