
Open thread, Jul. 17 - Jul. 23, 2017

1 Post author: MrMind 17 July 2017 08:15AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (70)

Comment author: Daniel_Burfoot 18 July 2017 08:55:42PM 4 points [-]

Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other hi-tech anti-aging companies? Are they good places to work, are they making progress, etc?

Comment author: Lumifer 18 July 2017 09:04:38PM *  1 point [-]

I expect them to be nice places to work (because they are not subject to the vulgar and demeaning necessity of turning a profit), but I don't expect them to be making much progress in the near future.

Comment author: CellBioGuy 18 July 2017 11:41:41PM *  4 points [-]

I have spoken to someone who has spoken to some of the scientific higher-ups at Calico, and they are definitely excited about the longer-term funding models for biomedical research that they think they can get there.

I have also seen a scientific talk about a project that was taken up by Calico, given by a researcher who visited my university. Honestly, I'm not sure how much detail I should or can go into before I look up how much of what I saw has been published (I haven't thought about it in a while), but I saw very preliminary data from mice on the effects of a small molecule, found in a broad screen, in slowing the progression of neurodegenerative disease and traumatic brain injury.

Having had no new information on the subject for ~2 years, but having seen what I saw there and knowing what I know about cell biology, I suspect that it probably will slow these diseases, probably does not affect lifespan much (especially for the healthy), and, in my estimation, has a good chance of increasing the rate of cancer progression (this hasn't been demonstrated and needs more research). That trade-off would totally be worth it for the diseases involved.

EDIT: Alright, found press releases. https://www.calicolabs.com/news/2014/09/11/

http://www.cell.com/cell/abstract/S0092-8674(14)00990-8

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4163014/

Comment author: username2 19 July 2017 07:46:07AM 4 points [-]

That assessment ("likely leads to more cancers") is actually quite common for approaches to radical longevity.

I am encouraged about the long-term prospects of SENS in particular, because the "regular maintenance" approach doesn't necessarily require mucking around with internal cellular processes, at least not as much as the more radical approaches do.

Comment author: cousin_it 20 July 2017 12:15:38PM *  3 points [-]

I just came up with a funny argument for thirdism in the sleeping beauty problem.

Let's say I'm sleeping beauty, right? The experimenter flips a coin, wakes me up once in case of heads or twice in case of tails, then tells me the truth about the coin and I go home.

What do I do when I get home? In case of tails, nothing. But in case of heads, I put on some scented candles, record a short message to myself on the answering machine, inject myself with an amnesia drug from my own supply, and go to sleep.

...The next morning, I wake up not knowing whether I'm still in the experiment or not. Then I play back the message on the answering machine and learn that the experiment is over, the coin came up heads, and I'm safely home. I've forgotten some information and then remembered it; a trivial operation.

But that massively simplifies the problem! Now I always wake up with amnesia twice, so the anthropic difference between heads and tails is gone. In case of heads, I find a message on my answering machine with probability 1/2, and in case of tails I don't. So failing to find the message becomes ordinary Bayesian evidence in favor of tails. Therefore while I'm in the original experiment, I should update on failing to find the message and conclude that tails are 2/3 likely, so thirdism is right. Woohoo!
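A quick way to sanity-check that counting is a Monte Carlo sketch (my own illustration in Python, not part of the argument above): sample a coin flip, sample one of that trial's two indistinguishable awakenings, and condition on not finding the message.

    import random

    def simulate(n_trials=1_000_000):
        # Count no-message awakenings, split by how the coin landed.
        heads_no_msg = 0
        tails_no_msg = 0
        for _ in range(n_trials):
            coin = random.choice(["heads", "tails"])
            if coin == "heads":
                # One awakening inside the experiment (no message) plus one
                # self-induced amnesia awakening at home (message present).
                awakenings = [False, True]   # True = message found
            else:
                # Two awakenings inside the experiment, no message either time.
                awakenings = [False, False]
            # The subjective situation: one of this trial's awakenings, uniformly at random.
            message_found = random.choice(awakenings)
            if not message_found:
                if coin == "heads":
                    heads_no_msg += 1
                else:
                    tails_no_msg += 1
        print("P(tails | no message) ~", tails_no_msg / (heads_no_msg + tails_no_msg))

    simulate()  # prints a value close to 2/3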

Comment author: Thomas 21 July 2017 11:51:06AM 2 points [-]

You have changed the initial conditions. The initial conditions say nothing about any external memory.

Comment author: cousin_it 21 July 2017 11:56:31AM *  0 points [-]

I'm not using any external memory during the experiment. Only later, at home. What I do at home is my business.

Comment author: Thomas 21 July 2017 01:33:00PM 0 points [-]

Then, it's not the experiment's business.

Comment author: cousin_it 21 July 2017 01:58:12PM *  0 points [-]

If you deny that indistinguishable states of knowledge can be created, the sleeping beauty problem is probably meaningless to you anyway.

Comment author: Thomas 21 July 2017 02:26:34PM 1 point [-]

There are (at least) two (meaningful) versions of the sleeping beauty problem. One is yours.

But they are two different problems.

Comment author: Xianda_GAO 22 July 2017 05:13:55PM 0 points [-]

This argument is the same as Cian Dorr's version with a weaker amnesia drug. In that experiment a weaker amnesia drug is used on Beauty in the case of Heads, one that only delays the recollection of memory for a few minutes, just as in your case the memory is delayed until the message is checked.

That argument was published in 2002, before the majority of the literature on the topic. Suffice to say it has not convinced halfers. Even supporters like Terry Horgan admit the argument is merely suggestive and runs a serious risk of a slippery slope.

Comment author: cousin_it 23 July 2017 12:20:55AM *  0 points [-]

Thank you for the reference! Indeed it's very similar; the only difference is that my version relies on the beauty's precommitment rather than the experimenter's, but that probably doesn't matter. Shame on me for not reading enough.

Comment author: Xianda_GAO 25 July 2017 12:15:28AM 0 points [-]

Nothing shameful about that. Similar arguments, which Jacob Ross categorized as "hypothetical priors" (adding another waking in the case of Heads), have not been a main focus of discussion in the literature in recent years. I would imagine most people haven't read them.

In fact you should take it as a compliment: some academic who probably spent a lot of time on it came up with the same argument as you did.

Comment author: entirelyuseless 21 July 2017 02:27:21PM *  0 points [-]

I agree with Thomas -- even if this proved that thirdism is right when you are planning to do this, it would not prove that it is right if you are not planning to do this. In fact it suggests the opposite: since the update is necessary, thirdism is false without the update.

Comment author: cousin_it 21 July 2017 03:18:40PM *  0 points [-]

The following principle seems plausible to me: creating any weird situation X outside the experiment shouldn't affect my beliefs, if I can verify that I'm in the experiment and not in situation X. Disagreeing with that principle seems like a big bullet to bite, but maybe that's just because I haven't found any X that would lead to anything except thirdism (and I've tried). It's certainly fair to scrutinize the idea because it's new, and I'd love to learn about any strange implications.

Comment author: entirelyuseless 22 July 2017 01:30:23AM 1 point [-]

"The next morning, I wake up not knowing whether I'm still in the experiment or not. "

By creating a situation outside the experiment which is initially indistinguishable from being in the experiment, you affect how the experiment should be evaluated. The same thing is true, for example, if the whole experiment is done multiple times rather than only once.

Comment author: cousin_it 22 July 2017 05:56:53AM *  0 points [-]

Yeah, if the whole experiment is done twice, and you're truthfully told "this is the first experiment" or "this is the second experiment" at the beginning of each day (a minute after waking up), then I think your reasoning in the first experiment (an hour after waking up) should be the same as though the second experiment didn't exist. Having had a minute of confusion in your past should be irrelevant.

Comment author: entirelyuseless 22 July 2017 02:36:30PM 0 points [-]

I disagree. I have presented arguments on LW in the past that if the experiment is run once in the history of the universe, you should reason as a halfer, but if the experiment is run many times, you will assign a probability in between 1/2 and 1/3, approaching one third as the number of times approaches infinity. I think that this applies even if you know the numerical identity of your particular run.

Comment author: cousin_it 22 July 2017 02:40:57PM *  0 points [-]

Interesting! I was away from LW for a long time and probably missed it. Can you give a link, or sketch the argument here?

Comment author: entirelyuseless 22 July 2017 03:32:41PM 0 points [-]

Actually, I was probably mistaken. I think I was thinking of this post and in particular this thread and this one. (I was previously using the username "Unknowns".)

I think I confused this with Sleeping Beauty because of the similarity of Incubator situations with Sleeping Beauty. I'll have to think about it but I suspect there will be similar results.

Comment author: ImmortalRationalist 20 July 2017 10:42:38AM 3 points [-]

For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?

Comment author: Turgurth 22 July 2017 05:09:51PM 1 point [-]

I saw this same query in the last open thread. I suspect you aren't getting any responses because the answer is long and involved. I don't have time to give you the answer in full either, so I'll give you the quick version:

I am in the process of signing up with Alcor, because after ten years of both observing cryonics organizations myself and reading what other people say about them, Alcor has given a series of cues that they are the more professional cryonics organization.

So, the standard advice is: if you are young and healthy, with a long life expectancy, and not wealthy, choose C.I., because they are less expensive. If those criteria do not apply to you, choose Alcor, as they appear to be the more serious, professional organization.

In other words: choose C.I. as the type of death insurance you want to have, but probably won't use, or choose Alcor as the type of death insurance you probably will use.

Comment author: ImmortalRationalist 24 July 2017 02:10:58PM 1 point [-]

If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?

Comment author: Thomas 17 July 2017 11:08:11AM 2 points [-]

Try this

Comment author: Oscar_Cunningham 18 July 2017 02:44:44PM 0 points [-]

A different question in the same vein:

Two Newtonian point particles, A and B, with mass 1kg are at rest separated by a distance of 1m. They are influenced only by the other's gravitational attraction. Describe their future motion. In particular do they ever return to their original positions, and after how long?
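For the collision-time part of the question, here is a back-of-envelope sketch (mine, not part of the question, assuming the standard radial-Kepler free-fall result t = pi * sqrt(r0^3 / (8 G M)), with M the total mass):

    import math

    # Two bodies released from rest fall together in the radial Kepler
    # free-fall time t = pi * sqrt(r0**3 / (8 * G * M)), M = total mass.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    r0 = 1.0           # initial separation, m
    M = 1.0 + 1.0      # total mass, kg

    t_fall = math.pi * math.sqrt(r0**3 / (8 * G * M))
    print(f"time to collision: {t_fall:.0f} s (~{t_fall / 3600:.1f} hours)")

This gives roughly 96,000 s, about 27 hours. Whether the particles ever "return" depends on how one treats the point-collision singularity; under the usual elastic-bounce regularization they would be back at rest at their starting positions after twice that time.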

Comment author: Thomas 18 July 2017 07:05:18PM 0 points [-]

The collision of two "infinitely small" points is quite another problem, though it has some similarities.

For two points on a colliding path, the action and reaction forces are present, of equal size, and in opposite directions.

My example can have finite-size balls or zero-size mass points, but there is no reaction force to be seen. At least, I don't see one.

Comment author: Manfred 17 July 2017 08:05:50PM *  0 points [-]

Note that your force grows unboundedly in N, so close to zero you have things that are arbitrarily heavy compared to their distance. So what this paradox is really about is alternating series whose terms grow with N, and whether we can say that they add up to zero.

If we call the force between the first two bodies f12, then the series of internal forces on this system of bodies (using negative to denote vector component towards zero) looks like -f12+f12-f23+f23-f13+f13-f34..., where, again, each new term is bigger than the last.

If you split this sum up by interactions, it's (-f12+f12)+(-f23+f23)+(-f13+f13)..., so "obviously" it adds up to zero. But if you split this sum up by bodies, each term is negative (and growing!) so the sum must be negative infinity.

The typical physicist solution is to say that open sets aren't physical, and to get the best answer we should take the limit of compact sets.
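A toy version of the regrouping point (my own illustration, with made-up growing terms f_n = n rather than the actual forces): pairing the cancelling terms gives zero, but the raw partial sums swing between -f_n and 0 with growing amplitude, so the ungrouped series has no well-defined sum.

    from itertools import accumulate

    # Alternating series -f1 + f1 - f2 + f2 - ... with growing terms f_n = n.
    f = list(range(1, 11))                          # growing "forces" f_n = n
    terms = [s * x for x in f for s in (-1, +1)]    # -1, +1, -2, +2, -3, +3, ...
    print(list(accumulate(terms)))                  # partial sums: -1, 0, -2, 0, -3, 0, ...
    print(sum(terms))                               # 0 when truncated at a cancelling pair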

Comment author: Gurkenglas 17 July 2017 07:43:26PM *  0 points [-]

The same can be said of unit masses at every whole negative number.

The arrow that points to the right is at the same place that the additional guest in Hilbert's Hotel goes. Such unintuitiveness is a fact of life when infinities/singularities, such as the diverging forces acting on your points, are involved.

Comment author: Lumifer 17 July 2017 07:50:38PM 0 points [-]

I think the point is still that infinities are bad and can screw you up in imaginative ways.

Comment author: Thomas 18 July 2017 08:05:50AM 0 points [-]

Agree.

Comment author: Lumifer 27 July 2017 07:46:15PM 1 point [-]

Up LW's alley: A Pari-mutuel like Mechanism for Information Aggregation: A Field Test Inside Intel

Abstract:

A new information aggregation mechanism (IAM), developed via laboratory experimental methods, is implemented inside Intel Corporation in a long-running field test. The IAM, incorporating features of pari-mutuel betting, is uniquely designed to collect and quantize as probability distributions dispersed, subjectively held information. IAM participants’ incentives support timely information revelation and the emergence of consensus beliefs over future outcomes. Empirical tests demonstrate the robustness of experimental results and the IAM’s practical usefulness in addressing real-world problems. The IAM’s predictive distributions forecasting sales are very accurate, especially for short horizons and direct sales channels, often proving more accurate than Intel’s internal forecast.

Comment author: lifelonglearner 21 July 2017 04:13:42AM 1 point [-]

Update on Instrumental Rationality sequence: about 40% done with a Habits 101 post. Turns out habits are denser than planning and have more intricacies. Plus, the techniques for creating / breaking habits are less well-defined and not as strong, so I'm still trying to "technique-ify" some of the more conceptual pieces.

Comment author: Kaj_Sotala 25 July 2017 10:16:35AM 0 points [-]

You might already be aware of them / their contents, but I found these two papers useful in creating a habit workshop:

Comment author: lifelonglearner 26 July 2017 02:06:34PM 0 points [-]

Thanks for the links! I'd bumped into both papers a little while back, and I'll indeed be citing them!

Comment author: ImmortalRationalist 20 July 2017 10:39:25AM 1 point [-]

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large well-ordered ordinal exists. Are there yet any ways to justify belief in either of these two things that do not require faith?

Comment author: drethelin 20 July 2017 05:13:12PM 1 point [-]

You can justify a belief in "Induction works" by induction over your own life.

Comment author: ImmortalRationalist 21 July 2017 03:32:30AM 0 points [-]

Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need either to show that the prior probability is sufficiently large, or to come up with a new method of calculating probabilities that does not depend on priors. The second problem is that you also need to justify that your memories are reliable. This could be done using induction, given a sufficiently large prior probability that memory works, but that has the same problems mentioned previously.
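The first point is just odds-form Bayes: a small enough prior swamps any fixed amount of evidence. A toy calculation (mine, with made-up numbers):

    # Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
    prior_odds = 1e-30        # assumed prior odds that "induction works"
    likelihood_ratio = 1e20   # assumed strength of a lifetime of observed regularity
    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(posterior_prob)     # ~1e-10: still effectively zero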

Comment author: g_pepper 20 July 2017 05:27:05PM *  0 points [-]

Wouldn't that be question begging?

Comment author: hairyfigment 19 August 2017 10:41:39PM 0 points [-]

Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality.

Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'

Comment author: Eitan_Zohar 21 July 2017 04:44:02PM *  0 points [-]

How do I contact a mod or site administrator on Lesswrong?

Comment author: Elo 21 July 2017 09:03:17PM 0 points [-]

PM me

Comment author: AlexMennen 20 July 2017 12:14:20AM 0 points [-]

Can anyone point me to any good arguments for, or at least redeeming qualities of, Integrated Information Theory?

Comment author: ImmortalRationalist 21 July 2017 03:34:36AM 0 points [-]

Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.

Comment author: eukaryote 19 July 2017 07:05:29AM *  0 points [-]

Is there something that lets you search all the rationality/EA blogs at once? I could have sworn I've seen something - maybe a web app made by chaining a bunch of terms together in Google - but I can't remember where or how to find it.

Comment author: MrMind 18 July 2017 08:36:24AM 0 points [-]

From Gwern's newsletter: did you know that algorithms can already obtain legal personhood?
Not scary at all.

Comment author: WalterL 18 July 2017 07:05:10PM 4 points [-]

How encouraging. Truly we are making great strides in respecting ever more threatened minorities. Algorithmic-Americans have much to contribute, once the biophobes cease their persecution.

Comment author: turchin 18 July 2017 08:22:17PM 1 point [-]

What worries me is that if a ransomware virus could own money, it could pay humans to install it on other people's computers, and also pay programmers for finding new exploits and even for improving the virus itself.

But for such a development legal personhood is not needed, only the illegal kind.

Comment author: Kaj_Sotala 24 July 2017 07:12:45PM 0 points [-]

Malware developers already invest in R&D and in buying exploits; I'm not sure what key difference it makes whether the malware or its owners do the investing.

Comment author: turchin 24 July 2017 09:17:50PM *  0 points [-]

It worries me because it looks like a Seed AI with clearly non-human goals.

The difference from current malware developers may be subtle, but it may grow once such a narrow-AI virus passes some threshold. You can't turn off the virus, but malware creators are localised and can be caught. They also share most human values with other humans and will stop at some level of potential destruction.

Comment author: lmn 20 July 2017 12:52:48AM 0 points [-]

"even for the improvement of the virus."

I don't think this would work. It requires some way for the virus to keep the human it has entrusted with editing its programming from modifying it to simply send him all the money it acquires.

Comment author: turchin 20 July 2017 10:12:18AM 0 points [-]

The human has to leave part of the money with the virus, since the virus needs to pay for installing its ransomware and for other services. If the human takes all the money, the virus will be ineffective and will not replicate as quickly. Thus some form of natural selection will favor viruses that give programmers only part of their money (and future revenues) in exchange for modification.

Comment author: username2 19 July 2017 07:48:08AM 0 points [-]

With bitcoin botnet mining this was briefly possible. Also see "google eats itself."

Comment author: lmn 20 July 2017 12:58:43AM 0 points [-]

I don't think this could work. Where would the virus keep its private key?

Comment author: username2 22 July 2017 03:50:16AM 0 points [-]

On a central command and control server it owns, and pays bitcoin to maintain.

Comment author: lmn 22 July 2017 07:35:26AM 0 points [-]

Ok, so where does it store the administrator password to said server?

Comment author: username2 22 July 2017 03:36:21PM 0 points [-]

It ... doesn't? That's where it works from. No external access.

Comment author: turchin 19 July 2017 10:16:04AM 0 points [-]

why "was briefly possible"? - Was the botnet closed?

Comment author: philh 19 July 2017 10:49:01AM 1 point [-]

They may be referring to the fact that bitcoin mining is unprofitable on most people's computers.

Comment author: Lumifer 19 July 2017 02:48:19PM 2 points [-]

It is profitable for a botnet -- that is, if someone else pays for electricity.

Comment author: username2 22 July 2017 03:47:11AM 0 points [-]

You need to earn a minimum amount before you can receive a payout share or, worse, solo-mine a block. Given the asymmetric advantage of optimized hardware, your expected time to find enough shares to earn a payout with CPU mining is in the centuries-to-millennia timeframe. And that's without considering rising fees, which raise the bar even higher.
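A rough sketch of that timescale (mine, with assumed round numbers for the hash rates and the payout threshold):

    # Back-of-envelope: expected time for one CPU to earn a pool payout.
    cpu_hashrate     = 1e7     # ~10 MH/s for an optimistic CPU (assumption)
    network_hashrate = 5e18    # ~5 EH/s, roughly the mid-2017 Bitcoin network (assumption)
    block_reward_btc = 12.5    # post-2016-halving block subsidy
    block_interval_s = 600     # target block time
    payout_threshold = 0.001   # a common pool minimum payout, in BTC (assumption)

    btc_per_second = (cpu_hashrate / network_hashrate) * block_reward_btc / block_interval_s
    years_to_payout = payout_threshold / btc_per_second / (3600 * 24 * 365)
    print(f"{years_to_payout:.0f} years to reach the payout threshold")  # on the order of centuries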

Comment author: CellBioGuy 19 July 2017 06:05:25PM *  0 points [-]

What with the way the ASIC mining chips keep upping the difficulty, can a CPU botnet even pay for the developer's time to code the worm that spreads it any more?

Comment author: turchin 19 July 2017 09:20:34PM 0 points [-]

It's already happening via market mechanisms.

Comment author: username2 22 July 2017 03:48:14AM 0 points [-]

Um, no it hasn't. Not in Bitcoin. Botnets had an effect in the early days, but the only ones around in this ASIC age are lingering zombie botnets that are still mining only because no one has bothered to turn them off.

Comment author: Lumifer 19 July 2017 08:23:44PM 0 points [-]

The difficulty keeps going up, but so does the Bitcoin price (at least recently) :-) Plus there are now other cryptocurrencies you can mine (e.g. Ethereum) with different proofs-of-work.

Comment author: username2 22 July 2017 03:49:26AM 0 points [-]

The difficulty has gone up 12 orders of magnitude. The Bitcoin price hasn't had nearly that good a return.

Comment author: madhatter 17 July 2017 10:52:48PM *  0 points [-]

never mind this was stupid

Comment author: WalterL 17 July 2017 11:16:30PM 1 point [-]

The reliable verification methods are a dream, of course, but the 'forbidden from sharing this information with non-members' is even more fanciful.

Comment author: madhatter 17 July 2017 11:18:42PM *  0 points [-]

Is there no way to actually delete a comment? :)

Comment author: Viliam 18 July 2017 07:01:24AM 0 points [-]

Not after someone already replied to it, I think.

Without replies, you need to retract it, then refresh the page, and then there is a Delete button.

Comment author: turchin 17 July 2017 11:30:43PM *  0 points [-]

In your case, force is needed to actually push most organisations to participate in such a project, and the worst ones - those that want to build AI first in order to take over the world - will not participate in it. The IAEA is an example of such an organisation, but it was not able to stop North Korea from building its nukes.

Because of the above, you need a powerful enforcement agency above your AI agency. It could use conventional weapons, mostly nukes, or some form of narrow AI to predict where strong AI is being created - or both. Basically, it means the creation of a world government designed especially to contain AI.

This is improbable in the current world, as nobody will create a world government mandated to nuke AI labs based only on reading Bostrom's and EY's books. The only chance for its creation is if some very spectacular AI accident happens, like hacking 1000 airplanes and crashing them into 1000 nuclear plants using narrow AI with some machine-learning capabilities. In that case, a global ban on AI seems possible.