
Open thread, Jul. 17 - Jul. 23, 2017

1 Post author: MrMind 17 July 2017 08:15AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (39)

Comment author: Daniel_Burfoot 18 July 2017 08:55:42PM 3 points [-]

Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other hi-tech anti-aging companies? Are they good places to work, are they making progress, etc?

Comment author: Lumifer 18 July 2017 09:04:38PM *  1 point [-]

I expect them to be nice places to work (because they are not subject to the vulgar and demeaning necessity to turn a profit), but I also don't expect them to be making much progress in the near future.

Comment author: CellBioGuy 18 July 2017 11:41:41PM *  3 points [-]

I have spoken to someone who has spoken to some of the scientific higher-ups at Calico, and they are definitely excited about the longer-term funding models for biomedical research that they think they can get there.

I have also seen a scientific talk about a project that Calico took up, given by a researcher who visited my university. Honestly I'm not sure how much detail I should/can go into before I look up how much of what I saw has been published (I haven't thought about it in a while), but I saw very preliminary data from mice on the effects of a small molecule, found in a broad screen, on slowing the progression of neurodegenerative disease and traumatic brain injury.

Having no new information on the subject for ~2 years, but having seen what I saw there and knowing what I know about cell biology, I find myself suspecting that it probably will actually slow these diseases, probably does not affect lifespan much (especially for the healthy), and in my estimation has a good chance of increasing the rate of cancer progression (this hasn't been demonstrated and needs more research). Which would totally be worth it for the diseases involved.

EDIT: Alright, found press releases. https://www.calicolabs.com/news/2014/09/11/

http://www.cell.com/cell/abstract/S0092-8674(14)00990-8

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4163014/

Comment author: username2 19 July 2017 07:46:07AM 2 points [-]

That assessment is actually quite common for approaches to radical longevity: "likely leads to more cancers."

I am encouraged about the long-term prospects of SENS in particular because the "regular maintenance" approach doesn't necessarily require mucking around with internal cellular processes. At least not as much as the more radical approaches.

Comment author: cousin_it 20 July 2017 12:15:38PM *  2 points [-]

I just came up with a funny argument for thirdism in the Sleeping Beauty problem.

Let's say I'm Sleeping Beauty, right? The experimenter flips a coin, wakes me up once in case of heads or twice in case of tails, then tells me the truth about the coin, and I go home.

What do I do when I get home? In case of tails, nothing. But in case of heads, I put on some scented candles, record a short message to myself on the answering machine, inject myself with an amnesia drug from my own supply, and go to sleep.

...The next morning, I wake up not knowing whether I'm still in the experiment or not. Then I play back the message on the answering machine and learn that the experiment is over, the coin came up heads, and I'm safely home. I've forgotten some information and then remembered it; a trivial operation.

But that massively simplifies the problem! Now I always wake up with amnesia twice, so the anthropic difference between heads and tails is gone. In case of heads, I find a message on my answering machine with probability 1/2, and in case of tails I don't. So failing to find the message becomes ordinary Bayesian evidence in favor of tails. Therefore while I'm in the original experiment, I should update on failing to find the message and conclude that tails are 2/3 likely, so thirdism is right. Woohoo!
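Here's a quick Monte Carlo sketch of that counting argument (just tallying wakings over many runs of the modified protocol, assuming a fair coin); the fraction of no-message wakings that come from tails comes out near 2/3:

    import random

    trials = 100_000
    heads_no_msg = 0
    tails_no_msg = 0

    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        # Heads: one waking in the experiment (no message) plus one at home, with a message on the machine.
        # Tails: two wakings in the experiment, neither with a message.
        wakings = ["no message", "message"] if coin == "heads" else ["no message", "no message"]
        for w in wakings:
            if w == "no message":
                if coin == "heads":
                    heads_no_msg += 1
                else:
                    tails_no_msg += 1

    # Among wakings where no message is found, what fraction came from tails?
    print(tails_no_msg / (tails_no_msg + heads_no_msg))  # ~0.667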

Comment author: ImmortalRationalist 20 July 2017 10:42:38AM 1 point [-]

For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?

Comment author: Thomas 17 July 2017 11:08:11AM 1 point [-]

Try this

Comment author: Oscar_Cunningham 18 July 2017 02:44:44PM 0 points [-]

A different question in the same vein:

Two Newtonian point particles, A and B, each with mass 1 kg, are at rest, separated by a distance of 1 m. They are influenced only by each other's gravitational attraction. Describe their future motion. In particular, do they ever return to their original positions, and after how long?
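(A rough sketch of the fall-time part, using the standard radial free-fall result for the relative coordinate and taking G ≈ 6.674e-11 m^3 kg^-1 s^-2:)

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    m1 = m2 = 1.0      # masses, kg
    r0 = 1.0           # initial separation, m

    # Time for two point masses released from rest at separation r0 to meet:
    # half the period of a degenerate ellipse with semi-major axis r0/2.
    t_collision = (math.pi / 2) * math.sqrt(r0**3 / (2 * G * (m1 + m2)))

    print(t_collision)          # ~9.6e4 seconds
    print(t_collision / 3600)   # ~26.7 hours until the particles meet

Whether they ever return to their original positions depends on how the collision singularity is treated; if the point collision is regularized as an elastic bounce, the motion is time-symmetric and periodic with period twice this fall time.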

Comment author: Thomas 18 July 2017 07:05:18PM 0 points [-]

The collision of two "infinitely small" points is quite another problem. It has some similarities, too.

For two points on a colliding path, the action and reaction forces are present, of equal size and in opposite directions.

My example can have finite-size balls or zero-size mass points, but there is no reaction force to be seen. At least, I don't see one.

Comment author: Manfred 17 July 2017 08:05:50PM *  0 points [-]

Note that your force grows unboundedly in N, so close to zero you have things that are arbitrarily heavy compared to their distance. So what this paradox is really about is alternating series whose terms grow with N, and whether we can say that they add up to zero.

If we call the force between the first two bodies f12, then the series of internal forces on this system of bodies (using negative to denote the vector component towards zero) looks like -f12+f12-f23+f23-f13+f13-f34..., where, again, each new term is bigger than the last.

If you split this sum up by interactions, it's (-f12+f12)+(-f23+f23)+(-f13+f13)..., so "obviously" it adds up to zero. But if you split this sum up by bodies, each term is negative (and growing!) so the sum must be negative infinity.
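Here's a toy numerical illustration of that regrouping issue, with f_n = n standing in for the growing force terms (not the actual gravitational values): the same terms grouped one way sum to zero, and grouped another way run off to minus infinity.

    # Terms of the alternating, growing series: -1, +1, -2, +2, -3, +3, ...
    N = 1000
    terms = []
    for n in range(1, N + 1):
        terms.extend([-n, n])

    # Grouped "by interaction": (-1+1) + (-2+2) + ... -> every group is 0.
    by_interaction = sum(terms[2 * k] + terms[2 * k + 1] for k in range(N))

    # Grouped differently (shifted by one term): -1 + (1-2) + (2-3) + ... -> each group is -1.
    shifted = terms[0] + sum(terms[2 * k + 1] + terms[2 * k + 2] for k in range(N - 1))

    print(by_interaction)  # 0
    print(shifted)         # -N, i.e. unboundedly negative as N grows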

The typical physicist solution is to say that open sets aren't physical, and to get the best answer we should take the limit of compact sets.

Comment author: Gurkenglas 17 July 2017 07:43:26PM *  0 points [-]

The same can be said of unit masses at every whole negative number.

The arrow that points to the right is at the same place that the additional guest in Hilbert's Hotel goes. Such unintuitiveness is a fact of life when infinities/singularities, such as the diverging forces acting on your points, are involved.

Comment author: Lumifer 17 July 2017 07:50:38PM 0 points [-]

I think the point is still that infinities are bad and can screw you up in imaginative ways.

Comment author: Thomas 18 July 2017 08:05:50AM 0 points [-]

Agree.

Comment author: lifelonglearner 21 July 2017 04:13:42AM 0 points [-]

Update on the Instrumental Rationality sequence: about 40% done with a Habits 101 post. It turns out habits are a denser topic than planning and have more intricacies. Plus, the techniques for creating / breaking habits are less well-defined and not as strong, so I'm still trying to "technique-ify" some of the more conceptual pieces.

Comment author: ImmortalRationalist 20 July 2017 10:39:25AM 0 points [-]

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal is well-ordered. Are there any ways yet to justify belief in either of these two things that do not require faith?

Comment author: drethelin 20 July 2017 05:13:12PM 0 points [-]

You can justify a belief in "Induction works" by induction over your own life.

Comment author: ImmortalRationalist 21 July 2017 03:32:30AM 0 points [-]

Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first problem is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need to either show that the prior probability is sufficiently large, or come up with a new method of calculating probabilities that does not depend on priors. The second problem is that you also need to justify that your memories are reliable. This could be done using induction and a sufficiently large prior probability that memory works, but this has the same problems mentioned previously.

Comment author: g_pepper 20 July 2017 05:27:05PM *  0 points [-]

Wouldn't that be question begging?

Comment author: AlexMennen 20 July 2017 12:14:20AM 0 points [-]

Can anyone point me to any good arguments for, or at least redeeming qualities of, Integrated Information Theory?

Comment author: ImmortalRationalist 21 July 2017 03:34:36AM 0 points [-]

Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.

Comment author: eukaryote 19 July 2017 07:05:29AM *  0 points [-]

Is there something that lets you search all the rationality/EA blogs at once? I could have sworn I've seen something - maybe a web app made by chaining a bunch of terms together in Google - but I can't remember where or how to find it.

Comment author: MrMind 18 July 2017 08:36:24AM 0 points [-]

From Gwern's newsletter: did you know that algorithms can already obtain legal personhood?
Not scary at all.

Comment author: WalterL 18 July 2017 07:05:10PM 3 points [-]

How encouraging. Truly we are making great strides in respecting ever more threatened minorities. Algorithmic-Americans have much to contribute, once the biophobes cease their persecution.

Comment author: turchin 18 July 2017 08:22:17PM 1 point [-]

What worries me is that if a ransomware virus could own money, it could pay some humans to install it on other people's computers, and also pay programmers to find new exploits and even to improve the virus itself.

But for such a development legal personhood is not needed, only an illegal one.

Comment author: lmn 20 July 2017 12:52:48AM 0 points [-]

even for the improvement of the virus.

I don't think this would work. It requires some way for the virus to keep the human it has entrusted with editing its programming from modifying it to simply send him all the money it acquires.

Comment author: turchin 20 July 2017 10:12:18AM 0 points [-]

The human has to leave part of the money with the virus, as the virus needs to pay for installing its ransomware and for other services. If the human takes all the money, the virus will be ineffective and will not replicate as quickly. Thus some form of natural selection will favor viruses that give only part of their money (and future revenues) to programmers in exchange for modification.

Comment author: username2 19 July 2017 07:48:08AM 0 points [-]

With bitcoin botnet mining this was briefly possible. Also see "google eats itself."

Comment author: lmn 20 July 2017 12:58:43AM 0 points [-]

I don't think this could work. Where would the virus keep its private key?

Comment author: turchin 19 July 2017 10:16:04AM 0 points [-]

why "was briefly possible"? - Was the botnet closed?

Comment author: philh 19 July 2017 10:49:01AM 1 point [-]

They may be referring to the fact that bitcoin mining is unprofitable on most people's computers.

Comment author: Lumifer 19 July 2017 02:48:19PM 2 points [-]

It is profitable for a botnet -- that is, if someone else pays for electricity.

Comment author: CellBioGuy 19 July 2017 06:05:25PM *  0 points [-]

What with the way the ASIC mining chips keep upping the difficulty, can a CPU botnet even pay for the developer's time to code the worm that spreads it any more?

Comment author: turchin 19 July 2017 09:20:34PM 0 points [-]

It's already happening via market mechanisms.

Comment author: Lumifer 19 July 2017 08:23:44PM 0 points [-]

The difficulty keeps going up, but so does the Bitcoin price (at least recently) :-) Plus there are now other cryptocurrencies you can mine (e.g. Ethereum) with different proofs-of-work.

Comment author: madhatter 17 July 2017 10:52:48PM *  0 points [-]

never mind this was stupid

Comment author: WalterL 17 July 2017 11:16:30PM 1 point [-]

The reliable verification methods are a dream, of course, but the 'forbidden from sharing this information with non-members' is even more fanciful.

Comment author: madhatter 17 July 2017 11:18:42PM *  0 points [-]

Is there no way to actually delete a comment? :)

Comment author: Viliam 18 July 2017 07:01:24AM 0 points [-]

Not after someone already replied to it, I think.

Without replies, you need to retract it, then refresh the page, and then there is a Delete button.

Comment author: turchin 17 July 2017 11:30:43PM *  0 points [-]

In your case, force is needed to actually push most organisations to participate in such a project, and the worst ones - those which want to make AI first in order to take over the world - will not participate in it. The IAEA is an example of such an organisation, but it was not able to stop North Korea from creating its nukes.

Because of the above, you need a powerful enforcement agency above your AI agency. It could use either conventional weapons, mostly nukes, or some form of narrow AI to predict where strong AI is being created - or both. Basically, it means the creation of a world government, designed especially to contain AI.

This is improbable in the current world, as nobody will create a world government mandated to nuke AI labs based only on reading Bostrom's and EY's books. The only chance for its creation is if some very spectacular AI accident happens, like hacking 1000 airplanes and crashing them into 1000 nuclear plants using narrow AI with some machine learning capabilities. In that case, a global ban on AI seems possible.