Comment author: lmn 22 April 2017 07:52:03AM 2 points [-]

Of course, there is a game-theoretic reason to shoot the messenger. The whole point of doing so is to burn a bridge. The original meaning of the term is:

Originally in military sense of intentionally cutting off one's own retreat (burning a bridge one has crossed) to commit oneself to a course of action

Ancient battles, and probably to a large extent modern battles as well, were won or lost on morale. Once a large part of your army panicked and ran, your side was almost certain to lose. Furthermore, whoever was the last to run would be the first one killed when the enemy overran your position. Thus, if you were afraid the soldier next to you would run, you were likely to run as well. Burning the bridge behind you was one way to resolve the game-theoretic dilemma: running cannot save your life, so you might as well hold the line.

Metaphorically burning a bridge by killing the messenger serves the same purpose. By publicly killing Sauron's messenger, Aragorn is reassuring his allies that he's not going to betray them by cutting a deal with Sauron that hangs them out to dry.

Comment author: username2 22 April 2017 07:25:03PM 1 point [-]

Furthermore, Aragorn et al. specifically saw this conflict as a one-shot dilemma that had to be definitively resolved with the absolute destruction of Sauron. They already knew what a negotiated peace with the enemy looked like (Saruman) and were not willing to risk that outcome, or any other outcome that would result in the rise of Sauron again. This is why they risk everything by making a frontal assault on Mordor against overwhelming odds. Killing the messenger / burning bridges is perfectly in line with the character motivations here, and is actually a point where the original source material fails.

Comment author: juliawise 22 April 2017 01:27:27AM 3 points [-]

Yeah, I remember around 2007 a friend saying her parents weren't sure whether it was right for them to have children circa 1983, because they thought nuclear war was very likely to destroy the world soon. I thought that was so weird and had never heard of anyone having that viewpoint before, and definitely considered myself living in a time when we no longer had to worry about apocalypse.

Comment author: username2 22 April 2017 07:18:16PM 2 points [-]

It's a bit ironic to say that on a website with a large contingent of people who are purposefully child-free until the control problem is solved.

Comment author: username2 22 April 2017 07:16:14PM 0 points [-]

I don't think the rise of humanity has been very beneficial to monkeys, overall. Indeed the species we most directly evolved from, which at one point coexisted with modern humans, are now all extinct.

Comment author: Kaj_Sotala 21 April 2017 05:07:27PM 2 points [-]

If you're not convinced that utopian outcomes are even possible, isn't that completely compatible with the claim that utopian futures are not inevitable and low-probability?

Comment author: username2 21 April 2017 06:22:15PM 0 points [-]

"low probability possibilities they must work towards"

It's weird to devote your life to something that is impossible / logically inconsistent.

Comment author: Han 20 April 2017 04:48:39AM 4 points [-]

I think there's a rule-of-thumby reading of this that makes a little bit more sense. It's still prejudiced, though.

A lot of religions have a narrative that ends in true believers being saved from death and pain, after which people aren't going to struggle over petty issues like scarcity of goods and things. I run into transhumanists every so often who have bolted these ideas onto their narratives. According to some of these people, the robots are going to try hard to end suffering and poverty, and they're going to make sure most of the humans live forever. In practice, that goal is dubious from a thermodynamics perspective, and even if it weren't, some of our smarter robots are currently doing high-frequency trading and winning ad revenue for Google employees. That alone has probably increased net human suffering -- and they're not even superintelligent.

I imagine some transhumanism fans must have good reasons to put these things in the narrative, but I think it's well worth pointing out that these are ideas humans love aesthetically. If it's true, great for us, but it's a very pretty version of the truth. Even if I'm wrong, I'm skeptical of people who try to make definite assertions about what superintelligences will do, because if we knew what superintelligences would do, then we wouldn't need superintelligences. It would really surprise me if it looked just like one of our salvation narratives.

(obligatory nitpick disclaimer: a superintelligence can be surprising in some domains and predictable in others, but I don't think this defeats my point, because for the conditions of these people's narratives to be met, we need the superintelligence to do things we wouldn't have thought of in most of the domains relevant to creating a utopia)

Comment author: username2 20 April 2017 10:07:27PM 0 points [-]

This argument notably holds true of FAI / control theory efforts. Proponents of FAI assert that heaven-on-Earth utopian futures are not inevitable outcomes, but rather low-probability possibilities they must work towards. It still seems overtly religious and weird to those of us who are not convinced that utopian outcomes are even possible / logically consistent.

Comment author: Kallandras 19 April 2017 03:10:26AM 1 point [-]

The improvement in human productivity would be substantial, just in terms of the time saved while not driving, not to mention the extra man-hours from people not dying in preventable collisions.

I've also been thinking that it could cause a big shakeup in the housing market, as living in suburbs would be more appealing when your hour-long commute is reading/working time instead of driving time.

Comment author: username2 19 April 2017 08:38:15PM 0 points [-]

You mean living in suburbs is not appealing? ;)

Comment author: Dagon 19 April 2017 03:54:55PM *  1 point [-]

Downvote accepted, I do miss that feedback mechanism (when it worked, not when it got abused). My comment was perhaps over-brief.

I stand by my assertion that any definition of "successful" for cryonics must include actual revivals or measurable progress toward such. Nobody would ever wonder why chemotherapy isn't more successful just because many cancer patients choose not to try it.

It now occurs to me that OP may have intentionally distinguished "cryonics movement" from "cryonics" in terms of success metrics, in which case I'm still concerned, but have expressed the wrong dimension of concern.

Comment author: username2 19 April 2017 08:23:30PM 2 points [-]

Yes, I believe we have wandered off the OP's original topic.

But for what it's worth, I think you are comparing apples to oranges. All cryonics cases that have not experienced early failure due to organizational or engineering flaws are still ongoing. Only about 2% have failed; the outcome for the other 98% remains to be seen. It is absolutely the case that modern cryonics organizations like Alcor have made tremendous progress in increasing the probability of success, mostly through organizational and funding changes, but also through improvements to the suspension process.

Comment author: Lumifer 19 April 2017 02:49:53PM 0 points [-]

Maybe they said it ironically :-P

Comment author: username2 19 April 2017 03:31:35PM 2 points [-]

Philh is correct.

Comment author: eternal_neophyte 18 April 2017 05:51:57PM *  0 points [-]

For one thing, I need to be able to run it on a server without x-windows on it; so I need to be able to change code on my own machine, have a script upload it to the remote server, and update the running code without halting any running processes. I also need the input source code to be transformed so that every variable assignment, function call, or generator call is wrapped in a logging function which can be switched on or off, and for the output of the logs to be viewable by something basically resembling an Excel spreadsheet, where rows and columns can be filtered out according to the source and nature of the logging message. That way I can examine the operational trace of a complex running program to find the source of a bug without having to manually write logging statements and try/except blocks throughout the whole system. I don't know to what extent Jupyter's feature set intersects with what I need, but when I checked it out it seemed to be basically browser-based.

"Something like a Smalltalk environment" - yes. Pharo looks a lot like what I would want and I have toyed with it slightly.

Comment author: username2 19 April 2017 02:23:31AM 0 points [-]

Sounds like you need to look into Erlang.

Comment author: oomenss 18 April 2017 02:23:19PM *  5 points [-]

Right now, it's overcoming my unbearable procrastination/lethargy/aversion to anything that even seems unpleasant. If I can't do hard work, I'm basically useless for whatever I have planned anyway, so it's critical.

Comment author: username2 19 April 2017 02:21:50AM 4 points [-]

Two (unrelated) suggestions, from personal experience:

  1. See a psychiatrist. There may be a chemical solution.

  2. Go out of your comfort zone. Sign up for intense martial arts classes, and don't quit after you come home bruised from your first session. It will not take long to push past your pain threshold. Mental pain/aversion is not the same as physical pain, yes, but the skill of breaking past aversion limits is transferable. There's a reason for the saying "pain is only in the mind."
