
Comment author: komponisto 28 May 2017 07:11:52AM 21 points [-]

For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter's opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider -- a behavior pattern that doesn't need more practice, IMHO.

(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)

Comment author: FeepingCreature 31 May 2017 11:41:37AM *  1 point [-]

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like.

Yeah, but exposure therapy doesn't work like that. If people are too sensitive, you can't just rub their faces in the thing they're sensitive about and expect them to change. In fact, to desensitize people you'd want the exact opposite: really tight conversation norms that still let them push slightly outside their comfort zone.

Comment author: gjm 16 February 2017 01:42:04AM 0 points [-]

Well, we've had a basilisk already. Apparently we're slowly crawling backwards through alphabetical order. Next up, perhaps, Bahamut or Azathoth.

Comment author: FeepingCreature 16 February 2017 01:44:21AM 0 points [-]
Comment author: John_Maxwell_IV 03 December 2016 12:58:17PM *  11 points [-]

If Alyssa Vance is correct that the community is bottlenecked on idea generation, I think this is exactly the wrong way to respond. My current view is that increasing hierarchy has the advantage of helping people coordinate better, but it has the disadvantage that people are less creative in a hierarchical context. Isaac Asimov on brainstorming:

If a single individual present has a much greater reputation than the others, or is more articulate, or has a distinctly more commanding personality, he may well take over the conference and reduce the rest to little more than passive obedience. The individual may himself be extremely useful, but he might as well be put to work solo, for he is neutralizing the rest.

I believe this has already happened to the community through the quasi-deification of people like Eliezer, Scott, and Gwern. It's odd, because I generally view the LW community as quite nontraditional. But when I look at academia, I get the impression that college professors are significantly closer in status to their students than our intellectual leaders are to the rest of this community.

This is my steelman of people who say LW is a cult. It's not a cult, but large status differences might be a sociological "code smell" for intellectual communities. Think of the professor who insists that they always be addressed as "Dr. Jones" instead of being called by their first name. This is rarely the sort of earnest, energetic, independent-minded person who makes important discoveries. "The people I know who do great work think that they suck, but that everyone else sucks even more."

The problem is compounded by the fact that Eliezer, Scott, and Gwern are not actually leaders. They're high status, but they aren't giving people orders. This leads to leadership vacuums.

My current guess is that we should work on idea generation at present, then transform into a more hierarchical community when it's obvious what needs to be done. I don't know what the best community structure for idea generation is, but I suspect the university model is a good one: have a selective admissions process, while keeping the culture egalitarian for people who are accepted. At least this approach is proven.

Comment author: FeepingCreature 03 December 2016 01:58:39PM *  6 points [-]

I shall preface by saying that I am neither a rationalist nor an aspiring rationalist. Instead, I would classify myself as a "rationality consumer" - I enjoy debating philosophy and reading good competence/insight porn. My life is good enough that I don't anticipate much subjective value from optimizing my decisionmaking.

I don't know how representative I am. But I think if you want to reach "people who have something to protect", you need different approaches than for "people who like competence porn", and while a site like LW can serve both groups, we are to some extent running into issues where our population may be largely the latter instead of the former - people admire Gwern, but who wants to be Gwern? Who wants to be like Eliezer or lukeprog? We may not want leaders, but we don't even have heroes.

I think possibly what's missing, and this is especially relevant in the case of CFAR, is a solid, empirical, visceral case for the benefit of putting the techniques into action. At the risk of this being branded as outreach, and at the very real risk of significantly skewing their post-workshop stats gathering, CFAR should possibly put more effort into documenting stories of success through applying the techniques. I think the main focus of research should be full System-1 integration, not just for the techniques themselves but also for CFAR's advertisement. I believe it's possible to do this responsibly if one combines it with transparency and System-2-relevant statistics. Contingent, of course, on CFAR delivering proportionate value.

I realize that there is a chicken-and-egg problem here: for reasons of honesty, you only want to use System-1-appealing techniques if the case for them is solid, and judging whether the case is solid is exactly the thing System 1 is traditionally bad at! I'm not sure how to solve that, but I think it needs to be solved. To my intuition, rationality won't take off until it's value-positive for S1 as well as S2. If you have something to protect you can push against S1 in the short term, but the default engagement must be one of playful ease if you want to capture people in a state of idle interest.

Comment author: sarahconstantin 27 November 2016 10:21:45AM 9 points [-]

This is a nontrivial cost. I'm considering it myself, and am noticing that I'm a bit put off, given that some of my (loyal and reflective) readers/commenters are people who don't like LW, and it feels premature to drag them here until I can promise them a better environment. Plus, it adds an extra barrier (creating an account) to commenting, which might frequently lead to no outside comments at all.

A lighter-weight version of this (for now), might be just linking to discussion on LW, without disabling blog comments.

Comment author: FeepingCreature 29 November 2016 12:28:47PM 1 point [-]

Would you use the LW comments section if it were embeddable, like Disqus is?

In response to Identity map
Comment author: kebwi 16 August 2016 06:02:34AM *  2 points [-]

I don't mean to sound harsh below, Alexey. On the whole, I think you've done a wonderful job, but that said, here's my take...

I personally think the question is poorly phrased. Throughout the document, Turchin asks the question "will some future entity be me?" The copy problem, which he takes as one of the central issues at hand, demonstrates why this question is so poorly formulated, for it leads us into such troublesome quandaries and paradoxes. I think the future-oriented question (will a future entity be me?) is simply a nonsense question, perhaps best conceptualized by the fact that the future doesn't exist, and therefore it violates some fundamental temporal property to speak about future things as if they already exist and are available for scrutiny. They don't, and it is wrong to do so.

I think the only rational way to pose the question is past-oriented: "was some past entity me?" Notice how simply and totally the copy problem evaporates in this context. Two current people can both give a positive answer to the question via a branching scenario in which one person splits into two, perhaps physically, perhaps psychologically (informationally). Despite both answering "yes", practically all the funny questions and challenges just go away when we phrase the question in a past-oriented manner. Asking which of multiple future people is you paralyzes you with some "choice" to be made from the available future options. Hence the paradox. But asking which past person was you puts no choice on the table. The copy scenario always consists of a single person splitting, so there is only one ancestor from which a descendant could claim to have derived. No choice from a set of available people enters into the question.

One question claws its way back into the discussion though. If current persons A and B both answer "yes" to identifying with past C, then does that somehow make them identified with one another? That can be a highly problematic notion, since they can seem so irrefutably different, both in their memories and in their "conscious states" (and also in their physical aspects, if one cares about physical or body identity). The solution to this additional problem is simple, however: identity is not transitive to begin with. Thus, the fact that A is C and B is C does not imply that A is B in the first place. It never implied that anyway, so why even entertain the question? No, of course A and B aren't the same, and yet they are both still identified as C. No problem.

Draw a straight line segment. At one end, deviate with smooth curvature to bend the line to the left. But also deviate from the same point to the right. As we trace along the line, approaching the branch point, we are faced with the classic question: which of the impending branches is the line? We can follow either branch with smooth curvature, which is one good definition of a line's "identity" (with lines switching identity at sharp angles). The question is unanswerable, since either branch could be identified as the original, yet we are phrasing the question so as to insist upon one choice. I maintain that the question is literally nonsense. Now pick an arbitrary point on either of the branches, look back along its history, and ask which set of points along its smooth curvature is the same line as that branch. For each branch we can conclude that points along the segment prior to the split are the same line as the branch itself, yet the branches are not equivalent to one another, since that would require turning a sharp corner to switch branches. How is this possible? Simple. It is not a transitive relation. It never was.
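To make the non-transitivity concrete, here is a minimal toy sketch (my own illustration with hypothetical names, not something from Turchin's map or kebwi's book): the past-oriented question "was that past entity me?" is answered by walking back up a branching ancestry, and the resulting relation is reflexive but not transitive across branches.

```python
# Toy model of past-oriented, branching identity (hypothetical illustration).
class Person:
    def __init__(self, name, ancestor=None):
        self.name = name
        self.ancestor = ancestor  # the single person this one split off from

    def was(self, past):
        """Past-oriented question: 'was that past entity me?'"""
        current = self
        while current is not None:
            if current is past:
                return True
            current = current.ancestor
        return False

c = Person("C")
a = Person("A", ancestor=c)  # C splits into A...
b = Person("B", ancestor=c)  # ...and B

print(a.was(c), b.was(c))  # True True  -- both descendants identify with C
print(a.was(b), b.was(a))  # False False -- yet A and B are not each other
```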

Turchin pays very minor attention to branching (or splitting) identity in his map, which I think is disappointing since it is likely the best model of identity available.

And that is why I advocate for branching identity in my book and my various articles and papers. I just genuinely think it is the available theory of identity that comes closest to an actual notion of fundamental truth, assuming there is any such thing on these matters.

In response to comment by kebwi on Identity map
Comment author: FeepingCreature 09 September 2016 09:39:40AM 0 points [-]

The past doesn't exist either.

Comment author: Eliezer_Yudkowsky 10 November 2012 06:32:12PM 14 points [-]

You mean you're not?

I'm signed up for cryonics. I'm a bit worried about what happens to everyone else.

Going on the basic anthropic assumption that we're trying to do a sum over conditional probabilities while eliminating Death events to get your anticipated future, then depending on to what degree causal continuity is required for personal identity, once someone's measure gets small enough, you might be able to simulate them and then insert a rescue experience for almost all of their subjective conditional probability. The trouble is if you die via a route that degrades the detail and complexity of your subjective experience before it gets small enough to be rescued, in which case you merge into a lot of other people with dying experiences indistinguishable from yours and only get rescued as a group. Furthermore, anyone with computing power can try to grab a share of your soul and not all of them may be what we would consider "nice", just like if we kindly rescued a Babyeater we wouldn't go on letting them eat babies. As the Doctor observes of this proposition in the Finale of the Ultimate Meta Mega Crossover, "Hell of a scary afterlife you got here, missy."
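One way to write that assumption down (my notation, a hedged paraphrase rather than anything Eliezer specifies): if $\mu(E)$ is the measure of a candidate future experience $E$, anticipated subjective probability drops the Death branches and renormalizes over what remains,

$$P(\text{you experience } E) \;=\; \frac{\mu(E)}{\sum_{E' \notin \mathrm{Death}} \mu(E')},$$

so rescue simulations matter by contributing terms to the surviving sum for continuations that would otherwise have near-zero measure.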

The only actual recommendations that emerge from this set of assumptions seem to amount to:

1) Sign up for cryonics. All of your subjective future will continue into quantum worlds that care enough to revive you, without regard for worlds where the cryonics organization went bankrupt or there was a nuclear war.

2) If you can't be suspended, try to die only by routes that kill you very quickly with certainty, or (this is possibly better) kill almost all of your measure over a continuous period without degrading your processing power. In other words, the ideal disease has a quantum 50% probability of killing you while you sleep, but has no visible effects when you wake up, and finally kills you with certainty after a couple of months. Your soul's measure will be so small (a worked number appears after this list) that almost all of its subjective quantity will at this point be in worlds simulated by whatever Tegmark Level IV parties have an interest in your soul, if you believe that's a good thing. If you don't think that's a good thing, try to die only by routes that kill you very quickly with certainty, so that it requires a violation of physical law rather than a quantum improbability to save you.

3) In other words, sign up for cryonics.
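To put a hedged number on point 2 above (my arithmetic, assuming an independent quantum coin flip each night): after $n$ nights of 50% survival your remaining measure is $2^{-n}$, so after roughly two months

$$\mu_{\text{survive}} \approx 2^{-60} \approx 8.7 \times 10^{-19},$$

which is small enough that any simulated continuations of comparable or greater measure dominate the conditional sum.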

Comment author: FeepingCreature 25 July 2016 08:04:41AM 0 points [-]

"Hell of a scary afterlife you got here, missy."

! ! !

Be honest. Are you prescient? And are you using your eldritch powers to troll us?

Comment author: [deleted] 27 March 2016 09:11:02AM 2 points [-]

Why would someone make major decisions based on metaphysical interpretations of quantum physics that lack experimental verifiability? That seems like a poor life choice.

Comment author: FeepingCreature 27 March 2016 10:42:07PM *  2 points [-]

Tegmark 4 is not related to quantum physics. Quantum physics does not give an avenue for rescue simulations; in fact, it makes them harder.

As a simulationist, you can somewhat salvage traditional notions of fear if you retreat into a full-on absurdist framework where the point of your existence is to give a good showing to the simulating universes; alternately, risk avoidance is a good Schelling point for a high score. Furthermore, no matter how much utility you will be able to attain in Simulationist Heaven, this is your single shot to attain utility on Earth, and you shouldn't waste it.

It does take the sting off death though, and may well be maladaptive in that sense. That said - it seems plausible that a lot of simulating universes would end up with a "don't rescue suicides" policy, purely out of a TDT desire to avoid the infinite-suicidal-regress loop.

I am continually amused by how Catholic this cosmology ends up being, by sheer logic.

Comment author: Lumifer 20 December 2015 10:05:47PM 1 point [-]

CFAR's methods are antifragile

What does that mean?

Comment author: FeepingCreature 25 December 2015 12:55:13AM 0 points [-]

The term is from Nassim Taleb's book "Antifragile".

Basically, systems that can improve from damage.

Comment author: [deleted] 22 May 2015 11:34:08AM 1 point [-]

But is it possible to have power without all the rest?

Comment author: FeepingCreature 22 May 2015 02:49:03PM *  0 points [-]

Not sure. Suspect nobody knows, but seems possible?

I think the most instructive post on this is actually Three Worlds Collide, for making a strong case for the arbitrary nature of our own "universal" values.

Comment author: [deleted] 22 May 2015 08:24:09AM *  1 point [-]

Can you link to a longer analysis of yours regarding this?

I simply feel overwhelmed when people discuss AI. To me, intelligence is a deeply anthropomorphic category that includes subcategories like having a good sense of humor. Reducing it to optimization, without even sentience or conversational ability with self-consciousness... my brain throws out the stop sign at this point already, and this is not even AI yet - it is the preliminary study of human intelligence that already dehumanizes and de-anthropomorphizes the idea of intelligence, making it sound more like a simple, brute-force algorithm. Like Solomonoff Induction, another thing my brain completely freezes over: how can you have truth and clever solutions without even really thinking, just by throwing a huge number of random ideas in and seeing what survives testing? Would it all be so quantitative? Can you reduce the wonderful qualities of the human mind to quantities?
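For what it's worth, the "throw random ideas in and see what survives testing" picture can be made concrete with a toy caricature (my own sketch, not from the thread; real Solomonoff induction enumerates all programs and is uncomputable, so this only illustrates the shape of the idea):

```python
# Toy caricature of induction by brute enumeration: propose every short
# "hypothesis", keep the ones consistent with the data, and let the shorter
# survivors carry more weight when predicting the next observation.
from itertools import product

observed = "010101"

def predicted_bit(block, position):
    """What a repeating-block hypothesis says the bit at `position` is."""
    return block[position % len(block)]

survivors = []
for length in range(1, 5):                      # candidate blocks of length 1..4
    for bits in product("01", repeat=length):
        block = "".join(bits)
        if all(predicted_bit(block, i) == observed[i] for i in range(len(observed))):
            survivors.append((block, 2.0 ** -length))   # shorter block, higher prior

# Weighted vote of the surviving hypotheses on the next bit.
votes = {"0": 0.0, "1": 0.0}
for block, weight in survivors:
    votes[predicted_bit(block, len(observed))] += weight

print(survivors)  # [('01', 0.25), ('0101', 0.0625)]
print(votes)      # {'0': 0.3125, '1': 0.0} -- the surviving "ideas" agree on 0
```

No step in this involves anything that looks like thinking; all the work is done by generate, test, and weight.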

Comment author: FeepingCreature 22 May 2015 11:06:03AM 9 points [-]

Intelligence to what purpose?

Nobody's saying AI will be human without humor, joy, etc. The point is AI will be dangerous, because it'll have those aspects of intelligence that make us powerful, without those that make us nice. Like, that's basically the point of worrying about UFAI.
