
In response to comment by calef on Academic papers
Comment author: army1987 31 October 2014 10:33:06AM 0 points [-]

rapid

LOL.

Comment author: Viliam_Bur 31 October 2014 10:18:58AM *  0 points [-]

It's what happens when you look at the lessons of "Politics is the Mind-Killer" and "Reversed stupidity is not intelligence", and decide to ignore them because affective spirals are too much fun to give up.

But it's difficult to choose whether the correct reversed stupidity in politics should actually be libertarianism or monarchy. The former seems more popular among the LW crowd, but that also makes it kinda boring; the latter seems more original, but is usually defended by worse arguments. So you invent a libertarian-ish monarchy world, where the freely competing subjects are not the puny average humans, but the God-Emperors of different states. (You call all other regimes "demotist" to show that they are actually all the same.)

Of course, putting it this way is not attractive, so you have to hide it in hundreds of pages written in obscurantist language, so that no outsider is really sure what you are actually talking about. Then you insert some interesting historical facts, and a lot of criticism of the political left, some of which is insightful.

And then you keep promoting the new teaching in LessWrong debates, because clever contrarianism is your selling point, and LessWrong has a weakness for clever contrarians. And then you use your presence on LessWrong as proof that rational people support you, despite the fact that your fans are actually a tiny minority here (probably even smaller than religious people; and LW is explicitly atheistic).

Better analysis can be found here: "Reactionary Philosophy In An Enormous, Planet-Sized Nutshell", "The Anti-Reactionary FAQ". The first article explains the ideas better than the original sources, and the second article shows that this map doesn't fit the territory.

Comment author: Viliam_Bur 31 October 2014 09:39:38AM 0 points [-]

I think we're relatively safe

Returning to the original question ("Where are you right, while most others are wrong? Including people on LW!"), this is exactly the point where my opinion differs from the LW consensus.

I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion and charisma field

For a sufficiently high value of "eventually", I agree. I am worried about what would happen until then.

I'm really hoping we don't get tested on that one.

I'm hoping that this is not the best answer we have. :-(

Comment author: Viliam_Bur 31 October 2014 08:57:59AM 0 points [-]

In the short term, yes. In the long term, some people would benefit from awareness-increasing techniques, such as meditation or therapy, while other people would benefit from changing their behavior.

In response to Academic papers
Comment author: cameroncowan 31 October 2014 08:52:08AM 0 points [-]

I have a lot of fun exploring Academia.edu (I have papers posted there myself). I think going to the source is important when you want to understand an idea at a deep level and you wish to really digest something. Research, of course, is best done at this level.

Comment author: Evan_Gaensbauer 31 October 2014 08:40:36AM 0 points [-]

There are two ways I'm thinking your aversion could be interpreted: not revealing something because you mostly feel it's personally embarrassing, or not revealing something because you believe you would be widely negatively judged for it. I tried to offer a solution to the former interpretation of the problem in the other comment. In this comment, I'll cover what I believe makes sense when you believe you'd be very harshly judged. I don't believe such aversions to sharing such thoughts are miscalibrated.

I'll start with an example. When I wrote the above post, one example I was considering using was from a user, not using a real name, who was asking whether it was worth taking an illegal psychoactive substance for its therapeutic and cognitive effects. Now, I didn't need to ask anyone's permission to include their perspectives as examples, and I still don't; nobody does. However, this one user might be linked to their public identity. I'm mentioning neither the username nor the substance in question, so it's not searchable. That was an edge case for which I erred on the side of discretion, rather than publicly profiling someone who asked a more taboo question. They got the answer they wanted, which is what's important. I wrote the post so individual users would get value for themselves, not ask questions out of a sense of 'improving the community', or whatever.

That's the sort of personal detail that might attract unwanted attention outside of Less Wrong norms. Talking about our own personal politics, or ideological beliefs (fringe-science, social, philosophical, etc.) that aren't shared by most others, isn't always appreciated on Less Wrong. It's fine to hold those beliefs if you're willing to accept you may very well be wrong, but debating them on Less Wrong still seems problematic. However, the community has shifted from "politics is the mind-killer" to "politics is hard mode" to "we have other sites specifically for discussing controversial topics".

Comment author: Evan_Gaensbauer 31 October 2014 08:27:19AM 0 points [-]

Users can always start a throwaway account, and post in a thread. That's done on reddit. It may be more difficult to start a discussion with a throwaway account, but I suppose it could be done. I just discussed this in the open thread. Some etiquette was covered:

  • Indicate clearly, and from the beginning, that the account you're using is a throwaway. For example, "this is a throwaway account..."

  • Use it to discuss topics you don't want to have your real name, or your regular account linked to, but don't use it as an excuse to engage Less Wrong at a lower level than you usually do.

  • Don't use the throwaway account as a mask to get away with trolling, harassment, bad jokes, vitriol, or not trying to be reasonable.

The community may be indifferent, or sympathetic, but usually not exclusionary. I mean, if somebody is using a throwaway account to discuss why it's rational for all of us to start hating this one particular outgroup, that would deservedly receive flak. However, maybe someone wants to discuss really signing up for cryonics, but they feel it's still too weird to have their name publicly linked to it. Or, maybe, they have a problem they believe Less Wrong might be able to solve better than other online, or meatspace, support communities, but they're embarrassed for people to know it's them. If I were in that particular sort of situation, I would make it clear that I'm already a regular user of Less Wrong, and that it's too harrowing.

However, no user would be obliged to qualify why they're using a throwaway, even if another user doesn't have the perspective to understand why a throwaway might feel necessary.

Comment author: Benito 31 October 2014 08:23:16AM 1 point [-]

I see this as an example of how anyone can rationalise any goal they please.

Comment author: Wes_W 31 October 2014 07:23:52AM 0 points [-]

I'm not sure whether I'm grossly ignorant of the biology here. Supposing they'd still be helpful, would it be important to get your gut bacteria back, rather than some other gut bacteria? Would that be more akin to replacing a kidney, or replacing part of the brain?

Comment author: dougclow 31 October 2014 07:07:11AM 0 points [-]

Empirically we seem to be converging on the idea that the expansion of the universe continues forever (see Wikipedia for a summary of the possibilities), but it's not totally slam-dunk yet. If there is a Big Crunch, then that puts a hard limit on the time available.

If - as we currently believe - that doesn't happen, then the universe will cool over time, until it gets too cold (=too short of negentropy) to sustain any given process. A superintelligence would obviously see this coming, and have plenty of time to prepare - we're talking hundreds of trillions of years before star formation ceases. It might be able to switch to lower-power processes to continue in attenuated form, but eventually it'll run out.

This is, of course, assuming our view of physics is basically right and there aren't any exotic possibilities like punching a hole through to a new, younger universe.

Comment author: shminux 31 October 2014 07:05:03AM 0 points [-]

Or a simulation of their beneficial effects.

Comment author: Mitchell_Porter 31 October 2014 07:00:02AM 0 points [-]

What does it mean for one thing to be more real than another thing?

Also, when you say something is "map not territory", what do you mean? That the thing in question does not exist, but it resembles something else which does exist? Presumably a map must at least resemble the territory it represents.

Comment author: dougclow 31 October 2014 06:53:00AM 0 points [-]

Yes, good point that I hadn't thought of, thanks. It's very easy to imagine far-future technology in one respect and forget about it entirely in another.

To rescue my scenario a little, there'll be an energy cost in transporting the iron together; the cheapest way is to move it very slowly. So maybe there'll be paperclips left for a period of time between the first pass of the harvesters and the matter ending up at the local black hole harvester.

Comment author: Curiouskid 31 October 2014 06:26:24AM 0 points [-]

Bayesianism and Causality, or, Why I am only a Half-Bayesian (Judea Pearl)

“The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships.”

Comment author: Curiouskid 31 October 2014 06:25:54AM 0 points [-]

I had the same thought when I read Hayworth's recent interview. It's really good.

Comment author: calef 31 October 2014 05:54:22AM *  0 points [-]

Not that I actually believe most of what I wrote above (just that it hasn't yet been completely excluded), but if QG introduced small nonlinearities to quantum mechanics, fun things could happen, like superluminal signaling as well as the ability to solve NP-complete and #P-complete problems in polynomial time (which is probably better seen as a reason to believe that QG won't have a nonlinearity).

Comment author: DeterminateJacobian 31 October 2014 05:35:40AM *  0 points [-]

I like that article. For people capable of thinking about what methods make humans happy, it seems unlikely that simply performing any feel-good method will overcome barriers as difficult as what happiness means, or what use happiness is anyway. Such methods might improve one's outlook in the short term, or provide an easier platform to help answer those questions, but to me the notion that therapy works because of therapists (a sort of research-supported idea, if I recall correctly) corresponds well to the intuition that humans are just too wrapped up for overly easy feel-good solutions to work. (This is as opposed to psychiatric solutions to psychiatric issues, for which you should be following this algorithm if you're depressed.)

I've had trouble with the notion that happiness is even a goal to be strived for at all, because of the self-referential reality that a really good way to become happy is to become less self-focused, but that thinking about being happy is sort of self-focused. In that sense, I'd much rather seek out "fulfillment" or "goodness" than "happiness," but I now think that my issue here is just an artifact of the language of people using the word "happy." That word is just too wrapped up in ideas that make it out to be something like wireheading, which as we know is something that nobody actually wants. And so while I do think people looking for X often stop short with not-very-desirable things, it's good to separate this from people who actually want to be the most good kind of happy, the kind that one would always want, and maybe still even call "happy."

In response to comment by Salemicus on Academic papers
Comment author: satt 31 October 2014 05:15:29AM *  1 point [-]

On the topic of which: an economics paper which made a big impression on me is Hahnel & Sheeran's "Misinterpreting the Coase Theorem" (Journal of Economic Issues, 43, pp. 215-237). Unfortunately there appears to be no freely available copy of the published version online, but there is a preprint without the figures. It's a bit less accessible than Coase's paper, but I imagine pretty well anyone who's taken a microeconomics class could follow it.

Comment author: DeterminateJacobian 31 October 2014 05:00:58AM 0 points [-]

I've sort of internalized the idea that everything is, at least in principle, a solvable problem. And more importantly, that this corresponds without conflict to the normal way that I go around being good and human when not operating under the rationalist guise.

I'd say rationalism often takes this in-principle role in my thinking, providing a meta-level solution to extremely hard problems that my brain is already trying to solve by non-rationalist means. To take an example from recent months of my life: I've had an extremely hard time reconciling my knowledge that I'm the one to blame for all of my problems with the idea that I shouldn't feel guilty for not being perfect at solving all of my problems. This is a very human question that's filled to the brim with mental pitfalls, but I've been able to make a little progress by recognizing that, by definition and method, instrumental rationality is actually equivalent to making myself good and awesome, whether or not the on-paper rationalist method is the one my brain is using most of the time. I'm better able to realize that the human inability to be theoretically optimal is subsumed by human rationality, that the only optimal that exists is the kind that is actual, and that all that is left to do is take the infinite stream of possible self-improvements you can think of and start checking them off the list.

And so, when faced with something that seems next to impossible to solve (e.g. finding somebody to love <sheds a lone, hopeful tear>) there's no reason to blame the world, myself, or my proclivity to blame the world or myself. There's only the chance to do the most possible fun thing, which is to enjoy the journey of being myself, where myself is defined as someone who ceaselessly self-improves, even when that means putting less pressure on myself to improve on the object level.

For a while the "weirdness" of Less Wrong made me want to shy away from really engaging with the people here, but I'd love for that to change. If everything is a solvable problem, and we only want to solve things that are problems, then either Less Wrong is just fine (and I can improve my perception), or it is sort of actually weird but can be improved. And I wouldn't mind contributing wherever this is possible.

Comment author: DeterminateJacobian 31 October 2014 04:32:10AM 0 points [-]

Heh, clever. In a sense, iron has the highest entropy (atomically speaking) of any element. So if you take the claim that an aspect of solving intergalactic optimization problems involves consuming as much negentropy as possible, and that the highest-entropy state of spacetime is low-density iron (see shminux's comment on black holes), then Clippy it is. It seems, though, like superintelligent anything-maximizers would end up finding even higher-entropy states that go beyond the merely atomic kind.

...Or even discover ways that suggest that availability of negentropy is not an actual limiter on the ability to do things. Does anyone know the state of that argument? Is it known to be true that the universe necessarily runs out of things for superintelligences to do because of thermodynamics?

Comment author: Luke_A_Somers 31 October 2014 04:18:54AM 1 point [-]

We actually know quite a bit about quantum gravity: it must fall under a quantum mechanical framework, and it needs to result in gravity, and gravitons haven't been directly detected yet. This isn't enough to determine what the theory is, but it is enough to say some things about it. The main two things are:

1: Since it's just quantum mechanics, whatever it does, it'll just set another Hamiltonian. If it changes the ground rules, then it's not a theory of quantum gravity. It's a theory of something else.

2: Gravity is weak. Ridiculously weak. Simply getting the states to not mush up into a continuum will be more difficult by a factor for which 'billions of times' would be a drastic understatement.

In order for gravity to be even noticeable, let alone the main driver of action, you either need to have really really enormous amounts of stuff, or things have to be insanely high energy and short-ranged and short-lived (unification energies).

Either of these would utterly murder coherence. In the former case your device would be big enough (and/or slow enough) that even neutrino collisions would decohere it fairly comprehensively long before the first operation could complete. In the latter case your computer is exploding at nearly the speed of light every time you turn it on, and incidentally requires a particle accelerator that makes CERN look like a 5V power cable.

So, everything that makes gravity different from electromagnetism makes it much much worse for computing.

Comment author: satt 31 October 2014 04:03:24AM *  0 points [-]

Someone reminded me of these recent if UK-centric examples a few weeks ago. [Edit: they're not about public opinion, but they're in the same vein of things that catch people out.]

Comment author: fubarobfusco 31 October 2014 03:11:33AM *  2 points [-]

Thanks. To explain the joke and/or show my work:

  • The seed idea here was the abolition of copyright in a post-consumerist society — not post-Singularity, but dramatically post-scarcity compared to today. Commercial media stopped being a thing because ① people don't need jobs because post-scarcity; ② noncommercial media descended from fan-works continued to improve in production quality; but ③ people still like good stories, and the most popular stories are often ones based on established, well-known characters. (From Anansi to Hamlet would make a great book title.)
  • The twist was literary theory as a scientific-mathematical discipline. This is an extrapolation from the computational turn in linguistics. In this future, "literary theory" refers to the mathematical study of possible and actual stories; with computational literary theory being the application of computational linguistics and cognitive science to the topic.
  • The bit that I had to go back and rewrite was to consistently use the words "storytelling" and "story" in place of words such as "fiction" and "literature", except in the article title and the academic field "literary theory". This future doesn't consider there to be hard boundaries between "folktales", "genre fiction", "fan fiction", and "literature" — all of these are stories, and this isn't a fluffy postmodern doctrine but a scientific result.
  • It's Whig history. The future writers think of their unitary concept of storytelling as both scientifically proven and obviously true, and the former era's distinctions (and laws) as being both superstitious and wicked. They think of copyright as an unnatural imposition on human culture — but they do so from a standpoint where authors/storytellers don't have to worry about earning a living.
  • Chiyoda is the ward of Tokyo in which Akihabara district is located.
  • E. Mitchell Leonard, of Leonard's Theorem, is E. L. James from a parallel universe.
In response to comment by calef on Academic papers
Comment author: Capla 31 October 2014 02:43:27AM 1 point [-]

I acknowledge that they are separate questions.

I hope asking the wrong questions leads me to the right ones. Thank you.

Comment author: drethelin 31 October 2014 02:04:12AM 1 point [-]

In addition, people get a notification when someone responds directly to a comment they made.

Comment author: ancientcampus 31 October 2014 01:42:07AM 0 points [-]

I'm not going to lie - I always find discussions at LW very intense and rather intimidating. Discussing my and other people's ideas is bad enough - I personally would rather not expose anything highly personal to the brutally honest scrutiny here.

Comment author: ancientcampus 31 October 2014 01:35:47AM 0 points [-]

"and we're back at square one"

Comment author: ancientcampus 31 October 2014 01:33:10AM 1 point [-]

Nice! I really hope the pendulum doesn't swing that far, though.

Comment author: ancientcampus 31 October 2014 01:28:37AM 0 points [-]

I appreciate what you're saying. Just going by the information I posted, that wasn't nearly enough information to conclude "AMF has more money than they can use". It merely raised the question - which I had answered here. :)

Comment author: ancientcampus 31 October 2014 01:17:34AM 0 points [-]

For those interested, here's a graph of the AMF's "recurring donation" income over time: http://www.againstmalaria.com/RecurringDonations.aspx?emailID=20130315 Take-away points: 1) it's been in steady decline for about a year; 2) they're not nearly as big as I thought - recurring donations are currently at $60,000, which isn't even enough to support a decently sized staff.

Comment author: Azathoth123 31 October 2014 01:15:03AM *  0 points [-]

As you said, he likely does not have a coherent preference for Dubai's system. I don't see why it's an interesting question.

They're more coherent than the preferences revealed by polls. It's fairly well known that polls can be made to produce vastly different results by slight reformulations of the question.

economic "revealed" preferences are conditioned on people's current available income and assets

In other words, when revealed and stated preferences disagree it means that people's stated preferences lead to results that the person isn't willing or able to actually live with.

Comment author: Azathoth123 31 October 2014 01:11:00AM 0 points [-]

Actually, hold on, jump out, meta-level question: why are you privileging the hypothesis that "voting with their feet" represents a reflectively-coherent all-else-equal preference anyway?

Well, for one thing "voting with one's feet" doesn't have the rational ignorance problem that voting does.

Comment author: ancientcampus 31 October 2014 01:09:52AM 0 points [-]

Both excellent things to know, thanks!

Comment author: Lumifer 31 October 2014 01:05:53AM 1 point [-]

The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else.

I would describe that person as a charismatic manipulator. I don't think it requires being a sociopath, though being one helps.

Comment author: jpaulson 31 October 2014 12:57:11AM 0 points [-]

I work at Google, and I work ~40 hours a week. And that includes breakfast and lunch every day. As far as I can tell, this is typical (for Google).

I think you can get more done by working longer hours... up to a point, and for limited amounts of time. Hourly productivity drops, but total work output still goes up. I think the break-even point is 60h / week.

Comment author: pushcx 31 October 2014 12:51:48AM *  1 point [-]

So if you want to keep people occupied for a looooong time without running out of game-world, focus on PvP

Or invest in "procedural content generation", where the game world is constantly generated or regenerated. The "roguelike" genre has made games that have been played for decades (like Rogue, Nethack, ADOM) and continues to grow (Ultima Ratio Regum, Dwarf Fortress). It's hybridizing into other genres like action platformers (Rogue Legacy, Spelunky, Risk of Rain). Games are creating new genres by starting with PCG (FTL, Minecraft). Civilization and the Maxis Sim games are classics in large part because of content generation.
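
For a concrete picture of why seeded generation never runs out of game-world, here is a minimal sketch (illustrative Python with hypothetical names, not code from any of the games above): the same seed always reproduces the same level, and every fresh seed is a fresh level.

    import random

    def generate_level(seed, width=20, height=8, wall_chance=0.3):
        # Deterministically generate a simple tile map from a seed:
        # the same seed always yields the same level.
        rng = random.Random(seed)
        return ["".join("#" if rng.random() < wall_chance else "."
                        for _ in range(width))
                for _ in range(height)]

    # An effectively endless supply of distinct levels, one per seed:
    for row in generate_level(seed=42):
        print(row)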

For another perspective, game designer Dan Cook has written several blog posts on PCG leading to better-designed game systems than handcrafted content. Similarly, Jonathan Blow has argued extensively against games that extend their use of systems (e.g. across all the levels of a Super Mario, Modern Warfare, or Call of Duty game, the player will see few or no changes in rules, just new sets) rather than exploring a system once thoroughly (Braid, The Witness, Portal, Polarity).

I'll leave the comparisons to "Scientific Progress [as] the PvE of real life" for the simulationists and solipsists. But I've always seen the human obsession with status and gossip as a bug rather than a feature and endeavored to advance more interesting things in the world.

Comment author: Douglas_Knight 31 October 2014 12:51:14AM 0 points [-]

I estimate that 95% of readers of Cialdini read it for business.

Comment author: Snorri 31 October 2014 12:47:58AM 0 points [-]

Here is a PDF of 40 sleep mindhacks: https://www.goodreads.com/ebooks/download/8114179-40-sleep-hacks

To be honest, I found the list rather simplistic, but it may be a good starting point for others. The one bit of advice that I found useful was waking up to the sound of pleasant music (via mp3 alarm), rather than the screeching of an alarm clock.

Comment author: jaime2000 31 October 2014 12:36:11AM *  2 points [-]

Neoreaction is an intellectual tradition of political philosophy composed of bloggers who are ideologically descended from the ideas of Curtis Yarvin, better known as Mencius Moldbug. If you want the five-minute version, read Konkvistador's summaries. If you are willing to read a much longer introduction, try one of these. Or just read the Neoreactionary Canon, which includes all three.

Anyway, the relevance to the grandparent is that LessWrong has a non-trivial neoreactionary minority (3% as of the last survey), and that former MIRI employee Michael Anissimov and his friends went and made a neoreactionary website called MoreRight (an obvious pun on LessWrong). Eliezer Yudkowsky was not amused.

Comment author: Douglas_Knight 31 October 2014 12:33:48AM 1 point [-]

The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented

I thought that it was chosen in part for a story like: a paperclip manufacturer wants an AI to help it better manufacture paperclips.

Comment author: Adele_L 31 October 2014 12:31:57AM 0 points [-]

The article talked about endless contrarianism, where people disagree as a default reaction, instead of because of a pre-existing difference in models. I think that is a problem in the LW community.

In response to Academic papers
Comment author: calef 31 October 2014 12:06:57AM 4 points [-]

You might be asking the wrong question. For example, the set of papers satisfying your first question:

What are the most important or personally influential academic papers you've ever read? (call this set A)

has almost no overlap with what I would consider the set of papers satisfying:

Which ones are essential (or just good) for an informed person to have read? (call this set B)

And this is for a couple of reasons. Scientific papers are written to communicate "We have evidence of a result--here is our evidence, here is our result," with fairly minimal grounding of where that result stands within the broader scientific literature. Yes, there's an introduction section usually filled with a bunch of citations, and yes there's a conclusion section, but papers are (at least in my field) usually directed at people that are already experts in what the paper is being written about (unless that paper is a review article).

And this is okay. Scientific papers are essentially rapid communications. They're a condensed, "I did this!". Sometimes they're particularly well written and land in category A above. But I can't think of a single paper in my A column that I'd want a layman to read. None of them would make any sense to an "informed" layman.

My B column would probably have really good popular books written by experts--something like Quantum Computing Since Democritus, or, like others have said, introductory level textbooks.

In response to Academic papers
Comment author: Fluttershy 30 October 2014 11:46:22PM 4 points [-]

In Chemistry in particular, and the natural sciences in general, I find that reading textbooks is a much more efficient way to digest knowledge than reading papers. The largest advantage which reading papers confers relative to reading textbooks is that textbooks rarely cover the newest of the new advances in any field. I rarely find that I need to read a paper to learn something that I can't find in a textbook-- this is probably because, in the natural sciences at the undergraduate level, people don't often need to find information which was discovered within the last five years. The major exception to this trend is people who specialize heavily within a particular field, such as PhD students, postdocs, professors, and the like.

There are other reasons why reading individual journal articles can be helpful, but since you asked this question from the perspective of someone hoping to continue their efforts at self-education, I would advise you to stick with textbooks, for the most part.

Also, reading meta-analyses of papers, which will themselves be published in journals, is often better (in terms of efficiency and knowledge gathering power) than reading individual studies.

Comment author: Ritalin 30 October 2014 11:42:37PM 0 points [-]

Measuring the difference between those three is hardly trivial, though. Can't they be considered the same for all practical purposes?

Comment author: jimrandomh 30 October 2014 11:36:16PM 3 points [-]

Wouldn't you also want to throw the paperclips into black holes, to harvest the gravitational energy?

Comment author: shminux 30 October 2014 11:31:36PM *  0 points [-]

I think it started in http://lesswrong.com/lw/cew/group_rationality_diary_51412/

which was inspired by CFAR's "applied rationality" minicamp, and presumably interpreted "applied" as "instrumental".

Comment author: Curiouskid 30 October 2014 11:25:18PM 0 points [-]

Could you elaborate on this? I've heard the term Neoreactionary thrown around, but I'm not exactly sure what it means.

Comment author: ChristianKl 30 October 2014 11:24:20PM 0 points [-]

Intelligent sociopaths generally don't go around telling people that they're sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of.

They usually won't say it in a way that they would predict will put other people on guard. On the other hand, that doesn't mean that they don't say it at all.

I can't find the link at the moment, but a while ago someone posted on LW that he shouldn't have trusted another person from a LW meetup who openly said those things and then acted like that.

Categorising Internet Tough Guys is hard. Base rates for psychopathy aren't that low, but you are right that not everyone who says those things is a psychopath. Even so, it's a signal not to give full trust to that person.

Comment author: shminux 30 October 2014 11:17:20PM 9 points [-]

What you are describing is an accidental Clippy, just like humans are accidental CO2 maximizers. Which is a fair point: if we meet what looks like an alien Clippy, we should not jump to the conclusion that paperclip maximizing is its terminal value.

Also, just to nitpick, if you have a lot of mass available, it would make sense to lump all this iron together and make a black hole, as you can extract a lot more energy from throwing stuff toward it than from the nuclear fusion proper. Or you can use fusion first, then throw the leftover iron bricks into the accreting furnace.

So the accidental Clippy would likely present as a black hole maximizer.

Comment author: NancyLebovitz 30 October 2014 10:59:15PM 0 points [-]

The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we're relatively safe. :-)

I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion and charisma field, but I'm really hoping we don't get tested on that one.

In response to comment by Capla on Academic papers
Comment author: Emile 30 October 2014 10:53:31PM 0 points [-]

(you also have "soundign" in your article)

Comment author: David_Gerard 30 October 2014 10:51:56PM -3 points [-]
Comment author: Gunnar_Zarncke 30 October 2014 10:47:46PM 0 points [-]

I notice that I'm becoming more attached to LessWrong (again). I habitually open the LW page when I start PC work and check for new posts and messages (and karma). The last time this happened I controlled it by placing 'minor inconveniences' around LW (a block in /etc/hosts). Do others notice this too? Is this normal in some way? What should I do about this?
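
(Concretely, the /etc/hosts block amounts to a couple of lines like the following, assuming a Unix-like system; deleting them restores access:)

    # Point the LW domain at the local machine so the page fails to load.
    127.0.0.1  lesswrong.com
    127.0.0.1  www.lesswrong.com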

Comment author: Nornagest 30 October 2014 10:40:22PM *  1 point [-]

If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn't have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.

Intelligent sociopaths generally don't go around telling people that they're sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they've generally been confused teenagers, Internet Tough Guys, and a few people who're just really bad at recognizing their own emotions -- who also aren't the best people to trust, granted, but for different reasons.

I'd be more worried about people who habitually underestimate the empathy of others and don't have obviously poor self-image or other issues to explain it. Most of the sociopaths I've met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.

Comment author: Viliam_Bur 30 October 2014 10:36:49PM *  1 point [-]

That's right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population.

(However, their potential ratio in rationalist community is probably greater than in general population, because our community already selects for high intelligence. So, if high intelligence would be the only additional factor -- which I don't know whether it's true or not -- it could again be 1-4% among the wannabe rationalists.)

Comment author: SilentCal 30 October 2014 10:31:41PM 0 points [-]

One answer is that using your intelligence to improve your own cognitive architecture is an entirely new field of investment. The economic growth that accrues from modern investing looks steady from inside the modern economy, but it's explosive from the perspective of a premodern society.

Comment author: ChristianKl 30 October 2014 10:31:04PM 0 points [-]

In the ideal world we could fully trust all people in our tribe to do nothing bad. Simply because we had known a person for years, we could trust that person to do good.

That's not a rational heuristic. Our world is not structured in a way where the amount of time we have known a person is a good heuristic for the amount of trust we can give that person.

There are a bunch of people I've met through personal development whom I trust very easily, because I know the heuristics that those people use.

If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn't have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.

But if you use that as a criterion for kicking people out, people won't be open about their own beliefs anymore.

In general, trusting people a lot when they tick half of the criteria for clinical psychopathy isn't a good idea.

On the other hand, LW is by default inclusive, and not structured in a way where it's a good idea to kick out people on such a basis.

Comment author: Nornagest 30 October 2014 10:29:12PM *  1 point [-]

I got my first real job by summing up all the volunteer work and major personal projects I'd ever done and putting them on my resume. It turns out that at least at the entry level, people don't actually much care if you've gotten paid for doing something before -- they just want to be able to verify that you know enough not to flail around wasting money for months or years while you learn the basics of process.

(I'm in tech.)

Comment author: TimMartin 30 October 2014 10:28:17PM 2 points [-]

I worked as a neuroscience research assistant for 5 years. For the latter 3 of those years, I had wanted to leave that job and move on to something better, but had been unable to make a decision about what to pursue and to actually pursue it.

7 months after my first CFAR workshop, I started a new job making 25% more. There were other causal factors. Part of the motivation to do job searching was due to the fact that my research position would be ending, and part of the salary increase was due to the fact that I left academia. But I also credit CFAR training, including the follow-ups and the support I got from the community, as a significant cause of this success.

Other semi-quantifiable changes:

  • I keep a budget now.
  • I'm investing money for retirement each month. I was not investing any before.
  • I've learned 1.5 new programming languages, and have learned several new statistical analysis methods (consider that I was doing almost nothing in terms of job-relevant skill development prior to CFAR).
  • I've started a biweekly productivity meeting at my apartment (before, I did not organize events other than the occasional party).

I've made many other changes in my life regarding habits, learning and practicing new things, and pushing the boundaries of my comfort zone. Perhaps the most important thing for me is that I no longer have the sense of being overwhelmed by life, or of there being large categories of things that I just can't do. I'd say this is mostly the result of a cascade of changes that occurred in my life due to attending CFAR. And to repeat what nbouscal said, I feel like I can change my life in ways that will both work and feel good.

Comment author: SilentCal 30 October 2014 10:22:01PM 0 points [-]

I found my first job using this tool http://mappedinny.com/. It's specific to tech jobs in New York City. I have no idea how anyone ever gets a job in other places/sectors.

Comment author: Nornagest 30 October 2014 10:21:22PM *  0 points [-]

I agree about all of that except for contrarianism (and yes, I'm aware of the irony). You want to have some amount of contrarianism in your ecosystem, because people sometimes aren't satisfied with the hivemind and they need a place to go when that happens. Sometimes they need solutions that work where the mainstream answers wouldn't, because they fall into a weird corner case or because they're invisible to the mainstream for some other reason. Sometimes they just want emotional support. And sometimes they want an argument, and there's a place for that too.

What you don't want is for the community's default response to be "find the soft bits of this statement, and then go after them like a pack of starving hyenas tearing into a pinata made entirely of ham". There need to be safe topics and safe stances, or people will just stop engaging -- no one's always in the mood for an argument.

On the other hand, too much agreeableness leads to another kind of failure mode -- and IMO a more sinister one.

Comment author: NancyLebovitz 30 October 2014 10:07:48PM *  5 points [-]

How Communities Work, and What Wrecks Them

One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities. The goal of discussion software shouldn't be to teach you how to click the reply button, and how to make bold text, but how to engage in civilized online discussion with other human beings without that discussion inevitably breaking down into the collective howling of wolves.

Behavior patterns that grind communities down: endless contrarianism, axe-grinding, persistent negativity, ranting, and grudges.

Comment author: tog 30 October 2014 10:02:11PM 0 points [-]

To answer the OP's question 2 in more depth, any of the charities recommended there would be excellent candidates: GiveDirectly and the two deworming charities (which you pick will depend on which is tax-deductible in your country). GiveDirectly is certainly transparent and efficient, in that it simply gives a large unconditional cash transfer to poor families, with around 90% 'efficiency' in the sense that 90% of your money ends up in their pocket. See this accessible introduction and this detailed analysis.

Comment author: tog 30 October 2014 09:58:17PM 0 points [-]

Upvoted - and, to clarify, by upvoting I mean that I'd be interested! You can also send them to me via my EA Profile contact form.

Comment author: Swimmer963 30 October 2014 09:30:56PM 0 points [-]

I think reduced inhibitions that come with tiredness might help here.

In response to comment by gjm on Academic papers
Comment author: Capla 30 October 2014 09:17:01PM 2 points [-]

Nope. I should care about the most basic signaling at least, and I've come to rely on those little red lines to tell me I've got a speelign error.

Fixed.

Comment author: dthunt 30 October 2014 08:51:26PM 0 points [-]

You can always shoot someone an email and ask about the financial aid thing, and plan a trip stateside around a workshop if, with financial aid, it looks doable, and if after talking to someone, it looks like the workshop would predictably have enough value that you should do it now rather than when you have more time and money.

Comment author: dthunt 30 October 2014 08:37:42PM 1 point [-]

Noticing confusion is the first skill I tried to train up last year, and is definitely a big one: knowing what your models predict and noticing when they fail is a very valuable feedback loop, and you can't learn from failures you don't even notice.

Picturing what sort of evidence would unconvince you of something you actively believe is a good exercise to pair with the exercise of picturing what sort of evidence would convince you of something that seems super unlikely. Noticing unfairness there is a big one.

Realizing when you are trying to "win" at truthfinding, which is... ugh.

Comment author: NancyLebovitz 30 October 2014 08:37:13PM 1 point [-]

I can believe that 1% - 4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don't have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.

Comment author: jnarx 30 October 2014 08:26:37PM 1 point [-]

I think this particular example doesn't really exemplify what I think you're trying to demonstrate here.

A simpler example would be:

You draw one ball out of a jar containing 99% red balls and 1% silver balls (randomly mixed).

The ball is silver. Is this surprising? Yes.

What if you instead draw a ball in a dark room, so you can't see the color of the ball (same probability distribution)? After drawing the ball, you are informed that the red balls contain a high explosive, and that if you had drawn a red ball from the jar it would have instantly exploded, killing you.

The lights go on. You see that you're holding a silver ball. Does this surprise you?
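
One way to sharpen the puzzle: the probability of the draw, and hence its information-theoretic surprisal, is identical in both cases; only the stakes changed. A quick sketch, assuming we quantify surprise as -log2 of the probability:

    import math

    p_silver = 0.01  # identical in both scenarios
    print(-math.log2(p_silver))  # ~6.64 bits of surprisal, lights on or off

So whatever extra feeling the second case produces, it isn't coming from the probabilities.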

Comment author: Lumifer 30 October 2014 08:10:43PM 3 points [-]

how do people notice them?

They pop up on top of the "recent comments" list. Enough people read LW comments via this list (there are actually two, one for Main and one for Discussion).

Comment author: iarwain1 30 October 2014 08:02:32PM *  1 point [-]

If I add a comment to an old Sequence article or respond to a comment on an old Sequence article, will my comment get noticed by anyone? I've seen comments added years later that did seem to get noticed, but how do people notice them?

Comment author: Lumifer 30 October 2014 07:49:21PM *  0 points [-]

Can you express what you want to protect against while tabooing words like "bad", "evil", and "abuse"?

Comment author: dthunt 30 October 2014 07:44:39PM *  1 point [-]

Not feeling connected with people, or, increasingly feeling less connection with people.

I actively socialize myself, and this helps, but the other thing maybe suggests to me I'm doing something wrong.

(Edit: to clarify, my empathy thingy works as well as (maybe better than) it ever has; I just feel like the things I crave from social interactions are getting harder to acquire. Like, people "getting" you, or having enough things in common that you can effectively talk about the stuff that interests you. So, like, obviously, one of the solutions there is to hang out with more bright-and-happy CFAR-ish/LW-ish/EA-ish people.)

In response to Academic papers
Comment author: Lumifer 30 October 2014 07:41:39PM *  2 points [-]

Is there any body of research of which you found the original papers much more valuable than the popularizations or secondary sources?

Medicine, in particular nutrition. My prior is that mass-media reporting is just nonsense on stilts, pretty much always, and you have to look at the original paper to see what the authors tried and what they found (which, often enough, is junk, anyway).

Comment author: Viliam_Bur 30 October 2014 07:36:19PM *  4 points [-]

Unfortunately, I don't feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don't think I have a solution. I just noticed a danger, and general unwillingness to debate it.

Probably the best thing I can do right now is to recommend good books on this topic. That would be:

  • The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
  • People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read

I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.

As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes greater and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I have invited there by visions of rationality and "winning". (And "something bad" offline can be much worse than mere systematic downvoting.) Especially if we would achieve some kind of power in real life, which is what I hope to do one day. I want to do something better than just bring a lot of enthusiastic people to one place and let fate decide.

I trust myself not to start a cult, and not to abuse others, but that itself is no reason for others to trust me; and also, someone else may replace me (rather easily, since I am not good at coalition politics); or someone may do evil things under my roof, without me even noticing. Having a community of highly intelligent people has the risk that the possible sociopaths, if they come, will likely also be highly intelligent.

So, I am thinking about what makes a community safe or unsafe. Because if the community grows large enough, sooner or later problems start happening. I would rather be prepared in advance. Trying to solve the problem ad hoc would probably totally seem like a personal animosity, or like joining one faction in an internal conflict.

Comment author: Qiaochu_Yuan 30 October 2014 07:28:13PM *  1 point [-]

Fair enough. In that case, after my first CFAR workshop I lost 15 pounds over the course of a few months (mostly through changes in my diet) and started sleeping better (harder to quantify, but I would estimate at least an effective hour's worth of extra sleep a night).

In response to Academic papers
Comment author: Toggle 30 October 2014 07:24:02PM 3 points [-]

More in the 'personally influential' than the 'essential for an informed person':

The Long Term Evolution Experiment is one of my very favorites in the biological and physical sciences. It's decades old by now, and has used that time to actually test the behavior of bacteria on true evolutionary timescales. They have seen multi-part ecosystems evolve from monocultures, and used the lab environment to 'roll back the tape' and see if evolutionary patterns can be made to jump through the same hoop twice.

Comment author: Michaelos 30 October 2014 07:13:47PM 0 points [-]

This article discusses a paper that seems interesting from the perspective of effective altruism, and of how people's behavior changes based on where they think their money might be going:

http://www.vox.com/2014/10/30/7131345/overhead-free-donations-charity-fundraising-seed-matching-gneezy

If you want a link directly to the paper, that link is both in the article and reposted here:

http://www.sciencemag.org/content/346/6209/632

Short summary: when considering donations, people in the study donated more when they knew their donation was not going to overhead.

In response to Academic papers
Comment author: Salemicus 30 October 2014 07:13:26PM *  4 points [-]

The more technical and abstruse a paper, and the less you are an expert in the area yourself, the more you should rely on secondary sources that may be able to present it in a more user-friendly way. There is, after all, no point reading the original if you can't truly understand it. However, some academic papers are written in a sufficiently comprehensible style that almost anyone can be enlightened by them.

Some that have had a particular effect on me:

The last is the most cited law review article of all time, in no small part because of its accessibility.
