
Comment author: Manfred 08 December 2016 11:30:26PM 1 point [-]

The worst part is that there's a lot the journal doesn't protect you from, no matter how reputable. Data shown in the paper can be assumed to be presented in the prettiest possible way for the specific data set they got, and interpretation of the data can be quite far off base and still get published if none of the reviewers happen to be experts on the particular methods or theories.

Comment author: Manfred 08 December 2016 11:24:30PM 1 point [-]

I would recommend just correctly estimating everything, really.

Comment author: Foo 08 December 2016 10:55:15PM *  0 points [-]

Hello Less Wrong!

My name is Bryan Faucher. I'm a 27-year-old from Edmonton (Canada) in the middle of the slow process of immigrating to Limerick (Ireland), where my wife has taken a contract with the University. I've been working in education for the past five years, but I'm looking to pursue a master's in mathematical modeling next year rather than attempting to fight for the right to work in a crowded industry as a non-citizen.

I've been aware of LW for something like six years, having been introduced by an old roommate's SO by way of HPMOR. In that time I've read through the sequences and a great deal of what I suppose could be called the "supplementary content" available on the site, but never found a reason to dive into the discussion. I don't remember exactly when I created this account, but it was nice to have it waiting for me when I needed it!

I'm joining in now because I was very much grabbed by Sarah Constantin's "A Return to Discussion". I've been a member of a mid-sized discussion forum for over a decade, where I now volunteer my time as an administrator. We've done OK - better than most - in terms of maintaining activity in the face of the web's movement away from forums and bulletin boards, but the tone of our conversations has certainly changed: in many ways sliding through the grooves which Sarah seems to be describing. My purview as admin includes the "serious" discussion section of the forum, and I feel I'm fighting a losing battle year over year to maintain "nerd space" in the face of cynical irony and the widespread fear of engagement.

I'm hoping to be inspired by the changes the LW community has set out to make. To learn from what goes right here, and, in some small way, to contribute to the effort, which I think is an important one. Intellectually, I don't have a hope in hell of keeping up with the local heavy hitters, but I can bring a lot of, ya know... grit.

Anyway, thanks for reading. I hope this was a fair place to post this. A new newbie thread seems to be wanting, unless I missed something, and I suppose if nothing else I can rack up enough karma in the next few days to create one. See you around!

Comment author: Vladimir_Nesov 08 December 2016 10:19:55PM 1 point [-]

Yes, it's worthwhile to spend quite a lot of time on choosing which textbooks to study seriously, at least a fraction of the time needed to study them, and to continuously reevaluate the choice as you go on.

Comment author: ciphergoth 08 December 2016 08:35:58PM 0 points [-]

I don't think the first problem is a big deal. No-one worries about "I boosted that from a Priority 3 to a Priority 1 bug".

Comment author: TheAncientGeek 08 December 2016 03:29:58PM 0 points [-]

Pjeby, the specifics of dissolving each dissolvable question are different.

How true. Oh, and there's no guarantee that any particular question is dissolvable ahead of dissolving it...

Comment author: TheAncientGeek 08 December 2016 03:23:31PM *  0 points [-]

Having to use a strange definition of qualia to explain your views may be evidence that you are actually a qualia sceptic, a possibility which you seem open to.

Comment author: TheAncientGeek 08 December 2016 03:04:43PM *  1 point [-]

Now suppose you tell a murderer, "It is necessary for you to stop killing people." He can simply say, "Necessary, is it?" and then kill you. Obviously it is not necessary, since he can do otherwise. So what did you mean by calling it necessary? You meant it was necessary for some hypothesis.

You are assuming that the only thing that counts as necessity per se is physical necessity, i.e. that there is no physical possibility of doing otherwise. But moral necessity is more naturally cashed out as the claim that there is no permissible state of affairs in which the murderer murders.

http://www.hsu.edu/academicforum/2000-2001/2000-1AFThe%20Logic%20of%20Morality.pdf
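(A minimal way to formalize that reading, in standard deontic-logic notation rather than anything specific to the linked paper: obligation as the absence of a permissible alternative,

    O(\lnot \text{murder}) \;\equiv\; \lnot P(\text{murder})

i.e. refraining from murder is obligatory exactly when no permissible state of affairs contains the murder.)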

In less abstract terms, what we are saying is that morality does not work like a common-or-garden in-order-to-achieve-X-do-Y, because you cannot excuse yourself, or obtain permissibility, simply by stating that you have some end in mind other than being moral. Even without logical necessity, morality has social obligatoriness, and that needs to be explained, and a vanilla account in terms of hypothetical necessities in order to achieve arbitrary ends cannot do that.

The reason that moral good means doing something good, is that the hypothesis that we always care about, is whether it would be good to do something.

If the moral good were just a rubber-stamp of approval for whatever we have in our utility functions, there would be no need for morality as a behaviour-shaping factor in human society. Morality is not "do what thou wilt".

That gives you a reason to say "it is necessary" without saying for what, because everyone wants to do something that would be good to do.

In some sense of "good", but, as usual, an unqualified "good" does not give you plausible morality.

Suppose you define moral goodness to be something else. Then it might turn out that it would be morally bad to do something that would be good to do, and morally good to do something that would be bad to do. But in that case, who would say that we ought to do the thing which is morally good, instead of the thing that would be good to do?

It's tautologous that we morally-should do what is morally-good.

Comment author: owencb 08 December 2016 02:45:30PM *  3 points [-]

I had mixed feelings towards this post, and I've been trying to process them.

On the positive side:

  • I think AI safety is important, and that collective epistemology is important for this, so I'm happy to know that there will be some attention going to this.
  • There may be synergies to doing some of this alongside more traditional rationality work in the same org.

On the negative side:

  • I think there is an important role for pursuing rationality qua rationality, and that this will be harder to do consistently under an umbrella with AI safety as an explicit aim. For example, one concern is that there will be even stronger pressure to accept community consensus that AI safety is important rather than getting people to think this through for themselves. Since I agree with you that the epistemology matters, this is concerning to me.
  • With a growing community, my first inclination would be that one could support both organisations, and that it would be better to have something new focus on epistemology-for-AI, while CFAR in a more traditional form continues to focus more directly on rationality (just as Open Phil split off from GiveWell rather than replacing the direction of GiveWell). I imagine you thought about this; hopefully you'll address it in one of the subsequent posts.
  • There is potential reputational damage by having these things too far linked. (Though also potential reputational benefits. I put this in "mild negative" for now.)

On the confused side:

  • I thought the post did an interesting job of saying more reasonable things than the implicature. In particular I thought it was extremely interesting that it didn't say that AI safety was a new focus. Then in the ETA you said "Even though our aim is explicitly AI Safety..."

I think framing matters a lot here. I'd feel much happier about a CFAR whose aim was developing and promoting individual and group rationality in general and particularly for important questions, one of whose projects was focusing on AI safety, than I do about a CFAR whose explicit focus is AI safety, even if the basket of activities they might pursue in the short term would look very similar. I wonder if you considered this?

Comment author: siIver 08 December 2016 01:19:36PM *  0 points [-]

There is one thing that confuses me about this post, which I haven't found addressed in any of the comments.

So the final program state is:

Configuration "A photon going toward A": (-1 + 0i)

Configuration "A photon going from A toward 1": (0 + -i)

Configuration "A photon going from A toward 2": (-1 + 0i)

Why does the bolded configuration ("A photon going toward A") still exist in the same way? Shouldn't it go back to zero once the photon has reached A, since the rest of the post seems to imply a temporal order of things?
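(A minimal sketch of the amplitude arithmetic being quoted, assuming the half-silvered-mirror rule from the original post: multiply by 1 on the straight-through path and by i on the deflected path. The mapping of paths to detectors here is chosen to match the amplitudes quoted above; it only illustrates where the numbers come from, not how the question gets resolved.)

    # Configurations as labels with complex amplitudes (Python sketch).
    initial_amplitude = -1 + 0j          # "A photon going toward A": (-1 + 0i)

    # Assumed half-silvered-mirror rule:
    # deflection multiplies the amplitude by i, passing straight through by 1.
    deflected = initial_amplitude * 1j   # (-1)*(i) = -i -> "from A toward 1": (0 + -i)
    straight = initial_amplitude * 1     # (-1)*(1) = -1 -> "from A toward 2": (-1 + 0i)

    print(deflected, straight)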

Comment author: Good_Burning_Plastic 08 December 2016 09:47:33AM 0 points [-]

Many academics focus on writing for academics, but many don't.

Giver "publish or perish", usually the latter won't stay in academia for long.

Comment author: AnnaSalamon 08 December 2016 09:23:00AM 1 point [-]

I think we'll have it all posted by Dec 18 or so, if you want to wait and see. My personal impression is that MIRI and CFAR are both very good buys this year and that best would be for each to receive some donation (collectively, not from each person); I expect the case for MIRI to be somewhat more straightforward, though.

I'd be happy to Skype/email with you or anyone re: the likely effects of donating to CFAR, especially after we get our posts up.

Comment author: calebwithers 08 December 2016 08:37:52AM 2 points [-]

I intend to donate to MIRI this year; do you anticipate that upcoming posts or other reasoning/resources might or should persuade people like myself to donate to CFAR instead?

Comment author: John_Maxwell_IV 08 December 2016 08:22:00AM *  0 points [-]

Academics write textbooks, popular books, and articles that are intended for a lay audience.

Nevertheless, I think it's great if LW users want to compile & present facts that are well understood. I just don't think we have a strong comparative advantage.

LW already has a reputation for exploring non-mainstream ideas. That attracts some and repels others. If we tried to sanitize ourselves, we probably would not get back the people who have been repulsed, and we might lose the interest of some of the people we've attracted.

Comment author: Jiro 08 December 2016 05:12:46AM 1 point [-]

Don't underestimate Wikipedia as a really good place to get a (usually) unbiased overview of things and links to more in-depth sources.

Don't overestimate it, either.

Comment author: Protagoras 08 December 2016 04:15:52AM 0 points [-]

It certainly becomes stranger when you drop a word. But either way, strangeness is rarely evidence of very much.

Comment author: entirelyuseless 08 December 2016 03:10:49AM *  0 points [-]

The point about the words is that it is easy to see from their origins that they are about hypothetical necessity. You NEED to do something. You MUST do it. You OUGHT to do it, that is you OWE it and you MUST pay your debt. All of that says that something has to happen, that is, that it is somehow necessary.

Now suppose you tell a murderer, "It is necessary for you to stop killing people." He can simply say, "Necessary, is it?" and then kill you. Obviously it is not necessary, since he can do otherwise. So what did you mean by calling it necessary? You meant it was necessary for some hypothesis.

I agree that some people disagree with this. They are not listening to themselves talk.

The reason that moral good means doing something good, is that the hypothesis that we always care about, is whether it would be good to do something. That gives you a reason to say "it is necessary" without saying for what, because everyone wants to do something that would be good to do.

Suppose you define moral goodness to be something else. Then it might turn out that it would be morally bad to do something that would be good to do, and morally good to do something that would be bad to do. But in that case, who would say that we ought to do the thing which is morally good, instead of the thing that would be good to do? They would say we should do the thing that would be good to do, again precisely because it is necessary, and therefore we MUST do the supposedly morally bad thing, in order to be doing something good to do.

Comment author: John_Maxwell_IV 08 December 2016 02:17:16AM *  1 point [-]

OK, so I told you the other day that I find you a difficult person to have discussions with. I think I might find your comments less frustrating if you made an effort to think of things I would say in response to your points, and then wrote in anticipation of those things. If you're interested in trying this, I converted all my responses using rot13 so you can try to guess what they will be before reading them.
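(If you want to check your guesses afterwards, here is a minimal Python sketch for decoding the rot13 passages below, using the standard-library codecs module; the sample string is just a placeholder.)

    import codecs

    # rot13 is a built-in text-transform codec in Python 3.
    hidden = "Uryyb, Jbeyq!"
    print(codecs.decode(hidden, "rot_13"))   # -> Hello, World!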

Oh yes. For example, Physical Review Letters is mostly interested in the former, while HuffPo -- in the latter.

UhssCb vf gelvat gb znkvzvmr nq erirahr ol jevgvat negvpyrf gung nccrny gb gur fbeg bs crbcyr jub pyvpx ba nqf. Gur rkvfgrapr bs pyvpxonvg gryyf hf onfvpnyyl abguvat nobhg ubj hfrshy vg jbhyq or sbe lbhe nirentr Yrff Jebatre gb fcraq zber gvzr trarengvat ulcbgurfrf. Vg'f na nethzrag ol nanybtl, naq gur nanybtl vf dhvgr ybbfr.

V jbhyq thrff Culfvpny Erivrj Yrggref cevbevgvmrf cncref gung unir vagrerfgvat naq abiry erfhygf bire cncref gung grfg naq pbasvez rkvfgvat gurbevrf va jnlf gung nera'g vagrerfgvat. Shegurezber, V fhfcrpg gung gur orfg culfvpvfgf gel gb qb erfrnepu gung'f vagrerfgvat, naq crre erivrj npgf nf n zber gehgu-sbphfrq svygre nsgrejneqf.

That's not true because you must also evaluate all these hypotheses and that's costly. For a trivial example, given a question X, would you find it easier to identify a correct hypothesis if I presented you with five candidates or with five million candidates?

Gur nafjre gb lbhe dhrfgvba vf gung V jbhyq cersre svir zvyyvba pnaqvqngrf. Vs svir ulcbgurfrf jrer nyy V unq gvzr gb rinyhngr, V pbhyq fvzcyl qvfpneq rirelguvat nsgre gur svefg svir.

Ohg ulcbgurfvf rinyhngvba unccraf va fgntrf. Gur vavgvny fgntr vf n onfvp cynhfvovyvgl purpx juvpu pna unccra va whfg n srj frpbaqf. Vs n ulcbgurfvf znxrf vg cnfg gung fgntr, lbh pna vairfg zber rssbeg va grfgvat vg. Jvgu n ynetre ahzore bs ulcbgurfrf, V pna or zber fryrpgvir nobhg juvpu barf tb gb gur evtbebhf grfgvat fgntr, naq erfgevpg vg gb ulcbgurfrf gung ner rvgure uvtuyl cynhfvoyr naq/be ulcbgurfvf gung jbhyq pnhfr zr gb hcqngr n ybg vs gurl jrer gehr.

Gurer frrzf gb or cerggl jvqrfcernq nterrzrag gung YJ ynpxf pbagrag. Jr qba'g frrz gb unir gur ceboyrz bs gbb znal vagrerfgvat ulcbgurfrf.

I would like to suggest attaching less self-worth and less status to ideas you throw out. Accept that it's fine that most of them will be shot down.

I don't like the kindergarten alternative: Oh, little Johnny said something stupid, like he usually does! He is such a creative child! Here is a gold star!

V pvgrq fbzrbar V pbafvqre na rkcreg ba gur gbcvp bs perngvivgl, Vfnnp Nfvzbi, ba gur fbeg bs raivebazrag gung ur guvaxf jbexf orfg sbe vg. Ner gurer ernfbaf jr fubhyq pbafvqre lbh zber xabjyrqtrnoyr guna Nfvzbi ba guvf gbcvp? (Qvq lbh gnxr gur gvzr gb ernq Nfvzbi'f rffnl?)

Urer'f nabgure rkcreg ba gur gbcvp bs perngvivgl: uggcf://ivzrb.pbz/89936101

V frr n ybg bs nterrzrag jvgu Nfvzbi urer. Lbhe xvaqretnegra nanybtl zvtug or zber ncg guna lbh ernyvmr--V guvax zbfg crbcyr ner ng gurve zbfg perngvir jura gurl ner srryvat cynlshy.

uggc://jjj.birepbzvatovnf.pbz/2016/11/zlcynl.ugzy

Lbh unir rvtugrra gubhfnaq xnezn ba Yrff Jebat. Naq lrg lbh unira'g fhozvggrq nalguvat ng nyy gb Qvfphffvba be Znva. Lbh'er abg gur bayl bar--gur infg znwbevgl bs Yrff Jebat hfref nibvq znxvat gbc-yriry fhozvffvbaf. Jul vf gung? Gurer vf jvqrfcernq nterrzrag gung YJ fhssref sebz n qrsvpvg bs pbagrag. V fhttrfg perngvat n srj gbc-yriry cbfgf lbhefrys orsber gnxvat lbhe bja bcvavba ba gurfr gbcvpf frevbhfyl.

Comment author: Vaniver 07 December 2016 11:35:04PM 0 points [-]

Thanks for sharing! I appreciate the feedback, but because it's important to distinguish between "the problem is that you are X" and "the problem is that you look like you are X," I think it's worth hashing out whether some points are true.

The sequences and list of top posts on LW are mostly about AI risk

Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding. (His views have since changed; the post itself is a document I think is worth reading in full.)

And the Sequences themselves are rarely if ever directly about AI risk; they're more often about the precursors to the AI risk arguments. If someone thinks that intelligence and morality are intrinsically linked, instead of telling them "no, they're different" it's easier to talk about what intelligence is in detail and talk about what morality is in detail and then they say "oh yeah, those are different." And if you're just curious about intelligence and morality, then you still end up with a crisper model than you started with!

which to me seems quite tangential to the attempt at modern rekindling of the Western tradition of rational thought

I think one of the reasons I consider the Sequences so successful as a work of philosophy is because it keeps coming back to the question of "do I understand this piece of mental machinery well enough to program it?", which is a live question mostly because one cares about AI. (Otherwise, one might pick other standards for whether or not a debate is settled, or how to judge various approaches to ideas.)

But I ask you to reconsider whether LW is actually the healthiest part of the rationalist community, or if the more general cause of "advancement of more rational discourse in public life" would be better served by something else (for example, a number of semi-related communities such as blogs, forums, and meat-space communities in academia). Not all rationalism needs to be LW-style rationalism.

I think everyone is agreed about the last bit; woe betide the movement that refuses to have friends and allies, insisting on only adherents.

For the first half, I think considering this involves becoming more precise about 'healthiest'. On the one hand, LW's reputation has a lot of black spots, and those basically can't be washed off, but on the other hand, it doesn't seem like reputation strength is the most important thing to optimize for. That is, having a place where people are expected to have a certain level of intellectual maturity that grows over time (as the number of things that are discovered and brought into the LW consensus grows) seems like the sort of thing that is very difficult to do with a number of semi-related communities.

Comment author: satt 07 December 2016 11:17:14PM *  1 point [-]

But academics write for other academics, and journalists don't and can't. (They've tried. They can't. Remember Vox?)

Would that be Vox, Vox, or Vox?

Edit, 5 minutes later: a bit more seriously, I'm not sure I'd agree that "academics write for other academics" holds as a strong generalization. Many academics focus on writing for academics, but many don't. I think the (relatively) low level of information flow from academia to general audiences is at least as much a demand-side phenomenon as a supply-side one.
