If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open Thread, May 18 - May 24, 2015

I'm looking for some "next book" recommendations on typography and graphically displaying quantitative data.

I want to present quantitative arguments and technical concepts in an attractive manner via the web. I'm an experienced web developer about to embark on a Masters in computational statistics, so the "technical" side is covered. I'm solid enough on this to be able to direct my own development and pick what to study next.

I'm less hot on the graphical/design side. As part of my stats-heavy undergrad degree, I've had what I presume to be a fairly standard "don't use 3D pie charts" intro to quantitative data visualisation. I'm also reasonably well-introduced to web design fundamentals (colour spaces, visual composition, page layouts, etc.). That's where I'm starting out from.

I've read Butterick's Practical Typography, which I found quite informative and interesting. I'd now like a second resource on typography, ideally geared towards web usage.

I've also read Edward Tufte's Visual Display of Quantitative Information, which was also quite informative, but felt a bit dated. I can see why it's considered a classic, but I'd like to read something on a simi... (read more)

Please post here if you learn a good answer elsewhere.

6MSwaffer
With your background in web development, have you read things like Krug's Don't Make Me Think and Williams' The Non-Designer's Design Book? These are focused more on the design aspect of the web; however, they contain some good underlying principles for data visualization as well. Tufte's books are all great for underlying principles even though, as you noted, they aren't focused on modern technologies. Beautiful Evidence from 2006 has some updated thoughts, but he still borrows heavily from his earlier books. For general multimedia concepts, Mayer's Multimedia Learning is good from a human learning perspective (my background). I found Data Points: Visualization That Means Something to be a good modern guide. From my perspective, I am glad you are looking down the road and recognizing that after the data are analyzed, the analysis must be communicated.
3sixes_and_sevens
This is all kinds of useful. Thanks! You can learn an astonishing amount about web development without ever having to think about how it'll look to another human being. In a professional context, I know enough to realise when I should hand it over to a specialist, but I won't always have that luxury.
4MSwaffer
You are definitely right in that we need to think about how it will look to another human being. If you are interested in pursuing this idea further, Don Norman has written a number of books about design in general. These are not about graphic design but just design thinking. The Psychology of Everyday Things is a classic and Emotional Design builds on the work of people like Antonio Damasio with regard to the role of emotion in cognition. Norman has another book called The Design of Everyday Things which I have not read but I imagine is a great read as well. All of these works emphasize the role of design in helping humans accomplish their goals. Some practitioners of data analytics view the output of prose, charts, tables and graphs as the final product. In most cases however the final product of a data analytics effort is a decision. That decision might be to do more research, to buy one company versus another or propose a new policy to Congress. Regardless of the nature of the decision, how well you design the output will have an impact on the quality of the decision made.
4sixes_and_sevens
I've read The Design of Everyday Things. You don't need to read The Psychology of..., as it's the same book, renamed for marketing reasons.
5palladias
My job (not at the WSJ!) gave me The Wall Street Journal Guide to Information Graphics: The Dos and Don'ts of Presenting Data, Facts, and Figures in my new hire bundle, and I love it!
5sixes_and_sevens
Do you love it to the tune of $20?
4palladias
Yeah, I'd say so.
5Douglas_Knight
Learn the library ggplot2. It is worth learning the language R just to use this library (though there is a port in progress for python/pandas). Even if you cannot incorporate the library into your workflow, its very good defaults show you what you should be doing with more work in other libraries. It is named after a book, the Grammar of Graphics, that I have not read.
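To illustrate the grammar-of-graphics style being recommended here, a minimal sketch in Python using plotnine (one port of ggplot2; assuming the package and its bundled example data are installed — it is an illustration, not necessarily the in-progress port mentioned above):

```python
from plotnine import ggplot, aes, geom_point, facet_wrap
from plotnine.data import mpg  # example dataset bundled with plotnine

# Grammar of graphics: a plot is data + aesthetic mappings + geometries.
# Scales, legends, and facet labels come from the defaults for free.
plot = (
    ggplot(mpg, aes(x="displ", y="hwy", color="class"))  # map columns to aesthetics
    + geom_point()                                       # render as a scatter plot
    + facet_wrap("~cyl")                                 # small multiples by cylinder count
)
plot.save("mpg.png")  # or evaluate `plot` in a notebook to render inline
```

The point about very good defaults shows up here: a faceted, legended, labeled plot falls out of three declarative lines.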
9Lumifer
I don't know if I'm that enthusiastic about ggplot2. It is certainly a competent library and it produces pretty plots. However, it has a pronounced "my way or the highway" streak which sometimes gets in the way. I like nice defaults; I don't like it when a library enforces its opinions on me (see e.g. this, noting that Hadley is the ggplot2 author).
3sixes_and_sevens
I've dabbled with ggplot, but I've put it on hold for the immediate future in lieu of getting to grips with D3. I'll be getting all the R I can handle next year. I did not know about the book, but it's available to view from various sources. If I get time I'll give it a look-in and report back.
0Adam Zerner
You may be interested in some of Bret Victor's stuff. I too am a web developer looking to learn more about design. And I too have read Butterick's Practical Typography, Don't Make Me Think, and Visual Display of Quantitative Information, as well as a few other classics. But I don't think it's made me much better at design. I sense that there are a few "roadblocks", i.e. things I don't know that are preventing me from actually applying the things I learned in reading those books. Any thoughts on this?

Every so often in the EA community, someone will ask what EA volunteer activities one can do in one's spare time in lieu of earning to give. Brian Tomasik makes an interesting case for reading social science papers and contributing what you learn to Wikipedia.

7Ishaan
On the topic of popularization, I think the ratio of idealistic people interested in alleviating global poverty to people who are aware of the concept of meta-charities that determine the optimal way to do so is shockingly low. That seems like one of those "low-hanging fruits": dropping it into casual conversations, mentioning it in high-visibility comment threads, and so on. There's really no excuse for Kony to be better known than GiveWell.
6Lumifer
People actually interested in alleviating global poverty, or people who are interested in signaling to themselves and their social circle that they are caring and have appropriate attitudes? By the way, saving lives (which GiveWell focuses on) and "alleviating global poverty" are two very different goals.
7ChristianKl
I don't think it's fair to say that GiveWell only focuses on lives saved. Their reports about charities are long. It's just that they focus on the number of lives saved when they boil the justification down to short paragraphs.
1Ishaan
Frankly, who cares? If someone wants to signal, then fine, we can work with that. Life saving is an archetypal signal of heroism. Start a trend of wearing necklaces with one bead for each life you saved, to remind everyone of the significance of each life and to remind you that you've given back to this world. That would be pretty badass; I'd wear it. Imagine you feel sad, then look down and remember you've added more QALYs to this world than your entire natural lifespan, that you've added centuries of smiles. Perhaps too blatant a boast for most people's tastes? Point is, even if it was all signalling, you could boast more if you knew how to get QALYs efficiently. ("I saved 2 lives" sounds way better than "I spent $10,000".)
2Lumifer
If people are actually interested in signaling to their social circle, they will ignore geeky GiveWell and do a charity walk for a local (for-profit) hospital instead. I would consider anyone who would do this (based on the dollar amount of donation) to be terribly pretentious and, frankly, silly.
3Ishaan
I do have a parallel thought process which finds it pretentious, but I ignore it because it also said that the ice bucket challenge was pretentious. And the ice bucket challenge was extremely effective. I think the dislike is just contrarian signalling, and is why our kind can't cooperate. That, or some kind of egalitarian instinct against boasting. Isn't "pretentious" just a negative way to say "signalling"? Of course that particular idea might not be effective signalling, but abstractly, the idea is that EA is well suited for signalling, so why isn't it used that way? I'd also see value in supporting a local hospital. Local community strengthening and good feelings is its own thing with its own benefits, and there's a special value in the aid coming from local people who know what's what, as a natural extension of the idea that aid is better coming from parents to children than from a distant government to children. I'm talking about the global poverty crowd here.
4Lumifer
That I find something pretentious is my moral/aesthetic judgement. Evaluating the effectiveness of dark arts techniques is an entirely different question. Speaking of signaling, pretentiousness means you tried to signal and failed.
0Ishaan
Why is it dark? Doesn't it have to be a drawback in order to be dark? (agreed about pretentiousness=signal failure)
0Lumifer
It's dark because it's manipulation. You are pushing buttons in other people's minds to achieve a certain outcome.
0Ishaan
All interactions involving people involve pushing buttons for outcomes. Manipulation, in the negative-connotation sense, is when you do it in ways that people would not approve of if they realized exactly what you were doing. The ice bucket challenge, for example, does exactly what it says on the tin: raise awareness, raise money, have social activity.
0Lumifer
I disagree.
0OrphanWilde
All actions have a drawback, in at least the form of opportunity costs.
0ChristianKl
It's signaling more status than the people around you want to give you.
0NancyLebovitz
"Pretentious" might be signalling of high status [1]that's irritating to receive, which leads to a large new topic. When is signalling fun vs. not fun? Is it just a matter of what's a positive signal in the recipient's group? [1] Signalling about sports teams isn't pretentious, even when it's annoying. I don't think there's a word for the annoyingness of middle-to-low status signaling. "Vulgar" covers some cases, but not most of them.
0[anonymous]
Why?
3Lumifer
I do not accept that a dollar is a unit of caring. I do not think that contributing money to an organization which runs programs which statistically save lives can be legitimately called "I saved X lives". Compare: "I bought some war bonds so I can say I personally killed X enemy soldiers". I think that strutting one's charitable activities is in very poor taste.
3jefftk
What would you use "I saved X lives" to mean if not "compared to what I would have done otherwise, X more people are alive today"? (I don't at all like the implied precision in giving a specific number, though.)
0Lumifer
There are two issues here. One is tracking of individual contributions. When a charity says "A $5000 donation saves one life" they do not mean that your particular $5000 will save one specific life. Instead they divide their budget of $Z by their estimate of Y lives saved and produce a dollars/life number. This is an average and doesn't have much to do with you personally, other than that you were one data point in the set from which this average was calculated. "I contributed to the common effort which resulted in preventing Y deaths from malaria" is a more precise formulation which, of course, doesn't sound as good as "I saved X lives". Two is the length of the causal chain. If you, with your own hands, pull a drowning kid out of the water, that's one life saved with a causal chain of length 1. If you give money to an organization which finances another organization which provides certain goods for a third organization to distribute with the help of a bunch of other organizations, the causal chain is long, and the longer it gets, the fuzzier it gets. As always, look at incentives. Charity fundraising is effectively advertising with greater social latitude to use emotional manipulation. One strand in that manipulation is to make the donor feel a direct emotional connection, with "direct" being the key word. That's why you have "Your donation saves lives!" copy next to a photo of an undernourished black or brown kid (preferably a girl) looking at the camera with puppy eyes.
0jefftk
If someone is saying "I saved 10 lives" because they gave $500 to a charity that advertises a cost per life saved of $50, then yes, that's very different from actually saving lives. But the problem is that charities' reports of their cost effectiveness are ridiculously exaggerated, and you just shouldn't trust anything they say. What we want are marginal costs, not average costs, and these are what organizations like GiveWell try to estimate. Yes, this is real. But we're ok with assigning credit along longish causal chains in many domains; why exclude charity?
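To make the average-vs-marginal distinction concrete, a toy calculation (all numbers hypothetical, not real charity figures):

```python
# Average cost: the charity's whole budget divided by estimated lives saved.
budget = 1_000_000   # total program budget ($Z)
lives = 200          # estimated lives saved (Y)
avg_cost = budget / lives
print(avg_cost)      # 5000.0 -> the advertised "$5000 saves a life"

# Marginal cost: what the *next* dollar buys. If the cheapest opportunities
# are already funded, the next life costs more than the average suggests.
marginal_cost = 8_000   # assumed, for illustration only
donation = 10_000
print(donation / avg_cost)       # 2.0  lives credited by the average-cost story
print(donation / marginal_cost)  # 1.25 lives actually added at the margin
```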
0Lumifer
Oh, trust me, I don't :-D The problem with marginal costs is that they are conditional. For example, the marginal benefit of your $1000 contribution depends on whether someone made a $1m contribution around the same time. I don't know about that; I'm wary of assigning credit "along longish causal chains", and charity is not an exception for me.
0Ishaan
It's not intended as a unit of caring; it's a unit of achievement, a display of power, focused on outcomes. Consequences over virtue ethics, utils over fuzzies. Don't get me wrong, I do see the ugliness in it. I too have deeply held prejudices against materialism and vanity, and the whole thing bites against the egalitarian instinct by giving even more status to the wealthy. But helping people is something worthy of pride, unlike the Mercedes or thousand-dollar suits or flashy diamonds and similar trifles people use for the same purpose. My point is, you said they were signalling. I'm not approving of signalling so much as saying: why not signal productively, in a manner that actually does what you've signaled to do?
0Lumifer
Some people think otherwise. How about buying status signals with the minor side-effect of helping people? Of course they do. "So much money, so little taste" is a common attitude. "Unnecessarily large houses" are known as McMansions in the US.
0[anonymous]
Beware, envy lives here. Cloaked in the robes of social decency, he whispers: “Imposters, all of them. They don’t deserve praise…you do.”
0Lumifer
Huh?
0[anonymous]
If I were you, I would consider the possibility that I am envious of those who signal and receive praise, and that I am rationalizing my feelings by claiming to uphold the social standard of "good taste".
1Lumifer
That seems unlikely. First, even after introspection I don't have envious feelings towards such people which is probably because in my social circle ostentatious displays of kinda-virtue usually lead not to praise but to slight awkwardness. Second, this is consistent with my general taste in other things and looks to be a pretty ancient attitude :-)
0John_Maxwell
Agree. (The EA community is already very well aware of "spreading EA" as a valuable volunteer activity, but I'd seen less discussion of Tomasik's proposal.)
4ChristianKl
I agree that adding content to Wikipedia is worthwhile. In addition to Wikipedia I think that StackExchange pages can often be very worthwhile. Often when I come across an interesting claim on the internet where I don't know whether it's true, I post it on Skeptics.StackExchange or a subject specific site in the StackExchange network.
130Shmi

What changes would LW require to make itself attractive again to the major contributors who left and now have their own blogs?

As I often say, I haven't been here long, but I notice a sort of political-esque conflict between empirical clusters of people that I privately refer to as the Nice People and the Forthright People. The Nice People think that being nice is pragmatic. The Forthright People think that too much niceness decreases the signal-to-noise ratio and also that there's a slippery slope towards vacuous niceness that no longer serves its former pragmatic functions. A lot of it has to do with personality. Not everyone fits neatly, and there are Moderate People, but many fit pretty well.

I also notice policy preferences among these groups. The Nice don't mind discussion of object-level things that people have been drawn towards as the result of purportedly rational thinking and deciding. The Forthright often prefer technical topics and more meta-level discussion of how to be rational, and many harken back to the Golden Age when LW was, as far as I can tell, basically a way to crowdsource hyperintelligent nerds (in the non-disparaging sense) to work past inadequate mainstream decision theories, and also to do cognitive-scientific philosophizing as opposed to the ceiling-gazing sort. The Nice think t... (read more)

3NancyLebovitz
That's an interesting distinction, but I think the worst problem at LW is just that people rarely think of interesting things to write about. I don't know whether all the low-hanging fruit has been gathered, or if we should be thinking about ways to find good topics. Scott Alexander seems to manage.
100[anonymous]

whether all the low-hanging fruit has been gathered

Still, there is the issue that it is a format of publishing sorted by publishing date. It is not like a library, where it is just as easy to find a book published 5 years ago as one published yesterday, because they are sorted by topic or the author's name or something. The Sequences and the wiki help with this; still, a timeless view of the whole thing would be IMHO highly useful. A good post should not be "buried" just because it is 4 years old.

3NancyLebovitz
There's a tremendous amount of material on LW. Do you have ideas about how to identify good posts and make them easier to find? I can think of a few solutions, but they might just converge on a few posts. Have a regular favorite-posts thread. Alternatively, encourage people to look at high-karma older posts.
3Vaniver
Actually, we could probably use off-the-shelf (literally) product recommendation software. The DB knows what posts people have upvoted and downvoted, and which posts they haven't looked at yet (in order to get the "new since last visit" colored comment border).
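A toy sketch of the idea: user-based collaborative filtering over a vote matrix. The data and schema below are made up, and real off-the-shelf recommendation software would be considerably more robust; this just shows the shape of the data the LW DB already has.

```python
import numpy as np

# Toy vote matrix: rows = users, columns = posts.
# +1 = upvote, -1 = downvote, 0 = not seen yet.
votes = np.array([
    [ 1,  1,  0, -1],
    [ 1,  0,  1, -1],
    [-1,  1,  1,  0],
])

def cosine_sim(a, b):
    """Cosine similarity between two users' vote vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user, votes):
    """Rank unseen posts by similarity-weighted votes of other users."""
    scores = {}
    for post in range(votes.shape[1]):
        if votes[user, post] == 0:  # only posts the user hasn't voted on
            scores[post] = sum(
                cosine_sim(votes[user], votes[other]) * votes[other, post]
                for other in range(votes.shape[0]) if other != user
            )
    return sorted(scores, key=scores.get, reverse=True)

print(recommend(0, votes))  # [2]: post 2 is upvoted by the most similar user
```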
2Gram_Stone
That's the thing though. My hypothesis is that the 'people who seem to manage' have left because the site is a lukewarm compromise between the two extremes that they might prefer it to be. Thus, subreddits. Like, what would a Class Project to make good contributors on LW look like? Does that sound feasible to you? Oh man, I'm arguing that blogging ability is innate.
1Vaniver
Obviously there's an innate portion to blogging ability. We can still manipulate the environmental portion.
1Gram_Stone
I hope I didn't come off like I'm going to automatically shoot all suggestions to reinvigorate LW out of the sky. That's most of the problem with the userbase! I genuinely wonder what such a Class Project would look like, and would also be willing to participate if I am able. Since my comment was written in the context of Nancy_Lebovitz's comment, I'm specifically curious about how one would go about molding current members into high-quality contributors. I see a lot of stuff above about finding ways to make the user experience more palatable, but that in itself doesn't seem to ensure the sort of change that I think most people want to see.
1Shmi
I don't believe I was against subreddits, just against the two virtually useless ones we have currently. Certainly subreddits work OK on, well, Reddit. Maybe a bit of a segmentation with different topics and different moderation rules is a good idea, but there is no budget for this, as far as I know, and there is little interest from those still nominally in charge. In fact, I am not sure why Trike doesn't just pull the plug. It costs them money, there are no ads or any other revenue, I am guessing.

In my view, you're asking the wrong question. The major contributors are doing great; they have attracted their own audiences. A better question might be: how can LW grow promising new posters into future major contributors (who may later migrate off the platform)?

I had some ideas that don't require changing the LW source that I'll now create polls for:

Should Less Wrong encourage readers to write appreciative private messages for posts that they like?

[pollid:976]

Should we add something to the FAQ about how having people tear your ideas apart is normal and expected behavior and not necessarily a sign that you're doing anything wrong?

[pollid:977]

Should we add something to the FAQ encouraging people to use smiley faces when they write critical comments? (Smiley faces take up very little space, so don't affect the signal-to-noise-ratio much, and help reinforce the idea that criticism is normal and expected. The FAQ could explain this.)

[pollid:978]

We could start testing these ideas informally ASAP, make a FAQ change if polls are bullish on the ideas, and then announce them more broadly in a Discussion post if they seem to be working well. To keep track of how the ideas seem to be working out, people could post their experiences with them in this subthread.

Should we add something to the FAQ

Does anyone read the FAQ? Specifically, do the newbies look at the FAQ while being in the state of newbiedom?

5Vaniver
At least some do. In general, we could improve the onboarding experience of LW.
5Lumifer
"Hello, I see you found LW. Here is your welcome package which consists of a first-aid trauma kit, a consent form for amputations, and a coupon for a PTSD therapy session..." X-)
4Error
...and a box of paperclips.
3Lumifer
...please don't use it to tease resident AIs, it's likely to end very very badly...
0John_Maxwell
What concrete actions could we take to improve the onboarding experience?
0Vaniver
I imagine there are UI design best practices, like watching new users try out the site, that could be followed. A similarly serious approach I've seen is having a designated "help out the newbie" role, either as someone people are encouraged to approach or specifically pairing mentees with mentors. Both of those probably cost more than they deliver. A more reasonable approach would be having two home pages: one for logged-in users that probably links to /r/all/new (or the list version), and one for new users that explains more about LW, and maybe has a flowchart about where to start reading based on interests.
0John_Maxwell
So the homepage already explains some stuff about LW. What do you think is missing? I'd guess we can get 80% of the value of a flowchart with some kind of bulleted question/recommendation list like the one at http://lesswrong.com/about/ Maybe each bullet should link to more posts though? Or recommend an entire sequence/tag/wiki page/something else? And the bullets could be better chosen?
3[anonymous]
...yes.
1John_Maxwell
It's linked to from the About page. Scroll to the bottom and you can see it has over 40,000 views: http://wiki.lesswrong.com/wiki/FAQ But it's not among the top 10 most viewed pages on the LW wiki: http://wiki.lesswrong.com/wiki/Special:Statistics So it seems as though the FAQ is not super discoverable. It looks like the About page has been in approximately its current form since September 2012, including the placement of the FAQ link. For users who have discovered LW since September 2012, how have you interacted with the FAQ? [pollid:981] If you spent time reading it, did you find it useful? [pollid:982] Should we increase its prominence by linking to it from the home page too? [pollid:983]
1[anonymous]
I went directly to the sequences, not sure why. Probably the sheer size of the list of contents was kind of intimidating.
0John_Maxwell
"the sheer size of the list of contents" - hm? What are you referring to?
0[anonymous]
The FAQ
0John_Maxwell
I figure an exhaustive FAQ isn't that bad, since it's indexed by question... you don't have to read all the questions, just the ones you're interested in.
0[anonymous]
No, it is not bad at all. But it does what it says on the tin: answers questions. When starting with LW from zero, there are no questions yet, or not many; it's more like exploration.
7Sarunas
While appreciative messages are (I imagine) pleasant to get, I don't think they are the highest form of praise that a poster can get. I imagine that if I wrote a LW post, the highest form of praise to me would be comments that take the ideas expressed in the post (provided they are actually interesting) and develop them further, perhaps create new ideas that would build upon them. I imagine that seeing other people synthesizing their ideas with your ideas would be perhaps the best praise a poster could get. While comments that nitpick the edge cases of the ideas expressed in a post obviously have their value, often they barely touch the main thesis of the post. An author might find it annoying having to respond to people who mostly nitpick his/her offhand remarks, instead of engaging with the main ideas of the post, which the author finds the most interesting (that's why he/she wrote it). The situation where you write a comment and somehow your offhand remark becomes the main target of responses (whereas nobody comments on the main idea you've tried to convey) is quite common. I am not saying that we should discourage people from commenting on remarks that are not central to the post or comment. I am trying to say that arguing about the main thesis is probably much more pleasant than arguing about offhand remarks, and, as I have said before, seeing other people take your ideas and develop them further is even more pleasant. Of course, only if those ideas are actually any good. That said, even if the idea is flawed, perhaps there is a grain of truth that can be salvaged? For example, maybe the idea works under some kind of very specific conditions? I think that most people would be more likely to post if they knew that even when commenters discovered flaws in their ideas, the same commenters would be willing to help identify whether something can be done to fix those flaws. (This comment only covers LW posts and comments where posters present their own ideas.)
5[anonymous]
Maybe it would be a good thing for the site if people were encouraged to write critical reviews of something in their fields, the way SSC does? It has been mentioned that criticizing is easier than creating.
6John_Maxwell
Sounds like a good idea. Do it!
8[anonymous]
I do have something specific in mind (about how plant physiology is often divorced from population research), but I am in a minority here, so it might be more interesting for most people to read about other stuff.
3John_Maxwell
You mean you are studying a field most LWers are unfamiliar with? Well that means we can learn more from your post, right? ;) If people don't find it interesting they won't read it. Little harm done. Polls indicate that LWers want to see more content, and I think you're displaying the exact sort of self-effacing attitude that is holding us back :) I'm not guaranteeing that people will vote up your post or anything, but the entire point of the voting system is to help people find good content and ignore bad content. So upvoted posts are more valuable than downvoted posts are harmful.
0faul_sname
I, for one, would be interested in such a post.
3[anonymous]
Thank you, I will do it ASAP; I'm just a bit rushed by my PhD schedule and some other work that can be done only in summer. Do you have similar observations? It would be great to compile them into a post, because my own experience is based more on literature and less on personal communication, for personal reasons.
1faul_sname
I really don't have any similar observations, since I mostly focused on biochem and computational bio in school. I'm actually not entirely sure what details you're thinking of -- I'm imagining something like the influence of selective pressure from other members of the same species, which could cover things like how redwoods are so tall because other redwoods block out light below the canopy. On the other hand, insight into the dynamics of population biologists and those studying plant physiology would also be interesting. According to the 2014 survey we have about 30 biologists on here, and there are considerably more people here who take an interest in such things. Go ahead and post -- the community might say they want less of it, but I'd bet at 4:1 odds that the community will be receptive.
2[anonymous]
...you know, this is actually odd. I would expect ten biologists to take over a free discussion board. Where are those people?
1[anonymous]
No, I meant rather what between-different-fields-of-biology observations you might have. It doesn't matter what you study, specifically. It's more like a "but why did those biochemists study the impact of gall on probiotics for a whole fortnight of cultivation, if every physiologist knows that the probiotic pill cannot possibly be stuck in the GI tract for so long?" thing. Have you encountered this before?
0faul_sname
I can come up with a few examples that, in retrospect, seemed obviously doomed not to work, mostly having to do with gene insertion using A. tumefaciens, but none that I honestly predicted before I learned that they didn't work. Generally, the biological research at my institution seemed to be pretty practical, if boring. On the other hand, I was an undergrad, so there may have been obvious mistakes I missed; that's part of what I'd be interested in learning.
0[anonymous]
Oh, I really can't tell you much about that :) In my field, it's much more basic. Somehow, even though everyone knows that young ferns exist because adult ferns reproduce, there are very few studies that incorporate adult ferns into young ferns' most crucial life choices (like what to produce: sperm or eggs). I have no idea why this is so, beyond simple laboratory convenience. It is not even a mistake; it's a complete orthogonality of study approaches.
1NancyLebovitz
I don't recommend smiley faces; I don't think they add much. I do recommend that people be explicit if they like something about a post or comment.
1Vaniver
Hmm. I typically see emoticons as tied to emotion, and am unsurprised to see that women use them more than men. While a LW that used emoticons well might be a warmer and more pleasant place, I'm worried about an uncanny valley.
5Jiro
Putting smiley faces on critical comments is likely to encourage putting smiley faces on anything that may be perceived as negative, which in turn will lead people to put smiley faces on actual hostility. Putting a smiley face on hostility just turns it into slightly more passive aggressive hostility (how dare you react to this as if it's hostile, see, I put a smiley face on) and should be discouraged. I also worry that if we start putting smiley faces on critical comments, we'll get to the point where it's expected and someone whose comments are perceived as hostile will be told "it's your own fault--you should have put a smiley face on".
0estimator
I believe that most LWers have some STEM background, so they are already familiar with this level of criticism; therefore criticism-is-normal disclaimers aren't necessary. Am I wrong? :) Positive reinforcement is a thing. But how are you going to efficiently encourage readers to do that? :) Also, we have the karma system, which (partially?) solves the feedback problem.
5John_Maxwell
Possibly, given that lukeprog, Eliezer, and Yvain have all complained that writing LW posts is not very rewarding. Reframing criticism might do a bit to mitigate this effect on the margin :) One of the things that strikes me as interesting about reading Eliezer's old sequence posts is the positive comments that were heaped on him in the absence of a karma system. I imagine these were important in motivating him to write one post a day for several years straight. Nowadays we consider such comments low-signal and tell people to upvote instead. But getting upvotes is not as rewarding as getting appreciative comments, in my view. I imagine that 10 verbal compliments would do much more for me than 10 upvotes. In terms of encouraging readers... like I said, put it in the FAQ and announce it in a discussion post. Every time someone sends me an encouraging PM, I get reminded to send others encouraging PMs when I like their work.

I recently wrote this, which would probably have been of interest to LW. But when I considered submitting it, my brain objected that someone would make a comment like "you shouldn't have picked a name that already redirects to something else on wikipedia", and... I just didn't feel like bothering with that kind of trivia. (I know I'm allowed to ignore comments like that, but I still didn't feel like bothering.)

I don't know if that was fair or accurate of my brain, but Scott has also said that the comments on LW discourage him from posting, so it seems relevant to bring up.

The HN comments, and the comments on the post itself, weren't all interesting, but they weren't that particular kind of boring.

19eB1
One of those HN comments made me realize that you'd perfectly described a business situation that I'd just been in (a B2B integration, where the counterparty defected, scuttling the deal), so they were interesting to me. Maybe this argues that you should have included more examples, but it's unlikely it would have sparked that thought except that it was the perfect example.
-4Halfwitz
I doubt there's much to be done. I wouldn't be surprised if MIRI shut down LessWrong soon. It's something of a status drain because of the whole Roko thing, and no one seems to use it anymore. Even the open threads seem to be losing steam. We still get most of the former value from SlateStarCodex, Gwern.net, and the tumblr scene. Even for rationality, I'm not sure LessWrong is needed now that we have CFAR.

I don't think a shutdown is even remotely likely. LW is still the Schelling point for rationalist discussion; Roko-gate will follow us regardless; SSC/Gwern.net are personal blogs with discussion sections that are respectively unusable and nonexistent. CFAR is still an IRL thing, and almost all of MIRI/CFAR's fans have come from the internet.

Agreed that LW is slowly losing steam, though. Not sure what should be done about it.

Agreed that LW is slowly losing steam, though. Not sure what should be done about it.

To have a website with content like the original Sequences, we need someone who (a) can produce enough great content, and (b) believes that producing content for a website is the best use of their time.

It already sounds like a paradox: the more rational and awesome a person is, the more likely it is that they can use their time much better than writing a blog.

Well, unless they use the blog to sell something...

I think Eliezer wrote the original Sequences pretty much to find people to cooperate with him at MIRI, and to make people more sympathetic and willing to send money to MIRI. Mission accomplished.

What would be the next mission (for someone else) which could be accomplished by writing interesting articles to LW?

3Cariyaga
If Less Wrong is, indeed, losing steam as a community (I wouldn't have considered myself part of it until recently, and hadn't kept up with it before then), there are options to deal with it. First, we could create enjoyable media to be enjoyed by large quantities of people, with rationalistic principles, and link back to Less Wrong in it. HPMOR is already a thing, and certainly does well for its purpose of introducing people to and giving some basic instruction in applied rationality. However, as it's over, the flow of people from the readership it generated has ceased. Other media is a possibility. If people are interested in supporting Less Wrong and CFAR specifically, there could perhaps be a YouTube channel made for it, maybe streaming live discussions and taking questions from the audience. Non-video means are also, obviously, possible. Webcomics are somewhat niche, but could drive readership if a high-quality one was made. I'm loath to suggest getting already-established content creators to read and support Less Wrong, partially because of my own reticence in such, and partially because of a host of problems that would come with that, as our community is somewhat insular, and though welcoming in our own way, Less Wrong often comes off to people as arrogant or elitist. On that note, while I would not suggest lowering our standards for discourse, I think that in appealing to a larger community it's necessary to realize that newer members of the community may not have the background necessary to take constructively the criticisms given. I'm not sure how to resolve this problem. Being told to "go and read such and such, then you'll understand" comes off rudely. Perhaps some form of community primer link on the front page, regarding customs here? The about page is a little cluttered and not entirely helpful. That, in addition to a marker next to someone's name indicating they're new to Less Wrong, could do a lot to help. Furthermore, a section for the "younger"
2[anonymous]
Your attitude to informational videos is: [pollid:979]
2ChristianKl
There's some research that suggests that videos that actually help people to learn aren't pleasant to watch. http://chronicle.com/article/Confuse-Students-to-Help-Them/148385/ If the student feels confused by the video they are more likely to actually update. The kind of informational videos that are popular aren't useful for learning and vice versa.
0Cariyaga
I voted other. The reason I suggested nontextual formats is because I don't believe that rationality can be taught solely through text, even if I personally prefer to learn that way. I have a couple of friends who do not learn well at all in such a manner, but I believe that both of them would learn much more effectively from a video; I suspect this extends to others, for whom the text-dump nature of this site might be intimidating.
1John_Maxwell
I'm not sure about webcomics or Youtube videos. LW is full of essays on abstract philosophical topics; if you don't like reading, you're probably not going to get much out of it. I think the biggest ways for people to help LW are:

* Write quality posts. There are a bunch of suggestions in this FAQ question.
* Share Less Wrong posts with thoughtful people who will find them interesting. Think Facebook friends, your favorite subreddit, etc. Ideally people who are even smarter than you are.

Improving the about page is also high-leverage. I encourage you to suggest concrete changes or simply ignore the existing one and write an alternative about page from scratch so we can take the best ideas from each.
0Cariyaga
Certainly, writing high-quality posts is essential for improving on what we already do well, but as I mentioned in a reply above, not everyone learns best -- or at all effectively -- that way. To be clear, I'm not suggesting we do any less of that, but I think that we may be limiting ourselves somewhat by producing only that style of content. I think that we would be able to get more people interested in Less Wrong by producing non-textual content as well. I will note, however, that when I suggested webcomics, I wasn't specifically intending a webcomic about Less Wrong (although one about biases in general could work quite well!) so much as one written by someone from Less Wrong, with a rationalist bent, to get people interested in it. Although, admittedly, going at it with that goal in mind may produce less effective content. Regarding improving the about page, the main thing that jumped out at me is that there seem to be far too many hyperlinks. My view of the About page is that it should be for someone just coming into Less Wrong, from some link out there on the net, with no clue what it is. Therefore, there should be fewer examples in the form of a list of links, and more explanation as to what Less Wrong's function is, and what its community is like.
3John_Maxwell
If someone wants to create a rationalist webcomic, Youtube channel, etc. I'm all for that. I did the current About page. I put in a lot of links because I remembered someone saying that it seems like people tend to get in to Less Wrong when they read a particular article that really resonates with them, so I figured I would put in lots of links so that people might find one that would resonate. Also, when I come across a new blog that seems interesting, I often look over a bunch of posts trying to find the gems, and providing lots of links seems like it would facilitate this behavior. What important info about LW's function/community would you like to see on the about page?
-2SanguineEmpiricist
Part of the reason it is losing steam is that there is a small number of posters who post wayyyy too much, using up everyone's time, and they hardly contribute anything. Too many contrarians. We have a lot of regular haters who could use some toning down.

It's true that Less Wrong has a reputation for crazy ideas. But as long as it has that reputation, we might as well continue posting crazy ideas here, since crazy ideas can be quite valuable. If LW was "rebooted" in some other form, and crazy ideas were discussed there, the new forum would probably acquire its own reputation for crazy ideas soon enough.

The great thing about LW is that it allows a smart, dedicated, unknown person to share their ideas with a bunch of smart people who will either explain why they're wrong or change their actions based on them relatively quickly. Many of LW's former major contributors have now independently acquired large audiences that pay attention to their ideas, so they don't need LW anymore. But it's very valuable to leave LW open in order to net new contributors like Nate Soares (who started out writing book reviews for LW and was recently promoted to be MIRI's executive director). (Come to think of it, lukeprog was also "discovered" through Less Wrong... he went from atheist blogger to LW contributor to MIRI visiting fellow to MIRI director.)

Consider also infrequent bloggers. Kaj Sotala's LW posts seem to get substantially more comments than the posts on his personal blog. Building and retaining an audience on an independent blog requires frequent posting, self-promotion, etc... we shouldn't require this of people who have something important to say.

9raydora
I recently joined this site after lurking for a while. Are blog contributions of that sort the primary purpose of Less Wrong? It seems like it fulfills a niche that the avenues you listed do not: specifically, in the capacity of a community rather than an individual, academic, or professional endeavor. There are applications of rational thought present in these threads that I don't see gathered anywhere else. I'm sure I'm missing something here, but could viewing Less Wrong as a potential breeding ground for contributors of that kind be useful? I realize it's a difficult line to follow without facing the problems inherent to any community, especially one that preaches a Way. I haven't encountered the rationalist tumblr scene. Is such a community there?
3[anonymous]
Eh, it is just useful to have a generic discussion forum on the Internet with a high average IQ and a certain culture of epistemic sanity, of trying to avoid at least the worst fallacies and biases. If, out of the many ideas in the Sequences, at least "tabooing" got out into the wild, so that people on other forums became more used to discussing actual things instead of labels and categories, it could become bearable out there. For example, you can hardly have a sane discussion on economics.reddit.com because labels like capitalism and socialism are used as rallying flags.

When should a draft be posted in discussion and when should it be posted in LessWrong?

I just wrote a 3000+ word post on science-supported/rational strategies to get over a break-up, I'm not sure where to put it!

7NancyLebovitz
Do you mean whether it should be posted to Discussion or Main? You can post it to Discussion. It might get promoted to Main. I'm not sure who makes those decisions. You can post it to Main, and take your chances on it being downvoted. You can post a link to it, and see if you get advice on where you should post it.
6lululu
OK, thank you. This is my first LessWrong post. I posted to discussion, hopefully it will find its place.

A comment about some more deep learning feats:

Interestingly, they initialise the visual learning model using the ImageNet images. Only 3 years ago this was considered a pretty much intractable problem; now the fact that a CNN works on it well enough to be useful isn't even worth a complete sentence.

(Background on ImageNet recent progress: http://lesswrong.com/lw/lj1/open_thread_jan_12_jan_18_2015/bvc9 )
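For readers who want to see what "initialise the visual model from ImageNet" can look like in practice, a generic sketch using a pretrained torchvision model as a feature extractor (this assumes PyTorch/torchvision are installed and is not the setup from the linked work):

```python
import torch
from torchvision import models

# Load a CNN whose weights were learned on ImageNet, and keep it as a
# generic feature extractor by dropping the classification head.
cnn = models.resnet18(weights="DEFAULT")  # ImageNet-pretrained weights
cnn.fc = torch.nn.Identity()              # replace the 1000-class head
cnn.eval()

with torch.no_grad():
    batch = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image
    features = cnn(batch)                 # 512-dimensional feature vector

print(features.shape)  # torch.Size([1, 512])
```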

Clicking on the tag "open thread" on this post only shows open threads from 2011 and earlier, at "http://lesswrong.com/tag/open_thread/". If I manually enter "http://lesswrong.com/r/discussion/tag/open_thread/", then I get the missing open threads. The problem appears to be that "http://lesswrong.com/tag/whatever/" only shows things posted to Main. "http://lesswrong.com/r/all/tag/open_thread/" seems to behave the same as "http://lesswrong.com/tag/open_thread/", i.e. it only shows things posted to ... (read more)

50gjm

It looks like someone downvoted about 5 of my old comments in the last ~10 hours. (Not recent ones that are still under any kind of discussion, I think. I can't tell which old ones.)

I mention this just in case others are seeing the same; I suspect Eugine_Nier/Azathoth123 has another account and is up to his old mass-downvoting tricks again. (I actually have a suspicion which account, too, but nowhere near enough evidence to be making accusations.)

2Gram_Stone
Another data point: someone would downvote every comment I made up until April 1st. Not sure if I successfully signalled my 'rationality' or if I successfully signalled that I'm not going away.
1Dahlen
Same here; in fact, I've been keeping an eye on that account for a while, and noticed when you expressed your complaints about downvoting in a discussion with him recently. There's no apparent sign of the sheer downvote rampages of old so far; if we're right, he's been a little more careful this time around about obvious giveaways (or maybe it's just the limited karma)... Alas, old habits die hard. I'm not even sure anyone can do anything about it; LessWrong is among those communities that are vulnerable to such abuses. Without forum rules relating to member conduct, without a large number of active moderators, without a culture of holding new members under close scrutiny until they prove themselves to bring value to the forum, but with a built-in mechanism for anyone to disrupt the forum activity of anyone else...
3gjm
It's interesting that you're confident of which account it is; I didn't say. I had another PM from another user, naming the same account (and giving reasons for suspicion which were completely different from mine). So: yeah, putting this all together, either it's him again or there are a whole bunch of similarities sufficient to ring alarm bells independently for multiple different users. I don't see any need for anyone to swing the banhammer again unless he starts being large-scale abusive again, in which case no doubt he'll get re-clobbered. Perhaps by then someone will have figured out how to get his votes undone. (In cases where someone's been banned for voting abuses, come back, and done the same things again, I would be in favour of simply zeroing out all votes by the revenant account.)
1skeptical_lurker
I think Azathoth is back too, and I think I know which account, but I don't get the impression that the mass-upvote sockpuppets that were suspected of helping his previous incarnations are active. I think there should be simple ways to combat this sort of problem anyway; for a start, people's accounts could list the percentage of upvotes they give in the same way they currently list the percentage of upvotes they receive. Limits could be put on the number of downvotes you can issue, by saying that they cannot exceed your karma (or a multiple thereof). This problem has been encountered before in places like Reddit; how did they deal with it there?
2Gurkenglas
Wouldn't they just mass-upvote random posts not from that person?
-2Lumifer
And what exactly would you infer from this metric? As far as I know solely downvoting the posts you don't like and never upvoting anything is fully within the rules. Such limits exist and are in place, I think.
2skeptical_lurker
You would infer that they are a very critical person, I suppose.
0Lumifer
Actually, would you? This is an interesting inference/rationality question. If someone's voting history has 900 downvotes and 100 upvotes, then yes, it looks reasonable to conclude that this is a very critical person with high standards. But what if a voting history contains 1000 downvotes and no upvotes at all? I would probably decide that this person has some rules (maybe set by herself for herself) which prevent her from upvoting. And in such a case you can't tell whether she's highly critical or not.
0Viliam
The important thing would be who received those 900 downvotes. I am not sure about the exact formula, but the first approximation is whether the set of 900 comments downvoted by user X correlates more with "what other people downvoted" or with "who wrote those comments". That is, how much the user has high standards vs. how much it is a personal grudge. To some degree "what other people downvoted" and "who wrote those comments" correlate with each other, because some people are more likely to write good comments, and some people are more likely to write bad comments. The question would be whether the downvoting patterns of user X correlate with "who wrote that" significantly more strongly than the downvoting patterns of an average user. (Of course, any algorithm, when made public, can be gamed. For example, detection by the algorithm as described above could be avoided by a bot who would (a) upvote every comment that already has karma 3 or more, unless the comment author is in the "target" list; (b) downvote every comment that already has karma -3 or less; and (c) downvote every comment whose author is in the "target" list. The first two parts would make the bot profile seem similar to the average user, if the detection algorithm ignores the order of votes for each comment.)
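A toy sketch of this first-approximation test (made-up numbers; a real version would need the base rates and per-user comparison described above):

```python
import numpy as np

# One row per comment that user X downvoted (hypothetical data).
author_ids    = np.array([7, 7, 7, 7, 2, 7])      # who wrote each downvoted comment
community_net = np.array([4, -2, 3, 5, -6, 2])    # net karma from everyone else

# A grudge voter concentrates on one author regardless of comment quality;
# a high-standards voter mostly hits comments the community also disliked.
share_on_top_author = np.bincount(author_ids).max() / len(author_ids)
agrees_with_crowd   = np.mean(community_net < 0)

print(f"{share_on_top_author:.0%} of X's downvotes hit a single author")    # 83%
print(f"{agrees_with_crowd:.0%} of them agree with the community verdict")  # 33%
```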
2Lumifer
That doesn't look like a good approach to me. Correlating with "what other people downvoted" doesn't mean "high standards" to me, it means "follows the hivemind". Imagine a forum which is populated by representatives of two tribes, Blue and Green, and moreover 90% of the forum participants are Green and only 10% are Blue. Let's take Alice who's Blue -- her votes will not be positively correlated with other people's votes for obvious reasons. You're thinking about a normative situation where people should vote based on ill-defined "quality" of the post, but from a descriptive point of view people vote affectively, even on LW. I think what you want is fairly easy to define without correlations. You are looking for a voting pattern that:

* Stems from a single account (or a small number of them)
* Is targeted at a single account (or a small number of them)
* Has a large number of negative votes in a short period of time
* Targets old posts, often in a particular sequence that matches the way software displays comments
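A minimal sketch of that rule-based pattern (hypothetical vote-record schema and thresholds, not LW's actual database):

```python
from collections import defaultdict
from datetime import timedelta

def looks_like_mass_downvoting(vote_log, window=timedelta(hours=1),
                               min_burst=20, min_post_age_days=30):
    """Flag (voter, target) pairs matching the pattern above: many negative
    votes from one account on another account's old posts in a short time."""
    bursts = defaultdict(list)
    for v in vote_log:  # each v: dict with voter, target, direction, time, post_age_days
        if v["direction"] < 0 and v["post_age_days"] > min_post_age_days:
            bursts[(v["voter"], v["target"])].append(v["time"])

    flagged = []
    for pair, times in bursts.items():
        times.sort()
        # find the largest number of downvotes falling inside any single window
        for i in range(len(times)):
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= min_burst:
                flagged.append(pair)
                break
    return flagged
```

In practice the window and burst thresholds would be tuned against known-legitimate voting histories.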
0Good_Burning_Plastic
Me too. Should I PM you to tell you which one?
0gjm
By all means. At this point I'll be quite surprised if you don't suspect the same account as I do! It would be interesting to know your reasons for suspicion, too.
0Good_Burning_Plastic
PM sent.
0Dorikka
If I remember correctly, NancyLebovitz is the forum moderator; she might have the means and willingness to look into this kind of thing, and take action if needed.

Some unrefined thoughts on why rationalists don't win + a good story.

Why don't rationalists win?

1) As far as being happy goes, the determinants of that are things like optimism, genetics, good relationships, sense of fulfillment etc. All things you could easily get without being rational, and that rationality doesn't seem too correlated with (there's probably even a weak-moderate negative correlation).

2) As far as being right goes (epistemic rationality), well people usually are wrong a lot. But people have an incredible ability to compartmentalize, and pe... (read more)

3Adam Zerner
* I sense that a common rationalist perspective is to pay a lot more attention to the bad things, and not to be satisfied with the good. More generally, this seems to be the perspective of ambitious people.
* Rationalists don't seem to be able to derive as much joy from interaction with normal people, and thus probably struggle to find strong relationships.
* Normal people seem to derive a sense of fulfillment from things that they probably shouldn't. For example, my Uber driver was telling me how much fulfillment she gets from her job, and how she loves being able to help people get to where they're going. She didn't seem to be aware of how replaceable she is. She wasn't asking the question of "what would happen if I wasn't available as an Uber driver?" Or "what if there was one less Uber driver available?"

I should note that none of this is desirable, and that someone who's a perfect rationalist would probably do quite well in all of these areas. But I think that Reason as a memetic immune disorder applies here. It seems that the amount of rationality that is commonly attained often acts as an immune disorder in these situations.

In thinking/talking to people, it's too hard to be comprehensive, so I usually simplify things. The problem is that I feel pressure to be consistent with what I said, even though I know it's a simplification.

This sorta seems like an obvious thing to say, but I get the sense that making it explicit is useful. I notice this to be a moderate-big problem in myself, so I vow to be much much much better at this from now on (I'm annoyed that I fell victim to it at all).

2MrMind
It might not be. Any further explanation is costly, and would be welcomed only if the subject is really interested in the topic. If not, you would come across as pedantic and boring. I think that you should learn to resist the pressure. It's very rare that someone will call you out for some inconsistency, even blatant. It's quite amazing, actually. In the rare cases where someone does call you out, you can just offer further explanation, if you care to.
40[anonymous]

If using multiple screens at work made you more productive, care to give an example or two of what you put on one and the other, and how they interact? Perhaps also negatives: in what situations does that not help?

Hypothesis: they only help with transformation-type work, e.g. translation, where you read a document on one and translate on another; or read a spec on one and write code to implement it on another; or, at any rate, where the output you generate is strongly dependent on an input that you need to keep referring to.

I actually borrowed a TV as a second screen b... (read more)

5gjm
At work:

Software development: text editors (or IDE) on one screen; terminal/command-prompt window(s) for building, running tests, etc., on another.

Exploratory work in MATLAB: editor(s) and MATLAB figure windows (plots, images, ...) on one screen; the MATLAB command window on another.

I use virtual desktops as well as multiple monitors, so things like email and web browser are over in another universe and less distracting. (This does have the downside that when I'm, say, replying to something on Less Wrong, my work is over in another universe and less distracting.) So are other things (e.g., documents being written, to-do lists, etc.). Of course things may get moved around; e.g., if I'm writing a document based on some technical exploration then I may want a word processor coexisting with MATLAB and a web browser.

At home: email on one monitor, web browser on another. (And all kinds of other things on other virtual desktops.)
1[anonymous]
Hm, so we have two cases now, thanks: * Read on S1 -> think -> write on S2 * Write on S1, execute / do other things with what is written on S2 A third case, such as web browser and email, does not sound that useful to me, but it at least forces you to move your neck, which is actually good: a lower chance of getting stiff and sore from staring ahead unmoving for hours. Actually I wonder if, from this angle of encouraging motion, we should put another one on the floor and one on the ceiling :) If neither money nor work productivity were a huge issue, the healthiest setup would be robotic arms rearranging screens around you every few minutes in 3D, encouraging regular movement along all axes.
1gjm
Sometimes useful: e.g. get email saying "hey, look at this interesting thing on the web", or "could you please do X" where X requires buying something online. Or see something interesting on the web and send an email to a friend about it. But yeah, it's not hugely valuable. (I have two monitors on my home machine because sometimes I do more serious things on it and they're useful then. And because there was a spare monitor going cheap so I thought I might as well.) If money and productivity were that little an issue, why would you be sat at this contraption in the first place?
1[anonymous]
Good question. Actually - it might not even reduce productivity. Suppose you put a terminal where you run commands on average every ten minutes on one such screen, mounted on a fully positionable 3D robotic arm. You lose maybe 2 seconds finding out whether this time it is over your left shoulder or up by the ceiling. But the improved blood flow from the movement could improve your cognitive skills, and maybe being forced into 3D all-around situational awareness "awakens the ancestral hunter", i.e. improves awareness, focus and concentration. A good example is driving a car. It tends to put me in a focused mode. But, lacking that, at least having some neck movement between screens must be a good thing.
0Lumifer
Have you read Stephenson's REAMDE? It describes in detail an interesting working setup... :-)
3MSwaffer
I have 2 desks in my office, both with multiple screen layouts. Your question made me think about how I use them, and it comes down to the task I am performing. Like others, when I am programming I typically have an IDE where I am doing work on one and a reference open on another. When doing web development my third monitor usually has a browser where I can immediately refresh my work to see results; for other development it may be a virtual machine or remote desktop that I am logged into. When I am doing academic work, I often have EndNote (reference manager) on one monitor, the document I am writing on another and the documents I am finding / reading on the third. Since my two desks are next to each other, I often "borrow" a monitor from the other setup to keep communication windows open (Skype, Lync, Hangouts, #Slack etc.) This allows me to keep in touch with coworkers and colleagues without having to flip windows every time I get a message. So I would say there are three purposes identified: * Active Work * Reference Material * Communication
3OrphanWilde
I put source code/IDE/logging output in one, and the program I'm running in the other, particularly when debugging a program; running in debug mode or watching logs is simpler. I also put remote desktops in a separate screen, often copying the contents of configuration files over as text, as I don't generally get the ability to drag files directly into environments (people who prevent direct copying of files or dragging and dropping, your security is getting in the way without providing any security - Base64 encoding exists). Otherwise I will have social applications open in one (e-mail application, chats with clients, etc), and my actual work in the other.
3Vaniver
I of course do much of the "work on A, reference on B" that others have talked about--the IDE open on one screen and the documentation open on the other--but it's also worth pointing out the cases where there are multiple pieces of reference material that I'm trying to collide somehow, and having both of them open simultaneously is obviously incredibly useful.
3wadavis
The typical theme is reference material on one screen, and working material on the other screen. The equivalent of having all your reference material open on your desk so you are not flipping back and forth through notes. Edit: Read The Intelligent Use of Space by David Kirsh as recommended by this LessWrong post.
3Unknowns
I work with multiple screens and I estimate that I save between 20 minutes and one hour per day in comparison to using only one. I do financial work, and examples would be: Quickbooks open on one screen and an internet bank account open on the other; or the account open on one screen and some financial pdf open on the other; or similar things.
0[anonymous]
So: read on screen 1 -> thought and transformational work -> write on screen 2?
2Shmi
3 monitors, 1 for a browser, 1 for IDE, 1 for misc stuff, like watching syslog messages, file manager, etc.
2[anonymous]
One screen (small square monitor I found for free) is often filled up with my matlab data files and matlab command window. The other (large) contains some combination of figures generated by my matlab scripts from my yeast data (constantly popping in and out), analysis I am writing, and scripts I am editing. (I should really map out the dependencies of all my scripts sometime...) When things are slower the small monitor often contains the live feed from the space station.
2sixes_and_sevens
I don't know how common this is, but with a dual-monitor setup I tend to have one in landscape and one in portrait. The portrait monitor is good for things like documents, or other "long" windows like log files and barfy terminal output. The landscape monitor is good for everything that's designed to operate in that aspect ratio (like web stuff). More generally, there's usually something I'm reading and something I'm working on, and I'll read from one monitor, while working on whatever is in the other. At work I make use of four Gnome workspaces: one which has distracting things like email and project management gubbins; one active work-dev workspace; one self-development-dev workspace; and one where I stick all the applications and terminals that I don't actively need to look at, but won't run minimised/headlessly for one reason or another.
[-][anonymous]20

How do other people study? I'm constantly vacillating between the ideas of taking notes and making flashcards, or just making flashcards. I'd like to study everything the same way, but it seems like for less technical subjects like philosophy making flashcards wouldn't suffice and I'd need to take notes. For some reason the idea of taking notes for some subjects but not others is uncomfortable to me. And I'm also stuck between taking notes on the literature I read or just keeping a list. It's getting to the point where I don't even study or read anymore be... (read more)

5estimator
I believe that both making notes and making flashcards are suboptimal; the best (read: fastest) method I know is to read and understand what you want to learn, then close your eyes and recall everything in full detail (that is hard, and somewhat painful; you should try to remember something for at least a few minutes before giving up). Re-read whatever you haven't remembered. Repeat until convergence. In math, it helps to solve problems and find counterexamples to theorem conditions, because it leads to deeper understanding, which makes remembering significantly easier. Also try to make as many connections as possible to already-known facts and possible applications: our memory is associative.
2Dorikka
If possible, I like to allocate full attention to listening to the lecturer instead of dividing it between such and taking notes. However, this isn't always feasible. It helps if there is a slidepack or something similar that will be available afterwards. Most of the time, I'm trying to build a mental construct for how all of the things that I'm learning fit together. Depending on the difficulty of the material, I may need to begin creating this construct pretty soon so I can understand the material, or it may be able to wait until pretty close to the exam. (If I'm not having to take notes, I can start doing it in class, which is more efficient and effective.) I try to fill in the gaps in my mental model with a list of questions to ask in office hours. In the process, the structure of the material becomes a bit more evident. Is it very interconnected, either in a logical or physical sense? Is it something that seems to be made of arbitrary facts? If the latter, and the material is neither interesting nor useful nor required, I will be tempted to drop the class. If it is interesting or useful, facts stick much better, as I can think about how I can use them, how they help me understand things in such a manner that I can more easily affect them. Not sure that I personally have found many classes interesting but not useful if they lack a structure. If neither, but required, I prefer creating a structure that helps me link things together in a way that will help me remember them. A memorable example was committing a picture of the amino acid table to memory and then stepping through it vertically, horizontally, diagonally to make it stick. A structure that can be useful here is to repeat all past memorized items when memorizing a list. So A, then AB, then ABC, and so on. I like pictures, lists, and cheat sheets (often worth making for me to help mental organization even if I can't take them into a test) for the facts that don't fit in my mental model, or just as redund
0OrphanWilde
Focus on grokking, on understanding, rather than remembering.

Apparently the new episode of Morgan Freeman's Through the Wormhole is on the Simulation Hypothesis.

Epistemic status: unlikely that my proposal works, though I am confident that my calculations are correct. I'm only posting this now because I need to go to bed soon, and will likely not get around to posting it later if I put it off until another day.

Does anyone know of any biological degradation processes with a very low energy of activation that occur in humans?

I was reading over the "How Cold Is Cold Enough" article on Alcor's website, in which it is asserted that the temperature of dry ice (-78.5 C, though they use -79.5 C) isn't a cold eno... (read more)
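For context, the kind of calculation at issue is the standard Arrhenius estimate. A sketch (the activation energies below are my own illustrative choices, not the article's figures):

```latex
% Arrhenius rate law, and the slowdown factor between body temperature
% and dry-ice temperature for a process with activation energy E_a
k = A\, e^{-E_a/(RT)}
\qquad\Longrightarrow\qquad
\frac{k_{\mathrm{body}}}{k_{\mathrm{dry\ ice}}}
  = \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_{\mathrm{dry\ ice}}}
  - \frac{1}{T_{\mathrm{body}}}\right)\right]
```

With T_body = 310 K, T_dry ice = 194.65 K and R = 8.314 J/(mol·K), a fairly typical E_a of 50 kJ/mol gives a slowdown factor of roughly e^11.5, about 10^5, while a very low E_a of 10 kJ/mol gives only about a 10-fold slowdown. That is why degradation processes with a very low activation energy are the worrying case at dry-ice temperatures.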

According to the official story, Pakistan didn't know about Osama Bin Ladin's location at the time of his death.

What is your credence that the official story is true about that claim? (answer as probability between 0 and 1) [pollid:980]

Define Pakistan.

7ChristianKl
At least one of Ashfaq Parvez Kayani (chief of military), Ahmad Shuja Pasha (director of ISI) and Asif Ali Zardari (Pakistani president) knew about it.
3ike
Isn't the official story that the US didn't know that Pakistan knew? As in, it's possible either that Pakistan knew or that it didn't, but the US didn't know one way or the other. I'm assuming you're talking about the US's official story.
3ChristianKl
Googling a bit, it seems that various people say different things about the likelihood of Pakistan knowing. If I were to formulate the question again, I might drop the word "official" and ask directly whether Pakistan knew. I think this question is still okay in this form, because it asks directly whether the respondent of the poll believes Pakistan to have known.
1bogus
Given the location where OBL was eventually found, this "official story" is not plausible in the least, and everyone knows that. The only reason for its existence is that nobody wants to 'officially' admit that Pakistan was running a scam on the U.S. by asking for $$$ "to help search for Bin Ladin", and that the U.S. government fell for it for quite a while.
[-][anonymous]20

Yesterday, I stumbled upon this reddit comment by the author of the open textbook AI Security, Dustin Juliano. If I understood it correctly, the claim is basically that an intelligence explosion is unlikely to happen, and thus the development of strong AI should be an open, democratic process so that no single person or small circle can gain a considerable amount of power. What is Bostrom's/MIRI's take on this issue?

9Gram_Stone
They're not exactly patrolling Reddit for critics, but I'll bite. From what I understand, Bostrom's only premise is that intelligent machines can in principle perform any intellectual task that a human can, and this includes the design of intelligent machines. Juliano says that Bostrom takes hard-takeoff as a premise: He doesn't do that. Chapter 4 of Superintelligence addresses both hard- and soft-takeoff scenarios. However, Bostrom does consider medium- to hard-takeoff scenarios more likely than soft-takeoff scenarios. Another thing, when he says: There can't be evidence of an intelligence explosion because one hasn't happened yet. But we predict an intelligence explosion because it's based on an extrapolation of our current scientific generalizations. This sort of criticism can be made against anything that is possible in principle but that has not yet happened. If he wanted to argue against the possibility of an intelligence explosion, he would need to explain how it isn't in line with our current generalizations. You have to have a more complex algorithm for evaluating claims than "evidence = good & no-evidence = bad" to get around mistakes like this. He actually sort-of seems to imply that he doesn't think it's in line with our generalizations, when he says "people [...] don't understand the technical issues behind why that is not going to happen", which would be a step in the right direction, but he doesn't actually say anything about where he disagrees. Also, Bostrom has a whole section in Chapter 14 on whether or not AGI should be a collaborative effort, and he's strongly in favor of collaboration. Race dynamics penalize safety-conscious AGI projects, and collaboration mitigates the risk of a race dynamic. Also, most people's preferences are resource-satiable; in other words, there's not much more that someone could do with a billion galaxies' worth of resources as opposed to one galaxy's worth, so it's better for everyone to collaborate and maximize th
2[anonymous]
Can you recommend an article about the inner view on intelligence? The outer view seems to be an optimization ability, which I am not sure I buy but won't challenge either; let's say it's accepted as a working hypothesis. But what is it on the inside? Can we say that it is like a machine shop? Where ideas are first disassembled, and this is called understanding them, taking them apart and seeing their connections. (Latin: intelligo = to understand.) And then reassembled, e.g. to generate a prediction. Is IQ the size of the door on the shop that determines how big a machine can be brought in for breaking down? For example, randomly generating hypotheses and testing them, while it may be very efficient for optimization, does not really sound like textbook intelligence. Textbook intelligence must have a feature of understanding, and understanding is IMHO idea-disassembly, model-disassembly. Intelligence-as-understanding (intelligo), interpreted as the ability to understand ideas proposed by other minds and hence conversational ability, has this disassembly feature. From this angle one could build an efficient hypothesis-generator-and-tester type optimizer that is not intelligent in the textbook sense, is not too good at "intelligo", and could not discuss Kant's philosophy. I am not sure I would call that AI, and it is not simply a question of terminology: most popular AI fiction is about conversation machines, not "silent optimizers", so it is important how we visualize it.
1Gram_Stone
I'm having a really hard time modeling your thought process. Like, I don't know what is generating the things that you are saying; I am confused. I'm not sure what you mean by inner vs. outer view. Well, IQ tests test lots of things. This seems like a good metaphor for working memory, and even though WM correlates with IQ, it's also just one component. I don't really get what you mean when you say that it's important how we visualize it. Well, if you take, say, AIXI, which sounds like this sort of hypothesis-testing-optimizer-type AI that you're talking about, AIXI takes an action at every timestep, so if you consider a hypothetical where AIXI can exist and still be unbounded, or maybe a computable approximation in which it has a whole hell of a lot of resources and a realistic world model, one of those actions could be human natural language if it happened to be the action that maximized expected reward. So I'd say that you're anthropomorphizing a bit too much. But AIXI is just the provably-best jack-of-all-trades; from what I understand there could be algorithms that are worse than AIXI in other domains but better in particular domains.
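For concreteness, the finite-horizon AIXI action-selection rule (as I remember it from Hutter's formulation; treat this as a sketch rather than a definitive statement) is:

```latex
% AIXI: choose the action maximizing Solomonoff-weighted expected reward,
% summing over all programs q (run on universal Turing machine U, with
% length \ell(q)) whose output is consistent with the interaction history
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \left(r_t + \cdots + r_m\right)
  \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Note that nothing in this expression is human-shaped: emitting natural language is just one more candidate action sequence, and gets chosen only if it happens to maximize the expected reward sum.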
1[anonymous]
I think the keyword to my thought process is anthropomorphizing. The intuitive approach to intelligence is that it is a human characteristic, almost like handsomeness or richness. Hence the pop-culture AI is always an anthropomorphic conversation machine, from Space Odyssey to Matrix 3 to Knight Rider. For example, it should probably have a sense of humor. The approach EY/MIRI seems to take is to de-anthropomorphize even human intelligence as an optimization engine. A pretty machine-like thing. This is what I mentioned that I am not sure I can buy, but am willing to accept as a working hypothesis. So the starting position is that intelligence is anthropomorphic; MIRI has a model that de-anthropomorphizes it, which is strange, weird, but probably useful; yet at the end we probably need something re-anthropomorphized. Because if not, then we don't have AI in the human sense, a conversation machine; we just have a machine that does weird alien stuff pretty efficiently with a rather inscrutable logic. Looking at humans, besides optimization, the human traits that are considered part of intelligence, such as a sense of humor, or easily understanding difficult ideas in a conversation, are parts of it too, and they lie outside the optimization domain. The outer view is that we can observe intelligent humans optimizing things, this being one of their characteristics, although not exhaustive. However, it does not lead to a full understanding of intelligence, just one facet of it, the optimization facet. It is merely an output, an outcome of intelligence; not the process but its result. So when a human with a high IQ tells you to do something in a different way, this is not intelligence; intelligence was the process that resulted in this optimization. To understand the process, you need to look at something other than optimization, the same way that to understand software you cannot just look at its output. What I was asking is how to look at it from the inner view. What is the soft
5ChristianKl
"IQ" is just a terms for something on the map. It's what we measure. It's not a platonic idea. It's a mistake to treat it as such. On the other hand it's useful measurement. It correlates with a lot of quantities that we care about. We know that because people did scientific studies. That allows us to see things that we wouldn't see if we just reason on an armchair with concepts that we developed as we go along in our daily lives. Scientific thinking needs well defined concepts like IQ, that have a precise meaning and that don't just mean what we feel they mean. Those concepts have value when you move in areas where the naive map breaks down and doesn't describe the territory well anymore.
3Gram_Stone
Why reanthropomorphize? You have support for modeling other humans because that was selected for, but there's no reason to expect that that ability to model humans would be useful for thinking about intelligence abstractly. There's no reason to think about things in human terms; there's only a reason to think about it in terms that allow you to understand it precisely and likewise make it do what you value. Also, neural nets are inscrutable. Logic just feels inscrutable because you have native support for navigating human social situations and no native support for logic. If we knew precisely everything there was to know about intelligence, there would be AGI. As for what is now known, you would need to do some studying. I guess I signal more knowledge than I have. This is AIXI.

I have an extremely crazy idea - framing political and economic arguments in the form of a 'massively multiplayer' computer-verifiable model.

Human brains are really terrible at keeping track of a lot of information at once and sussing out how subtle interactions between parts of a system lead to the large-scale behavior of the whole system. This is why economists frequently build economic models in the form of computer simulations to try to figure out how various economic policies could affect the real world.

That's all well and good, but economic models bu... (read more)

2skeptical_lurker
No-one would agree on what models to use. I think this is an interesting idea in theory. And if you connect it to prediction markets, then this could be some sort of computational collaborative futarchy.
0passive_fist
The biggest issue (aside from computational cost) is definitely how to reconcile conflicting models, although no one would ever be editing the entire model, only small parts of it. I hope (and I could be wrong) that once the system reaches a certain critical mass, predicting the emergent behaviour from the microscopic details becomes so hard that someone with a political agenda couldn't easily come up with ways to manipulate the system just by making a few local changes (you can think of this as similar to a cryptographic hashing problem). Other large-scale systems (like cryptocurrencies) derive security from similar 'strength in numbers' principles. One option is to limit input to the system to only peer-reviewed statistical studies. But this isn't a perfect solution, for various reasons. Using a connection to prediction markets (so that people have some skin in the game) is a nice idea, but I'm not sure how you're thinking of implementing that?
0skeptical_lurker
Well, models generally rely on parameter values, which can be determined empirically, reasoned about more theoretically, or inferred by fitting the model to data with some algorithm such as Markov chain Monte Carlo. Anyway, suppose two people disagree on the value of a parameter. Running the model with different parameter values would produce different predictions, which they could then bet on.
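A toy sketch of how such a bet could be settled (every name and number here is made up purely for illustration):

```python
# Two people disagree about a single growth-rate parameter in a shared
# model. Running the same model under each value yields different
# forecasts, which can then be scored against the realized outcome.

def model(initial: float, growth_rate: float, years: int) -> float:
    """Minimal stand-in for a shared simulation model."""
    return initial * (1.0 + growth_rate) ** years

forecast_a = model(100.0, 0.02, 10)  # Alice's preferred parameter value
forecast_b = model(100.0, 0.03, 10)  # Bob's preferred parameter value

realized = 128.0  # hypothetical observed outcome ten years later

# The market pays out to whichever forecast was closer to reality.
errors = {"Alice": abs(forecast_a - realized), "Bob": abs(forecast_b - realized)}
winner = min(errors, key=errors.get)
print(f"forecasts: Alice={forecast_a:.1f}, Bob={forecast_b:.1f}; winner: {winner}")
```

A real version would score whole predictive distributions rather than point forecasts, but the incentive structure is the same.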
0raydora
This sounds like a larger implementation of the models pathologists use to try to predict the infection rate of a disease. Considering the amount of computing power needed for that, such a service might be prohibitively expensive, at least in the near future. I'm wondering if there would be a way for participants to place some skin in the game, besides a connection to prediction markets.
0Lumifer
So what happens when 4chan discovers it?
0passive_fist
Same as what happened when 4chan discovered wikipedia. I suspect there will be vandalism but also self-correction. Ideally you'd want to build in mechanisms to make vandalism harder.

Some software already tries to read and affect human emotions: link

Sample:

EmoSPARK, say its creators, is dedicated to your happiness. To fulfil that, it tries to take your emotional pulse, adapting its personality to suit yours, seeking always to understand what makes you happy and unhappy.

I find that I learn better when I am eating. I sense that the pleasure coming from the food helps me pay attention and/or remember things. It seems similar to the phenomenon of people learning better after/during exercise (think: walking meetings).

Does anyone know of any research that supports this? Any anecdotal evidence?

0[anonymous]
I think I learn better if I stop to eat whenever I feel like eating, and don't get distracted by thoughts of food. (I am also 10 kg below normal weight, so I can afford it.)

Suffering and AIs

Disclaimer - For the sake of argument this post will treat utilitarianism as true, although I do not necessarily think that it is.

One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to have pain, fear, hope and pleasure of some kind. It might be reasonable to expect in such cases the unpleasant tasks might result in some form of suffering. Added to this problem is the fact that a finite lifespan and an approaching termination/shutdown ... (read more)

0DanielLC
Killer robots with no pain or fear of death would be much easier to fight off than ones that have pain and fear of death. Removing pain doesn't just mean they won't get distracted and lose focus on fighting when they're injured or in danger; it also means that they won't avoid getting injured or killed. It's a lot easier to kill someone if they don't mind it if you succeed.
0the-citizen
True! I was actually trying to be funny in (4), though apparently I need more work.

Disclaimer: I may not be the first person to come up with this idea

What if, for dangerous medications (such as 2,4-dinitrophenol (DNP), possibly?), the medication was stored in a device that would only dispense a dose when it received a time-dependent cryptographic key generated by a trusted source at a supervised location (the pharmaceutical company / some government agency / an independent security company)?

Could this be useful to prevent overdoses?
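Something like this already exists for logins in the form of TOTP (RFC 6238), and the same construction would carry over. A minimal sketch of the mechanism; every constant and name below is an illustrative assumption, not an actual product design:

```python
import hashlib
import hmac
import struct
import time

# The dispenser and the trusted dispatcher share a secret. The dispatcher
# only releases the code for the current time window when a dose is due,
# so codes can't be stockpiled for a binge: each expires with its window.

SECRET = b"secret-provisioned-at-the-pharmacy"  # hypothetical shared key
WINDOW_SECONDS = 8 * 3600                       # one authorized dose per 8 hours

def dose_code(secret: bytes, t: float) -> str:
    """HMAC over the current time window, truncated to a 6-digit code."""
    counter = int(t // WINDOW_SECONDS)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 10**6:06d}"

def dispenser_accepts(code: str, t: float) -> bool:
    # The device checks the code against its own clock, so a code is
    # only valid during the window it was issued for.
    return hmac.compare_digest(code, dose_code(SECRET, t))

now = time.time()
print(dispenser_accepts(dose_code(SECRET, now), now))                   # True
print(dispenser_accepts(dose_code(SECRET, now - WINDOW_SECONDS), now))  # False
```

Of course, this only controls the release of codes; whether the box itself can be pried open is a separate, physical-security problem.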

3Lumifer
If the dispensing device is "locked" against the user and you want to enforce dosing you don't need any crypto keys. Just make the device have an internal clock and dispense a dose every X hours. In the general case, the device is externally controlled and then people who have control can do whatever they want with it. I'm still not seeing a particular need for a crypto key.
0DanielLC
Forever? What if you want to change the dosage? So that only the person who's supposed to control it can control it. You don't want someone altering it with their laptop just because they have Bluetooth. Edit: Somehow I was thinking of implanting something that dispensed drugs. Just dispensing pills would make most of that pointless. Why worry about someone breaking it with a laptop if they can break it with a hammer? I suppose it might work if you somehow build the thing like a safe.
29eB1
There are already dispensing machines that dispense doses on a timer. They are mostly targeted at people who need reminding (e.g. Alzheimer's), though, rather than people who may want to take too much. I don't think the cryptographic security would be the problem in that scenario, but rather the physical security of the device. You would need some trusted way to reload it, and it would have to be very difficult to open even though it would presumably just be sitting on your table at home, which is a very high bar. It could possibly be combined with always-on tamper reporting and legal threats to make the idea of tampering with it less appealing, though.