I'm looking for some "next book" recommendations on typography and graphically displaying quantitative data.
I want to present quantitative arguments and technical concepts in an attractive manner via the web. I'm an experienced web developer about to embark on a Master's in computational statistics, so the "technical" side is covered. I'm solid enough on this to be able to direct my own development and pick what to study next.
I'm less hot on the graphical/design side. As part of my stats-heavy undergrad degree, I've had what I presume to be a fairly standard "don't use 3D pie charts" intro to quantitative data visualisation. I'm also reasonably well-introduced to web design fundamentals (colour spaces, visual composition, page layouts, etc.). That's where I'm starting out from.
I've read Butterick's Practical Typography, which I found quite informative and interesting. I'd now like a second resource on typography, ideally geared towards web usage.
I've also read Edward Tufte's Visual Display of Quantitative Information, which was also quite informative, but felt a bit dated. I can see why it's considered a classic, but I'd like to read something on a simi...
Every so often in the EA community, someone will ask what EA volunteer activities one can do in one's spare time in lieu of earning to give. Brian Tomasik makes an interesting case for reading social science papers and contributing what you learn to Wikipedia.
What changes would LW require to make itself attractive again to the major contributors who left and now have their own blogs?
As I often say, I haven't been here long, but I notice a sort of political-esque conflict between empirical clusters of people that I privately refer to as the Nice People and the Forthright People. The Nice People think that being nice is pragmatic. The Forthright People think that too much niceness decreases the signal-to-noise ratio and also that there's a slippery slope towards vacuous niceness that no longer serves its former pragmatic functions. A lot of it has to do with personality. Not everyone fits neatly, and there are Moderate People, but many fit pretty well.
I also notice policy preferences among these groups. The Nice don't mind discussion of object-level things that people have been drawn towards as the result of purportedly rational thinking and deciding. The Forthright often prefer technical topics and more meta-level discussion of how to be rational, and many harken back to the Golden Age when LW was, as far as I can tell, basically a way to crowdsource hyperintelligent nerds (in the non-disparaging sense) to work past inadequate mainstream decision theories, and also to do cognitive-scientific philosophizing as opposed to the ceiling-gazing sort. The Nice think t...
whether all the low-hanging fruit has been gathered
Still, there is the issue that this is a format of publishing sorted by publication date. It is not like a library, where it is just as easy to find a book published 5 years ago as one published yesterday, because books are sorted by topic or the author's name or something. The Sequences and the wiki help with this, but a timeless view of the whole thing would still, IMHO, be highly useful. A good post should not be "buried" just because it is 4 years old.
In my view, you're asking the wrong question. The major contributors are doing great; they have attracted their own audiences. A better question might be: how can LW grow promising new posters into future major contributors (who may later migrate off the platform)?
I had some ideas that don't require changing the LW source that I'll now create polls for:
Should Less Wrong encourage readers to write appreciative private messages for posts that they like?
[pollid:976]
Should we add something to the FAQ about how having people tear your ideas apart is normal and expected behavior and not necessarily a sign that you're doing anything wrong?
[pollid:977]
Should we add something to the FAQ encouraging people to use smiley faces when they write critical comments? (Smiley faces take up very little space, so they don't affect the signal-to-noise ratio much, and they help reinforce the idea that criticism is normal and expected. The FAQ could explain this.)
[pollid:978]
We could start testing these ideas informally ASAP, make a FAQ change if polls are bullish on the ideas, and then announce them more broadly in a Discussion post if they seem to be working well. To keep track of how the ideas seem to be working out, people could post their experiences with them in this subthread.
Should we add something to the FAQ
Does anyone read the FAQ? Specifically, do the newbies look at the FAQ while being in the state of newbiedom?
I recently wrote this, which would probably have been of interest to LW. But when I considered submitting it, my brain objected that someone would make a comment like "you shouldn't have picked a name that already redirects to something else on wikipedia", and... I just didn't feel like bothering with that kind of trivia. (I know I'm allowed to ignore comments like that, but I still didn't feel like bothering.)
I don't know if that was fair or accurate of my brain, but Scott has also said that the comments on LW discourage him from posting, so it seems relevant to bring up.
The HN comments, and the comments on the post itself, weren't all interesting, but they weren't that particular kind of boring.
I don't think a shutdown is even remotely likely. LW is still the Schelling point for rationalist discussion; Roko-gate will follow us regardless; SSC/Gwern.net are personal blogs with discussion sections that are respectively unusable and nonexistent. CFAR is still an IRL thing, and almost all of MIRI/CFAR's fans have come from the internet.
Agreed, though, that LW is slowly losing steam. Not sure what should be done about it.
To have a website with content like the original Sequences, we need someone who (a) can produce enough great content, and (b) believes that producing content for a website is the best use of their time.
It already sounds like a paradox: the more rational and awesome a person is, the more likely it is that they can use their time much better than writing a blog.
Well, unless they use the blog to sell something...
I think Eliezer wrote the original Sequences pretty much to find people to cooperate with him at MIRI, and to make people more sympathetic and willing to send money to MIRI. Mission accomplished.
What would be the next mission (for someone else) which could be accomplished by writing interesting articles to LW?
It's true that Less Wrong has a reputation for crazy ideas. But as long as it has that reputation, we might as well continue posting crazy ideas here, since crazy ideas can be quite valuable. If LW was "rebooted" in some other form, and crazy ideas were discussed there, the new forum would probably acquire its own reputation for crazy ideas soon enough.
The great thing about LW is that it allows a smart, dedicated, unknown person to share their ideas with a bunch of smart people who will either explain why they're wrong or change their actions based on them relatively quickly. Many of LW's former major contributors have now independently acquired large audiences that pay attention to their ideas, so they don't need LW anymore. But it's very valuable to leave LW open in order to net new contributors like Nate Soares (who started out writing book reviews for LW and was recently promoted to be MIRI's executive director). (Come to think of it, lukeprog was also "discovered" through Less Wrong... he went from atheist blogger to LW contributor to MIRI visiting fellow to MIRI director.)
Consider also infrequent bloggers. Kaj Sotala's LW posts seem to get substantially more comments than the posts on his personal blog. Building and retaining an audience on an independent blog requires frequent posting, self-promotion, etc... we shouldn't require this of people who have something important to say.
When should a draft be posted in Discussion and when should it be posted in Main?
I just wrote a 3000+ word post on science-supported/rational strategies to get over a break-up, I'm not sure where to put it!
A comment about some more deep learning feats:
Interestingly, they initialise the visual learning model using the ImageNet images. Was it only 3 years ago that this was considered a pretty much intractable problem? Now the fact that a CNN works on it well enough to be useful isn't even worth a complete sentence.
(Background on ImageNet recent progress: http://lesswrong.com/lw/lj1/open_thread_jan_12_jan_18_2015/bvc9 )
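For a concrete sense of what "initialise using ImageNet" means in practice, here is a minimal sketch of the general technique, using torchvision's pretrained ResNet-18 as a stand-in; the particular model and the layer cut are my own illustrative assumptions, not what the paper actually used.

```python
# Hypothetical sketch: start a vision model from ImageNet-pretrained weights
# and reuse the convolutional trunk as a feature extractor. Model choice
# (ResNet-18) and the cut point are illustrative, not taken from the paper.
import torch
import torchvision.models as models

backbone = models.resnet18(pretrained=True)                    # weights learned on ImageNet
trunk = torch.nn.Sequential(*list(backbone.children())[:-1])   # drop the classifier head
trunk.eval()

with torch.no_grad():
    dummy_batch = torch.randn(1, 3, 224, 224)   # one RGB image, 224x224
    features = trunk(dummy_batch)                # -> shape (1, 512, 1, 1)
print(features.shape)
```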
Clicking on the tag "open thread" on this post only shows open threads from 2011 and earlier, at "http://lesswrong.com/tag/open_thread/". If I manually enter "http://lesswrong.com/r/discussion/tag/open_thread/", then I get the missing open threads. The problem appears to be that "http://lesswrong.com/tag/whatever/" only shows things posted to Main. "http://lesswrong.com/r/all/tag/open_thread/" seems to behave the same as "http://lesswrong.com/tag/open_thread/", i.e. it only shows things posted to ...
It looks like someone downvoted about 5 of my old comments in the last ~10 hours. (Not recent ones that are still under any kind of discussion, I think. I can't tell which old ones.)
I mention this just in case others are seeing the same; I suspect Eugine_Nier/Azathoth123 has another account and is up to his old mass-downvoting tricks again. (I actually have a suspicion which account, too, but nowhere near enough evidence to be making accusations.)
Some unrefined thoughts on why rationalists don't win + a good story.
Why don't rationalists win?
1) As far as being happy goes, the determinants of that are things like optimism, genetics, good relationships, a sense of fulfillment, etc. These are all things you could easily get without being rational, and rationality doesn't seem very correlated with them (there's probably even a weak-to-moderate negative correlation).
2) As far as being right goes (epistemic rationality), well people usually are wrong a lot. But people have an incredible ability to compartmentalize, and pe...
In thinking/talking to people, it's too hard to be comprehensive, so I usually simplify things. The problem is that I feel pressure to be consistent with what I said, even though I know it's a simplification.
This sorta seems like an obvious thing to say, but I get the sense that making it explicit is useful. I notice this is a moderate-to-big problem in myself, so I vow to be much, much, much better at this from now on (I'm annoyed that I fell victim to it at all).
If using multiple screens at work made you more productive, care to give an example or two of what you put on one screen and the other, and how they interact? Perhaps also negatives: in what situations does it not help?
Hypothesis: they only help with transformation-type work, e.g. translation, where you read a document on one screen and translate on the other, or read a spec on one and write code to implement it on the other; at any rate, work where the output you generate depends strongly on an input you need to keep referring to.
I actually borrowed a TV as a second screen b...
How do other people study? I'm constantly vacillating between the ideas of taking notes and making flashcards, or just making flashcards. I'd like to study everything the same way, but it seems like for less technical subjects like philosophy making flashcards wouldn't suffice and I'd need to take notes. For some reason the idea of taking notes for some subjects but not others is uncomfortable to me. And I'm also stuck between taking notes on the literature I read or just keeping a list. It's getting to the point where I don't even study or read anymore be...
Apparently the new episode of Morgan Freeman's Through the Wormhole is on the Simulation Hypothesis.
Epistemic status: unlikely that my proposal works, though I am confident that my calculations are correct. I'm only posting this now because I need to go to bed soon, and will likely not get around to posting it later if I put it off until another day.
Does anyone know of any biological degradation processes with a very low energy of activation that occur in humans?
I was reading over the "How Cold Is Cold Enough" article on Alcor's website, in which it is asserted that the temperature of dry ice (-78.5 C, though they use -79.5 C) isn't a cold eno...
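For context, the calculation behind "how cold is cold enough" is just the Arrhenius rate ratio; here is a minimal sketch with the activation energy left as a free parameter you would plug in for whichever degradation process turns up (the example values below are only placeholders, not real biological data):

```python
# Arrhenius rate-ratio sketch: how much slower a reaction runs at dry-ice
# temperature than at body temperature, for a given activation energy.
# The example activation energies are placeholders, not real biological data.
import math

R = 8.314            # gas constant, J/(mol*K)
T_BODY = 310.15      # 37 C in kelvin
T_DRY_ICE = 194.65   # -78.5 C in kelvin

def slowdown_factor(ea_kj_per_mol):
    """Ratio of the rate at body temperature to the rate at dry-ice temperature."""
    ea = ea_kj_per_mol * 1000.0
    return math.exp((ea / R) * (1.0 / T_DRY_ICE - 1.0 / T_BODY))

for ea in (25, 50, 100):   # illustrative activation energies, kJ/mol
    print(f"Ea = {ea:>3} kJ/mol -> ~{slowdown_factor(ea):.2e}x slower at -78.5 C")
```

The lower the activation energy, the smaller the slowdown you get from cooling, which is exactly why processes with very low activation energies would be the worrying ones.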
According to the official story, Pakistan didn't know about Osama Bin Ladin's location at the time of his death.
What is your credence that the official story is true about that claim? (Answer as a probability between 0 and 1.) [pollid:980]
Yesterday, I stumbled upon this reddit comment by the author of the open textbook AI Security, Dustin Juliano. If I understood it correctly, the claim is basically that an intelligence explosion is unlikely to happen, and thus the development of strong AI should be an open, democratic process so that no single person or small circle can gain a considerable amount of power. What is Bostrom's/MIRI's take on this issue?
I have an extremely crazy idea - framing political and economic arguments in the form of a 'massively multiplayer' computer-verifiable model.
Human brains are really terrible at keeping track of a lot of information at once and sussing out how subtle interactions between parts of a system lead to the large-scale behavior of the whole system. This is why economists frequently build economic models in the form of computer simulations to try to figure out how various economic policies could affect the real world.
That's all well and good, but economic models bu...
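To make this concrete, here is a toy (and entirely made-up) example of the kind of building block I have in mind: a few agents earning income under a flat tax, with the policy assumption exposed as an explicit parameter and a fixed random seed, so anyone can rerun the model with different assumptions and check the output.

```python
# Toy illustration only (not a real economic model): agents earn random
# income, pay a flat tax, and we report total revenue and private wealth.
# The income process and tax rates are made-up parameters anyone can change.
import random

def simulate(tax_rate, n_agents=100, n_rounds=50, seed=0):
    rng = random.Random(seed)          # fixed seed -> reproducible, checkable output
    wealth = [10.0] * n_agents
    revenue = 0.0
    for _ in range(n_rounds):
        for i in range(n_agents):
            income = rng.uniform(0.0, 2.0)   # made-up income process
            tax = income * tax_rate
            revenue += tax
            wealth[i] += income - tax
    return revenue, sum(wealth)

for rate in (0.1, 0.3, 0.5):
    revenue, private = simulate(rate)
    print(f"tax rate {rate:.0%}: revenue {revenue:8.1f}, private wealth {private:8.1f}")
```

The fixed seed is what makes a run verifiable: anyone re-running the same code with the same parameters gets the same numbers.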
I find that I learn better when I am eating. I sense that the pleasure coming from the food helps me pay attention and/or remember things. It seems similar to the phenomenon of people learning better after/during exercise (think: walking meetings).
Does anyone know of any research that supports this? Any anecdotal evidence?
Suffering and AIs
Disclaimer - For the sake of argument this post will treat utilitarianism as true, although I do not necessarily think that it is.
One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to have pain, fear, hope and pleasure of some kind. It might be reasonable to expect in such cases the unpleasant tasks might result in some form of suffering. Added to this problem is the fact that a finite lifespan and an approaching termination/shutdown ...
Disclaimer: I may not be the first person to come up with this idea
What if, for dangerous medications (such as, possibly, 2,4-dinitrophenol (DNP)), the medication were stored in a device that would only dispense a dose when it received a time-dependent cryptographic key generated by a trusted source at a supervised location (the pharmaceutical company, some government agency, or an independent security company)?
Could this be useful to prevent overdoses?
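I'm imagining something along the lines of the existing TOTP scheme (RFC 6238, the one authenticator apps use). Here is a rough sketch of how the dispenser-side check could work; the shared secret, the 8-hour dosing window, and all names are invented purely for illustration:

```python
# Rough sketch of a time-dependent dispensing code, borrowing the TOTP idea
# (RFC 6238). The shared secret, the 8-hour window, and all names below are
# invented for illustration; a real device would need tamper resistance too.
import hashlib
import hmac
import struct
import time

SHARED_SECRET = b"provisioned-at-the-pharmacy"   # placeholder secret
DOSE_WINDOW_SECONDS = 8 * 60 * 60                # e.g. at most one code per 8 hours

def current_code(secret, now=None):
    """Code the trusted source would issue for the current dosing window."""
    counter = int((time.time() if now is None else now) // DOSE_WINDOW_SECONDS)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def dispenser_accepts(submitted_code):
    """Device-side check: dispense only if the code matches the current window."""
    return hmac.compare_digest(submitted_code, current_code(SHARED_SECRET))

# The supervising party generates the code remotely; the device only verifies.
code = current_code(SHARED_SECRET)
print(code, dispenser_accepts(code))
```

Because the code changes each window, a patient couldn't stockpile codes to dispense several doses at once, though the device itself would still need to be physically tamper-resistant.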
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.