
Comment author: TheAltar 24 April 2017 03:56:16PM 2 points [-]

This post makes a long list of sound arguments, but they exist in tandem with a narrative that may not actually be true. Most of the points are valid regardless, but whether they matter much in aggregate, or whether any of the conclusions reached actually hold, depends heavily on which lens we look through and on what has actually been going on at Open Phil and OpenAI.

I can imagine a compelling competing narrative in which Open Phil has decided that AI safety is important and thinks the most effective thing it can do with a large amount of its money is to use it to make the world safer against that x-risk. Lacking useful inside information on the topic (since it is a very hard topic), it outsources the actual research and the spending of the money to an organization that seems better suited to doing just that: OpenAI. (Whether OpenAI is a good choice for that is a separate discussion.) However, since Open Phil is donating so much money and doesn't really know what OpenAI might do with it in practice, it makes sure a person it trusts business-wise gets a seat on the board of directors, so that the money ends up being spent in line with its original intentions. (A good backup plan when it's an open question whether any group working on AI is doing more to help or to harm.)

Gwern makes a quick Fermi estimate here of how much OpenAI actually costs to run per year, and reminds us that while $1 billion has been "committed" to OpenAI, that is really just a press-release statement about a pseudo-promise by people who are known to be flaky and who are under no obligation to hand over the money. If we estimate that OpenAI runs on $9 million per year, then $30 million is a very hefty donation: it gives the company roughly three more years of runway. That's a big deal for whether OpenAI continues to exist, and if it already has $9 million coming in per year from another source, the grant could roughly double its yearly income and let it expand into lots of new areas as a result.
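
For concreteness, here is the arithmetic as a minimal Python sketch. The $9M/year burn rate and $9M/year existing income are rough assumptions from this comment, not official OpenAI figures:

    # Back-of-the-envelope version of Gwern's Fermi estimate. All
    # figures are rough assumptions, not official OpenAI numbers.
    annual_burn = 9_000_000      # estimated yearly cost to run OpenAI
    grant = 30_000_000           # Open Phil's donation
    existing_income = 9_000_000  # assumed other yearly funding

    runway_years = grant / annual_burn
    print(f"Extra runway bought by the grant: {runway_years:.1f} years")

    # Spent evenly over those ~3 years instead, the grant roughly
    # doubles OpenAI's yearly income while it lasts.
    per_year = grant / round(runway_years)
    multiplier = (existing_income + per_year) / existing_income
    print(f"Yearly income while the grant lasts: {multiplier:.1f}x current")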

~

There are a number of inductive leaps within the large model presented in the original post that I think are worth pointing out and examining. I'll also tack onto the end of each what I take to be the community's affect/opinion, because (having been up all night) I think it's worth noting.

  1. Open Phil now takes AI safety seriously as a threat to the world and has committed $30 million of its donors' money to it. (Yay! Finally!)
  2. Open Phil is giving that money to OpenAI. (Boo! Give it to MIRI!)
  3. Holden will be a board member at OpenAI as part of the deal. (Boo! We don't like him because he screwed up #2 and we don't respect his judgments about AI. Someone better should be on the board instead!) (Yay! He didn't write the people we don't like a blank check. That's a terrible idea in this climate!)

These are the parts that actually matter: whether the money is going somewhere actually useful for reducing x-risk, and whether Holden as board member is there just to ensure the money isn't wasted on useless projects, or whether he'll be influencing the distribution of funds much larger than $30 million in ways that are harmful (or helpful!) to AI safety. He could end up directing them in ways that make the world directly safer; directly less safe; safer because the money was spent badly rather than on alternatives that would have been bad; or less safe because it wasn't spent on better options.

Insofar as any of us should particularly care about all of this, it will have far more to do with these points than with anything else. They also sound far more tractable: the other problems you mention about Open Phil sound pretty shitty, and I don't expect a lot of those to change much at this point.

Comment author: TheAltar 07 December 2016 11:05:35PM 0 points [-]

This is probably my favorite link post that's appeared on LW thus far. I'm kinda disappointed more people haven't checked it out and upvoted it.

Comment author: Raemon 28 November 2016 04:12:55PM 14 points [-]

Quick note: Having finally gotten used to using discussion as the primary forum, I totally missed this post as a "promoted" post and would not have seen it if it hadn't been linked on Facebook, ironically enough.

I realize this was an important post that deserved promotion in any objective sense, but I'm not sure promoting things is the best way to give them visibility at this point.

Comment author: TheAltar 28 November 2016 11:25:44PM 4 points [-]

Having the best posts be moved away from the area where people can easily see them is certainly a terrible idea, architecture-wise.

The solution to this is what all normal subreddits do: sticky the post and change the color of its title, so that it both stands out and stays in the same visual area as everything else.

Comment author: TheAltar 27 November 2016 08:48:13PM 0 points [-]

"You can deduce that verbally. But I bet you can’t predict it from visualizing the scenario and asking what you’d be suprised or not to see."

I like this.

In my mind, this plugs into Eliezer's recent Facebook post about thinking of the world in mundane terms, in terms of what is merely-real, of how you personally would go fix a sink or buy groceries at the store, versus the way you think about everything else in the world. These methods of thought (visualizing actual objects and physics in the real world, thinking of them in terms of bets, and checking your surprise at what you internally simulate) all point at a mindset that is extremely important to learn and possess as a skill.

Comment author: TheAltar 27 November 2016 08:43:26PM 0 points [-]

I hadn't sufficiently considered LW's long-term changes within the context of the overall changes in the internet before. Thank you very much for pointing it out. Reversing the harm Moloch has done to this situation is extremely important.

I remember posting in the old vBulletin days, when a person would use a screenname but anonymity was much higher and the environment itself felt much better to exist in. Oddly enough, the places I posted back then were hardly non-hostile: they had a subpopulation who would go out of their way to deliberately insult people as harshly as possible. And yet, for some reason, I felt substantially safer, more welcome, and more accepted there than I have anywhere else online.

To at least some extent, those places compartmentalized: serious conversation happened in one area while pure-fluffy, friendly, jokey banter went on in another. Attempting to use a single area for both sounds like a bad idea to me, and it's the sort of thing LessWrong was trying to avoid (for good reason) in order to maintain a high standard and value of conversation, but that places like Tumblr allow and possibly encourage. (I don't really know about Tumblr, since I avoid it, but that's what it looks like from the outside.) There may also be the factor that I had substantially more in common with the people who were around at that time, whereas the internet today is full of a far more diverse set of people who have far less interest in acculturating into strange new environments.

The short-term thinking, slight pain/fear avoidance, and trivial conveniences that shifted everyone from older platforms like vBulletin or LiveJournal to places like Reddit and Tumblr ultimately pattern-match to Moloch in my mind, if they lead to things like less widescale discussion of rationality or decreased development of rationalist-beloved areas. Ending or slowing down open, long-term conversations on important topics is very bad, and I hope LW does get reignited enough to change that progression.

Comment author: RyanCarey 27 November 2016 06:43:11AM *  12 points [-]

Thanks for addressing what I think is one of the central issues for the future of the rationalist community.

I agree that we would be in a much better situation if rationalist discussion were centralized, and that we are instead in a tragedy of the commons: more people would post here if they knew that others would. However, I contend that we're further from that desired equilibrium than you acknowledge. Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:

  1. Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.
  2. Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. This is mostly because of (1). (Although posts on LW do reliably get at least a handful of comments and views)
  3. Comments on LessWrong are more critical and less polite than comments on other sites.
  4. Posts on LessWrong are held in lower regard in academic communities, such as ML and policy, than posts elsewhere, including on Medium.

The incentive that pushes in our favor is that writers can correctly perceive that by writing here, they are participating in a community that develops very well-informed and considered opinions on academic and future-oriented topics. But that is not enough.

To put this more precisely, it seems to me that the incentive gradient is currently pointing far too steeply away from LessWrong for 'I [and several friends] will try and post and comment here more often...' to be anything like a viable solution.

However, I would not go as far as to say that the whole project is necessarily doomed. I would give the following counterproposals:

  • i) Wait for Arbital to build something that serves this purpose, thereby fixing (1)-(4)
  • ii) Build a long list of bloggers who will move back (for some reasonable definition) to LessWrong, or some other such site, if >n other bloggers do. It's the "free state project" type approach: once >n people commit, you "trigger the move", thereby fixing the tragedy of the commons dynamic (see the toy sketch at the end of this comment). Maybe one can independently patch (3) in this context by using this as a Schelling point to improve community norms.
  • iii) Raise funds for a couple of competent developers to make a new LessWrong in order to fix (1) and (2).

I think (i) or (ii) would have some reasonable hope of working. Maybe we should wait to figure out whether (i) will occur, and if not, then proceed with (ii) with or without (iii)?
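
The "trigger the move" mechanic in (ii) is essentially an assurance contract. A toy sketch of the dynamic, with hypothetical names and a hypothetical threshold:

    # Toy model of option (ii): each blogger pledges to move back to LW
    # only if more than n others also pledge; nobody moves until the
    # threshold is crossed. Names and threshold are hypothetical.
    class MovePledge:
        def __init__(self, threshold):
            self.threshold = threshold  # the "n" in ">n other bloggers"
            self.pledged = set()

        def pledge(self, blogger):
            """Record a pledge; return True once the move is triggered."""
            self.pledged.add(blogger)
            return len(self.pledged) > self.threshold

    drive = MovePledge(threshold=2)
    for name in ["alice", "bob", "carol", "dave"]:
        if drive.pledge(name):
            print(f"{name}'s pledge triggered the move "
                  f"({len(drive.pledged)} bloggers committed)")
            break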

Comment author: TheAltar 27 November 2016 08:06:52AM *  4 points [-]

A separate action that interested bloggers could take (especially people just starting new blogs) is to keep posting where they do, but disable comments on their posts and direct readers to the corresponding LW link post to comment on. This is far less ideal, but it lets them post elsewhere while the comment activity appears here on LW.

Comment author: gwern 02 July 2016 08:11:41PM 3 points [-]

Psychology/biology:

  • "Ann Roe's scientists: original published papers" (One of the very few data sets, excluding TIP/SMPY, of extremely intelligent people. I am still reading through them but one impression I get is that the education system in America when most of them were growing up around 1910-1920 was grossly inadequate and unchallenging; many of them seem to only drift into their field when they happen to run into a challenging course in college. Quite a few mention incredibly little access to books and severe poverty (although interestingly, they all come from what are clearly middle/upper-class descent families, even if in some cases they are so poor as to be unable to afford shoes). Smart kids are so much better off these days with Internet access to anything at all they want to read. As I've noted in reading biographies of American scientists, the academic environment pre- and post-WWII is strikingly different than the pressure-cooker race to the bottom we are familiar with now. Relative underperformance in grades compared to females is also a running theme. With the chemists and physicists, home chemistry kits seem to have been nigh universal - which is something that sure doesn't happen these days!)
  • "Gifted Today But Not Tomorrow? Longitudinal Changes in Ability and Achievement in Elementary School", Lohman & Korb 2006 (Challenges in gifted education in elementary or earlier: IQ scores are unstable and so regression to the mean implies that few children in G&T programs will grow up to be gifted.)
  • "Is Education Associated With Improvements in General Cognitive Ability, or in Specific Skills?", Ritchie et al 2015
  • "Understanding the Improvement in Disability Free Life Expectancy In the U.S. Elderly Population", Chernew et al 2016 (Adult disability-free life expectancy continues to increase, due in large part to eye surgery improvements; vision is probably, like falling, the proximate cause of a lot of health issues.)
  • "Nicotine Contents in Some Commonly Used Toothpastes and Toothpowders: A Present Scenario", Agrawal & Ray 2012 (/not sure if harmful or helpful)
  • vision:

    • Orthostatic hypotension: when you stand up and feel like you are about to faint & your vision becomes totally obscured by silver mist
    • Visual snow: when you see the world as slightly fuzzy and noisy, like very gentle translucent static on a TV screen
    • Closed-eye hallucination with phosphenes: when you close your eyes and see a colored background with blobs and lights, especially in a pitch-black room or at night

Technology:

Economics:

Philosophy:

Fiction:

Comment author: TheAltar 14 July 2016 03:42:07PM *  0 points [-]

I have visual snow from trying out a medication. I can confirm that it sucks and is annoying. It's not debilitating though and is mostly just inconvenient.

Then again, it may be slightly harming my ability to focus while reading books. Still checking that out.

Meetup : San Antonio Meetup

0 TheAltar 11 July 2016 01:48AM

Discussion article for the meetup: San Antonio Meetup

WHEN: 17 July 2016 02:00:00PM (-0500)

WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230

Meetup to discuss rationality and all things LessWrong at Yumi Berry.

Look for the sign that says "LW".


Comment author: Pimgd 08 June 2016 03:16:00PM 3 points [-]

I read that in the FAQ as well. ... Weirdly enough, taking that option would make me just feel guilty. I would have gone there, I would have learned, and then I would have said "well this is nice and all but is not as great as I envisioned - it's kinda like counting to 10 instead of immediately screaming at people, and that's not worth all this" - whilst I did get what was offered - lessons, boarding, food, people to talk to... I don't know how to put it. It feels like I'd be hurting other people just to fix my own mistake.

Comment author: TheAltar 08 June 2016 06:11:40PM 3 points [-]

I went through a similar thought process before attending, and decided it was extremely unlikely that I would ask for my money back even if I didn't think the workshop had been worth the cost. That made me decide the offer wasn't a legitimate one for me to treat as real, so I ignored it when making my final decision about whether to go.

I ultimately went and thought it was fully worth it for me. Of the people I spoke to shortly after the workshop, I know 3+ who follow that same pattern, and 1 who thought it hadn't actually been worth it but still did not ask for their money back.

Comment author: Gleb_Tsipursky 18 May 2016 06:39:58AM 0 points [-]

I'm going to the CFAR workshop that starts May 18th, and I want to ask anyone who went to previous workshops: what would you have recommended your pre-workshop self do before and during the workshop? What would you have done differently? Thanks for any advice; I'll convey it to fellow workshop attendees.

Comment author: TheAltar 18 May 2016 02:49:56PM 4 points [-]

Normally I say get plenty of sleep, but I think you asked a bit late to get that answer.
