
This post contains a long list of sound arguments that exist in tandem with a narrative which may not actually be true. Most of the points are valid regardless, but whether they carry much weight in aggregate, and whether any of the conclusions reached actually matter, depends heavily on what lens we're looking through and what has actually been going on at Open Phil and OpenAI.

I can imagine a compelling competing narrative in which Open Phil has decided that AI safety is important and thinks the most effective thing it can do with a ton of its money is use it to make the world safer against that x-risk. Open Phil lacks useful information on the topic (since it is a very hard topic), so it outsources the actual research and the spending of the money to an organization that seems better suited to doing just that: OpenAI. (OpenAI may not be a good choice for that, but that's a separate discussion.) However, since Open Phil is donating so much money and doesn't really know what OpenAI might do with it in practice, it ensures that a person it trusts business-wise sits on the board of directors, to see that the money ends up being spent in line with the original intent. (A good backup plan when there are open questions about whether any given group working on AI is doing more to help or to harm.)

Gwern makes a quick Fermi estimate here of how much OpenAI actually costs to run per year, and reminds us that while $1 billion has been "committed" to OpenAI, that's really just a press-release social statement about a pseudo-promise from people who are known to be flaky and are under no obligation to hand over the money. If we estimate OpenAI to be running on $9 million per year, then $30 million is a very hefty donation, giving the company roughly three more years of runway. That's a big deal for whether OpenAI continues to exist at all, and if it already has $9 million coming in per year from another source, the grant could roughly double its yearly income and allow it to expand into lots of new areas as a result.
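A minimal sketch of that back-of-the-envelope arithmetic (the $9 million/year burn rate and the $9 million/year of other income are Gwern's rough guesses, not official figures):

```python
# Back-of-the-envelope arithmetic for the Open Phil grant to OpenAI.
# Both dollar figures below are Gwern's rough guesses, not official numbers.
annual_burn = 9e6     # estimated yearly operating cost, USD
other_income = 9e6    # assumed yearly income from other sources, USD
grant = 30e6          # Open Phil's donation, USD

# Spent alone, the grant buys roughly 3.3 extra years of operation.
extra_runway_years = grant / annual_burn

# Spread evenly over that same period on top of the existing income,
# the grant instead roughly doubles the yearly budget.
yearly_budget_multiple = (other_income + grant / extra_runway_years) / other_income

print(f"~{extra_runway_years:.1f} extra years of runway")
print(f"~{yearly_budget_multiple:.1f}x the assumed yearly budget")
```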

~

There are a number of inductive leaps going on within the large model presented in the original post that I think are worth pointing out and examining. I'll also append what I take to be the community's affect/opinion to the end of each, because I've been up all night and think it's worth noting.

  1. Open Phil is now taking AI risk seriously as a threat to the world and has pledged $30 million of the money donated to it toward the problem. (Yay! Finally!)
  2. Open Phil is giving that money to OpenAI. (Boo! Give it to MIRI!)
  3. Holden is now going to be a board member at OpenAI as part of the deal. (Boo! We don't like him because he screwed up #2 and we don't respect his judgments about AI. Someone better should be on the board instead!) (Yay! He didn't write the people we don't like a blank check. That's a terrible idea in this climate!)

These are the parts that actually matter: whether the money is going somewhere actually useful for reducing x-risk, and whether Holden as board member is there just to ensure the money isn't being wasted on useless projects, or whether he'll be shaping the distribution of funds much larger than $30 million in ways that are harmful (or helpful!) to AI Safety. He could end up spending them wisely in ways that make the world directly safer, directly less safe, safer because the money was spent badly rather than on alternatives that would have been actively harmful, or less safe because it wasn't spent on better options.

Insofar as any of us should particularly care about all of this, it will have far more to do with these points than with anything else. They also sound far more tractable, since the other problems you mention about Open Phil sound pretty shitty and I don't expect many of them to change much at this point.

This is probably my favorite link post that's appeared on LW thus far. I'm kinda disappointed more people haven't checked it out and upvoted it.

Having the best posts be taken away from the area where people can easily see them is certainly a terrible idea, architecture-wise.

The solution to this is what all normal subreddits do: sticky the post and change the color of its title so that it both stands out and stays in the same visual range as everything else.

"You can deduce that verbally. But I bet you can’t predict it from visualizing the scenario and asking what you’d be suprised or not to see."

I like this.

In my mind, this plugs into Eliezer's recent Facebook post about thinking of the world in mundane terms, in terms of what is merely-real, in terms of how you personally would go fix a sink or buy groceries at the store, vs. the way you think about everything else in the world. These methods of thought, in which you visualize actual objects and physics in the real world, think of them in terms of bets, and check your surprise at what you internally simulate, all point at a mindset that is extremely important to learn and possess as a skill.

I hadn't sufficiently considered that the long-term changes to LW occurred within the context of the overall changes to the internet. Thank you very much for pointing it out. Reversing the harm Moloch has done to this situation is extremely important.

I remember posting in the old vBulletin days, when a person would use a screen name but anonymity was much higher and the environment itself felt much better to exist in. Oddly enough, the places I posted back then were hardly free of hostility: they had subpopulations who would go out of their way to deliberately and intentionally insult people as harshly as possible. And yet... for some reason I felt substantially safer, more welcome, and more accepted there than I have anywhere else online.

To at least some extent there was a sort of compartmentalization going on in those places: serious conversation happened in one area while pure-fluffy, friendly, jokey banter went on in another. Attempting to use a single area for both sounds like a bad idea to me, and it's the sort of thing LessWrong was trying to avoid (for good reason) in order to maintain high standards and valuable conversation, while places like Tumblr allow and possibly encourage it. (I don't really know about Tumblr, since I avoid it, but that's what it looks like from the outside.) There may also be the factor that I had substantially more in common with the people who were around at that time, whereas the internet today is full of a far more diverse set of people who have far less interest in acculturating to strange new environments.

The short-term thinking, slight pain/fear avoidance, and trivial conveniences that shifted everyone from older platforms like vBulletin or LiveJournal to places like Reddit and Tumblr ultimately pattern-match to Moloch in my mind, insofar as they lead to less widescale discussion of rationality or slower development of rationalist-beloved areas. Ending or slowing down open, long-term conversations on important topics is very bad, and I hope that LW does get reignited enough to change that progression.

A separate action available to bloggers who are interested (especially people just starting new blogs) is to continue posting where they do, but disable comments on their posts and link people to the corresponding LW link post to comment on. This is far less ideal, but it lets them post elsewhere while having the comment content appear here on LW.

I have visual snow from trying out a medication. I can confirm that it sucks and is annoying. It's not debilitating, though; it's mostly just inconvenient.

Then again, it may be slightly harming my ability to focus while reading books. Still checking that out.

I went through a similar thought process before attending and decided it was extremely unlikely I would ask for my money back even if I didn't think the workshop had been worth the cost. That made me decide the offer wasn't a real option for me, and I ignored it when making my final decision about whether to go.

I ultimately went and thought it was fully worth it for me. I know 3+ people who fit that same pattern, whom I spoke to shortly after the workshop, and one who thought it hadn't actually been worth it but did not ask for their money back.

Normally I say get plenty of sleep, but I think you asked a bit late to get that answer.

This looks like it. Thank you!
