I'm an admin of LessWrong. Here are a few things about me.
Randomly: if you ever want to talk to me about anything you like, I am happy to be paid $1k for an hour of doing that.
Thanks, v helpful!
Re (2): I am not sure about the one-day-off-per-week. I think it's healthy; I'm also not sure whether, looking back in a year or two, most residents will think "I wish I'd taken it all a bit more measured" or "I'm glad I went all out" that month.
Re (3): Perhaps next Inkhaven, the mandatory things will be:
The average ratings (out of 10) for how helpful they were:
How did https://wordpress.com/ come to be so good?? (Inkhaven brought to you by WordPress ❤️ .)
Love the shout-out! I'll repeat myself once more: it's important to distinguish between WordPress (the open-source software) and WordPress.com (the commercial hosting service run by Automattic). Automattic was founded by Matt Mullenweg, who co-founded the open-source WordPress project, and the company continues to contribute to WordPress, but the two are separate entities.
Currently most X-risk reduction resources are directed by a presumption that AGI is coming in less than a decade. I think this "consensus" is somewhat overconfident, and also somewhat unreal (i.e. it's less of a consensus than it seems). That's a very usual state of affairs, so I don't want to be too melodramatic about it, but it still has concrete bad effects. I wish people would say "I don't have additional clearly-expressible reasons to think AGI is coming very soon, that I'll defend in a debate, beyond that it seems like everyone else thinks that." I also wish people would say "I'm actually mainly thinking that AGI is coming soon because thought leaders Alice and Bob say so," if that's the case. Then I could critique Alice's and/or Bob's stated position, rather than taking potshots at an amorphous, unaccountable ooze.
I'm a bit confused about whether it's actually good. I think I often run a heuristic counter to it... something like:
"When you act in accordance with a position and someone challenges you on it, it's healthy for the ecosystem and culture to give the best arguments for it, and find out whether they hold up to snuff (i.e. whether the other person has good counterarguments). You don't have to change your mind if you lose the argument—because often we hold reasons for illegible but accurate intuitions—but it's good to help people figure out the state of the best arguments at the time."
I guess this isn't in conflict, if you just separately give the cause for your belief? e.g. "I believe it for cause A. But that's kind of hard to discuss, so let me volunteer the best argument I can think of, B."
Added the content warning; thx.
I thought this was going to take the tack that it's still okay to birth people who are definitely going to die soon. I think on the margin I'd like to lose a war with one more person on my team, one more child I love. I reckon it's a valid choice to have a child you expect to die at like 10 or 20. In some sense, every person born dies young (compared to a better society where people live to 1,000).
The reason I'm not having a family is that I'm busy and too poor to hire lots of childcare, but I'd strongly consider doing it if I had a million dollars.
I think Eliezer has often made the meta-observation you are making now, that simple logical inferences take shockingly long to find in the space of possible inferences. I am reminded of him talking about how long backprop took:
In 1969, Marvin Minsky and Seymour Papert pointed out that Perceptrons couldn't learn the XOR function because it wasn't linearly separable. This killed off research in neural networks for the next ten years.
[...]
Then along came this brilliant idea, called "backpropagation":
You handed the network a training input. The network classified it incorrectly. So you took the partial derivative of the output error (in layer N) with respect to each of the individual nodes in the preceding layer (N - 1). Then you could calculate the partial derivative of the output error with respect to any single weight or bias in the layer N - 1. And you could also go ahead and calculate the partial derivative of the output error with respect to each node in the layer N - 2. So you did layer N - 2, and then N - 3, and so on back to the input layer. (Though backprop nets usually had a grand total of 3 layers.) Then you just nudged the whole network a delta - that is, nudged each weight or bias by delta times its partial derivative with respect to the output error.
It says a lot about the nonobvious difficulty of doing math that it took years to come up with this algorithm.
I find it difficult to put into words just how obvious this is in retrospect. You're just taking a system whose behavior is a differentiable function of continuous parameters, and sliding the whole thing down the slope of the error function. There are much more clever ways to train neural nets, taking into account more than the first derivative, e.g. conjugate gradient optimization, and these take some effort to understand even if you know calculus. But backpropagation is ridiculously simple. Take the network, take the partial derivative of the error function with respect to each weight in the network, slide it down the slope.
If I didn't know the history of connectionism, and I didn't know scientific history in general - if I had needed to guess without benefit of hindsight how long it ought to take to go from Perceptrons to backpropagation - then I would probably say something like: "Maybe a couple of hours? Lower bound, five minutes - upper bound, three days."
"Seventeen years" would have floored me.
I am starting to get something from these posts.
I feel confused about the notion that people only want to donate to a thing if they'll be on the hook to keep donating every year forevermore to keep it afloat, as opposed to donating to help it get its business in order so that it can then sustain itself.
Yep, we ask people to submit a wordcount with every post.
Here's the frequency of posts at each length.
Here it is in three simple buckets.
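(For what it's worth, here is a hypothetical sketch of how bucketing like this could be computed from the submitted wordcounts; the cutoffs and example numbers are my own illustrative choices, not the actual ones used.)

```python
from collections import Counter

def bucket(wordcount: int) -> str:
    # Illustrative cutoffs only; the real buckets may differ.
    if wordcount < 1000:
        return "short (<1k words)"
    if wordcount < 3000:
        return "medium (1k-3k words)"
    return "long (3k+ words)"

wordcounts = [450, 1200, 800, 3500, 2100, 650]  # example submissions
print(Counter(bucket(w) for w in wordcounts))
```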