Ben Pace

I'm an admin of LessWrong. Here are a few things about me.

  • I generally feel more hopeful about a situation when I understand it better.
  • I have signed no contracts nor made any agreements whose existence I cannot mention.
  • I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness for the social consequences of doing so.
  • It is wrong to directly cause the end of the world. Even if you are fatalistic about what is going to happen.

Randomly: if you ever want to talk to me about anything you like for an hour, I am happy to be paid $1k to do that.

(Longer bio.)

Sequences

  • AI Alignment Writing Day 2019
  • Transcript of Eric Weinstein / Peter Thiel Conversation
  • AI Alignment Writing Day 2018
  • Share Models, Not Beliefs

Comments

8 Questions for the Future of Inkhaven
Ben Pace · 13h

Yep, we ask people to report a wordcount with every post they submit.

Here's the frequency of posts at each length.

Here it is in three simple buckets.

8 Questions for the Future of Inkhaven
Ben Pace · 18h

Thanks, v helpful!

Re (2): I am not sure about the one-day-off-per-week. I think it's healthy; I'm also not sure whether, looking back in a year or two, most residents will think "I wish I'd taken it all a bit more measured" or "I'm glad I went all out" that month.

Re (3): Perhaps next Inkhaven, the mandatory things will be:

  1. Publish 500 words every day.
  2. Share your writing with a minimum of one feedback circle per week, and with one advisor per week.
8 Questions for the Future of Inkhaven
Ben Pace · 19h

The average ratings (out of 10) for how helpful each group was:

  • Fellow residents: 7.4
  • Your assigned coach: 7.4
  • Contributing writers: 7.0
The problem of graceful deference
Ben Pace · 2d

How did https://wordpress.com/ come to be so good?? (Inkhaven brought to you by WordPress ❤️ .)

Love the shout-out! I will repeat myself once more: it's important to distinguish between WordPress (the open-source software) and WordPress.com (the commercial hosting service run by Automattic). Automattic was founded by Matt Mullenweg, who co-founded the open-source WordPress project, and the company continues to contribute to WordPress, but they're separate entities.

The problem of graceful deference
Ben Pace · 2d

Currently most X-risk reduction resources are directed by a presumption that AGI is coming in less than a decade. I think this "consensus" is somewhat overconfident, and also somewhat unreal (i.e. it's less of a consensus than it seems). That's a very usual state of affairs, so I don't want to be too melodramatic about it, but it still has concrete bad effects. I wish people would say "I don't have additional clearly-expressible reasons to think AGI is coming very soon, that I'll defend in a debate, beyond that it seems like everyone else thinks that.". I also wish people would say "I'm actually mainly thinking that AGI is coming soon because thoughtleaders Alice and Bob say so.", if that's the case. Then I could critique Alice's and/or Bob's stated position, rather than taking potshots at an amorphous unaccountable ooze.

I'm a bit confused about whether it's actually good. I think I often run a heuristic counter to it... something like: 

"When you act in accordance with a position and someone challenges you on it, it's healthy for the ecosystem and culture to give the best arguments for it, and find out whether they hold up to snuff (i.e. whether the other person has good counterarguments). You don't have to change your mind if you lose the argument—because often we hold reasons for illegible but accurate intuitions—but it's good to help people figure out the state of the best arguments at the time."

I guess this isn't in conflict, if you just separately give the cause for your belief? e.g. "I believe it for cause A. But that's kind of hard to discuss, so let me volunteer the best argument I can think of, B."

Breaking the Hedonic Rubber Band
Ben Pace · 2d

Added the content warning; thx.

Mourning a life without AI
Ben Pace · 5d

I thought this was going to take the tack that it's still okay to birth people who are definitely going to die soon. I think on the margin I'd like to lose a war with one more person on my team, one more child I love. I reckon it's a valid choice to have a child you expect to die at like 10 or 20. In some sense, every person born dies young (compared to a better society where people live to 1,000).

I'm not having a family because I'm busy and too poor to hire lots of childcare, but I'd strongly consider doing it if I had a million dollars.

Legible vs. Illegible AI Safety Problems
Ben Pace · 8d

I think Eliezer has often made the meta-observation you are making now: that simple logical inferences take shockingly long to find in the space of possible inferences. I am reminded of him talking about how long backprop took.

In 1969, Marvin Minsky and Seymour Papert pointed out that Perceptrons couldn't learn the XOR function because it wasn't linearly separable.  This killed off research in neural networks for the next ten years.

[...]

Then along came this brilliant idea, called "backpropagation":

You handed the network a training input.  The network classified it incorrectly.  So you took the partial derivative of the output error (in layer N) with respect to each of the individual nodes in the preceding layer (N - 1).  Then you could calculate the partial derivative of the output error with respect to any single weight or bias in the layer N - 1.  And you could also go ahead and calculate the partial derivative of the output error with respect to each node in the layer N - 2.  So you did layer N - 2, and then N - 3, and so on back to the input layer.  (Though backprop nets usually had a grand total of 3 layers.)  Then you just nudged the whole network a delta - that is, nudged each weight or bias by delta times its partial derivative with respect to the output error.

It says a lot about the nonobvious difficulty of doing math that it took years to come up with this algorithm.

I find it difficult to put into words just how obvious this is in retrospect.  You're just taking a system whose behavior is a differentiable function of continuous parameters, and sliding the whole thing down the slope of the error function.  There are much more clever ways to train neural nets, taking into account more than the first derivative, e.g. conjugate gradient optimization, and these take some effort to understand even if you know calculus.  But backpropagation is ridiculously simple.  Take the network, take the partial derivative of the error function with respect to each weight in the network, slide it down the slope.

If I didn't know the history of connectionism, and I didn't know scientific history in general - if I had needed to guess without benefit of hindsight how long it ought to take to go from Perceptrons to backpropagation - then I would probably say something like:  "Maybe a couple of hours?  Lower bound, five minutes - upper bound, three days."

"Seventeen years" would have floored me.

People Seem Funny In The Head About Subtle Signals
Ben Pace · 8d

I am starting to get something from these posts.

Reasons against donating to Lightcone Infrastructure
Ben Pace · 10d

I feel confused by the notion that people only want to donate to a thing if they'll be on the hook to keep donating every year forevermore to keep it afloat, as opposed to donating to help it get its business in order so that it can then sustain itself.

Posts

  • Benito's Shortform Feed · 23 karma · 7y · 333 comments
  • 10 Types of LessWrong Post · 20 karma · 7h · 0 comments
  • 8 Questions for the Future of Inkhaven · 20 karma · 1d · 16 comments
  • 5 Things I Learned After 10 Days of Inkhaven · 88 karma · 2d · 2 comments
  • Breaking the Hedonic Rubber Band · 20 karma · 3d · 4 comments
  • The Inkhaven Residency · 137 karma · 3mo · 35 comments
  • LessOnline 2025: Early Bird Tickets On Sale · 37 karma · 8mo · 5 comments
  • Open Thread Spring 2025 · 20 karma · 8mo · 50 comments
  • Arbital has been imported to LessWrong · 281 karma · 9mo · 30 comments
  • The Failed Strategy of Artificial Intelligence Doomers · 142 karma · 9mo · 77 comments
  • Thread for Sense-Making on Recent Murders and How to Sanely Respond · 109 karma · 9mo · 146 comments
Wikitag Contributions

  • LessWrong Reacts · a month ago
  • LessWrong Reacts · a month ago
  • LessWrong Reacts · a month ago
  • LessWrong Reacts · a month ago (+3354/-3236)
  • LessWrong Reacts · 2 months ago
  • LessWrong Reacts · 2 months ago (+638/-6)
  • LessWrong Reacts · 2 months ago (+92)
  • LessWrong Reacts · 2 months ago (+248)
  • Adversarial Collaboration (Dispute Protocol) · 10 months ago