All of JJ Hepburn's Comments + Replies

Recently, many AI safety movement-building programs have been criticized for attempting to grow the field too rapidly and thus:

Can you link to these?

This is great! Thanks for doing this.

Would you be able to add people's titles and affiliations for some context? Possibly also links to their websites, LinkedIn or similar.

You can now also subscribe to be automatically emailed when new events are added or updated. You can opt for either daily or weekly updates.

Sign up here:

https://airtable.com/shrEp75QWoCrZngXg

I have always thought of it as a vehicle blind spot, not an ocular blind spot. It's more related to the structure of the situation than to the individual.

7Duncan Sabien (Deactivated)
I think that makes sense/is valid for the standard metaphor, but I want to reiterate that the standard metaphor doesn't actually apply most of the times people use it. Like, you can't rotate your way out of color blindness.  You can't lean and look over your shoulder to solve a color blindness problem.
Answer by JJ Hepburn10

How many places did you apply for before getting your current role or position?

How much time have you spent on applying for open opportunities?

What are some things that your org has that others don’t and should?

What are some things that other orgs have that your org should have?

Answer by JJ Hepburn120

What are some boring parts of your job that you have to do?

What are some frustrating parts of your job that you have to do?

What aspects of your job/place of work are different from what you expected from the outside?

Do you feel like you have good job security?

Not exactly sure what I was trying to say here. Probably using the PhD as an example of a path to credentials.

Here are some related things I believe:

  • I don't think a PhD is necessary or the only way
  • University credentials are not now, and should not be, the filter for people working on these problems
  • There is often a gap between people's competencies and their ability to signal them
  • Credentials are the default signal for competence
  • Universities are incredibly inefficient ways to gain competence or to signal it
  • Assessing people is expensive and so review
...
2TekhneMakre
Cool. I appreciate you making these things explicit. If the bottleneck is young people believing that if they work on the really hard problems, then there will be funding for them somehow, then, it seems pretty important for funders to somehow signal that they would fund such people. By default, even using credentials as a signal at all, signals to such young people that this funder is not able/willing to do something weird with their money. I think funders should probably be much more willing to say to someone with (a) a PhD and (b) boring ideas, "No, sorry, we're looking for people working on the hard parts of the problem".

Could do Go, Poker, or some E-Sports with commentary. Poker, unlike chess, has the advantage that the commentators can see all of the players' hands but each player can only see their own. Commentators will often talk about what a player must be thinking in a given situation and account for what is or isn't observable to that player.

This would certainly be easier to scale, but the quality would not be as good.

With the plan and numbers I lay out above, you actually finish friendly AI in 2036, which is the 10% point.

Yes, if you have a solution in 2026, it isn't likely to be relevant to something used in 2050. But 2026 is the planned solution date and 2050 is the median TAI date.

The numbers I used above are just to demonstrate the point, though. The broad idea is that coming up with a solution/theory to alignment takes longer than planned. Having a theory isn't enough; you still need some time to make it count. Then TAI might come at the early end of your probability distribution.

It's pretty optimistic to plan that TAI will come at your median estimate and that you won't run into the planning fallacy.
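
As a rough illustration of the arithmetic (all numbers are hypothetical and just restate the example above: a solution planned for 2026 that slips by a decade, against a TAI distribution with its 10% point at 2036 and its median at 2050):

    # Rough sketch of the timeline argument above; every number here is an
    # assumption taken from the example in the comment, not a real forecast.
    planned_solution_year = 2026   # when the plan says the theory is done
    planning_fallacy_slip = 10     # assumed overrun, in years
    actual_solution_year = planned_solution_year + planning_fallacy_slip

    tai_10th_percentile = 2036     # assumed: 10% chance TAI arrives before this
    tai_median = 2050              # assumed: 50% chance TAI arrives before this

    print(f"Solution actually ready around {actual_solution_year}")
    if actual_solution_year >= tai_10th_percentile:
        print("Roughly a 1-in-10 chance TAI arrives before the solution is ready,")
        print("even though the plan looked comfortable against the median.")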

1Tao Lin
What I'm trying to say is that it's much harder to do AI alignment research while models are still small, so TAI timelines somewhat dictate the progress of AI alignment research. If I wanted my 5 year plan to have the best chance at success, I would have "test this on a dog-intelligence-level AI" in my plan, even if I thought that probably wouldn't arrive by 2036, because that would make AI alignment research much easier.
1JJ Hepburn
With the plan and numbers I lay out above, you actually finish friendly AI in 2036, which is the 10% point.

Really excited about this! Donation on the way