Andrew Keenan Richardson

ML researcher


I love this idea, but I have a logistics concern: it might be difficult or impossible to reserve the island for the time we want.

Small edit: going from $100,000,000 per human genome to $1,000 per genome is 5 orders of magnitude, not 6.
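
To spell out the arithmetic:

$$\frac{\$100{,}000{,}000}{\$1{,}000} = \frac{10^8}{10^3} = 10^5,$$

i.e. five factors of ten.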

I'm looking forward to Part 2! (and Part 3?)

Yes, this is basically what people are doing.  

I'm the Astera researcher that Nathan spoke to. This is a pretty bad misrepresentation of my views, based on a 5-minute conversation Nathan had with me about this subject (at the end of a technical interview).

A few responses:

  • We do publish open-source code at https://github.com/Astera-org, but we are considering moving to closed source at some point in the future due to safety concerns
  • It is untrue that we are "not interested in securing [our] code or models against malicious actors", but it is true that we are not currently working on the interventions Nathan suggested
  • My personal view is that AI alignment needs to be tailored to the model, an approach that I am working on articulating further and hope to write up on this forum
  • Steve Byrnes works at the Astera Institute on alignment issues

For those who don't want to break out a calculator, Wikipedia has it here:

https://en.wikipedia.org/wiki/Equal_temperament#Comparison_with_just_intonation

You can see that the perfect fourth and perfect fifth are very close to 4/3 and 3/2 respectively. This is basically just a coincidence, and we use 12 notes per octave because these almost-nice fractions exist. A major scale uses the 2-2-1-2-2-2-1 pattern because that hits the best matches with low denominators, skipping 16/15 but hitting 9/8, for example.
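
If you'd rather compute the comparison than read the Wikipedia table, here's a rough Python sketch (the choice of intervals is just illustrative) that prints how far each equal-tempered step lands from the nearby just-intonation ratio:

```python
import math

# Compare 12-tone equal temperament steps to nearby just-intonation ratios.
just = {2: 9/8, 4: 5/4, 5: 4/3, 7: 3/2}    # semitone steps -> just ratio
for steps, ratio in sorted(just.items()):
    tet = 2 ** (steps / 12)                # equal-tempered frequency ratio
    cents_off = 1200 * math.log2(tet / ratio)
    print(f"{steps} semitones: 12-TET {tet:.4f} vs just {ratio:.4f} ({cents_off:+.1f} cents)")
```

The fourth and fifth land within about 2 cents of 4/3 and 3/2, while the major third misses 5/4 by roughly 14 cents.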

Imagine you had an oracle which could assess the situation an agent is in and produce a description of an ML architecture that would correctly "solve" that situation.

I think for some strong versions of this oracle, we could create the ML component from the architecture description with modern methods. I think this combination could effectively act as AGI over a wide range of situations, again with just modern methods. It would likely be insufficient for linguistic tasks.

I think that's what this article is getting at. The author is someone from Uber. Does anyone know whether other articles have been written about this line of thinking?

I also go to church regularly, albeit a Unitarian Universalist church, and I am an atheist.