Substacks:
- https://aievaluation.substack.com/
- https://peterwildeford.substack.com/
- https://www.exponentialview.co/
- https://milesbrundage.substack.com/
Podcasts:
- Cognitive Revolution. https://www.cognitiverevolution.ai/tag/episodes/
- Doom Debates. https://www.youtube.com/@DoomDebates
- AI Policy Podcast. https://www.csis.org/podcasts/ai-policy-podcast
Worth checking this too: https://forum.effectivealtruism.org/posts/5Hk96JqpEaEAyCEud/how-do-you-follow-ai-safety-news
Vague thoughts/intuitions:
There are features, such as X_1, which are perfectly recovered.
Just to check: in the toy scenario, we assume the features in R^n are the coordinates in the standard basis, so we have n features X_1, ..., X_n.
Separately, do you have intuition for why they allow the network to learn b as well? Why not fix b at zero?
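To make my mental model concrete, here is a minimal sketch of the setup as I understand it: n sparse features in R^n compressed into m < n hidden dimensions and reconstructed through a ReLU. All the numbers (dimensions, sparsity, training details) are made up for illustration, not taken from the paper. My current guess on b: a learned (typically negative) bias lets the ReLU zero out the small interference terms between features that share directions, which a fixed b = 0 couldn't do.

```python
# Sketch of the toy setup as I understand it (all numbers illustrative).
import torch

n, m = 20, 5  # n sparse features, bottleneck of m < n dimensions
W = torch.nn.Parameter(0.1 * torch.randn(m, n))
b = torch.nn.Parameter(torch.zeros(n))
opt = torch.optim.Adam([W, b], lr=1e-2)

def sample_batch(batch_size=1024, p=0.05):
    # Each feature X_i is independently active with probability p,
    # with magnitude uniform in [0, 1] when active.
    return torch.rand(batch_size, n) * (torch.rand(batch_size, n) < p)

for step in range(2000):
    x = sample_batch()
    x_hat = torch.relu(x @ W.T @ W + b)  # reconstruct through the bottleneck
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(b.detach().min(), b.detach().max())  # b tends to drift negative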
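A quick way to test the intuition would be to freeze b at zero in this sketch and compare reconstruction error; I'd expect the interference between co-occurring features to get noticeably worse.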
I think this is missing from the list: the Whole Brain Architecture Initiative. https://wba-initiative.org/en/25057/
What do we mean by L?
I think the setting is that L is just a fixed function measuring the error between the learnt values and the true values.
I think the confusion could come from using L to refer to both a single instance and the random variable/process.
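A toy illustration of that distinction (everything here is hypothetical: the quadratic L and the uniform distribution for X are just stand-ins):

```python
import torch

# L itself is a fixed, deterministic function of (theta, x); nothing random here.
def L(theta, x):
    return (theta * x - 1.0) ** 2

theta = torch.tensor(0.5)

# Single instance: plugging in one concrete x gives a plain number.
print(L(theta, torch.tensor(2.0)))

# Random variable/process: compose L with a random X, e.g. X ~ Uniform(0, 1).
# L(theta, X) is then itself random; its mean estimates E[L(theta, X)].
X = torch.rand(10_000)
print(L(theta, X).mean())  # Monte Carlo estimate of the expected loss
```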
In Sakana AI's paper on AI Scientist v2, they claim that the system is independent of human code. Based on a quick skim, I think this claim is wrong or deceptive. I wrote up my thoughts here: https://lovkush.substack.com/p/are-sakana-lying-about-the-independence
The main trigger was this line in the system prompt for idea generation: "Ensure that the proposal can be done starting from the provided codebase."