- 80% of a classroom of ~25 people thought philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "should you major in philosophy or cognitive science if you want to study how minds work?"
This seems really bizarre -- what's the class you asked this in? I feel like I'd get a dramatically different answer.
Why was connectionism unlikely to succeed?
Can you clarify? I'm not sure what you mean by this with respect to machine learning.
https://plato.stanford.edu/entries/connectionism/
Pretty sure it means the old school of "neural networks are the way to go"?
My guess is she's asking, "why was connectionism considered unlikely to succeed?"
Yup, that's what I mean. Specifically, I had Pinker in mind: https://forum.effectivealtruism.org/posts/3nL7Ak43gmCYEFz9P/cognitive-science-and-failed-ai-forecasts
- Will a capacity for "doing science" be a sufficient condition for general intelligence?
We can probably already "do science" with current LLMs, at least at the level of the median scientist. Automating data analysis and paper writing isn't a big leap for existing models.
Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/4TcaBNu7EmEukjGoc/questions-about-ai-that-have-bothered-me
As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim to be the first or only one to have thought about them. I'll keep updating this list.
(The title of this post alludes to Galen Strawson's book "Things That Bother Me".)
First posted: 12/6/22
Last updated: 1/30/23
General Cognition
Deception
Agent Foundations
Theory of Machine Learning
Epistemology of Alignment (I've written about this here)
Philosophy of Existential Risk
Teaching and Communication
Governance/Strategy