The "Less Wrong position"? Are we all supposed to have one position here? Or did you mean to ask what EY's position is?
I don't think I understand your statement/question. Are you saying that in order to know what an AI would do, you just need to simulate it with an AI?
I think you're saying that you could simulate what an AGI would do via any computer. If you're simulating an AGI, are you not building an AGI?
So how does a poor summary of Hume amount to a refutation of Solomonoff induction?
Could I say something like, "Alanf wants us to think he refuted a book, but he can't even spell the author's name right"?
Ok, but what does this have to do with the capital of Italy?
I am new to this stuff, but didn't we have something like 200 years of observations supporting Newton's theories? How would a Bayesian have adjusted their models here? I use this example as a "we now know better" case. Is it the "new" observation that is key?
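As a toy illustration of the kind of update a Bayesian might make in that situation (all numbers here are made up for illustration, not historical data), here is a minimal sketch of Bayes' rule applied to two rival hypotheses when one anomalous observation comes in:

```python
# Toy Bayesian model comparison. H1 = "Newtonian gravity",
# H2 = "some alternative". The numbers below are hypothetical,
# chosen only to illustrate the mechanics of updating.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities via Bayes' rule."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two centuries of successful predictions give Newton a very high prior.
priors = {"newton": 0.99, "alternative": 0.01}

# An anomalous observation (think Mercury's perihelion) is unlikely
# under Newton but well explained by the rival theory.
likelihoods = {"newton": 0.01, "alternative": 0.5}

posteriors = bayes_update(priors, likelihoods)
# Confidence in Newton drops sharply (from 0.99 to roughly 0.66 with
# these made-up numbers), but the strong prior means one surprising
# observation does not discard the theory outright.
```

So on this picture, yes: the new observation is what drives the shift, but the weight of the prior evidence determines how far a single anomaly can move you.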
Where can I learn more about Critical Rationalism?
(Not from curi and his group, as I am not welcome there, and tbh after seeing this wall of shame: http://curi.us/2215-list-of-fallible-ideas-evaders I am glad I never posted any personal information.)
Try searching for "parsimony", maybe? It's another way to express Occam's razor.
I just discovered he keeps a wall of shame for people who left his forum:
http://curi.us/2215-list-of-fallible-ideas-evaders
Are you on this wall?
I am uncomfortable with this practice. I think I am banned from participating in curi's forum now anyway due to my comments here, so it doesn't affect me personally, but it is a little strange to have this list with people's personal information up.
She's just not a philosopher.
Don't get me wrong, I agree with a ton of her observations, just as I agree with a ton of Buddhism. It is just not philosophy.
The duplicate accounts are what you got banned for here, no? It seems self-evident, but I don't hold anything against you for that.
fallibledupicates or whatever it was.
Which literature do you recommend?