yes
That's the idea behind the post, yeah. I am referring more to the general culture of the site, since it is relevant here.
I find it strange that our response to "politics is the mindkiller" has been less "how can we think more rationally about politics?" and more "let's avoid politics". If feasible, the former would pay off long-term.
Of course, many general rationality techniques can be applied to politics too. But if politics is still the mindkiller, this may not be enough -- more specific techniques may be needed to deal with the affective override that politics can cause.
Listeners are probably not assuming that the person they are listening to is being honest.
Seconded.
Interesting, thanks for the reply. I agree that it could develop superhuman ability in some domains, even if that ability doesn't manifest in the model's output, so that seems promising (although not very scalable). I haven't read up on mesa-optimizers yet.
I have very little knowledge of AI or the mechanics behind GPT, so this is more of a question than criticism:
If a scaled-up GPT-N is trained on human-generated data, how would it ever become more intelligent than the people whose data it is trained on?
Or maybe good enough is the enemy of better. Regardless, the point's been made.
In my case, I probably wouldn't give my life for less than the lives of a billion strangers, so that ratio would have to be extremely high, to the point where it's probably incalculable.
Why?
I haven't seen "Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More". Why did it get universally negative votes?