From my very much outside view, extending the rate limiting to 3 comments a week indefinitely would have solved most of the stated issues.
Sorry for the delayed reply... I don't get notifications of replies, and the LW RSS feed has been broken for me for years now, so I only poke my head in here occasionally.
Well that sounds... scary, at best. I hope you've come out of it okay.
50/100. But that rather exciting story is best not told in a public forum.
Though these distinctions are kinda confusing for me.
Well, the absence of something otherwise expected would be negative, and the appearance of something otherwise unexpected would be positive?
For example, a false pregnancy is a "positive somatization". Or stigmata. I'm having trouble coming up with intentionally "good" examples, other than visualizations helping you shoot hoops better or something. Not sure if the new-agey "think yourself better" is actually a thing; hence my question. "Send more blood to your hands" seems like a good example, actually: not something one would normally think possible except by physical labor.
I really like this post! (I have liked most of your posts over the last decade and a bit. They also inspired me to learn hypnosis, which led to rather cataclysmic changes in my life.) I think therapists call this "somatization", which can be both positive and negative, in the same sense that hypnotic (or psychotic) illusions are. You seem to focus mainly on negative somatization (no swelling) and a bit on positive ones, though I suspect that positive somatization (both beneficial and detrimental) is just as controllable with the intent/expectation fusion. Maybe visualizing making the shot really does help steady your hand.
I once wrote a post claiming that human learning is not computationally efficient: https://www.lesswrong.com/posts/kcKZoSvyK5tks8nxA/learning-is-asymptotically-computationally-inefficient
It looks like the last three years of AI progress suggest that learning is sub-linear in resource use, though probably not logarithmic the way I claimed it is for humans. The scaling benchmarks show something like capability increase ~ 4th root of model size: https://epoch.ai/data/ai-benchmarking-dashboard
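To spell out what that exponent implies (a rough sketch; here $C$ is whatever unsaturated capability score the benchmarks track, $N$ is model size, and the 1/4 is the eyeballed trend rather than a fitted value):

```latex
C \propto N^{1/4} \;\Longrightarrow\; N \propto C^{4}
% Doubling capability costs roughly 2^4 = 16x the model size:
% steep, but polynomial, not the exponential blowup a purely
% logarithmic learning curve would imply.
```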
Looks like the hardest part in this model is how to "choose robustly generalizable subproblems and find robustly generalizable solutions to them", right?
How does one do that in any systematic way? What are the examples from your own research experience where this worked well, or at all?
Right, eventually it will. But abstraction building is very hard! If you have any other option, like growing in size, I would expect it to be taken first.
I guess I should be a bit more precise. Abstraction building at the same level as before is probably not very hard. But going up a level is basically equivalent to inventing a new way of compressing knowledge, which is a qualitative leap.
The argument goes through on the probabilities of each possible world; the limit toward perfection is not singular. Given the 1000:1 reward ratio, one ought to one-box to maximize EV against any predictor that is substantially better than chance. Anyway, this is an old argument where people rarely manage to convince the other side.
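To spell out the arithmetic (a sketch with the small box normalized to 1, the big box to 1000, and $p$ the probability that the predictor correctly predicts your actual choice):

```latex
\mathbb{E}[\text{one-box}] = 1000\,p
\mathbb{E}[\text{two-box}] = 1 + 1000\,(1 - p)
% One-boxing wins whenever 1000p > 1 + 1000(1 - p),
% i.e. p > 1001/2000 \approx 0.5005.
```

So any accuracy noticeably above a coin flip already favors one-boxing; that is all "substantially better than chance" needs to mean here.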
I once conjectured that
> Studying a subject gets progressively harder as you learn more and more, and the effort required is conjectured to be exponential or worse … the initial ‘honeymoon’ phase tends to peter out eventually.
In terms of AI, this would mean that model size/power consumption would be exponential in "intelligence" (whatever that might mean; probably some unsaturated benchmark score). Do the last 3 years confirm or refute this?
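Stated slightly more formally (a sketch; $C$ is an unsaturated benchmark score, $N$ the model size or power budget, $k$ some constant):

```latex
N \propto e^{kC} \;\Longleftrightarrow\; C \propto \tfrac{1}{k}\,\log N
% The conjecture in scaling-law form: resources exponential in "intelligence",
% equivalently capability only logarithmic in resources.
```

Confirming it would mean capability curves that flatten logarithmically with scale; a power-law relation would refute the strict version while still being sub-linear in resources.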
If confirmed, would it not give us some optimism that we are not all gonna die, because the "true" superintelligence we cannot ever hope to control would require so many resources that we would have to colonize the lightcone as non-superintelligent humans to get there?