Crosspost from my blog: https://mechanisticmind.substack.com/p/many-common-problems-are-np-hard What problems will be left for superintelligence? AI models are quickly getting better, but certain important and very ordinary challenges are surprisingly, stubbornly hard. While large language models have made some previously difficult computational tasks trivial, a fundamental class of problems that challenge classical computers...
Crossposted from my personal blog: https://mechanisticmind.com/hyperbolic-discounting-and-pascals-mugging/ TL;DR Hyperbolic discounting, shown in 🟥, is an imperfect approximation of exponential discounting, shown in 🟦. It's commonly pointed out that this causes humans to overvalue near-term rewards, but it's less commonly appreciated that it causes us to overvalue distant rewards as well. There's...
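The claim in that TL;DR can be illustrated numerically. Below is a minimal Python sketch (not from the post; the discount parameters `k` and `r` and the dollar amounts are illustrative assumptions) showing the two effects: hyperbolic discounting produces a preference reversal for near-term rewards, and its fat tail assigns far more value to distant rewards than an exponential curve does.

```python
import math

def hyperbolic(value, delay_days, k=1.0):
    """Hyperbolic discounting: V / (1 + k * D). k is an illustrative choice."""
    return value / (1 + k * delay_days)

def exponential(value, delay_days, r=math.log(2)):
    """Exponential discounting: V * e^(-r * D).
    r = ln(2) is chosen so the two curves agree at D = 1 day."""
    return value * math.exp(-r * delay_days)

# Near-term effect: a preference reversal.
# Today, $100 now beats $110 tomorrow under hyperbolic discounting...
assert hyperbolic(100, 0) > hyperbolic(110, 1)    # 100.0 vs 55.0
# ...but shift both options 30 days into the future and the preference flips.
assert hyperbolic(100, 30) < hyperbolic(110, 31)  # ~3.23 vs ~3.44

# Distant-reward effect: the hyperbolic curve decays polynomially, the
# exponential curve geometrically, so far-future rewards retain much more
# hyperbolic value.
print(hyperbolic(100, 30))   # ~3.23
print(exponential(100, 30))  # ~9.3e-08
```

The reversal is the standard signature of hyperbolic discounting: the discount rate k/(1 + kD) is steep near D = 0 and falls toward zero as D grows, whereas the exponential rate r is constant.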
A recent post from Scott Alexander argues for treating intelligence as a coherent and somewhat monolithic concept. Especially when thinking about ML, the post says, it is useful to think of intelligence as a fairly general faculty rather than a set of narrow abilities. I encourage you to...
If you work in AI, then probably none of this is new to you, but if you’re curious about the near future of this technology, I hope you find this interesting! Reinforcement Learning in LLMs Large Language Models (LLMs) have shown impressive results in the past few years. I’ve noticed...