While I believe SC2 and Dota would fall today given sufficient effort, the models didn't quite perform at a superhuman level, and as far as I'm aware no community bots do either.
One reason it's plausible that today's or tomorrow's LLMs could produce brief simulations of consciousness or even qualia is that something similar happens with dreams in humans. Dreams are likely some form of information processing/compression/garbage collection, yet they still produce (badly) simulated experiences as a clear side effect of working with human experience data.
I still want something even closer to GiveWell but for AI Safety (though it is easier to find where to donate now than before). Hell, I wouldn't mind if LW itself listed recommended charities in a prominent place (though I guess LW now mostly asks for Lightcone donations instead).
Thanks for sharing this. Based on the About page, my 'vote' as an EU citizen working in an ML/AI position could conceivably count for a little more, so it seems worth doing. I'll put it in my backlog and aim to get to it on time (it does seem like a lengthy task).
If you don't know whom to believe, then falling back on prediction markets, or at least expert consensus, is not the worst strategy.
Do you truly not believe that for your own life - to use the examples there - solving aging, curing all disease, and solving energy aren't even more valuable? To you? Perhaps you don't believe those are possible, but then that's where the whole disagreement lies.
And if you are talking about superintelligent AGI and automation, why even talk about output per person? I thought you at least believe people will be automated out and output thus decoupled from them?
Does he not believe in AGI and superintelligence at all? If so, why not just say that?
AI could cure all diseases and “solve energy”. He mentions “radical abundance” as a possibility as well, but beyond the R&D channel
This is clearly about superintelligence, and the mechanism through which it would happen in that scenario is straightforward and often discussed. If he disagrees, then either he doesn't believe in AGI (or at least advanced AGI), or he believes that solving energy and curing disease aren't that valuable. Or he is purposefully talking about a pre-AGI scenario while arguing against post-AGI views?
to lead to an increase in productivity and output *per person*
This quote certainly suggests the latter. It's just hard to tell whether that's due to bad reasoning or done on purpose to promote his start-up.
AI 2027 is more useful for its arguments than for the specific year, but even if things go less aggressively, prediction markets (or at least Manifold) put the chance at 61% before 2030, 65% before 2031, and 73% by 2033.
I, similarly, can see it happening slightly later than 2027-2028 because some specific issues will take longer to solve than others, but I see no reason to think a timeline beyond 2035, like yours, let alone 30 years, is grounded in reality.
It also doesn't help that when I take your arguments and apply them to what would then have seemed like a very optimistic 2020 forecast about progress by 2025 (or even Kokotajlo's last forecast), those same arguments would have similarly rejected what has actually happened.
I believe he means rationality-associated discourse, and it's not like there are many contenders.
There has indeed been no one with that level of reach who has spread as much misinformation and started as many negative rumors in the space as David Gerard and RW. Whoever the second-closest contender is, they're likely not even close.
A LOT of the negative press that LW, EY, and a ton of other places and people have gotten online can be traced back to him. If it weren't for RW, LW would be much, much more respected.
A somewhat related thing I do is read/watch stories with clever/intelligent/rational (or whatever I want to be) characters, such as Death Note or HPMOR (I have a bunch of other examples). This both seems to prime me to think a bit like those characters (or to enter a mode where I think my narrative is similar to theirs) and gives me role models to fall back on in some situations (like in your Naruto example). It has definitely at least partially worked (might be placebo), as I almost always have more motivation, which I act on, to study or do productive things after watching/reading such a story.