I’ve read that OpenAI and DeepMind are hiring for multi-agent reasoning teams. I can imagine that gives another source of scaling.
I figure things like Amdahl's law / communication overhead impose some limits there, but MCTS could probably find useful ways to divide the reasoning work and have the agents communicate with at least human-level efficiency.
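To gesture at the Amdahl's law point: here's a minimal sketch (toy numbers of my own, nothing from the hiring posts) of how the serial fraction of the reasoning work caps the speedup you can get from adding agents:

```python
def amdahl_speedup(p: float, n: float) -> float:
    """Best-case speedup when a fraction p of the work can be divided
    among n workers and the rest is inherently serial (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even if 90% of the reasoning parallelises cleanly, the serial 10%
# caps the speedup at 1 / (1 - 0.9) = 10x, however many agents you add.
print(f"{amdahl_speedup(0.9, 10):.2f}x")         # ~5.26x with 10 agents
print(f"{amdahl_speedup(0.9, 1_000_000):.2f}x")  # approaching the 10x ceiling
```

Communication overhead would only make this worse, which is why I suspect the gains from this scaling axis flatten out unless the work can be divided unusually well.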
I might have missed something, but it looks to me like the first ordering is phrased as if the self-improvement and the risk aversion are actually happening simultaneously.
If an AI had the ability to self-improve for a couple of years before it developed risk aversion, for instance, I think we end up in the "maximal self-improvement" / "high risk" outcomes.
This seems like a big assumption to me:
...But self-improvement additionally requires that the AI be aware that it is an AI and be able to perform cutting-edge machine learning research. Thus, solving s
This is the kind of thing that has been in my head as a "nuclear meltdown rather than nuclear war" kind of outcome. I've been pondering what the largest bad outcome might be that requires the least increase over the capabilities we have today.
A Big Bad scenario I've been mentally poking at is "what would happen if the internet went away, and stayed away?". I'd struggle to communicate, inform myself about things, or pay for things. I can imagine it would severely degrade the various businesses / supply chains I implicitly rely on. People might panic. It seems like i...
The little pockets of cognitive science that I've geeked out about - usually in the predictive processing camp - have featured researchers who are either quite surprised by, or going to great lengths to double-underline, the importance of language and culture in our embodied / extended / enacted cognition.
A simple version of the story I have in my head is this: We have physical brains thanks to evolution, and then by being an embodied predictive perception/action loop out in the world, we started transforming our world into affordances for new perceptio...
Actually I think the shoggoth mask framing is somewhat correct, but it also applies to humans. We don't have a single fixed personality, we are also mask-wearers.
Motivation: I'm asking this question because one thing I notice is the unstated assumption that AGI/AI will be a huge deal, and just how big a deal it is would change virtually everything about how LW works, depending on the answer. I'd really like to know why LWers hold that AGI/ASI will be a big deal.
This is confusing to me.
I've read lots of posts on here about why AGI/AI would be a huge deal, and the ones I'm remembering seemed to do a good job at unpacking their assumptions (or at least a better job than I would do by default). It see...
It seems - at least to me - like the argumentation around AI and alignment would be a good source of new beliefs, since I can't figure it all out on my own. People also seem to be figuring out new things fairly regularly.
Between those two things, I'm struggling to understand what it would be like to assert a static belief "field X doesn't matter" in a way that is reasonably grounded in what is coming out of field X, particularly as field X evolves.
Like, if I believe that AI Alignment won't matter much and I use that to write off the field of AI Align...
Meta: I might be reading the question incorrectly, but my impression is that it lumps "outside views about technology progress and hype cycles" together with "outside views about things people get doom-y about".
If it is about "people being doom-y" about things, then I think we are playing more in the realm of things where getting it right on the first try or first few tries matters.
Expected values seem relevant here. If people think there is a 1% chance of a really bad outcome and try to steer against that, even if they are correct you are going to see...
I've split this off into its own comment, to talk a little more about how I've found Kegan-related things useful for myself.
I'm skeptical that global stages are actually real, but I still think there is plenty of use to be had from the thinking and theory behind them. I treat it as a lossy model, and I still find it helpful.
An example of something suggested by Kegan's theories that has helped me: communicating across sizeable developmental gaps or subject/object divides is really difficult, and if you can spot that in advance you can route aroun...
This reads to me like you're making a universal claim that these things aren't useful, based on "Some of these concepts are useful. Some aren't" and "I recommend evicting from your thoughts".
If that is your claim, I'd like to see lots more evidence or argument to go along with it - enough to balance the scales against the people who have been claiming to find these things useful.
If what you are saying is more that you don't find them useful yourself, or that you are skeptical of other people's claims that they are getting use out of these things, that is ...
It looks like there might be an Omicron variant which doesn't have the S gene dropout [1]. I'm wondering how that might impact various modelling efforts, but haven't had time to think it through.
[1] https://www.abc.net.au/news/2021-12-08/qld-coronavirus-covid-omicron-variant/100682280
Most of my resources are Haskell related.
If you are new to programming, I usually recommend "How to Design Programs". It is the only text I know of that seems to teach people how to design programs, rather than expecting that they'll work it out themselves based on writing code for a few years.
For a starting point for programmers, I usually recommend the Spring 2013 version of CIS194 - "Introduction to Haskell" - from UPenn. The material is good quality and it has great homework. Our meetup group relayed the lectures, so there a...
Thanks! I'm not on Facebook, but I have reached out to the not-very-active Slate Star Codex meetup folks and hope to have a chat with them about what meetup options would work for them. I'll talk to some of my collaborators about reaching out to the Facebook group.
Hi all. My name is Dave. I recently went along to some AI Risk for Computer Scientists workshops and consequently read Rationality: AI to Zombies, HPMOR, and The Codex, and have been generally playing with CFAR tools and slowly thinking more and more AI-safety-related thoughts.
A few coworkers have also been along to those workshops, and some other people in my various circles have been pretty interested in the whole environment, and so I'm currently polling a few people for interest in setting up a LessWrong meetup in Brisbane, Australia. I'm loo...
I've seen a bunch of people talking about how recent reasoning models are only useful for tasks that we can automatically verify.
I'm not sure this is necessarily true.
Reading the rStar paper has me thinking that if someone is able to turn the RL handle on mostly-general reasoning - using automatically verifiable tasks to power the training - it seems plausible that they might end up locking onto something that generalises enough to be superhuman on other tasks.
It's a shame that little things - counting, tokenization - seem like they're muddying th...