I've seen a bunch of people talking about how recent reasoning models are only useful for tasks which we are able to automatically verify.
I'm not sure this is necessarily true.
Reading the rStar paper has me thinking: if someone is able to turn the RL handle on mostly-general reasoning - using automatically verifiable tasks to power the training - it seems plausible that they could end up locking onto something that generalises enough to be superhuman on other tasks.
It's a shame that little things - counting, tokenization - seem like they're muddying the waters for LLM poetry (although maybe my understanding there is out of date). If that weren't the case, poetry feels like it'd be a nice way to check out-of-distribution reasoning power.
I've read that OpenAI and DeepMind are hiring for multi-agent reasoning teams. I can imagine that providing another source of scaling.
I figure things like Amdahl's law and communication overhead impose some limits there, but MCTS could probably find useful ways to divide the reasoning work and have the agents communicate with at least human-level efficiency.
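To make the Amdahl's law point concrete, here's a minimal sketch in Python (the function and numbers are mine, purely illustrative) of how a serial fraction of the reasoning work caps whatever speedup extra agents can buy:

```python
def amdahl_speedup(serial_fraction: float, n_agents: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work
    cannot be split across `n_agents`."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_agents)

# Illustrative numbers only: if 20% of the reasoning is inherently serial,
# no number of extra agents gets you past a 1 / 0.2 = 5x speedup.
print(amdahl_speedup(0.2, 4))    # 2.5
print(amdahl_speedup(0.2, 100))  # ~4.81, approaching the 5x ceiling
```

So a swarm of agents only buys so much on its own; the interesting question is how far good work division can shrink that serial fraction.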
I might have missed something, but it looks to me like the first ordering might be phrased as though the self-improvement and the risk aversion are actually happening simultaneously.
If an AI had the ability to self-improve for a couple of years before it developed risk aversion, for instance, I think we'd end up in the "maximal self improvement" / "high risk" outcomes.
This seems like a big assumption to me:
But self-improvement additionally requires that the AI be aware that it is an AI and be able to perform cutting-edge machine learning research. Thus, solving self-improvement appears to require more, and more advanced, capabilities than apprehending risk.
If an AI has enough resources and...
This is the kind of thing that has been in my head as a "nuclear meltdown rather than nuclear war" sort of outcome. I've been pondering what the largest bad outcome might be that requires the least increase over the capabilities we have today.
A Big Bad scenario I've been mentally poking at is "what happens if the internet goes away, and stays away?". I'd struggle to communicate, inform myself about things, and pay for things. I can imagine it would severely degrade the various businesses / supply chains I implicitly rely on. People might panic. It seems like it would be pretty harmful.
That scenario is assuming AI capable enough to seize, for example,...
The little pockets of cognitive science that I've geeked out about - usually in the predictive processing camp - have featured researchers who either are quite surprised by, or go to great lengths to double underline, the importance of language and culture in our embodied / extended / enacted cognition.
A simple version of the story I have in my head is this: we have physical brains thanks to evolution, and then, by being an embodied predictive perception/action loop out in the world, we started transforming our world into affordances for new perceptions and actions. Things took off when language became a thing - we could transmit categories and affordances and...
Motivation: I'm asking this question because one thing I notice is that there's an unstated assumption that AGI/AI will be a huge deal, and how much of a big deal it is would change virtually everything about how LW works, depending on the answer. I'd really like to know why LWers hold that AGI/ASI will be a big deal.
This is confusing to me.
I've read lots of posts on here about why AGI/AI would be a huge deal, and the ones I'm remembering seemed to do a good job at unpacking their assumptions (or at least a better job than I would do by default). It seems to me like those assumptions have been stated and...
It seems - at least to me - like the argumentation around AI and alignment would be a good source of new beliefs, since I can't figure it all out on my own. People also seem to be figuring out new things fairly regularly.
Between those two things, I'm struggling to understand what it would be like to assert a static belief that "field X doesn't matter" in a way that is reasonably grounded in what is coming out of field X, particularly as field X evolves.
Like, if I believe that AI Alignment won't matter much and I use that to write off the field of AI Alignment, it feels like I'm either...
Meta: I might be reading some of the question incorrectly, but my impression is that it lumps "outside views about technology progress and hype cycles" together with "outside views about things people get doom-y about".
If it is about "people being doom-y" about things, then I think we are playing more in the realm of things where getting it right on the first try or first few tries matters.
Expected values seem relevant here. If people think there is a 1% chance of a really bad outcome and try to steer against that, then even if they are correct you are going to see 99 people pointing at things that didn't turn out to be a...
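As a toy illustration of that expected-value point (all numbers made up), the 1% tail can dominate the calculation even though, 99 times out of 100, onlookers only ever see a warning that "didn't come true":

```python
# Made-up numbers, purely to illustrate the asymmetry.
p_bad = 0.01                  # probability of the really bad outcome
cost_of_bad = 1_000_000       # how bad it is, in arbitrary units
cost_of_steering = 1_000      # cost of taking the worry seriously

expected_loss_if_ignored = p_bad * cost_of_bad       # 10,000
print(expected_loss_if_ignored > cost_of_steering)   # True: steering is worth it
# ...and yet ~99% of the time the bad outcome never materialises,
# so in hindsight the steering looks like it was about nothing.
```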
I've split this off into its own comment, to talk a little more about how I've found Kegan-related things useful for myself.
I'm skeptical that global stages are actually real, but I still think there is plenty of use to be had from the thinking and theory behind them. I treat it as a lossy model and I still find it helpful.
An example of something suggested by Kegan's theories that has helped me: communicating across sizeable developmental gaps or subject/object divides is really difficult, and if you can spot that in advance you can route around some trouble.
One of Kegan's students - Jennifer Garvey Berger - wrote a book about applying this...
This reads to me like you're making a universal claim that these things aren't useful - based on "Some of these concepts are useful. Some aren't" and "I recommend evicting from your thoughts".
If that is your claim, I'd like to see lots more evidence or argument to go along with it - enough to balance the scales against the people who have been claiming to find these things useful.
If what you are saying is more that you don't find them useful yourself, or that you are skeptical of other people's claims that they are getting use out of these things, that is another matter entirely! Although in this case I'm left wondering...
There are various meetups around the world that work on altruistic software development. Random Hacks of Kindness is at the top of my mind at the moment.
The main site appears to be down at the moment, but the Australian and Canadian sites are still up, and the Australian one is asking for projects which help with the COVID-19 response.
That got me wondering. What software projects would be high leverage and not currently saturated? Which of them are amenable to being worked on by groups of developers with mixed skills and backgrounds?
This can probably be broken down further into software for different groups. Healthcare workers probably have different needs at this time than people who are struggling to make the case for working from home. My gut feeling is that efforts that help with social support and mental health support will also have high value over time.