Darcey has not written any posts yet.

This year's ACX Meetups Everywhere event in South Bend, Indiana, USA.
Location: Chicory Cafe in Downtown South Bend (*not* the one in Mishawaka) – https://plus.codes/86HMMPGX+3W
Contact: darcey.riley@gmail.com
I'd be interested to see links to those papers!
I'm a little bit skeptical of the argument in "Transformers are not special" -- it seems like, if there were other architectures which had slightly greater capabilities than the Transformer, and which were relatively low-hanging fruit, we would have found them already.
I'm in academia, so I can't say for sure what is going on at big companies like Google. But I assume that, following the 2017 release of the Transformer, they allocated different research teams to different directions: some to scaling, and others to developing new architectures. It seems like, at least in NLP, all of the big flashy new models have come about via scaling. This... (read more)
Thanks for this post! More than anything I've read before, it captures the visceral horror I feel when contemplating AGI, including some of the supposed FAIs I've seen described (though I'm not well-read on the subject).
One thought though: the distinction between wrapper-minds and non-wrapper-minds does not feel completely clear-cut to me. For instance, consider a wrapper-mind whose goal is to maximize the number of paperclips, but rather than being given a hard-coded definition of "paperclip", it is instructed to go out into the world, interact with humans, and learn about paperclips that way. In doing so, it learns (perhaps) that a paperclip is not merely a piece of metal bent into a... (read more)
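To make the distinction I have in mind concrete, here is a minimal, purely hypothetical Python sketch (none of these names come from the post): both agents share the same fixed outer objective, "maximize the paperclip count", but one plugs in a hard-coded definition of "paperclip" while the other plugs in a concept it keeps revising from human feedback. Whether the second still counts as a "wrapper-mind" is exactly what feels unclear to me.

```python
# Toy illustration: same fixed outer goal, two different notions of "paperclip".
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Object:
    # Hypothetical feature vector, e.g. {"bent_metal": 1.0, "holds_paper": 0.0}
    features: Dict[str, float]


def hardcoded_is_paperclip(obj: Object) -> bool:
    # Wrapper-mind A: the concept is frozen at design time.
    return obj.features.get("bent_metal", 0.0) > 0.5


@dataclass
class LearnedConcept:
    # Wrapper-mind B: the concept is a weighted feature score,
    # revised whenever humans label examples for it.
    weights: Dict[str, float] = field(default_factory=dict)

    def score(self, obj: Object) -> float:
        return sum(self.weights.get(k, 0.0) * v for k, v in obj.features.items())

    def update(self, obj: Object, human_says_paperclip: bool, lr: float = 0.1) -> None:
        # Crude perceptron-style nudge toward the human judgment.
        target = 1.0 if human_says_paperclip else 0.0
        error = target - self.score(obj)
        for k, v in obj.features.items():
            self.weights[k] = self.weights.get(k, 0.0) + lr * error * v

    def __call__(self, obj: Object) -> bool:
        return self.score(obj) > 0.5


def count_paperclips(world: List[Object], is_paperclip: Callable[[Object], bool]) -> int:
    # The outer "wrapper" objective is identical for both minds:
    # maximize this count. Only the concept plugged into it differs.
    return sum(1 for obj in world if is_paperclip(obj))
```

The outer loop stays a maximizer either way; the question is whether letting the inner concept keep shifting under human input makes the system meaningfully less of a wrapper-mind.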
Aha, thanks for clarifying this; I was going to ask this too. :)