nc

Views my own, not my employers.


Comments


I've noticed they perform much better on graduate-level ecology/evolution questions (in a qualitative sense: the answers are fuller as well as technically accurate). I think translating that into a "usefulness" metric is always going to be difficult, though.


I would have found it helpful if your report included a ROSES-type diagram or other flowchart showing the steps in your paper collation. That would bring it closer in line with other scoping reviews and make your methodology easier to follow.


Linguistic Drift, Neuralese, and Steganography

In this section you use these terms as though there is a body of research underneath them. I'm very interested in understanding this behaviour, but I wasn't aware it was being measured. Is anyone currently working on modelling or measuring linguistic drift, with manuscripts you could link to?


My impression is that that's a little simplistic, but I also don't have the best knowledge of the market outside WGS/WES and related tools. That particular market is a bloodbath. Maybe there's better scope in proteomics, metabolomics, or areas I know nothing about.


My impression is that much of this style of innovation is happening inside research institutes and then diffusing outward. There are plenty of people doing "boring" infrastructure work at the Sanger Institute, EMBL-EBI, etc. And you all get it for free! I can, however, see that on-demand services for biotech are a little different.


This fail-state is particularly worrying to me, although it is not obvious whether there is enough time for such an effect to actually intervene on the future outcome.


Are you aware of anyone else working on the same topic?


I was reading the UK National Risk Register earlier today and thinking about this. It's notable to me that the top-level disaster severity is capped very low, at roughly thousands of casualties or billions in economic losses. The register does note, though, that AI is a chronic risk being managed under a new framework (one I can't find precedent for).


I do think this comes back to the messages in On Green, and also to why the post went down like a cup of cold sick: rationality is about winning. Obviously nobody on LW wants to "win" in the sense you describe, but on the margin, I think, people would choose more winning over more harmony.

The future will probably contain less of the way of life I value (or something entirely orthogonal), but then that's the nature of things.
