Comments

rchplg · 40

Relatedly, on "obviously dropping the ball": has Eliezer tried stronger prescription stimulants?  Given his P(doom) and timelines, I think there's relatively little downside to using them in reasonable quantities, and they can be prescribed. They seem extremely likely to help with fatigue.

From what I've read, the main caveat would be to put stronger blocks on whatever sidetracks Eliezer (e.g., have friends limit access, give a child lock to a trusted person, etc.).

It seems like this hasn't been tried much beyond a basic level, and I'm really curious why not, given Eliezer's and Nate's high P(doom)s.  Several famously productive researchers have done this.

rchplg · 70

You might want to post this on the EA Forum too, if you haven't already considered it.  I think many groups interested in running similar AGISF programs don't read LessWrong but do skim the Forum.

rchplg · 20

I meant trivially do better than the naive thing a human would do, sorry (e.g., versus looking at the sun and the seasons, which is what I think a human trying to tell time would do to locally improve).  I definitely agree it can't trivially do a great job by traditional standards; it wasn't a carefully chosen example.

The broader point was that some subskills can enable better performance on many tasks, which causes spiky performance, in humans at least.  I see no reason why this wouldn't apply to neural networks: e.g., part of the network develops a model of something for one task, and once that model is good enough, it can be reused for very good performance on an entirely different task, likely observed as a relatively sudden, significant improvement.
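
A toy numerical sketch of this dynamic (invented numbers, not real training): a shared internal model improves smoothly, but a second task only benefits once that model crosses a usefulness threshold, so the second task's measured performance jumps suddenly.

```python
# Toy illustration (invented numbers, not real training): a shared internal
# "model" improves smoothly, but the second task only benefits once that
# model is good enough, so its measured accuracy jumps suddenly while the
# directly trained task improves gradually.
import numpy as np

steps = np.arange(1000)
# Shared subskill quality grows smoothly (a logistic curve over training).
shared_model_quality = 1 / (1 + np.exp(-(steps - 500) / 60))

# Task 1 (trained on directly): steady, incremental gains.
task1_acc = 0.3 + 0.5 * steps / steps.max()

# Task 2 piggybacks on the shared model: stuck near its naive baseline
# until the model crosses a usefulness threshold, then performance spikes.
task2_acc = np.where(shared_model_quality > 0.8, 0.95, 0.35)

for t in range(0, 1000, 100):
    print(f"step {t:4d}  task1 {task1_acc[t]:.2f}  task2 {task2_acc[t]:.2f}")
```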

rchplg · 10

A naive analogy, with two tasks for humans: (1) tell time, (2) understand mechanical gears. Training a human on (1) will outperform training them on (2) for a good while, but once the person trained on (2) has a really good model of gears, they can trivially do (1), and their performance on it would spike dramatically.

rchplg · 60

I'll just note that several of these bets don't work as well if I expect discontinuous and/or unevenly distributed progress, as was observed on many individual tasks in PaLM: https://twitter.com/LiamFedus/status/1511023424449114112 (obscured both by reporting percentage performance and by the top-level benchmark averaging 24 subtasks that spike at different levels of scale).

I might expect performance just prior to AGI to look something like 99%, 40%, 98%, 80% across four subtasks, where parts of the network developed (by gradient descent) for certain subtasks enable more general capabilities.
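
A quick sketch of the averaging effect (all parameters invented): if 24 subtasks each spike sharply at a different scale, their average can still look like smooth, gradual progress at the benchmark level.

```python
# Toy illustration (all parameters invented): 24 subtasks that each spike
# sharply at a different scale, averaged into one benchmark number. The
# per-subtask curves are near-discontinuous, but the average looks smooth.
import numpy as np

rng = np.random.default_rng(0)
scale = np.linspace(0, 10, 201)           # stand-in for log model scale
thresholds = rng.uniform(2, 9, size=24)   # each subtask spikes somewhere else

# Sharp per-subtask transitions: ~0 before the threshold, ~1 after.
subtask_perf = 1 / (1 + np.exp(-8 * (scale[:, None] - thresholds[None, :])))
benchmark = subtask_perf.mean(axis=1)     # the reported top-level average

for i in range(0, 201, 25):
    print(f"scale {scale[i]:4.1f}  avg {benchmark[i]:.2f}  "
          f"subtask spread {subtask_perf[i].min():.2f}-{subtask_perf[i].max():.2f}")
```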

rchplg · 10

I have not looked into this much, but I saw what looks like more evidence that boosters do help: https://mobile.twitter.com/DataDrivenMD/status/1469448926562455555

Again, I did not look into much detail (feel free to check Zvi's newer post with some updated data), but the overall weight of evidence seems more in favor of boosters than when I made this comment.

rchplg · 70

This seems right, but I'm also interested in whether boosters might be net harmful versus nothing.

Does original antigenic sin also mean that, e.g., your body would have a harder time fighting off Omicron if infected, because rather than developing new antibodies it would just keep trying to deploy the old ones?

If we have data showing that boosters reduce severity for Omicron, that would seem to answer this.  But do we?

rchplg · 20

Some old links (e.g., ones found through Google) are broken: https://www.lesswrong.com/lw/o5f/wireheading_done_right_stay_positive_without/

Given that this is a major way people find things (coming from Google, including trying to find old posts they remember), I'd try to fix this when possible; a sketch of one approach is below. (Though ignore this if it's just a duplicate of bugs others have reported.)
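
For illustration, a minimal sketch of one way such a fix could work, assuming the old /lw/<id>/<slug>/ paths can be mapped to current /posts/<id>/<slug> URLs from migration data. This is hypothetical, not LessWrong's actual code, and EXAMPLE_NEW_ID is a placeholder, not a real post ID.

```python
# Hypothetical sketch, not LessWrong's actual code: 301-redirect legacy
# /lw/<id>/<slug>/ URLs to current /posts/<new-id>/<slug> URLs using a
# lookup table built from the site's migration data.
from flask import Flask, abort, redirect

app = Flask(__name__)

# Illustrative entry only; "EXAMPLE_NEW_ID" is a placeholder, not a real ID.
LEGACY_POSTS = {
    "o5f": ("EXAMPLE_NEW_ID", "wireheading_done_right_stay_positive_without"),
}

@app.route("/lw/<legacy_id>/<slug>/")
def legacy_redirect(legacy_id: str, slug: str):
    entry = LEGACY_POSTS.get(legacy_id)
    if entry is None:
        abort(404)
    new_id, new_slug = entry
    # Permanent redirect, so search engines update their index to the new URL.
    return redirect(f"/posts/{new_id}/{new_slug}", code=301)
```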