In response to comment by Alexei on Optimal Exercise
Comment author: RomeoStevens 12 July 2015 04:12:49AM 1 point
Comment author: Alexei 12 July 2015 03:58:17PM 0 points

Perfect, thank you!

In response to Optimal Exercise
Comment author: Alexei 12 July 2015 01:16:18AM 0 points

Any research on myofibrillar vs. sarcoplasmic hypertrophy? Some articles I've read suggest that training with heavy loads and few reps increases the former and leads to strength gains, while training with light loads and many reps increases the latter and leads to size gains.

Comment author: Squark 07 June 2015 07:15:17PM 1 point

To the best of my understanding, G_remaining is the amount of remaining research towards UFAI, not the remaining time. Researchers realizing the danger means less research on UFAI and more research on FAI, thus lower G_e' and higher F_e'.
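
A minimal sketch of this reading of the model, in Python. All of the numbers, and the assumption that progress is linear in researcher-years, are illustrative, not taken from the post:

```python
# Toy race model: FAI research must finish before UFAI research does.
# F_remaining / G_remaining are amounts of research (researcher-years),
# not calendar time; all numbers below are made up for illustration.
F_remaining = 500.0  # researcher-years of FAI work left
G_remaining = 400.0  # researcher-years of UFAI/AGI work left

def years_to_finish(remaining, researchers, per_researcher_rate=1.0):
    """Calendar years until `remaining` researcher-years of work are done."""
    return remaining / (researchers * per_researcher_rate)

# Before researchers realize the danger: most effort goes to AGI.
print(years_to_finish(G_remaining, researchers=90))  # ~4.4 years to UFAI
print(years_to_finish(F_remaining, researchers=10))  # 50.0 years to FAI

# After they realize it (lower G_e', higher F_e'): effort shifts.
print(years_to_finish(G_remaining, researchers=40))  # 10.0 years to UFAI
print(years_to_finish(F_remaining, researchers=60))  # ~8.3 years to FAI
```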

Comment author: Alexei 10 June 2015 03:53:50AM 0 points

Ah, you are right! Thanks.

Comment author: Squark 05 June 2015 07:41:40PM 0 points

Raising the sanity waterline doesn't affect G_remaining. Instead, it means increasing F_e' / G_e'.

Comment author: Alexei 06 June 2015 06:24:31PM 0 points

It could increase G_remaining by making more AI researchers realize that AI could be dangerous, thus postponing the kind of AI they would write/run.

Comment author: Alexei 04 June 2015 05:16:06AM 1 point

By raising the sanity waterline for AI researchers, one could, in principle, extend G_remaining.

Comment author: jessicat 03 June 2015 10:02:59PM 13 points

This model seems quite a bit different from mine, which is that FAI research is about reducing FAI to an AGI problem, and that solving AGI takes more work than doing this reduction.

More concretely, consider a proposal such as Paul's reflective automated philosophy method, which might be implementable using episodic reinforcement learning. This proposal has problems, and it's not clear that it works -- but if it did, then it would have reduced FAI to a reinforcement learning problem. Presumably, any implementation of this proposal would benefit from reinforcement learning advances in the AGI field.

Of course, even if a proposal like this works, it might require better or different AGI capabilities than UFAI projects need. I expect this to be true for black-box FAI solutions such as Paul's. This presents additional strategic difficulties. However, I think the post fails to model these difficulties accurately. The right answer here is to get AGI researchers to develop (and not publish anything about) enough AGI capabilities for FAI without running a UFAI in the meantime, even though the capabilities to run one exist.

Assuming that this reflective automated philosophy system doesn't work, it could still be the case that there is a different reduction from FAI to AGI that can be created through armchair technical philosophy. This is often what MIRI's "unbounded solutions" research is about: finding ways you could solve FAI if you had a hypercomputer. Once you find a solution like this, it might be possible to define it in terms of AGI capabilities instead of hypercomputation, and at that point FAI would be reduced to an AGI problem. We haven't put enough work into this problem to know that a reduction couldn't be created in, say, 20 years by 20 highly competent mathematician-philosophers.

In the most pessimistic case (which I don't think is too likely), the task of reducing FAI to an AGI problem is significantly harder than creating AGI. In this case, the model in the post seems to be mostly accurate, except that it neglects the fact that serial advances might be important (so we get diminishing marginal progress towards FAI or AGI per additional researcher in a given year).
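
A toy illustration of the serial-advances point; the square-root scaling is purely an assumption I'm making for illustration, not a claim about the real returns curve:

```python
def annual_progress(researchers, serial_fraction=0.5):
    """Research progress per year when some of the work is inherently serial.
    With serial_fraction=0.5, doubling the workforce yields only ~1.4x progress.
    The functional form is an illustrative assumption, not from the post."""
    return researchers ** (1.0 - serial_fraction)

for n in (1, 10, 100, 1000):
    print(n, round(annual_progress(n), 1))
# 1 -> 1.0, 10 -> 3.2, 100 -> 10.0, 1000 -> 31.6:
# each 10x in researchers buys only ~3.2x in yearly progress,
# so adding researchers can't fully substitute for calendar time.
```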

Comment author: Alexei 04 June 2015 05:12:47AM 3 points

I feel like this still fits their model pretty well. You need F_remaining time to find the AGI->FAI solution, while there is G_remaining time before someone builds an AGI.

Comment author: Steve_Rayhawk 03 March 2015 12:34:34AM 2 points

[nvm]

Comment author: Alexei 03 March 2015 01:46:51AM 0 points

Sorry, it's not yet ready for public consumption. Please delete your post.

Comment author: Daniel_Burfoot 01 January 2015 04:40:25PM 12 points

To paraphrase some business guru: I know half of my happiness budget is wasted, but I don't know which half.

Comment author: Alexei 17 January 2015 06:15:27AM 1 point

A/B test?
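
For instance, a minimal sketch of what that could look like; the spending conditions, the 1-10 mood ratings, and the sample sizes are all hypothetical:

```python
# Hypothetical: alternate periods with and without one spending category,
# log a daily 1-10 happiness rating, then compare the two conditions.
from statistics import mean
from scipy import stats  # independent-samples t-test

with_spending    = [7, 8, 6, 9, 7, 8, 8, 6, 7, 9]  # made-up daily ratings
without_spending = [7, 7, 6, 8, 7, 8, 7, 6, 7, 8]  # made-up daily ratings

t_stat, p_value = stats.ttest_ind(with_spending, without_spending)
print(f"mean with: {mean(with_spending):.1f}, "
      f"mean without: {mean(without_spending):.1f}, p = {p_value:.2f}")
# A large p-value would suggest that spending category isn't buying happiness.
```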

Comment author: John_Maxwell_IV 11 January 2015 12:03:20PM 0 points

Did this ever end up working out?

Comment author: Alexei 13 January 2015 01:12:24AM 0 points

As far as I can remember, I never heard from them.

Comment author: Alexei 22 November 2014 10:27:49PM 5 points

My advice would be to stop playing games cold turkey and direct that time and focus into what you want to do.
