All of myutin's Comments + Replies

myutin

I know the answer to "couldn't you just-" is always "no", but couldn't you just make an AI that doesn't try very hard? That is, one that seeks the smallest possible intervention that ensures a 95% chance of achieving whatever goal it's intended for.

This isn't a utility maximizer, because it cares about intermediate states. Some of the coherence theorems wouldn't apply.
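One way to picture the proposal (my own rough framing, not a worked-out design) is as constrained satisficing: among candidate plans, pick the least disruptive one that still clears the 95% bar, instead of the one that maximizes expected success. The `Action` fields and the "size" metric below are hypothetical placeholders for whatever intervention measure such an agent would actually use.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    size: float       # how large/disruptive the intervention is (assumed metric)
    p_success: float  # estimated probability the goal is achieved

def smallest_sufficient_action(actions, threshold=0.95):
    """Return the least disruptive action whose success probability meets the threshold."""
    viable = [a for a in actions if a.p_success >= threshold]
    if not viable:
        return None  # no mild-enough plan clears the bar
    return min(viable, key=lambda a: a.size)

# Hypothetical candidates, for illustration only
candidates = [
    Action("do nothing", size=0.0, p_success=0.10),
    Action("small nudge", size=1.0, p_success=0.96),
    Action("take over everything", size=100.0, p_success=0.999),
]
print(smallest_sufficient_action(candidates))  # -> the "small nudge" action
```

The point of the sketch is only that the selection rule never trades a larger intervention for extra probability once the threshold is met, which is where it departs from expected-utility maximization.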

Vladimir_Nesov
The only bound on incoherence is the ability to survive. So most of alignment is not about EU maximizers; it's about things that might eventually build something like EU maximizers, and about how they would formulate their/our values. If there is reliable global alignment security, preventing rival misaligned agents from getting built anywhere in the world where they would have a fighting chance, then the only thing calling for a transition to better agent foundations is making more efficient use of the cosmos, bringing out more of the potential of the current civilization's values. (See also: hard problem of corrigibility, mild optimization, cosmic endowment, CEV.)
myutin

Hey, I'm new here. I'm looking for help with dealing with akrasia. My most common failure mode is as follows: when I have many different tasks to do, I'm not able to start any one of them.

I'm planning on working through the Hammertime sequence: I've asked for 9 days off work, for a total of 13 days free. Will this be achievable / helpful? What other resources are available?

Specs:
DC area. Have read MoR, Sig. Digits, Inadequate Equilibria, and half of the Sequences. Heavy background in Math/CS/Physics.