Unknown, your comment strikes me as a good way of looking at it.
The "me of now" as a region of configuration space contains residue of causal relationships to other regions of configuration space ("the past" and my memories of it). And the timeless probability field on configuration space causally connects the "me of now" to the "future" (other regions of configuration space). Just because this is true, and -- even more profoundly -- even though the "me of now" configuration space region has no special status (no shining "moment in the sun" as the privileged focus of a global clock ticking a path through configuration space), I am still what I am and I do what I do (from a local perspective which is all I have detailed information about), which includes making decisions.
Our decisions are based on what we know and believe, so accepting the viewpoint Eliezer has been putting forth is likely to have *some* impact on the decisions we make... I wonder what that impact is, and what it should be.
Re your moral dilemma: you've stated that you think your approach needs a half-dozen or so supergeniuses (on the level of the titans of physics). Unless they have already been found -- and only history can judge that -- some recruitment seems necessary. Whether these essays will attract such supergeniuses is the question.
Demonstrated (i.e., published) tangible and rigorous progress on your AI theory seems more likely to attract brilliant, productive people to your cause.