In my understanding, you are on the right track, but note the difference between taking the action and observing the action.
EDT doesn't assume the agent's action causally determines the state; rather, it is not restricted (as CDT is) from treating the observation of the action as evidence about the state. Consider the problem from a detached perspective. If you saw an agent one-box but did not see the outcome of that choice, then you would still be justified in believing because the Newcomb predictor is...
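To make the evidential reading concrete, here is a minimal sketch of the expected-utility calculation in Newcomb's problem under EDT, conditioning on the action as evidence of the predictor's fill. The accuracy and payoff numbers are illustrative assumptions, not from the comment.

```python
# Hedged sketch: EDT expected utilities in Newcomb's problem.
# The action is treated as evidence about whether the opaque box was filled.

ACCURACY = 0.99      # assumed P(predictor anticipated your actual choice)
BOX_B = 1_000_000    # opaque box: filled iff one-boxing was predicted
BOX_A = 1_000        # transparent box: always contains this amount

# EDT conditions on the action itself:
# P(box B filled | you one-box) = ACCURACY
# P(box B filled | you two-box) = 1 - ACCURACY
eu_one_box = ACCURACY * BOX_B
eu_two_box = (1 - ACCURACY) * BOX_B + BOX_A

print(eu_one_box, eu_two_box)
```

For any reasonably accurate predictor the one-boxing expectation dominates, which is exactly the "observing the action works as evidence" point: the detached observer who sees one-boxing should raise their credence that the box is full.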
I expected far greater pushback from doctors and lawyers, for example, than we have seen so far.
I believe it's a matter of (correlated) motivated reasoning. My doctor and lawyer friends both seem to be excited about the time that AIs will save them as they do their jobs—in both professions there is immense efficiency to gain by automating more rote parts of the work (legal research, writing patient notes, dealing with insurance, etc.)—but seem to believe that AIs will never fully supplant them. When I press the issue, especially with my doctor friend, he t...
I think that, roughly speaking, these are possible outcomes:
From a doomer perspective, items 2-5 are not worth discussing, but if we set that aside...
Option 2 is only actionable for you if you have the power (economic or military) to get control over the first superintelligent AI; otherwise you...
Precisely. And just to trace the profit motive explicitly: many of the features in question that get pop-ups in Office, for example, are just marginally useful enough in some niche that 0.5% of people who see the pop-up might try the feature. In the aggregate, there's some telemetry that says those 0.5% of people spend some very slightly higher proportion of time in the product, and some other analysis demonstrates that people who spend more time in the product are less likely to cancel. Everyone else dismisses the pop-up once, forgets about it, and it's a...
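A back-of-envelope version of that telemetry argument, with entirely made-up numbers, shows why even a 0.5% try rate can justify the pop-up in aggregate:

```python
# Hedged sketch with illustrative numbers: the aggregate-revenue logic
# behind shipping a pop-up that almost everyone dismisses.

users_shown = 10_000_000   # assumed users who see the pop-up
try_rate = 0.005           # the ~0.5% who actually try the feature
churn_lift = 0.002         # assumed monthly churn reduction among triers
monthly_price = 10.0       # assumed subscription price

triers = users_shown * try_rate
extra_retained = triers * churn_lift        # extra subscribers kept / month
revenue_delta = extra_retained * monthly_price

print(f"${revenue_delta:,.0f} per month")
```

On these assumptions the pop-up "earns" a small but positive, measurable amount each month, while the annoyance to the 99.5% who dismiss it never shows up on the same dashboard.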
Since writing this, I have come across Moravec's paradox, which is, in fact, precisely what I intended to get at with this piece.
See also, e.g., the ocean from Stanislaw Lem's Solaris for an amazing fictional account of the inscrutability of intelligences very different from us. It is a case where our intuitions almost necessarily lead us astray.