Since writing this, I have come across Moravec's paradox, which is, in fact, precisely what I intended to get at with this piece.

See also e.g. the ocean from Stanislaw Lem's Solaris for an amazing fictional account of the inscrutability of intelligences very different from us. It is a case where our intuitions almost necessarily lead us astray.

In my understanding, you are on the right track, but note the difference between taking the action and observing the action.

EDT doesn't assume the agent's action causally determines the state, but rather that you are not restricted (as in CDT) from considering how observing the action may work as evidence about the state. Consider the problem from a detached perspective. If you saw an agent one-box but did not see the outcome of that choice, then you would still be justified in believing \(P(\text{box is full} \mid \text{one-box}) \approx 1\), because the Newcomb predictor is usually accurate, right?

So, more precisely, your formulation could be stated as:

\(P(\text{state} \mid \text{take action}) = P(\text{state})\), but \(P(\text{state} \mid \text{observe action}) \neq P(\text{state})\) in general.

In other words, the action is independent of the state, but the observation of the action isn't necessarily. Also see e.g. Joe Carlsmith's discussion of this, most interestingly:

In particular, I suspect that attractive versions of EDT (and perhaps, attractive attempts to recapture the spirit of CDT) require something in the vicinity of “following the policy that you would’ve wanted yourself to commit to, from some epistemic position that ‘forgets’ information you now know.”

The epistemic position you have to use to evaluate EDT is strange. But thinking about yourself as a detached observer of actions (past, present, and anticipated/hypothetical future) is a useful framing for me.
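
To make the contrast concrete, here's a minimal sketch of how the two theories score one-boxing vs. two-boxing. The numbers are assumptions for illustration (99% predictor accuracy, the standard $1M/$1k payoffs), not part of any canonical problem statement:

```python
# Toy Newcomb computation with assumed numbers, to show where EDT and
# CDT diverge.
ACCURACY = 0.99      # assumed P(predictor guessed your action correctly)
BIG = 1_000_000      # opaque box: filled iff predictor predicted one-boxing
SMALL = 1_000        # transparent box: always there

# EDT: treat the (observed) action as evidence about the state,
# i.e. condition on it: P(box full | one-box) = ACCURACY.
edt_one_box = ACCURACY * BIG
edt_two_box = (1 - ACCURACY) * BIG + SMALL

# CDT: the state is already fixed, so the action can't move P(box full).
# Whatever prior p you pick, two-boxing beats one-boxing by exactly SMALL.
p = 0.5
cdt_one_box = p * BIG
cdt_two_box = p * BIG + SMALL

print(f"EDT: one-box {edt_one_box:>9,.0f}  two-box {edt_two_box:>9,.0f}")
print(f"CDT: one-box {cdt_one_box:>9,.0f}  two-box {cdt_two_box:>9,.0f}")
```

Conditioning on the action makes one-boxing win under EDT (990,000 vs. 11,000 here), while under CDT two-boxing dominates for every prior.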

I expected far greater pushback from doctors and lawyers, for example, than we have seen so far.

I believe it's a matter of (correlated) motivated reasoning. My doctor and lawyer friends both seem to be excited about the time that AIs will save them as they do their jobs—in both professions there is immense efficiency to be gained by automating the more rote parts of the work (legal research, writing patient notes, dealing with insurance, etc.)—but they seem to believe that AIs will never fully supplant them. When I press the issue, especially with my doctor friend, he tells me that regulations and insurance will save doctors, to which I say... sure, but only until e.g. AI-powered medicine has real statistics showing better health outcomes than human doctors, as most of us here would expect. I can imagine the same initial defense from a lawyer who cannot yet imagine an AI being allowed by regulation to represent someone.

Then there's all the usual stuff about how difficult it can be for many people to imagine the world changing so much in the next several years.

I've also heard this argument: sure, AI might take everyone's job, but if that's inevitable anyway, it's still rational to be in an elite profession, because those professions will last slightly longer and/or capture more economic surplus before society breaks down, if it does. On that point, I agree.

More broadly, speaking from the outside (I am a software engineer), the cultures of the elite professions have always seemed rather self-assured to me: everything is fine, nothing is a problem, and these elite professionals will always be rich... which means that when the first credible threat to that standing hits, like a jurisdiction allowing fully autonomous doctors/lawyers/etc., it will be pandemonium.

Precisely. And just to trace the profit motive explicitly: many of the features that get pop-ups in Office, for example, are just marginally useful enough in some niche that 0.5% of the people who see the pop-up might try the feature. In the aggregate, there's some telemetry showing that those 0.5% spend a very slightly higher proportion of their time in the product, and some other analysis demonstrating that people who spend more time in the product are less likely to cancel. Everyone else dismisses the pop-up once, forgets about it, and it's annoying on the margin but means nothing.
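
A back-of-the-envelope sketch, with entirely made-up numbers, of why a pop-up that 99.5% of users dismiss can still look like a win in the dashboards:

```python
# Hypothetical funnel arithmetic; every number here is assumed.
users_shown    = 10_000_000
adoption_rate  = 0.005     # the 0.5% who try the feature
baseline_churn = 0.020     # monthly cancellation rate, non-adopters
adopter_churn  = 0.018     # very slightly lower among adopters

adopters = users_shown * adoption_rate
saved = adopters * (baseline_churn - adopter_churn)
print(f"{adopters:,.0f} adopters -> ~{saved:,.0f} fewer cancellations/month")
# 50,000 adopters -> ~100 fewer cancellations per month. Multiply by
# revenue per user and by dozens of such features, and the incentive is
# clear, even though each individual pop-up is a net annoyance.
```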

Follow that pattern for 20 years, releasing many such features, and we get overloaded, busy, confusing UIs by a thousand cuts, but also a pretty big moat, created by supporting exactly the workflow that someone in an office job has been doing exactly that way for a long time and really doesn't want to adjust. Mainstream corporate culture doesn't mind this at all, partly because some software products have held functional monopolies for decades and many workplaces have never had the chance to experience anything different, and partly because those precise, fiddly little features make a product really sticky, at the expense of the user experience for everyone else.

(Also, to your GitHub commit history example—yes! I can't even go to the address bar and punch in e.g. &page=100, because they use cursor-based pagination! My rage knows no bounds—and drives me to the CLI tool!)
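
For anyone who hasn't hit this: a toy sketch of the difference, with a hypothetical in-memory list standing in for a commit history (this is not GitHub's actual API, just the general shape of offset vs. cursor pagination):

```python
# Why cursor-based pagination blocks "jump to page 100".
commits = [f"commit_{i}" for i in range(1000)]

# Offset pagination: any page is directly addressable (?page=100).
def page(n, size=10):
    return commits[(n - 1) * size : n * size]

# Cursor pagination: each response only yields an opaque cursor to the
# NEXT page, so reaching page 100 takes 99 sequential requests.
def after(cursor, size=10):
    start = 0 if cursor is None else commits.index(cursor) + 1
    batch = commits[start : start + size]
    return batch, (batch[-1] if batch else None)

cursor = None
for _ in range(99):                    # walk to "page 100" the hard way
    _, cursor = after(cursor)
print(page(100) == after(cursor)[0])   # True, but 1 request vs. 100
```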