From a retributive perspective, I'd expect any punishment to be proportional to the mental harm the crime caused to others, which once again is small in comparison to the potential lifespans here.
That depends: if the crime is murder, how do you count the harm caused by ending someone's near-infinite life?
I'm not sure what the effects on deterrence would be, though.
I haven't fully worked out my theory of deterrence, but the crude first approximation, as briefly discussed here, is that the disutility to the criminal of the punishment should be greater than the utility they received from committing the crime, adjusted for things like the probability of getting caught.
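That crude first approximation can be sketched numerically. This is a minimal sketch, not a worked-out theory: the utility values and catch probability below are hypothetical, chosen only to illustrate the inequality (expected punishment disutility > crime utility).

```python
# Crude deterrence condition from the comment above: the punishment's
# disutility, discounted by the probability of getting caught, should
# exceed the utility the criminal gained from the crime.
# All numbers here are hypothetical and purely illustrative.

def deters(crime_utility: float, punishment_disutility: float,
           p_caught: float) -> bool:
    """True if the expected disutility of punishment outweighs the crime's payoff."""
    return p_caught * punishment_disutility > crime_utility

# A crime worth 10 utils with a 25% chance of being caught requires a
# punishment costing the criminal more than 10 / 0.25 = 40 utils.
print(deters(10, 40, 0.25))  # → False (boundary case: not strictly greater)
print(deters(10, 41, 0.25))  # → True
```

The point of the `p_caught` factor is that a rarely enforced punishment must be correspondingly harsher to produce the same expected deterrent.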
On a tangential note, exactly how close is this to your area of expertise? In my experience it tends to be mathematicians who are in related areas but don't actually work on complexity theory directly who insist on being agnostic about P vs NP; almost all complexity theorists are pretty much completely convinced of even stronger complexity-theoretic assumptions (e.g., I bet Scott Aaronson would give pretty good odds on BQP != NP).
I'm not entirely sure how tangential this is, as it seems to suggest that there may be some sort of sweet spot of expertise (at least on this question): any layman would take my word for it that P != NP, most non-CS mathematicians would refuse to have an opinion, and most complexity theorists are convinced of its truth for their own reasons. I guess this might be something that's unique to mathematics, with its insistence on formal proof as a standard of truth. Can anyone think of anything similar in other fields?
Not "almost all are completely convinced": according to this poll, 61 supposed experts thought P != NP (which does not imply that they would bet their house on it), 9 thought the opposite, and 22 offered no opinion. The author writes that he asked "theorists", partly people he knew, but also partly by posting to mailing lists; I'm pretty sure he filtered out the crackpots and that enough of the rest really are people working in the area.
Even that case wouldn't increase the likelihood of P != NP to 1 - epsilon, as experts have been wrong in the past, and their greater confidence could stem from more reinforcement through groupthink, or from greater exposure to things they simply understand wrongly, rather than from a better overview. Somewhere in Eliezer's posts, a study is referenced in which something happens only in 70% of the cases where an expert says he is 99% sure; in another referenced study, people raised their subjective confidence in something vastly more than they actually changed their minds when they got greater exposure to an issue. This means that the experts' confidence doesn't prove much more than the confidence of the non-experts (who had only light exposure to the issue).
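The calibration point can be made concrete with a toy adjustment. This is only a sketch: the 99%-stated/70%-actual data point comes from the study mentioned above, but the second anchor (50% stated means 50% actual) and the linear interpolation between them are assumptions made purely for illustration.

```python
# Toy calibration: map an expert's stated confidence to an estimated
# actual accuracy, anchored by the 99% -> 70% figure from the study
# mentioned above. The (0.5, 0.5) anchor and linear interpolation are
# illustrative assumptions, not claims from the study.

def calibrated(stated: float) -> float:
    """Linearly interpolate between (stated=0.50, actual=0.50) and (0.99, 0.70)."""
    slope = (0.70 - 0.50) / (0.99 - 0.50)
    return 0.50 + slope * (stated - 0.50)

print(round(calibrated(0.99), 2))  # → 0.7
print(round(calibrated(0.90), 2))  # → 0.66
```

Under this toy model, an expert's "99% sure" should be treated as roughly 70% reliable, which is why the stated confidence alone shouldn't move your own estimate anywhere near 1 - epsilon.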