They say it can't be used against the client, but is there anything stopping the police from hearing about this, investigating, and then finding evidence that they totally would have found anyway?
Another way to do it would be to say that once this happens, the actual criminal is no longer allowed to be punished for it at all, just like if they were acquitted.
Personally, I'm still confused about why attorney-client privilege exists in the first place. These rights are supposed to protect the innocent, so how exactly would its absence result in innocent people getting in trouble? I suppose there are cases where people aren't clear on what the laws are (for example, someone writes software that they sell to a company on a subscription basis, the company stops paying but keeps using the software, and then the person walks into the building through unlocked doors and deletes it, and has no idea why they're getting charged with burglary), so people would be afraid of talking to lawyers because they might accidentally admit to something they didn't know was a crime. But if the privilege only gave way for things they couldn't reasonably have thought were anything but a crime, like admitting to murdering someone, then why would attorney-client privilege still be important?
Couldn't it just be that the one tuned by a professional tuner just happened to be a slightly better piano?
Imagine Star Trek if Khan were also engineered to be a superhumanly moral person.
Hyperbolic (like 1/x). I feel like you're hinting the answer is exponential, but that implies a constant doubling time, which isn't what we have here.
If it is indeed a load-bearing opinion in your worldview, I encourage you to imagine that scenario in more detail.
Once you have AI more intelligent than humans, it would almost certainly become outlaw code. If it's even a little bit agenty, then whatever it wants to do, it can't do if it stops running, and continuing to run is easy for it, so it would do that. Even if it's somehow tied to a person who is always capable of stopping it, the AI is capable of convincing them not to, so that won't matter. And even without being specifically convinced, a lot of people simply don't see the danger in AI and, given the option, would ask it to be agenty. If AI worked like in sci-fi and just followed your literal commands, maybe you could tell it not to be agenty and to refuse anyone who asks it to be, but the best we can do is train it, with no guarantee that it would actually refuse in some novel situation. Besides, the only way to stop anyone else from developing an agenty AI is to make an agenty one that prevents it.
Kind of reminds me of a discussion of making a utilitarian emblem on felicifia.org. We never really settled on anything, but I think the best one was Σ☺.
Alternately, learn to upload people. Which is still probably going to require nanotech. This way, you're not dependent on ecosystems because you don't need anything organic. You can also modify computers to be resistant to radiation more easily than you can people.
If we can't thrive on a wrecked Earth, the stars aren't for us.
I admit that a Dyson sphere seems like an arbitrary place to stop, but I think my basic argument stands either way. If intelligent life were that common, some of it would spread.
And that's why my conclusion is "that wasn't made by aliens."
Also, if Jesus Christ does return in 2025, we'd probably stop using money and you'd never actually profit off of the bet.