Here's a very fancy cliquebot (with a couple of other characteristics) that I came up with. The bot is in one of 4 "modes" -- SELF, HOSTILE, ALLY, PASSWORD.
Before turn 1:
Simulate the enemy for 20 turns against a bot that plays DCCCDDCDDD and then Cs thereafter. If the enemy responds with 10 Ds followed by CCDCDDDCDC, change to mode SELF. (These are pretty much random strings -- the only requirement is that the first begins with a D.)
Simulate the enemy for 10 turns against DefectBot. If the enemy cooperates in all 10 turns, change to mode HOSTILE. Otherwise, be in mode ALLY.
In any given turn,
If...
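Since the per-turn rules are cut off above, here is only a rough sketch of the before-turn-1 mode selection, in Python. The `simulate_enemy` helper is a hypothetical stand-in for whatever simulation facility the tournament provides (assumed to return the enemy's moves as a string of 'C'/'D' characters when played against a fixed sequence of our moves), and the password check is assumed to take precedence over the DefectBot check.

```python
# Hedged sketch of the pre-turn-1 mode selection described above.
# `simulate_enemy(our_moves)` is a hypothetical helper assumed to return the
# enemy's responses ('C'/'D' string) against the given fixed move sequence.

PASSWORD_OPENING = "DCCCDDCDDD" + "C" * 10   # our 20-move probe: password, then Cs
EXPECTED_REPLY   = "D" * 10 + "CCDCDDDCDC"   # counter-password a clique member gives

def choose_mode(simulate_enemy):
    """Return one of 'SELF', 'HOSTILE', 'ALLY' before turn 1."""
    # Password check: probe the enemy with the opening sequence for 20 turns.
    reply = simulate_enemy(PASSWORD_OPENING)
    if reply == EXPECTED_REPLY:
        return "SELF"        # opponent knows the handshake: treat it as a copy

    # Exploitability check: simulate 10 turns against DefectBot.
    vs_defectbot = simulate_enemy("D" * 10)
    if all(move == "C" for move in vs_defectbot):
        return "HOSTILE"     # cooperates even with pure defection: exploit it
    return "ALLY"            # otherwise, try to cooperate
```

The role of mode PASSWORD presumably belongs to the truncated per-turn rules, so it is not shown here.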
Possibly I used it out of context. What I mean is that utility(less crime) > utility(society has an inaccurate view of the justice system) when the latter has few other consequences, and rationality is about maximising utility. Also, in the Least Convenient World, this trial will not affect any others overall, which negates the point about the accuracy of the justice system. Here knowledge is not an end; it is a means to an end.
G'day
As you can probably guess, I'm Alex. I'm a high school student from Australia and have been disappointed with the education system here for quite some time.
I came to LW via HPMoR, which was linked to me by a fellow member of the Aus IMO team. (I seriously doubt I'm the only (ex-)Olympian around here -- it seems just the sort of place that would attract them.) I've spent the past few weeks reading the sequences by EY, as well as miscellaneous other stuff. Made a few (inconsequential) posts too.
I have very little in the way of controversial opinions to off...
You're kind of missing the point here. I probably should have clarified my position more. The reason I want people to trust the justice system is so that people will not be inclined to commit crimes, because it would then be more likely (from their point of view) that, if they did, they would get caught. I suppose there is the issue of precedent to worry about, but the ultimate purpose of the justice system, from the consequentialist viewpoint, is to deter crimes (by either the offender it is dealing with or potential others), not to punish criminals. As the o...
Test for Consequentialism:
Suppose you are a judge deciding whether person X or Y committed a murder. Let's also assume your society has the death penalty. A supermajority of society (say, encouraged by the popular media) has come to think that X committed the crime, and their confidence in the justice system would decrease if he were set free, but you know (e.g. because you know Bayes) that Y was responsible. We also assume you know that Y won't reoffend if set free because (say) they have been too spooked by this episode. Will you condemn X or Y? (Befor...
I'm not sure if anyone's noticed this, but how do you know that you're not a simulation of yourself inside Omega? If he is superintelligent, he would compute your decision by simulating you, and you and your simulation would be indistinguishable.
This is fairly obviously a PD against said simulation -- if you cooperate in PD, you should one-box.
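To put rough numbers on that, here is a minimal sketch assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff Omega's simulation one-boxes, $1,000 in the transparent box) and assuming the simulation's choice is identical to yours, since the two of you can't be distinguished:

```python
# Minimal sketch: assumed standard Newcomb payoffs, plus the assumption that
# Omega's simulation of you decides exactly as you do.

def payoff(your_choice: str) -> int:
    simulated_choice = your_choice      # you and the simulation are indistinguishable
    opaque = 1_000_000 if simulated_choice == "one-box" else 0
    transparent = 1_000
    if your_choice == "one-box":
        return opaque                   # take only the opaque box
    return opaque + transparent         # two-boxing takes both boxes

print(payoff("one-box"))   # 1000000
print(payoff("two-box"))   # 1000
```

Under those assumptions one-boxing comes out well ahead, which is the "cooperate with your copy" intuition from the PD.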
I personally am not sure, although if I had to decide I'd probably one-box.
I can't believe that this one hasn't been done before:
Unless you are Eliezer Yudkowsky, there are 3 things that are certain in life: death, taxes and the second law of thermodynamics.