Comment author: Psychohistorian 18 February 2012 09:12:41PM *  4 points [-]

If your point is that there are a lot of people locked up for violating laws that are basically stupid, you're absolutely right.

But that issue is largely irrelevant to the subject of the primary post, which is the accuracy of courts. If the government bans pot, the purpose of evidence law is to determine accurately whether people are guilty of that crime.

In other words, your criticism of the normative value of the American legal system is spot-on; we imprison far more people than we should and we have a lot of stupid statutes. But since this context is a discussion of the accuracy of evidentiary rules and court procedure, your criticism is off-topic.

Comment author: Michael_Sullivan 19 February 2012 12:36:37PM 0 points [-]

Is it really off-topic to suggest that looking at the accuracy of the courts may amount to rearranging the deck chairs on the Titanic, in a context where we've basically all agreed that

  1. The courts are not terrible at making accurate determinations of whether a defendant broke a law.

  2. The set of laws whose penalties can land you in prison is massively inefficient socially and, in most people's minds, unjust (when we actually grapple with what the laws are, as opposed to how they are usually applied to people like us -- that is, those of us who are white and not poor).

  3. The system of who is tried, who makes a plea bargain, and who never gets tried at all is systematically discriminatory against those with little money or without middle/upper-class social connections, and provides few effective protections against known, widespread racial bias on the part of police, prosecutors, and judges.

How different is this in principle from TimS's suggestion about lower hanging fruit within evidentiary procedure, just at a meta level? Or did you consider that off-topic as well?

Comment author: skepsci 15 February 2012 11:46:50AM 4 points [-]

Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Comment author: Michael_Sullivan 15 February 2012 12:12:27PM 6 points [-]

Eliezer has proposed that an AI in a box cannot be safe because of the persuasion powers of a superhuman intelligence. As a demonstration of what merely a very strong human intelligence could do, he conducted a challenge in which he played the AI and convinced at least two (possibly more) skeptics to let him out of the box, given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox

Comment author: AlexMennen 12 December 2011 04:20:11PM *  7 points [-]

Axioms are not true or false. They either model what we intended them to model, or they don't. In puzzle 1, assuming you have carefully checked both proofs, confidence that (F, P1, P2, P3) implies T and that (F, P1, P2, P3) implies ~T is justified in both cases, rendering (F, P1, P2, P3) an uninteresting model that probably does not reflect the system you were trying to model with those axioms. If you are trying to figure out whether or not T is true within the system you were trying to model, then of course you cannot be confident one way or the other, since you aren't even confident of how to properly model the system. The fact that your proof of T relied on fewer axioms would seem to be some evidence that T is true, but it is not particularly strong.

Puzzle 2: (ME) points both ways. While it certainly seems to be strong evidence against the reliability of (RM), since she just reasoned from clearly inconsistent axioms, it can't prove that F is the axiom you should throw away. Consider the possibility that you could construct a proof of ~T given only F, P1, and P2. In that case, (ME) could not possibly say anything different about F and P3.

Comment author: Michael_Sullivan 14 December 2011 12:19:48PM 0 points [-]

Confidence that the same premises can imply both ~T and T is confidence that at least one of your premises is logically inconsistent with the others -- that they cannot all be true. It's not just a question of whether they model something correctly -- there is nothing they could model completely correctly.
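
To make that concrete, here is a minimal sketch in Lean 4 (F, P1, P2, P3, and T are just the placeholders from the puzzle, not anything from the original post): if one set of premises proves both T and ~T, then the premises jointly entail a contradiction, so they cannot all be true.

    -- Minimal sketch (Lean 4): if the same premises prove both T and ¬T,
    -- then the conjunction of those premises is itself contradictory.
    example (F P1 P2 P3 T : Prop)
        (proof₁ : F ∧ P1 ∧ P2 ∧ P3 → T)
        (proof₂ : F ∧ P1 ∧ P2 ∧ P3 → ¬T) :
        ¬(F ∧ P1 ∧ P2 ∧ P3) :=
      fun h => (proof₂ h) (proof₁ h)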

In puzzle one, I would simply conclude that either one of the proofs is incorrect, or one of the premises must be false. Which option I consider most likely will depend on my confidence in my own ability, Ms. Math's abilities, whether she has confirmed the logic of my proof or been able to show me a misstep, my confidence in Ms. Math's beliefs about the premises, and my priors for each premise.

Comment author: jimmy 07 November 2011 07:14:00PM *  1 point [-]

That's not quite right in practice either. Even if you took all my money, I'd still take the 15% chance at $1M and maybe sell a 15% chance of $5k for $500.

Or, if that is somehow not allowed, then I'd run into a bit of debt until my next paycheck. Even if I really were spending all the money I make and averaging $0, $500 is a mere blip in the noise, not a factor of infinity more money.

It makes more sense to look at the total money coming in over whatever time scale you plan for.
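
To make the arithmetic concrete, here is a rough sketch in Python; the gamble figures are the ones from this comment, while the annual income is a made-up placeholder for a paycheck-to-paycheck earner, not anyone's actual number.

    # Figures from the comment; annual_income is a hypothetical placeholder.
    p_win = 0.15
    big_prize = 1_000_000      # the 15% chance at $1M
    small_prize = 5_000        # the 15% chance of $5k offered for sale
    sale_price = 500
    annual_income = 40_000     # hypothetical paycheck-to-paycheck earner

    ev_big = p_win * big_prize            # $150,000 in expectation
    ev_small_kept = p_win * small_prize   # $750 in expectation if kept

    print(f"EV of the big gamble:           ${ev_big:,.0f}")
    print(f"EV given up by selling at $500: ${ev_small_kept - sale_price:,.0f}")
    print(f"$500 as a share of a year's income: {sale_price / annual_income:.1%}")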

Comment author: Michael_Sullivan 27 November 2011 04:37:34AM 0 points [-]

The present value of my expected future income stream from normal labor, plus my current estimated net worth, is what I use when I do these calculations for myself as a business owner considering highly risky investments.

For most people with decent social capital (almost anyone middle-class in a rich country), the minimum base number in typical situations should be something over US$200k, even for those near bankruptcy.

Obviously, this does not cover non-typical situations involving extremely important, time-sensitive opportunities requiring more cash than you can raise on short notice (such as the classic example of a required life-saving medical treatment).
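
As a rough illustration of the kind of calculation described above (all numbers below are hypothetical placeholders, not figures from the comment), a minimal sketch:

    # Minimal sketch: present value of an expected future income stream, plus
    # current net worth, as the base "wealth" figure for risk calculations.
    # All inputs are hypothetical placeholders.
    def present_value_of_income(annual_income, years, discount_rate):
        """Discounted value of a level annual income stream (paid at year end)."""
        return sum(annual_income / (1 + discount_rate) ** t
                   for t in range(1, years + 1))

    pv_income = present_value_of_income(annual_income=40_000, years=30,
                                        discount_rate=0.05)
    net_worth = 5_000                    # e.g., someone close to broke
    effective_wealth = pv_income + net_worth

    print(f"PV of future income: ${pv_income:,.0f}")        # roughly $615,000
    print(f"Effective wealth:    ${effective_wealth:,.0f}")  # well over $200k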

Comment author: Vaniver 21 November 2011 09:11:20PM 40 points [-]

Background: lukeprog wrote this post about articles he wouldn't have the time to write, and the first one on the list was something I was confident about, and so I decided to write a post on it. (As a grad student in operations research, practical decision theory is what I spend most of my time thinking about.)

Amusingly enough, I had the most trouble working in his 'classic example.' Decision analysis tends to hinge on Bayesian assumptions often referred to as "small world" -- that is, your model is complete and unbiased (if you knew there was a bias in your model, you'd incorporate that into your model and it would be unbiased!). Choosing a career is more of a search problem, though -- specifying what options you have is probably more difficult than picking from them. You can still use the VoI concept, but mostly for deciding when to stop accumulating new information. Before you've done your first research, you can't predict the results of your research very well, and so it's rather hard to put a number on how valuable looking into potential careers is.

There seems to be a lot of interest in abstract decision theory, but is there interest in more practical decision analysis? That's the sort of thing I suspect I could write a useful primer on, whereas I find it hard to care about, say, Sleeping Beauty.

Comment author: Michael_Sullivan 24 November 2011 02:29:44AM 5 points [-]

I, too, find it hard to care about Sleeping Beauty, which is perhaps why this post is the first time, in years of reading LW, that I've fully dusted off my math spectacles and tried to rigorously understand what some of this decision theory notation actually means.

So count me in for a rousing endorsement of interest in more practical decision theory.

Comment author: Vaniver 22 November 2011 02:01:41PM 2 points [-]

I like the idea of having pictures but I do not like the idea of procuring pictures. I'll make it a higher priority for future posts, though, and if someone wants to send me pictures (which I can legally use) for this post I'll be happy to edit them in.

I replaced the "x"s with "p"s; hopefully that'll make it a bit clearer.

We start off with a prior P(p)=1. That is, I think every p is equally likely, and when I integrate over the domain of p (from 0 to 1) I get 1, like I should.

Then I update on seeing heads. For each p value, the chance I saw heads was p, and so I expect my function to have the functional form P(p)=p. Notice that after seeing heads I think the mode is a coin that always lands on heads, and that it's impossible that the coin always lands on tails -- both are what I expect. When I integrate p from 0 to 1, though, I get 1/2. I need to multiply by 2 to normalize, and so we have P(p)=2p.

This might look odd at first because it sounds like the probability of the coin always landing on heads is 2, which suggests an ill-formed probability. That's the probability density, though -- right now, my distribution still puts 0 probability on the coin always landing on heads, because that's an integral with 0 width.

The 2-2x comes from the same argument, but the form is now 1-x.
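
For anyone who wants the normalization step worked through numerically, here is a small sketch; it only restates what the comment already says, and the grid size is arbitrary.

    # Numerical check of the update described above: start from a uniform
    # prior density P(p) = 1 on [0, 1], multiply by the likelihood of seeing
    # heads (which is p at each p), and renormalize so the density integrates to 1.
    import numpy as np

    p = np.linspace(0, 1, 100_001)
    prior = np.ones_like(p)             # P(p) = 1, integrates to 1
    unnormalized = prior * p            # likelihood of heads at each p
    norm = np.trapz(unnormalized, p)    # integrates to 1/2
    posterior = unnormalized / norm     # P(p) = 2p after normalizing

    print(round(norm, 3))               # 0.5 -- hence the factor of 2
    print(round(posterior[-1], 3))      # 2.0 at p = 1: a density, not a probability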

Comment author: Michael_Sullivan 24 November 2011 02:26:03AM -1 points [-]

I'm not sure it isn't clearer with 'x's, given that you have two different kinds of probabilities to confuse.

It may just be that there's a fair bit of inferential distance to clear, though, in presenting this notation at all.

I have a strong (if rusty) math background, but I had to reason my way down a couple of different trees about what you could possibly mean (one of which ended in a half-written comment asking you to explain certain things about your notation and meaning) before it finally clicked, on a second reading of your comment here, after I had tried to explain my confusion in formal mathematical terms.

I think a footnote about what probability distribution functions look like and what the values actually represent (densities, rather than probabilities), and a bit of work with them would be helpful. Or perhaps there's enough inferential work there to be worth a whole post.

Comment author: TheOtherDave 26 October 2010 04:26:04PM 1 point [-]

Yes.

Perhaps only peripherally related to the point of your post: I have a pet peeve about people using "X is absurd!" to mean they feel really strongly that X is false.

What I try to mean when I call a proposition P1 absurd in a context C1 is that P1 contradicts some fundamental organizing principle of C1, such that if I accept P1 the entire system of thought comes under attack.

I think this is what the absurdists (in a literary sense) are getting at: absurd statements, if taken seriously, challenge the ways we interpret events and leave us unable to trust the metaphorical -- and perhaps the literal -- ground under our feet. They challenge axioms, if you prefer.

That doesn't necessarily mean they're false, though it would of course be nice to believe that any system of thought I actually implement doesn't allow for true absurd statements. It does mean that, if true, they are important.

If P1 is genuinely absurd, two things follow: 1. Any evidence that supports P1 (that shifts its probability up, if you prefer) is worth considering very carefully and explicitly, because the emotional drive to simply dismiss it will be strong. 2. If there is evidence supporting it, I should tread carefully around the implications of that, because it's quite plausible that my normal habits of thought won't work quite right for them.

(Yes, of course, careful and explicit and rigorous thought is always a good thing. But most of the time, its benefits aren't all that immediate.)

Comment author: Michael_Sullivan 07 October 2011 02:45:16AM -2 points [-]

I think of this as "heresy", and agree that it is a very useful concept.

Comment author: Luke_A_Somers 10 September 2011 11:09:36PM 0 points [-]

The problem is that by declaring something "Absurd" you're making a very strong bet against it. You're going to lose a fair number of these bets.

Suppose calling something absurd merely means it's 1% probable, and you're right about that 90% of the time. Each case you get wrong costs you a factor of 10 on your accuracy (compared with having merely called it unlikely, at 10%), which is far more than you gain from the extra 9% probability you ascribe to the correct outcome in the other nine cases where you happened to be right. And 1% is high enough that few would call it truly absurd.
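
One way to make that arithmetic concrete -- a sketch under the stated assumptions (ten "absurd" calls at 1% each, nine of them correct, compared against a baseline of merely calling each claim 10% likely):

    # Accuracy here means the probability assigned to what actually happened,
    # multiplied across the ten cases.
    absurd, unlikely = 0.01, 0.10

    # The one case you got wrong: the claim turned out true.
    loss_on_miss = absurd / unlikely                     # 0.1 -> a factor of 10 worse

    # The nine cases you got right: the claims turned out false.
    gain_on_hits = ((1 - absurd) / (1 - unlikely)) ** 9  # (0.99/0.90)^9, about 2.36

    print(loss_on_miss * gain_on_hits)  # about 0.24: overall you come out well behind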

Calling something absurd is asking to be smacked hard (in terms of accuracy) if you're wrong -- and feeling safe about it.

Comment author: Michael_Sullivan 07 October 2011 02:38:16AM 1 point [-]

Bringing myself back to what I was thinking in 2007 -- I think we have some semantic confusion around two different senses of absurdity. One is the heuristic Eliezer discusses: the determination of whether a claim/prediction has surface plausibility. If not, we file it under "absurd". An absurdity heuristic would be some heuristic which treats surface plausibility, or the lack of it, as evidence for or against a claim.

On the other hand, we have the sense of "Absurd!" as a very strong negative claim about something's probability of truth. So "Absurd!" stands in for "less than .01/.001/whatever", instead of a term such as "unlikely", which might mean "less than .15".

I was talking only about the first sense. It seemed to me that Eliezer was making a very strong claim: that the absurdity heuristic (in the first sense) does no better than maximum entropy. That's equivalent to saying that surface plausibility, or the lack of it, amounts to zero evidence -- that allowing yourself to modify probabilities downward due to "absurdity", even by a small amount, would be an error.

I strongly doubt that this is the case.

I agree completely that a claim of "Absurd!" in the second sense about a long-dated future prediction cannot ever be justified merely by absurdity in the first sense.

In response to comment by Petro on Optimal Employment
Comment author: datadataeverywhere 01 February 2011 02:56:23PM *  3 points [-]

Police officers in larger cities make decent scratch to start with (IIRC 60k in some areas of California), and then have significant opportunities for overtime and "moonlighting" as security. In some cases there are Bay Area police making over 120k a year.

Given cost of living adjustments, this is still nowhere near three times as much as soldiers start making.

> And as far as "soldiers really don't have to do much". Yeah, I don't wanna get banned here, so let's just say you have no idea of what you're talking about.

I haven't the faintest idea why you'd get banned for correcting me. I'd be happy to have you give me greater clarity. Here's where what I said comes from: I have a (half) brother and two good friends in the US Army; I of course have several other acquaintances in the Army through them. They report "not having to do anything", and have talked about just hanging out all day on base. One friend is a medic; he works out for three hours a day, mans a medical station (where he reads, since people rarely come in) for another three hours a day, and then goes home. My brother maintains equipment, and I gather has a similarly uneventful schedule; I don't know about my other friend, but he has lots of time on his hands and is usually bored and never stressed about work. My SO's brother-in-law is a newer recruit, and currently deployed; he didn't have nearly as much free time prior to his deployment, but he was in training constantly prior to that.

Please note, I'm not talking about danger and fighting! I was talking about a counterfactual world where soldiers are never deployed. This is not our world, and I thought I made it clear that this changes everything! All three men that I'm talking about, and most of their friends, have been deployed for several tours of duty. None of them have significant physical injuries, but all bear serious psychological damage. It's broken their families and torn apart their lives. Each of them knows more people who have committed suicide than I hope to ever know. This is not okay, and not something I recommend as a "low stress" position.

Comment author: Michael_Sullivan 26 February 2011 01:24:24PM 0 points [-]

You have to be careful with counterfactuals, as they have a tendency to be counter factual.

In a world in which soldiers were never (or even just very very rarely) deployed, what is the likelihood that they would be paid (between money and much of living expenses) anywhere near as well as current soldiers and yet asked to do very very little?

The reason the lives of soldiers who are not deployed are extremely low-stress and not particularly difficult is deployment itself. They are being healed from previous deployments and readied for future ones. In the current environment, where soldiers are being deployed for much longer periods with much shorter dwell times, it's very likely that the services are doing everything they can to make the dwell time as low-stress as possible. Three hours at the gym and three hours doing a relatively low-stress job in your field sounds like what a lot of people I know who are "retired" do. It sounds like a schedule designed to make your life as easy as possible while still keeping you healthy and alert, rather than letting you fall into depression.

In a counterfactual world where the army was almost never deployed, it would surely be used for some other purpose on a regular basis (policing, rescue, disaster relief, etc.), or it would simply be much, much smaller, with pay that didn't need to be as competitive. We've even experienced this to an extent -- during peaceful times, the active-duty military shrinks dramatically, and most of our army is in a reserve or National Guard capacity, where soldiers have day jobs and do not get full-time pay from the army unless they are called up to active service. This is still, by most accounts, a pretty good gig (especially if you use it to get free college tuition), even though it can't replace full-time work -- as long as you don't get called up.

In fact, I think that's what some of the people my age that I know in the service were expecting when they joined in peacetime: very rare call-ups for crucial work they felt obligated to do well for the good of the country or the world. It didn't work out that way, though.

Comment author: Michael_Sullivan 10 June 2008 03:53:28PM -1 points [-]

I would think the key line of attack in trying to describe why a singularity prediction is reasonable is making clear what you *are* predicting and what you are *not* predicting.

Guys like Horgan hear a few sentences about the "singularity" and think humanoid robots, flying cars, phasers, and force fields -- that we'll be living in the Star Trek universe.

Of course, as anyone with the Bayes-skilz of Eliezer knows, start making detailed predictions like that and you're sure to be wrong about most of it, even if the basic idea of a radically altered social structure and technology beyond our current imagination is highly probable. And that's the key: "beyond our current imagination". The specifics of what will happen aren't very predictable today. If they were, we'd already be *in* the singularity. The things that happen will seem strange and almost incomprehensible by today's standards, in the way that our world is strange and incomprehensible by the standards of the 19th century.

The last 200 years are already much like a singularity from the perspective of someone looking forward from 15th-century Europe and getting a vision of what happened between 1800 and 2000, even though the basic groundwork for that future was already being laid.
