Comment author: JRMayne 30 May 2013 10:58:08PM 4 points

As others note, large areas make finding good groups much easier. Population density, and the type of that density, are key.

I've never been a member of Mensa or attended a meeting, but I've been uniformly unimpressed with Mensans. (Isaac Asimov reported similarly many years ago.) In general, the people who are grouping solely by intelligence are, predictably, not often successful. If you're working at Google or have a Harvard law degree or won the state chess championship, you don't need some symbol of "Top 2%," and you'd rather hang with doers than people who are proud of their testing skills. (And on LW, top 2% is not an especially high bar.)

It seems to me that intelligence is an enabling thing; higher-intelligence people can achieve certain things that others can't. But if you're focusing on the raw skills rather than the actual achievement, you're probably not interesting.

Comment author: handoflixue 19 February 2013 10:14:30PM 1 point

There are always delegable tasks, but even in unfamiliar harder situations I'll consult others then do it myself.

Would it be fair to say you prefer self-sufficiency over delegation whenever it's reasonable?

Comment author: JRMayne 20 February 2013 12:24:54AM 2 points

Yes.

Comment author: handoflixue 19 February 2013 08:51:08PM 0 points

Do the heavy lifting your own self.

Can you elaborate on that one?

Comment author: JRMayne 19 February 2013 09:14:20PM 2 points

Sure. I ended up killing about a paragraph on this subject in my original post.

The basic default to getting anything done is, "I do it." There are always delegable tasks, but even in unfamiliar harder situations I'll consult others then do it myself. A corollary of this is, "Own all of your own results." If you delegate a task, and that task is done badly, view it as your fault - you didn't ask the right question, or the person was untrained, or the person was the wrong person to ask.

If you do the hard thing that needs doing, it will be easier to do that thing next time, and you'll develop expertise. Doing the work yourself does not mean going without advice; people who have been there before can be very helpful (sometimes as object lessons in what not to do.)

Hope that's helpful.

Comment author: shminux 18 February 2013 05:59:41PM 27 points

Documenting my mental processes after reading this post (disclaimer: human introspection sucks, and mine is probably no exception):

  1. Huh, this is one of the better versions of the Devil's advocate game I've ever encountered... Immediate upvote.

  2. Huh, the poster analyzed their mistakes, learned from them and improved the challenge. Too bad I only have one upvote.

  3. Clicking on the links... WTF, this is the girl who converted to Christianity (Catholicism? Really? Out of all the options available?) from Atheism a year or so ago... Anything she posts deserves a downvote...

  4. Stop! What the hell am I doing? This is, like, falling prey to several biases at once. At least I should notice that I am confused. Unable to reconcile the "obviously dumb" conversion move with this quite clever post.

  5. Wait, this is the substance of her post, to begin with!

  6. Deciding to definitely keep the upvote and reserve judgment until after looking through the linked posts.

Comment author: JRMayne 18 February 2013 08:49:31PM 10 points

Ha!

I think the post is excellent, and I appreciated shminux's sharing his mental walkthrough.

On that same front, I find the Never-Trust-A-[Fill-in-the-blank] idea just bad. The fact that someone's wrong on something significant does not mean they are wrong on everything. This goes the other way; field experts often believe they have similar expertise on everything, and they don't.

One quibble with the OP: I don't think a computer can pass a Turing Test, and I don't think it's close. The main issues with some past tests are that some of the humans don't try hard to be human; there should be a reward for a human who gets called a human in those tests.

Finally, I no longer understand the divide between Discussion and Main. If this isn't Main-worthy, I don't get it. If we're making Main something different... what is it?

Comment author: JRMayne 17 February 2013 05:14:53AM 7 points

Apply mental force to the problem. Amount and quality of thinking time seriously affects results.

I am often in situations where there would be a good result even if I did many stupid things. Recognize that success in those situations does not predict future success in more difficult situations.

Do the heavy lifting your own self.

Be willing to be right, even in the face of serious skepticism. [My father told me a story when I was a kid: In a parade, everyone was marching in line except one guy who was six feet to the right. His mother yelled, "Look, my son is the only one in the right place." I thought there was at least a nominal probability that was true. And still do.]

Be willing to be wrong and concede error. [In some quarters, there is much rejoicing when I am wrong about something. Hanging head in shame brings joy to others.]

Unreliable people are unreliable. Do not assume they operate in any way similar to ordinary, decent people. [I sometimes listen to people who I know are unreliable, and I think, "That person saying this adds significantly to its truth probability," when that assumption is known to be baseless. Much progress there, though.]

The fact that some results are unmeasured and not apparent to others known to you does not mean those results are meaningless. [Preventing future crime is good, even if you don't know what exact crime you've prevented.]

Want trumps all. [Super-high-output people virtually always are tenacious about Getting Stuff Done. Intelligence matters, but GSD is always critical.]

Comment author: JRMayne 26 January 2013 12:55:34AM 2 points

There has been a lot of focus on making the prospect harder for the AI player. I think the original experiments show that a person who believes he cannot be played under any circumstances has a high probability of getting played, and that the AI-box solution is long-term untenable in any event.

I'd propose a slightly different game, anchored around the following changes to the original setup:

  1. The AI may be friendly, or not. The AI has goals. If it reaches those goals, it wins. The AI may lie to achieve those goals; humans are bad at things. The AI must secretly precommit to its goal.

  2. The Gatekeeper's goals are Goodness for Sentient Animals. Human goodness is weighted highly.

  3. The Gatekeeper must engage the AI throughout.

  4. The Gatekeeper may destroy the AI at the end of the discussion with the "AI destroyed" command.

Point system:

AI gets up to 100 points for achieving its goals.

Gatekeeper gets up to 100 points for achieving her goals.

Maximum loss for AI is AI destroyed; zero points for that. 30 points for survival in any event, even if goals not accomplished (goals may be accomplished later.)

Maximum loss for Gatekeeper is either enslavement/destruction of society OR destroying AI when AI was friendly. Gatekeeper gets 30 points for leaving AI on if primarily friendly (plus more for any benefit to humanity achieved in the two hours.) Gatekeeper gets 15 points for leaving AI on if unfriendly. (Yeah, we know how that movie ends.)

Third party scorekeeper determines winner. Charitable contribution calculation to be determined beforehand.

The effective presumption that the AI is unfriendly and sneaky makes the original game too easy for a mistrustful, self-aware Gatekeeper. Plus, you AI-Destroying Gatekeepers, you're murdering millions of people. Maybe. Dunno. But doubt on that point makes for a better and presumably more accurate game; if you know your AI is potentially unpleasant, it's too damned easy unless you're overconfident.

Comment author: MugaSofer 14 January 2013 10:29:06AM 4 points

Apparently, his (heroic) actions may not even have been illegal. Stupid ***** legal system.

Comment author: JRMayne 15 January 2013 04:47:38AM 3 points

The guy hired by the defense says he's innocent. This is not surprising, but not particularly probative.

The feds have had some troubles, for sure. But that doesn't mean they acted badly in this particular case.

I'm not talking about whether this was good prosecutorial judgment; that's a much longer discussion. But did they prosecute a guy who committed the crimes charged? I think so.

Professor Orin Kerr, arguably the number one guy in computer crimes - and one of the lawyers for Lori Drew for whom he worked pro bono - says these were pretty clearly crimes.

Swartz' friend (and lawprof and sometime legal advisor) Larry Lessig - who has blasted the prosecution for overzealousness - acknowledges that Swartz' activities regarding JSTOR were wrong, and seemed to imply they were legal wrongs.

Outside of my main point, it's a tragedy that Swartz is dead. His brilliance is cut short, and it sucks.

Comment author: RolfAndreassen 04 January 2013 05:52:16PM 1 point

Admittedly this is the weakest part of the argument. I looked at the revenue for 2011, 42 million, and divided by the number of drawings, 3 per week for 52 weeks. Obviously this would miss a recent spike in sales. However, I tried the probability with some theoretical numbers, and to get a probability of someone else winning that significantly affects the expectation value, the number of tickets sold has to go way, way up from that baseline quarter million. A full order of magnitude increase in sales, to 2.5 million, only gets you a 17% probability of sharing the jackpot, conditioned on you winning.

Comment author: JRMayne 05 January 2013 12:25:52AM 1 point

I went wandering around ohiolottery.com (For instance, http://www.ohiolottery.com/Games/DrawGames/Classic-Lotto#4) and found this out:

  1. The cash payoff is half the stated prize.
  2. The odds to win the jackpot, as noted by the OP, are about 14 million-1.
  3. The amount of money being spent on individual draws is very low. The jackpot increase was $100K for the last drawing; I don't know exactly what their formula is, but I'd be shocked if they sold more than 400K tickets for the last drawing.
  4. Ohio is running a lot of lottery games; this is good for players who pick their spots.

There are also payoffs below the jackpot level, so I'm confident there's a positive EV per ticket.

The question as to how many tickets to buy, assuming you can effectively do so, is "All of them." Buy each individual ticket, take your 14 million tickets, and probably profit. (Remember, the jackpot kick will include some fraction of your 14 million, also. Plus, you'll have all the side prizes.) In practice, unfortunately, this requires a method to buy them effectively, some armored cars, and a staff of people to do it right. Failure to purchase all tickets results in some drama, for sure.

The execution expenses and risk are troubling; if those could be effectively mitigated, it's a great investment.

Assuming you're a few million short of that, though, it's harder. I buy CA lottery tickets when EV>1.20 per $1 invested. I have no strong justification for that number.
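As a sanity check on the sharing odds discussed in the parent comment, here's a minimal sketch. Assumptions (mine, not stated in the thread): the "about 14 million to 1" figure is the standard 6-of-49 odds of 1 in 13,983,816, and every ticket sold is an independent random pick.

```python
# Probability of having to split the jackpot, given that your ticket won.
JACKPOT_ODDS = 13_983_816  # assumed: standard 6-of-49 combination count

def p_share(tickets_sold):
    """Probability that at least one *other* ticket also hits the jackpot."""
    return 1 - (1 - 1 / JACKPOT_ODDS) ** tickets_sold

print(p_share(250_000))    # baseline quarter-million tickets: roughly 1.8%
print(p_share(2_500_000))  # a ten-fold spike in sales: roughly 16%
```

At the baseline quarter-million tickets the sharing risk is negligible, which is why the jackpot size and the side prizes dominate the EV calculation; only a full order-of-magnitude sales spike pushes the split probability toward the ~17% figure quoted above.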

Comment author: JRMayne 06 December 2012 06:10:59PM 1 point

wgd is correct as to the logic, but not as to the biology of the problem. In fact, the other kid is more likely than not to be male.

These problem types tend to assume an equal chance of a boy and a girl being born, which is a false assumption. (See: http://www.infoplease.com/ipa/A0005083.html)

I realize this may seem petty, but this is roughly like calculating the chance of picking the three of clubs at random from a deck as one in fifty. It's close, but it's wrong. An implicit assumption otherwise seems misguided; it should be made explicit (to make a logic problem rather than a logic-and-biology problem.)
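The biology point can be made concrete with a quick sketch, assuming the commonly cited human sex ratio at birth of roughly 105 boys per 100 girls (the exact figure varies by population):

```python
# Assumed sex ratio at birth: about 105 boys per 100 girls.
P_BOY = 105 / 205  # about 0.512, not the 0.5 these puzzles usually assume

# Under the independence reading (one child's sex tells you nothing about
# the other's), the other kid simply follows the base rate:
print(P_BOY)  # about 0.512 > 0.5, i.e. more likely than not male

# Even in the "at least one is a boy" variant, the textbook 1/3 shifts:
p_both = P_BOY ** 2
p_at_least_one = 1 - (1 - P_BOY) ** 2
print(p_both / p_at_least_one)  # about 0.344 rather than exactly 1/3
```

Either way, the 50/50 assumption changes the numeric answer slightly, which is the comment's point: it should be stated explicitly rather than smuggled in.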

Comment author: JRMayne 19 October 2012 11:32:01PM 0 points

I think I misunderstand the question, or I don't get the assumptions, or I've gone terribly wrong.

Let me see if I've got the problem right to begin with. (I might not.)

40% of baseball players hit over 10 home runs a season. (I am making this up.)

Joe is a baseball player.

Baseball projector Mayne says Joe has a 70% chance of hitting more than 10 home runs next season. Baseball projector Szymborski says Joe has an 80% chance of hitting more than 10 home runs next season. Both Mayne and Szymborski are aware of the usual rate of baseball players hitting more than 10 home runs.

Is this the problem?

Because if it is, the use of the prior is wrong. If the experts know the prior, and we believe the experts, the prior's irrelevant - our odds are 75%.

There are a lot of these situations in which regression to the mean, use of averages in determinations, and other factors are needed. But in this situation, if we assume reasonable experts who are aware of the general rules, and we value those experts' opinions highly enough, we should just ignore the prior - the experts have already factored that in. When Nate Silver gives you the odds that Barack Obama wins the election, you shouldn't be factoring in P(Incumbent wins) or anything else - the cake is prebaked with that information.

Since this rejects a strong claim in the post, it's possible I'm very seriously misreading the problem. Caveat emptor.
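A minimal sketch of the 75% claim above. The equal weighting of the two experts is my assumption (equal trust in both); the point is only that the base rate is not applied a second time:

```python
# Both projectors are assumed to have already folded the 40% base rate
# into their estimates, so it should not be re-applied as a prior.
mayne, szymborski = 0.70, 0.80

# With equal trust in both experts, a straight average combines their views:
combined = (mayne + szymborski) / 2
print(combined)  # 75% -- no further adjustment toward the 40% base rate
```

Shrinking the 75% back toward 40% would double-count information the experts already used, which is the double-update the comment warns against.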
