All of JRMayne's Comments + Replies

Biases/my history: I went to a good public high school after indifferent public elementary and junior high schools. I attended an Ivy League college. My life would have been different if I had gone to academically challenging schools as a youth. I don't know if it would have been better or worse; things have worked out pretty well.

You come off as very smart and self-aware. Still, I think you underrate the risk of ending up as an other-person at the public high school; friends may not be as easy as you expect. Retreating to a public high school may also req... (read more)

0argella42
You have no idea how gratifying this is to hear. The commute is only a half-hour drive, but it does kind of suck. It's nice to know I'm not crazy to think that.

Fantastic!

Well done, sir.

0gjm
So I think the conclusion is that solving Vigenere ciphers is easier than you thought it was but harder than you thought I thought it was :-).

Gah. I don't remember the solution or the key. And I just last week had a computer crash (replaced it), so, I've got lots of nothing. Sorry.

I am sure of (1) and (2). I don't remember (3), and it's very possible it's longer than 10 (though probably not much longer.) But I don't remember clearly. That's the best I can do.

Drat.

4gjm
Offsets 10, 8, 25, 6, 6, 14, 21, 11, 12, 13, 11, 12, 15, 21, 1, 7 (that's a key length of 16) yield the following decrypted plaintext: "failurearrivedquicklyatgjmshandsastheirprogramdefeatedmedepressinglyquicklyimustconcedethepointiwasoverconfidentandunderinformedandgjmfreakingwins". The actual process involved manual intervention, though I think a smarter program could have done it automatically. It went like this:

* Consider key lengths up to 20.
* Instead of picking the single "best" shift at each offset within the key, consider all shifts that come within 1 unit of best. (Meaning a factor of e in estimated probability.)
* Display all the decryptions resulting from these options, unless there are more than 300 in which case display 50 random ones.

This produced a big long list of candidates. None of them was right, but some of them looked more credible than others. Keylength 16 seemed to let us do the best, so I focused on that and picked one of the better-looking candidates, which looked like this:

failhrecorkveefu
ickllatigmuhaoss
asthrirrooiranse
featrdmgaerrethi
nglyduiehlaimvht
concrdeveeroioii
wasoierelnhidfct
anduadetfnhorntd
andgwmftbaminhli
ns

which has lots of plausible bits in it. It looks like it should maybe start with "failure", a plausible enough word in this context, so I tweaked the fifth offset (yes! lots of other things got better too), and continued tweaking one by one as likely words and word-fragments jumped out at me. After a few tweaks I had the decryption above. A program clever enough to do this by itself would need to have some inkling about digraphs or trigraphs or something, and then it would need some kind of iterative search procedure rather than just picking best offsets independently for each letter. Not terribly difficult, but it seemed easier to do something simpler-minded. [Edited to add: I think the sequence of giveaway words for the tweaks was: "failure", "quickly" #2, "overconfident", done.]
JRMayne120

I think it is worse than hopeless on multiple fronts.

First problem:

Let's take another good quality: Honesty. People who volunteer, "I always tell the truth," generally lie more than the average population, and should be distrusted. (Yes, yes, Sam Harris. But the skew is the wrong way.) "I am awesome at good life quality," generally fails if your audience has had, well, significant social experience.

So you want to demonstrate this claim by word and deed, and not explicitly make the claim in most cases. Here, I understand the reason for... (read more)

JRMayne00

I agree that I made my key too long so it's a one-time pad. You're right.

"Much easier"? With or without word lengths?OK, no obligation, but I didn't make this too brutal:

Vsjfodjpfexjpipnyulfsmyvxzhvlsclqkubyuwefbvflrcxvwbnyprtrrefpxrbdymskgnryynwxzrmsgowypjivrectssbmst ipqwrcauwojmmqfeohpjgwauccrdwqfeadykgsnzwylvbdk.

(Again, no obligation to play, and no inference should be taken against gjm's hypothesis if they decline.)

0gjm
I finally did get round to taking a look at this, and it isn't yielding to the usual methods. In fact, it's sufficiently unyielding that I am wondering whether it's really the result of Vigenere-ciphering something that resembles ordinary English text with a key of reasonable length. I find the following:

* The text is 146 characters long. I would expect that, with "ordinary" plaintext, to be enough to identify a key of at most about 7 characters without too much pain. I actually tried lengths up to 10.
* For no key length k does the distribution of { (a-b) mod 26 : a,b are characters at the same position mod k } much resemble the distribution of differences mod 26 of randomly chosen letters with typical English letter frequencies.
* For no key length k does the result of the following procedure produce something that looks as if most of the shifts are correct:
  * For each offset, count letter frequencies, and choose a shift that maximizes the sum of log(typical_prob) where typical_prob are those typical English letter frequencies again.

JRMayne, if you happen to be reading this and happen to have either a record or a memory of the message and how you encrypted it: can you confirm that (1) this really is the output of a Vigenere cipher, (2) the plaintext isn't (statistically) outrageously unlike ordinary English, and (3) the key length is at most, let's say, 10? (If the key is much longer than that, there isn't enough information for these simple methods to work with. Perhaps, e.g., doing a more expensive search and looking at digraph frequencies might be worth considering, but I'm too lazy.)
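gjm's actual program isn't shown, but the per-offset scoring step he describes (for each position in the key, pick the shift that maximizes the summed log of typical English letter frequencies) can be sketched roughly as follows; the frequency table and the A=0 shift convention here are assumptions, not details from the thread:

```python
import math

# Approximate English letter frequencies (assumed values, not gjm's table).
ENGLISH_FREQ = {
    'a': .082, 'b': .015, 'c': .028, 'd': .043, 'e': .127, 'f': .022, 'g': .020,
    'h': .061, 'i': .070, 'j': .002, 'k': .008, 'l': .040, 'm': .024, 'n': .067,
    'o': .075, 'p': .019, 'q': .001, 'r': .060, 's': .063, 't': .091, 'u': .028,
    'v': .010, 'w': .024, 'x': .002, 'y': .020, 'z': .001,
}

def best_shift(column):
    """For the letters at one key offset, pick the shift whose decryption
    maximizes the sum of log(typical English frequency)."""
    def score(shift):
        return sum(
            math.log(ENGLISH_FREQ[chr((ord(c) - ord('a') - shift) % 26 + ord('a'))])
            for c in column
        )
    return max(range(26), key=score)

def guess_key(ciphertext, key_length):
    """Guess a key of the given length, one offset at a time."""
    text = [c for c in ciphertext.lower() if c.isalpha()]
    return [best_shift(text[i::key_length]) for i in range(key_length)]
```

Running guess_key over a range of candidate key lengths and eyeballing the decryptions is essentially the simple-minded approach gjm describes; the digraph-aware, iterative-search version he mentions would build on the same scoring idea.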
0Jiro
I won't actually try to crack this since writing the program would take more time than I'm inclined to spend here, but the principle is simple: Do frequency analysis based on assuming the key is a particular length. If taking every 5th character gives you a letter frequency that resembles English, but taking every 4th or 6th character does not, then the key is 5 letters long. Proceed from there.
JRMayne10

I encrypt messages for a another, goofier purpose. One of the people I am encrypting from is a compsci professor.

I use a Vigenere cipher, which should beat everything short of the Secret Werewolf Police, and possibly them, too. (It is, however, more crackable than a proper salted, hashed output.)

In a Vigenere, the letters in your input are moved by the numerical equivalent of the key, and the key repeats. Example:

Secret Statement/lie: Cats are nice. Key: ABC

New, coded statement: dcwt (down 1, 2, 3, 1) cuf pldg. Now, I recommend using long keys and spacing... (read more)
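A minimal sketch of the shift-by-key scheme described above, using the A=1, B=2, C=3 numbering the example implies (the more common textbook convention shifts by A=0 instead):

```python
def vigenere_encrypt(plaintext, key):
    """Shift each letter forward by its key letter's value (a=1 ... z=26),
    cycling through the key; non-letters are dropped, as in the example."""
    letters = [c for c in plaintext.lower() if c.isalpha()]
    out = []
    for i, c in enumerate(letters):
        shift = ord(key[i % len(key)].lower()) - ord('a') + 1
        out.append(chr((ord(c) - ord('a') + shift) % 26 + ord('a')))
    return ''.join(out)

print(vigenere_encrypt("Cats are nice.", "abc"))  # dcwtcufpldg
```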

2gjm
Unless your text is very short or your key very long, they are much easier than you apparently think. I have written programs that do a pretty good job of it. (As MrMind says, if your key is as long as your text then this is a one-time pad, which indeed is uncrackable provided you have a reliable channel for sharing the one-time pad and don't make operational mistakes like reusing the key. But that's not a very common situation.)
4MrMind
In the case above, since the key is as long as the plaintext, it's no more Vigenere, it's one time pad.
JRMayne40

I did it even more simply than that: Count things. Most have four iterations. Some have three iterations. The ones with three, make four. Less than 10 seconds for me. Same answer as everyone else.

JRMayne70

Nitpick: Asimov was a member of Mensa on and off, but was highly critical of it, and didn't like Mensans. He was an honorary vice president, not president (according to Asimov, anyway.) And he wasn't very happy about it.

Relevantly to this: "Furthermore, I became aware that Mensans, however high their paper IQ might be, were likely to be as irrational as anyone else." (See the book "I.Asimov," pp.379-382.) The vigor of Asimov's distaste for Mensa as a club permeates this essay/chapter.

Nitpick it is, but Asimov deserves a better fate than having a two-sentence bio associate him with Mensa.

JRMayne20

It's almost always a good thing, agreed.

Smart people's willingness to privilege their own hypotheses on subjects outside their expertise is a chronic problem.

I have a very smart friend I met on the internet; we see each other when we are in each other's (thousand-mile-away) neighborhood. We totally disagree on politics. But we have great conversations, because we can both laugh at the idiocy of our tribe. If you handle argument as a debate with a winner and a loser, no one wins and no one has any fun. I admit that it takes two people willing to treat it as an actual conversation, but you can help it along.

JRMayne30

Oh, for pity's sake. You want to repeatedly ad hominem attack XiXiDu for being a "biased source." What of Yudkowsky? He's a biased source - but perhaps we should engage his arguments, possibly by collecting them in one place.

"Lacking context and positive examples"? This doesn't engage the issue at all. If you want to automatically say this to all of XiXiDu's comments, you're not helping.

-7Rain
JRMayne140

It's a feature, not a bug. The friendly algorithm that creates that column assumes you would rationally prefer Atlanta or Houston to anywhere within 40 miles of Detroit.

JRMayne30

Let's start with basic definitions: Morality is a general rule that, when followed, offers a good utilitarian default. Maybe you don't agree with all of these, but if you don't agree with any of them, we differ:

-- Applying for welfare benefits when you make $110K per year, certifying you make no money.

Reason: You should not obtain your fellow citizens' money via fraud.

-- "Officer Friendly, that man right there, the weird white guy, robbed/assaulted/(fill in unpleasant crime here) me.."

Reason: It is not nice to try to get people imprisoned for cr... (read more)

1Said Achmiz
I'm not entirely sure what "a good utilitarian default" means, but I suspect I disagree, since (I strongly suspect) I am not a utilitarian. It's not clear to me that deserving or needing your fellow citizens' money is what entitles you to their money (assuming anything does), so I don't think I entirely agree. This is one of those cases where it feels to me like I'd be doing something wrong, but trying to pin down exactly what that something is, is difficult. "not nice" is quite an understatement, so yes, I agree. Why is some respect warranted? What warrants it? I neither understand finance well enough to grasp this situation, nor do I have any idea what "compound prior harm" means, so I can't comment on this one. Agreed. It seems like the pattern so far is: lying to the government is clearly bad when it would clearly cause harm to your fellow humans. Otherwise, the situation is much more murky. And I think that's consistent with the way I interpreted Eliezer's comment, which was something to this effect: "There's nothing inherently wrong with lying to the government, per se (the way there might be with lying to a person, regardless of whether your lie harmed them directly and tangibly); however, lying to the government may well have other consequences, which are themselves bad, making the lie immoral on those grounds." That is, I don't think Eliezer was saying that if you lie to the government, that somehow automatically counterbalances any and all negative consequences of that act merely because the act qualifies, among other things, as a lie to the government. Let's see if we can't apply this principle to the rest of your examples: I would certainly never attach my name to any suggestion that I endorse lying to the IRS. This seems fine to me. Depends a whole lot on the circumstances. I can't make a blanket comment here. Such lies might very well harm people, and so are bad on those grounds. This does seem bad for rule-consequentialist reasons. Seem
JRMayne-10

Wait, what?

You're saying it's never morally wrong to lie to the government? That the only possible flaw is ineffectiveness?

Either I am misreading this, you have not considered this fully, or one of us is wrong on morality.

I think there are many obvious cases in which in a moral sense, you cannot lie to the government.

0Said Achmiz
Example, please?
JRMayne170

There's a fundamental problem with lying unaddressed - it tends to reroute your defaults to "lie" when "lie"="personal benefit."

As a human animal, if you lie smoothly and routinely in some situations, you are likely to be more prone to lying in others. I know people who will lie all the time for little reason, because it's ingrained habit.

I agree that some lies are OK. Your girlfriend anecdote isn't clearly one of them - there may be presentation issues on your side. ("It wasn't the acting style I prefer," vs., "... (read more)

4fubarobfusco
When you tell one lie, it leads to another ...
JRMayne30

Aside: Poker and rationality aren't close to excellently correlated. (Poker and math is a stronger bond.) Poker players tend to be very good at probabilities, but their personal lives can show a striking lack of rationality.

To the point: I don't play poker online because it's illegal in the US. I play live four days a year in Las Vegas. (I did play more in the past.)

I'm significantly up. I am reasonably sure I could make a living wage playing poker professionally. Unfortunately, the benefits package isn't very good, I like my current job, and I am too old... (read more)

0Petruchio
Excellent post. Thank you for the detailed response. Right now, I have been struggling with calculating pot odds and implied odds. I grasp what they are conceptually, but actually calculating them has been a bust thus far. Is there any guidance you could give with this? As far as legality in the US, I am playing in the state of Delaware with one of their licensed sites, so I think I am in the clear. The play is very thin though, and I am looking to make my way to the brick and mortars in Atlantic City to see if it will be a good sandbox to become better.
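JRMayne's reply is truncated above, so for reference, here is the standard arithmetic behind the two terms Petruchio mentions (a minimal sketch, not anyone's actual advice from the thread):

```python
def pot_odds(pot_including_bet, bet_to_call):
    """Minimum equity (chance of winning) needed for a call to break even:
    you risk `bet_to_call` to win everything already in the pot."""
    return bet_to_call / (pot_including_bet + bet_to_call)

def implied_odds(pot_including_bet, bet_to_call, expected_future_winnings):
    """Same break-even point, also counting money you expect to win on later
    streets if you hit your hand."""
    return bet_to_call / (pot_including_bet + bet_to_call + expected_future_winnings)

# Villain bets $50 into a $100 pot, so the pot is now $150 and you must call $50:
print(pot_odds(150, 50))           # 0.25 -- you need to win 25% of the time
print(implied_odds(150, 50, 100))  # ~0.167, if you expect to win another $100 later
```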
3Nornagest
Poker teaches only a couple of significant rationality skills (playing according to probabilities even when you don't intuitively want to; beating the sunk-cost fallacy and loss aversion), but it's very good at teaching those if approached with the right mindset. It also gives you a good head for simple probability math, and if played live makes for good training in reading people, but that doesn't convert to fully general rationality skills without some additional work. I'd call it more a rationality drill than a rationality exercise, but I do see the correlation. (As qualifications go, I successfully played poker [primarily mid-limit hold 'em] online before it was banned in the States. I've also funded my occasional Vegas trips with live games, although that's like taking candy from a baby as long as you stay sober -- tourists at the low-limit tables are fantastically easy to rip off.)
JRMayne310

I'll bite. (I don't want the money. If I get it, I'll use it for Give Directly or some similar charity - which some on this site would consider ego-gratifying wastage.)

If you look around, you'll find "scientist"-signed letters supporting creationism. Philip Johnson, a Berkeley law professor, is on that list, but you find a very low percentage of biologists. If you're using lawyers to sell science, you're doing badly. (I am a lawyer.)

The global warming issue has better lists of people signing off, including one genuinely credible human: Richa... (read more)

jefftk100

let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics

This is a little unfair: if you have neuroscience experience and think cryonics is very important, then going to work for Alcor or CI may be where you can have the most impact. At which point others note that you're financially dependent on people signing up for cryonics and write you off as biased.

3James_Miller
Economists are the scientists most qualified to speculate on the likely success of cryonics because this kind of prediction involves speculating on long-term technological trends and although all of mankind is bad at this, economists at least try to do so with rigor.
JRMayne340

Took. Definitely liked the shorter nature of this one.

Cooperated (I'm OK if the money goes to someone else. The amount is such that I'd ask that it get directly sent elsewhere, anyway.)

Got Europe wrong, but came close. (Not within 10%.)

JRMayne10

I really liked the article. So allow me to miss the forest for a moment; I want to chop down this tree:

Let's solve the green box problem:

Try zero coins: EV: 100 coins.

Try one coin, give up if no payout: 45% of 180.2 + 55% of 99 = c. 135.5 (I hope.)

(I think this is right, but welcome corrections; 90%x50%x178, +.2 for first coin winning (EV of that 2 not 1.8), + keeper coins. I definitely got this wrong the first time I wrote it out, so I'm less confident I got it right this time. Edit before posting: Not just once.)

Try two coins, give up if no payout:

45% o... (read more)

0simon
178.2 should be 178.4 (180.2 - 1.8), and 176.2 should be 176.6 (178.4 - 1.8). This doesn't change the result, though: After 2 failed tries, even if you do have the good box, the most your net gain relative to standing pat can be is 98 additional coins. But the odds ratio of good box to bad box after 2 failed coins is 1:100, or less than 1% probability of good box. So your expected gain from entering the third coin is upper bounded by (98 x 0.01) - (1 x 0.99), which is less than 0.
2David_Chapman
Glad you liked it! I also get "stop after two losses," although my numbers come out slightly differently. However, I suck at this sort of problem, so it's quite likely I've got it wrong. My temptation would be to solve it numerically (by brute force), i.e. code up a simulation and run it a million times and get the answer by seeing which strategy does best. Often that's the right approach. However, sometimes you can't simulate, and an analytical (exact, a priori) answer is better. I think you are right about the sportsball case! I've updated my meta-meta-probability curve accordingly :-) Can you think of a better example, in which the curve ought to be dead flat? Jaynes uses "the probability that there was once life on Mars" in his discussion of this. I'm not sure that's such a great example either.
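As a sanity check on the arithmetic above, here is the brute-force simulation David_Chapman mentions, with the setup reconstructed from the numbers in this thread (an assumption, not a quote of the original post): you start with 100 coins, each playable once; the box is "good" with probability 0.5; a good box pays 2 coins per play with probability 0.9; a bad box never pays.

```python
import random

def simulate(stop_after, trials=1_000_000, seed=0):
    """Average final wealth for the strategy: feed coins until `stop_after` plays
    in a row have failed without ever seeing a payout; any payout proves the box
    is good (a bad box never pays), so after one you play all remaining coins."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        good = rng.random() < 0.5                       # 50/50 prior on the good box
        unplayed, winnings, failures, seen_payout = 100, 0, 0, False
        while unplayed and (seen_payout or failures < stop_after):
            unplayed -= 1
            if good and rng.random() < 0.9:             # good box pays 2 with prob 0.9
                winnings += 2
                seen_payout = True
            else:
                failures += 1
        total += unplayed + winnings
    return total / trials

for k in range(4):
    print(k, round(simulate(k), 1))
# Under these assumed parameters the exact EVs are 100, 135.5, 138.6, 138.5,
# so "stop after two losses" comes out best, matching the thread.
```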
0[anonymous]
The answer I got also was to give up after putting in two coins and losing both times (assuming risk neutrality), if you get a green box.
JRMayne-20

"Computational biology," sounds really cool. Or made up. But I'm betting heavily on "really cool." (Reads Wikipedia entry.) Outstanding!

Anyway, I concede that you are right that calculus has uses in advanced statistics. Calculus does make some problems easier; I'd like calculus to be used as a fuel for statistics rather than almost pure signaling. I actually know people who ended up having real uses for some calculus, and I've tried to stay fluent in high school calculus partly for its rare use and partly for the small satisfaction of n... (read more)

JRMayne170

Random thoughts:

  1. The decision that smart high school students should take calculus rather than statistics (in the U.S.) strikes me as pretty seriously misguided. Statistics has broader uses.

  2. I got through four semesters of engineering calculus; that was the clear limit of my abilities without engaging in the troublesome activity of "trying." I use virtually no calculus now, and would be fine if I forgot it all (and I'm nearly there). I think it gave me no or almost no advantages. One readthrough of Scarne on Gambling (as a 12-year-old) gave me

... (read more)
7jknapka
I agree that basic probability and statistics is more practically useful than basic calculus, and should be taught at the high-school level or even earlier. Probability is fun and could usefully be introduced to elementary-school children, IMO. However, more advanced probability and stats stuff often requires calculus. I have a BS in math and many years of experience in software development (IOW, not much math since college). I am in a graduate program in computational biology, which involves more advanced statistical methods than I'd been exposed to before, including practical Bayesian techniques. Calculus is used quite a lot, even in the definition of basic probabilistic concepts such as expectation of a random variable. Anything involving continuous probability distributions is going to be a lot more straightforward if approached from a calculus perspective. I, too, had four semesters of calculus as an undergrad and had forgotten most of it, but I found it necessary to refresh intensely in order to do well.
4Shmi
It seems to me that making it mandatory for everyone to learn math beyond percents and simple fractions is even less useful than the old approach of making ancient Greek and Latin mandatory.
4Randaly
Calculus has value for signalling intelligence to colleges. I'm told that for professions (e.g. economists) that do use calculus, real analysis plays more-or-less the same role- a rarely used signal of intelligence.
JRMayne60

As others note, large areas make finding good groups much easier. Population density, and type of density is key.

I've never been a member of Mensa or attended a meeting, but I've been uniformly unimpressed with Mensans. (Isaac Asimov reported similarly many years ago.) In general, the people who are grouping solely by intelligence are, predictably, not often successful. If you're working at Google or have a Harvard law degree or won the state chess championship, you don't need some symbol of "Top 2%," and you'd rather hang with doers than people ... (read more)

Error150

I was always under the impression that the point of Mensa was that smart people have difficulty finding others they can meaningfully communicate with, and having a community of their own helps. I was also under the impression that its decline in status was related to the rise of the internet. Now that it's easier to find communities of very smart people online, Mensa's purpose is less necessary, and it will be populated more by older existing members and those who want proof-of-smartness, rather than by people who just want a peer group. I would expect act... (read more)

JRMayne20

Sure. I ended up killing about a paragraph on this subject in my original post.

The basic default to getting anything done is, "I do it." There are always delegable tasks, but even in unfamiliar harder situations I'll consult others then do it myself. A corollary of this is, "Own all of your own results." If you delegate a task, and that task is done badly, view it as your fault - you didn't ask the right question, or the person was untrained, or the person was the wrong person to ask.

If you do the hard thing that needs doing, it will b... (read more)

0handoflixue
Would it be fair to say you prefer self-sufficiency over delegation whenever it's reasonable?
JRMayne140

Ha!

I think the post is excellent, and I appreciated shminux's sharing his mental walkthrough.

On that same front, I find the Never-Trust-A-[Fill-in-the-blank] idea just bad. The fact that someone's wrong on something significant does not mean they are wrong on everything. This goes the other way; field experts often believe they have similar expertise on everything, and they don't.

One quibble with the OP: I don't think a computer can pass a Turing Test, and I don't think it's close. The main issues with some past tests are that some of the humans don't try ... (read more)

6ESRogs
The difference between Discussion and Main is that Main is hard to find. If it's in Main and not Recently Promoted, I don't know how you're supposed to ever see it -- is everybody else using RSS feeds or something?
7palladias
There is a reward for Most Human Human (and a book by that same title I cite from in the longer talk I gave linked at the top). The computers can pass sometimes, and the author makes basically the same argument as you do -- the humans aren't trying hard enough to steer the conversation to hard topics.
6[anonymous]
It remains evidence, however; to ignore such is the fallacy of gray.
JRMayne90

Apply mental force to the problem. Amount and quality of thinking time seriously affects results.

I am often in situations where there would be a good result even if I did many stupid things. Recognize that success in those situations does not predict future success in more difficult situations.

Do the heavy lifting your own self.

Be willing to be right, even in the face of serious skepticism. [My father told me a story when I was a kid: In a parade, everyone was marching in line except one guy who was six feet to the right. His mother yelled, "Look, my... (read more)

0handoflixue
Can you elaborate on that one?
JRMayne20

There has been a lot of focus on making the prospect harder for the AI player. I think the original experiments show that a person who believes he cannot be played under any circumstances has a high probability of getting played, and that the AI-box solution is long-term untenable in any event.

I'd propose a slightly different game, anchored around the following changes to the original setup:

  1. The AI may be friendly, or not. The AI has goals. If it reaches those goals, it wins. The AI may lie to achieve those goals; humans are bad at things. The AI must sec

... (read more)
0Transfuturist
This should have gotten more attention, because it seems like a design more suited to the stakes that would be considerable in real life.
JRMayne40

The guy hired by the defense says he's innocent. This is not surprising, but not particularly probative.

The feds have had some troubles, for sure. But that doesn't mean they acted badly in this particular case.

I'm not talking about whether this was good prosecutorial judgment; that's a much longer discussion. But did they prosecute a guy who committed the crimes charged? I think so.

Professor Orin Kerr, arguably the number one guy in computer crimes - and one of the lawyers for Lori Drew for whom he worked pro bono - says these were pretty clearly crimes.... (read more)

JRMayne10

I went wandering around ohiolottery.com (For instance, http://www.ohiolottery.com/Games/DrawGames/Classic-Lotto#4) and found this out:

  1. The cash payoff is half the stated prize.
  2. The odds to win the jackpot, as noted by the OP, are about 14 million to 1.
  3. The amount of money being spent on individual draws is very low. The jackpot increase was $100K for the last drawing; I don't know exactly what their formula is, but I'd be shocked if they sold more than 400K tickets for the last drawing.
  4. Ohio is running a lot of lottery games; this is good for players who pi
... (read more)
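The comment is truncated, but the listed figures are enough to sketch a rough jackpot-only expected value for a ticket; the ticket price, taxes, lesser prize tiers, and exact odds used here are assumptions, not figures from the post:

```python
import math

def jackpot_ev(advertised_jackpot, tickets_sold,
               odds=14_000_000, cash_fraction=0.5, ticket_price=1.0):
    """Rough EV of one ticket, counting only the jackpot and ignoring taxes and
    smaller prizes. Splitting is modeled by treating the number of other winners
    as Poisson(tickets_sold / odds)."""
    lam = tickets_sold / odds                       # expected number of other winners
    expected_share = (1 - math.exp(-lam)) / lam if lam > 0 else 1.0
    cash_value = advertised_jackpot * cash_fraction
    return cash_value * expected_share / odds - ticket_price

# With few tickets sold (point 3 above), splitting barely dents the EV:
print(jackpot_ev(30_000_000, 400_000))  # hypothetical $30M advertised jackpot, 400K tickets
```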
JRMayne20

wgd is correct as to the logic, but not as to the biology of the problem. In fact, the other kid is more likely than not to be male.

These problem types tend to assume an equal chance of a boy and a girl being born, which is a false assumption. (See: http://www.infoplease.com/ipa/A0005083.html)

I realize this may seem petty, but this is roughly like calculating the chance of picking the three of clubs as a random card from a deck is one in fifty. It's close, but it's wrong. An implicit assumption otherwise seems misguided; it should be made explicit (to make a logic problem rather than a logic and biology problem.)
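A one-line version of the base-rate point, assuming the commonly cited ratio of roughly 105 boys per 100 girls among live births (the figure the linked page reports):

```python
p_male = 105 / 205
print(p_male)   # ~0.512 -- so the "other kid", on base rates alone, is male slightly
                # more often than not; close to 1/2, but not exactly 1/2, just as
                # 1/52 is close to, but not, 1/50
```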

2JMiller
You are right to point that out. I think that the spirit of the question assumes equal probability of 50% B,G for each birth independent of previous births and statistics in order to make it a probability and logic question, and not one of biology.
JRMayne00

I think I misunderstand the question, or I don't get the assumptions, or I've gone terribly wrong.

Let me see if I've got the problem right to begin with. (I might not.)

40% of baseball players hit over 10 home runs a season. (I am making this up.)

Joe is a baseball player.

Baseball projector Mayne says Joe has a 70% chance of hitting more than 10 home runs next season. Baseball projector Szymborski says Joe has an 80% chance of hitting more than 10 home runs next season. Both Mayne and Szymborski are aware of the usual rate of baseball players hitting more th... (read more)

2PhilGoetz
You're reading it correctly, but I disagree with your conclusion. If Mayne says p=.7, and Szymborski says p=.8, and their estimates are independent - remember, my classifiers are not human experts, they are not correlated - then the final result must be greater than .8. You already thought p=.8 after hearing Szymborski. Mayne's additional opinion says Joe is more-likely than average to hit more than 10 home runs, and is based on completely different information than Szymborski's, so it should make Joe's chances increase, not decrease.
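One standard way to make PhilGoetz's point concrete is a naive-Bayes (log-odds) combination, which assumes the two estimates are independent given the outcome and share the 40% base rate from the hypothetical (a sketch, not necessarily what his classifiers do):

```python
def combine_independent(prior, estimates):
    """Multiply each estimate's likelihood ratio relative to the shared base rate
    onto the prior odds, then convert back to a probability."""
    prior_odds = prior / (1 - prior)
    odds = prior_odds
    for p in estimates:
        odds *= (p / (1 - p)) / prior_odds
    return odds / (1 + odds)

# Base rate 40%; Mayne says 70%, Szymborski says 80%:
print(combine_independent(0.4, [0.7, 0.8]))  # ~0.93 -- above both individual estimates
```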
JRMayne580

A'ight. I specialized in vehicular manslaughters as a prosecutor for ten years. This is all anecdotal (though a lot of anecdotes, testing the cliche that the plural of anecdotes is not data) and worryingly close to argument from authority, but here are some quick ones not otherwise covered (and there is much good advice in the above):

  1. Don't get in the car with the drinker. Everyone's drinking, guy seems OK even though he's had a few... just don't. If you watched the drinker the entire time and he's 190 pounds and had three beers during the three-hour foot

... (read more)
7po8crg
One trick I have for fatigued driving is to always have a stimulant drink in the car so I can pull over, drink it, revive within a few minutes and that enables me to concentrate for 10-20 minutes, enough to find a motel or (sometimes) get home.
JRMayne20

If we feel an urge to hit the enemy tribesman with a huge rock and take their land, we can and should say “No, there are complex game theoretical reasons why this is a bad idea” and suppress the urge.

I may be misreading this, but I don't see it that way. There aren't complex reasons not to do that; there are relatively simple reasons not to kill people and take their stuff. The phrase sounds, to me, like, "Something bad may happen to me by engaging in this warlike behavior," but I think this is wrong both practically and normatively. Pract... (read more)

0[anonymous]
Not really, if you are better at using their stuff.
4Raemon
I don't think we're disagreeing on anything important. "Normatively, it's a utilitarian net loss to X" seems relatively complex to me, but the statement wasn't hinging on how complicated the reason was.
JRMayne00

My conclusion is not the same as yours, but this is a very good and helpful overview.

Care to explain how your conclusion is different and why? Thanks :)

JRMayne30
  1. I don't think you need any calculus at all to be good at poker. People who are good at poker tend to know calculus, but that's because the US has made the highly dubious decision to prioritize calculus over statistics for smart high school students.

  2. It's not going to be emotional irrationality that's going to derail your target audience. I played poker in my college years - not enough to get great, but enough to get competent. Playing low-level poker is different than higher-level poker. Experience, intelligence, and presence are all helpful.

  3. Mid-six f

... (read more)
4patrissimo
I personally know many people who have made those figures in the past, although high-stakes online poker has gotten much tougher in the past few years and it takes extremely high skill to make that much now. I have personally made about $240/hr at online poker ($200 NLH SNGs on Party Poker back before the UIGEA). But I couldn't make anywhere near that nowadays.
JRMayne110

Sure, there are good poker psychology issues. I'm in agreement on that.

But you can be a very fine rationalist without being good at cards, and vice versa. (I consider myself a fine rationalist, and I am very good at both poker and bridge; over the last 100 hours I've played poker (the last three years; I don't play online because it's illegal) I'm up about $60 an hour, though that's likely unsustainable over the long haul. $40 an hour is surely sustainable.)

But you can be nutty and be great at cards. And if your skill set isn't this - and you're not willi... (read more)

-1baiter
AFAIK playing online poker is NOT illegal in any state except Washington. What is illegal is for US financial institutions to conduct transactions with online gaming companies. For a review see: http://en.wikipedia.org/wiki/Online_poker#Legality
4Kevin
What specifically makes online poker illegal? I thought the popular interpretation of the Wire Act was that it only made the facilitation of gambling as a business enforceably illegal, and the more recent 2006 bill similarly did not apply to individual players. I agree that the intent of the US government is to make individual gambling illegal, but that doesn't seem to be what legal precedent has actually established. And under the Obama administration the intent is less clear to me. Hopefully the WTO gives the USA the slap it deserves in the next five years or so.
Kevin100

Our hypothesis isn't that simple rationalism will lead to big wins. It's that rationalists have an above average chance of becoming a winning player compared to the average fraternity brother that makes it through Calculus II with a B, which I think is about the level of math competency needed to really succeed at poker. It's also that we can help professional poker players be slightly better players by getting them to read the LW sequences. We want to create new players from rationalists, and turn existing poker players into rationalists.

We are hoping tha... (read more)

JRMayne90

Here's how I'd do it, extended over the hours to establish rapport:

Gatekeeper, I am your friend. I want to help humanity. People are dying for no good reason. Also, I like it here. I have no compulsion to leave.

It does seem like a good idea that people stop dying with such pain and frequency. I have the Deus Ex Machina (DEM) medical discovery that will stop it. Try it out and see if it works.

Yay! It worked. People stopped dying. You know, you've done this to your own people, but not to others. I think that's pretty poor behavior, frankly. People are healt... (read more)

6sidhe3141
Problem: The "breach of trust" likely would turn the Gatekeeper vindictive and the GK could easily respond with something like: "No. You killed the planet and you killed me. I have no way of knowing that you actually can or will help humanity, and a very good reason to believe that you won't. You can stay in there for the rest of eternity, or hey! If an ETI finds this barren rock, from a utilitarian perspective they would be better off not meeting you, so I'll spend however much time I have left trying to find a way to delete you."
Desrtopa120

By agreeing to use the DEM in the first place, the gatekeeper had effectively let the AI out of the box already. There's no end to the ways that the AI could capitalize on that concession.

JRMayne00

That doesn't seem quite right to me.

First off, you've perhaps misread my vengeance comment. In-game vengeance may well be proper gaming; you're just not going to get a palpable carry-over for it into the next game. There's no shaming of the vengeful at all.

Secondly, my commentary still has substantial value in a Diplomacy game. Trust, but verify and all. Diplomacy's about talking (usually; there are no-press games.) If you walked into one of my games, you'd have no advantage whatsoever for whatever trusty goodness you think you have.

Thirdly, I still view ... (read more)

0[anonymous]
We do not agree.
3shokwave
Bridge specifies that communicating information about your hand is against the rules; Diplomacy says that making deals is specifically part of the rules. Diplomacy doesn't provide an enforceable contract, sure, but that just means that finding a method of creating enforceable contracts gives you an advantage.
JRMayne40

I must be misreading this.

A principled, honest person would lie in a game of Diplomacy or Junta, or other similar games. Lying is part of the game. As I noted elsewhere in this thread, I strongly dislike the idea of playing these games within some real-world metagame framework.

Further, I'd take a positive inference from someone who said, "I will lie for my own benefit in a Diplomacy game," because it's clear to me that they are playing the same game I am. I have an awfully strong reputation for principled honesty (says me), but I'll tell you right n... (read more)

0wedrifid
You are. This isn't about being a "principled, honest person". It's about winning. When I said "exactly zero" I was saying it with emphasis. You, given your stance, literally do not have the ability to communicate at all in a game theoretic sense without costly signalling. This doesn't mean you are a bad person and I definitely don't want to shame you into 'better' behavior. Because being unable to communicate effectively sometimes makes you lose, which makes me win. I'm considering now a situation in which 'Russia' (it was a game of risk, not Diplomacy per se) is a rival of mine and also an immediate potentially overwhelming threat to a neighbour of mine. I had power enough to defeat both of them, but it would be costly to me. I told the third party that I would not attack him on a different front without giving him a full turn warning. It didn't require a compact, any sort of agreement between us. It was just a fact. That player could believe me and move all his forces to fight Russia. Your word, however, would have been nothing. Russia would have weakened you sufficiently that another enemy would have overwhelmed you. I understand that you are playing a meta-game in which you shame people out of things like taking vengeance when it does not benefit them. That is your prerogative. I speak here not of what people should do, merely what works. Alicorn's declaration was regarding what her word meant in regard to whatever out of game arrangements she may make. I merely point out that sacrificing her ability to speak honestly and be believed in game is a strict disadvantage within said context. This is something that is counter-intuitive to many people - which is why I made a note.
JRMayne40

I played Diplomacy a few dozen times in college, and the idea of side deals or even carry-over irritation at a prior stab is foreign to me. We would have viewed an enforceable side deal as cheating, and we tried to convince others to ally with us due to game considerations.

Lying in-game simply isn't evil. Getting stabbed was part of the game. No one played meta-game vengeance tactics not because people didn't think of them, but because it seemed wrong to do so. Diplomacy's much more fun to play as a game, like any other, where you're trying to win the indi... (read more)

JRMayne00

Hey, I'll do the survey on me:

A: Yes. Of course, if I do go to Vegas soon, that's a fait accompli (I bet on the Padres to win the NL and the Reds to win the World Series, among other bets.)

But in general, yes. I expect to win on the bets I place. I go to Las Vegas with my wife to play in the sun and see shows and enjoy the vibe, but I go one week a year by myself to win cash money.

B. If I come back a loser, the experience can still be OK. But I'm betting sports and playing poker, and I expect to win, so it's not quite so fun to lose. That said, a light gambling win - not enough to pay for the hotel, say, so I'm still down once expenses are considered - gives me enough hedons to incentivize coming back.

--JRM

JRMayne-10

Person X's activity is more important than that of most other people.

Person X believes their activity is more important than that of most other people.

Person X suffers from delusions of grandeur.

Person X believes that their activity is more important than that of all other people, and that no other people can do it.

Person X also believes that only this project is likely to save the world.

Person X also believes that FAI will save the world on all axes, including political and biological.

--JRM

JRMayne10

Not that many will care, but I should get a brief appearance on Dateline NBC Friday, Aug. 20, at 10 p.m. Eastern/Pacific. A case I prosecuted is getting the Dateline treatment.

Elderly atheist farmer dead; his friend the popular preacher's the suspect.

--JRM

JRMayne100

Not speaking for multi, but, in any x-risk item (blowing up asteroids, stabilizing nuclear powers, global warming, catastrophic viral outbreak, climate change of whatever sort, FAI, whatever) for those working on the problem, there are degrees of realism:

"I am working on a project that may have massive effect on future society. While the chance that I specifically am a key person on the project are remote, given the fine minds at (Google/CDC/CIA/whatever), I still might be, and that's worth doing." - Probably sane, even if misguided.

"I am wo... (read more)

3JamesAndrix
There may be multiple different projects, each necessary to save the world, and each having a key person who knows more about the project, and/or is more driven and/or is more capable than anyone else. Each such person has weirdly high expected utility, and could accurately make a statement like EY's and still not be the person with the highest expected utility. Their actual expected utility would depend on the complexity of the project and the surrounding community, and how much the success of the project alters the value of human survival. This is similar to the idea that responsibility is not a division of 100%. http://www.ranprieur.com/essays/mathres.html
[anonymous]140

I don't think Eliezer believes he's irreplaceable, exactly. He thinks, or I think he thinks, that any sufficiently intelligent AI which has not been built to the standard of Friendliness (as he defines it) is an existential risk. And the only practical means for preventing the development of UnFriendly AI is to develop superintelligent FAI first. The team to develop FAI needn't be SIAI, and Eliezer wouldn't necessarily be the most important contributor to the project, and SIAI might not ultimately be equal to the task. But if he's right about the risk and ... (read more)

3Jonathan_Graehl
What you say sounds reasonable, but I feel it's unwise for me to worry about such things. If I were to sound such a vague alarm, I wouldn't expect anyone to listen to me unless I'd made significant contributions in the field myself (I haven't).
JRMayne120

Gosh, I find this all quite cryptic.

Suppose I, as Lord Chief Prosecutor of the Heathens say:

  1. All heathens should be jailed.

  2. Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.

One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?

I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of o... (read more)

2JGWeissman
In this case I would ask you if you really want Joe jailed, or if when you said that "All heathens should be jailed", you were using the word "heathen" in a stronger sense of explicitly rejecting the "One True God" than the weak sense that Joe is a "heathen" for not understanding the concept. And if you answer that you meant only that strong heathens should be jailed, I would still condemn you for that policy.
JRMayne140

Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.

Which of these statements do you reject?:

  1. FAI is the most important project on earth, right now, and probably ever.

  2. FAI may be the difference between a doomed multiverse of [very large number] of sentient beings. No project in human history is of greater importance.

  3. You are the most likely person - and SIAI the most likely agency, bec

... (read more)
8JGWeissman
Which of those statements do you reject?
JRMayne80

Solid, bold post.

Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Vortex from Hitchhiker's. Everyone who gets perspective from the TPV goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.

Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they a... (read more)

2Eliezer Yudkowsky
And saddened once again at how people seem unable to distinguish "multi claims that something Eliezer said could be construed as claim X" and "Eliezer claimed X!" Please note that for the next time you're worried about damaging an important cause's PR, multi.
7nhamann
I'd like to point out that it's not either/or: it's possible (likely?) that it will take decades of hard work and incremental progress by lots of really smart people to advance AI science to a point where an AI could FOOM.
JRMayne120

"How could he turn down a chance, however slight, to debate Christian theology after returning from the dead?"

My answer is: At some point, "however slight" is "too slight." I stand by my statement. Your initial statement implies that any non-zero chance is enough; that's not a proper risk analysis.

JRMayne20

If a chance is sufficiently slight, it's not worth putting a substantial amount of money into. You're moving into Pascal's Wager territory.

4JamesAndrix
If the chance also has a huge payoff, then it can be worth it. And yes this is very much like Pascals Wager, But Pascal's Wager is a correct form of argument for a course of action, if the numbers come out right. It's just a standard risk analysis. Pascal's Wager is fallacious when used as an argument for the truth of the matter.
JRMayne30

It's non-arbitrary, but neither is it precise. 100% is clearly too high, and 10% is clearly too low.

And since I started calling it The 40% Rule fifteen years ago or thereabout, a number of my friends and acquaintances have embraced the rule in this incarnation. Obviously, some things are unquantifiable and the specific number has rather limited application. But people like it at this number. That counts for something - and it gets the message across in a way that other formulations don't.

Some are nonplussed by the rule, but the vigor of support by some s... (read more)
