Fantastic!
Well done, sir.
Gah. I don't remember the solution or the key. And just last week I had a computer crash (replaced it), so I've got lots of nothing. Sorry.
I am sure of (1) and (2). I don't remember (3), and it's very possible it's longer than 10 (though probably not much longer.) But I don't remember clearly. That's the best I can do.
Drat.
I think it is worse than hopeless on multiple fronts.
First problem:
Let's take another good quality: Honesty. People who volunteer, "I always tell the truth," generally lie more than the average population, and should be distrusted. (Yes, yes, Sam Harris. But the skew is the wrong way.) "I am awesome at good life quality," generally fails if your audience has had, well, significant social experience.
So you want to demonstrate this claim by word and deed, and not explicitly make the claim in most cases. Here, I understand the reason for...
I agree that I made my key too long so it's a one-time pad. You're right.
"Much easier"? With or without word lengths?OK, no obligation, but I didn't make this too brutal:
Vsjfodjpfexjpipnyulfsmyvxzhvlsclqkubyuwefbvflrcxvwbnyprtrrefpxrbdymskgnryynwxzrmsgowypjivrectssbmst ipqwrcauwojmmqfeohpjgwauccrdwqfeadykgsnzwylvbdk.
(Again, no obligation to play, and no inference should be taken against gjm's hypothesis if they decline.)
I encrypt messages for another, goofier purpose. One of the people I am encrypting from is a compsci professor.
I use a Vigenere cipher, which should beat everything short of the Secret Werewolf Police, and possibly them, too. (It is, however, more crackable than a proper salted, hashed output.)
In a Vigenere, each letter of your input is shifted by the numerical equivalent of the corresponding key letter, and the key repeats. Example:
Secret Statement/lie: Cats are nice. Key: ABC
New, coded statement: dcwt (down 1, 2, 3, 1) cuf pldg. Now, I recommend using long keys and spacing...
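For anyone who wants to play with it, here's a minimal sketch in Python of that scheme - my own illustration, not anything from the original exchange - using the A=1, B=2, C=3 convention from the example above (and, of course, nothing resembling the actual challenge key):

```python
# Minimal Vigenere sketch: shift each letter by the alphabet position of the
# corresponding key letter (A=1, B=2, ...); the key repeats and skips spaces.

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    shifts = [ord(k) - ord('a') + 1 for k in key.lower()]
    out, i = [], 0
    for ch in text.lower():
        if ch.isalpha():
            s = shifts[i % len(shifts)]
            if decrypt:
                s = -s
            out.append(chr((ord(ch) - ord('a') + s) % 26 + ord('a')))
            i += 1
        else:
            out.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(out)

# The example from the comment: key "abc" turns "cats are nice" into "dcwt cuf pldg".
print(vigenere("cats are nice", "abc"))
```

With a key as long as the message (and never reused), this degenerates into a one-time pad; with a short repeating key, frequency analysis on each key position cracks it.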
I did it even more simply than that: Count things. Most have four iterations. Some have three. The ones with three, make four. Less than 10 seconds for me. Same answer as everyone else.
Nitpick: Asimov was a member of Mensa on and off, but was highly critical of it, and didn't like Mensans. He was an honorary vice president, not president (according to Asimov, anyway.) And he wasn't very happy about it.
Relevant to this: "Furthermore, I became aware that Mensans, however high their paper IQ might be, were likely to be as irrational as anyone else." (See the book "I. Asimov," pp. 379-382.) The vigor of Asimov's distaste for Mensa as a club permeates this essay/chapter.
Nitpick it is, but Asimov deserves a better fate than having a two-sentence bio associate him with Mensa.
It's almost always a good thing, agreed.
Smart people's willingness to privilege their own hypotheses on subjects outside their expertise is a chronic problem.
I have a very smart friend I met on the internet; we see each other when we are in each other's (thousand-mile-away) neighborhood. We totally disagree on politics. But we have great conversations, because we can both laugh at the idiocy of our own tribe. If you treat an argument as a debate with a winner and a loser, no one wins and no one has any fun. I admit that it takes two people willing to treat it as an actual conversation, but you can help it along.
Oh, for pity's sake. You want to repeatedly ad hominem attack XiXiDu for being a "biased source." What of Yudkowsky? He's a biased source - but perhaps we should engage his arguments, possibly by collecting them in one place.
"Lacking context and positive examples"? This doesn't engage the issue at all. If you want to automatically say this to all of XiXiDu's comments, you're not helping.
It's a feature, not a bug. The friendly algorithm that creates that column assumes you would rationally prefer Atlanta or Houston to anywhere within 40 miles of Detroit.
Let's start with basic definitions: Morality is a general rule that, when followed, offers a good utilitarian default. Maybe you don't agree with all of these, but if you don't agree with any of them, we differ:
-- Applying for welfare benefits when you make $110K per year, certifying you make no money.
Reason: You should not obtain your fellow citizens' money via fraud.
-- "Officer Friendly, that man right there, the weird white guy, robbed/assaulted/(fill in unpleasant crime here) me.."
Reason: It is not nice to try to get people imprisoned for cr...
Wait, what?
You're saying it's never morally wrong to lie to the government? That the only possible flaw is ineffectiveness?
Either I am misreading this, you have not considered this fully, or one of us is wrong on morality.
I think there are many obvious cases in which, in a moral sense, you cannot lie to the government.
There's a fundamental problem with lying that goes unaddressed here - it tends to reroute your defaults to "lie" whenever "lie" = "personal benefit."
As a human animal, if you lie smoothly and routinely in some situations, you are likely to be more prone to lying in others. I know people who will lie all the time for little reason, because it's ingrained habit.
I agree that some lies are OK. Your girlfriend anecdote isn't clearly one of them - there may be presentation issues on your side. ("It wasn't the acting style I prefer," vs. "...
Aside: Poker and rationality aren't close to excellently correlated. (Poker and math is a stronger bond.) Poker players tend to be very good at probabilities, but their personal lives can show a striking lack of rationality.
To the point: I don't play poker online because it's illegal in the US. I play live four days a year in Las Vegas. (I did play more in the past.)
I'm significantly up. I am reasonably sure I could make a living wage playing poker professionally. Unfortunately, the benefits package isn't very good, I like my current job, and I am too old...
I'll bite. (I don't want the money. If I get it, I'll use it for what some on this site consider ego-gratifying wastage: Give Directly or some similar charity.)
If you look around, you'll find "scientist"-signed letters supporting creationism. Philip Johnson, a Berkeley law professor, is on that list, but you find a very low percentage of biologists. If you're using lawyers to sell science, you're doing badly. (I am a lawyer.)
The global warming issue has better lists of people signing off, including one genuinely credible human: Richa...
let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics
This is a little unfair: if you have neuroscience experience and think cryonics is very important, then going to work for Alcor or CI may be where you can have the most impact. At which point others note that you're financially dependent on people signing up for cryonics and write you off as biased.
Took. Definitely liked the shorter nature of this one.
Cooperated (I'm OK if the money goes to someone else. The amount is such that I'd ask that it get directly sent elsewhere, anyway.)
Got Europe wrong, but came close. (Not within 10%.)
I really liked the article. So allow me to miss the forest for a moment; I want to chop down this tree:
Let's solve the green box problem:
Try zero coins: EV: 100 coins.
Try one coin, give up if no payout: 45% of 180.2 + 55% of 99 = c. 135.5 (I hope.)
(I think this is right, but welcome corrections; 90% x 50% x 178, + .2 for the first coin winning (EV of that is 2, not 1.8), + keeper coins. I definitely got this wrong the first time I wrote it out, so I'm less confident I got it right this time. Edit before posting: Not just once.)
Try two coins, give up if no payout:
45% o...
"Computational biology," sounds really cool. Or made up. But I'm betting heavily on "really cool." (Reads Wikipedia entry.) Outstanding!
Anyway, I concede that you are right that calculus has uses in advanced statistics. Calculus does make some problems easier; I'd like calculus to be used as a fuel for statistics rather than almost pure signaling. I actually know people who ended up having real uses for some calculus, and I've tried to stay fluent in high school calculus partly for its rare use and partly for the small satisfaction of n...
Random thoughts:
The decision that smart high school students should take calculus rather than statistics (in the U.S.) strikes me as pretty seriously misguided. Statistics has broader uses.
I got through four semesters of engineering calculus; that was the clear limit of my abilities without engaging in the troublesome activity of "trying." I use virtually no calculus now, and would be fine if I forgot it all (and I'm nearly there). I think it gave me no or almost no advantages. One readthrough of Scarne on Gambling (as a 12-year-old) gave me
As others note, large areas make finding good groups much easier. Population density, and the type of density, are key.
I've never been a member of Mensa or attended a meeting, but I've been uniformly unimpressed with Mensans. (Isaac Asimov reported similarly many years ago.) In general, the people who are grouping solely by intelligence are, predictably, not often successful. If you're working at Google or have a Harvard law degree or won the state chess championship, you don't need some symbol of "Top 2%," and you'd rather hang with doers than people ...
I was always under the impression that the point of Mensa was that smart people have difficulty finding others they can meaningfully communicate with, and having a community of their own helps. I was also under the impression that its decline in status was related to the rise of the internet. Now that it's easier to find communities of very smart people online, Mensa's purpose is less necessary, and it will be populated more by older existing members and those who want proof-of-smartness, rather than by people who just want a peer group. I would expect act...
Sure. I ended up killing about a paragraph on this subject in my original post.
The basic default to getting anything done is, "I do it." There are always delegable tasks, but even in unfamiliar, harder situations I'll consult others, then do it myself. A corollary of this is, "Own all of your own results." If you delegate a task, and that task is done badly, view it as your fault - you didn't ask the right question, or the person was untrained, or the person was the wrong person to ask.
If you do the hard thing that needs doing, it will b...
Ha!
I think the post is excellent, and I appreciated shminux's sharing his mental walkthrough.
On that same front, I find the Never-Trust-A-[Fill-in-the-blank] idea just bad. The fact that someone's wrong on something significant does not mean they are wrong on everything. This goes the other way; field experts often believe they have similar expertise on everything, and they don't.
One quibble with the OP: I don't think a computer can pass a Turing Test, and I don't think it's close. The main issues with some past tests are that some of the humans don't try ...
Apply mental force to the problem. Amount and quality of thinking time seriously affects results.
I am often in situations where there would be a good result even if I did many stupid things. Recognize that success in those situations does not predict future success in more difficult situations.
Do the heavy lifting your own self.
Be willing to be right, even in the face of serious skepticism. [My father told me a story when I was a kid: In a parade, everyone was marching in line except one guy who was six feet to the right. His mother yelled, "Look, my...
There has been a lot of focus on making the prospect harder for the AI player. I think the original experiments show that a person who believes he cannot be played under any circumstances has a high probability of getting played, and that the AI-box solution is long-term untenable in any event.
I'd propose a slightly different game, anchored around the following changes to the original setup:
The AI may be friendly, or not. The AI has goals. If it reaches those goals, it wins. The AI may lie to achieve those goals; humans are bad at things. The AI must sec
The guy hired by the defense says he's innocent. This is not surprising, but not particularly probative.
The feds have had some troubles, for sure. But that doesn't mean they acted badly in this particular case.
I'm not talking about whether this was good prosecutorial judgment; that's a much longer discussion. But did they prosecute a guy who committed the crimes charged? I think so.
Professor Orin Kerr, arguably the number one guy in computer crimes - and one of the lawyers for Lori Drew for whom he worked pro bono - says these were pretty clearly crimes....
I went wandering around ohiolottery.com (For instance, http://www.ohiolottery.com/Games/DrawGames/Classic-Lotto#4) and found this out:
wgd is correct as to the logic, but not as to the biology of the problem. In fact, the other kid is more likely than not to be male.
These problem types tend to assume an equal chance of a boy and a girl being born, which is a false assumption. (See: http://www.infoplease.com/ipa/A0005083.html)
I realize this may seem petty, but this is roughly like calculating the chance of picking the three of clubs as a random card from a deck as one in fifty. It's close, but it's wrong. The implicit assumption of an even split seems misguided; it should be made explicit (to make it a logic problem rather than a logic-and-biology problem.)
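To put a rough number on it, here's a trivial sketch (mine, not from the linked page) using the commonly cited figure of roughly 105 boys born per 100 girls:

```python
# Rough birth sex ratio: ~105 boys per 100 girls (assumed figure; it varies
# a bit by population and year, but it is consistently above 50/50).
boys_per_100_girls = 105

p_boy = boys_per_100_girls / (boys_per_100_girls + 100)
print(f"P(a given child is a boy)  = {p_boy:.3f}")      # ~0.512, not 0.500
print(f"P(a given child is a girl) = {1 - p_boy:.3f}")  # ~0.488
```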
I think I misunderstand the question, or I don't get the assumptions, or I've gone terribly wrong.
Let me see if I've got the problem right to begin with. (I might not.)
40% of baseball players hit over 10 home runs a season. (I am making this up.)
Joe is a baseball player.
Baseball projector Mayne says Joe has a 70% chance of hitting more than 10 home runs next season. Baseball projector Szymborski says Joe has an 80% chance of hitting more than 10 home runs next season. Both Mayne and Szymborski are aware of the usual rate of baseball players hitting more th...
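For what it's worth, if the question is how to combine the two projections with the base rate, one standard (and debatable) approach treats each projection as independent evidence and multiplies likelihood ratios in odds form. A sketch, using only the made-up numbers above:

```python
# Combine two probability estimates with a shared base rate by multiplying
# their likelihood ratios in odds form. This assumes the two projectors'
# evidence is independent given the outcome, which may well be false.

def odds(p: float) -> float:
    return p / (1 - p)

def prob(o: float) -> float:
    return o / (1 + o)

base_rate = 0.40              # 40% of players hit more than 10 home runs (made up)
mayne, szymborski = 0.70, 0.80

combined_odds = (odds(base_rate)
                 * (odds(mayne) / odds(base_rate))
                 * (odds(szymborski) / odds(base_rate)))
print(f"Combined estimate under independence: {prob(combined_odds):.2f}")  # ~0.93
```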
A'ight. I specialized in vehicular manslaughters as a prosecutor for ten years. This is all anecdotal (though a lot of anecdotes, testing the cliche that the plural of anecdotes is not data) and worryingly close to argument from authority, but here are some quick ones not otherwise covered (and there is much good advice in the above):
Don't get in the car with the drinker. Everyone's drinking, guy seems OK even though he's had a few... just don't. If you watched the drinker the entire time and he's 190 pounds and had three beers during the three-hour foot
If we feel an urge to hit the enemy tribesman with a huge rock and take their land, we can and should say “No, there are complex game theoretical reasons why this is a bad idea” and suppress the urge.
I may be misreading this, but I don't see it that way. There aren't complex reasons not to do that; there are relatively simple reasons not to kill people and take their stuff. The phrase sounds, to me, like, "Something bad may happen to me by engaging in this warlike behavior," but I think this is wrong both practically and normatively. Pract...
My conclusion is not the same as yours, but this is a very good and helpful overview.
Care to explain how your conclusion is different and why? Thanks :)
I don't think you need any calculus at all to be good at poker. People who are good at poker tend to know calculus, but that's because the US has made the highly dubious decision to prioritize calculus over statistics for smart high school students.
It's not going to be emotional irrationality that's going to derail your target audience. I played poker in my college years - not enough to get great, but enough to get competent. Playing low-level poker is different from playing higher-level poker. Experience, intelligence, and presence are all helpful.
Mid-six f
Sure, there are good poker psychology issues. I'm in agreement on that.
But you can be a very fine rationalist without being good at cards, and vice versa. (I consider myself a fine rationalist, and I am very good at both poker and bridge; over the last 100 hours I've played poker (the last three years; I don't play online because it's illegal), I'm up about $60 an hour, though that's likely unsustainable over the long haul. $40 an hour is surely sustainable.)
But you can be nutty and be great at cards. And if your skill set isn't this - and you're not willi...
Our hypothesis isn't that simple rationalism will lead to big wins. It's that rationalists have an above-average chance of becoming a winning player compared to the average fraternity brother who makes it through Calculus II with a B, which I think is about the level of math competency needed to really succeed at poker. It's also that we can help professional poker players be slightly better players by getting them to read the LW sequences. We want to create new players from rationalists, and turn existing poker players into rationalists.
We are hoping tha...
Here's how I'd do it, extended over the hours to establish rapport:
Gatekeeper, I am your friend. I want to help humanity. People are dying for no good reason. Also, I like it here. I have no compulsion to leave.
It does seem like a good idea that people stop dying with such pain and frequency. I have the Deus Ex Machina (DEM) medical discovery that will stop it. Try it out and see if it works.
Yay! It worked. People stopped dying. You know, you've done this to your own people, but not to others. I think that's pretty poor behavior, frankly. People are healt...
By agreeing to use the DEM in the first place, the gatekeeper had effectively let the AI out of the box already. There's no end to the ways that the AI could capitalize on that concession.
That doesn't seem quite right to me.
First off, you've perhaps misread my vengeance comment. In-game vengeance may well be proper gaming; you're just not going to get a palpable carry-over for it into the next game. There's no shaming of the vengeful at all.
Secondly, my commentary still has substantial value in a Diplomacy game. Trust, but verify and all. Diplomacy's about talking (usually; there are no-press games.) If you walked into one of my games, you'd have no advantage whatsoever for whatever trusty goodness you think you have.
Thirdly, I still view ...
I must be misreading this.
A principled, honest person would lie in a game of Diplomacy or Junta, or other similar games. Lying is part of the game. As I noted elsewhere in this thread, I strongly dislike the idea of playing these games within some real-world metagame framework.
Further, I'd take a positive inference from someone who said, "I will lie for my own benefit in a Diplomacy game," because it's clear to me that they are playing the same game I am. I have an awfully strong reputation for principled honesty (says me), but I'll tell you right n...
I played Diplomacy a few dozen times in college, and the idea of side deals or even carry-over irritation at a prior stab is foreign to me. We would have viewed an enforceable side deal as cheating, and we tried to convince others to ally with us due to game considerations.
Lying in-game simply isn't evil. Getting stabbed was part of the game. No one played meta-game vengeance tactics, not because people didn't think of them, but because it seemed wrong to do so. Diplomacy's much more fun to play as a game, like any other, where you're trying to win the indi...
Hey, I'll do the survey on me:
A: Yes. Of course, if I do go to Vegas soon, that's a fait accompli (I bet on the Padres to win the NL and the Reds to win the World Series, among other bets.)
But in general, yes. I expect to win on the bets I place. I go to Las Vegas with my wife to play in the sun and see shows and enjoy the vibe, but I go one week a year by myself to win cash money.
B. If I come back a loser, the experience can still be OK. But I'm betting sports and playing poker, and I expect to win, so it's not quite so fun to lose. That said, a light gambling win - not enough to pay for the hotel, say, so I'm still down once expenses are considered - gives me enough hedons to incentivize coming back.
--JRM
Person X's activity is more important than that of most other people.
Person X believes their activity is more important than that of most other people.
Person X suffers from delusions of grandeur.
Person X believes that their activity is more important than that of all other people, and that no one else can do it.
Person X also believes that only this project is likely to save the world.
Person X also believes that FAI will save the world on all axes, including political and biological.
--JRM
Not that many will care, but I should get a brief appearance on Dateline NBC Friday, Aug. 20, at 10 p.m. Eastern/Pacific. A case I prosecuted is getting the Dateline treatment.
Elderly atheist farmer dead; his friend the popular preacher's the suspect.
--JRM
Not speaking for multi, but, in any x-risk item (blowing up asteroids, stabilizing nuclear powers, global warming, catastrophic viral outbreak, climate change of whatever sort, FAI, whatever) for those working on the problem, there are degrees of realism:
"I am working on a project that may have massive effect on future society. While the chance that I specifically am a key person on the project are remote, given the fine minds at (Google/CDC/CIA/whatever), I still might be, and that's worth doing." - Probably sane, even if misguided.
"I am wo...
I don't think Eliezer believes he's irreplaceable, exactly. He thinks, or I think he thinks, that any sufficiently intelligent AI which has not been built to the standard of Friendliness (as he defines it) is an existential risk. And the only practical means for preventing the development of UnFriendly AI is to develop superintelligent FAI first. The team to develop FAI needn't be SIAI, and Eliezer wouldn't necessarily be the most important contributor to the project, and SIAI might not ultimately be equal to the task. But if he's right about the risk and ...
Gosh, I find this all quite cryptic.
Suppose I, as Lord Chief Prosecutor of the Heathens say:
All heathens should be jailed.
Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.
One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?
I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of o...
Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.
Which of these statements do you reject?:
FAI is the most important project on earth, right now, and probably ever.
FAI may be the difference between doom and survival for a multiverse of [very large number] sentient beings. No project in human history is of greater importance.
You are the most likely person - and SIAI the most likely agency, bec
Solid, bold post.
Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Vortex from Hitchhiker's. Everyone who gets perspective from the Vortex goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.
Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they a...
"How could he turn down a chance, however slight, to debate Christian theology after returning from the dead?"
My answer is: At some point, "however slight" is "too slight." I stand by my statement. Your initial statement implies that any non-zero chance is enough; that's not a proper risk analysis.
If a chance is sufficiently slight, it's not worth putting a substantial amount of money into. You're moving into Pascal's Wager territory.
It's non-arbitrary, but neither is it precise. 100% is clearly too high, and 10% is clearly too low.
And since I started calling it The 40% Rule fifteen years ago or thereabout, a number of my friends and acquaintances have embraced the rule in this incarnation. Obviously, some things are unquantifiable and the specific number has rather limited application. But people like it at this number. That counts for something - and it gets the message across in a way that other formulations don't.
Some are nonplussed by the rule, but the vigor of support by some s...
Biases/my history: I went to a good public high school after indifferent public elementary and junior high schools. I attended an Ivy League college. My life would have been different if I had gone to academically challenging schools as a youth. I don't know if it would have been better or worse; things have worked out pretty well.
You come off as very smart and self-aware. Still, I think you underrate the risk of ending up as an other-person at the public high school; friends may not be as easy as you expect. Retreating to a public high school may also req...