All of burger_flipper2's Comments + Replies

I doubt we'd find much comfort in the government's plans for an A-bomb in San Fran, and probably much less in their execution.

Figuring out how to do the cube was the highlight of 7th grade. Didn't have to use a cheater book, but my method wasn't going to win any speed contests.

My wife got me a picture cube a year or so back and it was fun playing around with it again. It came back really quickly, though I hadn't touched one in a couple of decades. But I couldn't always solve it. Sometimes I'd get one center square 180 degrees off, and the only way I could fix it was by totally scrambling the cube and re-solving. Sometimes it took 3 or 4 rescrambles.

So do I know how to solve a picture cube?
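(For anyone stuck the same way: the lone 180-degree center is a known picture-cube position, and, assuming standard face-turn notation, a commonly cited fix is to repeat the four-move sequence R U R' U five times in a row; that rotates the top-face center 180 degrees while leaving every other piece where it was, so no rescramble is needed.)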

"Burger I think you overestimate the effect of agreeing to be an organ donor."

That's disappointing. I assumed, with all the calls to allow payments to increase organ donations, that the ones I'm giving gratis would get used and provide benefit.

And since there is the possibility of eternal life with cryonics, why isn't there a Pascal's-wager-type situation? Not saying you should don a bubble-wrap suit, but I'd think you would avoid convertibles, motorcycles, and other potential brain-liquefiers.
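(Sketching the wager: write p for the chance cryonics works and L for the value of the extra life it buys; the expected cost of a brain-liquefying death then scales with p × L, so for any non-trivial p a large enough L swamps whatever everyday payoff the convertible or motorcycle offers.)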

Bambi, I'll grant you that eating your vegetables and smoking aren't mutually exclusive, but I do wonder about the rationality of a smoker who makes certain to take their vitamins daily.

And as to the organs, I was thinking more of the potential recipients' suffering and not that of the hospitals.

I'm curious about a couple of things.

If this is a rational choice, why does Robin jeopardize his future driving around in a convertible? And if you cannot be frozen and also donate organs, how do you justify that morally?

Did anyone ever carry through on the AI-box thing? Whether anyone did is telling about the community's prospects without the lead personalities.

Let it go. With a few exceptions, non-Hanson/EY posts tend to be weak.

What? The pic of the sirens is back? I got excited when EY hinted he and RH were building to a discussion of their differences, just in time for Hanson to make good on his promise to quit.

I assumed the blank header of the last few days was for suspense building, pending a site relaunch. (And it was kinda working.)

For anyone who encounters trouble with the three links to the speaker page, here's the direct address: http://yudkowsky.net/contact/speaking

"The lack of a log is key. The Gatekeeper could not be convinced if the log were made public."

I think the project loses a lot of interest if no logs are published. There is no glory for a gatekeeper victory. Plenty for an AI.

Why not keep the gatekeeper anonymous but announce the AI?

"Given that I think my chances are somewhere below 1/4, I'd expect my $5 to match your $20"

We need a pledge drive to set up a fund for a successful AI. This will give the AI a reasonable return, but not give gatekeepers a strong monetary disincentive that leaves them typing "nope" over and over again.

I have ten bucks for the first AI that defeats a gatekeeper (while risking some dough) and posts a link to the transcript here.

Trevj
I would like to play an AI.

How about this one:

My guess is that Eliezer Yudkowsky feels that nobody can convince him to publish the transcripts.

How about, with the same protocols as the original experiment, someone wagers $10 over IRC chat to convince him to publish the transcripts? Somebody as the AI and Eliezer as the gatekeeper.

Any takers?

-Erik

I wonder if a sinecure isn't a similar pitfall for someone who's out to save the world.

Roland, that's a clever twist and I like it. I would not pony up any $, but I'd expect him to be able to raise it, and I wouldn't set out for California armed to the teeth on a Sarah Connor mission to stop him either. So I'd fail to recognize and execute my role as gatekeeper by your rules.

But I do think there's a flaw in the scenario. For it to truly parallel the AI box, the critter either needs to stay in its cage or get out. I do agree with the main thrust of the original post here and built into your scenario is the assumption that EY has some sort ... (read more)

Roland, I'd certainly be willing to play gatekeeper, but if you have such a concise argument, why not just proffer it here for all to see?

Yet it's referred to as "humanly impossible" in the link (granted this may be cheeky).

Who is the target audience for this AI box experiment info? Who is detached enough from biases to weigh the avowals as solid evidence without further description, yet not detached enough to see they themselves might have fallen for it? Seems like most people capable of the first could also see the second.

Interesting choice to use the A.I. box experiment as an example for this post, when the methods used by EY in it were not revealed. Whatever the rationale for keeping it close to the vest, not showing how it was done struck me as an attempt to build mystique, if not to appear magical.

This post also seems a little inconsistent with EY’s assistant researcher job listing, which said something to the effect that only those with 1 in 100k g need apply, though those with 1 in 1000 could contribute to the cause monetarily. The error may be mine in this instance, because I may be in the minority when I assume someone who claims to have Einstein’s intelligence is not claiming anything like 1 in 100k g.

TraderJoe
Why would you need any g to contribute money?
Eliezer Yudkowsky
blink blink Whaaa? Is this saying you think Einstein had substantially less than 1 in 100,000 general intelligence? That seems like a severe underestimate. 1 in 1e5 really isn't much; there should be 70,000 people in the world like that. There isn't a small city full of Einsteins. I've gotten back standardized test reports showing higher percentiles than that.

This reminds me of the time somebody asked me if I considered myself a genius and I asked them to define genius as a fraction of the population. "1 in 100,000? 1 in 1 million?" I inquired. And they said, "1 in 300," to which my reply was to just laugh.

Or am I reading it the wrong way around, i.e., Einstein is much above this level? If so, I wouldn't think more than a couple of orders of magnitude above, like 1 in 1,000,000 or 1 in 10,000,000. Other factors than native g will be decisive past that point.

Cialdini also seems to have put out the same info in a textbook (which does not read like one), "Influence: Science and Practice." Amazon reviews say it is nearly identical, except it has chapter reviews and problems. I only mention this because this is the version that was available at my 2 nearest library systems. Very good reading a quarter of the way in, so thanks for the tip.

EY: what other books are in the "own 3 copies" club?

I can point you to an example where a book found a real publisher, because it did fairly well on Lulu and was written by someone with an internet following:

http://www.amazon.com/Seagalogy-Study-Ass-Kicking-Steven-Seagal/dp/1845769279/ref=pd_bbs_1?ie=UTF8&s=books&qid=1205436229&sr=8-1

All he had to do was pull the Lulu edition. I would think the overlap between your 500-page technical work and a popular book wouldn't be much greater than that between Freakonomics and Levitt's journal articles, and I don't think he lost too many sales because peop... (read more)

Follow-up on the poker player's results (he put up $10K because he was convinced Intrade was easy to beat):

Can't sleep, so, postmortem:

I did two things very right, and one thing wrong. Almost all of my predictions prior to 2/5 were dead on, and they largely continued to be once the count actually began, except for one thing: because they fit my preconceptions and what I hoped the last minute vote was doing, I trusted Drudge's leaked polls. As such, I immediately jumped into the Dem market with both feet when I was previously committed to staying away from i... (read more)

Adanthar, a poker pro who helped break the Absolute scandal and who, on a lark, developed a "robotic" small-stakes algorithm that supposedly returned approximately 20% before it became well known, put down $10k and has been updating his Intrade progress here: http://forumserver.twoplustwo.com/showthread.php?t=88375

I'm interested to see how he'll do today.

Dawes gives a very similar 2-gamble example of a money pump on p. 105 of Rational Choice.
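(For anyone without the book, a generic two-gamble pump runs roughly like this, though Dawes' own numbers may differ: say you pick gamble A over gamble B in a straight choice, yet would pay more for B than for A; a bookie can then sell you B at your high price, swap it for A at your request, buy A back at your low price, and start the loop again, pocketing the difference each lap.)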

Mamet: "The stoics wrote that the excellent king can walk through the streets unguarded. Our contemporary Secret Service spends tens of millions of dollars every time the president and his retinue venture forth.

Mythologically, the money and the effort are spent not to protect the president's life--all our lives are fragile--but to protect the body politic against the perception that his job is ceremonial, and that for all our attempts to invest it with real power--the Monroe Doctrine, the War Powers Act, the "button"--there's no one there but us."

This coming Monday at burger-flipping central (Norman, OK) there is going to be a bi/non-partisan pep rally convened by David Boren and some other cheerleaders. They've invited each R and D presidential candidate (so far only magic undies has agreed to come) along with Bloomberg. They plan to elicit pledges for some tangible plan for bipartisanship, or create a justification for Bloomberg to go 3rd party (my understanding is he is fiscally conservative and socially liberal). I'm going to have to shut down the q'ing ovens, dig the heat lamps out of the dumpster, pre-make about 700 Macs, and play hooky to be there. Some of us more discriminating politicos don't cheer for blue or green. We wanna see the clown juggle at halftime.

"I briefly thought to myself: 'I bet most people would be experiencing 'stage fright' about now. But that wouldn't be helpful, so I'm not going to go there.'"

This is part of EY's history? Both this and his reaction to 9/11, ticking off a series of thoughts in robotic fashion, strike me as unlikely, given my experience being a human and watching others.

I don't know what to label this, whether it is an attempt to establish authority by seeming exceedingly rational, remembering events in a way that pleases him, something close to the truth, or something else completely different.

But it does strike me as both odd and unlikely.

Relatively new to the forum and just watched the 2 1/2 hour Yudkowsky video on Google. Excellent talk that really helped frame some of the posts here for me, though the audience questions were generally a distraction.

My biggest disappointment was that the one question that popped into my mind while watching, and was actually posed, wasn't answered because it would take about 5 minutes. The man who asked was told to pose it again at the end of the talk, but did not.

This was the question about the friendly AI: "Why are you assuming it knows the outcome of its modifications?"

Any pointer to the answer would be much appreciated.

I'm with McCabe-- what was the epiphany?

So is the propensity to say, "I knew it instantaneously" a kissing cousin of the hindsight bias?

p=.02 that the first 3 conscious thoughts were, sequentially: "I guess I really am living in the Future," "Thank goodness it wasn't nuclear," and then "The overreaction to this will be ten times worse than the original event."

I can see the utility in starting off the post with such a narrative (grabbing attention and establishing Svengali authority), and don't doubt those 3 thoughts popped up fairly quickly, in one form or another.

I know it's effective, but I expect a little better.

I've always used motorcycle fatalities as the yardstick to put it in perspective; 9/11 came up just short.

I suspected we might be in trouble when they floated the story that Bush didn't return to Washington because of a credible threat to Air Force One: a threat in which the supposed terrorists were more concerned with establishing credibility than carrying out their attack, and thus used some sort of code word that only someone with inside knowledge would have.

It was perfectly reasonable for Bush to put a half dozen states between himself and the most like... (read more)