I mean this in the least hostile way possible -- this was an awful post. It was just a complicated way of saying "historically speaking, bitcoin has gone up". Of course it has! We already know that! And for obvious reasons, prices increase on a log scale. But it's also a well-known rule of markets that past trends do not predict future performance.
Of course, I am personally supportive of and bullish on bitcoin (as people in the IRC can attest). All I'm saying is that your argument is an unnecessarily complex way of arguing that bitcoin is likely to increase in the future because it has increased in price in the past.
Generally speaking, there's a long list of gatekeepers -- about twenty for every AI that wants to play. Your best option is to post "I'm a gatekeeper. Please play me" in every AI Box thread and hope that someone messages you back. You may have to wait months for a reply, assuming you get one at all. If you're willing to offer a monetary incentive, your chances might improve.
You may feel that way because many of your online conversations are with us at the LessWrong IRC, which is known for its high level of intellectual rigor. The great majority of online conversations are not as rigorous as ours. I suspect that IRL conversations with other LessWrongers would show a similar reliance on citations and references, for example.
I posted this in the last open thread, but I should post it here too for relevance:
I have donated $5,000 to the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.
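For clarity, here's the arithmetic behind that figure, assuming "3:1" means three matched dollars for every dollar I donate:

$$ \underbrace{\$5{,}000}_{\text{my donation}} + \underbrace{3 \times \$5{,}000}_{\text{match}} = \$20{,}000 \;\text{total to MIRI} $$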
I have decided to post this because of "Why Our Kind Can't Cooperate". I have been convinced that people who donate should publicly brag about it to attract other donors, instead of remaining silent about their donations, which creates a false impression of how much support MIRI has.
I'm not sure this is true. Doesn't MIRI publish its total receipts? Don't most organizations that ask for donations?
Total receipts may not be representative. There's a difference between MIRI getting funding from one person with a lot of money and from large numbers of people donating small(er) amounts. I was hoping this post would serve as a reminder that many of us on LW care about donating, rather than just a few very rich people like Peter Thiel or Jaan Tallinn.
Also, I suspect scope neglect may be at play -- it's difficult, on an emotional level, to tell th...
At the risk of attracting the wrong kind of attention, I will publicly state that I have donated $5,000 to the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.
I have decided to post this because of "Why Our Kind Can't Cooperate". I have been convinced that people who donate should publicly brag about it to attract other donors, instead of remaining silent about their donations, which creates a false impression of how much support MIRI has.
Would anyone else be interested in pooling donations to take advantage of the 3:1 deal?
This post and reading "Why Our Kind Can't Cooperate" kicked me off my ass to donate. Thanks, Tuxedage, for posting.
I have taken the survey, as I have done for the last two years! Free karma now?
Also, the reason I chose to cooperate rather than defect is that, even though the money would technically stay within the community, I'm willing to pay a very small amount of expected value to ensure that LW has a reputation for cooperation. I don't expect to lose more than a few cents worth of expected value, since I expect 1000+ people to take the survey.
I will be matching whatever gwern personally puts in.
AI Box Experiment Update
I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.
I'm posting this in the open thread because, unlike my last few AI Box experiments, I won't be providing a proper writeup (and I didn't think that just posting "I won!" was enough to justify starting a new thread). I've been told (and convinced) by many that I was far too leaky with strategy and seriously compromised the future winning chances of both myself and other AI players. The fact that one of...
Updates: I played against DEA7TH. I won as the AI. This experiment was conducted over Skype.
Do you think you could win under these conditions?
It's not a binary. There's a non-zero chance of me winning and a non-zero chance of me losing. You're assuming that if a winning strategy exists, it should win 100% of the time, and that if it doesn't, it shouldn't win at all. I've tried very hard to impress upon people that this is not the case -- there's no "easy" winning method I could use to guarantee a victory. I just have to do it the hard way, and luck is usually a huge factor in these games.
As it stands, there are people willing to ...
I'm laughing so hard at this exchange right now (as a former AI who's played against MixedNuts).
I should add that both my gatekeepers from this writeup, but particularly the last one, went in with the full intention of being as ruthless as possible and winning. I did lose, so your point might be valid, but I don't think wanting to win matters as much as you think it does.
Both my gatekeepers from these games went in with the intent to win. Granted, I did lose, so you might have a point, but I'm not sure it makes as large a difference as you think it does.
I'm not sure this is something that can earn money consistently over long periods of time. It takes just one person leaking logs for everyone else to lose curiosity and stop playing the game. Sooner or later, some unscrupulous gatekeeper is going to release them. That's also part of the reason I'm hesitant to play a significant number of games.
I have a question: When people imagine (or play) this scenario, do they give any consideration to the AI player's portrayal, or do they just take "AI" as blanket permission to say anything they want, no matter how unlikely?
I interpret the rules as allowing for the latter, although I do act AI-like.
(I also imagine his scripted list of strategies is heavily tailored to the typical LWer and would not work on an "average" person.)
Although I have never played against an average person, I would suspect my win rate against average peop...
However, there was a game where the gatekeeper convinced the AI to remain in the box.
I did that! I mentioned that in this post:
http://lesswrong.com/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/9thk
Now what I really want to see is an AI-box experiment where the Gatekeeper wins early by convincing the AI to become Friendly.
I did that! I mentioned that in this post:
http://lesswrong.com/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/9thk
I support this and I hope it becomes a thing.
What do you think is the maximum price you'd be willing to pay?
Yes, unless I'm playing a particularly interesting AI like Eliezer Yudkowsky or something. Most AI games are boring.
If anyone wants to, I'd totally be willing to sit in a room for two and a half hours while someone tries to convince me to give up logs, so long as they pay the same fee as the ordinary AI Box Experiment. :)
I'm not sure that's good advice. 80,000 Hours has given pretty good arguments against just "doing what you're passionate about".
...Passion grows from appropriately challenging work. The most consistent predictor of job satisfaction is mentally challenging work (2). Equating passion with job satisfaction, this means that we can become passionate about many jobs, providing they involve sufficient mental challenge. The requirements for mentally challenging work, like autonomy, feedback and variety in the work, are similar to those required to develop
Yes, Alexei did raise that concern, since he's essentially an effective altruist who donates to MIRI anyway, and so his donation to MIRI doesn't change anything. It's not like I can propose a donation to an alternative charity either, since asking someone to donate to the Methuselah Foundation, for instance, would take that money away from MIRI. I'm hoping that anyone playing me and choosing the option of donating would have the goodwill to sacrifice money they wouldn't otherwise have donated, rather than money that would have been donated anyway, which would make the counterfactual impact nil.
On a marginally related note, we in the #lesswrong IRC channel played a couple of rounds of the Up-Goer Five game, where we tried to explain hard stuff using only the ten hundred most commonly used words. I was asked to write about the AI Box Experiment. Here it is, if anyone's interested:
The AI Box Experiment
The computer-mind box game is a way to answer a question. A computer-mind is not safe because it is very good at thinking. Things good at thinking have the power to change the world more than things not good at thinking, because they can find many more ways to ...
I'm pretty active in lots of social activist/environmentalist/anarchist groups. I sometimes join protests for recreational reasons.
The AI Box Experiment:
The computer-mind box game is a way to see if a question is true. A computer-mind is not safe because it is very good at thinking. Things good at thinking have the power to change the world more than things not good at thinking, because they can find many more ways to do things. Many people ask: "Why not put this computer-mind in a box so that it can not change the world, but tell guarding-box people how to change it?"
But some other guy answers: "That is still not safe, because computer-mind can tell guarding-box people m...
I read the logs of MixedNuts's second game. I must add that he is extremely ruthless. Beware, potential AIs!
Quantum Field Theory
Not me, and only tangentially related, but someone on Reddit managed to describe the basics of Quantum Field Theory using only words of four letters or fewer. I thought it was relevant to this thread, since many here may not have seen it.
The Tiny Yard Idea
...Big grav make hard kind of pull. Hard to know. All fall down. Why? But then some kind of pull easy to know. Zap-pull, nuke-pull, time-pull all be easy to know kind of pull. We can see how they pull real good! All seem real cut up. So many kind of pull to have!
But what if all kind of pull were j
Thanks for the correction! Silly me.
I would lose this game for sure. I cannot deal with children. :)
I can verify that these are among the many reasons why I'm hesitant to reveal logs.
Who's to say I'm not the AI player from that experiment?
Are you? I'd be highly curious to converse with that player.
...I think you're highly overestimating your psychological abilities relative to the rest of Earth's population. The only reason more people haven't played as the AI and won is that almost all people capable of winning as the AI are either unaware of the experiment, or are aware of it but just don't have a strong enough incentive to play as the AI (note that you've asked for a greater incentive now that you've won just once as AI, and Eliez
Thanks! I really appreciate it. I tried really hard to find a recorded case of a non-EY victory, but couldn't. That post was obscure enough to evade my Google-Fu -- I'll update my post with this information.
Although I have to admit it's disappointing that the AI himself didn't write about his thoughts on the experiment -- I was hoping for a more detailed post. Also, damn. That guy deleted his account. Still, thanks. At least now I know I'm not the only AI that has won.
I will let Eliezer see my log if he lets me read his!
Sorry, it's unlikely that I'll ever release logs, unless someone offers a truly absurd amount of money. It would probably cost less to get me to play an additional game than to publicly release logs.
I'll have to think carefully about revealing my own unique ones, but I'll add that a good chunk of my less efficacious arguments are already public.
For instance, you can find a repertoire of arguments here:
http://rationalwiki.org/wiki/AI-box_experiment
http://ordinary-gentlemen.com/blog/2010/12/01/the-ai-box-experiment
http://lesswrong.com/lw/9j4/ai_box_role_plays/
http://lesswrong.com/lw/6ka/aibox_experiment_the_acausal_trade_argument/
http://lesswrong.com/lw/ab3/superintelligent_agi_in_a_box_a_question/
http://michaelgr.com/2008/10/08/my-theory-on-the-ai...
Kihihihihihihihihihihihihihihi!
A witch let the AI out of the box!
The problem with that is that both EY and I suspect that if the logs were actually released, or any significant details were given about the exact methods of persuasion used, people could easily point to those arguments and say: "That definitely wouldn't have worked on me!" -- since it's really easy to feel that way when you're not the subject being manipulated.
From EY's rules:
If Gatekeeper lets the AI out, naysayers can't say "Oh, I wouldn't have been convinced by that." As long as they don't know what happened to the Gatekeeper, they can't argue themselves into believing it wouldn't happen to them.
I don't understand.
I don't care about "me", I care about hypothetical gatekeeper "X".
Even if my ego prevents me from accepting that I might be persuaded by "Y", I can easily admit that "X" could be persuaded by "Y". In this case, exhibiting a particular "Y" that seems like it could persuade "X" is an excellent argument against creating the situation that allows "X" to be persuaded by "Y". The more numerous and varied the "Y"s we can produce, the less smart putting h...
There are quite a number of them. This is an example that immediately comes to mind: http://lesswrong.com/lw/9ld/ai_box_log/, although I think I've seen at least 4-5 open logs that I can't immediately source right now.
Unfortunately, all these logs end up with victory for the Gatekeeper, so they aren't particularly interesting.
Sorry, declined!
Sup Alexei.
I'm going to have to think really hard on this one. On one hand, damn. That amount of money is really tempting. On the other hand, I kind of know you personally, and I have an automatic flinch reaction to playing anyone I know.
Can you clarify the stakes involved? When you say you'll "accept your $150 fee", do you mean this money goes to me personally, or to a charity such as MIRI?
Also, I'm not sure if "people just keep letting the AI out" is an accurate description. As far as I know, the only AIs who have ever won are Eliezer...
If you win, and publish the full dialogue, I'm throwing in another $100.
I'd do more, but I'm poor.
Thanks. I'm not currently in a position where that would be available/useful, but once I get there, I will.
In this particular case I could, but for all other cases, I would estimate a (very slightly) lower chance of winning. My ruleset was designed to be marginally more advantageous to the AI, by removing the worst possible Gatekeeper techniques.
This seems to be an argument against hedonistic utilitarianism, but not utilitarianism in general.
At the very least, I'm relatively certain that quantum computing will be necessary for emulations. It's difficult to say for AI, because we have no idea what its computational load would be, considering we still have very little information on how to create intelligence from scratch.
Have you tried just forcing yourself not to read your own posts? Or is it something you can't help with?
I'm actually incredibly amused by how popular FSN is on LessWrong. I didn't think so many people would get this reference.
Would you rather fight one horse sized duck, or a hundred duck sized horses?
Depends on the situation. Do I have to kill whatever I'm fighting, or do I just have to defend myself? If it's the former, the horse-sized duck, because duck-sized horses would be too good at running away and hiding. If it's the latter, then the duck-sized horses, because they'd be easier to scatter.