I mean this in the least hostile way possible -- this was an awful post. It was just a complicated way of saying "historically speaking, bitcoin has gone up". Of course it has! We already know that! And for obvious reasons, prices grow on a logarithmic scale. But it's also a well-known rule of markets that past trends do not predict future performance.
Of course, I am personally supportive of and bullish on bitcoin (as people in IRC can attest). All I'm saying is that your argument is an unnecessarily complex way of claiming that bitcoin is likely to rise in the future because it has risen in the past.
Generally speaking, there's a long list of gatekeepers -- about 20 gatekeepers for every AI that wants to play. Your best option is to post "I'm a gatekeeper. Please play me" in every AI Box thread and hope that someone messages you back. You may have to wait months, assuming you get a reply at all. Offering a monetary incentive might improve your chances.
You may feel that way because many of your online conversations are with us on the LessWrong IRC, which is known for its intellectual rigor. The great majority of online conversations are not as rigorous as ours. I suspect that IRL conversations with other LessWrongers would depend just as heavily on citations and references, for example.
I posted this in the last open thread, but I should post it here too for relevance:
I have donated $5,000 to the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.
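As a quick sanity check on the matching arithmetic (3:1 means the matching donors contribute three dollars for every dollar donated):

```python
# 3:1 matching: for every dollar donated, the matching donors add three.
donation = 5_000
match_ratio = 3
matched = donation * match_ratio   # 15,000 contributed by the matching donors
total = donation + matched         # 20,000 total reaching MIRI
print(total)  # 20000
```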
I decided to post this because of "Why Our Kind Can't Cooperate". I have been convinced that donors should publicly brag about their donations to attract other donors, rather than remaining silent, which creates a false impression of how much support MIRI has.
I'm not sure this is true. Doesn't MIRI publish its total receipts? Don't most organizations that ask for donations?
Total receipts may not be representative. There's a difference between MIRI getting funding from one person with a lot of money and from large numbers of people donating smaller amounts. I was hoping this post would serve as a reminder that many of us on LW care about donating, rather than just a few very rich people like Peter Thiel or Jaan Tallinn.
Also, I suspect scope neglect may be at play -- on an emotional level, it's difficult to tell the difference between $1 million worth of donations, ten million, or a hundred million. Seeing the individual donations that add up to that amount may help.
I have taken the survey, as I have done for the last two years! Free karma now?
Also, I chose to cooperate rather than defect because, even though the money technically stays within the community, I'm willing to give up a very small amount of expected value to help ensure that LW has a reputation for cooperation. I don't expect to lose more than a few cents of expected value, since I expect 1000+ people to take the survey.
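The "few cents" claim can be sketched with a toy expected-value calculation. The prize pool here is hypothetical; only the 1000+ respondents figure comes from the comment itself:

```python
# Toy model of the cooperate/defect choice on the survey:
# a cooperator forgoes a share of a prize pool split across respondents.
prize_pool = 60.0     # hypothetical prize amount, for illustration only
respondents = 1000    # lower bound on survey takers, per the comment
ev_forgone = prize_pool / respondents   # expected value given up per cooperator
print(ev_forgone)  # 0.06 -- i.e., a few cents
```

With anything like these numbers, the cost of cooperating is negligible next to the reputational benefit described above.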
I will be matching whatever gwern personally puts in.
AI Box Experiment Update
I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.
I'm posting this in the open thread because, unlike my last few AI Box experiments, I won't be providing a proper writeup (and I didn't think that just posting "I won!" was enough to justify starting a new thread). I've been told (and convinced) by many that I was far too leaky with strategy and seriously compromised the future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think I've already provided enough hints for aspiring AIs to win, so I'll stop giving out information.
Sorry, folks.
This puts my current AI Box Experiment record at 2 wins and 3 losses.
Would you rather fight one horse sized duck, or a hundred duck sized horses?