As formulated, zero -- under the rules you posted, you never win anything. Is there an unstated assumption that you can stop the game at any time and exit with your stake?
I guess I didn't formulate the rules clearly enough--if the coin lands on tails, you exit with the stake. For example, if you play and the sequence is HEADS -> HEADS -> TAILS, you exit with $4. The game only ends when tails is flipped.
Suppose someone offers you the chance to play the following game:
You are given an initial stake of $1. A fair coin is flipped. If the result is TAILS, you keep the current stake. If the result is HEADS, the stake doubles and the coin is flipped again, repeating the process.
How much money should you be willing to pay to play this game?
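For what it's worth, the expected value of this game diverges: the probability of exactly k heads followed by tails is (1/2)^(k+1), and the payoff is then 2^k dollars, so every term contributes exactly $0.50 and the sum is unbounded. A quick illustrative sketch (not from the original post):

```python
import random

def play_once(rng):
    """One play of the game: start with $1, double on each heads, exit on tails."""
    stake = 1
    while rng.random() < 0.5:  # heads with probability 1/2
        stake *= 2
    return stake

def truncated_ev(n):
    """Expected value restricted to games with fewer than n heads.
    Each term is (1/2)**(k+1) * 2**k == 0.5, so this is exactly n/2."""
    return sum((0.5 ** (k + 1)) * (2 ** k) for k in range(n))

print(truncated_ev(10))  # 5.0 -- grows without bound as n increases

rng = random.Random(0)
samples = [play_once(rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean; very unstable, by design
```

The sample mean never settles down no matter how many games you simulate, which is the practical face of the divergent expectation.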
A botnet startup. People sign up for the service, and install an open source program on their computer. The program can:
- Use their CPU cycles to perform arbitrary calculations.
- Use their network bandwidth to relay arbitrary data.
- Let the user add restrictions on when/how much it can do the above.
For every quantum of data transferred / calculated, the user earns a token. These tokens can then be used to buy bandwidth/cycles of other users on the network. You can also buy tokens for real money (including crypto-currency).
Any job that you choose to execute on the other users' machines has to be somehow verified safe for those users (maybe the users have to be able to see the source before accepting, maybe the company has to authorize it, etc). The company also offers a package of common tasks you can use, such as DDoS, Tor/VPN relays, seedboxes, cryptocurrency mining and brute-forcing hashes/encryption/etc.
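The token economy described above could be sketched roughly like this (all names hypothetical; the post doesn't specify any implementation):

```python
class TokenLedger:
    """Hypothetical accounting for the proposed network: users earn one
    token per quantum of work (CPU or bandwidth) contributed, and spend
    tokens to buy work from other users."""

    def __init__(self):
        self.balances = {}

    def record_work(self, user, units):
        # One token per quantum of data transferred / calculated.
        self.balances[user] = self.balances.get(user, 0) + units

    def buy_cycles(self, buyer, seller, units):
        # Spending tokens transfers them to the user performing the work.
        if self.balances.get(buyer, 0) < units:
            raise ValueError("insufficient tokens")
        self.balances[buyer] -= units
        self.record_work(seller, units)

ledger = TokenLedger()
ledger.record_work("alice", 10)   # alice contributes 10 units of work
ledger.buy_cycles("alice", "bob", 4)  # alice buys 4 units from bob
print(ledger.balances)  # {'alice': 6, 'bob': 4}
```

The interesting design problems (verifying work was actually done, pricing CPU against bandwidth, the cash-for-tokens exchange) all live outside this toy ledger.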
I did not find the project so laughable. It's hopelessly outdated in the sense that logical calculus does not deal with incomplete information, and I suspect that they simply conflate "moral" with "utilitarian" or even just "decision theoretic".
It appears they are going with some kind of modal logic, which also does not appear to deal with incomplete information. I also suspect "moral" will be conflated with "utilitarian" or "utilitarian plus a diff". But then there is this bit in the press release:
Bringsjord’s first step in designing ethically logical robots is translating moral theory into the language of logic and mathematics. A robot, or any machine, can only do tasks that can be expressed mathematically. With help from Rensselaer professor Mei Si, an expert in the computational modeling of emotions, the aim is to capture in “Vulcan” logic such emotions as vengefulness.
...which makes it sound like the utility function/moral framework will be even more ad hoc.
Possibly of local interest: Research on moral reasoning in intelligent agents by the Rensselaer AI and Reasoning Lab.
(I come from a machine learning background, and so I am predisposed to look down on the intelligent agents/cognitive modelling folks, but the project description in this press release just seems laughable. And if the goal of the research is to formalize moral reasoning, why the link to robotic/military systems, besides just to snatch up US military grants?)
Is anyone interested in another iterated prisoner's dilemma tournament? It has been nearly a year since the last one. Suggestions are also welcome.
So, to follow up on this, I'm going to announce the 2015 tournament in early August. Everything will be the same except for the following:
- Random-length rounds rather than fixed length
- Single elimination instead of round-robin elimination
- More tooling (QuickCheck-based test suite to make it easier to test bots, and some other things)
Edit: I am also debating whether to make the number of available simulations per round fixed rather than relying on a timer.
I also played around with a version in which bots could view each other's abstract syntax tree (represented as a GADT), but I figured that writing bots in Haskell was already enough of a trivial inconvenience for people without involving a special DSL, so I dropped that line of experimentation.
Beyond the current posters, these tournaments generate external interest. I, and more importantly So8res, signed up for a LessWrong account because of one of these contests.
Wow, I was not aware of that. I saw that the last one got some minor attention on Hacker News and Reddit, but I didn't think about the outreach angle. This actually gives me a lot of motivation to work on this year's tournament.
Yes, agreed. (I tried to write an entry for the one-shot tournament but never finished; I'd like to see that revisited sometime with a Scheme variant tailored for the contest.)
Wow, I had no idea that people missed out on the tournament because I posted it to Discussion. I'll keep this in mind for next year. Apologies to Sniffnoy and BloodyShrimp and anyone else who missed the opportunity.
Reading the Wikipedia article on the St. Petersburg paradox, I see it's exactly the game tetronian2 described.
Yep. I don't think I was ever aware of the name; someone threw this puzzle at me in a job interview a while ago, so I figured I'd post it here for fun.