
Some thoughts on decentralised prediction markets

-4 Clarity 23 November 2015 04:35AM

**Thought experiment 1 – arbitrage opportunities in prediction market**

You’re Mitt Romney, biding your time before riding in on your white horse to win the US republican presidential preselection (bear with me, I’m Australian and don’t know US politics). Anyway, you’ve had your run and you’re not too fussed, but some of the old guard want you back in the fight.

Playing out like an XKCD comic strip, you scheme. 'Okay. Maybe I can trump Trump at his own game and make a bit of dosh on the election.'

A data-scientist you keep on retainer sometimes talks about LessWrong and other dry things. One day she mentions that decentralised prediction markets are being developed, one of which is Augur. She says one can bet on the outcome of events such as elections.

You’ve made a fair few bucks in your day. You read the odd Investopedia page and a couple of random forum blog posts. And there’s that financial institute you run. Arbitrage opportunity, you think.

You don’t fancy your chance of winning the election. 40% chance, you reckon. So, you bet against yourself. Win the election, lose the bet. Lose the election, win the bet. Losing the election doesn’t mean much to you, losing the bet doesn’t mean much to you, winning the bet doesn’t mean much to you, but winning the election means a lot to you. There ya go.

Let’s turn this into a probability-weighted decision table:

Not participating in the prediction market:

| Outcome | Value | Probability |
|---|---|---|
| Election win | +2 | 40% |
| Election lose | -1 | 60% |

Cumulative probability-weighted value: (0.4*2) + (0.6*-1) = +0.2 value

Participating in the prediction market (betting against yourself):

|  | Election win (+2) | Election lose (-1) |
|---|---|---|
| Bet win (0) | 0% | 60% |
| Bet lose (0) | 40% | 0% |

Cumulative probability-weighted value: (0.4*2) + (0.6*-1) = +0.2 value

They’re the same outcome! Looks like my intuitions were wrong. Unless you value winning more than losing, placing an additional bet, even in a different form of capital (cash vs. political capital, for instance), just takes on additional risk; it isn’t an arbitrage opportunity.
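The two tables can be checked with a quick expected-value calculation. This is a minimal sketch using the payoff and probability numbers above; as in the tables, the bet itself is assumed to be worth 0 to you whether it wins or loses.

```python
P_WIN = 0.4                 # estimated chance of winning the election
V_WIN, V_LOSE = 2.0, -1.0   # value of winning / losing the election

def expected_value(bet_value_if_election_won=0.0, bet_value_if_election_lost=0.0):
    """EV of running, with an optional bet against yourself layered on top.

    Betting against yourself means the bet loses when the election is won
    and wins when the election is lost."""
    return (P_WIN * (V_WIN + bet_value_if_election_won)
            + (1 - P_WIN) * (V_LOSE + bet_value_if_election_lost))

no_bet = expected_value()
# The post values both bet outcomes at 0, so the EV is unchanged:
zero_value_bet = expected_value(0.0, 0.0)
print(no_bet, zero_value_bet)  # both come out to +0.2 (up to float rounding)
```

A bet only changes the picture through its own expected value; a bet you value at zero in both outcomes cannot create an arbitrage.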

For the record, Mitt Romney probably wouldn’t make this mistake, but what does this post suggest I know about prediction?

 

**Thought experiment 2 – insider trading**

Say you’re a C-level executive in a publicly listed enterprise. For this example you don’t strictly need to be part of a publicly listed organisation, but it serves to illustrate my intuitions. Say you have just been briefed by your auditors about massive fraud by a mid-level manager that will devastate your company. Ordinarily, you may not be able to safely dump your stocks on the stock exchange, for several reasons, one of which is insider trading.

Now, on a prediction market, the executive could retain their stocks, thus not signalling distrust of the company themselves (which itself is information the company may be legally obliged to disclose since it materially influences share price) but make a bet on a prediction market of impending stock losses, thus hedging (not arbitraging, as demonstrated above) their bets.
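A minimal numeric sketch of that hedge, with hypothetical numbers: a 100-unit stock that falls to 40 if the fraud becomes public, and a prediction-market contract paying 1 unit on the crash, priced by an unsuspecting market at 0.1.

```python
P_CRASH = 0.9               # the insider's private estimate of the crash
STOCK_NOW = 100.0
STOCK_AFTER_CRASH = 40.0
CONTRACT_PRICE = 0.1        # market thinks a crash is only 10% likely

def portfolio_value(crash: bool, hedge_contracts: float) -> float:
    """Stock plus profit/loss on contracts that pay 1 unit if the crash happens."""
    stock = STOCK_AFTER_CRASH if crash else STOCK_NOW
    payout = 1.0 if crash else 0.0
    return stock + hedge_contracts * (payout - CONTRACT_PRICE)

def ev(hedge_contracts: float) -> float:
    return (P_CRASH * portfolio_value(True, hedge_contracts)
            + (1 - P_CRASH) * portfolio_value(False, hedge_contracts))

unhedged = ev(0)    # 0.9*40 + 0.1*100 = 46
# 60 contracts exactly offsets the 60-unit stock loss, so both outcomes equal 94:
hedged = ev(60)
print(unhedged, hedged)
```

With the hedge in place the portfolio is worth 94 whether or not the crash occurs; the executive never sells a share, so no sale shows up to signal distrust.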

 

**Thought experiment 3 – market efficiency**

I’d expect that prediction opportunities will be most popular where individuals, weighted by their capital, believe they have private, market-relevant information. For instance, if a prediction opportunity is that Canada’s prime minister says ‘I’m silly’ on his next TV appearance, many people might believe they know him personally well enough to assign the otherwise absurd-sounding proposition a higher probability. They may give it a 0.2% chance rather than a 0.1% chance. However, if you are the prime minister yourself, you may decide to bet on this opportunity and make a quick, easy profit… I’m not sure where I was going with this anymore, but it was something about incentives to misrepresent how much relevant market information one has, and how much one's competing bettors (the people who bet WITH you) have.
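The prime minister's incentive can be made concrete with a hypothetical sketch: buying YES contracts at the market price has zero expected profit when the price matches your true probability, and an enormous edge when you effectively control the outcome.

```python
def bet_ev(true_prob: float, market_price: float, stake: float = 1.0) -> float:
    """Expected profit of spending `stake` on YES contracts at `market_price`.

    Each contract costs `market_price` and pays 1 if the event happens."""
    contracts = stake / market_price
    return true_prob * (contracts - stake) + (1 - true_prob) * (-stake)

# Outsider whose belief matches the market price: a fair bet, EV = 0.
outsider = bet_ev(true_prob=0.002, market_price=0.002)
# The PM, who can simply say 'I'm silly' on air, has a near-certain payoff.
insider = bet_ev(true_prob=0.99, market_price=0.002)
print(outsider, insider)
```

At a 0.2% price each unit staked buys 500 contracts, so the insider's expected profit per unit staked is roughly 494: the market is effectively paying you to misrepresent, or quietly exploit, what you know.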

Arguing Orthogonality, published form

10 Stuart_Armstrong 18 March 2013 04:19PM

My paper "General purpose intelligence: arguing the Orthogonality thesis" has been accepted for publication in the December edition of Analysis and Metaphysics. Since that's some time away, I thought I'd put the final paper up here; the arguments are similar to those here, but this is the final version, for critique and citation purposes.

General purpose intelligence: arguing the Orthogonality thesis

 

STUART ARMSTRONG
stuart.armstrong@philosophy.ox.ac.uk
Future of Humanity Institute, Oxford Martin School
Philosophy Department, University of Oxford

 

In his paper “The Superintelligent Will”, Nick Bostrom formalised the Orthogonality thesis: the idea that the final goals and intelligence levels of artificial agents are independent of each other. This paper presents arguments for a (narrower) version of the thesis. It proceeds through three steps. First it shows that superintelligent agents with essentially arbitrary goals can exist in our universe – both as theoretical impractical agents such as AIXI and as physically possible real-world agents. Then it argues that if humans are capable of building human-level artificial intelligences, we can build them with an extremely broad spectrum of goals. Finally it shows that the same result holds for any superintelligent agent we could directly or indirectly build. This result is relevant for arguments about the potential motivations of future agents: knowing an artificial agent is of high intelligence does not allow us to presume that it will be moral; we will need to figure out its goals directly.

 

Keywords: AI; Artificial Intelligence; efficiency; intelligence; goals; orthogonality

 

1                       The Orthogonality thesis

Scientists and mathematicians are the stereotypical examples of high intelligence humans. But their morality and ethics have been all over the map. On modern political scales, they can be left- (Oppenheimer) or right-wing (von Neumann) and historically they have slotted into most of the political groupings of their period (Galois, Lavoisier). Ethically, they have ranged from very humanitarian (Darwin, Einstein outside of his private life), through amoral (von Braun) to commercially belligerent (Edison) and vindictive (Newton). Few scientists have been put in a position where they could demonstrate genuinely evil behaviour, but there have been a few of those (Teichmüller, Philipp Lenard, Ted Kaczynski, Shirō Ishii).


The principle of ‘altruistic arbitrage’

18 RobertWiblin 09 April 2012 01:29AM

Cross-posted from http://www.robertwiblin.com

There is a principle in finance that obvious and guaranteed ways to make a lot of money, so called ‘arbitrages’, should not exist. It has a simple rationale. If market prices made it possible to trade assets around and in the process make a guaranteed profit, people would do it, in so doing shifting some prices up and others down. They would only stop making these trades once the prices had adjusted and the opportunity to make money had disappeared. While opportunities to make ‘free money’ appear all the time, they are quickly noticed and the behaviour of traders eliminates them. The logic of selfishness and competition mean the only remaining ways to make big money should involve risk taking, luck and hard work. This is the ‘no arbitrage’ principle.

Should a similar principle exist for selfless as well as selfish finance? When a guaranteed opportunity to do a lot of good for the world appears, philanthropists should notice and pounce on it, and only stop shifting resources into that activity once the opportunity has been exhausted. This wouldn’t work as quickly as the elimination of arbitrage on financial markets of course. Rather it would look more like entrepreneurs searching for and exploiting opportunities to open new and profitable businesses. Still, in general competition to do good should make it challenging for an altruistic start-up or budding young philanthropist to beat existing charities at their own game.

There is a very important difference though. Most investors are looking to make money and so for them a dollar is a dollar, whatever business activity it comes from. Competition between investors makes opportunities to get those dollars hard to find. The same is not true of altruists, who have very diverse preferences about who is most deserving of help and how we should help them; a ‘util’ from one charitable activity is not the same as a ‘util’ from another. This suggests that unlike in finance, we may be able to find ‘altruistic arbitrages’, that is to say ‘opportunities to do a lot of good for the world that others have left unexploited.’

The rule is simple: target groups you care about that other people mostly don’t, and take advantage of strategies other people are biased against using.  That rule is the root of a lot of advice offered to thoughtful givers and consequentialist-oriented folks. An obvious example is that you shouldn’t look to help poor people in rich countries. There are already a lot of government and private dollars chasing opportunities to assist them, so the low-hanging fruit has all been used up and then some. The better value opportunities are going to be in poor, unromantic places you have never heard of, where fewer competing philanthropist dollars are directed. Similarly, you should think about taking high-risk, high-return strategies. Most do-gooders are searching for guaranteed and respectable opportunities to do a bit of good, rather than peculiar long-shot opportunities to do a lot of good. If you only care about the ‘expected’ return to your charity, then you can do more by taking advantage of the quirky, improbable bets neglected by others.
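The expected-return point is just arithmetic; with hypothetical numbers, a safe grant that reliably does 10 units of good loses to a neglected long shot with a 2% chance of doing 1000 units.

```python
# Hypothetical payoffs for a risk-neutral altruist.
safe_ev = 1.0 * 10            # guaranteed modest good
long_shot_ev = 0.02 * 1000    # improbable but huge good
print(safe_ev, long_shot_ev)  # 10.0 20.0
```

If others systematically avoid the long shots, their expected returns stay high precisely because they are unexploited.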

Who do I personally care about more than others? For me the main candidates are animals, especially wild ones, and people who don’t yet exist and may never exist – interest groups that go largely ignored by the majority of humanity. What are the risky strategies I can employ to help these groups? Working on future technologies most people think are farcical naturally jumps to mind but I’m sure there are others and would love to hear them.

This principle is the main reason I am skeptical of mainstream political activism as a way to improve the world. If you are part of a significant worldwide movement, it’s unlikely that you’re working in a neglected area and exploiting how your altruistic preferences are distinct from those of others.

What other conclusions can we draw thinking about philanthropy in this way?

 

Meta Addiction

17 Voltairina 15 March 2012 04:58AM

I was wondering if anyone has ever had the feeling, like I get sometimes, that they were addicted to 'meta-level' optimizing rather than low-level acting? As in, I'd rather think about how to encourage myself to brush my teeth more than brush my teeth. I'm guessing there's something about this under the akrasia threads?

The motivation to remain at the meta level, thinking about things rather than acting on them, seems to be that it takes less effort to think about doing things than to do them, and there is potentially more long-term benefit in making an overall improvement than in engaging in a specific action. The drawback is that if you stay at the meta level all the time, you won't get anything done.

Too many cooks

1 PhilGoetz 20 September 2011 04:14PM

I was in a game last weekend where, at one point, the players needed to solve an in-game problem.  The mechanics for "solving" the problem were for us to assemble a 3D "jigsaw" puzzle.  (One of those geometric shapes made by getting a lot of little shapes to fit together in just the right way.)

Three of us sat down to solve the puzzle together.  Looking at the pieces, looking at the picture of the assembled figure, we made observations about what constraints we saw, what piece on the table might correspond to something in the picture, convinced whoever was holding the piece in question to put it in a particular place, and gradually assembled the puzzle cooperatively.  We had an instruction sheet with pictures of the puzzle at 3 different stages of completion.  It took us something under 10 minutes.  (Several of those were taken up mutually deciding in which order to follow the pictures, as one person had started using the picture showing the first stage, one had started with the picture showing the last stage and was disassembling the semi-assembled first stage for parts, and one had no clear strategy.)

A few hours later, after the game ended, I sat at the table with the disassembled puzzle pieces, and put it together by myself, following the pictures from first stage to last.  I was not aware of any memories of how it had been assembled the last time; and anyway, the instruction sheet was much more valuable than any memories I had.  (It wasn't one of those symmetric 3D puzzles where there's a pattern or trick to it; it was a collection of oddly-shaped unique pieces assembled in three layers.)  It took less than a minute.

Are we trying to do things the hard way?

10 NancyLebovitz 31 October 2010 12:16PM

A TED talk about remarkable low-cost Indian products-- the Tata car which costs $2000 and is a real car, a $28 artificial lower leg which permits walking on rough ground, tree climbing, jumping, and running, and fast cheap drug development which starts with traditional Indian remedies. It's an example of something to defend because the effort is to develop products that very poor people can afford, so that incremental improvements and cost-cutting aren't good enough.

It leaves me wondering whether the process of creating FAI should be re-evaluated-- whether there's a built-in assumption of high personal costs which is unnecessary. That's wondering, not an absolute certainty; it's just that the $28 artificial lower leg shocked me into thinking about how much is being made harder than necessary.

Even if FAI is being worked on about as efficiently as possible, there may be a huge amount of possibility for making things easier in life generally.