Why do some societies exhibit more antisocial punishment than others? Martin explores both some of the literature on the subject and his own experience living in a country where "punishment of cooperators" was fairly common.
I've been pushing for latent variables to be added to prediction markets, including by making a demo of how it could work. Roughly speaking, reflective latent variables allow you to specify joint probability distributions over a bunch of observed variables. However, this is a very abstract description that people tend to find quite difficult to follow, which probably helps explain why the proposal hasn't gotten much traction.
If you are familiar with finance, another way to think of latent variable markets is that they are sort of like an index fund for prediction markets, allowing you to make overall bets across multiple markets. (Though the way I've set them up in this post differs quite a bit from financial index funds.)
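To make the abstraction concrete, here's a minimal sketch in Python. The latent variable, the questions, and every probability below are made-up illustrations, not numbers from my demo: the point is just that a single bet on the latent variable implies correlated positions across all the observed markets at once.

```python
# Minimal sketch of a latent-variable market (illustrative numbers only).
# A binary latent variable induces a joint distribution over several
# observed questions via conditional probabilities.

latent = {"yes": 0.3, "no": 0.7}  # a trader's probability for the latent variable

# P(observed question resolves YES | latent state) -- hypothetical questions
conditionals = {
    "question A": {"yes": 0.8, "no": 0.1},
    "question B": {"yes": 0.9, "no": 0.2},
}

# Marginal probability of each observed question implied by the latent variable
for question, cond in conditionals.items():
    p = sum(latent[state] * cond[state] for state in latent)
    print(f"P({question}) = {p:.2f}")

# Betting the latent variable up or down moves all of these implied
# marginals together -- the "index fund" flavor described above.
```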
Now I've just had a meeting with some people working...
Your linked paper is kind of long - is there a single part of it that summarizes the scoring so I don't have to read all of it?
Either way, yes, it does seem plausible that one could create a market structure that supports latent variables without rewarding people in the way I described it.
Thanks to Jesse Richardson for discussion.
Polymarket asks: will Jesus Christ return in 2025?
In the three days since the market opened, traders have wagered over $100,000 on this question. The market traded as high as 5%, and is now stably trading at 3%. Right now, if you wanted to, you could place a bet that Jesus Christ will not return this year, and earn over $13,000 if you're right.
There are two mysteries here: an easy one, and a harder one.
The easy mystery is: if people are willing to bet $13,000 on "Yes", why isn't anyone taking them up on it?
The answer is that, if you wanted to do that, you'd have to put down over $1 million of your own money, locking it up inside Polymarket through the end of...
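For a rough sense of the economics, here's a hedged back-of-the-envelope; the prices below are illustrative assumptions, and the true capital requirement depends on the depth of the order book:

```python
# How much capital a "No" bettor must lock up to earn a target profit.
# On Polymarket, a No share costs (1 - yes_price) dollars and pays $1
# at resolution, so each share nets yes_price in profit if No is right.
def capital_locked(target_profit: float, yes_price: float) -> float:
    no_cost = 1 - yes_price
    shares = target_profit / yes_price
    return shares * no_cost

print(f"${capital_locked(13_000, 0.03):,.0f}")  # ~$420,000 at the last-traded 3%
print(f"${capital_locked(13_000, 0.01):,.0f}")  # ~$1,287,000 filling asks nearer 1%
```

Either way, the shape of the trade is the same: the No side posts enormous collateral for a few percent of upside, paid out only at resolution.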
Wouldn't higher liquidity and lower transaction costs sort this out? Say you have some money tied up in "No, Jesus will not return this year", but you really want to bet on something else. If transaction costs were completely zero, then even if you had your entire net worth tied up in "No Jesus" bets, you could still go to a bank, point out that you have this more-or-less guaranteed payout on the Jesus market, and borrow against it or sell it to the bank. Then you have money now to spend. This would not in any serious way shift the prices of the "...
you should not reject the 'offer' of a field that yields an 'unfair' amount of grain! - Ultimatum Game (Arbital)
In this post, I demonstrate a problem in which some agent outperforms Logical Decision Theory, and show that for any agent you can construct a problem and a competing agent that outperform it. If rationality is defined as winning, this means that no agent is rational in every problem.
We consider a slight variation on the ultimatum game to make it completely symmetrical. The symmetrical ultimatum game is a two-player game in which each player says how much money they want. The amount is a positive integer number of dollars. If the sum is ≤$10, both players get the amount of money they choose. Otherwise, they both...
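The rules fit in a few lines. Here is a sketch, assuming (as in the standard ultimatum game) that both players get nothing when the demands sum to more than $10:

```python
# Payoff rule for the symmetrical ultimatum game described above.
# Each player independently demands a positive whole number of dollars.
def payoffs(demand_a: int, demand_b: int) -> tuple[int, int]:
    if demand_a + demand_b <= 10:
        return demand_a, demand_b   # both demands are honored
    return 0, 0                     # assumed: both players get nothing

print(payoffs(5, 5))  # (5, 5) -- the symmetric "fair" split
print(payoffs(9, 5))  # (0, 0) -- the demands exceed the $10 total
```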
But you really aren't assuming that, you're doing something much stranger.
Either the actual opponent is a rock, in which case it gains nothing from "winning" the game, and there's no such thing as being more or less rational than something without preferences. Or the actual opponent is the agent who wrote the number on the rock and put it in front of the LDT agent, in which case the example fails because the game actually began with an agent explicitly trying to manipulate the LDT agent into underperforming.
Thank you for the suggestion.
A while ago I tried using AI to suggest writing improvements on a different topic, and I didn't really like any of the suggestions. It felt like the AI didn't understand what I was trying to say. Maybe the topic was too different from its training data.
But maybe it doesn't hurt to try again; I've heard the newer AIs are smarter.
If I keep procrastinating maybe AI capabilities will get so good they actually can do it for me :/
Just kidding. I hope.
confidence level: I am a physicist, not a biologist, so don't take this as the account of a domain-level expert. But this is really basic stuff, and is very easy to verify.
Edit: I have added a few revisions and included a fact check of this post by an organic chemist. You can also read the comments on the EA forum to see Yudkowsky's response.
Recently I encountered a scientific claim about biology, made by Eliezer Yudkowsky. I searched around for the source of the claim, and found that he has been repeating versions of the claim for over a decade and a half, including in “the sequences” and his TED talk. In recent years, this claim has primarily been used as an argument for why an AGI attack...
This post is a stronger argument against Drexlerian nanomachines that outperform biology in general, one which doesn't rely on the straw man.
LessWrong has been receiving an increasing number of posts and comments that look like they might be LLM-written or partially-LLM-written, so we're adopting a policy. This could be changed based on feedback.
Prompting a language model to write an essay and copy-pasting the result will not typically meet LessWrong's standards. Please do not submit unedited or lightly-edited LLM content. You can use AI as a writing or research assistant when writing content for LessWrong, but you must have added significant value beyond what the AI produced, the result must meet a high quality standard, and you must vouch for everything in the result.
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of...
So, I've got a question about the policy. My brain is just kind of weird, so I really appreciate having claude translate my thoughts into normal speak.
The case study is the following comments in the same comment section:
13 upvotes - written with help of claude
1 upvote (me) - written with the help of my brain only
I'm honestly quite tightly coupled to claude at this point; it is around 40-50% of my thinking process (which is kind of weird when I think about it?), and so I don't know how to think about this policy change.
Thanks for this!
What I was saying up there is not a justification of Hurwicz' decision rule. Rather, it is that if you already accept the Hurwicz rule, it can be reduced to maximin, and for a simplicity prior the reduction is "cheap" (produces another simplicity prior).
Why accept the Hurwicz decision rule? Well, at least you can't be accused of a pessimism bias there. But if you truly want to dig deeper, we can start instead from an agent making decisions according to an ambidistribution, which is a fairly general (assumption-light) way of making decision...
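For reference, the textbook Hurwicz criterion with optimism parameter $\alpha \in [0, 1]$ scores an action $a$ against hypotheses $h$ as (standard notation, not anything specific to this thread):

$$H_\alpha(a) = \alpha \max_h u(a, h) + (1 - \alpha) \min_h u(a, h),$$

so $\alpha = 0$ recovers pure maximin and $\alpha = 1$ pure maximax; the claim above is that the general $\alpha$-weighted form can itself be reduced to maximin, cheaply so for a simplicity prior.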