NunoSempere

I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers ÖU. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy: they've banned a few disagreeable people whom I like, and I find them generally a bit too censorious.
  • The Forum website has become more annoying to me over time: more cluttered, and pushier about curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform whose goals differ from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value. And I haven't left the forum entirely: I remain subscribed to its RSS, and generally tend to at least skim all interesting posts.


I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool that aggregates predictions from many different platforms and which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship. Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter, which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.


You can share feedback anonymously with me here.

Sequences

Forecasting Newsletter
Inner and Outer Alignment Failures in current forecasting systems

Comments

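# Fetch the post's HTML body from LessWrong's GraphQL API, extract the text
# with jq (-r for raw output), and summarize it with the `llm` CLI: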
curl 'https://www.lesswrong.com/graphql?' \
  -H 'accept: application/json' \
  -H 'content-type: application/json' \
  -H 'user-agent: bot' \
  --data-raw '{"query":"{ post(input: {selector: {_id: \"7p9WB5NsrQiqKEbPA\"}}) { result { htmlBody } }}"}' | \
  jq -r .data.post.result.htmlBody | \
  llm "I am a very rich software engineer in the Bay area worried about AI and pandemics. What is the most actionable information in this post?"

Cheers. Not sure that is the right thing to be optimizing for. I guess that for stuff like this that covers different topics, people could ask an LLM which parts they're most likely to find useful.

I agree this is good in the American public sphere, but such speculation is still very useful for better predicting behavior. I don't think we disagree that much here.

I recognize the limitations of armchair diagnosis, especially of public figures. But there's value in examining these patterns as case studies in how psychiatric conditions manifest in high-functioning individuals, particularly when those individuals have publicly acknowledged aspects of their mental health.

Completely agree, thanks for the speculation.

Interesting. Some thoughts:

  • I dislike "infinite money" as an analogy to resolving a market; seems like it isn't needed since in actual practice prediction markets don't require infinite money
  • A "trading strategy" (as per the Logical Induction paper) is a function from prices to a list of buy/sell actions. => it feels like this isn't as elegant as "a probability is a representation of degree of belief". It feels like trying to ground an epistemology in prediction markets is incomplete.

I've known Jaime for about ten years. Seems like he made an arguably wrong call when first dealing with real powaah, but overall I'm confident his heart is in the right place.

Shapley values are constructed such that introducing a null player doesn't change the result. You are doing something different by considering the wrong counterfactual (one where C exists but isn't part of the coalition, vs. one where C doesn't exist).
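
For concreteness, here is a minimal Python sketch of the null-player property (my own illustration, with a made-up toy game): exact Shapley values computed by averaging marginal contributions over all orderings. Adding a null player C leaves A's and B's values unchanged, and C gets zero:

from itertools import permutations

def shapley_values(players, v):
    # Exact Shapley values: average each player's marginal contribution
    # over every ordering of the players.
    totals = {p: 0.0 for p in players}
    n_orderings = 0
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
        n_orderings += 1
    return {p: total / n_orderings for p, total in totals.items()}

# Toy game: A and B each contribute 1 unit of value; C is a null player,
# contributing nothing to any coalition.
def v(coalition):
    return len(coalition & {"A", "B"})

print(shapley_values(["A", "B"], v))       # {'A': 1.0, 'B': 1.0}
print(shapley_values(["A", "B", "C"], v))  # {'A': 1.0, 'B': 1.0, 'C': 0.0}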

Adding a person with veto power is not a neutral change.

Maybe you could address these problems, but could you do so in a way that is "computationally cheap"? E.g., when forecasting something like extinction, it is much easier to forecast a vague outcome than to define the outcome precisely.
