by plex

This is a special post for quick takes by plex. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
18 comments
[-]plex

Life is Nanomachines

In every leaf of every tree
If you could look, if you could see
You would observe machinery
Unparalleled intricacy
 
In every bird and flower and bee
Twisting, churning, biochemistry
Sustains all life, including we
Who watch this dance, and know this key

Illustration: A magnified view of a vibrant green leaf, where molecular structures and biological nanomachines are visible. Hovering nearby, a bird's feathers reveal intricate molecular patterns. A bee is seen up close, its body showcasing complex biochemistry processes in the form of molecular chains and atomic structures. Nearby, a flower's petals and stem reveal the dance of biological nanomachines at work. Human silhouettes in the background observe with fascination, holding models of molecules and atoms.

Related: Video of the death of a single-celled Blepharisma.

[-]plex

Rationalists try to be well calibrated and have good world models, so we should be great at prediction markets, right?

Alas, it looks bad at first glance:

I've got a hopeful guess at why people referred from core rationalist sources seem to be losing so many bets, based on my own scores. My Manifold score looks pretty bad (-M192 overall profit), but there's a fun reason for it: 100% of my resolved bets are positive or neutral, while all but one of my unresolved bets are negative or neutral.

Here's my full prediction record:

The vast majority of my losses are on things that don't resolve soon and are widely thought to be unlikely (plus a few tiny, not particularly well thought out bets like dropping M15 on LK-99), so I'm for sure losing points there. But my actual track record, cashed out in resolutions, tells a very different story.

I wonder if there are some clever stats that @James Grugett, @Austin Chen, or others on the team could do to disentangle these effects, and see what the quality-adjusted bets on critical questions like the AI doom ones would be absent this kind of effect. I'd be excited to see the UI show an extra column on the referrers table with cashed-out predictions only, rather than raw profit. Or generally emphasise cashed-out predictions in the UI more heavily, to mitigate the Keynesian-beauty-contest-style effects of trying to predict distant events.
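To illustrate the kind of split being suggested, here is a minimal sketch of "raw profit" versus "cashed-out profit". This is purely illustrative: the BetRecord shape and field names are made up for the example and are not Manifold's actual API or schema.

```typescript
// Hypothetical bet record; field names are illustrative, not Manifold's schema.
interface BetRecord {
  marketId: string;
  amount: number;   // mana put into the bet
  payout: number;   // mana received if resolved, current mark-to-market value otherwise
  resolved: boolean; // has the underlying market resolved?
}

// Raw profit: the leaderboard-style number, including unrealized losses on open markets.
function rawProfit(bets: BetRecord[]): number {
  return bets.reduce((sum, b) => sum + (b.payout - b.amount), 0);
}

// Cashed-out profit: only count bets whose markets have actually resolved.
function resolvedProfit(bets: BetRecord[]): number {
  return bets
    .filter((b) => b.resolved)
    .reduce((sum, b) => sum + (b.payout - b.amount), 0);
}

// Example: two small resolved wins plus a large mark-to-market loss on a
// long-horizon market give a negative raw profit but a positive resolved profit.
const bets: BetRecord[] = [
  { marketId: "resolved-1", amount: 50, payout: 80, resolved: true },
  { marketId: "resolved-2", amount: 20, payout: 25, resolved: true },
  { marketId: "ai-doom-long-horizon", amount: 300, payout: 60, resolved: false },
];
console.log(rawProfit(bets));      // -205
console.log(resolvedProfit(bets)); // 35
```

The point of the example is just that a single raw-profit column conflates these two numbers, which is the effect described above.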

These datapoints just feel like the result of random fluctuations. Both Writer and Eliezer mostly drove people to participate in the LK-99 stuff, where lots of people were confidently wrong. In general you can see that basically all the top referrers have negative income: 

Among the top 10, Eliezer and Writer are somewhat better than average (and yaboi is a huge outlier, which I'd guess is explained by them doing something quite different from the other people). 

[-]plex

Agree, expanding to the top 9[1] makes it clear they're not unusual in having large negative referral totals. I'd still expect Ratia to be doing better than this, and would guess a bunch of that comes from betting against common positions on doom markets, simulation markets, and other things which won't resolve anytime soon (and betting at times when the prices are not too good, because of correlations in when that group is paying attention).

  1. ^

    Though the rest of the leaderboard seems to be doing much better

"The vast majority of my losses are on things that don't resolve soon"

The interest rate on Manifold makes such long-horizon investments not worth it anyway, even if everyone else's positions look unreasonable to you.

A couple of months ago I did some research into the impact of quantum computing on cryptocurrencies; it seems potentially significant, and a decent number of LWers hold cryptocurrency. I'm not sure if this is the kind of content that's wanted, but I could write up a post on it.

Writeups of good research are generally welcome on LessWrong.

[-]plex

Titles of posts I'm considering writing are in the comments below; upvote the ones you'd like to see written.

[-]plex

Why SWE automation means you should probably sell most of your cryptocurrency

[-]plex

An opinionated tour of the AI safety funding landscape

[-]plex

Grantmaking models and bottlenecks

[-]plex

Ayahuasca: Informed consent on brain rewriting

(based on anecdotes and general models, not personal experience)

Can't you do this as polls in a single comment?

[-]plex

LW supports polls? I'm not seeing it in https://www.lesswrong.com/tag/guide-to-the-lesswrong-editor, unless you mean embedding a Manifold market, which... would work, but adds an extra step between people and voting unless they're already registered on Manifold.

[-]plex

Mesa-Hires - Making AI Safety funding stretch much further

Thinking about some things I may write. If any of them sound interesting to you, let me know and I'll probably be much more motivated to create them. If you're up for reviewing drafts and/or having a video call to test ideas, that would be even better.

  • Memetics mini-sequence (https://www.lesswrong.com/tag/memetics has a few good things, but no introduction to what seems like a very useful set of concepts for world-modelling)
    • Book Review: The Meme Machine (focused on general principles and memetic pressures toward altruism)
    • Meme-Gene and Meme-Meme co-evolution (focused on memeplexes and memetic defences/filters, could be just a part of the first post if both end up shortish)
    • The Memetic Tower of Generate and Test (a set of ideas about the specific memetic processes not present in genetic evolution, inspired by Dennett's tower of generate and test)
    • (?) Moloch in the Memes (even if we have an omnibenevolent AI god looking out for sentient well-being, things maybe get weird in the long term as ideas become more competent replicators/persisters if the overseer is focusing on the good and freedom of individuals rather than memes. I probably won't actually write this, because I don't have much more than some handwavey ideas about a situation that is really hard to think about.)
  • Unusual Artefacts of Communication (some rare and possibly good ways of sharing ideas, e.g. Conversation Menu, CYOA, Arbital Paths, call for ideas. Maybe best as a question with a few pre-written answers?)

Hi, did you ever go anywhere with Conversation Menu? I'm thinking of doing something like this related to AI risk, to try to quickly get people to the arguments around their initial reaction. If helping with something like this is the kind of thing you had in mind with Conversation Menu, I'm interested to hear any more thoughts you have around it. (Note: I'm thinking of fading in buttons rather than a typical menu.) Thanks!
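As a rough sketch of the fading-buttons idea being described, one minimal representation is a tree of nodes where each node shows some text and then fades in one button per reader reaction. Everything here (names, structure) is a hypothetical illustration, not anyone's actual implementation.

```typescript
// Minimal sketch of a branching "conversation menu": each node holds some prose
// and a set of reader reactions, each leading to a follow-up node.
interface MenuNode {
  text: string;                                   // the argument or explanation shown
  reactions: { label: string; next: MenuNode }[]; // buttons faded in after the text
}

const root: MenuNode = {
  text: "Advanced AI might pose serious risks.",
  reactions: [
    {
      label: "Won't we just turn it off?",
      next: { text: "Response to the off-switch objection...", reactions: [] },
    },
    {
      label: "Isn't this far in the future?",
      next: { text: "Response on timelines...", reactions: [] },
    },
  ],
};

// A renderer would display root.text, fade in one button per reaction,
// and on click recurse into that reaction's `next` node.
```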