Evolution doesn't optimize for biological systems to be understandable. But because only a small subset of possible biological designs can robustly achieve certain common goals (e.g. robust recognition of molecules, robust signal-passing, robust fold-change detection), the requirement to work robustly limits evolution to a handful of understandable structures.
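To make "fold-change detection" concrete, here's a minimal sketch using the standard incoherent feedforward loop motif (the model and parameters are illustrative choices on my part, not anything the claim above specifies). The point is that the output's transient response depends only on the *fold* change in the input, not its absolute level:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal incoherent feedforward loop (I1-FFL): input u activates both
# a repressor x and the output y; y is repressed by x. Illustrative
# parameters, all rate constants set to 1.
def i1_ffl(t, state, u):
    x, y = state
    dx = u - x          # repressor x tracks the input
    dy = u / x - y      # output y activated by u, repressed by x
    return [dx, dy]

def response(u0, fold, t_end=10.0):
    """Start at steady state for input u0, then step the input to fold*u0."""
    x0, y0 = u0, 1.0    # steady state: x* = u0, y* = 1 (independent of u0)
    sol = solve_ivp(i1_ffl, (0, t_end), [x0, y0],
                    args=(fold * u0,), dense_output=True)
    t = np.linspace(0, t_end, 200)
    return t, sol.sol(t)[1]  # output trajectory y(t)

# The same 3x fold change at very different absolute input levels...
t, y_low = response(u0=1.0, fold=3.0)
_, y_high = response(u0=100.0, fold=3.0)
# ...produces (numerically) identical output responses:
print(np.max(np.abs(y_low - y_high)))  # ~0, up to solver tolerance
```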
For months, I had the feeling: something is wrong. Some core part of myself had gone missing.
I had words and ideas cached, which pointed back to the missing part.
There was the story of Benjamin Jesty, a dairy farmer who vaccinated his family against smallpox in 1774 - 20 years before the vaccination technique was popularized, and the same year King Louis XV of France died of the disease.
There was another old post which declared: “I don’t care that much about giant yachts. I want a cure for aging. I want weekend trips to the moon. I want flying cars and an indestructible body and tiny genetically-engineered dragons.”
There was a cached instinct to look at certain kinds of social incentive gradient, toward managing more people or growing an organization or playing...
I think having a king at all might be positive-sum, though, by enabling coordination.
Increased physical security isn't much of a difficulty.
Is everyone dropping the ball on cryonics?
More or less, AFAIK. (Though see https://www.amazon.com/Future-Loves-You-Should-Abolish-ebook/dp/B0CW9KTX76 )
Hi all! PhD student here who's been working on a little side project for the past few months, and it's finally done: my new podcast on future technologies, New Horizons, has launched!
The topics and style are very much aligned with the interests/values of the rationalist community, which I consider myself to be a part of, so I'm posting it here.
Links to Spotify, Apple Podcasts, and YouTube:
https://open.spotify.com/show/3CNoUESyO1xquxqAOY5fih...
https://podcasts.apple.com/.../new-horizons/id1816013818
https://m.youtube.com/@nezir1999
Episode titles for the first season:
1. Extinction or Utopia? The Future of AI
2. Are We the First Generation to Live Forever?
3. Same-Sex Babies When?
4. Listening to the Universe: The Future of Gravitational Wave Astronomy
5. Debating a Catholic
6. Could We Prevent a Supervolcanic Eruption?
7. Will Your Next Burger be Grown from Cells? The Cultivated Meat Revolution
8. Could AIs Be Conscious?
Episodes 1 and 2 are out today - from now until the end of the season, a new one will come out each Thursday afternoon.
The guest for episode 2 is quite high-profile.
Hope you give it a listen and enjoy!
Those who aim for moral ASIs:
Are they sure they know how morality works for human beings? When dealing with existential risks, one has to be sure to avoid any biases. This includes rationally considering even the most cynical theories of moral relativism.
This is a D&D.Sci scenario: a puzzle where players are given a dataset to analyze and an objective to pursue using information from that dataset.
Thank you to Juan Vasquez for playtesting.
Intended Difficulty: ~3.5/5
The fairy in your bedroom explains that she is a champion of Fate, tasked with whisking mortals into mysterious realms of wonderment and (mild) peril; there, they forge friendships with mythical creatures, do battle with ancient evils, and return to their mundane lives having gained the confidence that comes with having saved a world[1]. But there’s an unusually large, important world experiencing an unusually non-mild amount of peril - she’d even go so far as to call it moderate peril! - and in this circumstance, only the best will suffice. For this reason, she fervently...
I am going for number 11, mainly because other adventurers with predictions similar to 11 did unusually well.
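For anyone curious what that kind of check looks like in practice, here's a hypothetical sketch (the file name, column names, and thresholds are made up for illustration, not the scenario's actual schema):

```python
import pandas as pd

# Hypothetical: load the scenario's historical-adventurer dataset.
df = pd.read_csv("adventurers.csv")

# How did adventurers whose prediction was close to 11 fare,
# compared with the overall base rate of success?
similar = df[df["prediction"].between(10, 12)]
print("near-11 success rate:", similar["succeeded"].mean())
print("overall success rate:", df["succeeded"].mean())
```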
That is a serious concern, and it is possible that advocacy could backfire. That said, I'm not sure the correct hypothesis isn't just "rich people start AI companies, and sometimes advocacy isn't enough to stop this". Either way, the solution seems to be better advocacy: split testing, focus testing, or other market research before deploying a strategy, and devoting some intellectual resources to advocacy improvement, at least in the short term.
As for the knowledge bottleneck - I think that's a very good point. My comment doesn't remove that bottleneck, just shifts it to advocacy (i.e. maybe we need better knowledge of how or what to advocate).
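As a hedged illustration of the split-testing idea (the counts and the "sign-up" metric are invented for the example):

```python
from statsmodels.stats.proportion import proportions_ztest

# Compare two advocacy messages by the rate at which recipients take
# some desired action (here, hypothetically, signing up). Made-up data.
signups = [48, 73]      # message A vs. message B
audience = [1000, 1000] # recipients of each message

z, p = proportions_ztest(signups, audience)
print(f"z = {z:.2f}, p = {p:.3f}")  # small p => difference unlikely to be chance
```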
This year's Spring ACX Everywhere meetup in Austin.
Location: The Brewtorium, 6015 Dillard Cir A, Austin, TX 78752; We'll have a LessWrong sign at a long table indoors – https://plus.codes/862487GM+96
Group Link: https://groups.google.com/g/austin-less-wrong/
Feel free to bring kids. We'll order shareable items for the group (fries and pretzels) and you can order from the food and drink menu.
Contact: sbarta@gmail.com
We’re set up by the door at table 15.