Steven Universe s1e24
Oh, based on your DM, they will still preserve and transport your brain to OregonCryo for indefinite storage for free if you can't afford it. So I wouldn't say they're no longer active. But it's still good info to know they aren't doing the storage locally anymore. Thanks for sharing.
wow, nice, thanks for sharing 😅
Oh wow! Damn ☹️ Well, I'm super grateful for the time it was active. If you don't mind sending me a copy of your exchanges, I'd be interested.
I don't feel strongly about this one way or another, but I think it's reasonable to expand the term cryonics to mean any brain preservation method done in the hope of future revival, as that seems like the core concept people are referring to when using the term. When the term was first coined, room-temperature options weren't a thing. https://www.lesswrong.com/posts/PG4D4CSBHhijYDvSz/refactoring-cryonics-as-structural-brain-preservation
I don't know. The Brain Preservation Prize for preserving the connectome of a large mammal was won with aldehyde-stabilization, though
Oregon Brain Preservation uses a technique allowing fridge-temperature storage, and seems well funded, so idk if the argument works out
Idk the finances for Cryonics Germany, but I would indeed guess that Tomorrow Bio has more funding + provides better SST (standby, stabilization, and transport). I would recommend using Tomorrow Bio over Cryonics Germany if you can afford it
To be clear, it's subsidized. So it's not like there's no money to maintain you in preservation. As far as I know, Oregon Brain Preservation has a trust similar to Alcor's in terms of money per volume preserved for its cryonics patients, which seems more than enough to maintain storage just from the interest. Of course, there could be major economic disruptions that change that. I'm not sure how much Cryonics Germany is putting aside, though.
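back-of-envelope on the "interest covers storage" point (numbers are hypothetical, not OBP's actual figures): with a per-patient trust principal $P$ and a real annual return $r$, the sustainable yearly storage budget is about

$$\text{budget}_{\text{year}} \approx P \cdot r, \qquad \text{e.g. } P = \$50{,}000,\; r = 2\% \;\Rightarrow\; \$1{,}000/\text{year}.$$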
Plus, Oregon Brain Preservation's approach seems to work at fridge temperature rather than requiring PB2 te...
fair enough! maybe i should edit my post to say "brain preservation, some through cryonics, for indefinite storage with the purpose of future reanimation is sufficiently subsidized to be free or nearly free in some regions of the world" 😅
I'm in favour of saying true things. I feel the (current) title is slightly misleading.
i don't think killing yourself before entering the cryotank vs after is qualitatively different, but the latter maintains option value (in that specific regard re MUH) 🤷‍♂️
if you're alive, you can kill yourself when s-risks increase beyond your comfort point. if you're preserved, then you rely on other people to execute on those wishes
I mean, it's not a big secret, there's a wealthy person behind it. And there are 2 potential motivations for it:
1) altruistic/mission-driven
2) having more cases helps improve the service, which can benefit themselves as well.
But also, Oregon Brain Preservation is less expensive as a result of:
1) doing brain-only (Alcor doesn't extract the brain for its neuro cases)
2) using chemical preservation which doesn't require LN2 (this represents a significant portion of the cost)
3) not including the cost of stand-by, which is also a significant portion (ie. staying ...
Who is the wealthy person?
I mean, you can trust it to preserve your brain more than you can trust a crematorium to preserve your brain.
And if you do chemical preservation, the operational complexity of maintaining a brain in storage is fairly simple. LN2 isn't that complex either, but does have higher risks.
That said, I would generally suggest using Tomorrow Biostasis for Europe residents if you can afford it.
here's my new fake-religion, taking just-world bias to its full extreme
the belief that we're simulations and we'll be transcended to Utopia in 1 second, because a future civilisation is creating many simulations of all possible people in all possible contexts and then uploading them to Utopia, so that from anyone's perspective you have a very high probability of transcending to Utopia in 1 second
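the toy anthropic math behind the joke (my own formalization; $N$ is hypothetical): if for each "original" person a future civilisation runs $N$ simulations that get uploaded to Utopia, then from the inside

$$P(\text{you're a simulation about to transcend}) = \frac{N}{N+1} \;\longrightarrow\; 1 \quad \text{as } N \to \infty.$$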
^^
The lifelogging-as-life-extension version of this post would be like "You Only Live 1.5 Times" ^^
epistemic status: speculative, probably simplistic and ill-defined
Someone asked me "What will I do once we have AGI?"
I generally define the AGI-era as starting at the point where all economically valuable tasks can be performed by AIs at a lower cost than by a human (at subsistence level, including buying any available augmentations for the human); I try to formalize this below. This notably excludes:
1) any tasks that humans can do that still provide value at the margin (ie. the caloric cost of feeding that human while they're working vs while they're not working rather than while they're not...
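one way to write that definition down (my own formalization; $c$ denotes cost):

$$\text{AGI-era begins at } \min\left\{\, t \;:\; \forall \text{ task } \tau,\;\; c_{\text{AI}}(\tau, t) < c_{\text{human}}(\tau, t) \,\right\}$$

where $c_{\text{human}}$ is taken at subsistence level and includes any available augmentations.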
imagine (maybe all of a sudden) we're able to create barely superhuman-level AIs aligned to whatever values we want at a barely subhuman-level operation cost
we might decide to have anyone able to buy AI agents aligned with their values
or we might (generally) think that giving access to that tech this way would be bad, but many companies are already individually incentivized to do it and can't all cooperate not to (and they actually reached this point gradually, previously selling near-human-level AIs)
then it seems like everyone/most people would start to run...
AI is improving exponentially while researchers have constant intelligence. Once the AI research workforce itself becomes composed of AIs, that constant becomes exponential, which would make AI improve even faster (superexponentially?)
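a toy model of that claim (my own sketch; the constants are made up): capability grows at a rate proportional to researcher effectiveness times current capability; with human researchers, effectiveness stays constant (exponential growth), while with AI researchers it tracks capability itself (super-exponential growth):

```python
# Toy model: dC/dt = k * R * C, where C is AI capability and R is the
# effectiveness of the research workforce.
# - human researchers: R is constant          -> exponential growth
# - AI researchers:    R tracks capability C  -> super-exponential growth

def simulate(ai_researchers: bool, k: float = 0.1, dt: float = 0.1, steps: int = 90) -> float:
    capability = 1.0
    for _ in range(steps):
        effectiveness = capability if ai_researchers else 1.0
        capability += k * effectiveness * capability * dt
    return capability

print(f"human-led research: {simulate(False):.1f}")  # ~2.4 (exponential)
print(f"AI-led research:    {simulate(True):.1f}")   # ~8-10, and accelerating
```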
it doesn't need to be the scenario of a singular AI agent self-improving its own self; it can be a large AI population participating in the economy and collectively improving AI as a whole, with various AI clans* focusing on different subdomains (EtA: for the main purpose of making money, and then using that money to buy t...
Oregon Brain Preservation is a solid organization offering a free option in the US: https://www.oregoncryo.com/services.html, and Cryonics Germany a free option in Europe: https://cryonics-germany.org/en/
Thanks for engaging with my post. I keep thinking about that question.
I'm not quite sure what you mean by "values and beliefs are perfectly correlated here", but I'm guessing you mean they are "entangled".
there is no test we could perform which would distinguish what it wants from what it believes.
Ah yeah, that seems true for all systems (at least if you can only look at their behaviors and not their mind); ref.: Occam’s razor is insufficient to infer the preferences of irrational agents. Summary: In principle, all possible sets of possible value-syste...
when potentially ambiguous, I generally just say something like "I have a different model" or "I have different values"
topic: economics
idea: when building something with local negative externalities, have some mechanism to measure the externalities in terms of how much the surrounding property valuations changed (or are expected to change, say, via a prediction market), and have the owner of that new structure pay the owners of the surrounding properties.
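a minimal sketch of the payout rule (all names and numbers here are hypothetical, just to illustrate the mechanism):

```python
# Sketch: the builder compensates each neighbour by the drop in that
# neighbour's property valuation attributed to the new structure
# (valuations could come from assessments or a prediction market).

def externality_payments(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Return what the builder owes each surrounding property owner."""
    return {
        owner: before[owner] - after[owner]
        for owner in before
        if after[owner] < before[owner]  # only compensate losses
    }

# Example: the structure lowers two of three neighbours' valuations.
before = {"alice": 300_000, "bob": 250_000, "carol": 400_000}
after = {"alice": 290_000, "bob": 250_000, "carol": 395_000}
print(externality_payments(before, after))  # {'alice': 10000, 'carol': 5000}
```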
I wonder what fraction of people identify as "normies"
I wonder if most people have something niche they identify with and label people outside of that niche as "normies"
if so, then a term with a more objective perspective (and maybe better) would be non-<whatever your thing is>
like, athletic people could use "non-athletic" instead of "normies" for that class of people
just a loose thought, probably obvious
some tree species self-selected for height (ie. there's no point in being a tall tree unless taller trees are blocking your sunlight)
humans were not the first species to self-select (for humans, the trait being intelligence) (although humans can now do it intentionally, which is a qualitatively different level of "self-selection")
on human self-selection: https://www.researchgate.net/publication/309096532_Survival_of_the_Friendliest_Homo_sapiens_Evolved_via_Selection_for_Prosociality
Board game: Medium
2 players each reveal a card with a word, then they simultaneously say a word based on those and get points if they say the same word (basically; there are some more complexities).
Example at 1m20 here: https://youtu.be/yTCUIFCXRtw?si=fLvbeGiKwnaXecaX
I'm glad past Mati cast a wider net, as the specifics for this year's Schelling day are different ☺️☺️
idk if these events often run over time, but I might pass by now if it's still happening ☺️
I liked reading your article; very interesting! 🙏
One point I figured I should x-post from our DMs 😊 --> IMO, if one cares about future lives (as much as present ones), then the question stops really being about expected lives and starts just being about whether an action increases or decreases x-risks. I think a lot/all of the tech you described also has a probability of causing an x-risk if it's not implemented. I don't think we can really determine whether a probability of some of those x-risks is low enough in absolute terms as those probabilitie...
I love this story so much, wow! It feels so incredibly tailored to me (because it is 😄). I value that a lot! It's a very scarce resource to begin with, but it hardly gets more tailored than that 😄
that's awesome; thanks for letting me know :)
i'd be curious to know how the first event went if you're inclined to share ☺
from ChatGPT
Sounds like a fun challenge! Here are 50 ways to send something to the moon:
1. Catapult
2. Giant crossbow
3. Balloon with an endless supply of helium
4. A giant seesaw
5. Sky elevator
6. Beam of light (if the object can ride light)
7. Teleportation device
8. Moon magnet
9. Whale blowhole
10. Bubble with a perfect vacuum inside
11. Tornado creator
12. Inflatable space tube
13. A jump by the Hulk
14. Sonic boom from a supersonic plane
15. Floating on a cloud machine
16. Warp drive
17. Ice cannon
18. Rocket rollercoaster
19. A super springboard
20. Fling via a
topics: AI, sociology
thought/hypothesis: when tech is able to create brains/bodies as good as or better than ours, it will change our perception of ourselves: we won't be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you're constantly using, modifying, copying, deleting, enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay doing that to human minds as well.
(which seems like it would be harmful for the purpose of reducing death)
I'm surprised this has this many upvotes. You're taking the person who contributed the most to warning humanity about AI x-risks and saying what you think they could have done better, in a way that comes across as blamey to me. If you're blaming zir, you should probably blame everyone. I'd much rather you wrote about what people could have done in general rather than targeting one of the best contributors.
ok that's fair yeah! thanks for your reply. I'm guessing a lot of those historical quotes are also taken out of context, actually.
you know those lists of historical examples of notable people mistakenly saying that some tech will not be useful (for example)
Elon Musk saying that VR is just a TV on your nose will probably become one of those ^^
related concept: https://en.wikipedia.org/wiki/Information_panspermia
video on this that was posted ~15 hours ago: https://www.youtube.com/watch?v=K4Zghdqvxt4
I wish there was an audio version ☺️