All of Mati_Roy's Comments + Replies

I wish there was an audio version ☺️

Answer by Mati_Roy

Steven Universe s1e24

Oh, based on your DM, they will still preserve and transport your brain to OregonCryo for indefinite storage for free if you can't afford it. So I wouldn't say they're no longer active. But it's still good info to know they aren't doing the storage locally anymore. Thanks for sharing.

Oh wow! Damn ☹️ Well, I'm super grateful for the time it was active. If you don't mind sending me a copy of your exchanges, I'd be interested.

I don't feel strongly about this one way or another, but I think it's reasonable to expand the term cryonics to mean any brain preservation method done with the hope of future revival, as that seems like the core concept people are referring to when using the term. When the term was first coined, room temperature options weren't a thing. https://www.lesswrong.com/posts/PG4D4CSBHhijYDvSz/refactoring-cryonics-as-structural-brain-preservation

Steven Universe s1e5 is about a being that follows commands literally, and is a metaphor for some AI risks

I don't know. The brain preservation prize to preserve the connectome of a large mammal was won with aldehyde-stabilization though

Oregon Brain Preservation uses a technique allowing fridge temperature storage, and seems well funded, so idk if the argument works out

Idk the finances for Cryonics Germany, but I would indeed guess that Tomorrow Bio has more funding + provides better SST. I would recommend using Tomorrow Bio over Cryonics Germany if you can afford it

To be clear, it's subsidized. So it's not like there's no money to maintain you in preservation. As far as I know, Oregon Brain Preservation has a trust similar to Alcor's in terms of money per volume preserved for its cryonics patients. Which seems more than enough to maintain them in storage just with the interest. Of course, there could be major economic disruptions that change that. I'm not sure how much Cryonics Germany is putting aside though.

Plus, Oregon Brain Preservation's approach seems to work at fridge temperature rather than requiring PB2 te... (read more)

Mati_Roy

fair enough! maybe i should edit my post with "brain preservation, some through cryonics, for indefinite storage with the purpose of future reanimation is sufficiently subsidized to be free or marginally free in some regions of the world" 😅

ROM

I'm in favour of saying true things. I feel the (current) title is slightly misleading. 

i don't think killing yourself before entering the cryotank vs after is qualitatively different, but the latter maintains option value (in that specific regard re MUH) 🤷‍♂️

Mati_Roy

if you're alive, you can kill yourself when s-risks increase beyond your comfort point. if you're preserved, then you rely on other people to execute on those wishes

nim
Killing oneself with high certainty of effectiveness is more difficult than most assume. The side effects on health and personal freedom of a failed attempt to end one's life in the current era are rather extreme. Anyways, emulating or reviving humans will always incur some cost; I suspect that those who are profitable to emulate or revive will get a lot more emulation time than those who are not. If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it? I think it's far more likely that an unfriendly agent will simply disregard suffering in pursuit of some other goal. I've spent my regular life trying to figure out how to accomplish arbitrary goals more effectively with less suffering, so more of the same set of challenges in an afterlife would be nothing new.
Mati_Roy

I mean, it's not a big secret, there's a wealthy person behind it. And there are 2 potential motivations for it:
1) altruistic/mission-driven
2) helps improve the service to have more cases, which can benefit themselves as well.

But also, Oregon Brain Preservation is less expensive as a result of:
1) doing brain-only (Alcor doesn't extract the brain for its neuro cases)
2) using chemical preservation which doesn't require LN2 (this represents a significant portion of the cost)
3) not including the cost of stand-by, which is also a significant portion (ie. staying ... (read more)

Who is the wealthy person?

ROM
I assumed this was an overstatement. A quick check shows I'm wrong: TomorrowBio offer whole body (€200k) or just brain preservation (€60k). The 'standby, stabilisation and transport' service (included in the previous costs) amounts to €80k and €50k respectively. I expected it to be much less. That said, they still set aside €10k for long term storage of the head. I guess this means your head has a higher chance of being stored safely.
Mati_Roy

I mean, you can trust it to preserve your brain more than you can trust a crematorium to preserve your brain.

And if you do chemical preservation, the operational complexity of maintaining a brain in storage is fairly low. LN2 isn't that complex either, but does have higher risks.

That said, I would generally suggest using Tomorrow Biostasis for Europe residents if you can afford it.

here's my new fake-religion, taking just-world bias to its full extreme

the belief that we're simulations and we'll get transcended to Utopia in 1 second because future civilisation is creating many simulations of all possible people in all possible contexts and then uploading them to Utopia so that from anyone's perspective you have a very high probability of transcending to Utopia in 1 second

^^

Mati_Roy

Is the opt-in button for Petrov Day a trap? Kinda scary to press on large red buttons 😆

DiamondSolstice
Last year, I checked LessWrong on the 27th, and found a message that told me that nobody, in fact, had pressed the red button. When I saw the red button today, it took me about five minutes to convince myself to press it. The "join the Petrov Game" message gave me confidence, and after I pressed it, there was no bright red message with the words "you nuked it all". So no, not a trap. At least not in that sense - it adds you to a bigger trap, because once pressed the button cannot be unpressed.
damiensnyder
it's not

The lifelogging-as-life-extension version of this post would be like "You Only Live 1.5 Times" ^^

epistemic status: speculative, probably simplistic and ill defined

Someone asked me "What will I do once we have AGI?"

I generally define the AGI-era as starting at the point where all economically valuable tasks can be performed by AIs at a lower cost than by a human (at subsistence level, including buying any available augmentations for the human). This notably excludes:

1) any tasks that humans can do that still provide value at the margin (ie. the caloric cost of feeding that human while they're working vs while they're not working rather than while they're not... (read more)

imagine (maybe all of a sudden) we're able to create barely superhuman-level AIs aligned to whatever values we want at a barely subhuman-level operation cost

we might decide to have anyone able to buy AI agents aligned with their values

or we might (generally) think this way of giving access to that tech would be bad, but many companies are already incentivized to do it individually and can't all cooperate not to (and they actually reached this point gradually, previously selling near human-level AIs)

then it seems like everyone/most people would start to run... (read more)

Mati_Roy

AI is improving exponentially with researchers having constant intelligence. Once the AI research workforce itself becomes composed of AIs, that constant will become exponential, which would make AI improve even faster (superexponentially?)

it doesn't need to be the scenario of a singular AI agent recursively self-improving; it can be a large AI population participating in the economy and collectively improving AI as a whole, with various AI clans* focusing on different subdomains (EtA: for the main purpose of making money, and then using that money to buy t... (read more)
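A toy numerical sketch of the intuition above (my own illustration, not from the original comment; the growth constant k, the linear feedback assumption, and the step count are arbitrary): with constant researcher intelligence the same update rule compounds exponentially, while researcher intelligence that tracks capability itself produces much faster, roughly hyperbolic growth.

```python
# Toy model (illustrative assumptions only): capability C grows at a rate
# proportional to the intelligence of the research workforce.
#   human researchers: intelligence constant -> dC/dt = k * C     (exponential)
#   AI researchers:    intelligence tracks C -> dC/dt = k * C**2  (hyperbolic, much faster)

def simulate(ai_researchers: bool, k: float = 0.05, c0: float = 1.0,
             steps: int = 25, dt: float = 1.0) -> list[float]:
    """Euler-integrate capability growth under the two assumptions."""
    c, trajectory = c0, [c0]
    for _ in range(steps):
        researcher_intelligence = c if ai_researchers else 1.0
        c += k * researcher_intelligence * c * dt
        trajectory.append(c)
    return trajectory

print(simulate(ai_researchers=False)[-1])  # ~3.4x after 25 steps
print(simulate(ai_researchers=True)[-1])   # ~2000x after the same 25 steps
```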

Oregon Brain Preservation is a solid organization offering a free option in the US: https://www.oregoncryo.com/services.html, and Cryonics Germany a free option in Europe: https://cryonics-germany.org/en/

Thanks for engaging with my post. I keep thinking about that question.

I'm not quite sure what you mean by "values and beliefs are perfectly correlated here", but I'm guessing you mean they are "entangled".

there is no test we could perform which would distinguish what it wants from what it believes.

Ah yeah, that seems true for all systems (at least if you can only look at their behaviors and not their mind); ref.: Occam’s razor is insufficient to infer the preferences of irrational agents. Summary: In principle, all possible sets of possible value-syste... (read more)

i want a better conceptual understanding of what "fundamental values" means, and how to disentangle that from beliefs (ex.: in an LLM). like, is there a meaningful way we can say that a "cat classifier" is valuing classifying cats even though it sometimes fails?

cubefox
I guess for a cat classifier, disentanglement is not possible, because it wants to classify things as cats if and only if it believes they are cats. Since values and beliefs are perfectly correlated here, there is no test we could perform which would distinguish what it wants from what it believes. Though we could assume we don't know what the classifier wants. If it doesn't classify a cat image as "yes", it could be because it is (say) actually a dog classifier, and it correctly believes the image contains something other than a dog. Or it could be because it is indeed a cat classifier, but it mistakenly believes the image doesn't show a cat. One way to find out would be to give the classifier an image of the same subject, but in higher resolution or from another angle, and check whether it changes its classification to "yes". If it is a cat classifier, it is likely it won't make the mistake again, so it probably changes its classification to "yes". If it is a dog classifier, it will likely stay with "no". This assumes that mistakes are random and somewhat unlikely, so will probably disappear when the evidence is better or of a different sort. Beliefs react to such changes in evidence, while values don't.
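A minimal sketch of the test described in the comment above, assuming we can only query the classifier on images (the function names, the stub classifier, and the choice to probe only negative answers are my assumptions, not part of the comment): if better evidence about the same subject flips a "no" to a "yes", the original "no" looks like a mistaken belief; if the "no" is stable under better evidence, it looks more like the system simply doesn't value saying "yes" to this subject.

```python
# Sketch of the belief-vs-value disambiguation test (names and stub are hypothetical):
# re-query the classifier on better views of the same subject; a flipped answer
# suggests a mistaken belief, a stable answer suggests a different goal/value.

from collections.abc import Callable

def probe_belief_vs_value(classify: Callable[[str], bool],
                          original_view: str,
                          better_views: list[str]) -> str:
    """Interpret a negative answer on `original_view` using improved evidence."""
    if classify(original_view):
        return "classified as positive; nothing to disambiguate"
    if any(classify(view) for view in better_views):
        return "likely a mistaken belief: better evidence flipped the answer"
    return "likely a different value/goal: the answer is stable under better evidence"

# Hypothetical usage with a stub classifier that only recognizes high-res cats:
def stub_classifier(image_path: str) -> bool:
    return "cat_hires" in image_path

print(probe_belief_vs_value(stub_classifier, "cat_lowres.jpg",
                            ["cat_hires.jpg", "cat_other_angle.jpg"]))
```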
Mati_Roy

when potentially ambiguous, I generally just say something like "I have a different model" or "I have different values"

Mati_Roy

it seems to me that disentangling beliefs and values is an important part of being able to understand each other

and using words like "disagree" to mean both "different beliefs" and "different values" is really confusing in that regard

Viliam
Let's use "disagree" vs "dislike".
Mati_Roy

topic: economics

idea: when building something with local negative externalities, have some mechanism to measure the externalities in terms of how much the surrounding property valuations changed (or are expected to change, say through a prediction market) and have the owner of that new structure pay the owners of the surrounding properties.
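A minimal sketch of the payment mechanism (my own illustration; the data shapes, the appraisal numbers, and the decision to compensate only declines are assumptions): take each surrounding property's valuation before the project and its expected valuation after (e.g. from a prediction market), then bill the builder for the aggregate decline.

```python
# Minimal sketch of the compensation mechanism (assumptions mine): each surrounding
# property has a valuation before the project and an expected valuation after;
# the builder compensates each owner for any decline, and pays nothing for increases.

def externality_payments(valuations: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Map owner -> compensation owed, given (value_before, value_after) per property."""
    return {owner: max(float(before - after), 0.0)
            for owner, (before, after) in valuations.items()}

# Hypothetical example: a new structure lowers two neighbours' values, raises one.
neighbours = {
    "12 Oak St": (500_000, 470_000),
    "14 Oak St": (520_000, 505_000),
    "16 Oak St": (480_000, 490_000),
}
owed = externality_payments(neighbours)
print(owed)                # {'12 Oak St': 30000.0, '14 Oak St': 15000.0, '16 Oak St': 0.0}
print(sum(owed.values()))  # total bill to the builder: 45000.0
```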

Mati_Roy

I wonder what fraction of people identify as "normies"

I wonder if most people have something niche they identify with and label people outside of that niche as "normies"

if so, then a term with a more objective perspective (and maybe better) would be non-<whatever your thing is>

like, athletic people could use "non-athletic" instead of "normies" for that class of people

CstineSublime
Does "normie" crossover with "(I'm) just a regular guy/girl"? While they are obviously have highly different connotations, is the central meaning similar? I tend to assume, owing to Subjectivism and Egocentric Bias, that at times people are more likely to identify as part of the majority (and therefore 'normie') than the minority unless they have some specific reason to do so. What further complicates this like a matryoshka doll is not only the differing sociological roles that a person can switch between dozens of times a day (re: the stereotypical Twitter bio "Father. Son. Actuary. Tigers supporter")  but within a minority one might be part of the majority of the minority, or the minority of the minority many times over. Like the classic Emo Phillips joke "Northern Conservative Baptist, or Northern Liberal Baptist" "He said "Northern Conservative Baptist", I said "me too! Northern Conservative Fundamentalist Baptist..."" itself a play on "No True-Scotsman".  
Mati_Roy

just a loose thought, probably obvious

some tree species self-selected themselves for height (ie. there's no point in being a tall tree unless taller trees are blocking your sunlight)

humans were not the first species to self-select (for humans, the trait being intelligence) (although humans can now do it intentionally, which is a qualitatively different level of "self-selection")

on human self-selection: https://www.researchgate.net/publication/309096532_Survival_of_the_Friendliest_Homo_sapiens_Evolved_via_Selection_for_Prosociality

Answer by Mati_Roy

Board game: Medium

2 players each reveal a card with a word, then they need to say a word based on those and get points if they say the same word (basically, with some more complexities).

Example at 1m20 here: https://youtu.be/yTCUIFCXRtw?si=fLvbeGiKwnaXecaX

Mati_Roy

I'm glad past Mati cast a wider net as the specifics for this year's Schelling day are different ☺️☺️

idk if the events often run over time, but I might pass by now if it's still happening ☺️

I liked reading your article; very interesting! 🙏

One point I figured I should x-post with our DMs 😊 --> IMO, if one cares about future lives (as much as present ones), then the question stops really being about expected lives and starts just being about whether an action increases or decreases x-risks. I think a lot/all of the tech you described also has a probability of causing an x-risk if it's not implemented. I don't think we can really determine whether the probability of some of those x-risks is low enough in absolute terms as those probabilitie... (read more)

I love this story so much, wow! It feels so incredibly tailored to me (because it is 😄). I value that a lot! It's a very scarce resource to begin with, but it hardly gets more tailored than that 😄

that's awesome; thanks for letting me know :)

i'd be curious to know how the first event went if you're inclined to share ☺

nick lacombe
4 people attended including me. i'd say it went well but i don't know the other attendees' opinions. it was very informal: we discussed how far along each of us was in signing up (or not), and what our next steps were or which cryonics projects we want to work on.

cars won't replace horses, horses with cars will

Answer by Mati_Roy

from ChatGPT

Sounds like a fun challenge! Here are 50 ways to send something to the moon:

1. Catapult
2. Giant crossbow
3. Balloon with an endless supply of helium
4. A giant seesaw
5. Sky elevator
6. Beam of light (if the object can ride light)
7. Teleportation device
8. Moon magnet
9. Whale blowhole
10. Bubble with a perfect vacuum inside
11. Tornado creator
12. Inflatable space tube
13. A jump by the Hulk
14. Sonic boom from a supersonic plane
15. Floating on a cloud machine
16. Warp drive
17. Ice cannon
18. Rocket rollercoaster
19. A super springboard
20. Fling via a

... (read more)

topics: AI, sociology

thought/hypothesis: when tech is able to create brains/bodies as good as or better than ours, it will change our perception of ourselves: we won't be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you're constantly using, modifying, copying, deleting, enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay doing that to human minds as well.

(which seems like it would be harmful for the purpose of reducing death)

I'm surprised this has this many upvotes. You're taking the person who contributed the most to warning humanity about AI x-risks, and saying what you think they could have done better in a way that comes across as blamey to me. If you're blaming zir, you should probably blame everyone. I'd much rather you wrote about what people could have done in general rather than targeting one of the best contributors.

ok that's fair yeah! thanks for your reply. I'm guessing a lot of those historical quotes are also taken out of context actually.

you know those lists about historical examples of notable people mistakenly saying that some tech will not be useful (for example)

Elon Musk saying that VR is just a TV on your nose will probably become one of those ^^

https://youtube.com/shorts/wYeGVStouqw?feature=share

Sinclair Chen
earbuds are just speakers in your ears. they're also way better than speakers.
niplav
I believe VR/AR are not going to be as big of a deal as smartphones (85%), and not produce >$200 bio. of revenue in 2030 (55%).
mako yass
This wasn't him taking a stance. It ends with a question, and it's not a rhetorical question, he doesn't have a formed stance. Putting him in a position where he feels the need to defend a thought he just shat out about a topic he doesn't care about while drinking a beer is very bad discourse.