All of Risto_Saarelma's Comments + Replies

This isn't pumping the intuition for me the way you seem to want it to. I think life is worth living, and I'd just cut to the chase and pick 1, because option 2 doesn't make sense as a way to get more life. On the pattern theory of identity, life is a process, not a weighted lump of time-space-matter-stuff where you can just say "let's double the helping" like this. If you run the exact same process twice, that doesn't get you any new patterns and new life compared to just running it once.

Or if the idea is that I'd be aware of having gotten a second ru... (read more)

And for people on the Vim side, there's VimOutliner for doing workflowy-like outlines, also with a time-tracking component.

Cal Newport on "Write Every Day". If it's not your main job, you're going to end up having no write days, and if you're committed to a hard schedule a missed day is going to translate into "welp, couldn't make the cut then, better quit for good".

4Elo
Disagree with his opinion. He suggests the biggest problem with write every day is: Yea, and? That doesn't have to cause failure. We know things like "You don't have to fail with abandon." Also, iteration cycles. He also says: Which is fine. That's his experience. You can listen to him and his experience or you can not. Anecdata is anecdata. There's a reason why every list of writing advice and every famous writer says to write every day. If you only ever do that, you will be doing yourself a disservice. You don't need to know the full plan before setting off. And it's often a waste of time to not start and pivot. Imagine having to know every word of a book before you start writing it down on paper. That's a ridiculous concept.

----------------------------------------

In counterpoint: if you set out to write every day, yes, you will fail. That doesn't mean you can't try to do it, and do really really well in the process. If you fail you don't have to quit for good. If you iterate and try again you can diagnose that failure mode and try again. Try harder. Try smarter.

Yes, The Mind Illuminated is basically the same ten-step model as the one in that article, but expanded to book length and with lots of extra practice advice and theory of mental models.

The Mind Illuminated by John Yates is my new favorite meditation instruction book. Has lots of modern neuroscience grounding, completely secular, and presents a very detailed step-by-step instruction on going from not having a daily meditation habit to attaining very deep concentration states.

7[anonymous]
I also think John Yates's Progressive Stages of Mindfulness in Plain English is orders of magnitude better than all the other meditation books I've read. From what I could tell from looking at the table of contents (and page lengths) for both, the book/pdfs I linked cover the same content, but are free! Though, I might consider buying his newest book, just because I liked the other one so much.
3Hohn
I recently got hold of From Under the Rubble, a collection of essays by Solzhenitsyn and other Russian thinkers written in the 1970s and circulated surreptitiously among themselves. It occurred to me that I'm not the first person to ask: "where do we go from here? How do we go there?" and these essays are thought-provoking responses. They are decidedly not very rational-- I'm not sure there is a rational basis for optimism in 1970s Soviet Russia.

One problem is that the community has few people actually engaged enough with cutting edge AI / machine learning / whatever-the-respectable-people-call-it-this-decade research to have opinions that are grounded in where the actual research is right now. So a lot of the discussion is going to consist of people either staying quiet or giving uninformed opinions to keep the conversation going. And what incentive structures there are here mostly work for a social club, so there aren't really that many checks and balances that keep things from drifting further ... (read more)

1Kaj_Sotala
Agreed both with this being a real risk, and it being good that Ilya hangs out here.

Congratulations on getting a "ban any new user posting the sort of stuff Eugine would post" moderation norm on the way I guess.

This sounds like someone whose salient feature is math anxiety from high school asking how to be a research director at CERN. It's not just that the salient feature seems at odds with the task; it's that the task isn't exactly something you just walk into, while you sound like you're talking about helping someone overcome a social phobia by taking a part-time job at a supermarket checkout. Is your friend someone who wins International Math Olympiads?

Maybe someday someone clever will figure out how to disseminate that knowledge, but it simply isn't there yet.

Based on Razib Khan's blog posts, many cutting edge researchers seem to be pretty active on Twitter where they can talk about their own stuff and keep up on what their colleagues are up to. Grad students on social media will probably respond to someone asking about their subfield if it looks like they know their basics and may be up to something interesting.

The tiny bandwidth is of course a problem. "Professor Z has probably proven math lem... (read more)

0ChristianKl
That does fit in a tweet, but that doesn't mean a situation exists where that communication happens. In many cases you don't know what you don't know, so you can't ask. For the questions where you can ask, StackExchange is great.

I am quite certain this is very unlikely to become any type of trend (it is certainly possible for outsiders to be great, Ramanujan was an outsider after all).

Not in the present circumstances, no. The interesting thing is if it would strike a match with the current disaffection with academia (perceptions of must-have-bachelor's-for-any-kind-of-job student loan rackets and stressed-out researchers who spend most of their energy gaming administrative systems and grinding out cookie-cutter research tailored to fit standardized bureaucratic metrics for acce... (read more)

0Lumifer
The "traditional" answer :-/ is that they will do startups.

Yeah, I am sure enough about this not happening that I am willing to place bets. There are an enormous number of intangibles Coursera can't give you (I agree it can be useful for a certain type of person for certain types of aims).

Agree that being inside academia is probably a lot bigger deal than people outside it really appreciate. We're about to see the first generation that grew up with a really ubiquitous internet come to grad school age though. Currently in addition to the assumption that generally clever people will want to go to university, we've... (read more)

4EHeller
In STEM fields, there is a great deal of necessary knowledge that simply is not in journals or articles, and is carried forward as institutional knowledge passed around among grad students and professors. Maybe someday someone clever will figure out how to disseminate that knowledge, but it simply isn't there yet.
4IlyaShpitser
I only know about STEM, but I don't think it will make a ton of difference (will report back once I see a few graduate). I am quite certain this is very unlikely to become any type of trend (it is certainly possible for outsiders to be great, Ramanujan was an outsider after all).

----------------------------------------

edit: I think a better example of "credentialism" is docs vs nurses. MDs know a lot more than nurses do, but there is a ton of routine healthcare stuff that needs a doc for no good reason, basically. In academia people ultimately just care if you are good or not. One of the smartest mathematical minds I know is an MD, not a PhD (and is an enormously influential academic doing mathy stuff). There is a famous mathematician at UCLA without a PhD, I think.

Yeah, for some reason I'm not inclined to give very much weight to an event that can't be detected by outside observers at all and which my past, present or future selves can't subjectively observe being about to happen, happening right now or having happened.

You seem to be hung up on either memories or observations being the key to decoding the subjective self. I think that is your error.

This sounds like a thing people who want to explain away subjective consciousness completely are saying. I'm attacking the notion that the annoying mysterious part in... (read more)

3knpstr
At best the argument you're making is the same as the "if a tree falls in the forest and no one is around to hear it, does it make a sound?" argument. If I have a back-up of my computer software on a different hard drive and the current hard drive fails, so I swap in the back-up... my computer performs the same, but it is obviously a different hard drive. If my hard drive doesn't fail and I simply "write over" my current hard drive with the back-up, it is still not the same hard drive/software. It will be easy to forget it has been copied and is not the original, but the original (or last version) is gone and has been written over, despite it being "the same".

There is some Buddhist connection, yes. The moments of experience thing is a thing in some meditation styles, and advanced meditators are actually describing something like subjective experience starting to feel like an on/off sequence instead of a continuous flux. Haven't gone really deep into what either the Buddhist metaphysics or the meditation phenomenology says. Neuroscience also has some discrete consciousness steps stuff, but I likewise haven't gone very deep into that. Anyway,

(I'm with them so far. Here's where I get off): All sentient beings are

... (read more)

The strange part that might give your intuition a bit of a shake is that it's not entirely clear how you tell the difference as an inside observer either. The thought experiment wasn't "we're going to start doing this tomorrow night unless you acquiesce", it's "we've been doing this the whole time", and everybody had been living their life exactly as before until told about it. What should you now think of your memories of every previous day and going to sleep each night?

0[anonymous]
Either you cease to exist, or you don't. It's a very clear difference. You seem to be hung up on either memories or observations being the key to decoding the subjective self. I think that is your error.

My expounding of the pattern identity theory elsewhere in the comments is probably a textbook example of what Scott Aaronson calls bullet-swallowing, so just to balance things out I'm going to link to Aaronson's paper Ghost in the Quantum Turing Machine that sketches a very different attack on standard naive patternism. (Previous LW discussion here)

If pressed, right now I'm leaning towards the matter-based argument, that if consciousness is not magical then it is tied to specific sets of matter. And that a set of matter can not exist in multiple locations. Therefore a single consciousness can not exist in multiple locations. The consciousness A that I am now is in matter A.

So, there are two things we need to track here, and you're not really making a distinction between them. There are individual moments of consciousness, which, yes, probably need to be on a physical substrate that exists in the s... (read more)

2Usul
I'm going to go ahead and continue to disagree with the pattern theorists on this one. Has the inverse of the popular "Omega is a dick with a lame sense of irony" simulation mass-murder scenario been discussed? Omega (truthful) gives you a gun. "Blow your brains out and I'll give the other trillion copies a dollar." It seems the pattern theorist takes the bullet or Dr Bowie-Tesla's drowning pool with very little incentive.

The pattern theorists as you describe them would seem to take us also to the endgame of Buddhist ethics (not a Buddhist, not proselytizing for them): You are not thought, you are not feeling, you are not memory, because these things are impermanent and changing. You are the naked awareness at the center of these things in the mind of which you are aware. (I'm with them so far. Here's where I get off): All sentient beings are points of naked awareness; by definition they are identical (naked, passive), therefore they are the same, therefore even this self does not matter, therefore thou shalt not value the self more than others. At all. On any level. All of which can lead you to bricking yourself up in a cave being the correct course of action.

To your understanding, does the pattern theorist (just curious, do you hold to the views you are describing as pattern theory?) define self at all on any level? Memory seems an absurd place to do so from, likewise personality, thought -- have you heard the nonsense that thought comes up with? How can a pattern theorist justify valuing self above other? Without a continuous You, we get to the old koan: "Who sits before me now? (who/what are You?)"

"Leave me alone and go read up on pattern theory yourself, I'm not your God-damn philosophy teacher" is a perfectly acceptable response, by the way. No offense will be taken and it would not be an unwarranted reply. I appreciate the time you have taken to discuss this with me already.

You're still mostly just arguing for your personal intuition for the continuity theory though. People have been doing that pretty much as long as we've had fiction about uploads or destructive teleportation, with not much progress to the arguments. How would you convince someone sympathetic to the pattern theory that the pattern theory isn't viable?

FWIW, after some earlier discussions about this, I've been meaning to look into Husserl's phenomenology to see if there are some more interesting arguments to be found there. That stuff gets pretty weird and tricky fast though, and might be a dead end anyway.

4Usul
Honestly, I'm not sure what other than intuition and subjective experience we have to go with in discussing consciousness. Even the heavy hitters in the philosophy of consciousness don't 100% agree that it exists. I will be the first to admit I don't have the background in pattern theory or the inclination to get into a head to head with someone who does.

If pressed, right now I'm leaning towards the matter-based argument, that if consciousness is not magical then it is tied to specific sets of matter. And that a set of matter can not exist in multiple locations. Therefore a single consciousness can not exist in multiple locations. The consciousness A that I am now is in matter A. If a copy consciousness B is made in matter B and matter A continues to exist, then it is reasonable to state that consciousness A remains in matter A. If matter A is destroyed, there is no reason to assume consciousness A has entered matter B simply because of this. You are in A now. You will never get to B. So, if it exists, and it is you, you're stuck in the meat. And undeniably, someone gets stuck in the meat.

I imagine differing definitions of You, self, consciousness, etc. would queer the deal before we even got started.

I'm guessing a part of the point is that nobody had noticed anything (and indeed still can't, at least in any way they could report back) until the arrangement was pointed out, which highlights that there are bits in the standard notion of personal identity that get a bit tricky once you try to get more robust than just going by intuition on them. How do you tell you die when a matrix lord disintegrates you and then puts together an identical copy? How do you tell you don't die when you go under general anesthesia for brain surgery and then wake up?

-1James_Miller
Or when 90% of the atoms that used to be in your body no longer are there.
0[anonymous]
How does that matter at all? That seems like a completely unrelated, orthogonal issue. The question at hand is should the person being disintegrated expect to continue its subjective experience as the copy, or is it facing oblivion. The fact that you can't experimentally tell the difference as an outside observer is irrelevant.

I see the pattern identity theory, where uploads make sense, as one that takes it as a starting point that you have an unambiguous past but no unambiguous future. You have moments of consciousness where you remember your past, which gives you identity, and lets you associate your past moments of consciousness to your current one. But there's no way, objective or subjective, to associate your present moment of consciousness to a specific future moment of consciousness, if there are multiple such moments, such as a high-fidelity upload and the original perso... (read more)

5MockTurtle
I very much like bringing these concepts of unambiguous past and ambiguous future to this problem. As a pattern theorist, I agree that only memory (and the other parts of my brain's patterns which establish my values, personality, etc.) matter when it comes to who I am. If I were to wake up tomorrow with Britney Spears's memories, values, and personality, 'I' will have ceased to exist in any important sense, even if that brain still had the same 'consciousness' that Usul describes at the bottom of his post.

Once one links personal identity to one's memories, values and personality, the same kind of thinking about uploading/copying can be applied to future Everett branches of one's current self, and the unambiguous past/ambiguous future concepts are even more obviously important. In a similar way to Usul not caring about his copy, one might 'not care' about a version of oneself in a different Everett branch, but it would still make sense to care about both future instances of yourself BEFORE the split happens, due to the fact that you are uncertain which future you will be 'you' (and of course, in the Everett branch case, you will experience being both, so I guess both will be 'you').

And to bring home the main point regarding uploading/copying, I would much prefer that an entity with my memories/values/personality continue to exist in at least one Everett branch, even if such entities will cease existing in other branches. Even though I don't have a strong belief in quantum multiverse theory, thinking about Everett branches helped me resolve the is-the-copy-really-me? dilemma for myself, at least. Of course, the main difference (for me) is that with Everett branches, the different versions of me will never interact. With copies of me existing in the same world, I would consider my copy as a maximally close kin and my most trusted ally (as you explain elsewhere in this thread).
7Usul
Thanks for the reply. I am not convinced by the pattern identity theorist because, I suppose, I do not see the importance of memory in the matter, nor the thoughts one might have about those thoughts. If I lose every memory slowly and die senile in a hospital bed, I believe that it will be the same consciousness experiencing those events as is experiencing me writing these words. I identify that that being, which holds no resemblance to my current intellect and personality, will be my Self in a way that an uploaded copy with my current memory and personality can never be.

I might should have tabooed "consciousness" from the get-go, as there is no one universal definition. For me it is passive awareness. This meat awareness in my head will never get out, and it will die, and no amount of cryonics or uploading to create a perfect copy that feels as if it is the same meat awareness will change that. Glass half-empty, I suppose.

A review of Ernst Jünger's The Glass Bees.

0Viliam
Awesome!!!

Yes, we do, maybe you'd notice if you didn't shut down your brain whenever you encountered a non-PC idea.

I don't think there's been much elaboration in the last few years on the ideas that were already floating around here five-ish years ago. We've just had the few regulars jumping in with the same message, failing to start much interesting conversation, and growing increasingly cranky.

Making being a reactionary your life's work isn't very rewarding. It's a feature of the present system that proponents who get boring and repetitive get thrown in the wood chipper and more clever and interesting ones take their place, but any single person will get stuck in their old material after a while.

Or because you and Jim are being tedious assholes nobody likes to hang out with, while going on about the same predictable set of not socially acceptable stuff for years and years without having anything new and interesting to say after a while.

-5username2

In Finland, today, can I just say "I don't feel like working" and get welfare for life?

(Not an expert on this stuff, but here's my rough understanding.)

You get a couple years of pretty straightforward welfare if you quit your job, then it looks like they will start doing means testing (tarveharkinta) on your savings and will stop paying you if it doesn't look like you're living hand-to-mouth. After you've gone through all your savings that the employment office is aware of, I think you can go on living in some sort of rental apartment and get ... (read more)

3bogus
Now here's why these are really, really dumb policies: they amount to a capital levy and a corvée that are selectively applied to low-income folks who would otherwise qualify for welfare. Needless to say, there's a reason we don't use capital levies and corvées anymore, and limiting them to low-income folks does not change that assessment much.

Looks like you can just aim youtube-dl at the URL and it'll start downloading.

If there was a large dataset of faces shot in a similar way and rated for attractiveness somewhere, you could take a photo of yourself, look for people in the set who look like you (possibly with some sort of face recognition program) and see how they are rated.
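The lookup described above could be prototyped along these lines. This is a hypothetical, minimal sketch: the "embeddings" are made-up 2-d vectors standing in for whatever a real face recognition model would output, and the ratings dataset is invented purely for illustration.

```python
# Hypothetical sketch: estimate a rating for a query face by averaging
# the ratings of the k nearest faces in some embedding space. The
# face -> vector step is assumed to come from an external face
# recognition model; the vectors and ratings below are made up.

import math

def nearest_ratings(query, dataset, k=3):
    """dataset: list of (embedding, rating); returns mean rating of k nearest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(dataset, key=lambda item: dist(query, item[0]))
    top = ranked[:k]
    return sum(rating for _, rating in top) / len(top)

# Toy data: 2-d "embeddings" with ratings on a 1-10 scale.
faces = [
    ((0.1, 0.2), 6.0),
    ((0.15, 0.25), 7.0),
    ((0.9, 0.8), 3.0),
    ((0.85, 0.75), 4.0),
]

print(nearest_ratings((0.12, 0.22), faces, k=2))  # averages the two closest ratings
```

A real version would hinge entirely on the quality of the embedding model and on having a rated dataset shot under comparable conditions, which is the hard part.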

I'm still doing this after starting a year ago. I've filled up one large ruled Moleskine and am 50 pages into the second one. I have calendar pages where I don't do any forward planning, but just write a one-line summary of what I worked on that day. Empty lines or variants of "slacking off" are an instant sign of trouble.

Other than that, the nice thing is just that I have a designated single nice journal to do any sort of brainstorming notes I need. Random ideas, planning an ongoing project or reading notes all just start by labeling a new page on the journal and writing down whatever is relevant.

Not exactly, but related: Hugh Everett's daughter Elizabeth committed suicide in 1996 and wrote in her suicide note that she was going to a parallel universe to be with her father.

7gwern
The exact wording is a little more tentative. From Many Worlds of Hugh Everett, pg757 in my Calibre: (The footnote explains that "don't file me" is a joking reference to how Everett's ashes were apparently kept in a filing cabinet for some time.)

It's still related to his shtick and people are getting really tired of his shtick.

No, it's just indicating that I haven't made any sort of concentrated effort at clearing my reading list or maintaining some sort of FIFO discipline on it. The Complete History of the World in Impeccable Engaging Detail tends to not do very well against a Warren Ellis comic book about shooting aliens wearing human skin suits in the head with flesh-eating bullets when picking random media to consume during idle time.

0gjm
Yup, understood. (My own to-be-read shelves have maybe 350 books on them, and I have the same failure mode where mind candy gets consumed faster than meatier fare. If it actually is a failure mode, which maybe it isn't.)

Can't recommend a book I've read, but I've had J.M. Roberts' The New Penguin History of the World on my reading list for a while now. It's more big picture than facts.

If you're after rulers, dates and the like, just diving into wikipedia, starting from high-level articles and taking your own notes might not be a terribly bad approach.

0gjm
Is the fact that it's been on your reading list for some time but you haven't read it a strike against it? E.g., does it indicate that it's intimidating rather than engaging?
2Vaniver
I actually expect that this is a very good way to approach learning world history.

I'd be happy if there were an RSS feed of his publicly visible FB posts.

Stand Still, Stay Silent, a webcomic about a post-apocalyptic Northern Europe with very nice art.

Seconding this. I use the hledger variant.

It's probably a safe assumption that there exists some number of people dedicating their careers to the study of intelligence who are able to consider potential heritability confounders you can think up in five minutes.

0[anonymous]
My question was actually rather meant as a reference request (though, admittedly poorly phrased [but also poorly interpreted]).

Do you already have a track record of getting sizable groups of strangers doing things they wouldn't otherwise have done by influencing them on some other social media? Then it might be worth looking into how to game facebook.

If you don't, then the first question is if you should try to get into the social media influence game to begin with. How many people are trying to do it compared to people who have any traction with a number of followers, and what sets the successful people apart? If most people who don't bounce off right away are barely hanging by i... (read more)

2[anonymous]
I don't at all. In fact, the track record for any kind of online organising would suggest I'm worse than average. However, I am very effective at mobilising people in real life and have a great track record for that. I can often sense the mood of a place and play off that. But I'm not confident that I can accurately predict it online and do things. Thank you for getting me to answer this. If I had just thought of the above answer, I wouldn't have realised that, by that second point, my efficacy at mobilising people in real life is non-transferable to online mobilising, based on past evidence. And therefore my uncertainty about this topic, which led to the question, is now resolved. Thank you.

Funny thing, the previous person with minimal earlier site presence who came in with "here's how to start properly growing Less Wrong" was also kind of tone-deaf and annoying.

-3Lu93
You are just rude now. You just straight up try to insult and discredit me, and you did not even try to hide it. I never said "this is how to ..." I offered a course specialized in that topic. I offered material. I don't own that course, I just thought it would be useful to people who try to get more members here (I have met a few of them, so I expect there are more). I properly separated what is my idea from that course.

I just found out that some comic books I read in Finnish in the 80s were originally published in English in 1976 in a magazine called Starstream. I re-read the comics, which are an anthology of comic adaptations of various golden age SF short stories. They also mostly stick to the source material, such as John W. Campbell's Who Goes There?, which was also the basis for John Carpenter's The Thing. Generally it's quite a bit better than what you'd expect from "newsstand comic book from 1976", and a lot of the stories are quite weird, from the mix of ou... (read more)

Obviously the solution is a smartwatch which pushes retractable needles in a pattern that tells the current time in binary into the skin of your wrist once every minute.

Turns out there is. Probably not all of the programs though.

Are you trying to do something specific or are you just curious about learning about Bayesian statistics? The software on that list probably won't be that useful unless you already know a bit about statistics theory and have a specific problem you want to solve.

0mrexpresso
thanks!

It has seemed to me that a lot of the commenters who come with their own solid competency are also less likely to get unquestioningly swept away following EY's particular hobbyhorses.

There's also the whole Lesswrong-is-dying thing that might contribute to the vibe you're getting. I've been reading the forum for years and it hasn't felt very healthy for a while now. A lot of the impressive people from earlier have moved on, we don't seem to be getting that many new impressive people coming in, and hanging out a lot on the forum turns out not to make you that much more impressive. What's left is turning increasingly into a weird sort of cargo cult of a forum for impressive people.

7V_V
Actually, I think that LessWrong used to be worse when the "impressive people" were posting about cryonics, FAI, the many-worlds interpretation of quantum mechanics, and so on.

Did any of the really valuable contributors to LW go away because they were driven away by incessant criticism? You think Scott Alexander moved to SSC because he couldn't handle the downvotes?

Didn't Eliezer say somewhere that he posts on Facebook instead of LW nowadays because on LW you get dragged into endless point-scoring arguments with dedicated forum arguers and on Facebook you just block commenters who come off as too tiresome to engage with from your feed?

4Lumifer
As far as I understand (it isn't very far), Eliezer prefers Facebook basically because it gives him control -- which is perfectly fine, his place on FB is his place and he sets the rules. I don't think that degree of control would be acceptable on LW -- the local crowd doesn't like tyrants, even wise and benevolent.
1ChristianKl
On the LW facebook group Eliezer occasionally bans people who post really low-quality content. The same goes for his own feed. If Eliezer banned someone on LW, on the other hand, he would get a storm of criticism.

Mathematicians have come up with formal languages that can, in principle, be used to write proofs in a way that they can be checked by a simple algorithm. However, they're utterly impractical.

I understood that a part of the univalent foundations project is to develop a base formalism for mathematics that's amenable to similar layered abstraction with formal proofs as you can do with programs in modern software engineering. The basic formal language for proofs is like raw lambda calculus, you can see it works in theory but it'd be crazy to write actual s... (read more)

Just how bad of an idea is it for someone who knows programming and wants to learn math to try to work through a mathematics textbook with proof exercises, say Rudin's Principles of Mathematical Analysis, by learning a formal proof system like Coq and using that to try to do the proof exercises?

I'm figuring, hey, no need to guess whether whatever I come up with is valid or not. Once I get it right, the proof assistant will confirm it's good. However, I have no idea how much work it'll be to get even much simpler proofs than what are expected of the textboo... (read more)

4redlizard
I have tried exactly this with basic topology, and it took me bloody ages to get anywhere despite considerable experience with coq. It was a fun and interesting exercise in both the foundations of the topic I was studying and coq, but it was by no means the most efficient way to learn the subject matter.
3Drahflow
The Metamath project was started by a person who also wanted to understand math by coding it: http://metamath.org/ Generally speaking, machine-checked proofs are ridiculously detailed. But being able to create such detailed proofs did boost my mathematical understanding a lot. I found it worthwhile.
5Epictetus
Mathematicians have come up with formal languages that can, in principle, be used to write proofs in a way that they can be checked by a simple algorithm. However, they're utterly impractical. Most proofs leave some amount of detail to the reader. A proof might skip straightforward algebraic manipulations. It might state an implication and leave the reader to figure out just what happened. Actually writing out all the details (in English) would at least double the length of most proofs. Put in a formal language, and you're looking at an order-of-magnitude increase in length. That's a lot of painstaking labor for even a simple proof. If the problem is the least bit interesting, you'll spend a lot more time writing out the details for a computer than you did solving it.
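The order-of-magnitude blowup Epictetus describes can be made concrete with a toy example. Here is a rough sketch in Lean 4 syntax (the exact lemma names like `Nat.succ_add` are my assumption and may need adjusting) of what formalizing a one-sentence informal fact looks like:

```lean
-- Informally, "addition of natural numbers is commutative" is a single
-- sentence. Formally, every inductive step must be spelled out for the
-- checker. A sketch, assuming standard library lemmas are available:
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp                         -- 0 + b = b + 0
  | succ n ih =>                         -- ih : n + b = b + n
    rw [Nat.succ_add, Nat.add_succ, ih]  -- (n+1) + b = b + (n+1)
```

And this is with heavy automation (`simp`, `rw`) already doing most of the work; a fully elementary proof in a raw formal calculus would be longer still.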
9Anatoly_Vorobey
It's a bad idea. Don't do it. You'll be turned off by all the low-level grudgery and it'll distract you from the real content. Most of the time, you'll know if you found a solid proof or not. Those times you're not sure, just post a question on math.stackexchange, they're super-helpful.
3IlyaShpitser
"There is no royal road to geometry." ---------------------------------------- The way we teach proofs and mathematical sophistication is ad hoc and subject specific. I wish I knew a better general way, but barring that, perhaps start with a mathematical subject close to programming. For instance logic or complexity theory. I wouldn't bother with proof assistants until you are pretty comfortable with proofs.

Any success stories for signing up for cryonics for people with middle-class wealth who don't have a citizenship in an Anglosphere country?

I've learned that it's probably optimal to strive for the most high-earning high-productivity career you can accomplish and donate your extra income to effective altruism and that I'm not going to go and try to do that.

4Zubon
There are other things to optimize, and Less Wrong would cheerfully endorse your optimizing for some combination of:

* happiness
* fun
* production/creation that focuses on your comparative advantages

Someone needs to do those things that the high-earning people are paying for, and if you don't think you'd be happy with an earnings-optimized career, you might be that person.