All of RomanHauksson's Comments + Replies

I was trying to optimise/recommend by it not only parameter of personal tastes which it is actually only hard to get by other means, but a whole bunch of parameters.

Can you rephrase this? Having a hard time parsing this sentence.

1EniScien
Rephrased

I think the personal responsibility mindset is healthy for individuals, but not useful for policy design.

If we’re trying to figure out how to prevent obesity from a society-level view, then I agree with your overall point – it’s not tractable to increase everyone’s temperance. But as I understand it, one finding from positive psychology research is that taking responsibility for your decisions and actions does genuinely improve your mental health. And you don’t need to be rich or have a PhD in nutritional biochemistry to eat mostly grains, beans, frozen ve...

1Declan Molony
From an individual-level analysis, I agree that adopting personal responsibility is the way to go. The problem is that it doesn't seem to work en masse.  (I added the following to the main text of this post.)
Answer by RomanHauksson

I would be interested in this!

Related: an organization called Sage maintains a variety of calibration training tools.

How long does the Elta MD sunscreen last?

2rosiecam
It says to reapply every two hours but I... do not do that 😅 I put it on in the morning and would reapply if I was spending time outside. I don't know how important the "every 2 hours" thing is

Having kids does mean less time to help AI go well, so maybe it’s not so much of a good idea if you’re one of the people doing alignment work.

6Gunnar_Zarncke
This argument works against anything you could do besides AI work and thus has to be considered in that wider frame. Going to the gym also means less time to help AI go well. Building a house. Watching Netflix. Some of these are longer time investments and some shorter, but the question still remains. First answer the question of how much effort you want to invest into helping AI go well vs. all the other things you could do, and then consider the fraction for children.
6Viliam
Perhaps people who can't contribute to AI alignment directly could help indirectly by providing free babysitting for the people working on AI alignment?

I love how it has proven essentially impossible to, even with essentially unlimited power, rig a vote in a non-obvious way. I am not saying it never happens deniably, and you may not like it, but this is what peak rigged election somehow always seems to actually look like.

(Maybe I misunderstood, but isn’t this weak evidence that non-obviously rigging an election is essentially impossible, since you wouldn’t notice the non-obvious examples?)
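In Bayes' terms, the selection effect can be put roughly as follows (the event names are my own shorthand):

$$\frac{P(\text{no covert rigging observed} \mid \text{covert rigging common})}{P(\text{no covert rigging observed} \mid \text{covert rigging rare})} \approx 1$$

Both hypotheses predict that we only ever see the clumsy, obvious cases, so the observation barely moves the posterior.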

Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.

  • Reviewing the literature on intellectual progress, metascience, and social epistemology and applying the resulting insights to this community
  • Funding the development of experimental “epistemology software”, like Arbital or Mathopedia

The classic one is Lightcone Infrastructure, the team that runs LessWrong and the Alignment Forum.

I'll end with this thought: I think you can probably use these ideas of moral weights and moral mountains to quantify how altruistic someone is.

Maybe “altruistic” isn’t the right word. Someone who spends every weekend volunteering at the local homeless shelter out of a duty to help the needy in their community but doesn’t feel any specific obligation towards the poor in other areas is certainly very altruistic. The amount that one does to help those in their circle of consideration seems to be a better fit for most uses of the word altruism.

How about “morally inclusive”?

3Adam Zerner
I dunno. I feel a little uncertain here about whether "altruistic" is the right word. Good point. My current thinking: there are two distinct concepts.

1. The moral weights you assign. What your "moral mountain" looks like.
2. The actions you take. Do you donate 10% of your income? Volunteer at homeless shelters?

To illustrate the difference, consider someone who has a crowded moral mountain, but doesn't actually take altruistic actions. At first glance this might seem inconsistent or contradictory. But I think there could be various legitimate reasons for this.

* One possibility is simply akrasia.
* Another is if they don't have a lot of "moral luck". For example, I personally am pretty privileged and stuff, so it wouldn't be very costly for me to volunteer at a homeless shelter. But for someone who works two jobs and has kids, it'd be much more costly. And so we could each assign the exact same moral weights but take very different actions.
* The shape of the mountain. For example, Longtermist Lauren might not donate any money or help her friends move, but she takes part in various "swing for the fences" types of projects that will almost certainly fail, but that would have an enormous impact on the world and its future if they succeed.

All of that said, (1) feels like a better fit for the term "altruistic" than (2). And really, (2) doesn't feel to me like it should be incorporated at all. But I'm not really sure.

I would find this deeply frustrating. Glad they fixed it!

1Ustice
Pretty on-track, I think. While I have seen some toy examples entirely written by an AI system, I'm not seeing complex software yet. More importantly, even in the examples that I have seen, it's still being directed by a software engineer. That said, Copilot can now do a better job of analyzing and refactoring code. I still don't think that an AI system will be able to replace me in 3 years. I have seen some project setup examples that were fairly impressive, but still, most of it is just saving typing. That said, those savings are likely to become significant.

Documentation alone will be a big change. Right now documentation is hard. It takes time to write, and worse, it all too easily becomes outdated as time goes on, so it's often neglected. I expect that documentation will become a lot easier to write and maintain, as developers can just approve it as part of code review, along with corrections.

I expect that within the next year, testing will largely be written automatically. That will be amazing, as writing tests is tedious but essential. This is just the sort of task that an AI assistant is perfect for, because tests are fairly self-contained.

What I hope is that within three years, I'll be working similarly to when I'm pair-programming with another developer where they are mostly driving. Sort of a more conversational interface too. Within that time, I still expect that it will take a human to translate business requirements into the proper abstractions and tasks, but tasks that used to take a week or so to write and test will become doable in an afternoon. Basically I feel like my productivity will be multiplied by about an order of magnitude in three years.

While this might mean that some teams will be smaller, I expect that there will be much higher demand, as smaller businesses will be able to have a full software development team made up of just a few people. Mostly though I just think that releases will be more frequent, as...
Answer by RomanHauksson

I’m a huge fan of agree/disagree voting. I think it’s an excellent example of a social media feature that nudges users towards truth, and I’d be excited to see more features like it.

1Three-Monkey Mind
I'd like to like this more but I don't have a clear idea of when to up one, up the other, down one, down the other, or down one and up the other.
3Seth Herd
It seems like it would be useful to have it for top-level posts. I love disagree voting, and there are sometimes massive disparities between upvotes and agreement votes that show how useful it is in surfacing good arguments that are controversial. I think I'm seeing some high-effort, topical, and well-researched top-level posts die on the vine because of controversial takes that are probably drawing disagree votes. This is not a complaint about my own posts sometimes dying; I've been watching others' posts with this hypothesis, and it fits.
2TekhneMakre
I guess there's a reason for not having it on top-level posts, but I miss having it on top-level posts.
niplav

I also enjoy the reacts way more than I expected! They feel aesthetically at home here, especially with reacts for specific parts of the text.

Answer by RomanHauksson

(low confidence, low context, just an intuition)

I feel as though the LessWrong team should experiment with even more new features, treating the project of maintaining a platform for collective truth-seeking like a tech startup. The design space for such a platform is huge (especially as LLMs get better).

From my understanding, the strategy that startups use to navigate huge design spaces is “iterate on features quickly and observe objective measures of feedback”, which I suspect LessWrong should lean into more. Although, I imagine creating better truth-seeking infrastructure doesn’t have as good of a feedback signal as “acquire more paying users” or “get another round of VC funding”.

2Roman Leventov
Agreed! Cf. Proposal for improving the global online discourse through personalised comment ordering on all websites -- using LessWrong as the incubator for the first version of the proposed model would actually be critical.
9Ruby
This is basically what we do, capped by our team capacity. For most of the last ~2 years, we had ~4 people working full-time on LessWrong plus shared stuff we get from EA Forum team. Since the last few months, we reallocated people from elsewhere in the org and are at ~6 people, though several are newer to working on code. So pretty small startup. Dialogues has been the big focus of late (plus behind the scenes performance optimizations and code infrastructure). All that to say, we could do more with more money and people. If you know skilled developers willing to live in the Berkeley area, please let us know!

This is really exciting. I’m surprised you’re the first person to spearhead a platform like this. Thank you!

I wonder if you could use a dominant assurance contract to raise money for retroactive public goods funding.
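Sketching why the mechanism is appealing here: in a dominant assurance contract, the entrepreneur pays pledgers a bonus if the funding threshold fails, which makes pledging a weakly dominant strategy. Below is a toy payoff model; the numbers and the function name are made up for illustration, not any existing platform's implementation.

```python
# Toy model of a dominant assurance contract's payoff structure.
# Illustrative only: numbers and names are hypothetical.

def contributor_payoff(pledge: float, bonus: float, threshold_met: bool,
                       value_of_good: float) -> float:
    """Net outcome for one contributor.

    If the funding threshold is met, the public good is produced and the
    pledge is spent. If it fails, the entrepreneur refunds the pledge
    *plus* a bonus, so pledging weakly dominates not pledging for anyone
    who values the good at more than their pledge.
    """
    if threshold_met:
        return value_of_good - pledge
    return bonus  # refund cancels out the pledge; the bonus is pure gain

# A contributor who values the public good at $150 and pledges $100:
print(contributor_payoff(100, 5, threshold_met=True, value_of_good=150))   # 50 if funded
print(contributor_payoff(100, 5, threshold_met=False, value_of_good=150))  # 5 if not funded
```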

A research team's ability to design a robust corporate structure doesn't necessarily predict their ability to solve a hard technical problem. Maybe there's some overlap, but machine learning and philosophy are different fields than business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (but this might be wrong).

Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse are higher than other places on the internet, so quips usually aren't well-tolerated (even if they have some element of truth).

I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?

This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.

I don't want "humanism" to be taken too strictly, but I honestly think that anything that is worth passing the torch to...
2dr_s
You're talking about this as if it were a matter of science and discovery. I'm not a moral realist, so to me that doesn't compute. We don't discover what constitutes moral worth; we decide it. The only discovery involved here may be self-discovery. We could have moral instincts and then introspect to figure out more straightforwardly what they map to precisely. But deciding to follow our moral instincts at all is as arbitrary a call as any other.

As I said, the only situation in which this would be true for me is IMO if either humans voluntarily just stop having children (e.g. they see the artificial beings as having happier lives and thus would rather raise one of them than an organic child) or conditions get so harsh that it's impossible for organic beings to keep existing and artificial ones are the only hope (e.g. Earth about to get wiped out by the expanding Sun, we don't have enough energy to send away a working colony ship with a self-sustaining population but we CAN send small and light Von Neumann interstellar probes full of AIs of the sort we deeply care about).

I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.

"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agent... (read more)

2dr_s
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence? If you gave me a choice between two futures, one with humans reasonably thriving for a few more thousand years and then going extinct, and the other with human-made robo-Hitler eating the galaxy, I'd pick the first without hesitation. I'd rather we leave no legacy at all than create literal cosmic cancer, sentient or not. I don't want "humanism" to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn't require us passing any torch at all and could just coexist with us, unless it was a desperate situation in which it's simply become impossible for organic beings to survive and then the synthetics truly are our only realistic chance at leaving a legacy behind. Otherwise, all that would happen is that we'll live together and then if replacement happens it'll barely be noticeable as it does.
2[comment deleted]

Others have provided sound general advice that I agree with, but I’ll also throw in the suggestion of piracetam for a nootropic with non-temporary effects.

2Tomás B.
Any evidence for it working? I seriously doubt it.

I've also been thinking a lot about this recently and haven't seen any explicit discussion of it. It's the reason I recently began going through BlueDot Impact's AI Governance course.

A couple questions, if you happen to know:

  • Is there anywhere else I can find discussion about what the transition to a post-superhuman-level-AI society might look like, on an object level? I also saw the FLI Worldbuilding Contest.
  • What are the implications of this for career choice for an early-career EA trying to make this transition go well?

Manifold.love is in alpha, and the MVP should be released in the next week or so. On this platform, people can bet on the odds that pairs of users will enter into at least a 6-month relationship.

I suspect this was written by ChatGPT. It doesn’t say anything meaningful about applying Bayes’ theorem to memory techniques.

Microsolidarity

Microsolidarity is a community-building practice. We're weaving the social fabric that underpins shared infrastructure.

The first objective of microsolidarity is to create structures for belonging. We are stitching new kinship networks to shift us out of isolated individualism into a more connected way of being. Why? Because belonging is a superpower: we’re more courageous & creative when we "find our people".

The second objective is to support people into meaningful work. This is very broadly defined: you decide what is meaningful to you...

You don't even necessarily do it on purpose; sometimes entire groups simply drift into doing it as a result of trying to one-up each other in sounding legitimate and serious (hello, academic writing).

Yeah, I suspect some intellectual groups write like this for that reason: not actively trying to trick people into thinking it's more profound than it is, but a slow creep into too much jargon. Like a frog in boiling water.

Then, when I look at their writing, it seems needlessly unintelligible to me, even when it's writing designed for a newcomer. How do they not realize this? Maybe the water just feels warm to them.

When the human tendency to detect patterns goes too far

And, apophenia might make you more susceptible to what researchers call ‘pseudo-profound bullshit’: meaningless statements designed to appear profound. Timothy Bainbridge, a postdoc at the University of Melbourne, gives an example: ‘Wholeness quiets infinite phenomena.’ It’s a syntactically correct but vague and ultimately meaningless sentence. Bainbridge considers belief in pseudo-profound bullshit a particular instance of apophenia. To find it significant, one has to perceive a pattern in something

...

Np! I actually did read it and thought it was high-quality and useful. Thanks for investigating this question :)

2niplav
I found this comment both unhelpful and anti-scholarship, and have therefore strong-downvoted.

From Pluriverse:

A viable future requires thinking-feeling beyond a neutral technocratic position, averting the catastrophic metacrisis, avoiding dystopian solutionism, and dreaming acutely into the techno-imaginative dependencies to come.

How do you decide which writings to convert to animations?

4Writer
Up until recently, with a big spreadsheet and guesses about these metrics:

- Expected impact
- Expected popularity
- Ease of adaptation (for external material)

The next few videos will still be chosen in this way, but we're drafting some documents to be more deliberate. In particular, we now have a list of topics to prioritize within AI Safety, especially because sometimes they build on each other.
2avturchin
Both these questions have too short a time horizon: half a year. The real question is whether an H5N1 pandemic will happen in the next 5-10 years, that is, before strong AI. If we extrapolate 2.4% per half-year to the next 10 years, it will be around 50%, which is much less comfortable.
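A quick check of that extrapolation, assuming an independent 2.4% risk in each of the 20 half-year periods:

$$20 \times 2.4\% = 48\% \text{ (linear)}, \qquad 1 - (1 - 0.024)^{20} \approx 38\% \text{ (compounded)}$$

Either way, the 10-year risk comes out at tens of percent rather than single digits.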
2Q Home
What exactly, in our rational consideration, keeps the risk relatively low? Is it a prior that calamity-level pandemics happen rarely? Is it the fact (?) that today's situation is not that unique? Is it the hope that the virus can "back down", somehow? Is it some fact about the general behavior of viruses? What are the "cruxes" of the "the risk is relatively low" prediction, what events would increase/decrease the risk, and by how much? For example, what happens to the probability if a lot of mammal-to-mammal transmissions start happening? Maybe I've missed it, but Zvi doesn't seem to address such points.

I feel utterly confused. As if I'm missing an obvious piece of context which "nobody is talking about". I have little knowledge about viruses. How unique is it for a virus to be deadly (and already a deadly threat for humans), epizootic (epidemic in non-humans) and panzootic (affecting animals of many species, especially over a wide area)? (From the Wikipedia article.)

The most naive, over-reactive and highly likely misinformed take would be: "we are in a unique situation in history (in terms of viruses), more unique than the Spanish flu and the Black Death, because the latter weren't (?) widespread among non-humans. there are some dice rolls which separate us from disaster, but all possible dice rolls are now happening daily for days and months (and years)." ... What makes all the factors cash out into "anyway, the risk is relatively low, just one digit"?

Here's an analogy: from a naive outside perspective, H5N1's "progress" may seem as impressive as ChatGPT. "This never (?) happened, but suddenly it happened and from this point on things can only escalate (probably)" - I guess for an outsider it's easy to get an impression like this. I feel confused because I'm not seeing it directly addressed.

I was also disappointed to read Zvi's take on fruit fly simulations. "Figuring out how to produce a bunch of hedonium" is not an obviously stupid endeavor to me and seems completely neglected. Does anyone know if there are any organizations with this explicit goal? The closest ones I can think of are the Qualia Research Institute and the Sentience Institute, but I only know about them because they're connected to the EA space, so I'm probably missing some.

Zvi

I acknowledge that there are people who think this is an actually good idea rather than an indication of a conceptual error in need of fixing, and I've had some of them seek funding from places where I'm making decisions and it's super weird. It definitely increases my worry that we will end up in a universe with no value.

Answer by RomanHauksson

You can browse the "Practical" tag to find posts which are directly useful. Here are some of my favorites:

  • Lukeprog's The Science of Winning at Life sequence summarizes scientifically-backed advice for "winning" at everyday life: in productivity, relationships, emotions, etc. Not exaggerating, it is close to the most useful piece of media I have ever consumed. I especially recommend the first post Scientific Self Help: The State of our Knowledge, which transformed my perception of where I should look to learn how to improve my life.
  • After reading Scientific...
1agrippa
I found the stuff about relationship success in Luke's first post here to be useful! thanks
3agrippa
Ok, this kind of tag is exactly what I was asking about. I'll have a look at these posts.

I see. Maybe you could address it towards "DAIR, and related, researchers"? I know that's a clunkier name for the group you're trying to describe, but I don't think more succinct wording is worth progressing towards a tribal dynamic between researchers who care about X-risk and S-risk and those who care about less extreme risks.

I don't think it's a good idea to frame this as "AI ethicists vs. AI notkilleveryoneists", as if anyone that cares about issues related to the development of powerful AI has to choose to only care about existential risk or only other issues. I think this framing unnecessarily excludes AI ethicists from the alignment field, which is unfortunate and counterproductive since they're otherwise aligned with the broader idea of "AI is going to be a massive force for societal change and we should make sure it goes well".

Suggestion: instead of addressing "AI ethicists" or "AI ethicists of the DAIR / Stochastic Parrots school of thought", why not address "AI X-risk skeptics"?

1Peter Berggren
I've seen plenty of AI x-risk skeptics present their object-level argument, and I'm not interested in paying out a bounty for stuff I already have. I'm most interested in the arguments from this specific school of thought, and that's why I'm offering the terms I offer.

Does anyone know whether added sugar is bad for you if you ignore the following points?

  1. It spikes your blood sugar quickly (it has a high glycemic index)
  2. It doesn't have any nutrients, but it does have calories
  3. It does not make you feel full, so it makes it easier to eat more calories, and
  4. It increases tooth decay.

I'm asking because I'm trying to figure out what carbohydrate-dense foods to eat when I'm bulking. I find it difficult to cram in enough calories per day, so most of my calories come from fat and protein at the moment. I'm not getting enough carbs. B...

1contrathinking
I used to consume thousands of calories of pure sugar (mixed with water) on long gym days. I did this in line with recommendations for athletes that simpler carbohydrates are better for endurance activity, for their ease of digestion, and because the calories were used to fuel exercise as opposed to being converted to excess adipose tissue. Cake is typically 'dirty' in my opinion because regular consumption of cake tends not to be correlated with a healthful diet and because the calories that cake takes up can push out more nutrient-dense foods. But I don't think cake, or most 'foods,' are bad per se, only insofar as they contribute to a dietary pattern that is lacking in nutrients. If you're bulking and are adequately meeting nutrition targets, then eating calorically dense foods is, I think, neutral wrt health, though eating lots of fatty nuts might be more healthful. Lmk if studies for any of the above claims would be helpful, and for a less evidence-based example, I think of Michael Phelps eating lots of candy and 'unhealthy' foods when training.
4Benquo
Ascorbic acid seems to be involved in carbohydrate metabolism, or at least in glucose metabolism, which may be why the small amounts of vitamin C in an all-meat diet seem to be sufficient to avoid scurvy - negligible carbohydrate intake means a reduced vitamin C requirement. Both raw unfiltered honey and fruits seem like they don't cause the kind of metabolic derangement attributed to foods high in refined carbohydrates like refined grains and sugar. Empirically, high-carbohydrate foods in the ancestral diet are usually high in vitamin C. Honey seems like an exception, but there might be other poorly understood micronutrients in it that help as well. So it seems probable but not certain that taking in a lot of carbohydrates without a corresponding increase in vitamin C (and/or possibly other micronutrients they tend to come with in fresh fruit) could lead to problems.

Seeds (including grains) also tend to have high concentrations of antinutrients, plant defense chemicals, and hard-to-digest or allergenic proteins (these are not mutually exclusive categories), so it might be problematic in the long run to get a large percentage of your calories from cake for that reason. Additionally, some B vitamins like thiamine are important for carbohydrate metabolism, so if your sponge cake is not made from a fortified flour, you may want to take a B vitamin supplement.

Finally, sponge cake can be made with or without a variety of adulterants and preservatives, and with higher-quality or lower-quality fats. There is some reason to believe that seed and vegetable oils are particularly prone to oxidation and may activate torporific pathways causing lower energy and favoring accumulation of body fat over other uses for your caloric intake, but I haven't investigated enough to be confident that this is true.

I wouldn't recommend worrying about glycemic index, as it's not clear high glycemic index causes problems. If your metabolism isn't disordered, your pancreas should be ab...
2RHollerith
There is no consensus on the cause of metabolic syndrome (which is responsible for great amounts of cardiovascular disease and cognitive decline), but some experts like UCSF professor Robert Lustig, MD, believe that the main cause is fructose in the diet. Table sugar is half fructose, and about half of the carbs in most fruits and vegetables are also fructose (with apples and pears being about 70% fructose and cherries being about 30%). Cultures that have traditionally relied heavily on carbs, e.g., East Asia, traditionally get almost all of their carbs from starchy foods that contain zero fructose. Also, fructose is about 7 times worse than glucose at producing advanced glycation end products (AGEs).
2ChristianKl
Why do you believe that you need calories from carbs to bulk? I personally created a mix that might be interesting for you as well, which has a lot of calories but isn't very filling: 300ml water + 30ml Liquid Aminoacids + 30ml peanut oil + ~5ml honey + one spoon of pulverized beetroot powder. Theoretically, it makes a lot of sense to me that consuming amino acids is less filling than consuming proteins, because the body doesn't need to do work to break them down. That also seems to match my experience that I can easily drink it after another meal. Generally, consuming oil directly doesn't taste good and amino acids directly also don't taste very good, but mixing them together tastes a lot better.

They meant a physical book (as opposed to an e-book) that is fiction.

I've also reflected on "microhabits" – I agree that the epistemics are tricky: maintaining a habit even when you can't observe causal evidence that it's beneficial. I'll implement a habit if I've read some of the evidence and think it's worth the cost, even if I don't observe any effect in myself. Unfortunately, that's the same mistake homeopaths make.

I'm motivated to follow microhabits mostly out of faith that they have some latent effects, but also out of a subconscious desire to uphold my identity, like what James Clear talks about in Atomic H...

I find it interesting that all but one toy is a transportation device or a model thereof.

Regardless of whether the lack of these kinds of studies is justified, I think you shouldn't automatically assume that "virology is unreasonable" or that "there's something wrong with virologists". The fact that you're asking why the lack exists means there's something you don't know about virology, so your prior should be that it's justified, similar to Chesterton's Fence.

5ChristianKl
If a field gets an important question wrong, then there is something wrong with it. People died because virologists got it wrong.  I think it's reasonable to measure a field by its outcomes and not by its justifications.  The virologists not only created a dangerous virus that caused a pandemic, but they also failed to provide useful knowledge about virus transmission. 

I also don't particularly like the hedonic gradient of pushing yourself to run at the volume and frequency that seems necessary to really git gud

What do you mean by "hedonic gradient" in this context?

3romeostevensit
There are various psychological pushes to run more in order to run better and faster, but this pushes against your recovery abilities pretty quickly such that much of serious running is about managing recovery. This isn't great if it isn't a primary hobby.

For those of us who don't know where to start (like me), I also recommend checking out the wiki from r/malefashionadvice or r/femalefashionadvice.

Related: Wisdolia is a Chrome extension which automatically generates Anki flashcards based on the content of a webpage you're on.

That's a good point. I conflated Moravec's Paradox with the observation that so far, it seems as though cognitive tasks will be automated more quickly than physical tasks.

We take tending the garden seriously

Ironic typo: the link includes the preceding space.

Suppose a family values the positive effects that screening would have on their child at $30,000, but in their area, it would cost them $50,000. Them paying for it anyway would be like "donating" $20,000 towards the moral imperative that you propose. But would that really be the best counterfactual use of the money? E.g. donating it instead to the Against Malaria Foundation would save 4-5 lives in expectation.[1] Maybe it would be worth it at $10,000? $5,000?

Although, this doesn't take into account the idea that an additional person doing polygenic sc...
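For concreteness, the arithmetic behind that 4-5 lives figure, assuming a cost per life saved in the neighborhood of GiveWell's estimates for AMF (the exact number varies by year):

$$\frac{\$50{,}000 - \$30{,}000}{\approx \$4{,}500 \text{ per life saved}} \approx 4.4 \text{ lives}$$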

3Mart_Korz
I mostly agree with this perspective with regard to the "moral imperative". But apart from that, it seems to me that a good case can be made if we use personal health spending as a reference class. Even if we only consider currently achievable DALY gains, it is quite notable that we have a method to gain several healthy life-years for a price of maybe $20,000/healthy year (and actually these gains should even be heritable themselves!). I do not know the numbers for common health interventions, but this should already be somewhat comparable.

Update, a quick estimate: US per capita health spending in 2019 was $11,582 according to the CDC. If US health spending doubles life expectancy compared to having no health system, this is comparable to $20,000/healthy year.
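Spelling that estimate out: if spending $11,582 per year is what doubles life expectancy, then each healthy year gained comes bundled with roughly two years of spending,

$$2 \times \$11{,}582 \approx \$23{,}000 \text{ per healthy year gained,}$$

which is indeed in the same ballpark as the $20,000/healthy year figure above.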

I agree. Maybe it's time to repost The Best Textbooks on Every Subject again? Many of the topics I want to self-study I haven't found recommendations for in that thread. Or maybe we should create a public database of textbook recommendations instead of maintaining an old forum post.

3Raemon
Someone's idea today was to make a "Best of X" tag for posts that specifically recommend a product after the author has tried at least two other products in that reference class (of which textbooks are one existing example). People would then be encouraged to make top-level posts in this genre, which are easier to search.

Just curious: what motivated the transition?

RobertM

Some combination of:

  • tech debt from design decisions which made sense when rebooting as LW 2.0, but have become increasingly unwieldy as development's continued
    • strictly speaking there were options here that weren't moving off of mongo, but it'd be much more difficult to make sane design choices with mongo involved.  the fact that it can do a thing doesn't mean it does that thing well.
    • mongo's indexing in particular is quite bad, both in terms of how finicky mongo is about whether it can figure out how to use an index for a given query... and then also...

Prioritizing subjects to self-study (advice wanted)

I plan to do some self-studying in my free time over the summer, on topics I would describe as "most useful to know in the pursuit of making the technological singularity go well". Obviously, this includes technical topics within AI alignment, but I've been itching to learn a broad range of subjects to make better decisions about, for example, what position I should work in to have the most counterfactual impact or what research agendas are most promising. I believe this is important because I aim to event...

2niplav
Some advice (with less justification): Pick one (1) technical subject[1]. Read the textbook carefully (maybe take notes). Do all the exercises (or at least try to spend >20 minutes on exercises you can't solve). Potentially make flashcards. Study those flashcards. Do the real thing.[2]

I regret having spent so much time reading philosophy, and not learning technical subjects. I have gained remarkably little from "learning how to learn" (except the stuff above) or productivity or epistemology (excluding forecasting)[3]. I remember reading about a heuristic (might've been on Gwern's site, but I can't find it right now): spend 90% of your time on object-level stuff, 9% of your time on meta stuff, 0.9% of your time on meta-meta stuff, and so on.

Learning forecasting is great. Best learned by doing a thousand forecasts (flows through to probability theory).

[1] I think linear algebra, causal inference or artificial intelligence are good candidates. I am unsure about game theory; it's been useful only in metaphors in my own life - too brittle and dependent on initial conditions. But in general anything where you can do exercises (so most things from 6.) and have them be wrong or right is good (so stuff like coding is better than math, because checking a proof depends on knowing what a good proof looks like).

[2] I predict you won't finish the textbook. No problem.

[3] I think I learned more from a course on social choice theory than from all the philosophy from before 1950 that I have read.
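As a check on the 90/9/0.9 heuristic above: the allocations form a geometric series that exactly exhausts the time budget,

$$0.9 + 0.09 + 0.009 + \dots = \sum_{k=1}^{\infty} 0.9 \times 0.1^{\,k-1} = \frac{0.9}{1 - 0.1} = 1.$$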