I think the personal responsibility mindset is healthy for individuals, but not useful for policy design.
If we’re trying to figure out how to prevent obesity from a society-level view, then I agree with your overall point – it’s not tractable to increase everyone’s temperance. But as I understand it, one finding from positive psychology research is that taking responsibility for your decisions and actions does genuinely improve your mental health. And you don’t need to be rich or have a PhD in nutritional biochemistry to eat mostly grains, beans, frozen ve...
I would be interested in this!
Related: an organization called Sage maintains a variety of calibration training tools.
How long does the Elta MD sunscreen last?
Having kids does mean less time to help AI go well, so maybe it’s not such a good idea if you’re one of the people doing alignment work.
I love how it has proven essentially impossible, even with nearly unlimited power, to rig a vote in a non-obvious way. I am not saying it never happens deniably, and you may not like it, but this is what peak rigged election somehow always seems to actually look like.
(Maybe I misunderstood, but isn’t this only weak evidence that non-obviously rigging an election is essentially impossible, since you wouldn’t notice the non-obvious examples?)
Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.
The classic one is Lightcone Infrastructure, the team that runs LessWrong and the Alignment Forum.
I'll end with this thought: I think you can probably use these ideas of moral weights and moral mountains to quantify how altruistic someone is.
Maybe “altruistic” isn’t the right word. Someone who spends every weekend volunteering at the local homeless shelter out of a duty to help the needy in their community but doesn’t feel any specific obligation towards the poor in other areas is certainly very altruistic. The amount that one does to help those in their circle of consideration seems to be a better fit for most uses of the word altruism.
How about “morally inclusive”?
I would find this deeply frustrating. Glad they fixed it!
One year later, what do you think about the field now?
I’m a huge fan of agree/disagree voting. I think it’s an excellent example of a social media feature that nudges users towards truth, and I’d be excited to see more features like it.
I also enjoy the reacts way more than I expected! They feel aesthetically at home here, especially with reacts for specific parts of the text.
(low confidence, low context, just an intuition)
I feel as though the LessWrong team should experiment with even more new features, treating the project of maintaining a platform for collective truth-seeking like a tech startup. The design space for such a platform is huge (especially as LLMs get better).
From my understanding, the strategy that startups use to navigate huge design spaces is “iterate on features quickly and observe objective measures of feedback”, which I suspect LessWrong should lean into more. Although, I imagine creating better truth-seeking infrastructure doesn’t have as good a feedback signal as “acquire more paying users” or “get another round of VC funding”.
This is really exciting. I’m surprised you’re the first person to spearhead a platform like this. Thank you!
I wonder if you could use a dominant assurance contract to raise money for retroactive public goods funding.
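For concreteness, here’s a minimal sketch of how the settlement logic of a dominant assurance contract might look, assuming pledges are held in escrow. The function name, data shapes, and flat bonus scheme are my own illustration, not a reference to any existing platform:

```python
# Dominant assurance contract: if the funding threshold isn't met,
# backers get their pledge back PLUS a bonus paid by the entrepreneur,
# which is what makes pledging a dominant strategy.

def settle(pledges: dict[str, float], threshold: float, bonus: float) -> dict[str, float]:
    """Return the amount refunded to each backer from escrow."""
    total = sum(pledges.values())
    if total >= threshold:
        # Funded: escrow goes to the project (here, a retroactive funding pool).
        return {backer: 0.0 for backer in pledges}
    # Not funded: full refund plus the failure bonus.
    return {backer: amount + bonus for backer, amount in pledges.items()}

# Example: threshold not met, so each backer is made whole plus a $10 bonus.
print(settle({"alice": 50.0, "bob": 30.0}, threshold=100.0, bonus=10.0))
# -> {'alice': 60.0, 'bob': 40.0}
```

The bonus is what distinguishes this from an ordinary assurance contract (e.g. Kickstarter): even a backer who expects the campaign to fail is weakly better off pledging.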
A research team's ability to design a robust corporate structure doesn't necessarily predict its ability to solve a hard technical problem. Maybe there's some overlap, but machine learning and philosophy are different fields from business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (but this might be wrong).
Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse are higher here than in other places on the internet, so quips usually aren't well-tolerated (even if they have some element of truth).
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?
This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.
...I don't want "humanism" to be taken too strictly, but I honestly think that anything that is worth pas
I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.
"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agent...
Others have provided sound general advice that I agree with, but I’ll also throw in the suggestion of piracetam for a nootropic with non-temporary effects.
7 months later, from Business Insider: Silicon Valley elites are pushing a controversial new philosophy.
I've also been thinking a lot about this recently and haven't seen any explicit discussion of it. It's the reason I recently began going through BlueDot Impact's AI Governance course.
A couple questions, if you happen to know:
Manifold.love is in alpha, and the MVP should be released in the next week or so. On the platform, people can bet on the odds that a given pair of users will enter into a relationship lasting at least 6 months.
I suspect this was written by ChatGPT. It doesn’t say anything meaningful about applying Bayes’ theorem to memory techniques.
...Microsolidarity is a community-building practice. We're weaving the social fabric that underpins shared infrastructure.
The first objective of microsolidarity is to create structures for belonging. We are stitching new kinship networks to shift us out of isolated individualism into a more connected way of being. Why? Because belonging is a superpower: we’re more courageous & creative when we "find our people".
The second objective is to support people into meaningful work. This is very broadly defined: you decide what is meaningful to yo
You don't even necessarily do it on purpose; sometimes entire groups simply drift into doing it as a result of trying to one-up each other in sounding legitimate and serious (hello, academic writing).
Yeah, I suspect some intellectual groups write like this for that reason: not actively trying to trick people into thinking it's more profound than it is, but a slow creep into too much jargon. Like a frog in boiling water.
Then, when I look at their writing, it seems needlessly unintelligible to me, even when it's writing designed for a newcomer. How do they not realize this? Maybe the water just feels warm to them.
When the human tendency to detect patterns goes too far
...And, apophenia might make you more susceptible to what researchers call ‘pseudo-profound bullshit’: meaningless statements designed to appear profound. Timothy Bainbridge, a postdoc at the University of Melbourne, gives an example: ‘Wholeness quiets infinite phenomena.’ It’s a syntactically correct but vague and ultimately meaningless sentence. Bainbridge considers belief in pseudo-profound bullshit a particular instance of apophenia. To find it significant, one has to perceive a pattern in something
Np! I actually did read it and thought it was high-quality and useful. Thanks for investigating this question :)
Too long; didn’t read
From Pluriverse:
A viable future requires thinking-feeling beyond a neutral technocratic position, averting the catastrophic metacrisis, avoiding dystopian solutionism, and dreaming acutely into the techno-imaginative dependencies to come.
How do you decide which writings to convert to animations?
I was also disappointed to read Zvi's take on fruit fly simulations. "Figuring out how to produce a bunch of hedonium" is not an obviously stupid endeavor to me and seems completely neglected. Does anyone know if there are any organizations with this explicit goal? The closest ones I can think of are the Qualia Research Institute and the Sentience Institute, but I only know about them because they're connected to the EA space, so I'm probably missing some.
I acknowledge that there are people who think this is an actually good idea rather than an indication of a conceptual error in need of fixing, and I've had some of them seek funding from places where I'm making decisions and it's super weird. It definitely increases my worry that we will end up in a universe with no value.
You can browse the "Practical" tag to find posts which are directly useful. Here are some of my favorites:
I see. Maybe you could address it towards "DAIR, and related, researchers"? I know that's a clunkier name for the group you're trying to describe, but I don't think more succinct wording is worth progressing towards a tribal dynamic between researchers who care about X-risk and S-risk and those who care about less extreme risks.
I don't think it's a good idea to frame this as "AI ethicists vs. AI notkilleveryoneists", as if anyone who cares about issues related to the development of powerful AI has to choose between caring only about existential risk or only about other issues. I think this framing unnecessarily excludes AI ethicists from the alignment field, which is unfortunate and counterproductive since they're otherwise aligned with the broader idea of "AI is going to be a massive force for societal change and we should make sure it goes well".
Suggestion: instead of addressing "AI ethicists" or "AI ethicists of the DAIR / Stochastic Parrots school of thought", why not address "AI X-risk skeptics"?
Does anyone know whether added sugar is bad for you if you ignore the following points?
I'm asking because I'm trying to figure out what carbohydrate-dense foods to eat when I'm bulking. I find it difficult to cram in enough calories per day, so most of my calories come from fat and protein at the moment. I'm not getting enough carbs. B...
They meant a physical book (as opposed to an e-book) that is fiction.
I've also reflected on "microhabits" – I agree that the epistemics of maintaining a habit even when you can't observe causal evidence of its benefit are tricky. I'll implement a habit if I've read some of the evidence and think it's worth the cost, even if I don't observe any effect in myself. Unfortunately, that's the same mistake homeopaths make.
I'm motivated to follow microhabits mostly out of faith that they have some latent effects, but also out of a subconscious desire to uphold my identity, like what James Clear talks about in Atomic H...
I find it interesting that all but one toy is a transportation device or a model thereof.
Regardless of whether the lack of these kinds of studies is justified, I think you shouldn't automatically assume that "virology is unreasonable" or "there's something wrong with virologists". Given that you're asking why the lack exists, there's something you don't know about virology, and your prior should be that the lack is justified, similar to Chesterton's Fence.
I also don't particularly like the hedonic gradient of pushing yourself to run at the volume and frequency that seems necessary to really git gud
What do you mean by "hedonic gradient" in this context?
For those of us who don't know where to start (like me), I also recommend checking out the wiki from r/malefashionadvice or r/femalefashionadvice.
Related: Wisdolia is a Chrome extension which automatically generates Anki flashcards based on the content of a webpage you're on.
That's a good point. I conflated Moravec's Paradox with the observation that so far, it seems as though cognitive tasks will be automated more quickly than physical tasks.
Suppose a family values the positive effects that screening would have on their child at $30,000, but in their area, it would cost them $50,000. Them paying for it anyway would be like "donating" $20,000 towards the moral imperative that you propose. But would that really be the best counterfactual use of the money? E.g. donating it instead to the Against Malaria Foundation would save 4-5 lives in expectation.[1] Maybe it would be worth it at $10,000? $5,000?
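(Back-of-the-envelope for that figure, assuming GiveWell's rough estimate of about $4,500 per life saved via AMF; the per-life cost here is my assumption, not taken from the footnote:

$$\frac{\$20{,}000}{\$4{,}500 \text{ per life}} \approx 4.4 \text{ lives}$$

which is where the 4-5 lives in expectation comes from.)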
Although, this doesn't take into account the idea that an additional person doing polygenic sc...
I agree. Maybe it's time to repost The Best Textbooks on Every Subject again? I haven't found recommendations in that thread for many of the topics I want to self-study. Or maybe we should create a public database of textbook recommendations instead of maintaining an old forum post.
Just curious: what motivated the transition?
Some combination of:
I plan to do some self-studying in my free time over the summer, on topics I would describe as "most useful to know in the pursuit of making the technological singularity go well". Obviously, this includes technical topics within AI alignment, but I've been itching to learn a broad range of subjects to make better decisions about, for example, what position I should work in to have the most counterfactual impact or what research agendas are most promising. I believe this is important because I aim to event...
Thx!
Can you rephrase this? Having a hard time parsing this sentence.