
Noah Birnbaum

I'm a rising sophomore at the University of Chicago, where I co-run the EA group, founded the rationality group, and am studying philosophy and economics/cognitive science. I'm largely interested in formal epistemology, metaethics, formal ethics, and decision theory, with minor interests in a few other areas. I think LessWrong ideas are heavily underrated in academic philosophy, though I have some contentions. I also have a blog where I post about philosophy (and sometimes other things) here: https://substack.com/@irrationalitycommunity?utm_source=user-menu.

Posts

Sorted by New

A Talmudic Rationalist Cautionary Tale · 3mo · 11 karma · 2 comments
New UChicago Rationality Group · 8mo · 9 karma · 0 comments
Against AI As An Existential Risk [Ω] · 1y · 6 karma · 13 comments
New Blog Post Against AI Doom · 1y · 1 karma · 5 comments
Noah Birnbaum's Shortform · 1y · 3 karma · 15 comments
Funny Anecdote of Eliezer From His Sister · 1y · 208 karma · 6 comments
Rationality Club at UChicago · 2y · 6 karma · 2 comments

Comments

Sorted by Newest
[Stub] The problem with Chesterton's Fence
Noah Birnbaum · 1mo (1 karma, 0 agreement)

In a similar vein, I've always thought Chesterton's fence reasoning was a bit self-defeating, in that using Chesterton's fence as a conceptual tool is often, in itself, breaking the fence: people often do the traditional thing for cultural, familial, or religious reasons, not because they have reasoned about its function. While I understand it's a heuristic and this doesn't actually undermine the fence, this seems like an underrated point.

Noah Birnbaum's Shortform
Noah Birnbaum · 1mo (2 karma, 0 agreement)

I saw a good talk on the Manifest YouTube channel about using historical circumstances to calibrate predictions. This seems better for training than regular forecasting because you have a faster feedback loop between the prediction and the resolution.

I wanted to know if anyone has recommendations for software or a site where I can do more examples of this (I already know about the Estimation Game). I would do this myself, but it seems like it would be pretty difficult to research the situation without learning the outcome. I would also appreciate takes on why this might be a bad way to get better at forecasting.
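(In case anyone wants to hack together their own version while better tools don't exist, here is a minimal sketch in Python of what the scoring side could look like. The questions are made up for illustration, and the Brier score is just one standard calibration metric, not necessarily what the talk uses.)

```python
# Minimal sketch of a self-serve "historical calibration" drill.
# Assumes you can source (question, outcome) pairs without spoiling
# the answers for yourself; the examples below are illustrative.

QUESTIONS = [
    ("Did the Berlin Wall fall before 1990?", True),
    ("Did the UK ever adopt the euro?", False),
    ("Did Deep Blue win its 1996 match against Kasparov?", False),
]


def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)


def run_drill() -> None:
    forecasts = []
    for question, outcome in QUESTIONS:
        p = float(input(f"{question}\nP(yes), 0 to 1: "))
        forecasts.append((p, outcome))
        # The fast feedback loop: resolve immediately, not months later.
        print(f"  -> resolved {'YES' if outcome else 'NO'}\n")
    print(f"Brier score (lower is better; 0.25 = always saying 50%): "
          f"{brier_score(forecasts):.3f}")


if __name__ == "__main__":
    run_drill()
```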

The Intelligence Curse
Noah Birnbaum · 2mo (2 karma, 0 agreement)

Here's an argument against this view: yes, there is some cost associated with helping the citizens of a country, and the benefit shrinks as you become a rentier state. However, while the benefits go down and economic prosperity concentrates among the very few due to AGI, the cost of providing a good quality of life to everyone else in society also becomes dramatically cheaper. It is not clear that the rate at which the benefits diminish actually outpaces the rate at which the cost of helping people falls.

In response, one might say that regular people become totally obsolete with respect to productive efficiency, while the costs of helping them, though reduced, stay positive. However, this really depends on how you think human psychology works: while some people would turn on other humans the second they could, some will likely just keep being empathetic (perhaps this is merely a vestigial trait from the past, but that is irrelevant; the value exists now, and some people might be willing to pay some cost to keep it, even beyond their own lives). We have a similar situation in our own world: animals. People aren't motivated to care about animals for power reasons (they could do all the factory farming they wanted, and it would be cheaper for them), yet some still do. (I take it this is a vestigial trait of generalizing empathy to the abstract, but as stated, the explanation for how it came to be seems largely irrelevant.)

Because of how cheap it would be to actually help someone in this world, you may only need one or a few people to care a little bit about helping others for everyone to be better off. Given that we have plenty of vegans now (the analogue of empathetic-but-powerful people post-AGI), and depending on how low the cost of making lives happy gets (presumably the cost of making lives better is negatively correlated with the inequality of power, money, etc.), regular citizens might end up pretty alright on the other side.
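(A toy way to frame the crux, with symbols that are mine and purely illustrative: transfers to regular people remain feasible so long as a·W ≥ c·N, where a is the fraction of wealth the empathetic few are willing to give away, W is their post-AGI wealth, c is the cost of a good life per person, and N is the number of regular people. The argument above is that AGI plausibly drives c down at least as fast as it drives a down, and if W/N is astronomical, a only needs to be barely positive.)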

Curious what people think about this! 

 

Also, many of the links at the beginning (YouTube, World Bank, rentier states, etc.) don't work.

Noah Birnbaum's Shortform
Noah Birnbaum · 3mo (0 karma, 0 agreement)

Makes sense. Good clarification! 

Noah Birnbaum's Shortform
Noah Birnbaum · 3mo (17 karma, 0 agreement)

I think people should know that this exists: Sam Harris arguing, on the Big Think YouTube channel, that misaligned AI is an x-risk concern.

Value systematization: how values become coherent (and misaligned)
Noah Birnbaum · 3mo (1 karma, 0 agreement)

Only somewhat related, but you may enjoy my post here about meta-normative principles (and various issues that arise from each). 

A voting theory primer for rationalists
Noah Birnbaum · 4mo (1 karma, 0 agreement)

"Warren Smith has to my knowledge never managed to publish a paper in a peer-reviewed journal"

He did in 2023 (no hate; this article was published in 2018): https://link.springer.com/article/10.1007/s10602-023-09425-w.

nikola's Shortform
Noah Birnbaum · 4mo (3 karma, 2 agreement)

Caveat: A very relevant point to consider is how long you can take a leave of absence, since some universities allow you to do this indefinitely. Being able to pursue what you want/need while maintaining optionality seems Pareto better.

Noah Birnbaum's Shortform
Noah Birnbaum · 4mo (20 karma, 0 agreement)

US AISI will be 'gutted,' Axios reports: https://t.co/blQY9fGL1v. This should have been expected, I think, but it still seems worth sharing.

When you downvote, explain why
Noah Birnbaum · 5mo (1 karma, -7 agreement)

Downvoted because 1) I don't think people are too hesitant to downvote, and 2) I think explaining one's reasoning is good epistemic hygiene (downvoting without explaining is like booing when you hear an idea you don't like).
