A Chesterton-fence-style regret about the decline of religion, which I just thought of and haven't seen written down anywhere before.
Many of the countermeasures are allergies to specific things. If a Catholic ascends to some position and then fires non-Catholics and promotes Catholics, observers are prepared to notice this and argue that it violates freedom of religion, or that religion is being used for something it shouldn't be used for in civil society. But if someone whose 'religion' is environmentalism ascends to the same position and fires non-environmentalists and promotes environmentalists, the same allergies might not fire in response. I can't easily come up with a "freedom of X" phrase that captures not being allowed to fire someone because they're not an environmentalist. This means that if 'religion' morphs with the times, the defenses posed by 'freedom of religion' might not morph with it, and we might end up back in the bad state.
Eliezer Yudkowsky and Paul Graham have a lot in common. They're both well-known bloggers who write about rationality and are influential in Silicon Valley. They're both known for Bayesian stuff (Graham was a pioneer of Bayesian spam filtering). They both played a role in creating discussion sites which are, in my opinion, among the best on the internet (Less Wrong for Eliezer, Hacker News for Paul Graham). And they've both stopped posting to the sites they created, but they both still post to... Twitter, which is, in my opinion, one of the worst discussion sites on the internet. (Here is one of many illustrations.)
It seems like having so many celebrities, scientists, and politicians is a major asset for Twitter. What is it about Twitter which makes big names want to post there? How could a rival site attract big names without also importing Twitter's pathologies?
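As an aside, for readers who don't know the Bayesian spam filtering referenced above: below is a minimal, hypothetical sketch of the core idea (naive Bayes over word counts), written in the spirit of Graham's "A Plan for Spam" rather than as his exact algorithm. The toy corpora and function names are mine.

```python
from collections import Counter

# Minimal naive-Bayes spam score, a toy illustration (not Graham's
# exact algorithm). Tiny hand-made corpora stand in for real mail.
spam_docs = ["cheap pills buy now", "buy cheap now now"]
ham_docs = ["meeting notes attached", "lunch now or later"]

spam_words = Counter(w for d in spam_docs for w in d.split())
ham_words = Counter(w for d in ham_docs for w in d.split())

def word_spam_prob(w, smoothing=1.0):
    # P(spam | word) assuming equal priors, with add-one smoothing
    # so unseen words never yield exactly 0 or 1.
    s = spam_words[w] + smoothing
    h = ham_words[w] + smoothing
    return s / (s + h)

def spam_score(message):
    # Combine per-word probabilities under a naive independence
    # assumption by multiplying the odds ratios.
    odds = 1.0
    for w in message.split():
        p = word_spam_prob(w)
        odds *= p / (1 - p)
    return odds / (1 + odds)  # convert odds back to a probability

print(spam_score("buy cheap pills"))   # high
print(spam_score("meeting at lunch"))  # low
```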
Have any of these people said why they have made that choice?
I don't use Twitter, but one possibility might be that it actually isn't a discussion forum. It's a place for drive-by firing off of thoughts. For a prominent person, the function of a tweet is to say, "This is what I am thinking about at the moment," so as to invite conversation elsewhere with the people they already know and find worth talking to. This is far less time-consuming than an actual discussion forum, where it's expected that a post will be of more substantial length and that you will participate in the subsequent discussion.
I predict from this hypothesis that Eliezer makes hardly any replies on Twitter to replies to his tweets.
For someone who is selfish, or for the selfish part of one's moral parliament, a seemingly important but seldom-discussed concern is "modal immortality" (by analogy to "quantum immortality"), which suggests that if all possible worlds exist, one can't subjectively experience death as nothingness, and will instead experience things like being "resurrected" by a superintelligence in the future, or being "rescued" from outside of this simulated universe. Given modal immortality, and the possibility of influencing the relative likelihoods of various "life after death" experiences, a selfish person's long-term priorities have to include optimizing for these experiences through one's actions. But I don't recall seeing any discussion of this.
Concrete example of a problem: Consider cryonics, or even just getting one's genes sequenced. Doing either seemingly increases the chances (considered as first-person "anticipation" as opposed to third-person "measure") of being "resurrected" (in the same universe) over being "rescued" (from outside the universe). Is that good or bad?
Moral uncertainty suggests that we should spend some of our resources on these questions, as well as related philosophical problems.
...I joined a few months ago. I've been happily surprised at how all the comments I've received have been constructive, respectful and written in good faith. It's nice to meet you all.
Just joined up today. My name is Ben, 39, from the Hunter Valley in Australia - I grew up and lived most of my life in Sydney, a few hours away. I have a BA majoring in philosophy and political science and an honours in philosophy, which focused on complexity, coordination and reflection at the societal scale as envisaged by Jürgen Habermas and Niklas Luhmann. I also finished a Masters in Public Communication a couple of years ago.
My main reason for registering here is to discuss and learn about AI safety, which is what I wish to spend my life contributing to. I have ideas in this area and am keen to learn many more, refine my thinking and connect with others. Like many here, I believe AI safety deserves far more attention and resources than it currently receives, and am keen to be part of rectifying that.
So, hello all. I look forward to getting involved and becoming less wrong about some of the things which matter most :)
Do any AI safety researchers have little things they would like to get done, but don't have the time for?
I'm willing to help out for no pay.
I have a background in computer science and mathematics, and I have basic familiarity with AI alignment concepts. I can write code to help with ML experiments, and can help you summarize research or do literature reviews.
Email me at buck@intelligence.org with some more info about you and I might be able to give you some ideas (and we can maybe talk about things you could do for AI alignment more generally).
I'm interested to talk with people about your use of LW, what you get out of it, and changes you'd find helpful.
If you'd like to talk with me about your experience of the site, and let me ask you questions about it, book a conversation with me here: https://calendly.com/benapace. I'm currently available on Thursday mornings, US West Coast Time (Berkeley, California).
You can also find the link on my profile page.
There's a new book out, Game-Theoretic Foundations for Probability and Finance by Glenn Shafer and Vladimir Vovk. The idea is that perfect information games can replace measure theory as the basis of probability, and also provide a mathematical basis for finance.
I have their earlier book, which I reviewed on LessWrong. I don't have the new one, in which they claim more generalization, abstraction, and coherent footing as a result of 18 years of further development. They also claim their method for continuous time finance is better and easier to use than current practice.
Has anyone else read this? It's on my list, but it will be pretty far down, so I would welcome other opinions as to whether I should promote it.
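To give a flavor of the replace-measure-theory-with-games idea: here is a toy, hypothetical sketch (mine, not code from the book) of the basic Shafer-Vovk forecasting protocol, in which a Skeptic bets against quoted prices, and probability claims become statements that Skeptic has no strategy that multiplies his capital without bound.

```python
import random

# Toy sketch of Shafer & Vovk's basic forecasting game (my
# illustration, not code from the book). Each round: Forecaster
# quotes a price p for a ticket paying x in {0, 1}; Skeptic buys
# m tickets (m may be negative); Reality reveals x; Skeptic's
# capital changes by m * (x - p).
capital = 1.0
p = 0.5  # Forecaster's constant quoted price

for _ in range(10_000):
    m = 0.01 * capital           # Skeptic bets a small fraction on x = 1
    x = random.random() < p      # Reality plays randomly at the quoted price
    capital += m * (x - p)

# If Reality's long-run frequency matches the quoted price,
# Skeptic's capital stays bounded; if it didn't, some Skeptic
# strategy could grow capital without bound. Recasting
# "probability" as "no winning betting strategy" is the
# framework's core move.
print(capital)
```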
Uncertainty is mentally taxing, because one has to build, maintain, and use more mental models compared to someone who is more certain. One would think that makes it a good tool for signaling intelligence (and I think that at least partly motivates me to be conspicuously uncertain, such as here), but fortunately or unfortunately (I'm not sure which ;) I don't see many other people doing this.
Vipul Naik was talking about uncertainty as virtue signaling, which is related to but distinct from my point about uncertainty as intelligence signaling. (For the former, it's perhaps sufficient to pepper "not sure" throughout your writing, like Vipul was complaining about, but for the latter you have to demonstrate familiarity with and usage of a variety of theories/models that you haven't yet ruled out.)
I put together a new sequence, Rationalists on Meditation; figured it would be nice to have an easy way of finding most of the meditation discussion on the site. As the sequence description says: "A semi-curated list of LW writings on meditation, ranging from self-reports to theorizing."
Let me know if I missed anything that you think ought to be included.
Hi, my name is Jason and I live in Asheville, NC. My main goal is just getting better at thinking rationally and at having proper logical arguments with others. I am an atheist and I love discussing religion from my viewpoint of non-belief, and why I believe that religion is not only false but also harmful. I am not afraid to admit ignorance and I am always willing to change my viewpoint based on the available evidence. I just love to learn.
I have read the entire "My big TOE" book/trilogy by Thomas Campbell. If at least three people promise to read my summary of it, I'll write it up. Let me know how long you want the summary to be.
Super quick summary of the book: Thomas Campbell seems like a decently smart guy; he has a PhD in physics and works for the government. He claims his mind/consciousness can exist in and travel between multiple realities. In this book he explains his version of how the universe works, how it got started, why it's here, and why we are here.
Personal...
How do you determine your "tiredness backlog"? When I lack sleep, it seems that I have been lacking it for decades (school days etc.), but obviously there's a limit to how much I can get back. (And the official stance on parenthood here is "oh, you knew what you were getting into", so... hopeless, really.) And since there's no measure, it's really easy to imagine the backlog as either small or large.
Can anyone think of a theoretical justification (Bayesian, Frequentist, whatever) for the procedure described in this blog post? I think this guy invented it for himself -- searching on Google for "blended estimate" just sends me to his post.
Is there anything one can do to shorten the amount of time needed to fall asleep, and to make falling asleep more robust? Currently I am unable to fall asleep if I've overslept the previous day or engaged with something too stimulating before going to bed. This is still true even though I've followed a strict schedule for a fairly long time. It's pretty annoying.
Something I already do is sleep with white noise, partially to make it less likely that I'll wake up from unexpected sounds.
I can't imagine a situation where I'd dole out bad karma, and I certainly wouldn't do it without giving an explanation. If something's bad enough, it can be reported; but "I don't want to see more of this" doesn't mean it's up to me to influence whether anyone else can see it.
Unfortunately, the current situation means one person can remove stuff from view if they get in early. I get the impression of rash rather than rational...
Negative karma without comment is like saying "bad dog" which won...
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.
The Open Thread sequence is here.