All of Delete account's Comments + Replies

How do you feel about experimenting with meth?

Thanks! Do I still need to enter an email?

3Ben Pace
It's optional.

“Today I'm here to tell you: this is actually happening and it will last a week. You will get a payout if you give us a PayPal/ETH address or name a charity of your choosing.”

How do we give you the name of a charity? I only see fields to enter a PayPal and email address on the payment info page.

3Ben Pace
Not the best UI, but if you just put in the full name of the charity in the PayPal field, we'll donate it to them.

The spread of opinions seems narrow compared to what I would expect. OP makes some bold predictions in his post. I see more debate over less controversial claims all of the time.

Sorry, but what do aliens have to do with AI?

Part of the reason the spread seems small is that people are correctly inferring that this comment section is not a venue for debating the object-level question of Probability(doom via AI), but rather for discussing EY's viewpoint as written in the post. See e.g. https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-ama-discussion for more of a debate.

6AnnaSalamon
That's fair. Sorry, I said it badly/unclearly. What I meant was: most ways to design powerful AI will, on my best guess, be "alien" intelligences, in the sense that they are different from us (think differently, have different goals/values, etc.).
1deepy
There's an analogy being drawn between the power of a hypothetical advanced alien civilization and the power of a superintelligent AI. If you agree that the hypothetical AI would be more powerful, and that an alien civilization capable of travelling to Earth would be a threat, then it follows that superintelligent AI is a threat. I think most people here agree that AI poses a huge risk, but differ on how likely it is that we're all going to die. A 20% chance we're all going to die is very much worth trying to mitigate sensibly, and the OP says it's still worth trying to mitigate a 99.9999% chance of human extinction in a similarly level-headed manner (even if the mechanics of doing the work are slightly different at that point).

So AI will destroy the planet and there’s no hope for survival?

Why is everyone here in agreement that AI will inevitably kill off humanity and destroy the planet?

Sorry I’m new to LessWrong and clicked on this post because I recognized the author’s name from the series on rationality.

-30[anonymous]

Why is everyone here in agreement that…

We’re not. There’s a spread of perspectives and opinions and lack-of-opinions. If you’re judging from the upvotes, might be worth keeping in mind that some of us think “upvote” should mean “this seems like it helps the conversation access relevant considerations/arguments” rather than “I agree with the conclusions.”

Still, my shortest reply to “Why expect there’s at least some risk if an AI is created that’s way more powerful than humanity?” is something like: “It seems pretty common-sensical to think that alien e... (read more)

+1 for asking the 101-level questions! Superintelligence, “AI Alignment: Why It’s Hard, and Where to Start”, “There’s No Fire Alarm for Artificial General Intelligence”, and the “Security Mindset” dialogues (part one, part two) do a good job of explaining why people are super worried about AGI.

"There's no hope for survival" is an overstatement; the OP is arguing "successfully navigating AGI looks very hard, enough that we should reconcile ourselves with the reality that we're probably not going to make it", not "successfully navigating AGI looks impossible... (read more)

habryka140

This sequence covers some chunk of it, though it does already assume a lot of context. I think this sequence is the basic case for AI Risk, and doesn't assume a lot of context. 

Is this an April fools joke?

[This comment is no longer endorsed by its author]
1Charlie Steiner
niplav120

Do you think the Lesswrong team would lie to you‽

Answer by Delete account 50

We don’t know how many Russian generals are in Ukraine. Russia has not made that information public.

What you can look at is data from past wars. Twelve US generals were killed in Vietnam, over a period of I don't know how many years.

Vietnam casualties by rank: http://www.americanwarlibrary.com/vietnam/vwc4.htm A Russian major general is equivalent in rank to a US brigadier general (grade O-7 in the first table).

The number of Russian generals who have died over the period of one month seems unusually high by comparison. FWIW, Russia's military is said to have a top-heavy org structure.

[This comment is no longer endorsed by its author]
2trevor

“On 3/20, Zelenskyy bans activities of pro-Russian political parties until war is over. This does not seem like either great optics or like it is good for Ukrainian democracy, and no I wouldn’t have known this (at least right away) without a Russian-oriented source. He also combined all the television stations and forced them to broadcast only State Media, all in ways that sure seem like they shut out the opposition. I’m pretty sad about this. Not terribly shocked, mind, but sad.”

I mean, Russia invaded Ukraine with soldiers, missiles, tanks, and heavy arti... (read more)

[This comment is no longer endorsed by its author]
9sanxiyn
South Korea keeps its anti-North Korea law to this day, and it is routinely abused to attack freedom of expression. I am pessimistic about the law being repealed after the war is over.

*Moving my introduction here because I accidentally posted it in the wrong Open Thread.

Introducing myself-

Hi, I’m Karolina. I stumbled across this community after googling “techno-feudalism”. A moderator of a Discord server I belong to says that the US will become a techno-feudalist society, and I was trying to understand what he means. I’m not super interested in techno-feudalism, though. I created an account here because I saw a lot of interesting topics under the “Concepts” tab.

I would describe myself as “down to earth”. Most of my thoughts revolve around... (read more)

[This comment is no longer endorsed by its author]
7Yoav Ravid
Welcome! I wonder how "techno-feudalism" led you here. When I search for it on the site, the only results that come up are your two comments (and that's a very rare thing for this site). I suggest you start with the core reading in the library, but other things that might interest you based on what you said are Inadequate Equilibria, Simulacrum Levels, and Moral Mazes. A common thread there is incentives / game theory, but you might get an intuition for those from the core reading. If not, and that frame feels alien to you, you can go to these tags and look for something that explains them well. Also, maybe you'll find the Parenting tag interesting.