Odd anon

Get a dozen AI risk skeptics together, and I suspect you'll get majority support from the group for each and every point that the AI risk case depends on. You, in particular, seem to be extremely aligned with the "doom" arguments.

The "guy-on-the-street" skeptic thinks that AGI is science fiction and that it's silly to worry about it. Judging by your other answers, it seems like you disagree and fully believe that AGI is coming. Go deep into the weeds, and you'll find Sutton and Page and the radical e/accs, who believe that AI will wipe out humanity, that this is a good thing, and that wanting to preserve humanity and human control is just another form of racism. A little further out, plenty of AI engineers believe that AGI would by default wipe out humanity, but that they're going to solve the alignment problem in time, so no need to worry. Some contrarians like to argue that intelligence has nothing to do with power, and that superintelligence will permanently live under humanity's thumb because we have better access to physical force. And then some optimists believe that AI will inevitably be benevolent, so no need to worry.

If I'm understanding your comments correctly, your position is something like "ASI can and will take over the world, but we'll be fine", a position so unusual that I didn't even think to include it in my lengthy taxonomy of "everything turns out okay" arguments. I am unable to make even a basic guess as to how you arrived at that position (though I would be interested in learning).

Please notice that your position is extremely non-intuitive to basically everyone. If you start from expert consensus regarding your own position in particular, you don't get an 87% chance that you're right; you get a look of incredulity and an arbitrarily small number. If you instead want to examine the broader case for AI risk, most of the "good arguments" are going to look more like "no really, AI keeps getting smarter, look at this graph" and things like Yudkowsky's "The Power of Intelligence", both of which (if I understand correctly) you already think are obviously correct.

If you want to find good arguments for "humanity is good, actually", don't ask AI risk people, ask random "normal" people.

My apologies if I've completely misunderstood your position.

(PS: Extinction markets do not work, since they can't pay out after extinction.)
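(A minimal sketch of why, using illustrative symbols q and p that aren't from the original discussion: suppose a contract pays $1 if extinction occurs by some date and $0 otherwise, let q be a trader's honest extinction probability, let p be the share price, and value every post-extinction outcome at zero, since no one is around to experience it. Buying the "extinction" side then has expected value

$$ q \cdot 0 + (1 - q) \cdot (-p) = -(1 - q)\,p < 0 \quad \text{for any } p > 0, $$

so no one should buy at any positive price, sellers collect p in every world they care about, and the price stays pinned near zero regardless of q.)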

AI Policy Institute (AIPI) poll:

  • "86% of voters believe AI could accidentally cause a catastrophic event, and 70% agree that mitigating the risk of extinction from AI should be a global priority alongside other risks like pandemics and nuclear war"
  • "76% of voters believe artificial intelligence could eventually pose a threat to the existence of the human race, including 75% of Democrats and 78% of Republicans"

Also, this:

  • "Americans’ top priority is preventing dangerous and catastrophic outcomes from AI" - with relatively few prioritizing things like job loss, bias, etc.

Make that clear. But make it clear in a way that your uncle won’t laugh at over Christmas dinner.


Most people agree with Pause AI. Most people agree that AI might be a threat to humanity. The protests may or may not be effective, but I don't really think they could be counterproductive. It's not a "weird" thing to protest.

Odd anon

> Meta’s messaging is clearer.
>
> “AI development won’t get us to transformative AI, we don’t think that AI safety will make a difference, we’re just going to optimize for profitability.”


So, Meta's messaging is actually quite inconsistent. Yann LeCun says (when speaking to certain audiences, at least) that current AI is very dumb, and AGI is so far away it's not worth worrying about all that much. Mark Zuckerberg, on the other hand, is quite vocal that their goal is AGI and that they're making real progress towards it, suggesting 5+ year timelines.

Odd anon

Almost all of these are about "cancellation" by means of transferring money from the government to those in debt. Are there similar arguments against draining some of the ~trillion dollars held by university endowments to return to students who (it could be argued) were implicitly promised an outcome they didn't get? That seems a lot closer to the plain meaning of "cancelling debt".

This isn't that complicated. The halo effect is real and can go to extremes when romantic relationships are involved, and most people take their sense data at face value most of the time. The sentence is meant completely literally.

GPT-5 training is probably starting around now

Sam Altman confirmed (paywalled, sorry) in November that GPT-5 was already under development. (Interestingly, the confirmation came almost exactly six months after Altman told a Senate hearing, under oath, that "We are not currently training what will be GPT-5; we don't have plans to do it in the next 6 months.")

The United States is an outlier in divorce statistics. In most places, the rate is nowhere near that high.

Odd anon

It is not that uncommon for people to develop severe dementia, become extremely needy, and rapidly lose many (or all) of the traits that people liked about them. Usually, they don't stop being loved just because they spend their days hurling obscenities at people, neglecting their own hygiene, and expressing zero affection.

I would guess that most parents do actually love their children unconditionally, and probably the majority of spouses unconditionally love their partners.

(Persistent identity is a central factor in how people relate to each other, so one can't really say that "it is only conditions that separate me from the worms.")
