LESSWRONG

lc

Sequences

The Territories
Mechanics of Tradecraft

Comments, sorted by newest
My pitch for the AI Village
lc · 6d

Four million a year seems like a lot of money to spend on what is essentially a good capabilities benchmark. I would rather give that money to, say, LessWrong, and if I had the time to do some research, I could probably find ten people willing to create alignment benchmarks (like https://scale.com/leaderboard/mask or https://scale.com/leaderboard/fortress) that I think would be even more positively impactful than a LessWrong donation.

Racial Dating Preferences and Sexual Racism
lc · 7d (edited)

This is a top-tier LessWrong post (at least the portions I've read, the ones that detail the facts on the ground). It is clear, lucid, information-dense, and successfully approaches a touchy subject matter-of-factly without pushing an agenda[1].

I figure that a lot of people will feel exasperated at seeing it because they've already heard a lot of the cliffnotes before, but in order for people to know about the thing Everyone Knows, someone at some point generally has to write it down without innuendo.

  1. Edit: nvm, there's a little bit of an agenda in the middle.

Racial Dating Preferences and Sexual Racism
lc · 7d

I think most people understood what you meant by this, but perhaps you could make it explicit, as it's an interesting clarification.

New Endorsements for “If Anyone Builds It, Everyone Dies”
lc · 12d

Getting Bruce Schneier as an endorsement is sick.

Shortform
lc · 20d (edited)

Old internet arguments about religion and politics felt real. Yeah, the "debates" were often excuses to have a pissing competition, but a lot of people took the question of who was right seriously. And if you actually didn't care, you were at least motivated to pretend to the audience that you did.

Nowadays people don't even seem to pretend to care about the underlying content. If someone seems like they're being too earnest, others just reply with a picture of their face. It's sad.

Jan Betley's Shortform
lc · 20d (edited)

When I heard about this for the first time, I thought: this model wants to make the world a better place. It cares. This is good. But some smart people, like Ryan Greenblatt and Sam Marks, say this is actually not good and I'm trying to understand where exactly we differ.

People who cry "misalignment" about current AI models on Twitter generally have chameleonic standards for what constitutes "misaligned" behavior, and the boundary will shift to cover whatever ethical tradeoffs the models are making at any given time. When models accede to users' requests to generate meth recipes, they say it's evidence the models are misaligned, because meth is bad. When models try to actively stop the user from making meth, they say that, too, is bad news, because it represents "scheming" behavior and contradicts the users' wishes. Soon we will probably see a paper about how models sometimes take no action at all, and how this is sloth and dereliction of duty.

Shortform
lc · 23d

If the interjection is about your personal hobbyhorse or pet peeve or theory or the like, then definitely shut up and sit down.

I make the simpler request because rationalists often don't seem to be able to tell when this is the case (or at least to tell when others can tell).

Shortform
lc · 23d

Sure; unfortunately, what's happening at rationalist conferences is that frequently the most socially unaware/attention-seeking person in the room speaks up, in a way that does not actually contribute, and this encourages other socially unaware people to go do it at other talks.

Shortform
lc · 24d (edited)

If you attend a talk at a rationalist conference, please do not spontaneously interject unless the presenter has explicitly clarified that you are free to do so. Neither should you answer questions on behalf of the presenter during a Q&A portion. People come to talks to listen to the presenter, not a random person in the audience.

If you decide to do this anyway, you will usually not get audiovisual feedback from the other audience members that it was rude/cringeworthy for you to interject, even if internally they are desperate for you to stop doing it.

Thane Ruthenis's Shortform
lc · 26d

"Successionism" is such a bizarre position that I'd look for the underlying generator rather than try to argue with it directly.

Posts

Recent AI model progress feels mostly like bullshit · 340 karma · 3mo · 82 comments
Virtue signaling, and the "humans-are-wonderful" bias, as a trust exercise · 44 karma · 5mo · 16 comments
My simple AGI investment & insurance strategy · 131 karma · 1y · 27 comments
Aligned AI is dual use technology · 58 karma · 1y · 31 comments
You can just spontaneously call people you haven't met in years · 168 karma · 2y · 21 comments
Does bulemia work? [Question] · 5 karma · 2y · 18 comments
Should people build productizations of open source AI models? [Question] · 23 karma · 2y · 0 comments
Bariatric surgery seems like a no-brainer for most morbidly obese people · 12 karma · 2y · 12 comments
Bring back the Colosseums · 18 karma · 2y · 28 comments
Diet Experiment Preregistration: Long-term water fasting + seed oil removal · 56 karma · 2y · 18 comments
Shortform · 2 karma · 5y · 542 comments