Eli Tyre
Comments (sorted by newest)
People Seem Funny In The Head About Subtle Signals
Eli Tyre · 6h

The consensus feedback said roughly “creepy/rapey, but in the sexy way”,

I burst out laughing at this. 

I'm glad what you're doing is working for you!?

Wei Dai's Shortform
Eli Tyre · 1d

I agree that this seems like a likely effect. 

It seems like the quality of shortform writing that displaces what would otherwise have been full posts will generally be lower. On the other hand, people might feel more willing to publish at all, because they don't have to assess whether their writing is good enough to be worth making a bid for other people's attention.

Nancy Li's Shortform
Eli Tyre · 1d

FWIW, this was the basic take of CFAR and the milieu around CFAR at least as early as 2015, though there are additional operational details about how best to implement this approach.

Axel Ahlqvist's Shortform
Eli Tyre · 1d

Yeah, the main reason is link rot. 

Wei Dai's Shortform
Eli Tyre · 1d

Some months ago, I suggested that there could be a UI feature, which authors could turn on or off, to automatically turn shortforms into proper posts once they get sufficient karma.
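
A minimal sketch of how such an opt-in promotion rule might look, purely illustrative (the `AuthorSettings` fields and `shouldPromote` helper are hypothetical, not LessWrong's actual codebase):

```typescript
// Hypothetical sketch: promote a shortform to a full post once it crosses a
// karma threshold, but only if the author has opted in. All names are illustrative.

interface Shortform {
  id: string;
  authorId: string;
  karma: number;
  promotedToPost: boolean;
}

interface AuthorSettings {
  autoPromoteShortforms: boolean;   // the on/off toggle suggested above
  promotionKarmaThreshold: number;  // e.g. 50
}

function shouldPromote(sf: Shortform, settings: AuthorSettings): boolean {
  return (
    settings.autoPromoteShortforms &&
    !sf.promotedToPost &&
    sf.karma >= settings.promotionKarmaThreshold
  );
}

// Example: a 60-karma shortform by an opted-in author would be promoted.
console.log(
  shouldPromote(
    { id: "sf1", authorId: "a1", karma: 60, promotedToPost: false },
    { autoPromoteShortforms: true, promotionKarmaThreshold: 50 },
  ),
); // true
```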

Mo Putera's Shortform
Eli Tyre · 1d

various weird obsessions like the idea of legalizing r*pe etc that might have alienated many women and other readers

Sidenote: I object to calling this a weird obsession. This was a minor-to-medium plot point in one science fiction story that he wrote, and (to my knowledge) he has never advocated for it, or even discussed it, beyond its relevance to the story. I don't think that's an obsession.

Mo Putera's Shortform
Eli Tyre · 1d
  • The early effective altruists would have run across these ideas and been persuaded by them, though somewhat more slowly?

I think I doubt this particular point. That EA embraced AI risk (to the extent that it did) seems to me like a fairly contingent historical fact, due to LessWrong being one of the three original proto-communities of EA.

I think early EA could have grown into several very different scenes/movements/cultures/communities, in both form and content. That we would have broadly bought into AI risk as an important cause area doesn't seem overdetermined to me.

Which side of the AI safety community are you in?
Eli Tyre · 13d

it's washing something which we don't yet understand and should not pretend to understand.

Washing? Like safetywashing?

Wikitag Contributions

Center For AI Policy · 2 years ago
Blame Avoidance · 3 years ago
Hyperbolic Discounting · 3 years ago

Posts (sorted by new)

Eli's shortform feed · 29 karma · 6y · 324 comments
Evolution did a surprising good job at aligning humans...to social status · 23 karma · 2y · 37 comments
On the lethality of biased human reward ratings · 48 karma · 2y · 10 comments
Smart Sessions - Finally a (kinda) window-centric session manager · 14 karma · 2y · 3 comments
Unpacking the dynamics of AGI conflict that suggest the necessity of a premptive pivotal act · 63 karma · 2y · 2 comments
Briefly thinking through some analogs of debate · 20 karma · 3y · 3 comments
Public beliefs vs. Private beliefs · 146 karma · 3y · 30 comments
Twitter thread on postrationalists · 147 karma · 4y · 33 comments
What are some good pieces on civilizational decay / civilizational collapse / weakening of societal fabric? (Question) · 22 karma · 4y · 8 comments
What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? (Question) · 38 karma · 4y · 16 comments
I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 42 karma · 4y · 29 comments