All of _self_'s Comments + Replies

_self_20

So I didn't know this was a niche philosophy forum with its own subculture. I'm way out of my element. Taking that into account, my suggestions weren't very relevant; I thought it was a general forum. I'm still glad there are people thinking about it.

The links you sent are awesome! I'll follow those researchers. I think a lot of my thoughts here are outdated as things keep changing, and I'm still putting them together. So I probably won't be writing much for a few months, until my brain settles down a little.

Am I guilty of "shorttermism"? On the long term, as in the fate of humanity, I don't think I'm well equipped to debate.

Thanks for commenting on my weird intro!

1the gears to ascension
imo, shorttermism = 1 year, longtermism = 10 years. ai is already changing very rapidly. as far as I'm concerned your posts are welcome; don't waste time worrying about being out of your element, just tell it as you see it and let's debate. this forum is far too skeptical of people with your background, and you should be more self-assured that you have something to contribute.
_self_*10

Oh no, the problem is already happening, and the bad parts are more dystopian than you probably want to hear about, lol.

From the behaviorism side, yes, it's incredibly easy to manipulate people via tech, and it's not always done on purpose, as you note. But as a whole it's frequently insomnia-inducing.

Your point about knowing your weakness and preparing is spot on!

  • For the UX side of this, look up Harry Brignull and Dark Patterns. (His work has been solid for 10+ years; to my knowledge he was the first to call out some real BS that went un-called-out for most of

... (read more)
_self_6-3

I'm one of the new readers; I found this forum through a Twitter thread that was critiquing it. I have a psychology background, later switched to ML, and have been following AI ethics for over 15 years, hoping for a long time that the discussion would leak across industries and academic fields.

Since AI (however you define it) is a permanent fixture in the world, I'm happy either way to find a forum focused on critical thinking, and I enjoy seeing these discussions on the front page. I hope it's SEO'd well, too.

I'd think newcomers and non-technical contributors ar... (read more)

2the gears to ascension
Strongly agreed here. My view is that ai takeover is effectively just the scaled-up version of present-day ai best-practice concerns, and the teams doing good work on either one end up helping both. Both "sides of the debate" have critical things to say about each other, but in my view that's simply good scientific arguing.

I'd love to hear more of your thoughts on the most effective actions for shorttermist ai safety and ai bias, if you were up for writing a post! I'd especially like to hear your thoughts on how cutting-edge psychology research on emergency de-escalation tactics, e.g. how to re-knit connections between humans who've lost trust for political-fighting reasons, can relate to ai safety; that example might not be your favorite focus, though it's something I worry about a lot myself and have thoughts about.

Or, if you've encountered the Socio-Environmental Synthesis Center's work on interdisciplinary team science (see also their youtube channel), I'm curious whether you have thoughts on that. Or, more accurately: I give those examples as prompts so you can see what kind of thing I'm thinking about, and generalize that into similar references or shallow dives into research that you're familiar with and I'm not.