I’m not quite sure how to answer your question, but at least I have similar feelings: that my conscientiousness is relatively low, and that many people who do cooler stuff than me appear to be more driven, with clearer goals and a better ability to actually go and pursue them. I have various thoughts on this:
Interesting. This specific form of ‘reward’ also works well for me (and I also hadn’t conceptualised it as such), but when people talk about rewarding yourself as an incentive for doing something, it’s usually stuff like ‘give yourself a slice of cake if you’ve had a productive workday’ or whatever, and in those cases, my brain is always going ‘wait! I can have the cake anyway, even though I didn’t do what I planned! It’s right here, I can just eat it!’. I’m not sure why it happens, or why watching videos when exercising works better, but I assume it’s what Seth meant?
Thanks! I knew of Alexander, but you reminded me that I’ve been procrastinating on tackling the 1,200+ pages of A Pattern Language for a few months, and I’ve now started reading it :-)
I’m being slightly off-topic here, but how does one "make it architecturally difficult to have larger conversations"? More broadly, the topic of designing spaces where people can think better/do cooler stuff/etc. is fascinating, but I don’t know where to learn more than the very basics of it. Do you know good books, articles, etc. on these questions, by any chance?
Apparently, he co-founded the channel. But of course he might have had his voice faked just for this video, as some suggested in the comments to it.
The very real possibility that it’s not in fact Stephen Fry’s voice is as frightening to me as to anyone else, but Stephen Fry doing AI Safety is still really nice to listen to (and at the very least I know he’s legitimately affiliated with that YT channel, which means that Stephen Fry is somewhat into AI safety, which is awesome)
"Have you met non-serious people who long to be serious? People like that seem very rare to me."
… Hmmm… kinda? Like, you’re probably right that it’s only a few people, and only in specific circumstances, but I know some people who are doing something they don’t like, or who are doing something they like but struggling with motivation or whatever for other reasons, and who certainly seem to wish they were more serious (or people who did in fact change careers or whatever and are now basically as serious as Mastroianni wants them to be, when they weren’t at all before). But those are basically people who were always inclined to be serious and were prevented from doing so by their circumstances, so you have a point, of course.
Yes, he’s definitely a polemicist, and not a researcher or an expert. By training, he’s a urologist with an MBA or two, and most of what he writes definitely sounds very oversimplified/simplistic.
Well, I did the thing where I actually go find this guy’s main book (2017, so not his latest) on archive.org and read it. The style is weird, with a lot of "she says this, Google says AGI will be fine, some other guy says it won’t", and I’m not 100% confident what Alexandre himself believes as far as the details are concerned.
But it seems really obvious that his view is at least something like "AI will be super-duper powerful, the idea that perhaps we might not build it does not cross my mind, so we will have AGI eventually, then we’d better have it before the other guys, and make ourselves really smart through eugenics so we’re not left too far behind when the AI comes". "Enter the Matrix to avoid being swallowed by it", as he puts it (this is a quote).
Judging by his tone, he seems simply not to consider that perhaps we could deliberately avoid building AGI, and to be unaware of most of the finer details of discussions about AI and safety (he also says that telling AI to obey us will result in the AI seeing us as colonizers and revolting against us, and so we should pre-emptively avoid such "anti-silicon racism", which is an oversimplification of, like, so many different things). But some sentences are more like "humanity will have to determine the maximum speed of AI deployment [and it’ll be super hard/impossible because people will want to get the benefits of more AI]". So, at least he’s aware of the problem. He doesn’t seem to have anything to say beyond that on AI safety issues, however.
Oh, and he quotes (and possibly endorses?) the idea that "duh, AI can’t be smarter than us, we have multiple intelligences, Gardner said so".
Overall, it’s much clearer to me why Lucie calls him an accelerationist, and it seems like a good characterization.
My decision process was much dumber: 1. Try to spend less time on LW, and move to close the page after having reflexively opened it, deliberately not opening this post. 2. See Daystar’s comment on the frontpage and go "wait, that’s pretty important for me too". 3. Give ten bucks, because I don’t have $1,000 lying around.
So, basically, I’m making a mostly useless comment but thanks for reminding me to donate :-)