TeaTieAndHat


I’m not quite sure how to answer your question, but I at least have similar feelings: that my conscientiousness is relatively low, and that many people who do cooler stuff than me appear to be more driven, with clearer goals and a better ability to actually go and pursue them. I have various thoughts on this:

  • To an extent, it’s just an impression. Many people struggle to do more than a fraction of what they wanted, and yet, because they still do quite a lot and remain very upbeat, you don’t notice that they achieve relatively little compared to what they want, but they certainly notice it. Similarly, many people are working on cool projects and apparently having tons of fun doing it, but if you asked, you’d learn that they have no clue about "what they want to do with their lives" or similar super long-term goals.
  • In fact, I suspect that most people feel at least a little like that sometimes, and that we grossly underestimate how likely others are to feel that way.
  • Yet some people genuinely are better at getting stuff done and staying relentlessly focused on tasks than others. That ability can be built from habit, it can come from being really, really into the one specific thing you’re working on, etc. If you struggle with it anyway, it might be something to do with mental health: ADHD is the famous one, but autism, depression, and anxiety can also impact conscientiousness, and all of these seem somewhat more common among LW readers than in the general population, so I dunno, maybe?
  • And some people are also better than others at being optimistic, enthusiastic, and eager to do cool stuff. I guess there are many causes, and therefore many potential ways of dealing with it, but I personally like the explanation from low self-confidence, fear of failure, etc., making you less willing to try ambitious stuff. (Notice how you said "it’s like they’re already taking their success for certain": yeah, that might be the case, but it might also be that they’re aware they can fail and simply think they could easily recover from that failure anyway.) It’s quite well described (imho) here.
  • But I’m pretty sure I’m covering only a relatively narrow part of the space of all the things that could be said on this topic, so I hope other people write replies with completely different takes on the problem :-)

Interesting. This specific form of ‘reward’ also works well for me (and I also hadn’t conceptualised it as such). But when people talk about rewarding yourself as an incentive for doing something, it’s usually stuff like ‘give yourself a slice of cake if you’ve had a productive workday’, and in those cases my brain always goes ‘wait! I can have the cake anyway, even though I didn’t do what I planned! It’s right here, I can just eat it!’. I’m not sure why that happens, or why watching videos while exercising works better, but I assume it’s what Seth meant?

Thanks! I knew of Alexander, but you reminded me that I’ve been procrastinating on tackling the 1,200+ pages of A Pattern Language for a few months, and I’ve now started reading it :-)

I’m being slightly off-topic here, but how does one "make it architecturally difficult to have larger conversations"? More broadly, the topic of designing spaces where people can think better/do cooler stuff/etc. is fascinating, but I don’t know where to learn more than the very basics of it. Do you know of good books, articles, etc. on these questions, by any chance?

Apparently, he co-founded the channel. But of course he might have had his voice faked just for this video, as some suggested in the comments on it.

The very real possibility that it’s not in fact Stephen Fry’s voice is as frightening to me as to anyone else, but Stephen Fry doing AI safety is still really nice to listen to (and at the very least I know he’s legitimately affiliated with that YT channel, which means Stephen Fry is somewhat into AI safety, which is awesome).

"Have you met non-serious people who long to be serious? People like that seem very rare to me."
… Hmmm… kinda? Like, you’re probably right that it’s only a few people, and in specific circumstances. But I know some people who are doing something they don’t like, or who are doing something they like but struggling with motivation for other reasons, and who certainly seem to wish they were more serious (or people who did in fact change careers and are now basically as serious as Mastroianni wants them to be, when they weren’t at all before). But those are basically people who were always inclined to be serious and were merely prevented by their circumstances, so you have a point, of course.

Yes, he’s definitely a polemicist, and not a researcher or an expert. By training he’s a urologist with an MBA or two, and most of what he writes definitely sounds very simplistic.

Well, I did the thing where I actually went and found this guy’s main book (2017, so not his latest) on archive.org and read it. The style is weird, with a lot of "she says this, Google says AGI will be fine, some other guy says it won’t", and I’m not 100% confident about the details of what Alexandre himself believes.
But it seems really obvious that his view is at least something like "AI will be super-duper powerful, the idea that perhaps we might not build it does not cross my mind, so we will have AGI eventually, and then we’d better have it before the other guys, and make ourselves really smart through eugenics so we’re not left too far behind when the AI comes". "Enter the Matrix to avoid being swallowed by it", as he puts it (that one is an actual quote).
Judging by his tone, he seems simply not to consider that we could deliberately avoid building AGI, and to be unaware of most of the finer details of discussions about AI and safety. (He also says that telling AI to obey us will make it see us as colonizers and revolt against us, so we should pre-emptively avoid such "anti-silicon racism", which is an oversimplification of, like, so many different things.) But some sentences are more like "humanity will have to determine the maximum speed of AI deployment [and it’ll be super hard/impossible because people will want to get the benefits of more AI]". So at least he’s aware of the problem. Beyond that, though, he doesn’t seem to have anything to say on AI safety issues.
Oh, and he quotes (and possibly endorses?) the idea that "duh, AI can’t be smarter than us, we have multiple intelligences, Gardner said so".

Overall, it’s much clearer to me why Lucie calls him an accelerationist, and it seems like a good characterization.

I don’t know Alexandre’s ideas very well, but here’s what I understand: you know how people who don’t like rationalists say they’re just using a veneer of rationality to hide right-wing libertarian beliefs? Well, that’s exactly what Alexandre very openly does, complete with some very embarrassing opinions on IQ differences between different parts of the world, which cement his reputation as quite an unsavoury character. (The potential reputational harm of having a caricature of a rationalist as a prominent political actor is left as an exercise for the reader...)


Wikipedia tells me that he likes Bostrom, though, which probably makes him genuinely more aware of AI-related issues than the vast majority of French politicians. On the other hand, he doesn’t expect AGI before 2100, so until then he’s clearly focused on making sure we work with AI as much as possible, that we learn to use those superintelligence thingies before they’re strong enough to take our jobs and destroy our democracies, etc. And he’s very insistent that this is an important thing to be doing: if you have shorter timelines than he does (and, like, you do!), then he’s definitely something of an accelerationist.
