Comments

Brian600

I wasn't expecting such a swift response! Unfortunately I'm a little too tipsy to go through the whole paper right now (I'll get to it on the cold, godless Sunday morn), but I think I'd actually be more interested in the paper you reference at the start, about catastrophism bias. I completely agree that such a bias exists! But I still don't think it's obvious that we'll develop an AGI that can solve the socio-ecological problems I mentioned earlier before those problems inhibit the research itself. As such, I'm more concerned about designing a benevolent agrarian revolution before we get to the singularity stuff.

I guess in my mind there's a big tension here: the societal mechanisms that currently support the development of powerful AGI are also horribly brutal and unjust. Could the research continue in a more just and ecologically sound system?

Brian600

Hi everybody, first time here, coming over from Bloggingheads. At the start of the diavlog I thought I'd sympathize with Lanier, but his circular reasoning really rankled me. The repeated appeals to his own expertise were obnoxious.

Has Eliezer ever examined some of the socio-ecological assumptions of his singularity model? It seems to me that it's pretty dependent on substantial funding for this type of research, which isn't likely in the event of large-scale energy/resource shortages or nuclear war. I'm looking through the "future" and "disaster" posts, but if someone could point me in the right direction I'd be grateful. I'm finding a few mentions of things "going wrong," etc., but I think he's referring to the development of evil AGI rather than these more mundane constraints.

You can now count me among your regular readers, Eliezer. You should do a diavlog with Bob Wright: he's a good interviewer with a sense of humor, and you two would have plenty to discuss when it comes to his Nonzero thesis (which I find specious).