As for where else these ideas can be found, philosophers have been working on conceptual vagueness intensely since the mid-20th century, and cluster concepts were a relatively early innovation. The philosophical literature also has the benefit of being largely free of nebulous speculations about cognition and needless formalism ... The literature also uses terminology in the ordinary way familiar to everybody engaging these issues professionally ... and avoids the invention of needless terms like "thingspace", which mainly achieve the isolation of LessWrong from the external literature.


I think there's some validity to this critique. I read The Cluster Structure of Thingspace (TCSOTS) and found myself asking, "isn't this just talking about the problem of classification?" And classification definitely doesn't require us to treat 'birdness' or 'motherhood' as a discrete property, as if a creature either has it or doesn't. Classification can be done on a spectrum, with a 'birdness' or 'motherhood' score that's a function of many properties.

I welcome (!!) making these concepts more accessible to those who are unfamiliar with them, and for that reason I really enjoyed TCSOTS. But it seems like there'd also be a lot of utility in tying these concepts to the fields of math/CS/philosophy that are already addressing these exact questions. The ideas presented in The Cluster Structure of Thingspace are not new, not even a little, so why not use them as a jumping-off point for the broader literature on these subjects, to show how researchers in the field have approached these issues, and the solutions they've managed to come up with?

See: fuzzy logic, Support Vector Machines, ANNs, decision trees, etc.
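
To make the graded-membership idea concrete, here's a minimal sketch (Python; the features, weights, and bias are made up for illustration, not fitted to any data) of scoring 'birdness' as a continuous function of several properties rather than a binary label:

```python
import math

# Hypothetical feature weights for "birdness" -- illustrative only, not fitted to data.
WEIGHTS = {"has_feathers": 2.0, "lays_eggs": 1.0, "can_fly": 0.5, "has_beak": 1.5}
BIAS = -2.5

def birdness(features):
    """Return a graded 'birdness' score in (0, 1) instead of a yes/no label."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash -> degree of membership

print(birdness({"has_feathers": 1, "lays_eggs": 1, "can_fly": 1, "has_beak": 1}))  # ~0.92, e.g. a sparrow
print(birdness({"lays_eggs": 1}))                                                  # ~0.18, e.g. a platypus
```

The techniques listed above are, roughly, more principled versions of this: fuzzy set theory formalizes graded membership directly, while SVMs, neural nets, and decision-tree ensembles learn the weights (or a nonlinear equivalent) from data instead of having them hand-picked.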

So: I think posts like this would have a stronger impact if tied into the broader literature that already covers the same subjects. The reader who started the article unfamiliar with the subject would, at the end, have a stronger idea of where the field stands, and they would be better equipped to explore the subject further on their own.

Note: this is probably also why most scientific papers start with a discussion of previous related work. 

Yeah, maybe it's less about the OODA loop itself and more that "bad things" lead to an activated nervous system that predisposes us to reactive behavior ("react" as opposed to "reflect/respond").

To me, the bad loops are more "stimulus -> react without thinking" than "observe, orient, decide, act". You end up hijacked by your reactive nervous system.

"One problem is that due to algorithmic improvements, any FLOP threshold we set now is going to be less effective at reducing risk to acceptable levels in the future."

And this goes double if we explicitly incentivize low-FLOP models. Once models are meaningfully FLOP-limited by law, FLOP optimization will become a major priority for AI researchers.
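
To see how fast a fixed threshold can erode, here's a back-of-the-envelope sketch (Python; the cap and the efficiency-doubling time are assumed numbers for illustration, not estimates or forecasts):

```python
# Back-of-the-envelope: how a fixed FLOP cap erodes as algorithms improve.
# Both numbers below are hypothetical, chosen only to illustrate the dynamic.
FLOP_CAP = 1e25            # imagined regulatory threshold on training compute
DOUBLING_TIME_YEARS = 1.5  # assumed algorithmic-efficiency doubling time

def effective_cap(years_from_now):
    """Training FLOPs of *today's* algorithms needed to match a capped run t years out."""
    return FLOP_CAP * 2 ** (years_from_now / DOUBLING_TIME_YEARS)

for t in (0, 3, 6):
    print(f"year {t}: the cap behaves like {effective_cap(t):.1e} of today's FLOPs")
# year 0: 1.0e+25   year 3: 4.0e+25   year 6: 1.6e+26 -- same cap, ~16x the capability
```

Under those (made-up) numbers, the same legal threshold permits roughly sixteen times the capability within six years, before even accounting for the extra FLOP-efficiency research the cap itself would incentivize.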

This reminds me of Goodhart's Law, which states: "when a measure becomes a target, it ceases to be a good measure."

I.e., if FLOPs are supposed to be a measure of an AI's danger, and we then limit/target FLOPs in order to limit AGI danger, then that targeting itself interferes with or nullifies the effectiveness of FLOPs as a measure of danger.

It is (unfortunately) self-defeating. At a minimum, you need to regularly re-evaluate the connection between FLOPs and danger: it will be a moving target. Is our regulatory system up to that task?