Immanuel Jankvist

The following seems a bit unclear to me and might warrant an update, if I am not alone in this assessment:

Section 3 finds that even without a software feedback loop (i.e. “recursive self-improvement”), [...], then we should still expect very rapid technological development [...] once AI meaningfully substitutes for human researchers.

I might just be taking issue with the word "without", reading it very literally, but to me "AI meaningfully substituting for human researchers" implies at least a weak form of recursive self-improvement.
That is, I would be quite surprised if the world allowed AI to become as smart as human researchers but no smarter afterwards.

Thanks for the advice, @GeneSmith!

Regarding the 'probability assertions' I made, the following (probably) sums it up best:

I understand the ethical qualms. The point I was trying to make was more along the lines of: if I can affect the system in a positive direction, could this maximise my/humanity's mean utility function? I acknowledge this is a weird way to put it (since I assume a utility function for myself/humanity), but I hoped it would provide insight into my thought process.

Note: in the post I didn't specify the  part. I'd hoped it was implicit, as I don't care much for the scenario where aging is solved and AI enacts doom right afterwards. I'm aware this is still an incomplete model (and quite non-rigorous).
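
Concretely, a rough sketch of what I had in mind (with $U$ for my/humanity's utility and $D$ for the doom scenario; this notation is my own ad-hoc shorthand, not from the post):

$$\mathbb{E}[U] = P(\neg D)\,\mathbb{E}[U \mid \neg D] + P(D)\,\mathbb{E}[U \mid D] \approx P(\neg D)\,\mathbb{E}[U \mid \neg D],$$

treating $\mathbb{E}[U \mid D] \approx 0$. In other words, I was implicitly optimising only the no-doom branch, which is why solving aging just before doom contributes almost nothing.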

Again, I appreciate the response and the advice ;)