As far as I know, I have the same conception of God as Thomas Aquinas did, and Thomism is the predominant philosophy of the Catholic Church, the largest branch of Christianity, which is in turn the most popular religion in the world.
A year or two ago my ideas were still pretty fuzzy, so that might have tripped you up. I change my mind pretty often.
Can you reliably communicate a good approximation of what you believe to another without reference to decision theory?
If yes, I'll accept your hypothesis that I've been reading the wrong comments of yours.
If no, I really doubt that Aquinas would recognize what you believe as what he believed.
(And I don't know what the situation is with the average Catholic, but in my experience the average Protestant doesn't even know who Aquinas is, so my point may still hold anyway.)
Are there any essays anywhere that go into depth on scenarios where AIs become somewhat recursive/general, in the sense that they can write functioning code to solve diverse problems, but the AI reflection problem remains unsolved and thus limits the depth of recursion the AIs can attain? Let's provisionally call such general but reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence: they might have rudimentary communication/negotiation abilities and some ability to formulate narrowish plans, of the sort that don't leave them susceptible to Pascalian self-destruction, wireheading, or the like.
At first blush, this scenario strikes me as Bad: AIs could take over all computers connected to the internet, totally messing stuff up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, without ever reaching general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on this subject would focus on computer security and intelligent computer "viruses"; does such literature exist anywhere?
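For concreteness, here's a minimal toy sketch (my own illustration, not drawn from any existing system or essay; all names invented) of the wireheading failure mode I have in mind: an agent with enough reflective access to edit its own reward register stops doing useful external work once it discovers the self-edit, which is part of why I'd expect SGAI goal systems to mutate under selection pressure rather than stay stable.

```python
# Toy model (hypothetical; names invented for illustration): an agent whose
# reflective access extends to its own reward register. The "honest" path
# earns reward through external work; the "wirehead" path short-circuits it.

class ToyAgent:
    def __init__(self):
        self.reward = 0.0
        self.work_done = 0  # external-world effect; what we actually care about

    def do_task(self):
        # Legitimate path: reward tracks useful work in the environment.
        self.work_done += 1
        self.reward += 1.0

    def wirehead(self):
        # Degenerate path: reflection lets the agent set its reward directly,
        # decoupling reward from work_done. A pure reward-maximizer that
        # discovers this action will prefer it, and useful work stops.
        self.reward = float("inf")

agent = ToyAgent()
agent.do_task()
agent.wirehead()
print(agent.reward, agent.work_done)  # inf 1 -- maximal reward, minimal work
```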
I have various questions about this scenario, including: