I actually go out of my way to equate "god" and "AGI"/"superintelligence", because to a large extent they seem like the same thing to me.
Can you give me the common meanings of those terms, and explain how they're equivalent?
It's not that I want to identify as a theist, so much as that I want to point out that I think the only reason people treat gods/angels/demons and AGIs/superintelligences/transhuman-intelligences as different things is that they're compartmentalizing.
Compartmentalizing in what way? I think they're different things; or rather, it seems utterly obvious to me that religious people using the theistic terms are always referring to things completely different from what people on LW mean by those other terms.
I should say, though, that the way the theistic terms are used is in no way consistent, and everybody seems to mean something different (when I can even venture a guess as to what the hell they're talking about). These terms carry multiple meanings, to say the least.
Maybe your conception is something like, "If there really is anything out there that could in any way match the description in Catholicism or whatever, then it would perhaps have to be an AGI, or else a super-intelligent life-form that evolved naturally."
I would say, though, that this seems like a desperate attempt to resurrect the irrationality of religion. If I came up with or learned something interesting or important, and also realized that some scholar or school of thought from the past or present had a few central conclusions or beliefs that seemed somewhat similar, but believed them all for the wrong reasons--specifically, reasons absolutely insane by my own epistemic standards--I would not care. I would move on and consider that tradition utterly useless and uninteresting.
I don't understand why you care. It's not like Aquinas or anybody else believed any of this stuff for the same reasons you do, or anything like that, so what's the point of being like, "Hey, I know these people came up with this stuff for some random other reasons, but it seems like I can still support their conclusions and everything, so yeah, I'm a theist!" It just doesn't make any sense to me, unless of course you think they came to those conclusions for good reasons that have anything at all to do with yours, in which case I need some elaboration on that point.
Either way, I usually can't even tell what the hell most religious people are talking about, from an epistemic or clear-communication standpoint. I used to think they were just totally insane, and I would make genuine attempts to understand what they were trying to get me to visualize. It all became clear when I started interpreting what they were saying in a different way: in terms of them employing techniques to delude themselves into believing in an afterlife, or simply believing it because of some epistemic vulnerability their brains were operating under.
Those theistic terms ("God" etc.) have multiple meanings, and different people tend to use them differently; or rather, they don't really have meanings at all, and they're just the way some people delude themselves into feeling more comfortable about whatever, or perhaps they're just mind viruses taking advantage of some well-known vulnerabilities found in our hardware.
I can't for the life of me figure out why you want to retain this terminology. What use is it besides contrarianism? Does calling yourself a theist and using the theistic terms actually aid my or anybody else's understanding of what you're thinking? Is the objective the clear communication of something that would be important for me or other people on here to know, or what? I'm utterly confused about what you're trying to do, and about the supposed utility of these beliefs of yours and of your way of communicating them.
I think Aquinas and I believe in the same God, even if we think about Him differently.
What does that even mean? It sounds like the worst sort of sophistry, but I say that not necessarily to suggest you're making an error in your thinking, but simply to convey how and why I have no idea exactly what that means.
(There are two different things going on: I believe there exists an ideal decision theory, Who is God, for theoretical reasons;
So you're defining the sequence of letters starting with "G", next being "o", and ending with "d" as "the ideal decision theory"? Is this a common meaning? Do all (or most of) the religious people I know IRL use that term to refer to the ideal decision theory, even if they wouldn't call it that?
And what do you mean by "ideal"? Ideal for what? Our utility functions? Maybe I even need to hear a bit of elaboration on what you mean by "decision theory". Are we talking about AI programming, or human psychology, or what?
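For contrast, the thing I would recognize as a "decision theory" in the formal sense looks something like the following toy expected-utility sketch (a minimal illustration assuming the standard expected-utility framing; every name and number in it is my own invention, not anything you've proposed):

```python
# A minimal expected-utility sketch (all names and numbers here are mine,
# purely for illustration): a "decision theory" in this narrow sense is a
# rule mapping beliefs plus a utility function to a choice of action.

def expected_utility(action, outcomes, prob, utility):
    # Sum of P(outcome | action) * U(outcome, action) over possible outcomes.
    return sum(prob(o, action) * utility(o, action) for o in outcomes)

def decide(actions, outcomes, prob, utility):
    # The "decision": whichever action maximizes expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, prob, utility))

# Tiny example: carry an umbrella or not, given a 30% chance of rain.
actions = ["umbrella", "no_umbrella"]
outcomes = ["rain", "dry"]
prob = lambda o, a: 0.3 if o == "rain" else 0.7   # the weather ignores our choice
utility = lambda o, a: {("rain", "umbrella"): -1, ("rain", "no_umbrella"): -10,
                        ("dry", "umbrella"): -1, ("dry", "no_umbrella"): 0}[(o, a)]

print(decide(actions, outcomes, prob, utility))   # -> "umbrella"
```

If by "ideal decision theory" you mean something in this genre (pick your formalism), then say so; if you mean something else entirely, that's precisely the elaboration I'm asking for.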
whereas my reasons for believing that transhuman intelligences (lower-case-g gods) affect humans are entirely phenomenological.)
I literally have absolutely no idea why you chose the word "phenomenological" right there, or what you could possibly mean.
If I came up with or learned something interesting or important, and also realized that some scholar or school of thought from the past or present had a few central conclusions or beliefs that seemed somewhat similar, but believed them all for the wrong reasons--specifically, reasons absolutely insane by my own epistemic standards--I would not care. I would move on and consider that tradition utterly useless and uninteresting.
If I found a school of thought that seemed to come to correct conclusions unusually often but "believed them all for th...
Are there any essays anywhere that go into depth on scenarios where AIs become somewhat recursive/general, in that they can write functioning code to solve diverse problems, but the AI reflection problem remains unsolved and thus limits the depth of recursion attainable by the AIs? Let's provisionally call such general but reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence, e.g. with rudimentary communication/negotiation abilities and some ability to formulate narrowish plans of the sort that don't leave them susceptible to Pascalian self-destruction, wireheading, or the like.
At first blush, this scenario strikes me as Bad; AIs could take over all computers connected to the internet, totally messing stuff up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, without being able to reach general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on this subject would focus on computer security and intelligent computer "viruses"; does such literature exist, anywhere?
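To make the reflection-limited part concrete, here is a toy sketch of the kind of agent I'm imagining (everything in it is hypothetical, just my own framing, not any existing system): it can generate and test object-level programs, but its own search and scoring machinery is fixed code that it cannot inspect or rewrite, so self-improvement bottoms out after one level.

```python
import random

# Toy sketch of a "reflection-limited" SGAI (hypothetical framing): it
# searches over object-level candidate programs and keeps whichever scores
# best on its tests, but the `search` and `score` procedures themselves are
# fixed code it cannot inspect or improve -- rewriting them safely is
# exactly where the unsolved reflection problem would bite.

def score(candidate, tests):
    # Object-level evaluation: how many test cases does the candidate pass?
    return sum(1 for x, want in tests if candidate(x) == want)

def search(generate_candidate, tests, budget=5000):
    # Fixed, non-self-modifying search loop: sample candidates, keep the best.
    best, best_score = None, -1
    for _ in range(budget):
        c = generate_candidate()
        s = score(c, tests)
        if s > best_score:
            best, best_score = c, s
    return best

# Example task: find coefficients (a, b) such that f(x) = a*x + b fits the tests.
tests = [(0, 3), (1, 5), (2, 7)]  # i.e. the target is f(x) = 2x + 3

def generate_candidate():
    a, b = random.randint(-10, 10), random.randint(-10, 10)
    return lambda x, a=a, b=b: a * x + b

f = search(generate_candidate, tests)
print([f(x) for x, _ in tests])  # with overwhelming probability: [3, 5, 7]
# Nothing above lets the agent improve `search` or `score` themselves; that
# missing step is the gap between "semi-general" and genuinely recursive.
```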
I have various questions about this scenario, including: