I'm writing this to get information about the LessWrong community and whether it is worth engaging with. I'm a bit out of the loop about what the LW community is like, whether it can maintain multiple viewpoints, and how well known these criticisms already are.
The TL;DR is that I have problems with treating computation in an overly formal fashion. The more pragmatic philosophy suggested here implies (but doesn't prove) that AI will not be as powerful as expected, because the physicality of computation matters and instantiating computation physically is expensive.
I think all the things I will talk about are interesting, but I don't see how they are sufficient when considering AI running in the real world on real computers.
1. Source code based decision theory
I don't understand why:
- other agents would trust that your source code is what you say it is
- other agents would trust that your implementation of your interpreter matches their understanding of that interpreter. I don't see how they get around trusting-trust issues (code or behaviour inserted via a malicious compiler or interpreter) when they don't have the ability to do diverse double-compilation. See the toy sketch after this list.
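To make this concrete, here is a minimal toy sketch (my own illustration, not anyone's canonical formulation) of source code based decision theory in a one-shot Prisoner's Dilemma: an agent that cooperates only if the opponent's claimed source matches its own. The names (`CLIQUEBOT_SOURCE`, `make_agent`, `play`) are made up for the example; the two commented "trust points" are exactly where I don't see how trust gets established.

```python
# Toy sketch of source code based decision theory in a one-shot Prisoner's Dilemma.
# All names here are illustrative, not any standard implementation.

CLIQUEBOT_SOURCE = """
def decide(my_source, their_claimed_source):
    # Cooperate only if the opponent *claims* to run exactly my source.
    return "C" if their_claimed_source == my_source else "D"
"""

def make_agent(source):
    namespace = {}
    exec(source, namespace)          # trust point 1: we run whatever text was sent
    return namespace["decide"]

def play(source_a, source_b):
    a = make_agent(source_a)
    b = make_agent(source_b)
    # trust point 2: each agent sees only the *claimed* source of the other,
    # not what is actually executing on the other machine.
    return a(source_a, source_b), b(source_b, source_a)

print(play(CLIQUEBOT_SOURCE, CLIQUEBOT_SOURCE))   # ('C', 'C') -- mutual cooperation

# A liar can claim CliqueBot's source while actually running a defector:
DEFECTOR_SOURCE = 'def decide(my_source, their_claimed_source):\n    return "D"\n'
honest = make_agent(CLIQUEBOT_SOURCE)
liar = make_agent(DEFECTOR_SOURCE)
print(honest(CLIQUEBOT_SOURCE, CLIQUEBOT_SOURCE),  # 'C' -- fooled by the false claim
      liar(DEFECTOR_SOURCE, CLIQUEBOT_SOURCE))     # 'D'
```

The whole guarantee rests on the claimed string being what actually runs, and on the interpreter executing it the way the other agent expects. That is precisely the gap I mean.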
2. General Functionalism
The idea that it doesn't matter how you compute something, only whether the inputs and outputs are the same.
- The battery life of my phone says that how a computation is done is very important: is it done in the cloud, so that I have to power up my antenna to transmit and receive the result?
- Timing attacks say that the speed of a computation is important, and that faster is not always better (see the timing sketch after this list).
- Rowhammer says that how you lay out your memory is important. Can I flip a bit of your utility calculation?
- Memory usage, overheating, van Eck phreaking, etc., etc.
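To illustrate the timing point with something runnable, here is a small sketch (standard library only; the function names and the secret are made up for the example): two byte-comparison functions with identical input/output behaviour, one of which leaks where the first mismatch occurs through its running time.

```python
# Two functions with identical inputs and outputs but different physics:
# the early-exit version leaks how many leading bytes match via its running time.
import timeit

SECRET = b"hunter2hunter2hunter2hunter2" * 100

def leaky_equal(a, b):
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:              # bails out at the first mismatching byte
            return False
    return True

def constant_time_equal(a, b):
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y           # always touches every byte
    return diff == 0

wrong_at_start = b"X" + SECRET[1:]
wrong_at_end = SECRET[:-1] + b"X"

for name, guess in (("mismatch at start", wrong_at_start),
                    ("mismatch at end", wrong_at_end)):
    t1 = timeit.timeit(lambda: leaky_equal(SECRET, guess), number=2000)
    t2 = timeit.timeit(lambda: constant_time_equal(SECRET, guess), number=2000)
    print(f"{name}: leaky {t1:.4f}s   constant-time {t2:.4f}s")
# Every call returns False -- identical outputs as far as functionalism cares --
# yet the leaky version's running time depends on *where* the mismatch is,
# and that physical difference is what a timing attack exploits.
```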
2+2=4 pays rent: Ugh can use it right away for counting days. Source code based decision theory, not so much. There aren't societies of agents that can read each other's source code, so I can't try to predict them with source code based decision theories. It still seems mathematically interesting, though. I just don't want it to be a core part of our sole pathway for trying to solve the AI problem.
Perhaps this would lead people to stop trying to find a single optimal decision theory and accept that the best decision theory depends on the circumstances. Then we can figure out what our circumstances are, find good decision theories for them, and create designs that can do the same.
Just as the best search algorithm is context-dependent: even algorithms in a worse complexity class can be better due to memory locality and small input sizes.
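Here is an illustrative micro-benchmark of that point (my own sketch; exact numbers vary by machine): an O(n) linear scan versus an O(log n) binary search, both in pure Python. For very small lists, the "worse" algorithm is often just as fast or faster, because its per-step cost is tiny and its memory access is sequential.

```python
# Context-dependent "best" algorithm: linear scan vs binary search.
import random
import timeit

def linear_contains(xs, target):
    for x in xs:                      # O(n), but trivially simple per step
        if x == target:
            return True
    return False

def binary_contains(xs, target):
    lo, hi = 0, len(xs)
    while lo < hi:                    # O(log n), but more work per step
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(xs) and xs[lo] == target

for n in (8, 4096):
    xs = sorted(random.sample(range(10 * n), n))
    targets = [random.choice(xs) for _ in range(200)]
    lin = timeit.timeit(lambda: [linear_contains(xs, t) for t in targets], number=5)
    bsr = timeit.timeit(lambda: [binary_contains(xs, t) for t in targets], number=5)
    print(f"n={n:>5}: linear {lin:.4f}s   binary {bsr:.4f}s")
# Expect the linear scan to be competitive (often faster) at n=8
# and to lose clearly at n=4096: "best" depends on context, not just asymptotics.
```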
Supposition: if the answers to how decisions should be made (and a whole host of other problems) are contextual and complex, then it is worth agents trading information about the answers they have found within their own contexts.
This pathway makes no sense to people who expect winner-take-all optimal AI designs, since any one embryonic system might find the keys to the future and take it over. But if that is not the way the world works....
Human beings can probabilistically read each other's source code. That's why we use primitive versions of noncausal decision theory, like getting angry, wanting to take revenge, etc.