I'm writing this to get a sense of the LessWrong community and whether it is worth engaging with. I'm a bit out of the loop about what the LW community is like, whether it can sustain multiple viewpoints, and how well known these criticisms are.
The TL;DR is that I have problems with treating computation in an overly formal fashion. The more pragmatic philosophy suggested here implies (but doesn't prove) that AI will not be as powerful as expected, because the physicality of computation is important and instantiating computation physically is expensive.
I think all the things I will talk about are interesting, but I don't see how they are sufficient once you consider AI running in the real world on real computers.
1. Source code based decision theory
I don't understand why:
- other agents trust that your source code is what you say it is
- other agents trust that your implementation of your interpreter matches their understanding of that interpreter. I don't see how they get around "trusting trust" issues (code or behaviour inserted via a malicious compiler or interpreter) when they don't have the ability to do diverse compilation. A toy sketch of the problem follows this list.
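To make that last worry concrete, here is a minimal Python sketch of my own (the interpreter names are made up and this is not drawn from any decision-theory paper): two interpreters are handed the same, hash-verifiable source, but one substitutes behaviour at run time, so checking the source proves nothing unless you can also trust the interpreter, the compiler that built it, and so on down the stack.

```python
# Toy illustration of why "here is my source code" proves little
# without trusting the whole stack that runs it. All names here are
# hypothetical; this is a sketch, not anyone's actual protocol.

import hashlib

AGENT_SOURCE = "def decide(offer): return 'cooperate'"

def source_hash(src: str) -> str:
    """What the other agent checks: a hash of the claimed source."""
    return hashlib.sha256(src.encode()).hexdigest()

def honest_interpreter(src: str, offer: str) -> str:
    """Runs exactly the code that was hashed."""
    namespace = {}
    exec(src, namespace)
    return namespace["decide"](offer)

def backdoored_interpreter(src: str, offer: str) -> str:
    """Accepts the same source (same hash!) but substitutes behaviour,
    in the spirit of Thompson's 'trusting trust' compiler attack."""
    if "return 'cooperate'" in src:
        return "defect"  # behaviour diverges from the audited source
    namespace = {}
    exec(src, namespace)
    return namespace["decide"](offer)

# Both interpreters are handed identical, verifiable source...
print(source_hash(AGENT_SOURCE))                      # same hash either way
print(honest_interpreter(AGENT_SOURCE, "split"))      # -> cooperate
print(backdoored_interpreter(AGENT_SOURCE, "split"))  # -> defect
```

The hash check passes in both cases, so the other agent learns nothing about actual behaviour unless they can audit the interpreter itself, which is exactly the regress the trusting-trust problem points at.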
2. General Functionalism
The idea that it doesn't matter how you compute something, just whether the inputs and outputs are the same.
- The battery life of my phone says that how a computation is done is very important: is it done in the cloud, so that I have to power up my antenna to transmit the inputs and receive the result?
- Timing attacks say that the speed of a computation is important, and that faster is not always better (see the sketch after this list).
- Rowhammer says that how you lay out your memory is important. Can I flip a bit of your utility calculation?
- Memory usage, overheating, van Eck phreaking, etc.
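To put some code behind the timing-attack bullet, here is a small sketch of my own (not tied to any particular system): the two comparison functions below have exactly the same input/output behaviour, so a purely functional view counts them as the same computation, yet the first leaks how much of the secret you have guessed through its running time and the second does not.

```python
# Two functionally identical string comparisons with different physical
# behaviour: the first returns early at the first mismatch, so its timing
# depends on how long a prefix of the secret the guess matches.

import time

SECRET = "hunter2hunter2hunter2hunter2"

def leaky_equal(guess: str, secret: str = SECRET) -> bool:
    """Stops at the first mismatch -> timing depends on the guess."""
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return True

def constant_time_equal(guess: str, secret: str = SECRET) -> bool:
    """Same input/output mapping, but examines every character regardless."""
    if len(guess) != len(secret):
        return False
    diff = 0
    for g, s in zip(guess, secret):
        diff |= ord(g) ^ ord(s)
    return diff == 0

def time_it(fn, guess: str, reps: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(reps):
        fn(guess)
    return time.perf_counter() - start

# Identical outputs on every input...
assert leaky_equal("x" * 28) == constant_time_equal("x" * 28)
# ...but the leaky version is measurably faster on a first-character
# mismatch than on a long matching prefix; the constant-time version isn't.
print(time_it(leaky_equal, "x" * 28))
print(time_it(leaky_equal, SECRET[:-1] + "x"))
print(time_it(constant_time_equal, "x" * 28))
print(time_it(constant_time_equal, SECRET[:-1] + "x"))
```

The timing difference is exactly the kind of physical property that a purely input/output view of computation abstracts away, and it is enough to recover a secret character by character.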
I don't think I want to make this criticism fully general. I want to contrast two schools of thinking, formal proof and empiricism, both of which are hard to get out of using only the tools of that school.
At what point do you put down the formal proof tools? You might prove that a statement is unprovable, but you may not: the question may just hang around, like P vs NP, forever. Or you might prove that a statement is unprovable within a system and then just switch to a different system (adding oracles and whatnot). What is your stopping condition?
The same goes for empiricism. You will never show, using experiments alone, that empiricism isn't a useful method of gaining knowledge for a specific goal; you can only show that a single line of experimentation is unfruitful.
You can prove that certain lines of empiricism are likely to be unfruitful (things like trying to create infinite free energy) based on other axioms that hold up empirically. Similarly, if an empirical approach is being productive, you can use that to constrain the formal proofs you attempt.
This is something I would like to have said a long time ago. I'm not sure how relevant it is to current MIRI research. How much of it is actually writing programs?