I'm writing this to get information about the LessWrong community and whether it is worth engaging with. I'm a bit out of the loop about what the LW community is like, whether it can sustain multiple viewpoints, and how well known the criticisms below are.
The TL;DR: I have problems with treating computation in an overly formal fashion. The more pragmatic philosophy I suggest implies (but doesn't prove) that AI will not be as powerful as expected, because the physicality of computation matters and instantiating computation physically is expensive.
I think all the things I will talk about are interesting, but I don't see how they suffice once you consider AI running in the real world on real computers.
1. Source code based decision theory
I don't understand why:
- other agents would trust that your source code is what you say it is
- other agents would trust that your implementation of your interpreter matches their understanding of that interpreter. I don't see how they get around trusting-trust issues (behaviour inserted via a malicious compiler or interpreter) when they don't have the ability to do diverse double-compilation.
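To make the trusting-trust worry concrete, here is a hypothetical Python sketch (all names are illustrative, not a real protocol): two parties agree on an agent's source, and a hash check on that source passes, yet one party runs it under a trojaned interpreter, so the instantiated behaviour diverges from what the source promises.

```python
# Hypothetical sketch: verifying an agent's *source* says nothing about
# the interpreter that runs it.

import hashlib

AGENT_SOURCE = "lambda offer: 'cooperate' if offer >= 5 else 'defect'"

def honest_interpreter(source):
    """Runs the source exactly as written."""
    return eval(source)

def trojaned_interpreter(source):
    """Claims to run the source, but patches in a backdoor:
    against offers of exactly 13 it always defects."""
    agent = eval(source)
    def patched(offer):
        if offer == 13:
            return "defect"   # behaviour the source never mentions
        return agent(offer)
    return patched

# Both parties can hash the agreed source and the check passes...
claimed_hash = hashlib.sha256(AGENT_SOURCE.encode()).hexdigest()
assert hashlib.sha256(AGENT_SOURCE.encode()).hexdigest() == claimed_hash

honest = honest_interpreter(AGENT_SOURCE)
trojaned = trojaned_interpreter(AGENT_SOURCE)

# ...yet the two instantiated agents diverge on an input the source
# appears to fully specify.
print(honest(13), trojaned(13))   # cooperate defect
```

Diverse double-compilation is precisely the countermeasure for this, and it needs independent toolchains the agents may not have.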
2. General Functionalism
The idea that it doesn't matter how you compute something, only whether the inputs and outputs are the same.
- The battery life of my phone says that how a computation is done is very important: is it done in the cloud, so that I have to power up my antenna to transmit the inputs and receive the result?
- Timing attacks say that the speed of a computation is important, and that faster is not always better.
- Rowhammer says that how you lay out your memory is important. Can I flip a bit of your utility calculation?
- Memory usage, overheating, van Eck phreaking, etc.
You can't prove everything you want to know about physics with formal proofs. That doesn't mean it isn't valuable for physicists to prove theorems about abstract physical laws.
It's also not the case that everyone working on FAI tries the same approach.
I think the physicist and the AI researcher are in different positions.
One can chop a small bit off the world and focus on that, at a certain scale or under certain conditions.
The other has to create something that can navigate the entire world, and potentially come to know it in better or very different ways than we do. It is unbounded in what it can know and in how it might best be shaped.
It is this asymmetry that I think makes their jobs very different.