Theoretical systems are useful so long as you keep track of where they depart from reality.
Consider the following exchange:
Engineer: The programme is acquiring more memory than it is releasing, so it will eventually fill the memory and crash.
Computer Scientist: No it won't, the memory is infinite.
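The engineer's claim can be sketched in a few lines. This is an illustrative model, not code from any real program; the names and the per-step numbers are made up for the example.

```python
# Sketch of the engineer's point: if each step acquires more memory
# than it releases, the amount held grows without bound, so any real
# (finite-memory) machine eventually runs out. The idealised machine
# the computer scientist has in mind never does.

def net_usage(steps, acquired_per_step=3, released_per_step=2):
    """Memory units held after a number of steps of a leaky loop."""
    held = 0
    for _ in range(steps):
        held += acquired_per_step   # allocate
        held -= released_per_step   # free, but less than was allocated
    return held

# Usage grows linearly with the number of steps: net_usage(10) is 10,
# net_usage(1000) is 1000, and so on past any finite limit.
```

The disagreement is not about the arithmetic; it is about whether the "infinite memory" idealisation is the right model for the machine in front of you.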
Do the MIRI crowd make similar errors? Sure. Consider Bostrom's response to Oracle AI: he assumes that an Oracle can only be a general intelligence coupled to a utility function that makes it want to answer questions and do nothing else.
I think paraphrasings of "do what makes you happy" are fair as rationality quotes. What else are you gonna do?
I feel like there should be some constraint on harming group happiness while you "do what makes you happy."