Cool, that all sounds fair to me. I don't think we have any substantive disagreements.
Hmm, we seem to be talking past each other a bit. I think my main point in response is something like this:
In non-trivial settings, (some but not all) structural differences between programs lead to differences in input/output behaviour, even if there is a large domain for which they are behaviourally equivalent.
But that sentence lacks a lot of nuance! I'll try to break it down a bit more to find if/where we disagree (so apologies if a lot of this is re-hashing).
For most practical purposes, "calculating a function" is only and exactly a very good compression algorithm for the lookup table.
I think I disagree. Something like this might be true if you just care about input and output behaviour (it becomes true by definition if you consider that any functions with the same input/output behaviour are just different compressions of each other). But it seems to me that how outputs are generated is an important distinction to make.
I think the difference goes beyond 'heat dissipation or imputed qualia'. As a c...
I agree that there is a spectrum of ways to compute f(x) ranging from efficient to inefficient (in terms of program length). But I think that lookup tables are structurally different from direct ways of computing f because they explicitly contain the relationships between inputs and outputs. We can point to a 'row' of a lookup table and say 'this corresponds to the particular input x_1 and the output y_1' and do this for all inputs and outputs in a way that we can't do with a program which directly computes f(x). I think that allowing for compression prese...
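To make the structural point concrete, here is a toy sketch (in Python, with f(x) = x² as a stand-in for an arbitrary function; the function names and the domain are just for illustration):

```python
# Toy illustration: two programs with identical input/output behaviour on
# the domain {0, ..., 9}, but with different internal structure.

def f_direct(x):
    # Computes f(x) directly: no part of this program corresponds to any
    # particular input/output pair.
    return x ** 2

# The lookup table explicitly contains every input/output relationship:
# we can point to the entry (3, 9) and say "this row is about x_1 = 3, y_1 = 9".
f_table = {x: x ** 2 for x in range(10)}

def f_lookup(x):
    return f_table[x]

# Behaviourally equivalent on the shared domain:
assert all(f_direct(x) == f_lookup(x) for x in range(10))
```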
Regarding your request for a practical example.
Short Answer: It's a toy model. I don't think I can come up with a practical example which would address all of your issues.
Long Answer, which I think gets at what we disagree about:
I think we are approaching this from different angles. I am interested in the GRT from an agent foundations point of view, not because I want to make better thermostats. I'm sure that GRT is pretty useless for most practical applications of control theory! I read John Wentworth's post where he suggested that the entropy-reduction p...
I think we probably agree that the Good Regulator Theorem could have a better name (the 'Good Entropy-Reducer Theorem'?). But unfortunately, the result is most commonly known by the name 'Good Regulator Theorem'. It seems to me that, 55 years after the original paper was published, it is too late to try to re-brand.
I decided to use that name (along with the word 'regulator') so that readers would know which theorem this post is about. To avoid confusion, I made sure to be clear (right in the first few paragraphs) about the specific way that I was using the word 'regulator'. This seems like a fine compromise to me.
Minimising the entropy of Z says that Z is to have a narrow distribution, but says nothing about where the mean of that distribution should be. This does not look like anything that would be called "regulation".
I wouldn't get too hung up on the word 'regulator'. It's used in a very loose way here, as is common in old cybernetics-flavoured papers. The regulator is regulating, in the sense that it is controlling and restricting the range of outcomes.
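To illustrate the quoted point numerically, here is a toy sketch (the temperatures are just placeholders, with 70°F standing in for a comfortable room temperature): entropy depends only on the shape of the distribution over Z, not on where it is centred.

```python
import numpy as np

def entropy_bits(p):
    # Shannon entropy (in bits) of a discrete distribution p.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Two distributions over room temperature (°F) with the same narrow shape,
# one centred at 70°F and one at 350°F.
temps = np.arange(0, 400)
narrow_at_70 = np.where(np.abs(temps - 70) <= 1, 1 / 3, 0.0)
narrow_at_350 = np.where(np.abs(temps - 350) <= 1, 1 / 3, 0.0)

print(entropy_bits(narrow_at_70))   # ~1.585 bits
print(entropy_bits(narrow_at_350))  # ~1.585 bits: equally 'good' by the entropy criterion
```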
...Time is absent from the system as described. Surely a "regulator" should keep the value of Z near constant
When you are considering finite tape length, how do you deal with when or when ?
Should there be a 'd' on the end of 'Debate' in the title or am I parsing it wrong?
It is meant to read 350°F. The point is that this temperature is too high for the device to be useful as a domestic thermostat. I have changed the sentence to make this clear (and added a ° symbol). The passage now reads:
...Scholten gives the evocative example of a thermostat which steers the temperature of a room to 350°F with a probability close to certainty. The entropy of the final distribution over room temperatures would be very low, so in this sense the regulator is still 'good', even though the temperature it achieves is too high for it to be useful as a domestic thermostat.
Your reaction seems fair, thanks for your thoughts! It's a good suggestion to add an epistemic status - I'll be sure to add one next time I write something like this.
Got it, that makes sense. I think I was trying to get at something like this when I was talking about constraints/selection pressure (a system has less need to use abstractions if its compute is unconstrained or there is no selection pressure in the 'produce short/quick programs' direction) but your explanation makes this clearer. Thanks again for clearing this up!
Thanks for taking the time to explain this. This clears a lot of things up.
Let me see if I understand. So one reason that an agent might develop an abstraction is that it has a utility function that deals with that abstraction (if my utility function is ‘maximize the number of trees’, it's helpful to have an abstraction for ‘trees’). But the NAH goes further than this and says that, even if an agent had a very ‘unnatural’ utility function which didn't deal with abstractions (e.g. it was something very fine-grained like ‘I value this atom being in this e...
It's late where I am now so I'm going to read carefully and respond to comments tomorrow, but before I go to bed I want to quickly respond to your claim that you found the post hostile, because I don't want to leave it hanging.
I wanted to express my disagreements/misunderstandings/whatever as clearly as I could but had no intention to express hostility. I bear no hostility towards anyone reading this, especially people who have worked hard thinking about important issues like AI alignment. Apologies to you and anyone else who found the post hostile.
Thanks for taking the time to explain this to me! I would like to read your links before responding to the meat of your comment, but I wanted to note something before going forward because there is a pattern I've noticed in both my verbal conversations on this subject and the comments so far.
I say something like 'lots of systems don't seem to converge on the same abstractions', then someone else says 'yeah, I agree obviously' and starts talking about another feature of the NAH, without taking this as evidence against the NAH.
But most posts on the ...
Hello! My name is Alfred. I recently took part in AI Safety Camp 2024 and have been thinking about the Agent-like structure problem. Hopefully I will have some posts to share on the subject soon.
This sounds right. Implicitly, I was assuming that when the black box was fed an x outside of its domain it would return an error message or, at least, something which is not equal to f(x). I realise that I didn't make this clear in ...
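Concretely, the implicit assumption was something like the following sketch (purely illustrative; the finite domain, the table, and the error value are made up for the example):

```python
# Illustrative only: a 'black box' implemented as a finite lookup table.
# Inside its domain it agrees with f; outside, it returns an error value
# rather than something equal to f(x).
DOMAIN = range(10)
f_table = {x: x ** 2 for x in DOMAIN}  # x**2 standing in for f

def black_box(x):
    if x not in f_table:
        return "error: input outside domain"
    return f_table[x]
```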