The economy can be positive-sum, i.e., the more people work, the more everyone gets. Do you think the UK in particular is in a situation where, instead, working more just lowers wages without getting more done?
Over the course of a few months, the functionality I wanted was progressively added to chatbox, so I'm content with that.
My current thinking is that it would be better were such hopes to be destroyed as quickly as possible. (This is not a confident opinion; it comes from about 15 minutes of vague thought.)
To be clear, I don't think it is generally right to say "doing the right thing is hopeless because no one else is doing it"; I usually prefer to "do the thing that, if everyone did it, would make the world better". My intuition is that it makes sense to try to coordinate on bottlenecks, like introducing compute governance and limiting FLOPs, but not on a specific incremental improvement of AI techniques: the people who think things like "I will refrain from using this specific AI sub-technique because it increases x-risk" are not coordinated enough to self-coordinate at that level of detail, and not powerful enough to have an influence through small changes.
(Again, I am not confident; I can imagine paths where I'm wrong, but I haven't worked through them.)
(Conflict of interest disclosure: I collaborate with people who started developing this kind of stuff before Meta.)
I wonder whether stuff like "turn off the wifi" is about costly signals? (My first-order opinion is still that it's dumb.)
I started reading, but I can't work out what the parity problem is from the section that ought to define it.
My guess: the parity problem is to find the set S given black-box access to the function. Is that right?
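To check that I'm parsing it the same way, here is a minimal sketch of my reading (the names f_S and recover_S, and the assumption that "black-box access" means chosen queries, are mine, not from the post):

```python
# Minimal sketch of my reading of the parity problem.
# Assumption (mine): the target is f_S(x) = XOR of the bits x_i with i in S,
# and we can query f_S on inputs of our choice.

n = 8
S = {1, 4, 6}  # hidden set; the problem is to recover it


def f_S(x):
    """Parity of the bits of x indexed by the hidden set S."""
    return sum(x[i] for i in S) % 2


def recover_S(oracle, n):
    """With chosen queries this is easy: query each standard basis vector
    and keep the indices where the oracle returns 1."""
    recovered = set()
    for i in range(n):
        e_i = [0] * n
        e_i[i] = 1
        if oracle(e_i) == 1:
            recovered.add(i)
    return recovered


print(recover_S(f_S, n) == S)  # True
```

With chosen queries recovering S is trivial, as above, so I suspect the interesting version uses random labeled examples (solvable by linear algebra over GF(2)) or noisy labels, which is the learning-parity-with-noise problem and believed to be hard.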
I think I prefer Claude's attitude as an assistant. The other two look too greedy to be wise.
Referring to the section "What is Intelligence Even, Anyway?":
I think AIXI is fairly described as a search over the space of Turing machines. Why do you think otherwise? Or are you making a distinction at a more granular level?
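For concreteness, here is the AIXI action rule as I remember it from Hutter (treat this as a sketch; I may be garbling details):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}$$

The inner sum ranges over all programs q for the universal machine U that are consistent with the interaction history, weighted by 2^{-ℓ(q)}, which is the sense in which I'd call it a search over the space of Turing machines.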
When you say "true probability", what do you mean?
The current hypotheses I have about what you mean are (not all mutually exclusive):
Anton Leicht says evals are in trouble as something one could use in a regulation or law. Why? He lists four factors. Marius Hobbhahn of Apollo also has thoughts. I’m going to post a lot of disagreement and pushback, but I thank Anton for the exercise, which I believe is highly useful.
I think there's one important factor missing: if evals were really used for regulation, they would be gamed. I trust an eval more when the company doesn't actually have anything at stake on it; if it did, there would be a natural tendency for evals to slide towards empty box-checking.
Hey I do this too!