My example is that when people say "I enjoy life" they mean actually enjoying life as a whole, not something like "I'm glad my life is net-positive" or whatever.
Okay, I kinda understand where I'm wrong spiritually-intuitively, but I still don't understand where I'm wrong formally. Like, which inference in the chain
not Consistent(ZFC) -> some subsets of ZFC don't have a model -> some subsets of ZFC + not Consistent(ZFC) don't have a model -> not Consistent(ZFC + not Consistent(ZFC))
is actually invalid?
The completeness theorem states that every consistent countable first-order (FO) theory has a model. The compactness theorem states that an FO theory has a model iff every finite subset of it has a model. Both theorems are provable in ZFC.
Therefore:
Consistent(ZFC) <-> all finite subsets of ZFC have a model, hence
not Consistent(ZFC) <-> some finite subset of ZFC has no model ->
some finite subset of ZFC + not Consistent(ZFC) has no model <->
not Consistent(ZFC + not Consistent(ZFC)),
all of this proven in ZFC + not Consistent(ZFC).
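In symbols (my rendering, not part of the original argument), write $T = \mathrm{ZFC} + \neg\mathrm{Con}(\mathrm{ZFC})$; the chain becomes:

$$
\begin{aligned}
\neg\mathrm{Con}(\mathrm{ZFC}) &\leftrightarrow \text{some finite } S \subseteq \mathrm{ZFC} \text{ has no model} && \text{(completeness + compactness)} \\
&\rightarrow \text{some finite } S \subseteq T \text{ has no model} && (\text{any } S \subseteq \mathrm{ZFC} \text{ is also } \subseteq T) \\
&\leftrightarrow \neg\mathrm{Con}(T) && \text{(compactness + completeness)}
\end{aligned}
$$

and since $T$ proves its own extra axiom $\neg\mathrm{Con}(\mathrm{ZFC})$, the whole chain yields $T \vdash \neg\mathrm{Con}(T)$.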
You are making an error here: ZFC + not Consistent(ZFC) != ZFC.
Assuming ZFC + not Consistent(ZFC), we can prove Consistent(ZFC), because inconsistent systems can prove everything, and ZFC + not Consistent(ZFC) + Consistent(ZFC) is, in fact, inconsistent. But this says nothing about the consistency of ZFC itself, because you can freely substitute any sufficiently powerful system for ZFC. If you assume an inconsistent system, then system + not Consistent(system) is still inconsistent; if you assume a consistent system, then system + not Consistent(system) proves its own inconsistency by the reasoning above. So the derivation can't tell you whether the assumed system is consistent or not.
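To spell out the two cases in symbols (my summary, writing $T = \mathrm{ZFC} + \neg\mathrm{Con}(\mathrm{ZFC})$ and $\vdash$ for provability): if ZFC is inconsistent, then $T$ extends an inconsistent theory, so $T$ is inconsistent and trivially $T \vdash \neg\mathrm{Con}(T)$; if ZFC is consistent, then by Gödel's second incompleteness theorem

$$\mathrm{Con}(\mathrm{ZFC}) \implies \mathrm{Con}(T), \quad\text{and yet}\quad T \vdash \neg\mathrm{Con}(T)$$

by the chain above. Either way $T \vdash \neg\mathrm{Con}(T)$, so deriving $\neg\mathrm{Con}(T)$ inside $T$ tells you nothing about which case you are in.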
There are no properties of a brain which make that brain "you", except for the program it runs.
I agree with your technical points, but I don't think we could particularly have expected the other path. Safety properties of LLMs seem desirable from an extremely safety-pilled point of view, not from the perspective of the average capabilities researcher, and RL seems to be The Answer to many learning problems.
I agree that lab leaders are not in a much better position; I just think that lab leaders causally screen off the influence of their subordinates, while the incentives of the system causally screen off the lab leaders.
Isn't this just the no free lunch theorem? For every computable decision procedure you can construct an environment that predicts that procedure's exact output and reacts so as to do maximum damage, making the decision procedure perform worse than random action selection.
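A minimal sketch of that construction in Python (names and payoffs are my own illustration, assuming the environment can simulate the agent):

```python
def adversarial_env(decision_procedure, observation, actions):
    """Environment that simulates the agent and punishes its choice.

    Runs the (computable, deterministic) decision procedure on the
    observation it is about to present, then assigns the worst payoff
    to exactly the action the procedure will pick.
    """
    predicted = decision_procedure(observation, actions)
    return {a: (0.0 if a == predicted else 1.0) for a in actions}


def greedy_agent(observation, actions):
    # Stand-in for any fixed computable policy.
    return actions[0]


actions = ["a", "b", "c"]
payoffs = adversarial_env(greedy_agent, observation=None, actions=actions)

print(payoffs[greedy_agent(None, actions)])   # the simulated agent always receives 0.0
print(sum(payoffs.values()) / len(actions))   # uniform random play averages ~0.67
```

In this sketch, any agent the environment can simulate gets the minimum payoff with certainty, while uniformly random action selection, whose individual choices the environment cannot predict, gets 2/3 in expectation.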
The nice thing about being a coward is that once you notice you can just stop.
- Eliezer Yudkowsky and lintamande, Planecrash, the Woman of Irori
After yet more news about decentralized training of LLMs, I suggest declaring the assumption "AGI won't be able to find hardware to function autonomously" outdated.