You have the tools necessary to figure this out
I think it would be a good idea to just start applying now, to get a feel for how hirable you specifically are and what you specifically could be doing now to bounce back quickly if you get let go. You might be more valuable than you think.
This has come up from time to time for me.
The source seemed genuine: https://old.reddit.com/r/artificial/comments/1gq4acr/gemini_told_my_brother_to_die_threatening/lwv84fr/?context=3 but I'm less sure now.
My concerns about AI risk have mainly taken the form of intentional ASI misuse, rather than the popular fear here of an ASI that was built to be helpful going rogue and killing humanity to live forever / satisfy some objective function that we didn't fully understand. What has caused me to shift camps somewhat is the recent Gemini chatbot conversation that's been making the rounds: https://gemini.google.com/share/6d141b742a13 (scroll to the end).
I haven't seen this really discussed here, so I wonder if I'm putting too much weight on it.
It's completely valid. And we can simplify it further to:
not Consistent(ZFC) -> not Consistent(ZFC + not Consistent(ZFC))
because if a set of axioms is already inconsistent, then it's inconsistent with anything added. But you still won't be able to actually derive a contradiction from this.
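Spelled out in provability notation (my notation, nothing from the thread), where "$T$ is inconsistent" means $T \vdash \bot$:

$$\mathrm{ZFC} \vdash \bot \;\Longrightarrow\; \mathrm{ZFC} \cup \{\neg\mathrm{Con}(\mathrm{ZFC})\} \vdash \bot,$$

which holds because a derivation of a contradiction from a set of axioms remains a derivation of a contradiction from any superset of those axioms.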
Edit: I think the right thing to do here is to look at models of PA + not Consistent(PA). I can't find a nice treatment of this at the moment, but here's a possibly wrong one by someone who was learning the subject at the time: https://angyansheng.github.io/blog/a-theory-that-proves-its-own-inconsistency
Let's back up here and clarify definitions before invoking any theorems. In the language of set theory, we have a countably infinite set of finite statements. Some statements imply others. A subset of these statements is said to be consistent if they can all be assigned the value true such that, following the basic rules of logic, one does not arrive at a contradiction.
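To make the definition concrete with a toy example of my own: the set $\{p,\; p \rightarrow q,\; \neg q\}$ is inconsistent, since assigning all three statements to true forces $q$ to be both true and false.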
The compactness theorem is helpful when the set of axioms is infinite. ZFC is a finite set of axioms, so let's ignore everything about finite subsets of the axioms and the compactness theorem; it's not relevant. [Edit: as indicated by Amalthea's reaction, this is wrong; some "axioms" of ZF are actually infinite axiom schemas, such as replacement.]
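For reference, the standard statement of compactness (a textbook fact, not specific to this thread):

$$T \text{ has a model} \;\iff\; \text{every finite subset of } T \text{ has a model},$$

or equivalently (via the completeness theorem), $T$ is consistent iff every finite subset of $T$ is consistent.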
I'll now rewrite your last sentence as:
ZFC + not Consistent(ZFC) has no model <-> not Consistent(ZFC + not Consistent(ZFC))
This is true (the two sides are equivalent by Gödel's completeness theorem: a first-order theory has a model iff it is consistent) but irrelevant. Assuming ZFC is consistent, ZFC will not be able to prove its own consistency, so [not Consistent(ZFC)] can be added as an axiom without affecting consistency. This means that ZFC + [not Consistent(ZFC)] would indeed have a model; I forget exactly how this goes, but I think it's something like "start with a model of ZFC, throw in a new constant c that's treated as a natural number and corresponds to the contradiction found in ZFC, then close". I think c is automatically treated as greater than every "actual" natural number (and showing that it can be added without issue involves, I think, the compactness theorem).
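Here's the compactness step as I'd reconstruct it (a standard textbook argument, though possibly not the exact construction I was trying to remember): let $c$ be a fresh constant symbol and consider

$$T \;=\; \mathrm{ZFC} \,\cup\, \{\, c > \underline{n} \;:\; n \in \mathbb{N} \,\},$$

where $\underline{n}$ is the numeral for $n$. Any finite subset of $T$ mentions only finitely many numerals, so it is satisfied in any model of ZFC by interpreting $c$ as a sufficiently large natural number of that model. By compactness, $T$ has a model, and in that model $c$ is greater than every standard natural number.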
> For example, if you ask mathematicians whether ZFC + not Consistent(ZFC) is consistent, they will say "no, of course not!"
Certainly not a mathematician with any background in logic.
> Similarly, if we have the Peano axioms without induction, mathematicians will say that induction should be there, but in fact you cannot prove this fact from within Peano
What exactly do you mean here? That the Peano axioms minus induction do not adequately characterize the natural numbers because they have nonstandard models? Why would I then be surprised that induction (which does characterize the natural numbers) can't be proven from the remaining axioms?
> and given induction mathematicians will say transfinite induction should be there.
Transfinite induction is a consequence of ZF that makes sense in the context of sets. Yes, it can prove additional statements about the natural numbers (e.g. that Goodstein sequences terminate), but why would it be added as an axiom when the natural numbers are already characterized up to isomorphism by the Peano axioms? How would you even state it as an axiom in the language of the natural numbers? (That last question is non-rhetorical.)
Are you familiar with Kelly betting? The point of maximizing expected log-wealth instead of pure expected value isn't that happiness grows on a logarithmic scale or whatever; it's that it maximizes long-run growth of wealth. This kills off bets where "0" is on the table (as log(0) is minus infinity); whether or not that's appropriate is still an interesting topic for discussion because, as you mentioned, x-risks exist anyway.
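A minimal numerical sketch (my own illustration, using the standard binary-bet setup, where the closed-form Kelly fraction is f* = p - (1 - p)/b):

```python
import numpy as np

def expected_log_growth(f, p, b):
    """Expected log-growth per round betting a fraction f of wealth:
    with probability p the bet pays b per unit staked (wealth -> 1 + b*f),
    otherwise the stake is lost (wealth -> 1 - f)."""
    if f >= 1.0 and p < 1.0:
        return -np.inf  # losing even once zeroes out wealth: log(0) = -inf
    return p * np.log(1 + b * f) + (1 - p) * np.log(1 - f)

p, b = 0.6, 1.0  # 60% win probability at even odds (illustrative numbers)
fs = np.linspace(0.0, 0.99, 1_000)
best = fs[np.argmax([expected_log_growth(f, p, b) for f in fs])]
kelly = p - (1 - p) / b  # closed-form Kelly fraction for a binary bet
print(best, kelly)  # both come out near 0.2
```

Note that betting the full bankroll (f = 1) gets expected log-growth of minus infinity no matter how favorable the odds, which is exactly the "kills off bets where 0 is on the table" behavior.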
By the way, this is my first non-question post on LessWrong after lurking for almost a year. I've made some guesses about the mathematical maturity I can assume, but I'd appreciate feedback on whether I should assume more or less in future posts (and general feedback about the writing style, either here or in private).