" (...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties."
- Saharon Shelah
"A little learning is a dangerous thing ;
Drink deep, or taste not the Pierian spring" - Alexander Pope
As a true-born Dutchman I endorse Crocker's rules.
For most of my writing, see my shortforms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
The US should set in motion a process to gradually and peacefully hand over Taiwan to China over the next ~12 years.
China cares more about Taiwan than anything else. China is stronger and will be even stronger.
China's GDP is near that of the US, and at PPP it is even ~50% larger. China is ahead in many industries. The US Navy is a disaster. China has undertaken a massive military buildup, and Taiwan is much closer to China than to the US.
A peaceful handover has precedent - see the British handover of Hong Kong.
China will occupy Taiwan within the next twelve years, by peaceful means or by force, as it has repeatedly and clearly stated. The US military is no longer powerful enough to defend Taiwan against a determined Chinese attack. There is a small but serious chance that Xi will attack in 2027 - that is, next year.
One shouldn't be under the illusion that the Chinese government is all peaches, but its rhetoric has consistently been measured, peace-seeking, and focused on reclaiming just Taiwan - not on a generally expansionist or militarist ideology. Historically China, unlike Russia, has not been an expansionist power. Its last war was in 1979 (with Vietnam).
What would be the benefits? World peace is probably good. Hot and cold wars will definitely exacerbate AI race dynamics. Wars generically make everything more crazy. Safety will likely take a backseat to winning. On the other hand, wars also make governments more competent.
What would be the downsides? It would hurt the precedent of international law. It could be appeasement.
The main dynamic is avoiding a rally-round-the-flag effect in China: even if the US and Taiwan beat off an initial attack, they cannot win in the long run. The Chinese people would almost certainly redouble their resolve. This would strengthen the CCP, not weaken it.
Through a peaceful transition, bloodshed may be avoided. Concessions on AI safety and chip manufacturing might also be achieved. A very daring negotiation would station nukes in Japan and Korea. A very daring negotiation would yield Taiwan but leave the TSMC factories rigged with explosives to prevent a Chinese takeover of chip manufacturing, manned by an international peacekeeping force that would blow up the factories should the CCP try to seize them.
Hi Artemy. Welcome to LessWrong!
Agree completely with what Zach is saying here.
We need two facts:
(1) the world has a specific inductive bias
(2) neural networks have the same specific inductive bias
Indeed, no-free-lunch arguments suggest that any good learner must have a good inductive bias. In a sense, learning is 'mostly' about having the right inductive bias.
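The no-free-lunch point can be made concrete with a toy calculation (my illustration, not from the original comment): averaged uniformly over all possible labelings of the unseen inputs, any fixed predictor scores exactly 50%, so a learner can only beat chance if its inductive bias matches how the world actually assigns labels.

```python
from itertools import product

# Toy no-free-lunch check: four unseen binary-labeled inputs remain
# after training. Pick any fixed predictor for them, e.g.:
predictor = [0, 1, 1, 0]

def accuracy(predictions, truth):
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

# Average its accuracy over ALL 2^4 possible ground-truth labelings of
# the unseen inputs (the uniform prior over worlds that NFL assumes).
labelings = list(product([0, 1], repeat=4))
avg = sum(accuracy(predictor, t) for t in labelings) / len(labelings)

print(avg)  # 0.5 - without an inductive bias matching the world,
            # no predictor beats chance on average
```

The same 0.5 comes out for every choice of `predictor`, which is the NFL theorem in miniature: all the work has to be done by the prior over worlds.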
We call this specific inductive bias a simplicity bias. Informally, it agrees with our intuitive notion of low complexity.
Remark: conceptually this is a little tricky, since simplicity is in the eye of the beholder - by changing the background language we can make anything with high algorithmic complexity have low complexity. People have been working on this problem for a while, but at the moment it seems radically tricky.
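The language-dependence worry here is exactly what the invariance theorem of Kolmogorov complexity does and does not resolve (my gloss, not from the original comment): complexities relative to any two universal machines agree up to an additive constant, but that constant depends on the machine pair and can be made arbitrarily large.

```latex
% Invariance theorem: for any two universal machines $U$, $V$ there is a
% constant $c_{U,V}$, independent of $x$, such that for all strings $x$:
K_U(x) \le K_V(x) + c_{U,V}
% Since $c_{U,V}$ can be huge, for any fixed $x$ one can choose a
% reference machine $V$ that assigns $x$ very low complexity - this is
% the "simplicity is in the eye of the beholder" problem.
```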
IIRC Aram Ebtekar has a proposed solution that John Wentworth likes; I haven't understood it myself yet. I think what one wants to say is that the [algorithmic] mutual information between the observer and the observed is low, where the observer implicitly encodes the universal Turing machine used. In other words, the world is such that observers within it observe it to have low complexity with respect to their implicit reference machine.
Regardless, the fact that the real world satisfies a simplicity bias is to my mind difficult to explain without anthropics. I am afraid we may end up having to resort to an appeal to some form of UDASSA but others may have other theological commitments.
That's the bird's-eye view of simplicity bias. If you ignore the above issue and accept some formally-tricky-to-define but informally "reasonable" notion of simplicity, the question becomes: why do neural networks have a bias towards simplicity? Well, they have a bias towards degeneracy - and simplicity and degeneracy are intimately connected, see e.g.:
https://www.lesswrong.com/posts/tDkYdyJSqe3DddtK4/alexander-gietelink-oldenziel-s-shortform?commentId=zH42TS7KDZo9JimTF
One takeaway for me is that the American presidency is extremely powerful - especially when you don't care about passing legislation or popularity.
Unlimited pardons and vetoes have only been used sporadically in the past, limited mostly by convention. Just reading the constitutional text as written, the presidency is wildly powerful, especially with a Supreme Court following a unitary-executive interpretation and a lame-duck Congress that does not care to insist on its war-declaration prerogative.
I'm amused that the lightcone may have been lost in the 1790s, when the US constitutional framework was designed.
A friend of mine visited the recent 'eugenics'* conference in the Bay. It had all the prominent people in this area attending, IIRC, e.g. Steve Hsu. My friend asked around about how realistic these numbers were, and told me that the majority of serious people he spoke with were skeptical of IQ gains >~3 points.
*sorry I don't remember what it was called
I know you know this, but I think it is important to emphasize that your first point plausibly understates the problem with pragmatic/black-box methods. In the worst case an AI may simply encrypt its thoughts.
It's not even an oversight problem - there is simply nothing to 'oversee'. The AI will think its evil thoughts in private and comply with every eval you can cook up, until it's too late.
This looks exciting. As Jeremy said, the length raises an eyebrow.
Ramana Kumar!