Nisan

Comments

Nisan · 31

OK. It's strange, then, that Wikipedia does not say this. On the contrary, it says:

The notion that bilateral trade deficits are per se detrimental to the respective national economies is overwhelmingly rejected by trade experts and economists.[2][3][4][5]

(This doesn't necessarily contradict your claim, but it would be misleading for the article to say this but not mention a consensus view that trade surpluses are beneficial.)

Nisan · 20

Do you believe running a trade surplus causes a country to be wealthier? If so, how do we know that?

Nisan · 84

And so, like OpenAI and Anthropic, Google DeepMind wants the United States' AI to be stronger than China's AI. And like OpenAI, it intends to make weapons for the US government.

One might think that in dropping its commitments not to cause net harm and not to violate international law and human rights, Google is signalling its intent to violate human rights. On the contrary, I believe it's merely allowing itself to threaten human rights, or rather, to build weapons that will enable the US government to threaten human rights in order to achieve its goals.

(That's the purpose of a military, after all. We usually don't spell this out because it's ugly.)

This move is an escalation of the AI race that makes AI war more likely. And even if war is averted, it will further shift the balance of power from individuals to already-powerful institutions. And in the meantime, the AIs themselves may become autonomous actors with their own purposes.

Nisan · 322

Google's AI principles used to say:

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.

On 2025-02-04, Google removed these four commitments. The updated principles seem consistent with making weapons, causing net harm, violating human rights, etc. As justification, James Manyika and Demis Hassabis said:

There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.

Nisan · 50

Update: It's even better than that. Not only will they make a lab order for you, but they will also pay for the test itself, at a steep discount to the consumer price.

Nisan · 30

I didn't know about ownyourlabs, thanks! While patients can order a small number of tests directly from Labcorp and Quest Diagnostics, it seems ownyourlabs will sell you a lab order for many tests that you can't get that way.

Nisan · 60

Exhibit 13 is a sort of Oppenheimer-meets-Truman email thread in which Ilya Sutskever says:

Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake.

Today, OpenAI republished that email (along with others) on its website (archived). But the above sentence is different in OpenAI's version of the email:

Yesterday while we were considering making our final commitment (even the non-solicit agreement), we realized we’d made a mistake.

I wonder which sentence is the one Ilya actually wrote.

Nisan · 40

Check out Exhibit 13...

Nisan · 153

Section 3.3(f)(iii):

Within 120 days of the date of this memorandum, DOE, acting primarily through the National Nuclear Security Administration (NNSA) and in close coordination with AISI and NSA, shall seek to develop the capability to perform rapid systematic testing of AI models’ capacity to generate or exacerbate nuclear and radiological risks. This initiative shall involve the development and maintenance of infrastructure capable of running classified and unclassified tests, including using restricted data and relevant classified threat information. This initiative shall also feature the creation and regular updating of automated evaluations, the development of an interface for enabling human-led red-teaming, and the establishment of technical and legal tooling necessary for facilitating the rapid and secure transfer of United States Government, open-weight, and proprietary models to these facilities.

It sounds like the plan is for AI labs to transmit models to government datacenters for testing. I anticipate at least one government agency will quietly keep a copy for internal use.

Nisan · 20

So was the launch code really 000000?
