Nisan

Nisan Ω250

Update: We're back to "ensure". On 2025-05-05, Sam Altman said (archived):

[OpenAI's] mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.

Nisan 1615

Yes, you can ask for a lot more than that :)

Nisan 128

Yes. As a special case, if you destroy a bad old institution, you can't count on good new institutions springing up in its place unless you specifically build them.

Nisan 31

OK. It's strange, then, that Wikipedia does not say this. On the contrary, it says:

The notion that bilateral trade deficits are per se detrimental to the respective national economies is overwhelmingly rejected by trade experts and economists.[2][3][4][5]

(This doesn't necessarily contradict your claim, but it would be misleading for the article to say this without mentioning a consensus view that trade surpluses are beneficial.)

Nisan 20

Do you believe running a trade surplus causes a country to be wealthier? If so, how do we know that?

Nisan 84

And so, like OpenAI and Anthropic, Google DeepMind wants the United States' AI to be stronger than China's AI. And like OpenAI, it intends to make weapons for the US government.

One might think that in dropping its commitments not to cause net harm and not to violate international law and human rights, Google is signalling its intent to violate human rights. On the contrary, I believe it's merely allowing itself to threaten human rights — or rather, build weapons that will enable the US government to threaten human rights in order to achieve its goals.

(That's the purpose of a military, after all. We usually don't spell this out because it's ugly.)

This move is an escalation of the AI race that makes AI war more likely. Even if war is averted, it will further shift the balance of power from individuals to already-powerful institutions. And in the meantime, the AIs themselves may become autonomous actors with their own purposes.

Nisan 322

Google's AI principles used to say:

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.

On 2025-02-04, Google removed these four commitments. The updated principles seem consistent with making weapons, causing net harm, violating human rights, etc. As justification, James Manyika and Demis Hassabis said:

There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.

Nisan 50

Update: It's even better than that. Not only will they make a lab order for you, but they will also pay for the test itself, at a steep discount to the consumer price.

Nisan 30

I didn't know about ownyourlabs, thanks! While patients can order a small number of tests directly from Labcorp and Quest Diagnostics, it seems ownyourlabs will sell you a lab order for many tests that you can't get that way.
