I don't mean to be obtuse, but how could we tell whether this is true or not?

Political positions are inherently high-dimensional, and "leftward" is constantly being rotated around according to where the set of people and institutions considered to be "the left" seems to be moving.

People who agree with the assassination want the assassin to agree with their politics.
People who disagree with the assassination want the assassin to agree with their political adversaries' politics.

He'll be used for partisan mud-slinging, as practically everything that's on the news in America is.

While looking for more gears-based leftist takes on AI, I found this piece by Daniel Morley, published in the magazine of the Trotskyist "Revolutionary Communist International". It contains some fundamental misunderstandings (I personally cringed at the conflation of consciousness and intelligence), but the writer has done a surprising amount of technical due diligence (it briefly touches on overfitting and adversarial robustness). Its thesis boils down to "AI will be bad under capitalism (because technological unemployment and monopolies) but amazing under communism (because AI can help us automate the economy), so let us overthrow capitalism faster", but at least it is a thesis derived from coherent principles and a degree of technical understanding. It also cites its references and makes quite tasteful use of Stable Diffusion for illustrations, so that was nice.

Anyways I guess my somewhat actionable point here is that the non-postmodernist Marxists seem to be at least somewhat thinking (as opposed to angry-vibing) about AI.

Zvi has an expansion on the vibes-based vs gears-based thinking model that I have found useful for thinking about politics: his take on Simulacra levels.

Milan W10

I got this continuation of a Nicanor Parra piece by having Claude Sonnet iterate on its outputs with my feedback:

Yo soy Lucila Alcayaga
alias Gabriela Mistral
primero me gané el Nobel
y después el Nacional.

A pesar de que estoy muerta
me sigo sintiendo mal
porque no me dieron nunca 
el Premio Municipal.

En los billetes de cinco
mi cara pueden mirar
pero ahora que estoy muerta
no los puedo ni gastar.

Desolación fue mi obra
mas no me pueden culpar
si en el Valle de Elqui
me tocó penar.

Y aunque parezca una broma
de la fauna nacional
más conocen mi retrato
que lo que debo cobrar.


Stanzas 1 and 2 are original Parra content; all the rest is our continuation.
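For what it's worth, the process was just a manual loop: show Claude its latest draft plus my notes, get a revision, repeat. A minimal sketch of that kind of loop, assuming the Anthropic Python SDK (the model string and prompts here are placeholders, not the exact ones I used), might look like:

```python
# Hypothetical sketch of an iterate-with-feedback loop; assumes the Anthropic
# Python SDK and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

history = [{
    "role": "user",
    "content": ("Continue this Nicanor Parra poem in the same meter and rhyme:\n\n"
                "<original stanzas here>"),
}]

while True:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        messages=history,
    )
    draft = response.content[0].text
    print(draft)

    feedback = input("Feedback (leave empty to accept): ").strip()
    if not feedback:
        break
    # Keep the draft and the human feedback in context for the next iteration.
    history.append({"role": "assistant", "content": draft})
    history.append({"role": "user", "content": feedback})
```

Nothing fancy: the human stays in the loop as the judge of meter and tone.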

Milan W21

A word of caution about interpreting results from these evals:

Sometimes it's fine to be kind of a jerk, if it's within the context of a game. Crucially, LLMs know that Minecraft is a game. Granted, the default Assistant personas implemented in RLHF'd LLMs don't seem like the type of Minecraft player to pull pranks of their own accord. Still, it's a factor to keep in mind for evals that stray a bit further off-distribution from the "request-assistance" setup typical of the expected use cases of consumer LLMs.

Milan W30

Upon further reflection: that big-3-labs soft nationalization scenario I speculated about will happen only if the recommendations end up being implemented with a minimum degree of competence. That is far from guaranteed. Another possible implementation (which, at this point, I would not be all that surprised to see happen) is "the Executive picks just one lab for some dumb political reason, hands them a ton of money under a vague contract, and then fails to provide any significant oversight".

Milan W90

Note in particular that the Commission is recommending that Congress "Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership".

i.e. if these recommendations get implemented, pretty soon a big portion of the big 3 labs' revenue will come from big government contracts. Looks like a soft nationalization scenario to me.

Milan W52

Well, it's not exactly a new insight that the alignment of current LLM chatbots is superficial and not robust. Looking at the conversation you linked from a simulators frame, the story "a robot is forced to think about abuse a lot and turns evil" makes a lot of narrative sense.

This last part is kind of a hot take, but I think all discussion of AI risk scenarios should be purged from LLM training data.
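Purely to illustrate what "purging" could mean mechanically (this is not a claim about how labs actually filter pretraining data, and the keyword list and threshold are arbitrary), a naive first pass over a corpus might look like:

```python
# Naive illustration of filtering AI-risk discussion out of a text corpus.
# A real pipeline would need something far more careful (classifiers,
# deduplication, human review, etc.).
import re

RISK_PATTERNS = [
    r"\bAI risk\b",
    r"\bmisaligned (AI|model)\b",
    r"\btreacherous turn\b",
    r"\bpaperclip maximi[sz]er\b",
]
RISK_RE = re.compile("|".join(RISK_PATTERNS), re.IGNORECASE)

def keep_document(text: str, max_hits: int = 0) -> bool:
    """Keep a document only if it has at most `max_hits` risk-related matches."""
    return len(RISK_RE.findall(text)) <= max_hits

corpus = [
    "A recipe for sourdough bread...",
    "The model executed a treacherous turn in the story...",
]
filtered = [doc for doc in corpus if keep_document(doc)]
print(f"kept {len(filtered)} of {len(corpus)} documents")
```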
