KatWoods

Deep Research was this for me, at first. Some of its summaries were just pleasant to read, they felt so information-dense and intelligent! Not like typical AI slop at all! But then it turned out most of it was just AI slop underneath anyway…

 

Can you elaborate on what you mean by this? Do you mean it's hallucinating a ton underneath? Or that the writing is somehow bad? Or something else? 


I provided one here!

Totally agree about LLMs. I've recently found it super helpful to give o3 Deep Research this prompt:

"X=[thing I want to research] 

Do a deep dive into X. Tell me the steelman arguments in favor of X.

Then tell me the steelman counter-arguments to X.

Then tell me the steelman counter-counter-arguments to X.

Then tell me the steelman counter-counter-counter-arguments to X.

Make sure to link to primary sources and the full thing that happened, to avoid things being quoted out of context."

It's been particularly helpful for investigating anything political. 
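For anyone who'd rather script this than paste it into the chat window, here's a minimal sketch of the same idea as a reusable template, assuming the OpenAI Python SDK. The steelman_report helper is my own, and the "o3" model name is just a placeholder; Deep Research proper runs inside ChatGPT, so a plain chat-completions call only approximates it.

```python
# Minimal sketch: wrap the steelman prompt above as a reusable template.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the
# environment. The "o3" model name and steelman_report() are placeholders,
# not anything from the original comment.
from openai import OpenAI

PROMPT_TEMPLATE = """X = {topic}

Do a deep dive into X. Tell me the steelman arguments in favor of X.

Then tell me the steelman counter-arguments to X.

Then tell me the steelman counter-counter-arguments to X.

Then tell me the steelman counter-counter-counter-arguments to X.

Make sure to link to primary sources and the full thing that happened, to avoid things being quoted out of context."""


def steelman_report(topic: str, model: str = "o3") -> str:
    """Send the steelman prompt for one topic and return the model's reply."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(topic=topic)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(steelman_report("four-day work weeks"))
```

Keeping the template as a single string makes it easy to tweak the number of counter-rounds or swap in whatever research-capable model you actually have access to.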

To be fair, I think you could make the case that the technological advancement of the Western European powers led to almost a DSA (decisive strategic advantage).

England had the biggest empire in the history of the world (population-wise). The countries that were technologically ahead of the others did kinda stomp on the rest of the world.

It wasn't literally one world power, but it got a sizeable fraction of the way there, especially if you consider European culture as a unit instead of just the UK.

Ah well. At least you can take credit for the name then. 


Very not important question: is Gene Smith your actual name or a pseudonym?

Either way, it's the perfect name for the author of this post. 

Hats off to you, Gene Smith.

[Image: a blacksmith in a traditional forge, hammering a glowing strand of DNA on an anvil as sparks fly, the scene lit by the fiery glow of the forge.]

Robert Miles has a great channel spreading AI safety content. There's also Rational Animations, Siliconversations, and In a Nutshell.

I think FLI does a lot of work in outreach + academia. 

Connor Leahy does a lot of outreach and he's one of my favorite AI safety advocates. 

Nonlinear doesn't do outreach to academia in particular, but we do target people working in ML, which is a lot of academia. 

AI Safety Memes does a lot of outreach but is focused on broad appeal, definitely not specifically academia. 

Pause AI and Stop AI both work on outreach to the broader public. 

CAIS does great outreach work. Not sure if they do anything academia-specific.

Are you on the Nonlinear Network? You can sort by the "content/media creation" category to find a bunch of funding-constrained AI safety orgs working on advocacy. A quick scan of the section shows 36.

You might be able to find more possibilities on the AI safety map too: https://map.aisafety.world

I saw somebody using one of the latest Google models for this. I forget which one, but it's the one that can see your screen as you type. It can be used to keep you focused.

(Haven't researched it yet, so might not work very well)


I see it called "goal guarding" in some papers. That seems like a pretty good term to use. 

I laughed out loud so many times reading this. Thank you for writing it. 

If I recall correctly, it was told that it had a goal, like optimizing traffic. Then it was given information from which it "discovered" that it was going to be turned off the next day.

That seems like the sort of thing that would happen pretty often. Imagine the current AIs that have access to your computer, read your emails, and see that you're going to switch to a different AI or install "updates" (which often means deleting the existing AI and replacing it with a new one).
