Roko

Roko

Since a few people have mentioned the Miller/Rootclaim debate:

My hourly rate is $200. I will accept a donation of $5000 to sit down, watch the entire Miller/Rootclaim debate (17 hours of video content plus various supporting materials), and write a 2000-word piece describing how I updated on it and why.

Feel free to message me if you want to go ahead and fund this.

Roko

Whilst far-UVC LEDs are not around the corner, I think the Kr-Cl excimer lamps might already be good enough.

When we wrote the original post on this, it was not clear how quickly covid was spreading through the air, but I think it is now clear that covid can hang around for a long time (on the order of minutes or hours rather than seconds) and still infect people.

It seems that a power density of 0.25 W/m^2 would probably be enough to sterilize air in 1-2 minutes, meaning that a 5 m × 8 m (40 m^2) room would need a 10 W source. Assuming 2% efficiency, that 10 W source needs 500 W electrical, which is certainly possible; in the days of incandescent lighting you would have had a few 100 W bulbs in a room anyway.
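A quick sanity check of that arithmetic (a minimal Python sketch; the irradiance target, room size, and 2% efficiency figure are the ones quoted above):

```python
# Far-UVC power arithmetic from the paragraph above.
target_irradiance = 0.25   # W/m^2, enough to sterilize air in ~1-2 minutes
room_area = 5 * 8          # m^2, for a 5 m x 8 m room
efficiency = 0.02          # ~2% wall-plug efficiency assumed for a Kr-Cl excimer lamp

uv_power = target_irradiance * room_area  # far-UVC output required
electrical_power = uv_power / efficiency  # electrical input required

print(f"far-UVC output needed:  {uv_power:.0f} W")          # -> 10 W
print(f"electrical input at 2%: {electrical_power:.0f} W")  # -> 500 W
```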

EDIT: Having looked into this a bit more, it seems that right now the low efficiency of excimer lamps is not a binding constraint because the legally allowed far-UVC exposure is so low.

"TLV exposure limit for 222 nm (23 mJ cm^−2)"

23 mJ per cm^2 per day averages out to only about 0.0027 W/m^2, roughly a hundredth of the 0.25 W/m^2 figure above, so the legal limit, not lamp power, is the binding constraint.

Source
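Spelling out that unit conversion (a minimal Python sketch; it assumes the dose limit is spread evenly over a full 24-hour day):

```python
# Convert the 222 nm TLV (23 mJ/cm^2 per day) into an average irradiance.
dose_j_per_m2 = 23e-3 * 1e4   # 23 mJ/cm^2 -> 230 J/m^2 (mJ -> J, cm^2 -> m^2)
seconds_per_day = 24 * 3600

avg_irradiance = dose_j_per_m2 / seconds_per_day  # W/m^2, averaged over the day
print(f"{avg_irradiance:.4f} W/m^2")  # -> ~0.0027 W/m^2
```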

Roko

I should have been clear: "doing things" is a form of input/output, since the AI must output some tokens or other signals to get anything done.

Roko

If you look at the answers, there is an entire "hidden" section of the MIRI website doing technical governance!

Roko

Why is this work hidden from the main MIRI website?

Roko

"Our objective is to convince major powers to shut down the development of frontier AI systems worldwide"

This?

Roko

Re: (2), this will only affect the current generated output; once the output is over, all of that state is reset and the only thing that remains is the model weights, which were set in stone at train time (see the sketch below).

Re: (1), "a LLM might produce text for reasons that don't generalize like a sincere human answer would": it seems that current LLM systems are pretty good at generalizing the way a human would, and in some ways they are better, due to being more honest, easier to monitor, etc.
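A minimal sketch of the point in (2), using the Hugging Face transformers library (GPT-2 here is just a convenient stand-in model): generation-time state such as the KV cache exists only inside the generate() call, and the weights are bit-identical before and after inference.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Snapshot one weight matrix before generating.
weights_before = model.transformer.h[0].attn.c_attn.weight.clone()

ids = tok("Hello", return_tensors="pt").input_ids
_ = model.generate(ids, max_new_tokens=20)  # KV cache is created and discarded within this call

# The weights are untouched by inference; only training changes them.
print(torch.equal(weights_before, model.transformer.h[0].attn.c_attn.weight))  # True
```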

Roko

But do you really think we're going to stop with tool AI, and not turn them into agents?

But if it is the case that agentic AI is an existential risk, then actors could choose not to develop it; that is a coordination problem, not an alignment problem.

We already have aligned AGI; we can coordinate to not build misaligned AGI.
