
We understand that we may be discounted or uninvited in the short term, but meanwhile our reputation as straight shooters with a clear and uncomplicated agenda remains intact.


I don't have any substantive comments, but I do want to express a great deal of joy about this approach.

I am really happy to see people choosing to engage with the policy, communications, and technical governance space with this attitude. 

We only have people who cry wolf all the time. I love that for them, and thank them for their service, which is very helpful. Someone needs to be in that role, if no one is going to be the calibrated version. Much better than nothing. Often their critiques point to very real issues, as people are indeed constantly proposing terrible laws.

The lack of something better calibrated is still super frustrating.


This mental (or emotional) move, where you manage to be grateful for people doing a highly imperfect job while also being super frustrated that no one is doing a genuinely good job: how are you doing that?

I see this often in rationalist spaces, and I'm confused about how people learn to do this. I would probably end up complaining about the failings of the best (highly inadequate) strategies we've got without the additional perspective of "how would things be if we didn't even have this?"

For people who remember learning how to do this, how did you practice?

It's definitely intuition gained from a few years of doing those kinds of problems.

Also, there's an important caveat that makes my intuition a bit less impressive: the problem statement sounded like an intro-physics problem, so I assumed away many ambiguities that would have made the solution particularly complicated to think through.

For example, though it's not specified, it matters whether your gas is in a fixed volume or not. If you assume the gas can expand, you get into solutions where you might need to know the boundary conditions at the edge of your gas, and you might need to figure out relative pressures and/or temperature gradients. Since the question doesn't specify any of that, I guessed that it's probably not that kind of problem.

My thought after reading the first sentence of your post, and before reading any of the comments, was that gases become less compressible at higher temperatures, which should make them more responsive to a pressure wave, raising the speed of sound in that medium.
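(For concreteness, the standard ideal-gas relation backs this intuition up; the ideal-gas model is my assumption here, since the question didn't specify one:

$$c=\sqrt{\frac{\gamma P}{\rho}}=\sqrt{\frac{\gamma k_B T}{m}}$$

where $\gamma$ is the adiabatic index, $k_B$ is Boltzmann's constant, and $m$ is the molecular mass. So the speed of sound grows as $\sqrt{T}$, and hotter gas does carry sound faster.)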

The less-misleading user interface seems good to me, but I have strong reservations about the other four interventions.

To use the shoggoth-with-smiley-face-mask analogy, the way the other strategies are phrased sounds like a request to create new, creepier masks for the shoggoth so people will stop being reassured by the smiley-face.

From the conversation with 1a3orn, I understand that the creepier masks are meant to depict how LLMs / future AIs might sometimes behave.

But I would prefer interventions that removed the mask altogether; that seems more truth-tracking to me.

(Relatedly, I'd be especially interested to see discussions (from anyone) on what creates the smiley-face-mask, and how entangled the mask is with the rest of the shoggoth's behaviour.)

Note: I believe my reservations are similar to some of 1a3orn's, but expressed differently.

drawn black balls that I do not have adequate defenses for the informational content of directly.

I don't know what drawing black balls means in this context. Would someone be able to clarify?

Do you believe the MIRI view is Orwellian? If so, could you elaborate?

I would love to see research that collects data on metrics like the cost of censorship and the effectiveness of propaganda, plots them over time, and asks whether there is any general trend, how much it has varied over the course of human history, and whether the latest AI techniques are changing these metrics significantly.

If you have not already seen it, this report from CSET discusses the extent to which something as capable as GPT-3 changes the cost and effectiveness of disinformation and propaganda.

There was also a recorded discussion/seminar on the same topics with the authors of the report.

I don't think it's exactly what you're looking for, but it seemed adjacent enough to be worth mentioning.