Amalthea

Comments

When you have a role in policy or safety, it's usually a good idea not to voice strong opinions on any given company. If circumstances nevertheless compel you to do so, it's a big deal if you have personal incentives bearing on what you say - especially if they're not disclosed.

Might be good to estimate the date of the recommendation - the interview where Carmack mentioned this was in 2023, so a rough guess might be 2021/22?

It might not be legal reasons specifically, but some hard-to-specify mix of legal reasons/intimidation/bullying. While it's useful to discuss specific ideas, it's worth keeping in mind that Altman doesn't need to restrict his actions to any single avenue that can be neatly classified.

I'd like to listen to something like this in principle, but the timing is really unfortunate given the further information that's since been revealed, which makes it somewhat less exciting. It would be interesting to hear how/whether the participants' beliefs have changed.

Have you ever written anything about why you hate the AI safety movement? I'd be quite curious to hear your perspective.

I think the best bet is to vote for a generally reasonable party. Despite their many flaws, the Green Party or the SPD seem like the best choices right now. (The CDU seems too influenced by business interests; the current FDP is even worse.)

The alternative would be to vote for a small party with a good agenda to help signal-boost them, but I don't know who's around these days.

It's not an entirely unfair characterization.

Half a year ago, I'd have guessed that OpenAI's leadership, while likely misguided, was essentially well-meaning and driven by a genuine desire to confront a difficult situation. The recent series of events has made me update significantly against the trustworthiness and epistemic reliability of Altman and his circle. While my overall view of OpenAI's strategy hasn't really changed, the probability I assign to them "knowing better" has gone down dramatically.

Fundamentally, the OP is making the case that biorisk is an extremely helpful (though not exact) analogy for AI risk: we can gain understanding by looking at the ways it's analogous, and then refine that understanding further by analyzing the differences.

The point being made seems to be more about the analogy's place in the discourse than about the value of the analogy itself? (E.g. "The biorisk analogy is over-used" would then be less misleading.)

What do you think about building legal/technological infrastructure to enable a prompt pause, should it seem necessary?
