All of alenoach's Comments + Replies

Helping to enforce the DSA might be one opportunity. The DSA is a constraining piece of legislation, but the team tasked with monitoring online platforms and enforcing it is understaffed, especially in the early days (these roles were actually on the 80,000 Hours jobs board). So there could be an opportunity, e.g. in finding ways to help them automatically detect or verify compliance issues, if they accept such contributions.

As for Tournesol, their website doesn't appear to have changed much over the last year, so I suppose it is pretty mature. The... (read more)

2Charbel-Raphaël
I don't think Tournesol is really mature currently, especially for non-French content, and I'm not sure they try to do governance work; it's mainly a technical project, which is already cool.

What do you think is the main issue preventing companies from making more ethical recommendation algorithms? Is it the difficulty of determining objectively what is accurate and ethical? Or is it more about the incentives, like an unwillingness to sacrifice addictiveness and part of their audience?

1FrancescaG
I think incentives. Based on my recent reading of 'The Chaos Machine' by Max Fisher, I think it's closely linked to continually increasing engagement driving profit. Addictiveness unfortunately leads to more engagement, which in turn leads to profit. Emotive content (clickbait-style, extreme things) also increases engagement. Tools and moderation processes might be expensive on their own, but I think it's when they start to challenge the underlying business model of 'more engagement = more profit' that companies find themselves in a more uncomfortable position.

Good recommendation engines are really important for our epistemic environment, in my opinion more so than, for example, prediction markets, because they affect so much of the content that people ingest in their daily lives, on a large scale.

The tough question is how tractable it is. Tournesol has some audience, but also seems to struggle to scale up despite pretty mature software. I really don't know how effective it would be to pressure companies like Facebook or TikTok, to push for regulation, or to conduct more research on how to improve recommendation algorithms. It seems worth investigating whether there are cost-effective opportunities, whether through grants or job recommendations.

2Charbel-Raphaël
"I really don't know how tractable it would be to pressure compagnies" seems weirdly familiar.  We already used the same argument for AGI safety, and we know that governance work is much more tractable than expected.

Regarding coherent extrapolated volition, I have recently read Bostrom's paper Base Camp for Mt. Ethics, which presents a slightly different alternative and challenged my views about morality.

One interesting point is that at the end (§ Hierarchical norm structure and higher morality), he proposes extrapolating human morality in a way that seems relatively safe and easy for superintelligences to implement. It also preserves moral pluralism, which is great for reaching a consensus without fighting each other (no need to pick one single moral framework... (read more)

1RogerDearnaley
Having just read Bostrom's Base Camp for Mt. Ethics on your recommendation above (it's fairly short), I don't actually disagree with much of it. But there are a surprising number of things about ethics that I think are pretty important, basic, and relevant (and which I thus included in my sequence AI, Ethics, and Alignment) that he didn't mention at all, and these felt like significant or surprising omissions. For example, the fact that humans are primates, and that primates have a number of (almost certainly genetically determined) moral instincts in common, such as an instinctive expectation of fairness in interactions within the primate troop. Or, for another example, how one might come up with a more rational process for deciding between sets of norms for a society (despite every set of norms preferring itself over all alternatives) than the extremely arbitrary and self-serving social evolution of norms that he so ably describes.

As the video says, labeling noise becomes more important as LLMs get closer to 100%. Does making a version 2 look worthwhile? I suppose an LLM could be used to automatically detect the most problematic questions, and a human could then verify, for each flagged question, whether it needs to be fixed or removed.
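As a rough sketch of that flag-then-review pipeline (the `ask_llm` helper and the prompt are hypothetical stand-ins for whatever chat-completion API is available, not a reference to any particular benchmark's tooling):

```python
# Sketch: an LLM flags suspect benchmark items, a human makes the final call.

PROMPT = (
    "Benchmark question:\n{q}\nReference answer: {a}\n"
    "Is the reference answer ambiguous, outdated, or wrong? Reply YES or NO."
)

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (swap in an actual API client)."""
    raise NotImplementedError

def flag_suspect_items(dataset: list[dict]) -> list[dict]:
    """First pass: the LLM flags questions that look mislabeled."""
    return [
        item for item in dataset
        if ask_llm(PROMPT.format(q=item["question"], a=item["answer"]))
        .strip().upper().startswith("YES")
    ]

def human_review(flagged: list[dict]) -> list[dict]:
    """Second pass: a human decides, per flagged item, keep / fix / remove."""
    for item in flagged:
        item["verdict"] = input(f"keep/fix/remove? {item['question']!r} -> ")
    return flagged
```

The point of the two-stage design is that the LLM only needs decent recall on bad items; precision is handled by the much smaller human review queue.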

That's a crucial subject indeed.

What's crazier is that, since AI can process information much faster than the human brain, it's probably possible to engineer digital minds that are multiple orders of magnitude more sentient than the human brain.[1] I can't precisely tell how much more sentient, but biological neurons have a typical peak frequency of 200 Hz, whereas for transistors it can exceed 2 GHz (10 million times more).[2]
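Spelling out the arithmetic behind that factor (a back-of-the-envelope comparison of switching speeds, not a direct measure of sentience):

$$\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2 \times 10^{9}\ \text{Hz}}{2 \times 10^{2}\ \text{Hz}} = 10^{7}$$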

It's not us versus them. As Nick Bostrom says, we should search for "paths that will enable digital minds and biological ... (read more)

But is it important in utilitarianism to think about people? As far as I can tell, utilitarianism is not incompatible with views like panpsychism, where there could be sentience without delimited personhood.

Anyway, even if technically correct, I think this is a bit too complicated and technical for a short introduction to utilitarianism.

What about something simpler, like: "Utilitarianism takes into account the interests of all sentient beings"? Perhaps we could add something on scale sensitivity, e.g.: "Unlike deontology, it is scale sensitive." I don't know whether what I propose is good, but I think there is a need for simplification.

I find the following paragraph confusing:

"Not to be confused with maximization of utility, or expected utility. If you're a utilitarian, you don't just sum over possible worlds; you sum over people."

It's not clear to me why utilitarianism is not about maximization of expected utility, notably because, for a utilitarian, I'd guess utility and welfare can be the same thing. And it feels pretty obvious that you sum over people, but the notion of possible worlds is not so easy to interpret here.

2Vladimir_Nesov
A utility function is a function on a sample/probability space, in this case the space whose points (elementary events) are possible worlds. Expected utility is the expected value of a utility function over some subspace of a probability space; in this case its definition sums over possible worlds. Talking about people requires a different framing: to define a utility function that sums over people, you need to look inside each possible world.
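In symbols (a sketch; $W$, $P$, and $\text{welfare}_i$ are notation introduced here, not anything standard):

$$\mathbb{E}[U] = \sum_{w \in W} P(w)\, U(w), \qquad U(w) = \sum_{i \in \text{people}(w)} \text{welfare}_i(w)$$

The first sum, over worlds, is plain expected utility and is shared by any expected-utility maximizer; the distinctly utilitarian move is the second sum, over the people inside each world.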

Thanks for the insights. Actually, board game models don't play very well when they are losing so heavily, or winning so heavily, that the outcome doesn't seem to matter. A human player in a lost position would try to trick you and hope for a mistake. That's not necessarily the case with these models, which play as if you were as good as they are, and so treat the position as simply unwinnable.

It's much the same with AlphaGo. AlphaGo plays incredibly well until there is a large imbalance. Surprisingly, AlphaGo also doesn't care about winning by 10 points or by half a point, and someti... (read more)
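A toy sketch of why the margin doesn't matter to such systems (the `win_probability` function is a hypothetical stand-in for a learned value estimate, not AlphaGo's actual code):

```python
# Toy illustration: a policy that maximizes only P(win) is blind to margin.

def win_probability(position, move) -> float:
    """Stand-in for a learned estimate of P(win) after playing `move`."""
    raise NotImplementedError

def pick_move(position, legal_moves):
    # A move that wins safely by half a point and a move that wins by 10
    # points can both evaluate to ~0.99, so there is no pressure toward
    # the larger margin; likewise, in a lost position every move sits
    # near 0.0, so "swindle" attempts are never preferred.
    return max(legal_moves, key=lambda m: win_probability(position, m))
```

Under this objective, slack play when far ahead (or resignation-like play when far behind) falls out naturally, because nothing the model does changes P(win) very much.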