
Helping to enforce the DSA might be one opportunity. The DSA is a constraining piece of legislation, but the team tasked with monitoring online platforms and enforcing it is understaffed, especially in the early days (these roles were actually on the 80,000 Hours job board). So there could be an opportunity, e.g., in finding ways to help them automatically detect or verify compliance issues, if they accept such contributions.

As for Tournesol, their website doesn't appear to have changed much over the last year, so I suppose the software is pretty mature. They also run other projects, and they foster a community of French volunteers interested in recommendation algorithms. It depends on whether such projects could have a large-scale impact.

What do you think is the main issue preventing companies from making more ethical recommendation algorithms? Is it the difficulty of determining objectively what is accurate and ethical? Or is it more about the incentives, like an unwillingness to sacrifice addictiveness and part of their audience?

Good recommendation engines are really important for our epistemic environment, in my opinion more so than, for example, prediction markets, because they affect so much of the content that people ingest in their daily lives, at a large scale.

The tough question is how tractable it is. Tournesol has some audience, but also seems to struggle to scale it up despite pretty mature software. I really don't know how effective it would be to pressure companies like Facebook or TikTok, to push for regulation, or to conduct more research on how to improve recommendation algorithms. It seems worth investigating whether there are cost-effective opportunities, whether through grants or job recommendations.


Regarding coherent extrapolated volition, I recently read Bostrom's paper Base Camp for Mt. Ethics, which presents a slightly different alternative and has challenged my views about morality.

One interesting point is that at the end (§ Hierarchical norm structure and higher morality), he proposes a way to extrapolate human morality that seems relatively safe and easy for superintelligences to implement. It also preserves moral pluralism, which is great for reaching a consensus without fighting each other (no need to pick one single moral framework like consequentialism or deontology, or a particular set of values).

Roughly, higher moral norms are defined as the moral norms of bigger, more inclusive groups. For example, the moral norms of a civilization are higher in the hierarchical structure than the moral norms of a family. But you can extrapolate further, up to what he calls the "Cosmic host", which can take into account the general moral norms of speculative civilizations of digital minds or aliens...

As the video says, labeling noise becomes more important as LLMs get closer to 100%. Does making a version 2 look worthwhile? I suppose an LLM could be used to automatically detect most problematic questions, and a human could then verify, for each flagged question, whether it needs to be fixed or removed.
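As a rough sketch of that triage pipeline (assuming the OpenAI Python client; the model name, prompt, and data format are all placeholders, not a recommendation of a specific setup):

```python
# Sketch of an LLM-assisted triage pass over a benchmark's labeled questions.
# Flagged items go to a human reviewer, who decides whether to fix or remove each one.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

dataset = [
    {"question": "What is the boiling point of water at sea level?",
     "answer": "100 °C"},
    # ... the rest of the benchmark's questions and labeled answers
]

def looks_problematic(question: str, answer: str) -> bool:
    """Ask the model whether the labeled answer looks wrong or ambiguous."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": ("Does the labeled answer look incorrect or ambiguous? "
                        "Reply YES or NO.\n\n"
                        f"Question: {question}\nLabeled answer: {answer}"),
        }],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Only the flagged subset needs human verification.
flagged = [item for item in dataset
           if looks_problematic(item["question"], item["answer"])]
```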

That's a crucial subject indeed.

What's more crazy is that, since AI can process information much faster than the human brain, it's probably possible to engineer digital minds that are multiple orders of magnitude more sentient than the human brain.[1] I can't precisely tell how much more sentient, but biological neurons have a typical peak frequency of 200 Hz, whereas for transistors, it can exceed 2 GHz (10 millions times more).[2]
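Spelling out the arithmetic behind that ratio (just the two figures above):

$$\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2 \times 10^{9}}{2 \times 10^{2}} = 10^{7}$$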

It's not us versus them. As Nick Bostrom says, we should search for "paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive".[1]


But is it important in utilitarianism to think about people? As far as I can tell, utilitarianism is not incompatible with things like panpsychism, where there could be sentience without delimited personhood.

Anyway, even if technically correct, I think this is a bit too complicated and technical for a short introduction to utilitarianism.

What about something simpler, like: "Utilitarianism takes into account the interests of all sentient beings." Perhaps we could add something on scale sensitivity, e.g.: "Unlike deontology, it is scale sensitive." I don't know if what I propose is good, but I think there is a need for simplification.

I find this paragraph confusing:

"Not to be confused with maximization of utility, or expected utility. If you're a utilitarian, you don't just sum over possible worlds; you sum over people."

It's not clear to me why utilitarianism is not about maximization of expected utility, notably because, for a utilitarian, I guess utility and welfare can be the same thing. And it feels pretty obvious that you sum it over people, but the notion of possible worlds is not so easy to interpret here.
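To make my confusion concrete, here is how I would write the two sums (my own notation, not the post's): the utilitarian part fixes the value of a single world by summing welfare over people, and expected utility then sums that value over possible worlds, weighted by probability:

$$U(w) = \sum_{i \in \text{people}} \text{welfare}_i(w), \qquad \mathbb{E}[U] = \sum_{w \in \text{worlds}} P(w)\,U(w)$$

On this reading the two summations compose rather than compete, which is why I don't see why a utilitarian wouldn't also maximize expected utility.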

Thanks for the insights. Actually, board game models don't play very well when they are losing heavily, or winning so heavily that it doesn't seem to matter. A human player would try to trick you and hope for a mistake. This is not necessarily the case with these models: they play as if you were as good as them, which makes their situation look unwinnable.

It's much the same with AlphaGo. AlphaGo plays incredibly well until there is a large imbalance. Surprisingly, AlphaGo also doesn't care about winning by 10 points or by half a point, and sometimes plays moves that look bad to humans just because it's winning anyway. And when it's losing, since it assumes its opponent is just as strong, it can't find a leaf in the tree search that ends up winning. Moreover, I suspect that removing a piece makes the position prone to distribution shift.
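A toy illustration of that point about lost positions (hypothetical numbers, nothing to do with AlphaGo's actual architecture): when the estimated win probability of every candidate move is essentially zero against a presumed-equal opponent, the argmax carries no pressure toward "tricky" swindle attempts.

```python
# Toy sketch: move selection by a pure value-maximizing engine in a lost position.
# Each value is a hypothetical estimated win probability against an equally
# strong opponent; in a clearly lost position they are all near zero.
move_values = {
    "solid_defense": 0.004,
    "swindle_attempt": 0.003,  # a human might pick this, hoping for a blunder
    "quiet_losing_move": 0.004,
}

# The engine just takes the argmax. Since it assumes near-perfect replies,
# the swindle's upside never shows up in its value estimate.
best_move = max(move_values, key=move_values.get)
print(best_move)  # -> "solid_defense", never "swindle_attempt"
```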