It certainly depends on who's arguing. I agree that some sources online see this trade-off and end up on the side of not using flags after some deliberation, and I think that's perfectly fine. But this describes only a subset of cases, and my impression is that very often (and certainly in the cases I experienced personally) it is not even acknowledged that usability, or anything else, may also be a concern that should inform the decision.
(I admit though that "perpetuates colonialism" is a spin that goes beyond "it's not a 1:1 mapping" and is more convincing to me)
This makes me wonder: how could an AI figure out whether it had conscious experience? I always used to assume that from a first-person perspective it's clear when you're conscious. But this is kind of circular reasoning, as it assumes you have a "perspective" and are able to ponder the question. Now what does a, say, reasoning model do? If there is consciousness, how will it ever know? Does it have to solve the "easy" problem of consciousness first and apply the answer to itself?
In no particular order, because interestingness is multi-dimensional and they are probably all to some degree on my personal interesting Pareto frontier:
Random thought: maybe (at least pre-reasoning-models) LLMs are RLHF'd to be "competent" in a way that makes them less curious & excitable, which greatly reduces their chance of coming up with (and recognizing) any real breakthroughs. I would expect though that for reasoning models such limitations will necessarily disappear and they'll be much more likely to produce novel insights. Still, scaffolding and lack of context and agency can be a serious bottleneck.
Interestingly, the text-to-speech conversion of the "Text does not equal text" section is another very concrete example of this:
But what you're probably not aware of is that 0.8% of the US population ends up dying due to intentional homicide
That is an insane statistic. According to a bit of googling this indeed seems plausible, but I would still be interested in your source if you can provide it.
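As a rough back-of-envelope sanity check (the annual homicide rate of ~6 per 100,000 and the ~79-year life expectancy are assumed ballpark figures, not sourced values), one can estimate the lifetime risk like this:

```python
# Back-of-envelope lifetime homicide risk.
# Both inputs are rough assumed figures for the US, not official statistics.
annual_rate = 6 / 100_000   # assumed annual homicide deaths per capita
life_expectancy = 79        # assumed average lifespan in years

# Probability of dying by homicide at some point over a lifetime,
# treating each year as an independent draw at the annual rate.
lifetime_risk = 1 - (1 - annual_rate) ** life_expectancy
print(f"{lifetime_risk:.2%}")  # → 0.47%
```

That lands at roughly 0.5%, the same order of magnitude as the quoted 0.8%, so the claim is at least in a plausible range; the exact figure depends on which years' rates and which cohort assumptions go in.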
Downvoted for 3 reasons:
Or as a possible more concrete prompt if preferred: "Create a cost-benefit analysis for EU directive 2019/904, which demands that bottle caps of all plastic bottles are to remain attached to the bottles, with the intention of reducing littering and protecting sea life.

Output:
- key costs and benefits table
- economic cost for the beverage industry to make the transition
- expected change in littering, total over first 5 years
- QALYs lost or gained for consumers throughout the first 5 years"
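For illustration, the requested table could be tallied along these lines. Every number below is a made-up placeholder (including the EUR-per-QALY conversion), purely to sketch the structure such an analysis might take, not to estimate the directive's actual impact:

```python
# Toy structure for the requested cost-benefit table.
# All figures are hypothetical placeholders, NOT real estimates
# of EU directive 2019/904.
EUR_PER_QALY = 50_000  # placeholder willingness-to-pay per QALY

costs_eur = {
    "industry transition (one-off)": 1.0e9,                     # placeholder
    "consumer annoyance (2,000 QALYs over 5 yrs)": 2_000 * EUR_PER_QALY,
}
benefits_eur = {
    "litter cleanup avoided over 5 yrs": 3.0e8,                 # placeholder
    "sea-life / environmental damage avoided": 2.0e8,           # placeholder
}

total_costs = sum(costs_eur.values())
total_benefits = sum(benefits_eur.values())
print(f"costs:    {total_costs:,.0f} EUR")
print(f"benefits: {total_benefits:,.0f} EUR")
print(f"ratio:    {total_benefits / total_costs:.2f}")
```

The interesting work, as noted above, is in choosing the line items and conversion factors (e.g. how to price a QALY or a tonne of avoided litter), not in the final division.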
In the EU there's some recent regulation about bottle caps being attached to bottles, to prevent littering. (this-is-fine.jpg)
Can you let the app come up with a good way to estimate the cost-benefit ratio of this piece of regulation? E.g. (environmental?) benefit vs (economic? QALY?) cost/drawbacks, or something like that. I think coming up with good metrics to quantify here is almost as interesting as the estimate itself.
Some further examples: