All of Don Hussey's Comments + Replies

REALLY interesting line of thought. May I ask what prompted this?

Q Home
Just my emotions! And I had an argument about the value of the artists behind the art (can people value the source of the art? Is it likely that the majority of people will value it?). It's somewhat similar to Not for the Sake of Happiness (Alone). I decided to put the topic into a more global context (how far can you go replacing everything with AI content? What does it mean for the connection between people?). I'm very surprised that what I wrote was interesting to some people. What surprised you in my post?

I'm also interested in applying the idea of "prior knowledge" to values (or to argumentation, but not in a strictly probabilistic way). For example, maybe I don't value (human) art that much, or I'm very uncertain about how much I value it. But after considering some more global/fundamental questions ("prior values", "prior questions") I may decide that I actually do value human art quite a lot in certain contexts. I'm still developing this idea.

I feel (e.g. when reading arguments for why AGI "isn't that scary") that there aren't enough ways to describe disagreements. I hope to find a new way to show how and why people arrive at certain conclusions. In this post I tried to show the "fundamental" reasons for my specific opinion (worrying about AI content generation). I also tried to do a similar thing in a post about Intelligence (I wanted to know whether that type of thinking is rational or irrational).

There may be a measurement challenge here: How can the sentiment (pro/contra altruism) be tracked?

Once we figure that out, I'd be interested in seeing whether the sentiment is cyclical over time (particularly across decades).

I don't agree or disagree. You have interesting ideas, but they don't seem cohesive to me. I suggest giving them more thought, forming very specific hypotheses, and outlining arguments for and against.

Q Home
Even if my ideas are vague, shouldn't rationality be applicable even at that stage? The idea of levels of intelligence (or hard intelligence ceilings) isn't very specific either. People should have some opinions about "Are there unexpected/easy ways to get smarter?" even without my ideas. It's safe to assume Eliezer doesn't believe there's an unknown way to get smarter (or that finding such a way is easier than solving the Alignment problem). My more specific hypotheses are about guessing what such a way might be. But that's not what you meant, I think.