Dagon

Just this guy, you know?


Comments

I think I need to hear more context (and likely more words in the sentences) to understand what inconsistency you're talking about.  "good things are good" COULD be just a tautology, with the assumption that "good things" are relative to a given agent, and "good" is furtherance of the agent's preferences.  Or it could be a hidden (and false) claim of universality: "good things" are anything that a lot of people support, and "are good" means truly Pareto-preferred, with no harm to anyone.

Your explanation "by a reasonable person" is pretty limiting, there being no persons who are reasonable on all topics.  Likewise "actually good" - I think there's no way to know even after it happens.

Natural selection is often charged with having goals for humanity, and humanity is often charged with falling down on them.

Whoever's claiming this is really misunderstanding (or, often, misrepresenting) natural selection.  It has goals in exactly the same way that Gravity has goals.  They're also forgetting (or, often, ignoring) that natural selection works by REPLACEMENT, not by improvement in place or preservation of successes.

I think that's the OP's point, and he (and you) are correct.  Comcast provides, for most people, an incredible service that would have been unthinkably amazing only a few decades ago (I remember pricing out T1 lines in the mid '90s - low thousands per month for 1.5Mbps).  

It's ALSO true that the gap between what it seems like they could do and what they actually do is frustrating, especially around communication regarding outages, unexpected edge cases, slowdowns due to shared infrastructure, and bad configuration/provisioning.  I can't remember the last time they noticed an outage before I did, and even though it's NEVER my equipment (well-monitored Unifi gear), they won't talk to me until I reboot my damn laptop in addition to their modem.

This is a much smaller and less important distinction than your post makes it out to be.  Whether it's ANY want or just a very wide range of wants doesn't seem important to me.

I guess it's not impossible that an AGI will be irrationally over-focused on unquantified (and perhaps even unidentifiable) threats.  But maybe it'll just assign probabilities and calculate how best to pursue its alien and non-human-centered goals.  Either way, that doesn't bode well for biologicals.

Answer by Dagon · Mar 25, 2024

an assumption that objective norms / values do not exist. In my opinion AGI would not make this assumption

The question isn't whether every AGI would or wouldn't make this assumption, but whether the assumption is actually true, and therefore whether a powerful AGI could have a wide range of goals or values, including ones that are alien or contradictory to common human values.

I think it's highly unlikely that objective norms/values exist, and quite likely that weak versions of orthogonality (not literally ANY goals are possible, but enough bad ones to still be worried about) are true.  Even more strongly: it hasn't been shown that they're false, and we should take the possibility very seriously.

I think it's a confused model that calls it a paradox.  

Almost zero parts of a "free market" are market-decided top-to-bottom.  At some level there's a monopoly on violence that enforces a lot of ground rules, then a number of market-like interactions about WHICH corporation(s) you're going to buy from, work for, or invest in, and then, within that, some bundled authority about what that service, employment, or investment mechanism entails.

Free markets are great at the layer of individual decisions of relative value.  They are not great for some other kinds of coordination.

Do you have an underlying mission statement or goal that can guide decisions like this?  IMO, there are plenty of things that should probably continue to live elsewhere, with some amount of linking and overlap when they're LessWrong-appropriate.

One big question in my mind is "should LessWrong use a different karma/voting system for such content?".  If the answer is yes, I'd put a pretty high bar for diluting LessWrong with it, and it would take a lot of thought to figure out the right way to grade "wanted on LW" for wiki-like articles that aren't collections/pointers to posts.  

My model of utility (and the standard one, as far as I can tell) doesn't work that way.  No rational agent ever gives up a utilon - that is the thing they are maximizing.  I think of it as "how many utilons do you get from thinking about John Doe's increased satisfaction (not utilons, as you have no access to his, though you could say "inferred utilons") compared to the direct utilons you would otherwise get".

Those moral weights are "just" terms in your utility function.

And, since humans aren't actually rational, and don't have consistent utility functions, actions that imply moral weights are highly variable and contextual.
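
To make that concrete, here's a minimal sketch of how I'd write it down; the notation is mine, not anything standard from the thread:

$$U_{\text{you}}(x) \;=\; U_{\text{direct}}(x) \;+\; \sum_i w_i \, \hat{W}_i(x)$$

where $x$ is an outcome, $U_{\text{direct}}$ is the utility you get for yourself, $\hat{W}_i$ is your inferred estimate of person $i$'s satisfaction (their actual utilons being inaccessible to you), and $w_i$ is the moral weight you place on them.  On this picture nothing is ever "given up": maximizing $U_{\text{you}}$ already counts those terms.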

I may have the same bias, and may in fact believe it's not a bias.  People are highly mutable and contextual in how they perceive others, especially strangers, and especially when those others are framed as an outgroup.

The fact that a LOT of people could be killers and torturers in the right (or very wrong) circumstances doesn't seem surprising to me, and it doesn't contradict my belief that many or perhaps most people genuinely care about others under better framing and circumstances.

There is certainly a selection effect (likewise for modern work related to criminals): people with the ability to frame "otherness", and some drive toward individual power, tend to be drawn to it.  There were certainly lots of Germans who did not participate in those crimes, and there are lots of current humans who prefer to ignore the question of what violence is used against various subgroups*.

But there's also a large dollop of "humans aren't automatically ANYTHING".  They're far more complex and reactive than a simple view can encompass.

 * OH!  That's a bias that's insanely common.  I said "violence against subgroups" rather than "violence by individuals against individuals, motivated by membership in and identification with different subgroups".

The "people are extraordinarily more altruistic-motivated than they actually are" bias is so pernicious and widespread I've never actually seen it articulated in detail or argued for.

I haven't seen it articulated, or even mentioned.  What is it?  It sounds like this is just the common amnesia (or denial) of the rampant hypocrisy in most humans, but I've not heard that phrasing.  

Would it be fair to replace the first "are" (and maybe the second) with something that doesn't imply essentialism or identity?  "people are assumed to be" or "people claim to be", followed by "more altruistic than their behavior exhibits"?
