All of mk54's Comments + Replies

mk5410

I’m not clear what you’re saying here. 

Are you saying there are specific beliefs that make infinite predictions which you regard as having a non-infinitesimal probability of being true? For example, trying to appeal to whatever may be running the simulation that could be our universe?

Alternatively, are you saying that religious beliefs are no more likely to be true than any arbitrary belief, and are in fact less likely to be true than many, since religious beliefs are more complex?

The problem with that is that Occam's Razor alone can't produce us... (read more)

mk5410

To clarify there's a distinction I'm making between a utility function and the utility calculations. You can absolutely set a utility function arbitrarily. The issue is not that a utility function itself can go to infinity, but that religious beliefs can make an AI's prediction of the state of the world contain infinities.

Suppose you have a system that consists of a model that predicts the state of the world contingent on some action taken by the system, a utility function that evaluates those states, and an agent that takes the highest-utility action.... (read more)
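The model–utility–agent decomposition described above could be sketched roughly as follows. All function names and values here are illustrative, not from the original comment:

```python
# Minimal sketch of the architecture: a world model predicts an outcome
# for each action, a utility function scores outcomes, and the agent
# takes the highest-scoring action. Everything here is hypothetical.

def world_model(action):
    # Maps an action to a predicted world state (stubbed out here).
    return {"make_clips": "more paperclips", "do_nothing": "status quo"}[action]

def utility(state):
    # Scores a predicted state; nothing stops this from being infinite
    # if the model's predictions contain infinities.
    return {"more paperclips": 10.0, "status quo": 0.0}[state]

def choose_action(actions):
    # The agent picks the action whose predicted state scores highest.
    return max(actions, key=lambda a: utility(world_model(a)))

print(choose_action(["make_clips", "do_nothing"]))  # make_clips
```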

2ChristianKl
Religious beliefs are one type of belief that makes infinite predictions, but they are not special in that respect compared to other arbitrary beliefs that make infinite predictions. Given that a religious belief like Catholicism involves a lot of details, it is also less likely to be true than other infinite predictions that make fewer claims. 
mk5410

Religious beliefs are special because they introduce infinity to the utility calculations, which can lead to very weird results.

Suppose we have an agent that wants to maximize the expected number of paperclips in the universe. There is an upper bound to the number of paperclips that can exist in the physical universe.

The agent assumes there is a 0.05% chance that Catholicism is true, and that if it converts the population of the world to Catholicism it will be rewarded with infinite paperclips. Converting everyone to Catholicism would therefore maximize the expected number of paperclips, even for very low estimated probabilities of Catholicism being true.
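The arithmetic behind this argument can be shown in a few lines. The numbers below (the 0.05% probability and the finite upper bound) are stand-ins for illustration:

```python
# Hypothetical illustration: any nonzero probability of an infinite
# payoff dominates an expected-value calculation over bounded payoffs.
FINITE_MAX = 1e80  # stand-in upper bound on physical paperclips

def expected_paperclips(p_true, payoff_if_true, payoff_if_false):
    # Standard expected value over the two cases.
    return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

# Just manufacturing paperclips: payoff is bounded either way.
ev_make = expected_paperclips(0.0005, FINITE_MAX, FINITE_MAX)

# Converting everyone: infinite payoff in the 0.05% case, zero otherwise.
ev_convert = expected_paperclips(0.0005, float("inf"), 0.0)

print(ev_convert > ev_make)  # True: 0.0005 * inf is still inf
```

Note the conclusion is insensitive to the probability: replacing 0.0005 with any positive number leaves `ev_convert` infinite.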

2ChristianKl
You can easily just change an algorithm to see something as having infinite utility in utility calculations. Catholicism or any other religion is not special in that you can tell an algorithm to count infinite utility for it. You can do that with everything, provided your system has a way to represent infinite utility. 
mk5410

I'm not sure I agree with your comment, or at least I wouldn't put it that way. But I think I agree with the gist of what you're getting at.

I agree the prospect of eternal reward has a huge motivating effect on human behavior. The question I'm trying to raise is whether it might have a similar effect on machine behavior.

An agnostic expectation-maximizing machine might be significantly influenced by religious beliefs. And I expect a machine would be agnostic.

Unless we're very certain that an AI will be atheistic I think this is something we should think about seriously.

2ChristianKl
That's not a claim that I made. If you want a machine to be motivated by something, you can just give it a utility function to be motivated by it. It's not clear why religious belief would be in any way special here. 
mk5410

I think that's a fairly modest claim. Note I don't say it's the only way.

Religion is evidence (albeit weak and in some respects contradictory evidence) of a certain form of morality being true. The probability of certain religions existing is different conditional on certain moral facts being true. I would emphasize that, taken seriously, this leads to conclusions that are very different from those of most traditional religions. But I think the argument is valid.

Moral intuitionism is another option. But, imo, it's hard to argue why human intuition should be a good pr... (read more)

mk5410

I agree an AI wouldn't necessarily be totally defined by religion. But very large values, even with small probabilities, can massively affect behavior.

And yes, religions could conceivably use AIs to do very bad things. As could many human actors.

mk5410
  1. This is obviously hand-waving away a lot of engineering work. But my point is that assigning a non-zero probability to god existing may affect an AI's behavior in very dramatic ways. An AI doesn't have to be moral to do that. See the example with the paperclip maximizer.
  2. In the grand scheme of things I do think a religious AI would be relatively friendly. In any case, this is why we need to think seriously about the possibility. I don't think anyone is studying this as an alignment issue.
  3. I'm not sure I understand Eliezer's claim in that po
... (read more)
2ChristianKl
A huge problem with religious belief is that there's a lot of ideological propaganda about what it means to have religious beliefs. That makes it hard to think clearly about the effects of religious beliefs.  Part of what religion does is that it makes it easier to justify behavior that causes suffering because the suffering doesn't matter as much compared to the value of eternal salvation.  This includes both actions that are about self-sacrifice and also actions that cause some other people to suffer. 
mk5471

I think this is a somewhat naive approach to this problem. 

For one, startup valuations are very high variance. It's impossible to know whether you were right or lucky in the case you cite, although you do make a plausible case that you had more information than the VCs who invested.

The real reason for modesty is that the status quo for a lot of systems is at or near optimal, especially in areas where competitive pressures are strong. Building gears-level models can help, but doing that with sufficient fidelity is hard, because even insiders often don't understand the system with enough granularity to model it sufficiently.

mk5420

By that logic, wouldn't it make the most sense to donate to an organization that lobbies for more international aid or scientific research rather than attempting to fund it yourself?

mk5410

The theory of comedy that I find the most convincing is that things we find "funny" are non-threatening violations of social mores. According to that theory being funny isn't so much about being rational, but understanding the unwritten rules that govern society. More specifically it's about understanding when breaking social rules is actually acceptable. It's kind of like speeding. It's theoretically illegal to go 26 in a 25 mph zone. But as a practical matter, no cop is going to pull you over for it. I'm not sure that an especially detailed understanding of social norms is directly useful to becoming more rational. Maybe to the extent that you're more consciously aware of them and how they influence your thinking.

2Unnamed
"Non-threatening violations of social mores" seems to underspecify what things are funny. Most non-threatening norm violations lead to other reactions like cringe, annoyance, sympathy, contempt, confusion, or indifference rather than comedy. Curb Your Enthusiasm and Mr. Bean had lots of funny scenes which involved norm violations, but if their creators were less talented then people would've cringed instead of laughing (and some people do that anyways). I don't think their talent consists primarily of 'finding ways to violate social mores' or 'figuring out how to make that benign'. "Norm violations" and "non-threatening" also seem like generalizations that aren't true of all humor. "The crows seemed to be calling his name, thought Caw" and referencing movies don't seem like norm violations. Gallows humor and bullies laughing at their victim don't seem threat-free.
mk5400

It's good news and bad news in a way. It casts a lot of doubt on the efficacy of beta-amyloid (plaque) targeting drugs that are in the pipeline right now.

mk5400

Paul LePage arguably won the governorship in 2010 because his opposition was fractured. He received 37.6% of the vote, while an independent candidate received 35.9% and a Democrat received 18.8%. He's fairly unpopular, so I wonder if that was a major driver for this. It also makes me wonder if ranked voting would be as successful in other states. Definitely a good thing though.

mk5420

I think social desirability bias almost certainly did mask Trump's support.

https://morningconsult.com/2016/11/03/yes-shy-trump-voters-no-wont-swing-election/

The study I linked to is a pretty strong case for it existing. The study randomly assigned voters to complete a poll online or via phone. College educated voters were substantially more likely to support Trump in the online poll. Whether social desirability bias alone accounts for Donald Trump outperforming the polls is another question.