All of Walker Vargas's Comments + Replies

In the ending where humanity gets modified, some people commit suicide. The captain thinks it doesn't make sense to choose complete erasure over modification.

If sperm whales were sapient and had their own languages, how recently would humans have noticed? We wouldn't be able to hear much of their speech. Without advanced tool use or agriculture, I think it would be rather hard for us to notice. I don't think this would have been discovered any earlier than the 20th century. Do we know that we aren't in this situation?

Do they think it's a hardware/cost issue? Or do they think that "true" intelligence is beyond our abilities?

2the gears to ascension
it's the full range of things people say, just a higher ratio of people saying them on the left, in my experience. Also, re: making it a leftist issue - right now it's a liberal issue, and only a liberal issue; liberal CEOs have offended both right-wingers and leftists regarding AI safety, so it's possible that at least getting the actual left on board might be promising somehow. Not sure. Seems like this discussion should be had on LessWrong itself first. I've certainly seen leftists worrying about AI aligned to megacorporations.

This is also a plausible route for spreading awareness of AI safety issues to the left. The downside is that it might make AI safety a "leftist" issue if a conservative analogy is not introduced at the same time.

1sudo-nym
It may also be worth noting how a sufficiently advanced "algorithm" could start making its own "decisions"; for example, a search/display algorithm that has been built to maximize advertisement revenue, if given enough resources and no moral boundaries, may suppress search results that contain negative opinions on itself, promote taking down competitors, and/or preferentially display news and arguments that are in favor of allowing Algorithms more power. Skepticism about The Algorithm is a cause many political parties are already able to agree on; the possibility of The Algorithm going FOOM might accelerate public discussions about the development of AI in general.
4the gears to ascension
the problem is most folks I've talked to on the left with this pitch are even more skeptical of the idea that high capability intelligent software can exist. they generally seem to assume the current level is the peak and progress is stuck. solving that would make progress communicating it to them.

I think of it as deferring to future me vs. deferring to someone else.

Another consideration is how much money someone has to hand. If someone only makes $1,000 a month, they may choose $25 shoes that will last a year over $100 shoes that will last 5 years. Essentially, it is the complementary idea to economy of scale.

2Richard_Kennaway
It's expensive to be poor.

Personhood is a legal category and an assumed moral category that policies can point to. Usually, the rules being argued about are about the acceptability of killing something. The category is used differently depending on the moral framework, but it is usually assumed to point at the same objects. Therefore disagreements are interpreted as mistakes.

Personally, I have my doubts about there being an exact point in development that you can point to where a human becomes a person. If there is one, it might be weeks after birth.

2Dagon
Both of which are mostly based on examples and past decisions/precedent, rather than scientific or operational definitions. That is itself a huge mistake. There is absolutely no reason to believe that any given legal framework agrees on any specifics with other legal systems, and even less that moral systems would agree with each other or with legal systems.

If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet. It doesn't care what the "right" thing is. Humans probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" references. The argument is more of a trick to get people to recognize that than a formal argument.

3TAG
That was always a confused argument. A universally compelling argument is supposed to compel any epistemically rational agent. The fact that it doesn't compel a paperclipper, or a rock, is irrelevant.

I think the point is that people try to point to things like God's will in order to appear like they have a source of authority. Eliezer is trying to lead them to conclude that any such tablet being authoritative just by its nature is absurd, and only seems right because they expect the tablet to agree with them. Another method is asking why the tablet says what it does: if God's decrees are arbitrary, why follow them; if there is a good reason behind them, why not just follow those reasons directly?

2TAG
Then it isn't an argument that moral realism is incoherent, and it isn't an argument that moral realism in general is false either. It's an argument against divine command theory. It might be successful as such, but it's a more modest target. (Also, not original... it would be the Euthyphro.)
1Jorterder
This is not addressing my criticism. He is saying that if objective morality existed and you don't like it, you should ignore it. I am not saying whether objective morality exists or not, but addressing the logic in a hypothetical world where it does exist.

While I see a lot of concern about the big one, I think the whole AI environment being unaligned is the more likely, but not any better, outcome. A society that is doing really well by some metrics that just happen to be the wrong ones. I'm thinking of the idea of freedom of contract that was popular at the beginning of the 20th century and how hard it was to dig ourselves out of that hole.

-5Logan Zoellner
Answer by Walker Vargas96

Highly positive outcomes are assumed to be more particular and complex than highly bad outcomes. Another assumption I think is common is that the utility of a maximally good life is lower in magnitude than the utility of a maximally bad life. Is there a life good enough that you would take a bet with a 50% chance of that life and a 50% chance of the worst life of torture?
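A worked form of that bet, in my own framing, normalizing the utility of a neutral baseline life to zero:

```latex
% Sketch (my framing, not from the original comment): taking the 50/50
% gamble over staying at the neutral baseline is rational only if
\frac{1}{2}\,U(\text{best life}) + \frac{1}{2}\,U(\text{worst life}) > 0
\iff U(\text{best life}) > \lvert U(\text{worst life}) \rvert
```

So refusing every such bet amounts to assuming the worst life's disutility is larger in magnitude than the utility of any achievable life, which is the second common assumption above.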

1cSkeleton
Given human brains as they are now I agree highly positive outcomes are more complex, the utility of a maximally good life is lower than a maximally bad life, and there is no life good enough that I'd take a 50% chance of torture. But would this apply to minds in general (say, a random mind or one not too different from human)?

I don't think the fundamental ought works as a default position, partly because there will always be a possibility of being wrong about what that fundamental ought is, no matter how long it looks. So the real choice is about how sure it should be before it starts acting on its best known option.

The right side can't be NULL, because that'd make the expected value of both actions NULL. To do meaningful math with these possibilities there has to be a way of comparing utilities across the scenarios.

No, if you are contributing to a preexisting discussion, there should be some older work you can cite. For example, you learned about the theory of path semantics from something that wasn't written by you. Cite that source.

1Sven Nilsen
Path semantics is built upon previous works, e.g. Homotopy Type Theory: https://homotopytypetheory.org/ This kind of previous work is cited all over the place. I have no idea why you think there is no preexisting discussion going on.

I don't think that matrix is right. I think it describes a different scenario. Suppose an AI's utility function is defined referentially as being equal to some unknown function written in a letter on Mt. Everest. It also has a given utility function that it has little reason to think is correlated with the real one. Then it would be very important to find out what that true function is. And the expected value of any action would be NULL if that letter doesn't exist.

But an AI that only assigns a probability to that scenario being the case might still have most of its expected value tied to following its current utility function. Well, given some way of comparing them. Without that, there's no way to weigh up the choice.
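A rough way to write that comparison, in my own notation (p is the probability the AI assigns to the letter scenario, U_L the unknown letter function, U_G the given function):

```latex
% Sketch, not from the original discussion: expected utility of an action a
% when the AI is uncertain whether its "true" utility is the letter function.
E[U(a)] = p \cdot E[U_L(a)] + (1 - p)\, U_G(a)
```

The sum is only meaningful if U_L and U_G are expressed on a common scale, and if E[U_L(a)] is undefined (the letter doesn't exist), the whole expression is undefined rather than defaulting to U_G.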

1Donatas Lučiūnas
I've replied to a similar comment already https://www.lesswrong.com/posts/3B23ahfbPAvhBf9Bb/god-vs-ai-scientifically?commentId=XtxCcBBDaLGxTYENE#rueC6zi5Y6j2dSK3M Please let me know what you think

I just had a thought. If Mary was presented with a red, a blue, and a green tile on a white background, could she identify which was which without additional visual context clues, like comparing them to her nails? If not, I would expect a p-zombie to have the same issue, implying that the failure has nothing to do with consciousness.

Depending on who you are talking to, for-profit corporations are a good analogy for what is meant by "misaligned". You can then point out that those same organizations are likely to make AI with profit maximization in mind, and might skimp on moral restraint in favor of being superhumanly good at PR.

Use that comparison with the wrong person and they'll call you a communist.

I want to add that the AI probably does not know it is misaligned for a while.

This sounds similar to the replication crisis, in terms of the incentivization issues.

Under unification, wouldn't it make sense to consider ourselves to be every instance of our mind state? So there's no fact of the matter about what your surroundings are like until they affect your mind state. Similarly, every past and future that is compatible with your current mind state happened and will happen, respectively.

1Szymon Kucharski
It seems to me this is the case. 

This isn't the flu. America has had 318,000 deaths so far. That's ~8.5 years' worth of flu deaths, and one of those years came from the last 26 days alone. If the world had America's mortality rate of almost 1 death per 1,000 people, that would be about 7.8 million deaths. There are 1.7 million deaths globally. That's 6 million people spared! And frankly, America is in at least a half-baked lockdown. (The rough arithmetic is sketched below.)

If your country has almost no cases, that isn't something to complain about. Mass graves would mean that your country had failed to the point that it was having difficulty managing all of the corpses. That point will vary from country to country, but it is a lot harder for a first world country to hit it than you seem to think.
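The back-of-the-envelope arithmetic behind the figures above, using my own ballpark inputs (roughly 37,500 US flu deaths in a typical year, a US population of ~331 million, and a world population of ~7.8 billion; these are assumptions, not official statistics):

```python
# Rough check of the figures in the comment above.
us_covid_deaths = 318_000
flu_deaths_per_year = 37_500                   # assumed typical US flu season

print(us_covid_deaths / flu_deaths_per_year)   # ~8.5 "years' worth" of flu deaths

us_rate = us_covid_deaths / 331_000_000        # ~0.00096, i.e. almost 1 per 1,000
world_at_us_rate = us_rate * 7_800_000_000
print(world_at_us_rate)                        # ~7.5 million (the comment rounds up to 7.8M)

actual_world_deaths = 1_700_000
print(world_at_us_rate - actual_world_deaths)  # ~5.8 million, the comment's "6 million spared"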

2Stuart Anderson
-

This doesn't require faster-than-light signaling. Suppose you and the copy are sent away with identical letters that you open after crossing each other's event horizons. You learn what was packed with your clone when you open your letter, which lets you predict what your clone will find.

Nothing here would require the event of your clone seeing the letter to affect you. You are affected by the initial set up. If the clone counterfactually saw something else, this wouldn't affect you according to SIA. It would require some assumptions about the setup to be wrong ... (read more)

1Dach
If you can send a probe to a location, radiation, gravitational waves, etc. from that location will also (in normal conditions) be intercepting you, allowing you to theoretically make pretty solid inferences about certain future phenomena at that location. However, we let the probe fall out of our cosmological horizon- information is reaching it that couldn't/can't have reached the other probes, or even the starting position of that probe. In this setup, you're gaining information about arbitrary phenomena. If you send a probe out beyond your cosmological horizon, there's no way to infer the results of, for example, non-entangled quantum experiments.

I think we may eventually determine the complete list of rules and starting conditions for the universe/multiverse/etc. Using our theory of everything and (likely) unobtainable amounts of computing power, we could (perhaps) uniquely locate our branch of the universal wave function (or similar) and draw conclusions about the outcomes of distant quantum experiments (and similar). That's a serious maybe- I expect that a complete theory of everything would predict infinitely many different instances of us in a way that doesn't allow for uniquely locating ourselves. However... this type of reasoning doesn't look anything like that. If SSA/SSSA require us to have a complete working theory of everything in order to be usable, that's still invalidating for my current purposes.

For the record, I ran into a more complicated problem which turns out to be incoherent for similar reasons- namely, information can only propagate in specific ways, and it turns out that SSA/SSSA allows you to draw conclusions about what your reference class looks like in ways that defy the ways in which information can propagate.

This specific hypothetical doesn't directly apply to the SIA- it relies on adjusting the relative frequencies of different types of observers in your reference class, which isn't possible using SIA. SIA still suffers from t

Sorry this is so late; I haven't been on the site for a while. My last post was in reply to the claim that no interference is always better than fighting it out. Most of the characters seem to think that stopping the baby eaters has more utility than letting the superhappies do the same thing to us would cost.

The story brings up the possibility that the disutility of the babyeaters might outweigh the utility of humanity. There's certainly nothing logically impossible about this.

2Said Achmiz
I don’t see how this is responsive to anything I said. Could you elaborate?

Just ask which algorithm wins, then. At least in these kinds of situations, UDT does better. The only downside is the algorithm has to check if it's in this kind of situation; it might not be worth practicing.

2Chris_Leong
If you are in this situation you have the practical reality that paying the $100 loses you $100 and a theoretical argument that you should pay anyway. If you apply "just ask which algorithm wins" and you mean the practical reality of the situation described, then you wouldn't choose UDT. If you instead take "just ask which algorithm wins" to mean setting up an empirical experiment, then you'd have to decide whether to consider all agents who encounter the coin flip, or only those who see a tails, at which point there is no need to run the experiment. If you instead are proposing figuring out which algorithm wins according to theory, then that's a bit of a tautology as that's what I'm already trying to do.
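A minimal sketch of the "which algorithm wins" comparison, averaged over all agents who encounter the coin flip, assuming the standard counterfactual-mugging setup in which the predictor pays out on heads only to agents whose policy is to pay on tails. The $10,000 reward is my own placeholder; only the $100 payment appears in the discussion above.

```python
import random

REWARD = 10_000   # assumed heads-side payout to agents the predictor expects to pay
COST = 100        # the payment demanded on tails (from the comment above)

def average_winnings(pays_on_tails: bool, trials: int = 100_000) -> float:
    """Average winnings per encounter for an agent with a fixed policy."""
    total = 0
    for _ in range(trials):
        if random.random() < 0.5:                  # heads
            total += REWARD if pays_on_tails else 0
        else:                                      # tails
            total -= COST if pays_on_tails else 0
    return total / trials

print("pays on tails:", average_winnings(True))    # ~ (10_000 - 100) / 2 = 4_950
print("refuses:      ", average_winnings(False))   # ~ 0
```

Averaged over everyone who meets the flip, the paying policy comes out ahead; conditioning only on the agents who actually see tails makes refusing look better, which is exactly the fork between the two framings described above.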

It's a variant of the liar's paradox. If you say the statement is unlikely, you're agreeing with what it says. If you agree with it, you clearly don't think it's unlikely, so it's wrong.

Vigilantism has been found to be lacking. If I wanted to help with that problem in particular I'd become a cop, or vote for politicians to put higher priority on it. That seems directly comparable to what the humans in the story intended to do for most of it.

What the baby eaters are doing is worse by most people's standards than anything in our history, at least if scale counts for something. Humans don't even need a shared utility function. There just needs to be a cluster around what most people would reflectively endorse. Paperc... (read more)