All of agai's Comments + Replies

agai10

Now, to restate the original "thing" we were trying to honestly say we had a prior for:

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on a given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q.

Does this work, given this and our response?

We do not actually have a prior for Q, but we have a rough prior for a highly related question Q', which can likely be transformed fairly easily

... (read more)
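A minimal numerical sketch of the setup above, assuming an illustrative prior on Q and illustrative per-planet life probabilities under Q and not-Q (none of these numbers appear in the original comment):

```python
# Minimal sketch of the prior/update structure described above.
# All numbers are illustrative assumptions, not values from the discussion.

p_life_if_Q = 1e-3      # assumed P(life on an arbitrary planet | Q true)
p_life_if_notQ = 1e-12  # assumed P(life on an arbitrary planet | Q false)
prior_Q = 0.5           # assumed rough prior for Q (or the related Q')

# Update on a single observation: life exists on one given planet.
posterior_Q = (p_life_if_Q * prior_Q) / (
    p_life_if_Q * prior_Q + p_life_if_notQ * (1 - prior_Q)
)

print(f"P(Q | life on a given planet) = {posterior_Q:.9f}")

# With a finite number of planets N, the same update can be run on
# "life on k of N planets" using binomial likelihoods instead.
```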
agai-10

"...am maybe too enthusiastic in general for things being 'well organized'."

I don't think so. :)

agai-10

Comment removed for posterity.

[This comment is no longer endorsed by its author]
agai*-30

Comment removed for posterity.

[This comment is no longer endorsed by its author]
Answer by agai*30

Comment removed for posterity.

[This comment is no longer endorsed by its author]
agai10

Yes. Although Moloch is "kind of" all-powerful, there are also different levels of "all-powerful", so there can be "more all-powerful" things. :)

agai10

Would you be able to expand on those? I thought they were quite apt.

agai00

They both exist in different realms; however, Elua's is bigger, so by default Elua would win, but only if people care to live more in Elua's realm than Moloch's. Getting the map-territory distinction right is pretty important, I think.

agai00

Accidents, if not too damaging, are net positive because they allow you to learn more and cause you to slow down. If you are wrong about what is good/right/whatever, and you think you are a good person, then you'd want to be corrected. So if you're having a lot of really damaging accidents in situations you could reasonably be expected to control, that's probably not too good, but "reasonably be expected to control" is a very high standard. What I'm very explicitly not saying here is that the "just-world" hypothesis is true in any way; accidents *are* accidents, it's just that they can be net positive.

4jmh
One of the recent "cultural" themes being pushed by the company I work in is very similar. Basically, if someone critiques you and shows you where you made a mistake, or simply notes that a mistake was made, they just gave you a gift; don't get mad or defensive. I think there is a lot of truth to that. My phrase is "own your mistakes", meaning acknowledge them and learn from them. So, I fully agree with your general point. Accidents and mistakes should never be pure-loss settings. And, in some cases, they can lead to net positive benefits (and we probably don't even need to consider those "I was looking for X but found Y, and Y is really, really good/beneficial/productive/cost-saving/life-saving..." cases).
agai00

It's more effective to retain more values, since physics is basically unitary (at least up to the point we know), so you'll have more people on your side if you retain the values of past people. So we'd be able to defeat this Moloch if we're careful.

2Isnasene
To be clear, the effectiveness of an action is defined by whatever values we use to make that judgement. Retaining the values of past people is not effective unless:

* past-people values positively complement your current values, so you can positively leverage the work of past people by adopting more of their value systems (which doesn't necessarily mean you have to adopt their values)
* past-people have coordinated to limit the instrumental capabilities of anyone who doesn't have their values (for instance, by establishing a Nash equilibrium that makes it really hard for people to express drifting values, or by building an AGI)

To be fair, maybe you're referring to Molochian effectiveness of the form (whatever things tend to maximize the existence of similar things). For humans, similarity is a complicated measure. Do we care about memetic similarity (i.e. reproducing people with similar attitudes as ourselves) or genetic similarity (i.e. having more kids)? Of course, this is a nonsense question because the answer is that most humans don't care strongly about either, and we don't really have any psychological intuitions on the matter (I guess you could argue hedonic utilitarianism can be Molochian under certain assumptions, but that's just because any strongly-optimizing morality becomes Molochian). In the former case (memetic similarity), adopting the values of past people is a strategy that makes you less fit because you're sacrificing your memetics to more competitive ones. In the latter case (genetic similarity), pretending to adopt people's values as a way to get them to have more kids with you is more dominant than just adopting their values. But, overall, I agree that we could kind-of beat Moloch (in the sense of curbing Moloch on really long time-scales) just by setting up our values to be inherently more Molochian than those of people in the future. Effective altruism is actually a pretty good example of this. Utilitarian optimizers leveraging the far-future to manipu
agai00

Yeah, so, this is a complex issue. It is actually true, IMO, that we want fewer people in the world so that we can focus on giving them better and more meaningful lives. Unfortunately this would mean that people have to die, but yeah... I also think that cryonics doesn't really make it much easier or harder to revive people; I would say either way you pretty much have to do the work of re-raising them by giving them the same experiences...

Although, now that I think about it, there was a problem about that recently where I thought of a way to just "skip... (read more)

agai*30

My response to this would be:

  1. This is a very good argument/summary of arguments/questions.
  2. I would analyse this in sequence (taking quotes in order) and then recursively go back to re-examine the initial state of understanding to see if it's at least consistent. If it isn't, serious updates to my worldview might have to occur.
  3. These can be deferred and interleaved concurrently with other updates that are either more interesting or higher priority. Deferral can work by "virtualising" the argument as a "suppose (this: ... )" question.

From 2: (now 2 layers of indi

... (read more)
1agai
Now, to restate the original "thing" we were trying to honestly say we had a prior for: Does this work, given this and our response? We do not actually have a prior for Q, but we have a rough prior for a highly related question Q', which can likely be transformed fairly easily into a prior for Q using mechanical methods. So let's do that "non-mechanically" by saying:

1. If we successfully generate a prior for Q, that part is OK.
2. If Q is false, (::<- previously transformed into the more interesting and still consistent with a possible meaning for this part question "if Q is not-true") :: use "If Q is not-true" as this proposition, then it is OK. But also consider the original meaning of "false", meaning "not true at all", meaning it logically has 0 mass assigned to it. If we do both, this part is OK.
3. If Q is true, you put a high probability on life forming on a given arbitrary planet. This was the evidential question which we said the article was not mainly about, so we would continue reasoning from this point by reading the rest of the article until the next point at which we expect an update to occur to our prior; however, if we do the rest of the steps (1 to the end here), then these updates can be relatively short and quick (as each is just, in a technical sense, a multiplication; this can definitely be done by simultaneous (not just concurrent) algorithms). ::-> (predicted continuation point)
4. You are unsure about the truth of a statement Q. OK.
5. Suppose you know that there are a certain number of planets, N. This directly implies that we are only interested in the finite case and not the infinite case for N. However, we may have to keep the infinite case in mind. OK.

-> [Now to be able to say "we have a prior," we have to write the continuation from 3. until the meaning of both 1. "what the article is about" becomes clear (so we can disambiguate the intended meanings of 1-5 and re-check) 2. We recognise that the prior can be constructed, and can roughly de
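The "just a multiplication" in point 3 is the ordinary Bayesian update. Under the quoted setup, and writing L for the observation that life formed on a given arbitrary planet, it would look roughly like this (a sketch in the comment's notation, with the probabilities left symbolic):

```latex
P(Q \mid L) = \frac{P(L \mid Q)\, P(Q)}{P(L \mid Q)\, P(Q) + P(L \mid \neg Q)\, P(\neg Q)}
```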
agai*-10

Comment removed for posterity.

[This comment is no longer endorsed by its author]
agai*-10

Comment removed for posterity.

[This comment is no longer endorsed by its author]
agai00

I have two default questions when attempting to choose between potential actions: I ask both "why" and "why not?".

agai*-20

Comment removed for posterity.

[This comment is no longer endorsed by its author]
4jimrandomh
This is incorrect. The main ways computers get compromised are as part of broadly-targeted attacks using open ports, trojanized downloads, and vulnerabilities in the web browser, email client and other network-facing software. For physical-access attacks, the main one is that the computer gets physically stolen, powered off in the process, and never returned, in which case having encrypted the hard disk matters a lot.
1Pattern
This link seems to be broken. Is this a reference to a missing footnote?
3TAG
Linux has had the advantages it has for twenty years...so why now?
6Viliam
It's called progress. In my youth, we only had a bridge to sell you.
agai120

Okay, because I'm bored and have nothing to do, and I'm not going to be doing serious work today, I'll explain my reasoning more fully on this problem. As stated:

You face two open boxes, Left and Right, and you must take one of them. In the Left box, there is a live bomb; taking this box will set off the bomb, setting you ablaze, and you certainly will burn slowly to death. The Right box is empty, but you have to pay $100 in order to be able to take it. 
A long-dead predictor predicted whether you would choose Left or Right, by runn
... (read more)
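Because the problem statement is cut off above, here is a rough policy-level expected-value sketch in the spirit of the standard Bomb discussion. The predictor error rate, the disutility of burning, and the rule "the bomb was placed in Left iff the predictor expected you to take Right" are assumptions about the full problem, not quotes from it:

```python
# Rough sketch comparing fixed policies in the Bomb problem.
# Assumptions (not from the truncated text above): the predictor placed the
# bomb in Left iff it predicted you would take Right, it errs with tiny
# probability EPS, burning to death is worth U_BURN, and Right costs $100.

EPS = 1e-24      # assumed predictor error rate
U_BURN = -1e12   # assumed disutility of burning to death
U_RIGHT = -100   # cost of taking the Right box

def expected_value(policy: str) -> float:
    """Expected value of committing to a policy before the prediction is made."""
    if policy == "Left":
        # The predictor almost always predicts Left, so no bomb is placed;
        # only with probability EPS is a bomb actually waiting in Left.
        return EPS * U_BURN
    # policy == "Right": you always pay $100; bomb placement is then irrelevant.
    return U_RIGHT

for policy in ("Left", "Right"):
    print(policy, expected_value(policy))
```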
1JohnCDraper
In the bomb question, what we need is a causal theory in which the ASI agent accurately gauges that a universe of one indicates loneliness and not in fact happiness, which is predicated on friendliness (at least for an ASI) (and I would be slightly concerned, as an external observer, as to why the universe was reduced to a single agent if it were not due to entropy), then figures out the perfect predictor was a prior ASI, not from that universe, giving it a clue, and then, adding all its available power to the bomb, following Asimov, says: "LET THERE BE LIGHT!" And with an almighty bang (and perhaps, even with all that extra explosive power, no small pain) there was light--
agai*00

Comment removed for posterity.

[This comment is no longer endorsed by its author]
-1agai
Comment removed for posterity.
agai*00

Comment removed for posterity.

[This comment is no longer endorsed by its author]
agai10

So, this is an interesting one. I could make the argument that UDT would actually suggest taking the opposite of the one you like currently.

It depends on how far you think the future (and yourself) will extend. You can reason that if you were to like both hummus and avocado, you should take both. The problem as stated doesn't appear to exclude this.

If your prior includes the observation about humans that we tend to come to like what we do repeatedly, then you can predict that you will come to like (whichever of avocado or hummus that you

... (read more)
agai-30

Look, I never said it wasn't a serious attempt to engage with the subject, and I respect that, and I respect the author(s).

Let me put it this way. If someone writes something unintentionally funny, are you laughing at them or at what they wrote? To me there is a clear separation between author and written text.

If you've heard of the TV show "America's Funniest Home Videos", that is an example of something I don't laugh at, because it seems to be all people getting hurt.

If someone was truly hurt by my comment then I apologise. I did not mean it that way.

I s

... (read more)
1Tetraspace
The note is just set-dressing; if it throws you off, you could instead have both boxes have glass windows that let you see whether or not they contain a bomb, with the same conclusions.
7Ben Pace
Firstly, this is false. MacAskill works in academic philosophy, and I'm confident he's read up on decision theory a fair bit. Secondly, it's unkind and unfair to repeatedly describe how you're laughing at someone, and it's especially bad to do it instead of presenting a detailed argument, as you say you're doing in your last sentence. I don't think this needs to rise to the level of a formal moderator warning, I just want to ask you to please not be mean like this on LessWrong in future. That said, I hope you do get around to writing up your critique of this post sometime.
agai*-20

Comment removed for posterity.

[This comment is no longer endorsed by its author]