All of roland's Comments + Replies

roland*10

I'm looking for something simpler that doesn't require understanding another concept besides probability.

The article you posted is a bit confusing:

the likelihood of X given Y is just the probability of Y given X!

help us remember that likelihoods can't be converted to probabilities without combining them with a prior.

So is Arbital:

In this case, the "Miss Scarlett" hypothesis assigns a likelihood of 20% to e

Fixed: the "Miss Scarlett" hypothesis assigns a probability of 20% to e

roland10

Sorry, don't post here, but there:

https://www.lesswrong.com/posts/hM3KXEFwQ5jnKLzQj/open-thread-winter-2024-2025

roland*50

Update: Confusion of the Inverse

Is there an aphorism regarding the mistake P(E|H) = P(H|E)?

Suggestions:

  1. Thou shalt not reverse thy probabilities!
  2. Thou shalt not mix up thy probabilities!
  3. Thou shalt not invert thy probabilities! -- Based on Confusion of the Inverse
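For concreteness, here's a minimal sketch of why the two directions differ, using made-up medical-test numbers (the base rate and test characteristics below are purely illustrative):

```python
# Confusion of the inverse: P(E|H) is not P(H|E).
# Made-up numbers: a disease with a 1% base rate; a test that comes back
# positive 90% of the time with the disease, 9.6% of the time without it.
p_h = 0.01               # P(H): prior probability of the disease
p_e_given_h = 0.90       # P(E|H): positive test given disease
p_e_given_not_h = 0.096  # P(E|~H): positive test given no disease

# Bayes: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(E|H) = {p_e_given_h:.3f}")  # 0.900
print(f"P(H|E) = {p_h_given_e:.3f}")  # ~0.086, an order of magnitude smaller
```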

Arbital conditional probability

Example 2

Suppose you're Sherlock Holmes investigating a case in which a red hair was left at the scene of the crime.

The Scotland Yard detective says, "Aha! Then it's Miss Scarlet. She has red hair, so if she was the murderer she almost certainly would have l

... (read more)
8brambleboy
Don't confuse probabilities and likelihoods?
roland10

Bayes for arguments: how do you quantify P(E|H) when E is an argument? E.g., I present you with a strong argument supporting hypothesis H; how can you put a number on that?

4Garrett Baker
There’s not a principled way for informal arguments, but there are a few for formal arguments, i.e. proofs. The relevant search term here is logical induction.
2Terence Coelho
I think P(E∣H) is close enough to 1 to be dropped here; the more interesting thing is P(E∣¬H) (how likely would they be to make such a convincing argument if the hypothesis is false?). We have P(E) = P(E∣H)P(H) + P(E∣¬H)(1−P(H)) ≈ P(H) + P(E∣¬H)(1−P(H)), so Bayes' rule becomes P(H∣E) ≈ P(H) / [P(H) + P(E∣¬H)(1−P(H))].   Edit: actually use likelihood ratios; it's way simpler.
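To make the likelihood-ratio suggestion in the edit above concrete, here's a minimal sketch with made-up numbers (the prior and P(E|¬H) below are purely illustrative):

```python
# Odds form of Bayes: posterior odds = prior odds * likelihood ratio.
# Made-up numbers: prior P(H) = 0.3; the argument E is near-certain given H
# (P(E|H) ~ 1, as in the comment above) and unlikely otherwise (P(E|~H) = 0.1).
p_h = 0.3
p_e_given_h = 1.0
p_e_given_not_h = 0.1

prior_odds = p_h / (1 - p_h)
likelihood_ratio = p_e_given_h / p_e_given_not_h
posterior_odds = prior_odds * likelihood_ratio
p_h_given_e = posterior_odds / (1 + posterior_odds)

print(f"P(H|E) = {p_h_given_e:.3f}")  # ~0.811

# Agrees with the approximate formula above: P(H) / (P(H) + P(E|~H)(1 - P(H)))
print(f"approx  = {p_h / (p_h + p_e_given_not_h * (1 - p_h)):.3f}")  # ~0.811
```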
roland10

that it’s reasonably for Eliezer to not think that marginally writing more will drastically change things from his perspective.

Scientific breakthroughs live on the margins, so if he has guesses on how to achieve alignment, sharing them could make a huge difference.

roland*21

I have guesses

Even a small probability of solving alignment should have big expected utility modulo exfohazard. So why not share your guesses?

roland10

Weighted step ups instead of squats

Lunges vs weighted step ups?

2romeostevensit
I can't get full range of motion without a significant box height (18 inches). And that's with leaning into it to get more ROM. Like this: https://www.youtube.com/watch?v=DqLMErck6A4
4romeostevensit
Hold fifty pounds in each hand and add up how much you are loading a single leg. In my case it's 260lbs.
roland10

why would a weighted step up be better and safer than a squat?

3romeostevensit
1/2-1/3 the spine loading for the same stress on the legs
roland10

Weighted step ups, used instead of squats, can be loaded quite heavily.

What are the advantages of weighted step ups vs. squats, without bending your knees too much? Squats would have the advantage of greater stability and of only requiring half the reps, since both legs work at once.

3romeostevensit
I don't understand; a quad-focused exercise inherently involves the knee.
roland*20

Valence-Owning

Could you please give a definition of the word valence? The definition I found doesn't make sense to me: https://en.wiktionary.org/wiki/valence

3Rob Bensinger
Basically: whether something is good or bad, enjoyable or unpleasant, desirable or undesirable, interesting or boring, etc. It's the aspect of experience that evaluates some things as better or worse to varying degrees and in various respects.
roland32

1.1. It’s the first place large enough to contain a plausible explanation for how the AGI itself actually came to be.

According to this criterion we would be in a simulation because there is no plausible explanation of how the Universe was created.

1mruwnik
This is a valid point. It can easily be extended to the agent via Last Thursdayism.
roland42
  1. exfohazard
  2. expohazard (based on exposition)

Based on the Latin prefix ex-.

IMHO better than outfohazard.

roland10

The key here would be an exact quantification: how many carbs do these cultures consume relative to their amount of physical activity?

2ChristianKl
Herman Pontzer did such a study for the Hadza, who eat a lot of honey. He came to conclusions like "To Pontzer, this means that the human body seems to adjust to physical activity by saving calories on other physiological processes to keep total energy expenditure in check."
roland10

Has the hypothesis

excess sugar/carbs -> metabolic syndrome -> constant hunger and overeating -> weight gain

been disproved?

1Dan Hopkins
I think the standard answer is that some traditional cultures rely quite heavily on carbs with very low incidence of obesity. Some even eat substantial amounts of sugar (e.g. as honey).
roland10

Rather, my read of the history is that MIRI was operating in an argumentative argument where:

argumentative environment?

roland10

A good critical book on this topic is House of Cards by Robyn Dawes.

roland10

If we have to use voice, we can still try to ask hard questions and get fast answers, but because of the lower rate it's hard to push far past human limits.

You could go with IQ-test-type progressively harder number sequences. Use big numbers that are hard to calculate in your head.

E.g. start with a random 3-digit number; each following number is the previous one squared, minus 17. If he/she figures it out in 1 second, he must be an AI.
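A minimal sketch of generating such a challenge (the 3-digit start and the "square, then subtract 17" rule are just the example above; the function name is for illustration):

```python
import random

def challenge_sequence(terms=3, seed=None):
    """Random 3-digit start, then repeatedly apply x -> x**2 - 17.

    The terms blow up so fast that continuing the pattern within a
    second is implausible for an unaided human."""
    rng = random.Random(seed)
    x = rng.randint(100, 999)
    seq = [x]
    for _ in range(terms - 1):
        x = x * x - 17
        seq.append(x)
    return seq

# Show the sequence, then ask the respondent for the next term.
print(challenge_sequence())
```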

roland-10

If you like Yudkowskian fiction, Wertifloke = Eliezer Yudkowsky

The Waves Arisen https://wertifloke.wordpress.com/

roland10

Is it OK to omit facts from your lawyer? I mean, is the lawyer entitled to know everything about the client?

2ryan_b
Everything about the client *that is relevant to the case,* yes. Omitting relevant facts is grounds for terminating the relationship.
roland50

Eliezer Yudkowsky painted "The Scream" with paperclips:

The Scream by Eliezer Yudkowsky

roland30

Does a predictable punchline have high or low entropy?

From False Laughter

You might say that a predictable punchline is too high-entropy to be funny

Since entropy is a measure of uncertainty, a predictable punchline should be low entropy, no?

Yup, low. Although a high-entropy punchline probably wouldn't be funny either, for different reasons.
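A minimal sketch of that point, with made-up punchline distributions:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Predictable punchline: one ending carries nearly all the probability mass.
predictable = [0.97, 0.01, 0.01, 0.01]
# Unpredictable punchline: four endings equally likely.
surprising = [0.25, 0.25, 0.25, 0.25]

print(f"predictable: {entropy(predictable):.2f} bits")  # ~0.24 bits (low)
print(f"surprising:  {entropy(surprising):.2f} bits")   # 2.00 bits (high)
```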

roland30

Regarding laughter:

https://www.lesswrong.com/posts/NbbK6YKTpQR7u7D6u/false-laughter?commentId=PszRxYtanh5comMYS

You might say that a predictable punchline is too high-entropy to be funny

Since entropy is a measure of uncertainty, a predictable punchline should be low entropy, no?

roland*10

You might say that a predictable punchline is too high-entropy

I'm confused. Entropy is the average level of surprise inherent in the possible outcomes, and a predictable punchline is an event of low surprise. Where does the high entropy come from?

roland20

For the most part, admitting to having done Y is strong evidence that the person did do Y, so I’m not sure if it can generally be considered a bias.

Not generally, but I notice that the argument I cited is usually invoked when there is a dispute, e.g.:

Alice: "I have strong doubts about whether X really did Y because of..."

Bob: "But X already admitted to Y, what more could you want?"

3ChristianKl
Bob's reply is not concerned with the truth of whether X did Y in the Bayesian sense. Bob doesn't argue about what the correct probability happens to be. It's concerned with dispute resolution. In a discussion about truth, wanting doesn't matter. In a process of dispute resolution it matters a great deal.
roland20

What is the name of the following bias:

X admits to having done Y, therefore it must have been him.

6Isnasene
For the most part, admitting to having done Y is strong evidence that the person did do Y, so I'm not sure if it can generally be considered a bias. In the case where there is additional evidence that the admittance was coerced, I'd probably decompose it into the Just World Fallacy (i.e. "Coercion is wrong! X couldn't have possibly been coerced.") or a blend of Optimism Bias and Typical Mind Fallacy (i.e. "I would never admit to something I haven't done! So I don't think X would either!") where the person is overconfident in their uncoercibility and extrapolates this confidence to others. This doesn't cover all situations though. For instance, if someone was obviously paid a massive amount of money to take the fall for something, I don't know of a bias that would lead one to continue to believe that they must've done it.
1Mathisco
Gullibility bias?
roland10

if I am seeing a bomb in Left it must mean I’m in the 1 in a trillion trillion situation where the predictor made a mistake, therefore I should (intuitively) take Right. UDT also says I should take Right so there’s no problem here.

It is more probable that you are misinformed about the predictor. But your conclusion is correct: take the Right box.

roland60

It’s pretty uncharitable of you to just accuse CfAR of lying like that!

I wasn't; rather, I suspect them of being biased.

roland70

At the same time I accept the idea of intellectual property being protected, even if that’s not the case they are claiming.

I suspect that this is the real reason. Although, given that the much vaster Sequences by Yudkowsky are freely available, I don't see it as a good justification for not making the CFAR handbook available.

-6Zack_M_Davis
roland120

Is the CFAR handbook publicly available? If yes, link please. If not, why not? It would be a great resource for those who can’t attend the workshops.

There's no official, endorsed CFAR handbook that's publicly available for download. The CFAR handbook from summer 2016, which I found on libgen, warns

While you may be tempted to read ahead, be forewarned - we've often found that participants have a harder time grasping a given technique if they've already anchored themselves on an incomplete understanding. Many of the explanations here are intentionally approximate or incomplete, because we believe this content is best transmitted in person. It helps to think of this handbook as a compa
... (read more)
roland10

Is the CFAR handbook publicly available? If yes, link please. If not, why not? It would be a great resource for those who can't attend the workshops.

roland30

Just a reminder, the Solomonoff induction dialogue is still missing:

https://www.lesswrong.com/posts/muKEBrHhETwN6vp8J/arbital-scrape#tKgeneD2ZFZZxskEv

2emmab
See Arbital Scrape V2
roland60

Seconded, that part is missing. Thanks for pointing out that very interesting dialogue.

roland*40

Can asking for advice be bad? From Eliezer's post Final Words:

You may take advice you should not take.

I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing? For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves. How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct and then ask someone for advice and the

... (read more)
3gjm
(Not replying "at the original post" because others haven't and now this discussion is here.) That fragment of "Final Words" is in a paragraph of consequences of underconfidence. Suppose (to take a standard sort of toy problem) you have a coin which you know either comes up heads 60% of the time or comes up heads 40% of the time. (Note: in the real world there are probably no such coins, at least not if they're tossed in a manner not designed to enable bias. But never mind.) And suppose you have some quantity of evidence about which sort of coin it is -- perhaps derived from seeing the results of many tosses. If you've been tallying them up carefully then there's not much room for doubt about the strength of your evidence, but let's say you've just been watching and formed a general idea. Underconfidence would mean e.g. that you've seen an excess of T over H over a long period, but your sense of how much information that gives you is wrong, so you think (let's say) there's a 55% chance that it's a T>H coin rather than an H>T coin. So then someone trustworthy comes along and tells you he tossed the coin once and it came up H. That has probability 60% on the H>T hypothesis and probability 40% on the T>H hypothesis, so it's 3:2 evidence for H>T, so if you immediately have to bet a large sum on either H or T you should bet it on H. But maybe the _real_ state of your evidence before this person's new information justifies 90% confidence that it's a T>H coin, in which case that new information leaves you still thinking it's more likely T>H, and if you immediately have to bet a large sum you should bet it on T. Thus: if you are underconfident you may take advice you shouldn't, because you underweight what you already know relative to what others can tell you. Note that this is all true even if the other person is scrupulously honest, has your best interests at heart, and agrees with you about what those interests are.
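A quick numeric check of gjm's example, using only the numbers stated in the comment (a justified 90% confidence in the T>H coin vs. an underconfident 55%; the helper name is just for illustration):

```python
# Coin is either H>T (60% heads) or T>H (40% heads); one H is reported.
def p_t_over_h_after_one_heads(prior_t_over_h):
    """Posterior P(T>H | one observed H), via Bayes."""
    num = 0.40 * prior_t_over_h              # P(H | T>H) * prior
    den = num + 0.60 * (1 - prior_t_over_h)  # + P(H | H>T) * (1 - prior)
    return num / den

print(p_t_over_h_after_one_heads(0.90))  # ~0.857 -> still bet on T
print(p_t_over_h_after_one_heads(0.55))  # ~0.449 -> the same evidence flips the bet to H
```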
-2Pattern
That's because they already have it (in a sense that we don't). They know every way any experiment could go (if not which one it will). You have more at stake than they do. (Also, watch out in case they have vested interests.) EDIT: If you have an amazing knockdown counter-argument, please share it.
2mako yass
I'd trust myself not to follow bad advice. I'd probably be willing to ask a person I didn't respect very much for advice, even if I knew I wasn't going to follow it, just as a chance to explain why I'm going to do what I'm going to do, so that they understand why we disagree, and don't feel like I'm just ignoring them. You can't create an atmosphere of fake agreement by just not confronting the disagreement. They'll see what you're doing.
roland*10

You may take advice you should not take.

I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing?

For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves. How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct and then ask someone for advice, and that someone will change your mind.

Let's say you already have lots of evid

... (read more)
roland10

From: https://www.lesswrong.com/posts/bfbiyTogEKWEGP96S/fake-justification

In The Bottom Line, I observed that only the real determinants of our beliefs can ever influence our real-world accuracy, only the real determinants of our actions can influence our effectiveness in achieving our goals.

roland120

Quoting from: https://intelligence.org/files/DeathInDamascus.pdf

Functional decision theory has been developed in many parts through (largely unpublished) dialogue between a number of collaborators. FDT is a generalization of Dai's (2009) "updateless decision theory" and a successor to the "timeless decision theory" of Yudkowsky (2010). Related ideas have also been proposed in the past by Spohn (2012), Meacham (2010), Gauthier (1994), and others.
roland10

There is a difference of claims about who said what. But why do you automatically assume that I'm the one not being truthful?

2Felix Denker
I (somewhat charitably) believe that both of these were honest misunderstandings on Roland's part and don't think he has been intentionally untruthful anywhere.
roland10

No. What I'm saying is that a pseudonymous poster without any history, who pops out of nowhere, gets credibility. Specifically, do people take the following affirmation at face value?

As one of the multiple people creeped out by Roland in person
6gjm
I think it's quite likely to be true, but not merely because a pseudonymous poster coming out of nowhere said it. (Though of course that's evidence; people are more likely to turn up making that claim when it's true than when it's untrue.) So why do I think it likely to be true? Because, I'm sorry to say, lots of things about this affair look very much like cases I've seen before where someone is creeping other people out. For the avoidance of doubt, I am not claiming to know and I absolutely could be wrong; but I would bet at quite heavy odds that that's how it is. Now, there's a difference between "lots of people find X creepy" and "X is behaving in a bad way" or "X poses an actual threat", and I think it sometimes happens that a person who in fact is no threat to anyone and would never behave in a way that harms anyone (or that would even be perceived as unpleasant if it were someone else doing it) gets widely perceived as creepy. So even if I'm right to believe the claims that multiple people are creeped out by you in person, it's possible that this is an unfair affliction and no fault of yours at all. But ... there's no nice way to say this, so I won't try: I think people whom lots of people find creepy and whose response to being found creepy is to complain that they are being mistreated, and who have trouble believing that anyone finds them creepy ... those people, I think, often are more than averagely likely to act in actually-harmful ways, or in ways that would be unpleasant whoever was doing it -- and on the basis of what I have seen in this thread I would totally support anyone who preferred not to be around you. Again, it's possible that I'm dead wrong, it's possible that my and others' creep-detection and threat-detection are throwing up false positives, and if so that's a very unfortunate situation for you and I sympathize. None the less, it's everyone's right to avoid people they think or feel are likely to be unpleasant to be around; the heuristi
roland20

Giego, I agree with your post in general.

> IF Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden to both be clear and to justify why those topics deserve to be there.

This is just a strawman that has cropped up here. From the beginning I said I don't mind dropping any topic that is not wanted. This never was the issue.

roland10

> Ultimately, the Zurich EA group is not an official organisation representing EA. They are just a bunch of people who decide to meet up once in a while. They can choose who they do and do not allow into their group, regardless of how good/bad their reasons, criteria or disciplinary procedures are.

Fair enough. I decided to post this just for the benefit of all. Lots of people in the group don't know what is going on.

Roland, it isn't about the object level or any one specific thing. I gave some examples for illustration, but none of them are cruxes for me.

Let me be more specific. The problem is not that you hold and voice any particular opinions, the problem is that your opinion forming and voicing process is such that it is unproductive for us to engage with you.

I have known you for 2 years and have not seen you improve in this regard.

2Gurkenglas
I just noticed that my votes count for three points, so that might explain the ludicrous imbalance in the scores here. Edit: And now it's two points. Reminds me of that Black Mirror episode, Nosedive. Who thought this system was a good idea?
-5Dr. Jamchie