All of alexey's Comments + Replies

alexey41

I mostly agree, but it's a double-digit percent increase in bankruptcies which ends up being (from the post)

about 4bps (0.04%)/year of additional bankruptcies

alexey1612

But, crucially, if one product is not available, then these people will very likely form an addiction to something else. That is what 'addictive personality disorder' means.

Except whatever they got addicted to before the legalization of online sports betting, it apparently led to much lower bankruptcy rates etc. 

I feel that the discourse has quietly assumed a fabricated option: if these people can't gamble then they will be happy unharmed non-addicts.

This post isn't quietly assuming something: it's loudly giving evidence that they will be much less harmed.

1Dumbledore's Army
"Except whatever they got addicted to before the legalization of online sports betting, it apparently led to much lower bankruptcy rates etc."  Yes, Zvi gives evidence on bankruptcy rates. However, that is not the only kind of harm. Sports gambling doesn't have direct health effects the way that drug or alcohol addictions do. Different types of addictions hit relationships, and sports gambling isn't good for relationships, but it's plausibly less harmful than a porn addiction. Sports gambling leaves people functional and able to hold down a job, again in a way that not all other addictions do. I don't think you can assert that banning sports gambling will make people 'much less harmed'. Not unless you've done a deep dive into all the different forms of harm caused by different types of addiction which they might get instead. If someone like Scott Alexander did that kind of analysis and announced that banning sports gambling is still worthwhile because people will addict themselves to cannabis or porn and that is net better, that would be different, but so far as I know, no one has tried to answer the question.  Separate point: you also can't make the jump from 'sports gambling is harmful' to 'we should ban sports gambling'. Banning things also causes harm. It moderately reduces takeup, but doesn't eliminate the thing banned -- illegal gambling is a problem everywhere people ban it, Prohibition didn't eliminate alcohol, and the War on Drugs didn't magically stop people taking them. So banning things causes a moderate reduction in usage of the thing, with an empirical question mark over how much. Banning things also means criminalising people who would otherwise not be criminalised, ruining their lives. It means devoting more societal resources to police and prisons, to enforce the ban, or else it means diverting existing law enforcement resources away from dealing with other crimes. Even if you accept both that sports gambling is harmful to users (more harmful than
alexey10

Do you expect anyone to answer "agree" to the starting question?

alexey10

Bywayeans are pretty censorious and scrupulous about violations of the NAP

Except against people who enjoy sunsets, apparently?

alexey10

He’d walk on over to nearby industry labs with candy and a sales pitch for why they should use his services. He primarily targeted top, Nobel-prize-winning research groups

and

Plasmidsaurus has historically done very little ‘traditional’ marketing — no brochures, few cold reach-outs

seem to be a bit contradictory?

1Abhishaike Mahajan
Yeah, I can see that; I guess what I was trying to get across was that Plasmidsaurus did do a lot of cold reach-out at the start (and, when they did do it, it was high-effort, thoughtful reach-outs that took the labs' needs into account), but largely stopped afterward.
alexey10

If people followed Brennan’s advice, those ignorant of their lack of knowledge would keep voting, while well-educated people might think they’re not competent enough and abstain.

I'd add that people ignorant enough not to know or not to understand Brennan's argument would also keep voting.

alexey10

Was this post significantly edited? Because this seems to be exactly the take in the post from the start:

because he thought it wasn't bad enough to be considered torture. Then he had it tried on himself, and changed his mind, coming to believe it is torture and should not be performed.

to the end

This is supported by Malcolm's claim that Hitchens was "a proponent of torture", which is clearly false going by Christopher's public articles on the subject. The question is only over whether Hitchens considered waterboarding to be a form of torture, and therefore permissible or not, which Malcolm seems to have not understood.

alexey10

It’s absurd to end up with a framework that believes a life for a woman in Saudi Arabia is just as good as life for a woman in some other country with similarly high per capita income.

You could similarly argue a life for a woman in Saudi Arabia is worse than for a man, but it seems absurd to conclude from that that saving lives of SA men is better than saving lives of SA women.

Whether you save a life in Congo, Sri Lanka or Australia, I can’t think of strong reasons for why #2 would vary all that much.

It seems to me there are obvious differences: 1. family ... (read more)

alexey10

But you aren't asked about your current estimate of your prior. If you want to put it that way, it would be your current estimate of your previous estimate. And you do have exact knowledge of what that estimate was.

alexey72

Here is a counter-argument against Rovelli I found reasonable: Aristotle and Falling Objects | Diagonal Argument

2Algon
This is a good counter-argument! Though I think the missing factor of a square root doesn't change the qualitative nature of natural, i.e. steady-state, motion. But that's not much of a defence, is it? Especially when Aristotle stuck his neck out by saying double the weight, double the speed. It is to his detriment that he didn't check.
alexey20

so the maximum "downside" would be the sum of the differences between that reference populations lives and those without the variant for all variants you edit (plus any effects from off-targets)

I don't think that's true? It has to assume the variants don't interact with each other. And even then, if each variant has a frequency of about 1%, your reference population would only have 0.01% of people with (the rarest) 2 variants at once, 0.0001% with 3 variants, and so on.
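The combinatorics behind the comment above can be sketched in a few lines. The ~1% allele frequency and the independence of variants are illustrative assumptions, not figures from the post:

```python
# Fraction of a reference population carrying k specific variants at once,
# assuming each variant occurs independently at frequency `freq` (here ~1%).
def co_occurrence_fraction(k, freq=0.01):
    """Fraction of the population carrying all k independent variants."""
    return freq ** k

two_at_once = co_occurrence_fraction(2)    # 0.0001, i.e. 0.01% of people
three_at_once = co_occurrence_fraction(3)  # 0.000001, i.e. 0.0001% of people
```

The point is how quickly the reference subpopulation shrinks: each additional edited variant multiplies it by ~1/100, so a many-variant edit has essentially no natural reference population.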

alexey21

Yes, but this exact case is when you say "This would be useful for trying out different variations on a phrase to see what those small variations change about the implied meaning" and when it can be particularly misleading because the LLM is contrasting with the previous version which the humans reading/hearing the final version don't know about.

So it would be more useful for that purpose to use a new chat.

alexey11

But the screenshot says "if i instead say the words...". This seems like it has to be in the same chat with the "matters" version.

2Gordon Seidoh Worley
Yes, you're right, in exactly one screenshot above I did follow up in the chat. But you should be able to see that all the other ones are in new, separate chats. There was just one case where it made more sense to follow up rather than ask a new question.
alexey20

but speak only the truth to other Parselmouths and (by implication) speak only truth to Quakers.

I would merely like to note that the implication seems contrary to the source of the name: I expect Quirrell and most historical Parselmouths in HPMOR would very much lie to Quakers (Quirrell would maybe derive some entertainment from not saying factually false things while misleading them).

4Screwtape
That is a worthwhile note, and I would think about these roles differently based on which definition is in use. If I was transported to a world where everyone is a Quaker I like to think I'd rapidly (though not immediately) switch to being basically pure Quaker.

There might well be other kinds of Parselmouths that would lie as long as they were sure enough in not getting caught. Certainly norms like "you can lie to outsiders but never to your in-group" have existed, from organized crime to ethnic or religious bonds to children's conspiracies to hide who broke a vase. That might be closer to the examples in the linked post and in HPMOR. Maybe it's worth coining terms to distinguish those. I make a genuine effort not to lie or mislead other rationalists. I don't feel bound to speak truth to panhandlers.

As for the distinction between factually false and misleading, man, I have such profound cynicism and despair around accurate communication that someone could hide a small moon in the latitude I often have around misleading. Give me an intent-reading machine and that would change. I have written technical documentation professionally before, and the experience of having your instructions called confusing because someone wasn't sure if when you said "the right mouse button" you meant their right or the computer's right is the kind of thing that sticks with you.
alexey10

Or to put it another way: in the full post you say

There is some evidence he has higher-than-normal narcissistic traits, and there’s a positive correlation between narcissistic traits and DAE. I think there is more evidence of him having DAE than there is of him having narcissistic traits

but to me it looks like you could have equally replaced DAE with "narcissistic traits" in Theories B and C, and provided the same list of evidence.

(1) Convicted criminals are more likely to have narcissistic traits.

(2) "extreme disregard for protecting his customers" is als... (read more)

alexey10

Yes, it's evidence. My question is how strong or weak this evidence is (and my expectation is that it's weak). Your comparison relies on "wet grass is typically substantial evidence for rain".

alexey10

Based on the full text:

Some readers may think that this sounds circular: if I’m trying to explain why someone would do what SBF did, how is it valid to use the fact that he did it as a piece of evidence for the explanation? But treating the convictions as evidence for SBF’s DAE is valid in the same way that, if you were trying to explain why the grass is wet, it would be valid to use the fact that the grass is wet as evidence for the hypothesis that it rained recently (since wet grass is typically substantial evidence for rain).

But a lot of your pro-DAE ev... (read more)

3spencerg
Thanks for your comment. Some thoughts:

"But a lot of your pro-DAE evidence seems to me to fail this test. E.g. ok, he lied to the customers and to the Congress; why is this substantial evidence of DAE in particular?"

Because E is evidence in favor of a hypothesis H if:

P(E given H is true) > P(E given H is false)

And the strength of the evidence is determined by the ratio:

Bayes factor = P(E given H is true) / P(E given H is false)

In my view there isn't really any other reasonable mathematical definition of evidence than the Bayes factor (or transformations of the Bayes factor). Applied to this specific case:

Probability(Lying to Congress given DAE) > Probability(Lying to Congress given not DAE)

And the reason that inequality is true is that people with DAE are more likely to lie than people without DAE (all else equal).

"Everything under this seems to fail the rain test, at least; very many people have this willingness [to lie and deceive others] most of them don't have DAE (simply based on the prevalence you mention). Is this particular 'style' of dishonesty characteristic of DAE?"

The question of whether E is evidence for H is not the same as the question "Is H true most of the time when E?" That's just a different question, and in my view, not the correct question to ask when evaluating evidence. The question to ask to evaluate evidence is whether the evidence is more likely if the hypothesis is true than if it's not true. And yes, lying is indeed characteristic of DAE.
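The Bayes-factor definition in the reply above can be written as a one-line sketch. The probabilities below are made-up illustrative numbers, not estimates from the post:

```python
# Strength of evidence E for hypothesis H, per the definition quoted above.
def bayes_factor(p_e_given_h, p_e_given_not_h):
    """P(E | H) / P(E | not H); > 1 means E favors H."""
    return p_e_given_h / p_e_given_not_h

# E = "lied to Congress", H = "has DAE" -- purely hypothetical numbers
bf = bayes_factor(0.9, 0.6)  # 1.5: E favors H, but only weakly
```

Note this also illustrates alexey's objection: a Bayes factor slightly above 1 makes E evidence for H without making H likely, which is the "how strong is it?" question the thread is actually disputing.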
alexey20

I feel like people like Scott Aaronson who are demanding a specific scenario for how AI will actually kill us all... I hypothesize that most scenarios with vastly superhuman AI systems coexisting with humans end in the disempowerment of humans and either human extinction or some form of imprisonment or captivity akin to factory farming

Aaronson in that quote is "demanding a specific scenario" for how GPT-4.5 or GPT-5 in particular will kill us all. Do you believe they will be vastly superhuman?

alexey10

The quoted section more seems like instrumental convergence than orthogonality to me?

The second part of the sentence, yes. The bolded one seems to acknowledge AIs can have different goals, and I assume that version of EY wouldn't count "God" as a good goal.

Another more relevant part:

Obviously, if the AI is going to be capable of making choices, you need to create an exception to the rules - create a Goal object whose desirability is not calculated by summing up the goals in the justification slot.

Presumably this goal object can be anything.

But in order to

... (read more)
alexey10

In fact it seems that the linked argument relies on a version of the orthogonality thesis instead of being refuted by it:

For almost any ultimate goal - joy, truth, God, intelligence, freedom, law - it would be possible to do it better (or faster or more thoroughly or to a larger population) given superintelligence (or nanotechnology or galactic colonization or Apotheosis or surviving the next twenty years).

Nothing about the argument contradicts "the true meaning of life" -- which seems in that argument to be effectively defined as "whatever the AI ends up with as a goal if it starts out without a goal" -- being e.g. paperclips.

2tailcalled
The quoted section more seems like instrumental convergence than orthogonality to me? In a sense, that's its flaw: it's supposed to be an argument that building a superintelligence is desirable because it will let you achieve the meaning of life, but since nothing contradicts "the meaning of life" being paperclips, you can substitute "convert the world into paperclips" into the argument and not lose validity. Yet the argument that we should build a superintelligence because it lets us convert the world into paperclips is of course wrong, so one can go back and say that the original argument was wrong too.

But in order to accept that, one needs to accept the orthogonality thesis. If one doesn't consider "maximize the number of charged-up batteries" to be a sufficiently plausible outcome of a superintelligence that it's even worth consideration, then one is going to be stuck in this sort of reasoning.
6aphyer
Yes. (Good to ask, though. I think the unfinished-story percentage on Glowfic is like 98%)
alexey40

The issue with the first justification is that no one has actually claimed that the existence of such a rule is obvious or self-evident. Publicly holding a non-obvious belief does not obligate the holder to publicly justify that belief to the satisfaction of the author.

However, Yudkowsky also called the rule "straightforward" and said that

violating it this hugely and explicitly is sufficiently bad news that people should've been wary about this post and hesitated to upvote it for that reason alone

That is, he expected the majority of EA Forum members (at least) to also consider it a "basic rule".

alexey10

That right there shows autogynephilia isn't a universal explanation.

Do any prominent pro-AGP people claim it is? Even when I see them described by their opponents, the claim is that there are two clusters of trans women and AGP people are one of them, so aroace trans women could belong to the other cluster without contradicting that theory.

2tailcalled
AGP theorists generally claim that aroace trans women belong to the AGP cluster. The other cluster is named "homosexual" because they are attracted to men (not aroace). AGP is supposed to be the universal explanation among those who are not exclusively androphilic.
alexey70

There are similar claims in Russia as well, for what it's worth.

alexey50

and author intentionally cropped

The author is visible in the next screenshot, unless you meant something else (also, even if he wasn't, the name is part of the URL).

4the gears to ascension
hello I'm an idiot
alexey10

If I were going to play chess against Magnus Carlsen I'd definitely study his games with a computer, and if that computer found a stunning refutation to an opening he liked I'd definitely play it.

Conditional on him continuing to play the opening, I would expect he has a refutation to that refutation, but no reason to use the counter-refutation in public games against the computer. On the other hand, he may not want to burn it on you either.

alexey21

is obviously different than what you said, though

To me it doesn't seem to be? "condoned by social consensus" == "isn't broadly condemned by their community" in the original comment. And 

because the "social consensus" is something designed by people, in many cases with the explicit goal of including circles wider than "them and their friends"

doesn't seem to work unless you believe a majority of people are both actively designing the "social consensus" and have this goal; a majority of the people who design the consensus having this as a goal is not sufficient.

alexey30

It's explicitly the second:

But if they can do that with an AGI capable of ending the acute risk period, then they've probably solved most of the alignment problem. Meaning that it should be easy to drive the probability of disaster dramatically lower.

alexey10

You might have confused "singularity" and "a singleton" (that is, a single AI (or someone using AI) getting control of the world)?

alexey30

Cairo is a problem too, then (it was founded after Arthur lived).

alexey10

It's also interesting that apparently field experts only did about as well as the traditional students:

Differences between Fleet and ITTC participants were generally smaller and neither consistently positive nor negative.

Does experience not help at all?

1crl826
You can only answer that question by including "compared to what?" It would appear that, in this case, experience only taught what could have been learned with a better method of instruction.
alexey30

I don't believe the original novels imply that humanity nearly went extinct and then banded together; that was only in "the junk Herbert's son wrote". Or that strong AI was developed only a short time before the Jihad started.

Neither of these is true in the Dune Encyclopedia version, which Frank Herbert at least didn't strongly disapprove of.

There is still some Goodhart's-Law-ing there, to quote https://dune.wikia.com/wiki/Butlerian_Jihad/DE:

After Jehanne's death, she became a martyr, but her generals continued exponentia
... (read more)
1Virgil Kurkjian
I think it's a reasonable inference that humanity nearly went extinct, given that the impact of the Jihad was so pronounced as to affect all human culture for the next 10,000+ years. And I think it's a reasonable inference that we banded together, given that we did manage to win.
alexey00

Whereas I can look at a regular triangle and see its ∆-ness from outside the simulation, I cannot do the same (let's suppose) for keys of the right shape to open lock L.

Why suppose this and not the opposite? If you understand L well enough to see immediately whether a key opens it, does this make L-openingness intrinsic, so that intrinsicness/extrinsicness is relative to the observer?

And on the other hand, someone else needs to simulate a ruler to check for ∆-ness, so it is an extrinsic property to him.

Namely, goodness of a state of affairs is something that I

... (read more)
alexey50

I've taken the survey.

alexey00

Most leftists ... believe we can all agree on what crops to grow (what social values to have [2])

Whose slogan is "family values", again?

and pull out and burn the weeds of nostalgia, counter-revolution, and the bourgeoisie

Or the weeds of revolution, hippies, and trade unions...

Conservatives view their own society the way environmentalists view the environment: as a complex organism best not lightly tampered with. They're skeptical of the ability of new policies to do what they're supposed to do, especially a whole bunch of new policies

... (read more)
0lmn
I think the conservative attitude towards those things is more like the environmentalist attitude towards invasive species.
alexey260

I've taken the survey.

alexey00

Second AI: If I just destroy all humans, I can be very confident any answers I receive will be from AIs!

alexey20

The amount of line emission from a galaxy is thus a rough proxy for the rate of star formation – the greater the rate of star formation, the larger the number of large stars exciting interstellar gas into emission nebulae... Indeed, their preferred model to which they fit the trend converges towards a finite quantity of stars formed as you integrate total star formation into the future to infinity, with the total number of stars that will ever be born only being 5% larger than the number of stars that have been born at this time.

Is this a good proxy for total star formation, or only large star formation? Is it plausible that while no/few large stars are forming, many dwarfs are?

2[anonymous]
That depends on something called the "initial mass function" for a star forming region - the frequency distribution of masses produced. See http://model.galev.org/help/help_imfs.png for two estimated mass functions for our galaxy. Until recently the consensus was that since the initial mass function was pretty similar throughout our own galaxy under very different environments, it should be similar in other places too.

More recently there have been some controversial claims that 'early type' (elliptical) galaxies may have a systematically different mass function than spirals that also varies by galaxy mass; see http://astrobites.org/2012/02/16/the-imf-is-not-universal/ . Other research seems to contradict this; see http://astrobites.org/2014/12/08/counting-stellar-corpses-rethinking-the-variable-initial-mass-function/ . These papers become technical to a point at which I get lost pretty easily in reading them.

If I am reading the paper referred to in the first link correctly, though, their findings, if true, are consistent with one of two scenarios: either the mass function in massive elliptical galaxies is biased towards the formation of large amounts of small stars, or it is biased towards the production of large amounts of large stars which are now dead and contributing excess compact mass in the form of dead star remnants. Both would be consistent with the data (which comes in the form of ratios of luminosity to galactic mass), but a mass function like that of most spirals would not be.

If the mass function stuff turns out to be true, I'm pretty sure it would distort the shape of the curve of star formation referenced in this post one way or another, but not change its ultimate form.
alexey30

But my point is that at some point, a "static analysis" becomes functionally equivalent to running it. If I do a "static analysis" to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for "real", and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.

The crucial words here are "at some point". And Benja's original comment (as I understan... (read more)

alexey10

Suppose I've seen records of some inputs and outputs of a program: 1->2, 5->10, 100->200. In every case I am aware of, it was given a number as input and output the doubled number. I don't have the program's source or the ability to access the computer it's actually running on. I form a hypothesis: if this program received input 10000, it would output 20000. Am I running the program?

In this case: doubling program<->Eliezer, inputs<->comments and threads he is answering, outputs<->his replies.
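The thought experiment above can be made concrete in a few lines: infer a model from the observed records and predict an unseen case, without the source or the machine the original program runs on. The doubling model is the hypothesis from the comment, not anything recovered from an actual program:

```python
# Observed input/output records of the opaque program
observed = {1: 2, 5: 10, 100: 200}

def hypothesis(x):
    # Inferred model: the program doubles its input
    return 2 * x

# The hypothesis matches every record...
consistent = all(hypothesis(i) == out for i, out in observed.items())
# ...so we predict the unseen case without ever running the original program.
prediction = hypothesis(10000)  # 20000
```

Lumifer's reply below is then the claim that this runs the *model*, not the program; alexey's point is that predicting Eliezer's replies has the same structure.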

0Lumifer
No, you've built your model of the program and you're running your own model.
alexey00

But I can still do static analysis of a Turing machine without running it. E.g. I can determine, in finite time, that a T.M. would never terminate on a given input.

3ThisSpaceAvailable
But my point is that at some point, a "static analysis" becomes functionally equivalent to running it. If I do a "static analysis" to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for "real", and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.

Suppose I write a program that is short and simple enough that you can go through it and figure out in your head exactly what the computer will do at each line of code. In what sense has your mind not run the program, but a computer that executes the program has?

Imagine the following dialog:

Alice: "So, you've installed a javascript interpreter on your machine?"
Bob: "Nope."
Alice: "But I clicked on this javascript program, and I got exactly what I was supposed to get."
Bob: "Oh, that's because I've associated javascript source code files with a program that looks at javascript code, determines what the output would be if the program had been run, and outputs the result."
Alice: "So... you've installed a javascript interpreter."
Bob: "No. I told you, it doesn't run the program, it just computes what the result of the program would be."
Alice: "But that's what a javascript interpreter is. It's a program that looks at source code, determines what the proper output is, and gives that output."
Bob: "Yes, but an interpreter does that by running the program. My program does it by doing a static analysis."
Alice: "So, what is the difference? For instance, if I write a program that adds two plus two, what is the difference?"
Bob: "An interpreter would calculate what 2+2 is. My program calculates what 2+2 would be, if my computer had calculated the sum. But it doesn't actually calculate the sum. It just does a static analysis of a program that would have calculated the sum."

I don't see how, outside of a rarefied philosophical conte
alexey10

If I'm figuring out what output a program "would" give "if" it were run, in what sense am I not running it?

In the sense of not producing effects on the outside world actually running it would produce. E.g. given this program

void launch_nuclear_missiles(void);  /* assumed to be implemented elsewhere */

int goodbye_world() {
    launch_nuclear_missiles();
    return 0;
}

I can conclude running it would launch missiles (assuming suitable implementation of the launch_nuclear_missiles function) and output 0 without actually launching the missiles.
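The same point can be made runnable. As a sketch (in Python rather than C, and using the standard `ast` module as the "static analyzer"), we can detect that running the code would call `launch_nuclear_missiles()` without ever executing it:

```python
import ast

# Source of the program under analysis -- never executed, only parsed
source = """
def goodbye_world():
    launch_nuclear_missiles()
    return 0
"""

tree = ast.parse(source)
# Collect the names of all functions the code would call if run
called = {node.func.id for node in ast.walk(tree)
          if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
# 'launch_nuclear_missiles' appears in `called`; no missiles are launched.
```

This is the asymmetry the comment points at: the analysis predicts the program's side effects without producing them, which is exactly what "running it" would not do.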

3ThisSpaceAvailable
Benja defines an l-zombie as "a Turing machine which, if anybody ever ran it..." A Turing Machine can't launch nuclear missiles. A nuclear missile launcher can be hooked up to a Turing Machine, and launch nuclear missile on the condition that the Turing Machine reach some state, but the Turing Machine isn't launching the missiles, the nuclear missile launcher is.
8FeepingCreature
Within the domain that the program has run (your imagination) missiles have been launched.