All of Isaac King's Comments + Replies

Teleporting an object 1 meter up gives it more energy the closer it is to the planet, because gravity gets weaker the further away it is. If you're at infinity, it adds 0 energy to move further away.
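A minimal sketch of that claim (assuming a point-mass Earth and Newtonian gravity; the numbers and function name are just illustrative): the energy gained from a 1 m lift falls off with distance and tends to zero at infinity.

```python
# Sketch: energy gained by lifting a 1 kg mass 1 m higher at various distances
# from Earth's center, using Newtonian gravity. Assumes a point-mass Earth.
GM = 3.986e14  # m^3/s^2, Earth's standard gravitational parameter

def lift_energy(r, h=1.0, m=1.0):
    """Potential energy gained by raising mass m from radius r to r + h (joules)."""
    # U(r) = -GM*m/r, so dU = GM*m*(1/r - 1/(r+h))
    return GM * m * (1.0 / r - 1.0 / (r + h))

for r in [6.371e6, 2 * 6.371e6, 10 * 6.371e6, 1e12]:
    print(f"r = {r:.3e} m: energy gained = {lift_energy(r):.3e} J")
# The energy per metre shrinks roughly as 1/r^2 and goes to 0 as r grows without bound.
```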

I think your error is in not putting real axes on your phase space diagram. If going to the right increases your potential energy, and the center has 0 potential energy, then being to the left of the origin means you have negative potential energy? This is not how orbits work; a real orbit would never leave the top right quadrant of the phase space since neithe... (read more)

2Ben
You have misunderstood me in a couple of places. I think maybe the diagram is confusing you, or maybe some of the (very weird) simplifying assumptions I made, but I am not sure entirely.

First, when I say "momentum" I mean actual momentum (mass times velocity). I don't mean kinetic energy. To highlight the relationship between the two, the total energy of a mass on a spring can be written as: E = (1/2)p²/m + (1/2)kx², where p is the momentum, m the mass, k the spring strength and x the position (in units where the lowest potential point is at x=0). The first of the two terms in that expression is the kinetic energy (related to the square of the momentum). The second term is the potential energy, related to the square of the position.

I am not treating gravity remotely accurately in my answer, as I am not trying to be exact but illustrative. So, I am pretending that gravity is just a spring. The force on a spring increases with distance, gravity decreases. That is obviously very important for lots of things in real life! But I will continue to ignore it here because it makes the diagrams simpler, and it's best to understand the simple ones first before adding the complexity.

Here, because we are pretending gravity is a spring, potential energy is related to the square of the position (x²). The potential energy is zero when x=0, but it increases in either direction from the middle. Similarly, in the diagram, the kinetic energy is related to the square of the momentum, so we have zero kinetic energy in the vertical middle, but going either upwards or downwards would increase the kinetic energy.

As I said, the circles are the energy contours; any two points on the same circle have the same total energy. Over time, our oscillator will just go around and around in its circle, never going up or down in total energy. If we made gravity more realistic then potential energy would still increase in either direction from the middle (minimum at x=0, increasing in eithe
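A minimal numerical sketch of the "going around in a circle" picture (assuming unit mass and spring constant; the leapfrog integrator and variable names are just illustrative): a point on one of those energy contours stays on it as the oscillator evolves.

```python
# Sketch: a simple harmonic oscillator traced in phase space (x, p).
# Total energy E = p^2/(2m) + k*x^2/2 stays constant, so the point moves on a
# closed contour (a circle when m = k = 1).
import math

m, k = 1.0, 1.0          # mass and spring constant
x, p = 1.0, 0.0          # start displaced, at rest
dt = 1e-4

def energy(x, p):
    return p**2 / (2 * m) + k * x**2 / 2

E0 = energy(x, p)
for _ in range(200_000):          # leapfrog (kick-drift-kick) integration
    p -= k * x * dt / 2
    x += p / m * dt
    p -= k * x * dt / 2

print(f"initial energy {E0:.6f}, energy after many orbits {energy(x, p):.6f}")
# The energy barely drifts: the trajectory keeps circling the same contour.
```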

Why is it a sickness of soul to abuse an animal that's been legally defined as a "pet", but not to define an identical animal that has not been given this arbitrary label?

1Purplehermann
They're not identical. First, they have a different status, much the same as citizens and aliens have different rights. Second, different species of animals have different relationships with humanity:
Dogs are bred to be symbiotic companions.
Cats are parasites if allowed, pest control if tolerated.
Rats are disease-vector scavengers.
Chickens are livestock - they lay infertile eggs for human consumption!
5Richard_Kennaway
Intent. I don’t know what’s with this “legally defined” and “arbitrary label”. If someone adopts a stray cat as a companion, legal definitions are not involved. The law is not involved. “Pet” is not an arbitrary label, it is a word with an ordinary meaning that everyone knows. You may disagree with my acceptance of livestock and rejection of maltreating pets, but your faux-naïf framing is not an argument. You use the word “torture” a lot, but torture has a specific meaning: suffering deliberately inflicted in order to coerce or punish someone, or as an end in itself. (“The purpose of torture is torture.” — O’Brien, in “1984”.) This is not the reason for the suffering of livestock, which is a side-effect of the intention of making food. No farmer, on discovering that the conditions of his animals are more humane than he thought they were, will deliberately go out with a cattle prod to make up the loss of the suffering he thought he was inflicting. If market conditions change to make crops a more profitable use of his land, he will switch without a thought of all the animal suffering he is missing out on.

Eliezer's argument is the primary one I'm thinking of as an obvious rationalization.

https://www.lesswrong.com/posts/KFbGbTEtHiJnXw5sk/i-really-don-t-understand-eliezer-yudkowsky-s-position-on

https://benthams.substack.com/p/against-yudkowskys-implausible-position

I'm not confident about fetuses either, hence why I generally oppose abortion after the fetus has started developing a brain.

Different meanings of "bad". The former is making a moral claim, the second presumably a practical one about the person's health goals. "Bad as in evil" vs. "bad as in ineffective".

Hitler was an evil leader, but not an ineffective one. He was a bad person, but he was not bad at gaining political power.

It seems unlikely to me that the amount of animal-suffering-per-area goes down when a factory farm replaces a natural habitat; natural selection is a much worse optimizer than human intelligence.

And that's a false dichotomy anyway; even if factory farms did reduce suffering per area, you could instead pay for something else to be there that has even less suffering.

3ChristianKl
You claimed that you are interested in changing your mind. If that were true, you would be willing to find cruxes. It's different if your crux is that you don't believe that factory farms destroy enough natural habitat than if your crux is that even if factory farms did destroy enough habitat, it wouldn't meaningfully offset the harm that you think they cause. There are EA people who argue that everyone should donate most of their resources to EA causes. It's unclear to me why you shift to that when we discuss the issue of veganism.

I agree with the first bullet point in theory, but see the Corrupted Hardware sequence of posts. It's hard to know the true impact of most interventions, and easy for people to come up with reasons why whatever they want to do happens to have large positive externalities. "Don't directly inflict pain" is something we can be very confident is actually a good thing, without worrying about second-order effects.

Additionally, there's no reason why doing bad things should be acceptable just due to also doing unrelated good things. Sure it's net positive from a c... (read more)

Do you also find it acceptable to torture humans you don't personally know, or a pet that someone purchased only for the joy of torturing it and not for any other service? If not, the companionship explanation is invalid and likely a rationalization.

2weightt an
How many randomly sampled humans would I rather condemn to torture to save my mother? Idk, more than one, tbh. Unvirtuous. This human is disgusting, as they consider it fun to deal a lot of harm to the persons in their direct relationships.

Also I really don't like how you jump to "it's all rationalization" with respect to values! Like, the thing about utilitarian-ish value systems is that they deal poorly with the preferences of other people (they mostly ignore them). Preference-based views deal poorly with creation and non-creation of new persons. I can red-team them and find real murderous decision recommendations. Maybe, instead of anchoring to the first proposed value system, it's better to understand what the values of real-life people are? Maybe there is no simple formulation of them; maybe it's a complex thing.

Also, disclaimer, I'm totally for making animals better off! (Including wild animals.) Just I don't think it's an inference from some larger moral principle; it's just my aesthetic preference, and it's not that strong. And I'm kinda annoyed at EAs who by "animal welfare" mean dealing band-aids to farm chickens. Like, why? You can just help make lab-grown meat a thing faster; it's literally the only thing that's going to change it.

I agree that this is technically a sound philosophy; the is-ought problem makes it impossible to say as a factual matter that any set of values is wrong. That said, I think you should ask yourself why you oppose the mistreatment of pets and not other animals. If you truly do not care about animal suffering, shouldn't the mistreatment of a pet be morally equivalent to someone damaging their own furniture? It may not have been a conscious decision on your part, but I expect that your oddly specific value system is at least partially downstream of the fact that you grew up eating meat and enjoy it.

3Richard_Kennaway
I got the strong impression that you were presenting your values regarding animals as right, calling meat-eating an “obvious rationality failure”. Now you switch to the relativism that you explicitly rejected previously? Regarding pets, people cultivate close emotional relationships with their pets. That is what pets are for. For someone to cultivate an abusive relationship with a pet strikes me as symptomatic of a sickness in their soul. That is the wrongness of it. (BTW, some vegans reject the use of animals for any purpose, hence oppose the keeping of pets.) I don’t see anything “oddly specific” about my values around animals. They seem to me boringly unremarkable. And of course my values are downstream of my upbringing, as yours are of yours.

Meat-eating (without offsetting) seems to me like an obvious rationality failure. Extremely few people actually take the position that torturing animals is fine; that it would be acceptable to do to a pet or even a stray. Yet people are happy to pay others to do it for them, as long as it occurs where they can't see it happening.

Attempts to point this out to them are usually met with deflection or anger, or among more level-headed people, with elaborate rationalizations that collapse under minimal scrutiny. ("Farming creates more animals, so as long as the... (read more)

3tailcalled
Idea: it should be illegal to keep animals in captivity, so any organization that wants to butcher animals for flesh and blood should make a space that's attractive for animals to hang out so they can hunt them there.

I don't much care about animal suffering.

Really. I am not pretending not to care for self-serving reasons. I. Actually. Do. Not. Care.

Life has not brought me any occasion to slaughter and butcher a carcase myself, but if it did, I'd be willing to do it. I am not drawn to fishing as a recreation, but I have no moral objection to it, if the catch is to be eaten. On the other hand, I would be disinclined to the sort of sport fishing where the catch is released back into the water.

I wouldn't eat primates, and certainly not humans. I wouldn't go game hunting — ... (read more)

4weightt an
I think you present here a false dichotomy: some impartial utilitarian-ish view vs. hardcore moral relativism.

Pets are sometimes called companions. It's as if they provide some service and receive some service in return, all of this with trust and positive mutual expectations, and that demands some moral considerations / obligations, just like a friendship or family relationship. I think a mutualist / contractualist framework accounts for that better. It makes the prediction that such relationships will receive additional moral consideration, and they actually do in practice. And it predicts that wild animals wouldn't, and they don't, in practice. Success?

So, people just have attitudes about animals like they do about any other person, exacerbated by how little status and power animals have. Especially shrimp. Who the fuck cares about shrimp? You can only care about shrimp if you galaxy-brain yourself on some weird ethics system. I agree that they have no consistent moral framework that backs up that attitude, but it's not that fair to force them into your own with trickery or frame control.

> Extremely few people actually take the position that torturing animals is fine

Wrong. Most humans would be fine answering that torturing 1 million chickens is an acceptable tradeoff to save 1 human. You just don't torture them for no reason, as it's unvirtuous and icky.
2Nathan Helm-Burger
I've been having some related discussions in this comment section: https://forum.effectivealtruism.org/posts/nrC5v6ZSaMEgSyxTn/discussion-thread-animal-welfare-vs-global-health-debate?commentId=aBYj4P6JWJyyZmSEa 
2ZY
I think I observe this a lot generally: "as soon as those implications do not personally benefit them", and even more so when this comes with a cost/conflict of interest.

On rationality in decision-making (not the truth-seeking part of belief formation, I guess) - I thought it is more like being consistent with one's own preferences and values (if we are constraining to the LessWrong/Sequences-ish definition)? I have a hot take:

1. If the action space for committing to a belief is a binary choice, then when people do not commit to a belief, the degree to which they believe it is less than for those who do. If we have to make it a binary classification, then it is not really a true belief if they do not commit to it.
2. It could be that the action following from a belief is a spectrum, and then people in this case could, for example, eat less meat, matching the degree of belief in "eating meat is not moral".
6cubefox
This poses an interesting question: Where is the difference between failures of rationality and failures of morality? No doubt there is some sort of contradiction (loosely speaking) in holding these two mental states simultaneously:

1. The belief that eating meat is bad
2. The intention to eat meat

This would ordinarily be called a failure of morality. But now compare this pair:

1. The belief that eating chocolate is bad
2. The intention to eat chocolate

Now this seems more like a failure of rationality. Perhaps the difference is that in the first pair, "bad" means "bad overall", while in the second pair, "bad" means "bad for me". That's the difference between altruism and egoism.

I eat most meats (all except octopus and chicken) and have done this my entire life, except once when I went vegan for Lent. This state seems basically fine because it is acceptable from scope-sensitive consequentialist, deontic, and common-sense points of view, and it improves my diet enough that it's not worth giving up meat "just because".

  • According to EA-style consequentialism, eating meat is a pretty small percentage of your impact, and even if you're not directly offsetting, the impact can be vastly outweighed by positive impact in your career or dona
... (read more)
8ChristianKl
Are you in favor of destroying the habitats of all wild animals who live in conditions with a lot of suffering? Or to be more concrete, if I buy meat produced by destroying the habitats of enough suffering wild animals so that cows can graze in the area, do you think I have done adequate offsetting for my meat consumption?
6brambleboy
While most people have super flimsy defenses of meat-eating, that doesn't mean everyone does. Some people simply think it's quite unlikely that non-human animals are sentient (besides primates, maybe). For example, IIRC Eliezer Yudkowsky and Rob Bensinger's guess is that consciousness is highly contingent on factors such as general intelligence and sociality, or something like that. I think the "5% chance is still too much" argument is convincing, but it invites similar questions such as "Are you really so confident that fetuses aren't sentient? How could you be so sure?"
1Guilherme Marthe (gui42)
You can check an older version here  https://web.archive.org/web/20220429114903/https://www.lesswrong.com/posts/rEZqP7K4MG6waC2zf/optimizing-crop-planting-with-mixed-integer-linear 
4Kas_Hauser
Another copy edit: persecute -> prosecute.

There is no one Overton window, it's culture-dependent. "Sleeping in a room with a fan on will kill you" is within the Overton window in South Korea, but not in the US. Wikipedia says this is false rather than adopting a neutral stance because that's the belief held by western academia.

6Viliam
I may be wrong here, but I think I vaguely remember that each language version of Wikipedia is supposed to represent the speakers of the language. (Which makes it difficult for English, because there are too many countries involved.) Thus, as a hypothetical example, if Korean "reliable sources" agree that sleeping in a room with a fan will kill you, the Korean Wikipedia should say so. (It may or may not also mention that people in other countries are in denial about this danger.) This is probably more relevant for notability, for example someone popular in South Korea but virtually unknown in the rest of the world would have a page in Korean Wikipedia, but not in e.g. English Wikipedia.
6ChristianKl
When it comes to Ivermectin, Wikipedia took the position that meta-analyses in reputable journals in Western academia weren't notable and that the thing that's important is what non-academic authorities like the CDC had to say about it.

I didn't claim that the far-left generally agrees with the NYT, or that the NYT is a far-left outlet. It is a center-left outlet, which makes it cover far-left ideas much more favorably than far-right ideas, while still disagreeing with them.

5the gears to ascension
Ehh, maybe fair ish, but, you said that NYT is left at all; I think that's wrong, that they're a similar kind of liberal to eg wapo, and that it's a misleading image the NYT cultivates as being "The Left's Voice" in order to damage discourse in ways that seem on brand for the kind of behavior by the NYT described in OP. I find this behavior on NYT's part frustrating, hence it coming up. In the single dimensional model it's a centrist outlet, similar to wapo; It's not "near left" at all, again, it seems to me that even near left progressivism is primarily damaged by NYT. But I don't think the single dimensional model accurately represents the differences of opinion between these opinion clusters anyway - left is a different direction than liberal, is a different direction than libertarian, is a different direction than right, is a different direction than authoritarian - many of these opinion clusters, if you dot product them with each other, are mildly positive or negative, but there are subsets of them that are intensely resonant. And I think accepting the NYT's frame on itself is letting an adversarial actor mess up the layout of your game space, such that the groups that ought to be realizing their opinions can resonate well are not. Letting views be overcoupled due to using too-low-dimensional models to represent them seems like a core way memetic attacks on coordination between disagreeing groups work.

This is not an idiosyncrasy of Gerard and people like him, it is core to Wikipedia's model. Wikipedia is not an arbiter of fact, it does not perform experiments or investigations to determine the truth. It simply reflects the sources.

This means it parrots the majority consensus in academia and journalism. When that consensus is right, as it usually is, Wikipedia is right. When that consensus is wrong, as happens more frequently than its proponents would like to admit but still pretty rarely overall, Wikipedia is wrong. This is by design.

Wikipedia is not objective, it is neutral. It is an average of everyone's views, skewed towards the views of the WEIRD people who edit Wikipedia and the people respected by those people.

4Viliam
Wikipedia was supposed to describe the opinions within the Overton window. Neutral point of view, sections on criticism, etc., but no need to teach the controversy about Flat Earth. But there is no precise definition of the Overton window, and some Wikipedia admins (such as David Gerard, but some other names also ring a bell) decided to redefine it to match their political tribe.

The whole first part of the article is how this is wrong, due to the gaming of notable sources

In the linked Wikipedia discussion, someone asked David to provide sources for his claim and he refused to do so, so I would not consider them to be relevant evidence.

As for the factual question, I've come across one article from Quillette that seemed significantly biased and misleading, and I wouldn't be surprised if there were more.  There was one hoax that they briefly fell for and then corrected within hours, which was the main reason that Wikipedia considers them unreliable, but this says more about Wikipedia than Quillette. (I'm sure many of Wik... (read more)

6ChristianKl
I don't think that there's a single major newspaper that doesn't contain at least one misleading article on most days. If a few misleading articles ruled out a source, then you just couldn't use newspapers as sources. Here the question isn't just whether Quillette has lower standards than the NYT but whether it has lower standards than an outlet like the Huffington Post. The NYT is commonly seen as "the paper of record" because they invest more effort into fact-checking than an outlet like the Huffington Post.
-1the gears to ascension
NYT's bias is generally highly anti-left, [to a similar degree to Fox] (subclaim retracted); NYT seems to support liberals, as long as they have no [edit: more accurately, have convenient] left views OR are caricatures basically nobody, including other leftists, would agree with. They paper it over with weird liberal claims that try to appear left but are emphatically nothing of the kind [this one probably holds up]. And any time actual progressive stuff comes up, NYT manages to deeply distort it, similarly to how they do other topics - if I were to take an intentional stance, I'd be wondering if the NYT exists primarily to be a caricature that people can use to diss actual progressives. Making a case for this would be a pain [due to being about the average of a large body of work], and it's a pattern that has exceptions, whereas I imagine Quillette doesn't have exceptions where actually they like progressive stuff sometimes. But I agree that source reliability should be about predictive accuracy. If anything, source reliability should have eliminated more sources Gerard relies on - such as, for example, the NYT. I doubt there are many sources besides the AP that would be left over. Also, a general suggestion that unpacking "woke" into more specific components may produce more clarity. Edit: line-targeted agree-disagree would be useful here. I maintain the NYT seems liberal anti-progressive; I have never known someone progressive to think positively of the NYT as a whole.

I think Michael's response to that is that he doesn't oppose that. He only opposes a lawyer who tries to prevent their client from getting a punishment that the lawyer believes would be justified. From his article:

It is not wrong per se to represent guilty clients. A lawyer may represent a factually guilty client for the purpose of preventing unjust punishments or rights-violations. What is unethical is to represent a person who you know committed a crime that was really wrong and really deserves to be punished, and to attempt to stop that person from getting the punishment he deserves.

Oh weird, apparently all my running pm2 jobs cancelled themselves at the end of the month. No idea what caused that. Thanks, fixed now.

Did you confirm with the doctor that this actually occurred? I'd be worried about a false memory.

Ideally, this would eliminate [...] the “learning the test” issues.

 

How would it do that? If they learned the test in advance, it would be in their long-term memory, and they'd still remember it when tested on the drug.

They didn't change their charter.

https://forum.effectivealtruism.org/posts/2Dg9t5HTqHXpZPBXP/ea-community-needs-mechanisms-to-avoid-deceptive-messaging

1wassname
Thanks, I hadn't seen that, I find it convincing.

Hmm, interesting. The exact choice of decimal place at which to cut off the comparison is certainly arbitrary, and that doesn't feel very elegant. My thinking is that within the constraint of using floating point numbers, there fundamentally isn't a perfect solution. Floating point notation changes some numbers into other numbers, so there are always going to be some cases where number comparisons are wrong. What we want to do is define a problem domain and check if floating point will cause problems within that domain; if it doesn't, go for it, if it does... (read more)

5faul_sname
BTW as a concrete note, you may want to sub in 15 - ceil(log10(n)) instead of just "15", which really only matters if you're dealing with numbers above 10 (e.g. 1000 is represented as 0x408F400000000000, while the next float 0x408F400000000001 is 1000.000000000000114, which differs in the 13th decimal place).

In the general case I agree it's not necessarily trivial; e.g. if your program uses the whole range of decimal places to a meaningful degree, or performs calculations that can compound floating point errors up to higher decimal places. (Though I'd argue that in both of those cases pure floating point is probably not the best system to use.) In my case I knew that the intended precision of the input would never be precise enough to overlap with floating point errors, so I could just round anything past the 15th decimal place down to 0.
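A minimal sketch of that kind of comparison (combining the rounding idea with faul_sname's magnitude adjustment; the function name and the 15-significant-digit cutoff are illustrative, not the actual code being discussed):

```python
# Sketch: compare floats after discarding digits beyond what a double can
# reliably represent, adjusting the cutoff by the magnitude of the numbers
# per the 15 - ceil(log10(n)) suggestion.
import math

def approx_equal(a, b, sig_digits=15):
    """True if a and b agree once digits beyond ~15 significant figures are dropped."""
    if a == b:
        return True
    scale = max(abs(a), abs(b))
    # Number of decimal places we can trust for values of this magnitude.
    decimals = sig_digits - math.ceil(math.log10(scale)) if scale > 0 else sig_digits
    return round(a, decimals) == round(b, decimals)

print(approx_equal(0.1 + 0.2, 0.3))                 # True: differs only past the trusted digits
print(approx_equal(1000.0, 1000.000000000000114))   # True: the next representable double
print(approx_equal(1.0, 1.001))                     # False: a real difference
```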

6faul_sname
That makes sense. I think I may have misjudged your post, as I expected that you would classify that kind of approach as a "duct tape" approach.

If we figure out how to build GAI, we could build several with different priors, release them into the universe, and see which ones do better. If we give them all the same metric to optimize, they will all agree on which of them did better, thus determining one prior that is the best one to have for this universe.

I don't understand what "at the start" is supposed to mean for an event that lasts zero time.

1simon
In the case where it's instantaneous, "at the start" would effectively mean right before (e.g. a one-sided limit).

Ok now I'm confused about something. How can it be the case that an instantaneous perpendicular burn adds to the craft's speed, but a constant burn just makes it go in a circle with no change in speed?

1simon
The trajectory is changing during the continuous burn, so the average direction of the continuous burn is between perpendicular to where the trajectory was at the start of the burn and where it was at the end. The instantaneous burn, by contrast, is assumed to be perpendicular to where the trajectory was at the start only. If you instead made it in between perpendicular to where it was at the start and where it was at the end, as in the continuous burn, you could make it also not add to the craft's speed. Going back to the original discussion, yes this means that an instantaneous burn that doesn't change the speed is pointing slightly forward relative to where the rocket was going at the start of the burn, pushing the rocket slightly backward. But, this holds true even if you have a very tiny exhaust mass sent out at a very high velocity, where it obviously isn't going at the same speed as the rocket in the planet's reference frame.

...Are you just trying to point out that thrusting in opposite directions will cancel out? That seems obvious, and irrelevant. My post and all the subsequent discussion are assuming burns of epsilon duration.

1simon
No.  I'm pointing out that continuous thrust that's (continuously during the burn) perpendicular to the trajectory doesn't change the speed. This also means that (going to your epsilon duration case) if the burn is small enough not to change the direction very much, the burn that doesn't change the speed will be close to perpendicular to the trajectory (and in the low mass change (high exhaust velocity) limit it will be close to halfway between the perpendiculars to the trajectory before and after the burn, even if it does change the direction a lot). That's independent of the exhaust velocity, as long as that velocity is high, and when it's high it will also tend not to match the ship's speed since it's much faster, which maybe calls into question your statement in the post, quoted above, which I'll requote:

I don't understand how that can be true? Vector addition is associative; it can't be the case that adding many small vectors behaves differently from adding a single large vector equal to the small vectors' sum. Throwing one rock off the side of the ship followed by another rock has to do the same thing to the ship's trajectory as throwing both rocks at the same time.

1simon
Yes, it's associative. But if you thrust at 90 degrees to the rocket's direction of motion, you aren't thrusting in a constant direction, but in a changing direction as the trajectory changes. This set of vectors in different directions will add up to a different combined vector than a single vector of the same total length pointing at 90 degrees to the direction of motion that the rocket had at the start of the thrusting.

How is that relevant? In the limit where the retrograde thrust is infinitesimally small, it also does not increase the length of the main vector it is added to. Negligibly small thrust results in negligibly small change in velocity, regardless of its direction.

2simon
I implicitly meant, but again did not say explicitly, that the ratio of the contribution to the length of the vector from adding an infinitesimal sideways vector, as compared to the length of that infinitesimal vector, goes to zero as the length of the sideways addition goes to zero (because it scales as the square of the sideways vector). So adding a large number of tiny instantaneously-sideways vectors, in the limit where the size of each goes to zero while holding the total amount of thrust constant, results in a non-zero change in direction but zero change in speed. Whereas, if you add a large number of tiny instantaneous aligned vectors, the ratio of the contribution to the length of the vector to the length of each added tiny vector is 1, and if you add up a whole bunch of such additions, it changes the length and not the direction, regardless of how large or small each addition is.
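A quick numerical check of this limit (a sketch; the starting velocity and the total sideways impulse of 0.5 are arbitrary): one big perpendicular impulse adds speed, but the same total impulse split into many tiny pieces, each perpendicular to the current velocity, only rotates the velocity.

```python
# Sketch: compare one large perpendicular impulse with the same total impulse
# split into n tiny pieces, each applied at 90 degrees to the *current* velocity.
import math

def apply_impulses(v, total_dv, n):
    """Split total_dv into n impulses, each perpendicular to the current velocity."""
    vx, vy = v
    for _ in range(n):
        speed = math.hypot(vx, vy)
        px, py = -vy / speed, vx / speed      # unit vector perpendicular to velocity
        vx += px * total_dv / n
        vy += py * total_dv / n
    return vx, vy

v0 = (1.0, 0.0)                               # initial speed 1
for n in [1, 10, 1000, 100_000]:
    vx, vy = apply_impulses(v0, 0.5, n)
    print(n, math.hypot(vx, vy))
# n = 1 gives speed sqrt(1 + 0.25) ~ 1.118; as n grows the final speed
# approaches 1: the direction changes but the speed does not.
```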

Unfortunately I already came across that paradox a day or two ago on Stack Exchange. It's a good one though!

Yeah, my numerical skill is poor, so I try to understand things via visualization and analogies. It's more reliable in some cases, less in others.

when the thrust is at 90 degrees to the trajectory, the rocket's speed is unaffected by the thrusting, and it comes out of the gravity well at the same speed as it came in.

 

That's not accurate; when you add two vectors at 90 degrees, the resulting vector has a higher magnitude than either. The rocket will be accelerated to a faster speed.

1simon
In the limit where the perpendicular side vector is infinitesimally small, it does not increase the length of the main vector it is added to.  If you keep thrusting over time, as long as you keep the thrust continuously at 90 degrees as the direction changes, the speed will still not change. I implicitly meant, but did not explicitly say, that the thrust is continuously perpendicular in this way. (Whereas, if you keep the direction of thrust fixed when the direction of motion changes so it's no longer at 90 degrees, or add a whole bunch of impulse at one time like shooting a bullet out at 90 degrees, then it will start to add speed.)
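A sketch of the continuous case (the gravitational parameter, thrust level, time step, and starting state are all arbitrary choices): a craft in an inverse-square field with thrust held exactly perpendicular to its velocity gains essentially no energy, because perpendicular thrust does no work.

```python
# Sketch: inverse-square gravity plus a continuous thrust kept at 90 degrees to
# the velocity. The specific orbital energy v^2/2 - GM/r should stay (nearly)
# constant; any drift below is just integration error.
import math

GM = 1.0
thrust = 0.05                       # acceleration magnitude, always perpendicular to velocity
dt = 1e-4

x, y = 1.0, 0.0
vx, vy = 0.0, 1.5                   # above escape speed sqrt(2*GM/r) at r = 1

def orbital_energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)

E0 = orbital_energy(x, y, vx, vy)
for _ in range(200_000):
    r = math.hypot(x, y)
    v = math.hypot(vx, vy)
    # gravity plus a sideways kick of fixed magnitude, perpendicular to velocity
    ax = -GM * x / r**3 - thrust * vy / v
    ay = -GM * y / r**3 + thrust * vx / v
    vx += ax * dt                   # semi-implicit Euler: velocity first,
    vy += ay * dt
    x += vx * dt                    # then position with the updated velocity
    y += vy * dt

print(f"energy change: {orbital_energy(x, y, vx, vy) - E0:.2e}")
# The change is tiny compared with what the same thrust would add if it were
# aligned with the velocity: the trajectory bends, but the speed at a given
# radius is essentially unchanged.
```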

I don't think so. The difference in the gravitational field between the bottom point of the swing arc and the top is negligible. The swing isn't an isolated system, so you're able to transmit force to the bar as you move around.

There's a common explanation you'll find online that swings work by you changing the height of your center of mass. This is wrong, since it would imply that a swing with rigid bars wouldn't work, but they do.

The actual explanation seems to be something to do with changing your angular momentum at specific points by rotating your body.

I'm still confused about some things, but the primary framing of "less time spent subject to high gravitational deceleration" seems like the important insight that all other explanations I found were missing.

Probability is a geometric scale, not an additive one.  An order of magnitude centered on 10% covers ~1% - 50%.

https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
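Spelling out the arithmetic (a quick sketch; this reads "an order of magnitude" as a factor of 10 applied to the odds):

```python
# Sketch: convert a probability to odds, scale the odds by 10x in each
# direction, and convert back.
def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

p = 0.10
print(prob(odds(p) / 10))   # ~0.011  (~1%)
print(prob(odds(p) * 10))   # ~0.526  (~50%)
```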

0O O
Exactly, not very informative. For pretty much any p(doom) the range is anywhere from non-existent to very likely. When someone gives a Fermi estimate of a p(doom) between 0% and 100% they may as well be saying the p(doom) is between ~0% and ~100%. Divide any number by 10 and multiply by 10 to see this.

Feel free to elaborate on the mistakes and I'll fix them.

That article isn't about e/acc people and doesn't mention them anywhere, so I'm not sure why you think it's intended to be. The probability theory denial I'm referencing is mostly on Twitter.

Great point! I focused on AI risk since that's what most people I'm familiar with are talking about right now, but there are indeed other risks, and that's yet another potential source of miscommunication. One person could report a high p(doom) due to their concerns about bioterrorism, and another interprets that as them being concerned about AI.

Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that's easily mapped to concepts like "unearned confidence", the onlooker is more likely to dismiss whatever you're saying.

It's literally an invitation to irrelevant philosophical debates about how all technologies are risky and we are still alive and I don't know how to get out of here without reference to probabilities and expected values. 

If that comes up, yes. But then it's them who have brought up the fact that probability is relevant, s... (read more)

3Lichdar
I am one of those people; I don't consider myself EA due to its strong association with atheism, but am nonetheless very much for slowing down AGI before it kills us all.

I don't see how they would be. If you do see a way, please share!

I don't understand how either of those are supposed to be a counterexample. If I don't know what seat is going to be chosen randomly each time, then I don't have enough information to distinguish between the outcomes. All other information about the problem (like the fact that this is happening on a plane rather than a bus) is irrelevant to the outcome I care about.

This does strike me as somewhat tautological, since I'm effectively defining "irrelevant information" as "information that doesn't change the probability of the outcome I care about". I'm not sure how to resolve this; it certainly seems like I should be able to identify that the type of vehicle is irrelevant to the question posed and discard that information.

2carboniferous_umbraculum
OK, I think this will be my last message in this exchange, but I'm still confused. I'll try one more time to explain what I'm getting at. I'm interested in what your precise definition of subjective probability is. One relevant thing I saw was the following sentence: It seems to give something like a definition of what it means to say something has a 50% chance. I.e. I interpret your sentence as claiming that a statement like 'The probability of A is 1/2' means, or is somehow the same as, a statement a bit like

[*] 'I don't know the exact conditions and don't have enough meaningful/relevant knowledge to distinguish between the possible occurrence of (A) and (not A)'

My reaction was: this can't possibly be a good definition. The airplane puzzle was supposed to be a situation where there is a clear 'difference' in the outcomes - either the last person is in the 1 seat that matches their ticket number or they're not (they're in one of the other 99 seats). It's not as if it's a clearly symmetric situation from the point of view of the outcomes. So it was supposed to be an example where statement [*] does not hold, but where the probability is 1/2. It seems you don't accept that; it seems to me like you think that statement [*] does in fact hold in this case.

But tbh it feels sorta like you're saying you can't distinguish between the outcomes because you already know the answer is 1/2! I.e. even if I accept that the outcomes are somehow indistinguishable, the example is sufficiently complicated on a first reading that there's no way you'd just look at it and go "hmm, I guess I can't distinguish, so it's 1/2". I.e. if your definition were OK it could be used to justify the answer to the puzzle, but that doesn't seem right to me either.

No, I think what I said was correct? What's an example that you think conflicts with that interpretation?

2carboniferous_umbraculum
I have in mind very simple examples. Suppose that first I roll a die. If it doesn't land on a 6, I then flip a biased coin that lands on heads 3/5 of the time. If it does land on a 6, I just record the result as 'tails'. What is the probability that I get heads? This is contrived so that the probability of heads is 5/6 x 3/5 = 1/2. But do you think that in saying this I mean something like "I don't know the exact initial conditions... well enough to have any meaningful knowledge of how it's going to land, and I can't distinguish between the two options."?

Another example: have you heard of the puzzle about the people randomly taking seats on the airplane? It's a well-known probability brainteaser to which the answer is 1/2, but I don't think many people would agree that saying the answer is 1/2 actually means something like "I don't know the exact initial conditions... well enough to have any meaningful knowledge of how it's going to land, and I can't distinguish between the two options."

There needn't be any 'indistinguishability of outcomes' or 'lack of information' for something to have probability 0.5; it can just... well... be the actual result of calculating two distinguishable complementary outcomes.
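For reference, a quick simulation of the airplane puzzle (a sketch; the 100 seats and trial count are the standard setup, chosen for illustration) showing the answer comes out to about 1/2:

```python
# Sketch: passenger 0 sits in a uniformly random seat; every later passenger
# takes their own seat if it's free, otherwise a random free seat. Estimate the
# probability that the last passenger ends up in their own seat.
import random

def last_passenger_gets_own_seat(n=100):
    free = set(range(n))
    free.discard(random.randrange(n))          # passenger 0 sits at random
    for passenger in range(1, n - 1):
        if passenger in free:
            free.discard(passenger)
        else:
            free.discard(random.choice(sorted(free)))
    return (n - 1) in free                      # is the last passenger's seat still free?

trials = 100_000
hits = sum(last_passenger_gets_own_seat() for _ in range(trials))
print(hits / trials)   # ~0.5
```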

I think that's accurate, yeah. What's your objection to it?

1carboniferous_umbraculum
I'm kind of confused what you're asking me - like which bit is "accurate", etc. Sorry, I'll try to restate my question:

Do you think that when someone says something has "a 50% probability" they are saying that they do not have any meaningful knowledge that allows them to distinguish between two options?

I'm suggesting that you can't possibly think that, because there are obviously other ways things can end up 50/50. E.g. maybe it's just a very specific calculation, using lots of specific information, that ends up with the value 0.5 at the end. This is a different situation from having 'symmetry' and no distinguishing information. Then I'm saying: OK, assuming you indeed don't mean the above thing, then what exactly does one mean in general when saying something is 50% likely?

Yeah that was a mistake, I mixed frequentism and propensity together.

I don't have an answer for you, as this is also something I'm confused about. I felt bad seeing 0 answers here, so I just wanted to mention that I asked about this on Manifold and got some interesting discussion, see here: 

1mic
Thanks for setting this up :)

No, I'm using the WYSIWYG editor. It was for a post, not a comment, and definitely the right link.

Edit: Huh, I tried it again and it worked this time. My bad for not reloading to test on a fresh page before posting here, sorry.

This doesn't seem to work anymore? I'm posting the link in the editor and nothing happens, there's just a text link.

2habryka
Are you using the markdown editor? Or maybe you are getting the wrong link?  Still works fine for me.

It's conceptually pretty simple; 240 characters isn't room for a lot. Here's how the writer explained it:
 

Here's the annotated version of my bot: https://pastebin.com/1a9UPKQk

The basic strategy is:

Simulate what the opponent will do on the current turn, and what they would do on the next two turns if I defect twice in a row.

If the results of the simulations are [cooperate, defect, defect], play tit-for-tat. Otherwise, defect.

This will defect against DefectBots, CooperateBots, and most of the silly bots that don't pay attention to the opponent's moves.

... (read more)
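Written out more explicitly (a sketch in Python of the strategy described above, not the actual 240-character JavaScript submission; `simulate_opponent` is a stand-in for running the opponent's submitted code against a hypothetical history):

```python
# Sketch of the strategy: probe whether the opponent punishes defection; if so,
# play tit-for-tat, otherwise defect.
def my_move(opponent_code, history, simulate_opponent):
    C, D = "cooperate", "defect"

    # What will the opponent do right now?
    now = simulate_opponent(opponent_code, history)
    # What would they do over the next two turns if I defected twice in a row?
    next1 = simulate_opponent(opponent_code, history + [(D, now)])
    next2 = simulate_opponent(opponent_code, history + [(D, now), (D, next1)])

    if (now, next1, next2) == (C, D, D):
        # Opponent is responsive (cooperates now, punishes defection): play tit-for-tat.
        return history[-1][1] if history else C
    # Opponent ignores my moves (DefectBot, CooperateBot, etc.): defect.
    return D
```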
1RedMan
Is it safe to call this bot 'tit for tat with foresight and feigned ignorance'? I'm wondering what its actual games looked like and how much of a role the hidden foresight actually played.

The winner was the following program:

try{eval(`{let f=(d,m,c,s,f,h,i)=>{let r=9;${c};return +!!r};r=f}`);let θ='r=h.at(-1);r=!r||r.o',λ=Ω=>r(m,d,θ,c,f,Ω,Ω.map(χ=>({m:χ.o,o:χ.m}))),Σ=(μ,π)=>[...μ,{m:π,o:+!1}],α=λ([...i]),β=λ(Σ(i,α));r=f(θ)&α&!β&!λ(Σ(Σ(i,α),β))|d==m}catch{r = 1}

We're running a sequel, see here to participate.

5habryka
This is the ideal agent. You may not like it, but this is what peak performance looks like. More seriously though, does anyone want to explain how it works?

Well, I'd encourage you to submit this strategy and see how it does. :)
