Teleporting an object 1 meter up gives it more energy the closer it is to the planet, because gravity gets weaker the further away it is. If you're at infinity, it adds 0 energy to move further away.
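To put numbers on it, taking the Newtonian potential energy $U(r) = -GMm/r$: teleporting up 1 meter from radius $r$ costs

$$\Delta U = \frac{GMm}{r} - \frac{GMm}{r+1} = \frac{GMm}{r(r+1)},$$

which shrinks as $r$ grows and goes to zero as $r \to \infty$.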
I think your error is in not putting real axes on your phase space diagram. If going to the right increases your potential energy, and the center has 0 potential energy, then being to the left of the origin means you have negative potential energy? This is not how orbits work; a real orbit would never leave the top right quadrant of the phase space since neithe...
Why is it a sickness of soul to abuse an animal that's been legally defined as a "pet", but not to abuse an identical animal that has not been given this arbitrary label?
Eliezer's argument is the primary one I'm thinking of as an obvious rationalization.
https://benthams.substack.com/p/against-yudkowskys-implausible-position
I'm not confident about fetuses either, which is why I generally oppose abortion after the fetus has started developing a brain.
Different meanings of "bad". The former is making a moral claim, the second presumably a practical one about the person's health goals. "Bad as in evil" vs. "bad as in ineffective".
Hitler was an evil leader, but not an ineffective one. He was a bad person, but he was not bad at gaining political power.
It seems unlikely to me that the amount of animal-suffering-per-area goes down when a factory farm replaces a natural habitat; natural selection is a much worse optimizer than human intelligence.
And that's a false dichotomy anyway; even if factory farms did reduce suffering per area, you could instead pay for something else to be there that has even less suffering.
I agree with the first bullet point in theory, but see the Corrupted Hardware sequence of posts. It's hard to know the true impact of most interventions, and easy for people to come up with reasons why whatever they want to do happens to have large positive externalities. "Don't directly inflict pain" is something we can be very confident is actually a good thing, without worrying about second-order effects.
Additionally, there's no reason why doing bad things should be acceptable just because you also do unrelated good things. Sure, it's net positive from a c...
Do you also find it acceptable to torture humans you don't personally know, or a pet that someone purchased only for the joy of torturing it and not for any other service? If not, the companionship explanation is invalid and likely a rationalization.
I agree that this is technically a sound philosophy; the is-ought problem makes it impossible to say as a factual matter that any set of values is wrong. That said, I think you should ask yourself why you oppose the mistreatment of pets and not other animals. If you truly do not care about animal suffering, shouldn't the mistreatment of a pet be morally equivalent to someone damaging their own furniture? It may not have been a conscious decision on your part, but I expect that your oddly specific value system is at least partially downstream of the fact that you grew up eating meat and enjoy it.
Meat-eating (without offsetting) seems to me like an obvious rationality failure. Extremely few people actually take the position that torturing animals is fine; that it would be acceptable to do to a pet or even a stray. Yet people are happy to pay others to do it for them, as long as it occurs where they can't see it happening.
Attempts to point this out to them are usually met with deflection or anger, or, among more level-headed people, with elaborate rationalizations that collapse under minimal scrutiny. ("Farming creates more animals, so as long as the...
I don't much care about animal suffering.
Really. I am not pretending not to care for self-serving reasons. I. Actually. Do. Not. Care.
Life has not brought me any occasion to slaughter and butcher a carcase myself, but if it did, I'd be willing to do it. I am not drawn to fishing as a recreation, but I have no moral objection to it, if the catch is to be eaten. On the other hand, I would be disinclined to the sort of sport fishing where the catch is released back into the water.
I wouldn't eat primates, and certainly not humans. I wouldn't go game hunting — ...
I eat most meats (all except octopus and chicken) and have done this my entire life, except once when I went vegan for Lent. This state seems basically fine because it is acceptable from scope-sensitive consequentialist, deontic, and common-sense points of view, and it improves my diet enough that it's not worth giving up meat "just because".
The images appear to be broken.
Fixed, thank you.
There is no one Overton window, it's culture-dependent. "Sleeping in a room with a fan on will kill you" is within the Overton window in South Korea, but not in the US. Wikipedia says this is false rather than adopting a neutral stance because that's the belief held by western academia.
I didn't claim that the far-left generally agrees with the NYT, or that the NYT is a far-left outlet. It is a center-left outlet, which makes it cover far-left ideas much more favorably than far-right ideas, while still disagreeing with them.
This is not an idiosyncrasy of Gerard and people like him, it is core to Wikipedia's model. Wikipedia is not an arbiter of fact, it does not perform experiments or investigations to determine the truth. It simply reflects the sources.
This means it parrots the majority consensus in academia and journalism. When that consensus is right, as it usually is, Wikipedia is right. When that consensus is wrong, as happens more frequently than its proponents would like to admit but still pretty rarely overall, Wikipedia is wrong. This is by design.
Wikipedia is not objective, it is neutral. It is an average of everyone's views, skewed towards the views of the WEIRD people who edit Wikipedia and the people respected by those people.
The whole first part of the article is about how this is wrong, due to the gaming of notable sources.
In the linked Wikipedia discussion, someone asked David to provide sources for his claim and he refused to do so, so I would not consider them to be relevant evidence.
As for the factual question, I've come across one article from Quillette that seemed significantly biased and misleading, and I wouldn't be surprised if there were more. There was one hoax that they briefly fell for and then corrected within hours, which was the main reason that Wikipedia considers them unreliable, but this says more about Wikipedia than Quillette. (I'm sure many of Wik...
I think Michael's response to that is that he doesn't oppose that. He only opposes a lawyer who tries to prevent their client from getting a punishment that the lawyer believes would be justified. From his article:
It is not wrong per se to represent guilty clients. A lawyer may represent a factually guilty client for the purpose of preventing unjust punishments or rights-violations. What is unethical is to represent a person who you know committed a crime that was really wrong and really deserves to be punished, and to attempt to stop that person from getting the punishment he deserves.
Oh weird, apparently all my running pm2 jobs cancelled themselves at the end of the month. No idea what caused that. Thanks, fixed now.
Oh whoops, thank you.
Did you confirm with the doctor that this actually occurred? I'd be worried about a false memory.
Ideally, this would eliminate [...] the “learning the test” issues.
How would it do that? If they learned the test in advance, it would be in their long-term memory, and they'd still remember it when tested on the drug.
They didn't change their charter.
https://forum.effectivealtruism.org/posts/2Dg9t5HTqHXpZPBXP/ea-community-needs-mechanisms-to-avoid-deceptive-messaging
Hmm, interesting. The exact choice of decimal place at which to cut off the comparison is certainly arbitrary, and that doesn't feel very elegant. My thinking is that within the constraint of using floating point numbers, there fundamentally isn't a perfect solution. Floating point notation changes some numbers into other numbers, so there are always going to be some cases where number comparisons are wrong. What we want to do is define a problem domain and check if floating point will cause problems within that domain; if it doesn't, go for it, if it does...
In the general case I agree it's not necessarily trivial; e.g. if your program uses the whole range of decimal places to a meaningful degree, or performs calculations that can compound floating point errors up to higher decimal places. (Though I'd argue that in both of those cases pure floating point is probably not the best system to use.) In my case I knew that the intended precision of the input would never be precise enough to overlap with floating point errors, so I could just round anything past the 15th decimal place down to 0.
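As a minimal JavaScript sketch of that approach (the 1e-15 threshold reflects my domain above; it's an absolute tolerance and should be chosen to match your own input precision):

```javascript
// Compare floats with an absolute tolerance, treating any difference
// below the known precision of the inputs as zero. 1e-15 is specific
// to my use case; choose it to match your own domain.
const EPSILON = 1e-15;

function nearlyEqual(a, b) {
  return Math.abs(a - b) < EPSILON;
}

console.log(0.1 + 0.2 === 0.3);           // false: classic floating point error
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```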
If we figure out how to build GAI, we could build several with different priors, release them into the universe, and see which ones do better. If we give them all the same metric to optimize, they will all agree on which of them did better, thus determining one prior that is the best one to have for this universe.
I don't understand what "at the start" is supposed to mean for an event that lasts zero time.
I don't think you understand how probability works.
https://outsidetheasylum.blog/understanding-subjective-probabilities/
Ok now I'm confused about something. How can it be the case that an instantaneous perpendicular burn adds to the craft's speed, but a constant burn just makes it go in a circle with no change in speed?
...Are you just trying to point out that thrusting in opposite directions will cancel out? That seems obvious, and irrelevant. My post and all the subsequent discussion are assuming burns of epsilon duration.
I don't understand how that can be true? Vector addition is associative; it can't be the case that adding many small vectors behaves differently from adding a single large vector equal to the small vectors' sum. Throwing one rock off the side of the ship followed by another rock has to do the same thing to the ship's trajectory as throwing both rocks at the same time.
How is that relevant? In the limit where the retrograde thrust is infinitesimally small, it also does not increase the length of the main vector it is added to. Negligibly small thrust results in negligibly small change in velocity, regardless of its direction.
Unfortunately I already came across that paradox a day or two ago on Stack Exchange. It's a good one though!
Yeah, my numerical skill is poor, so I try to understand things via visualization and analogies. It's more reliable in some cases, less in others.
when the thrust is at 90 degrees to the trajectory, the rocket's speed is unaffected by the thrusting, and it comes out of the gravity well at the same speed as it came in.
That's not accurate; when you add two vectors at 90 degrees, the resulting vector has a higher magnitude than either. The rocket will be accelerated to a faster speed.
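If I expand the algebra, with current speed $v$ and a perpendicular burn $\Delta v$:

$$|\vec{v} + \Delta\vec{v}| = \sqrt{v^2 + \Delta v^2} \approx v + \frac{\Delta v^2}{2v},$$

so a finite perpendicular burn does raise the speed, though only at second order in $\Delta v$. (Which would also explain why continuous perpendicular thrust, where each instant contributes an infinitesimal $\Delta v$, leaves the speed unchanged.)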
I don't think so. The difference in the gravitational field between the bottom point of the swing arc and the top is negligible. The swing isn't an isolated system, so you're able to transmit force to the bar as you move around.
There's a common explanation you'll find online of how swings work by you changing the height of your center of mass, which is wrong, since it would imply that a swing with rigid bars wouldn't work. But they do.
The actual explanation seems to have something to do with changing your angular momentum at specific points by rotating your body.
I'm still confused about some things, but the primary framing of "less time spent subject to high gravitational deceleration" seems like the important insight that all other explanations I found were missing.
Probability is a geometric scale, not an additive one. An order of magnitude centered on 10% covers ~1% - 50%.
https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
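Spelling out the arithmetic (reading "an order of magnitude" as a factor of ten in each direction in odds space): $10\%$ is odds of $1{:}9$; dividing the odds by ten gives $1{:}90 \approx 1.1\%$, and multiplying by ten gives $10{:}9 \approx 52.6\%$.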
Feel free to elaborate on the mistakes and I'll fix them.
That article isn't about e/acc people and doesn't mention them anywhere, so I'm not sure why you think it's intended to be. The probability theory denial I'm referencing is mostly on Twitter.
Great point! I focused on AI risk since that's what most people I'm familiar with are talking about right now, but there are indeed other risks, and that's yet another potential source of miscommunication. One person could report a high p(doom) due to their concerns about bioterrorism, and another could interpret that as concern about AI.
Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that's easily mapped to concepts like "unearned confidence", the onlooker is more likely to dismiss whatever you're saying.
It's literally an invitation to irrelevant philosophical debates about how all technologies are risky and we are still alive and I don't know how to get out of here without reference to probabilities and expected values.
If that comes up, yes. But then it's them who have brought up the fact that probability is relevant, s...
I don't see how they would be. If you do see a way, please share!
I don't understand how either of those are supposed to be a counterexample. If I don't know what seat is going to be chosen randomly each time, then I don't have enough information to distinguish between the outcomes. All other information about the problem (like the fact that this is happening on a plane rather than a bus) is irrelevant to the outcome I care about.
This does strike me as somewhat tautological, since I'm effectively defining "irrelevant information" as "information that doesn't change the probability of the outcome I care about". I'm not sure how to resolve this; it certainly seems like I should be able to identify that the type of vehicle is irrelevant to the question posed and discard that information.
No, I think what I said was correct? What's an example that you think conflicts with that interpretation?
I think that's accurate, yeah. What's your objection to it?
Yeah that was a mistake, I mixed frequentism and propensity together.
I don't have an answer for you, as this is also something I'm confused about. I felt bad seeing 0 answers here, so I just wanted to mention that I asked about this on Manifold and got some interesting discussion, see here:
No, I'm using the WYSIWYG editor. It was for a post, not a comment, and definitely the right link.
Edit: Huh, I tried it again and it worked this time. My bad for not reloading to test on a fresh page before posting here, sorry.
This doesn't seem to work anymore? I'm posting the link in the editor and nothing happens, there's just a text link.
It's conceptually pretty simple; 240 characters isn't room for a lot. Here's how the writer explained it:
...Here's the annotated version of my bot: https://pastebin.com/1a9UPKQk
The basic strategy is:
Simulate what the opponent will do on the current turn, and what they would do on the next two turns if I defect twice in a row.
If the results of the simulations are [cooperate, defect, defect], play tit-for-tat. Otherwise, defect.
This will defect against DefectBots, CooperateBots, and most of the silly bots that don't pay attention to the opponent's moves.
The winner was the following program:
try{eval(`{let f=(d,m,c,s,f,h,i)=>{let r=9;${c};return +!!r};r=f}`);let θ='r=h.at(-1);r=!r||r.o',λ=Ω=>r(m,d,θ,c,f,Ω,Ω.map(χ=>({m:χ.o,o:χ.m}))),Σ=(μ,π)=>[...μ,{m:π,o:+!1}],α=λ([...i]),β=λ(Σ(i,α));r=f(θ)&α&!β&!λ(Σ(Σ(i,α),β))|d==m}catch{r = 1}
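For readability, here's a rough de-minified sketch of the strategy described above. The history format ({ me, them } move records) and the simulateOpponent helper are hypothetical stand-ins for illustration, not the actual tournament API:

```javascript
// A readable sketch of the [cooperate, defect, defect] strategy.
const COOPERATE = 1, DEFECT = 0;

function chooseMove(history, simulateOpponent) {
  // What does the opponent do this turn?
  const now = simulateOpponent(history);
  // What would they do over the next two turns if I defect twice in a row?
  const afterOne = simulateOpponent([...history, { me: DEFECT, them: now }]);
  const afterTwo = simulateOpponent([
    ...history,
    { me: DEFECT, them: now },
    { me: DEFECT, them: afterOne },
  ]);

  // [cooperate, defect, defect] means the opponent rewards cooperation
  // and punishes defection: play tit-for-tat. Otherwise, defect.
  if (now === COOPERATE && afterOne === DEFECT && afterTwo === DEFECT) {
    const last = history.at(-1);
    return last ? last.them : COOPERATE; // tit-for-tat
  }
  return DEFECT;
}
```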
We're running a sequel, see here to participate.
Well that explains why you got the wrong answer! Springs, as you now point out, work opposite to the way gravity does, in that the longer a spring is, the more energy it takes to continue to deform it. (Assuming we mean an ideal spring, not one that's going to switch to plastic deformation at some point.) So if we were talking about springs, you would be correct that the most efficient time to teleport the spring longer would be when it's already as long as possible.
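Concretely, for an ideal spring with $U = \frac{1}{2}kx^2$, stretching a further $\delta$ from extension $x$ costs

$$\Delta U = \tfrac{1}{2}k(x+\delta)^2 - \tfrac{1}{2}kx^2 = kx\delta + \tfrac{1}{2}k\delta^2,$$

which grows with the current extension $x$.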
But we are not talking about springs, we are talking about gravity, which works differently. (N...