Eliezer's argument is the primary one I'm thinking of as an obvious rationalization.
https://benthams.substack.com/p/against-yudkowskys-implausible-position
I'm not confident about fetuses either, which is why I generally oppose abortion after the fetus has started developing a brain.
Different meanings of "bad". The former is a moral claim, the latter presumably a practical one about the person's health goals. "Bad as in evil" vs. "bad as in ineffective".
Hitler was an evil leader, but not an ineffective one. He was a bad person, but he was not bad at gaining political power.
It seems unlikely to me that the amount of animal-suffering-per-area goes down when a factory farm replaces a natural habitat; natural selection is a much worse optimizer than human intelligence.
And that's a false dichotomy anyway; even if factory farms did reduce suffering per area, you could instead pay for something else to be there that has even less suffering.
I agree with the first bullet point in theory, but see the Corrupted Hardware sequence of posts. It's hard to know the true impact of most interventions, and easy for people to come up with reasons why whatever they want to do happens to have large positive externalities. "Don't directly inflict pain" is something we can be very confident is actually a good thing, without worrying about second-order effects.
Additionally, there's no reason why doing bad things should be acceptable just due to also doing unrelated good things. Sure it's net positive from a c...
I agree that this is technically a sound philosophy; the is-ought problem makes it impossible to say as a factual matter that any set of values is wrong. That said, I think you should ask yourself why you oppose the mistreatment of pets and not other animals. If you truly do not care about animal suffering, shouldn't the mistreatment of a pet be morally equivalent to someone damaging their own furniture? It may not have been a conscious decision on your part, but I expect that your oddly specific value system is at least partially downstream of the fact that you grew up eating meat and enjoy it.
Meat-eating (without offsetting) seems to me like an obvious rationality failure. Extremely few people actually take the position that torturing animals is fine; that it would be acceptable to do to a pet or even a stray. Yet people are happy to pay others to do it for them, as long as it occurs where they can't see it happening.
Attempts to point this out to them are usually met with deflection or anger, or, among more level-headed people, with elaborate rationalizations that collapse under minimal scrutiny. ("Farming creates more animals, so as long as the...
I don't much care about animal suffering.
Really. I am not pretending not to care for self-serving reasons. I. Actually. Do. Not. Care.
Life has not brought me any occasion to slaughter and butcher a carcase myself, but if it did, I'd be willing to do it. I am not drawn to fishing as a recreation, but I have no moral objection to it, if the catch is to be eaten. On the other hand, I would be disinclined to the sort of sport fishing where the catch is released back into the water.
I wouldn't eat primates, and certainly not humans. I wouldn't go game hunting — ...
I eat most meats (all except octopus and chicken) and have done this my entire life, except once when I went vegan for Lent. This state seems basically fine because it is acceptable from scope-sensitive consequentialist, deontic, and common-sense points of view, and it improves my diet enough that it's not worth giving up meat "just because".
This is not an idiosyncrasy of Gerard and people like him, it is core to Wikipedia's model. Wikipedia is not an arbiter of fact, it does not perform experiments or investigations to determine the truth. It simply reflects the sources.
This means it parrots the majority consensus in academia and journalism. When that consensus is right, as it usually is, Wikipedia is right. When that consensus is wrong, as happens more frequently than its proponents would like to admit but still pretty rarely overall, Wikipedia is wrong. This is by design.
Wikipedia is not objective, it is neutral. It is an average of everyone's views, skewed towards the views of the WEIRD people who edit Wikipedia and the people respected by those people.
In the linked Wikipedia discussion, someone asked David to provide sources for his claim and he refused, so I would not consider his claims to be relevant evidence.
As for the factual question, I've come across one article from Quillette that seemed significantly biased and misleading, and I wouldn't be surprised if there were more. There was one hoax that they briefly fell for and then corrected within hours, which was the main reason that Wikipedia considers them unreliable, but this says more about Wikipedia than Quillette. (I'm sure many of Wik...
I think Michael's response to that is that he doesn't oppose that. He only opposes a lawyer who tries to prevent their client from getting a punishment that the lawyer believes would be justified. From his article:
It is not wrong per se to represent guilty clients. A lawyer may represent a factually guilty client for the purpose of preventing unjust punishments or rights-violations. What is unethical is to represent a person who you know committed a crime that was really wrong and really deserves to be punished, and to attempt to stop that person from getting the punishment he deserves.
Hmm, interesting. The exact choice of decimal place at which to cut off the comparison is certainly arbitrary, and that doesn't feel very elegant. My thinking is that within the constraint of using floating point numbers, there fundamentally isn't a perfect solution. Floating point notation changes some numbers into other numbers, so there are always going to be some cases where number comparisons are wrong. What we want to do is define a problem domain and check whether floating point will cause problems within that domain: if it doesn't, go for it; if it does...
In the general case I agree it's not necessarily trivial; e.g. if your program uses the whole range of decimal places to a meaningful degree, or performs calculations that can compound floating point errors up to higher decimal places. (Though I'd argue that in both of those cases pure floating point is probably not the best system to use.) In my case I knew that the intended precision of the input would never be precise enough to overlap with floating point errors, so I could just round anything past the 15th decimal place down to 0.
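A minimal sketch of that approach in JavaScript (my own illustration; the original code may well have been in another language, and roundTo15 / nearlyEqual are names I made up):

```javascript
// Round away everything past the 15th decimal place, on the assumption that
// genuine input precision never reaches that far.
function roundTo15(x) {
  return Number(x.toFixed(15));
}

function nearlyEqual(a, b) {
  return roundTo15(a) === roundTo15(b);
}

console.log(0.1 + 0.2 === 0.3);           // false: raw floating-point error
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true: the error lives past the 15th decimal place
```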
If we figure out how to build GAI, we could build several with different priors, release them into the universe, and see which ones do better. If we give them all the same metric to optimize, they will all agree on which of them did better, thus determining one prior that is the best one to have for this universe.
I don't think you understand how probability works.
https://outsidetheasylum.blog/understanding-subjective-probabilities/
I don't understand how that can be true? Vector addition is associative; it can't be the case that adding many small vectors behaves differently from adding a single large vector equal to the small vectors' sum. Throwing one rock off the side of the ship followed by another rock has to do the same thing to the ship's trajectory as throwing both rocks at the same time.
when the thrust is at 90 degrees to the trajectory, the rocket's speed is unaffected by the thrusting, and it comes out of the gravity well at the same speed as it came in.
That's not accurate; when you add two vectors at 90 degrees, the resulting vector has a higher magnitude than either. The rocket will be accelerated to a faster speed.
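A toy numeric check of both points, my own illustration rather than anything from the original thread:

```javascript
// 2D vectors as [x, y]
const add = (a, b) => [a[0] + b[0], a[1] + b[1]];
const speed = (v) => Math.hypot(v[0], v[1]);

// Associativity: two rocks thrown one after the other vs. a single combined impulse.
const ship = [100, 0];
const rock1 = [3, 1], rock2 = [2, -1];
console.log(add(add(ship, rock1), rock2)); // [105, 0]
console.log(add(ship, add(rock1, rock2))); // [105, 0]

// A delta-v at 90 degrees to the velocity raises the speed from 4 to sqrt(4^2 + 3^2) = 5.
const v = [4, 0], dv = [0, 3];
console.log(speed(add(v, dv))); // 5
```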
I don't think so. The difference in the gravitational field between the bottom point of the swing arc and the top is negligible. The swing isn't an isolated system, so you're able to transmit force to the bar as you move around.
There's a common explanation you'll find online that swings work by you changing the height of your center of mass. This is wrong, since it would imply that a swing with rigid bars wouldn't work. But they do.
The actual explanation seems to involve changing your angular momentum at specific points in the arc by rotating your body.
Probability is a geometric scale (in odds), not an additive one. Going an order of magnitude up or down in odds from 10% (1:9 odds) covers roughly 1% to 50%.
https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
Great point! I focused on AI risk since that's what most people I'm familiar with are talking about right now, but there are indeed other risks, and that's yet another potential source of miscommunication. One person could report a high p(doom) due to their concerns about bioterrorism, and another might interpret that as concern about AI.
Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that's easily mapped to concepts like "unearned confidence", the onlooker is more likely to dismiss whatever you're saying.
It's literally an invitation to irrelevant philosophical debates about how all technologies are risky and we are still alive and I don't know how to get out of here without reference to probabilities and expected values.
If that comes up, yes. But then it's them who have brought up the fact that probability is relevant, s...
I don't understand how either of those is supposed to be a counterexample. If I don't know what seat is going to be chosen randomly each time, then I don't have enough information to distinguish between the outcomes. All other information about the problem (like the fact that this is happening on a plane rather than a bus) is irrelevant to the outcome I care about.
This does strike me as somewhat tautological, since I'm effectively defining "irrelevant information" as "information that doesn't change the probability of the outcome I care about". I'm not sure how to resolve this; it certainly seems like I should be able to identify that the type of vehicle is irrelevant to the question posed and discard that information.
It's conceptually pretty simple; 240 characters isn't room for a lot. Here's how the writer explained it:
...Here's the annotated version of my bot: https://pastebin.com/1a9UPKQk
The basic strategy is:
Simulate what the opponent will do on the current turn, and what they would do on the next two turns if I defect twice in a row.
If the results of the simulations are [cooperate, defect, defect], play tit-for-tat. Otherwise, defect.
This will defect against DefectBots, CooperateBots, and most of the silly bots that don't pay attention to the opponent's moves.
The winner was the following program:
try{eval(`{let f=(d,m,c,s,f,h,i)=>{let r=9;${c};return +!!r};r=f}`);let θ='r=h.at(-1);r=!r||r.o',λ=Ω=>r(m,d,θ,c,f,Ω,Ω.map(χ=>({m:χ.o,o:χ.m}))),Σ=(μ,π)=>[...μ,{m:π,o:+!1}],α=λ([...i]),β=λ(Σ(i,α));r=f(θ)&α&!β&!λ(Σ(Σ(i,α),β))|d==m}catch{r = 1}
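For readability, here's a rough, unofficial sketch of the strategy described above in plain JavaScript. The names and the simulateOpponent helper are stand-ins I invented, not the tournament's actual API, and this isn't a faithful de-obfuscation of the winning program:

```javascript
// simulateOpponent(myMoves, theirMoves) is a hypothetical helper standing in for
// "run the opponent's code against these histories and see what it plays".
// Moves are 'C' (cooperate) or 'D' (defect).
function chooseMove(myMoves, theirMoves, simulateOpponent) {
  // What does the opponent play this turn?
  const now = simulateOpponent(myMoves, theirMoves);
  // What would they play on the next two turns if I defected twice in a row?
  const next = simulateOpponent([...myMoves, 'D'], [...theirMoves, now]);
  const afterThat = simulateOpponent([...myMoves, 'D', 'D'], [...theirMoves, now, next]);

  // Only an opponent that cooperates now but punishes defection earns tit-for-tat.
  if (now === 'C' && next === 'D' && afterThat === 'D') {
    return theirMoves.length ? theirMoves[theirMoves.length - 1] : 'C'; // tit-for-tat
  }
  return 'D'; // exploit DefectBots, CooperateBots, and bots that ignore my moves
}
```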
We're running a sequel; see here to participate.
Teleporting an object 1 meter up gives it more energy the closer it is to the planet, because gravity gets weaker the further away it is. If you're at infinity, it adds 0 energy to move further away.
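A quick numeric illustration using the standard Newtonian potential, deltaU = G*M*m*(1/r - 1/(r + h)); the code and the specific numbers are my own demonstration, not from the original comment:

```javascript
// Energy gained by teleporting a 1 kg mass up by h = 1 m, starting at radius r
// from the center of an Earth-mass planet.
const G = 6.674e-11;  // gravitational constant, m^3 kg^-1 s^-2
const M = 5.972e24;   // mass of Earth, kg
const m = 1;          // kg
const h = 1;          // m
const deltaU = (r) => G * M * m * (1 / r - 1 / (r + h));

console.log(deltaU(6.371e6)); // at the surface: ~9.8 J
console.log(deltaU(6.371e7)); // 10x further out: ~0.1 J
console.log(deltaU(6.371e9)); // 1000x further out: ~1e-5 J, heading to 0 at infinity
```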
I think your error is in not putting real axes on your phase space diagram. If going to the right increases your potential energy, and the center has 0 potential energy, then being to the left of the origin means you have negative potential energy? This is not how orbits work; a real orbit would never leave the top right quadrant of the phase space since neithe...