All of omnizoid's Comments + Replies

But not all possible people are continuous wavefunctions! 

Bentham's Bulldog here! It seems like you're calculating the number of distinct people--in the sense of the number of people who differ in some mental or physical property. But that's not what's relevant. SIA favors theories with Beth 2 copies of the same person.

3interstice
He doesn't only talk about properties but also what people actually are according to our best physical theories, which is continuous wavefunctions -- of which there are only beth-1.
omnizoid498

First of all, the claim that wild animal suffering is serious doesn't depend on the claim that animals suffer more than they are happy.  I happen to think human suffering is very serious, even though I think humans live positive lives.  

Second, I don't think it's depressive bias infecting my judgments.  I am quite happy--actually to a rather unusual degree.  Instead, the reason to think that animals live mostly bad lives is that nearly every animal lives a very short life that culminates in a painful death on account of R-selection--if ... (read more)

Well, sometimes getting a lot of arguments for a view should convince you of the view.

I refer you to my response to Said Achmiz's comment.  Do you have a better way of estimating animal consciousness?  Sure, the report isn't perfect, but it's better than alternatives.  It's irrational to say "well, we don't know exactly how much they suffer, so let's ignore them entirely." https://www.goodthoughts.blog/p/refusing-to-quantify-is-refusing

3Said Achmiz
As you well know, I have already responded to this claim as well.

Fischer's not against using it for tradeoffs, he's against using it as a singular indicator of worth. 

But then you'd lose out on being the creatures.

The dark arts of expected value calculations relying on conservatively downgrading the most detailed report on the subject.  What a joke.

3Richard_Kennaway
I refer you to Said Achmiz's comment.
omnizoid-10

But I'm not trolleying them--I'm talking about how bad their suffering is.  

3Noah Birnbaum
Saying that we should donate there as opposed to AMF, for example, is, I would argue, trolleying. You're making tradeoffs and implicitly saying this is worth as much as that. Perhaps you're giving lower tradeoffs than the pain-pleasure stuff, but you didn't really mention these, and they seem important to the end claim "and for these reasons, you should donate to shrimp welfare."
omnizoid-1-2

As they describe in the report, the philosophical assumptions are mostly inconsequential and assumed for simplicity. The rest of your critique just describes what they did; it isn't an objection to it. It's not precise and they admit quite high uncertainty, but it's definitely better than the alternatives (e.g., neuron counts).

omnizoid10

It's not that piece. It's another one that got eaten by a Substack glitch, unfortunately--hopefully it will be back up soon!

1Bohaska
Do you happen to have a copy of it that you can share?
4ChristianKl
What makes you believe that Substack is to blame and not him unpublishing it?
omnizoid3-7

He thinks it's very near zero if there is a gap. 

6ChristianKl
He explicitly says that the people who argue that there's no gap are mistaken to argue that. He argues for the gap being small, not nonexistent. He does not use the term "near zero" himself. 
omnizoid30

If you're a halfer and don't think your credence in heads should be 2/3 after finding out it's Monday, you violate the conservation of expected evidence. If you're going to be told what day it is, your credence in tails can go up but has no chance of going down--if it's day 2 it will spike to 100%, and if it's day 1 it won't change.
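(A minimal check of the conservation claim above, assuming the standard Lewisian halfer credences on waking: P(Heads & Monday) = 1/2, P(Tails & Monday) = P(Tails & Tuesday) = 1/4, so P(Monday) = 3/4.)

\[
P(H) \;=\; P(\mathrm{Mon})\,P(H \mid \mathrm{Mon}) + P(\mathrm{Tue})\,P(H \mid \mathrm{Tue})
\;=\; \tfrac{3}{4}\,P(H \mid \mathrm{Mon}) + \tfrac{1}{4}\cdot 0
\quad\Longrightarrow\quad
P(H \mid \mathrm{Mon}) \;=\; \frac{1/2}{3/4} \;=\; \tfrac{2}{3}.
\]

Holding P(H | Mon) at 1/2 instead would make the expected posterior (3/4)(1/2) + (1/4)(0) = 3/8 < 1/2, which is the violation described above.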

1Ape in the coat
You would violate conservation of expected evidence if P(Monday) + P(Tuesday) = 1. However, this is not the case, because P(Monday) = 1 and P(Tuesday) = 1/2.
4JBlack
Is conservation of expected evidence a reasonably maintainable proposition across epistemically hazardous situations such as memory wipes (or false memories, self-duplicates, and so on)? Arguably, in such situations it is impossible to be perfectly rational, since the thing you do your reasoning with is being externally manipulated.
omnizoid32

Yes--Lewis held this, for instance, in the most famous paper on the topic. 

2JBlack
Good point! Lewis' notation P_+(HEADS) does indeed refer to the conditional credence upon learning that it's Monday, and he sets it to 2/3 by reasoning backward from P(HEADS) = 1/2 and using my (1). So yes, there are indeed people who believe that if Beauty is told that it's Monday, then she should update to believing that the coin was more likely heads than not. Which seems weird to me - I have a great deal more suspicion that (1) is unjustifiable than that (2) is.

Lots of people disagree with (2).

2JBlack
They ... what? I've never read anything suggesting that. Do you have any links or even a memory of an argument that you may have seen from such a person? Edit: Just to clarify, conditional credence P(X|Y) is of the form "if I knew Y held, then my credence for X would be ...". Are you saying that lots of people believe that if they knew it was Monday, then they would hold something other than equal credence for heads and tails?

I didn't make a betting argument. 

3Dagon
Not directly, but all probability is betting.  Or at least the modeling part is the same, where you define what the prediction is that your probability assessment applies to. Sleeping beauty problems are interesting because they mess with the number of agents making predictions, and this very much confuses our intuitions.  The confusion is in how to aggregate the two wakings (which are framed as independent, but I haven't seen anyone argue that they'll ever be different).  I think we all agree that post-amnesia, on Wednesday, you should predict 50% that the experimenter will reveal heads, and you were awoken once, and 50% tails, twice.  When woken and you don't know if it's Monday or Tuesday, you should acknowledge that on Wednesday you'll predict 50%.  If right now you bet 1/3, it's because you're predicting something different than you will on Wednesday.  

Impervious to reason? I sent you an 8,000-word essay giving reasons for it!

0Shankar Sivarajan
That's not what "impervious" means: your view does not open itself up to falsification by logical argument or by experiment. Any argument against it would only address its internal consistency, which I think it fundamentally has; I was being only slightly sardonic when I said that was as good as truth.
4Mitchell_Porter
I found an answer to the main question that bothered me, which is the relevance of a cognitive "flicker frequency" to suffering. The idea is that this determines the rate of subjective time relative to physical time (i.e. the number of potential experiences per second); and that is relevant to magnitude of suffering, because it can mean the difference between 10 moments of pain per second and 100 moments of pain per second.

As for the larger issues here: I agree that ideally one would not have farming or ecosystems in which large-scale suffering is a standard part of the process, and that a Jain-like attitude which extends this perspective, e.g. even to insects, makes sense.

Our understanding of pain and pleasure feels very poor to me. For example, can sensations be inherently painful, or does pain also require a capacity for wanting the sensation to stop? If the latter is the case, then avoidant behavior triggered by a damaging stimulus does not actually prove the existence of pain in an organism; it can just be a reflex installed by darwinism. Actual pain might only exist when the reflexive behavior has evolved to become consciously regulated.
omnizoid2-13

https://benthams.substack.com/p/moral-realism-is-true

3Shankar Sivarajan
Ah, honest-to-god supernaturalism! I didn't expect to find that here. It's a view impervious to reason or empiricism, but perfectly self-consistent, which makes it as good as true. 

It may be imaginable, but if it's false, who cares? Like, suppose I argue that fundamental reality has to meet constraint X and view Y is the only plausible view that does so. Listing off a bunch of random views that meet constraint X but are false doesn't help you.

2interstice
It's a counterexample to a single step of reasoning(large multiverse of people --> God), it doesn't have to be globally a valid theory of reality. And clearly the existence of an imaginable multiverse satisfying a certain property makes it more plausible that our actual multiverse might satisfy the same property. (As an analogy, consider math, where you might want an object satisfying properties A and B. Constructing an object with property A makes it more plausible that you might eventually construct one with both properties)

Well, UDASSA is false: https://joecarlsmith.com/2021/11/28/anthropics-and-the-universal-distribution. As I argue elsewhere, any view other than SIA implies the doomsday argument. The number of possible beings isn't equal to the number of "physically limited beings in our universe," and there are different arrangements for the continuum points.

2interstice
Did you notice that I linked the very same article that you replied with? :P I'm aware of the issues with UDASSA, I just think it provides a clear example of an imaginable atheistic multiverse containing a great many possible people.

The argument for Beth 2 possible people is that it's the cardinality of the powerset of the continuum. SIA gives reason to think you should assign a uniform prior across possible people. There could be a God-less universe with Beth 2 people, but I don't know how that would work, and even if there's some coherent model one can make work without sacrificing simplicity, P(Beth 2 people | theism) >> P(Beth 2 people | atheism). You need to fill in the details more beyond just saying "there are Beth 2 people," which will cost simplicity.

Remember, this is just part of a lengthy cumulative case.
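(A quick gloss on the cardinality notation used above, with the standard Beth-number definitions:)

\[
\beth_0 = \aleph_0, \qquad
\beth_1 = 2^{\beth_0} = |\mathbb{R}|, \qquad
\beth_2 = 2^{\beth_1} = |\mathcal{P}(\mathbb{R})|.
\]

So "Beth 2 possible people" amounts to the claim that there are at least as many possible people as there are subsets of the continuum.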

2interstice
I think the cardinality should be Beth(0) or Beth(1), since finite beings should have finite descriptions, and additionally finite beings can have at most Beth(1) (if we allow immortality) distinct sequences of thoughts, actions, and observations, given that they can only think, observe, and act in a finite number of ways in finite time. So if you quotient by identical experiences and behaviors you get Beth(0) or Beth(1) (you might think we can e.g. observe a continuum amount of stuff in our visual field, but this is an illusion; the resolution is bounded). The Bekenstein bound also implies physically limited beings in our universe have a finite description length.

I don't think it's hard to imagine such a universe, e.g. consider all possible physical theories in some formal language and all possible initial conditions of such theories. This might be less simple to state than "imagine an infinitely perfect being" but it's also much less ambiguous, so it's hard to judge which is actually less simple.

My perspective on these matters is influenced a lot by UDASSA, which recovers a lot of the nice behaviors of SIA at the cost of non-uniform priors. I don't actually think UDASSA is likely a correct description of reality, but it gives a coherent picture of what an atheistic multiverse containing a great many possible people could look like.
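(A sketch of the counting facts this comment appeals to--just the standard cardinalities of the relevant description spaces:)

\[
|\{\text{finite strings over a finite alphabet}\}| = \aleph_0 = \beth_0, \qquad
|\{\text{infinite sequences over a finite alphabet}\}| = 2^{\aleph_0} = \beth_1.
\]

Beings with finite descriptions therefore number at most Beth 0, and beings with infinite histories built from finitely many options per step number at most Beth 1.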

If theism is true then all possible people exist but they're not all here.  SIA gives you a reason to think many exist but says nothing about where they'd be.  Theism predicts a vast multiverse. 

The cases are non-symmetrical because a big universe makes my existence more likely, but it doesn't make me more likely to get HTTTTTTTHTTHHTTTHTTTHTHTHTTHHTTTTTTHHHTHTTHTTTHHTTTTHTHTHHHHHTTTTHTHHHHTHHHHHHHTTTTHHTHHHTHTTTTTHTTTHTTHHHTHHHTHHTHTHTHTHTHHTHTHTTHTHHTTHTHTTHHHHHTTTTTTHHTHTTTTTHHTHHTTHTTHHTTTHTTHTHTTHHHTTHHHTHTTHHTTHTTTHTHHHTHHTHHHHTHHTHHHTHHHHTTHTTHTHHTHTTHTHHTTHHTTHHTH. The most specific version of the evidence is that I get that sequence of coin flips--which is unaffected by the number of people--rather than that someone does. My view follows trivially from the widely adopted SIA, which I argued for in the piece--it doesn't rely on some basic math error.
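(A sketch of the asymmetry being claimed, assuming n fair flips per observer, N observers, and flips independent across observers:)

\[
P(\text{I see this exact sequence}) = 2^{-n} \ \ (\text{independent of } N),
\qquad
P(\text{someone sees it}) = 1 - \bigl(1 - 2^{-n}\bigr)^{N} \ \ (\text{increasing in } N).
\]

The dispute is over which of these is the right way to individuate the evidence.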

I didn't attack his character, I said he was wrong about lots of things. 

1M. Y. Zuo
Did you skim or skip over reading most of the comment?

//If you add to the physical laws code that says "behave like with Casper", you have re-implemented Casper with one additional layer of indirection. It is then not fair to say this other world does not contain Casper in an equivalent way.//

No, you haven't reimplemented Casper, you've just copied his physical effects.  There is no Casper, and Casper's consciousness doesn't exist.  

Your description of the FDT stuff isn't what I argued.  

//I've just skimmed this part, but it seems to me that you provide arguments and evidence about consciousnes... (read more)

I think this comment is entirely right until the very end.  I don't think I really attack him as a person--I don't say he's evil or malicious or anything in the vicinity, I just say he's often wrong.  Seems hard to argue that without arguing against his points.  

I never claimed Eliezer says consciousness is nonphysical--I said exactly the opposite.

If you look at philosophers with Ph.D.s who study decision theory for a living and have a huge incentive to produce original work, none of them endorse FDT.

I don't think the specific part of decision theory where people argue over Newcomb's problem is large enough as a field to be subject to the EMH. I don't think the incentives are awfully huge either. I'd compare it to ordinal analysis, a field which does have PhDs but very few experts in general and not many strong incentives. One significant recent result (if the proof works, the ordinal notation in question would be the most powerful one proven well-founded) was done entirely by an amateur building off of work by other amateurs (see the section on Bashicu Matrix System): https://cp4space.hatsya.com/2023/07/23/miscellaneous-discoveries/

About three quarters of academic decision theorists two-box on Newcomb's problem; only about 20% one-box. So this standard seems nuts. https://survey2020.philpeople.org/survey/results/4886?aos=1399

1green_leaf
That's irrelevant. To see why one-boxing is important, we need to realize the general principle - that we can only impose a boundary condition on all computations-which-are-us (i.e. we can choose how both us and all perfect predictions of us choose, and both us and all the predictions have to choose the same). We can't impose a boundary condition only on our brain (i.e. we can't only choose how our brain decides while keeping everything else the same). This is necessarily true. Without seeing this (and therefore knowing we should one-box), or even while being unaware of this principle altogether, there is no point in trying to have a "debate" about it.

My goal was to get people not to defer to Eliezer. I explicitly say he's an interesting thinker who is worth reading.

I think that “the author of the post does not think the post he wrote was bad” is quite sufficiently covered by “hardly any”.

I didn't say Eliezer was a liar and a fraud. I said he was often overconfident and egregiously wrong, and explicitly described him as an interesting thinker who was worth reading.

omnizoid-3-8

The examples just show that sometimes you lose by being rational.  

Unrelated, but I really liked your recent post on Eliezer's bizarre claim that "character attacks last" is an epistemic standard.

But part of the whole dispute is that people don't agree on what "rational" means, right? In these cases, it's useful to try to avoid the disputed term—on both sides—and describe what's going on at a lower level. Suppose I'm a foreigner from a far-off land. I'm not a native English speaker, and I don't respect your Society's academics any more than you respect my culture's magicians. I've never heard this word rational before. (How do you even pronounce that? Ra-tee-oh-nal?) How would you explain the debate to me?

It seems like both sides agree that FDT age... (read more)

What's your explanation of why virtually no published papers defend it and no published decision theorists defend it? You really think none of them have thought of it, or anything in the vicinity?

5metachirality
Yes. Well, almost. Schwarz brings up disposition-based decision theory, which appears similar to, though might not be identical to, FDT, and every paper I've seen on it appears to defend it as an alternative to CDT. There are some looser predecessors to FDT as well, such as Hofstadter's superrationality, but that's too different imo. Given Schwarz's lack of reference to any paper describing any decision theory even resembling FDT, I'd wager that FDT's obviousness is only in retrospect.

I mean, like, I can give you some names. My friend Ethan, who's getting a Ph.D., was one person. Schwarz knows a lot about decision theory and finds the view crazy--MacAskill doesn't like it either.

3metachirality
Is there anything about those cases that suggests it should generalize to every decision theorist, or that this is as good a proxy for whether FDT works as the beliefs of earth scientists are for whether the Earth is flat or not? For instance, your samples consist of a philosopher not specialized in decision theory, one unaccountable PhD, and one single person who is both accountable and specializes in decision theory. Somehow, I feel as if there is a difference between generalizing from that and generalizing from every credentialed expert that one could possibly contact. In any case, it's dubious to generalize from that to "every decision theorist would reject FDT in the same way every earth scientist would reject flat earth", even if we condition on you being totally honest here and having fairly represented FDT to your friend. I think everyone here would bet $1,000 that if every earth scientist knew about flat earth, they would nearly universally dismiss it (in contrast to debating over it or universally accepting it) without hesitation. However, I would be surprised if you would bet $1,000 that if every decision theorist knew about FDT, they would nearly universally dismiss it.
omnizoid-1-2

I wouldn't call a view crazy for just being disbelieved by many people.  But if a view is both rejected by all relevant experts and extremely implausible, then I think it's worth being called crazy!  

I didn't call people crazy, instead I called the view crazy.  I think it's crazy for the reasons I've explained, at length, both in my original article and over the course of the debate.  It's not about my particular decision theory friends--it's that the fact that virtually no relevant experts agree with an idea is relevant to an assessmen... (read more)

1metachirality
My claim is that there are not yet people who know what they are talking about, or, more precisely, everyone knows roughly as much about what they are talking about as everyone else. Again, I'd like to know who these decision theorists you talked to were, or at least what their arguments were. The most important thing here is how you are evaluating the field of decision theory as a whole, how you are evaluating who counts as an expert or not, and what arguments they make, in enough detail that one can conclude that FDT doesn't work without having to rely on your word.
4Richard_Kennaway
Let's say, fundamental differences in worldview. I judge wireheading to be a step short of suicide, simulations to be no more than places that may be worth visiting on occasion, and most talk of "happiness" to be a category error. And the more zeros in an argument, the less seriously I am inclined to take it.
4Richard_Kennaway
For some reason the words "flagrantly, confidently, and egregiously wrong" come to mind.

You can make it with Parfit's hitchhiker, but in that case there's an action beforehand, and so a time when you have the ability to try to be rational.

There is a path from the decision theory to the predictor, because the predictor looks at your brain--including the decision theory it implements--and bases its prediction on the outputs of that cognitive algorithm.

1Oskar Mathiasen
I don't think the quoted problem has that structure. S causes one-boxing tendencies, and the person putting money in the box looks only at S. So it seems to be changing the problem to say that the predictor observes your brain/your decision procedure, when all they observe is S, which, while causing "one-boxing tendencies", is not causally downstream of your decision theory. Further, if S were downstream of your decision procedure, then FDT one-boxes whether or not the path from the decision procedure to the contents of the boxes routes through an agent, undermining the criticism that FDT has implausible discontinuities.
omnizoid-2-5

The Demon is omniscient.  

FDTists can't self-modify to be CDTists, by stipulation.  This actually is, I think, pretty plausible--I couldn't choose to start believing FDT. 

4metachirality
So it's crazy to believe things that aren't supported by published academic papers? I think if your standard for "crazy" is believing something that a couple of people in a field too underdeveloped to be subject to the EMH disagree with, and that there are merely no papers defending it, not any actively rejecting it, then probably you and roughly every person on this website ever count as "crazy".

Actually, I think an important thing here is that decision theory is too underdeveloped and small to be subject to the EMH, so you can't just go "if this crazy hypothesis is correct then why hasn't the entire field accepted it, or at least started having a debate over it?" It is simply too small to have fringe, in contrast to non-fringe, positions.

Obviously, I don't think the above is necessarily true, but I still think you're making us rely too much on your word and personal judgement. On that note, I think it's pretty silly to call people crazy based on either evidence they have not seen and you have not shown them (for instance, whatever counterarguments the decision theorists you contacted had), or evidence as weak/debatable as the evidence you have put forth in this post, and which has come to their attention only now. Were we somehow supposed to know that your decision theorist acquaintances disagreed beforehand?

If you have any papers from academic decision theorists about FDT, I'd like to see them, whether favoring or disfavoring it. IIRC Soares has a Bachelor's in both computer science and economics and MacAskill has a Bachelor's in philosophy.

Yeah, I agree I have lots of views that LessWrongers find dumb.  My claim is just that it's bad when those views are hard to communicate on account of the way LW is set up.  

6ChristianKl
As shminux describes well, it's possible to write about controversial views in a way that doesn't get downvoted into nirvana. To do that, you actually have to think about how to write well. The rate limit limits the quantity, but that allows you to spend more time getting the quality right. The style you're currently writing in isn't efficient communication in the first place; that would require thinking a lot more about what the cruxes actually are.
Viliam3620

I think it's not just the views but also (mostly?) the way you write them.

This is hindsight, but next time instead of writing "I think Eliezer is often wrong about X, Y, Z" perhaps you should first write three independent articles "my opinion on X", "my opinion on Y", my opinion on Z", and then one of two things will happen -- if people agree with you on X, Y, Z, then it makes sense to write the article "I think Eliezer is often wrong" and use these three articles as evidence... or if people disagree with you on X, Y, Z, then it doesn't really make sense t... (read more)

The description is exactly as you describe in your article.  I think my original was clear enough, but you describe your interpretation, and your interpretation is right.  You proceed to bite the bullet.  

1Heighn
Your original description doesn't specify subjunctive dependence, which is a critical component of the problem.

How'd you feel about a verbal debate? 

1MiguelDev
I have written before about why FDT is relevant to solving the alignment problem. I'd be happy to discuss that with you.

Philosophy is pretty much the only subject that I'm very informed about. As a consequence, I can confidently say Eliezer is egregiously wrong about most of the controversial views I can fact-check him on. That's . . . worrying.

5Jackson Wagner
Some other potentially controversial views that a philosopher might be able to fact-check Eliezer on, based on skimming through an index of the sequences:
* Assorted confident statements about the obvious supremacy of Bayesian probability theory and how Frequentists are obviously wrong/crazy/confused/etc. (IMO he's right about this stuff. But idk if this counts as controversial enough within academia?)
* Probably a lot of assorted philosophy-of-science stuff about the nature of evidence, the idea that high-caliber rationality ought to operate "faster than science", etc. (IMO he's right about the big picture here, although this topic covers a lot of ground so if you looked closely you could probably find some quibbles.)
* The claim / implication that talk of "emergence" or the study of "complexity science" is basically bunk. (Not sure but seems like he's probably right? Good chance the ultimate resolution would probably be "emergence/complexity is a much less helpful concept than its fans think, but more helpful than zero".)
* A lot of assorted references to cognitive and evolutionary psychology, including probably a number of studies that haven't replicated -- I think Eliezer has expressed regret at some of this and said he would write the sequences differently today. But there are probably a bunch of somewhat-controversial psychology factoids that Eliezer would still confidently stand by. (IMO you could probably nail him on some stuff here.)
* Maybe some assorted claims about the nature of evolution? What it's optimizing for, what it produces ("adaptation-executors, not fitness-maximizers"), where the logic can & can't be extended (can corporations be said to evolve? EY says no), whether group selection happens in real life (EY says basically never). Not sure if any of these claims are controversial though.
* Lots of confident claims about the idea of "intelligence" -- that it is a coherent concept, an important trait, etc. (Vs some philosophers w

I felt like I was following the entire comment, until you asserted that it rules out zombies.
