All of Max Kaye's Comments + Replies

Yeah, almost everyone who we ban who has any real content on the site is warned. It didn't feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

curi evidently wanted to change some things about his behaviour, otherwise he wouldn't have updated... (read more)

Vaniver130

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that.  But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?". 

As stewards of the community, we need to make decisions taking into acco... (read more)

I do not think the core disagreement between you and me comes from a failure of me to explain my thoughts clearly enough.

I don't either.

The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern.

Sure, we can stop.

Curi is being banned for wasting time with long, unproductive conversations.

I don't know anywhere I could go to find out that this is a bannable offense. If it is not in a body of rules somewhere, then it should be added. If the mods are unwilli... (read more)

3habryka
I am not currently aware of any factual inaccuracies, but would be happy to correct any you point out.  The only thing you pointed out was something about the word "threat" being wrong, but that only appears to be true under some very narrow definition of threat. This might be weird rationalist jargon, but I've reliably used the word "threat" to simply mean signaling some kind of intention of inflicting some kind of punishment in response to some condition by the other person. Curi and other people from FI have done this repeatedly, and the "list of people who have evaded/lied/etc." is exactly one such threat, whether explicitly labeled as such or not.  The average LessWrong user would pretty substantially regret having engaged with curi if they later end up on that list, so I do think it's a pretty concrete punishment, and while there might be some chance you are unaware of the negative consequences, this doesn't really change the reality very much that due to the way I've seen curi active on the site, engaging with him is a trap that people are likely to regret.

This is the definition that I had in mind when I wrote the notice above, sorry for any confusion it might have caused.

This definition doesn't describe anything curi has done (see my sibling reply linked below), at least that I've seen. I'd appreciate any quotes you can provide.

https://www.lesswrong.com/posts/PkpuvsFYr6yuYnppy/open-and-welcome-thread-september-2020?commentId=H2tyDgoRFov8Xs8HS

define:threat

I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace".

This definition seems okay to me.

undue justice

I don't know how justice can be undue, do you mean like undue or excessive prosecution? or persecution perhaps? though I don't think either prosecution or persecution describes anything curi's done on LW. If you have counterexamples I would appreciate it if you could quote them.

We have substantial disa

... (read more)
7lsusr
I do not think the core disagreement between you and me comes from a failure of me to explain my thoughts clearly enough. I do not believe that elaborating upon my reasoning would get you to change your mind about the core disagreement. Elaborating upon my position would therefore waste both of our time. The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern. Curi is being banned for wasting time with long, unproductive conversations. It would be ironic for me to embroil myself in such a conversation as a consequence.

lsusr said:

(1) Curi was warned at least once.

I'm reasonably sure the Slack comments refer to events from 3 years ago, not anything in the last few months. I'll check, though.

There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY

gjm said:

I had not looked, at that point; I took "mirrored" to mean taking copies of whole discussions, which would imply copying other people's writing en masse. I have loo

... (read more)
lsusr100

I googled the definition, and these are the two (for define:threat)

  • a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
  • a person or thing likely to cause damage or danger.

Neither of these apply.

I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace". I think the word "retribution" implies undue justice. A "threat" need only imply reta... (read more)

The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong and that's where LW standards matter.

Isn't it even worse then b/c no action was necessary?

But more to the point, isn't the determination that person X is not good to have around a personal judgement? It doesn't apply to everyone else.

I think what habryka meant was that he wasn't making a personal judgement.

I'm not sure about other cases, but in this case curi wasn't warned. If you're interested, he and I discuss the ban in the first 30 mins of this stream

Arguably, if there is something truly wrong with the list, I should have an issue with it.

This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.

I think this is fair, and additionally I maybe shouldn't have used the word "truly"; it's a very laden word. I do think that, on the balance of probabilities, my case do... (read more)

Max Kaye*130

Today we have banned two users, curi and Periergo from LessWrong for two years each.

I wanted to reply to this because I don't think it's right to judge curi the way you have. Periergo I don't have an issue w/. (it's a sockpuppet acct anyway)

I think your decision should not go unquestioned/uncriticized, which is why I'm posting. I also think you should reconsider curi's ban under a sort of appeals process.

Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.

O

... (read more)
2habryka
I don't understand this sentence at all. How has he already been punished for his past behavior? Indeed, he has never been banned before, so there was never any previous punishment. 
This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.

The traditional guidance for up/downvotes has been "upvote what you would like to see more of, downvote what you would like to see less of". If this is how votes are interpreted, then heavy downvotes imply "the forum's users would on average prefer to see less content of this kind". Someone posting the kind of content that's unwanted on a forum seems like a reasonable reason to bar that person from the forum in question.

I agree with... (read more)

9lsusr
If I understand you correctly then your primary argument appears to be that a ban is (1) too harsh a judgment where a warning would have sufficed, (2) that curi ought to have some sort of appeals process and (3) that habryka's top-level comment does not provide detailed citations for all the accusations against curi.
(1) Curi was warned at least once.
(2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation.
(3) Specific quotes are unnecessary. It is blindingly obvious from a glance through curi's profile and even curi's response you linked to that curi is damaging to productive dialogue on Less Wrong. The strongest claim against curi is "a history of threats against people who engage with him [curi]". I was able to confirm this via a quick glance through curi's past behavior on this site. In this comment curi threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him. Edit: grammar.

You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.

The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong and that's where LW standards matter.

Unpopularity is no reason for a ban

That seems like a sentiment indicative of ignoring the reason for which he was banned. It was a utilitarian argument. The fact that someone gets downvoted is Bayesian evidence that it's not valuabl... (read more)

6Rafael Harth
This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified. Knowing the existence of the list (again, even if it were justified) would also make me uneasy to talk to curi.

FYI I am on that list and fine with it - curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto

I think you're wrong on multiple counts. Will reply more in a few hours.

\usepackage{cleveref}

Cool, thanks. I think I was missing \usepackage{cleveref}. I actually wrote the post in latex (the post for which I asked this question), but the lesswrong docs on using latex are lacking. for example they don't tell you they support importing stuff and don't list what is supported.

\Cref{eq:1} is an amazing new discovery; before Max Kaye, no one grasped the perfect and utter truth of \cref{eq:1}.

I use crefs in the .tex file linked above. I suppose I should have been more specific and asked "does anyone know how to label equation... (read more)

I think that the argument about emulating one Turing machine with another is the best you're going to get in full generality.

In that case I especially don't think that argument answers the question in OP.

I've left some details in another reply about why I think the constant overhead argument is flawed.

So while SI and humans might have very different notions of simplicity at first, they will eventually come to have the same notion, after they see enough data from the world.

I don't think this is true. I do agree some conclusions would be converged on ... (read more)

3interstice
Intuitive explanation: Say it takes X bits to specify a human, and that the human knows how to correctly predict whatever sequence we're applying SI to. SI has to find the human among the other 2^X programs of length X. Say SI is trying to predict the next bit. There will be some fraction of those 2^X programs that predict it will be 0, and some fraction predicting 1. These fractions define SI's probabilities for what the next bit will be. Imagine the next bit will be 0. Then SI is predicting badly if greater than half of those programs predict a 1. But then, all those programs will be eliminated in the update phase. Clearly, this can happen at most X times before most of the weight of SI is on the human hypothesis (or a hypothesis that's just as good at predicting the sequence in question). The above is a sketch, not quite how SI really works. Rigorous bounds can be found here, in particular the bottom of page 979 ("we observe that Theorem 2 implies the number of errors of the universal predictor is finite if the number of errors of the informed prior is finite..."). In the case where the number of errors is not finite, the universal and informed prior still have the same asymptotic rate of growth of error (error of universal prior is in big-O class of error of informed prior).

When I say the 'sense of simplicity of SI', I use 'simple program' to mean the programs that SI gives the highest weight to in its predictions (these will by definition be the shortest programs that haven't been ruled out by data). The above results imply that, if humans use their own sense of simplicity to predict things, and their predictions do well at a given task, SI will be able to learn their sense of simplicity after a bounded number of errors.

I think you can input multiple questions by just feeding a sequence of question/answer pairs. Actually getting SI to act like a question-answering oracle is going to involve various implementation details. The above arguments are just meant to e
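A toy sketch of the elimination argument above (illustrative only; like the sketch it illustrates, this is not how SI really works, and the names and numbers are made up):

import random

# Toy model of the argument: 2^X candidate "programs", each just a fixed
# string of bit predictions. One of them (the "human") always predicts
# correctly. "SI" predicts the majority vote of the surviving candidates
# and discards any candidate that gets a bit wrong.
X = 10            # bits needed to "specify the human"
N_BITS = 200      # length of the sequence being predicted
random.seed(0)

truth = [random.randint(0, 1) for _ in range(N_BITS)]
candidates = [truth[:]] + [
    [random.randint(0, 1) for _ in range(N_BITS)] for _ in range(2 ** X - 1)
]

alive = candidates[:]   # candidates not yet contradicted by the data
errors = 0
for t, actual in enumerate(truth):
    majority = 1 if 2 * sum(p[t] for p in alive) > len(alive) else 0
    if majority != actual:
        errors += 1     # SI errs only when most surviving candidates are wrong,
                        # and all of those are eliminated in the next line.
    alive = [p for p in alive if p[t] == actual]

# Each error at least halves the surviving pool, and the "human" copy of
# `truth` is never eliminated, so errors <= X.
print(f"errors = {errors}, bound = {X}")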

The solution to the "large overhead" problem is to amortize the cost of the human simulation over a large number of English sentences and predictions.

That seems a fair approach in general, like how can we use the program efficiently/profitably, but I don't think it answers the question in OP. I think it actually implies the opposite effect: as you go through more layers of abstraction you get more and more complex (i.e. simplicity doesn't hold across layers of abstraction). That's why the strategy you mention needs to be over ever larger and la... (read more)

for any simple English hypothesis, we can convert it to code by running a simulation of a human and giving them the hypothesis as input, then asking them to predict what will happen next. Therefore the English and code-complexity can differ by at most a constant.

Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.

Solomonoff induction is fine wit

... (read more)
2interstice
I think that the argument about emulating one Turing machine with another is the best you're going to get in full generality. You're right that we have no guarantee that the explanation that looks simplest to a human will also look the simplest to a newly-initialized SI, because the 'constant factor' needed to specify that human could be very large. I do think it's meaningful that there is at most a constant difference between different versions of Solomonoff induction(including "human-SI"). This is because of what happens as the two versions update on incoming data: they will necessarily converge in their predictions, differing at most on a constant number of predictions. So while SI and humans might have very different notions of simplicity at first, they will eventually come to have the same notion, after they see enough data from the world. If an emulation of a human takes X bits to specify, it means a human can beat SI at binary predictions at most X times(roughly) on a given task before SI wises up. For domains with lots of data, such as sensory prediction, this means you should expect SI to converge to giving answers as good as humans relatively quickly, even if the overhead is quite large*. The quantity that matters is how many bits it takes to specify the mind, not store it(storage is free for SI just like computation time). For the human brain this shouldn't be too much more than the length of the human genome, about 3.3 GB. Of course, getting your human brain to understand English and have common sense could take a lot more than that. *Although, those relatively few times when the predictions differ could cause problems. This is an ongoing area of research.
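For reference, the bound being appealed to here is usually stated via the dominance property of the universal prior M over any computable prior ρ (a sketch, up to O(1) factors; K(ρ) plays the role of the X bits above):

\[
  M(x) \;\ge\; 2^{-K(\rho)}\,\rho(x) \qquad \text{for every finite string } x,
\]
so, taking $-\log_2$ of both sides over a whole sequence $x_{1:n}$,
\[
  -\log_2 M(x_{1:n}) \;\le\; -\log_2 \rho(x_{1:n}) + K(\rho),
\]
i.e. the universal predictor's cumulative log-loss exceeds the informed predictor's by at most $K(\rho)$ bits, uniformly in $n$.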

the M1-simulator may be long, but its length is completely independent of what we're predicting - thus, the M2-Kolmogorov-complexity of a string is at most the M1-Kolmogorov-complexity plus a constant (where the constant is the length of the M1-simulator program).

I agree with this, but I don't think it answers the question. (i.e. it's not a relevant argument^([1]))
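(For reference, the invariance theorem paraphrased in the quote above, in standard notation:)

\[
  K_{M_2}(x) \;\le\; K_{M_1}(x) + c_{M_1,M_2} \qquad \text{for all strings } x,
\]
where $c_{M_1,M_2}$ is the length of a fixed $M_1$-simulator for $M_2$ and does not depend on $x$.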

Given the English sentence, the simulated human should then be able to predict anything a physical human could predict given the same English sentence.

There's a large edge case where the over... (read more)

3johnswentworth
The solution to the "large overhead" problem is to amortize the cost of the human simulation over a large number of English sentences and predictions. We only need to specify the simulation once, and then we can use it for any number of prediction problems in conjunction with any number of sentences. A short English sentence then adds only a small amount of marginal complexity to the program - i.e. adding one more sentence (and corresponding predictions) only adds a short string to the program.
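A rough sketch of the arithmetic behind that amortization claim (my notation, not johnswentworth's):

\[
  \ell(\text{program}) \;\approx\; \underbrace{\ell(\text{human simulator})}_{\text{paid once}} \;+\; \sum_{i=1}^{n} \ell(\text{sentence}_i),
\]
so however large the simulator is, the marginal complexity of one more sentence is only about $\ell(\text{sentence}_i)$.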

On the literature that addresses your question: here is a classic LW post on this sort of question.

The linked post doesn't seem to answer it, e.g. in the 4th paragraph EY says:

Why, exactly, is the length of an English sentence a poor measure of complexity? Because when you speak a sentence aloud, you are using labels for concepts that the listener shares—the receiver has already stored the complexity in them.

I also don't think it fully addresses the question - or even partially in a useful way, e.g. EY says:

It’s enormously easier (as it turns out)

... (read more)

One somewhat silly reason: for any simple English hypothesis, we can convert it to code by running a simulation of a human and giving them the hypothesis as input, then asking them to predict what will happen next.

Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.

because coding languages were designed to be understandable by humans and have syntax sim

... (read more)
1interstice
Solomonoff induction is fine with inputs taking unboundedly long to run. There might be cases where the human doesn't converge to a stable answer even after an indefinite amount of time. But if a "simple" hypothesis can have people debating indefinitely about what it actually predicts, I'm okay with saying that it's not actually simple (or that it's too vague to count as a hypothesis), so it's okay if SI doesn't return an answer in those cases.

Why do you need to include those things? Solomonoff induction can use any Turing-complete programming language for its definition of simplicity, there's nothing special about low-level languages.

I mean you can pass functions as arguments to other functions and perform operations on them.

Regarding dictionary/list-of-tuples, the point is that you only have to write the abstraction layer *once*. So if you had one programming language with dictionaries built-in and another without, the one with dictionaries gets at most a constant advantage in code-length. In general two different universal programming languages will have at most a constant difference, as johnswentworth mentioned. This means that SI is relatively insensitive to the choice of programming language: as you see more data, the predictions of 2 versions of Solomonoff induction with different programming languages will converge.
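A small illustration of the "write the abstraction layer once" point (the Python and the function names are mine, purely illustrative): a language without built-in dictionaries pays for these few lines exactly once, after which each use costs about as much as in a language that has dictionaries natively.

# A minimal dictionary built from a list of (key, value) tuples.
# Defining this layer is a one-off, constant cost in program length;
# every later use of dict_get/dict_set is about as short as using a
# built-in dict would be.
def dict_set(pairs, key, value):
    """Return a new list of (key, value) tuples with `key` bound to `value`."""
    return [(k, v) for (k, v) in pairs if k != key] + [(key, value)]

def dict_get(pairs, key, default=None):
    """Look up `key` in the list of tuples, like built-in dict.get."""
    for k, v in pairs:
        if k == key:
            return v
    return default

d = []                            # empty "dictionary"
d = dict_set(d, "apples", 3)
d = dict_set(d, "pears", 5)
print(dict_get(d, "apples"))      # 3
print(dict_get(d, "bananas", 0))  # 0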
Answer by Max Kaye10

Brevity of code and english can correspond via abstraction.

I don't know why brevity in low and high abstraction programs/explanations/ideas would correspond (I suspect they wouldn't). If brevity in low/high abstraction stuff corresponded, isn't that like contradictory? If a simple explanation in high abstraction is also simple in low abstraction then abstraction feels broken; typically ideas only become simple after abstraction. Put another way: the reason to use abstraction is to make ideas/things that are highly complex into things that are less complex.

I... (read more)

I went through the maths in OP and it seems to check out. I think the core inconsistency is that Solomonoff Induction implies which is obviously wrong. I'm going to redo the maths below (breaking it down step-by-step more). curi has which is the same inconsistency given his substitution. I'm not sure we can make that substitution but I also don't think we need to.

Let and be independent hypotheses for Solomonoff induction.

According to the prior, the non-normalized probability of (and similarly for ) is: (1)

what is th... (read more)
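(For context, the standard non-normalized Solomonoff prior used in a calculation like this is, in the usual notation:)

\[
  P(h) \;=\; 2^{-\ell(h)},
\]
where $\ell(h)$ is the length of the shortest program that outputs hypothesis $h$; for two hypotheses $h_1, h_2$ specified by independent programs, concatenation gives roughly $P(h_1 \wedge h_2) \approx 2^{-(\ell(h_1) + \ell(h_2))} = P(h_1)\,P(h_2)$.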

testing \latex \LaTeX

does anyone know how to label equations and reference them?

@max-kaye u/max-kaye https://www.lesswrong.com/users/max-kaye

5TurnTrout
\usepackage{cleveref}
....
\begin{equation}
1+1=2.\label{eq:1}
\end{equation}
\Cref{eq:1} is an amazing new discovery; before Max Kaye, no one grasped the perfect and utter truth of \cref{eq:1}.

If it's possible to use decision theory in a deterministic universe, then MWI doesn't make things worse except by removing refraining. However, the role of decision theory in a deterministic universe is pretty unclear, since you can't freely decide to use it to make a better decision than the one you would have made anyway.

[...]

Deterministic physics excludes free choice. Physics doesn't.

MWI is deterministic over the multiverse, not per-universe.

2TAG
Yes, and it still precludes free choice, like single universe determinism, as well as precluding refraining
A combination where both are fine or equally predicted fails to be a hypothesis.

Why? If I have two independent actions - flipping a coin and rolling a 6-sided die (d6) - am I not able to combine "the coin lands heads 50% of the time" and "the die lands even (i.e. 2, 4, or 6) 50% of the time"?

If you have partial predictions of X1XX0X and XX11XX you can "or" them into X1110X.

This is (very close to) a binary "or", I roughly agree with you.

But if you try to combine 01000 and 00010 the result will not be 01010 but som
... (read more)
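A minimal sketch of the "or" combination under discussion, treating X as "no claim" (the encoding and function name are mine): it reproduces the X1110X example above, and shows that two fully specified, disagreeing predictions can't be merged this way (which is not the same thing as their bitwise OR).

def or_combine(a, b):
    """Merge two partial predictions position-wise.
    'X' means "no claim"; a digit is a definite claim.
    Returns None where the inputs make conflicting definite claims."""
    out = []
    for x, y in zip(a, b):
        if x == 'X':
            out.append(y)
        elif y == 'X' or x == y:
            out.append(x)
        else:
            return None   # both make definite, contradictory claims
    return ''.join(out)

print(or_combine('X1XX0X', 'XX11XX'))  # X1110X  (the example above)
print(or_combine('01000', '00010'))    # None, not the bitwise OR 01010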
1Slider
In order for the predictions to be compatible they must be silent about each other.

If the base cases are H1,H2,H3,H4,H5,H6,T1,T2,T3,T4,T5,T6 then it makes sense to say that HX (H1,H2,H3,H4,H5,H6) is 50% of XX and that {X2,X4,X6} is 50% of XX. However if the base cases are H,T,1,2,3,4,5,6 then H would be 50% of {H,T} but not of X, and {2,4,6} would be 50% of {1,2,3,4,5,6} but not X. The case where "everything works out" would like the OR to output a prediction scope of {H1,H2,H3,H4,H5,H6,T2,T4,T6}. But someone mean could argue that the OR outputs a prediction scope of {H,2,4,6}.

If the claims are about separate universes then they are not predictions about the same thing. A heads claim doesn't concern the dice world so it is not a prediction on it. Predictions should be alternative descriptions of what happens in the world of concern. So the predictions should have the same amount of holes in them and at the root level all details should be filled out, i.e. 0 holes. An OR operation would need to produce an object that has more holes than the inputs if the inputs speak about the same universe.

That is, H1,H2,H3,H4,H5,H6,T1,T2,T3,T4,T5,T6 are the base level hypotheses and {H1,H2,H3,H4,H5,H6,T1,T2,T3,T4,T5,T6} and {} are prediction scopes, neither of which is found among the hypotheses. Applying a prediction OR would push towards the former object. But hypotheses are required to be firm in the details while scopes can be slippery. That is, prediction scopes {H1} and {H2} can be ored to {H1,H2} but the hypotheses H1 and H2 can't be ored to produce a hypothesis.
So there is a relationship between the Miller and Popper paper's conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn't depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument ... the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.

I didn't claim that paper made no assumptions. I claimed th... (read more)

1TAG
Theoretically, if CR consists of a set of claims, then refuting one claim wouldn't refute the rest. In practice, critrats are dogmatically wedded to the non-existence of any form of induction.

I don't particularly identify as an inductivist, and I don't think that the critrat version of inductivism is what self-identified inductivists believe in.

Conclusion from what? The conclusion will be based on some deeper assumption.

What anyone else thinks? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I've also read some of the great man's works. Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn't suitable for science. But of course the true believing critrats weren't convinced by Word of God.

The point is that every claim in general depends on assumptions. So, in particular, the critrats don't have a disproof of induction that floats free of assumptions.
> Bertrand Russell had arguments against that kind of induction...
Looks like the simple organisms and algorithms didn't listen to him!

I don't think you're taking this seriously.

CR doesn't have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do.

This is the old kind of induction; Bertrand Russell had arguments against that kind of induction...

The refutations of that kind of induction are way beyond the bounds of CR.

1TAG
Looks like the simple organisms and algorithms didn't listen to him!
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.

Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it's an explanatory conclusion.

I'm not convinced we can get anywhere productive continuing this discussion. If you don't think contradictions are bad, it feels like there's going to be a lot of work finding... (read more)

2TAG
Neither. You don't have to treat epistemology as a religion.
1TAG
Firstly, epistemology goes first. You don't know anything about reality without having the means to acquire knowledge. Secondly, I didn't say that the PNC was actually false. So there is a relationship between the Miller and Popper paper's conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn't depend on assumptions. No, it proposes a criticism of your argument ... the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
The concept of indexical uncertainty we're interested in is... I think... uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one.

I'm not sure I understand yet, but does the following line up with how you're using the word?

Indexical uncertainty is uncertainty around the exact matter (or temporal location of such matter) that is directly facilitating, and required by, a mind. (this could be your mind or another person's mind)

Notes:

  • "exact" might
... (read more)
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!

Sure, or the parties need a rational method of resolving a disagreement on acceptability. I'm not sure why that's particularly relevant, though.

> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don't see how that is an example, principally because it seems wrong to me.

You didn... (read more)

1TAG
The relevance is that CR can't guarantee that any given dispute is resolvable. But I don't count it as an example, since I don't regard it as correct, let alone as being a valid argument with the further property of floating free of questionable background assumptions. In particular, it is based on bivalent logic where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So "induction must be based on bivalent logic" is an assumption. The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.

Here are some (mostly critical) notes I made reading the post. Hope it helps you figure things out.

> * If you can’t say something nice, don’t say anything at all.

This is such bad advice (I realise eapache is giving it as an example of common moral sayings; though I note it's neither an aphorism nor a phrase). Like maybe it applies when talking to a widow at her husband's funeral?

"You're going to fast", "you're hurting me", "your habit of overreaching hurts your ability to learn", etc. These are g... (read more)

Hmm. It appears to me that Qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a qualia

I don't think so, here is a counter-example:

Alice and Bob start talking in a room. Alice has an identical twin, Alex. Bob doesn't know about the twin and thinks he's talking to Alex. Bob asks: "How are you today?". Before Alice responds, Alex walks in.

Bob's observation of Alex will surprise him, and he'll quickly figure out that something's going on. But more importantly: Bob&apo... (read more)

1mako yass
That's a well constructed example I think, but no that seems to be a completely different sense of "indexical". The concept of indexical uncertainty we're interested in is... I think... uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one. The Sleeping Beauty problem is the most widely known example. The mirror chamber was another example.
It's a bad thing if ideas can't be criticised at all, but it's also a bad thing if the relationship of mutual criticism is cyclic, if it doesn't have an obvious foundation or crux.

Do you have an example? I can't think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they're contrived)

> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A c
... (read more)
1TAG
The other options need to be acceptable to both parties! I don't see how that is an example, principally because it seems wrong to me.

FYI this usage of the term *universality* overloads a sorta-similar concept David Deutsch (DD) uses in *The Fabric of Reality* (1997) and in *The Beginning of Infinity* (2011). In BoI it's the subject of Chapter 6 (titled "The Jump to Universality"). I don't know what history the idea has prior to that. Some extracts are below to give you a bit of an idea of how DD uses the word 'universality'.

Part of the reason I mention this is the reference to Popper; DD is one of the greatest living Popperians and has made significant cont... (read more)

This is commentary I started making as I was reading the first quote. I think some bits of the post are a bit vague or confusing but I think I get what you mean by anthropic measure, so it's okay in service to that. I don't think equating anthropic measure to mass makes sense, though; counter examples seem trivial.

> The two instances can make the decision together on equal footing, taking on exactly the same amount of risk, each- having memories of being on the right side of the mirror many times before, and no memories of being on the wrong- ... (read more)

3mako yass
It provides no real reassurance, but it would make it a more pleasant experience to go through. The effect of having observed the coin landing heads many times and never tails is going to make it instinctively easy to let go of your fear of tails.

Possibly! I'm not sure how realistic that part is, to come to that realization while the thing is happening instead of long before, but it was kind of needed for the story. It's at least conceivable that the academic culture of the consenters was always a bit inadequate, maybe Nai had heard murmurings about this before, then the murmurer quietly left the industry and the Nais didn't take it seriously until they were living it. The odds don't have to be as high as 3:1 for the decision to come out the same way.

The horrors of collectively acknowledging that the accessible universe's resources are finite and that growth must be governed in order to prevent Malthus from coming back? Or is it more about the bureaucracy of it, yeah, you'd really hope they'd be able to make exceptions for situations like this, hahaha (but, lacking an AGI singleton, my odds for society still having legal systems with this degree of rigidity are honestly genuinely pretty high, like yeah, uh, human societies are bad, it's always been bad, it is bad right now, it feels normal if you live under it, you Respect The History and imagine it couldn't be any other way).

An abdication of the expectations of an employer, much less than an abdication of the law. Transfers to other substrates with the destruction of the original copy are legal, probably even commonplace in many communities.

I think this section is really confusing. They're talking about killing the original, which the chamber is not set up to do, but they have an idea as to how to do it. The replica is just a brain, they will only experience impaling their brain with a rod and it wouldn't actually happen. They would be sitting there with their brain leaking out while somehow still conscious
I would like to ask him if he maintains a distinction between values and preferences, morality and (well formed) desire.

I think he'd say 'yes' to a distinction between morality and desire, at least in the way I'm reading this sentence. My comment: Moral statements are part of epistemology and not dependent on humans or local stuff. However, as one learns more about morality and considers their own actions, their preferences progressively change to be increasingly compatible with their morality.

Being a fallibilist I think he'd add ... (read more)

In what way are the epistemologies actually in conflict?

Well, they disagree on how to judge ideas, and why ideas are okay to treat as 'true' or not.

There are practical consequences to this disagreement; some of the best CR thinkers claim MIRI are making mistakes that are detrimental to the future of humanity+AGI, for **epistemic** reasons no less.

My impression is that it is more just a case of two groups of people who maybe don't understand each other well enough, rather than a case of substantive disagreement between the useful theorie
... (read more)

It's not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.

If a theory doesn't offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.

We can also come up with criticisms that are sort of independent of where they came from, like a new... (read more)

2TAG
It's a bad thing if ideas can't be criticised at all, but it's also a bad thing if the relationship of mutual criticism is cyclic, if it doesn't have an obvious foundation or crux. Do you have a concrete example? Kind of, but "everything is wrong" is vulgar scepticism.

I'm happy to do this. On the one hand I don't like that lots of replies creates more pressure to reply to everything, but I think we'll probably be fine focusing on the stuff we find more important if we don't mind dropping some loose ends. If they become relevant we can come back to them.

> CR says that truth is objective
I'd say bayesian epistemology's stance is that there is one completely perfect way of understanding reality, but that it's perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
[...]
It's good to believe that there's an objective truth and try to move towards it, but
... (read more)
2mako yass
I think you'd probably consider the way we approach problems of this scale to be popperian in character. We open ourselves to lots of claims and talk about them and criticize them. We try to work them into a complete picture of all of the options and then pick the best one, according to a metric that is not quite utility, because the desire for a pet is unlikely to be an intrinsic one. The gun to our head in this situation is our mortality or aging that will eventually close the window of opportunity to enjoying having a pet. I'm not sure how to relate to this as an analytical epistemological method, though. Most of the work, for me, would involve sitting with my desire for a pet and interrogating it, knowing that it isn't at all immediately clear why I want one. I would try to see if it was a malformed expression of hunting or farming instincts. If I found that it was, the desire would dissipate, the impulse would recognize that it wouldn't get me closer to the thing that it wants. Barring that, I would be inclined to focus on dogs, because I know that no other companion animal has evolved to live alongside humans and enjoy it in the same way that dogs have. I'm not sure where that resolution came from. What I'm getting at is that most of the interesting parts of this problem are inarticulable. Looking for opportunities to apply analytic methods or articulable epistemic processes doesn't seem interesting to me at all. A reasonable person's approach to solving most problems right now is to ask a huge machine learning system that nobody understands to recommend some articles from an incomprehensibly huge set of people. Real world examples of decisionmaking generally aren't solvable, or reducible to optimal methods.

Do I believe in mathematics... I can question the applicability of a mathematical model to a situation. It's probably worth mentioning that even mathematical claims aren't beyond doubt, as mathematical claims can be arrived at in error (cosmic
1TAG
Most fraught ideas are mutually refuted...A can be refuted assuming B, B can be refuted using A.

On the note of *qualia* (providing in case it helps)

DD says this in BoI when he first uses the word:

Intelligence in the general-purpose sense that Turing meant is one of a constellation of attributes of the human mind that have been puzzling philosophers for millennia; others include consciousness, free will, and meaning. A typical such puzzle is that of qualia (singular quale, which rhymes with ‘baalay’) – meaning the subjective aspect of sensations. So for instance the sensation of seeing the colour blue is a quale. Consider the foll
... (read more)
But maybe there could be something reasonably describable as a bayesian method. But I don't work enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.

I don't know how you'd describe Bayesianism atm but I'll list some things I think are important context or major differences. I might put some things in quotes as a way to be casual but LMK if any part is not specific enough or ambiguous or whatever.

  • both CR and Bayesianism answer Qs about knowledge and judging kn
... (read more)
1mako yass
Understanding what you mean by "right", I think I might agree; it's not complete, it's not especially close to certainty. It's difficult to apply the mirror chamber's reduction of anthropic measure across different species (it was only necessitated for comparing over a pair of very similar experiences), and I'm not sure the biomass difference between fishbrain and humanbrain is such that anthropics can be used either, meaning... well, we can conclude, from the amount of rock in the universe, and the tiny amount of humans in the universe, and our being humans instead of rock, that it is astronomically unlikely that anthropic measure binds in significant quantities to rock. If it did, we would almost certainly have woken up in a different sort of place. But for fish, perhaps the numbers are not large enough for us to draw a similar conclusion. (Again, I'm realizing the validity of that sort of argument doesn't clearly entail from the mirror chamber, though I think it is suggested by it) I think my real reasons for going with pescatarianism are being fed into from other sources, here. It's not just the anthropic measure thing. Also receiving a strong push from my friends in neuroscience who claim that the neurology of fish is just way too simple to be given a lot of experiential weight, in the same way that a thermostat is too simple for us to think anything is suffering when ... [reexamines the assumptions]... Hmm. I no longer believe their reasoning there (I should talk to them again I guess). I have seen too many bastards say "but that's merely a machine so it couldn't have conscious experience" of systems that probably would have conscious experience, and here they are saying that a biological reinforcement learning system that observably learns from painful experience could not truly suffer. It's not clear that there's a difference between that and suffering. I think fish suffer. The quantity must be small, but this is not enough to conclude that it's negligibl
1mako yass
I'd say bayesian epistemology's stance is that there is one completely perfect way of understanding reality, but that it's perhaps provably unattainable to finite things. You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries. (For some mysteries, like anthropic measure binding [the hard problem of consciousness] it seems provably impossible to ever test our models. We each have exactly one observation about what kinds of matter attract subjectivity, we can't share data (solipsistic doubt), we can never get more data (to "enter" a new locus of consciousness we would have to leave our old one, losing access to the observed experience that we had), but the question still matters a lot for measuring the quantity of experience over different brain architectures, so we need to have theories, even though objective truth can't be attained.) It's good to believe that there's an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it, and dialethic epistemologies like popper's can't give you the framework you need to operate gracefully without ever getting objective truth.

We sometimes talk about aumann's agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything. This might serve the same social purpose as an advocation of objectivity. Though I do not know whether it's really true of humans that they will converge if they talk long enough, I still hold out faith that it will be one day, if we keep trying, if we try to get better at it.

Which means "wrong" is no longer a meaningful word. Do you think you can operate without having a word like "wrong"? Do you think you can operate without that concept? I think DD some
1mako yass
It might be a good idea to divide comments up so that they can be voted on separately and so that the replies can branch off under them too, but it's not important! I'll reply to the rest tomorrow I think
I would be interested to know how Mirror Chamber strikes you though, I haven't tried to get non-bayesians to read it.

Will the Mirror Chamber explain what "anthropic measure" (or the anthropic measure function) is?

I ended up clicking through to this and I guess that the mirror chamber post is important but not sure if I should read something else first.

I started reading, and it's curious enough (and short enough) I'm willing to read the rest, but wanted to ask the above first.

1mako yass
Aye, it's kind of a definition of it, a way of seeing what it would have to mean. I don't know if I could advocate any other definitions than the one outlined here.
[...] critrats [...] let themselves wholly believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem

As someone who thinks you'd think they're a 'critrat', this feels wrong to me. I can't speak for other CR ppl, ofc, and some CR ppl aren't good at it (like any epistemology), but for me I don't think what you describe would add up to "a productive intellectual ecosystem".

In this case, I don't think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all them. That probably would sound like a good thing to do to most popperians, but they often seem to have the wrong attitudes about how (collective) induction happens and might not be prepared to do it;

I'm not sure what this would look like in practice. If you have two competing theories and don't need to act on them - there's no issue. If they're not mutually exclusive there's no issue. I... (read more)

1mako yass
Maximizing expected utility does these things in a very simple way to the exact extent that it should. Hmm... My first impulse was to say "bayes is not a method. It is a low-level language for epistemology. Methods emerge higher in the abstraction stack. Its fandom uses just whatever methods work." But maybe there could be something reasonably describable as a bayesian method. But I don't work enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.

Is the bayesian method... trying always to understand things on the math/decision theory level? Confidently; deutsch is not doing that. His understanding of AGI is utterly anthropomorphic, and is not informed by decision theory; it is not informed by the study of reliable, deeply comprehensible essences of things, and so it will not come into play very much in the adjacent discipline of engineering.

I guess... to get closer to understanding what the bayesian methodology might be uniquely good at... I'll have to reexamine some original reasoning that I have done with it... so... understanding things in terms of decision lets me identify concepts that are basic and necessary for consistent decisionmaking (paraphrasings of consistent decisionmaking: for being free and agentic and not self-defeating or easily tricked). Which let me narrow in on just the aspects of the hard problem of consciousness that must be, in some sense, real. Which led me to conclusions like "fish aren't important moral subjects, because even though they're clearly capable of suffering, experiences have magnitude, and theirs must be negligible, for it to be other than negligible, something astronomically unlikely would have needed to have happened, so it basically must be." Which means I get to be more of a pescatarian than a vegan, which is a very immediately useful realization to have arrived at. If that argument doesn't make sense to you, well that might mean that we've ju
I think it's fairly clear from this that he doesn't have solomonoff induction internalized, he doesn't know how many of his objections to bayesian metaphysics it answers.

I suspect, for DD, it's not about *how many* but *all*. If I come up with 10 reasons Bayesianism is wrong (so 10 criticisms), and 9 of those get answered adequately, the 1 that's still left is as bad as the 10; *any* unanswered criticism is a reason not to believe an idea. So to convince DD (or any decent Popperian) that an idea is wrong can't rely on incompl... (read more)

2mako yass
In what way are the epistemologies actually in conflict? My impression is that it is more just a case of two groups of people who maybe don't understand each other well enough, rather than a case of substantive disagreement between the useful theories that they have, regardless of what DD thinks it is. Bayes does not disagree with true things, nor does it disagree with useful rules of thumb. Whatever it is you have, I think it will be conceivable from bayesian epistemological primitives, and conceiving it in those primitives will give you a clearer idea of what it really is.
Evidence to the contrary, please?

here

Before October 2014, copyright law permitted use of a work for the purpose of criticism and review, but it did not allow quotation for other more general purposes. Now, however, the law allows the use of quotation more broadly. So, there are two exceptions to be aware of, one specifically for criticism and review and a more general exception for quotation. Both exceptions apply to all types of copyright material, such as books, music, films, etc.

https://www.copyrightuser.org/understand/exceptions/quotation/ - firs... (read more)

2gjm
Yep, "en masse" is vague, and what it turns out curi actually did -- which is less drastic than what his use of the word "mirrored" and his past history with LW led me to assume -- was not so very en masse as I feared. My apologies, again, for not checking. I didn't, of course, claim to know what happens in every jurisdiction; the point of my "in every jurisdiction I know of" was the reverse of what you're taking it to be. I don't know anything much about the law in Tuvalu and Mauritius, but I believe they are both signatories to the Berne Convention, which means that their laws on copyright are probably similar to everyone else's. The Berne Convention requires signatories to permit some quotation, and its test for what exceptions are OK doesn't give a great deal of leeway to allow more (see e.g. https://www.keionline.org/copyright/berne-convention-exceptions-revisions), so the situation there is probably similar to that in the UK (which is where I happen to be and where the site you linked to is talking about). The general rule about quoting in the UK is that you're allowed to quote the minimum necessary (which is vague, but that's not my fault, because the law is also vague). What I (wrongly) thought curi had done would not, I think, be regarded as the minimum necessary to achieve a reasonable goal. But, again, what he actually did is not what I guessed, and what he did is OK. If someone sees something I wrote on Google and takes an interest in it, the most likely result is that they follow Google's link and end up in the place where I originally wrote it, where they will see it in its original context. If someone sees something I wrote that curi has "mirrored" on his own site, the most likely result is that they see whatever curi has chosen to quote, along with his (frequently hostile) comments of which I may not even be aware since I am not a regular there, and comments from others there (again, likely hostile; again, of which I am not aware). None of that