I do not think the core disagreement between you and me comes from a failure on my part to explain my thoughts clearly enough.
I don't either.
The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern.
Sure, we can stop.
Curi is being banned for wasting time with long, unproductive conversations.
I don't know anywhere I could go to find out that this is a bannable offense. If it is not in a body of rules somewhere, then it should be added. If the mods are unwilli...
This is the definition that I had in mind when I wrote the notice above, sorry for any confusion it might have caused.
This definition doesn't describe anything curi has done (see my sibling reply linked below), at least that I've seen. I'd appreciate any quotes you can provide.
define:threat
I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace".
This definition seems okay to me.
undue justice
I don't know how justice can be undue, do you mean something like undue or excessive prosecution? Or persecution, perhaps? Though I don't think either prosecution or persecution describes anything curi's done on LW. If you have counterexamples I would appreciate it if you could quote them.
...We have substantial disa
lsusr said:
(1) Curi was warned at least once.
I'm reasonably sure the slack comments refer to events from 3 years ago, not anything in the last few months. I'll check, though.
There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY
gjm said:
...I had not looked, at that point; I took "mirrored" to mean taking copies of whole discussions, which would imply copying other people's writing en masse. I have loo
I googled the definition, and these are the two results (for define:threat):
- a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
- a person or thing likely to cause damage or danger.
Neither of these apply.
I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace". I think the word "retribution" implies undue justice. A "threat" need only imply reta...
The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong, and that's where LW standards matter.
Isn't it even worse then b/c no action was necessary?
But more to the point, isn't the determination X person is not good to have around a personal judgement? It doesn't apply to everyone else.
I think what habryka meant was that he wasn't making a personal judgement.
I'm not sure about other cases, but in this case curi wasn't warned. If you're interested, he and I discuss the ban in the first 30 mins of this stream
FYI and FWIW curi has updated the post to remove emails and reword the opening paragraph.
http://curi.us/2215-fallible-ideas-post-mortems and http://curi.us/2215-fallible-ideas-post-mortems#18059
Arguably, if there is something truly wrong with the list, I should have an issue with it.
This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.
I think this is fair, and additionally I maybe shouldn't have used the word "truly"; it's a very laden word. I do think that, on the balance of probabilities, my case do...
Today we have banned two users, curi and Periergo from LessWrong for two years each.
I wanted to reply to this because I don't think it's right to judge curi the way you have. Periergo I don't have an issue w/. (it's a sockpuppet acct anyway)
I think your decision should not go unquestioned/uncriticized, which is why I'm posting. I also think you should reconsider curi's ban under a sort of appeals process.
Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.
...O
This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.
The traditional guidance for up/downvotes has been "upvote what you would like to see more of, downvote what you would like to see less of". If this is how votes are interpreted, then heavy downvotes imply "the forum's users would on average prefer to see less content of this kind". Someone posting the kind of content that's unwanted on a forum seems like a reasonable ground for barring that person from the forum in question.
I agree with...
You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.
The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong, and that's where LW standards matter.
Unpopularity is no reason for a ban
That seems like a sentiment indicative of ignoring the reason for which he was banned. It was a utilitarian argument. The fact that someone gets downvoted is Bayesian evidence that it's not valuabl...
FYI I am on that list and fine with it - curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto
I think you're wrong on multiple counts. Will reply more in a few hours.
\usepackage{cleveref}
Cool, thanks. I think I was missing \usepackage{cleveref}. I actually wrote the post in LaTeX (the post for which I asked this question), but the LessWrong docs on using LaTeX are lacking; for example, they don't tell you that importing packages is supported, and they don't list which packages are available.
\Cref{eq:1} is an amazing new discovery; before Max Kaye, no one grasped the perfect and utter truth of \cref{eq:1}.
I use crefs in the .tex file linked above. I suppose I should have been more specific and asked "does anyone know how to label equation...
I think that the argument about emulating one Turing machine with another is the best you're going to get in full generality.
In that case I especially don't think that argument answers the question in the OP.
I've left some details in another reply about why I think the constant overhead argument is flawed.
So while SI and humans might have very different notions of simplicity at first, they will eventually come to have the same notion, after they see enough data from the world.
I don't think this is true. I do agree some conclusions would be converged on ...
The solution to the "large overhead" problem is to amortize the cost of the human simulation over a large number of English sentences and predictions.
That seems a fair approach in general, like how can we use the program efficiently/profitably, but I don't think it answers the question in the OP. I think it actually implies the opposite effect: as you go through more layers of abstraction you get more and more complex (i.e. simplicity doesn't hold across layers of abstraction). That's why the strategy you mention needs to be over ever larger and la...
for any simple English hypothesis, we can convert it to code by running a simulation of a human and giving them the hypothesis as input, then asking them to predict what will happen next. Therefore the English and code-complexity can differ by at most a constant.
Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.
...Solomonoff induction is fine wit
the M1-simulator may be long, but its length is completely independent of what we're predicting - thus, the M2-Kolmogorov-complexity of a string is at most the M1-Kolmogorov-complexity plus a constant (where the constant is the length of the M1-simulator program).
I agree with this, but I don't think it answers the question. (i.e. it's not a relevant argument^([1]))
Given the English sentence, the simulated human should then be able to predict anything a physical human could predict given the same English sentence.
There's a large edge case where the over...
On the literature that addresses your question: here is a classic LW post on this sort of question.
The linked post doesn't seem to answer it, e.g. in the 4th paragraph EY says:
Why, exactly, is the length of an English sentence a poor measure of complexity? Because when you speak a sentence aloud, you are using labels for concepts that the listener shares—the receiver has already stored the complexity in them.
I also don't think it fully addresses the question - or even partially in a useful way, e.g. EY says:
...It’s enormously easier (as it turns out)
One somewhat silly reason: for any simple English hypothesis, we can convert it to code by running a simulation of a human and giving them the hypothesis as input, then asking them to predict what will happen next.
Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.
...because coding languages were designed to be understandable by humans and have syntax sim
Brevity of code and english can correspond via abstraction.
I don't know why brevity in low- and high-abstraction programs/explanations/ideas would correspond (I suspect they wouldn't). If brevity in low- and high-abstraction stuff corresponded, wouldn't that be sort of contradictory? If a simple explanation at high abstraction were also simple at low abstraction, abstraction would feel broken; typically ideas only become simple after abstraction. Put another way: the reason to use abstraction is to turn ideas/things that are highly complex into things that are less complex.
I...
I went through the maths in the OP and it seems to check out. I think the core inconsistency is that Solomonoff Induction implies $\Pr(A \wedge B) > \Pr(A)$, which is obviously wrong. I'm going to redo the maths below (breaking it down step-by-step more). curi has an equivalent expression, which is the same inconsistency given his substitution. I'm not sure we can make that substitution but I also don't think we need to.

Let $A$ and $B$ be independent hypotheses for Solomonoff induction.

According to the prior, the non-normalized probability of $A$ (and similarly for $B$) is:

$$\Pr(A) = 2^{-L(A)} \tag{1}$$

where $L(A)$ is the length of the shortest program expressing $A$.
what is th...
testing \latex \LaTeX
does anyone know how to label equations and reference them?
If it's possible to use decision theory in a deterministic universe, then MWI doesn't make things worse except by removing refraining. However, the role of decision theory in a deterministic universe is pretty unclear, since you can't freely decide to use it to make a better decision than the one you would have made anyway.
[...]
Deterministic physics excludes free choice. Physics doesn't.
MWI is deterministic over the multiverse, not per-universe.
A combination where both are fine or equally predicted fails to be a hypothesis.
Why? If I have two independent actions - flipping a coin and rolling a 6-sided die (d6) - am I not able to combine "the coin lands heads 50% of the time" and "the die lands even (i.e. 2, 4, or 6) 50% of the time"?
If you have partial predictions of X1XX0X and XX11XX you can "or" them into X1110X.
This is (very close to) a binary "or"; I roughly agree with you.
But if you try to combine 01000 and 00010 the result will not be 01010 but som...
So there is a relationship between the Miller and Popper paper's conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn't depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument ... the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
I didn't claim that paper made no assumptions. I claimed th...
> Bertrand Russell had arguments against that kind of induction...
Looks like the simple organisms and algorithms didn't listen to him!
I don't think you're taking this seriously.
CR doesn't have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do.
This is the old kind of induction; Bertrand Russell had arguments against that kind of induction...
The refutations of that kind of induction are way beyond the bounds of CR.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it's an explanatory conclusion.
I'm not convinced we can get anywhere productive continuing this discussion. If you don't think contradictions are bad, it feels like there's going to be a lot of work finding...
The concept of indexical uncertainty we're interested in is... I think... uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one.
I'm not sure I understand yet, but does the following line up with how you're using the word?
Indexical uncertainty is uncertainty around the exact matter (or temporal location of such matter) that is directly facilitating, and required by, a mind. (this could be your mind or another person's mind)
Notes:
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I'm not sure why that's particularly relevant, though.
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don't see how that is an example, principally because it seems wrong to me.
You didn...
Here are some (mostly critical) notes I made reading the post. Hope it helps you figure things out.
> * If you can’t say something nice, don’t say anything at all.
This is such bad advice (I realise eapache is giving it as an example of common moral sayings; though I note it's neither an aphorism nor a phrase). Like maybe it applies when talking to a widow at her husband's funeral?
"You're going to fast", "you're hurting me", "your habit of overreaching hurts your ability to learn", etc. These are g...
Hmm. It appears to me that Qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a qualia
I don't think so, here is a counter-example:
Alice and Bob start talking in a room. Alice has an identical twin, Alex. Bob doesn't know about the twin and thinks he's talking to Alex. Bob asks: "How are you today?". Before Alice responds, Alex walks in.
Bob's observation of Alex will surprise him, and he'll quickly figure out that something's going on. But more importantly: Bob's...
It's a bad thing if ideas can't be criticised at all, but it's also a bad thing if the relationship of mutual criticism is cyclic, if it doesn't have an obvious foundation or crux.
Do you have an example? I can't think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they're contrived)
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A c...
FYI this usage of the term *universality* overloads a sorta-similar concept David Deutsch (DD) uses in *The Fabric of Reality* (1997) and in *The Beginning of Infinity* (2011). In BoI it's the subject of Chapter 6 (titled "The Jump to Universality"). I don't know what history the idea has prior to that. Some extracts are below to give you a bit of an idea of how DD uses the word 'universality'.
Part of the reason I mention this is the reference to Popper; DD is one of the greatest living Popperians and has made significant cont...
This is commentary I started making as I was reading the first quote. I think some bits of the post are a bit vague or confusing, but I think I get what you mean by anthropic measure, so it's okay in service of that. I don't think equating anthropic measure to mass makes sense, though; counterexamples seem trivial.
> The two instances can make the decision together on equal footing, taking on exactly the same amount of risk, each- having memories of being on the right side of the mirror many times before, and no memories of being on the wrong- ...
I would like to ask him if he maintains a distinction between values and preferences, morality and (well formed) desire.
I think he'd say 'yes' to a distinction between morality and desire, at least in the way I'm reading this sentence. My comment: Moral statements are part of epistemology and not dependent on humans or local stuff. However, as one learns more about morality and considers their own actions, their preferences progressively change to be increasingly compatible with their morality.
Being a fallibilist I think he'd add ...
In what way are the epistemologies actually in conflict?
Well, they disagree on how to judge ideas, and why ideas are okay to treat as 'true' or not.
There are practical consequences to this disagreement; some of the best CR thinkers claim MIRI are making mistakes that are detrimental to the future of humanity+AGI, for **epistemic** reasons no less.
My impression is that it is more just a case of two groups of people who maybe don't understand each other well enough, rather than a case of substantive disagreement between the useful theorie...
It's not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.
If a theory doesn't offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.
We can also come up with criticisms that are sort of independent of where they came from, like a new...
I'm happy to do this. On the one hand I don't like that lots of replies creates more pressure to reply to everything, but I think we'll probably be fine focusing on the stuff we find more important if we don't mind dropping some loose ends. If they become relevant we can come back to them.
> CR says that truth is objective
I'd say bayesian epistemology's stance is that there is one completely perfect way of understanding reality, but that it's perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
[...]
It's good to believe that there's an objective truth and try to move towards it, but...
On the note of *qualia* (providing in case it helps)
DD says this in BoI when he first uses the word:
Intelligence in the general-purpose sense that Turing meant is one of a constellation of attributes of the human mind that have been puzzling philosophers for millennia; others include consciousness, free will, and meaning. A typical such puzzle is that of qualia (singular quale, which rhymes with ‘baalay’) – meaning the subjective aspect of sensations. So for instance the sensation of seeing the colour blue is a quale. Consider the foll...
But maybe there could be something reasonably describable as a bayesian method. But I don't work enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.
I don't know how you'd describe Bayesianism atm but I'll list some things I think are important context or major differences. I might put some things in quotes as a way to be casual but LMK if any part is not specific enough or ambiguous or whatever.
I would be interested to know how Mirror Chamber strikes you though, I haven't tried to get non-bayesians to read it.
Will the Mirror Chamber explain what "anthropic measure" (or the anthropic measure function) is?
I ended up clicking through to this, and I guess that the mirror chamber post is important, but I'm not sure if I should read something else first.
I started reading, and it's curious enough (and short enough) I'm willing to read the rest, but wanted to ask the above first.
[...] critrats [...] let themselves wholly believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem
Speaking as someone you'd probably consider a 'critrat': this feels wrong to me. I can't speak for other CR ppl, ofc, and some CR ppl aren't good at it (like any epistemology), but for me I don't think what you describe would add up to "a productive intellectual ecosystem".
In this case, I don't think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all them. That probably would sound like a good thing to do to most popperians, but they often seem to have the wrong attitudes about how (collective) induction happens and might not be prepared to do it;
I'm not sure what this would look like in practice. If you have two competing theories and don't need to act on them - there's no issue. If they're not mutually exclusive there's no issue. I...
I think it's fairly clear from this that he doesn't have solomonoff induction internalized, he doesn't know how many of his objections to bayesian metaphysics it answers.
I suspect, for DD, it's not about *how many* but *all*. If I come up with 10 reasons Bayesianism is wrong (so 10 criticisms), and 9 of those get answered adequately, the 1 that's still left is as bad as the 10; *any* unanswered criticism is a reason not to believe an idea. So to convince DD (or any decent Popperian) that an idea is wrong can't rely on incompl...
Evidence to the contrary, please?
here
Before October 2014, copyright law permitted use of a work for the purpose of criticism and review, but it did not allow quotation for other more general purposes. Now, however, the law allows the use of quotation more broadly. So, there are two exceptions to be aware of, one specifically for criticism and review and a more general exception for quotation. Both exceptions apply to all types of copyright material, such as books, music, films, etc.
https://www.copyrightuser.org/understand/exceptions/quotation/ - firs...
I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)
curi evidently wanted to change some things about his behaviour, otherwise he wouldn't have updated...
I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that. But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?".
As stewards of the community, we need to make decisions taking into acco...