All of nshepperd's Comments + Replies

Neat! This looks a lot like the quick note on survival time prediction I wrote a few years back, but more in depth. Very nice.

None of us are calling for blame, ostracism, or cancelling of Michael.

What I'm saying is that the Berkeley community should be.

Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.

Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.

I'm not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.

It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the O... (read more)

I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detr

... (read more)
devi

Please see my comment on the grandparent.

I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.

Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes.

The sentence is also misleading given Devi didn't detransition afaik.

Each cohort knows that Carol is not a realistic threat to their preferred candidate, and will thus rank her second, while ranking their true second choice last.

Huh? This doesn't make sense. In which voting system would that help? In most systems that would make no difference to the relative probability of your first and second choices winning.

This is called burying. It makes sense in systems that violate the later-no-help or later-no-harm criteria, but instant-runoff voting satisfies both of those.

https://electowiki.org/wiki/Tactical_voting#Burying
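A minimal sketch of the difference (Python; the 11-voter profile and candidate names are invented for illustration): under Borda, burying a rival below a no-hoper flips the outcome, while the same manoeuvre gains nothing under instant-runoff.

```python
from collections import Counter

def borda_winner(ballots):
    """Borda count: with n candidates, 1st place earns n-1 points, last earns 0."""
    n = len(ballots[0])
    scores = Counter()
    for ballot in ballots:
        for rank, cand in enumerate(ballot):
            scores[cand] += n - 1 - rank
    return max(scores, key=scores.get)

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with fewest first preferences."""
    remaining = set(ballots[0])
    while len(remaining) > 1:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        loser = min(remaining, key=lambda c: firsts[c])
        remaining.remove(loser)
    return remaining.pop()

# Hypothetical electorate: 6 voters prefer A > B > C, 5 prefer B > A > C.
sincere = [("A", "B", "C")] * 6 + [("B", "A", "C")] * 5
# The B-voters bury their true second choice A beneath the no-hoper C.
buried = [("A", "B", "C")] * 6 + [("B", "C", "A")] * 5

print(borda_winner(sincere), borda_winner(buried))  # A B -- burying flips the winner
print(irv_winner(sincere), irv_winner(buried))      # A A -- burying changes nothing
```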

That's possible, although then the consciousness-related utterances would be of the form "oh my, I seem to have suddenly stopped being conscious" or the like (if you believe that consciousness plays a causal role in human utterances such as "yep, I introspected on my consciousness and it's still there"), implying that such a simulation would not have been a faithful synaptic-level WBE, having clearly differing macro-level behaviour.

As a more powerful version of this, you can install uBlock Origin and configure these custom filters to remove everything on youtube except for the video and the search box. As a user, I don't miss the comments, social stuff, 'recommendations', or any other stuff at all.
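(The linked filter list isn't reproduced here. Purely as a hypothetical illustration of what such uBlock Origin cosmetic filters look like; the selectors below are guesses and depend on YouTube's current markup:)

```
! Hypothetical examples only -- element names are assumptions about YouTube's DOM
www.youtube.com###comments
www.youtube.com###related
www.youtube.com##ytd-watch-next-secondary-results-renderer
```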

I must admit I can't make any sense of your objections. There aren't any deep philosophical issues with understanding decision algorithms from an outside perspective. That's the normal case! For instance, A*

Gordon Seidoh Worley
This followup also seems relevant.
Shmi
It's a great post, just doesn't quite go far enough...

This isn't a criticism of this post or of Vaniver, but more a comment on Circling in general prompted by it. This example struck me in particular:

Orient towards your impressions and emotions and stories as being yours, instead of about the external world. “I feel alone” instead of “you betrayed me.”

It strikes me as very disturbing that this should be the example that comes to mind. It seems clear to me that one should not, under any circumstances engage in a group therapy exercise designed to lower your emotional barriers and create vulnerability in th

... (read more)
Vaniver

It seems clear to me that one should not, under any circumstances engage in a group therapy exercise designed to lower your emotional barriers and create vulnerability in the presence of anyone you trust less than 100%

I agree with this almost completely. Two quibbles: first, styles of Circling vary in how much they are a "group therapy exercise" (vs. something more like a shared exploration or meditation), and I think "100%" trust of people is an unreasonable bar; like, I don't think you should extend that level of trust to anyone, even yourself. So there'

... (read more)
ChristianKl
I don't think the principle of orienting towards your own impressions/emotions/stories is about reducing emotional barriers. Nonviolent communication is perfectly capable of expressing boundaries. There might be some situations where a person lacks the skill to express boundaries in a nonviolent way and then loses some protection when they are put into a context where they are expected to communicate nonviolently, but if there's a good Circling facilitator, that facilitator's role is to help the person actually express their boundaries. The problem is when a powerful person uses authenticity or NVC in a way where they express their own desires without accounting for the interests of the less powerful person in an exchange. From what I read about the allegations towards Brent, him openly expressing his desires in cases where he was powerful, and pushing his desires as being important for others to fulfill, is one way this plays out. One feature of the SAS seminars of Circling Europe, for example, is that there are no confidentiality agreements, because they see such an agreement as creating a "should" that prevents people from authentically expressing themselves. At the same time, I find confidentiality agreements important to protect vulnerable/low-power people who share information in circles, and I do make confidentiality agreements when I lead circles. Whenever one has a lot of power in a social situation, it's necessary to do more than just follow one's own desires to avoid slipping into patterns that are abusive of other people. The principle of trusting that you only have to be authentic, and can then trust that the universe will see that nobody comes to harm, is dangerous.

Where does that obligation come from?

This may not be Said's view, but it seems to me that this obligation comes from the sheer brute fact that if no satisfactory response is provided, readers will (as seems epistemically and instrumentally correct) conclude that there is no satisfactory response and judge the post accordingly. (Edit: And also, entirely separately, the fact that if these questions aren't answered the post author will have failed to communicate, rather defeating the point of making a public post.)

Obviously readers will conclude this more

... (read more)

T3t's explanations seem quite useless to me. The procedure they describe seems highly unlikely to reach anything like a correct interpretation of anything, being basically a random walk in concept space.

It's hard to see what "I don't understand what you meant by X, also here's a set of completely wrong definitions I arrived at by free association starting at X" could possibly add over "I don't understand what you meant by X", apart from wasting everyone's time redirecting attention onto a priori wrong interpretations.

I'm also somewhat alarmed to see people

... (read more)
RobertM
To reiterate, I don't explicitly use anything like the procedures I described in my posts to do any sort of interpretation. I came up with them to use as levers to attempt bridging the inferential distance between Said and me; I agree that in practice trying to use those models explicitly would be extremely error-prone (probably better than a random walk, but maybe not by much). More salient to the point at hand: you understood (to a sufficient degree) the models I was describing, and your criticisms contain information about your understanding of those models. If for whatever reason I wanted to continue discussing those models, those two things being true would make it possible for me to respond further (with clarifications, questions about your interpretations, etc).
habryka
Alas, then that guess of mine was probably wrong, but thank you for clarifying your position. In that case I will have to admit that I am arguing for a change in norms that you will also likely perceive to be worse.  To be clear though, you have given an argument against the procedure that T3t has described. The question at hand was whether their explanation helped you come to better understand the procedure (independently of whether you agree with it). It seems to me that you did indeed come to better understand the procedure in question, though my guess is there are still significant misunderstandings left. Is your sense that your model of the kind of procedure that me and T3t are advocating for has stayed the same after reading their comment? 

But my sense is that if the goal of these comments is to reveal ignorance, it just seems better to me to argue for an explicit hypothesis of ignorance, or a mistake in the post.

My sense is the exact opposite. It seems better to act so as to provide concrete evidence of a problem with a post, which stands on its own, than to provide an argument for a problem existing, which can be easily dismissed (i.e., show, don't tell). Especially when your epistemic state is that a problem may not exist, as is the case when you ask a clarifying question and have yet to receive the answer!

To be clear, I think your comment was still net-negative for the thread, and provided little value (in particular in the presence of other commenters who asked the relevant questions in a, from my perspective, much more productive way)

I just want to note that my comment wouldn't have come about were it not for Said's.

Again, this is a problem that would easily be resolved by tone-of-voice in the real world, but since we are dealing with text-based communication here, these kinds of confusions can happen again and again.

To be frank, I find your attitu

... (read more)
habryka

I just want to note that my comment wouldn't have come about were it not for Said's.

That's good to know. I do think if people end up writing better comments in response to Said's comments, then that makes a good difference to me. I would be curious about how Said's comment helped you write your comment, if you have the time, which would help me understand the space of solutions better.

The only person in this thread who interpreted Said's original comment as an attack seems to have been you.

I am quite confident that is not the case. I don't think anyone

... (read more)

FWIW, that wasn't my interpretation of quanticle's comment at all. My reading is that "healthy" was not meant as a proposed interpretation of "authentic" but as an illustrative substitution demonstrating the content-freeness of this use of the word -- because the post doesn't get any more or less convincing when you replace "authentic" with different words.

This is similar to what EY does in Applause Lights itself, where he replaces words with their opposites to demonstrate that sentences are uninformative.

(As an interpretation, it would also be rather barr

... (read more)
habryka
Yes, to be clear, I agree with this. I would count that substitution as a possible interpretation of the word (in particular an interpretation of it being basically just an applause light), but I don't care too much about quibbling about words here. 

Why should “that which can be destroyed by the truth” be destroyed? Because the truth is fundamentally more real and valuable than what it replaces, which must be implemented on a deeper level than “what my current beliefs think.” Similarly, why should “that which can be destroyed by authenticity” be destroyed? Because authenticity is fundamentally more real and valuable than what it replaces, which must be implemented on a deeper level than “what my current beliefs think.” I don’t mean to pitch ‘radical honesty’ here, or other sorts of excessive openness

... (read more)
Vaniver

I think you're right that the functional role of "authentic" in the above post is as an applause light. But... I think the same goes for "truth," in the way that you point out in your 2nd point. [In the post as a whole, I think "deep" also doesn't justify its directionality, but I think that's perhaps more understandable.]

That is, a description of what 'truth' is looks like The Simple Truth, which is about 20 pages long. I'm editing in that link to the relevant paragraph, as well as an IOU for 'authenticity,' which I think will be a Project to actually pay

... (read more)

If what you want is to do the right thing, there's no conflict here.

Conversely, if you don't want to do the right thing, maybe it would be prudent to reconsider doing it...?

I don't see the usual commonsense understanding of "values" (or the understanding used in economics or ethics) as relying on values being ontologically fundamental in any way, though. But you've used the fact that they're not to make a seemingly unjustified rhetorical leap to "values are just habituations or patterns of action", which just doesn't seem to be true.

Most importantly, because the "values" that people are concerned with when they talk about "value drift" are idealized values (a la extrapolated volition), not instantaneous values or opinions or habit

... (read more)
Gordon Seidoh Worley
Right, I think people are pointing at something else when they normally talk about values, but that cluster is poorly constructed and doesn't cut reality at the joints, in the same way our naive notions of belief, morals, and much else cut reality slightly askew. I'm suggesting this as a rehabilitative framing of values that is a stronger, more consistent meaning for "value" than the confused cluster of things people are normally pointing at. Although to be clear, even the naive confused notion of value I'm trying to explode and rebuild here is still a fundamentally ontological thing, unless you think people mean something by "value" more like signals in the brain serving as control mechanisms to regulate feedback systems. To your concern about an unjustified leap, this is a weakness of my current position: I don't yet have a strong ability to describe my own reasoning to bring most people along, and that is one of the points of working out these ideas: so I can see which inferences do seem intuitive to people and which don't, and use that information to iterate on my explanations. To the extent that I think "value" is a confused concept, I think "idealized value" is consequently also confused, perhaps even more so because it is further distanced from what is happening on the ground. I realize idealized value feels intuitive to many folks, and at one time it did seem intuitive to me, but I am similarly suspicious that it is cleanly pointing to a real thing; instead I suspect it is a fancy thing we have constructed as part of our reasoning that has no clear correlate out in the world. That is, it is an artifact of our reasoning process, and while that's not inherently bad, it also means it's something almost purely subjective that can easily become unhinged from reality, which makes me nervous about using it as a justification for any particular policy we might want to pursue.

When we talk of values as nouns, we are talking about the values that people have, express, find, embrace, and so on. For example, a person might say that altruism is one of their values. But what would it mean to “have” altruism as a value or for it to be one of one’s values? What is the thing possessed or of one in this case? Can you grab altruism and hold onto it, or find it in the mind cleanly separated from other thoughts?

Since this appears to be a crux of your whole (fallacious, in my opinion) argument, I'm going to start by just criticizing this

... (read more)
Gordon Seidoh Worley
Hmm, so there's a way in which I agree with you and a way I don't, and it depends on what you mean by "have" here. Without going back into addressing the possession metaphor, you're expressing a notion that I interpret as talking about existence, and I see a sharp line between existence or being and reality or the thing in itself. Existence is marked by differentiation, and for people to have beliefs, objects to have colors, etc., there must be some boundary at which these concepts are demarcated such that they are distinguishable from all else. In this sense we can say these things exist, but that it's dependent on our ability to observe and differentiate, to infer a pattern. There is also a way in which some of these are more real than others. All of them arise from some physical process, but not all of them have neat correspondences. Color has maybe the cleanest, being an interaction of our senses with photons that directly correlates with behaviors of those photons. Concepts in books are maybe the flimsiest, being an interaction of a book (paper? words? what makes a book a book and not some other kind of stuff that conveys information to us?) and our model of how we model the world, and the hardest to trace to where they really come from. This is not to say they are totally unreal, but it is to say there is no thing that looks like concepts in books if you do not also have a mind to provide that interpretation of phenomena. Perhaps my presentation goes too far or is confusing, but the point is to be clear on what is ontological and what is ontic and not mistake the two, as I think is happening in the usual model of values.

Doesn't it mean the same thing in either case? Either way, I don't know which way the coin will land or has landed, and I have some odds at which I'll be willing to make a bet. I don't see the problem.

(Though my willingness to bet at all will generally go down over time in the "already flipped" case, due to the increasing possibility that whoever is offering the bet somehow looked at the coin in the intervening time.)
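A small sketch of that adverse-selection effect (the peek probability is a made-up parameter): if the offerer may have already seen the coin and only bets when they are sure to win, my expected chance of winning a fair-odds bet drops below one half.

```python
def my_win_prob(p_peeked: float) -> float:
    """Chance I win a bet on the coin, if the offerer peeked with
    probability p_peeked and only offers bets they are sure to win."""
    return (1.0 - p_peeked) * 0.5

print(my_win_prob(0.0))  # 0.5 -- freshly flipped coin, no adverse selection
print(my_win_prob(0.4))  # 0.3 -- already-flipped coin, maybe peeked at
```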

Said Achmiz
The difference is (to the naive view; I don’t necessarily endorse it) that in the case where the coin has landed, I do not know how it landed, but there’s a sense in which I could, in theory, know; there is, in any case, something to know; there is a fact of the matter about how the coin has landed, but I do not know that fact. So the “probability” of it having landed heads, or tails—the uncertainty—is, indeed, entirely in my mind. But in the case where the coin has yet to be tossed, there is as yet no fact of the matter about whether it’s heads or tails! I don’t know whether it’ll land heads or tails, but nor could I know; there’s nothing to know! (Or do you say the future is predetermined?—asks the naive interlocutor—Else how may one talk about probability being merely “in the mind”, for something which has not happened yet?) Whatever the answers to these questions may be, they are certainly not obvious or simple answers… and that is my objection to the OP: that it attempts to pass off a difficult and confusing conceptual question as a simple and obvious one, thereby failing to do justice to those who find it confusing or difficult.

The idea that "probability" is some preexisting thing that needs to be "interpreted" as something always seemed a little bit backwards to me. Isn't it more straightforward to say:

  1. Beliefs exist, and obey the Kolmogorov axioms (at least, "correct" beliefs do, as formalized by generalizations of logic (Cox's theorem), or by possible-world-counting). This is what we refer to as "bayesian probabilities", and code into AIs when we want them to represent beliefs. (A toy sketch of the world-counting view follows below.)
  2. Measures over imaginary event classes / ensembles also obey the Kolmogorov axioms. "Frequentist
... (read more)
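A toy illustration of the possible-world-counting view mentioned in point 1 (a sketch; the two-coin setup is invented): conditioning is just renormalized counting over worlds, so the Kolmogorov axioms hold automatically.

```python
from fractions import Fraction
from itertools import product

# Enumerate equally weighted possible worlds: two fair coin flips.
worlds = list(product("HT", repeat=2))

def prob(event, given=lambda w: True):
    """Conditional probability by counting worlds: renormalize the measure to the condition."""
    live = [w for w in worlds if given(w)]
    return Fraction(sum(1 for w in live if event(w)), len(live))

first_heads = lambda w: w[0] == "H"
second_heads = lambda w: w[1] == "H"

print(prob(first_heads))                      # 1/2
print(prob(second_heads, given=first_heads))  # 1/2 -- the flips are independent

# Additivity (a Kolmogorov axiom) falls out of the counting:
either = lambda w: first_heads(w) or second_heads(w)
both = lambda w: first_heads(w) and second_heads(w)
assert prob(either) == prob(first_heads) + prob(second_heads) - prob(both)
```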
ryan_b
You might be interested in some work by Glenn Shafer and Vladimir Vovk about replacing measure theory with a game-theoretic approach. They have a website here, and I wrote a lay review of their first book on the subject here. I have also just now discovered that a new book is due out in May, which presumably captures the last 18 years or so of research on the subject. This isn't really a direct response to your post, except insofar as I feel broadly the same way about the Kolmogorov axioms as you do about interpreting their application to phenomena, and this is another way of getting at the same intuitions.

No, that doesn't work. It seems to me you've confused yourself by constructing a fake symmetry between these problems. It wouldn't make any sense for Omega to "predict" whether you choose both boxes in Newcomb's if Newcomb's were equivalent to something that doesn't involve choosing boxes.

More explicitly:

Newcomb's Problem is "You sit in front of a pair of boxes, which are either- both filled with money if Omega predicted you would take one box in this case, otherwise only one is filled". Note: describing the problem does not require mentioning "Newcomb's P... (read more)

Yes, you need to have a theory of physics to write down a transition rule for a physical system. That is a problem, but it's not at all the same problem as the "target format" problem. The only role the transition rule plays here is that it allows one to apply induction to efficiently prove some generalization about the system over all time steps.

In principle a different, more distinguished concise description of the system's behaviour could play a similar role (perhaps, the recording of the states of the system + the shortest program that outputs the record

... (read more)
Bunthut
And the thing that isn't O(1) is applying the transition rule until you reach the relevant time step, right? I think I understand it now: the calculations involved in applying the transition rule count towards the computation length, and the simulation should be able to answer multiple questions about the thing it simulates. So if object A simulates object B, we make a model X of A, prove it equivalent to the one in our theory of physics, then prove it equivalent to your physics model of B, then calculate forward in X, then translate the result back into B with the equivalence. And then we count the steps all this took. Before I ask any more questions, am I getting that right?

That's not an issue in my formalization. The "logical facts" I speak of in the formalized version would be fully specified mathematical statements, such as "if the simulation starts in state X at t=0, the state of the simulation at t=T is Y" or "given that Alice starts in state X, then <some formalized way of categorising states according to favourite ice cream flavour> returns Vanilla". The "target format" is mathematical proofs. Languages (as in English vs Chinese) don't and can't come into it, because proof systems are language-ignorant.

Note, the

... (read more)
Bunthut
In that case, the target format problem shows up in the formalisation of the physical system. How do you "interpret" certain electrical junctions as NAND gates? Either you already have a formalisation, or this is a not fully formal step. Odds are you already have one (your theory of physics). But then you are measuring proof shortness relative to that system. And you could be using one of countless other formal systems which always make the same predictions, but relative to which different proofs are short and long. To steal someone else's explanation: And which of these empirically indistinguishable formalisations you use is of course a fact about the map. In your example: the assumption (including that it takes in and puts out Arabic numerals, and uses "*" as the multiplication command, and that buttons must be pressed,... and all the other things you need to actually use it) includes that.

This idea is, as others have commented, pretty much Dust theory.

The solution, in my opinion, is the same as the answer to Dust theory: namely, it is not actually the case that anything is a simulation of anything. Yes, you can claim that (for instance) the motion of the atoms in a pebble can be interpreted as a simulation of Alice, in the sense that anything can be mapped to anything... but in a certain more real sense, you can't.

And that sense is this: an actual simulation of Alice running on a computer grants you certain powers - you can step through the

... (read more)
Bunthut
I think you've given a good analysis of "simulation", but it doesn't get around the problem OP presents. It's also possible to do those calculations during the interpretation/translation. You may have meant that, I can't tell. Your idea that the computation needs to happen somewhere is good, but in order to make it work you need to specify a "target format" in which the predictions are made. "1" doesn't really simulate Alice because you can't read the predictions it makes, even when they are technically "there" in a mathematical sense, and the translation into such a format involves what we consider the actual simulation. This means, though, that whether something is a simulation is only in the map, and not in the territory. It depends on what that "target format" is. For example, a description in Chinese is in a sense not a real description to me, because I can't process it efficiently. Someone else, however, may, and to them it is a real description. Similarly, one could write a simulation in a programming language we don't know, and if they don't leave us a compiler or docs, we would have a hard time noticing. So whether something is a simulation can depend on the observer. If we want to say that simulations are conscious and ethically relevant, this seems like something that needs to be addressed.

We can (and should) have that discussion, we should just have it on a separate post

Can you point to the specific location that discussion "should" happen at?

habryka
Hmm, so originally I thought it would be best for you to create a new top-level post on your own, but I think Ray (Raemon) is planning to publish a question about this pretty soon, so it might be a better idea to wait for 24 hours or so, since that would be the most natural place to consolidate the discussion. Though you are welcome to create a new top-level post right now if that doesn't seem like a good idea to you.

The two parts I mentioned are simply the most obviously speculative and unjustified examples. I also don't have any real reason to believe the vaguer pop psychology claims about building stories, backlogs, etc.

The post would probably have been a bit cleaner to not mention the few wild speculations he mentions, but getting caught up on the tiny details seems to miss the forest from the trees.

It seems to me LW has a big epistemic hygiene problem, of late. We need to collectively stop making excuses for posting wild speculations as if they were fa

... (read more)

The tacit claim is that LW should be about confirmatory research and that exploratory research doesn't belong here. But confirmatory, cited research has never been the majority of content going back to LW 1.0.

For a post that claims to be a "translation" of Buddhism, this seems to contain:

  • No Pali text;
  • No specific references to Pali text, or any sources at all;
  • No actual translation work of any kind.

On the other hand, it does contain quite a bit of unjustified speculation. "Literal electrical resistance in the CNS", really? "Rewiring your CNS"? Why should I believe any of this?

Why are people upvoting this?

Qiaochu_Yuan
I upvoted this because it gave me some concepts to use to look at some experiences I've had. The speculations at the level of physical mechanism aren't really cruxes for me so I mostly don't care about them, and same with facts of the matter about what any particular Pali text actually says. What's interesting to me is what Romeo gets out of a combination of reading them and reflecting on his own experience, that might be relevant to me reflecting on my own experience. Gut reaction to this question is that it's the wrong question. I don't view this post as telling you anything you're supposed to believe on Romeo's word.
J-
they're just being nice. (agreed).
romeostevensit
This isn't a compiler-level attempt, it is a design-patterns-level attempt. I guess it's not universally illuminating.

"Above the map"? "Outside the territory"? This is utter nonsense. Rationality insists no such thing. Explicitly the opposite, in fact.

Given things like this too:

Existing map-less is very hard. The human brain really likes to put maps around things.

At this point I have to wonder if you're just rounding off rationality to the nearest thing to which you can apply new-age platitudes. Frankly, this is insulting.

Shmi
If you find yourself getting overly emotional over a reply on a rationality forum post, a prudent thing to do is to step away and chill for a bit before replying.
Elo
You are welcome to think this is utter nonsense and feel like this is insulting. That's fine. I understand that. It makes no sense to you and it seems like I'm gibbering about nothing. I understand where you are and why you would say that. I'm sure it's very frustrating to see these new-age platitudes and have no idea where I'm getting this from. For me this is significant information, as it is for the several people who have read it, privately messaged me, and been impressed and surprised by the experience. For myself and these people, there's something here that we see. It seems strange that I can talk in a secret language right under your nose and make sense to other people. How long until you wonder what that is and how you can see it for yourself?

You don't need to estimate this.

A McGill University study found that more than 60 percent of college-level soccer players reported symptoms of concussion during a single season. Although the percentage at other levels of play may be different, these data indicate that head injuries in soccer are more frequent than most presume.

A 60% chance of concussion is more than enough for me to stay far away.

Prevention over removal. Old LW required a certain amount of karma in order to create posts, and we correspondingly didn't have a post spam problem that I remember. I strongly believe that this requirement should be re-introduced (with or without a moderator approval option for users without sufficient karma).
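A minimal sketch of the proposed gate (the threshold value and field names are made up; old LW's actual numbers may have differed):

```python
from dataclasses import dataclass

@dataclass
class User:
    karma: int
    approved_by_moderator: bool = False

# Hypothetical threshold -- old LW used a small karma bar for posting.
POST_KARMA_THRESHOLD = 20

def can_create_post(user: User) -> bool:
    """Allow posting on sufficient karma, with moderator approval as the fallback path."""
    return user.karma >= POST_KARMA_THRESHOLD or user.approved_by_moderator

assert not can_create_post(User(karma=0))
assert can_create_post(User(karma=0, approved_by_moderator=True))
assert can_create_post(User(karma=50))
```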

Wei Dai
Agreed. Also, in the slightly longer term, there must be automated spam-detection services that could be incorporated or hired to reduce the moderators' spam-filtering workload? (If not, it seems like a business opportunity for someone.)
nshepperd

Proof of #4, but with unnecessary calculus:

Not only is there an odd number of tricolor triangles, but they come in pairs according to their orientation (RGB clockwise/anticlockwise). Proof: define a continuously differentiable vector field on the plane, by letting the field at each vertex be 0, and the field in the center of each edge be a vector of magnitude 1 pointing in the direction R->G->B->R (or 0 if the two adjacent vertices are the same color). Extend the field to the complete edges, then the interiors of the triangles by some interpolat

... (read more)
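The comment is truncated above; as a hedged sketch of where the winding-number bookkeeping goes (assuming the problem's boundary coloring forces one net turn of the field around the outer boundary):

```latex
% W(\gamma): winding number of the vector field v along the closed curve \gamma.
% Each interior edge is traversed once in each direction by its two adjacent
% triangles, so those contributions cancel and the sum telescopes:
\sum_{T \in \mathcal{T}} W(\partial T) = W(\partial \Omega)
% A tricolor (RGB) triangle contributes +1 or -1 according to its orientation;
% a triangle with a repeated color contributes 0. Writing N_+ and N_- for the
% two orientation counts, N_+ - N_- = W(\partial \Omega) = \pm 1,
% so the total number N_+ + N_- of tricolor triangles is odd.
```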
lbThingrb
Generalized to n dimensions in my reply to Adele Lopez's solution to #9 (without any unnecessary calculus :)
Charlie Steiner
As a physicist, this is my favorite one for obvious reasons :)

Your interpretation of the bolded part is correct.

We got to discussing this on #lesswrong recently. I don't see anyone here pointing this out yet directly, so:

Can you technically Strong Upvote everything? Well, we can’t stop you. But we’re hoping a combination of mostly-good-faith + trivial inconveniences will result in people using Strong Upvotes when they feel it’s actually important.

This approach, hoping that good faith will prevent people from using Strong votes "too much", is a good example of an Asshole Filter (linkposted on LW last year). You've set some (unclear) boundaries, then due to not en

... (read more)
habryka
Overall, agree on the whole asshole filter thing. After a few months of operation, we now have a bunch more data on how people vote, and so might make some adjustments to the system after we analyzed the data a bunch more. I am currently tending towards a system where your strong-upvotes get weaker the more often you use them, using some kind of "exhaustion" mechanic. I think this still would cause a small amount of overrepresentation by people who use it a lot, but I think would lessen the strength of the effect. I am mostly worried about the UI complexity of this, and communicating this clearly to the user. Also still open to other suggestions. I am not a huge fan of just leaving them unlimited, mostly because I think it's somewhat arbitrary to what degree someone will perceive them as a trivial inconvenience, and then we just introduced a bunch of random noise into our karma system, by overrepresenting people who don't find click-and-hold to be a large inconvenience.
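Nothing like this is implemented; purely as a hypothetical reading of the "exhaustion" mechanic described above (the decay rate and window are invented):

```python
def strong_vote_weight(base_weight: float, recent_strong_votes: int,
                       half_life: int = 10) -> float:
    """Hypothetical exhaustion mechanic: each strong vote in the recent window
    dampens the next one, halving the weight every `half_life` uses."""
    return base_weight * 0.5 ** (recent_strong_votes / half_life)

# A user whose strong vote is worth 8 casts votes worth only 2
# after 20 recent strong upvotes.
print(strong_vote_weight(8, 0))   # 8.0
print(strong_vote_weight(8, 20))  # 2.0
```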
habryka

Note: I would never punish anyone for their vote-actions on the site, both because I agree that you should not punish people for giving them options without communicating any downside, but more importantly, because I think it is really important that votes form an independent assessment for which people do not feel like they have to justify themselves. Any punishment of voting would include some kind of public discussion of vote-patterns, which is definitely off-limits for us, and something we are very very very hesitant to do. (This seemed important to say, since I think independence of voting is quite important for the site integrity)

(Note: still disenfranchises users who don’t notice that this feature exists, but maybe that’s ok.)

It is not difficult to make people notice the feature exists; cf. the GreaterWrong implementation. (Some people will, of course, still fail to notice it, somehow. There are limits to how much obliviousness can be countered via reasonable UX design decisions.)

This is also a UX issue. Forcing users to navigate an unclear ethical question and prisoner’s dilemma---how much strong voting is “too much”---in order to use the site is unpleasant and a bad user ex

... (read more)

Good post!

Is it common to use Kalman filters for things that have nonlinear transformations, by approximating the posterior with a Gaussian (e.g. calculating the closest Gaussian distribution to the true posterior by JS-divergence or the like)? How well would that work?

Grammar comment--you seem to have accidentally a few words at

Measuring multiple quantities: what if we want to measure two or more quantities, such as temperature and humidity? Furthermore, we might know that these are [missing words?] Then we now have multivariate normal distributions.

SatvikBeri
Thanks! Edited.
gjm
There are a number of Kalman-like things you can do when your updates are nonlinear. The "extended Kalman filter" uses a local linear approximation to the update. There are higher-order versions. The EKF unsurprisingly tends to do badly when the update is substantially nonlinear. The "unscented Kalman filter" uses (kinda) a finite-difference approximation instead of the derivative, deliberately taking points that aren't super-close together to get an approximation that's meaningful on the scale of your actual uncertainty. Going further in that direction you get "particle filters" which represent your uncertainty not as a Gaussian but by a big pile of samples from its distribution. (There's a ton of lore on all this stuff. I am in no way an expert on it.)
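To make the first of these concrete, here is a minimal one-dimensional EKF-style measurement update (a sketch; real EKFs track state vectors and Jacobian matrices rather than scalars):

```python
def ekf_update_1d(mu, var, z, h, h_prime, obs_var):
    """One EKF measurement update: linearize the nonlinear observation
    model h around the prior mean, then apply the usual Kalman formulas."""
    H = h_prime(mu)                 # slope of h at the prior mean
    s = H * var * H + obs_var       # predicted innovation variance
    k = var * H / s                 # Kalman gain
    mu_post = mu + k * (z - h(mu))  # correct the mean toward the measurement
    var_post = (1.0 - k * H) * var  # shrink the variance
    return mu_post, var_post

# Example: observe x**2 (nonlinear) with noise, starting from prior N(2.0, 0.5).
mu, var = ekf_update_1d(mu=2.0, var=0.5, z=5.0,
                        h=lambda x: x * x, h_prime=lambda x: 2 * x,
                        obs_var=1.0)
print(mu, var)  # mean pulled toward the measurement, variance reduced
```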

How big was your mirror, and how much of your face did you see in it?

C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.

That's not how algorithms work and seems... incoherent.

That you want to deny C is great,

I did not say that either.

because I think (as I’m finding with Said), that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, an

... (read more)
Gordon Seidoh Worley
Given that I still think after all this trying that you are confused and that I never wanted to put this much work into the comments on this post, I give up trying to explain further as we are making no progress. I unfortunately just don't have the energy to devote to this right now to see it through. Sorry.

It seems that you don't get it. Said just demonstrated that even if C exists it wouldn't imply a universally compelling argument.

In other words, this:

Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/​algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-indepen

... (read more)
TAG
So what? Neither the existence nor the non-existence of a Criterion of Truth that is persuasive to our minds is implied by the (non)existence of universally compelling arguments. The issue of universally compelling arguments is a red herring.
Gordon Seidoh Worley
See my other comment, but assuming to know something about how to compute C would just already be part of C by definition. It's very hard to talk about the criterion of truth without accidentally saying something that implies it's not true because it's an unknowable thing we can't grasp onto. C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That's definitionally what it means to be able to know the criterion of truth. That you want to deny C is great, because I think (as I'm finding with Said), that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, and the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.

It doesn't seem to be a strawman of what eg. gworley and TAG have been saying, judging by the repeated demands for me to supply some universally compelling "criterion of truth" before any of the standard criticisms can be applied. Maybe you actually disagree with them on this point?

It doesn't seem like applying full force in criticism is a priority for the 'postrationality' envisioned by the OP, either, or else they would not have given examples (compellingness-of-story, willingness-to-life) so trivial to show as bad ideas using standard arguments.

TAG
I did not ask for a universally compelling argument: you brought that in. Trying to solve problems by referring to the Sequences has a way of leading to derailment: people match the topic at hand to whichever of Yudkowsky's writings is least irrelevant, even if it is not relevant enough to be on the same topic.
Gordon Seidoh Worley
I agree with Kaj on this point; however, I also don't think you're intentionally trying to respond to a strawman version of what we're presenting. What we're arguing for hinges on what seems to be a subtle point for most people (it doesn't feel subtle to me but I am empathetic to technical philosophical positions being subtle to other people), so it's easy to conflate our position with, say, postmodernist-style epistemic relativism, since although it's drastically different than that, it's different for technical reasons that may not be apparent from reading the broad strokes of what we're saying. I suspect what's going on in this discussion is something like the following: me, Kaj, TAG, and others are coming from a position that is relatively small in idea space, but there are other ideas that sort-of pattern match if you don't look too close at the details, and these are getting confused for the point we're trying to make, and then people respond to these other ideas rather than the one we're holding. Although we're trying our best to cut idea space such that you see the part we're talking about, the process is inexact, because although I've pointed to it with the technical language of philosophy, the technical language of philosophy is easily mistaken for non-technical language since it reuses common words (physics sometimes has the same problem: you pick a word because it's a useful metaphor but give it a technical meaning, and then people misunderstand because they think too much in terms of the metaphor and not in terms of the precise model being referred to by the word) and requires a certain amount of fluency with philosophy in general. For example, in all the comments on this post, I think so far only jessicata has asked for clarification in a way that clearly is framed in terms of technical philosophy. This is not to necessarily demand that you engage with technical philosophy if you don't want to, but it is I suspect why we continue to have trouble communicating (or if

As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that “the brain learns to use what has worked before and what it thinks is likely to make it win in the future” is exactly what Eliezer is advocating in the above post.

Even if true, this is different from "epistemic rationality is just instrumental rationality"; as different as adaptation executors are from fitness maximisers.

Separately, it's interesting that you quote this part:

The important thing is to hold nothing bac

... (read more)

Advocates of postrationality seem to be hoping that the fact that P(Occam's razor) < 1 makes these arguments go away. It doesn't work like that.

This (among other paragraphs) is an enormous strawman of everything that I have been saying. Combined with the fact that the general tone of this whole discussion so far has felt adversarial rather than collaborative, I don't think that I am motivated to continue any further.

Gordon Seidoh Worley
Hmm, I think there is some kind of category error happening such that you think I'm asking for universally compelling arguments, because I agree they don't and can't exist as a straightforward corollary of epistemic circularity. You might feel that I do, though, because I think if you assume to know the criterion of truth, or to be able to learn it, this would be equivalent to saying you could find a universally compelling argument, because this is exactly the positivist stance. If you disagree then I suspect whatever disagreement we have has become extremely esoteric, since I don't see a natural space in which you could claim the criterion of truth is knowable and that there are no universally compelling arguments.

I'll have more to say later but:

The way that I’d phrase it is that there’s a difference between considering a claim to be true, and considering its justification universally compelling.

Both of these are different from the claim actually being true. The fact that Occam's razor is true is what causes the physical process of (occamian) observation and experiment to yield correct results. So you see, you've already managed to rephrase what I've been saying into something different by conflating map and territory.

Kaj_Sotala
Indeed, something being true is further distinct from us considering it true. But given that the whole point of metarationality is fully incorporating the consequences of realizing the map/territory distinction and the fact that we never observe the territory directly (we only observe our brain's internal representation of the external environment, rather than the external environment directly), a rephrasing that emphasizes the way that we only ever experience the map seemed appropriate.

This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.

But to do that, you need to use a meta-model. When I say that we don’t have direct access to the truth, this is what I mean;

This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.

Yes, carryi

... (read more)

reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.

Obviously. Which is why I said that the point was not any of the specific arguments in that debate - they were totally arbitrary and could just as well have been two statisticians debating the validity of different statistical approaches - but the fact that any two people can disagree about anything in the first place, as they have different models of how to interpret their observations.

"Occam's razor is true" is an entirely different thing from

... (read more)

Indeed, the scientific history of how observation and experiment led to a correct understanding of the phenomenon of rainbows is long and fascinating.

TAG
Which is to say that it is a lot more complex than "just look" and also more complex than "come up with a predictive theory". Indeed, no one has a method for obtaining correspondence to reality that works in all cases.

I'm sorry, what? In this discussion? That seems like an egregious conflict of interest. You don't get to unilaterally decide that my comments are made in bad faith based on your own interpretation of them. I saw which comment of mine you deleted and honestly I'm baffled by that decision.

habryka

The moderation system we settled on gives people above a certain karma threshold the ability to moderate on their own posts, which I think is very important to allow people to build their own gardens and cultivate ideas. Discussion about that general policy should happen in meta. I will delete any further discussion of moderation policies on this post.

Gordon Seidoh Worley
Please see the moderation guidelines. I choose to enforce a particular norm I spell out and I'm the ultimate arbiter of that. If anything I am too generous to people and let them get away with a lot of bullshit before I put a stop to things. This is not to say I never make errors, but if I think you made insufficient effort to respond in a good faith way to advance the conversation, understand the other person, and respond in a way that is not simply reacting in frustration, trying to score points, or otherwise speak to some purpose other than increasing mutual understanding, then your comment will be deleted. If you don't like my garden you can always go talk somewhere else.

If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.

and to be pointed about it I think believing you can identify the criterion of truth is a “comforting” belief that is either contradictory or demands adopting non-transcendental idealism

Actually... I was going to edit my comment to add that I'm not sure that I would agree that I "think we can know truth well enough to avoid the problem of the criterion" either, since your concep

... (read more)

If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.

That's not my only disagreement. I also think that your specific proposed solution does nothing to "address" the problem (in particular because it just seems like a bad idea, in general because "addressing" it to your satisfaction is impossible), and only serves as an excuse to rationalize holding comforting but wrong beliefs under the guise of doing "advanced philosophy". This is why

... (read more)
Gordon Seidoh Worley
It's true that I think the problem of the criterion cannot be resolved, and this forces us to adopt particularism (this is different from pragmatism but compatible with it, see Chisholm's work in this area for more information). I'm not sure what "comforting but wrong beliefs" you think I'm holding on to, though, and to be pointed about it I think believing you can identify the criterion of truth is a "comforting" belief that is either contradictory or demands adopting non-transcendental idealism (a position I think is insufficiently parsimonious to be worth taking). As for it being "a trap" and granting you no more "ability to step outside your own head that you didn't have before", I'd say this is entirely true of any ontology you construct. That doesn't mean we don't try, but it is the case that we are always stuck in our heads so long as we are trying to understand anything because that's the nature of what it is to understand. You'll likely disagree with me on this point because we disagree on the problem of the criterion, but I'd say the only way to get outside your own head is by turning to the pre-ontological or the ontic through techniques like meditation and epoche. So alas it sounds as though we are at an impasse as I don't really have the interest or the energy to try to convince you to my side of the question of how to address epistemic circularity given my current understanding of your reasoning. That's not to dismiss you, only that it's beyond what I'm currently up to engaging in. Perhaps another will step into this thread and take up the challenge.

I don't have to solve the problem of induction to look out my window and see whether it is raining. I don't need 100% certainty, a four-nines probability estimate is just fine for me.

Where's the "just go to the window and look" in judging beliefs according to "compellingness-of-story"?

TAG
I wasn't talking about induction specifically. Merely observing doesn't solve everything. What about the rainbow you see after the rain has stopped? How many times have people observed the sun without knowing it is a fusion reactor?
Gordon Seidoh Worley
This seems to be completely missing the mark and failing to respond in good faith. I already deleted a couple other comments for this reason, including one of yours nshepperd, but this case is marginal enough that I'll let it slide. Consider yourself warned and I will ban if necessary to maintain productive discussion, which would be unfortunate given your fruitful contributions elsewhere in the comments of this post.

Of course not, and that’s the point.

The point... is that judging beliefs according to whether they achieve some goal, or anything else, is no more reliable than judging beliefs according to whether they are true, is in no way a solution to the problem of induction or even a sensible response to it, and most likely only makes your epistemology worse?

Indeed, which is why metarationality must not forget to also include all of rationality within it!

Can you explain this in a way that doesn't make it sound like an empty applause light? How can I take compellin

... (read more)
Gordon Seidoh Worley
I think we've already hit the crux of our disagreement and further drilling is pointless. If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it. I think we do not and cannot know the criterion for assessing truth well enough to ignore the problem. I might make this even shorter by saying you take the pragmatist position and I take the skeptical position regarding epistemic circularity (although for myself this elides my agreement that we have to be pragmatic even as we are skeptical if we are to get anything done, and for you likely elides some skepticism you'd like to maintain). I move us in this direction because I think, for example, trying to respond to is fruitless right now because your disagreement with what I've said seems to hinge on the sort of relationship we believe we are capable of having with the truth.

Because there’s no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality.

I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.

Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.

So much the worse for schizophrenics. And s

... (read more)
TAG
I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
Inasmuch as you are looking with your eyes, that would be tracking appearance. What you don't have is a way of checking whether the ultimate causes of your sense data, in reality, are what you think they are.

Suppose that I do a rain-making dance in my backyard, and predict that as a consequence of this, it will rain tomorrow. Turns out that it really does rain the next day. Now I argue that I have magical rain-making powers.

Somebody else objects, "of course you don't, it just happened to rain by coinci... (read more)

Gordon Seidoh Worley
I just want to make clear that's exactly what Kaj and I are saying to do. Our caveat is that it's not the only thing you can do, because it's not the only thing you do do, even if you wanted desperately with all your heart for it to be otherwise. This also seems to be missing the point; we're specifically saying that we think things that rationalists think are not magical are instead magical (assuming to know the criterion of truth), and because of this you can't make assumptions strong enough to directly go after the truth without contradiction.

Two points:

  1. Advancing the conversation is not the only reason I would write such a thing, but actually it serves a different purpose: protecting other readers of this site from forming a false belief that there's some kind of consensus here that this philosophy is not poisonous and harmful. Now the reader is aware that there is at least debate on the topic.

  2. It doesn't prove the OP's point at all. The OP was about beliefs (and "making sense of the world"). But I can have the belief "postrationality is poisonous and harmful" without having to post a comm

... (read more)
Gordon Seidoh Worley
Yep. In isolation I would be unhappy about this sentence, but given the context I think it's advancing the conversation by expressing a viewpoint about what has been said so we can discuss how the ideas presented are perceived.

Well, this is a long comment, but this seems to be the most important bit:

The general point here is that the human brain does not have magic access to the criteria of truth; it only has access to its own models.

Why would you think "magic access" is required? It seems to me the ordinary non-magic causal access granted by our senses works just fine.

All that you say about beliefs often being critically mistaken due to eg. emotional attachment, is of course true, and that is why we must be ruthless in rejecting any reasons for believing things other than t

... (read more)
Gordon Seidoh Worley
FWIW, an interesting counter-question is to ask you, nshepperd, to provide the criterion of truth, or at least how one might find it. I'll warn you in advance, though, that Eliezer never adequately addresses this question in his writing because he is a pragmatist and so cuts off the line of inquiry before he'd have to address this question, thus appealing to him is insufficient (not that you necessarily would, but you've been linking him heavily and I want to cut short the need for a round of conversation where we get over this hurdle). I'll also say I have no problem with pragmatism per se, and in fact I say as much in another comment, because pragmatism is how you get on with living despite great doubt, but if you choose to go deeper on questions of epistemology than a pragmatic approach may at the moment demand, you're forced to grapple with the problem of the criterion head-on. Don't feel pressured to do this though; I just think you'll find it an interesting exercise to try to pin it down and might gain some insight from it into the postrationalist worldview.
Why would you think "magic access" is required?

Because there's no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality. Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.

Since there's no direct causal pathway, it would have to work through some non-causal means, i.e. magic.

The problem is this seems to be exactly the opposite of what "postrationality" advocates: using the la
... (read more)
Gordon Seidoh Worley
I'll let Kaj say more, but in short it becomes a logical necessity to ground the line of reasoning without introducing self-contradiction or enough freedom that you can say P=~P and not straying into saying something isomorphic to the postrationalist position.