None of us are calling for blame, ostracism, or cancelling of Michael.
What I'm saying is that the Berkeley community should be.
Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.
Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.
I'm not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.
It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the O...
I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.
Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.
...Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detr
Please see my comment on the grandparent.
I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.
Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes.
The sentence is also misleading given Devi didn't detransition afaik.
Each cohort knows that Carol is not a realistic threat to their preferred candidate, and will thus rank her second, while ranking their true second choice last.
Huh? This doesn't make sense. In which voting system would that help? In most systems that would make no difference to the relative probability of your first and second choices winning.
This is called burying. It makes sense in systems that violate the later-no-help or later-no-harm criteria, but instant-runoff voting satisfies both of those.
https://electowiki.org/wiki/Tactical_voting#Burying
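To make burying concrete, here is a toy illustration under the Borda count, which violates later-no-harm (the ballot counts are made up for illustration); under IRV the same maneuver gains nothing:

```python
def borda_winner(ballots):
    """ballots: list of (ranking, voter_count); standard Borda scoring
    (last place gets 0 points, each rank above it one more)."""
    scores = {}
    for ranking, n in ballots:
        for points, cand in enumerate(reversed(ranking)):
            scores[cand] = scores.get(cand, 0) + points * n
    return max(scores, key=scores.get)

# Honest ballots: B wins the Borda count.
honest = [(["A", "B", "C"], 4), (["B", "A", "C"], 3), (["C", "B", "A"], 2)]
# The 4 A-first voters "bury" B (their true second choice) below C.
buried = [(["A", "C", "B"], 4), (["B", "A", "C"], 3), (["C", "B", "A"], 2)]

assert borda_winner(honest) == "B"
assert borda_winner(buried) == "A"  # burying flipped the result to A
```

Under IRV the A-first voters' lower rankings are never even examined unless A is eliminated, which is why burying cannot help there.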
That's possible, although then the consciousness-related utterances would be of the form "oh my, I seem to have suddenly stopped being conscious" or the like (if you believe that consciousness plays a causal role in human utterances such as "yep, i introspected on my consciousness and it's still there"), implying that such a simulation would not have been a faithful synaptic-level WBE, having clearly differing macro-level behaviour.
As a more powerful version of this, you can install uBlock Origin and configure these custom filters to remove everything on youtube except for the video and the search box. As a user, I don't miss the comments, social stuff, 'recommendations', or any other stuff at all.
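(I don't know which filters were originally linked here, but for illustration: uBlock Origin cosmetic filters take the form `domain##css-selector`. The rules below are hypothetical examples of the kind involved; YouTube's element names change over time, so treat them as a sketch rather than working filters.)

```
www.youtube.com##ytd-comments
www.youtube.com##ytd-watch-next-secondary-results-renderer
```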
I must admit I can't make any sense of your objections. There aren't any deep philosophical issues with understanding decision algorithms from an outside perspective. That's the normal case! For instance, A*
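For readers unfamiliar with it, A* is exactly the kind of decision algorithm we routinely understand entirely from the outside. A minimal sketch of the textbook algorithm (the grid example is made up for illustration):

```python
import heapq

def astar(start, goal, neighbors, h):
    """Return the cost of a cheapest path from start to goal, or None.

    neighbors(node) yields (next_node, step_cost); h is an admissible
    heuristic (never overestimates the remaining cost)."""
    frontier = [(h(start), 0, start)]  # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found already
        for nxt, step in neighbors(node):
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

# Toy example: 5x5 grid, unit moves, Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

assert astar((0, 0), (3, 4), grid_neighbors,
             lambda p: abs(p[0] - 3) + abs(p[1] - 4)) == 7
```

We can prove things about its behaviour (optimality given an admissible heuristic, say) without any appeal to an inside view.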
This isn't a criticism of this post or of Vaniver, but more a comment on Circling in general prompted by it. This example struck me in particular:
Orient towards your impressions and emotions and stories as being yours, instead of about the external world. “I feel alone” instead of “you betrayed me.”
It strikes me as very disturbing that this should be the example that comes to mind. It seems clear to me that one should not, under any circumstances engage in a group therapy exercise designed to lower your emotional barriers and create vulnerability in th
...It seems clear to me that one should not, under any circumstances engage in a group therapy exercise designed to lower your emotional barriers and create vulnerability in the presence of anyone you trust less than 100%
I agree with this almost completely. Two quibbles: first, styles of Circling vary in how much they are a "group therapy exercise" (vs. something more like a shared exploration or meditation); second, I think "100%" trust of people is an unreasonable bar; like, I don't think you should extend that level of trust to anyone, even yourself. So there'
...Where does that obligation come from?
This may not be Said's view, but it seems to me that this obligation comes from the sheer brute fact that if no satisfactory response is provided, readers will (as seems epistemically and instrumentally correct) conclude that there is no satisfactory response and judge the post accordingly. (Edit: And also, entirely separately, the fact that if these questions aren't answered the post author will have failed to communicate, rather defeating the point of making a public post.)
Obviously readers will conclude this more
...T3t's explanations seem quite useless to me. The procedure they describe seems highly unlikely to reach anything like a correct interpretation of anything, being basically a random walk in concept space.
It's hard to see what "I don't understand what you meant by X, also here's a set of completely wrong definitions I arrived at by free association starting at X" could possibly add over "I don't understand what you meant by X", apart from wasting everyone's time redirecting attention onto a priori wrong interpretations.
I'm also somewhat alarmed to see people
...But my sense is that if the goal of these comments is to reveal ignorance, it just seems better to me to argue for an explicit hypothesis of ignorance, or a mistake in the post.
My sense is the exact opposite. It seems better to act so as to provide concrete evidence of a problem with a post, which stands on its own, than to provide an argument for a problem existing, which can be easily dismissed (ie. show, don't tell). Especially when your epistemic state is that a problem may not exist, as is the case when you ask a clarifying question and are yet to receive the answer!
To be clear, I think your comment was still net-negative for the thread, and provided little value (in particular in the presence of other commenters who asked the relevant questions in a, from my perspective, much more productive way)
I just want to note that my comment wouldn't have come about were it not for Said's.
Again, this is a problem that would easily be resolved by tone-of-voice in the real world, but since we are dealing with text-based communication here, these kinds of confusions can happen again and again.
To be frank, I find your attitu
...I just want to note that my comment wouldn't have come about were it not for Said's.
That's good to know. I do think if people end up writing better comments in response to Said's comments, then that makes a real difference to me. I would be curious about how Said's comment helped you write your comment, if you have the time, which would help me understand the space of solutions better.
The only person in this thread who interpreted Said's original comment as an attack seems to have been you.
I am quite confident that is not the case. I don't think anyone
...FWIW, that wasn't my interpretation of quanticle's comment at all. My reading is that "healthy" was not meant as a proposed interpretation of "authentic" but as an illustrative substitution demonstrating the content-freeness of this use of the word -- because the post doesn't get any more or less convincing when you replace "authentic" with different words.
This is similar to what EY does in Applause Lights itself, where he replaces words with their opposites to demonstrate that sentences are uninformative.
(As an interpretation, it would also be rather barr
......Why should “that which can be destroyed by the truth” be destroyed? Because the truth is fundamentally more real and valuable than what it replaces, which must be implemented on a deeper level than “what my current beliefs think.” Similarly, why should “that which can be destroyed by authenticity” be destroyed? Because authenticity is fundamentally more real and valuable than what it replaces, which must be implemented on a deeper level than “what my current beliefs think.” I don’t mean to pitch ‘radical honesty’ here, or other sorts of excessive openness
I think you're right that the functional role of "authentic" in the above post is as an applause light. But... I think the same goes for "truth," in the way that you point out in your 2nd point. [In the post as a whole, I think "deep" also doesn't justify its directionality, but I think that's perhaps more understandable.]
That is, a description of what 'truth' is looks like The Simple Truth, which is about 20 pages long. I'm editing in that link to the relevant paragraph, as well as an IOU for 'authenticity,' which I think will be a Project to actually pay
...If what you want is to do the right thing, there's no conflict here.
Conversely, if you don't want to do the right thing, maybe it would be prudent to reconsider doing it...?
I don't see the usual commonsense understanding of "values" (or the understanding used in economics or ethics) as relying on values being ontologically fundamental in any way, though. But you've used the fact that they're not to make a seemingly unjustified rhetorical leap to "values are just habituations or patterns of action", which just doesn't seem to be true.
Most importantly, because the "values" that people are concerned with when they talk about "value drift" are idealized values (a la extrapolated volition), not instantaneous values or opinions or habit
...When we talk of values as nouns, we are talking about the values that people have, express, find, embrace, and so on. For example, a person might say that altruism is one of their values. But what would it mean to “have” altruism as a value or for it to be one of one’s values? What is the thing possessed or of one in this case? Can you grab altruism and hold onto it, or find it in the mind cleanly separated from other thoughts?
Since this appears to be a crux of your whole (fallacious, in my opinion) argument, I'm going to start by just criticizing this
...Doesn't it mean the same thing in either case? Either way, I don't know which way the coin will land or has landed, and I have some odds at which I'll be willing to make a bet. I don't see the problem.
(Though my willingness to bet at all will generally go down over time in the "already flipped" case, due to the increasing possibility that whoever is offering the bet somehow looked at the coin in the intervening time.)
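The symmetry is easy to make concrete: the expected value of a bet depends only on your credence, and the arithmetic is identical whether the flip is in the future or has already happened unseen. A toy calculation (stakes made up for illustration):

```python
def bet_ev(credence_heads, stake, payout_if_heads):
    """Expected profit from staking `stake` on heads, given your credence."""
    return credence_heads * payout_if_heads - stake

# With credence 0.5, even-money odds (2x payout on a 1-unit stake) are fair,
# regardless of whether the coin "will land" or "has landed" unseen.
assert bet_ev(0.5, 1.0, 2.0) == 0.0
```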
The idea that "probability" is some preexisting thing that needs to be "interpreted" as something always seemed a little bit backwards to me. Isn't it more straightforward to say:
No, that doesn't work. It seems to me you've confused yourself by constructing a fake symmetry between these problems. It wouldn't make any sense for Omega to "predict" whether you choose both boxes in Newcomb's if Newcomb's were equivalent to something that doesn't involve choosing boxes.
More explicitly:
Newcomb's Problem is "You sit in front of a pair of boxes, which are either- both filled with money if Omega predicted you would take one box in this case, otherwise only one is filled". Note: describing the problem does not require mentioning "Newcomb's P...
Yes, you need to have a theory of physics to write down a transition rule for a physical system. That is a problem, but it's not at all the same problem as the "target format" problem. The only role the transition rule plays here is it allows one to apply induction to efficiently prove some generalization about the system over all time steps.
In principle a different, more distinguished concise description of the system's behaviour could play a similar role (perhaps, the recording of the states of the system + the shortest program that outputs the record
...That's not an issue in my formalization. The "logical facts" I speak of in the formalized version would be fully specified mathematical statements, such as "if the simulation starts in state X at t=0, the state of the simulation at t=T is Y" or "given that Alice starts in state X, then <some formalized way of categorising states according to favourite ice cream flavour> returns Vanilla
". The "target format" is mathematical proofs. Languages (as in English vs Chinese) don't and can't come in to it, because proof systems are language-ignorant.
Note, the
...This idea is, as others have commented, pretty much Dust theory.
The solution, in my opinion, is the same as the answer to Dust theory: namely, it is not actually the case that anything is a simulation of anything. Yes, you can claim that (for instance) the motion of the atoms in a pebble can be interpreted as a simulation of Alice, in the sense that anything can be mapped to anything... but in a certain more real sense, you can't.
And that sense is this: an actual simulation of Alice running on a computer grants you certain powers - you can step through the
...We can (and should) have that discussion, we should just have it on a separate post
Can you point to the specific location that discussion "should" happen at?
The two parts I mentioned are simply the most obviously speculative and unjustified examples. I also don't have any real reason to believe the vaguer pop psychology claims about building stories, backlogs, etc.
The post would probably have been a bit cleaner without the few wild speculations, but getting caught up in the tiny details seems to miss the forest for the trees.
It seems to me LW has a big epistemic hygiene problem, of late. We need to collectively stop making excuses for posting wild speculations as if they were fa
...The tacit claim is that LW should be about confirmatory research and that exploratory research doesn't belong here. But confirmatory, cited research has never been the majority of content going back to LW 1.0.
For a post that claims to be a "translation" of Buddhism, this seems to contain:
On the other hand, it does contain quite a bit of unjustified speculation. "Literal electrical resistance in the CNS", really? "Rewiring your CNS"? Why should I believe any of this?
Why are people upvoting this?
"Above the map"? "Outside the territory"? This is utter nonsense. Rationality insists no such thing. Explicitly the opposite, in fact.
Given things like this too:
Existing map-less is very hard. The human brain really likes to put maps around things.
At this point I have to wonder if you're just rounding off rationality to the nearest thing to which you can apply new-age platitudes. Frankly, this is insulting.
You don't need to estimate this.
A McGill University study found that more than 60 percent of college-level soccer players reported symptoms of concussion during a single season. Although the percentage at other levels of play may be different, these data indicate that head injuries in soccer are more frequent than most presume.
A 60% chance of concussion is more than enough for me to stay far away.
Prevention over removal. Old LW required a certain amount of karma in order to create posts, and we correspondingly didn't have a post spam problem that I remember. I strongly believe that this requirement should be re-introduced (with or without a moderator approval option for users without sufficient karma).
Proof of #4, but with unnecessary calculus:
Not only is there an odd number of tricolor triangles, but they come in pairs according to their orientation (RGB clockwise/anticlockwise). Proof: define a continuously differentiable vector field on the plane, by letting the field at each vertex be 0, and the field in the center of each edge be a vector of magnitude 1 pointing in the direction R->G->B->R (or 0 if the two adjacent vertices are the same color). Extend the field to the complete edges, then the interiors of the triangles by some interpolat
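The counting claim can also be checked combinatorially, without the calculus. A brute-force sketch (assuming the standard Sperner boundary conditions: corners colored R, G, B, and each side of the big triangle using only its two endpoint colors):

```python
import random

def sperner_counts(n, rng):
    """Color a size-n triangulated triangle per Sperner's conditions and
    count tricolor (RGB) small triangles by geometric orientation."""
    color = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            if (i, j) == (0, 0):
                c = 0                      # corner R
            elif (i, j) == (n, 0):
                c = 1                      # corner G
            elif (i, j) == (0, n):
                c = 2                      # corner B
            elif j == 0:
                c = rng.choice([0, 1])     # bottom edge: R/G only
            elif i == 0:
                c = rng.choice([0, 2])     # left edge: R/B only
            elif i + j == n:
                c = rng.choice([1, 2])     # hypotenuse: G/B only
            else:
                c = rng.choice([0, 1, 2])  # interior: anything
            color[(i, j)] = c

    def sign(tri):
        cs = [color[v] for v in tri]       # tri is listed in CCW order
        if len(set(cs)) < 3:
            return 0
        # +1 if the colors read R->G->B counterclockwise, else -1
        return 1 if cs in ([0, 1, 2], [1, 2, 0], [2, 0, 1]) else -1

    ccw = cw = 0
    for i in range(n):
        for j in range(n - i):
            s = sign([(i, j), (i + 1, j), (i, j + 1)])             # upward cell
            ccw += s == 1; cw += s == -1
            if i + j <= n - 2:                                     # downward cell
                s = sign([(i + 1, j), (i + 1, j + 1), (i, j + 1)])
                ccw += s == 1; cw += s == -1
    return ccw, cw

rng = random.Random(0)
for n in (1, 2, 5, 8):
    for _ in range(50):
        ccw, cw = sperner_counts(n, rng)
        assert (ccw + cw) % 2 == 1 and abs(ccw - cw) == 1
```

On every random coloring the tricolor count is odd and the signed count (CCW minus CW) is ±1: beyond one distinguished triangle, the rest pair up by orientation, as claimed.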
Your interpretation of the bolded part is correct.
We got to discussing this on #lesswrong recently. I don't see anyone here pointing this out yet directly, so:
Can you technically Strong Upvote everything? Well, we can’t stop you. But we’re hoping a combination of mostly-good-faith + trivial inconveniences will result in people using Strong Upvotes when they feel it’s actually important.
This approach, hoping that good faith will prevent people from using Strong votes "too much", is a good example of an Asshole Filter (linkposted on LW last year). You've set some (unclear) boundaries, then due to not en
...Note: I would never punish anyone for their vote-actions on the site, both because I agree that you should not punish people for giving them options without communicating any downside, but more importantly, because I think it is really important that votes form an independent assessment for which people do not feel like they have to justify themselves. Any punishment of voting would include some kind of public discussion of vote-patterns, which is definitely off-limits for us, and something we are very very very hesitant to do. (This seemed important to say, since I think independence of voting is quite important for the site integrity)
(Note: still disenfranchises users who don’t notice that this feature exists, but maybe that’s ok.)
It is not difficult to make people notice the feature exists; cf. the GreaterWrong implementation. (Some people will, of course, still fail to notice it, somehow. There are limits to how much obliviousness can be countered via reasonable UX design decisions.)
...This is also a UX issue. Forcing users to navigate an unclear ethical question and prisoner’s dilemma---how much strong voting is “too much”---in order to use the site is unpleasant and a bad user ex
Good post!
Is it common to use Kalman filters for things that have nonlinear transformations, by approximating the posterior with a Gaussian (eg. calculating the closest Gaussian distribution to the true posterior by JS-divergence or the like)? How well would that work?
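For what it's worth, this is roughly what assumed density filtering and unscented/extended Kalman variants do, though the "closest Gaussian" is usually taken under KL divergence rather than JS: matching the mean and variance of the true pushed-forward distribution gives the KL(p||q)-minimizing Gaussian q. A Monte Carlo sketch of that moment-matching step, with a nonlinearity chosen purely for illustration:

```python
import math
import random
import statistics

random.seed(0)

# Push a Gaussian prior N(0, 1) through a nonlinear map, then fit the
# closest Gaussian to the result by matching its first two moments.
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
ys = [math.sin(x) for x in xs]          # nonlinear transformation

mu = statistics.fmean(ys)
var = statistics.pvariance(ys)

# For sin of a standard normal the exact moments are known:
# E[sin X] = 0 and Var[sin X] = (1 - e^-2) / 2 ≈ 0.432.
assert abs(mu) < 0.01
assert abs(var - (1 - math.exp(-2)) / 2) < 0.01
```

In practice the EKF linearizes instead of sampling, and the UKF uses a small deterministic set of sigma points, but both are chasing the same Gaussian approximation of a non-Gaussian posterior.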
Grammar comment--you seem to have accidentally a few words at
Measuring multiple quantities: what if we want to measure two or more quantities, such as temperature and humidity? Furthermore, we might know that these are [missing words?] Then we now have multivariate normal distributions.
How big was your mirror, and how much of your face did you see in it?
C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.
That's not how algorithms work and seems... incoherent.
That you want to deny C is great,
I did not say that either.
...because I think (as I’m finding with Said), that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, an
It seems that you don't get it. Said just demonstrated that even if C exists it wouldn't imply a universally compelling argument.
In other words, this:
...Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-indepen
It doesn't seem to be a strawman of what eg. gworley and TAG have been saying, judging by the repeated demands for me to supply some universally compelling "criterion of truth" before any of the standard criticisms can be applied. Maybe you actually disagree with them on this point?
It doesn't seem like applying full force in criticism is a priority for the 'postrationality' envisioned by the OP, either, or else they would not have given examples (compellingness-of-story, willingness-to-life) so trivial to show as bad ideas using standard arguments.
As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that “the brain learns to use what has worked before and what it thinks is likely to make it win in the future” is exactly what Eliezer is advocating in the above post.
Even if true, this is different from "epistemic rationality is just instrumental rationality"; as different as adaptation executors are from fitness maximisers.
Separately, it's interesting that you quote this part:
...The important thing is to hold nothing bac
Advocates of postrationality seem to be hoping that the fact that P(Occam's razor) < 1 makes these arguments go away. It doesn't work like that.
This (among other paragraphs) is an enormous strawman of everything that I have been saying. Combined with the fact that the general tone of this whole discussion so far has felt adversarial rather than collaborative, I don't think that I am motivated to continue any further.
I'll have more to say later but:
The way that I’d phrase it is that there’s a difference between considering a claim to be true, and considering its justification universally compelling.
Both of these are different from the claim actually being true. The fact that Occam's razor is true is what causes the physical process of (occamian) observation and experiment to yield correct results. So you see, you've already managed to rephrase what I've been saying into something different by conflating map and territory.
This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.
But to do that, you need to use a meta-model. When I say that we don’t have direct access to the truth, this is what I mean;
This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.
Yes, carryi
...reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.
Obviously. Which is why I said that the point was not any of the specific arguments in that debate - they were totally arbitrary and could just as well have been two statisticians debating the validity of different statistical approaches - but the fact that any two people can disagree about anything in the first place, as they have different models of how to interpret their observations.
..."Occam's razor is true" is an entirely different thing from
Indeed, the scientific history of how observation and experiment led to a correct understanding of the phenomenon of rainbows is long and fascinating.
I'm sorry, what? In this discussion? That seems like an egregious conflict of interest. You don't get to unilaterally decide that my comments are made in bad faith based on your own interpretation of them. I saw which comment of mine you deleted and honestly I'm baffled by that decision.
The moderation system we settled on gives people above a certain karma threshold the ability to moderate on their own posts, which I think is very important to allow people to build their own gardens and cultivate ideas. Discussion about that general policy should happen in meta. I will delete any further discussion of moderation policies on this post.
If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.
and to be pointed about it I think believing you can identify the criterion of truth is a “comforting” belief that is either contradictory or demands adopting non-transcendental idealism
Actually... I was going to edit my comment to add that I'm not sure that I would agree that I "think we can know truth well enough to avoid the problem of the criterion" either, since your concep
...If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.
That's not my only disagreement. I also think that your specific proposed solution does nothing to "address" the problem (in particular because it just seems like a bad idea, in general because "addressing" it to your satisfaction is impossible), and only serves as an excuse to rationalize holding comforting but wrong beliefs under the guise of doing "advanced philosophy". This is why
...I don't have to solve the problem of induction to look out my window and see whether it is raining. I don't need 100% certainty, a four-nines probability estimate is just fine for me.
Where's the "just go to the window and look" in judging beliefs according to "compellingness-of-story"?
Of course not, and that’s the point.
The point... is that judging beliefs according to whether they achieve some goal (or anything else) is no more reliable than judging beliefs according to whether they are true, is in no way a solution to the problem of induction or even a sensible response to it, and most likely only makes your epistemology worse?
Indeed, which is why metarationality must not forget to also include all of rationality within it!
Can you explain this in a way that doesn't make it sound like an empty applause light? How can I take compellin
...Because there’s no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality.
I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.
So much the worse for schizophrenics. And s
...I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
Suppose that I do a rain-making dance in my backyard, and predict that as a consequence of this, it will rain tomorrow. Turns out that it really does rain the next day. Now I argue that I have magical rain-making powers.
Somebody else objects, "of course you don't, it just happened to rain by coinci...
Two points:
Advancing the conversation is not the only reason I would write such a thing, but actually it serves a different purpose: protecting other readers of this site from forming a false belief that there's some kind of consensus here that this philosophy is not poisonous and harmful. Now the reader is aware that there is at least debate on the topic.
It doesn't prove the OP's point at all. The OP was about beliefs (and "making sense of the world"). But I can have the belief "postrationality is poisonous and harmful" without having to post a comm
Well, this is a long comment, but this seems to be the most important bit:
The general point here is that the human brain does not have magic access to the criteria of truth; it only has access to its own models.
Why would you think "magic access" is required? It seems to me the ordinary non-magic causal access granted by our senses works just fine.
All that you say about beliefs often being critically mistaken due to eg. emotional attachment, is of course true, and that is why we must be ruthless in rejecting any reasons for believing things other than t
...Why would you think "magic access" is required?
Because there's no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality. Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.
Since there's no direct causal pathway, it would have to work through some non-causal means, i.e. magic.
The problem is this seems to be exactly the opposite of what "postrationality" advocates: using the la...
Neat! This looks a lot like my quick note on survival time prediction I wrote a few years back, but more in depth. Very nice.