In the following, I will use the term "my DIT" to refer to the claim that:
In some specific non-trivial contexts, on average more than half of the participants in online debate who pose as distinct human beings are actually bots.
I agree wi...
However, under the Many-Worlds Interpretation (MWI), I split my measure among multiple variants, which will be functionally different enough that I should regard my future selves as different minds. Thus, the act of choice itself lessens my measure by a factor of approximately 10. If I care about this, I'm caring about something unobservable.
If we're going to make sense of living in a branching multiverse, then we'll need to adopt a more fluid concept of personal identity.
Scenario: I take a sleeping pill that will make me fall asleep in 30 minutes. However, the person ...
This can be a great time-saver because it relies on each party to present the best possible case for their side. This means I don't have to do any evidence-gathering myself; I just need to evaluate the arguments presented, with that heuristic in mind. For example, if the pro-X side cites a bunch of sources in favor of X, but I look into them and find them unconvincing, then this is pretty good evidence against X, and I don't have to go combing through all the other sources myself. The mere existence of bad arguments for X is not in itself evidence against ...
In my experience, Americans are actually eager to talk to strangers and make friends with them, if and only if they have some good reason, other than making friends, to be where they are and to talk to those people.
A corollary of this is that if anyone at an [X] gathering is asked “So, what got you into [X]?” and answers “I heard there’s a great community around [X]”, then that person needs to be given the cold shoulder and made to feel unwelcome, because otherwise the bubble of deniability is pierced and the lemon spiral will set in, ruining it for ...
I highly recommend Val Plumwood's essay "Tasteless: towards a food-based approach to death" for a "green-according-to-green" perspective.
Plumwood would turn the "deep atheism" framing on its head, by saying in effect "No, you (the rationalist) are the real theist". The idea is that even if you've rejected Cartesian/Platonic dualism in metaphysics, you might still cling for historical reasons to a metaethical-dualist view that a "real monist" would reject, i.e. the dualism between the evaluator and the evaluated, or between the subject and object of moral val...
It's a question of whether drawing a boundary on the "aligned vs. unaligned" continuum produces an empirically-valid category; and to this end, I think we need to restrict the scope to the issues actually being discussed by the parties, or else every case will land on the "unaligned" side. Here, both parties agree on where they stand vis-a-vis C and D, and so would be "Antagonistic" in any discussion of those options, but since nobody is proposing them, the conversation they actually have shouldn't be characterized as such.
On the contrary, I'd say internet forum debating is a central example of what I'm talking about.
This "trying to convince" is where the discussion will inevitably lead, at least if Alice and Bob are somewhat self-aware. After the object-level issues have been tabled and the debate is now about whether Alice is really on Bob's side, Bob will view this as just another sophisticated trick by Alice. In my experience, Bob-as-the-Mule can only be dislodged when someone other than Alice comes along, who already has a credible stance of sincere friendship towards him, and repeats the same object-level points that Alice made. Only then will Bob realize that hi...
#1 - I hadn't thought of it in those terms, but that's a great example.
#2 - I think this relates to the involvement of the third-party audience. Free speech will be "an effective arena of battle for your group" if you think the audience will side with you once they learn the truth about what [outgroup] is up to. Suppose Alice and Bob are the rival groups, and Carol is the audience, and:
If this is really what's going on, Alice will be in favo...
I think this is not a great example because the virtues being extolled here are orthogonal to the outcome.
Would it still be possible to explain these virtues in a consequentialist way, or is it only some virtues that can be explained in this way?
And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I'm not sure what the conflict between virtue ethics and consequentialism would be here.
The special difficulty here is that the two sides are following the same virtue-ethics framework, a...
It could be that people regard the likelihood of being resurrected into a bad situation (e.g. as a zoo exhibit, a tortured worker em, etc.) as outweighing that of a positive outcome.
Aren't there situations (at least in some virtue-ethics systems) where it's fundamentally impossible to reduce (or reconcile) virtue-ethics to consequentialism because actions tending towards the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)
For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, ev...
It's also nice to be able to charge up in a place where directly plugging in your device would be inconvenient or would risk theft, e.g. at a busy cafe where the only outlet is across the room from your table.
I want to say something like: "The bigger N is, the bigger a computer needs to be in order to implement that prior; and given that your brain is the size that it is, it can't possibly be setting N=3↑↑↑↑↑3."
Now, this isn't strictly correct, since the Solomonoff prior is uncomputable regardless of the computer's size, etc. - but is there some kernel of truth there? Like, is there a way of approximating the Solomonoff prior efficiently, which becomes less efficient the larger N gets?
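To gesture at what I mean, here's a toy, resource-bounded version (the micro-machine, its opcodes, and both caps are inventions of mine for illustration, not any standard construction): every program of length k that halts contributes weight 2^-k to whatever it outputs, and the two caps are exactly what make the thing computable.

```python
from itertools import product

MAX_LEN = 12     # cap on program length (this is where "computer size" bites)
MAX_STEPS = 64   # cap on execution steps

def run(program, max_steps=MAX_STEPS):
    """Toy machine: read the program two bits at a time.
    00 -> emit 0, 01 -> emit 1, 10 -> jump back to the start, 11 -> halt."""
    out, pc, steps = [], 0, 0
    while pc + 1 < len(program) and steps < max_steps:
        op = (program[pc], program[pc + 1])
        if op == (0, 0):
            out.append(0)
        elif op == (0, 1):
            out.append(1)
        elif op == (1, 0):
            pc = -2  # advanced back to 0 below
        else:  # (1, 1): halt cleanly
            return tuple(out)
        pc += 2
        steps += 1
    return None  # ran off the end or exceeded the step budget

def approx_prior():
    """Sum 2^-|p| over every cleanly-halting program p, bucketed by output."""
    prior = {}
    for k in range(2, MAX_LEN + 1, 2):
        for prog in product((0, 1), repeat=k):
            result = run(prog)
            if result is not None:
                prior[result] = prior.get(result, 0.0) + 2.0 ** -k
    return prior

for s, w in sorted(approx_prior().items(), key=lambda kv: -kv[1])[:5]:
    print("".join(map(str, s)) or "(empty)", w)
```

The point of the sketch: any hypothesis that actually has to compute with N must fit inside both budgets, and while 3↑↑↑↑↑3 is short to write down, computing with it blows past any physically realizable step budget, which seems like the relevant sense of "less efficient the larger N gets".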
I'm unsure whether it's a good thing that LLaMA exists in the first place, but given that it does, it's probably better that it leak than that it remain private.
What are the possible bad consequences of inventing LLaMA-level LLMs? I can think of three. However, #1 and #2 are of a peculiar kind where the downsides are actually mitigated rather than worsened by greater proliferation. I don't think #3 is a big concern at the moment, but this may change as LLM capabilities improve (and please correct me if I'm wrong in my impression of current capabilities).
One time, a bunch of particularly indecisive friends had started an email thread in order to arrange a get-together. Several of them proposed various times/locations but nobody expressed any preferences among them. With the date drawing near, I broke the deadlock by saying something like "I have consulted the omens and determined that X is the most auspicious time/place for us to meet." (I hope they understood I was joking!) I have also used coin-flips or the hash of an upcoming Bitcoin block for similar purposes.
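For the block-hash version, a minimal sketch (the helper and salt are just illustrative names of mine): everyone agrees in advance on a future block height, and once that block is mined, its hash is an unpredictable, publicly verifiable random value that anyone can check.

```python
import hashlib

def pick_option(options, block_hash_hex, salt="picnic-spot"):
    """Map a publicly verifiable block hash to one of the options."""
    # Salting with the decision's name lets one block hash settle several
    # independent choices without correlating them.
    digest = hashlib.sha256((block_hash_hex + salt).encode()).hexdigest()
    # Modulo bias is negligible: the digest space is astronomically larger
    # than any realistic list of options.
    return options[int(digest, 16) % len(options)]

# Made-up hash for illustration; in practice you'd read the real hash of
# the agreed-upon block off any block explorer.
print(pick_option(["cafe", "park", "library"], "00000000000000000002b5..."))
```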
I think the sociological dynamic is somethi...
I use the same strategy sometimes for internal coordination. Sometimes when I have a lot of things to do, I tend to get overwhelmed, freeze, and do nothing instead.
A way for me to get out of this state is to write down 6 things that I could do, throw a die, and start with the action corresponding to the roll!
This may shed some light onto why people have fun playing the Schelling game. It's always amusing when I discover how uncannily others' thoughts match my own, e.g. when I think to myself "X! No, X is too obscure, I should probably say the more common answer Y instead", and then it turns out X is the majority answer after all.
What exactly did you do with the candles? I've seen pictures and read posts mentioning the fact that candles are used at solstice events, but I'm having trouble imagining how it works without being logistically awkward. E.g.:
I wrote up the following a few weeks ago in a document I shared with our solstice group, which seems to independently parallel G Gordon Worley III's points:
To- | morrow can be brighter than [1]
to- | day, although the night is cold [2]
the | stars may seem so very far
a- | way... [3]
But | courage, hope and reason burn,
in | every mind, each lesson learned, [4]
[5] | shining light to guide our way,
[6] | make tomorrow brighter than [7]
to- | day....
...
- It's weird that the comma isn't here, but rather 1 beat later.
- The unnecessary syncopation on "night is cold" is al
I think most non-experts still have only a vague understanding of what cryptocurrency actually is, and just mentally lump together all related enterprises into one big category - which is reinforced by the fact that people involved in one kind of business will tend to get involved in others as well. FTX is an exchange, Alameda is a fund, and FTT is a currency, and each of these things could theoretically exist apart from the others, but a layperson will point at all of them and say "FTX" in the same way as one might refer to a PlayStation console as "the N...
Meta question: What do you think of this style of presenting information? Is it useful?
The more resources people in a community have, the easier it is for them to run events that are free for the participants. The tech community has plenty of money and therefore many tech events are free.
This applies to "top-down funded" events, like a networking thing held at some tech startup's office, or a bunch of people having their travel expenses paid to attend a conference. There are different considerations with regard to ideological messages conveyed through such events (which I might get into in another post), but this is different from the cen...
This is a fair point but I think not the whole story. The events that I'm used to (not just LW and related meetups, but also other things that happen to attract a similar STEM-heavy crowd) are generally held in cafes/bars/parks where nobody has to pay anything to put on the event, so it seems like financial slack isn't a factor in whether those events happen or not.
Could it be an issue of organizers' free time? I don't think it's particularly time-consuming to run a meetup, especially if you're not dealing with money and accounting, though I could be wrong...
Really helpful to hear an on-the-ground perspective!
(I do live in America - Austin specifically.)
I don't think this issue is specific to spirituality; these are just the most salient examples I can think of where it's been dealt with for a long time and explicitly discussed in ancient texts. (For a non-spiritual example, according to Wikipedia the Platonic Academy didn't charge fees either, though I doubt they left any surviving writings explaining why.)
How would you respond to someone who says "I can easily pay the recommended donation of $20 but I don't ...
You are forced to trust what others tell you.
The difference between fiction and non-fiction is that non-fiction at least purports to be true, while fiction doesn't. I can decide whether I want to trust what Herodotus says, but it's meaningless to speak of "trusting" the Sherlock Holmes stories because they don't make any claims about the world. Imagining that they do is where the fallacy comes in.
For example, kung-fu movies give a misleading impression of how actual fights work, not because the directors are untrustworthy or misinformed, but because it's more fun than watching realistic fights, and they're optimizing for that, not for realism.
If you categorically don’t pay people who are purveyors of values, then you are declaring that you want nobody to be a purveyor of values as their full-time job.
Would this really be a bad thing? The current situation seems like a defect/defect equilibrium - I want there to be full-time advocates for Good Values, but only to counteract all the other full-time advocates for Bad Values. It would be better if we could just agree to ratchet down the ideological arms race so that we can spend our time on more productive, non-zero-sum activities.
But unlike ...
OK, so if I understand this correctly, the proposed method is:
(Edit: I suppose it's simpler to just multiply each contestant's probabilities together, and distribute the award in proportion to that product.)
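Concretely, a sketch of that edit (the helper name and example numbers are mine): each contestant's score is the product of the probabilities they assigned to the outcomes that actually happened, i.e. the likelihood of their whole forecast, and the award is split pro rata.

```python
from math import prod

def distribute_award(forecasts, award):
    """Split `award` in proportion to the product of each contestant's
    probabilities for the outcomes that actually happened."""
    scores = {name: prod(ps) for name, ps in forecasts.items()}
    total = sum(scores.values())
    return {name: award * s / total for name, s in scores.items()}

# Two questions resolved; Alice gave the true outcomes 0.8 and 0.6,
# Bob gave them 0.5 and 0.5.
print(distribute_award({"Alice": [0.8, 0.6], "Bob": [0.5, 0.5]}, 100.0))
# -> Alice ~65.75 (score 0.48), Bob ~34.25 (score 0.25)
```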
I have a vague memory of a dream which had a lasting effect on my concept of personal identity. In the dream, there were two characters who each observed the same event from different perspectives, but were not at the time aware of each other's thoughts. However, when I woke up, I equally remembered "being" each of those characters, even though I also remembered that they were not the same person at the time. This showed me that it's possible for two separate minds to merge into one, and that personal identity is not transitive.
See also Newcomblike problems are the norm.
When I discuss this with people, the response is often something like: My value system includes a term for people other than myself - indeed, that's what "morality" is - so it's redundant / double-counting to posit that I should value others' well-being also as an acausal "means" to achieving my own ends. However, I get the sense that this disagreement is purely semantic.
Hint:
It's a character from a movie.
It turns out Japanese words are really useful for filling in crosswords, since they have so many vowels.
Well done! You solved this faster than I expected.
If the cryptography example is too distracting, we could instead imagine a non-cryptographic means to the same end, e.g. printing the surveys on leaflets which the employees stuff into envelopes and drop into a raffle tumbler.
The point remains, however, because (just as with the blinded signatures) this method of conducting a survey is very much outside-the-norm, and it would be a drastic world-modeling failure to assume that the HR department actually considered the raffle-tumbler method but decided against it because they secretly do want to deanonymize ...
You mention "Infra-Bayesianism" in that Twitter thread - do you think that's related to what I'm talking about here?
This is interesting, because it seems that you've proved the validity of the "Strong Adversarial Argument", at least in a situation where we can say:
This event is incompatible with XYZ, since Y should have been called.
In other words, we can use the Adversarial Argument (in a normal Bayesian way, not as an acausal negotiation tactic) when we're in a setting where the rule against hearsay is enforced. But what reason could we have had for adopting that rule in the first place? It could not have been because of the reasoning you've laid out here, which pr...
To make it slightly more concrete, we could say: one copy is put in a red room, and the other in a green room; but at first the lights are off, so both rooms are pitch black. I wake up in the darkness and ask myself: when I turn on the light, will I see red or green?
There’s something odd about this question. “Standard LessWrong Reductionism” must regard it as meaningless, because otherwise it would be a question about the scenario that remains unanswered even after all physical facts about it are known, thus refuting reductionism. But from the perspective ...
Thinking more about this:
- Is it possible to get good at this game?
- Does this game teach any useful skills?
I don't think there's a generalized skill of being good at this game as such, but you can get good at it when playing with a particular group, as you become more familiar with their thought processes. Playing the game might not develop any individual's skills, but it can help the group as a whole develop camaraderie by encouraging people to make mental models of each other.
I've played a variant like this before, except that only one clue would be active at once - if the clue is neither defeated nor contacted within some amount of time, then we'd move on to another clue, but the first clue can be re-asked later. The amount of state seemed manageable for roadtrips/hikes/etc.
Maybe we are anthropically more likely to find ourselves in places with low Kolmogorov-complexity descriptions. ("All possible bitstrings, in order" is not a good law of physics just because it contains us somewhere).
Another way of thinking about this, which amounts to the same thing: Holding the laws of physics constant, the Solomonoff prior will assign much more probability to a universe that evolves from a minimal-entropy initial state, than to one that starts off in thermal equilibrium. In other words:
Here's the way I understand it: A low-entropy state takes fewer bits to describe, and a high-entropy state takes more. Therefore, a high-entropy state can contain a description of a low-entropy state, but not vice-versa. This means that memories of the state of the universe can only point in the direction of decreasing entropy, i.e. into the past.
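A quick way to see the "fewer bits" point concretely (this illustration is mine, using an off-the-shelf compressor as a rough stand-in for description length):

```python
import os
import zlib

ordered = bytes(range(256)) * 256    # 64 KiB with an obvious pattern
disordered = os.urandom(256 * 256)   # 64 KiB of near-maximal entropy

print(len(zlib.compress(ordered)))     # a few hundred bytes
print(len(zlib.compress(disordered)))  # ~64 KiB: essentially incompressible
```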
I think the "normal items that helped" category is especially important, because it's costly in terms of money, time, and space to get prepper gear specifically for the whole long tail of possible disasters. If resources are limited, then it's best to focus on buying things that are both useful in everyday life and also are the general kind-of-thing that's useful in disaster scenarios, even if you can't specifically anticipate how.
Good to know that this was useful. I hadn't thought of this meetup as "journalism," but I suppose it was in a sense.
Same here.
You may be right... I just need a rough headcount now, so if you want to take time to ponder the team name feel free to leave it blank now and then submit the form again later with your suggestion. (Edited the form to say so.)
I'm trying to wrap my head around this. Would the following be an accurate restatement of the argument?
I’d suggest that even a counterfactual $100 donation to charity failing to occur would feel more significant than the frontpage going down for a day.
This suggests an interesting idea: A charity drive for the week leading up to Petrov Day, on condition that the funds will be publicly wasted if anyone pushes the button (e.g. by sending bitcoin to a dead-end address, or donating to two opposing politicians' campaigns).
If the space of possibilities is not arbitrarily capped at a certain length, then such a distribution would have to favor shorter strings over longer ones in much the same way as the Solomonoff prior over programs (because if it doesn't, then its sum will diverge, etc.). But then this yields a prior that is constantly predicting that the universe will end at every moment, and is continually surprised when it keeps on existing. I'm not sure if this is logically inconsistent, but at least it seems useless for any practical purpose.
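To spell out the divergence point (this formalization is mine, and only a sketch): write f(n) for the total mass the prior puts on strings of length exactly n.

```latex
% f(n) := total prior mass on strings of length exactly n.
\sum_{n=0}^{\infty} f(n) = 1 \quad\Longrightarrow\quad f(n) \to 0.
% There are 2^n strings of length n, so with mass spread roughly evenly
% within each length, a single string of length n gets about
P(s) \approx f(n) \, 2^{-n},
% i.e. at-least-exponential decay: the Solomonoff-like bias toward
% shorter strings. And the probability that the "universe" ends at step
% n, given that it has lasted that long, is the hazard rate
h(n) = \frac{f(n)}{\sum_{m \ge n} f(m)} > 0 \quad \text{wherever } f(n) > 0,
% which is the sense in which the prior predicts an ending at every step.
```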