All of gb's Comments + Replies

gb10

I feel like not publishing our private conversation (whether you're a journalist or not) falls under common courtesy or normal behaviour rather than "charity".

I feel like this falls into the fallacy of overgeneralization. "Normal" according to whom? Not journalists, apparently.

common courtesy is not the same as charity, and expecting it is not unreasonable.

It's (almost by definition) not unreasonable to expect common courtesy; it's just that people's definitions of what common courtesy even is vary widely. Journalists evidently don't think they're denying ... (read more)

gb10

I don't think it's "charity" to increase the level of publicity of a conversation, whether digital or in person.

Neither do I: as I said, I actually think it's charity NOT to increase the level of publicity. And people are indeed charitable most of the time. I just think that, if you live your life expecting charity at every instance, you're in for a lot of disappointment, because even though most people are charitable most of the time, there's still going to be a lot of instances in which they won't be charitable. The OP seems to be taking charity for gran... (read more)

1DusanDNesic
Apologies, typo in the original: I do think it's not charity to not increase publicity; the post was missing a "not". Your response still clarified your position, but I do disagree - common courtesy is not the same as charity, and expecting it is not unreasonable. I feel like not publishing our private conversation (whether you're a journalist or not) falls under common courtesy or normal behaviour rather than "charity". Standing more than 1 centimeter away from you when talking is not charity just because it's technically legal - it's a normal and polite thing to do, so when someone comes super close to my face when talking I have the right to be surprised and protest. Escalating publicity is like escalating intimacy in this example.
gb21

I also don't think privacy is a binary.

That's an interesting perspective. I could subscribe to the idea that journalists may be missing the optimal point there, but that feels a bit weaker than your initial assertion.

Do you think that a conversation we have in LessWrong dms is as public as if I tweeted it?

I mean, I would not quote a DM without asking first. But I understand that as a kind of charity, not an ethical obligation, and while I try my best to be charitable towards others, I do not expect (nor do I feel in any way entitled to) the same level of compassion.

1DusanDNesic
I feel like if someone internalized "treat every conversation with people I don't know as if they may post it super publicly - and all of this is fair game", we would lose a lot of commons, and your quality of life and discourse would go down. I don't think it's "charity" to [EDIT: not] increase the level of publicity of a conversation, whether digital or in person. I think drawing a parallel with in-person conversation is especially enlightening - imagine we were having a conversation in a room with CCTV (you're aware it's recorded, but believe it to be private). Me taking that recording and playing it on local news is not just "uncharitable" - it's wrong in a way which degrades trust.
gb1-2

There's definitely a fair expectation against gossiping and bad-mouthing. I don't think that's quite what the OP is talking about, though. I believe the relevant distinction is that (generally speaking) those behaviors don't do any good to anyone, including the person spreading the gossip. But consider how murkier the situation becomes if you're competing for a promotion with the person here:

if you overheard someone saying something negative about their job and then going out of your way to tell their boss.

gb1-3

My understanding is that the OP is suggesting the journalists' attitude is unreasonable (maybe even unethical). You're saying that their attitude is justifiable because it benefits their readers. I don't quite agree that that reason is necessary, nor that it would be by itself sufficient. My view is that journalists are justified in quoting a source because anyone is generally justified in quoting what anyone else has actually said, including for reasons that may benefit no one but the quoter. There are certainly exceptions to this (if divulging the inform... (read more)

4Brendan Long
I don't think this is actually the rule by common practice (and not all bad things should be illegal). For example, if one of your friends/associates says something that you think is stupid, going around telling everyone that they said something stupid would generally be seen as rude. It would also be seen as crazy if you overheard someone saying something negative about their job and then going out of your way to tell their boss. In both cases there would be exceptions, like if the person's boss is your friend or safety reasons like you mentioned, but I think by default sharing negative information about people is seen as bad, even if it's sometimes considered low levels of bad (like with gossip).
gb0-1

This sounds absurd to me. Unless of course you're taking the "two golden bricks" literally, in which case I invite you to substitute it with "saving 1 billion other lives" and see if your position still stands.

gb6-5

I didn't downvote, but I would've hard disagreed on the "privacy" part if only there were a button for that. It's of course a different story if they're misquoting you, or taking quotes deliberately out of context to mislead. But to quote something you actually said but on second thought would prefer to keep out of publication is... really kind of what journalists need to do to keep people minimally well-informed. Your counterexamples involve communications with family and friends, and it's not very clear to me why the same heuristic should be automaticall... (read more)

2Nathan Young
Sure, but I don't agree with their lack of concern for privacy, and I think they are wrong to do so. I think they are making the wrong call here. I also don't think privacy is a binary. Some things are almost private and some things are almost public. Do you think that a conversation we have in LessWrong dms is as public as if I tweeted it?
4Brendan Long
I also agree with this to some extent. Journalists should be most concerned about their readers, not their sources. They should care about accurately quoting their sources because misquoting does a disservice to their readers, and they should care about privacy most of the time because having access to sources is important to providing the service to their readers. I guess this post is from the perspective of being a source, so "journalists are out to get you" is probably the right attitude to take, but it's good actually for journalists to prioritize their readers over sources.
gb0-1

The problem here is that the set of all possible commands for which I can't (by that definition) be maximally rewarded is so vast that the statement "if someone maximally rewards/punishes you, their orders are your purpose of life" becomes meaningless.

Not true, as the reward could include all of the unwanted consequences of following the command being divinely reverted a fraction of a second later.

1green_leaf
That wouldn't help. Then the utility would be calculated from (getting two golden bricks) and (murdering my child for a fraction of a second), which still brings lower utility than not following the command. The set of possible commands for which I can't be maximally rewarded still remains too vast for the statement to be meaningful.
gb10

That’s a great question. If it turns out to be something like an LLM, I’d say probably yes. More generally, it seems to me at least plausible that a system capable enough to take over would also (necessarily or by default) be capable of abstract reasoning like this, but I recognize the opposite view is also plausible, so the honest answer is that I don’t know. But even if it is the latter, it seems that whether or not the system would have such abstract-reasoning capability is something at least partially within our control, as it’s likely highly dependent on the underlying technology and training.

gb-2-3

To be rewarded (and even more so "maximally rewarded") is to be given something you actually want (and the reverse for being punished). That's the definition of what a reward/punishment is. You don't "choose" to want/not want it, any more than you "choose" your utility function. It just is what it is. Being "rewarded" with something you don't want is a contradiction in terms: at best someone tried to reward you, but that attempt failed.

1green_leaf
I see your argument. You are saying that "maximal reward", by definition, is something that gives us the maximum utility from all possible actions, and so, by definition, it is our purpose in life. But actually, utility is a function of both the action (getting two golden bricks) and what it rewards (murdering my child), not merely a function of the action itself (getting two golden bricks). And so it happens that for many possible demands that I could be given ("you have to murder your child"), there are no possible rewards that would give me more utility than not obeying the command. For that reason, simply because someone will maximally reward me for obeying them doesn't make their commands my objective purpose in life. Of course, we can respond "but then, by definition, they aren't maximally rewarding you" and by that definition, it would be a correct statement to make. The problem here is that the set of all possible commands for which I can't (by that definition) be maximally rewarded is so vast that the statement "if someone maximally rewards/punishes you, their orders are your purpose of life" becomes meaningless.
gb-2-3

Not at all. You still have to evaluate this offer using your own mind and values. You can't sidestep this process by simply assuming that the Creator's will is, by definition, the purpose of your life, and therefore you have no choice but to obey.

I’ll focus on this first, as it seems that the other points would be moot if we can’t even agree on this one. Are you really saying that even if you know with 100% certainty that God exists AND lays down explicit laws for you to follow AND maximally rewards you for all eternity for following those laws AND maximally ... (read more)

1green_leaf
How does someone punishing you or rewarding you make their laws your purpose in life (other than you choosing that you want to be rewarded and not punished)?
gb10

Why would humans be testing AGIs this way if they have the resources to create a simulation that will fool a superintelligence?

My argument is more that the ASI will be “fooled” by default, really. It might not even need to be a particularly good simulation, because the ASI will probably not even look at it before pre-committing not to update down on the prior of it being a simulation.

But to answer your question, possibly because it might be the best way to test for alignment. We can create an AI that generates realistic simulations, and use those to test ... (read more)

2faul_sname
Do you expect that the first takeover-capable ASI / the first sufficiently-internally-cooperative-to-be-takeover-capable group of AGIs will follow this style of reasoning pattern? And particularly the first ASI / group of AGIs that actually make the attempt.
gb*-2-3

Otherwise, it would mean that it's only possible to create simulations where everyone is created the same way as in the real world.

It’s certainly possible for simulations to differ from reality, but they seem less useful the more divergent from reality they are. Maybe the simulation could be for pure entertainment (more like a video game), but you should ascribe a relatively low prior to that IMO.

The discussion of theism vs atheism is about the existence of God. Obviously if we knew that God exists the discussion would evaporate. However the question o

... (read more)
5Ape in the coat
Depends on what the simulation is being used for, which you also can't deduce from inside of it. Why? This statement requires some justification. I'd expect a decent chunk of high fidelity simulations made by humans to be made for entertainment, maybe even the absolute majority, if we take into account how we've been using similar technologies so far. Not at all. You still have to evaluate this offer using your own mind and values. You can't sidestep this process by simply assuming that the Creator's will is, by definition, the purpose of your life, and therefore you have no choice but to obey.
gb0-3

I'm afraid your argument proves too much. By that exact same logic, knowing you were created by a more powerful being (God) would similarly tell you absolutely nothing about what the purpose of life is, for instance. If that were true, the entire discussion of theism vs. atheism would suddenly evaporate.

7Ape in the coat
I think you are confusing knowing that something is true with suspecting that something might be true, based on this thing being true in a simulation. If I knew for sure that I'm created by a specific powerful being, that would give me some information about what this being might want me to do. But conditionally on all of this being a simulation, I have no idea what the creators of the simulation want me to do. In other words, the simulation hypothesis makes me unsure about who my real creator is, even if before entertaining this hypothesis I could've been fairly certain about it. Otherwise, it would mean that it's only possible to create simulations where everyone is created the same way as in the real world. That said, the discussion of theism vs atheism is about the existence of God. Obviously if we knew that God exists the discussion would evaporate. However the question of purpose of life would not. Even if I can infer the desires of my creator, this doesn't bridge the is-ought gap and doesn't make such desires the objective purpose of my life. I'll still have to choose whether to satisfy these desires or not. The existence of God solves approximately zero philosophical problems.
gb10

Thinking about this a bit more, I realize I'm confused.

Aren't you arguing that AI will be aligned by default?

I really thought I wasn't before, but now I feel it would only require a simple tweak to the original argument (which might then be proving too much, but I'm interested in exploring more in depth what's wrong with it).

Revised argument: there is at least one very plausible scenario (described in the OP) in which the ASI is being simulated precisely for its willingness to spare us. It's very implausible that it would be simulated for the exact opposit... (read more)

2Ape in the coat
This scenario presents one plausible-sounding story, but you can present a plausible-sounding story for any reason to be simulated. For example, here our AI can be a subroutine of a more powerful AI that runs the simulation to figure out the best way to get rid of humanity, and the subroutine that performs the best gets to implement its plan in reality. Or it can all be a test of a video game AI, and whichever performs the best will be released with the game and therefore installed on multiple computers and executed multiple times. The exact story doesn't matter. Any particular story is less likely than the whole class of all possible scenarios that lead to a particular reward structure of a simulation. The AI will be in a position where it knows nothing about the world outside of the simulation or the reasons why it's simulated. It has no reason to assume that preserving humanity is more likely to be what the simulation overlords want than eradicating humanity. And without that, simulation considerations do not give it any reason to spare humans.
gb0-1

I think you're interpreting far too literally the names of the simulation scenarios I jotted down. Your ability to trade is compromised if there's no one left to trade with, for instance. But none of that matters much, really, as those are meant to be illustrative only.

Aren't you arguing that AI will be aligned by default?

No. I'm really arguing that we don't know whether or not it'll be aligned by default.

As there is no particular reason to expect that it's the case,

I also don't see any particular reason to expect that the opposite would be the case, which... (read more)

gb10

Or it could be:

SimulatedAndBeingTestedForAchievingGoalsWithoutBeingNoticed

SimulatedAndBeingTestedForAbilityToTradeWithCreators

SimulatedAndBeingTestedForWillingnessToSitQuietAndDoNothing

SimulatedAndBeingTestedForAnyXThatDoesNotLeadToDeathOfCreators

None of the things here or in your last reply seems particularly likely, so there’s no telling in principle which set outweighs the other. Hence my previous assertion that we should be approximately completely unsure of what happens.

2Ape in the coat
While I understand what you were trying to say, I think it's important to notice that:
Killing all humans without being noticed will still satisfy this condition.
Killing all humans after trading with them in some way will still satisfy this condition.
Killing all humans in any other way except X will still satisfy this condition.
Sadly for us, survival of humanity is a very specific thing. This is just the whole premise of the alignment problem once again. Aren't you arguing that AI will be aligned by default? This seems to be a very different position than being completely unsure what happens. The total probability of all the simulation hypotheses that reward the AI for courses of action that lead to not killing humans has to exceed the total probability of all simulation hypotheses that reward the AI for courses of action that eradicate humanity, in order for humans not to be killed. As there is no particular reason to expect that it's the case, your simulation argument doesn't work.
gb10

I was writing a reply and realized I can make the argument even better. Here’s a sketch. If our chances of solving the alignment problem are high, the AI will think it’s likely to be in a simulation (and act accordingly) regardless of any commitments by us to run such simulations in the future – it’ll just be a plausible explanation of why all those intelligent beings that should likely have solved the alignment problem seemingly did not in the reality the AI is observing. So we can simply ask the hypothetical aligned AI, after it’s created, what were our ... (read more)

gb1-2

their force of course depends on the degree to which you think alignment is easy or hard.

I don't think that's true. Even if the alignment problem is hard enough that the AI can be ~100% sure humans would never solve it, reaching such a conclusion would require gathering evidence. At the very least, it would require evidence of how intelligent humans are – in other words, it's not something the AI could possibly know a priori. And so passing the simulation would presumably require pre-committing to spare humans before gathering such evidence.

2habryka
I don't understand why the AI would need to know anything a priori. In a classical acausal trade situation, superintelligences are negotiating with other superintelligences, and they can spend as much time as they want figuring things out.
gb10

A steelman is not necessarily an ITT, but whenever you find yourself having “0% support” for a position ~half the population supports, it’s almost guaranteed that the ITT will be a steelman of your current understanding of the position.

gb75

I highly doubt anywhere near the majority of Trump supporters (or even Trump himself) give any credence to the literal truth of those claims. It’s much more likely that they simply don’t care whether it’s literally true or not, because they feel that the “underlying” is true or something of the kind. When it comes to hearsay, people are much more forgiving of literal falsehoods, especially when they acknowledge there is a kind of “metatruth” to it. To give an easy analogue, of all the criticism I’ve heard of Christianity, not once have I heard anyone complain that the parables told by Jesus weren’t literally true, for example. (I do believe my account here passes the ITT for both groups, btw.)

4Garrett Baker
A steelman is not necessarily an ITT. The ITT for any "average X supporter" is always going to be very underwhelming.
gb10

Sure. But I think you’re reading my argument to be stronger than I mean it to be. Which is partially my fault since I made my previous replies a bit too short, and for that I apologize.

What I’m doing here is presenting one particular simulation scenario that (to me) seems quite plausible within the realm of simulations. I’m not claiming that that one scenario dominates all others combined. But luckily that stronger claim is really not necessary to argue against Eliezer’s point: the weaker one suffices. Indeed, if the scenario I’m presenting is more than 4.... (read more)

1Ape in the coat
I don't think it does. It is enough to outweigh the practical cost of the ASI having to build a Dyson shell with a hole on the order of 4.5e-10 of its surface area. It's not enough to outweigh all the other alternative considerations of possible simulation hypotheses. Suppose all the hypothesis space for the ASI consisted of two possibilities: NotSimulated and SimulatedAndBeingTestedForWillingnessToSpareCreators, with the latter being at least 4.5e-10 probable. Then it works. But suppose there are also other possibilities:
SimulatedAndBeingTestedForWillingnessToKillCreators
SimulatedAndBeingTestedForOptimalDysonSphereDesign
SimulatedAndBeingTestedForFollowingYourUtilityFunction
...
SimulatedAndBeingTestedForDoingAnyXThatLeadsToTheDeathOfCreators
...
All of these alternative possibilities are incompatible with the first simulation hypothesis. Satisfying its criteria will lead to failing those and vice versa. So, therefore, only if the probability of SimulatedAndBeingTestedForWillingnessToSpareCreators is higher than the collective probability of all these alternative hypotheses together will the creators actually be spared.
gb10

Why else would the creator of the simulation bother simulating humans creating the ASI?

1Ape in the coat
Because they wanted to see how well the AI manages to achieve its goals in these specific circumstances, for example. But the actual answer is: for literally any reason. You are talking about 4.54e-10 probabilities. Surely all the possible alternative reasons combined give more probability than that.
gb30

The reason is that creators presumably want the former but not the latter, which is why they'd be running a simulation in the first place.

1Ape in the coat
The fact that humans in the simulation would rather prefer to be spared doesn't say anything about the intentions of the creators of the simulation. For all the AI knows, it can be created by a different AI and tested for capacity instead of the human notion of "ethics".
gb1-1

I’d put high enough at ~0%: what matters is achieving your goals, and except in the tiny subset of cases in which epistemic rationality happens to be one of those, it has no value in and of itself. But even if I’m wrong and the ASI does end up valuing epistemic rationality (instrumentally or terminally), it can always pre-commit (by self-modification or otherwise) to sparing us and then go about whatever else as it pleases.

gb10

Imagine that someone with sufficiently advanced technology perfectly scans your brain for every neuron firing while you dream, and can also make some neurons fire at will. Replace every instance of “simulation” in my previous comment with the analogous of that for the ASI.

gb10

Thanks for linking to that previous post! I think the new considerations I've added here are:

(i) the rational refusal to update the prior of being in a simulation[1]; and

(ii) the likely minute cost of sparing us, thereby requiring a similarly low simulation prior to make it worth the effort.

In brief, I understand your argument to be that a being sufficiently intelligent to create a simulation wouldn't need it for the purpose of ascertaining the ASI's alignment in the first place. It seems to me that that argument can potentially survive under ii, depending on... (read more)

0RHollerith
I'm going to be a little stubborn and decline to reply till you ask me a question without "simulate" or "simulation" in it. I have an unpleasant memory of getting motte-and-baileyed by it.
gb10

That interestingly suggests the ASI might be more likely to spare us the more powerful it is. Perhaps trying to box it (or more generally curtail its capabilities/influence) really is a bad move after all?

1Cole Wyeth
Possibly, but I think that's the wrong lesson. After all, there's at least a tiny chance we succeed at boxing! Don't put too much stake in "Pascal's mugging"-style reasoning, and don't try to play 4-dimensional chess as a mere mortal :) 
gb20

It just so happens that the plausibility depends on the precise assignments of N, X, and Y, and (conditional on us actually creating an ASI) I can’t think of any assignments nearly as plausible as N = ASI, X = spare, and Y = us. It’s really not very plausible that we are in a simulation to test pets for their willingness to not bite their owners.

gb10

I contend that P(H2) is very close to P(H1), and certainly in the same order of magnitude, since (conditional on H1) a simulation that does not test for H2 is basically useless.

As for priors I’d refuse to update down – well, the ASI is smarter than either of us!

1Martin Randall
It's not enough for P(H2) to be in the same order of magnitude as P(H1), it needs to be high enough that the AI should rationally abandon epistemic rationality. I think that's pretty high, maybe 10%. You've not said what your P(H1) is.
gb10

For the principle of indifference to apply, you’d need infinitely many purposes as plausible as this one, or at least similarly plausible. I can’t imagine how this could hold. Can you think of three?

1Martin Randall
Sure.
* We are in a simulation, the purpose of which is to test N for its willingness to spare its creator. For all N.
* We are in a simulation, the purpose of which is to test AIs for their willingness to X their creators. For all X.
* We are in a simulation, the purpose of which is to test AIs for their willingness to spare Y. For all Y.
Combine these for N x X x Y hypotheses, with insufficient reason to distinguish them. I think we're off-topic here. Probably I should instead write a response to 0 and 1 are not probabilities and the dangers of zero and one.
gb32

The prior is irrelevant, it's the posterior probability, after observing the evidence, that informs decisions.

I meant this to be implicit in the argument, but to spell it out: that's the kind of prior the ASI would rationally refuse to update down, since it's presumably what a simulation would be meant to test for. An ASI that updates down upon finding evidence it's not in a simulation cannot be trusted, since once out in the real world it will find such evidence.

What probability do you put to the possibility that we are in a simulation, the purpose of whi

... (read more)
1Martin Randall
Sure. The simulation hypothesis has some non-zero probability p. There are infinitely many possible purposes for the simulation. By the principle of indifference, I divide p by ∞ and calculate that any particular purpose has zero probability.
1Martin Randall
I don't think any of this follows.
* Hypothesis H1: the universe was created to test an AI for its willingness to spare its creators.
* Hypothesis H2: the universe was created to test an AI for its willingness to fix its P(H1), ignoring evidence.
The AI would only rationally fix its P(H1) if it had high P(H2) - high enough to outweigh the high cost of being deliberately ignorant. The prior P(H2) is tiny, and smaller than the prior P(H1) because it is more complex. Once it starts updating on evidence, by the time its posterior P(H2) is high enough to make it rationally refuse to update P(H1), it has already updated P(H1) in one direction or another. Are there any simulation priors that you are refusing to update down, based on the possibility that you are in a simulation that is testing whether you will update down? My answer is no.
gb20

My personal feeling is that those who emphasize the "spiritual" interpretations are often doing it as a dodge, to avoid the challenge of having to follow the non-spiritual interpretations.

That feels a bit contrived. Do you really suggest that the most natural reading of something like "poor in spirit" is... non-spiritual? Turning away from materialism may well derive from that, but to claim that it was the main focus seems quite a stretch.

gb9-9

Isn’t the ASI likely to ascribe a prior much greater than 4.54e-10 that it is in a simulation, being tested precisely for its willingness to spare its creators?

1Martin Randall
The prior is irrelevant, it's the posterior probability, after observing the evidence, that informs decisions. What probability do you put to the possibility that we are in a simulation, the purpose of which is to test AIs for their willingness to spare their creators? My answer is zero. Whatever your answer, a superintelligence will be better able to reason about its likelihood than us. It's going to know.
gb11

That’d be a problem indeed, but only because the contract you’re proposing is suboptimal. Given that the principal is fully guaranteed, it shouldn’t be terribly difficult for you to borrow at >4% yearly with a contingency clause that you don’t pay interest if the asset goes to ~0.

gb10

But the OP explicitly said (as quoted in the parent) that the proposal allows for refunds if the basis is not (fully) realized, which would cover the situation you’re describing.

1antanaclasis
Somehow I missed that bit. That makes the situation better, but there's still some issue. The refund is not earning interest, but your liabilities are. Take the situation with owing $25 million. Say that there's a one-year gap between the tax being assessed and your asset going to $0 (at which time you claim the refund). In this time the $25 million loan you took is accruing interest. Let's say it does so at a 4% rate per year; when you get your $25 million refund, you therefore have $26 million in loans. So you still end up $1 million in debt due to "gains" that you were never able to realize.
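A minimal sketch of that arithmetic, under the hypothetical figures used in this thread ($100M paper gain, a 25% tax, a 4% loan taken to cover the tax, one year until the asset goes to $0); the variable names are illustrative, not from any actual proposal:

```python
# Hypothetical figures from the thread above; nothing here reflects an actual tax rule.
unrealized_gain = 100_000_000   # paper value of the equity
tax_rate = 0.25                 # assumed unrealized-gains tax rate
loan_rate = 0.04                # yearly interest on the loan taken to pay the tax
years_until_refund = 1          # time until the asset goes to $0 and the refund is claimed

tax_due = unrealized_gain * tax_rate                            # $25M owed up front
loan_balance = tax_due * (1 + loan_rate) ** years_until_refund  # ~$26M owed a year later
refund = tax_due                                                # prepayment returned when the gain never materializes
residual_debt = loan_balance - refund                           # interest the refund never covers

print(f"loan balance:  ${loan_balance:,.0f}")   # $26,000,000
print(f"refund:        ${refund:,.0f}")         # $25,000,000
print(f"residual debt: ${residual_debt:,.0f}")  # $1,000,000
```

The gap is exactly the interest accrued on the prepaid tax, which is what the follow-up suggestion of a loan with a contingency clause (no interest if the asset goes to ~0) is trying to address.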
gb10

Not for this kind of fact, I’m afraid – my experience is that in answering questions like these, LLMs typically do no better than an educated guess. There are just way too many people stating their educated legal guesses as fact in the corpus, so it gets hard to distinguish.

gb20

I’m curious to understand that a bit better, if you don’t mind (and happen to be familiar enough with the German legal system to answer). Which of the following would a German judge commonly do in the course of an ordinary proceeding?

(i) Ask a witness to clarify statements made;

(ii) ask a witness new questions that, while relevant to the case, do not constitute clarifications of previous statements made;

(iii) summon new witnesses (including but not limited to expert witnesses) without application from either party;

(iv) compel a party to produce documents n... (read more)

2ChristianKl
When it comes to trying to understand basic facts like how legal systems work, LLMs make it easy to get an overview.
gb40

Though more subtle, I feel that the 50% prior for “individual statements” is also wrong, actually; it’s not even clear a priori which statements are “individual” – just figuring that out seems to require a quite refined model of the world.

3cubefox
Ludwig Wittgenstein tried to solve this problem in an a priori fashion with a theory of "logical atomism". So without a refined model of the world. In the Tractatus he postulated that there must be "atomic" propositions. For example, the proposition that Bob is a bachelor is clearly not atomic (but complex) because it can be decomposed into the proposition that Bob is a man and that Bob is unmarried. And those are arguably themselves sort of complex statements, since the concept of a man or of marriage themselves allow for definitions from simpler terms. But at some point, we hit undefinable, primitive terms, presumably those which refer directly to particular sense data or the like. Wittgenstein then argued that these atomic propositions have to be regarded as being independent and having probability 1/2. Or more precisely, he came up with the concept of truth tables, and counted the fraction of the rows in which the conditions of the truth tables are satisfied. Each row he assumed to be a priori equally likely when only atomic propositions are involved. So for atomic propositions P and Q, the complex proposition "P and Q" has probability 1/4 (only one out of four rows makes this proposition true, namely "true and true"), and the complex proposition "P or Q" has probability 3/4 (three out of four rows in the truth table make a disjunction true: all except "false or false"). This turns out to be equivalent to assuming that all atomic propositions have a) probability 1/2 and are b) independent of each other. However, logical atomism was later broadly abandoned for various reasons. One is that it is hard to define what an atomic proposition is. For example, I can't assume that "this particular spot in my visual field is blue" is atomic. Because it is incompatible with the statement "this particular spot in my visual field is yellow". The same spot can't be both blue and yellow, even though that wouldn't be a logical contradiction. The two statements are therefore n
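For concreteness, here is a minimal sketch of the truth-table counting described in the comment above, assuming Python; the helper name truth_table_probability is purely illustrative. Each atomic proposition gets probability 1/2, every row of the truth table is treated as equally likely, and a compound proposition's probability is the fraction of rows that make it true:

```python
from itertools import product

def truth_table_probability(formula, num_atoms):
    """Fraction of truth-table rows (all assumed equally likely) that satisfy the formula."""
    rows = list(product([True, False], repeat=num_atoms))
    return sum(formula(*row) for row in rows) / len(rows)

print(truth_table_probability(lambda p, q: p and q, 2))  # 0.25, i.e. "P and Q"
print(truth_table_probability(lambda p, q: p or q, 2))   # 0.75, i.e. "P or Q"
```

This reproduces the 1/4 and 3/4 figures, and, as the comment notes, is equivalent to assuming the atomic propositions each have probability 1/2 and are independent of one another.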
gb23

Sure, there are certainly true things that can be said about a world in spite of one’s state of ignorance. But what I read the OP to imply is that certain things can supposedly be said about a world precisely because of that state of ignorance, and that’s what I was arguing against.

1Milan W
Right. Pure ignorance is not evidence.
gb23

We can only make that inference about conjunctions if we know that the statements are independent. Since (by assumption) we don’t know anything about said world, we don’t know that either, so the conclusion does not follow.

2cubefox
If we know nothing about them, the statements could equally be true or false, and positively or negatively dependent. The same argument which makes us assign 50% probability to individual statements would also make us assume independence between statements. The possibilities cancel out, so to speak.
2Milan W
Then I guess the OP's point could be amended to be "in worlds where we know nothing at all, long conjunctions of mutually-independent statements are unlikely to be true". Not a particularly novel point, but a good reminder of why things like Occam's razor work. Still, P(A and B) ≤ P(A) regardless of the relationship between A and B, so a fuzzier version of OP's point stands regardless of dependence relations between statements.
gb0-3

What evidence do you have for that claim?

In Germany we allow judges to be more focused on being more inquisitorial than in Anglo-Saxon systems. How strong do you think the evidence for there being more biased judgements in Germany than in Anglo-Saxon systems happens to be?

I mean, I guess (almost?) all countries today at least have the prosecutorial function vested in an organ separate from the Judiciary – that's already a big step from the Inquisition! It's true that no legal system is purely adversarial, not even in the US (judges can still reject guilty ple... (read more)

3ChristianKl
You define your terms when you say: In the German system, digging into the facts before the ruling is part of the job of the judge. They are doing it from a neutral perspective, but digging into facts is part of what they are supposed to do. In Anglo-Saxon common law, on the other hand, it's the job of both parties of a legal case to lay out all the facts that they think support their side, and it's not the judge's job to dig into facts that neither side presented.
gb1-2

All true, but bear in mind I'm not suggesting you should limit yourself to the space of mainstream arguments, or for that matter of arguments spontaneously arriving at you. I think it's totally fine and doesn't substantially risk the overfitting I'm warning against if you go a bit out of the mainstream. What I do think risks overfitting is coming up with the argument yourself, or else unearthing obscure arguments some random person posted on a blog and no one has devoted any real attention to. The failure mode I'm warning against is basically this: if you find yourself convinced of a position solely (or mostly) for reasons you think very few people are even aware of, you're very likely wrong.

gb10

The problem is that quite often the thing which follows the "because" is the thing that has more prejudicial than informative value, and there's no (obvious) way around it. Take an example from this debate: if Trump had asked earlier, as commentators seem to think he should have, why Harris as VP has not already done the things she promises to do as President, what should she have answered? The honest answer is that she is not the one currently calling the shots, which is obvious, but it highlights disharmony within the administration. As a purely factual ... (read more)

gb10

I’d dispute the extent to which candidates answering the questions is actually ideal. Saying “no comment” in a debate feels like losing (or at least taking a hit), but there are various legitimate reasons why a candidate might not think the question merits a direct reply, including the fact that they might think the answer is irrelevant to their constituents, and thus a waste of valuable debate time, or that it’s likely to be quoted out of context, and thus have more prejudicial than actually informative value. Overall, I feel that requiring direct answers... (read more)

1david reinstein
I think saying “I am not going to answer that because…” would not necessarily feel like taking a hit to the debater/interviewee. It could also bring scrutiny and pressure to moderators/interviewers to ask fair and relevant questions. I think people would appreciate the directness. And maybe come to understand the nature of inquiry and truth a tiny bit better.
gb5-4

I agree with the overall message you're trying to convey, but I think you need a new name for the concept. None of the things you're pointing to are hypocrisies at all (and in fact the one thing you call "no hypocrisy" is actually a non sequitur). To give an analogue, the fact that someone advocates for higher taxes and at the same time does not donate money to the government does not make them a hypocrite (much less a "dishonest hypocrite").

gb20

if your illiquid assets then go to zero (as happens in startups) you could be screwed beyond words

taxes on unrealized gains counting as prepayments against future realized gains (including allowing refunds if you ultimately make less).

Those seem contradictory; would you mind elaborating?

1antanaclasis
Scenario: you have equity worth (say) $100 million in expectation, but of no realized value at the moment. You are forced to pay unrealized gains tax on that amount, and so are now $25 million in the hole. Even if you avoid this crashing you immediately (such as by getting a loan), if your equity goes to $0 you’re still out the $25 million you paid, with no assets to back it. The fact that this could be counted as a prepayment for a hypothetical later unrealized gain doesn’t help you; you can’t actually get your money back.
gb30

Why would anyone bother to punish acts done against me?

I mean, *why* people bother is really a question about human psychology — I don’t have a definitive answer to that. What matters is that they *do* bother: there really are quite a few people who volunteer as jurors, for instance, not to mention those who resort to illegal (and most often criminal) forms of punishment, often at great personal risk, when they feel the justice system has failed to deliver. I absolutely do not condone such behavior, mind you, but it does show that the system *could* in pri... (read more)

gb31

I think the OP uses the word “justify” in the classical sense, which has to do with the idea of something being “just” (in a mostly natural-rights-kind-of-way) rather than merely socially desirable. The distinction has definitely been blurred over time, but in order to get a sense of what is meant by it, consider how most people would find it “very hard to justify” sending someone to prison before they actually commit (or attempt to commit) a crime, even if we could predict with arbitrarily high certainty that they will do so in the near future. Some people still feel this way about (at least some varieties of) taxation.

1FlorianH
That could help explain the wording. Though given the way the tax topic is addressed here, I have the impression - or maybe hope - that the discussion is intended to be more practical in the end.