by LVSN

The Principle of Nameless Heartsmarts: It is pointless to complain that I acted with propriety if in the end I was dense to some relevant consideration.

Do you mean impropriety?

No; pointless for me to complain, to be clear.

I think I need more context to understand what you're claiming. I don't know anyone who complains about their actions; they only complain about others' complaints about their actions. Are you saying that others should not complain about your proper actions when you're too dense to some relevant consideration? Or the opposite? Or that "acting with propriety" is actually not consistent with being dense to a relevant consideration?

In the Sequences, Yudkowsky has remarked over and over that it is futile to protest that you acted with propriety if you do not achieve the correct answer; read the 12th virtue.

Is your proposed principle different from that? It seems like there are some near-synonym replacements, but nothing semantic or structural that would justify a new name for it.

Arbitrary incompleteness invites gameability, and arbitrary specificity invites exceptioncraft.

"arbitrary" is doing a lot of work here.  If it's agent-chosen specificity/completeness, that IS ALREADY a game, with exceptioncrafting just a move within in it.  If the arbitrariness is randomly or "naturally" distributed, replace "gamability" with "engineering" and "exceptioncraft" with "craft".

Recognizing adversarial (including semi-cooperative and mixed sequences of cooperative/adversarial) situations is a big modeling hole in many rationalists' worldviews.

Funny that you think gameability is closer to engineering; I had it in mind that exceptioncraft was closer. To my mind, gameability is more like rules-lawyering the letter of the law, whereas exceptioncraft relies on the spirit of the law. Syntactic vs semantic kinda situation.

Exceptioncraft is seeking results within a set of constraints that don't make the path to those results obvious.  Engineering and gaming are just other words for understanding the constraints deeply enough to find the paths to desired (by the engineer) results. 

Powered heavier-than-air flight is gaming the rules of physics, utilizing non-obvious aerodynamic properties to overcome gravity.  Using hold-to-maturity accounting to bypass rules on risk/capitalization is financial engineering in search of profits.  

The words you choose are political, with embedded intentional beliefs, not definitional and objective about the actions themselves.  

Engineering and gaming are just other words for understanding the constraints deeply enough to find the paths to desired (by the engineer) results. 

Yes.

The words you choose are political, with embedded intentional beliefs, not definitional and objective about the actions themselves.

Well now that was out of left field! People don't normally say that without having a broader disagreement at play. I suppose you have a more-objective reform-to-my-words prepared to offer me? My point about the letter of the law being more superficial than the spirit seems like a robust observation, and I think my choice of words accurately, impartially, and non-misleadingly preserves that observation;

until you have a specific argument against the objectivity, your response amounts to an ambiguously adversarially-worded request to imagine I was systematically wrong and report back my change of mind. I would like you to point my imagination in a promising direction; a direction that seems promising for producing a shift in belief.

Yeah, I suspect we mostly agree, and I apologize for looking to find points of contention.  

Just because there are mottes and baileys, doesn't mean the baileys are wrong; they may just be less defensible in everyday non-ideal speech situations.

Some subset of those who agree that 'when two people disagree, only one of them can be right' and the people who agree that A : A := 'when two people disagree, they can both be right' such that A → A' and A' := 'when two people "disagree," they might not disagree, and they can both be right', do not have a disagreement that cashes out as differences in anticipated experiences, and therefore may only superficially disagree.

Note 1: in order for this to be unambiguously true, 'anticipated experiences' necessarily includes anticipated experiences given counterfactual conditions.

Note 1.1: Counterfactuals are not contrary to facts; they have attributes which facts can also share, and, under varying circumstances, the ratio of [the set of relevant shared attributes] to [the set of relevant unshared attributes] between a counterfactual situation and known situation may be sufficiently large that it becomes misleading to characterize the situations as [opposite] or [mostly disagreeing, as opposed to mostly agreeing]. A more fitting word would be 'laterofactual'.

Note 1.1.1: When people say that B : B := 'C and D disagree', the set of non-excluded non-[stupidly interpretable] implicatures of the statement B includes that E : E := 'C and D mostly disagree', and not only F : F := 'C and D have any amount of disagreement'.

What is normally called common sense is not common sense. Common sense is the sense that is actually common. Idealized common sense (which, I shall elaborate, is the union of the set of thoughts you would have to be carefully trying to be common-sensical in order to make salient in your mind and the set of natural common sense thoughts) should be called something other than common sense, because making a wide-sweeping mental search about possible ways of being common-sensical is not common, even if a general deference and post-hoc accountability to the concept of common sense may be common.

No one will hear my counter-arguments to Sabien's propaganda who does not ask me for them privately. Sabien has blocked me for daring to be unsubtle with him. He is equally welcome as anyone else to come forth to me and exchange considerations. I will not be lured into war; if it is to be settled, then it will be settled with words and in ideal speech situations.

No one will hear my counter-arguments to Sabien's propaganda who does not ask me for them privately.

uh, why? Why not make a top level post?

I am under the impression that here at LessWrong, everyone knows we have standards about what makes good, highly-upvotable top-level content. Currently I would not approve of a version of myself who would conform to those standards I perceive, but I can be persuaded otherwise, including by methods such as improving my familiarity with the real standards.

Addendum: I am not the type of guy who does homework. I am not the type of guy who pretends to have solved epistemology when they haven't. I am the type of guy who exchanges considerations and honestly tries to solve epistemology, and follows up with "but I'm not really sure; what do you guys think?" That is not highly-upvotable content in these parts'a town.

People defend normal rules by saying they're "not arbitrary." But if they were arbitrariness minimizers the rules would certainly be different. Why should I tolerate an arbitrary level of arbitrariness when I can have minimal instead?

Your policy's non-maximal arbitrariness is not an excuse for its remaining arbitrariness. 

I do not suggest the absence of a policy if such an absence would be more arbitrary than the existing policy. All I want is a minimally arbitrary policy; that often implies replacing existing rules rather than simply doing away with them. Sometimes it does mean doing away with them.

Rules about driving on the left or right side of the road are arbitrary. At the same time, having those rules is very useful because it means that people can coordinate around the rule. 

Rules about how to format code are similar. If you work with other people on the same project and you don't have rules for formatting, that produces a mess. Programming languages that are opinionated about how to format code are good because that means you don't have to talk about the conventions with your fellow programmers at the start of a project.
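To make the point concrete (using Python's black, a real opinionated formatter; the function itself is made up): whatever spacing each collaborator typed, everyone ends up reading the same canonical form, so formatting never has to be negotiated.

```python
# Two collaborators write the same function in clashing styles:
#   def add( a,b ):return a+b
#   def add(a, b):   return a+b
# Running the opinionated formatter `black` normalizes both
# to a single canonical form:
def add(a, b):
    return a + b
```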

I don't yet have any opinions about the arbitrariness of those rules. It is possible that I would disagree with you about the arbitrariness if I was more familiar.

Still, you claim that those rules are arbitrary and then defend them; what on Earth is the point of that? If you know they are arbitrary then you must know there are, in principle, less arbitrary policies available. Either you have a specific policy that you know is less arbitrary, in which case people should coordinate around that policy instead as a matter of objective fact, or you don't know a specific less arbitrary policy, and in that case maybe you want people with better Strategic Goodness about those topics to come up with a better policy for you that people should coordinate around instead.

You can complain about the inconvenience of improving, sure. But the improvement will be highly convenient for some other people. There's only so long you can complain about the inconvenience of improving before you're a cost-benefit-dishonest asshole and also people start noticing that fact about you.

You don't need to have a rule about whether to drive on the left or right side. Allowing people to drive where they want is less arbitrary. 

You have that in a lot of cases. An arbitrary law allows people to predict the behavior of other people and that increase in predictability is useful. 

Generally, most people like to have the world around them to be predictable. 

It is better to be predictably good than surprisingly bad, and it is better to be surprisingly good than predictably bad; that much will be obvious to everyone.

I think it is better to be surprisingly good than predictably good, and it is better to be predictably bad than surprisingly bad. 

EDIT: wait, I'm not sure that's right even by deontology's standards; as a general categorical imperative, if you can predict something will be bad, you should do something surprisingly good instead, even if the predictability of the badness supposedly makes it easier for others to handle. No amount of predictable badness is easier for others to handle than surprising goodness.

EDIT EDIT: I find the implication that we can only choose between predictable badness and surprising badness to be very rarely true, but when it is true then perhaps we should choose to be predictable. Inevitably, people with more intelligence will keep conflicting with people with less intelligence about this; less intelligent people will keep seeing situations as choices between predictable badness and surprising badness, and more intelligent people will keep seeing situations as choices between predictable badness and surprising goodness.

Focusing on predictability is a strategy for people who are trying to minimize their expectedly inevitable badness. Focusing on goodness is a strategy for people who are trying to secure their expectedly inevitable weirdness.

Good policy is better than bad policy. That's true but has nothing to do with arbitrariness. 

A policy that could be better — could be more good —  is arbitrarily bad. In fact the phrase "arbitrarily bad" is redundant; you can just say "arbitrary."

That's not how the English language works. 

The dictionary defines arbitrary as:

based on random choice or personal whim, rather than any reason or system

It's not about whether the choice is good or bad but that it's not made because of reasons that speak in favor. 

There is no real reason to choose either the left or right side of the road for driving but it's very useful to choose either of them. 

The fact that the number 404 stands for "page not found" and 403 for "client is forbidden from accessing a valid URL" is arbitrary. There's no reason or system why you wouldn't switch the two numbers.

The web profits from everyone accepting the same arbitrary numbers for the same type of error. 

If one person says "I don't really need that many error codes; I don't want to follow arbitrary choices" and sends 44 instead of 404, this creates a mess for everyone who expects the standard to be followed.
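A minimal sketch of that mess (the handler table and the deviant status code are hypothetical): a client that dispatches on the shared standard has no branch for someone's private renumbering.

```python
# Hypothetical client-side dispatch keyed on the standard status codes.
def handle_response(status: int) -> str:
    handlers = {
        403: "render the 'access denied' page",
        404: "render the 'not found' page",
    }
    # A server that sends 44 instead of 404 "means" the same thing,
    # but falls through to the generic failure path.
    return handlers.get(status, f"unknown status {status}; log and give up")

print(handle_response(404))  # render the 'not found' page
print(handle_response(44))   # unknown status 44; log and give up
```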

The dictionary defines arbitrary as:

based on random choice or personal whim, rather than any reason or system

The more considerate and reasoned your choice, the less random it is. If the truth is that your way of being considerate and systematic isn't as good as it could have been, that truth is systematic and not magical. The reason for the non-maximal goodness of your policy is a reason you did not consider. The less considerate, the more arbitrary.

There is no real reason to choose either the left or right side of the road for driving but it's very useful to choose either of them. 

Actually there are real reasons to choose left or right when designing your policy; you can appeal to human psychology; human psychology does not treat left and right exactly the same.

If one person says "I don't really need that many error codes; I don't want to follow arbitrary choices" and sends 44 instead of 404, this creates a mess for everyone who expects the standard to be followed.

If the mess created for everyone else truly outweighs the goodness of choosing 44, then it is arbitrary to prefer 44. You cannot make true arbitrariness truly strategic just by calling it so; there are facts of the matter besides your stereotypes. People using the word "arbitrary" to refer to something that is based on greater consideration quality are wrong by your dictionary definition and the true definition as well. 

You are wrong in your conception of arbitrariness as being all-or-nothing; there are varying degrees, just as there are varying degrees of efficiency between chess players. A chess player, Bob, half as efficient as Kasparov, makes a lower-quality sum of considerations; not following Kasparov's advice is arbitrary unless Bob can know somehow that he made better considerations in this case; 

maybe Bob studied Kasparov's biases carefully by attending to the common themes of his blunders, and the advice he's receiving for this exact move looks a lot like a case where Kasparov would blunder. Perhaps in such a case Bob will be wrong and his disobedience will be arbitrary on net, but the disobedience in that case will be a lot less arbitrary than all his other opportunities to disobey Kasparov.

Optimizing too strongly for anything runs into goodharting, so good arguments become terrible ideas when taken too literally. Thus an argument that is a terrible idea when pursued too literally is not necessarily a bad argument.

When you say "all else should stay firm belief" do you mean "all else should be regarded as belief"? Also, was the word 'firm' in 'firm belief' playing any role there or can I just get rid of it? 

I think all propositions should be subject to tests whether they are regarded as knowledge or belief.

Feature suggestion: Luigi-vote vs Waluigi-vote buttons. Too many people downvote not because the content is bad but simply because it's wa.

... Actually, I guess people would just downvote and wa-vote. Problem not solved.

Are there any considerations I am being dense to in the formulation of this theorem? As a LessWrong user, feel free to be really really negative to me to compensate for my denseness, even if you must speak fallaciously. If I am misled then there is no point in complaining that I acted with propriety. You are welcome to babble.

1. No one is teaching Bayesianism from true first principles, which would involve exploring the relevant consequences of every tempting(-from-some-perspective) mutation on the set of definitions and instruction steps that is called Bayesianism

2. "There are aspects of good reasoning that we don’t yet understand, even in principle." — Nate Soares, A Guide to MIRI's Research

3. Most epistemic systems in history have been half-misleading in a way that would have become obvious through more thorough foundational investigation

4. Therefore, Bayesianism is probably half-misleading in some way.

If AI takeoff was not upon us, the rationalist community would perish from the simple inability of its users to call each other idiots. More specifically the inability to engage in directed babbling-style dialogue that starts off very negative.

Meta-Contrarianism Is Good, Actually

An untitled litany; title suggestions welcome:

I wish an omnibenevolent god ruled over all living beings in the cosmos so that everyone could have as much as they want and so that intrinsically bad things happened minimally. It is okay to want a thing without its normal consequence. It is okay to want one thing out of a linked pair.

Inspiration credit to Aella.

1. Warren Buffett consistently beats the EMH as an exceptionally smart person 

2. Prediction markets systematically outperform all individual smart people 

3. Therefore, if futarchy is implemented, we will inevitably end up with a functional planned economy.

Something that I think rationalists and EAs don't take seriously enough: it is a nice and altruistic gift to desensitize people to defections on especially fake types of morality.

If people's net worth should in principle include how much money they give to charity, it should also in principle include how much status they give up to defect with fake morality.

When someone says something with low editing that you like, you call it blurting. When someone says something with low editing that you don't like, you call it quacking.

It's Russell conjugation.

I get the sense that expeditiousness could be the 14th rationalist virtue (after the 13th, paranoia).

https://www.thesaurus.com/browse/expeditious

There's a kind of manipulation which gets little discussion due to its invisibility.

Most people start out with some uncertainty about every form of social interaction. 
Then someone associates a social interaction with emotionally loaded language and undesirability, and then most people, without the benefit of appreciating that they can think and interpret autonomously in the opposite direction with some degree of plausibility, just go with the flow of what other people are saying is undesirable. 
Then, if someone tries to fight this gradual narrowing of tolerance with arguments which explicitly appeal to people's conscience by trying to consider all the plausible perspectives on the matter, which is far more respectful to their intellectual autonomy than letting only one view dominate, it gets processed as manipulation, guilt-tripping, and antisociality because they are coming from a position which is unpopular due to the prior indoctrination.

Getting people to be as tolerant of innocent things as they naturally are is always an uphill battle against puritans and censors.
People become puritans and censors in the first place because they believe they cannot solve their disagreements with logical argument, so they try to paint those they feel tensions with as bad people in whatever way they can. So if Hitler ate sugar, and sugar is a niche rather than a popular luxury, you start portraying sugar as evil, and the tolerance of respectable society narrows further: no more sugar for the few who enjoy it and don't like hurting people.

My impression is that when rationalists make objections, they tend not to explicitly distinguish between correcting failure and revealing possible improvements. 

If A is abstractly true, and B is 
1. abstractly true 
2. superficially contradictory with A
3. true in a more relevant way most of the time to most people

I expect rationalists who want to prioritize B to speak as if issuing corrections to people who focus on A, instead of being open-minded that there's good reason for A in unrecognized/rare(ly considered) but necessarily existing contexts, and instead of offering their personal impression of what an improvement would look like as merely that: a personal impression.

In spite of this, I still love you guys more than any other culture; love your ambition, clarity of judgment, and charitability. I'm not a post-rat; I struggle with rationality.

Is there a decision theory which works as follows? (Is there much literature on decision theories given iterated dilemmas/situations?)

If my actual utility doesn't match my expected utility, something went wrong.
Whatever my past self could have done better in this kind of situation in order to make actual utility match the expected utility is what I should do right now. If the patch (aka lesson) mysteriously works, why it works isn't an urgent matter to attend to, although further optimization may be possible if the nature of the patch (aka lesson) is better understood.
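Reading my own rule literally, here is a minimal sketch of such an agent (my construction, not an established decision theory; the situation labels, actions, and utilities are all hypothetical): it stores a retrospective patch whenever actual utility falls short of expected utility, and applies the patch next time without first demanding an explanation of why it works.

```python
# A sketch of the proposed "patch on mismatch" rule; not an established
# decision theory. Situation labels, actions, and utilities are made up.
class PatchingAgent:
    def __init__(self):
        # situation type -> action my past self should have taken
        self.patches = {}

    def choose(self, situation: str, default_action: str) -> str:
        # Apply the stored lesson if one exists; otherwise act as before.
        return self.patches.get(situation, default_action)

    def review(self, situation: str, expected: float, actual: float,
               better_past_action: str) -> None:
        # "If my actual utility doesn't match my expected utility,
        # something went wrong": store the patch without requiring an
        # explanation of why it works.
        if actual < expected:
            self.patches[situation] = better_past_action

agent = PatchingAgent()
agent.review("rushed deadline", expected=1.0, actual=-2.0,
             better_past_action="pad the estimate")
print(agent.choose("rushed deadline", "promise the optimistic date"))
# -> pad the estimate
```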

In this shortform, I want to introduce a concept of government structure I've been thinking of called representative omnilateralism. I do not know if this system works and I do not claim that it does. I'm asking for imagination here.

Representative (subagent) omnilateralism: A system under which one person or a group of people tries to reconcile the interests of all people(/subagents) in a nation (instead of just the majority of the nation) into one plan that satisfies all people(/subagents)*

I think "representative democracy" is an ambiguous term which can be used to mean some mix of representative ochlocracy or representative omnilateralism whenever the situation is convenient. The reason we like democracy is because it approximates direct omnilateralism better than other alternatives, but if we will permit *representative* democracy, why not representative omnilateralism? Much as direct democracy is a purer ochlocracy in theory, representative omnilateralism is a purer elitism and a purer defense of the less fortunate in theory, but direct omnilateralism literally has the best of all worlds. 

My impression (not verdict) is that direct omnilateralism is impossible in practice only because people are not equipped to negotiate optimally. If everyone was better at negotiating, we would have way fewer conflicts and far more business.

*(I mention subagents because people often do not accept parts of themselves which are innocent, which is a personal mistake as well as a commons mistake; direct subagent omnilateralism is an even higher aspiration than direct superagent omnilateralism)

When a bailey is actually true, there are better ways to show it - in those cases they ARE in the motte.

Endorsed.

To whatever extent culture must pass through biology (e.g. retinas, eardrums, stomach) before having an effect on a person, and to whatever extent culture is invented through biological means, cultural inputs are entirely compatible with biological determinism.

Deadlines: X Smooth, Y Sharp

Recently an acquaintance told me we had to be leaving at "4:00 PM sharp." 

Knowing of the planning fallacy, I asked "Sharp? Uh-oh. As a primate animal, I naturally tend not to be very good with sharp deadlines, though your information is useful. Could you tell me when we're leaving smooth?"

"Smooth? What do you mean?"

"Smooth as opposed to sharp. Like, suppose you told me the time I should be aiming to be ready for in order to compensate for the fact that humans are bad at estimating time costs. Let's say you wanted to create a significant buffer between the time I was ready and the time I had to be ready by; the beginning of that significant buffer is the time we're leaving by smooth."

Since then, we've been saying things in the structure "X smooth, Y sharp", where X and Y are times or amounts of time. It's intuitive, catchy, simple, and very useful.

My response to it is: What makes you think it is naive idiocy? It seems like naive intelligence if anything. Even if the literal belief is false, that doesn't make it a stupid thing to act as if true. If everyone acted as if it were true, it would certainly be a stag-hunt scenario! And the benefits are still much worthwhile even if the other does not perfectly cooperate. 

Stupid uncritical intolerant people will think you look childish and impertinent, but intelligent people will notice you're being bullied and you're still tolerating your interlocutor, and they will think you're super-right. You divide the world into intelligent+pro-you and stupid+against-you.

Also I might note that your attempted counter-example has an implied tone which accuses naive idiocy, rather than sounding curious with salient plausibility. The saliently plausible thing, in your attempted counter-example, is an implicit gesture that there is not a difference.

Lately I've been thinking about what God would want from me, because I think the idea was a good influence on my life. Here's a list in progress of some things I think would characterize God's wants and judgments:

  • 1. God would want you to know the truth
  • 2. If you find yourself flinching at knowledge of serious risk factors (e.g. of your character or moral plans), God would urgently want to speak with you about it
  • 3. Resist the pull of natural wrongness
  • 3.1. Consider all of the options which are such that you would have to be looking for the obvious/common sense options in order to find them
  • 3.2. Consider many non-obvious options; consider that the right thing to do is a different concretization of an abstracted version of the wrong thing to do, is adjacent to the wrong or seemingly-right thing to do, queers the seemingly-right or wrong thing to do, or is a thing in a category which cuts sideways through categories of abstractly right or wrong things to do
  • 3.3. Every night, go over a list in progress of cognitive biases and search your memories and feelings honestly as to whether you gave into any of them
  • 4. By one third of the set of good definitions of 'making progress' that you can come up with, or by no more than six good definitions out of eighteen, make it 80% true about you that you are making progress; don't be going nowhere
  • 4.1. On an average rate of twice every five days, do a good day's work
  • 4.2. On an average rate of once every three weeks, spend a day working really hard
  • 4.3. For every extra amount of work beyond the rates specified above, God will be extra proud of you, which can become a source of great esteem and comfort.
  • 5. Reward yourself temperately for making progress and resisting the pull of natural wrongness; your morality should be as an enlightened, wiser-than-you friend who you eagerly wish you were strong enough to follow; not a slaveholder making you regret your acquaintanceship.
  • 6. In your life, always be faithful and reliable to at least one great moral principle; have one moral job or nature that God will consider you remarkable for
  • 7. Recognize the vulnerability of others as unsettlingly reminiscent of the vulnerability in yourself

Feel free to leave suggestions for more entries; aim for excellence, and if you feel honestly that your suggestion is excellent in spite of acknowledged strong possibilities that it may be subjective and biased, don't hesitate to share. Or, hesitate the right amount before sharing; either is good.

Interesting stuff from the Stanford Encyclopedia of Philosophy:

2.8 Occam’s Razor and the Assumption of a “Closed World”

Prediction always involves an element of defeasibility. If one predicts what will, or what would, under some hypothesis, happen, one must presume that there are no unknown factors that might interfere with those factors and conditions that are known. Any prediction can be upset by such unanticipated interventions. Prediction thus proceeds from the assumption that the situation as modeled constitutes a closed world: that nothing outside that situation could intrude in time to upset one’s predictions. In addition, we seem to presume that any factor that is not known to be causally relevant is in fact causally irrelevant, since we are constantly encountering new factors and novel combinations of factors, and it is impossible to verify their causal irrelevance in advance. This closed-world assumption is one of the principal motivations for McCarthy’s logic of circumscription (McCarthy 1982; McCarthy 1986).

3. Varieties of Approaches

We can treat the study of defeasible reasoning either (i) as a branch of epistemology (the theory of knowledge), or (ii) as a branch of logic. In the epistemological approach, defeasible reasoning can be studied as a form of inference, that is, as a process by which we add to our stock of knowledge. Alternatively, we could treat defeat as a relation between arguments in a disputational discourse. In either version, the epistemological approach is concerned with the obtaining, maintaining, and transmission of warrant, with the question of when an inference, starting with justified or warranted beliefs, produces a new belief that is also warranted, given potential defeaters. This approach focuses explicitly on the norms of belief persistence and change.

In contrast, a logical approach to defeasible reasoning fastens on a relationship between propositions or possible bodies of information. Just as deductive logic consists of the study of a certain consequence relation between propositions or sets of propositions (the relation of valid implication), so defeasible (or nonmonotonic) logic consists of the study of a different kind of consequence relation. Deductive consequence is monotonic: if a set of premises logically entails a conclusion, then any superset (any set of premises that includes all of the first set) will also entail that conclusion. In contrast, defeasible consequence is nonmonotonic. A conclusion follows defeasibly or nonmonotonically from a set of premises just in case it is true in nearly all of the models that verify the premises, or in the most normal models that do.

The two approaches are related. In particular, a logical theory of defeasible consequence will have epistemological consequences. It is presumably true that an ideally rational thinker will have a set of beliefs that are closed under defeasible, as well as deductive, consequence. However, a logical theory of defeasible consequence would have a wider scope of application than a merely epistemological theory of inference. Defeasible logic would provide a mechanism for engaging in hypothetical reasoning, not just reasoning from actual beliefs.
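Two bits of standard notation for what the quoted passages say in prose (neither formula is taken from the SEP entry itself): the closed-world assumption, and the monotonicity property that defeasible consequence lacks.

$$KB \nvdash p \;\Longrightarrow\; KB \vdash_{\mathrm{CWA}} \neg p \qquad \text{(closed world: what is not derivable is taken to be false)}$$

$$\Gamma \models \varphi \;\Longrightarrow\; \Gamma \cup \Delta \models \varphi \qquad \text{(monotonicity of deductive consequence)}$$

$$\Gamma \;|\!\sim\; \varphi \;\not\Longrightarrow\; \Gamma \cup \Delta \;|\!\sim\; \varphi \qquad \text{(defeasible consequence is nonmonotonic)}$$

The stock example: 'Tweety is a bird' defeasibly supports 'Tweety flies', but adding 'Tweety is a penguin' to the premises defeats the conclusion.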

I am convinced that moral principles are contributory rather than absolute. I don't like the term 'particularist'; it sounds like a matter of arbitration when you put it that way; I am very reasonable about what considerations I allow to contribute to my moral judgments. I would prefer to call my morality contributist. I wonder if it makes sense to say that utilitarians are a subset of contributists.

I found the Defeasible Reasoning SEP page because I found this thing talking about defeasible reasoning, which I found because I googled 'contextualist Bayesian'.

Googling 'McCarthy Logic of Circumscription' brought me here; very neat.

In defense of strawmanning: there's nothing wrong with wanting to check if someone else is making a mistake, e.g. "Just wanna make sure: what's the difference between what you're thinking and the thinking of my more obviously bad, made-up person who speaks similarly to you?" If you forget to frame it as a question, the natural way it comes out will sound accusatory, as in our typical conception of strawmanning.

I think most people strawman because it's shorthand for this kind of attempt to check, but then they're also unaware that they're just trying to check, and they wind up defending their (actually accidental) apparent hostility, and then a polarization happens.

Strawmanning happens when we take others' judgments as plausible evidence of more general models and habits that those judgments play a part in. By asking for clarity of what models inform a judgment, we can get better over time at inferring models from judgments. It can become a limited form of mind reading.

One thing to say about negation is that often the model uncertainty is concentrated in the negation. Any probability estimate, say of A (vs. not-A) always has a third option: MU="(Model Uncertainty) I'm confused, maybe the question doesn't make sense, maybe A isn't a coherent claim, maybe the concepts I used aren't the right concepts to use, maybe I didn't think of a possibility, etc. etc.". 

I tend to think of writing my propositions in notepad like
A: 75%
B: 34%
C: 60%

And so on. Are you telling me that "~A: 75%" means not only that ~A has a 75% likelihood of being true, but also that A vs ~A has a 25% chance of being the wrong question? If that was true, I would expect 'A: 75%' to mean not only that A was true with a 75% likelihood, but also that A vs ~A is the right question with 75% likelihood (high model certainty). But can't a proposition be more or less confused/flawed on multiple different metrics, to someone who understands what this whole A/~A business is all about?
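One way to make my question precise (my own framing, nothing standard): treat each notepad entry as conditional on the background model M being adequate, and give P(M) its own line.

$$P(A) = P(A \mid M)\,P(M) + P(A \mid \neg M)\,\bigl(1 - P(M)\bigr)$$

On this reading, 'A: 75%' records P(A|M) = 0.75; within M, P(A|M) and P(~A|M) still sum to 1, and the model uncertainty sits in P(M) rather than in either side of the negation.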

My shortform post yesterday about proposition negations, could I get some discussion on that? Please DM me if you like! I need to know if and where there's been good discussion about how Bayesian estimate tracking relates with negation! I need to know if I'm looking at it the wrong way!

Does thinking that A is 45% likely mean that you think the negation of A is 5% likely, or 55% likely? Don't answer that; the negation is 55% likely.

But we can imagine making a judgment about someone's personality. One human person accepts MBTI's framework that thinking and feeling are mutually exclusive personalities, so when they write that someone has a 55% chance of being a thinker type, they make an implicit not-tracked judgment that they have an almost 45% chance of being a feeler type AND not a thinker, but a rational Bayesian is not so silly of course; being a feeler and/or a thinker are two independent questions, buddy.
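The arithmetic behind the two readings, with T for 'thinker' and F for 'feeler' (the 55% is from the example above):

$$\text{exclusive and exhaustive: } P(F) = 1 - P(T) = 0.45$$

$$\text{independent questions: } P(\neg T) = 0.45, \text{ while } P(F) \text{ is left unconstrained}$$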

The models in a person's mind are predictable from the estimate on his paper, and while his estimate may be true, the models the predictions stem from may be deeply flawed.

By the logic of personality taxonomy and worldly relations, "the negation of A" has many connotations.

Maybe the trouble is with using the words 'negation', 'opposite', and 'falsehood' instead of the word 'absence'. Presence of falsehood evidence is not the same as absence of truth evidence, even if absence of truth evidence is one kind of weak falsehood evidence.

I want to DM about rationality. Direct messaging is not subject to so many incentive pressures. Please DM me. Please let me be free. 

Please DM me please DM me please DM me please DM me * 36

I'm looking for someone who I can share my half-baked rambly thoughts with. Shortform makes me feel terrible. 

Please DM me; let me be free; please DM me; let me be free * 105


What is 'knowing'? This is not an arbitrary question. They say that curiosity is wanting to know. Making new considerations makes you more efficient. Someone who wants to become more efficient will therefore look for new considerations and behave as if curious.

Bayesians appear to perform well epistemically, allegedly because they feel a desire to know; because they feel curiosity. But I expect Bayesians would perform just as well if not better if they desired to be efficient, which, again, will look approximately identical to curiosity.

Another type of reasoner may desire to get closer to omniscience. Omniscience means having awareness of all information.

If you admit you are not omniscient about a question, then you admit there is more information that could shift your belief, which would make you more efficient.

Yet those who allegedly desire to know seemingly claim to often know the correct answers to questions even when they are not omniscient about those questions. What could they mean by 'know'? Are they wrong that they know? Do they have sufficient semiscience in some sense? How could sufficient-semiscience-for-knowledge be defined in a way that can't be gamed? 

I suspect there is no such ungameable definition. If there is no ungameable definition, then it would seem 'desire to know' is also gameable if it does not mean 'desire to consider everything' (become omniscient).

k was I downvoted for being too Waluigi or did I do something wrong