All of query's Comments + Replies

P(vulcan mountain | you're not in vulcan desert) = 1/3

P(vulcan mountain | guard says "you're not in vulcan desert") = P(guard says "you're not in vulcan desert" | vulcan mountain) * P(vulcan mountain) / P(guard says "you're not in vulcan desert") = ((1/3) * (1/4)) / ((3/4) * (1/3)) = 1/3
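As a sanity check, here's a quick Monte Carlo sketch of that calculation. It assumes four equally likely locations and a guard who truthfully names, uniformly at random, one of the three places you're not in; the fourth location name is a placeholder I made up.

```python
import random

# Placeholder names; only "vulcan_mountain", "vulcan_desert", and "earth" come from the thread.
locations = ["vulcan_mountain", "vulcan_desert", "earth", "fourth_place"]

trials = 200_000
said_not_desert = 0
also_in_mountain = 0

for _ in range(trials):
    where = random.choice(locations)  # uniform prior over the four locations
    # Guard's algorithm: truthfully name one of the three places you're NOT in, chosen uniformly.
    statement = random.choice([loc for loc in locations if loc != where])
    if statement == "vulcan_desert":
        said_not_desert += 1
        if where == "vulcan_mountain":
            also_in_mountain += 1

print(also_in_mountain / said_not_desert)  # comes out near 1/3, matching the calculation above
```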

Woops, you're right; nevermind! There are algorithms that do give different results, such as justinpombrio mentions above.

EDIT: This was wrong.

The answer varies with the generating algorithm of the statement the guard makes.

In this example, he told you that you were not in one of the places you're not in (the Vulcan Desert). If he always does this, then the probability is 1/4; if you had been in the Vulcan Desert, he would have told you that you were not in one of the other three.

If he always tells you whether or not you're in the Vulcan Desert, then once you hear him say you're not your probability of being in the Vulcan Mountain is 1/3.

7Jiro
The whole thing is basically the Monty Hall problem.
3AK
That can't be right -- if the probability of being in the Vulcan Mountain is 1/4 and the probability of being in the Vulcan Desert (per the guard) is 0, then the probability of being on Earth would have to be 3/4.

Definitely makes sense. A commonly cited example is women in an office workplace: assertiveness that would be average for a man is considered "bitchy" coming from a woman, yet women still suffer roughly the same "weak" penalties for non-assertiveness.

With the advice-giving aspect, some situations are likely coming from people not knowing what levers they're actually pulling. Adam tells David to move his "assertiveness" lever, but there's no affordance gap available for David by moving that lever -- he would actually have to move an &... (read more)

I suspect there's a practice effect here as well. Figuring out how to be assertive without being domineering or bossy is hard. People who have grown up being assertive will have had the opportunity to learn, but those who try to become assertive because they know it's important for the workplace won't have developed the judgement yet.

Feel similarly; since Facebook comments are a matter of public record, disputes and complaints on them are fully public and can have high social costs if unaddressed. I would not be worried about it in a small group chat among close friends.

I perceive several different ways something like this happens to me:

1. If I do something that strains my working memory, I'll have an experience of having a "cache miss". I'll reach for something, and it won't be there; I'll then attempt to pull it into memory again, but usually this is while trying to "juggle too many balls", and something else will often slip out. This feels like it requires effort/energy to keep going, and I have a desire to stop and relax and let my brain "fuzz over". Eventually I... (read more)

A note on this, which I definitely don't mean to apply to the specific situations you discuss (since I don't know enough about them):

If you give people stronger incentives to lie to you, more people will lie to you. If you give people strong enough incentives, even people who value truth highly will start lying to you. Sometimes they will do this by lying to themselves first, because that's what is necessary for them to successfully navigate the incentive gradient. This can be changed by their self-awareness and force of will, but some wh... (read more)

1Serpent-Stare
Also I'd like to comment that the "Do I look fat in this" question is an example I quite like. It's a fantastic example of the sort of question that has a stereotypical negative response so strong that many people will just assume, even the first time, that you don't ever say yes to that question. And also, I had an ex boyfriend that I got to participate with me in an exercise to help me get over my own fat shame. I asked him outright to call me fat, and to do it with a smile so that I could practice associating "fat" with anything other than ugly and shameful. He agreed, and sometimes we would just call each other fat while cuddling and being flirty, in an attempt to disarm the word's cultural baggage. It's also a pretty terribly phrased question, but it can still be answered honestly and positively. An honest fashionista friend might do well to comment, "Darling, it's too small and it's squeezing your hips in a way that looks terribly uncomfortable; try a different cut or a larger size." Or someone else might reply as most of my exes have done, "I have no idea, I don't do fashion." This response is a bit disappointing sometimes because it offers no useful feedback, but has never offended me.
1Serpent-Stare
Oh, absolutely. That's why I work so hard to try to reward those people I can trust to tell me the truth. To mitigate all the messages of high-stress that I can't help but put out when I encounter something unexpected and distressing as well as I know how; asking for a moment to decompress, using distractions to calm down until I can deal with it more directly, and all-importantly remembering to thank and affirm the behaviour even when it's stressing me out, and afterwards at other times when it isn't. I say things like "it's important that you're able to talk to me about these things" and "it would be so much worse if you didn't tell me and then it blew up later". They are vital mantras to me, not only to reassure my friends but also to remind myself. I tell my trusted friends that I love them and trust them because I don't have to worrywort over everything they say, and I can ask them to remind me of the comforting truths as well as alerting me to the uncomfortable ones and they seem to be alright with that. Because it's true. Because it's helping me to recover some of my paranoia and deal with relationships in which I don't have that openness by being able to reliably turn to ones in which I do. Sometimes I still get stuck in a panic spiral about the negative reinforcement stimuli that I know I'm putting out. But recently, my honest friends have been quick to reassure me on that front. I notice it far more than they do, because I care so much about noticing it, for exactly the reasons you give.

I think you're looking for Thurston's "On proof and progress in mathematics": https://arxiv.org/abs/math/9404236

2ryan_b
That's the one! Downloaded, bookmarked, emailed to myself - it will not escape me again. For the interested, in the above link I intended to reference Part 6, specifically where he talks about his experience with foliations. This is page 13. Thank you, query!
I think I may agree with the status version of the anti-hypocrisy flinch. It's the epistemic version I was really wanting to argue against.

Ok yeah, I think my concern was mostly with the status version-- or rather that there's a general sensor that might combine those things, and the parts of that related to status and social management are really important, so you shouldn't just turn the sensor off and run things manually.

... That doesn't seem like treating it as being about epistemics to me. Why is it epistemically relevant? I thin
... (read more)
2abramdemski
That's a good point. Given that I didn't even think of the distinction explicitly until engaging with comments, it seems really easy to confuse them.

I will see if I can catch a fresh one in the wild and share it. I recognize your last paragraph as something I've experienced before, though, and I endorse the attempt to not let that grow into righteous indignation and annoyance without justification -- with that as the archetype, I think that's indeed a thing to try to improve.

Most examples that come to mind for me have to do with the person projecting identity, knowledge, or an aura of competence that I don't think is accurate. For instance holding someone else to a social standard tha... (read more)

2abramdemski
I think I may agree with the status version of the anti-hypocrisy flinch. It's the epistemic version I was really wanting to argue against. ... That doesn't seem like treating it as being about epistemics to me. Why is it epistemically relevant? I think it's more like a naive mix of epistemics and status. Status norms in the back of your head might make the hypocrisy salient and feel relevant. Epistemic discourse norms then naively suggest that you can resolve the contradiction by discussing it.

As you say, there are certainly negative things that hypocrisy can be a signal of, but you recommend that we should just consider those things independently. I think trying to do this sounds really really hard. If we were perfect reasoners this wouldn't be a problem; the anti-hypocrisy norm should indeed just be the sum of those hidden signals. However, we're not; if you practice shutting down your automatic anti-hypocrisy norm, and replace it with a self-constructed non-automatic consideration of alternatives, then I think you'll do wors... (read more)

2abramdemski
Can you give a typical example, for yourself (maybe look out for examples in daily life and give one when it comes up)? I think, for myself, the anti-hypocrisy flinch is causing me problems in almost every case where I consciously notice it. So my position is really more like "notice that this response is mostly useless/harmful. Also, in every case I can think of where it's not, you could replace it with something more specific." For example, it's often happened that a friend is giving advice and admits that they don't do the thing themselves. I notice that in the social context, this feels something like a 50% decline in the credence given to the advice -- it feels very real. But, when I notice this, it usually doesn't seem valid on reflection. Or, maybe a friend said something and I later start thinking about ways in which that friend doesn't live their life in accordance with such a statement, and I start experiencing righteous indignation. Usually, when I reflect on this, it isn't very plausible. I'm actually stretching the meaning of their statement, and also stretching my interpretation of how they live their life, in order to paint a picture where there's a big mismatch. If I talked to them about it, they would predictably be able to respond by correcting those misinterpretations -- and if I held on to my anger, I would probably double down, accusing them of missing the point and not trying to charitably understand what I'm saying. There's usually some real reason for the annoyance I'm feeling with my friend, which has only a little to do with the hypocrisy accusation.

One hypothesis is that consciousness evolved for the purpose of deception -- Robin Hanson's "The Elephant in the Brain" is a decent read on this, although it does not address the Hard Problem of Consciousness.

If that's the case, we might circumvent its usefulness by having the right goals, or strong enough detection and norm-punishing behaviors. If we build factories that are closely monitored where faulty machines are destroyed or repaired, and our goal is output instead of survival of individual machines, then the machines being dec... (read more)

Some reader might be thinking, "This is all nice and dandy, Quaerendo, but I cannot relate to the examples above... my cognition isn't distorted to that extent." Well, let me refer you to UTexas CMHC:
Maybe you are being realistic. Just for the sake of argument, what if you're only 90% realistic and 10% unrealistic? That means you're worrying 10% "more" than you really have to.

Not intending to be overly negative, but this is not a good argument for anything and also doesn't answer the hypothetical question of no... (read more)

For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.

I don't think this is true; intuition + explicit reasoning may have more of a certain kind of inside view trust (if you model intuition as not having gears that can be trustable), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes... (read more)

1Vanessa Kosoy
I don't see it this way. I think that both intuition and explicit reasoning are relevant to both inside view and outside view. It's just that the input of the inside view is the inner structure of the question and the input of the outside view is the reference category inside which the question resides. People definitely use the outside view in debates by communicating it verbally, which is hard to do with pure intuition. I think that ideally you should combine intuition with explicit reasoning and also combine inside view with outside view. You can certainly have biases about these things, but these things can be regarded as coming from your intuition. You can think of it as P vs. NP. Solving problems is hard but verifying solutions is easy. To solve a problem you have to use intuition, but to verify the solution you rely more on explicit reasoning. And since verifying is so much easier, there is much less room for bias.

If you choose to "care more" about something, and as a result other things get less of your energy, you are socially less liable for the outcome than if you intentionally choose to "care less" about a thing directly. For instance, "I've been really busy" is a common and somewhat socially acceptable excuse for not spending time with someone; "I chose to care less about you" is not. So even if your one and only goal was to spend less time on X, it may be more socially acceptable to do that by adding Y as cover.

Social excusability is often reused as internal excusability.

Some reasons this is bad:

  1. It's false or not-even-wrong ("worthless parody of a human" is not something that I imagine epistemically applies to any human ever.)
  2. It's mixing epistemics and shoulds -- even if you categorized yourself as a misery pit, this does not come close to meaning you should throw yourself under a bus.
  3. Misery pits are a false framework that may be useful for modeling phenomena, but may not be a useful model for people who would tend to identify themselves as misery pits. For instance, if they were likely to think the quoted thought, they'd be committing a lot of bucket errors.

I also dislike this comment because I think it's too glib.

I think it's a memetic adaptation type thing. I would claim that attempting to open up the group usage of NVC will also (in a large enough group) open up the usage of "language-that-appears-NVCish-even-if-against-the-stated-philosophy". I think that this type of language provides cover for power plays (re: the broken link to the fish selling scenario), and that using the language in a way that maintains boundaries requires the group to adapt and be skillful enough at detecting these violations. It is not enough if you do so as an individu... (read more)

This is incorrect and I think only sounds like an argument because of the language you're choosing; there's nothing incoherent about 1. preferring evolutionary pressures that look like Moloch to exist so that you end up existing rather than not existing, and 2. wanting to solve Moloch-like problems now that you exist.

Also, there's nothing incoherent about wanting to solve Moloch-like problems now that you exist regardless of Moloch-like things causing you to come into existence. Our values are not evolution's values, if that even makes sense.

-1Dr. Jamchie
So, to summarise this whole argument again: Moloch is a problem that made you exist and is impossible to solve by definition. So what are you going to do about it? (I suggest trying to answer this to yourself at first, only then to me)

I'm not an expert, but I think MD5 isn't the best for this purpose due to collision attacks. If it's a very small plain-english ASCII message, then collision attacks are probably not a worry (I think?), but it's probably better to use something like SHA-2 or SHA-3 anyways.
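For concreteness, a minimal sketch using Python's standard hashlib (the message text here is just a stand-in, not the actual content):

```python
import hashlib

message = b"some short plain-English ASCII message"  # stand-in for the actual text

print(hashlib.md5(message).hexdigest())       # fast, but practical collision attacks exist
print(hashlib.sha256(message).hexdigest())    # SHA-2 family; no known practical collisions
print(hashlib.sha3_256(message).hexdigest())  # SHA-3 family; also fine for this purpose
```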

1Valentine
That might well be. I haven't a clue which hash functions do what relative to one another. But yeah, the thing it encodes is English ASCII text.

Yeah, this definitely seems like a bug; permalinks to comments shouldn't require this. Unfortunately, I don't see any obvious way to report a bug.

Upfront note: I've enjoyed the circling I've done.

One reason to be cautious of circling: dropping group punishment norms for certain types of manipulation is extremely harmful.  From my experience of circling (which is limited to a CFAR workshop), it provides plausible cover for very powerful status grabs under the aegis of "(just) expressing feelings and experiences"; I think the strongest usual defense against this is actually group disapproval.  If someone is able to express such a status grab without receiving overt disapproval, the... (read more)

0FireItself
You seem to be thinking that both NVC and Circling involve not maintaining boundaries against behaviors that we would otherwise notice, categorize as bad, and take social action against. I am familiar with NVC, and if anything the opposite seems to be the case: in NVC you enforce your boundaries more strongly and effectively than without it. I am not familiar with Circling, but I see nothing in the post above to suggest it would be any different.
3MakerOfErrors
The "fish sell" link isn't working - it just takes me to the top of the circling post. Also, when I search for "fish sell" on Lesser Wrong, I get a result under "comments" of CronoDAS saying: And that link, itself, just takes me to the top of the circling post. And weirdly, I don't see that comment here anywhere. Is this a error on the website, rather than the way the link was formatted? Like, is it not possible to link to comments yet? I'll poke around a little, but I'm not all that hopeful, since that's a guess in the dark.
4CronoDAS
The "fish sell" url isn't working - it just takes me to the top of the Circling post.

"Complaining about your trade partners" at the level of making trade decisions is clearly absurd (a type error). "Complaining about your trade partners" at the level of calling them out, suggesting in an annoyed voice they behave differently, looking miffed, and otherwise attempting to impose costs on them (as object level actions inside of an ongoing trade/interaction which you both are agreeing to) is not. These are sometimes the mechanism via which things of value are traded or negotiations are made, and may be preferred by both parties to ceasing the interaction.

A potential explanation I think is implicit in Ziz's writing: the software for doing coordination within ourselves and externally is reused. External pressures can shape your software to be of a certain form; for instance, culture can write itself into people so that they find some ideas/patterns basically unthinkable.

So, one possibility is that fusion is indeed superior for self-coordination, but requires a software change that is difficult to make and can have significant costs to your ability to engage in treaties externally. Increased Mana allows you to offset some of the costs, but not all; some interactions are just pretty direct attempts to check that you've installed the appropriate mental malware.

Assuming the money transfer actually takes place, this sounds like a description of gains from trade; the "no Pareto improvement" phrasing is that when actually making the trade, you lose the option of making the trade -- which is of greater than or equal value to the trade itself if the offer never expires. One avenue to get actual Pareto improvements is then to create or extend opportunities for trade.

If the money transfer doesn't actually take place: I agree that Kaldor-Hicks improvements and Pareto improvements shouldn't be conflated. It takes social technology to turn one into the other.
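A toy numeric illustration (with made-up numbers): suppose a policy change gives A a gain worth 10 and costs B 4.

```latex
\Delta_A = +10, \quad \Delta_B = -4, \quad \Delta_A + \Delta_B = +6 > 0
  \quad \text{(a Kaldor-Hicks improvement).}\\
\text{With a transfer } t \text{ from A to B, where } 4 < t < 10:\quad
  \Delta_A' = 10 - t > 0, \quad \Delta_B' = t - 4 > 0
  \quad \text{(now a Pareto improvement).}
```

Without the transfer actually happening, B is simply worse off, which is the conflation being pointed at.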

I was definitely very confused when writing the part you quoted. I think the underlying thought was that the processes of writing humans and of writing AlphaZero are very non-random; i.e., even if there's a random number generated in some sense somewhere as part of the process, there's other things going on that are highly constraining the search space -- and those processes are making use of "instrumental convergence" (stored resources, intelligence, putting the hard drives in safe locations.) Then I can understand your claim as "... (read more)

I think you have a good point, in that the VNM utility theorem is often overused/abused: I don't think it's clear how to frame a potentially self modifying agent in reality as a preference ordering on lotteries, and even if you could in theory do so it might require such a granular set of outcomes as to make the resulting utility function not super interesting. (I'd very much appreciate arguments for taking VNM more seriously in this context; I've been pretty frustrated about this.)

That said, I think instrumental convergence is indeed... (read more)

3zulupineapple
I'd classify both of those as random programs though. AlphaZero is a random program from the set of programs that are good at playing go (and that satisfy some structure set by the creators). Humans are random machines from the set of machines that are good at not dying. The searches aren't uniform, of course, but they are not intentional enough for it to matter. In particular, AlphaZero was not selected in such way that exhibiting instrumental convergence would benefit it, and therefore it most likely does not exhibit instrumental convergence. Suppose there was a random modification to AlphaZero that would make it try to get more computational resources, and that this modification was actually made during training. The modified version would play against the original, the modification would not actually help it win in the simulated environment, the modified version would most likely lose and be discarded. If the modified version did end up winning, then it was purely by chance. The case of humans is more complicated, since the "training" does reward self preservation. Curiously, this self preservation seems to be its own goal, and not a subgoal of some other desire, as instrumental convergence would predict. Also, human self preservation only works in a narrow sense. You run from a tiger, but you don't always appreciate long term and low probability threats, presumably because you were not selected to appreciate them. I suspect that concern for these non-urgent threats does not correlate strongly with IQ, unlike what instrumental convergence would predict.

On equal and opposite advice: many more people want you to surrender to them than it is good for you to surrender to, and the world is full of people who will demand your apology (and make it seem socially mandatory) for things you do not or should not regret. Tread carefully with practicing surrender around people who will take advantage of it. Sometimes the apparent social need to apologize is due to a value/culture mismatch with your social group, and practicing minimal or non-internalized apologies is actually a good survival mechanic.

If you are high... (read more)

3Error
This was my first thought, too. I'm all in favor of the argument against weasel apologies, but sometimes the reason you're giving a weasel apology is that, by your own lights, you didn't do anything wrong. Weasel apologies are never appropriate, but sometimes a sincere one also isn't appropriate. Sometimes the appropriate response is "No, I did the right thing here. Sorry, but no social surrender will be forthcoming." You'll have to accept the probable social consequences, of course, but that's part of the price of integrity.
8Raemon
I like this comment for cleanly encapsulating advice for different use-cases.

Yeah; it's not open/shut. I guess I'd say in the current phrasing, the "but Aumann’s Agreement Theorem shows that if two people disagree, at least one is doing something wrong." is suggesting implications but not actually saying anything interesting -- at least one of them is doing something wrong by this standard whether or not they agree. I think adding some more context to make people less suspicious they're getting Eulered (http://slatestarcodex.com/2014/08/10/getting-eulered/) would be good.

I think this flaw is basically in ... (read more)

This is completely awesome, thanks for doing this. This is something I can imagine actually sending to semi-interested friends.

Direct messaging seems to be wonky at the moment, so I'll put a suggested correction here: for 2.4, Aumann's Agreement Theorem does not show that if two people disagree, at least one of them is doing something wrong. From wikipedia: " if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be e... (read more)

4Quaerendo
Thanks for the feedback. Here's the quote from the original article: One could discuss whether Eliezer was right to appeal to AAT in a conversation like this, given that neither he nor his conversational partner are perfect Bayesians. I don't think it's entirely unfair to say that humans are flawed to the extent that we fail to live up to the ideal Bayesian standard (even if such a standard is unobtainable), so it's not clear to me why it would be misleading to say that if two people have common knowledge of a disagreement, at least one (or both) of them are "doing something wrong". Nonetheless, I agree that it would be an improvement to at least be more clear about what Aumann's Agreement Theorem actually says. So I will amend that part of the text.

I found this extremely helpful; it motivated me to go read your entire blog history. I hope you write more; I think the "dark side" is a concept I had only the rough edges of, but one that I unknowingly desired to understand better (and had seen hints of in others' writing around the community.) I feel like the similarly named "dark arts" may have been an occluding red herring.

The more you shine the light of legibility, required defensibility and justification, public scrutiny of beliefs, social reality that people's ju
... (read more)

Beautifully written; thank you for sharing this.

EDIT: On reflection, I want to tap out of this conversation. Thanks for the responses.

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

Effectively no. I understand that you're aware of these risks and are able to list mitigating arguments, but the weight of those arguments does not resolve my worries. The things you've just said aren't different in gestalt from what I've read from you.

To be potentially more helpful, here's a few ways the arguments you just made fall flat for me:

I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I als

... (read more)

use every deviation from perfection as ammunition against even fully correct forms of good ideas.

As a professional educator and communicator, I have a deep visceral experience with how "fully correct forms of good ideas" are inherently incompatible with bridging the inferential distance of how far the ordinary Lifehack reader is from the kind of thinking space on Less Wrong. Believe me, I have tried to explain more complex ideas from rationality to students many times. Moreover, I have tried to get more complex articles into Lifehack and elsew... (read more)

I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there's maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there's maybe a 10-20% chance that he's having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.

So here's some of the concerns I see; I've gone to some effort to be fair to Gleb, and not assume anything about his thoughts or motivations:

  • By presenting these ideas in weakened
... (read more)
2[anonymous]
My immediate reaction was to disagree. I think most people don't listen to arguments from authority often enough; not too often. So I decided to search "arguments from authority" on LessWrong, and the first thing I came to was this article by Anna Salamon: She then suggests separating out knowledge you have personally verified from arguments from authority knowledge to avoid groupthink, but this doesn't seem to me to be a viable method for the majority of people. I'm not sure it matters if non-experts engage in groupthink if they're following the views of experts who don't engage in groupthink. Skimming the comments, I find that the response to AnnaSalamon's article was very positive, but the response to your opposite argument in this instance also seems to be very positive. In particular, AnnaSalamon argues that the share of knowledge which most people can or should personally verify is tiny relative to what they should learn. I agree with her view. While I recognize that there are different people responding to AnnaSalamon's comments than the ones responding to your comments, I fear that this may be a case of many members of LessWrong interpreting arguments based on presentation or circumstance rather than on their individual merits.
5Tem42
I would argue that your first and third points are not very strong. I think that it is not useful to protect an idea so that it is only presented in its 'cool' form. A lot of harm is done by people presenting good ideas badly, and we don't want to do any active harm, but at the same time, the more ways and the more times that an idea is adequately expressed, the more likely that idea will be remembered and understood. People who are not used to thinking in strict terms are more likely to be receptive to intuition pumps and frequent reminders of the framework (evidence based everything). Getting people into the right mindset is half the battle. I do however, agree with your second point, strongly. It is very hard to get people to actually care about evidence, and most people would not click through to formal studies; even fewer would read them. Those who would read them are probably motivated enough to Google for information themselves. But actually checking the evidence is so central to rationality that we should always remind new potential rationalists that claims are based on strong research. If clickbait sites are prone to edit out that sort of reference, we should link to articles that are more reader friendly but do cite (and if possible, link to) supporting studies. This sort of link is triple plus good: it means that the reader can see the idea in another writer's words; it introduces them to a new, less clickbaity site that is likely to be good for future reading; and, of course, it gives access to sources. I think that one function that future articles of this sort should focus on as a central goal is to subtly introduce readers to more and better sites for more and better reading. However, the primary goal should remain as an intro level introduction to useful concepts, and intro level means, unfortunately, presenting these ideas in weakened forms.
4Evan_Gaensbauer
This comment captures my intuitions well. Thanks for writing this. It's weird for me, because when I wear my effective altruism hat, I think what Gleb is doing is great because marketing effective altruism seems like it would only drive more donations to effective charities, while not depriving them of money or hurting their reputations if people become indifferent to the Intentional Insights project. This seems to be the consensus reaction to Gleb's work on the Effective Altruism Forum. Of course, effective altruism is sometimes more concerned with only the object-level impact that's easy to measure, e.g., donations, rather than subtler effects down the pipe, like cumulatively changing how people think over the course of multiple years. Whether that's a good or ill effect is a judgment I'll leave for you. On the other hand, when I put on my rationality community hat, I feel the same way about Gleb's work as you do. It's uncomfortable for me because I realize I have perhaps contradicting motivations in assessing Intentional Insights.

I really appreciate you sharing your concerns. It helps me and other involved in the project learn more about what to avoid going forward and optimize our methods. Thank you for laying them out so clearly! I think this comment will be something that I will come back to in the future as I and others create content.

I want to see if I can address some of the concerns you expressed.

In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being in... (read more)

I disagree with your conclusion. Specifically, I disagree that

This is, literally, infinitely more parsimonious than the many worlds theory

Your reasoning isn't tight enough to have confidence answering questions like these. Specifically,

  • What do you mean by "simpler"?
  • Specifically how does physics "take into account the entire state of the universe"?

In order to actually say anything like the second that's consistent with observations, I expect your physical laws become much less simple (re: Bell's theorem implying non-locality... (read more)

Your question of "after finishing the supertask, what is the probability that 0 stays in place" doesn't yet parse as a question in ZFC, because you haven't specified what is meant by "after finishing the supertask". You need to formalize this notion before we can say anything about it.

If you're saying that there is no formalization you know of that makes sense in ZFC, then that's fine, but that's not necessarily a strike against ZFC unless you have a competitive alternative you're offering. The problem could just be that it's an ill-d... (read more)

The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence. To effectively avoid all of them preemptively requires training the stable reflexes, but it could be that "editing out" only a few 10 minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively.) So I think the "very regular basis" claim isn't substantiated.

That said, we can't actually retroactively edit anyways.

0Lumifer
I don't think that's the model (or if it is, I think it's wrong). I see the model as persistent reflexes interacting with the environment and giving rise to common, repeatable, predictable events with serious legal consequences.

Being too vague to be wrong is bad. Especially when you want to speak in favor of science.

I agree, it's good to pump against entropy with things that could be "Go Science!" cheers. I think the author's topic is not too vague to discuss, but his argument isn't strong or specific enough that you should leap to action based solely on it. I think it's a fine thing to post to Discussion though; maybe this indicates we have different ideal standards for Discussion posts?

There's no reason to say "well maybe the author meant to say X" when h

... (read more)
2ChristianKl
That result didn't include a discussion about the value of including formalism in the definition of science. The question about whether "formalism" is a central part of science is one that's to be had on LW. Instead of saying: "I think you meant to include formalism." it's better to say: "I think formalism should be part of our definition of science because of X, Y and Z." That would make the discussion less vague and more concrete. Maybe someone agrees with your reasons. Maybe people disagree. In both cases there's productive discussion. Criticizing an argument is not the same as objecting to it being posted. If vague ideas are posted in Discussion, then it makes sense to have a discussion with the goal of getting the ideas less vague. In addition to the question of "formalism" the OP's definition of science also lacks public challenge of ideas. Is that a conscious decision? I don't know. As it stands, the post does not address any of the concerns in the topic that exist in LW culture. To teach Bayesian statistics well you need calculus. Most statistics 101 classes teach frequentist statistics with p-values. Commonly they are taught in a memorize-the-teacher's-password way that doesn't leave students with real understanding. Doctors have their statistics 101 classes but they mostly just memorize it and then forget it afterwards. Does statistics 101 teach students to expose themselves to empirical feedback? I don't think it does. Simply saying "we need to teach more students statistics 101" ignores all that previous discussion. He claims that LW is mainly about logical consistency and biases. I don't think that's the case. Rationality!CFAR2015 seems to be: Your system I and system II are aligned in a way that if it's rational to get up at 7 o'clock your brain wakes you up at 7 o'clock without you needing an alarm clock. Then you work on the most important thing in your life. We are not planning on publishing papers because academia with its ethical review

Actually, this illustrates scientific thinking; the doctor forms a hypothesis based on observation and then experimentally tests that hypothesis.

Most interactions in the world are of the form "I have an idea of what will happen, so I do X, and later I get some evidence about how correct I was". So, taking that as a binary categorization of scientific thinking is not so interesting, though I endorse promoting reflection on the fact that this is what is happening.

I think the author intends to point out some of the degrees of scientificism by w... (read more)

0ChristianKl
Being too vague to be wrong is bad. Especially when you want to speak in favor of science. I don't see any mention of formalism in the OP. There's no reason to say "well maybe the author meant to say X" when he didn't say X.
3g_pepper
Perhaps, but the doctor in the OP did not just happen to later get some evidence about how correct he/she was; instead, after formulating a hypothesis, the doctor ran a test specifically to test the hypothesis. That is practically a textbook example (albeit a fairly short/simple one) of the scientific method at work. And that was really my point. It is worth noting that the scientific method is really just a very rigorous formalization of common sense reasoning. I think that demystifying science among the non scientifically sophisticated population is actually a step in the direction in which the OP gestures. This also is true; even if one can't expect the full-on House M.D. treatment each time one goes in with a sinus infection or strep throat, many of the protocols that the doctor follows and the medicines that he/she prescribes were developed/tested with a high degree of scientific rigor.

I think it would be good to separate the analysis into FGCA's which are always fallacious, versus those that are only warning signs/rude. For instance, the fallacy of grey is indeed a fallacy, so using it as a counter-argument is a wrong move regardless of its generality.

However, it may in fact be that your opponent is a very clever arguer or that the evidence they present you has been highly filtered. Conversationally, using these as a counter-argument is considered rude (and rightly so), and the temptation to use them is often a good internal warning s... (read more)

Modulo nitpicking, agreed on both points.

I very much favor bottom-up modelling based on real evidence rather than mathematical models that come out looking neat by imposing our preconceptions on the problem a priori.

(edit: I think I might understand after-all; it sounds like you're claiming AIXI-like things are unlikely to be useful since they're based mostly on preconceptions that are likely false?)

I don't think I understand what you mean here. Everyone favors modeling based on real evidence as opposed to fake evidence, and everyone favors avoiding the import of false preconceptions. It sou... (read more)

1[anonymous]
I don't think it's an active waste of time to explore the research that can be done with things like AIXI models. I do, however, think that, for instance, flaws of AIXI-like models should be taken as flaws of AIXI-like models, rather than generalized to all possible AI designs. So for example, some people (on this site and elsewhere) have said we shouldn't presume that a real AGI or real FAI will necessarily use VNM utility theory to make decisions. For various reasons, I think that exploring that idea-space is a good idea, in that relaxing the VNM utility and rationality assumptions can both take us closer to how real, actually-existing minds work, and to how we normatively want an artificial agent to behave.

Formally, you don't. Informally, you might try approximate definitions and see how they fail to capture elements of reality, or you might try and find analogies to other situations that have been modeled well and try to capture similar structure. Mathematicians et al usually don't start new fields of inquiry from a set of definitions, they start from an intuition grounded in reality and previously discovered mathematics and iterate until the field takes shape. Although I'm not a physicist, the possibly incorrect story I've heard is that Feynman path integrals are a great example of this.

I offered the transform as an example of how things can mathematically factor, so like I said, that may not be what the solution looks like. My feeling is that it's too soon to throw out anything that might look like that pattern though.

Oh yes, it sounds like I did misunderstand you. I thought you were saying you didn't understand how such a thing could happen in principle, not that you were skeptical of the currently popular models. The classes U and F above, should something like that ever come to pass, need not be AIXI-like (nor need they involve utility functions).

I think I'm hearing that you're very skeptical about the validity of current toy mathematical models. I think it's common for people to motte and bailey between the mathematics and the phenomena they're hoping to model, a... (read more)

0[anonymous]
I very much favor bottom-up modelling based on real evidence rather than mathematical models that come out looking neat by imposing our preconceptions on the problem a priori. Right. Which is precisely why I don't like when we attempt to do FAI research under the assumption of AIXI-like-ness.
0query
I offered the transform as an example of how things can mathematically factor, so like I said, that may not be what the solution looks like. My feeling is that it's too soon to throw out anything that might look like that pattern though.

A mathematical model of what this might look like: you might have a candidate class of formal models U that you think of as "all GAI" such that you know of no "reasonably computable"(which you might hope to define) member of the class (corresponding to an implementable GAI). Maybe you can find a subclass F in U that you think models Friendly AI. You can reason about these classes without knowing any examples of reasonably computable members of either. Perhaps you could even give an algorithm for taking an arbitrary example in U and ... (read more)

3[anonymous]
Frankly, I don't trust this claim for a second, because important components of the Friendliness problem are being completely shunted aside. For one thing, in order for this to even start making sense, you have to be able to specify a computable utility function for the AGI agent in the first place. The current models being used for this "mathematical" research don't have any such thing, ie: AIXI specifies reward as a real-valued percept rather than a function over its world-model. The problem is not the need for large amounts of computing power (ie: the problem is not specifying the right behavior and then "scaling it down" or "approximating" a "tractable example from the class"). The problem is not being able to specify what the agent values in detail. No amount of math wank about "approximation" and "candidate class of formal models U" is going to solve the basic problem of having to change the structure away from AIXI in the first place. I really ought to apologize for use of the term "math wank", but this really is the exact opposite approach to how one constructs correct programs. What you don't do to produce a correct computer program, knowing its specification, is try to specify a procedure that will, given an incomplete infinity of time, somehow transform an arbitrary program from some class of programs into the one you want. What you do is write the single exact program you want, correct-by-construction, and prove formally (model checking, dependent types, whatever you please) that it exactly obeys its specification. If you are wondering where the specification for an FAI comes from, well, that's precisely the primary research problem to solve! But it won't get solved by trying to write a function that takes as input an arbitrary instance or approximation of AIXI and returns that same instance of AIXI "transformed" to use a Friendly utility function.
1AeroRails
That sounds plausible, but how do you start to reason about such models of computation if they haven't even been properly defined yet?

I've enlarged my social circles, or the set of circles I can comfortably move in, and didn't end up on that model. I think I originally felt that way a lot, and I worked on the "feeling like doing a dramatic facepalm" by reflecting on it in light of my values. When dramatic face palms aren't going to accomplish anything good, I examine why I have that feeling and usually I find out it's because my political brain is engaged even when this isn't a situation where I'll get good outcomes from political-style conversation. You can potentially chan... (read more)

Agreed on the 2nd paragraph.

Optimally, you'd have an understanding of the options available, how you work internally, and how other people respond, so you could choose the appropriate level of anger, etc. Thus it's better to explore suggestions and see how they work than to naively apply them in all situations.

Seconded! Another phrase (whose delivery might be hard to convey in text) is "Look, I dunno, but anyways..."

Maybe the big idea is to come across as not expressing much interest in the claim, instead of opposing the claim? I think most people are happy to move on with the conversation when they get a "move on" signal, and we exchange these signals all the time.

I also like that this is an honest way to think about it: I really am not interested in what I expect will happen with that conversation (even if I am interested in the details of countering your claim.)

I don't know what you mean, but I think I see a lot of people "being polite" but failing at one of these when it would be really useful for them.

For example, you can be polite while internally becoming more suspicious and angry at the other person (#3 and #4) which starts coming out in body language and the direction of conversation. Eventually you politely end the conversation in a bad mood and thinking the other person is a jerk, when you could've accomplished a lot more with a different internal response.

2Lumifer
Maybe the other person is a jerk and is on an obnoxious power trip at your expense. If you don't get suspicious and (internally) angry you're just setting yourself up as a victim. Generic advice doesn't apply everywhere. A default "nod and slowly back away" response isn't bad but is not always useful.

I think your critique of this being only for disagreements that don't matter is too strong, and your examples miss the context of the article.

This is not a suggested resolution procedure for all humans in all states of disagreement; this is a set of techniques for when you already have and want to maintain some level of cooperative relationship with a person, but find yourself in a disagreement over something. Suggestion 5 above is specifically about disengaging from disagreements that "don't matter", and the rest are potentially useful even if it's a disagreement over something important.

-2Lumifer
So it's just unrolling the basic "don't be an asshole, be polite instead" advice?