Not all of the MIRI blog posts get cross-posted to LessWrong. Examples include the recent post "AGI outcomes and civilisational competence" and most of the conversations posts. Since the comment section on the MIRI site doesn't seem to get used much, if at all, perhaps these posts would receive more visibility and more discussion would occur if they were linked to or cross-posted on LW?
Re: "civilizational incompetence". I've noticed "civilizational incompetence" being used as a curiosity stopper. It seems like people who use the phrase typically don't do much to delve into the specific failure modes civilization is falling prey to in the scenario they're analyzing. Heaven forbid that we try to come up with a precise description of a problem, much less actually attempt to solve it.
(See also: http://celandine13.livejournal.com/33599.html)
Is the recommended courses page on MIRI's website up to date with regard to which textbooks they recommend for each topic? Should I be taking the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I'm feeling a bit unsure.
I remember lukeprog used to recommend Bermudez's Cognitive Science over many others. But then So8res reviewed it and didn't like it much, and now the current recommendation is for The Oxford Handbook of Thinking and Reasoning, which I haven't really seen anyone say much about.
There are a few other things like this. For example, So8res apparently read Heuristics and Biases as part of his review of books on the course list, but it doesn't seem to appear on the course list anymore, and under the heuristics and biases section Thinking and Deciding is recommended (once reviewed by Vaniver).
No, it's not up to date. (It's on my list of things to fix, but I don't have many spare cycles right now.) I'd start with a short set theory book (such as Naive Set Theory), follow it up with Computability and Logic (by Boolos), and then (or if those are too easy) drop me a PM for more suggestions. (Or read the first four chapters of Jaynes on Probability Theory and the first two chapters of Model Theory by Chang and Keisler.)
Edit: I have now updated the course list (or, rather, turned it into a research guide); it is fairly up-to-date (if unpolished) as of 6 Nov 14.
Luke's IAMA on reddit's r/futurology in 2012 was pretty great. I think it would be cool if he did another; a lot has changed in 2+ years. Maybe to coincide with the December fundraising drive?
The outside view.... (The whole link is quoted.)
...Yesterday, before I got here, my dad was trying to fix an invisible machine. By all accounts, he began working on the phantom device quite intently, but as his repairs began to involve the hospice bed and the tubes attached to his body, he was gently sedated, and he had to leave it, unresolved.
This was out-of-character for my father, who I presumed had never encountered a machine he couldn’t fix. He built model aeroplanes in rural New Zealand, won a scholarship to go to university, and ended up as an aeronautical engineer for Air New Zealand, fixing engines twice his size. More scholarships followed and I first remember him completing his PhD in thermodynamics, or ‘what heat does’, as he used to describe it, to his six-year-old son.
When he was first admitted to the hospice, more than a week ago, he was quite lucid – chatting, talking, bemoaning the slow pace of dying. “Takes too long,” he said, “who designed this?” But now he is mostly unconscious.
Occasionally though, moments of lucidity dodge between the sleep and the confusion. “When did you arrive?” he asked me in the early hours of this morning, having woken up wanting water. Onc
Today I had an aha moment when discussing coalition politics (I didn't call it that, but it was) with elementary schoolers, 3rd grade.
As a context: I offer an interdisciplinary course in school (voluntary, one hour per week). It gives a small group of pupils a glimpse of how things really work. Call it rationality training if you want.
Today the topic was pairs and triples. I used analogies from relationships: couples, parents, friendships. What changes in a relationship when a new element appears? Why do relationships form in the first place? And this revealed differences in how friendships work among boys and among girls. In this class, at this moment at least, the girl friendships were largely coalition politics: "If you do this you are my best friend," or "No, we can't be best friends if she is your best friend." For the boys it appears to be at least quantitatively different. But maybe just the surface differs.
In the end I represented this as graphs (kind of) on the board. And the children were delighted to draw their own coalition diagrams, even abbreviating names to single letters. You wouldn't have guessed that these diagrams were from 3rd grade.
You may be interested in "Chimpanzee Politics", by Frans de Waal, which is about exactly that (observing a group of chimps in a zoo, and how their politics and alliances evolve, with a couple of coups).
Recently, I started a writing wager with a friend to encourage us both to produce a novel. At the same time, I have been improving my job hunting by narrowing my focus on what I want out of my next job and how I want it. While doing these two activities, I began to think about what I was adding to the world. More specifically, I began to ask myself what good I wanted to make.
I realized that my wish to write a novel did not come from a desire to add a good to the world (I don't want to write a world-changing book); it was just something enjoyable. So, I looked at my job. I realized that it was much the same. I'm not driven to libraries specifically by a desire to improve the world's intellectual resources; that's just a side effect. I'm driven to them out of enjoyment of the work.
So, if I'm not producing good from the two major productions of my life, I thought about what else I could produce or if I should at all. But I couldn't think of any concrete examples of good I could add to the world outside of effective altruism. I'm not an inventor nor am I a culture-shifting artist. But I wanted to find something I could add to the world to improve it, if only for my own vanity.
I decided, for the time b...
Yes, take the Invisible Hand approach to altruism, by pursuing your own productive wellbeing you will generate wellbeing in the worlds of others. Trickle down altruism is a feasible moral policy. Come to the Dark Side and bask in Moral Libertarianism.
How Communities Work, and What Wrecks Them
One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities. The goal of discussion software shouldn't be to teach you how to click the reply button, and how to make bold text, but how to engage in civilized online discussion with other human beings without that discussion inevitably breaking down into the collective howling of wolves.
Behavior patterns that grind communities down: endless contrarianism, axe-grinding, persistent negativity, ranting, and grudges.
I posted a link to the 2014 survey in the 'Less Wrong' Facebook group, and some people commented that they filled it out. Another friend of mine started a Less Wrong account to comment that she did the survey, and got her first karma. Now I'm curious how many lurkers become survey participants, and are then incentivized to start accounts to get the promised karma by commenting that they completed it. If it's a lot, that's cool, because having one's first comment upvoted right after registering an account on Less Wrong seems like a way of overcoming the psychological barrier of 'oh, I wouldn't fit in as an active participant on Less Wrong...'
If you, or someone you know, got active on Less Wrong for the first time because of the survey, please reply as a data point. If you're a regular user who has a hypothesis about this, please share. Either way, I'm curious to discover how strong an effect this is, or is not.
Someone has created a fake Singularity Summit website.
(Link is to MIRI blog post claiming they are not responsible for the site.)
MIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.
Laundry (plus ironing, if you have clothes that require that - I try not to), washing up (I think this is called doing the dishes in America), mopping, hoovering (vacuuming), dusting, cleaning bathroom and kitchen surfaces, cleaning toilets, cleaning windows and mirrors. That might cover the obvious ones? Seems like most of them don't involve much learning but do take a bit of getting round to, if you're anything like me.
I'd add, not leaving clutter lying around. It both collects dust, and makes cleaning more of an effort. Keep it packed away in boxes and cupboards. (Getting rid of clutter entirely is a whole separate subject.)
It's really hard to estimate that accurately, because for me something like 90% of cleanliness is developing habits that couple it with the tasks that necessitate it: always and automatically washing dishes after cooking, putting away used clothes and other sources of clutter, etc. Habits don't take mental effort, but for the same reason it's almost impossible to quantify the time or physical effort that goes into them, at least if you don't have someone standing over you with a stopwatch.
For periodic rather than habitual tasks, though, I spend maybe half an hour a week on laundry (this would take longer if I didn't have a washer and dryer in my house, though, and there are opportunity costs involved), and another half hour to an hour on things like vacuuming, mopping, and cleaning porcelain and such.
Assume that Jar S contains just silver balls, whereas Jar R contains ninety percent silver balls and ten percent red balls.
Someone secretly and randomly picks a jar, with an equal chance of choosing either. This picker then takes N randomly selected balls from his chosen jar with replacement. If a ball is silver he keeps silent, whereas if a ball is red he says “red.”
You hear nothing. You make the straightforward calculation using Bayes’ rule to determine the new probability that the picker was drawing from Jar S.
But then you learn something. The red balls are bombs and if one had been picked it would have instantly exploded and killed you. Should learning that red balls are bombs influence your estimate of the probability that the picker was drawing from Jar S?
I’m currently writing a paper on how the Fermi paradox should cause us to update our beliefs about optimal existential risk strategies. This hypothetical is attempting to get at whether it matters if we assume that aliens would spread at the speed of light killing everything in their path.
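The "straightforward calculation using Bayes' rule" in the hypothetical can be sketched as follows (a minimal illustration, assuming each draw is independent and "silence" just means no red ball was drawn; whether this naive posterior survives the anthropic twist is exactly the open question):

```python
# Posterior probability that the picker drew from Jar S, given N silent draws.
# Jar S is all silver, so it is silent with probability 1 per draw.
# Jar R is 90% silver, so it is silent with probability 0.9 per draw.
# The jar was chosen by a fair coin, so both priors are 0.5.

def p_jar_s_given_silence(n_draws: int) -> float:
    prior_s = prior_r = 0.5
    likelihood_s = 1.0 ** n_draws   # Jar S can never produce "red"
    likelihood_r = 0.9 ** n_draws   # Jar R is silent only if every draw is silver
    evidence = prior_s * likelihood_s + prior_r * likelihood_r
    return prior_s * likelihood_s / evidence

for n in (1, 5, 20):
    print(n, round(p_jar_s_given_silence(n), 4))
```

As N grows, continued silence pushes the posterior toward Jar S; the question posed above is whether learning that silence was the only state you could have survived to observe should change this update.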
I had a conversation with another person regarding this Leslie's-firing-squad type of stuff. Basically, I came up with an analogy of cavemen facing lethal threats. It's pretty clear - from the outside - that the cavemen who do probability correctly, and don't do anthropic reasoning with regard to tigers in the field, will do better at mapping the lethal dangers in their environment.
I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?
To specify, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.
The other is nonverbal and based on understanding terms, definitions, theorems, and the ways they connect to each other on an intuitive level (note: this takes a while to achieve, and I haven't always managed it) and letting my mind think it out, making logical steps of inference in my head, somewhat less consciously. This type of thinking is much faster, though it has a tendency to get derailed or stuck and produces good results less reliably.
Which of those, if any, sounds closer to the way you think about math? (Note: most of the people I've talked to about this don't polarize it quite so much and tend to do a bit of both, i.e. thinking through a pro...
I've recently started a tumblr dedicated to teaching people what amounts to Rationality 101. This post isn't about advertising that blog, since the sort of people that actually read Less Wrong are unlikely to be the target audience. Rather, I'd like to ask the community for input on what are the most important concepts I could put on that blog.
(For those that would like to follow this endeavor, but don't like tumblr, I've got a parallel blog on wordpress)
I'd like to ask LessWrong's advice. I want to benefit from CFAR's knowledge on improving one's instrumental rationality, but being a poor graduate I do not have several thousand in disposable income nor a quick way to acquire it. I've read >90% of the sequences, but despite having read lukeprog's and Alicorn's sequences I am aware that I do not know what I do not know about motivation and akrasia. How can I best improve my instrumental rationality on the cheap?
Edit: I should clarify, I am asking for information sources: blogs, book recommendations, particularly practice exercises and other areas of high quality content. I also have a good deal of interest in the science behind motivation, cognitive rewiring and reinforcement. I've searched myself and I have a number of things on my reading list, but I wanted to ask the advice of people who have already done, read or vetted said techniques so I can find and focus on the good stuff and ignore the pseudoscience.
I've been to several of CFAR's classes throughout the last 2 years (some test classes and some more 'official' ones) and I feel like it wasn't a good use of my time. Spend your money elsewhere.
I didn't learn anything useful. They taught, among other things, "here's what you should do to gain better habits". Tried it and didn't work on me. YMMV.
One thing that really irked me was the use of cognitive 'science' to justify their lessons 'scientifically'. They did this by using big scientific words that felt like an attempt to impress us with their knowledge. (I'm not sure what the correct phrase is - the words weren't constraining beliefs? didn't pay rent? They could have made up scientific-sounding words and it would have had the same effect.)
Also, they had a giant 1-2 page listing of citations that they used to back up their lessons. I asked some extremely basic questions about papers and articles I've previously read on the list and they had absolutely no idea what I was talking about.
ETA: I might go to another class in a year or two to see if they've improved. Not convinced that they're worth donating money towards at this moment.
(This is Dan from CFAR again)
We have a fair amount of data on the experiences of people who have been to CFAR workshops.
First, systematic quantitative data. We send out a feedback survey a few days after the workshop which includes the question "0 to 10, are you glad you came?" The average response to that question is 9.3. We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.
Less systematically but in more fleshed-out detail, there are several reviews that people who have attended a CFAR workshop have posted to their blogs (A, B+pt2, C+pt2) or to LW (1, 2, 3). Ben Kuhn's (also linked above under "C") seems particularly relevant here, because he went into the workshop assigning a 50% probability to the hypothesis that "The workshop is a standard derpy self-improvement technique: really good at making people feel like they’re getting better at things, but has no actual effect."
In-person conversations that I've had with alumni (including some interviews that ...
(Dan from CFAR here)
Hi cursed - glad to hear your feedback, though I'm obviously not glad that you didn't have a good experience at the CFAR events you went to.
I want to share a bit of information from my point of view (as a researcher at CFAR) on 1) the role of the cognitive science literature in CFAR's curriculum and 2) the typical experience of the people who come to a CFAR workshop. This comment is about the science; I'll leave a separate comment about thing 2.
Some of the techniques that CFAR teaches are based pretty directly on things from the academic literature (e.g., implementation intentions come straight from Peter Gollwitzer's research). Some of our techniques are not from the academic literature (e.g., the technique that we call "propagating urges" started out in 2011 as something that CFAR co-founder Andrew Critch did).
The not-from-the-literature techniques have been through a process of iteration, where we theorize about how we think the technique works, then (with the aid of our best current model) we try to teach people to use the technique, and then we get feedback on how it goes for them. Then repeat. The "theorizing" step of this process inclu...
Do you think it was unhelpful because you already had a high level of knowledge on the topics they were teaching and thus didn't have much to learn or because the actual techniques were not effective?
I don't believe I had a high level of knowledge on the specific topics they were teaching (behavior change, and the like). I did study some cognitive science in my undergraduate years, and I take issue with the 'science'.
Do you think your experience was typical?
I believe that the majority of people don't get much, if anything, from CFAR's rationality lessons. However, after the lesson, people may be slightly more motivated to accomplish whatever they want to, in the short term just because they've paid money towards a course to increase their motivation.
How useful do you think it would be to an average person?
There was one average person at one of the workshops I attended - i.e., someone who had never read LessWrong or other rationality material. He fell asleep a few hours into the lesson; I don't think he gained much from attending. I'm hesitant to extrapolate, because I'm not exactly sure what an average person entails.
An average rationalist?
I haven't met many rationalists, but I believe they wouldn't benefit much, if at all.
Exposure therapy: fail on small things, then larger ones, where it is obvious that failure doesn't mean death. First remember past experiences where you failed and did not die, then go into new situations.
Seeking LWist Caricatures
I've written a cult-like "Bayesian Conspiracy" of mostly rebellious post-apocalypse teens into existence - and now I'm looking for individuals to populate it with. What I /want/ to do is come up with as many ways that someone who's part of the LW/HPMOR/Sequences/Yudkowsky-ite/etc memeplex could go wrong, that tend not to happen to members of the regular skeptical community. Someone who's focused on a Basilisk, someone on Pascal's Mugging, someone focused on dividing up an infinity of timelines into unequal groups...
Put another way, I've been trying to think of the various ways that people outside the memeplex see those inside it as weirdos.
(My narrative goal: For my protagonist to experience trying to be a teacher. I'd be ecstatic if I could have at least one of the cultists be able to teach her a thing or two in return, but since I've based her knowledge of the memeplex on mine, that's kind of tricky to arrange.)
I can't guarantee that I'll end up spending more than a couple of sentences on any of this - but I figure that the more ideas I have to try building with, the more likely I will.
(Also asked on Reddit at https://www.reddit.com/r/rational/comments/2kopgx/qbst_seeking_lwist_caricatures/ .)
The person who uses ev psych to justify their romantic preferences to potential and current partners. (There's a generalisation of this that I'm not sure how to describe, but I've fallen into it when talking with friends about the game-theoretical value of friendship.)
Hey, does anyone else struggle with feelings of loneliness?
What strategies have you found for either dealing with the negative feelings, or addressing the cause of loneliness, and have they worked?
Do you feel lonely because you spend your time alone, or because you feel you don't connect with the people with whom you spend your time?
Two separate problems.
Bayesianism and Causality, or, Why I am only a Half-Bayesian (Judea Pearl)
“The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships.”
Suppose I were an unusually moral, unusually insightful used car saleswoman. I have studied the dishonest sales techniques my colleagues use and, because I am unusually wise, worked out the general principles behind them. I think it is plausible that this analysis is new, though I guess it could already exist in an obscure journal.
Is it moral of me to publish this research, or should I practice the virtue of silence?
There have been discussions here in the past about whether "extreme", lesswrong-style rationality is actually useful, and why we don't have many extremely successful people as members of the community.
I've noticed that Ramit Sethi often uses concepts we talk about here, but under different names. I'm not sure if he's as high a level as we're looking for as evidence, but he appears to be extremely successful as a businessman. I think he started out in life/career coaching, and then switched to selling online courses when he got popular. His stuff ...
Side point: I've found material like his, "concepts we talk about here, but under different names", extremely useful when I want to explain the idea of rationality to someone without having to work around the lesswrong lingo and trying to have a conversation while tabooing all the lesswrong phrases and cached thoughts.
Those who are currently using Anki on a mostly daily or weekly basis: what are you studying/ankifying?
To start: I'm working on memorizing programming languages and frameworks because I have trouble remembering parameters and method names.
I've seen a few discussions recently where people seem to argue past one another because they're using different senses of the terms "subjective" and "objective".
Some things are called "subjective" because they are parametrized by subject. For instance, everyone who can see has a field of vision, but no two people have the same field of vision (because two people can't stand in the same spot at the same time). However, we can reason and calculate accurately about someone else's field of vision.
Other things are called "sub...
My thoughts on the following are rather disorganized and I've been meaning to collate them into a post for quite some time but here goes:
Discussions of morality and ethics in the LW-sphere overwhelmingly tend to short-circuit to naive harm-based consequentialist morality. When pressed, I think most will state a far-mode meta-ethical version that acknowledges other facets of human morality (disgust, purity, fairness, etc.) that would get wrapped up into a standardized utilon currency (I believe CEV is meant to do this?), but when it comes to actual policy (EA) there is too much focus on optimizing what we can measure (lives saved in Africa) instead of what would actually satisfy people. The drunken moral philosopher looking under the lamppost for his keys because that's where the light is. I also think there's a more-or-less unstated assumption that considerations other than harm are low-status.
It is extremely important to find out how to have a successful community without sociopaths.
(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like "oh, we can't send this person away just because of X; they also have so many good traits" or "I don't agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole". I believe that avoiding these - and maybe many other - failure modes is critical if we ever want to have a Friendly society.)
It is extremely important to find out how to have a successful community without sociopaths.
It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?
(The analogy to Friendly AI is worth considering, though.)
Unfortunately, I don't feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don't think I have a solution. I just noticed a danger, and general unwillingness to debate it.
Probably the best thing I can do right now is to recommend good books on this topic. That would be:
I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.
As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes greater and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I have invited there by visions of rationality and "winning". (And "something bad" offline can be much worse than mere systematic downvoting.) Especially if we would achieve some kind of power in real li...
I really doubt it's possible to convey this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people... and yet all this left me completely unprepared, and I was confused and helpless like a small child. My only luck was the ability to run away.
If I tried to estimate a sociopathy scale from 0 to 10: in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I hadn't met that one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe to me what a 10 looks like, I would say "yeah, yeah, I know exactly what you mean" while having a model of a 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)
Imagine a person who does gaslighting as easily as you do breathing; probably after decades of everyday practice. A person able to look into your eyes and say "2 + 2 = 5" so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go aw...
I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this. I think there have actually been several threads about beliefs that most of LW would disagree with.
Summary of my understanding of it: P-zombies require that there be no causal connection between consciousness and, well, anything, including things p-zombie philosophers say about consciousness. If this is the case, then a non-p-zombie philosopher talking about consciousness also isn't doing so for reasons causally connected to the fact that they are conscious. To effectively say "I am conscious, but this is not the cause of my saying so, and I would still say so if I wasn't conscious" is absurd.
Can anyone recommend any good books/resources on dyspraxia?
Ideally suitable for adults with a reasonable background understanding of psychology. Most of the stuff I've been able to find has been aimed for teachers/parents.
A look at the cost of bad self-imposed regulations in businesses.
I keep finding the statistic that "one pint of donated blood can save up to 3 lives!" But I can't find the average number of lives saved from donating blood. Does anyone know/is able to find?
I did a little research to find out whether there are free survey sites that offer "check all answers that apply" questions.
Super Simple Survey probably does, but goddamned if I'll deal with their website to make sure.
On the almost free side, Live Journal enables fairly flexible polls (including checkboxes) for paid accounts, and you can get a paid account for a month for $3. Live Journal is a social media site.
It has been experimentally shown that certain primings and situations increase utilitarian reasoning; for instance, people are more willing to give the "utilitarian" answer to the trolley problem when dealing with strangers, rather than friends. Utilitarians like to claim that this is because people are able to put their biases aside and think more clearly in those situations. But my explanation has always been that it's because these setups are designed to maximise the psychological distance between the subject and the harm they're going to infl...
The following model is my new hypothesis for generating better OKCupid profiles for myself while remaining honest.
I brainstorm what I want to include in my profile in a positive way without lying. This may include goal-factoring on what honest signals I'm trying to send. Then, I see how what I brainstormed fits into the different prompts on OKCupid profiles.
I generate multiple clause-like chunks for each item/object/quality of myself I'm trying to express in my profile. I then A/B test the options for each item across a cross-section of individuals sim
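The A/B step described above can be sketched minimally as follows (the data shape, variant names, and numbers are hypothetical; this just picks the clause variant with the highest observed response rate, ignoring statistical significance):

```python
# Pick the best-performing variant of a profile chunk by raw response rate.
# Each variant maps to (responses, people_shown); both counts are hypothetical.

def best_variant(results: dict) -> str:
    rates = {name: responses / shown
             for name, (responses, shown) in results.items()}
    return max(rates, key=rates.get)

results = {
    "variant_a": (7, 40),   # 17.5% response rate
    "variant_b": (12, 40),  # 30.0% response rate
    "variant_c": (5, 40),   # 12.5% response rate
}
print(best_variant(results))  # -> variant_b
```

With sample sizes this small the differences may be noise, so in practice one would want either more exposures per variant or a significance test before committing to a winner.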
Two years ago, I wrote this cringe-worthy thing.
I can't tell if things have gotten worse, or if they've stayed the same. I lean toward worse.
4 years ago, I asked a psychiatrist about my soul-crushing Akrasia issues. He prescribed Focalin, at 5mg/day for the first week, then 10mg/day for the second. The first week saw improvements--I didn't feel like I had much choice over what I wound up focusing on, but I actually finished things--the second week did not work at all, and a pile of unpleasant things all hit at once on one of those nights. So we switched to...
Are there lists of effective charities for specific target domains? For social reasons, I sometimes want to donate to a charity focused on some particular cause; but given that constraint, I'd still like to make my donation as effective as possible.
This article discusses a paper that seems interesting from the perspective of effective altruism and how people's behavior changes based on where they think their money might be going:
If you want a link directly to the paper, that link is both in the article and reposted here:
http://www.sciencemag.org/content/346/6209/632
Short summary: when considering donations, people in the study donated more when they knew their donation was not going to overhead.
After reading through the Quantum Physics sequence, I would like to know more about the assumptions and theories behind the idea that an amplitude distribution factorizes, or approximately factorizes. Where would be a good place to learn more about this? I would appreciate some recommendations for journal articles to read, or specific sections of specific books, or if there's another better way to learn this stuff, please let me know.
In the blog posts in the sequence, an analogy comes up a few times, saying that it doesn't make sense to distinguish betwe...
I stumbled across an article about Amelia, a program that can supposedly perform low-level human jobs like call center operator. A brief search hasn't turned up anything particularly illuminating. Has this been discussed on LW before?
On the one hand, everything I read about her sounds sufficiently vague that I suspect it's hype (and possibly native advertising). Still, I'm curious about the underlying tech - is it some kind of substantial improvement over past attempts, or is she just Siri++ in the way that Eugene Goostman was a slightly better chatbot?
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.