Open thread, 21-27 April 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Thread started before the end of the last thread to encourage Monday as first day.
Pure curiosity question: What is the general status of UDT vs. TDT among y'all serious FAI research people? MIRI's publications seem to exclusively refer to TDT; people here on LW seem to refer pretty much exclusively to UDT in serious discussion, at least since late 2010 or so; I've heard it reported variously that UDT is now standard because TDT is underspecified, and that UDT is just an uninteresting variant of TDT that hardly merits its own name. What's the deal? Has either one been fully specified/formalized? Why is there such a discrepancy between MIRI's official work and discussion here in terms of choice of theory?
Why do you say that? If I do a search for "UDT" or "TDT" on intelligence.org, I seem to get about an equal number of results.
This seems accurate to me. I think what has happened is that UDT has attracted a greater "mindshare" on LW, to the extent that it's much easier to get a discussion about UDT going than about TDT. Within MIRI it's probably more equal between the two.
As I recall, Eliezer was actually the one who named UDT. (Here's the comment where he called it "updateless", which everyone else then picked up. In my original post I never gave it a name but just referred to "this decision theory".)
There have been a number of attempts to formalize UDT, which you can find by searching for variations on "formal UDT" on LW. I'm not aware of a similar attempt to formalize TDT, although this paper gives some hints about how it might be done. It's not really possible to "fully" specify either one at this time because both need to interface with a to-be-discovered solution to the problem of logical uncertainty, and at this point we don't even know the type signature of such a solution. In the attempts to formalize UDT, people either make a guess as to what the type signature is, or side-step the problem by assuming that all relevant logical facts can be deduced by the agent.
Thanks! This is exactly the kind of answer I was hoping for. A lot of it was what I had sort of deduced from looking at MIRI docs and stuff, but having it laid out explicitly seems to have clicked the missing elements into place and I feel like I understand it much better now.
I'm not serious, but I'd say that there's little actual use of TDT because it requires us to solve the difficult problem of finding the right causal and logical structure of the problem - this can be handwaved in by the user, but doing that feels awkward. Folk-UDT ("just execute the best strategy") is sufficient for most purposes, both in application and in e.g. trying to understand logical uncertainty.
On the other hand, using causal structure is what lets us consider hypotheticals properly - so TDT will not have some issues that typical-UDT does with hypotheticals about its own actions. On the mutant third hand, TDT's solution of adding logical nodes to the causal structure might just be a simplification of something deeper, so it's not like we (us non-serious decision-theory dilettantes) should put all our eggs in one basket.
What is an example of an issue that UDT has with hypotheticals that TDT does not?
The 5 and 10 problem is basically what happens when your agent asks "what are the logical implications if 5 is chosen?" rather than "If we do causal surgery such that 5 is chosen, what's the utility?"
There are other ways to avoid the 5 and 10 problem, but I think they're less general than using causality.
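The contrast above can be caricatured in code. This is a toy sketch of my own, not any formal decision theory: the "causal surgery" agent simply evaluates each action in a counterfactual world, while the naive "logical implication" agent can be derailed by a spurious conditional it happened to derive (e.g. "I can prove I won't take 10, so 'I take 10' implies anything, including utility 0"). All the function names here are made up for illustration.

```python
def utility(action):
    """The world of the 5-and-10 problem: taking 10 yields 10, taking 5 yields 5."""
    return action

def causal_agent(actions):
    # Causal surgery: imagine setting the action to each value and compare outcomes.
    return max(actions, key=utility)

def naive_logical_agent(actions):
    # Caricature: the agent evaluates actions by the implications it derived,
    # and a spurious implication ("taking 10 yields 0") makes 5 look better.
    spurious = {10: 0, 5: 5}
    return max(actions, key=lambda a: spurious[a])

print(causal_agent([5, 10]))         # -> 10
print(naive_logical_agent([5, 10]))  # -> 5
```

The point is only that asking "what follows logically if I choose 5?" lets bad self-referential reasoning contaminate the evaluation, whereas surgery severs the action from the agent's reasoning about itself.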
Here's one attempt to further formalize the different decision procedures: http://commonsenseatheism.com/wp-content/uploads/2014/04/Hintze-Problem-class-dominance-in-predictive-dilemmas.pdf (H/T linked by Luke)
I get confused when people use language that talks about things like "fairness", or whether people are "deserving" of one thing or another. What does that even mean? And who or what is to say? Is it some kind of carryover from religious memetic influence? An intuition that a cosmic judge decides what people are "supposed" to get? A confused concept people invoke to try to get what they want? My inclination is to just eliminate the whole concept from my vocabulary. Is there a sensible interpretation that makes these words meaningful to atheist/agnostic consequentialists, one that eludes me right now?
I am with Stanislaw Lem -- it's hard to communicate in general, not just about fairness. I find that many communication scenarios in life resemble first contact situations.
It's not a theistic concept - if anything, it predates theology (some animals have a sense of fairness, for example). We build social structures to enforce it, because those structures make people better off. The details of fairness algorithms vary, but the idea that people shouldn't be cheated is quite common.
Here are some things people might describe as "unfair":
What sorts of things do you see in common among these situations?
Your list seems a bit... biased.
Let's throw in a couple more situations:
While people say "That's not fair" in the above examples and in these, it seems there are two different clusters of what they mean. In the first group, the objection seems to be to self-serving deception of others, particularly violation of agreements (or what social norms dictate are implicit agreements). Your examples don't involve deception or violation of agreements (except perhaps in the case of eminent domain), and the objection is to inequality. I find it strange that the same phrase is used to refer to such different things.
I think you could say that in both groups, people are objecting because society is not distributing resources according to some norm of what qualities the resource distribution is supposed to be based on.
In the first group of examples, people are deceiving others and violating agreements, and society says that people are supposed to be rewarded for honest behavior and keeping agreements.
For the second group of examples:
Regardless of what your ideal society looks like, creating it probably requires consistently maintaining some algorithm that rewards certain behaviors while punishing others. Fairness violations could be thought of as situations where the algorithm doesn't work, and people are being rewarded for things that an optimal society would punish them for, or vice versa.
You could also say that in both groups, there is actually an implicit agreement going on, with people being told (via e.g. social ideals and what gets praised in public) that "if you do this, then you'll be rewarded". If you buy into that claim, then you will feel cheated if you do what you think you should do, but then never get the reward.
Of course, the situation is made more complicated by the fact that there is no consistent, universally agreed-upon norm of what the ideal society should be, nor of what would be the optimal algorithm for creating it. People also have an incentive to push ideals which benefit them personally, whether as a conscious strategy or as an unconscious act of motivated cognition. So it's not surprising that people will have widely differing ideas of what "fair" behavior actually looks like.
However looking at reality, the phrase is used in all these ways, isn't it?
As Bart Wilson mentions here, a century ago the word "fairness" referred exclusively to the first cluster. However, due to various political developments during the past century it has drifted and now refers to a confused mix of both.
Indeed it is, which is evidence for the two different types of situations feeling similar to people.
That's odd ... I was specifically trying to choose examples that would be relatively uncontroversial — cases of cheating, betrayal of trust, abuse of power, and so on; as opposed to cases of mere inequality of outcome.
That's a bias, isn't it? :-)
If you're choosing examples to construct a definition from, already having a definition in mind makes the exercise pointless.
If you choose examples of fraud and abuse of power, you essentially force the definition of "unfair" to be "fraud and abuse of power".
Wow, and here I thought I'd be dinged for including such mildly politicized examples as the police one and the collective-bargaining one. Instead, I get dinged for not including a bunch of stuff likely to provoke a political foofaraw about class, gender, or eminent domain? Weird.
Okay, this is getting excessively meta. I'm done here.
The sense of fairness evolved to make our mental accounting of debts (that we owe and are owed) more salient by virtue of being a strong emotion, similar to how a strong emotion of lust makes the reproductive instinct so tangible. This comes in handy because humans are highly social and intelligent and engage in positive-sum economic transactions, so long as both sides play fair... according to your adapted sense of what's fair. If you don't have a sharp sense of fairness, other people might walk all over you, which is not evolutionarily adaptive. See "The Moral Animal" or "Nonzero" by Robert Wright, or the chapter "Family Values" in Steven Pinker's "How the Mind Works."
This sense of fairness may have been co-opted at other levels, like a religious or political one, but it's quite instinctual. Very young children have a strong sense of fairness before they can reason their way to it, just as they acquire language before they can explicitly/consciously reason from grammar rules to produce grammatical sentences. It's very ingrained in our mental structure, so I think it would take quite an effort to "wipe the concept."
"Fairness" generally means one out of two things.
Either it's, basically, a signal of attitude -- to call something "fair" is to mean "I approve of it" -- or it is a rhetorical device in the sense of a weapon in an argument.
I think that people generally have gut ideas about what fairness entails, but they are fuzzy, bendable, and subject to manipulation, both by cultural norms and by specific propaganda/arguments.
According to Moral Foundations Theory, fairness is one of the innate moral instincts.
According to Scott Adams, fairness was invented so children and idiots can participate in arguments.
I think we have a fairness instinct mostly so we can tell clever stories about why our desire for more stuff is more noble than greed.
The word "fairness" has been subject to a lot of semantic drift during the past century. Here is a blog post by Bart Wilson, describing the older definition, which frankly I think makes a lot more sense.
It's a cultural norm. If someone constantly defects in the prisoner's dilemma, he's violating the norm of fairness and deserves to be punished for doing so.
Except that in a lot of accusations of "unfairness" there is no obvious prisoner-dilemma-defection going on.
Not lynching rich bankers means choosing to cooperate. Having a social landscape that's peaceful and without much violence isn't something to take for granted.
That is not a prisoner's dilemma.
We sort of have an informal agreement: the proletarians refrain from making a revolution and hanging the rich capitalists, in return for society as a whole working in a way that makes everyone better off.
Rich bankers who don't fulfill their side of the agreement, working to make everyone in society better off, are defecting from it.
No, we don't have anything of that sort.
Marx was wrong. He is still wrong.
How would you apply that to Lumifer's second example?
The usual way groups of girls deal with this is to call the girl who actually twists a lot of guys around her little finger a slut. The punishment isn't physical violence, but it's there.
I've been struggling with how to improve in running all last year, and now again this spring. I finally realized (after reading a lot of articles on lesswrong.com, and specifically the martial arts of rationality posts) that I've been rationalizing that Couch to 5k and other recommended methods aren't for me. So I continue to train in the wrong way, with rationalizations like: "It doesn't matter how I train as long as I get out there."
I've continued to run intensely and in short bursts, with little success, because I felt embarrassed to have to walk any, but I keep finding more and more people who report success with programs where you start slowly and gradually add in more running.
Last year, I experimented with everything except that approach, and ended up hurting myself by running too far and too intensely several days in a row.
It's time to stop rationalizing, and instead try the approach that's overwhelmingly recommended. I just thought it would be interesting to share that recognition.
You might also want to work on eliminating embarrassment.
Have you considered not running as your primary exercise program? If you aren't specifically going for the performance of running, I would shelve it and instead cut calories (assuming you have extra weight to lose) and lift heavy things at the gym. Distance running is great for distance running.
I have been in multiple running groups and they are great for achieving goals like 26.2 miles, but after that, I wanted to optimize for looks and not for long distances (any more).
Unfortunately, I live in a rural area where gyms are hard to come by. I have enjoyed running for its own sake in the past, that's a part of why I want to get back into running shape, but I will try to add in some body weight exercises as well as my running.
You don't need a gym to exercise. Google up "paleo fitness", Crossfit is full of advice about how to build a basic gym in your garage, etc. etc.
The best general advice I can give you is:
Be honest with yourself when determining your current abilities. There's no shame in building slowly. It just means you get to improve even more.
Not every day is a hard day. There are huge benefits to varying your workouts. If you're running about the same distance each day you run, you're doing it wrong. Some days should be shorter, more intense intervals broken up by very slow jogs or walks, while other days should be "active recovery" days of short, slow runs, while other days you might go for distance and a sustained pace. Just to give an idea, even elite athletes will not usually do more than 2-3 hard (interval) days each week. You will want to start with 0 or 1.
Watch your volume: Slowly increase your total miles / week over time. Make sure you start low enough not to get repetitive stress injuries.
I was once a fairly successful runner and have a lot of experience with designing training programs for both distance running and weightlifting. I'd be happy to help you design your running program or to look over your program once you do some research and put something together. Let me know!
Research on mindfulness meditation
Mindfulness meditation is promoted as though it's good for everyone and everything, and there's evidence that it isn't-- going to sleep is the opposite of being mindful, and a mindfulness practice can make sleep more difficult. Also, mindfulness meditation can make psychological problems more apparent to the conscious mind, and more painful.
The difficulties which meditation can cause are known to Buddhists, but are not yet widely known to researchers or the general public. The commercialization of meditation is part of the problem.
This isn't a question, just a recommendation: I recommend everyone on this site who wants to talk about AI familiarize themselves with AI and machine learning literature, or at least the very basics. And not just stuff that comes out of MIRI. It makes me sad to say that, despite this site's roots, there are a lot of misconceptions in this regard.
Do you have a recommendation for a resource that explains the basics in a decent matter?
Not like I have anything against AI and machine learning literature, but can you give examples of misconceptions?
Not so much a specific misconception, but understanding the current state of AI research and understanding how mechanical most AI is (even if the mechanisms are impressive) should make you realize that being a "Friendly AI researcher" is a bit like being a unicorn tamer (and I mean that in a nice way - surely some enterprising genetic engineer will someday make unicorns).
Edit: Maybe I was being a little snarky - my meaning is simply this: Given how little we know about what actual Strong AI will look like (And we genuinely know very very little), any FAI effort will face tremendous obstacles in transforming theory into practice - both in the fact that the theory will have been developed without the guidance that real-world constraints and engineering goals provide, and the fact that there is always overhead and R&D involved in applying theoretical research. I think many people here underestimate this vast difference.
OK. I've seen a lot of people here say that Eliezer's idea of a 'Bayesian intelligence' won't work or is stupid, or is very different from how the brain works. Those familiar with the machine intelligence literature will know that, in fact, hierarchical Bayesian methods (or approximations to them) are the state of the art in machine learning, and recent research suggests they very closely model the workings of the cerebral cortex. For instance, refer to the book "Data Mining: Concepts and Techniques, 3rd edition" (by Han and Kamber) and the 2013 review "Whatever next? Predictive brains, situated agents, and the future of cognitive science" by Andy Clark: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=8918803
The latter article has a huge number of references to relevant machine learning and cognitive science work. The field is far far larger and more active than many people here imagine.
Nonpaywalled Clark link: http://users.monash.edu/~naotsugt/Tsuchiya_Labs_Homepage/Bryan_Paton_files/Commentary%20.pdf
What would you consider the "very basics"?
What are some of the most blatant? Sorry to ask a question so similar to Squark's.
How good is the case for taking adderall if you struggle with a lot of procrastination and have access to a doctor to give you a prescription?
How do I decide whether to get married?
Pros
Cons
She has said that she doesn't want to marry me if she's just my female best friend that I sleep with. But I don't know how to evaluate what she's asking. There are a number of possibilities. Maybe I don't feel the requisite feelings and thus she wouldn't want to be married. Maybe I do have the feelings and I have no way to evaluate whether I do or not. Maybe I'm not ever going to feel some extra undetected thing X, ever, and so I should just go through the motions saying that I do, and our marriage prospects are entirely unchanged. Maybe this is just some signalling ritual we have to go through.
We're both concerned that I've never really had a relationship with anyone other than her, so there are no points of comparison for me to make.
In your list you didn't mention the topic of having children. If you marry someone with the intention of spending the rest of your life together with them, I think you should be on the same page with regard to having children before you marry.
What exactly do you think/hope will change between the current situation (which I assume involves you two living together) and the situation if you were to marry?
I don't know what the significance of marriage is for you, except symbolic. IMO the truly critical point is having kids. You probably want to have a stable income before that.
Regarding things you're "just supposed to know": the same thing happens to me with my wife. It hasn't stopped us from being together for 10 years and raising a 4-year-old son. Different people see things differently and have different assumptions about what is "obvious". The important thing is being mutually patient and forgiving (I know it's easier said than done, but it's doable).
Regarding the "extra feeling". Don't really know what to tell you. It is difficult to compare emotional experiences of different people. When our relationship started, it was mad, passionate infatuation. Now it's something calmer but it is obvious to me we love each other.
I had few relationships apart from my wife and virtually no serious relationships. Never bothered me.
Don't get married unless there is a compelling reason to do so. There's a base rate of 40-50% for divorce, and at least some proportion of existing marriages are unhealthy and unhappy. Divorce is one of the worst things that can happen to you, and many of the benefits of marriage to happiness are because happier people are more likely to get married in the first place.
What are her feelings about you? Are you "just" her "male best friend that she sleeps with"? Your post comes across as rather asymmetric.
Aren't you "both concerned" that she had too many relationships and so may decide that you are not for her precisely because she has these "points of comparison"? I suspect that she is the dominant partner in this relationship, possibly because she is more mentally mature, and this is often a warning flag.
Do you get mad at her for things she is just supposed to know to do, say or not say?
Anyway. DO NOT GET MARRIED YET until you figure out how to be an equal in this relationship (and if you think that you are, then you are fooling yourself).
And married women make less, so even assuming the arrow of causality is entirely from marital status to income it's not clear to me what would happen to your combined income.
Even if your combined income decreases, your combined consumption probably increases, because many goods are non-rivalrous in a marriage situation. See here for a discussion.
I believe you meant decreases.
I think he means increases. If your consumption decreases, then your standard of living is falling and that doesn't sound good at all.
Good point, but doesn't that also apply to unmarried cohabitation?
EDIT: BTW, the bottom of your post says “[...] marriage makes family income go up via the large male marriage premium minus the small female marriage penalty”, which answers my question upthread.
I was feeling lethargic and unmotivated today, but as a way of not-doing-anything, I got myself to at least read a paper on the computational architecture of the brain and summarize the beginning of it. Might be of interest to people; it also briefly touches upon meditation.
EDIT: Just realized, this model explains tulpas. Also has connections to perceptual control theory, confirmation bias and people's general tendency to see what they expect to see, embodied cognition, the extent to which the environment affects our thought... whoa.
Could you elaborate? I haven't read the paper, but this connection doesn't seem obvious to me.
O_O
This explains SO MUCH of things I feel from the inside! Estimating a small probability it'll even help deal with some pretty important stuff. Wish I could upvote a million times.
I have trouble with the statement "In the end, we're all insignificant." I mean I get the sentiment, which is of awe and aims to reduce pettiness. I can get behind that. But I have trouble if someone uses it in an argument, such as: "Why bother doing X; we're all insignificant anyway."
Because, if you look closely, "significance" is not simply a property of objects. It is, at the very least, a function of objects, agents and scales. For example you can say that we're all insignificant on the cosmic scale; but we're also all insignificant on the microscopic scale. We're also insignificant for some trees in the middle of the rainforest or an alien in another galaxy. We're almost completely insignificant to some random person in the past, present or future, but much more significant to the people around us.
To put it differently: given two actions A & B with expected utilities U & V, you should choose A over B iff U > V. Only the relative ordering of U & V is meaningful, not the absolute difference (the utility function can be shifted and positively rescaled arbitrarily anyway).
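The ordering claim above can be checked mechanically. A minimal sketch (names and the specific numbers are mine, chosen purely for illustration): a positive affine rescaling U' = a*U + b with a > 0 never changes which action is preferred.

```python
def best_action(utilities):
    """Pick the action with the highest expected utility."""
    return max(utilities, key=utilities.get)

# Two actions with arbitrary expected utilities.
U = {"A": 3.0, "B": 1.5}

# Positive affine rescaling: multiply by 100, shift by -7.
rescaled = {action: 100 * u - 7 for action, u in U.items()}

print(best_action(U))         # -> A
print(best_action(rescaled))  # -> A  (ordering is preserved)
```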
Good point. I guess you could rephrase some of the existential angst over insignificance as despairing at the tiny amounts of utility we can manipulate given a utility function scaled to the entire world/universe/whatever.
I wonder what the person who submitted the number 1488 was thinking. (Maximizing their answer, perhaps.)
The quiz seems to target people who are different from me. I don't watch TV, so it's hard for me to give an answer about channel surfing. I don't listen to the radio. The same goes for renting videos.
Can anyone share the story behind the Future of Life Institute?
There are a lot of famous people on their list, and presumably FLI is behind the recent article in the Huffington Post, but how much does this indicate that said famous people are on board with the claims in the article? The top non-famous person on their list studies Monte Carlo methods and volunteers for CFAR - is this an indication that they're bringing on someone to do actual work? Or does Alan Alda being at the top of their list of advisors mean they're going to focus on communications?
UPDATE: somervta with a large chunk of story
So the quantified self (QS) community has existed for a while. Just as bodybuilding groups should be excellent test beds for which exercises and chemicals yield the best results, the QS community should yield a preferably small, low-cost set of measures you should determine about yourself. Do these exist? They can be any blood measure, rhythm, time, psychological value, net worth...
There's no standardized list.
Basically it turns out that it's really hard to get people to measure specific stuff, and it's often a lot more useful if people measure values that they care about.
Agreed. QS seems most helpful for providing people tools to attack problems they are having (sleep, weight, etc.) rather than make a normal person superhuman.
Something that keeps nagging me in my mind: A young college graduate comes up to you and asks "Where should I look for what kind of work to have the highest living standard?"
Remember, a lower nominal wage in a country where that wage has higher purchasing power could serve this individual better. Naively I might say the US or Switzerland, but something tells me I am overlooking a gigantic hole.
For someone skilled enough to choose your location and who thinks long-term enough to live very cheaply for a number of years, higher nominal wages means higher absolute savings amounts.
Live somewhere expensive when you're getting started, and move somewhere cheap when you're slowing down.
Cost of living is an overblown statistic because dumb people spend their money poorly. You can live in expensive areas on the cheap without that much effort. This isn't to say that living in the bay area isn't more expensive than many other areas, but it certainly isn't as expensive as the cost of living calculations would make it seem.
Yes, provided you're young, healthy, and childless.
What makes youth a necessary condition independent of overall health?
Mostly risk and stress tolerance.
...but also less established social ties. And less settled long-term investments (though this correlates with the risk part).
Living standard as quantified is not particularly helpful to the individual. Someone might be comparatively far better off living in malaysia with a long-distance high paying freelance programming job, but I think you'll find that being around cultural compatriots is not to be ignored.
In some fields, doing freelance work for clients in a country with low purchasing power while living in one with high purchasing power is an option.
If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: Obtain a better model of the world by updating on the evidence of things unpredicted by your current model. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, which enable goal-accomplishing actions. The way to have correct beliefs is to update your beliefs when their predictions fail.
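The update maxim above is just Bayes' rule. A minimal sketch (the scenario and numbers are made up for illustration): a belief is revised downward after observing evidence that the current hypothesis predicted poorly and its rival predicted well.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after observing the evidence."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Start 80% confident in H; then observe something H assigned only 10%
# probability, while not-H assigned it 90%. Confidence drops sharply.
posterior = bayes_update(0.8, 0.1, 0.9)
print(round(posterior, 3))  # -> 0.308
```

This is the whole mechanical content of "update on the evidence of things unpredicted by your current model": a failed prediction is exactly an observation to which your hypothesis assigned low probability, and Bayes' rule converts that failure into reduced confidence.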
Stating it this baldly gets me to wonder about alternatives. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it. In the world of heroic myth, it is not oracles but rather heroes and villains who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Heroes and villains defy oracles, and come to their predicted triumphs or fates not through prediction, but in spite of it.
Suppose that the path to success is not to update your model of the world so much as to update your model of your self and goals. The facts of the world are relatively close to our priors, but our goals are not known to us initially, and are in fact very difficult to discover. We might consider this to be Buddha's world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. When we choose actions that cause bad effects, we aren't so much acting on faulty beliefs about the world as pursuing goals that are illusory or empty of satisfaction.
There are other models as well, that could be extrapolated from denying other premises (explicit or implicit) of Bayes' world. Each of these models should relate prediction, action, and goals in different ways. We might imagine Lovecraft's world, Qoheleth's world, or Nietzsche's world.
Each of these models of the world — Bayes' world, Cassandra's world, Buddha's world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes' world, what evidence might suggest that we are in Cassandra's or Buddha's world?
Edited lightly — In the first couple of paragraphs, I've clarified that I'm talking about epistemic and instrumental rationality as advice for humans, not about whether we live in a world where Bayesian math works. The latter seems obviously true.
Replace religion with this dilemma and you have NS's Microkernel religion.
That's an interesting post. Let me throw in some comments.
I am not sure about the Cassandra's world. Here's why:
Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not "enable goal-accomplishing actions" for him -- in the Bayes' world as well. Is the Cassandra's world defined by being powerless?
Heroes in myth defy predictions essentially by taking a wider view -- by getting out of the box (or by smashing the box altogether, or by altering the box, etc.). Almost all predictions are conditional and by messing with conditions you can affect predictions -- what will come to pass and what will not. That is not a low-level world property, that's just a function of how wide your framework is. Kobayashi Maru and all that.
As to the Buddha's world, it seems to be mostly about goals and values -- things on the subject of which the Bayes' world is notably silent.
Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random and the best-possible predictions only give you a marginal improvement over the baseline. Or else perhaps the real limit is willpower, and the ability to take decisive action based on prediction is innate and cannot be easily altered. Put in other terms, "the world is divided into players and NPCs and your beliefs are irrelevant to which of those categories you are in."
I don't particularly think either of these is likely but if you believed the world worked in either of those ways, it would follow that optimizing your beliefs was wasted effort for "Cassandra World" reasons.
So then the Cassandra's world is essentially a predetermined world where fate rules and you can't change anything. None of your choices matter.
Alternately, in such a world, it could be that improving your predictive capacity necessarily decreases your ability to achieve your goals.
Hence the classical example of Cassandra, who was given the power of foretelling the future, but with the curse that nobody would ever believe her. To paraphrase Aladdin's genie: "Phenomenal cosmic predictive capacity ... itty bitty evidential status."
Yes, a Zelazny or Smullyan character could find ways to subvert the curse, depending on just how literal-minded Apollo's "install prophecy" code was. If Cassandra took a lesson in lying from Epimenides, she mightn't have had any problems.
I don't see these as alternatives, more like complements.
It's a memorable name, but it does not need to be called anything so dramatic, given that we live in this world already. For example, most of us make a likely correct prediction that if we procrastinate less then we will be better off, yet we still waste time and regret it later.
Why this AIXIsm? We are a part of the world, and the most important part of it for many people, so updating your model of self is very Bayesian. Lacking this self-update is what leads to a "Cassandra's world".
I'd tell you what method I would use to evaluate the evidence to decide which world we are in, but it seems like you denied it in the premise. ;)
How strong is the evidence in favor of psychological treatment really?
I am not happy. I suffer from social anxiety. I procrastinate. And I have a host of other issues that are all linked, I am certain. I have actually sought out treatment with absolutely no effect. On the recommendation of my primary care physician I entered psychoanalytic counseling and was appalled by the theoretical basis and practical course of "treatment". After several months without even the hint of a success I aborted the treatment and looked for help somewhere else.
I then read David Burns' "Feeling Good", browsing through, taking notes and doing the exercises for a couple of days. It did not help, though in hindsight I wasn't doing the treatment long enough to see any benefit. But the theoretical basis intrigued me. It just made so much more sense to be determined by one's beliefs than by a fear of having one's balls chopped off, hating one's parents and actively seeking out displeasure because that is what fits the narrative.
Based on the key phrase "CBT" I found "The Now Habit", and reading it actually helped me subdue my procrastination long enough to finish my bachelor's degree in a highly technical subject with grades in the highest quintile. Then I slipped back into a phase of relative social isolation, procrastination and so on.
We see these phenomena consistently in people. We also see them consistently in animals held in captivity not suited to their species' specific needs. I am less and less convinced that this block of anxiety, depression and procrastination is a disease, and more convinced that it is a reaction to an environment, in the broadest sense, inherently unsuitable to humans.
The proper and accepted procedure for me would be to try counseling again, this time with a cognitive behavioral approach. But I am unwilling to commit that much time for uncertain results, especially now that I want to travel or do a year abroad or just run away from it all. (Suicide is not an option) What lowers my odds of success even more is that I never feel understood by people put in place to understand in various venues. So how could such a treatment help?
I am open to bibliotherapy. I don't think I am open to traditional or even medical therapy.
So, can you say more about what aspect of your environment is bugging you? Captivity?? Do you want to try living somewhere more "outdoors"?
I have suffered from social anxiety continuously and depression off and on since childhood. I've sought treatment that included talk therapy and medication. Currently I am doing EMDR therapy which may or may not end up being helpful, but I don't expect it to work miracles. Everyone in my immediate family has had similar issues throughout their lives. I feel your pain. Despite not being perfect and being in therapy, I feel like my life is going pretty well. Here is what has worked for me:
Acceptance: Not everyone can be or should be the life of the party. Being quiet or reserved or shy is a perfectly acceptable way to live your life. You can still work on becoming comfortable in more social situations but you are fine right now. There are plenty of people who will like you just as you are, even if your social skills are far from perfect. Harsh self-judgement can make anxiety worse and lead to procrastination and depression. What I try to do as best I can is to just do whatever I feel like in the moment, and just let the world correct me. I try not to develop too many theories about how the world will react to me since I know from experience that those theories will be biased and pessimistic.
Decide what you want from the world: I guess this is somewhat generic life advice, but it has really worked for me. I decided fairly early on what I wanted to get from the social world. I wanted 3 things. - marriage - children - a good career
Deciding those things, I plugged away at getting them. I was completely incompetent at talking to women but with some help from e-harmony I found one who I was able to be comfortable with and who liked me. We got married 6.5 years ago and we have a 2 year old daughter and another child on the way. Professionally, I found a career that involves a minimum of politicking and no customer interaction. And yet it is both intellectually satisfying and highly remunerative. Even though neither my home life nor my professional life are perfect, achieving my basic life goals has given me a deep feeling of confidence and satisfaction that I can use to counter feelings of anxiety and depression as they come.
Each step I took along the path towards my goals gave me more confidence to move forward, but that confidence wasn't necessarily automatic. I have to periodically brag to myself about myself because otherwise I will naturally focus on my failures and weaknesses and start to feel like a loser. You should be very proud of your accomplishments in college. Most people could not do what you have done. Remind yourself of that. Feel good about yourself.
I think the evidence shows that it works for some people, doesn't work for other people, and the spectrum of outcomes stretches all the way from "miraculously fixed everything" to "made everything worse" :-/
Oh, and "some people" and "other people" refers not just to the person being treated, but to a patient/psychotherapist pair. It is fairly common for people to have no success with a chain of therapists until they find "the one" who clicks and can effectively help with whatever the problem is.
Sorry, but there is really no answer to the question as posed.
What does it mean for a dog to be procrastinating?
Procrastination usually involves humans wanting to do things that are not natural.
I used to believe that procrastination was something very unique to me, but today I believe that nearly everyone struggles with it to some extent. Even someone like Tim Ferriss, who advises a dozen startups and writes a book at the same time, still deals with it. People who are productive simply have found strategies to stay productive despite being imperfect humans.
You already read Burns. How about doing 15 minutes per day of his exercises for the next year?
Not at all. Procrastination is letting near and immediate incentives overcome far and remote ones.
People procrastinate by browsing the 'net instead of going running -- which one is more "natural"?
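The "near incentives overcome far ones" framing above has a standard formalization in behavioral economics: hyperbolic discounting. This is my illustration of the idea, not anything the commenters proposed; the function name and the parameter k are hypothetical:

```python
def hyperbolic_value(reward, delay, k=1.0):
    """Perceived value of a reward after a delay, under the common
    hyperbolic-discounting model: value falls off as 1 / (1 + k*delay)."""
    return reward / (1 + k * delay)

# A small immediate reward (browsing the net) can beat a much larger
# delayed one (the long-run benefits of exercise):
now = hyperbolic_value(10, 0)      # 10.0 -- full value, no delay
later = hyperbolic_value(100, 30)  # roughly 3.2 -- heavily discounted
print(now, later, now > later)
```

Under this toy model, procrastination is simply the discounted curve of the near reward crossing above that of the far reward, regardless of which activity is more "natural".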
Going running for the sake of doing exercise isn't natural.
Browsing the net = being sedentary, saving energy, staying in a place you know is safe and has access to food and water. Running = wasting a shit ton of energy and putting yourself into the world and at risk for no immediate gain.
Seems obvious to me which you would be more naturally inclined to do.
I've heard the idea from Somatic Experiencing-- unfortunately, I haven't found anything that goes into detail about that particular angle, except that part of it seems to be about having a tribe-- it's not just about spending time out of doors.
I'll be keeping an eye out for information on the subject, but meanwhile, you might want to look into Somatic Experiencing and Peter A. Levine.
This touches on something some popular people sometimes note: a feeling of being uprooted, of having no sense of belonging or meaning. Maybe this is a reason for the recent resurgence of religious organizations. Of course, if this vague shred of an idea has some truth to it, one should be able to create or find a tribe substitute.
I will look into it, thank you.
Who are you, what are your physical and social environments like, and do you do the obvious things like lifting weights (or at least similar if you're female) and eating "right"?
The only reason to pay someone for non-specific therapy is if you don't have any friends, and even then you can't be truly honest without risking being institutionalized.
Disagree. Frequent discussion of one's anxieties can be a heavy burden on a friendship, and it's vulnerable to cascading failures. If I have four friends and spread my worries evenly between them, and one finds this exhausting and decides to spend less time with me, then I have three friends I can talk to, each of whom will suddenly find me even more stressful to be around.
It's not useful to discuss whether or not anxiety, depression, or procrastination is a "disease." It either is or isn't a useful way to adapt to the current environment, and if it's not useful you want to change either your reaction or your environment.
Making friends is hard with social anxiety but I think it's your best bet.
Consider neurofeedback administered by a professional. In the U.S. it will cost between $50 and $200 a session. You probably need at least 20 sessions for permanent results, but you might be able to feel some effects during the first session.
Existent. But psychological treatment is in its infancy. I am not a licensed mental health professional, but watch this:
https://www.youtube.com/watch?v=_V_rI2N6Fco
Now, go find a therapist who's at least 45 years old, preferably 50-plus, is not burned out, and loves what they do. It doesn't really matter what the therapeutic modality is. Don't go to a thirty-something CBT-weenie.
Edit: A bunch of recent posts on my blog are about therapy. May or may not be useful:
http://meditationstuff.wordpress.com/
Sorry if this topic has been beaten to death already here. I was wondering if anyone here has seen this paper and has an opinion on it.
The abstract: "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."
Quite simple, really, but I found it extremely interesting.
http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf
Discussed occasionally: https://www.google.com/search?num=100&q=%22simulation%20argument%22%20site%3Alesswrong.com
The argument falls apart once you use UDT instead of naive anthropic reasoning: http://lesswrong.com/lw/jv4/open_thread_1117_march_2014/aoym
Since LW is the place where I found out about App Academy... I started working through their sample problems today, and at what level of perceived difficulty / what number of stupid mistakes should I give up? Both in the sense of giving up on working toward getting into App Academy specifically [because I doubt I think fast enough / have a good enough memory to pass the challenges -- the first four problems in their second problem-set took me over an hour, and I had to look a few things up despite having gone through the entire Codecademy Ruby course] and in the sense of giving up on programming as an at-least-short-term job plan?
Not sure how much of this is lack of practice (maybe implementation / avoiding stupid errors would get better with practice, but designing the algorithms takes me a while, and I'm not new to programming at all), how much is overconfidence / unrealistically high expectations wrt skill (but they say the code challenges are supposed to take 45 minutes each) and how much is that I really don't have the talent to get into that particular program, or to not fail miserably at the job, or to develop the skills to be able to even get a programming job...
Hey, I have good news for you. I just tried those practice problems and timed myself to see if I could give you something to compare to (and for fun). I completed the first four in about an hour and 10 minutes (though I am a bit out of practice). Those practice problems are not trivial; they take some thought. I didn't have to use any outside resources, but I did have to test quite a few things out in the terminal as I was coding it.
For background: I am self-taught, but I've been programming for almost 2 years. I have done freelance Rails programming. I have built multiple Rails apps from the ground up by myself. One of these is still in use by a multimillion-dollar company as a part of their client onboarding program. I've been offered a job as a Rails developer, though I didn't end up taking it as I had a higher paying offer on the business end of things.
So I say don't worry if you have a bit of trouble with it. If you felt like you were looking things up all the time, then you just need some more practice. For the algorithm design part (especially the mathy ones), look into Project Euler. It's a great list of problems to get practice and you can use whatever language you want to find the answer, so use Ruby. Practice taking the problems apart into pieces, using helper functions, and writing the pseudocode before you actually code anything. That will make this style of thinking feel more natural.
Feel free to PM me if you want to talk more.
Use the try harder, Luke.
What do you mean by "not new to programming at all"? How many hours of programming have you done? How many projects have you completed? Because unless you've had a job as a programmer before, or you did CS as a college degree, your previous experience will be utterly swamped by App Academy. If you feel insecure about algorithms specifically, practice them specifically. If you want more practice with Ruby, maybe do Hartl's book. The Codecademy Ruby course is not the end of the world. If programming appeals to you, prepare, apply, and let App Academy do the judging.
Edit: Remember, many people who have had jobs as programmers can't do FizzBuzz if asked to in an interview. Retain hope.
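For anyone unfamiliar with the reference: FizzBuzz is the canonical trivial screening problem (print 1 to 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, "FizzBuzz" for both). A sketch in Python (the thread's examples are Ruby, but the logic is identical):

```python
def fizzbuzz(n):
    """Return the FizzBuzz word for a single integer n."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The classic interview task is the sequence for 1..100:
results = [fizzbuzz(i) for i in range(1, 101)]
print(results[:15])
```

If writing something of this shape feels comfortable, that's evidence the App Academy practice problems are within reach with more drilling.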
Please recommend some good sources/material (books, blogs, also advice from personal experience) for techniques of self-analysis and introspection? Basically, I'm looking for things to keep in mind while I attempt to find patterns of behavior in myself and ways for changing them. I realize that this is a very broad category. But roughly, material akin to Living Luminously.
The Feeling Good Handbook. It focuses specifically on Depression and Anxiety, but could probably be useful for anyone.
I'm an Orthodox Jew, and I'd be interested to connect with others on LW who are also Orthodox. More precisely, I'm interested in finding other LWers who (a) are Jewish, (b) are still religious in some form or fashion, and (c) are currently Orthodox or were at some point in the past.
You must have excellent compartmentalization skills.
I'm an Orthodox Jew (Modern Orthodox). Since Mr. Yudkowsky's work is - obviously - apikorsus of the highest level, I read it l'havin ul'horos, mostly, but enjoy the thinking in it anyway.
In case anyone else is curious, it appears that:
"apikorsus" has a range of meanings including "heretic", "damned person", "unbeliever"; the term may or may not be derived from the name of Epicurus.
[EDITED to add: As pointed out by kind respondents below, I was sloppy and mixed up "apikores" (which has the meanings above) and "apikorsus" (which means something more like "the sort of thing an apikores says"). My apologies.]
"l'havin ul'horos" means "to understand and to teach", as opposed to "to agree" or "to practice" or whatever. In the Bible, when the Israelites invade Canaan they are told not to learn to do as the natives do, and there's some famous commentary that says "but you are allowed to learn in order to understand and to teach".
[EDITED to add: I am not myself Jewish, nor do I know more than a handful of Hebrew words; if I have got the above wrong then I will be glad to learn.]
More accurately: Apikores = heretic in modern parlance; apikorsus = heretical views.
As an aside, Maimonides is the medieval Jewish authority generally associated with the view that the term apikores is not derived from the name Epicurus. Maimonides was a world-class Aristotelian philosopher and quotes Epicurus several times in his works. Since the words apikores and Epicurus have identical spellings in medieval Hebrew, the fact that Maimonides proposes a different etymological theory begs for an explanation. Maimonides' theory is that the term is from the Aramaic "apkeirusa" (this is hard to translate, especially in the way Maimonides seems to be using it; I think it implies something like "people doing whatever they feel like instead of listening to authority figures"). I've long felt that this derives from the fact that the Talmud's discussion of the term doesn't have anything to do with dogma or heretical beliefs but rather with belittling authority figures. Maimonides himself, however, converts the term in his other works into the current usage of referring to heretical beliefs. Based on this, I strongly suspect that Maimonides thought that the original term does stem from Epicurus (who held precisely those beliefs that Maimonides identifies as heretical), but that the rabbis of the Talmud borrowed the term and used it as a sort of Aramaic-Greek pun to refer to belittling authority figures.
Also in case anybody else is curious, Modern Orthodox is as opposed mainly to Ultra-Orthodox (also known as "hareidi" or "frum"). Hassidim are their own sub-group of Ultra-Orthodox.
As an interesting intellectual challenge, try steelmanning some of the hareidi sociopolitical positions, such as their extreme opposition to the Israeli draft law. And it does need steelmanning - I personally know several very well-thought-out, very smart, very well-meaning, very knowledgeable rabbis who strongly agree with the hareidi positions.
I think that actually if you accept a certain basic worldview, they have a rather strong case. I strongly disagree with that worldview, but that's a diffrent matter.
Let's lay it out:
Axiom 1: Everything happens according to God's will.
Axiom 2: If we behave righteously, God's will will be favourable.
Example: Again and again in the past, this has happened. "בכל דור ודור עומדים עלינו לכלותינו והקדוש ברוך הוא מצילנו מידם" [Rough translation: "In each and every generation our foes have tried to destroy us, and each time the Holy One Blessed Be He saves us from them.]
Corollary 1: If we are righteous, we can expect this to carry on in the present and the future.
Axiom 3: The most righteous thing to be doing is to be studying the Holy Texts.
Lemma: We need to have as large a number of people as possible studying in yeshiva as their day-to-day occupation.
Proof of the lemma: Follows from Corollary 1 and Axiom 3.
Proposition: "Much as it pains us, we acknowledge that not everyone has it in them to spend all day studying Torah. We don't want to force people who don't want to study in yeshiva to do so (much as it aches the very bottoms of our souls), but at least you can let those who want to do so get on with it, and not waste their time on your secular 'army' nonsense, which has nothing to do with our defense, as our only true defense is God."
Proof of the proposition: The lemma says that we need lots of yeshiva bochurs, so let's provide them! If you don't have the proper כונה we cannot effectively force you to study Torah (even if the hareidim had the political power), but at least we can take the masses of willing hareidi young men and allow them to do their job for the defense of our people, in order to protect what fragment of spiritual defense we still have.
Corollary 2: The state of Israel shouldn't draft the yeshiva bochurs. Doing so removes our only true line of defense, and so is tantamount to the genocide of the Jewish people.
If you accept the three axioms, they lead invariably to the Proposition, and so to Corollary 2.
Q.E.D.
That would work... but the Chareidim don't actually believe in their defenses (they flee places getting bombed and leave the soldiers to defend people's lives), nor are these defenses backed up in any way by halacha (they've misinterpreted that one text they use as a source). Also, they don't allow anyone in their community to go into the army. Ever. And they don't let non-Chareidim join them in their learning for defense, either.
I suspect noonehomer's correct in part and that the chareidim don't actually believe everything that Username says.
Also, I don't think it's true they don't let anyone go to the army (or at least it didn't use to be), just that it's discouraged.
If anyone's interested in my own thoughts, I posted them in a comment here. Just look for the comment by iarwain. Sorry, you may need to understand some hebrew terms to understand it. But then again, you'll need to understand hebrew terms to read Username's comment as well.
Yes.
What I wrote was a steelman of their positions, and must be taken as such. They themselves do not have such sophisticated mental models of the world. The answer to why they hate the IDF and the state of Israel is simply one of tribal affiliation. <<Insert Robbers Cave experiment reference here>>
[Edit: Also see point 3 in iarwain1's linked comment. It explains the hareidi attitude to all this.]
I don't know about "frum". Badly educated and mistakenly chumradik is more like it.
The hardest part of reading things l'havin ul'horos is that I can't recommend them to anyone else because it's assur for non-learned people to read them (possibly even non-Jews, in this case). And yes, iarwain1 is correct that apikorsus is a thing and an apikores is a person. But thank you for translating.
Can you recommend such things to other people considered learned? (And: is there an important distinction between "assur" and "forbidden"? A little googling suggests that "assur" is less emphatic somehow; is that right?)
Yup, inexcusably sloppy of me. Thanks.
Almost certainly I can. But right now I'm in high school, so I don't know that many people who qualify.
Um... assur means you can't do it. It's not less severe than "forbidden", I don't think. It literally means "bound". It's important to note that it doesn't mean something's morally wrong, but in this case, independent of the prohibition (non-literal translation of the noun form, issur) the act of reading foreign philosophy without knowledge of the corresponding arguments in one's own can cause stupid questions, not smart ones, and is considered to be wrong, not just forbidden (in my father's circles, anyhow).
I commend you for your self-control in not telling other people about these issues. I'd also add that for many people who aren't the intellectual type, you'd be doing them a major disservice by exposing them to arguments that can easily cause them massive psychological stress issues. I know this from personal experience with people to whom that happened.
It might be worth thinking about switching to a different high school where there are more intellectual-type people around. Also, if you go to Yeshiva University for college you'll find plenty of smart people, both staff and students, who are quite educated in foreign philosophies.
Humans are diverse.
I mean this not only in the sense of them coming in all kinds of shapes, colours and sizes, having different world views and upbringings attached to them, but also in the sense of them having different psychological, neurological and cultural makeup. It does not sound like something that needs to be explicitly said, but apparently it needs to be said.
Of course, the first voices have realised that the usual study population is WEIRD, but the problem goes deeper and further. Even if the conscientious scientist uses larger populations, more representative of the problem at hand, the conclusions drawn tend to ignore human diversity.
One of the culprits is the concept of "average" or at least a misuse of it. The average person has an ovary and a testicle. Completely meaningless to say, yet we are comfortable in hearing statements like "going to college raises your expected income by 70%" (number made up) and off to college we go. Statements like these suppress a great deal of relevant information, namely the underlying, inherent diversity in the population. Going to college may increase lifetime earnings, but the size of this effect might be highly dependent on some other factor like inherent cognitive ability and choice of major.
Now that is obvious, you might say, but virtually all research proceeds as though it were not the case. It was surprising to see that the camel has two humps, that is, that one part of the population seems incapable of learning programming while the other part is capable. And this can be determined by the answer to a single question. Research on exercise and diet is massively convoluted with questions about endurance/strength and carbs/fats. Might this be because of ignoring underlying biological factors?
People are touting the coming age of personalised medicine as they see massively diminishing returns on generic medicine. Ever more diseases are hypothesised to have very specific causes for each person, necessitating ever more specialised treatment. The effects of psychedelic substances are found to be dependent on the exact psychological makeup, e.g. cannabis causing psychosis only in individuals already at risk for such episodes.
There is no exact point to this rant. Just the observation that ever more statements are similar to telling a homosexual man that "having unprotected sex with your partner has a high probability of leading to pregnancy".
See also the comments of Yvain's What Universal Human Experiences Are You Missing Without Realizing It? for a broad selection of examples of how human minds vary.
Oh, now I realized the point of that article was the comments, not the article itself. Thanks for clarifying this!
There are three separate issues:
(a) The concept of averaging. There is nothing wrong with averages. People here like maximizing expected utility, which is an average. "Effects" are typically expressed as averages, but we can also look at distribution shapes, for instance. However, it's important not to average garbage.
(b) The fact that population effects and subpopulation effects can differ. This is true, and not surprising. If we are careful about what effects we are talking about, Simpson's paradox stops being a paradox.
(c) The fact that we should worry about confounders. Full agreement here! Confounders are a problem.
I think one big problem is just the lack of basic awareness of causal issues on the part of the general population (bad), scientific journalists (worse!), and sometimes even folks who do data analysis (extremely super double-plus awful!). Thus much garbage advice gets generated, and much of this garbage advice gets followed, or becomes conventional wisdom somehow.
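The population-vs-subpopulation point in (b) is exactly Simpson's paradox, and a toy example makes it concrete. The admission numbers below are entirely made up for illustration:

```python
# Hypothetical admissions data: (admitted, applied) per department.
# Women are admitted at a HIGHER rate in each department, yet at a
# LOWER rate overall, because they mostly applied to the harder one.
men   = {"easy": (80, 100), "hard": (10, 50)}
women = {"easy": (18, 20),  "hard": (30, 130)}

def overall_rate(d):
    """Pooled admission rate across all departments."""
    admitted = sum(a for a, _ in d.values())
    applied = sum(n for _, n in d.values())
    return admitted / applied

# Subgroup effect: women win in both departments...
assert 18 / 20 > 80 / 100 and 30 / 130 > 10 / 50
# ...population effect: men win in aggregate (0.6 vs 0.32).
print(overall_rate(men), overall_rate(women))
```

As the comment says, once you are careful about *which* effect (pooled or per-department) a claim is about, the paradox dissolves; the danger is only in quoting the aggregate as if it applied to every subgroup.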
That depends. Mostly they are used as single-point summaries of distributions and in this role they can be fine but can also be misleading or downright ridiculous. The problem is that unless you have some idea of the distribution shape, you don't know whether the mean you're looking at is fine or ridiculous. And, of course, the mean is expressly NOT a robust measure.
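The non-robustness of the mean is easy to demonstrate with Python's standard `statistics` module; the income figures here are invented for illustration:

```python
import statistics

# Five modest incomes: mean and median agree (both 35000).
incomes = [30_000, 32_000, 35_000, 38_000, 40_000]
print("mean:", statistics.mean(incomes), "median:", statistics.median(incomes))

# One billionaire joins the sample: the mean explodes to over 160
# million, while the median barely moves (36500).
incomes.append(1_000_000_000)
print("mean:", statistics.mean(incomes), "median:", statistics.median(incomes))
```

This is the sense in which a single-point summary can be "downright ridiculous": the post-outlier mean describes literally no one in the sample.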
The study you're probably thinking of failed to replicate with a larger sample size. While success at learning to code can be predicted somewhat, the discrepancies are not that strong.
http://www.eis.mdx.ac.uk/research/PhDArea/saeed/
The researcher didn't distinguish the conjectured cause (bimodal differences in students' ability to form models of computation) from other possible causes. (Just to name one: some students are more confident; confident students respond more consistently rather than hedging their answers; and teachers of computing tend to reward confidence).
And the researcher's advisor later described his enthusiasm for the study as "prescription-drug induced over-hyping" of the results ...
Clearly further research is needed. It should probably not assume that programmers are magic special people, no matter how appealing that notion is to many programmers.
The failure to replicate was of their test, not of the initial observation. Specifically, it was considered interesting why the distribution of grades in CS (apparently typically two-humped) was different from, e.g., mathematics (apparently typically one-humped). As far as I know this still remains to be explained.
Tyler Cowen talks with Nick Beckstead about x-risk here. Basically he thinks that "people doing philosophical work to try to reduce existential risk are largely wasting their time" and that "a serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons."
My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.
A "serious" MIRI would operate in absolute secrecy, and the "public" MIRI would never even hint at the existence of such an organisation, which would be thoroughly firewalled from it. Done right, MIRI should look exactly the same whether or not the secret one exists.
Excerpts and discussion on MR: http://marginalrevolution.com/marginalrevolution/2014/04/nick-becksteads-conversation-with-tyler-cowen.html
If your idea of being serious is to train a team of hacker-assassins, that might indicate that your project is doomed from the start.
As far as I know there are still nuclear weapons in the post-collapse Soviet Union.
Pretty clear that he meant the "loose nukes" that went unaccounted for in the administrative chaos after the Soviet collapse.
A team of slightly more sophisticated Terminators, right?
Oh, wait... :-D
Hackers / assassins would at best postpone the catastrophe, not avoid it.
Hi, CFAR alumni here. Is there something like a prediction market run somewhere in discussion?
Going mostly off of Gwern's recommendation, it seems like PredictionBook is the go-to place to make and calibrate predictions, but it lacks the "flavour" that the one at CFAR did. CFAR (in 2012, at least) had a market where your scoring was based on how much you updated the previous bet towards the truth. I really enjoyed the interactional nature of it.
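Scoring "based on how much you updated the previous bet towards the truth" sounds like a sequential proper scoring rule in the style of Hanson's logarithmic market scoring rule. This is a guess at the mechanism, not a description of what CFAR actually ran:

```python
import math

def update_score(p_old, p_new, outcome):
    """Points for moving the standing probability p_old to p_new.

    If the event happens (outcome=True) you earn log(p_new / p_old);
    otherwise log((1 - p_new) / (1 - p_old)). Moving the market toward
    the truth earns points; moving it away loses them, and the rule is
    proper: reporting your honest probability maximizes expected score.
    """
    if outcome:
        return math.log(p_new) - math.log(p_old)
    return math.log(1 - p_new) - math.log(1 - p_old)

# Pushing 0.5 -> 0.9 earns ~0.59 points if the event happens,
# but loses ~1.61 points if it doesn't.
print(update_score(0.5, 0.9, True), update_score(0.5, 0.9, False))
```

A nice property for a group game: successive updaters' scores telescope, so the total payout depends only on the first and last probabilities, which keeps bookkeeping simple.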
What would it take to get such a thread going online? I believe one of the reasons it worked so well at minicamp was because we were all in the same area for the same period of time, so it was simple to restrict bets to relevant things we could all verify. Even if most of the posts wind up being relevant only to the local meetups, it would be nice to have them up in the same place for unofficial competition. Is that something you would use?
I do not know if this is the best place, but I have lurked here and on OB for roughly a year, and have been a fellow traveler for many more. Specifically, I want to talk to any members who have ADHD about how they go about treating the disorder. In the standard anti-akrasia topics, the narrative is that if you have anxiety, depression, etc., you should treat that first, but there seem to be few members here with ADHD itself. Other forums' discussions of which medication is "better" are filled with bad epistemology, unwarranted conclusions, people faking the disorder, and worse. Do any other members have it and want to talk about it? I was hoping there could be a general discussion thread for people with it, if enough people are interested. I've pored through studies and journals, but it is difficult to do alone.
Alright, I'm going to get enough karma and just start this myself until someone stops me. I also kind of need this, so I don't destroy my life through some other unspecified means.
A koan:
A monk came to Master Banzen and asked, "What can be said of universal moral law?"
Master Banzen replied, "Among the Tyvari of Arlos, all know that borlitude is highly frumful. For a Human of Earth, is quambling borl forbidden, permissible, laudable or obligatory?"
The monk replied, "Mu."
Master Banzen continued, "Among the Humans of Earth, all know that friendship is highly good. For a Tyvar of Arlos, is making friends forbidden, permissible, laudable or obligatory?"
The monk replied, "Mu," and asked no more.
Qi's Commentary: The monk's failure was one of imagination. His question was not foolish, but it was parochial.
Shouldn't Banzen's second question be something like "For a Tyvar of Arlos, is making friends frumful, flobulent, grattic, or slupshy?"?
Sounds to me like the master's jumping to more conclusions than the student is, here. His response makes sense if he wanted to break a sufficiently specific deontology (at least at interspecies scope), but there are a lot of more general things you could say about morality that aren't yet ruled out by the student's question.
I don't really know anything about the Tyvar of Arlos, so I'm pretty confused on this front, but I'm fairly sure you're relating a Talmudic anecdote, not a Zen one ;-). "Forbidden, permissible, laudable, or obligatory" says to me that we're contemplating halachah.
I would hope you don't know anything about them—they were made up on the spot. ^_^
And yes, I suppose the style here might well have been influenced from more than one place.
How is this a failure of imagination? Why is the question parochial?
Parochial because he mistook a local property of mindspace for a global one; unimaginative because he never thought of frumfulness when considering what things a mind might value. "Good" is no more to a Tyvar than "frumful" to Clippy or "clipful" to a human.
This is silly. "Good" is a quite useful concept that easily stretches to cover entities with different preferences, but even if it does not, it's STILL meaningful, and your Clippy example shows us exactly why. The meaning of "clipful," something like "causes there to be more paperclips," is perfectly clear to, if not really valued by, humankind.
Brienne Strohl mentioned she was reading "Robby's re-sequencing of Eliezer's Sequences" on facebook/twitter, can anyone link me to it?
Honey badger intelligence
When I was a kid, our cats used a similar tactic to escape the laundry room with a closed door. One would sit on the dryer and turn the handle with both paws and the other would push against the door with their head.
What's the current policy on bare downvoting, as in downvoting a comment/post without giving at least a short explanation for why one did so? I've had some comments downvoted recently, and without explanations it's frustrating and a poor feedback mechanism.
If the alternative is no feedback at all, downvoting without explanation is a better option.
There ain't no policy. People up- and down-vote as they please.
I'd say that even more important than giving an explanation is not downvoting merely because you disagree. The signal transmitted by downvoting is "I don't want to hear this," or in simpler language, "shut up." This should be reserved for fighting content which is offensive, spam, trolling, rampant crackpottery, blatantly off-topic, etc. Mistakes made in good faith don't deserve a downvote. I'd say it is an extension of the "Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever." rule. The alternative is death spirals, blue-green politics, and plainly ruining the community experience for everyone.
I personally made a rule of upvoting any content with a net negative score which doesn't deserve a downvote, even if I disagree, especially when it's a comment by a person I'm currently arguing against. I want arguments that are discussions in which both sides are trying to arrive at the truth, not fights or two-people-showing-off-how-smart-they-are (is there a name for it?).
Not if you aim to enforce a level of discussion higher than mere absence of pathology. I like for there to be places that distance themselves from (particular kinds of) mediocrity...
...which is made more difficult by egalitarian instincts.
It's not. Punishment is different enough from deciding who to talk with. See also Yvain on safe spaces.
Downvotes are not the way to achieve it. The way to achieve it is by positive personal example and upvoting content which is exemplary. Why are downvotes bad? Because:
We want to allow "mediocre" people (some of which have an unrealized potential to be excellent) that want to learn from excellent people (I hope you agree). Such people can make innocent mistakes. There's no reason to downvote them as long as they're willing to listen and aren't arrogant in their ignorance. Downvoting will only drive them away.
Even smart people occasionally say foolish things. Downvoting sends such a strong negative signal that it discourages even people that get much more upvotes than downvotes. By "discourages" I don't mean "discourages from saying foolish things", I mean discourages from participating in the community in general.
Most content is not voted upon by most of the community, therefore statistical variance is large. Again, since the discouragement of downvotes is not cancelled out by the encouragement of upvotes, you get much more discouragement than you want.
Downvotes transform arguments into sort of arena fights where the people in the crowd are throwing spoiled vegetables on the players they don't like. The emotional aura this creates is very bad for rationality. It's excellent for blue-green politics (downvote THEM!) and death spirals.
If you don't want to talk to someone, don't upvote her and don't reply to her. The psychological impact of downvoting is equivalent to punishment.
This is completely different. "Safe spaces" are about banning content which might offend someone's sensibilities. My suggestion is about "banning" less content.
I agree with enough of this. I know there are immediate downsides and hypothetical dangers. But the upsides seem indispensable. The argument needs to consider the balance of the two.
They remain in the fabric of the forum, making it less fun to read. Not upvoting doesn't address this issue.
Things that are not fun (for certain sense of "fun") offend my sensibilities (for certain sense of "offend"). My suggestion is to discourage them by downvoting. (This is the intended analogy, which is strong enough to carry over a lot of Yvain's discussion, even if the concept "safe spaces" doesn't apply in detail, although I think it does to a greater extent than I think you think it does.)
I have no problem with that, my problem is with the opposite - people learning from mediocre (or worse) folk, because they don't realize that their content is flawed (which downvotes signal).
To some extent yes, but we don't want eternal September either. There is concern about the average IQ reported in the LW census dropping over time.
If we had fewer downvotes in general, then every single downvote would create a much stronger negative signal than it does at the moment.
What do you mean by "fighting mediocrity"? Should I interpret it literally as "I don't like mediocre people"? Or as "I want to reward excellence"? If it is the latter you are aiming at, use upvotes, not downvotes (for ideal rational agents the two might be symmetric, but for people they aren't: the emotional signal from getting a downvote is very different from the emotional signal of not getting an upvote).
Exactly, and this is a reason why downvoting is important (and shouldn't be systematically countered): it allows scaring people away (who are not of our tribe). A forum culture that would merely abstain from upvoting is worse at scaring people away than one that actively downvotes.
(Sorry, I heavily edited the grandparent since the first revision.)
The same person who said that also said this, so I guess he meant something narrower by “bullet” than you think.
Upvoted for making an interesting point.
However: I was not appealing to Eliezer's authority. I was just making a parallel with a similar (but more extreme) phenomenon.
Regarding well-kept gardens. Let me put things in perspective. If you see a comment along the lines of "jesus is our lord" or "rationality is wrong because the world is irrational" or "a machine cannot be intelligent because it has no soul", by all means downvote. However, if you see two people debating e.g. whether there will be an AI foom or whether consequentialism is better than deontology or whether AGI will come before WBE, don't downvote someone just because you disagree. Downvote when the argument is so moronic that you're confident you don't want this person in our community.
People change. People change even faster when you give them feedback. I downvote things I don't want to see from people I like and respect the same way I would frown at a friend if they did something I didn't want them to do.
So instead of 'I'm confident I don't want you in our community,' I view a downvote more as 'shape up or ship out.'
Agreed, but...
Nope. Sometimes otherwise-okay people make moronic arguments because they're mind-killed, they're tired, etc.
THE WHOLE POINT OF DOWNVOTES IS TO HAVE LESS BAD STUFF AND MORE GOOD STUFF. This applies not just to making people leave but to making the people who stay post things of higher quality.
If you don't downvote "otherwise-okay" people when they say dumb shit, how are they supposed to learn? Downvote the comment, not the person.
Er... That was my point.
I think you're drawing a false equivalence here. While a downvote does carry the meaning of "I don't want to hear this", most of the meaning of "shut up" is connotation, not denotation, and those connotations don't necessarily carry over.
Mere disagreement generally isn't enough to justify a downvote, no. But we want to see well-reasoned disagreement: it signifies a chance to teach or to learn, even if it's unpleasant in the moment. On the other hand, there are plenty of things short of Time Cube or cat memes that one might legitimately not want to see here, even if posted in good faith; restricting the option to those most extreme cases robs it of most of its power to improve discussion.
I downvoted you, because you seem to use upvotes in a way that diminishes the value of the karma system in my eyes - an undeserved downvote is as bad as an undeserved upvote.
I've seen a lot of low quality posts getting some karma and coming back to positive scores without good reason - and now I know the behaviour that is partially responsible.
(and the above comes from someone with a mass downvoter after him, who gets a downvote on every single comment he makes)
Downvotes and upvotes are not symmetric, see my reply to Vladimir.
This is a common question from the new participants. First, there is no policy on downvoting. There can't be, because there is no enforcement mechanism. There are, however, recommendations, like "downvote something you would like to see less of", which is often mixed up with "downvote everything I disagree with", or worse, with "downvote every comment by a user I dislike, regardless of content, to force them post less". At least one prominent regular has been accused of this last one. Second, commenting on why you downvote tends to result in the comment being downvoted, which discourages such comments very effectively.
Yes, but only in the beginning. Once you have a few hundred karma, a downvote is just an indication that someone disliked your post, nothing more. And if all your comments are universally liked, you must be doing something wrong.
Well, Eliezer's policy tends towards "replying to downvote-worthy comments tends to start flame wars and is thus discouraged".
Right, but then we invented "Tapping out" so that wouldn't become an issue.
"Tapping out" can be interpreted as conceding and is thus low status.
If you're that worried, link to the wikipage which defines away that connotation, like "I'm tapping out.".
Signaling doesn't work that way. I'd think someone who reads Game blogs would know that.
Call it something else then, or be more direct and paraphrase the wikipage, or take it into PMs, whatever you fancy. The point is that you shouldn't feel guilty replying to a comment just because it was downvoted.
I haven't been following LW discussions of Löb's theorem etc. very much at all but this guide to the m4 macro language (a standard Unix tool) seemed to have the same character, especially this section. Dunno if this is interesting to people who are interested in Löb's theorem.
Does anyone have suggestions for Android self-tracking/quantified-self apps? I just got an Android phone and am hoping to begin tracking my diet, exercise, etc., as well as various outcomes, and try to find correlations.
LifeTracking
I was able to get it installed, but get a message saying "Unfortunately, LifeTracking has stopped" whenever I try to go past the first page.
The marketplace link doesn't work. I tried searching for LifeTracking but only found LifeTrack, are they the same thing?
Probably not, though I have never had access to the Android marketplace, so I'm not sure. Have you tried installing the app directly from the downloadable .apk file?
That seems to have worked.
Sleep as Android is what I use on a tablet under my pillow to keep track of how long I actually spend trying to sleep, as well as whether my sleep cycle seems to contain coherent deep-to-not-deep cycles.
Fairly off-topic question, but I imagine there'll be suitable people to answer it on LW. Any recommendations for cheap and cheerful VPS hosting? Just somewhere to park a minimum CentOS install. It's for miscellaneous low-priority personal projects that I might abandon shortly after starting, so I'm hesitant to pay top dollar for a quality product that I might end up not using. On the other hand, I want to make sure I get what little I'm paying for.
I promise I'm not a stingy unfriendly AI looking for a new home.
I would like to learn drawing.
I would like to be able to have fun expressing myself via art. How long does it take to learn to draw from zero to good enough not to be embarrassed of oneself?
What techniques are useful? Is there any sense in e.g. Drawing on the Right Side of the Brain?
Drawing from real life is especially useful for someone who is learning to draw. It teaches you that drawing is not simply about holding a pen and drawing the correct lines; it's also about seeing and thinking correctly. We tend to think in terms of shapes, outlines and symbols, but such things don't represent reality very well. You should be thinking in terms of form and contour.
Here's a good video about it.
I think this post is a good start:
So draw a lot, draw from real life and from reference, and begin to think in 3D.
I think Drawing on the Right Side of the Brain is probably pretty effective because one of its main points is the above - that you should just draw what you see and not think in terms of symbols when you draw. The underlying idea about the brain hemispheres is pseudoscience, but that doesn't mean it can't still teach useful lessons.
Drawing on the Right Side is great for this reason. The hemisphere stuff is quite tangential to the book's utility.
If you want to see examples of "visual symbols", look at the drawings of children. In particular, look at drawings of the human face. The prototypical symbols for something like an eye just don't look that much like a human eye. This sounds obvious, but it's very hard to just draw what you see, and not draw what you "think you ought" to see.
For example, imagine a face lit from one side. Visually, the illuminated side of the face will show the "expected" details: You'll see the folds in both lids of the eye, and the fine curves of the face and ear. But the dark side of the face will look nothing like this. You'll only see broad dark areas and broad light areas. However, most people who'd identify as "bad at drawing", will draw the same details on both sides of the face, and will be genuinely unaware that this isn't what they really "see".
This isn't to say that artists don't make use of visual symbols, etc, but skill is the ability to take both approaches.
I'd actually advance this as an example of the fundamental analysis of one type of "talent". The "good at drawing" people grokked the connection between seeing and drawing, and the "bad at drawing" people didn't.
I've wondered for some time if something similar isn't present in musical talent, where the basic "mindset" has to do with some connection of sound to expression, rather than a connection between sound and physical ritual.
There's an (unfinished) set of posts about rationality and drawing written by Raemon, Drawing LessWrong p2 p3 p4 p5, that might answer your questions (in the articles or comments).