If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Thread started before the end of the last thread to encourage Monday as the first day.
If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: Obtain a better model of the world by updating on the evidence of things unpredicted by your current model. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, which enable goal-accomplishing actions. The way to have correct beliefs is to update your beliefs when their predictions fail.
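A toy numerical version of that maxim, with numbers invented purely for illustration (not from the original post):

```python
# Toy Bayesian update: a model assigns P(hypothesis) = 0.7, then observes
# evidence it considered unlikely under that hypothesis. All numbers are
# made up purely for illustration.
prior = 0.7
p_e_given_h = 0.1        # evidence is unpredicted by the current model
p_e_given_not_h = 0.6    # but likely under the alternative

p_evidence = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
posterior = prior * p_e_given_h / p_evidence
print(posterior)  # ~0.28 -- the unpredicted observation pulls belief down
```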
Stating it this baldly gets me to wonder about alternatives. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it. In the world of heroic myth, it is not...
Pure curiosity question: What is the general status of UDT vs. TDT among y'all serious FAI research people? MIRI's publications seem to exclusively refer to TDT; people here on LW seem to refer pretty much exclusively to UDT in serious discussion, at least since late 2010 or so; I've heard it reported variously that UDT is now standard because TDT is underspecified, and that UDT is just an uninteresting variant of TDT so as to hardly merit its own name. What's the deal? Has either one been fully specified/formalized? Why is there such a discrepancy between MIRI's official work and discussion here in terms of choice of theory?
MIRI's publications seem to exclusively refer to TDT
Why do you say that? If I do a search for "UDT" or "TDT" on intelligence.org, I seem to get about an equal number of results.
people here on LW seem to refer pretty much exclusively to UDT in serious discussion
This seems accurate to me. I think what has happened is that UDT has attracted a greater "mindshare" on LW, to the extent that it's much easier to get a discussion about UDT going than about TDT. Within MIRI it's probably more equal between the two.
that UDT is just an uninteresting variant of TDT so as to hardly merit its own name
As I recall, Eliezer was actually the one who named UDT. (Here's the comment where he called it "updateless", which everyone else then picked up. In my original post I never gave it a name but just referred to "this decision theory".)
Has either one been fully specified/formalized?
There have been a number of attempts to formalize UDT, which you can find by searching for variations on "formal UDT" on LW. I'm not aware of a similar attempt to formalize TDT, although this paper gives some hints about how it might be done. It's not ...
I was feeling lethargic and unmotivated today, but as a way of not-doing-anything, I got myself to at least read a paper on the computational architecture of the brain and summarize the beginning of it. Might be of interest to people; it also briefly touches upon meditation.
Whatever next? Predictive brains, situated agents, and the future of cognitive science (Andy Clark 2013, Behavioral and Brain Sciences) is an interesting paper on the computational architecture of the brain. It’s arguing that a large part of the brain is made up of hierarchical systems, where each system uses an internal model of the lower system in an attempt to predict the next outputs of the lower system. Whenever a higher system mispredicts a lower system’s next output, it will adjust itself in an attempt to make better predictions in the future.
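Here's a rough sketch in code of how I understand the scheme - my own simplification, with made-up learning dynamics, not anything taken from Clark's paper:

```python
import random

class Level:
    """One layer in a predictive hierarchy: it predicts the signal coming up
    from the layer below and nudges its internal estimate whenever the
    prediction misses."""
    def __init__(self, learning_rate=0.1):
        self.estimate = 0.0
        self.learning_rate = learning_rate

    def observe(self, signal):
        error = signal - self.estimate               # prediction error
        self.estimate += self.learning_rate * error  # adjust the internal model
        return error                                 # pass the error upward

# Two stacked levels: the higher one models the lower one's prediction errors.
lower, higher = Level(), Level()
for _ in range(1000):
    sensory_input = 5.0 + random.gauss(0, 1)   # noisy input centred around 5
    err = lower.observe(sensory_input)
    higher.observe(err)

print(round(lower.estimate, 2))  # converges near 5: surprises shrink over time
```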
EDIT: Just realized, this model explains tulpas. Also has connections to perceptual control theory, confirmation bias and people's general tendency to see what they expect to see, embodied cognition, the extent to which the environment affects our thought... whoa.
How strong is the evidence in favor of psychological treatment really?
I am not happy. I suffer from social anxiety. I procrastinate. And I have a host of other issues that are all linked, I am certain. I have actually sought out treatment, with absolutely no effect. On the recommendation of my primary care physician I entered psychoanalytic counseling and was appalled by the theoretical basis and practical course of "treatment". After several months without even a hint of success I aborted the treatment and looked for help somewhere else.
I then read David Burns' "Feeling Good", browsing through it, taking notes, and doing the exercises for a couple of days. It did not help; of course, in hindsight, I wasn't doing the treatment long enough to see any benefit. But the theoretical basis intrigued me. It just made so much more sense to be determined by one's beliefs than by a fear of having one's balls chopped off, hating one's parents, and actively seeking out displeasure because that is what fits the narrative.
Based on the key phrase "CBT" I found "The Now Habit", and reading it actually helped to subdue my procrastination long enough to finish my ba...
I get confused when people use language that talks about things like "fairness", or whether people are "deserving" of one thing or another. What does that even mean? And who or what is to say? Is it some kind of carryover from religious memetic influence? An intuition that a cosmic judge decides what people are "supposed" to get? A confused concept people invoke to try to get what they want? My inclination is to just eliminate the whole concept from my vocabulary. Is there a sensible interpretation that makes these words meaningful to atheist/agnostic consequentialists, one that eludes me right now?
Here are some things people might describe as "unfair":
What sorts of things do you see in common among these situations?
Your list seems a bit... biased.
Let's throw in a couple more situations:
While people say "That's not fair" in the above examples and in these, it seems there are two different clusters of what they mean. In the first group, the objection seems to be to self-serving deception of others, particularly violation of agreements (or what social norms dictate are implicit agreements). Your examples don't involve deception or violation of agreements (except perhaps in the case of eminent domain), and the objection is to inequality. I find it strange that the same phrase is used to refer to such different things.
I think you could say that in both groups, people are objecting because society is not distributing resources according to some norm of what qualities the resource distribution is supposed to be based on.
In the first group of examples, people are deceiving others and violating agreements, and society says that people are supposed to be rewarded for honest behavior and keeping agreements.
For the second group of examples:
It's not a theistic concept - if anything, it predates theology (some animals have a sense of fairness, for example). We build social structures to enforce it, because those structures make people better off. The details of fairness algorithms vary, but the idea that people shouldn't be cheated is quite common.
Humans are diverse.
I mean this not only in the sense of them coming in all kinds of shapes, colours and sizes, having different world views and upbringings attached to them, but also in the sense of them having different psychological, neurological and cultural makeup. It does not sound like something that needs to be explicitly said, but apparently it does.
Of course, some voices have already pointed out that the usual population for studies is WEIRD, but the problem goes deeper and further. Even if the conscientious scientist uses larger populations, more representative of the problem at hand, the conclusions drawn tend to ignore human diversity.
One of the culprits is the concept of "average", or at least a misuse of it. The average person has an ovary and a testicle. Completely meaningless to say, yet we are comfortable hearing statements like "going to college raises your expected income by 70%" (number made up) and off to college we go. Statements like these suppress a great deal of relevant information, namely the underlying, inherent diversity in the population. Going to college may increase lifetime earnings, but the size of this effect might be highly depend...
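A made-up numerical illustration of how the average can hide this (subgroups and numbers entirely invented):

```python
# Hypothetical earnings boosts from college for two subgroups (figures invented).
group_a = [150, 140, 160]   # percent increase for people the degree suits well
group_b = [5, 0, -5]        # percent increase for people it suits poorly

population = group_a + group_b
average = sum(population) / len(population)
print(average)  # 75.0 -- "college raises expected income by ~75%"
# Yet nobody in either subgroup actually experiences anything close to 75%.
```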
It was surprising to see that the camel has two humps, that is, that one part of the population seems to be incapable of learning programming while the other part is capable.
The study you're probably thinking of failed to replicate with a larger sample size. While success at learning to code can be predicted somewhat, the discrepancies are not that strong.
http://www.eis.mdx.ac.uk/research/PhDArea/saeed/
The researcher didn't distinguish the conjectured cause (bimodal differences in students' ability to form models of computation) from other possible causes. (Just to name one: some students are more confident; confident students respond more consistently rather than hedging their answers; and teachers of computing tend to reward confidence).
And the researcher's advisor later described his enthusiasm for the study as "prescription-drug induced over-hyping" of the results ...
Clearly further research is needed. It should probably not assume that programmers are magic special people, no matter how appealing that notion is to many programmers.
See also the comments of Yvain's What Universal Human Experiences Are You Missing Without Realizing It? for a broad selection of examples of how human minds vary.
I've been struggling with how to improve in running all last year, and now again this spring. I finally realized (after reading a lot of articles on lesswrong.com, and specifically the martial arts of rationality posts) that I've been rationalizing that Couch to 5k and other recommended methods aren't for me. So I continue to train in the wrong way, with rationalizations like: "It doesn't matter how I train as long as I get out there."
I've continued to run intensely and in short bursts, with little success, because I felt embarrassed about having to walk at all; but I keep finding more and more people who report success with programs where you start slowly and gradually add in more running.
Last year, I experimented with everything except that approach, and ended up hurting myself by running too far and too intensely several days in a row.
It's time to stop rationalizing, and instead try the approach that's overwhelmingly recommended. I just thought it would be interesting to share that recognition.
Research on mindfulness meditation
Mindfulness meditation is promoted as though it's good for everyone and everything, and there's evidence that it isn't-- going to sleep is the opposite of being mindful, and a mindfulness practice can make sleep more difficult. Also, mindfulness meditation can make psychological problems more apparent to the conscious mind, and more painful.
The difficulties which meditation can cause are known to Buddhists, but are not yet well known to researchers or the general public. The commercialization of meditation is part of the problem.
How do I decide whether to get married?
Pros
Cons
She has said that she doesn't want to marry me if she's just my female best friend that I sleep with. But I don't know how to evaluate what she's asking. There are a number of possibilities. Maybe I don't feel the requisite feelings and thus she wouldn't want to be married. Maybe I do have the feelings and I have no way to evaluate whether I do or not. Maybe I'm not ever going to feel some extra undetec...
In your list you didn't mention the topic of having children. If you marry someone with the intention of spending the rest of your life together with them, I think you should be on the same page with regard to having children before you marry.
What exactly do you think/hope will change between the current situation (which I assume involves you two living together) and the situation if you were to marry?
Don't get married unless there is a compelling reason to do so. There's a base rate of 40-50% for divorce, and at least some proportion of existing marriages are unhealthy and unhappy. Divorce is one of the worst things that can happen to you, and many of the apparent happiness benefits of marriage arise because happier people are more likely to get married in the first place.
This isn't a question, just a recommendation: I recommend everyone on this site who wants to talk about AI familiarize themselves with AI and machine learning literature, or at least the very basics. And not just stuff that comes out of MIRI. It makes me sad to say that, despite this site's roots, there are a lot of misconceptions in this regard.
Not that I have anything against AI and machine learning literature, but can you give examples of misconceptions?
A koan:
A monk came to Master Banzen and asked, "What can be said of universal moral law?"
Master Banzen replied, "Among the Tyvari of Arlos, all know that borlitude is highly frumful. For a Human of Earth, is quambling borl forbidden, permissible, laudable or obligatory?"
The monk replied, "Mu."
Master Banzen continued, "Among the Humans of Earth, all know that friendship is highly good. For a Tyvar of Arlos, is making friends forbidden, permissible, laudable or obligatory?"
The monk replied, "Mu," and asked no...
Shouldn't Banzen's second question be something like "For a Tyvar of Arlos, is making friends frumful, flobulent, grattic, or slupshy?"?
How good is the case for taking adderall if you struggle with a lot of procrastination and have access to a doctor to give you a prescription?
So the quantified self (QS) community has existed for a while. Just as bodybuilding groups should be excellent test beds for what kind of exercises and chemicals will yield high results, the QS community should yield a preferably small, low-cost set of measures you should determine about yourself. Do these exist? They could be any blood measure, rhythm, time, psychological value, net worth ...
I've been reading about maximizers and satisficers, and I'm interested to see where LessWrong people fall on the scale. I predict it'll be significantly on the maximizer side of things.
A maximizer is someone who always tries to make the best choice possible, and as a result often takes a long time to make choices and feels regret for the choice they do make ('could I have made a better one?'). However, their choices tend to be judged as better; e.g. maximizers tend to get jobs with higher incomes and better working conditions, but to be less happy with them...
I wonder what the person who submitted the number 1488 was thinking. (Maximizing their answer, perhaps.)
I'm an Orthodox Jew, and I'd be interested to connect with others on LW who are also Orthodox. More precisely, I'm interested in finding other LWers who (a) are Jewish, (b) are still religious in some form or fashion, and (c) are currently Orthodox or were at some point in the past.
In case anyone else is curious, it appears that:
"apikorsus" has a range of meanings including "heretic", "damned person", "unbeliever"; the term may or may not be derived from the name of Epicurus.
[EDITED to add: As pointed out by kind respondents below, I was sloppy and mixed up "apikores" (which has the meanings above) and "apikorsus" (which means something more like "the sort of thing an apikores says"). My apologies.]
"l'havin ul'horos" means "to understand and to teach", as opposed to "to agree" or "to practice" or whatever. In the Bible, when the Israelites invade Canaan they are told not to learn to do as the natives do, and there's some famous commentary that says "but you are allowed to learn in order to understand and to teach".
[EDITED to add: I am not myself Jewish, nor do I know more than a handful of Hebrew words; if I have got the above wrong then I will be glad to learn.]
Tyler Cowen talks with Nick Beckstead about x-risk here. Basically he thinks that "people doing philosophical work to try to reduce existential risk are largely wasting their time" and that "a serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons."
My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of din...
My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.
A "serious" MIRI would operate in absolute secrecy, and the "public" MIRI would never even hint at the existence of such an organisation, which would be thoroughly firewalled from it. Done right, MIRI should look exactly the same whether or not the secret one exists.
I have trouble with the statement "In the end, we're all insignificant." I mean I get the sentiment, which is of awe and aims to reduce pettiness. I can get behind that. But I have trouble if someone uses it in an argument, such as: "Why bother doing X; we're all insignificant anyway."
Because, if you look closely, "significance" is not simply a property of objects. It is, at the very least, a function of objects, agents and scales. For example you can say that we're all insignificant on the cosmic scale; but we're also all ins...
Can anyone share the story behind the Future of Life Institute?
There are a lot of famous people on their list, and presumably FLI is behind the recent article in the Huffington Post, but how much does this indicate that said famous people are on board with the claims in the article? The top non-famous person on their list of people studies Monte Carlo methods and volunteers for CFAR - is this an indication that they're bringing on someone to do actual work? Or does Alan Alda being at the top of their list of advisors mean they're going to focus on communications?
Sorry if this topic has been beaten to death already here. I was wondering if anyone here has seen this paper and has an opinion on it.
The abstract: "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief t...
Since LW is the place where I found out about App Academy... I started working through their sample problems today, and at what level of perceived difficulty / what number of stupid mistakes should I give up? Both in the sense of giving up on working toward getting into App Academy specifically [because I doubt I think fast enough / have a good enough memory to pass the challenges -- the first four problems in their second problem-set took me over an hour, and I had to look a few things up despite having gone through the entire Codecademy Ruby course] and ...
Can anyone recommend some good sources/material (books, blogs, also advice from personal experience) for techniques of self-analysis and introspection? Basically, I'm looking for things to keep in mind while I attempt to find patterns of behavior in myself and ways of changing them. I realize that this is a very broad category. But roughly, material akin to Living Luminously.
I'd like to gauge interest in posting bulleted, section-by-section, non-fiction book summaries, with the intention of some discussion. I think that it would be of high utility to those who want knowledge but haven't the time to read a book, and for me who wants to read a book and work through the ideas more thoroughly. The first two books I have in mind are Understanding Uncertainty which has been recommended by Lukeprog, and The Moral Animal which has been recommended by EY.
It could be chapter by chapter, perhaps in weekly open threads, or the whole book ...
Something that keeps nagging me in my mind: A young college graduate comes up to you and asks "Where should I look for what kind of work to have the highest living standard?"
Remember, a lower nominal wage in a country where that wage has higher purchasing power might suit this individual better. Naively I might say the US or Switzerland, but something tells me I am overlooking a gigantic hole.
For someone skilled enough to choose their location and who thinks long-term enough to live very cheaply for a number of years, higher nominal wages mean higher absolute savings.
Live somewhere expensive when you're getting started, and move somewhere cheap when you're slowing down.
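A toy comparison with invented figures, just to make the arithmetic explicit:

```python
# Invented figures: a high-wage, high-cost country vs. a lower-wage, cheaper one.
# For someone saving aggressively, what matters is the absolute surplus,
# which they can later spend somewhere cheap.
wage_high, cost_high = 90_000, 50_000   # expensive country
wage_low,  cost_low  = 40_000, 15_000   # cheaper country

print(wage_high - cost_high)  # 40000 saved per year
print(wage_low - cost_low)    # 25000 saved per year
# The cheaper country offers better purchasing power per unit earned,
# but the expensive one still leaves a larger absolute pile of savings.
```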
I do not know if this is the best place, but I have lurked here and on OB for roughly a year, and have been a fellow traveler for many more. Specifically, though, I want to talk to any members who have ADHD about how they go about treating their disorder. On the standard anti-akrasia topics, the narrative is that if you have anxiety, depression, or the like, you should treat that first, but there seem to be few members here who have this. Going to other forums to talk about stuff like which medication is "better" is filled w...
Does anyone have suggestions for Android self-tracking/quantified-self apps? I just got an Android phone and am hoping to begin tracking my diet, exercise, etc., as well as various outcomes, and try to find correlations.
Brienne Strohl mentioned she was reading "Robby's re-sequencing of Eliezer's Sequences" on facebook/twitter, can anyone link me to it?
Hi, CFAR alumni here. Is there something like a prediction market run somewhere in discussion?
Going mostly off of Gwern's recommendation, it seems like PredictionBook is the go-to place to make and calibrate predictions, but it lacks the "flavour" that the one at CFAR did. CFAR (in 2012, at least) had a market where your scoring was based on how much you updated the previous bet towards the truth. I really enjoyed the interactional nature of it.
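I don't know the exact rule CFAR used, but one natural way to score "how much you updated the previous bet towards the truth" is the change in log score; a sketch under that assumption:

```python
from math import log

def update_score(prev_prob, new_prob, outcome):
    """Reward for moving the standing probability toward what actually happened.
    `outcome` is True/False; probabilities are for the event occurring.
    Positive if the update improved the log score, negative if it made it worse."""
    def log_score(p):
        return log(p) if outcome else log(1 - p)
    return log_score(new_prob) - log_score(prev_prob)

print(update_score(0.40, 0.70, True))   # moved toward the truth: positive
print(update_score(0.40, 0.20, True))   # moved away from the truth: negative
```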
What would it take to get such a thread going online? I believe one of the reasons it worked so well at...
I haven't been following LW discussions of Löb's theorem etc. very much at all but this guide to the m4 macro language (a standard Unix tool) seemed to have the same character, especially this section. Dunno if this is interesting to people who are interested in Löb's theorem.
Fairly off-topic question, but I imagine there'll be suitable people to answer it on LW. Any recommendations for cheap and cheerful VPS hosting? Just somewhere to park a minimum CentOS install. It's for miscellaneous low-priority personal projects that I might abandon shortly after starting, so I'm hesitant to pay top dollar for a quality product that I might end up not using. On the other hand, I want to make sure I get what little I'm paying for.
I promise I'm not a stingy unfriendly AI looking for a new home.
I would like to learn to draw.
I would like to be able to have fun expressing myself via art. How long does it take to learn to draw, from zero to good enough not to be embarrassed by the results?
What techniques are useful? Is there any sense in e.g. Drawing on the Right Side of the Brain?
What's the current policy on bare downvoting, as in downvoting a comment/post without giving at least a short explanation for why one did so? I've had some comments downvoted recently, and without explanations it's frustrating and a poor feedback mechanism.
What's the current policy on bare downvoting, as in downvoting a comment/post without giving at least a short explanation for why one did so?
There ain't no policy. People up- and down-vote as they please.
This should be reserved to fight content which is offensive, spam, trolling, rampant crackpottery, blatant off-topic etc.
Not if you aim to enforce a level of discussion higher than mere absence of pathology. I like for there to be places that distance themselves from (particular kinds of) mediocrity...
I personally made a rule of upvoting any content with net negative score which doesn't deserve a downvote
...which is made more difficult by egalitarian instincts.
I'd say it is an extension of the "Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
It's not. Punishment is different enough from deciding who to talk with. See also Yvain on safe spaces.
I'd say that even more important than giving an explanation is not downvoting merely because you disagree. The signal transmitted by downvoting is "I don't want to hear this", or in simpler language "shut up". This should be reserved to fight content which is offensive, spam, trolling, rampant crackpottery, blatant off-topic etc. Mistakes made in good faith don't deserve a downvote. I'd say it is an extension of the "Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever." rule. The alternative is death spirals, blue-green politics, and plainly ruining the community experience for everyone.
I personally made a rule of upvoting any content with net negative score which doesn't deserve a downvote, even if I disagree, especially when it's a comment of a person I'm currently arguing against. I want arguments that are discussions in which both sides are trying to arrive at the truth, not fights or two-people-showing-off-how-smart-they-are (is there a name for it?).
I think you're drawing a false equivalence here. While a downvote does carry the meaning of "I don't want to hear this", most of the meaning of "shut up" is connotation, not denotation, and those connotations don't necessarily carry over.
Mere disagreement generally isn't enough to justify a d...