Open Thread, Apr. 20 - Apr. 26, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (350)
What sort of technology is "Moloch-dangerous"?
That is, what sort of technology, social, engineering, or otherwise, presents new opportunities to throw things into the fire to gain victory? Is it "all," or "some with [these features]"?
Off the top of my head, wherever the marginal returns from an increase in efficacy are very large, either because the outcome is very sensitive to some factor or because the prize is just very big. So places with strong feedback loops, where you can consolidate the gains and you know others are doing so as well. Thus, probably just about any technology that is suddenly in demand due to an unexpected event, for one. Another may be any sort of strong, truly defensive (protective, not deterrent) capability. Obvious yet profitable applications of new technology fall into the event category above, if first to market wins most of the market share.
It may also be worth thinking about anti-Moloch factors, which result in underinvestment relative to gains: market share will be split many ways, and the first mover has little advantage and perhaps greater costs. So maybe lack of strong IP protection. What else?
The Wall Street Journal has an article up claiming that the world economy is currently experiencing an excess of capital, labor, and commodities, and that this is potentially a cause of serious problems.
Could anyone explain to me how it is possible to have an excess of capital and an excess of labor?
ETA: You can get around the paywall by googling the title of the article and clicking the first link.
There are not enough people with great ideas to produce new products and start companies that don't take away the market of existing companies.
Two guesses:
There is a lot of capital and a lot of labor, but in different areas where they don't complement each other. For example, maybe there are a lot of car factories and equipment, but not enough skilled workers to operate the equipment.
There is just more economic power than people actually want. For example, there is a lot of car manufacturing capacity and a lot of skilled auto workers, but people don't feel the need to buy more cars.
As a side note, I strongly believe that the conceptual tools of "modern" economics are increasingly ill-suited to describe the modern world. Economic concepts like supply/demand and labor/capital were developed in an era where most economic activity was centered around the production and distribution of goods - either agricultural or industrial. In the modern world, physical goods are becoming less and less important. Agriculture is down to 1% of GDP in the US, and industry is down to 20% (even this number seems high). The economy is now dominated by sectors like health care, education, technology, and government. These sectors cannot be described well by traditional economic concepts. What does it mean to talk about the "demand" for health care or education? How can one apply concepts like marginal cost and comparative advantage to the technology sector, where the marginal cost is zero and there is usually a free version of every product available?
A lot of capital existing means that inflation-adjusted US Treasury bonds currently have a 0.07% interest rate. The rate was even negative in 2013.
A lot of labor existing means that we have high unemployment in many countries.
It's sad that big pharma companies buy back their own shares and let go of employees instead of investing the money into developing new drugs.
Apple doesn't buy back shares, but has $158.8 billion of cash in its reserves that it doesn't manage to invest in developing new technology.
With a 1,000 square kilometer industrial complex for the manufacture of slinkys and a million trained botanists.
The article makes it pretty clear they are not describing a mismatch scenario. In a mismatch you have simultaneous shortages and gluts, but the article never talks about shortages of X while there is a surplus of Y, only gluts.
Send them to me! Slinkys, that is! It's time to change the world for the better!
Send them to me. Botanists, that is. It's time to change the world for the better.
I am really confused by economics.
I can't see this article; what kind of labor did they have in mind? There is a chronic shortage of skilled/creative labor, so I am assuming they mean the kind of labor that's vulnerable to being automated away. Perhaps the key shortage that explains this situation is education and training. In a modern economy, people and money aren't enough anymore.
You can access the full article by googling the article title. It should be the first link.
Paywall.
— Wikipedia, "Overproduction"
Has anyone made a mathematical model of that? I don't know what most of the words in it mean, in concrete terms.
It sounds like "we can make more than we want with less labour than we can supply." Is that accurate?
In scarcity, which has been all of history up to the present, everyone's strategy has been to get as much work as they can, make as much stuff as they can, and sell as much stuff as they can, in order to get as much stuff as they can in exchange. I can imagine that when half the workforce can make twice as much stuff as everyone wants, that may not work so well. But that's just a verbal story, and I don't trust those.
Sigh. Another dead transhumanist. I never met Dan Fredinburg, but I gather from his friends' posts on Facebook that he wanted to upload his mind some day.
And what an unlikely way to die. You put your life at risk by trying to climb Everest under normal conditions. Fredinburg just happened to attempt that when a catastrophic earthquake struck Nepal.
Are there people who would be interested in a (virtual) reading group for Pearl's Causality?
I've edited the LW wiki to make a list of LWers interested in making debate tools.
In general, I think it'd be useful to make a post similar to the "What are you working on" threads, so that people with similar interests can find each other. What do people think of a "People working on X repository" post?
What are the consequences of neutral actions in ethics? After a quick perusal of Google, there doesn't seem to be anything addressing my question, and I think there should be some discussion on this.
This question is related to a problem I've been having with ethics lately; namely, should one's ethical system be viable in any kind of reality? Failing that, shouldn't there be some omniversal meta-ethical structure?
I've had a few thoughts on this, and some arguments played out in my head, but I want to see what others think.
What sort of consequences are you thinking of? The idea that ethics can consider two options equally preferable and not care which one you take follows from the idea of an ethical utility function (even a complicated function that only exists in an abstract mathematical sense). We don't need to assume it directly, we can go with the Archimedean property (roughly, that crossing the street can be worth a small chance of death).
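For reference, here is the Archimedean (continuity) axiom as it appears in the von Neumann-Morgenstern setup; it is the formal version of the crossing-the-street point:

```latex
% Archimedean/continuity axiom: no outcome is lexically (infinitely)
% better or worse than another. For lotteries with A \succ B \succ C,
% sufficiently extreme mixtures of A and C still straddle B:
\exists\, p, q \in (0,1):\quad
p A + (1-p) C \;\succ\; B \;\succ\; q A + (1-q) C .
```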
A paper with some empirical results on tools and techniques for fighting procrastination and distractions (in the context of taking online courses).
An interesting complete-disillusionment-with-academia letter from a Ph.D. student.
People have been complaining about academia for a very long time. Then again, every other human organization has burned-out/disillusioned people writing similar complaints.
As for me, I definitely don't have what it takes to stick around in academia and plan to leave as soon as I get my degree.
I remember hearing a quote somewhere on LW saying something like "pain/discomfort is what you feel when you level up". Does anyone know what the actual quote is? Where it was said?
There's also "Pain is weakness leaving the body", which is less specific but probably pre-dates Eliezer's quote.
I like it!! Never heard it before but it's becoming one of my favorite quotes.
That line is specifically helpful for tricking your brain into thinking that physically exhausting exercise (e.g. pushups) is a good thing. (Don't use it too much, and don't hurt yourself doing it.)
That's funny; I actually tried to do as many pushups in a row as I could the day after hearing it. And I did the most I've ever done before, btw :)
Why do you think it's specifically good for physical exertion? What about mental or emotional exertion?
I suspect it's because pain is a physical feeling, and weakness is similarly understood as a physical problem. By equating pain with weakness leaving, you convince yourself that (temporarily) more pain is helpful to the cause of doing more pushups, rather than "not helpful", which is how you would previously have treated pain before deciding to stop.
I have heard from danielfilan that it worked before for pushups and similar exercise.
(Side note: don't push too hard; trust your body's limits - it tends to know when it's going to break.)
My impression is that the advice applies more generally than to just physical pain: there are a lot of times when your body produces pain-like signals telling you to stop, when it's really in your interest to push through.
And so, to use the emotional example, when I push past fear, I imagine it as weakness leaving my body. I imagine a) becoming a stronger person for having pushed past it, more capable of pushing past it in the future, and b) weakness leaving the body in the sense of pushing myself towards an end I prefer.
I think you're confusing your body signaling with pain and your mind yelling "I don't wanna!" These are very different things.
Thanks for clarifying. I know what you mean, but I had just been using the wrong vocabulary.
Eliezer said it in http://lesswrong.com/lw/ul/my_bayesian_enlightenment : "That scream of horror and embarrassment is the sound that rationalists make when they level up."
THANK YOU!
The story of Matheryn Naovaratpong's cryopreservation has gotten quite a bit of coverage in English-language websites in Southeast Asia:
Father of cryonically preserved Thai girl: I will just hug her if we meet again
http://www.straitstimes.com/news/asia/south-east-asia/story/father-cryonically-preserved-thai-girl-i-will-just-hug-her-if-we-mee#xtor=CS1-10
That's very interesting. I'd be interested to see if this actually leads to an uptick of interest in Southeast Asia.
That region's newly emergent middle-class and wealthy people might lack the Western prejudices which have made cryonics such a hard idea to sell in our parts of the world.
For one thing, they have witnessed rapid economic progress in their own societies in their own generation, so they wouldn't understand the appeal of Western pessimism about apocalyptic and dystopian futures.
They likely don't have the same prejudices as Westerners but that doesn't mean that they don't have other prejudices.
Without understanding the local culture a lot more than most Westerners do, it's hard to make this kind of prediction.
That's an interesting hypothesis. Is there any way to test it? Also is there any way to take advantage of it? That suggests that the window for cryonics there may not be very long, possibly on the order of 20 years or so.
I don't know how to test it, though I suspect the relative absence of Christian beliefs in those countries would make a difference. And why would such a "window" even exist there? If these countries can figure out how to keep economic progress going indefinitely without the dysfunctions in Western societies identified by, say, Peter Thiel, then these countries could very well take the lead in becoming increasingly "futuristic" on their own, without having to look to the West for models and guidance.
By window I meant the following: you said that "they have witnessed rapid economic progress in their own societies in their own generation, so they wouldn't understand the appeal of Western pessimism about apocalyptic and dystopian futures." If that is what is going on, then the next generation may not see that as much. If so, we have around a generation. I agree that if economic progress continues at a fast pace they may not end up with some of the issues we have here, but in general, as developing countries have neared parity with developed countries, their improvement rates by many metrics have slowed down and come more or less into alignment with Western growth rates. Look, for example, at infant mortality levels and life expectancy.
It would be interesting to find out what the public reaction is in Thailand, and also to see what their science fiction is like.
Does anyone have software or procedures they have found useful for evaluating large, hard, inference problems? I don't know what the right class of problem is. Mine is that I have several years and lots and lots of notes of symptoms a family member has exhibited, including subjective recollections all the way to MRIs, and I'd like to organize my thoughts and inferences around what common cause(s) might be, priors, weight of evidence, etc.
I plan to improvise, but I'd like to steal first.
One way to approach it would be to organize the data around the questions "What seems to have an effect on the system? What makes things better, what makes things worse, even if the effect is very small (but reproducible)?" Then, investigate those things.
Doctors are kind of terrible at doing that. They tend to have a tool box of "these are the things I know how to do" and any information that doesn't fit their specific specialty is discarded as irrelevant.
I'm not sure how useful it would be to weight things by evidence if part of the problem is that some things haven't been investigated enough, or are simply not well-enough understood by modern medicine and science.
I have a friend with an undiagnosed disease and am thinking about doing the same thing. One thing I've thought about is using a Bayesian Network as a tool, but then again, I'd have to be really careful about how I plug in data, and it would be good to know if there are other approaches to this as well. PM me if you find a good way to go about this.
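Not an answer about existing software, but here is a minimal sketch (plain Python; every hypothesis, symptom, and number is an invented placeholder) of the naive-Bayes bookkeeping such a tool would automate: a prior over candidate causes, per-symptom likelihoods, and a posterior.

```python
# Toy Bayesian bookkeeping over candidate diagnoses. All names and
# numbers below are invented placeholders, not medical content.

priors = {"cause_A": 0.01, "cause_B": 0.005, "something_else": 0.985}

# P(symptom | cause) for each symptom. Getting these numbers right
# (from literature or clinicians) is the genuinely hard part.
likelihoods = {
    "fatigue":    {"cause_A": 0.90, "cause_B": 0.60, "something_else": 0.20},
    "joint_pain": {"cause_A": 0.70, "cause_B": 0.10, "something_else": 0.05},
}

observed = ["fatigue", "joint_pain"]

# Posterior is proportional to prior times the per-symptom likelihoods
# (the naive assumption: symptoms are independent given the cause).
unnormalized = {}
for cause, prior in priors.items():
    p = prior
    for symptom in observed:
        p *= likelihoods[symptom][cause]
    unnormalized[cause] = p

total = sum(unnormalized.values())
posterior = {cause: p / total for cause, p in unnormalized.items()}
print(posterior)
```

A real Bayesian network relaxes the independence-given-cause assumption, which is where dedicated tools earn their keep.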
Not sure what you mean.
BUGS maybe?
I think what GuySrinivasan's asking is closer to "how do I organize a mass of evidence & ideas about a topic so I can better reason about it" than "how do I grind numerical statistical inferences out of a formal Bayesian model"?
Seeking Phi in Game Theory
For a bit of fiction I'm thinking of... is there any aspect of game theory in which offering the number/symbol phi can indicate a threat to defect unless the other player cooperates?
So far, a quick bit of Googling has only turned up phi in relation to the Bargaining problem, at https://en.wikipedia.org/wiki/Bargaining_problem , and while I /could/ wrestle such an interpretation out of what's there, I'm hoping there's a less abstruse way to go about it.
Aha - at http://www.pnas.org/content/109/26/10409.full.pdf , zero-determinant strategies are defined by two factors, chi and phi, and at least when chi is 1, maximum phi results in the strategy of Tit-for-Tat, which is exactly what I was looking for.
Hm... is there a mathematical notation for the maximum of a variable, like |x| indicates the absolute value of x?
In addition, when chi is at the extortionate level of 3.5, then when phi is maxed out, the odds of player X cooperating if player Y cooperated last turn are 66.6% or 50%, depending on whether player X also cooperated on the previous turn. Thus, in order to give player Y enough of an incentive to want to cooperate - providing odds of at least 50% of ending up with CC - player X may have their own incentive to set chi to something below 3.5. As it happens, there are two mathematical constants a little below that value which might be chosen - the inverse Fibonacci constant, also called psi, about 3.36, or the more widely-known pi, 3.14.
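For anyone who wants to check those numbers, here is a sketch (Python) of the zero-determinant strategy formulas from the linked Press-Dyson paper, using the conventional payoffs T=5, R=3, P=1, S=0; with chi = 1 and phi at its maximum it reproduces Tit-for-Tat, and with chi = 3.5 it reproduces the 66.6%/50% figures above.

```python
# Zero-determinant "extortion" strategies (Press & Dyson 2012) for the
# Iterated Prisoner's Dilemma with the conventional payoffs.
T, R, P, S = 5, 3, 1, 0

def zd_strategy(chi, phi):
    """Player X's cooperation probabilities given the previous round's
    outcome (X's move, Y's move): (CC, CD, DC, DD)."""
    p_cc = 1 - phi * (chi - 1) * (R - P) / (P - S)
    p_cd = 1 - phi * (1 + chi * (T - P) / (P - S))
    p_dc = phi * (chi + (T - P) / (P - S))
    p_dd = 0.0
    return (p_cc, p_cd, p_dc, p_dd)

def max_phi(chi):
    # Largest phi keeping all four probabilities in [0, 1];
    # for chi >= 1 the binding constraint is p_cd >= 0.
    return 1 / (1 + chi * (T - P) / (P - S))

print(zd_strategy(1.0, max_phi(1.0)))  # (1.0, 0.0, 1.0, 0.0): Tit-for-Tat
print(zd_strategy(3.5, max_phi(3.5)))  # p_CC = 2/3, p_DC = 1/2
```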
Is [image of a max-notation symbol] not sufficient?
My goal is for a relatively simple, even iconic, image or logo, which can be easily interpreted regardless of the viewer's language. The symbol for Phi - a circle with a line through it - provides fodder for as much interpretation as I desire, from the overlapped 1 and 0 of binary to an ouroboros to an axis-and-equator to the Golden Ratio - and if a minimal modification can explicitly add "the maximum value of the variable denoted by this symbol", I'll be a happy little rat indeed.
This page suggests "∨".
(I've only ever seen that symbol meaning "or", but that's a kind of maximum.)
That's one of the two possibilities I've found over the past day. The other is ⊤ from https://en.wikipedia.org/wiki/Greatest_element .
(I wonder if presenting this idea to an actual mathematician would induce any wincing? Off to /r/math to find out...)
∨ is the mathematical symbol for "or" (in logic) -- my guess is that it may be derived from the fact that the initial letter of the Latin word for "or" is "v". There's a kinda convention that when you have a(n associative) binary operator, you use a bigger version of it to signify applying it to all the things in a sequence or set, so you'd want a larger one -- a bit like a capital "V".
⊤ is the mathematical symbol for the "top" element of a Boolean algebra; maybe more generally of a lattice. You wouldn't use it to mean "maximum of these things" in general.
I've come up with an interesting thought experiment I call oracle mugging.
An oracle comes up to you and tells you that either you will give them a thousand dollars or you will die in the next week. They refuse to tell you which. They have done this many times, and everyone has either given them money or died. The oracle isn't threatening you. They just go around and find people who will either give them money or die in the near future, and tell them that.
Should you pay the oracle? Why or why not?
This is essentially just another version of the smoking lesion problem, in that there is no connection, causal or otherwise, between the thing you care about and the action you take. Your decision theory has no specific effect on your likelihood of dying, that being determined entirely by environmental factors that do not even attempt to predict you. All you are paying for is to determine whether or not you get a visit from the oracle.
ETA: Here's a UDT game tree (see here for an explanation of the format) of this problem, under the assumption that the oracle visits everyone meeting his criteria, and uses exclusive-or:
ETA2: More explanation: the colours are states of knowledge. Blue = oracle asks for money, Orange = they leave you alone. Let's say the odds of being healthy are α. If you Pay, the expected reward is α(-1000) + (1-α) DEATH; if you Don't Pay, the expected reward is α·0 + (1-α) DEATH. Clearly (under UDT) paying is worse by a term of -1000α.

Variation on this:
An oracle comes up to you and tells you that you will give it a thousand dollars. This oracle has done this many times, and every time it has told people this, the people have given the oracle a thousand dollars. This oracle, like the other one, isn't threatening you. It just goes around finding people who will give it money. Should you give the oracle money?
Under UDT: pay iff you need human contact so much that you'd spend $1000 to be visited by a weird oracle who goes around posing strange decision theory dilemmas.
No, but you will.
Every decision theory I throw at it says either don't pay or Error: Divide By Zero. Is this a trick question?
I don't know what "error: divide by zero" means in this context. Could you please clarify? (If you're suggesting that the problem is ill-posed under some decision theories because the question assumes that it is possible to make a choice but the oracle's ability to predict you means you cannot really choose, how doesn't that apply to the original problem?)
You want to figure out whether to do as the oracle asks or not. To do this, you would like to predict what will happen in each case. But you have no evidence concerning the case where you don't do as it asks, because so far everyone has obliged. So, e.g., Pr(something good happens | decline oracle's request) has Pr(decline oracle's request) in the denominator, and that's zero.
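In symbols, the objection is just that conditioning on a probability-zero event leaves the conditional probability undefined:

```latex
\Pr(\text{good} \mid \text{decline})
  = \frac{\Pr(\text{good} \wedge \text{decline})}{\Pr(\text{decline})},
\qquad \text{undefined when } \Pr(\text{decline}) = 0 .
```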
I think you can say something similar about the original problem. P(decline oracle's request) can (for the new problem) also be phrased as P(oracle is wrong). And P(oracle is wrong) is zero in both problems; there's no evidence in either the original problem or the new problem concerning the case where the oracle is wrong.
Of course, the usual Newcomb arguments apply about why you shouldn't consider the case where the oracle is wrong, but they don't distinguish the problems.
That's a forward-looking probability and is certainly not zero.
In the absence of evidence you just fall back on your prior.
In order to get Error: Divide By Zero, you have to be using a particular kind of decision theory and assume P(decline oracle's request) = 0.
Your prior for what?
For the baseline, "underlying" probability of the oracle's request being declined. Roughly speaking, if you have never seen X happen, it does not mean that X will never happen (=has a probability of zero).
This assumes you're a passive observer, by the way -- if you are actively making a decision whether to accept or decline the request you can't apply Bayesian probabilities to your own actions.
I believe in testing rules and breaking things. So no. Don't give and see what happens.
Clearly you give them money, since otherwise you are almost certain to die. It's just one-boxing in disguise.
I really want to say that you should pay. Obviously you should precommit to not paying if you can, and then the oracle will never visit you to begin with unless you are about to die anyway. But if you can't do that, and the oracle shows up at your door, you have a choice to pay and live or not pay and die.
Again, obviously it's better to not pay and then you never end up in this situation in the first place. But when it actually happens and you have to sit down and choose between paying it to go away or dying, I would choose to pay it.
It's all well and good to say that some decision theory results in optimal outcomes. It's another to actually implement it in yourself, to make sure every counterfactual version of yourself makes the globally optimal choice, even if there is a huge cost to some of them.
The traditional LW solution to this is that you precommit once and for all to this: Whenever I find myself in a situation where I wish that I had committed to acting in accordance with a rule R I will act in accordance with R.
That's great to say, but much harder to actually do.
For example, suppose Omega pays $1,000 to people or asks them to commit suicide. But it only asks people it knows 100% will not do it; otherwise it gives them the money.
The best strategy is to precommit to suicide if Omega asks. But if Omega does ask, I doubt most lesswrongers would actually go through with it.
So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega has a merely 99% accuracy.
Your formulation, however, doesn't work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for having $990 expected winnings). If you don't precommit, then with a 1% chance you might get $1000 for free. In most cases, the second option is better.
Thus, the suicide strategy requires very strong faith in Omega, which is hard to imagine in practice. Even if Omega actually is infallible, it's hard to imagine evidence extraordinary enough to convince us that Omega is sufficiently infallible.
(I think I am willing to bite the suicide bullet as long as we're clear that I would require truly extraordinary evidence.)
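To make the accuracy-sensitivity concrete, here is a back-of-the-envelope sketch (Python; the dollar value V assigned to one's life is an invented illustration parameter, not anything from the thread):

```python
# Toy expected values for the suicide variant: Omega asks only those it
# predicts will refuse, and pays $1000 to everyone else. V is a made-up
# dollar stand-in for the value of your life.

def ev_precommit(accuracy, V, prize=1000):
    # Predicted correctly: you get the prize. Predicted wrongly (prob.
    # 1 - accuracy): you are asked, and you follow through.
    return accuracy * prize - (1 - accuracy) * V

def ev_refuse(accuracy, V, prize=1000):
    # Predicted correctly: asked, you refuse, nothing happens.
    # Predicted wrongly: you get the prize for free.
    return (1 - accuracy) * prize

V = 10_000_000
for acc in (0.99, 0.999999, 1.0):
    print(acc, ev_precommit(acc, V), ev_refuse(acc, V))
# At 99% accuracy precommitting loses badly; only as accuracy approaches
# certainty does the ~$990 expected gain dominate, which is the
# "extraordinary evidence" point above.
```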
Please Don't Fight the Hypothetical. I agree with you if you are only 99% sure, but the premise is that you know Omega is right with certainty. Obviously that is implausible, but so is the entire situation with an omniscient being asking people to commit suicide, or oracles that can predict if you will die.
But if you like you can have a lesser cost, like Omega asking you to pay $10,000. Or some amount of money significant enough to seriously consider just giving away.
I did say what I would do, given the premise that I know Omega is right with certainty. Perhaps I was insufficiently clear about this?
I am not trying to fight the hypothetical, I am trying to explain why one's intuition cannot resist fighting it. This makes the answer I give seem unintuitive.
So, as in most such problems, there's an important difference between the epistemological question ("should I pay, given what I know?") and the more fundamental question ("should I pay, supposing this description is accurate?"). Between expected value and actual value, in other words.
It's easy to get those confused, and my intuitions about one muddy my thinking about the other, so I like to think about them separately.
WRT the epistemological question, that's hard to answer without a lot of information about how likely I consider accurate oracular ability, how confident I am that the examples of accurate prediction I'm aware of are a representative sample, etc. etc. etc., all of which I think is both uncontroversial and uninteresting. Vaguely approximating all of that stuff I conclude that I shouldn't pay the oracle, because I'm not justified in being more confident that the situation really is as the oracle describes it, than that the oracle is misrepresenting the situation in some important way. My expected value of this deal in the real world is negative.
WRT the fundamental question... of course, you leave a lot of details unspecified, but I don't want to fight the hypothetical here, so I'm assuming that the "overall gist" of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer. That's a good deal for me; I'm inclined to take it. (Though I might try to negotiate the price down.)
The knock-on effect is that I encourage the oracle to keep making this offer... but that's good too; I want the oracle to keep making the offer. QALYs for everyone!
So, yes, I should pay the oracle, though I should also implement decision procedures that will lead me to not pay the oracle.
I think a key part of the question, as I see it, is to formalize the difference between treatment effects and selection effects (in the context where your actions might reflect a selection effect, and we can't make the normally reasonable assumption that our actions result in treatment effects). An oracle could look into the future, find a list of people who will die in the next week, and a list of people who would pay them $1000 if presented with this prompt, and present the prompt to the exclusive or of those two lists. This doesn't give anyone QALYs they wouldn't have had otherwise.
And so I find my intuitions are guided mostly by the identification of the prompter as an "oracle" instead of a "wizard" or "witch." Oracle implies selection effect; wizard or witch implies treatment effect.
Leaving aside lexical questions about the connotations of the word "oracle", I certainly agree that if the entity's accuracy represents a selection effect, then my reasoning doesn't hold.
Indeed, I at least intended to say as much explicitly ("I don't want to fight the hypothetical here, so I'm assuming that the 'overall gist' of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer.") in my comment.
That said, it's entirely possible that I misread what the point of DanielLC's hypothetical was.
DanielLC said:
I interpreted that as a selection effect, so my answer recommended not paying. Now I realize that it may not be entirely a selection effect. Maybe the oracle is also finding people whose life would be saved by making them $1000 poorer, for various exotic reasons. But if the probability of that is small enough, my answer stays the same.
Right. Your reading is entirely sensible, and more likely in "the real world" (by which I mean something not-well-thought-through about how it's easier to implement the original description as a selection effect), I merely chose to bypass that reading and go with what I suspected (perhaps incorrectly) the OP actually had in mind.
Pay iff you would pay $1000 to avoid learning of your death in the last week of your life. If you don't pay, the oracle only shows up when you are about to die anyway.
I wouldn't pay. Let's convert it to a mundane psychological experiment, by replacing precognition with precommitment (which is the right approach according to UDT):
1) Ten participants sign up for the experiment.
2) One participant is randomly chosen to be the "loser". We know who the "loser" is, but don't tell the participants.
3) Also, each participant tells us in private whether they are a "payer" or "non-payer".
4) Each "payer" who is not a "loser" pays $10 (this corresponds to paying the oracle and staying alive). The "loser" pays $100 (this corresponds to dying). Everyone else pays nothing.
It seems obvious that you should choose to be a "non-payer", right?
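The arithmetic, for the ten-participant setup above:

```latex
E[\text{cost} \mid \text{payer}] = \tfrac{9}{10}\cdot \$10 + \tfrac{1}{10}\cdot \$100 = \$19,
\qquad
E[\text{cost} \mid \text{non-payer}] = \tfrac{1}{10}\cdot \$100 = \$10 .
```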
In terms of the original problem, if you're the kind of person who would pay the oracle if you were approached, you're causing the oracle to approach you, so you're paying for nothing.
I don't think that it's specified in the OP that the oracle considers it likely that you will pay or indeed approaches people based on their likelihood to pay.
But it is! It really depends on how many levels of "I know that the oracle knows that I know" you want to go into. Because if the oracle is able to factor your decision to pay or not into whether they tell you, then that's a super-duper-oracle.
Also, paying and dying is permissible, and not great either.
It's just a version of Newcomb's problem with negative outcomes instead of positive ones.
Presumably the oracle makes its offer only to people from two classes: (1) Those who will die next week AND will not pay $1000; and (2) Those who will pay $1000 AND not die next week. Since it's the oracle it can identify these people and make its offer only to them. If you got this offer, you are in one of the above classes but you "don't know" in which.
If anyone has sleep apnea, with or without snoring, or even a hint of being too tired during the day, please fix it. My life is profoundly better; I have access to a life I did not know was possible, with no more sadness or depression whatsoever.
My life is 20-40x better. I feel like I have woken up in another world that was shut off from me for the first 20-ish years of my life.
On that note, I've heard a lot about how addressing sleep apnea is great but how do you check if you have it in the first place (or, at least, to the extent that would warrant seeing a doctor about it)? 'Being tired during the day' doesn't seem like a strong self-diagnostic criterion.
I tried a lot of sleep tracking with apps and wearables (Fitbit, Basis, Sleep as Android).
I currently use both Fitbit and Basis; Fitbit visualises long-term sleep better, Basis visualises a single night better.
These devices showed me what my sleep looks like, and further, what my "normal sleep pattern" looks like. While I have good sleep now, if I stop having good sleep, I will have the graphs to prove it.
Sleeping on my side worked for me; if I am not disciplined I mess it up, and the expected benefit is so large that perhaps a mouthguard or machine is worth it. If you wake up at night but cannot remember it, or remember falling back asleep, then I know that is a good sign.
Google "sleep apnea" or "shallow breathing while sleeping". I'm afraid I cannot do much more to help with my current knowledge.
Seeking writing advice: how to keep writing
I've been having some shoulder pain for the past couple of weeks, which I've seen a doctor for. I've also noticed that I haven't actually written anything new for my novel, "S.I.", for almost that long, and have just been posting chapters from my buffer to the forum I post them in.
Given my previous attempt at writing long fiction ("Myou've Gotta Be Kidding Me"), I anticipate two likely courses. One, pain sucks, and when it goes away, my writing motivation will return, and I'll get back into the swing of things. Or two, my writing engine has run out of motivation-fuel for this story generally. In the latter case, I think I can avoid leaving the story entirely unfinished, though there would still be all sorts of dangling plot threads and unsolved mysteries; I should be able to muster up enough typing to have my protagonist finally feel overwhelmed by everything she's facing, retreat to Elliot Lake, and jump to my intended finale. It's far from a perfect solution, but seems better than putting the story on permanent hiatus (or more formally cancelling it) without any finish at all, as I ended up doing with "Myou've".
I'm hoping it's the first course. What I don't know... is if there's any way I can tweak the odds to /favour/ the first course.
Any ideas?
I have no direct experience with this myself, but have heard good things about http://zhealth.net. A quick search turns up Will Eden once recommending it here on LW and apparently a practitioner was brought in for a lecture at one of the first rationality camps so perhaps CFAR staff or one of the alums listed at http://rationalitybootcamp.blogspot.com could say more about it.
Write by hand on your bed. Write on your phone during bus rides. Write by dancing to sign language. Write in a new medium humanity hasn't dreamt of. The keyboard is just one of many possible tools.
Is your keyboard / workstation set up correctly to minimise strain or whatever on your shoulder? I think an optimally positioned desk, keyboard, chair, screen etc should avoid much (any?) shoulder movement at all. You don't say whether typing exacerbates the shoulder pain or if it's just a background level of pain that's bothering you while writing, though.
Typing doesn't increase the shoulder pain. (As of the latest doctor's visit, he thinks it's actually more of a neck problem.) It's more the general background level of pain that's keeping me from being able to spend any time coming up with plot-stuff I want to write.
Sometimes the connection between pain and its cause aren't obvious. I was having fairly severe random knee pains, and it turned out that getting my bike lubricated made them go away, even though the pain wasn't happening when I was riding my bike or soon after riding it.
You might want to check on the ergonomics of everything you usually do.
Maybe speech-to-text software can make writing less painful?
Does anyone here suffer from sexual or general anhedonia? Any comforting guesses as to when we will reach the point of being able to routinely cure it?
Do we "routinely cure" depression? People do stop being depressed but I'm not sure that I would use the word "routinely cure".
It might be more productive to think of the suffering person as a subject than as an object. Feeling pleasure is only possible if you actually see yourself as a subject.
Parts of the brain don't function right. Contrary to your and the majority's belief, it's not a matter of choice to see yourself as a subject and alter your thinking. It wouldn't cure your aging for the same reason it won't cure the type of anhedonia in question.
If you make a choice to see yourself as a subject, then you might still frequently drop into the old habits, but that doesn't mean that you can't make that choice. It doesn't mean that making that choice won't help you to more frequently feel like a subject.
I have seen multiple times that when someone makes that shift, his body language changes to a visible extent. They become more present. Emotional processing starts to happen that otherwise wouldn't.
Large parts of aging aren't about dissociated emotions*. Associating emotions won't make your telomeres longer. On the other hand, anhedonia seems very much about dissociated emotions, and a straightforward way to associate emotions is to move from seeing oneself as object to seeing oneself as subject.
*If you want more input into the parts that are, read Thomas Hanna's "Somatics: Reawakening The Mind's Control Of Movement, Flexibility, And Health"
Seems being the keyword. Posting here was probably a mistake. The whole world thinks that if your dick behaves like rubber incapable of any pleasure, it must mean you're thinking the wrong way. I just wanted some optimism about future scientific breakthroughs. Seemed like the place.
Sorry, that wasn't appreciative.
No, certain thinking patterns are associated with certain problems. Psychological interventions do help people.
What naivety on my part that I thought you wanted to change. Of course it's easier to blame it on supposedly unchangeable traits of the brain.
As the same person you were responding to, I know there is no point trying to make you comprehend that there is a possibility your idiotic belief about my anhedonia being different than other devastating illnesses and curable by myself is not true just because it makes you feel smug and justified to blame others for their suffering. I don't care about being banned, so here goes. What you said was extremely cruel. I wish you decades of torture. Congratulations on making the world a worse place, you shit.
The fact that nothing you tried in the past cured it doesn't imply that it's incurable. It just means that nothing you tried worked. Believing in the ability to change is helpful for quite a lot of mental health issues.
I don't blame you for suffering. You are free to suffer if you want. If you want to interpret my comment as blaming, then I'm blaming you for unwillingness to do things I consider likely to improve your situation.
I retract my Great Filter hypothesis: I realized this predicts an ever smaller population of ever smarter hominids, who still have a good quality of life, making up in smarts what they miss in numbers. But the simple fact is, hominid populations were not dwindling. They were pretty steadily taking over the planet, migrating out of Africa and all that.
Well, unless it happened before and caused the mitochondrial Eve bottleneck and then things turned different after that, but that is adding too much detail and courting a conjunction bias, so I don't propose that until more evidence is unearthed.
Upvoted for updating.
If utility is logarithmic in wealth, the Kelly Criterion tells me the right size of stake to put on a given bet, given the odds offered, my subjective probability and my wealth. In practice, in the real world, what's the right number to plug into the "wealth" part of the equation? My current savings? My yearly salary? The value of my home minus the money owing on it?
The "wealth" part of the equation is the total amount you're willing to gamble with. If you have money set aside for frivolities like food, then that wouldn't be part of your wealth as far as the Kelly Criterion is concerned.
The general principle with gambling is never to bet more than you're willing to lose. Kelly betting is optimal in the sense that over the long run, no other system will outperform it. In the short run, it's quite volatile and you can get very low.
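For concreteness, a minimal sketch (Python, with made-up numbers) of the binary-outcome Kelly formula f* = (bp - q)/b, where the bankroll argument is exactly the "wealth" number under discussion:

```python
# Kelly fraction for a simple win/lose bet.

def kelly_fraction(p, b):
    """p: your subjective probability of winning.
    b: net fractional odds (you win b dollars per dollar staked).
    Returns the fraction of bankroll to stake; <= 0 means don't bet."""
    q = 1.0 - p
    return (b * p - q) / b

bankroll = 10_000              # whatever you decide counts as "wealth"
f = kelly_fraction(0.60, 1.0)  # 60% to win at even odds
print(f, f * bankroll)         # 0.2 -> stake $2,000
```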
This is true, but incomplete. If utility is monotonically increasing with wealth, the Kelly Criterion tells you how to size your bets.
That is only true if you are making lots of bets and expect them to be your main source of income. But that assumes away Ciphergoth's question.
No, it is true if you want to have the highest expectation value of utility in your life, and the answer to ciphergoth's question is you use all your wealth.
If, instead of wanting to maximize utility, you prefer to minimize the probability that your utility will fall below a certain level, then the bad tails of the distribution of Kelly-criterion strategies matter to you and the expected utility does not. You might come up with some modification of the Kelly criterion that meets this criterion of avoiding really bad outcomes with high probability. Or you might find some entirely different criterion or policy that achieves the avoidance of bad outcomes you are trying for.
The OP's original question hides a lot of complexity in its "in the real world" clause. In the real world, are we expected-utility maximizers? Or are we low-utility-probability minimizers? Or are we something else? Until we know we can't evaluate investment/betting strategies.
A little knowledge is a dangerous thing.
The amount which you can afford to lose.
The answer probably depends on what your utility is nearest to being proportional to the logarithm of.
Most likely your utility (so far as you have one) looks like other_stuff + f(wealth) where wealth = g(annual_income, liquid_assets, other_assets) or something of the kind, where f and g are functions about which we don't know very much. It's probably OK to assume that g is just a linear combination of its inputs. So it seems like there are two things to do.
And then you can try plugging the result into the Kelly formula, seeing how over-risky it feels, and (if you are so inclined) correcting for excess risk aversion not already factored into f.
Not so much in response to your specific question, but when trying to figure out what I can afford, I actually take a pretty simple approach: my liquid assets (mostly in the bank) plus things I could easily liquidate (stocks, etc.) minus a "rainy day fund" (this has varied in size over the years, but tends to sit at between 2 and 10 thousand USD, based on how hard I think it would be to get a job or find housing in the event that I lost one or both). Things like 401K and HSA are omitted; they're already earmarked for specific things and mean I don't have to worry about keeping other funds back for those purposes. Assets that are technically resalable but either not worth the effort or of high utility to my daily life (my computer, my car, etc.) are also omitted, though in a pinch I would of course sell them too.
The result is the money I can afford to spend. I can use it on video games, or vacations, or gifts for people, or a new car (at which point I would sell the old one), or fighting malaria. I can trickle it away on living expenses if I decide to quit my job and pursue hobbies or whatever (I would start looking for a new one once I got within "expected job hunt time * cash outflow rate" distance of the bottom of this wealth, though I could dip into the rainy day fund if needed).
I can also invest it into riskier things than a savings account, like stocks... or into any other kind of betting.
The same thing that you want to maximize in the long run.
I'm inclined to think that policy towards illegal immigration is a result of incoherent moral standards-- some combination of "discourage strangers from showing up in large numbers" and "rescue harmless people who are close to death".
I don't think policy is a result of incoherent moral standards. I think it is a result of different people having different moral ideas that they consider important. So some subset of people are concerned enough to be active in discouraging strangers from showing up, and some other subset of people are concerned with rescuing people who are close to death, and the political/legislative system cobbles these things together into something which can pass a vote.
I suspect CEV is unlikely. That is, if one were to extrapolate volition from bunches of different people, the result would not be coherent, it would be incoherent. Because people have different and inconsistent volitions.
Probably but I'm not sure why that should be surprising; most moral standards we hold are inconsistent. So what would distinguish policy towards illegal immigration from other policies?
In a previous open thread, I brought up the theory of right-wing authoritarianism, which purports that conservative attitudes may be partially a defensive response to perception of threat. That offers one way of looking at policy towards illegal immigration: Maybe some people really do view immigrants as a threat to their way of living. So from that perspective they would not view them as harmless.
It may be simpler than that, though. Maybe 'rescue harmless people who are close to death' is not a strong value (or a value at all) for some. Certainly we know that psychopaths do not hold this as a value, and may even consider it an anti-value -- they would enjoy increasing the number of harmless people who are close to death. I'm sure this is not true for the majority of human beings, however.
It reminds me of a thought experiment I read somewhere. Imagine that there are many people in the world who are dying from starvation. They would happily agree to be your slaves if you feed them. There are too many of them and they are not qualified for the modern economy; if you gave them more than a minimum, there wouldn't be enough for you to have a decent life. Imagine you only have the following three options:
A) Share everything with them. Everyone will live, but everyone will be rather poor.
B) Accept them as your slaves, in exchange for food and shelter. Everyone will live, you will keep your quality of life, but there will be a huge inequality.
C) Refuse to interact with them. You will keep your quality of life, but they will die from starvation.
If we order these options by altruism, which is how those poor people would see them, we get A > B > C. It would be best to make those poor people our equals, but even helping them survive as slaves is better than letting them die.
If we order these options by pure egoism, we get B > C > A. Having slaves would be a cool improvement, keeping status quo is acceptable.
But in the typical decision process, we refuse B to signal that we are not complete egoists, and refuse A because we are not really that much altruistic. Thus what remains is the option C... which paradoxically both altruists and egoists consider to be worse than B (and the altruists also worse than A).
Slavery is a non sequitur here. Under the circumstances you might suggest "I will pay you below minimum wage" or "I will pay you nothing, but provide housing on my plantation where you work." But so long as they have the right to walk away at any time, it's not slavery, and there's nothing in the setup that justifies that loss of liberty. Your hypothetical situation is an argument against the minimum wage, not pro-slavery.
The thing is, I don't think a lot of illegal immigrants are unqualified for a modern economy. If they were unqualified, there wouldn't be so many laws trying to keep them from working.
Even without that, there's a lot of issues about giving them welfare. We could allow them entry as second-class citizens who have no minimum wage or access to welfare but still need to pay taxes. We'll avoid having to give them welfare, but we'll need to admit that we have second-class citizens, which is something we pretend to be against.
It also means that the people who are currently working at minimum wage jobs are likely to lose their jobs to the cheaper competition.
That depends on the degree to which the two groups compete for jobs. There are also positive secondary effects which reduce the impact (immigration reduces inflation and increases the overall market size). The employment impact of immigration on low-skilled workers is somewhere between slightly negative and slightly positive.
Great point!
Although hypothetically there could be two independent interests that just happen to be strategically aligned. Some people want to stop unqualified immigrants; other people want to stop qualified immigrants who would compete with them on the job market.
Also there are of course concerns other than economic ones, such as people bringing with them some nasty habits from their cultures. These were not included in the thought experiment, which perhaps makes it irrelevant for real-world situations.
Also having slaves has the risk of those slaves rebelling later.
I wasn't kidding when I said one of the motivations was a desire to not live with large numbers of strangers. One issue might be cognitive load-- the strangers have unfamiliar customs (is a sincere apology accompanied by a smile or a serious expression?) and possibly an unfamiliar language.
As far as I can tell, the economic side of not wanting immigrants is a sort of mercantilism - a belief that all that matters is where the money is, so that new people showing up and getting paid for work just seems like money getting drained away. Weirdly, rich people who show up and spend money without working locally may be disliked, but they don't seem to be as hated as poor people who do useful work. I don't think it's just about competition for jobs.
https://hbr.org/2015/04/emotional-intelligence-doesnt-translate-across-borders
A few examples of people from different cultures misreading each other.
I'd say inconsistent rather than incoherent moral standards, or different moral standards at tension.
Honestly, this seems like a "well, duh" sort of thing. One just needs to read the rhetoric from say both sides of the US immigration debate, or both sides of the discussions in Europe about refugees from North Africa to see this pretty clearly.
I've noticed a lot of disciplines, particularly ones that sometimes have to justify their value, often make a similar claim:
"[subject] isn't just about [subject matter]: it teaches you how to think"
This raises some interesting questions:
I can believe, for example, that Art History instils in its students some useful habits of thought, but I suspect they're less general than those from a discipline with an explicit problem-solving focus. What kind of scheme could one construct to score the meta-cognitive skills learned from a particular subject?
Are there any subjects which are particularly unlikely to make this claim? Are any subjects just composed of procedural knowledge without any overarching theory, cross-domain applicability, or necessary transferable skills?
Are there particularly potent combinations of skills, or particularly useless ones? It seems that a Physics degree and a Maths degree would have similar "coverage" in terms of thinking habits they instil, but a Physics degree and a Law degree would have much broader coverage. "I have technical skills, but I also have people-skills" is a fairly standard contemporary idea. Do Physics and Law have strikingly different coverages because Physics Lawyers don't really need to exist?
I suspect that with "mastery of a skill" comes an ability to understand "mastery" in general; as a variation on man-with-a-hammer syndrome, holding mastery of one area will help you better understand the direction to head in when mastering, and learning in, other areas.
To me the line now reads: "mastery of [subject] isn't just about [subject matter]: mastery teaches you how to think",
where [subject] can vary; the significance of what people are trying to convey is maybe not in the [subject] but in the experience of learning.
Most (~70%) of the time it is a euphemism for "it's useless, but we like it so we still want to use taxpayers' money to teach it".
(If people really cared about teaching people how to think, they'd teach cognitive psychology, probability and statistics, game theory, and the like, not stuff like Latin.)
I expect you're typical-minding here. I know enough linguistics enthusiasts who feel that learning new languages makes you think in new ways that I believe that to be their genuine experience. Also because I personally find a slight difference in the way I think in different languages, though not as pronounced as those people.
Presumably they, being familiar with the thought-changing effects of Latin but not having felt the language-changing effects of cognitive psychology etc. (either because of not having studied those topics enough, or because of not having a mind whose thought patterns would be strongly affected by the study of them), would likewise say "if people really cared about teaching people how to think, they'd teach Latin and not stuff like cognitive psychology". Just like you say what you say, either because of not having studied Latin enough, or because of not having a mind whose thought patterns would be strongly affected by the study of languages.
Sure, but the same happens with living languages as well.
I studied Latin for five years. Sure, it is possible that if I had studied it longer it would have changed my thought patterns more, but surely there are cheaper ways of doing that. (Even the first couple months of studying linear algebra affected me more, but I don't expect that to apply to everybody so I didn't list it upthread.)
A while ago I read that a betting firm would rather hire physics or math people than people with degrees in statistics, because the statistics folks too often think that real-world data is supposed to follow a normal distribution like the textbook examples they faced in university.
Outside of specific statistics programs, statistics classes often lead to students simply memorizing recipes and not really developing good statistical intuition.
Teaching statistics often sounds much better in the abstract than it works in practice.
That's a good point, but on the other hand, even thinking that everything is a Gaussian would be a vast improvement over thinking that everything is a Dirac delta and it is therefore not ludicrous to speculate about why some politician's approval rating went down from 42.8% last week to 42.3% today when both figures come from surveys with a sample size of 1600.
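The arithmetic for that example is quick; a sketch (Python) of the standard error of a proportion at n = 1600:

```python
# Standard error of a sample proportion, for the approval-rating example.
from math import sqrt

p, n = 0.428, 1600
se = sqrt(p * (1 - p) / n)
print(se)  # ~0.0124, i.e. about 1.2 percentage points per survey
# The 42.8% -> 42.3% "drop" is ~0.4 single-survey standard errors
# (and ~0.3 standard errors of the difference between two independent
# surveys), i.e. well within sampling noise.
```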
A well trained mathematician or physicist who never took a formal course on statistics likely isn't going to make that error, just as a well trained statistician isn't going to make that error.
I would think that the mathematician is more likely to get this right than the medical doctor who got statistics lessons at med school.
I read that Germans are often anti-Semites; is it true?
That is, ahem, bullshit. Stupid undergrads might think so for a short while, "statistics folks" do not.
Long-Term Capital Management (LTCM) was a hedge fund that lost billions of dollars because its founders, including Nobel Prize winners, assumed 1) things that have been uncorrelated for a while will remain uncorrelated, and 2) ridiculously low probabilities of failure calculated from assumptions that events are normally distributed actually apply to analyzing the likelihood of various disastrous investment strategies failing. That is, LTCM reported results as if something which is seen from data to be normal between +/- 2*sigma will be reliably normal out to 3, 4, 5, and 6 sigma.
Yes, there WERE people who knew LTCM were morons. But there were plenty who didn't, including Nobel Prize winners with PhDs. It really happened and it still really happens.
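To see how much rides on that assumption, here is a quick sketch (Python) of two-sided Gaussian tail probabilities; a distribution that matches a Gaussian within +/- 2 sigma tells you almost nothing about whether its real tails are thousands of times fatter than these figures:

```python
# Two-sided Gaussian tail probabilities P(|X| > k*sigma).
from math import erfc, sqrt

for k in range(2, 7):
    print(k, erfc(k / sqrt(2)))
# k=2: 4.6e-02, k=3: 2.7e-03, k=4: 6.3e-05, k=5: 5.7e-07, k=6: 2.0e-09
```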
I am familiar with LTCM and how it crashed and burned. I don't think that people who ran it were morons or that they assumed returns will be normally distributed. LTCM's blowup is a prime example of "Markets can stay irrational longer than you can stay solvent" (which should be an interesting lesson for LW people who are convinced markets are efficient).
LTCM failed when its convergence trades (which did NOT assume things will be uncorrelated or that returns will be Gaussian) diverged instead and LTCM could not meet margin calls.
Hindsight vision makes everything easy. Perhaps you'd like to point out today some obvious to you morons who didn't blow up yet but certainly will?
An LTCM investor letter, quoted here, says
And of course, it proceeded to lose essentially all of its portfolio after operating for just a handful of years. Now, if in fact you are correct and the LTCM'ers did understand that things might be correlated and that tail probabilities would not be Gaussian, how do you imagine they even made a calculation like that?
Can we get a bit more specific than waving around marketing materials?
Precisely which things turned out to be correlated that LTCM people assumed to be uncorrelated and precisely the returns on which positions the LTCM people assumed to be Gaussian when in fact they were not?
Or are you critiquing the VAR approach to risk management in general? There is a lot to critique, certainly, but would you care to suggest some adequate replacements?
"Statisticians think everything is normally distributed" seems to be one of those weirdly enduring myths. I'd love to know how it gets propagated.
I can't say I ran into it before (whereas "economists think humans are all rational self-interested agents", jeez...)
Given that I remember spending a year of AP statistics only doing calculations with things we assumed to be normally distributed, it's not an unreasonable objection to at least some forms of teaching statistics.
Hopefully people with statistics degrees move beyond that stage, though.
Someone was overly impressed by the Central Limit Theorem... X-)
I strongly suspect that a large part of its recent popularity is because in the recent CDO-driven crash it suited the interests of the (influential) people whose decisions were actually responsible to spread the idea that the problem was that those silly geeky quants didn't understand that everything isn't uncorrelated Gaussians, ha ha ha.
Physics lawyers definitely need to exist. I would strongly like to get an injunction against the laws of thermodynamics.
I would interpret that claim as: "we may be practically useless, but we are still fucking high-status!" :D
The claim isn't just made with arguably useless disciplines, though. Many people argue (quite rightly, IMO) that programming doesn't just teach you to command machines to do your bidding, but also instils powerful thinking tools. So even if kids don't grow up to be software developers, it's still valuable for them to learn programming. Similar arguments could be made for law or finance.
Slightly off topic, but I both program and play guitar and for the longest time I was wondering why I was getting an overwhelming feeling of the two bleeding into each other. While playing guitar, it would "feel" like I was also coding. Eventually I figured out that the common thread is probably the general task of algorithm optimization.
There's no way for me to tell if programming made me a better guitar player or vice versa.
Could you make that argument for finance? I see that learning finance is very useful for personal financial decisions but how does it provide use beyond that?
Finance is about marshalling resources and using them to efficiently create a lot more wealth. Since wealth is at minimum the thing that keeps us from working 24/7 on getting enough food to eat, and generally gives us the kind of free time we need to invent AIs, post on message boards, have hobbies, and try to get the hot chicks, it can be quite useful even for a non-wall-street worker. Think of finance as the thing that keeps you from carrying a balance on your credit card or buying lottery tickets as investments.
That's not an argument for the claim that finance skills instill thinking tools that are useful in other domains. It's just an argument that finance skills are useful.
Obviously "finance" is a very wide area that covers a lot of different ideas, but my observation of "finance people" is that they have a powerful mental vocabulary for thinking about what kind of a value something is and what can be done with it over time. For example: the difference between stock values and flow values, expected return of a portfolio of assets, the leveraging of credit, the mitigation of risk.
More generally, they seem to be able to look at some number assigned to a thing, and observe that it's morphologically similar to some other number assigned to some different thing, and understand what sort of things can happen to both those numbers, and what sort of process is required to turn one sort of number into another sort of number.
Seems to me that "teaches you how to think" does not necessarily imply instilling habits of thought. I would interpret that (say, in the context of Art History) as:
The habits of thought are not involved in all of this -- it's more of a broadening-your-horizons exercise.
Thought experiment. You are doing a really boring job you dislike, like data entry, but it's so well paid you don't want to leave it. You cannot automate it. You cannot work from home. You sit in the office 8 hours. Thankfully the work does not take 8 hours; you can do it in 5 and then browse the web or something.
What do you do? Trying to spend the other 3 hours meaningfully, like studying with Anki, and trying to find challenging games in the actual job part are two obvious approaches; what else? E.g., would you listen to ebooks while doing it?
Post to lesswrong.com.
Renegotiate work time to 5 hours.
Tried something like that. Was unable to do anything productive after 5 hours without a real deadline.
Could you hire a cheap online personal assistant who would give you the deadlines? Like, you would make plans for the whole week in advance, give those plans to the assistant, and then during the week the assistant would role-play being your manager. (Using another person as a proxy for your planning self.)
If it doesn't feel real, it's easy to ignore.
I would start programming mobile games, and would hope to make money from them. If I don't succeed, at least I had a hobby, and maybe can use the experience to get a more interesting job later. If I do succeed, then I do not have to solve the problem of boring job anymore.
That would require sufficient freedom to spend those 3 hours not just programming, but also painting pictures, editing 3D models, editing levels, and testing the game on the phone. Okay, hypothetically that is not necessary; there can be some parts that I have to do at home. But it would be much more convenient if I could do whatever is necessary for the game immediately when I need it.
Or, if I didn't have a specific plan, I would just learn random stuff from online universities. I enjoy learning, so I wouldn't necessarily care about how useful those lessons are. I would imagine that some part of it would be useful somehow later; if nothing else, then for impressing people.
Someone who is a buddhist could use those three hours to meditate daily, and achieve nirvana in a few years, while keeping a well-paying job. Also, being a buddhist could help with the feelings of boredom from the job. ;)