Open Thread, Apr. 20 - Apr. 26, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (350)
Sigh. Another dead transhumanist. I never met Dan Fredinburg, but I gather from his friends' posts on Facebook that he wanted to upload his mind some day.
And what an unlikely way to die. You put your life at risk by trying to climb Everest under normal conditions. Fredinburg just happened to attempt that when a catastrophic earthquake struck Nepal.
The Wall Street Journal has an article up claiming that the world economy is currently experiencing an excess of capital, labor, and commodities, and that this is potentially a cause of serious problems.
Could anyone explain to me how it is possible to have an excess of capital and an excess of labor?
ETA: You can get around the paywall by googling the title of the article and clicking the first link.
There are not enough people with great ideas to produce new products and start companies that don't simply take away the market of existing companies.
Two guesses:
There is a lot of capital and a lot of labor, but in different areas where they don't complement each other. For example, maybe there is a lot of car factories and equipment, but not enough skilled workers to operate the equipment.
There is just more economic power than people actually want. For example, there is a lot of car manufacturing capacity and a lot of skilled auto workers, but people don't feel the need to buy more cars.
As a side note, I strongly believe that the conceptual tools of "modern" economics are increasingly ill-suited to describe the modern world. Economic concepts like supply/demand and labor/capital were developed in an era where most economic activity was centered around the production and distribution of goods - either agricultural or industrial. In the modern world, physical goods are becoming less and less important. Agriculture is down to 1% of GDP in the US, and industry is down to 20% (even this number seems high). The economy is now dominated by sectors like health care, education, technology, and government. These sectors cannot be described well by traditional economic concepts. What does it mean to talk about the "demand" for health care or education? How can one apply concepts like marginal cost and comparative advantage to the technology sector, where the marginal cost is zero and there is usually a free version of every product available?
A lot of capital existing means that inflation-adjusted US Treasury bonds currently have a 0.07% interest rate. The rate was even negative in 2013.
A lot of labor existing means that we have high unemployment in many countries.
It's sad that big pharma companies buy back their own shares and let go of employees instead of investing the money into developing new drugs.
Apple doesn't buy back shares but has $158.8 billion in cash in its reserves that it doesn't manage to invest into developing new technology.
With a 1,000 square kilometer industrial complex for the manufacture of slinkys and a million trained botanists.
The article makes it pretty clear they are not describing a mismatch scenario. In a mismatch you have simultaneous shortages and gluts, but the article never talks about shortages of X while there is a surplus of Y, only gluts.
Send them to me! Slinkys, that is! It's time to change the world for the better!
Send them to me. Botanists, that is. It's time to change the world for the better.
I am really confused by economics.
I can't see this article; what kind of labor did they have in mind? There is a chronic shortage of skilled/creative labor, so I am assuming they mean the kind of labor that's vulnerable to being automated away. Perhaps the key shortage that explains this situation is education and training. In a modern economy, people and money aren't enough anymore.
You can access the full article by googling the article title. It should be the first link.
Paywall.
Are there people who would be interested in a (virtual) reading group for Pearl's Causality?
I've edited the LW-wiki to make a list of LWers interested in making debate tools.
In general, I think it'd be useful to make a post similar to the "What are you working on threads", so that people with similar interest can find each other. What do people think of a "People working on X repository" post?
A paper with some empirical results on tools and techniques for fighting procrastination and distractions (in the context of taking online courses).
An interesting complete-disillusionment-with-academia letter from a Ph.D. student.
People have been complaining about academia for a very long time. Then again, every other human organization has burned-out/disillusioned people writing similar complaints.
As for me, I definitely don't have what it takes to stick around in academia and plan to leave as soon as I get my degree.
I remember hearing a quote somewhere on LW saying something like "pain/discomfort is what you feel when you level up". Does anyone know what the actual quote is? Where it was said?
There's also "Pain is weakness leaving the body", which is less specific but probably pre-dates Eliezer's quote.
I like it!! Never heard it before but it's becoming one of my favorite quotes.
That line is specifically helpful for tricking your brain into thinking that physically exhausting exercise, e.g. pushups, is a good thing. (Don't use it too much, and don't hurt yourself doing it.)
That's funny; I actually tried to do as many pushups in a row as I could the day after hearing it. And I did the most I've ever done, btw :)
Why do you think it's specifically good for physical exertion? What about mental or emotional exertion?
I suspect pain is a physical feeling, and weakness is similarly understood as a physical problem. By equating pain with weakness leaving, you convince yourself that (temporarily) more pain is helpful to the cause of doing more pushups, whereas previously you would have treated pain as "not helpful" and decided to stop.
I have heard from danielfilan that it works for pushups or similar exercise.
(Side note: don't push too hard, trust your body's limits - it tends to know when it's going to break.)
My impression is that the advice applies more generally than to just physical pain. My impression is that there's a lot of times when your body produces pain-like signals telling you to stop, when it's really in your interest to push through.
And so, to use the emotional example, when I push past fear, I imagine it as weakness leaving my body. I imagine a) becoming a stronger person for having pushed past it and being more capable of pushing past it in the future. And b) I imagine it as weakness leaving the body in the sense of my pushing myself towards an end that is more preferable to me.
I think you're confusing your body signaling with pain and your mind yelling "I don't wanna!" These are very different things.
Thanks for clarifying. I know what you mean, but I had just been using the wrong vocabulary.
Eliezer said it in http://lesswrong.com/lw/ul/my_bayesian_enlightenment : "That scream of horror and embarrassment is the sound that rationalists make when they level up."
THANK YOU!
The story of Matheryn Naovaratpong's cryopreservation has gotten quite a bit of coverage in English-language websites in Southeast Asia:
Father of cryonically preserved Thai girl: I will just hug her if we meet again
http://www.straitstimes.com/news/asia/south-east-asia/story/father-cryonically-preserved-thai-girl-i-will-just-hug-her-if-we-mee#xtor=CS1-10
That's very interesting. I'd be interested to see if this actually leads to an uptick of interest in Southeast Asia.
That region's newly emergent middle-class and wealthy people might lack the Western prejudices that have made cryonics such a hard idea to sell in our parts of the world.
For one thing, they have witnessed rapid economic progress in their own societies in their own generation, so they wouldn't understand the appeal of Western pessimism about apocalyptic and dystopian futures.
They likely don't have the same prejudices as Westerners but that doesn't mean that they don't have other prejudices.
Without understanding the local culture a lot more than most of us Westerners do, it's hard to make this kind of prediction.
That's an interesting hypothesis. Is there any way to test it? Also is there any way to take advantage of it? That suggests that the window for cryonics there may not be very long, possibly on the order of 20 years or so.
I don't know how to test it, though I suspect the relative absence of Christian beliefs in those countries would make a difference. And why would such a "window" even exist there? If these countries can figure out how to keep economic progress going indefinitely without the dysfunctions in Western societies identified by, say, Peter Thiel, then these countries could very well take the lead in becoming increasingly "futuristic" on their own, without having to look to the West for models and guidance.
By window I meant the following: you said that "they have witnessed rapid economic progress in their own societies in their own generation, so they wouldn't understand the appeal of Western pessimism about apocalyptic and dystopian futures." If that is what is going on, then the next generation may not see it as much. If so, we have around a generation. I agree that if the economic progress continues at a fast pace they may not end up with some of the issues we have here, but in general, as developing countries have neared parity with developed countries, their improvement rates by many metrics have slowed down and come more or less into alignment with Western growth rates. Look, for example, at infant mortality levels and expected lifespan.
It would be interesting to find out what the public reaction is in Thailand, and also to see what their science fiction is like.
Does anyone have software or procedures they have found useful for evaluating large, hard, inference problems? I don't know what the right class of problem is. Mine is that I have several years and lots and lots of notes of symptoms a family member has exhibited, including subjective recollections all the way to MRIs, and I'd like to organize my thoughts and inferences around what common cause(s) might be, priors, weight of evidence, etc.
I plan to improvise, but I'd like to steal first.
Not sure what you mean.
BUGS maybe?
I think what GuySrinivasan's asking is closer to "how do I organize a mass of evidence & ideas about a topic so I can better reason about it" than "how do I grind numerical statistical inferences out of a formal Bayesian model"?
One way to approach it would be to organize the data around the questions "What seems to have an effect on the system? What makes things better, what makes things worse, even if the effect is very small (but reproducible)?" Then, investigate those things.
Doctors are kind of terrible at doing that. They tend to have a tool box of "these are the things I know how to do" and any information that doesn't fit their specific specialty is discarded as irrelevant.
I'm not sure how useful it would be to weight things by evidence if part of the problem is that some things haven't been investigated enough, or are simply not well-enough understood by modern medicine and science.
I have a friend with an undiagnosed disease and am thinking about doing the same thing. One thing I've thought about is using a Bayesian Network as a tool, but then again, I'd have to be really careful about how I plug in data, and it would be good to know if there are other approaches to this as well. PM me if you find a good way to go about this.
I've come up with an interesting thought experiment I call oracle mugging.
An oracle comes up to you and tells you that either you will give them a thousand dollars or you will die in the next week. They refuse to tell you which. They have done this many times, and everyone has either given them money or died. The oracle isn't threatening you. They just go around and find people who will either give them money or die in the near future, and tell them that.
Should you pay the oracle? Why or why not?
Clearly you give them money, since otherwise you are almost certain to die. It's just one-boxing in disguise.
I wouldn't pay. Let's convert it to a mundane psychological experiment, by replacing precognition with precommitment (which is the right approach according to UDT):
1) Ten participants sign up for the experiment.
2) One participant is randomly chosen to be the "loser". We know who the "loser" is, but don't tell the participants.
3) Also, each participant tells us in private whether they are a "payer" or "non-payer".
4) Each "payer" who is not a "loser" pays $10 (this corresponds to paying the oracle and staying alive). The "loser" pays $100 (this corresponds to dying). Everyone else pays nothing.
It seems obvious that you should choose to be a "non-payer", right?
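The arithmetic behind choosing "non-payer" can be checked directly. A minimal sketch, using the ten participants and the $10/$100 stakes from the setup above:

```python
# Expected cost per participant in the precommitment experiment.
# One of ten participants is the randomly chosen "loser".
p_loser = 1 / 10

# A "payer" pays $10 unless they are the loser, in which case they pay $100.
payer_cost = (1 - p_loser) * 10 + p_loser * 100

# A "non-payer" pays nothing unless they are the loser.
nonpayer_cost = (1 - p_loser) * 0 + p_loser * 100

print(payer_cost, nonpayer_cost)  # 19.0 10.0
```

So on average a payer loses $19 and a non-payer loses $10; being a payer only buys you a lower chance of being visited, not a lower chance of being the loser.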
In terms of the original problem, if you're the kind of person who would pay the oracle if you were approached, you're causing the oracle to approach you, so you're paying for nothing.
I don't think that it's specified in the OP that the oracle considers it likely that you will pay or indeed approaches people based on their likelihood to pay.
But it is! It really depends on how many levels of "I know that the oracle knows that I know" you want to go into. If the oracle is able to factor your decision to pay or not into whether they tell you that you should pay, then that's a super-duper-oracle.
Also, paying and dying is permissible, and not great either.
So, as in most such problems, there's an important difference between the epistemological question ("should I pay, given what I know?") and the more fundamental question ("should I pay, supposing this description is accurate?"). Between expected value and actual value, in other words.
It's easy to get those confused, and my intuitions about one muddy my thinking about the other, so I like to think about them separately.
WRT the epistemological question, that's hard to answer without a lot of information about how likely I consider accurate oracular ability, how confident I am that the examples of accurate prediction I'm aware of are a representative sample, etc. etc. etc., all of which I think is both uncontroversial and uninteresting. Vaguely approximating all of that stuff I conclude that I shouldn't pay the oracle, because I'm not justified in being more confident that the situation really is as the oracle describes it, than that the oracle is misrepresenting the situation in some important way. My expected value of this deal in the real world is negative.
WRT the fundamental question... of course, you leave a lot of details unspecified, but I don't want to fight the hypothetical here, so I'm assuming that the "overall gist" of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer. That's a good deal for me; I'm inclined to take it. (Though I might try to negotiate the price down.)
The knock-on effect is that I encourage the oracle to keep making this offer... but that's good too; I want the oracle to keep making the offer. QALYs for everyone!
So, yes, I should pay the oracle, though I should also implement decision procedures that will lead me to not pay the oracle.
I think a key part of the question, as I see it, is to formalize the difference between treatment effects and selection effects (in the context where your actions might reflect a selection effect, and we can't make the normally reasonable assumption that our actions result in treatment effects). An oracle could look into the future, find a list of people who will die in the next week, and a list of people who would pay them $1000 if presented with this prompt, and present the prompt to the exclusive or of those two lists. This doesn't give anyone QALYs they wouldn't have had otherwise.
And so I find my intuitions are guided mostly by the identification of the prompter as an "oracle" instead of a "wizard" or "witch." Oracle implies selection effect; wizard or witch implies treatment effect.
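The selection-effect reading can be made concrete with a small Monte Carlo sketch: whether a person dies and whether they are disposed to pay are independent coin flips, and the oracle prompts the exclusive-or of the two lists. (The 1% death rate and 50% payer rate are illustrative assumptions, not from the thread.)

```python
import random

random.seed(0)
N = 200_000

payer_deaths = payer_count = 0
nonpayer_deaths = nonpayer_count = 0
prompted_payers_died = prompted_nonpayers_survived = 0

for _ in range(N):
    will_die = random.random() < 0.01   # fixed by fate, before any prompt
    is_payer = random.random() < 0.50   # the person's disposition
    prompted = will_die != is_payer     # the oracle visits the XOR

    if is_payer:
        payer_count += 1
        payer_deaths += will_die
    else:
        nonpayer_count += 1
        nonpayer_deaths += will_die

    # Under XOR selection these two events can never occur:
    if prompted and is_payer and will_die:
        prompted_payers_died += 1
    if prompted and not is_payer and not will_die:
        prompted_nonpayers_survived += 1

# The oracle's record looks perfect (every prompted payer lives, every
# prompted non-payer dies), yet paying has no effect on the death rate:
print(payer_deaths / payer_count, nonpayer_deaths / nonpayer_count)
print(prompted_payers_died, prompted_nonpayers_survived)  # 0 0
```

Both death rates come out near 1%; the oracle's flawless track record is pure selection, giving nobody extra QALYs.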
Leaving aside lexical questions about the connotations of the word "oracle", I certainly agree that if the entity's accuracy represents a selection effect, then my reasoning doesn't hold.
Indeed, I at least intended to say as much explicitly ("I don't want to fight the hypothetical here, so I'm assuming that the 'overall gist' of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer.") in my comment.
That said, it's entirely possible that I misread what the point of DanielLC's hypothetical was.
DanielLC said:
I interpreted that as a selection effect, so my answer recommended not paying. Now I realize that it may not be entirely a selection effect. Maybe the oracle is also finding people whose life would be saved by making them $1000 poorer, for various exotic reasons. But if the probability of that is small enough, my answer stays the same.
Right. Your reading is entirely sensible, and more likely in "the real world" (by which I mean something not-well-thought-through about how it's easier to implement the original description as a selection effect), I merely chose to bypass that reading and go with what I suspected (perhaps incorrectly) the OP actually had in mind.
Pay iff you would pay $1000 to avoid learning of your death during the last week of your life. If you don't pay, the oracle only shows up when you are about to die anyway.
It's just a version of the Newcomb's problem with negative outcomes instead of positive.
Presumably the oracle makes its offer only to people from two classes: (1) Those who will die next week AND will not pay $1000; and (2) Those who will pay $1000 AND not die next week. Since it's the oracle it can identify these people and make its offer only to them. If you got this offer, you are in one of the above classes but you "don't know" in which.
This is essentially just another version of the smoking lesion problem, in that there is no connection, causal or otherwise, between the thing you care about and the action you take. Your decision theory has no specific effect on your likelihood of dying, that being determined entirely by environmental factors that do not even attempt to predict you. All you are paying for is to determine whether or not you get a visit from the oracle.
ETA: Here's a UDT game tree (see here for an explanation of the format) of this problem, under the assumption that oracle visits everyone meeting his criteria, and uses exclusive-or:
ETA2: More explanation: the colours are states of knowledge. Blue = the oracle asks for money, Orange = they leave you alone. Let's say the probability of being healthy is α. If you Pay, the expected reward is α·(−1000) + (1−α)·DEATH; if you Don't Pay, the expected reward is α·0 + (1−α)·DEATH. Clearly (under UDT) paying is worse, by a term of −1000α.

Variation on this:
An oracle comes up to you and tells you that you will give it a thousand dollars. This oracle has done this many times, and every time it has told people this, the people have given the oracle a thousand dollars. This oracle, like the other one, isn't threatening you. It just goes around finding people who will give it money. Should you give the oracle money?
I believe in testing rules and breaking things. So no. Don't give and see what happens.
Under UDT: pay iff you need human contact so much that you'd spend $1000 to be visited by a weird oracle who goes around posing strange decision theory dilemmas.
No, but you will.
Every decision theory I throw at it says either don't pay or Error: Divide By Zero. Is this a trick question?
I don't know what "error: divide by zero" means in this context. Could you please clarify? (If you're suggesting that the problem is ill-posed under some decision theories because the question assumes that it is possible to make a choice but the oracle's ability to predict you means you cannot really choose, how doesn't that apply to the original problem?)
You want to figure out whether to do as the oracle asks or not. To do this, you would like to predict what will happen in each case. But you have no evidence concerning the case where you don't do as it asks, because so far everyone has obliged. So, e.g., Pr(something good happens | decline oracle's request) has Pr(decline oracle's request) in the denominator, and that's zero.
I think you can say something similar about the original problem. P(decline oracle's request) can (for the new problem) also be phrased as P(oracle is wrong). And P(oracle is wrong) is zero in both problems; there's no evidence in either the original problem or the new problem concerning the case where the oracle is wrong.
Of course, the usual Newcomb arguments apply about why you shouldn't consider the case where the oracle is wrong, but they don't distinguish the problems.
That's a forward-looking probability and is certainly not zero.
In the absence of evidence you just fall back on your prior.
In order to get Error: Divide By Zero, you have to be using a particular kind of decision theory and assume P(decline oracle's request) = 0.
Your prior for what?
For the baseline, "underlying" probability of the oracle's request being declined. Roughly speaking, if you have never seen X happen, it does not mean that X will never happen (=has a probability of zero).
This assumes you're a passive observer, by the way -- if you are actively making a decision whether to accept or decline the request you can't apply Bayesian probabilities to your own actions.
I really want to say that you should pay. Obviously you should precommit to not paying if you can, and then the oracle will never visit you to begin with unless you are about to die anyway. But if you can't do that, and the oracle shows up at your door, you have a choice to pay and live or not pay and die.
Again, obviously it's better to not pay and then you never end up in this situation in the first place. But when it actually happens and you have to sit down and choose between paying it to go away or dying, I would choose to pay it.
It's all well and good to say that some decision theory results in optimal outcomes. It's another thing to actually implement it in yourself: to make sure every counterfactual version of yourself makes the globally optimal choice, even if there is a huge cost to some of them.
The traditional LW solution to this is that you precommit once and for all to this: Whenever I find myself in a situation where I wish that I had committed to acting in accordance with a rule R I will act in accordance with R.
That's great to say, but much harder to actually do.
For example, suppose Omega pays $1,000 to people or asks them to commit suicide. But it only asks people it knows 100% will not do it; otherwise it gives them the money.
The best strategy is to precommit to suicide if Omega asks. But if Omega does ask, I doubt most lesswrongers would actually go through with it.
So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega has a merely 99% accuracy.
Your formulation, however, doesn't work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for having $990 expected winnings). If you don't precommit, then with a 1% chance you might get $1000 for free. In most cases, the second option is better.
Thus, the suicide strategy requires very strong faith in Omega, which is hard to imagine in practice. Even if Omega actually is infallible, it's hard to imagine evidence extraordinary enough to convince us that Omega is sufficiently infallible.
(I think I am willing to bite the suicide bullet as long as we're clear that I would require truly extraordinary evidence.)
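The expected-value comparison above can be written out; the break-even life value is my own addition to the arithmetic, not from the parent comments:

```python
# Expected dollar winnings against a 99%-accurate Omega.
accuracy = 0.99

# Precommit to suicide: Omega almost always predicts this and pays $1000,
# but with probability 1 - accuracy it errs, asks, and you die.
ev_precommit = accuracy * 1000            # $990, plus a 1% chance of death

# Refuse to precommit: Omega almost always asks (you decline, get nothing),
# but with probability 1 - accuracy it errs and pays you anyway.
ev_refuse = (1 - accuracy) * 1000         # $10, with no death risk

# Precommitting only wins if you value your life at less than this
# (illustrative back-of-envelope, treating utility as linear in dollars):
breakeven_life_value = (ev_precommit - ev_refuse) / (1 - accuracy)
print(ev_precommit, ev_refuse, breakeven_life_value)
```

At 99% accuracy the precommitment is only worth it if a life is worth under about $98,000 to you, which is why the strategy demands near-certain faith in Omega.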
If anyone has sleep apnea, with or without snoring, or even a hint of being too tired during the day, please fix it. My life is profoundly better and I have access to a life I did not know was possible; no more sadness or depression whatsoever.
My life is 20-40x better. I feel like I have woken up in another world that was shut off from me for the first 20-ish years of my lifespan.
On that note, I've heard a lot about how addressing sleep apnea is great but how do you check if you have it in the first place (or, at least, to the extent that would warrant seeing a doctor about it)? 'Being tired during the day' doesn't seem like a strong self-diagnostic criterion.
Sleeping on my side worked for me; if I am not disciplined I mess it up. The expected benefit is so large that perhaps a mouthguard or machine is worth it. If you wake up at night but cannot remember it, or remember falling back asleep, then I know that is a good sign.
google sleep apnea/shallow breathing while sleeping. I'm afraid I cannot do too much more to help with my current knowledge.
I tried a lot of sleep tracking with apps and wearables. (fitbit, basis, sleep as android)
I currently use both fitbit and basis, fitbit visualises long term sleep better, basis visualises a single night better.
These devices showed me what my sleep looks like, and further what my "normal sleep pattern" looks like. While I have good sleep now, if I stop having good sleep I will have the graphs to prove it.
I retract my Great Filter hypothesis: I realized this predicts an ever smaller population of ever smarter hominids, who still have a good quality of life, making up in smarts what they miss in numbers. But the simple fact is, hominid populations were not dwindling. They were pretty steadily taking over the planet, migrating out of Africa and all that.
Well, unless it happened before and caused the mitochondrial Eve bottleneck, and then things turned different after that; but that is adding too much detail and courting a conjunction bias, so I won't propose it until more evidence is unearthed.
Upvoted for updating.
Seeking writing advice: how to keep writing
I've been having some shoulder pain for the past couple of weeks, which I've seen a doctor for. I've also noticed that I haven't actually written anything new for my novel, "S.I.", for almost that long, and have just been posting chapters from my buffer to the forum I post them in.
Given my previous attempt at writing long fiction ("Myou've Gotta Be Kidding Me"), I anticipate two likely courses. One, pain sucks, and when it goes away, my writing motivation will return, and I'll get back into the swing of things. Or two, my writing engine has run out of motivation-fuel for this story generally. In the latter case, I think I can avoid leaving the story entirely unfinished, though there would still be all sorts of dangling plot threads and unsolved mysteries; I should be able to muster up enough typing to have my protagonist finally feel overwhelmed by everything she's facing, retreat to Elliot Lake, and jump to my intended finale. It's far from a perfect solution, but seems better than putting the story on permanent hiatus (or more formally cancelling it) without any finish at all, as I ended up doing with "Myou've".
I'm hoping it's the first course. What I don't know... is if there's any way I can tweak the odds to /favour/ the first course.
Any ideas?
I have no direct experience with this myself, but have heard good things about http://zhealth.net. A quick search turns up Will Eden once recommending it here on LW and apparently a practitioner was brought in for a lecture at one of the first rationality camps so perhaps CFAR staff or one of the alums listed at http://rationalitybootcamp.blogspot.com could say more about it.
Is your keyboard / workstation set up correctly to minimise strain or whatever on your shoulder? I think an optimally positioned desk, keyboard, chair, screen etc should avoid much (any?) shoulder movement at all. You don't say whether typing exacerbates the shoulder pain or if it's just a background level of pain that's bothering you while writing, though.
Typing doesn't increase the shoulder pain. (As of the latest doctor's visit, he thinks it's actually more of a neck problem.) It's more the general background level of pain that's keeping me from being able to spend any time coming up with plot-stuff I want to write.
Sometimes the connection between pain and its cause isn't obvious. I was having fairly severe random knee pains, and it turned out that getting my bike lubricated made them go away, even though the pain wasn't happening when I was riding my bike or soon after riding it.
You might want to check on the ergonomics of everything you usually do.
Write by hand on your bed. Write on your phone during bus rides. Write by dancing to sign language. Write in a new medium humanity hasn't dreamt of. The keyboard is just one of many possible tools.
If utility is logarithmic in wealth, the Kelly Criterion tells me the right size of stake to put on a given bet, given the odds offered, my subjective probability and my wealth. In practice, in the real world, what's the right number to plug into the "wealth" part of the equation? My current savings? My yearly salary? The value of my home minus the money owing on it?
The "wealth" part of the equation is the total amount you're willing to gamble with. If you have money set aside for frivolities like food, then that wouldn't be part of your wealth as far as the Kelly Criterion is concerned.
The general principle with gambling is never to bet more than you're willing to lose. Kelly betting is optimal in the sense that over the long run, no other system will outperform it. In the short run, it's quite volatile and you can get very low.
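For concreteness, the standard Kelly fraction is a one-liner. A sketch (the function name is mine; "bankroll" here means the affordable-to-lose wealth discussed above):

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Fraction of bankroll to stake on a bet with win probability
    `p_win` that pays `net_odds` dollars of profit per dollar staked.
    Returns 0.0 when the edge is negative (don't bet)."""
    f = p_win - (1.0 - p_win) / net_odds
    return max(f, 0.0)

# A 60% chance of winning an even-money bet: stake 20% of the bankroll.
print(kelly_fraction(0.6, 1.0))
```

Note that the fraction scales with whatever you plug in as wealth, which is why getting the "wealth" definition right matters so much in practice.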
The amount which you can afford to lose.
Not so much in response to your specific question, but when trying to figure out what I can afford, I actually take a pretty simple approach: my liquid assets (mostly in the bank) plus things I could easily liquidate (stocks, etc.) minus a "rainy day fund" (this has varied in size over the years, but tends to sit at between 2 and 10 thousand USD, based on how hard I think it would be to get a job or find housing in the event that I lost one or both). Things like 401K and HSA are omitted; they're already earmarked for specific things and mean I don't have to worry about keeping other funds back for those purposes. Assets that are technically resalable but either not worth the effort or of high utility to my daily life (my computer, my car, etc.) are also omitted, though in a pinch I would of course sell them too.
The result is the money I can afford to spend. I can use it on video games, or vacations, or gifts for people, or a new car (at which point I would sell the old one), or fighting malaria. I can trickle it away on living expenses if I decide to quit my job and pursue hobbies or whatever (I would start looking for a new one once I got within "expected job hunt time * cash outflow rate" distance of the bottom of this wealth, though I could dip into the rainy day fund if needed).
I can also invest it into riskier things than a savings account, like stocks... or into any other kind of betting.
I'm inclined to think that policy towards illegal immigration is a result of incoherent moral standards-- some combination of "discourage strangers from showing up in large numbers" and "rescue harmless people who are close to death".
I don't think policy is a result of incoherent moral standards. I think it is a result of different people having different moral ideas that they consider important. So some subset of people are concerned enough to be active in discouraging strangers from showing up, and some other subset of people are concerned with rescuing people who are close to death, and the political/legislative system cobbles these things together into something which can pass a vote.
I suspect CEV is unlikely. That is, if one were to extrapolate volition from bunches of different people, the result would not be coherent, it would be incoherent. Because people have different and inconsistent volitions.
It reminds me of a thought experiment I read somewhere. Imagine that there are many people in the world who are dying from starvation. They would happily agree to be your slaves, if you feed them. There are too many of them and they are not qualified for the modern economy; if you gave them more than a minimum, there wouldn't be enough left for you to have a decent life. Imagine you only have the following three options:
A) Share everything with them. Everyone will live, but everyone will be rather poor.
B) Accept them as your slaves, in exchange for food and shelter. Everyone will live, you will keep your quality of life, but there will be a huge inequality.
C) Refuse to interact with them. You will keep your quality of life, but they will die from starvation.
If we order these options by altruism, which is how those poor people would see them, we get A > B > C. It would be best to make those poor people our equals, but even helping them survive as slaves is better than letting them die.
If we order these options by pure egoism, we get B > C > A. Having slaves would be a cool improvement, keeping status quo is acceptable.
But in the typical decision process, we refuse B to signal that we are not complete egoists, and refuse A because we are not really that much altruistic. Thus what remains is the option C... which paradoxically both altruists and egoists consider to be worse than B (and the altruists also worse than A).
Slavery is a non sequitur here. Under the circumstances you might suggest "I will pay you below minimum wage" or "I will pay you nothing, but provide housing on my plantation where you work." But so long as they have the right to walk away at any time it's not slavery, and there's nothing in the setup that justifies that loss of liberty. Your hypothetical situation is an argument against the minimum wage, not pro-slavery.
The thing is, I don't think a lot of illegal immigrants are unqualified for a modern economy. If they were unqualified, there wouldn't be so many laws trying to keep them from working.
Even without that, there's a lot of issues about giving them welfare. We could allow them entry as second-class citizens who have no minimum wage or access to welfare but still need to pay taxes. We'll avoid having to give them welfare, but we'll need to admit that we have second-class citizens, which is something we pretend to be against.
It also means that the people who are currently working at minimum wage jobs are likely to lose their jobs to the cheaper competition.
That depends on the degree to which the two groups compete for jobs. There are also positive secondary effects which reduce the impact (immigration reduces inflation and increases the overall market size). The employment impact of immigration on low-skilled workers is somewhere between slightly negative and slightly positive.
Great point!
Although hypothetically there could be two independent interests that just happen to be strategically aligned: some people want to stop unqualified immigrants, other people want to stop qualified immigrants who would compete with them on the job market.
Also there are of course concerns other than economic ones, such as people bringing nasty habits with them from their cultures. These were not included in the thought experiment, which perhaps makes it irrelevant for real-world situations.
Also having slaves has the risk of those slaves rebelling later.
I wasn't kidding when I said one of the motivations was a desire to not live with large numbers of strangers. One issue might be cognitive load-- the strangers have unfamiliar customs (is a sincere apology accompanied by a smile or a serious expression?) and possibly an unfamiliar language.
As far as I can tell, the economic side of not wanting immigrants is a sort of mercantilism -- a belief that all that matters is where the money is, so that new people showing up and getting paid for work just seems like money getting drained away. Weirdly, rich people who show up and spend money without working locally may be disliked, but they don't seem to be as hated as poor people who do useful work. I don't think it's just about competition for jobs.
https://hbr.org/2015/04/emotional-intelligence-doesnt-translate-across-borders
A few examples of people from different cultures misreading each other.
Probably but I'm not sure why that should be surprising; most moral standards we hold are inconsistent. So what would distinguish policy towards illegal immigration from other policies?
In a previous open thread, I brought up the theory of right-wing authoritarianism, which purports that conservative attitudes may be partially a defensive response to perception of threat. That offers one way of looking at policy towards illegal immigration: Maybe some people really do view immigrants as a threat to their way of living. So from that perspective they would not view them as harmless.
It may be simpler than that, though. Maybe 'rescue harmless people who are close to death' is not a strong value (or a value at all) for some. Certainly we know that psychopaths do not hold this as a value, and may even consider it an anti-value -- they would enjoy increasing the number of harmless people who are close to death. I'm sure this is not true for the majority of human beings, however.
I'd say inconsistent rather than incoherent moral standards, or different moral standards at tension.
Honestly, this seems like a "well, duh" sort of thing. One just needs to read the rhetoric from say both sides of the US immigration debate, or both sides of the discussions in Europe about refugees from North Africa to see this pretty clearly.
I've noticed a lot of disciplines, particularly ones that sometimes have to justify their value, often make a similar claim:
"[subject] isn't just about [subject matter]: it teaches you how to think"
This raises some interesting questions:
I can believe, for example, that Art History instils in its students some useful habits of thought, but I suspect they're less general than those from a discipline with an explicit problem-solving focus. What kind of scheme could one construct to score the meta-cognitive skills learned from a particular subject?
Are there any subjects which are particularly unlikely to make this claim? Are any subjects just composed of procedural knowledge without any overarching theory, cross-domain applicability, or necessary transferable skills?
Are there particularly potent combinations of skills, or particularly useless ones? It seems that a Physics degree and a Maths degree would have similar "coverage" in terms of thinking habits they instil, but a Physics degree and a Law degree would have much broader coverage. "I have technical skills, but I also have people-skills" is a fairly standard contemporary idea. Do Physics and Law have strikingly different coverages because Physics Lawyers don't really need to exist?
Physics lawyers definitely need to exist. I would strongly like to get an injunction against the laws of thermodynamics.
Most (~70%) of the time it is a euphemism for "it's useless, but we like it so we still want to use taxpayers' money to teach it".
(If people really cared about teaching people how to think, they'd teach cognitive psychology, probability and statistics, game theory, and the like, not stuff like Latin.)
I expect you're typical-minding here. I know enough linguistics enthusiasts who feel that learning new languages makes you think in new ways that I believe that to be their genuine experience. Also because I personally find a slight difference in the way I think in different languages, though not as pronounced as those people.
Presumably they, being familiar with the thought-changing effects of Latin but not having felt the language-changing effects of cognitive psychology etc. (either because of not having studied those topics enough, or because of not having a mind whose thought patterns would be strongly affected by the study of them), would likewise say "if people really cared about teaching people how to think, they'd teach Latin and not stuff like cognitive psychology". Just like you say what you say, either because of not having studied Latin enough, or because of not having a mind whose thought patterns would be strongly affected by the study of languages.
Sure, but the same happens with living languages as well.
I studied Latin for five years. Sure, it is possible that if I had studied it longer it would have changed my thought patterns more, but surely there are cheaper ways of doing that. (Even the first couple months of studying linear algebra affected me more, but I don't expect that to apply to everybody so I didn't list it upthread.)
A while ago I read that a betting firm would rather hire physics or math people than people with degrees in statistics, because the statistics folks too often think that real-world data is supposed to follow a normal distribution like the textbook examples they faced in university.
Outside of dedicated statistics programs, statistics classes often lead to students simply memorizing recipes without really developing good statistical intuition.
Teaching statistics often sounds much better in the abstract than it works out in practice.
That's a good point, but on the other hand, even thinking that everything is a Gaussian would be a vast improvement over thinking that everything is a Dirac delta -- and that it is therefore meaningful to speculate about why some politician's approval rating went down from 42.8% last week to 42.3% today, when both figures come from surveys with a sample size of 1600.
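For concreteness, the sampling noise in that approval-rating example is easy to check; this sketch assumes simple random sampling and the usual normal approximation:

```python
import math

p, n = 0.428, 1600
se = math.sqrt(p * (1 - p) / n)          # standard error of one survey estimate
margin_95 = 1.96 * se                    # ~95% margin of error
print(round(se, 4))         # about 0.0124, i.e. ~1.2 percentage points
print(round(margin_95, 3))  # about 0.024, i.e. ~2.4 percentage points
# A week-over-week move of 0.5 points (42.8% -> 42.3%) is well inside this
# noise band, so "explaining" the move is mostly explaining sampling error.
```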
A well trained mathematician or physicist who never took a formal course on statistics likely isn't going to make that error, just as a well trained statistician isn't going to make that error.
I would think that the mathematician is more likely to get this right than the medical doctor who got statistics lessons at med school.
I would interpret that claim as: "we may be practically useless, but we are still fucking high-status!" :D
The claim isn't just made with arguably useless disciplines, though. Many people argue (quite rightly, IMO) that programming doesn't just teach you to command machines to do your bidding, but also instils powerful thinking tools. So even if kids don't grow up to be software developers, it's still valuable for them to learn programming. Similar arguments could be made for law or finance.
Slightly off topic, but I both program and play guitar and for the longest time I was wondering why I was getting an overwhelming feeling of the two bleeding into each other. While playing guitar, it would "feel" like I was also coding. Eventually I figured out that the common thread is probably the general task of algorithm optimization.
There's no way for me to tell if programming made me a better guitar player or vice versa.
Seems to me that "teaches you how to think" does not necessarily imply instilling habits of thought. I would interpret that (say, in the context of Art History) as:
The habits of thought are not involved in all of this -- it's more of a broadening-your-horizons exercise.
I suspect that with mastery of a skill comes an ability to understand "mastery" itself: on a variation of man-with-a-hammer syndrome, holding mastery of one area will help you better understand the direction to head in when mastering, and learning in, other areas.
To me the line now reads: "mastery of [subject] isn't just about [subject matter]: mastery teaches you how to think"
where [subject] can vary; the significance of what people are trying to convey is maybe not in the [subject] but in the experience of learning.
Thought experiment. You are doing a really boring job you dislike, like data entry, but it is so well paid you don't want to leave it. You cannot automate it. You cannot work from home. You sit in the office 8 hours. Thankfully the work does not take 8 hours; you can do it in 5 and then browse the web or something.
What do you do? Trying to spend the other 3 hours meaningfully (like studying with Anki) and trying to find challenging games within the actual job part are two obvious approaches. Would you listen to audiobooks while doing it? What else?
Post to lesswrong.com.
Tried something like that. Was unable to do anything productive after 5 hours without a real deadline.
Could you hire a cheap online personal assistant that would give you the deadlines? Like, you would make a plans for the whole week in advance, give those plans to the assistant, and then during the week the assistant would role-play being your manager. (Using another person as proxy for your planning self.)
If it doesn't feel real, it's easy to ignore.
Plan A: Change your environment; spend three hours a day preparing a proposal for management/ownership to work as a contractor paid per entry as opposed to an employee paid by the hour. Find the relevant tax and overhead savings to make this a mutually beneficial arrangement. Find out who in management/ownership can approve your proposal and who it just creates headaches for; buy beer for both.
I understand that goes against the spirit of your question -- your work environment may be too rigid, management that could approve the proposal may be out of reach of the data entry staff, or one of many other objections -- but 60 hours a month is a large amount of time, and it is shocking what can be done with it.
Plan B: Now on to things I've actually done in that situation; spend 60 hours preparing a bulletproof argument/presentation for a raise, spend 60 hours learning how to create better resumes, spend 60 hours learning how to job hunt without a resume (handshakes and recommendations), spend 60 hours job hunting, and last on the list spend the time on entertainment so that you are mentally recharged to make the most of your personal time.
I would start programming mobile games, and would hope to make money from them. If I don't succeed, at least I had a hobby, and maybe can use the experience to get a more interesting job later. If I do succeed, then I do not have to solve the problem of boring job anymore.
That would require sufficient freedom to spend those 3 hours not just programming, but also painting pictures, editing 3D models, editing levels, and testing the game on the phone. Okay, hypothetically that is not necessary; there can be some parts that I have to do at home. But it would be much more convenient if I could do whatever is necessary for the game immediately when I need it.
Or, if I didn't have a specific plan, I would just learn random stuff from online universities. I enjoy learning, so I wouldn't necessarily care about how useful those lessons are. I would imagine that some part of it would be useful somehow later -- if nothing else, then for impressing people.
Someone who is a buddhist could use those three hours to meditate daily, and achieve nirvana in a few years, while keeping a well-paying job. Also, being a buddhist could help with the feelings of boredom from the job. ;)
That's not how it works.
You are right, Buddha himself had to quit his job before he could achieve enlightenment.
Upvoted for the determined "that's not how nirvana works".
I can't concentrate if the words I'm hearing are not the ones I'm typing. Audiobooks would be a terrible distraction for me during data entry. Music without lyrics would be better.
During blank minutes at a call center I used to work at, I made slow progress at writing a novel. It was made more enjoyable by the quirk that my writing flows better with pen and paper.
I have this half-baked idea that trying to be rational by oneself is a slightly pathological condition. Humans are naturally social, and it would make sense to distribute cognition over several processors, so to speak. It would explain the tendencies I notice in relationships to polarize behavior: if my partner adopts the position that we should go on vacations as much as possible, I almost automatically tend to assume the role of worrying about money, for example, and we then work out a balanced solution together. If each of us were to decide on our own, our opinions would be much less polarized.
I could totally see how it would make sense in groups that some members adopt some low probability beliefs, and that it would benefit the group overall.
Is there any merit to this idea? Considering the well known failures in group rationality, I wonder if this is something that has long been disproved.
Mercier & Sperber made a similar argument, commenting that e.g. things that seem like biases in the context of a single individual (such as confirmation bias) are actually beneficial for the decision-making of a group. An excerpt:
There are studies that compared the performance of couples with that of randomly assigned pairs (from the same group) and found that couples perform better. This suggests that couples specialize, and at the same time rely on the specialization of the other partner ("I knew you'd make the appointment").
The other side of the coin is breaking up: you feel like a part of your brain has been ripped off -- namely, the part you had outsourced to your partner.
Just like when the internet goes out and you can't get to google/Wikipedia/etc! But more traumatic considering how much more bandwidth is exchanged between people in physical and emotional space.
That's a powerful idea and it actually goes deeper than you may think. We are divided even internally inside ourselves. There is reason to think that your internal rational decision-making processes consist of multiple sub-processes that combine and compare various points of view. Each sub-process has the same level of interaction with other sub-processes as you would have when speaking to another person. Your mental sub-processes may not even distinguish between thoughts and ideas coming from another part of your brain and coming from another person.
Yes, it is difficult to maintain balance when the other person is pushing in some direction. You feel the instinct to push the other way, as if to provide a balance on average. The problem is, balance on the average means imbalance in your head, if the other person is unbalanced.
It's like when we have a debate about how much is 2+2, and the other person insists that it is 3, then when I say 4, there is a risk that in the future we will achieve a compromise value of 3.5, which I already perceive as wrong. So people have the social instinct to say at least 5, so that the future compromise value may be 4. Even if they originally did not really believe it was 5.
One possible solution would be to make everyone write their opinion before hearing the opinions of others. But that can be done in artificial settings, not in real life -- we usually already heard the opinions of some people. Also, if we have iterated debates about the same topic (e.g. the vacations), we can already predict what our partner will say.
To me it simply means that to have a rational debate, it is better to exclude the people who are strongly mindkilled about something. (Obviously, deciding who they are, is a problem on a higher level.) Maintaining balance is difficult on its own, and almost impossible when someone keeps pushing you on one side: you either fall on the side you are pushed, or you tilt to the opposite direction and fall down later when you are alone. We should not overestimate our own ability to be reasonable in difficult situations.
I can imagine a debate where you flip a coin and you either present your true opinion, or you role-play a selected opinion. Problem is, how would you create the set of the role-played opinions?
What if you forget to include something important? What if most of the supposedly "random" opinions are actually variants of one side (which is already overrepresented in the sincere part of debate), and the other side is underrepresented (and some third side is completely absent). That would be quite likely if people who prepare the "random" options are from the same population as the sincerely debating ones: they would add many minor variants of their own opinion, because those would sound meaningful; and then a few obvious strawmen of their enemies, to create a feeling of a fulfilled duty.
"life pro tips" for people slightly on the schizoid spectrum? I have realized that my version of "nerdiness" actually checks significant checkmarks of diagnostic criteria, such as indifference, aloofness, anhedonia, inner fantasies, being suspiciously "good" at dealing with criticism (i.e. not care) etc. One good idea I managed to google up is to build empathy by praising people. This goes well with a buddhist practice I have found earlier, which is to wish good things to people, like happiness or long life.
These things may deal with the social aspect of it pretty well, but I guess what I would like to know, is this kind of internal retreat from the outer world does come from retreating to the social world? I mean, perhaps not for everybody, but for me the world outside my head can be split into two distinct categories, the social world of humans and the material world of everything else, nature, the universe, also human made things, cars etc. The world of subjects and the world of objects, right? Do you think feeling aloof, indifferent and internally retreating from the world of objects too (unlike aspergers, who are often fascinated by a narrow range of objects, this seems the major difference between asperger and schizoid) can come from a retreat from the social, human world, so fixing that would fix the other as well? How to put it... I was never really interested in the beauty of nature (as a subset of never really interested in anything), do you think getting more interested in people (by practicing praises and good wishes) also makes one more interested in this non-people things of the world as well? That everything reduces to the social?
I can offer no help; but feel like "life pro tips for living happy with schizoid spectrum disorders" would be a really good piece of knowledge to create.
Has anyone here ever had the "location" of their sense of self change? I ask because I've recently read that while some people feel like "they" are located in their heads, others feel like "they" are in their chests, or even feet. Furthermore, apparently some people actually "shift around", in that sometimes they feel like their sense of self is in one body part, and then it's somewhere else.
I find this really interesting because I have never had such an experience myself; I'm always "in my head", so to speak--more precisely, I feel as though "I" am located specifically at a point slightly behind my eyes. The obvious hypothesis is that my visual sense is the sense that conveys the most information (aside from touch, which isn't pinned down to a specific location), which is why I identify with it most, but the sensation of being "in my head" persists even when I have my eyes closed, which somewhat contradicts that hypothesis. Also, the fact that some people apparently don't perceive themselves in that place is more weak evidence against that hypothesis.
So, any thoughts/stories/anecdotes?
I always thought my sense of self was in my head because of where my eyes and ears were. I look out at myself and see my hands typing and my legs when I am walking and I am looking from my head. I.e., I am in my head, that is the center.
I have had out-of-body experiences. Nothing too major; just the sensation of floating above my 'actual' body, sometimes only a few centimeters, other times a full human body length (as if I was standing on my own head). I had a burst of these out-of-body experiences around 2005-2006 (perhaps four or five in a two-year period) and have not had them since. Each episode lasted only a minute or two. Once, a friend was present, and they told me I had 'zoned out' for several minutes. It's worth mentioning that I don't know what caused or triggered the episodes. During the episodes my eyes were fully open and I could see what was happening in front of me. However, I wasn't focused on sensory input but was more inward-focused on my own thoughts.
If you have any specific questions, feel free to ask.
http://www.yale.edu/minddevlab/papers/starmans%26bloom.pdf?hc_location=ufi
I'm not sure this is definitive, but it's at least interesting.
When I'm reaching with my hands into a space I can't see to, say, untangle something, I definitely have more of a sense of space around my hands than my head. Closing your eyes and untying/retying your shoes right now might simulate this.
I think this is learned -- Aristotle considered the self to be in the heart, and the brain to be just for cooling the blood. I think it is because we are taught from childhood to "use your head", etc.
Be critical of these sorts of factoids. Aristotle was a 'wise man' which in that pre-scientific time meant more seemingly-wise than actually-wise regarding most topics (although Aristotle was better than other contemporaries to be fair). You can take it as weak evidence that Aristotle claiming the self to be in the heart and not in the brain means that most people of the time thought it was in the brain not the heart, as with today. His view got recorded for history because it was contrarian.
In ancient Greece, it was common knowledge that the liver was the thinking organ. This is obvious, because it is purple (the color of royalty) and triangular (mathematically and philosophically significant).
Is this true that most people believed the brain was where thought came from? I know the Egyptians used to rip it out because they didn't think it was important.
I was literally just thinking about this the other day -- about how ancient people didn't notice that people who got head injuries would change their behavior or die instantly.
I don't have a single friend whose behavior I'd have noticed changing after a head injury: the only reason I know it happens is because I've read case reports of it happening to someone. Maybe some doctor might have noticed, but then, I'd expect ancient peoples to also have fewer head injuries that were serious enough to change behavior but also mild enough to be survivable.
That is good evidence, on the other hand.
People that got heart injuries tend to die instantly, too :-/
A better clue would be that you can knock someone out by hitting him on the head, but not on any other part of the body.
If you hit someone hard in the region of the heart, they die.
The large field of the so-called out-of-body experiences is precisely about the "location of self" moving outside of the body. I understand that specific types of meditation and mental exercises can produce this effect fairly reliably. So can some psychoactives.
Don't forget that your ears which provide you with hearing and the sense of balance and orientation are on your head, too.
I have had out-of-body experiences which match the descriptions of other out-of-body experiences fairly well (for example, while I am half-dreaming with eyes open during sleep paralysis), and I think that's something completely different.
In an out-of-body experience of the type that I have, you feel like your head and other body parts are somewhere different than where they really are. Your sense of self in relation to your body is preserved: you might still be in your head, but you imagine your head is somewhere else (and hallucinate visual and tactile phenomena consistent with your body being somewhere else).
It's not much different from a regular dream -- instead of dreaming you're in a fantasy place, you dream you are in your room, but at another location in your room. (Then you feel a sort of snap back to your true body when the dream ends.)
That's different from feeling a sense of self as localized somewhere other than behind the eyes.
I've occasionally been able to move my sense of self downwards from my head. From what I've read, people didn't put their sense of self in their heads (it was typically in the heart or abdomen) until the importance of the brain was discovered.
I find this very hard to believe, given that humans are highly visual creatures and our eyes are located in our head. What time period/people had their sense of self in their heart or abdomen?
Whenever my nerdy/schizoid/introverted side is stronger, I feel exactly this: I am behind the eyes, staring forward, and in this state my spatial awareness -- the ability to be aware in 360 degrees -- is bad. But whenever this side of me retreats a bit (for example, any sense of success or victory beats down the inner nerd for a while) and I come out from my inner shell to bask in the world, I feel at home in space, I get 360-degree awareness, I know where my legs and hands are and so on; then I am less aware of where "I" am, and more in the center of the body, perhaps at chest level.
Not everyone is that visually focused.
I'd say I'm more focused on auditory and kinesthetic senses. I'm focused in my head, but more between the ears than behind the eyes.
Even moreso than visual, we are mental creatures. Ideas and culture can make all the difference.
To the OP: there are times and circumstances by which I can lose much connection to the location of my body at all. Usually associated with stargazing.
Worth noting: the Dalai Lama recommends, before falling asleep, focusing the sense of self in the middle of the chest at the level of the heart for deeper sleep, or in the throat for more vivid dreams. I have never tried it, but it may be an experiment for people with sleep problems or those trying to lucid dream.
A hypothesis: if you think your sense of self is connected to the location of your eyes, try spending some time blindfolded -- say, an hour in a normal/safe environment without vision -- and see if it moves. It might end up in your hands as you feel your way around, or your feet as you travel around.
It would seem reasonable that the focus of your interaction with the environment feels like it's at one of your strongest senses, but might be elsewhere for other people with different sensory wiring.
Cool question.
I have experienced a change in 'location' of my sense of self- it 'spreads out'. It is a feeling that "I" do not reside in the particular head/body of Bageldaughter, but instead in both my head/body and the other things I happen to be keenly aware of. If I am deeply engrossed in a conversation or social activity, "I" will begin to be identified with the group of individuals as a whole. The particular intentions, thoughts or feelings that I typically associate with myself lose some of their distinguishing quality from the ones I perceive from others.
There is often an accompanying "spreading-out" of "my" location in time- the round-trip time of ideas through a group is often slower than just through my own head. I will get the sense that my "current moment" spans back to a thought that originated in my friend's head one minute ago!
I can invoke this sensation pretty reliably. It can be fun. I get worried when people talk about experiencing this type of thing as some kind of higher truth than normal, because it seems like a sign of mental illness that may not end well.
As a single data point, this is totally bizarre to me... I've never in my life felt a "sense of self" anywhere myself, but I find the idea intriguing.
How do you locate yourself? Do you have to meditate or something, or are "you" just always there?
I haven't had this experience myself, but apparently it's not difficult to induce: http://www.npr.org/2011/02/25/134059271/creating-the-illusion-of-a-different-body
The big cryonics story of the week, about the Thai toddler Matheryn Naovaratpong:
The Girl Who Would Live Forever
http://motherboard.vice.com/read/the-girl-who-would-live-forever
Two-year-old cryogenically frozen by parents
http://www.cnet.com/news/two-year-old-cryogenically-frozen-by-parents/
The girl who could come back from the dead: Toddler who died from a brain tumour is FROZEN by parents who hope she can one day be revived by medical advances
http://www.dailymail.co.uk/health/article-3043272/The-girl-come-dead-Toddler-died-brain-tumour-FROZEN-parents-hope-one-day-revived-medical-advances.html
PZ Myers weighs in. I guess he got bored with inflicting damage on communion wafers and accusing Michael Shermer of sexually assaulting women, and now he wants to pick on cryonicists:
How to live forever
http://freethoughtblogs.com/pharyngula/2015/04/16/how-to-live-forever/
Well, hats off to /u/DataPacRat for fighting the good fight in that comment section. I suspect most of the thread is people who just came in to post their little dig at the weird meat-popsicle cultists and then move on, so I'm not sure if he's achieving much, but if nothing else he's stopped me from feeling I need to go in there and join the fray to say what he ended up saying, except less well.
Lots of people employing the weirdness heuristic, as expected. Oh, and of course David Gerard's over there too.
sigh
PZ Myers:
Here's a fun topic of conversation - if I happen across PZ Myers, and he's having a heart attack, should I feel any obligation to perform CPR?
I get that Myers' article pisses a lot of people here off (myself included), but let's try to refrain from mean-spirited-ness, neh? Mind-killing happens readily enough by itself without people helping the process along.
Normally, yes I think it wise to refrain from mean-spirited-ness. But when someone writes a hit piece against the parents of a recently deceased toddler because they dared to try to save her life in a weird way, well, in this case I'm going to make an exception.
The fact that his behavior emotionally triggers you is no reason to engage in bad and unproductive behavior yourself. Even if it's "justified".
I think you are greatly missing the point. If you want to be effective in the world, sometimes that involves being politically smart. And sometimes the politically smart thing to do is a show of force. You should not jump from emotion straight to action. But sometimes, after examining the evidence and weighing the possibilities, the best response is an angry-toned rejection.
I have nothing against calculated actions that show force. But against a blogger who is in the business of getting page views by stirring up controversy, being mean-spirited isn't showing force.
Yes, if they have cryonics or its successor technologies for themselves and they can reason about consequences carefully. If you have an injury or pathology in the 24th Century that the health care providers don't know how to treat, you could go into brain preservation to see if the health care providers in, say, the 26th Century would know how to help you. Some of those health care professionals active in the 26th Century might have been born in the 20th or 21st Centuries and have gone through a round or two of brain preservation themselves, and they entered the practice of medicine in the 26th Century as one of their new careers. "Hey, I know this guy. He helped to resuscitate me in 2327. I owe him so I'll return the favor."
He's not saying in that quote that they shouldn't feel an obligation; he's focusing on doubting whether they'd want to resurrect them. I think they very likely would, and PZ is ignoring the entire first-in/last-out principle which cryonics plans on using to further encourage people to resurrect, but it helps to actually focus on what his criticism is.
I am oscillating between "calm down, politics is the mindkiller" and "if the iron is hot, I want to believe it is hot".
Is there any hope that if we bite our collective tongues and not feed the trolls, they will get bored and find a new victim? I am afraid that when the troll has sufficient power and allies in online media, the old advice of not feeding it is just not available anymore; whatever you do, someone else on the planet will feed the troll anyway.
It almost makes me think these guys are maximizing evil, but then I realize they are simply maximizing money, and the laws of the universe say that you generate most screaming when you poke in the place it hurts. It is nothing personal; it's just that your tears are an important component in paperclip production. The Clippy does not hate you, it just calmly explores the places where your density of sensory receptors is highest. It could just as well try to make you laugh, but that is a less productive thing to do with humans.
Agreed. Of course, calming down is hard enough by itself without people seemingly actively trying to prevent you from calming down--people like, say, the commenters in that particular blog post. (Major kudos to DataPacRat for managing to stay calm while he/she was being accused of believing in "godbots"; I would not have been able to do the same.) I'm inclined to apply the principle of charity here along with Hanlon's Razor to conclude that they're not actually deliberately trying to piss you off... but God, it sure feels like it sometimes.
Money? I think PZ types are mainly looking for narcissistic supply. Also, there was an article either here or on SSC about how people sometimes don't want to be high status, just to feel high status; I cannot find it anymore, but it seems relevant.
EDIT: Found it. I think this is what is going on here, not really money.
Yes, the link explains why some people may be obsessed by some ideas -- because they generate a feeling of status in their heads. Now the other question is why this idea instead of some other one. For example, suppose you are looking for a "bad guy" whose reputation you can smash online, thus generating heroic feelings in yourself... from all the available options, why choose cryonics?
Well, I guess it is somehow similar to the previous "bad guys", so whatever enemy-detection algorithm chose them, it also chose cryonics.
atheists... video game fans... cryonicists... -- complete the pattern
What do these have in common?
For a clickbait website, this is a perfect target. All they have to do is write: "Your way of life makes you hate women, therefore your way of life should be regulated by well-meaning outsiders. What is our proof for this? We have found this one woman who feels uncomfortable with you. And since you have a minority of women, it must be a general rule. Now stop resisting and start obeying your new overlords!"
Well, for me the interesting question here is who are the next likely targets. Who else fits this pattern? Can we recognize them before they are attacked? And assuming we care about them, can we use this knowledge to somehow protect them?
My suspicion is that "rationalists" and "effective altruists" do fit this pattern; they were just not given sufficiently high priority yet. It may depend on how large a wave of hate the attack on cryonicists can generate. (There is always a risk of choosing a group that is too weird, so the outsiders will be too indifferent to join the wave.)
Of course there is always the chance that I am pattern-matching here too much. My only defense is that we could use this model to generate predictions about who will be attacked next, and then see whether those predictions were right. (On the other hand, it also feels like doing homework for PZ Myers, so maybe this is not a good topic for a public debate.)
I don't think this is what is going on here at all. The pattern match that is going on is between cryonics and fringe-science or pseudoscientific ideas that sound like they are promising things they cannot deliver. This is much more about PZ thinking of himself as a skeptic and having just enough biology background to think he can comment on any biology-related issue.
Yeah. The parent & sibling comments here got me curious about exactly what PZ wrote, and whether it'd be a transparently politically motivated fulmination against cryonicists.
But the post, as far as I can see, is just an unfavourable comparison of cryonics to ancient mummification, and Myers calling cryonicists frauds who practice "ritual" & "pseudo-scientific alteration of [a] corpse", frauds sometimes defended with "the transhumanist technofetishist version of Pascal’s Wager". Strong stuff, but I don't see anything in the post about partisan politics, race, nerd culture (unless one counts "transhumanist technofetishist" as a dog-whistle meant to slam nerds in general...?), or sexism or feminism or gender (well, except the reference to the frozen girl as a "girl").
Ctrl-F-ing for "Myers" doesn't reveal anything along those lines either.
I see several comments in the political categories I mentioned but they weren't posted by PZ or cheered by PZ, so I'm a bit surprised by the comments here focusing on PZ to impute political motives to him and psychoanalyze him.
PZ's post all but says he's slamming cryonicists because (to his mind) they're crooks & quacks. (Based on the reference to "tortur[ing] cadavers", maybe there's a purity-violation ick-reaction too. That's still pretty distant from the motivations people are speculating about here.) I don't understand why I'd need a special explanation for that, over & above the more common reasons why people tend to scoff at cryonics (absurdity heuristic, plus scepticism about future technological trends w.r.t. brain preservation & re-instantiation, plus over-generalization from everyday experience of how freezing affects food and the like).
The funny part is PZ being a nerdy white male atheist scientist, so basically the perfect target for this. Could this partially be a preventive action, i.e., if I shoot at my own group, perhaps people won't notice I am one of them?
In debates I read about similar people, "projection" is a word mentioned repeatedly. I would also suspect "reaction formation" (known as "the lady doth protest too much" outside of psychoanalysis) to play an important role.
That means, I think there is more than merely strategically shooting at one's own phenotype to draw attention away from one's own person. If drawing attention away were the only goal, it would make more sense to try to draw attention toward some other group, also an easy target, but not including me. For example, white male nerds could shoot at white male jocks, since it is only being white and male that is considered a bad thing in certain circles. Similarly, white male atheists could shoot at white male Christians. So there must be some additional explanation.
(Not everyone is like this. There are also people who do not shoot at their own group, but at a different group, or at least at a much larger supergroup so that their own group gets a smaller fraction of attention. For example, white male non-nerds shooting at white male nerds, or rich white people putting huge emphasis on whiteness and maleness and maybe also cissexuality but never ever mentioning class privilege. (Which is rather ironic, considering that the whole privileged/oppressed framework was taken from Marx. Here, Marx would be an example of a rich white male shooting at rich white males.))
So I guess in a way these people are trying to shoot at themselves -- on some metaphorical level. It's like they perceive something undesirable in themselves... then use the typical mind fallacy to generalize it to their whole group (because being a member of a sinful group is less painful than being a sinful individual in an otherwise mostly healthy group)... and then try to atone for their sins by attacking all the other members of their group (because it is less painful than trying to improve oneself). That is, on some level they are sincerely fighting against something they consider evil. They have just completely lost control over the huge biases that govern their evil-detection mechanisms.
Here is an experimental prediction: Find a sample of über-politically-correct white men publicly shooting at their own group (not just a similar group or a huge superset). Explore their background, and the background of typical members of such groups. I predict that among these online warriors you will find a higher percentage of racists, rapists, etc. than in the general population. (Where by "racists" I don't mean scoring non-zero on an implicit association test, but like actual neo-Nazis; etc.)
Is it wrong that I'm most saddened that they tore apart her brain for a year chasing that tumor, before they did the sensible thing and let her be cryopreserved?
Not that this is an open and shut case at all, but we need laws on the books regarding elective cryopreservation in the case of brain degenerative disease.
I think I have ADHD. What should I do now?
In environments where you can, allow yourself to fidget, or even better, keep moving more actively (stationary bike, exercise ball, treadmill, etc.). I have borderline ADHD and have found that to be much more effective than meds. YMMV.
I'd give ADHD meds a try. In fact, I did.
I had a doctor who said they were like a light switch and you'd know quickly if they were helping (he had ADHD as well).
I didn't feel any improvement and stopped.
Consider experimenting with supplements. If you are a U.S. student look into getting exam accommodations. You might also consider neurofeedback and meditation.
UK also.
What problems are you trying to solve? Knowing you have ADHD is useful because it offers insight into what solutions will work well. For example, it might offer suggestions as to what medications might produce useful results.
I want to have more focus and find it easier to do boring things.
That's a very common desire. I am guessing that most everyone would like to have more focus and willpower.
Everyone would like better vision but some people can't see more than a few feet without the help of glasses.
However when people just say "I want to see better", you don't know whether they actually need a telescope, a microscope, or a pair of glasses.
Well the question asked was "what problems are you trying to solve (which an ADHD diagnosis might help with)". For that specific problem there are various medications and other methods which seem to help people with ADHD.
Just like a person with nearsightedness would describe a problem of seeing objects that are far away, and a diagnosis would help them solve that problem.
Consider the effects that caffeine has on kids with ADHD (there is some research out there), and then evaluate whether that is similar to caffeine's effect on you.
After that, consider other alternatives; but coffee seems like the most "secretly be normal" advice I have.
<I don't drink coffee>
What sort of technology is "Moloch-dangerous"?
That is, what sort of technology, social, engineering, or otherwise, presents new opportunities to throw things into the fire to gain victory? Is it "all," or "some with [these features]"?
Off the top of my head: wherever the marginal returns from an increase in efficacy are very large, either because the outcome is very sensitive to some factor or the prize is just very big. So places with strong feedback loops where you can consolidate the gains, and you know others are doing so as well. Thus, probably just about any technology that is suddenly in demand due to an unexpected event, for one. Another may be any sort of strong, truly defensive (protective, not deterrent) capability. Obvious yet profitable applications of new technology fall into the event category above, if first to market wins most of the market share.
May also be worth thinking of anti-moloch factors, resulting in underinvestment relative to gains. Market share will be split many ways, first mover has little advantage, and perhaps greater costs. So maybe lack of strong IP protection. What else?
What are the consequences of neutral actions in ethics? After a quick perusal of Google, there doesn't seem to be anything addressing my question, and I think there should be some discussion on this.
This question is related to a problem I've been having with ethics lately; namely, should one's ethical system be viable in any kind of reality? Failing that, shouldn't there be some omniversal meta-ethical structure?
I've had a few thoughts on this, and some arguments played out in my head, but I want to see what others think.
What sort of consequences are you thinking of? The idea that ethics can consider two options equally preferable and not care which one you take follows from the idea of an ethical utility function (even a complicated function that only exists in an abstract mathematical sense). We don't need to assume it directly, we can go with the Archimedean property (roughly, that crossing the street can be worth a small chance of death).
Seeking Phi in Game Theory
For a bit of fiction I'm thinking of... is there any aspect of game theory in which offering the number/symbol phi can indicate a threat to defect unless the other player cooperates?
So far, a quick bit of Googling has only turned up phi in relation to the Bargaining problem, at https://en.wikipedia.org/wiki/Bargaining_problem , and while I /could/ wrestle such an interpretation out of what's there, I'm hoping there's a less abstruse way to go about it.
Aha - at http://www.pnas.org/content/109/26/10409.full.pdf , zero-determinant strategies are defined by two factors, chi and phi, and at least when chi is 1, maximum phi results in the strategy of Tit-for-Tat, which is exactly what I was looking for.
Hm... is there a mathematical notation for the maximum of a variable, like |x| indicates the absolute value of x?
In addition, when chi is at the extortionate level of 3.5, then when phi is maxed out, the odds of player X cooperating if player Y cooperated last turn are 66.6% or 50%, depending on whether player X also cooperated on the previous turn. Thus, in order to give player Y enough of an incentive to want to cooperate - providing odds of at least 50% of ending up with CC - player X may have their own incentive to set chi to something below 3.5. As it happens, there are two mathematical constants a little below that which might be chosen: the reciprocal Fibonacci constant, also called psi, about 3.36, or the more widely known pi, 3.14.
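The numbers above can be checked directly from the Press & Dyson parametrization of zero-determinant strategies. A minimal sketch, assuming the standard Prisoner's Dilemma payoffs (T, R, P, S) = (5, 3, 1, 0); the function names `zd_strategy` and `max_phi` are my own labels for illustration, not anything from the paper:

```python
def zd_strategy(chi, phi, T=5, R=3, P=1, S=0):
    """Return (p1, p2, p3, p4): the probability that player X cooperates
    after the previous-round outcomes CC, CD, DC, DD (X's move first),
    for an extortionate zero-determinant strategy with slope chi."""
    p1 = 1 - phi * (chi - 1) * (R - P)
    p2 = 1 - phi * (chi * (T - P) + (P - S))
    p3 = phi * ((T - P) + chi * (P - S))
    p4 = 0.0
    return (p1, p2, p3, p4)

def max_phi(chi, T=5, R=3, P=1, S=0):
    """Largest phi that keeps all four probabilities inside [0, 1]."""
    bounds = [1 / (chi * (T - P) + (P - S)),   # keeps p2 >= 0
              1 / ((T - P) + chi * (P - S))]   # keeps p3 <= 1
    if chi > 1:
        bounds.append(1 / ((chi - 1) * (R - P)))  # keeps p1 >= 0
    return min(bounds)

# chi = 1, maximal phi: recovers Tit-for-Tat, i.e. approximately (1, 0, 1, 0)
print(zd_strategy(1, max_phi(1)))

# chi = 3.5, maximal phi: cooperate with probability about 2/3 after CC
# and 1/2 after DC, matching the 66.6% / 50% figures above
print(zd_strategy(3.5, max_phi(3.5)))
```

Note that at chi = 1 the maximal phi is 1/5, which pushes p2 to 0 and p3 to 1, giving exactly the copy-your-opponent behavior of Tit-for-Tat; at chi = 3.5 the binding constraint is p2 >= 0, giving phi = 1/15.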
Is
not sufficient?
My goal is a relatively simple, even iconic, image or logo, which can be easily interpreted regardless of the viewer's language. The symbol for phi - a circle with a line through it - provides fodder for as much interpretation as I desire, from the overlapped 1 and 0 of binary to an ouroboros to an axis-and-equator to the Golden Ratio - and if a minimal modification can explicitly add "the maximum value of the variable denoted by this symbol", I'll be a happy little rat indeed.
This page suggests "∨".
(I've only ever seen that symbol meaning "or", but that's a kind of maximum.)
That's one of the two possibilities I've found over the past day. The other is ⊤ from https://en.wikipedia.org/wiki/Greatest_element .
(I wonder if presenting this idea to an actual mathematician would induce any wincing? Off to /r/math to find out...)
∨ is the mathematical symbol for "or" (in logic) -- my guess is that it may be derived from the fact that the initial letter of the Latin word for "or" is "v". There's a kinda convention that when you have a(n associative) binary operator, you use a bigger version of it to signify applying it to all the things in a sequence or set, so you'd want a larger one -- a bit like a capital "V".
⊤ is the mathematical symbol for the "top" element of a Boolean algebra; maybe more generally of a lattice. You wouldn't use it to mean "maximum of these things" in general.