If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open Thread, Jun. 22 - Jun. 28, 2015

A short, nicely animated adaptation of The Unfinished Fable of the Sparrows from Bostrom's book was made recently.

The same animation studio also made this fairly accurate and entertaining introduction to (parts of) Bostrom's argument, though I don't know what to make of their (subjective) probabilities for the possible outcomes.

1Elo
Although it doesn't improve my life at all, I quite like the short story as an analogy for UFAI risks.

Hope this is appropriate for here.

I had an epiphany related to akrasia today, though it may apply generally to any problem where you are stuck. For the longest time I thought to myself: "I know what I actually need to do, I just need to sit down and start working, and once I've started it's much easier to keep going." I was thinking about this today and I had an imaginary conversation where I said: "I know what I need to do, I just don't know what I need to do so I can do what I need to do." (I hope that makes sense.) And then it hit me: I have no fucking clue what I actually need to do. It's like I've been trying to empty a sinking ship of water with buckets, instead of fixing the hole in the ship.

Reminds me in hindsight of the "definition of insanity": "The definition of insanity is doing the same thing over and over and expecting different results."

I think I believed that I lacked the necessary innate willpower to overcome my inner demons, instead of lacking a skill I could acquire.

7Luke_A_Somers
Once I was facing akrasia and I kind of had the same thing happen. I knew what I needed to do, and I ruminated on why I wasn't doing that. I thought at first that I was just being lazy, but then I realized that I subconsciously knew that the strategy I was procrastinating from was actually pretty terrible. Once I realized that, I started thinking about how I might do it better, and then when I thought of something (which wasn't immediate, to be sure) I was actually able to get up and do it.
3Viliam
Sometimes "laziness" is being aware on some level that your current plan does not work, but not knowing a better alternative... so you keep going, but you find yourself slowing down, and you can't gather enough willpower to start running again.
1Elo
Sounds like a growth-mindset discovery! Congratulations! For my benefit, can you try to rephrase this sentence with alternative words or in a more verbose form, mainly by tabooing the multiple meanings of the word "need" that you were trying to express? Without knowing the tone, it just sounds confusing. Meta: I suspect people have rewarded you for achieving an epiphany.
1ZeitPolizei
Let's say, I have some homework to do. In order to finish the homework, at some point I have to sit down at my desk and start working. And in my experience, actually starting is the hardest part, because after that I have few problems with continuing to work. And the process of "sitting down, opening the relevant programs and documents and starting to work" is not difficult per se, at least physically. In a simplified form, the steps necessary to complete my homework assignment are: 1. Open relevant documents/books, get out pen and paper etc. 2. Start working and don't stop working. Considering how much trouble I have getting to the point where I can do step one (sometimes I falter between steps one and two), there must be at least one necessary step zero before I am able to successfully complete steps one and two. And knowing steps one and two does not help very much, if I don't know how to get to a (mental) state where I can actually complete them. A different analogy: I know how I can create a checkmate if I only have a rook and king, and my opponent only a king. But that doesn't help me if I don't know how to get to the point where only those pieces are left on the board.
2Elo
A suggestion: commit to a small amount of the work. I.e., instead of committing to utilising a local gym, commit to arriving at the gym, after which if you decide to go home you can; but at least you break down the barrier to starting. In the homework case, commit to sitting down and doing the first problem. Then see if you feel like doing any more than that.

Deep Learning is the latest thing in AI. I predict that it will be exactly as successful at achieving AGI as all previous latest things. By which I mean that in 10 years it will be just another chapter in the latest edition of Russell and Norvig.

Purely on Outside View grounds, or based on something more?

2Richard_Kennaway
Outside View only. That's the way it's always worked out before, and I'm not seeing anything specific to Deep Learning to suggest that this time, it will be different. But I am not a professional in this field.
3Baughn
So, some Inside View reasons to think this time might be different: * The results look better, and in particular, some of Google's projects are reproducing high-level quirks of the human visual cortex. * The methods can absorb far larger amounts of computing power. Previous approaches could not, which makes sense as we didn't have the computing power for them to absorb at the time, but the human brain does appear to be almost absurdly computation-heavy. Moore's Law is producing a difference in kind. That said, I (and most AI researchers, I believe) would agree that deep recurrent networks are only part of the puzzle. The neat thing is, they do appear to be part of the puzzle, which is more than you could say about e.g. symbolic logic; human minds don't run on logic at all. We're making progress, and I wouldn't be surprised if deep learning is part of the first AGI.
1RobFack
While the work that the visual cortex does is complex and hard to crack (from where we are now), it doesn't seem like being able to replicate that leads to AGI. Is there a reason I should think otherwise?
6Houshalter
There is the 'one learning algorithm' hypothesis, that most of the brain uses a single algorithm for learning and pattern recognition, rather than specialized modules for doing vision, another for audio, etc. The evidence comes from experiments where they cut the connection from the eyes to the visual cortex in an animal and rerouted it to the auditory cortex (and I think vice versa). The animal then learned to see fine, and its auditory cortex just learned how to do vision instead.
0jsteinhardt
This seems an odd thing to say. I would say that representation learning (the thing that neural nets do) and compositionality (the thing that symbolic logic does) are likely both part of the puzzle?
2Houshalter
The outside view is not very good for predicting technology. Every technology has an eternity of not existing, until suddenly one day it exists out of the blue. Now no one is saying that deep learning is going to be AGI in 10 years. In fact the deep learning experts have been extremely skeptical of AGI in all forms, and are certainly not promoting that view. But I think it's a very reasonable opinion that it will lead to AGI within the next few decades. And I believe sooner rather than later. The reasons that 'this time it is different':
* NNs are extraordinarily general. I don't think you can say this about other AI approaches. I mean search and planning algorithms are pretty general. But they fall back on needing heuristics to shrink the search space. And how do you learn heuristics? It goes back to being a machine learning problem. And they are starting to solve it. E.g. a deep neural net predicted Go moves made by experts 54% of the time.
* The progress you see is a great deal due to computing power advances. Early AI researchers were working with barely any computing power, and a lot of their work reflects that. That's not to say we have AGI and are just waiting for computers to get fast enough. But computing power allows researchers to experiment and actually do research.
* Empirically they have made significant progress on a number of different AI domains. E.g. vision, speech recognition, natural language processing, and Go. A lot of previous AI approaches might have sounded cool in theory, or worked on a single domain, but they could never point to actual success on loads of different AI problems.
* It's more brain like. I know someone will say that they really aren't anything like the brain. And that's true, but at a high level there are very similar principles. Learning networks of features and their connections, as opposed to symbolic approaches. And if you look at the models that are inspired by the brain like HTM, they are sort of converging
2jsteinhardt
Several of the above claims don't seem that true to me.
* Statistical methods are also very general. And neural nets definitely need heuristics (LSTMs are basically a really good heuristic for getting NNs to train well).
* I'm not aware of great success in Go? 54% accuracy is very hard to interpret in a vacuum in terms of how impressed to be.
* When statistical methods displaced logical methods it's because they led to lots of progress on lots of domains. In fact, the delta from logical to statistical was probably much larger than the delta from classical statistical learning to neural nets.
0Houshalter
I consider deep learning to be in the family of statistical methods. The problem with previous statistical methods is that they were shallow and couldn't learn very complicated functions or structure. No one ever claimed that linear regression would lead to AGI. That [predicting expert Go moves with 54% accuracy] narrows the search space to maybe 2 moves or so per board, which makes heuristic searching algorithms much more practical. You can not only generate good moves and predict what a human will do, but you can combine that with brute force and search much deeper than a human as well. I mean that NNs learn heuristics. They do require heuristics in the learning algorithm, but not ones that are specific to the domain, whereas search algorithms depend on lots of domain-dependent, manually created heuristics.

Revisited The Analects of Confucius. It's not hard to see why there's a stereotype of Confucius as a Deep Wisdom dispenser. Example:

The Master said, "It is Man who is capable of broadening the Way. It is not the Way that is capable of broadening Man."

I read a bit of the background information, and it turns out the book was compiled by Confucius' students after his death. That got me thinking that maybe it wasn't designed to be passively read. I wouldn't put forth a collection of sayings as a standalone philosophical work, but maybe I'd use it as a teaching aid. Perhaps one could periodically present students a saying of Confucius and ask them to think about it and discuss what the Master meant.

I've noticed this sort of thing in other works as well. Let's take the Dhammapada. In a similar vein, it's a collection of sayings of Buddha, compiled by his followers. There are commentaries giving background and context. I'm now getting the impression that it was designed to be just one part of a neophyte's education. There's a lot that one would get from teachers and more senior students, and then there are the sayings of the Master designed to stimulate thought and reflection...

2[anonymous]
It would have been condescending for the master, too, to talk in short bursts of wisdom to his disciples while he was alive. The issue is rather that once he dies, the top-level disciples gradually elevate the memory of the master into a quasi-deity, pass on the thoughts verbally for generations, and by the time they get around to writing it down the memory of the master is seen as such a big guy / deity and more or less gets worshipped, so it becomes almost inconceivable to write it in anything but a condescending tone. But it does not really follow that the masters were just as condescending IRL. You can see this today. The Dalai Lama is really an easy guy, he does not really care how people behave to him, he is just friendly and direct with everybody, but there is an "establishment" around him that really pushes visitors into high-respect mode. I had this experience with a lower lama, of a different school. I was anxious about getting the etiquette right, hands together, bowing etc., then he just walked up to me, shook my hand in a western style, did not let it go but just dragged me halfway across the room while patting me on the back and shaking with laughter at my surprise; it was simply his joke, his way of breaking the all too ceremonious mood. He was a totally non-condescending, direct, easy-going guy, who would engage everybody on an equal level, but a lot of retainers and helpers around him really put him and his boss (he was something of a top level helper of an even bigger guy too) on a pedestal.
1Epictetus
Good point. I suppose what I had in mind is that when the disciple asks the master a question, the master can give a hint to help the disciple find the answer on his own. Answering a question with a question can prod someone into thinking about it from another angle. These are legitimate teaching methods. Using them outside of a teacher/student interaction is rather condescending, however. This is also a major factor. Disciples like to make the Master into a demigod and some of his human side gets lost in the process.

Do people who take modafinil also drink coffee (on the same day)? Is that something to avoid, or does it not matter?

0drethelin
It seems to have a synergistic effect but I regularly drink coffee and take modafinil irregularly so it's hard to say. It doesn't seem bad by any means.

I went to the dermatologist today, and it turns out I have some sort of cyst on my ear. He said it was nothing. He said the options are to remove it surgically, to use some sort of cream to remove it over time, or to do nothing.

I asked about the benefits of removing it. He said that they'd be able to biopsy it and be 100% sure that it's nothing. I asked "as opposed to... how confident are you now?" He said 99.5 or 99.95% sure.

It seems clear to me that the costs of money, time and pain are easily worth the 5/1000(0) chance that I detect something dangerous earlier and correspondingly reduce the chances that I die. Like, really really really really really clear to me. Death is really bad. I'm horrified that doctors (and others) don't see this. He was very ready to just send me home with his diagnosis of "it's nothing". I'm trying to argue against myself and account for biases and all that, but given the badness of death, I still feel extremely strongly that the surgery+biopsy is the clear choice. Is there something I'm missing?

Also, the idea of Prediction Book for Doctors occurred to me. There could be a nice UI with graphs and stuff to help doctors keep track of the predictions they've made. Maybe it could evolve into a resource that helps doctors make predictions by providing medical info and perhaps sprinkling in a little bit of AI or something. I don't really know though, the idea is extremely raw at this point. Thoughts?

1) Surgery is dangerous. Even innocuous surgeries can have complications such as infection that can kill. There are also complications that aren't factored into the obvious math; for example, ever since I got 2 of my wisdom teeth out, my jaw regularly tightens up and cracks if I open my mouth wide, something that never happened beforehand. I wasn't warned about this and didn't consider it when I was deciding to get the surgery.

2) If it's something dangerous, you're very likely to find out anyway before it becomes serious. E.g., if it's a tumor, it's going to keep growing and you can come back a month later and get it out then with little problem.

3) Even if it's not nothing, it might be something else that's unlikely to kill you. Thus the 5/1000 chance of death you're imagining is actually a 5/1000 chance of it being not nothing.

3Adam Zerner
Are you just making these points as things to keep in mind, or are you making a stronger point? If the latter, can you elaborate? Are you particularly knowledgeable?

The point is that your consideration of "if surgery, definitely fine" vs "if no surgery, 5/1000 chance of death" is ignoring a lot of information. You're acting like your doctor is being unreasonable when in fact they're probably correct.

1Zubon
Stronger point: since we are at Less Wrong, think Bayes' Theorem. In this case, a "true positive" would be cancer leading to death, and a "false positive" would be death from a medical mishap trying to remove a benign cyst (or even check it further). Death is very bad in either case, and very unlikely in either case. The relevant quantities:

P(death | cancer, untreated) - this is your explicit worry
P(death | cancer, surgery)
P(death | benign cyst, untreated)
P(death | benign cyst, surgery) - this is what drethelin is encouraging you to note
P(benign cyst)
P(cancer)

My prior for medical mishaps is higher than 0.5% of the time, but not for fatal ones while checking/removing a cyst near the surface of the skin. As drethelin's #2 notes, this is not binary. If it is not a benign cyst, you will probably have indicators before it becomes something serious. Similarly, you have non-surgical options such as a cream or testing. Testing probably has a lower risk rate than surgery, although if it is a very minor surgery, perhaps not that much lower. If the cyst worries you, having it checked/removed is probably low risk and may be good for your mental health. But now we might have worried you about the risks of doing that (sorry) when we meant to reduce your worries about leaving the cyst untreated.
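To make the comparison concrete, here is a minimal expected-value sketch of that list in Python. Every number in it is a made-up placeholder for illustration (the 0.5% is the doctor's own figure; the conditional death rates are invented, not medical data):

```python
# Expected-value sketch of "surgery/biopsy now" vs "wait and watch".
# All probabilities are illustrative placeholders, not medical estimates.

p_cancer = 0.005              # doctor's stated upper bound that it is "something"
p_benign = 1 - p_cancer

# Hypothetical P(death | diagnosis, action) for each branch of the list above
p_death = {
    ("cancer", "surgery"): 0.02,    # caught early via biopsy and treated
    ("cancer", "wait"):    0.10,    # assume later detection is worse
    ("benign", "surgery"): 0.0001,  # mishap risk of a minor procedure
    ("benign", "wait"):    0.0,     # a benign cyst left alone
}

def p_death_given(action):
    """Total probability of death if we choose `action`."""
    return (p_cancer * p_death[("cancer", action)]
            + p_benign * p_death[("benign", action)])

for action in ("surgery", "wait"):
    print(f"{action}: P(death) = {p_death_given(action):.5f}")

# With these made-up numbers the two options differ by only ~0.03 percentage
# points, so plausible changes to the mishap rate or to P(cancer) can flip
# which option looks safer.
```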
4ChristianKl
In general, if you list everything you can think of and give it probability scores, you ignore unknown unknowns. For medical interventions like surgery, unknown unknowns are more likely to be bad than to be good. As a result it's useful to have a prior against doing a medical intervention if there is no strong evidence that the intervention is beneficial.
0[anonymous]
Maybe we need to visualize surgery differently. I used to think about it like replacing a part in a car: why not just do it if the part is not working too well? Maybe we should see it as damage. It's like someone attacking you with a knife. Except that the intention is completely different, they know what they are doing, their implements are far more precise and so on, so the parallel is not very good either. I am just saying that "recovering from an appendicitis" could be at least visualized as something closer to "recovering after a nasty knife fight" than to "just had the clutch in my car replaced". What do you think?
0ChristianKl
Why do you think we need to do so?
0Elo
Agreed. If you are getting it done, and prefer the higher chance of life, get it done without being fully anaesthetized, possibly by a plastic surgeon; they seem to have profits to burn on quality equipment from people getting unnecessary (debatably) cosmetic procedures.

You're probably misreading your doctor.

When he said "99.5 or 99.95%" I rather doubt he meant to give the precise odds. I think that what he meant was "There is a non-zero probability that the cyst will turn out to be an issue, but it is so small I consider it insignificant and so should you". Trying to base some calculations on the 0.5% (or 0.05%) chance is not useful because it's not a "real" probability, just a figurative expression.

1Adam Zerner
Great point. He did seem to pause and think about it, but still a good point. It seems notably likely that you're right, and even so, I doubt that his confidence is well-calibrated.
8Manfred
I think you should use the cream for a week, to start with. Also, thought experiment: Suppose a person is going to live another 70 years. If undergoing some oversimplified miracle-cure treatment will cost, one way or another, 1 week of their life, what chance of "it's just a cyst" will they accept? 99.97%. So from the doctor's perspective (neglecting other risks or resources used, taking their '99.95%' probability estimate at face value, and assuming that a biopsy is some irreplaceable road to health), your condition is so likely to be benign that the procedure to surgically check spends your life at about the same rate as it saves it.
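The 99.97% figure falls out of a quick breakeven calculation, on Manfred's own simplifying assumption that the treatment costs one week with certainty and, in the bad case, saves the full remaining 70 years:

$$
70 \text{ years} \approx 3650 \text{ weeks}, \qquad (1-p)\cdot 3650 \text{ weeks} = 1 \text{ week} \;\Rightarrow\; 1-p \approx 0.027\%,\quad p \approx 99.97\%.
$$

So once the estimated chance of "it's just a cyst" exceeds roughly 99.97%, the week spent on the procedure costs more expected lifespan than it saves.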
5[anonymous]
The biggest thing is that the doctor's priorities are not your priorities. To him, a life is valuable... but not infinitely valuable; estimates usually put the value of a life at (ballpark) 2 million dollars. When you consider the relative probability of you dying, and then the cost to the healthcare system of treatment, he's probably making the right decision (you, of course, would probably value your own life MUCH MUCH higher). Btw, this kind of follows a blind spot I've seen in several calculations of yours; let me know if you're interested in getting feedback on it. Finally, there are two other wrinkles: the possibility of complications and the possibility of false positives from a biopsy. The second increases the potential cost, and the first decreases the potential years added to your life. Both of these tilt the equation AGAINST getting it removed.
9ChristianKl
The doctor has no incentive to minimize the cost of treatment. He makes money by having a high cost of treatment.
0Douglas_Knight
Right, MattG is 100% backwards.
7Unknowns
Even adamzerner probably doesn't value his life at much more than, say, ten million, and this can likely be proven by revealed preference if he regularly uses a car. If you go much higher than that your behavior will have to become pretty paranoid.
0Silver_Swift
That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you", regardless of n; therefore the financial value of your own life is almost always infinite*. *: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).
1Unknowns
I think you are mistaken. If you would sacrifice your life to save the world, there is some amount of money that you would accept for being killed (given that you could at the same time determine the use of the money; without this stipulation you cannot meaningfully be said to be given it).
0[anonymous]
Good point.
2Adam Zerner
(Two people mentioned this so I figure I'll just reply here.) Re: the doctor's perspective. I see how it might be rational from his perspective. My first thought is, "why not just give me the info and let me decide how much money I'm willing to invest in my health?". I could see how that might not be such a good idea though. From a macro perspective, perhaps those sorts of transaction costs might not be worth the benefits of added information -> increased efficiency? Plus it'd be getting closer to admitting how much they value a life, which seems like it'd be bad from an image perspective. I guess what I'm left with is saying that I find it extremely frustrating, I'm disappointed in myself for not thinking harder about this, and I'm really really glad you guys emphasized this so I could do a better job of thinking about what the interests are of parties I interact with (specifically doctors, and also people more generally). I feel like it makes sense for me to be clear that I would like information to be shared with me and that I'm willing to spend a lot of money on my health. And perhaps that it's worth exercising some influence on my doctors so they care more about me. Thoughts?
4ChristianKl
The doctor you are with has a financial interest in treating you. When he advises you against doing something about the cyst, he's acting against his own financial interests. Overtreatment isn't good if you value life very much. Every medical intervention comes with risks. We don't fully understand the human body, so we don't know all the risks. From the perspective of the doctor the question likely isn't "How much money is the patient willing to invest in health?" but "How much is the patient willing to invest in the cosmetic issue of getting rid of an ugly cyst?"
0philh
If the surgery isn't necessary, and something goes wrong during it, does the doctor need to worry about getting sued?
0ChristianKl
If I remember right, the best predictor for a doctor getting sued is whether patients perceive the doctor to be friendly. Advising an unnecessary procedure might be malpractice, but informing a patient about the option to have it done, especially when there are cosmetic reasons for it, shouldn't be a big issue.
0Elo
Even good doctors can get sued. But it speaks more to why people sue (the doctors did a bad human-interaction job rather than a negligent job). I do wonder about the nature of doctoring: maybe you inevitably get 3% (arbitrary number) of cases wrong, and if you are also bad at people skills, this bites you, whereas if you get 3% wrong and you are good at people skills you avoid being sued in 99% of those 3% of cases.
1Elo
A perspective on the nature of medical advice: There exist people who are so concerned about not dying that they would do anything in their power to survive medically, and organise for themselves regular irrelevant medical tests. They are probably over-medicated and wasting a lot of time, e.g. a brain scan for tumours (where no reason to think they exist is present). There exist people who get yearly mammograms. There exist people who probably get around to their (recommended yearly) mammogram every few years. There exist people who have heart attacks from long-term lifestyle choices. There exist people who are so unconcerned about dying that they smoke. This is the range of patients that exist. You sound like you are closer to the top in terms of medical concern. The dermatologist has to consider where on the spectrum you are when devising a treatment, as well as where the condition is on the spectrum of risk. For a rough estimate (not a doctor), I would say the chance of a cyst on your ear killing you in the next 50 years is less than the chance of getting an entirely different kind of cancer and having it threaten your life. (Do you eat burnt food? Bowel cancer risk. Do you go in the sun? Skin cancer risk.) If it can be removed by cream, it will still be gone. The specialist should suggest a biopsy to cover their ass, but really, it could be 99 different types of skin growths or a few types of cancerous growth. With no other symptoms there is no reason to suspect any danger exists. The numbers you suggested sound like they were fabricated when given to you, which is a reason not to mathematically attack them but to take them on the feeling value of 99.99% thumbs up. (And it's really hard and almost impossible to find 0.01%, so medically we don't usually bother.)
0Strangeattractor
My advice would be: 1) See another doctor to get a second opinion. (And possibly a third opinion, if you don't like the second doctor.) Keep looking for a doctor until you find one that explains things to you in enough detail so that you understand thoroughly. Write down the questions you want answered ahead of time, and take notes during your appointment. "I am confident" is a bullshit answer unless you understand what possibilities the doctor considered, why the doctor thinks this one is the most likely, what the possible approaches to dealing with it if it turns out to be "not fine" are, and their advantages and disadvantages, what warning signs to look for that might indicate it is not fine, and the mechanism by which the cream option would work. Unfortunately, the state of medical knowledge is such that there may not be good answers to all of the questions. The best the doctor may be able to do is "I don't know" for some of them. But you can get a better understanding of the situation than you have now, and a better understanding of where there are gaps in the medical knowledge. 2) Read a bunch of scientific papers about cysts and biopsies and tests so that you understand the possibilities and the risks better. 3) Also read about medical errors and risks of surgeries. People following doctor's instructions is one of the leading causes of death in the USA. I read an article about it in JAMA a few years ago. There might be more up-to-date papers about it by now. Having a medical procedure done is not a neutral option when it comes to affecting your chances to continue living. For example, here's a paper that indicates that prostate biopsies could increase the mortality rate in men. This is just one study, not enough information to make an informed decision. Boniol M, Boyle P, Autier P, Perrin P. Mortality at 120 days following prostatic biopsy: analysis of data in the PLCO study. Program and abstracts of the 2013 American Society of Clinical Oncology Annual
0[anonymous]
(deleted--everything I said was said by others already)
0minusdash
Saying 99.9999% seems a mouthful. Would you have preferred an answer like this instead: https://www.youtube.com/watch?v=7sWpSvQ_hwo :)
0Adam Zerner
If brevity was the issue, I wouldn't have expected him to say 5 instead of 9. And I would have expected him to use stronger language than he did. My honest impression is that he thinks that the chances that it's something are really small, but nothing approaching infinitesimally small.
2minusdash
I'd say an expert in any field has better intuitions (hidden, unverbalized knowledge) than what they can express in words or numbers. Therefore, I'd assume that the decision that it's not worth doing the examination should take priority over the numerical estimate that he made up after you asked. It may be better to ask for the odds in such cases, like 1 to 10,000 or 1 to a million. Anyway, it's really hard to express our intuitive expert knowledge in such numbers. They all just look like "big numbers". Another problem is that nobody is willing to put a dollar value on your life. Any such value would make you upset (maybe you are the exception, but most people probably would be). Say the examination costs $100 (just an example). Then if he's 99.95% sure you aren't sick, and 0.05% sure you are dying, and sends you home, then he (or rather your insurance) values your life at less than $200,000. This is a very rough estimation, but it seems in the right ballpark for how much a stranger's life seems to be valued by the general population. Of course it all depends on how much insurance you pay, how expensive the biopsy is etc. Maybe you are right that you deserve to be examined for your money, maybe not. But people tend to avoid this sort of discussion because it is very emotionally loaded. So we mainly mumble around the topic. People are dying all the time out of poverty, waiting on waiting lists, not having insurance, not being able to pay for medicaments. But of course people who have more money can override this by buying better medical care. Depending on the country there are legal and not-so-legal methods to get better healthcare. You could buy a better package legally, put some cash in the doctor's coat, etc. You need to consider that the people who'd do your biopsy can do other things as well, for example work on someone's biopsy who has a 1% chance of dying instead of your 0.05% (assuming this figure is meaningful and not just a forced, uncalibrated guess). If y
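The $200,000 figure is just the stated cost divided by the residual risk, using minusdash's illustrative numbers:

$$
\text{implied value of a life} \le \frac{\$100}{0.0005} = \$200{,}000.
$$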
0ChristianKl
It's quite easy to get more expensive healthcare. On the other hand, that doesn't mean the healthcare is automatically better. If you are willing to pay for any treatment out of your own pocket, then a doctor can treat you in a way that's not being paid for by an insurance company because it's not evidence-based medicine.
0minusdash
It can still be evidence-based, just on a larger budget. I mean, you can get higher quality examinations, like MRI and CT even if the public insurance couldn't afford it. Just because they wouldn't do it by default and only do it for your money doesn't mean it's not evidence based. Evidence-based medicine doesn't say that this person needs/doesn't need this treatment/examination, it gives a risk/benefit/cost analysis. The final decision also depends on the budget.
0Adam Zerner
It seemed to me that the proposition was made under false assumptions. Specifically, I value my life way more than most people do, and I value the costs of time/money/pain less than most people do. He seemed to have been assuming that I value these things in a similar way to most people. Yeah, I understand this now. Previously I hadn't thought enough about it. So given that I am willing to spend money for my health, and that I can't count on doctors to presume that, it seems like I should make that clear to them so they can give me more personalized advice.
0ChristianKl
How do you know? Because you do things like flossing every day? Healthcare economics quite frequently mean that a person prefers to pay more rather than less to signal to themselves that they are doing everything in their power to stay alive. People quite frequently make bad health decisions because buying an expensive treatment feels like doing something to stay healthy, while it's much more difficult emotionally to do nothing.
0Adam Zerner
I understand that for a lot of people, the X isn't about Y thing applies. That investing in health might be about signaling to oneself/others something. But I assure you that I genuinely do care. Maximizing expected utility is a big part of how I make decisions, and I think that things that reduce the chances of dying have very large expected utilities (given the magnitude of death). That said, I'm definitely not perfect. I ate pizza for lunch today :/
0Lumifer
"Willing to spend money" meaning that you're willing to pay out of pocket for medical procedures? Or that you are willing to fight your insurance so that it pays for things it doesn't think necessary? And doctors are supposed to ignore money costs when recommending treatment (or lack of it) anyway. If you want "extra attention", I suspect that you would need to proactively ask for things. For example, you can start by doing a comprehensive blood screen -- and I do mean comprehensive -- including a variety of hormones, a metals panel, a cytokine panel, markers for inflammation, thryroid, liver, etc. etc. You will have to ask for it, assuming you're reasonably healthy a normal doctor would not prescribe it "just so".
0Adam Zerner
I'm willing to spend out of pocket. More generally, I value my life a lot, and so I'm willing to undergo costs in proportion to how much I value my life.
2Lumifer
You're constrained by the size of your pocket :-) Being willing to spend millions on saving one's life is not particularly relevant if you current bank balance is $5.17. Very rich people can (and do) hire personal doctors. That, however, has its own failure modes (see Michael Jackson).
0Adam Zerner
Yeah, I know. It's just hard to be more specific than that. I guess what I mean is that I am willing to spend a much larger portion of my money on health than most people are.
3Lumifer
Is that a revealed preference? ;-)

Inspired by terrible, terrible Facebook political arguments I've observed, I started making a list of heuristic "best practices" for constructing a good argument. My key assumptions are that (1) it's unreasonable to expect most people to acquire a good understanding of skepticism, logic, statistics, or what the LW crowd thinks of as how to use words rightly, and (2) lists of fallacies to watch out for aren't actually much help in constructing a good argument.

One heuristic captured my imagination as it seems to encapsulate most of the other heuristics...

4ChristianKl
As far as I understand CFAR teaches this heuristic under the name "Gears-Thinking".
1selylindi
Does that name come from the old game of asking people to draw a bike, and then checking who drew bike gears that could actually work?
1[anonymous]
One thing you might want to consider is the reason people are posting on Facebook... usually, it's NOT to create a good argument, and in fact, sometimes a good, logical argument is counterproductive to the goal people have (to show their allegiance to a tribe).
0Elo
you might like www.yourlogicalfallacy.com

Can anyone think of a decision which might come up in ordinary life where Bayesian analysis and frequentist analysis would produce different recommendations?

1Vaniver
The core difference between B and F is what they mean by "probability." If you go to the casino, the Bs and the Fs will interpret everything the same way, but when you go to the stock market, the Bs and the Fs will want to use their language differently. It seems likely to me that most of the uncertainties that show up in everyday life are things that Bs would be comfortable assigning probabilities to, but Fs would be hesitant about.
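As a toy illustration of that difference in language (a made-up everyday example, not something from the thread):

```python
# The same data read two ways. Suppose my bus has been late on 3 of the
# last 20 days, and I want a number for "the chance it is late tomorrow"
# so I can decide when to leave the house.

late, days = 3, 20

# Bayesian reading: probability is a degree of belief, so it is fine to put
# a prior on the unknown lateness rate and report a belief about tomorrow.
# With a uniform prior this is Laplace's rule of succession.
p_late_tomorrow = (late + 1) / (days + 2)
print(f"Bayesian: P(late tomorrow) ~ {p_late_tomorrow:.2f}")          # ~0.18

# Frequentist reading: probability is a long-run frequency, so the natural
# report is an estimate (plus a confidence interval) of the lateness *rate*,
# while hesitating to call that number "the probability of tomorrow's bus
# being late" -- the hesitation described above for one-off events.
rate_estimate = late / days
print(f"Frequentist: estimated lateness rate = {rate_estimate:.2f}")  # 0.15
```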
0Douglas_Knight
When it comes to an action, you must structure your knowledge in Bayesian terms in order to compute an expected utility. It is only when discussing detached knowledge that other options become available.
0jsteinhardt
??? This isn't true unless I misunderstood you. There are frequentist decision rules as well as Bayesian ones (minimax is one common such rule, though there are others as well).
0Douglas_Knight
In what sense is minimax frequentist?
0jsteinhardt
From Wikipedia: ETA: While that page talks about estimating parameters, most of the math holds for more general actions as well.
0Douglas_Knight
I don't think that "non-bayesian" is a common definition of "frequentist." In any event, it's not a useful category.

Philosophers are apparently about as vulnerable as the general population to certain cognitive biases involved in making moral decisions, according to new research. In particular, they are just as susceptible to the order of presentation affecting how moral or immoral they rate various situations. See a summary of the research here. The actual research is unfortunately behind a paywall.

A paper "Philosophers’ Biased Judgments Persist Despite Training, Expertise and Reflection" (Eric Schwitzgebel and Fiery Cushman) is available here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Stability-150423.pdf

2[anonymous]
Very interesting, thanks for finding it. The methods and statistics look good (feel free to correct me). However, I wish the authors would have controlled for gender. I don’t think it would significantly change the results, but behavioral finance research indicates that men are more susceptible to certain behavioral biases than women: https://faculty.haas.berkeley.edu/odean/papers/gender/BoysWillBeBoys.pdf Admittedly, “Boys Will Be Boys” addresses overconfidence bias rather than framing and order biases.

An interactive Twitch stream of a neural network hallucinating. Or: Twitch plays a large-scale deep neural net.

EDIT: Fixed link.

3ZankerH
You've messed up the link; this is it: http://www.twitch.tv/317070
2Douglas_Knight
Some more links: the blog post and a ten minute sample that you put on youtube. I imagine that there are many people who prefer youtube to twitch. In particular, I like the 2x setting on youtube.
0Houshalter
I'm amazed you found that video since I haven't posted it anywhere yet. I'm still trying to figure out how to add more than 2 minutes of music to it.
0Douglas_Knight
I found it by putting the twitch title into the youtube search bar. I tried it because people copy all sorts of videos to youtube.

What do you all think of "General Semantics"? Is it worth e.g. trying to read "Science and Sanity"? Are there insights / benefits there that can't be found in "Rationality: AI to Zombies"?

3ChristianKl
Science and Sanity contains a lot of good insights that aren't in the sequences. The problem is that it's not an accessible book. It's hard to read and a substantial time investment.
3John_Maxwell
Do you think this is an intrinsic property of the insights, or could someone compress the book in to something shorter, more readable, and almost as useful?
3ChristianKl
I don't think the problem is that the book is long. It's that it basically defines its own language and is written in that language. It's similar to a math textbook defining terms and then using those terms. For example, it defines the term "semantic reaction" and then goes on to abbreviate it as s.r. The gist is that if you say something, the meaning of what you say is the reaction that happens in the brain of your listener when he hears the words. It's not hard to understand that definition on a superficial level. On the other hand, it's hard to really integrate it. It's a fundamental concept used throughout the book.

There is a paper out, the abstract of which says:

...Second, respondents significantly underestimated the proportion of [group X] among their colleagues. Third, [members of group X] fear negative consequences of revealing their ... beliefs to their colleagues. Finally, they are right to do so: In decisions ranging from paper reviews to hiring, many ... said that they would discriminate against openly [group X] colleagues. The more [group anti-X] respondents were, the more they said they would discriminate.

Before you go look at the link, any guesses as to what the [group X] is? X-/

3Jiro
I correctly guessed what X was. Because there's only one thing it could ever be, unless the paper was talking about very unusual subgroups like Jehovah's Witnesses in Mormon territory.
3Manfred
Well, it could be creationist zoologists, or satanist school teachers, or transgender fashion models. But of course it's psychologists studying psychologists, and of course it's reiterating an interesting narrative we've seen before.
2fubarobfusco
One would expect creationists to be underrepresented in zoology for a number of reasons, only one of which is that zoologists have negative beliefs about creationists and tend not to hire or encourage them. Others would include that creationists may avoid studying zoology because they find the subject matter unpleasantly contradictory to their existing commitments; and that some people previously inclined to creationism who study zoology cease to be creationists.
2[anonymous]
Anecdotally, I know at least one creationist zoologist, although I don't think he publishes creationist stuff. He doesn't stand out at all or have any noticeable trouble because of it. All zoologists I know are weirder than the average person.
1Lumifer
That's an interesting observation, isn't it?
1Nornagest
Between the word "beliefs" (which rules out most demographic groups), the word "openly" (which rules out anything you can't easily hide), and the existence of a plausible "anti-X" group (which rules out most multipolar situations), there's not too many possibilities left. The correct answer is the biggest, and most of the other plausible options are subsets of it. I suppose it could also have been its converse, but you don't hear too much about discrimination cases going that way.
0philh
I think that ngurvfgf would have been a plausible X in some places (and perhaps the opposite in others), but the correct one was the first that came to mind and the one I considered most likely.
3ahbwramc
ROT13: V thrffrq pbafreingvirf pbeerpgyl, nygubhtu V'z cerggl fher V unq urneq fbzrguvat nobhg gur fghql ryfrjurer.
1Vaniver
Cbyvgvpnyyl pbafreingvir.
0NancyLebovitz
I haven't looked. Pbafreingvirf.
0Larks
fbpvny pbafreingvirf

Sam Altman's advice for ambitious 19 year olds.

I don't know of Sam Altman, so maybe this criticism is wrong, but the quote: "If you join a company, my general advice is to join a company on a breakout trajectory. There are usually a handful of these at a time, and they are usually identifiable to a smart young person." Absent any guide on how to identify breakout-trajectory companies, this advice seems unhelpful. It feels like: "Didn't work for you? You must not have been a smart young person, or you would have picked the right company."

Paired with the paragraph below on not letting salary be a factor, I am left with the suspicion that Sam runs what he believes to be a company with a 'breakout trajectory' and pays noncompetitive salaries.

Now to find a way to test that suspicion.

2[anonymous]
I have read something like this on a rationalist blog somewhere. Basically it was a type of advice like "you want to win the race? well, just run fast! just put one foot in front of the other quicker than others do, d'uh!" Maybe we need a name for this.
2John_Maxwell
Sam Altman is the president of Y Combinator. I think the way to look for a company on a breakout trajectory is to find a company that is growing fast and getting a lot of buzz but has not become established and is not thoroughly proven yet. Even better might be to find a company that's growing fast but not getting a lot of buzz, but that's probably trickier.
4Douglas_Knight
As the president of YC, he doesn't really hire anyone, but he does fund lots of companies, and his advice could be interpreted as: work for a YC company.
5John_Maxwell
The more precise cynical interpretation would be "work for a promising early stage YC company". Note that he also could have told you to work for a late stage one or apply to YC in order to start one. But it's probably true that working at a promising early-stage YC company is what would most benefit YC on the margin. (Although if what benefits YC most on the margin is what generates the most value, then generating more value for YC also seems like a good way to generate enough value that you capture a significant chunk.)
5[anonymous]
This type of advice is really not honest enough, I think. Let me try an honest one:

1) Move to America if you don't already live there. Bluff your way through immigration officers and whatnot.

2) Move to Silicon Valley if you don't already live there. Deal with the costs of living there / outside your parents' house anyhow.

3) Acquire enough money, lump sum or regular income, that you can focus on chasing shiny things for years without pay. Consider getting reincarnated into a well-to-do family, that helps.

4) The above is still true if you intend to join a company. Unless you want to join the kind of company where you are okay with HR drones keyword-buggering and credential-combing your resume and requiring 3 years of experience in technologies 2 years old, which is not really what the truly ambitious like to do, those years will be spent on getting to know excellent founders, and making the kind of stuff on your own that convinces them to let you join them. I.e. chasing shiny things without pay before you can join the right kind of company.

5) Be a programmer, because there are very few professions where you can just casually build things as you see fit. As a programmer you get away with not having access to anything but your brain, the net, and a laptop. If you are e.g. a sculptor and your dream is to build a 50m tall Darth Vader statue out of bronze, well, that is going to require some harder-to-acquire stuff as well. If you took all this building a bit too literally and you graduated in civil engineering, your chances of starting out on your own after graduating are nil, these kinds of startups don't exist, and you will probably work 10 years at the construction equivalent of Microsoft before you can try to start out on your own and finally do something interesting. So be a programmer. Don't like programming and computer technology so much? Still be a programmer, or at the very least figure out real hard how to graduate in something that 1) can be made with

Seeking writing advice: Tropes vs writing block?

I've started writing bits and pieces for S.I. again, but not nearly at the rate I was writing before my hiatus.

I'm beginning to wonder if I should cheat a bit, and deliberately leave some of the details I'm having trouble getting myself to write about vague, and explain it away with some memory problems of Bunny-the-narrator for that period. Goodness knows there are plenty of ways Bunny's brain has been fiddled with so far, so it's not without precedent; and if it gets me over the hump and into full-scale writing...

2ZeitPolizei
Would it maybe help, if you left some of the details vague at first, to get back into writing, and go back later to rewrite those parts?
0DataPacRat
That seems to be the default that I'm settling on. I'm jotting down the plot points I want to happen in such sections, marking them so I know that I have to go back to that, and working on whatever I /can/ get myself to work on in the meantime.
0[anonymous]
From the way things seem given your recent posts about struggling with getting words onto the page, I would suggest doing anything that actually gets you moving in that direction. If you are stuck on one particular bit, by all means skip it for now. Whether that means incorporating this into the narrative, or coming back later for clean-up, depends on the product itself (I haven't read the work you are talking about). A more general aside: I've found myself in a very similar position, finding it incredibly hard to put words on the page yet needing to do so more and more urgently. I've seen a few comments you made before about preparing optimal writing situations and planning for them - I did exactly the same and in retrospect it seems this was a bad strategy for me. Mainly because such preparations got me thinking more and more about providing an optimal situation for written productivity: in essence setting up small "writing retreats" now and again. This became a self-perpetuating loop of non-writing, because doing so provided perfect excuses for NOT writing at any other time. A friend who is a (now retired) writer suggested that instead, I work on writing despite distractions, rather than constraining my writing effort to those situations where all distractions are minimised. In alternating weeks I tried the different techniques (A,B,B,A, where A=my old approach of writing in optimal situations and B=explicit attempt to write in distracting environments I wouldn't consider suitable for "A"). It turned out that B>A both in minutes spent writing (+125%) and in wordcount (+160%). Quality of work under "B" might have been lower but I don't seem to have a block in editing and revising, only in first drafting.

I want to do a PhD in Artificial General Intelligence in Europe (not machine learning or neuroscience or anything with neural nets). Anyone know a place where I could do that? (Just thought I'd ask...)

IDSIA / University of Lugano in Switzerland is where e.g. Schmidhuber is. His research is quite neural network-focused, but also AGI-focused. Also Shane Legg (now at DeepMind, one of the hottest AGI-ish companies around) graduated from Lugano with a PhD thesis on machine superintelligence.

"AGI but not machine learning or neuroscience or anything with neural nets" sounds a little odd to me, since the things you listed under the "not" seem like the components you'll need to understand if you want to ever build an AGI. (Though maybe you meant that you don't want to do research focusing only on neuroscience or ML without an AGI component?)

3jsteinhardt
Just wondering why you don't want to do machine learning? Many ML labs have at least some people who care about AI, and you'll get to learn a lot of useful technical material.

A little while back, someone asked me 'Why don't you pray for goal X?' and I said that there were theological difficulties with that and since we were about to go into the cinema, it was hardly the place for a proper theological discussion.

But that got me thinking, if there weren't any theological problems with praying for things, would I do it? Well, maybe. The problem being that there's a whole host of deities, with many requiring different approaches.

For example, if I learnt that the God of the Old Testament was right, I would probably change my set of ...

Does anyone know about any programs for improving confidence in social situations and social skills that involve lots of practice (in real-world situations or in something scripted/roleplayed)? Reading through books on social skills (i.e. How to Win Friends and Influence People) seems to provide a lot of tips that would be useful to implement in real life, but they don't seem to stick without actually practicing them. The traditional advice to find a situation in your own life that you would already be involved in hasn't worked well for me because it is mis...

3[anonymous]
There are a number of "game"-related courses that take this approach. Most of these programs involve going out and continually approaching and interacting with people, with specific goals in mind.

There's the Connection Course (this one is probably the closest to what you're looking for, as he's reworked it to remove all the "gamey" stuff and just focused on social interactions): http://markmanson.net/connection-course

There's the Collection of Confidence: http://www.amazon.com/The-Collection-Of-Confidence-HYPNOTICA/product-reviews/B000NPXWT8

Stylelife Academy: http://web.stylelife.com/

Ars Amorata: http://www.zanperrion.com/arsamorata.php

and a whole bunch more.

Edit: in my area at least, there are also practice groups for Non-Violent Communication on meetup.com
3Bryan-san
The Rejection Game using Beeminder can be a good start for social skills development in general. If you're interested in a specific area of social interactions, then finding a partner or two in that area could help out. Toastmasters, PUA groups, book clubs, and improv groups fall into this category. Alternatively, obtaining a job in sales can take you far.
0William_S
My impression of Toastmasters is that it might be similar to what I'm looking for, but only covers public speaking.
2ChristianKl
Advice about picking in person training is location dependent. Without knowing what's available where you live it's impossible to give good recommendations.
0William_S
Recommendations for in person training around the Bay Area would be useful (as I'm likely to end up there).
3ChristianKl
California is a good place. A lot of personal development frameworks come from California. It's very likely that there are good things available in California that are not known outside of it. Ask locals at LW meetups for recommendations. There seem to be regular Authentic Relating/Circling events in Berkeley: https://www.facebook.com/ARCircling We had a workshop in that paradigm at our European LW Community Event and it was well liked. I also attended another workshop in that framework in Berlin. Describing the practice isn't easy, but its goal is about having deep conversations with other people that produce the feeling of having a relationship with them. I have spent multiple years in Toastmasters and wouldn't recommend it if your goal isn't being on stage. Toastmasters meetings usually have 20+ people in a room and only one person speaking at a time. That means relatively little speaking time per person. Toastmasters is also very structured. The ability to give a good 2-minute Table Topics speech for me didn't translate into the ability to tell a funny story in a corresponding way in a small talk context. Toastmasters has a nice and fun atmosphere, but it feels a bit artificial in a way that Circling isn't. Trying to cut the number of "Ahm"s in a speech by focusing on the "Ahm" instead of focusing on the underlying emotional layer is from my perspective suboptimal. Bryan-san also gave the recommendation of attending PUA groups. It's hard to really know the relevant outcomes. There are people who do have some success via that framework, but it also makes some people more awkward. If you do PUA cold approaching you might get feedback from another PUA, but you usually don't get honest feedback from the actual woman with whom you are interacting. Authentic Relating, on the other hand, provides a framework that isn't antagonistic.
0OrphanWilde
PUA success varies by region and local culture. In some urban areas, anecdotally, women have started judging men's PUA "game". I think it pattern-matches on a "correct" behavior, but is self-defeating; it pattern-matches on the idea that women, like men, want to have casual sex. The "correct" behavior is, indeed, being something of a jerk, but it is self-defeating because it assumes rudeness is the desired quality, rather than a signal of a desired quality: jerks aren't likely to pester you for follow-up dates, which is to say, they are actually interested in strictly casual sex. It's self-defeating because as soon as men who are interested in more meaningful relationships start utilizing the technique of being a jerk, being a jerk stops being a useful signal of -not- being interested in more meaningful relationships. (Being -very good- at being a jerk, on the other hand, probably -does- pattern-match pretty well with interest in strictly casual sex, hence the anecdotal accounts of women judging PUA "game".) The whole thing gets messier on account of individual differences. Some women want to be hit on, some don't, some want one approach, some want another, some are receptive to the idea of longer-term relationships, some aren't - in short, women are people, too. No single "framework" is going to accommodate everybody's desires, and those who push a monoculture ideal are being narrow-minded. And dating signaling is, frankly, terrible, and often abused, intentionally or unintentionally. (Women signaling desire for casual sex to get free drinks, men signaling desire for long-term relationships to get casual sex, for two of the common complaints.) Getting outside that, my personal practice is to strike up random conversations with strangers; small talk is the grease that gets conversation going. Treat small talk as a skill with a toolbox of techniques. Your toolbox should contain a list of standard questions for strangers; what do you do for a living, who are you ro
2ChristianKl
The problem is not only about the woman but about the man. Quite a few men who go into PUA never end up in a state where they are comfortable striking up random conversations with strangers. Recently I went to a local "get out of your comfort zone" meetup in Berlin, led by someone who authored a book on comfort zone expansion and has spent a decade in the personal development industry. Surprisingly, we didn't go out to start conversations with strangers. His main argument against going down that road was that people without previous experience often go through those exercises in a dissociated way instead of an associated way. PUA quite often leads to people trying to influence the woman instead of paying attention to their own emotions and dealing with those emotions in a constructive fashion. It's certainly possible to do toolbox smalltalk and do okay with it. Developing genuine curiosity about the other person and letting that curiosity guide your questions is both more fun and more likely to create a connection. I'm not advocating monoculture. I also don't think nobody should do PUA. It's just worth noting that PUA doesn't deliver for many people who buy into it.
1OrphanWilde
The toolbox gives you starting points; it's not meant to be the entirety of the conversation. It's relatively easy to maintain a conversation, harder to start one. Curiosity doesn't begin until you have something to be curious about in the first place. I agree that PUA doesn't give people what they're looking for; most of my comment was intended to explain why. (Short summary: it's about sex, not conversation.)
2ChristianKl
When standing at a bus stop, do you ask a stranger, "What do you do for a living?" To me that doesn't seem like a good conversation starter. "Do you know in how many minutes the bus will arrive?" can be a curiosity-based question that's socially acceptable to ask. If I'm standing next to a stranger and that question comes to mind, I notice that I have a question where I'm interested in the answer. I can either look at the bus timetable on my phone to figure out the answer, or I can ask the other person. There are many instances like that where you can choose the social way to deal with the situation. I think that even for people who think they want sex, PUA often doesn't deliver on its promise.
-4VoiceOfRa
The reason women who want casual sex are attracted to jerks isn't that jerks aren't likely to want follow-up dates; it's that if getting the father to help raise the kids is out of the question, you want the best possible sperm. Granted, today the woman is likely to use a condom or abort because she doesn't want children, but that's adaptation execution for you.
0OrphanWilde
Are you an evolutionary strategy? Do your preferences all reduce down to evolutionary strategies?
-2VoiceOfRa
My preferences are shaped by my genes (which were shaped by evolution), and my experiences as interpreted by the systems built by my genes.
1CAE_Jones
A subset of Speech Therapy (especially for Autism Spectrum) covers exactly this sort of thing. I rather doubt it's what you're looking for, even if it's an option, but it fits what you described almost perfectly. The major issues would be the tendency toward a more clinical setting, only being an hour or so a week, the limited pool of people to practice with, and establishing your existing skills.
0Strangeattractor
Sometimes career centres at universities or community colleges have workshops to practice job interviewing and networking. You could see if there's something like that near you.

Can we look at Orbán's Hungary as a real-life laboratory of whether NRx works in practice?

2knb
I recall reading somewhere (slatestarcodex, I think) that the neoreactionaries have three main strains: ethnocentric, techno-futurist/capitalist, and religious-authoritarian. In light of that, I wonder if Israel isn't a better example than Hungary. Israel is technologically advanced but also a strict ethnostate with some theocratic elements. Like Hungary, Israel is probably too democratic to really qualify.
-4VoiceOfRa
The problem with Israel is that the religious elements are based on a religion still optimized for exile rather than being a national religion. They still haven't rebuilt the temple, for crying out loud.
2Lumifer
Why do you think Orbán's Hungary is a good example of NRx ideas implemented?
-2polymathwannabe
Example and example.
7Lumifer
So why not Putin himself? Or the Belorussian guy? Or any of the Central Asian rulers? If the criterion is rejection of liberal democracy, why not China?
2polymathwannabe
Those countries were never very liberal to begin with, so their departure from Western values doesn't look like what the experiment needs. Hungary, on the other hand, has a solid history of resistance to totalitarianism that only in the past half decade has had to face the threat of dictatorship.
5Viliam
There is more to NRx than just giving up liberal values. For example, Hungary still has elections that this guy has to win, so I guess they would still classify the country as "demotist". When they make a revolution, abolish democracy, declare Orbán a hereditary king, and possibly when he hires Ernő Rubik as Chief Royal Scientist to solve all the country's problems, then we'll have a good example.
0[anonymous]
AFAIK NRx are quasi-libertarians in the Hoppean sense (or Pinochetian sense), who largely want to use political authoritarianism in the service of economic libertarianism. Orban is pretty much the opposite: an economic statist, on a nationalist basis. Socially they can be similar, but economically not. Orban is closer to US paleocons like Pat Buchanan, who are not full believers in free markets; they accept economic intervention, just not on a left-wing / egalitarian basis but on a nationalist-protectionist basis, e.g. not shipping jobs abroad. I admit this is a bit complicated, because economic libertarianism and illibertarianism mesh with different ideologies depending on which aspect of non-intervention they focus on. For example, those US right-wingers who focus primarily on low taxes and social spending are closer to Orban; those who want all kinds of spending low, not just social spending, are not so close; those who focus on free trade are far away from him; and those who focus on privatizing things are the farthest - the Eastern European right wing tends to be anti-privatization because privatization tends to lead to foreigners acquiring things, and that does not mesh well with their nationalism. It's a bit complicated. But I see the primary difference as this: Orban plays the man-of-the-people role, talks about a "plebeian" democracy, asks voters frequently about their opinions on issues, so he would be an NRx "demotist". He plays the role of the Little Guy against liberal elites, which is perhaps closer to Tea Party folks. In short, he is far more anti-liberal than anti-democratic; he plays more the role of a rural conservative democrat against aristocratic liberal elites, and his primary goal seems to be strengthening the national state against international liberal capitalism. He is very much the anti-Soros, and that is explicit (there are few people the Eastern European Right hates more than George Soros, both because of his liberal views and his capitalist exploits). European terminol
-2VoiceOfRa
I'm not quite an NRx, but from what I hear about him, I like Orban.
0[anonymous]
As long as you don't care much about economic libertarianism, privatizing all the things, etc., but only about social conservatism, you can be on the same page. Admittedly, the whole economic libertarianism thing is different in the center vs. the periphery of globalization. In the center, such as the US, where businesses are owned by people of those countries, anti-libertarianism usually means egalitarianism. In the periphery, where businesses are usually foreign-owned, anti-libertarianism usually means economic nationalism, protectionism. The latter is culturally far more palatable for culturally conservative people, but Rothbard types would still be disgusted by it. BTW, you see the same story on a far larger and more transparent scale in Russia. Classical liberalism / libertarianism is equated with Yeltsin, and that is equated with selling all the things to foreigners, and his memory is very much hated on the Russian Right. They may be down with the types of libertarianism that are mostly about tax cuts, but they really draw the line at letting foreigners get a lot of economic influence. (Not that Yeltsin was anywhere near being a principled libertarian - he just really liked selling things. I think the only principled libertarian east of Germany is Vaclav Klaus.)
0ChristianKl
Hungary is still a democracy with elections that the OECD can inspect. It's also still a member of the EU, which means it's subject to all sorts of legislation from Brussels and to action by the European Court of Human Rights. It seems Hungary has to pay billions to the churches due to a verdict of the European Court of Human Rights.
[-][anonymous]10

this was an unhelpful comment, removed and replaced by this comment

1[anonymous]
If you just want a basic "display information" website, go with WordPress. If you're looking to do a full web app, I'd recommend either Drupal or WordPress with the Toolset plugins.
0Strangeattractor
Wordpress is open source. That's a good thing, and important.

I've mostly been here for the sequences and interesting rationality discussion, I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.

I stumbled upon this Facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach to AI in a negative light compared to their "neural network paradigm".

These people seem confident dee... (read more)

4Manfred
There isn't really a "LW approach to AI," but there are some factors at work here. If there's one universal LW buzzword, it's "Bayesian methods," though that's not an AI design; one might call it a conceptual stance. There's also LW's focus on decision theory, which, while still not an AI design, is usually expressed as short, "model-dependent" algorithms. It would also be nice for a self-improving AI to have a human-understandable method of value learning, which leads to more focus diverted away from black-box methods. As to whether there's some tribal conflict to be worried about here, nah, probably not.
0hydkyll
I think this sums up the problem. If you want to build a safe AI you can't use neural nets because you have no clue what the system is actually doing.
4Kaj_Sotala
If we genuinely had no idea of what neural nets were doing, NN research wouldn't be getting anywhere. But that's obviously not the case. More to the point, there's promising-looking work going on at getting a better understanding of what various NNs actually represent. Deep learning networks might actually have relatively human-comprehensible features on some of their levels (see e.g. the first link). Furthermore it's not clear that any other human-level machine learning model would be any more comprehensible. Worst case, we have something like a billion variables in a million dimensions: good luck trying to understand how that works, regardless of whether it's a neural network or not.

Perhaps it would be beneficial to introduce life to Mars in the hope that it could eventually evolve into intelligent life in the event that Earth becomes sterilized. There are some lifeforms on Earth that could survive on Mars. The Outer Space Treaty would need to be amended to make this legal, though, as it currently prohibits placing life on Mars. That said, I find it doubtful that intelligent life would ever evolve from the microbes, given how extreme Mars's conditions are.

3Unknowns
If you want to establish intelligent life on Mars, the best way to do that is by establishing a human colony. Obviously this is unlikely to succeed but trying to evolve microbes into intelligent life is less likely by far.
0ChristianKl
The likelihood of success of establishing a human colony depends on the timeframe. If there's no major extinction event, I would be surprised if we don't have a human Mars colony in 1000 years. On the other hand, having a colony in the next 50 years is a lot less likely.

Can anyone help me understand the downvote blitz for my comments on http://lesswrong.com/lw/mdy/my_recent_thoughts_on_consciousness/ ?

I understand that I'm arguing for an unpopular set of views, but should that warrant some kind of punishment? Was I too strident? Grating? Illucid? How could I have gone about defending the same set of views without inspiring such an extreme backlash?

The downvotes wouldn't normally concern me too much, but I received so many that my karma for the last 30 days has dropped to 30% positive from 90%. I'd like to avoid this happening again when the same topic is under discussion.

1fubarobfusco
You note: "I did not really put forth any particularly new ideas here, this is just some of my thoughts and repetitions of what I have read and heard others say, so I'm not sure if this post adds any value." Many readers (myself included) are already familiar with these sources, and so the post comes across as unoriginal. It is basically you rephrasing and summarizing things that a lot of people have already read. In other words, it's probably not that people are downvoting to disagree, but because they don't see a response-journal reiterating well-known views as a good Main post. It's not "Go away, you are not smart enough to post here!" but "Yes, yes, we know these things; this particular post here is not news." The post has far too much "I think", "I realized", "it seems to me" language in it. It's your post; of course it is about what you think. In conversation those kind of phrases are used to soften the impact of a weird view strongly stated, but in writing they make it sound like the writer is excessively wrapped up in themselves. (On the other hand, if the important part is the sequence of your realizations, then present the evidence that convinced you, not just assertions that you had those realizations.) While different language communities have different standards for paragraph length, by the standards of current Web writing, your paragraphs are often way too long. To me, long block paragraphs come across as "kook sign" — that is, they lead me to think that the writer's thinking is disorganized.
0eternal_neophyte
I am not the OP of the thread I linked to. Most of the downvotes I received (in the comments of that post) have been reversed. Thanks for replying though.
0fubarobfusco
Ah, oops. Indeed, I thought you were the poster and were asking for an explanation of the downvotes to the post.

If someone on LW mentions taking part in seriously illegal activities (in all jurisdictions), am I morally obliged to contact the police/site admin? I don't think the person in question is going to hurt anyone directly.

Speaking of which, who is the site mod? Vladimir someone?

EDIT: I think I misunderstood and the situation isn't bad enough to need reporting to anyone. He was only worrying about whether he wanted to do certain things, rather than actually doing them.

[-][anonymous]100

NancyLebovitz is the newest moderator at present, and I believe the only really active one, at least in day-to-day operations. Viliam_Bur was previously in that role, but he backed off in January due to other time commitments.

There is a moderator list here

9Lumifer
I hope not (though your morality is your morality, of course). Bringing in the cops into an online discussion is very VERY rarely a proper thing to do, IMHO.
-3skeptical_lurker
This was a very serious matter - I would not have considered calling the police for most things.
9Richard_Kennaway
I didn't see the post I believe you're referring to before the author redacted it, but for me the line would be real danger to other people, which you say you don't think is the case. In any case, it would be best to go through the mods first. A pseudonymous post recounting deeds in (perhaps) unspecified places and times isn't something the police can work with. Also, summoning the police is not to be done lightly, for once summoned, no power can banish them whence they came.
4skeptical_lurker
I have thought about it, and contacting the police straight away would only be the right thing to do if there was some imminent danger. I probably wouldn't have mentioned it, except that it was the sort of thing which can pose an indirect threat or lead to behaviour which does hurt other people. Anyway, it appears I got the wrong impression, and he was only obsessing over the hypothetical possibility of doing things, rather than actually doing them. So this is one of the times when it's good that I didn't impulsively do the first thing that popped into my head and instead stopped to think about it.
0Elo
I saw the post; it was a mix-up.
0Dahlen
You may or may not be legally obligated (although this obligation is not realistically enforceable by law); as for morally obligated, it depends upon the nature of the act. There is imperfect overlap between things defended by law and by morality. If we're talking piracy or jaywalking or buying modafinil on the black market, you may be overexerting your civic powers here, and your conscience can relax. If the matter at hand involves violence, and you can expect to save some people through your report, then maybe it's better to get involved. For all matters in-between, both approaches may be valid in certain proportions, so apply common sense.
-2skeptical_lurker
It was a misunderstanding, but for the record, what I thought was going on was far worse than buying modafinil, or any other illicit substance, and sort of indirectly involves violence.
0ChristianKl
A site mod theoretically has IP addresses that allow him to pick the right country when reporting to the police. As a result, it makes sense to report to a mod. If you contact the police first and they then contact the site mod, things can get messier if the mod doesn't reply fast enough.
1John_Maxwell
I thought this comment was pretty good.
0Elo
Yep, someone else downvoted this. I agree with downvoting it because of the lack of description or information given with the random link.
1John_Maxwell
Seems to work fine for reddit...
3Elo
If I wanted to be on reddit I would be on reddit.
-2Xerographica
"Nobody made a greater mistake than he who did nothing because he could do only a little." - Edmund Burke
0Elo
If that's a related quote you should say so; if that's a meta comment about my comment you should also say so. Downvoted this: I could give you an equally pretty-sounding quote about walking blindly forward or repeating teachers' passwords, but I am too lazy to find a link. This is the Internet; don't be cryptic, be obvious, be helpful and be clear. How about this description: And the most important part of sharing a link:
-1Xerographica
That's fucking teamwork.