If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday and end on Sunday.

Open thread, Nov. 16 - Nov. 22, 2015

http://mindhacks.com/2015/11/16/no-more-type-iii-error-confusion/#comments

Use "The Boy Who Cried Wolf" as a mnemonic. His first error is type 1 (claiming a wolf as present when there wasn't one). His second error is type 2 (people don't notice an existing wolf).

gjm380

Nice.

To fight back against terrible terminology from the other side (i.e., producing rather than consuming) I suggest a commitment to refuse to say "Type I error" or "Type II error" and always say "false positive" or "false negative" instead.

2twanvl
I find "false positive" and "false negative" also a bit confusing, albeit less so than "type I" and "type II" errors. Perhaps because of a programming background, I usually interpret 'false' and 'negative' (and '0') as the same thing. So is a 'false positive' something that is false but is mistaken as positive, or something that is positive (true), but that is mistaken as false (negative)? In other words, does 'false' apply to the postiveness (it is actually negative, but classified as positive), to being classified as positive (it is actually positive, but classified as positive)? Perhaps we should call false positives "spurious" and false negatives "missed".
5gjm
Huh. That never occurred to me (even though I spend a lot of my days writing code too). In case you're expressing actual uncertainty rather than merely what your brain gets confused about, the answer is that a false positive is something that falsely looks positive. Perhaps the best way to put it is different, though: a false positive is a positive result of your test (so it actually is a positive) that doesn't match the underlying reality. Like a "false alarm".
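A minimal Python sketch of the convention gjm describes ("positive"/"negative" names the test's verdict; "false" means the verdict mismatches reality); the function and example values are illustrative:

```python
def classify_error(test_says_positive, actually_positive):
    """Name an outcome: "positive"/"negative" is what the *test* said;
    "false" means that verdict mismatches reality."""
    verdict = "positive" if test_says_positive else "negative"
    correct = test_says_positive == actually_positive
    return ("true " if correct else "false ") + verdict

# The Boy Who Cried Wolf:
print(classify_error(True, False))   # "false positive" -- type I (cried wolf, no wolf)
print(classify_error(False, True))   # "false negative" -- type II (wolf not noticed)
```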
0philh
Now that I know which is which, this will be very slightly harder for me than it used to be.
4D_Malik
Introspecting, the way I remember this is that 1 is a simple number, and type 1 errors are errors that you make by being stupid in a simple way, namely by being gullible. 2 is a more sophisticated number, and type 2 errors are ones you make by being too skeptical, which is a more sophisticated type of stupidity. I do most simple memorization (e.g. memorizing differentiation rules) with this strategy of "rationalizing why the answer makes sense". I think your method is probably better for most people, though.
4IlyaShpitser
Nice!
1MrMind
Ha, I wasn't even aware of this. Really nice, thanks.

LW displays notifications about replies and private messages in the same place, mixed together, looking the same. Note that top-level comments on your articles are also considered replies to you (this is the default behavior; you can turn it off, but it makes sense, so you will probably leave it on).

This has the disadvantage that when you post an article which receives about 20 comments and someone sends you a private message, it is very easy to miss the message, because in your inbox you just see 21 entries that look almost the same.

Suggestion: The easiest fix would probably be to change the appearance of private messages in your inbox. Make the difference obvious, so you can't miss it. For example, add a big icon above each private message.

4Elo
support this change; have no idea how easy it is to do.
2SanguineEmpiricist
yes please
0Tem42
We do have an inbox that sorts messages onto a separate page from replies to posts. I think the easiest change would be to simply make two separate icons for these two separate pages.
5Vaniver
I don't think this is correct. I'm only familiar with http://lesswrong.com/message/inbox/ , which mixes the two together. You may be thinking of http://lesswrong.com/message/sent/ , which shows all the private messages you've sent.
1Viliam
Thanks, I didn't know that. What's the URL? (Or how else can I get there?)
2Tem42
No, I'm sorry, I was thinking of the Outbox. You can see what you've sent, but not what you've received. Currently not useful, but at least it suggests that the coding might not be so hard to do.

The latest New Yorker has a lengthy article about Nick Bostrom and Superintelligence. It contains a good profile of Bostrom going back to his graduate school days, his interest in existential threats in general, and how that interest became more focused on the risk of AGI specifically. Many concepts frequently discussed at LW are mentioned, e.g. the Fermi paradox and the Great Filter, the concept of an intelligence explosion, uploading, cryonics, etc. Also discussed is the progress that Bostrom and others have made in getting the word out regarding the threat posed by AGI, as well as some opposing viewpoints. Various other AI researchers, entrepreneurs and pundits are mentioned as well (although neither EY nor LW is mentioned, unfortunately).

The article is aimed at a general audience and so it doesn't contain much that will be new to the typical LWer, but it is an interesting and well-done overview, IMO.

I was amused to see both modafinil and nicotine pop up. I guess I should feel proud?

0signal
You should. Just started playing with those gums.
7hg00
"There's no limit to the amount of good you can do if you don't care who gets the credit."
2IlyaShpitser
I don't think that's the right explanation in this case.
1Gunnar_Zarncke
I understood the comment differently: the OP wrote the post because the New Yorker article was spot on, not because the 'right' people got the credit.
0g_pepper
I did not mean to suggest that anyone had been slighted or denied any due credit when I stated that neither EY nor LW was mentioned. As I read the article, I had just been looking for mentions of EY or LW, and I figured that others might as well, so that is why I mentioned it. No article can cover everything. As Gunnar stated, I thought it was a great article!
2Soothsilver
I was surprised to see how health-conscious Bostrom is -- making his own food in order to maximize health, and not shaking hands. I thought that was limited to Kurzweil.
1SanguineEmpiricist
"Bostrom had little interest in the cocktail party. He shook a few hands, then headed for St. James’s Park, a public garden that extends from the gates of Buckingham Palace through central London. " - Article
0Soothsilver
And yet "His intensity is too untidily contained, evident in his harried gait on the streets outside his office (he does not drive), in his voracious consumption of audiobooks (played at two or three times the normal speed, to maximize efficiency), and his fastidious guarding against illnesses (he avoids handshakes and wipes down silverware beneath a tablecloth)."
3Tem42
If he is a rationalist, I would expect that he has a good grasp of when it is socially pragmatic to shake hands, and when he can operate under Crocker's rules and request not to shake hands. I also expect that he is smart enough to have an antibacterial wipe in his pocket to use after shaking hands (but not use it until he is out of sight in the gardens).
1Soothsilver
What do Crocker's rules have to do with this? Also, carrying an antibacterial wipe to use after shaking hands seems excessive. The chance that he'll suffer serious health problems from infection by handshake is so small that I doubt even the time taken for all these efforts is worth it.
1SanguineEmpiricist
You quoted him saying he did not shake hands, which to a lot of us seems a bit excessive. Tem42 suggests that carrying an antibacterial wipe for hygiene concerns is more plausible than a blanket ban on shaking hands, which to us is rather strange. If the cost/benefit is vs . It seems like the latter is more plausible, especially because the article also said he shook hands and left.
1Soothsilver
I think the most plausible explanation is that he does shake hands and does not use an antibacterial wipe; he merely mentioned to the reporter "I prefer not to shake hands to keep myself safe" and the reporter exaggerated.

As soon as I have two Karma points, I will post a 2000-word article on bias in most LW posts (which I would love to have your feedback on), with probably more to follow. However, I don't want to search for some more random rationality quotes to meet that requirement. Note to the administrators: either you are doing a fabulous job at preventing multiple accounts, or registration is currently not working (tried multiple devices, email addresses, and other measures).

5signal
Thanks. It is now online in the discussion section: "The Market for Lemons."
3[anonymous]
Same here. I was going to make an incredible article about wizards and the same thing happened plus all the negative karma I got from trolling. :(

I've been hearing about all this amazing stuff done with recurrent neural networks, convolutional neural networks, random forests, etc. The problem is that it feels like voodoo to me. "I've trained my program to generate convincing looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?" I'm not sure, it sounds like you don't understand what your program is doing. That's pretty much why I'm not studying machine learning right now. What do you think?

ML is search. If you have more parameters, you can do more, but the search problem is harder. Deep NN is a way to parallelize the search problem with # of grad students (by tweaks, etc.), also a general template to guide local-search-via-gradient (e.g. make it look for "interesting" features in the data).

I don't mean to be disparaging, btw. I think it is an important innovation to use human AND computer time intelligently to solve bigger problems.


In some sense it is voodoo (not very interpretable) but so what? Lots of other solutions to problems are, too. Do you really understand how your computer hardware or your OS work? So what if you don't?
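A toy illustration of the "ML is search" framing above: local search via gradient on a one-parameter least-squares problem (a sketch with made-up data, not anyone's actual method):

```python
# Toy local-search-via-gradient: fit one parameter w to minimize squared
# error. More parameters buy a richer model but a harder search problem.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data)

def grad(w):
    return sum(2 * (w * x - y) * x for x, y in data)

w = 0.0
for _ in range(100):
    w -= 0.01 * grad(w)          # one step downhill on the loss surface

print(w, loss(w))                # w ends up near 2, the least-squares fit
```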

6ZankerH
There is research in that direction, particularly in the field of visual object recognising convolutional networks. It is possible to interpret what a neural net is looking for. http://yosinski.com/deepvis
3cousin_it
I guess the difference is that an RNN might not be understandable even by the person who created and trained it.
2Lumifer
There is an interesting angle to this -- I think it maps to the difference between (traditional) statistics and data science. In traditional stats you are used to small, parsimonious models. In these small models each coefficient, each part of the model is separable in a way, it is meaningful and interpretable by itself. The big thing to avoid is overfitting. In data science (and/or ML) a lot of models are of the sprawling black-box kind where coefficients are not separable and make no sense outside of the context of the whole model. These models aren't traditionally parsimonious either. Also, because many usual metrics scale badly to large datasets, overfitting has to be managed differently.
-1bogus
Keep in mind that traditional stats also includes semi-parametric and non-parametric methods. These give you models which basically manage overfitting by making complexity scale with the amount of data, i.e. they're by no means "small" or "parsimonious" in the general case. And yes, they're more similar to the ML stuff but you still get a lot more guarantees. I get the impression that ML folks have to be way more careful about overfitting because their methods are not going to find the 'best' fit - they're heavily non-deterministic. This means that an overfitted model has basically no real chance of successfully extrapolating from the training set. This is a problem that traditional stats doesn't have - in that case, your model will still be optimal in some appropriate sense, no matter how low your measures of fit are.
1IlyaShpitser
I think I am giving up on correcting "google/wikipedia experts," it's just a waste of time, and a losing battle anyways. (I mean the GP here.)

That said, this does not make sense to me. Bias variance tradeoffs are fundamental everywhere.
2IlyaShpitser
I don't think any one person understands the Linux kernel anymore. It's just too big. Same with modern CPUs.
5cousin_it
An RNN is something that one person can create and then fail to understand. That's not like the Linux kernel at all.
3jacob_cannell
Correction: An RNN is something that a person working with a powerful general optimizer can create and then fail to understand. A human without the optimizer can create RNNs by hand - but only of the small and simple variety.
5solipsist
The Linux kernel and modern CPUs are piecewise-understandable, though, whereas neural networks are not.
7IlyaShpitser
At the individual vertex level, lots of neural networks are a logistic regression model or something similar -- I think I understand those pretty well. Similarly: "I think I understand 16-bit adders pretty well."
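To make the point concrete, a single sigmoid unit written out in numpy has exactly the functional form of logistic regression (the weights here are made-up, not learned):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.5, -1.2, 2.0])   # weights for one "vertex" (illustrative)
b = 0.1                          # bias
x = np.array([1.0, 0.0, 1.0])    # one input vector

p = sigmoid(w @ x + b)           # P(output = 1 | x): logistic regression
print(p)
```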

I did my PhD thesis on a machine learning problem. I initially used deep learning but after a while I became frustrated with how opaque it was so I switched to using a graphical model where I had explicitly defined the variables and their statistical relationships. My new model worked but it required several months of trying out different models and tweaking parameters, not to mention a whole lot of programming things from scratch. Deep learning is opaque but it has the advantage that you can get good results rapidly without thinking a lot about the problem. That's probably the main reason that it's used.

RNNs and CNNs are both pretty simple conceptually, and to me they fall into the class of "things I would have invented if I had been working on that problem," so I suspect that the original inventors knew what they were doing. (Random forests were not as intuitive to me, but then I saw a good explanation and realized what was going on, and again suspect that the inventor knew what they were doing.)

There is a lot of "we threw X at the problem, and maybe it worked?" throughout all of science, especially when it comes to ML (and statistics more broadly), because people don't really see why the algorithms work.

I remember once learning that someone had discretized a continuous variable so that they could fit a Hidden Markov Model to it. "Why not use a Kalman filter?" I asked, and got back "well, why not use A, B, or C?". At that point I realized that they didn't know that a Kalman filter is basically the continuous equivalent of a HMM (and thus obviously more appropriate, especially since they didn't have any strong reason to suspect non-Gaussianity), and so ended the conversation.
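To make the analogy concrete, here is a minimal 1-D Kalman filter: like an HMM's forward pass, it alternates a transition (predict) step with an observation (update) step, but over a continuous Gaussian state. The random-walk dynamics and noise variances are made up for illustration:

```python
# Predict with the transition model, then update with the observation --
# the continuous, Gaussian analogue of an HMM forward pass.
def kalman_step(mean, var, observation, process_var=1.0, obs_var=2.0):
    mean_pred = mean              # transition: x' = x + noise (random walk)
    var_pred = var + process_var
    gain = var_pred / (var_pred + obs_var)            # Kalman gain
    mean_new = mean_pred + gain * (observation - mean_pred)
    var_new = (1.0 - gain) * var_pred
    return mean_new, var_new

mean, var = 0.0, 10.0
for z in [1.2, 0.9, 1.4, 1.1]:    # noisy observations of a hidden state
    mean, var = kalman_step(mean, var, z)
print(mean, var)                  # posterior belief about the hidden state
```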

4cousin_it
Can you give a link to that explanation of random forests?
5Vaniver
Unfortunately I can't easily find a link to the presentation: it was a talk on Mondrian random forests by Yee Whye Teh back in 2014. I don't think it was necessarily anything special about the presentation, since I hadn't put much thought into them before then. The very short version is it would be nice if classifiers had fuzzy boundaries--if you look at the optimization underlying things like logistic regression, it turns out that if the underlying data is linearly separable it'll make the boundary as sharp as possible, and put it in a basically arbitrary spot. Random forests will, by averaging many weak classifiers, create one 'fuzzy' classifier that gets the probabilities mostly right in a computationally cheap fashion. (This comment is way more opaque than I'd like, but most of the ways I'd want to elaborate on it require a chalkboard.)
4IlyaShpitser
This is related to making a strong learner (really accurate) out of weak learners (barely better than majority). It is actually somewhat non-obvious that this should even be possible.

The famous example here is boosting, and in particular "AdaBoost." The reason boosting et al. work well is actually kind of interesting and I think still not entirely understood.

I didn't really get Vaniver's explanation below; there are margin methods that draw the line in a sensible way that have nothing to do with weak learners at all.
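A minimal demonstration of the boosting idea, assuming scikit-learn is available (dataset and parameters are illustrative; AdaBoostClassifier's default weak learner is a depth-1 tree):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_informative=10, random_state=0)

# A depth-1 "stump" alone is a weak learner: barely better than chance.
print(DecisionTreeClassifier(max_depth=1).fit(X, y).score(X, y))

# AdaBoost reweights the training data each round to focus on past
# mistakes, combining many stumps into a strong learner.
print(AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y).score(X, y))
```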
3V_V
Start with the base model, the decision tree. It's simple and provides representations that may be actually understandable, which is rare in ML, but it has a problem: it sucks. Well, not always, but for many tasks it sucks. Its main limitation is that it can't efficiently represent linear relations unless the underlying hyperplane is parallel to one of the input feature axes. And most practical tasks involve linear relations + a bit of non-linearity. Training a decision tree on these tasks tends to yield very large trees that overfit (essentially, you end up storing the training set in the tree, which then acts like a lookup table).

Fortunately, it was discovered that if you take a linear combination of the outputs of a sizeable-but-not-exceptionally-large number of appropriately trained decision trees, then you can get good performance on real-world tasks. In fact it turns out that the coefficients of the linear combination aren't terribly important; simple averaging will do.

So the issue is how you appropriately train these decision trees. You want these trees to be as independent from each other as possible, conditioned on the true relation. This means that ideally you would have to train each of them on a different training set sampled from the underlying true distribution, that is, you would have to have enough training data for each tree. But training data is expensive (ok, used to be expensive in the pre-big-data era) and we want to learn an effective model from as few data as possible. The second requirement is that each decision tree must not overfit. In the tradeoff between overfitting and underfitting, you prefer underfitting the individual models, since model averaging at the end can take care of it.

Random forests use two tricks to fulfill these requirements. The first one is bootstrap aggregating, aka "bagging": instead of gathering from the true distribution m training sets of n examples each for each of your m decision trees, you generate
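A hand-rolled sketch of the bagging trick described above, assuming scikit-learn for the trees (dataset, tree depth, and ensemble size are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(50):
    # Bootstrap: resample n examples *with replacement* from the one
    # training set, instead of gathering 50 fresh training sets.
    idx = rng.integers(0, len(X), size=len(X))
    # Keep each tree shallow -- prefer underfitting, since averaging
    # across trees recovers the lost flexibility.
    trees.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

# Simple averaging of the votes; the combination coefficients matter little.
votes = np.mean([t.predict(X) for t in trees], axis=0)
print(np.mean((votes > 0.5) == y))   # ensemble accuracy on the training set
```

(A full random forest adds one more trick on top of bagging: randomly restricting the features each split may use, which further decorrelates the trees.)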
1RaelwayScot
I find CNNs a lot less intuitive than RNNs. In which context was training many filters, and successively applying pooling and again filters to smaller versions of the output, an intuitive idea?
7Manfred
In the context of vision. Pooling is not strictly necessary but makes things go a bit faster - the real trick of CNNs is to lock the weights of different parts of the network together so that you go through the exact same process to recognize objects if they're moved around (rather than having different processes for recognition for different parts of the image).
0RaelwayScot
Ok, so the motivation is to learn templates to do correlation at each image location with. But where would you get the idea from to do the same with the correlation map again? That seems non-obvious to me. Or do you mean biological vision?
5Manfred
Nope, didn't mean biological vision. Not totally sure I understand your comment, so let me know if I'm rambling.

You can think of lower layers (the ones closer to the input pixels) as "smaller" or "more local," and higher layers as "bigger," or "more global," or "composed of nonlinear combinations of lower-level features." (EDIT: In fact, this restricted connectivity of neurons is an important insight of CNNs, compared to full NNs.)

So if you want to recognize horizontal lines, the lowest layer of a CNN might have a "short horizontal line" feature that is big when it sees a small, local horizontal line. And of course there is a copy of this feature for every place you could put it in the image, so you can think of its activation as a map of where there are short horizontal lines in your image.

But if you wanted to recognize longer horizontal lines, you'd need to combine several short-horizontal-line detectors together, with a specific spatial orientation (horizontal!). To do this you'd use a feature detector that looked at the map of where there were short horizontal lines, and found short horizontal lines of short horizontal lines, i.e. longer horizontal lines. And of course you'd need to have a copy of this higher-level feature detector for every place you could put it in the map of where there are short lines, so that if you moved the longer horizontal line around, a different copy of this feature detector would light up - the activation of these copies would form a map of where there were longer horizontal lines in your image.

If you think about the logistics of this, you'll find that I've been lying to you a little bit, and you might also see where pooling comes from. In order for "short horizontal lines of short horizontal lines" to actually correspond to longer horizontal lines, you need to zoom out in spatial dimensions as you go up layers, i.e. pooling or something similar. You can zoom out without pooling by connecting higher-level feature detectors
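A bare numpy sketch of the two ingredients Manfred describes: one shared (weight-tied) filter slid over every image location, then pooling to "zoom out" before the next layer. The image and filter are toy stand-ins:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide one shared filter over every location: the same weights are
    reused everywhere, so the feature is found wherever it moves."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Zoom out: keep the strongest response in each size-by-size block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.random.rand(8, 8)
line_filter = np.array([[1.0, 1.0, 1.0]])   # "short horizontal line" detector
fmap = convolve2d(image, line_filter)       # map of where short lines are
print(max_pool(fmap).shape)                 # coarser map for the next layer
```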
7Lumifer
Clarke's Third Law :-) Anyway, you gain understanding of complicated techniques by studying them and practicing them. You won't understand them unless you study them -- so I'm not sure why you are complaining about lack of understanding before even trying.
5Douglas_Knight
That is ambiguous. Do you mean the final output program or the ML program? Most ML programs seem pretty straight-forward to me (search, as Ilya said); the black magic is the choice of hyperparameters. How do people know how many layers they need? Also, I think time to learn is a bit opaque, but probably easy to measure. In particular, by mentioning both CNN and RNN, you imply that the C and R are mysterious, while they seem to me the most comprehensible part of the choices. But your further comments suggest that you mean the program generated by the ML algorithms. This isn't new. Genetic algorithms and neural nets have been producing incomprehensible results for decades. What has changed is that new learning algorithms have pushed neural nets further and judicious choice of hyperparameters have allowed them to exploit more data and more computer power, while genetic algorithms seem to have run out of steam. The bigger the network or algorithm that is the output, the more room for it to be incomprehensible.
2bogus
What this is really saying is: "Hey, convincing-looking C code can be modeled by a RNN, i.e. a state-transition version ("recurrent") of a complex non-linear model which is ultimately a generalization of logistic regression ("neural network")! And the model can be practically 'learned', i.e. fitted empirically, albeit with no optimality or accuracy guarantees of any kind. The variable use is a bit off, though. Isn't this cool/Does this tell us anything important?"
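A minimal numpy sketch of that state-transition step: the hidden state is what lets the model carry context (say, current indentation depth) from one character to the next. Sizes are illustrative, and the random weights stand in for what training would learn:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 96, 64                        # printable chars, state size
W_xh = rng.normal(0, 0.01, (hidden, vocab))   # input -> state (learned)
W_hh = rng.normal(0, 0.01, (hidden, hidden))  # state -> state (the "recurrent" part)
W_hy = rng.normal(0, 0.01, (vocab, hidden))   # state -> next-char scores

def rnn_step(h, x_onehot):
    # One transition: the new state mixes the current character with the
    # old state, so context like indentation depth can persist.
    h = np.tanh(W_xh @ x_onehot + W_hh @ h)
    scores = W_hy @ h
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax over next char
    return h, probs

h = np.zeros(hidden)
x = np.zeros(vocab); x[ord('{') - 32] = 1.0   # feed in one character, '{'
h, probs = rnn_step(h, x)
print(probs.argmax() + 32)                    # model's guess at the next char code
```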
2solipsist
Is it for reasons similar to the Strawman Chomsky view in this essay by Peter Norvig?
0[anonymous]
Yeah. Maybe Norvig is right and it's much easier to implement Google Translate with what I call "voodoo" than without it. That's a good point, I need to think some more.
2solipsist
-- Letter from James Clerk Maxwell to Michael Faraday, in the setup of a Steam Punk universe I just now invented
1solipsist
Here's how I read your question:

1. Many machine learning techniques work, but in ways we don't really understand.
2. If (1), I shouldn't study machine learning.

I agree with (1). Could you explain (2)? Is it that you would want to use neural networks etc. to gain insight about other concrete problems, and question their usefulness as a tool in that regard? Is it that you would not like to use a magical black box as part of a production system?

EDIT: I'm using "machine learning" here to mean the sort of fuzzy black-box techniques that don't have easy interpretations, not techniques like logistic regression where it is clearer what they do.
2Daniel_Burfoot
I agree that this is a huge problem, but RNNs and CNNs aren't the whole of ML (random forests are a different category of algorithm). You should study the ML that has the prettiest math. Try VC theory, Pearl's work on graphical models, AIT, and MaxEnt as developed by Jaynes and applied by della Pietra to statistical machine translation. Hinton's early work on topics like Boltzmann machines and Wake-Sleep algorithm is also quite "deep".
1cousin_it
Yeah, I suppose our instincts agree, because I've already studied all these things except the last two :-)
0V_V
Have fun with generative models such as variational Bayesian neural networks, generative adversarial networks, applications of Fokker–Planck/Langevin/Hamiltonian dynamics to ML and NNs in particular, and so on. There are certainly lots of open problems for the mathematically inclined which are much more interesting than "Look ma, my neural networks made psychedelic artwork and C-looking code with more or less matched parentheses". For instance, this paper provides pointers to some of these methods and describes a class of failure modes that are still difficult to address.
1Dagon
There are some pretty amazing actually useful applications for larger and larger feasible ML spaces. Everyone studying CS or seriously undertaking any computer engineering should at least learn the fundamentals (I'd recommend the Coursera ML class). And most should not spend a huge fraction of their study time on it unless it really catches your fancy. But rather than saying "that's why I'm not studying ML right now", I'd like to hear the X in "that's why I'm focusing on X over ML right now".
0V_V
The trippy pictures and the vaguely C-looking code are just cool stunts, not serious experiments. People may be tempted to fall for the hype; sometimes a reality check is helpful. This said, neural networks really do well on difficult tasks such as visual object recognition and machine translation, indeed for reasons that are not fully understood. Sounds like a good reason to study the field in order to understand why they can do what they do, and why they can't do what they can't do, doesn't it?
0SanguineEmpiricist
Might want to take a look at the library Google just open-sourced: http://tensorflow.org/
[anonymous]100

http://boingboing.net/2015/11/16/our-generation-ships-will-sink.html

Kim Stanley Robinson, author of the new scifi novel Aurora and, back in the day, the Mars trilogy, on how the notion of interstellar colonization and terraforming is really fantasy, how we shouldn't let it color our perceptions of the actual reality we have, and on the notion of diminishing returns on technology.

He doesn't condemn the genre but tries to provide a reality check for those who take their science fiction literally.

7EGI
Um, no, we cannot colonise the stars with current tech. What a surprise! We cannot even colonise Mars, Antarctica or the ocean floor. Of course you need to solve bottom-up manufacturing (nanotech or some functional equivalent) first, making you independent from ecosystem services, agricultural food production, long supply chains and the like. This also vastly reduces radiation problems and probably solves ageing. Then you have a fair chance. So yes, if we wreck Earth the stars are not plan B; we need to get our shit together first. Whether at this point there would still be a reason to send canned monkeys is a completely different question.
5WalterL
I've never thought colonizing worlds outside of the solar system with human beings was reasonable. If we are somehow digitized, and continue to exist as computer programs, then sure.
2Stingray
Are there any science fiction novels that take this approach?
6NancyLebovitz
Charles Stross's Saturn's Children and Neptune's Brood have robots with minds based on humans as humanity's successors. Donald Moffitt's Genesis Quest and Second Genesis have specs for humans sent out by radio and recreated by aliens. James Hogan's Voyage from Yesteryear has a probe that has humans recreated on another planet and raised by robots.
3philh
The characters in Greg Egan's Diaspora are mostly sentient software, who send out several probes containing copies of themselves.
1DanielLC
Alternately, learn to upload people. Which is still probably going to require nanotech. This way, you're not dependent on ecosystems because you don't need anything organic. You can also modify computers to be resistant to radiation more easily than you can people. If we can't thrive on a wrecked Earth, the stars aren't for us.
3passive_fist
The thing that is somewhat frustrating to me is that I've been saying this for years. In our current form, it is quite pointless to attempt interstellar colonization. But once we start uploading, it becomes straightforward, even easy.
3Daniel_Burfoot
It's strange that he doesn't talk about radical life extension. To me, the game plan is pretty clear:

1. Discover life extension technology to enable humans to live for one million years.
2. Colonize the galaxy.
3. ???
4. Profit.

The great advantage of Robin Hanson's posts is that you can never tell when he's trolling :-D

Sample:

...maybe low status men avoiding women via male-oriented video games isn’t such a bad thing?

-1polymathwannabe
I'm getting tired of the Overcoming Bias blog in general. It feels like for Hanson everything is translatable into status terminology.

Is he wrong though? Sometimes I feel I'm getting tired of humanity, because it makes everything about status.

6IlyaShpitser
Outside view: scientists often think their models apply to everything. Hanson is very insightful, but not immune to this, I think.
1ChristianKl
I think Hanson considers it his role to try to argue that his models fit everywhere and make the best possible case that the models apply. I think you would sometimes get different answers from him if you would bet with him.
4NancyLebovitz
Not everything people do is about status, or we couldn't survive. I don't have a handle on how status and useful actions interact with each other. If I had some idea of how to approach the subject (and I do think it's important), maybe I'd have an article for Main.
8Viliam
I agree. Yet almost every human interaction has this... uhm... parallel communication channel where status is communicated and transferred. If you ignore it for a while, unless you make a big blunder, nothing serious happens. But in the long term the changes accumulate, and at some moment it will bite you. (Happened to me a few times; then I started paying more attention. Probably still less attention than would be optimal.) Also, some people care about status less (this probably correlates with the autistic spectrum), but some people care more. Sometimes you have to interact with the latter, and the result of the interaction may depend on your status. I prefer environments where I don't have to care about status fights, but they are merely "bubbles" in the larger social context.
3entirelyuseless
Exactly. And I really like Hanson's blog, even though he's sometimes wrong, because he's very often right, and because even when he isn't, he says what he thinks no matter how weird it sounds.
2hg00
It is a bit unfortunate though that talking about status can turn what would have been a productive fact-based discussion in to a status competition.
2polymathwannabe
Well, you know the old saying: if your only tool is game theory, everything will look like signaling.
0MrMind
Well, as social animals, status evaluation is deeply embedded in our biological firmware. I suppose it's only because our psychological unity of consciousness is so far removed from the basic process of the brain that we can find status irritating.

I found out about Omnilibrium a couple months ago, and I was thinking of joining in eventually. I was also thinking of telling some friends of mine about it who might want to get in on it even more than I do. However, I've been thinking that if I told lots of people, or they themselves told lots of people, then suddenly Omnilibrium might get flooded with dozens of new users at once. I don't know how big that is compared to the whole community, but I was thinking Omnilibrium would be averse to growing too big, as well-kept gardens die by pacifism and al...

7Douglas_Knight
The whole point of the site is to do automated moderation and curation. At the moment, it is so small that it is serving no purpose better than a human dictator would. The whole point is that algorithms can scale. Maybe the algorithms aren't yet ready for prime time and maybe it's better if it grows slowly so that they have time to understand how to modify the algorithms. In particular, I believe that it currently grades users on a single political axis, while with more users it would probably be better to have a more complicated clustering scheme. But you probably won't cause it to grow rapidly, anyhow.

Recommended: a conversation between Tyler Cowen and Cliff Asness about financial markets. Especially recommended for people who insist that markets are fully efficient.

Samples:

A momentum investing strategy is the rather insane proposition that you can buy a portfolio of what’s been going up for the last 6 to 12 months, sell a portfolio of what’s been going down for the last 6 to 12 months, and you beat the market. Unfortunately for sanity, that seems to be true.

and

One thing I should really be careful about. I throw out the word “works.” I say “This s

...
-1ChristianKl
Maybe because everybody thinks that you try to buy a stock when it's low and sell it when it's high?

Disinformation Review, a weekly publication which collects examples of Russian disinformation attacks.

The main aim of this product is to raise awareness about the Russian disinformation campaign. And the way to achieve this goal is by providing the experts in this field (journalists, academics, officials, politicians, and anyone interested in disinformation) with some real-time data about the number of disinformation attacks, the number of countries targeted, the latest disinformation trends in different countries, the daily basis of this campaign,

...
3Viliam
This is extremely important. Truths are entangled, and if you once tell a lie, the truth is ever after your enemy. In politics, it is often an advantage to sell a specific lie. Sometimes the most efficient way to do that repeatedly is to allocate a huge budget for "lowering the sanity waterline". (Here is an example of what it looks like when someone uses a political crisis in your country to launch an insanity attack.)
2knb
How do you know this isn't a disinformation attack against Russia?

How are you all doing today? I'm having a pretty good start to my day (it's 11:42 am) here :P

I have found Kruschke's Bayesian data analysis book and Gelman's text to be pretty good companions to each other, and I'm glad I bought both. Personally, I also found that building a physical personal library was much better for my personal development than probably any other choice I made throughout the last year and a half. Libraries are definitely antifragile.

Also http://www.amazon.com/gp/offer-listing/0471257095/ref=dp_olp_all_mbc?ie=UTF8&condition=all Feller vol 2 paperback is 8 dollars used.

[anonymous]30

Is anybody interested in Moscow postrationality meetup?

I'm curious about how others here process study results, specifically in psychology and the social sciences.

The (p < 0.05) threshold for statistical significance is, of course, completely arbitrary. So when I get to the end of a paper and the result that came in at, for example, (p < 0.1) is described as "a non-significant trend favoring A over B," part of me wants to just go ahead and update just a little bit, treating it as weak evidence, but I obviously don't want to do even that if there isn't a real effect and the evidence is unrelia...

7gjm
I don't spend enough of my time reading the results of studies that you should necessarily pay much attention to what I think. But: you want to know what information it gives you that the study found (say) a trend with p=0.1, given that the authors may have been looking for such a trend and (deliberately or not) data-mining/p-hacking, and that publication filters out most studies that don't find interesting results. So here's a crude heuristic:

* There's assorted evidence suggesting that (in softish-science fields like psychology) somewhere on the order of half of published results hold up on closer inspection. Maybe it's really 25%, maybe 75%, but that's the order of magnitude.
* How likely is a typical study result ahead of time? Maybe p=1/4 might be typical.
* In that case, getting a result significant at p=0.05 should be giving you about 4.5 bits of evidence but is actually giving you more like 1 bit.
* So just discount every result you see in such a study by 3 bits or so. Crudely, multiply all the p-values by 10.

You might (might!) want to apply a bit less discounting in cases where the result doesn't seem like one the researchers would have been expecting or wanting, and/or doesn't substantially enhance the publishability of the paper, because such results are less likely to be produced by the usual biases. E.g., if that p=0.1 trend is an incidental thing they happen to have found while looking at something else, you maybe don't need to treat it as zero evidence.

This is likely to leave you with lots of little updates. How do you handle that given your limited human brain? What I do is to file them away as "there's some reason to suspect that X might be true" and otherwise ignore it until other evidence comes along. At some point there may be enough evidence that it's worth looking properly, so then go back and find the individual bits of evidence and make an explicit attempt to combine them. Until then, you don't have enough evidence to affect your beh
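gjm's arithmetic, made explicit in Python (treating -log2(p) as a crude measure of the bits of evidence a result carries; a sketch of the heuristic, not a rigorous likelihood-ratio calculation):

```python
from math import log2

def naive_bits(p):
    # Evidence if the p-value were taken at face value.
    return -log2(p)

def discounted_bits(p):
    # "Multiply all the p-values by 10" = subtract log2(10) ~ 3.3 bits,
    # to allow for p-hacking and publication filtering.
    return max(0.0, -log2(min(p * 10, 1.0)))

for p in [0.05, 0.1, 0.01]:
    print(p, round(naive_bits(p), 1), round(discounted_bits(p), 1))
# 0.05 -> ~4.3 bits at face value, ~1 bit after the discount
# 0.1  -> ~3.3 bits at face value, ~0 bits: the "non-significant trend"
# 0.01 -> ~6.6 bits at face value, ~3.3 bits
```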
4TylerJay
Thank you! This is exactly what I was looking for. Thinking in terms of bits of information is still not quite intuitive to me, but it seems the right way to go. I've been away from LW for quite a while and I forgot how nice it is to get answers like this to questions.

I've never understood the appeal of uploading.

I've seen someone talk just once about an idea which I strongly doubt is mainstream: that there's this question about which hardware "you" will "wake up" in. Surely not. Both would be conscious, right?

If I upload myself, there are two of me.

But this doesn't make me feel like I don't mind dying. What do I care if the world will continue with another of me? I want to live. It's not that I want someone who is me to keep existing, I want to keep living myself.

Am I confused about why people think of this as a life extension possibility?

6knb
I want both, personally. If my organic body was going to die but I could create an upload version of myself I definitely would. I would take solace in the fact that some version of me was going to continue on.
4Tem42
There are a number of different reasons different people give as to why uploading is good.

For example, I do see making copies of myself as a good and positive goal. If there are two of me, all other things being equal, I am twice as well off -- regardless of whether or not I have any interaction with myselves. I am a really good thing and there should be more of me.

Some people, on the other hand, either subconsciously assume or actively desire a destructive upload -- they have one copy of themselves, the software copy, and that's all they want. The meat body is either trash to be disposed of or simply not considered.

Closely related, some people conceive of a unitary selfhood as an inherently valuable thing, but also want access to a meat body in some form. In this case, duplication/destruction is a problem to be solved -- the meat body might be disposed of, might be disposed of but DNA kept for potential reanimation, might be kept in cold storage... If we go by published science fiction, this seems to be the most common model. This is an interesting case in which the meat body (and perhaps the brain specifically) is often seen as a good and desirable thing, and in some cases the point of uploading is only that it is a useful way of ensuring immortality (John Varley style transhumanism).

With so many mental models to choose from, it is not surprising that anyone who does not want to think about a lonely meatbody wasting away on a dying Earth just doesn't bother to consider the problem. It's an easy issue to ignore, when most people are still doubtful that uploading will be a possibility in their lifetime.

However, I think in most cases, people who think about uploading see it as "better than dying", while at the same time acknowledging your concern that really, someone called you is dying. Whether they see this as a personal death (as you do) or statistical death ("half of me's just died!") probably has no more ontological fact behind it than whether or not yo
4Bound_up
But it's precisely this reference to "my meatbody" and "my computer body" or whatever that confuses me. When you upload, a new consciousness is created, right? You don't have two bodies, you just have a super-doppelganger. He can suffer while I don't, and vice versa. And he can die, and I'll go on living. And I can still die just as much as before, while the other goes on living. I don't understand what about this situation would make me okay with dying. So I could understand it as valuable to someone for other reasons, but I don't understand its presentation as a life extension technology.
2wizard
My understanding is that LWers do not believe in a permanent consciousness.

* A teleporter makes a clone of you with identical brain patterns: did it get a new consciousness, how do you tell your consciousness didn't go to the clone, where does the consciousness lie, is it real, etc.
* It's not real, therefore the clone is literally you. Either that or we're dying every second.
1Tem42
I understand what you are saying, and I think that most people would agree with your analysis (at least, once it is explained to them). But I also think that it is not entirely coherent.

For example, imagine that we had the technology to replace neurons with nanocircuits. We inject you with nanobots and slowly, over the course of years, each of your brain cells is replaced with an electronic equivalent. This happens so slowly that you do not even notice -- consciousness is maintained unbroken. Then, one at a time, the circuits and feedback loops are optimized; this you do notice, as you get a better memory and you can think faster and more clearly; throughout this, however, your consciousness is maintained unbroken. Then your memory is transcribed onto a more efficient storage medium (still connected to your brain, and with no downtime). You can see where this is going. There is no point where it is clear that one you ceases and another begins, but at the end of the process you are a 'computer body'. Moreover, while I set this up to happen over years, there's no obvious reason that you couldn't speed the example up to take seconds.

Wizard has given another example; most of us accept Star Trek style transporters as a perfectly reasonable concept (albeit maybe impossible in practice), but when you look at them closely they present exactly the sort of moral/ontological dilemma you are worried about. This suggests that we do not fully grok even our own concept of personal identity.

One solution is to conclude that, after much thought, if you cannot define a consistent concept of persistence of personal identity over time, perhaps this is because it is not an intellectual concept, but a lizard-brain panic caused by the mention of death. In my mind this is exactly the same sort of debate people have over free will. The concept makes no real sense as an ontological concept, but it is one so deeply ingrained in our culture that it takes a lot of thought to accept that.
1Bound_up
So if uploading was followed by guillotining the "meatbody," would you sign up? I have no problem with the brain just being one kind of hardware you can run a consciousness on. I have no problem with transporting the mind from one hardware to another, instantaneously, if you can do it in between the neural impulses. But it seems like people mean you get scanned, a second, fully "real," person comes into existence, and this is supposed to extend your life. Are we to believe that the new consciousness would be fine with being killed, just because you would still be around afterwards? Would their life be extended in you even if they were deleted after being created? Are they going to stick around feeling and experiencing life because you exist? My confusion is that these seem like obvious points. Why are people even taking this seriously, why is it on the list? I can fully understand why the rest of us might like to upload the great people of the world, or maybe everybody if we value having them around. But I don't think this should make them feel indifferent to their deaths, because it's not extending anyone's life. I put this in the open thread because I assumed I was just ignorant of some key part of the process. If this is really it, maybe these points should be their own post and we can kick uploading off the life extension possibility list.
0Tem42
I would not sign up for a destructive upload unless I was about to die. But if I was convinced that I was about to die, then I absolutely would.

I don't think that you are missing anything, really. If I uploaded the average transhumanist, and then asked the meatbody (with mind intact) what I should do with the meatbody, they'd say either to go away and leave them alone or to upload them a few more times, please. If I asked them if they were happy to have a copy uploaded, they would say yes. If I asked them if they were disappointed that they were the meatbody version of themselves, they'd say yes. If I asked if the meatbody would now like an immortality treatment, they would say yes. If I asked the uploaded copy if they wanted the meatbody to get the immortality treatment, they would say yes.... I think.

I think that uploading is on the list primarily because there is a lot of skepticism that the original human brain can last much more than ~150 years. Whether or not this skepticism is justified is still an open question. Uploading may also get a spot on the list because if you can accept a destructive upload, then your surviving self does get (at least theoretically) a much much better life than is likely to be possible on meatEarth.
1entirelyuseless
If you accept this solution, however, you might also say that neither uploading nor life extension technology in general is actually necessary, because many other things, such as having children, are just as good objectively, even if your lizard-brain panic caused by the mention of death doesn't agree.
0Tem42
I like children and want children that are as cool as I am. But no child of mine has a statistically significant chance of being me. "Just as good objectively" misses the point on two counts:

1. Lots of things are as good as other things. But just because tiramisu is just as good as chocolate mousse, this does not mean that it is okay to get rid of chocolate mousse. What might make it okay to get rid of chocolate mousse is if you had another dish that tasted exactly like chocolate mousse, to the point that the only way you could tell which is which was by looking at the dish it was in.
2. This is not a question of objectivity -- this is a question of managing your own subjective feelings. I may well find that I am best off if I keep my highly subjective view that I am one of the most important people in my world, but also be better off if I reject my subjective view that meatbody death is the same as death of me.

Etid: tpos.
1entirelyuseless
The point is that "so and so is me" is never an objective fact at all. So if the child has no chance of being you, neither does the upload. If you are saying that you can identify with the upload, that is not in any objective sense different from identifying with your child, or identifying with some random future human and calling that a reincarnated version of yourself. And I don't object to any of that; I think it may well be true that it is objectively just as good to have a child and then to die, as to continue to live yourself, or to upload yourself and die bodily. As you say, the real issue is managing your feelings, and it is just a question of what works for you. There is no reason to argue that having children shouldn't be a reasonable strategy for other people, even if it is not for you.
1Tem42
Granted, and particularly true, I'd like to think, for rationalists. It is reasonable to argue that any social/practical aspect of yourself also exists in others, and that the most rational thing to do is to a) confirm that this is an objectively good thing and b) work to spread it throughout the population. This is a good reason to view works of art, scientific work, and children as valid forms of immortality. This is particularly useful to focus on if you expect to die before immortality breakthroughs happen, but as a general outlook on life it might be more socially (and economically) productive than any other. As some authors have pointed out, immortality of the individual might equal the stagnation of society. Accepting death of the self might be the best way forward for society, but it is a hard goal to achieve.

Finally, someone with a clue about biology tells it like it is about brain uploading

http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/

In reading this, I suggest being on guard against your own impulse to find excuses to dismiss the arguments presented, because they call into question some beliefs that seem to be deeply held by many in this community.

3username2
If they never studied those things, they would never figure out the answers to those objections. If they already knew about all these things, new studies wouldn't be needed. What else is there to study if not things we don't understand?
[anonymous]120

The Human Brain Project is largely not study; it's a huge pointless database and 'simulation' (without knowing what you are simulating) project for its own sake. Which is why so many scientists hate it: for its pointlessness, and for taking research money that could actually be productive elsewhere rather than funding buzzword salad.

3bokov
I agree. My reason for posting the link here is as a reality check -- LW seems to be full of people firmly convinced that brain-uploading is the only viable path to preserving consciousness, as if the implementation "details" were an almost-solved problem.
3Baughn
Ah, no. I do agree that uploading is probably the best path, but B doesn't follow from A. Just because I think it's the best option, doesn't mean I think it'll be easy.
1ChristianKl
The Human Brain Project exists because "we want to simulate the human brain" is a project that can be sold to politicians. More sensible projects, such as having money for replication of already existing research, aren't as easily sold.

What do people think of the “When should I post in Discussion, and when should I post in Main?” section in the FAQ?

I find myself looking less and less in Main because I don't see much content in there besides the meetup posts. I have a suggestion which might improve this: update the FAQ so that it encourages reposting highly voted Discussion content into Main. This would have a couple of benefits:

  • It would allow the potential main articles to go through a process of vetting. It would be suggested that only highly voted (15 karma or more, ma
...
[anonymous]150

I think we should get rid of "main" and "promoted" .

Right now there's four tiers: open thread, discussion, main, and main promoted.

At least once a week I see a comment that says "this should be in main," "this shouldn't be in main," "this should be in the open thread," or "this shouldn't be in the open thread, it should be its own post."

I think the two tier system of open thread/discussion would suffice, and the upvote downvote mechanism could take care of the rest.

Right now there's four tiers: open thread, discussion, main, and main promoted.

And the "main" tier is actually worse than the "discussion" tier. :(

So I'd recommend removing only the dysfunctional part, and have: open thread, discussion, discussion promoted.

3hg00
As far as I can tell, "promoted" is meaningful because (a) there's a promoted RSS feed and (b) it goes to the LW twitter feed. Posts get "promoted" by people who have admin powers, but most of the users with admin powers left a while ago. I think I would kill "promoted" and maybe make it so that if your discussion post gets at least +4 or something, it goes to the old promoted RSS feed and the twitter feed.
2ScottL
Do you think that “main” is a bad idea or that we should get rid of “main” because it hasn’t had much content for a while? I personally like the concept of “main” because from a site mechanics point of view with its (10x) karma it indicates that less wrong promotes and prioritizes longer, multi-post and referenced material, which is the type of material I am more interested in.
3[anonymous]
I like the concept of "main" for exactly the same reasons. However, it seems like most people who would post longer, more-referenced material are no longer contributing here. Indeed, even detailed discussion posts are now rare; most content now seems to be in open threads. This dwindling content can be seen most clearly in the "Top Contributors, 30 Days" display. At the time I write this there are only seven posters with > 100 karma in the past 30 days, and it only takes 58 to appear on the list of 15. Perhaps the question should not be whether the content of LW should be reorganised, but whether LW is fulfilling its desired purpose any longer. As nearly all the core people who worked the hardest to use this site to promote rationality are no longer contributing here, I wonder if this goal is still being achieved by LW itself. Is it still worth reading? Still worth commenting here?
2signal
LW does seem dying and mainly useful for its old content. Any suggestions for a LW 2.0?
hg00140

Yvain, #2 in all-time LW karma, has his own blog which is pretty great. The community has basically moved there and actually grown substantially... Yvain's posts regularly get over 1000 comments. (There's also Eliezer Yudkowsky's facebook feed and the tumblr community.) Turns out online communities are hard, and without a dedicated community leader to tweak site mechanics and provide direction, you are best off just taking a single top contributor and telling them to write whatever they want. Most subreddits fail through Eternal September; Less Wrong is the only community I know of that managed to fail from the opposite effect of setting an excessively high bar for itself. Good online communities are an unsolved and nontrivial problem (but probably worth solving since the internet is where discussions are happening nowadays--a good solution could be great for our collective sanity waterline).

I haven't visited Hacker News for a while, but it seemed like the leadership there was determined to create a quality community by whatever means possible, including solving Eternal September without oversolving it. I'll bet there is a lot to learn from them.

9Viliam
Writing high-quality content is one problem; selecting high-quality content is another. This is the advantage of one-person blogs, where if the author consistently writes high-quality content, both problems are solved at the same time.

The role of author is difficult and requires some level of talent, but it can also be emotionally rewarding. The author gets fans, maybe even money: from context advertising, asking for donations, selling their own product or services.

The role of censor (the person who filters what other people wrote) is emotionally punishing. Whatever you do, some people will hate you. If you remove an article, the author of the article, plus everyone who liked the article, will hate you. If you don't remove an article, everyone who disliked the article will hate you. There are no exact rules; some cases are obvious, but some cases are borderline and require your personal choice; and however you choose, people who would choose otherwise will hate you. People will want mutually conflicting things: some of them prefer higher quality, some of them prefer more content, and both of them will suspect that if you would do your job right, the website would have content both excellent and numerous. It is very difficult for the censor to learn from feedback, because the feedback will be negative either way, thus it does not work as evidence for doing the job correctly or not. The author writes when he or she wishes. The censor works 24/7. Etc.

Give me a perfect (x-rational, unbiased, and tireless) censor, and we can have a great rationalist website. Here is how -- in version 1.0, the censor would create a subreddit. Then he would look at a few rationalist blogs (and facebook pages, and tumblr pages, etc.), and whatever passes his filter, he would post in the subreddit. Also, anyone would be allowed to post/link things on the subreddit, and the censor would delete them if they are not good enough. Also, the censor would delete comments, and possibly ban
0hg00
There might be clever ways to distribute the job of censor, e.g. have an initial cadre of trusted users and ban any newcomer that gets voted down too much by your trusted users. Someone gets added to the trusted users if the existing trusted users vote them up sufficiently. Or something like that. But I expect you would need someone to experiment with the site full time for a while (years?) before the mechanics could be sufficiently well worked out.
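A minimal sketch of that trust mechanism, in Python; the class name, thresholds, and voting rule here are hypothetical, chosen only to make the idea concrete, not a description of any real system:

```python
# Toy model of distributed moderation: a founding cadre of trusted users
# votes on newcomers; enough net downvotes bans a newcomer, enough net
# upvotes promotes them to trusted. Thresholds are illustrative guesses.

BAN_THRESHOLD = -10
PROMOTE_THRESHOLD = 20

class Community:
    def __init__(self, founders):
        self.trusted = set(founders)   # initial cadre of trusted users
        self.banned = set()
        self.scores = {}               # newcomer -> net votes from trusted users

    def vote(self, voter, target, value):
        # Only trusted users' votes count, and only on non-trusted targets.
        if voter not in self.trusted or target in self.trusted:
            return
        self.scores[target] = self.scores.get(target, 0) + value
        if self.scores[target] <= BAN_THRESHOLD:
            self.banned.add(target)    # voted down too much: banned
        elif self.scores[target] >= PROMOTE_THRESHOLD:
            self.trusted.add(target)   # voted up sufficiently: now trusted

# Usage: founders moderate, newcomers earn trust or get banned.
c = Community({"alice", "bob"})
c.vote("alice", "mallory", -1)
```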
0tut
Is this similar to r/rationalistdiaspora?
5Viliam
Oh, I haven't seen r/rationalistdiaspora for a long time. Looking there: the front page contains 25 posts, each of them 2 days old; most of them don't have any upvotes, and none of them has comments. Nope. When people don't vote or comment, slow down. Only choose the best stuff, or, if you believe there is that much great content, create a single article containing multiple links.

Also, I guess you have to somehow create the initial community, to get the ball rolling. I don't know exactly how, but some startups solve this problem by having an invitation-only phase, where you can join only if an existing user invites you. That means you have demonstrated your interest (artificial scarcity) and also that you know at least one person who is already there, so you will keep coming to meet them, and they will keep coming to meet you. Okay, I admit there is more to it than just having a good censor.

I've been thinking about this for a few months. I'm pointing this out to commit to writing a main-level article by December 1st, hopefully earlier.

3Viliam
You have my upvote, which on December 1st will become a downvote unless you've posted by then. (Just kidding.)
0Vaniver
Article written, edited, slept on, and edited again. I could post it now but will wait until the 2nd for timing reasons.
0[anonymous]
Upvoted today for following through (and raising this discussion in a constructive and thoughtful manner).
1hg00
If this change is made, the karma multiplier for a Discussion post should also be increased. Right now a 10-point Discussion post gets you 10 karma, but a 10-point post in Main gets you 100 karma. That doesn't make much sense given the reality of how Main & Discussion are being used (virtually identically: basically you post to Discussion if you're a person with a humble disposition). I'm in favor of having a solid multiplier for Discussion posts, 4x at the absolute least, to encourage more toplevel posts. I would also disable downvoting for users with less than 100 karma to encourage more contributions... Less Wrong is such a dinosaur at this point there's little reason not to try this kind of radical change.
1tut
I would like to combine your two suggestions like so: Posts in discussion still earn 1 karma per vote. But as soon as a post gets at least five or so points it transfers to promoted. And then you get 10 karma per vote the post receives after getting promoted. Posts in promoted are visible to people reading discussion, but readers can choose to see only promoted posts. That way you have a smaller downside risk (if your post is received poorly you only lose one karma per downvote), but you can still get more karma if you write a substantial post that people like.
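A rough sketch of this scoring rule as I read it: the 5-point threshold and the 10x post-promotion multiplier come from the comments above, but applying the multiplier to post-promotion downvotes as well is my guess, since the comment leaves that ambiguous, and `author_karma` is a hypothetical name.

```python
PROMOTE_AT = 5    # points at which a discussion post gets promoted
MULTIPLIER = 10   # author karma per vote after promotion

def author_karma(votes):
    """votes: list of +1/-1 on one post, in the order they arrive."""
    karma, score, promoted = 0, 0, False
    for v in votes:
        score += v
        karma += v * (MULTIPLIER if promoted else 1)  # 1x before, 10x after
        if not promoted and score >= PROMOTE_AT:
            promoted = True
    return karma

# Five upvotes promote the post; later votes are then worth 10 each.
print(author_karma([1, 1, 1, 1, 1, 1, 1, -1]))  # 5 + 10 + 10 - 10 = 15
```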
0hg00
I really like the suggestion of making it so downvotes only cost 1 karma on toplevel posts. But it seems weird to have the marginal karma from an upvote suddenly switch from 1 to 10 as soon as you get at least 5 points.
[-][anonymous]20

Live in LA? On the autism spectrum? Got social anxiety or social phobia? You're eligible for legal MDMA therapy. Congrats. For the rest of you out there, take it from me: don't do ecstasy, it's unreliable. The tests are shitty.

Live in Canada and got an addiction? Ayahuasca for you! Live in Australia with PTSD? Soon.

0SanguineEmpiricist
Just buy high-quality stuff from black markets. It's pretty simple. If you ask around you should be able to find a local hookup who has some; just stay updated with the scene.
-1[anonymous]
That is improper. I prefer lawful transactions sanctioned by expert opinion. You can get 2 'doses' of crystal MDMA here in Melbourne for $50 from 'Alex'. But who knows how good it is. Dealers don't sell purity-test kits, and the kits were banned a few months ago from dodgy stores like Off Ya Tree and that place near Flinders station.

A Quasipolynomial Time Algorithm for Graph Isomorphism: The Details

Laszlo Babai has claimed an astounding theorem, that the Graph Isomorphism problem can be solved in quasipolynomial time. On Tuesday I was at Babai’s talk on this topic (he has yet to release a preprint), and I’ve compiled my notes here. As in Babai’s talk, familiarity with basic group theory and graph theory is assumed, and if you’re a casual (i.e., math-phobic) reader looking to understand what the fuss is all about, this is probably not the right post for you. This post is research lev

... (read more)

Has anybody donated a car to charity before? (In the US; CA in particular, but I imagine advice will generalize outside of location-specific charities.)

The general advice online is useful but not very narrowly tailored. A couple of points I'm looking for information on:

1) Good charities (from an EA perspective)

2) Clarification on the tax details (when car's fair market value is between $500 and $5000)

Would appreciate any advice.

1Tripitaka
Since you didn't receive a lot of feedback, my thoughts: a) Take your highest-ranking EA orgs and ask them if they would benefit from having a car available to them. Donate the car to that NGO. b) Sell the car at market value and donate the money. No clue about taxes, not being US-based.
1knb
Kars for Kids is one that advertises heavily but they use the revenue primarily to support ultra-orthodox religious education, which doesn't seem very EA to me.

My girlfriend's cat poops on the carpet. The cat does poop in the litter boxes some of the time, and always urinates in them, but she also poops on the carpet several times a day in different places. (She also never buries her poop when she does use the boxes.) Any advice?

9James_Miller
Get a new girlfriend. (Probably easier than getting your current girlfriend to get a new cat.)
4CronoDAS
My girlfriend actually doesn't like the cat very much - I'm more of a cat person than she is, so her cat has sort of become my cat... I just wish the cat didn't leave "land mines" on the carpet for us to clean up.
4username2
This is silly.
2James_Miller
Not for men who have dated women who have cats.
7Lumifer
If you find yourself below the cat on the totem pole, maybe you do want a cat which poops less often...
0[anonymous]
She actually doesn't like the cat very much...
6drethelin
The cat is probably unhealthy; they don't normally poop several times a day.
4Dagon
The top Google hits will give reasonable advice. Likely: the cat doesn't like the litterbox for some reason -- wrong kind of litter, too small, too far away, or not changed often enough.
2raydora
Have you talked to her about it? What does she say?

-6ZankerH
meow
[-][anonymous]10

There are profit (and income) premiums in vice industries from non-competitive behaviour by moralists. I wouldn't be surprised if moral entrepreneurs intersect with actual entrepreneurs.

0Dagon
Bigger than vice industries. See also the bootleggers-and-Baptists model of regulation. I'd be interested to hear more about "moral entrepreneurs". I'm guessing you don't mean people who take moral risks in order to maximize their morality.

Feeling like you're an expert can make you closed-minded

Victor Ottati at Loyola University and his colleagues manipulated their participants (US residents, average age in their 30s) to feel like relative experts or novices in a chosen field, through easy questions like “Who is the current President of the United States?” or tough ones like “Who was Nixon's initial Vice-President?”, and by providing feedback to reinforce the participants’ feelings of knowledge or ignorance. Those participants manipulated to feel more expert subsequently acted less open-minde

... (read more)
[-][anonymous]00

[Stanford researchers uncover patterns in how scientists lie about their data](http://news.stanford.edu/news/2015/november/fraud-science-papers-111615.html)

Even the best poker players have "tells" that give away when they're bluffing with a weak hand. Scientists who commit fraud have similar, but even more subtle, tells, and a pair of Stanford researchers have cracked the writing patterns of scientists who attempt to pass along falsified data.

The work, published in the Journal of Language and Social Psychology, could eventually help scientists i

... (read more)
[This comment is no longer endorsed by its author]Reply
[-][anonymous]00

Focus on a new frame of reference, not on technique. Clients need to shift away from content—“it’s about my heart/ my debt/ the safety of the plane/ germs”—and toward the very best strategies to recover from their anxiety disorder. These strategies will always address the intentions that currently motivate their actions. Most decisions by anxious clients have two functions:

1) to only take actions that have a highly predictable, positive outcome

2) to stay comfortable

And that makes sense. Everyone seeks comfort. And everyone wants to feel confident about ce

... (read more)
[-][anonymous]00

I bought a Visa prepaid debit card that's expiring in a month. I have a bank account. How do I get the money from the debit card (anonymous, not attached to my name, and with no online account associated with it) into my bank account? There's no payment gateway in my online bank account.

1RobertM
I was able to use Square to transfer money from a pre-paid gift card (not sure if it was Visa though) to my bank account. Transaction fee is ~2.75% iirc.

A meta-ethics reflection about the three chimps.
We know that chimp societies are in a meta-stable Molochian equilibrium of violence, but you can tip them over, with more resources, into a more peaceful state.
There is supposedly a "universal" progress of society towards a more moral baseline, such as less slavery, less torture, more freedom, but there were also notable exceptions. I was thinking about seventeenth-century Venice, which was freer than contemporary Venice. But at the time Venice was one of the most powerful city-states in the Mediterrane... (read more)

3Lalartu
Whether there is "universal progress" in the described sense depends on which start and end points we choose. If we take, say, the Middle Ages to today, then there is. If from the Paleolithic to the height of the Roman Empire, then the trend is exactly the opposite: a march from freedom to slavery. So growth of per-capita wealth can coexist with different directions of moral change.
1OrphanWilde
Not to espouse moral directionality, but from the Paleolithic to the height of the Roman Empire we didn't go from freedom to slavery; we went from informal to formal modes of dominance. Informal modes of dominance *look* more like freedom than formal modes do, because formal modes put more rules on the slave -- but they put more rules on the master as well, and rules on the master are, in the end, what that thing we call freedom is.
0Lumifer
Um... You believe that between the Paleolithic and the height of the Roman Empire, progress went in reverse?
2Lalartu
If we define "progress" as "less slavery, less torture, more freedom", as in the top comment, then yes, it went in reverse.
0Lumifer
The top post actually talked about 'a "universal" progress of society towards a more moral baseline', but let's see. A fair-warning preamble: no one really knows much about cultural practices in the Paleolithic, so the credence of statements about what Paleos (sorry, diet people) did is low.

Slavery -- sure, there was less slavery in the Paleolithic. So, what did they do instead? The usual source of slaves in Antiquity was wars: losers were enslaved. And during the Paleolithic? Well, I would guess that the losers had all the males killed and the fertile women dragged off to be breeding stock. Maybe it's just me, but I don't see how the Paleolithic way is morally better or closer to the "more moral baseline", whatever it might be.

As to torture, it is entirely not obvious to me that Paleos had less torture than the Roman Empire. Primitive tribes tend to be very cruel to enemies (see e.g. this).

And freedom... it depends on how you define it, but the Paleo tribes were NOT a happy collection of anarchists. In contemporary political terminology I expect them to have been dictatorships where order was maintained by ample application of force and most penalties for serious infractions involved death. That doesn't look like a particularly free society.

I have a feeling you are thinking about noble savages. That's fiction.
2Lalartu
I don't think it is reasonable to portray a Paleolithic tribe as a dictatorship. When the best weapon is a pointed stick, and every man has the skill to use it, a minority simply can't rule by force.
3Lumifer
That's obviously wrong, as there is a large set of social animals which don't even have pointy sticks, and yet alpha males manage to rule the tribe with an iron hand (or paw, or beak, etc.).
0RolfAndreassen
How many slaves were there in the Paleolithic?
0Lumifer
See my other comment in this subthread.
[+][anonymous]-70
[+][anonymous]-80
[+][anonymous]-110
[+][anonymous]-110