Open Thread August 31 - September 6
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (326)
I hypothesise that there are several topics on which you can reliably expect upvotes or downvotes depending on your position, regardless of the content of your comment.
Are there any advocacy groups for sex buyers, or 'johns'? They're an affluent bunch, their interests include easily influenced poor settings, and they're not necessarily constrained by the scrupulosity that advocates for, say, sex workers' rights may have. It surprises me that they don't exist, given that advocacy groups for smokers and other vices exist; only advocacy groups for the suppliers and workers in the sex trade seem to exist.
Being a sex buyer is low status. Being in an oppressed group such as sex workers is high status in many political contexts.
Hence the term "status whore."
That depends. Being a john is low-status. Inviting girls over to your yacht for champagne and caviar is high-status.
That really depends. A whore is not a high-status profession.
That's not being a "sex buyer" within the context of needing advocacy for sex buying.
Thus, "in many political contexts".
I wonder what the lesson here is.
"If you want to buy sex for money, you better have a lot of money, or it will reflect poorly on you."
Or perhaps:
"Doing things in a way which demonstrates that you have a lot of money can make almost anything high-status."
Or: be classy, not crass. Form and style matter.
It is, of course, easier to be classy when you have a yacht stocked with champagne and caviar on hand... X-/
Counter-example: Donald Trump. A dictionary counter-example: nouveau riche :-)
From memory: Amnesty International has come out in favor of legalizing prostitution. They were grudging about admitting that, while they aren't going to call it human rights, they have to support something like human rights for prostitutes' customers and agents.
I read the Amnesty paper and it didn't say anything about rights for customers or agents.
Do western civilizations owe something to those civilizations that were disadvantaged as a result of imperialism? A common reaction of national conservatives to this idea is that what happened during imperialism is time-barred and that each country is responsible for its own citizens.
No.
Could you explain why you see it this way? Our wealth is partly based on exploitation. Wouldn't it be fair to fix the damage we've done to exploited people? This could perhaps be also justified in terms of utilitarianism, as fairness might bring people closer together which prevents wars.
Not to any significant extent. Most colonized places were net money-losers for the colonizer for most of their history. In addition, I doubt most western-colonized countries were made substantially worse off compared to non-colonized countries, since the Europeans introduced some level of infrastructure, medicine, etc.
First of all, who is this "we" you speak of? More importantly, there are a few "control-group" countries which were not colonized while their neighbors were, like Siam (modern Thailand) and Ethiopia, and they don't seem better off than their neighbors. Unlike most African countries, which abolished slavery when the Europeans took control, Ethiopia banned slavery only in 1942--under pressure from the British, who were a bit embarrassed to be allied with a slave state.
But then why did people keep conquering and colonizing new lands?
There is also Japan, which was better off than its neighbors. In 1905 Japan was strong enough to win a war against Russia.
Money is not the only motivator. Power is another one.
Because conquering new lands helps spread the meme that one should conquer as much as one can.
Because the people directly responsible for the colonization profited, even if their nation as a whole did not. To go back further in history, the general of a Roman legion often came home from a campaign fabulously wealthy, while the people back home saw far less of the plunder. And asking modern Italians to pay Spain for what Caesar looted is kind of absurd.
Is that true? I can think of examples, like Cecil Rhodes arranging for the British Empire to pay for the Boer Wars for his personal enrichment, but is that typical? The East India Companies were profitable, but they paid their own military costs and used a light touch. I think the question at hand is the 19th century, when European states claimed vast swaths of land.
(I don't like the comparison to Caesar. I believe that he paid to outfit his army, so the Romans as a whole made a profit, in contrast to knb's claim about European colonialism, which I believe is correct.)
That is a very good question on which books have been written. Some of this was about religion and prestige, and competition with others. Some of it was various sovereigns being convinced to fund dubious (in retrospect) ventures by good marketing.
We have our biases and our cultural zeitgeist, and folks in the past had theirs. After the Ottoman Turks conquered Constantinople and killed off the Roman empire for good, the Portuguese started looking for an alternative route for the spice trade (and also for Prester John, the mythical Christian king in the east). "We are looking for spices and Christians" was the motto.
The English had complicated reasons to start colonizing that were not all about money. A lot of the times it felt like colonial things happened for complex reasons (e.g. having to do w/ what was happening w/ Christianity at the time), and the Crown tried to find ways to make money off it.
It was the case that at some point the sugar trade became very valuable (e.g. to Napoleon the tiny sugar-producing possessions of France were worth much more than the entirety of Louisiana), but this happened much later -- there wasn't a "master imperialist plan" at all.
I don't see any basis for this claim. More explicitly, I don't see any reasonable and consistent legal/moral theory which would justify such a claim. Note that I do not consider the popular "deep pockets" legal theory to be reasonable.
I think that framing "Imperialism" as belonging to the past is inaccurate.
Many of the problematic behaviours grouped together into the term "Imperialism" have not actually stopped. There are Western developed countries doing horrible things to non-Western developing countries right now, and doing horrible things to their own people too.
I think a good first step would be to stop doing the horrible stuff now. If the problematic behaviour stopped, the topic of redress for past wrongs could be considered from a better vantage point. "I'm sorry I killed your ancestors and stole their stuff 100 years ago" tastes like ashes coming from someone who is killing your family and stealing your things now, or who is doing something more subtle but equally awful.
"Disadvantaged" is a word that glosses over the damage done. Also, the whole question could benefit from being more specific and defining terms better.
Do all other civilizations owe something to western civilization for the benefits they gained stemming from western science and technology?
Meh, companies clearly did get rich on exporting western technology (and they often didn't export our ethical standards, to maximize profit).
Capturing only a tiny fraction of the value they created, and that's just the for-profit companies, not to mention all the scientists and charitable organizations that gave out western science and technology for free.
I would love to see some statistics on that, but it's probably too hard to measure; also, what percentage of the exported technology was charity.
This seems to be clearly an ethical question to me, and the field of ethics is far from scientific. What kind of answer are you looking for?
My system of ethics would suggest that developed nations are morally obligated to help poorer nations (at least in so far as significant human suffering is caused by limited resources), and that this is the only relevant factor. So help disadvantaged peoples yes, but the cause (imperialism or otherwise) is irrelevant in determining the need.
If you would like a different answer, I can surely construct an argument pointing in the direction you prefer.
But the cause is relevant to determining the incentives created by your help.
Is anywhere on Earth inhabited by the descendants of the humans who first moved in?
Off the top of my head Iceland for sure, Māori-inhabited areas, and possibly the Basque Country. But yes, that's pretty much the exception.
How much does Mongolia owe Russia? How much do North African countries owe Europe for the millions of Europeans kidnapped and sold into the Arab slave trade in north Africa? The notion is itself ridiculous.
It is relatively easy to understand the situation when one person owes money to another person, having borrowed it before. It is also not much more difficult to understand the situation when one person owes another person a compensation for damages after being ordered by court to pay it. Somewhat more vague is a situation when there is no court involved, but the second person expects the first one to pay for damages (e.g. breaking a window), because it is customary to do so. All these situations involve one person owing a concrete thing, and the meaning of the word "owes" is (disregarding edge cases) relatively clear.
Problems arise when one tries to go from singular to plural while still using intuition from the singular verb. Quite often, there are many ways to extend the meaning of a singular verb to a plural one in a way that is still compatible with the meaning of the former. For example, one can extend the singular verb "decides" to many different group decision-making procedures (voting, lottery, one person deciding for everyone, etc.); saying "a group decides" simply obscures this fact.
Concerning the word "owe", even when we have a well defined group of people, we usually prefer to either deal with them separately (e.g. customers may owe money for services) or create a juridical person which helps to abstract a group of people as one person and this allows us to use the word "owe" in its singular verb meaning. There are more ways to extend the meaning of the word "owe" from singular to plural, but they are quite often contentious.
"Western civilizations" is a very abstract group of people. It is not a well defined group of people. It is not a juridical person. It is not a country. It is not a clan. The singular verb "owes" is clearly inapplicable here, and if one wants to use it here, one must extend its meaning from singular to plural. But there seems to be a lot of possible extensions. Therefore one has to resort to other kinds of arguments (e.g. consequentialist arguments, arguments about incentives, etc.) to decide which meaning one prefers. But if that is the case, one can bypass the word "owe" entirely and go to those arguments instead, because that is essentially what one is doing, because words whose meanings one knows only very vaguely probably do not do much in actually shaping the overall argument.
In addition, "being disadvantaged as a result of imperialism" is very dissimilar from "having a window broken by a neighbour"; it is not a concrete thing. The central example of "owing something" is "owing a concrete and well defined thing". Whenever we have a definition that works well for a central example and we want to use it for a noncentral one, we again must extend it, and there is often more than one way to do so (Schelling points sometimes help to choose between the possible extensions, but often there is more than one of them, and the choice of extension becomes a subject of debate).
In general, I would guess that if someone argues that an entity as abstract as "western civilizations" owes something to someone, most likely they are either unknowingly rationalizing the conclusion they came to by other means or simply sloppily using an intuition from the usage of the singular verb "owes". I think that the meaning of the word can be extended in many ways, many of which would still be compatible with the meaning of the singular word and some of them would imply "new generations are not responsible for the sins of the past ones", while some of them wouldn't, therefore it is probably better to bypass them altogether and attempt to solve a better defined problem.
Other words where trying to go from singular to plural often causes problems are: "owns", "chooses", "decides", "prefers" (problem of aggregation of ordinal utilities), etc.
I would only count debts toward the specific peoples directly affected; e.g. the Spanish Empire lived off Bolivian silver, the Belgians worked the Congolese to death, and the United States is literally built on stolen Native land. Those examples and many others allow for a case in favor of reparations.
However, the passage of time sometimes blurs the effects of exploitation and aggression. Should the UK sue Denmark for the Norman Conquest? Should Italy sue Germany because Germanic tribes destroyed the Roman Empire? Should Hungary sue Mongolia for what the Golden Horde did to them? I admit I don't know how to answer that in a way that is consistent with my first paragraph.
If you focus on utilitarianism the question doesn't come up. The important thing isn't who "owes" but how we can produce utility. If the best way is to give bednets to Africans, then that's the thing to do, regardless of the concept of "owing".
In view of this http://essay.utwente.nl/66307/1/Bolle%20Colin%20-s%201246933%20scriptie.pdf did the smartphone makers anticipate addiction, as did the tobacco companies in the U.S.?
Certainly both are profiting from it.
To me it seems like some version of the Tulip Mania.
If you think you have been infected or potentially infected with HIV, IMMEDIATELY go to an emergency department and explain your situation. You can get a treatment that can stop you getting HIV! Here's more information relevant to Australians. Yes, science has come this far!
Also, if you are engaging in risky sexual behaviour like having sex without a condom, guys: get some of your foreskin chopped off. It reduces your HIV risk. Women, note that it doesn't reduce your risk of getting infected by an infected male.
I'm looking for a good demonstration of Aumann's Agreement Theorem that I could actually conduct between two people competent in Bayesian probability. Presumably this would have a structure where each player performs some randomizing action, then they exchange information in some formal way in rounds, and eventually reach agreement.
A trivial example: each player flips a coin in secret, then they repeatedly exchange their probability estimates for a statement like "both coin flips came up heads". Unfortunately, for that case they both agree from round 2 onwards. Hal Finney has a version that seems to kinda work, but his reasoning at each step looks flawed. (As soon as I try to construct a method for generating the hints, I find that at each step when I update my estimate for my opponent's hint quality, I no longer get a bounded uniform distribution.)
So, what I'd like: a version that (with at least moderate probability) continues for multiple rounds before agreement is reached; where the information communicated is some sort of simple summary of a current estimate, not the information used to get there; where the math at each step is simple enough that the game can be played by humans with pencil and paper at a reasonable speed.
Alternate mechanisms (like players alternate communication instead of communicating current states simultaneously) are also fine.
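For concreteness, the trivial two-coin version can be simulated directly; a minimal Python sketch (my own illustration, not a proposed solution) shows exactly the problem described, namely that announcing the round-1 estimates fully reveals both private coins, so agreement is reached by round 2 every time:

```python
import random

def trivial_aumann_demo(seed=None):
    """Two players each flip a private coin, then announce their
    probability estimate for 'both flips came up heads'.
    Returns (round-1 estimates, round-2 estimates)."""
    rng = random.Random(seed)
    c1, c2 = rng.random() < 0.5, rng.random() < 0.5  # True = heads

    # Round 1: each player conditions only on their own coin.
    e1 = 0.5 if c1 else 0.0
    e2 = 0.5 if c2 else 0.0

    # Round 2: an announcement of 0.5 means "my coin was heads" and
    # 0.0 means "my coin was tails", so both players now know both
    # coins and compute the same posterior.
    both_heads = 1.0 if (c1 and c2) else 0.0
    return (e1, e2), (both_heads, both_heads)

round1, round2 = trivial_aumann_demo(seed=0)
assert round2[0] == round2[1]  # agreement by round 2, always
```

A multi-round version would need private signals whose announcements leak information only gradually, which is precisely the hint-generation difficulty mentioned above.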
Bridge, the card game. Bidding is the process of two players exchanging information about the cards they hold via the very limited communications channel (bids). The play itself is also used to transfer more information about which cards remain in the hand.
I don't know if that will work as a demonstration of the Aumann's Theorem, though, bridge gets very complicated very fast :-/
That's an excellent practical example, though it doesn't really have the explicit probability math I was hoping for.
In particular, I like that you'll see the question of which player thinks the partnership has the better contract flip back and forth, especially around auctions involving controls, stops, or other specific invitational questions. The concept of evaluating your hand within a window ("My hand is now very weak, given that I opened") is also explicitly reasoning about what your partner infers based on what you told them.
I think the most important thing here might be that bridge requires multiple rounds because bidding is limited bandwidth, whereas giving a full-precision probability estimate is not.
If you want explicit probability math, you might be able to construct some kind of cooperative poker (for example, allow two partners to exchange one card from their hands following some very restricted negotiations). The probabilities in poker are much more straightforward and amenable to calculation.
The two-coins example might be useful as a first step, even if you then present a more difficult one.
How about some variation on Bulls and Cows?
One of my professors claimed that postmodernism, and particularly its concept of "no objective truth", is responsible for much of the recent liberalism of society, through the idea of "live and let live". (Specific examples given were attitudes towards legalization of gay marriage and drugs.) I pointed out that libertarianism and liberalism predated postmodernism historically, and they said that that's true, but you can still trace the popularity back to postmodernism.
Is this historically accurate? If not, is there something I can point to that would convince them? It seems to me that the shift in society is much more a shift on the object level questions than on the meta level "should we ban things we disagree with", but I don't know very much recent history of philosophy (it isn't strictly their field either, so I'm justified in not taking them at face value).
Edit: re-asked on latest OT here
I don't know about history, but this reminds me of a "valley of bad rationality". Assuming that the historical hypothesis is true, I would treat it as just another example that if your belief system is sufficiently insane, another false belief does not necessarily make it worse, and could actually neutralize some more harmful beliefs. If you map is worse than noise, even beliefs like "there is no reality" could improve your thinking.
It is the year 2050 and much of the world’s soils have only five more fertile harvests remaining
Does anyone know of a good life expectancy calculator? Preferably one which has good justification behind the model, and also has been tested.
I tried this calculator, but I noticed a few issues. First, it tells me I should start doing conditioning exercise... when I did check that off. I think that part of the calculator is broken. It also seems to think that taller people live longer, when from what I understand it's well accepted that the opposite is true. Some of its other features seem unjustified to me; for example, it seems to think you get a life expectancy boost from eating less than 10% of your calories from fat, but I can't find any evidence for that.
Good life expectancy calculators seem very valuable to those interested in longevity. Perhaps some people at LessWrong should create some sort of model. Though I have little experience with these sorts of statistical models, I think the Monte Carlo method might be useful here to get a distribution. If we put the code on GitHub then others can take a look at its guts and submit corrections/improvements/pull requests if they want to.
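As a sketch of what the Monte Carlo approach might look like: draw many simulated ages at death and report the distribution rather than a point estimate. The Gompertz-style hazard and its parameters below are purely illustrative assumptions on my part, not calibrated to any dataset; a real model would fit them to life-table data and risk factors.

```python
import math
import random

def simulate_lifespans(current_age, hazard_multiplier=1.0,
                       n_sims=10_000, seed=0):
    """Monte Carlo sketch of a life-expectancy distribution.

    Each year of simulated life, the person dies with probability
    given by a Gompertz-style hazard a*exp(b*age), optionally scaled
    by a hypothetical risk-factor multiplier. Returns the sorted
    list of simulated ages at death."""
    rng = random.Random(seed)
    a, b = 3e-5, 0.085  # illustrative baseline parameters only
    deaths = []
    for _ in range(n_sims):
        age = current_age
        while True:
            hazard = hazard_multiplier * a * math.exp(b * age)
            if rng.random() < min(hazard, 1.0):
                deaths.append(age)
                break
            age += 1
    deaths.sort()
    return deaths

lifespans = simulate_lifespans(30)
median = lifespans[len(lifespans) // 2]  # median of the distribution
```

Percentiles of `lifespans` then give the wide distribution mentioned above, and a skeptical user could set `hazard_multiplier` for a given risk factor to 1 to switch off a correlation they distrust.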
A good life expectancy calculator implies a good model of which factors drive longevity. I don't believe such a model exists (for healthy people -- the effects of various illnesses on your life expectancy are known much better). There are a lot of correlation studies but correlations and causality are not quite the same thing.
"Some sort of a model" is a very low bar -- presumably you would like the model to be good. People who will be able to make a good comprehensive model of how various health/diet/lifestyle/etc. interventions affect longevity will probably be in the running for a Nobel.
It's like saying that you found online some investment advice which doesn't look too good, perhaps some LW people would like to construct a model of the markets that will give better advice. Well...
Fair points. I don't think what we understand about longevity is as bad as what we understand about investments.
I suppose what I'm looking for is a model which 1) doesn't have any obvious bugs, 2) doesn't contradict anything we do know, and 3) has at least some evidence behind the model. If it produces a fairly wide distribution because that represents the (poor) state of our knowledge, I think that's fine.
The issue of correlation vs. causation also is important, and I'm not sure what we could do about it short of allowing someone to turn off certain features of the model if they believe them to be untrustworthy. For example, I've seen a fair bit about how marriage is correlated with an increase in longevity, and it seems obvious to me that any similar sort of social structure where one has frequent socialization and possibly receives feedback and care is probably where the real benefit is. So I think you can say you are married if you believe your situation is equivalent in some way. Obviously these details need to be shown more rigorously, but this is the basic argument.
What's in the way of a large-scale prospective placebo-controlled trial of pre-exposure HIV prophylaxis?
Does anyone else have trouble with people who openly display their intelligence or attempt to be smart about something? High-school and media have somehow ingrained a hostility towards that and I find it surprisingly hard to overcome. I think it is some sort of empathy response, similar to vicarious embarrassment.
For me the most annoying aspect of "displaying intelligence openly" is the following:
Imagine that you have an average person A, an intelligent person B, and a super-intelligent person C. More precisely, imagine that there are 100 As, 10 Bs, and 1 C, because most people are at the center of the bell curve.
From A's point of view, both B and C are smarter than him, and he cannot really compare them. All he can say is that he kinda understands what B says, but a lot of what C says is incomprehensible.
The experience of B is that most people are either A or B. Add some political or other mindkilling, and B may quickly develop a heuristic "everyone who agrees with me is a B, and everyone who disagrees is A and a huge waste of time".
Now once in a while B and C meet and disagree about something. B, using their long-practiced heuristic, says "lol, you're an idiot".
An observer A looks at their interaction and thinks "B is probably right, since I know B to be a smart person; and C also seems kinda smart, but not as smart as B, and B says he is wrong, so he probably is".
From my point of view, B is "cheating" in this process, using both his intelligence and his lack of even higher intelligence to create an advantage over C. Thus I applaud the norms which prevent this, even if they were created for other reasons.
I openly display my intelligence all the time. Nobody would -describe- it as that, however. They'd describe me as giving advice, suggesting solutions, or similar -specific- activities, and only in appropriate situations. (If you don't know when advice is desired - which is, critically, not whenever somebody mentions a problem they have - don't give it unless asked.)
"Openly displaying your intelligence", as an activity in itself, is merely -bragging-, and is just as annoying, and for precisely the same reason, as the guy who will tell anyone who will listen about how he's a motorcycle racer who could easily win any race he ever entered, but he just enjoys riding his motorcycle for the fun of it.
It's worth distinguishing a number of things.
Actually and visibly being really smart, and pretty much always right in their domain of expertise.
Trying to look really smart and right, over and above merely being so.
Arrogance in dealing with people who are wrong.
Arrogance in dealing with people disagreeing with oneself.
(1) is a great virtue, (2) and (4) are mortal sins of rationality, and (3) merely a venial one. I will overlook a lot of arrogance in someone who is actually pretty much always right, especially if it isn't me they're being arrogant at.
People who are insecure around smart people often read actually being right and knowing it (1 and 3) as pretending to be right and intimidating others (2 and 4).
seconded. nothing to add.
That's what the little thumbs-up button is for.
I don't think we have a problem on LW with too many people writing messages saying that they agree with other people.
Additional points missed are that there are no agricultural subsidies, and there are some other things mentioned in the comments.
Anyone ever try modeling internal monologue as political parties? I suppose it's not so different from the House voices in HPMOR, but I'm curious if there's RL experience.
Why would you want to dumb yourself down? X-/
I've tried to model it as it was shown on Herman's Head. It helps me remember that I don't have to listen only to my inner wimp.
I've been thinking about different ways to model the adaptive system of thought and ideas in my mind. Governments don't seem like a helpful model because parts of my mind aren't as autonomous as people, nor do I have clearly defined interests groups or political party proxies. Also keen to hear ways of modelling that system for internal usage.
The abstraction is that each party gets one voice, without worrying too hard about who exactly is speaking for it, and the voting public represents the support for each voice.
I find parties better capture the fact that some voices are more supported than others. If I thought of all the voices in my head as people in a room together, I'm afraid I'd end up thinking the voices I most endorse are jerks pushing everyone else around.
Tumblr user su3su2u1 (probably most known to LWers for his critiques of HPMOR's scientific claims, and subsequent fallout with Eliezer) has an interesting post about MIRI's research strategy. I think it has some really good ideas. What do other folks think?
It seems like a lot of the focus is on MIRI giving good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.
The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.
If you have outside-view criticisms of an organization and you're suddenly put in charge of them, the first thing you have to do is check the new inside-view information available and see what's really going on.
One dictionary definition of academia is "the environment or community concerned with the pursuit of research, education, and scholarship." By this definition MIRI is already part of academia. It's just a separate academic island with tenuous links to the broader academic mainland.
MIRI is a research organization. If you maintain that it is outside of academia then you have to explain what exactly makes it different, and why it should be immune to the pressures of publishing.
Low-quality publications don't get accepted and published. I know of no universities that would rather have a lot of third-rate publications than a small number of Nature publications. I'll agree with you that things like impact factor aren't good metrics but that's somewhat missing the point here.
If MIRI doesn't publish reasonably frequently (via peer review), how do you know they aren't wasting donor money? Donors can't evaluate their stuff themselves, and MIRI doesn't seem to submit a lot of stuff to peer review.
How do you know they aren't just living it up in a very expensive part of the country doing the equivalent of freshman philosophizing in front of the white board. The way you usually know is via peer review -- e.g. other people previously declared to have produced good things declare that MIRI produces good things.
How did science get done for the centuries before peer review? Why do you place such weight on a recently invented construct like peer review (you may remember Einstein being so enraged by the first and only time he tried out this new thing called 'peer review' that he vowed never again to submit anything to a 'peer reviewed' journal), a construct which routinely fails whenever it's evaluated and has been shown to be extremely unreliable, where the same paper can be accepted or rejected based on chance? If peer review is so good, why do so many terrible papers get published and great Nobel-prize-winning work get rejected repeatedly?
If peer review is such an effective method of divining quality, why do many communities seem to get along fine with desultory use of it, where it's barely used or left as the final step long after the results have been disseminated and evaluated, and people don't even bother to read the final peer-reviewed version? (Particularly in economics, I get the impression that everyone reads the preprints & working papers and the final publication comes as a non-event; this has caused me serious trouble in the past in trying to figure out what to cite and whether one cite is the same as another; and of course, I'm not always clear on where various statistics or machine learning papers get published, or if they are published in any sense beyond posting to ArXiv.)
And why does all the real criticism and debate and refutation seem to take place on blogs & Twitter, if peer review is such an acid test of whether papers are gold or dross, leading to the growing need for altmetrics and other ways of dealing with the 'post-publication peer review' problem as journals increasingly fail to reflect where scientific debates actually are?
I've said it before and I'll say it again: 'peer review' is not a core element of science. It's barely even peripheral, and it's unclear whether it adds anything on net. For the most part, calls for 'peer review' are cargo culting. What makes science work is replication and putting your work out there for community evaluation. Those are the real review by peers.
If you are a donor who wants to evaluate MIRI, whether some arbitrary reviewers pass or fail its papers is not very important. There are better measures of impact: is anyone building on their work? have MIRI-specific claims begun filtering out? are non-affiliated academics starting to move into the AI risk field? Heck, even citation counts would probably be better here.
Is this an "arguments as soldiers" thing? Compare an isomorphic argument: "how did medicine get done for the centuries before antibiotics."
Leaving aside that this an argument from authority, there is also selection bias here: peer review may well not be crucial -- if you happen to be of Einstein's caliber. But: "they also laughed at Bozo the Clown." I am sure plenty of Bozos are enraged at peer review too, unjustly rejecting their crap.
There is a stochastic element to peer review, but in my experience it works remarkably well, given what it is. Good papers are very likely to get a fair shake and get published. I routinely get very penetrating comments that greatly improve the quality of the final paper. I almost always get help with scholarship from reviewers (e.g. this is probably a good paper to cite.) A bigger issue I saw was not chance, but ideology from reviewers. I very occasionally get bad reviews (<5% chance) and associate editors (people who handle the paper and assign reviewers) are almost always helpful in such cases.
I asked you this before, gwern, how much experience with actual peer review (let's say in applied stats journals, as that is closest to what you do) do you have?
Absolute numbers are kind of useless here. Do you have some work in mind on false positive and false negative rates for peer review?
I don't think we disagree here, I think this is a form of peer review. I routinely do this with my papers, and am asked to look over preprints by others. I think this is fine for certain types of papers (generally very specialized or very large/weighty ones).
The worry is MIRI's conception of what a "peer" is basically ignores the wider academic community (which has a lot of intellectual firepower), so they end up in a bubble. The other worry is people who worry about getting tenured are incentivized to be productive (albeit imperfectly). MIRI is not incentivized to be productive except in some vague "saving the world" sense. And indeed, MIRI appears to be remarkably unproductive by academic standards. The guy who really calls the shots at MIRI, EY, has not internalized academic norms and appears to be fairly hostile to them.
Honestly, you sound a bit angry about peer review.
That's not isomorphic. To put it bluntly, medicine didn't. It only started becoming net beneficial extremely recently (and even now tons of medicine is harmful or a pure waste), based on copying a tremendous amount of basic science like biology and bacteriology, benefitting from others' discoveries, and importing methodology like randomized trials (which it still chafes at) -- and not by importing peer review. Up until the very late 1800s or so, you would often have been better off ignoring doctors if you were, say, an expecting mother wondering whether to give birth in a hospital pre-Semmelweis. You can't expect too much help from a field which published its first RCT in 1948 (on, incidentally, an antibiotic).
I include it as a piquant anecdote since you seem to have no interest in looking up any of the statistical evidence on the unreliability and biases (in the statistical senses) of peer review, or the absence of any especial evidence that it works.
That is not what I am saying. I am saying, 'if you think MIRI is Bozo the Clown, get a photograph of its leader and see if he has a red nose! See if his face is suspiciously white and the entire MIRI staff saves a remarkable amount on gas purchases because they can all fit into one small car to run their errands! Don't deliberately look away and simply listen for the sound of laughter! That's a terrible way of deciding!'
No, they're not, or at the very least, you need to modify this to, 'after being forced to repeatedly try solely thanks to the peer review process, a good paper may still finally be published'. For example, in the NIPS experiment, most accepted papers would not have been accepted given a different committee. Unsurprisingly! given low inter-rater reliabilities for tons of things in psychology far less complicated, and enormous variability when n=1 or 3.
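Not the actual NIPS protocol, just a toy sketch of the mechanism (paper count, accept rate, and noise level are all made-up illustrative numbers): two committees see the same papers, each adds its own independent reviewer noise to the true quality, and each accepts a fixed top fraction. Even modest noise makes their accept lists diverge.

```python
import random

def committee_overlap(n_papers=166, accept_rate=0.225, noise=1.0, seed=0):
    """Fraction of one committee's accepted papers that the other
    committee, scoring independently, also accepts."""
    rng = random.Random(seed)
    true_quality = [rng.gauss(0, 1) for _ in range(n_papers)]

    def accepted_set():
        # Each committee observes true quality plus its own noise,
        # then accepts the top `accept_rate` fraction of papers.
        ranked = sorted(range(n_papers),
                        key=lambda i: true_quality[i] + rng.gauss(0, noise),
                        reverse=True)
        return set(ranked[:int(n_papers * accept_rate)])

    a, b = accepted_set(), accepted_set()
    return len(a & b) / len(a)
```

With `noise=0` the overlap is exactly 1.0; as the noise grows toward and past the spread in true quality, the overlap falls toward the accept rate itself (what pure chance would give), which is the direction the NIPS experiment's disagreement numbers pointed.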
Yes, any of it. They all say that peer review is not a little but highly stochastic. This isn't a new field by any means.
I have little first-hand experience; my vitriol comes mostly from having read over the literature showing peer-review to be highly unreliable, and biased, from the unthinking respect and overestimation of it that most people give it, being shocked at how awful many published studies are despite being 'peer reviewed', and from talking to researchers and learning how pervasive bias is in the process and how reviewers enforce particular cliques & theories (some politically-motivated) and try to snuff opposition in the cradle.
The first represents a huge waste of time; the second hinders scientific progress directly and contributes to one of the banes of my existence as a meta-analyst, publication bias (why do we have a 'grey literature' in the first place?); the third is seriously annoying in trying to get most people to wake up and think a little about the research they read about ('but it's peer-reviewed!'); and the fourth is simply enraging as the issue moves from an abstract, general science-wide problem to something I can directly perceive specifically harming me and my attempts to get accurate beliefs.
(Well, actually I think my analysis of Silk Road 2 listings is supposed to be peer-reviewed, but the lead author is handling the bureaucracy so I can't say anything directly about how good or bad the reviewers for that journal are, aside from noting that this was a case of problem #4: the paper we were responding to is so egregiously, obviously wrong that the journal's reviewers must have either been morons or totally ignorant of the paper topic they were supposed to be reviewing. I'm still shocked & baffled about this: how does an apparently respectable journal wind up publishing a paper claiming, essentially, that Silk Road 2 did not sell drugs? This would have been caught in a heartbeat by any kind of remotely public process - even one person who had actually used Silk Road 1 or 2 peeking in on the paper could have laughed it out of the room - but because the journal is 'peer reviewed'... Pace the Gell-Mann Effect, it makes me wonder about all the papers published about topics I am not so knowledgeable about as I am on Silk Road 2, and wonder if I am still not cynical enough.)
Yes, I have no objection to 'peer review' if what you mean by it is all the things I singled out as distinct from the institution of peer review (before it, alongside it, and after it): having colleagues critique your work, having many other people with different perspectives & knowledge check it over and replicate it and build on it and post essays rebutting it - all this is great stuff, we both agree. I would say replication is the most important of those elements, but all have their place.
What I am attacking is the very specific formal institutional practice of journals outsourcing editorial judgment to a few selected researchers and effectively giving them veto power, a process which hardly seems calculated to yield very good results and which does not seem to have been institutionalized because it has been rigorously demonstrated to work far better than the pre-existing alternatives (which of course it wasn't, any more than medical proposals at that time were routinely put through RCTs first, even though we know how many good-sounding proposals in psychology & sociology & economics & medicine go down in flames when they are rigorously tested), but - to go off on a more speculative tangent here - whose chief purpose was to simply make the bureaucracy of science scale to the post-WWII expansion of science as part of the Cold War/Vannevar Bush academic-military-government complex.
If this is the problem with MIRI, I think there are far more informative ways to criticize them. For example, I don't think you need to rely on any proxies or filters: you should be able to evaluate their work directly and form your own critique of whether it's any good or if it seems like a good research avenue for their stated goals.
Science is srs bsns. (I find it hard to see why other people can't get worked up over things like publication bias or aging or p-hacking. They're a lot more important than the latest outrage du jour. This stuff matters!)
Medicine was often harmful in the past, with some occasional parts that helped, e.g. amputating gangrenous limbs was dangerous and people died, but probably was still a benefit on net. Admiral Nelson had multiple surgeries and was in serious danger of infection and death afterwards, but he would have been a goner for sure without surgery.
Science was pretty similar, it was mostly nonsense with occasional islands of sense. It didn't really get underway until, what, Francis Bacon wrote about biases and empiricism? That is not very long ago. The early "gentlemen scholars" all did informal peer review by sending their stuff to each other (they also hid discoveries from each other due to competition and egos, but this stuff happens today too).
Gwern, peer review is my life. My tenure case will be decided by peer review, ultimately. I do peer review myself as a service, constantly. I know all about peer review.
The burden of proof is on MIRI, not on me. MIRI is the one that wants funding and people to save the world. It's up to MIRI to use all available financial and intellectual resources out there, which includes engaging with academia.
I really think you should moderate your criticism of peer review. Peer review for data analysis papers is very different from peer review for mathematics or theoretical physics. Fields are different and have vastly different cultural norms. Even in the same field, different conferences/journals may have different norms.
I do a lot of theory. When I do data analysis, my collabs and I try to lead by example. What is the point of being angry? Angry outsiders just make people circle the wagons.
This argument seems exactly identical to the argument for trepanning, even including the survivorship bias. (One of the suspected uses of trepanning was to revive people otherwise thought dead.)
While we're looking at anecdotes, this bit of Nelson's experience with surgery seems relevant:
I'm not sure I'd count that as a win for surgery, or evidence that he couldn't have survived without it!
But this means that, unless you're particularly good at distancing yourself from your work, you should expect to be worse at judging it than a disinterested observer. The classic anecdote about "which half?" comes to mind, or the reaction of other obstetricians to Semmelweis's concerns.
Regardless, we would expect that, if studies are better than anecdotes, studies on peer review will outperform anecdotes on peer review, right?
It's not identical because we know, with benefit of hindsight, that amputating potentially gangrenous limbs is a good idea. The folks in the past had solid empirical basis for amputations, even if they did not fully understand gangrene. Medicine was mostly, but not always nonsense in the past. A lot of the stuff was not based on the scientific method, because they had no scientific method. But there were isolated communities that came up with sensible things for sensible reasons. This is one case when standard practices were sensible (there are other isolated examples, e.g. honey to disinfect wounds).
Ok, but isn't this "incentive tennis?" Gwern's incentives are clearer than mine here -- he's not a mainstream academic, so he loses out on status. So a "low motive" interpretation of the argument is: "your status castle is built on sand, tear it down!" Gwern is also pretty angry. Are we going to stockpile argument ammunition [X] of the form "you are more biased when evaluating peer review because of [X]"?
For me, peer review is a double edged sword -- I get papers rejected sometimes, and at other times I get silly reviewer comments, or editors that make me spend years revising. I have a lot of data both ways. The point with peer review is I sleep better at night due to extra sanity checking. Who sanity-checks MIRI's whiteboard stuff?
A "low motive" argument for me would be "keep peer review, but have it softball all my papers, they are obviously so amazing why can't you people see that!"
A "low motive" argument for MIRI would be "look buddy, we are trying to save the world here, we don't have time for your flawed human institutions. Don't you worry about our whiteboard content, you probably don't know enough math to understand it anyways." MIRI is doing pretty theoretical decision theory. Is that a good idea? Are they producing enough substantive work? In standard academia peer review would help with the former question, and answering to the grant agency and tenure pressure would help with the second. These are not perfect incentives, but they are there. Right now there are absolutely no guard rails in place preventing MIRI from going off the deep end.
Your argument basically says not to trust domain experts, that's the opposite of what should be done.
Gwern also completely ignores effect modification (e.g. the practice of evaluating conditional effects after conditioning on things like paper topic). Peer review cultures for empirical social science papers and for theoretical physics papers basically have nothing to do with each other.
I would put the start of solid empirical basis for gangrene treatment at Middleton Goldsmith during the American Civil War (dropping mortality from 45% to 3%), about sixty years after Nelson.
I think this is putting too much weight on superficial resemblance. Yes, gangrene treatment from Goldsmith to today involves amputation. But that does not mean amputation pre-Goldsmith actually decreased mortality over no treatment! My priors are pretty strong that it would increase it, but going into details on my priors is perhaps a digression. (The short version is that I take a very Hansonian view of medicine and its efficacy.) I'm not aware of (but would greatly appreciate) any evidence on that question.
(To see where I'm coming from, consider that there is a reference class that contains both "trepanning" and "brain surgery" that seems about as natural as the reference class that includes amputation before and after Goldsmith.)
But this only makes sense if peer review actually improves the quality of studies. Do you believe that's the case, and if so, why?
I think my argument is domain expert tennis. That is, I think that in order to evaluate whether or not peer review is effective, we shouldn't ask scientists who use peer review, we should ask scientists who study peer review. Similarly, in order to determine whether a treatment is effective, we shouldn't ask the users of the treatment, but statisticians. If you go down to the church/synagogue/mosque, they'll say that prayer is effective, and they're obviously the domain experts on prayer. I'm just applying the same principles and same level of skepticism.
I am not sure what the relevance of either of these are. If anything, the latter suggests that we need to make the case for peer review field by field, and so proponents have an even harder time than they do without that claim!
Peer review seems like a form of costly signalling. If you pass peer review, it only demonstrates that you have the ability to pass peer review. On the other hand, if you don't pass peer review, it signals that you don't have even this ability. (If so much crap passes peer review, why doesn't your research? Is it even worse than the usual crap?)
This is why I recommend to treat "peer review" simply as a hoop you have to jump through, otherwise people will bother you about it endlessly. To remove the suspicion that your research is even worse than the stuff that already gets published.
Mostly by well-off people satisfying their personal curiosity. Other than that, by finding a rich and/or powerful patron and keeping him amused :-D
I agree that the cult of peer review is overblown. But does MIRI produce any relevant and falsifiable output at all?
I would answer differently than you: "Very inefficiently and with lots of errors".
As opposed to quick, reliable present-day peer-reviewed science? ;-)
What leads you to that conclusion? When do you think peer review began and how do you judge efficiency before and after?
Well, not that this has changed...
Isn't it "cultish" to assume that an organization could do anything better than the high-status Academia? :P
Because many people seem to worry about publishing, I would probably treat it as another form of PR. PR is something that is not your main reason to exist, but you do it anyway, to survive socially. Maximizing academic article production seems to fit here: it is not MIRI's goal, but it would help get MIRI accepted (or maybe not) and it would be good for advertising.
Therefore, AcademiaPR should be a separate department of MIRI, but it definitely should exist. It could probably be done by one person. The job of the person would be to maximize MIRI-related academic articles, without making it too costly for the organization.
One possible method that didn't require even five minutes of thinking: Find smart university students who are interested in MIRI's work but want to stay in academia. Invite them to MIRI's workshops and make them familiar with the work MIRI is doing but doesn't care to publish. Then offer to make them co-authors by having them take the ideas, polish them, and get them published in academic journals. MIRI gets publications, the students get a new partially explored topic to write about; win/win. Also known as "division of labor".
It seems that writing publishable papers isn't easy.
Really? You can't think of another reason to publish than PR?
Just because MIRI researchers' incentives aren't distorted by "publish or perish" culture, it doesn't mean they aren't distorted by other things, especially those that are associated with lack of feedback and accountability.
I think there's definitely not enough thought given to this, especially when they say one of the main constraints is getting interested researchers.
Ever since I started hanging out on LW and working on UDT-ish math, I've been telling SIAI/MIRI folks that they should focus on public research output above all else. (Eliezer's attitude back then was the complete opposite.) Eventually Luke came around to that point of view, and things started to change. But that took, like, five years of persuasion from me and other folks.
After reading su3su2u1's post, I feel that growing closer to academia is another obviously good step. It'll happen eventually, if MIRI is to have an impact. Why wait another five years to start? Why not start now?
+1
A very reasonable suggestion, and I'm not just saying that because I have a PhD. I'm saying it because it's so easy to reinvent the wheel and think you're doing original research when you're really just re-discovering other people's work in a different context. It's very hard to root out these sorts of errors; when I was doing a PhD I thought the work I was doing in developmental biology was new and unique until about a year later I found that the 'new' mathematical problems I had solved had actually been widely used in polymer science for years. I just wasn't able to find the research because none of the search terms matched.
A link to the wider academic community would do a lot to help with MIRI's goals, and a very good way to do this would be undertaking PhDs. It should be a snap for the MIRI folks...
Gwern rubbishes longevity research.
I think he's talking about the dream of achieving indefinite numbers of healthy years.
However, there are some people who live into their 90s in pretty good health, though they're far from the majority. What's the likelihood of just making good health into one's 90s much more likely? I'm not talking about lifestyle improvement-- I'm talking about some technological fix.
So, he's specifically talking about the failures of previous longevity research. It seems to me that modern longevity research has portions that are considerably better (among other things, the reductionistic view appears to be the dominant view among the top researchers). Consider this section in particular:
That Stambler spent too little time on whether or not they actually got the science right / pushed in the right or wrong direction, and spent too much time focusing on their political persuasion, strikes me as highly relevant and interesting when it comes to scientific history (and the modern versions--namely, choosing who to fund or not, and what experiments to pursue or not).
Gwern also makes a more general claim that aging is too complex for any simple solution to be plausible.
I don't think SENS is one of the simple approaches Gwern was referring to in context. The simple approaches are things like turning off a genetically coded "mortality switch," lengthening telomeres, calorie-restriction mimetics, or just getting tons of antioxidants in your diet. Here's a recent Aubrey de Grey interview.
Here's one for the "life pro tips" category since Less Wrong users are mostly male. It seems as though the best way to deal with balding is to catch it as early as possible, because that's the time drug treatments (well Finasteride at least) are most effective. Of the "big 3" baldness treatments, ketoconazole shampoo is available over the counter and has few side effects reported online. (It's also used as an anti-dandruff shampoo.) (EDIT: Looks like it is not recommended to take orally, although I don't see anyone saying that topical application carries risks. Here's a study saying it's about as effective as minoxidil?) I recently noticed that my hairline has receded ever so slightly... after doing some research, I bought some ketoconazole shampoo and am planning to start using it. This brand seems to have fewer bad experience reports and fewer shill reviews on Amazon than other brands. Thoughts? (BTW although it's the safest, ketoconazole also seems to be the least effective of the balding treatments... you should probably hop on the Finasteride if you have a serious problem. More info.)
BTW, there's the 'Boring advice repository', consider cross-posting or linking to this there, so that it would not get lost.
Update on the Slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/
A list of our topics:
These are expected to grow and change as we need them. I count 58 people who have joined so far today. Feel free to PM me as well.
It's worth noting that parenting just opened up.
A Defense of the Rights of Artificial Intelligences by Eric Schwitzgebel and Mara [official surname still to be decided]
"Do Artificial Reinforcement-Learning Agents Matter Morally?" Yes, says Brian Tomasik, even present-day ones (by a very small but nonzero amount). He foresees their ethical significance increasing in the near future, and he isn't talking about strong AI, but an increase in the ordinary applications of reinforcement learning to our technology.
The argument is, briefly: for various claims about what consciousness physically is, RL programs display these features to some extent as well. Therefore they have a nonzero degree of consciousness, and so a nonzero degree of moral standing. Enough that we should be thinking now about guidelines for the ethical creation of such software.
He suggests that, paralleling guidelines for the use of animals in research, RL algorithms should be replaced by others whenever possible, or if they must be used, reduced in number, and driven through rewards, not punishments.
He considers the idea of an organisation of People for the Ethical Treatment of Reinforcement Learners, and the embedding of RL algorithms in humanoid bodies and videogame characters as ways of persuading the public to the idea that they have moral significance.
Regarding prediction markets and regulation, does anyone know whether a betting market wherein the payout for the betting contract goes to the winner's choice in charities (as opposed to going to the winner) would avoid most or all of the legal issues involved?
So, Long Bets? Betting for charity has always been legal AFAIK.
A summary of rather counterintuitive results of the effect of priming on raising people's performance on various tests of cognitive abilities, and the ability to negate (or enhance) the effects of stereotype threat through priming:
"Picture yourself as a stereotypical male"
(It's not all about gender, either. Some of it is about race! How exciting!)
http://slatestarscratchpad.tumblr.com/post/128364907116/gruntledandhinged-drethelin-shlevy
Yes, effects that raise performance are good because they rule out a number of problematic mechanisms. However, this experiment has no control group and thus it does not have this benefit.
Julian Savulescu: The Philosopher Who Says We Should Play God
How to perform surgery on yourself with Clarity
I do irrational things. The other day I bought an interstate flight, somewhat impulsively, to a conference I knew next to nothing about, for complicated reasons. Instant regret, but the cancellation fee is about half the price of the ticket. I also got some art professionally designed, for a few hundred dollars, that I didn't need or want. I've also lost thousands gambling and on the stock exchange. I'm stupid in many ways, but I'm also capable enough to be able to share insights from the other side of sanity with the real world, or so I'd like to think. There are some things I do that aren't rational, for which the term irrational isn't very useful, in the same way that people can be 'not even wrong', perhaps. But enough self-indulgent self-pity and self-handicapping.
I'm finding it hard recently to concentrate on anything other than surgery - particularly self-surgery and how and why I ought to perform it. But I'm not a surgeon. And for this to be rational I ought to have a terminal goal. I don't have one. At best I can rationalise that if I get into a survival situation with no one to help, I could do it myself. But that's extremely unlikely. It's not even rationalisation, since I haven't made the decision; it's merely optimism. Being crazy is hard, so looking on the bright side keeps me from feeling like killing myself. At least this new-found interest is somewhat amusing and somewhat learnable. Sometimes I get interested in areas for which I have nowhere near the prerequisite knowledge, often some technical something in economics or computer science. In those cases I just end up learning things incorrectly. At least surgery is somewhat of a practical skill, and medical students are often taught things superficially (this leads to this, or this is connected to that) rather than, say, rigorously (this is proven by that theorem, or demonstrated by this experiment). To celebrate my 100 karma (and it was a difficult journey!) I just thought I would document this experience and what I'm compelled to research, to give the more rational among you some insight into what it's like to be on the far other side of rationality, and aware of it.
See examples of self-surgery for inspiration. Examples
People who do it are heroic. Don't be half-assed.
Desensitise yourself by snooping on actual surgeries. From experience in psychiatric wards, it shouldn't be very hard to sneak into surgical viewing theatres. Minimal social engineering required; hospitals are shocking with security. (Note: don't actually do this. Remember, this is just to explain my thinking process, which as I mentioned is off the beaten path of sensibility.)
Read this guide, which is the only guide to self-surgery I can find. Though it suggests reading textbooks, the medical textbooks in the surgery section of my local university's library don't seem to be very useful for how to actually do surgery. Maybe one has to learn by watching.
OK, at this point it looks like I've somehow managed to overcome this little excursion from sensibility. I don't really care for self-surgery anymore. My testicles feel kinda sore for no apparent reason, but it feels good knowing that at least they're there and not in a medical waste bin.
In the spirit of radical honesty, I'm going to be posting this highly embarrassing comment then try not to think about it. Certainly won't be my most embarrassing post so far.
Voted up for honesty.
Do you know anything about the difference between the times when your irrational impulses fade and the times when you act on them?
Ahh, the miracle question. I had forgotten about those. Thank you for asking.
My answer is currently no.
Here's what I currently suspect, but I don't have the presence of mind to be confident in this assessment. I'm particularly vulnerable to gambling and to sexual and aesthetic impulses, like compulsively listening to music or staring at art. For instance, I just recently signed up for an international share trading account because I intended to bet about 1/4 of my assets (yes, I'm still not convinced by either the Kelly criterion or modern portfolio theory, since no free lunches!) on one stock I had very little knowledge of. Luckily for me, it takes 5 days to process the international trading account application, and I found it hard to get my mind off the stock, so I started looking up more in-depth information and realised it's not the undervalued, cheap, super awesome stock I thought it would be.
When I'm with people, I also tend to be less goal-oriented and give in to impulses more readily. Another consideration for me is whether these impulses are in the same class as, say, the surgical impulse, since that sounds more delusional than impulsive. None of these categorisations are clear. You've inspired me to sit down properly in the near future, map out different behaviours, then try to summarise underlying commonalities and potential control measures (note to self).
The times when an irrational impulse fades, in contrast, are the times when I can use strict decision-theoretic tools to explain to myself why it's irrational. That's why LessWrong is my scaffold out of insanity. If I can analyse a particular scenario and see that one particular choice dominates another, or model a particular impulse as a tendency to compensate for a sunk cost when I ought to be thinking at the margin, I can grit my way out of it.
Perhaps things are hardest when I'm dealing with extremely high subjective-value options (e.g. jerking off to porn when I'm really horny) or betting a whole lot of money; then I get carried away. Temporally, I discount at several orders of magnitude above hyperbolic, perhaps. But honestly, I don't really know. I'm just chucking intuitions into this comment box. I'll probably add to this answer at some point for my own reference.
As an aside, I saw your comment this morning and was thinking about it in the shower. Recalling the 'miracle question' approach to problem solving made me feel empowered. Later, I listened to a song I hadn't heard in a while just before going into the shower and realised it would motivate me to linger less in there, because I anticipated the joy of continuing to listen to it after I got out. Then I thought about how I could suggest that approach to others who have trouble limiting their shower time, and felt grateful that there are places where I could share that information. At that point, I realised my mood and anxiety had lifted a bit, which I attributed to that sequence of events, cascading from you. I suspect increased self-trust in my ability to handle problems is at the heart of this (so I'll add that to my mental health checklist in the other thread sometime). So thank you! I'm going to be investigating how I can replicate this. I did mess it up a bit by feeling very self-congratulatory, then ruminating for a while and ultimately not getting out of the shower as promptly as perhaps possible, but hopefully that won't occur in the future.
How do you get from "no free lunches" to disagreement with either Kelly or portfolio theory?
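For reference, here's the textbook binary-bet Kelly formula alongside the log-growth objective it maximises; note it presupposes you already have an edge, it doesn't conjure one, so "no free lunch" is orthogonal to it:

```python
import math

def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a bet that wins
    with probability p and pays b-to-1 (the stake is lost otherwise).
    f* = p - (1 - p) / b; a negative result means don't bet at all."""
    return p - (1 - p) / b

def expected_log_growth(f, p, b):
    """Expected log growth per bet when staking fraction f (0 <= f < 1)."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)
```

E.g. a 60%-to-win bet at even odds gives f* = 0.2, and staking either more or less than that strictly lowers expected log growth.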
An argument by Stephen Hsu that boosted-IQ humans will appear before Artifical Intelligence and will co-evolve with AI after that.
Seems to me these two things are incomparable in speed. Imagine that research in genetic engineering will allow us to make each generation have an IQ 20 points higher than the previous one. Could even such IQ-boosted humans compete with a superhuman AI which can rewrite its own source code?
Of course I am making many assumptions here, but the idea is that biological humans will probably still have to go through the cycle of birth and maturation, and face various biological constraints, while AI will not have these obstacles.
Killer robots about to be released into the world's oceans!!eleven!
So says Auntie Beeb.
I bought a $200 prepaid debit card to precommit to getting a Beeminder account that won't fuck up my bank balance. I plan to use it to give up pornography and excessive masturbation (≤1 per week is my goal). However, $200 doesn't have a lot of marginal value to me. I'm thinking of exploiting my irrationality and warm fuzzies by precommitting to donate it to a warm-fuzzies charity, or maybe I'll put the money towards potential dates so I can get a girlfriend as a substitute if I succeed at avoiding fapping and porn. Ideally there would be a system whereby, upon passing the Beeminder test, I could donate to people who had been incentivised to help me stay on the yellow road. I hope Beeminder will let me do that. Any tips or comments? I've never used Beeminder before.
I am curious about your terminal goal here.
I've never heard of this book or author before, anyone read it? How does it compare to eg "Smarter Than Us" or "Our Final Invention"?
Calum Chace, "Surviving AI"
I have yet to see a treatise, for strategic managers or from academics of any domain, on the game theoretic implications of data science and data-driven firm behaviour in general.
I, for one, would expect data-driven organisations to act more rationally and therefore more predictably, meaning that game-theoretically optimal strategic behaviour, or rather an approximation of it (since many data-driven organisations will be stupid, like the many poker players who together form a Nash equilibrium), would maximise expected utility. However, I don't see how machine learning provides an avenue for firms to inform their strategic multi-agent decisions. They instead need to consider artificial intelligence techniques more broadly, and to be able to frame machine learning in that context. This, I suspect, will lead to the gold rush for AGI development. As soon as the potential for this becomes common knowledge, LinkedIn losers will start hailing 'AI expert' as the sexiest job of the 21st century. MIRI, take heed of my warning: if you are not more transparent with your research agenda (which, for those who don't know, is still secret in part), you may find yourself developing FAI solutions far too slowly.
Release your agenda and let others work on your problems cooperatively. Maybe you'll even get a more heterogeneous audience at the Intelligent Agents Forum. Maybe mainstream researchers can craft work on the mathematical foundations of AI or UAI that you can actually use. I suspect the reason this community blog, albeit devoted to human rationality and not machine rationality, devolves into topics like 'polygamy' is that we don't have shared problems to solve.
Human rationality is a very, very awkward construct, and the problem space is unclear and tangential, albeit related to MIRI's work, which, let's admit, is the very reason this place exists. Let us run wild, and perhaps LessWrongers will start alternative agendas, like developing criminal networks and intelligence networks so that potentially hostile AI could be detected in advance and stopped coercively. I'm just giving the first example I could think of.
My point is: you don't have any significant proprietary hard assets, so why shouldn't I, or any other prospective funder, instead create a prize or award for a more transparent FAI research organisation to pivot off your incredible work? I'm not in a position to judge whether or not your ongoing contributions are essential, but this could also be a good opportunity for the community to discuss what will happen if or when you die or become incapable of contributing to the community. The same goes for other critical members of the community. Are there intellectual succession processes in place?
What hypothesis are you testing, or is gnawing at the back of your mind, in relation to LessWrong, as you surf LessWrong right now? Or perhaps you're just surfing idly.
For me it's: has anyone experimented with replacing their time socialising with friends with LessWrong exclusively? I wonder if the benefits associated with socialising, such as increased well-being, can instead be obtained from interaction in online communities.
Though, I suspect the nature of the community would be a strong determinant of the outcome. For instance, Facebook would probably be unhealthy, as would IRC exclusively, but the LessWrong community as a whole (excluding the IRL meetup community) may be great! I feel like I've basically outgrown all the friends I don't have some sort of professional relationship with, or a codependent/insecure attachment towards, anyway.
Could a moderator please nuke the swidon account and all of its posts?
The account is nuked. I need to find out how to remove posts.
Is anyone willing to share an Anki deck with me? I'm trying to start using it. I'm running into a problem likely derived from having never, uh, learned how to learn. I look through a book or a paper or an article, and I find it informative, and I have no idea what parts of it I want to turn into cards. It just strikes me as generically informative. I think that learning this by example is going to be by far the easiest method.
There are many shared Anki decks. In my experience, the hardest thing to get correct in Anki is picking the correct thing to learn, and seeing someone else's deck doesn't work all that well for it because there's no guarantee that they're any good at picking what to learn, either.
Most of my experience with Anki has been with lists, like the NATO phonetic alphabet, where there's no real way to learn them besides familiarity, and the list is more useful the more of it you know.
What I'd recommend is either picking selections from the source that you think are valuable, or summarizing the source into pieces that you think are valuable, and then sticking them as cards (perhaps with the title of the source as the reverse). The point isn't necessarily to build the mapping between the selection and the title, but to reread the selected piece in intervals determined by the forgetting function.
Alright, I'll be a little more clear. I'm looking for someone's mixed deck, on multiple topics, and I'm looking for the structure of cards, things like length of section, amount of context, title choice, amount of topic overlap, number of cards per large scale concept.
I am really not looking for a deck that was shared because it contains easily transferable information like the NATO alphabet; I'm looking for how other people do the process of creating cards for new knowledge.
I am missing a big chunk of intuition on learning in general, and this is part of how I want to fix it. I also don't expect people to really be able to answer my questions on it, and I don't expect that I've gotten every specification. Which is why I wanted the example deck.
Edit: So I can't pull a deck off Ankiweb because I want the kind of decks nobody puts on Ankiweb.
Disabled people can benefit from sex. Presumably, some disabled people cannot access sex without paying for it (including the neurodevelopmentally disabled, the mentally ill, etc.). There are barriers to sex workers providing for disabled clients. Unfortunately, there are compelling misconceptions that criminalising the buying of sex is helpful to society, when the evidence appears overwhelmingly on the other side, not to mention the stigma around, and limited access to, information about the rewards of sexual experience for sex workers' clients. Further, existing advocacy for the rights of sex workers and their clients outside of Europe is overly gentle, rarely attacking the other side. I hypothesise that this is because only an extremely small minority of people have the prerequisite compassion, steadfastness against stigma, and endurance against low status to do something that is good but won't 'look' good.
Someone changed the password on the Username public throwaway account. It's a shame a troll finally got to it after several years.
I actually meant to ask at some point whether the Username account would have protection against people changing passwords willy-nilly, but I didn't because, you know... information hazards and all that. Didn't want to give people the idea. But now that it's happened, I suppose I could ask retrospectively: how come nobody ensured some protection against that?
Because, in general, a forum designed to allow anonymous comments would allow anonymous comments directly, and not make people go through the hack of using a separate account for it. The account wasn't created by any moderator but simply by a user who thought such an account would be good to have.
While we're in infohazard territory: it's not only possible to change passwords. It's also possible to delete accounts.
It's worth contacting a moderator and seeing whether they can do anything about it.
I'm looking for a high-quality parenting blog, one with relatively frequent, well-written content which might accept guest contributions, or one with a discussion forum that's not just gossiping. It can be English-speaking or German. I'd like to try my hand at some posts before opening my own blog. Any ideas?
Something which may prove interesting to somebody here:
A tentative list of internal states (certainly incomplete), divided into emotions and mental states. I distinguish between emotions and mental states on the basis of something I can't quite put my finger on, but I'm reasonably certain there -is- a difference, something like the difference between color photographs and black-and-white photographs. (It's quite fuzzy in some places, though, so not everything neatly fits in one or the other. Suspicious/paranoid, for example, I quibble about the placement of.) I've done a few passes at combining emotions I suspect are identical except for context and intensity. You'll notice emotions like "Happy" and "Angry" aren't present - unless somebody can correct me, I think these aren't distinct emotions in and of themselves, but simplifications of a broad range of more complex emotions. (A couple permutations of "Angry" show up under "Rage"). Some words show up multiple times, where the word appears to refer to more than one emotional state, with clarifications.
Out of the emotions listed, I experience somewhere around a third of them, which makes it hard to evaluate how distinct they actually are, and in other places leads me to incorrectly consider them separate internal states. Of the mental states, I experience most of them (which is why I think the sorting criterion isn't -entirely- arbitrary). As for the uncertain ones: I have no idea whether those are actually distinct feelings, or just ways people describe other people's behavior, so it's safe to say that if they are experienceable, they're among the things I don't experience.
The list is largely comprised of entries from the following list: https://robbsdramaticlanguages.files.wordpress.com/2014/07/vocabulary-expand.jpg.
Some I've omitted as being, as far as I can tell, embellishments. I've added others, as well.
Emotions:
Mental States:
Uncertain:
I would like to point out a concept that has recently entered into my life.
Sometimes these emotions are generated internally, and often the word for the emotion is one that describes an emotion that "pulls" you to feel that way. An example is "appreciated", where something else gives you the feeling of being appreciated; it's not an emotion you can give to yourself (only recognise in yourself), whereas distress, or hesitation, can come from yourself.
Not sure how that adds to the list exactly.
I made a spreadsheet of how often I think I experience each one, on a scale of 1-10: https://docs.google.com/spreadsheets/d/1lkOftycrnhjSdbC6cExawoiyX-Jbn9wuxg2GlCjGeh4/edit?usp=sharing . Nothing is a 9 or 10, because that would imply I experience it all the time.
In digital markets with extremely quick liquidity, like the stock exchange, is investing based on macroeconomic factors and megatrends foolhardy? Is it only sensible to invest when one has privileged information, including via analysis of public data at a level no one else has done?
Unpack the question. What do you mean by "foolhardy"? What is your next-best option for your money?
In almost all cases, you should opt not to make a wager on a topic where you are at an information disadvantage. However, investments are not purely a wager - they're also direction of capital and sharing of risk (and reward) with for-profit organizations. It's quite possible that you can lose the wager part of your investment and still do fairly well on the long-term rewards of corporate shared ownership.
One shouldn't expect to systematically beat the market without privileged information. But even "trying to beat the market" (depending on what exactly that strategy entails) or doing what you describe is often better than what most people do in terms of actually growing their savings. Financial securities (especially stocks) have high enough long-run expected returns such that a "strategy" of routinely accidentally slightly overpaying for them and holding them still results in a lot more money than not investing at all.
Not investing is far worse than shoving your money into random stocks and committing to reinvest all dividends for the next 50 years.
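To put rough numbers on that claim (the rates here are my own illustrative assumptions, not forecasts: an assumed 7%/year for a broad stock portfolio with dividends reinvested, 5%/year for a clumsy investor who routinely slightly overpays, 0% for cash held uninvested):

```python
# Hypothetical growth of $1 over 50 years, dividends reinvested.
# All rates are illustrative assumptions.
years = 50
stocks = 1.07 ** years   # broad market at an assumed 7%/year -> about 29.5x
clumsy = 1.05 ** years   # routinely overpaying, netting 5%/year -> about 11.5x
cash = 1.00 ** years     # not investing at all -> 1x
```

Under these assumptions, even the clumsy investor ends up with roughly an order of magnitude more than the non-investor, which is the point: compounding dominates small pricing mistakes.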
Is there absolute utility maximisation in portfolio diversification, or is it just a risk-control mechanism? Could I pick one random stock and put a whole lot of money in it? I suspect I may be misapplying the law of large numbers here (or committing the gambler's fallacy).
If you're not familiar with it, you should check out www.bogleheads.com for investment/finance advice.
(Not trying to discourage you from discussing this here... just that if you don't know bogleheads, it's quite valuable)
Look at Kelly Betting for some information on why "risk control" is utility maximization.
Presuming you have declining marginal utility for money, picking one random stock gives you the same average/expected monetary outcome, but far lower utility.
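A quick simulation of that point, under a toy assumption of my own (each stock independently doubles or halves with equal probability, so the expected dollar outcome is identical whether you buy one stock or ten), with log utility standing in for declining marginal utility:

```python
import math
import random

def avg_log_utility(n_stocks, trials=20000, seed=0):
    """Average log-utility of final wealth when $1 is split evenly
    across n_stocks independent stocks, each of which doubles or
    halves with probability 1/2 (a toy model, not market data)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        wealth = sum((2.0 if rng.random() < 0.5 else 0.5) / n_stocks
                     for _ in range(n_stocks))
        total += math.log(wealth)
    return total / trials

one_stock = avg_log_utility(1)    # concentrated
ten_stocks = avg_log_utility(10)  # diversified
```

The expected dollar amount is $1.25 either way, but the diversified portfolio has higher average log utility; with one stock, the doubling and halving exactly cancel in log terms, so its expected log utility is zero.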
This was a productive use of my time: a panel with Peter Thiel, Aubrey de Grey (who I don't know) and Eliezer Yudkowsky.
Tweet Sized Insight Porn
Hope LW likes it. Open for tweet suggestions.
Inability and Obligation in Moral Judgment
My conscience is as hypertrophied as the next person's, but how is a balance struck between avoiding cognitive biases, logical fallacies, etc., and enjoying life?
I think I know what you are talking about.
There are roughly two modes of functioning: "never thinking hard and going with the flow", and "thinking hard about what happened". I would suggest that these are like System 1/System 2 processes applied to living, where if you only operate in System 2 you have an exhausting life in which you feel like you never get far, because you didn't actually do the washing; you just thought really hard about it. You never really had fun; you just thought hard about it. And so on.
The important thing to note is that we need both System 1 and System 2 to get things done. You are concerned about the balance, and rightly so!
In my post here (http://lesswrong.com/lw/mj7/3_classifications_of_thinking_and_a_problem/), Slider suggested a heuristic for producing results in the area of knowing how to balance.
In this case, because you are balancing "hard thinking about the problem" and "enjoying life": if you find you are not enjoying life, reduce the time you spend hard-thinking. If you find you are making mistakes, or needing more planning time to make things work the way you want them to, increase hard-thinking time. If you want to increase both at once, take a break and work on a problem of no consequence.
This is a broad question, and it will get broad answers.
Can you give some examples when avoiding biases made life less enjoyable?
For me, avoiding biases means a cognitive load which means I have to be vigilant which means I can't relax. Perhaps when and if avoiding all/most of the foibles becomes second nature then it will be less of a load. I hope! :)
Famous neurologist and science popularizer Oliver Sacks has died. Which of his books are your favorites?
Awakenings is a perennial favorite, a cohort of people with severe Parkinsonism given levodopa all at once (and going through the several month long process of becoming nearly completely functional with the quirks that come from excess dopamine, then their brains slowly losing homeostasis in the face of the exogenous uncontrolled neurotransmitters).
Seeing Voices, a look into the perceptions of the deaf and the nuances of signed languages, was fascinating to me.
The macro/micro validity tradeoff
Hilary Putnam, one of the most famous philosophers of the twentieth century, has a blog
Solving a Non-Existent Unsolved Problem: The Critical Brachistochrone
I think you were the person using the username account to post in this style. Thank you for making an account and welcome :)
An interesting paper by the name of Fuck nuance.
Abstract:
No, I'm not kidding, this is the actual abstract at the beginning of the paper.
Technically, it's about sociological theories, but I feel the general principle applies much more widely.
(Normally I would quote a teaser chunk of the paper here, but this PDF file seems unusually resistant to copy-and-paste-as-text and I don't feel like manually inserting back all the spaces between the words...)
Nancy Leibowitz was quoting this. Having spent the weekend reading 20th century French philosophers, this was refreshing. From the paper:
It's not a loose analogy. It's a literal description of an example of the sort of thing that should happen in the reality underlying the theory.
There is another aspect to nuance that I don't yet see mentioned in the paper. In French philosophy, the nuance is nuance of interpretation, not an attempt to handle more cases. Many theories are presented without having any cases at all that they handle! Jacques Lacan, for instance, only described one case history during his entire career; he presented detailed theories of personality development with no citations or data.
This happens with many who descend academically from Hegel: Marx, Lacan, Derrida. The model is not "nuanced" in the sense of handling many cases; it is never demonstrated to handle any data at all, or at best one over-simplified case (a general claim, or a particular sentence which the philosopher made up to illustrate the model). The nuance is all in the interpretation. It complexifies the theory without enabling it to handle any more cases--the worst of both worlds.
Thanks for mentioning that I'd already brought up the paper. I've got three quotes here.
My last name is Lebovitz.
I think of the way people tend to get it wrong as a rationality warning. I know about those errors because I have an interest in my name, but the commonness of the errors suggests that people get a tremendous amount wrong. How much of it matters? How could we even start to find out?
Sorry for misspelling your name. I don't think memory errors are rationality errors.
Dilbert creator Scott Adams, who has a fantastic rationalist-compatible blog, is giving Donald Trump a 98% chance of becoming president because Trump is using advanced persuasion techniques. We probably shouldn't get into whether Trump should be president, but do you think Adams is correct, especially about what he writes here? See also this, this, and this.
I think Scott Adams wildly overestimates the power of conversational hypnosis.
First of all, yes, there have been prominent public figures who are well versed in the art. But that's no argument at all: how many people are trained in conversational hypnosis (or NLP, or what have you), and how many of those are hyper-successful? And how many hyper-successful people are not trained in Ericksonian hypnosis? You could even make the point that Steve Jobs and Bill Clinton were successful despite being trained in that art.
There's also something to be said about whether returns on persuasion are even linear. If you are 2x more persuasive than your opponent, would you gain twice the supporters? I'm not very confident in this hypothesis either.
There might be a network externality effect with persuasion, where the more people I persuade the more persuasive I become because of social proof issues. In this situation, the returns to persuasion are exponential.
Why do so many people see Adams as being rationality-compatible? I've seen very little that he has to say that sounds at all rational or helpful. Cynical != rational.
See my review of his book: http://lesswrong.com/lw/jdr/review_of_scott_adams_how_to_fail_at_almost/
Having written a rationality-compatible book isn't the same thing as writing a rationality-compatible blog. (It surely indicates being able to write a rationality-compatible blog, but his actual goals may be different.)
Well... Scott Adams has a lot of money. I am willing to bet that Trump will NOT become president, at EVEN ODDS. Scott, if you read this, how about a wager? I propose a $10,000 stake.
Despite his frequent comments that he's "betting" on Trump and that Silver is "betting" against Trump, Adams's position, when pressed to actually bet, is that gambling is illegal. This means one of the big feedback mechanisms preventing outlandish probabilities is absent, so don't take his stated probabilities at face value.
(In general, remember how terrible people are at calibration: a 98% chance probably corresponds to about a 70% chance in actuality, if Adams is an expert in the relevant field.)
How convenient for him.
And Adams himself says the "smart money" is on Silver's prediction! I think Adams's prediction is more performative than prognostic, even allowing for ordinary unconsciously bad calibration.
Did Adams praise Obama for skillful use of vagueness? "Hope" seems to be in the same category as "take your country back".
I think Adams is right that Trump has played the media exceedingly well and he has clearly surprised a lot of people. Some Republican pollsters have focus-grouped Trump supporters and found an extreme level of antipathy among them toward "establishment" Republicans. So it is unlikely his current supporters will abandon him in a sudden collapse, which is the failure mode a lot of Trump-skeptics have been describing. That means Trump will likely stay in the race for a long time--unless he gets bored and drops out. I doubt Trump will actually drop out though, he seems to enjoy the fray and clearly hates many establishment conservatives enough to stay in just to have a platform to keep attacking them.
Most likely Trump will split the anti-establishment vote with Ben Carson and eventually most of the establishment candidates will drop out and throw their support to an establishment survivor, who will manage to beat Trump with solid but not huge majorities and take the nomination. If Trump does manage to win the nomination, it is unlikely he will win the white house--odds are less than even, maybe 2:1 against him. Overall I would estimate a ~10% chance Trump wins the presidency.
Forgetting what I know (or think I know) about Scott Adams, Donald Trump, Nate Silver, Jeb Bush, whoever, and going straight to the generic reference class forecast — I'm very sceptical someone could predict US presidential elections with 98% accuracy 14 months in advance.
Actuarial tables give him a roughly 2% chance of dying before the election.
Well, he's very likely substantially healthier than the average 69-year-old American man, so I'd be willing to bet at 1/50 odds that he will survive to the election.
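A back-of-envelope version of that actuarial figure (the flat 2% annual mortality rate is my rough round number for a 69-year-old American man, not a quoted life-table entry):

```python
# Hedged sketch: assume a flat 2% annual mortality hazard and
# 14 months remaining until the election.
annual_mortality = 0.02
months_to_election = 14

p_survive = (1 - annual_mortality) ** (months_to_election / 12)
p_die = 1 - p_survive  # a bit over 2%
```

Under that assumption, the chance of dying before the election comes out a bit above 2%, consistent with offering around 1/50 odds before adjusting for individual health.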
I wouldn't put it at 98%, but I definitely wouldn't put it at Nate Silver's 2%, which I think comes from an analysis that is just way too simplistic.
I would take Silver's analysis over Adams' any day. Look at their respective prediction track records.
Were any of Silver's previous predictions generated by making a list of possibilities, assuming each was a coin flip, multiplying 2^N, and rounding? I get the impression that he's not exactly employing his full statistical toolkit here.
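For reference, the arithmetic being poked at here, as I read Silver's piece (the count of six hurdles is from my memory of his "stages of doom" framing, so treat it as an assumption):

```python
n_hurdles = 6        # assumed: six stages Trump must clear
p_each = 0.5         # each treated as a coin flip
p_nomination = p_each ** n_hurdles   # 1/64, which rounds to about 2%
```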
Isolated demands for rigor -- what do you think Adams is doing? (I think he's generating traffic.)
But sure, I agree, that's more of a reasonable prior than an argument. There's more info on the table now.
What Adams does is that he looks at Silver's estimate, says that it is way too low and then takes 1 minus Silver's estimate as his own estimate just to make a point. He does not attempt any statistical analysis and the 98% figure should not be taken seriously.
It was because of Nate Silver's track record that I initially had high confidence in his estimate. Then as I read his justification my confidence in his estimate decreased. I think he's just being lazy in his justification, here, when he says things like:
To be fair to Silver, when he wrote the article he might not have considered Trump's campaign plausible enough to give serious thought. I suspect that if Trump continues to perform well in the polls Silver will give a more thoughtful and realistic analysis later on.
Does Adams have a track record at predicting this sort of thing? I am not aware of any instance of him saying "here is a master persuader trying to do X; they will succeed" and their having failed, but I can't remember more than one instance of him saying that and it being correct (and I don't remember the specifics). I don't follow Adams closely enough to have a good count.
I think that Adams is raising the sort of challenge that Silver is weakest against: Trump's tactics are a "black swan" in the technical sense that no candidate in Silver's dataset has run with a similar methodology. That Silver thinks Herman Cain's campaign is the right reference class for Trump's campaign seems to me like a very strong argument for Silver not getting what's going on.
He has an excellent track record of saying outrageous things -- that's what he is optimizing for, I think.
I think Scott Adams has taken to trolling the readers of his blog.
Taken to? He's been doing it for like a decade at this point.
Meta: in posting the open thread at this time, I note that it is Monday where I am in Sydney, Australia, even though this is roughly 6-12 hours earlier than usual to start the open thread. (Hope you all have a good week ahead.)
I like Comic Sans too, but is it intended?
apologies again! (same as last OT)