Open Thread August 31 - September 6
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (326)
Meta: in posting the open thread at this time I note that it is Monday where I am in Sydney Australia; even if this is roughly 6-12 hours earlier than usual to start the open thread. (hope you all have a good week ahead)
I like Comic Sans too, but is it intended?
apologies again! (same as last OT)
Dilbert creator Scott Adams, who has a fantastic rationalist-compatible blog, is giving Donald Trump a 98% chance of becoming president because Trump is using advanced persuasion techniques. We probably shouldn't get into whether Trump should be president, but do you think Adams is correct, especially about what he writes here? See also this, this, and this.
I think Scott Adams has taken to trolling the readers of his blog.
Taken to? He's been doing it for like a decade at this point.
I wouldn't put it at 98%, but I definitely wouldn't put it at Nate Silver's 2%, which I think comes from an analysis that is just way too simplistic.
I would take Silver's analysis over Adams' any day. Look at their respective prediction track records.
Does Adams have a track record at predicting this sort of thing? I am not aware of any instance where he said "here is a master persuader trying to do X, they will succeed" and they then failed, and I can remember only one instance of him saying that and being correct (I don't remember the specifics), but I don't follow Adams closely enough to have a good count.
I think that Adams is raising the sort of challenge that Silver is weakest against: Trump's tactics are a "black swan" in the technical sense that no candidate in Silver's dataset has run with a similar methodology. That Silver thinks Herman Cain's campaign is the right reference class for Trump's campaign seems to me like a very strong argument for Silver not getting what's going on.
He has an excellent track record of saying outrageous things -- that's what he is optimizing for, I think.
It was because of Nate Silver's track record that I initially had high confidence in his estimate. Then as I read his justification my confidence in his estimate decreased. I think he's just being lazy in his justification, here, when he says things like:
To be fair to Silver, when he wrote the article he might not have considered Trump's campaign plausible enough to give serious thought. I suspect that if Trump continues to perform well in the polls Silver will give a more thoughtful and realistic analysis later on.
Were any of Silver's previous predictions generated by making a list of possibilities, assuming each was a coin flip, multiplying 2^N, and rounding? I get the impression that he's not exactly employing his full statistical toolkit here.
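If that is the method, the arithmetic is just repeated halving. A minimal sketch (the six-stage count is an assumed illustrative value, not necessarily Silver's actual model):

```python
# Treat the campaign as N sequential hurdles, each modeled as an
# independent coin flip. N = 6 is an assumed illustrative value.
n_stages = 6
p_win = 0.5 ** n_stages
print(f"{p_win:.1%}")  # 1.6%, which rounds to roughly 2%
```

Which is exactly why it looks like a prior dressed up as a prediction: every number in it is a convention, not an inference from data.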
Isolated demands for rigor -- what do you think Adams is doing? (I think he's generating traffic.)
But sure, I agree, that's more of a reasonable prior than an argument. There's more info on the table now.
What Adams does is that he looks at Silver's estimate, says that it is way too low and then takes 1 minus Silver's estimate as his own estimate just to make a point. He does not attempt any statistical analysis and the 98% figure should not be taken seriously.
What Adams has said he's doing is simulating the future along the mainline prediction--i.e. nothing too weird happens--and under his model, Trump is guaranteed to win. Then he says "well, maybe something weird will happen" and drops that confidence by 2%, instead of a more reasonable 30% (or 50%).
Forgetting what I know (or think I know) about Scott Adams, Donald Trump, Nate Silver, Jeb Bush, whoever, and going straight to the generic reference class forecast — I'm very sceptical someone could predict US presidential elections with 98% accuracy 14 months in advance.
Actuarial tables give him a roughly 2% chance of dying before the election.
Well, he's very likely substantially healthier than the average 69-year-old American man, so I'd be willing to bet at 1/50 odds that he will survive to the election.
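For what it's worth, the ~2% figure is easy to reproduce from a life table. A sketch, where the annual mortality rate is an assumed illustrative value for a 69-year-old American man rather than an exact actuarial figure:

```python
# Assumed annual mortality rate at age 69 (illustrative, not an
# exact actuarial figure)
annual_mortality = 0.022
months_to_election = 14

# Probability of dying at some point in the next 14 months,
# treating the hazard as constant over the period
p_die = 1 - (1 - annual_mortality) ** (months_to_election / 12)
print(round(p_die, 3))
```

Any health adjustment for being fitter than average just shades this downward, which is the basis for taking the survival side of that bet.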
I think Adams is right that Trump has played the media exceedingly well and he has clearly surprised a lot of people. Some Republican pollsters have focus-grouped Trump supporters and found an extreme level of antipathy among them toward "establishment" Republicans. So it is unlikely his current supporters will abandon him in a sudden collapse, which is the failure mode a lot of Trump-skeptics have been describing. That means Trump will likely stay in the race for a long time--unless he gets bored and drops out. I doubt Trump will actually drop out though, he seems to enjoy the fray and clearly hates many establishment conservatives enough to stay in just to have a platform to keep attacking them.
Most likely Trump will split the anti-establishment vote with Ben Carson and eventually most of the establishment candidates will drop out and throw their support to an establishment survivor, who will manage to beat Trump with solid but not huge majorities and take the nomination. If Trump does manage to win the nomination, it is unlikely he will win the White House--odds are less than even, maybe 2:1 against him. Overall I would estimate a ~10% chance Trump wins the presidency.
Did Adams praise Obama for skillful use of vagueness? "Hope" seems to be in the same category as "take your country back".
Well... Scott Adams has a lot of money. I am willing to bet that Trump will NOT become president, at EVEN ODDS. Scott, if you read this, how about a wager? I propose a $10,000 stake.
Despite his frequent comments that he's "betting" on Trump and that Silver is "betting" against Trump, when pressed to actually bet, Adams's position is that gambling is illegal. This means one of the big feedback mechanisms preventing outlandish probabilities is not there, so don't take his stated probabilities at face value.
(In general, remember how terrible people are at calibration: a 98% chance probably corresponds to about a 70% chance in actuality, if Adams is an expert in the relevant field.)
And Adams himself says the "smart money" is on Silver's prediction! I think Adams's prediction is more performative than prognostic, even allowing for ordinary unconsciously bad calibration.
How convenient for him.
Why do so many people see Adams as being rationality-compatible? I've seen very little that he has to say that sounds at all rational or helpful. Cynical != rational.
See my review of his book: http://lesswrong.com/lw/jdr/review_of_scott_adams_how_to_fail_at_almost/
Having written a rationality-compatible book isn't the same thing as writing a rationality-compatible blog. (It surely indicates being able to write a rationality-compatible blog, but his actual goals may be different.)
Tumblr user su3su2u1 (probably most known to LWers for his critiques of HPMOR's scientific claims, and subsequent fallout with Eliezer) has an interesting post about MIRI's research strategy. I think it has some really good ideas. What do other folks think?
It seems like a lot of focus on MIRI giving good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.
The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.
If you have outside-view criticisms of an organization and you're suddenly put in charge of them, the first thing you have to do is check the new inside-view information available and see what's really going on.
Ever since I started hanging out on LW and working on UDT-ish math, I've been telling SIAI/MIRI folks that they should focus on public research output above all else. (Eliezer's attitude back then was the complete opposite.) Eventually Luke came around to that point of view, and things started to change. But that took, like, five years of persuasion from me and other folks.
After reading su3su2u1's post, I feel that growing closer to academia is another obviously good step. It'll happen eventually, if MIRI is to have an impact. Why wait another five years to start? Why not start now?
+1
I think there's definitely not enough thought given to this, especially when they say one of the main constraints is getting interested researchers.
Just because MIRI researchers' incentives aren't distorted by "publish or perish" culture, it doesn't mean they aren't distorted by other things, especially those that are associated with lack of feedback and accountability.
Isn't it "cultish" to assume that an organization could do anything better than the high-status Academia? :P
Because many people seem to worry about publishing, I would probably treat it as another form of PR. PR is something that is not your main reason to exist, but you do in anyway, to survive socially. Maximizing the academic article production seems to fit here: it is not MIRI's goal, but it would help to get MIRI accepted (or maybe not) and it would be good for advertising.
Therefore, AcademiaPR should be a separate department of MIRI, but it definitely should exist. It could probably be done by one person. The job of the person would be to maximize MIRI-related academic articles, without making it too costly for the organization.
One possible method that didn't require even five minutes of thinking: find smart university students who are interested in MIRI's work but want to stay in academia. Invite them to MIRI's workshops and make them familiar with the work MIRI is doing but doesn't care to publish. Then offer to make them co-authors: they take the ideas, polish them, and get them published in academic journals. MIRI gets publications, the students get a partially explored topic to write about; win/win. Also known as "division of labor".
Really? You can't think of another reason to publish than PR?
I can.
But PR also plays a role here, and this is how to fix it relatively cheaply. And it would also provide feedback about what people outside of MIRI think about MIRI's research.
I think the primary purpose of peer review isn't PR, but sanity checking. Peer reviewed publications shouldn't be a concession to outsiders, but the primary means of getting work done.
It seems that writing publishable papers isn't easy.
One dictionary definition of academia is "the environment or community concerned with the pursuit of research, education, and scholarship." By this definition MIRI is already part of academia. It's just a separate academic island with tenuous links to the broader academic mainland.
MIRI is a research organization. If you maintain that it is outside of academia then you have to explain what exactly makes it different, and why it should be immune to the pressures of publishing.
Low-quality publications don't get accepted and published. I know of no universities that would rather have a lot of third-rate publications than a small number of Nature publications. I'll agree with you that things like impact factor aren't good metrics but that's somewhat missing the point here.
If MIRI doesn't publish reasonably frequently (via peer review), how do you know they aren't wasting donor money? Donors can't evaluate their stuff themselves, and MIRI doesn't seem to submit a lot of stuff to peer review.
How do you know they aren't just living it up in a very expensive part of the country doing the equivalent of freshman philosophizing in front of the white board. The way you usually know is via peer review -- e.g. other people previously declared to have produced good things declare that MIRI produces good things.
A very reasonable suggestion, and I'm not just saying that because I have a PhD. I'm saying it because it's so easy to reinvent the wheel and think you're doing original research when you're really just re-discovering other people's work in a different context. It's very hard to root out these sorts of errors; when I was doing a PhD I thought the work I was doing in developmental biology was new and unique until about a year later I found that the 'new' mathematical problems I had solved had actually been widely used in polymer science for years. I just wasn't able to find the research because none of the search terms matched.
A link to the wider academic community would do a lot to help MIRI's goals, and a very good way to do this would be undertaking PhDs. It should be a snap for the MIRI folks...
Do you have any ideas about how it could be made easier to find out whether you're just rediscovering previous work?
Eliminate context, reduce problems to their abstract fundamentals, collaborate with other people who might have a chance of having been exposed to similar problems in other domains.
An interesting paper by the name of Fuck nuance.
Abstract:
No, I'm not kidding, this is the actual abstract at the beginning of the paper.
Technically, it's about sociological theories, but I feel the general principle applies much more widely.
(Normally I would quote a teaser chunk of the paper here, but this PDF file seems unusually resistant to copy-and-paste-as-text and I don't feel like manually inserting back all the spaces between the words...)
Nancy Leibowitz was quoting this. Having spent the weekend reading 20th century French philosophers, this was refreshing. From the paper:
It's not a loose analogy. It's a literal description of an example of the sort of thing that should happen in the reality underlying the theory.
There is another aspect to nuance that I don't yet see mentioned in the paper. In French philosophy, the nuance is nuance of interpretation, not an attempt to handle more cases. Many theories are presented without having any cases at all that they handle! Jacques Lacan, for instance, only described one case history during his entire career; he presented detailed theories of personality development with no citations or data.
This happens with many who descend academically from Hegel: Marx, Lacan, Derrida. The model is not "nuanced" in the sense of handling many cases; it is never demonstrated to handle any data at all, or at best one over-simplified case (a general claim, or a particular sentence which the philosopher made up to illustrate the model). The nuance is all in the interpretation. It complexifies the theory without enabling it to handle any more cases--the worst of both worlds.
Thanks for mentioning that I'd already brought up the paper. I've got three quotes here.
My last name is Lebovitz.
I think of the way people tend to get it wrong as a rationality warning. I know about those errors because I have an interest in my name, but the commonness of the errors suggests that people get a tremendous amount wrong. How much of it matters? How could we even start to find out?
Sorry for misspelling your name. I don't think memory errors are rationality errors.
Judging by the particular way you mis-spelled the name, I'd guess your memory is more auditory in nature?
It's not a memory error, it's a hasty pattern-match error.
I agree that it's a pattern-match error, but I think I'd classify that as a type of memory error.
I think of memory errors as retrieving something other than what was stored. In this case I doubt people "stored" your name correctly -- most likely they interpreted it wrong to start with. It's a perception error, then.
Excellent point. These errors are fairly common. When I use this username, I somewhat frequently see people write it as brettel. I guess that means that they interpret it as brett-el, when in reality it's b-trettel. I can understand this.
Memory errors have a bearing on rationality because you need accurate data to think about, and one of the primary causes of not remembering something is not having noticed it.
I can say my name twice, spell it, and show people a business card, and still have them get it wrong.
If you want more about how little people perceive, I recommend Sleight of Mind, a book about neurology and stage magic.
Solving a Non-Existent Unsolved Problem: The Critical Brachistochrone
I think you were the person using the username account to post in this style. Thank you for making an account and welcome :)
Hilary Putnam, one of the most famous philosophers of the twentieth century, has a blog
The macro/micro validity tradeoff
Famous neurologist and science popularizer Oliver Sacks has died. Which of his books are your favorites?
Awakenings is a perennial favorite, a cohort of people with severe Parkinsonism given levodopa all at once (and going through the several month long process of becoming nearly completely functional with the quirks that come from excess dopamine, then their brains slowly losing homeostasis in the face of the exogenous uncontrolled neurotransmitters).
Seeing Voices, a look into the perceptions of the deaf and the nuances of signed languages, was fascinating to me.
My conscience is as hypertrophied as the next person's, but how is a balance struck between avoiding cognitive biases, logical fallacies, etc., and enjoying life?
This is a broad question, and it will get broad answers.
Can you give some examples when avoiding biases made life less enjoyable?
For me, avoiding biases means a cognitive load which means I have to be vigilant which means I can't relax. Perhaps when and if avoiding all/most of the foibles becomes second nature then it will be less of a load. I hope! :)
Ok, can you give an example of when you felt less relaxed, and the bias this helped you avoid?
One approach could be to set priorities. "How important is it if I do this not-optimally? What are the consequences of cognitive biases leading me to a poor choice here?" and to be vigilant on the most important stuff, and let it go for lower priority things.
However, practice can help, and sometimes it is easier to catch oneself on tasks or issues of a smaller scale than on the big important ones. So practicing on the lower priority ones can be useful.
Vigilance takes energy. Awareness...not as much. Maybe a shift toward developing awareness rather than vigilance could help.
Would it be bad if you gave yourself time off for specific durations and/or activities?
I think I know what you are talking about.
There are almost two modes of functioning: "never thinking hard and going with the flow", and "thinking hard about what happened". I would suggest that these are like System 1/System 2 processes applied to living. If you operate only in System 2 you have an exhausting life where you feel like you never get far, because you didn't actually do the washing; you just thought really hard about it. You never really had fun; you just thought hard about it. Etc.
The important thing to note is that we need both System 1 and System 2 to get things done. You are right to be concerned about the balance!
In my post here: http://lesswrong.com/lw/mj7/3_classifications_of_thinking_and_a_problem/ Slider suggested a heuristic for finding the balance.
In this case you are balancing "hard thinking about the problem" and "enjoying life". If you find you are not enjoying life, reduce the time you spend hard-thinking. If you find you are making mistakes, or needing more planning time to make things work the way you want them to, increase hard-thinking time. If you want to increase both at once, take a break and work on a problem of no consequence.
Inability and Obligation in Moral Judgment
Tweet Sized Insight Porn
Hope LW likes it. Open for tweet suggestions.
Anyone ever try modeling internal monologue as political parties? I suppose it's not so different from the House voices in HPMOR, but I'm curious if there's RL experience.
In the US at least, where the system is set up such that there can only be two parties that matter, I think the parties are too much of a "big tent" hodgepodge for that to work. Perhaps it would work if it were based on the parties in a country where they have more of an incentive to be organized around a consistent world view.
Any Germans want to weigh in?
Why would you want to dumb yourself down? X-/
I've tried to model it as it was shown on Herman's Head. It helps me remember that I don't have to listen only to my inner wimp.
I've been thinking about different ways to model the adaptive system of thought and ideas in my mind. Governments don't seem like a helpful model because parts of my mind aren't as autonomous as people, nor do I have clearly defined interests groups or political party proxies. Also keen to hear ways of modelling that system for internal usage.
The abstraction is that each party gets one voice, without worrying too hard about who exactly is speaking for it, and the voting public represents the support for each voice.
I find parties better capture the fact that some voices are more supported than others. If I thought of all the voices in my head as people in a room together, I'm afraid I'd end up thinking the voices I most endorse are jerks pushing everyone else around.
Political parties, no. I just don't care that much about the topic to have a solid identity for any party which I could usefully use to apply to myself.
I do have an internal dialog, though. It's just more fluid about identity of participants. I generally think of it as different-timeline future-selves arguing about which of them has it better based on the decisions I'm about to make.
Are most of the hard choices you face ones with known factual outcomes? The future-self approach seems to rely on that.
Nope. Hard choices will have outcomes, but I don't know them in advance, and can't always be sure of them even in retrospect. That doesn't keep me from imagining how I'll feel about the decision if I find myself in each cell of the matrix of options and outcomes.
Additional points missed are that there are no agricultural subsidies, and there are some other things mentioned in the comments.
I hypothesise that there are several topics for which you can reliably expect upvotes or downvotes depending on your position, regardless of the content.
Does anyone else have trouble with people who openly display their intelligence or attempt to be smart about something? High-school and media have somehow ingrained a hostility towards that and I find it surprisingly hard to overcome. I think it is some sort of empathy response, similar to vicarious embarrassment.
It's worth distinguishing a number of things.
Actually and visibly being really smart, and pretty much always right in their domain of expertise.
Trying to look really smart and right, over and above merely being so.
Arrogance in dealing with people who are wrong.
Arrogance in dealing with people disagreeing with oneself.
(1) is a great virtue, (2) and (4) are mortal sins of rationality, and (3) merely a venial one. I will overlook a lot of arrogance in someone who is actually pretty much always right, especially if it isn't me they're being arrogant at.
People who are insecure around smart people often read actually being right and knowing it (1 and 3) as pretending to be right and intimidating others (2 and 4).
Seconded. Nothing to add.
That's what the little thumbs-up button is for.
I don't think we have a problem on LW with too many people writing messages saying that they agree with other people.
I find it good to be explicit, both to add support for the original idea and to tell the person they have agreement, not just "that was a thing that I felt like +1-ing".
but I could have been more lazy...
I openly display my intelligence all the time. Nobody would -describe- it as that, however. They'd describe me as giving advice, suggesting solutions, or similar -specific- activities, and only in appropriate situations. (If you don't know when advice is desired - which is, critically, not whenever somebody mentions a problem they have - don't give it unless asked.)
"Openly displaying your intelligence", as an activity in itself, is merely -bragging-, and is just as annoying, and for precisely the same reason, as the guy who will tell anyone who will listen about how he's a motorcycle racer who could easily win any race he ever entered, but he just enjoys riding his motorcycle for the fun of it.
I think the ingrained hostility doesn't come from high school and media, but from human nature which doesn't like it when people are trying to raise their status relative to you.
But anyway, the motive of speaking the truth is different from the motive of displaying intelligence, so to the degree that someone has the second motive that is likely enough to hinder the first. So if someone has the second motive, that isn't a good reason to be hostile, but it is a good reason to take what they say with a grain of salt.
"attempt to" is a key phrase in your question. I don't see much trouble with openly displayed intelligence, as long as it's actually intelligent (correct, and directed to an agreed shared goal). Nobody much cares for show-offs or useless knowledge.
I do see a bit of resistance to "weird", which often comes with analysis. Much of the time, but not always, that's because the supposed-intelligent participant has done only a superficial analysis and not really attempted to understand the equilibrium that is the status quo.
High-school is ... unrelated to the real world, for which I am grateful. Don't extrapolate from what is effectively a Robbers Cave experiment that kids impose on each other in the absence of any meaningful effort/skill rewards.
For me the most annoying aspect of "displaying intelligence openly" is the following:
Imagine that you have an average person A, an intelligent person B, and a super-intelligent person C. More precisely, imagine that there are 100 As, 10 Bs, and 1 C, because most people are at the center of the bell curve.
From A's point of view, both B and C are smarter than him, and he cannot really compare them. All he can say is that he kinda understands what B says, but a lot of what C says is incomprehensible.
The experience of B is that most people are either A or B. Add some political or other mindkilling, and B may quickly develop a heuristic "everyone who agrees with me is a B, and everyone who disagrees is A and a huge waste of time".
Now once in a while B and C meet and disagree about something. B, using his long-practiced heuristic, says "lol, you're an idiot".
An observer A looks at their interaction and thinks "B is probably right, since I know B to be a smart person; and C also seems kinda smart, but not as smart as B, and B says he is wrong, so he probably is".
From my point of view, B is "cheating" in this process, using both his intelligence and his lack of even higher intelligence to create an advantage over C. Thus I applaud the norms which prevent this, even if they were created for other reasons.
This was a productive use of my time - a panel with Peter Thiel, Aubrey de Grey (who I don't know) and Eliezer Yudkowsky.
In digital markets with extremely quick liquidity, like the stock exchange, is investing based on macroeconomic factors and megatrends foolhardy? Is it only sensible to invest when one has privileged information, including via analysis of public data at a level no one else has done?
One shouldn't expect to systematically beat the market without privileged information. But even "trying to beat the market" (depending on what exactly that strategy entails) or doing what you describe is often better than what most people do in terms of actually growing their savings. Financial securities (especially stocks) have high enough long-run expected returns such that a "strategy" of routinely accidentally slightly overpaying for them and holding them still results in a lot more money than not investing at all.
Not investing is far worse than shoving your money into random stocks and committing to reinvest all dividends for the next 50 years.
Is there absolute utility maximisation in portfolio diversification, or is that just a risk control mechanism? Could I pick one random stock and put a whole lot of money in it? I suspect I may be misapplying the law of large numbers here (or committing the gambler's fallacy).
It's purely for risk control, but most people are extremely loss averse and so do well to diversify.
You could. It's a bet with positive expectation and a really risky one. But people do much dumber things with their money. Having said that, I'd recommend an index fund instead if you're plopping a whole lot of money in.
Look at Kelly Betting for some information on why "risk control" is utility maximization.
Presuming you have declining marginal utility for money, picking one random stock gives you the same average/expected monetary outcome, but far lower utility.
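The declining-marginal-utility point can be made concrete with a small Monte Carlo sketch. The return distribution below is an assumed toy model (each stock independently doubles or halves with equal probability), not real market data:

```python
import math
import random

random.seed(1)

def expected_log_utility(n_stocks, trials=100_000):
    """Average log-wealth when 1 unit is split equally across
    n_stocks, each of which doubles or halves with probability 1/2
    (assumed toy distribution: E[return] = 1.25x either way)."""
    total = 0.0
    for _ in range(trials):
        wealth = sum((2.0 if random.random() < 0.5 else 0.5) / n_stocks
                     for _ in range(n_stocks))
        total += math.log(wealth)
    return total / trials

single = expected_log_utility(1)        # one random stock
diversified = expected_log_utility(20)  # equal split across 20

# Same expected wealth, but the diversified portfolio wins on
# expected log utility (the Kelly criterion's utility function).
assert diversified > single
```

Both portfolios have identical expected monetary return, yet under log utility the single stock comes out near zero while the diversified one is clearly positive, so for a Kelly bettor diversification is genuine utility maximization, not just risk control.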
If you're not familiar with it, you should check out www.bogleheads.com for investment/finance advice.
(Not trying to discourage you from discussing this here... just that if you don't know bogleheads, it's quite valuable)
Unpack the question. What do you mean by "foolhardy"? What is your next-best option for your money?
In almost all cases, you should opt not to make a wager on a topic where you are at an information disadvantage. However, investments are not purely a wager - they're also direction of capital and sharing of risk (and reward) with for-profit organizations. It's quite possible that you can lose the wager part of your investment and still do fairly well on the long-term rewards of corporate shared ownership.
SNPs are not independent - tag SNPs represent a region of highly correlated SNPs.
So, can the correlations be used to correct the reported risks in Promethease to identify overall risk for a particular thing?
Does 23andMe test for highly correlated SNPs, or does it exclude them as unnecessary?
https://en.wikipedia.org/wiki/Linkage_disequilibrium
Are there any advocacy groups for sex buyers, or 'johns'? They're an affluent bunch, their interests include easily influenced poor settings, and they're not necessarily constrained by the scrupulosity that advocates for, say, sex workers' rights may have. It surprises me that they don't exist, when advocacy groups for smokers and other vices exist; only advocacy groups for the suppliers and workers in the sex trade seem to exist.
Being a sex buyer is low status. Being in an oppressed group such as sex workers is high status in many political contexts.
From memory: Amnesty International has come out in favor of legalizing prostitution. They were grudging about admitting that, while they aren't going to call it human rights, they have to support something like human rights for prostitutes' customers and agents.
I read the Amnesty paper and it didn't say anything about rights for customers or agents.
That depends. Being a john is low-status. Inviting girls over to your yacht for champagne and caviar is high-status.
That really depends. A whore is not a high-status profession.
That's not being a "sex buyer" within the context of needing advocacy for sex buying.
Thus, "in many political contexts".
I wonder what is the lesson here.
"If you want to buy sex for money, you better have a lot of money, or it will reflect poorly on you."
Or perhaps:
"Doing things in a way which demonstrates that you have a lot of money can make almost anything high-status."
Or: be classy, not crass. Form and style matter.
It is, of course, easier to be classy when you have a yacht stocked with champagne and caviar on hand... X-/
Counter-example: Donald Trump. A dictionary counter-example: nouveau riche :-)
Hence the term "status whore."
Cigarette companies manage to fund advocacy groups for smokers. The mafia that runs brothels, on the other hand, doesn't fund advocacy groups.
I'm looking for a good demonstration of Aumann's Agreement Theorem that I could actually conduct between two people competent in Bayesian probability. Presumably this would have a structure where each player performs some randomizing action, then they exchange information in some formal way in rounds, and eventually reach agreement.
A trivial example: each player flips a coin in secret, then they repeatedly exchange their probability estimates for a statement like "both coin flips came up heads". Unfortunately, for that case they both agree from round 2 onwards. Hal Finney has a version that seems to kinda work, but his reasoning at each step looks flawed. (As soon as I try to construct a method for generating the hints, I find that at each step when I update my estimate for my opponent's hint quality, I no longer get a bounded uniform distribution.)
So, what I'd like: a version that (with at least moderate probability) continues for multiple rounds before agreement is reached; where the information communicated is some sort of simple summary of a current estimate, not the information used to get there; where the math at each step is simple enough that the game can be played by humans with pencil and paper at a reasonable speed.
Alternate mechanisms (like players alternate communication instead of communicating current states simultaneously) are also fine.
How about some variation on Bulls and Cows?
That seems like fertile ground for exploration, but no probability / agreement variation immediately springs to mind. Did you have something specific in mind?
Have several people try to guess the same number, with everyone able to see everyone's guesses and results.
But then everyone has the exact same information, right? I'm specifically looking for something that's like Hal Finney's game, in that the different players have different information, and communicate some different set of information (some sort of knowledge about the state of the world, like their posteriors on the joint data).
Based on a simple coin flip; other games:
I am sure there are more small games that have a similar "known" problem space.
What change would you make that results in multiple rounds being required?
For example, if each player flips multiple coins, and then we share probability estimates for "all coins heads" or "majority of coins heads" or expectations for number of heads, in each case the first time I share my summary, I am sharing info that exactly tells the other player what information I have (and vice versa). So we will agree exactly from the second round onwards.
Example I was thinking of:
Each player flips 3 (or 10) coins of their own (giving them various possibilities for what they think the whole coin-space looks like). They present their 90% and 99% confidence intervals on there being more than 4 (or 9) heads. Round 2: repeat. (Also make statements based on what they think the state of play is, and try to get to the answer before the other person. So make statements that can be misleading, maybe?)
Not sure how easy it is to tease out that information for a human. Maybe a computer could solve it, but not so much a human...
"I flipped 10 coins; My 90% confidence that there are at least 7 of each heads and tails is 90%. 99% confidence is 60%."
confidence for "at least 10 heads and 6 tails" etc.
Here's how that goes. I flip 3 coins. Say I get 2 heads. My probability estimate for "there are 4+ heads total" is now 4/8 (the probability that 2 or 3 of your coins are heads). For the full set of outcomes I can have, the options are: (0H, 0/8) (1H, 1/8) (2H, 4/8) (3H, 7/8). You perform the same reasoning. Then we each share our probability estimates with the other. Say that on the first round, we each share estimates of 50%. Then we can each deduce that the other saw exactly two heads, and on the second round (and forever after) both our estimates become 100%. For all possible outcomes, my first round probability tells you exactly how many heads I flipped, and vice versa; as soon as we share probabilities once, we both know the answer and agree.
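For anyone who wants to check that arithmetic, here's a minimal Python sketch (the function name and structure are mine, just for illustration) computing each player's first-round estimate from their private coin count:

```python
from fractions import Fraction
from math import comb

def posterior(my_heads, n=3, target=4):
    """P(total heads >= target) given I saw my_heads of my n coins,
    with the other player's n coins still unknown and fair."""
    need = target - my_heads  # heads the other player must contribute
    favourable = sum(comb(n, k) for k in range(max(need, 0), n + 1))
    return Fraction(favourable, 2 ** n)

# First-round estimates for each possible observation:
for h in range(4):
    print(h, posterior(h))  # 0/1, 1/8, 1/2, 7/8

# Each possible observation yields a distinct estimate, so sharing it
# reveals my exact count: after one exchange both players know all six
# coins' distribution over outcomes and agree forever after.
```

This confirms the point in the comment above: the first shared probability is a perfect encoding of the private information, which is why the game collapses after a single round.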
(Also, you're not using "confidence interval" in the correct manner. A confidence interval is defined over an expectation, not a posterior probability.)
I still don't see any version of this that's simpler than Finney's that actually makes use of multiple rounds, and when I fix the math on Finney's version it's decidedly not simple.
My version of making this work would be choosing to only share limited information.
I.e. estimates of 33% heads, or estimates of >10% heads and >80% tails, where they don't sum to 100% and it will be harder to work out the "unknown space" in the middle. Limiting the prediction set to partial information. Also, playing with multiple people should make it more complicated, as would an optional number of coin flips (optional to the person flipping coins and unknown to others within parameters).
The two-coins example might be useful as a first step, even if you then present a more difficult one.
Bridge, the card game. Bidding is the process of two players exchanging information about the cards they hold via the very limited communications channel (bids). The play itself is also used to transfer more information about which cards remain in the hand.
I don't know if that will work as a demonstration of the Aumann's Theorem, though, bridge gets very complicated very fast :-/
That's an excellent practical example, though it doesn't really have the explicit probability math I was hoping for.
In particular, I like that you'll see stuff like which player thinks the partnership has the better contract flips back and forth, especially around auctions involving controls, stops, or other specific invitational questions. The concept of evaluating your hand within a window ("My hand is now very weak, given that I opened") is also explicitly reasoning about what your partner infers based on what you told them.
I think the most important thing here might be that bridge requires multiple rounds because bidding is limited bandwidth, whereas giving a full-precision probability estimate is not.
If you want explicit probability math, you might be able to construct some kind of cooperative poker (for example, allow two partners to exchange one card from their hands following some very restricted negotiations). The probabilities in poker are much more straightforward and amenable to calculation.
The Talos Principle has an AI singularity plot. In it, the final test to pass is an anti-friendliness test. However, upon experiencing the story, this doesn't seem especially repugnant. Is friendliness in conflict with moral autonomy?
Something which may prove interesting to somebody here:
A tentative list of internal states (certainly incomplete), divided into emotions and mental states. I distinguish between emotions and mental states on the basis of something I can't quite put my finger on, but I'm reasonably certain there -is- a difference, something like the difference between color photographs and black-and-white photographs. (It's quite fuzzy in some places, though, so not everything neatly fits in one or the other. Suspicious/paranoid, for example, I quibble about the placement of.) I've done a few passes at combining emotions I suspect are identical except for context and intensity. You'll notice emotions like "Happy" and "Angry" aren't present - unless somebody can correct me, I think these aren't distinct emotions in and of themselves, but simplifications of a broad range of more complex emotions. (A couple permutations of "Angry" show up under "Rage"). Some words show up multiple times, where the word appears to refer to more than one emotional state, with clarifications.
Out of the emotions listed, I experience somewhere around a third of them, which makes it hard to evaluate how distinct they actually are, and in other places leads me to incorrectly consider them separate internal states. Of the mental states, I experience most of them (which is why I think the sorting criteria aren't -entirely- arbitrary). As for the uncertain ones - I have no idea whether those are actually distinct feelings, or just ways people describe other people's behavior, so it's safe to say that if they are experiencable, they're among the things I don't experience.
The list is largely drawn from entries in the following list: https://robbsdramaticlanguages.files.wordpress.com/2014/07/vocabulary-expand.jpg.
Some I've omitted as being, as far as I can tell, embellishments. I've added others, as well.
Emotions:
Mental States:
Uncertain:
I would like to point out a concept that has recently entered into my life.
Sometimes these emotions are generated internally, and often the word for the emotion is one that "pulls" you to feel that way. An example is "appreciated", where something else gives you the feeling of being appreciated. It's not an emotion you can give to yourself (only recognise it), whereas distress, or hesitation, can come from yourself.
Not sure how that adds to the list exactly.
I made a spreadsheet of how often I think I experience each one - https://docs.google.com/spreadsheets/d/1lkOftycrnhjSdbC6cExawoiyX-Jbn9wuxg2GlCjGeh4/edit?usp=sharing - on a scale of 1-10. Nothing is 9 or 10 because that would imply I experience it all the time.
Scheming! That emotion definitely belongs on the list. WRT Disappointment/Disheartened/Discouraged, which would you separate? (Or are all three distinct?)
There is a sense that some of these are... very self-inflicted. I suspect some people have a fine degree of control over that, and others have no control over the distress, or hesitation, they experience. (I don't feel "Appreciated", so I can't comment on that example, but there are similar external emotions I do, such as annoyance, which is one I'm incapable of feeling towards myself, in pretty much exactly the same way I couldn't tickle myself.)
Equanimity is... a bit broader than "cool and collected", at least in my personal experience. Cool and collected is a good description for the outer-state of it - what is directly experienced in most situations. There's an inner component to it, too - it's... a capacity for dealing with emotions. It's the capacity to remain cool and collected, whatever emotions are hurled at you. When my equanimity is low, I feel like I'm on the top of an immensely tall column that is swaying haphazardly, and will topple in the slightest emotional breeze. When my equanimity is high, there's an inner stability, like a hurricane of emotion couldn't budge it - I describe that state as "centered".
I would separate Disappointment from Discouraged - they are distinct things that don't require each other in order to happen. Disappointment also doesn't have to be disheartening. Disheartened/Discouraged are similar and could probably be left close by.
Looking good. Not sure how to use it; but if it stays up - I will think about it...
Done!
No idea what any of those three are supposed to feel like. I imagine the inverse of relief?
Disheartened ~= "soul-crushing". Discouraged ~= I am running a race against my peers and I don't seem to be able to keep up; after a month of training, they seem to be getting faster and I seem not to be keeping up at the same grade - "all this effort for nothing". Disappointed ~= I was expecting chocolate spread on my sandwich but it was only jam (slightly in the direction of "something I expected but did not quite estimate right").
This is useful. Do you have experience with Focusing? Part of the workflow is to sit with your emotional state and gently try to discern what label applies to it. This can be hard because sometimes the feeling is complex or unclear, but I expect part of the difficulty lies in a simple lack of vocabulary with which to label the feeling.
The biggest issue from my perspective is that the labels don't immediately connect to any kind of easily-communicable qualia, so even if you know the correct label, you don't necessarily have a good way of connecting the label to the feeling. (That said, the only emotion I required outside assistance to identify was a generalized anxiety, which didn't feel at all like I expected it to. I expected anxiety to be definitively unpleasant, and it was merely ambiguously so.)
I'm looking for a high-quality parenting blog - one with relatively frequent, well-written content which might accept guest contributions, or one with a discussion forum that's not just gossiping. It can be English-speaking or German. I'd like to try my hand at some posts before opening my own blog. Any ideas?
Someone changed the password on the Username public throwaway account. It's a shame a troll finally got to it after several years.
Far out, that was an excellent account and several people had clearly used it to make important contributions.
It would be nice if there was a way to memorialize the posts or something externally. Or, perhaps the moderators could implement an 'official' throw-away to protect against this.
I have been a beneficiary of comments from the Username account and believe it does...or did a true service to the community. Thank you for taking it upon yourself to report this and making a new account.
While I had no objection to the existence of the account and in fact used it several times myself, it was a bit annoying to me that someone was using it as his personal account rather than bothering to create his own.
It's worth contacting a moderator and seeing whether they can do anything about it.
Even if they reset the password, it's the nature of a public account that the password can always be changed again.
How about making the password reset automatically every X minutes?
I actually meant to ask at some point whether the Username account would have protection against people changing passwords willy-nilly, but I didn't because, you know... information hazards and all that. Didn't want to give people the idea. But now that it's happened, I suppose I could ask retrospectively: how come nobody ensured some protection against that?
Because, in general, a forum that's designed to allow anonymous comments would allow anonymous comments, and not make people go through the hack of using a separate account for it. The account wasn't created by any moderator, but simply by a user who thought such an account would be good to have.
While we're in infohazard territory: it's not only possible to change passwords. It's also possible to delete accounts.
I always assumed that was just one person. I feel like someone died. (Not really. But, how was I supposed to know it was an open account?)
The beauty of the account lay in the fact that it was not publicized, so only long-time lurkers would know about it.
Disabled people can benefit from sex. Presumably, some disabled people cannot access sex without paying for it (including the neurodevelopmentally disabled, mentally ill, etc.). There are barriers to sex workers providing for disabled clients. Unfortunately, there are compelling misconceptions that criminalizing the buying of sex is helpful to society when the evidence appears overwhelmingly on the other side, not to mention the stigma, and access to information about the rewards of sexual experience for sex workers' clients. Further, existing advocacy for sex workers' and their clients' rights outside of Europe is overly gentle, rarely attacking the other side. I hypothesise that this is because an extremely small minority of people have the prerequisite compassion, steadfastness against stigma, and endurance against low status to do something that is good but won't 'look' good.
It's not a straightforward subject. Legalized prostitution in Germany has resulted in a situation where it would likely be good if the majority of brothels didn't exist, because they are abusive to the women in them: http://www.spiegel.de/international/germany/human-trafficking-persists-despite-legality-of-prostitution-in-germany-a-902533.html
On the other hand there are people doing body work for whom the lines between sexual and nonsexual are pretty fluid. For those people a law forbidding sexual contact makes little sense.
Isn't there some "Uber for escorts" app in Germany that mostly solves that problem?
Why should an Uber-like app solve the problem? When women get drugged and beaten if they don't engage in prostitution, having an app to connect them to buyers doesn't solve much.
I don't know the details, but from reading the article it seems to me that "legalization" in this case simply meant saying "okay, it is no longer illegal", instead of treating it as any other employment.
For example, the article mentions prostitutes under 14. Did they have an employment contract? If not, then the whole situation was illegal, even if prostitution per se is legal. Keeping prostitutes locked in the basement: again, would the same situation be legal if the locked "employees" were, e.g., programmers? Etc.
Legalizing prostitution should mean treating the prostitutes as standard employees with standard employee rights (and duties: taxes, insurance), not just ignoring the whole business. The employees should be able to sue their employers, if necessary, and get legal assistance.
Simply put, the whole situation should be treated exactly the same way as if some organizations decided that it is cheaper to kidnap programmers and keep them locked in a basement, making them write Java code for food and torturing them if they refuse. We would not have a debate about whether we should make programming illegal, or merely buying Java applications illegal, or any similar frequently proposed "solution".
Do western civilizations owe something to those civilizations that were disadvantaged as a result of imperialism? A common reaction of national conservatives to this idea is that what happened during imperialism is time-barred and each country is responsible for their citizens.
If you focus on utilitarianism, the question doesn't come up. The important thing isn't who "owes" but how we can produce utility. If the best way to do that is to give bednets to Africans, then that's the thing to do, regardless of the concept of "owing".
I would only count debts toward the specific peoples directly affected; e.g. the Spanish Empire lived off Bolivian silver, the Belgians worked the Congolese to death, and the United States is literally built on stolen Native land. Those examples and many others allow for a case in favor of reparations.
However, the passage of time sometimes blurs the effects of exploitation and aggression. Should the UK sue Denmark for the Norman Conquest? Should Italy sue Germany because Germanic tribes destroyed the Roman Empire? Should Hungary sue Mongolia for what the Golden Horde did to them? I admit I don't know how to answer that in a way that is consistent with my first paragraph.
Related: A British answer.
No.
Could you explain why you see it this way? Our wealth is partly based on exploitation. Wouldn't it be fair to fix the damage we've done to exploited people? This could perhaps be also justified in terms of utilitarianism, as fairness might bring people closer together which prevents wars.
I don't see any basis for this claim. More explicitly, I don't see any reasonable and consistent legal/moral theory which would justify such a claim. Note that I do not consider the popular "deep pockets" legal theory to be reasonable.
Not to any significant extent. Most colonized places were net money-losers for the colonizer for most of their history. In addition, I doubt most western-colonized countries were made substantially worse off compared to non-colonized countries, since the Europeans introduced some level of infrastructure, medicine, etc.
First of all, who is this "we" you speak of? More importantly, there are a few "control-group" countries which were not colonized while their neighbors were, like Siam (modern Thailand) and Ethiopia, and they don't seem better off than their neighbors. Unlike most African countries, which abolished slavery when the Europeans took control, Ethiopia banned slavery only in 1942--under pressure from the British, who were a bit embarrassed to be allied with a slave state.
But then why did people keep conquering and colonizing new lands?
There is also Japan, which was better off than its neighbors. In 1905 Japan was strong enough to win a war against Russia.
It is relatively easy to understand the situation when one person owes money to another person, having borrowed it before. It is also not much more difficult to understand the situation when one person owes another person a compensation for damages after being ordered by court to pay it. Somewhat more vague is a situation when there is no court involved, but the second person expects the first one to pay for damages (e.g. breaking a window), because it is customary to do so. All these situations involve one person owing a concrete thing, and the meaning of the word "owes" is (disregarding edge cases) relatively clear.
Problems arise when one tries to go from singular to plural while still wanting to use intuition from the usage of the singular verb. Quite often, there are many ways to extend the meaning of a singular verb to a plural one in a way that is still compatible with the meaning of the former. For example, one can extend the singular verb "decides" to many different group decision-making procedures (voting, lottery, one person deciding for everyone, etc.); saying "a group decides" simply obscures this fact.
Concerning the word "owe", even when we have a well defined group of people, we usually prefer to either deal with them separately (e.g. customers may owe money for services) or create a juridical person which helps to abstract a group of people as one person and this allows us to use the word "owe" in its singular verb meaning. There are more ways to extend the meaning of the word "owe" from singular to plural, but they are quite often contentious.
"Western civilizations" is a very abstract group of people. It is not a well defined group of people. It is not a juridical person. It is not a country. It is not a clan. The singular verb "owes" is clearly inapplicable here, and if one wants to use it here, one must extend its meaning from singular to plural. But there seems to be a lot of possible extensions. Therefore one has to resort to other kinds of arguments (e.g. consequentialist arguments, arguments about incentives, etc.) to decide which meaning one prefers. But if that is the case, one can bypass the word "owe" entirely and go to those arguments instead, because that is essentially what one is doing, because words whose meanings one knows only very vaguely probably do not do much in actually shaping the overall argument.
In addition, "being disadvantaged as a result of imperialism" is very dissimilar from "having a window broken by a neighbour"; it is not a concrete thing. The central example of "owing something" is "owing a concrete and well-defined thing". Whenever we have a definition that works well for a central example and we want to use it for a noncentral one, we again must extend it, and there is often more than one way to do so (Schelling points sometimes help to choose between the possible extensions, but often there is more than one of them, and the choice of extension becomes a subject of debate).
In general, I would guess that if someone argues that an entity as abstract as "western civilizations" owes something to someone, most likely they are either unknowingly rationalizing a conclusion they came to by other means, or simply sloppily applying intuition from the usage of the singular verb "owes". The meaning of the word can be extended in many ways, many of which would still be compatible with the meaning of the singular word; some of them would imply "new generations are not responsible for the sins of the past ones" while some of them wouldn't, so it is probably better to bypass the word altogether and attempt to solve a better-defined problem.
Other words where trying to go from singular to plural often causes problems are: "owns", "chooses", "decides", "prefers" (problem of aggregation of ordinal utilities), etc.
How much does Mongolia owe Russia? How much do North African countries owe Europe for the millions of Europeans kidnapped and sold into the Arab slave trade in north Africa? The notion is itself ridiculous.
I think that framing "Imperialism" as belonging to the past is inaccurate.
Many of the problematic behaviours grouped together under the term "Imperialism" have not actually stopped. There are Western developed countries doing horrible things to non-Western developing countries right now, and doing horrible things to their own people too.
I think a good first step would be to stop doing the horrible stuff now. If the problematic behaviour stopped, the topic of redress for past wrongs could be considered from a better vantage point. "I'm sorry I killed your ancestors and stole their stuff 100 years ago" tastes like ashes when coming from someone who is killing your family and stealing your things now, or who is doing something more subtle but equally awful.
"Disadvantaged" is a word that glosses over the damage done. Also, the whole question could benefit from being more specific and defining terms better.
Is anywhere on Earth inhabited by the descendants of the humans who first moved in?
What's in the way of a large-scale prospective placebo-controlled trial of pre-exposure HIV prophylaxis?
CBT is becoming less effective and (by the article author's insinuation) is creating disability.
For SSC fans, here's an article that's probably about the same thing, but I can't bring myself to read the inane story at the start.
According to the first article's author, the declining efficacy effect is seen particularly in psychiatry, but also in medicine more generally. Interesting.
How does one deliver Interpersonal psychotherapy? It's just as effective as CBT without the psychobabble. I can't find information on what is actually done, however.
If you can't find information on what's done, why do you think there's less psychobabble than in CBT?
If you think you have been infected or potentially infected with HIV, IMMEDIATELY go to an emergency department and explain your situation. You can get a treatment that can stop you getting HIV! Here's more information relevant to Australians. Yes, science has come this far!
Also, if you are engaging in risky sexual behaviour like having sex without a condom, guys: get some of your foreskin chopped off. It reduces your HIV risk. Women, note that it doesn't reduce your risk of getting infected from an infected male.
Is anyone willing to share an Anki deck with me? I'm trying to start using it. I'm running into a problem likely derived from having never, uh, learned how to learn. I look through a book or a paper or an article, and I find it informative, and I have no idea what parts of it I want to turn into cards. It just strikes me as generically informative. I think that learning this by example is going to be by far the easiest method.
There are many shared Anki decks. In my experience, the hardest thing to get correct in Anki is picking the correct thing to learn, and seeing someone else's deck doesn't work all that well for it because there's no guarantee that they're any good at picking what to learn, either.
Most of my experience with Anki has been with lists, like the NATO phonetic alphabet, where there's no real way to learn them besides familiarity, and the list is more useful the more of it you know.
What I'd recommend is either picking selections from the source that you think are valuable, or summarizing the source into pieces that you think are valuable, and then sticking them as cards (perhaps with the title of the source as the reverse). The point isn't necessarily to build the mapping between the selection and the title, but to reread the selected piece in intervals determined by the forgetting function.
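To make that last point concrete, here's a minimal sketch of the exponential-interval idea behind "reread in intervals determined by the forgetting function". This is not Anki's actual SM-2 scheduler; the function and the ease value are illustrative assumptions:

```python
def next_interval(previous_days, remembered, ease=2.5):
    """Return the number of days until a card's next review.

    Successful recalls stretch the interval multiplicatively;
    a lapse resets the card to be reviewed again tomorrow.
    """
    if not remembered:
        return 1  # lapse: start the schedule over
    return max(1, int(previous_days * ease))

# Three successful reviews in a row, starting from a 1-day interval:
interval = 1
for remembered in [True, True, True]:
    interval = next_interval(interval, remembered)
print(interval)  # 1 -> 2 -> 5 -> 12 days
```

The point of the shape is that material you keep recalling gets pushed exponentially far into the future, so rereading effort concentrates on the cards you're actually forgetting.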
Alright, I'll be a little more clear. I'm looking for someone's mixed deck, on multiple topics, and I'm looking for the structure of cards, things like length of section, amount of context, title choice, amount of topic overlap, number of cards per large scale concept.
I am really not looking for a deck that was shared with easily transferable information like the NATO alphabet; I'm looking for how other people do the process of creating cards for new knowledge.
I am missing a big chunk of intuition on learning in general, and this is part of how I want to fix it. I also don't expect people to really be able to answer my questions on it, and I don't expect that I've gotten every specification. Which is why I wanted the example deck.
Edit: So I can't pull a deck off Ankiweb because I want the kind of decks nobody puts on Ankiweb.
I don't know if this question will help:
What is the least-bad way of doing the thing you want to do that you can think of?
(apologies I can be no help because I don't anki; but I wonder if answering this question will help you)
Could a moderator please nuke the swidon account and all of its posts?
agreed.
The account is nuked. I need to find out how to remove posts.
Does anyone know of a good life expectancy calculator? Preferably one which has good justification behind the model, and also has been tested.
I tried this calculator, but I noticed a few issues. First, it tells me I should start doing conditioning exercise... when I did check that off. I think that part of the calculator is broken. It also seems to think that taller people live longer, when from what I understand it's well accepted that the opposite is true. Some of its other features seem unjustified to me; for example, it seems to think you get a life expectancy boost from eating less than 10% of your calories from fat, but I can't find any evidence for that.
Good life expectancy calculators seem very valuable to those interested in longevity. Perhaps some people at LessWrong should create some sort of model. Though I have little experience with these sorts of statistical models, I think the Monte Carlo method might be useful here to get a distribution. If we put the code on GitHub then others can take a look at its guts and submit corrections/improvements/pull requests if they want to.
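As a sketch of what I mean by the Monte Carlo approach - every number below (baseline hazard, growth rate, risk multipliers) is a made-up placeholder, not real actuarial data, and the structure is just the simplest annual-hazard model I could think of:

```python
import random
import statistics

# Illustrative only: these parameters are placeholders, not real data.
BASE_HAZARD = 0.0002       # annual death probability at age 0
GOMPERTZ_GROWTH = 1.09     # hazard multiplies by ~1.09 each year of age
RISK_MULTIPLIERS = {"smoker": 2.0, "exercises": 0.8}

def simulate_lifespan(current_age, risk_factors, rng):
    """One Monte Carlo life: each year, die with the current hazard."""
    multiplier = 1.0
    for factor in risk_factors:
        multiplier *= RISK_MULTIPLIERS[factor]
    age = current_age
    while age < 120:
        hazard = min(1.0, BASE_HAZARD * GOMPERTZ_GROWTH ** age * multiplier)
        if rng.random() < hazard:
            break
        age += 1
    return age

def life_expectancy(current_age, risk_factors, trials=20000, seed=0):
    """Mean age at death plus decile cut-points of the distribution."""
    rng = random.Random(seed)
    samples = [simulate_lifespan(current_age, risk_factors, rng)
               for _ in range(trials)]
    return statistics.mean(samples), statistics.quantiles(samples, n=10)

mean_age, deciles = life_expectancy(30, ["exercises"])
print(f"mean age at death: {mean_age:.1f}")
print("decile cut-points:", [round(d) for d in deciles])
```

The useful part is the second return value: you get the whole distribution, not just a point estimate, which is exactly the "wide distribution representing the poor state of our knowledge" I was describing. Replacing the placeholder parameters with properly sourced hazard ratios is the hard (Nobel-adjacent) part.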
A good life expectancy calculator implies a good model of which factors drive longevity. I don't believe such a model exists (for healthy people -- the effects of various illnesses on your life expectancy are known much better). There are a lot of correlation studies but correlations and causality are not quite the same thing.
"Some sort of a model" is a very low bar -- presumably you would like the model to be good. People who will be able to make a good comprehensive model of how various health/diet/lifestyle/etc. interventions affect longevity will probably be in the running for a Nobel.
It's like saying that you found online some investment advice which doesn't look too good, perhaps some LW people would like to construct a model of the markets that will give better advice. Well...
Fair points. I don't think what we understand about longevity is as bad as what we understand about investments.
I suppose what I'm looking for is a model which 1) doesn't have any obvious bugs, 2) doesn't contradict anything we do know, and 3) has at least some evidence behind the model. If it produces a fairly wide distribution because that represents the (poor) state of our knowledge, I think that's fine.
The issue of correlation vs. causation also is important, and I'm not sure what we could do about it short of allowing someone to turn off certain features of the model if they believe them to be untrustworthy. For example, I've seen a fair bit about how marriage is correlated with an increase in longevity, and it seems obvious to me that any similar sort of social structure where one has frequent socialization and possibly receives feedback and care is probably where the real benefit is. So I think you can say you are married if you believe your situation is equivalent in some way. Obviously these details need to be shown more rigorously, but this is the basic argument.
What hypothesis are you testing, or is gnawing at the back of your mind, in relation to LessWrong, as you surf LessWrong right now? Or perhaps you're just surfing idly.
For me it's: has anyone experimented with replacing their time socialising with friends with LessWrong exclusively? I wonder if the benefits associated with socialising, such as increased well-being, can be obtained from interaction in online communities instead.
Though I suspect the nature of the community would be a strong determinant of the outcome. For instance, Facebook would probably be unhealthy, as would IRC exclusively, but the LessWrong community as a whole (excluding the IRL meetup community) might be great! I feel like I've basically outgrown all my friends with whom I don't have some sort of professional relationship, or towards whom I have a codependent/insecure attachment, anyway.
Why aren't more LWers public intellectuals in the conventional sense, making appearances on radio or television news bulletins? The benefits seem obvious, if you're okay with fame: it's a position of influence, and it seems relatively easy to contact news organisations to say you have original research from a reputable organisation (many of us are academics, so that's probably true). Perhaps there is an even easier way to contact many news outlets at once to get your name out there and get offers coming to you, something easier than manually sending out press releases. Those are probably paid PR services, but there's probably a free service somewhere too.
The only existing ways I know of are to get listed in expert databases like this one for Australia or for the world. I vaguely remember one run by an institute in Australia that requires experts to have completed meta-analyses or systematic reviews in their area, but it's for consulting work, not journalists, and the institute takes a cut (though they are prestigious, so it's a good affiliation). Their name starts with K if I remember correctly. I don't know why I tend to remember just the first letters of names, but I tend to be pretty accurate with it. There's probably a mnemonic explanation out there that some cog-psych LWer will inform me about.
I have yet to see a treatise, for strategic managers or from academics of any domain, on the game-theoretic implications of data science and data-driven firm behaviour in general.
I for one would expect data-driven organisations to act more rationally and therefore more predictably, meaning that game-theoretic optimal strategic behaviour (or rather an approximation of it, since many data-driven organisations will be stupid, just as many poker players forming a Nash equilibrium are) would maximise expected utility. However, I don't see how machine learning alone provides an avenue for firms to inform their strategic multi-agent decisions. They instead need to consider artificial intelligence techniques more broadly and be able to frame machine learning in that context. This, I suspect, will lead to the gold rush for AGI development. As soon as the potential for this becomes common knowledge, LinkedIn losers will start hailing AI expert as the sexiest job of the 21st century. MIRI, take heed of my warning: if you are not more transparent with your research agenda (which, for those who don't know, is still secret in part), you may find yourself developing FAI solutions far too slowly.
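To make "game-theoretic optimal strategic behaviour" concrete, here is a minimal sketch (my own illustration, not drawn from any existing treatise) that enumerates the pure-strategy Nash equilibria of a two-player game, with a prisoner's-dilemma payoff matrix as the worked example:

```python
from itertools import product

def pure_nash_equilibria(payoffs_a, payoffs_b):
    """Enumerate pure-strategy Nash equilibria of a two-player game.

    payoffs_a[i][j] and payoffs_b[i][j] are the row and column
    players' payoffs when row plays action i and column plays j.
    """
    rows, cols = len(payoffs_a), len(payoffs_a[0])
    equilibria = []
    for i, j in product(range(rows), range(cols)):
        # (i, j) is an equilibrium iff neither player gains by
        # unilaterally deviating from their action.
        row_best = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in range(rows))
        col_best = all(payoffs_b[i][j] >= payoffs_b[i][m] for m in range(cols))
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: action 0 = cooperate, 1 = defect.
pd_a = [[3, 0], [5, 1]]
pd_b = [[3, 5], [0, 1]]
```

Running `pure_nash_equilibria(pd_a, pd_b)` finds only mutual defection, which illustrates the broader point: if firms act predictably on their data, their interaction settles into an equilibrium that may maximise each firm's expected utility without being jointly optimal.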
Release your agenda and let others work on your problems cooperatively. Maybe you'll even get a more heterogeneous audience at the Intelligent Agents Forum. Maybe mainstream researchers can craft work you can actually use on the mathematical foundations of AI or UAI. I suspect the reason that this community blog, albeit devoted to human rationality and not machine rationality, devolves into topics like 'polygamy' is that we don't have shared problems to solve.
Human rationality is a very, very awkward construct, and the problem space is unclear and tangential, albeit related, to MIRI's work, which, let's admit, is the very reason this place exists. Let us run wild, and perhaps LessWrongers will start alternative agendas, like developing criminal networks and intelligence networks so potentially hostile AI could be detected in advance and stopped coercively. I'm just giving the first example I could think of.
My point is, you don't have any significant proprietary hard assets, so why shouldn't I or any other particular funder instead create a prize or award for a more transparent FAI research organisation to pivot off your incredible work? I'm not in a position to judge whether or not your ongoing contributions are essential, but this could also be a good opportunity for the community to discuss what will happen if or when you die or become incapable of contributing to the community. The same goes for other critical members of the community. Are there intellectual-succession processes in place?
I've never heard of this book or author before; has anyone read it? How does it compare to, e.g., "Smarter Than Us" or "Our Final Invention"?
Calum Chace, "Surviving AI"