Someone should create a free-speech Twitter that doesn't censor anything protected by the U.S. First Amendment.
I wonder if the "average complainant" exists here - or more generally, what complaints correlate positively? Will look at the data tomorrow (precommitment!). I feel ill-disposed towards listening to people who fit some stereotype of "people who never really got it but want to slag it," but maybe that group of people is negligible and there's just a bunch of people with different feelings and experiences.
Impressions: I didn't do any actual statistical tests, just massaged the data a lot.
For almost every subject it was possible to complain about, the people who complained about it were a diverse group. In particular, they were diverse in how much of LW they'd read. On the other hand, people who marked an unusually large number of complaints were more likely than average to have never commented.
Comparing people who have both read and commented on LW to everyone else, the full users had more complaints on average (meaning that the prolific complainers were a small minority among non-commenters). Full users were less likely to complain (proportional to their overall amount of complaining) about focus on AI or criticism of science, and more likely to complain about jargon. On the community side, they were more likely to care about overly high standards and website design, and also somewhat more likely to care about in-person interaction.
Complaints did not correlate with each other very well. For example, the correlation between full users who complained about software and full users who complained about high standards was very low (though I think significantly positive). There didn't seem to be any large population of average complainers, just a bunch of people with different opinions.
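For anyone who wants to poke at the data themselves, here's a minimal sketch of the kind of comparison I did (the file name and column names are hypothetical; the actual survey export will differ):

```python
import pandas as pd

# Hypothetical export: one row per respondent, one 0/1 column per
# complaint checkbox, plus a flag for whether they've ever commented.
df = pd.read_csv("lw_survey.csv")

complaints = ["complaint_ai_focus", "complaint_jargon",
              "complaint_software", "complaint_high_standards"]

# "Full users": people who have both read and commented.
full_users = df[df["has_commented"] == 1]

# Pairwise correlations between the binary complaint columns (Pearson
# on 0/1 data is the phi coefficient). A large population of "average
# complainants" would show up as a block of strong positive entries.
print(full_users[complaints].corr())

# Complaint rates, full users vs. everyone else.
print(df.groupby("has_commented")[complaints].mean())
```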
Two questions:
Can anyone who has been a user for a significant amount of time give links to anything that wasn't deemed worthy of the Sequences but is a worthy read? I have no idea when the Sequences were collected, but if LW was really great in the past, there would've been a bunch of other high-quality posts that are easily missed. This could also double as proof that LW was, indeed, as great as advertised.
What do other places have that LW doesn't? If LW is dedicated to human rationality, is it truly doing that?
Am I a complete dumbass for typing this? In hindsight, it doesn't take a special variation of Godwin's law to think 'someone probably posted a similar question before'.
1. There have been several posters who wrote some very nice articles, including Alicorn, lukeprog, Yvain, AnnaSalamon, and Wei_Dai. (Listed in order on a sort of life-hacks to decision-theory spectrum).
Oh, and here's a classic by that prolific author, anonymous (Who? That would be telling :) )
2. To be uncharitable, we might say that other places have way more discussions of race, politics, and gender. Or to be uncontroversial, we might just say that other places have a lot more ordinary blog-type content, which people read for ordinary blog-type reasons.
Many of the diaspora blogs I like most (e.g. Otium, Paul Christiano's Medium) don't have such content, and are correspondingly unpopular.
3. On question 1: there are definitely index posts aimed at this sort of thing, but I couldn't find the specific one I was thinking of with just a cursory search.
My feeling about this is that it's okay to have some degree of arbitrariness in our preferences - our preferences do not have a solid external foundation, they're human things, and like basically all human things will run into weird boundary cases when you let philosophers poke at them.
The good news is that I also think that hard-to-decide boundary cases are the ones that matter least, because I agree with others that moral uncertainty should behave a lot like regular uncertainty in this case (though I disagree with certain other applications of moral uncertainty).
The writing is really nice! I especially like the "what would we see if it was only distorted standards of beauty" bit, although it does seem plausible that the media interacts with our judgments to some extent. The structure of the middle is a little murky to me - if I were rewriting it, I'd condense the two "reasons you mostly pay attention to your flaws" sections into one and tie loss aversion into an overarching negativity bias more explicitly, then spend a little more time on attentional bias / availability bias.
I really like the message that we commonly misjudge ourselves and need to learn to see ourselves as others see us.
The classic textbook is Li and Vitanyi's An Introduction to Kolmogorov Complexity and Its Applications.
NZ epidemiologist A.L. Pearson appears to have predicted the Trans-Pacific Partnership in 2014: "Although such a case may have no strong grounds in existing New Zealand law, it is possible that New Zealand may in the future sign international trade agreements where such legal action became more plausible." - British Medical Journal
Why do I, as a desperate male - lonely-and-horny-level desperate - stave off the attention of females when I'm not the one leading the charge? One of my peak experiences was visiting Torquay on an undergrad uni field trip, walking with the sexiest girl I'd ever met. A busker was playing 'I'm a Believer' at a market. It was magical. After the field trip she invited me on a coffee date - I agreed. I never took the initiative from there, and nothing happened. I had spent a week fantasising about her and enjoying her company, but her sexual aggression was somewhat intimidating. The same happened recently with someone I struggle to appreciate, a girl who flirts with me on an ongoing basis.
- My housemate said that having a strong feeling of "I don't want to be like my parents" will make me more like them. I wonder if that's true? Is trying to be less neurotic self-defeating?
- CFMEU has a new slogan: 'every battle makes us stronger'. Looks like smart advertising from a group that's under constant fire.
Reframe log
- Instead of seeing people moving through crowds as antagonists, see them as having a compatible want (not wanting to collide with you)
- Instead of seeing strangers around me as potential violent threats, see them as potential defenders
Behavioural insight / modification
Stop doing those sloppy back-slap drum-roll hugs, Carlos!
- I used topsy-turvy photo icons in my science presentation. I thought it looked kooky and kitsch. It looked dumb. As they say: ironic shitposting is still shitposting.
Why do I stave off the attention of women?
I've had similar reactions in the past. There are a couple of reasons, I think. Fear of the unknown, of jumping into new social situations. Nearsightedness: wanting everything to go perfectly the first time, so much that you don't get practice at making things go well. Fear of exposing myself to rejection, coupled with harder-to-describe feelings of low romantic or sexual worth. The feeling that you don't really know for absolutely sure that you want to spend a ton of time with the person you're flirting with, so you shouldn't follow through.
Two things have helped me with this. The first is increasing my self-worth a little. You can probably think of men less physically attractive than you who have had perfectly happy relationships. Try to understand what makes them attractive people (I tend to think of this as "falling in love" in miniature). In fact, I've found this exercise of trying to see the lovable in other people is a pretty good one in general. Anyhow, you can do this on yourself too. You have plenty of good points, I guarantee it.
The second thing was just jumping into those novel social situations. I have a mantra for it, even: "I would regret not doing it, therefore I will do it."
Trying to post here, since I don't see how to post to https://agentfoundations.org/.
People who haven't been given full membership in the forum can post links to things they have written elsewhere, but cannot make posts on the forum itself.
Is this kind of reasoning covered by already known desiderata for logical uncertainty?
It sounds similar to the Gaifman condition. Say you have a Pi_1 sentence, meaning a sentence of the form "for all x: phi(x)", where phi(x) is computable. If you've checked all values of x up to some large number, and phi(x) has always been true so far, you might think that this means that phi(x) is probably true for all the other values of x too. The Gaifman condition says that the probability that you assign to "for all x: phi(x)" should go to 1 as the range of values of x you've checked goes to infinity.
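In symbols (just restating the prose above, for a computable phi):

$$\lim_{n \to \infty} P\left(\forall x\, \varphi(x) \,\middle|\, \varphi(0) \wedge \varphi(1) \wedge \cdots \wedge \varphi(n)\right) = 1$$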
But it turns out that any computable way of handling logical uncertainty that satisfies the Gaifman condition must also give probabilities that go to 0 for some other true sentences (https://intelligence.org/files/Pi1Pi2Problem.pdf). This may sound alarming, but I don't think it is too surprising; after all, the theory of the natural numbers is not computable, so any logical uncertainty engine will be unable to rule out different theories even in the limit.
Right. There's also a somewhat stronger desideratum that we want to expect sequences to be simple rather than complex.
But I think there is something lots of logical uncertainty schemes are missing, which is estimation of numerical parameters. We should be able to care whether the target region is of size 0.001 or 0.000000000000000000000000000001, even if we have no positive examples, but sequence-prediction approaches don't do that.
If we're willing to "cheat" a bit and use as an input to our logical uncertainty method the class of objects that we're drawing from and comparing to some numerical parameter, then we can just treat prior examples as being drawn from the distribution we're trying to learn. And this captures our intuition very well, but it has some trouble fitting into schemes for logical uncertainty because of the requirement for cheating.
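Here's a toy sketch of that "cheating" version, assuming we're simply handed the reference class of prior examples (the function and numbers are illustrative, not any specific proposal from the logical uncertainty literature):

```python
def posterior_mean_rate(successes: int, trials: int,
                        alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of an unknown rate under a Beta(alpha, beta) prior.

    After observing `successes` hits in `trials` draws from the
    reference class, the posterior mean is
    (successes + alpha) / (trials + alpha + beta),
    i.e. Laplace's rule of succession when alpha = beta = 1.
    """
    return (successes + alpha) / (trials + alpha + beta)

# Zero positive examples, but the number of checked cases still moves
# the estimate -- exactly the distinction (0.001 vs. much smaller)
# that pure sequence prediction has trouble expressing.
print(posterior_mean_rate(0, 1_000))      # ~0.001
print(posterior_mean_rate(0, 1_000_000))  # ~0.000001
```

The "cheat" is visible in the signature: we had to be told what counts as a trial, which is exactly the input a general logical uncertainty scheme doesn't get for free.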
I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?
1. Elon Musk became a main player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI. http://www.nickbostrom.com/papers/openness.pdf Personally, I think we see here an example of a billionaire's arrogance: he intuitively arrived at an idea which looks nice and appealing and may work in some contexts, but to show that it will actually work, we need rigorous proof.
2. Google seems to be one of the main AI companies, and its AlphaGo beat the human champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had achieved superhuman ability at Go and left humans forever behind, but AlphaGo lost the next game. This led Yudkowsky to say that it illustrates one more risk of AI: the risk of uneven AI development, where a system is sometimes superhuman and sometimes fails.
3. The number of technical articles in the field of AI control is growing exponentially, and it is not easy to read them all.
4. There are many impressive achievements in the field of neural nets and deep learning. Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the princess. This was unexpected from the point of view of the AI safety community; MIRI only recently updated its research agenda to add the study of AI safety for neural-net-based systems.
5. The doubling time on some deep learning benchmarks seems to be about one year.
6. The media overhype AI achievements.
7. Many new projects in AI safety have started, but some concentrate on the safety of self-driving cars (even the Russian lorry maker KAMAZ is investigating AI ethics).
8. A lot of new investment is going into AI research, and salaries in the field are rising.
9. The military is increasingly interested in implementing AI in warfare.
10. Google has an AI ethics board, but what it is doing is unclear.
11. AI safety and safe implementation seem to be lagging behind actual AI development.
OpenAI is significantly more nuanced than you might expect. E.g., look at interviews with Ilya Sutskever where he discusses AI safety, or consider that Paul Christiano is (briefly) working for them. Also, where did you get the description of Bostrom as "Elon Musk's mentor"?