This post by Eric Raymond should be interesting to LW :-) Extended quoting:
...There’s a link between autism and genius says a popular-press summary of recent research. If you follow this sort of thing (and I do) most of what follows doesn’t come as much of a surprise. We get the usual thumbnail case studies about autistic savants. There’s an interesting thread about how child prodigies who are not autists rely on autism-like facilities for pattern recognition and hyperconcentration. There’s a sketch of research suggesting that non-autistic child-prodigies, like autists, tend to have exceptionally large working memories. Often, they have autistic relatives. Money quote: “Recent study led by a University of Edinburgh researcher found that in non-autistic adults, having more autism-linked genetic variants was associated with better cognitive function.”
But then I got to this: “In a way, this link to autism only deepens the prodigy mystery.” And my instant reaction was: “Mystery? There’s a mystery here? What?” Rereading, it seems that the authors (and other researchers) are mystified by the question of exactly how autism-like traits promote genius-level capabilities.
At which point I blinked...
Simple hypothesis relating to Why Don't Rationalists Win:
Everyone has some collection of skills and abilities, including things like charisma, luck, rationality, determination, networking ability, etc. Each person's success is limited by constraints related to these abilities, in the same way that an application's performance is limited by the CPU speed, RAM, disk speed, networking speed, etc of the machine(s) it runs on. But just as for many applications the performance bottleneck isn't CPU speed, for most people the success bottleneck isn't rationality.
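The bottleneck analogy above can be made concrete with a toy sketch. Everything here (the ability names, the numbers) is invented for illustration; the point is only that the binding constraint is the *minimum*, not the ability you happen to be training:

```python
# A minimal sketch of the bottleneck analogy: success is capped by the
# weakest relevant resource, just as an application's throughput is capped
# by its slowest component. Names and numbers are purely illustrative.

def bottleneck(resources):
    """Return the limiting resource and its level."""
    name = min(resources, key=resources.get)
    return name, resources[name]

person = {
    "rationality": 8,   # already high for a stereotypical LWer
    "charisma": 3,
    "determination": 4,
    "networking": 2,
}

limit, level = bottleneck(person)
print(limit, level)  # networking is the binding constraint, not rationality
```

Raising "rationality" from 8 to 9 changes nothing in this model; raising "networking" from 2 to 3 does.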
It could be worse. Rationality essays could be attracting a self-selected group of people whose bottleneck isn't rationality. Actually I think that's true. Here's a three-step program that might help a "stereotypical LWer" more than reading LW:
1) Gym every day
2) Drink more alcohol
3) Watch more football
Only slightly tongue in cheek ;-)
Well, there's also the possibility that people who did successfully hack their determination, networking ability, and performance are now mostly not spending time on LW.
Probably everybody has seen it already, but EY wrote a long post on FB about AlphaGo which got 400 reposts. The post overestimates the power of AlphaGo, and in general it seems to me that EY drew too many conclusions from very little available information (a 3:0 score at the time of the post, yet 10 pages of conclusions). The post's comment section includes a contribution from Robin Hanson on the usual foom speed-and-type topic. EY later updated his predictions based on Sedol's win in game 4, stating that even a superhuman AI could make dumb mistakes, which may result in new...
History of "That which can be destroyed by the truth, should be"
First said by Hodgell, Yudkowsky wrote a variant, Sagan didn't say it.
Ok, so Lenat has now rolled out his AI after 30 years of development: https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/
The Russian Compreno system, which models language manually, has also launched its first service, Findo (after 20 years and 80 million USD): https://abbyy.technology/en:features:linguistic:semanitc-intro
A while ago I was, for some reason, answering a few hundred questions with yes-or-no answers. I thought I would record my confidence in the answers in 5% intervals, to check my calibration. What I found was that for 60%+ confidence I am fairly well calibrated, but when I was 55% confident I was only right 45% of the time (100)!
I think what happened is that sometimes I would think of a reason why the proposition X is true, and then think of some reasons why X is false, only I would now be anchored onto my original assessment that X is true. So instead of ch...
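For anyone who wants to run the same exercise, here is a sketch of the bucketing described above: group answers by stated confidence in 5% intervals and compare each bucket's stated confidence to its observed hit rate. The sample data is made up for illustration:

```python
# Bucket predictions by stated confidence (5% intervals) and compute the
# observed accuracy per bucket, to check calibration.

from collections import defaultdict

def calibration(predictions):
    """predictions: list of (confidence, was_correct) pairs.
    Returns {confidence bucket: observed accuracy}."""
    buckets = defaultdict(list)
    for conf, correct in predictions:
        bucket = round(conf * 20) / 20        # snap to the nearest 5%
        buckets[bucket].append(correct)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# Invented sample: 45% accuracy in the 55% bucket, 80% in the 80% bucket.
data = ([(0.55, True)] * 9 + [(0.55, False)] * 11
        + [(0.80, True)] * 8 + [(0.80, False)] * 2)
print(calibration(data))
```

A well-calibrated bucket is one where the key and the value roughly match.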
In The genie knows, but it doesn't care, RobbBB argues that even if an AI is intelligent enough to understand its creator's wishes in perfect detail, that doesn't mean that its creator's wishes are the same as its own values. By analogy, even though humans were optimized by evolution to have as many descendants as possible, we can understand this without caring about it. Very smart humans may have lots of detailed knowledge of evolution & what it means to have many descendants, but then turn around and use condoms & birth control in order to stym...
The recently posted Intelligence Squared video titled Don't Trust the Promise of Artificial Intelligence may be of interest to LW readers, if only because of IQ2's decently sized cultural reach and audience.
Replication crisis: does anyone know of a list of solid, replicated findings in the social sciences? (all I know is that there were 36 in the report by Open Science Collaboration, and those are the ones I can easily find)
Telling truth to any face -
Not a lie, with mortar hoary -
Go apace to any place,
To attend to any story.
Happy belated Pi Day, everyone!
I want to make a desktop map application of my city, kinda like Paradox Interactive's games. My city is 280 km^2, and I would like it at street-level detail. I want to be able to just overlay multiple layers of different maps. What I have in mind is displaying predicted tram locations, purchasing power maps, and pretty much any information I can find on one map, and combining these at will, with reasonable speed (and I would much prefer it to be seamless, like in a game, not displaying white spots at the edges while it is loading).
Does anyone know of a toolset for this?
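Not a toolset recommendation, but the data model you're describing (arbitrary layers combined at will) can be sketched very simply: each information source is a "layer" mapping grid cells to values, and the app composes any subset of layers on demand. Layer names and values below are invented for illustration:

```python
# Sketch of a composable map-layer data model: each layer maps grid cells
# to values; overlaying is just collecting every layer's value at a cell.

def overlay(layers, cell):
    """Collect every layer's value at one grid cell, skipping layers
    that have no data there."""
    return {name: grid[cell] for name, grid in layers.items() if cell in grid}

layers = {
    "tram_eta_min":     {(12, 7): 4, (12, 8): 6},
    "purchasing_power": {(12, 7): 0.83},
}

print(overlay(layers, (12, 7)))  # {'tram_eta_min': 4, 'purchasing_power': 0.83}
```

The rendering and seamless-loading parts are where a real engine or GIS library earns its keep, but the layer-composition logic itself stays this simple.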
Do you have a background in formal debate?
[pollid:1129]
If you do, do you think it was worth the time?
[pollid:1130]
If you don't, do you regret not having it?
[pollid:1131]
I've always enjoyed Kurzweil's story about how the human genome project was "almost done" when they had decoded the first 1% of the genome, because the doubling rate of genomic science was so high at the time. (And he was right).
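The arithmetic behind the "almost done at 1%" claim is worth spelling out: under steady doubling, 1% is only about seven doubling periods from 100%:

```python
# If coverage doubles every period, going from 1% to 100% takes only
# ceil(log2(100)) = 7 periods, which is why "1% done" can mean "almost done".

done = 0.01
periods = 0
while done < 1.0:
    done *= 2
    periods += 1
print(periods)  # 7
```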
It makes me wonder if we're "almost done" with FAI.
I don't really know where we are with FAI. I don't know if our progress is even knowable, since we don't really know where we're going. There's certainly not a percentage associated with FAI Completion. However, there are a number of technologies that might sudd...
Modest proposal for Friendly AI research:
Create a moral framework that incentivizes assholes to cooperate.
Specifically, create a set of laws for a "community", with the laws applying only to members, that would attract finance guys, successful "unicorn" startup owners, politicians, drug dealers at the "regional manager" level, and other assholes.
Win condition: a "trust app" that everyone uses, that tells users how trustworthy every single person they meet is.
Lose condition: startup fund assholes end up with majority...
Looking for advice with something it seems LW can help with.
I'm currently part of a program that trains highly intelligent people to be more effective, particularly with regards to scientific research and effecting change within large systems of people. I'm sorry to be vague, but I can't actually say more than that.
As part of our program, we organize seminars for ourselves on various interesting topics. The upcoming one is on self-improvement, and aims to explore the following questions: Who am I? What are my goals? How do I get there?
Naturally, I'm of the ...
Does it make a difference if an organism reproduces in multiple smaller populations versus one larger, if the number of offspring at generation one is held constant? (score is determined by the number of offspring and their relatedness, so the standard game)
Smaller populations are more prone to genetic drift, but in both directions, right?
Does this change somehow if the populations are connected, with different rates of flow depending on the direction?
For example, in humans, migration to the capitals (and in general, urbanization) happens way more often t...
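One way to get intuition on the drift question is a rough Wright-Fisher simulation. Everything here (population sizes, generation count, trial count) is illustrative; it only shows that neutral drift is symmetric in expectation while smaller populations wander further:

```python
# Rough Wright-Fisher sketch: neutral drift of one allele's frequency.
# Each generation, every offspring independently inherits the allele with
# probability equal to its current frequency (binomial resampling).
import random

def drift_stats(pop_size, start_freq=0.5, generations=50, trials=200):
    """Return (mean, variance) of the allele frequency after `generations`
    generations, across independent simulation runs."""
    finals = []
    for _ in range(trials):
        f = start_freq
        for _ in range(generations):
            f = sum(random.random() < f for _ in range(pop_size)) / pop_size
        finals.append(f)
    mean = sum(finals) / trials
    var = sum((x - mean) ** 2 for x in finals) / trials
    return mean, var

random.seed(0)
small = drift_stats(20)
large = drift_stats(200)
# Both means stay near 0.5 (drift pushes in both directions equally),
# but the variance is much larger in the small population:
print(small, large)
```

Note one subtlety for the multiple-demes case: each small deme drifts more than one big population, but if the demes are independent, pooling them averages some of that drift back out; migration between demes sits in between.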
I have a rationalist/rationalist-adjacent friend who would love a book recommendation on how to be good at dating and relationships. Their specific scenario is that they already have a stable relationship, but they're relatively new to having relationships in general, and are looking for lots of general advice.
Since the sanity waterline here is pretty high, I thought I'd ask if anyone had any recommendations. If not, I'll just point them to this LW post, though having a bit more material to read through might suit them well.
Thanks!
Isn't some sort of deism at least plausible and reasonable at this juncture? Is there a materialistic theory of what happened before the big bang that is worth putting any stock in? Or are we in an agnostic wait-and-see mode regarding pre-big bang events?
One major difference between left and right is their stance on personal responsibility.
Leftist intellectuals tend to think that societal influence trumps individual capability, so people are not responsible for their misfortunes and deserve to be helped, whereas rightists tend to hold the opposite view (related).
This seems trivial, especially in hindsight. But I hardly ever see it mentioned, and in most discussions the right treats the left as foolish and irrational, while the left thinks people on the right are self-interested and evil, rather than simply having a differen...
Three days ago, I went through a traditional rite of passage for junior academics: I received my first rejection letter on a paper submitted for peer review. After I received the rejection letter, I forwarded the paper to two top professors in my field, who both confirmed that the basic arguments seem to be correct and important. Several top faculty members have told me they believe the paper will eventually be published in a top journal, so I am actually feeling more confident about the paper than before it got rejected.
I am also very frustrated with the peer review system. The reviewers found some minor errors, and some of their other comments were helpful in the sense that they reveal which parts of the paper are most likely to be misunderstood. However, on the whole, the comments do not change my belief in the soundness of the idea, and in my view they mostly show that the reviewers simply didn’t understand what I was saying.
One comment does stand out, and I’ve spent a lot of energy today thinking about its implications: Reviewer 3 points out that my language is “too casual”. I would have had no problem accepting criticism that my language is ambiguous, imprecise, overly complicated, grammatically wrong or idiomatically weird. But too casual? What does that even mean? I have trouble interpreting the sentence to mean anything other than an allegation that I fail at a signaling game where the objective is to demonstrate impressiveness by using an artificially dense and obfuscating academic language.
From my point of view, “understanding” something means that you are able to explain it in a casual language. When I write a paper, my only objective is to allow the reader to understand what my conclusions are and how I reached them. My choice of language is optimized only for those objectives, and I fail to understand how it is even possible for it to be “too casual”.
Today, I feel very pessimistic about the state of academia and the institution of peer review. I feel a stronger allegiance than ever to the rationality movement, my ideological allies in what seems like a struggle over what it means to do science. I believe it was Tyler Cowen or Alex Tabarrok who pointed out that the true inheritors of intellectuals like Adam Smith are not people publishing in academic journals, but bloggers who write in a casual language. I can't find the quote, but today it rings more true than ever.
I understand that I am interpreting the reviewers' choice of words in a way that is strongly influenced both by my disappointment in being rejected, and by my pre-existing frustration with the state of academia and peer review. I would very much appreciate it if anybody could steelman the sentence "the writing is too casual", or otherwise help me reach a less biased understanding of what just happened.
The paper is available at https://rebootingepidemiology.files.wordpress.com/2016/03/effect-measure-paper-0317162.pdf . I am willing to send a link to the reviewers’ comments by private message to anybody who is interested in seeing it.
But too casual? What does that even mean?
Having glanced at your paper I think "too casual" means "your labels are too flippant" -- e.g. "Doomed". You're showing that you're human and that's a big no-no for a particular kind of people...
By the way, you're entirely too fond of using quoted words ("flip", "transported", "monotonicity", "equal effects", etc.). If the word is not exactly right so that you have to quote it, find a better word (or make a footnote, or something). Frequent word quoting is often perceived as "I was too lazy to find the proper word, here is a hint, you guess what I meant".
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.