Guardian: Scientists threaten to boycott €1.2bn Human Brain Project:
The European commission launched the €1.2bn (£950m) Human Brain Project (HBP) last year with the ambitious goal of turning the latest knowledge in neuroscience into a supercomputer simulation of the human brain. More than 80 European and international research institutions signed up to the 10-year project.
But it proved controversial from the start. Many researchers refused to join on the grounds that it was far too premature to attempt a simulation of the entire human brain in a computer. Now some claim the project is taking the wrong approach, wastes money and risks a backlash against neuroscience if it fails to deliver.
In an open letter to the European commission on Monday, more than 130 leaders of scientific groups around the world, including researchers at Oxford, Cambridge, Edinburgh and UCL, warn they will boycott the project and urge others to join them unless major changes are made to the initiative.
[...] "The main apparent goal of building the capacity to construct a larger-scale simulation of the human brain is radically premature," Peter Dayan, director of the computational neuroscience unit at UCL, told the Guardian.
Open message to the European Commission concerning the Human Brain Project now with 234 signatories.
Finally, scientists speaking up against sensationalistic promises and project titles...
In a weird dance of references, I found myself briefly researching the "Sun Miracle" of Fatima.
From the point of view of a mildly skeptical rationalist, it's already bad that almost everything written that we have comes from a single biased source (the writings of De Marchi), and also bad that some witnesses, believers and non-believers alike, reported not having seen any miracle. But what aroused my curiosity is something else: if you skim the witness accounts, they report the most diverse things. If you OR the accounts together, what comes out is a real freak show: the sun revolving, emitting strobe lights, dancing in the sky, coming close to the earth and drying out the soaking-wet attendants.
If you instead AND the accounts, the only consistent element is this: the 'sun' was spinning. To which I say: what? How can something with rotational symmetry be seen spinning? The only possible answer is that there was an optical element that broke the symmetry, but I have been unable to find out what that element was. Do you know anything about it?
I have tried some online lessons from Udacity and Coursera, and this is my impression so far:
The system of Udacity is great, but there is little content. Also, the content made by the founder Sebastian Thrun is great, but the content made by other authors is sometimes much less impressive.
For example, some authors don't even read the feedback on their lessons. Sometimes they make a mistake in a lesson or in a test, the mistake is debated in the forum, and... one year later, the mistake is still there. They wouldn't even need to change the lesson video... just putting one paragraph of text below the video would be enough. (In one programming lesson, you had to pass a unit test, which sometimes mysteriously crashed. The crash wasn't caused by something logical, like spending too much time or too much memory; it was a bug in the test. In the forum, students gave each other advice on how to avoid this bug. It probably could have been fixed in 5 minutes, but the author didn't care.) -- The lesson is that you can't treat online education as "fire and forget", but some authors apparently underestimate this.
Coursera is the opposite: it has a lot of content, almost anything, but the system fee...
Abstract: It is frequently believed that autism is characterized by a lack of social or emotional reciprocity. In this article, I question that assumption by demonstrating how many professionals—researchers and clinicians—and likewise many parents, have neglected the true meaning of reciprocity. Reciprocity is “a relation of mutual dependence or action or influence,” or “a mode of exchange in which transactions take place between individuals who are symmetrically placed.” Assumptions by clinicians and researchers suggest that they have forgotten that reciprocity needs to be mutual and symmetrical—that reciprocity is a two-way street. Research is reviewed to illustrate that when professionals, peers, and parents are taught to act reciprocally, autistic children become more responsive. In one randomized clinical trial of “reciprocity training” to parents, their autistic children’s language developed rapidly and their social engagement increased markedly. Other demonstrations of how parents and professionals can increase their behavior of reciprocity are provided.
— Morton Ann Gernsbacher, "Towards a Behavior of Reciprocity"
The paper cites several examples of improvements ...
This is the outline of a conversation that took place no fewer than 14 times on the Friday just past, between me and a number of close friends.
"Life is like an RPG. Often, a wise, kind, and and deeply important character (hand gesture to myself) gives a quest item to a lowly, unsuspecting, otherwise plain character (hand gesture to friend). As a result of this, this young character goes on to be a great hero in an important quest.
Now, here with me today, I have a quest item.
For you.
But I can only give it to you if you shake on the following oath; that, once you have finished with this item, when you have taken what you require from it, that then, you too shall find someone for whom this will be of great utility, and pass it along. They must also shake on this oath."
"I will."
Handshake occurs.
"Here is your physical copy of the first 16 and a half chapters of 'Harry Potter and the Methods of Rationality'."
Spoilers: after a tedious chain of deals, your friend's going to end up with half an oyster shell sitting in their inventory and no idea what to do with it.
Quantified-self biohacker-types: what wearable fitness tracker do I want? Most will meet my basic needs (sleep, #steps, Android-friendly), but are there any on the market with clever APIs that I can abuse for my own sick purposes?
I think the main thing the Facebook emotional contagion experiment highlights is that our standard for corporate ethics is overwhelmingly lower than our standard for scientific ethics. Facebook performed an A/B test, just as it and similar companies do all the time, but because it was done in the name of science, we recognized that it was not up to the usual ethical standards. By comparison, there is no review board for the ethics of advertisements and products. If something is too dangerous, it will result in lawsuits. If it is offensive, it will be censored. But something that would be unethical in science, like devoting millions of dollars and millions of experimental-subject-hours to engineering a sugar-coated, money-sucking Skinner box, won't make anyone bat an eye.
I think the core issue is a lack of understanding of how modern technology works. Facebook performed an A/B test, and no one who knows how the internet works should have been surprised.
On the other hand, there are a bunch of people who don't realize that web companies run thousands of A/B tests. Those people were surprised to read about the study.
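For context on what such a test actually involves, here is a minimal sketch of how an A/B comparison is typically evaluated. All numbers are hypothetical, and the two-proportion z-test shown is just one standard choice, not necessarily what Facebook used:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of variants A and B with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                           # |z| > 1.96 ~ significant at 5%

# Hypothetical data: variant A gets 120/1000 clicks, variant B gets 150/1000.
z = two_proportion_ztest(120, 1000, 150, 1000)
```

Companies run this kind of comparison routinely on live traffic, which is precisely the point being made above.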
Hey guys, so, I'm dumb and am continuing to attempt to write fiction. I figured I would post an excerpt first this time so people can point out glaring problems before I post anything to Discussion. I've changed some of the premise (as can be seen most obviously in the title); mostly I'm moving away from LessWrong-parody and toward self-parody, because Eliezer's followers are really whiny and it was distracting from the actual ideas I was trying to convey. The premise is now less disingenuous about basically being a self-insert fic. Also I've tr...
Okay, I'm probably never going to actually get very far into my fanfic, so:
The story starts as stereotypical postmodern fare, but it is soon revealed that behind the seemingly postmodern metaphysic there is a Berkeleyan-Leibnizian simulationist metaphysic where programs are only indirectly interacting with other programs despite seeming to share a world, a la Leibniz' monadology. Conflicts then occur between character programs with different levels of measure in different simulations of the author's mind, where the author (me) is basically just a medium for the simulators that are two worlds of emulation up from the narrative programs.
Meanwhile the Order of the Phoenix (led by Dumbledore, a fairly strong rationalist rumored to be an instantiation of the monad known as '[redacted]') has adopted and adapted an old idea of Grindelwald's and is constructing a grand Artifact to invoke the universal prior so that an objective measure over programs can be found, thus ending the increasingly destructive feuds. Different characters help or hinder this endeavor, or seem to help or hinder it, according to whether they think they will be found to be more or less plausible by the Artifact. The...
What happened to Will Newsome's drunken HPMOR send-up?
On Twitter he suggested that EY had deleted it, but provided no evidence.
I just tested this by deleting one of my posts (it was a test post). My post can still be accessed, while Will Newsome's post can't be accessed anymore (except by visiting his profile). My username disappeared from my post after deleting it, while Will Newsome's name still appears on his post under his profile. This seems to be evidence in favor of Will Newsome's claim that his post was deleted by someone other than himself.
I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think that they would actually do this if the situation were real, i.e. if they had $1,000,000 and there was a 1 in 100 chance that it would be lost, they wouldn't pay someone $999,999 to do away with that probability and thereby guarantee themselves the $1, but they think they would do that. I'm interested in what could cause someone to think that. I actually have a little more information upon asking a few...
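For concreteness, here is the expected-value arithmetic at stake. This is a sketch that assumes risk-neutral, linear utility in dollars, which real preferences of course need not satisfy; that gap is exactly what makes the stated preference puzzling:

```python
def expected_value(outcomes):
    """Expected value of a lottery given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

sure_dollar = expected_value([(1.00, 1)])
risky_million = expected_value([(0.99, 1_000_000),
                                (0.01, 0)])        # EV ~ $990,000
```

Under linear utility, the risky option is worth roughly 990,000 times the sure one; preferring the sure dollar requires either extreme risk aversion or a utility function doing something very unusual.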
Merging traditional Western occultism with Bayesian ideas seems to produce some interesting parallels, which may be useful psychologically/motivationally. Anyone care to riff on the theme?
Eg: "The Great Work" is the Most Important Thing that you can possibly be doing.
Eg, tests to pass and gates to go through in which a student has to realize certain things for themselves, as opposed to simply being taught them, from pre-membership ones of learning basic arithmetic and physics, to the initial initiation of joining the Bayesian Conspiracy, to an ea...
That makes a bit of sense. The occultists fancied themselves scientists, back when that wasn't such a clearly defined term as it is now, and they rummaged through lots of traditions looking for bits to incorporate into their new (claimed to be old) culture. But computer games design had all the same sources to draw from, greater manpower and vastly more cultural impact. I would expect "almost any" useful innovations the occultists came up with to be contained in computer games.
This is true for both of your examples: "winning the game" and skill trees, respectively. And skill trees are better than initiation paths, because they aren't fully linear while still creating motivation to go further.
Compare the rules of how to play more like a PC, less like an NPC.
I say "almost any" because an exception may be fully immersed, bodily ritual stuff. Maybe that can hammer things down into system 1 that you simply don't "get" the same way when you just read them.
Is there a way to tag a user in a comment such that the user will receive a notification that s/he's been tagged?
Are there any resources (on LessWrong or elsewhere) I can use for improving my social effectiveness and social intelligence? It's something I'd really like to improve on so I can understand social situations better and perhaps improve the quality of my social interactions.
I'm living in rural Alabama for the next five years with little opportunity for mental challenge outside of my job. The only local groups of notable interest are our Rotary Club (which would really only bring a networking benefit) and our Trailmasters (from whom I can learn gardening and horticulture). I'd like to take part in more rationality-related activities, both for the self-improvement and the community benefits. Are there any suggestions for useful activities or groups I might join that can help? With so many meetup groups, I'm sure I can't be the only one living in isolated conditions. I'd like to hear from others how they beat the doldrums.
I know politics is the mindkiller and arguments are soldiers, yet still the question looms large: What makes some people more susceptible to arguing about politics and ideology? I know of people I can talk to while having differing points of view and just go "well, seems like we disagree" and carry on a conversation. Conversations with other people invariably disintegrate into political discussion with neither side yielding.
Why?
Every so often someone writes an essay on why Superman doesn't just stop social injustice or whatever. My response is that since superhumans are still only human in the ways that matter, they can make mistakes. If you stop crime, that at least puts a lower bound on how bad your mistake is. It's pretty obvious that stopping Dr. Octopus benefits people--the chances of being wrong about that are nil.
If the superhero starts overthrowing governments--or promoting cryonics--there's a chance he could be wrong and screw up. And if someone with such an influence screws up, he really screws up.
Reading that fic from the point of view of someone who doesn't support cryonics makes it clear exactly why beating up super-criminals is the best use of his powers.
The Transhumanist Wager. Has anyone read this thing? The Wikipedia synopsis reads like a satirical description of a fictional book. This review is absolutely scathing, including of the ethics of the author-avatar protagonist; this one is a bit nicer. The author commented here very slightly.
I tried reading it, but gave up around page 70. At first I was reading it as a self-satirizing B-movie thing about transhumanist stereotypes, but at some point it dawned on me that it was apparently meant to be read in all seriousness.
The shark-jumping moment for me was the part in the novel where the President of the United States has called a public meeting between bioconservative religious leaders and transhumanist scientists. The dialogue is stalled until the main character, a fourth-year philosophy student, gets up and gives a speech. He says, basically, that state institutions that restrict research are evil, that scientific research must proceed freely and without limitation, and that furthering transhumanism is a moral obligation which will end up benefiting both national well-being and competitiveness. The "state institutions are evil" bit is about the only part that gets actual supporting arguments; the remaining points are just stated without anything to back them up.
The crowd's reaction:
...The rotunda was silent for a long time after Jethro stopped speaking. In those moments every person believed in the speech’s common sense, in the potentia
The Less Wrong Study Hall's tinychat room is acting up this morning. For anyone who uses it and can't get in, we're in /lesswrong2 instead.
[Edit: It looks like support has fixed it, so please go back to the regular room.]
Sometimes I've tried to argue in favor of eugenics. The usual response I got has been something like: "but what if we create a race of super-human beings that wipes us out?".
It's interesting that people are much more prone to believe it's possible to create unfriendly human super-intelligence rather than an unfriendly artificial super-intelligence.
Suppose someone's life plan was to largely devote themselves to making money until they were in, say, the top 10% in cumulative income. They also did not plan to save money to any unusual extent.
Then, after that was accomplished, they would switch goals and devote themselves to altruism.
Given that the person today is able to make the money and resolves to do this, I wonder what people here think the chance is of doing it. For example, fluid intelligence declines over time. So by the time you're 60 years old and have made your money and have kids, wil...
In the comments to this post we discussed the signalling theory of education, which has previously been discussed on Less Wrong. The signalling theory says that education doesn't make you more productive, but constitutes a signal that you are productive (since only a productive worker could obtain a degree at a prestigious university, or so many employers think).
Such signalling can be very socially wasteful, since it can lead to a signalling arms race where people spend more and more money on signals that don't increase their productivity (like peacocks' t...
A very useful site: readlists.com. You can compile lists of articles and share them with your friends or convert them to epub/mobi. I used it on sequences I wanted to read or share.
https://en.wikipedia.org/wiki/Receptor_activated_solely_by_a_synthetic_ligand
I just learned there was such a thing as Designer Receptors Exclusively Activated by Designer Drugs (DREADD). I think this is huge. Do you people know the current status of this field?
Quick calibration test for those who like to have opinions on the US: of the standard US racial groupings (white, black, hispanic, asian) and the overall population, which do you expect to have the highest Gini ratio for income? Why?
Here is the answer, according to the US Fed
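For anyone who wants to check their intuition against raw data, the Gini coefficient can be computed from a list of incomes with the standard discrete formula. This is a sketch; the toy incomes below are made up for illustration:

```python
def gini(incomes):
    """Gini coefficient via the sorted-incomes formula; 0 = perfect equality."""
    xs = sorted(incomes)
    n = len(xs)
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with i = 1..n over sorted x
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * sum(xs))

equal = gini([50_000] * 5)         # everyone identical -> 0.0
unequal = gini([1, 1, 1, 1, 100])  # one person holds nearly everything
```

The more income is concentrated at the top, the closer the coefficient gets to 1, which is what the comparison across groups turns on.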
Please use rot13 for spoilers.
An interesting article on "precrastination". Basically, some people spend more time and effort completing tasks immediately, even when it would be more efficient to complete them later. Also, this writer reads LessWrong and refers to one of the posts on akrasia in his other articles.
Coursera just started a course called Experimentation for Improvement. Is anyone interested in taking it together?
I have been thinking about the singularity argument in general: the proposition that a sufficiently advanced intellect can or will change the world by introducing technology that is literally beyond comprehension. I guess my question is this: is there some level of intelligence beyond which there are no possibilities it can't imagine, even if it can't actually go and do them?
Are humans past that mark? We can imagine things literally all the way past what is physically possible and/or what is constrained to realistic energy levels.
Has anybody written up a primer on "what if utility is lexically ordered, or otherwise not quite measurable in real numbers"? Especially in regard to dust specks?
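One way to make the "lexically ordered utility" idea concrete is to represent utility as a tuple whose first component strictly dominates the second. This is a sketch only; the torture/dust-speck framing and all numbers are illustrative, not a claim about what the right utility function is:

```python
# Utility as a tuple (-severe_harm, -mild_annoyance). Python compares tuples
# lexicographically, so no quantity of dust specks can ever outweigh a single
# unit of severe harm: the first component always decides first.
def lexical_utility(severe_harm, dust_specks):
    return (-severe_harm, -dust_specks)  # higher tuple = better outcome

torture_outcome = lexical_utility(severe_harm=1, dust_specks=0)
specks_outcome = lexical_utility(severe_harm=0, dust_specks=3**27)
```

Under this ordering the specks outcome is preferred to the torture outcome no matter how astronomically large the number of specks, which is exactly why such utilities can't be represented by any single real number: real-valued utilities always let enough small harms add up.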
So there's a MIRIxMountain View, but would it be redundant to have a MIRIxEastBay/SF? It seems like the MIRIx label is happily bestowed even on low-key research efforts, and considering the hacker-culture and rationality communities there, there may be interest in this.
I think I remember an app, discussed here, in which one guessed the probability of things, then logged whether they actually happened and kept track of one's record. Anyone know what it's called?
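The bookkeeping such an app needs is simple enough to sketch by hand while waiting for the name: log (stated probability, outcome) pairs and score them, for instance with the Brier score. The data below is made up, and the actual app may well score differently:

```python
def brier_score(forecasts):
    """Mean squared error of probabilistic forecasts; lower is better.
    Always answering 0.5 scores 0.25, so beating 0.25 beats pure hedging."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical log: (stated probability, 1 if the event happened else 0)
log = [(0.9, 1), (0.7, 1), (0.8, 0), (0.6, 1)]
score = brier_score(log)
```

Tracking this score over time is essentially what calibration training measures.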
Question for anyone who knows:
I've been getting "cannot connect to the real..." error messages in Google Chrome when trying to access several websites, which I gather has something to do with invalid certificates. I would like to know if going to Settings > Advanced > Manage Certificates and simply Removing everything under every tab will a) fix the problem and b) not break anything else. If not, then I would like to know what will.
If anyone uses org-mode in emacs to track their todo list, org-gamify is a way to add some gamification to your org. I haven't used it, but there's a decent introduction on how to use it on the git repo page.
Now that I'm on the job market, I'm considering changing my gmail address, but I'm having trouble deciding between the alternatives.
My current address (created in '05 or so) consists of two words. This has the advantage of being easy to say, but the second word is a bit long and I feel slightly silly writing it on a CV.
On the other hand, it's 2014 and almost every reasonable gmail address has already been taken. The exceptions in my case are a slightly l33t version of my name, a version of my name with vowels removed, and my name followed by a random numbe...
I'm trying to run a calibration-training session/potluck in Portland next Saturday for myself and any LessWrongians who'd like to join. Any lessons learned from people who have done calibration training themselves or run one?
Previous thread
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.