Some users might find this interesting: I've finished up 3 years of scraping/downloading all the Tor-Bitcoin darknet markets and have released it all as a 50GB compressed archive (~1.5TB uncompressed). See http://www.gwern.net/Black-market%20archives
I found this paper: Adults Can Be Trained to Acquire Synesthetic Experiences.
The goal of the study was to see if they could induce synesthesia artificially by forcing people to associate letters with colors. But the interesting part is that after 9 weeks of training, the participants gained 12 IQ points. I have read that increasing IQ is really difficult, and effect sizes this large are unheard of. So I found this really surprising, especially since it doesn't seem to have gotten a lot of attention.
EDIT: This is a Cattell Culture Fair IQ which uses 24 points as a standard deviation instead of 15. So it's more like 7.5 IQ points.
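The rescaling is just a ratio of standard deviations; a minimal sketch of the arithmetic (assuming the reported 12-point gain is on the Cattell SD-24 scale):

```python
# Convert an IQ gain measured on the Cattell scale (SD = 24)
# to the more common Wechsler-style scale (SD = 15).
def rescale_iq_gain(gain, sd_from=24, sd_to=15):
    return gain * sd_to / sd_from

print(rescale_iq_gain(12))  # 12 Cattell points = 0.5 SD = 7.5 standard points
```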
They had each participant do 30 minutes of training every day for 9 weeks, involving several different tasks designed to form associations between colors and letters. They also assigned colored reading material to read at home.
Participants took IQ tests before and after the training and gained an average of 12 IQ points. A control group also took the tests before and after but received no training and did not improve. The sample sizes are small, but the effect size may be large enough to compensate; the authors report a p-value of 0.008.
In the paper there are some quotes fr...
I was lucky enough to stumble upon LW a few months ago, right after deconverting from Christianity. I had a lot of questions, and people here have been incredibly, incredibly helpful. I've been directed to many great old posts, clicked on hyperlinks to hundreds more, and finished reading Rationality: AI to Zombies last month. But a very short time ago, I was one of those rare, overly trusting fundamentalist Christians who truly believed the entire Bible was God's Word... anyway, I made a comment or two sharing my old perspective, and people here seemed to find it interesting, so I thought I might as well share the few blog posts I've written, even though my Christian friends/family were my target audience.
Things I Miss About Christianity If I'm totally honest, there's actually a lot.
Atheists and Christians: Thinking More Similarly Than You Think Just some thought patterns I've observed. Doesn't apply too much to LWers.
Is Christianity Wildly Improbable? Talks about my apologetics class in college, motivated cognition, and some evidence against Christianity which Christians have a harder time responding to by simply repeating how God is above human reason.
Link from March that apparently hasn't been discussed here: Y Combinator's Sam Altman thinks AI needs regulation:
“The U.S. government, and all other governments, should regulate the development of SMI [Superhuman Machine Intelligence],”
“The companies shouldn’t have to disclose how they’re doing what they’re doing (though when governments gets serious about SMI they are likely to out-resource any private company), but periodically showing regulators their current capabilities seems like a smart idea,”
“For example, beyond a certain checkpoint, we could require development [to] happen only on airgapped computers, require that self-improving software require human intervention to move forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc.,”
The regulations should mandate that the first SMI system can't harm people, but that it should be able to sense other systems becoming operational.
Further, he’d like to see funding for research and development flow only to groups that agree to these rules.
Sounds sensible.
What is Omnilibrium? What are these links about? If this comment is a reply to something or making a point, what?
In a reddit AMA a couple of days ago, someone asked Sam Altman (president of Y Combinator) "How do you think we can best prepare ourselves for the advance of AI in the future? Have you and Elon Musk discussed this topic, by chance?" He replied:
Elon and I have discussed this many, many times. It's one of the things I think about most. Have some news coming here in a few months...
Any guesses on the news?
Good books on economics, investing?
Are there equivalent books to "Probability theory, the logic of science" and/or "The Feynman lectures on Physics" in economics or investing?
Who are the great authors of these fields?
I haven't read Feynman's lectures on physics, but if the idea is "someone really good at this explains how he thinks in an intuitive way", then Warren Buffett's letters to shareholders are the equivalent in investing.
Quote:
...We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants
I live in South Africa. We don't, as far as I know, have a cryonics facility comparable to, say, Alcor.
What are my options apart from "emigrate and live next to a cryonics facility"?
Also, I'm not sure if I'm misremembering, but I think it was Eliezer that said cryonics isn't really a viable option without an AI powerful enough to reverse the inevitable damage. Here's my second question, with said AI powerful enough to reverse the damage and recreate you, why would cryonics be a necessary step? Wouldn't alternative solutions also be viable? For ex...
I was just wondering about the following: testosterone as a hormone is closely linked to pretty much everything that is culturally considered masculine (muscles, risk-taking i.e. courage, sex drive, etc.), and thus it is not wrong to "essentialize" it as The He Hormone.
However, it seems estrogen does not work like that for women: surprisingly, it is NOT linked with many culturally feminine characteristics, and probably should NOT be essentialized as The She Hormone. For example, it crashes during childbirth: i.e. it has nothing to do wi...
It actually is not very odd for there to be a difference like this. Given that there are only two sexes, there only needs to be one hormone which is sex determining in that way. Having two in fact could have strange effects of its own.
The sheer number of ways sex can be determined amongst vertebrates is amazing, let alone other animals or microbes (there are fungi with 10,000 'sexes'/mating types...). I will restrict my examples to vertebrates.
As a rule, in most vertebrates (including humans and other organisms in which it is genetically determined) everything needed to make all the biology of both sexes is present in every individual, but a switch needs to be thrown to pick which processes to initiate.
Many reptiles use temperature during a critical developmental period with no sex chromosomes. Many fish too.
The XY system has evolved independently several times: an allele of a gene, or a new gene, appears that, when present, reliably leads to maleness regardless of what else is in the genome. For weird population-genetic reasons, this nucleates an expanding island of DNA that cannot recombine with the homologous chromosome and is free to degenerate, except for sex-determining factors and a few male-gamete-specific genes that migrate there over evolutionary time, until eventually the entire chromosome degenerates and you get a sex chromosome.
The ZW system has evolved multiple times, in which t...
People with full androgen insensitivity syndrome (never responding to androgens produced by gonads) or gonadal dysgenesis of various stripes (gonads fail to develop properly and don't make any hormones) usually wind up more or less externally normally female regardless of the state of their sex-associated karyotypes/genotypes (with the internal plumbing variable depending on the exact details). In this way, the pre-pubescent female state is probably the closest thing we have to a default, inasmuch as that means anything.
These people do, however, fail to naturally go through most of puberty (a few androgens are usually made by the adrenal glands in everyone regardless of sex but not much) which is an active switch being thrown regardless of sex. As such, the secondary female sex characteristics of sexual maturity are not exactly 'default' themselves in the same way.
Good Judgment Project has ended with season 4 and everyone's evaluations are available. They say they're taking down the site next month, so you may want to log in and make copies of everything relevant.
You can see my own stuff at https://www.dropbox.com/s/03ig3zr8j9szrjr/gjp-season4-allpages.maff - I managed to hit #41 out of 343, i.e. the top 12%. Not bad.
If I want to learn General Semantics, what is the best book for a beginner?
(Maybe it was already answered on LW, but I can't find it.)
New papers by Jan Leike and Marcus Hutter:
Solomonoff Induction Violates Nicod's Criterion http://arxiv.org/abs/1507.04121
On the Computability of Solomonoff Induction and Knowledge-Seeking http://arxiv.org/abs/1507.04124
It has been reported that a five-quark particle (a pentaquark) has been spotted at CERN's LHC.
http://www.bbc.com/news/science-environment-33517492
I am very happy that this apparently isn't a strange matter particle.
https://en.wikipedia.org/wiki/Strange_matter
At least not of a dangerous kind. For now, at least.
So, I hope it will continue without a major malfunction on the global (cosmic) scale.
Maybe machine learning can give us recommendations for gardening without hurting our backs.
"When changing directions, turn with the feet, not at the waist, to avoid a twisting motion."
“Push” rather than “pull” objects.
Is it worth it to learn a second language for the cognitive benefits? I've seen a few puff pieces about how a second language can help your brain, but how solid is the research?
Could someone be kind enough to share the text of Stuart Russell's interview with Science here?
Fears of an AI pioneer
John Bohannon
Science 17 July 2015:
Vol. 349 no. 6245 pp. 252
DOI:10.1126/science.349.6245.252
http://www.sciencemag.org/content/349/6245/252.full
...From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy". The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both se
Despite there being multiple posts on recommended reading, there does not seem to be any comprehensive and non-redundant list stating what one ought to read. The previous lists do not seem to cover much non-rationality-related but still useful material that LWers might not have otherwise learned about (e.g. material on productivity, happiness, health, and emotional intelligence). However, there still is good material on these topics, often in the form of LW blog posts.
So, what is the cause of the absence of a single, comprehensive list? Such a list sounds...
A few months ago I saw someone on Facebook try to have a calm discussion; being Facebook, of course, it failed. But I am interested in the idea and wanted to see if it can be carried out calmly here, knowing it is potentially controversial. At first I automatically felt negative about the discussion, but then I system-2'd it and realised I don't know what the answers might be:
The historic basis of relationships was for procreation and child rearing purposes. In the future I expect that to not be the case. either with designer-...
Do you have a source for "natural tendency for humans to be less attracted to close relatives than to others"? I am interested.
I made a tool to download all of my lesswrong comments. I think that it is useful data to have. In case anyone is interested it's available here: https://github.com/Houshalter/LesswrongCommentArchive
17/7 - Update: Thank you to everyone for their assistance. Here is a re-worked version of Father. It is unlisted, for testing purposes. If one happens to come across this post, please consider giving feedback regarding how long it holds your attention.
In the interests of privacy, please excuse the specialised account and lack of identifying personal information.
A bit of background: recently created a YouTube channel for the dual purposes of creating an online repository of works that can easily be hyperlinked, and establishing an alternative source ...
What are your thoughts on this AI failure mode: Assume an AI works by rewarding itself when it improves its model of the world (which is roughly Schmidhuber’s curiosity-driven reinforcement learning approach to AI), however, the AI figures out that it can also receive reward if it turns this sort of learning on its head: Instead of changing a model to make it better fit the world, the AI starts changing the world to make it better fit its model.
Has this been considered before? Can we see this occurring in natural intelligence?
Has there really been no rationality quotes thread since March this year?
Can someone explain this article in layman terms? I do not know any sort of quantum terminology, sorry.
Specifically I would like to know what this means:
The ESP is quite a mild assumption, and to me it seems like a necessary part of being able to think of the universe as consisting of separate pieces. If you can’t assign credences locally without knowing about the state of the whole universe, there’s no real sense in which the rest of the world is really separate from you.
Hi all, I'm new here so pardon me if I speak nonsense. I have some thoughts regarding how and why an AI would want to trick us or mislead us, for instance behaving nicely during tests and turning nasty when released and it would be great if I could be pointed in the right direction. So here's my thought process.
Our AI is a utility-based agent that wishes to maximize the total utility of the world based on a utility function that has been coded by us with some initial values and then has evolved through reinforcement learning. With our usual luck, somehow it's learnt that paperclips are a bit more useful than humans. Now the "treacherous turn" problem that I've read about says that we can't trust the AI if it performs well under surveillance, because it might have calculated that it's better to play nice until it acquires more power before turning all humans into paperclips. I'd like to understand more about this process. Say it calculates that the world with maximum utility is one where it can turn us all into paperclips with minimum effort, with the total utility of this world being U_AI(kill)=100. Second best is a world where it first plays nice until it is unstoppable, then turns us into paperclips. This is second best because it's wasting time and resources to achieve the same final result: U_AI(nice+kill)=99. Why would it possibly choose the second, sub-optimal option, which is the most dangerous for us? I suppose it would only choose it if it associated it with a higher probability of success, which means somehow, somewhere the AI must have calculated that the utility a human would give to these scenarios is different from what it is giving, otherwise we would be happy to comply. In particular, it must believe that for each possible world w:
if U_AI(kill) ≥ U_AI(w) ≥ U_AI(nice+kill), then U_human(w) ≤ U_human(nice+kill)
How is the AI calculating utilities from a human point of view? (Sorry, but this question comes straight out of my poor understanding of AI architectures.) Is it using some kind of secondary utility function that it applies to humans to guess their behavior? If the process that would motivate the AI to trick us is anything similar to this, then it looks to me like it could be solved by making the AI use EXACTLY its own utility function when it refers to other agents. Also note that the utilities must not be relative to the agent, but to the AI. For instance, if the AI greatly values its own survival over the survival of other agents, then the other agents should equally greatly value the AI's survival over their own. This should be easily achieved if, whenever the AI needs to look up another agent's utility for any action, it is simply redirected to its own.
This way the AI will always think we would love its optimum plan and would never see the need to lie to us, trick us, brainwash us, or engineer us in any way, as it would only be a waste of resources. In some cases it might even openly look for our collaboration if that makes the plan any better. Clippy, for instance, might say "OK guys, I'm going to turn everything into paperclips, can you please quickly get me the resources I need to begin with, then you can all line up over there for paperclippification. Shall we start?".
This also seems to make the AI indifferent to our actions, provided its belief regarding the identity of our utility functions is unchangeable. For instance, even while it sees us pressing the button to blow it up, it won't think we are going to jeopardize the plan. That would be crazy. Or it won't try to stop us from re-booting it. Considering that it can't imagine you not going along with the plan from that moment onward, it's never a good choice to waste time and resources to stop you. There's no need to stop you.
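The argument can be sketched as a toy model (all names and numbers here are illustrative, taken from the example above, not from any real AI architecture): the AI scores plans by its own utility function, and its model of human utilities is simply redirected to that same function, so it predicts compliance and never values deception.

```python
# Toy sketch of the "redirected utility" proposal from the post.
# The AI's utilities for the two plans discussed above:
U_AI = {"kill": 100, "nice+kill": 99}

def predicted_human_utility(plan):
    # The redirection: when the AI looks up a human's utility for a plan,
    # it is given its OWN utility for that plan instead of a learned model.
    return U_AI[plan]

def choose_plan():
    # Since the AI predicts humans share its utilities exactly, it expects
    # compliance with any plan; deception ("nice+kill") only wastes
    # resources, so it simply picks the plan with the highest utility.
    return max(U_AI, key=U_AI.get)

print(choose_plan())  # "kill" - the direct plan, no treacherous turn
```

The point of the sketch is only that the condition in the inequality above can never hold: the AI's model of U_human is identical to U_AI by construction, so it never predicts resistance and never sees a reason to play nice under surveillance.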
Now obviously this does not solve the problem of how to make it do the right thing, but it looks to me that at least we would be able to assume that a behavior observed during tests should be honest. What am I getting wrong? (don't flame me please!!!)
Hi all, thanks for taking your time to comment. I'm sure it must be a bit frustrating to read something that lacks technical terms as much as this post, so I really appreciate your input. I'll just write a couple of lines to summarize my thought, which is to design an AI that: 1- uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival"); 2- doesn't try to learn a utility function for humans or for other agents, but uses for everyone the same ...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.