Here is something I'd like to see: you give the machine the formally specified ruleset of a game (go, chess, etc.), wait while reinforcement learning does its job, and out comes a world-class computer player.
Here is one reason, but it's up for debate:
Deep learning courses rush through logistic regression and usually just mention SVMs. Arguably, to understand deep learning it's important to take the time to really, deeply understand how these linear models work, both theoretically and practically, both on synthetic data and on high-dimensional real-life data.
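For concreteness, here is the kind of hands-on exercise I have in mind, as a minimal sketch assuming scikit-learn (the data and models here are my own illustration, not from any particular course):

# Fit logistic regression and a linear SVM on the same synthetic data
# and compare the linear decision boundaries they learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

for model in (LogisticRegression(), LinearSVC()):
    model.fit(X, y)
    # Both are linear models, but the losses differ (log loss vs. hinge),
    # so the learned weights and margins differ too.
    print(type(model).__name__, model.coef_, model.intercept_)

Watching how the two boundaries move as you change the class overlap is exactly the kind of exercise that pays off later in deep learning.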
More generally, there are a lot of machine learning concepts that deep learning courses don't have enough time to introduce properly, so they just mention them, and you might get a mistaken impression ab...
In the last two days, working alone, I wrote a prototype that can take a whiteboard photo and automatically turn it into a mindmap-like zoomable chart. Pieces of the chart can then be rearranged and altered individually:
https://prezi.com/igaywhvnam2y/whiteboard-prezi-2015-12-04-152935/
This was part of a company hackathon, and I had some infrastructure to help me with the visualization, but for the shape recognition/extraction it was just me and the nasty Python bindings for OpenCV.
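For the curious, the extraction step was conceptually something like this (a minimal sketch, not the actual hackathon code; the input file name is made up, and the findContours signature assumes OpenCV 4.x):

import cv2

img = cv2.imread("whiteboard.jpg")  # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Whiteboard photos have uneven lighting, so a single global threshold
# fails; adaptive thresholding handles the gradients.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 15)

# Extract the outer contours of the drawn shapes.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Keep only contours big enough to be deliberate marks, not noise;
# each bounding box becomes one independently movable chart piece.
pieces = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]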
Oh my god, look at the assault numbers for 0-to-4-year-olds, both ED visits and deaths. (Assault is the leading TBI-related cause of death for 0-to-4-year-olds.) Some of those "falling" 4-year-olds were assaulted.
There are worse fates than not being able to top your own discovery of general relativity.
That's not a top-level comment, so it's excluded by my script from this version. I won't manually edit the output, sorry. There's another version where non-top-level comments are kept, too. Your quote is in there:
Top quote contributors by statistical significance level:
Top quote contributors by karma score collected in 2014:
Top quote contributors by total (2009-2014) karma score collected:
Top original authors by karma collected:
Top original authors by number of quotes. (Note that authors and mentions are not disambiguated.)
Top short quotes (2009-2014) by karma per character:
Nice. If we analyze the game using Vitalik's 2x2 payoff matrix, defection is a dominant strategy. But now I see that's not how game theorists would use this phrase. They would work with the full 99-dimensional matrix, and there defection is not a dominant strategy, because as you say, it's a bad strategy if we know that 49 other people are cooperating, and 49 other people are defecting.
There's a sleight of hand going on in Vitalik's analysis, and it is located at the phrase "regardless of one’s epistemic beliefs [one is better off defecting]". I...
I don't know too much about decision theory, but I was thinking about this a bit more, and my tentative conclusion so far is that "dominant strategy" is just a flawed concept.
If the agents behave superrationally, they do not care about the dominant strategy, and they are safe from this attack. And the "super" in superrational is pretty misleading, because it suggests some extra-human capabilities, but in this particular case it is so easy to see through the whole ruse that one has to be pretty dumb not to behave superrationally. (That is, not...
They're running on the blockchain, which slows them down.
They can follow the advice of any off-the-blockchain computational process if that is to their advantage. They can even audit this advice, so that they don't lose their autonomy. For example, Probabilistically Checkable Proofs are designed exactly for that setup: when a slow system has to cooperate with an untrusted but faster one. There's the obvious NP case, when the answer by Merlin (the AI) can be easily verified by Arthur (the blockchain). But the classic IP=PSPACE result says that this kind of coop...
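To make the NP case concrete, here is a toy illustration (entirely my own sketch, nothing blockchain-specific): Merlin does the expensive search, Arthur only does the cheap check.

from itertools import product

def merlin_solve(clauses, n):
    # Expensive exhaustive search for a satisfying assignment
    # (the fast off-chain process can afford this).
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

def arthur_verify(clauses, bits):
    # Cheap check, linear in the formula size (this is all the slow
    # on-chain verifier has to run).
    return all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

# (x1 or not x2) and (x2 or x3), literals as signed 1-based indices
clauses = [[1, -2], [2, 3]]
witness = merlin_solve(clauses, 3)
assert witness is not None and arthur_verify(clauses, witness)

The PCP and IP=PSPACE results extend this verification asymmetry far beyond NP, which is the point above.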
An advanced DAO (decentralized/distributed autonomous organization), the way Vitalik imagines it, is a pretty believable candidate for an uncontrolled seed AI, so I'm not sure Eliezer and co. share Vitalik's apparent enthusiasm regarding the convergence of these two sets of ideas.
I was unsurprised but very disappointed when it turned out there are no other posts tagged one_mans_vicious_circle_is_another_mans_successive_approximation. But Shalizi has already used the joke once in his lecture notes on Expectation Maximization.
Tononi gives a very interesting (weird?) reply: Why Scott should stare at a blank wall and reconsider (or, the conscious grid), where he accepts the very unintuitive conclusion that an empty square grid is conscious according to his theory. (Scott's phrasing: "[Tononi] doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard.") Here is Scott's reply to the reply:
I have no problem with an arbitrary border. I wouldn't even have a problem with, for example, old people gradually shrinking in size to zero just to make the image more aesthetically pleasing.
Wow, I'd love to see some piece of art depicting that pink worm vine.
No family history.
Can you ask the second doctor to examine you to at least the same standard as the first one?
Unfortunately, no. See my answer to Lumifer.
What he proposed is in fact laser iridotomy, although they called it laser iridectomy.
It was less than a disagreement. I'm sorry that I over-emphasized this point. The first time the pressure was 26/18 mmHg, the second time 19/17. The second doctor said that the pressure can fluctuate, and her equipment is not good enough to settle the question. (She is an I-don't-know-the-correct-term national health service doctor; the first one is an expensive private doctor with better equipment and more time per patient.)
My eye doctor diagnosed closed-angle glaucoma and recommends an iridectomy. I think he might be a bit too trigger-happy, so I followed up with another doctor, and she didn't find the glaucoma. She carefully stated that the first diagnosis could still be the correct one, since the first examination was more complete.
Any insights about the pros and cons of iridectomy?
Get a third independent opinion.
Yes. To be exact, not all capitalized words, but all capitalized words that my English spellchecker does not recognize. With all capitalized words the list would start like this:
Of course the spellchecking method is itself a source of errors. In previous years I never felt like manually correcting these, but checking now, it seems these were the main victims:
Graham is actually number one. I added them to this list, and also to the "Top ori...
You are #2 by karma collected from 2009 to 2013, not just in 2013. You earned an average of 8.20 karma points from 5 quotes in 2013, and an average of 11.05 karma points from 81 quotes in total, which corresponds to a p-value of about 0.5 in my statistical test.
Top short quotes (2009-2013) by karma per character:
Top original authors by karma collected:
Top original authors by number of quotes. (Note that authors and mentions are not disambiguated.)
Top quote contributors of 2013 by statistical significance level:
Top quote contributors by karma score collected in 2013:
Top quote contributors by total (2009-2013) karma score collected:
Thanks!
Is this the latest open thread? Generally, how do I find the latest open thread? The tag does not help.
Amusingly, Google Chrome autofill still remembered my answers from last year. This made filling in the demographic part a bit faster, and allowed a little game: after giving a probability estimate, I could check my answer from a year ago.
The smaller thing could be a human, too. A giant, good-looking but creepy child holding a small, vulnerable human in one hand, looking at it emotionlessly. But MIRI will not like this version, because they really want to avoid anthropomorphizing the AI.
I fully agree with this point, and I fully agree with Page's goals. But I think there are things here that a simple total-years-of-potential-life-lost framework cannot capture. As you might have guessed even from my first comment, this issue is very personal to me. Not long ago a good friend died after terrible suffering, leaving three young children behind. That's very sad, and I really don't know for what values of N this could be balanced in a utilitarian sense by lengthening the healthy old age of N of my friends by 10 years. Obviously, such trade-offs are taboo, but even if I try to force myself into some detached outside view, I still believe that the number N must be large.
“Are people really focused on the right things? One of the things I thought was amazing is that if you solve cancer, you’d add about three years to people’s average life expectancy,” Page said. “We think of solving cancer as this huge thing that’ll totally change the world. But when you really take a step back and look at it, yeah, there are many, many tragic cases of cancer, and it’s very, very sad, but in the aggregate, it’s not as big an advance as you might think.” (Larry Page as quoted in the Time article)
This is something like the ecological falla...
Looking at the individual level, most of us have had close friends who lost 30 years of potential life.
Suppose you could either extend the life of one close friend by 30 years, or the lives of all of your friends by 10 years. (Hopefully you have more than three friends.) Page is pointing out that the second could possibly be on the table, but it wouldn't be obvious because we're so used to treating rare serious diseases instead of making everyone a bit healthier or live a bit longer on the margins.
Anyway, the only reason that we lose "only 3 years" to cancer is that something else is going to kill us not long after if cancer doesn't. However, if we were able to prevent all other forms of death, cancer would still kill all of us eventually.
I tend to think of curing cancer as not just "adding 3 years of life", but as a small but vital part of developing extreme medical (that is, organic) longevity.
I am not a physicist, but this stack exchange answer seems to disagree with your assessment: What are the primary obstacles to solve the many-body problem in quantum mechanics?
I did exactly that after looking at this thread, and only spotted your comment when I wanted to post the results.
I skipped some obvious refinements as this was a 5 minute project.
unrestricted Turing test passing should be sufficient unto FOOM
I tend to agree, but I have to note the surface similarity with Hofstadter's disproved "No, I'm bored with chess. Let's talk about poetry." prediction.
I was trying to position the paper in terms of LW opinions, because my target audience was LW readers. (That's also the reason I mentioned the tangential Eliezer reference.) It's beneath my dignity to list all the different philosophical questions where my opinion differs from the LW consensus, so let's just say that I used the term as a convenient reference point rather than a creed.
If it really has only finitely many utility levels, then for a sufficiently small epsilon and some even smaller delta, it will not care whether it ends up in Hell with probability epsilon or probability delta. (A utility that takes only finitely many values, viewed as a function of the probability of Hell, is a step function, so it is constant on some small interval of probabilities; pick epsilon and delta from that interval.)
I removed the broken index.html, sorry. Now you can see the whole (messy) directory. The README is actually a list of commands with some comments, the source code consists of parse.py and convolution.py.
When I stated that the middle is roughly exponential, this was the graph that I was looking at:
# kernel density estimate of the karma scores
d <- density(karma)
# log-density versus karma: an exponential middle shows up as a straight line
plot(log(d$y) ~ d$x)
I don't do this for a living, so I am not sure at all, but if I really, really had to make this formal, I would probably use maximum likelihood to fit an exponential distribution on the relevant interval, and then a Kolmogorov-Smirnov test. It's what shminux said, except there is probably no closed formula, because the cutoffs complicate things. And at least one of the cutoffs is really necessary, because below 3 it is obviously not exponential.
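Something like this, say (a rough sketch of what I mean, not code I actually ran; the data file name is hypothetical, and note that estimating the parameter from the same sample, plus the right cutoff, makes the KS p-value only approximate):

import numpy as np
from scipy import stats

karma = np.loadtxt("karma.txt")  # hypothetical dump of the raw scores
lo, hi = 3, 60                   # the cutoffs discussed here
sample = karma[(karma >= lo) & (karma <= hi)]

# For a left-truncated exponential, the maximum likelihood scale is just
# the mean excess over the cutoff; the right cutoff breaks this closed
# formula, so treat the result as an approximation.
scale = sample.mean() - lo

d, p = stats.kstest(sample, "expon", args=(lo, scale))
print(d, p)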
I am afraid I don't understand your methodology. What is a rank-versus-value plot supposed to look like for an exponentially distributed sample?
It is roughly exponential in the range between 3 and 60 karma.
You can find the raw data here.
Edit: I didn't spot gwern's more careful analysis. I am still digesting it. gwern, you should use the above link; it contains the below-10 quotes, too.
I think I misunderstand your definition. Let feature a be represented by x_1 > 0.5, and let feature b be represented by x_2 > 0.5. Let the x_i be iid uniform on [0, 1]. Isn't that a counterexample to (a and b) being linearly representable?
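A quick numerical check of this intuition (my own sketch, assuming scikit-learn; not a proof):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(size=(100_000, 2))
y = (X[:, 0] > 0.5) & (X[:, 1] > 0.5)  # the feature (a and b)

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))  # stays visibly below 1.0

No single hyperplane separates the corner region {x_1 > 0.5, x_2 > 0.5} from its L-shaped complement, so no linear classifier over x can represent (a and b) exactly.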