I have another theory on how Deep Learning works: http://lesswrong.com/lw/m9p/approximating_solomonoff_induction/
The idea is that neural networks are a (somewhat crude) approximation of Solomonoff induction.
Basically every learning algorithm can be seen as a crude approximation of Solomonoff induction. What makes one approximation better than the others?
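To make the comparison concrete, here is a toy sketch of Solomonoff-style induction (entirely my own illustration, not from the linked post): hypotheses are bit-strings interpreted as repeating programs, weighted by a 2^-length simplicity prior, and prediction marginalizes over all hypotheses consistent with the data.

```python
from itertools import product

def solomonoff_toy(observed, max_len=8):
    """Crude Solomonoff-style predictor over a toy 'program' space.

    A program here is just a bit-string p, interpreted as the infinite
    repetition p p p ...; its prior weight is 2**-len(p). We keep only
    programs consistent with the observed prefix and return the
    posterior probability that the next bit is '1'.
    """
    weight_one = weight_total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            # Output of this program, truncated to the length we need.
            out = p * (len(observed) // n + 2)
            if out.startswith(observed):       # consistent with the data?
                w = 2.0 ** -n                  # simplicity prior
                weight_total += w
                if out[len(observed)] == "1":
                    weight_one += w
    return weight_one / weight_total

# Alternating data: short programs like "01" dominate, so the
# predicted next bit is '0' with high probability.
print(solomonoff_toy("010101"))
```

Every practical learner replaces "all programs" with some restricted, efficiently searchable hypothesis class; the question above is which restrictions lose the least.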
I don't see why this article is on -1 karma at the moment. It's an interesting topic.
As a general rule, if something is a problem, the solution needs to deal with the problem, not with its proxy. The problem is "army of sockpuppet accounts", not "downvotes" per se, therefore a successful solution must somehow address the sockpuppetting itself.
I don't want to give Eugine new ideas, but banning the downvotes would probably just make him change strategy. I can imagine two powerful attack strategies that would work if (a) downvotes are banned, or even (b) all votes are banned.
The successful solution must:
I think there are two essential approaches to this:
These two can come with different flavors and combinations. For example, we could have an invisible whitelist of trusted users, in general treat votes by trusted and untrusted voters equally, but also provide an automatic warning to the moderators if the votes given by trusted vs untrusted voters differ dramatically (for example, if 9 of 10 trusted users upvoted a comment, but 30 of 40 untrusted users downvoted it). This is just an example; it could be made more sophisticated, but that would require more programming resources and computing power.
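The warning rule in the example above could be as simple as comparing approval rates between the two groups; this is a hypothetical sketch (function name and threshold are mine), not an actual site feature:

```python
def divergence_warning(trusted_votes, untrusted_votes, threshold=0.5):
    """Flag a comment for moderators when trusted and untrusted voters
    disagree sharply. Votes are lists of +1 / -1 values."""
    def approval(votes):
        # Fraction of upvotes among this group's votes, or None if no votes.
        return sum(1 for v in votes if v > 0) / len(votes) if votes else None

    t, u = approval(trusted_votes), approval(untrusted_votes)
    if t is None or u is None:
        return False                    # not enough data to compare
    return abs(t - u) >= threshold      # e.g. 90% vs 25% approval -> warn

# The example from the comment: 9/10 trusted upvotes vs 30/40 untrusted downvotes.
print(divergence_warning([1] * 9 + [-1], [1] * 10 + [-1] * 30))  # -> True
```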
Perhaps only accounts that have made a discussion post with a few upvotes should be allowed to downvote at all
Eugine would simply upvote posts made by his sockpuppets by his other sockpuppets. In the very best case, this would force him to write one half-decent post per sockpuppet.
limits per week and per user to be downvoted
Limits per sockpuppet = more sockpuppets.
perhaps there should be a per-user limit on downvoting of sufficiently old comments
Maybe downvoting of sufficiently old comments should be limited in general, not just per user. Just like on Reddit you cannot vote on too old stuff. (Question is, how old is "sufficiently old"; on Reddit that means a few months.)
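A Reddit-style archive cutoff is cheap to implement; here is a minimal sketch, assuming a 90-day cutoff (the actual number would be a policy choice):

```python
from datetime import datetime, timedelta

ARCHIVE_AGE = timedelta(days=90)   # assumed cutoff, like Reddit's "few months"

def vote_allowed(comment_posted_at, now=None):
    """Reject votes on comments older than the archive cutoff.
    A hypothetical sketch of the Reddit-style rule discussed above."""
    now = now or datetime.utcnow()
    return now - comment_posted_at < ARCHIVE_AGE
```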
Dealing with bots is hard, banning downvotes is easy. Ideally, with infinite resources, the bots would be eliminated.
In the very best case, this would force him to write one half-decent post per sockpuppet.
That's actually a huge inconvenience for him: writing a good post, using the puppet to mass-downvote, then an hour later the puppet is caught because it is mass-downvoting, and the votes are reverted.
Limits per sockpuppet = more sockpuppets
Presumably there is a cost per puppet. Combining this with my suggestion above would mean more articles for him to write...
Here's the problem with talking x-risk with cynics who believe humanity is a net negative, and also a couple possible solutions.
Frequently, when discussing the great filter, or averting nuclear war, someone will bring up the notion that it would be a good thing. Humanity has such a bad track record on environmental responsibility and on human rights abuses toward less advanced civilizations that the planet, and by extension the universe, would be better off without us. Or so the argument goes. I've even seen some countersignaling severe enough to argue, somewhat seriously, in favor of building more nukes and weapons, out of a vague but general hatred for our collective insanity, politics, pettiness, etc.
Obviously these aren't exactly careful, step by step arguments, where if I refute some point they'll reverse their decision and decide we should spread humanity to the stars. It's a very general, diffuse dissatisfaction, and if I were to refute any one part, the response would be "ok sure, but what about [lists a thousand other things that are wrong with the world]". It's like fighting fog, because it's not their true objection, at least not quite. It's not like either of us feels like we're on opposite sides of a debate or anything though, so usually pointing out a few simple facts is enough to get a concession that there are exceptions to the rule "humanity sucks". However, obviously refuting all thousand things, one by one, isn't a sound strategy. There really is a lot of bad stuff that humanity has done, and will continue to do I'm sure.
Usually, I try to point at broad improving trends like infant mortality, war, extreme poverty, etc. I'll argue that the media biases our fears by magnifying all the problems that remain. I paint a rosy future of people fighting debtors' prisons in the past, debating universal healthcare today, and in the future arguing fiercely over whether money and work are needed at all in their post-scarcity Star Trek economy. Political rights for minorities yesterday, social justice today, argue over any minor inconveniences tomorrow. Starvation yesterday, healthy food for all today, gourmet delicacies free next to drinking fountains tomorrow. I figure they're more likely to accept a future where we never stop arguing, but do so over progressively more petty things, and never realize we're in a utopia.
However, I think I might have better luck trying to counter-counter signal. "Yeah, humanity is pretty messed up, but why do you want to put us out of our misery? Shouldn't we be made to suffer through climate change and everything else we've brought on ourselves, instead of getting off easy? Imagine another thousand years of inane cubicle work and a dozen more Trump presidencies. Maybe we'll learn our lesson." [Obviously, I'm joking here.]
I think this might have the advantage of aligning their cynicism with their more charitable impulses, at least the way my conversations tend to go. And there's no impulse to counter-counter-counter-signal, because I've gone up a meta-level and made the counter-signaling game explicit, which releases all the fun available from being contrarian, and moves the conversation toward new sources of amusement. I'll bet we could then proceed to have interesting discussions on how to solve the world's problems. If whoever I'm musing with comes up with a few ideas of their own, maybe they'll even take ownership of the ideas, and start to actually care about saving the world in their own way. I can dream, I suppose.
There are both good and bad aspects of the human race, and our future could easily contain a lot which is bad. However, that is as much a reason to work for improvement as it is a reason to support our own destruction.
So it's a half full/half empty situation.
There is some progress on this, but overall changes to the codebase have been slow-going. I've pushed for doing things the right way, even if it takes longer, rather than quicker attempts that are less likely to work.
Of the three pieces that I think are useful, one has been implemented, another written but not yet merged (it needs a bit more work), and a third has not yet been written. If you'd like to contribute coding effort, this issue is my highest priority of the open issues with no pull requests and seems like it should be fairly simple to me.
I definitely think we should ban downvotes, at least temporarily. It is also clear that Eugine has an army of automated sockpuppet accounts that are repeatedly downvoting this entire thread. At a later stage something should be done about this, for example limiting the ability to mass-create accounts (e.g. with Google's "I'm not a robot" check) and limiting the ability of new accounts to downvote. Perhaps only accounts that have made a discussion post with a few upvotes should be allowed to downvote at all, and even then with limits per week and per user to be downvoted. Also, perhaps there should be a per-user limit on downvoting of sufficiently old comments, so that even with an army of bots you cannot mass-downvote people by attacking all their old content.
Overall it seems we have given out downvoting privileges like candy and now we are reaping the consequences...
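The stacked conditions proposed in the comment above (earned privilege, weekly cap, per-target cap) amount to a single permission check. A sketch, where every threshold and name is a made-up placeholder:

```python
from collections import Counter
from dataclasses import dataclass, field

WEEKLY_CAP = 20        # made-up limit: total downvotes per voter per week
PER_TARGET_CAP = 5     # made-up limit: downvotes of any one author per week

@dataclass
class Voter:
    has_upvoted_discussion_post: bool
    downvotes_this_week: list = field(default_factory=list)  # target usernames

def may_downvote(voter, target_author):
    """Hypothetical permission check combining the rules proposed above."""
    if not voter.has_upvoted_discussion_post:   # must have earned the privilege
        return False
    if len(voter.downvotes_this_week) >= WEEKLY_CAP:
        return False
    if Counter(voter.downvotes_this_week)[target_author] >= PER_TARGET_CAP:
        return False
    return True
```

Note how the per-target cap blocks the "attack all their old content" pattern even for an account that has earned downvote rights.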
Yeah, we definitely need to ban downvotes. This is ridiculous.
By how many orders of magnitude? Would you play Russian Roulette for $10/day? It seemed to me that implicit in your argument was that even if someone disagrees with you about the expected value, an order of magnitude or so wouldn't invalidate it. There's a rather narrow set of circumstances where your argument doesn't apply to your own situation. Simply asserting that you will sign up soon is far from sufficient. And note that many conditions necessitate further conditions; for instance, if you claim that your current utility/dollar ratio is ten times what it will be in a year, then you'd better not have turned down any loans with APY less than 900%.
And how does the value of cryonics go up as your mortality rate does? Are you planning on enrolling in a program with a fixed monthly fee?
By how many orders of magnitude? Would you play Russian Roulette for $10/day?
Back of the envelope I would say my chances of dying in the next 6 months and also being successfully cryopreserved (assuming I magically completed the signup process immediately) are about 1 in 10000. That trades off against using my time and money at a time when I'm short of both.
Then you have the problem that I'm not in the USA (I plan to eventually move, once my career is strong enough to score the relevant visa); being in the US is the best way to ensure a successful, timely suspension. If you are in Europe you have to both pay more for transport and you will be damaged more by the long journey, assuming you die unexpectedly in Europe.
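The trade-off above is just an expected-value comparison. With placeholder numbers (only the 1-in-10000 figure comes from the comment; the dollar values are assumptions for illustration):

```python
p_die_and_preserved = 1e-4      # from the comment: ~1 in 10000 over 6 months
value_if_preserved = 1_000_000  # assumed subjective value of preservation, $
cost_six_months = 600           # assumed fees plus signup effort over 6 months, $

expected_benefit = p_die_and_preserved * value_if_preserved
print(expected_benefit, expected_benefit > cost_six_months)  # 100.0 False
```

Under these particular assumptions, delaying signup by six months has positive expected value; the conclusion flips if the subjective value or the mortality estimate rises by an order of magnitude.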
OTOH it looks like the mortality rate in your late 20s in the EU is less than half that in the US.
Yeah, but I'm not planning on magically becoming a randomly chosen 29-year-old American male. If you condition on being wealthy and living in Mountain View or something, I would expect the difference to go away.
then you'd better not have turned down any loans with APY less than 900%.
Since I was unemployed with no assets, I wasn't (until very recently, i.e. yesterday) eligible for any kind of personal loan.
By how many orders of magnitude?
The mortality rate in your late 20s is low, and the kinds of death common at that age (accidents, sudden deaths, murder) are already very bad for cryo, which compounds the problem.
And how does the value of cryonics go up as your mortality rate does?
Well, obviously it is worth more to mitigate death if your death is more likely, especially when the kinds of ways you die when young are bad for your cryo chances.
I am interested in this line of research, though I feel it needs a lot more work than one paper.
A key question is whether we can dig down into the relationship between environments and learning agents. Are there low complexity environments that neural networks do badly in?
What is really essential about our laws of physics to create a world that neural networks do relatively well in?