Comment author: Maybe_a 27 April 2016 01:17:35PM 5 points [-]

I'd think 'ethical' in a review board has nothing to do with ethics. It's more of a PR-wary review board. Limiting science to status-quo-bordering questions doesn't seem maximally efficient, but it is a reasonable safety precaution. However, the board's typical view might be skewed away from real estimates of safety. For example, genetic modification of humans is probably minimally disruptive biological research (compared to, say, biological weapons), though it is considered controversial.

Comment author: Luke_A_Somers 16 July 2014 07:07:24PM *  1 point [-]

My town, with more than 1001 voters in each ward, had around twenty elections last year, and two of them were decided by one vote. Your model says that this should happen somewhere in America much less than once in a billion years. The fact of the matter is, voters are not binomially distributed (sometimes this lowers the probabilities further, but sometimes it raises them a lot).

Also, elected officials change their behavior based on margins, and on the size and habits of the population of voters. Politicians pay a lot more attention to vote-giving populations than non-vote-giving populations, for instance. The number of minor thresholds that can have some impact is large.

And that's before you multiply the impact of your reasoning by the population who might follow it.

Comment author: Maybe_a 17 July 2014 11:00:29AM *  0 points [-]

My town ...

Suppose 20% of wards are swung by one vote; that gives each voter a 1 in (5 * number of voters) chance of affecting a vote cast at the next level, if that's how the US system works?

... elected officials change their behavior based on margins ...

Which is an exercise in reinforcing prior beliefs, since margins alone are obviously insufficient data.

Politicians pay a lot more attention to vote-giving populations...

Are politicians equipped with a device to detect voters and their needs? If not, then it's lobbying, not voting, that matters.

...impact of your reasoning by the population who might follow it.

Population following my reasoning: me.

P.S. Thanks for hinting at another question, which might be of actual use to me.

Comment author: kilobug 13 July 2014 07:55:14AM 0 points [-]

I think that's a common misconception that comes from not actually running the numbers. Individually, we have a very low chance of changing anything in large-scale problems, but the effect of changing anything in large-scale problems is enormous. When dealing with a very small chance of a very major change, we can't just use our intuitions (which break down); we need to actually run the numbers.

And when that's done, as it was in this post, it says that we should care, the order of magnitude of the change being higher than the order of magnitude of our powerlessness.

Comment author: Maybe_a 13 July 2014 08:34:53PM *  0 points [-]

Absolutely, shutting up and multiplying is the right thing to do.

Assume: simple majority vote, 1001 voters, 1,000,000 QALY at stake, votes binomially distributed B(p=0.4), no messing with other people's votes, and voting itself doesn't give you QALY.

My vote swings the outcome iff 500 <= B(1001, 0.4) < 501, which has probability 5.16e-11, so voting is advised only if it takes less than 27 minutes.
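The arithmetic can be checked with a short script. This is only a sketch: the pmf is evaluated in log-space to avoid float underflow (0.4**500 is tiny but representable; the naive product is not), and the exact figure depends on which outcomes count as a swing, so it may differ from the quoted value by a small factor.

```python
import math

def binom_log_pmf(k: int, n: int, p: float) -> float:
    """Log of the binomial pmf, computed in log-space so that terms
    like 0.4**500 never underflow to zero mid-calculation."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

# Probability that the tally lands exactly on the swing threshold
p_swing = math.exp(binom_log_pmf(500, 1001, 0.4))

# Expected QALY gain from voting, converted into a break-even time budget
qaly_at_stake = 1_000_000
minutes_per_qaly = 365.25 * 24 * 60   # one QALY expressed in minutes
break_even = p_swing * qaly_at_stake * minutes_per_qaly

print(f"P(swing) = {p_swing:.2e}, break-even = {break_even:.0f} minutes")
```

Either way the conclusion stands: the swing probability is of order 1e-11, and the break-even time budget is a few tens of minutes.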

Realistically, the usefulness of voting is far lower, because:

  • actual populations are huge, and with them the chance of swinging the vote falls;

  • QALYs are not quite utils (e.g., others' QALYs count the same way as your own);

  • you will rarely see such huge rewards (if 1 QALY ~ $50,000, our scenario gave each voter a free $50M).

So, people who 'need your vote' in real-world scenarios are either liars or just hopeless.

Comment author: Maybe_a 10 July 2014 09:23:21AM 7 points [-]

I don't care, because there's nothing I can do about it. The same applies to all large-scale problems, like national elections.

I do understand that this point of view creates a 'tragedy of the commons', but there's no way I can force millions of people to do my bidding on this or that.

I also do not make interventions to my lifestyle, since I expect AGW effects to be dominated by socio-economic changes over the next half-century.

Comment author: passive_fist 02 January 2014 09:11:11AM 8 points [-]

I've been mainly working on my graduate studies, but as a side project I've been developing a more efficient way of doing Solomonoff induction. The idea is that Solomonoff induction requires some language over which to construct programs, but most 'formal' languages we know of - such as Turing machines - do not fit well with observed reality, in that you wind up adding a very large constant to the complexity of your programs. So the goal is to find some computational substrate that does not require such a large additive complexity.

The most obvious candidate would be probabilistic graphical models (such as Bayesian networks), but searching through Bayesian networks is extremely hard because there are a huge number of possible network topologies. So my idea is to restrict the network topologies. It is known that ANNs can, in principle, simulate any function, but I suspect most ANNs are too crude. Instead, I've been working on combining ANN approaches with hierarchical Bayesian networks. I suspect the resulting structure has enough power to produce most sequences found in real life while also being very easy to optimize over, due to the recursive structure and the fact that good learning algorithms for ANNs are known. My initial experiments in this direction have been positive; the main goal now is to derive a more efficient learning algorithm and to prove bounds on time- and space-complexity.
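As a toy illustration of the weighting involved (this is emphatically not the poster's system: the hypothesis 'language' here is just repeating bit-strings, which sidesteps the search problem entirely), a Solomonoff-style predictor enumerates programs, keeps those consistent with the observations, and weights each by the universal-prior-style penalty 2^-length:

```python
from itertools import product

def toy_solomonoff_predict(observed: str, max_len: int = 8) -> float:
    """Toy Solomonoff-style prediction: hypotheses are short bit-string
    'programs' whose infinite repetition must reproduce the observed
    prefix; each surviving hypothesis gets prior weight 2**-length.
    Returns the posterior probability that the next bit is 1."""
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for bits in product('01', repeat=length):
            prog = ''.join(bits)
            # unroll the program far enough to cover observed + 1 bit
            stream = prog * (len(observed) // length + 2)
            if stream.startswith(observed):
                next_bit = int(stream[len(observed)])
                weights[next_bit] += 2.0 ** (-length)
    return weights[1] / (weights[0] + weights[1])

# After observing strict alternation, the shortest consistent program
# ("01" repeated) dominates, so the model strongly expects a 0 next.
p1 = toy_solomonoff_predict("010101")
print(f"P(next bit = 1 | 010101) = {p1:.3f}")
```

The additive-constant point shows up even here: switching to a different toy language would shift every hypothesis length by some constant, which is harmless asymptotically but dominates at the small data sizes we actually face.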

Comment author: Maybe_a 03 January 2014 02:25:24PM *  1 point [-]

Are artificial neural networks really Turing-complete? Yep, they are [Siegelmann, Sontag '91]. The number of neurons in the paper is , with rational edge weights, so it's really Kolmogorov-complex. This, however, doesn't tell us whether we can build good machines for specific purposes.

Let's figure out how to sort a dozen numbers with λ-calculus and with sorting networks. It stands to notice that the lambda expression is O(1) in size, whereas the sorting network is O(n (log n)^2).

Batcher's odd-even mergesort would be O((log n)^2) levels deep, and given that one neuron is used to implement each comparator, this results in O(n!) possible connections (around per level). That we need ~200 bits of insight to sort a dozen numbers with that specific method does not mean there is no cheaper way to do it, but it sets a reasonable upper bound.
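For concreteness, Batcher's construction can be sketched for a power-of-two number of wires; a dozen inputs are handled by padding to 16 with +inf. This is the standard recursive comparator-network construction (as a plain program, not a neural implementation):

```python
def oddeven_merge(lo: int, hi: int, r: int):
    """Yield the comparators that merge the two sorted runs in wires lo..hi."""
    step = r * 2
    if step < hi - lo:
        yield from oddeven_merge(lo, hi, step)
        yield from oddeven_merge(lo + r, hi, step)
        for i in range(lo + r, hi - r, step):
            yield (i, i + r)
    else:
        yield (lo, lo + r)

def oddeven_merge_sort(lo: int, hi: int):
    """Yield the comparators of Batcher's sorting network for wires lo..hi."""
    if hi - lo >= 1:
        mid = lo + (hi - lo) // 2
        yield from oddeven_merge_sort(lo, mid)
        yield from oddeven_merge_sort(mid + 1, hi)
        yield from oddeven_merge(lo, hi, 1)

def sort_with_network(values, size=16):
    """Sort up to `size` values (size a power of two) by padding with +inf
    and applying the fixed compare-and-swap network."""
    data = list(values) + [float('inf')] * (size - len(values))
    for i, j in oddeven_merge_sort(0, size - 1):
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data[:len(values)]

network = list(oddeven_merge_sort(0, 15))
print(len(network))  # 63 comparators for 16 wires
print(sort_with_network([5, 3, 12, 7, 1, 9, 2, 11, 4, 8, 10, 6]))
```

For n = 2^p wires the network has (p^2 - p + 4)·2^(p-2) - 1 comparators and p(p+1)/2 levels, i.e. 63 comparators in 10 levels for n = 16, which gives a feel for the constants behind the ~200-bit estimate above.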

Apparently, I cannot do good lambda calculus, but it seems we can merge-sort Church-encoded numerals in less than a hundred lambda terms, which is about the same number of bits as the sorting network.

On a second note: how are Bayesian networks different from perceptrons, except for having no thresholds?

Comment author: Maybe_a 14 December 2013 05:58:30PM 1 point [-]

A is not bad, because torturing a person and then restoring their initial state has precisely the same consequences as forging your own memory of torturing a person and restoring their initial state.

Comment author: DanArmak 09 February 2013 12:28:33PM 2 points [-]

Also, chemical warfare turned out not to kill many people since WWI, so such a sacrifice is rather irrelevant.

That is rather begging the question. As a result of WW1, agreements were put in place - the Geneva Protocol - not to develop or use chemical weapons, and so fewer people have been killed by them than might have been otherwise.

Comment author: Maybe_a 09 February 2013 04:59:52PM 1 point [-]

Well, it seems somewhat unfair to judge the decision on information not available to the decision-maker; however, I fail to see how that is an 'implicit premise'.

I didn't think the Geneva Protocol was that old, and actually updating on it makes Immerwahr's decision score worse, due to a lower expected number of lives saved (through a lower chance of chemical weapons being used).

Hopefully, roleplaying this update made me understand that in some value systems it's worth it. Most likely, E(Δ victims due to Haber's war efforts) > 1.

Comment author: Maybe_a 09 February 2013 07:17:37AM 1 point [-]

Standing against unintended pandemics, atomic warfare, and other extinction-threatening events has been quite a good idea in retrospect. Those of us working on scientific advances should indeed ponder the consequences.

But the Immerwahr-Haber episode is just an unrelated tearjerker. Really, inventing a process for the creation of nitrogen fertilizers is far more useful than shooting oneself in the heart. Also, chemical warfare turned out not to kill many people since WWI, so such a sacrifice is rather irrelevant.

Comment author: Jack 08 February 2013 07:23:58PM *  26 points [-]

This is insightful. I also think we should emphasize that it is not just other people, or silly theistic epistemic relativists who don't read Less Wrong, who can get exploded by Philosophical Landmines. These things are epistemically neutral, and the best philosophy in the world can still become slogans if it gets discussed too much. E.g.:

Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out,

Now, I wasn't there and I don't know you. But it seems at least plausible that that is exactly what your sister felt she was doing, and that this is what having your philosophical limbs blown off feels like from the inside.

I think I see this phenomenon most with activist atheists who show up everywhere prepared to categorize any argument a theist might make and then give a stock response to it. It's related to arguments as soldiers. In addition to avoiding and disarming landmines, I think there is a lot to be said for trying to develop an immunity, so that even if other people start tossing out slogans, you don't. I propose that it is good policy to provisionally accept your opponent's claims and then let your own arguments do their work on those claims in your mind before you let them out.

So...

Theist: "The universe is too complex for it to have been created randomly."

Atheist (pattern matches this claim to one she has heard a hundred times before, searches for the most relevant reply, outputs): "Natural selection isn't random and in that case how was God created?"

KABOOM!

Instead:

Theist: "The universe is too complex for it to have been created randomly."

Atheist (entertains the argument as if she had no prior experience with it, sees what makes the argument persuasive for some people, then searches for replies and applies them to the argument: "Is natural selection really random? Oh, and God, to the extent He is supposed to be like a human agent, would be really complicated too. So that just pushes the problem of developing complexity back a step."): "Oh yeah, I've heard things like that before. Here are two issues with it...."

Obviously this is hard to do, and maybe not worthwhile in every situation.

Comment author: Maybe_a 09 February 2013 06:38:20AM *  -1 points [-]

"The universe is too complex for it to have been created randomly."

You're right. Exactly.

Unless there are on the order of 2^KolmogorovComplexity(Universe) universes, the chance of it being constructed randomly is exceedingly low.

Please, do continue.