Comment author: Meni_Rosenfeld 07 October 2015 05:21:25PM 5 points

The web game greatly suffers from the network effect. There's just very little chance you'll get >=3 people to log on simultaneously, and of course, because of this people will give up on trying, worsening the effect.

Maybe we can designate, say, 12:00 AM and PM, UTC, as hours at which people should log on? This will make it easier to reach critical mass.

Comment author: kilobug 02 August 2015 07:22:58AM 1 point

To be fair, the DRAM bit flipping thing doesn't work on ECC RAM, and any half-decent server (especially if you run an AI on it) should have ECC RAM.

But the main idea remains, yes: even a program proven to be secure can be defeated by attacking one of the assumptions made in the proof (such as the hardware being 100% reliable, which it rarely is). Proving a program secure all the way down to applying Schrödinger's equation to the quarks and electrons the computer is made of is far beyond our current abilities, and will remain so for a very long time.

Comment author: Meni_Rosenfeld 31 August 2015 05:07:44PM * 2 points

Proving a program secure all the way down to applying Schrödinger's equation to the quarks and electrons the computer is made of is far beyond our current abilities, and will remain so for a very long time.

Challenge accepted! We can do this, we just need some help from a provably friendly artificial superintelligence! Oh wait...

Comment author: Meni_Rosenfeld 18 March 2015 06:31:36PM * 2 points

I've written a post on my blog covering some aspects of AGI and FAI.

It probably has nothing new for most people here, but could still be interesting.

I'll be happy for feedback. In particular, I can't remember if my analogy with flight is something I came up with or heard here long ago; I'd be glad to hear whether it's novel, and whether it's any good.

How many hardware engineers does it take to develop an artificial general intelligence?

Comment author: Epictetus 10 February 2015 10:17:58PM 2 points

The least convenient world is one where there's no traveler and the doctor debates whether to harvest organs from another villager. I figure that if it's okay to kill the traveler for organs, then it should be okay to kill a villager. Similarly, if it's against general principle to kill a villager for organs, then it shouldn't be okay to kill the traveler. Perhaps someone can come up with a clever argument why the life of a villager is worth intrinsically more than the life of the traveler, but let's keep things simple for now.

So, let us suppose that N sick people is the threshold at which it is okay to kill a traveler, and hence a villager. If it's good to do once, it's good to do anytime this situation comes up. So we have ourselves a society where whenever the doctor is in dire need of organs for N patients, a villager is sacrificed. If we scale it up to the national level, we should have ourselves a proper system wherein each month a certain number of people are chosen (perhaps by lottery) for sacrifice and their organs are harvested. I should imagine an epidemic of obesity and alcoholism as people seek to make their organs undesirable and so avoid being sacrificed.

I find that a fair number of morality puzzles of this sort exhibit interesting behavior under scaling.

Comment author: Meni_Rosenfeld 03 March 2015 04:25:04PM * 0 points

The perverse incentive to become alcoholic or obese can be easily countered with a simple rule - a person chosen in the lottery is sacrificed no matter what, even if he doesn't actually have viable organs.

To be truly effective, the system needs to consider the fact that some people are exceptional and can contribute to saving lives much more effectively than by being scrapped and harvested for spare parts. Hence, anyone who loses the lottery should actually be offered a choice: either pay $X or be harvested.

A further optimization is a monetary compensation to (the inheritors of) people who are selected, proportional to the value of the harvested organs. This reduces the overall individual risk, and gives people a reason to stay healthy even more than normally.

All of this is in the LCPW, of course. In the real world, I'm not sure there is enough demand for organs for the system to be effective at scale. Also, note that a key piece of the original dilemma is that the traveler has no family: the cost of sacrificing him is trivial compared to sacrificing someone who has people who care about him.

Comment author: skeptical_lurker 17 February 2015 02:48:00PM 3 points

Your own celibacy would have to massively benefit others - for instance, if you were going to have 2 children, your celibacy would have to allow your sibling to have 4 extra children.

Comment author: Meni_Rosenfeld 18 February 2015 07:37:27PM 0 points

Of course. I'm not recommending that any genes have their host go celibate. I just disagree with the deduction "if you're celibate you can't have children, so there's no way your genes could benefit from it, QED".

Comment author: CCC 16 February 2015 02:41:42PM 2 points

So, let me see if I understand this right - you're contending that human behaviour is constrained to behaviour that best benefits the genes? That is to say, that human goals are really sneakily disguised genetic goals?

Then what about people who take a vow of celibacy, who deliberately choose not to have descendants, and therefore subvert any possible genetic goal? How is that possible?

Comment author: Meni_Rosenfeld 17 February 2015 01:12:28PM 0 points

If your own celibacy somehow helps your relatives (who have a partial copy of your genes) reproduce, then the needs of your genes have been served. In general, genes have ways to pursue their agenda other than have their host reproduce. Sometimes genes even kill their host in an attempt to help copies of themselves in other hosts.

In response to comment by goocy on 2014 Survey Results
Comment author: satt 06 January 2015 02:56:06PM * 9 points

But means and standard deviations are the results of modeling a gaussian distribution, and if the model fit is too bad, these metrics simply don't apply for this dataset.

?

Means and standard deviations are general properties one can compute for any statistical distribution which doesn't have pathologically fat tails. (Granted, it would've been conceptually cleaner for Yvain to present the mean & SD of log donations, but there's nothing stopping us from using his mean & SD to estimate the parameters of e.g. a log-normal distribution instead of a normal distribution.)
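The log-normal fit satt suggests can be done by the method of moments: given the mean and standard deviation of the raw (non-logged) data, solve for the underlying normal parameters. A minimal sketch, with made-up numbers rather than the actual survey figures:

```python
import math

def lognormal_params(mean, sd):
    """Method-of-moments fit: recover the (mu, sigma) of the underlying
    normal distribution from the mean and SD of log-normally distributed
    raw data, using sigma^2 = ln(1 + sd^2/mean^2), mu = ln(mean) - sigma^2/2."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

# Illustrative values only (not from the survey): a fat-tailed sample
# where the SD dwarfs the mean, as is typical of donation data.
mu, sigma = lognormal_params(2000.0, 10000.0)
```

Plugging the recovered parameters back into the log-normal mean formula, exp(mu + sigma^2/2), returns the original mean, which is an easy sanity check on the fit.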

In response to comment by satt on 2014 Survey Results
Comment author: Meni_Rosenfeld 16 February 2015 08:42:31PM 2 points

Is the link to "Logical disjunction" intentional?

In response to 2014 Survey Results
Comment author: EternalStargazer 09 January 2015 02:32:31AM 1 point

Birth Month: Jan: 109 (7.3%); Feb: 90 (6.0%); Mar: 123 (8.2%); Apr: 126 (8.4%); Jun: 107 (7.1%); Jul: 109 (7.3%); Aug: 120 (8.0%); Sep: 94 (6.3%); Oct: 111 (7.4%); Nov: 102 (6.8%); Dec: 106 (7.1%)

[Despite my hope of something turning up here, these results don't deviate from chance]

It would appear 0% of lesswrongers were born in May. Which is strange, because I seem to remember being born then, and also taking the survey.

Comment author: Meni_Rosenfeld 16 February 2015 08:36:12PM 0 points

I was born in May, and I approve this message.

Comment author: Meni_Rosenfeld 17 October 2013 04:23:47PM 0 points

Ouch, I completely forgot about this (or maybe I never knew about it?), and that's a talk I wanted to hear...

Is it possible perhaps to get it in text form?

Comment author: Meni_Rosenfeld 06 October 2013 06:40:12AM * 3 points

It's worth mentioning that the EFF resumed accepting Bitcoin donations a while ago.

https://supporters.eff.org/donate
