Comment author: Elo 05 January 2016 02:32:14PM 4 points [-]

PSA: I had a hard drive die on me. Recovering all my data took about 25 hours of work in total for two people working together.

Looking back on it, I doubt many things could have convinced me to improve my backup systems beforehand; short of working in the cloud, my best possible backups would still have lost at least the last two weeks of work.

I am taking suggestions for best practice, but this is also a shout-out to backups: given it's now a new year, you might want to back up everything from before 2016 right now, then work on a solid backup system.

(Either that, or always keep 25 hours on hand to manually run a ddrescue process over separate sectors of a drive, unplugging and replugging it between each read until you get as much data out as possible, staying up until 5am for a few nights trying to scrape the entropy back from the bits...) I firmly believe that the right automated system would take less than 25 hours of effort to maintain.
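For anyone facing the same grind, here is a minimal Python sketch of that resume-and-retry loop, assuming GNU ddrescue is installed; the device path, image path, and number of passes are hypothetical stand-ins for your own situation:

```python
import subprocess

DEVICE = "/dev/sdb"                # hypothetical failing drive
IMAGE = "/mnt/rescue/disk.img"     # where recovered data accumulates
MAPFILE = "/mnt/rescue/disk.map"   # ddrescue's record of read/unread sectors

# First pass: copy everything that reads easily, skipping the slow
# scraping phase (-n) so the dying drive spends its remaining life
# on the easy sectors first.
subprocess.run(["ddrescue", "-n", DEVICE, IMAGE, MAPFILE])

# Retry passes: because the mapfile records progress, each run resumes
# exactly where the last one stopped, so power-cycling the drive between
# passes (as described above) loses nothing.
for attempt in range(1, 6):
    input(f"Re-plug the drive, then press Enter for retry pass {attempt}... ")
    subprocess.run(["ddrescue", "-d", "-r3", DEVICE, IMAGE, MAPFILE])
```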

bonus question: what would convince you to make a backup of your data?

Comment author: iceman 05 January 2016 09:20:23PM 2 points [-]

Use ZFS with RAID. RAID is not a backup solution, but a proper RAIDZ2 configuration (ZFS's analogue of RAID 6, which survives two simultaneous disk failures) will protect you against common hard drive failure scenarios. Put all your files on ZFS; I use a dedicated FreeNAS file server for my home storage. Once everything you have is on ZFS, turn on snapshotting. I have my NAS configured to take a snapshot every hour during the day (set to expire in a week), and one snapshot every Monday which lasts 18 months. The short-lived snapshots let me quickly recover from brain snafus like overwriting a file.

Long-lived snapshotting is amazing. Once you have filesystem snapshots, incremental backups become trivial. I have two portable hard drives, one onsite and one offsite. I plug in a hard drive, issue one command, and a few minutes later I've copied the incremental snapshot to my offline drive. My backup hard drives become append-only logs of my state. ZFS also lets you configure a dataset so that it stores every block twice (the copies=2 property), so I have that turned on just to protect against the remote chance of random bitflips on the drive.

I do this monthly, and it only burns about 10 minutes a month. However, it isn't automated. If you're willing to trust the cloud, you could improve on this and make it entirely automated with something like rsync.net's ZFS snapshot support; I think other cloud providers offer snapshotting now, too.
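For the curious, here is a minimal Python sketch of roughly what that monthly routine looks like, assuming a source dataset named tank/home and a backup pool named backup on the portable drive (both names are hypothetical; the real thing is essentially two zfs commands):

```python
import subprocess
from datetime import date

DATASET = "tank/home"   # hypothetical source dataset on the NAS
BACKUP = "backup/home"  # hypothetical dataset on the portable drive

def latest_snapshot(dataset):
    # `zfs list -H` emits tab-separated, header-free output for scripting;
    # sorting by creation time puts the newest snapshot last.
    out = subprocess.check_output(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-s", "creation", "-d", "1", dataset], text=True)
    return out.strip().splitlines()[-1]

def incremental_backup():
    # Assumes the previous snapshot already exists on the backup drive,
    # which holds if you run this every month without skipping one.
    prev = latest_snapshot(DATASET)
    new = f"{DATASET}@{date.today().isoformat()}"
    subprocess.run(["zfs", "snapshot", new], check=True)
    # Send only the delta between the two snapshots into the backup pool,
    # appending one more entry to its log of filesystem states.
    send = subprocess.Popen(["zfs", "send", "-i", prev, new],
                            stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", BACKUP], stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    incremental_backup()
```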

Comment author: iceman 31 December 2015 01:39:46AM 4 points [-]

Epistemic status: vague conjecture and talking aloud.

So this article by Peter Watts has been making the rounds, talking about how half of the people whose cranial cavities are 95% filled with cerebrospinal fluid still have IQs over 100. One of the side discussions on Hacker News was about how most of the internal tissue in the brain is used for routing, while most 'logic' happens in the outer millimeters.

So far, I haven't seen anyone make the connection to cryonics and plastination. If it's true that most of the important data is stored near the outside of the brain, does that make identity preservation through cryonics more or less likely? I vaguely remember reading that getting the core of the brain down to liquid nitrogen temperatures takes time. But if most data is near the outside of the brain, which reaches liquid nitrogen temperatures first, shouldn't that raise our estimates of whether personal identity is preserved?

Comment author: So8res 11 December 2015 11:08:03PM 5 points [-]

Thanks! And thanks again for your huge donation in the summer; I was not expecting more.

Comment author: iceman 12 December 2015 05:13:56AM 7 points [-]

You're welcome. I wasn't planning on this, but otherwise I would have been leaving a coworker's private matching funds on the table.

Comment author: iceman 11 December 2015 09:31:35PM 16 points [-]

$1000. (With an additional $1000 because of private, non-employer matching.)

In response to comment by OrphanWilde on LessWrong 2.0
Comment author: Vaniver 04 December 2015 12:21:21AM 5 points [-]

But... SSC gets its biggest influxes of traffic when Scott Alexander talks about exactly those sorts of things we explicitly forbid talking about here - identity, politics, etc.

I am optimistic about Omnilibrium as a place with LWish norms where people can talk about politics and other 'fun' topics. This is also one of the reasons why I'm more bullish on the diaspora and on people branding their own blogs than other people are--someone needs to have skilled hands before I endorse them talking about a touchy subject.

In response to comment by Vaniver on LessWrong 2.0
Comment author: iceman 04 December 2015 06:15:24AM *  8 points [-]

As much as I like Omnilibrium as a concept, there's a lot of work needed to productize the site. A lot of the layout is bare whitespace, there are placeholder boxes where icons should be (notably the bookmark icon), and there's no password reset (the site also barfed on my >20-character autogenerated password, so if I lose my current browser profile, I lose my account there). Even worse, people didn't move there en masse, so the site never got bootstrapped.

I'm not convinced that the karma system as it exists today actually performs its desired task anymore because a good chunk of the voting seems to be done by the unquiet spirits. Back when I cared about karma here, it was because it reflected the opinions of people that I very much respected. I don't feel that way anymore.

One possible[*] solution would be to port the Omnilibrium algorithm back to LessWrong, customizing the scoring for each user, but this might be a place where we should hold off proposing solutions.

[*] As in, "Well I suppose that's technically possible, but..."

Comment author: Houshalter 23 September 2015 04:21:27AM 1 point [-]

Is it just my browser, or does this site not allow keyboard input? I can't scroll the page with arrow keys or pg up/down.

Comment author: iceman 25 September 2015 06:50:34AM 1 point [-]

It's not just you. I can't use the arrow keys either. Chrome 45 on Windows 8.

Comment author: PhilGoetz 29 August 2015 12:19:51AM 1 point [-]

How does Omnilibrium voting work?

Comment author: iceman 29 August 2015 01:47:44AM 2 points [-]

I'm not sure about the mathematical details, but as described in their FAQ, they presume it's inevitable that people will form into local Blue and Green tribes, so they attempt to cluster the population into Blues and Greens. This lets them not just be a better recommendation engine for both Blues and Greens, but also calculate a nonpartisan score from the votes that cut against tribal lines: upvotes by the other side and downvotes by your own.
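I'm guessing at the details here, but the following toy Python sketch captures my reading of the nonpartisan score; the tribe labels and vote format are my own invention, not Omnilibrium's actual algorithm:

```python
def nonpartisan_score(author_tribe, votes):
    """Toy version: only votes that cut against tribal lines count.
    votes is a list of (voter_tribe, vote) pairs, where vote is +1 or -1."""
    score = 0
    for voter_tribe, vote in votes:
        same_tribe = (voter_tribe == author_tribe)
        if vote > 0 and not same_tribe:
            score += 1  # praise from the other side: a strong quality signal
        elif vote < 0 and same_tribe:
            score -= 1  # criticism from your own side: also hard to fake
        # Own-tribe upvotes and other-tribe downvotes are the partisan
        # default, so they carry little information and are ignored here.
    return score

# A Green's post upvoted by two Blues and downvoted by a Green scores +1.
print(nonpartisan_score("Green", [("Blue", 1), ("Blue", 1), ("Green", -1)]))
```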

In general, I thought this was fascinating because it gets to the heart of what voting is for on social websites. If we're trying to build a recommendation engine, having an extremely diverse set of viewpoints is probably something we want in the input stream of links and discussion. However, we then don't want everyone's voting to collapse into a single score variable, because people are different and have different worldviews. Mixing everyone's scores together makes a homogenized mess that doesn't really speak to anyone.

The idea of tracking partisanship not just to update on votes and make better recommendations to users, but to get a sense of nonpartisan quality, really impressed me as an idea that's totally obvious...in retrospect. I do wonder how well it scales, as Omnilibrium is fairly small right now.

Comment author: Anders_H 21 August 2015 08:12:15PM *  12 points [-]

I am going to publicly call for banning user VoiceOfRa for the following reasons:

(1) VoiceOfRa is almost certainly the same person as Eugine_Nier and Azathoth123. This is well known in rationality circles; many of us have been willing to give him a second chance under a new username because he usually makes valuable contributions.

(2) VoiceOfRa almost certainly downvote bombed the user who made the grandparent comment, including downvoting some very uncontroversial and reasonable comments.

(3) As I have said before in this context, downvote abuse is very clear evidence of being mindkilled. It is also a surefire way to ensure you never change your mind, because you discourage people who disagree with you from taking part in the discussion and therefore prohibit yourself from updating on their information. I do not understand how someone who genuinely believes in epistemic rationality could think this is a good strategy.

I will also note that I was the first person to publicly call out Eugine_Nier under his previous username, Azathoth123, at http://lesswrong.com/lw/l0g/link_quotasmicroaggressionandmeritocracy/bd4o . Like I said in that comment, I continue to believe he is a valuable contributor to the community. Like many other people, I have been willing to give him a second chance under his new username. However, this was conditional on completely ceasing and desisting with the downvote abuse. And yes, any downvoting of old comments made in a different context is a clear example of abuse.

The following links provide background material for readers who are unfamiliar with Eugine_Nier and the context in which I am requesting a ban:

  • http://lesswrong.com/r/discussion/lw/kbk/meta_policy_for_dealing_with_users/
  • http://lesswrong.com/lw/kfq/moderator_action_eugine_nier_is_now_banned_for/
  • http://lesswrong.com/lw/ld0/psa_eugine_nier_evading_ban/

Edited to add: If I see clear evidence that VoiceOfRa is not Eugine_Nier, or that he was not behind the most recent downvote abuse, I will retract this message and publicly apologize.

Comment author: iceman 21 August 2015 10:48:52PM -1 points [-]

I am going to publicly call for banning user VoiceOfRa [...] VoiceOfRa almost certainly downvote bombed the user who made the grandparent comment, including downvoting some very uncontroversial and reasonable comments.

Consequentially... why bother, even if this is true?

Assuming you are correct, Eugine's response to being banned (twice!) was just to make another account. It's highly likely that if you ban this new account, he will make a fourth. That account will quickly gain karma because, as you note, Eugine's comments are actually valuable. You are proposing that we do the same thing a third time and expect a different result.

Possible actual solutions that are way too much work:

  • move LW onto an Omnilibrium-like voting system, where Eugine's votes would put him squarely into the optimate cluster and wouldn't hurt as much.

  • give up on democratic moderation on the web.

Comment author: iceman 21 July 2015 07:20:37PM 50 points [-]

Donated $25,000. My employer will also match $6,000 of that, for a grand total of $31,000.

Comment author: Evan_Gaensbauer 25 May 2015 05:17:05AM 8 points [-]

It doesn't appear this is discussed much, so I thought I'd start a conversation:

Who on LessWrong is uncomfortable with, or just doesn't like, so much discussion of effective altruism here? If that's you, why?

Other Questions:

  • Do you feel there's too much of it now, or would even a little bit of it seem aversive?
  • Do you think such discussion is inappropriate given the implicit or explicit goals of LessWrong?
  • Has too much discussion of effective altruism caused you to think less of LessWrong, or use it less?
  • For what reason(s) do you disagree with effective altruism? Is it because of your values and what you care about, or because you don't like normative pressure to take such strong personal actions? Or something else?

I want to discuss it because the proportion of the LessWrong community that is averse to, or simply uninterested in, effective altruism doesn't express its opinions much. Also, while I identify with effective altruism, I don't only value this site as a means to altruistic ends, and I don't want other parts of the rationalist community to feel neglected.

Comment author: iceman 27 May 2015 07:12:24AM 15 points [-]

(Disclaimer: My lifetime contribution to MIRI is in the low six digits.)

It appears to me that there are two LessWrongs.

The first is the LessWrong of decision theory. Most of the content in the Sequences contributed to making me sane, but the most valuable part was the focus on decision theory and on considering how different processes perform in the prisoner's dilemma. Understanding decision theory is a precondition to solving the friendly AI problem.

The first LessWrong results in serious insights that should be integrated into one's life. In "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem", the authors take a moment to discuss the issue of "Defecting Against CooperateBot": if you know that you are playing against CooperateBot, you should defect. I remember when I first read the paper and the concept just clicked. Of course you should defect against CooperateBot. But this was an insight I had to be told, and LessWrong is valuable to me because it has helped me internalize game theory. The first year I took the LessWrong survey, I answered that of course you should cooperate in the one-shot prisoner's dilemma without shared source code. On the latest survey, I instead put the correct answer.
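To make the point concrete, here is a minimal Python sketch; it sidesteps the Löbian provability machinery the paper actually uses and just assumes you can perfectly predict an opponent whose move does not depend on yours:

```python
# Standard one-shot prisoner's dilemma payoffs for the row player:
# temptation > reward > punishment > sucker (5 > 3 > 1 > 0).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cooperate_bot(opponent):
    """Cooperates unconditionally, ignoring who it is playing against."""
    return "C"

def best_response(opponent):
    """When the opponent's move doesn't depend on yours (as with
    CooperateBot), just pick whichever move maximizes your payoff."""
    their_move = opponent(None)  # safe only because CooperateBot ignores us
    return max("CD", key=lambda my_move: PAYOFF[(my_move, their_move)])

print(best_response(cooperate_bot))  # 'D': defection strictly dominates
```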

The second LessWrong is the LessWrong of utilitarianism, especially of a Singerian sort, which I find clashes with the first LessWrong. My understanding is that Peter Singer argues that because you would ruin your shoes jumping into a creek to save a drowning child, you should incur an equivalent cost to save the life of a child in the third world.

Now never mind that saving the child might have positive expected value to the jumper. We can restate Singer's moral obligation as a prisoner's dilemma, apply something like TDT to it, and produce the FairBot version of Singer: I want to incur a fiscal cost to save a child on the other side of the world iff parents on the other side of the world would incur a fiscal cost to save my child. I believe Singer would deny this statement (and would be even more aghast at the PrudentBot version), and would insist that there's a moral obligation regardless of any theoretical reciprocation.

I notice that I am being asked to be CooperateBot. I don't think CFAR has "Don't be CooperateBot" as a rationality technique, but they should.

Practically, I find that 'altruism' and 'CooperateBot' are synonyms. The question of reciprocity hangs in the background. It must, because Azathoth generates both those who are CooperateBots and those who exploit them.

I will also point out that this whole discussion is happening on the website that exists to popularize humanity's greatest collective action problem. Every one of us has a selfish interest in solving the friendly AI problem. And while I am not much of a utilitarian, I would assume that the correct utilitarian charity in terms of number of people saved (or brought into existence) would be MIRI, and that the most straightforward explanation for why so few utilitarians act on this is Hansonian cynicism.
