
In response to Ethical Injunctions
Comment author: khafra 27 February 2017 03:38:57PM 0 points [-]

Tangentially, there's an upcoming six-episode Netflix series named “The Heavy Water War” that should cover both this event and the sabotage of the heavy water production facility that led up to it.

Comment author: Gunnar_Zarncke 08 February 2017 02:43:15PM 2 points [-]

I almost think that LIME article merits its own link post. What do you think?

Comment author: khafra 08 February 2017 10:02:58PM 0 points [-]

It should be posted, but by someone who can more rigorously describe its application to an optimizer than "probably needs to be locally smooth-ish."

Comment author: khafra 08 February 2017 12:21:50PM 3 points [-]

Point 8, about the opacity of decision-making, reminded me of something I'm surprised I haven't seen on LW before:

LIME, Local Interpretable Model-agnostic Explanations, can show a human-readable explanation for the reason any classification algorithm makes a particular decision. It would be harder to apply the method to an optimizer than to a classifier, but I see no principled reason why an approach like this wouldn't help understand any algorithm that has a locally smooth-ish mapping of inputs to outputs.
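The local-surrogate idea behind LIME can be sketched in a few lines: perturb the input, query the black box, and fit a proximity-weighted linear model whose coefficients serve as a local explanation. This is a toy illustration of the concept, not the actual `lime` library; the function name, Gaussian perturbation, and kernel choice are simplifications of mine:

```python
import numpy as np

def local_surrogate(black_box, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear model around x to explain black_box(x).

    A bare-bones sketch of the LIME idea: the coefficients of the
    surrogate approximate each feature's local influence on the output.
    """
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise around x
    X = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))
    y = np.array([black_box(row) for row in X])
    # Weight samples by proximity to x (exponential kernel)
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column
    Xw = np.hstack([X, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return coef[:-1]  # per-feature local weights
```

Note how the sketch only needs to *query* the model, never inspect it, which is what "model-agnostic" buys you; the smooth-ish mapping assumption enters through the linear fit, which is meaningless if the function jumps wildly inside the perturbation neighborhood.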

Comment author: DanArmak 01 August 2015 10:35:09AM *  11 points [-]

I'm not sure what you think it means. Care to elaborate?

In computer science, provably secure software mechanisms rely on an idealized model of hardware; they aren't and can't be secure against hardware failures.

Comment author: khafra 31 August 2015 02:29:40PM *  3 points [-]

provably secure software mechanisms rely on an idealized model of hardware

In my experience, they also define an attacker model against which to secure. There are no guarantees against attackers with greater access, or abilities, than specified in the model.

In response to Crazy Ideas Thread
Comment author: [deleted] 09 July 2015 12:05:01PM 12 points [-]

A crazy prediction: 25 years from now, what we call intermittent fasting will be called a normal daily schedule, and what we call a normal daily schedule (3 meals and some healthier snacks) will be called food addiction. And the primary reason for this change will not even be e.g. obesity, but the mental effects: people will consider it an obvious truth that being constantly in a fed state dulls the mind, saps motivation, generates akrasia, and generally harms productivity.

They will look back to us and think these people went through life half-asleep because they went through life constantly (nearly) sated.

They will say stuff like "you are like a dolphin: if you feed yourself before you've jumped through all the hoops you planned for that day, you won't jump through them".

Arriving at work with breakfast in hand will be a bit like arriving at work with a beer in hand: if you roll best that way it is not for others to judge, but most people will prefer to work sober and sharp, and that means literally staying hungry. Today we joke about food comas and difficulty concentrating after a work lunch, fixing ourselves up with coffee; this will sound much like a 1950s person complaining that he finds it hard to concentrate after a two-martini lunch sounds today.

In response to comment by [deleted] on Crazy Ideas Thread
Comment author: khafra 22 July 2015 10:51:23AM 0 points [-]

Dave Asprey says, with a reasonably large set of referenced studies, that it's the mold in food which reduces your fed performance.

Comment author: khafra 04 June 2015 11:02:13PM -1 points [-]
Comment author: Richard_Loosemore 10 May 2015 07:34:24PM -1 points [-]

The lack of understanding in this comment is depressing.

You say:

"No. The AI does not have good intentions. Its intentions are extremely bad."

If you think this is wrong, take it up with the people whose work I am both quoting and analyzing in this paper, because THAT IS WHAT THEY ARE CLAIMING. I am not the one saying that "the AI is programmed with good intentions", that is their claim.

So I suggest you write a letter to Muehlhauser, Omohundro, Yudkowsky and the various others quoted in the paper, explaining to them that you find their lack of precision depressing.

Comment author: khafra 14 May 2015 12:12:50PM 2 points [-]

If you think this is wrong, take it up with the people whose work I am both quoting and analyzing in this paper, because THAT IS WHAT THEY ARE CLAIMING. I am not the one saying that "the AI is programmed with good intentions", that is their claim.

I think I spotted a bit of confusion: the programmers of the "make everyone happy" AI had good intentions. But the AI itself does not have good intentions, because the intent "make everyone happy" is not good, albeit in a way that its programmers did not think of.

Comment author: Nornagest 13 April 2015 09:24:18PM *  2 points [-]

Short answer is I don't know. The long answer will take a little background.

I haven't bothered to read through Decoy's internals, but this sort of steganography usually hides its secret data in the least significant bits of the decoy image. If that data is encrypted (assuming no headers or footers or obvious block divisions), then it will appear to an attacker like random bytes. Whether or not that's distinguishable from the original image depends on whether the low bits of the original image are observably nonrandom, and that's not something I know offhand -- although most images will be compressed in some fashion and a good compression scheme aims to maximize entropy, so that's something. And if it's mostly random but it does fit a known distribution, then with a little more cleverness it should be possible to write a reversible function that fits the encrypted data into that distribution.

It will definitely be different from the original image on the bit level, if you happen to have a copy of it. That could just mean the image was reencoded at some point, though, which is not unheard of -- though it'd be a little suspicious if only the low bits changed.
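The LSB embedding described above can be sketched in a few lines. This is a generic illustration of the scheme (hiding one secret bit in the least significant bit of each cover byte), not Decoy's actual on-disk format:

```python
def embed_lsb(cover_bytes, secret_bits):
    """Hide secret_bits in the least significant bit of each cover byte.

    A generic LSB-steganography sketch: clear each byte's low bit,
    then OR in one bit of the secret payload.
    """
    out = bytearray(cover_bytes)
    for i, bit in enumerate(secret_bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_lsb(stego_bytes, n_bits):
    """Recover n_bits from the low bits of the stego bytes."""
    return [b & 1 for b in stego_bytes[:n_bits]]
```

Since only the low bit of each byte changes, the cover image looks visually identical, and comparison against the original (if an attacker has it) shows exactly the "only the low bits changed" signature the comment mentions.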

Comment author: khafra 07 May 2015 01:20:09PM 1 point [-]

If that data is encrypted (assuming no headers or footers or obvious block divisions), then it will appear to an attacker like random bytes. Whether or not that's distinguishable from the original image depends on whether the low bits of the original image are observably nonrandom, and that's not something I know offhand

It's super-easy to spot in a histogram, so much so that there's ongoing research into making it less detectable.

Comment author: shminux 03 May 2015 08:39:29PM 5 points [-]

Is there good reason to suppose that Gal's desire for internal combustion is irrational

I am trying to steelman this statement, and the best I can come up with is "this particular terminal value of hers is potentially in conflict with some of her other terminal values and could use an adjustment." But I don't know what her other terminal values are.

Comment author: khafra 05 May 2015 12:22:01PM 2 points [-]

Presumably it's in conflict with the instrumental values of retaining resources which could be used for other terminal values (the money she would save, going with the fuel cell), and the combination of instrumental and terminal values represented by the improved acceleration of the fuel cell.

Comment author: khafra 28 April 2015 11:35:50AM 6 points [-]

Do you have plans for when your term life insurance expires, but you're still alive (which is, actuarially speaking, fairly certain)?
