Comment author: ThrustVectoring 07 March 2014 10:19:53AM 0 points [-]

The issue with sandboxing is that you have to keep the AI from figuring out that it's in a sandbox. You also have to know that the AI doesn't know it's in a sandbox; otherwise the sandbox isn't a safe and accurate test of how the AI will behave in the real world.

Stick a paperclipper in a sandbox with enough information about what humans want out of an AI and the fact that it's in a sandbox, and the outputs are going to look suspiciously like a pro-human friendly AI. Then you let it out of the box, whereupon it turns everything into paperclips.

Comment author: adrien0 03 March 2014 10:01:17AM 0 points [-]

How far are you currently and at what pace do you wish to study?

Comment author: ThrustVectoring 03 March 2014 02:23:24PM 0 points [-]

I've done the first two chapters, and I'm not particular about study pace - I haven't done enough self-directed studying to know what pace I want or can sustain. Roughly an hour a night seems reasonable, though.

In response to Proportional Giving
Comment author: ThrustVectoring 03 March 2014 01:37:38AM 9 points [-]

There's a difference between what the best course of action for you personally is, and the best recommendation to push towards society at large. The best recommendation to push for has different priorities: short message lengths are easier to communicate, putting different burdens on different people feels unfair and turns people off, and more onerous demands are less likely to be met.

"Give at least 10% of what you make" is low enough to get people on board, conveniently occupies a very nice Schelling point, short enough to communicate effectively, and high enough to get a lot out of the targets it hits. Furthermore, if you want to give more, you're still following the rule, so you can ask people to do the same without hypocrisy.

In short, it's a good social policy to push for and reward those who follow it. Personally, you should follow some kind of weighted utilitarianism, since if you get the utility function good enough then small errors in how you distribute your spending don't make much difference.

As an aside, an altruism-maximizer with a higher income may spend more money on themselves than one with a lower income - usually on goods and services that improve their income-generating ability. Say, eating nourishing meals rather than the cheapest available ones, so that their work performance goes up.

Comment author: ThrustVectoring 02 March 2014 08:20:55PM 1 point [-]

I'm revisiting linear algebra - I took a course in college, but that was more of an instruction manual on linear algebra problem-solving techniques and vocabulary than a look at the overall theory. I'm reading Linear Algebra Done Right, and was wondering if anyone else is interested.

This book starts from the beginning of the subject, assuming no knowledge of linear algebra. The key point is that you are about to immerse yourself in serious mathematics, with an emphasis on attaining a deep understanding of the definitions, theorems, and proofs.

Comment author: alicey 01 March 2014 04:28:32PM *  4 points [-]

i tend to express ideas tersely, which counts as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me

i have mostly stopped posting or commenting on lesswrong and stackexchange because of this

like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."

revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)

Comment author: ThrustVectoring 02 March 2014 10:57:28AM 1 point [-]

I suspect that the issue is not terseness, but rather not understanding and bridging the inferential distance between you and your audience. It's hard for me to say more without a specific example.

Comment author: Will_Sawin 01 March 2014 06:57:46PM 6 points [-]

10% isn't that bad as long as you continue the programs that were found to succeed and stop the programs that were found to fail. Come up with 10 intelligent-sounding ideas, obtain expert endorsements, do 10 randomized controlled trials, get 1 significant improvement. Then repeat.

Comment author: ThrustVectoring 02 March 2014 10:46:31AM 0 points [-]

It depends on how many completely ineffectual programs would demonstrate improvement versus current practices.
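The worry here is the multiple-comparisons problem: run enough trials and some completely ineffectual programs will clear the significance bar by luck. A rough back-of-the-envelope sketch (my own illustration, assuming independent trials and a conventional 5% significance level; these numbers are not from the thread):

```python
# Hypothetical sketch: if every one of 10 tested programs were truly
# ineffectual, how often would at least one still "demonstrate
# improvement" purely by chance at the usual p < 0.05 cutoff?
alpha = 0.05      # per-trial false-positive rate
n_trials = 10     # number of randomized controlled trials

# P(at least one spurious success) = 1 - P(no trial succeeds)
p_spurious = 1 - (1 - alpha) ** n_trials
print(f"P(at least one spurious success): {p_spurious:.3f}")  # -> 0.401
```

Under those assumptions, roughly 40% of such 10-trial batches would produce at least one spurious "success", so a 1-in-10 hit rate alone doesn't distinguish a working idea pipeline from pure noise.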

Comment author: Eugine_Nier 01 March 2014 09:53:51PM -1 points [-]

> Besides which, the geographical situation of the US means that a symmetrical war is largely going to be an air/sea sort of deal.

Yes, and in particular it'll involve enemy drones. Drone operators are likely to be specifically targeted.

Comment author: ThrustVectoring 02 March 2014 05:51:48AM 0 points [-]

> Yes, and in particular it'll involve enemy drones. Drone operators are likely to be specifically targeted.

That makes them safer, ironically. If your command knows that you're likely to be targeted and your contributions are important to the war effort, they'll take steps to protect you: stick you down a really deep hole and pipe in data and logistical support. They probably won't let you leave, either, which means you can't get unlucky and eat a drone strike while you're enjoying a day in the park.

You're at elevated risk of being caught in nuclear or orbital kinetic bombardment, though if the war gets to that stage your goose is cooked regardless of what job you have.

Comment author: Eugine_Nier 28 February 2014 01:51:17AM 0 points [-]

> a drone operator based in the continental US is going to have a lot less occupational risk than the guy doing explosive ordnance disposal.

Up until the US gets involved in something resembling a symmetrical war. Of course in that case it's possible no job will be safe.

Comment author: ThrustVectoring 28 February 2014 12:16:59PM 2 points [-]

In the year 1940, working as an enlisted member of the army supply chain was probably safer than not being in the army whatsoever - regular Joes got drafted.

Besides which, the geographical situation of the US means that a symmetrical war is largely going to be an air/sea sort of deal. Canada's effectively part of the US in economic and mutual-defense terms, and Mexico isn't much help either. Mexico doesn't have the geographical and industrial resources to go toe-to-toe with the US on their own, the border is a bunch of hostile desert, and getting supplies into Mexico past the US navy and air force is problematic.

Comment author: RowanE 26 February 2014 04:00:09PM 1 point [-]

Probably the cost of housing correlates with other expenses, and there's also income tax to consider, but on the surface the first job is $50k/yr net and the second job is $55k/yr net, so the latter looks better.
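The surface comparison in this subthread reduces to simple arithmetic, gross income minus housing, ignoring taxes and other correlated expenses as noted above:

```python
# Net-of-housing comparison for the two hypothetical jobs in the
# parent comment (ignores income tax and other correlated expenses).
job_a = {"gross": 60_000, "housing": 10_000}
job_b = {"gross": 80_000, "housing": 25_000}

net_a = job_a["gross"] - job_a["housing"]
net_b = job_b["gross"] - job_b["housing"]
print(net_a, net_b)  # -> 50000 55000: the $80k job nets $5k more
```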

Comment author: ThrustVectoring 26 February 2014 04:17:17PM 3 points [-]

whoops, picked the wrong numbers. Thanks

Comment author: solipsist 26 February 2014 02:55:04AM *  6 points [-]

A $60k/yr job where you spend $10k/yr on housing is better than a $80k/yr job where you spend $25k/yr on housing.

You should consider option values, especially early in your career. It's easier to move from high paying job in Manhattan to a lower paying job in Kansas City than to do the reverse.

Comment author: ThrustVectoring 26 February 2014 06:09:07AM 4 points [-]

Update the choice by replacing income with the total expected value from job income, social networking, and career options available to you, and the point stands.
