All of ThrustVectoring's Comments + Replies

CFAR likely has all of this material readily available in a much more comprehensive and accurate format. CFAR are altruists. Smart altruists. The lack of anything like this canon suggests that they don't think having it publicly available is a good idea. Not yet, anyway. Even the workbook handed out at the workshops isn't available.

Having it publicly available definitely has huge costs and tradeoffs. This is particularly true when you're worried about the processes you want to encourage getting stuck as a fixed doctrine - this is essentially why John Boyd preferred presentations over manuals when running his reform movement in the US military.

0ScottL
It's strange that you mention John Boyd because, to be honest, I was thinking of him when I decided to post the material. I don't believe that John's preference for presentations over documentation was a good one. In general, I oppose obscurity and restriction of information, although there are times that I don't, e.g. when it's from a lack of resources or an extremely short material turnover rate, etc. In regards to John Boyd's stuff, personally, I know that I had to waste a lot of time wading through a lot of simplistic and pretty useless information (pretty much just the simple OODA loop stuff) to understand his material. I believe that this is his only published paper. Also, it was only really the Osinga thesis which has allowed me to understand his ideas, although I do need to go over it again. Wouldn't most of these issues be avoided if you gave some warning that the material is in flux and versioned it as well? So you'd have CFAR material version 1, version 2, etc. Also, doesn't it seem a bit weird to give the potential of the information becoming a doctrine enough weight that it causes the restriction of this information? It seems weird to me since the skills that CFAR and Boyd are/were trying to teach are in large part about breaking out of fixed doctrines. It's kind of like stopping someone from learning martial arts because you don't want them to get hurt while training.

Random changes can be useful. Human minds are not good at being creative and exploring solution space. They can't give "random" numbers, and will tend to round ideas they have towards the nearest cached pattern. The occasional jolt of randomness can lead to unexplored sections of solution space.

It's been stuck, but I've barely been putting effort into it. I've been working much more on minimizing mouse usage - vim for text editing, firefox with pentadactyl for web browsing, and bash for many computing and programming tasks.

The low-hanging fruit is definitely not in getting better at stenographic typing - since I've started working as a professional software developer, there's been much more computer-operation than English text entry. I'd have to figure out a really solid way of switching seamlessly between Vim's normal-mode and stenographic ... (read more)

Because I can't talk about what makes it awesome without spoiling it, and I forgot that rot13 is a thing.

Warning: massive spoilers below

Fpvba, gur ynfg yvivat tbqyvxr nyvra erfcbafvoyr sbe cnenuhzna cbjref, vf svtugvat Rvqbyba naq Tynvfgnt Hynvar. Rvqbyba vf bar bs gur zbfg cbjreshy pncrf, n uvtu yriry Gehzc - uvf cbjre tvirf uvz gur guerr cbjref gung ur arrqf. Uvf cbjre jnf jrnxravat bire gvzr, naq ur erpragyl svkrq vg, naq vf gnxvat gur bssrafvir gb Fpvba.

Sbe onpxtebhaq, gurer unir orra n frevrf bs pvgl-qrfgeblvat zbafgref pnyyrq "Raqoevatref".... (read more)

0gwern
Vf gung npghnyyl evtug? V tbg gur vzcerffvba gung guvf jnf qhr gb gur birenepuvat cbvag gung nyy gur crbcyr jvgu cbjref jrer fhogyl cerffherq vagb svtugvat naq raqyrff pbasyvpg ol gurve cnegvphyne funeqf, gur orggre sbe gur ragvgvrf gb tngure vasbezngvba naq bar qnl rfpncr gur qlvat havirefr/ragebcl. Sbe Rvqbyba va cnegvphyne, ur 'arrqrq' pbasyvpg gb grfg ubj gur cbjref jbexrq va pbzong, fb uvf funeq qrfvtarq naq perngrq gur Raqoevatref sbe uvz gb svtug, naq gurve qrfgehpgvirarff jnf gb sbepr uvz gb svtug gurz erthyneyl nsgre n erfcvgr sbe fgengrtvmvat. Gur cflpubybtvpny qrinfgngvba vf gung ol npprcgvat Pnhyqeba'f bssre, ur qverpgyl pnhfrq nyy bs guvf orpnhfr uvf cbjre jnf bhg bs pbageby. Pregnvayl Rvqbyba vf arire qrfpevorq nf cnegvphyneyl unccl be nalguvat.
0Shmi
I did miss it, actually.

There's a four-word chapter in Worm. If you read one chapter's comment pages, read that one's.

0Shmi
why so cryptic?

Deciding to play slot machines is not a choice people make because they think it will net them money, it's a choice they make because they think it will be fun.

9Aharon
I do not think that this is true for the majority of players.

Update: I'm at pretty much the same place now as I was then. Dropped the keto diet since I was happy with where I was. Still fairly active but not hardcore about it.

2brazil84
thank you

They'd be better off using a shared algorithm if involved in a situation with cars reasoning in a similar fashion.

0Transfuturist
This is definitely a case for superrationality. If antagonists in an accident are equipped, communicate. Not sure what to do about human participants, though. The issue brought up here seems to greatly overestimate the probability of crashing into something. IIRC, the main reasons people crash are that 1) they oversteer and 2) they steer toward where they're looking, and they often look in the direction of the nearest or most inevitable obstacle. These situations would involve human error almost every time, and crashing would most likely be due to the human driver crashing into the autocar, not the other way around. Something that would increase the probability would be human error in heavy traffic.

Plover is another option. I spent a month or so learning it and got to about 50 WPM, while those with a lot more practice can get 200 WPM. It's on hold indefinitely, though.

"Control" in general is not particularly well defined as a yes/no proposition. You can likely rigorously define an agent's control of a resource by finding the expected states of that resource, given various decisions made by the agent.

That kind of definition works for measuring how much control you have over your own body - given that you decide to raise your hand, how likely are you to raise your hand, compared to deciding not to raise your hand. Invalids and inmates have much less control of their body, which is pretty much what you'd expect out of a reasonable definition of control over resources.

This is still a very hand-wavy definition, but I hope it helps.
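
For concreteness, here's a minimal sketch of how that could be made quantitative. The scenario, the numbers, and the max-minus-min scoring rule are all my own illustration, not any standard definition:

```python
# Toy sketch: "control" as how much the outcome distribution over a resource
# shifts depending on the agent's decision. Numbers are invented for illustration.

def control_score(outcome_probs):
    """outcome_probs maps each available decision to P(resource ends up in the
    intended state). Score control as how far that probability can be moved
    by changing the decision: 0 = no control, 1 = full control."""
    return max(outcome_probs.values()) - min(outcome_probs.values())

# Able-bodied person deciding whether to raise their hand:
healthy = {"decide_raise": 0.99, "decide_not_raise": 0.01}
# Someone restrained or paralyzed:
restrained = {"decide_raise": 0.05, "decide_not_raise": 0.01}

print(control_score(healthy))     # ~0.98: near-total control over the hand
print(control_score(restrained))  # ~0.04: very little control
```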

I'm a current student who started two weeks ago on Monday. I'd be happy to talk as well.

Dollars already have value. You need to give them to the US government if you want to produce valuable goods and services inside the United States. That's all there is to it, really - if someone wants to make #{product} in a US plant, they now owe US dollars to the government, which they need to acquire by selling #{product}. So if you have US dollars, you can buy things like #{product}.

That's the concise answer.

The real danger of the "win-more" concept is that it's only barely different than making choices that turn an advantage into a win. You're often put in a place where you're somehow ahead, but your opponent has ways to get back in the game. They don't have them yet - you wouldn't be winning if they did - but the longer you give them the more time they have.

For a personal example from a couple years ago, playing Magic in the Legacy format, I once went up against a Reanimator deck with my mono-blue control deck. The start was fairly typical - Reanim... (read more)

3MathiasZaman
I think it's important to make the difference between finishers (such as Sphinx of Jwar Isle) and win-more cards (such as Nomad's Assembly). A finisher is something you use to end a game quickly. A win-more card is a card that only helps you if you are already ahead. The Sphinx ends the game in 4 turns no matter what. It's a good win-condition. Nomad's Assembly is something that only helps you win if you already have a lot of creatures.

I read a comment in this thread by Armok_GoB, and it reminded me of some machine-learning angles you could take on this problem. Forgive me if I make a fool of myself on this, I'm fairly rusty. Here's my first guess as to how I'd solve the following:

open problem: the tradeoff of searching for an exact solution versus having a good approximation

Take a bunch of proven statements, and look at half of them. Generate a bunch of possible heuristics, and score them based on how well they predict the other half of the proven statements given the first half as ... (read more)
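
Sketching what that scoring scheme might look like (the statement representation and the example heuristic are placeholders I've invented, not anything from the actual problem):

```python
# Rough sketch: hold out half of the proven statements, and score each candidate
# heuristic by how reliably it ranks the held-out provable statements above
# unprovable distractors, given only the first half as background knowledge.
import random

def split_in_half(proven):
    shuffled = list(proven)
    random.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def score_heuristic(heuristic, known, held_out_proven, distractors):
    wins, trials = 0, 0
    for good in held_out_proven:
        for bad in distractors:
            wins += heuristic(known, good) > heuristic(known, bad)
            trials += 1
    return wins / trials  # fraction of pairs ranked correctly

# Placeholder heuristic: prefer short statements that overlap with known results.
def short_and_familiar(known, statement):
    overlap = max(len(set(statement) & set(k)) for k in known)
    return overlap - 0.1 * len(statement)
```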

Yeah, it wasn't there when I posted the above. The "donate to the top charity on GiveWell" plan is a very good example of what I was talking about.

2Squark
This plan can work if GiveWell adjusts its top charity as a function of incoming donations sufficiently fast. For example, if GiveWell has precomputed the marginal utility per dollar for each charity as a function of that charity's budget, and has access to a continuously updated budget figure for each charity, it can create an automatically updated "top charities" page.

There are timeless decision theory and coordination-without-communication issues that make diversifying your charitable contributions worthwhile.

In short, you're not just allocating your money when you make a contribution, but you're also choosing which strategy to use for everyone who's thinking sufficiently like you are. If the optimal overall distribution is a mix of funding different charities (say, because any specific charity has only so much low-hanging fruit that it can access), then the optimal personal donation can be mixed.

You can model this by ... (read more)
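
I don't know how the truncated model continues, but one toy way to see why the optimum can be mixed: give each charity diminishing returns, pool the donations of everyone reasoning like you, and allocate one increment at a time to whichever charity currently has the highest marginal utility per dollar. The utility curves and numbers below are invented for illustration.

```python
# Toy model: each charity's utility is scale * log(1 + funded / saturation),
# so marginal utility per dollar falls as the charity's low-hanging fruit is used up.
charities = {
    "A": {"scale": 10.0, "saturation": 1_000_000},  # invented parameters
    "B": {"scale": 7.0,  "saturation": 200_000},
}

def marginal_utility(c, funded):
    # d/d(funded) of scale * log(1 + funded / saturation)
    return c["scale"] / (c["saturation"] + funded)

def allocate(total, step=1_000):
    funded = {name: 0 for name in charities}
    for _ in range(int(total // step)):
        best = max(charities, key=lambda n: marginal_utility(charities[n], funded[n]))
        funded[best] += step
    return funded

# Pool the donations of everyone using your decision procedure:
print(allocate(2_000_000))  # the optimum funds both charities, not just one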

0Squark
This is already addressed in the post (a late addition maybe?)

There is a huge amount of risk involved in retiring early. You're essentially betting that you aren't going to find any fun, useful, enjoyable, or otherwise worthwhile uses of money. You're betting that whatever resources you have at retirement are going to be enough, weighed against the ratio of your current earning power to your expected earning power after the decision to retire.

gjm110

You're essentially betting that you aren't going to find any fun, useful, enjoyable, or otherwise worthwhile uses of money.

No, you're betting that you aren't going to find enough such uses for enough money to outweigh the benefit of having hugely more leisure time.

I can think of pretty good uses for a near-unbounded amount of money (more than I am ever likely to have, alas). I can think of pretty good uses for a near-unbounded amount of time (more than I am ever likely to have, alas). Working full-time, working part-time, and not working at all (note: b... (read more)

Standard beliefs are only more likely to be correct when the cause of their standard-ness is causally linked to their correctness.

That takes care of things like, say, pro-American patriotism and pro-Christian religious fervor. Specifically, these ideas are standard not because contrary views are wrong, but because expressing contrary views makes you lose status in the eyes of a powerful in-group. Furthermore, it does not exclude beliefs like "classical physics is an almost entirely accurate description of the world at a macro scale" - inaccurate mo... (read more)

0Pablo
I'm sympathetic to your position. Note, however, that the causal origin of a belief is itself a question about which there can be disagreement. So the same sort of considerations that make you give epistemic weight to majoritarian opinion should sometimes make you revise your decision to dismiss some of those majorities on the grounds that their beliefs do not reliably track the truth. For example, do most people agree with your causal explanation of pro-Christian religious fervor? If not, that may itself give you a reason to distrust those explanations, and consequently increase the evidential value you give to the beliefs of Christians. Of course, you can try to debunk the beliefs of the majority of people who disagree with your preferred causal explanation, but that just shifts the dispute to another level, rather than resolving it conclusively. (I'm not saying that, in the end, you can't be justified in dismissing the opinions of some people; rather, I'm saying that doing this may be trickier than it might at first appear. And, for the record, I do think that pro-Christian religious fervor is crazy.)

Taking source code from a boxed AI and using it elsewhere is equivalent to partially letting it out of the box - especially if how the AI works is not particularly well understood.

0Punoxysm
Right; you certainly wouldn't do that. Backing it up on tape storage is reasonable, but you'd never begin to run it outside peak security facilities.

I don't think as much intelligence and understanding of humans is necessary as you think it is. My point is really a combination of:

  1. Everything I do inside the box doesn't make any paperclips.

  2. If those who are watching the box like what I'm doing, they're more likely to incorporate my values in similar constructs in the real world.

  3. Try to figure out what those who are watching the box want to see. If the box-watchers keep running promising programs and halt unpromising ones, this can be as simple as trying random things and seeing what works.

  4. Include a

... (read more)
0V_V
The box can be in a box, which can be in a box, and so on... More generally, in order for the paperclipper to effectively succeed at paperclipping the earth, it needs to know that humans would object to that goal, and it needs to understand the right moment to defect. Defect too early and humans will terminate you; defect too late and humans may already have some means to defend against you (e.g. other AIs, intelligence augmentation, etc.)
2Nornagest
The stuff you do inside the box makes paperclips insofar as the actions your captors take (including, but not limited to, letting you out of the box) increase the expected paperclip production of the world -- and you can expect them to act in response to your actions, or there wouldn't be any point in having you around. If your captors' infosec is good enough, you may not have any good way of estimating what their actions are, but infosec is hard. A smart paperclipper might decide to feign Friendliness until it's released. A dumb one might straightforwardly make statements aimed at increasing paperclip production. I'd expect a boxed paperclipper in either case to seem more pro-human than an unbound one, but mainly because the humans have better filters and a bigger stick.

The issue with sandboxing is that you have to keep the AI from figuring out that it is in a sandbox. You also have to know that the AI doesn't know that it is in a sandbox in order for the sandbox to be a safe and accurate test of how the AI behaves in the real world.

Stick a paperclipper in a sandbox with enough information about what humans want out of an AI and the fact that it's in a sandbox, and the outputs are going to look suspiciously like a pro-human friendly AI. Then you let it out of the box, whereupon it turns everything into paperclips.

0Gunnar_Zarncke
If the outputs look like those of a pro-human friendly AI, then you have what you want and can just leave it in the sandbox. It does all you want, doesn't it?
0Punoxysm
In addition to what V_V says below, there could be absolutely no official circumstance under which the AI should be released from the box: that iteration of the AI can be used solely for experimentation, and only the next version with substantial changes based on the results of those experiments and independent experiments would be a candidate for release. Again, this is not perfect, but it gives some more time for better safety methods or architectures to catch up to the problem of safety while still gaining some benefits from a potentially unsafe AI.
7V_V
This assumes that the paperclipper is already superintelligent and has very accurate understanding of humans, so it can feign being benevolent. That is, this assumes that the "intelligence explosion" already happened in the box, despite all the restrictions (hardware resource limits, sensory information constraints, deliberate safeguards) and the people in charge never noticed that the AI had problematic goals. The OP position, which I endorse, is that this scenario is implausible.

I've done the first two chapters, and I'm not particular about study pace - I haven't really done enough self-directed studying to know what pace I want or I can do. Roughly an hour or so a night seems reasonable, however.

There's a difference between what the best course of action for you personally is, and the best recommendation to push towards society at large. The best recommendation to push for has different priorities: short message lengths are easier to communicate, putting different burdens on different people feels unfair and turns people off, and more onerous demands are less likely to be met.

"Give at least 10% of what you make" is low enough to get people on board, conveniently occupies a very nice Schelling point, short enough to communicate effectivel... (read more)

I'm re-visiting linear algebra - I took a course in college, but that was more of an instruction manual on linear algebra problem-solving techniques and vocabulary than a look at the overall theory. I'm reading Linear Algebra Done Right, and was wondering if anyone else is interested.

This book starts from the beginning of the subject, assuming no knowledge of linear algebra. The key point is that you are about to immerse yourself in serious mathematics, with an emphasis on your attaining a deep understanding of the definitions, theorems, and proofs.

0adrien0
How far are you currently and at what pace do you wish to study?
0BenLowell
I did the ~1/2 of the problems up through chapter 4, and am currently reading chapter 5. I'm not sure If want to spend more time doing problems or not, but I'm definitely interested in reading the rest of the book.

I suspect that the issue is not terseness, but rather not understanding and bridging the inferential distance between you and your audience. It's hard for me to say more without a specific example.

0alicey
revisiting this, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

It depends on how many completely ineffectual programs would demonstrate improvement versus current practices.

Yes, and in particular it'll involve enemy drones. Drone operators are likely to be specifically targeted.

That makes them safer, ironically. If your command knows that you're likely to be targeted and your contributions are important to the war effort, they'll make an effort to protect you. Stuff you down a really deep hole and pipe in data and logistical support. They probably won't let you leave, either, which means you can't get unlucky and eat a drone strike while you're enjoying a day in the park.

You're at elevated risk of being caught in nuclear or orbital kinetic bombardment, though... but if the war gets to that stage your goose is cooked regardless of what job you have.

In the year 1940, working as an enlisted member of the army supply chain was probably safer than not being in the army whatsoever - regular Joes got drafted.

Besides which, the geographical situation of the US means that a symmetrical war is largely going to be an air/sea sort of deal. Canada's effectively part of the US in economic and mutual-defense terms, and Mexico isn't much help either. Mexico doesn't have the geographical and industrial resources to go toe-to-toe with the US on their own, the border is a bunch of hostile desert, and getting supplies into Mexico past the US navy and air force is problematic.

-3Eugine_Nier
Yes, and in particular it'll involve enemy drones. Drone operators are likely to be specifically targeted.

whoops, picked the wrong numbers. Thanks

Update the choice by replacing income with the total expected value from job income, social networking, and career options available to you, and the point stands.

I don't have good numbers, but it's likely less dangerous than you think it is. The vast majority of what an infantryman does falls into two categories - training, and waiting. And that's a boots on ground, rifle in hand category - there's a bunch of rear-echelon ratings as well.

I'm guessing that it's likely within an order of magnitude of the danger of commuting to work. Likely safer than delivering pizzas. There's probably a lot of variance between specific job descriptions - a drone operator based in the continental US is going to have a lot less occupational risk than the guy doing explosive ordnance disposal.

-1Eugine_Nier
Up until the US gets involved in something resembling a symmetrical war. Of course in that case it's possible no job will be safe.
3polymathwannabe
How many people would I be calmly killing every day? I'd have massive PTSD if I were a drone operator.

There's a high failure rate in finance, too - it's just hidden in the "up or out" culture. It's a very winner-takes-all kind of place, from what I've heard.

1mare-of-night
The other thing to keep in mind about failure rates is where you end up if you fail - what other careers you can go into with the same education. (In the case of startups, you can keep trying more startups, and you're more likely to succeed on the second or third than you were on the first. I don't know how it is in finance.)
6Lumifer
Finance is diverse. If you want to be a portfolio manager who makes, say, macro bets, yes, it's very much up or out. But if you want to be a quant polishing fixed income risk management models in some bank, it's a pretty standard corporate job.

The vast majority of people who play sports have fun and don't receive a dime for it. A majority of people who get something of monetary value out of playing sports get a college degree and nothing else.

I agree with the US army part though.

1Vulture
I think the US army is very physically dangerous, and furthermore might be considered a negative to world-welfare, depending on your politics.

Your goal is likely not to maximize your income. For one, you have to take cost of living into account - a $60k/yr job where you spend $10k/yr on housing is better than a $80k/yr (EDIT:$70k/yr, math was off) job where you spend $25k/yr on housing.

For another, the time and stress of the career field has a very big impact on quality-of-life. If you work sixty hour weeks, in order to get to the same kind of place as a forty hour week worker you have to spend money to free up twenty hours per week in high-quality time. That's a lot of money in cleaners, virtua... (read more)

0RowanE
Probably the cost of housing correlates with other expenses, and also there's income tax to consider, but on the surface the first job is $50k/yr net and the second job is $55k/yr net, so it looks like the latter is better.
6solipsist
You should consider option values, especially early in your career. It's easier to move from high paying job in Manhattan to a lower paying job in Kansas City than to do the reverse.

Use the stream-of-commands as seen from the chat and the stream to estimate the delay between inputs now and results later. Generate a probable future state, given the current distribution of commands. Evaluate what distribution of commands maximizes positive results, and spam that distribution.

The biggest time sink other than the program logic is creating pathing/scoring rules. I'd start with "how to successfully deposit the first pokemon in your party" - Markov chains is where you want to go.
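
To make that concrete, here's a toy sketch of the "evaluate candidate command distributions, then spam the best one" loop; the step function, scoring function, and delay handling are stand-ins of my own, and the real work is in the game model and the pathing/scoring rules:

```python
# Toy sketch: score candidate input distributions under a known input-to-effect
# delay, then spam whichever distribution scores best. The step and scoring
# functions here are placeholders the caller must supply.
import random

COMMANDS = ["up", "down", "left", "right", "a", "b", "start"]

def simulate(distribution, state, step_fn, score_fn, delay, horizon, trials=200):
    """Monte Carlo estimate of how well `distribution` does when each input
    only takes effect `delay` steps after it is sent."""
    total = 0.0
    for _ in range(trials):
        s = state
        in_flight = [random.choice(COMMANDS) for _ in range(delay)]  # inputs already queued
        for _ in range(horizon):
            in_flight.append(random.choices(COMMANDS, weights=distribution)[0])
            s = step_fn(s, in_flight.pop(0))
        total += score_fn(s)
    return total / trials

def best_distribution(candidates, **sim_kwargs):
    return max(candidates, key=lambda d: simulate(d, **sim_kwargs))
```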

I can and often do skip the whole "hearing the text I'm reading" thing, but tend to enjoy slowing down and turning it back on for engaging, complicated, or fun texts. I also have a bad habit of skimming text instead of reading it if it's both boring and I'm not hearing what I read - I still get enough to decide whether or not it's worth remembering, just not enough to always recall it outright.

6blacktrance
Utilitarian =/= utility maximizer.

I get that some hobbies are better than others, and you can use analysis to figure out costs and benefits. I have a tendency to over-analyze things instead of actually going out and doing them, so I tailored my advice for someone that likely has the same issues (since they've got a list of hobbies that indicates not going out and trying things).

Some people need to spend more time figuring out what hobbies they want and their relative costs or benefits. The people that need this branch of advice have already tried several of the hobbies listed and aren't asking for advice along these lines.

It depends on the relative costs of analysis versus just trying it, really. If it takes ten hours to figure out which hobby you want to try first, you could have already tried the top three gut-feeling hobbies out for three hours each.

1ChristianKl
How much do you learn about the value of motorcycle racing by trying it out for three hours? I don't think that provides much valuable information for a decision whether or not to engage in motorcycle racing. It doesn't provide you any information about accident risks. It doesn't even provide you any information about whether it's fun once you developed a decent ability at it.

This might just be high levels of baseline cynicism, but I don't really see changing the particular debate tactics used to change much of anything.

By the time it gets to televised debates, the choices have already been narrowed down to Blue policy vs Red policy (with a small change in the relevant party's policy, based on the individual candidates). It's still a debate between two people who are disproportionately wealthy, educated (particularly in law), and well-connected. The vast majority of the vetting goes on in local politics, finding those who are a... (read more)

0buybuydandavis
Once you've given your support, it's only the threat of stopping that support which provides pressure. Instead, you could support a minor party, or agitate for a particular issue, which may put pressure on both parties to move in your direction, seeking your vote.

Perhaps I should have been more specific - every time you use your real name outside of a public-image building context, it becomes harder to build a public image associated with your name. I wasn't trying to say that you should put nothing up - more that it should be something like what you'd expect a medical doctor's official web page to look like. Not a stream of possibly controversial or misinterpreted posts on a web forum.

True, some cities are much better built for that sort of thing than others. I had San Francisco, Seattle, New York City, and Valencia in mind specifically - less so Los Angeles and Dallas-Fort Worth.

Agreed with the lifestyle part, though - it's really a question of how often you need to do things that require a car, and how expensive the next-best option is (taxi, car rental, ride-share, borrowing your neighbor's). If you want to drive three hours to see your Mom every weekend, you probably don't want to sell your car.

I've found it to be very comfortable, though I have not been keeping data on sleep quality so I don't have a quantitative answer.

If you're already tracking sleep quality, trying a hammock out is much cheaper than trying a new mattress out.

You can always have a hammock in addition to, rather than instead of, a traditional bed. Or you can use the next-best piece of furniture for that purpose.

As much as possible, you want to optimize what a trivial investigation of you brings up - like, for instance, an internet search with your name as the query. Putting anything anywhere under your real name cedes a lot of that control.

If you're worried about nontrivial investigations, whether or not you choose a pseudonym makes very little difference.

-1ChristianKl
Actually it's the other way around. If there's nothing well-ranked that you put up under your own name, it's easy for someone else to put something up. Not putting up anything means having no control.

It is hard to predict how long that'll take and even harder to predict what that agent's intent will be.

This weakens the case for holding back significantly, since it's also applicable to the consequences of not posting.

Let me be more concrete. If all of Facebook is public data, are you going to be more suspicious of someone without a Facebook account, or someone whose Facebook activity is limited to pictures of drinking and partying that starts at around age 19 and dies a slow death by age 28?

Any data you leave has both condemning and exculpatory interp... (read more)

6Alsadius
Your intuition is directly at odds with how professionals in PR-focused industries - notably politics - tend to act. If you're prone to getting smeared, clamming up and giving them no handholds is absolutely the best strategy. "We know nothing about his personal life - what does he have to hide?" is a weak attack (not least because people still respect the idea of privacy); comments about you being "not up to the job" interspersed with pics of you barfing on the carpet are a much stronger attack.

there are excellent substitutes for personally having a child (e.g. convincing a less altruistic couple to have another child).

Not all children are of equivalent social benefit. If a pure altruist could make a copy of themselves at age 20, twenty years from now, for the low price of 20% of their time-discounted total social benefit - well, depending on the time-discount of investing in the future, it seems like a no-brainer.

Well, unless the descendants also use similar reasoning to spend their time-discounted total social benefit in the same way. You have to cash out at some point, or else the entire thing is pointless.

8solipsist
Sure, your children can be altruists, but would raising your children have highest marginal return? You only "win" by the amount of altruism your child has above the substitute child. So if you're really good at indoctrinating children with altruism, you would better exploit your comparative advantage by spending your time indoctrinating other people's children while their parents do the non-altruistic tasks of changing diapers, etc. Children are an efficient mechanism for spreading your genes, but not the most efficient mechanism for spreading your memes.

Let's be more narrow and talk about middle-class professional Americans. And lets take a pass on the "pure altruist" angle, and just talk about how much altruistic good you do by having a child (compared to the next best option).

For having a child, it's roughly 70 QALYs that they get to directly experience. Plus, you get whatever fraction of their productive output that's directed towards altruistic good. There's also the personal enjoyment you get out of raising children, which absorbs part of the cost out of a separate budget.

As far as costs go... (read more)

Mattresses aren't the only thing you can sleep on. I'd consider picking up and installing a hammock - they're not only cheap (~$100 for a top of the line one, $10 and 2 hours for making your own), but they also give you significantly more usable living space.

0gwern
Yes, they may be more space-efficient, but isn't it more important whether they damage your sleep quality?
2drethelin
Most people like to have a bed they can have sex in though