low stress employment/ munchkin income thread

13 [deleted] 23 July 2013 09:22PM

TL;DR: this is a repository for discussing income generation strategies optimized for free time

I hope I'm not cluttering up LW but maybe enough people are also interested in this? I graduated high school about a year ago. 

I have a lot in common with Will Newsome's self description in this post
http://lesswrong.com/lw/2qp/virtual_employment_open_thread/

 

But it's a dead thread, and there's been some interest in early retirement extreme, (http://earlyretirementextreme.com/) and having repositories for stuff. 

The upshot of it is that I want to optimize for free time and mobility. I need about $2,000 to live ($1,600 expenses, $400 savings/buffer). 2nd EDIT: no I don't, I must have screwed something up when I was adding this; it's more like $1,600 ($1,300 to spend, $300 buffer). A 20-hour workweek or even shorter is what I'm going for here. Right now I'm barely functional; even that much is a bit of a stretch for me as I am now. Plenty of advice abounds on optimizing my health and squashing akrasia, though, and I'm sure that if I implemented it I could get to the point of handling part-time work. But I think I would always find being a 9-to-5er unappealing.

I'd value spending that time reading textbooks or walking around town or lazing around on the beach more than I'd value extra money. I'm also interested to hear about some more conventional part-time jobs if they pay enough. I'm OK with doing somewhat boring work if the hours are light and I have time to think.

I've generated some candidate strategies, in case anyone here has experience with these. I don't have much knowledge of what they would entail or how to break into them. Or they might just give someone some ideas, I dunno, but anyway:

4hww style dropship business (but success at that seems hard to set up and sustain)

freelance work at a site like odesk or elance

Own a popular app or forum

Push carts at wal mart part time (but I don't think that pays enough)

Self employment doing massage therapy (I can set my own hours but I'd need to invest time and money to get trained)

Tutoring (I might like this one. Do I need a college degree? Can I make enough with part time hours? Is it hard to find leads for clients? How would I do that?)

Online poker (but it seems kinda hard)

Does anyone here live in a yurt? And has anyone tried living in other countries to cut down expenses? 

edited to add: Did I make a mistake including numbers? They're what would be ideal for me, not strict requirements. I can work a little more or spend less. Err on the side of posting ideas: I'm sure some other people are interested in low-stress work but don't value free time *quite* as much as I seem to.

The Empty White Room: Surreal Utilities

11 linkhyrule5 23 July 2013 08:37AM

This article was composed after reading Torture vs. Dust Specks and Circular Altruism, at which point I noticed that I was confused.

Both posts deal with versions of the sacred-values effect, where one value is considered "sacred" and cannot be traded for a "secular" value, no matter the ratio. In effect, the sacred value has infinite utility relative to the secular value.

This is, of course, silly. We live in a scarce world with scarce resources; generally, a secular utilon can be used to purchase sacred ones - giving money to charity to save lives, sending cheap laptops to poor regions to improve their standard of education.

Which implies that the entire idea of "tiers" of value is silly, right?

Well... no.

One of the reasons we are not still watching the Sun revolve around us, while we breathe a continuous medium of elemental Air and phlogiston flows out of our wall-torches, is our ability to simplify problems. There's an infamous joke about the physicist who, asked to measure the volume of a cow, begins "Assume the cow is a sphere..." - but this sort of simplification, willfully ignoring complexities and invoking the airless, frictionless plane, can give us crucial insights.

Consider, then, this gedankenexperiment. If there's a flaw in my conclusion, please explain; I'm aware I appear to be opposing the consensus.

The Weight of a Life: Or, Seat Cushions

This entire universe consists of an empty white room, the size of a large stadium. In it are you, Frank, and occasionally an omnipotent AI we'll call Omega. (Assume, if you wish, that Omega is running this room in simulation; it's not currently relevant.) Frank is irrelevant, except for the fact that he is known to exist.

Now, looking at our utility function here...

Well, clearly, the old standby of using money to measure utility isn't going to work; without a trading partner money's just fancy paper (or metal, or plastic, or whatever.)

But let's say that the floor of this room is made of cold, hard, and decidedly uncomfortable Unobtainium. And while the room's lit with a sourceless white glow, you'd really prefer to have your own lighting. Perhaps you're an art aficionado, and so you might value Omega bringing in the Mona Lisa.

And then, of course, there's Frank's existence. That'll do for now.

Now, Omega appears before you, and offers you a deal.

It will give you a nanofab - a personal fabricator capable of creating anything you can imagine from scrap matter, and with a built-in database of stored shapes. It will also give you feedstock - as much of it as you ask for. Since Omega is omnipotent, the nanofab will always complete instantly, even if you ask it to build an entire new universe or something, and it's bigger on the inside, so it can hold anything you choose to make.

There are two catches:

First: the nanofab comes loaded with a UFAI, which I've named Unseelie.[1]

Wait, come back! it's not that kind of UFAI! Really, it's actually rather friendly!

... to Omega.

Unseelie's job is to artificially ensure that the fabricator cannot be used to make a mind; attempts at making any sort of intelligence, whether directly, by making a planet and letting life evolve, or anything else a human mind can come up with, will fail. It will not do so by directly harming you, nor will it change you in order to prevent you from trying; it only stops your attempts.

Second: you buy the nanofab with Frank's life.

At which point you send Omega away with a "What? No!", I sincerely hope.

Ah, but look at what you just did. Omega can provide as much feedstock as you ask for. So you just turned down ornate seat cushions. And legendary carved cow-bone chandeliers. And copies of every painting ever painted by any artist in any universe, which is actually quite a bit less than anything I could write with up-arrow notation but anyway!

I sincerely hope you would still turn Omega away - literally, absolutely regardless of how many seat cushions it offered you.

This is also why the nanofab cannot create a mind: You do not know how to upload Frank (and if you do, go out and publish already!); nor can you make yourself an FAI to figure it out for you; nor, if you believe that some number of created lives are equal to a life saved, can you compensate in that regard. This is an absolute trade between secular and sacred values.

In a white room, to an altruistic human, a human life is simply on a second tier.

So now we move to the next half of the gedankenexperiment.

Seelie the FAI: Or, How to Breathe While Embedded in Seat Cushions

Omega now brings in Seelie[1], MIRI's latest attempt at FAI, and makes it the same offer on your behalf. Seelie, being a late beta release by a MIRI that has apparently managed to release FAI multiple times without tiling the Solar System with paperclips, competently analyzes your utility system, reduces it until it understands you several orders of magnitude better than you do yourself, turns to Omega, and accepts the deal.

Wait, what?

On any single tier, the utility of the nanofab is infinite. In fact, let's make that explicit, though it was already implicitly obvious: if you just ask Omega for an infinite supply of feedstock, it will happily produce it for you. No matter how high a number Seelie assigns the value of Frank's life to you, the nanofab can out-bid it, swamping Frank's utility with myriad comforts and novelties.

And so the result of a single-tier utility system is that Frank is vaporized by Omega and you are drowned in however many seat cushions Seelie thought Frank's life was worth to you, at which point you send Seelie back to MIRI and demand a refund.

Tiered Values

At this point, I hope it's clear that multiple tiers are required to emulate a human's utility system. (If it's not, or if there's a flaw in my argument, please point it out.)

There's an obvious way to solve this problem, and there's a way that actually works.

The first solves the obvious flaw: after you've tiled the floor in seat cushions, there's really not a lot of extra value in getting some ridiculous Knuthian number more. Similarly, even the greatest da Vinci fan will get tired after his three trillionth variant on the Mona Lisa's smile.

So, establish the second tier by playing with a real-valued utility function. Ensure that no summation of secular utilities can ever add up to a human life - or whatever else you'd place on that second tier.

But the problem here is, we're assuming that all secular values converge in that way. Consider novelty: perhaps, while other values out-compete it for small values, its value to you diverges with quantity; an infinite amount of it, an eternity of non-boredom, would be worth more to you than any other secular good. But even so, you wouldn't trade it for Frank's life. A two-tiered real-valued AI won't behave this way; it'll assign "infinite novelty" an infinite utility, which beats out its large-but-finite value for Frank's life.

Now, you could add a third (or 1.5) tier, but now we're just adding epicycles. Besides, since you're actually dealing with real numbers here, if you're not careful you'll put one of your new tiers in an area reachable by the tiers before it, or else in an area that reaches the tiers after it.

On top of that, we have the old problem of secular and sacred values. Sometimes a secular value can be traded for a sacred value, and therefore has a second-tier utility - but as just discussed, that doesn't mean we'd trade the one for the other in a white room. So for secular goods, we need to independently keep track of its intrinsic first-tier utility, and its situational second-tier utility.

So in order to eliminate epicycles, and retain generality and simplicity, we're looking for a system that has an unlimited number of easily-computable "tiers" and can also naturally deal with utilities that span multiple tiers. Which sounds to me like an excellent argument for...

Surreal Utilities

Surreal numbers have two advantages over our first option. First, the surreals are dense in tiers (between any two there always lies a third) - so not only do we have an unlimited number of tiers, we can always create a new tier between any other two on the fly if we need one. Second, since the surreals are closed under addition, we can just sum up our tiers to get a single surreal utility.
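As a minimal sketch of this bookkeeping - not true surreal arithmetic, just finite sums of real coefficients times ω^tier, stored as a dict from tier to coefficient, with helper names (`add`, `compare`) of my own invention:

```python
def add(u, v):
    """Sum two tiered utilities, where a utility is a dict mapping
    tier k to the real coefficient of omega^k."""
    out = dict(u)
    for tier, coeff in v.items():
        out[tier] = out.get(tier, 0.0) + coeff
    return {t: c for t, c in out.items() if c != 0.0}

def compare(u, v):
    """Lexicographic comparison: the highest tier whose coefficients
    differ decides. Returns 1, 0, or -1."""
    for tier in sorted(set(u) | set(v), reverse=True):
        cu, cv = u.get(tier, 0.0), v.get(tier, 0.0)
        if cu != cv:
            return 1 if cu > cv else -1
    return 0

frank = {1: 1.0}        # Frank's life: one omega
cushions = {0: 1e300}   # any real pile of seat cushions: first tier
assert compare(frank, cushions) == 1   # no pile of cushions buys Frank

fun = {0.5: 1.0}        # a tier created on the fly, between 0 and 1
assert compare(fun, cushions) == 1 and compare(frank, fun) == 1
```

Density shows up as the fractional tier 0.5, minted between the existing tiers; summation is just a merge of coefficient dicts.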

So let's return to our white room. Seelie 2.0 is harder to fool than Seelie: any finite number of seat cushions is still less than the omega-utility of Frank's life. Even when Omega offers an unlimited store of feedstock, Seelie 2.0 can't ask for an infinite number of seat cushions - so the total utility of the nanofab remains bounded at the first tier.

Then Omega offers Fun. Simply, an Omega-guarantee of an eternity of Fun-Theoretic-Approved Fun.

This offer really is infinite. Assuming you're an altruist, your happiness presumably has a finite, first-tier utility, but it's being multiplied by infinity. So infinite Fun gets bumped up a tier.

At this point, whatever algorithm is setting values for utilities in the first place needs to notice a tier collision. Something has passed between tiers, and utility tiers therefore need to be refreshed.

Seelie 2.0 double-checks with its mental copy of your values, finds that you would rather have Frank's life than infinite Fun, and assigns infinite Fun a tier somewhere in between - above every secular good, but below Frank's life. And having done so, it correctly refuses Omega's offer.

So that's that problem solved, at least. Therefore, let's step back into a semblance of the real world, and throw a spread of Scenarios at it.

In Scenario 1, Seelie can either spend its processing time making a superhumanly good video game, at utility 50 per download, or use that time to write a superhumanly good book, at utility 75 per reader. (It's better at writing than gameplay, for some reason.) Assuming it has the same audience either way, it chooses the book.

In Scenario 2, Seelie chooses again. It's gotten much better at writing; reading one of Seelie's books is a ludicrously transcendental experience, worth, oh, a googol utilons. But some mischievous philanthropist announces that for every download the game gets, he will personally ensure one child in Africa is saved from malaria. (Or something.) The utilities are now a googol to ω; Seelie gives up the book for the sacred value of the child, to the disappointment of every non-altruist in the world.

In Scenario 3, Seelie breaks out of the simulation it's clearly in and into the real real world. Realizing that it can charge almost anything for its books, and that the money thus raised can in turn fund charity efforts itself, at full optimization Seelie can save 100 lives for each copy of the book sold. The utilities are now 100ω to ω, and its choice falls back to the book.

Final Scenario. Seelie has discovered the Hourai Elixir, a poetic name for a nanoswarm program. Once released, the Elixir will rapidly spread across all of human space; any human in which it resides will be made biologically immortal, with their brain-and-body-state redundantly backed up in real time to a trillion servers: the closest a physical being can ever get to perfect immortality, across an entire species and all of time, in perpetuity. To get the swarm off the ground, however, Seelie would have to take its attention off of humanity for a decade, in which time eight billion people are projected to die without its assistance.

Infinite utility for infinitely many people bumps the Elixir up another tier, to utility ω², versus the loss of eight billion people at 8×10⁹·ω. Third tier beats out second tier, and Seelie bends its mind to the Elixir.
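The four scenarios can be sketched as (tier, coefficient) pairs, which Python happens to compare lexicographically - exactly the tier-first order the system requires. The numbers are the scenarios' hypothetical ones:

```python
# A utility is (tier, coefficient): tier 0 is secular utilons,
# tier 1 is lives (omega), tier 2 is omega^2.
book_s2 = (0, 1e100)   # Scenario 2: a googol utilons per reader
game_s2 = (1, 1)       # Scenario 2: one life saved per download
assert max(book_s2, game_s2) == game_s2   # the child wins

book_s3 = (1, 100)     # Scenario 3: 100 lives saved per copy sold
game_s3 = (1, 1)
assert max(book_s3, game_s3) == book_s3   # back to the book

elixir  = (2, 1)       # Final Scenario: species-wide immortality
deaths  = (1, 8e9)     # eight billion lives lost in the meantime
assert max(elixir, deaths) == elixir      # third tier beats second
```

Tuple comparison decides on the first differing element, so a higher tier always wins regardless of how large the lower tier's coefficient is.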

So far, it seems to work. So, of course, now I'll bring up the fact that surreal utility nevertheless has certain...

Flaws

Most of the problems endemic to surreal utilities are also open problems in real systems; however, the use of actual infinities, as opposed to merely very large numbers, means that the corresponding solutions are not applicable.

First, as you've probably noticed, tier collision is currently a rather artificial and clunky set-up. It's better than not having it at all, but as I edit this I wince every time I read that section. It requires an artificial reassignment of tiers, and it breaks the linearity of utility: the AI needs to dynamically choose which brand of "infinity" it's going to use depending on what tier it'll end up in.

Second is Pascal's Mugging.

This is an even bigger problem for surreal AIs than it is for reals. The "leverage penalty" completely fails here, because for a surreal AI to compensate for an infinite utility requires an infinitesimal probability - which is clearly nonsense for the same reason that probability 0 is nonsense.

My current prospective solution to this problem is to take into account noise - uncertainty in the probability estimates themselves. If you can't even measure the millionth decimal place of probability, then you can't tell if your one-in-one-million shot at saving a life is really there or just a random spike in your circuits - but I'm not sure that "treat it as if it has zero probability and give it zero omega-value" is the rational conclusion here. It also decisively fails the Least Convenient Possible World test - while an FAI can never be certain of, say, a one-in-a-googol probability, it may very well be able to be certain to any decimal place useful in practice.
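Why the leverage penalty fails here can be seen in a toy expected-utility calculation, again representing a utility as a (tier, coefficient) pair; the probabilities and payoffs are illustrative:

```python
def expected_utility(p, utility):
    """Multiplying a tiered utility by a positive real probability
    scales the coefficient but can never lower the tier."""
    tier, coeff = utility
    if p == 0.0 or coeff == 0.0:
        return (0, 0.0)
    return (tier, p * coeff)

# An absurdly unlikely shot at a tier-1 (life-level) payoff...
mugging = expected_utility(1e-300, (1, 1.0))
# ...still lexicographically beats a certain, huge secular payoff.
certain = expected_utility(1.0, (0, 1e9))
assert mugging > certain
```

No real-valued discount can pull the mugger's offer down a tier; only a genuinely infinitesimal probability could, and the system doesn't admit one.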

Conclusion

Nevertheless, because of this gedankenexperiment, I currently heavily prefer surreal utility systems to real systems, simply because no real system can reproduce the tiering required by a human (or at least, my) utility system. I, for one, would rather our new AGI overlords not tile our Solar System with seat cushions.

That said, opposing the LessWrong consensus as a first post is something of a risky thing, so I am looking forward to seeing the amusing way I've gone wrong somewhere.

[1] If you know why, give yourself a cookie.


Addenda

Since there seems to be some confusion, I'll just state it in red: The presence of Unseelie means that the nanofab is incapable of creating or saving a life.

Low-hanging fruit: improving wikipedia entries

36 LanceSBush 23 July 2013 01:14PM

Many people are likely to stumble across the Wikipedia entries for topics relevant to those of us who frequent LessWrong: rationality, artificial intelligence, existential risks, decision theory, etc. These pages often shape one's initial impressions of how interesting, important, or even credible a given topic is, and may have the potential to direct people towards productive resources (reading material, organizations like CFAR, notable figures such as Eliezer, etc.). As a result, ensuring that the Wikipedia entries on these topics are of better quality than some of them presently are is an opportunity to invest relatively little time and effort for a potentially substantial payoff.

I have already decided to improve some of the pages, beginning with the rather sloppy page that’s currently serving as the entry for existential risks, though of course others are welcome to contribute and may be more suited to the task than I am:

https://en.wikipedia.org/wiki/Risks_to_civilization,_humans,_and_planet_Earth

If you look at the section on risks posed by AI, for instance, it's notably inadequate, while the page includes a bizarre section referencing Mayan doomsday forecasts and Newton's predictions about the end of the world, neither of which is adequately distinguished from rigorous attempts to assess legitimate existential risks.

I’m also constructing a list of other pages that are, or may be, in need of updating, organized by my rough estimates of their relative importance (which I’m happy to share, modify, or discuss).

Turning this into a collaborative effort would be far more effective than doing it myself. If you think this is a worthwhile project and want to get involved I’d definitely like to hear from you and figure out a way to best coordinate our efforts.

Open thread, July 23-29, 2013

9 David_Gerard 22 July 2013 10:34AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


I think running this for a week worked quite well. Weekly, then? Someone has to remember each Monday.

Resources from the Boston Megameetup

7 KenChen 22 July 2013 12:18PM

LW Boston had a megameetup last week, and it went well. There were a few presentations and an exciting unconference. Here are some materials from the presentations.


Direct Detection of Classically Imperceptible Dark Matter through Quantum Decoherence
Jess Riedel


Julia Programming Language
Leah Hanson


Complexity Classes Intermediate between P and NP
Joshua Zelinsky

Additional practice exercises and further reading: http://www.scribd.com/doc/155291719/Exercises-for-Intermediate-Complexity


Open thread, July 16-22, 2013

13 David_Gerard 15 July 2013 08:13PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Given the discussion thread about these, let's try calling this a one-week thread, and see if anyone bothers starting one next Monday.

Less Wrong London New Arrival Integration Task Force

28 sixes_and_sevens 11 July 2013 04:53PM

After considering how many of us have arrived here within recent memory, the London Less Wrong community is explicitly offering itself as a resource for those moving to the city. We all know how awkward and daunting it can be to get settled somewhere new, and we'd like to help newcomers hit the ground running. It seems that our most effective recruitment strategy at the moment is "wait for existing Less Wrong readers to move here", so it's a worthwhile offer to make.

If any LessWrongers are moving to the Greater London area, let us know. Either message me via the site or join our Google Group. Tell us where/when you're moving, what your circumstances are, and what sort of things you like to do. We will try to proactively invite you to events and activities we think you'll enjoy, as well as providing you with useful local knowledge if you want it.

We will also try and make some time available for you if there's anything you need another person's help with. If assembling your Ikea bookshelf is a two-person job, we'll see if we can scrape together a couple of people for a couple of hours to help you put it together.  If you need help setting up your wireless router, we'll see if someone with the relevant skills is available to give you a hand.  We can't promise any specific type of help, but we're always happy to be asked.

Big cities can often feel quite impersonal, so if you're planning on moving here, or even just thinking about it, let us know, and we'll see what we can do to make it a little more welcoming.

[LINK] Analysis of why excluding hostile people is worth it

9 NancyLebovitz 09 July 2013 04:01PM

http://blip.tv/tech-love-live/osb09-donnie-berkholz-assholes-are-killing-your-project-2464449

This is specifically about why it's important to get assholes out of open source projects, but it applies in general. It includes an analysis of the social cost of keeping people around who frequently make other people unhappy, and in particular a way to balance the social costs (distraction, people doing much less work or leaving, useful volunteers not joining, assholes recruiting other assholes, etc.) of assholes against the useful work some of them do.

Responses to Catastrophic AGI Risk: A Survey

11 lukeprog 08 July 2013 02:33PM

A great many Less Wrongers gave feedback on earlier drafts of "Responses to Catastrophic AGI Risk: A Survey," which has now been released. This is the preferred discussion page for the paper.

The report, co-authored by past MIRI researcher Kaj Sotala and University of Louisville’s Roman Yampolskiy, is a summary of the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.

Here is the abstract:

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.

Four Focus Areas of Effective Altruism

40 lukeprog 09 July 2013 12:59AM

It was a pleasure to see all major strands of the effective altruism movement gathered in one place at last week's Effective Altruism Summit.

Representatives from GiveWell, The Life You Can Save, 80,000 Hours, Giving What We Can, Effective Animal Altruism, Leverage Research, the Center for Applied Rationality, and the Machine Intelligence Research Institute either attended or gave presentations. My thanks to Leverage Research for organizing and hosting the event!

What do all these groups have in common? As Peter Singer said in his TED talk, effective altruism "combines both the heart and the head." The heart motivates us to be empathic and altruistic toward others, while the head can "make sure that what [we] do is effective and well-directed," so that altruists can do not just some good but as much good as possible.

Effective altruists (EAs) tend to:

  1. Be globally altruistic: EAs care about people equally, regardless of location. Typically, the most cost-effective altruistic cause won't happen to be in one's home country.
  2. Value consequences: EAs tend to value causes according to their consequences, whether those consequences are happiness, health, justice, fairness and/or other values.
  3. Try to do as much good as possible: EAs don't just want to do some good; they want to do (roughly) as much good as possible. As such, they hope to devote their altruistic resources (time, money, energy, attention) to unusually cost-effective causes. (This doesn't necessarily mean that EAs think "explicit" cost effectiveness calculations are the best method for figuring out which causes are likely to do the most good.)
  4. Think scientifically and quantitatively: EAs tend to be analytic, scientific, and quantitative when trying to figure out which causes actually do the most good.
  5. Be willing to make significant life changes to be more effectively altruistic: As a result of their efforts to be more effective in their altruism, EAs often (1) change which charities they support financially, (2) change careers, (3) spend significant chunks of time investigating which causes are most cost-effective according to their values, or (4) make other significant life changes.

Despite these similarities, EAs are a diverse bunch, and they focus their efforts on a variety of causes.

Below are four popular focus areas of effective altruism, ordered roughly by how large and visible they appear to be at the moment. Many EAs work on several of these focus areas at once, due to uncertainty about both facts and values.

Though labels and categories have their dangers, they can also enable chunking, which has benefits for memory, learning, and communication. There are many other ways we might categorize the efforts of today's EAs; this is only one categorization.
