This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
How to Keep Someone with You Forever.
This is a description of "sick systems"-- jobs and relationships that destructively take over people's lives.
I'm posting it here partly because it may be of use-- systems like that are fairly common and can take a while to recognize, and partly because it leads to some general questions.
One of the marks of a sick system is that the people running it convince the victims that they (the victims) are both indispensable and incompetent-- and it can take a very long time to recognize the contradiction. It's plausible that the crises, lack of sleep, and frequent interruptions are enough to make people not think clearly about what's being done to them, but is there any more to it than that?
One of the commenters to the essay suggests that people are vulnerable to sick systems because raising babies and small children is a lot like being in a sick system. This is somewhat plausible, but I suspect that a large part of the stress is induced by modern methods of raising small children-- the parents are unlikely to have a substantial network of helpers, they aren't sharing a bed with the baby (leading to more serious sleep deprivation), and there'...
Message from Warren Buffett to other rich Americans
I find the rationality of the super-rich specifically interesting because, unless they are heirs or entertainers, it takes quite a bit of instrumental rationality to 'get there'. Nevertheless, it seems many of them do not make the same deductions as Buffett, which seem pretty clear:
My wealth has come from a combination of living in America, some lucky genes, and compound interest. Both my children and I won what I call the ovarian lottery. (For starters, the odds against my 1930 birth taking place in the U.S. were at least 30 to 1. My being male and white also removed huge obstacles that a majority of Americans then faced.)
My luck was accentuated by my living in a market system that sometimes produces distorted results, though overall it serves our country well. I've worked in an economy that rewards someone who saves the lives of others on a battlefield with a medal, rewards a great teacher with thank-you notes from parents, but rewards those who can detect the mispricing of securities with sums reaching into the billions. In short, fate's distribution of long straws is wildly capricious.
In this sense they are sort of 'natural experiments' of cognitive biases at work.
I made a couple of comments here http://lesswrong.com/lw/1kr/that_other_kind_of_status/255f at Yvain's post titled "That Other Kind of Status." I messed up in writing my first comment in that it did not read as I had intended it to. Please disregard my first comment (I'm leaving it up to keep the responses in context).
I clarified in my second comment. My second comment seems to have gotten buried in the shuffle and so I thought I would post again here.
I've been a lurker in this community for three months and I've found that it's the smartest community that I've ever come across outside of parts of the mathematical community. I recognize a lot of the posters as similar to myself in many ways and so have some sense of having "arrived home."
At the same time, the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they're sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.
I feel like the elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you're a part of these groups (*).
You know what... I'm going to come right out and say it.
A lot of people need their clergy. And after a decade of denial, I'm finally willing to admit it - I am one of those people.
The vast majority of people do not give their 10% tithe to their church because some rule in some "holy" book demands it. They don't do it because they want a reward in heaven, or to avoid hell, or because their utility function assigns all such donated dollars 1.34 points of utility up to 10% of gross income.
They do it because they want th...
I don't think an intelligence explosion is imminent either. But I believe it's certain to eventually happen, absent the end of civilization before that. And I believe that its outcome depends exclusively on the values of the agents driving it, hence we need to be ready, with a good understanding of preference theory at hand, when the time comes. To get there, we need to start somewhere. And right now, almost nobody is doing anything in that direction; there is a very poor level of awareness of the problem, and poor intellectual standards for discussing it where surface awareness is present.
Either right now, or 50, or 100 years from now, a serious effort will have to be undertaken, but the later it starts, the greater the risk of being too late to guide the transition in a preferable direction. The problem itself, as a mathematical and philosophical challenge, sounds like something that could easily take at least 100 years to reach clear understanding, and that is the deadline we should worry about -- starting 10 years too late means failing to finish in time 100 years from now.
Is getting 100% of the lightcone a hundred times better than 1%?
I think yes, if we take into account that the more of the lightcone we (our FAI) get, the more trading opportunities we would have with UFAI in other possible worlds. Diminishing marginal value shouldn't apply across possible worlds, because otherwise it would imply gross violations of expected utility maximization.
Also, I suspect that there are possible worlds with much greater resources than our universe (perhaps with physics that allow hypercomputation, or just many orders of magnitude more total exploitable resources), and some of them would have potential trading partners who are willing to give us a small share of their world for a large share of ours. We may eventually achieve most of our value from trading with them. But of course such trade wouldn't be possible if we didn't have something to trade with!
I'd like to share introductory-level posts as widely as possible. There are only three with this tag. Could people nominate more of these posts, perhaps messaging the authors to encourage them to tag their posts "introduction"?
We should link to, stumble on, etc. accessible posts as much as possible. The sequences are great, but intimidating for many people.
Added: Are there more refined tags we'd like to use to indicate who the articles are appropriate for?
There are a few scattered posts in Eliezer's sequences which do not, I believe, have strong dependencies (I steal several from the About page, others from Kaj_Sotala's first and second lists) - I separate out the ones which seem like good introductory posts specifically, with a separate list of others I considered but do not think are specifically introductory.
Introductions:
Not introductions, but accessible and cool:
Wikipedia says the term "Synthetic Intelligence" is a synonym for GAI. I'd like to propose a different use: as a name for the superclass encompassing things like prediction markets. This usage occurred to me while considering 4chan as a weakly superintelligent optimization process with a single goal; something along the lines of "producing novelty;" something it certainly does with a paperclippy single-mindedness we wouldn't expect out of a human.
It may be that there's little useful to be gained by considering prediction markets and chans as part of the same category, or that I'm unable to find all the prior art in this area because I'm using the wrong search terms--but it does seem somewhat larger and more practical than gestalt intelligence.
I've noticed a surprising conclusion about moral value of the outcomes (1) existential disaster that terminates civilization, leaving no rational singleton behind ("Doom"), (2) Unfriendly AI ("UFAI") and (3) FAI. It now seems that although the most important factor in optimizing the value of the world (according to your personal formal preference) is increasing probability of FAI (no surprise here), all else equal UFAI is much more preferable than Doom. That is, if you have an option of trading Doom for UFAI, while forsaking only neglig...
I have an idea that I would like to float. It's a rough metaphor that I'm applying from my mathematical background.
Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps. You might think that this is a silly distinction, but there are a few reasons why it may not be.
First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each very accurately describe a small area...
...“There is no scientist shortage,” declares Harvard economics professor Richard Freeman, a pre-eminent authority on the scientific work force. Michael Teitelbaum of the Alfred P. Sloan Foundation, a leading demographer who is also a national authority on science training, cites the “profound irony” of crying shortage — as have many business leaders, including Microsoft founder Bill Gates — while scores of thousands of young Ph.D.s labor in the nation’s university labs as low-paid, temporary workers, ostensibly training for permanent faculty positions that
I mainly have evidence for the absolute level, not necessarily for the trend (in science getting worse). For the trend, I could point to Goodhart phenomena, like reliance on the publications-per-unit-time metric, which is gamed and gets worse as time progresses.
I also think that in this context, the absolute level is evidence of the trend, when you consider that the number of scientists has increased; if the quality of science in general has not increased with more people, it's getting worse per unit person.
For the absolute level, I've noticed scattered pieces of the puzzle that, against my previous strong presumption, support my suspicions. I'm too sleepy to go into detail right now, but briefly:
There's no way that all the different problems being attacked by researchers can be really, fundamentally different: the function space is too small for a unique one to exist for each problem, so most should be reducible to a mathematical formalism that can be passed to mathematicians, who can tell if it's solvable.
There is evidence that such connections are not being made. The example I use frequently is ecologists and the method of adjacency matrix eigenvectors. That method has b
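(For anyone who hasn't met it: the method scores the nodes of a graph by the principal eigenvector of the graph's adjacency matrix -- eigenvector centrality. A minimal sketch in Python; the four-species interaction graph here is a made-up example:)

```python
import numpy as np

# Adjacency matrix of a made-up 4-species interaction graph:
# A[i][j] = 1 if species i and species j interact.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# The principal eigenvector (largest eigenvalue) assigns each node
# a score proportional to the sum of its neighbours' scores.
eigenvalues, eigenvectors = np.linalg.eigh(A)  # eigh: A is symmetric
principal = np.abs(eigenvectors[:, np.argmax(eigenvalues)])

for i, score in enumerate(principal / principal.sum()):
    print(f"species {i}: centrality {score:.3f}")
```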
I think you've got something really important here. If you want to get someone to an intuitive understanding of something, then why not go with explanations that are closer to that intuitive understanding? I usually understand such explanations a lot better than more dignified explanations, and I've seen that a lot of other people are the same way.
I remember when a classmate of mine was having trouble understanding mutexes, semaphores, monitors, and a few other low-level concurrency primitives. He had been to the lectures, read the textbook, looked it up online, and was still baffled. I described to him a restroom where people use a pot full of magic rocks to decide who can use the toilets, so they don't accidentally pee on each other. The various concurrency primitives were all explained as funny rituals for getting the magic toilet permission rocks. E.g. in one scheme people waiting for a rock stand in line; in another scheme they stand in a throng with their eyes closed, periodically flinging themselves at the pot of rocks to see if any are free. Upon hearing this, my friend's confusion was dispelled. (For my part, I didn't understand this stuff until I had translated it into va...
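(If it helps to ground the analogy: the pot of magic rocks is essentially a counting semaphore. A minimal sketch in Python of the "stand in line" scheme -- the restroom vocabulary is, of course, invented for the analogy:)

```python
import threading
import time

N_TOILETS = 2
rocks = threading.Semaphore(N_TOILETS)  # the pot of magic rocks

def use_restroom(person: str) -> None:
    rocks.acquire()            # wait until a rock is free
    try:
        print(f"{person} took a rock and went in")
        time.sleep(0.1)        # ...using the toilet...
    finally:
        rocks.release()        # put the rock back in the pot
        print(f"{person} put the rock back")

threads = [threading.Thread(target=use_restroom, args=(f"person-{i}",))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

(The "eyes closed, periodically flinging themselves at the pot" ritual corresponds to spinning on a non-blocking acquire in a retry loop -- i.e., a spinlock.)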
A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:
Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of t...
Two counters to the majoritarian argument:
First, it is being mentioned in the mainstream - there was a New York Times article about it recently.
Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought - nuclear war. I've been reading Bertrand Russell's autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (the UK's upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons, until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.
Looking back even further: for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed amongst physicists.
I think your second point is stronger. However, I don't think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you've got something that's like a human brain, but faster. Let it replicate itself, and you've got the equivalent of a team of humans, but which have the advantages of shared memory and instantaneous communication.
Now, if humans can design an AI, surely a team of 1,000,000 human-equivalents running 1000x faster can design an improved AI?
The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind.
If your argument is based on information capacity alone, it can be knocked down pretty easily. An AI can understand some small part of its design and improve that, then pick another part and improve that, etc. For example, if the AI is a computer program, it has a sure-fire way of improving itself without completely understanding its own design: build faster processors. Alternatively you could imagine a population of a million identical AIs working together on the problem of improving their common design. After all, humans can build aircraft carriers that are too complex to be understood by any single human. Actually I think today's humanity is pretty close to understanding the human mind well enough to improve it.
That's a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn't likely to have much general knowledge. However, that does mean the AI individual has a better understanding of how many different approaches to AI have failed miserably. But that's just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth are clearly people who have knowledge bases closer to that of the AI prof than to the physics grad student. Hanson certainly has looked a lot at various failed attempts at AI. I think I'll withdraw this argument. You are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.
A question: Do subscribers think it would be possible to make an open-ended, self-improving system with a perpetual delusion - e.g., that Jesus loves it?
I'm thinking of writing a top-post on the difficulties of estimating P(B) in real-world applications of Bayes' Theorem. Would people be interested in such a post?
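(For concreteness: the troublesome quantity is the denominator of Bayes' Theorem, which in the simplest two-hypothesis case expands by the law of total probability:)

```latex
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\qquad
P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A).
```

The P(B | ¬A) term -- how likely the evidence would be under all the ways the hypothesis could be false -- is precisely the piece that resists estimation in real-world applications.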
We've had them for a while now, but who knows - we might have our first narrow-focused AI band pretty soon.
Good business opportunity there... maybe this is how the SIAI will guarantee unlimited funding in the future? :)
An idea I had: an experiment in calibration. Collect, say, 10 (preferably more) occasions on which a weather forecaster said "70% chance of rain/snow/whatever," and note whether or not these conditions actually occurred. Then find out if the actual fraction is close to 0.7.
I wonder whether they actually care about being well calibrated. Probably not; I suppose their computers just spit out a number and they report it. But it would be interesting to find out.
I will report my findings here, if you are interested, and if I stay interested.
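(In the meantime, a minimal sketch of how the tally might be scored in Python -- the forecast/outcome pairs below are hypothetical placeholders for whatever data actually gets collected:)

```python
# Each pair: (forecast probability, whether the forecast event occurred).
# These ten pairs are placeholders -- substitute real observations.
observations = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (0.7, True), (0.7, True), (0.7, True), (0.7, False), (0.7, True),
]

forecast = 0.7
outcomes = [occurred for p, occurred in observations if p == forecast]
observed = sum(outcomes) / len(outcomes)
print(f"forecast {forecast:.0%}, observed {observed:.0%} "
      f"over {len(outcomes)} occasions")
# Well calibrated if the two are close -- though with only 10 samples
# the binomial noise is roughly +/-14 percentage points, so more
# occasions really are preferable.
```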
Econ question: if a child is renting an apartment for $X, and the parents have a spare apartment that they are currently renting out for $Y, would it help or hurt the economy if the child moved into that apartment instead? Consider the cases X<Y, X=Y, X>Y.
If I moved into that apartment instead, would that help or hurt the country's economy as a whole?
Good question, not because it's hard to answer, but because of how pervasive the wrong answer is, and the implications for policy for economists getting it wrong.
If your parents prefer you being in their apartment to the forgone income, they benefit; otherwise they don't.
If you prefer being in their apartment to the alternative rental opportunities, you benefit; otherwise, you don't.
If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.
ANYTHING beyond that -- anything whatsoever -- is Goodhart-laden economist bullsh**. Things like GDP and employment and CPI were picked long ago as good correlates of general economic health. Today, they are taken to define economic health, irrespective of how well people's wants are being satisfied, which is supposed to be what we mean by a "good economy".
Today, economists equate growing GDP -- irrespective of measuring artifacts that make it deviate from what we want it to measure -- with a good economy. If the eco...
Fascinating talk (Highly LW-relevant)
http://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception.html
These days, I sometimes bump into great new ideas[tm] that are at times well proven, or at least workable and useful -- only to remember that I already used that idea some years ago with great success and then dumped it for no good reason whatsoever. Simple example: in language-learning write-ups, I repeatedly find the idea of an SRS. That is a program which does spaced repetitions at nice intervals and consistently helps in memorizing not only language items, but also all other kinds of facts. Programs and data collections are now freely availab...
Looks like LW briefly switched over to its backup server today, one with a database a week out of date. That, or a few of us suffered a collective hallucination. Or, for that matter, just me. ;)
Just in case you were wondering too.
The causal-set line of physics research has been (very lightly) touched on here before. (I believe it was Mitchell Porter who had linked to one or two things related to that, though I may be misremembering.) But recently I came across something that goes a bit farther: rather than embedding a causal set in a spacetime or otherwise handing it the spacetime structure, it basically just goes "here's a directed acyclic graph... we're going to add on a teensy weensy few extra assumptions... and out of it construct the Minkowski metric, and relativistic tra...
Whole Brain Emulation: The Logical Endpoint of Neuroinformatics? (google techtalk by Anders Sandberg)
I assume someone has already linked to this but I didn't see it so I figured I'd post it.
Creeping rationality: I just heard a bit on NPR about a proposed plan to distribute the returns from newly found mineral wealth in Afghanistan to the general population. This wasn't terribly surprising. What delighted and amazed me was the follow-up: it was hoped that such a plan would lead to a more responsive government, but all that was known was that such plans have worked in democratic societies; it wasn't known whether the causality could be reversed, using such a plan to make a society more democratic.
Can anyone recommend a good book or long article on bargaining power? Note that I am NOT looking for biographies, how-to books, or self-help books that teach you how to negotiate. Biographies tend to be outliers, and how-to books tend to focus on the handful of easily changeable independent variables that can help you increase your bargaining power at the margins.
I am instead looking for an analysis of how people's varying situations cause them to have more or less bargaining power, and possibly a discussion of what effects this might have on psycholog...
Amanda Knox update: Someone claims he knows the real killer, and is being taken seriously enough to give Knox and Sollecito a chance of being released. Of course, he's probably lying, since Guede most likely is the killer, and it's not who this new guy claims. But what can you do against the irrational?
I found this on a Slashdot discussion as a result of -- forgive me -- practicing the dark arts. (Pretty depressing I got upmodded twice on net.)
Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would a FAI?
Q: What Is I.B.M.’s Watson?
http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=all
A: what is Skynet?
Apologies for posting so much in the June Open Threads. For some reason I'm getting many random ideas lately that don't merit a top-level post, but still lead to interesting discussions. Here's some more.
How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.
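(A minimal sketch of the computer half of that check, assuming sympy is available; the pen-and-paper part is then just redoing the multiplication by hand:)

```python
from sympy import factorint

n = 874613              # a number you made up, too big to factor mentally
factors = factorint(n)  # returns {prime: exponent, ...}
print(factors)

# Verify by multiplying back -- the step you would redo by hand:
product = 1
for prime, exponent in factors.items():
    product *= prime ** exponent
assert product == n
```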
How to check that you aren't a brain in a vat: inflict some minor brain damage on y
Is anyone else concerned about the possibility of nuclear terrorist attacks? No, I don't mean what you usually hear on the news about dirty bombs or Iran/North Korea. I mean actual terrorists with an actual nuclear bomb. There are a surprising number of nuclear weapons on the bottom of the ocean. Has it occurred to anyone that someone with enough funding and determination could actually retrieve one of them? Maybe they already have.
...In its campaign to discredit General Lebed’s revelations, the Russian government insisted that the loss of a nuclear weapon
I don't know the ins and outs of the Summers case, but that article has a smell of straw man. Especially this (emphasis mine):
You see, there's a shifty little game that proponents of gender discrimination are playing. They argue that high SAT scores are indicative of success in science, and then they say that males tend to have higher math SAT scores, and therefore it is OK to encourage more men in the higher ranks of science careers…but they never get around to saying what their SAT scores were. Larry Summers could smugly lecture to a bunch of accomplished women about how men and women were different and having testicles helps you do science, but his message really was "I have an intellectual edge over you because some men are incredibly smart, and I am a man", which is a logical fallacy.
From what I understand (and a quick check on Wikipedia confirms this), what got Larry Summers in trouble wasn't that he said we should use gender as a proxy for intelligence, but merely that he suggested gender differences in ability could explain the observed under-representation of women in science.
The whole article is attacking a position that, as far as I know, nobody holds in the West a...
Off That (Rationalist Anthem) - Baba Brinkman
More about skeptics than rationalists, but still quite nice. Enjoy!
Sometimes I try to catch up on Recent Comments, but it seems as though the only way to do it is one page at a time. To make matters slightly worse, the link for the Next page going pastwards is at the bottom of the page, but the page loads at the top, so I have to scroll down for each page.
Is there any more efficient way to do it?
Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it has to solve a certain math/physics problem, deposit the correct solution in a specified place and stop. If a solution can't be found, stop after a specified number of cycles. Don't talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?
ETA: absent other suggestions, I'm going to call such devices "AI bombs".
I recently read a fascinating paper that argued, based on what we know about cognitive bias, that our capacity for higher reason actually evolved as a means to persuade others of what we already believe, rather than as a means to reach accurate conclusions. In other words, rationalization came first and reason second.
Unfortunately I can't remember the title or the authors. Does anyone remember this paper? I'd like to refer to it in this talk. Thanks!
Interview with Lloyd's of London space underwriter.
Feds under pressure to open US skies to drones
http://news.yahoo.com/s/ap/20100614/ap_on_bi_ge/us_drones_over_america
Looking through a couple of posts on young rationalists, it occurred to me to ask the question, how many murderers have a loving relationship with non-murderer parents?
Is there a way to get these kinds of statistics? Is there a way to filter them for accuracy? Accuracy both of 'loving relationship' and of 'guilty of murder' (i.e. plea bargains, false charges, etc.)
Today is Autistic Pride Day, if you didn't know. Celebrate by getting your fellow high-functioning autistic friends together to march around a populated area chanting "Aspie Power!" Preferably with signs that say "Neurotypical = manipulative", "fake people aren't real", or something to that effect.
Kidding. (About everything after the first sentence, I mean.)
one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution conver...
Episode of the show Outnumbered that might appeal to this community. The show in general is very funny, smart, and well acted, the children's roles in particular.
I'm looking for some concept which I am sure has been talked about before in stats, but I'm not sure of the technical term for it.
Let's say you have a function you are trying to guess, with a certain range and domain. How would you talk about the amount of data you would need to likely recover the actual function from noisy data? My current thoughts are that the larger the cardinality of the domain, the more data you would need (in a simple relationship), and that the type of noise would determine how much the size of the range affects the amount of data you would need.
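(One way to make the domain-size intuition concrete: a hypothetical little simulation, assuming the simplest setting -- a function on a finite domain observed with Gaussian noise and estimated by averaging repeated samples at each point. The printed counts should grow with domain size:)

```python
import random

def samples_needed(domain_size: int, noise_sd: float = 1.0,
                   tolerance: float = 0.5, trials: int = 100) -> float:
    """Average number of noisy samples drawn before every point of a
    random 0/1-valued function on a finite domain is pinned down to
    within the given tolerance by per-point averaging."""
    total = 0
    for _ in range(trials):
        truth = [random.choice([0.0, 1.0]) for _ in range(domain_size)]
        sums = [0.0] * domain_size
        counts = [0] * domain_size
        n = 0
        while not all(c > 0 and abs(s / c - t) < tolerance
                      for s, c, t in zip(sums, counts, truth)):
            x = random.randrange(domain_size)                  # pick a point
            sums[x] += truth[x] + random.gauss(0.0, noise_sd)  # noisy sample
            counts[x] += 1
            n += 1
        total += n
    return total / trials

for size in (2, 4, 8, 16):
    print(size, samples_needed(size))
```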
Physics question: Is it physically possible to take any given mass, like the moon, and annihilate the mass in a way that yields usable energy?
I'd like to pose a sort of brain-teaser about Relativity and Mach's Principle, to see if I understand them correctly. I'll post my answer in rot13.
Here goes: Assume the universe has the same rules it currently does, but instead consists of just you and two planets, which emit visible light. You are standing on one of them and looking at the other, and can see the surface features. It stays at the same position in the sky.
As time goes by, you gradually get a rotationally-shifted view of the features. That is, the longitudinal centerline of the side you ...
Kids experiment with 'video playdates'
http://www.cnn.com/2010/TECH/innovation/06/11/video.playdate/index.html?hpt=Sbin
I'm not sure if it meets the Ponzi scheme model, but the problem is this: lots of students are going deeper into debt to get an education that has less and less positive impact on their earning power. So the labor force will be saturated with people having useless skills (given lack of demand, government-driven or otherwise, for people with a standard academic education) and being deep in undischargeable debt.
The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.
On top of that, to the extent that people do pick up on this, the sciences will continue to be starved of the people who can bring about advances -- this past generation they were lured away to produce deceptive financial instruments that hid dangerous risk, and which (governments claim) put the global financial system at the brink of collapse.
My take? The system of go-to-college/get-a-job needs to collapse and be replaced, for the most part, by apprenticeships (or "internships" as we fine gentry call them) at a younger age, which will give people significantly more financial security and enhance the economy's productivity. But this will be bad news for academics.
And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.
So the slack will have to be picked up by people "outside the system". Yes, they'll be starved for funds and rely on rich people and donations to non-profits, but they'll mostly make up for it by their ability to get much more insight out of much less data: knowing what data-mining techniques to use, spotting parallels across different fields, avoiding the biases that infect academia, and generally automating the kind of inference currently believed to require a human expert to perform.
In short: this, too, shall pass -- the only question is how long we'll have to suffer until the transition is complete.
Sorry, [/rant].
So what is the realistic alternative for those who have no other marketable skills, such as myself? (I specifically don't have a high school diploma, though I suppose it would be trivially easy to nab a GED.)