Open Thread: July 2009
Here's our place to discuss Less Wrong topics that have not appeared in recent posts. Have fun building smaller brains inside of your brains (or not, as you please).
Comments (235)
Wanting to think of apples already constitutes thinking about apples.
That's not entirely true: look up Wegner, "Paradoxical Effects of Thought Suppression", 1987.
If my own experience is typical, people don't usually think "I want to think about apples" unless it's part of a thought experiment or something. A behaviorist model might work here: You get a stimulus: something that activates your brain's concept of apples. It may be a sense impression, like seeing an apple, or it may be a thought: for example, a long train of thoughts about gravity and Isaac Newton that eventually arrives, by spreading activation, at "apples". This stimulus gets processed by various cognitive layers in various ways that are interpreted by your conscious mind as "thinking about apples."
If you want to not think about apples for some odd reason, the natural tendency is for this to activate your apple concept and cause you to think about apples. If you're smart, though, you'll try to distract yourself by thinking about oranges or something, and since your conscious brain can only think about one thing at a time, this will probably work.
The breakthrough for me was realizing that "I think about apples" is more a peculiarity of the English language than a good reflection of what is happening - about as useful as "I choose to produce adrenaline in response to stress". It suggests that there's someone named me with a flashlight illuminating certain thoughts at certain times because I feel like it. I find it less wrong (though still a little wrong) to imagine the thoughts percolating up of their own accord, and me as a spectator. This might make more sense if you meditate.
I'd really like to see your long chain of reasoning.
I hope LW has room for self-help/improvement as well as other topics.
I'd prefer that it stay focused on refining the art of human rationality.
And I'd like to know about separate quality place to discuss self-help/improvement, as the original poster suggests.
ISTM that instrumental rationality overlaps a great deal with self-help/improvement. We could avoid the latter only by restricting ourselves to discussing epistemic rationality.
I don't want the more practical or self-improvement posts to overwhelm the more academic ones, but I don't think the balance is too far off yet.
It's a subset of it. But there are a lot of other self-help topics that don't belong here except (as for any topic that isn't rationality) when there's a specific rationality angle being discussed: diet, physical fitness, personal organisation (i.e. things like GTD and 43 folders), and so on.
I would suggest that most people here are rational enough in terms of epistemic rationality, but their instrumental rationality is lagging behind, if I may call it this way. Hence the need for self-improvement stuff.
Once you reach a certain level of epistemic rationality, you realize that what you want next is not more refined epistemic rationality (that would be sub-optimal); you'd rather have... more winning.
I for one don't object to discussions of self-improvement per se, only insist that they meet the intellectual standards of LW.
That is precisely my problem with them: in my humble opinion, the discussions about self-improvement have not met the intellectual standards of the other discussions here. And since they have represented a significant fraction of all comments here, they have decreased the intellectual standards of the average comment enough to make me worry that the kind of participants I most want to interact with are leaving Less Wrong at a rate higher than the other participants are.
EDIT. It would ease my worries if they were easier to avoid: for example, it would ease my worries if there were fewer of them in the comment sections of posts with no obvious connection to self-improvement.
Personal Development for Smart People Forums
...doesn't strike me as overwhelmingly high-quality.
It looks cheesy, but I've heard quite a few people like it, and I've read some interesting posts on his blog.
I read the blog, which is good in parts, but I've never found the forums worth the time.
After seeing:
I'm gonna recant my previous post. Not worth your time (and when did dreaming and lucid dreaming become "paranormal"?).
Edit:
Sheesh, half of these forums are the same thing.
Association fallacy. Just because the forums contain sections abhorrent to you, doesn't mean other sections are just as bad. Also, 3 of 17 is hardly half.
Are you suggesting it doesn't indicate anything about the general reliability of the community? I think this is silly.
Sure it is, modulo hyperbole :-)
Yes. I'm saying that those forums are quite large, and people who post in one section are unlikely to post in other sections. We can rely on, say, people in tech section to know tech.
True; the largeness is a factor.
In the previous open thread, there was a request that we put together The Simple Math of Everything. There is now a wiki page, but it only has one section. Please contribute.
People who contribute to the wiki are my heroes.
Do you know who the real heroes are? The guys who wake up every morning, and go into their normal jobs, and get a distress call from the commissioner, and take off their glasses and change into capes and fly around fighting crime. Those are the real heroes.
I want to, and intended to, write a top-level post about it, but my internship plus studying math has taken up the majority of my time. I will try to squeeze some LW time in, though.
So, I'm looking for some advice.
I seem to have finally reached at that stage in my life where I find myself in need of an income. I'm not interested in a particularly large income; at the moment, I only want just enough to feed a Magic: the Gathering and video game habit, and maybe pay for medical insurance. Something like $8,000 a year, after taxes, would be more than enough, as long as I can continue to live in my parents' house rent-free.
The usual method of getting an income is to get a full-time job. However, I don't find that appealing, not one bit. I want to have lots of free time in which to use the things I buy with the money I would earn. I'd much rather just continue to spend down my savings than work more than two days a week at a normal job.
This suggests that instead, I should try to get a part-time job. Chances are, that would mean working in a local restaurant or store of some kind. Unfortunately, I tried one of these once before, and it didn't work out very well. I was hired to be a cashier at a local supermarket. To my great surprise, I didn't particularly mind the work, but on my third day after being hired, I was fired for insubordination. (I had a paperback novel with me, and I wouldn't stop reading it during periods when there were no customers.) I've also tried working for a temp agency. That didn't work out too well either. After completing my first assignment, I was told that the company I was contracted out to complained about my behavior (it's a long story), and so I would not be considered for any other assignments. In effect, I was fired from there, too.
As far as I'm concerned, the ideal source of income would be something with no set hours, that I could leave and come back to as I please. In other words, if I decide that I'd rather play video games for a month instead of earning money, it won't prevent me from earning money the month after that. Unfortunately, the only things I know of offhand that work like that are writing (which is extremely hard to make a living at, and requires a lot of time and effort anyway) and online poker (which I suck at). I'm lazy and undisciplined, and I'm not particularly interested in changing that, so I'm hoping to find a way to make money that works even if I don't try very hard at it.
In terms of skills and education, I have a B.S. from Rutgers University in computer engineering. I can program, but when I've tried programming as a job (as a summer intern), it turned into a Dilbert cartoon very, very quickly. Basically, I was given vague instructions, left on my own to do whatever, and instead of working, I mostly sat and surfed the Web while feeling guilty about not working. I don't think I want to do programming professionally. If I ever have to sit in another cubicle again, there's a good chance I'm quitting on the spot.
So, um... I need some suggestions on what to do. Bring on the other-optimizing?
Find odd programming jobs to do at home, like making websites for people or whatever. Get them at RentACoder or from people you know.
Well, if we really wanted to other-optimize we'd try to change your outlook on life, but I'm sure you get a lot of such advice already.
One thing you could try is making websites to sell advertising and maybe amazon clickthroughs. You would have to learn some new skills and have a little bit of discipline (and have some ideas about what might be popular). You could always start with the games you are interested in.
There's plenty of information out there about doing this. It will take a while to build up the income, and you may not be motivated enough to learn what you need to do to succeed.
Well, you really, really need to change your entire outlook and work on the laziness.
But if you're not going to do that: Have you tried betting in prediction markets like Intrade? If you're good at noticing things that are "obviously" going to happen but aren't correctly priced, or have enough money to afford to be right on average, that could work. It does require an initial investment though.
I've been on it since August and have played conservatively so I've only made about a 5% return. (Made small amounts on the Chrysler and GM bankruptcies.)
Intrade is an interesting suggestion, but I don't think he could make enough on it. He wants 8000 USD a year, and even if we assume he can get 10%, he'll still need 80k invested.
I don't think he has 80k to spare, and I have to wonder - is 10% feasible in the long run? I could see getting it in an election year easily, because the markets are so volatile and heavily traded, but what about off-years?
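The capital figures above can be checked with a line of arithmetic: capital needed = target income / annual return. A minimal sketch (the return rates below are illustrative; only the $8,000 target and the 10% case come from the thread):

```python
# Capital required so that capital * annual_return covers a target income.
def capital_needed(target_income: float, annual_return: float) -> float:
    return target_income / annual_return

target = 8_000  # USD per year, the figure from the thread
for r in (0.05, 0.10, 0.20):
    print(f"{r:.0%} return -> ${capital_needed(target, r):,.0f} invested")
# At 10%, this confirms the $80k figure above.
```

As the 5% row shows, the required stake only grows as the assumed return shrinks, which is the crux of the objection.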
Agreed. We should always be skeptical of an individual's ability to beat the market.
Well, I should clarify that I think a smart bias-educated person can beat the prediction markets fairly easily - I doubled my (small) investment in the IEM just by exploiting some obvious biases in the last presidential election, and I know I'm not the smartest bear around. My doubt is whether he can beat the market enough: any sum of money CronoDAS has is likely small enough he would need really absurd returns.
Are there differences between prediction markets and ordinary markets that make it easier for a "smart bias-educated person" to win fairly easily?
If you think its fairly easy, then I'd be curious to know whether you're putting your money where your mouth is... how much have you invested?
Yes. Prediction markets are far smaller, and have far less intelligence devoted to exploiting away their irrationalities.
Besides what Nick said, people seem to treat prediction markets more as entertainment than seriously. For example, Ron Paul or Al Gore should never have broken 1%, and Hillary shares were high long after it became obvious she wasn't going to make the nomination. These were all pretty clear to anyone suspicious of fanciful wouldn't-it-be-fun? scenarios and being biased towards what one would like to happen.
I started in the IEM with ~$20, and even after taking some heavy losses in 2004 and whatever fee the IEM charged ($5?), I still cashed out $38 in 2008. If you're interested in more details, see my http://www.gwern.net/Prediction%20markets
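For scale, the $20 → $38 anecdote can be annualized with the compound-growth formula. The four-year holding period below is my assumption (the comment only fixes the 2008 cash-out and mentions losses in 2004):

```python
# Back-of-the-envelope annualized return for the IEM anecdote:
# (end / start) ** (1 / years) - 1, assuming roughly four years held.
start, end, years = 20.0, 38.0, 4
annualized = (end / start) ** (1 / years) - 1
print(f"annualized return: {annualized:.1%}")  # roughly 17% per year
```

A healthy rate in percentage terms, but on a $20 stake it illustrates the point that absolute returns stay tiny without serious capital.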
I appreciate your careful documentation. And I thought these words of yours were wise: "I often use them [prediction markets] to sanity-check myself by asking 'If I disagree, what special knowledge do I have?' Often I have none."
Words are vague; let's use numbers. Say you were forced to invest $1000 in the prediction markets over the next year. What probability would you assign various outcomes: e.g. [-100%,-50%], [-50%,-25%], [-25%,-10%], [-10%,0], [0,10%], [10%,25%], [25%,50%], [50%,100%], [100%,200%], and [200%,1000000%]?
One must be wary of faux precision. But I think I would put the odds of >100% or <-40% at under 30%; I'd assign another 10 or 20% to a gain between 30% and 100%, and leave the rest to the range of small losses/gains.
The ten categories I suggested may be a bit excessive, but it would be much easier to judge if you were a little more precise. You acknowledge a non-trivial chance of losing a non-trivial amount of money. The confusion is that I thought your previous statement that a "smart bias-educated person can beat the prediction markets fairly easily" would preclude this.
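The bucketed estimates above can be collapsed into a rough expected return. Both the bucket midpoints and the exact probability split below are my own assumptions; the thread gives only ranges:

```python
# Illustrative expected return from hedged bucket estimates.
# Each entry is (assumed midpoint return, assumed probability).
buckets = [
    (-0.70, 0.15),  # big loss: part of the "<-40% or >100%" ~30% mass
    (1.50,  0.15),  # big gain: the other part of that mass
    (0.65,  0.15),  # gain between 30% and 100% ("10 or 20%")
    (0.00,  0.55),  # remainder: small losses/gains, netting to ~0
]
assert abs(sum(p for _, p in buckets) - 1.0) < 1e-9  # probabilities sum to 1

expected = sum(r * p for r, p in buckets)
print(f"expected return under these assumptions: {expected:.1%}")
```

The point of the exercise: a positive expected return is entirely compatible with a non-trivial chance of losing a non-trivial amount of money, which is what the comment above is probing.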
my question was about how much more efficient the stock market is, and why.
My answer to whether there are differences between prediction markets and ordinary markets was no, except inasmuch as the ordinary markets that are currently active are far larger (noise cancellation), more heavily traded (more information from more experts is represented already), and have had longer for biases to be exploited and so corrected for.
Efficiency.
No detailed suggestions, but one thing that comes out very strongly from what you wrote is that you don't want a job and a job doesn't want you.
This is not necessarily a bad thing.
Steve Pavlina wrote about why getting a job is a really bad idea; as for what to do instead, to make your way in the world, some of his other stuff may be of interest. (His other stuff also includes some things I think are woo, so don't take this as a pointer to a pure fount of wisdom.)
I second the suggestions made by others to look for freelance computing work. It sounds ideal for your situation, if you can learn to take orders from yourself, which it sounds like you won't from other people.
del
I'm only passingly familiar with Pavlina. Would you say the same thing about the advice of Tim Ferris?
del
Ok, someone tell me what the fuck this woo shit is!
Edit: Ok, pardon my language. That rules out my two first hypotheses. Anyone?
woo
Nice. Now I have a swear word that means something actually bad as opposed to taboo for doing in public.
In Hunting Fish: A Cross-Country Search for America's Worst Poker Players, Jay Greenspan conceives of the poker world as a giant inverted pyramid, with the fishiest (i.e., least skilled) players at the top pouring money down the pyramid toward the most skilled players at the bottom, such as Doyle Brunson and Phil Ivey.
If you channel the income in the right direction, it won't be useless.
I read jajvirta as saying that the occupation itself doesn't produce positive externalities for mankind, unlike productive work in physics research or something.
It's not only a lack of positive externalities, but the presence of negative externalities. Your gains are someone else's losses.
You provide entertainment to people. Both players chose to play so even if one player has a negative expectation in $ he might enjoy playing the game.
Productive work in physics could produce negative externalities if humanity cannot be trusted with new physics results. Hell, even math education could produce negative externalities!
When I played poker with my brother and his friends, I didn't think it was all that fun, and I didn't win very much either. I don't plan on going into online poker for real money any time soon.
Magic is my game. ;)
Could you play Magic professionally? What's in the way? Just a matter of startup money?
Well, there are a few things. I'm good at Magic, but I don't think I'm good enough to play professionally. I've never qualified for the Pro Tour. There seem to be lots of players that are better than I am, and you usually have to be world-class in order to make more than pocket change by playing in Magic tournaments. (In order to get better at Magic, the obvious next step for me to take is to try to seek out players in my area that already are world-class and learn from them.) Additionally, competitive Magic requires a continual investment in new cards; $1000 or more a year is quite possible, and travel costs and entry fees also eat up a large chunk of change.
The closest thing to online poker for Magic is, well, "Magic Online." At one point, I was playing it and turning a profit, at least in terms of the MTGO event tickets. However, turning MTGO event tickets into cash is difficult, as eBay and PayPal fees eat up a distressingly large percentage of what you can make by selling them, and if someone tries to cheat you, there's little recourse.
If you have any crafting skills, or if you can make food of some kind that's fairly portable and doesn't need refrigeration, and you have access to someplace where you can park with your wares and bother passerby, that might work. I once made about $30 sitting in a hallway at school for an afternoon selling muffins for a buck fifty each (I was on a muffin-baking spree, and my freezer was getting full), and my town has a fair number of street vendors in nice weather (I have bought things from them before). If the only problem with cashiering was that you weren't supposed to read, this doesn't seem like it would present any problems for you, since, who's going to stop you?
Some places might require you to have a permit; I'm pretty sure the street vendors have to get one every morning from town hall. Nobody bothered me when I sold muffins and I didn't have any kind of permission, though.
In the same vein, Etsy is a place to do that online (not so much with the food, though).
What are your thoughts on the recent "Etsy considered harmful" article?
It doesn't seem like she has a good grasp on what people are doing with Etsy and what it's about. If you want to make a 'profitable' business, you're already looking in the wrong place on Etsy. But if your time isn't worth much and you want to sell some crafts, it seems to work fine.
Wow! $30? For only an afternoon plus baking time?
quits day job
ETA: Okay, that was too snarky, even for me. Crono only wants to make $8000/year, and that's good enough for that goal. So, good suggestion.
Well, an afternoon, plus baking time for six basic muffins and variations, plus cooking time for the applesauce that went into one batch of muffins, plus the cost of all the ingredients, plus the time it took to write up little flavor labels for each muffin and individually wrap them in saran wrap... And transit time by bus to and from school... I baked the muffins for fun, though, and only decided to sell them when I did not have room to store them and wasn't eating them fast enough.
I mean, I'm not knocking it as a way to spend time, or I wouldn't have suggested it, but I'm not still doing it. I got thirty bucks, spent it on a used camera and a necklace, and called it good. And I had my laptop open the entire time and did exciting things like read Less Wrong, which is more or less what I would have been doing if I'd stayed home to goof off instead of selling muffins.
Another thing: Can you go over this one more time:
Something like $8,000 a year, after taxes, would be more than enough, as long as I can continue to live in my parents' house rent-free.
What made you decide you're okay with living with your parents for the rest of your life? Did you really give up hope or something?
Well, for one, I like the house I live in, and, for the most part, my parents let me do what I want. I just don't feel any particular need or desire to move out and, financially at least, I'm getting a great deal. Moving out would drive up my expenses enormously, because I'd no longer be able to use my parents' stuff, including their HDTV, their internet connection, and all those other things. (Incidentally, I have a first cousin once removed who never moved out of his parents' house. Unlike me, though, he does have a job.)
As for giving up hope, well, yeah, I basically gave up hope way back in 1997. I have a lot of trouble trying to imagine the kind of activity that I would find fulfilling and could realistically expect to get paid for. For the most part, I just try to get through life one day at a time, doing my best to anesthetize myself and not think about the future.
Crono, that's a horrible, horrible state to be in, and in asking for advice, you're asking completely the wrong question. For your own sanity, you need to find something you enjoy doing, not just something that can soften the pain for one more day.
I've been in your position before. In some respects, I still am. I thought I couldn't get a job and any job I'd get I'd be unable to handle. I had no connections, but finally was able to find one in my field.
Maybe a standard day job isn't right for you, but you need to look for something more ambitious than living with your parents, even if you enjoy the amenities. There are many things you can try. Just keep churning through them, or resign yourself to worsening sadness.
If you think you can do well at Intrade, I'll loan you the money if you can put up your karma as collateral.
That sounds flagrantly inappropriate. If you are confident that CronoDAS trying his hand at Intrade would be a good risk, why don't you just loan him the money and ask for interest or some percentage of what he makes? If you aren't confident that he'd do well enough to pay you back, isn't this just outright karma purchase?
Just a hedge against any akrasia that might pop up.
Replace "do well enough" with "make any effort at all", then.
"make any effort at all" =/= "no akrasia"
If you expect that he'd make some effort, and be defeated by akrasia, then clearly, you are not confident that he would do well.
My point isn't that, however. My point is that karma is inappropriate collateral, even if there were some easy way to move it from one person to another.
"Even if"? Are you serious?
The only thing inhibiting such a transfer is the very fact that those who consider it inappropriate would prevent it politically. Even then, if someone wants to and is uninterested in said social judgements beyond their political implications, then it would not exactly be hard to make the transfer subtly.
I wouldn't think that I know more than anybody else about most of the topics on Intrade, although betting against cold fusion seems like a good idea.
Well... I think I like playing Magic, or, at least, I like winning at Magic. (When I lose a lot, I have a tendency to take it pretty hard.) For some reason, video games start to become a lot less appealing when I don't have some homework to put off. But, yeah, to paraphrase something I once heard about drug addiction, I don't play video games to feel good, I play them in order to feel normal.
Let me put it this way:
If I won a huge lottery jackpot tomorrow and could easily afford to maintain my current lifestyle with no effort, independent of my parents' financial support, I still probably wouldn't move out, because I like living with my parents. What bothers me is that I'm dependent on them for financial support, so whenever they ask me to do something, there's always an undercurrent of "if you make us angry enough, you'll be out on the streets." (It still beats working, though.)
There's only one thing that I want that I can't get by living at home, and that's a cat. It might be a bit silly, but I feel as though if I had a cat, I wouldn't have to be lonely or sad any more.
One thing I think I should look into in more detail is tutoring; I did a lot of that informally in high school and I was a teacher's assistant of sorts for a math class during college. Does anyone here know anything about how to make money as a tutor? (I live within easy commuting distance to Rutgers University, so that might help.)
Try babysitting.
You can easily do that with a business, if you set it up correctly, and you are willing to spend money to make money. More to the point, though, you'd need to actually want to have a business, a bit more badly than you appear to want a job. ;-)
I've always heard that having a successful business is usually an awful lot of work, even more than being an employee. At least, that's what my father says, and he's almost always right.
Setting one up is. Having one is not necessarily the case.
So why aren't you asking his advice. ;-)
I said almost, didn't I?
It's a bit of a cliche for children of a certain age to say that their parents don't understand them when, in fact, they understand them perfectly well, but my father has admitted to me that he doesn't understand my feelings and behavior, so I'm not going to him for advice on how to live my life.
And you expect complete strangers to do better? I'm not sure that's rational.
Conversely, if you've adequately constrained the problem for us, surely you can adequately constrain it for him?
That's... a pretty good point, actually.
At least there's more of you, though; you might suggest something I haven't thought about before.
Perhaps it is possible that your parent(s) "don't understand you" but still internally expect to, and so do worse than someone who doesn't know you, or who knows you only from recent experience.
Freelance programming possibly?
Also, if you attend a lot of big Magic tournaments, it is pretty easy to make some money with smart trading and selling on eBay. Just pay attention to eBay values for cards. Also keep track of differing values of cards in different geographic areas.
Serious question: why? If there was a pill you could take that would magically make you disciplined and hard working, would you turn it down? The pill wouldn't make you unable to play computer games, or surf the web; it would just mean that if you said to yourself "for the next two hours I'm going to do X, without getting distracted by computer games or surfing the web" you would carry that intention out.
I tend to be lazy and undisciplined, but I also tend to find that even if your job doesn't really do anyone any good in the large, working at work is more fun than slacking off. I'm increasingly coming to think that the rewards I get when I'm lazy and undisciplined aren't up to much. What are the upsides, for you, of being lazy and undisciplined?
I've always thought of "discipline" as a bit of a rip-off. To me, "discipline" suggests "the willingness to do something unpleasant now, in exchange for a later reward." The problem with this is that, even though you do get the reward, you've spent all that time doing something unpleasant, when you could have been doing something pleasant - such as playing video games - instead. It doesn't seem like a good way to maximize "moments of pleasure" over the near future. Being lazy and undisciplined means I don't go off chasing future rewards that turn out not to be worth the trouble.
My mom says that, as a young child, I had a "low frustration tolerance," which might explain a lot. I suspect that "doing something I don't feel like" feels worse to me than it does to most people, although I can't prove this. In college, I once started to feel physically ill whenever I looked at my "Engineering Mechanics - Statics" textbook. There was something deep inside me, screaming, "This is awful! Avoid this!" whenever I was confronted with my homework. I only ever got work done when I became more afraid of not doing it than I was of doing it, if that makes any sense.
Not to play psychiatrist, but this sounds like a more likely explanation for your predicament than the hypothesis of contentment. If you could take a pill that would remove your anxiety when you faced the prospect of doing something that appears difficult or that you might be judged on, would you take that pill?
ETA: This is starting to remind me of Robin Hanson's recent post.
You know, I just might. The "don't get frustrated" pill seems more in line with my preferences than a "be willing to play hurt" pill. The last time I tried - well, "was pushed into" is more accurate than "tried" - filling out a job application, I got frustrated halfway through and stopped.
Incidentally, I'm a lot better at getting things done when I have someone to do those things with, but there is one big exception. I have a great deal of trouble at working alongside one of my parents. Nothing kills my intrinsic motivation to do something as effectively as one of my parents telling me I need to do it.
Another note: I've generally found that, when I "work hard" at something, I'm usually reasonably successful at it. By simply applying enough effort for a long enough period of time, I can brute force my way through many tasks that are really, really difficult, such as learning to play an extremely difficult song on the piano, beating the notoriously difficult Battletoads on the NES, or even just cramming for an exam by doing several months' worth of suggested problems in the space of a week or two. The difference between what I think of myself capable of doing with enough effort and what I actually achieve contributes to thinking of myself as "lazy." I have a strong preference for avoiding anything that feels like it takes some kind of an effort to do; in other words, something that feels frustrating. (Interestingly, difficult video games often don't trigger this reaction. I like games that show me no mercy, that let me push myself to my limits and make even the little successes feel like an accomplishment.)
The only emotion that I've found that really motivates me to do things I don't normally do is, oddly enough, anger. If I get sufficiently annoyed with a problem, I'll go to absurd, ridiculous lengths to solve or fix the problem. A trivial example of this is the time I got annoyed at the dirt on the floor in my room sticking to my feet, so I went and got the broom to sweep it. A less trivial example concerns one of my courses at college. In that course, I had to "design" digital circuits using Verilog and an automatic hardware generator. I hated doing the work, would only get started reluctantly, and could never focus on it. This one time, however, the Verilog code worked just fine, but the hardware generator gave me a design that kept giving me errors. Instead of getting frustrated, I got angry. How dare this program not work! I ended up spending several hours in the computer lab making a furious, focused effort to understand what was going on and fix it. Which I did.
That's really interesting... I think I understand you better now. I think that, because of this recurring anxiety and frustration, you've felt for a long time that your options were:
As per the second pill example, I think this is a false dichotomy, but a universal one; people take their emotional reactions for granted, and don't often imagine that it could be possible to feel differently about something that persistently troubles them. (Of course, it doesn't seem possible to just feel differently by a direct act of will, which is all that most people ever think of to try.)
Given that you'd take the second pill, though, you can now imagine a third alternative:
If that sounds appealing to you (and of course it doesn't mean you'll have to end up doing what others want you to do; it just means you'll be able to genuinely explore some new options), then it might be time to start carefully analyzing why you get these feelings, and whether there's something you can do to change that...
Thank you for your help. I'll have to let this stew in my subconscious for a while, then get back to you.
In the book "A Theory of Fun for Game Design" by Raph Koster (of possible special interest to a game nerd), he basically defines "fun" as "learning without pressure". Learning, in this context, means improving skills and responding to a challenge where there is no extrinsic consequence for failure.
Your desire for a job you can "take or leave" on a day-to-day basis, and your anxiety about homework, fit well with (but are more extreme than, I think) my own experience. If I were to diagnose myself with something (which I am loath to do) it would be some type of anxiety disorder (I have a friend with similar issues who was so diagnosed, medicated, and actually seems to be doing better, although it's difficult to separate cause from effect here).
See if you relate to the following anecdote: in grade 9 I entered a special school program which was kind of like correspondence (work through assignments at your own pace) except that it was held at a regular high school so that students could socialize, have progress monitored by and access to teachers, and take supervised written tests whenever we were ready. Sounds pretty great compared to normal classes? It was. But, my first year (grade 9) I got rather behind in my work, in more than one subject, and started getting concerned reports home. Even though the work I had to do was obviously within my capabilities, I found it very difficult to face. Eventually I had to bite the bullet and finish everything in one big cram at the end of the year, and I pulled OK grades, but I stressed out endlessly over what was really a trivial amount of work (which I recognized even at the time).
The following year (grade 10) I hit the ground running in September. By mid-October I had finished Math 10. I got similarly ahead in other subjects, and the further ahead I got, the easier it was for me to work more and more. (Only to a point, though: I also had a defiant self-image of rational laziness, so I didn't want to do more than the minimum amount of work, even if I could do it faster/better.) So I never skipped a grade; I would just get ahead by a few weeks/months and then... yup, play Magic (the original Beta/Unlimited!) and basically fuck around with my friends, computer, porn, etc.
More recently, as a PhD student, I still encounter the same thing. When I've fallen behind on a project, often due to unrelated and mild doubts/laziness/underestimation, I become more and more unwilling to face work the farther behind I get. OTOH if a colleague comes to me with a problem which I am not "supposed to be" working on, I become immediately energized. Of course, I allow myself to work on side projects less and less the farther "behind" I am on the projects I am assigned to.
I have finally seen the pattern, maybe too late not to suffer serious damage in my "career". It is largely this: I hate exposing myself to the possibility of public failure. For me, the "consequence" which makes learning/trying/failing/mastering "not fun" is simply having to admit that a) I want to get/achieve/do/win at X and b) I failed (in this instance) to get/achieve/do/win at X. When I am doing something optional, and where I am not expected to succeed (e.g. because it's someone else's problem and any contribution I make will be accepted with grateful surprise), I can be extremely goal-directed and work with intense focus. In the very short term, fear of missing a hard deadline (mainly in undergrad) can also make me work til the break of dawn with amazing concentration, much as you described anger doing for you.
I'm not suggesting that you have exactly the same anxieties that I do. But recognizing what it is that separates the activities you can focus and work on from those you can't may lead to surprising revelations about yourself, and may even suggest ways to find a job that's a good fit for your temperament.
Sorry if this was a bit rambling and self-indulgent.
This, too, makes a lot of sense.
You might want to look into setting up a business in Second Life. If you learn the programming language it uses, you can find work fairly easily writing custom code for people, and/or make various things to sell, and it's all on your terms.
If you're interested, and want help getting started, my screen name there is Adelene Dawner.
Do you program for fun?
No.
Unless that changes, then, I wouldn't particularly recommend programming as a job. I quite like my programming job, but that's because I like programming and I don't work in a Dilbert cartoon.
A retail job other than the supermarket might be interesting. Alternately, take a notepad instead of a novel and doodle/write instead of read when there are no customers.
I don't know if your BS in comp engineering includes other aspects of computer work than programming, and I don't know if people hire for Configuration management/process control or reliability testing right away. If the answer is yes to both, then those jobs are much more structured than "make the computer do this 'kay by." I've never had a programming job where I didn't have to report to CM/process often enough that I felt I could get away with slacking. Lots of itty bitty crunch times.
Some questions about the site:
1) How come there's no place for a user profile? Or am I just too stupid to find it? I know there was a thread a while back to post about yourself, and I joined LW on facebook, but it would be much easier for people to see a profile when they click on someone's name.
2) What's with the default settings for what comments "float to the top" of the comment list? Not to whine or anything, but I made a comment that got modded to 11 on the last Perceptual Control Theory thread, followed up on by a few other highly-modded comments, and a rather fruitful discussion that involved input from someone who had tried some of the "conclusive" demos pjeby linked to. But the thread got buried under the rest.
Userpages are in the works, supposedly.
They're on the list, but no-one's working on them at the moment. It should be pretty easy to link up the wiki user pages. Open source contributions are welcome.
Regarding 2, I think the default setting (Popular) is to display comments as a function of karma and time since posting. As comments get old, newer comments float to the top even if the older ones have some positive karma. If some comment has very high karma, I guess it outweighs the time constraint and stays at the top.
... and the ageing function is tuned for Reddit traffic volumes, so on this site, everything ages too fast and can't stay in popular for very long at all. Open source contributions to fix this are welcome.
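To make the "karma discounted by age" idea concrete, here's a sketch of the general shape such a scoring rule takes (illustrative only; the half-life parameter and the exponential form are my assumptions, not the actual Reddit/LW code):

```python
import math
import time

def popularity(karma, posted_at, now=None, half_life_hours=12.0):
    """Illustrative 'Popular' score: karma discounted by comment age.

    Older comments decay exponentially, so a fresh comment with modest
    karma can outrank a day-old one with more -- unless the old one's
    karma is high enough to outweigh its age.
    """
    now = now if now is not None else time.time()
    age_hours = max(0.0, (now - posted_at) / 3600.0)
    return karma * 0.5 ** (age_hours / half_life_hours)
```

With a 12-hour half-life, a day-old karma-11 comment scores 2.75, so a brand-new karma-4 comment outranks it; tuning the half-life for this site's slower traffic is exactly the open-source fix being invited.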
My previous attempt at asking this question failed in a manner that confuses me greatly, so I'm going to attempt to repair the question.
Suppose I'm taking a math test. I see that one of the questions is "Find the derivative of 1/cos(x^2)." I conclude that I should find the derivative of 1/cos(x^2). I then go on to actually do so. What is it that causes me (specifically, the proximate cause, not the ultimate) to go from concluding that I should do something to attempting to do it?
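(For the record, the derivative itself is a one-step chain-rule computation, writing $1/\cos(x^2)$ as $\sec(x^2)$:)

```latex
\frac{d}{dx}\,\frac{1}{\cos(x^2)}
  = \frac{d}{dx}\,\sec(x^2)
  = 2x\,\sec(x^2)\tan(x^2)
  = \frac{2x\,\sin(x^2)}{\cos^2(x^2)}.
```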
The Perceptual Control Theory crowd here (pjeby, RichardKennaway, Kaj) will probably respond with some kind of blackbox control systems model.
I don't have a complete answer, but I can tell you what form it takes.
The quantum states in your body become entangled with a new Everett branch, branches being weighted by the Bohr probabilities. This is what your choice to find the derivative (or not) feels like. These new, random values get filtered through the rest of your architecture into coherent action, as opposed to the seizure you would have if this randomness were not somehow filtered.
I know, not much at the nuts-and-bolts level, but I hope that provides a good sketch.
In a deterministic classical universe, all can be the same for minds and beliefs and decisions as it is in our world. Any good argument should generalize there.
"Entanglement" is the black box there, and PCT, as set out in the materials I've linked to in past posts, is the general form the real answer will take.
The more general answer, but too general to be of practical use, is the one that several people have given already. At some point the hardware bottoms out in doing the task instead of thinking about it.
What kind of answer do you expect? For example, the obvious answer is "the algorithm implemented in your mind causes that to happen".
In what sense can you be said to conclude this? When I took tests, my mind went straight from reading questions to trying to answer them without stopping to consciously conclude anything. At no point was my attention fixed on what I should do; it was fixed on doing.
I think you are asking the question that is a major theme of Hofstadter's book Gödel, Escher, Bach. To be more specific, he raises the question humorously on page 461, in the Birthday Cantatatata..., to motivate Chapter XV: Jumping out of the System.
He returns to the question in Chapter XX, and on page 685 offers a quotable answer.
Another way to look at the problem is to ask what kind of life experiences would give you the anchors in reality to dissolve the question? What works for me is understanding how computers work from gate level to interpreters for high level languages. How does (eval '(eval '(+ 2 2))) go from concluding it should evaluate (+ 2 2) to attempting to do it?
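To make that last question concrete, here is a toy evaluator (a Python sketch, handling just the handful of forms needed for the example, not any real Lisp implementation). The point it illustrates: "concluding it should evaluate (+ 2 2)" and "attempting to do it" are the same mechanical dispatch step, bottoming out in primitive addition.

```python
def lisp_eval(expr):
    """Toy evaluator for a scrap of Lisp: numbers, (quote x), (eval x), (+ a b).

    There is no gap between 'deciding to evaluate' and 'evaluating':
    dispatch just falls through to the code that does the work.
    """
    if isinstance(expr, (int, float)):   # a number evaluates to itself
        return expr
    op, *args = expr
    if op == "quote":                    # (quote x) -> x, unevaluated
        return args[0]
    if op == "eval":                     # (eval x) -> evaluate the value of x
        return lisp_eval(lisp_eval(args[0]))
    if op == "+":                        # (+ a b) -> primitive addition
        return lisp_eval(args[0]) + lisp_eval(args[1])
    raise ValueError(f"unknown operator: {op}")

# (eval '(eval '(+ 2 2))), with quoted forms written as ("quote", ...)
nested = ("eval", ("quote", ("eval", ("quote", ("+", 2, 2)))))
```

Tracing `lisp_eval(nested)` shows two layers of "concluding" (each `eval` unwraps a quote) and then the doing, which is just `2 + 2`.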
That's a good question, judging by the number and variety of replies.
I'd suggest that in a way, things go the other way around. Instead of your concluding you should do something causing you to do it, instead I think you are (already) aiming to do something, and that drives you to figure out what you should do. The urge to do causes figuring out what to do, rather than the figuring causing the doing.
But that's a little over-simplified, as discovered by people trying to program robots that interact with the world. Deciding what to do at any given moment is distinctly non-trivial.
It is an interesting mental exercise, when you are about to do something but have not yet begun it, to try to introspectively perceive the moment of decision. I find it's like trying to see the back of my own head.
At the risk of providing a non-answer I'll say: Operant conditioning.
The test problem, the solving of it, and getting an answer correspond to a light coming on, pressing a lever, and getting food.
We've long since been trained that solving problems in that context builds up token points that will pay out later in praise and promises of money.
Presumably this training translates fairly well to real world problems.
Indeed, that's the conclusion I came to. What I wonder now is how we operant-condition ourselves without just reinforcing reinforcement itself. Which, I suppose, is more or less precisely what the Friendly AI problem is.
Suppose you found yourself suddenly diagnosed with a progressive, fatal neurological disease. You have only a few years to live, possibly only a few months of good health. Do the insights discussed here offer any unique perspectives on what actions would be reasonable and appropriate?
...sign up for cryonics?
Except you presumably won't be able to get life insurance.
Okay, sign up now.
If the sudden addition of an apparent deadline to your life changes the game completely, isn't it likely you've been playing the game wrong?
You always knew about death.
Your probability estimates about how many years of health you'll have have changed considerably, so you wouldn't expect to continue with the exact same behavior.
For instance, if you've been working on something that would take you several more years of good health to accomplish, you might want to spend a month finding someone to carry it on for you who's similarly motivated and making it easier for them to carry it on.
Or you might decide that you don't care about that long-term goal enough to justify the time and effort it would take away from other things that are more important for you to do in your life, but that you would have spread out over a longer timespan if you were going to live longer and accomplish a number of less-important goals or ones that are only achievable if you have more time to work on them.
You might also realize that the things you want are considerably different from the ones you thought you wanted. Maybe that was previously "playing the game wrong", but I can't see how a human could rule out the possibility of having a change in outlook/values/expectations after getting such news. That change may affect basic motivations, and it may shift attention from old lines of thinking, which they may have tried to make very rational, to ones they may have been neglecting--and I seriously doubt anyone lacks these. Shifts in where they reason and rationalize.
/shrugs
One question that arises is a fundamental issue of motivation. Is it rational, for example, to have a list of "things to do before I die"? Especially if you believe that it is likely that you will not remember whether you did them or not, after you die? If you find out you're going to die in a couple of years, does it make sense to try to cram as many items from your list as possible in that limited time? What would be the point? Indeed, what is the point of any action?
Ultimately, what is the source of our motivation, if we know that after we die we won't remember what happened? It's one thing when death is off in a nebulous future, but when it is relatively soon and immediate, there is going to be little or no time to enjoy an accomplishment.
It seems reminiscent of the difference between the iterated and one-shot prisoner's dilemma. A long and somewhat indefinite life span is like the iterated PD, in that we expect to experience a wide range of effects and impacts from our actions. A short and more definite life span is like the one-shot PD, with only limited and short-term effects. Perhaps another way to think of it is that our normal actions affect our future selves, while with terminal illness, there are no future selves to worry about.
It is rational to have a list of things to do before you die if you have preferences over configurations of external reality outside the small part of external reality that causes your internal experiences.
Right, that makes sense, but most things I've seen on such lists are more focused on personal experiences that would be enjoyable and/or challenging. The first Google hit I got was http://brass612.tripod.com/cgi-bin/things.html and it has the typical things: skydive, travel, eat rare foods, have adventures. Some of them are focused on other people or leaving the world a better (or at least different) place but most of them seem to be for the purpose of giving yourself happy memories.
Is doing this irrational? Or at least, would it be irrational to pursue such activities if you knew that you weren't going to live long afterward?
Turning it around, suppose there were an adventure which would be unique and exciting, but also fatal? Consider skydiving without a parachute, perhaps into a scenic wilderness. Clearly you won't remember the experience afterwards, you'll have only those few minutes. Should the discovery of a shortened lifespan make this kind of adventure more attractive?
Haha, if you knew you were going to die without recovering enough health to do anything else of value--only, perhaps, drain your family's bank accounts and emotions, along with hospital resources, hooked up to machines--then that kind of adventure SHOULD be more attractive.
I think you're underestimating the value of an experience as you live it. I would think that the value of a happy memory is only a small fraction of the value of a good experience, and a lot of the value of the memory is in directing you to seek out further good experiences and to believe in your own ability to engage in activities with good outcomes. But these positive benefits are only valuable because while you keep the happy memories in mind, you engage in further positive experiences.
Just because you don't remember something doesn't mean it disappears. It's still there--just at a certain position in time. You seem to be thinking, "Well, I can't remember this now, I can't remember the happiness, therefore the happiness I experienced doesn't exist." But remember there won't be any you to forget how good skydiving to your death felt in retrospect, and there WILL be a you at the time of diving to feel gloriously good--as opposed to the you who could feel miserably bad over a protracted deathbed.
But I would think the most important things to do would involve loved ones--either providing for them after you're gone, or bonding as much as you can with them while you're around. That may make things more painful, but at least you'll know you had an impact on the world and could convey your ideas and values--which most of us consider an essential part of ourselves. Other priorities for extending your influence might include writing memoirs or giving and recording a talk. You might also have something you need to do--like go see something for yourself--so you can HAVE an idea or position to record and influence others with after your life.
And certainly things like saving for your retirement would become unimportant, so your overall priorities would shift.
[Edited the sentence that starts "But remember there won't be any you..."]
What are some suggestions for approaching life rationally when you know that most of your behavior will be counter to your goals, that you'll know this behavior is counter to your goals, and you DON'T know whether or not ending this division between what you want and what you do (ie forgetting about your goals and why what you're doing is irrational and just doing it) has a net harmful or helpful effect?
I'm referring to my anxiety disorder. My therapist recently told me something along the lines of, "But you have a very mild form of conversion disorder. Even though your whole body gets paralyzed, whereas you could function with just a hand paralyzed, most people with the disorder aren't aware that it has a psychological cause, and they worry about it all the time, going to doctor after doctor to try to get a physical cure." It doesn't FEEL mild when I've been barely able to move for eight hours and finally get going enough to log onto the computer and waste time browsing online. Insight can be painful when you have so long to dwell on it.
My current thinking is that the best way to get what I want out of life is to get treatment, which I am doing, and to keep an optimistic view of my ability to be non-disabled. It's gotten a lot better, but I still spend a considerable amount of time making very bad decisions, or having the anxiety make them for me.
What are some examples of recent progress in AI?
In several of Eliezer's talks, such as this one, he's mentioned that AI research has been progressing at around the expected rate for problems of similar difficulty. He also mentioned that we've reached around the intelligence level of a lizard so far.
Ideally I'd like to have some examples I can give to people when they say things like "AI is never going to work" - the only examples I've been able to come up with so far have been AI in games, but they don't seem to think that counts because "it's just a game".
The Roomba is an example that seems to get a bit more respect (although it seems like a much simpler problem than many game AIs to me), but after that I pretty much run out of examples. Maybe I'm just not thinking hard enough because a lot of AI isn't called AI when it becomes mainstream?
Examples that are more 'geeky' would also be good for me, even if they would be dismissed by non-geeky people I meet.
I see 7 upvotes but no answers. Should I conclude that even those who think AI is attainable find nothing to boast of in the record so far?
I usually cite the DARPA Grand Challenge, which I gather was won using such advanced modern methods as particle filtering (a Bayesian technique).
Last time I read much about computer chess, the better programs were still relying primarily on brute-force search with some minor algorithmic optimizations to prune the search space, together with enormous databases for openings and endgames. Are there actually chess programs nowadays that deserve to be called intelligent?
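For concreteness, the brute-force core those programs are built around is tiny. Here's a minimal alpha-beta sketch (a toy game tree of nested lists, not a real engine; the real ones add move ordering, transposition tables, and those huge opening/endgame databases on top of this):

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Plain alpha-beta search over a toy game tree.

    Leaves are numbers (static evaluations); internal nodes are lists
    of children. This is 'brute-force search with pruning': effective,
    but with no model of the game beyond the leaf evaluations.
    """
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent would never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```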
Your first point -- that you can be easily killed or checkmated by a sufficiently powerful program regardless of how it is implemented -- is true but irrelevant: the question was not whether the program is powerful and effective (which I would not dispute) but whether it deserves to be called intelligent. You can say that whether it is intelligent or not is unimportant and that what matters is how effective it is, but it is wrong to conflate the two questions and pretend that an answer for one is an answer for the other, unless you are going to make an explicit argument that they are isomorphic or equivalent in some way.
I would argue that a problem domain where brute-force search with simple optimizations actually works extremely well is a problem domain that does not require intelligence. If brute-force search with a few optimizations is intelligent, then a program for factoring numbers is an artificial intelligence.
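To underline how mechanical such a program is, here's the whole thing (brute-force trial division; a sketch, and hopeless for cryptographically large numbers, which is rather the point):

```python
def factor(n):
    """Brute-force integer factorization by trial division.

    Entirely mechanical exhaustive search -- the kind of program that
    works fine on small inputs without anything we'd want to call
    intelligence.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors
```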
I don't have a criterion for intelligence in mind, but like porn, "I know it when I see it". We might disagree about edge cases, but almost all of us will agree that a number factoring program isn't "intelligent" in any interesting sense of the term. That's not to say that it might not be fantastically effective, or that a similarly dumb program with weapons as actuators might not be a formidable foe, but it's a different question to that of intelligence.
And the reason for that is simple - the real working definition of "intelligence" in our brains is something like, "that invisible quality our built-in detectors label as 'mind' or 'agency'". That is, intelligence is an assumed property of things that trip our "agent" detector, not a real physical quality.
Intuitively, we can only think of something as being intelligent to the extent that it seems "animate". If we discover that the thing is not "animate", then our built-in detectors stop considering it an agent... in much the same way that we stopped believing in wind spirits after figuring out weather, or that our ancestors needed to distinguish an accidental branch movement from the activity of an intelligent predator-agent.
So, even though a person without the appropriate understanding might perceive a thermostat as displaying intelligent behavior, as soon as they understand the thermostat's workings as a mechanical device, the brain stops labeling it as animate, and therefore considers it to be not "intelligent" any more.
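To make the thermostat point vivid: the device's entire "behavior" fits in a few lines (a bang-bang controller sketch with made-up setpoint and deadband numbers). Once you've seen this, it's hard for the brain to keep crediting the thing with agency:

```python
def thermostat_step(temp, setpoint=20.0, hysteresis=0.5, heater_on=False):
    """One tick of a bang-bang thermostat: the whole 'mind' is a
    comparison of the sensed temperature against a reference value,
    with a small deadband to avoid rapid switching."""
    if temp < setpoint - hysteresis:
        return True                # too cold: heater on
    if temp > setpoint + hysteresis:
        return False               # too warm: heater off
    return heater_on               # within the deadband: keep current state
```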
This is one reason why it's really hard for truly reductionist psychologies to catch on: the brain resists grasping itself as mechanical, and insists on projecting "intelligence" onto its own mechanical processes. (Which is why we have oxymoronic terms like "unconscious mind", and why the first response many people have to PCT ideas is that their controllers are hostile entities trying to "control" them in the way a human agent might, rather than as a thermostat does.)
So, AI will always be in retreat, because anything we can understand mechanically, our brain will refuse to grant that elusive label of "mind". To our brains, something mechanically grasped cannot be an agent. (Which may lead to interesting consequences when we eventually fully grasp ourselves.)
You are wrong. Factoring large numbers has never been considered the pinnacle of true intelligence. Find me a reference if you expect me to believe that circa 1859 something so simple was considered the pinnacle of anything.
I completely agree about the moving goalposts critique, and I think there is good AI and has been great progress, but when you find yourself defending the idea that a program that factors numbers is a good example of artificial intelligence, alarm bells should start ringing, regardless of whether you are talking about intelligence or optimization.
You said it was "considered to be the pinnacle of intelligence" 150 years ago, that is, almost 150 years after calculus was invented, and now you're interpreting that as meaning "a person on the street would think that intelligent." And you said I was moving goalposts?
It is a bad example, but it's a bad example because we could explain the algorithm to somebody in about 5 minutes.
I don't think we disagree. I just think that if chess programs are no more sophisticated now than they were 5 or 10 years ago, then they're poor examples of intelligence.
What's a good procedure for determining whether or not to vote up a comment?
There are many. For a collection of data points on how people tend to do it, look at this post.
In general, I try to upvote if I think the author made a good new point in the discussion (or made an old point in a better way). I also vote up humorous comments if I find them funny and if they don't detract from the surrounding conversation.
I try to reserve downvotes for occasions where the author is not just espousing a conclusion that I think wrong, but when they are making a rationalist mistake in the particular comment:
On the subject, newcomers should be aware that there's some karma-based limit on how many downvotes you can make (to prevent trolls from mass-downvoting everyone they disagree with, etc), but I think it's rare to hit that limit.
EDIT: By the way, welcome to Less Wrong! Check out the welcome thread if you haven't already. (One point it doesn't make: unlike most blogs, you can comment on older posts and still get a conversation, because many of us regularly follow the comments feed.)
If you think it's more worth reading than the average in that thread, vote it up. If you think it's less worth reading than the average in the thread, vote it down. If you want to conserve peoples' feelings, vote down less often than these instructions suggest.
Sorry, I sort of asked this question in a thread here, but I'm interested enough in answers that I'm going to ask it again.
Does it seem like a good idea for the long-term future of humanity for me to become a math teacher or producer of educational math software? Will having a generation of better math and science people be good or bad for humanity on net?
If I included a bit about existential risks in my lecturing/math software would that cause people to take them more seriously or less seriously?
In the unlikely event that you end up significantly improving the amount of mathematical expertise in humanity, you should be very pleased with yourself.
It's definitely not a bad cause. You should do it if it's something that would engage and satisfy you. If you turn out not to be suited for it, no harm done; find something else you're good at.
So you're not much afraid that people will develop artificial general intelligence before figuring out how to make it friendly?
It's fine to include some low-probability catastrophe risk management in your overall planning. But are you considering all the possible catastrophes, or just one particular route to unfriendly AI (one unlocked by your marginal recruitment of mathematically capable tinkerers)?
Wouldn't furthering our mathematical and technological prowess as soon as possible mitigate many catastrophes? See the movie Armageddon, for instance :)
Maybe general AI is inevitable even at current computing power, so long as a small, persistent cult keeps at it for a few hundred years. If so, I think having more mathematical facility gives a better chance of managing the result.
Real general AI (all of a sudden, self-optimizing with increasing speed, with limits way above human) is 99.999% not implemented in the next 10 years, at least. The only reason I consider it so likely (and don't feel comfortable predicting, say, 50 years forward) is the possibility of apparent limits in computing hardware being demolished by some unforeseen breakthrough.
When I read this paper, the risks seem to be on balance increased rather than decreased by greater human intelligence.
The median LWer's guess for when the singularity will occur is 2067.
Improving math education is a problem I'd really like to work on but it seems likely to be harmful unless I can include an effective anti-existential-risk disclaimer. Even if I'm guaranteed to be relatively unsuccessful, I don't want a big part of my life's work to be devoted to marginally increasing the probability that something really bad will happen.
I skimmed the paper. It's interesting. Thanks.
I still don't think you should curtail your math instruction, even if you do have a large impact on the course of humanity, in that millions of people end up more capable in math. I think you'd increase our resiliency against existential hazards, if anything.
But you're welcome to evangelize awareness of X on the side. I would have liked to hear my math teachers raise the topic - it's gripping stuff.
Eliezer_Yudkowsky said:
This comes from a post from almost a year ago, Excluding the Supernatural. I quote it because I was hoping to revive some discussion on it: to me, this argument seems dead wrong.
The counter-argument might go like this:
Reductionism is anything but a priori logically necessary-- it's something that must be verified with extensive empirical data and inductive, probabilistic reasoning. That is, we observe that the attributes of many entities can be explained with laws describing their internal relations. Occam's razor tells us that we don't need both the higher- and lower-order models to actually exist, so we unify our theory. The repeated experience of this success leads us to extrapolate that this can be done with all entities. Perhaps some entities present obstacles to this goal, but we then infer that their irreducibility is in the map (our model for understanding them), not in the territory (the entity itself). But again, we infer this by assuring ourselves that they just haven't been explained YET--which implies it's reasonable, based on inductive reasoning from the past, to assume that they will be reduced. Or we describe some element of the entity's complexity that makes "irreducibility in practice" something to be expected. We therefore preserve its reducibility in principle.
But we do not (it seems to me) merely exclude its irreducibility based on a priori necessity. Why would we? It's perfectly conceivable. Eliezer describes in an earlier post the "small, hard, opaque black ball" that is a non-reductionist explanation of an entity. He claims it's just a placeholder, something that fools us into thinking there's a causal chain where nothing has actually been clarified.
But it's perfectly conceivable that such a "black ball" could exist. I suppose there's no way to prove that it's irreducible, and not just unreduced as of yet, in the same way that one can't prove a negative. But this just presupposes that the default position ought to be reductionism. We should assume innocent until proven guilty. But which is innocent in this case: reducible or non-reducible?
So what if we come across something that appears to be a "black ball"? We attempt with all our mental and technological acuity to analyze it in terms of more fundamental laws, and every attempt fails. I would argue this is a good example of empirical evidence against materialist reductionism. We indeed have an entity that obeys laws which we can describe and predict--it just has laws that can't be reconciled with the physical laws of everything else, and when interacting with anything else, violates them.
Occam's razor is indeed strong here: we recognize that, given the faintest hope of reduction, we should throw out irreducibility in favor of having as few types of "stuff" as possible. This happens in the case of "elan vital." But it seems perfectly conceivable to me that there might be an entity that's truly a black ball.
Now this seems so massively incorrect that I fear I'm misunderstanding Eliezer. Does anyone have any feedback? I'd love to make a post about this, once I generate some karma.
I didn't get the 'and so' above at first, but I think it makes sense for the following reason: you can only ever "construct models made of interacting simple things" (possibly elaborated upon and abstracted to such an extent that they no longer seem simple or physical) in that universe because any model you could possibly make in that universe would be causally determined by and entangled with the quarks in your brain. The verbalization and high-level understanding of the model is just another way of explaining what is going on with the quarks in your brain (it explains nothing additionally), and so whatever the 'irreducibly mental' things in your model are, the chain of causal unpacking and explicating ultimately bottoms out with descriptions of quarks, etc., by hypothesis. When you think "non-reductionist", there is a purely reductionist explanation of what you are thinking. If there is just one level, then the explanation for everything is on that level or can be reduced to that level, so you can't concretely envision, as Eliezer says, something that can't be reduced.
I wish I had time to make this clearer, but I don't have any more time today.
I'm pretty sure that just can't be right. (His argument, that is. I think your interpretation of it is dead on.) We are not limited to imagining the sorts of things our brain is causally determined by. And the way you just put it seems completely backwards. Even if everything reduces to quarks, it's only in principle--our brains are hard-wired to create multiple levels of models, and could never conceive of an explanation of a 747 in terms of quarks.
Look at it this way. Can a painting have a subject? Can it be "about" something? Of course. Certainly there's nothing supernatural about this, but there's also nothing legitimate on the level of quarks that could be used to differentiate between a painting that has a subject and a painting that is just random blobs. I can imagine, after all, two paintings, almost identical in their coordinate-positioning of quarks, which have completely different subjects. I can also imagine two paintings, very different in terms of coordinates of quarks (perhaps painted with two different materials) which have the same subject. So while everything reduces down to quarks, it's the easiest thing in the world to explain a painting's about-ness on a separate level from quarks, and completely impossible to envision an explanation for this about-ness in terms of quarks.
I'm just not sure what about a "black ball" misses the mark of conceivability.
This is a good example of how the "natural" concepts are actually quite elaborate, paying utmost attention to tiny details that are almost invisible in other representations. But these details are in fact there, in the territory. The fact that they are small in one representation doesn't belittle their significance in another representation. And the fact that one object is placed in one high-level category and a "slightly" different object is placed in another category results from exactly these "tiny" differences. You can't visualize these differences in terms of quarks directly, but in terms of other high-level categories it is exactly what you are doing: keeping track of the tiny distinctions that are important to you for some reason.
That sounds right, but it also sounds like I do (or at least could) visualize these levels as separate, since keeping track of the tiny differences that end up being important is impossible for my mind. This seems to necessitate that imagining irreducibility is not only possible, but natural (and perhaps unavoidable?).
This is not to say that irreducibility is logical, and our reason may insist to us that the painting is indeed reducible to quarks, whether or not we can imagine this reduction. But collapsing the levels is not the default, a priori logically necessary position.
I'm not entirely clear on what you are saying above. Your mind keeps many overlapping concepts that build on each other. It's also incapable of introspecting on this process in detail, or of representing one concept explicitly in terms of an arbitrary other concept, even if the model in the mind supports a lawful dependence between them. You can only visualize some concepts in the context of some other closely related concepts. Notice that we are only talking about the algorithm of human mind and its limitations.
Perhaps it would help (since I think I've lost you as well) to relate this all back to the original question: is all levels reducing down to a common lowest level a priori logically necessary? My contention is that it's possible to reduce the levels, but not logically necessary-- and I support this contention with the fact that we don't necessarily collapse the levels in our reasoning, and we can't collapse the levels in our imagination. If you weren't disagreeing with this, then I've just misunderstood you, and I apologize.
There are at least 3 ways for anti-reductionism to be not only clearly consistent, but with some plausibility, true - in the sense that there is empirical as well as conceptual evidence for every position (This is connected to a quote I posted yesterday):
Ontological monism: The whole universe is prior to its parts (see this paper)
No fundamental level: The descent of levels is infinite (see that paper)
"Causation" is an inconsistent concept (I'm one free afternoon and two karma points away from a top-level post on this ;)
You want to be very careful every time you find yourself saying that.
And that too.
Certainly-- that was somewhat sloppy of me. In my defense, however, a priori and conceivability/imaginability are pretty inextricably tied. Additionally, you yourself used the word "envision."
It would perhaps be helpful if you could clarify what you meant when you said:
Your usage doesn't seem to fit into the Kantian sense of the term-- the unity of my experience of the world is not conditioned by everything being reducible. What do you mean when you say irreducibility is a priori logically incoherent?
See blog post links in Priors. A priori incoherent means that you don't need data about the world to come to a conclusion (i.e. in this case the statement is logically false).
This doesn't really answer the question, though. I know that a priori means "prior to experience", but what does this consist of? Originally, for something to be "a priori illogical", it was supposed to mean that it couldn't be thought without contradicting oneself, because of pre-experiential rules of thought. An example would be two straight lines on a flat surface forming a bounded figure-- it's not just wrong, but inconceivable. As far as I can tell, an irreducible entity doesn't possess this inconceivability, so I'm trying to figure out what Eliezer meant.
(He mentions some stuff about being unable to make testable predictions to confirm irreducibility, but as I've already said, this seems to presuppose that reducibility is the default position, not prove it.)
Eliezer, in Excluding the Supernatural, you wrote:
"Fundamentally complicated" does sound like an oxymoron to me, but I'm not sure I could say why. Could you?
I'm having the same difficulty. Aren't quarks (or whatever is the most elemental bit of matter) fundamentally complicated? What's meant by "complicated"?
(Sorry for being so chatty.)
Are you actually implying that quantum mechanics is remotely comparable in complexity to paintings and artistic "subjects"? Please direct me to the t-shirt that summarizes all of artistic critique.
This is probably wrong. The important point is that physics isn't a mind, much less a human mind or your mind, so it doesn't care about your high-level concepts, which makes their materialization in reality impossible. Even though the territory computes much more data than people do, that data isn't structured the way human concepts are.
To loqi and Nesov:
Again, both of your responses seem to hinge on the fact that my challenge below is easily answerable, and has already been answered:
To loqi: Where do we draw the line? Where is an entity too complex to be considered fundamental, whereas another is somewhat less complex and can therefore be considered simple? What would be a priori illogical about every entity in the universe being explainable in terms of quarks, except for one type of entity, which simply followed different laws? (Maybe these laws wouldn't even be deterministic, but that's apparently not a knockdown criticism of them, right? From what I understand, QM isn't deterministic, by some interpretations.)
To Nesov: Again, you're presupposing that you know what's part of the territory, and what's part of the map, and then saying "obviously, the territory isn't affected by the map." Sure. But this presupposes the territory doesn't have any irreducible entities. It doesn't demonstrate it.
Don't get me wrong: Occam's razor will indeed (and rightly) push us to suspect that there are no irreducible entities. But it will do this based on some previous success with reduction-- it is an inference, not an a priori necessity.
I don't know. I wasn't supporting the main thread of argument, I was responding specifically to your implicit comparison of the complexity of quarks and "about-ness", and pointing out that the complexity of the latter (assuming it's well-defined) is orders of magnitude higher than that of the former. "About-ness" may seem simpler to you if you think about it in terms that hide the complexity, but it's there. A similar trick is possible with QM... everything is just waves. QM possesses some fundamental level of complexity, but I wouldn't agree in this context that it's "fundamentally complicated".
I would assert that, by definition, a meaningful concept is reducible to some other set of concepts. If this chain of meaning can be extended to unambiguous physics, then their "materialization in reality" is certainly possible, it's just a complicated boundary in Thingspace.
I have not been able to imagine a pair of (painting+context with a subject)s which have two completely different subjects but are almost identical in their coordinate-positioning of quarks.
You can, though? Can you give an example?
Well, wouldn't a painting of the Mona Lisa, and a computer screen depicting said painting, have very different quarks, and quark patterns? While two computer screens depicting some completely different subject would be much more similar to each other? This is what I was trying to get at.
The two computer screens depicting completely different subjects have almost everything in common, in that they are of the same material. However, where they differ -- namely, the color of each pixel -- is where all the information about the painting is contained. So the screens have enough different information (at the quark level) to distinguish what the paintings are about.
So I don't think you are getting at why "about-ness" isn't related to the quarks of the painting. I think a better example is a stick figure. A child's stick figure can be anybody. What the painting is about is in her head, or your head, or in the head of anyone thinking about what the painting is about.
So it's not in the quarks of the painting at all. "About-ness" is in the quarks of the thoughts of the person looking at the painting, right? (And according to reductionism, completely determined by the quarks in the painting, the quarks of the observer, and the quarks of their mutual environment.)
Above, you wrote:
Thus I agree with this statement as it is written, because I think the difference in the subjects of the paintings is found instead in the thoughts of the beholder. Would you agree that there is a legitimate difference at the level of quarks between the thought that a painting has a subject and the thought that a painting is just random blobs?
But the two screens with two different subjects are probably more similar than a screen and a painting with the same subject, in terms of coordinates of quarks. Additionally, it's not clear to me that there's a one-to-one correspondence between color and quarks. Even establishing a correspondence between color and chemical makeup is extremely difficult, due to the influence of natural selection on how we see color (I remember Dennett having a cool chapter on this in CE.)
I don't want to make our disagreement sound more stark than it actually is. I agree that the about-ness is in the mind of the beholder, and the stick figure is a good example as well... but I think this just emphasizes my point. Let me put it this way: given the data for the point-coordinates of the three entities, could a mind choose which one had which subject? No, even though the criterion is buried abstrusely somewhere in there. The point being that the models are inextricably separate in the imagination, and it's therefore not clear to me why it's a priori logically necessary that they all collapse into the same territory (though I agree that they do, ultimately).
Yes, and it does.
Could you explain? If I were presented with a data sheet full of numbers, and told "these are the point coordinates of the fundamental building blocks of three entities. Please tell me what these entities are, and if applicable, what they are about" I would be unable to do so. Would you?
Given a computer that can handle the representation and convert it into a form acceptable to the interface of your mind, this data can be converted into a high-level description. The data determines its high-level properties even if you are unable to extract them, just as a given number determines which prime factors it has, even if you are unable to factor it.
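The factoring analogy can be made concrete with a short sketch (plain trial division; the function name is just illustrative). The point is that the low-level data--a bare integer--fully determines a high-level property--its prime factorization--whether or not any observer has the means to extract it:

```python
def prime_factors(n):
    """Trial-division factorization. Slow for large n, but the
    answer it eventually finds was determined by n all along."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(8051))  # -> [83, 97]
```

For a number small enough, the "computer that can handle the representation" exists and the high-level description is recoverable; for a large enough number it may be practically unrecoverable, yet no less determined by the data.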
Maybe I've misunderstood you and you're not talking about what "about" means. Are you talking about how it seems impossible that we can decode the quarks into our perception of reality? And thus that while you agree everything is quarks, there's some intermediate scale helping us interpret that would be better identified as 'fundamental'? (If I'm wrong just downvote once, and I'll delete; I don't want to make this thread more confusing.)
Haha if I just downvoted it, then I wouldn't be able to explain what I do mean.
I'm simply attempting to disagree with the logical necessity of reductionism. I said this earlier, I thought it was pretty clear:
So, the fact that a painting has a subject is a good example of this: I can't imagine the specific differences between a) the quark-configuration that would lead to me believing it's "about a subject", versus b) the quark-configuration that would lead to me believing it's just a blob. I can believe that quarks are ultimately responsible, but I'm not obligated to do so by a priori logical necessity.
So I'm not contending anything about what the most fundamental level is. I'm just saying that non-reductionism isn't inconceivable.
I feel that someone should point out how difficult this discussion might be in light of the overwhelming empirical evidence for reductionism. Non-reductionist theories tend to get... reduced. In other words, reductionism's logical status is a fairly fine distinction in practice.
That said, I wonder if the claim can't be near-equivalently rephrased "it's impossible to imagine a non-reductionist scenario without populating it with your own arbitrary fictions". Your use of the term "conceivable" seems to mean (or include) something like "choose an arbitrary state space of possible worlds and an observation relation over that space". Clearly anything goes.
You're simply expanding your definition of "everything" to include arbitrary chunks of state space you bolted on, some of which are underdetermined by their interactions with every previous part of "everything". I don't have a fully fleshed-out logical theory of everything on hand, so I'll give you the benefit of the doubt that what you're saying isn't logically invalid. Either way, it's pointless. If there's no link between levels, there's no way to distinguish between states in the extended space except by some additional a priori process. Good luck acquiring or communicating evidence for such processes.
This is a slippery concept. With some tiny probability anything is possible, even that 2+2=3. When philosophers argue for what is logically possible and what isn't, they implicitly apply an anthropomorphic threshold. Think of that picture with almost-the-same atoms but completely different message.
The extent to which something is a priori impossible is also probabilistic. You say "impossible", but mean "overwhelmingly improbable". Of course it's technically possible that the territory will play a game of supernatural and support a fundamental object behaving according to a high-level concept in your mind. But this is improbable to an extent of being impossible, a priori, without need for further experiments to drive the certainty to absolute.
But surely there's something in the painting that is causing the observer to have different thoughts for different subjects. But that something in the painting is not anything discernible on the level of quarks. This is why I brought the example up, after all. It was in response to:
I believe (I could be wrong, since I started this thread asking for a clarification) that the implication of this statement (derived from the context) was that "brains made of quarks can't think about things as if they're irreducibly not made of quarks."
First of all, saying "brains made of quarks can't think [blank] because quarks themselves aren't [blank]," seems to me equivalent to saying that paintings can't be about something because quarks can't be about something. It's confusing the abilities and properties of one level for those of another. I know this is a stretch, but be generous, because I think the parallelism is important.
Second of all, we think about things as if they're not quarks all the time. We can "predict" or "envision" the subject of the painting without thinking about the quark coordinates at all (and such coordinates would not help us envision or predict anything to do with the subject).
So I clearly need some help understanding what Eliezer actually meant. I find no reason to believe that brains made of quarks can't think about things as if they're not made of quarks. (Or rather, Eliezer only seems to allow this if it's a "confusion." I don't understand what he means by this.)
Some comic relief, with a serious point:
The famous cartoon of two mathematicians going over a proof, the middle step of which is "then a miracle occurs".
If reductionism is false in the way you've described, then it seems that we can start at the level of quarks and work our way back up to the highest level, but that at some point there must be a "magical stuff happens here" step where level N+1 cannot be reduced to level N.
Indeed, an irreducible entity (albeit with describable, predictable, behavior) is not much better than a miracle. This is why Occam's Razor, insisting that our model of the world should not postulate needless entities, insists that everything should be reduced to one type of stuff if possible. But the "if possible" is key: we verify through inference and induction whether or not it's reasonable to think we'll be able to reduce everything, not through a priori logic.
Could someone answer my question in the Where Physics Meets Experience thread, please?
Thanks.
Is there a way to undelete posts?
That might seem a weird question - just submit it again - but it turns out that "deleting" a post doesn't actually delete it. The post just moves to a netherworld where people can view it, link to it, discuss it in the comments etc. but: a) it doesn't show in the sidebar, b) it doesn't show in the user's submitted page, c) it says "deleted" where the poster's username should be. Editing and saving doesn't help.
This calamity has just befallen a post of mine that I submitted by mistake, then killed, but people (presumably) saw it in their feeds and started commenting away. Vladimir_Nesov suggests that it's an okay post and should be resurrected, but I lack the power.
If I'm causing trouble, then sorry for causing trouble.
An interesting book is out: Information, Physics and Computation by Andrea Montanari and Marc Mézard. See this blog post for more detail.
I like this book and have already got myself lost in it. The title is confusing; they should've called it "Phasetransitionology" like the awesome Generatingfunctionology.
Anders Sandberg - Swine Flu, Black Swans, and Geneva-eating Dragons (video/youtube)
Anders Sandberg on what statistics tells us we should (not) be worried about. Catastrophic risks, etc.
Sandberg's post on his blog about the talk.
What do you guys think of the Omega Point? Perhaps more importantly, what do you think of Tipler's claim that we've known the correct quantum gravity theory since 1962?
We don't.
By that, do you mean "it's not worth a second look", "that's not relevant to Less Wrong", "I haven't heard of it", or something else I haven't thought of?
It's not worth a second look.
In my opinion, too many comments lately have incidentally discussed their authors' votes; I think it distracts from the actual topic, and metadiscussions ought to be separate comments.
Counterpoint: knowing why other people vote or don't vote is helpful. Voting alone provides little information about what people did or did not like about a comment.
However, it does distract from the primary topic and some sort of side channel for such discussions would be nice.
Inspired by Yvain's post on Dr. Ramachandran's model of two different reasoning models located in the two hemispheres, I am considering the hypothesis that in my normal everyday interactions, I am a walking, talking, right brain confabulating apologist. I do not update my model of how the world works unless I discover a logical inconsistency. Instead, I will find a way to fit all evidence into my preexisting model.
I'm a theist, and I've spent time on Less Wrong trying to be critical of this view without success. I've already ascertained that God's existence doesn't present a logical inconsistency. (An atheist thinks God's existence is illogical, but based on assumptions that are not necessary.) All empirical evidence I'll ever receive can be consistently incorporated into a God model. (Since, for example, I can question my perception or my sanity before questioning whether God exists.)
I'm an unusual theist, however, in that I have no emotional attachment to believing in God. The God that I believe in is already impersonal. Also, I've ascertained while on Less Wrong that atheism is also not logically inconsistent and, from what I can tell, is not a disadvantageous philosophical position. So how can I trigger a switch? Why is it not easy to flip from one position to another?
I hypothesize that there is something analogous to an activation energy required to update one's model, so that there must be some motivation or impetus to update it. For example, perhaps a new model explains things in a simpler way than the current model, and thus would be chosen for aesthetic reasons, or perhaps the new model would afford some practical benefit. (A difference in predictions that affects anything tangible would be an example of a practical benefit.)
(A) Choosing atheism because it is more aesthetic than theism.
I already prefer atheism to the extent that it is a simpler theory. (Some form of Occam's Razor.) However, it leaves a hole that God is shaped to fit, so, finally, I don't consider it to be more aesthetic.
This hole is the reason/cause/explanation for the existence and causal dependence/inter-connectedness of everything. As far as I am aware, the atheist model has no comment on this. However, apparently you don't experience any hole. Tell me, how does your model cover this hole? Perhaps if I could see that atheism is just as good as theism as a model, I could perform the switch, or at least hold them both as simultaneously equal hypotheses.
(B) Choosing atheism because it would provide some practical benefit.
In what way could becoming an atheist possibly improve my life for the better? Is there any actual, tangible benefit? Is there some cost that I'm not aware of that theism is exacting? As far as I know, there is no cost to being theist, because I recognize no extra guilt or obligation for my belief. Organized religion does provide some non-negligible burden to my everyday life, but that is independent of my belief. If I was an atheist, would anything in my life be easier or better?
It's not that I, an atheist, don't experience such a hole. Far from it! I inhabit a gaping and mysterious void of ignorance. The difference is that while you see the outline of the hole and find something hole-shaped to fit into it (God), I am more interested in changing the dimensions of the hole. I don't want to explain why the hole exists, I want to destroy the hole altogether. The hole isn't a problem to be solved by the atheist model but by the scientific model.
A thousand years ago the hole was a lot bigger, and yet God was still perfectly hole-shaped. Consider that one day the hole may no longer exist, that the great and ineffable cause is finally exploded by a brilliant theory that describes every underpinning of the universe, and we all go "Ohhhh... that makes sense."
Whither then doth hole-shaped God go?
If you want to maximize practical benefit, become a Christian. Or a Muslim, if you live in the Middle East. Or a superstitious atheist, if you live in China. Being an atheist, for myself, at least, is not about practical benefit. It's just that I don't have any rational way of believing anything else.
So you agree there is a hole, and that this hole is fillable, by science. May I take this to mean that you do believe there is an explanation? Suppose science could provide one: what would it look like? I understand that you don't know (and I don't either), but to speculate...
Exactly--that sounds like God to me. I would be happy to equate God with a single universal theory of everything, or more precisely with any set of laws that also included some sort of self-explanation.
Many mathematicians and physicists identify God with mathematics and/or the physical laws of the universe. Einstein believed in such a God. I think that belief in God, distilled to its most elemental component, is the belief that there is a consistent theory of everything, whether this theory is knowable or not.
Organized religions make up a lot of stuff about what the consistent theory consists of. (For example, were humans part of the plan? If the universe is deterministic, then they were.) Eliezer is correct that they focus on overly positive aspects. Perhaps they should just stay silent and appeal to the mystery, but they insist upon speculating, and then make it dogma. I find the speculation interesting, but find all kinds of dogma oppressive. You're a sinner (religion) or an idiot (Less Wrong).
It seems to me that your idea of God has no volition and is not equipped to care about anything we do. Why is the idea important, then? Why is it a worthwhile idea to collect the regularities of the Universe in a bag labelled "God"?
First: I wholly agree that my idea of God has no volition and is not equipped to care about anything we do. This is the view of God I'm defending, not a personal God.
Good question -- I've been anticipating it for some time now. There are three reasons why the idea is important.
(1) Many people (especially scientists) believe in this God. Many/most world religions actually assert a God that is much more like the God I describe than you might think. So I would like atheists to understand that when they assert that belief in God is irrational or absurd, they are really (usually) just making arguments that there is no personal God, which is annoying to theists who believe in the impersonal God. Perhaps mostly because, as a result of the identification God=Personal God, they can't express their beliefs in a meaningful way. For example, even after having sketched my view of God, it was still implied that I "suppose that the entire universe is the creation of some infinite mind-like thing with an unconditional respect for reason!"
(2) Many logical arguments against God don't focus on properties of God specific to a personal God (the problem of evil is a noteworthy exception). Since they argue that God of any kind can't exist, but then my watered-down do-nothing version of God can exist, what went wrong with the reasoning? My favorite example of this is the argument that a supreme power would be too complex to exist. (Are the fundamental physical laws too complex to exist??) So I would really like to learn, after all, how anyone can tell the difference between a logical argument and just a line of reasoning that conforms with your point of view.
(3) As humanists, we need to identify what we have in common and not exaggerate differences. I think a lot of people, theists and atheists alike, have an innate belief that the world must make sense. As some people have pointed out in comments to me, it is possible for them to hold this as a theory rather than a belief. However, I then suspect that our personalities or the way our minds are structured are really quite different. And this difference is not a good reason to think of most of humanity as idiotic. I strongly assert that what theists really can't let go of (even the ones who believe in a personal God) is the idea of a meaningful/consistent universe. So practically, you'd make a lot more progress pulling them "sideways" towards a belief in an impersonal God than in no God. I've made a similar argument here.
Finally, there is an aspect to your question that I cannot fully address. It is: what difference does believing in God make if there's no reason to worship him and it would have no effect on my behavior? I have no response to this because I don't think it does make a difference. I have no objection to people being atheists. But I think some people innately do have a belief in God, and for whatever reason, it is connected with their motivation to explore the universe. If I don't believe in God -- if I consider it unimportant whether or not the world actually makes sense -- then I lose my interest in it. I might just take psychedelic drugs all the time. From observation of the true atheists here (who seem more or less reasonable) I suspect this is a difference in innate constitution.
I'm interested in learning how true atheists avoid this feeling of nihilism. I was actually once quite comfortable with nihilism, but ultimately rejected it in favor of belief in an objective external universe. Which is why I am so interested in how other empiricists organize their worldview. When I say that my belief in God is innate, I should qualify that it may only be innate when I am simultaneously being an empiricist.
This seems to suggest that you either are not truly convinced that (your) God exists, or that it does not bother you when people are wrong.
But not like yours in the key aspects I noted -- those aspects imply a lack of any need for religious practices.
My impression is that the arguments advocates of atheism are making to the public (as opposed to academia) are largely against the idea of a personal God. These atheists would just shrug their shoulders at your stance.
I assert that what theists really can't let go of is a social setting and their place within it.
Honestly, I don't know. I think we have some pretty good tools that could help us find an explanation, and I hope that we'll have the universe completely sussed out one day and that everything makes sense... but then again, it could be God. I just don't think filling the hole with God is a useful step toward figuring out what the hole is about.
If all you want is a reason for why the universe is the way it is, why not just settle for the anthropic principle? I guess I don't really understand why you would apply the label 'God' to a universal theory of everything that explained itself. Wherein do you see the God-nature of the physical rules of this universe? And if you had this universal theory, where is the usefulness of calling it God? Aren't you just adding unnecessary semantic complexity?
I would hardly compare believing in a consistent theory of everything with believing in God, for any meaningful definition of "God".
A deterministic universe doesn't mean there must be a plan. Water doesn't plan to conform to the shape of the container it is poured into.