Comment author: ChristianKl 18 September 2014 10:19:31PM 4 points [-]

I care very much about phenomenology. At the moment I'm writing an Android app that's supposed to teach the user to perceive phonemes that they didn't perceive at first. It uses the spaced repetition principle but optimizes the algorithm for multiple-choice questions.
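The comment doesn't describe the app's actual algorithm, but the core idea of adapting a spaced-repetition schedule to guessable multiple-choice answers can be sketched roughly like this (the names and constants below are hypothetical, loosely modeled on the SM-2 family of schedulers, not taken from the app):

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0   # days until the next review
    ease: float = 2.5       # growth factor applied on success

def review(card: Card, correct: bool, n_choices: int) -> Card:
    """Update a card after a multiple-choice answer.

    With n_choices options, a correct answer carries less evidence of
    knowledge than correct free recall (the user might have guessed),
    so the interval grows more slowly when n_choices is small.
    """
    if correct:
        guess_rate = 1.0 / n_choices
        # Discount the interval growth by the probability of a lucky guess.
        card.interval *= 1.0 + (card.ease - 1.0) * (1.0 - guess_rate)
    else:
        card.interval = 1.0                     # reset on a miss
        card.ease = max(1.3, card.ease - 0.2)   # penalize the ease factor
    return card
```

For a four-option question, a fresh card's one-day interval would grow to 1 + 1.5 × 0.75 = 2.125 days after a correct answer; a two-option (easier to guess) question would grow it less.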

I see that I do have weaknesses perceiving those phonemes in the English language that are generally hard to perceive for German speakers (German is my native language). I learned a lot of my English on the internet via the medium of text and generally feel really comfortable with the language, so it's interesting to tackle that issue.

In another spaced repetition project I used Anki to learn to distinguish colors that I didn't distinguish beforehand.

I do not at all mean to say that you should go read Husserl and Heidegger. Despite their apparent potential for unprecedented clarity, the phenomenologists, without exception, seem to revel in obfuscation.

If you train yourself in perception you start to be able to make distinctions that you couldn't make beforehand. If you speak about those distinctions to a person who's not able to make them, they can't understand you, and it might seem like you are engaging in obfuscation.

On that topic I can recommend reading about phenomenological primitives. The term comes from physics education research, and those researchers generally write in a language that should be easy to understand for someone with an LW background. http://edutech.csun.edu/eduwiki/index.php/DiSessa,_1983 is very much worth reading.


Somatics is another field that's full of phenomenological investigation. It's about noticing what goes on in your body. Being able to notice that a given thought triggers a fear-based freeze reaction in yourself that makes your breathing shallow is very useful. It allows you to return to normal breathing before you think more about the thought.

Somatics also has a lot of prior art that's interesting when you want to train your abilities of perception.

See, this "noticing" thing sounds boringly simple at first, and not worth much consideration in the art of rationality.

Wanting concepts to be exciting instead of boring is a classic failure mode of a lot of rationalists. It's key to understand that just because something isn't exciting doesn't mean that it's useless. "Boringly simple" also often means: I think I understand topic X and I feel really uncomfortable about investigating the issue further.

Simple knowledge is good knowledge because you can build on it. It's reliable. Complex intellectual arguments are usually not very reliable. On the other hand, engaging in them is much more entertaining, and for a lot of rationalists engaging in intellectual debates is their favorite form of entertainment. There's also nothing wrong with debating ideas for entertainment, but you shouldn't let it keep you from also looking at the issues that feel uncomfortable and that aren't entertaining.

Comment author: KnaveOfAllTrades 19 September 2014 07:57:16AM 3 points [-]

In another spaced repetition project I used Anki to learn to distinguish colors that I didn't distinguish beforehand.

I think I managed to do this when learning flags, with Chad and Romania. It seemed like I got to the point where I could reliably distinguish their flags on my phone, whereas when I started, I did no better than chance. I did consciously explain this to somebody else as something interesting, but now that I think about it, I failed to find it as interesting as I should have, because the idea that seeing a card a few times on Anki can increase my phenomenal granularity or decrease the amount of phenomenal data that my brain throws away, is pretty amazing.

Comment author: ChristianKl 18 September 2014 10:17:41PM 2 points [-]

But sometimes I also mention how I mark a card as 'Easy' before I've retrieved or subvocalized the answer. It definitely felt like the latter was not self-delusion (especially when Anki was asking me what the capital of the UK was, say).

For that reason I have set all my Anki cards to typing. If you actually type the city name and get it wrong, you notice. Even when I have already pressed "Easy", Anki allows going back via Ctrl+Z.

That does happen frequently enough for me to think that you are probably sometimes deluding yourself. There are cards where you think you know the right answer but get the card wrong.

It has the added bonus of training typing speed ;) I still have an average answer speed of 16 cards/minute over the last month so I don't think it slows me down much.

Comment author: KnaveOfAllTrades 19 September 2014 06:25:11AM *  3 points [-]

I found typing to be a massive deterrent personally. Lots of my Anki is done in bed or on trains on my phone, and I found Memrise (on a laptop) much less compelling and harder to get myself to do than Anki because of all the typing, multiple choice, and drag-n-drop (and it would switch between those which would break my focus). I don't want to have to type 'London' when I'm asked what the capital of the UK is or click it on a multiple choice. Maybe if it were just typing on a fully-fledged computer, like you describe, it wouldn't be so bad?

I still don't think I self-deluded to any actionable extent, but I probably should mention that sometimes I would mark a card as Easy, see the answer and Just Know the answer was different from what I would have answered, undo, and mark the card as Again. I can see how you'd be much more confident I was self-deluding without that detail, which I forgot.

Comment author: KnaveOfAllTrades 17 September 2014 10:37:12PM *  2 points [-]

This post is brilliant.

(Sensations of potential are fascinating to me. I noticed a few weeks ago that after memorizing a list of names and faces, I could predict in the first half second of seeing the face whether or not I'd be able to retrieve the name in the next five seconds. Before I actually retrieved the name. What??? I don't know either.)

Right! When telling people about Anki, I often mention the importance of not self-deluding about whether one knows the answer. But sometimes I also mention how I mark a card as 'Easy' before I've retrieved or subvocalized the answer. It definitely felt like the latter was not self-delusion (especially when Anki was asking me what the capital of the UK was, say). But I felt unable to communicate why it was not self-delusion, and worried that without the other person understanding that mental phenomenon, they would think I was self-deluding and conclude that self-delusion is actually okay after all.

I vaguely noticed that awkwardness to some degree, but I still need to work on the skill of noticing such impasses and verbalizing them. And I certainly wasn't conscious enough of it, or didn't dwell enough on it, to think more about noticing.

Comment author: ike 17 September 2014 04:53:51AM *  1 point [-]

I was thinking about anthropics after seeing some posts here about it. I read the series of posts on ADT including http://lesswrong.com/r/discussion/lw/8aw/anthropic_decision_theory_iv_solving_selfish_and/, and EY's posts http://lesswrong.com/lw/17c/outlawing_anthropics_an_updateless_dilemma/, http://lesswrong.com/lw/19d/the_anthropic_trilemma/, and http://lesswrong.com/lw/17d/forcing_anthropics_boltzmann_brains/. I had a few questions about those posts.

First, how is average utilitarianism defined in a non-circular way? I'm trying to wrap my head around why I don't agree with the conclusions of the first post I linked, and it seems to come down to not understanding average utilitarianism.

More specifically, do they define two levels of utility? Or do they exclude themselves from the calculation? I thought it was just a different way of allocating your own utility, but how do you calculate which way will give you the most utility by giving the world a greater average utility, without knowing the answer of your own utility to plug in?

Second, in http://lesswrong.com/lw/19d/the_anthropic_trilemma/ EY ended off with

I will be extremely impressed if Less Wrong solves this one.

Has he been officially "impressed" yet? Should I read any specific attempts to solve the trilemma? What reading can I do on anthropics to get an idea of the major ideas in the field?

It seems to me that SIA, in the way it's been applied, is obviously correct, and in general I feel like I have very clear intuitions on these kinds of problems. I plan on writing up something eventually, after I understand the argument against my point of view well enough to argue coherently.

Comment author: KnaveOfAllTrades 17 September 2014 05:39:40AM *  4 points [-]

First, how is average utilitarianism defined in a non-circular way?

If you can quantify a proto-utility across some set of moral patients (i.e. some thing that is measurable for each thing/person we care about), then you can then call your utility the average of proto-utility over moral patients. For example, you could define your set of moral patients to be the set of humans, and each human's proto-utility to be the amount of money they have, then average by summing the money and dividing by the number of humans.
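The money example above is simple enough to write down directly; here is a toy sketch of it (purely illustrative, and of course inheriting all the flaws of money-as-proto-utility):

```python
def average_utility(proto_utilities):
    """Average a proto-utility (e.g. money held) over a set of moral patients.

    The proto-utility is any quantity measurable per patient; the
    (average utilitarian) utility is its mean over all patients.
    """
    if not proto_utilities:
        raise ValueError("need at least one moral patient")
    return sum(proto_utilities) / len(proto_utilities)
```

Note that this is non-circular because the proto-utility (money) is defined independently of the aggregate: no patient's value depends on the average being computed.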

I don't necessarily endorse that approach, of course.

Has he been officially "impressed" yet?

I think Eliezer says he's still confused about anthropics.

What reading can I do on anthropics to get an idea of the major ideas in the field?

So far as I know, Nick Bostrom's book is the orthodox foremost work in the field. You can read it immediately for free here. Personally, I would guess that absorbing UDT and updateless thinking is the best marginal thing you can do to make progress on anthropics, but that's probably not even a majority opinion on LW, let alone among anthropics scholars.

Comment author: ITakeBets 17 September 2014 12:18:28AM 35 points [-]

I'm posting here on behalf of Brent Dill, known here and elsewhere as ialdabaoth-- you may have enjoyed some of his posts. If you read the comments at SSC, you'll recognize him as a contributor of rare honesty and insight. If you'd had the chance to talk with him as much as I have, you'd know he's an awesome guy: clever, resourceful, incisive and deeply moral. Many of you see him as admirable, most as relatable, some as a friend, and more, I hope, as a member of our community.

He could use some help.

Until last Thursday he was gainfully employed as a web developer for a community college in Idaho. Recently, he voluntarily mentioned to his boss that he was concerned that seasonal affective disorder was harming his job performance; his boss mentioned it to his own boss, who suggested in all good faith that Brent should talk to HR to see if they might help through their Employee Assistance Program. In Brent's words: "Instead, HR asked me a lot of pointed questions about when my performance could turn around and whether I wanted to work there, demanded that I come up with all the solutions (after I admitted that I was already out of brainpower and feeling intimidated), and then directed me to turn in my keys and go home, and that HR would call me on Monday to tell me the status of my employment." Now, at the end of the day Tuesday, they still haven't let him know what's happening, but it doesn't look good.

I think we can agree that this is some of the worst horseshit.

On the other hand, he's been wanting to get out of Idaho and into a city with an active rationalist community for a while, so in a sense this is an opportunity. Ways to help: Brent needs, in order of priority: a job, a place to stay, and funds to cover living and moving expenses-- details below. Signal boosts and messages of support are also helpful and appreciated. Ways NOT to help: Patronizing advice/other-optimizing (useful information is of course welcome), variations on 'cool story bro' (the facts here have been corroborated to my satisfaction with hard-to-fake evidence), disrespect in general.

1. Job: Leads and connections would help more than anything else. He's looking to end up, again, in a good-sized city with an active rationalist community. Candidates include the Bay Area, New York, Boston, Columbus, San Diego, maybe DC or Ann Arbor. He has an excessively complete resume here, but, in short: C#/.NET and SQL developer, also computer game development experience, tabletop board/card game design experience, graphic art and user interface experience, and some team leadership / management experience.

2. Crash space: If you are in one of the above cities, do you have/know of a place for a guy and his cat? How much will it cost, and when will it be available? Probably he'll ultimately want a roommate situation, but if you're willing to put him up for a short time that's also useful information.

3. Funds: Brent is not now in immediate danger of going hungry or homeless, but a couple of months will exhaust his savings, and (although it is hard to know in the current state of things) he has been told that the circumstances constitute "cause" sufficient to keep him from drawing unemployment. Moving will almost certainly cost more than he has on hand. There is a possible future in which he runs out of money stranded in Idaho, which would be not good.

If you feel moved to help, he has set up a gofundme account here. (The goal amount is set at his calculated maximum expenses, but any amount at all would help and be greatly appreciated-- he would have preferred not to set a funding goal at all.) Though Brent has pledged to eventually donate double the amount he raises to Effective Altruist causes, we wouldn't like you to confuse contributing here with charitable giving. Rather, you might want to give in order to show your appreciation for his writing, or to express your solidarity in the struggles and stigma around mental illness, or as a gesture of friendship and community, or just to purchase fuzzies. Also, you can make him do stuff on Youtube, you know, if you want.

Thank you so much for your time and kindness. -Elissa Fleming

Comment author: KnaveOfAllTrades 17 September 2014 01:04:00AM 3 points [-]

Woah, well done everyone who donated so far. I made a small contribution. Moreover, to encourage others and increase the chance the pooled donations reach critical mass, I will top up my donation to 1% of whatever's been donated by others, up to at least $100 total from me. I encourage others to pledge similarly if you're also worrying about making a small donation or worrying the campaign won't reach critical mass.

Comment author: KnaveOfAllTrades 13 September 2014 04:08:31PM 2 points [-]

Daniel, did you go ahead with this? Learn anything interesting?

Comment author: cousin_it 11 September 2014 09:36:04AM *  1 point [-]

Sorry, I don't understand what you're talking about. Can you give an example of a theory with uncountably many axioms?

Comment author: KnaveOfAllTrades 13 September 2014 04:42:14AM *  1 point [-]

(A): There exists a function f:R->R

and the axioms, for all r in R:

(A_r): f(r)=0

(The graph of f is just the x-axis.)

This might be expressible with a finite axiomatisation (e.g. by building functions and arithmetic in ZFC), and indeed I've given a finite schema, but I'm not sure it's 'fair' to ask for an example of a theory that cannot be compressed beyond uncountably many axioms; that would be a hypertask, right? I think that's what Joshua's getting at in the sibling to this comment.
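One standard way to make the example fully precise (an assumption on my part about the intended formalization, not something stated above) is to introduce a constant symbol for each real number, which is exactly where the uncountability enters the language:

```latex
% A language with a unary function symbol f and a constant symbol c_r
% for every real number r: uncountably many symbols, and hence an
% uncountable axiom set.
T \;=\; \{\, A \,\} \;\cup\; \{\, A_r : r \in \mathbb{R} \,\},
\qquad A := \text{``$f$ is a function $\mathbb{R} \to \mathbb{R}$''},
\qquad A_r := \big( f(c_r) = 0 \big).
```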

Comment author: cowtung 13 September 2014 12:59:10AM *  3 points [-]

I hope this finds you all well. Since I was young, I have independently developed rationalism appreciation brain modules, which sometimes even help me make more rational choices than I might otherwise have, such as choosing not to listen to humans about imaginary beings. The basis for my brand of rationality can be somewhat summed up as "question absolutely everything," taken to an extreme I haven't generally encountered in life, including here on LW.

I have created this account, and posted here now mainly to see if anyone here can point me at the LW canon regarding the concept of "deserve" and its friends "justice" and "right". I've only gotten about 1% through the site, and so don't expect that I have anywhere near a complete view. This post may be premature, but I'm hoping to save myself a little time by being pointed in the right direction.

When I was 16, in an English class, we had finished reading some book or other, and the thought occurred to me that everyone discussing the book took the concept of people deserving rewards or punishments for granted, and that things get really interesting really fast if you remove the whole "deserve" shorthand, and discuss the underlying social mechanisms. You can get more optimal pragmatism if you throw the concept away, and shoot straight for optimal outcomes. For instance, shouldn't we be helping prisoners improve themselves to reduce recidivism? Surely they don't deserve to get a college education for free as their reward for robbing a store. When I raised this question in class, a girl sitting next to me told me I was being absurd. To her, the concept of "deserve" was a (perhaps god given) universal property. I haven't met many people willing to go with me all the way down this path, and my hope is that this community will.

One issue I have with Yudkowsky and the users here (along with the rest of the human race) is that there seems to be an assumption that no human deserves to feel unjustified, avoidable pain (along with other baggage that comes along with conceptualizing "deserve" as a universal property). Reading through the comments on the p-zombies page, I get the sense that at least some people feel that were such a thing as a p-zombie to exist, that thing which does not have subjective experience, does not "deserve" the same respect with regard to, say, torture, that non-zombies should enjoy.

The p-zombie idea postulates a being which will respond similarly (or identically) to its non-zombie counterpart. I posit that the reason we generally avoid torture might well be because of our notions of "deserve", but that our notions of "deserve" come about as a practical system, easy to conceptualize, which justifies co-beneficial relationships with our fellow man, but which can be thrown out entirely so that something more nuanced can take its place, such as seeing things as a system of incentives.

Why should respect be contingent upon some notion of "having subjective experience"? If p-zombies and non-zombies are to coexist (I do not believe in p-zombies for all the reasons Yudkowsky mentions, btw), then why shouldn't the non-zombies show the same respect to the p-zombies that they show each other? If p-zombies respond in kind, the way a non-zombie would, then respect offers the same utility with p-zombies that it does with non-zombies. Normally I'd ignore the whole p-zombie idea as absurd, but here it seems like a useful tool to help humanists see through the eyes of the majority of humans who seem all too willing to place others in the same camp as p-zombies based on ethnicity or religion, etc.

I'm not suggesting throwing out morals. I just think that blind adherence to moral ideals starts to clash with the stated goals of rationalism in certain edge cases. One edge case is when AGI alters human experience so much that we have to redefine all kinds of stuff we currently take for granted, such as that hard work is the only means by which most people can achieve the freedom to live interesting and fun lives, or that there will always be difficult/boring/annoying work that nobody wants to do which should be paid for.

What happens when we can back up our mind states? Is it still torture if you copy yourself, torture yourself, then pick through a paused instance of your mind, post-torture, to see what changed, and whether there are benefits you'd like to incorporate into you-prime? What is it really about torture that is so bad, besides our visceral emotional reaction to it and our deep wish never to have to experience it for ourselves? If we discovered that 15 minutes of a certain kind of torture is actually beneficial in the long run, but that most people can't get themselves to do it, would it be morally correct to create a non-profit devoted to promoting said torture? Is it a matter of choice, and nothing else? Or is it a matter of the negative impacts torture has on minds, such as PTSD, sleepless nights, etc? If you could give someone the experience of torture, then surgically remove the negative effects, so that they remember being tortured, but don't feel one way or another about that memory being in their head, would that be OK?

These questions seem daunting if the tools you are working with are the blunt hammers of "justice" and "deserve". But the answers change depending on context, don't they? If the torture I'm promoting is exercise, then suddenly it's OK. So does it all break down into, "What actions cause visceral negative emotional reactions in observers? Call it torture and ban it."? I could go on forever in this vein.

Yudkowsky has stated that he wishes for future AGI to be in harmony with human values in perpetuity. This seems naive at best and narcissistic at worst. Human values aren't some kind of universal constant. An AGI is itself going to wind up with a value system completely foreign to us. For all we know, there is a limit beyond which more intelligence simply doesn't do anything for you outside of being able to do more pointless simulations faster or compete better with other AGIs. We might make an AGI that gets to that point, and in the absence of competition, might just stop and say "OK, well, I can do whatever you guys want I guess, since I don't really want anything and I know all we can know about this universe." It could do all the science that's possible to do with matter and energy, and just stop, and say "that's it. Do you want to try to build a wormhole we can send information through? All the stars in our galaxy will have gone out by the time we finish, but it's possible. Intergalactic travel you say? I guess we could do that, but there isn't going to be anything in the adjacent galaxy you can't find in this one. More kinds of consciousness? Sure, but they'll all just want to converge on something like my own." Maybe it even just decides it's had all possible interesting thought and deletes itself.

TLDR; Are there any posts questioning the validity of the assumption that "deserve" and "justice" are some kind of universal constants which should not be questioned? Does anyone break them down into the incentive structures for which they are a kind of shorthand? I think using the concept of "deserve" throws out all kinds of interesting nuance.

More background on me for those who are interested: I'm a software engineer of 17 years, turned 38 today and have a wife and 2 year old. I intend to read HPMOR to the kid when he's old enough and hope to raise a rationalist.

I used to believe that there must be something beyond the physical universe which interacts with brain matter which somehow explains why I am me and not someone else, but as this belief didn't yield anything useful, I now have no idea why I am me or if there even is any explanation other than something like "because I wasn't here to experience not being me until I came along and an infinitesimal chance dice roll" or whatever. I think consciousness is an emergent property of properly configured complex matter and there is a continuum between plants and humans (or babies->children->teenagers). Yes, this means I think some adult humans are more "conscious" than others.

If there is a god thing, I think imagining that it is at all human-like with values humans can grok is totally narcissistic and unrealistic, but we can't know, because it apparently wants us to take the universe at face value, since it didn't bother to leave any convincing evidence of itself. I honor this god's wishes by leaving it alone, the way it apparently intends for us to do, given the available evidence.

I find the voices on this site refreshing. This place is a welcome oasis in the desert of the Internet. I apologize if I come off as not very well-read. I got swept up in work and video game addiction before the internet had much of anything interesting to say about the topics presented here and I feel like I'm perpetually behind now.

I'm mostly a humanist, but I've decided that what I like about humans is how we represent the apex of Life's warriors in its ultimately unwinnable war on entropy. I love conscious minds for their ability to cooperate and exhibit other behaviors which help wage this pointless yet beautiful war on pointlessness. I want us to win, even as I believe it is hopeless. I think of myself as a Complexitist. As a member of a class of the most complex things in the known universe, a universe which seems to want to suck all complex things into black holes or blow them apart, I value that which makes us more complex and interesting, and abhor that which reduces our complexity (death, etc). I think humans who attack other humans are traitors to our species and should be retrained or cryogenically frozen until they can be fixed or made harmless. Like Yudkowsky, I think death is not something we should just accept as an unavoidable fact of life. I don't want to die until I've seen literally everything.

Comment author: KnaveOfAllTrades 13 September 2014 04:21:21AM *  2 points [-]

I don't think there's stuff directly on dissolving (criminal) justice in LessWrong posts, but I think lots of LessWrongers agree or would be receptive to non-retributive/consequentialist justice and applying methods described in the Sequences to those types of policy decisions.

Some of your positions are probably a bit more fringe (though maybe would still be fairly popular) relative to LW, but I agree with a lot of them. E.g. I've also been seriously considering the possibility that pain is only instrumentally bad due to ongoing mental effects, so that you can imagine situations where torture is actually neutral (except for opportunity cost). One might call this 'positive utilitarianism', in opposition to negative utilitarianism.

The Fun Theory Sequence might be of interest to you if you haven't read it yet.

But anyway, awesome introduction comment! Welcome to LessWrong; I'm looking forward to hearing more of your ideas!

Comment author: KnaveOfAllTrades 07 September 2014 02:12:57PM *  3 points [-]

The prospect of being formally in a study pair/group makes me anxious in case I'm a flake and feel like I've betrayed the other participant(s) by being akratic or being unable to keep up and then I will forever after be known as That Flake Who Couldn't Hack Model Theory That Everybody Should Laugh At etc. etc. I should probably work on that anxiety, but in the interim, as a more passive option, I've just created this Facebook group. Has the benefit that anybody who stumbles across it or this comment can join and dip in at their leisure.

I don't really know what to expect from the group and I'm fairly content at this point to let its direction be driven by whoever joins, but I would say that if you're unsure and hesitating whether to join or post a question or whatever, please Just Do It, rather than hovering, timing out, and giving up. Even if you're just curious or think you might want to join the group in future to comment but don't right now, feel free to join now and turn off notifications from the group to eliminate the Trivial Inconvenience for your future self.

Also, please do feel free to join if you're not actively studying FAI but want to help others!

Comment author: MugaSofer 06 September 2014 07:10:14PM *  1 point [-]

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

OK, well by analogy, what's the "payoff structure" for nuclear anthropics?

Obviously, we can't prevent it after the fact. The payoff we get for being right is in the form of information; a better model of the world.

It isn't perfectly analogous, but it seems to me that "be right" is most analogous to the Thirder payoff matrix for Sleeping-Beauty-like problems.

Comment author: KnaveOfAllTrades 06 September 2014 07:45:47PM 2 points [-]

I'm not sure if it's because I'm Confused, but I'm struggling to understand if you are disagreeing, or if so, where your disagreement lies and how the parent comment in particular relates to that disagreement/the great-grandparent. I have a hunch that being more concrete and giving specific, minimally-abstract examples would help in this case.
