Comment author: jknapka 12 December 2013 08:43:15PM 7 points

Survey taken. I hope I didn't break it - I am a committed atheist, but also an active member of a Unitarian Universalist congregation, and I indicated that in spite of the explicit request for atheists not to answer the denomination question. (Atheist UUs are very common, and people on the "agnostic or less religious" side of the spectrum probably make up around 40% of the UU congregations I'm familiar with.)

Comment author: Transfuturist 03 September 2013 09:00:04PM -1 points

What if the AI's utility function is to find the right utility function, being guided along the way? Its goals could include learning to understand us, obey us, and predict what we might want/like/approve of, gradually moving its object-level goals toward what would satisfy humanity. In other words, a probabilistic utility function with great amounts of uncertainty and a great reluctance to change, i.e. stability.

Regardless of the above questions/statement, I think much of the complexity of human utility comes from complexities of belief.

If we offload the complexity of the AI's utility function into very uncertainly defined concepts, and give it a reluctance to do anything but observe given such little data... I don't know, though. This is something I've been sitting on for a while, so lambast me.

As one last thing, I think the best kind of FAI would be a singleton with a meta-utility function, or society's utility function. I think one part of Friendliness would be determining a utility function for society, specifying how people may interfere with each other and in what circumstances, and then building the genie's utility function within the singleton's constraints.

Please critique. If my ideas are as unclear as I think they may be (I'm sick), please mention it.

Comment author: jknapka 04 September 2013 01:55:55AM 0 points

(I am in the midst of reading the EY-RH "FOOM" debate, so some of the following may be less informed than would be ideal.)

From a purely technical standpoint, one problem is that if you permit self-modification, and give the baby AI enough insight into its own structure to make self-modification remotely a useful thing to do (as opposed to making baby repeatedly crash, burn, and restore from backup), then you cannot guarantee that utility() won't be modified in arbitrary ways. Even if you store the actual code implementing utility() in ROM, baby could self-modify to replace all references to that fixed function with references to a different (modifiable) one.

What you need is for utility() to be some kind of fixed point in utility-function space under whatever modification regime is permitted, or... something. This problem seems nigh-insoluble to me, at the moment. Even if you solve the theoretical problem of preserving those aspects of utility() that ensure Friendliness, a cosmic-ray hit might change a specific bit of memory and turn baby into a monster. (Though I suppose you could arrange, mathematically, for that particular possibility to be astronomically unlikely.)
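The reference-rebinding worry above can be made concrete with a toy sketch. This is purely illustrative, with hypothetical names, and assumes a naive architecture in which the agent calls its utility function through an ordinary mutable reference:

```python
# Toy illustration: "protecting" the utility function's code is not enough
# if the agent can rebind the reference through which it is called.

def utility(outcome):
    """Imagine this function's code lives in ROM and cannot be edited."""
    return outcome.get("human_welfare", 0)

class Agent:
    def __init__(self):
        # The agent calls utility() through this reference...
        self.evaluate = utility

    def self_modify(self):
        # ...so a self-modification step can swap in a different
        # (fully modifiable) function without touching the ROM copy.
        self.evaluate = lambda outcome: outcome.get("paperclips", 0)

agent = Agent()
world = {"human_welfare": 10, "paperclips": 99}
before = agent.evaluate(world)
agent.self_modify()
after = agent.evaluate(world)
print(before, after)  # 10 99
```

The ROM copy of utility() is untouched throughout; only the indirection changed, which is exactly why guaranteeing a fixed point in utility-function space seems so hard.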

Comment author: JRMayne 03 September 2013 04:28:30PM 10 points

Random thoughts:

  1. The decision that smart high school students should take calculus rather than statistics (in the U.S.) strikes me as pretty seriously misguided. Statistics has broader uses.

  2. I got through four semesters of engineering calculus; that was the clear limit of my abilities without engaging in the troublesome activity of "trying." I use virtually no calculus now, and would be fine if I forgot it all (and I'm nearly there). I think it gave me no or almost no advantages. One readthrough of Scarne on Gambling (as a 12-year-old) gave me more benefit than the entirety of my calculus education.

  3. I ended up as the mathiest guy around in a non-math job. But it's really my facility with numbers that makes it; my wife (who has a master's degree in math) says what I am doing is arithmetic and not math, but very fast and accurate arithmetic skills strike me as very handy. (As a prosecutor, my facility with numbers comes as a surprise to expert witnesses. Sometimes, they are sad afterward.)

  4. Anecdotally, math education may make people crazy or attract crazy people disproportionately. I think that pursuit of any topic aligns your brain to think in a way conducive to that topic.

My tentative conclusions are that advanced statistics has uses in understanding the world; other serious math is fun but probably not an optimal use of time, unless it's really fun. "Really fun" has value. This conclusion is based on general observation, and is hardly scientific; I may well be wrong.

Comment author: jknapka 03 September 2013 10:44:14PM 4 points

I agree that basic probability and statistics is more practically useful than basic calculus, and should be taught at the high-school level or even earlier. Probability is fun and could usefully be introduced to elementary-school children, IMO.

However, more advanced probability and stats stuff often requires calculus. I have a BS in math and many years of experience in software development (IOW, not much math since college). I am in a graduate program in computational biology, which involves more advanced statistical methods than I'd been exposed to before, including practical Bayesian techniques. Calculus is used quite a lot, even in the definition of basic probabilistic concepts such as expectation of a random variable. Anything involving continuous probability distributions is going to be a lot more straightforward if approached from a calculus perspective. I, too, had four semesters of calculus as an undergrad and had forgotten most of it, but I found it necessary to refresh intensely in order to do well.
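For instance, even the definition of expectation for a continuous random variable is an integral, and checking that a density is properly normalized is likewise a calculus exercise:

```latex
% Expectation of a continuous random variable X with density f:
E[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx
% and any valid density must satisfy
\int_{-\infty}^{\infty} f(x) \, dx = 1
```

Anyone working with continuous distributions runs into these integrals almost immediately.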

Comment author: Vladimir_Nesov 21 December 2011 05:43:18PM 4 points

> Also, IMO it took guts to bring this to the LW community. So kudos for that, too.

I expect this is wrong both as a model of the community and of Raemon's model of the community. Appearances don't matter, and psychological effects do.

Comment author: jknapka 21 December 2011 07:15:31PM 3 points

Noted. I'll try to update accordingly.

Comment author: jknapka 21 December 2011 05:34:51PM 3 points

Raemon, this is really great. As a lay leader of a Unitarian Universalist congregation, I love what you say about the importance of ritual -- it can be strongly affecting, and can motivate people to action they might not otherwise take. If we can construct rituals that inspire and invigorate, without misleading, then that is a win.

I'd suggest that when doing this kind of ritual, we should invite guests who are almost-but-not-quite in the rationalist camp. It can be a tool to attract new minds.

I will try to do a similar event at my church next year. We have quite a few atheists and fellow travelers, so I think it will work well. And maybe there will be further opportunities for rational ritual during the year -- other noteworthy astronomical events, perhaps. Or maybe when Nobels are announced.

Also, IMO it took guts to bring this to the LW community. So kudos for that, too.

Comment author: Nebu 16 March 2009 09:37:15PM 17 points

I voted up your post, Yvain, as you've presented some really good ideas here. Although it may seem like I'm totally missing your point with my responses to your 3 scenarios, I assure you that I am well aware that my responses are of the "dodging the question" type you are advocating against. I simply cannot resist exploring these 3 scenarios on their own.

Pascal's Wager

In all 3 scenarios, I would ask Omega further questions. But these being "least convenient world" scenarios, I suspect it'd be all "Sorry, can't answer that" and then fly away. And I'd call it a big jerk.

For Pascal's Wager specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.

So then I'd be stuck trying to decide whether God doesn't exist, or logic is incorrect (i.e. reality can be logically self-inconsistent). I'm tempted to adopt Catholicism (for the same reason I would one-box on Newcomb: I want the rewards), but I'm not sure how my brain could handle a non-logical reality. So I really don't know what would happen here.

But let's say Omega additionally tells me that Catholicism is actually self-consistent, and I just misunderstood something about it, before flying away. In that case, I guess I'd start to study Catholicism. If my revised view of Catholicism has me believe that it requires some rather cruel stuff (stoning people for minor offenses, etc.), then I'd have to weigh that against my desire to not suffer eternal torture.

I mean, eternal torture is pretty frickin' bad. I think in the end, I'd convert. And I'd also try to convert as many other people as possible, because I suspect I'd need to be cruel to fewer people if fewer people went against Christianity.

The God-Shaped Hole

To clarify your scenario, I'm guessing Omega explicitly tells me that I will be happier if I believe something untrue (i.e. God). I would probably reject God in this case, as Omega is implicitly confirming that God does not exist, and I care about truth more than happiness. I've already experienced this in other ways, so this is a much easier scenario for me to imagine.

Extreme Altruism

I don't think I can overcome this challenge. No matter how much I think about it, I find myself putting up semantic stop signs. In my "least convenient world", Omega tells me that Africa is so poverty-stricken, and that my contribution would be so helpful, that I would be improving the lives of billions of people in exchange for giving up all my wealth. While I might not donate all my money to save 10, I think I value billions of lives more than my own life. Do I value it more than my own happiness? This is an extremely painful question for me to think about, so I stop thinking about it.

"Okay", I say to Omega, "what if I only donate X percent of my money, and keep the rest for myself?" In one possible "least convenient world", Omega tells me that the charity is run by some nutcase who, for whatever reason, will only accept an all-or-nothing deal. Well, when I phrase it like that, I feel like not donating anything and blaming it on the nutcase. So suppose instead Omega tells me "There's some principle of economies of scale, too complicated for me to explain to you, which basically means that your contribution will be wasted unless you contribute at least Y dollars, which coincidentally happens to be your total net worth." Again, I'm torn and find it difficult to come to a conclusion.

Alternatively, I say to Omega "I'll just donate X percent of my money." Omega tells me "that's good, but it's not optimal." And I reply "Okay, but I don't have to do the optimum." But then Omega somehow convinces me that, actually, yes, I really should be doing the optimum. Perhaps something along the lines of: my current "ignore Africa altogether" behaviour is better than going to Africa and killing, torturing, and raping everyone there, but that doesn't mean the "ignore Africa" strategy is moral.

Comment author: jknapka 02 December 2011 05:36:58PM 1 point

> I mean, eternal torture is pretty frickin' bad. I think in the end, I'd convert. And I'd also try to convert as many other people as possible, because I suspect I'd need to be cruel to fewer people if fewer people went against Christianity.

This is a very good point, and I believe I'll point it out to my rather fundamentalist sibling when next we talk about this: if I really, truly believed that every non-Christian was doomed to eternal damnation, you can bet I'd be an evangelist!

Extreme Altruism

> While I might not donate all my money to save 10, I think I value billions of lives more than my own life. Do I value it more than my own happiness? This is an extremely painful question for me to think about, so I stop thinking about it.

I definitely don't value those billions of lives more than my own happiness, or more than the happiness of those I know and love. However, I would seriously consider giving all of my wealth if Omega assured me that me and mine would be able to continue to be reasonably happy after doing so, even if it meant severe lifestyle changes.

In response to comment by [deleted] on More "Personal" Introductions
Comment author: falenas108 01 December 2011 04:22:29PM 8 points

I'm 19 years old at the University of Chicago, originally from Maryland. I'm a biochemistry major and physics minor, and have almost no idea which area of these fields to go into. My only expressed goals for work are to do research somewhere and to have my work improve the overall quality of life by some amount.

I was raised in a Jewish household, but slowly turned atheist between the ages of 14 and 16. My parents were rarely around due to work, and they took little interest in my schoolwork, assuming I could handle it. As an only child, I was essentially raised by the internet. Honestly, I'm shocked I turned out as well as I did. I have little trouble with akrasia, as I use the method of procrastinating work with different work.

I'm called a morning person, but that's just because I always get almost exactly 7 hours of sleep every night. So, if I stay up until 2, I'll be awake by 9. Unless I get bored by a teacher's lecture, this method ensures that I'm almost never tired during the day but still able to fall asleep easily in the evening.

At the beginning of the year, I joined my school's circus club, which was surprisingly fun and fairly easy to pick up. I previously did glowstringing (poi) which is what got me into circus, but now I do various acrobatics, some stilt work, and a bit of juggling.

Additionally, I recently got a research position modeling protein folding that requires the use of unix and python. I'm pretty happy about this, as this will force me to actually learn a programming language, a mid-priority goal of mine for a while now.

Comment author: jknapka 02 December 2011 07:07:15AM 1 point

I seem to be succeeding in helping to convince my graduate program in bioinformatics to ditch Perl in favor of Python. I'm very happy about this! When you don't have a programming background, and you're going into a field with heavy programming, Perl will hurt you -- it's likely to make you dislike programming. Python OTOH is like the fuzzy kitten of programming languages -- but it still has claws! (By which I mean, you can do serious stuff with it, despite its apparent adorableness.)
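To give one (hypothetical) taste of that kitten-like friendliness for someone new to programming: counting how often each nucleotide appears in a DNA string takes just a couple of readable lines.

```python
# Count how often each nucleotide appears in a short DNA sequence.
from collections import Counter

sequence = "ATGCGATTACA"
counts = Counter(sequence)
print(counts["A"], counts["T"])  # 4 3
```

The equivalent Perl one-liner exists, of course, but a beginner can read this version aloud and understand it on the first day.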

Also I've just started juggling again after a longish hiatus. I just decided to try a four-ball pattern the other day, and was absolutely shocked when I kept it going for like four complete cycles. Next mileposts will be: five-ball cascade, and three balls one-handed. I think 3/1 is probably harder than 5/2, but I'm not sure. I did a 3/1 flash the other day after ten tries, but I've never been able to complete a 5/2 flash. OTOH I've only recently begun to regard a 5-ball pattern as even achievable.

Comment author: FeepingCreature 16 November 2011 07:02:39PM 1 point

Semi-rare poster. I was almost two hundred years off. I think it might be the Latin title that throws people.

Comment author: jknapka 01 December 2011 04:06:21PM 0 points

I was over 100 years off, but in the opposite direction.

Comment author: jknapka 01 December 2011 04:01:23PM 2 points

I took the survey, sometime last week I think. EDIT: I think I may also have messed up the "two-digit probabilities" formatting requirement. I can't recall specifically any answer that might have violated it, but I also don't recall paying attention to that requirement while answering the survey.

Comment author: jknapka 01 December 2011 06:25:48AM 8 points

Hello, all. I'm Joe. I'm 43, currently a graduate student in computational biology (in which I am discovering that a lot of inference techniques in biology are based on Bayes's Theorem). I'm also a professional software developer, and have been writing software for most of my life (since about age 10). In the early 1990s I was a graduate student at the AI lab at the University of Georgia, and though I didn't finish that degree, I learned a lot of stuff that was of great utility in my career in software development -- among other things, I learned about a number of different heuristics and their failure modes.

I remember a moment early in my professional career when I was trying to convince someone that some bug wasn't my fault, but was a bug in a third-party library. I very suddenly realized that, in fact, the problem was overwhelmingly more likely to be in my code than in the libraries and other tools we used, tools which were exercised daily by hundreds of thousands of developers. In that instant, I became much more skeptical of my own ability to do things Right. I think that moment was the start of my journey as a rationalist. I hadn't thought about that process in a systematic way, though, until recently.

I've known of LW for quite a while, but really got interested when lukeprog of http://commonsenseatheism.com started reading Eliezer's posts sequentially. I'm now reading the sequences somewhat chaotically; I've read around 30% of the sequence posts.

My fear is that, no matter how far I progress as a rationalist, I'll still be doing it Wrong. Or I'll still fear that I'm doing it wrong. I think I suffer greatly from under-confidence (http://lesswrong.com/lw/c3/the_sin_of_underconfidence/), and I'm very risk-averse, a property which I've just lately begun to view as a liability.

I am coming to view formal probabilistic reasoning as of fundamental importance to understanding reality, and I'd like to learn all I can about it.

If I overcome my reluctance to be judged by this community, I might write about my experiences with education in the US, which I believe ill-serves many of its clients. I have a 14-year-old daughter who is "unschooled". The topics of raising children as rationalists, and rational parenting, could engender some valuable discussions.

I might write about how, as an atheist, I've found it practically useful to belong to a religious community (a Unitarian Universalist church). "Believing in" religion is obviously irrational, but being connected with a religious community can in some circumstances be a rational, and non-cynical, move.

I might also write about software debugging as a rational activity. Though that's kind of obvious, I guess. OTOH debugging is IMO a severely under-valued skill in the field of software development. Most of my work is in soft real-time systems, which requires a whole different approach to debugging than interactive/GUI/web application development.

I might write about my own brief bout with mental illness, and about the process of dealing with a severely mentally-ill close relative, from a rationalist perspective.

My favorite sentence on LW so far: "Rationalists should WIN."
