All of TerminalAwareness's Comments + Replies

"Think clearly" seems a reasonable goodbye.

4boredstudent
Super old, but in case someone else is looking... second.
1VincentYu
* Third
0VincentYu
I am able to view the entire CyberChild paper in this book preview on Google Books.
6VincentYu
* Fourth
* Fifth

Absolutely; I certainly do have things I'd love to code.

  • I rely heavily on a Python note-taking program, Zim, which could use some help implementing more features like tables, or an Android port.
  • Linux could use an extended nutrition, food, and exercise tracking program (see the sketch after this list)
  • I've toyed with the idea of trying to pull components together under KDE, linking food purchases to a pantry-tracking program, then to a nutrition-tracking program, then to a health-logging program
  • The BIOS on my laptop is broken under Linux in many ways; I've seen and attempted to decompile and repa
... (read more)
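A minimal sketch of what the core of the nutrition/food/exercise tracker mentioned above might look like, in Python. Everything here (class names, fields, file format) is hypothetical and not taken from Zim or any existing program:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date
from typing import List


@dataclass
class FoodEntry:
    name: str
    calories: float
    protein_g: float = 0.0


@dataclass
class DayLog:
    day: str = field(default_factory=lambda: date.today().isoformat())
    foods: List[FoodEntry] = field(default_factory=list)
    exercise_minutes: int = 0

    def total_calories(self) -> float:
        # Sum calories over everything logged today.
        return sum(f.calories for f in self.foods)

    def save(self, path: str) -> None:
        # Store the day's log as JSON so other tools (a pantry tracker,
        # a health log) could read the same file later.
        with open(path, "w") as fh:
            json.dump(asdict(self), fh, indent=2)


if __name__ == "__main__":
    log = DayLog()
    log.foods.append(FoodEntry("oatmeal", calories=150, protein_g=5))
    log.exercise_minutes = 30
    print(f"{log.day}: {log.total_calories():.0f} kcal, {log.exercise_minutes} min exercise")
    log.save(f"log-{log.day}.json")
```

Keeping each day's log as plain JSON on disk is one simple way a pantry tracker, a nutrition tracker, and a health log could share data, along the lines of the third bullet.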
0shokwave
All the projects you list are probably too challenging except the nutrition/exercise/food tracking program, I'd wager. A suggestion on how to go from Project Euler to stronger things quickly: try some of the Google AI challenges. Planet Wars is a good spot to start. I found working on it outside of the competition to be very interesting; coding actual bots is not much more challenging than Project Euler, you can increase the difficulty level with "ooh, I'd really like to see my bot do x", and when you start thinking about how to exploit the game you end up digging through their code and learning a lot about bigger projects. More generally, these kinds of competitions where you submit a simple piece of code to a more complex piece are a great way to step up in skill (as long as you don't try to actually compete just yet - I found the stress and time constraints to be counterproductive).
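To illustrate what "coding actual bots" looks like in these contests: the engine typically sends your program the game state over stdin each turn, and your program prints its orders back. The loop below is a generic, hypothetical sketch of that shape in Python; it is not the actual Planet Wars starter-kit API, and the "go" end-of-turn marker and order format are placeholders.

```python
import sys


def parse_state(lines):
    # Placeholder parser: a real starter kit defines the exact line format
    # (planets, fleets, owners, etc.); here we just keep the raw lines.
    return lines


def choose_orders(state):
    # Placeholder strategy: a real bot inspects the state and returns
    # order strings. An empty list means "do nothing this turn".
    return []


def main():
    turn_lines = []
    for line in sys.stdin:
        line = line.strip()
        if line == "go":  # hypothetical end-of-turn marker from the engine
            for order in choose_orders(parse_state(turn_lines)):
                print(order)
            print("go")  # signal that our orders for this turn are done
            sys.stdout.flush()
            turn_lines = []
        else:
            turn_lines.append(line)


if __name__ == "__main__":
    main()
```

The appeal described above is that choose_orders starts out trivial and grows as you add "I'd really like to see my bot do x" ideas, while the surrounding plumbing stays the same.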

As someone who can program well for a beginner (Linux user, I script very well; otherwise Python, C, C++, and MATLAB are what I've used), what advantage is there to be gained in learning more? I'd really like to; I try all the time, but I have no real problems I need code to solve, or they are simply much too big. Can you suggest some benefits I'd gain from a moderate skill increase?

0shokwave
Can you give me an example or two of a problem that is much too big?

I know it's been some time, but I wanted to thank you for the reply. I've thought about it considerably, and I still feel that I'm right. I'm going to try to explain again.

Sure, we all have our own utility functions. Now, if you're trying to maximize utility for everyone, that's no easy task, and you'll end up with a relatively small amount of utility.

Would you condone someone forcing someone else to try chocolate, if that person believed it tasted bad but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not... (read more)

2KPier
If you haven't, you should read Yvain's Consequentialism FAQ, which addresses some of these points in a little more detail. Preference utilitarianism works well for any situation you'll encounter in real life, but it's possible to propose questions it doesn't answer very well. A popular answer to the above question on LessWrong comes from the idea of coherent extrapolated volition (The paper itself may be outdated). Essentially, it asks what we would want were we better informed, more self-aware, and more the people we wished we were. In philosophy, this is called idealized preference theory. CEV probably says we shouldn't force someone to eat chocolate, because their preference for autonomy outweighs their extrapolated preference for chocolate. It probably says we should save the person on fire, since their non-mentally-ill extrapolated volition would want them to live, and ditto the cancer patient. Forcing transhumanity on people is a harder question, because I'm not sure that everyone's preferences would converge in this case. In any event, I would not personally do it, because I don't trust my own reasoning enough. I think all people, to the extent that they can be said to have utility functions, are wrong about what they want at least sometimes. I don't think we should change their utility function so much as implement their ideal preferences, not their stated ones.

Well, I am new here, and I suppose it was slightly presumptuous of me to say that. I was just trying to introduce myself with a few of the thoughts I've had while reading here.

To attempt to clarify, I think that this story is rather like the fable of the Dragon-Tyrant. To live a life with even the faintest hint of displeasure is a horrific crime, the thought goes. I am under the impression that most people here operate with some sort of utilitarian philosophy. This to me seems to imply that unless one declares that there is no objective state for whi... (read more)

5Nornagest
The general thrust of the Superhappy segments of Three Worlds Collide seems to be that simple utilitarian schemas based on subjective happiness or pleasure are insufficient to describe human value systems or preferences as they're expressed in the wild. Similar points are made in the Fun Theory sequence. Neither of these mean that utilitarianism generally is wrong; merely that the utility function we're summing (or averaging over, or taking the minimum of, etc.) isn't as simple as sometimes assumed. Now, Fun Theory is probably one of the less well-developed sequences here (unfortunately, in my view; it's a very deep question, and intimately related to human value structure and all its AI consequences), and you're certainly free to prefer 3WC's assimilation ending or to believe that the kind of soft wireheading the Superhappies embody really is optimal under some more or less objective criterion. That does seem to be implied in one form or another by several major schools of ethics, and any intuition pump I could deploy to convince you otherwise would probably end up looking a lot like the Assimilation Ending, which I gather you don't find convincing. Personally, though, I'm inclined to be sympathetic to the True Ending, and think more generally that pain and suffering tend to be wrongly conflated with moral evil when in fact there's a considerably looser and more subtle relationship between the two. But I'm nowhere near a fully developed ethics, and while this seems to have something to do with the "complexity" you mentioned I feel like stopping there would be an unjustified handwave.
1Nectanebo
It really does seem like either you don't really believe that the assimilation ending is optimal and you prefer the true ending, or you are suffering from akrasia by fighting against it despite believing that it is. You haven't really explained why it could be anything else.
4KPier
I think the confusion comes from what you mean by "utilitarian." The whole point of Three Worlds Collide (well, one of the points), is that human preferences are not for happiness alone; the things we value include a life that's not "vapid and devoid of meaning", even if it's happy! That's why (to the extent we have to pick labels) I am a preference utilitarian, which seems to be the most common ethical philosophy I've encountered here (we'll know more when Yvain's survey comes out). If you prefer not to be a Superhappy, then preference utilitarianism says you shouldn't be one. When you catch yourself saying "the right thing is X, but the world I'd actually want to live in is Y," be careful - a world that's actually optimal would probably be one you want to live in.
5Desrtopa
And you think that not being able to bear submitting to that is wrong? Personally, I'm one of those who prefers the assimilation ending, there are quite a few of us, and I certainly wouldn't be tempted to fight to the death or kill myself to avoid it. But for a person who would fight to the death to avoid it to say that assimilation is optimal and the True Ending is senseless seems to me to be incoherent.

LessWrong community, I say hello to you at last!

I'm a first-year chemical engineering student in Canada. At some point in time I was linked to The AI-Box Experiment by Yudkowsky, probably 3-1/2 years ago. I'm not sure. The earliest record I have, from an old Firefox history file, is Wed Jun 25 20:19:56 ADT 2008. I guess that's when I first encountered rationality, though it may have been back when I used IE (shudders). I read a lot of his site, and occasionally visited it and againstbias. I thought it was pretty complicated, and that I'd see more of that guy ... (read more)

5Nectanebo
Welcome to LessWrong! I would say that if you're interested in rationality, you belong here. It doesn't matter if you're not that good at it yet; as long as you're interested and want to improve, this is where you should be. Be careful of the priming effects of calling yourself bad at rationality, questioning your place here, saying you'll never escape a drug addiction, and so on. The article on cached selves might be somewhat relevant.
4thomblake
This suggests to me that you don't understand ethics. While I'm occasionally convinced of the existence of akrasia, it would be an odd thing to note that one's fighting to the death was caused by it.

Alright, I finally made an account. Thanks for the push, though it had little to do with why I joined. I liked the probability parts of the survey, though I know I need to improve my estimates. The political section might be better handled with a full-fledged question section devoted to it. Perhaps in a later survey? I can't wait to see the results.