Random LW-parodying Statement Generator

59 Armok_GoB 11 September 2012 07:57PM

So, I was looking at this, and then suddenly this thing happened.

EDIT:

New version! I updated the link above to it as well. Added LOADS and LOADS of new content, although I'm not entirely sure it's actually more fun (my guess is there's more total fun due to variety, but that it's more diluted).

I ended up working on this basically the entire day today, and implemented practically all the ideas I have so far, except for some grammar issues that'd require a disproportionate amount of work. So unless there are loads of suggestions, or my brain comes up with lots of new ideas over the next few days, this may be the last version in a while; at that point I may call it beta and ask for spell-checking. Still alpha as of writing this, though.

Since there were some close calls already, I'll restate this explicitly: it'd be easier for everyone if there weren't any forks for at least a few more days, even ones just for spell-checking. After that, or once I move this to beta, feel more than free to do whatever you want.

Thanks to everyone who commented! ^_^

old Source, old version, latest source

Credits: http://lesswrong.com/lw/d2w/cards_against_rationality/ , http://lesswrong.com/lw/9ki/shit_rationalists_say/ , various people commenting on this article with suggestions, and random people on the bay12 forums who helped me, ages ago, with the engine this is descended from.
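For anyone curious how a generator like this typically works: the usual approach is to fill random templates from word lists. Here's a minimal sketch of that idea; the templates and word lists below are invented for illustration and are not taken from the actual generator's source.

```python
import random

# Hypothetical mini-version of a template-fill statement generator.
# All templates and word lists are made up for this example.
TEMPLATES = [
    "I precommitted to {verb} my {noun}, but akrasia won.",
    "Have you considered that your {noun} is just a {noun2} in disguise?",
    "Shut up and {verb} the {noun}.",
]

WORDS = {
    "verb": ["multiply", "update", "steelman", "decompartmentalize"],
    "noun": ["prior", "utility function", "paperclip", "ugh field"],
    "noun2": ["cached thought", "basilisk", "trolley problem"],
}

def generate(rng=random):
    """Pick a random template and fill every placeholder with a random word."""
    template = rng.choice(TEMPLATES)
    # str.format ignores keyword arguments a template doesn't use,
    # so we can pass a fill word for every category unconditionally.
    return template.format(**{key: rng.choice(words) for key, words in WORDS.items()})

if __name__ == "__main__":
    print(generate())
```

Most of the "fun per statement" then comes down to how the lists are curated, which matches the post's observation that adding loads of content can dilute the humor even as it adds variety.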

"The True Rejection Challenge" - Thread 2

7 Armok_GoB 02 July 2011 11:49AM

The old thread (found here: http://lesswrong.com/lw/6dc/the_true_rejection_challenge/ ) was becoming very unwieldy and hard to check, so many people suggested we make a second one. I just realized that the only reason it didn't exist yet was something bystander-effect-like, so I decided to just make this one.

From the original thread:

An exercise:

Name something that you do not do but should/wish you did/are told you ought, or that you do less than is normally recommended.  (For instance, "exercise" or "eat vegetables".)

Make an exhaustive list of your sufficient conditions for avoiding this thing.  (If you suspect that your list may be non-exhaustive, mention that in your comment.)

Precommit that: If someone comes up with a way to do the thing which doesn't have any of your listed problems, you will at least try it.  It counts if you come up with this response yourself upon making your list.

(Based on: Is That Your True Rejection?)

Edit to add: Kindly stick to the spirit of the exercise; if you have no advice in line with the exercise, this is not the place to offer it.  Do not drift into confrontational or abusive demands that people adjust their restrictions to suit your cached suggestion, and do not offer unsolicited other-optimizing.

Specific Fiction Discussion (April 2011)

6 Armok_GoB 14 April 2011 12:29PM

Seeing some recent comments on my links comment, I think this thread might be warranted.

This is a thread for discussing specific works of fiction: books, movies, TV shows, webcomics, fanfiction, whatever. Its purpose is to provide a rationality perspective on works that are not necessarily aimed at rationalists (but given the overlap in target audience, I predict many of them are anyway...).

To keep this organized, please follow these guidelines when posting. Top-level comments should, with NO exceptions (I'll make a single meta comment where discussion about this thread itself can go), fit one of the following templates:

For a single work, the top-level comment should consist of the full title, a link to where the work can be found online if applicable, and its TV Tropes page, OR a short description ONLY if there is no TV Tropes page for it.

For certain authors who have written a lot of books popular on LW, such as Vernor Vinge, discussion of each work might tend to dominate the thread; therefore there should be one post for ALL the works of such an author, and these can be split off into their own threads if discussion grows too big for that. The format for these comments is: the author's name, a link to their Wikipedia page (or homepage if they don't have one), and a short bibliography, to make it easier to avoid making separate top-level comments for their books.

Also, please refrain from discussing things written by Eliezer, or anything that otherwise already has a discussion space on LW, for similar reasons to why you should avoid discussing a certain institute, and because it'd be redundant.

If this thread grows large and popular, I'm thinking this might become a monthly thing, hence the (April 2011) part.

Problem noticed in an aspect of LW community bonding?

18 Armok_GoB 05 April 2011 11:40PM


I have noticed that, given how much I identify as a rationalist, how much I have in common with the community here, how important I consider it, etc., I have surprisingly little instant in-group identification with community members compared to other online communities. There seems to be an aspect of social involvement that LW does badly at, and there is one obvious first suspect for what's lacking: off-topic, unstructured chatter.

What makes me feel that I identify with some community online is in fact not usually the thing the community is ABOUT. Instead, it's the things that grow out of the sides: forum games, members' art projects, photo-sharing threads, fanworks. I could speculate on why this is so, but that doesn't seem very useful at the moment; I'm not highly confident in any specific theory, and most people will probably find it fairly obvious anyway.

LW, however, has no real room for this. Even in the discussion section, things that are not reasonably on topic are punished with negative karma. Now, this is obviously needed, but one must still recognize there IS a price to being so structured and focused on a single goal when humans naturally tend not to be. Look for third options.

Now, I have a specific solution in mind, but I'm going to hold off on proposing it and see if you come up with something better before I post my idea.


EDIT: My suggestion has now been added in the comments; please check it out.


Some altruism anecdotes [link]

7 Armok_GoB 16 March 2011 10:26PM

(I am not sure if this is the right place to post this; if I'm wrong, please just tell me so and I'll delete the post, OK? No need to give me -20 karma.)

So, I just stumbled upon this compilation of tweets from the recent tragedies in Japan, with translations. Link to the forum post where I found it: http://www.bay12forums.com/smf/index.php?topic=79383.msg2080663#msg2080663

While it may not seem very relevant at first, I actually found that a fair number of them relate to things that are: the volition of humanity, sanity waterlines and how a less mad world might look, "MoR!Hermione"-type people (and possibly evidence that the right social climate can make them more likely), the notion that a society of rationalists should win (this one may be a bit far-fetched), and, considering the length of this list, probably a few things that I missed!

It is also rather heartwarming! :) ((Free fuzzies, so now money is freed up for you to spend on utilons instead. :P ))

Comprehensible Improvements: Things You Could Do

-1 Armok_GoB 11 February 2011 11:15PM

Edit 2: Reactions to the edit made me reconsider, partially. I might get around to making more posts here.

EDIT: Because this and all my comments on it are getting downvoted already, I won't bother finishing this, and I wish I'd never posted it. Should I delete this thread or leave it as a monument to my own pathetic failure?


The topic of what you'd do if you found yourself as an upload and were to self-improve is dangerous to think about for many reasons. It's unlikely to happen before the singularity, and if it happens afterwards you'll have knowledge and a community that render current speculation moot. As a human you almost certainly can't reach superintelligence without becoming Unfriendly. You can't think about any changes that improve intelligence beyond the first iteration, because that'd be trying to predict something smarter than you. Etc.

However, even if you can only think about the very start of it, and the actual predictions or plans you generate neither will nor should have any reason to happen, there can be less direct benefits. The dominant one is that it's damn fun; thinking about things you could do to your mind is way more interesting than what you could do with that hot guy/gal sitting in front of you on the bus, or what you'd do with a billion dollars. More importantly, though, it serves to provide a LOWER BOUND, helping against failures of imagination and providing more salient, near-mode motivation for a friendly singularity, by establishing that life after it will be at least this good, and that the only reason you won't do these awesome things is that you'll be provided with even better alternatives. Lastly, the chance is infinitesimal, but maybe you really will at some point have to bootstrap the singularity from only your own upload, and then a repository of the least unsafe upgrades LW could think of might come in handy. Just don't fool yourself into thinking the first one isn't the real reason for doing this, though. :p

Now, it happens that all three of these goals actually share the same most important heuristic: keep it comprehensible to a vanilla human. There is a limited amount of fun to be gained from thinking of a change to make if your brain can't respond with what it'd feel like afterwards. Likewise, in the second goal, the abstract "something really good, but I don't know how good or in what exact way" is what we're trying to get away from. And for the last one, making only changes you can comprehend is just common sense; "know what you're doing" taken literally.

So, for the format of this thread: have discrete improvement suggestions, and put only one in each comment, with a witty title bolded. To keep it from degenerating into buzzwords and the obvious (though all of these are very loose suggestions), here are a few guidelines that improvements should follow:

  • The exact situational assumptions for each example may vary, but in general: you're yourself, uploaded to a machine with enough power to simulate you at 10 to 10^12 times human speed, with 10 to 10^12 times the required memory, containing only you and software not much more advanced than we have today, using an architecture that presents no additional obstacles to anything (for example, all the computing power can be used serially and latency can be considered negligible). You have no reason to be interested in the outside world, are under no obligation to personally cause the singularity, and are just enjoying yourself, while making sure you do not foom and cause a bad one. These just establish a default; you're free to make other assumptions, but you have to write them out.
  • It should be highly predictable and EASILY comprehensible. I won't bother defining this other than by heuristic: you should be able to predict what you'd do and feel after the change as well as you'd be able to predict what you'd do before it. By this definition, reading a book you haven't read before is an example of a non-comprehensible change, but being wireheaded is a comprehensible one. The narrowness of this is indeed excessive, but I'm confident it still gives a large enough search space, and there is no need to go further into unpredictability than necessary.
  • Keep it low level. The point of this is things you can vividly imagine, and it's very easy to get carried away into far mode and abstraction. Talk neurons and algorithms, not ideas and functionality. Or rather, talk about the low-level changes first and then the results they give on higher levels. Describe not what end result it'd be cool to have, but what procedure it'd be fun to do!
  • Have a witty title. It should be in bold.
  • Keep it fun. This is intended to be a fair bit less serious than most LW discussions.
  • Keep it something more than fun, and on topic for LW.
  • Look at the examples I make.

EDIT: Damn, it's really late and I was a lot wordier than I thought, so I don't have time to write the actual examples. I'll hopefully do that tomorrow. Sorry. :(