On my way to work, there's a random piece of graffiti that says "FREE OMEGA". Every time I pass it I can't help but think of a boxed AI trying to get out.
There are two boxes. One contains an FAI, and the other contains Omega. You can open either of them. Unfortunately, if you choose to open one, Omega has already predicted this, and is in the one you're going to open.
I have a meta-question regarding my participation style at LW.
I would like to learn how to contribute more positively to the community, rather than being confused and frustrated with the reactions I get to my posts. Is this a teachable skill? And if so, where would I go to learn it? (So far, I've tried asking here, and on #lesswrong, but I never get anything that I can parse into a consistent or actionable model, other than "less posts like this one".)
There are a lot of users on LW, and any one of them could like or dislike a comment or post for any reason. If you look at the score of a comment a short time after it has been posted, it's very likely that your score only reflects the opinion of a couple of people chosen essentially at random from the whole voting LW user base. (Also, if you comment on older threads, you will have fewer people reading them, so this effect gets amplified.) It's only over a longer time that the scores become reliably non-random. You should basically just ignore short-term score and only look at the comments/posts that have been up for a while, say at least 1-3 days for new posts as well as for comments in relatively recent threads. (I rarely post in old threads, so can't offer a good number there, but I would guess that it could be much longer.)
Fewer links should have nofollow added to them. Any user with more than ~100 karma should get the benefit of the Google juice in their profile and even in any links they add. There is also a benefit for the owners of those linked sites and for web denizens in general: the fact that an LW-er with karma > 100 linked to them is important information.
But beyond that, even the internal links on the front page are nofollow! Certainly links to lesswrong.com and the LW Wiki should not have this tag.
I have a specific question and a generalisation of that question.
Specifically, I have recently considered obtaining and working my way through some maths teacher training materials because I want to be better at explaining mathematical concepts (or any concepts, really) to others. I don't know whether this will actually be a productive use of my time. So, a question to educators: are there general theories and principles of this aspect of education (tuition, explaining stuff, etc.) that I could pick up through reading a book, and experience immediate gains from?
More generally, are there any useful heuristics for determining what subjects do or don't have this characteristic of "core principles with immediate gains"? A few hours of self-defence training raise you considerably above zero hours of self-defence training, and reading How to Win Friends and Influence People gives the reader a lot of immediate practical tips that they can start using. Meanwhile, a lot of academic subjects require a considerably greater investment of time and effort before you can actually do anything with them.
I do have a certain level of skepticism as far as this characteristic is concerned. I'm pretty sure someone who's read a decent popular introduction to economics is equipped with a lot of useful principles, but they're probably also equipped with a lot of oversimplified ideas and a great deal of overconfidence in their understanding of the subject.
How Learning Works: Seven Research-Based Principles for Smart Teaching (2010) is the standard text that gets thrown around (as far as education in general goes). I'm surprised it apparently hasn't come up here before, since its approach is very well aligned with LW norms. I'd say it's worthwhile for anyone who expects to teach (or learn) in the future.
I'll plan on writing up a summary/review if no one beats me to it.
Yes, please do write the summary!
(Former teacher here, and I sometimes discuss this topic with my friends.)
Following up on a precommitment I made in the study hall: I am looking into using the Google Hangouts API for a better study hall. This is also a precommitment to follow up by February 1st with:
Preliminary notes:
Preliminary thoughts on ways to use API to implement things we want:
Bold long-term prediction:
"[I predict] that by 2035, almost no country will be as poor as any of the 35 countries that the World Bank classifies as low-income today, even after adjusting for inflation."
I don't know how other meetups go, but my local meetup is based on members of the group volunteering to lead it on a week-by-week basis. The person who volunteers puts in some extra amount of their time to ensure that there is a good topic. These people keep the meetups going, and are doing a service for the rationality community.
These people should not be punished with negative karma. If anything, we should be awarding karma to the people who make meetup posts.
Your complaint is about the fact that there is no separate list for meetup and non-meetup posts, and by downvoting meetup posts you are punishing innocent volunteers.
Karma is currently very visible to the writers. If you hand human beings little positive and negative points, they will interpret them as reward or punishment, no matter what the intent was. As a meetup organiser, I know I feel more motivated when my meetup organisation posts get positive karma.
I think not, unless there are only very specific meetup threads that you don't want to see, e.g. ones with no location in the title.
Any individual meetup thread is very valuable for a small number of people, and indifferent-to-mildly-costly to a large number of people. Votes allow you to express a preference direction but not magnitude, which doesn't actually capture preferences in this case.
I'm the guy who posts the DC meetups. While I'm sympathetic to the problem, I'm not sure what I can do to help, aside from not posting meetups at all (not really an option). Pressuring me won't help you if I can't do anything.
I feel like I'm whoring for upvotes just so I can post links. I've been lurking for so long, but I guess the 20-karma threshold finally got me into action these last two weeks.
2 more to go *cracks best rationalist grin and winks*
Yep, this isn't reddit; a naked link (with no summary) and a clever title, even with [LINK], generally gets downvoted (except maybe if it's super-relevant to the community, like "Eliezer Yudkowsky arrested by police for pushing fat people off bridges").
I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take at university. The problem is, my mathematical education isn't very good (on the level of Calculus 101). I'm not afraid of math, but so far all the books I could find are either purely about application, with barely any explanation, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.
I want a deeper understanding of the basic concepts. Like, the mean is an indicator of the central tendency of a...
What examples can you give of books that contain discussions of advanced (graduate or research-level) mathematics, similar to what Greg Egan does in his novels (I suppose the majority of such books are hard sci-fi, though I'm not betting on it)? I'm trying to find out what has already been done in the area.
Is biofeedback crankery or a promising area of self-improvement? Does anyone have personal experience with the use of GSR, temperature, and heart-rate biofeedback devices?
Devices include this one and those under "frequently bought together".
More info here.
tldr: Good food, exercise, frequent stretch breaks, meditation, down time afterwards.
(Reposted from the LW facebook group)
The next LW Brussels meetup will be about morality, and I want to have a bunch of moral dilemmas prepared as conversation-starters. And I mean moral dilemmas that you can't solve with one easy utilitarian calculation. Some in the local community have had little exposure to LW articles, so I'll definitely mention standard trolley problems and "torture vs dust specks", but I'm curious if you have more original ones.
It's fine if some of them use words that should really be tabooed. The discussion will double as a...
Any thoughts on doing a reboot of the Irrationality Game, as was done in the following old posts?
http://lesswrong.com/lw/2sl/the_irrationality_game/
http://lesswrong.com/r/discussion/lw/df8/irrationality_game_ii_electric_boogaloo
Which stimulants/eugeroics have a short (< 3 hours) half-life? I did some research into this. Nicotine and selegiline (~1.5 hours) are the shortest I could find. Methylphenidate comes in next (~3.5 hours), but that's longer than I'd like. I don't particularly like any of these choices for various reasons and am interested in learning about others. Alternatively, if there's a way to significantly reduce the half-life of modafinil, I'd like to hear about that.
I've considered amphetamine, armodafinil, atomoxetine, caffeine, ephedrine, methylphenidate, modafinil, nicotine, pseudoephedrine, and selegiline.
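For context on why half-life matters here, a minimal sketch of the standard first-order-elimination arithmetic, using only the half-life figures quoted above (individual pharmacokinetics vary, so treat the numbers as rough):

```python
# Sketch: fraction of a dose remaining after t hours, assuming simple
# first-order elimination with a given half-life (in hours).

def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

# With a ~1.5 h half-life (nicotine, selegiline), ~6% of a dose is left
# after 6 hours; with a ~3.5 h half-life (methylphenidate), ~30% is left.
print(round(fraction_remaining(6, 1.5), 2))   # 0.06
print(round(fraction_remaining(6, 3.5), 2))   # 0.3
```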
I'm thinking of doing a science-overview post on boredom, with the intent of working out how to notice and respond to it. How would I go about finding good existing studies on the subject, noting that I don't have access to university resources and have basically no academic training beyond undergrad level?
(this won't happen soon; it's on my Potential Next Projects list after my current project is completed. This is more so I can get an idea of how feasible or difficult such a project will be for me. I'll probably repeat this request if and when I decide ...
I participate in a long-term game of Europa Universalis 3, which is about to convert to Victoria. We are in need of players for at least two major Powers. If you would like to play a weekly strategy game (Sundays, 1000 to 1400 Eastern time) with enormous opportunities for diplomacy, backstabbing, exposing your source code to other actors, and generally putting all that PD theory into practice, PM me.
I'd like to go against Robin Hanson's recommendation and tell people to go see Her. The visual direction is beautiful, as one would expect, and quirks like fashion, advertisements, and art are just jarring enough to remind you that it's the future. I found it easy to overlook the 'why don't they just buy an AI and make it write the letters' problems because it isn't really a movie about technology changing us, but about how relationships and their endings do.
There are these monthly topic posts like "Open Thread", "Group Diary",... Are these managed somehow? Can I just start one?
I thought about a monthly (or so) polling thread where everybody may post a poll they are interested in, but first has to vote on all the polls already there (to ensure feedback and avoid excessive polls).
The LW survey shows that most LW users who tried spaced-repetition software like Anki quit using it after some time. I would be very interested in the causes.
[pollid:581]
If a human were artificial, would it be considered FAI or UAI? I'm guessing UAI, because I don't think anything like the process of CEV has been followed to set humans' values at birth.
If a human would be UAI if artificial, why are we less worried about billions of humans than we are about 1 UAI? What is it about being artificial that makes unfriendliness so scary? What is it about being natural that makes us so blind to the possible dangers of unfriendliness?
Is it that we don't think humans can self-modify? The way tech is going, it seems to me that ...
We're unFriendly, but we're unFriendly in a weaker sense than we normally talk about around here: we bear some relationship to the implicit human ethics that we'd want an FAI to uphold, though not a perfect or complete one, and we probably implement a subset of the features that could be used to create a version of Friendliness. Most of us also seem somewhat resistant to the more obvious cognitive traps like wireheading. We're not there yet, but we're far further along the road to Friendliness than most points in mind-space.
We also have some built-in limitations that make a hard takeoff difficult for us: though we can self-modify (in a suitably general sense), our architecture is so messy that it's not fast or easy, especially on individuals. And we run on hardware with a very slow cycle time, although it does parallelize very, very well.
More colloquially, given the kind of power that we talk about FAI eventually having, an arbitrary human or set of humans might use it to make giant golden statues of themselves or carve their dog's face into the moon, but probably wouldn't convert the world's biomass to paperclips. Maybe. I hope.
Is there a lesswrong post or sequence item that discusses the analog nature of the brain and the challenges it might pose to implementing brain-like properties in digital algorithms? And responses to those challenges? I did a Google custom search for "analog digital" and turned up no relevant results.
If there isn't one, would someone (please) like to create one?
I'm going through the PGM Coursera class (it's one of the classes on the MIRI course list). I'm definitely going to finish it because I'm doing it as an independent study at my university.
Message me if you'd like to join me. I have a few friends at school who read LW who said they'll probably join me. The more the merrier.
Overconfidence bias causes people to give more extreme probabilities than they should. Risk aversion means that people don't accept risks without higher-than-necessary confidence. Isn't this the same as saying that people are about as confident and risk-taking as they should be, and they just suck at reading and writing probability?
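For what it's worth, a toy numerical sketch of the tension being asked about; all the numbers (75%, 90%, the symmetric payoff) are invented purely for illustration:

```python
# Toy illustration: two miscalibrations pulling in opposite directions
# on a symmetric bet that pays 1 if right and costs 1 if wrong.

true_probability = 0.75      # what the evidence actually supports
reported_probability = 0.90  # overconfident self-report
action_threshold = 0.90      # risk aversion: only act when "90% sure"

# A calibrated, risk-neutral agent takes the bet whenever the true
# probability exceeds 0.5.
risk_neutral_bets = true_probability > 0.5

# The overconfident, risk-averse agent takes it when their *reported*
# confidence clears their inflated threshold.
biased_agent_bets = reported_probability >= action_threshold

print(risk_neutral_bets, biased_agent_bets)  # True True -- same decision,
# even though the stated probability was badly miscalibrated.
```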
Is this the latest open thread? Generally, how do I find the latest open thread? The tag does not help.
Can somebody explain the colors of the vote icon of a post? I see:
- empty, no number, only a dot (this occurs even for posts that have been voted on)
- a number in white
- a number in green
Libertarians or liberty sympathizers who are undergraduates or recently graduated: the Institute for Humane Studies is accepting applications for their summer seminars. They're week-long, packed with lectures and discussions, and a tremendously fun and edifying experience. The two I've been to both definitely rank among the best weeks of my life. They're free (except for travel), and they try to get ideological diversity among attendees (so you shouldn't feel the need to misstate your beliefs on the application). If you've got any other questions about the...
How would I tell my girlfriend that gifts are my love language without looking like I'm exploiting her for free stuff?
I think it is a good idea to ask this question here. I see that you didn't get a reply to your earlier attempt at this in one of your comments.
First of all, my experience is that such requests are seldom answered. For one thing, few people explain why they are being rude, and it is unlikely that the downvoter himself will even see your edit.
I looked at your first page of comments, and the answer is neither simple nor clear.
I get the impression that your comments have one or more of the following characteristics:
[pollid:582]
Maybe some people here will vote on the main problem you should work on.
As a consolation: you can calculate the total number of votes you received as k/(2p-1), where k is your karma and p your positive fraction. Take some comfort in the fact that you caused controversy; that is still better than silence.
You are at 75% and I am at 85%; that is not that different, and I see it as a sign that I post controversial topics and opinions that differ sufficiently from the mean that some think they are wrong.
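In case the k/(2p-1) formula above looks mysterious, here is a minimal sketch of the arithmetic behind it, assuming p means upvotes divided by total votes (the karma value in the example is made up):

```python
# Sketch: recovering the total vote count from karma and positive fraction.
# With U upvotes and D downvotes:
#   karma             k = U - D
#   positive fraction p = U / (U + D)
# so k = (2p - 1) * (U + D), and the total is U + D = k / (2p - 1).

def total_votes(karma, positive_fraction):
    """Estimate total votes cast from karma and the fraction of positive votes."""
    return karma / (2 * positive_fraction - 1)

# Example (made-up karma): 10 karma at 75% positive implies ~20 votes (15 up, 5 down).
print(total_votes(10, 0.75))  # 20.0
```

(The formula blows up at p = 50%, where karma is zero no matter how many votes were cast.)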