
LINK: In favor of niceness, community, and civilisation

26 Solvent 24 February 2014 04:13AM

Scott, known on LessWrong as Yvain, recently wrote a post complaining about an inaccurate rape statistic.

Arthur Chu, notable for his recent Jeopardy! winnings, argued against Scott's stance that we should be honest in arguments, in a comment thread on Jeff Kaufman's Facebook profile, which can be read here.

Scott just responded here, with a number of points relevant to the topic of rationalist communities.

I am interested in what LW thinks of this.

Obviously, at some point insisting on politeness in our arguments becomes silly. I'd be interested in people's opinions on how dire the real-world consequences have to be before it's worthwhile to debate dishonestly.

Meetup : Inaugural Canberra meetup

1 Solvent 30 January 2014 05:03PM


WHEN: 12 February 2014 07:30:00PM (+1100)

WHERE: CSIT Building, Acton ACT 2601, Australia

(I'm posting this on behalf of my lurker friend.)

WHEN: 12 February 2014 07:30:00PM (+1100)

WHERE: CSIT Building, Building 108, North Road, ANU

This will be the first meetup in Canberra! We will get to know each other and play board games, joining the weekly board games night held by the ANU Computer Science Students' Association. Vegan-friendly snacks and some board games will be provided, although if you have board games of your own, it would be great if you could bring them. I will be waiting at the main door of the building from 7:20 to 7:40 to show people to the room where we will be meeting.


Wes Weimer from Udacity presents his list of things you should learn

9 Solvent 23 June 2012 07:04AM

I've just gotten to the end of Udacity's CS262 course in programming languages. It's been pretty good. Wes Weimer, the lecturer, seems like a really cool guy. There's even a quote from HPMOR in the final exam, which was a nice surprise.

In the last part of the last lecture, Weimer gives advice on what we should learn next. You can watch it here.

He advises that you learn the following (paraphrased):

Philosophy until you've covered epistemology, formal logic, free will, the philosophy of science, and what it's like to be a bat.

Cognitive psychology until you've covered perception, consciousness, and the Flynn effect.

Speech or rhetoric until you've covered persuasion.

Anthropology and gender studies, to get an idea of which behaviors are socially constructed and which are essential.

Statistics, until you can avoid being fooled by either others or yourself.

Religion or ethics until you've covered the relationship between unhappiness and unrealized desires.

Physics and engineering until you can explain how a microphone, speaker, and radio all work.

Government until you have an opinion about legislating morality and the relative importance of freedom and equality.

History until you are not condemned to make the mistakes of the past.

Life until you are happy. They say ignorance is bliss, but they are wrong all but finitely often.

I thought that was all really useful (except maybe the last two). I've covered his recommended level of philosophy, cognitive psychology, and religion and ethics. I'm working on the physics and gender studies.

(Incidentally, I strongly recommend Udacity for learning programming. It's really good.)

A simple web app to create aversion to unhealthy food

5 Solvent 05 May 2012 06:07AM

At a startup camp weekend, I made this (incredibly sketchy, amateurish) app (NSFW), which uses classical conditioning to create a negative response to candy, smoking, or meat.

After using it intensively for about a day while writing it, I certainly feel disgusted at the sight of candy or chocolate. There's a little bit of evidence that something like this could actually have an effect on people: I'm reminded of the penny jar fetish experiment and this chapter on "curing" homosexuality.

I post this here for two reasons: to get advice on how to improve the app, and to get your opinions on whether this type of program might actually be useful. I'm reminded a bit of Anki, where the science led to an effective app. In this case, I'm not sure whether it will turn out to be genuinely useful: in the experiments that have shown an effect from aversion stimuli like this, the unpleasant image has been causally connected to the stimulus being conditioned against, e.g. unhealthy food paired with diseased tissue.

Go easy on the web design; I'm just an amateur. :)

Thanks

EDIT: Yeah, it should be marked NSFW. Sorry.

Does anyone know any kid geniuses?

9 Solvent 28 March 2012 12:03PM

I'm friends with an incredibly smart kid. He's 14, and at one point was moved up three grades in school. He does all the obvious enrichment things available in the relatively small Australian city where he lives.

His life experience has been pretty unusual. He doesn't really know what it's like to be challenged in school. All his friends are way older than he is. (Once, I asked him how being constantly around people older than him made him feel. He replied, "Concerned for my future.")

He doesn't know anyone like him, which I think is a shame: he'd probably get along very well with them.

Does anyone know any similar kid geniuses? If so, can I give them my friend's details?

Thanks.

Anyone have any questions for David Chalmers?

11 Solvent 10 March 2012 09:57PM

I'm doing an undergraduate course on the Free Will Theorem, with three lecturers: a mathematician, a physicist, and David Chalmers as the philosopher. The course is a bit pointless, but the company is brilliant. Chalmers is a pretty smart guy. He studied computer science and math as an undergraduate, before "discovering that he could get paid for doing the kind of thinking he was doing for free already". He's friendly; I've been chatting with him after the classes.

So if anyone has questions for him that seem interesting enough, I could approach him with them.

Emailing him also works, of course, but discussing things in person builds understanding faster. For example, in a short discussion with him I understood his position on consciousness far better than I would have just from reading his papers on the topic.

My summary of Eliezer's position on free will

16 Solvent 28 February 2012 05:53AM

I'm participating in a university course on free will. On the online forum, someone asked me to summarise Eliezer's solution to the free will problem, and I did it like this. Is it accurate in this form? How should I change it?

 

“I'll try to summarise Yudkowsky's argument.

As Anneke pointed out, it's kinda difficult to decide what the concept of free will means. How would particles or humans behave differently if they had free will compared to if they didn't? It doesn't seem like our argument is about what we actually expect to see happening.

This is similar to arguing about whether a tree falling in a deserted forest makes any noise. If two people are arguing about this, they probably agree that if we put a microphone in the forest, it would pick up vibrations. And they also agree that no-one is having the sense experience of hearing the tree fall. So they're arguing over what 'sound' means. Yudkowsky proposes a psychological reason why people may have that particular confusion, based on how human brains work.

So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question.

It feels like I choose between some of my possible futures. I can imagine waking up tomorrow and going to my Engineering lecture, or staying in my room and using Facebook. Both of those imaginings feel equally 'possible'.

Humans execute a decision-making algorithm that is fairly similar to the following one (a rough code sketch follows the list).

  1. List all your possible actions. For my lecture example, that was “Go to lecture” and “Stay home.”

  2. Predict the state of the universe after pretending that you will take each possible action. We end up with “Buck has learnt stuff but not Facebooked” and “Buck has not learnt stuff but has Facebooked.”

  3. Decide which is your favourite outcome. In this case, I'd rather have learnt stuff, so the first outcome wins.

  4. Execute the action associated with the best outcome. In this case, I'd go to my lecture.
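
Here's a minimal Python sketch of those four steps (just an illustration; the predicted outcomes and the numeric preference scores are made up):

```python
# Toy version of the four-step algorithm above. The predicted outcomes and
# preference scores are invented for illustration; a real agent would use a
# world model and something like a utility function.

def predict_outcome(action):
    # Step 2: pretend the action is taken and imagine the resulting world.
    return {
        "go to lecture": "Buck has learnt stuff but not Facebooked",
        "stay home": "Buck has not learnt stuff but has Facebooked",
    }[action]

def preference(outcome):
    # Step 3: say how much we like each imagined world (higher is better).
    return {
        "Buck has learnt stuff but not Facebooked": 10,
        "Buck has not learnt stuff but has Facebooked": 3,
    }[outcome]

def decide(actions):
    # Step 1 is the list of actions passed in; steps 2-3 rank them.
    return max(actions, key=lambda a: preference(predict_outcome(a)))

# Step 4: execute the action associated with the best imagined outcome.
print(decide(["go to lecture", "stay home"]))  # -> "go to lecture"
```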

Note that the above algorithm can be made more complex and powerful, for example by incorporating probability and quantifying your preferences as a utility function.

As humans, we need the capacity to pretend that we could choose different things, so that we can imagine the outcomes and pick effectively. Our brains implement this by considering the worlds we could reach through our choices and treating each of them as possible.

So now we have a fairly convincing explanation of why it would feel like we have free will, or the ability to choose between various actions: it's how our decision-making algorithm feels from the inside.”

Hard philosophy problems to test people's intelligence?

-2 Solvent 15 February 2012 04:57AM

I'm looking for hard philosophical questions to give to people to gauge their skill at philosophy.

So far, I've been presenting people with Newcomb's problem and the Sleeping Beauty problem. I've also been giving them contrarian opinions and asking them to evaluate them, and I have a higher opinion of them if they engage rather than just flinching away from the subject.

What other problems should I use? 

The utility of information should almost never be negative

5 Solvent 29 January 2012 05:43AM

As humans, we find it unpleasant to learn facts that we would rather not be true. For example, I would dislike finding out that my girlfriend was cheating on me, that a parent had died, or that my bank account had been hacked and I had lost all my savings.

 

However, this is a consequence of the dodgily designed human brain. We don't operate with a utility function. Instead, we have separate neural circuitry for wanting and liking things, and behave according to those. If my girlfriend is cheating on me, I may want to know, but I wouldn't like knowing. In some cases, we'd rather not learn things: if I'm dying in hospital with only a few hours to live, I might rather be ignorant of another friend's death for the short remainder of my life.

 

However, a rational being, say an AI, would never rather not learn something, except for contrived cases like Omega offering you $100 if you can avoid learning the square of 156 for the next minute.

 

As far as I understand, an AI with a set of options decides by using approximately the following algorithm. This algorithm uses causal decision theory for simplicity.

 

"For each option, guess what will happen if you do it, and calculate the average utility. Choose the option with the highest utility."

 

So say Clippy is using that algorithm with the utility function utility = number of paperclips in the world.
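
For concreteness, here's a rough Python sketch of that decision rule (the options and numbers below are invented for illustration, not taken from anywhere):

```python
# Minimal sketch of the decision rule quoted above: for each option, average
# the utility of its possible outcomes, then pick the option with the highest
# average. Each option maps to (probability, paperclips) pairs.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def choose(options):
    return max(options, key=lambda name: expected_utility(options[name]))

options = {
    "keep making clips by hand": [(1.0, 50)],
    "build a clip factory": [(0.5, 200), (0.5, -20)],  # might fail
}
print(choose(options))  # -> "build a clip factory" (expected 90 vs 50)
```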

 

Now imagine Clippy is on a planet making paperclips. He is considering listening to the Galactic Paperclip News radio broadcast. If he does so, there is a chance he might hear about a disaster leading to the destruction of thousands of paperclips. Would he decide in the following manner?

"If I listen to the radio show, there's maybe a 10% chance I will learn that 1000 paperclips were destroyed. My utility from that decision would be reduced by 100 on average. If I don't listen, there is no chance that I will learn about the destruction of paperclips. That means no utility reduction for me. Therefore, I won't listen to the broadcast. In fact, I'd pay up to 100 paperclips not to hear it."

 

Try and figure out the flaw in that reasoning. It took me a while to spot it, but perhaps I'm just slow.

 

 

* thinking space *

 

 

For Clippy to believe "If I listen to the radio show, there's maybe a 10% chance I will learn that 1000 paperclips were destroyed," he must also believe that there is already a 10% chance that 1000 paperclips have been destroyed. So his expected utility is already reduced by 100 whether or not he listens. If he listens to the radio show, there's a 90% chance his estimate will go up by 100, and a 10% chance it will go down by 900, relative to ignorance; the expectation is the same either way. And so, he would be indifferent to gaining that knowledge.
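
To spell the arithmetic out, here's a quick check using only the numbers from the example:

```python
# Clippy's expected paperclip loss, with and without listening to the broadcast.
# Numbers from the example above: a 10% prior chance that 1000 paperclips
# were destroyed.
p_bad = 0.10
loss = 1000

# Expected loss if Clippy stays ignorant: the prior penalty is already there.
ignorant = -p_bad * loss                 # -100

# Expected loss if he listens: 90% good news (loss 0), 10% bad news (loss 1000).
listening = 0.9 * 0 + 0.1 * (-loss)      # -100

print(ignorant, listening)  # -100.0 -100.0 -- the same, so the news itself costs nothing
```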

 

As humans, we don't work that way. We don't constantly feel the pressure of knowledge like "people might have died since I last watched the news," simply because humans don't deal with probability in a rational manner. And, as humans who feel things, learning about bad things is unpleasant in itself. If I were dying in my bed, I probably wouldn't even think to increase my probability that a friend had died just because no-one would have told me if they had. An AI probably would.

 

Of course, in real life, information has value. Maybe Clippy needs to know about these paperclip-destroying events so he can protect his own paperclips from them, or he needs to keep up with current events to socialise effectively with other paperclip enthusiasts. So he would probably gain utility on average from choosing to listen to the radio broadcast.

 

In conclusion: an AI may prefer the world to be in one state rather than another, but it almost always prefers more knowledge about the actual state of the world, even if what it learns isn't good.

ICONN 2012 nanotechnology conference in Perth

1 Solvent 17 January 2012 02:34AM

I won a prize and get to travel to Perth, Australia to attend the 2012 ICONN nanotechnology conference.

Is anyone else from LessWrong going to be there?

I don't actually know much about nanotechnology. Does anyone have any recommendations for a good introduction?

Thanks.
