Comment author: Giles 04 February 2013 10:27:10PM 4 points [-]

This is useful to me as I'll be attending the March workshop. If I successfully digest any of the insights presented here then I'll have a better platform to start from. (Two particular points are the stuff about the parasympathetic nervous system, which I'd basically never heard of before, and the connection between the concepts of "epistemic rationality" and "knowing about myself" which is more obvious-in-retrospect).

Thanks for the write-up!

And yes, I'll stick up at least a brief write-up of my own after I'm done. Does LW have an anti-publication-bias registry somewhere?

Comment author: Vaniver 25 January 2013 12:06:23AM 3 points [-]

I had in mind treating individuals as institutions, with interlocking and often competing subcomponents. I'm not finding a link that would be helpful; does anyone else know of a good summary or introduction?

Comment author: Giles 03 February 2013 04:03:49AM 0 points [-]

There's probably better stuff around, but it made me think of Hanson's comments in this thread:

http://lesswrong.com/lw/v4/which_parts_are_me/

Comment author: passive_fist 01 February 2013 10:14:58PM 6 points [-]

I didn't take Searle's arguments seriously until I actually understood what they were about.

Before anything, I should say that I disagree with Searle's arguments. However, it is important to understand them if we are to have a rational discussion.

Most importantly, Searle does not claim that machines can never understand, or that there is something inherently special about the human brain that cannot be replicated in a computer. He acknowledges that the human brain is governed by physics and is probably subject to the Church-Turing thesis.

Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well. It is very hard to argue against this, which is why the Chinese room argument has stuck around for so long.

Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in the computer, the computer can understand as well.

To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.

In my opinion, this argument just hides yet another form of vitalism: the idea that there is something above and beyond the mechanical. However, everywhere in the brain we've looked, we've found just neurons doing simple computations on their inputs. I believe that that is all there is to it - that something with the capabilities of the human brain also has the ability to understand.

However, this is just a belief at this point. There is no way to prove it. There probably will be no way until we can figure out what consciousness is.

So there you have it. The Chinese room argument is really just another form of the Hard Problem of consciousness. Nothing new to see here.

Comment author: Giles 02 February 2013 05:18:55PM 0 points [-]

just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well

I think linking this concept in my mind to the concept of the Chinese Room might be helpful. Thanks!

Comment author: Giles 10 January 2013 03:23:53AM 8 points [-]

More posts like this please!

Comment author: Giles 09 December 2012 09:28:21PM 13 points [-]

As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and ...

OK, this is big news. Don't know how I missed this one.

Comment author: Giles 09 December 2012 09:01:27PM 0 points [-]

Appoint a chief editor. The chief's most important job would be to maintain a list of what most urgently needs adding or expanding in the wiki, and to post a monthly Discussion post reminding people about these. (Maybe choosing a different theme each month and listing a few requested edits in that category, together with a link to the wiki page that contains the full list).

When people make these changes, they can add a comment, and the chief editor (or some other high status figure) will respond with heaps of praise.

People will naturally bring up any other topics they'd like to see on the wiki, or general comments about the wiki. The chief editor should take account of these and, where relevant, bring them up with the relevant people (e.g. the programmers).

Meetup : Toronto THINK

4 Giles 06 December 2012 08:49PM

Discussion article for the meetup : Toronto THINK

WHEN: 12 December 2012 07:00:00PM (-0500)

WHERE: 7 Hart House Circle, Toronto

In the Hart House Reading Room (the room across from the reception desk with the purple walls)

We're trying to save a life, and we want to get you involved! Three of us are already promising to give money to the Against Malaria Foundation, but if you don't have spare cash - or if you're not convinced this is a good idea - you can also help by contributing to the discussion.

Charity evaluator GiveWell estimates that the cost of saving the life of a child by giving to AMF is $2300. But there is a lot of uncertainty in this estimate, and the focus of this meetup will be on how to deal with that uncertainty. In particular, if we discover that AMF is less cost effective than we thought, how should we proceed?

We should also look at some specific sources of uncertainty, try and establish whether GiveWell has already taken account of them and estimate how much uncertainty is introduced by each factor. I plan on researching this a little and hope to bring along some relevant information for each one.

  • Will AMF make good use of additional funds?
  • Might AMF change in the future, e.g. starting to fund other kinds of programme which may be less effective?
  • Possible widespread distribution of anti-malarial vaccines in the future (currently still under development); will this affect value of bed nets today? (HT Steven Bukal)
  • If someone like the Gates Foundation were to dump a huge amount of money into AMF and/or other bednetting programmes, what would the effect be on the marginal value of our own donations?
  • Insecticide resistance or "behavioural" resistance of mosquitoes
  • Decrease in child mortality between the mid-1990s (when the studies were done) and now. Should we expect a continued decrease in the future?
  • Out of the people getting nets, how many actually already have one?
  • Are nets going where they should and are they being used correctly for as many years as they're good for?
  • Do AMF's activities discourage local governments from distributing nets? (Or for that matter, other nonprofits?)
  • Are people discouraged from buying nets if they expect them to be given out for free at some point? (Even if they can afford them)
  • Uncertainties and biases associated with the original studies (published statistical uncertainty, representativeness, publication bias)
  • Over-optimism causes higher estimates of expected value. If we focus on the best (according to GiveWell), does that mean they're more likely to have been over-optimistic and should we correct for this? (This point is somewhat technical but I'll try and explain in the meeting)
  • How do we account for "leveraging"? (AMF requires its distribution partners to acquire the funds to cover distribution costs themselves; should we consider non-AMF funding to be "free" or should we include all costs in our estimate, or somewhere in between?)
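The over-optimism bullet above is sometimes called the optimizer's curse: if you pick the intervention with the best *estimate*, the winner's estimate will, on average, overstate its true value. A quick Monte Carlo sketch of the effect (all numbers made up for illustration; nothing here reflects GiveWell's actual model):

```python
import random

random.seed(0)

def optimizers_curse_gap(n_charities=20, true_sd=0.2, noise_sd=0.3, trials=10000):
    """Average (estimate - true value) for the charity with the best estimate.

    Each charity has a true cost-effectiveness drawn around 1.0; we only
    observe a noisy estimate of it. Selecting the maximum estimate
    systematically favours charities whose noise happened to be positive.
    """
    gap_total = 0.0
    for _ in range(trials):
        # True cost-effectiveness of each charity (arbitrary units).
        true_vals = [random.gauss(1.0, true_sd) for _ in range(n_charities)]
        # Noisy estimates, e.g. from imperfect studies.
        estimates = [t + random.gauss(0.0, noise_sd) for t in true_vals]
        best = max(range(n_charities), key=lambda i: estimates[i])
        gap_total += estimates[best] - true_vals[best]
    return gap_total / trials

avg_overestimate = optimizers_curse_gap()
print(f"Average overestimate for the chosen charity: {avg_overestimate:.2f}")
```

The gap comes out clearly positive, which is the intuition behind correcting top-ranked estimates downward: the more options you select among, and the noisier the estimates, the larger the correction should be.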

But there's also some uncertainty in the upward direction (i.e. benefits we know are not included in the $2300 figure):

  • Additional non-life-saving benefits
  • Helping GiveWell by tagging donations as GiveWell-inspired

So if you want to have a think about some of these before the meetup, you're more than welcome. I realise everyone's busy though! In any case, we'll have plenty of in-depth stuff to talk about. Note: THINK is not directly LW-affiliated but I've been told to post our meetups here anyhow :-)


Comment author: NancyLebovitz 30 November 2012 02:15:20PM *  4 points [-]

I'm not sure how much I can elaborate-- it seemed like a very undifferentiated ugh field. I not only didn't want to do more, I didn't want to think about how to get myself to do more.

It took some sense of duty to make my first comment rather than just letting the whole thing go.

Comment author: Giles 30 November 2012 07:40:45PM 3 points [-]

Do you think "ugh" should be listed as a response to survey questions? (Or equivalently a check box that says "I've left some answers blank due to ugh field rather than due to not reading the question" - not possible with the current LW software, just brainstorming)

Comment author: Giles 30 November 2012 02:29:02AM 0 points [-]

This might be helpful - thanks.
