Comment author: Valentine 05 February 2013 08:10:18AM 1 point [-]

Does LW have an anti-publication-bias registry somewhere?

Not that I know of, but that does sound quite awesome.

...I'll be attending the March workshop.

I look forward to meeting you, Giles!

Comment author: Giles 05 February 2013 02:34:52PM 1 point [-]

Not that I know of

Any advice on how to set one up? In particular, how to add entries to it retrospectively - I was thinking of searching the comments database for phrases like "I intend to", "guard against", "publication bias", etc., and manually picking out the relevant ones. This is somewhat laborious, but the effect I want to avoid is "oh, I've just finished my write-up (or am just about to), so now I'll go and add the original comment to the anti-publication-bias registry".

On the other hand, it seems like anyone can safely add anyone else's comment to the registry, as long as the entry is added close enough in time to when the comment was written.
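
For the retrospective search, here's a minimal sketch of what I have in mind - assuming the comments are available as a JSON dump with `body` and `created_at` fields (hypothetical names; the real export format may differ):

    import json
    from datetime import datetime, timedelta

    # Phrases that suggest a comment is a public commitment to write something up.
    PHRASES = ["i intend to", "guard against", "publication bias"]

    # Only allow registering a comment within this window of its posting, so that
    # entries can't be added after the author already knows the outcome.
    MAX_AGE = timedelta(days=7)  # arbitrary cutoff, purely for illustration

    def candidate_comments(path):
        """Yield comments whose text contains any of the target phrases."""
        with open(path) as f:
            comments = json.load(f)  # assumed: a list of comment dicts
        for c in comments:
            if any(p in c["body"].lower() for p in PHRASES):
                yield c

    def can_register(comment, now=None):
        """Check the 'close enough in time' rule for adding a registry entry."""
        now = now or datetime.utcnow()  # assumes naive UTC timestamps throughout
        posted = datetime.fromisoformat(comment["created_at"])
        return (now - posted) <= MAX_AGE

    for c in candidate_comments("comments.json"):
        print(c["created_at"], c["body"][:60])

Keyword matching like this will surface plenty of false positives, so a manual pass over the hits would still be needed.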

Any advice? (I figured that since you're involved with CFAR, you might know a bit about this stuff.)

Comment author: Giles 04 February 2013 10:44:00PM 8 points [-]

This is interesting. People who are vulnerable to the donor illusion either have some of their money turned into utilons, or are taught a valuable lesson about the donor illusion, possibly creating more utilons in the long term.

Comment author: Giles 04 February 2013 10:27:10PM 4 points [-]

This is useful to me, as I'll be attending the March workshop. If I successfully digest any of the insights presented here, then I'll have a better platform to start from. (Two particular points are the stuff about the parasympathetic nervous system, which I'd basically never heard of before, and the connection between the concepts of "epistemic rationality" and "knowing about myself", which is more obvious in retrospect.)

Thanks for the write-up!

And yes, I'll stick up at least a brief write-up of my own after I'm done. Does LW have an anti-publication-bias registry somewhere?

Comment author: Vaniver 25 January 2013 12:06:23AM 3 points [-]

I had in mind treating individuals as institutions, with interlocking and often competing subcomponents. I'm not finding a link that would be helpful; does anyone else know of a good summary or introduction?

Comment author: Giles 03 February 2013 04:03:49AM 0 points [-]

There's probably better stuff around, but it made me think of Hanson's comments in this thread:

http://lesswrong.com/lw/v4/which_parts_are_me/

Comment author: passive_fist 01 February 2013 10:14:58PM 6 points [-]

I didn't take Searle's arguments seriously until I actually understood what they were about.

Before anything, I should say that I disagree with Searle's arguments. However, it is important to understand them if we are to have a rational discussion.

Most importantly, Searle does not claim that machines can never understand, or that there is something inherently special about the human brain that cannot be replicated in a computer. He acknowledges that the human brain is governed by physics and is probably subject to the Church-Turing thesis.

Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean that the computer must be understanding it as well. This is very hard to argue against, which is why the Chinese Room argument has stuck around for so long.
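
To make that concrete, here is a toy sketch (my illustration, not Searle's, and the "rule book" entries are invented): a program that "converses" by pure lookup, with nothing anywhere in it that models what the symbols mean.

    # A toy "Chinese Room": mechanical symbol manipulation with no semantics.
    # The rule book is just a lookup table; the program represents nothing
    # about what any symbol means, yet its output can look like conversation.
    RULE_BOOK = {
        "ni hao": "ni hao! ni hao ma?",
        "ni hao ma": "wo hen hao, xiexie.",
    }

    def room(symbols):
        # Follow the rules mechanically; fall back to a stock reply.
        return RULE_BOOK.get(symbols.strip().lower(), "dui bu qi?")

    print(room("ni hao"))  # fluent-looking output, zero understanding

Scaling the rule book up makes the behavior more sophisticated, but changes nothing about the lookup itself - which is exactly the intuition the argument is pumping.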

Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in a computer, then the computer can understand as well.

To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.

In my opinion, this argument just hides yet another form of vitalism: the idea that there is something above and beyond the mechanical. However, everywhere we've looked in the brain, we've found just neurons doing simple computations on their inputs. I believe that is all there is to it - that something with the capabilities of the human brain also has the ability to understand.

However, this is just a belief at this point. There is no way to prove it. There probably will be no way until we can figure out what consciousness is.

So there you have it. The Chinese Room argument is really just another form of the Hard Problem of consciousness. Nothing new to see here.

Comment author: Giles 02 February 2013 05:18:55PM 0 points [-]

just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well

I think linking this concept in my mind to the concept of the Chinese Room might be helpful. Thanks!

Comment author: Giles 10 January 2013 03:23:53AM 8 points [-]

More posts like this please!

Comment author: Giles 09 December 2012 09:28:21PM 13 points [-]

As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and ...

OK, this is big news. Don't know how I missed this one.

Comment author: Giles 09 December 2012 09:01:27PM 0 points [-]

Appoint a chief editor. The chief's most important job would be to maintain a list of what most urgently needs adding or expanding in the wiki, and to post a monthly Discussion post reminding people about these. (Maybe choosing a different theme each month and listing a few requested edits in that category, together with a link to the wiki page that contains the full list.)

When people make these changes, they can add a comment, and the chief editor (or some other high-status figure) will respond with heaps of praise.

People will naturally bring up other topics they'd like to see on the wiki, or general comments about the wiki. The chief editor should take note of these and, where relevant, bring them up with the appropriate people (e.g. the programmers).

Comment author: NancyLebovitz 30 November 2012 02:15:20PM *  4 points [-]

I'm not sure how much I can elaborate-- it seemed like a very undifferentiated ugh field. I not only didn't want to do more, I didn't want to think about how to get myself to do more.

It took some sense of duty to make my first comment rather than just letting the whole thing go.

Comment author: Giles 30 November 2012 07:40:45PM 3 points [-]

Do you think "ugh" should be listed as a possible response to survey questions? (Or, equivalently, a checkbox that says "I've left some answers blank due to an ugh field rather than due to not reading the question" - not possible with the current LW software, just brainstorming.)
