Indeed, it seems likely. Many humans have the concept that 'locked' values are better than 'wishy-washy' ones; few have the concept of local maxima, and fewer still an understanding of complex, changing human value systems. Thus a priori we should expect some bias or leaning in that direction, which would presumably have a chance of affecting one human in particular. That chance is greater than the corresponding chance for an AI, which chooses at random.
Harry is aware of these ideas, but he often catches himself in errors. When it comes to self-modification there is no opportunity to catch your errors; you are stuck with them, and will never even realise they are there.
I wonder if Quirrell realises Harry desires to be an actual god, and not just Supreme Emperor of the magical world.
Hopefully Harry is bright enough not to test invasive intelligence improvement on himself.
- This thread has run its course. You will find newer threads in the discussion section.
Another discussion thread - the fourth - has reached the (arbitrary?) 500 comments threshold, so it's time for a new thread for Eliezer Yudkowsky's widely-praised Harry Potter fanfic.
Most of the paratext and fan-made resources are listed on Eliezer Yudkowsky's author page. There is also AdeleneDawner's collection of most of the previously-published Author's Notes.
Older threads: one, two, three, four. By tag.
Newer threads are in the Discussion section, starting from Part 6.
Spoiler policy as suggested by Unnamed and approved by Eliezer, me, and at least three other upmodders:
It would also be quite sensible and welcome to continue the practice of declaring at the top of your post which chapters you are about to discuss, especially newly-published ones, so that people who haven't read them yet can stop reading in time.