It's my mistake.
I meant: it's not connected to your brain at all except when making you happy/making you believe you decided.
i.e. it's not taking any input from the brain at any point. Much like the lesion.
But in general, if somebody says they decided freely, I take it as given. I don't know of any better criterion for judging whether a decision was free, whatever that means.
In the specific case of the bulb-world, would you consider their decisions free, if they said they were?
If the bulb-apparatus physically took no input from the brain, if it was attached to the brain artificially (as opposed to being a native part of the human body, or growing spontaneously - so that it couldn't be considered a part of the brain), if its action was direct enough (e.g. implanting the decision by some sequence of electric impulses over the course of seconds, as opposed to altering the brain only in a slight but predictable manner, a modification which would develop into the final decision after years of thought going on inside the brain) and if the decisio...
This is part of a sequence titled "An introduction to decision theory". The previous post was Newcomb's Problem: A problem for Causal Decision Theories
For various reasons I've decided to finish this sequence on a separate blog. This is principally because a large number of people seemed to feel that this sequence either wasn't up to the Less Wrong standard or was simply covering ground that had already been covered on Less Wrong.
The decision to post it on another blog rather than simply discontinuing it came down to the fact that other people felt the sequence had value. Those people can continue reading it at "The Smoking Lesion: A problem for evidential decision theory".
Alternatively, there is a sequence index available: Less Wrong and decision theory: sequence index