jacob_cannell comments on The Brain as a Universal Learning Machine - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The actual paper is "Cortical information flow during flexible sensorimotor decisions"; it can be found here. I don't believe the reporter's summary is very accurate. They traced the flow of information during a moving-dot task across a couple dozen cortical regions. It's interesting, but I don't think it especially differentiates the ULH.
.3. Good point. I'll need to correct that. I'm skeptical about embarrassment, but surprise and fear certainly seem plausible.
.4. Yes, that's correct. It would perhaps be more accurate to say "more useful" or "more valuable". I meant "more powerful" in a general political/economic utility sense.
.5. I agree that the human brain, in particular the reward system, has dependencies on the body that are probably complex. However, reverse engineering empathy probably does not require exactly copying biological mechanisms.
.6. I should probably just cut that sentence, because it is a distraction itself.
But for context on the previous old boxing discussions, see this post in particular. There Yudkowsky presents a virtual sandbox in the context of a sci-fi story. To make the breakout work, he has to give the AI essentially infinite computation, and even then the humans have to be incredibly dumb: they intentionally send an easter-egg message, and they apparently aren't even monitoring their creation. It's a strawman. Later it is cited as evidence that EY has somehow proven that computer security sandboxes can't work.