I actually brought up a similar question in the open thread, but it didn't really go very far. May or may not be worth reading, but it's still not clear to me whether such a thing is even practical. It's likely that all substantially easier AIs are too far from FAI to still be a net good.
I've come a little closer to answering my questions by stumbling on this Future of Humanity Institute video on "Reduced Impact AI". Apparently that's the technical term for it. I haven't had a chance to look for papers on the subject, but perhaps some exist: no hits on Google Scholar, but a quick search shows a couple of mentions on LW and MIRI's website.
Does anyone have any insight into how VoI interacts with Bayesian reasoning?
At a glance, it looks like the VoI is usually not considered from a Bayesian viewpoint, as it is here. For instance, wikipedia says:
""" A special case is when the decision-maker is risk neutral where VoC can be simply computed as; VoC = "value of decision situation with perfect information" - "value of current decision situation" """
From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when that information decreases its (subjective) "value of decision situation". For example, consider a Bernoulli 2-armed bandit:
Suppose the agent's prior over each arm's reward probability is uniform over [0,1], so the current value of its decision situation is .5 (playing either arm). After many observations, it learns (with high confidence) that arm 1 has reward .1 and arm 2 has reward .2. The agent should be glad to know this (so it can switch to the optimal policy of playing arm 2), BUT the subjective value of the new decision situation is lower than it was under ignorance, because .2 < .5.
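To make the tension concrete, here is a small Monte Carlo sketch (the bandit setup and the true arm probabilities of .1 and .2 are assumed for illustration, not taken from any particular paper). Ex ante, the Wikipedia-style formula gives information positive value: with perfect information the value is E[max(p1, p2)] = 2/3 versus the prior value of 1/2, so VoI is about 1/6. Ex post, the subjective value of the decision situation still drops from .5 to roughly .2 after learning.

```python
import random

random.seed(0)

# --- Ex ante: value of perfect information under the uniform prior ---
# Hypothetical setup: 2-armed Bernoulli bandit, each arm's reward
# probability drawn independently from Uniform[0, 1].
N = 200_000
value_with_perfect_info = sum(
    max(random.random(), random.random()) for _ in range(N)
) / N                             # Monte Carlo estimate of E[max(p1, p2)] = 2/3
value_of_current_situation = 0.5  # best prior mean: either arm looks like .5
voi = value_with_perfect_info - value_of_current_situation  # ~1/6 > 0

# --- Ex post: subjective value can still DROP after learning ---
# True arm probabilities, assumed for illustration.
true_p = [0.1, 0.2]
alpha = [1.0, 1.0]  # Beta(1,1) = uniform prior on each arm's probability
beta = [1.0, 1.0]

for arm in (0, 1):
    for _ in range(5_000):
        r = 1 if random.random() < true_p[arm] else 0
        alpha[arm] += r
        beta[arm] += 1 - r

posterior_means = [alpha[i] / (alpha[i] + beta[i]) for i in range(2)]
posterior_value = max(posterior_means)  # ~0.2, down from the prior value of 0.5

print(f"VoI (ex ante): {voi:.3f}")
print(f"subjective value after learning: {posterior_value:.3f}")
```

The point of the sketch: the ex-ante VoI is positive even though the ex-post subjective value falls, so an agent that maximizes ex-ante VoI is not incentivized to avoid bad news.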