I had the same problem.
I think it would need some genetic algorithm to estimate how "close" a position is to the solution, then build a tree of what happens after every combination of however many moves, and play the line that looks closest to solved.
It would update that estimate based on distance to the closest known solution. For example, if it's five moves away from a position that looks about 37 moves from finishing, then it's about 42 moves away now.
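The "expand the tree, play what looks closest" idea is essentially greedy best-first search with a learned distance heuristic. A minimal sketch, using a toy puzzle as a stand-in for the cube (a scrambled string, moves are adjacent swaps, and the heuristic is just misplaced-character count standing in for the learned closeness estimate):

```python
import heapq

def moves(state):
    # All states reachable in one move (one adjacent swap).
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield "".join(s)

def heuristic(state):
    # Stand-in for the learned "how close does this look" estimate:
    # number of characters out of place relative to the sorted goal.
    return sum(c != g for c, g in zip(state, sorted(state)))

def best_first_solve(start, max_expansions=10_000):
    # Greedy best-first search: always expand the frontier state
    # that the heuristic says looks closest to solved.
    goal = "".join(sorted(start))
    frontier = [(heuristic(start), start, [])]
    seen = {start}
    while frontier and max_expansions > 0:
        max_expansions -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path + [state]
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (heuristic(nxt), nxt, path + [state])
                )
    return None  # gave up before finding the goal
```

With a good heuristic this homes in quickly; with a bad one it degenerates into blind search, which is exactly the bootstrapping problem described below.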
The problem with this is that when you start it, it will have no idea how close anything is to the solution except for the solution, and there's no way it's getting to that by chance.
Essentially, you'd have to cheat and start by giving it almost solved Rubik's cubes, and slowly giving it more randomized ones. It won't learn on its own, but you can teach it pretty easily.
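That "start almost solved, slowly randomize more" scheme is a curriculum: generate training positions by applying k random moves backwards from the solved state, and raise k as the learner improves. A sketch on the same toy puzzle (scramble depth and samples-per-depth are illustrative parameters, not anything from the original post):

```python
import random

def scramble(solved, k, rng):
    # Walk k random moves (adjacent swaps) away from the solved state,
    # so the result is at most k moves from solved.
    state = list(solved)
    for _ in range(k):
        i = rng.randrange(len(state) - 1)
        state[i], state[i + 1] = state[i + 1], state[i]
    return "".join(state)

def curriculum(solved, max_depth, per_depth, seed=0):
    # Yield (position, scramble_depth) pairs, easiest first:
    # nearly-solved positions at depth 1, harder ones as depth grows.
    rng = random.Random(seed)
    for depth in range(1, max_depth + 1):
        for _ in range(per_depth):
            yield scramble(solved, depth, rng), depth
```

The point of the scheme is that early positions are trivially within the learner's reach, so it actually receives a reward signal, instead of never stumbling onto the solved state by chance.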
I think there would be more overall pleasure if mankind continued on its merry way. It might be possible to wirehead the entire human population for the rest of the universe's lifespan, for instance; any scenario which ends the human race would necessarily have less pleasure than that.
But would I want the entire human race to be wireheaded against their will? No... I don't think so. It's not the worst fate I can think of, and I wouldn't say it's a bad result; but it seems sub-optimal. I value pleasure, but I also care about how we get it - even I would not want to be just a wirehead, but rather a wirehead who writes and explores and interacts.
Does this mean I value things other than pleasure, if I think pleasure is the Holy Grail but it still matters how it is attained? I'm not certain. I suppose I'd say my values reduce to pleasure first and freedom second: a scenario in which everyone can choose how to obtain their pleasure is better than one in which everyone obtains a forced pleasure, but that in turn is better than one in which everyone is free but most are not pleasured.
I'm not certain whether my valuing of freedom is essential or just a relic, though. At least it (hopefully) protects against moral error by letting others choose their own paths.
The high value you place on freedom may be because, in the past, freedom has tended to lead to pleasure. The idea that people are better suited to choosing how to obtain their pleasure makes sense to us now, because people usually know how best to achieve their own subjective pleasure, whereas forced pleasures often aren't that great. But by the time wireheading technology comes around, we'll probably know enough about neurology and psychology that such problems no longer exist, and a computer could well be trusted to tell you what you would most enjoy more accurately than your own expectations could.
I agree with the intuition that most people value freedom, and so would prefer a free pleasure over a forced one if the amount of pleasure was the same. But I think that it's a situational intuition, that may not hold in the future. (And is a value really a value if it's situational?)