PhilosophyTutor
I won't argue against the claim that we could conceivably create an AI without knowing anything about how to create an AI. It's trivially true in the same way that we could conceivably turn a monkey loose on a typewriter and get strong AI.
I also agree with you that if we got an AI that way we'd have no idea how to get it to do any one thing rather than another and no reason to trust it.
I don't currently agree that we could make such an AI using a non-functioning brain model plus "a bit of evolution". I am open to argument on the topic but currently it seems to me that you might as well say "magic" instead of "evolution" and it would be an equivalent claim.
A universal measure for anything is a big demand. Mostly we get by with some sort of somewhat-fuzzy "reasonable person" standard, which we obviously can't yet fully explicate in neurological terms either, but which is much more achievable.
Liberty isn't a one-dimensional quality either, since for example you might have a country with little real freedom of the press but lots of freedom to own guns, or vice versa.
What you would have to do to develop a measure with significant intersubjective validity is to ask a whole bunch of relevantly educated people what things they consider important freedoms and what incentives they would need to be offered to give them up, to figure...
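To make that concrete, here is a minimal sketch of the survey-and-weight idea, assuming hypothetical freedoms, respondents and numbers (an illustration of the aggregation step only, not a real instrument):

```python
# Minimal sketch of the survey-aggregation idea above. All freedoms,
# respondents and numbers are hypothetical illustrations.

# Each respondent states, for each freedom, roughly what compensation
# (in arbitrary units) they would demand to give it up.
responses = [
    {"press": 9.0, "guns": 2.0, "movement": 8.0},
    {"press": 7.0, "guns": 6.0, "movement": 9.0},
    {"press": 8.0, "guns": 1.0, "movement": 7.0},
]

# Average the demanded compensation to get intersubjective weights.
freedoms = list(responses[0])
weights = {f: sum(r[f] for r in responses) / len(responses) for f in freedoms}

def liberty_index(country_scores):
    """Weighted sum of a country's per-freedom scores (each 0..1)."""
    return sum(weights[f] * country_scores[f] for f in freedoms)

# Two countries strong on different dimensions, as in the press/guns example:
print(liberty_index({"press": 0.9, "guns": 0.2, "movement": 0.8}))
print(liberty_index({"press": 0.2, "guns": 0.9, "movement": 0.8}))
```

The point of the sketch is only that "liberty" can be treated as a weighted vector of separately measurable freedoms, with the weights supplied intersubjectively rather than by any one theorist.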
I tend to think that you don't need to adopt any particular position on free will to observe that people in North Korea lack freedom from government intervention in their lives, access to communication and information, a genuine plurality of viable life choices and other objectively identifiable things humans value. We could agree for the sake of argument that "free will is an illusion" (for some definitions of free will and illusion) yet still think that people in New Zealand have more liberty than people in North Korea.
I think that you are basically right that the Framing Problem is like the problem of building a longer bridge, or a faster car, in...
I said earlier in this thread that we can't do this and that it is a hard problem, but also that I think it's a sub-problem of strong AI and we won't have strong AI until long after we've solved this problem.
I know that Word of Eliezer is that disciples won't find it productive to read philosophy, but what you are talking about here has been discussed by computer scientists as "the frame problem" since the late 1960s, and by analytic philosophers since the eighties, and it might be worth a read for you. Fodor argued that there is a class of "informationally unencapsulated" problems where you cannot specify in advance what information is and is not...
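For anyone who hasn't met the frame problem before, here is a toy sketch of the difficulty, assuming a deliberately naive world model (nobody's proposed solution, just the shape of the problem):

```python
# Toy illustration of the frame problem: after an action, a logic-based
# reasoner must somehow establish which facts were NOT affected. Listing
# those "frame axioms" explicitly grows with the size of the world model.
facts = {"door_open": False, "light_on": False, "cat_fed": False}

def apply(action, state):
    new_state = dict(state)  # a program can cheaply copy "everything else"
    if action == "open_door":
        new_state["door_open"] = True
    # In a logical formalism there is no cheap copy step: each action
    # needs explicit axioms saying the light, the cat, etc. are unchanged,
    # and an unencapsulated problem gives no advance bound on that list.
    unaffected = [f for f in state if f != "door_open"]
    print(f"frame axioms needed for '{action}': {len(unaffected)}")
    return new_state

apply("open_door", facts)
```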
I didn't think we needed to put the uploaded philosopher under billions of years of evolutionary pressure. We would put your hypothetical pre-God-like AI in one bin and update it under pressure until it becomes God-like, and then we upload the philosopher separately and use them as a consultant.
(As before I think that the evolutionary landscape is unlikely to allow a smooth upward path from modern primate to God-like AI, but I'm assuming such a path exists for the sake of the argument).
I think there is insufficient information to answer the question as asked.
If I offer you the choice of a box with $5 in it, or a box with $500,000 in it, and I know that you are close enough to a rational utility-maximiser that you will take the $500,000, then I know what you will choose and I have set up various facts in the world to determine your choice. Yet it does not seem on the face of it as if you are not free.
On the other hand if you are trying to decide between being a plumber or a blogger and I use superhuman AI powers to subtly...
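The first case can be sketched in a few lines, assuming a hypothetical payoff table (the point being that predicting a choice from a utility-maximising model doesn't obviously remove the chooser's freedom):

```python
# Minimal sketch of the box example: if I model you as an approximate
# utility-maximiser, I can predict your choice in advance. Payoffs are
# hypothetical.
def choose(options):
    """Pick the option with the highest payoff, as a utility-maximiser would."""
    return max(options, key=options.get)

boxes = {"box_a": 5, "box_b": 500_000}
print(choose(boxes))  # the predictor already knows this will be "box_b"
```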
If I was unclear, I was intending that remark to apply to the original hypothetical scenario where we do have a strong AI and are trying to use it to find a critical path to a highly optimal world. In the real world we obviously have no such capability. I will edit my earlier remark for clarity.
The standard LW position (which I think is probably right) is that human brains can be modelled with Turing machines, and if that is so then a Turing machine can in theory do whatever it is we do when we decide that something is liberty, or pornography.
There is a degree of fuzziness in these words, to be sure, but the fact that we are having this discussion at all means that we think we understand to some extent what the term means and that we value whatever it is that it refers to. Hence we must in theory be able to get a Turing machine to make the same distinction, although it's of course beyond our current computer science or philosophy to do so.
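As a hedged sketch of what "get a Turing machine to make the same distinction" could mean in practice, a program can learn a fuzzy category from labelled examples rather than from an explicit definition (the features and labels below are hypothetical toy stand-ins, not an analysis of liberty):

```python
# Toy nearest-neighbour classifier: learns a fuzzy distinction from
# labelled examples instead of a definition. Features are hypothetical
# stand-ins (e.g. press freedom, freedom of movement, freedom of speech).
examples = [
    ((1.0, 0.9, 0.8), True),
    ((0.1, 0.2, 0.1), False),
    ((0.9, 0.7, 0.9), True),
    ((0.2, 0.1, 0.3), False),
]

def classify(x, data):
    """Label x by the label of the closest labelled example."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(data, key=lambda ex: dist(ex[0], x))[1]

print(classify((0.8, 0.8, 0.7), examples))  # True
```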
If you can do that, then you can just find someone who you think understands what we mean by "liberty" (ideally someone with a reasonable familiarity with Kant, Mill, Dworkin and other relevant writers), upload their brain without understanding it, and ask the uploaded brain to judge the matter.
(Off-topic: I suspect that you cannot actually get a markedly superhuman AI that way, because the human brain could well be at or near a peak in the evolutionary landscape so that there is no evolutionary pathway from a current human brain to a vastly superhuman brain. Nothing I am aware of in the laws of physics or biology says that there must be...
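The local-peak worry is easy to demonstrate in miniature, assuming an arbitrary made-up fitness function (a crude stand-in for selection pressure, not a model of brain evolution):

```python
# Toy hill-climbing sketch: a strictly uphill search stalls on a local
# peak even when a much higher peak exists elsewhere. The fitness
# function is an arbitrary illustration.
def fitness(x):
    # Local peak near x = 1 (height 4); much higher peak near x = 6 (height 20).
    return -((x - 1) ** 2) + 4 if x < 3 else -((x - 6) ** 2) + 20

def hill_climb(x, step=0.1):
    while True:
        if fitness(x + step) > fitness(x):
            x += step
        elif fitness(x - step) > fitness(x):
            x -= step
        else:
            return x

print(hill_climb(0.0))  # stalls near 1.0 and never reaches the peak near 6.0
```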
(EDIT: See below.) I'm afraid that I am now confused. I'm not clear on what you mean by "these traits", so I don't know what you think I am being confident about. You seem to think I'm arguing that AIs will converge on a safe design and I don't remember saying anything remotely resembling that.
EDIT: I think I figured it out on the second or third attempt. I'm not 100% committed to the proposition that if we make an AI and know how we did so, we can definitely make sure it's fun and friendly, as opposed to fundamentally uncontrollable and unknowable. However it seems virtually certain to me that we...