All of voi6's Comments + Replies

Sorry, the way I worded it makes me look silly. I just meant that even if we had the perfect software, we simply wouldn't get a big enough speedup to bridge the gap.

  1. You can't know the difficulty of a problem until you've solved it. Look at Hilbert's problems. Some were solved immediately, while others are still open today. Proving that you can color a map with five colors is easy and takes only half a page. Proving that you can color a map with four colors is hard and takes hundreds of pages. The same is true of science: a century ago physics was thought to soon be a dead field, until the minor glitches with blackbody radiation and Mercury's orbit turned out to be more than minor and actually dictated by mathemat

... (read more)
fubarobfusco
We don't use parallel systems efficiently today because we don't have software systems that provide typical programmers with a human-comprehensible interface to program them. Writing efficient, correct parallel code in traditional programming languages is very difficult; and some of the research languages which promise automatic parallelization are on the high end of difficulty for humans to learn.
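As a concrete sketch of the difficulty being described (my own illustration, not from the comment): the innocuous-looking statement `counter += 1` is actually a read-modify-write sequence, so concurrent threads can silently lose updates unless the programmer remembers to serialize it. The lock below is what makes the result deterministic; drop it and the code is still syntactically fine but no longer correct.

```python
import threading

N_THREADS, N_INCR = 8, 25_000
counter = 0
lock = threading.Lock()

def safe_worker():
    """Increment the shared counter, guarding each read-modify-write."""
    global counter
    for _ in range(N_INCR):
        with lock:  # without this, increments from other threads can be lost
            counter += 1

threads = [threading.Thread(target=safe_worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 200000; every one of the 8 * 25000 increments survives
```

The burden of spotting every such shared access, in a large codebase, is exactly why "efficient, correct parallel code is very difficult" in conventional languages.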

Sorry to respond to this 2 years late. I'm aware of the paradox and the VNM theorem. Just because humans are inconsistent/irrational doesn't mean they aren't maximizing a utility function, however.

Firstly, you can have a utility function and just be bad at maximizing it (and yes, this contradicts the rigorous mathematical definitions which we all know and love, but we both know English doesn't always bend to their will, and we both know what I mean when I say this without my having to be pedantic, because we are such gentlemen).

Secondly, if you consider ... (read more)

I'm by no means an expert, but I have studied many of the fields relevant to this topic in college. I know this thread is long dead, but since it came up on the first page of a Google search for the title, I feel the need to give my input. I was really into AI and studied computer science in college until I found out that Moore's law is going to hit the atomic barrier before we have enough hardware, by reasonable estimates, to simulate a brain, and there is no clear way to move forward (neither parallel programming nor quantum computing looks like it will save the... (read more)

While humans may not be maximizing pleasure, they are certainly maximizing some utility function, which can be characterized. Human concerns can then be built into your FAI by programming it to optimize this function.

xelxebar
You might be interested in the Allais paradox, which is an example of humans demonstrating behavior that doesn't maximize any utility function. If you're aware of the von Neumann-Morgenstern characterization of utility functions, this becomes clearer than it would be from just knowing what a utility function is.
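A minimal sketch of the point (my own illustration, using the standard Allais payoffs): people commonly prefer gamble 1A (a sure $1M) over 1B (89% $1M, 10% $5M, 1% $0), and also prefer 2B (10% $5M, 90% $0) over 2A (11% $1M, 89% $0). Normalizing u($0) = 0 and u($5M) = 1, a brute-force search over u($1M) shows no utility value makes both preferences come out of expected-utility maximization.

```python
def prefers_1A(u1m):
    # EU(1A) > EU(1B): u(1M) > 0.89*u(1M) + 0.10*u(5M) + 0.01*u(0)
    return u1m > 0.89 * u1m + 0.10 * 1.0

def prefers_2B(u1m):
    # EU(2B) > EU(2A): 0.10*u(5M) + 0.90*u(0) > 0.11*u(1M) + 0.89*u(0)
    return 0.10 * 1.0 > 0.11 * u1m

# Scan candidate values of u($1M) on a fine grid; collect any that
# rationalize BOTH common choices simultaneously.
consistent = [u / 1000 for u in range(2001)
              if prefers_1A(u / 1000) and prefers_2B(u / 1000)]

print(consistent)  # -> []: no utility assignment explains both preferences
```

The first preference forces u($1M) > 10/11 and the second forces u($1M) < 10/11, so the common pattern violates the VNM independence axiom rather than revealing any single utility function.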