bbarth

Yeah, I might, but here I was just surprised by the down-voting of a contrary opinion. That seems like the kind of thing we ought to foster, not hide.
I'm interested in general-purpose optimizers, but I bet that they will be evolved from AIs that were more special-purpose to begin with. E.g., IBM Watson moving from a Jeopardy!-playing machine to a medical diagnostic assistant, with a lot of the upfront work being rapid NLP for the J! "questions".
Also, there's no reason that I've seen here to believe that Newcomb-like problems give insight into how to develop decision theories that allow us to solve real-world problems. It seems like arguing about corner cases. Can anyone identify a practical problem that TDT fails to solve because it fails to solve these other problems?
Beyond this, my belief is that without formalization... (read more)
Harsh crowd.
It might be nice to be able to see the voting history (not the voters' names, but the number of up- and down-votes) on a comment. I can't tell if my comments are controversial or just down-voted by two people. Perhaps even just the total number of votes would be sufficient (e.g., -2/100 vs. -2/2).
Seems unlikely to work out, to me. Humans evolved intelligence without Newcomb-like problems, and since we're the only example of intelligence that we know of, it's clearly possible to develop intelligence without them. Furthermore, the general theory seems to be that AIs will start dumber than humans and iteratively improve until they're smarter. Given that, why are we so interested in problems like these (whose answers humans don't even universally agree on)?
I'd rather AIs be able to help us with problems like "what should we do about the economy?" or even "what should I have for dinner?" instead of worrying about what we should do in the face of something godlike.
Additionally, human minds aren't universal (assuming that universal means that they give the "right" solutions to all problems), so why should we expect AIs to be? We certainly shouldn't expect this if we plan on iteratively improving our AIs.
I don't see how your example is apt or salient. My thesis is that Newcomb-like problems are the wrong place to be testing decision theories because they do not represent realistic or relevant problems. We should focus on formalizing and implementing decision theories and throwing real-world problems at them rather than testing them on arcane logic puzzles.
Given the week+ delay in this response, it's probably not going to see much traffic, but I'm not convinced "reading" source code is all that helpful. Omega is posited to have nearly god-like abilities in this regard, but since this is a rationalist discussion, we probably have to rule out actual omnipotence.
If Omega intends to simply run the AI on spare hardware it has, then it has to be prepared to validate (in finite time and memory) that the AI hasn't so obfuscated its source as to be unintelligible to rational minds. It's also possible that the source of an AI is rather simple but it is dependent on a large amount... (read more)
If LW would update the page template to have the MathJax script in the HTML header, I think we'd be set; something like the sketch below. Isn't there a site admin for this?
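As a minimal sketch (not LW's actual template; the CDN URL and the runtime-injection approach here are just placeholder assumptions), the change amounts to getting a MathJax loader into the page head, either as a static script tag in the template or injected by site script:

```typescript
// Minimal sketch: inject a MathJax loader into the page <head> at runtime.
// The CDN URL below is an assumption (MathJax's combined TeX/MathML bundle);
// a static <script src="..."> tag in the template's header would do the same job.
const mathjaxScript = document.createElement("script");
mathjaxScript.src = "https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js";
mathjaxScript.async = true;
document.head.appendChild(mathjaxScript);
```

Once the loader is present, MathJax scans the page and typesets whatever TeX it finds in post and comment bodies (with whatever delimiters the configuration enables).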
I think this is critical, because rationality in the end needs mathematical support, and MathJax is really the de facto way of putting math in web posts at this point.
Wouldn't the right solution be to use MathJax?
As one of the folks who made this argument in the other job thread, I'm going to disagree with you. Paying an assistant $36k/yr seems low to me for the Bay Area, but $100k/yr is probably out of line. These all seem like assistant-type duties that draw more modest salaries. Indeed.com puts the average for administrative assistants in SF at $43k/yr, so given that SIAI is a non-profit, the offer is certainly in range. Do SIAI jobs come with health insurance?
Sorry. It didn't seem rude to me. I'm just frustrated with where I see folks spending their time.
My apologies to anyone who was offended.