Comment author: TheMajor 16 December 2015 10:08:50AM 0 points [-]

According to your example, the non-human agent would gain more by self-modifying into an agent that always writes down 9. Is this not something that UDT agents can do?

Comment author: TheMajor 04 December 2015 09:51:56AM *  2 points [-]

You are paying for the classes, i.e. the attention and time of your teachers. Make sure to get your money's worth: if you don't understand something, speak up or contact the teacher after class. If your class has teaching assistants, contact them (for example by email) if you get stuck on the homework/exercises or don't understand something from the lecture. All of these people are literally being paid to answer these questions; be aware that this is a resource you have at your disposal at all times. A common failure mode is thinking: "It's embarrassing to speak up in front of the whole group and/or I don't want to waste their time, I'll just figure it out on my own later" - if you have this thought, contact the teacher/assistant after the class about the part that wasn't clear.

Comment author: TheMajor 20 October 2015 08:07:44PM *  3 points [-]

I don't know that much about astronomy, but I found this thread on Reddit, and frankly I agree that the original paper does not spend enough time (the last two paragraphs of section 4.2) discussing the twin star hypothesis. Could anybody with more experience/knowledge explain why this is not plausible in this case? Is the main objection to this hypothesis that it is not clear how such a system would form without a disk of matter present?

Comment author: ShardPhoenix 15 September 2015 11:47:54PM *  1 point [-]

Or does the trick lie in the "stronger than a naive implementation of depth-limited search", and is there some reason why we expect depth-limited search to have sophisticated implementations, but do not expect this for probabilistic search?

Something like that I think. The paper suggests that optimizations applied to depth-based search techniques in more sophisticated engines are already effectively like an approximation of probability-based search.

Comment author: TheMajor 16 September 2015 09:53:40AM 0 points [-]

Shouldn't the probabilistic search in that case already be comparable in performance to non-naive depth-based search, if most of the sophistication in the latter just serves to approximate the former? Since the probabilistic search seems relatively simple, the argument above seems insufficient to explain why probabilistic search is not used more widely, right?

Comment author: TheMajor 15 September 2015 07:18:04PM *  2 points [-]

I am confused by section 5 in the paper about probabilistic generation of the search tree - the paper states:

Testing showed that a naive implementation of probability-limited search is slightly (26 ± 12 rating points) stronger than a naive implementation of depth-limited search.

But the creators of the most popular engines literally spend hours a day trying to increase the rating of their engine, and 26 rating points is massive. Is this probabilistic search simply that unknown and good? Or does the trick lie in the "stronger than a naive implementation of depth-limited search", and is there some reason why we expect depth-limited search to have sophisticated implementations, but do not expect this for probabilistic search?
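To make the distinction concrete, here is a toy sketch contrasting naive depth-limited minimax with probability-limited minimax, where a line stops being expanded once the product of its move probabilities falls below a threshold. The tree, the probabilities, and the function names are all made up for illustration; they are not taken from Lai's paper.

```python
# Hypothetical toy game tree: each node is (value, [(move_prob, child), ...]).
# Leaf nodes have an empty child list.
TREE = (0, [
    (0.7, (3, [(0.6, (5, [])), (0.4, (2, []))])),
    (0.3, (-1, [(0.5, (8, [])), (0.5, (-4, []))])),
])

def depth_limited(node, depth, maximize=True):
    """Naive depth-limited minimax: expand every line to exactly `depth` plies."""
    value, children = node
    if depth == 0 or not children:
        return value
    results = [depth_limited(c, depth - 1, not maximize) for _, c in children]
    return max(results) if maximize else min(results)

def probability_limited(node, prob, threshold, maximize=True):
    """Probability-limited minimax: stop expanding a line once the product of
    move probabilities along it drops below `threshold`, so plausible lines
    are searched deeper than implausible ones."""
    value, children = node
    if prob < threshold or not children:
        return value
    results = [probability_limited(c, prob * p, threshold, not maximize)
               for p, c in children]
    return max(results) if maximize else min(results)

print(depth_limited(TREE, 2))               # → 2: every line searched 2 plies deep
print(probability_limited(TREE, 1.0, 0.3))  # → 2: same answer here, but the
                                            #   unlikely lines were cut off early
```

On this tiny tree both searches agree; the point is which nodes get expanded. Depth-limited search spends the same effort on a 0.3-probability line as on a 0.7-probability one, while probability-limited search reallocates effort toward the lines a strong player would actually consider.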

Comment author: Diadem 15 September 2015 10:50:07AM 6 points [-]

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players. But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

That's a pretty hard contradiction right there. The latter quote is probably the correct one. Modern chess engines beat any human player these days, even running on relatively modest hardware. That's assuming full-length games. At blitz games computers are much better still, compared to humans, because humans are much more error-prone.

So if this neural net is playing at master level it's still much, much weaker than the best computers. From master to grandmaster is a big leap, from grandmaster to world top is another big leap, and the best computers are above even that.

Still interesting of course.

Comment author: TheMajor 15 September 2015 07:12:59PM *  2 points [-]

Yes, I noticed this too. The paper itself compares Giraffe (ELO ~2400) to 8 other chess engines (page 25 of the PDF), and decides that

It is clear that Giraffe's evaluation function now has at least comparable positional understanding compared to evaluation functions of top engines in the world.

For comparison, here is a frequently updated list of chess engines and their (approximate) Elo ratings, which would put Giraffe around shared 165th place. It seems that it is the reporting, rather than the paper, that is exaggerating.
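For a feeling of what these rating gaps mean, the standard Elo model converts a rating difference into an expected score. A small sketch (the formula is the standard Elo one; the helper name and the sample differences are mine, not from the paper):

```python
def elo_expected_score(diff):
    """Expected score of the higher-rated player under the standard
    Elo model, given the rating difference in points."""
    return 1 / (1 + 10 ** (-diff / 400))

print(round(elo_expected_score(26), 3))   # → 0.537: a 26-point gap is ~54% expected score
print(round(elo_expected_score(900), 3))  # → 0.994: roughly Giraffe (~2400) vs a ~3300 top engine
```

So the 26 ± 12 points quoted above is a real but modest edge, while the gap between Giraffe and the top engines corresponds to near-certain defeat in any individual game.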

Comment author: ShardPhoenix 14 September 2015 11:34:48PM *  9 points [-]

Although it's not better than existing solutions, it's a cool example of how good results can be achieved in a relatively automatic way - by contrast, the evaluation functions of the best chess engines have been carefully engineered and fine-tuned over many years, at least sometimes with assistance from people who are themselves master-level chess players. On the other hand this neural network approach took a relatively short time and could have been applied by someone with little chess skill.

edit: Reading the actual paper, it does sound like a certain amount of iteration, and expertise on the author's part, was still required.

edit2: BTW, the paper is very clear and well written. I'd recommend giving it a read if you're interested in the subject matter.

Comment author: TheMajor 15 September 2015 07:08:02PM 2 points [-]

Thank you for recommending to read the paper, I don't think I would have otherwise and I greatly enjoyed reading it!

Comment author: TheMajor 01 September 2015 06:45:18AM 0 points [-]

I was always under the impression that the particular case of watching a movie was a strong example of belief as attire - forming an opinion (and discussing it at length) about particular bits of a movie everybody watched is a good way to show that you paid attention to details and are capable of complicated analysis (disclaimer: of course there are genuine movie critics, as well as movies that are actually bad and movies that are actually good. I just think the above explains the most common scenario).

I think your general point is correct, though I wouldn't say the phenomenon is caused primarily by sneering. People discount opinions they disagree with in general, which includes sincere criticism, and this effect is further strengthened if they get the impression that the differing opinion is offered with insult as a primary goal.

Comment author: PhilGoetz 28 August 2015 09:09:51PM 0 points [-]

Sorry; I don't know why your comment got downvoted so much. It seems reasonable to me.

Comment author: TheMajor 28 August 2015 09:54:17PM *  1 point [-]

The parent argument proves too much, I think. Try adding the following, for example:

Since any communication can be described as the transmission of information, and, in order to be transmitted, this information must exist, any formal system of semiotics (providing it exists) can be encompassed by a larger formal system of physics. Taken together with the earlier observation (about the triviality of semiotics) we conclude that any formal explanation of physics must be trivial and/or incomplete.

I think the moral of the story is that one should not attempt to invoke Gödel's Incompleteness Theorem in social science.

Comment author: TheAncientGeek 11 August 2015 11:40:40AM *  -1 points [-]

RQM may not end in an I, but it is still an interpretation.

What the I in MWI means is that it is an interpretation, not a theory, and therefore neither offers new mathematical apparatus, nor testable predictions.

and finally we reject the idea that these observer-dependent representations can be combined to one global representation.

Not exactly: RQM objects to observer-independent state. You can have global state, provided it is from the perspective of a test observer, and you can presumably stitch multiple maps into such a picture.

Or perhaps you mean that if you could write state in a manifestly basis-free way, you would no longer need to insist on an observer? I'm not sure. A lot of people are concerned about the apparent disappearance of the world in RQM. There seem to be a realistic and a non-realistic version of RQM. Rovelli's version was not realistic, but some have added an ontology of relations.

In other words, where should we begin searching for maps of a territory containing observers that make accurate maps with QM that cannot be combined to a global map?

It's more of a "should not" than a "cannot".

2) What experiment could we do to distinguish between RQM and for example MWI?

Well, we can't distinguish between MWI and CI, either.

Comment author: TheMajor 11 August 2015 08:54:44PM 0 points [-]

Just because something is called an 'interpretation' does not mean it doesn't have testable predictions. For example, macroscopic superposition distinguishes between CI and MWI (although CI keeps changing its definition of 'macroscopic').

I notice that I am getting confused again. Is RQM trying to say that, via some unknown process, the universe produces measurement results, and that we use wavefunctions as something like an interpolation tool to account for those observations, with different observations leading to different inferences and hence to different wavefunctions?
