Comment author: gilch 20 May 2016 02:22:19AM 0 points [-]

The link appears to be broken, but it's probably referring to this video.

In response to comment by Nominull on Hyakujo's Fox
Comment author: gwern 25 March 2009 02:19:18AM 3 points [-]

Zen is a matter of life and death:

'Kyogen said, "It (Zen) is like a man (monk) hanging by his teeth in a tree over a precipice. His hands grasp no branch, his feet rest on no limb, and under the tree another man asks him, 'Why did Bodhidharma come to China from the West?' If the man in the tree does not answer, he misses the question, and if he answers, he falls and loses his life. Now what shall he do?" '

In response to comment by gwern on Hyakujo's Fox
Comment author: gilch 23 April 2016 10:49:24PM *  0 points [-]

First, grasp a branch. Then, ask the other man for help.

Comment author: gilch 22 April 2016 02:26:25AM 1 point [-]

I seem to be years late to this party, but I've heard the LW culture isn't opposed to commenting on old posts. In the interest of "breadth" I'll answer anyway after at least five minutes of thought, without looking at the other answers first (though I've probably seen subsequent posts that have been influenced by this one by now).

So there are three categories of tests here. In order of strictness: those for masters, those for students, and those for employees?

There are many skills under the "rationality" umbrella. Enumerate them and test separately. Maybe there are some we don't know yet. How do we test for those? There's also a difference between epistemic and instrumental rationality. Epistemic seems easier to test and is probably required for instrumental. But instrumental is what we really want. Some of my test suggestions will only test a part of "rationality".

Schools and science have a lot of experience measuring things like this. Can we learn from them?

Every test I've come up with seems to be in one of two categories: toy problems, or real-life problems. The real-life problems are perhaps better for the masters, and the toy problems for the students. The toy problems are less realistic, but more replicable. I thought we were supposed to hold off on proposing solutions, to keep attractors like premature categorization from limiting our scope. But we've been asked to brainstorm. Can we break out of these categories?

Some Ideas:

  • Give the students a sum to invest in a small business and a time limit, then see how much they make. Require strict record keeping to prevent cheating. Noisy.

  • Give them a sum to invest in a prediction market, then see how much they make.

  • Use more direct calibration tests. Make students assign probabilities to their answers, then check whether their stated confidence matches how often they're actually right.

  • A student must catch specific examples of cognitive errors/fallacies in a video. (Arguably the important part is to catch one's own errors, and the ability to find others' errors doesn't prove that.)

  • Make a student write an essay before the term. The instructor will find examples of cognitive errors in it, but keep it secret. Then after the term, the student must review his essay and find as many errors in his former thinking as possible. This will measure personal improvement, but might not help measure relative to peers, since they're all taking different "tests".

  • SAT-style multiple-choice exam. This can test knowledge of the material, and synthesis too (or so the test writers claim) to a limited extent.

  • Like the three integers test, the master can play the role of nature while the students play the role of scientists, trying to figure out a simple rule by "experiment". Grading can be based on the number of questions asked, the time taken, the difficulty of the rule, or how many questions are answered correctly. The instructor must be strictly forbidden from giving hints that could ruin the results. This is actually very similar to debugging software. Maybe this kind of test could be computerized, with "nature" as an opaque program and students writing code that interacts with it as their "experiments". They may then have to write code to emulate the rule; if it passes the unit tests, a human instructor can confirm whether it implements the same rule. This could also give students a feel for what it's like to do science correctly.

  • Competitions where students program AIs to win at game-theory-inspired challenges. See how they compare to well-known strategies. It may be hard to keep the challenges secret. Could the payoff grid be randomized?

  • Life outcome survey over years. Are they "winning" more versus control group? May be hard to define. Slow. We should do this, but we shouldn't wait for it before developing the program.

  • Masters can actually try to accomplish something. Maybe improve life outcomes in a third-world country or something. To be meaningful, it would have a control group, competitors, a time limit and a budget.
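The computerized "nature" test above could be sketched roughly like this. Everything here is illustrative: the rule, the function names, and the grading metric (query count) are all assumptions, not a real implementation.

```python
def run_session(rule, experiments):
    """Play 'nature': answer each proposed triplet yes/no.

    Returns the answers plus the number of queries the student spent,
    which is one possible grading metric.
    """
    answers = [(list(xs), bool(rule(xs))) for xs in experiments]
    return answers, len(experiments)

# A hypothetical hidden rule, 2-4-6 style (kept secret from students):
def hidden_rule(xs):
    return list(xs) == sorted(xs)  # accepts any nondecreasing triplet

# A student probes the rule with a few "experiments":
answers, queries_used = run_session(hidden_rule, [(2, 4, 6), (6, 4, 2), (1, 1, 3)])
```

A confirmation-biased student who only ever submits triplets they expect to pass would rack up "yes" answers without ever locating the rule's boundary, which is exactly what this test is meant to expose.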

Comment author: Vaniver 06 January 2016 09:16:50PM 0 points [-]

Why don't ordinary photons spontaneously collapse into black holes?

How do you know that ordinary photons aren't black holes?

Comment author: gilch 06 January 2016 09:57:27PM *  0 points [-]

I think I see where you're going with this. If I can answer the question of how the universe would be different if photons did collapse, that might help explain why they don't.

That applies to anything, not just photons.

What would the world look like if, say, electrons were actually black holes with an electric charge? I do think black holes can be electrically charged, that is, charge is still conserved even if charged particles fall into a black hole. Same with angular momentum etc. We would expect black holes of electron mass to spontaneously decay via Hawking radiation. Into a shower of particles that in sum obey the conservation laws... in other words, into another electron. Hmm. That didn't really change anything did it? It might help to explain quantum tunneling though.

I would also expect electrons to have an event horizon of finite radius, rather than behaving as an infinitesimal point. I don't know enough general relativity to calculate how big this should be for a black hole of electron mass, but perhaps it's too small for us to have observed yet. (Edit: asking Wolfram Alpha yields 1.353E-57 meters. That's about 22 orders of magnitude below the Planck length of 1.616E-35 meters, so far too small to observe.) An event horizon means that light can be trapped by the gravity of the electron, which would give the black hole enough extra mass to spontaneously decay into more than just an electron. In the case of a low-energy photon, into another electron and photon (explaining photon scattering), or if the photon is of high enough energy, into heavier particles that add up to zero charge and spin, plus the electron again. Like positron/electron pair production. Which has also been observed. Hmm. That still didn't change anything, did it?
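The Wolfram Alpha figure is easy to sanity-check: the Schwarzschild radius is r_s = 2Gm/c². A quick computation with rounded constants (these values are standard, but the rounding here is mine):

```python
# Sanity-check of the event-horizon figure (rounded physical constants):
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
m_e = 9.109e-31       # electron mass, kg
l_planck = 1.616e-35  # Planck length, m

r_s = 2 * G * m_e / c**2  # Schwarzschild radius for an electron-mass black hole
print(f"r_s = {r_s:.3e} m")  # ~1.353e-57 m, roughly 22 orders below the Planck length
```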

Maybe electrons really are black holes?

Oh, I know! Neutrinos are as massive as electrons (edit: not really, but they do have positive rest mass), but lack charge. If electrons are black holes, then neutrinos are too. The effect of gravitationally scattering light described above should work for neutrinos as well, but to my knowledge, it doesn't. (Does it?)

Comment author: gilch 06 January 2016 08:52:29PM *  1 point [-]

Has this ever been seriously considered? (I’ve done some homework but undoubtedly not enough).

This idea is hardly new here. See Simulation, Consciousness, Existence by Hans Moravec; Permutation City by Greg Egan; and the Quantum Physics Sequence by our own Yudkowsky (especially the Many Worlds parts with the Ebborians, starting with Where Physics Meets Experience). See also Robin Hanson's mangled worlds, which might help to explain some of the odd probabilities we find in quantum mechanics.

Comment author: gilch 06 January 2016 12:07:10AM 1 point [-]

When you consider it—these are all rather basic matters of study, as such things go. A quick introduction to all of them (well, except naturalistic metaethics) would be... a four-credit undergraduate course with no prerequisites?

I wonder what it would take to make and run a MOOC? I know that MOOC software has been open sourced (e.g. OpenMOOC, edX). If a single undergrad course could have that big an impact on the world, isn't it worth doing?

Comment author: Lumifer 05 January 2016 06:48:11PM 1 point [-]

narrow AIs that can supposedly identify an author from their writing

It's not a "narrow AI", it's a straightforward statistical model. Or, if you prefer, an outcome of machine learning applied to texts of different authors.

identify sockpuppets

A voting sockpuppet doesn't post except to get the initial karma. It just up- and down-votes -- there is no text to analyse.

Comment author: gilch 05 January 2016 07:24:20PM *  0 points [-]

It's not a "narrow AI"

It is by the definition at https://en.wikipedia.org/wiki/Weak_AI. Of course, words are only useful if people understand them. I know LW has some non-standard terminology. Point me to the definitions agreed upon by this community and I'll update accordingly.

A voting sockpuppet doesn't post except to get the initial karma.

Sounds like the initial karma threshold is too low. I have various other ideas about how to fix the karma system, but perhaps I should hold off on proposing (more) solutions until we've discussed the problem thoroughly. If that discussion has already started I should probably continue from there; otherwise, do you think this issue (karma problems) merits a top-level discussion post?

Comment author: Good_Burning_Plastic 04 January 2016 05:40:12PM 5 points [-]

The Lion started posting "abruptly" with no signs of being a newbie, not very long after VoiceOfRa was banned (much like VoiceOfRa did after Azathoth123 was banned and Azathoth123 did after Eugine Nier was). Also, the first comments of The Lion have been on points that the previous EN incarnations also often made, and their writing styles sound very similar to me.

Comment author: gilch 05 January 2016 05:55:59AM 1 point [-]

I've heard of narrow AIs that can supposedly identify an author from their writings. I'm not certain how accurate they are, or how much material they need, but perhaps we could use such a system here to identify sockpuppets and make ban evasion more difficult.

In response to comment by IlyaShpitser on LessWrong 2.0
Comment author: Vaniver 22 December 2015 03:11:07PM 1 point [-]

I suggest ignoring karma.

I think that "be bold" and "ignore karma" cash out very differently, and while I mostly agree with "be bold" I mostly disagree with "ignore karma."

Karma is a good mechanism for directing attention and for providing quick, anonymous feedback; if we required everyone to write a comment publicly lauding or shaming posts and comments instead of voting, we would get much less in the way of feedback, because it requires much more in the way of attention and risk. If someone is consistently getting downvotes, they are most likely consistently doing something wrong.

I do think that an important part of making LW more useful is making the karma signal better; the votes are only as good as the people casting them.

In response to comment by Vaniver on LessWrong 2.0
Comment author: gilch 30 December 2015 08:14:01AM *  1 point [-]

Karma only gives you one bit of feedback per person voting. A [+] or [-], that's it. We can probably do better. Even so, it's much better than nothing.

I don't have time to read every single comment when there are hundreds to sift through, but I can read the important ones. The only way to find the important ones without reading everything is through karma. For example, SSC posts can get a comparable number of comments, but I've given up reading them.

Adding even a few more bits per person to the signal could improve quality a lot. On the other hand, simplicity is one of the karma system's strong points. The low effort required encourages participation, as you pointed out. I don't want to complicate the system too much, but I don't think the current version is optimal.

We could take an approach similar to Google’s PageRank, so votes by high-karma people carry more weight. This wouldn’t require any more effort for participation than it does now. We could perhaps keep the current one-bit system for determining karma score in the first place, but we would be able to sort posts/comments by the weighted score.

I’m not sure how hard this would be to implement. The database must have enough information to do this, since it tracks who cast each vote. I’m also not sure how to set up the weighting function, but this sounds like a job for Bayesian methods—some of us are good at that, right? :)
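One minimal sketch of the weighting idea, assuming votes are weighted by a slowly growing function of the voter's karma. The function names, the log1p weighting, and the data shapes are all my own illustrative choices, not LessWrong's actual schema or API:

```python
import math

def weighted_score(votes, voter_karma):
    """Sum votes (+1/-1) weighted by each voter's karma.

    `votes` maps voter -> +1 or -1; `voter_karma` maps voter -> karma.
    The 1 + log1p(karma) weight is one arbitrary choice: every voter
    counts, but established accounts count somewhat more.
    """
    total = 0.0
    for voter, vote in votes.items():
        weight = 1.0 + math.log1p(max(voter_karma.get(voter, 0), 0))
        total += vote * weight
    return total

votes = {"alice": +1, "bob": -1, "carol": +1}
karma = {"alice": 500, "bob": 5, "carol": 50}
print(weighted_score(votes, karma))
```

The logarithm keeps a single high-karma account from outweighing dozens of ordinary voters, which a raw PageRank-style weight might not; picking the right curve is exactly the part that would want the Bayesian treatment mentioned above.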

Getting downvoted can be discouraging. People who get downvoted enough (or fear getting downvoted enough) may not participate. Sometimes this is a good thing (e.g. trolls). But in other cases, there could be people with important things to say, who could improve their quality with just a little guidance.

For anyone reading this, what are your usual reasons for downvoting?

Perhaps they fall into some common patterns we could enumerate. (Perhaps a fallacy or cognitive bias from the sequences?) If so, we could add flags for these common reasons to the comment system. Marking a flag would count as your downvote, but would provide much more valuable feedback to the commenter, and also to other newcomers. We could control these specific problems without discouraging participation as much as a simple “[-]. You’re wrong. About something.”, like we do now.
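The flag idea could look something like this. The reason codes here are purely illustrative placeholders; the actual taxonomy is exactly what would need the discussion proposed above:

```python
from enum import Enum
from collections import Counter

class DownvoteReason(Enum):
    """Illustrative reason codes only; the real taxonomy needs discussion."""
    OFF_TOPIC = "off-topic"
    FALLACY = "logical fallacy"
    BIAS = "known cognitive bias"
    LOW_EFFORT = "low effort"

def tally_flags(flags):
    """Each flag counts as one downvote but keeps the reason attached."""
    return {"karma_delta": -len(flags), "reasons": Counter(flags)}

result = tally_flags([DownvoteReason.FALLACY, DownvoteReason.FALLACY,
                      DownvoteReason.OFF_TOPIC])
```

The karma effect stays identical to a plain downvote; the commenter just additionally learns that, say, two readers saw a fallacy rather than an unspecified "something wrong".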

Comment author: username2 03 December 2015 10:49:56PM 1 point [-]

When, if ever, is playing computer games good for me?

Comment author: gilch 10 December 2015 09:33:58PM 0 points [-]

There's been some actual research into that; this talk is a good summary.
