Comment author: ModusPonies 04 February 2013 04:08:18PM 0 points [-]

What have you heard about CI's quality control, and do you happen to have the sources conveniently available? (I'm making the decision between CI and Alcor.)

Comment author: saturn 06 February 2013 11:25:40PM *  3 points [-]

I don't have any special insight on this subject, only what I've picked up from reading LW and occasionally talking about it on IRC. Many sources are linked from the comments in this thread (the comments are much more informative than the original post). To sum up, it seems that both CI and Alcor are lamentably bad, but CI is considerably worse.

Comment author: insufferablejake 26 January 2013 10:52:44AM 0 points [-]

Most of the comments on this thread are from people who seem to find this useful or think it will make a positive difference. While I find the idea interesting and would like to try it out, I'm one of those people who don't really like bright lighting. In fact, at work I've had some of the overhead lights removed to reduce the ambient brightness. I tend to suffer from eyestrain or get headaches, though now that I think about it, I'm not sure whether that was because the overhead lights were reflecting badly off my computer screen or not.

Anyway, I'd like to get the opinions of people who generally turn off the lights at work (such that the ambient light source is behind the monitor), and even better, to hear from anyone who has actually tried this out.

Comment author: saturn 26 January 2013 09:03:43PM 0 points [-]

If using your computer in bright light gives you eyestrain, you might need a brighter monitor to go with your brighter lights.

Comment author: iDante 19 January 2013 02:25:47AM 0 points [-]

What's the point of randomization if you can easily tell the difference between a bright bulb and a dim one?

Comment author: saturn 26 January 2013 09:00:28PM 5 points [-]

Randomization still eliminates some confounding factors even without blinding. For example, you might be more likely to decide to turn on your bright lights when you're already feeling alert.

Comment author: wedrifid 22 January 2013 02:42:49PM 6 points [-]

Enough money was raised, and when she died on January 17th, she was preserved by Alcor.

Alcor? That's curious. Given the critical lack of funds I would have expected Cryonics Institute to be used. It seems like enough money and then some was raised!

Comment author: saturn 22 January 2013 10:19:06PM 10 points [-]

Given what I've heard about CI's quality control, I don't blame her for trying to raise enough money for Alcor.

Comment author: Kaj_Sotala 19 January 2013 07:38:13AM *  0 points [-]

105 watts of incandescents with halogen gas, billed as the equivalent of 130 watts of incandescent light. And I got an adaptor like this that lets me screw four of those into the same socket in the ceiling.

I'm unclear on the physics of lighting. If you have four lights that are each the equivalent of 130 watts, is the total light output equivalent to a single 520-watt light? Or is there some sort of nonlinear effect?

Comment author: saturn 19 January 2013 04:51:51PM 5 points [-]

"Equivalent watts" is not a well-defined unit and the figures given by manufacturers are often exaggerated. Real incandescent bulbs vary in light output per watt. It's easier to use lumens, which are additive. However, human brightness perception is logarithmic, so 4 times the lumens will appear less than 4 times as bright.
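The point about lumens adding linearly while perception does not can be sketched numerically. This is only an illustration: the lumen figure below is a typical value for a "130 W equivalent" halogen bulb, not a measured one, and the logarithmic model is the rough approximation of brightness perception described above.

```python
import math

# Typical lumen output for a "130 W equivalent" halogen incandescent.
# Real bulbs vary in light output per watt, so this is an assumption.
LUMENS_PER_BULB = 2100

def total_lumens(n_bulbs, lumens_each=LUMENS_PER_BULB):
    """Lumens are additive: four bulbs emit four times the light."""
    return n_bulbs * lumens_each

def perceived_difference(lumens_a, lumens_b):
    """Under a rough logarithmic model of brightness perception, the
    *felt* difference between two light levels scales with the log of
    their physical ratio, not the ratio itself."""
    return math.log10(lumens_a / lumens_b)

one = total_lumens(1)
four = total_lumens(4)
print(four / one)                    # physical ratio: 4.0
print(perceived_difference(four, one))  # log10(4) ≈ 0.6, well under 4x
```

So quadrupling the bulbs quadruples the light output, but the room will look noticeably, not dramatically, brighter.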

Comment author: leplen 03 January 2013 07:11:24PM *  9 points [-]

So I'm fairly new to LessWrong, and have been going through some of the older posts, and I had some questions. Since commenting on 4-year-old posts is unlikely to answer those questions or generate any new discussion, I thought posting here might be more appropriate. If this is not proper community etiquette, I'm happy to be corrected.

Specifically, I'm trying to evaluate how I understand and feel about this post: The Level Above Mine

I have some very mixed feelings about this post, and the subject in general. (You might say I've noticed that I'm confused.) Sure, it's hard to reliably evaluate how intelligent someone more intelligent than you is, just as a test that every student in a class aces doesn't let you identify which student knows the material best. But doesn't the idea of a persistent ranking system, and the concern with it, imply a belief in intelligence as a static factor? Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset. Indeed, it seems in many ways the raison d'être of LW relies on the assumption that it is possible to improve your intelligence. I would further argue that LW relies on the assumption that it is possible to recursively improve your intelligence (i.e., learning things that help you learn better).

Is it possible that the fundamental attribution error is at work here? I mean, if it's ridiculous to believe in "mutants born with unnaturally high anger levels" then why the rush to believe in mutants with unnaturally high levels of intelligence? I'm not sure what to make of a post that discusses assessing how many standard deviations above average intelligence someone is, if I really believe that "Any given aspect of someone's disposition is probably not very far from average. To suggest otherwise is to shoulder a burden of improbability."

Indeed, if we make the fundamental attribution error when assessing someone because "we don't see their past history trailing behind them in the air", then can we not say the same for experiences that result in greater situational intelligence? Perhaps I'm straining the metaphor slightly, since problem-solving intelligence tends to be more enduring than vending-machine-kicking anger, but is intelligence so fixed that my SAT scores from the 7th grade are meaningful or worth discussing? Is it possible that what we perceive as greater intelligence, as "the level above mine", is just someone having spent more time working on something, or on something similar to it? What is the prior probability that someone picks up a new idea quickly because they've been exposed to a similar idea before, versus the prior probability that they are of mutant intelligence?

The entire ranking debate sounds suspiciously to me like human social hierarchies, and since that's a type of irrationality humans are especially prone to, it makes me very suspicious. I know from personal experience that being considered of "above average intelligence" is a very useful social tool, which I can use to create a place for myself in social hierarchies, and often that place is not only secure but also grants me reasonably high social status. I have at various times in my life evaluated others, and granted social status accordingly, on the basis of their SAT scores and other similar measures. Is that what is going on here?

Fundamentally, I believe this question boils down to a handful of related questions:

  1. How accurate over time is our evaluation of general intelligence?
  2. Does our love of static hierarchies, especially ones that privilege intelligence, affect our answer to 1?

Sub-questions to #1

  • a. How variable is intelligence, and over what time span? Or more generally, what do we estimate are the most heavily weighted inputs to a function that describes intelligence?
  • b. Is there an upper bound on human intelligence?
  • c. Are the people whose intelligence we're evaluating operating near that bound?
  • d. Can we reliably distinguish between intelligence and knowledge? How?

I'm not sure about question 1, but I'm pretty sure the answer to question 2 is yes.

Comment author: saturn 17 January 2013 08:50:18AM 0 points [-]

doesn't the idea of a persistent ranking system, and the concern with it imply a belief in intelligence as a static factor? Less Wrong is a diverse community, but I was by and large under the impression that it was biased towards a growth mindset.

I'd just like to point out that a growth mindset is fully compatible with fixed intelligence. Fixed intelligence doesn't mean that growth is impossible, only that some people can grow faster than others.

Comment author: ygert 15 January 2013 07:10:06PM *  17 points [-]

I just went and watched (half of) the video you linked to. As someone who has heard of My Little Pony but never actually watched any of it, I can say that while I knew this was not real, and knowing that, could see how it was not real, without that knowledge I would not have been able to tell. While I'm sure you can tell the difference at a glance, it is not obvious to someone who has not watched the show. In other words, the Illusion of Transparency is kicking in.

With all that said, I think this post just got me to try watching My Little Pony. I have heard nothing but good about it in the past, and this post gives me just the little push that might get me to actually watch it. If I do watch it and like it (which I most likely will, given the fanbase it has here on Less Wrong), please accept my thanks for finally pushing me to watch this (presumably) great show.

Comment author: saturn 16 January 2013 04:52:06AM 4 points [-]

I have heard nothing but good about it in the past

If you'd like a countervailing anecdote, I was amused by the parody but I can't stand the actual show.

Comment author: [deleted] 11 January 2013 01:08:38PM 3 points [-]

"Please divide the length of this sentence by the length of the stone block it is written on and convert to base 26 in order to reveal a code which can be cyphered using the obvious number-to-letter mapping"

I don't think it would be feasible to encode even half a dozen letters with that technique.

In response to comment by [deleted] on Just One Sentence
Comment author: saturn 11 January 2013 05:43:34PM 2 points [-]

Assuming my math is right, if your stone carving were accurate to 1 micron, in order to encode a 140 character 'tweet' using this method, you would need a stone tablet 10^163 times larger than the observable universe. (!)
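The back-of-the-envelope check is straightforward to reproduce. The sketch below makes its assumptions explicit (a 26-letter alphabet, 1-micron carving accuracy, and a commonly cited ~8.8×10^26 m diameter for the observable universe); slightly different assumptions shift the exponent by a couple of orders of magnitude, but the conclusion is the same either way.

```python
import math

ALPHABET = 26            # letters, per the proposed number-to-letter code
MESSAGE_LEN = 140        # characters in a 'tweet'
CARVING_ACCURACY_M = 1e-6     # assume carving/measurement accurate to 1 micron
UNIVERSE_DIAMETER_M = 8.8e26  # observable universe, roughly 93 Gly

# Encoding 140 base-26 digits in a length ratio means the ratio must be
# resolvable to one part in 26**140, so the block must span at least
# 26**140 measurement units.
distinct_messages = ALPHABET ** MESSAGE_LEN
block_length_m = distinct_messages * CARVING_ACCURACY_M

# How many orders of magnitude larger than the observable universe?
exponent = math.log10(block_length_m / UNIVERSE_DIAMETER_M)
print(round(exponent))  # about 165 under these particular assumptions
```

Whether the exponent comes out as 163 or 165, a tablet that many orders of magnitude larger than the observable universe makes the point: the scheme cannot encode a message of any useful length.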

Comment author: Qiaochu_Yuan 01 January 2013 10:21:27AM 6 points [-]

If I wanted to test whether I had a good way of picking stocks, I would be hesitant to do it with real money. I had admittedly very vague plans to test various ways of picking stocks with play money on my own, but those vague plans are more likely to coalesce into action if simplicio's proposal actually happens.

Comment author: saturn 01 January 2013 03:54:26PM 8 points [-]

Several free stock market simulators already exist.

Comment author: Eliezer_Yudkowsky 24 December 2012 08:03:10PM 2 points [-]

Everyone even slightly famous gets arbitrary green ink. Choosing which green ink to 'complain' about on your blog, when it makes an idea look bad which you would find politically disadvantageous, is not a neutral act. I'm also frankly suspicious of what the green ink actually said, and whether it was, perhaps, another person who doesn't like the "UFAI is possible" thesis saying that "Surely it would imply..." without anyone ever actually advocating it. Why would somebody who actually advocated that, contact Ben Goertzel when he is known as a disbeliever in the thesis?

No, I don't particularly trust Ben Goertzel to play rationalist::nice with his politics. And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.

Comment author: saturn 24 December 2012 10:05:49PM 7 points [-]

And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.

If that's the case, it seems like giving him the title Director of Research could cause a lot of confusion. I certainly find it confusing. Maybe that was a different Ben Goertzel?
