Comment author: A4FB53AC 17 May 2012 08:00:51PM *  1 point [-]

First off, I'd like to say that I have met Christians who were similarly very open to rationality and to applying it to the premises of their religion, especially the ethics. One of them was the only person who directly recognized me as an immortalist within a few sentences of our first discussion, when no one else around me even knew what that was. I find that admirable, and fascinating.

I also think it likely that human beings, as they are now, need some sort of comfort, some reassurance that their universe is not that universe of cold mathematics.

So I'm not sure I should point this out, but, in the end, you're still trying to find a God of the gaps. In the end, you're still basing your view of the universe on a very special premise, that is, God.

Eventually, this can only be resolved in a few ways: either God exists, or He doesn't, or using His existence as a premise doesn't make a difference, and a theist would eventually come to the same understanding of the universe as a down-to-earth, reductionist, atheistic rationalist.

But I also began to feel depressed, and then sort of hollow inside. I had no attachment to young-earth creationism, but I suppose I was trying to keep a sort of "God of the gaps" with regard to the beginning and development of intelligent life on Earth. Having seen why there were considerably fewer gaps than I had thought, I couldn't un-see it. A little part of me had been booted out of Eden.

I don't think God exists, and I'm still puzzled by how anyone could come to believe He does. Here I mean "believe" not in the sense where you like to pretend something is real for the comfort it brings, which I do too, but rather in the sense where you think "stop kidding yourself now, you need a real, practical, usable answer".

The two are different. The first is fine, and necessary for many people, but if you use God in the latter sense, I'm worried you're in for a few disappointing experiences over the next few decades.

Comment author: gwern 14 May 2012 05:02:40PM 14 points [-]

I have not voted on it, but if I were downvoting, my reasoning would run that this is a single link which many of us heard about at the time, and Gerard is not providing any context, real commentary, or comprehensiveness, and to some extent, he's focusing on uninteresting things: iodine is way cheaper than parasite prevention, as effective, and comes with nifty bonuses like 'increases the IQ of females more than males'. As well, the appeal to sanity waterline is a little misplaced without any reference to any of the papers I've been collating in http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r01

Discussion has low standards, of course, but 1 link and 3 sentences is a bit low.

Comment author: A4FB53AC 14 May 2012 06:52:37PM 4 points [-]

comes with nifty bonuses like 'increases the IQ of females more than males'.

Why is that a bonus?

Comment author: A4FB53AC 14 May 2012 06:48:44PM *  9 points [-]

Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome? I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this.

My immediate reaction to this was "as opposed to doing what?" This segment seems to argue that SI's work, raising awareness that not all paths to AI are safe and that we should strive to find safer ones, is actually making it more likely that an undesirable AI / Singularity will be spawned in the future. Can someone explain to me how not discussing such issues, and not working on them, would be safer?

Just having that bottom line unresolved in Holden's post makes me reluctant to accept the rest of the argument.

Comment author: loup-vaillant 09 May 2012 08:32:17AM 3 points [-]

Eliezer also wrote multiple times that he's an "infinite set atheist". I'm not sure that's actually compatible with mathematical Platonism. (The way I understand it, at least.)

Comment author: A4FB53AC 13 May 2012 11:04:33PM 1 point [-]
Comment author: khafra 19 April 2012 12:47:07PM 3 points [-]

I don't think it was clear from the context that you were arguing against the practice of community moderation in general. I also don't think you supported your case anywhere near well enough to justify your verbal vehemence. Was this a test/demonstration of Wei Dai's point about intolerance of overconfident newcomers with different ideas?

Comment author: A4FB53AC 21 April 2012 07:04:34PM *  2 points [-]

Actually, not against it. I was arguing that the current moderation techniques on LessWrong are inadequate. I don't think the reddit karma system has been optimized much; we just imported it. I'm sure we can adapt it and do better.

At least part of my point should have been that moderation ought to provide richer information: for instance, by allowing graded scores on a scale from -10 to 10 and showing the average score rather than the sum of all votes, and by giving some clue as to how controversial a post is. That wouldn't be a silver bullet, but I think it would at least be more informative.
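A minimal sketch of what such richer moderation output could look like, assuming votes were graded on a -10 to 10 scale. The function name and the choice of standard deviation as the controversy proxy are mine, purely for illustration; this is not how any existing karma system works:

```python
from statistics import mean, pstdev

def summarize(votes):
    """Summarize graded votes (each in -10..10).

    Returns the average score, the number of voters, and a crude
    controversy measure: the population standard deviation of the
    votes, which is high when opinions are split and low when
    voters agree.
    """
    if not votes:
        return {"average": 0.0, "voters": 0, "controversy": 0.0}
    return {
        "average": round(mean(votes), 2),
        "voters": len(votes),
        "controversy": round(pstdev(votes), 2),
    }

print(summarize([10, -10, 9, -8]))  # split opinions: high controversy
print(summarize([3, 4, 3, 5]))      # rough consensus: low controversy
```

Note that both example comments end up with a similar middling karma total under the current system, while the summary above distinguishes a contested comment from a quietly approved one.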

And yes, I was also arguing this idea thinking it would fit nicely in this post.

I guess I was wrong, since it seems it wasn't clear at all what I was arguing for, and being tactless wasn't a good idea either, contrarian-intolerance context or not. Regardless, arguing the idea in detail in the comments, where it was off-topic, wasn't the way to do it either.

Comment author: David_Gerard 19 April 2012 06:58:50AM 1 point [-]

People optimizing for "more like this" eventually downgrades content into lolcats and porn.

More so than "vote up"? You've made a statement here that looks like it should be supported by evidence. On what sites have you seen this happen after a switch from "vote up" to "more of this"?

Comment author: A4FB53AC 19 April 2012 08:08:54AM 0 points [-]

Not more so than "vote up".

In this case I don't think the two are significantly different. Neither conveys much information, both are very noisy, and many people already seem to mean "more like this" when they vote up anyway.

Comment author: Bugmaster 19 April 2012 06:35:30AM 1 point [-]

Don't you technically need at least two bits? There are three states: "downvoted", "upvoted", and "not voted at all".

Comment author: A4FB53AC 19 April 2012 08:05:08AM *  1 point [-]

True, except you don't know how many people didn't vote: we don't keep track of that, so a comment at 0 could have been read and left at 0 by zero, one, ten, or a hundred people, and 0 is the default state anyway. Similarly, we can't tell whether a comment is controversial, that is, how many upvotes and downvotes went into the aggregated score.
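A sketch of the bookkeeping that would make this recoverable, assuming the site stored upvote and downvote counts separately rather than only their sum. The names and the controversy formula are illustrative, not taken from any real implementation:

```python
def tally(upvotes, downvotes):
    """Keep upvote and downvote counts separately, so the net score
    no longer hides how contested a comment is.

    A comment at net 0 could be 0 up / 0 down, or 50 up / 50 down;
    tracking both counts distinguishes these cases.
    """
    total = upvotes + downvotes
    net = upvotes - downvotes
    # Fraction of votes on the minority side: 0.0 means unanimous,
    # 0.5 means perfectly split (maximally controversial).
    controversy = min(upvotes, downvotes) / total if total else 0.0
    return {"net": net, "total": total, "controversy": controversy}

print(tally(0, 0))    # untouched comment: net 0, no votes at all
print(tally(50, 50))  # hotly contested comment: net 0, 100 votes
```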

Comment author: David_Gerard 18 April 2012 10:45:43PM 13 points [-]

Change the mouseovers on the thumbs-up/thumbs-down icons from "Vote up"/"Vote down" to "More like this"/"Less like this". I've suggested this before and it got upvotes, I suggest now it might be time to implement it.

Comment author: A4FB53AC 19 April 2012 04:12:38AM -1 points [-]

You should call it "black" and "white", because that's what it is: black-and-white thinking.

Just think about it: a vote compresses the opinions of people who use wildly variable judgement criteria, drawn from variable populations (different people care about and vote on different topics), into nothing more than one bit of non-normalized information.

Then you're going to tell me it "works nonetheless", that it self-corrects because several people (how many do you really need to obtain such a self-correction effect?) are aggregating their opinions, and that people usually mean their votes to say "more / less of this please". But what's your evidence that it works? The quality of the discussion here? How much of that stems from the quality of the audience, and from the quality of the base material, such as Eliezer's Sequences?

Do you realize that judgements like "more / less of this" may well optimize less than you think for content, insight, or epistemic hygiene, and more than they should for stuff that merely amuses and pleases people? Jokes, famous quotes, group-think, ego grooming, etc.

People optimizing for "more like this" eventually degrades content into lolcats and porn. It's crude wireheading. I'm not saying this community isn't somewhat above sinking that low, but we're still human beings, and therefore still susceptible to it.

More intuitive programming languages

4 A4FB53AC 15 April 2012 11:35AM

I'm not a programmer. I wish I were. I've tried to learn programming several times, in different languages, but never got very far. The most complex piece of software I ever wrote was a bulky, inefficient Game of Life.

Recently I've been exposed to the idea of a visual programming language called Subtext. The concept seemed interesting, and the potential great. In short, the assumptions and principles underlying this language seem more natural and more powerful than those behind writing lines of code. For instance, a program written as lines of code is one-dimensional, and even the best of us may find it difficult to sort out the flow of instructions, model it mentally, and see how distant parts of the code interact. In Subtext these relationships are already more apparent, thanks to the two-dimensional structure of the code.

I don't know whether this particular project will bear fruit. But it seems to me that many more people could become interested in programming, and at least advance further before giving up, if programming languages were easier to learn and use for people who lack the mindset the current paradigm demands.

It could even benefit people who are already good at it. Every programmer has a threshold above which the complexity of the code exceeds their ability to manipulate or understand it. I think such languages/frameworks could push that threshold further, enabling the writing of more complex, yet still functional, software.

Do you know anything about similar projects? Also, what could be done to help turn such a project into a workable programming language? Do you see obvious flaws in such an approach? If so, what could be done to repair these, or at least salvage part of this concept?

Comment author: A4FB53AC 06 April 2012 01:52:29PM *  3 points [-]

Is the number of bits necessary to discriminate one functional human brain among all permutations of matter of the same volume greater or smaller than the number of bits necessary to discriminate a version of yourself among all functional human brains? My intuition is that once you've paid for the first, there isn't much left to pay, comparatively, to specify the latter.

As a corollary, cryonics doesn't need to preserve much information, if any: you can patch the rest up with, among other things, information about what a generic human brain is (or better, what a human brain derived from your genetic code is), and correlate that with information left behind on the Internet, in your writings, and in the memories of other people about what some of your psychological specs and memories should be.

The result might be a fairly close approximation of you, at least according to this gradation of identity idea.
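The bit comparison above can be sketched as a back-of-the-envelope Fermi estimate. Every number here is an assumption chosen purely for illustration (atom count, distinguishable states per atom, size of the space of functional human brains), not a physical calculation:

```python
import math

# Hypothetical inputs: ~10^26 atoms in a brain-sized volume, each in
# one of ~1000 distinguishable states, and (very generously) ~2^(10^15)
# functionally distinct human brains, roughly one bit per synapse.
ATOMS = 10**26
STATES_PER_ATOM = 1000

# Bits to pin down one configuration among all permutations of matter
# in that volume: log2(states^atoms) = atoms * log2(states).
bits_any_config = ATOMS * math.log2(STATES_PER_ATOM)

# Bits to pin down one brain among the assumed space of functional
# human brains.
bits_you_among_brains = 10**15

print(f"a brain among matter configs: ~{bits_any_config:.2e} bits")
print(f"you among functional brains:  ~{bits_you_among_brains:.2e} bits")
print(f"ratio: ~{bits_any_config / bits_you_among_brains:.0e}")
```

Under these (entirely made-up) numbers, the first quantity dwarfs the second by many orders of magnitude, which is the shape of the intuition in the comment: specifying "a working human brain at all" costs vastly more bits than then narrowing it down to you.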
