Comment author: Bound_up 13 April 2015 08:28:11PM 0 points

The following are Christian religious teachings which strike me as more rational/empirical than most. Do you detect any reasoning flaws in them?

Q. Is the knowledge of the existence of God a matter of mere tradition, founded upon human testimony alone, until a person receives a manifestation of God to themselves? A. It is.

No one can truly say he knows God until he has handled something.

The beauty of the teachings of the Lord is that they are true and that you can confirm them for yourself.

If we understood the process of creation there would be no mystery about it, it would be all reasonable and plain, for there is no mystery except to the ignorant. This we know by what we have learned naturally since we have had a being on the earth.

Their not being able to prevail against it does not prove it to be the Kingdom of God, for there are many theories and systems on the earth, incontrovertible by the wisdom of the world, which are nevertheless false.

Comment author: Normal_Anomaly 13 April 2015 08:41:09PM *  2 points

Going through in order:

1 is a confession of bad epistemology,

2 is an assertion with no bad epistemology but a wrong premise,

3 is a generic wrong assertion with a "and that's beautiful" tacked on the front,

4 is a true statement largely independent of religious questions,

5 is good epistemology applied to wrong premises.

Does that engage with what you were asking, or have I misparsed you completely?

Comment author: Algon 13 April 2015 07:19:04PM 0 points

I believe there is an 'ask stupid questions' thread each month, along with some other open threads. I'm not sure whether you can make open threads yourself, but they're fairly common.

Comment author: Normal_Anomaly 13 April 2015 08:34:02PM 0 points

I think there's an open thread once or twice a month. Also, IMO this post would go better in an open thread than a stupid questions thread; the stupid questions thread is for sharing advice.

Comment author: eternal_neophyte 13 April 2015 03:47:26AM *  3 points

That's not really a fertile direction of criticism. Whether or not he's engaging in self-promoting provocation doesn't affect the validity of his position. Whether the USA can be trusted as the sole custodian of superintelligent AI is, however, an interesting question, since American exceptionalism appears to be in decline.

Comment author: Normal_Anomaly 13 April 2015 08:31:26PM 0 points

IAWYC, but disagree on the last sentence: it's not an interesting question because it's a wrong question. Superintelligent AI can't have a "custodian". Geopolitics of non-superintelligent AI that is smarter than a human but won't FOOM is a completely different question, probably best debated by people who speculate about cyberwarfare since it's more their field.

Comment author: Normal_Anomaly 13 April 2015 08:24:58PM *  2 points

My reaction to the first quoted statement was a big "Huh?". The only reason it would matter where superintelligent AI is first developed is that the researchers in different countries might do friendliness more or less well. A UFAI is equally catastrophic no matter who builds it; an AI that is otherwise friendly but has a preference for one country would . . . what would that even mean? Create eutopia and label it "The United Galaxy of America"? Only take the CEV of Americans instead of everybody? Either way, getting friendliness right means national politics is probably no longer an issue.

Also: I did not vote for this guy in the Transhumanist Party primaries!

Comment author: Normal_Anomaly 13 April 2015 08:10:42PM *  6 points

I think this is at bottom a restatement of "determining the right goals with sufficient rigor to program it into an AI is hard; ensuring that these goals are stable under recursive self-modification is also hard." If I'm right, then don't worry; we already know it's hard. Worry, if you like, about how to do it anyway.

In a bit more detail:

the most promising developments have been through imitating the human brain, and we have no reason to believe that the human brain (or any other brain for that matter) can be guaranteed to have a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce and to cooperate with each other; but there are many people who are suicidal, who have no interest in reproducing and who violently rebel against society (for example psychopaths).

Evolution did a bad job. Humans were never given a single primary drive; we have many. If our desires were simple, AI would be easier, but they are not. So evolution isn't a good example here. Also, I'm not sure of your assertion that the best advances in AI so far came from mimicking the brain. The brain can tell us useful stuff as an example of various kinds of program (belief-former, decision-maker, etc.) but I don't think we've been mimicking it directly. As for machine learning, yes there are pitfalls in using that to come up with the goal function, at least if you can't look over the resulting goal function before you make it the goal of an optimizer. And making a potential superintelligence with a goal of finding [the thing you want to use as a goal function] might not be a good idea either.

Comment author: NancyLebovitz 13 April 2015 03:30:49PM 0 points

I've banned hoofwall, so there's no point in asking them.

It's probably a good idea to ask new people how they found the site, just to find out how its reputation is spreading.

Comment author: Normal_Anomaly 13 April 2015 08:00:44PM 0 points

That was why I was curious: presumably they didn't get here through any of the usual channels, so LW's reputation has gone somewhere I wouldn't expect. Ah well, it's just as well they're gone; I should've asked faster.

Comment author: Normal_Anomaly 13 April 2015 02:24:07PM 0 points

The quality of argument in this post is awful, but the closest thing to a main point that I can extract from it is "there is no rational reason for human nudity taboos", which is amusing because it's probably true. Not important, but still true. Also, hoofwall, how did you even find this website? It's not the sort of website that people who haven't picked up a book since 8th grade usually find, let alone care to post on.

Comment author: ChristianKl 02 April 2015 10:50:40PM 0 points

Various lesswrong memes convinced me that working toward uploading by advancing neuroscience was a better alternative.

In what kind of timeframe do you consider uploading to be relevant?

Comment author: Normal_Anomaly 03 April 2015 12:46:05AM 0 points

Maybe sometime before I die of old age, if I'm very lucky, or sufficiently shortly afterward that it's worth getting cryonics and hoping. Probably sometime within the next 100-200 years, if something else doesn't make it unnecessary by then.

Comment author: [deleted] 02 April 2015 11:27:19AM 2 points

May I ask what the utility of Haskell is? Or rather, in what field does it have one? Functional programming as a shortcut is great, but Python has that covered. Even C#'s LINQ has that covered; for most people, pragmatic functional programming is about writing domain-specific query languages, since a lot of complicated programming can be reduced to input, massage the data, output. The rest is often just library-juggling. As opposed to this pragmatically functional stuff, purely functional programming is largely about avoiding bugs of certain types, but in my experience 95% of bugs come not from those types but from misunderstanding requirements, or from the requirements themselves being sloppy and chaotic. Pure functional programming is largely about programming like a mathematician: strictly formal, with everything the result of reasoning, instead of just cobbling things together by trial and error, which tends to characterize most programming. But the kinds of bugs this formalist attitude cuts down on are not really the kinds that actually annoy users. So I wonder what utility you found in Haskell.

In response to comment by [deleted] on How has lesswrong changed your life?
Comment author: Normal_Anomaly 02 April 2015 10:47:00PM 0 points

I'm taking a class in Haskell, and I'd really like to know this too. Haskell is annoying. It's billed as "not verbose", but it's so terse that reading other people's code and learning from it is difficult. (Note: the person I'm on a project with likes one-letter variable names, so that's a bit of a confounder.)
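A small, hypothetical illustration of the terseness complaint (both function names and the example string are made up, not from the project in question): the two definitions below compute the same thing, but the first is the kind of dense, one-letter-variable code that's hard to read cold, while the second spells out what each binding means.

```haskell
-- Terse style: correct, but every name is a single letter.
avgLen :: String -> Double
avgLen s = fromIntegral (sum (map length w)) / fromIntegral (length w)
  where w = words s

-- Same computation with descriptive names: average word length.
averageWordLength :: String -> Double
averageWordLength text =
    fromIntegral totalLetters / fromIntegral wordCount
  where
    ws           = words text
    totalLetters = sum (map length ws)
    wordCount    = length ws

main :: IO ()
main = print (averageWordLength "reading terse code is hard")  -- prints 4.4
```

Both are idiomatic Haskell; the language doesn't force the terse style, but its culture (point-free code, single-letter type-variable conventions bleeding into term-level names) often encourages it.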

Comment author: Dahlen 02 April 2015 11:18:01AM 0 points

I changed my intended college major from biomedical engineering to neuroscience+compsci.

As a biomedical engineering undergrad, can I ask you what prompted this decision and how the two options compare to each other, in your opinion?

Comment author: Normal_Anomaly 02 April 2015 10:19:07PM 0 points

I wanted to do research that would have practical implications for the human condition, and I thought working on genetic diseases was the best way to do that. Various lesswrong memes convinced me that working toward uploading by advancing neuroscience was a better alternative. Also, the exposure to cognitive science on LW and the idea that human intelligence is the Most Important Thing made neuroscience seem a lot more interesting. I can't say much about the comparison, since I changed my plans while still in high school, but I'm glad I did it. For one thing, if I hadn't, I wouldn't have discovered how much I love to code.
