GregFish

Oh fun, we're talking about my advisers' favorite topic! Yeah, strong natural language is a huge pain and if we had devices that understood human speech well, tech companies would jump on that ASAP.
But here's the thing. If you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? Making an AGI for something like that is the equivalent of building a 747 to fly one person across a state. I can see various expert systems coming together into an AGI, but not starting out as one.
Sounds like a logical conclusion to me...
I still have a lot of questions about the details, but I'm starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.
... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise
Again, it really shouldn't be doing that. It should have the capacity to learn new skills and build new neural networks to do so. That doesn't require new code; it just requires a routine that initializes a new set of ANN objects at runtime.
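To make that concrete, here's a minimal sketch in Python (the class and function names are my own invention, not anyone's actual architecture): the "new skill" is just a new network object created at runtime by code that already exists, and learning happens by adjusting weights, never by generating new code.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

class ANN:
    """A tiny one-layer network: the code is fixed, only the weights learn."""
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def predict(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total > 0 else 0

    def train(self, inputs, target, rate=0.1):
        # Perceptron-style update: nudge weights toward the target output.
        error = target - self.predict(inputs)
        self.weights = [w + rate * error * x
                        for w, x in zip(self.weights, inputs)]

skills = {}

def learn_new_skill(name, n_inputs):
    # A "new skill" is just a new object, initialized by existing code.
    skills[name] = ANN(n_inputs)
    return skills[name]

net = learn_new_skill("recognize_shapes", 4)
for _ in range(100):
    net.train([1, 0, 1, 0], 1)  # pattern it should fire on
    net.train([0, 1, 0, 1], 0)  # pattern it should stay quiet on
```

After training, the object reliably separates the two patterns, yet not a single line of executable code was generated along the way.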
Just as my desktop computer no longer functions by the rules of a DRAM.
It never really did. DRAM is just a way to keep bits in memory for processing. What's going on under the hood of any computer hasn't changed at all. It's just grown vastly more complex and allowed us to do much more intricate and impressive things with the same basic ideas. The first computer ever built and today's machines function by the same rules, it's just that the latter is given the tools to do so much more with them.
And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better.
But...
Um... we already do all that to a pretty high extent and we don't need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that's all you need.
the relevant dimension of intelligence is something like "ability to design and examine itself similarly to its human designers".
Ok, I'll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.
You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.
No, I just keep saying that we don't need to program them to "like rewards and fear punishments" and train them like we'd train dogs.
... I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a program I wrote. But I don't see why that should be particularly
Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative?
Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? It doesn't sound like it, so I'd say no. Initiative is doing something that's not required, but something you feel needs to be done or something you want to do.
Is "incorrectly" a normative or descriptive term?
Yes. When you need it to return "A" and it returns "Finland," it made a mistake which has to be fixed. How it came to that mistake can be found by tracing the logic...
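A toy illustration of that point (all names hypothetical): when the program hands you "Finland" where you needed "A," stepping through the logic exposes a deterministic cause, e.g. a lookup against the wrong table, not any mysterious intent on the program's part.

```python
# Two tables that got mixed up somewhere upstream.
GRADES = ["A", "B", "C"]
COUNTRIES = ["Finland", "Sweden", "Norway"]

def top_grade():
    # The bug: the function reads from the wrong table.
    return COUNTRIES[0]

def top_grade_fixed():
    # Tracing the lookup reveals the mistake and the one-line fix.
    return GRADES[0]
```

`top_grade()` returns `"Finland"`, `top_grade_fixed()` returns `"A"` — and every wrong output maps back to a specific line of code.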
Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net.
No, actually I think the tutorial was necessary, especially since what you're basically saying is that something like a large enough neural net will no longer function by the rules of an ANN. If it doesn't, how does it learn? It would simply spit out random outputs without some sort of direct guidance.
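For what it's worth, the "direct guidance" point fits in a few lines: the same forward pass produces arbitrary outputs with random weights, and only starts matching its targets once a learning rule adjusts them (here the classic perceptron update, as a stand-in for whatever rule a real system would use). Scale the net up as far as you like; nothing about this changes.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def forward(weights, x):
    # Every ANN, big or small, computes the same way: weighted sum, threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

# A tiny task: the output should follow the first input.
data = [([1, 1], 1), ([0, 0], 0), ([1, 0], 1), ([0, 1], 0)]

# Without a learning rule, the weights never improve: outputs are arbitrary.
untrained = [random.uniform(-1, 1) for _ in range(2)]

# With guidance (the perceptron rule), the same structure learns the mapping.
trained = list(untrained)
for _ in range(50):
    for x, target in data:
        error = target - forward(trained, x)
        trained = [w + 0.2 * error * xi for w, xi in zip(trained, x)]
```

The structure of the net is identical in both cases; the learning rule is the only difference between guided behavior and noise.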
More will go on in a future superhuman AI than goes on in any present-day toy AI.
And again I'm trying to figure out what the "superhuman" part will consist of. I keep getting answers like "it will be faster than us" or "it'll make correct decisions faster," and once again I point out that computers already do that on a wide variety of specific tasks, which is why we use them...
Not sure. You could argue both points in this situation.
Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.
I suppose it would.