That our health and body chemistry affect our mental processes is not unreasonable to expect. More interesting would be if the influence runs the other way: do our belief systems and rationality have a profound impact on our body chemistry?

For instance, I wonder if being rational and self-aware drives our digestive system to become cleverer over time. Consider that we may have a hoard of gastric juices which our body tries and tests on various kinds of food, keeping track of which works better and adapting accordingly. It may also try to create new juices and see how they work. At the extreme end, we would be leading our body to set up a gastrochemistry lab in our guts.

Another example: I hope that studying computer science may lead one's own brain to apply those concepts to optimize one's neural connections in some way, giving us a 'speedup', so to speak.

"the potential difficulty of the concepts necessary to formulate the solution"

As I see it, there might be considerable conceptual difficulty in formulating even the exact problem statement. For instance, given that we want a 'friendly' AI, our problem statement depends very much on our notion of friendliness; hence the necessity of including psychology.

Going further, considering that SI aims to minimize AI risk, we need to be clear on what AI behavior is said to constitute a 'risk'. If I remember correctly, the AI in the movie "I, Robot" inevitably concludes that killing the human race is the only way to save the planet. The definition of risk in such a scenario is a very delicate problem.

Some important aspects of future AI 'friendliness' would probably link up with the greater economy surrounding us; more importantly, they would depend upon the nature of AI interaction with people, and upon human behaviour itself. So, besides the obvious component of mathematics, I feel that some members of the FAI team should also have a background in subjects such as psychology, as well as a general perspective on global issues such as resource management.

Suppose someone, on inspecting his own beliefs to date, discovers a certain sense of underlying structure; for instance, one may observe a recurring theme of evolutionary logic. Then, while deciding on a new set of beliefs, would it not be reasonable for him to anticipate and test for similar structure, just as he would use other 'external' evidence? Here we are dealing not with direct experience so much as with the mere belief of having experienced coherence within one's thoughts, which may be an illusion for all we know. But then again, assuming that the existing thoughts came from previous 'external' evidence, could one say that the anticipated structure is indeed already well rooted in experience?

Hello, everyone! I'm 21, soon to graduate from IIT Bombay, India. I guess the first time I knowingly encountered rationality was at 12, when I discovered the axiomatic development of Euclidean geometry, as opposed to the typical school progression of teaching mathematics. This initial interest in problem-solving through logic was fueled further through my later (and ongoing) association with the Mathematics Olympiads and related activities.

Of late, I find my thoughts turning ever more to understanding the workings and inefficiencies of our macro-economy, and how it connects, at bottom, with human thought and behavior. I very recently came to know of Red Plenty, which seems generally in line with the evolutionary alternative described in the foreword to Bucky Fuller's Grunch of Giants, and that is what made me feel the need to come here and actively study and discuss these and related ideas with a larger community.

Having just started on the Core Sequences, I'm looking forward to an enriching experience here!