Free Kindle Textbook: The Cerebellum: Brain for an Implicit Self (FT Press Science)
**DEAL OVER as of 2012-06-11.**
Another free Kindle book I thought some might have interest in. I haven't read it, but the first review was glowing and looked relevant.
First Amazon Review:
> Five Star Final; Excellent; A "must read" for any "student" of brain-behavior relationships
http://www.amazon.com/gp/product/B005DKQQG4/
UPDATE: Still free on the US Amazon site as of 2 p.m. Eastern. There are reports that it is not free on the UK site, which I verified. Since I can log in to the UK site from the US and see the price, I assume people in the UK could sign into the US site and buy it. If anyone gives that a try, let me know and I'll further update the top level.
UPDATE: Free at amazon.fr. You can buy from the US site from the Netherlands, but not from the FR or US sites from the UK.
What is the best programming language?
Learning to program in a given language requires a non-trivial amount of time, but doing so seems generally agreed to be a good use of LessWrongers' time.
Each language may be more useful than others for particular purposes. However, as with choosing which charity to donate to, we shouldn't pretend there are no trade-offs in focusing on one language rather than another.
Suppose I know nothing about programming, and I want to choose a language on some basis beyond what merely sounds cool at the time. In short, I would want to spend my five minutes on the problem before jumping to a solution.
As an example of the dilemma: if I spend my time learning Scheme or Lisp, I will gain a particular kind of skill. It won't be a very marketable one directly, but it could (in theory) make me a better programmer. "Code as lists" is a powerful perspective, and Eric S. Raymond recommends learning Lisp for exactly this reason.
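To make "code as lists" concrete, here is a minimal sketch, written in Python rather than Lisp purely for illustration: programs are represented as nested lists that a tiny evaluator walks. Everything here (the `evaluate` function, the operator table) is a made-up example, not anything from the post.

```python
# Illustrative sketch of the Lisp "code as lists" idea, in Python.
# An expression is either an atom (a number) or a list whose first
# element names an operation: ["+", 1, ["*", 2, 3]] ~ (+ 1 (* 2 3)).

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Recursively evaluate a list-shaped expression."""
    if not isinstance(expr, list):   # atoms evaluate to themselves
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]
    result = values[0]
    for v in values[1:]:
        result = OPS[op](result, v)  # fold the operator over the arguments
    return result

print(evaluate(["+", 1, ["*", 2, 3]]))  # 7
```

Because the program is just a list, the same data structures you use everywhere else can build, inspect, and transform code before running it, which is the heart of Lisp's appeal.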
Forth (or any similar concatenative language) presents a different yet similarly powerful perspective, one which encourages aggressive factoring: defining small, well-considered words for frequently reused concepts.
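The concatenative style can also be sketched in a few lines of Python (again just an illustration, not real Forth): each "word" is a small function that transforms a shared stack, and a program is simply a sequence of words. All the names below (`push`, `dup`, `run`, `square`) are invented for the example.

```python
# Illustrative sketch of Forth-style factoring: words operate on a stack,
# and a program is a list of words applied in order.

def push(n):
    """A word that pushes a constant onto the stack."""
    return lambda stack: stack + [n]

def dup(stack):   # Forth's DUP: duplicate the top of the stack
    *rest, a = stack
    return rest + [a, a]

def add(stack):   # Forth's +: replace the top two items with their sum
    *rest, a, b = stack
    return rest + [a + b]

def mul(stack):   # Forth's *: replace the top two items with their product
    *rest, a, b = stack
    return rest + [a * b]

def run(program, stack=None):
    stack = stack or []
    for word in program:
        stack = word(stack)
    return stack

# "square" factored out as its own word, Forth-style:  : SQUARE DUP * ;
square = [dup, mul]

# 3 SQUARE 4 SQUARE +
print(run([push(3), *square, push(4), *square, add]))  # [25]
```

The point of the style is visible even in this toy: once `square` exists as a word, longer definitions get shorter and read as a vocabulary rather than as expressions.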
Python encourages object-oriented thinking and an "explicit is better than implicit" style. Ruby is object-oriented and complexity-hiding to the point of seeming almost magical.
C teaches functions and varying abstraction levels. JavaScript is more about high-level abstractions.
A newbie programmer who focuses on any one of these will come out of it a different kind of programmer. A competent programmer who avoids one of them avoids certain costs, but also forgoes certain benefits.
Is it better to focus on one path, avoiding contamination from others?
Is it better to explore several simultaneously, to make sure you don't miss the best parts?
Which one results in converting time to dollars the most quickly?
Which one most reliably converts you to a higher value programmer over a longer period of time?
What other caveats are there?
Wasted life
It's just occurred to me that, given all the cheerful risk stuff I work with, one of the most optimistic things people could say to me would be:
"You've wasted your life. Nothing of what you've done is relevant or useful."
That would make me very happy. Of course, that only works if it's credible.
Imposing FAI
All the recent posts on FAI theory have given me cause to think. Something in the conversations about it has always bugged me, but I hadn't found the words for it until now.
It is something like this:
Say that you manage to construct an algorithm for FAI...
Say that you can show that it isn't going to be a dangerous mistake...
And say you do all of this, and popularize it, before AGI is created (or at least, before an AGI goes *FOOM*)...
...
How in the name of Sagan are you actually going to ENFORCE the idea that all AGIs are FAIs?
I mean, if it required some rare material (like nuclear weapons), or large laboratories (like biological WMDs), or some other resource you could at least make artificially scarce, you could set up a body to ensure that any AGI created is an FAI.
But if all it takes is the right algorithms, the right code, and enough computing power, then even with a theory of FAI in hand, how would you keep someone from making UFAI anyway? Between people experimenting with the principles (once known), people making mistakes, and the prospect of actively malicious *humans*, it seems that unless the FAI design has some internal mechanism that makes it better and stronger than any possible UFAI, and that advantage is so obvious that any idiot could see it, UFAI is going to exist at some point no matter what.
At that point, the question becomes not "How do we make FAI?" (although that might remain a secondary question) but rather "How do we prevent the creation of, eliminate, or reduce the potential damage from UFAI?" Building FAI might be one thing you do toward that goal, but if UFAI is a highly likely consequence of AGI even *with* an FAI theory, shouldn't the focus be on how to contain a UFAI event?