I'm curious as to why you chose to target this paper at academic philosophers. Decision theory isn't my focus, but it seems that while the other groups of researchers in this area (mathematicians, computer scientists, economists, etc.) talk to one another (at least a little), the philosophers are mostly isolated. The generation of philosophers trained while philosophy was still the center of research in logic and foundational mathematics is rapidly dying off and, with them, the remaining credibility of such work in philosophy.
Of course, philosophers are the only gro...
If your goal is to found an IT startup, I'd recommend learning basic web development. I formerly used Rails and, at the time I picked it up, the learning curve was about a month (just pick a highly rated book and work through it). If not web, consider app development. If you know a bit of Java, Android would probably be the way to go. With either of these, you'll have a skill that allows you to single-handedly create a product.
At the same time, start keeping a list of ideas you have for startups. Some will be big, others small. But start looking for opportuni...
Rather than thinking of it as spending 30 minutes a day on rationality when you should be doing other things, it might be more accurate to think of it as 30 minutes a day spent optimizing the other 23.5 hours. At least in my experience, taking that time yields far greater total productivity than when I claim to be too busy.
Strongly seconded. While getting good people is essential (the original point about rationality standards), checks and balances are a critical element of a project like this.
The level of checks needed probably depends on the scope of the project. For the feasibility analysis, perhaps you don't need anything more than splitting your research group into two teams, one assigned to prove, the other to disprove, the feasibility of a given design (possibly switching roles at some point in the process).
Good point. And, depending on your assessment of the risks involved, especially for AGI research, the level of the lapses might be more important than the peak or even the average. A researcher who is perfectly rational (hand-waving for the moment about how we measure that) 99% of the time but has, say, fits of rage every so often might be even more dangerous than a colleague who is slightly less rational on average but nonetheless stable.
I think some of it comes down to the range of arguments offered. For example, posted alone, I would not have found Objection 2 particularly compelling, but I was impressed by many other points and in particular the discussion of organizational capacity. I'm sure there are others for whom those evaluations were completely reversed. Nonetheless, we all voted it up. Many of us who did so likely agree with one another less than we do with SIAI, but that has only shown up here and there on this thread.
Critically, it was all presented, not in the context of an ...
The different emphasis comes down to your comment that:
...they support SI despite not agreeing with SI's specific arguments. Perhaps you should, too...
In my opinion, I can more effectively support those activities that I think are effective by not supporting SI. Waiting until the Center for Applied Rationality gets its tax-exempt status in place allows me to both target my donations and directly signal where I think SI has been most effective up to this point.
If they end up having short-term cashflow issues prior to that split, my first response would be to register for the next Singularity Summit a bit early since that's another piece that I wish to directly support.
In addition to going directly to articles, consider dropping an email or two to researchers working on those topics (perhaps once you've found an interesting article of theirs). Many are very willing to provide their overview of the area and point you to interesting resources. While there are times when you won't get a response (for example, before conference season or at the end of the semester), most are genuinely pleased to be contacted by people interested in the topics they care about.
The primary reason I think SI should be supported is that I like what the organization actually does, and wish it to continue. The Less Wrong Sequences, Singularity Summit, rationality training camps, and even HPMoR and Less Wrong itself are all worth paying some amount of money for.
I think that my own approach is similar, but with a different emphasis. I like some of what they've done, so my question is how do I encourage those pieces. This article was very helpful in prompting some thought into how to handle that. I generally break down their work into ...
First, let me say that, after re-reading, I think that my previous post came off as condescending/confrontational, which was not my intent. I apologize.
Second, after thinking about this for a few minutes, I realized that some of the reason your papers seem so fluffy to me is that they argue what I consider to be obvious points. In my mind, of course we are likely "to develop human-level AI before 2100." Because of that, I may have tended to classify your work as outreach more than research.
But outreach is valuable. And, so that we can factor out t...
My hope is that the upcoming deluge of publications will answer this objection, but for the moment, I am unclear as to the justification for the level of resources being given to SIAI researchers.
Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.
This level of freedom is the dream of every researcher on the planet. Yet, it's unclear why these resources should be devoted to your projects.
Because some people like my earlier papers and think I'm writing papers on the most important topic in the world?
It's impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system...
Note that this isn't uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies d...
And note that these improvements would not and could not have happened without more funding than the level of previous years.
Given the several-year lag between funding increases and the listed improvements, it appears that this was less the result of a prepared plan and more a process of underutilized resources attracting a mix of parasites (the theft) and talent (hopefully the more recent staff additions).
Which goes towards a critical question in terms of future funding: is SIAI primarily constrained in its mission by resources or competence?
Of course, th...
I definitely agree.
For (3), now is the time to get this moving. Right now, machine ethics (especially regarding military robotics) and medical ethics (especially in terms of bio-engineering) are hot topics. Connecting AI Risk to either of these trends would allow you to extend and, hopefully, bud it off as a separate focus.
Unfortunately, academics are pack animals, so if you want to communicate with them, you can't just stake out your own territory and expect them to do the work of coming to you. You have to pick some existing field as a starting point. Then,...