Well, I totally missed the diaspora. I read Slate Star Codex (but not the comments) and had no idea people were posting things in other places. It surprises me that it even has a name, "rationalist diaspora." It seemed to me that people had run out of things to say or that the booster-rocket thing had played itself out. This is probably because I don't read Discussion, only Main, and as Main received fewer posts I stopped coming to Less Wrong. As "meet up in area X" took over the stream of content, I unsubscribed in my RSS reader. Over the past few...
When you’re “up,” your current strategy is often weirdly entangled with your overall sense of resolve and commitment—we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice.
Aside 1: I run into many developers who aren't able to separate their ideas from their identities. It tends to make them worse at customer- and product-oriented thinking. In a high-bandwidth collaborative environment, it leads to an assortment of problems. They might not suggest an idea, because ...
I've always thought that "if I were to give, I should maximize the effectiveness of that giving" but I did not give much nor consider myself an EA. I had a slight tinge of "not sure if EA is a thing I should advocate or adopt." I had the impression that my set of beliefs probably didn't cross over with EAs and I needed to learn more about where those gaps were and why they existed.
Recently, through Robert Wiblin's Facebook, I have encountered more interesting arguments and content in EA. I had no concrete beliefs about EA, only vague impres...
I'm curious about the same thing as [deleted].
Furthermore, a hard to use text may be significantly less hard to use in the classroom where you have peers, teachers, and other forms of guidance to help digest the material. Recommendations for specialists working at home or outside a classroom might not be the same as the recommendations you would give to someone taking a particular class at Berkeley or some other environment where those resources are available.
A flat-out bad textbook might seem really good when it is something else, such as the teacher, the method, or the support, that makes the book work.
"A directed search of the space of diet configurations" just doesn't have the same ring to it.
Thanks for this. I hadn't seen someone pseudocode this out before. It helps illustrate that interesting problems lie in the scope above (callers of tdt_utility(), etc.) and below (the implementation of tdt(), etc.).
I wonder if there is a rationality exercise in 'write pseudocode for problem descriptions, explore the callers and implementations'.
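To make the exercise concrete, here is a minimal sketch of what "pseudocode the problem, then explore above and below" might look like. Everything here is hypothetical illustration: tdt() is a stub, not a real decision-theory implementation, and the utility numbers are a toy one-shot Newcomb-style payoff.

```python
def tdt(utility_model, actions):
    """Stub decision procedure. Exploring how to actually implement
    this is the 'below' direction of the exercise."""
    # Placeholder logic: pick the action with the highest modeled utility.
    return max(actions, key=utility_model)

def tdt_utility(action):
    """Toy utility model for a one-shot problem (hypothetical numbers)."""
    return {"one-box": 1_000_000, "two-box": 1_000}[action]

# The 'above' direction: who calls the decision procedure, and with what?
choice = tdt(tdt_utility, ["one-box", "two-box"])
print(choice)  # "one-box" under this toy utility model
```

The exercise would then be to interrogate both layers: what would a real caller need to supply in place of tdt_utility, and what would a real tdt() need to do beyond picking an argmax?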
Doh, I have no idea why my hands type c-y-r instead of c-r-y, thanks.
Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like 'characteristic of a metaphysical belief system.' The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn't consistent with my more fundamental, empirical beliefs.
So in my mind I have 'WARNING!' tags (intentionally) attached to mysticism. So when I see something that has the mysticism tag attached to it, I approach cautiously and with a big stick. Or to save time or avoid the risk of being ea...
As an aside, what are IFS and NVC?
Edit: Ah, found links.
IFS: http://en.wikipedia.org/wiki/Internal_Family_Systems_Model
I had a dim view of meditation because my only prior exposure to it was in mystic contexts. Here I saw people discuss it apart from that context. My assumption was that if you approached it using Bayes and other tools, you could start to figure out whether it was bullshit. It doesn't seem unreasonable to me that interested folks could explore it and see what turns up.
Would I choose to do so? No. I have plenty of other low hanging fruit and the amount of non-mystic guidance around meditation seems really minimal, so I'd be paying opportunity...
To address your second point first, the -attendees- were not a group who strongly shared common beliefs. Some attended due to lots of prior exposure to LW, a very small number were strong x-risk types, several were there only because of recent exposure to things like Harry Potter and were curious, many were strongly skeptical of x-risks. There were no discussions that struck me as cheering for the team -- and I was actively looking for them!
Some counter evidence, though: there was definitely a higher occurrence of cryonicists and people interested in cryon...
I feel like most of the value I got out of the minicamp in terms of techniques came early. This is probably due to a combination of effects:
1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use.

2) I was not well mentally organized -- my rationality concepts were all individual floating bits not well sewn together -- so I reached a point where new concepts didn't fit into my map very easily.
I agree things got more disorganized, in fact, I remember on a couple occasions seeing the 'this isn't the o...
I attended the 2011 minicamp.
It's been almost a year since I attended. The minicamp has greatly improved me along several dimensions.
I now dress better and have used techniques provided at minicamp to become more relaxed in social situations. I'm more aware of how I'm expressing my body language. It's not perfect control, and I've not magically become an extrovert, but I'm better able to interact successfully in random social situations. Concretely: I'm able to sit and stand around people I don't know and both feel and present myself as relaxed.
What we know about cosmic eschatology makes true immortality seem unlikely, but there's plenty of time (as it were) to develop new theories, make new discoveries, or find possible new solutions. See:
Cirkovic "Forecast for the Next Eon: Applied Cosmology and the Long-Term Fate of Intelligent Beings"
Adams "Long-term astrophysical processes"
for excellent overviews of the current best estimates of how long a human-complexity mind might hope to survive.
Just about everything Cirkovic writes on the subject is really engaging.
More importantly, cryonics is useful for preserving information. (Specifically, the information stored by your brain.) Not all of the information that your body contains is critical, so just storing your spinal cord + brain is quite a bit better than nothing. (And cheaper.) Storing your arms, legs, and other extremities may not be necessary.
(This is one place where the practical reasoning around cryonics hits ugh fields...)
Small-tissue cryonics has been more advanced than whole-body cryonics. This may not be the case anymore, but it certainly was, say, four years ago. S...
Your company plan sounds very much like how Valve is structured. You may find it challenging to maintain your desired organizational structure, given that you also plan to depend on external investment. Also, starting a company with the express goal of selling it as quickly as possible conflicts with several of the ways you might operate your company to achieve a high degree of success. Many of the recent small studios that have gone on to generate large amounts of revenue relative to their size (Terraria, Minecraft, etc.) are independently owned and bui...
Carl Shulman has convinced me that I should do nothing directly (in terms of labor) on the problem of AI risk; instead, I should become successful elsewhere and then direct resources toward the problem as I am able.
However, I believe I should:

1) continue to educate myself on the topic,

2) try to become a better rationalist, so that when I do have resources I can direct them effectively,

3) work toward being someone who can gain access to more resources, and

4) find ways to better optimize my lifestyle.
At one point I seriously considered running off to San Fran to be in...
I will say that I feel 95% confident that SIAI is not a cult because I spent time there (mjcurzi was there also), learned from their members, observed their processes of teaching rationality, hung out for fun, met other people who were interested, etc. Everyone involved seemed well meaning, curious, critical, etc. No one was blindly following orders. In the realm of teaching rationality, there was much agreement it should be taught, some agreement on how, but total openness to failure and finding alternate methods. I went to the minicamp wondering (along w...
Thanks for taking the time to respond.
I rebuilt my guitar thing and added today's datapoint and now it seems to be predicting my path properly. Makes more sense now. I think I was confused at first because I had made a custom graph instead of using the "Do More" prefab.
Neat software!
An exercise we ran at minicamp -- which seemed valuable, but requires a partner -- is to take a position and argue for it for some time. Then, at some interval, you switch and argue against the position (while your partner defends it). I used this once at work, but haven't had a chance since. The suggestion to swap sides mid-argument surprised the two participants, but it did lead to a more effective discussion.
The exercise sometimes felt forced if the topic was artificial and veered too far off course, or if one side was simply convinced and felt that further artificial defense was unproductive.
Still, it's a riff on this theme.
I created two goals:
https://www.beeminder.com/greenmarine
Both goals have perfectly tight roads. Is this correct? I would like to give myself some variance, since I'll probably not ever do exactly 180 minutes in a day. To start, I fudged the first day's value at the goal value.
Based on how you describe the system, it looks like I should expect to pay $5 if I practice 179 min...
It would actually be worthwhile to post a small analysis of Lifeboat. How they meet the crank checklist, etc. Do they do anything other than name drop on their website, etc?
Hiring Luke full time would be an excellent choice for the SIAI. I spent time with Luke at mini-camp and can provide some insight.
Luke is an excellent communicator and an agent for the efficient transmission of ideas. More importantly, he has the ability to teach these skills to others. Luke has demonstrated this skill publicly on Less Wrong and on his blog, with his distilled analysis of Eliezer's writing, "Reading Yudkowsky."
Luke is a genuine modern-day renaissance man, a true polymath. However, Luke is very aware of his limitations and ha
Is "transform function" a technical term from some discipline I'm unfamiliar with? I interpret your use of that phrase as "operation on some input that results in corresponding output." I'm having trouble finding meaning in your post that isn't redefinition.
Here is another question, regarding the basic methodology of study. When you are reading a scholarly work and you encounter an unfamiliar concept, do you stop to identify the concept, or do you continue and add the concept to a list to be pursued later? In other words, do you queue the concept for later inspection, or do you 'step into' the concept for immediate inspection?
I expect the answer to be conditional, but knowing what conditions is useful. I find myself sometimes falling down the rabbit hole of chasing chained concepts. Wikipedia makes this mistake easy.
Here's a question: does learning to read faster provide a net marginal benefit to the pursuit of scholarship? Are there narrow, focused, and confirmed methods of learning to read faster that yield positive results? This would benefit everyone, but perhaps more so those of us whose full-time jobs are not scholarship.
I've never had success with 'speed reading' in a way that allows me to consume more words per minute and have the same degree of retention and comprehension, especially for dense scholarly material.
Efficient scholarship benefits much more, I think, from learning to be strategic and have good intuitions about what to read - on the level of fields of knowledge, on the level of books and articles, and on the level of paragraphs within books and articles. I've been doing something like what I described in this post for at least two years and I have the impress...
Grunching. (Responding to the exercise/challenge without reading other people's responses first.)
Letting go is important. One failure of letting go is to cling to a professed belief in something you have come to disbelieve, because admitting the change involves pain. An example of this failure: I suggest a solution to a pressing design problem. Through conversation, it becomes apparent to me that my suggested solution is unworkable or has undesirable side effects. I realize the suggestion is a failure, but I defend it to protect my identity as an authority ...
I donated $275 to the SIAI via the Facebook page. Given the flight prices on Orbitz, this should cover somebody. Maybe not an east coaster or someone overseas.
Pledge fulfilled!
Also: I will be attending mini-camp and have also gotten my own ticket.
Yes. 40 per week.
I would be willing to do this work, but I need some "me" time first. The SIAI post took a bunch of spare time and I'm behind on my guitar practice. So let me relax a bit and then I'll see what I can find. I'm a member of Alcor and John is a member of CI and we've already noted some differences so maybe we can split up that work.
He is full time. According to the filings he reports 40 hours of work for the SIAI. (Form 990 2009, Part VII, Section A -- Page 7).
"Michael Vassar's Persistent Problems Group idea does need funding, though it may or may not operate under the SIAI umbrella."
It sounds like they have a similar concern.
I agree, this doesn't deserve to be downvoted.
It should be possible for the SIAI to build security measures while also providing some transparency into the nature of that security in a way that doesn't also compromise it. I would bet that Eliezer has thought about this, or at least thought about the fact that he needs to think about it in more detail. This would be something to look into in a deeper examination of SIAI plans.
At this point an admin should undelete the original SIAI Fundraising discussion post. I can't seem to do it myself. I can update it with a pointer to this post.
Thanks, I added a note to the text regarding this.
Yeah, I'll update it when the 2010 documents become available.
Added to the overview section.
I didn't know about that! I will update the post to use it as soon as I can. Thanks! Most of my work on this post was done by editing the HTML directly instead of using the WYSIWYG editor.
EDIT: All of the images are now hosted on lesswrong.
The older draft contains some misinformation. Much is corrected in the new version. I would prefer people use the new version.
Typo fixed.
I will donate the amount without earmarking it. It will fill the gap taken by the cost to send someone to the event.
I don't see a lot of value in earmarking funds for the SIAI. I'm working on a document about SIAI finances and from reading the Form 990s I believe they use their funds efficiently. Given my low knowledge of their internal workings and low knowledge of their immediate and medium term goals I would bet that they would be better at figuring out the best use of the money than I would be. Earmarking would increase the chance the money is used inefficiently, not decrease it.
Yes. In general, earmarking is a hideous pain in the backside for charities and leads to great inefficiency thinking about how to deal with this radioactive donation. If the donation is sufficiently large it may be worth it, but it's still a nuisance.
Simple heuristic: if you trust a charity enough to donate to them, just donate and leave them to figure out what to do with it. Don't try to micromanage.
Can everyone see all of the images? I received a report that some appeared broken.
Once I finish the todo at the top and get independent checking on a few things I'm not clear on, I can post it to the main section. I don't think there's value in pushing it to a wider audience before it's ready.
Zvi Mowshowitz! Wow color me surprised. Zvi is a retired professional magic player. I used to read his articles and follow his play. Small world.
I'm also going to see if I can get a copy of the 2010 filing.
Edit: The 2002 and on data is now largely incorporated. Still working on a few bits. Don't have the 2010 data, but the SIAI hasn't necessarily filed it yet.
Fixed.
The section that led me to my error was 2009 III 4c. The amount listed as expenses is $83,934 where your salary is listed in 2009 VII Ad as $95,550. The text in III 4c says:
"This year Eliezer Yudkowsky finished his posting sequences on Less Wrong [...] Now Yudkowsky is putting together his blog posts into a book on rationality. [...]"
This is listed next to two other service accomplishments (the Summit and Visiting Fellows).
If I had totaled the program accomplishments section I would have seen that I was counting some money twice (and also noticed that the total in this field doesn't feed back into the main sheet's results).
Please accept my apology for the confusion.
I -- thoughtlessly -- hadn't considered donating to the SIAI as a matter of course until recently (I helped run a fundraiser for something else through my company, and it got me thinking about it). Now reading the documentation on GuideStar has me thinking about it more...
Looking at the SIAI filings, I'd be interested in knowing more about the ~$118k that was misappropriated by a contractor (reported in 2009). I hadn't heard of that before. For an organization that raises less than or close to half a million a year, that's a painful blow.
Peter Thiel's contrib...
I applied to mini-camp. However, I may not be selected because of my personal situation (older, not college educated). I believe the mini-camp program is worth supporting and should be helped to be successful. I am willing to back up this belief with my wallet...and in public, so you all can hold me to it.
Whether or not I am selected, I pledge to pay for the flight of one individual who is (and who isn't me). This person must live in the continental United States.
If the easiest way to fulfill this pledge is to donate to the SIAI, earmarked for this purpos...
You're spending after-tax money if you buy the flight yourself, but before-tax money if you donate to SIAI, assuming it's a 501(c)(3). If you trust them to honor a targeted donation (I would), it's better to donate.
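The before-tax vs. after-tax point can be made concrete with a toy calculation. The 25% marginal rate below is an assumption for illustration only; the actual advantage depends on the donor's real marginal rate and whether they itemize deductions.

```python
# Toy comparison: buying a $275 flight with after-tax dollars vs. making a
# $275 tax-deductible donation. Marginal rate of 25% is assumed for
# illustration, not taken from the source.
marginal_rate = 0.25
cost = 275.00

# Buying the flight directly: you must earn enough pre-tax income to be
# left with $275 after tax.
pretax_needed_direct = cost / (1 - marginal_rate)

# Donating to a 501(c)(3): the donation is deductible, so the pre-tax
# cost is just the donation amount itself.
pretax_needed_donation = cost

print(round(pretax_needed_direct, 2))    # 366.67
print(round(pretax_needed_donation, 2))  # 275.0
```

Under these assumed numbers, the same $275 of support costs roughly $92 less in pre-tax income when routed through a deductible donation.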
Donation sent.
I've been very impressed with MIRI's output this year, to the extent that I am able to judge it. I don't have the domain-specific ability to evaluate the papers, but there is a sustained frequency of material being produced. I've also read much of the thinking around VAT, related open problems, and the definition of concepts like foreseen difficulties... the language and framework for carving up the AI safety problem have really moved forward.