I agree that the single comment view has more boilerplate up top, but otherwise I'd say it usually fits on screens without any trouble.
I was curious about your comment, so I took a look at the screenshot. You say in the bug report that you're using a "fairly small font" setting, but the font renders much larger for you than what I see with default IE9 and FF4 settings. Plus, your picture shows the page in a serif font while the CSS specifies sans-serif. I'm not sure if it's a browser issue or if you're using custom settings, but in a 1600...
Sorry I can't make it this time - I've got travel plans this weekend. Hope to see everyone next time.
Count me in.
I plan on coming.
I'll make a weak vote for the IHOP near UCI. It's easy to get to, has free parking, and seemed to work reasonably well for the last meetup.
I'll be there. I'll be driving from Torrance and can give a ride to anyone who happens to be in that area or along the way.
For those of you who are interested, some of us folks from the SoCal LW meetups have started working on a project that seems related to this topic.
We're working on building a fault tree analysis of existential risks with a particular focus on producing a detailed analysis of uFAI. I have no idea if our work will at all resemble the decision procedure SIAI used to prioritize their uFAI research, but it should at least form a framework for the broader community to discuss the issue. Qualitatively, you could use the work to discuss the possible failure modes t...
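If it helps to picture what a fault tree is, here's a minimal sketch in Python of the underlying structure. The node names and probabilities are entirely made up for illustration, and the gates assume independent events; this is not our actual model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    gate: str = "LEAF"          # "LEAF", "AND", or "OR"
    prob: float = 0.0           # used only for leaves
    children: List["Node"] = field(default_factory=list)

    def probability(self) -> float:
        if self.gate == "LEAF":
            return self.prob
        child_ps = [c.probability() for c in self.children]
        if self.gate == "AND":   # all child failures must occur
            p = 1.0
            for cp in child_ps:
                p *= cp
            return p
        # OR gate: at least one child failure occurs
        p_none = 1.0
        for cp in child_ps:
            p_none *= (1.0 - cp)
        return 1.0 - p_none

# Illustrative numbers only -- not estimates anyone has actually made.
top = Node("uFAI catastrophe", "AND", children=[
    Node("AGI built", prob=0.5),
    Node("Friendliness not solved in time", prob=0.4),
])
print(top.probability())  # 0.2
```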
I'll be there, but I may not arrive until ~2PM. Not sure what the setup at the IHOP is, but I can bring an LCD projector to hook up to any laptops that join us.
It's a social gathering for anyone interested in discussing anything relevant to the LW community. I personally have been part of discussing rationality in general, cryonics, existential risk, personal health, and cognitive bias (among other topics) at the 2 meetups I've been to. It's a good excuse to meet some other folks and trade ideas, start projects, etc.
I don't think we have an agenda organized for this one. But if you're curious, take a look at the comments from the September SoCal meetup for an idea about what was discussed and what people thought was good/bad/interesting about it.
I tried something different and added a link to this section. Any comments on how that works?
I'll join in the fun - any suggestions appreciated.
My profile is currently limited to OKC users, though. I wish there were more LW ladies in SoCal who were easier to find...
Hi Darius - If no one else is driving through Burbank, I can backtrack and pick you up.
I'll be there. I've got space for 3 more in my car. If anyone in the Pasadena/Glendale area would like a ride, let me know.
Is there any philosophy worth reading?
Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist.
...So my question is: What philosophical works and authors have you found especially valuable
An akrasia-fighting tool, via Hacker News via Scientific American, based on this paper. Read the Scientific American article for the short version. My super-short summary: in self-talk, asking yourself "Will I?" rather than telling yourself "I will" can be more effective at achieving success in goal-directed behavior. Looks like a useful tool to me.
I found a public parking structure just around the corner from here with the first 2 hours free and, I believe, a $3 flat rate after 5 PM - a good deal.
A trip through Burbank should be fine - I just PMed you contact details.
I'm game. I'll be driving from Pasadena and can give a ride if you need one.
Were you thinking of "Affirmative Action Wasn't About Uplift"?
http://www.overcomingbias.com/2009/07/affirmative-action-wasnt-about-uplift.html
I got an amazing amount of use out of Order of Magnitude Physics. It can get you in the habit of estimating everything in terms of numbers. I've found that relentlessly calculating estimates greatly reduces the number of biased intuitive judgments I make. A good class will include a lot of interaction and out-loud thinking about the assumptions your estimates are based on. Also or as an alternative, a high-level engineering design course can provide many of the same experiences within the context of a particular domain. (Aerospace/architecture/transpo...
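For a taste of the habit, here's a classic Fermi estimate worked through in code. Every input below is a round number I just made up; the exercise is making your assumptions explicit and multiplying them out:

```python
# Toy order-of-magnitude estimate: piano tuners in Los Angeles.
# All inputs are rough guesses -- the goal is the right power of ten.
population = 4e6                # people in LA proper
people_per_household = 2
households_with_piano = 0.05    # fraction owning a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50   # 2/day, 5 days/wk, 50 wks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # ~200
```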
So you're positing a technique that takes advantage of inflationary theory to permanently get rid of an AI. Thermite - very practical. Launching the little AI box across the universe at near light-speed for a few billion years until inflation takes it beyond our horizon - not practical.
To bring this thread back onto the LW Highway...
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement - probably to preserve the appearance of consistency, the desire not to be wrong in public, etc. (Do we have a list...
I'm pretty sure I'm not mistaken. At the risk of driving this sidetrack off a cliff...
Once an object (in this case, a potentially dangerous AI) is in our past light cone, the only way for its world line to stay outside of our future light cone forever (besides terminating it through thermite destruction as mentioned above) is for it to travel at the speed of light or faster. That was the physics nitpick I was making. In short, destroy it because you cannot send it far enough away fast enough to keep it from coming back and eating us.
Close, but the tricky part is that the universe can expand at greater than the speed of light. Nothing that can influence cause and effect (like photons) can travel faster than c, but the fabric of spacetime itself can expand faster than the speed of light. Looking at models of the first 10^-30 seconds highlights this to an extreme degree. Even now, some of the galaxies that are visible to us are receding from us at more than a light-year per year. That means that the light they are currently emitting (if any) will never reach us.
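As a rough sanity check on that last claim, you can work out the distance at which the Hubble flow carries things away at c. Assuming H0 is around 70 km/s/Mpc (all figures approximate):

```python
# Distance at which recession velocity equals c (the Hubble radius).
c = 3.0e5                  # speed of light, km/s
H0 = 70.0                  # Hubble constant, km/s per Mpc (assumed)
d_mpc = c / H0             # ~4300 megaparsecs
ly_per_mpc = 3.26e6        # light-years per megaparsec
print(d_mpc * ly_per_mpc)  # ~1.4e10 light-years
```

Anything much beyond ~14 billion light-years is receding from us at more than a light-year per year.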
To launch...
it's basically saying that gravity and EM are both obeying some more general law
No, what's happening is that under certain approximations the two are described by similar math. The trick is to know when the approximations break down and what the math actually translates to physically.
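The standard textbook instance of that "similar math" is the static, non-relativistic limit, where both forces are inverse-square:

```latex
F_{\text{grav}} = -G\,\frac{m_1 m_2}{r^2}
\qquad
F_{\text{elec}} = \frac{1}{4\pi\epsilon_0}\,\frac{q_1 q_2}{r^2}
```

The sign is where the analogy starts to strain, which is the point about interactions below.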
Does it suggest a way to unify gravity and EM?
No.
Keep in mind that for EM there are 2 charges while gravity has only 1. Also, like electric charges repel while like gravitic charges attract. This messes with your expectations about the sign of an interaction when you...
Well, I suppose you could launch them out of our future light cone.
I hope that was a joke because that doesn't square with our current understanding of how physics works...
The morals of FAI theory don't mesh well at all with the morals of transhumanism.
It's not clear to me that a "transhuman" AI would have the same properties as a "synthetic" AI. I'm assuming that a transhuman AI would be based on scanning in a human brain and then running a simulation of it, while a synthetic AI would be more declaratively algorithmic. In that scenario, proving that a self-modification would be an improvement would be much more difficult for a transhuman AI, so I would treat it differently. Because of that, I'd exp...
Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.
The critical aspect of a "major-impact intelligence-explosion singularity" isn't the method for improvement but the rate of improvement. If computer processing power continues to grow at an exponential rate, even an inefficiently improving AI will have the growth in raw computing power behind it.
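To put a toy number on that, assume (purely for illustration) that available compute doubles every 2 years:

```python
# Growth in raw compute under a hypothetical 2-year doubling time.
doubling_time_years = 2.0
years = 20
growth = 2 ** (years / doubling_time_years)
print(growth)  # 1024.0 -- three orders of magnitude in two decades
```

Even an AI whose self-improvement method is mediocre gets that multiplier for free.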
So: do you know any counterarguments or articles that address either of these points?
I don't have any articles but I'll take a st...
I don't think that's a good example. For the status-quo bias to be at work, we need it to be the case that we think it's worse for people to have either less or more personal responsibility (i.e., the status quo is a local optimum). I'm not sure anyone would argue that having more personal responsibility is bad, so the status-quo bias wouldn't be in play and the preference reversal test wouldn't apply. (A similar argument works for the current rate of heroin addiction not being a local optimum.)
I think the problem in the example is ...
Thoughts I found interesting:
The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.
Interesting because I don't think it's true. I think the problem is more about the need of AI builders to show results. Providing a solution (or a partial solution or a path to a solution) in a narrow context is a way to do that when your tools aren't yet powerful enough for more general or mixed approaches. Given the v...
Hi all - been lurking since LW started and followed Overcoming Bias before that, too.
First off, let me say thank you for all the work that's gone into the site update by everyone involved! The three changes I like most are the new header design (especially the clear separation between Main and Discussion - the old menu was too cluttered), the nearby meetup section, and the expanding karma bubbles.
I had one question about the nearby meetup list and Location. Is the meetup list supposed to sort by location somehow? If so, what do I need to put in my Location? Thanks!