All of mindviews's Comments + Replies

First off, let me say thank you for all the work that's gone into the site update by everyone involved! The three changes I like most are the new header design (especially the clear separation between Main and Discussion - the old menu was too cluttered), the nearby meetup section, and the expanding karma bubbles.

I had one question about how the nearby meetup list uses the Location field. Is the meetup list supposed to sort by location somehow? If so, what do I need to put in my location? Thanks!

4matt
It should be locating you by your IP address and showing the 5 nearest meetups from those scheduled over the next 14 days. We think it's broken. We intend (implementation unscheduled) to use the location you enter in your preferences if you have supplied one.
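(For the curious, the intended selection logic amounts to something like this sketch - hypothetical names and data shapes, not the actual site code:)

```python
# Sketch of the described behavior: filter meetups to the next 14 days,
# sort by distance from the user's (geolocated) position, keep the closest 5.
# All names here are illustrative, not the site's real code.
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearby_meetups(meetups, user_lat, user_lon, now=None):
    now = now or datetime.utcnow()
    upcoming = [m for m in meetups
                if now <= m["when"] <= now + timedelta(days=14)]
    upcoming.sort(key=lambda m: distance_km(user_lat, user_lon,
                                            m["lat"], m["lon"]))
    return upcoming[:5]
```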

I agree that the single comment view has more boilerplate up top, but otherwise I'd say it usually fits on screens without any trouble.

I was curious about your comment so I took a look at the screenshot. You say in the bug report that you're using a "fairly small font" setting, but the font is being rendered much larger for you than I see using default IE9 and FF4 settings. Also, your picture shows the page with a serif font while the CSS specifies sans-serif. I'm not sure if it's a browser issue or if you're using custom settings, but in a 1600... (read more)

0saturn
You don't know the (physical) pixel size of gwern's display. Personally, I can see the first 2 lines of a comment without scrolling; over 2/3rds of the visible space below the site logo is haphazardly filled with links and buttons. It would be nice if it could be reduced to just the main site header, the "You are viewing a comment permalink..." line, then the comment itself.

Sorry I can't make it this time - I've got travel plans this weekend. Hope to see everyone next time.

I'll make a weak vote for the IHOP near UCI. It's easy to get to, has free parking, and seemed to work reasonably well for the last meetup.

I'll be there. I'll be driving from Torrance and can give a ride to anyone who happens to be in that area or along the way.

mindviews

For those of you who are interested, some of us folks from the SoCal LW meetups have started working on a project that seems related to this topic.

We're working on building a fault tree analysis of existential risks with a particular focus on producing a detailed analysis of uFAI. I have no idea if our work will at all resemble the decision procedure SIAI used to prioritize their uFAI research, but it should at least form a framework for the broader community to discuss the issue. Qualitatively you could use the work to discuss the possible failure modes t... (read more)
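For anyone unfamiliar with fault trees, here's a minimal Python sketch of the basic mechanics - AND gates multiply probabilities, OR gates combine complements. The event names and numbers are made up for illustration; they are not our project's estimates:

```python
# Minimal fault tree mechanics. Assumes independent basic events;
# a real analysis has to model dependence between events.

def and_gate(*probs):
    # All inputs must fail: multiply probabilities.
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    # Any input failing suffices: 1 minus product of complements.
    p = 1.0
    for x in probs:
        p *= 1.0 - x
    return 1.0 - p

# Purely illustrative numbers.
p_goal_misspecified = 0.10
p_containment_fails = 0.05
p_recursive_takeoff = 0.20

# Hypothetical top event: a takeoff AND at least one safety failure.
p_top = and_gate(p_recursive_takeoff,
                 or_gate(p_goal_misspecified, p_containment_fails))
print(p_top)  # ~0.029
```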

6PeerInfinity
This project sounds really interesting and useful. It sounds a lot like a project that I tried and failed to get started. Or at least like part of that project. Though my project is so vague and broad that pretty much anything involving graphs/trees related to x-risks would seem "kinda like part of the project I was working on". Here's a link to another comment about that project. I would like to hear more about your project.

I'll be there but I may not arrive until ~2PM. Not sure what the setup at the IHOP is, but I can bring an LCD projector to hook up to any laptops that join us.

It's a social gathering for anyone interested in discussing anything relevant to the LW community. I personally have been part of discussing rationality in general, cryonics, existential risk, personal health, and cognitive bias (among other topics) at the 2 meetups I've been to. It's a good excuse to meet some other folks and trade ideas, start projects, etc.

I don't think we have an agenda organized for this one. But if you're curious, take a look at the comments from the September SoCal meetup for an idea about what was discussed and what people thought was good/bad/interesting about it.

0mdcaton
Was hoping to make it to this one from San Diego but couldn't; can't wait for the next one. Anyone in San Diego who needs a ride next time, hold onto my email, mdcblogs@gmail.com.

I tried something different and added a link to this section. Any comments on how that works?

2Alicorn
This is better.

I'll join in the fun - any suggestions appreciated.

My profile is currently limited to OKC users, though. I wish there were more LW ladies in SoCal who were easier to find...

4Alicorn
I don't recommend leaving this as a surprise. The question it prompts is "Why? What's wrong with his taste in books? Is he embarrassed or something?" And while these questions might go into a message to you from the girl of your dreams, they might also send her scurrying away.

Hi Darius - If no one else is driving through Burbank, I can backtrack and pick you up.

I'll be there. I've got space for 3 more in my car. If anyone in the Pasadena/Glendale area would like a ride, let me know.

Is there any philosophy worth reading?

Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist.

So my question is: What philosophical works and authors have you found especially valuable

... (read more)

An akrasia-fighting tool via Hacker News via Scientific American, based on this paper. Read the Scientific American article for the short version. My super-short summary: in self-talk, asking yourself "will I?" rather than telling yourself "I will" can make you more likely to succeed at goal-directed behavior. Looks like a useful tool to me.

0Nisan
It might also be a useful tool for attaining self-knowledge outside of goal-directed behavior. Consider this passage from The Aleph:
2Vladimir_Golovin
This implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist".

I found a public parking structure just around the corner here, with the first 2 hours free and, I believe, a $3 flat rate after 5pm - a good deal.

A trip through Burbank should be fine - I just PMed you contact details.

1darius
Thanks!

I'm game. I'll be driving from Pasadena and can give a ride if you need one.

2darius
Would a detour through Burbank be any bother? I don't have a car myself.
-2wedrifid
Thanks mindviews. That is one of them.

I got an amazing amount of use out of Order of Magnitude Physics. It can get you in the habit of estimating everything in terms of numbers. I've found that relentlessly calculating estimates greatly reduces the number of biased intuitive judgments I make. A good class will include a lot of interaction and out-loud thinking about the assumptions your estimates are based on. Also, or as an alternative, a high-level engineering design course can provide many of the same experiences within the context of a particular domain. (Aerospace/architecture/transpo... (read more)
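To give a flavor of the habit, here's the classic piano-tuners Fermi estimate as a toy script; every input is an order-of-magnitude guess, which is rather the point:

```python
# Toy Fermi estimate: how many piano tuners work in Chicago?
# The value of the exercise is making every assumption explicit
# so it can be argued about out loud.

population        = 3e6     # people in Chicago (rough)
people_per_house  = 3       # household size (rough)
piano_fraction    = 0.1     # fraction of households with a piano (guess)
tunings_per_year  = 1       # per piano (guess)
tunings_per_day   = 4       # one tuner's daily capacity (guess)
workdays_per_year = 250

pianos = population / people_per_house * piano_fraction
demand = pianos * tunings_per_year            # tunings needed per year
supply = tunings_per_day * workdays_per_year  # tunings per tuner per year
print(round(demand / supply))                 # ~100 tuners
```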

So you're positing a technique that takes advantage of inflationary theory to permanently get rid of an AI. Thermite - very practical. Launching the little AI box across the universe at near light-speed for a few billion years until inflation takes it beyond our horizon - not practical.

To bring this thread back onto the LW Highway...

It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement - probably to preserve the appearance of consistency, out of the desire not to be wrong in public, etc. (Do we have a list... (read more)

6JoshuaZ
I don't think that wedrifid made those remarks to save face or the like, since wedrifid is the individual who proposed both the thermite and the light cone options. The light cone option was clearly humorous, and wedrifid then explained how it would work (for some value of "work"). If I am reading this correctly, there was no serious intent in that proposal at all, except to emphasize that wedrifid sees destruction as the only viable response.

I'm pretty sure I'm not mistaken. At the risk of driving this sidetrack off a cliff...

Once an object (in this case, a potentially dangerous AI) is in our past light cone, the only way for its world line to stay outside of our future light cone forever (besides terminating it through thermite destruction as mentioned above) is for it to travel at the speed of light or faster. That was the physics nitpick I was making. In short, destroy it because you cannot send it far enough away fast enough to keep it from coming back and eating us.

wedrifid

Close, but the tricky part is that the universe can expand at greater than the speed of light. Nothing that carries causal influence (photons included) can travel faster than c, but the fabric of spacetime itself can expand faster than light. Models of the first 10^-30 seconds highlight this to an extreme degree. Even now, some of the galaxies visible to us are receding from us at more than a light year per year. That means the light they are currently emitting (if any) will never reach us.

To launch... (read more)
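For concreteness, the standard back-of-the-envelope here is Hubble's law, taking the conventional round value H_0 ≈ 70 km/s/Mpc:

```latex
v = H_0 d, \qquad H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}
% Setting v = c gives the Hubble radius:
d_H = \frac{c}{H_0}
    \approx \frac{3 \times 10^{5}\ \mathrm{km/s}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
    \approx 4.3 \times 10^{3}\ \mathrm{Mpc}
    \approx 1.4 \times 10^{10}\ \mathrm{light\text{-}years}
```

Galaxies beyond roughly that distance are receding faster than light today, which is why the light they emit now can never reach us.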

it's basically saying that gravity and EM are both obeying some more general law

No, what's happening is that under certain approximations the two are described by similar math. The trick is to know when the approximations break down and what the math actually translates to physically.

Does it suggest a way to unify gravity and EM?

No.

Keep in mind that for EM there are 2 charges while gravity has only 1. Also, like electric charges repel while like gravitic charges attract. This messes with your expectations about the sign of an interaction when you... (read more)
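The sign point shows up already at the static, inverse-square level; putting the two force laws side by side (a standard textbook comparison):

```latex
% Coulomb: like charges (q_1 q_2 > 0) give a positive, repulsive force
F_{\mathrm{EM}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}
% Newton: like "charges" (masses, always positive) attract, hence the minus sign
F_{\mathrm{grav}} = -\,G\,\frac{m_1 m_2}{r^2}
```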

0SilasBarta
True, but what got me the most interested is the gravitic analog of magnetic fields. It shows that masses can produce something analogous to magnetism by their rotation. Rotate one way, you drag the object closer; rotate the other way, you push it away. This allows both attraction and repulsion in the equations for gravity, and suggests something similar is going on that generates magnetism.

Well, I suppose you could launch them out of our future light cone.

I hope that was a joke because that doesn't square with our current understanding of how physics works...

2wedrifid
You are mistaken.

The morals of FAI theory don't mesh well at all with the morals of transhumanism.

It's not clear to me that a "transhuman" AI would have the same properties as a "synthetic" AI. I'm assuming that a transhuman AI would be based on scanning in a human brain and then running a simulation of the brain, while a synthetic AI would be more declaratively algorithmic. In that scenario, proving that a self-modification would be an improvement would be much more difficult for a transhuman AI, so I would treat it differently. Because of that, I'd exp... (read more)

Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.

The critical aspect of a "major-impact intelligence-explosion singularity" isn't the method for improvement but the rate of improvement. If computer processing power continues to grow at an exponential rate, even an inefficiently improving AI will have the growth in raw computing power behind it.
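To make the rate argument explicit: if hardware capacity doubles every T years (historically T has been somewhere around 1.5 to 2 years), then even an AI running a fixed algorithm has raw capacity growing as

```latex
C(t) = C_0 \, 2^{t/T}
```

and any algorithmic self-improvement multiplies on top of that curve.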

So: do you know any counterarguments or articles that address either of these points?

I don't have any articles but I'll take a st... (read more)

I don't think that's a good example. For the status-quo bias to be at work, we'd need to think it's worse for people to have less personal responsibility and also worse for them to have more (i.e., the status quo is a local optimum). I'm not sure anyone would argue that having more personal responsibility is bad, so the status-quo bias wouldn't be in play and the preference reversal test wouldn't apply. (A similar argument works for the current rate of heroin addiction not being a local optimum.)

I think the problem in the example is ... (read more)

Thoughts I found interesting:

The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

Interesting because I don't think it's true. I think the problem is more about the need of AI builders to show results. Providing a solution (or a partial solution or a path to a solution) in a narrow context is a way to do that when your tools aren't yet powerful enough for more general or mixed approaches. Given the v... (read more)

Hi all - been lurking since LW started and followed Overcoming Bias before that, too.