Comment author: protest_boy 22 July 2014 01:05:52AM 0 points [-]

So there's a MIRIxMountain View, but would a MIRIxEastBay/SF be redundant? The MIRIx label seems readily bestowed even on low-key research efforts, and given the hacker culture and rationality communities there, there may be interest in this.

Comment author: protest_boy 18 July 2014 04:38:29AM -1 points [-]

I have a question about the nature of generalization and abstraction. Human reasoning is commonly split into two categories: deductive and inductive. Are all instances of generalization examples of inductive reasoning? If so, does that mean a deep enough understanding of inductive reasoning would let you broadly create "better" abstractions?

For example, generalizing the integers to the rationals satisfies a couple of needs: the theoretical need to remove previous restrictions on the operations of subtraction and division, and (AFAIK) the practical need to represent measurable quantities. At first glance this generalization doesn't seem to fit the examples given at http://en.wikipedia.org/wiki/Inductive_reasoning, and I was hoping someone could give me some nuggets of insight here. Or can someone point out what evidence leads to this inductive conclusion/generalization?
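(To make the generalization concrete: a rational can be modeled as an equivalence class of integer pairs (a, b) with b ≠ 0, where (a, b) ~ (c, d) iff ad = bc. This is a rough sketch of that construction, not part of the original comment; the class name `Rat` is made up for illustration.)

```python
from math import gcd

class Rat:
    """A rational as a reduced pair of integers (num, den), den > 0."""

    def __init__(self, num, den=1):
        if den == 0:
            raise ZeroDivisionError("denominator must be nonzero")
        if den < 0:                      # normalize sign into the numerator
            num, den = -num, -den
        g = gcd(num, den) or 1           # reduce to canonical form
        self.num, self.den = num // g, den // g

    def __eq__(self, other):
        # the defining equivalence: (a, b) ~ (c, d) iff a*d == b*c
        return self.num * other.den == self.den * other.num

    def __sub__(self, other):
        # subtraction is now total, unlike on the naturals
        return Rat(self.num * other.den - other.num * self.den,
                   self.den * other.den)

    def __truediv__(self, other):
        # division is total on nonzero rationals, unlike on the integers
        return Rat(self.num * other.den, self.den * other.num)

    def __repr__(self):
        return f"{self.num}/{self.den}"
```

The restrictions that motivated the generalization (you can't always subtract naturals or divide integers) simply disappear: `Rat(1, 2) - Rat(3, 4)` and `Rat(1, 3) / Rat(2, 5)` are always defined.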

Comment author: protest_boy 18 July 2014 12:32:08AM -1 points [-]

Related -- here are some attempts to formalize and understand analogy from a category theoretic perspective:

http://link.springer.com/article/10.1023/A:1018963029743
http://pages.bangor.ac.uk/~mas010/pdffiles/Analogy-and-Comparison.pdf

Comment author: protest_boy 12 July 2014 05:09:30AM 2 points [-]

Is there a way to tag a user in a comment such that the user will receive a notification that s/he's been tagged?

Comment author: protest_boy 12 July 2014 05:08:07AM 0 points [-]

Before I embark on this seemingly Sisyphean endeavor: has anyone attempted to measure "philosophical progress"? No philosophical problem I know of is fully solved, and no general methods are known that reliably give true answers to philosophical problems. Despite this, we have definitely made progress: e.g. we can chart human progress on the problem of induction, of which an extremely rough sketch looks like Epicurus --> Occam --> Hume --> Bayes --> Solomonoff, or something. I don't really know, but there seem to be issues with Solomonoff's formalization of induction.

I'm thinking of "philosophy" as something like "pre-mathematics: making progress on confusing questions for which no reliable answer-producing methods yet exist, or forming a concept of something and formalizing it". It's also not clear to me that "philosophy" exists independent of the techniques it has spawned historically, but there are some problems for which the label "philosophical problem" seems appropriate, e.g. "how do uncertainties work in a universe where infinite copies of you exist?", and like, all of moral philosophy, etc.

Comment author: lukeprog 20 January 2013 07:16:45PM *  3 points [-]

You're right, I should say more about what I mean by "Eliezer-level philosophical ability." Clearly, I don't mean "writing clarity," as many of my favorite analytic philosophers write more clearly than Eliezer does.

It'll take me some time to prepare that explanation. For now, let me show some support for your comment by linking to another example of Eliezer being corrected by a professional philosopher.

Comment author: protest_boy 20 June 2014 08:09:54AM -1 points [-]

Do you have anything quick to add about what you mean by "Eliezer-level philosophical ability"?

Comment author: Qiaochu_Yuan 01 June 2014 06:43:31PM *  7 points [-]

I keep a list, in Workflowy, of titles for posts almost none of which I've turned into posts. (I generally recommend using Workflowy for capture in this way.) Here are the ones where I at least remember what the point of the post was supposed to be:

  • Against ethical consistency
  • Against ethical criteria
  • Against verbal reasoning
  • The instrumental lens
  • Maximizing utility vs. the hedonic treadmill
  • Mathematics for rationalists
  • Beware cool ideas
  • How to not die (RomeoStevens already wrote this post though)
Comment author: protest_boy 10 June 2014 06:10:19AM -1 points [-]

I would love to see these as posts. (I really enjoyed your posts on the CFAR list about human ethics).

What does "The instrumental lens" hint at?

Comment author: protest_boy 30 April 2014 10:43:06PM *  8 points [-]

Everyone's posting evidence for this, which is great (LW is awesome), but I'm also interested in rebuttals of the sort "I expected it to hugely change my social life, but it didn't really."

In particular, for me:

  • I found out about CFAR from LW and attended a CFAR workshop
  • I've attended a couple of meetups in the bay area
  • I found out about 80000 hours, GiveWell, MIRI, and effective altruism in general, which has been a large force in my life
  • I've met many interesting people working on many interesting things in spheres that I care about

Declaring pseudo-Crocker's rules...

Not long after I found out about LW, I expected to e.g. move into a rationalist community, immerse myself in the memespace, etc. But there's a distinct qualitative difference between hanging out with friends I've met through more prosaic circles (house parties, friends of friends, college, etc.) and hanging out with people at the meetups I've been to, and even at the CFAR workshop. I find it hard to really connect with most people I've met through LW in a way that gives me the fuzzywuzzies, even though many of us share similar values and are working towards similar goals.

Yes, my friends are stoners, entrepreneurs, weirdos, normals, hot people, people-probably-more-concerned-with-social-status-than-LWers, whatever. Some of them know about LW and are familiar with rationality concepts. But I just have a really fun time with them, and I haven't had that in my experiences so far with LW people. I suspect (at the risk of sounding insulting) that there's a difference in social acumen and sense of humor or something. I honestly found some of my social experiences with LWers kind of alienating.

Please note I'm not drawing a hard-and-fast line here (and obviously there's a selection effect), but I'm curious whether anyone else has had the same experience.

Comment author: protest_boy 03 February 2014 12:42:19AM 0 points [-]

I'm not sure that he doesn't have "natural" skill or talent. I can't find the link now, but I remember reading that he has an extremely high IQ (or something something eidetic memory something something?).

A recurring motif in his standup comedy routines is how much smarter he is than everyone else, etc. etc. (anecdata).

Comment author: James_Miller 29 January 2014 05:00:43PM 2 points [-]

I'm looking for a non-language-specific book on proper computer programming techniques.

Comment author: protest_boy 01 February 2014 08:09:45AM 2 points [-]

I highly recommend the book Concepts, Techniques, and Models of Computer Programming (http://www.amazon.com/Concepts-Techniques-Models-Computer-Programming/dp/0262220695), which is the closest thing I've seen to distilling programming to its essence. It's language-agnostic in the sense that you start with a small "kernel language" and build it up, incorporating different concepts as needed.
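(A rough illustration of the kernel-language idea, in Python rather than the book's Oz-based kernel, and entirely my own sketch with made-up names: start with a small functional core, then add a new concept such as explicit state by registering new cases, rather than rewriting the core.)

```python
# Core kernel: a tiny expression evaluator. Expressions are nested tuples;
# new language concepts are added by registering handlers, not editing ev().
HANDLERS = {}

def ev(expr, env):
    if isinstance(expr, int):            # integers evaluate to themselves
        return expr
    return HANDLERS[expr[0]](expr, env)

# The purely functional core: arithmetic, variables, local binding.
HANDLERS['add'] = lambda e, env: ev(e[1], env) + ev(e[2], env)
HANDLERS['var'] = lambda e, env: env[e[1]]
HANDLERS['let'] = lambda e, env: ev(e[3], {**env, e[1]: ev(e[2], env)})

# A later "chapter": add explicit state (mutable cells) as new cases,
# leaving the functional core untouched.
class Cell:
    def __init__(self, v):
        self.v = v

def _set(e, env):
    c = ev(e[1], env)
    c.v = ev(e[2], env)
    return c.v

HANDLERS['newcell'] = lambda e, env: Cell(ev(e[1], env))
HANDLERS['get'] = lambda e, env: ev(e[1], env).v
HANDLERS['set'] = _set
```

For example, `ev(('let', 'x', 2, ('add', ('var', 'x'), 3)), {})` uses only the functional core, while a program that allocates and updates a cell exercises the stateful extension with the same evaluator.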
