Comment author: 16 September 2013 02:30:51PM 0 points [-]

I am seeking a mathematical construct to use as a logical coin for the purpose of making hypothetical decision theory problems slightly more aesthetically pleasing. The required features are:

• Unbiased. It gives (or can be truncated or otherwise resolved to give) a 50/50 split on a boolean outcome.
• Indexable. The coin can be used multiple times via a sequence number, e.g. "The n-th digit of pi is even".
• Intractable. The problem is too hard to solve in practice, either because there is no polynomial-time algorithm for it or just because it is somewhat difficult and the supplied 'n' is ridiculously large, e.g. "The 3^^^3th digit of pi is even".
• Provable or otherwise verifiable. When the result of the logical coin is revealed, it should be possible to also supply a proof of the result that would convince a mathematician that the revealed outcome is correct.
• Simple to refer to. Either there is a common name for the problem or a link to a description is available. The more well known or straightforward, the better.

NP-complete problems have many of the desired features, but I don't know off the top of my head any that can be used as an indexable fair coin.

Can anyone suggest some candidates?
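For concreteness, here is a minimal sketch (my own illustration, not something proposed in the thread) of the pi-digit coin, using the Bailey-Borwein-Plouffe (BBP) formula to extract individual hexadecimal digits. One caveat worth knowing relative to the "intractable" requirement: BBP makes individual hex digits of pi computable without computing the preceding ones, so the base-16 version of this coin is easier than it looks.

```python
# Sketch of an indexable "logical coin": parity of the n-th hex digit of pi,
# computed directly via the Bailey-Borwein-Plouffe (BBP) formula.
# Only practical for modest n; the point is to illustrate the interface.

def pi_hex_digit(n: int) -> int:
    """Return the n-th hexadecimal digit of pi after the point (n >= 1)."""
    d = n - 1  # BBP extracts digits from the fractional part of 16^d * pi

    def series(j: int) -> float:
        # sum over k of 16^(d-k) / (8k + j), keeping only the fractional part
        s = 0.0
        for k in range(d + 1):
            # modular exponentiation keeps the integer part from overflowing
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d + 1
        while True:  # tail terms, where 16^(d-k) is already < 1
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s % 1.0

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(x * 16)

def logical_coin(n: int) -> bool:
    """The coin: True iff the n-th hex digit of pi is odd."""
    return pi_hex_digit(n) % 2 == 1
```

The hex expansion of pi starts 3.243F6A88..., so `logical_coin(1)` is False (digit 2) and `logical_coin(3)` is True (digit 3).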

Comment author: 16 September 2013 10:58:14PM 1 point [-]

My first idea is to use something based on cryptography. For example, using the parity of the pre-image of a particular output from a hash function.

That is, the parity of x in this equation:

f(x) = n, where n is your index variable and f is some hash function assumed to be hard to invert.

This does require assuming that the hash function is actually hard to invert, but that assumption both seems reasonable and is at least one that actual humans can't provide a counterexample for. It's also very fast to go from x to n, so the scheme is easy to verify.
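To make the asymmetry concrete, here's a toy sketch (my own, with SHA-256 standing in for "some hash function f"). The hash is truncated to 16 bits purely so that the brute-force pre-image search terminates in a demo; the real proposal would use the full output, making the search infeasible while verification stays one hash evaluation:

```python
import hashlib

def f(x: int, bits: int = 16) -> int:
    """Toy hash: SHA-256 of the decimal string of x, truncated to `bits` bits.
    Truncation is only so the search below terminates quickly."""
    digest = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def flip_coin(n: int) -> int:
    """The hard direction: brute-force a pre-image x with f(x) = n."""
    x = 0
    while f(x) != n:
        x += 1
    return x

def verify_coin(x: int, n: int) -> bool:
    """The easy direction: one hash evaluation checks the revealed x;
    the coin outcome is the parity of x."""
    if f(x) != n:
        raise ValueError("x is not a pre-image of n")
    return x % 2 == 1
```

With 16 output bits, finding a pre-image for an index n takes on the order of 2^16 hash evaluations; with the full 256 bits the same search is computationally infeasible, which is exactly the intractability property wanted.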

Comment author: 10 May 2013 07:09:58PM *  23 points [-]

My neck is asymmetrical because some years back I used to often lie in bed while using a laptop, and would prop my head up on my left elbow, but not my right because there was a wall in the way. In general, using a laptop while lying in bed is an ergonomics nightmare. The ideal would be to lie on your back with the laptop suspended in the air above you, except that that would make typing inconvenient.

So a friend recently blew my mind by informing me that prism glasses are a thing. These rotate your field of vision 90 degrees downwards, so that you can lie on your back and look straight up while still seeing your laptop. I have tried these and highly recommend them.

That said: You should probably not do non-sleep/sex things in bed because that can contribute to insomnia. I recommend trying a standing desk, by putting a box or a chair on top of your desk and putting your laptop on top of that, then just standing permanently; it will be painful at first. I'm also currently experimenting with only allowing myself to sit down with my laptop if I'm at the same time doing the highest-value thing I could be doing (which is usually ugh-fielded and unpleasant, because otherwise I'd have already done it).

Another thing: I have a crankish theory that looking downwards lowers your unconscious estimation of your own social status (which seems to be partly what is meant by "confidence"/"self-esteem"). If that's true, prism glasses and standing desks could increase confidence.

Comment author: 11 May 2013 03:30:20AM 3 points [-]

Obligatory note re: standing desk ergonomics: http://ergo.human.cornell.edu/CUESitStand.html

The lesson seems to be to mostly sit, but stand and walk around every 30-45 minutes or so.

Comment author: 30 November 2012 12:08:55PM *  9 points [-]

The people who excel at Starcraft don't do it because they follow explicit systems. They do it mostly by practice (duh) and by listening to the advice of people like Day[9].

Day[9] is the best-known Starcraft II commentator, with many YouTube videos (here's a random example) and many millions of views. He occasionally does explain systems (or subsystems, really) for playing, but what I think he mostly does right is that

• he entertains and engages his audience really well,
• he evidently knows what he's talking about,
• he is relentlessly positive and has a good video about that,
• he exudes total confidence that luck has almost nothing to do with your results,
• he can talk way better than anyone I've ever heard talk about rationality and
• he is easy to like, and easy to want to be like.

I may be missing something, but I think this is most of what he does so right about teaching what he teaches. Anyway, my point is clear: We don't need systems, we need a Day[9] of rationality.

AIs may need systems. We aren't AIs.

Comment author: 01 December 2012 04:49:39PM *  4 points [-]

I think that the main difference between people who do and don't excel at SC2 isn't that experts don't follow algorithms, it's that their algorithms are more advanced/more complicated.

For example, Day[9]'s build-order-focused shows are mostly about filling in the details of the decision tree/algorithm to follow for a specific "build". Or, if you listen to professional players talking about how they react to beginners asking for detailed build orders, the response isn't "just follow your intuition", it's "this is the order you build things in, spend your money as fast as possible, react in these ways to these situations", which certainly looks like an algorithm to me.

Edit: One other thing regarding practice: We occasionally talk about 10,000 hours and so on, but a key part of that is 10,000 hours of "deliberate practice", which is distinguished from just screwing around as being the sort of practice that lets you generate explicit algorithms.

Comment author: 16 October 2012 06:04:31PM *  44 points [-]

Better reading: "If money doesn't make you happy, then you probably aren't spending it right", Dunn et al 2011, Journal of Consumer Psychology

Comment author: 18 October 2012 11:41:57PM 4 points [-]

I actually see a connection between the two: One of the points in the article is to buy experiences rather than things, and Alicorn's post seems to be (possibly among other things) a set of ways to turn things into experiences.

Comment author: 11 September 2012 01:13:26AM 1 point [-]

Update: Sinak's SRS system as reviewed by Hacker News: http://news.ycombinator.com/item?id=4496647

Comment author: 11 September 2012 01:38:52AM 1 point [-]

I'm not sure about the rest of the app, but the bookmarklet seems like a ridiculously good idea. The 'trivial inconvenience' of actually making cards for things is really brutal; anything that helps seems like a big deal.

Comment author: 05 September 2012 09:24:25PM 2 points [-]

My friend kept repeating roughly the same arguments to me about why he couldn't feel better about his situation. I rather suspect I've done something similar in regards to some of my problems.

The nature of self-defeating behavior is to be self-sustaining. Or to put it another way, our problems usually live one meta-level above the place we insist they are. (Or perhaps one assumption-level below?)

IOW, the arguments we repeat about why we can't do something are correct, if viewed from within the assumptions we're making. The trick is that at least one of those assumptions must therefore be wrong, and you have to find out which ones. The original NLP metamodel is one such tool for identifying such assumptions, or at least pointing to where an assumption must exist in order for the argument to appear to make sense.

when I try asking myself about my motivations, they form cycles rather than (as in the book) a straight line to the basic motivations.

There are at least a couple of ways you could end up cycling, that I can think of. One is that you're not actually connecting with your near-mode brain about the subject, and are thus ending up in abstractions. Another is that you're not placing enough well-formedness constraints on your questions. At each level, you have to imagine that you already have ALL the things you wanted before... which would make it kind of difficult to cycle back to wanting a previous thing.

In other words, the most likely cause (assuming you're not just verbalizing in circles and not connecting with actual near-mode feelings and images and such), is that you're not fully imagining having the things that you want, and experiencing what it would be like to already have them.

This is a stumbling block for a lot of techniques, not just Core Transformation. The key to overcoming it is to notice whether you have something preventing you from imagining "what it would be like", like thinking it's unrealistic, bad, or whatever. Noticing and handling these objections is the real meat of almost ANY mindhacking process, because they're the "second meta-level" issues I alluded to above, which are otherwise so very hard to notice or identify.

If you don't address these objections, but instead just plow through the technique (whether it's CT or anything else), you'll get inconsistent results, problems that seem to go away and then come back, etc.

(NLP sometimes refers to these things as "ecology", but relatively little time is spent on the subject in entry-level training. It's something that you need lots of examples of in order to really "get", because the principles by themselves are like saying you can ride a bike by "pumping the pedals and maintaining your balance". Knowing it and doing it just aren't the same.)

I tried going to a practitioner, and I'm now a lot more cynical about certifications.

Sadly, NLP practitioner certification at best means that you learned some REALLY basic stuff and were able to do it when supervised, and while doing it with people who are receiving the same training at the same time.

That is, NLP certification drills are done by trainee groups, who thus already know what's expected of them, which means nobody gets much experience of what it would be like to walk somebody through a technique who didn't receive the same training!

Your idea that the basis of the problem with Core Transformation is people not letting themselves feel what they're actually feeling makes sense.

Not actually what I said: it's about not allowing ourselves to feel good unless certain conditions are met. Or more precisely, our brain's rules about feelings are not reflexive: if you have a rule that says "feel bad when things don't go well", this does NOT imply that you will feel good when things do go well!

And you will actually be better off having rules that tell you to feel good even when things don't go well, because bad feelings are not very useful when it comes to motivating constructive action. They're much better at telling us to avoid things than at getting us to accomplish things.

(By the way, another common cause of self-defeating behavior being self-sustaining is that we tend to filter incoming concepts to match our existing frameworks. So, where my phrasing was ambiguous ("allow ourselves to feel certain things"), your brain may have pattern-matched that to "feel what we're feeling", even though that's almost the opposite of what I intended to say. The "certain things" I was referring to were feelings like the Andreases' notion of "core states": things that most of us aren't already feeling.)

Comment author: 06 September 2012 07:11:46AM 0 points [-]

Is there a good book/resource in general for trying to learn the meta-model you mention?

Comment author: [deleted] 02 September 2012 01:08:10AM 3 points [-]
Comment author: 06 September 2012 05:51:21AM 0 points [-]

Of course, this is a straightforward problem to fix in the mechanism design: Just make responses to downvoted comments start at -5 karma, instead of having a direct penalty, as suggested elsewhere. I think that suggestion was for unrelated reasons, but it also fixes this little loophole.

Comment author: 04 September 2012 03:40:20PM 1 point [-]

Yes, but how much of the work that goes into the next generation is just layout? It doesn't solve all of your chemical or quantum mechanical issues, or fix your photomasks for the next shrunken generation, etc. If layout were a major factor, we should expect to hear of 'layout farms' or supercomputers or datacenters devoted to the task. I, at least, haven't. (I'm sure Intel has a datacenter or two, but so do many >$1 billion tech multinationals.)

And if layout is just a fraction of the effort like 10%, then Amdahl's law especially applies.

Comment author: 05 September 2012 12:07:22AM 1 point [-]

It doesn't give many actual current details, but http://en.wikipedia.org/wiki/Computational_lithography implies that as of 2006 designing the photomask for a given chip required ~100 CPU-years of processing, and presumably that has only gone up.

Etching a 22nm line with 193nm light is a hard problem, and a lot of the techniques used certainly appear to require huge amounts of processing. It's close to impossible to say how much of a bottleneck this particular step in the process is, but given how much simulation it takes to really know what is going on in even simple mechanical design, I would actually expect every step in chip design to have similar simulation requirements.

Comment author: 30 August 2012 08:27:43PM 0 points [-]

One friend of mine simply sprinted between every class. Seemed to help his fitness!

Comment author: 31 August 2012 04:03:50AM 0 points [-]

Also generates free time! Generally, just trying to walk between classes as fast as possible is probably good, if sprinting seems too scary.

Comment author: 22 August 2012 03:47:38PM 14 points [-]

This post inspired me to make a small donation.

Comment author: 22 August 2012 10:28:01PM 9 points [-]

Me as well.
