All of KnaveOfAllTrades's Comments + Replies

Why can't the deduction be the evidence? If I start with a 50-50 prior that 4 is prime, I can then use the subsequent observation that I've found a factor to update downwards. This feels like it relies on the reasoner's embedding, though, so maybe it's cheating, but I don't have a clear, non-confusing account of why it doesn't count.

Begin here and read up to part 5 inclusive. On the margin, when you start approaching fashion, getting a basic day-in, day-out wardrobe of nice, well-fitting jeans/chinos (maybe chino or cargo shorts if you live in a hot place) and t-shirts is far more valuable than hats. Hats are a flair that comes after everything else in the outfit you're wearing them with. Maybe you want to just spend a few hours one-off choosing a hat and don't want to think about all the precursors. But that can actually make you backslide. If you look at their advice about hats, you'll... (read more)

4DataPacRat
Now that looks like a good reference to start in on, given my current (lack of) knowledge of all things clothesy, and was worth starting this thread. Not all of it's going to be useful - my budget for nonessentials is roughly $100 per month, and I recently had to replace a fried laptop - but with luck, I'll at least start getting a sense of the general patterns of what's involved with fashionableness. The summer temperatures regularly go above 30 C, so I've usually been wearing cargo shorts. In winter, usually simple black or tan slacks - they're buried in a closet, so I'm not sure if they're cotton, but they could be chinos. (My usual trouble with long pants is finding a pair with short enough legs; IIRC, I have a 28" inseam, the shortest slacks I can find are usually 30", so I generally end up informally hemming them by folding the bottoms up inside.) I'm leaning towards avoiding a fedora unless and until I get to the point of having at least a business-casual suit. Another link that looks potentially highly useful; thank you kindly. Are you familiar with the "Everyday Carry" subculture, as can be seen in the /r/EDC/ subreddit? My take on the approach is that the ideal is to be as prepared as possible for life's little emergencies, without looking like you're a survivalist nut. Eg, paracord bracelets can be decorative accents - that can conceal a whole survival kit within their loops. I'm hoping to end up with a hat that just might reduce a head injury while I'm hiking, but which won't look out-of-place while I'm at a mall.

Yes!! I've also independently come to the conclusion that basic real analysis seems important for these sorts of general lessons. In fact I suspect that seeing the reals constructed synthetically, or the Peano --> Integers --> Rationals --> Dedekind cuts construction, or some similar rigorous construction of an intuitively 'obvious' concept, is probably a big boost in accessing the upper echelons of philosophical ability. Until you've really seen how axioms work and broken some intuitive thing down to the level that you can see how a computer coul... (read more)

What specifically did you mean here?

What I mean is if you have the resources (time, energy, etc.) to do so, consider trying to get the data where the script returned '0' values because the source you used didn't have that bit of data. But make it clear that you've done independent research where you find the figures yourself, so that the user realises it's not from the same dataset. And failing that, e.g. if there just isn't enough info out there to put a figure, state that you looked into it but there isn't enough data. (This lets the user distinguish ... (read more)

I like the graph that shows salary progression at every age. Often career advice just gives you the average entry figure and the average and peak senior figures, which kinda seems predicated on the 'career for life' mentality that locks people into professions they dislike. Suggestions, to act on or not as you see fit, no reply necessary:

Ability to compare multiple jobs simultaneously. Make a note saying the graph will appear once you pick a job, or have it pop up by default on a default job. Center the numerical figures in their cells.

Make the list... (read more)

4Nanashi
One thing I was thinking about on this note was comparing the "true cost of post-graduate education": in other words, you choose a job that will require X years of post-grad, then you choose a job that doesn't, and it will compare lifetime earnings. Good idea. Good catch. From looking, it seems like the BLS statistics (which is what this pulls from) have duplicate entries with the same info but separate ID codes. Government efficiency right there. I'll rewrite the script to scrub these out. What specifically did you mean here? I think the big problem with trying to determine "related jobs" is that, more often than not, in the actual job market, the relationship between similar jobs is in name only. If I'm trying to hire someone for sales, someone who has a lot of marketing experience probably isn't going to be a great candidate, even though "sales" and "marketing" seem to go hand-in-hand.

Thanks to Luke for his exceptional stewardship during his tenure! You'll be awesome at GiveWell!

And Nate you're amazing for taking a level and stepping up to the plate in such a short period of time. It always sounded to me like Luke's shoes would be hard for a successor to fill, but seeing him hand over to you I mysteriously find that worry is distinctly absent! :)

I used to have an adage to the effect that if you walk away from an argument feeling like you've processed it before a month has passed, you're probably kidding yourself. I'm not sure I would take such a strong line nowadays, but it's a useful prompt to bear in mind. Might or might not be related to another thing I sometimes say, that it takes at least a month to even begin establishing a habit. While a perfect reasoner might consider all hypotheses in advance or be able to use past data to test new hypotheses, in practice it seems to me that being on the ... (read more)

This premise sounds interesting, but I feel like concrete examples would really help me be sure I understand.

6Sophronius
Oh, I've thought of another example: Less Wrongers and other rationalists frequently get told that "rationality is nice but emotion is important too". Less Wrongers typically react to this by:
1) Mocking it as a fallacy because "rationality is defined as winning so it is not opposed to emotion", before eagerly taking it up as a strawman and posting the erroneous argument all over the place to show everyone how poor the enemies of reason are at reasoning.
Instead of:
2) Actually considering for five minutes whether or not there might be a correlation or even an inverse causal relationship between rationality and emotional control/ability to read emotions, which causes this observation in the first place.
Needless to say, I blame Yudkowsky.

Hm, okay, let me try to make it more concrete.

My main example is one where people (more than once, in fact) told me that "I might have my own truth, but other people have their truth as well". This was incredibly easy to dismiss as people being unable to tell map from territory, but after the third time I started to wonder why people were telling me this. So I asked them what made them bring it up in the first place, and they replied that they felt uncomfortable when I was stating facts with the confidence they warranted. I was reminded of someth... (read more)

I didn't follow everything in the post, but it seems like the motivating problem is that UDT fails in an anti-Newcomb problem defined in terms of the UDT agent. But this sounds a lot like a fully general counterargument against decision algorithms; for any algorithm, we can form a decision problem that penalizes exactly that and only that agent. Take any algorithm running on a physical computer and place it in a world where we specify, as an axiom, that any physical instantiation of that algorithm is blasted by a proton beam as soon as it begins to run, be... (read more)

4Squark
Hi KnaveOfAllTrades, thx for commenting! Your proton beam problem is not decision-determined in the sense of Yudkowsky 2010. That is, it depends directly on the decision algorithm rather than depending only on the decisions it makes. Indeed it is impossible to come up with a decision theory that is optimal for all problems, but it might be possible to come up with a decision theory that is optimal for some reasonable class of problems (like decision-determined problems). Now, there is a decision-determined version of your construction. Consider the following "diagonal" problem. The agent makes a decision out of some finite set. Omega runs a simulation of XDT and penalizes the agent iff its decision is equal to XDT's decision. This is indeed a concern for deterministic agents. However, if we allow the decision algorithm access to an independent source of random bits, the problem is avoided. XDT produces all answers distributed uniformly and gets optimal payoff.
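A minimal simulation sketch of that diagonal setup (my own illustration; the four-element option set and the XDT stand-ins are assumptions, not anything from Squark's comment): a deterministic agent always matches Omega's simulation of it and is always penalized, while uniform randomization over n options brings the penalty probability down to 1/n, which is also the best any agent can do against a uniform simulation.

```python
import random

OPTIONS = ["a", "b", "c", "d"]  # hypothetical finite decision set

def deterministic_xdt():
    # Without random bits, XDT always outputs the same decision,
    # so it always matches Omega's simulation of it.
    return OPTIONS[0]

def randomized_xdt():
    # With an independent source of random bits, XDT outputs a
    # uniform random decision on each run.
    return random.choice(OPTIONS)

def penalty_rate(xdt, trials=100_000):
    """Fraction of trials in which the agent (running xdt) is penalized,
    i.e. its decision equals Omega's independent simulation of xdt."""
    penalties = 0
    for _ in range(trials):
        simulated = xdt()  # Omega's run of the algorithm
        decision = xdt()   # the agent's own run
        if decision == simulated:
            penalties += 1
    return penalties / trials

print(penalty_rate(deterministic_xdt))  # 1.0: always penalized
print(penalty_rate(randomized_xdt))     # ~0.25, i.e. 1/len(OPTIONS)
```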

I wondered about this too before I tried it. I thought I had a higher-than-average risk of being very sensitive to my own perspirations/sheddings. But I haven't detected any significant problems on this front after trying it. It goes both ways: now that I know I'm not very sensitive to my own trouser sweat, I can wear trousers for longer after they've been washed (i.e. exposed to potentially irritant laundry products), which possibly reduces the risk of skin problems from the laundry products (another problem that I think I have a higher-than-average ... (read more)

Not sure if it's in addition to what you're thinking of or it is what you're thinking of, but Tommy Hilfiger 'never' 'washes his Levis'. I heard this and confirmed with a fashion- and clothing-conscious friend that they (the friend) had tried it. I used to wash jeans and chinos after a few consecutive days of wearing them. For the past five or six weeks I've been trying out the 'no wash' approach. I wore one pair of jeans for about thirty five days (maybe split into two periods of continuous wearing) and washed them probably once or never during that time.... (read more)

Thanks for reminding me to do a meetup report! I've added it at the end of the announcement for this Sunday's meetup. Let me know in the comments there whether you think you can still make it this weekend.

Currently expecting at fewest two others with joint probability >70%, so I'll still do the original day. But I'll bear the next week in mind; we might do two weeks in a row.

6skeptical_lurker
Sorry, I'm actually in Bath the weekend after that (oops). My use of the word 'definitely' seems to not have been properly probability-calibrated in this case.

You more-or-less said, "gwern is imperfect but net-positive. So deal with it. Not everyone can be perfect.". I think such a response, in reply to someone who feels bullied by a senior member and worries the community is/will close ranks, is not the best course of action, and in fact is better off not being made. Even assuming your comment was not a deontological imperative, but rather a shorthand for a heuristic argument, I am very uncertain as to what heuristic you are suggesting and why you think it's a good heuristic.

Even if you ignored all t... (read more)

5Shmi
I'll try one more time... gwern is not a "senior member". He is not a moderator, as far as I know, though he did do some work for MIRI. He is a very prominent regular with superb research and analysis skills, quick wit, sharp tongue and occasionally bad attitude, apparently uninterested in applying the principle of charity. He's been told as much and was unwilling to acknowledge this as a problem. Like on any forum, you don't have to engage everyone who replies to you. I ignore comments from a few regulars, some very active here, whom I have engaged in the past in repeated unproductive exchanges until I learned better. ThisSpaceAvailable should do likewise. This is basic internet hygiene. As long as the person you are unhappy to talk to does not run the place actively hounding you from thread to thread, downvoting and sniping, ignore them. If you feel that they break the forum rules, raise the issue with the mods. What ThisSpaceAvailable wrote comes across as drama-queening (an uncharitable term, but it fits in this case, hence all the downvotes of the OP). The very first sentence is an extreme put-off. Just now I have looked through the linked thread and my impression is that it's the OP who lost his cool. Anyway, I agree that my original reply could have been written in a more charitable way, but the point (a "heuristic", if you like) still stands: ignore those you don't like, unless they clearly break forum rules, or don't complain (or don't participate). It's not a "deontological imperative", more like common sense in online discourse.

I'm not sure exactly which parts you're referring to, so can you quote the parts you find odd or by which you are confused?

Those aren't weird deontological rules and you're just throwing in those words to describe those phrases as boo lights. MOST things people say aren't meant as strict rules, but as contextual and limited responses to the conversation at hand.

There is a very particular mental process of deontological thinking that epistemic rationalists should train themselves to defuse, in which an argument is basically short-circuited by a magic, invalid step. If the mental process that actually takes place in someone's head is, 'This person criticised a net-positive fi... (read more)

4Lumifer
Downvoted for wasting bits in service of drama.
3Salemicus
You have now written two long replies discussing my post, without ever getting to the bottom of what you find objectionable about it, or why that particular comment is in need of some special justification. Cutting through the verbiage, it seems you think I was unconscionably rude to turchin, although you never say what exactly you found rude. As already stated, I wasn't rude. I mocked his ideas, but not him. You appear to be trying to elide the distinction between belittling an idea and belittling a person, which does not appear accidental; you seem to come from a place where everyone is obliged to be "supportive." Needless to say, I don't agree. My comment has mostly been upvoted, so I take it community standards agree with me. To be clear: my comment was not intended to be supportive; quite the opposite. My comment was intended to say "If you don't want to look silly to the wider world, or if you really care about solving existential risk, stop what you're doing. But if what you really care about is winning at San Francisco morality theatre, I'm sure you're doing just fine."

Whoever downvoted this comment, please explain your downvote.

turchin's proposed action makes me uneasy, but how would you justify this comment? Generally such comments are discouraged here, and you would've been downvoted into oblivion if you'd made such a response to a proposal that weren't so one-sidedly rejected by Less Wrong. What's the relevant difference that justifies your comment in this case, or do you think such comments are generally okay here, or do you think you over-reacted?

6Salemicus
* I think my comment was on-point, truthful, pithy, and not overly rude. Such comments should be encouraged.
* I genuinely think the post is hilarious, because it shows so many cognitive biases in service of "rationalism."
* The poster claims he wants to reduce X-risk. But his proposed solution is to stand in the street with placards saying "Stop Existential Risks!" And then magically a solution appears, because of "awareness." What would we say about, for example, a malaria charity that used such a tactic?
* I seem to recall that policy debates shouldn't appear one-sided. Yet all his slogans are ridiculous. Consider, for example, "Prevent Global Catastrophe!" Do you think that people who don't take existential risks seriously are in favour of global catastrophe? What does it even mean to say there is a 50% chance of a global catastrophe?
* Perhaps the funniest part is that the poster has already organised street actions for immortality. Presumably, he must believe that those made great strides towards solving the problem of immortality(!!!), which is why he's now using the same tactics to tackle existential risk more generally...
* But in another way, his street actions for immortality were presumably successful, because they made the participants (at Burning Man, no less!) feel good about themselves, and superior to the rest of the common flock. So the second part of my comment was a double-edged sword.
* I could go on. Ultimately, if you make a ridiculous post, you can't expect people not to laugh.

Oops, I didn't actually read 7 and assumed it was that public opinion had grown more positive. Given the two choices actually presented, I'd say 7 is more likely.

Edit: Relative credences (not necessarily probabilities, since I'm conditioning on there being significant effect sizes), generated naively, trying not to worry too much about second-guessing how you distributed intuitive and counterintuitive results:

1 vs 7: 33:67
2 vs 8: 33:67
3 vs 9: 67:33
4 vs 10: 40:60
5 vs 11: 45:55
6 vs 12: 85:15

3satt
(Couple of side notes inspired by your edit.) I considered asking for people's credence in each claim with probability polls, but reckoned that'd discourage responses, due to the extra effort needed to ensure coherence. (With 1 vs. 7, for instance, one would also have to think about the probability that neither claim's true.) When distributing the pairs across the lists, I had R flip six virtual coins to decide whether to swap the places of each pair after I'd written them. So it should be nice & random, making second guessing unnecessary...although I guess no one else can be 100% sure I'm telling the truth here!

In another spaced repetition project I used Anki to learn to distinguish colors that I didn't distinguish beforehand.

I think I managed to do this when learning flags, with Chad and Romania. It seemed like I got to the point where I could reliably distinguish their flags on my phone, whereas when I started, I did no better than chance. I did consciously explain this to somebody else as something interesting, but now that I think about it, I failed to find it as interesting as I should have, because the idea that seeing a card a few times on Anki can increase my phenomenal granularity or decrease the amount of phenomenal data that my brain throws away, is pretty amazing.

8ChristianKl
A while ago I also learned country flags via Anki. While the flags in Wikipedia are different, I'm not sure that the flags of Chad and Romania are different in reality. German law, for example, simply says that the colors of the flag are red, gold and black. It doesn't specify the exact shade of red, and different flag producers might produce slightly different shades of red. Having phenomenal granularity for distinguishing different flags is also not that useful in real life. I think the key question is: "What are areas where having more phenomenal granularity actually matters?" Examples that I have found are:
Audio: Phonemes, pitch of musical notes, duration of musical notes
Visual: Colors, Speed Reading
Kinesthetic: A lot of interesting stuff in somatics. Apart from that, heart rate, breathing rate and things that are more difficult to label. Emotions are very important to notice because they affect your reasoning whether or not you are aware of them.
Taste: Recognise different spices. Tim Ferriss writes about training that skill in 4-Hour Body.
Mental: Credence, time intervals

I found typing to be a massive deterrent personally. Lots of my Anki is done in bed or on trains on my phone, and I found Memrise (on a laptop) much less compelling and harder to get myself to do than Anki because of all the typing, multiple choice, and drag-n-drop (and it would switch between those which would break my focus). I don't want to have to type 'London' when I'm asked what the capital of the UK is or click it on a multiple choice. Maybe if it were just typing on a fully-fledged computer, like you describe, it wouldn't be so bad?

I still don't th... (read more)

1ChristianKl
My personal experience with Memrise is that it does a lot of overtesting of new words. It shows you a card that you have seen for the first time and marked as correct again in the same session. I agree that dragging and clicking on multiple-choice items doesn't work well on the laptop. Typing doesn't work well on phones. Multitouch is simply a completely different way of interacting with a device. On a multitouch device you would ideally want to have a map and simply click on the country on the map to select it. Speed Anatomy does that really well but it doesn't do spaced repetition. At the moment I'm working on getting binary choices on phones right, and afterwards I will move on to challenges such as clicking on items on a map. As those kinds of answers can be scored automatically, I'm also getting rid of self-evaluation. I want to instead replace it with calculating confidence in the card via things like pressure, time taken to answer and where a button gets pressed. If you want, you can then press buttons at the top for cards where you are sure and at the bottom for cards where you are unsure, and the app will automatically learn your pattern. Smart users who want to tell the app their confidence (so that the app calculates intervals better) can, the average user who doesn't care isn't distracted, and the app might even find unconscious patterns.

This post is brilliant.

(Sensations of potential are fascinating to me. I noticed a few weeks ago that after memorizing a list of names and faces, I could predict in the first half second of seeing the face whether or not I'd be able to retrieve the name in the next five seconds. Before I actually retrieved the name. What??? I don't know either.)

Right! When telling people about Anki, I often mention the importance of not self-deluding about whether one knows the answer. But sometimes I also mention how I mark a card as 'Easy' before I've retrieved or su... (read more)

4Nornagest
Wild speculation: it's possible to notice that a node in a representational graph is well-connected and thus likely to be close to another node, without following any actual edges (this is very close to a general metric of familiarity but doesn't actually require representing that metric). Something similar might be going on in your head: you haven't retrieved what the capital of the UK is yet, but you know you know a lot about the UK.
3ChristianKl
For that reason I have set all my Anki cards to typing. If you actually type the city name and you get it wrong, you notice. Even when I have already pressed "easy", Anki allows going back via Ctrl+z. That does happen frequently enough for me to think that you are probably sometimes deluding yourself. There are cards where you think you know the right answer but get the card wrong. It has the added bonus of training typing speed ;) I still have an average answer speed of 16 cards/minute over the last month so I don't think it slows me down much.

First, how is average utilitarianism defined in a non-circular way?

If you can quantify a proto-utility across some set of moral patients (i.e. some thing that is measurable for each thing/person we care about), then you can call your utility the average of proto-utility over moral patients. For example, you could define your set of moral patients to be the set of humans, and each human's proto-utility to be the amount of money they have, then average by summing the money and dividing by the number of humans.

I don't necessarily endorse that approach, o... (read more)
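As a toy sketch of the averaging described above (the names and figures are made up, purely illustrative):

```python
# Average utilitarian 'utility' as the mean proto-utility over a chosen set
# of moral patients; here the patients are humans and the proto-utility is money.
money_by_human = {"alice": 40_000, "bob": 25_000, "carol": 100_000}

def average_proto_utility(proto_utility_by_patient):
    # Sum the proto-utility and divide by the number of moral patients.
    return sum(proto_utility_by_patient.values()) / len(proto_utility_by_patient)

print(average_proto_utility(money_by_human))  # 55000.0
```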

Woah, well done everyone who donated so far. I made a small contribution. Moreover, to encourage others and increase the chance the pooled donations reach critical mass, I will top up my donation to 1% of whatever's been donated by others, up to at least $100 total from me. I encourage others to pledge similarly if you're also worrying about making a small donation or worrying the campaign won't reach critical mass.

7gjm
If 102 people all pledge to donate 1% of everyone else's total, the consequences could be interesting. (Of course it's vanishingly unlikely. But pedantic donors might choose to word their pledges carefully.)
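A back-of-the-envelope sketch of why (entirely my own illustration, assuming uncapped pledges and a $1 base donation from each person): with n people each pledging 1% of everyone else's total, the mutual top-ups settle at a finite total only when 0.01 * (n - 1) < 1, i.e. for 100 people or fewer; with 102 people the pledges feed back on each other and grow without bound.

```python
# Toy iteration with hypothetical numbers: each of n people donates a $1 base
# amount plus 1% of the total donated by everyone else; we re-apply the
# pledges repeatedly and see whether the totals settle.
def pledged_total(n_people, base=1.0, rounds=500):
    donations = [base] * n_people
    for _ in range(rounds):
        total = sum(donations)
        donations = [base + 0.01 * (total - d) for d in donations]
    return sum(donations)

print(pledged_total(100))  # converges towards a finite total ($10,000 in the limit)
print(pledged_total(102))  # keeps growing as `rounds` increases; no finite solution
```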

Daniel, did you go ahead with this? Learn anything interesting?

5Daniel_Burfoot
The main impressions I got were:
* Berkeley is appealing (to me) near the university - it has a nice cafe+bookstore vibe. The rest of the city is too spread out and potentially dangerous (a bus driver I talked to said his bus had been shot twice).
* The BART works pretty well, if your destination is near a station. The bus system in San Fran proper is not so good.
* I didn't like the parts of Oakland I visited.
* A studio apartment near the downtown Berkeley BART station is about $2k/month.
As an exercise, I broke down three cities (Boston, San Fran, Seattle) in terms of various dimensions (weather, social environment, business environment, cost of living, etc) and put a yearly dollar amount on each dimension, representing the value relative to Boston I'd be willing to pay for having access to it. I was hoping that I would observe something like a $10k/year effective boost from moving to San Fran or Seattle, which would indicate a strong benefit to moving. I found only a $1k/year boost. I gave San Fran a $15k/year benefit for having better weather, but most of that was eaten up by a $10k/year increase in cost of living. The rest of the dimensions seemed pretty comparable.

(A): There exists a function f:R->R

and the axioms, for all r in R:

(A_r): f(r)=0

(The graph of f is just the x-axis.)

This might be expressible with a finite axiomatisation (e.g. by building functions and arithmetic in ZFC), and indeed I've given a finite schema, but I'm not sure it's 'fair' to ask for an example of a theory that cannot be compressed beyond uncountably many axioms; that would be a hypertask, right? I think that's what Joshua's getting at in the sibling to this comment.

I don't think there's stuff directly on dissolving (criminal) justice in LessWrong posts, but I think lots of LessWrongers agree or would be receptive to non-retributive/consequentialist justice and applying methods described in the Sequences to those types of policy decisions.

Some of your positions are probably a bit more fringe (though maybe would still be fairly popular) relative to LW, but I agree with a lot of them. E.g. I've also been seriously considering the possibility that pain is only instrumentally bad due to ongoing mental effects, so that you... (read more)

The prospect of being formally in a study pair/group makes me anxious in case I'm a flake and feel like I've betrayed the other participant(s) by being akratic or being unable to keep up and then I will forever after be known as That Flake Who Couldn't Hack Model Theory That Everybody Should Laugh At etc. etc. I should probably work on that anxiety, but in the interim, as a more passive option, I've just created this Facebook group. Has the benefit that anybody who stumbles across it or this comment can join and dip in at their leisure.

I don't really know ... (read more)

I'm not sure if it's because I'm Confused, but I'm struggling to understand if you are disagreeing, or if so, where your disagreement lies and how the parent comment in particular relates to that disagreement/the great-grandparent. I have a hunch that being more concrete and giving specific, minimally-abstract examples would help in this case.

2MugaSofer
I'm saying that if Sleeping Beauty's goal is to better understand the world, by performing a Bayesian update on evidence, then I think this is a form of "payoff" that gives Thirder results. From If a tree falls on Sleeping Beauty...:

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

Can you give a concrete example of what you see as an example of where anthropic reasoning wins... (read more)

2MugaSofer
OK, well by analogy, what's the "payoff structure" for nuclear anthropics? Obviously, we can't prevent it after the fact. The payoff we get for being right is in the form of information; a better model of the world. It isn't perfectly analogous, but it seems to me that "be right" is most analogous to the Thirder payoff matrix for Sleeping-Beauty-like problems.

Ah, that's good to know. Thanks for the suggestion!

It's not enough to say "the act of smoking". What's the causal pathway that leads from the lesion to the act of smoking?

Exactly, that's part of the problem. You have a bunch of frequencies based on various reference classes, without further information, and you have to figure out how the agent should act on that very limited information, which does not include explicit, detailed causal models. Not all possible worlds are even purely causal, so your point about causal pathways is at best an incomplete solution. That's the hard edge of the pro... (read more)

1[anonymous]
I'm not interested in discussing with this level of condescension.

I don't think you're taking the thought experiment seriously enough and are prematurely considering it (dis)solved by giving a Clever solution. E.g.

If it's not the urge, what is it?

Obvious alternative that occurred to me in <5 seconds: It's not the urge, it's the actual act of smoking or knowing one has smoked. Even if these turn out not to quite work, you don't show any sign of having even thought of them, which I would not expect if you were seriously engaging with the problem looking for a reduction that does not leave us feeling confused.

Edi... (read more)

0Gurkenglas
Both of these are invalidated by the assumption:
2[anonymous]
It's not enough to say "the act of smoking". What's the causal pathway that leads from the lesion to the act of smoking? Anyway, the smoking lesion problem isn't confusing. It has a clear answer (smoking doesn't cause cancer), and it's only interesting because it can trip up attempts at mathematising decision theory.

If I took the time to write a comment laying out a decision theoretic problem and received a response like this (and saw it so upvoted), I would be pretty annoyed and suspect that maybe (though not definitely) the respondent was fighting the hypothetical, and that their flippant remark might change the tone of the conversation enough to discourage others engaging with my query.

I've been frustrated enough times by people nitpicking or derailing (even if only with not-supposed-to-be-derailing throwaway jokes) my attempts to introduce a hypothetical that by t... (read more)

Yep, I find the world a much less confusing place since I learned capitals and locations on the map. I had (and to some extent still do have) a mental block on geography which was ameliorated by it.

Rundown of positive and negative results:

In a similar but lesser way, I found learning English counties (and to an even lesser extent, Scottish counties) made UK geography a bit less intimidating. I used this deck because it's the only one on the Anki website I found that worked on my old-ass phone; it has a few howlers and throws some cities in there to fuck with yo... (read more)

4taelor
I'm a 4th year economics undergrad preparing to start applying to PhD programs, and while I've never formally attempted to memorize GDPs, I've found having a rough idea of where a country's per capita GDP falls to be very useful in understanding world news and events (for example, I've noticed that around the $8,000-12,000 per year range seems to be the point where the median household gets an internet connection). If you do attempt to go the memorization route, be sure to use PPP-adjusted figures, as non-adjusted numbers will tend to systematically underestimate incomes in developing countries.
8NancyLebovitz
What about learning a sense of scale, for both time and space?
* planets and stars
* replies to most common comments to the previous video
* sub-atomic to hypothetical multi-universes -- uses pictures and numbers, no zooming
I hadn't realized how much overlap there is in size between the larger moons and smaller planets, and (in spite of having seen many pictures) hadn't registered that nebulas are much bigger than stars. I'm going to post this before I spend a while noodling around science videos, but it might also be good to work on time scales and getting oriented among geological and historical time periods, including what things were happening at the same time in different parts of the world.
4sixes_and_sevens
I did British monarchs last year while on a history kick (which I'm still on). Pro-tip: watch films, television shows and plays featuring said monarchs, as they include salient contemporary historical events. For example, Nigel Hawthorne was the mad George. Hugh Laurie was his son, the Prince Regent, a contemporary of the Duke of Wellington (Stephen Fry), which places him temporally alongside the Napoleonic wars. Colin Firth was Queen Elizabeth II's stuttering dad in The King's Speech. His brother was Mike from Neighbours (or the bad guy from Iron Man 3 if you're under 30) and their dad was Dumbledore. (It turns out that royal history has plenty of independently interesting features, because it contains a lot of murders and wars and speculation about parentage. Contemporary introductions to historiography emphasise the movement away from history as the deeds of powerful men exercising their will through war and conquest, but the kings and wars are a lot more memorable and easier to place in time than the ephemeral stuff like trade routes and adoption of crops.)

So my mind state is more likely in a five-sibling world than a six-sibling one, but using it as anthropic evidence would just be double-counting whatever evidence left me with that mind state in the first place.

Yep; in which case the anthropic evidence isn't doing any useful explanatory work, and the thesis 'Anthropics doesn't explain X' holds.

2Nate_Gabriel
Anthropics fails to explain King George because it's double-counting the evidence. The same does not apply to any extinction event, where you have not already conditioned on "I wouldn't exist otherwise." If it's a non-extinction nuclear exchange, where population would be significantly smaller but nonzero, I'm not confident enough in my understanding of anthropics to have an opinion.

Yes! There's a lot of ways to remove the original observer from the question.

The example I thought of (but ended up not including): If all one's credence were on simula(ta)ble (possibly to arbitrary precision/accuracy even if perfect simulation were not quite possible) models and one could specify a prior over initial conditions at the start of the Cold War, then one could simulate each set of initial conditions forward then run an analysis over the sets of initial conditions to see if any actionable causal factors showed up leading to the presence or abse... (read more)

Is this in support of or in opposition to the thesis of the post? Or am I being presumptuous to suppose that it is either?

1khafra
The opposition is that the number of observers able to ask questions about royal siblings is not heavily correlated with the actual number of royal siblings historically present; while the number of observers able to ask questions about a lack of large thermonuclear exchanges is heavily correlated with the actual number of historical large thermonuclear exchanges.
3DanielLC
Opposition.

Haha. I did seriously consider it when that example was less central to the text, but ended up just going for playing it straight when it was interleaved, since I didn't want to encourage second-guessing/paranoia.

Thanks. I've edited the post to point to lukeprog's more recent post about the matching drive, since I'd consider this one fully obsolete now that the Stellar offer is so low.

Good chance you've seen both of these before, but:

http://en.wikipedia.org/wiki/Learned_helplessness and http://squid314.livejournal.com/350090.html

I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.

Damn, if only someone had created a thread for that, ho ho ho

Strategic incompetence?

I'm not sure if maybe Schelling uses a specific name (self-sabotage?) for that kind of thing?

4sixes_and_sevens
Schelling does talk about strategic self-sabotage, but it captures a lot of deliberated behaviour that isn't implied in my fake definition. Also interesting to note, I have read that Epistemic Learned Helplessness blog entry before, and my fake definition is sufficiently consistent with it that it doesn't stand out as obviously incorrect.

There will probably be holes and it won't quite capture exactly what I mean, but I'll take a shot. Let me know if this is not rigorous or detailed enough and I'll take another stab, or if you have any other follow-up. I have answered this immediately, without changing tab, so the only contamination is saccading my LW inbox before clicking through to your comment, the titles of other tabs, etc., which look (as one would expect) to be irrelevant.

Helplessness about topic X - One is not able to attain a knowably stable and confident opinion about X given the amoun... (read more)

6sixes_and_sevens
Thanks for that. The whole response is interesting. I ask because up until quite recently I was labouring under a wonky definition of "learned helplessness" that revolved around strategic self-handicapping. An example would be people who foster a characteristic of technical incompetence, to the point where they refuse to click next-next-finish on a noddy software installer. Every time they exhibit their technical incompetence, they're reinforced in this behaviour by someone taking the "hard" task away from them. Hence their "helplessness" is "learned". It wasn't until recently that I came across an accurate definition in a book on reinforcement training. I'm pretty sure I've had "learned helplessness" in my lexicon for over a decade, and I've never seen it used in a context that challenged my definition, or used it in a way that aroused suspicion. It's worth noting that I probably picked up my definition through observing feminist discussions. Trying a mental find-and-replace on ten years' conversations is kind of weird. I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.

You seem to be making an assertion about me in your last paragraph, but doing so very obliquely.

Apologies for that. I don't think that that specific failure mode is particularly likely in your case, but it seems plausible to me that other people thinking in that way has shifted the terms of discourse such that that form of linguistic relativism is seen as high-status by a lot of smart people. I am more mentioning it to highlight the potential failure mode; if part of why you hold your position is that it seems like the kind of position that smart people wo... (read more)

This is really dismissive and, if I'm honest, I'm disappointed it's been upvoted so much. It's very convenient to say something like this and score points by signalling self-sacrificing stoicism and tough skin, and a lot less convenient to take the time to actually try looking for solutions or even just hold off from making dismissive comments.

I believe I remember when I hopped on #lesswrong (on which I've spent maybe between fifteen and ninety minutes' active time, so it's telling that this happened), and within a few minutes you'd complained to me (when ... (read more)

7Shmi
Huh, this is one of the worst misinterpretations of my LW comment in a long time. I don't even know where to start, so I'll just express my general disappointment with it, downvote and move on.
-5Lumifer
8drethelin
1) I'm annoyed by this and sleep deprived so forgive me if this response is incompletely coherent.
2) Those aren't weird deontological rules and you're just throwing in those words to describe those phrases as boo lights. MOST things people say aren't meant as strict rules, but as contextual and limited responses to the conversation at hand. This guy is implicitly calling for Gwern to be banned, or saying that it's either Them or Gwern. Shminux is simply explicitly conveying that we clearly choose to have Gwern rather than not. He's not Making A Rule.
3) You can't treat everyone who complains about being bullied by the community seriously. That's like auto-cooperating in a world full of potential defectors. It creates an incentive to punish anyone you dislike by starting a thread about how mean they are to you, and also has a chilling effect on conversation in general. Despite the rudeness, Gwern's replies in the linked conversation were lengthy and tried to convey information and thoughts. I've seen plenty of examples of people afraid to talk because they might offend someone online, and I don't really want the threshold for being punished for rudeness to be that low on Lesswrong.
4) There is such a thing as overreaction. Regardless of whether this person feels bullied by Gwern, everyone can take a look at the threads involved and decide if it's an appropriate response. I don't think calling someone out for something like this in a top level post (not to mention that's a pretty low quality post even for discussion) and impugning the entire community as irrational or whatever is at all proportional.
5) If thisspaceavailable (or you) want Lesswrong as a WHOLE to be less rude, rather than making a post that (clearly in my mind) is just getting back at Gwern, there are a LOT better ways to do it.

Actually, I could imagine you reading that comment and feeling it still misses your point that 0.999... is undefined or has different definitions or senses in amateur discussions. In that case, I would point to the idea that one can make propositions about a primitive concept that turn out to be false about the mature form of it. One could make claims about evidence, causality, free will, knowledge, numbers, gravity, light, etc. that would be true under one primitive sense and false under another. Then minutes or days or months or years or centuries or mil... (read more)

It's "Here's a sequence of symbols. Should we assign this sequence of symbols the value of 1, or not?" Which is just a silly argument to have.

It's not. The "0.999... doesn't equal 1" meme is largely crackpottery, and promotes amateur overconfidence and (arguably) mathematical illiteracy.

Terms are precious real estate, and their interpretations really are valuable. Our thought processes and belief networks are sticky; if someone has a crap interpretation of a term, then it will at best cause unnecessary friction in using it (e.g. if y... (read more)

0ThisSpaceAvailable
A lot (in fact, all of them that don't involve a rigorous treatment of infinite series) of the "proofs" that it does equal 1 are fallacious, and so the refusal to accept them is actually a reasonable response. You seem to making an assertion about me in your last paragraph, but doing so very obliquely. Your analogy is not very good, as people do not try to argue that one can logically prove that "marble" does not mean "nucleotide", they just say that it is defined otherwise. If we're analogizing ".9999... = 1" to "marble doesn't mean't nucleotide", then "
3KnaveOfAllTrades
Actually, I could imagine you reading that comment and feeling it still misses your point that 0.999... is undefined or has different definitions or senses in amateur discussions. In that case, I would point to the idea that one can make propositions about a primitive concept that turn out to be false about the mature form of it. One could make claims about evidence, causality, free will, knowledge, numbers, gravity, light, etc. that would be true under one primitive sense and false under another. Then minutes or days or months or years or centuries or millennia later it turns out that the claims were false about the correct definition. It would be a sin of rationality to assume that, since there was a controversy over definitions, and some definitions proved the claim and some disproved it, that no side was more right than another. One should study examples of where people made correct claims about fuzzy concepts, to see what we might learn in our own lives about how these things resolve. Were there hints that the people who turned out to be incorrect ignored? Did they fail to notice their confusion? Telltale features of the problem that favoured a different interpretation? etc.

A recurring problem with these forms of civilizational inadequacy is bystander effect/first-mover disadvantage/prisoners' dilemma/etc, and the obvious solutions (there might be others) are coordination or enforcement. Seeing if there's other solutions and seeing how far people have already run with coordination and enforcement seems promising. Even if one is pessimistic about how easily the problems can be addressed and thinks we're probably screwed anyway but slightly less probably screwed if we try, then the value of information is still very high; this ... (read more)

Generalising from 'plane on a treadmill': a lot of incorrect answers to physics problems and misconceptions of physics in general. For any given problem or phenomenon, one can guess a hundred different fake explanations, numbers, or outcomes using different combinations of passwords like 'because of Newton's Nth law', 'because of drag', 'because of air resistance', 'but this is unphysical so it must be false', etc. For the vast majority of people, the only way to narrow down which explanations could be correct is to already know the answer or perform physic... (read more)

2sixes_and_sevens
I have a strange request. Without consulting some external source, can you please briefly define "learned helplessness" as you've used it in this context, and (privately, if you like) share it with me? I promise I'll explain at some later date.

Where does one draw the line, if at all? "1+1 does not inherently equal 2; rather, by convention, it is understood to mean 2. The debate is not about the territory, it is about what the symbols on the map mean." It seems to me that--very 'mysteriously'--people who understand real analysis never complain "But 0.999... doesn't equal 1"; sufficient mathematical literacy seems to kill any such impulse, which seems very telling to me.

2tut
Yes, and that's a case of "you don't understand mathematics, you get used to it." Which applies exactly to notation and related conventions. Edit: More specifically, if we let a_k = 9/10^k, and let s_n be the sum from k=1 to n of a_k, then the limit of s_n as n goes to infinity will be 1, but 1 won't be in {s_n | n in N}. When somebody who is used to calculus sees ".99..." what they are thinking of is the limit, which is 1. But before you get used to that, most likely what you think of is some member of {s_n | n in N} with an n that's large enough that you can't be bothered to write all the nines, but which is still finite.
1ThisSpaceAvailable
Exactly. The arguments about whether 0.99999.... = 1 are lacking a crucial item: a rigorous definition of what "0.9999..." refers to. The argument isn't "Is the limit as n goes to infinity of the sum from k=1 to n of 9*10^-k equal to 1?" It's "Here's a sequence of symbols. Should we assign this sequence of symbols the value of 1, or not?" Which is just a silly argument to have. If someone says "I don't believe that 0.9999.... = 1", the correct response (unless they have sufficient real analysis background) is not "Well, here's a proof of that claim", it's "Well, there are various axioms and definitions that lead to that being treated as being equal to 1".
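For reference, the rigorous definition that both of the comments above gesture at is short (standard real analysis, stated here only as background):

$$0.999\ldots \;:=\; \lim_{n\to\infty} \sum_{k=1}^{n} \frac{9}{10^{k}} \;=\; \lim_{n\to\infty}\left(1 - 10^{-n}\right) \;=\; 1.$$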