All of noggin-scratcher's Comments + Replies

How should we speak about "stressful events"? Maybe instead of "buying a plane ticket is stressful", something like "buying a plane ticket made me stressed". But the word "made" implies inevitability and still cedes too much power to the event.

"I am feeling stressed about buying a plane ticket" would acknowledge that the stress is coming from within you as an individual, and doesn't foreclose the possibility of instead not feeling stressed.

Answer by noggin-scratcher10

Pretty sure I've seen this particular case discussed here previously, and the conclusion was that actually they had published something related already, and fed it to the "co-scientist" AI. So it was synthesising/interpolating from information it had been given, rather than generating fully novel ideas. 

Per NewScientist https://www.newscientist.com/article/2469072-can-googles-new-research-assistant-ai-give-scientists-superpowers/

However, the team did publish a paper in 2023 – which was fed to the system – about how this family of mobile genetic elemen

... (read more)
4AnthonyC
I'm not sure what the concept of an "entirely new" or "fully novel" idea means in practice. How many such things actually exist, and how often should we expect any mind, however intelligent, to find one? Ideas can be more or less novel, and we can have thresholds for measuring that, but where should we place the bar? If you place it at "generate a correct or useful hypothesis you don't actually have enough data to locate in idea-space" then that seems like a mistake. I'd put it more near "generate an idea good enough to lead to a publishable scientific paper or grantable patent." This still seems pretty close to that? Sometimes "obvious" implications of scientific papers go unacknowledged or unexplored for a very long time.

Technically it's still never falsifiable. It can be verifiable, if true, upon finding yourself in an afterlife after death. But if it's false then you don't observe it being false when you cease existing.

https://en.wikipedia.org/wiki/Eschatological_verification

If we define a category of beliefs that are currently neither verifiable nor falsifiable, but might eventually become verifiable if they happen to be true, but won't be falsifiable even if they're false—that category potentially includes an awful lot of invisible pink dragons and orbiting teapots (who... (read more)

Looks like #6 in the TL;DRs section is accidentally duplicated (with the repeat numbered as #7)

1Conrad K.
Thank you!

Solid point. I realise I was unclear that for face shape I had in mind external influences in utero (while the bones of the face are growing into place in the fetus). Which would at least be a somewhat shared environment between twins. But nonetheless, changing my mind in real-time, because I would have expected more difference from one side of a womb to the other than we actually see between twins. 

Even if I'm mistaken about faces though, I don't think I'm wrong about brains, or humans in general.

In other words, all the information that controls the shape of your face, your bones, your organs and every single enzyme inside them – all of that takes less storage space than Microsoft Word™.

The shape of your face, and much else besides, will be affected by random chance and environmental influences during the process of development and growth. 

The eventual details of the brain, likewise, will be in large part a response to the environment—developing and learning from experience. 

So the final complexity of a human being is not actually bounded by the data contained in the genome, in the way described. 

gwern315

The shape of your face, and much else besides, will be affected by random chance and environmental influences during the process of development and growth.

The shape of your face will not be affected much by random chance and environmental influences. See: identical twins (including adopted apart).

I wasn't the one eating it, but having prepared a couple of Huel's "hot meal pot/pouch" options for my partner (I forget which ones exactly, but something in the way of mac & cheese or pasta bolognese), I can report that I found the smell coming off it to be profoundly unappetising.

Not sure how they went down with her, but there's a small stash of these pots in the cupboard that she hasn't touched beyond the first few—so I suspect not very well.

Slight glitches:

The "chapter shortcuts" section of https://www.lesswrong.com/s/9SJM9cdgapDybPksi lists "editPost" links to the chapter drafts (inaccessible to others)

The numbering in the post titles skips over #4

1Allison Duettmann
Thanks for pointing this out! I inserted the correct shortcuts. Here they are again:

Preface
1. Meet the Players: Value Diversity
2. Skim the Manual: Intelligent Voluntary Cooperation
3. Improve Cooperation: Better Technologies
4. Uphold Voluntarism: Physical Defense
5. Uphold Voluntarism: Digital Defense
6. Increase Intelligence: Welcome New Players
7. Iterate the Game: Racing Where?

Oh I was very on board with the sarcasm. Although as a graduate of one of them, I obviously can't believe you're rating the other one so highly.

This is a general principal

Principle* — unless they're the head-teacher of a school, the type to be involved in a principal/agent problem, or otherwise the "first"

graduates of the great English universities (both of them)

Shots fired

2J Bostock
1. Thanks for catching the typo. 2. Epistemic status has been updated to clarify that this is satirical in nature.

That definitely looks like the one. Appears I'd forgotten some of the context/details though.

I could swear there was a similar Scott Alexander post, about flirting deliberately skirting the edge of plausible deniability to avoid prematurely creating common knowledge. With an analogy to spies trying to identify a fellow operative without overtly tipping their hand in case they were mistaken and speaking to a non-spy.

Can't find it now: might have since been deleted, or might have only ever existed on LiveJournal or Tumblr or something.

https://slatestarcodex.com/2017/06/26/conversation-deliberately-skirts-the-border-of-incomprehensibility/ is similar but not explicitly about flirting.

6dirk
You're probably thinking of the Russian spies analogy, under section 2 in this (archived) livejournal post.

While I can appreciate it on the level of nerd aesthetics, I would be dubious of the choice of Quenya. Unless you're already a polyglot (as a demonstration of your aptitude for language-learning), it seems unlikely—without a community of speakers to immerse yourself in—that you'll reach the kind of fluid fluency that would make it natural to think in a conlang.

And if you do in fact have the capacity to acquire a language to that degree of fluency so easily, but don't already have several of the major world languages, it seems to me that the benefits of being able to communicate with an additional fraction of the world's population would outweigh those of knowing a language selected for mostly no-one else knowing it.

The strategy above makes all three statements seem equally unlikely to be true. Mathematically equivalent but with different emphasis would be to make all three statements seem equally unlikely to be false.

i.e. Pick things that seem so mundane and ordinary that surely they must be universally true—then watch the reaction as it is realised that one of them must actually be a lie.

1dkl9
* My phone runs iOS or Android
* My body mass is between 60 and 80 kg
* English is one of my native languages

That would be fun in the same way. If your goal in playing includes informing listeners, it's better to use thoroughly absurd facts and an equally-absurd lie; absurdity is low prior probability, which leads to surprise, which corresponds to learning.

I suspect there has to be a degree of mental disconnect, where they can see that things don't all happen (or not happen) equally as often as each other, but answering the math question of "What's the probability?" feels like a more abstract and different thing.

Maybe mixed up with some reflexive learned helplessness of not really trying to do math because of past experience that's left them thinking they just can't get it.

Possibly over-generalising from early textbook probability examples involving coins and dice, where counting up and dividing by the number of possible outcomes is a workable approach.

1TriflingRandom
I agree with your point about there being a 'mental disconnect'. It seems to be less of an issue with understanding the concept of two events not being equally likely to occur, but rather an issue with applying mathematical reasoning to an abstract problem. If you can't find the answer to that problem, you are likely to use the seemingly plausible but incorrect reasoning that 'it either happens or doesn't, so it's 50/50.' This fallacy could be considered a misapplication of the principle of insufficient reason.

I know someone who taught math to low-ability kids, and reported finding it difficult to persuade them otherwise. I assume some number of them carried on into adulthood still doing it.

5TriflingRandom
I feel that even an underachieving student can understand that the probability of winning the lottery is not 50/50. I can't imagine that many of those kids carried that fallacious thinking into adulthood.

In the infinite limit (or just large-ish x), the probability of at least one success, from nx attempts with 1/x odds on each attempt, will be 1 - ( 1 / e^n )

For x attempts, 1 - 1/e = 0.63212

For 2x attempts 1 - 1/e^2 = 0.86466

For 3x attempts 1 - 1/e^3 = 0.95021

And so on
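For anyone who wants to check the arithmetic, here's a quick sketch (the function name is mine) comparing the exact probability against the quoted limit:

```python
import math

def p_at_least_one(n, x):
    """Probability of at least one success in n*x attempts, each with 1/x odds."""
    return 1 - (1 - 1 / x) ** (n * x)

# As x grows, the exact value approaches the limit 1 - 1/e^n quoted above
for n in (1, 2, 3):
    exact = p_at_least_one(n, x=10_000)
    limit = 1 - math.exp(-n)
    print(n, round(exact, 5), round(limit, 5))
```

With x = 10,000 the exact values already match 0.63212, 0.86466, and 0.95021 to within a few parts in ten thousand.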

Ironically, the even more basic error of probabilistic thinking that people so—painfully—commonly make ("It either happens or doesn't, so it's 50/50") would get closer to the right answer.

4Dweomite
Is that error common?  I can only recall encountering one instance of it with surety, and I only know about that particular example because it was signal-boosted by people who were mocking it.
0egor.timatkov
Haha, I didn't think of that. Funny.

not intended to be replayed

I have flagrantly disregarded this advice in an attempt to uncover its secrets. I'm assuming there are still a bunch of patterns that remain obscure, but the ones I have picked up on allowed me to end day 60 with 5581 food just now. So I'm calling that good enough.

Rat Ruins: 

Starts out rich but becomes depleted after repeat visits

Dragon Lake: 

 I don't think I've ever seen food here. Dragons not edible?

Goat Grove:

Good at the beginning, gradually runs down as time passes

Horse Hills: 

A few random hours of each da

... (read more)

Also, the guy is spamming his post about spamming applications into all the subreddits, which gives the whole thing a great meta twist, I wonder if he’s using AI for that too.

I'm pretty sure I saw what must be the same account, posting blatantly AI generated replies/answers across a ton of different subreddits, including at least some that explicitly disallow that.

Either that or someone else's bot was spamming AI answer comments while also spamming copycat "I applied to 1000 jobs with AI" posts.

The golden rule can perhaps be enhanced by applying it on a meta level: rather than "I would like to be offered oral sex, therefore I should offer oral sex", a rule of "I like it when people consider my preferences and desires before acting, and offer me things I want—therefore I should do the same for others by being considerate and attentive to their preferences and desires, but I don't expect they want to be offered oral sex"

But then, if you're getting different and contradictory recommendations depending on how much meta you decide to apply, that rather defeats the point of having a rule to follow.

Ah, perils of text-only communication and my own mild deficiency in social senses; didn't catch that it was a joke.

Has nonetheless got me thinking about whether some toasted oats would be a good addition to any of the recipes I already like. Lil bit of extra bulk and texture, some browned nutty notes—there's not nothing to that.

Not wishing to be rude but this feels like it's missing a section on the benefits of eating oatmeal sometimes.

There's a favourable comparison to the protein/fibre/arsenic content of white rice, but I don't eat a lot of white rice so I am left unclear on the motivation for substituting something I do eat with oatmeal.

4Hastings
I didn't catch on at all that this was humor, and as a result made a point to pick up oatmeal next time I was at the grocery. I do actually like oatmeal, I just hadn't thought about it in a while. It has since made for some pretty good breakfasts.  This whole sequence of events is either deeply mundane or extremely funny, I genuinely can't tell. If it's funny it's definitely at my expense.
5bhauth
Yes, on one level that's part of the joke. But also, following the above instructions, it can be a low-cost complete meal with nonperishable ingredients that can be fixed in <5 minutes of work and <10 minutes of waiting.

I'm skeptical that continuity of personal identity is actually real, beyond a social consensus and a deeply held evolved instinct. I don't expect there are metaphysical markers that strictly delineate which person-moments are part of "the same" ongoing person through time. So hypothetical new scenarios like teleportation, brain emulation, clones built from brain scans (etc) are indeed challenging—they break apart things that have previously always gone together as a bundle.

Even so, physical continuity of the brain involved seems like a reasonable basis for... (read more)

2cubefox
There are several arguments for psychological continuity over time (involving memories and personality traits) rather than physical continuity. E.g. teleportation (already John Locke made basically the teleportation argument except with resurrection in heaven) and cases like dissociative identity disorder or similar pathological cases where physical continuity seems largely maintained while personal identity arguably isn't. There are also hypothetical cases: If consciousness wasn't tied to atoms, and body swap like in fiction was possible, we would still call it "a person swapping bodies" instead of "a person swapping minds". Though Boltzmann brains seem to be an argument in favor of physical continuity.

This feels like you are, on some level, not thinking of consciousness as a thing that is fully and actually made of atoms. Instead you talk about it like an immaterial soul that happens to currently be floating around in the vicinity of a particular set of atoms—but could in theory float off elsewhere to some other set of atoms that happens to be momentarily arranged into a pattern that's similar enough to confuse the consciousness into attaching itself to a different body.

In an atoms-first view of the world (where you have a brain made of physical st... (read more)

3cubefox
If you think consciousness can't be relocated, you presumably also think that teleportation would only create copies while destroying the originals. You might then be hesitant to use teleportation. You might even think that consciousness can't be relocated into the future, insofar as the physical stuff of current brains will be different in future brains. Or you might argue that consciousness can be relocated into the future, namely when the physical stuff is transformed continuously in space-time, and that relocation of consciousness is possible under and only under such continuous physical transformation.

Even if there wasn't an AI voice clone involved, I'm still suspicious that someone was getting scammed. Just on priors for an unsolicited crypto exchange referral.

Do you find there's any difficulty in retaining/integrating things you've read in short few-minute snippets between other activities?

2aysajan
Not really, because I spend some dedicated time Ankifying important stuff from the materials I have read throughout the day (or days), and this process involves a quick review and putting it all together.

I'll accept that concern as well-intentioned, but I think it's misplaced.

I've offered zero detail of any of the accounts I've seen posting about mind uploads (I don't have the account names recorded anywhere myself, so couldn't share if I wanted to), and those accounts were in any case typically throwaway usernames that posted only once or a few times, so had no other personal detail attached to be doxxed with. They were only recognisable as the same returning user because of the consistent subject matter.

Genuinely just curious about whether the people I h... (read more)

Point of curiosity: do you happen to have posted about this scenario on the subreddit /r/NoStupidQuestions/ ?

Because someone has (quite persistently returning on many different accounts to keep posting about it)

2Seth Herd
If I were the person asking the question, I don't think I'd appreciate this question. It feels a little like doxxing. If they were different accounts that didn't share a name, they're meant to be anonymous and so private.

The technical meaning is a stimulus that produces a stronger response than the stimulus for which that response originally evolved.

So for example a candy bar having a carefully engineered combination of sugar, fat, salt, and flavour in proportions that make it more appetising than any naturally occurring food. Or outrage-baiting infotainment "news" capturing attention more effectively than anything that one villager could have said to another about important recent events.

Phil Magness notes that students could instead start their majors. That implies that when you arrive on campus, you should know what major is right for you.

That sounds like the way we do it in the UK: there's no norm of "picking a major" during the course of your time at university - you apply to a specific course, and you study that from the start.

Probably why a standard Bachelor's degree is expected to be a 3 year course rather than 4.

By the way, another tactic that is similar (and really prohibited in formal debates) is overloading the speech with technical terms

Possible typo: is it "really" prohibited, or "rarely" prohibited?

1bayesyatina
This error is not allowed in formal debates. Thank you for your feedback! I have made the necessary changes to the text. In formal debates, it is only allowed to use well-known terms, and a definition must be provided for any special terms used.

Or to add on to the thought, there are non-LW pro-truth/knowledge idioms like "knowledge is power", "the truth will set you free", or "honesty is the best policy"

"The truth hurts", "ignorance is bliss", and "what you don't know can't hurt you" don't contradict: they all say you're better off not knowing some bit of information that would be unpleasant to know, or that a small "white lie" is allowable.

The opposite there would be phrases I've mostly seen via LessWrong like "that which can be destroyed by the truth, should be", or "what is true is already true, owning up to it doesn't make it worse", or "if you tell one lie, the truth is thereafter your enemy", or the general ethos that knowing true information enables effective action.


Sticking to multiples of three does have a minor advantage of aligning itself with things that there are already number-words for; "thousand", "million", "billion" etc. 

So for those who don't work with the notation often, they might find it easier to recognise and mentally translate 20e9 as "20 billion", rather than having to think through the implications of 2e10
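That mental translation is mechanical enough to automate; here's an illustrative sketch (function name mine) that renders a number with its exponent rounded down to a multiple of three:

```python
import math

def engineering(x):
    """Format x with an exponent that is a multiple of 3 (engineering notation)."""
    if x == 0:
        return "0e0"
    exp = math.floor(math.log10(abs(x)))
    exp3 = (exp // 3) * 3  # round the exponent down to a multiple of three
    mantissa = x / 10 ** exp3
    return f"{mantissa:g}e{exp3}"

print(engineering(2e10))  # "20e9", i.e. 20 billion
```

The mantissa lands between 1 and 999, so the exponent always lines up with "thousand", "million", "billion", and so on.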

2Nathan Helm-Burger
Yeah, that's probably the rationale
Answer by noggin-scratcher30

I've had some success with a rule of "If you want a sugary snack that's fine, but you have to make a specific intentional trip to the cupboard for it, not just mindlessly/reflexively grab something while putting together another meal or passing by"

4Slapstick
That one sounds good! It wouldn't work for me personally because I have a pathological relationship with refined sugar so the only equilibrium which works for me is cutting it out entirely (which has been successful and rewarding though initially very difficult). Thanks!

I haven't checked word count to identify the best excerpt, but Chapter 88 has some excellent tension to it. All you need to know to understand the stakes is that there's a troll loose, and it's got lessons about bystander effects and taking responsibility.

You’ve heard some trite truism your whole life, then one day an epiphany lands and you try to save it with words, and you realize the description is that truism

Reminds me of https://www.lesswrong.com/posts/k9dsbn8LZ6tTesDS3/sazen

I'm finding myself stuck on the question of how exactly the strict version would avoid the use of some of those negating adjectives. If you want to express the information that, say, eating grass won't give the human body useful calories...

  • "Grass is indigestible" : disallowed
  • "Grass is not nutritious" : disallowed
  • "Grass will pass through you without providing energy" : "without providing energy" seems little different to "not providing energy", it's still at heart a negative claim

Perhaps a restatement in terms of "Only food that can be easily digested... (read more)

6gwern
I find that editing my writing to use positive statements does make it better. I feel doubtful I could easily take it to the extent of making all positive statements.

This might be an interesting use of LLM rewrites: negative->positive rephrasing feels like something within GPT-4's capabilities, and it would let you quickly translate a large corpus to read & evaluate without putting in a huge amount of work to write a large varied corpus of Abs-E text yourself. (I dislike the current name 'Abs-E' and by analogy to E-Prime, suggest 'E⁺' - short for 'English-positive'.)

This would also combine well with 'Up-Goer-Five' style writing. In fact, I think Up-Goer-Five writing is already mostly E⁺ writing because of the need to say what something is rather than is not. (Checking the original XKCD, I see a few negations, but they all look easily rewritten to be positive: "if there's a problem so they decide not go to space" -> "...so they decide to get out of the rocket".)

----------------------------------------

That one seems easy to do if you go more quantitative. What is 'energy'? I mean, by e=mc^2, some grass embodies a lot of energy. You mean calories. "Grass provides 0 calories" is a positive assertion, which is more correct and still reasonably natural English. And also this assertion is clearly false, because plenty of animals eat grass and it provides them >0 calories. "Oh, I meant for humans, of course". Fine, your first two versions failed this ('indigestible' for whom, exactly?) but easily revised: "Grass provides 0 calories to humans." 0 is not a negation, but a specific number, and so is valid, and correctly expresses the intent while not being overly universal and implying false things about herbivores.

That statement would seem to also be obviously wrong. Plenty of things are 'easily digested' in any reasonable meaning of that phrase, while providing ~0 calories. Water, for example. Or artificial sweeteners. Minerals like calcium. (Chiral molecules, if yo

I know few people these days who aren't using ChatGPT and Midjourney in some small way.

We move in very different social circles.

Have to ask: how much of the text of this post was written by ChatGPT?

3Greg Robison
Just written by me, a human. I apologize if it sounds too much like AI.

I don't have lots of keys, or frequent changes to which ones I want to carry, but a tiny carabiner has still proved useful to make individual keys easily separable from the bunch.

As an example, being able to quickly and easily say "here's the house key: you go on ahead and let yourself in, while I park the car" without the nuisance of prying the ring open to twiddle the key off.

Low positive and actively negative scores seem to me to send different signals. A low score can be confused for general apathy, imagining that few people have taken notice of the post enough to vote on it. A negative score communicates clearly that something about the post was objectionable or mistaken.

If the purpose of the scoring system is to aggregate opinions, then negative opinions are a necessary input for an accurate score.

Strikes me as inelegant for the final score to depend on the order in which readers happened to encounter the post. Which woul... (read more)

1alex.herwix
I see your point regarding different results depending on the order in which people see the post, but that's also true the other way around. Given the assumption that fewer people are likely to view a post that has negative karma, people who may actually turn out to like the post and upvote it never do so because of preexisting negative votes. In fact, I think that's the whole point of this scheme, isn't it? So, either way you never capture an „accurate" picture because the signal itself is distorting the outcome. The key question is then what outcome one prefers; neither is objectively „right" or in all respects „better".

I personally think that downvoting into negative karma is an unproductive practice, in particular with new posts, because it stifles debate about potentially interesting topics. If you are bothered enough to downvote there should often be something to the post that is controversial. Take this post as an example. When I found it a couple of hours after posting, it was already downvoted into negative karma but there is no obvious reason why this should be so. It's well written and makes a clear point that's worth discussing, as exemplified by our engagement. Because of its negative karma, however, fewer people are likely to weigh in on the debate because the signal is telling them not to bother engaging with it.

In general my suggestion would be to only downvote into negative karma if you can be bothered to explain and defend your downvote in a comment, and are willing to take it back if the author of the post gives a reasonable reply. But as I said, this is just one way of looking at this. I value discourse and critical debate as essential pieces of sense- and meaning-making, and believe that I made a reasonable argument for how this is stifled by current practice. Thanks to the author of the post for his thoughtful invite for critical reflection!

My sense (from 10+ years on reddit, 2 of which spent moderating a somewhat large/active subreddit) is that there's a "geeks MOPs and sociopaths"–like effect, where a small subreddit can (if it's lucky enough to start with one) maintain a distinctive identity around the kernel of a cool idea, with a small select group who are self-selected for a degree of passion about that idea.

But as the size of the group grows it gradually gets diluted with poor imitators, who are upvoted by a general audience who are less discerning about whether posts are in the origin... (read more)

Oh I see (I think) - I took "my face being picked up by the camera" to mean the way the camera can recognise and track/display the location of a face (thought you were making a point about there being a degree of responsiveness and mixed processing/data involved in that), rather than the literal actual face itself.

A camera is a sensor gathering data. Some of that data describes the world, including things in the world, including people with faces. Your actual face is indeed neither software nor data: it's a physical object. But it does get described by dat... (read more)

2Davidmanheim
Yes, I think you're now saying something akin to what I was trying to say. The AI, as a set of weights and activation functions, is a different artifact than the software being used to multiply the matrices, much less the program used to output the text. (But I'm not sure this is quite the same as a different level of abstraction, the way humans versus atoms are - though if we want to take that route, I think gjm's comment about humans and chemistry makes this clearer.)

I'm not certain I follow your intent with that example, but I don't think it breaks any category boundaries.

The process using some algorithm to find your face is software. It has data (a frame of video) as input, and data (coordinates locating a face) as output. The facial recognition algorithm itself was maybe produced using training data and a learning algorithm (software).

There's then some more software which takes that data (the frame of video and the coordinates) and outputs new data (a frame of video with a rectangle drawn around your face).

It is fre... (read more)

2Davidmanheim
I'm not saying that I can force breaking of category boundaries, I'm asking whether the categories are actually useful for thinking about the systems. I'm saying they aren't, and we need to stop trying to use categories in this way. And your reply didn't address the key point - is the thing that controls the body being shown in the data being transmitted software, or data? And parallel to that, is the thing that controls the output of the AI system software or data?

True to say that there's a distinction between software and data. Photo editor, word processor, video recorder: software. Photo, document, video: data.

I think similarly there's a distinction within parts of "the AI", where the weights of the model are data (big blob of stored numbers that the training software calculated). Seems inaccurate though, to say that AI "isn't software" when you do still need software running that uses those weights to do the inference.

I guess I take your point, that some of the intuitions people might have about software (that it... (read more)

Compare this to a similar argument that a hardware enthusiast could use to argue against making a software/hardware distinction. You can argue that saying "software" is misleading because it distracts from the physical reality. Software is still present physically somewhere in the computer. Software doesn't do anything hardware can't do, since software doing is just hardware doing.

But thinking in this way will not be a very good way of predicting reality. The hypothetical hardware enthusiast would not be able to predict the rise of the "programmer" p... (read more)

1Davidmanheim
We can play the game of recategorizing certian things, and saying data and software are separate - but the question is whether it adds insight, or not. And I think that existing categories are more misleading than enlightening, hence my claim. For example, is my face being picked up by the camera during the videoconference "data" in a meaningful sense? Does that tell you something useful about how to have a videoconference? If not, should we call it software, or shift our focus elsewhere when discussing it?
3JBlack
Definitely agreed. AI is software, but not all software is the same and it doesn't all conform to any particular expectations.

I did also double-take a bit at "When photographs are not good, we blame the photographer, not the software running on the camera", because sometimes we do quite reasonably blame the software running on the camera. It often doesn't work as designed, and often the design was bad in the first place. Many people are not aware of just how much our cameras automatically fabricate images these days, and present an illusion of faithfully capturing a scene. There are enough approximate heuristics in use that nobody can predict all the interactions with the inputs and each other that will break that illusion.

A good photographer takes a lot of photos with good equipment in ways that are more likely to give better results than average. If all of the photos are bad, then it's fair to believe that the photographer is not good. If a photograph of one special moment is not good, then it can easily be outside the reasonable control of the photographer and one possible cause can be software behaving poorly. If it's known in advance that you only get one shot, a good photographer will have multiple cameras on it.

Is there (or could there be) an RSS option that excludes Dialogue posts?

I think I'm currently using the "all posts" feed, but I prefer the brevity and coherence that comes from a single author with a thought they're trying to communicate to a reader, as compared to two people communicating conversationally with each other.

2habryka
Not currently, I think. But it definitely seems like a reasonable thing to have.

why 0^1 = 1 and not 0

Just to check, did you here mean 0^0 ?

It's been a while since I did much math, but I thought that was the one that counterintuitively equals 1. Whereas 0^1=1 just seems like it would create an unwelcome exception to the x^1=x rule.
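For what it's worth, most programming languages side with that intuition; a quick check in Python (where `**` is exponentiation):

```python
print(0 ** 0)  # 1  (the empty-product convention: x^0 = 1 for any x)
print(0 ** 1)  # 0  (consistent with the x^1 = x rule)
print(2 ** 0)  # 1
```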

0Brendan Long
Er yeah, I'll edit. Thanks!

I'm not working on X because when I start to look at it my brain seizes up with a formless sense of dread at even the familiar parts of the task and I can't find the "start doing" lever.

I'm not working on X because the ticket for it was put in by that guy and I don't want to deal with the inevitable nitpicking follow-up questions and unstated additional work.

I'm not working on X because if I start doing the easy parts that would commit me to also doing the hard parts. Maybe if I leave it, some other sucker will take it on and I won't have to do it at all.

I... (read more)
