Reading through Backdoors as an analogy for deceptive alignment prompted me to think about a LW feature I might be interested in. I don't have much math background, and have always found it very effortful to parse math-heavy posts. I expect there are other people in a similar boat.
In modern programming IDEs it's common to have hoverovers for functions and variables, and I think it's sort of crazy that we don't have that for math. So, I'm considering a LessWrong feature that:
On "Backdoors", I asked the LessWrong-integrated LLM: "what do the Latex terms here mean"?
It replied:
...The LaTeX symbols in this passage represent mathematical notations. Let me explain each of them:
- $\mathcal{F}$: This represents a class of functions. The curly F denotes that it's a set or collection of functions.
- $f \in \mathcal{F}$: This means that $f$ is a function that belongs to (is an element of) the class $\mathcal{F}$.
- $f^*$: The asterisk superscript typ
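To gesture at the implementation I'm imagining (purely a hypothetical sketch; every name here is made up and none of this is actual LessWrong code): catch hovers on rendered math elements, send the LaTeX plus the surrounding paragraph to an LLM endpoint, and show the answer in a tooltip.

```typescript
// Hypothetical sketch of a "math hoverover" feature. None of these names exist
// in the real LessWrong codebase; this is just to illustrate the shape of the idea.
type SymbolExplanation = { latex: string; explanation: string };

const cache = new Map<string, SymbolExplanation>();

async function explainSymbol(latex: string, surroundingText: string): Promise<SymbolExplanation> {
  const cached = cache.get(latex);
  if (cached) return cached; // don't re-query the LLM on repeated hovers

  // Imagined backend endpoint wrapping an LLM call with a prompt like
  // "What does this LaTeX term mean in the context of this passage?"
  const response = await fetch("/api/explainMath", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ latex, surroundingText }),
  });
  const explanation: SymbolExplanation = await response.json();
  cache.set(latex, explanation);
  return explanation;
}

// Attach a (crude) tooltip to every rendered math element on the page.
// ".mjx-math" / ".katex" are a guess at what the math renderer emits.
function attachMathHoverovers() {
  document.querySelectorAll<HTMLElement>(".mjx-math, .katex").forEach((el) => {
    el.addEventListener("mouseenter", async () => {
      const latex = el.getAttribute("data-original-latex") ?? el.textContent ?? "";
      const context = el.closest("p")?.textContent ?? "";
      const { explanation } = await explainSymbol(latex, context);
      el.title = explanation; // a real version would use a proper tooltip component
    });
  });
}
```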
The “prompt shut down” clause seemed like one of the more important clauses in the SB 1047 bill. I was surprised that other people I talked to didn't seem to think it mattered that much, and wanted to argue/hear arguments about it.
The clause says AI developers and compute-cluster operators are required to have a plan for promptly shutting down large AI models.
People's objections were usually:
"It's not actually that hard to turn off an AI – it's maybe a few hours of running around pulling plugs out of server racks, and it's not like we're that likely to be in the sort of hard takeoff scenario where the differences in a couple hours of manually turning it off will make the difference."
I'm not sure if this is actually true, but, assuming it's true, it still seems to me like the shutdown clause is one of the more uncomplicatedly-good parts of the bill.
Some reasons:
1. I think the ultimate end game for AI governance will require being able to quickly notice and shut down rogue AIs. That's what it means for the acute risk period to end.
2. In the nearer term, I expect the situations where we need to stop running an AI to be fairly murky. Shutting down an AI is going to be ve...
Largely agree with everything here.
But, I've heard some people be concerned: "aren't basically all SSP-like plans basically fake? Is this going to cement some random bureaucratic bullshit rather than actual good plans?" And yeah, that does seem plausible.
I do think that all SSP-like plans are basically fake, and I’m opposed to them becoming the bedrock of AI regulation. But I worry that people take the premise “the government will inevitably botch this” and conclude something like “so it’s best to let the labs figure out what to do before cementing anything.” This seems alarming to me. Afaict, the current world we’re in is basically the worst case scenario—labs are racing to build AGI, and their safety approach is ~“don’t worry, we’ll figure it out as we go.” But this process doesn’t seem very likely to result in good safety plans either; charging ahead as is doesn’t necessarily beget better policies. So while I certainly agree that SSP-shaped things are woefully inadequate, it seems important, when discussing this, to keep in mind what the counterfactual is. Because the status quo is not, imo, a remotely acceptable alternative either.
Afaict, the current world we’re in is basically the worst case scenario
the status quo is not, imo, a remotely acceptable alternative either
Both of these quotes display types of thinking which are typically dangerous and counterproductive, because they rule out the possibility that your actions can make things worse.
The current world is very far from the worst-case scenario (even if you have very high P(doom), it's far away in log-odds) and I don't think it would be that hard to accidentally make things considerably worse.
I largely agree that the "full shutdown" provisions are great. I also like that the bill requires developers to specify circumstances under which they would enact a shutdown:
(I) Describes in detail the conditions under which a developer would enact a full shutdown.
In general, I think it's great to help governments understand what kinds of scenarios would require a shutdown, make it easy for governments and companies to enact a shutdown, and give governments the knowledge/tools to verify that a shutdown has been achieved.
There was a particular mistake I made over in this thread. Noticing the mistake didn't change my overall position (and also my overall position was even weirder than I think people thought it was). But, seemed worth noting somewhere.
I think most folk morality (or at least my own folk morality) generally ranks the following crimes in ascending order of badness:
But this is the conflation of a few different things. One axis I was ignoring was "morality as coordination tool" vs "morality as 'doing the right thing because I think it's right'." And these are actually quite different. And, importantly, you don't get to spend many resources on morality-as-doing-the-right-thing unless you have a solid foundation of the morality-as-coordination-tool.
There's actually a 4x3 matrix you can plot lying/stealing/killing/torture-killing into, which are:
On the object level, the three levels you described are extremely important:
I'm basically never talking about the third thing when I talk about morality or anything like that, because I don't think we've done a decent job at the first thing. I think there's a lot of misinformation out there about how well we've done the first thing, and I think that in practice utilitarian ethical discourse tends to raise the message length of making that distinction, by implicitly denying that there's an outgroup.
I don't think ingroups should be arbitrary affiliation groups. Or, more precisely, "ingroups are arbitrary affiliation groups" is one natural supergroup which I think is doing a lot of harm, and there are other natural supergroups following different strategies, of which "righteousness/justice" is one that I think is especially important. But pretending there's no outgroup is worse than honestly trying to treat foreigners decently as foreigners who can't be c...
Some beliefs of mine, which I assume are different from Ben's but which I think are still relevant to this question, are:
At the very least, your ability to accomplish anything re: helping the outgroup or helping the powerless is dependent on having spare resources to do so.
There are many clusters of actions which might locally benefit the ingroup and leave the outgroup or powerless in the cold, but which then give future generations of the ingroup more ability to take useful actions to help them. i.e. if you're a tribe in the wilderness, I'd much rather you invent capitalism and build supermarkets than try to help the poor. The helping of the poor is nice but barely matters in the grand scheme of things.
I don't personally think you need to halt *all* helping of the powerless until you've solidified your treatment of the ingroup/outgroup. But I could imagine future me changing my mind about that.
A major suspicion/confusion I have here is that the two frames:
Look...
This feels like the most direct engagement I've seen from you with what I've been trying to say. Thanks! I'm not sure how to describe the metric on which this is obviously to-the-point and trying-to-be-pin-down-able, but I want to at least flag an example where it seems like you're doing the thing.
Periodically I describe a particular problem with the rationalsphere with the programmer metaphor of:
"For several years, CFAR took the main LW Sequences Git Repo and forked it into a private branch, then layered all sorts of new commits, ran with some assumptions, and tweaked around some of the legacy code a bit. This was all done in private organizations, or in-person conversation, or at best, on hard-to-follow-and-link-to-threads on Facebook.
"And now, there's a massive series of git-merge conflicts, as important concepts from CFAR attempt to get merged back into the original LessWrong branch. And people are going, like 'what the hell is focusing and circling?'"
And this points towards an important thing about _why_ I think it's important to keep people actually writing down and publishing their longform thoughts (esp the people who are working in private organizations).
And I'm not sure how to actually really convey it properly _without_ the programming metaphor. (Or, I suppose I just could. Maybe if I simply remove the first sentence the description still works. But I feel like the first sentence does a lot of important work in communicating it clearly)
We have enough programmers that I can basically get away with it anyway, but it'd be nice to not have to rely on that.
There's a skill of "quickly operationalizing a prediction, about a question that is cruxy for your decisionmaking."
And, it's dramatically better to be very fluent at this skill, rather than "merely pretty okay at it."
Fluency means you can actually use it day-to-day to help with whatever work is important to you. Day-to-day usage means you can actually get calibrated re: predictions in whatever domains you care about. Calibration means that your intuitions will be good, and _you'll know they're good_.
Fluency means you can do it _while you're in the middle of your thought process_, and then return to your thought process, rather than awkwardly bolting it on at the end.
I find this useful at multiple levels-of-strategy. i.e. for big picture 6 month planning, as well as for "what do I do in the next hour."
I'm working on this as a full blogpost but figured I would start getting pieces of it out here for now.
A lot of this skill is building off of CFAR's "inner simulator" framing. Andrew Critch recently framed this to me as "using your System 2 (conscious, deliberate intelligence) to generate questions for your System 1 (fast intuition) to answer." (Whereas previously, he'd known System 1 ...
I disagree with this particular theunitofcaring post "what would you do with 20 billion dollars?", and I think this is possibly the only area where I disagree with theunitofcaring's overall philosophy, so it seemed worth mentioning. (This crops up occasionally in her other posts but it is most clear cut here).
I think if you got 20 billion dollars and didn't want to think too hard about what to do with it, donating to the Open Philanthropy Project is a pretty decent fallback option.
But my overall take on how to handle the EA funding landscape has changed a bit in the past few years. Some things that theunitofcaring doesn't mention here, which seem to at least warrant thinking about:
[Each of these has a bit of a citation-needed, that I recall hearing or reading in reliable sounding places, but correct me if I'm wrong or out of date]
1) OpenPhil has (at least? I can't find more recent data) 8 billion dollars, and makes something like 500 million a year in investment returns. They are currently able to give 100 million away a year.
They're working on building more capacity so they can give more. But for the foreseeable future, they _can't_ actually spend more m...
A major goal I had for the LessWrong Review was to be "the intermediate metric that let me know if LW was accomplishing important things", which helped me steer.
I think it hasn't super succeeded at this.
I think one problem is that it just... feels like it generates stuff people liked reading, which is different from "stuff that turned out to be genuinely important."
I'm now wondering "what if I built a power-tool that is designed for a single user to decide which posts seem to have mattered the most (according to them), and, then, figure out which intermediate posts played into them." What would the lightweight version of that look like?
Another thing is, like, I want to see what particular other individuals thought mattered, as opposed to a generic aggregate that doesn't have any theory underlying it. Making the voting public veers towards some kind of "what did the cool people think?" contest, so I feel anxious about that, but, I do think the info is just pretty useful. But like, what if the output of the review is a series of individual takes on what-mattered-and-why, collectively, rather than an aggregate vote?
Something struck me recently, as I watched Kubo and Coco – two animated movies that both deal with death, and highlight music and storytelling as mechanisms by which we can preserve people after they die.
Kubo begins "Don't blink - if you blink for even an instant, if you a miss a single thing, our hero will perish." This is not because there is something "important" that happens quickly that you might miss. Maybe there is, but it's not the point. The point is that Kubo is telling a story about people. Those people are now dead. And insofar as those people are able to be kept alive, it is by preserving as much of their personhood as possible - by remembering as much as possible from their life.
This is generally how I think about death.
Cryonics is an attempt at the ultimate form of preserving someone's pattern forever, but in a world pre-cryonics, the best you can reasonably hope for is for people to preserve you so thoroughly in story that a young person from the next generation can hear the story, and palpably feel the underlying character, rich with inner life. Can see the person so clearly that he or she comes to live inside them.
Realistical...
I wanted to just reply something like "<3" and then became self-conscious of whether that was appropriate for LW.
In particular, I think if we make the front-page comments section filtered by "curated/frontpage/community" (i.e. you only see community-blog comments on the frontpage if your frontpage is set to community), then I'd feel more comfortable posting comments like "<3", which feels correct to me.
Yesterday I was at a "cultivating curiosity" workshop beta-test. One concept was "there are different mental postures you can adopt, that affect how easy it is to notice and cultivate curiosities."
It wasn't exactly the point of the workshop, but I ended up with several different "curiosity-postures", that were useful to try on while trying to lean into "curiosity" re: topics that I feel annoyed or frustrated or demoralized about.
The default stances I end up with when I Try To Do Curiosity On Purpose are something like:
1. Dutiful Curiosity (which is kinda fake, although capable of being dissociatedly autistic and noticing lots of details that exist and questions I could ask)
2. Performatively Friendly Curiosity (also kinda fake, but does shake me out of my default way of relating to things. In this, I imagine saying to whatever thing I'm bored/frustrated with "hullo!" and try to acknowledge it and give it at least some chance of telling me things)
But some other stances to try on, that came up, were:
3. Curiosity like "a predator." "I wonder what that mouse is gonna do?"
4. Earnestly playful curiosity. "oh that [frustrating thing] is so neat, I wonder how it works! what's it gonna ...
I started writing this a few weeks ago. By now I have other posts that make these points more cleanly in the works, and I'm in the process of thinking through some new thoughts that might revise bits of this.
But I think it's going to be a while before I can articulate all that. So meanwhile, here's a quick summary of the overall thesis I'm building towards (with the "Rationalization" and "Sitting Bolt Upright in Alarm" post, and other posts and conversations that have been in the works).
(By now I've had fairly extensive chats with Jessicata and Benquo and I don't expect this to add anything that I didn't discuss there, so this is more for other people who're interested in staying up to speed. I'm separately working on a summary of my current epistemic state after those chats)
In that case Sarah later wrote up a followup post that was more reasonable and Benquo wrote up a post that articulated the problem more clearly. [Can't find the links offhand].
"Reply to Criticism on my EA Post", "Between Honesty and Perjury"
Conversation with Andrew Critch today, in light of a lot of the nonprofit legal work he's been involved with lately. I thought it was worth writing up:
"I've gained a lot of respect for the law in the last few years. Like, a lot of laws make a lot more sense than you'd think. I actually think looking into the IRS codes would actually be instructive in designing systems to align potentially unfriendly agents."
I said "Huh. How surprised are you by this? And curious if your brain was doing one particular pattern a few years ago that you can now see as wrong?"
"I think mostly the laws that were promoted to my attention were especially stupid, because that's what was worth telling outrage stories about. Also, in middle school I developed this general hatred for stupid rules that didn't make any sense and generalized this to 'people in power make stupid rules', or something. But, actually, maybe middle school teachers are just particularly bad at making rules. Most of the IRS tax code has seemed pretty reasonable to me."
Over in this thread, Said asked the reasonable question "who exactly is the target audience with this Best of 2018 book?"
By compiling the list, we are saying: “here is the best work done on Less Wrong in [time period]”. But to whom are we saying this? To ourselves, so to speak? Is this for internal consumption—as a guideline for future work, collectively decided on, and meant to be considered as a standard or bar to meet, by us, and anyone who joins us in the future?
Or, is this meant for external consumption—a way of saying to others, “see what we have accomplished, and be impressed”, and also “here are the fruits of our labors; take them and make use of them”? Or something else? Or some combination of the above?
I'm working on a post that goes into a bit more detail about the Review Phase, and, to be quite honest, the whole process is a bit in flux – I expect us (the LW team as well as site participants) to learn, over the course of the review process, what aspects of it are most valuable.
But, a quick "best guess" answer for now.
I see the overall review process as having two "major phases":
I've posted this on Facebook a couple times but seems perhaps worth mentioning once on LW: A couple weeks ago I registered the domain LessLong.com and redirected it to LessWrong.com/shortform. :P
A thing I might have maybe changed my mind about:
I used to think a primary job of a meetup/community organizer was to train their successor, and develop longterm sustainability of leadership.
I still hold out for that dream. But, it seems like a pattern is:
1) community organizer with passion and vision founds a community
2) they eventually move on, and pass it on to one successor who's pretty closely aligned and competent
3) then the First Successor has to move on too, and then... there isn't anyone obvious to take the reins, but if no one does the community dies, so some people reluctantly step up. and....
...then forever after it's a pale shadow of its original self.
For semi-branded communities (such as EA, or Rationality), this also means that if someone new with energy/vision shows up in the area, they'll see a meetup, they'll show up, they'll feel like the meetup isn't all that good, and then move on. Whereas they (maybe??) might have founded a new one that they got to shape the direction of more.
I think this also applies to non-community organizations (i.e. founder hands the reins to a new CEO who hands the reins to a new CEO who doesn't quite know what to do)
So... I'm kinda wonde...
From Wikipedia: George Washington, which cites Korzi, Michael J. (2011), Presidential Term Limits in American History: Power, Principles, and Politics, page 43; and Peabody, Bruce G. (September 1, 2001), "George Washington, Presidential Term Limits, and the Problem of Reluctant Political Leadership", Presidential Studies Quarterly, 31 (3): 439–453:
At the end of his second term, Washington retired for personal and political reasons, dismayed with personal attacks, and to ensure that a truly contested presidential election could be held. He did not feel bound to a two-term limit, but his retirement set a significant precedent. Washington is often credited with setting the principle of a two-term presidency, but it was Thomas Jefferson who first refused to run for a third term on political grounds.
A note on the part that says "to ensure that a truly contested presidential election could be held": at this time, Washington's health was failing, and he indeed died during what would have been his 3rd term if he had run for a 3rd term. If he had died in office, he would have been immediately succeeded by the Vice President, which would set an unfortunate precedent of presidents serving until they die, then being followed by an appointed heir until that heir dies, blurring the distinction between the republic and a monarchy.
Posts I vaguely want to have been written so I can link them to certain types of new users:
Crossposted from my Facebook timeline (and, in turn, crossposted there from vaguely secret, dank corners of the rationalsphere)
“So Ray, is LessLong ready to completely replace Facebook? Can I start posting my cat pictures and political rants there?”
Well, um, hmm....
So here’s the deal. I do hope someday someone builds an actual pure social platform that’s just actually good, that’s not out-to-get you, with reasonably good discourse. I even think the LessWrong architecture might be good for that (and if a team wanted to fork the codebase, they’d be welcome to try)
But LessWrong shortform *is* trying to do a bit of a more nuanced thing than that.
Shortform is for writing up early stage ideas, brainstorming, or just writing stuff where you aren’t quite sure how good it is or how much attention to claim for it.
For it to succeed there, it’s really important that it be a place where people don’t have to self-censor or stress about how their writing comes across. I think intellectual progress depends on earnest curiosity, exploring ideas, sometimes down dead ends.
I even think it involves clever jokes sometimes.
But... I dunno, if I looked ahead 5 years and saw that the Future People were using ...
Just spent a weekend at the Internet Intellectual Infrastructure Retreat. One thing I came away with was a slightly better sense of forecasting and prediction markets, and how they might be expected to unfold as an institution.
I initially had a sense that forecasting, and predictions in particular, were sort of "looking at the easy-to-measure/think-about stuff, which isn't necessarily connected to the stuff that matters most."
Tournaments over Prediction Markets
Prediction markets are often illegal or sketchily legal. But prediction tournaments are not, so this is how most forecasting is done.
The Good Judgment Project
Held an open tournament, the winners of which became "Superforecasters". Those people now... I think basically work as professional forecasters, who rent out their services to companies, NGOs and governments that have a concrete use for knowing how likely a given country is to go to war, or something. (I think they'd been hired sometimes by Open Phil?)
Vague impression that they mostly focus on geopolitics stuff?
High Volume and Metaforecasting
Ozzie described a vision where lots of forecasters are predicting things all the time...
More in neat/scary things Ray noticed about himself.
I set aside this week to learn about Machine Learning, because it seemed like an important thing to understand. One thing I knew, going in, is that I had a self-image as a "non technical person." (Or at least, non-technical relative to rationality-folk). I'm the community/ritual guy, who happens to have specialized in web development as my day job but that's something I did out of necessity rather than a deep love.
So part of the point of this week was to "get over myself, and start being the sort of person who can learn technical things in domains I'm not already familiar with."
And that went pretty fine.
As it turned out, after talking to some folk I ended up deciding that re-learning Calculus was the right thing to do this week. I'd learned it in college, but not in a way that connected to anything or gave me a sense of its usefulness.
And it turned out I had a separate image of myself as a "person who doesn't know Calculus", in addition to "not a technical person". This was fairly easy to overcome since I had already given myself a bunch of space to explore and change this week, and I'd spent the past few months transitioning into being ready for it. But if this had been at an earlier stage of my life and if I hadn't carved out a week for it, it would have been harder to overcome.
Man. Identities. Keep that shit small yo.
Also important to note that "learn Calculus this week" is a thing a person can do fairly easily without being some sort of math savant.
(Presumably not the full 'know how to do all the particular integrals and be able to ace the final' perhaps, but definitely 'grok what the hell this is about and know how to do most problems that one encounters in the wild, and where to look if you find one that's harder than that.' To ace the final you'll need two weeks.)
I didn't downvote, but I agree that this is a suboptimal meme – though the prevailing mindset of "almost nobody can learn Calculus" is much worse.
As a datapoint, it took me about two weeks of obsessive, 15 hour/day study to learn Calculus to a point where I tested out of the first two courses when I was 16. And I think it's fair to say I was unusually talented and unusually motivated. I would not expect the vast majority of people to be able to grok Calculus within a week, though obviously people on this site are not a representative sample.
Quite fair. I had read Zvi as speaking to typical LessWrong readership. Also, the standard you seem to be describing here is much higher than the standard Zvi was describing.
High Stakes Value and the Epistemic Commons
I've had this in my drafts for a year. I don't feel like the current version of it is saying something either novel or crisp enough to quite make sense as a top-level post, but wanted to get it out at least as a shortform for now.
There's a really tough situation I think about a lot, from my perspective as a LessWrong moderator. These are my personal thoughts on it.
The problem, in short:
Sometimes a problem is epistemically confusing, and there are probably political ramifications of it, such that the most qualified people to debate it are also in conflict with billions of dollars on the line and the situation is really high stakes (i.e. the extinction of humanity) such that it really matters we get the question right.
Political conflict + epistemic murkiness means that it's not clear what "thinking and communicating sanely" about the problem looks like, and people have (possibly legitimate) reasons to be suspicious of each other's reasoning.
High Stakes means that we can't ignore the problem.
I don't feel like our current rationalist discourse patterns are sufficient for this combo of high stakes, political conflict, and epistemi...
Seems like different AI alignment perspectives sometimes are about "which thing seems least impossible."
Straw MIRI researchers: "building AGI out of modern machine learning is automatically too messy and doomed. Much less impossible to try to build a robust theory of agency first."
Straw Paul Christiano: "trying to get a robust theory of agency that matters in time is doomed, timelines are too short. Much less impossible to try to build AGI that listens reasonably to me out of current-gen stuff."
(Not sure if either of these are fair, or if other camps fit this)
(I got nerd-sniped by trying to develop a short description of what I do. The following is my stream of thought)
+1 to replacing "build a robust theory" with "get deconfused," and with replacing "agency" with "intelligence/optimization," although I think it is even better with all three. I don't think "powerful" or "general-purpose" do very much for the tagline.
When I say what I do to someone (e.g. at a reunion) I say something like "I work in AI safety, by doing math/philosophy to try to become less confused about agency/intelligence/optimization." (I don't think I actually have said this sentence, but I have said things close.)
I specifically say it with the slashes and not "and," because I feel like it better conveys that there is only one thing that is hard to translate, but could be translated as "agency," "intelligence," or "optimization."
I think it is probably better to also replace the word "about" with the word "around" for the same reason.
I wish I had a better word for "do." "Study" is wrong. "Invent" and "discover" both seem wrong, because it is more like "invent/discover", but that feels like it is overusing the slashes. Maybe "develop"? I think I like "invent" best. (Note...
Using "cruxiness" instead of operationalization for predictions.
One problem with making predictions is "operationalization." A simple-seeming prediction can have endless edge cases.
For personal predictions, I often think it's basically not worth worrying about it. Write something rough down, and then say "I know what I meant." But, sometimes this is actually unclear, and you may be tempted to interpret a prediction in a favorable light. And at the very least it's a bit unsatisfying for people who just aren't actually sure what they meant.
One advantage of cruxy predictions (aside from "they're actually particularly useful in the first place") is that if you know what decision a prediction was a crux for, you can judge ambiguous resolution based on "would this actually have changed my mind about the decision?"
("Cruxiness instead of operationalization" is a bit overly click-baity. Realistically, you need at least some operationalization, to clarify for yourself what a prediction even means in the first place. But, I think maybe you can get away with more marginal fuzziness if you're clear on how the prediction was supposed to inform your decisionmaking)
My personal religion involves two* gods – the god of humanity (who I sometimes call "Humo") and the god of the robot utilitarians (who I sometimes call "Robutil").
When I'm facing a moral crisis, I query my shoulder-Humo and my shoulder-Robutil for their thoughts. Sometimes they say the same thing, and there's no real crisis. For example, some naive young EAs try to be utility monks, donate all their money, never take breaks, only do productive things... but Robutil and Humo both agree that quality intellectual work requires slack and psychological health. (Both to handle crises and to notice subtle things, which you might need, even in emergencies)
If you're an aspiring effective altruist, you should definitely at least be doing all the things that Humo and Robutil agree on. (i.e. get to the middle point of Tyler Alterman's story here).
But Humo and Robutil in fact disagree on some things, and disagree on emphasis.
They disagree on how much effort you should spend to avoid accidentally recruiting people you don't have much use for.
They disagree on how many high schoolers it's acceptable to accidentally fuck up psychologically, while you experiment with a new program to...
I’ve noticed myself using “I’m curious” as a softening phrase without actually feeling “curious”. In the past 2 weeks I’ve been trying to purge that from my vocabulary. It often feels like I'm cheating, trying to pretend like I'm being a friend when actually I'm trying to get someone to do something. (Usually this is a person I'm working with, and it's not quite adversarial – we're on the same team – but it feels like it degrades the signal of true open curiosity.)
Hmm, sure seems like we should deploy "tagging" right about now, mostly so you at least have the option of the frontpage not being All Coronavirus All The Time.
So there was a drought of content during Christmas break, and now... abruptly... I actually feel like there's too much content on LW. I find myself skimming down past the "new posts" section because it's hard to tell what's good and what's not and it's a bit of an investment to click and find out.
Instead I just read the comments, to find out where interesting discussion is.
Now, part of that is because the front page makes it easier to read comments than posts. And that's fixable. But I think, ultimately, the deeper issue is with the main unit-of-contribution being The Essay.
A few months ago, mr-hire said (on writing that provokes comments)
Ideas should become comments, comments should become conversations, conversations should become blog posts, blog posts should become books. Test your ideas at every stage to make sure you're writing something that will have an impact.
This seems basically right to me.
In addition to comments working as an early proving ground for an idea's merit, comments make it easier to focus on the idea, instead of getting wrapped up in writing something Good™.
I notice essays on the front page starting with flo...
Is... there compelling difference between stockholm syndrome and just, like, being born into a family?
I notice that academic papers have stupidly long, hard-to-read abstracts. My understanding is that this is because there is some kind of norm about papers having the abstract be one paragraph, while the word-count limit tends to be... much longer than a paragraph (250 - 500 words).
Can... can we just fix this? Can we either say "your abstract needs to be a goddamn paragraph, which is like 100 words", or "the abstract is a cover letter that should be about one page long, and it can have multiple linebreaks and it's fine."
(My guess is that the best equilibrium is "people keep doing the thing currently-called-abstracts, and start treating them as 'has to fit on one page', with paragraph breaks, and then also people start writing a 2-3 sentence thing that's more like 'the single actual paragraph you'd read if you were skimming through a list of papers.'")
I had a very useful conversation with someone about how and why I am rambly. (I rambled a lot in the conversation!).
Disclaimer: I am not making much effort to not ramble in this post.
A couple takeaways:
1. Working Memory Limits
One key problem is that I introduce so many points, subpoints, and subthreads, that I overwhelm people's working memory (where human working memory limits are roughly "4-7 chunks").
It's sort of embarrassing that I didn't concretely think about this before, because I've spent the past year SPECIFICALLY thinking about working memory limits, and how they are the key bottleneck on intellectual progress.
So, one new habit I have is "whenever I've introduced more than 6 points to keep track of, stop and figure out how to condense the working tree of points down to <4."
(Ideally, I also keep track of this in advance and word things more simply, or give better signposting for what overall point I'm going to make, or why I'm talking about the things I'm talking about)
...
2. I just don't finish sente
I frequently don't finish sentences, whether in person voice or in text (like emails). I've known this for awhile, although I kinda forgot recently. I switch abruptly to a
...[not trying to be comprehensible to people that don't already have some conception of Kegan stuff. I acknowledge that I don't currently have a good link that justifies Kegan stuff within the LW paradigm very well]
Last year someone claimed to me that a problem with Kegan is that there really are at least 6 levels. The fact that people keep finding themselves self-declaring as "4.5" should be a clue that 4.5 is really a distinct level. (the fact that there are at least two common ways to be 4.5 also is a clue that the paradigm needs clarification)
My garbled summary of this person's conception is:
Previously, I had felt something like "I basically understand level 5 fine AFAICT, but maybe don't have the skills to do so fluidly. I can imagine there bei
...After a recent 'doublecrux meetup' (I wasn't running it but observed a bit), I was reflecting on why it's hard to get people to sufficiently disagree on things in order to properly practice doublecrux.
As mentioned recently, it's hard to really learn doublecrux unless you're actually building a product that has stakes. If you just sorta disagree with someone... I dunno you can do the doublecrux loop but there's a sense where it just obviously doesn't matter.
But, it still sure is handy to have practiced doublecruxing before needing to do it in an important situation. What to do?
Two options that occur to me are
[note: I haven't actually talked much with the people whose major focus is teaching doublecrux, not sure how much of this is old hat, or if there's a totally different approach that sort of invalidates it]
SingleCruxing
One challenge about doublecrux practice is that you have to find something you have strong opinions about and also someone else has strong opinions about. So.....
I notice some people go around tagging posts with every tag that possibly seems like it could fit. I don't think this is a good practice – it results in an extremely overwhelming and cluttered tag-list, which you can't quickly skim to figure out "what is this post actually about?", and I roll to disbelieve on "stretch-tagging" actually helping people who are searching tag pages.
I just briefly thought you could put a bunch of AI researchers on a spaceship, and accelerate it real fast, and then they get time dilation effects that increase their effective rate of research.
Then I remembered that time dilation works the other way 'round – they'd get even less time.
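(To spell out the physics I was forgetting, with $v$ as the ship's speed and $t$ as the Earth-frame elapsed time: the researchers on board only experience the proper time

$$\tau = t\sqrt{1 - \frac{v^2}{c^2}} < t,$$

so the people you accelerate get *less* subjective research time per Earth-year, not more.)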
This suggested a much less promising plan of "build narrowly aligned STEM AI, have it figure out how to efficiently accelerate the Earth real fast and... leave behind a teeny moon base of AI researchers who figure out the alignment problem."
Man, I watched The Fox and The Hound a few weeks ago. I cried a bit.
While watching the movie, a friend commented "so... they know that foxes are *also* predators, right?" and, yes. They do. This is not a movie that was supposed to be about predation except it didn't notice all the ramifications of its lesson. This movie just isn't taking a stand about predation.
This is a movie about... kinda classic de-facto tribal morality. Where you have your family and your tribe and a few specific neighbors/travelers that you welcomed into your home. Those are your people, and the rest of the world... it's not exactly that they aren't *people*, but, they aren't in your circle of concern. Maybe you eat them sometimes. That's life.
Copper the hound dog's ingroup isn't even very nice to him. His owner, Amos, leaves him out in a crate on a rope. His older dog friend is sort of mean. Amos takes him out on a hunting trip and teaches him how to hunt, conveying his role in life. Copper enthusiastically learns. He's a dog. He's bred to love his owner and be part of the pack no matter what.
My dad once commented that this was a movie that... seemed remarkably realistic about what you can expect from ani...
Sometimes the subject of Kegan Levels comes up and it actually matters a) that a developmental framework called "kegan levels" exists and is meaningful, b) that it applies somehow to The Situation You're In.
But, almost always when it comes up in my circles, the thing under discussion is something like "does a person have the ability to take their systems as object, move between frames, etc." And AFAICT this doesn't really need to invoke developmental frameworks at all. You can just ask if a person has the "move between frames" skill.*
This still suffers a bit from the problem where, if you're having an argument with someone, and you think the problem is that they're lacking a cognitive skill, it's a dicey social move to say "hey, your problem is that you lack a cognitive skill." But, this seems a lot easier to navigate than "you are a Level 4 Person in this 5 Level Scale".
(I have some vague sense that Kegan 5 is supposed to mean something more than "take systems as object", but no one has made a great case for this yet, and in any case it hasn't been the thing I'm personally running into)
There's a problem at parties where there'll be a good, high-context conversation happening, and then one-too-many-people join, and then the conversation suddenly dies.
Sometimes this is fine, but other times it's quite sad.
Things I think might help:
I'm not sure why it took me so long to realize that I should add a "consciously reflect on why I didn't succeed at all my habits yesterday, and make sure I don't fail tomorrow" to my list of daily habits, but geez it seems obvious in retrospect.
Strategic use of Group Houses for Community Building
(Notes that might one day become a blogpost. Building off The Relationship Between the Village and the Mission. Inspired to go ahead and post this now because of John Maxwell's "how to make money reducing loneliness" post, which explores some related issues through a more capitalist lens)
Lately I've been noticing myself getting drawn into more demon-thready discussions on LessWrong. This is in part due to UI choice – demon threads (i.e. usually "arguments framed through 'who is good and bad and what is acceptable in the overton window'") are already selected for getting above-average engagement. Any "neutral" sorting mechanism for showing recent comments is going to reward demon-threads disproportionately.
An option might be to replace the Recent Discussion section with a version of itself that only shows comments and posts from the Questions page (in particular for questions that were marked as 'frontpage', i.e. questions that are not about politics).
I've had some good experiences with question-answering, where I actually get into a groove where the thing I'm doing is actual object-level intellectual work rather than "having opinions on the internet." I think it might be good for the health of the site for this mode to be more heavily emphasized.
In any case, I'm interested in making a LW Team internal option where the mods can opt into a "replace recent discussion with recent question act...
I still want to make a really satisfying "fuck yeah" button on LessWrong comments that feels really good to press when I'm like "yeah, go team!" but doesn't actually mean I want to reward the comment in our longterm truthtracking or norm-tracking algorithms.
I think this would seriously help with weird sociokarma cascades.
Can democracies (or other systems of government) do better by more regularly voting on meta-principles, but having those principles come into effect N years down the line, where N is long enough that the current power structures have less clarity over who would benefit from the change?
Some of the discussion on Power Buys You Distance From the Crime notes that campaigning to change meta principles can't actually be taken at face value (or at least, people don't take it at face value), because it can be pretty obvious who would benefit from a particular meta principle. (If the king is in power and you suggest democracy, obviously the current power structure will be weakened. If people rely on Gerrymandering to secure votes, changing the rules on Gerrymandering clearly will have an impact on who wins next election)
But what if people voted on changing rules for Gerrymandering, and the rules wouldn't kick in for 20 years. Is that more achievable? Is it better or worse?
The intended benefit is that everyone might roughly agree it's better for the system to be more fair, but not if that fairness will clearly directly cost them. If a rule change occurs far enough in the...
Musings on ideal formatting of posts (prompted by argument with Ben Pace)
1) Working memory is important.
If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head.
2) Less Wrong is for thinking
This is a place where I particularly want to read complex arguments and hold them in my head and form new conclusions or actions based on them, or build upon them.
3) You can expand working memory with visual reference
Having larger monitors or notebooks to jot down thoughts makes it easier to think.
The larger font-size of LW main posts works against this currently, since there are fewer words on the screen at once and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain).
But regardless of font-size:
4) Optimizing a post for re-skimmability makes it easier to refer to.
This is why, when I write posts, I make an effort to bold the key points, and break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Su...
New concept for my "qualia-first calibration" app idea that I just crystallized. The following are all the same "type":
1. "this feels 10% likely"
2. "this feels 90% likely"
3. "this feels exciting!"
4. "this feels confusing :("
5. "this is coding related"
6. "this is gaming related"
All of them are a thing you can track: "when I observe this, my predictions turn out to come true N% of the time".
Numerical probabilities are merely a special case (tho they still get additional tooling, since it's easier to visualize graphs and calculate Brier scores for them)
And then a major goal of the app is to come up with good UI to help you visualize and compare results for the "non-numeric-qualia".
Depending on circumstances, "this feels confusing" might matter way more to your prior than "this feels 90% likely". (I'm guessing there is some actual conceptual/mathy work that would need doing to build the mature version of this)
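A minimal sketch of the data model I'm imagining (all names are placeholders, nothing here is settled): each prediction carries whatever qualia-tags were live when you made it, resolving predictions lets the app compute a hit rate per tag, and numeric probabilities are just the special case that also gets a Brier score.

```typescript
// Hypothetical data model for the "qualia-first calibration" idea.
type Qualia = string; // e.g. "feels 90% likely", "feels confusing", "coding related"

interface Prediction {
  statement: string;
  qualia: Qualia[];        // whatever tags were live when the prediction was made
  probability?: number;    // optional: numeric probabilities are just a special case
  resolvedTrue?: boolean;  // filled in later, when the prediction resolves
}

// Hit rate per qualia tag: "when I observe this, predictions come true N% of the time."
function hitRateByQualia(predictions: Prediction[]): Map<Qualia, number> {
  const counts = new Map<Qualia, { hits: number; total: number }>();
  for (const p of predictions) {
    if (p.resolvedTrue === undefined) continue; // skip unresolved predictions
    for (const q of p.qualia) {
      const c = counts.get(q) ?? { hits: 0, total: 0 };
      c.total += 1;
      if (p.resolvedTrue) c.hits += 1;
      counts.set(q, c);
    }
  }
  const rates = new Map<Qualia, number>();
  for (const [q, c] of counts) rates.set(q, c.hits / c.total);
  return rates;
}

// Brier score for the subset of predictions that also carry numeric probabilities.
function brierScore(predictions: Prediction[]): number {
  const scored = predictions.filter(
    (p) => p.probability !== undefined && p.resolvedTrue !== undefined
  );
  const total = scored.reduce(
    (sum, p) => sum + (p.probability! - (p.resolvedTrue ? 1 : 0)) ** 2,
    0
  );
  return total / scored.length;
}
```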
"Can we build a better Public Doublecrux?"
Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format.
Public Doublecrux is a more truthseeking oriented version of Public Debate. (The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice versa")
Reasons to want to do _public_ doublecrux include:
Historically I think public doublecruxes have had some problems:
Two interesting observations from this week, while interviewing people about their metacognitive practices.
Both of these are interesting because they hint at a skill of "rapid memorization => improved working memory".
@gwern has previously written about Dual N Back not actually working...
I think a bunch of discussion of acausal trade might be better framed as "simulation trade." It's hard to point to "acausal" trade in the real world because, well, everything is at least kinda iterated and at least kinda causally connected. But, there's plenty of places where the thing you're doing is mainly trading with a simulated partner. And this still shares some important components with literal-galaxy-brains making literal acausal trade.
So, AFAICT, rational!Animorphs is the closest thing CFAR has to publicly available documentation. (The characters do a lot of focusing, hypothesis generation-and-pruning. Also, I just got to the Circling Chapter)
I don't think I'd have noticed most of it if I wasn't already familiar with the CFAR material though, so not sure how helpful it is. If someone has an annotated "this chapter includes decent examples of Technique/Skill X, and examples of characters notably failing at Failure Mode Y", that might be handy.
In response to lifelonglearner's comment I did some experimenting with making the page a bit bolder. Curious what people think of this screenshot where "unread" posts are bold, and "read" posts are "regular" (as opposed to the current world, where "unread" posts are "regular", and read posts are light-gray).
Issues with Upvoting/Downvoting
We've talked in the past about making it so that if you have Karma Power 6, you can choose whether to give someone anywhere from 1-6 karma.
Upvoting
I think this is an okay solution, but I also think all meaningful upvotes basically cluster into two choices:
A. "I think this person just did a good thing I want to positively reinforce"
B. "I think this person did a thing important enough that everyone should pay attention to it."
For A, I don't think it obviously matters that you award more than 1 karm...
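As a toy sketch of how those two choices could translate into mechanics (hypothetical names, just an illustration of the distinction rather than a concrete proposal):

```typescript
// Hypothetical: the two meaningful upvote types described above, mapped to karma deltas.
type UpvoteKind =
  | "reinforce"  // A: "good thing, I want to positively reinforce it"
  | "promote";   // B: "important enough that everyone should pay attention"

function karmaDelta(kind: UpvoteKind, voterKarmaPower: number): number {
  // "reinforce" is a small, flat signal; "promote" spends the voter's full karma power.
  return kind === "reinforce" ? 1 : voterKarmaPower;
}
```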
I think learning-to-get-help is an important, often underdeveloped skill. You have to figure out what *can* be delegated. In many cases you may need to refactor your project such that it's in-principle possible to have people help you.
Some people I know have tried consciously developing it by taking turns being a helper/manager. i.e. spend a full day trying to get as much use out of another person as you can. (i.e. on Saturday, one person is the helper. The manager does the best they can to ask the helper for help... in ways that will actually help. O...
With some frequency, LW gets a new user writing a post that's sort of... in the middle of having their mind blown by the prospect of quantum immortality and MWI. I'd like to have a single post to link them to that makes a fairly succinct case for "it adds up to normality", and I don't have a clear sense of what to do other than link to the entire Quantum Physics sequence.
Any suggestions? Or, anyone feel like writing said post if it doesn't exist yet?
Draft/WIP: The Working Memory Hypothesis re: Intellectual Progress
Strong claim, medium felt
So I'm working with the hypothesis that working memory (or something related) is a major bottleneck on progress within a given field. This has implications for what sort of things fields need.
Basic idea is that you generally need to create new concepts out of existing sub-concepts. You can only create a concept if you can hold the requisite sub-concepts in your head at once. The default working memory limit is 4-7 chunks. You can expand that somewhat by writing thi...
How (or under what circumstances), can people talk openly about their respective development stages?
A lot of mr-hire's recent posts (and my own observations and goals) have updated me on the value of having an explicit model of development stages. Kegan levels are one such frame. I have a somewhat separate frame of "which people I consider 'grown up'" (i.e. what sort of things they take responsibility for and how much that matters)
Previously, my take had been "hmm, it seems like people totally do go through development stages,...
I am very confused about how to think (and feel!) about willpower, and about feelings of safety.
My impression from overviews of the literature is something like "The depletion model of willpower is real if you believe it's real. But also it's at least somewhat real even if you don't?"
Like, doing cognitive work costs resources. That seems like it should just be true. But your stance towards your cognitive work affects what sort of work you are doing.
Similarly, I have a sense that physiological responses to potentially threatening si...
People who feel defensive have a harder time thinking in truthseeking mode rather than "keep myself safe" mode. But, it also seems plausibly-true that if you naively reinforce feelings of defensiveness they get stronger. i.e. if you make saying "I'm feeling defensive" a get out of jail free card, people will use it, intentionally or no.
As someone who's been a large proponent of the "consider feelings of safety" POV, I want to loudly acknowledge that this is a thing, and it is damaging to all parties.
I don't have a good solution to this. One possibility is insisting on things that facilitate safety even if everyone is saying they're fine.
People who feel defensive have a harder time thinking in truthseeking mode rather than "keep myself safe" mode. But, it also seems plausibly-true that if you naively reinforce feelings of defensiveness they get stronger. i.e. if you make saying "I'm feeling defensive" a get out of jail free card, people will use it, intentionally or no
Emotions are information. When I feel defensive, I'm defending something. The proper question, then, is "what is it that I'm defending?" Perhaps it's my sense of self-worth, or my right to exist as a person, or my status, or my self-image as a good person. The follow-up is then "is there a way to protect that and still seek the thing we're after?" "I'm feeling defensive" isn't a "'get out of jail free' card", it's an invitation to go meta before continuing on the object level. (And if people use "I'm feeling defensive" to accomplish this, that seems basically fine? "Thank you for naming your defensiveness, I'm not interested in looking at it right now and want to continue on the object level if you're willing to or else end the conversation for now" is also a perfectly valid response to defensiveness, in my world.)
I'm currently pretty torn between:
The disagreements about "combat vs collaboration" and other related frames do seem to have real, important things to resol...
Something I haven't actually been clear on re: your opinions:
If LW ended up leaning hard into Archipelago, and if we did something like "posts can be either set to 'debate' mode, or 'collaborative' mode, or there are epistemic statuses indicating things like 'this post is about early stage brainstorming' vs 'this post is ready to be seriously critiqued'",
Does that actually sound good to you?
My model of you was worried that that sort of thing could well result in horrible consequences (via giving bad ideas the ability to gain traction).
(I suppose you might believe that, but still think it's superior to the status quo of 'sorta kinda that but much more confusingly')
I think a core disagreement here has less to do with collaborative vs debate. Ideas can, and should, be subjected to extreme criticism within a collaborative frame.
My disagreement with your claim is more about how intellectual progress works. I strongly believe you need several stages, with distinct norms. [Note: I'm not sure these stages listed are exactly right, but think they point roughly in the right direction]
1. Early brainstorming, shower thoughts, and play.
2. Refining brainstormed ideas into something coherent enough to be evaluated
3. Evaluating, and iterating on, those ideas. [It's around this stage that I think comments like the ones I archetypically associate with you become useful]
4. If an idea seems promising enough to do rigorously check (i.e. something like 'do real science, spending thousands or millions of dollars to run experiments), figure out how to do that. Which is complicated enough that it's its own step, separate from....
5. Do real science (note: this section is a bit different for things like math and philosophy)
6. If the experiments disconfirm the idea (or, if an earlier stage truncated the idea before you got to the "real scien...
I notice that I'm increasingly confused that Against Malaria Foundation isn't just completely funded.
It made sense a few years ago. By now – things like Gates Foundation seem like they should be aware of it, and that it should do well on their metrics.
It makes (reasonable-ish) sense for Good Ventures not to fully fund it themselves. It makes sense for EA folk to either not have enough money to fully fund it, or to end up valuing things more complicated than AMF. But it seems like there should be enough rich people and governments for whom "end malaria" is a priority that the $100 million or so should have been covered, and it should just be done by now.
What's up with that?
My understanding is that Against Malaria Foundation is a relatively small player in the space of ending malaria, and it's not clear the funders who wish to make a significant dent in malaria would choose to donate to AMF.
One of the reasons GiveWell chose AMF is that there's a clear marginal value of small donation amounts in AMF's operational model -- with a few extra million dollars they can finance bednet distribution in another region. It's not necessarily that AMF itself is the most effective charity to donate to to end malaria -- it's just the one with the best proven cost-effectiveness for donors at the scale of a few million dollars. But it isn't necessarily the best opportunity for somebody with much larger amounts of money who wants to end malaria.
For comparison:
Check out Gates's April 2018 speech on the subject. Main takeaway: bednets started becoming less effective in 2016, and they're looking at different solutions, including gene drives to wipe out mosquitoes, which is a solution unlikely to require as much maintenance as bed nets.
[cn: spiders I guess?]
I just built some widgets for the admins on LW, so that posts by newbies and reported comments automatically show up in a sidebar where moderators automatically have to pay attention to them, approving or deleting them or sometimes taking more complicated actions.
And... woahman, it's like shining a flashlight into a cave that you knew was going to be kinda gross, but you weren't really prepared for a million spiders to suddenly be illuminated. The underbelly of LW, posts and comments you don't even see anymore because we insta...
Have you used the LessWrong Concepts page, or generally used our tagging/wiki features? I'm curious to hear about your experience.
I'm particularly interested in people who read content from them, rather than people who contribute content to them. How do you use them? Do you wish you could get more value out of them?
One concrete skill I gained from my 2 weeks of Thinking Physics problems was:
This doesn't seem very novel ("break a problem down into simpler problems" is a pretty canonical tool). But I felt l...
Theory that Jimrandomh was talking about the other day, which I'm curious about:
Before social media, if you were a nerd on the internet, the way to get interaction and status was via message boards / forums. You'd post a thing, and get responses from other people who were filtered for being somewhat smart and confident enough to respond with a text comment.
Nowadays, generally most people post things on social media and then get much more quickly rewarded via reacts, based on a) a process that is more emotional than routed-through-verbal-centers, and b) you...
The latest magic set has… possibly the subtlest, weirdest take on the Magic color wheel so far. The 5 factions are each a different college within a magical university, each an enemy-color-pair.
The most obvious reference here is Harry Potter. And in Harry Potter, the houses map (relatively) neatly to various magic colors, or color pairs.
Slytherin is basically canonical MTG Black. Gryffindor is basically Red. Ravenclaw is basically blue. Hufflepuff sort of green/white. There are differences between Hogwarts houses and Magic colors, but they are aspiring to ...
After starting up PredictionBook, I've noticed I'm underconfident at 60% (I get 81% of my 60% predictions right) and overconfident at 70% (I only get 44% of those right).
This is neat... but I'm not quite sure what I'm actually supposed to do. When I'm forming a prediction, often the exact number feels kinda arbitrary. I'm worried that if I try to take into account my under/overconfidence, I'll end up sort of gaming the system rather than learning anything (i.e. looking for excuses to shove my confidence into a bucket that is currently over/underconfident, rather than actually learning "when I feel X subjectively, that corresponds to Y actual confidence").
Curious if folk have suggestions.
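For reference, the check being described is simple to state: bucket your predictions by stated confidence and compare that number to the observed hit rate. A minimal sketch in Python (made-up data; not PredictionBook's actual code):

```python
# Minimal calibration check: group predictions by stated confidence and
# compare the stated confidence to the fraction that actually came true.
from collections import defaultdict

# Hypothetical data: (stated_confidence, resolved_true) pairs.
predictions = [(0.6, True), (0.6, True), (0.6, False), (0.7, False), (0.7, True)]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[round(confidence, 1)].append(outcome)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    if hit_rate > confidence:
        verdict = "underconfident"   # e.g. 60% predictions coming true 81% of the time
    elif hit_rate < confidence:
        verdict = "overconfident"    # e.g. 70% predictions coming true 44% of the time
    else:
        verdict = "calibrated"
    print(f"{confidence:.0%} bucket: {hit_rate:.0%} correct "
          f"({len(outcomes)} predictions) -> {verdict}")
```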
Someone recently mentioned that strong-upvotes have a particular effect in demon-thread-y comment sections, where if you see a Bad Comment, and that comment has 10 karma, you might think "aaah! the LessWrong consensus is that a Bad Comment is in fact Good! And this must be defended against."
When, in fact, 10 karma might be, like, one person strong-upvoting a thing.
This was a noteworthy point. I think strong upvotes usually roughly do their job in most cases, but once things turn "contested" they quickly become applause/boo lights in a political struggle. And it might be worth looking into ways to specifically curtail their influence in that case somehow.
If I had a vote, I'd vote for getting rid of strong votes altogether. Here's another downside from my perspective: I actually don't like getting strong upvotes on my comments, because if that person didn't do a strong upvote, in most cases others would eventually (weakly) upvote that comment to around the same total (because people don't bother to upvote if they think the comment's karma is already what it deserves), and (at least for me) it feels more rewarding and more informative to know that several people upvoted a comment than to know that one person strongly upvoted a comment.
Also strong upvotes always make me think "who did that?", which is pointless because it's too hard to guess based on the available information but I can't help myself. (Votes that are 3 points also make me think this.) (I've complained about this before, but from the voter perspective as opposed to the commenter perspective.) I think I'd be happier if everyone just had either 1 or 2 point votes.
Yep, it didn't seem worth the cost of the chilling effects that were discussed in this thread.
After this week's stereotypically sad experience with the DMV....
(spent 3 hours waiting in lines, filling out forms, finding out I didn't bring the right documentation, going to get the right documentation, taking a test, finding out somewhere earlier in the process a computer glitched and I needed to go back and start over, waiting more, finally getting to the end only to learn I was also missing another piece of identification which rendered the whole process moot)
...and having just looked over a lot of 2018 posts investigating coordination failure...
...I don't know of a principled way to resolve roommate-things like "what is the correct degree of cleanliness", and this feels sad.
You can't say "the correct amount is 'this much'", because, well, there isn't actually an objectively correct degree of cleanliness.
If you say 'eh, there are no universal truths, just preferences, and negotiation', you incentivize people to see a lot of interactions as transactional and adversarial that don't actually need to be. It also seems to involve exaggerating and/or d...
I think there's a preformal / formal / post-formal thing going on with Double Crux.
My impression is the CFAR folk who created the doublecrux framework see it less as a formal process you should stick to, and more as a general set of guiding principles. The formal process is mostly there to keep you oriented in the right direction.
But I see people (sometimes me) trying to use it as a rough set of guiding principles, and then easily slipping back into all the usual failure modes of not understanding each other, or not really taking seriously the possibi...
Counterfactual revolutions are basically good, revolutions are basically bad
(The political sort of revolution, not the scientific sort)
Possible UI:
What if the RecentDiscussion section specifically focused on comments from old posts, rather than posts which currently appear in Latest Posts. This might be useful because you can already see updates to current discussions (since comments turn green when unread, and/or comment counts go up), but can't easily see older comments.
(You could also have multiple settings that handled this differently, but I think this might be a good default setting to ensure comments on old posts get a bit more visibility)
Weird thoughts on 'shortform'
1) I think most of the value of shortform is "getting started writing things that turn out to just be regular posts, in an environment that feels less effortful."
2) relatedly, "shortform" isn't quite the right phrase, since a lot of things end up being longer. "Casual" or "Off-the-cuff" might be better?
(epistemic status: off the cuff, maybe rewriting this as a post later. Haven't discussed this with other site admins)
In writing Towards Public Archipelago, I was hoping to solve a couple problems:
Idea: moderation by tags. People (meaning users themselves, or mods) could tag comments with things like #newbie-question, #harsh-criticism, #joke, etc., then readers could filter out what they don't want to see.
To save everyone else some time, here's the relevant graph, basically showing that the number of comments has remained fairly constant for the past 4 months at least (while a different graph showed traffic as rising, suggesting ESRog's hypothesis seems true)
Is there a good LLM tool that just wraps GPT or Claude with a speech-to-text input and text-to-speech output? I'd like to experiment with having an always-on thinking assistant that I talk out loud to.
I've recently updated on how useful it'd be to have small icons representing users. Previously some people were like "it'll help me scan the comment section for people!" and I was like "...yeah that seems true, but I'm scared of this site feeling like facebook, or worse, LinkedIn."
I'm not sure whether that was the right tradeoff, but, I was recently sold after realizing how space-efficient it is for showing lots of commenters. Like, in slack or facebook, you'll see things like:
This'd be really helpful, esp. in the Quick Takes and Popular comments sections,...
I am fairly strongly against having faces, which I think boot up a lot of social instincts that I disprefer on LessWrong. LessWrong is a space where what matters is which argument is true, not who you like / have relationships with. I think some other sort of unique icon could be good.
I... had a surprisingly good time reading Coinbase's Terms of Service update email?
...We’ve recently updated our User Agreement. To continue using our services and take advantage of our upcoming feature launches, you’ll need to sign in to Coinbase and accept our latest terms.
You can read the entire agreement here. At a glance, here’s what this update means for you:
Easier to Understand: We’ve reorganized and modified our user agreement to make it more understandable and in line with our culture of clear communications.
Clarity on Dispute Resolution: We’ve
This is a response to Zack Davis in the comments on his recent post. It was getting increasingly meta, and I wasn't very confident in my own take, so I'm replying over on my shortform.
...OP is trying to convey a philosophical idea (which could be wrong, and whose wrongness would reflect poorly on me, although I think not very poorly, quantitatively speaking) about "true maps as a Schelling point." (You can see a prelude to this in the last paragraph of a comment of mine from two months ago.)
I would have thought you'd prefer that I avoid trying to apply the ph
The 2018 Long Review (Notes and Current Plans)
I've spent much of the past couple years pushing features that help with the early stages of the intellectual-pipeline – things like shortform, and giving authors moderation tools that let them have the sort of conversation they want (which often is higher-context, and assuming a particular paradigm that the author is operating in)
Early stage ideas benefit from a brainstorming, playful, low-filter environment. I think an appropriate metaphor for those parts of LessWrong are "a couple people in a research depart
...I feel a lot of unease about the sort of binary "Is this good enough to be included in canon" measure.
I have an intuition that making a binary cutoff point tied to prestige leads to one of two equilibria:
1. You choose a very objective metric (p < .05) and then you end up with Goodharting.
2. You choose a much more subjective process, and this leads either to the measure being more about prestige than actual goodness – making the process highly political, as much about who is and who isn't being honored as about the actual thing it's trying to measure (Oscars, Nobel Prizes) – or to a gradual lowering of standards as edge cases keep lowering the bar imperceptibly over time (grade inflation, 5-star rating systems).
Furthermore, I think a binary system is quite antithetical to how intellectual progress and innovation actually happen, which are much more about a gradual lowering of uncertainty and raising of usefulness, than a binary realization after a year that this thing is useful.
I know I'll go to programmer hell for asking this... but... does anyone have a link to a github repo that tried really hard to use jQuery to build their entire website, investing effort into doing some sort of weird 'jQuery based components' thing for maintainable, scalable development?
People tell me this can't be done without turning into terrifying spaghetti code but I dunno I feel sort of like the guy in this xkcd and I just want to know for sure.
I've lately been talking a lot about doublecrux. It seemed good to note some updates I'd also made over the past few months about debate.
For the past few years I've been sort of annoyed at debate because it seems like it doesn't lead people to change their opinions – instead, the entire debate framework seems more likely to prompt people to try to win, meanwhile treating arguments as soldiers and digging in their heels. I felt some frustration at the Hanson/Yudkowsky Foom Debate because huge amounts of digital ink were spilled, and neit...
I guess it was mostly just the basic idea that the point of a debate isn't necessarily for the debaters to reach agreement or to change each other's mind, but to produce unbiased information for a third party. (Which may be obvious to some but kind of got pushed out of my mind by the "trying to reach agreement" framing, until I read the Debate paper.) These quotes from the paper seem especially relevant:
Our hypothesis is that optimal play in this game produces honest, aligned information far beyond the capabilities of the human judge.
Despite the differences, we believe existing adversarial debates between humans are a useful analogy. Legal arguments in particular include domain experts explaining details of arguments to human judges or juries with no domain knowledge. A better understanding of when legal arguments succeed or fail to reach truth would inform the design of debates in an ML setting.
My review of the CFAR venue:
There is a song that the LessWrong team listened to awhile back, and then formed strong opinions about what was probably happening during the song, if the song had been featured in a movie.
(If you'd like to form your own unspoiled interpretation of the song, you may want to do that now)
...
So, it seemed to us that the song felt like... you (either a single person or small group of people) had been working on an intellectual project.
And people were willing to give the project the benefit of the doubt, a bit, but then you fuck...
Jargon Quest:
There's a kind of extensive double crux that I want a name for. It was inspired by Sarah's Naming the Nameless post, where she mentions Double Cruxxing on aesthetics. You might call it "aesthetic double crux" but I think that might lead to miscommunication.
The idea is to resolve deep disagreements that underlie your entire framing (of the sort Duncan touches on in this post on Punch Buggy. That post is also a reasonable stab at an essay-form version of the thing I'm talking about).
There are a few things that are releva...
We've been getting increasing amounts of spam, and occasionally dealing with Eugins. We have tools to delete them fairly easily, but sometimes they show up in large quantities and it's a bit annoying.
One possible solution is for everyone's first comment to need to be approved. A first stab at the implementation for this would be:
1) you post your comment as normal
2) it comes with a short tag saying "Thanks for joining less wrong! Since we get a fair bit of spam, first comments need to be approved by a moderator, which normally takes [N h...
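A minimal, self-contained sketch of that flow (purely hypothetical – this isn't the actual LW backend, and all the names below are made up for illustration):

```python
# Toy model of "first comments go into a moderation queue before appearing".
comments = []          # all comments, approved or not
moderation_queue = []  # comment ids awaiting moderator review

def submit_comment(author: str, body: str) -> dict:
    is_first = not any(c["author"] == author for c in comments)
    comment = {"id": len(comments), "author": author, "body": body,
               "visible": not is_first}   # first comments start hidden
    comments.append(comment)
    if is_first:
        moderation_queue.append(comment["id"])
        print("Thanks for joining LessWrong! Since we get a fair bit of spam, "
              "first comments need to be approved by a moderator.")
    return comment

def approve(comment_id: int) -> None:
    comments[comment_id]["visible"] = True
    moderation_queue.remove(comment_id)
```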
Recently watched Finding Dory. Rambly thoughts and thorough spoilers to follow.
I watched this because of a review by Ozy a long while ago, noting that the movie is about a character with a mental disability that has major effects on her. And at various key moments in the movie, she finds herself lost and alone, her mental handicap playing a major role in her predicament. And in other movies they might have given her some way to... willpower through her disability, or somehow gain a superpower that makes the disability irrelevant or something.
And instead, she has...
Looking at how facebook automatically shows particular subcomments in a thread, that have a lot of likes/reacts.
And then looking at how LW threads often become huge and unwieldy when there's 100 comments.
At first I was annoyed by that FB mechanic, but it may in fact be a necessary thing for sufficiently large threads, to make it easy to find the good parts.
Social failure I notice in myself: there'll be people at a party I don't know very well. My default assumption is "talk to them with 'feeler-outer-questions' to figure out what they are interested in talking about". (i.e. "what do you do?"/"what's your thing?"/"what have you been thinking about lately?"/"what's something you value about as much as your right pinky?"/"What excites you?").
But this usually produces awkward, stilted conversation. (of the above, I thi...
I really dislike the pinky question for strangers (I think it's fine for people you know, but not ideal). It's an awkward, stilted question and it's not surprising that it produces awkward, stilted responses. Aimed at a stranger it is very clearly "I am trying to start a reasonably interesting conversation" in a way that is not at all targeted to the stranger; that is, it doesn't require you to have seen and understood the stranger at all to say it, which they correctly perceive as alienating.
It works on a very specific kind of person, which is the kind of person who gets so nerdsniped wondering about the question that they ignore the social dynamic, which is sometimes what you want to filter for but presumably not always.
Have you changed your mind about frames or aesthetics?
I'm working on the next post in the "Keep Beliefs Cruxy and Frames Explicit" sequence. I'm not sure if it should be one or two posts. I'm also... noticing that honestly I'm not actually sure what actions to prescribe, and that this is more like a hypothesis and outlining of problems/desiderata.
Two plausible post titles
(I'm currently unsure whether aesthetics are best thought of as a type of frame, or a separate thing)
Honestly, I'm not sure whether I've
...Ben Kuhn's Why and How to Start a For Profit Company Serving Emerging Markets is, in addition to being generally interesting, sort of cute for being two of the canonical Michael Vassar Questions rolled into one, while being nicely operationalized and clear.
("Move somewhere far away and stay their long enough to learn that social reality is arbitrary", and "start a small business and/or startup to a bunch about how pieces of the world fit together" being the two that come easiest to mind)
Random anecdote about time management and life quality. Doesn't exactly have an obvious life lesson.
I use Freedom.to to block lots of sites (I block LessWrong during the morning hours of each day so that I can focus on coding LessWrong :P).
Once upon a time, I blocked the gaming news website Rock/Paper/Shotgun, because it was too distracting.
But a little while later I found that there was a necessary niche in my life of "thing that I haven't blocked on Freedom, that is sort of mindlessly entertaining enough that I can peruse it for awhile when I...
I frequently feel a desire to do "medium" upvotes. Specifically, I want tiers of upvote for:
1) minor social approval (equivalent to smiling at a person when they do something I think should receive _some_ signal of reward, in particular if I think they were following a nice incentive gradient, but where I don't think the thing they were doing was especially important).
2) strong social reward (where I want someone to be concretely rewarded for having done something hard, but I still don't think it's actually so important that it shou...
I have a song gestating, about the "Dream Time" concept (in the Robin Hanson sense).
In the aboriginal mythology, the dreamtime is the time-before-time, when heroes walked the earth, doing great deeds with supernatural powers that allowed them to shape the world.
In the Robin Hanson sense, the dreamtime is... well, still that, but *from the perspective* of the far future.
For most of history, people lived on subsistence. They didn't have much ability to think very far ahead, or to deliberately steer their future much. We live right now in a tim...
Kinda weird meta note: I find myself judging both my posts, and other people's, via how many comments they get. i.e. how much are people engaged. (Not aiming to maximize comments but for some "reasonable number").
However, on a post of mine, my own comments clearly don't count. And on another person's post, if there are a lot of comments but most of them are from the original author, it feels like some kind of red flag. Like they think their post is more important than other people do. (I'm not sure if I endorse this perception...
(Empirically, I post my meta thoughts here instead of in Meta. I think this might actually be fine, but am not sure)
My goal right now is to find (toy, concrete) exercises that somehow reflect the real world complexity of making longterm plans, aiming to achieve unclear goals in a confusing world.
Things that seem important to include in the exercise:
Okay, I'm adding the show "Primal" to my Expanding Moral Cinematic Universe headcanon – movies or shows that feature characters in a harsh, bloody world who inch their little corner of the universe forward as a place where friendship and cooperation can form. Less a sea of blood and violence and mindless replication.
So far I have three pieces in the canon:
1. Primal
2. The Fox and the Hound
3. Princess Mononoke
in roughly ascending order of "how much latent spirit of cooperation exists in the background for the protagonists."
("Walking Dead" is sort of in the sa...
Just rewatched Princess Mononoke, and... I'm finding that this is grounded in the same sort of morality as The Fox And The Hound, but dialed up in complexity a bunch?
The Fox and The Hound is about a moral landscape where you have your ingroup, your ingroup sometimes kills people in the outgroup, and that's just how life is. But occasionally you can make friends with a stranger, and you kinda bring them into your tribe.
Welcoming someone into your home doesn't necessarily mean you're going to take care of them forever, nor go to bat for them as if they were ...
Query: "Grieving" vs "Letting Go"
A blogpost in the works is something like "Grieving/Letting-Go effectively is a key coordination skill."
i.e. when negotiating with other humans, it will often (way more often than you wish) be necessary to give up things that are important to you.
Sometimes this is "the idea that we have some particular relationship that you thought we had."
Sometimes it will be "my pet project that's really important to me."
Sometimes it's "the idea that justice can be served in this particular instance."
A key skill is applying something Ser...
Somewhat tangential, but I sometimes think about the sort of tradeoffs you're talking about in a different emotional/narrative lens, which might help spur other ideas for how to communicate it.
(I'm going to use an analogy from Mother of Learning, spoilers ahead)...
There's this scene in Mother of Learning where the incredibly powerful thousand-year-old lich king realizes he's in some sort of simulation, and that the protagonists are therefore presumably trying to extract information from him. Within seconds of realizing this, without any hesitation or hemming or hawing, he blows up his own soul in an attempt to destroy both himself and the protagonists (at least within the simulation). It's cold calculation: he concludes that he can't win the game, the best available move is to destroy the game and himself with it, and he just does that without hesitation.
That's what it looks like when someone is really good at "letting it go". There's a realization that he can't get everything he wants, a choice about what matters most, followed by ruthlessly throwing whatever is necessary under the bus in order to get what he values most.
The point I want to make here is that "grieving" successfull...
I vaguely recall there being some reasons you might prefer Ranked Choice Voting over Approval voting, but can't easily find them. Anyone remember things off the top of their head?
TFW when you're trying to decide if you're writing one long essay, or a sequence, and you know damn well it'll read better as a sequence but you also know damn well that everyone will really only concentrate all their discussion on one post and it'll get more attention if you make one overly long post than splitting it up nicely.
An interesting thing about Supernatural Fitness (a VR app kinda like Beat Saber) is that they are leaning hard into being a fitness app rather than a game. You don't currently get to pick songs, you pick workouts, which come with pep talks and stretching and warmups.
This might make you go "ugh, I just wanna play a song" and go play Beat Saber instead. But, Supernatural Fitness is _way_ prettier and has some conceptual advances over Beat Saber.
And... I mostly endorse this and think it was the right call. I am sympathetic to "if you give people the ability t...
I've noticed in the past month that I'm really bottlenecked on my lack-of-calibration-training. Over the past couple years I've gotten into the habit of trying to operationalize predictions, but I haven't actually tracked them in any comprehensive way.
This is supposed to be among the more trainable rationality skills, and nowadays it suddenly feels really essential. How long are lockdowns going to last? What's going to happen with coronavirus cases? What's going to happen with various political things going on that might affect me? Will the protests turn o
...Jim introduced me to this song on Beat Saber, and noted: "This is a song about being really good at moral mazes".
I asked "the sort of 'really good at moral mazes' where you escape, or the sort where you quickly find your way the center?" He said "the bad one."
And then I gave it a listen, and geez, yeah that's basically what the song is about.
I like that this Beat Saber map includes something-like-a-literal-maze in the middle where the walls are closing around you. (It's a custom map, not the one that comes from the official DLC)
...Thinking through problems re: Attention Management
Epistemic status: thinking in realtime; I don't promise that this all makes sense.
Default worlds
What questions would be helpful here?
Noticing surprise to help you notice confusion.
Epistemic Status: I was about to write a post on this, and then realized I hadn't actually tried to use this technique that much since coming up with it a year ago. I think this is mostly because I didn't try, rather than because the technique was demonstrably not good (although obviously it wasn't so useful that practicing the skill was self-reinforcing). For now I'm writing a shortform post and giving it a more dedicated effort for the next month.
Eliezer talks about "Noticing Confusion"...
Posts I'm vaguely planning to write someday:
Something I've recently updated heavily on is "Discord/Slack style 'reactions' are super important."
Much moreso than Facebook style reacts, actually.
Discord/Slack style reacts allow you to pack a lot of information into a short space. When coordinating with people "I agree/I disagree/I am 'meh'" are quite important things to be able to convey quickly. A full comment or email saying that takes up way too much brain space.
I'm less confident about whether this is good for LW. A lot of the current LW moderation...
Beeminder, except instead of paying money if you fail, you pay the money when you create your account, and if you fail at your thingy, you can never use the app again.
I notice that I often want to reply to LW posts with a joke, sometimes because it's funny, sometimes just as a way to engage a bit with the post when I liked it but don't otherwise have anything meaningful to say.
I notice that there's some mixed things going on here.
I want LW to be a place for high quality discussion.
I think it's actually pretty bad that comprehensive, high quality posts often get less engagement because there's not much to add or contradict. I think authors generally are more rewarded by comments than by upvotes.
A...
A couple links that I wanted to refer to easily:
This post on Overcoming Bias – a real old Less Wrong progress report, is sort of a neat vantage point on the "interesting what's changed, what's stayed the same."
This particular quote from the comments was helpful orientation to me:
The general rule in groups with reasonably intelligent discussion and community moderation, once a community consensus is reached on a topic, is that
– Agreement with consensus, well articulated, will be voted up strongly
– Disagreement with consensus, well artic...
Some Meta Thoughts on Ziz's Schelling Sequence, and "what kind of writing do I want to see on LW?" [note: if it were possible, I'd like to file this under "exploring my own preferences and curious about others' take" rather than "attempting to move the overton window". Such a thing is probably not actually possible though]
I have a fairly consistent reaction to Ziz posts (as well as Michael Vassar posts, and some Brent Dill posts, among others) which is "this sure is interesting but it involves a lot of effo...
What would a "qualia-first-calibration" app would look like?
Or, maybe: "metadata-first calibration"
The thing with putting probabilities on things is that often, the probabilities are made up. And the final probability throws away a lot of information about where it actually came from.
I'm experimenting with primarily focusing on "what are all the little-metadata-flags associated with this prediction?". I think some of this is about "feelings you have" and some of it is about "what do you actually know about this topic?"
The sort of app I'm imagining would he...
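To gesture at it concretely anyway, here's a hypothetical sketch of the kind of record such an app might store – the final probability plus the metadata flags it came from (the field names are my own invention, purely for illustration):

```python
# Hypothetical prediction record: keep the metadata that generated the number,
# not just the number itself.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Prediction:
    claim: str
    probability: float                                   # the final, somewhat made-up number
    feelings: list[str] = field(default_factory=list)    # e.g. "gut unease", "excited"
    knowledge: list[str] = field(default_factory=list)   # e.g. "read two papers", "pure hearsay"
    resolved: bool | None = None                         # filled in once the claim resolves

p = Prediction(
    claim="Lockdowns in my city last past June",
    probability=0.65,
    feelings=["vague dread", "anchoring on last month's number"],
    knowledge=["local case counts", "one news article"],
)
```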
Anyone know how predictions of less than 50% are supposed to be handled by PredictionBook? I predicted a thing would happen with 30% confidence. It happened. Am I supposed to judge the prediction right or wrong?
It shows me a graph of confidence/accuracy that starts from 50%, and I'm wondering if I'm supposed to be phrasing prediction in such a way that I always list >50% confidence (i.e. I should have predicted that X wouldn't happen, with 70% confidence, rather than that it would, with 30%)
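(For what it's worth, the usual convention is exactly that restatement: any prediction below 50% gets flipped to its complement, so everything lands on the 50–100% axis the graph is drawn over. A minimal sketch of the conversion – my own illustration, not PredictionBook's documented behavior:)

```python
# Restate a sub-50% prediction as its complement so it can be scored on a
# 50-100% calibration axis.
def normalize(statement: str, confidence: float, happened: bool):
    """Return an equivalent prediction with confidence >= 0.5."""
    if confidence < 0.5:
        return (f"NOT ({statement})", 1 - confidence, not happened)
    return (statement, confidence, happened)

# "Thing X will happen" at 30% confidence, and X happened:
print(normalize("Thing X will happen", 0.30, True))
# -> ('NOT (Thing X will happen)', 0.7, False)
# i.e. it gets scored as a 70% prediction that turned out to be wrong.
```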
I'm not sure which of these posts is a subset of the other:
Somewhat delighted to see that google scholar now includes direct links to PDFs when it can find them instead of making you figure out how to use a given journal website.
Some people have reported bugs wherein "you post a top-level comment, and then the comment box doesn't clear (it still displays the text of your comment)." It doesn't happen super reliably. I'm curious if anyone else has seen this recently.
At any given time, is there anything especially wrong with using citation count (weighted by the weightings of other papers' citation counts) as a rough proxy for "what are the most important papers and/or best authors, and with what weights?"
My sense is the thing that's bad about this is that it creates an easy goodhart metric. I can imagine worlds where it's already so thoroughly goodharted that it doesn't signal anything anymore. If that's the case, can you get around that by grounding it out in some number of trusted authors, and purging obviously fraudulent autho...
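The "weighted by the weightings of the papers citing you" idea is essentially eigenvector centrality / PageRank on the citation graph. A toy sketch of the computation (made-up data, just to show the recursive weighting bottoming out via power iteration):

```python
# PageRank-style weighted citation score over a tiny toy citation graph.
import numpy as np

papers = ["A", "B", "C", "D"]
# cites[i][j] = 1 means paper i cites paper j.
cites = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

# Each paper splits its "vote" evenly among the papers it cites.
outgoing = cites.sum(axis=1, keepdims=True)
transition = cites / np.where(outgoing == 0, 1, outgoing)

damping = 0.85
score = np.full(len(papers), 1 / len(papers))
for _ in range(100):
    score = (1 - damping) / len(papers) + damping * transition.T @ score

for name, s in sorted(zip(papers, score), key=lambda x: -x[1]):
    print(f"{name}: {s:.3f}")
```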
An issue in online discourse is "tendency of threads to branch more than they come back together."
Sometimes branching threads are fine, in particular when you're just exploring ideas for fun or out of natural curiosity. But during important disagreements, I notice a tendency in myself to want to address every individual point, when actually I think the thing to do is to figure out which points are most important and focus on those. (I think this is important in part because time is precious.)
I'm wondering if there are UI updates to forum software that
...Meta/UI:
I currently believe it was a mistake to add the "unread green left-border" to posts and comments in the Recent Discussion section – it mostly makes me click a bunch of things just to clear the green, on items I didn't really want to mark as read. Curious if anyone has opinions about that.
Lately I've come to believe in the 3% rate of return rule.
Sometimes, you can self-improve a lot by using some simple hacks, or learning a new thing you didn't know before. You should be on the lookout for such hacks.
But, once you've consumed all the low-hanging fruit, most of what there is to learn involves... just... putting in the work day-in-and-day-out. And you improve so slowly you barely notice. And only when you periodically look back do you realize how far you've come.
It's good to be aware of this, to set expectations.
I...
In Varieties of Argument, Scott Alexander notes:
Sometimes meta-debate can be good, productive, or necessary.... If you want to maintain discussion norms, sometimes you do have to have discussions about who’s violating them. I even think it can sometimes be helpful to argue about which side is the underdog.
But it’s not the debate, and also it’s much more fun than the debate. It’s an inherently social question, the sort of who’s-high-status and who’s-defecting-against-group-norms questions that we like a little too much. If people have to choose between this...
This is an experiment in short-form content on LW2.0. I'll be using the comment section of this post as a repository of short, sometimes-half-baked posts that either:
I ask people not to create top-level comments here, but feel free to reply to comments like you would a FB post.