
LESSWRONG

Quick Takes

Slack and the Sabbath

Some things are fundamentally "out to get you," seeking your time, money and attention. Slack allows margin of error. You can relax. You can explore and pursue opportunities. 
You can plan for the long term. You can stick to principles and do the right thing.

498
Welcome to LessWrong!
Ruby, Raemon, RobertM, habryka
6y
76
159
The Memetics of AI Successionism
Jan_Kulveit
10h
28
299
Paranoia: A Beginner's Guide
habryka
5d
62
Finding Balance & Opportunity in the Holiday Flux [free public workshop]
Berkeley Solstice Weekend
[Today] Introduction to Corrigibility
[Tomorrow] Adoption of lab-grown meat
Elizabeth · 13h · 85 · 19
Thane Ruthenis, Ruby, and 2 more
5
Back in 2020, @Raemon gave me some extremely good advice. @johnswentworth had left some comments on a post of mine that I found extremely frustrating and counterproductive. At the time I had no idea about his body of work, so he was just some annoying guy. Ray, who did know who John was and thought he was doing important work, told me:

Which didn't mean I had to heal the rift with John in particular, but if I was going to make that a policy, then I would need to give up on my goal of having real impact.

John and I did a video call, and it went well. He pointed out a major flaw in my post, and I impressed him by immediately updating once he pointed it out. I still think his original comments displayed status dynamics while sneering at them, and I find that frustrating, but Ray was right that not all factual corrections will be delivered in pleasing forms.
Saul Munn · 13h · 38 · -11
Mateusz Bagiński, TsviBT, and 3 more
7
Long-term melatonin usage for insomniacs was associated with a doubling of all-cause mortality.*

A new article from the American Heart Association seems pretty damning for the safety of long-term melatonin usage.

*Caveats:
1. we just have the abstract, not the full article
2. observational study, not experimental
3. their sample is only of insomniacs, not of the general population

Responses to the caveats:
1. we'll get the full article soon-ish (probably a month or so?)
2. it seems they did quite a bit of controlling? though we won't know how good their controlling was until the full article comes out
3. I can't imagine that the validity of their results for non-insomniacs is many orders of magnitude less than for insomniacs — like, maybe a factor of two or five, but that'd still be a huge effect size

All things considered, this seems like a crazily high effect size. Am I missing something?
J Bostock · 1d · 62 · 43
fx, Jackson Wagner, and 7 more
15
Coefficient Giving is one of the worst name changes I've ever heard:

* Coefficient Giving sounds bad, while OpenPhil sounded cool and snappy.
* "Coefficient" doesn't really mean anything in this context. Clearly it's a pun on "co" and "efficient", but that is also confusing. They say "A coefficient multiplies the value of whatever it's paired with", but that's just true of any number?
* They're a grantmaker who doesn't really advise normal individuals about where to give their money, so why "Giving", when their main thing is soliciting large philanthropic efforts and then auditing them?
* Coefficient Giving doesn't tell you what the company does at the start! "Good Ventures" and "GiveWell" tell you roughly what the company is doing.
* "Coefficient" is a really weird word, so you're burning weirdness points with the literal first thing anyone will ever hear you say. This seems like a name you would only think is good if you're already deep into rat/EA spaces.
* It sounds bad. Open Philanthropy rolls off the tongue, as does OpenPhil: OH-puhn fi-LAN-thruh-pee. Two sets of three. CO-uh-fish-unt GI-ving is an awkward four-two with a half-emphasis on the "fish" of coefficient. Sounds bad.
* I'm coming back to this point, but there is no possible shortening other than "Coefficient", which is bad because it's just an abstract noun and not very identifiable, whereas "OpenPhil" was a unique identifier. CoGive, maybe, but then you have two stressed syllables, which is awkward. It mildly offends my tongue to have to even utter their name.

Clearly OP wanted to shed their existing reputation, but man, this is a really bad name choice.
Dalcy · 3h · 6 · 0
Algon
1
Learning algebraic topology, homotopy always felt like a very intuitive and natural sort of invariant to attach to a space, whereas for homology I don't have anywhere near as close an intuitive handle, or sense of the naturality of the concept, as I do for homotopy. So I tried to collect some frames / results for homology I've learned, to see if it helps convince my intuition that this concept is indeed something natural in mathspace. I'd be very curious to know if there are any other frames or Deeper Answers to "Why homology?" I'm missing:

1. Measuring "holes" of a space
* Singular homology: This is the first example I encountered, which will serve as intuition / motivation for the later abstract definitions.
  * Fixing some notation (feel free to skip this bullet point if you're familiar with it):
    * Let's fix some space X, and recall that our goal is associating to that space an algebraic object invariant under homeomorphism / homotopy equivalence.
    * First, a singular p-simplex is a map σ : Δ^p → X, intuitively representing a simplex living inside the space X. Restricting σ to the i-th face of Δ^p gives a natural map σ^(i) : Δ^(p−1) → X. It is then natural to consider the set {σ^(i)}, i = 0, …, p, as representing the "boundary" of the singular p-simplex σ.
    * To make this last idea more precise, we define singular p-chains: the free abelian group generated by all the singular p-simplices of the space, denoted Δ_p(X). In short, its elements look like (finite) formal sums Σ_σ n_σ · σ over maps σ : Δ^p → X. A singular p-simplex σ is naturally an element of this group via 1·σ ∈ Δ_p(X).
    * This construction, again, is motivated by the boundary idea earlier, since we can now define the boundary of a singular p-simplex σ as the formal sum Σ_{i=0}^{p} σ^(i) ∈ Δ_{p−1}(X).
    * In fact, the boundary of a singular p-simplex σ is actually the signed sum Σ_{i=0}^{p} (−1)^i σ^(i) ∈ Δ_{p−1}(X).
    * Why? Intuition: if we draw these σ^(i) for simple shapes like triangles (so σ : Δ^2 → X, hence σ^(i) : Δ^1 → X, which is identified with
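The boundary construction sketched above can be summarized in standard notation. The identity ∂∘∂ = 0 and the definition of the homology groups are standard results not stated in the take itself, added here for context:

```latex
% Boundary of a singular p-simplex \sigma : \Delta^p \to X,
% where \sigma^{(i)} is the restriction of \sigma to the i-th face:
\partial_p \sigma \;=\; \sum_{i=0}^{p} (-1)^i \, \sigma^{(i)} \;\in\; \Delta_{p-1}(X)

% The alternating signs are chosen precisely so that
% "boundaries have no boundary":
\partial_{p-1} \circ \partial_p \;=\; 0

% which makes the singular homology groups well-defined:
H_p(X) \;=\; \ker \partial_p \,/\, \operatorname{im} \partial_{p+1}
```

The last line is the payoff: ∂∂ = 0 means every boundary is a cycle, so the quotient "cycles modulo boundaries" makes sense, and it is exactly what measures the "holes" of the space.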
LWLW · 17h · 23 · 0
Caleb Biddulph, Person
4
Is Intology a legitimate research lab? Today they talked about having an AI researcher that performed better than humans on RE-Bench at 64-hour time horizons. This seems really unbelievable to me. The AI system is called Locus.
Raemon · 2d · 44 · 38
Leon Lang, habryka, and 4 more
7
I feel so happy that "what's your crux?" / "is that cruxy" is common parlance on LW now, it is a meaningful improvement over the prior discourse. Thank you CFAR and whoever was part of the generation story of that.
Richard_Ngo · 1d · 24 · 0
Lukas Finnveden, Nate Showell, and 8 more
10
Error-correcting codes work by running some algorithm to decode potentially-corrupted data. But what if the algorithm might also have been corrupted? One approach to dealing with this is triple modular redundancy, in which three copies of the algorithm each do the computation and take the majority vote on what the output should be. But this still creates a single point of failure—the part where the majority voting is implemented. Maybe this is fine if the corruption is random, because the voting algorithm can constitute a very small proportion of the total code. But I'm most interested in the case where the corruption happens adversarially—where the adversary would home in on the voting algorithm as the key thing to corrupt.

After a quick search, I can't find much work on this specific question. But I want to speculate on what such an "error-correcting algorithm" might look like. The idea of running many copies of it in parallel seems solid, so that it's hard to corrupt a majority at once. But there can't be a single voting algorithm (or any other kind of "overseer") between those copies and the output channel, because that overseer might itself be corrupted. Instead, you need the majority of the copies to be able to "overpower" the few corrupted copies to control the output channel via some process that isn't mediated by a small easily-corruptible section of code.

The viability of some copies "overpowering" other copies will depend heavily on the substrate on which they're running. For example, if all the copies are running on different segments of a Universal Turing Machine tape, then a corrupted copy could potentially just loop forever and prevent the others from answering. So in order to make error-correcting algorithms viable we may need a specific type of Universal Turing Machine which somehow enforces parallelism. Then you need some process by which copies that agree on their outputs can "merge" together to form a more powerful entity; and by which entities
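The triple-modular-redundancy scheme described in the take can be sketched in a few lines of Python. The replica functions here are hypothetical stand-ins for "copies of the algorithm"; the point of the sketch is that the vote itself is one small function, which is exactly the single point of failure the take identifies:

```python
from collections import Counter

def tmr(replicas, x):
    """Run each replica of the (possibly corrupted) algorithm on input x
    and return the majority output.

    Note: this voting step is itself a single point of failure --
    if an adversary corrupts `tmr`, the redundancy of the replicas
    no longer helps, which is the take's central worry.
    """
    outputs = [replica(x) for replica in replicas]
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(replicas) // 2:
        raise ValueError("no majority: too many replicas corrupted")
    return winner

# Example: one adversarially corrupted copy out of three is outvoted.
good = lambda x: x * x
bad = lambda x: x * x + 1   # corrupted replica
print(tmr([good, good, bad], 5))  # -> 25
```

This also illustrates the failure mode for anything beyond a minority of corruptions: with two of three replicas corrupted in the same way, the vote confidently returns the wrong answer, and with all three disagreeing there is no majority at all.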
First Post: Out to Get You
44
Solstice Season 2025: Ritual Roundup & Megameetups
Raemon
14d
8
184
New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
peterbarnett, Aaron_Scher, David Abecassis, Brian Abeyta
2d
16
299
Paranoia: A Beginner's Guide
habryka
5d
62
755
The Company Man
Tomás B.
2mo
70
146
How Colds Spread
RobertM
2d
13
353
Legible vs. Illegible AI Safety Problems
Ω
Wei Dai
11d
Ω
93
162
Where is the Capital? An Overview
johnswentworth
4d
18
173
7 Vicious Vices of Rationalists
Ben Pace
4d
29
696
The Rise of Parasitic AI
Adele Lopez
2mo
179
87
Serious Flaws in CAST
Ω
Max Harms
1d
Ω
6
314
I ate bear fat with honey and salt flakes, to prove a point
aggliu
17d
51
114
ARC progress update: Competing with sampling
Eric Neyman
2d
8
303
Why I Transitioned: A Case Study
Fiora Sunshine
19d
56