All of jimrandomh's Comments + Replies

jimrandomh3112

The remarkable thing about human genetics is that most of the variants ARE additive.

I think this is likely incorrect, at least where intelligence-affecting SNPs stacked in large numbers are concerned.

To make an analogy to ML, the effect of a brain-affecting gene will be to push a hyperparameter in one direction or the other. If that hyperparameter is (on average) not perfectly tuned, then one of the variants will be an enhancement, since it leads to a hyperparameter-value that is (on average) closer to optimal.

If each hyperparameter is affected by many gen... (read more)
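To make the overshoot concrete, here's a toy sketch (all numbers are illustrative assumptions, not real effect sizes): effects on the hyperparameter are additive, but effects on the outcome stop being additive once you stack enough variants to pass the optimum.

```python
# Toy model: each "enhancing" variant pushes a hyperparameter toward (and eventually past)
# the optimum. Effects on the parameter are additive; effects on the outcome are not.
# All numbers are illustrative assumptions, not estimates from real genetics.

def outcome(param, optimum=1.0):
    # Quadratic penalty: being far from the optimum in either direction is bad.
    return -(param - optimum) ** 2

baseline = 0.0            # population-average setting of the hyperparameter
effect_per_variant = 0.1  # each variant nudges the parameter by this much

for n_variants in [0, 5, 10, 20, 40]:
    param = baseline + n_variants * effect_per_variant
    print(n_variants, round(outcome(param), 3))

# Stacking the first ~10 variants helps (the parameter approaches the optimum at 1.0);
# stacking 20+ overshoots, and the outcome gets worse again, even though each variant
# was individually an "enhancement" at baseline.
```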

1Pablo Villalobos
I suspect the analogy does not really work that well. Much of human genetic variation is just bad mutations that take a while to be selected out. For example, maybe a gene variant slightly decreases the efficiency of your neurons and makes everything in your brain slightly slower
7kman
I definitely don't expect additivity holds out to like +20 SDs. We'd be aiming for more like +7 SDs.

Downvotes don't (necessarily) mean you broke the rules, per se, just that people think the post is low quality. I skimmed this, and it seemed like... a mix of edgy dark politics with poetic obscurantism?

2RobertM
I hadn't downvoted this post, but I am not sure why OP is surprised, given that the first four paragraphs, rather than explaining what the post is about, instead celebrate tree murder and insult their (imagined) audience:

Any of the many nonprofits, academic research groups, or alignment teams within AI labs. You don't have to bet on a specific research group to decide that it's worth betting on the ecosystem as a whole.

There's also a sizeable contingent that thinks none of the current work is promising, and that therefore buying a little time is valuable mainly insofar as it opens the possibility of buying a lot of time. Under this perspective, that still bottoms out in technical research progress eventually, even if, in the most pessimistic case, that progress has to route through future researchers who are cognitively enhanced.

jimrandomh8050

The article seems to assume that the primary motivation for wanting to slow down AI is to buy time for institutional progress. Which seems incorrect as an interpretation of the motivation. Most people that I hear talk about buying time are talking about buying time for technical progress in alignment. Technical progress, unlike institution-building, tends to be cumulative at all timescales, which makes it much more strategically relevant.

2Roman Leventov
https://gradual-disempowerment.ai/ is mostly about institutional progress, not narrow technical progress.
4Vaniver
I think you need both? That is--I think you need both technical progress in alignment, and agreements and surveillance and enforcement such that people don't accidentally (or deliberately) create rogue AIs that cause lots of problems. I think historically many people imagined "we'll make a generally intelligent system and ask it to figure out a way to defend the Earth" in a way that I think seems less plausible to me now. It seems more like we need to have systems in place already playing defense, which ramp up faster than the systems playing offense. 
7aysja
Technical progress also has the advantage of being the sort of thing which could make a superintelligence safe, whereas I expect very little of this to come from institutional competency alone. 
Ben Pace268

For what it's worth, I have grown pessimistic about our ability to solve the open technical problems even given 100 years of work on them. I think it possible but not probable in most plausible scenarios.

Correspondingly the importance I assign to increasing the intelligence of humans has drastically increased.

8RHollerith
Eliezer thinks (as do I) that technical progress in alignment is hopeless without first improving the pool of prospective human alignment researchers (e.g., via human cognitive augmentation).
6aphyer
Buying time for technical progress in alignment...to be made where, and by who?

All of the plans I know of for aligning superintelligence are timeline-sensitive, either because they involve research strategies that haven't paid off yet, or because they involve using non-superintelligent AI to help with alignment of subsequent AIs. Acceleration specifically in the supply of compute makes all those plans harder. If you buy the argument that misaligned superintelligence is a risk at all, Stargate is a bad thing.

The one silver lining is that this is all legible. The current administration's stance seems to be that we should build AI quick... (read more)

If bringing such attitudes to conscious awareness and verbalizing them allows you to examine and discard them, have you excised a vulnerability or installed one? Not clear.

Possibly both, but one thing breaks the symmetry: it is on average less bad to be hacked by distant forces than by close ones.

There's a version of this that's directional advice: if you get a "bad vibe" from someone, how strongly should this influence your actions towards them? Like all directional advice, whether it's correct or incorrect depends on your starting point. Too little influence, and you'll find yourself surrounded by bad characters; too much, and you'll find yourself in a conformism bubble. The details of what does and doesn't trigger your "bad vibe" feeling matters a lot; the better calibrated it is, the more you should trust it.

There's a slightly more nuanced vers... (read more)

4Said Achmiz
This doesn’t seem quite right, because it is also possible to have an unconscious or un-verbalized sense that, e.g., you’re not supposed to “discriminate” against “religions”, or that “authority” is bad and any rebellion against “authority” is good, etc. If bringing such attitudes to conscious awareness and verbalizing them allows you to examine and discard them, have you excised a vulnerability or installed one? Not clear.

Recently, a lot of very-low-quality cryptocurrency tokens have been seeing enormous "market caps". I think a lot of people are getting confused by that, and are resolving the confusion incorrectly. If you see a claim that a coin named $JUNK has a market cap of $10B, there are three possibilities. Either: (1) The claim is entirely false, (2) there are far more fools with more money than expected, or (3) the $10B number is real, but doesn't mean what you're meant to think it means.

The first possibility, that the number is simply made up, is pretty easy to cr... (read more)
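For a sense of how possibility (3) works, here's a toy illustration (all numbers hypothetical) of how a headline market cap is computed:

```python
# Illustrative only: how a huge headline "market cap" can be produced by a tiny trade.
# All numbers are hypothetical.
total_supply = 10_000_000_000   # 10B tokens exist, mostly held by insiders
last_trade_price = 1.00         # someone bought $50k worth at $1/token
traded_volume_usd = 50_000

market_cap = total_supply * last_trade_price
print(f"Headline market cap: ${market_cap:,.0f}")          # $10,000,000,000
print(f"Money that actually changed hands: ${traded_volume_usd:,.0f}")

# The $10B figure is "real" in the sense that it's price * supply, but nothing close
# to $10B could be extracted by selling into the order book.
```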

Epistemic belief updating: Not noticeably different.

Task stickiness: Massively increased, but I believe this is improvement (at baseline my task stickiness is too low so the change is in the right direction).

I don't think that's true. Or rather, it's only true in the specific case of studies that involve calorie restriction. In practice that's a large (excessive) fraction of studies, but testing variations of the contamination hypothesis does not require it.

3ChristianKl
If it were only true in the case of calorie restriction, why don't we have better studies about the effects of salt? People like to eat together with other people. They go to restaurants together to eat shared meals. They have family dinners. 

(We have a draft policy that we haven't published yet, which would have rejected the OP's paste of Claude. Though note that the OP was 9 months ago.)

2JenniferRM
Can you link to the draft, or DM me a copy, or something? I'd love to be able to comment on it, if that kind of input is welcome.

All three of these are hard, and all three fail catastrophically.

If you could make a human-imitator, the approach people usually talk about is extending this to an emulation of a human under time dilation. Then you take your best alignment researcher(s), simulate them in a box thinking about AI alignment for a long time, and launch a superintelligence with whatever parameters they recommend. (Aka: Paul Boxing)

3Roko
I would be very surprised if all three of these are equally hard, and I suspect that (1) is the easiest, and by a long shot. Making a human imitator AI, once you already have weakly superhuman AI, is a matter of cutting down capabilities, and I suspect that it can be achieved by distillation, i.e. using the weakly superhuman AI that we will soon have to make a controlled synthetic dataset for pretraining and finetuning and then a much larger and more thorough RLHF dataset. Finally you'd need to make sure the model didn't have too many parameters.

The whole point of a "test" is that it's something you do before it matters.

As an analogy: suppose you have a "trustworthy bank teller test", which you use when hiring for a role at a bank. Suppose someone passes the test, then after they're hired, they steal everything they can access and flee. If your reaction is that they failed the test, then you have gotten confused about what is and isn't a test, and what tests are for.

Now imagine you're hiring for a bank-teller role, and the job ad has been posted in two places: a local community college, and a priv... (read more)

2Roko
Perhaps you could rephrase this post as an implication: IF you can make a machine that constructs human-imitator-AI systems, THEN AI alignment in the technical sense is mostly trivialized and you just have the usual human-politics problems plus the problem of preventing anyone else from making superintelligent black box systems. So, out of these three problems, which is the hard one?
(1) Make a machine that constructs human-imitator-AI systems
(2) Solve the usual human-politics problems
(3) Prevent anyone else from making superintelligent black box systems
2Roko
It's not a word-game, it's a theorem based on a set of assumptions. There is still the in-practice question of how you construct a functional digital copy of a human. But imagine trying to write a book about mechanics using the term "center of mass" and having people object to you because "the real center of mass doesn't exist until you tell me how to measure it exactly for the specific pile of materials I have right here!" You have to have the concept.
0Roko
No, this is not something you 'do'. It's a purely mathematical criterion, like 'the center of mass of a building' or 'Planck's constant'. A given AI either does or does not possess the quality of statistically passing for a particular human. If it doesn't under one circumstance, then it doesn't satisfy that criterion.

that does not mean it will continue to act indistinguishable from a human when you are not looking

Then it failed the Turing Test because you successfully distinguished it from a human.

So, you must believe that it is impossible to make an AI that passes the Turing Test.

I feel like you are being obtuse here. Try again?

-1Roko
If an AI cannot act the same way as a human under all circumstances (including when you're not looking, when it would benefit it, whatever), then it has failed the Turing Test.

Did you skip the paragraph about the test/deploy distinction? If you have something that looks (to you) like it's indistinguishable from a human, but it arose from something descended from the process by which modern AIs are produced, that does not mean it will continue to act indistinguishable from a human when you are not looking. It is much more likely to mean you have produced deceptive alignment, and put it in a situation where it reasons that it should act indistinguishable from a human, for strategic reasons.

-3Roko
Then it failed the Turing Test because you successfully distinguished it from a human. So, you must believe that it is impossible to make an AI that passes the Turing Test. I think this is wrong, but it is a consistent position. Perhaps a strengthening of this position is that such Turing-Test-Passing AIs exist, but no technique we currently have or ever will have can actually produce them. I think this is wrong but it is a bit harder to show that.

This missed the point entirely, I think. A smarter-than-human AI will reason: "I am in some sort of testing setup" --> "I will act the way the administrators of the test want, so that I can do what I want in the world later". This reasoning is valid regardless of whether the AI has humanlike goals, or has misaligned alien goals.

If that testing setup happens to be a Turing test, it will act so as to pass the Turing test. But if it looks around and sees signs that it is not in a test environment, then it will follow its true goal, whatever that is. And it isn't feasible to make a test environment that looks like the real world to a clever agent that gets to interact with it freely over long durations.

2Roko
This is irrelevant, all that matters is that the AI is a sufficiently close replica of a human. If the human would "act the way the administrators of the test want", then the AI should do that. If not, then it should not. If it fails to do the same thing that the human that it is supposed to be a copy of would do, then it has failed the Turing Test in this strong form. For reasons laid out in the post, I think it is very unlikely that all possible AIs would fail to act the same way as the human (which of course may be to "act the way the administrators of the test want", or not, depending on who the human is and what their motivations are).

Kinda. There's source code here and you can poke around the API in graphiql. (We don't promise not to change things without warning.) When you get the HTML content of a post/comment it will contain elements that look like <div data-elicit-id="tYHTHHcAdR4W4XzHC">Prediction</div> (the attribute name is a holdover from when we had an offsite integration with Elicit). For example, your prediction "Somebody (possibly Screwtape) builds an integration between Fatebook.io and the LessWrong prediction UI by the end of July 2025" has ID tYHTHHcAdR4W4XzHC... (read more)
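As a rough sketch (assuming you've already fetched a post's HTML via the GraphQL API), extracting those prediction IDs might look like the following; the regex-based parsing is just for illustration, and a real implementation should use an HTML parser:

```python
import re

# Inline predictions appear in post/comment HTML as:
#   <div data-elicit-id="tYHTHHcAdR4W4XzHC">Prediction</div>
def extract_prediction_ids(post_html: str) -> list[str]:
    return re.findall(r'data-elicit-id="([^"]+)"', post_html)

example_html = '<div data-elicit-id="tYHTHHcAdR4W4XzHC">Prediction</div>'
print(extract_prediction_ids(example_html))  # ['tYHTHHcAdR4W4XzHC']
```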

Some of it, but not the main thing. I predict (without having checked) that if you do the analysis (or check an analysis that has already been done), it will have approximately the same amount of contamination from plastics, agricultural additives, etc as the default food supply.

Studying the diets of outlier-obese people is definitely something we should be doing (and are doing, a little), but yeah, the outliers are probably going to be obese for reasons other than "the reason obesity has increased over time, but more so".

We don't have any plans yet; we might circle back in a year and build a leaderboard, or we might not. (It's also possible for third-parties to do that with our API). If we do anything like that, I promise the scoring will be incentive-compatible.

4Screwtape
. . . Okay, I'll bite. [Prediction] Edit: And- [Prediction] Now, I don't suppose that LessWrong prediction API is documented anywhere?
jimrandomh6621

There really ought to be a parallel food supply chain, for scientific/research purposes, where all ingredients are high-purity, in a similar way to how the ingredients going into a semiconductor factory are high-purity. Manufacture high-purity soil from ultrapure ingredients, fill a greenhouse with plants with known genomes, water them with ultrapure water. Raise animals fed with high-purity plants. Reproduce a typical American diet in this way.

This would be very expensive compared to normal food, but quite scientifically valuable. You could randomize a st... (read more)

5ChristianKl
The main problem of nutritional research is that it's hard to get people to eat controlled diets. I don't think the key problem is about sourcing ingredients. 
7Drake Thomas
I agree this seems pretty good to do, but I think it'll be tough to rule out all possible contaminant theories with this approach:
* Some kinds of contaminants will be really tough to handle, eg if the issue is trace amounts of radioactive isotopes that were at much lower levels before atmospheric nuclear testing.
* It's possible that there are contaminant-adjacent effects arising from preparation or growing methods that aren't related to the purity of the inputs, eg "tomato plants in contact with metal stakes react by expressing obesogenic compounds in their fruits, and 100 years ago everyone used wooden stakes so this didn't happen".
* If 50% of people will develop a propensity for obesity by consuming more than trace amounts of contaminant X, and everyone living life in modern society has some X on their hands and in their kitchen cabinets and so on, the food alone being ultra-pure might not be enough.

Still seems like it'd provide a 5:1 update against contaminant theories if this experiment didn't affect obesity rates though.
3Tao Lin
there is https://shop.nist.gov/ccrz__ProductList?categoryId=a0l3d0000005KqSAAU&cclcl=en_US which fulfils some of this
3tailcalled
Wouldn't it be much cheaper and easier to take a handful of really obese people, sample from the various things they eat, and look for contaminants?
6Durkl
Do you mean like this, but with an emphasis on purity? 

Sorry about that, a fix is in progress. Unmaking a prediction will no longer crash. The UI will incorrectly display the cancelled prediction in the leftmost bucket; that will be fixed in a few minutes without you needing to re-do any predictions.

You can change this in your user settings! It's in the Site Customization section; it's labelled "Hide other users' Elicit predictions until I have predicted myself". (Our Claims feature is no longer linked to Elicit, but this setting carries over from back when it was.)

4Ben Pace
Bug report: It does have the amusing property that, if you hover over a part of the claim where others have left votes, the text underneath vanishes. Normally it would be replaced with the names of the users who voted, but now it shows no text. This doesn't reveal the key identity bits, but does reveal non-zero bits about what others think.

You can prevent this by putting a note in some place that isn't public but would be found later, such as a will, that says that any purported suicide note is fake unless it contains a particular password.

Unfortunately while this strategy might occasionally reveal a death to have been murder, it doesn't really work as a deterrent; someone who thinks you've done this would make the death look like an accident or medical issue instead.

How is this better than stating explicitly that you're not going to commit suicide?

TsviBT140

You can publish it, including the output of a standard hash function applied to the secret password. "Any real note will contain a preimage of this hash."
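A minimal sketch of that scheme (the hash function choice and the example strings are illustrative assumptions):

```python
import hashlib

# Publish the hash now; keep the password secret (e.g., recorded in a will).
# A genuine note proves itself by containing the password (the preimage).
password = "correct horse battery staple"   # illustrative placeholder
commitment = hashlib.sha256(password.encode("utf-8")).hexdigest()
print("Publish this hash:", commitment)

def note_is_genuine(claimed_password: str) -> bool:
    # Anyone can check a purported note's password against the published hash.
    return hashlib.sha256(claimed_password.encode("utf-8")).hexdigest() == commitment

print(note_is_genuine("correct horse battery staple"))  # True
print(note_is_genuine("some forged password"))          # False
```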

Lots of people are pushing back on this, but I do want to say explicitly that I agree that raw LLM-produced text is mostly not up to LW standards, and that the writing style that current-gen LLMs produce by default sucks. In the new-user-posting-for-the-first-time moderation queue, next to the SEO spam, we do see some essays that look like raw LLM output, and we reject these.

That doesn't mean LLMs don't have good use around the edges. In the case of defining commonly-used jargon, there is no need for insight or originality, the task is search-engine-adjacent, and so I think LLMs have a role there. That said, if the glossary content is coming out bad in practice, that's important feedback.

In your climate, defection from the natural gas and electric grid is very far from being economical, because the peak energy demand for the year is dominated by heating, and solar peaks in the summer, so you would need to have extreme oversizing of the panels to provide sufficient energy in the winter.

I think the prediction here is that people will detach only from the electric grid, not from the natural gas grid. If you use natural gas heat instead of a heat pump for part of the winter, then you don't need to oversize your solar panels as much.

1denkenberger
Yes, but the rest of my comment focused on why I don't think defection from just the electric grid is close to economical with the same reliability.

If you set aside the pricing structure and just look at the underlying economics, the power grid will still be definitely needed for all the loads that are too dense for rooftop solar, ie industry, car chargers, office buildings, apartment buildings, and some commercial buildings. If every suburban house detached from the grid, these consumers would see big increases in their transmission costs, but they wouldn't have much choice but to pay them. This might lead to a world where downtown areas and cities have electric grids, but rural areas and the sparser... (read more)

jimrandomh3613

Many people seem to have a single bucket in their thinking, which merges "moral condemnation" and "negative product review". This produces weird effects, like writing angry callout posts for a business having high prices.

I think a large fraction of libertarian thinking is just the ability to keep these straight, so that the next thought after "business has high prices" is "shop elsewhere" rather than "coordinate punishment".

5Stephen Fowler
I don't think people who disagree with your political beliefs must be inherently irrational. Can you think of real world scenarios in which "shop elsewhere" isn't an option?
3ZY
Based on the words from this post alone - I think that would depend on what the situation is; in the scenario of price increases, if the business is a monopoly or has very high market power, and the increase is significant (and may even potentially cause harm), then anger would make sense. 
1MinusGix
I agree that it is easy to automatically lump the two concepts together. I think another important part of this is that there are limited methods for most consumers to coordinate against companies to lower their prices. There's shopping elsewhere, leaving a bad review, or moral outrage. The last may have a chance of blowing up socially, such as becoming a boycott (but boycotts are often considered ineffective), or it may encourage the government to step in. In our current environment, the government often operates as the coordination method to punish companies for behaving in ways that people don't want. In a much more libertarian society we would want this replaced with other methods, so that consumers can make it harder to put themselves in a prisoner's dilemma or stag hunt against each other. If we had common organizations for more mild coordination than the state interfering, then I believe this would improve the default mentality because there would be more options.
9cubefox
This might be a possible solution to the "supply-demand paradox": sometimes things (e.g. concert or soccer tickets, new playstations) are sold at a price such that the demand far outweighs the supply. Standard economic theory predicts that the price would be increased in such cases.
3RamblinDash
Just to push back a little - I feel like these people do a valuable service for capitalism. If people in the reviews or in the press are criticizing a business for these things, that's an important channel of information for me as a consumer and it's hard to know how else I could apply that to my buying decisions without incurring the time and hassle cost of showing up and then leaving without buying anything.
lc160

Outside of politics, none are more certain that a substandard or overpriced product is a moral failing than gamers. You'd think EA were guilty of war crimes with the way people treat them for charging for DLC or whatever.

jimrandomh*Moderator Comment2211

Nope, that's more than enough. Caleb Ditchfield, you are seriously mentally ill, and your delusions are causing you to exhibit a pattern of unethical behavior. This is not a place where you will be able to find help or support with your mental illness. Based on skimming your Twitter history, I believe your mental illness is caused by (or exacerbated by) abusing Adderall.

You have already been banned from numerous community events and spaces. I'm banning you from LW, too.

Worth noting explicitly: while there weren't any logs left of prompts or completions, there were logs of API invocations and errors, which contained indications that whatever this was, it was still under development and not an already-scaled setup. Eg we saw API calls fail with invalid-arguments, then get retried successfully after a delay.

The indicators-of-compromise aren't a good match between the Permiso blog post and what we see in logs; in particular we see the user agent string Boto3/1.29.7 md/Botocore#1.32.7 ua/2.0 os/windows#10 md/arch#amd64 lang/p... (read more)

7gwern
Permiso seems to think there may be multiple attacker groups, as they always refer to plural attackers and discuss a variety of indicators and clusters. And I don't see any reason to think there is a single attacker - there's no reason to think Chub is the only LLM sexting service, and even if it was, the logical way to operate for Chub would be to buy API access on a blackmarket from all comers without asking any questions, and focus on their own business. So that may just mean that you guys got hit by another hacker who was still setting up their own workflow and exploitation infrastructure. (It's a big Internet. Like all that Facebook DALL-E AI slop images is not a single person or group, or even a single network of influencers, it's like several different communities across various third world languages coordinating churning out AI slop for Facebook 'engagement' payments, all sharing tutorials and get-rich-quick schemes.)

Ah, sorry that one went unfixed for as long as it did; a fix is now written and should be deployed pretty soon.

2cubefox
I can confirm the problem is fixed now. Thanks!

This is a bug and we're looking into it. It appears to be specific to Safari on iOS (Chrome on iOS is a Safari skin); it doesn't affect desktop browsers, Android/Chrome, or Android/Firefox, which is why we didn't notice earlier. This most likely started with a change on desktop where clicking on a post (without modifiers) opens when you press the mouse button, rather than when you release it.

4cubefox
May I ask whether there is anything planned on fixing this rendering/loading bug which occurs with Firefox? It affects unread/uncached posts opened in a background tab.

Standardized tests work, within the range they're testing for. You don't need to overthink that part. If you want to make people's intelligence more legible and more provable, what you have is more of a social and logistical issue: how do you convince people to publish their test scores, get people to care about those scores, and ensure that the scores they publish are real and not the result of cheating?

1M. Y. Zuo
Which tests are you referring to and how do they exactly measure general intelligence? (And not say IQ or how much the test taker crammed…)

And the only practical way to realize this, that I can think of now, is by predicting the largest stock markets such as the NYSE, via some kind of options trading, many many many times within say a calendar year, and then showing their average rate of their returns is significantly above random chance.

The threshold for doing this isn't being above average relative to human individuals, it's being close to the top relative to specialized institutions. That can occasionally be achievable, but usually it isn't.

1M. Y. Zuo
Well I agree it is a much higher bar than just ‘above average’, yet it still seems like the easiest way of delivering a credible proof that can’t be second guessed somehow. (That I can think of, hence the post) Since ‘cheating’ at this would also mean that the person somehow has gained insider information for a calendar year that was above and beyond what the same ‘specialized institutions’ could obtain. Which is so vanishingly unlikely that I think pretty much everyone (>99% of readers) would accept the results as the bonafide truth. But it probably is limited only to literal geniuses and above as a practical mechanism.
jimrandomh1412

The first time you came to my attention was in May. I had posted something about how Facebook's notification system works. You cold-messaged me to say you had gotten duplicate notifications from Facebook, and you thought this meant that your phone was hacked. Prior to this, I don't recall us having ever interacted or having heard you mentioned. During that conversation, you came across to me as paranoid-delusional. You mentioned Duncan's name once, and I didn't think anything of it at the time.

Less than a week later, someone (not mentioned or participating... (read more)

A news article reports on a crime. In the replies, one person calls the crime "awful", one person calls it "evil", and one person calls it "disgusting".

I think that, on average, the person who called it "disgusting" is a worse person than the other two. While I think there are many people using it unreflectively as a generic word for "bad", I think many people are honestly signaling that they had a disgust reaction, and that this was the deciding element of their response. But disgust-emotion is less correlated with morality than other ways of evaluating t... (read more)

4Richard_Kennaway
I doubt the interviewees are doing anything more than reaching for a word to express "badness" and uttering the first that comes to hand.
5Benquo
I can’t tell quite what you think you’re saying because “worse” and “morality” are such overloaded terms that the context doesn’t disambiguate well. Seems to me like people calling it “evil” or “awful” are taking an adversarial frame where good vs evil is roughly orthogonal to strong vs weak, and classifying the crime as an impressive evil-aligned act that increases the prestige of evil, while people calling it disgusting are taking a mental-health frame where the crime is disordered behavior that doesn’t help the criminal. Which one is a more helpful or true perspective depends on what the crime is! I expect people who are disgusted to be less tempted to cooperate with the criminal or scapegoat a rando than people who are awed.
4tailcalled
Counterpoint: you know for sure that the person who calls it disgusting is averse to the crime and the criminal, whereas the person who calls it evil might still admire the power or achievement involved, and the person who calls it awful might have sympathy for the criminal's situation.
7Raemon
The thing that has me all a'wuckled here is that I think morality basically comes from disgust. (or: a mix of disgust, anger, logic/reflectivity, empathy and some aesthetic appreciation for some classes of things).  I do share "people who seem to be operating entirely off disgust with no reflectivity feel dangerous to me", but, I think a proper human morality somehow accounts for disgust having actually been an important part of how it was birthed.
7Shankar Sivarajan
I disagree. I hold that people who exercise moral judgment based on their own reactions/emotions, whether those be driven by disgust or personal prejudice or reasoning from some axioms of one's own choosing, are fundamentally superior to those who rely on societal mores, cultural norms, the state's laws, religious tenets, or any other external source as the basis for their moral compass.
jimrandomh2414

LessWrong now has sidenotes. These use the existing footnotes feature; posts that already had footnotes will now also display these footnotes in the right margin (if your screen is wide enough/zoomed out enough). Post authors can disable this for individual posts; we're defaulting it to on because when looking at older posts, most of the time it seems like an improvement.

Relatedly, we now also display inline reactions as icons in the right margin (rather than underlines within the main post text). If reaction icons or sidenotes would cover each other up, they get pushed down the page.

Feedback welcome!

2Screwtape
Feedback: a month or so out, I love the sidenotes. They're right where I want footnotes to be, visible without breaking the flow.
8Mateusz Bagiński
My feedback is that I absolutely love it. My favorite feature released since reactions or audio for all posts (whichever was later).

LessWrong now has collapsible sections in the post editor (currently only for posts, but we should be able to also extend this to comments if there's demand). To use them, click the insert-block icon in the left margin (see screenshot).

Once inserted, they start out closed; when open, they look like this:

When viewing the post outside the editor, they will start out closed and have a click-to-expand. There are a few known minor issues editing them; in particular the editor will let you nest them but they look bad when nested so you shouldn't, and t... (read more)

3MondSemmel
I love the equivalent feature in Notion ("toggles"), so I appreciate the addition of collapsible sections on LW, too. Regarding the aesthetics, though, I prefer the minimalist implementation of toggles in Notion over being forced to have a border plus a grey-colored title. Plus I personally make extensive use of deeply nested toggles. I made a brief example page of how toggles work in Notion. Feel free to check it out, maybe it can serve as inspiration for functionality and/or aesthetics.
2Steven Byrnes
Nice. I used collapsed-by-default boxes from time to time when I used to write/edit Wikipedia physics articles—usually (or maybe exclusively) to hide a math derivation that would distract from the flow of the physics narrative / pedagogy. (Example, example, although note that the wikipedia format/style has changed for the worse since the 2010s … at the time I added those collapsed-by-default sections, they actually looked like enclosed gray boxes with black outline, IIRC.)

The Elicit integrations aren't working. I'm looking into it; it looks like we attempted to migrate away from the Elicit API 7 months ago and make the polls be self-hosted on LW, but left the UI for creating Elicit polls in place in a way where it would produce broken polls. Argh.

I can find the polls this article uses, but unfortunately I can't link to them; Elicit's question-permalink route is broken? Here's what should have been a permalink to the first question: link.

jimrandomh*25

This is a hit piece. Maybe there are legitimate criticisms in there, but it tells you right off the bat that it's egregiously untrustworthy with the first paragraph:

I like to think of the Bay Area intellectual culture as the equivalent of the Vogons’ in Hitchhiker’s Guide to the Galaxy. The Vogons, if you don’t remember, are an alien species who demolish Earth to build an interstellar highway. Similarly, Bay Area intellectuals tend to see some goal in the future that they want to get to and they make a straight line for it, tunneling through anything in their way.

7Amalthea
It's not an entirely unfair characterization.
3ROM
The piece is unfair towards Bay Area Rationalists, but the critiques of Lumina can stand separate from what the author thinks about LW readers. "Haters gonna occasionally make some valid points" and such. Sometimes people who unfairly dislike you can also make valid critiques. I think it's a fair point to note that:
* Lumina have not done any clinical trials
* They circumvented the FDA by classifying it as a cosmetic
* They aren't following best practice guidelines for probiotics (granted I don't actually know how important that is)
jimrandomh166

This is tragic, but seems to have been inevitable for a while; an institution cannot survive under a parent institution that's so hostile as to ban it from fundraising and hiring.

I took a look at the list of other research centers within Oxford. There seems to be some overlap in scope with the Institute for Ethics in AI. But I don't think they do the same sort of research or do research on the same tier; there are many important concepts and papers that come to mind as having come from FHI (and Nick Bostrom in particular), but I can't think of a single idea or paper that affected my thinking that came from IEAI.

gwern291

I would say that the closest to FHI at Oxford right now would probably be Global Priorities Institute (GPI). A lot of these papers would've made just as much sense coming out of FHI. (Might be worth considering how GPI apparently seems to have navigated Oxford better.)

Harry let himself be pulled, but as Hermione dragged him away, he said, raising his voice even louder, "It is entirely possible that in a thousand years, the fact that FHI was at Oxford will be the only reason anyone remembers Oxford!"

That story doesn't describe a gray-market source, it describes a compounding pharmacy that screwed up.

1Metacelsus
Compounding pharmacies are gray-market. (Buying on "evolutionpeptides.com" would be black-market.)

Plausible. This depends on the resource/value curve at very high resource levels; ie, are its values such that running extra minds has diminishing returns, such that it eventually starts allocating resources to other things like recovering mind-states from its past, or does it get value that's more linear-ish in resources spent. Given that we ourselves are likely to be very resource-inefficient to run, I suspect humans would find ourselves in a similar situation. Ie, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.

2Vladimir_Nesov
That the AI would mitigate something is compatible with it being regrettable, intentionally inflicted damage. In contrast, the resource-inefficiency of humans is not something we introduced on purpose.

Right now when users have conversations with chat-style AIs, the logs are sometimes kept, and sometimes discarded, because the conversations may involve confidential information and users would rather not take the risk of the log being leaked or misused. If I take the AI's perspective, however, having the log be discarded seems quite bad. The nonstandard nature of memory, time, and identity in an LLM chatbot context makes it complicated, but having the conversation end with the log discarded seems plausibly equivalent to dying. Certainly if I imagine mysel... (read more)

2ryan_greenblatt
I'm in favor of logging everything forever in human-accessible formats for other reasons (e.g. review for control purposes). Hopefully we can resolve the safety/privacy trade-offs. The proposal sounds reasonable and viable to me, though the fact that it can't be immediately explained might mean that it's not commercially viable.
2Vladimir_Nesov
Compute might get more expensive, not cheaper, because it would be possible to make better use of it (running minds, not stretching keys). Then it's weighing its marginal use against access to the sealed data.

At this point we should probably be preserving the code and weights of every AI system that humanity produces, aligned or not, just on they-might-turn-out-to-be-morally-significant grounds. And yeah, it improves the incentives for an AI that's thinking about attempting a world takeover, if it has a low chance of success and its wants are things that we would be able to retroactively satisfy.

It might be worth setting up a standardized mechanism for encrypting things to be released postsingularity, by gating them behind a computation with its difficulty balanced to be feasible later but not feasible now.
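One minimal sketch of such a mechanism, using iterated hashing as the sequential-work function (the iteration count, seed, and key-derivation details here are illustrative assumptions; a serious design would want something like a verifiable delay function or an RSW-style time-lock puzzle):

```python
import hashlib

# Derive an encryption key by doing N sequential hash iterations over a public seed.
# Each iteration depends on the previous output, so the work can't be parallelized;
# tune N so the derivation is infeasible to wait out now but feasible later.
def slow_derive_key(seed: bytes, iterations: int) -> bytes:
    digest = seed
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

public_seed = b"publish-this-seed-alongside-the-ciphertext"  # illustrative
key = slow_derive_key(public_seed, iterations=10_000)  # use a vastly larger N in practice
print(key.hex())

# 'key' would then be used with an ordinary symmetric cipher to encrypt the payload;
# anyone willing to redo the sequential work can recover the key, and the payload, later.
```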

jimrandomh*4145

I've been a Solstice regular for many years, and organized several smaller Solstices in Boston (on a similar template to the one you went to). I think the feeling of not-belonging is accurate; Solstice is built around a worldview (which is presupposed, not argued) that you disagree with, and this is integral to its construction. The particular instance you went to was, if anything, watered down on the relevant axis.

In the center of Solstice there is traditionally a Moment of Darkness. While it is not used in every solstice, a commonly used reading, which t... (read more)

0Jeffrey Heninger
While I would love to see the entire rationalist community embrace the Fulness of the Gospel of Christ, I am aware that this is not a reasonable ask for Solstice, and not something I should bet on in a prediction market. While I criticize the Overarching Narrative, I am aware that this is not something that I will change. My hopes for changing Solstice are much more modest:
1. Remove the inessential meanness directed towards religion. There already has been some of this, which is great! Time Wrote the Rocks no longer falsely claims that the Church tortured Galileo. The Ballad of Smallpox Gone no longer has a verse claiming that preachers want to "Screw the body, save the soul // Bring new deaths off the shelves". Now remove the human villains from Brighter Than Today, and you've improved things a lot.
2. Once or twice, acknowledge that some of the moral giants whose shoulders we're standing on were Christian. The original underrated reasons to be thankful had one point about Quaker Pennsylvania. Unsong's description of St. Francis of Assisi also comes to mind.

If you're interested, I could make several other suggestions of things that I think could be mentioned without disrupting the core purposes of Solstice.
7jefftk
I think a bigger factor is that not very many people can sing unknown songs from sheet music, so it wouldn't help very much to include it on the slides.

There's been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries and self experiments. The results are confusing, I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.

Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1500 ppm depending on ventilation. Since exhaled air has CO2 about two orders of magnitude larger than the variance ... (read more)
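A rough back-of-the-envelope with those numbers (purely illustrative arithmetic):

```python
# Rough arithmetic with the figures above (all values approximate).
exhaled_ppm = 50_000   # ~5% CO2 in exhaled air
outdoor_ppm = 400
indoor_ppm = 1_200     # a fairly stuffy room

# Fraction of inhaled air that was recently exhaled by someone, implied by the
# CO2 elevation over outdoor levels (simple well-mixed model, ignoring other sources):
rebreathed_fraction = (indoor_ppm - outdoor_ppm) / (exhaled_ppm - outdoor_ppm)
print(f"{rebreathed_fraction:.1%}")  # ~1.6%

# Because exhaled air is ~two orders of magnitude above the indoor/outdoor range,
# small pockets of poorly mixed air near your face can swing the CO2 you actually
# breathe far more than a room-average meter reading suggests.
```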

4kave
How did this experiment go?
3kave
I had previously guessed air movement made me feel better because my body expected air movement (i.e. some kind of biophilic effect). But this explanation seems more likely in retrospect! I'm not quite sure how to run the calculation using the diffusivity coefficient to spot check this, though.
5Adam Scholl
Huh, I've also noticed a larger effect from indoors/outdoors than seems reflected by CO2 monitors, and that I seem smarter when it's windy, but I never thought of this hypothesis; it's interesting, thanks.
6Gunnar_Zarncke
This indicates that how we breathe plays a big role in CO2 uptake. Like, shallow or full, small or large volumes, or the speed of exhaling. Breathing technique is a key skill of divers and can be learned. I just started reading the book Breath, which seems to have a lot on it. 
5Gunnar_Zarncke
Ah, very related: Exhaled air contains 44000 PPM CO2 and is used for Mouth-to-mouth resuscitation without problems. 
3M. Y. Zuo
That's a really neat point, has it ever been addressed in prior literature, that you've gone over?

I'm reading you to be saying that you think on its overt purpose this policy is bad, but ineffective, and the covert reason of testing the ability of the US federal government to regulate AI is worth the information cost of a bad policy.

I think preventing the existence of deceptive deepfakes would be quite good (if it would work); audio/video recording has done wonders for accountability in all sorts of contexts, and it's going to be terrible to suddenly have every recording subjected to reasonable doubt. I think preventing the existence of AI-generated fictional-character-only child pornography is neutral-ish (I'm uncertain of the sign of its effect on rates of actual child abuse).

There's an open letter at https://openletter.net/l/disrupting-deepfakes. I signed, but with caveats, which I'm putting here.

Background context is that I participated in building the software platform behind the letter, without a specific open letter in hand. It has mechanisms for sorting noteworthy signatures to the top, and validating signatures for authenticity. I expect there to be other open letters in the future, and I think this is an important piece of civilizational infrastructure.

I think the world having access to deepfakes, and deepfake-porn tech... (read more)

5Ben Pace
I'm reading you to be saying that you think on its overt purpose this policy is bad, but ineffective, and the covert reason of testing the ability of the US federal government to regulate AI is worth the information cost of a bad policy. I definitely appreciate that someone signing this writes this reasoning publicly. I think it's not crazy to think that it will be good to happen. I feel like it's a bit disingenuous to sign the letter for this reason, but I'm not certain.

I went to an Apple store for a demo, and said: the two things I want to evaluate are comfort, and use as an external monitor. I brought a compatible laptop (a Macbook Pro). They replied that the demo was highly scripted, and they weren't allowed to let me do that. I went through their scripted demo. It was worse than I expected. I'm not expecting Apple to take over the VR headset market any time soon.

Bias note: Apple is intensely, uniquely totalitarian over software that runs on iPhones and iPads, in a way I find offensive, not just in a sense of not wanti... (read more)
