As I think about "what to do about AI x-risk?", some principles that seem useful to me:

  1. Short timelines seem plausible enough that, for the next year or so, I'd like to focus on plans that are relevant if takeoff begins in the next few years. In a year, if it looks more like there are some fundamental bottlenecks on true creative thinking, I may consider more projects that only pay off in longer-timeline stories.
  2. Given "short timelines", I feel most optimistic about plans that capitalize on skills I'm already good at (while maybe multiclassing into things I can learn quickly with LLM assistance).
  3. I think "UI design" is a skill that I (and Lightcone more broadly) am pretty good at. And, I believe the Interfaces as a Scarce Resource hypothesis – the world is often bottlenecked on ability to process and make-use-of information in complicated, messy domains.

(In addition to LessWrong, the Lightcone team has worked on the S-Process, which makes it much easier for grantmakers to argue and negotiate pretty complex positions about how much they value various orgs).

If I've got a UI-shaped hammer, what are some nails that need hammering? (In particular, ones that are somehow relevant to x-risk.)

Some thoughts so far

Cyborgism

Last fall, I was oriented around building good UI for LLM-assisted-thinking. 

In addition to it generally seeming like a fruitful green field, I had a specific hypothesis that, once LLMs Get Real Gud, there will be an important distinction between "being able to get useful work out of them given a minute's work" and "being able to get useful work out of them in <5 seconds." The latter is something that can become a true "part of your exobrain." The former is still more like a tool you're using.

I'm less bullish on that now because, while I think most people aren't quite tackling this with the particular taste I'd apply, it does sure seem like everyone is working on "do stuff with LLMs" and it's not where the underpicked fruit is.

"Schleppy work" in narrow, technical domains

It seems like there may be specific narrow, technical domains where some kinds of tasks rarely get done because they're too hard to think about – you need tons of context, and the context is scattered across various places. Or, maybe it's all technically in one place, but you have to sift through a lot of extraneous detail.

A past example of this would be "hoverovers in coding IDEs for types and docs", linters, etc. The ability to right-click on a function call to go to the original implementation of the function. 

For a less technical example: spellchecking and grammarchecking in word processor docs.

A possible (current, important) example might be "something something Mech Interp a la Chris Olah's earlier work on distill.pub". I don't actually know how Mech Interp works these days; I vaguely believe there are visualizers/heat maps for neurons, but I'm not sure how useful those actually are for the kinds of analysis that are most important.

An example John Wentworth has mentioned a couple times is "automatically generating examples of your abstractions and making sure they type-check." (This involves both UI, and some deep technical work to actually verify the type checking)

What kinds of markets need to exist, that are difficult because of evaluation or reputation or (cognitive) transaction costs?

It's sort of surprising that Amazon, Uber, or Lugg work. Why don't people routinely receive bobcats when they order a stapler? A large part of the answer is rating systems, and an ecosystem where it's not trivial to build up a reputation.

What are some places where you can't easily buy or find a thing because of adversarial optimization? 

...

With those illustrative examples: do you work on x-risk or x-risk adjacent things? What are some places in your work where it's confusing or annoying to find things, or figure things out?

23 comments

while I think most people aren't quite tackling this with the particular taste I'd apply, it does sure seem like everyone is working on "do stuff with LLMs" and it's not where the underpicked fruit is

I disagree; I think pretty much nobody is attempting anything useful with LLM-based interfaces. Almost all projects I've seen in the wild are terrible, and there's tons of unpicked low-hanging fruit.

I'd been thinking, on and off, about ways to speed up agent-foundations research using LLMs. An LLM-powered exploratory medium for mathematics is one possibility.

A big part of highly theoretical research is flipping between different representations of the problem: viewing it in terms of information theory, in terms of Bayesian probability, in terms of linear algebra; jumping from algebraic expressions to the visualizations of functions or to the nodes-and-edges graphs of the interactions between variables; et cetera.

The key reason behind it is that research heuristics bind to representations. E. g., suppose you're staring at some graph-theory problem. Certain problems of this type are isomorphic to linear-algebra problems, and they may be trivial in linear-algebra terms. But unless you actually project the problem into the linear-algebra ontology, you're not necessarily going to see the trivial solution when staring at the graph-theory representation. (Perhaps the obvious solution is to find the eigenvectors of the adjacency matrix of the graph – but when you're staring at a bunch of nodes connected by edges, that idea isn't obvious in that representation at all.)
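
To make this tangible with a quick code sketch (a different, well-known instance of the same pattern, picked purely for illustration): counting triangles in a graph is fiddly in the nodes-and-edges representation, but becomes a one-liner once you project into linear algebra.

```python
# Illustrative sketch only: a graph-theory question ("how many triangles?") that
# becomes trivial after projecting into the linear-algebra representation.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                    # any small undirected graph
A = nx.to_numpy_array(G)                      # the adjacency-matrix representation
eigenvalues = np.linalg.eigvalsh(A)           # symmetric matrix -> real spectrum

# For a simple undirected graph, #triangles = trace(A^3)/6 = sum(lambda_i^3)/6.
triangles_spectral = round(float((eigenvalues ** 3).sum()) / 6)
triangles_direct = sum(nx.triangles(G).values()) // 3   # each triangle is counted once per vertex
assert triangles_spectral == triangles_direct
```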

This is a bit of a simplified example – the graph theory/linear algebra connection is well-known, so experienced mathematicians may be able to translate between those representations instinctively – but I hope it's illustrative.[1]

As a different concrete example, consider John Wentworth's Bayes Net Algebra. This is essentially an interface for working with factorizations of joint probability distributions. The nodes-and-edges representation is more intuitive and easy to tinker with than the "formulas" representation, which means that having concrete rules for tinkering with graph representations without committing errors would significantly speed up how quickly you can reason through related math problems. Imagine if the derivation of such frameworks was automated: if you could set up a joint PD in terms of formulas, automatically project the setup into graph terms, start tinkering with it by dragging nodes and edges around, and get errors if and only if back-projecting the changed "graph" representation into the "formulas" representation results in a setup that's non-isomorphic to the initial one.

(See also this video, and the article linked above.)
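
For a sense of what the error-checking behind such an interface could look like, here's a hedged sketch. "Isomorphic" is concretized as Markov equivalence (same skeleton plus same v-structures, per the standard Verma-Pearl result); a real tool would presumably use John's actual re-factorization rules instead, so treat this as my own stand-in.

```python
# Hedged sketch of the back-projection check described above, with "isomorphic"
# concretized as Markov equivalence of the two DAGs.
import itertools
import networkx as nx

def skeleton(dag: nx.DiGraph) -> set:
    return {frozenset(edge) for edge in dag.to_undirected().edges()}

def v_structures(dag: nx.DiGraph) -> set:
    """Colliders a -> c <- b where a and b are not adjacent."""
    found = set()
    for c in dag.nodes:
        for a, b in itertools.combinations(list(dag.predecessors(c)), 2):
            if not dag.has_edge(a, b) and not dag.has_edge(b, a):
                found.add((frozenset((a, b)), c))
    return found

def markov_equivalent(d1: nx.DiGraph, d2: nx.DiGraph) -> bool:
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

# "Dragging an edge around" in a way that breaks equivalence should raise a UI error:
before = nx.DiGraph([("X", "Z"), ("Y", "Z")])   # X -> Z <- Y, a collider
after = nx.DiGraph([("Z", "X"), ("Y", "Z")])    # reversing X-Z destroys the collider
assert not markov_equivalent(before, after)
```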

A related challenge is refactors. E. g., suppose you're staring at some complicated algebraic expression with an infinite sum. It may be the case that a certain no-loss-of-generality change of variables would easily collapse that expression into a Fourier series, or make some Obscure Theorem #418152/Weird Trick #3475 trivially applicable. But unless you happen to be looking at the problem through that lens, you're not going to be able to spot it. (Especially if you don't know the Obscure Theorem #418152/Weird Trick #3475.)

It's plausible that the above two tasks are what 90% of math research consists of (the "normal-science" part of it), in terms of time expenditure: flipping between representations in search of a representation-chain where every step is trivial.

Those problems would be ameliorated by (1) reducing the friction costs of flipping between representations, and (2) being able to set up automated searches for simplifying refactors of the problem.

Can LLMs help with (1)? Maybe. They can write code and they can, more or less, reason mathematically, as long as you're not asking them for anything creative. One issue is that they're also really sloppy and deceptive when writing proofs... But that problem can potentially be ameliorated by fine-tuning e. g. r1 to justify all its conclusions using rigorous Lean code, which could be passed to automated proof-checkers before being shown to you.[2]

Can LLMs help with (2)? Maybe. I'm thinking something like the Pantheon interface, where you're working through the problem on your own, and in a side window LLMs offer random ideas regarding how to simplify the problem.

LLMs have bad research taste, which would extend to figuring out what refactorings they should try. But they also have a superhuman breadth of knowledge regarding theorems/math results. A depth-first search might thus be productive here. Most of the LLM's suggestions would be trash, but as long as complete nonsense is screened off by proof-checkers, the ideas are represented in a quickly-checkable manner (e. g., equipped with one-sentence summaries), and we're giving LLMs an open-ended task, some results may be useful.

I expect I'd pay $200-$500/month for a working, competently executed tool of this form; even more, the more flexible it is. I expect plenty of research mathematicians (not only agent-foundations folks) would as well. There's a lucrative startup opportunity there.

@johnswentworth, any thoughts?

  1. ^

    A more realistic example would concern ansatzes, i. e., various "weird tricks" for working through problems. They likewise bind to representations, such that the idea of using one would only occur to you if you're staring at a specific representation of the problem, and would fail to occur if you're staring at an isomorphic-but-shallowly-different representation.

  2. ^

    Or using o3 with a system prompt where you yell at it a lot to produce rigorous Lean code, with a proof-checker that returns errors if it ever uses a placeholder always-passes "sorry" expression. But I don't know whether you can yell at it loudly enough using just the system prompt, and this latest generation of LLMs seems really into Goodharting, so it might straight-up try to exploit bugs in your proof-checker.

Nod, this feels a bit at the intersection of what I had in mind with "Cyborgism", and the "Schleppy work in narrow domains" section.

Some thoughts: for this sort of thing, there's a hypothesis ("making it easier to change representations will enable useful thinking in math"), and a bunch of annoying implementation details you need to test the hypothesis (i.e. actually getting an LLM to do all that work reliably).

So my next question here is "can we test out a version of this sort of thing powered by some humans-in-a-trenchcoat", or otherwise somehow test the ultimate hypothesis without having to build the thing? I'm curious for your intuitions on that.

"can we test out a version of this sort of thing powered by some humans-in-a-trenchcoat"

Response lag would be an issue here. As you'd pointed out, to be a proper part of the "exobrain", tools need to have very fast feedback loops. LLMs can plausibly do the needed inferences quickly enough (or perhaps not, that's a possible failure mode), but if there's a bunch of humans on the other end, I expect it'd make the tools too slow to be useful, providing little evidence regarding faster versions.

(I guess it'd work if we put von Neumann on the other end or something, someone able to effortlessly do mountainous computations in their head, but I don't think we have many of those available.)

or otherwise somehow test the ultimate hypothesis without having to build the thing

I think the minimal viable product here would be relatively easy to build. It'd probably just look like a LaTeX-supporting interface where you can define a bunch of expressions, type natural-language commands into it ("make this substitution and update all expressions", "try applying method #331 to solving this equation"), and in the background an LLM with tool access uses its heuristics plus something like SymPy to execute them, then updates the expressions.
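
A minimal sketch of that loop, with the LLM step stubbed out as a hard-coded command handler (in the real tool, an LLM with tool access would choose the SymPy calls; the command string here is just an example):

```python
# Minimal sketch of the MVP loop: a list of expressions, a natural-language command,
# and a SymPy operation (here hard-coded; in the real tool, LLM-chosen) that updates them.
import sympy as sp

x, y, u = sp.symbols("x y u")
expressions = [sp.Eq(y, sp.exp(x**2) + x**2)]

def apply_command(command: str, exprs: list) -> list:
    # Stand-in for the LLM: pretend it translated the command into a substitution.
    if command == "substitute u = x**2 and update all expressions":
        return [e.subs(x**2, u) for e in exprs]
    raise NotImplementedError(command)

expressions = apply_command("substitute u = x**2 and update all expressions", expressions)
print(sp.latex(expressions[0]))   # the UI would re-render this LaTeX, with no chat output shown
```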

The core contribution here would be removing LLM babble from the equation, abstracting the LLM into the background so that you can interact purely with the math. Claude's Artifacts functionality and ChatGPT's Canvas + o3 can already be more or less hacked into this (though there are some issues, such as them screwing up LaTeX formatting).

"Automatic search for refactors of the setup which simplify it" should also be relatively easy. Just the above setup, a Loom-like generator of trees of thought, and a side window where the summaries of the successful branches are displayed.

Also: perhaps an unreliable demo of the full thing would still be illustrative? That is, hack together some interface that lets you flexibly edit and flip between math representations, maybe powered by some extant engine for that sort of thing (e. g., 3Blue1Brown's Manim? there are probably better fits). Don't bother with fine-tuning the LLMs, with wrapping them in proof-checkers, or with otherwise ensuring they don't make errors. Give the tool to some researchers to play with, see if they're excited about a reliable version.

"making it easier to change representations will enable useful thinking in hmath"

Approaching it from a different direction, how much evidence do we already have for this hypothesis?

  • Various visual proofs, interactive environments, and "intuitive" explanations of math (which mostly work by projecting the math into different representations) seem widely successful. See e. g. the popularity of 3Blue1Brown's videos.
  • ML/interpretability in particular seems to rely on visualizations heavily. See also Chris Olah's essays on the subject.
  • I think math and physics researchers frequently describe doing this sort of stuff in their head; I know I do. It seems common-sensical that externalizing this part of their reasoning would boost their productivity, inasmuch as it would allow them to scale it beyond the constraints of human working memory.
  • We could directly poll mathematicians/physicists/etc. with a description of the tool (or, as above, an unreliable toy demo), and ask if that sounds like something they'd use.

Overall, I think that if something like this tool could be built and made to work reliably, the case for it being helpful is pretty solid. (Indeed, if I were more confident that AGI is 5+ years away, making object-level progress on alignment less of a priority, I'd try building it myself.) The key question here is whether it can actually be made to work flexibly/reliably enough on the back of the current LLMs.

On which point, as far as the implementation side goes, the core places where it might fail are:

  • Is the current AI even up for the task? That is, is there a way to translate the needed tasks into a format LLMs plus proof-verifiers can reliably and non-deceitfully solve?
  • If AIs are up to the task, can they even do it fast enough? A one-minute delay between an interaction and a response is potentially okay-ish, although already significantly worse than a five-second delay. But if it takes e. g. ten minutes for the response to be produced, because an unwieldy, overcomplicated LLM scaffold in the background is busy arguing with itself, and 5% of the time it just falls apart, that'd make it non-viable too.[1]
  1. ^

    Perhaps we could set it up so that, e. g., the first time you instantiate the connection between two representations, the task is handed off to a big LLM, which infers a bunch of rules and writes a bunch of code snippets regarding how to manage the connection, and the subsequent calls are forwarded to smaller, faster LLMs with a bunch of context provided by the big LLM to assist them. But again: would that work? Is there a way to frontload the work like this? Would smaller LLMs be up for the task?
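
    A rough sketch of this frontloading pattern (all function names here are hypothetical stubs, not a real API): the first time a representation pair is instantiated, an expensive call to a big model derives the rules; subsequent calls reuse the cached rules with a smaller, faster model.

```python
# Hypothetical sketch of the frontloading idea; call_llm is a stand-in, not a real API.
rule_cache: dict[tuple[str, str], str] = {}

def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("replace with whatever LLM API the tool actually uses")

def translate(expr: str, src: str, dst: str) -> str:
    key = (src, dst)
    if key not in rule_cache:
        # Expensive one-time call: have the big model write down translation rules/snippets.
        rule_cache[key] = call_llm("big-model", f"Write rules for mapping {src} to {dst}.")
    # Cheap repeated call: a small model applies the cached rules to the new expression.
    return call_llm("small-model", f"Using these rules:\n{rule_cache[key]}\nTranslate: {expr}")
```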

What is the least flexible version of this tool you'd be willing to pay $20 / month for? Examples:

  1. Graph-To-Matrix Switcher: paste a small graph or adjacency matrix, hit Translate, instantly see the other form plus the eigen-stuff that often makes the problem trivial. Expectation: no (90%) - thinking "I could flip the representation" is the bottleneck; actually doing it is not a place where tooling is lacking.
  2. Natural language -> verified Lean: e.g. the user can paste some math they're working on, the LLM tries to translate it into a Lean script and then reports back whether it was able to do so. Expectation: no (80%) - an LLM chatbot with access to a Lean checker tool is probably better than any simple-to-build dedicated UI here.
  3. Math Clippy for Jupyter notebooks: a sidebar that looks at the current cell, checks if you're doing anything mathy, and provides hints or links to relevant theorems with a one-sentence gloss as to why each might be relevant. Expectation: no (80%) - feels like there might be something there that would be useful in the way Copilot is useful (i.e. sometimes gets the easy stuff right almost instantly, sometimes is wrong in a way that tickles the user's brain, again almost instantly, in neither case breaking the flow), but I doubt the form factor is right for math.
  4. TeX from drawing of equation: the user writes out math by hand on a tablet, it's auto-OCR'd into TeX, with the option to hit "accept" and replace the selected group of symbols with rendered TeX, and maybe also very basic error checking of all accepted TeX so far. Expectation: yes (75%), but I also expect this already exists.

I'm curious if there are any tools in this space that you keep checking every year or two to see if anyone has built.

Here are some problems I ran into the past week which feel LLM-UI-shaped, though I don't have a specific solution in mind. I make no claims about importance or goodness.

Context (in which I intentionally try to lean in the direction of too much detail rather than too little): I was reading a neuroscience textbook, and it painted a general picture in which neurotransmitters get released into the synaptic cleft, do their thing, and then get reabsorbed from the cleft into the neuron which released them. That seemed a bit suspicious to me, because I had previously done a calculation involving diffusion rate of serotonin, and I had cached that serotonin diffused pretty fast (and therefore would probably clear from the cleft mostly via diffusion). So I hit the LLMs + google to figure it out.

The LLMs just repeated basically the same words which were in the textbook. That's usual LLM behavior. So then I googled diffusion rates for all the major neurotransmitters (and helpfully found a paper which measured diffusion rate in the cleft itself, so I knew roughly how much slower it would be in the cleft compared to free solution). I also googled for some images of synapses to estimate their size (which was surprisingly tricky to find in text form - many sources repeated the distance between the two neurons, but few mentioned the radius of the synapse). Then I did the math, and concluded that, indeed, all of the small molecule neurotransmitters probably clear the cleft mainly via diffusion. (... In the CNS, anyway. Neuromuscular junctions are maybe different, because the synapse radius is bigger.)

This seems like the sort of thing where the right UI/scaffolding combo would make it a lot faster to get the correct info with LLM assistance. But if I just directly ask the question to an LLM in a standard chat interface, it parrots back the thing the textbook says (which is presumably blindly repeated all over the place).

Next up, having established that all the major neurotransmitters clear the cleft mainly via diffusion, I wanted to know how long it takes them to be reabsorbed from the extracellular space outside the cleft. This matters a lot because it determines how localized the signal is, for purposes of receptors outside the cleft itself. This was more difficult to figure out, because lots of papers would offhandedly claim that the reabsorption time was a millisecond or two - a number which happens to roughly match the time it takes small molecule neurotransmitters to diffuse out of the cleft... and therefore the number one would end up with if one incorrectly assumed that the transmitter is mostly cleared from the cleft via reabsorption, and tried to measure the reabsorption time via looking at the time it takes the cleft to clear. And because that number was uninformedly repeated in many papers, the LLM would of course also repeat it often. But when I dug into the papers, it was clearly wrong; proper measurements were scarce, but those I found suggested reabsorption on a slower timescale (and dominated by reabsorption into glia rather than neurons, in many cases).

So it's a similar pattern to the first half of this comment: there's some thing which people will often say, and therefore the LLM will say that thing in response to a question. But if one asks "why do people say this thing?", and looks for object-level evidence, the thing is clearly wrong. It seems like there should be some LLM UI/scaffolding which would make it a lot easier to get the real answers in those cases, but I don't know what that UI/scaffolding would look like.

Another guise of the same problem: it would be great if an LLM could summarize papers for me. Alas, when an LLM is tasked with summarizing a paper, I generally expect it to "summarize" the paper in basically the same way the authors summarize it (e.g. in the abstract), which is very often misleading or entirely wrong. So many papers (arguably a majority) measure something useful, but then the authors misunderstand what they measured and therefore summarize it in an inaccurate way, and the LLM parrots that misunderstanding.

Plausibly the most common reason I read a paper at all is because I think something like that might be going on, but I expect the paper's data and experimental details can tell me what I want to know (even though the authors didn't understand their own data). If I didn't expect that sort of thing, then I could just trust the abstract, and wouldn't need to read the paper in the first place, in which case there wouldn't be any value-add for an LLM summary.

I work mostly as a distiller (of x-risk-relevant topics). I try to understand some big complex thing, package it up all nice, and distribute it. The "distribute it" step is something society has already found a lot of good tech for. The other two steps, not so much.

Loom is lovely in the times I've used it. I would love to see more work done on things like this, things that enhance my intelligence while keeping me very tightly in the loop. Other things in this vein include:

  • memory augmentation beyond just taking notes (+Anki where appropriate). I'm interested in both working memory and long-term memory, but more so in the former.
  • For digesting complex pieces, I'd like something better than a chat window for interactively getting up to my desired level of familiarity with some existing work. When I'm digesting the paper, do I want the one-sentence summary, the abstract, or to have a better understanding than the researchers behind it? NotebookLM is sort of doing this but I've found it lacking for this task for UI reasons (I also wish that it would automatically bring in other relevant sources).

If you want a semi-ambitious idea shaped around UI, how about "serendipity generation for AI Safety/other related topics"?

This would be a LW add-on or separate site with a few features:

  • Showing random extracts of interesting articles/blogposts/papers, curated by humans (perhaps in a "news ticker" style)
  • Users could sign up to get matched for one on ones with other users based on relative domain expertise (so matching people who are unlikely to already know one another), for friendly coffee chat or professional networking
  • An update board for people to post things they're working on, as a scrolling Twitter-like feed with no algorithm

Designing this well so it isn't a pain to use would be a fine balance, but I think there's a lot of potential for creating "a big target area for luck" and removing friction between different social cliques in AIS for idea sharing. You could also offer this setup to other fields if you needed revenue; I think physics or CS in general might benefit from such a service.

Another idea I've been thinking about:

Consider the advantage prediction markets have over traditional news. If I want to keep track of some variable X, such as "the amount of investment going into Stargate", and all I have are traditional news reports, I have to constantly and manually sift through all related reporting in search of relevant information. With prediction markets, however, I can just bookmark this page and check it periodically.

An issue with prediction markets is that they're not well-organized. You have the tag system, but you don't know which outcomes feed into other events, you don't necessarily know what prompts specific market updates (unless someone mentions that in the comments), you don't have a high-level outline of the ontology of a given domain, etc. Traditional news reports offer some of that, at least: if competently written and truthful, they offer causal models and narratives behind the events.

It would be nice if we could fuse the two: an interface for engaging with the news that combines the conciseness of prediction-market updates with the attempt at model-based understanding offered by traditional news.

One obvious idea is to arrange it into the form of a Bayes net. People (perhaps the site's managers, perhaps anyone) could set up "causal models", in which specific variables are downstream of other variables. Other people (forecasters/experts hired by the project's managers, or anyone, like in prediction markets) could bet on which models are true[1], and within the models, on the values of specific variables[2]. (Relevant.)

Among other things, this would ensure built-in "consistency checks". If, within a given model, a variable X is downstream of outcomes A, B, C, such that X only happens if all of A, B, C happen, but the market-estimated P(X) isn't equal to P(ABC), this would suggest either that the prediction markets are screwing up, or that there's something wrong with the given model.
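
As a minimal sketch of what such a check could look like (using only marginal prices, since without a joint market P(ABC) isn't directly observable; even then, the Fréchet bounds already catch some inconsistencies; the prices below are hypothetical):

```python
# Hedged sketch: if a model asserts X = A AND B AND C, the market price of X must at
# least fall within the Frechet bounds implied by the parents' marginal prices.
def conjunction_bounds(p_parents: list[float]) -> tuple[float, float]:
    lower = max(0.0, sum(p_parents) - (len(p_parents) - 1))
    upper = min(p_parents)
    return lower, upper

def check_model(p_x: float, p_parents: list[float]) -> str:
    lo, hi = conjunction_bounds(p_parents)
    if not (lo <= p_x <= hi):
        return f"inconsistent: P(X)={p_x} outside [{lo:.2f}, {hi:.2f}] implied by the model"
    return "consistent (so far)"

# Hypothetical market prices: either the markets are mispriced or the model is wrong.
print(check_model(p_x=0.60, p_parents=[0.5, 0.7, 0.8]))
```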

Furthermore, one way for this to gain visibility/mainstream appeal is if specific high-status people or institutions set up their own "official" causal models. For example, an official AI 2027 causal model, or an official MIRI model of AI doom which avoids the multiple-stage fallacy and clearly shows how it's convergent.

Tons of ways this might not work out, but I think it's an interesting idea to try. (Though maybe it's something that should be lobbed over to Manifold Markets' leadership.)

  1. ^

    Or, perhaps in an even more fine-grained manner, which links between different variables are true.

  2. ^

    Ideally, with many variables shared between different models.

Yeah, this one has been pretty high on my list (or, a fairly similar cluster of ideas).

Hm. Galaxy-brained idea for how to use this as a springboard to make prediction markets go mainstream:

  • Convince friendly prominent alignment research institutions (e. g. MIRI, the AI Futures project) to submit their models to the platform.
  • Socially pressure AGI labs to submit their own official models there as well, e. g. starting from Anthropic. (This should be relatively low-cost for them; at least, inasmuch as they buy their own hype and safety assurances.)
  • Now you've got a bunch of high-profile organizations making implicit official endorsements of the platform.
  • Move beyond the domain of AI, similarly starting with friendly smaller organizations (EA orgs, etc.) then reaching out to bigger established ones.
  • Everyone in the world ends up prediction-market-pilled.
  • ???
  • Civilizational sanity waterline rises!

(Note that it follows the standard advice for startup growth, where you start in a very niche market, gradually eat it all, then expand beyond this market, iterating until your reach is all-pervading.)

You could make something like Blind (the big tech employee anon social net) for unionizing/coordinating AI lab employees. "I'll delay my project if you delay yours." Or anon danger polls of employees at labs. I'm not sure of the details but I suspect there is fruit. ("Your weekly update: 25% of your coworkers expect to see 2035.")

I think there's a possibility for UI people to make progress on the reputation-tracking problem by virtue of tight feedback loops relative to people thinking more abstractly about it. The most rapid period of learning in this regard that I know of is the early days at PayPal/eBay, where they were burning millions a day in fraud at certain points.

Secondly: the chat interface for LLMs is just bad for power users. Ai Labs is slightly better but still bad.

Edit: meant aistudio

I think there's a possibility for UI people to make progress on the reputation-tracking problem by virtue of tight feedback loops relative to people thinking more abstractly about it.

Are there particular reputation-tracking-problems you're thinking of? (I'm sure there are some somewhere, but I'm looking to get more specific)

I'm working on a poweruser LLM interface but honestly it's not going to be that much better than Harpa AI or Sider.

Feels complicated to atomize, for some of the same reasons it's a candidate. I think the most successful modern example was PayPal, where they had the feedback loop of millions a day being lost to fraud at one point early on.

Ai Labs is slightly better but still bad.

Could you give a link to this or a more searchable name? "Ai Labs" is very generic and turns up every possible result. Even if it's bad, I'd be interested in investigating something "slightly better" and hearing a bit about why.

I've linked to a shortform describing why I work on wise AI advisors. I suspect there's a lot of work that could be done to figure out the best user interface for this:

https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/?commentId=Zcg9idTyY5rKMtYwo

If you're interested, I could share some thoughts on specific things to experiment with.

I still want something even closer to GiveWell, but for AI Safety (though it is easier to find where to donate now than before). Hell, I wouldn't mind if LW itself had recommended charities in a prominent place (though I guess LW now mostly asks for Lightcone donations instead).

(Apologies if this isn't the right post to comment this on)

One thing that I've just noticed is that there is no "Share post with another user" button. To share a post, I have to:

  1. Hit share
  2. Copy the link
  3. Go to the user's page
  4. Click message
  5. Paste the link
  6. Hit "Submit"

That's a lot of friction to get a friend's eyeballs on a post. I think it would make collaborating on LW easier if sharing a post with another user was more like sharing a post on X (formerly Twitter).

--
Related to sharing, I notice that clicking the "Share" button on iPhone takes a long time to load. This limits the number of LW posts I'm willing to share to my Matter reading queue when triaging the front page. There is an argument to be made that this is a good thing, though.

Does anything from this document seem interesting to you?

Having a simple CLI tool to convert file formats, generate embeddings, and share them in a standard format seems relevant to increasing the transparency of the planet.

You might particularly want to increase transparency of what's going on at ASI companies or governments, or what's going on at LessWrong.

One big UI-shaped problem is that when I visit the website of an extremely corrupt and awful company with a lot of scandals, they often trick me into thinking they are totally good, because I'm too lazy to search up their Wikipedia page.

What if we create a new tiny wiki as a browser extension, commenting on every website?

The wiki should only say one or two sentences about every website, since we don't want to use up too much of the user's screen space while she is navigating the website.

The user should only see the wiki when scrolled to the top of the webpage. If the user clicks "hide for this site," the wiki collapses into a tiny icon (which is red or green depending on the organization's overall score). If the wiki for one website has already been shown for 5 minutes, it automatically hides (but it expands again the next week).

Details

Whenever people make an alternative to Wikipedia, they always start off by simply copying Wikipedia.

This is okay! Wikipedia as a platform does not own the work of its editors; its editors are loyal not to the platform but to the idea of sharing their knowledge, and don't mind if you copy their work to your own platform. The content is freely licensed.

The current Wikipedia entries are longer than one or two sentences, so you might need to summarize them with AI (sadly). But as soon as a user edits a summary, her edit replaces the AI slop.


Where do we display the one or two sentences about the website? The simplest way is to create a thin horizontal panel at the top or bottom of the website. A more adaptive way is to locate some whitespace on the website and add it there.

It might only display on certain webpages within a website. E.g. for a gaming website, it might not display while the user is gaming, since even one or two sentences uses up too much screen space. It might only display on the homepage of the gaming website.

Font size is preferably small.


If the user hovers over the summary, it opens up the full Wikipedia page (in a temporary popup). If the website has no Wikipedia page (due to Wikipedia's philosophy of "deletionism"), your wiki users can write their own. Even if it has a Wikipedia page, your wiki users can add annotations to the existing Wikipedia page (e.g. if they disagree with Wikipedia's praise of a bad company).

In addition to the full Wikipedia page, there might be a comments section (Wikipedia frustratingly disallows comments), and possibly a web search.

 

Worthwhile gamble

80%: trying to create it will fail. But 20%: it works, at least a little.

But the cost is a mere bit of UI work, and the benefit is huge.

It could greatly help the world judge bad companies! This feels "unrelated to AI risk," but it helps a lot if you think about it.

If it works, then whichever organization implements it first will win a lot of donations, and act as the final judge in savage fights over website reputations.
