I encountered this while reading about an obscure estradiol ester, estradiol undecylate, used for hormone replacement therapy and for treating prostate cancer. It's very useful because of its extremely long half-life, but it was discontinued. I had to reread the article to be sure I understood correctly: the standard dose, chosen arbitrarily in the first trials, was hundreds of times larger than necessary. This led to massive estrogen overdoses and severe side effects that killed many people through cardiovascular complications, and yet these insane doses remained typical for decades and may have contributed to its discontinuation.

Although it has been over a decade, decent waterproof phone mounts now exist, too.

Thank you for writing this; it's by far the strongest argument for taking this problem seriously tailored to leftists that I've seen, and I'll be sharing it. Hopefully the frequent (probably unavoidable) references to EA don't turn them off too much.

Answer by brambleboy · Feb 07, 2024

Here's why determinism doesn't bother me. I hope I get it across.

Deterministic systems still have to be simulated to find out what happens. Take cellular automata, such as Conway's Game of Life or Wolfram's Rule 110. The result of all future steps is determined by the initial state, but we can't practically "skip ahead" because of what Wolfram calls 'computational irreducibility': despite the simplicity of the underlying program, there's no way to reduce the output to a calculation that's much cheaper than just simulating the whole thing. The same goes for a mathematical structure like the Mandelbrot set: its appearance is completely determined by the function, and yet we couldn't predict what we'd see until we computed it. In fact, all math is like this.
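To make the "no skipping ahead" point concrete, here's a minimal Python sketch of Rule 110 (the grid size, starting state, and helper names are my own illustrative choices): the update rule fits in a few lines, yet the only general way to know what the pattern looks like after N steps is to actually compute all N steps.

```python
RULE = 110  # the rule number encodes the new cell value for each of the 8 neighborhoods

def step(cells):
    """Apply one step of Rule 110 to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((RULE >> neighborhood) & 1)  # look up the output bit in the rule number
    return out

# Start from a single live cell and just watch what happens.
row = [0] * 40
row[-1] = 1
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Nothing about the output is "random", but there's no shortcut formula for it either; you learn the pattern by running the process.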

What I'm getting at is that all mathematical truths are predetermined, and yet I doubt this gives you a sense that being a mathematician is pointless, because obviously these truths have to be discovered. As with the universe: the future is determined, and yet we, or even a hypothetical outsider with a massive computer, have to discover it.

Our position is better than that, though: we're not just looking at the structure of the universe from the outside, we're within it. We're part of what determines the future: it's impossible to calculate everything that happens in the future without calculating everything we humans do. The universe is determined by the process, and the process is us. Hence, our choices determine the future.

I disagree that the Reversal Curse demonstrates a fundamental lack of sophistication of knowledge on the model's part. As Neel Nanda explained, it's not surprising that current LLMs store A -> B but not B -> A, since they're basically lookup tables, and this is definitely an important limitation. However, I think this is mainly due to a lack of computational depth. LLMs can perform that kind of deduction when the information is external: if you first ask who Tom Cruise's mom is, the model can then answer who Mary Lee Pfeiffer's son is. If the LLM already knew the first part, you could just prompt it to answer the first question before asking the second (see the sketch below).

I suspect that a recurrent model like the Universal Transformer would be able to perform the A -> B to B -> A deduction internally, but for now LLMs must do multi-step computations like that externally, with a chain of thought. In other words, they can deduce new things, just not in a single forward pass or during backpropagation. If that doesn't count, then all other demonstrations of multi-step reasoning in LLMs don't count either. This deduced knowledge is usually discarded, but we can make it permanent with retrieval or fine-tuning. So I think it's wrong to say that this entails a fundamental barrier to wielding new knowledge.
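Here's a minimal sketch of what I mean by doing the deduction externally (the `ask_llm` helper is hypothetical, standing in for whatever chat API you use; the prompts are illustrative): the model's own answer to the forward question is placed back in the context, so the reversed question becomes in-context reasoning rather than a pure parametric lookup.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to some chat model and return its reply."""
    raise NotImplementedError("stand-in for whatever LLM API you actually use")

# Step 1: retrieve the A -> B fact the model already stores.
forward = ask_llm("Who is Tom Cruise's mother?")

# Step 2: feed that answer back in, so the B -> A question can be
# answered from the context rather than from the weights alone.
reversed_answer = ask_llm(
    f"Fact: {forward}\n"
    "Given that fact, who is Mary Lee Pfeiffer's son?"
)
print(reversed_answer)
```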