dysangel

coder

A neural net can approximate any function, and since LLMs are neural nets, I don't see why they couldn't also approximate any function or behaviour given the right training data. Given how close they are getting to reasoning with essentially unsupervised learning on training data of varying quality, I think they will continue to improve and reach impressive reasoning abilities. I think of the "language" part of an LLM as a communication layer on top of a general neural net. Being able to "think out loud", with a train of thought and a scratchpad to work with, is a useful capability for a neural net, similar to our own trains of thought IMO. It is also useful from a safety standpoint: it would be quite a feat for backpropagation itself to manage to betray us before the model's own visible thoughts do.
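The universal approximation point can be sketched concretely. Below is a minimal, illustrative example (all names and hyperparameters are my own choices, not from the comment): a one-hidden-layer tanh network trained by plain full-batch gradient descent to approximate sin(x), the classic setting where the universal approximation theorem applies.

```python
import numpy as np

# Illustrative sketch: a 1-32-1 tanh network fit to sin(x) on [-pi, pi]
# with hand-rolled backpropagation. Hyperparameters are arbitrary choices.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32
W1 = rng.normal(0.0, 1.0, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, 1))
b2 = np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # hidden layer, network output

_, pred0 = forward(x)
initial_mse = float(np.mean((pred0 - y) ** 2))

for _ in range(10000):
    h, pred = forward(x)
    err = pred - y                  # gradient of MSE w.r.t. output (up to a constant)
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)  # backprop through tanh
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
final_mse = float(np.mean((pred - y) ** 2))
print(f"MSE before training: {initial_mse:.4f}, after: {final_mse:.4f}")
```

With enough hidden units and data, the same recipe drives the error on any continuous target function toward zero, which is the intuition behind expecting that the right training data can elicit arbitrary behaviour.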