Dom Polsinelli

Comments


My interpretation of that was: whenever you're forming an opinion or having a discussion in which facts are relevant, make sure you actually know the statistics. An example is an argument (discussion?) my whole family had mid-COVID. The claim of some people was that, in general, COVID was only about as bad as the flu. Relevant statistics were readily available for things like mortality rate and total deaths, and some of the people making that claim were ignorant of them (off by OOMs). With COVID it seems obvious, but for other things maybe not. Things people frequently have strong opinions about but don't frequently look up include: the return on investment of college, the number of deaths due to firearms, the cost of alternative energy sources, and how much taxes would actually change under a given bill.

I agree with a lot of what you said, but I am generally skeptical of any emulation that does not work from a bottom-up simulation of neurons. We really don't know how and what causes consciousness, and I think it can't be ruled out that something with the same inputs and outputs at a high level misses something important that generates consciousness. I don't necessarily believe in p-zombies, but if they are possible, it seems they would be built by copying the high-level behavior while leaving out the low-level functions.

Also, on a practical level, I don't know how you could verify that a high-level recreation is accurate. If my brain were scanned into a supercomputer that could run molecular dynamics on every molecule in my brain, I think there is very little doubt that the result would be an accurate reflection of my brain. It might not be me in the sense that I would have no continuity of consciousness, but it would be me in the sense that it would behave like me in every circumstance. Conversely, a high-level copy could miss some subtle behavior that is not accounted for in the abstraction used to form the model of the brain. If an LLM were trained on everything I ever said, it could imitate me for a good long while, but it wouldn't think the same way I do. A more complex model would be better, but not certain to be perfect. How could we ever be sure our understanding was complete enough that we didn't miss something subtle that emerges from the fundamentals?

Maybe I'm misinterpreting the post, but I don't see a huge benefit from reverse engineering the brain in terms of simulating it. Are you suggesting something other than an accurate simulation of each neuron and its connections? If you are, I think that method is liable to miss something important. If you are not, I think understanding each piece, even without understanding the whole, is sufficient to emulate a brain.