Liliet B

"Many worlds can be seen as a kind of non-local theory, as the nature of the theory assumes a specific time line of "simultaneity" along which the universe can "split" at an instant."
As I understand it, no it doesn't. The universe split is also local: if a difference at point A preserves the same particles at point B, then at point B we still have a single universe (while at point A we have multiple). The configurations merge together. It's more like vibration than splitting into paths that go off in different directions. Macroscopic physics is inherently predictable, meaning that all the multiple worlds ultimately end up doing roughly the same thing!
Except... (read more)
why haven't they been able to solve it yet?
the magic part.
Bad / insufficiently curiosed-through advice is often infuriating because the person giving it seems to be assuming you're an idiot / came to them as soon as you noticed the problem. Which is very rarely true! Generally, between spotting the problem and talking to another person about it, there's a pretty fucking long solution-seeking stage. Where "pretty fucking long" can be anything between ten minutes ("i lost my pencil and can't find it )=") (where actually common sense suggestions MIGHT be helpful - you might not have gone through the whole checklist yet) and THE PERSON'S ENTIRE LIFETIME (anything relating to... (read more)
When I worldbuild with magic, this is somehow automatically intuitive - so I always end up assuming (if not necessarily specifying explicitly) a 'magic field' or smth that does the thermodynamic work and that the bits of entropy are shuffled over to. Kind of like how looking something up on the internet is 'magic' from an outside observer's POV if people only have access nodes inside their heads and cannot actually show them to observers, or like how extracting power from the electricity grid into devices is 'magic' under the same conditions.
Except that people didn't explicitly invent and build the internet and the electricity grid involved first. So more like how speech is basically telepathy, as Eliezer specified elsewhere~
A prior probability with no experience in a domain at all is an incoherent notion, since that implies you don't know what the words you're using even refer to. Priors include all prior knowledge, including knowledge about the general class of problems like the one you're trying to eyeball a prior for.
If you're asked to perform experiments to find out what tapirs eat - and you don't know what tapirs even are, except that they apparently eat something, judging by the formulation of the problem - you're already going to assign a prior of ~0 to 'they eat candy wrappers and rocks and are poisoned by everything and anything else, including non-candy-wrapper plastics... (read more)
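A toy sketch of this point (all hypotheses and numbers invented for illustration, Python): even "before any experiments", background knowledge about animals in general concentrates the prior mass on normal-animal hypotheses and leaves ~0 for the absurd one.

```python
# Hypothetical prior over what tapirs eat, built purely from
# background knowledge about the class "animal" - no tapir data yet.
prior = {
    "eats plants": 0.55,
    "eats other animals": 0.25,
    "eats both": 0.199,
    "eats candy wrappers and rocks, poisoned by all else": 0.001,
}

# A proper prior still sums to 1...
assert abs(sum(prior.values()) - 1.0) < 1e-9

# ...but the 'absurd' hypothesis gets ~0, not literally 0.
print(min(prior, key=prior.get))
```

The key design point: the ~0 isn't "no information", it's a compressed summary of everything already known about animals in general.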
Ordinary language includes mathematics.
"One, two, three, four" is ordinary language. "The thing turned right" is ordinary language (it's also multiplication by -i).
Feynman was right, he just neglected to specify that the ordinary language needed to explain physics would necessarily include the math subset of it.
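A minimal check of the "turned right is multiplication by -i" claim, using Python's built-in complex numbers: represent a heading as a unit complex number, and multiplying by -1j rotates it 90° clockwise.

```python
# Headings as unit complex numbers: east = 1, north = 1j.
north = 1j
east = 1 + 0j

# "The thing turned right": multiply the heading by -i.
print(north * -1j)  # north turns right -> east, (1+0j)
print(east * -1j)   # east turns right -> south, -1j
```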
As an ADHD person for whom "reduce impulsiveness" is about as practical a goal as "learn telekinesis", reducing delay is actually super easy. Did you know people feel good about completing tasks and achieving goals? All you have to do to have a REALLY short delay between starting the task and an expected reward is explicitly, in your own mind, define a sufficiently small sub-task as A Goal. Then the next one, you don't even need breaks in-between if it goes well - even if what you're doing is as inherently meaningless as, I dunno, filling in an excel table from a printed one, you can still mentally reward yourself for each... (read more)
The ultimate prior is maximum entropy, aka "idk", aka "50/50: either happens or not". We never actually have it, because we start gathering evidence about how the world is before our brains even form enough to make any links in it.
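A quick check (Python standard library) that the uniform "50/50" split really is the entropy maximum for a binary question - any other split carries less entropy, i.e. already encodes some prior knowledge.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a binary distribution (p, 1-p)."""
    return sum(-q * math.log2(q) for q in (p, 1 - p) if q > 0)

print(entropy(0.5))  # 1.0 bit - the maximum-entropy "idk" prior
print(entropy(0.9))  # less than 1 bit - this prior already knows something
```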
Mirrors are useful even though you don't expect to see another person in them.
Sometimes you need a person to be a mirror to your thoughts.
It does affect your point.
I would propose an approximation of the system where each node has a terminal value of its own (which could be 0 for completely neutral nodes - but actually no, it can't: the reinforcement mechanisms of our brain inevitably give it something like 0.0001 because I heard someone say it was cool once, or -0.002 because it reminds me of a sad event in my childhood)
As a simple example, consider eating food when hungry. You get a terminal value on eating food - the immediate satisfaction the brain releases in the form of chemicals as a response to recognition of the event, thanks to evolution - and an instrumental value on eating food, which... (read 663 more words →)
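A hypothetical sketch of that approximation (all node names and numbers invented for illustration, Python): every node carries a small terminal value of its own, and its total value adds the instrumental value of the nodes it leads to - so "eat food" is worth both its immediate reward and its contribution to "stay alive".

```python
# Hypothetical value network: terminal value per node, never exactly 0.
terminal = {
    "eat food": 5.0,              # immediate chemical reward when hungry
    "stay alive": 10.0,
    "heard it was cool once": 0.0001,
}
leads_to = {
    "eat food": ["stay alive"],   # eating is also instrumental
    "stay alive": [],
    "heard it was cool once": [],
}

def total_value(node):
    # Total value = own terminal value + total values of downstream nodes.
    return terminal[node] + sum(total_value(n) for n in leads_to[node])

print(total_value("eat food"))  # terminal 5.0 + instrumental 10.0 = 15.0
```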