Do these things feel logically impossible per se?
Yes, same qualia as looking at an Escher staircase IRL, but it feels more fundamental.
Do they feel impossible because they contradict other things that you believe are true?
No. I can't break down why they feel impossible.
Do you draw the conclusion that the impossible-seeming things genuinely cannot exist or (in the case of self-perception?) genuinely do not exist, despite appearances?
Kinda, but I can't maintain that conclusion, because milliseconds later I perceive the "impossible" qualia again.
Free will ...
5. A second machine, designed solely to neutralize an evil super-intelligent machine, will win every time if given similar amounts of computing resources (because specialized machines always beat general ones).
This implies you have some resource you didn't fully imbue the first AI with, but that you still have available to imbue to the second. What is that resource?
The claim that specialized machines always beat general ones seems questionable in the context of an AGI. Actually, I'm not sure I understand the claim in the first place. Maybe he means it by analogy to a supervised learning system: if you take a network trained to recognize cat pictures and also train it to recognize dog pictures, then, given a fixed number of parameters, you can expect it to get worse at recognizing cat pictures.
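To make the fixed-capacity intuition concrete, here is a minimal sketch (my illustration, not anything from the original claim; assumes PyTorch and synthetic stand-in tasks): a deliberately tiny network is fit to one task, then to a second, and its accuracy on the first drops because the same few parameters now have to serve both.

```python
# Minimal sketch of capacity competition / forgetting under a fixed
# parameter budget. The tasks are synthetic stand-ins for "cat pictures"
# and "dog pictures"; torch is assumed available.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(seed):
    # Each task is a random linear rule over 20 features.
    g = torch.Generator().manual_seed(seed)
    w = torch.randn(20, generator=g)
    x = torch.randn(2000, 20, generator=g)
    return x, (x @ w > 0).float()

def fit(model, x, y, steps=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x).squeeze(-1), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return ((model(x).squeeze(-1) > 0).float() == y).float().mean().item()

# Deliberately small capacity so the two tasks compete for parameters.
model = nn.Sequential(nn.Linear(20, 4), nn.Tanh(), nn.Linear(4, 1))

xa, ya = make_task(1)  # task A: "cat pictures"
xb, yb = make_task(2)  # task B: "dog pictures"

fit(model, xa, ya)
print("task A accuracy, trained on A only:", accuracy(model, xa, ya))

fit(model, xb, yb)  # same parameters now also fit task B
print("task A accuracy, after also fitting B:", accuracy(model, xa, ya))
```

The effect shown is sequential forgetting; joint training on both tasks would soften the tradeoff but, with a small enough parameter budget, not eliminate it.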
I feel a sense of impossibility that "anything could exist at all".
I feel a sense of impossibility when I contemplate the recursive nature of perceiving myself perceiving thoughts.
I feel a sense of impossibility about something unspeakable that comes before and is outside anything else.
How are we sure we mean the same thing by the word consciousness, though? All I can tell for sure is that people think consciousness is "impossible" (because they try to invent quantum phlogiston to explain it), and something about consciousness engendering moral...
spectrum of qualia of rapid muscle movement:
1. wiggle back and forth with no delay
2. effort required to make individual movements, one after another
some actions are only sometimes available to me in form #2, e.g. glossolalia and patterns of rapid eye movement
sometimes it seems like a matter of training, e.g. learning to wiggle my ears
I've been experimenting a bit with using vimwiki: https://github.com/vimwiki/vimwiki
This topic (affordance/encoding) is one of the universal entry points to the systematization of fully general agency.
Submission: low-bandwidth oracle, ask:
IFF I'm going to die with P > 80% in the next 10 years, while >80% (modulo the natural death rate) of the rest of humanity survives for at least 5 more years, then: was what killed me in the reference class:
Repeat to drill down and learn the most important hedges for personal survival (sketched below).
The "rest of humanity survives" condition reduces the chance the question becomes entangled with the eschaton.
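What the drill-down loop might look like (my formalization; the reference classes and the `ask` interface are hypothetical stand-ins, not part of the submission):

```python
# Sketch of the drill-down protocol: each query spends one yes/no bit
# from the low-bandwidth oracle. The class tree below is a hypothetical
# example, not from the original submission.
CLASSES = {
    "disease": ["infection", "cancer", "other illness"],
    "accident": ["vehicle", "fall", "other accident"],
    "violence": ["war", "crime", "other violence"],
}

CONDITION = ("I'm going to die with P>80% in the next 10 years while >80% "
             "of the rest of humanity survives for at least 5 more years")

def drill_down(ask):
    """`ask` maps a yes/no question string to a bool (the oracle)."""
    for broad, subclasses in CLASSES.items():
        if ask(f"IFF {CONDITION}, was what killed me in class '{broad}'?"):
            for sub in subclasses:
                if ask(f"IFF {CONDITION}, was what killed me in class '{sub}'?"):
                    return broad, sub
            return broad, None  # broad class resolved, subclass not
    return None, None  # cause outside the listed classes
```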
i.e. I'm pointing out that selfish utility functions are ...
Decided to upload the source to GitHub now that I know Arbital's license: https://github.com/emma-borhanian/arbital-scrape
Please do not re-download the pages from arbital.com without good reason; I've added a single line of code to disable this. This is also why I'm not uploading the source code to GitHub, but did include it in the zip file you can download.
Running the code as-is will simply regenerate the HTML from the already-downloaded raw JSON (see the sketch below).
Edit: This is being downvoted. I'm happy to reevaluate this and upload to github instead of merely including the source in the zip file. Please comment if this is what you wish.
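For concreteness, the regenerate-from-cache step might look roughly like this (a sketch; the file layout and JSON field names are my guesses, not the actual arbital-scrape internals):

```python
# Sketch: rebuild HTML pages from already-downloaded raw JSON, with no
# network access. Directory names and the "title"/"text" fields are
# hypothetical, not the real scraper's schema.
import html
import json
import pathlib

RAW_DIR = pathlib.Path("raw")    # one cached JSON file per page
OUT_DIR = pathlib.Path("html")   # regenerated pages

def render(page: dict) -> str:
    title = html.escape(page.get("title", "untitled"))
    body = page.get("text", "")
    return f"<html><head><title>{title}</title></head><body>{body}</body></html>"

OUT_DIR.mkdir(exist_ok=True)
for path in RAW_DIR.glob("*.json"):
    page = json.loads(path.read_text())
    (OUT_DIR / f"{path.stem}.html").write_text(render(page))
```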
Simulacra as free-floating Schelling points could actually be good if they represent mathematical truth about coordination between agents within a reference class, intended to create better outcomes in the world?
But if a simulacrum corresponds to truth because people conform their future behavior to its meaning in the spirit of cooperation, does it still count as a simulacrum?
It feels like you're trying to implicitly import all of good intent, in its full potential, stuff it into the word "truth", and claim it's incompatible with the use of Schelling points.
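To ground the "Schelling points can create better outcomes" intuition, here is a toy coordination game (my illustration, not from the thread): the focal option has no intrinsic advantage, but becomes self-fulfillingly correct once both agents conform to it.

```python
# Toy coordination game: payoffs come only from matching, so the focal
# point's "truth" is created by the agents' conformity to it.
import random

OPTIONS = ["A", "B", "C"]
FOCAL = "B"  # the free-floating Schelling point

def play(agent1, agent2):
    return 1 if agent1() == agent2() else 0  # payoff only on coordination

def naive():
    return random.choice(OPTIONS)  # ignores the focal point

def focal():
    return FOCAL  # conforms to the focal point

trials = 10_000
print("naive vs naive:", sum(play(naive, naive) for _ in range(trials)) / trials)
print("focal vs focal:", sum(play(focal, focal) for _ in range(trials)) / trials)
```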
...Do you ever get the feeling that you're unsure what was true until the moment you said it? Like on the inside you're this highly contextual, malleable thing, but when you act it resolves, and then you become consistent with something for a time?
Do you ever feel like you're writing checks you can't quite cash, running ahead, saying as true what you plan to *make* true, what becomes true in the saying of it? Do you ever experience imposter syndrome?
Do you ever feel like we're all playing a game of pretend and nobody can quite step out of character?
> From the inside, this is an experience that in-the-moment is enjoyable/satisfying/juicy/fun/rewarding/attractive to you/thrilling/etc etc.
People's preferences change in different contexts because they are implicitly always trying to comply with what they think is permissible/safe before trying to get what they want, up to the point where some stake outweighs this, along many different axes of things one can have a stake in.
To see people's intrinsic preferences, we have to consider that people often aren't getting what they want and are tricked into wanting suboptimal things...
Without bound, as in: there does not exist some specific bound that you will never surpass.
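Formally (my rendering, treating the growing quantity as a sequence $x_n$): "without bound" means $\forall B \,\exists n : x_n > B$, whereas having a bound would mean $\exists B \,\forall n : x_n \le B$. The swap in quantifier order is the entire difference.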
Make yourself easier for your past self to index on. E.g., for an evil version: if Horde Prime wants Horde clones to work to benefit Horde Prime, Horde Prime can work to place himself at the center of the universe, which he previously programmed the other Hordes to care about.
I'm saying something closest to #3. In order to specify an individual, you have to be ... (read more)