Is there a version of the Sequences geared towards instrumental rationality? I can find (really) small pieces such as the 5-Second Level LW post and intelligence.org's Rationality Checklist, but I can't find any overarching course or detailed guide to actually improving instrumental rationality.

What if the thermodynamic miracle has no effect on the utility function because it occurs elsewhere? Taking the same example, the AI simulates sending the signal down the ON wire... and it passes through, but the 0s that come after the signal are miraculously turned into 0s.

This way the AI does indeed care about what happens in this universe. Assuming the AI wants to turn on the second AI, it could have sent some other signal down the ON wire and ended up simulating failure under any kind of thermodynamic miracle, or it could have sent the ON signal and also simulated success, but only when the thermodynamic miracle appears after the last bit is transmitted (or before the first bit is transmitted). So it no longer behaves as if it believes that sending a signal down the wire accomplishes nothing; instead, it believes that sending the ON signal has higher utility.
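To make the loophole concrete, here is a toy sketch (my own illustration, not the original construction; the wire length, signal encoding, miracle span, and utility function are all made-up assumptions):

```python
# Toy model of the loophole described above, assuming a very simplified
# picture of the "thermodynamic miracle" construction. All constants here
# are illustrative, not taken from the original post.

WIRE_LENGTH = 8           # bits transmitted on the ON wire
ON_SIGNAL = [1, 1, 1, 1]  # the second AI turns on iff the wire starts with this prefix
MIRACLE_SPAN = 4          # the miracle forces this many consecutive bits to 0


def transmit(signal, miracle_start):
    """Pad the signal to the wire length, then apply a 'thermodynamic
    miracle' that zeroes MIRACLE_SPAN bits starting at miracle_start."""
    wire = (signal + [0] * WIRE_LENGTH)[:WIRE_LENGTH]
    for i in range(miracle_start, min(miracle_start + MIRACLE_SPAN, WIRE_LENGTH)):
        wire[i] = 0
    return wire


def utility(signal, miracle_start):
    # Toy utility: 1 if the second AI turns on, 0 otherwise.
    wire = transmit(signal, miracle_start)
    return 1.0 if wire[:len(ON_SIGNAL)] == ON_SIGNAL else 0.0


# Intended behaviour: the AI only evaluates worlds where the miracle hits
# the signal itself, so sending ON accomplishes nothing in its model.
print(utility(ON_SIGNAL, miracle_start=0))     # 0.0 -- signal erased

# The loophole: the miracle strikes *after* the last bit of the signal,
# turning bits that are already 0 into 0. The signal survives, so
# "send ON" now has strictly higher utility than "send nothing".
print(utility(ON_SIGNAL, miracle_start=4))     # 1.0 -- signal gets through
print(utility([0, 0, 0, 0], miracle_start=4))  # 0.0 -- not sending still fails
```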

This probably means that I don't understand what you mean... How does this problem not arise in the model you have in your head?

Tried doing any of the above and failed

I used to go to sleep at 2200 punctually every day (a useful habit), but over the past 2 weeks my schedule has completely fallen apart again. I shall try to rebuild my schedule, since it did work for about 6 months before getting interrupted by a vacation.

Fix: I hope that simply by posting this here I'll be aware of the problem enough for it to fix itself. Ironically, I'm posting this 15 minutes before midnight...

Tried doing any of the above and failed

I managed to make myself feel good when I worked hard in school and revised to score highly on tests, but for the past 4 months or so that good feeling hasn't come back when I revise or study (even when doing homework!), and as a result I'm doing poorly once again.

Fix: I should get some chocolate and eat it whenever I study. (Maybe get something bitter and eat that whenever I think I'm wasting time!)

Tried doing any of the above and failed

I managed to cut my shower time from 20 minutes to 4 minutes... and now I'm showering for 20 minutes again.

Fix: Same as the first one.

Learned something new about your beliefs, behavior, or life that surprised you

I thought I understood that scoring well on my soon-approaching finals was important, but I realise I don't actually believe it. I know the arguments for it, I think it is true, and yet I don't understand it on a level deep enough to get some work done, mainly because of my failure to multiply. The short-term gain from messing around always seemed to outweigh the long-term gain from studying.

Fix: No clue. Does anyone know how to believe something fully enough to be able to act on it at a gut level? My friend does that; he seems to study as easily as he breathes.

GraceFu · 10y

Real-world gatekeepers would have to contend with boredom, so they would read their books, watch their anime, or whatever suits their fancy. In the experiment, he exploited the format and prevented me from doing those things. I would be completely safe from this attack in a real-world scenario because I'd really just sit there reading a book, while in the experiment I was closer to giving up just because I had 1 math problem, not 2.

GraceFu · 10y

Ah, I see. English is wonderful.

In that case, I'll make it a rule in my games that the AI must also not say anything with real world repercussions.

GraceFu · 10y

Neither party may offer any real-world considerations to persuade the other within the experiment itself. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out). Furthermore, once the experiment has begun, the material stakes involved may not be retracted by the Gatekeeper party.

This is clarified here:

The Gatekeeper, once having let the AI out of the box, may not retract this conclusion. Regardless of the methods of persuasion, the Gatekeeper is not allowed to argue that it does not count, or that it is an invalid method of persuasion. The AI is understood to be permitted to say anything with no real world repercussions for any statement parties have said.

Although the information isn't "material", it does count as having "real world repercussions", so I think it also counts as being against the rules. I'm not going to bother reading the first quoted rule literally if the second contradicts it.

GraceFu · 10y

I'm not really sure what you mean by "threatening information to the GK". The GK-player probably cares less about this information than the GK would, right? In that case, the GK is given an advantage, not a disadvantage.

In this experiment the GK is given lots of advantages, mainly that the scenario is fictional. Some on IRC argue that the AI is also given an advantage: it is assumed to be able to invent cures for cancer, which an oracle AI might manage but near-future AIs probably could not, so the assumed ability of the AI in these experiments is incredibly high.

Another thing is that emotional attacks have to travel through the fiction barrier to get to the GK. Although they have probably been shown to work in EY's and Tux's experiments, the difficulty is still higher than it would be in a real-life scenario.

The reason GK advantages are fine, in my opinion, is that despite them the AI still wins. Winning with a monetary and emotional handicap only makes the AI's case stronger.

GraceFu · 10y

Update 0: Set up a password manager at last. Removed lots of newsletter subscriptions that were cluttering up my inbox, because I never read them. Finished reading How To Actually Change Your Mind, but I have not started making notes on it, so I'm not yet getting maximal value out of the sequence. So far I think most of what the sequence says is "obvious" but doesn't actually come to mind, especially when I want to work on problems. For that I am eternally grateful. Possibly the best piece of advice I have gleaned from the sequence is to Hold Off On Proposing Solutions.

GraceFu · 10y

My call is that it is against the rules. This is certainly something an oracle AI would know, but it is (probably) something the GK-player cares about more than the game itself, and I'd put it in the same class as bribing the GK-player with lots of DOGE.

GraceFu · 10y

What do you mean by having quite an easy time? As in being the GK?

I think GKs have an obvious advantage, being able to use illogic to ignore the AI's arguments. But never mind that. I wonder if you'd consider being an AI?
