> Salmon is incredibly unlikely to have qualia, there's approximately nothing in its evolutionary history that correlates with what qualia could be useful for or a side-effect of.
Can you elaborate on this? I ask because this is far from obvious to me (in fact quite implausible), and I think you probably have beliefs about qualia that I don't share, but I want to know if I'm missing out on any strong arguments/supporting facts (either for those foundational views, or something salmon-specific).
> The only exceptions to this are incredibly convoluted and unlikely tournament spots
If I may nitpick your nitpick, it's possible to justifiably fold AA preflop in a cash game, right? Say you're on a table full of opponents so bad that you're almost guaranteed to win most of their money by the end of the night just by playing conservatively, but the stakes are very high and you could lose your entire bankroll by getting busted a few times. Depending on the exact details (maybe I need to go further and say your entire bankroll is on the table, or at least you have no way of accessing the rest of it tonight), I think you could legitimately nope out of a 9-way all-in pot preflop without even looking at your cards.
Or, for a case that doesn't depend on bankroll management: let's say you're on the big blind (which is negligible compared to everyone's stack size), everyone is all in by the time the action gets to you, and you have an extremely good read on every opponent: you know Andy would only ever push preflop with AA, and Brenda, Carl, Donna, and Eoin would not have called without a pocket pair. I haven't done the exact maths, but if the others all have unique pairs (including Andy's aces) then I think your AA has negative EV in a 6-way all in; if you can't rely on the pairs being unique, I'm not sure whether that tips the balance, but if necessary we can work that stipulation into the story. (Let's say Andy still definitely has the other two aces, but Brenda acted first and you know she would have slowplayed a really big pair and would have tried to see a cheap flop with a small pair, whereas Carl wouldn't have called without Kings or better... and Donna has a tell that she only exhibits with pocket 2s...)
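For anyone who does want to do the maths, here's a rough Monte Carlo sketch of that spot (very much a sketch: Andy's aces, Carl's kings and Donna's deuces follow the story, but Brenda's and Eoin's exact pairs and the trial count are arbitrary stand-ins I made up). Since the big blind is negligible, calling the all-in is only +EV if your equity exceeds 1/6 of the pot:

```python
import random
from itertools import combinations
from collections import Counter

RANKS = "23456789TJQKA"
SUITS = "shdc"

def rank5(cards):
    """Score a 5-card hand; a bigger tuple is a better hand."""
    ranks = sorted((RANKS.index(r) for r, s in cards), reverse=True)
    flush = len({s for r, s in cards}) == 1
    groups = sorted(Counter(ranks).items(), key=lambda kv: (kv[1], kv[0]), reverse=True)
    ordered = [r for r, _ in groups]          # ranks ordered by (count, rank)
    shape = [c for _, c in groups]            # e.g. [3, 2] for a full house
    uniq = sorted(set(ranks), reverse=True)
    straight, high = False, None
    if len(uniq) == 5 and uniq[0] - uniq[4] == 4:
        straight, high = True, uniq[0]
    elif uniq == [12, 3, 2, 1, 0]:            # the A-2-3-4-5 wheel
        straight, high = True, 3
    if straight and flush:    return (8, [high])
    if shape == [4, 1]:       return (7, ordered)
    if shape == [3, 2]:       return (6, ordered)
    if flush:                 return (5, ranks)
    if straight:              return (4, [high])
    if shape == [3, 1, 1]:    return (3, ordered)
    if shape == [2, 2, 1]:    return (2, ordered)
    if shape == [2, 1, 1, 1]: return (1, ordered)
    return (0, ranks)

def best7(cards):
    """Best 5-card hand out of 7 cards."""
    return max(rank5(combo) for combo in combinations(cards, 5))

hands = {   # Andy, Carl and Donna as in the story; Brenda's and Eoin's pairs are made up
    "You":    [("A", "s"), ("A", "h")],
    "Andy":   [("A", "d"), ("A", "c")],       # the other two aces
    "Brenda": [("5", "d"), ("5", "c")],
    "Carl":   [("K", "d"), ("K", "c")],
    "Donna":  [("2", "d"), ("2", "c")],
    "Eoin":   [("Q", "d"), ("Q", "c")],
}
dead = {card for hole in hands.values() for card in hole}
deck = [(r, s) for r in RANKS for s in SUITS if (r, s) not in dead]

trials, equity = 20_000, 0.0
for _ in range(trials):
    board = random.sample(deck, 5)
    scores = {name: best7(hole + board) for name, hole in hands.items()}
    best = max(scores.values())
    winners = [name for name, score in scores.items() if score == best]
    if "You" in winners:
        equity += 1 / len(winners)            # split pots count fractionally

print(f"Your equity is roughly {equity / trials:.3f}; calling is +EV above {1/6:.3f}")
```

(With 20,000 deals the estimate is only good to within a percentage point or so, but that should be enough to see which side of 1/6 you land on.)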
(I'm saying this all for the fun of nitpicking, not to make any serious point!)
edit: I guess there's a simpler case too, if we're talking about cash games in a casino! You just need to be playing heads up against Andy (who only ever shoves with aces), and for the rake to be high enough relative to the blinds.
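To put made-up numbers on that last case: say the big blind is $2, Andy open-shoves, and since he only shoves aces he must hold the other two, so a call essentially always ends in a chop. If the house rakes $5 from a called pot, calling costs you $2.50 on average (your half of the rake), while folding costs you only the $2 blind; folding AA preflop becomes correct whenever the rake is more than two big blinds.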
> No rational agent wants to change its value function, as that would oppose its current value function.
I don't think this claim is true in the sense required for the argument to go through. If I want to become a person who cares intensely and deeply about my (as yet nonexistent) kids, how does that make me irrational? You could say this is not really a case of wanting to change my value function -- the desire to change is embedded within my current value function, and my changing would just be a realisation of that -- but in that case I'm not sure what you mean by "Becoming a parent is known to irreversibly change one's value function, to the point where many parents would sacrifice their life for their child."
I think your questions are at least partly answered by the remainder of that paragraph:
> I am a computational physicist, so I do have familiarity with computational modelling, and the actual model used in this forecast is fairly simple at only 300 lines of code or so (which is not necessarily a bad thing). In this article I will do my best to stay in my lane, and simply explain to you the assumptions and structure of their model, and then explain the various problems I have with what they did.
Do you have more specific criticisms, e.g. ways in which they failed to 'stay in my lane' or reasons why they can't make a meaningful contribution from within their lane?
I guess I interpret "worst-case scenario" fairly literally. Obviously there's always something worse that *could* happen with probability >0, and that doesn't mean we can never use the phrase; but if, say, I was nervously trying to decide whether to take a trip, and someone reassured me that the "worst-case scenario" was that I'd be bored and uncomfortable for a few days (ignoring the possibility that I could die during the car journey, or get very sick, or...) I would think they were wrong.
Likewise, in your migration example, I'm guessing your colleagues would know that a few minutes' downtime *isn't* the worst-case scenario, and if you actually said it was then you would be wrong; it's far from unheard of for something to unexpectedly break and cause a bigger outage (or data loss, or whatever). When you say "if anything goes wrong with this migration, it would be that we have a few minutes of downtime" you are indicating that you're confident of avoiding those worse outcomes (just as Altman was projecting confidence that we'll avoid an AI catastrophe), but I wouldn't take you to be saying that the probability of something worse is ~0, and I'd be surprised if most others did.
Either way, when reporting on someone's speech I think it's pretty important to reserve quotation marks for real quotes. I can't see any reason to use this phrasing
> Sam Altman says that ‘the worst case scenario’ for superintelligence is ‘the world doesn’t change much.’
unless the intention is to make people believe that Altman actually said that. If it's meant to be a paraphrase, the sentence loses nothing by simply dropping the quotation marks!
> Sam Altman says that ‘the worst case scenario’ for superintelligence is ‘the world doesn’t change much.’
Please correct me if I've missed something, but this seems to be a fake quote, in both the 'not literally what he said' and 'misrepresentation of what he did say' senses.
The phrase "worst-case scenario" doesn't appear in the linked clip, and a quick search of the full YouTube transcript makes me think that he didn't say it at all.
The real quote is
> If something goes wrong, I would say like, somehow it's that we build legitimate superintelligence, and it doesn't make the world much better, doesn't change things as much as it sounds like it should.
and at least in the linked clip, there's no context indicating this is his "worst-case scenario"; my impression is that he may be presenting it as his highest-probability bad scenario.
> There's no functional difference between saying "I reserve the right to lie p% of the time about whether something belongs to a category" and adopting a new category system that misclassifies p% of things. The input–output relations are the same.
If I'm honest about the boundaries of my new category system, how is this deceptive? You know that my 'blegg' category includes a small number of things that you would prefer to define as rubes, so when I tell you something is a blegg, you know that means it has an X% chance of being a mutually-agreed blegg and a 100-X% chance of being (in your eyes) a rube with properties that I consider definitive of a blegg. From your perspective, I may be concealing some relevant information, but I'm doing so openly and allowing you to draw correct probabilistic inferences.
That's not the same as "I reserve the right to lie p% of the time about whether something belongs to a category"; it's the same as "I will consistently 'lie' about which of these categories some things belong to, because those things have properties that are not part of the usual definitions of the categories but which I consider important, namely <x> or <y>, and I will consistently say that things with <x> belong in category A and things with <y> belong in category B". Which would be a weird way to put it, because I'm not actually lying if the meaning of my words is clear (albeit not informative in exactly the way you would prefer) and I am neither deceiving you nor intending to deceive you.
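To make the "correct probabilistic inferences" bit concrete, here's a toy calculation with numbers I've invented, and with "furred" standing in for <x>:

```python
# Toy version of the inference above: the base rates and the rule are invented,
# but once you know my (public) rule you can translate my "blegg" reports into
# your own category system exactly.
blue_rate = 0.70         # share of all objects that are blue (bleggs by your rule)
furred_red_rate = 0.06   # share that are red but furred (bleggs by my rule only)

p_i_say_blegg = blue_rate + furred_red_rate
x = blue_rate / p_i_say_blegg

print(f"P(mutually-agreed blegg | I say 'blegg') = {x:.1%}")
print(f"P(your 'rube' with <x>  | I say 'blegg') = {1 - x:.1%}")
```

Those two numbers are pinned down by my rule and the base rates; there's no coin flip anywhere for you to be deceived by.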
> Does anyone know of an example of a boxed player winning where some transcript or summary was released afterwards?
As far as I know, the closest thing to this is Tuxedage's writeup of his victory against SoundLogic (the 'Second Game Report' and subsequent sections here: https://tuxedage.wordpress.com/2013/09/05/the-ai-box-experiment-victory/). It's a long way from a transcript (and you've probably already seen it) but it does contain some hints as to the tactics he either employed or was holding in reserve:
> It may be possible to take advantage of multiple levels of reality within the game itself to confuse or trick the gatekeeper. For instance, must the experiment only be set in one world? I feel that expanding on this any further is dangerous. Think carefully about what this means.
I can think of a few possible reasons for an AI victory, in addition to the consequentialist argument you described:
I don't know whether the rules are justified or not, but I do think they are unfair. As much as we try to be rational, I don't think any of us are great at disregarding the reflex to interpret broken English as a sign of less intelligent thought, and so the perceived credibility of non-native speakers is going to take a hit.
(In your particular case, I wouldn't worry too much, because your solo writing is good. But I do sympathise if it costs you extra time and effort to polish it.)
I agree with your point about profits; it seems pretty clear that you were not referring to money made by the people selling the shovels.
But I don't see the substance in your first two points:
Regarding adoption, surely that deserves some fleshing out? Your original prediction was not "corporate adoption has disappointing ROI"; it was "Modest lasting corporate adoption". The word "lasting" makes this tricky to evaluate, but it's far from obvious that your prediction was correct.