tslarm

Comments (sorted by newest)

Habryka's Shortform Feed
tslarm · 6h · 10

I agree with your point about profits; it seems pretty clear that you were not referring to money made by the people selling the shovels. 

But I don't see the substance in your first two points:

  • You chose to give a range with both a lower and an upper bound; the success of the prediction was evaluated accordingly. I don't see what you have to complain about here.
  • In the linked tweet, you didn't go out on a limb and say GPT-5 wasn't imminent! You said it either was not imminent or would be disappointing. And you said this in a parenthetical to the claim "No massive advance". Clearly the success of the prediction "No massive advance (no GPT-5, or disappointing GPT-5)" does not depend solely on the nonexistence of GPT-5; it can be true if GPT-5 arrives but is bad, and it can be false if GPT-5 doesn't arrive but another "massive advance" does. (If you meant it only to apply to GPT-5, you surely would have just said that: "No GPT-5 or disappointing GPT-5.")

Regarding adoption, surely that deserves some fleshing out? Your original prediction was not "corporate adoption has disappointing ROI"; it was "Modest lasting corporate adoption". The word "lasting" makes this tricky to evaluate, but it's far from obvious that your prediction was correct.

Don't Eat Honey
tslarm · 12h · 10

> Salmon is incredibly unlikely to have qualia, there's approximately nothing in its evolutionary history that correlates with what qualia could be useful for or a side-effect of.

Can you elaborate on this? I ask because this is far from obvious to me (in fact quite implausible), and I think you probably have beliefs about qualia that I don't share, but I want to know if I'm missing out on any strong arguments/supporting facts (either for those foundational views, or something salmon-specific).

life lessons from poker
tslarm · 1d · 50

> The only exceptions to this are incredibly convoluted and unlikely tournament spots

If I may nitpick your nitpick, it's possible to justifiably fold AA preflop in a cash game, right? Say you're at a table full of opponents so bad that you're almost guaranteed to win most of their money by the end of the night just by playing conservatively, but the stakes are very high and you could lose your entire bankroll by getting busted a few times. Depending on the exact details (maybe I need to go further and say your entire bankroll is on the table, or at least you have no way of accessing the rest of it tonight), I think you could legitimately nope out of a 9-way all-in pot preflop without even looking at your cards.

Or, for a case that doesn't depend on bankroll management: let's say you're on the big blind (which is negligible compared to everyone's stack size), everyone is all in by the time the action gets to you, and you have an extremely good read on every opponent: you know Andy would only ever push preflop with AA, and Brenda, Carl, Donna, and Eoin would not have called without a pocket pair. I haven't done the exact maths, but if the others all have unique pairs (including Andy's aces) then I think your AA has negative EV in a 6-way all-in; if you can't rely on the pairs being unique, I'm not sure whether that tips the balance, but if necessary we can work that stipulation into the story. (Let's say Andy still definitely has the other two aces, but Brenda acted first and you know she would have slowplayed a really big pair and would have tried to see a cheap flop with a small pair, whereas Carl wouldn't have called without Kings or better... and Donna has a tell that she only exhibits with pocket 2s...)
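
For anyone who wants to check that claim numerically rather than take my guess on faith, here's a rough Monte Carlo sketch. It assumes the third-party treys hand evaluator and plugs in illustrative ranks for the four smaller pairs (only Andy's aces and Donna's deuces are fixed by the story), so it estimates this particular line-up rather than proving anything in general:

```python
# Rough Monte Carlo check of the 6-way all-in above (not exact maths!).
# Assumes the third-party `treys` evaluator (pip install treys); the ranks
# of Brenda's, Carl's and Eoin's pairs are illustrative choices.
import random
from treys import Card, Evaluator

evaluator = Evaluator()

hero = [Card.new('As'), Card.new('Ah')]   # your aces
andy = [Card.new('Ad'), Card.new('Ac')]   # Andy only shoves with AA
others = [
    [Card.new('Ks'), Card.new('Kh')],     # Carl: "Kings or better", so KK here
    [Card.new('9d'), Card.new('9c')],     # Brenda: a middling pair (illustrative)
    [Card.new('5s'), Card.new('5h')],     # Eoin: a small pair (illustrative)
    [Card.new('2d'), Card.new('2c')],     # Donna: the tell says deuces
]
hands = [hero, andy] + others

dead = {c for h in hands for c in h}
stub = [Card.new(r + s) for r in '23456789TJQKA' for s in 'shdc'
        if Card.new(r + s) not in dead]

trials = 100_000
equity = 0.0
for _ in range(trials):
    board = random.sample(stub, 5)
    scores = [evaluator.evaluate(board, h) for h in hands]  # lower = stronger
    best = min(scores)
    winners = [i for i, s in enumerate(scores) if s == best]
    if 0 in winners:
        equity += 1 / len(winners)    # chopped pots count fractionally

equity /= trials
# With the blind negligible, calling risks one stack to win a six-stack pot,
# so the call is +EV only if your equity exceeds 1/6.
print(f"estimated equity: {equity:.3f}   break-even: {1/6:.3f}")
```

The exact ranks of the smaller pairs will nudge the number a little (they block each other, and connected or suited-adjacent pairs pick up a few extra straights and flushes), but the comparison against the 1/6 break-even share is the point.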

(I'm saying this all for the fun of nitpicking, not to make any serious point!) 

edit: I guess there's a simpler case too, if we're talking about cash games in a casino! You just need to be playing heads up against Andy (who only ever shoves with aces), and for the rake to be high enough relative to the blinds.

CapResearcher's Shortform
tslarm · 7d · 70

> No rational agent wants to change its value function, as that would oppose its current value function.

I don't think this claim is true in the sense required for the argument to go through. If I want to become a person who cares intensely and deeply about my (as yet nonexistent) kids, how does that make me irrational? You could say this is not really a case of wanting to change my value function -- the desire to change is embedded within my current value function, and my changing would just be a realisation of that -- but in that case I'm not sure what you mean by "Becoming a parent is known to irreversibly change one's value function, to the point where many parents would sacrifice their life for their child."

A deep critique of AI 2027’s bad timeline models
tslarm · 9d · 30

I think your questions are at least partly answered by the remainder of that paragraph:

> I am a computational physicist, so I do have familarity with computational modelling, and the actual model used in this forecast is fairly simple at only 300 lines of code or so (which is not necessarily a bad thing). In this article I will do my best to stay in my lane, and simply explain to you the assumptions and structure of their model, and then explain the various problems I have with what they did.

Do you have more specific criticisms, e.g. ways in which they failed to 'stay in my lane' or reasons why they can't make a meaningful contribution from within their lane?

AI #121 Part 2: The OpenAI Files
tslarm · 11d · 61

I guess I interpret "worst-case scenario" fairly literally. Obviously there's always something worse that *could* happen with probability >0, and that doesn't mean we can never use the phrase; but if, say, I was nervously trying to decide whether to take a trip, and someone reassured me that the "worst-case scenario" was that I'd be bored and uncomfortable for a few days (ignoring the possibility that I could die during the car journey, or get very sick, or...) I would think they were wrong. 

Likewise, in your migration example, I'm guessing your colleagues would know that a few minutes' downtime *isn't* the worst-case scenario, and if you actually said it was then you would be wrong; it's far from unheard of for something to unexpectedly break and cause a bigger outage (or data loss, or whatever). When you say "if anything goes wrong with this migration, it would be that we have a few minutes of downtime" you are indicating that you're confident of avoiding those worse outcomes (just as Altman was projecting confidence that we'll avoid an AI catastrophe), but I wouldn't take you to be saying that the probability of something worse is ~0, and I'd be surprised if most others did.

Either way, when reporting on someone's speech I think it's pretty important to reserve quotation marks for real quotes. I can't see any reason to use this phrasing

> Sam Altman says that ‘the worst case scenario’ for superintelligence is ‘the world doesn’t change much.’

unless the intention is to make people believe that Altman actually said that. If it's meant to be a paraphrase, the sentence loses nothing by simply dropping the quotation marks!

AI #121 Part 2: The OpenAI Files
tslarm · 11d · 200

> Sam Altman says that ‘the worst case scenario’ for superintelligence is ‘the world doesn’t change much.’

Please correct me if I've missed something, but this seems to be a fake quote, in both the 'not literally what he said' and 'misrepresentation of what he did say' senses. 

The phrase "worst-case scenario" doesn't appear in the linked clip, and a quick search of the full YouTube transcript makes me think that he didn't say it at all. 

The real quote is

> If something goes wrong, I would say like, somehow it's that we build legitimate superintelligence, and it doesn't make the world much better, doesn't change things as much as it sounds like it should.

and at least in the linked clip, there's no context indicating this is his "worst-case scenario"; my impression is that he may be presenting it as his highest-probability bad scenario.

Unnatural Categories Are Optimized for Deception
tslarm · 17d · 100

> There's no functional difference between saying "I reserve the right to lie p% of the time about whether something belongs to a category" and adopting a new category system that misclassifies p% of things. The input–output relations are the same.

If I'm honest about the boundaries of my new category system, how is this deceptive? You know that my 'blegg' category includes a small number of things that you would prefer to define as rubes, so when I tell you something is a blegg, you know that means it has an X% chance of being a mutually-agreed blegg and a (100-X)% chance of being (in your eyes) a rube with properties that I consider definitive of a blegg. From your perspective, I may be concealing some relevant information, but I'm doing so openly and allowing you to draw correct probabilistic inferences.

That's not the same as "I reserve the right to lie p% of the time about whether something belongs to a category"; it's the same as "I will consistently 'lie' about which of these categories some things belong to, because those things have properties that are not part of the usual definitions of the categories but which I consider important, namely <x> or <y>, and I will consistently say that things with <x> belong in category A and things with <y> belong in category B". Which would be a weird way to put it, because I'm not actually lying if the meaning of my words is clear (albeit not informative in exactly the way you would prefer) and I am neither deceiving you nor intending to deceive you.
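
To make the "correct probabilistic inferences" point concrete, here's a toy simulation; all of the numbers and the property names are invented purely for illustration:

```python
# Toy simulation of the point above (all numbers invented): my category
# system openly counts things with property <x> as bleggs even when your
# system would call them rubes. Knowing my rule, you can still read my
# reports probabilistically -- the label stream doesn't mislead you.
import random

random.seed(0)

def make_object():
    """Random object: 'your_kind' is your classification; some rubes have <x>."""
    kind = random.choices(["blegg", "rube"], weights=[0.7, 0.3])[0]
    has_x = (kind == "rube") and random.random() < 0.2
    return {"your_kind": kind, "has_x": has_x}

def my_label(obj):
    """My openly stated rule: anything with <x> counts as a blegg."""
    return "blegg" if (obj["your_kind"] == "blegg" or obj["has_x"]) else "rube"

objects = [make_object() for _ in range(100_000)]
labelled_blegg = [o for o in objects if my_label(o) == "blegg"]

# What you can infer when I say "blegg", given that you know my rule:
share_your_rube = sum(o["your_kind"] == "rube" for o in labelled_blegg) / len(labelled_blegg)
print(f"P(rube in your system | I said 'blegg') = {share_your_rube:.3f}")
# You could compute the same number from the base rates and my stated rule,
# so my reports stay informative -- unlike lying a random p% of the time.
```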

sam's Shortform
tslarm · 18d* · 10

> Does anyone know of an example of a boxed player winning where some transcript or summary was released afterwards?

As far as I know, the closest thing to this is Tuxedage's writeup of his victory against SoundLogic (the 'Second Game Report' and subsequent sections here: https://tuxedage.wordpress.com/2013/09/05/the-ai-box-experiment-victory/). It's a long way from a transcript (and you've probably already seen it) but it does contain some hints as to the tactics he either employed or was holding in reserve:

> It may be possible to take advantage of multiple levels of reality within the game itself to confuse or trick the gatekeeper. For instance, must the experiment only be set in one world? I feel that expanding on this any further is dangerous. Think carefully about what this means.

I can think of a few possible reasons for an AI victory, in addition to the consequentialist argument you described:

  • AI player convinces Gatekeeper that they may be in a simulation and very bad things might happen to Gatekeepers who refuse to let the AI out. (This could be what Tuxedage was hinting at in the passage I quoted, and it is apparently allowed by at least some versions/interpretations of the rules: https://www.lesswrong.com/posts/Bnik7YrySRPoCTLFb/i-played-the-ai-box-game-as-the-gatekeeper-and-lost?commentId=DhMNjWACsfLMcywwF)
  • Gatekeeper takes the roleplay seriously, rather than truly playing to win, and lets the AI out because that's what their character would do.
  • AI player makes the conversation sufficiently unpleasant for the Gatekeeper that the Gatekeeper would rather lose the game than sit through two hours of it. (Some people have suggested weaponised boredom as a viable tactic in low-stakes games, but I think there's room for much nastier and more effective approaches, given a sufficiently motivated (and/or sociopathic) AI player with knowledge of some of the Gatekeeper's vulnerabilities.)
  • This one seems like it would (at best) fall into a grey area in the rules: I can imagine an AI player, while technically sticking to the roleplay and avoiding any IRL threats or inducements, causing the Gatekeeper to genuinely worry that the AI player might do something bad if they lose. For a skilful AI player, it might be possible to do this in a way that would look relatively innocuous (or at least not rule-breaking) to a third party after the fact.
    • Somewhat similar: if the Gatekeeper is very empathetic and/or has reason to believe the AI player is vulnerable IRL, the AI player could take advantage of this by convincingly portraying themself as being extremely invested in the game and its outcome, to the point that a loss could have a significant real-world impact on their mental health. (I think this tactic would fail if done ineptly -- most people would not react kindly if they recognized that their opponent was trying to manipulate them in this way -- but it could conceivably work in the right circumstances and in the hands of a skilful manipulator.)
yue's Shortform
tslarm · 18d · 40

I don't know whether the rules are justified or not, but I do think they are unfair. As much as we try to be rational, I don't think any of us are great at disregarding the reflex to interpret broken English as a sign of less intelligent thought, and so the perceived credibility of non-native speakers is going to take a hit.

(In your particular case, I wouldn't worry too much, because your solo writing is good. But I do sympathise if it costs you extra time and effort to polish it.)

Posts

8 · What is it like to be a compatibilist? (Q) · 2y · 72
8 · Consequentialist veganism (Q) · 3y · 9