>Sentence lengths have declined.
Data: I looked for similar data on sentence lengths in German, and the first result I found covering a similar timeframe was Wikipedia referencing Kurt Möslein: Einige Entwicklungstendenzen in der Syntax der wissenschaftlich-technischen Literatur seit dem Ende des 18. Jahrhunderts (1974) ["Some developmental tendencies in the syntax of scientific-technical literature since the end of the 18th century"], which does not find the same trend:
| Year | Words per sentence |
|------|--------------------|
| 1770 | 24.50 |
| 1800 | 25.54 |
| 1850 | 32.00 |
| 1900 | 23.58 |
| 1920 | 22.72 |
| 1940 | 19.60 |
| 1960 | 19.90 |
This data on scientific writing starts lower than any of your English examples from that time, and increases initially, but arrives...
I think your disagreement can be made clear with more formalism. First, the point for your opponents:
When the animals are in a cold place, they are selected for a long fur coat, and also for IGF (and other things as well). To some extent, these are just different ways of describing the same process. If they move to a warmer place, they are now selected for shorter fur instead, and they are still selected for IGF. And there's also a more concrete correspondence to this: they have also been selected for "IF cold THEN long fur, ELSE short fur" the entire t...
> for AIs, more robust adversarial examples - especially ones that work on AIs trained on different datasets - do seem to look more "reasonable" to humans.
Then I would expect they are also more objectively similar. In any case that finding is strong evidence against manipulative adversarial examples for humans - your argument is basically "there's just this huge mess of neurons, surely somewhere in there is a way", but if the same adversarial examples work on minds with very different architectures, then that's clearly not why they exist. Instead, they have ...
Ok, that's mostly what I've heard before. I'm skeptical because:
edit: putting the thing I was originally going to say back:
I meant that I think there's enough bandwidth available from vision into the configuration of matter in the brain that a sufficiently powerful mind could adversarial-example the human brain hard enough to implement the adversarial process in the brain, get it to persist in that brain, take control, and spread. We see weaker versions of this in advertising and memetics already, and it seems to be getting worse with social media - there are a few different strains, which generally aren't hig...
This isn't my area of expertise, but I think I have a sketch for a very simple weak proof:
The conjecture states that V runtime and length are polynomial in C size, but leaves the constant open. Therefore a counterexample would have to be an infinite family of circuits satisfying P(C), with their corresponding V growing faster than polynomially. To prove the existence of such a counterexample, you would need a proof that each member of the family satisfies P(C). But that proof has finite length, and can be used as the fo...
I think the solution to this is to add something to your wealth to account for inalienable human capital, and count costs only by how much you will actually be forced to pay. This is a good idea in general; otherwise most people with student loans or a mortgage are "in the red", and couldn't use this at all.
> The society’s stance towards crime - preventing it via the threat of punishment - is not what would work on smarter people
This is one of two claims here that I'm not convinced by. Informal disproof: If you are a smart individual in today's society, you shouldn't ignore threats of punishment, because it is in the state's interest to follow through anyway, pour encourager les autres. If crime prevention is in people's interest, intelligence monotonicity implies that a smart population should be able to make punishment work at least this well. Now I don't trust in...
Maybe I'm missing something, but it seems to me that all of this is straightforwardly justified through simple selfish Pareto improvements.
Take a look at Critch's cake-splitting example in section 3.5. Now imagine varying the utility of splitting. How high does it need to get, before [red->Alice;green->Bob] is no longer a Pareto improvement over [(split)] from both players' selfish perspectives before the observation? It's 27, and that's also exactly where the decision flips when weighing Alice 0.9 and Bob 0.1 in red, and Alice 0.1 and Bob 0.9 in green....
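Spelling out the arithmetic (assuming the whole cake is worth 30 to whoever gets it and 0 to the other, and that each player puts credence 0.9 on their own colour - the values that make the 27 above come out): by Alice's own beliefs, [red->Alice;green->Bob] is worth 0.9 · 30 = 27 to her in expectation, and symmetrically 27 to Bob by his, so it beats a guaranteed split for both exactly while the split utility is below 27. And in the red case, with weights 0.9 on Alice and 0.1 on Bob, giving Alice the cake scores 0.9 · 30 + 0.1 · 0 = 27, while splitting scores 0.9 · u_split + 0.1 · u_split = u_split - the decision flips at the same 27, and symmetrically for green.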
> The timescale for improvement is dreadfully long and the day-to-day changes are imperceptible.
This sounded wrong, but I guess is technically true? I had great in-session improvements as I warmed up the area and got into it, and the difference between a session where I missed the previous day and one where I didn't is absolutely perceptible. Now, after that initial boost, it's true that I couldn't tell if the "high point" was improving day to day, but that was never a concern - the above was enough to give me confidence. Plus, with your external rotations, was there not perceptible strength improvement week to week?
So I've reread your section on this, and I think I follow it, but it's arguing a different claim. In the post, you argue that a trader that correctly identifies a fixed point, but doesn't have enough weight to get it played, might not profit from this knowledge. That I agree with.
But now you're saying that even if you do play the new fixed point, that trader still won't gain?
I'm not really calling this a proof because it's so basic that something else must have gone wrong, but:
has a fixed point at , and doesn't. Then...
On reflection, I didn't quite understand this exploration business, but I think I can save a lot of it.
>You can do exploration, but the problem is that (unless you explore into non-fixed-point regions, violating epistemic constraints) your exploration can never confirm the existence of a fixed point which you didn't previously believe in.
I think the key here is in the word "confirm". It's true that unless you believe p is a fixed point, you can't just try out p and see the result. However, you can change your beliefs about p based on your results from ex...
I don't think the learnability issues are really a problem. I mean, if doing a handstand with a burning 100 riyal bill between your toes under the full moon is an exception to all physical laws and actually creates utopia immediately, I'll never find out either. Assuming you agree that that's not a problem, why is the scenario you illustrate? In both cases, it's not like you can't find out, you just don't, because you stick to what you believe is the optimal action.
I don't think this would be a significant problem in practice any more than other kinds of h...
That prediction may be true. My argument is that "I know this by introspection" (or introspection-and-generalization-to-others) is insufficient. For a concrete example, consider your 5-year-old self. I remember some pretty definite beliefs I had about my future self that turned out wrong, and if I ask myself how aligned I am with him, I don't even know how to answer; he just seems way too confused and incoherent.
I think it's also not absurd that you do have perfect caring in the sense relevant to the argument. This does not require that you don't make mista...
> This prediction seems flatly wrong: I wouldn’t bring about an outcome like that. Why do I believe that? Because I have reasonably high-fidelity access to my own policy, via imagining myself in the relevant situations.
It seems like you're conflating two things here, because the thing you would want is not knowable by introspection. What I think you're introspecting is that if you'd noticed that the-thing-you-pursued-so-far was different from what your brother actually wants, you'd do what he actually wants. But the-thing-you-pursued-so-far doesn't play the...
> The idea is that we can break any decision problem down by cases (like "insofar as the predictor is accurate, ..." and "insofar as the predictor is inaccurate, ...") and that all the competing decision theories (CDT, EDT, LDT) agree about how to aggregate cases.
Doesn't this also require that all the decision theories agree that the conditioning fact is independent of your decision?
Otherwise you could break down the normal prisoner's dilemma into "insofar as the opponent makes the same move as me" and "insofar as the opponent makes the opposite move" and con...
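To spell that case split out with standard (illustrative) prisoner's dilemma payoffs - 3 for mutual cooperation, 1 for mutual defection, 5 for defecting on a cooperator, 0 for being defected on: insofar as the opponent makes the same move as me, cooperating yields 3 and defecting yields 1; insofar as the opponent makes the opposite move, cooperating yields 0 and defecting yields 5. Which action comes out ahead then depends entirely on how the two cases are weighted, and those weights depend on my own move unless the conditioning fact is independent of my decision.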
Would a decision theory like this count as "giving up on probabilities" in the sense in which you mean it here?
I think your assessments of what's psychologically realistic are off.
> I do not know what it feels like from the inside to feel like a pronoun is attached to something in your head much more firmly than "doesn't look like an Oliver" is attached to something in your head.
I think before writing that, Yud imagined calling [unambiguously gendered friend] either pronoun, and asked himself if it felt wrong, and found that it didn't. This seems realistic to me: I've experienced my emotional introspection becoming blank on topics I've put a lot of thinking into. This...
I don't think the analogy to biological brains is quite as strong. For example, biological brains need to be "robust" not only to variations in the input, but also in a literal sense, to forceful impact or to parasites trying to control them. They intentionally have very bad suppressibility, and this means there needs to be a lot of redundancy, which makes "just stick an electrode in that area" work. More generally, they are under many constraints that an ML system isn't, probably too many for us to think of, and they generally prioritize safety over performance. Bo...
Probably way too old here, but I had multiple experiences relevant to the thread.
Once I had a dream and then, in the dream, I remembered I had dreamt this exact thing before, and wondered if I was dreaming now, and everything looked so real and vivid that I concluded I was not.
I can create a kind of half-dream, where I see random images and moving sequences at most 3 seconds or so long, in succession. I am very drowsy but not sleeping, and I am aware in the back of my head that they are only schematic and vague.
I would say the backstories in dreams are d...
I think it's still possible to have a scenario like this. Let's say each trader would buy or sell a certain amount when the price is below/above what they think it should be, with the transition being very steep instead of instant. Then you could still have long price intervals where the amounts bought and sold remain constant, and then every point in there could be the market price.
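A minimal sketch of that scenario (hypothetical traders and numbers, with the "very steep instead of instant" transition modelled by a tanh):

```python
import numpy as np

# Two hypothetical traders. Each buys up to `size` units when the price is below
# their believed value and sells up to `size` units when it is above; a steep
# tanh stands in for the near-instant transition.
def net_demand(price, belief, size=1.0, steepness=200.0):
    return size * np.tanh(steepness * (belief - price))

prices = np.linspace(0.0, 1.0, 1001)
total = net_demand(prices, belief=0.4) + net_demand(prices, belief=0.6)

# Between the two beliefs, one trader is (almost) fully selling and the other
# (almost) fully buying, so aggregate net demand stays flat at ~0 over a long
# interval: every price in it clears the market.
cleared = prices[np.abs(total) < 1e-3]
print(cleared.min(), cleared.max())  # roughly 0.42 .. 0.58 at this steepness
```

With equal amounts on both sides the flat stretch sits at zero, so anything between the two beliefs can be the clearing price; if the amounts differ, the flat stretch misses zero and the price gets pinned near one trader's believed value instead.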
I'm not sure if this is significant. I see no reason to set the traders up this way other than the result in the particular scenario that kicked this off, and adding traders who don'...
> So I'm not sure what's going on with my mental sim. Maybe I just have a super-broad 'crypto-moral detector' that goes off way more often than yours (w/o explicitly labeling things as crypto-moral for me).
Maybe. How were your intuitions before you encountered LW? If you already had a hypocrisy intuition, then trying to internalize the rationalist perspective might have led it to ignore the morality distinction.
I don't strongly relate to any of these descriptions. I can say that I don't feel like I have to pretend advice from equals is more helpful than it is, which I suppose means it's not face. The most common way to reject advice is a comment like "eh, whatever" and ignoring it. Some nerds get really mad at this and seem to demand intellectual debate. This is not well received. Most people give advice with the expectation of intellectual debate only on crypto-moral topics (this is also not well received generally, but the speaker seems to accept that as an "identity cost"), or not at all.
> This excludes worlds which the deductive process has ruled out, so for example if A∨B has been proved, all worlds will have either A or B. So if you had a bet which would pay $10 on A, and a bet which would pay $2 on B, you're treated as if you have $2 to spend.
I agree you can arbitrage inconsistencies this way, but it seems very questionable. For one, it means the market maker needs to interpret the output of the deductive process semantically. And it makes him go bankrupt if that logic is inconsistent. And there could be a case where a proposit...
> Why is the price of the un-actualized bet constant? My argument in the OP was to suppose that PCH is the dominant hypothesis, so, mostly controls market prices.
Thinking about this in detail, it seems like what influence traders have on the market price depends on a lot more of their inner workings than just their beliefs. I was thinking in a way where each trader only had one price for the bet, below which they bought and above which they sold, no matter how many units they traded (this might contradict "continuous trading strategies" because of finite wea...
> But now, you seem to be complaining that a method that explicitly avoids Troll Bridge would be too restrictive?
No, I think finding such a no-learning-needed method would be great. It just means your learning-based approach wouldn't be needed.
> You seem to be arguing that being susceptible to Troll Bridge should be judged as a necessary/positive trait of a decision theory.
No. I'm saying if our "good" reasoning can't tell us where in Troll Bridge the mistake is, then something that learns to make "good" inferences would have to fall for it.
...But there are decisi
> So I don't see how we can be sure that PCH loses out overall. LCH has to exploit PCH -- but if LCH tries it, then we're seemingly in a situation where LCH has to sell for PCH's prices, in which case it suffers the loss I described in the OP.
So I've reread the logical induction paper for this, and I'm not sure I understand exploitation. Under 3.5, it says:
> On each day, the reasoner receives 50¢ from T, but after day t, the reasoner must pay $1 every day thereafter.
So this sounds like before day t, T buys a share every day, and those shares never pay out - ot...
> It seems to me that this habit is universal in American culture, and I'd be surprised (and intrigued!) to hear about any culture where it isn't.
I live in Austria. I would say we do have norms against hypocrisy, but your example with the driver's license seems absurd to me. I would be surprised (and intrigued!) if agreement with this one in particular is actually universal in American culture. In my experience, hypocrisy norms are for moral and crypto-moral topics.
For normies, morality is an imposition. Telling them of new moral requirements increases how mu...
> The payoff for 2-boxing is dependent on beliefs after 1-boxing because all share prices update every market day and the "payout" for a share is essentially what you can sell it for.
If a sentence is undecidable, then you could have two traders who disagree on its value indefinitely: one would have a highest price to buy that's below the other's lowest price to sell. But then anything between those two prices could be the "market price", in the classical supply and demand sense. If you say that the "payout" of a share is what you can sell it for... well, the ...
> Because we have a “basic counterfactual” proposition for what would happen if we 1-box and what would happen if we 2-box, and both of those propositions stick around, LCH’s bets about what happens in either case both matter. This is unlike conditional bets, where if we 1-box, then bets conditional on 2-boxing disappear, refunded, as if they were never made in the first place.
I don't understand this part. Your explanation of PCDT at least didn't prepare me for it; it doesn't mention betting. And why is the payoff for the counterfactual-2-boxing determined b...
> are the two players physically precisely the same (including environment), at least insofar as the players can tell?
In the examples I gave, yes, because that's the case where we have a guarantee of equal policy, from which people try to generalize. If we say players can see their number, then the twins in the prisoner's dilemma needn't play the same way either.
> But this is one reason why correlated equilibria are, usually, a better abstraction than Nash equilibria.
The "signals" players receive for correlated equilibria are already semantic. So I'm suspicious t...
> Hum, then I'm not sure I understand in what way classical game theory is neater here?
Changing the labels doesn't make a difference classically.
> As long as the probabilistic coin flips are independent on both sides
Yes.
> Do you have examples of problems with copies that I could look at and that you think would be useful to study?
No. I think you should take the problems of distributed computing and translate them into decision problems, which you then have a solution to.
> Well, if I understand the post correctly, you're saying that these two problems are fundamentally the same problem
No. I think:
> ...the reasoning presented is correct in both cases, and the lesson here is for our expectations of rationality...
As outlined in the last paragraph of the post. I want to convince people that TDT-like decision theories won't give a "neat" game theory, by giving an example where they're even less neat than classical game theory.
> Actually it could.
I think you're thinking about a realistic case (same algorithm, similar environment...
The link would have been to better illustrate how the proposed system works, not about motivation. So, it seems that you understood the proposal, and wouldn't have needed it.
I don't exactly want to learn the Cartesian boundary. A Cartesian agent believes that its input set fully screens off any other influence on its thinking, and the outputs screen off any influence of the thinking on the world. It's very hard to find things that actually fulfill this. I explain how PDT can learn Cartesian boundaries, if there are any, as a sanity/conservative extension check. But it can also learn that it controls copies or predictions of itself, for example.
> The apparent difference is based on the incoherent counterfactual "what if I say heads and my copy says tails"
I don't need counterfactuals like that to describe the game, only implications. If you say heads and your copy tails, you will get one util, just like how if 1+1=3, the circle can be squared.
The interesting thing here is that superrationality breaks up an equivalence class relative to classical game theory, and people's intuitions don't seem to have incorporated this.
What is and isn't an isomorphism depends on what you want to be preserved under isomorphism. If you want everything that's game-theoretically relevant to be preserved, then of course those games won't turn out equivalent. But that doesn't explain anything. If my argument had been that the correct action in the prisoner's dilemma depends on sunspot activity, you could have written your comment just as well.
> Right, but then, are all other variables unchanged? Or are they influenced somehow? The obvious proposal is EDT -- assume influence goes with correlation.
I'm not sure why you think there would be a decision theory in that as well. Obviously when BDT decides its output, it will have some theory about how its output nodes propagate. But the hypothesis as a whole doesn't think about influence. It's just a total probability distribution, and it includes that some things inside it are distributed according to BDT. It doesn't have beliefs about "if the output of ...
Adding other hypotheses doesn't fix the problem. For every hypothesis you can think of, there's a version of it that says "but I survive for sure" tacked on. This hypothesis can never lose evidence relative to the base version, but it can gain evidence anthropically. Eventually, these will get you. Yes, there are all sorts of considerations that are more relevant in a realistic scenario; that's not the point.
The problem, as I understand it, is that there seem to be magical hypotheses you can't update against from ordinary observation, because by construction the only time they make a difference is in your odds of survival. So you can't update them from observation, and anthropics can only update in their favour, so eventually you end up believing one and then you die.
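A toy version of that update pattern (hypothetical numbers; H is any base hypothesis under which you survive each round with probability p, and H' is H with "but I survive for sure" tacked on):

```python
# Ordinary observations are predicted identically by H and H', so they never
# move the odds between them. Conditioning on your own survival (the anthropic
# update described above) multiplies the odds toward H' by 1/p each round.
p = 0.9        # per-round survival probability under the base hypothesis H
odds = 1e-6    # hypothetical prior odds of H' : H (H' starts very implausible)

for round_ in range(1, 201):
    odds *= 1.0 / p                  # each survived round favours H' by 1/p
    if round_ % 50 == 0:
        print(round_, odds)
# After enough survived rounds the tacked-on "I survive for sure" hypothesis
# dominates, even though no ordinary observation ever separated the two.
```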
Maybe the disagreement is in what we consider the alternative hypothesis to be? I'm not imagining a broken gun - you could examine your gun and notice it isn't, or just shoot into the air a few times and see it fire. But even after you eliminate all of those, there's still the hypothesis "I'm special for no discernible reason" (or is there?) that can only be tested anthropically, if at all. And this seems worrying.
Maybe here's a stronger way to formulate it: Consider all the copies of yourself across the multiverse. They will sometimes face situations where...
The audible links don't work for me, and probably not for many people outside America.
>To show how weird English is: English is the only proto indo european language that doesn't think the moon is female ("la luna") and spoons are male (“der Löffel”).
In most of those languages, gender can be derived from the form of the word, with exceptions of course. In German it really makes neither semantic nor phonetic sense - second-language learners often don't learn it at all, but here the chaos shows no weakness: it is rather the strong verbs that are currently being lost, bu...