The current education system focuses almost exclusively on the bottom 20%. If we're expecting a tyranny of the majority, we should see the top and bottom losing out. Also, note that very few children actually have an 80% chance of ending up in the middle 80%, so you would really expect class warfare, not a veil of ignorance, if people are optimising specifically for their own future children's education.
Yeah, I don't see why either. LessWrong allegedly has a utilitarian culture, and simply from the utilitarian "minimize abuse" perspective, you're spot on. Even if home-schooling has similar or mildly lower rates of abuse, the weight of that abuse is higher.
Grade inflation originally began in the United States due to the Vietnam War draft. University students were exempt from the draft as long as they maintained high enough grades, so students became less willing to stretch their abilities and professors less willing to report those abilities accurately.
The issue is that grades are trying to serve three separate purposes:
1. Regular feedback to students on how well they understand the material.
2. Personal recommendations from teachers to prospective employers/universities.
3. Global comparisons between students.
I think the reason education got so bad is we don't have accurate signals. Most studies use the passing rate as their metric of "achievement", and that can only see changes among the bottom quintile. Or, they use standardized assessments, which usually do not go higher than the 90th percentile. I wrote a longer post here: https://www.lesswrong.com/posts/LPyqPrgtyWwizJxKP/how-do-we-fix-the-education-crisis
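As a toy illustration of why a pass-rate metric is a censored signal (my own example, not from the linked post): improve only the students already above the threshold and the pass rate doesn't move at all.

```python
# Toy demonstration: pass rates are blind to gains above the cutoff.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(70, 15, 10_000)      # hypothetical test scores
improved = scores + 10 * (scores > 70)   # only the strong students improve
cutoff = 50                              # passing threshold

print((scores > cutoff).mean())          # pass rate before
print((improved > cutoff).mean())        # identical pass rate after
```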
Maybe it's my genome's fault that I care so much about future me. It is very similar to future me's genome, and so it forces me to help that genome survive, even if it lives in a very different person than I am today.
When I say, "me," I'm talking about my policy, so I'm a little confused when you say I could have been a different snapshot. Tautologically, I cannot. So, if I'm trying to maximize my pleasure, a Veil of Ignorance doesn't make sense. The only case it really applies is when I make pacts like, "if you help bring me into existence, I'll help you maximize your pleasure," except those pacts can't actually form. What really happens is existing people try to bring into existence people that will help them maximize their pleasure, either by having similar policies to their own, or being willing to serve them.
I try to be pragmatic, which means I only find it useful to consider constructive theories; anything else is not well defined, and I would say you cannot even talk about it. This is why I take issue with many simple explanations of utilitarianism: people claim to "sum over everyone equally" while not having a good definition of "everyone" or "summing equally". I think these are the two mistakes you are making in your post.
You say something like,
You never had the mechanism to choose who you would be born as, and the simplest option is pure chance.
but you cann...
Not quite. I think people working more do get more done, but it ends up lowering wages and decreasing the entropy of resource allocation (concentrating resources at the top). If you're looking for the good of the society, you probably want the greatest free energy.
The temperature usually sits somewhere between its value during economic booms and its value during recessions, including in the United Kingdom. I couldn't find a figure for the Theil index, but the closest I got is that Croatia's was...
Dan Neidle: The 20,000% spike at £100,000 is absolutely not a joke – someone earning £99,999.99 with two children under three in London will lose an immediate £20k if they earn a penny more. The practical effect is clearer if we plot gross vs net income.
Can't it actually be good to encourage people not to work? I'd imagine if everyone in the United Kingdom worked half the number of hours, salaries wouldn't decrease very much. Their society, as a whole, doesn't need to work so many hours to maintain its quality of life; they only individually need to because they drive each other's wages down.
We know what societies that mutilate prisoners are like, because plenty of them have existed.
This is where I disagree. There are only a few post-industrial societies that have done this, and they were already rotten before starting the mutilation (e.g. Nazi Germany). There is nothing to imply that mutilation will turn your society rotten, only that once your society becomes rotten, mutilation may begin.
So, you're making two rather large claims here that I don't agree with.
When you look at the history of societies that punish people by mutilation, you find that mutilation goes hand in hand (no pun intended) with bad justice systems--dictatorship, corruption, punishment that varies between social classes, lack of due process, etc.
This seems more a quirk of scarcity than a product of a bad justice system. Historically, it wasn't just the tyrannical, corrupt governments that punished people with mutilation, it was every civilization on the planet! I thin...
I don't understand your objection. Would you rather go to prison for five years or lose a hand? Would you rather be unfairly imprisoned for five years and then be paid $10mn in compensation, or unfairly have your hand chopped off and be paid $10mn in compensation? I think most people would prefer mutilation over losing years of their lives, especially when it was a mistake. Is your point that, if someone is in prison, they can be going through the appeal process, and thus, if a mistake occurs, they'll be less damaged? Because currently it takes over eight yea...
Similar disclaimer: don't assume these are my opinions. I'm merely advocating for a devil.
If we're going for efficiency, I feel like we can get most of the safety gains with tamer measures. For example, you could cut off a petty thief's hand, or castrate a rapist. The actual procedure would be about as expensive as execution, but if a mistake was made there is still a living person to pay reparations to. I think you could also make the argument that this is less cruel than imprisoning someone for years—after all, people have a "right to life, liberty, and ...
If you're going to be talking about trust in society, you should definitely take a look at Gossner's Simple Bounds on the Value of a Reputation.
The bottom row is close to what I imagine, but without IO ports on the same edge being allowed to connect to each other (though that is also an interesting problem). These would be the three diagrams for the square:
The middle one makes a single loop, which is one-third of them in this case. My guess for how to prove the recurrence is to "glue" polygons together:
There are pairs of sizes we can glue together (if you're okay with two-sided polygons), but I haven't made much progress in this direction....
So, I'm actually thinking about something closer to this for "one loop":
This is on a single square tile, with four ports of entry/exit. What I've done is doubled the rope in each connection, so there is one connection going from the top to the bottom and a different connection going from the bottom to the top. Then you tie off the end of each connection with the start of the connection just clockwise to it.
Some friends at MIT solved this problem for a maths class, and it turns out there's a nice recurrence. Let \(p_n\) be the probability there are...
Your math is correct; those give the number of tiles and connections. I wrote some code here:
https://github.com/programjames/einstein_tiling
Here's an example:
An interesting question I have is: suppose we tied off the ends going clockwise around the perimeter of the figure. What is the probability we have exactly one loop of thread, and what is the expected number of loops? This is a very difficult problem; I know several MIT math students who spent several months on a slightly simpler problem.
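Here's a quick Monte-Carlo sketch of one version of that question (my own code, under an assumed tying convention that may not match the diagrams above): place \(2n\) ports around the perimeter, pair them with uniformly random chords, tie each port to its clockwise perimeter neighbour, and count the loops in the resulting union of two perfect matchings.

```python
# Monte-Carlo estimate of P(exactly one loop) and E[number of loops]
# for random chords on a tile, under one possible tying convention:
# port 2k is tied to port 2k+1 around the perimeter.
import random

def loop_stats(n_ports, trials=100_000):
    assert n_ports % 2 == 0
    one_loop, total = 0, 0
    for _ in range(trials):
        # Random perfect matching of the ports = the chords.
        perm = list(range(n_ports))
        random.shuffle(perm)
        chord = {}
        for i in range(0, n_ports, 2):
            a, b = perm[i], perm[i + 1]
            chord[a], chord[b] = b, a
        tie = {p: p ^ 1 for p in range(n_ports)}  # perimeter ties
        # Union of two perfect matchings = disjoint cycles; count them.
        seen = [False] * n_ports
        loops = 0
        for start in range(n_ports):
            if seen[start]:
                continue
            loops += 1
            p = start
            while not seen[p]:
                seen[p] = True
                q = chord[p]   # cross the tile along a chord...
                seen[q] = True
                p = tie[q]     # ...then follow the perimeter tie
        one_loop += loops == 1
        total += loops
    return one_loop / trials, total / trials

print(loop_stats(4))  # the square: about (2/3, 4/3) under this convention
```

Under this particular convention the square gives exactly one loop with probability \(2/3\) and \(4/3\) loops in expectation; a different tying rule changes the numbers but not the method.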
The sidelengths for the Einstein tile are all either \(1\) or \(\sqrt{3}\), except for a single side of length \(2\). I think it makes more sense to treat that side as two sides, with a \(180^\circ\) angle between them. Then you would get fourteen entry/exit points:
The aperiodic tiling from the paper cannot be put onto a hexagonal grid, and some of the tiles are flipped vertically, so you need every edge to have an entry/exit to make a Celtic knot out of it. Also, I would recommend using rather than so the arcs t...
I'm not entirely sure what you've looked at in the literature; have you seen "Direct Validation of the Information Bottleneck Principle for Deep Nets" (Elad et al.)? They use the Fenchel conjugate
\[\mathrm{KL}(P\,||\,Q) = \sup_{f} \left[\mathbb{E}_P[f]-\log \mathbb{E}_Q[e^f]\right]\]
This turns finding the KL-divergence into an optimisation problem for \(f^*(x) = \log \frac{p(x)}{q(x)}\). Since
\[I(X;Y)=\mathrm{KL}(P_{X,Y}\,||\,P_X\otimes P_Y),\]
you can train a neural network to predict the mutual information. For the information bottleneck, you would train two addition...
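For concreteness, here's a minimal sketch of that estimator (my own code, not Elad et al.'s; the network and toy data are made up): train \(f\) to maximize the bound above, using shuffled \(y\)'s within a batch as samples from \(P_X \otimes P_Y\).

```python
# Minimal Donsker-Varadhan-style lower bound on I(X;Y) in PyTorch.
import math
import torch
import torch.nn as nn

class StatNet(nn.Module):
    """The function f in the supremum, parameterized as a small MLP."""
    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def mi_lower_bound(f, x, y):
    joint = f(x, y).mean()                      # E_P[f] over joint samples
    y_shuf = y[torch.randperm(len(y))]          # samples from P_X (x) P_Y
    marginal = torch.logsumexp(f(x, y_shuf), 0) - math.log(len(y))
    return joint - marginal                     # E_P[f] - log E_Q[e^f]

# Toy data: y = x + noise, so I(X;Y) is comfortably positive.
x = torch.randn(512, 1)
y = x + 0.3 * torch.randn(512, 1)
f = StatNet(1, 1)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = -mi_lower_bound(f, x, y)  # ascend the bound
    loss.backward()
    opt.step()
print("I(X;Y) >=", -loss.item())
```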
Reversible networks (even when trained) for example have the same partition induced even if you keep stacking more layers, so from the perspective of information theory, everything looks the same
I don't think this is true? The differential entropy changes, even if you use a reversible map:
\[h(f(X)) = h(X) + \mathbb{E}\left[\log\left|\det J_f(X)\right|\right],\]
where \(J_f\) is the Jacobian of your map. Features that are "squeezed together" are less usable, and you end up with a smaller entropy. Similarly, "unsqueezing" certain features, or examining them more closely, gives a higher entropy.
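As a quick sanity check (my example): for the linear squeeze \(f(x) = x/2\) the formula gives
\[h(X/2) = h(X) + \log\tfrac{1}{2} = h(X) - \log 2,\]
so squeezing features together really does lower the differential entropy, and the stretch \(f(x) = 2x\) raises it by \(\log 2\).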
A couple things to add:
Why are conservatives for punitive correction while progressives think it doesn't work? I think this can be explained by the difference between stable equilibria and saddle points.
If you have a system where people make random "mistakes" some small fraction of the time, the stable points are known as trembling-hand equilibria. Or, similarly, if they transition to different policies some of the time, you get a thermodynamic distribution. In both models, your system is exponentially more likely to end up in states that are hard to transition out of (Ellis...
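As a toy illustration of that last claim (my own example): in a two-state Markov chain where each state is left with a small "trembling" probability, the stationary distribution weights each state by how hard it is to leave.

```python
# Two-state chain: stationary mass concentrates on the sticky state.
import numpy as np

eps_a, eps_b = 0.01, 0.10          # escape probabilities from A and B
P = np.array([[1 - eps_a, eps_a],  # row-stochastic transition matrix
              [eps_b, 1 - eps_b]])

# Stationary distribution = left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi)  # about [0.909, 0.091]; pi_A / pi_B = eps_b / eps_a = 10
```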
Oh, I did misread your post. I thought these were just people on some mailing list that had no relation to HPMOR/EA and you were planning on sending them books as advertising. This makes a lot more sense, and I'm much more cool with this form of advertising.
EDIT: I will point out, it still does scream "cult tactic" to me, probably because it is targeting specific people who do not know there is a campaign behind the scenes to get them to join the group. I don't think it is wrong to advertise to people who have given their consent, but I do think it is dangerous to have a culture where you discuss how to best advertise to specific people.
I’m confused. Are you perhaps missing some context/haven’t read the post?
Tl;dr: We have emails of 1500 unusually cool people who have copies of HPMOR (and other books) because we’ve physically sent these copies to them because they’ve filled out a form saying they want a copy.
Spam is bad (though I wouldn’t classify it as defection against other groups). People have literally given us email and physical addresses to receive stuff from us, including physical books. They’re free to unsubscribe at any point.
I certainly prefer a world where groups that try to i...
This screams "cult tactic" to me. Is the point of EA to identify high-value targets and get them to help the EA community, or to target high-value projects that help the broader community?
I'd recommend against that. It's too similar to Mormonism w/ Marriott.
Given that Euan begins his post with an axiom of materialism, it's referenced in the quote I'm responding to, and I'm responding to Euan, not talking to a general audience, I think it's your fault for interpreting it as "most people, full stop".
Dollars are essentially energy from physics, and trades are state transitions. So, in expectation entropy will increase. Suppose person \(i\) controls a proportion \(p_i\) of the dollars. In an efficient market, entropy will be maximal, so we want to find the distribution \(\{p_i\}\) maximizing \(S = -\sum_i p_i \ln p_i\).
For a given Total Societal Wealth Generation, this is the Boltzmann distribution
\[p_i \propto e^{-E_i/T},\]
where \(T\) is the temperature (frequency of trades). I subsumed \(k_B\) as a single constant in my earl...
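Filling in the standard maximum-entropy step (my addition; here \(E_i\) denotes person \(i\)'s contribution to Total Societal Wealth Generation, held fixed in expectation): maximize \(S\) with Lagrange multipliers for normalization and the wealth constraint,
\[\mathcal{L} = -\sum_i p_i \ln p_i - \lambda\Big(\sum_i p_i - 1\Big) - \beta\Big(\sum_i p_i E_i - E\Big),\]
and setting \(\partial\mathcal{L}/\partial p_i = 0\) gives \(p_i = e^{-\beta E_i}/Z\) with \(\beta = 1/T\), which is exactly the Boltzmann form above.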
Exploitation is using a superior negotiating position to inflict great costs on someone else, at small benefit to yourself.
If someone is inflicting any cost on me for their own benefit, that is not a mutually beneficial trade, so your definition doesn't solve the problem. You cannot just look at subtrades either—after all, you can always break up every trade into two transactions where you first only pay a cost, and then only get a benefit at someone else's expense.
My definition is closer to this:
...A trade is exploitative when it decreases a society's w
For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it's advantageous for others to have positive-valence false beliefs about oneself.
If you have bad characteristics (e.g. you steal from your acquaintances), isn't it in your best interest to make sure this doesn't become common knowledge? You don't...
If you're not already aware of the information bottleneck, I'd recommend The Information Bottleneck Method, Efficient Compression in Color Naming and its Evolution, and Direct Validation of the Information Bottleneck Principle for Deep Nets. You can use this with routing for forward training.
EDIT: Probably wasn't super clear why you should look into this. An optimal autoencoder should try to maximize the mutual information between the encoding and the original image. You wouldn't even need to train a decoder at the same time as the encoder! But, unfortunat...
And I migrated my comment.
Maybe there's an evolutionary advantage to thinking of yourself as distinct from the surrounding universe: that way your brain can simulate counterfactual worlds where you might take different actions. Will you actually take different actions? No, but thinking will make the one action you do take better. Since people are hardwired to think their observations are not necessarily interactions, updating in the other direction carries significant surprisal.
I think physicists like to think of the universe through a "natural laws" perspective, where things should work the same whether or not they were there to look at them. So, it seems strange when things do work differently when they look at them.
The reason wave function collapse is so surprising, is because not collapsing seems to be the norm. In fact, the best gravimeters are made by interfering the wavefunctions of entire molecules (ref: atom interferometer). We only see "wave function collapse" in particular kinds of operations, which we then define as observations. So, it isn't surprising that we observe wave function collapse—that's how the word "observe" is defined. What is surprising is that collapse even occurs to be observed, when we know it is not how the universe usually operates.
and that's because I think you don't understand them either.
What am I supposed to do with this? The one effect this has is to piss me off and make me less interested in engaging with anything you've said.
Why is that the one effect? Jordan Peterson says the one answer he routinely gives to Christians and atheists that pisses them off is, "what do you mean by that?" In an interview with Alex O'Connor he says,
...So people will say, well, do you believe that happened literally, historically? It's like, well, yes, I believe that it's okay. Okay. What
But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
I find it useful to take an axiom of extensionality—if I cannot distinguish between two things in any way, I may as well consider them the same thing for all that it could affect me. Given maths/computation/logic is the process of asserting things are the same or different, it seems to me tautologically true that maths and computation are the only symbols upon which useful discussion can be built.
...I'm not arguing against the claim th
In response to the two reactions:
Euan McLean said at the top of his post he was assuming a materialist perspective. If you believe there exists "a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness" you believe you can define consciousness with a computation. In fact, anytime you believe something can be explicitl...
I don't like this writing style. It feels like you are saying a lot of things, without trying to demarcate boundaries for what you actually mean, and I also don't see you criticizing your sentences before you put them down. For example, with these two paragraphs:
Surely there can’t be a single neuron replacement that turns you into a philosophical zombie? That would mean your consciousness was reliant on that single neuron, which seems implausible.
...The other option is that your consciousness gradually fades over the course of the operations. But surely
I did some more thinking, and realized particles are the irreps of the Poincaré group. I wrote up some more here, though this isn't complete yet:
https://www.lesswrong.com/posts/LpcEstrPpPkygzkqd/fractals-to-quasiparticles
Risk is a great study into why selfish egoism fails.
I took an ethics class at university, and mostly came to the opinion that morality was utilitarianism with an added deontological rule to not impose negative externalities on others. I.e. "Help others, but if you don't, at least don't hurt them." Both of these are tricky, because anytime you try to "sum over everyone" or have any sort of "universal rule" logic breaks down (due to Descartes' evil demon and Russell's vicious circle). Really, selfish egoism seemed to make more logical sense, but it doesn't h...
I wrote up my explanation as its own post here: https://www.lesswrong.com/posts/LpcEstrPpPkygzkqd/fractals-to-quasiparticles
I think you're looking for the irreducible representations of the Poincaré group. I'll come back and explain this later, but it's going to take a while to write up.
Utilitarianism is usually introduced as summing "equally" between people, but we all know some arrangements of atoms are more equal than others.
How do you choose to sum the utility when playing a Prisoner's Dilemma against a rock?
I think this is correct, but I would expect most low-level differences to be much less salient than a dog, and closer to 10^25 atoms dispersed slightly differently in the atmosphere. You will lose a tiny amount of weight for remembering the dog, but gain much more back for not running into it.
As it is difficult to sort through the inmates on execution day, an automatic gun is placed above each door with blanks or lead ammunition. The guard enters the cell numbers into a hashed database, before talking to the unlucky prisoner. He recently switched to the night shift, and his eyes droop as he shoots the ray.
When he wakes up, he sees "enter cell number" crossed off on the to-do list, but not "inform the prisoners". He must have fallen asleep on the job, and now he doesn't know which prisoner to inform! He figures he may as well offer all the priso...
I consider "me" to be a mapping from environments to actions, and weigh others by their KL-divergence from me.
As I like to say, ignorance does not excuse a sin, it makes two sins: the original, and the fact that you didn't put in the effort to know better. So, if you really do just possess a better method of communication—for example, you prefer talking disagreements out over killing each other—you're completely justified in flexing your superiority on the clueless outsiders. This doesn't mean it will always be effective, just that you're not breaking the "cooperate unless defected against" strategy, and the rest of rational society shouldn't punish you for it.