I had the same experience. I was essentially going to say "meta is only useful insofar as it helps you and others do object-level things, so focus on building object-level things..." oh.
This whole post seems to mostly be answering "who has the best ethnic restaurants in Europe/America?" along with "which country has the best variety of good restaurants?" and not "who has the best food?" I think that's an important distinction. Clearly, Indian, Chinese, and Middle Eastern foods are the best.
I haven't heard of ECL before, so I'm sorry if this comes off as naive, but I'm getting stuck on the intro.
For one, I assume that you care about what happens outside our light cone. But more strongly, I’m looking at values with the following property: If you could have a sufficiently large impact outside our lightcone, then the value of taking different actions would be dominated by the impact that those actions had outside our lightcone.
The laws of physics as we know them state that we cannot have any impact outside our light cone. Does ECL (or this post)...
No, you're right: use 2 or 3 instead of 4 as an average dielectric constant. The document you linked cites https://ieeexplore.ieee.org/abstract/document/7325600 which gives measured resistances and capacitances for the various layers. For Intel's 14 nm process making use of low-k, ultra-low-k dielectrics, and air gaps, they show numbers down to 0.15 fF/micron, about 15 times higher than .
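To connect that capacitance to an energy-per-length figure, here is a sketch assuming full-swing charging at V = 1 V (my assumption for illustration; the cited paper reports capacitance, not switching energy):

```python
# Energy per bit-length from wire capacitance: E = C' * V^2 / 2 per
# transition. C' = 0.15 fF/micron is the measured value cited above;
# the 1 V full swing is an illustrative assumption, not from the paper.
c_per_um = 0.15e-15      # F per micron
v_swing = 1.0            # volts (assumed)

e_per_um = 0.5 * c_per_um * v_swing ** 2   # J per micron per transition
e_per_nm = e_per_um / 1000                 # -> 7.5e-20 J/nm
```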
I remember learning that aspect ratio and dielectric constant alone don't suffice to explain the high capacitances of interconnects. Instead, you have to include fri...
This is an excellent writeup.
Minor nit, your assertion of is too simple imo, even for a Fermi estimate. At the very least, include a factor of 4 for the dielectric constant of SiO2, and iirc in real interconnects there is a relatively high "minimum" from fringing fields. I can try to find a source for that later tonight, but I would expect it ends up significantly more than . This will actually make your estimate agree even better with Jacob's.
Active copper cable at 0.5W for 40G over 15 meters is ~J/nm, assuming it actually hits 40G at the max length of 15m.
I can't access the linked article, but an active cable is not simple to model because its listed power includes the active components. We are interested in the loss within the wire between the active components.
This source has specs for a passive copper wire capable of up to 40G @5m using <1W, which works out to ~J/nm, or a bit less.
They write <1 W for every length of wire, so all you can say is <5 fJ/mm. You don't know how...
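The arithmetic behind that upper bound, sketched with the spec-sheet numbers quoted above (<1 W total for 40 Gb/s over 5 m):

```python
# Upper bound on wire energy from the cable spec discussed above:
# <1 W total cable power at 40 Gb/s over a 5 m run.
power_w = 1.0          # upper bound on total cable power, W
bitrate = 40e9         # bits per second
length_mm = 5e3        # 5 m expressed in mm

energy_per_bit = power_w / bitrate                  # J/bit, <= 25 pJ
energy_per_bit_per_mm = energy_per_bit / length_mm  # -> 5e-15 J/mm = 5 fJ/mm
```

This reproduces the "<5 fJ/mm" figure; as noted, it is only an upper bound on the loss in the wire itself.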
Indeed, the theoretical lower bound is very, very low.
Do you think this is actually achievable with a good enough sensor if we used this exact cable for information transmission, but simply used very low input energies?
The minimum is set by the sensor resolution and noise. A nice oscilloscope, for instance, will have, say, 12 bits of voltage resolution and something like 10 V full scale, so ~2 mV minimum voltage. If you measure across a 50 Ohm load then the minimum received power you can see is P = V²/R = (2 mV)²/50 Ω ≈ 80 nW. This is an underestimate, but t...
Maybe I'm interpreting these energies in a wrong way and we could violate Jacob's postulated bounds by taking an Ethernet cable and transmitting 40 Gbps of information at a long distance, but I doubt that would actually work.
Ethernet cables are twisted pair and will probably never be able to go that fast. You can get above 10 GHz with rigid coax cables, although you still have significant attenuation.
Let's compute heat loss in a 100 m LDF5-50A, which evidently has 10.9 dB/100 m attenuation at 5 GHz. This is very low in my experience, but it's what they cla...
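The start of that (truncated) calculation can be sketched directly from the datasheet number, assuming all attenuated power ends up as heat in the cable:

```python
# LDF5-50A: 10.9 dB attenuation per 100 m at 5 GHz (datasheet value).
atten_db = 10.9

p_out_over_p_in = 10 ** (-atten_db / 10)   # fraction of power delivered, ~8%
heat_fraction = 1 - p_out_over_p_in        # fraction dissipated in the cable

# With, say, 1 W input, roughly 0.92 W is lost as heat over the 100 m run.
```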
Ah, I was definitely unclear in the previous comment. I'll try to rephrase.
When you complete a circuit, say containing a battery, a wire, and a light bulb, a complicated dance has to happen for the light bulb to turn on. At near the speed of light, electric and magnetic fields around the wire carry energy to the light bulb. At the same time, the voltage throughout the wire establishes itself at the values you would expect from Ohm's law and Kirchhoff's rules and such. At the same time, electrons throughout the wire begin to feel a small force from an e...
The part your calculation fails to address is what happens if we attempt to drive this transmission by moving electrons around inside a wire made of an ordinary resistive material such as copper.
I have a number floating around in my head. I'm not sure if it's right, but I think that at GHz frequencies, electrons in typical wires are moving sub-picometer distances (possibly even femtometers?) per clock cycle.
The underlying intuition is that electron charge is "high" in some sense, so that 1. adding or removing a small number of electrons corresponds to a hu...
I come more from the physics side and less from the EE side, so for me it would be Datta's "Electronic Transport in Mesoscopic Systems", assuming the standard solid state books survive (Kittel, Ashcroft & Mermin, L&L stat mech, etc). For something closer to EE, I would say "Principles of Semiconductor Devices" by Zeghbroeck because it is what I have used and it was good, but I know less about that landscape.
Hi Alexander,
I would be happy to discuss the physics related to the topic with others. I don't want to keep repeating the same argument endlessly, however.
Note that it appears that EY had a similar experience of repeatedly not having their point addressed:
...I'm confused at how somebody ends up calculating that a brain - where each synaptic spike is transmitted by ~10,000 neurotransmitter molecules (according to a quick online check), which then get pumped back out of the membrane and taken back up by the synapse; and the impulse is then shepherded along cell
It depends on your background in physics.
For the theory of sending information across wires, I don't think there is any better source than Shannon's "A Mathematical Theory of Communication."
I'm not aware of any self-contained sources that are enough to understand the physics of electronics. You need to have a very solid grasp of E&M, the basics of solid state, and at least a small amount of QM. These subjects can be pretty unintuitive. As an example of the nuance even in classical E&M, and an explanation of why I keep insisting that "signals do not...
This is the right idea, but in these circuits there are quite a few more noise sources than Johnson noise. So, it won't be as straightforward to analyze, but you'll still end up with essentially a relatively small (compared to L/nm) constant times kT.
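For reference, the textbook Johnson-noise floor, which is just one of the noise sources mentioned above. The numbers here (R = 50 Ω, T = 300 K, B = 40 GHz) are illustrative assumptions chosen to match the cable discussion:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (assumed)
R = 50.0             # load resistance, Ohm (assumed)
B = 40e9             # bandwidth, Hz (assumed, ~40 Gb/s signaling)

v_noise = math.sqrt(4 * k_B * T * R * B)   # rms Johnson noise voltage
p_noise = k_B * T * B                      # available noise power

# v_noise is ~0.18 mV, comparable to the scope resolution discussed
# earlier in the thread; p_noise is ~0.17 nW.
```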
Ok, I will disengage. I don't think there is a plausible way for me to convince you that your model is unphysical.
I know that you disagree with what I am saying, but from my perspective, yours is a crackpot theory. I typically avoid arguing with crackpots, because the arguments always proceed basically how this one did. However, because of apparent interest from others, as well as the fact that nanoelectronics is literally my field of study, I engaged. In this case, it was a mistake.
Sorry for wasting our time.
Dear spxtr,
Things got heated here. I and many others are grateful for your effort to share your expertise. Is there a way in which you would feel comfortable continuing to engage?
Remember that for the purposes of the prize pool there is no need to convince Cannell that you are right. In fact I will not judge veracity at all just contribution to the debate (on which metric you're doing great!)
Dear Jake,
You are the second person in this thread who has explicitly signalled the need to disengage. I also realize this is a charged topic, and it's easy for things to get heated when you're just honestly trying to engage.
Best, Alexander
Please respond to the meat of the argument.
1 nm is somewhat arbitrary but around that scale is a sensible estimate for minimal single electron device spacing à la Cavin/Zhirnov. If you haven’t actually read those refs you should - as they justify that scale and the tile model.
They use this model to figure out how to pack devices within a given area and estimate their heat loss. It is true that heating of a wire is best described with a resistivity (or parasitic capacitance) that scales as 1/L. If you want to build a model out of tiles, each of which is a few nm on a side (because the FETs are roughl...
I don’t think the thermal de Broglie wavelength is at all relevant in this context, nor the mean free path, and instead I’m trying to shift discussion to “how wires work”.
This is the crux of it. I made the same comment here before seeing this comment chain.
People have been sending binary information over wires since 1840, right? I don’t buy that there are important formulas related to electrical noise that are not captured by the textbook formulas. It’s an extremely mature field.
Also a valid point. @jacob_cannell is making a strong claim: that the energy l...
For what it's worth, I think both sides of this debate appear strangely overconfident in claims that seem quite nontrivial to me. When even properly interpreting the Landauer bound is challenging due to a lack of good understanding of the foundations of thermodynamics, it seems like you should be keeping a more open mind before seeing experimental results.
At this point, I think the remarkable agreement between the wire energies calculated by Jacob and the actual wire energies reported in the literature is too good to be a coincidence. However, I suspect th...
The post is making somewhat outlandish claims about thermodynamics. My initial response was along the lines of "of course this is wrong. Moving on." I gave it another look today. In one of the first sections I found (what I think is) a crucial mistake. As such, I didn't read the rest. I assume it is also wrong.
The original post said:
...A non-superconducting electronic wire (or axon) dissipates energy according to the same Landauer limit per minimal wire element. Thus we can estimate a bound on wire energy based on the minimal assumption of 1 minimal energy un
I suspect my experience is somewhat similar to shminux's.
I simply can't follow these posts, and the experience of reading them feels odd, and even off-putting at times (in an uncanny valley sort of way). At the same time, I can see that a number of people in the comments are saying that they find great value in them.
My first guess as to why I had trouble with them was that there are basically no concrete examples given, but now I don't think that's the reason. Personally, I get a strong sense of "I must be making some sort of typical mind fallacy" here. So...
Visual Information Theory. I was already comfortable with information theory and this was still informative. This blogger's other posts are similarly high-quality.
In the end, it is just another Abrams movie: slick, SFX-heavy, and as substantial & satisfying as movie theater popcorn.
Yep.
You might want to add a spoiler note at the top, though.
It might be wishful thinking, but I feel like my smash experience improved my meatspace-agency as well.
Story time! Shortly after Brawl came out, I got pretty good at it. I could beat all my friends without much effort, so I decided to enter a local tournament. In my first round I went up against the best player in my state, and I managed to hit him once, lightly, over the course of two games. We later became pretty good friends, and I practiced with him regularly.
At some point I completely eclipsed my non-competitive friends, to the extent that playing with them felt like a chore. All I had to do was put them in certain situations where I knew how they would...
An exact copy of me may be "me" from an identity perspective, but it is a separate entity from a utilitarian perspective. The death of one is still a tragedy, even if the other survives.
You should know this intuitively. If a rogue trolley is careening toward an unsuspecting birthday cake, you'll snatch it out of the way. You won't just say, "eh, in another time that cake will survive," and then watch it squish. Unless you're some sort of monster.
I am impressed. The production quality on this is excellent, and the new introduction by Rob Bensinger is approachable for new readers. I will definitely be recommending this over the version on this site.
I didn't want to tell it to you before because I thought it might prejudice your decision unfairly.
If Draco has had the last half-hour of his memory sealed off, then why does Harry say these words to him? Shouldn't Draco respond, "What decision?"
Unless it's a more nuanced memory charm, such that he only subconsciously remembers the conversation.
...If you have a different version of QM (perhaps what Ted Bunn has called a “disappearing-world” interpretation), it must somehow differ from MWI, presumably by either changing the above postulates or adding to them. And in that case, if your theory is well-posed, we can very readily test those proposed changes. In a dynamical-collapse theory, for example, the wave function does not simply evolve according to the Schrödinger equation; it occasionally collapses (duh) in a nonlinear and possibly stochastic fashion. And we can absolutely look for experimental
I recommend reading the sequences, if you haven't already. In particular, the fun theory sequence discusses exactly these issues.
This is a little misleading. Feynman diagrams are simple, sure, but they represent difficult calculations that weren't understood at the time he invented them. There was certainly genius involved, not just perseverance.
Much more likely his IQ result was unreliable, as gwern thinks.
...Feynman was younger than 15 when he took it, and very near this factoid in Gleick's bio, he recounts Feynman asking about very basic algebra (2^x=4) and wondering why anything found it hard - the IQ is mentioned immediately before the section on 'grammar school', or middle school, implying that the 'school IQ test' was done well before he entered high school, putting him at much younger than 15. (15 is important because Feynman had mastered calculus by age 15, Gleick says, so he wouldn't be asking his father why algebra is useful at age >15.) - Given t
The idea is that it's not specifically for quotes related to rationality or other LessWrong topics.
Then you put the tea and the water in thermal contact. Now, for every possible microstate of the glass of water, the combined system evolves to a single final microstate (only one, because you know the exact state of the tea).
After you put the glass of water in contact with the cup of tea, you will quickly become uncertain about the state of the tea. In order to still know the microstate, you need to be fed more information.
That's probably it. When fitting a line using MCMC you'll get an anticorrelated blob of probabilities for slope and intercept, and if you plot one deviation in the fit parameters you get something that looks like this. I'd guess this is a non-parametric analogue of that. Notice how both grow significantly at the edges of the plots.
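A least-squares analogue of that effect, on illustrative toy data (not the original non-parametric fit): the slope/intercept covariance comes out anticorrelated, and the one-sigma band on the fitted line grows toward the edges of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)  # noisy line, made-up data

# Ordinary least-squares fit of y = m*x + b, with parameter covariance.
A = np.vstack([x, np.ones_like(x)]).T
(m, b), res, *_ = np.linalg.lstsq(A, y, rcond=None)
sigma2 = res[0] / (x.size - 2)          # residual variance estimate
cov = sigma2 * np.linalg.inv(A.T @ A)   # 2x2 covariance of (m, b)

# cov[0, 1] < 0: slope and intercept are anticorrelated, as in the MCMC blob.
# One-sigma band of the fitted line, evaluated at a few x values:
xf = np.array([0.0, 5.0, 10.0])
Xf = np.vstack([xf, np.ones_like(xf)]).T
band = np.sqrt(np.sum((Xf @ cov) * Xf, axis=1))
# band is smallest at the center of the data and grows toward the edges.
```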
Quantum mysticism written on a well-known and terrible MRA blog? -8 seems high. See the quantum sequence if you haven't already. It looks like advancedatheist and ZankerH got some buddies to upvote all of their comments, though. They all jumped by ~12 in the last couple hours.
For real, though, this is actually useless and deserves a very low score.
Super-resolution microscopy is an interesting recent development that won the Nobel Prize in Chemistry last year. Here's another article on the subject. It has been used to image mouse brains, but only near the surface. It won't be able to view the interior of any brain, but still interesting.
What's the shaded area in the very first plot? Usually this area is one deviation around the fit line, but here it's clearly way too small to be that.
You probably know this, but average energy per molecule is not temperature at low temperatures. Quantum kicks in and that definition fails. dS/dE never lets you down.
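For reference, the definition being appealed to (at fixed volume and particle number):

$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}$$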
It won't be ice. Ice has a regular crystal structure, and if you know the microstate you know that the water molecules aren't in that structure.
Expanding on the billiard ball example: let's say one part of the wall of the pool table adds some noise to the trajectory of the balls that bounce off of that spot, but doesn't sap energy from them on average. After a while we won't know the exact positions of the balls at an arbitrary time given only their initial positions and momenta. That is, entropy has entered our system through that part of the wall. I know this language makes it sound like entropy is in the system, flowing about, but if we knew the exact shape of the wall at that spot then it would...
An easy toy system is a collection of perfect billiard balls on a perfect pool table, that is, one without rolling friction and where all collisions conserve energy. For a few billiard balls it would be quite easy to extract all of their energy as work if you know their initial positions and velocities. There are plenty of ways to do it, and it's fun to think of them. This means they are at zero temperature.
If you don't know the microstate, but you do know the sum of the square of their velocities, which is a constant in all collisions, you can still tell som...
Entropy is in the mind in exactly the same sense that probability is in the mind. See the relevant Sequence post if you don't know what that means.
The usual ideal gas model is that collisions are perfectly elastic, so even if you do factor in collisions they don't actually change anything. Interactions such as van der Waals have been factored in. The ideal gas approximation should be quite close to the actual value for gases like Helium.
I agree with passive_fist, and my argument hasn't changed since last time.
If we learn that energy changes in some process, then we are wrong about the laws that the system is obeying. If we learn that entropy goes down, then we can still be right about the physical laws, as Jaynes shows.
Another way: if we know the laws, then energy is a function of the individual microstate and nothing else, while entropy is a function of our probability distribution over the microstates and nothing else.
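A toy illustration of that distinction, with made-up microstate energies:

```python
import math

# Toy system: four microstates with fixed energies (arbitrary units).
# Energy is a property of the microstate itself; entropy is a property
# of our probability distribution over microstates.
energies = [0.0, 1.0, 1.0, 2.0]

def entropy(p):
    """Shannon entropy (in bits) of a distribution over microstates."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# If we know the exact microstate, entropy is zero but energy is not:
certain = [0.0, 1.0, 0.0, 0.0]   # definitely in microstate 1

# A uniform distribution over the same four microstates has maximal
# entropy (2 bits), even though the physical laws haven't changed.
uniform = [0.25] * 4
```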
Great! It will certainly be accepted for publication in a peer-reviewed journal. The author will most likely win a Nobel Prize for his work and be hired to work at the top institution of his choice.
At this point I have to stop and ask for your credentials in astronomy. The link you posted reeks strongly of crackpot, and it's most likely not worth my time to study. Maybe you've studied cosmology in detail and think differently? If you think the author is wrong about their pet theory of general relativity, why do you think they're right in their disproof of LCDM?
Astronomy is extremely difficult. We don't know the relevant fundamental physics, and we can't perform direct experiments on our subjects. We should expect numerous problems with any cosmological model that we propose at this point. The only people who are certain of their cosmologies are the religious.
You need to do a lot more work for this sort of post to be useful. Cherry-picking weak arguments spread across the entire field of astronomy isn't enough.
I downvoted, because
... are not low status fun, but long-term life decisions that should not be taken lightly.
... are just "be rude to friends," which I consider...