All of calef's Comments + Replies

calef7-2

I’ve seen pretty uniform praise from rationalist audiences, so I thought it worth mentioning that the prevailing response I’ve seen from within a leading lab working on AGI is that Eliezer came off as an unhinged lunatic.

For lack of a better way of saying it, folks not enmeshed within the rat tradition—i.e., normies—do not typically respond well to calls to drop bombs on things, even if such a call is a perfectly rational deduction from the underlying premises of the argument. Eliezer either knew that the entire response to the essay would be dominated by... (read more)

2Noosphere89
I actually disagree with the uniform praise idea, because the response from the rationalist community was also pretty divided in its acceptance.
calef40

Probably one of the core infohazards of postmodernism is that “moral rightness” doesn’t really exist outside of some framework. Asking about “rightness” of change is kind of a null pointer in the same way self-modifying your own reward centers can’t be straightforwardly phrased in terms of how your reward centers “should” feel about such rewiring.

calef72

For literally “just painting the road”, the cost of the paint alone would be $50, yes. Doing it “right” in a way that’s indistinguishable from if the state of California did it would almost certainly require experimenting with multiple paints, time spent measuring the intersection and planning out a new paint pattern that matches a similar intersection template, and probably even signage changes: removing the wrong signs (which is likely some kind of misdemeanor if not a felony) and replacing them with the correct form. Even counting just the opportunity cost, this is looking like tens of hours of work, and hundreds-to-thousands of dollars in materials and required tools.

Agree in general. For this particular case, there don't appear to be any signs. https://goo.gl/maps/BZifQWTNCg3gdTaV7

calef60

You could probably implement this change for less than $5,000 and with minimal disruption to the intersection if you (for example) repainted the lines overnight / put authoritative cones around the drying paint.

Who will be the hero we need?

2Douglas_Knight
On twitter, Alyssa suggests $50. Why do you put it so much higher?
calef40

Google doesn’t seem interested in serving large models until it has a rock solid solution to the “if you ask the model to say something horrible, it will oblige” problem.

3scott loop
I think that is the right call. Anecdotal bad outputs would probably go viral and create a media firestorm, with the stochastic parrots twitter crowd beating them over the head along the way. Not sure you can ever get it perfect, but they should probably get close before releasing it publicly.
calef130

The sub-field of RL interested in this problem calls it “lifelong learning”, though I actually prefer your framing because it makes pretty crisp what we actually want.

I also think that solving this problem is probably closer to “something like a transformer and not very far away”, considering, e.g., the memorizing transformers work (https://arxiv.org/abs/2203.08913).

Answer by calef160

I think the difficulty with answering this question is that many of the disagreements boil down to differences in estimates for how long it will take to operationalize lab-grade capabilities. Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what it’s worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen? Etc. etc.

If I got... (read more)

5Evan_Gaensbauer
The same point was made on the Effective Altruism Forum, and it's a considerable one. Yet I expected that. The problem frustrating me is that the relative number of individuals who have volunteered their own numbers is so low it's an insignificant minority. One person doesn't disagree with themselves unless there is model uncertainty or whatever. Unless individual posts or comments among all of that debate provide specific answers or timelines, not enough people are providing helpful, quantitative information that would take trivial effort to provide. Thank you, though, for providing your own numbers.
calef160

Something worth reemphasizing for folks not in the field is that these benchmarks are not like usual benchmarks where you train the model on the task, and then see how good it does on a held-out set. Chinchilla was not explicitly trained on any of these problems. It’s typically given some context like: “Q: What is the southernmost continent? A: Antarctica Q: What is the continent north of Africa? A:” and then simply completes the prompt until a stop token is emitted, like a newline character.

And it’s performing above-average-human on these benchmarks.
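For concreteness, here is a minimal sketch of what that few-shot evaluation loop looks like. The prompt text is the example from the comment; `generate` is a hypothetical stand-in for whatever completion API serves the model, not the actual Chinchilla evaluation harness.

```python
# A minimal sketch of the few-shot evaluation described above.
# `generate` is a hypothetical callable that returns the model's next token
# as a string; the real evaluation harness is more involved than this.

FEW_SHOT_PROMPT = (
    "Q: What is the southernmost continent?\n"
    "A: Antarctica\n"
    "Q: What is the continent north of Africa?\n"
    "A:"
)

def few_shot_answer(generate, prompt: str = FEW_SHOT_PROMPT) -> str:
    """Complete the prompt token by token, stopping at a newline."""
    completion = ""
    while True:
        token = generate(prompt + completion)
        if token == "\n":  # newline acts as the stop token
            break
        completion += token
    return completion.strip()
```

The model is never told what the "task" is; it simply continues the text, and the answer is whatever it emits before the stop token.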

calef150

That got people to, I dunno, 6 layers instead of 3 layers or something? But it focused attention on the problem of exploding gradients as the reason why deeply layered neural nets never worked, and that kicked off the entire modern field of deep learning, more or less.

This might be a chicken or egg thing.  We couldn't train big neural networks until we could initialize them correctly, but we also couldn't train them until we had hardware that wasn't embarrassing / benchmark datasets that were nontrivial.

While we figured out empirical init strategies f... (read more)

calef140

For what it's worth, the most relevant difficult-to-fall-prey-to-Goodhartian-tricks measure is probably cross-entropy validation loss, as shown in this figure from the GPT-3 paper:

Serious scaling efforts are much more likely to emphasize progress here over Parameter Count Number Bigger clickbait.

Further, while this number will keep going down, we're going to crash into the entropy of human generated text at some point.  Whether that's within 3 OOM or ten is anybody's guess, though.
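For readers outside the field, the metric here is just the model's average negative log-probability on held-out tokens; this is a sketch of the generic definition, not the GPT-3 paper's exact evaluation setup:

```latex
% Cross-entropy validation loss over held-out tokens x_1, \dots, x_N:
\mathcal{L}_{\mathrm{val}} = -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\!\left(x_i \mid x_{<i}\right)
```

The entropy of human-generated text is the floor this quantity cannot go below, which is what "crash into" refers to above.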

calef50

By the standards of “we will have a general intelligence”, Moravec is wrong, but by the standards of “computers will be able to do anything humans can do”, Moravec’s timeline seems somewhat uncontroversially prescient? For essentially any task for which we can define a measurable success metric, we more or less* know how to fashion a function approximator that’s as good as or better than a human.

*I’ll freely admit that this is moving the goalposts, but there’s a slow, boring path to “AGI” where we completely automate the pipeline for “generate a function appro... (read more)

Relatedly, do you consider [function approximators for basically everything becoming better with time] to also fail to be a good predictor of AGI timelines for the same reasons that compute-based estimates fail?

Obviously yes, unless you can take the metrics on which your graphs show steady progress and really actually locate AGI on them instead of just tossing out a shot-in-the-dark biological analogy to locate AGI on them.

2Pattern
Past commentary by EY seems to consider this to be 'AI alarms' or 'the room is filling up with smoke but there's no fire alarm'.
calef130

In defense of shot-ness as a paradigm:

Shot-ness is a nice task-ambiguous interface for revealing capability that doesn’t require any cleverness from the prompt designer. Said another way, if you needed task-specific knowledge to construct the prompt that makes GPT-3 reveal it can do the task, it’s hard to compare “ability to do that task” in a task-agnostic way to other potential capabilities.

For a completely unrealistic example that hyperbolically gestures at what I mean: you could spend a tremendous amount of compute to come up with the magic password p... (read more)

calef40

Honestly, at this point, I don’t remember if it’s inferred or primary-sourced. Edited the above for clarity.

calef120

This is based on:

  1. The Q&A you mention
  2. GPT-3 not being trained on even one pass of its training dataset
  3. “Use way more compute” achieving outsized gains by training longer than by most other architectural modifications for a fixed model size (while you’re correct that bigger model = faster training, you’re trading off against ease of deployment, and models much bigger than GPT-3 become increasingly difficult to serve at prod. Plus, we know it’s about the same size, from the Q&A)
  4. Some experience with undertrained enormous language models underperfor
... (read more)
5Lukas Finnveden
To be clear: Do you remember Sam Altman saying that "they’re simply training a GPT-3-variant for significantly longer", or is that an inference from ~"it will use a lot more compute" and ~"it will not be much bigger"? Because if you remember him saying that, then that contradicts my memory (and, uh, the notes that people took that I remember reading), and I'm confused. While if it's an inference: sure, that's a non-crazy guess, and I take your point that smaller models are easier to deploy. I just want it to be flagged as a claimed deduction, not as a remembered statement. (And I maintain my impression that something more is going on; especially since I remember Sam generally talking about how models might use more test-time compute in the future, and be able to think for longer on harder questions.)
calef*130

I believe Sam Altman implied they’re simply training a GPT-3-variant for significantly longer for “GPT-4”. The GPT-3 model in prod is nowhere near converged on its training data.

Edit: changed to be less certain, pretty sure this follows from public comments by Sam, but he has not said this exactly

6Lukas Finnveden
Say more about the source for this claim? I'm pretty sure he didn't say that during the Q&A I'm sourcing my info from. And my impression is that they're doing something more than this, both on priors (scaling laws says that optimal compute usage means you shouldn't train to convergence — why would they start now?) and based on what he said during that Q&A.
calef90

OpenAI is still running evaluations.

calef290

This was frustrating to read.

There’s some crux hidden in this conversation regarding how much humanity’s odds depend on the level of technology (read: GDP) increase we’ll be able to achieve with pre-scary-AGI. It seems like Richard thinks we could be essentially post-scarcity, thus radically changing the geopolitical climate (and possibly making collaboration on an X-risk more likely? (this wasn’t spelled out clearly)). I actually couldn’t suss out what Eliezer thinks from this conversation—possibly that humanity’s odds are basically independent of the a... (read more)

calef30

Sure, but you have essentially no guarantee that such a model would remain contained to that group, or that the insights gleaned from that group could be applied unilaterally across the world before a “bad”* actor reimplemented the model and started asking it unsafe prompts.

Much of the danger here is that once any single lab on earth can make such a model, state actors probably aren’t more than 5 years behind, and likely aren’t more than 1 year behind, based on the economic value that an AGI represents.

  • “bad” here doesn’t really mean evil in intent, just an actor that is unconcerned with the safety of their prompts, and thus likely to (in Eliezer’s words) end the world
calef170

I don’t think the issue is the existence of safe prompts, the issue is proving the non-existence of unsafe prompts. And it’s not at all clear that a GPT-6 that can produce chapters from 2067EliezerSafetyTextbook is not already past the danger threshold.

5Razied
There would clearly be unsafe prompts for such a model, and it would be a complete disaster to release it publicly, but a small safety-oriented team carefully poking at it in secret in a closed room without internet is something different. In general such a team can place really very harsh safety restrictions on a model like this, especially one that isn't very agentic at all like GPT, and I think we have a decent shot at throwing enough of these heuristic restrictions at the model that produces the safety textbook that it would not automatically destroy the earth if used carefully.
calef80

If you haven't already, you might consider speaking with a doctor. Sudden, intense changes to one's internal sense of logic are often explainable by an underlying condition (as you yourself have noted). I'd rather not play the "diagnose a person over the internet" game, nor encourage anyone else here to do so. You should especially see a doctor if you actually think you've had a stroke. It is possible to recover from many different sorts of brain trauma, and the earlier you act, the better odds you have of identifying the problem (if it exists!).

0wMattDodd
Thank you for the response. My leading theories, based on the research I've been able to do, are either a dissociative episode or a stroke, as they seem to fit my experience the best--although not what I consider WELL. I discussed it extensively with my aunt, who is a psychologist, and her theory is (predictably) dissociative episode, although she admits it doesn't fit terribly well. Her recommendation was to wait and see, since I seem to have returned to normal and don't show any signs of permanent damage. I would very much like to have a basic CT and/or MRI scan done to eliminate the possibility of any obvious brain irregularities, but since I am 1) Poor 2) Uninsured and 3) American, even just that would entail an extremely significant long-term financial burden. My friends and family seem to be about evenly split between those who are angry with me for not getting a scan regardless and those who are baffled why I would even want a scan.
calef70

What can a "level 5 framework" do, operationally, that is different than what can be done with a Bayes net?

I admit that I don't understand what you're actually trying to argue, Christian.

0ChristianKl
Do well at problems that require developing an ontology to represent the problem, like Bongard problems (see Chapman's post on metarationality). Yes, fully understanding would likely mean that you need to spend time understanding a new conceptual framework. It's not as easy as simply picking up another mental trick. But in this thread, my point isn't to argue that everybody should adopt meta-rationality but to illustrate that it's actually a different way of looking at the world.
calef60

Hi Flinter (and welcome to LessWrong)

You've resorted to a certain argumentative style in some of your responses, and I wanted to point it out to you. Essentially, someone criticizes one of your posts, and your response is something like:

"Don't you understand how smart John Nash is? How could you possibly think your criticism is something that John Nash hadn't thought of already?"

The thing about ideas, notwithstanding the brilliance of those ideas or where they might have come from, is that communicating those ideas effectively is just as import... (read more)

0dropspindle
I am nearly certain Flinter is just Eugene's new way of trolling now that there aren't downvotes. Don't feed the troll
calef10

I've found that I only ever get something sort of like sleep paralysis when I sleep flat on my back, so +1 for sleeping orientation mattering for some reason.

calef40

This is essentially what username2 was getting at, but I'll try a different direction.

It's entirely possible that "what caused the big bang" is a nonsensical question. 'Causes' and 'Effects' only exist insofar as there are things which exist to cause causes and effect effects. The "cause and effect" apparatus could be entirely contained within the universe, in the same way that it's not really sensible to talk about "before" the universe.

Alternatively, it could be that there's no "before" because the universe has a... (read more)

0Brillyant
It makes a lot of sense that the nature of questions regarding the "beginning" of the universe is nonsensical and anthropocentric, but it still feels like a cheap response that misses the crux of the issue. It feels like "science will fill in that gap eventually" and we ought to trust that will be so. Matter exists. And there are physical laws in the universe that exist. I accept, despite my lack of imagination and fancy scientific book learning, that this is basically enough to deterministically allow intelligent living beings like you and me to be corresponding via our internet-ed magical picture boxes. Given enough time, just gravity and matter gets us to here—to all the apparent complexity of the universe. I buy that. But whether the universe is eternal, or time is circular, or we came from another universe, or we are in a simulation, or whatever other strange non-intuitive thing may be true in regard to the ultimate origins of everything, there is still this pesky fact that we are here. And everything else is here. There is existence where it certainly seems there just as easily could be non-existence. Again, I really do recognize the silly anthropocentric nature of questions about matters like these. I think you are ultimately right that the questions are nonsensical. But, to my original question, it seems a simple agnostic-ish deism is a fairly reasonable position given the infantile state of our current understanding of ultimate origins. I mean, if you're correct, we don't even know that we are asking questions that make sense about how things exist...then how can we rule out something like a powerful, intelligent creative entity (that has nothing to do with any revealed religion)? I'm not asking rhetorically. How do you rule it out?
calef10

If you aren't interested in engaging with me, then why did you respond to my thread? Especially when the content of your post seems to be "No, you're wrong, and I don't want to explain why I think so"?

3Douglas_Knight
It is important to make disagreements common knowledge. That would justify a comment of the form you suggest. That is, however, not the comment I left.
calef10

What precisely is Eliezer basically correct about on the physics?

It is true that non-unitary gates allow you to break physics in interesting ways. It is absolutely not true that violating conservation of energy will lead to a nonunitary gate. Eliezer even eventually admits (or at least admits that he 'may have misunderstood') an error in the physics here. (see this subthread).

This isn't really a minor physics mistake. Unitarity really has nothing at all to do with energy conservation.

1Douglas_Knight
By that standard of admission, "Gauss the Sane" admitted that Eliezer was correct. I was very vague because I was not interested in engaging with you.
calef10

Haha fair enough!

calef10

I never claimed that whether he was right or not was unimportant. I just didn't focus on that aspect of the argument because it's been discussed at length elsewhere (the reddit thread, for example). And I've repeatedly offered to talk about the object level point if people were interested.

I'm not sure why someone's sense of fairness would be rankled when I directly link to essentially all of the evidence on the matter. It would be different if I was just baldly claiming "Eliezer done screwed up" without supplying any evidence.

calef00

I never said that determining the sincerity of criticism would be easy. I can step through the argument with links, if you'd like!

4Richard_Kennaway
Your dedication to the cause of discerning who has rightly discerned who has rightly discerned errors in HPMOR greatly exceeds mine. I shall leave it there.
calef50

Yes, I wrote this article because Eliezer very publicly committed the typical sneering fallacy. But I'm not trying to character-assassinate Eliezer. I'm trying to identify a poisonous sort of reasoning, and indicate that everyone does it, even people who spend years of their lives writing about how to be more rational.

I think Eliezer is pretty cool. I also don't think he's immune from criticism, nor do I think he's an inappropriate target of this sort of post.

1Richard_Kennaway
The problem is that there is no way for anyone to check your claims about the cited thread without closely reading a large amount of contentious discussion of HPMOR and all the parts of HPMOR being talked about, in order to work out who is being wrong on the Internet. Whoever is going to do that?
calef20

Which makes for a handy immunizing strategy against criticisms of your post, n'est-ce pas?

It's my understanding that your criticism of my post was that the anecdote would be distracting. One of the explicit purposes of my post was to examine a polarizing example of [the fallacy of not taking criticism seriously] in action--an example which you proceed to not take seriously in your very first post in this thread simply because of a quote you have of Eliezer blowing the criticism off.

The ultimate goal here is to determine how to evaluate criticism. Learning how to do that when the criticism comes from across party lines is central.

calef20

I mean, if you'd like to talk about the object level point of "was the criticism of Eliezer actually true", we can do that. The discussion elsewhere is kind of extensive, which is why I tried to focus on the meta-level point of the Typical Sneer Fallacy.

9buybuydandavis
"I'm going to use Joe as an example of The Bad Thing, but whether or not he actually is an example isn't the real point." On my meta-level point, do you see how this would rankle a person's basic sense of fairness regardless of how they felt about Joe?
5Richard_Kennaway
I'm not particularly interested in that. It just seemed to me that the example was the point of the article and the meta-stuff was there only to be a support for it. I mean, people in class (d) are straightforwardly committing what one might call the Sneer Fallacy. Sneering is their bottom line, and it's even easier to sneer than to make an argument. To adapt C.S. Lewis, it is hard to make an argument, but effortless to pretend that an argument has been made. A similar sentiment is expressed in the catchphrase "haters gonna hate". But you skip over that and go straight to a meta-fallacy of misidentifying someone as committing Sneer. This seems too small a target to be worth the attention of a post. Eliezer, on the other hand, is a big target. Therefore Eliezer, and not Sneer Fallacy Fallacy, is the real subject.
calef20

I suspect how readers respond to my anecdote about Eliezer will fall along party lines, so to speak.

Which is kind of the point of the whole post. How one responds to the criticism shouldn't be a function of one's loyalty to Eliezer. Especially when su3su2u1 explicitly isn't just "making up most of" his criticism. Yes, his series of review-posts are snarky, but he does point out legitimate science errors. That he chooses to enjoy HPMOR via (c) rather than (a) shouldn't have any bearing on the true-or-false-ness of his criticism.

I've read su... (read more)

4buybuydandavis
Which makes for a handy immunizing strategy against criticisms of your post, n'est–ce pas? Nor, perhaps, is yanking in opposition to people's party affiliations useful in trying to get them to listen to an idea. I'm actually all for snark and ridicule, but then you really need to be hitting your target, because it is reasonable for people to update that a criticism is relatively unconcerned about finding the truth when it demonstrates another motivation being pursued.
0Pfft
I'm not sure he actually enjoyed it (e.g. 1, 2), be it through fault-finding or otherwise...
9Richard_Kennaway
My response, the moment I read the paragraph beginning "This is the point in the article where..." was, "This is the real subject of the post and will be a criticism of the person named. The preamble was written to generate priming and framing for the claims, which will be unsubstantiated other than by reference to a discussion somewhere else."
calef10

I mean, sure, but this observation (i.e., "We have tools that allow us to study the AI") is only helpful if your reasoning techniques allow you to keep the AI in the box.

Which is, like, the entire point of contention, here (i.e., whether or not this can be done safely a priori).

I think that you think MIRI's claim is "This cannot be done safely." And I think your claim is "This obviously can be done safely" or perhaps "The onus is on MIRI to prove that this cannot be done safely."

But, again, MIRI's whole mission is to figure out the extent to which this can be done safely.

calef20

As far as I can tell, you're responding to the claim, "A group of humans can't figure out complicated ideas given enough time." But this isn't my claim at all. My claim is, "One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality." This is trivially true once the number of machines which are "smarter" than humans exceeds the total number of humans. The extent to which it is difficult to predict/model the "smarter" ma... (read more)

1[anonymous]
Whatever reasoning technique is available to a super-intelligence is available to humans as well. No one is mandating that humans who build an AGI check their work with pencil and paper.
calef140

This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understandin

... (read more)
3Sable
Isn't using a laptop as a metaphor exactly an example of reasoning by analogy? I think one of the points trying to be made was that because we have this uncertainty about how a superintelligence would work, we can't accurately predict anything without more data. So maybe the next step in AI should be to create an "Aquarium," a self-contained network with no actuators and no way to access the internet, but enough processing power to support a superintelligence. We then observe what that superintelligence does in the aquarium before deciding how to resolve further uncertainties.
-1[anonymous]
That may be a valid concern, but it requires evidence as it is not the default conclusion. Note that quantum physics is sufficiently different that human intuitions do not apply, but it does not take a physicist a “prohibitively long” time to understand quantum mechanical problems and their solutions. As to your laptop example, I'm not sure what you are attempting to prove. Even if one single engineer doesn't understand how every component of a laptop works, we are nevertheless very much able to reason about the systems-level operation of laptops, or the development trajectory of the global laptop market. When there are issues, we are able to debug them and fix them in context. If anything, the example shows how humanity as a whole is able to complete complex projects like the creation of a modern computational machine without being constrained to any one individual understanding the whole. Edit: gaaaah. Thanks Sable. I fell for the very trap of reasoning by analogy I opined against. Habitual modes of thought are hard to break.
V_V160

Edit: I should add that this is already a problem for, ironically, computer-assisted theorem proving. If a computer produces a 10,000,000 page "proof" of a mathematical theorem (i.e., something far longer than any human could check by hand), you're putting a huge amount of trust in the correctness of the theorem-proving-software itself.

No, you just need to trust a proof-checking program, which can be quite small and simple, in contrast with the theorem proving program, which can be arbitrarily complex and obscure.
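As a toy illustration of that asymmetry: a checker only has to verify that each line of the proof is justified by earlier lines, regardless of how the proof was found. The sketch below is a hypothetical minimal checker for proofs built from premises and modus ponens only, not any real proof assistant's format; real proof-checking kernels are larger, but still tiny next to the provers that feed them.

```python
# Toy proof checker: each step is either a premise or a modus ponens
# application citing formulas proven earlier. Formulas are plain strings;
# an implication "p -> q" is the tuple ("->", p, q).

def check_proof(premises, steps):
    """Return True iff every step is justified, False otherwise."""
    proven = set()
    for step in steps:
        if step[0] == "premise":
            _, formula = step
            if formula not in premises:
                return False
        elif step[0] == "modus_ponens":
            _, antecedent, implication = step
            valid = (
                antecedent in proven
                and implication in proven
                and isinstance(implication, tuple)
                and implication[0] == "->"
                and implication[1] == antecedent
            )
            if not valid:
                return False
            formula = implication[2]  # the conclusion of the implication
        else:
            return False
        proven.add(formula)
    return True

# Example: from P and P -> Q, derive Q.
premises = {"P", ("->", "P", "Q")}
proof = [
    ("premise", "P"),
    ("premise", ("->", "P", "Q")),
    ("modus_ponens", "P", ("->", "P", "Q")),
]
assert check_proof(premises, proof)
```

The prover can be arbitrarily complex and untrusted; as long as its output passes a checker like this, only the checker needs to be audited.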

calef-10

Perhaps because this might all be happening within the mirror, thus realizing both Harry!Riddle's and Voldy!Riddle's CEVs simultaneously.

calef60

It seems like Mirror-Dumbledore acted in accordance with exactly what Voldemort wanted to see. In fact, Mirror-Dumbledore didn't even reveal any information that Voldemort didn't already know or suspect.

Odds of Dumbledore actually being dead?

1MathMage
The Mirror in canon isn't limited to the viewer's knowledge (cf. the appearances of Harry's extended family); it's unlikely that the HPMOR version has been given that limitation, so fulfilling that limitation is not a strong indicator. Moreover, Quirrellmort's CEV probably is not to defeat Dumbledore, so the Mirror should not show it. I think odds are high that this was the real Dumbledore, and that he has been, not killed, but cast out of the time stream. Removed from game, if you will.
0buybuydandavis
Which is exactly how the mirror was always advertised to work?
5Vaniver
He's not dead; he's just out of time. It seems likely that Harry can restore him, if he manages to make it out of this alive and with his values intact.
calef260

Honestly, the only "winning" strategy here is to not argue with people on the comments sections of political articles.

If you must, try to cast the argument in a way that avoids the standard red tribe / blue tribe framing. Doing this can be hard because people generally aren't in the business of having political debates with an end goal of dissolving an issue--they just want to signal their tribe--hence why arguing on the internet is often a waste of time.

As to the question of authority: how would you expect the conversation to go if you were an ... (read more)

5Vaniver
Exactly. I would also include Is That Your True Rejection?
3[anonymous]
Thank you so much for your comment, it is really helpful! I use the internet to put into practice what I am learning about critical thinking and argumentation (critical thinking course on Khan Academy). In environments like the Reddit Ethereum page it is much more reason-centered and there are fewer dishonest participants, so when my arguments are refuted it is very productive and I learn a lot. But on newspaper sites and blogs it's more like a jungle. I think what you say about "the challenger has already made up their mind" is the key. I will read the articles at the links you posted, thx!
calef30

Yeah, it's already been changed:

A blank-eyed Professor Sprout had now risen from the ground and was pointing her own wand at Harry.

calef100

So when Dumbledore asked the Marauder's Map to find Tom Riddle, did it point to Harry?

Shmi410

It tried to point to all the horcruxes in Hogwarts at once, and crashed because of an unchecked stack overflow.

calef40

Here's a discussion of the paper by the authors. For a sort of critical discussion of the result, see the comments in this blog post.

4Strilanc
Urgh...
calef10

This is a good point. The negative side gives good intuition for the "negative temperatures are hotter than any positive temperature" argument.

2Luke_A_Somers
What gives a better intuition is thinking in inverse temperature. Regular temperature is, 'how weakly is this thing trying to grab more energy so as to increase its entropy'. Inverse temperature is 'how strongly...' and when that gets down to 0, it's natural to see it continue on into negatives, where it's trying to shed energy to increase its entropy.
calef10

The distinction here goes deeper than calling a whale a fish (I do agree with the content of the linked essay).

If a layperson asks me what temperature is, I'll say something like, "It has to do with how energetic something is" or even "something's tendency to burn you". But I would never say "It's the average kinetic energy of the translational degrees of freedom of the system" because they don't know what most of those words mean. That latter definition is almost always used in the context of, essentially, undergraduate pro... (read more)

calef00

Because one is true in all circumstances and the other isn't? What are you actually objecting to? That physical theories can be more fundamental than each other?

1DanielLC
I admit that some definitions can be better than others. A whale lives underwater, but that's about the only thing it has in common with a fish, and it has everything else in common with a mammal. You could still make a word to mean "animal that lives underwater". There are cases where where it lives is so important that that alone is sufficient to make a word for it. If you met someone who used the word "fish" to mean "animal that lives underwater", and used it in contexts where it was clear what it meant (like among other people who also used it that way), you might be able to convince them to change their definition, but you'd need a better argument than "my definition is always true, whereas yours is only true in the special case that the fish is not a mammal".
calef00

I just mean as definitions of temperature. There's temperature(from kinetic energy) and temperature(from entropy). Temperature(from entropy) is a fundamental definition of temperature. Temperature(from kinetic energy) only tells you the actual temperature in certain circumstances.

1DanielLC
Why is one definition more fundamental than another? Why is only one definition "actual"?
calef10

Only one of them actually corresponds with temperature for all objects. They are both equal for one subclass of idealized objects, in which case the "average kinetic energy" definition follows from the entropic definition, not the other way around. All I'm saying is that it's worth emphasizing that one definition is strictly more general than the other.

3DanielLC
Average kinetic energy always corresponds to average kinetic energy, and the amount of energy it takes to create a marginal amount of entropy always corresponds to the amount of energy it takes to create a marginal amount of entropy. Each definition corresponds perfectly to itself all of the time, and applies to the other in the case of idealized objects. How is one more general?
calef50

I think more precisely, there is such a thing as "the average kinetic energy of the particles", and this agrees with the more general definition of temperature "1 / (derivative of entropy with respect to energy)" in very specific contexts.

That there is a more general definition of temperature which is always true is worth emphasizing.
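For reference, a sketch of the two definitions being contrasted in this thread; the second relation is the ideal-monatomic-gas case, the "very specific context" in which they coincide:

```latex
% General (entropic) definition of temperature:
\frac{1}{T} = \frac{\partial S}{\partial E}
% For an ideal monatomic gas this reduces to the kinetic definition,
% with the average translational kinetic energy per particle:
\langle E_{\mathrm{kin}} \rangle = \tfrac{3}{2}\, k_B T
```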

2Luke_A_Somers
Rather than 'in very specific contexts' I would say 'in any normal context'. Just because it's not universal doesn't mean it's not the overwhelmingly common case.
calef10

I don't see the issue in saying [you don't know what temperature really is] to someone working with the definition [T = average kinetic energy]. One definition of temperature is always true. The other is only true for idealized objects.

0buybuydandavis
Nobody knows what anything really is. We have more or less accurate models.
0DanielLC
What do you mean by "true"? They both can be expressed for any object. They are both equal for idealized objects.
calef220

According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.

This would correspond to a change in specific orbital energy from 132712440018/(2 × (1 AU)) to 132712440018/(2 × (7 AU)) (where the 12-digit constant is the standard gravitational parameter of the sun, in km³/s²). This is something like 5.6 × 10^10 Joules per kilogram, or about 3.4 × 10^34 Joules when we restore the reduced mass of the earth/sun system (which I'm approximating as just the mass of the earth).

Wolframalpha helpfully supplies that this is 28 times the t... (read more)
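To make the arithmetic explicit, here is a sketch of the relations being used, under the circular-orbit approximation and writing μ⊙ for the solar gravitational parameter quoted above:

```latex
% Specific orbital energy of a circular orbit of radius a about the Sun:
\varepsilon(a) = -\frac{\mu_\odot}{2a}
% Specific energy needed to raise the orbit from 1 AU to 7 AU:
\Delta\varepsilon = \frac{\mu_\odot}{2}\left(\frac{1}{1\,\mathrm{AU}} - \frac{1}{7\,\mathrm{AU}}\right)
% Total energy: multiply by Earth's mass (standing in for the reduced mass).
```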

Eniac180

I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.
