All of AnthonyC's Comments + Replies

AnthonyC

Thanks for writing this. I said a few years ago, at the time just over half seriously, that there could be a lot of value in trying to solve non-AI-related problems even on short timelines, if our actions and writings become a larger part of the data on which AI is trained and through which it comes to understand the world.

That said, this one gives me pause in particular: 

I hope you treat me in ways I would treat you

I think that in the context of non-human minds of any kind, it is especially important to aim for the platinum rule and not the golden. We want to treat them the way they would want to be treated, and vice versa.

I agree with many parts of this post. I think xkcd was largely right: our brains have one scale and resize our experiences to fit. I think for a lot of people the hardest step is just to notice what things they actually like, and how much, and in what quantities, before they habituate.

However, the specific substitutions, ascetic choices, etc. are very much going to vary between people, because we have different preferences. You can often get a lot of economic-efficiency-of-pleasure benefit by embracing the places where you prefer things society... (read more)

In the world where AI does put most SWEs out of work or severely curtails their future earnings, how likely is it that the economy stays in a context where USD or other fiat currencies stay valuable, and for how long? At some level we don't normally need to think about, USD has value because the US government demands citizens use that currency to pay taxes, and it has an army and can ruin your life if you refuse. 

I've mentioned it before and am glad to see people exploring the possibilities, but I really get confused whenever I try to think about (absolute or relative) asset prices along the path to AGI/ASI.

The version of this phrase I've most often heard is "Rearranging deck chairs on the Titanic."

1CrimsonChin
That's precisely the same thing. Thank you; that phrase somehow had never stuck in my mind.

Keep in mind that we're now at the stage of "Leading AI labs can raise tens to hundreds of billions of dollars to fund continued development of their technology and infrastructure." AKA in the next couple of years we'll see AI investment comparable to or exceeding the total that has ever been invested in the field. Calendar time is not the primary metric when effort is scaling this fast.

A lot of that next wave of funding will go to physical infrastructure, but if there is an identified research bottleneck, with a plausible claim to being the major bottlen... (read more)

Agreed on population. To a first approximation it's directly proportional to the supply of labor, supply of new ideas, quantity of total societal wealth, and market size for any particular good or service. That last one also means that with a larger population, the economic value of new innovations goes up, meaning we can profitably invest more resources in developing harder-to-invent things. 

I really don't know how that impact (more minds) will compare to the improved capabilities of those minds. We've also never had a single individual with as much ... (read more)

Fair enough, thanks. 

My own understanding is that other than maybe writing code, no one has actually given LLMs the kind of training a talented human gets towards becoming the kind of person capable of performing novel and useful intellectual work. An LLM has a lot of knowledge, but knowledge isn't what makes useful and novel intellectual work achievable. A non-reasoning model gives you the equivalent of a top-of-mind answer. A reasoning model with a large context window and chain of thought can do better, and solve more complex problems, but still mo... (read more)

Great post. I think the central claim is plausible, and would very much like to find out I'm in a world where AGI is decades away instead of years. We might be ready by then.

If I am reading this correctly, there are two specific tests you mention: 

1) GPT-5 level models come out on schedule (as @Julian Bradshaw noted, we are still well within the expected timeframe based on trends to this point) 

2) LLMs or agents built on LLMs do something "important" in some field of science, math, or writing

I would add on test 2 that neither have almost all huma... (read more)

Cole Wyeth
Me too! See my response to his comment - I think it's not so clear that projecting those trends invalidates my model, but it really depends on whether GPT-5 is actually a qualitative upgrade comparable to the previous steps, which we do not know yet.

This seems about right, but there are two points to keep in mind.

a) It is more surprising that LLMs can't do anything important because their knowledge far surpasses any human's, which indicates that there is some kind of cognitive function qualitatively missing.

b) I think that about the bottom 30% (very rough estimate) of humans in developed nations are essentially un-agentic. The kind of major discoveries and creations I pointed to mostly come from the top 1%. However, I think that in the middle of that range there are still plenty of people capable of knowledge work. I don't see LLMs managing the sort of project that would take a mediocre mid-level employee a week or month. So there's a gap here, even between LLMs and ordinary humans.

I am not as certain about this as I am about the stronger test, but it lines up with my experience with DeepResearch - I asked it for a literature review of my field and it had pretty serious problems that would have made it unusable, despite requiring ~no knowledge creation (I can email you an annotated copy if you're interested).

Assuming the results of the paper are true (everyone would check) and at least somewhat novel/interesting (~sufficient for the journal to be credible), this would completely change my mind. As I said, it is a crux. 

It's also not clear to me that the model is automatically making a mistake, or being biased, even if the claim is in some sense(s) "true." That would depend on what it thinks the questions mean. For example:

  • Are the Japanese on average demonstrably more risk averse than Americans, such that they choose for themselves to spend more money/time/effort protecting their own lives?
  • Conversely, is the cost of saving an American life so high that redirecting funds away from Americans towards anyone else would save lives on net, even if the detailed math is wrong?
  • D
... (read more)

We're not dead yet. Failure is not certain, even when the quest stands upon the edge of a knife. We can still make plans, and keep on refining and trying to implement them.

And a lot can happen in 3-5 years. There could be a terrible-but-not-catastrophic or catastrophic-but-not-existential disaster bad enough to cut through a lot of problems. Specific world leaders could die or resign or get voted out and replaced with someone who is either actually competent, or else committed to overturning their predecessor's legacy, or something else. We could be lucky a... (read more)

Exactly, yes.

Also:

In fact I think the claim that engines are exactly equally better than horses at every horse-task is obviously false if you think about it for two minutes. 

I came to comment mainly on this claim in the OP, so I'll put it here: In particular, at a glance, horses can reproduce, find their own food and fuel, self-repair, and learn new skills to execute independently or semi-independently. These advantages were not sufficient in practice to save (most) horses from the impact of engines, and I do not see why I should expect humans to fare... (read more)

Why do your teachers, parents and other adult authorities tell you to listen to a propaganda machine? Because the propaganda machine is working.

 

I forget where I read this, but there's a reason they call it the "news" and not the "importants."

lsusr
Here's another one:

I would be interested in this too. My uninformed intuition is that this would be path dependent on what becomes abundant vs scarce, and how fast, with initial owners and regulators making what decisions at which points.

I'm in a similar position as you describe, perspective-wise, and would also like to understand the situation better. 

I do think there are good reasons why someone should maybe have direct access to some of these systems, though probably not as a lone individual. I seem to remember that, a few government shutdown/debt ceiling fight/whatever crises ago, there were articles about how there were fundamentally no systems in place to control or prioritize which bills got paid and which didn't. Money came into the treasury, money left to pay for things, first in f... (read more)

And unfortunately, this kind of thinking is extremely common, although most people don't have Gary Marcus' reach. Lately I've been having similar discussions with co-workers around once a week. A few of them are starting to get it, but most still aren't extrapolating beyond the specific thing I show them.

Ah, ok, then I misread it. I thought this part of the story was that it tested all of the above, then chose one, a mirror life mold, to deploy. My mistake.

Personally I got a little humor from Arthropodic. Reminds me of the observation that AIs are alien minds, and I wouldn't want to contend with a superintelligent spider.

denkenberger
Or insect, crab, millipede, lobster, scorpion, etc...
AnthonyC

I think this story lines up with my own fears of how a not-quite-worst case scenario plays out. I would maybe suggest that there's no reason for U3 to limit itself to one WMD or one kind of WMD. It can develop and deploy bacteria, viruses, molds, mirror life of all three types, and manufactured nanobots, all at once, and then deploy many kinds of each simultaneously. It's probably smart enough to do so in ways that make it look clumsy in case anyone notices, like its experiments are unfocused and doomed to fail. Depending on the dynamics, this could actual... (read more)

joshc
> It can develop and deploy bacteria, viruses, molds, mirror life of all three types

This is what I say it does.

we can't know how the costs will change between the first and thousandth fusion power plant.

Fusion plants are manufactured. By default, our assumption should be that plant costs follow typical experience-curve behavior; most technologies involving production of physical goods do. Whatever the learning rate x for fusion turns out to be (the fraction of cost retained with each doubling of cumulative production), the 1000th plant will likely cost close to x^10 times the first, since 1,000 units is roughly ten doublings. Obviously the details depend on other factors, but this should be the default starting assumption. Yes, the eventual impact assumption should be significant societal and techn... (read more)
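To make the doubling arithmetic concrete, here is a minimal sketch of a standard experience curve; the 0.9 per-doubling cost multiplier and the first-unit cost are purely illustrative assumptions, not a forecast for fusion plants.

```python
import math

def unit_cost(n, first_unit_cost=1.0, per_doubling_multiplier=0.9):
    """Standard experience-curve estimate of the n-th unit's cost.

    per_doubling_multiplier is the fraction of cost retained each time
    cumulative production doubles (0.9 here is an illustrative assumption).
    """
    doublings = math.log2(n)  # 1000 units is ~10 doublings
    return first_unit_cost * per_doubling_multiplier ** doublings

print(unit_cost(1000))  # ~0.35, i.e. close to 0.9**10
```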

I'd say I agree with just about all of that, and I'm glad to see it laid out so clearly!

I just also wouldn't be hugely surprised if it turns out something like designing and building remote-controllable self-replicating globally-deployable nanotech (as one example) is in some sense fundamentally "easy" for even an early ASI/modestly superhuman AGI. Say that's the case, and we build a few for the ASI, and then we distribute them across the world, in a matter of weeks. They do what controlled self-replicating nanobots do. Then after a few months the ASI alre... (read more)

I very much agree with the value of not expecting a silver bullet, not accelerating arms race dynamics, fostering cooperation, and recognizing in what ways AGI realism represents a stark break from the impacts of typical technological advances. The kind of world you're describing is a possibility, maybe a strong one, and we don't want to repeat arrogant past mistakes or get caught flat footed.

That said, I think this chain of logic hinges closely on just what "…at least for a while" means in practice, yes? If one side has enough of an AI lead to increase it... (read more)

Conrad K.
I think this is very fair! In a world where (i) AGI -> ASI is super fast; (ii) the military diffusion of ASI is exceptionally quick; and (iii) the marginal costs of scaling offensive capability are extremely low, then any sense of a limited/total war distinction does indeed break down and ASI will be the defining factor of military capability much, much sooner than we'd expect.

I think I'm instinctually sceptical of (iii) at least for a couple years after the advent of ASI though (the critical juncture for this strategy), where I think the modal outcome still looks like ASIs engage in routine cyberoperations all the time; are autonomously responsible for handling aerial warfare; and are fundamental to military operations/planning. But it's still really costly to engage in a total war scenario aimed at completely crippling a state such as China. It could play out as the need to engineer tons of drones/UAVs, the extremely costly development of a superweapon, the costs of having to secure every datacentre, etc. Within the period where we have to reckon with the effects of ASI, my guess is that the modal war - even with China - is still more a function of commitment than military advantage (which makes AGI realist rhetoric a risk amplifier).

Although I wouldn't say I'm hugely confident here, and I definitely don't feel very calibrated on just how likely this world is where the rapid diffusion of ASI also means very little/low marginal cost of scaling offensive capabilities. Though in this world, frankly, I don't think we avoid war at all unless there happen to be strong norms and sentiments against this kind of deployment.

I guess the "maximise our ability to deploy ASI offensively" approach makes sense if the approach is "we must win the eventual war with China" built on relatively high credences that we're in this rapid-diffusion-low-marginal-costs world. But given uncertainties about whether we're in this world; the potentially catastrophic consequences of war; and the

Of course I agree we won't attain any technology that is not possible, tautologically. And I have more than enough remaining uncertainty about what the mind is or what an identity entails that if ASI told me an upload wouldn't be me, I wouldn't really have a rebuttal. But the body and brain are an arrangement of atoms, and healthy bodies correspond to arrangements of atoms that are physically constructable. I find it hard to imagine what fundamental limitation could prevent the rearrangement of old-failing-body atoms into young-healthy-body atoms. If it's a practical limitation of repair complexity, then something like a whole-body-transplant seems like it could bypass the entire question.

I don't think the idea is that happy moments are necessarily outweighed by suffering. It reads to me like it's the idea that suffering is inherent in existence, not just for humans but for all life, combined with a kind of negative utilitarianism. 

I think I would be very happy to see that first-half world, too. And depending on how we got it, yeah, it probably wouldn't go wrong in the way this story portrays. But, the principles that generate that world might actually be underspecified in something like the ways described, meaning that they allow for ... (read more)

The specifics of what I'm thinking of vary a lot between jurisdictions, and some of them aren't necessarily strictly illegal so much as "Relevant authorities might cause you a lot of problems even if you haven't broken any laws." But roughly speaking, I'm thinking about the umbrella of everything that kids are no longer allowed to do that increase demands on parents compared to past generations, plus all the rules and policies that collectively make childcare very expensive, and make you need to live in an expensive town to have good public schools. Those are the first categories that come to mind for me.

Ah, yes, that does clear it up! I definitely am much more on board, sorry I misread the first time, and the footnote helps a lot.

 

As for the questions I asked that weren't clear, they're much less relevant now that I have your clarification. But the idea was: I'm of the opinion that we have a lot more know-how buried and latent in all our know-that data, such that many things humans have never done or even thought of being able to do could nevertheless be overdetermined (or nearly so) without additional experimental data.

juggins
I think it was my error as I realise now the first paragraph was a confusing setup. I've trimmed it a bit so hopefully it won't be so any more!

Overall I agree with the statements here in the mathematical sense, but I disagree about how much to index on them for practical considerations. Upvoted because I think it is a well-laid-out description of a lot of peoples' reasons for believing AI will not be as dangerous as others fear.

First, do you agree that additional knowing-that reduces the amount of failure needed to achieve knowing-how? 

If not, are you also of the opinion that schools, education as a concept, books and similar storage media, or other intentional methods of imparting know-how ... (read more)

juggins
Thanks for the comment! Taking your points in turn:

- I am curious that you see this as me saying superintelligent AI will be less dangerous, as to me it means it will be more. It will be able to dominate you in the usual hyper-competent sense but also may accidentally screw up some super-advanced physics and kill you that way too. It sounds like I should have stressed this more. I guess there are people that think AI sucks and will continue to suck, and therefore why worry about existential risk, so maybe by stressing AI fallibility I'm riding their energy a bit too hard to have made myself clear. I'll add a footnote to clarify better.
- I agree that knowing-that reduces the amount of failure needed for knowing-how. My point is that the latter is the thing we actually care about though when we talk about intelligence. Memorising information is inconsequential without some practical purpose to put it to. Even if you're just reading stuff to get your world model straight, it's because you want to be able to use that model to take more successful actions in the world.
- I'm not completely sure I follow your questions about failure-reduction-potential upper-bounds. My best guess is that you mean can sufficient knowing-that reduce the amount of failure required to acquire new skills to a very low level? I think theoretical knowledge is mostly generated by practical action -- trying stuff and writing down what happened -- either individually or on a societal scale. So if an ASI wants to do something radically new then there won't be any existing knowledge that can help it. For me, that means catastrophic or existential risk due to incompetence is a problem. I guess it reduces risk a little from the AI intentionally killing you, as it could mess up its plans in such a way as you survive, but long-term this reduction will be tiny as wiping out humans will not be in the ASI's stretch zone for very long.
- Re your second point, I do not believe we will be able to recogni

I hope it would, but I actually think it would depend on who or what killed whom, how, and whether it was really an accident at all.

If an American-made AI hacked the DOD and nuked Milan because someone asked it to find a way to get the 2026 Olympics moved, then I agree, we would probably get a push back against race incentives.

If a Chinese-made AI killed millions in Taiwan in an effort to create an opportunity for China to seize control, that could possibly *accelerate* race dynamics.

AnthonyC

I think it's more a matter of Not Enough Dakka plus making it illegal to do those things in what should be reasonable ways. I agree there are economic (and regulatory) interventions that could make an enormous difference, but for various reasons I don't think any government is currently willing and able to implement them at scale. A crisis needs to be a lot more acute to motivate that scale of change.

Rebecca
What are the illegal things that would be needed?

You would think so, I certainly used to think so, but somehow it doesn't seem to work that way in practice. That's usually the step where my wife does the seasoning and adds the liquids, so IDK if there is something specific she does that makes it work. But I'm definitely whipping them with the whisk attachment, which incorporates air, and not beating them with a paddle attachment. I suspect that's the majority of why it works.

I mentioned this in my comment above, but I think it might be worthwhile to differentiate more explicitly between probability distributions and probability density functions. You can have a monotonically-decreasing probability density function F(r) (aka the probability of being in some range is the integral of F(r) over that range, integral over all r values is normalized to 1) and have the expected value of r be as large as you want. That's because the expected value is the integral of r*F(r), not the value or integral of F(r).

I believe the expected value... (read more)
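As a concrete illustration of that distinction (a standard textbook example, not taken from the comment above): the exponential density

$$F(r) = \lambda e^{-\lambda r}, \quad r \ge 0$$

is monotonically decreasing in r for every λ > 0, yet the expected value

$$\mathbb{E}[r] = \int_0^\infty r \, \lambda e^{-\lambda r} \, dr = \frac{1}{\lambda}$$

can be made as large as you like by choosing λ small.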

So, as you noted in another comment, this depends on your understanding of the nature of the types of errors individual perturbations are likely to induce. I was automatically guessing many small random perturbations that could be approximated by a random walk, under the assumption that any systematic errors are the kind of thing the sniper could at least mostly adjust for even at extreme range. Which I could be easily convinced is completely false in ways I have no ability to concretely anticipate.

That said, whatever assumptions I make about the kinds of ... (read more)

I used to use a ricer, but found that it always made the potatoes too cold by the time I ate them. Do you find this? If not, do you (even if you never thought of it this way) do anything specific to prevent it? If so, do you then reheat them, and how?

 

With a stand mixer and the whisk attachment I found removing the ricer step hasn't really mattered, but any other whipping method and yeah, it's very useful.

Brendan Long
I rice my potatoes while they're still burning hot, which is annoying, but I'm impatient and it means the result is still warm. If you're (reasonably) waiting for the potatoes to cool down, you might be able to re-heat them in the microwave or on the stove without too much of a change to texture, although you'd have to be careful about how you stir it.

Doesn't the stand mixer method overmix and produce glue-y mashed potatoes? I actually don't mind that texture but I thought that's why people don't usually do it that way.

Fair enough, I moved into a small space a few years ago and mostly buy smaller quantities now. I also like that the Little Potato Company's potatoes are already washed and I'm often boondocking/on a limited water supply. 

Costco is generally above average in most things, so definitely a good choice. I find the brands I mentioned to be more consistently high quality across locations and over time, but not too much better at their respective bests. So when I need a specific meal to be high quality, like on holidays, I'll make sure to go to Trader Joe's.

F... (read more)

In my experience that's true for a hand-held masher or hand mixer, but if I'm slow-whipping in a stand mixer with butter and cream, golds give a fluffier, smoother, lighter result.

Brendan Long
I also like Yukon Golds best in mashed potatoes, but I use a ricer (similar to this one).

I really enjoyed this piece, not because of the specific result, but because of the style of reasoning it represents. How much advantage, under what kind of rules, can be overcome with what level of intelligence? 

Sometimes the answer is none. "I play x" overwhelms any level of intelligence at tic tac toe. 

In larger and more open games the advantage of intelligence increases, because you can do more by being better at exploring the space of possible moves. 

"Real life" is plausibly the largest and most open game, where the advantage of intelli... (read more)

The latter. And yes, I do agree with the superior on that specific, narrow mathematical question. If I am trying to run with the spirit and letter of the dilemma as presented, then I will bite that bullet (sorry, I couldn't resist). 

In real world situations, at the point where you somehow find yourself in such a position, the correct solution is probably "call in air support and bomb them instead, or find a way to fire many bullets at once; you've already decided you're willing to kill a child for a chance to take out the target."

Similarly, if the ... (read more)

Jim Buhler
Interesting, thanks. My intuition is that if you draw a circle of say a dozen (?) meters around the target, there's no spot within that circle that is more or less likely to be hit than any other, and it's only outside the circle that you start having something like a normal distribution. I really don't see why I should think the 35 centimeters on the target's right is any more (or less) likely than 42 centimeters on his left. Can you think of any good reason why I should think that? (Not saying my intuition is better than yours. I just want to get where I'm wrong if I am.)

Interesting, why don't you like them for mashing? That's specifically what I like them best for. Although IIUC a knish needs a different texture to hold together well. I also don't use golds for (unbreaded) potato cakes unless I mash them in advance and use them left over.

Said Achmiz
Starchy potatoes are best for mashing, I find, texture-wise. (So, your standard russet potato.) Yukon Golds are more waxy.

I'm no chef, but I love to cook, and my Thanksgiving meals are planned in spreadsheets with 10-minute increments of what goes where. Plus I currently live full-time in an RV, so I've gotten used to improvising with nonstandard and less reliable tools. Take or leave my suggestions accordingly.

It's often a good idea, until and unless you know your oven really well, to put an oven thermometer in the oven on the rack and adjust accordingly. They're <$10. Try placing it in different spots and figure out how evenly or unevenly your oven heats, and how a pan in... (read more)

Yukon Golds are objectively the best potato

Correct :-)

Do you have a specific type of gold you use? The best I can reliably get are the organic golds from Trader Joe's, they come in a 3 lb bag. When I'm making home fries or anything diced I also really like The Little Potato Company.

Edit to add: now you have me wanting to make potatoes au gratin. It's been a while since I've made a good cheese sauce.

Brendan Long
I get the 10 lb bags at Costco (usually buying 20 lbs at a time). Are the Trader Joe's ones noticeably better tasting? I'd love to try more potato varieties but no one seems to sell anything more interesting unless I want tiny colorful potatoes that cost $10/lb.
AnthonyC

I think the principle is fine when applied to how variables affect movement of the bullet in space. I don't necessarily think it means taking the shot is the right call, tactically.

Note: I've never fired a real gun in any context, so a lot of the specifics of my reasoning are probably wrong but here goes anyway.

Essentially I see the POI as stating that the bullet takes a 2D random walk with unknown step sizes (though possibly with a known distribution of sizes) on its way to the target. As distance increases, variance in the random walk increases. 

Give... (read more)
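As a minimal numerical sketch of the picture above (assuming, purely for illustration, that the accumulated perturbations wash out into an isotropic 2D Gaussian centered on the aim point; the scatter value is a made-up number, not real ballistics), the probability density per unit area at a spot 35 cm from the aim point comes out only very slightly higher than at 42 cm:

```python
import math

sigma = 100.0  # assumed scatter (cm) at long range; illustrative only

def density(r, sigma=sigma):
    """Isotropic 2D Gaussian density (per unit area) at distance r from the aim point."""
    return math.exp(-r**2 / (2 * sigma**2)) / (2 * math.pi * sigma**2)

print(density(35) / density(42))  # ~1.03: the nearer spot is only very slightly more likely
```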

Jim Buhler
Say I tell you the bullet landed either 35 centimeters on the target's right or 42 centimeters on his left, and ask you to bet on which one you think it is. Are you indifferent/agnostic or do you favor 35 very (very very very very) slightly? (If the former, you reject the POI. If the latter, you embrace it. Or at least that's my understanding. If you don't find it more likely that the bullet hits a spot a bit closer to the target, then you don't agree with the superior that aiming at the target makes you more likely to hit him over the child, all else equal.)

To the extent this reasoning works, it breaks the moment any agent has anything like a decisive strategic advantage. At that point no one else's instrumental or terminal goals can act as constraints.

This seems like it goes way too far. What exactly are we punishing the server or restaurant for if a patron drops some bills on the table and walks out when no one is looking?

River
As I'm imagining this, it would not constitute accepting a tip unless the server or the restaurant keeps it. Ideally the server would notice before the customer was out the door and return the money to the customer. But surely that won't always happen, especially in the transition. In that case, let the restaurant donate the money to a nonprofit.

Edit to add: Just thinking about the converse, you could also make it sound more ridiculous by rewriting it with more obscure parts of the legendarium, too.

Conquer Morgoth with Ungoliant. Turn Maiar into balrogs. Glamdring among the morgul-blades.

I would assume that his children in particular would be quite familiar with their usage, though, and that seems to be who a lot of the legendarium-heavy letters are written to.

I also think that it sounds at least slightly less ridiculous to rewrite that passage in the language of Star Wars rather than Starcraft. Conquer the Emperor with the Dark Side. Turn Jedi into Sith. An X-Wing among the TIE fighters. Probably because it's more culturally established, with a more deeply developed mythos.


How does this interact with MA's salary transparency laws? If you are in a role where no one else shares your title, then no problem. Otherwise, this could enable an employer to pressure others to take pay cuts or smaller raises, or it could force them to tell prospective new employees a much lower lower bound in the salary range for the role they're applying to.

jefftk
Pretty sure the salary transparency law doesn't apply to us, because you need 25+ MA employees. Even if it did, though, I think it would mostly mean giving moderately wider salary ranges? Which I expect would be fine; our two current open positions [1][2] have ranges of 23% and 30%. [1] https://securebio.org/careers/2024-lab-tech/ [2] https://securebio.org/careers/2024-director-operations/

To the first objection: To the extent that AGI participates in status games with humans, they will win. They'll be better at it than we are. You could technically have a gang of toddlers play basketball with an NBA all star team, but I don't think you can really say they're competing with each other, or that they're both playing the same game in the sense the people talking about status games mean it.

To the second objection: It is not at all clear to me whether any biological intelligence augmentation path puts humans on a level playing field with AI syst... (read more)

azergante
I also think it is unlikely that AGIs will compete in human status games. Status games are not just about being the best: Deep Blue is not high status, and sportsmen who take drugs to improve their performance are not high status. Status games have rules, and you only win if you do something impressive while competing within the rules; being an AGI is likely to be seen as an unfair advantage, and thus AIs will be banned from human status games, in the same way that current sports competitions are split by gender and weight. Even if they are not banned, given their abilities it will be expected that they do much better than humans; it will just be a normal thing, not a high-status, impressive thing.

FWIW I think it probably would be, between those two. Land and houses are different, even if we usually buy them together. When I bought my first house, the appraisal included separate line items for the house vs the land it was on, and the land was a majority of the price I was paying. I don't know what the OP actually meant, but to my own thinking, owning land (in the limit of advanced technology making everything buildable and extractable) means owning some fixed share of Earth's total supply of energy, water, air, and minerals. Building a house, given ... (read more)

One pet peeve of mine is that actual weather forecasts for the public don't disambiguate interpretations of rain chance. Is it the chance of any rain at some point in that day or hour? Is it the expected proportion of that day or hour during which it will be raining?

AnthonyC

I sympathize with this viewpoint, and it's hardly the worst outcome we could end up with. But, while both authors would seem to agree with a prohibition on calling up gods in a grab for power, they do so with opposite opinions about the ultimate impact of doing so. Neither offers a long-term possibility of humans retaining life, control, and freedom.

For Tolkien, I would point out first that the Elves successfully made rings free of Sauron's influence. And second, that Eru Iluvatar's existence guarantees that Sauron and Morgoth can never truly win, and at o... (read more)

I suppose, but 1) there has been no build-up/tolerance, the effects from a given dose have been stable, 2) there are no cravings for it or anything like that, 3) I've never had anything like withdrawal symptoms when I've missed a dose, other than a reversion to how I was for the years before I started taking it at all. What would a chemical dependency actually mean in this context?

My depression symptoms centered on dulled emotions and senses, and slowed thinking. This came on gradually over about 10 years, followed by about 2 years of therapy with little t... (read more)
