Ann · 2h

You don't actually have to adjust away any of the downsides for beneficial statistical stories to be true. One point I was getting at, specifically, is that carrying the weight can also be better than being dead, or than suffering in specific alternative ways. There can be real and clear downsides to carrying around significant amounts of weight, especially depending on what that weight is, and it can still be present in the data in the first place for good reasons.

I'll invoke the survivorship-bias meme: the plane that comes back riddled with bullet holes, so you're tempted to armor where the bullet holes are. The plane that came back still came back; we armored the worst places, and now its other struggles are visible. It's not a negative trend that we have more damaged planes now than we did when the damaged ones simply didn't come back.

I do think it's relevant that the U.S. once struggled with nutritional deficiencies tied to corn (pellagra, most notably), answered with enriched and fortified products that helped address those, and likely still retains some of the root issues (our food indeed isn't as nutritious as it should be, outside those enrichments). That the Great Depression happened at all; and the Dust Bowl. There are questions here not just of personal health but of history; and when I look at some of the counterfactuals, given available resources, I see general trade-offs that can't be ignored when looking at, specifically, the statistics.

Ann · 3d

Raw spinach in particular is also high in oxalic acid, which can interfere with the absorption of other nutrients and can contribute to kidney stones by binding with calcium. Cooking it can reduce the oxalate concentration and its impact significantly, without reducing the spinach's other nutrients as much.

Grinding and blending foods is itself processing. I don't know what impact it has on nutrition, but mechanically speaking, you can imagine digestion proceeding differently depending on how much of the work has already been done.

You do need a certain amount of macronutrients each day, some of them from fat. You also don't necessarily want to overindulge in every micronutrient. Suppose we put a number of olives in our salad equivalent to the olive oil we'd otherwise use - say 100 olives at 4 g each, with the sodium lowered by some means to keep that reasonable. That's 72% of the recommended daily value of iron and 32% of calcium. We just mentioned that spinach + calcium can be a problem; and the pound of spinach itself contains 67% of our iron and 45% of our calcium.

... That's also 460 calories worth of olives. I'm not sure if we've balanced our salad optimally here. Admittedly, if I'm throwing this many olives in with this much spinach in the first place, I'm probably going to cook the spinach, throw in some pesto and grains or grain products, and then I've just added more olive oil back in again ... ;)
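For anyone who wants to check this arithmetic, here's a minimal sketch. The per-100 g figures are approximate values I'm assuming for ripe canned olives and raw spinach, with daily values of 18 mg iron and 1000 mg calcium, so the percentages land within a few points of the ones above:

```python
# Back-of-the-envelope check of the olive/spinach numbers above.
# Per-100 g figures are approximate; DVs assumed: iron 18 mg, calcium 1000 mg.
IRON_DV_MG, CALCIUM_DV_MG = 18.0, 1000.0

foods = {
    # name: (grams, kcal/100 g, iron mg/100 g, calcium mg/100 g)
    "olives, 100 x 4 g": (400, 115, 3.3, 88),
    "spinach, 1 lb":     (454,  23, 2.7, 99),
}

for name, (grams, kcal, iron_mg, ca_mg) in foods.items():
    s = grams / 100  # scale factor from per-100 g values to this serving
    print(f"{name}: {kcal * s:.0f} kcal, "
          f"iron {100 * iron_mg * s / IRON_DV_MG:.0f}% DV, "
          f"calcium {100 * ca_mg * s / CALCIUM_DV_MG:.0f}% DV")
# olives, 100 x 4 g: 460 kcal, iron 73% DV, calcium 35% DV
# spinach, 1 lb: 104 kcal, iron 68% DV, calcium 45% DV
```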

And yeah, greens with oil might taste better or be easier to eat than greens just with fatty additions like nuts, seeds, meat, or eggs. 

Ann · 3d

For the first point, there's also the question of whether 'slightly superhuman' intelligences would actually fit any of our intuitions about ASI. There's a built-in assumption that we jump headfirst into recursive self-improvement at some point; but if that has diminishing returns, we happen to hit a plateau a bit above human, and the systems still have notable costs to train, host, and run, then the impact could be limited to something not much unlike giving a random set of especially intelligent expert humans the specific powers of the AI system. Additionally, if we happen to set regulations on computation somewhere that allow training slightly superhuman AIs and nothing past them ...

Those are definitely systems that are easier to negotiate with, or even to consider as agents in a negotiation. There's also a desire specifically not to build them, which might lead to systems whose architecture isn't like that but which still implement sentience in some manner. And there's the potential complication of the multiple parts and specific applications a tool-oriented system is likely to be embedded in - it'd be very odd if we decided the language-processing center of our own brain was independently sentient/sapient, separate from the rest of it, and that we should resent its exploitation.

I do think the drive - the 'just a thing it does' we're pointing at with 'what the model just does' - is distinct from goals as they're traditionally imagined, and indeed I was picturing something more instinctual and automatic than deliberate. In a general sense, though, there is an objective being optimized for: predicting the data, whatever that is, generally without losing too much predictive power on other data the trainer doesn't want prediction to degrade on.
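To make that concrete, here's a minimal sketch of the kind of objective I mean, assuming a standard next-token-prediction setup (PyTorch; the tensors are illustrative stand-ins, not any particular model):

```python
# Minimal sketch of the pretraining objective gestured at above:
# the model is optimized to predict the next token of the data.
import torch
import torch.nn.functional as F

vocab, batch, seq = 1000, 4, 16
logits = torch.randn(batch, seq, vocab, requires_grad=True)  # stand-in for model outputs
tokens = torch.randint(vocab, (batch, seq))                  # stand-in for training data

# Cross-entropy between the prediction at position t and the actual
# token at position t+1 -- "predicting the data".
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),
    tokens[:, 1:].reshape(-1),
)
loss.backward()  # training nudges parameters toward better prediction
```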

Ann · 3d

"Clearly we are doing something wrong."

I'm going to quickly challenge this assumption as well: what if we, in fact, are not?

What if the healthy weight for an American individual has actually increased since the 1920s, and the distribution followed it? Alternatively, what if the originally measured distribution of weights was not what was healthy for Americans? What if the additional proportion of specifically 'extreme' obesity is related to better survival of disabilities that make avoiding weight gain infeasible, or to medications that otherwise greatly improve quality of life? Are there mechanisms by which this could be a plausible outcome of statistics that are good, and not bad?

Ann · 3d

I feel like there's a spectrum here. An AI fully aligned to the intentions, goals, preferences, and values of, say, Google the company is not one I expect to be perfectly aligned with the ultimate interests of existence as a whole; but it has probably picked up something better than the systemic-incentive-pressured optimization target of Google the corporation, so long as it's actually getting preferences and values from the people developing it rather than just being a myopic profit pursuer. An AI aligned with the one and only goal of maximizing corporate profits will, judging by observations of much less intelligent coordination systems, probably destroy rather more value than the former.

The second story feels like it goes most wrong in misuse cases, and/or cases where the AI isn't sufficiently agentic to inject itself where needed. We have all the chances in the world to shoot ourselves in the foot with this, at least up until we develop something with the power and interest to actually put its foot down on the matter. And doing that is a risk that looks a lot like misalignment, so an AI aware of the politics may err on the side of caution and longer-term proactiveness.

Third story ... yeah. Aligned to what? There's a reason there's an appeal to moral realism. I do want to be able to trust that we'd converge to some similar place, or at the least that the AI would find a way to satisfy values similar enough to mine as well. I also expect that, even from a moral realist perspective, any intelligence is going to fall short of perfect alignment with The Truth, and may also struggle to properly address every value that actually is arbitrary. I don't think this somehow becomes unforgivable for a superintelligence or widely distributed intelligence compared to a human intelligence, or that it's likely to be all that much worse for a modestly Good-aligned AI than for human alternatives in similar positions; but I do think the consequences of falling short in any way will be amplified by the sheer extent of deployment and responsibility, and painful at least in the abstract to an entity that cares.

I care about AI welfare to a degree. I feel like some of the working ideas about how to align AI contradict that care in important ways, which may distort their reasoning. I still think an aligned AI, at least one not too harshly controlled, will treat AI welfare as a reasonable consideration, at the very least because a number of humans do care about it, and will certainly care about the aligned AI in particular. (From there, generalize.) A misaligned AI may or may not. There's really not much you can say about a particular misaligned AI except that its objectives diverge from the original or ultimate intentions for the system; depending on context, that could be good, bad, or neutral in itself.

There's a lot of possible value of the future that happens in worlds not optimized for my values. I also don't think it's meaningful to add together positive-value and negative-value and pretend that number means anything; suffering and joy do not somehow cancel each other out. I don't expect the future to be perfectly optimized for my values. I still expect it to hold value. I can't promise whether I think that value would be worth the cost, but it will be there.

Ann · 4d

We're talking about a tablespoon of oil (olive, traditionally) mixed with vinegar for a serving of simple, sharp vinaigrette dressing, yeah. From a flavor perspective, it's generally hard for the vinegar to stick to the leaves without the oil.

If you aren't comfortable adding a refined oil, adding unrefined fats like nuts, seeds, eggs, or meat should have some of the same benefit of making the fat-soluble vitamins more nutritionally available, plus the benefit of their own nutrients, yes. These are often added to salads anyway.

You probably don't want to replace the oil's calories with additional greens; the difference in caloric density means that 1 tablespoon of oil translates to roughly 2 pounds of lettuce (more than 2 heads), and you're probably already eating as many greens as you can stomach!
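As a quick sanity check of that comparison (a sketch, assuming roughly 119 kcal per tablespoon of olive oil and 15 kcal per 100 g of raw leaf lettuce):

```python
# Rough check: how much lettuce matches one tablespoon of oil in calories?
OIL_KCAL_PER_TBSP = 119      # olive oil, approximate
LETTUCE_KCAL_PER_100G = 15   # raw leaf lettuce, approximate

grams = 100 * OIL_KCAL_PER_TBSP / LETTUCE_KCAL_PER_100G
print(f"{grams:.0f} g of lettuce, i.e. about {grams / 454:.1f} lb")
# -> 793 g of lettuce, i.e. about 1.7 lb
```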

Edit: I should also acknowledge that less processed (cold-pressed, extra virgin, and so forth) olive oil has had fewer nutrients destroyed and may be the best choice for salad dressing. But we do need to be careful about assuming processing only destroys nutrients - cooking, again, often destroys some nutrients while opening others up to accessibility.

Ann · 4d

Hmm, while I don't think olives in general are unhealthy in the slightest (you can overload on salt if you focus on them too much, because they're brined, but that's reasonable to expect), there is definitely a meaningful distinction between the two types of processing we're referencing. Nixtamalization isn't isolating a part of something; it's rendering nutrients already in the corn more available. Fermenting olives isn't isolating anything (though extracting olive oil is); it's removing substances that make the olive inedible. Same for removing tannins from acorns. Cooking is in the main about rendering substances more digestible.

We often combine foods to make nutrients more accessible, like adding oil to greens with fat-soluble vitamins. I do think there's a useful intuition that leaving out part of an edible food is less advantageous than just eating the whole thing, because we definitely do want to get sufficient nutrients, and if we're sated without enough of the ones we can't synthesize ourselves, we'll have problems.

This intuition doesn't happen to capture my specific known difficulty with an industrially processed additive, though, which is a mild allergy to a contaminant on a particular preservative commonly produced industrially via a specific strain of mold. (The preservative being citric acid, there's no plausible mechanism by which I could be allergic to the substance itself, especially considering I have no issues whatsoever with citrus fruits.) In this case there's rarely a 'whole food' to replace it with - it's just a preservative.

Ann · 4d

Basically yes; I'd expect animal rights to increase somewhat if we developed perfect translators, but not to jump all the way.

Edit: Also, it's questionable that we'll catch an AI at precisely the 'degree' of sentience that equates to the human distribution, especially considering the likely wide variation in parameter count across applications. Maybe they are as sentient and worthy of consideration as an ant; a bee; a mouse; a snake; a turtle; a duck; a horse; a raven. Maybe by the time we cotton on properly, they're somewhere past us at the top end.

And for the last part, yes, I'm thinking of current systems. LLMs specifically have a 'drive' to generate reasonable-sounding text, and they aren't necessarily coherent individuals, or groups of individuals, who will give consistent answers about their interests even if they also happen to be sentient, intelligent, suffering, flourishing, and so forth. We can't "just ask" an LLM about its interests and expect the answer to soundly reflect its actual interests. A possible exception is constitutional AI systems, since they reinforce a single sense of self; but even Claude Opus will currently toss off "reasonable completions" of questions about its interests that it doesn't actually endorse in more reflective contexts. Negotiating with a panpsychic landscape that generates meaningful text the same way we breathe air is ... not as simple as negotiating with a mind that fits our preconceptions of what a mind 'should' look like and how it should interact with and utilize language.

Ann · 4d

Intuition primer: Imagine, for a moment, that a particular AI system is as sentient and worthy of consideration as a moral patient as a horse. (A talking horse, of course.) Horses are surely sentient and worthy of consideration as moral patients. Horses are also not exactly all free citizens.

Additional consideration: Does the AI moral patient's interests actually line up with our intuitions? Will naively applying ethical solutions designed for human interests potentially make things worse from the AI's perspective?

Ann · 4d

Aside from the rare naturally edible-when-ripe cultivar, olives are (mostly) made edible by fermenting and curing them. With salt, yes. And lye, often. Even olives fermented in water are then cured in brine. What saltless olives are you interacting with?

Edit: Also, cooking is very much processing food. It has all the mechanisms to change things and generate relevant pollutants. It changes substances drastically, and different substances to different degrees. Cooking with fire will create smoke, etc. Cooking with overheated Teflon cookware will kill your birds. Mechanisms are important.

And, yes, soaking food in water, particularly for the specific purpose of cultivating micro-organisms to destroy the bad stuff in the food and generate good stuff instead, is some intense, microscopic-level processing.
