dr_s

Comments

dr_s

I mean, if a mere acquaintance told me something like that, I don't know what I'd say, but it wouldn't be an offer to "talk about it" right away - I wouldn't feel like I'd enjoy talking about it with a near-stranger, so I'd expect the same applies to them. It's one of those prefab reactions that don't really hold much water under close scrutiny.

dr_s

I find that rather adorable

In principle it is, but I think people do need some self-awareness to distinguish between "I wish to help" and "I wish to feel like a person who's helping". The former requires focusing more genuinely on the other person, rather than going off a standard societal script. Otherwise, if your desire to help ends up merely forcing the supposedly "helped" person to entertain you, after a while you'll effectively be perceived as a nuisance, good intentions or not.

dr_s

Hard agree. People might be traumatised by many things, but you don't really want to convince them that they should be traumatised, or to define their identity around trauma (and then possibly insist that if they swear up and down they aren't, that just means they're repressing it or not admitting it - this has happened to me). That only increases the suffering! If they're not traumatised, great - they dodged a bullet! It doesn't mean that e.g. sexual assault is any less bad, the same way shooting someone isn't any less bad just because you happened to miss their vital organs (ok, the funny thing is that attempted murder is actually punished less harshly than murder... but morally speaking, I'd say how good a shot you are has no relevance).

Answer by dr_s

The thing is, it's hard to come up with ways to package the problem. I've tried small data-science efforts for lesser chronic problems on myself and my wife, recording the kinds of biometric indicators that were likely to correlate with our issues (e.g. food diaries vs. symptoms), and it's still almost impossible to suss out meaningful correlations unless it's something as basic as "eating food X causes you immediate excruciating pain". In a non-laboratory setting, controlling environmental conditions is impossible. Actual rigorous datasets, if they exist at all, are mostly privacy-protected. Relevant diagnostic parameters are often incredibly expensive and complex to acquire, and possibly gatekept.

The knowledge aspect is almost secondary IMO (after all, lots of the recommendations your doctor gives you are still little more than empirical fixes someone came up with by analysing the data; mechanistic explanations don't go very far when dealing with biology). But even the data science, which would be doable by curious individuals, is forbidding: entire fields of actual, legitimate academia are swamped in this sea of noisy correlations and statistical hallucinations (looking at you, nutrition science). Add to that the risk of causing harm to people even when well-meaning, and the ethical and legal implications of that, and I can see why this wouldn't take off. SMTM's citizen research on obesity seems the closest thing I can think of, and I've heard plenty of criticism of it and its actual rigour.
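To make the "noisy correlations" point concrete, here is a minimal sketch with purely synthetic data (the diary length, food count, and significance threshold are arbitrary illustrations, not taken from any real diary): even when a symptom has no food-related cause at all, screening a few dozen candidate foods will usually flag one or two "significant" links by chance alone.

```python
# Minimal sketch of the multiple-comparisons trap in a small food diary.
# Purely synthetic data; all numbers are arbitrary, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_days = 60    # two months of diary entries
n_foods = 30   # candidate foods tracked each day

# Which foods were eaten each day (0/1), and a symptom score with NO real food effect.
foods = rng.integers(0, 2, size=(n_days, n_foods))
symptom = rng.normal(size=n_days)

false_hits = 0
for j in range(n_foods):
    ate = symptom[foods[:, j] == 1]
    skipped = symptom[foods[:, j] == 0]
    if ate.size < 2 or skipped.size < 2:
        continue
    _, p = stats.ttest_ind(ate, skipped, equal_var=False)
    if p < 0.05:
        false_hits += 1

print(f"Spurious 'significant' food-symptom links: {false_hits} of {n_foods}")
# Expect on the order of one or two purely spurious hits at p < 0.05,
# before any real signal (or any real-world confounding) is even in play.
```

And a real diary is worse than this: foods are correlated with each other, with seasons, with stress, and so on, so the spurious links don't even look random.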

dr_s

It doesn't change much; it still applies, because when we're talking about hypothetical really powerful models, ideally we'd want them to follow very strong principles regardless of who asks. E.g. if an AI were in charge of a military it obviously wouldn't be open, but it shouldn't accept orders to commit war crimes even from a general or a president.

dr_s

I'm not sure if those are precisely the terms of the charter, but that's beside the point. It is still "private" in the sense that a small group of private citizens owns the thing and decides what it should do, with no political accountability to anyone else. As for the "non-profit" part, we've seen what happens to that as soon as it gets in the way.

dr_s

Aren't these different things? Private yes, for-profit no. It was private because it's not like it was run by the US government.

dr_s

I think there's a solid case that anyone who supported funding OpenAI should be considered at best well-intentioned but very naive. The idea that we should develop and align superintelligence but, like, good, has always been a blind spot in this community - an obviously flawed but attractive goal, because it dodged the painful choice between extinction risk and abandoning hopes of personally witnessing the singularity, or at least a post-scarcity world. This is also a case where people's politics probably affected them: plenty of others would be instinctively distrustful of corporation-driven solutions to anything - it's something of a Godzilla Strategy after all, since aligning corporations is also an unsolved problem - but those with an above-average level of trust in free markets weren't so averse.

Such people don't necessarily have conflicts of interest (though some may, and that's another story), but they at least need to drop the fantasy-land stuff and accept harsh reality on this before they can be of any use.

dr_s

I admit it cheats the spirit of the challenge a bit, but in practice it's the round amount that makes me suspicious it might be intentional. It's true, though, that there doesn't seem to be a broader materials-related pattern, so it may just be as you say.

dr_s

I found a pattern: buildings using Dreams together with either Wood or Silver have an 80% chance of being Impossible when made by a Self-Taught architect. But honestly this seems irrelevant, since the other two types of background are a 100% guarantee, so they're better value for money anyway.
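For anyone who wants to re-run that check against the scenario's dataset, a rough sketch of the computation (the filename, column names, and label strings below are guesses, not the actual headers, so adjust them to the real file):

```python
# Rough sketch of the conditional-rate check described above.
# "architects.csv", the column names, and the label strings are guesses,
# not the scenario's actual headers; adjust them to the real dataset.
import pandas as pd

df = pd.read_csv("architects.csv")

uses_dreams = df["Materials"].str.contains("Dreams")
uses_wood_or_silver = df["Materials"].str.contains("Wood|Silver")
self_taught = df["Architect Background"] == "Self-Taught"

subset = df[uses_dreams & uses_wood_or_silver & self_taught]
rate = (subset["Outcome"] == "Impossible").mean()
print(f"Impossible rate, Dreams + Wood/Silver, Self-Taught: {rate:.0%} (n={len(subset)})")
```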
