Amelia Bedelia

Hi Nate, great respect. Forgive a rambling stream-of-consciousness comment.

> Without the advantages of maxed-out physically feasible intelligence (and the tech unlocked by such intelligence), I think we would inevitably be overpowered.

I think you move to the conclusion "if humans don't have AI, aliens with AI will stomp humans" a little too quickly.

Hanson's estimate of when we'll meet aliens is 500 million years. I know very little about how Hanson arrived at that estimate or how credible the method is, and you don't appear to either; that might be worth investigating. But—

One million years is tens of thousands of generations of humans as we know them. If AI progress were impossible under the heel of a world-state, we could still increase intelligence by a few points each generation; this already happens naturally, and it would hardly be difficult to compound the Flynn effect.

Surely we could reach endgame technology, the point where physical limits or diminishing returns take over, within one million years, let alone five hundred of those spans. Consider all we have done in just the past two hundred years. We can expect invention to eventually decelerate as the untapped invention space narrows, but by the time that finally outweighs the accelerating factors of increasing intelligence and helpful technology, it seems likely we will already be quite close to finaltech.

In comparative terms, a five-hundred-year sabbatical from AI would reduce the share of resources we could reach by only an epsilon, and if AI safety premises are sound then it would greatly increase EV.

This point is likely moot, of course. I understand that we do not live in a totalitarian world state and your intent is just to assure people that AI safety people are not neoluddites. (I suppose one could try to help a state establish global dominance and then steer hard towards AI safety, but that requires two incredible victories for benefits murky enough that you'd have to be very confident of AI doom and have nothing better to try.)

  • Secondary comment: I think there's kind of a lot of room between 95% of potential value being lost and 5%!! A solid chunk of my probability mass about the future involves takeover by a ~random person or group of people who just happened to be in the right spot to seize power (e.g. government leader, corporate board) which could run anywhere from a 20 or 30% utility loss to the far negatives.

    (This is based on the idea that even if the alignment problem is solved such that we know how to specify a goal rigorously to an AI, it doesn't follow that the people who happen to be programming the goal will be selfless. You work in AI, so presumably you have practiced rebuttals to this concept; I do not, so I'll state my thought while being clear that I expect this is well-worn territory to which you have a solid answer.)

> a guess that a fair number of alien species are smarter, more cognitively coherent, and/or more coordinated than humans at the time they reach our technological level. (E.g., a hive-mind species would probably have an easier time solving alignment, since they wouldn’t need to rush.)

  • Tertiary comment: I'd be curious about your reasoning process behind this guess.

    Is that genuinely just a solitary intuition, the chain of reasoning of which is too distributed to meaningfully follow back? It seems to assume that things like hive-mind species are possible or common, which I don't have information about but maybe you do. I'd be interested in evolutionary or anthropic arguments here, but the knowledge that you have this intuition does not cause me to adopt it.

Anyway, this was fun to think about; have a good day!! :D

> This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.)


Unrelatedly, but from the same advert: I had not realized it was that expensive. This rings some alarm bells for me, but maybe it is fine; it is, after all, a medical service. I have been waffling back and forth and will conclude I don't know enough of the details.

Regardless, the alarm bells still made me want to survey the comments and see if anyone else was alarmed. Summaries of the top-level comments:

> The words "evidence-based medicine" seem to imply "non-evidence-based medicine"

> Will MetaMed make its research freely available?

> Proposals re: the idea that MetaMed might not improve the world save for their clients

> You should disclose that MIRI shares sponsors with MetaMed, detail question

> Please send this to the front page!

> I'm overall not impressed, here are a couple criticisms, what does MetaMed have over uptodate.com in terms of comparative advantage? (Nice going user EHeller, have some Bayes points.)

> Discussion of doctors and their understanding of probability

> MetaMed has gone out of business (3 years later)

> Is MetaMed a continuation of a vanished company called Personalized Medicine?

> A friend of mine has terrible fibromyalgia and would pay 5k for relief but not for a literature search of unknown benefit. I guess she's not the target audience? (long thread, MetaMed research is cited, EHeller again disputes its value compared to less expensive sources)

> An aside on rat poison

> How might MetaMed and IBM Watson compare and contrast?

> Error in the advert: Jaan Tallinn is not the CEO but the chairman; Zvi is the CEO.

> Is MetaMed LW-y enough that we should precommit to updating by prespecified amounts on the effectiveness of LW rationality in response to its successes and failures?

I will cut off there, because the last commenter is after my own heart. Gwern responds by saying:

> At a first glance, I'm not sure humans can update by prespecified amounts, much less prespecified amounts of the right quantity in this case: something like >95% of all startups fail for various reasons, so even if LW-think could double the standard odds (let's not dicker around with merely increasing effectiveness by 50% or something, let's go all the way to +100%!), you're trying to see the difference between... a 5% success rate and a 10% success rate. One observation just isn't going to count for much here.

And that is correct. But you don't have to make a single success/fail prediction; you should be able to come up with predictions about your company that you can put higher probabilities on, and we can see how those empirically turn out. Or you could even keep track of all the startups launched by prominent LW members.
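To put a rough number on gwern's point, here is a minimal sketch (with hypothetical figures, not anyone's actual track record) of how little one success-or-failure observation discriminates between a 5% and a 10% base rate of startup success, versus how much a small portfolio of tracked outcomes would:

```python
from math import prod

def likelihood(outcomes, p_success):
    """Probability of a sequence of startup outcomes (True = success),
    assuming each startup independently succeeds with probability p_success."""
    return prod(p_success if won else 1 - p_success for won in outcomes)

def bayes_factor(outcomes, p_high=0.10, p_base=0.05):
    """Likelihood ratio favouring "LW-think doubles the ~5% base rate to 10%"
    over "the usual ~5% base rate", given the observed outcomes."""
    return likelihood(outcomes, p_high) / likelihood(outcomes, p_base)

# One failed startup (e.g. MetaMed): the evidence barely moves anything.
print(bayes_factor([False]))                   # 0.90 / 0.95 ~= 0.95

# One success would be a factor of 2: real, but still modest.
print(bayes_factor([True]))                    # 0.10 / 0.05 = 2.0

# A portfolio of ten tracked startups with two successes discriminates better.
print(bayes_factor([True] * 2 + [False] * 8))  # ~= 2.6
```

Which is the same conclusion: a single make-or-break outcome is weak evidence either way, while many smaller scored predictions accumulate evidence much faster.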

In contrast, Michael Vassar (who was also on the project) says,

> Definitely, though others must decide the update size.

Which I don't think anyone followed through on, perhaps because they then agreed with gwern?

Anyway, it seems plausible that the correct update size for a founder running a failed startup is a couple of percentage points of confidence in them along certain metrics.

I think MetaMed warrants more of an update than that, my basic reasoning being: 1) I think it was entirely possible to see what was wrong with the idea before they kicked it off, 2) one has to account for the possibility of bad faith, 3) Constantin's postmortem suggests some potentially serious issues, and 4) I consider Zvi's postmortem to be more deflective than an attempt at real self-evaluation. So maybe like 6-9 points?

I uh, I don't actually think Balsa is at all likely to be bad or anything. Please don't let that be your takeaway here. I expect them to write some interesting papers, take a few minutely useful actions, and then pack it in [65%]. There's no justification for these posts being as long as they are, except that I personally find the topic interesting and want to speak my mind.

I expect I got some things wrong here, feel free to let me know what errors you notice.

Note: I didn't read the HPMOR advert; I read the one here on LW, which is different. It starts like this:

> In a world where 85% of doctors can't solve simple Bayesian word problems...
>
> In a world where only 20.9% of reported results that a pharmaceutical company tries to investigate for development purposes, fully replicate...
>
> In a world where "p-values" are anything the author wants them to be...
>
> ...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...
>
> ...there's also MetaMed.  Instead of just having “evidence-based medicine” in journals that doctors don't actually read, MetaMed will provide you with actual evidence-based healthcare.

You're right that he doesn't make any specific verifiable claims so much as being very glowing and excited. It does still make me less inclined to trust his predictive ability (or trust him, depending on how much of it is him believing in that stuff vs. building up hype for whatever reason).

I do think this ad doesn't line up with what you said re: "[...] nor says the organization is well run, and entirely focuses on his experience of the product (with the exception of one parenthetical)."

I think Constantin's postmortem is solid and I appreciate it. She says this:

> But there was a mindset of “we want to give people the best thing, not the thing they want, and if there’s a discrepancy, it’s because the customer is dumb.”  I learned from experience that this is just not true -- when we got complaints from customers, it was often a very reasonable complaint, due to an error I was mortified that we’d missed.

As she says in the linked thread, Zvi's postmortem is "quite different." Constantin discusses the faults of their business strategy; Zvi attributes the failure to people wanting the symbolic representation of healthcare rather than healthcare itself.

Is there truth to Zvi's position? It is the sort of thing I would be inclined to nod along with and take him at his word on, if Constantin weren't expressly saying that the issue was legitimate grievances, not signaling. I think her story is more plausible because it seems like less of a deflection and fits my model of the world better. But either way, I think the postmortem should've been about why Zvi failed to observe that facet of the world and what he plans to change, not about how the world sucks for having that facet.

I do agree with the quoted comment. A failed start-up is not the end of the world; it doesn't mean the founder is incompetent, or that they need to step back and let others try.

That is a fair point! I don't think Zvi et al. are obligated, and I'm not, like, going to call them fraudster hacks if they're not interested.

I said this more in the hope that people frustrated with unaccountable governance would want to seize the mantle of personal responsibility, to show everyone that they are pure and incorruptible and that it can be done. My post came across as more of a demand than I meant it to, which I apologize for.

Organizations can distribute their money how they want. My concern here is more "can pillars of the rat community get funding for crappy ideas on the basis of being pillars and having a buddy in grantmaking?" I want to judge EA orgs on their merits and I want to judge Zvi on his merits. If Balsa flops, who do we give less money to?

Zvi said on his substack that he would consider this a worthwhile venture if there were a 2% chance of achieving a major federal policy goal. Are there lesser goals that Zvi thinks they can hit at 50% or 90%? If not, then okay. Sometimes that is just how it is and you have to do the low-probability, high-EV thing. But even if it's just the 2% thing, I would like Brier scores to update.
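For concreteness, here is a minimal sketch of what "Brier scores to update" would mean mechanically. The 2%/50%/90% figures are just the probabilities mentioned above, reused as hypothetical registered predictions with made-up outcomes:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    0.0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical predictions registered up front and scored at a prespecified
# review date: (stated probability, 1 if the goal was met by then, else 0).
forecasts = [
    (0.02, 0),   # major federal policy goal: missed
    (0.50, 1),   # some intermediate goal: met
    (0.90, 1),   # an easier near-term goal: met
]
print(brier_score(forecasts))   # ~= 0.087
```

The point is only that the predictions have to be written down in advance for there to be anything to score later.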

So the other concern is track-record legibility. There is a lot of deferral among rats, some of it even necessary; not every person can be a machine learning person. I've been reading LW for eight years, and plenty of what Vance and Zvi write, but I only heard of MetaMed a few months ago while looking at Vance's LinkedIn.

Searching for it on the forums got very thin results. EY endorsed it strongly (which I believe counts as a ding on his track record, if anyone is maintaining that anywhere), while Alexander advertised it but remained neutral as to whether it was a good idea. So this was a big thing that the community was excited about, and it turned to shit. I believe it turned to shit without enough discussion in the aftermath of why, of what premises people had wrong. I have read the postmortems and found them lacking.

"Can you run a business well?" doesn't say much about someone's epistemics, but "Can you identify the best interventions with which to make use of your time?" absolutely does and "Can you win?" absolutely does and the way to see that is how the project empirically performs. This is a fallible test: you can do a good job at the identification and just suck at the business or just be unlucky, but I'm still going to update towards someone being untrustworthy or incompetent based on it.

Other good reasons not to do this: it is extremely plausible that making all your goals legible inhibits policy work. A solution to that might be timed cryptography, or an independent party keeping track of their goals and reporting the results of the predictions without revealing what was being predicted. I am aware that this is a non-trivial inconvenience, and I would respect the founders considerably more if they went for it.

I am also keenly aware that this is a demand for rigor more isolated than the ice caps. I recognize the failure mode where you demand that everyone wear their marks on their sleeve, but in practice only the black ones seem to stick over time. I think that's really bad, because then you end up cycling veterans out and replacing them with new people who are no better or worse. Hopefully we can manage to not end up there.

I think I am much more ambivalent than I sounded in my first post, but I wanted to discuss this. Hopefully it doesn't cause anyone undue stress.

I'm sorry, this may come across as very rude, but:

MetaMed, a startup both you and Vance were on, failed abjectly and then received precious little coverage or updating from the broader rat community (as far as I've seen).

I am happy to believe your skills have improved or that the cause area is better (though this one is so nebulously ambitious that I can't help but feel a cold churn of pessimism). Certainly, demanding that every project a person attempts must meet with success is too high a bar.

But this time I would like to see you and your cofounders hold yourselves accountable for keeping the communities funding you informed. In practice, what I'd want is a legible set of goals, with predictions on whether each will be met by some future date.

[This comment is no longer endorsed by its author]

That is a cute idea, but they'd do it right away [>95%], even if you just gave it to like five moderators. They are largely conflict theorists who believe rationalists are [insert the strongest politically derogatory terms imaginable] and that LW being down is morally good.

Maybe if there were real stakes they would consider it, like an independent party making a donation to both MIRI and an organization of SC's choice. On second thought, though, I think they would find this too objectionable: "wow, you'll donate to charity but only if you get to humiliate me by forcing me to play your idiotic game? that really shows that [insert further derogation]"

So maybe with a different group. It would be particularly cool if it were a foreign entity, but that seems difficult to arrange.

I tend to have trouble evaluating this sort of thing due to cherry-picking.

Sam Altman made a twitter post; you can see 20 user-submitted prompts and their output at https://twitter.com/i/events/1511763146746212353, which might help a little if you want to build a model of the thing's strength.

> The surviving worlds look like people who lived inside their awful reality and tried to shape up their impossible chances; until somehow, somewhere, a miracle appeared - the model broke in a positive direction, for once, as does not usually occur when you are trying to do something very difficult and hard to understand, but might still be so - and they were positioned with the resources and the sanity to take advantage of that positive miracle, because they went on living inside uncomfortable reality.

Can you talk more about this? I'm not sure what actions you want people to take based on this text.

What is the difference between a strategy that is dignified and one that is a clever scheme?
