All of Amelia Bedelia's Comments + Replies

Hi Nate, great respect. Forgive a rambling stream-of-consciousness comment.

Without the advantages of maxed-out physically feasible intelligence (and the tech unlocked by such intelligence), I think we would inevitably be overpowered.

I think you move to the conclusion "if humans don't have AI, aliens with AI will stomp humans" a little too quickly.

Hanson's estimate of when we'll meet aliens is 500 million years. I know very little about how Hanson estimated that & how credible the method is, and you don't appear to either: that might be worth investigating... (read more)

3Rob Bensinger
Thanks for the comment, Amelia! :)

I think the "unboosted humans" hypothetical is meant to include mind-uploading (which makes the generation time an underestimate), but we're assuming that the simulation overlords stop us from drastically improving the quality of our individual reasoning. Nate assigns "base humans, left alone" an ~82% chance of producing an outcome at least as good as "tiling our universe-shard with computronium that we use to run glorious merely-human civilizations", which seems unlikely to me if we can't upload humans at all. (But maybe I'm misunderstanding something about his view.)

I think we hit the limits of technology we can think about, understand, manipulate, and build vastly earlier than that (especially if we have fast-running human uploads). But I think this limit is a lot lower than the technologies you could invent if your brain were as large as the planet Jupiter, you had native brain hardware for doing different forms of advanced math in your head, you could visualize the connection between millions of different complex machines in your working memory and simulate millions of possible complex connections between those machines inside your own head, etc.

Even when it comes to just winning a space battle using a fixed pool of fighters, I expect to get crushed by a superintelligence that can individually think about and maneuver effectively arbitrary numbers of nanobots in real time, versus humans that are manually piloting (or using crappy AI to pilot) our drones.

Oh, agreed. But we're discussing a scenario where we never build ASI, not one where we delay 500 years.

Yep! And more generally, to share enough background model (that doesn't normally come up in inside-baseball AI discussions) to help people identify cruxes of disagreement.

Seems super unrealistic to me, and probably bad if you could achieve it. A different scenario that makes a lot more sense, IMO, is an AGI project pairing with some number of states during or afte

> This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report.  (Keeping in mind that a basic report involves a lot of work by people who must be good at math.)


Unrelatedly but from the same advert. I had not realized it was that expensive - this rings some alarm bells for me but maybe it is fine, it is in fact a medical service. I have been waffling back and forth and will conclude I don't know enough of the details.

Regardless, the alarm bells still made me want... (read more)

6Elizabeth
I'm torn because:

1. I think a lot of your individual arguments are incorrect (e.g. $5000 is a steal for MetaMed's product if they delivered what they promised. This includes promising only a 10% chance of success, if the problems are big enough).
2. I nonetheless agree with you that one should update downward on the chance of Balsa's success due to the gestalt of information that has come out on Zvi and MetaMed (e.g. Zvi saying MetaMed was a definitive test of whether people cared about health or signaling care, while Sarah lays out a bunch of prosaic problems).
3. I think "we" is a bad framing as long as the project isn't asking for small donor funding.
4. I do think grand vague plans with insufficient specifics (aka "goals") are overrewarded on LW.
   1. OTOH I have a (less) grand vague project that I'm referring to in other posts but not laying out in totality in its own post, specifically because of this, and I think that might be leaving value on the table in the form of lost feedback and potential collaborators. A way for me to lay out grand vague plans as "here's what I'm working on", but without making status claims that would need to be debunked, would be very useful.
   2. OTOH it's maybe fine or even good if I have to produce three object-level blog posts before I can lay out the grand vague goal.
5. But also it's bad to discourage grand goals just because they haven't reached the plan stage yet.

Note: I didn't read the HPMOR advert; I read the one here on LW, which is different. It starts like this:

> In a world where 85% of doctors can't solve simple Bayesian word problems...
>
> In a world where only 20.9% of reported results that a pharmaceutical company tries to investigate for development purposes, fully replicate...
>
> In a world where "p-values" are anything the author wants them to be...
>
> ...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...
>
> ...there's also MetaMed.  Instead of

... (read more)
9Ben Pace
* As I understand it, you're updating against his recommendations of a product by his friends being strong evidence that the company won't later go out of business. This seems fine to me.
* I'm saying that his endorsement of the product seems eminently reasonable to me, that it was indeed life-changing for him on a level that very few products ever are, and that in general with that kind of information about a product, I don't think he made any errors of judgment, and acted pro-socially.
* I will continue to take his product advice strongly, but I will not expect that just because a company is run by rationalists or that Eliezer endorses the product, that this is especially strong evidence that they will succeed on the business fundamentals.
* I think you were mistaken to call it a "ding on his track record" because he did not endorse investing in the company, he endorsed using the product, and this seems like the right epistemic state to me. From the evidence I have about MetaMed, I would really want to have access to their product.
  * As an example, if he'd written a post called "Great Investment Opportunity: MetaMed" this would be a ding on his track record. Instead he wrote a post called "MetaMed: Evidence-Based Healthcare", and this seems accurate and to be a positive sign about his track record of product-recommendations.
7Amelia Bedelia
Unrelatedly but from the same advert. I had not realized it was that expensive - this rings some alarm bells for me but maybe it is fine, it is in fact a medical service. I have been waffling back and forth and will conclude I don't know enough of the details.

Regardless, the alarm bells still made me want to survey the comments and see if anyone else was alarmed. Summaries of the comments by top level:

> The words "evidence-based medicine" seem to imply "non evidence-based medicine"
> Will MetaMed make its research freely available?
> Proposals re: the idea that MetaMed might not improve the world save for their clients
> You should disclose that MIRI shares sponsors with MetaMed, detail question
> Please send this to the front page!
> I'm overall not impressed, here are a couple criticisms, what does MetaMed have over uptodate.com in terms of comparative advantage? (Nice going user EHeller, have some Bayes points.)
> Discussion of doctors and their understanding of probability
> MetaMed has gone out of business (3 years later)
> Is MetaMed a continuation of a vanished company called Personalized Medicine?
> A friend of mine has terrible fibromyalgia and would pay 5k for relief but not for a literature search of unknown benefit. I guess she's not the target audience? (long thread, MetaMed research is cited, EHeller again disputes its value compared to less expensive sources)
> An aside on rat poison
> How might MetaMed and IBM Watson compare and contrast?
> Error in advert: Jaan Tallinn is not the CEO but chairman, Zvi is the CEO.
> Is MetaMed LW-y enough that we should precommit to updating by prespecified amounts on the effectiveness of LW rationality in response to its successes and failures?

There I will cut off because the last commenter is after my own heart. Gwern responds by saying:

And that is correct. But you don't have to make a single prediction, success/fail, you should be able to come up with predictions about your company that you

I think Constantin's postmortem is solid and I appreciate it. She says this:

> But there was a mindset of "we want to give people the best thing, not the thing they want, and if there's a discrepancy, it's because the customer is dumb."  I learned from experience that this is just not true -- when we got complaints from customers, it was often a very reasonable complaint, due to an error I was mortified that we'd missed.

As she says in the linked thread, Zvi's postmortem is "quite different." Constantin discusses the faults of their busines... (read more)

That is a fair point! I don't think Zvi et al. are obligated, and I'm not, like, going to call them fraudster hacks if they're not interested.

I said this more with the hopes that people frustrated with unaccountable governance would want to seize the mantle of personal responsibility, to show everyone that they are pure and incorruptible and it can be done. My post came across as more of a demand than I meant it to, which I apologize for.

Organizations can distribute their money how they want. My concern here is more "can pillars of the rat community get fund... (read more)

2Zvi
Yes, there are lesser goals that I could hit with 90% probability. Note that in that comment, I was saying that 2% would make the project attractive, rather than saying I put our chances of success at 2%. And also that the bar there was set very high - getting a clear attributable major policy win. Which then got someone willing to take the YES side at 5% (Ross).

> EY endorsed it strongly (which I believe counts as a ding on his track record if anyone is maintaining that anywhere)

I don't think it's a ding on his track record.

  • He tried a product.
  • It worked shockingly well for him.
  • He recommended others use that product.

This is a basic prosocial act. You haven't made an argument that the product was low-quality; the failure of the company only shows that there wasn't enough of a market for that particular product to sustain the company. For the most part I'm glad Eliezer advertised it while I could still buy it, it seems ... (read more)

I'm sorry, this may come across as very rude, but:

MetaMed, a startup both you and Vance were on, failed abjectly and then received precious little coverage or updating from the broader rat community (as far as I've seen).

I am happy to believe your skills have improved or that the cause area is better (though this one is so nebulously ambitious that I can't help but feel a cold churn of pessimism). Certainly, demanding that every project a person attempts must meet with success is too high a bar.

But this time I would like to see you and your cofounders hold... (read more)

[This comment is no longer endorsed by its author]
6Lukas_Gloor
I also have bad associations to MetaMed (not based on direct evidence, though). Mentioning here that there's a perception (I had the same impression – but see localdeity's comment for two retrospectives) that MetaMed didn't get a proper retrospective seems relevant. That said, it's been a while since then and it's not like Zvi hasn't done anything else in the meantime. (I think the regular "updates" posts on various topics are excellent and demonstrate types of skill that seem quite relevant for the announced project – though obviously other more org-related skills are also required.)

I'd say it's on the (potential) funders to evaluate what they're comfortable with. I think the things you mention (at least lean implementations thereof) sound like good practices either way, whether or not the track record has flaws. Going beyond what's common practice could be worth it if the funders have specific concerns, but maybe they're fine without and that would be okay – and there's also the danger that too much accountability ties you down and creates bad incentives. (For instance, setting specific goals can make it harder to pivot in situations where pivoting would benefit the mission. Though maybe you can set goals in such a way that they're flexible and you get mostly just the positives out of goalsetting/accountability.)

Overall, I think I agree with the spirit of the comment, but, at the same time, I don't find myself too worried about this. (I've never met Zvi and have no conflicts of interest.)

If you haven't seen it, there's a thread here with links to Sarah Constantin's postmortem and Zvi's semi-postmortem, plus another comment from each of them.

I'll excerpt Zvi's comment from that thread:

> Most start-ups fail. Failing at a start-up doesn't even mean that you, personally are bad at start-ups. If anything the SV-style wisdom is that it means you have experience and showed you will give it your all, and should try again! You don't blow your credibility by taking investor money, having a team that gives it their all for several years, and coming up

... (read more)

> But this time I would like to see you and your cofounders hold yourselves accountable to keep the communities funding you informed

I love postmortems, but community accountability seems like a weird frame here. Presumably the people funding this org have asked some questions and were satisfied, and that's pretty separate from a public disclosure.

That is a cute idea but they'd do it right away [>95%]. Even if you just gave it to like five moderators. They are largely conflict theorists who believe rationalists are [insert the strongest politically derogatory terms imaginable] and that LW being down is morally good.

Maybe if there were real stakes they would consider it, like an independent party making a donation to both MIRI and an organization of SC's choice — except on second thought, I think they would find this too objectionable: "wow, you'll donate to charity but only if you get to humiliate me ... (read more)

5Ben
Maybe (to make it more similar to nuclear war): Lesswrong/sneerclub each have a button to take down the other for the day. Once pressed, the other site is taken down after a 15-minute delay (so mutual destruction is possible if return fire happens within 15 minutes).

People would spend the whole day[1] going back and forth between the two sites saying "look, one of them says they have already fired, we have to shoot NOW, before we go down." We don't even need a mechanically introduced false alarm system; I think people getting hyped and misreading/misrepresenting comments on the other site would do enough.

[1] Or, at least the day until 00.15.01 am, when the inevitable first strike lands.
9lalaithion
It would be better if it were an organization that merely had contradictory goals (maybe a degrowth anarcho-socialist group? A hardcore anti-science Christian organization?) but wasn't organized around the dislike of our group specifically.

Perhaps ironically/terrifyingly, I think the LW/Sneerclub Petrov Day experiment is most interesting if it actually destroys the whole site forever, rather than symbolically taking down one page for a day. This is more analogous to the US/Soviets and their goals + level of hostility.

(Although I expect that deal to still be heavily lopsided in favor of SneerClub, given that SneerClub's main goal seems more like "fuck LW" than "have a functioning nice community".)

I tend to have trouble evaluating this sort of thing due to cherry-picking.

Sam Altman made a twitter post; you can see 20 user-submitted prompts and their output at https://twitter.com/i/events/1511763146746212353, which might help a little if you want to build a model of the thing's strength.

> The surviving worlds look like people who lived inside their awful reality and tried to shape up their impossible chances; until somehow, somewhere, a miracle appeared - the model broke in a positive direction, for once, as does not usually occur when you are trying to do something very difficult and hard to understand, but might still be so - and they were positioned with the resources and the sanity to take advantage of that positive miracle, because they went on living inside uncomfortable reality.

Can you talk more about this? I'm not sure what actions you want people to take based on this text.

> What is the difference between a strategy that is dignified and one that is a clever scheme?

I may be misunderstanding, but I interpreted Eliezer as drawing this contrast:

  • Good Strategy: Try to build maximally accurate models of the real world (even though things currently look bad), while looking for new ideas or developments that could save the world. Ideally, the ideas the field puts a lot of energy into should be ones that already seem likely to work, or that seem likely to work under a wide range of disjunctive scenarios. (Failing that, they at least shouldn't require multiple miracles, and should lean on a miracle that's unusually likely.)
  • Bad
... (read more)