All of brunoparga's Comments + Replies

As I understand it – with my only source being Ben's post and a couple of comments that I've read – Drew is also a cofounder of Nonlinear. Also, this was reported:

Alice and Chloe reported a substantial conflict within the household between Kat and Alice. Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn't mind polyamory "on the other side of the world", but couldn't stand

...
Noosphere89:
This seems like a downstream consequence of rationalist/EA organizations ignoring a few Chesterton's fences that are really important. One of those fences is not having dating/romantic relationships in an employment context where there is any power asymmetry; these can easily lead to abuse or worse.

In general, one impression I get from a lot of rationalist/EA organizations is that there are very few boundaries between work, dating/romance, and (depending on the organization) living arrangements, and the boundaries that do exist are either much too illegible and high-context, especially social context, and/or way too porous, in that they can easily be violated. Yes, there are no preformed Cartesian boundaries that we can use, but that doesn't stop us from at least drawing approximate boundaries and enforcing them. Legible norms are never fun and have their costs, but I think the benefits of legible norms, especially epistemically legible norms around dating and romance in an employment context, are very, very high: high enough that the downsides don't make it bad overall to enforce legible norms around dating/romantic relationships at work. I'd say somewhat similar things about legible norms on living situations, pay, etc.

My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)

I don't think "we're currently living in a simulation" or "ASI would have effects beyond imagination, at least for the median human imaginer" are such weird beliefs among this crowd that their proving true would count as OP winning the bet. Of course, they do specifically say that UAP being special cases in the simulation would count, but the mere belief in a simulation would not.

Would you mind sharing how much you will win if the bet goes your way and everyone pays out?

Also, I would like to see more actions like yours, so I'd like to put money into that. I want to unconditionally give you $50; if you win the bet you may (but would be under no obligation to) return this money to me. All I'd need now is an ETH wallet to send money to.

I would like this to be construed as a meta-level incentive for people to have this attitude of "put up or shut up" while offering immediate payouts; not as taking a stance on the object-level question.

RatsWrongAboutUAP:
ETH: 0x1E9f00B7FF9699869f6E81277909115c11399296
BTC: bc1qegk25dy4kt2hgx0s6qla8gddv09cga874dr372

So far I have paid out $6,164 and I stand to make $515,000 if I win. I appreciate your incentive offer.
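If anyone else wants to chip in at that address, here is a minimal sketch of how one might send ETH with the web3.py library. The RPC endpoint, the private-key environment variable, and the 0.025 ETH amount (roughly $50 at some assumed price) are all placeholders of mine, not anything RatsWrongAboutUAP specified:

```python
# Minimal sketch: send ETH to the address above using web3.py (pip install web3).
import os
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.com"))   # placeholder RPC endpoint
account = w3.eth.account.from_key(os.environ["ETH_PRIVATE_KEY"])  # never hardcode keys

tx = {
    "to": "0x1E9f00B7FF9699869f6E81277909115c11399296",  # address from the comment above
    "value": w3.to_wei(0.025, "ether"),   # placeholder amount, ~$50 at an assumed price
    "nonce": w3.eth.get_transaction_count(account.address),
    "gas": 21_000,                        # standard gas limit for a plain transfer
    "gasPrice": w3.eth.gas_price,
    "chainId": 1,                         # Ethereum mainnet
}

signed = account.sign_transaction(tx)
# web3.py v6 uses .rawTransaction; newer versions rename it to .raw_transaction.
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
print(tx_hash.hex())
```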
RatsWrongAboutUAP:
I'm currently on vacation; I will follow up on this in a week.

I hear you, thank you for your comment.

I guess I don't have a clear model of how big the pool is of people who:

  • have malicious intent;
  • access LessWrong and other spaces tightly linked to this;
  • don't yet have the kind of ideas that my research could provide them with.

As soon as someone managed to turn ChatGPT into an agent (AutoGPT), someone else created an agent, ChaosGPT, with the explicit goal of destroying humankind. That is the kind of person who might benefit from what I intend to produce: an overview of the AI capabilities required to end the world, how far along we are in obtaining them, and so on. I want this information to be used to prevent an existential catastrophe, not to precipitate it.

Thank you for your post. It is important for us to keep refining the overall p(doom) and the ways it might happen or be averted. You make your point very clearly, even in just the version presented here, condensed from your full posts on various specific points.

It seems to me that you are applying a sort of symmetry argument to values and capabilities, arguing that x-risk requires that we hit the bullseye of capability but miss the one for values. I think this argument has a problem, and I'd like to know your view on how much it affects your overall ...

Gwern has posted several of Kurzweil's predictions on PredictionBook and I have marked many of them as either right or wrong. In some cases I included comments on the bits of research I did.

I couldn't get things to work here, but thank you Elizabeth, Raymond and Ben for trying to help me! Have fun!

I'm thinking of a few things that are perhaps not super important individually, but that ought to have at least some weight in such an index:

Standardization and transportation

  • What's the progress of adoption of the metric system?
  • Relatedly, can we all (including Chile, where I live) ditch US paper sizes and switch to ISO sizes?
  • Standardizing electric plugs and outlets, as well as domestic alternating current frequency and voltage
  • Low priority, but probably still desirable if one wants a truly unified world: everyone driving on the same side of the road
  • For ra...
AlphaGo used about 0.5 petaflops (= trillion floating point operations per second)

Isn't peta- the prefix for quadrillion?

aafarrugia:
I agree - it's 10^15 flops (better to specify it like that anyway, since "trillion" means 10^12 in the short scale but 10^18 in the long scale).
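To make the correction concrete, here's a minimal sketch of the arithmetic (short-scale number names; the code is mine, not from the post):

```python
# SI prefixes as powers of ten, with their short-scale English names.
SI_PREFIXES = {
    "mega": 10**6,   # million
    "giga": 10**9,   # billion
    "tera": 10**12,  # trillion
    "peta": 10**15,  # quadrillion
    "exa":  10**18,  # quintillion
}

# The post's figure: 0.5 petaflops.
alphago_flop_s = 0.5 * SI_PREFIXES["peta"]
print(f"{alphago_flop_s:.0e} FLOP/s")               # 5e+14 FLOP/s
print(alphago_flop_s == 500 * SI_PREFIXES["tera"])  # True: 500 trillion, not 0.5 trillion
```

So 0.5 petaflops is 500 trillion FLOP/s, and the parenthetical should read "quadrillion".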
(Also, is there a reason there are almost no comments on these posts?)

They are reposts from slatestarcodex.com.

There's one factor that could explain this coincidence which is not referenced here, and I couldn't find it mentioned in the SSC post either: polar motion.

As a recap: latitude is the angle between a given point (like the tip of the Pyramid) and the Equator. The Equator is the set of points on the surface that are equidistant from the two poles. And the poles are the points where the rotation axis intersects the surface; they're the points the Earth rotates around, sort of.

Well, it turns out that the axis of rotation is not fixed with respect to the surface. Thi...
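Since the comment is truncated here, here's a rough back-of-the-envelope sketch of the magnitude involved. The ~10 m amplitude for polar motion at the surface (Chandler wobble plus the annual term) is an order-of-magnitude figure I'm supplying, not something from the post:

```python
import math

R_EARTH_M = 6_371_000  # mean Earth radius in meters

# Assumed amplitude of polar motion at the surface; ~10 m is the
# right order of magnitude for the Chandler wobble plus annual term.
pole_shift_m = 10.0

# A pole displacement of d meters shifts a point's latitude by at most
# d / R radians (when the displacement is along the point's meridian).
dlat_rad = pole_shift_m / R_EARTH_M
dlat_deg = math.degrees(dlat_rad)

print(f"max latitude shift ≈ {dlat_deg:.7f}°")  # ≈ 0.0000899°
```

That's about 0.0001 degrees, which already matters at the fourth decimal place of a latitude quoted to seven.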

Hi, I'm Bruno from Brazil. I have been involved with stuff in the Lesswrongosphere since 2016. While I was in the US, I participated in the New Hampshire and Boston LW meetup groups, with occasional presence at SSC and EA meetups. I volunteered at EAG Boston 2017 and attended EAG London later that year. I did the CFAR workshop of February 2017 and hung out at the subsequent alumni reunion. After having to move back to Brazil, I joined the São Paulo LW and EA groups and tried, unsuccessfully, to host a book club to read RAZ over the course of 2018. (We ...

Alexei:
Sounds great. Welcome!