How does it work to optimize for realistic goals in physical environments of which you yourself are a part? E.g. humans and robots in the real world, not humans and AIs playing video games in virtual worlds where the player is not part of the environment. The authors claim we don't actually have a good theoretical understanding of this and explore four specific ways that we don't understand this process.

Hangnails are Largely Optional

Hangnails are both annoying and painful, often snagging on things and causing your fingers to bleed. Typical responses to hangnails include:

  • Ignoring them.
  • Pulling them out, which can lead to further bleeding or infection.
  • Trimming them with nail clippers, which often leaves a jagged edge.
  • Wrapping the affected finger in a bandage, requiring daily changes.

Instead, use a drop of superglue to glue it to your nail plate, IMO a far superior option. It's $10 for 12 small tubes on Amazon. Superglue is also useful for cuts and minor repairs, so I already carry it around everywhere.

Hangnails manifest as either separated nail fragments or dry peeling skin on the paronychium (the area around the nail). In my experience superglue works for nail separation, and a paper (available free on Sci-Hub) claims it also works for peeling skin on the paronychium.

Is this safe? Cyanoacrylate glue is regularly used in medicine to close wounds, and now frequently replaces stitches. Medical superglue has slightly different types of cyanoacrylate, but doctors I know say it's basically the same thing. I think medical superglue exists to prevent rare reactions and for large wounds where the exothermic reaction from a large quantity might burn you, and the safety difference for hangnails is minimal [1]. But to be extra safe you could just use 3M medical-grade superglue or Dermabond.

[1]: There have been studies showing cytotoxicity in rabbits when injecting it into their eyes, or performing internal (bone or cartilage) grafts. A 2013 review says that although some studies have found internal toxicity, "[f]or wound closure and various other procedures, there have been a considerable number of studies finding histologic equivalence between ECA [commercial superglue] and more widely accepted modalities of repair."
avturchin13h10-8
ChatGPT 4.5 is in preview at https://chat.lmsys.org/ under the name gpt-2. It calls itself ChatGPT 2.0 in a text art drawing: https://twitter.com/turchin/status/1785015421688799492
Raemon1d236
Yesterday I was at a "cultivating curiosity" workshop beta-test. One concept was "there are different mental postures you can adopt, that affect how easy it is to notice and cultivate curiosities." It wasn't exactly the point of the workshop, but I ended up with several different "curiosity-postures" that were useful to try on while trying to lean into "curiosity" re: topics that I feel annoyed or frustrated or demoralized about.

The default stances I end up with when I Try To Do Curiosity On Purpose are something like:

1. Dutiful Curiosity (which is kinda fake, although capable of being dissociatedly autistic and noticing lots of details that exist and questions I could ask)
2. Performatively Friendly Curiosity (also kinda fake, but does shake me out of my default way of relating to things. In this, I imagine saying to whatever thing I'm bored/frustrated with "hullo!" and try to acknowledge it and give it at least some chance of telling me things)

But some other stances to try on, that came up, were:

3. Curiosity like "a predator." "I wonder what that mouse is gonna do?"
4. Earnestly playful curiosity. "Oh, that [frustrating thing] is so neat, I wonder how it works! What's it gonna do next?"
5. Curiosity like "a lover." "What's it like to be you? What do you want? How can I help us grow together?"
6. Curiosity like "a mother" or "father" (these feel slightly different to me, but each is treating [my relationship with a frustrating thing] like a small child who is a bit scared, who I want to help, who I am generally more competent than but still want to respect the autonomy of).
7. Curiosity like "a competent but unemotional robot," who just algorithmically notices "okay, what are all the object-level things going on here, when I ignore my usual abstractions?"... and then "okay, what are some questions that seem notable?" and "what are my beliefs about how I can interact with this thing?" and "what can I learn about this thing that'd be useful for my goals?"
decision theory is no substitute for utility function

some people, upon learning about decision theories such as LDT and how it cooperates on problems such as the prisoner's dilemma, end up believing the following:

> my utility function is about what i want for just me; but i'm altruistic (/egalitarian/cosmopolitan/pro-fairness/etc) because decision theory says i should cooperate with other agents. decision theoretic cooperation is the true name of altruism.

it's possible that this is true for some people, but in general i expect that to be a mistaken analysis of their values. decision theory cooperates with agents relative to how much power they have, and only when it's instrumental.

in my opinion, real altruism (/egalitarianism/cosmopolitanism/fairness/etc) should be in the utility function which the decision theory is instrumental to. i actually intrinsically care about others; i don't just care about others instrumentally because it helps me somehow.

some important ways in which my utility-function-altruism differs from decision-theoretic cooperation include:

  • i care about people weighed by moral patienthood; decision theory only cares about agents weighed by negotiation power. if an alien superintelligence is very powerful but isn't a moral patient, then i will only cooperate with it instrumentally (for example because i care about the alien moral patients that it has been in contact with); if cooperating with it doesn't help my utility function (which, again, includes altruism towards aliens) then i won't cooperate with that alien superintelligence. as a corollary, i will take actions that cause nice things to happen to people even if they're very impoverished (and thus don't have much LDT negotiation power) and it doesn't help any other aspect of my utility function than just the fact that i value that they're okay.
  • if i can switch to a better decision theory, or if fucking over some non-moral-patienty agents helps me somehow, then i'll happily do that; i don't have goal-content integrity about my decision theory. i do have goal-content integrity about my utility function: i don't want to become someone who wants moral patients to unconsentingly-die or suffer, for example.
  • there seems to be a sense in which some decision theories are better than others, because they're ultimately instrumental to one's utility function. utility functions, however, don't have an objective measure for how good they are. hence, moral anti-realism is true: there isn't a Single Correct Utility Function.

decision theory is instrumental; the utility function is where the actual intrinsic/axiomatic/terminal goals/values/preferences are stored. usually, i also interpret "morality" and "ethics" as "terminal values", since most of the stuff that those seem to care about looks like terminal values to me. for example, i will want fairness between moral patients intrinsically, not just because my decision theory says that that's instrumental to me somehow.
The cost of goods has the same units as the cost of shipping: $/kg. Referencing between them lets you understand how the economy works, e.g. why construction material sourcing and drink bottling have to be local, but oil tankers exist.

  • An iPhone costs $4,600/kg, about the same as SpaceX charges to launch it to orbit. [1]
  • Beef, copper, and off-season strawberries are $11/kg, about the same as a 75kg person taking a three-hour, 250km Uber ride costing $3/km.
  • Oranges and aluminum are $2-4/kg, about the same as flying them to Antarctica. [2]
  • Rice and crude oil are ~$0.60/kg, about the same as the $0.72 it costs to ship them 5000km across the US via truck. [3,4] Palm oil, soybean oil, and steel are around this price range, with wheat being cheaper. [3]
  • Coal and iron ore are $0.10/kg, significantly more than the cost of shipping them around the entire world via smallish (Handysize) bulk carriers. Large bulk carriers are another 4x more efficient. [6]
  • Water is very cheap, with tap water at $0.002/kg in NYC. [5] But shipping via tanker is also very cheap, so you can ship it maybe 1000 km before equaling its cost.

It's really impressive that for the price of a winter strawberry, we can ship a strawberry-sized lump of coal around the world 100-400 times.

[1] iPhone is $4600/kg, large launches sell for $3500/kg, and rideshares for small satellites $6000/kg. Geostationary orbit is more expensive, so it's okay for GPS satellites to cost more than an iPhone per kg, but Starlink wants to be cheaper.
[2] https://fred.stlouisfed.org/series/APU0000711415. Can't find numbers, but Antarctica flights cost $1.05/kg in 1996.
[3] https://www.bts.gov/content/average-freight-revenue-ton-mile
[4] https://markets.businessinsider.com/commodities
[5] https://www.statista.com/statistics/1232861/tap-water-prices-in-selected-us-cities/
[6] https://www.researchgate.net/figure/Total-unit-shipping-costs-for-dry-bulk-carrier-ships-per-tkm-EUR-tkm-in-2019_tbl3_351748799
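For concreteness, a back-of-the-envelope sketch of the comparison being made: converting a shipping scenario into $/kg so it can sit next to a commodity's price per kg. The numbers are the post's own examples, except the iPhone price and mass, which are my rough assumptions.

```python
# Back-of-the-envelope sketch: express a shipping scenario in $/kg so it can
# sit next to a commodity's price per kg. Numbers are the post's examples,
# except the iPhone price/mass, which are rough assumptions (~$800, ~0.17 kg).

def cost_per_kg(total_cost_usd: float, mass_kg: float) -> float:
    return total_cost_usd / mass_kg

# A 75 kg person taking a three-hour, 250 km Uber ride at $3/km:
uber = cost_per_kg(total_cost_usd=3.0 * 250, mass_kg=75)
print(f"Uber passenger: ${uber:.0f}/kg (beef, copper, strawberries: ~$11/kg)")

# An iPhone versus the ~$3,500-6,000/kg launch prices from footnote [1]:
iphone = cost_per_kg(total_cost_usd=800, mass_kg=0.17)
print(f"iPhone: ${iphone:.0f}/kg (orbital launch: ~$3,500-6,000/kg)")
```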

Popular Comments

Recent Discussion

Hangnails are Largely Optional

Hangnails are both annoying and painful, often snagging on things and causing your fingers to bleed. Typical responses to hangnails include:

  • Ignoring them.
  • Pulling them out, which can lead to further bleeding or infection.
  • Trimming them with nail clippers, which often leaves a jagged edge.
  • Wrapping the affected finger in a bandage, requiring daily changes.

Instead, use a drop of superglue to glue it to your nail plate, IMO a far superior option. It's $10 for 12 small tubes on Amazon. Superglue is also useful for cuts and minor... (read more)

The history of science has tons of examples of the same thing being discovered multiple times independently; wikipedia has a whole list of examples here. If your goal in studying the history of science is to extract the predictable/overdetermined component of humanity's trajectory, then it makes sense to focus on such examples.

But if your goal is to achieve high counterfactual impact in your own research, then you should probably draw inspiration from the opposite: "singular" discoveries, i.e. discoveries which nobody else was anywhere close to figuring out. After all, if someone else would have figured it out shortly after anyways, then the discovery probably wasn't very counterfactually impactful.

Alas, nobody seems to have made a list of highly counterfactual scientific discoveries, to complement wikipedia's list of multiple discoveries.

To...

~Don't aim for the correct solution, (first) aim for understanding the space of possible solutions

3Johannes C. Mayer11h
Ok, I was confused before. I think homoiconicity is sort of several things. Here are some examples:

  • In basically any programming language L, you can have a program A that writes a file containing valid L source code, which is then run by A.
  • In some sense, Python is homoiconic, because you can have a string and then exec it. Before you exec (or in between execs) you can manipulate the string with normal string manipulation.
  • In R you have the quote operator, which allows you to take in code and return an object that represents this code, which can then be manipulated.
  • In Lisp, when you write an S-expression, the same S-expression can be interpreted as a program or a list. It is actually always a (possibly nested) list. If we interpret the list as a program, we say that the first element in the list is the symbol of the function, and the remaining entries in the list are the arguments to the function.

Although I can't put my finger on it exactly, to me it feels like the homoiconicity increases as you go further down the list. The basic idea, though, seems to always be that we have a program that can manipulate the representation of another program. This is actually more general than homoiconicity, as we could have a Python program manipulating Haskell code, for example. It seems that the further we go down the list, the easier it gets to do this kind of program manipulation.
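For the Python example above, a minimal runnable sketch (my own illustration, not from the comment) of treating a program as a string, manipulating it with ordinary string operations, and then executing it:

```python
# A program holds another program as a string, manipulates it like any
# other data, then executes the result.

code = "def add(a, b):\n    return a + b\n"

# Manipulate the representation of the program with normal string operations.
code = code.replace("add", "mul").replace("a + b", "a * b")

namespace = {}
exec(code, namespace)          # run the manipulated program
print(namespace["mul"](3, 4))  # -> 12
```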
4Answer by AnthonyC17h
I think it's worth noting that small delays in discovering new things would, in aggregate, be very impactful. On average, how far apart are the duplicate discoveries? If we pushed all the important discoveries back a couple of years by eliminating whoever was in fact historically first, then the result is a world that is perpetually several years behind our own in everything. This world is plausibly 5-10% poorer for centuries, maybe more if a few key hard steps have longer delays, or if the most critical delays happened a long time ago and were measured in decades or centuries instead.
This is a linkpost for https://dynomight.net/seed-oil/

A friend has spent the last three years hounding me about seed oils. Every time I thought I was safe, he’d wait a couple months and renew his attack:

“When are you going to write about seed oils?”

“Did you know that seed oils are why there’s so much {obesity, heart disease, diabetes, inflammation, cancer, dementia}?”

“Why did you write about {meth, the death penalty, consciousness, nukes, ethylene, abortion, AI, aliens, colonoscopies, Tunnel Man, Bourdieu, Assange} when you could have written about seed oils?”

“Isn’t it time to quit your silly navel-gazing and use your weird obsessive personality to make a dent in the world—by writing about seed oils?”

He’d often send screenshots of people reminding each other that Corn Oil is Murder and that it’s critical that we overturn our lives...

Strong upvoted. I learned a lot. Seriously interested in what you think is relatively safe and not extremely expensive or difficult to acquire. Some candidates I thought of, but I'm not exactly well informed:
-- Grass-fed beef
-- Oysters/mussels
-- Some whole grains? Which?
-- Fruit
-- Vegetables you somehow know aren't contaminated by anti-pest chemicals?

I really need some guidance here.

2JenniferRM5h
This bit caught my eye: I searched for [is olive oil cut with canola oil] and found that in the twenty teens organized crime was flooding the market with fake olive oil, but in 2022 an EU report suggested that uplabeling to "extra virgin" was the main problem they caught (still?). Coming from the other direction, in terms of a "solid safe cheap supply"... I can find reports of Extra Virgin Olive Oil being sold by Costco under their Kirkland brand that is particularly well sourced and tested, and my priors say that this stuff is likely to be weirdly high quality for a weirdly low price (because, in general, "kirklandization" is a thing that food producers with a solid product and huge margins worry about). I'm kinda curious if you have access to Kirkland EVOO and if it gives you "preflux"? Really any extra data here (where your sensitive palate gives insight into the current structure of the food economy) would be fascinating :-)
1Ann8h
Thanks for the reference! I'm definitely confused about the inclusion of "pre-prepared (packaged) meat, fish and vegetables" on the last list, though. Does cooking meat or vegetables before freezing it (rather than after? I presume most people aren't eating meat raw) actually change its processed status significantly?
1Freyja9h
Also as a brief pointer at another cool thing in Metabolical, Lustig claims that exercise is useful for weight loss mostly because of its beneficial impact on cell repair/metabolic system repair (something specific about mitochondria?) and not for the calorie deficit it may or may not create. I consider Lustig's science to be quite thorough, I like him a lot. The main point against him is that he personally doesn't look very metabolically healthy, which I would expect of someone who had spent his life investigating and theorising about what influences metabolic health. 

Post for a somewhat more general audience than the modal LessWrong reader, but gets at my actual thoughts on the topic.

In 2018 OpenAI defeated the world champions of Dota 2, a major esports game. This was hot on the heels of DeepMind’s AlphaGo performance against Lee Sedol in 2016, achieving superhuman Go performance way before anyone thought that might happen. AI benchmarks were being cleared at a pace which felt breathtaking at the time, papers were proudly published, and ML tools like Tensorflow (released in 2015) were coming online. To people already interested in AI, it was an exciting era. To everyone else, the world was unchanged.

Now Saturday Night Live sketches use sober discussions of AI risk as the backdrop for their actual jokes, there are hundreds...

2Seth Herd12h
Nothing in this post or the associated logic says LLMs make AGI safe, just safer than what we were worried about. Nobody with any sense predicted runaway AGI by this point in history. There's no update from other forms not working yet.

There's a weird thing where lots of people's p(doom) went up when LLMs started to work well, because they found it an easier route to intelligence than they'd been expecting. If it's easier, it happens sooner and with less thought surrounding it. See Porby's comment on his risk model for language model agents. It's a more succinct statement of my views.

LLMs are easy to turn into agents, so let's not get complacent. But they are remarkably easy to control and align, so that's good news for aligning the agents we build from them. But that doesn't get us out of the woods; there are new issues with self-reflective, continuously learning agents, and there's plenty of room for misuse and conflict escalation in a multipolar scenario, even if alignment turns out to be dead easy if you bother to try.
2Seth Herd12h
That is a fascinating take! I haven't heard it put that way before. I think that perspective is a way to understand the gap between old-school agent foundations folks' high p(doom) and new-school LLMers' relatively low p(doom) - something I've been working to understand, and hope to publish on soon. To the extent this is true, I think that's great, because I expect to see some real insights on intelligence as LLMs are turned into functioning minds in cognitive architectures. Do you have any refs for that take, or is it purely a gestalt?
1quetzal_rainbow12h
If it's not a false memory, I've seen this on Twitter from either EY or Rob Bensinger, but it's unlikely I'll find the source now; it was in the middle of a discussion.

Fair enough, thank you! Regardless, it does seem like a good reason to be concerned about alignment. If you have no idea how intelligence works, how in the world would you know what goals your created intelligence is going to have? At that point, it is like alchemy - performing an incantation and hoping not just that you got it right, but that it does the thing you want.

Summary. In this post, we present the formal framework we adopt during the sequence, and the simplest form of the type of aspiration-based algorithms we study. We do this for a simple form of aspiration-type goals: making the expectation of some variable equal to some given target value. The algorithm is based on the idea of propagating aspirations along time, and we prove that the algorithm gives a performance guarantee if the goal is feasible. Later posts discuss safety criteria, other types of goals, and variants of the basic algorithm.

Assumptions

In line with the working hypotheses stated in the previous post, we assume more specifically the following in this post:

  • The agent is a general-purpose AI system that is given a potentially long sequence of tasks, one by one,
...
2Charlie Steiner7h
So to sum up so far, the basic idea is to shoot for a specific expected value of something by stochastically combining policies that have expected values above and below the target. The policies to be combined should be picked from some "mostly safe" distribution rather than being whatever policies are closest to the specific target, because the absolute closest policies might involve inner optimization for exactly that target, when we really want "do something reasonable that gets close to the target." And the "aspiration updating" thing is a way to track which policy you think you're shooting for, in a way that you're hoping generalizes decently to cases where you have limited planning ability?

Exactly! Thanks for providing this concise summary in your words. 

In the next post we generalize the target from a single point to an interval to get even more freedom that we can use for increasing safety further. 

In our current ongoing work, we generalize that further to the case of multiple evaluation metrics, in order to get closer to plausible real-world goals; see our teaser post.
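For intuition, here is a toy numerical sketch (mine, not the authors' algorithm) of the basic idea in the exchange above: if one available policy undershoots the target expectation and another overshoots it, mixing them with the right probability hits the target exactly.

```python
# Toy illustration: mix a "low" policy and a "high" policy with probability
# p = (target - low) / (high - low) so the overall expectation equals target.

import random

def mixing_probability(low: float, high: float, target: float) -> float:
    """Probability of picking the 'high' policy so that
    p * high + (1 - p) * low == target (requires low <= target <= high)."""
    if high == low:
        return 0.0
    return (target - low) / (high - low)

# Policy A gives E[X] = 2, policy B gives E[X] = 10, target is 5.
p = mixing_probability(low=2.0, high=10.0, target=5.0)  # 0.375
samples = [10.0 if random.random() < p else 2.0 for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~= 5.0
```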

Thanks to Leo Gao, Nicholas Dupuis, Paul Colognese, Janus, and Andrei Alexandru for their thoughts. 

This post was mostly written in 2022, and pulled out of my drafts after recent conversations on the topic. My main update from then is the framing. Rather than suggesting searching for substeps of search as an approach I'm excited about, I now see it only as a way to potentially reduce inherent difficulties. The main takeaway of this post should be that searching for search seems conceptually fraught to the point that it may not be worth pursuing.

Searching for Search is the research direction that looks into how neural networks implement search algorithms to determine an action. The hope is that if we can find the search process, we can then determine...

Aren't LLMs already capable of two very different kinds of search? Firstly, their whole deal is predicting the next token - which is a kind of search. They're evaluating all the tokens at every step, and in the end choosing the most probable-seeming one. Secondly, across-token search when prompted accordingly. A prompt like "Please come up with 10 options for X, then rate them all according to Y, and select the best option" is something that current LLMs can perform very reliably - whether or not "within-token search" exists as well. But then again, one might of cours... (read more)
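For the second kind of search, a toy sketch of the "generate options, rate them, pick the best" pattern; the generate and score functions here are hypothetical stand-ins for LLM calls, not any particular API.

```python
# Toy sketch of prompted across-token search: best-of-n selection.

import random

def best_of_n(generate, score, prompt: str, n: int = 10) -> str:
    """Generate n candidate answers, rate each, and return the best one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Stub generator and scorer so the sketch runs on its own.
options = ["option A", "longer option B", "even longer option C"]
pick = best_of_n(generate=lambda p: random.choice(options),
                 score=lambda c: len(c),  # placeholder rating function
                 prompt="Come up with options for X")
print(pick)
```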


This is a response to the post We Write Numbers Backward, in which lsusr argues that little-endian numerical notation is better than big-endian.[1] I believe this is wrong, and big-endian has a significant advantage not considered by lsusr.

Lsusr describes reading the number "123" in little-endian, using the following algorithm:

  • Read the first digit, multiply it by its order of magnitude (one), and add it to the total. (Running total: ??? one.)
  • Read the second digit, multiply it by its order of magnitude (ten), and add it to the total. (Running total: ??? twenty one.)
  • Read the third digit, multiply it by its order of magnitude (one hundred), and add it to the total. (Arriving at three hundred and twenty one.)

He compares it with two algorithms for reading a big-endian number. One...

What if you are not a person, but a computer, converting a string into an integer? In that case, having a simpler and faster algorithm is important, having to start with only the beginning of a string (what the user has typed so far) is plausible, and knowing the number's approximate value is useless. So in this case the little-endian algorithm is much better than the big-endian one.

For the most part this is irrelevant. If you are a computer with only partial input, receiving the rest of the input is so much slower than parsing it that it literally does... (read more)
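For concreteness, a toy sketch (mine, not from either post) of the two reading directions the thread is comparing, as a computer parsing a digit string one character at a time:

```python
# Parsing a digit string incrementally in each direction.

def parse_little_endian(digits: str) -> int:
    """Least-significant digit arrives first: '321' -> 123."""
    total, magnitude = 0, 1
    for d in digits:
        total += int(d) * magnitude
        magnitude *= 10
    return total

def parse_big_endian(digits: str) -> int:
    """Most-significant digit arrives first: '123' -> 123."""
    total = 0
    for d in digits:
        total = total * 10 + int(d)
    return total

print(parse_little_endian("321"))  # 123
print(parse_big_endian("123"))     # 123
```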

3localdeity7h
One aspect neither of you have explicitly addressed is the speaking of numbers; speaking, after all, predates writing.  We say "one billion, four hundred twenty-eight million, [...]". Given that that's what we say, the first two pieces of information we need are "one" and "billion".  More generally, we need to get the first 1-3 digits (the leftmost comma-separated group), then we need the magnitude, then we can proceed reading off all remaining digits. Given that the magnitude is not explicitly written down, we get it by counting the digits.  If the digits are comma-separated into groups of 3 (and "right-justified", so that if there are 3n+1 or 3n+2 digits, then the extra 1-2 are the leftmost group), then it's generally possible to get the magnitude from your "peripheral vision" (as opposed to counting them one by one) for numbers less than, say, 1 billion, which are what you'd most often encounter; like, "52" vs "52,193" vs "52,193,034", you don't need to count carefully to distinguish those.  (It gets harder around 52,193,034,892 vs 52,193,034,892,110, but manually handling those numbers is rare.)  So if getting the magnitude is a mostly free operation, then you might as well just present the digits left-to-right for people who read left-to-right. Now, is it sensible that we speak "one billion, four hundred twenty-eight million, [...]"?  Seems fine to me.  It presents the magnitude and the most significant digits first (and essentially reminds you of the magnitude every 3 digits), and either the speaker or the listener can cut it off at any point and have an estimate accurate to as many digits as they care for.  (That is essentially the use case of "partially running the algorithm" you describe.)  I think I'd hate listening to "six hundred sixty three, six hundred twenty-seven thousand, four hundred twenty-eight million, and one billion", or even suffixes of it like "four hundred twenty eight million and one billion".  Tell me the important part first!
2quiet_NaN8h
I think that it is obvious that Middle-Endianness is a satisfactory compromise between Big and Little Endian.

More seriously, it depends on what you want to do with the number. If you want to use it in a precise calculation, such as adding it to another number, you obviously want to process the least significant digits of the inputs first (which is what bit-serial processors literally do). If I want to know if a serially transmitted number is below or above a threshold, it would make sense to transmit it MSB first (with a fixed length).

Of course, using integers to count the number of people in India seems like using the wrong tool for the job to me altogether. Even if you were an omniscient ASI, this level of precision would require you to have clear standards for at what time a human counts as born, and at least provide a second-accurate timestamp or something. Few people care if the population of India was divisible by 17 at any fixed point in time, which is what we would mostly use integers for.

The natural type for the number of people in India (as opposed to the number of people in your bedroom) would be a floating point number. And the correct way to specify a floating point number is to start with the exponent, which is the most important part. You will need to parse all of the bits of the exponent either way to get an idea of the magnitude of the number (unless we start encoding the exponent as a floating point number, again). The next most important thing is the sign bit. Then comes the mantissa, starting with the most significant bit.

So instead of writing  What we should write is:

Standardizing a shorter form (1.6e-19 C --> ??) is left as an exercise to the reader, as are questions about the benefits we get from switching to base-2 exponentials (base-e exponentials do not seem particularly handy; I kind of like using the same system of digits for both my floats and my ints) and omitting the then-redundant one in front of the dot of the mantissa.
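As a reference point for the fields mentioned above, a small sketch (my own, not from the comment) pulling an IEEE-754 double apart into sign, exponent, and mantissa; note the standard stores the sign first, whereas the comment proposes leading with the exponent.

```python
# Decompose the IEEE-754 double for 1.6e-19 into its fields.

import struct

bits = struct.unpack(">Q", struct.pack(">d", 1.6e-19))[0]
sign     = bits >> 63                # 1 bit
exponent = (bits >> 52) & 0x7FF      # 11 bits, biased by 1023
mantissa = bits & ((1 << 52) - 1)    # 52 bits, implicit leading 1

print(sign, exponent - 1023, hex(mantissa))
```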
2localdeity8h
I generally agree, except I find words like "multiply" and "add" a bit misleading to use in this context. If I read a number like 3,749,328, then it's not like I take 3 million, and then take 7, multiply by 100,000, and get 700,000, and then perform a general-purpose addition operation and compute the subtotal of 3,700,000. First of all, "multiply by 100,000" is generally more like "shift left by 5 (in our base-10 representation)"; but moreover, the whole operation is more like "set the nth digit of the number to be this". If this were a computer working in base 2, "set nth digit" would be implemented as "mask out the nth bit of the current number [though in this case we know it's already 0 and can skip this step], then take the input bit, shift left by n, and OR it with the current number".

(In this context I find it a bit misleading to say that "One hundred plus twenty yields one hundred and twenty" is performing an addition operation, any more than "x plus y yields x+y" counts as performing addition. Because 100, by place-value notation, means 1 * 100, and 20 means 2 * 10, and 120 means 1 * 100 + 2 * 10, so you really are just restating the input.)

Also, I might switch the order of the first two steps in practice. "Three ... [pauses to count digits] million, seven hundred forty-nine thousand, ...".
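A tiny sketch of the "set the nth digit" operation described above, in base 2 (my own illustration): mask out bit n of the current number, then OR in the new bit shifted into place.

```python
def set_nth_bit(number: int, n: int, bit: int) -> int:
    """Clear bit n, then OR in the new bit at position n."""
    return (number & ~(1 << n)) | (bit << n)

print(bin(set_nth_bit(0b1000, 1, 1)))  # 0b1010
```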

Epistemic Status: Musing and speculation, but I think there's a real thing here.

I.

When I was a kid, a friend of mine had a tree fort. If you've never seen such a fort, imagine a series of wooden boards secured to a tree, creating a platform about fifteen feet off the ground where you can sit or stand and walk around the tree. This one had a rope ladder we used to get up and down, a length of knotted rope that was tied to the tree at the top and dangled over the edge so that it reached the ground. 

Once you were up in the fort, you could pull the ladder up behind you. It was much, much harder to get into the fort without the ladder....

4Screwtape14h
Epistemic status: memories from five years ago where I was stressed and sleep-deprived at the time.

So, the primary thing I thought the Megameetup did was have overnight space for the people who registered for overnight and space during the day for people who registered for the day. I closed registrations when I thought we had as many people as the space could hold, and made most of my calculations and planning based on the number of people who registered. (Mostly food, but I'd also been asked to check that certain people the community had had problems with weren't attending.) I knew Solstice was going on that weekend and had coordinated a little bit with the Solstice organizer, but mostly just to know the time and location so I knew when to send people over.

During the weekend - if I remember correctly, this was in the early afternoon on Saturday, so about five hours before Solstice and while the Megameetup was in full swing - people start pointing out that with registration closed, people who just planned to go to the afterparty didn't know if they were supposed to just show up or what. I don't remember the exact conversation, but basically over the course of about fifteen minutes I realized that lots of people were assuming that the Megameetup would host Solstice's afterparty, and that an unknown number of people were attending Solstice who hadn't registered at all with Megameetup but expected to go to the afterparty.

I have five hours to prepare for an unknown number of people to converge on us, when we were already at what I thought was capacity for the venue with a little safety margin, while simultaneously trying to keep the event I knew I was planning on course. I could try and tell people not to, but lots of people including my co-organizers have been assuming obviously the afterparty is at the Megameetup and people who went to Solstice can come, even if they didn't tell Megameetup they were coming, and if Megameetup isn't hosting this then someone else
2Algon7h
You can't just say that and not elaborate!

Attendee: knock knock Hey, is the organizer in there?

Me: Yeah, what's up?

Attendee: The fire department is here, and we think an attendee just left in an ambulance but we're not sure who or why.

Me: . . . I'll be right out.

And that's the most stressful thing that's ever happened to me as an event organizer.

2Screwtape14h
A history of the NYC Rationalist Megameetup is in my drafts. Someday I hope to finish it, ideally around when I announce 2024's iteration.
