Nikita Sokolsky

If you have a Substack blog, consider using Datawrapper embeds for inserting tables or charts. For some reason Substack has been somewhat hiding this cool feature: it's not visible in the menu and poorly advertised, but it lets you create highly customizable charts/tables for your Substack posts. It's also supported on WordPress, or as an HTML embed if you have a self-hosted blog/website. I wrote a brief guide on how to use it on my blog, and you can see cool Datawrapper embeds in Nate Silver's Substack.

Sadly, Datawrapper isn't supported on LessWrong (yet), but LW already has native support for table embeds, so it's less relevant here.

Whatever happened to Lumina, btw? Was anyone able to see any changes in their dental health so far?

Side note: The original 'chickenpox bomber' illustration (using the McDonnell Douglas F-4 Phantom) first appeared in Robert Ball's 1985 book "The Fundamentals of Aircraft Combat Survivability Analysis and Design", later refined in the 2nd edition from 2003. He never mentions Wald, Wallis, or the SRG anywhere in the book, so I'm likewise convinced that the "mathematician outsmarts the military" story is pretty much BS.

Do you think this will have any impact on OpenAI's future revenues / ability to deliver frontier-level models?

Here’s the corrected link: https://pastebin.com/B824Hk8J

Are you running this from an EC2 instance or some other cloud provider? They might just have a blocklist of IPs belonging to data centers.

Sorry for not being clear. My question was whether LW really likes the nanobot story because we think it might happen within our own lifetimes. If we knew for a fact that human-destroying-nanobots would take another 100 years to develop, would discussing them still be just as interesting?

Side note: I don't think the "sci-fi bias" concept is super coherent in my head. I wrote this post as best I could, but I fully acknowledge that it's not fully fleshed out.

Hm, are you sure they're actually that protective against scrapers? I ran a quick script and was able to extract all 548 unique pages just fine: https://pastebin.com/B824Hk8J. The final output was:

Status codes encountered:
200: 548
404: 20

I reran it two more times and it still worked. I'm using a regular residential IP address, no fancy proxies. Maybe you're just missing the code to refresh the cookies (included in my script)? I'm probably missing something, of course; just curious why the scraping seems to be easy enough from my machine.
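The refresh-on-failure pattern I mean looks roughly like this (a minimal sketch, not the actual pastebin script; `fetch` and `refresh_cookies` are hypothetical stand-ins for whatever HTTP client and cookie-refresh call you use):

```python
from collections import Counter

def scrape_all(urls, fetch, refresh_cookies, max_retries=1):
    """Fetch each URL, refreshing session cookies and retrying once
    whenever the server rejects the request (e.g. with a 403)."""
    statuses = Counter()
    for url in urls:
        status = None
        for attempt in range(max_retries + 1):
            status, _body = fetch(url)
            if status == 403 and attempt < max_retries:
                refresh_cookies()  # grab a fresh cookie jar, then retry
                continue
            break
        statuses[status] += 1  # tally like the "Status codes encountered" output
    return statuses
```

In a real run you'd pass a `requests.Session`-backed fetcher, and `refresh_cookies` would re-hit whatever endpoint hands out the session cookie; the point is just that a stale cookie shows up as a retryable rejection, not a hard block.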

They could, but if you're managing your firewall it's easier to apply a blanket rule than to try to divide things by subdomain, unless you have a good reason to do otherwise. I wouldn't assume malicious intent.

They do have a good reason to be wary of scrapers, as they provide a free version of ChatGPT; I'm guessing they just configured it over their entire domain rather than restricting it to the chat subdomain.
