All of Zolmeister's Comments + Replies

I was referring to their (free) DDoS protection service, rather than their CDN services (also free). In addition to their automated system, you can manually enable an "under-attack" mode that aggressively captchas requests.

Setup is simply pointing DNS name-servers at Cloudflare. Caching HTML pages for logged out (i.e. cookie-less) users is a trivial config ("cache-everything").
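
For illustration, such a page rule looks roughly like this (field names as I recall Cloudflare's dashboard; a sketch, not exact configuration):

URL pattern: example.com/*
Cache Level: Cache Everything
Bypass Cache on Cookie: session* (available on higher-tier plans)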

habryka110

Oh, interesting. I had not properly realized you could unbundle these. I am hesitant to add a hop to each request, but I do sure expect Cloudflare to be fast. I'll look into it, and thanks for the recommendation.

2ProgramCrafter
It's a solution! However, it comes with its own downsides. For instance, Codeforces users ranted about Cloudflare usage for a while, with the following issues (mapped to LessWrong) highlighted:

  • The purpose of an API is defeated: even the API endpoints on the same domain are restricted, which prevents users from requesting posts via GraphQL. In particular, ReviewBot will be down (or have to be hosted in LW internal infrastructure).
  • In China, Cloudflare is a big speed bump.
  • Cloudflare-protected sites are reported to randomly lag a lot.

> I had been assuming that this is a server problem, but from talking to some people it seems like this is an issue with differential treatment of who is accessing CF.

Lack of interaction smoothness might be really noticeable for new users, compared to the current state.
4habryka
Yeah, we considered setting up a Cloudflare proxy for a while, but at least for logged-in users, LW is actually a really quite dynamic and personalized website, and not a great fit for it (I do think it would be nice to have a logged-out version of pages available on a Cloudflare proxy somehow).

Along the same lines, I found this analogy by concrete example exceptionally elucidative.

While merely anti-bacterial, Nano Silver Fluoride looks promising. (Metallic silver applied to teeth once a year to prevent cavities).

Yudkowsky has written about The Ultimatum Game. It has been referenced here 1 2 as well.

When somebody offers you a 7:5 split, instead of the 6:6 split that would be fair, you should accept their offer with slightly less than 6/7 probability. Their expected value from offering you 7:5, in this case, is 7 * slightly less than 6/7, or slightly less than 6.
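
A minimal sketch of that arithmetic (pie of 12 jelly chips, fair split 6:6; the code and numbers are my illustration, not from the original):

# Accepting an unfair offer with probability fair/theirs caps the
# proposer's expected value at exactly the fair share; accepting with
# "slightly less" makes demanding more than fair strictly unprofitable.
fair, pie = 6, 12
for theirs in range(7, pie + 1):      # proposer demands 7..12
    p_accept = fair / theirs          # e.g. 6/7 for a 7:5 split
    print(f"{theirs}:{pie - theirs} -> accept w.p. {p_accept:.3f}, proposer EV = {theirs * p_accept:.2f}")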

1Isaac King
This isn't the ultimatum game though, since it's symmetric.

Sure, but it does not preclude it. Moreover, if the costs of the actions are not borne by the altruist (e.g. by defrauding customers, or extortion), I would not consider it altruism.

In this sense, altruism is a categorization tag placed on actions.

I do see how you might add a second, deontological definition ('a belief system held by altruists'), but I wouldn't. From the post, "Humane" or "Inner Goodness" seem more apt in exploring these ideas.

I do not see the contradiction. Could you elaborate?

2Yoav Ravid
Cause "according to the criterion of others' welfare" doesn't require "at ones own expense".
Answer by Zolmeister*61

John Carmack

  • 55-60% chance there will be "signs of life" in 2030 (4:06:20)
  • "When we've got our learning disabled toddler, we should really start talking about the safety and ethics issues, but probably not before then" (4:35:36)
  • These things will take thousands of GPUs, and will be data-center bound
    • "The fast takeoff ones are clearly nonsense because you just can't open TCP connections above a certain rate" (4:36:40)

Broadly, he predicts AGI to be animalistic ("learning disabled toddler"), rather than a consequentialist laser beam, or simulator.

1Optimization Process
Approved! Will pay bounty.

I found this section, along with dath ilani Governance, and SCIENCE! particularly brilliant.

This concept is introduced in Book 1 as the solution to the Ultimatum Game, and describes fairness as Shapley value.

When somebody offers you a 7:5 split, instead of the 6:6 split that would be fair, you should accept their offer with slightly less than 6/7 probability. Their expected value from offering you 7:5, in this case, is 7 * slightly less than 6/7, or slightly less than 6.

_

Once you've arrived at a notion of a 'fair price' in some one-time trading situation where the seller sets a price and the buyer decides whether to accept, the seller does...
1Zolmeister
I found this section, along with dath ilani Governance, and SCIENCE! particularly brilliant.

Eliezer: What do you want the system to do?

Bob: I want the system to do what it thinks I should want it to do.

Eliezer: The Hidden Complexity of Wishes

1[comment deleted]
Answer by Zolmeister110

Gwern has a fantastic overview of time-lock encryption methods.

A compute-hard real-time in-browser solution that doesn't rely on exotic encryption appears infeasible. (You'd need a GPU, and hours/days worth of compute for years of locking). For LW, perhaps threshold aggregate time-lock encryption would suffice (though vulnerable to collusion/bribery attacks, as noted by Gwern).

I agree with Quintin Pope, a public hash is simple and effective.
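
For concreteness, a minimal commit-reveal sketch of that public-hash idea (the message and scheme details are my illustration):

import hashlib, secrets

# Commit: publish only the hash of (salt + message).
message = b"prediction to be revealed later"
salt = secrets.token_bytes(16)   # random salt prevents brute-forcing short messages
commitment = hashlib.sha256(salt + message).hexdigest()
print("public commitment:", commitment)

# Reveal: later, publish salt and message; anyone can re-hash and verify.
assert hashlib.sha256(salt + message).hexdigest() == commitment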

Vitalik's Optimism retro-funding post mentions a few instances where secret ballots are used today, and which could arguably be improved by these cryptographic primitives:

  • The Israeli Knesset uses secret votes to elect the president and a few other officials
  • The Italian parliament has used secret votes in a variety of contexts. In the 19th century, it was considered an important way to protect parliament votes from interference by a monarchy.
  • Discussions in US parliaments were less transparent before 1970, and some researchers argue that the switch to mo...
2ryan_b
I have not read this one, thank you for the link!

If we cannot prove who anyone actually voted for, we can't prove who actually won at all.

Using zero-knowledge proofs it is possible to prove that votes were counted correctly, without revealing who anyone voted for. See MACI [1], which additionally provides inability to prove your own vote to a third party.

2ryan_b
From the MACI link, my objection is a generalized version of this: This is the level where trust is a problem in most real elections, not the voter level. I also note this detail: Emphasis mine. In total this looks like it roughly says "Assuming we trust everyone involved, we can eliminate some of the incentive to breach that trust by eliminating certain information." That is a cool result on the technical merits, but doesn't seem to advance the pragmatic goal of finding a better voting system.

if the two agents are able to accurately predict each others' actions and reason using FDT, then it is possible for the two agents to cooperate

Couldn't you equally require QV participants to pre-commit to non-collusion?

6Bucky
I think this could work in small groups. However pre-commitment to non-collusion requires trust between the entire electorate, whereas collusion requires trust only between the two people who are colluding.
1leogao
I think the main difficulty here is that defining non-collusion might be a bit tricky, but it could probably work with some assumptions.

In The Case against Education: Why the Education System Is a Waste of Time and Money, Bryan Caplan uses Earth data to make the case that compulsory education does not significantly increase literacy.

My reading is that he claims compulsory education had little effect in Britain and the US, where literacy was already widespread.

When Britain first made education compulsory for 5-to-10-year-olds in 1880, over 95% of 15-year-olds were already literate. [1]

There's an interesting footnote where he references a paper on economic returns of compulsory educa...

Follow the white rabbit

The source makes explicit reference to refined starches:

c All foods are assumed to be in nutrient-dense forms; lean or low-fat and prepared with minimal added sugars; refined starches, saturated fat, or sodium

Though to be clear, I do not endorse the 'system' as proposed. I do not believe that it adequately reflects nuance in health effects of food consumption, nor do I believe it accurately represents modern food health science (where are their sources?).

For example, the hard-line stance against saturated fats is questionable [1] [2] [3]. Not explicitl...

Yes I count most (by GI) flour as equivalent to sugar [1]. As for keeping high GI carbs under 10%, I have insufficient information. To keep all carbs under 10% would be ketogenic, which while not specifically recommended (unless trying to lose weight), has shown interesting results in the literature [2].

2jefftk
It wouldn't make sense to change one half of the system without the other. If you don't think it is worth distinguishing sugar from other things with a high glycemic index, you should probably then have a higher limit for this combined category. I don't think it generally makes sense to put kids on a ketogenic diet for no reason. Additionally, aren't there reasons other than glycemic index to avoid sugar?

Pancakes contain significant quantities of carbohydrates (sugar), with glycemic index comparable to that of table sugar. Those pancakes look like they're closer to 3 sweets than 1 (sorry kids).

2jefftk
Sorry, are you saying that you want to count flour as equivalent to sugar? And keep calories from carbohydrates to under 10% of total calories? (These particular pancakes do not actually have very much flour. Much higher levels of egg and sour cream)

I think it balances prescribed burns with other methods of fire-suppression (fire-breaks, thinning), and incentivizes local coordination among neighbors.

Hold land-owners liable for fire-damage caused to their abutting neighbors.

2jefftk
I like this idea, but I think it still unreasonably disadvantages prescribed burns?

I recommend Ample (lifelong subscriber). It has high quality ingredients (no soy protein), fantastic macro ratios (5/30/65 - Ample K), and an exceptional founder.

Since time is the direction of increased entropy, this feels like it has some deep connection to the notion of agents as things that reduce entropy (only locally, obviously) to achieve their preferences

Reminded me of Utility Maximization = Description Length Minimization.

It's hard for me to credibly believe that this harm happened due to the algorithm, that no humans at Google were clearly aware of what was going on, when Googlers were being sent out to events to pitch to this market

Never attribute to malice that which is adequately explained by stupidity. It sounds like the fraud involved was extremely sophisticated, as it was hiding behind state negligence. Google now requires these advertisers to be licensed by a reputable third party.

The problem I see here isn't just that the Ads team gets paid for participation i...

You have not produced evidence that billboards are generally 'criminal mind control', only that they violate norms for shared spaces for people like Banksy. Ultimately this boils down to local political disagreement, rather than some clever ploy by The Advertisers to get into your brain.

You owe the companies nothing. Less than nothing, you especially don't owe them any courtesy. They owe you.

This is strictly true in the sense that advertisement is negative cost and negative value, but that is exactly why it is used as a tool for producing otherwise dif...

I was interested in her claim that the Bullet Cluster is evidence against dark matter.

The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded: that we see such a collision is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity.

Technically, the market I should make corresponds to what I think other people's probabilities are likely to be given they can see my market. I might give a wider market because only people that think they're getting a good deal will trade with me

Technically, market making is betting on price volatility by providing liquidity. To illustrate, I'll use a simple automated market maker.

Yes * No = Const

This means I will accept any trade of Yes/No tokens, so long as the product remains constant. Slippage is proportional to the quantity of tokens available. Pr...
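
To make the constant-product rule concrete, a minimal sketch (illustrative class and numbers, mine rather than the comment's):

class ConstantProductAMM:
    # Pools of Yes and No tokens with invariant yes * no == k.
    def __init__(self, yes, no):
        self.yes, self.no, self.k = yes, no, yes * no

    def price_yes(self):
        # Implied probability of Yes, from the pool ratio.
        return self.no / (self.yes + self.no)

    def buy_yes(self, no_in):
        # Deposit No tokens; receive Yes tokens, keeping yes * no == k.
        new_no = self.no + no_in
        new_yes = self.k / new_no
        out = self.yes - new_yes      # payout shrinks as the pool drains: slippage
        self.no, self.yes = new_no, new_yes
        return out

amm = ConstantProductAMM(yes=100, no=100)
print(amm.price_yes())  # 0.5
print(amm.buy_yes(10))  # ~9.09 Yes out
print(amm.price_yes())  # ~0.547; deeper pools mean less slippage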

Each of these functions takes ~30s to run, so it ends up being more efficient to put them in one job instead of multiple.

This is a perfect example of the AWS Batch API 'leaking' into your code. The whole point of a compute resource pool is that you don't have to think about how many jobs you create.
It sounds like you're using the wrong tool for the job (or have a misconfiguration; e.g., limit the batch template to 1 vCPU).

The benefit of the pass-through approach is that it uses language-level features to do the validation

You get language-level validatio...

2SatvikBeri
This is true. We're using AWS Batch because it's the best tool we could find for other jobs that actually do need hundreds/thousands of spot instances, and this particular job goes in the middle of those. If most of our jobs looked like this one, using Batch wouldn't make sense. You're right. In the explicit example, it makes more sense to have that sort of logic at the call site. 

The reason to be explicit is to be able to handle control flow.

def run_job(make_dataset1: bool, make_dataset2: bool):
    if make_dataset1 and make_dataset2:
        make_third_dataset()

If your jobs are independent, then they should be scheduled as such. This allows jobs to run in parallel.

def make_datasets_handler(job):
    # Schedule one job per dataset so independent work runs in parallel.
    for dataset in job.params.datasets:
        schedule_job('make_dataset', {'dataset': dataset})

def make_dataset_handler(job):
    name, params = job.params.dataset  # e.g. ('dataset1', {...})
    constructors.get(name)(**params)

Passing random params to fun...

2SatvikBeri
The datasets aren't dependent on each other, though some of them use the same input parameters.

Sure, there's some benefit to breaking down jobs even further. There's also overhead to spinning up workers. Each of these functions takes ~30s to run, so it ends up being more efficient to put them in one job instead of multiple.

So then you have to maintain check_dataset_params, which gives you a level of indirection. I don't think this is likely to be much less error-prone. The benefit of the pass-through approach is that it uses language-level features to do the validation – you simply check whether the parameters dict has keywords for each argument the function is expecting.

I agree in general, but I don't think there are particularly good ways to test this without introducing indirection.

The failure you're talking about here is tripping a try clause. I agree that exceptions aren't the best control flow – I would prefer if the pattern I'm talking about could be implemented with if statements – but it's not really a major failure, and (unfortunately) a pretty common pattern in Python.

I dropped out of high school. It's not a place for smart people.

Some highlights from my .vimrc

" Prevent data loss
set undofile
" Flush to disk every character (Note: disk intensive, e.g. makes large copy-pastes slow)
set updatecount=1

" Directory browsing usability
let g:netrw_liststyle = 3 " tree list view
let g:netrw_banner = 0

" Copy for X11
vnoremap <C-c> "cy <Bar> :call system('xclip -selection clipboard', @c)<CR><CR>

Also worth checking out CoC (a language-server client)

Twitch has recently begun experimenting with predictions for streamers using their channel-points currency.

The history of central banking (and large scale monetary policy generally), is fascinating. This lecture I found particularly enlightening (George Selgin): https://www.youtube.com/watch?v=JeIljifA8Ls

Noteworthy remarks:

  • Even before central banking, government regulation required banks to purchase junk assets (causing failures)
  • Nonuniform currency price slippage (when each bank issued its own notes) may have been < 1%
  • The National Bank Act taxed private bank notes at 10%, effectively destroying private currency circulation
    • National Bank notes were backed b...

I have removed the good/bad duality entirely, as I found it confusing.

https://www.lesswrong.com/posts/M2LWXsJxKS626QNEA/the-trouble-with-good

Puzzle 1:

score: 180

To use a more realistic example, it's hard for me to agree that a billionaire values their tenth vacation home more than a homeless person who is in danger of freezing in the winter.

I don't see "value" as a feeling. A freezing person might desire a warm fire, but their value of it is limited by what can be expressed.

That said, a person is a complex asset, and so the starving person might trade in their "apparent plight" (e.g. begging).

For example, the caring seller of the last sandwich might value alleviating "apparent plight" more than millions of shar...

Tap again directly on your prediction to remove it.

1WilliamKiely
Thanks! On mobile I had to zoom in to reliably tap directly on the bar, which I didn't try originally.

What if, instead of producing new things to value, people change the things they value? Perhaps increased homogeneity of value creates more efficient economies of scale.

If I understand correctly, then Rocket Pool fits the bill. It is a network (with mild centralization) that allows people to buy and sell shares of a validator pool. Risk is spread across the network in case of node failure.

Note on 1, the withdrawal key is separate from the validator key, such that one can validate but not withdraw.

Edit: Though I agree on 2, that in the long term the fees such networks will be able to charge will decline significantly.

There will not be a secondary market for Eth2 stakes

Actually, Coinbase just announced intent to deliver this secondary market. A tokenized Eth2 stake may then also be traded on DeFi exchanges. https://blog.coinbase.com/ethereum-2-0-staking-rewards-are-coming-soon-to-coinbase-a25d8ac622d5

1Annapurna
Thank you for sharing. It's great to see these services already commit to Eth2 staking. I wouldn't call this a secondary market though. This is more Coinbase becoming an intermediary between the customer and the Eth2 staking process. A secondary market would be being able to buy and sell validator nodes. I do not think this will happen, for two reasons:

1. Security. Every validator has a set of keys. A secondary market would imply the sharing of those keys.
2. Pricing. A secondary market would imply that the market value of validators (32 ETH) would fluctuate. Why would you sell a validator for less than 32 ETH (and conversely buy a validator for more than 32 ETH) if the consensus protocol will always allow you to set up a node (and in phase 1.5 and beyond, exit a node) for 32 ETH?

It's not 'free', just very very cheap. If food at the mall was as cheap to produce as ketchup, they would probably just make the food free to bring in business.

It's based on an observation of the continual efficient pricing pressure of competitive markets combined with technological innovation which reduces the real cost of food.

1TAG
Business in the sense of the person who is now in the mall spending money on something other than food which is not free. Everything being cheaper makes sense, some things being free makes sense, everything being free doesn't.

And when I go spend my money I impose a cost on the world

You impose no such cost, as those willing to exchange your money for their services do so profitably.

0Slider
It is possible to trade oneself bankrupt. You could have a worker who faced a situation where, if he doesn't lower his prices, all the customers will go to the competitors. If people were completely farsighted and could factor in everything, they would close shop immediately. But it isn't unheard of to run an activity a little while at a loss, or to run it by overworking oneself beyond one's capacity. There are uncertainties, and closing and opening a shop isn't necessarily frictionless. That friction can mask an area where activity is kept up for inertia's sake while actually being a bit of a burden. Doing all the risk adjustments and opportunity costs and everything correctly is cognitively very challenging. Trusting that everybody makes every decision correctly might be handy for mathematical ideal land, but for limited-cognition agents it just means that everybody has a story for how their deal makes sense. And just as nobody is the villain of their own story, everybody is the mastermind of their business. It doesn't mean that everybody is a saint or that all business is sustainable.

Is working good for the rest of society?

Suppose you do some work and earn $100. The question from the rest of society’s perspective is whether we got more benefit than the $100 we paid you.

We can get more than $100 if e.g. you spend your $100 on a Netflix subscription...

If you receive $100 for work, that means you have already provided at least $100 in value to society. That society might gain additional benefit from how you spend your money is merely coincidental.

3Slider
Gaining an item worth $100 and losing $100 in cash is value neutral. If you buy one banana for 10 million dollars, that doesn't make a banana worth 10 million to society.
4Donald Hobson
No, it means that there is at least one person prepared to pay $100 for the work. If you are manufacturing weapons that end up in the wrong hands, you might be doing quite a lot of harm to society overall. Your employer gains at least $100 in value. The externalities could be anything.
2Gurkenglas
You can cause more than a dollar of damage to society for every dollar you spend, say by hiring people to drive around throwing eggs at people's houses. Though I guess in total society is still better off by a hundred dollars compared to if you had received them via UBI.
5paulfchristiano
I'm not sure what "coincidental" means here. The question is how much more or less than $100 of value you create by working, and that seems to depend about as much on how you spend your money as it does on how you earn your money.
Answer by Zolmeister70

Digital Rights Enforcement Agencies

Given a desire for digital rights in the face of Crypto-Anarchy, market-based polycentric law might yield a solution.

David Friedman's model for market-law involves defense agencies and arbitrators who mediate between those agencies. The system is stable as a repeated game, wherein the cost of fighting other agencies is higher than the cost of peaceful negotiation.

In the digital world, a 'defense agency' might look like a professional hacking group. This group would maintain a public identity and offer its services to cl...
