ryan_b

Sequences

National Institute of Standards and Technology: AI Standards

Comments

Answer by ryan_b

Welcome!

The short and informal version is that epistemics covers all the stuff surrounding the direct claims. Things like credence levels, confidence intervals, probability estimates, etc. are the clearest indicators. It also includes questions like where the information came from, how it is combined with other information, what other information we would like to have but don't, and so on.

The most popular way you'll see this expressed on LessWrong is through Bayesian probability estimates and a description of the model (which is to say the writer's beliefs about what causes what).

The epistemic status statement you see at the top of a lot of posts is there to set expectations. It lets the OP write out complete thoughts without implying that they have demonstrated full epistemic rigor, or even that they endorse the thought per se.

ryan_b

May I throw geometry's hat into the ring? If you consider things like complex numbers and quaternions, or even vectors, what we have are two-or-more-dimensional numbers.

I propose that units are a generalization of dimension beyond spatial dimensions, and therefore geometry is their progenitor. 

It's a mathematical Maury Povich situation.
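
To make the proposal concrete, here is a toy sketch (purely illustrative, nothing rigorous): store a quantity as a magnitude plus a vector of unit exponents, and multiplication simply adds those exponent vectors, the same bookkeeping you would do for spatial dimensions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Quantity:
    """A magnitude plus a vector of unit exponents, ordered (length, time)."""
    value: float
    dims: Tuple[int, int]

    def __mul__(self, other: "Quantity") -> "Quantity":
        # Multiplying quantities multiplies magnitudes and adds exponent
        # vectors, the same bookkeeping as composing spatial dimensions.
        return Quantity(
            self.value * other.value,
            (self.dims[0] + other.dims[0], self.dims[1] + other.dims[1]),
        )

    def __add__(self, other: "Quantity") -> "Quantity":
        # Addition only works along the same direction in "unit space".
        if self.dims != other.dims:
            raise ValueError("cannot add quantities with different units")
        return Quantity(self.value + other.value, self.dims)


metres = Quantity(3.0, (1, 0))    # 3 m
seconds = Quantity(2.0, (0, 1))   # 2 s
print(metres * seconds)           # Quantity(value=6.0, dims=(1, 1))
# metres + seconds                # would raise: different directions
```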

ryan_b

I feel like this is mostly an artifact of notation. The thing that is not allowed with addition or subtraction is simplifying to a single term; otherwise it is fine. Consider:

10x + 5y - 5x - 10y = 10x - 5x + 5y - 10y = 5x - 5y

So, everyone reasons to themselves, what we have here is two numbers. But hark: with just a little more information, we can see more clearly that we are looking at a two-dimensional number:

5x - 5y = 5

5x = 5y + 5

5x - 5 = 5y

x - 1 = y

y = x - 1

Which is to say, a line.

This is what is happening with vectors, complex numbers, quaternions, etc.
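
Here is the same point as a purely illustrative snippet, using Python's built-in complex type: like terms combine, but nothing lets you collapse the two components into one number.

```python
# Illustration only: the same "combine like terms, but never collapse the
# dimensions into each other" rule, using Python's built-in complex numbers.
a = 10 + 5j       # plays the role of 10x + 5y
b = -5 - 10j      # plays the role of -5x - 10y
print(a + b)      # (5-5j): the two components add separately

# The same bookkeeping with plain tuples as 2-D vectors:
x = (10, 5)
y = (-5, -10)
print(tuple(p + q for p, q in zip(x, y)))   # (5, -5)
```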

ryan_b

The post anchors on the Christiano vs Eliezer models of takeoff, but am I right that the goal more generally is to disentangle the shape of progress from the timeline for progress? I strongly support disentangling dimensions of the problem. I have spoken against using p(doom) for similar reasons.

ryan_b

Because that method throws out everything we know about prices. People consume more of something the lower its price is, and even more so when it is free: consider the meme about all the games that have never been played in people's Steam libraries because they were bought in bundles or on sale days. There are ~zero branches of history where as many units sell at retail as are pirated.

A better-but-still-generous method would be to project the increased future sales under the lower price curve and claim all of that as damages, reasoning that the excess supply deprived the company of the opportunity to make those sales later.
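
As a toy sketch of the two methods (every number is invented for illustration; only the shape of the calculation matters):

```python
# Toy comparison of the two damage estimates. Every number below is
# hypothetical; the point is the structure, not the figures.
downloads = 1_000_000                 # pirated copies (hypothetical)
retail_price = 60.0                   # launch price per unit (hypothetical)

# Naive method: every download counts as a lost full-price sale.
naive_damages = downloads * retail_price

# Projection method: only some fraction of downloaders would ever have
# bought, and many of those only later, at a much lower price.
buy_at_retail_rate = 0.05             # assumed conversion rate
buy_on_sale_rate = 0.20               # assumed conversion rate
sale_price = 15.0                     # assumed discounted price

projected_damages = downloads * (
    buy_at_retail_rate * retail_price + buy_on_sale_rate * sale_price
)

print(f"naive:     ${naive_damages:,.0f}")      # $60,000,000
print(f"projected: ${projected_damages:,.0f}")  # $6,000,000
```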

ryan_b

This is not an answer, but I register a guess: the number relies on claims about piracy, which is to say illegal downloads of music, movies, videogames, and so on. The problem is that the conventional numbers for this are utter bunk, because the default calculation is to take the number of downloads, multiply it by the retail price, and call that the cost.

This would be how they get the cost of cybercrime to significantly exceed the value of the software industry: they can take something like the whole value of the cybersecurity industry, add the better-measured losses from finance and crypto, and then pile the bunk piracy numbers from the entertainment industry on top.

ryan_b

This feels like a bigger setback than the generic case of good laws failing to pass.

What I am thinking about currently is momentum, which is surprisingly important to the legislative process. There are two dimensions that make me sad here:

  1. There might not be another try. It is extremely common for bills to disappear or get stuck in limbo after being rejected in this way. The kinds of bills that keep reappearing until they succeed are those with a dedicated and influential special interest behind them, which I don't think AI safety has.
  2. There won't be any mimicry. If SB 1047 had passed, it would have been a model for future regulation. Now it won't be, except where that regulation is being driven by the same people and orgs behind SB 1047.

I worry that the failure of the bill will go as far as to discredit the approaches it used, and will leave more space for more traditional laws which are burdensome, overly specific, and designed with winners and losers in mind.

We'll have to see how the people behind SB 1047 respond to the setback.

ryan_b

As for OpenAI dropping the mask: I devoted essentially zero effort to predicting this, though my complete lack of surprise implies it is consistent with the information I already had. Even so:

Shit.

ryan_b

I wonder how the consequences to reputation will play out after the fact.

  • If there is a first launch, will the general who triggered it be downvoted to oblivion whenever they post afterward for a period of time?
  • What if it looks like they were ultimately deceived by a sensor error, and believed themselves to be retaliating?
  • If there is mutual destruction, will the general who triggered the retaliatory launch also be heavily downvoted?
  • Less than, more than, or about the same as the first strike general?
  • Would citizens who gained karma in a successful first strike condemn their 'victorious' generals at the same rate as everyone else?
  • Should we call this pattern of behavior, however it turns out, the Judgment of History?

ryan_b

It does, if anything, seem almost backwards: getting nuked means losing everything, and successfully nuking means gaining much but not all.

However, that makes the game theory super easy to solve, and doesn't capture the opposing team dynamics very well for gaming purposes.
