I made the grave error of framing this post in a way that invites a definition debate. While we're all familiar with the idea that a definition debate is really a debate over natural categories, which is a perfectly fine discussion to have, the discussion here has suffered because several people came in with competing categories.
I strongly endorse Ben's post, and will edit the top post to incorporate it.
definitional gimbal lock
I really like this phrase. :)
Mostly agreed with what you say about the word "capitalism." But with the NYSE example, I think it would be natural to say that the company did something not particularly capitalist. Is the CCP-owned Air China a capitalist entity? It's certainly less capitalist than Southwest.
I think there are at least two ways meanings can be combined. The easy one is words with multiple meanings. For example, "capitalist" has two meanings: someone who believes in free markets, and someone who owns a lot of ca...
I had the movie version in my mind, where the disbelieving parent comes around on seeing the kid's success (cf. October Sky, Billy Elliot). I myself felt a version: my parents were very against me applying for the Thiel Fellowship, up until it became clear that I might (and did) win.
A gap in the proposed definition of exploitation is that it assumes some natural starting point of negotiation, and only evaluates divergence from that natural starting point.
In the landlord case, fair-market value of rent is a natural starting point, and landlords don't have enough of a superior negotiating position to force rent upwards. (If they did by means of supply scarcity, then that higher point would definitionally be the new FMV.) Ergo, no exploitation.
On the other hand, if the landlord tried to increase the rent on a renewing tenant much m...
I really like this thinking. I don't necessarily like the assignment of labels to concepts in this post. E.g.: I use capitalism in a manner mutually exclusive with slave labor because it requires self-ownership. And I don't think a definition of "exploitation" should require a strategic element; I would say that not allowing an employee to read mystery novels when customers aren't around is exploitative. But this idea of using an asymmetry of power to deepen the asymmetry is a clearly useful concept.
My intended meaning of the wording is that the "infliction" is relative to a more Pareto-optimal trade. E.g.: in the ultimatum game, us splitting a dollar with 99 cents to me and 1 cent to you is a positive-benefit trade, but is still inflicting a cost if you assume the negotiation begins at 50/50.
The idea of the subtrade is an interesting thought, but I think any trade needs to be considered an atomic agreement. E.g.: while I might physically hand the convenience store clerk a dollar before they give me the candy bar, it can't be broken down into two trade...
I see. So the maximization is important to the definition. I think then, under this definition, using Villiam's pie example from another thread, the person taking 90% of the pie would not be exploiting the other person if he knew they could survive with 9%.
I think this definition would also say that a McDonald's employee who puts me into a hard upsell is exploiting me so long as they never physically handle my credit card and don't have the capacity to trap me or otherwise do more than upselling. But if they handle my credit card and don't steal the numb...
An interesting proposal that I'll have to think about. I'm still uneasy with throwing lying in with uses of power.
Also, this one clearly does include the parenting example I gave, and is strictly broader than my proposed definition.
We have different intuitions about this term then.
I was very surprised after posting this that some commenters considered things like wage theft and outright fraud to be exploitation, whereas I consider such illegal behavior to be in a different category.
In the pie example, the obvious answer is that giving the other person only 10% of the pie prevents them from gaining the security to walk away next time I present the same offer.
Can you give some examples of something contained by my definition which you think shouldn't be considered exploitation?
What would this look like for the example of the parent, the girlfriend, or the yelling boss?
When I was a kid and 9/11 happened, some people online were talking about the effect on the stock market. My mom told me that the stock exchange was down the street from the WTC and not damaged, so I thought the people on the Internet were all wrong.
warns not to give it too much credit – if you ask how to ‘fix the error’ and the error is the timeout, it’s going to try and remove the timeout. I would counter that no, that’s exactly the point.
I think you misunderstand. In the AI Scientist paper, they said that it was "clever" in choosing to remove the timeout. What I meant in writing that was: I think that's very not clever. Still dangerous.
I'm still a little confused. The idea that "the better you can do something yourself, the less valuable it is to do it yourself" is pretty paradoxical. But isn't "the better you can do something yourself, the less downside there is in doing it yourself instead of outsourcing" exactly what you'd expect?
Hmmm? How does this support the point that "the better you can do something yourself, the less valuable it is to do it yourself."
I went from being at a normal level of hard-working (for a high schooler under the college admissions pressure-cooker) to what most would consider an insane level.
The first trigger was going to a summer program after my junior year where I met people like @jsteinhardt who were much smarter and more accomplished than me. That cued a senior year of learning advanced math very quickly to try to catch up.
Then I didn't get into my college of choice and got a giant chip on my shoulder. I constantly felt I had to be accomplishing more, and merely outdoing my peer...
In Chinese, the words for "to let someone do something" and "to make someone do something" are the same: 让 (ràng). My partner often confuses the two. The model did not get this one even after several promptings, until I asked about the specific word.
Then I asked why both a Swede and a Dane I know say "increased with 20%" instead of "increased by 20%." It guessed that it had something to do with prepositions, but did not volunteer the preposition in question. (Google Translate answered this; "increased by 20%" translates to "ökade med 20%," and "med" common...
More discussion here: https://www.lesswrong.com/posts/gW34iJsyXKHLYptby/ai-capabilities-vs-ai-products
You're probably safe so long as you restrict distribution to the minimum group with an interest. There is conditional privilege if the sender has a shared interest with the recipient. It can be lost through overpublication, malice, or reliance on rumors.
A possible solution against libel is to provide an unspecific accusation, something like "I say that X is seriously a bad person and should be avoided, but I refuse to provide any more details; you have to either trust my judgment, or take the risk
FYI, this doesn't actually work. https://www.virginiadefamationlawyer.com/implied-undisclosed-facts-as-basis-for-defamation-claim/
It does not take luck to find someone who can help you stare into the abyss. Anyone can do it.
It's pretty simple: Get a life coach.
That is, helping people identify, face, and reason through difficult decisions is a core part of what life coaches do. And just about all the questions that Ben cobbled together at the end (maybe not "best argument for" — I don't like that one) can be found in a single place: coaching training. All are commonly used by coaches in routine work.
And there are a lot more tools than the handful Ben found. These questi...
Quote for you summarizing this post:
“A person's success in life can usually be measured by the number of uncomfortable conversations he or she is willing to have.”
— Tim Ferriss
This post is the culmination of years of thinking that produced a dramatic shift in my worldview. It is now a big part of my life and business philosophy, and I've shown it to friends many times when explaining my thinking. It's influenced me to attempt my own bike repair, patch my own clothes, and write web-crawlers to avoid paying for expensive API access. (The latter was a bust.)
I think this post highlights using rationality to analyze daily life in a manner much deeper than you can find outside of LessWrong. It's in the spirit of the 2012 post "Rational Toothpast...
Still waiting.
When will you send out the link for tomorrow?
https://galciv3.fandom.com/wiki/The_Galactic_Civilizations_Story#Humanity_and_Hyperdrive
I've hired (short-term) programmers to assist on my research several times. Each time, I've paid from my own money. Even assuming I could have used grant money, it would have been too difficult. And, long story short, there was no good option that involved giving funds to my lab so they could do the hire properly.
Grad students are training to become independent researchers. They have the jobs of conducting research (which in most fields is mostly not coding), giving presentations, writing, making figures, reading papers, and taking and teaching classes. Their career and skillset is rarely aligned with long-term maintenance of a software project; usually, they'd be sacrificing their career to build tools for the lab.
This is a great example of the lessons in https://www.lesswrong.com/posts/tTWL6rkfEuQN9ivxj/leaky-delegation-you-are-not-a-commodity
Really appreciate this informative and well-written answer. Nice to hear from someone on the ground about SELinux instead of the NSA's own presentations.
I phrased my question about time and space badly. I was interested in proving the time and space behavior of the software "under scrutiny", not in the resource consumption of the verification systems themselves.
LOL!
I know a few people who have worked in this area. Jan Hoffman and Peng Gong have worked on automatically inferring complexity. Tristan Knoth has gone the other way, including resource bounds in specs for program synthesis. There's a guy who did an MIT Ph.D. on building an operating system in Go, and as part of it needed an analyzer...
I must disagree with the first claim. Defense-in-depth is very much a thing in cybersecurity. The whole "attack surface" idea assumes that, if you compromise any application, you can take over an entire machine or network of machines. That is still sometimes true, but continually less so. Think it's game over if you get root on a machine? Not if it's running SELinux.
Hey, can I ask an almost unrelated question that you're free to ignore or answer as a private message OR answer here? How good is formal verification for time and space these days?
I...
Hmm. It looks like my reply notifications are getting batched now. I didn't realize I'd set that up.
I've reordered some of this, because the latter parts get into the weeds a lot and may not be worth reading. I advise that anybody who gets bored stop reading there, because it's probably not going to get more interesting.
For background, I haven't been doing security hands-on for the last few years, but I did it full time for about 25 years before that, and I still watch the space. I started out long enough ago that "cyber" sets my teeth on edge...
I agree with just about everything you said, as well as several more criticisms along those lines that you didn't say. I am probably more familiar with these issues than anyone else on this website, with the possible exception of Jason Gross.
Now, suppose we can magic all that away. How much then will this reduce AI risk?
I don't see what this parable has to do with Bayesianism or Frequentism.
I thought this was going to be some kind of trap or joke around how "probability of belief in Bayesianism" is a nonsense question in Frequentism.
I do not. I mostly know of this field from conversations with people in my lab who work in this area, including Osbert Bastani. (I'm more on the pure programming-languages side, not an AI guy.) Those conversations kinda died during COVID when no-one was going into the office, plus the people working in this area moved on to their faculty positions.
I think being able to backtrace through a tree counts as victory, at least in comparison to neural nets. You can make a similar criticism about any large software system.
You're right about the random forest there;...
I think you're accusing people who advocate this line of idle speculation, but I see this post as idle speculation. Any particular systems you have in mind when making this claim?
I'm a program synthesis researcher, and I have multiple specific examples of logical or structured alternatives to deep learning
Here's Osbert Bastani's work approximating neural nets with decision trees, https://arxiv.org/pdf/1705.08504.pdf . Would you like to tell me this is not more interpretable over the neural net it was generated from?
Or how abou...
Interesting. Do you know if such approaches have scaled to match current SOTA models? My guess would be that, if you had a decision tree that approximated, e.g., GPT-3, it wouldn't be very interpretable either.
Of course, you could look at any given decision and backtrace it through the tree, but I think it would still be very difficult to, say, predict what the tree will do in novel circumstances without actually running the tree. And you’d have next to no idea what the tree would do in something like a chain of thought style execution where the tree so...
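To make "backtrace it through the tree" concrete, here's a toy sketch of my own (not from Bastani's paper; the representation is made up for illustration): a hand-rolled decision tree where every classification comes back with the exact list of comparisons that produced it, something a neural net can't give you.

```python
# A toy decision tree as nested tuples: (feature_index, threshold, left, right);
# leaves are plain class labels.
tree = (0, 5.0,
        (1, 2.0, "A", "B"),
        "C")

def classify_with_trace(tree, x):
    """Return (label, trace); trace records every comparison on the path."""
    trace = []
    node = tree
    while isinstance(node, tuple):
        feat, thresh, left, right = node
        went_left = x[feat] <= thresh
        trace.append((feat, x[feat], thresh, "<=" if went_left else ">"))
        node = left if went_left else right
    return node, trace

label, trace = classify_with_trace(tree, [3.0, 4.0])
# label is "B"; trace shows x[0]=3.0 <= 5.0, then x[1]=4.0 > 2.0
```

The trace is a complete, human-checkable proof of why this input got this label, which is the sense in which tree backtracing "counts as victory" over a net, even if predicting the tree's behavior on novel inputs is still hard.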
I'm a certified life coach, and several of these are questions found in life coaching.
E.g.:
Is there something you could do about that problem in the next five minutes?
Feeling stuck sucks. Have you spent five minutes with a timer generating options?
What's the twenty minute / minimum viable product version of this overwhelming-feeling thing?
These are all part of a broader technique of breaking down a problem. (I can probably find a name for it in my book.) E.g.: Someone comes in saying they're really bad at X, and you ask them to actually rate their sk...
I realize now that this expressed as a DAG looks identical to precommitment.
Except, I also think it's a faithful representation of the typical Newcomb scenario.
Paradox only arises if you can say "I am a two-boxer" (by picking up two boxes) while you were predicted to be a one-boxer. This can only happen if there are multiple nodes for two-boxing set to different values.
But really, this is a problem of the kind solved by superspecs in my Onward! paper. There is a constraint that the prediction of two-boxing must be the same as the actual two-boxing. Traditi...
Okay, I see how that technique of breaking circularity in the model looks like precommitment.
I still don't see what this has to do with counterfactuals though.
I don't understand what counterfactuals have to do with Newcomb's problem. You decide either "I am a one-boxer" or "I am a two-boxer," the boxes get filled according to a rule, and then you pick deterministically according to a rule. It's all forward reasoning; it's just a bit weird because the action in question happens way before you are faced with the boxes. I don't see any updating on a factual world to infer outcomes in a counterfactual world.
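A minimal sketch of that "all forward reasoning" claim (the framing as code is mine; the payoffs are the standard ones):

```python
def newcomb_payoff(policy):
    """Forward-simulate Newcomb's problem: the predictor fills the boxes
    from the agent's type, then the agent acts on that same type.
    Everything flows forward; no counterfactual step anywhere."""
    prediction = policy                       # a perfect predictor just reads the type
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque if policy == "one-box" else opaque + transparent

# newcomb_payoff("one-box") -> 1_000_000
# newcomb_payoff("two-box") -> 1_000
```

A paradox would require the same decision node to take two different values (predicted one-boxer, actual two-boxer); with a single node there's nothing to update on.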
"Prediction" in this context is a synonym for conditioning: P(y | x) is defined as P(x, y) / P(x).
If ...
While I can see this working in theory, in practice it's more complicated, as it isn't obvious from immediate inspection to what extent an argument is or isn't dependent on counterfactuals. I mean, counterfactuals are everywhere! Part of the problem is that the clearest explanation of such a scheme would likely make use of counterfactuals, even if it were later shown that these aren't necessary.
...
I'm having a little trouble understanding the question. I think you may be thinking of either philosophical abduction/induction or logical abduction/induction.
Abduction in this article is just computing P(y | x) when x is a causal descendant of y. It's not conceptually different from any other kind of conditioning.
In a different context, I can say that I'm fond of Isil Dillig's thesis work on an abductive SAT solver and its application to program verification, but that's very unrelated.
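A minimal sketch of the abduction-as-conditioning point (the numbers are arbitrary, chosen only for illustration): with a cause y and its causal descendant x, computing P(y | x) is just ordinary conditioning over the joint distribution.

```python
# Causal model: y (cause) -> x (effect).
P_y = {True: 0.3, False: 0.7}            # prior over the cause
P_x_given_y = {True: 0.9, False: 0.2}    # likelihood of the effect given the cause

def abduce(x_obs):
    """P(y | x): condition the joint distribution on the observed effect."""
    joint = {y: P_y[y] * (P_x_given_y[y] if x_obs else 1 - P_x_given_y[y])
             for y in (True, False)}
    total = sum(joint.values())
    return {y: p / total for y, p in joint.items()}

posterior = abduce(True)   # P(y=True | x=True) = 0.27 / 0.41, about 0.66
```

Nothing here is conceptually different from conditioning in any other direction; "abduction" just names the case where the evidence is downstream of the variable being inferred.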
I'm not surprised by this reaction, seeing as I jumped straight into banging it out rather than first checking that I understood your confusion. And I still don't understand your confusion, so my best hope was to give a very clear, computational explanation of counterfactuals with no circularity, in hopes that it helps.
Anyway, let's have some back and forth right here. I'm having trouble teasing apart the different threads of thought that I'm reading.
...After intervening on our decision node do we just project forward as per Causal Decision Theory or do we w
Oh hey, I already have slides for this.
Here you go: https://www.lesswrong.com/posts/vuvS2nkxn3ftyZSjz/what-is-a-counterfactual-an-elementary-introduction-to-the
I took the approach: if I very clearly explain what counterfactuals are and how to compute them, then it will be plain that there is no circularity. I attack the question more directly in a later paragraph, when I explain how counterfactuals can be implemented in terms of two simpler operations: prediction and intervention. And that's exactly how it is implemented in our causal probabilis...
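That recipe can be sketched on a tiny structural model (my own toy example; the linked post's construction may differ): conditioning on the evidence pins down the exogenous noise, intervening overrides one mechanism, and prediction runs the model forward again.

```python
def counterfactual_y(obs_x, obs_y, new_x):
    """Counterfactual on the structural model  X := U1,  Y := X XOR U2  (booleans).
    1. Condition on the evidence: infer the exogenous noise U2.
    2. Intervene: replace X's mechanism with X := new_x.
    3. Predict: recompute Y forward through the modified model."""
    u2 = obs_x ^ obs_y   # conditioning: U2 = X XOR Y
    x = new_x            # intervention on X
    return x ^ u2        # forward prediction of Y

# Observed X=True, Y=True pins down U2=False;
# had X been False, Y would have been False.
```

No step here consults a counterfactual to compute a counterfactual, which is the non-circularity point.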
"Many thousands of date problems were found in commercial data processing systems and corrected. (The task was huge – to handle the work just for General Motors in Europe, Deloitte had to hire an aircraft hangar and local hotels to house the army of consultants, and buy hundreds of PCs)."
Sounds like more than a few weeks.
Thanks; fixed both.
Was it founded by the Evil Twin of Peter Singer?
https://www.smbc-comics.com/comic/ev
Define "related"?
Stories of wishes gone awry, like King Midas, are the original example.
Am I the only one who finds parts of the early story rather dystopian? He sounds like a puppet being pulled around by the AI, gradually losing his ability to have his own thoughts and conversations. (That part's not written, but it's the inevitable result of asking the AI every time he encounters struggle.)