Rich D00

It's not exactly the same thing, but I've been known to explain my lack of outrage/engagement/joiner-ism when it comes to things like this by saying: "I get why you disagree/why that's awful/whatever, but really, I just can't get that worked up just because somebody's wrong on the internet."

It's a little disingenuous, because the issue isn't really "someone being wrong on the internet", but rather that folks feel there's something wrong in the world, as reflected in a third party's opinion.  But since we all get our news and opinions delivered by the internet these days, this has often (but far from always) worked to shift the topic for me.

To be fair, sometimes it shifts the topic to a meta-discussion about whether "the internet" (or specific media/social-media apps) is "the problem", but even that I find to be a more interesting (and less unhealthy) discussion than dancing around the picked-over carcass of some absent opponent's opinion.

The snarkier response might be: "You disagree with someone on the internet?  You should blog about it!"  But that just piles on the negativity.


Obviously, YMMV.

Rich D93

Regarding category 2, and the specific example of "lawyer", I personally think that most of this category will go away fairly quickly.  Full disclosure: I'm a lawyer (mostly intellectual-property-related work), currently working for a big corporation.  So my impression is anecdotal, but not uninformed.

TL;DR - I think most lawyer-work is going away with AIs, pretty quickly.  Only creating policy and judging seem to be the kinds of things that people would pay other humans to do.  (For a while, anyway.)

I'd characterize legal work as falling into three main categories: 

  1. transactional work (creating contracts, rules, and systems for people to sign on to in order to advance a particular legal goal - protecting parties in a purchase, fairly sharing rights in something the parties work on together, creating rules for appropriate hiring practices, etc.);
  2. legal advocacy (representing clients in adversarial proceedings, e.g., in court, with an administrative agency, or in negotiations with another party); and
  3. legal risk-analysis (evaluating a current or proposed factual situation, determining what risks are presented by existing legal regimes (either law or contract), deciding on a course of action, and then handing an appropriate task to the transactional or adversarial folks to carry out).

So in short: paper-pushers; sharks; and judges.


Note that I would consider most political positions that lawyers usually fill to be in one of these categories.  For instance, legislators (who obviously need not be lawyers, but often are) do transactional legal work.  Courtroom judges are clearly the third group.  Prosecutors/DAs are sharks.  


Paper-pushers:

I see AI taking over this category almost immediately.  (It's already happening, IMO.)

A huge amount of this work is preparing appropriate documents to make everyone feel that their position is adequately protected.  LLMs are already superficially good at this, and the fact that there are books out there providing basic template forms for so many transactional legal matters suggests that this is an easily templatized category.
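
To give a sense of how mechanical much of this is, here's a toy sketch of the skeleton-plus-fields pattern behind those template books.  The clause text and field names are invented for illustration, not real legal language:

```python
# A toy illustration of how templatized transactional drafting can be:
# a boilerplate NDA skeleton filled in from a handful of deal-specific
# fields. The clauses and field names are invented for illustration.
from string import Template

NDA_SKELETON = Template("""\
MUTUAL NON-DISCLOSURE AGREEMENT

This Agreement is made as of $effective_date between $party_a
("Party A") and $party_b ("Party B").

1. Each party will hold the other's Confidential Information in
   confidence for $term_years years from the date above.
2. This Agreement is governed by the laws of $governing_law.
""")

def draft_nda(**fields: str) -> str:
    # substitute() raises KeyError if any required field is missing -
    # roughly the completeness check a junior associate would do.
    return NDA_SKELETON.substitute(fields)

print(draft_nda(
    effective_date="January 1, 2026",
    party_a="Acme Corp.",
    party_b="Widgets LLC",
    term_years="3",
    governing_law="the State of Delaware",
))
```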

As far as trusting the AI to do the work in place of a human, this is the type of work that most corporations or individuals feel very little emotion over.  I have rarely been praised for producing a really good legal framework document or contract.  And the one real exception is when it encapsulated good risk-analysis (judging).  


Sharks:

My impression is that this will take longer to be taken over, but not all that long.  (I think we could see it within a few years, even without real AGI coming into existence.)

This work is about aggressively collecting evidence and arguments for a specific side, pre-identified by the client, so there is no judgment or human value necessarily associated with it.  I don't think the lack of a human presence will feel very significant to someone choosing an advocate.

At the moment, this is (IMHO) the category requiring the most creativity in its approach, but given what I see from current LLMs, I think that this remains essentially a word/logic game, and I can imagine AI being specifically trained to do this well.

My biggest concern here is regarding hallucination.  I'm curious what those with a real technical sense of how this can be appropriately limited would think about this.
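
For what it's worth, the approach I've seen discussed most often is to treat any model-cited authority as unverified until it's checked against a real source.  A minimal sketch of that idea - the regex and the stub lookup table below are placeholders standing in for a real citator service:

```python
# Sketch: never let a model-cited authority through unverified.
# KNOWN_CASES is a stub standing in for a real case-law database or
# citator API; the regex only matches one reporter format (U.S.).
import re

KNOWN_CASES = {
    "410 U.S. 113": "Roe v. Wade",
    "347 U.S. 483": "Brown v. Board of Education",
}

CITATION_RE = re.compile(r"\b\d{1,4} U\.S\. \d{1,4}\b")

def unverified_citations(draft: str) -> list[str]:
    """Return every reporter-style citation with no backing source."""
    return [c for c in CITATION_RE.findall(draft) if c not in KNOWN_CASES]

draft = "As the Court held in 410 U.S. 113, and again in 999 U.S. 999, ..."
print(unverified_citations(draft))  # ['999 U.S. 999'] -> flag for human review
```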


Judges:

I think that this is the last bastion of human-lawyering.  It's the category most closely tied to specific human desires, and relinquishing judgment to a machine is the step I suspect will FEEL hardest.

Teaching a machine to judge against a specific set of criteria should be easy-ish.  Automated sentencing guidelines are intended to do exactly this, and we already use them in many places.  And I suspect an AI should be able to form a general sense of what risks are presented by a given set of facts.
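
The "judging against fixed criteria" part really is close to a lookup.  A toy sketch - the grid values here are invented, and real guideline tables are much larger and allow departures:

```python
# Toy "judge against fixed criteria": map an offense-severity level and
# a prior-record score to a recommended sentencing range, the way a
# guideline grid does. The numbers are invented for illustration.

# (offense_level, prior_record) -> (min_months, max_months)
GUIDELINE_GRID = {
    (1, 0): (0, 6),   (1, 1): (2, 8),
    (2, 0): (4, 10),  (2, 1): (6, 12),
    (3, 0): (10, 16), (3, 1): (15, 21),
}

def recommended_range(offense_level: int, prior_record: int) -> tuple[int, int]:
    key = (offense_level, prior_record)
    if key not in GUIDELINE_GRID:
        # Everything outside the grid is where the hard, human-judgment
        # questions discussed below come in.
        raise ValueError("facts fall outside the grid; needs human judgment")
    return GUIDELINE_GRID[key]

print(recommended_range(2, 1))  # (6, 12) months
```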

But the real issue in judging is deciding which of those risks are the most significant in likelihood and consequence, BASED ON EXPECTED HUMAN RESPONSES.  That's what an in-house counsel at a company spends a lot of time advising on, and what a courtroom judge relies on when making decisions that extend or expand existing law.

And while I think that AI can do that, I also think that most people will see the end result as being very dependent on the subjective view of the judge/counselor as to what is really important and really risky.  And that level of subjectivity is something that may well be too hard to trust to an AI that is not really transparent to the client community (either the company leadership, or the public at large).

So, I don't think it's a real lack of capability here, but that this role hits humans in a soft spot and they will want to retain this under visible human control for longer.  Or at least we will all require more experience and convincing to believe that this type of judging is being done with a human point of view.

Basically, this is already a space where a lot of people feel political pressure has a significant impact on results, and I don't see anyone being comfortable letting a machine of possibly alien / inscrutable political ideology make these judgments / give this advice.


So I think the paper-pushers and sharks are short-lived in the AI world.

Counselors/judges will last longer, I think, since they are roles that specifically reflect human desire as expressed in law.  But even then, most risk-evaluating starts with analysis that I think AIs will be tasked to do, much like interns do today for courtroom judges.  So I don't think we'll need nearly as many.


On a personal note, I hope to be doing more advising (rather than paper-pushing and negotiating) to at least slightly future-proof my current role.

Rich D30

As a former smart person who decided that actual productive work was undervalued, and that I might as well therefore become a lawyer, this line made me chuckle:

"Normally I would be against dumbing down our testing, but keeping smart people from becoming lawyers is not the worst idea."

Unfortunately, given what's on the LSAT, even removing the logic puzzle part of it probably doesn't help that much in dumbing it down.  I think it only ends up mattering in the broadest categories.  (That is, while folks' percentiles might change without the Logic Fun section, I suspect that most folks' deciles won't change by more than one, and most won't even change at all.)
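
That decile claim is easy to sanity-check with a toy simulation.  Here I assume three equally weighted sections with pairwise correlation 0.7 - both numbers are guesses on my part, not real LSAT data - drop one section, and count how far the simulated test-takers' deciles move:

```python
# Toy check of the decile claim: rank simulated test-takers with and
# without one section and see how far their deciles move. Three equal
# sections with pairwise correlation 0.7 is an assumption, not LSAT data.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.7
cov = np.full((3, 3), rho) + (1 - rho) * np.eye(3)  # unit-variance sections
scores = rng.multivariate_normal(np.zeros(3), cov, size=n)

def deciles(totals: np.ndarray) -> np.ndarray:
    """0-9 decile of each test-taker, assigned by rank."""
    return totals.argsort().argsort() * 10 // len(totals)

with_logic = deciles(scores.sum(axis=1))            # all three sections
without_logic = deciles(scores[:, :2].sum(axis=1))  # logic section dropped

moved = np.abs(with_logic - without_logic)
print(f"within one decile: {(moved <= 1).mean():.0%}")
```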

In my experience, there are enough "Top 10" law schools (there are about 20 by my count) that anyone smart enough to score in the top 10-15% on the LSAT, and who sends enough applications, will get into at least one of those "top" schools.  So even at the limit, maybe someone who previously would have been admitted won't get into Stanford law with their "new" LSAT score.  But they'd still get into at least one of Harvard, Yale, Cornell, NYU, Columbia, Berkeley, or Georgetown.

So I guess my comment is: this wouldn't keep smart people from becoming lawyers - but it might discourage those who are smart, but either (1) aren't all THAT smart, or (2) aren't all that willing to think it through, from becoming lawyers.


But I agree that it's not the worst idea.