Dagon

Just this guy, you know?


Comments

No, That's Not What the Flight Costs
Dagon · 3d · 40

This is the problem with financial attribution.  Net income is highly susceptible to that kind of accounting trickery (a.k.a. structuring decisions) regarding how expenses are distributed.  In truth, all of these are correlated in customer behavior - the credit card revenue comes because of the operational/flight options.  How much of the expense should be attributed to each (and how much of the enterprise's debt service, which is significant for airlines) is a choice they make.

They are tied together enough that there IS NO objective truth of the matter for what the post is claiming.  Revenue gives the closest approximation IMO, but really, it's everything combined with everything else.

No, That's Not What the Flight Costs
Dagon · 4d · 20

It would be nice to see an analysis of a median passenger: how much they pay in flights, how much they pay in foregone rewards from using a non-airline card, etc.  Or a revenue analysis - valuation is very susceptible to accounting trickery regarding debt management and liability-assignment choices.

Claude says 75-80% of airline revenue is ticket sales, 8-10% add-on fees, and 10-12% credit card loyalty programs.  
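
A back-of-envelope sketch of what that median-passenger analysis might look like, with every number a made-up placeholder rather than real data: suppose the cardholder puts $3,000/year on the airline card, the miles are worth about 1.2% back versus 2% on a generic cash-back card, and the card carries a $95 annual fee.  Then

$$\underbrace{(0.02 - 0.012)\times\$3{,}000}_{\text{foregone rewards}} + \underbrace{\$95}_{\text{annual fee}} = \$24 + \$95 = \$119\ \text{per year},$$

to be weighed against whatever they actually spend on flights and whatever perks or fee waivers the card provides.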

The Problem of the Concentration of Power
Dagon · 5d · 40

Not downvoted, but also not upvoted - this wasn't really useful for the LW audience: nothing particularly new, and no added clarity on any of the questions.

I would be very happy to see, and would almost certainly upvote even if I didn't agree with it, a strong attempt at an operational definition of "power".  I think it's a missing element in a LOT of discussions about bargaining, AI takeover and influence, voluntary vs involuntary actions, etc.

</rant> </uncharitable> </psychologizing>
Dagon · 5d · 40

I fully agree with this, including the acknowledgement that people and contexts differ enough that I don't actually know how easy it is for others to include a disclaimer consistently, nor how they variously perceive the change in value and strength of their post/comment if they include one.  Changing norms is hard, and the typical mind fallacy is rampant.
 

I wish people were generally more careful, kind, clear, and humble.  This includes wishing this of myself.

Stephen Martin's Shortform
Dagon · 5d · 30

Right.  A prerequisite for personhood is legible entityhood.  I don't think current LLMs, or any visible trajectory from them, have any good candidates for a separable, identifiable entity.

A cluster of compute that just happens to be currently dedicated to a block of code and data wouldn't satisfy me, nor, I expect, a court.

The blockchain identifier is a candidate for a legible entity.  It's consistent over time, easy to identify, and while it's easy to create, it's not completely ephemeral and not copyable in a fungible way.  It's not, IMO, a candidate for personhood. 

Raemon's Shortform
Dagon · 5d · 20

I'm clueless enough, and engineering-minded enough, that hypothetical examples don't help me understand or solve a problem.

I suspect I should have just stayed out, or asked for a clearer problem description.  I don't really feel tribal-ish in myself or my interactions on the site, so I suspect I'm just not part of the problem or the solution.  PLEASE let me know (privately or publicly) if this is incorrect.

Stephen Martin's Shortform
Dagon · 5d · 30

If we could get context windows large enough and crack problems which analogize to competence issues (hallucinations or prompt engineering into insanity for example) it's not clear to me what LLMs are lacking at that point. What would you see as being the issue then?

The issue would remain that there's no legible (legally clearly demarcated over time) entity to call a person.  A model and its weights have no personality or goals.  A context (and memory, fine-tuning, RAG-like reasoning data, etc.) is perhaps identifiable, but it's so easily forked and pruned that it's not persistent enough to work that way.  Corporations have a pretty big hurdle to clear to get legally recognized (filing of paperwork with clear human responsibility behind it).  Humans are rate-limited in creation.  No piece of current LLM technology is difficult to create on demand.

It's this ease of mass creation that makes legible identity problematic.  For purposes short of legal independence (activities no human is responsible for, rights no human is delegating), this is easy - assigning database identities in a company's (or blockchain's) system is already being done today.  But there are no legal rights or responsibilities associated with those, just identification for various operational purposes (and a legal connection to a human or corporate entity when needed).

Raemon's Shortform
Dagon · 6d · 20

Some of my instincts are opposite to this.  Full agreement with naming the positives in each group/position.  

I think abstraction is often the enemy of crux-finding.  When people are in far-mode, they tend to ignore the things that make for clear points of disagreement, and just assume that it's a value difference rather than a belief difference.  I think most of the tribal failures to communicate are from the default of talking abstractly.   

Agreed that it's often not necessary to identify or reinforce the group boundaries.  Focus on the disagreements, and figure out how to proceed in the world where we don't all agree on things.

I think the epistemic-status recommendation is a good example - this isn't about groups, it's a legitimate disagreement about when it's useful and when it's wasteful or misleading.  Debating it would be useful (and I have to say, I haven't noticed this debate) to clarify that it's OK for it to be the poster's/commenter's choice, and that it's just another tool for communication.

Stephen Martin's Shortform
Dagon · 6d · 20

I'm one of the people who've been asking, and it's because I don't think that current or predictable-future LLMs will be good candidates for legal personhood.  

Until there's a legible thread of continuity for a distinct unit, it's not useful to assign rights and responsibilities to a cloud of things that can branch and disappear at will with no repercussions.

Instead, LLMs (and future LLM-like AI operations) will be legally tied to human or corporate legal identity.  A human or a corporation can delegate some behaviors to LLMs, but the responsibility remains with the controller, not the executor.

The Basic Case For Doom
Dagon · 6d · 167

I think this continues to miss many of the objections and dismissals, and would benefit from some numeric estimates.  I'm either a hopeless doomer or a Pollyanna optimist, depending on who's asking and what their non-AI estimates of doom are.  I've estimated between a 0.25% and 1% annual chance of civilizational disaster (mostly large-scale war or an irreversible climate tipping point) since the mid-90s.  That's roughly 5% per decade (with a fair amount of variance).  With AI to accelerate things, I put it marginally higher, but I still kind of expect that humans do most of the destruction.
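
As a rough conversion, assuming the annual estimate is approximately constant and independent from year to year:

$$P_{\text{decade}} = 1 - (1 - p_{\text{annual}})^{10}, \qquad 1 - (1 - 0.0025)^{10} \approx 2.5\%, \qquad 1 - (1 - 0.01)^{10} \approx 9.6\%,$$

so the 0.25%-1% annual range works out to roughly 2.5%-10% per decade.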
 

Further, I think the first two steps are often the most controversial/non-obvious ones.

We’re going to build superintelligent AI.

I haven't seen an operational definition yet, and I don't know what the scaling curve looks like, or whether the current (impressive, but not super) progress is actually on the same dimension.  I'd give it less than a 10% chance that a transformative, creative superintelligence on the scale Eliezer seems to be imagining will exist within the next decade.

It will be agent-like, in the sense of having long-term goals it tries to pursue.

I have seen no progress on this, and don't think it's likely that a fully-orthogonal long-term goal set will happen.  I DO think it's likely that somewhat longer-term contexts and goals will happen, perhaps extending to weeks or months without human intervention/validation, but probably never fully without it.

NOTE: this does NOT argue against AI being very powerful and being misused (accidentally or intentionally) by humans to do catastrophic harm.  I don't think it's automatic, but I think it's a huge risk of building such a powerful tool.

Call it 10x more risky than nuclear weapons.  You don't need to argue that it's autonomously misaligned, just that HUMANS are prone to such things and this tool could be horrifically effective.

Posts (karma · title · age · comment count)

2 · Dagon's Shortform · 6y · 92
7 · Moral realism - basic Q [Question] · 3mo · 12
14 · What epsilon do you subtract from "certainty" in your own probability estimates? [Question] · 10mo · 6
3 · Should LW suggest standard metaprompts? [Question] · 1y · 6
8 · What causes a decision theory to be used? [Question] · 2y · 2
2 · Adversarial (SEO) GPT training data? [Question] · 3y · 0
24 · {M|Im|Am}oral Mazes - any large-scale counterexamples? [Question] · 3y · 4
17 · Does a LLM have a utility function? [Question] · 3y · 11
8 · Is there a worked example of Georgian taxes? [Question] · 3y · 12
9 · Believable near-term AI disaster · 4y · 3
2 · Laurie Anderson talks · 4y · 0