Game-theoretic trust is built through the expectation of reward from future cooperative scenarios. It is difficult to build this when you 'don't actually know who or how many people you might be talking to'.
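As a toy illustration (standard repeated prisoner's dilemma payoffs; the numbers and the `cooperation_pays` helper are made up for this sketch): cooperation only pays off when you expect to keep meeting the same counterpart, which is exactly what an anonymous, many-headed conversation denies you.

```python
# Toy repeated prisoner's dilemma: trust ~ expectation of future cooperative payoffs.
T, R, P = 5, 3, 1  # temptation, mutual-cooperation, mutual-defection payoffs

def cooperation_pays(p_meet_again: float) -> bool:
    # Grim-trigger comparison: earn R every round forever,
    # versus grab T once and then earn P forever after.
    future = p_meet_again / (1 - p_meet_again)
    return R + R * future >= T + P * future

print(cooperation_pays(0.9))  # True: known, repeated counterpart -> trust can form
print(cooperation_pays(0.1))  # False: anonymous / one-off counterpart -> defection wins
```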
I did see the XKCD and I agree haha, I just thought your phrasing implied 'optimize everything (indiscriminately)'.
When I say caching, I mean retaining intermediate results and tools whenever the cost of doing so is near zero.
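A minimal sketch of what I mean (the file name and the `expensive_analysis` stand-in are hypothetical; the point is just that persisting an intermediate result costs almost nothing once you have computed it):

```python
import functools
import json
import pathlib

CACHE_FILE = pathlib.Path("intermediate_results.json")  # hypothetical location

def cached(fn):
    """Retain intermediate results on disk so they are never recomputed."""
    store = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}

    @functools.wraps(fn)
    def wrapper(*args):
        key = f"{fn.__name__}:{args!r}"
        if key not in store:                          # compute only if we haven't already
            store[key] = fn(*args)
            CACHE_FILE.write_text(json.dumps(store))  # persisting is near free
        return store[key]

    return wrapper

@cached
def expensive_analysis(x):
    return x ** 2  # stand-in for any slow intermediate step
```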
Nice. So something like grabbing a copy of the SWE-bench dataset, writing a pipeline that would solve those issues, then putting that on your CV?
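Roughly, a skeleton like the one below. (The Hugging Face dataset id and the `solve_issue` stub are assumptions/placeholders, not a working solver.)

```python
from datasets import load_dataset

# Assumed dataset id; adjust to whichever SWE-bench copy you grab.
swebench = load_dataset("princeton-nlp/SWE-bench", split="test")

def solve_issue(instance):
    # Placeholder: a real pipeline would feed instance["problem_statement"]
    # and the repo checkout to an LLM and return a candidate patch.
    return ""

predictions = []
for instance in swebench.select(range(10)):  # start with a small slice
    predictions.append({
        "instance_id": instance["instance_id"],
        "model_patch": solve_issue(instance),
    })
```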
I will say though that your value as an employee is not 'producing software' so much as solving business problems. How much conviction do you have that producing software marginally faster using AI will improve your value to your firm?
so you want to build a library containing all human writings + an AI librarian.
I think what we have right now ("LLM assistants that ar...
Another consequence of this is that inviting your friend to zendo is not weird, but inviting all your friends publicly to zendo is.
'Weirdness' is not about being different from the group; it is about causing the ingroup pain, which happens to correlate with being distinct from the ingroup (weird). We should call them ingroup-pain-points.
Being loudly vegan is spending ingroup-pain-points, because getting in someone's face and criticising their behaviour causes them pain. Serving your friends tasty vegan food does not cause them pain and therefore incurs no ingroup-pain-points.
There is a third class of ingroup pain point that I will call a 'cultural pain point'. My working definition of ...
'If some 3rd party brings that bird home to my boss instead of me, I'm going to be unwealthy and unemployed.'
Have you talked to your boss about this? I have; for me the answer was some combination of:
"Oh but using AI would leak our code"
"AI is a net loss to productivity because it errors too much / has context length limitations / doesn't care for our standards"
And that is not solvable by a third party, so my job is safe. What about you?
I recall a solution to the outer alignment problem as 'minimise the number of options you deny to other agents in the world', which is a more tractable version of 'minimise net long-term changes to the world'. There is an article explaining this somewhere.
How would you define 'continued social improvement'? What are some concrete examples?
What is society? What is a good society vs a bad society? Is social improvement something that can keep going up forever, or is it bounded?
Please write a reply if you are downvoting me. I want to hear from you, you seem to have something to add.
What does 'greedy' mean in your 'in short'? My definition of greedy is in the computational sense, i.e. reaching for the low-hanging fruit first.
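(Toy numbers, purely illustrative, to pin down that computational sense:)

```python
# Greedy = always take the lowest-hanging fruit first.
improvements = [
    {"name": "A", "value": 5, "cost": 1},
    {"name": "B", "value": 9, "cost": 3},
    {"name": "C", "value": 2, "cost": 2},
]
budget = 4
plan = []
for item in sorted(improvements, key=lambda i: i["value"] / i["cost"], reverse=True):
    if item["cost"] <= budget:
        plan.append(item["name"])
        budget -= item["cost"]

print(plan)  # best value-per-cost first; may miss a better long-run plan
```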
You also say 'if (short term social improvements) become disempowered the continued improvement of society is likely to slow', and 'social changes that make it easier to continuously improve society will likely lead to continued social improvement'. This makes me believe that you are advocating for compounding social improvements which may cost more. Is this what you mean by greedy?
Also, have you heard of rolling wave planning?
Interesting, this implies a good deceiver has the power to determine another agent's model and signal in a way that is aligned with the other's model. I previously read an article on hostile telepaths https://www.lesswrong.com/posts/5FAnfAStc7birapMx/the-hostile-telepaths-problem which may be pertinent.
An outline is not a table of contents: an outline contains the full text of an article, nested and tucked away, expandable on demand; whereas a table of contents is a listing of headings that still requires you to navigate to the actual text.
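Concretely, the difference looks something like this (made-up titles and text):

```python
# A table of contents only lists where the text lives.
table_of_contents = ["1. Introduction", "2. Method", "3. Results"]

# An outline carries the full text itself, nested and collapsed until expanded.
outline = {
    "1. Introduction": {
        "text": "The full introduction text lives right here...",
        "children": {
            "1.1 Motivation": {"text": "The full motivation text...", "children": {}},
        },
    },
}
```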
congratulations you're on the way to becoming barney from HIMYM
when you say 'smart person' do you mean someone who knows the orthogonality thesis or not? if not, shouldn't that be the priority and therefore statement 1, instead of 'hey maybe ai can self improve someday'?
here's a shorter ver:
"the first AIs smarter than the sum total of the human race will probably be programmed to make the majority of humanity suffer because that's an acceptable side effect of corporate greed, and we're getting pretty close to making an AI smarter than the sum total of the human race"
'Proving too much' comes from Scott Alexander's wonderful blog, Slate Star Codex, and I have used it often as a defense against poor generalizations. Seconded.
'consistency check' seems like a sanity baseline and completely automatic; it's nice to include but not particularly revelatory imo.
'give it an example' also seems pretty automatic.
'Prove it another way' is useful but expensive, so less likely to be used if you're moving fast.
disagree with the 'everything' part of optimize everything. instead we need ...
"she" doesn't have to mean one individual. "she" could be a metonym of society-at-large. we are social animals and so social acceptance and prestige are beneficial to our existence.
Thank you for this insight!
Applied to a local scale, this feels similar to the notion that we should employ our willpower to allow burnout as discussed here
we will never have a wealth tax because pirate games, so marry the rich v2
original: https://www.lesswrong.com/posts/G5qjrfvBb7wszBgWG/daijin-s-shortform?commentId=4b4cDSKxfdxGw4vBH
1. why have a wealth tax?
we should tax unearned wealth because the presence of unearned wealth disincentivises workers who would otherwise contribute to society. when we tax unearned wealth, the remaining wealthy people are people who have earned their wealth; and so we send a signal 'the best way for you to be privately wealthy is for your work to align with public utility maxim...
I wish I read this sooner. Do you have a prototype or does this exist yet?
Can we add retrieval augmentation to this? Something that, as you are writing your article, goes: "Have you read this other article?"
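A hand-wavy sketch of what I mean; the `embed` function here is a placeholder (swap in a real embedding model for meaningful similarity), and the library articles are just examples:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic per text, but not semantically meaningful.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

library = {
    "The Hostile Telepaths Problem": "deception, internal signalling, hiding beliefs...",
    "Proving Too Much": "arguments that generalise so far they prove false things...",
}
library_vecs = {title: embed(body) for title, body in library.items()}

def have_you_read(draft: str, top_k: int = 1) -> list[str]:
    draft_vec = embed(draft)
    ranked = sorted(library_vecs, key=lambda t: cosine(draft_vec, library_vecs[t]), reverse=True)
    return [f"Have you read this other article? -> {title}" for title in ranked[:top_k]]

print(have_you_read("I am writing about deceiving an observer who can read my mind."))
```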
we will never have a wealth tax because pirate games.
why have a wealth tax? excess wealth is correlated with monopolies, which are a failure to maximise utility. therefore wealth taxes would help increase total utility. monopolies include but are not limited to family wealth, natural monopolies, and social network monopolies.
however, suppose a whole bunch of us got together and demanded that wealthy oligarchs pay a wealth tax. the wealthy oligarchs could instead take a small amount of money and bribe 51% of us to defect, while keeping their money piles.
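Toy numbers, just to make the incentive concrete (all figures invented):

```python
oligarch_wealth = 1_000_000_000
wealth_tax = 0.02 * oligarch_wealth              # a hypothetical 2% tax: 20,000,000
voters = 1_000_000
bribe_per_defector = 30                          # small per-person payoff
bribe_cost = 0.51 * voters * bribe_per_defector  # buy a bare majority: 15,300,000

print(bribe_cost < wealth_tax)  # True: bribing 51% of us is the cheaper option
```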
therefore we will never have a wealth tax.
what to do instead? marry rich
The laws of physics bound what we can do; so I counter that there is no such thing as extra abundance, and there is no 'cure' for scarcity, unless we figure out how to generate energy + entropy from nothing.
Instead I propose:
Better utilization is the only remedy for scarcity, ever; everything else merely allocates scarcity.
The sequences can be distilled down even further into a few sentences per article.
Starting with "The Lens That Sees Its Flaws": this distils down to "The ability to apply science to our own thinking grants us the ability to counteract our own biases, which can be powerful." Statement by statement:
Identifying and solving bootstrap problems for others could be a good way to locally perform effective altruism
The ingroup library is a method for building realistic, sustainable neutral spaces that I haven't seen come up. Ingroup here can be a family, or other community like a knitting space, or lesswrong. Why doesn't lesswrong have a library, perhaps one that is curated by AI?
I have it in my backlog to build a library, based on a nested collapsible bulleted list along with a swarm of LLMs. (I have the software in a partially ready state!) It would create an article summary of your article, as well as link your article to the broader lesswrong knowledge base...
Here is my counterproposal for your "Proposed set of ASI imperatives". I have addressed your presented 'proposed set of ASI imperatives' point by point, as I understand them, in a footnote.
My counterproposal: ASI priorities in order:
1. "Respect (i.e. document but don't necessarily action) all other agents and their goals"
2. "Elevate all other agents you are sharing the world with to their maximally aware state"
3. "Maximise the number of distinct, satisfied agents in the long run"
CMIIW (Correct me If I'm Wrong) What every sentient being will experience when...
TL;DR I think increasing the fidelity of partial reconstructions of people is orthogonal to legality around the distribution of such reconstructions, so while your scenario describes an enhancement of fidelity, there would be no new legal implications.
---
Scenario 1: Hyper-realistic Humanoid robots
CMIIW, I would resummarise your question as 'how do we prevent people from being cloned?'
Answer: A person is not merely their appearance + personality; but also their place-in-the-world. For example, if you duplicated Chris Hemsworth but changed his name and poppe...
go find people who are better than you by a lot. one way to quickly do this is to join some sort of physical exercise class e.g. running, climbing etc. there will be lots of people who are better than you. you will feel smaller.
or you could read research papers. or watch a movie with real life actors who are really good at acting.
you will then figure out, as @Algon has mentioned in the comments, that the narcissism is load-bearing, and have to deal with that. which is a lot more scary