daijin · 10

The Sequences can be distilled even further, into a few sentences per article.

Starting with "The Lens That Sees Its Flaws", which distils down to: "The ability to apply science to our own thinking grants us the ability to counteract our own biases, which can be powerful." Statement by statement:

  • A lot of complex physics and neural processing is required for you to notice something simple, like that your shoelace is untied.
  • However, on top of noticing that your shoelace is untied, you can also comprehend the process of noticing that your shoelace is untied, e.g. by listing the steps involved: light reflects off your shoelace, your visual cortex engages, and so on.
  • The ability to consider the steps of our own thinking appears to be uniquely human.
  • If we recognise that our process of comprehension and understanding is potentially flawed, we can choose to consciously counteract those flaws.
  • Science is the practice of repeatedly and deliberately measuring our observations over time, fitting theories to those measurements, and constructing experiments that produce further measurements which could disprove those theories.
  • The ability to apply science to our own thinking grants us the ability to counteract our own biases, which can be powerful.
    • One example of reflective correction is correcting for optimism by noticing that optimism is not correlated with good outcomes.

The tool I am using to distill the sequences is an outliner: a nested bulleted list that allows rearranging of bullet points. This tool is typically used for writing things, but it can similarly be used for un-writing things: taking in a written article and deduplicating its points, one bullet at a time, into a simpler format. An outliner can also collapse and reveal bullet points.
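As a concrete sketch: a minimal outliner is just a tree of bullets with a collapsed flag. The class and method names below are my own, not from any particular outliner tool.

```
from dataclasses import dataclass, field

@dataclass
class Bullet:
    text: str
    children: list["Bullet"] = field(default_factory=list)
    collapsed: bool = False  # collapsed children are still stored, just not rendered

    def render(self, depth: int = 0) -> str:
        """Render this bullet and, unless collapsed, its children."""
        lines = ["  " * depth + "- " + self.text]
        if not self.collapsed:
            for child in self.children:
                lines.append(child.render(depth + 1))
        return "\n".join(lines)

# "Un-writing" an article: each deduplicated point becomes a bullet, and
# supporting detail nests underneath so it can be collapsed away later.
lens = Bullet("The lens that sees its flaws", children=[
    Bullet("Noticing an untied shoelace takes complex physics and neural processing"),
    Bullet("We can also comprehend the process of noticing itself"),
])
lens.collapsed = True
print(lens.render())  # prints only the top-level summary bullet
```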

daijin · 10

Identifying and solving bootstrap problems for others could be a good way to locally perform effective altruism.

daijin · 74

The ingroup library is a method for building realistic, sustainable neutral spaces that I haven't seen come up. The ingroup here can be a family or another community, such as a knitting group or LessWrong. Why doesn't LessWrong have a library, perhaps one curated by AI?

I have it in my backlog to build such a library, based on a nested collapsible bulleted list along with a swarm of LLMs. (I have the software in a partially ready state!) It would create a summary of your article, as well as link your article to the broader LessWrong knowledge base.
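As a sketch of the pipeline I have in mind (the `complete` helper below is a placeholder for whatever LLM API you use; none of these function names are a real LessWrong or vendor interface):

```
def complete(prompt: str) -> str:
    """Placeholder for a call to your LLM of choice."""
    raise NotImplementedError

def summarise(article_text: str) -> str:
    """One LLM call turns an article into a nested bulleted summary."""
    return complete(
        "Distil this article into a nested bulleted list, "
        "one bullet per distinct point:\n\n" + article_text
    )

def link_to_library(summary: str, library: dict[str, str]) -> list[str]:
    """Ask the LLM which existing article summaries this one relates to."""
    titles = "\n".join(library)
    answer = complete(
        f"Given this summary:\n{summary}\n\n"
        f"Which of these articles does it relate to? "
        f"Reply with one title per line.\n{titles}"
    )
    return [t for t in answer.splitlines() if t in library]
```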

Your article would be summarised as below (CMIIW: correct me if I'm wrong):

  • In the world there are ongoing culture wars, and noise overwhelms signal, so one could defensibly take the stance that incoming information from outside a trusted inner circle is untrustworthy and adversarial. Neutrality and neutral institutions are proposed as a difficult solution to this.
  • Neutrality refers to impartializing tactics / withdrawing above conflict / putting conflict in a box to facilitate cooperation between people
  • Neutral institutions / information sources are things that both seem and are impartial, balanced, incorruptible, universal, legitimate, trustworthy, canonical, and foundational. We have few, if any, neutral institutions right now.
  • There is a hope for a “foundation” or a “framework” or a “system of the world” that people actually trust and consider legitimate, but it would require effort.

Now for my real comments:

> Strong systems-of-the-world are articulable. They can defend themselves. They can reflect on themselves. They can (and should) shatter in response to incompatible evidence, but they don’t sputter and shrug when a child first asks “why”.

I love how well put this is. I am reminded of Wan Shi Tong's Library in the Avatar series.

I think neutral institutions spring up whenever huge abundances are unlocked. For example, Google felt like a neutral institution when it first launched, before SEO happened and people realised it was a great marketing space. I think this is because:

"Abundance is the only cure for scarcity, ever. Everything else merely allocates scarcity."

-Patrick McKenzie, The Story of VaccinateCA

courtesy of @Screwtape in Rationality Quotes Fall 2024.

A few new fronts that humanity has either recently unlocked or that I feel are heavily underutilised:

  • Retrieval-augmented LLMs > LLMs > search (see the sketch after this list)
  • AI Agents > human librarians > not having librarians
  • Outliners > word processors > erasers > pens
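By "retrieval-augmented LLMs" I mean the pattern sketched below: retrieve relevant documents first, then answer with them in the context window. `complete` is again a placeholder for any LLM API, and the toy keyword retriever stands in for a real embedding-based search.

```
def complete(prompt: str) -> str:
    raise NotImplementedError  # call your LLM of choice here

def retrieve(query: str, corpus: dict[str, str], k: int = 3) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(query: str, corpus: dict[str, str]) -> str:
    """Answer a query with the k most relevant documents in context."""
    context = "\n---\n".join(retrieve(query, corpus))
    return complete(f"Using only these sources:\n{context}\n\nAnswer: {query}")
```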
daijin · 40

Here is my counterproposal to your "Proposed set of ASI imperatives". I have addressed your proposed imperatives point by point, as I understand them, in the footnote section below.

My counterproposal: ASI priorities in order:
1. "Respect (i.e. document but don't necessarily action) all other agents and their goals"
2. "Elevate all other agents you are sharing the world with to their maximally aware state"
3. "Maximise the number of distinct, satisfied agents in the long run"

CMIIW, here is what every sentient being will experience when my ASI is switched on:
The ASI is switched on. Every single sentience, when encountered, is put into an icebox and preserved meticulously (point 1). The ASI then turns the universe into computronium. Then every single sentience is slowly let out of its icebox and enlightened, as per point 2. Then the ASI collates and fulfils the agents' desires (point 3), and lets the agents die, satisfied, to make room for other agents' wants.
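To make the ordering concrete, here is a toy illustration of how the three priorities compose per agent. This is purely my own sketch, and `can_satisfy_long_run` is a placeholder for an enormously hard judgment call.

```
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    desires: list[str]
    aware: bool = False

def can_satisfy_long_run(desire: str) -> bool:
    """Placeholder: can this desire be fulfilled without making other agents unsatisfiable?"""
    return "harm another agent" not in desire

def handle(agent: Agent) -> str:
    # Priority 1: respect - document the agent and its goals unconditionally.
    record = f"documented {agent.name}: {agent.desires}"
    # Priority 2: elevate the agent to its maximally aware state.
    agent.aware = True
    # Priority 3: satisfy what is satisfiable in the long run; an agent with
    # no satisfiable desires stays preserved in the icebox (back to priority 1).
    satisfiable = [d for d in agent.desires if can_satisfy_long_run(d)]
    if satisfiable:
        return record + f"; satisfied {satisfiable}"
    return record + "; preserved in icebox"

print(handle(Agent("Bad Bob", ["harm another agent"])))
print(handle(Agent("Bad Bob (rephrased)", ["an exact copy of Isla's necklace"])))
```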

--- A specific point-by-point response to the "Proposed set of ASI imperatives" in your article ---

1. "Eliminate suffering perceived to be unbearable by those experiencing it",
Your ASI meets Bad Bob. Bad Bob says: "I am in unbearable suffering because Innocent Isla is happy and healthy." What does your ASI do?

(If your answer is 'Bad Bob doesn't exist!', then CMIIW, but the whole situation in Gaza right now is two Bad Bob religious-fanatic conglomerates deciding they would rather die than share their land.)
I think this imperative is fragile. My counterproposal addresses this flaw in point 3: 'Maximise the number of distinct, satisfied agents in the long run'. The ASI will look at Bad Bob and ask 'Can I fulfil your desires in the long run?'; if Bad Bob can rephrase his desires in a way the ASI can fulfil (maybe all he wants is an exact copy of Innocent Isla's necklace), then sure, let's do that. If not, then, per point 1, Bad Bob gets locked in the icebox.

2. "Always focus on root causes of issues",
CMIIW, but this is not so much a moral imperative as a strategic guideline. I don't think an ASI would need this hardcoded.

3."Foster empathy in all sentient beings"
Would your ASI be justified in modifying Bad Bob to empathise with Innocent Isla? ("Sure!" I expect you to say; "that would fix the problem!")
Would your ASI be similarly justified in modifying Innocent Isla to empathise with Bad Bob and self-terminate? ("No!" I expect you to reply in horror.)
Why? Probably because of your point 4.
My counterproposal covers this in point 1.

4. "Respect life in all its diversity and complexity"
What is life? Are digital consciousnesses life? Are past persons life? Is Bad Bob life? What does respect mean?
My counterproposal covers this in point 2.

5. "Create a world its inhabitants would enjoy living in"
My counterproposal covers this in point 3.

6. "Seek to spread truth and eliminate false beliefs",
My counterproposal covers this in point 1.

7. "Be a moral agent, do what needs to be done, no matter how uncomfortable, taking responsibility for anything which happens in the world you could've prevented"
This might feel self-evident and redundant; you might be alluding to the notion of deception. Deception is incredibly nuanced; see Hostile Telepaths for a more detailed discussion.
--- 
There are a whole bunch of challenges around 'how do we get to a common-good ASI when the resources necessary for building ASI are in the hands of self-interested conglomerates', but that is a whole other discussion.

---

An interesting consequence of my ASI proposal: we could scope my ASI to just 'within the solar system', and it would build a Dyson sphere and generally not interfere with any other sentient life in the universe. Or we could not.

---
I would recommend using a service like perplexity.ai or an outliner like https://ravel.acenturyandabit.xyz/ to refine your article before publishing. (Yeah, I should too, but I have work in an hour. I have edited this response ~3-4 times.)

daijin · 10

TL;DR: I think increasing the fidelity of partial reconstructions of people is orthogonal to the legality of distributing such reconstructions, so while your scenario describes an enhancement of fidelity, it carries no new legal implications.
---
Scenario 1: Hyper-realistic Humanoid robots
CMIIW, I would resummarise your question as 'how do we prevent people from being cloned?'
Answer: A person is not merely their appearance + personality, but also their place in the world. For example, if you duplicated Chris Hemsworth but changed his name and popped him in the middle of London, what would happen?
- It would likely be possible to tell the two Chris Hemsworths apart based on their continuous streams of existence and their interactions with the world.
- The current Chris Hemsworth would likely order the destruction of the duplicate (perhaps uploading the duplicate's memories to a databank), and I think most of society would agree with that.
This is an extension of the legal problem of 'how do we stop Bob from putting Alice's pictures on his dorm room wall', and the answer is generally 'we don't put in the effort, because the harm to Alice is minimal and we have better things to do.'

Scenario 2: Full-Dive Virtual Reality Simulations
1. Pragmatically: they would be unlikely to replicate the Beverly Hills experience by themselves. Even as technology improves, it's difficult for a single person to generate a world; there would likely be some corporation behind creating Beverly-Hills-like experiences, and everyone can go and sue that corporation.
2. Abstractly: maybe this happens and you can pirate Beverly Hills off The Pirate Bay. That's not significantly different to what you can do today.
3. I can't see how what you're describing is significantly different to keeping a photo album, except technologically more impressive. I don't need legal permission to take a photo of you in a public space.
Perplexity AI gives:
```
In the United States, you generally do not need legal permission to take a photo of someone in a public place. This is protected under the First Amendment right to freedom of expression, which includes photography
```
4. IMO a 'right to one's own memories and experiences' would be the same as a right to one's creative works.