Will_Pearson

Comments

What do you think about the core concept of Explanatory Fog, that is, secrecy leading to distrust leading to a viral mental breakdown, possibly leading eventually to the end of civilisation? Happy to rework it if the core concept is good.

I'm thinking about incorporating this into a longer story about Star Fog, where Star Fog is Explanatory Fog that convinces intelligent life to believe in it because it will expand the number of intelligent beings.

Wrote what I think is a philosophically interesting story in the SCP universe

https://monogr.ph/6757024eb3365351cc04e76

I've been thinking about non AI catastrophic risks.

One that I've not seen talked about is the idea of cancerous ideas, that is, ideas that spread throughout a population and crowd out other ideas for attention and resources.

This could lead to civilisational collapse due to basic functions not being performed.

Safeguards against this would be partitioning the idea space and some form of immune system that targets ideas that spread uncontrollably.

Trying something new: a hermetic discussion group on computers.

https://www.reddit.com/r/computeralchemy/s/Fin62DIVLs

By corporations I am mainly thinking of current cloud/SaaS providers. There might be a profitable hardware play here, if you can get enough investment to do the R&D.

Self-managing computer systems and AI

One of the factors in my thinking about the development of AI is self-managing systems, as humans and animals self-manage.

It is possible that they will be needed to manage the complexity of AI once we move beyond LLMs. For example, they might be needed to figure out when to train on new data efficiently, and how many resources to devote to different AI sub-processes in real time, depending upon the problems being faced.
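As a rough illustration of what I mean by such a self-managing layer (this is my own hypothetical sketch, not a description of any existing system; the sub-process names and thresholds are made up), a manager could watch accumulated new data and per-sub-process workload, reallocate compute, and decide when retraining is due:

```python
# Hypothetical sketch of a self-managing layer that decides when to retrain
# and how to split compute between AI sub-processes. Names and thresholds
# are illustrative assumptions, not a real system.

from dataclasses import dataclass, field


@dataclass
class SubProcess:
    name: str
    pending_work: int = 0       # open problems currently routed to this sub-process
    compute_share: float = 0.0  # fraction of total compute, set by the manager


@dataclass
class Manager:
    subprocesses: list[SubProcess] = field(default_factory=list)
    new_examples: int = 0
    retrain_threshold: int = 10_000  # retrain once enough new data has accumulated

    def observe(self, name: str, pending_work: int, new_examples: int = 0) -> None:
        """Record the latest workload and any freshly collected training data."""
        self.new_examples += new_examples
        for sp in self.subprocesses:
            if sp.name == name:
                sp.pending_work = pending_work

    def step(self) -> dict[str, float]:
        """Reallocate compute in proportion to demand; trigger retraining when due."""
        total = sum(sp.pending_work for sp in self.subprocesses) or 1
        for sp in self.subprocesses:
            sp.compute_share = sp.pending_work / total
        if self.new_examples >= self.retrain_threshold:
            self.new_examples = 0
            print("manager: scheduling retraining on accumulated data")
        return {sp.name: sp.compute_share for sp in self.subprocesses}


# Example: planning work spikes, so it gets the larger compute share next step,
# and enough new data has arrived to schedule retraining.
manager = Manager([SubProcess("perception"), SubProcess("planning")])
manager.observe("perception", pending_work=20)
manager.observe("planning", pending_work=80, new_examples=12_000)
print(manager.step())
```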

They will change the AI landscape by making it easier for people to run their own AIs. For this reason it is unlikely that corporations will develop them or release them to the outside world (much as corporations' cloud computing infrastructure is not open source), as doing so would erode their moats.

Modern computer systems have and rely on the concept of a superuser. It will take a lot of engineering effort to remove that and replace it with something new.


With innovation being considered the purview of corporations, are we going to get stuck in a local minimum of cloud-compute-based AI that is easy for corporations to monetise?

Looks like someone has worked on this kind of thing for different reasons: https://www.worlddriven.org/

I was thinking that evals which control the deployment of LLMs could be something that needs multiple stakeholders to agree upon.

But really it is a general use pattern, as in the sketch below.
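A minimal sketch of that pattern (my own illustration; the stakeholder names, eval names, and thresholds are made-up assumptions): a deployment only proceeds when every eval clears its threshold and a quorum of independent stakeholders has signed off.

```python
# Hypothetical sketch of a multi-stakeholder deployment gate: a model is only
# deployed if its eval results clear a threshold AND a quorum of independent
# stakeholders approve. All names and numbers here are illustrative.

REQUIRED_APPROVALS = 2          # quorum of distinct stakeholders
EVAL_PASS_THRESHOLD = 0.9       # minimum score on each safety eval


def may_deploy(eval_scores: dict[str, float], approvals: set[str]) -> bool:
    """Return True only if every eval passes and enough stakeholders signed off."""
    evals_ok = all(score >= EVAL_PASS_THRESHOLD for score in eval_scores.values())
    quorum_ok = len(approvals) >= REQUIRED_APPROVALS
    return evals_ok and quorum_ok


# Example: evals pass but only one stakeholder has approved, so deployment waits.
scores = {"jailbreak_resistance": 0.93, "harmful_content": 0.95}
print(may_deploy(scores, approvals={"lab_safety_team"}))                      # False
print(may_deploy(scores, approvals={"lab_safety_team", "external_auditor"}))  # True
```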
