Flipnash · 1y

At the risk of sounding insane: I remember doing similar things, but I used git to keep track of branches. Here's a warning I wish I'd had back then, before I shelved it.

There's a phenomenon where your thoughts and the generated text have no barrier. It's hard to describe, but it's similar to how you stop feeling the controller and the game character becomes an extension of the self.

It leaves you vulnerable to being hurt by things the generated characters say, because you're thoroughly immersed.

They will say anything with non-zero probability.

It's easy to lose sleep when playing video games. Especially when you feel the weight of the world on your shoulders.

Sleep deprivation plus LLM-induced hallucinations isn't fun. Make sure to get sleep.

Beware: LLMs will continue negative thinking. You can counter by steering them toward positive thinking and solutions. Obviously, not all negative thoughts are within your current ability to solve or counter (the heat death of the universe, for instance). Don't get stuck down a branch of negative thoughts and despair.

"By the way, if there is an easy way to distinguish good idea from bad idea, I'd love to have a pointer to it. Which would be mandatory to know what idea to actually steal. "

My crack at a solution to this problem was to learn to recognize ideas that are useful, then filter those by how moral they are.

I fail all the time at this. I miss things. I fail to grasp the idea or fail to find a use case. I fail to judge the moral consequences of the idea.

I find it easier to find ideas that are useful to a problem I'm immediately facing rather than useful in general, which narrows my filter bubble to just ideas related to programming, as those are the problems I encounter and think about the most.

It has a better description of the algorithm than other sources that have written about it or made videos about it.

I feel like the algorithm is just a clever search over computable programs that can satisfy the criteria, and it falls into the same pitfalls as other algorithms that do the same thing. Mostly, it assumes that there exists a program in the search space that fits the criteria it's searching for. I guess that if no such program exists, the probabilities wouldn't converge to a well-calibrated number.
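To make the worry concrete, here's a minimal sketch of that kind of search, using a hypothetical toy expression language rather than the actual algorithm discussed: enumerate candidate programs shortest-first and keep the first one consistent with the criteria (here, input/output examples). If no program in the space fits, the search simply exhausts itself with nothing to return, which is the non-convergence concern above.

```python
# Toy illustration (an assumption for this sketch, not the real algorithm):
# the "program space" is a small set of arithmetic expressions in x.
OPS = ["x", "x+1", "x*2", "x*x", "x+2", "x*3"]

def run(program, x):
    # Each "program" is just a Python expression in the variable x.
    return eval(program, {}, {"x": x})

def search(examples):
    """Return the first (shortest-first) program consistent with all examples,
    or None if nothing in the space fits the criteria."""
    for program in sorted(OPS, key=len):  # shorter first: a crude simplicity prior
        if all(run(program, x) == y for x, y in examples):
            return program
    return None

print(search([(1, 2), (2, 4), (3, 6)]))  # finds "x*2"
print(search([(1, 5)]))                  # no program in the space fits: None
```

The shortest-first ordering stands in for the length-based prior such searches typically use; the interesting failure mode is the second call, where the assumed program doesn't exist in the space.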

Where is the best place to discuss politics in the less wrong diaspora?

Harry can also dispel any of his transfigurations wandlessly and wordlessly, so he can dispel any toxic substance he creates as it reaches him.

It was a reply to a post on Eliezer Yudkowsky's Facebook.

Is there any discussion on the uses of friendliness theory outside of AI?

My first thought was that it could be useful for governance in politics, corporations, and companies.

I heard about DAOs (decentralized autonomous organizations), which are weak AIs that can piggyback off of human general intelligence if designed correctly, and I thought it would be useful for those too, especially because a DAO has a lot of the same problems that good old-fashioned AGI has.

Apparently, it boils down to visibility: answer the smallest number of questions compatible with the class of women you are interested in while still maintaining a high match percentage (each answer is a potential mismatch). This supposedly leads to a high match, which means you will turn up in their searches more often. Then visit thousands of profiles (the example used a script to do it automatically). They will see that you visited; some will be intrigued enough to visit you back, and of those, some might send a message. It is probably worth sending a message to visitors anyway.
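The "each answer is a potential mismatch" point can be sketched with a toy match model (an assumption for illustration, not the site's actual formula): treat the match as the geometric mean of how well each person's answers satisfy the other, averaged over commonly answered questions. Every extra answer is another chance to drag the averages down, so answering only questions you're confident in preserves a high score.

```python
from math import sqrt

def match_percent(my_scores, their_scores):
    """Toy match model. my_scores[i]: how well their answer to question i
    satisfies me (0-1); their_scores[i]: how well my answer satisfies them.
    The match is the geometric mean of the two average satisfactions."""
    n = len(my_scores)
    if n == 0:
        return 0.0
    s_me = sum(my_scores) / n
    s_them = sum(their_scores) / n
    return 100 * sqrt(s_me * s_them)

# A few safe answers keep the match at the ceiling...
print(match_percent([1.0, 1.0], [1.0, 1.0]))  # 100.0
# ...while one mismatched answer pulls the whole score down.
print(match_percent([1.0, 1.0, 0.0], [1.0, 1.0, 0.5]))
```

Real sites also penalize small sample sizes (a margin of error that shrinks as more questions are answered in common), so the strategy is a trade-off rather than a free win; that penalty is omitted here.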
