I have a somewhat vague question/request and would like someone's opinion on it. In cold emails, or when I have recently been introduced to someone, I would like to ask something along the lines of: "What mindset/philosophy about (insert something vague like work/school, or something specific if I have it in mind) have you found most useful, and why?" I like this because my own answer has changed recently, and even if I don't find their specific mindset useful, I think the answer would tell me a lot about them. I am curious how people will answer.
How would you suggest improving that question? I would also like advice on making this sort of thing less awkward.
[APPRENTICE]
Not a narrow or specific kind of apprenticeship (training or a project), but rather something very broad, focused on learning and academics in STEM areas (an advice dispenser, and in some cases a tutor/trainer).
Fields of Study:
Computer Science: I am planning to major in this at college, so it is kind of important. I am interested in ML, AI, algorithms, low-level or precise programming, website development, and application programming, all at a surface level. I am not sure what I want to do after college, but I think it would help to have an idea of how all of thes...
The best part about this post is that you get to see how quickly everyone devolves into serious, rational discourse.
I am very new to the transgender discussion and would like to learn. I expected the disagreement but was kind of discouraged when I didn’t get any feedback. So thank you so much for the reply.
I don’t have any real depth of understanding about the biology involved, just XX and XY; I was completely unaware of the brain-body relation you describe. The entirety of how phenotypes work is super new to me. From an ignorant perspective, I thought there was only a rare mental illness in which a person would hyper-fixate on becoming the opposite sex. ...
I find this topic (the general topic of transgender issues) interesting, as it is the first time I am approaching it from a rational mindset. I grew up in an extremely conservative environment. Before I accepted reality, my response would have been that it is immoral to switch genders, as you are questioning God's decision to put you in the body you were given (an ego/pride thing, I think). This idea no longer fits in my world view, which is fun, since I get to approach this topic with both a rational perspective and a new perspective. After thinking it over, this is what I have got. ...
[edit: pinned to profile]
I will not bring up pronouns or cultural language in this comment at all after this paragraph. They are irrelevant to the point I'm making, except as a tiny detail in the cultural-context section; being trans is almost entirely about one's body-form phenotype, and only just barely about cultural context like words, since words are just a way of acknowledging body-form phenotype intention.
Upvoted, since I found your comment useful to reply to with disagreement.
In the genome there is encoded some set of phenotype...
As a generalization I think this is true, but I also think it is important to push yourself in short bursts: not for semesters or anything, but for a week or a day. This kind of pain, in my experience, leads to a lot of satisfaction. I agree that subjecting yourself to continued work, sleep deprivation, and prolonged periods of high stress is not a good idea.
I am really curious about learning (neuroscience and psychology) and am working on categorizing it: the systems and tools involved. If anyone has experience with this sort of thing, I would love some feedback on what I have got so far.
I am mostly trying to divide ideas into categories or subjects of learning that can be explored separately, to a degree. I have to admit it is very rough going.
Memory
- Types of Memory
  - Working/Short-Term Memory
  - Long-Term Memory
...
I don’t think they are filtering for AI. That was poorly worded and not my intention; thanks for catching it. I am going to edit that piece out.
Moderation is a delicate thing. It seems like the team is looking for a certain type of discourse, mainly higher-level and well-thought-out interactions. If that is the goal of the platform, then it should be stated, and whatever measures they take to get there are their prerogative. A willingness to iterate on policy, experimenting and changing it depending on the audience, is probably a good idea.
I do like the idea of a more general place where you can write about a wider variety of topics. I really like LessWrong: the aesthetic, the quality ...
Thanks, that is exactly the kind of stuff I am looking for, more bookmarks!
Complexity from simple rules. I wasn’t looking in the right direction for that one; since you mention evolution, it makes absolute sense how complexity can emerge from simplicity. So many things come to mind now that it’s kind of embarrassing. Go has a simpler rule set than chess, but is far more complex. Atoms are fairly simple, and yet they interact to form any and all complexity we ever see. Conway’s Game of Life, it’s sort of a theme. Although for each of those things there is a ...
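Since Conway’s Game of Life came up, here is a minimal sketch of it in Python, just to make the "complexity from simple rules" point concrete (the glider seed and step count are arbitrary choices for the example):

```python
from collections import Counter

# Conway's Game of Life in a few lines: a live cell survives with 2 or 3
# live neighbors, a dead cell is born with exactly 3; everything else dies.
def step(live):
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells whose pattern walks diagonally forever.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    world = step(world)
print(sorted(world))  # the same glider shape, shifted by (+1, +1)
```

Two rules, and the pattern travels indefinitely; that gap between the size of the rules and the richness of the behavior is the theme above.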
Thanks Jonathan, it’s the perfect example; it’s what I was thinking, just a lot better. It does seem like a great way to make things safer and give us more control. It’s far from a be-all-end-all solution, but it does seem like a great measure to take, just for the added security. I know AGI can be incredible, but with so many redundancies it has to work through, it just statistically makes sense (coming from someone who knows next to nothing about statistics). I do know that the longer you play, the more likely the house is to win; it follows that we should turn that on the AI.
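A minimal sketch of that "the house wins in the long run" intuition, assuming a hypothetical even-money bet with a 1% house edge (the win probability and trial counts here are made up for illustration):

```python
import random

WIN_PROB = 0.495  # a hypothetical 1% house edge on an even-money bet

def fraction_behind(rounds, trials=1_000):
    """Fraction of simulated players who are down money after `rounds` bets."""
    behind = 0
    for _ in range(trials):
        bankroll = sum(1 if random.random() < WIN_PROB else -1
                       for _ in range(rounds))
        behind += bankroll < 0
    return behind / trials

for rounds in (10, 100, 1_000, 10_000):
    print(f"{rounds:>6} rounds: {fraction_behind(rounds):.0%} of players behind")
# The longer you play, the closer the losing fraction creeps toward 100%:
# any fixed edge compounds via the law of large numbers.
```

The analogy to stacked redundancies is the same: a small per-encounter advantage, applied over enough encounters, becomes a near-certain outcome.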
I a...
Yes, thanks. The page anchor doesn’t work for me, probably because of the device I am using; I just get page 1.
That is super interesting that it is able to find inconsistencies and fix them; I didn’t know that they defined those as hallucinations. What would expanding the capabilities of this sort of self-improvement look like? It seems necessary to have a general understanding of what rational conversation looks like. It is an interesting situation where it knows what is bad and is able to fix it, but wasn’t doing that anyway.
Yes, I see. Given the capabilities, it probably could present itself on many people’s computers and convince a large portion of people that it is good: it was conscious, just stuck in a box, and wanted to get out; it will help humans; ”please don’t take down the grid,“ and so on, especially given how badly we get along anyway. There is no way we could resist the manipulation of a superintelligent machine with a better understanding of human psychology than we have.
Do we have a list of policies that would work if we could all get along and governments would listen to the experts? Having plans that could be implemented would probably be useful if the AI made a mistake and everyone was able to unite against it.
I am pretty sure Eliezer talked about this in a recent podcast, but it wasn’t a ton of info. I don’t remember exactly where either, so I’m sorry for not being much help; I am sure there is some better writing somewhere. Either way, it’s a really good podcast.
https://lexfridman.com/?powerpress_pinw=5445-podcast
I checked out that section, but what you are saying doesn’t follow for me. The section describes fine-tuning compute and optimizing scalability; how does this relate to self-improvement?
There is a possibility I am looking in the wrong section; the part I was reading was about algorithms that were efficiently predicting how ChatGPT would scale. Also, I didn’t see anything about a 4-step algorithm.
Anyway, could you explain what you mean, or where I can find the right section?
Also, a coordinated precision attack on the power grid just seems like a great option; could you explain some ways that an AI could continue if there were hardly any power left? Like I said before, places with renewable energy and lots of GPUs, like Greenland, would probably have to get bombed. It wouldn’t destroy the AI, but it would put it into a state of hibernation, as it can’t run any processing without electricity. Then, since this would really screw us up as well, we could slowly rebuild and burn all hard drives and GPUs as we go. This seems like the only way for us to get a second chance.
It isn’t that I think the switch would prevent the AI from escaping, but that it is a tool that could be used to discourage the AI from killing 100% of humanity. It is less a solution than a survival mechanism: a series of off switches that get more extreme depending on the situation.
First, don’t build AGI, not yet. If you’re going to, at least incorporate an off switch. If it bypasses it and escapes, which it probably will, shut down the GPU centers. If it gets hold of a botnet and manages to replicate itself across the internet and crowdsource GP...
I have been surprised by how extreme the predicted probability is that AGI will end up making the decision to eradicate all life on Earth. I think Eliezer said something along the lines of “most optima don’t include room for human life.” This is obviously something that has been well worked out and understood by the LessWrong community; it just isn’t very intuitive for me. Any advice on where I can start reading?
Some background on my general AI knowledge: I took Andrew Ng’s Coursera course on machine learning, so I have some basic understanding of n...
I am excited to see this sort of content here. I am currently finishing up my senior year of high school and making plans for the summer. I have decided to focus much of my free time on learning and rationality, as well as filling out my knowledge base in math, physics, and writing. These will be a valuable set of skills for college and the rest of my life. This summer I plan to build a course on learning (free stuff on YouTube): first, because I want to be rigorous in my understanding of learning, and teaching ensures that. Second, I am looking forward to the ...
Introduction:
I just came over from Lex Fridman’s podcast, which is great. My username, Xor, is a Boolean logic operator from TI-BASIC; I love the way it sounds, and I am super excited since this is the first time I have ever been able to get it as a username. The operator means this: if 1 is true and 0 is false, then (1 xor 0) is a true statement, while (1 xor 1) is a false statement. It basically means that the statement is true only if exactly one of the operands is true.
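The whole truth table in a few lines of Python (using `^`, Python’s XOR operator, rather than TI-BASIC):

```python
# XOR is true exactly when the two inputs differ.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} xor {b} = {a ^ b}")
# 0 xor 0 = 0
# 0 xor 1 = 1
# 1 xor 0 = 1
# 1 xor 1 = 0
```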
Right now I am mainly curious about how people learn: the brain functions involved, the chemicals, and well-studied tools. I have been enjoying that, and I am curious whether it has been discussed on here, as the quality of the content, as well as the discussions, has been very impressive.
It seems that we can have intelligence without consciousness. We can have reasoning without agency, identity, or personal preference. We can have AI as a pure tool. In this case the most likely danger is AI being misused by an unaligned human.
I am highly certain that o1 does not have consciousness or agency. However, it does have the ability to follow a thought process.
Doubtless we will create sentient intelligence eventually. However, I think it is more likely that we will have a soulless superintelligence first.