
Thank you, that's good to know. I'll give it a download.


If we were to blur the name Eliezer Yudkowsky and pretend we saw this as a bunch of anonymous people talking on a forum like Reddit, what would your response be to somebody who posted the things Yudkowsky posted above? What pragmatic thing could you tell that person that might actually help them? Every word can be true, but it seems overwhelmingly pessimistic in a way that is not helpful, mainly because nothing in it is actionable.

The position that timelines are too short and the best alignment research is too weak / too slow, with no avenues or ideas to make things better and no institution to trust, to the point where we are doomed now, doesn't leave much desire to do anything, which is a guaranteed failure. What should I tell this person? "You seem to be pretty confident in your prediction; what do you suppose you should do to act in accordance with these beliefs? Wirehead yourself and wait for the world to crumble around you? Maybe at least take a break for mental health reasons and let whatever will be, be. If the world is truly doomed, you have nothing to worry about anymore."

I can agree that timelines seem short and good-quality alignment research seems rare. I think research, like all things humans do, follows Sturgeon's law. But the problem is that, aside from some research that is meant only for prestige building, you can't tell which work will turn out to be crap. Nor can you tell where the future will go or what the final outcome will be. We can make use of trends, like this person was talking about, for predicting the future, but there's always uncertainty. Maybe that's all this post is: a rough assessment of one person's long-term outlook on the field. But it seems premature to say the researchers mentioned in this article are doing things that probably won't help at this point. With this much pessimism towards our future world, we might as well take the low probability of their work helping and shoot the moon with it. What have we got to lose in a doomed world?

But that's the thing: I'm sure the researchers working on alignment will continue doing it even after reading this interaction. If they give up on it, we are even more screwed. They might even feel a bit more resentful now, knowing what this person thinks about their work, but I don't think it changed anything.

Maybe I was lucky to get into the AI field within the last couple of years, when short timelines were the expectation rather than something to be feared. I never had the hope of long timelines, so I don't have to feel crushed by that hope disappearing (forgive me if I assume too much). We have the time we have to do the best we can; if things take longer, more power to us, since we get better odds of a good outcome.

Summary: While interesting, this conversation mainly updated me on the views of the writers; it didn't change anything pragmatic about my view of research or timelines.

If you think I completely misread the article, and that EY is saying something different from what I interpreted, please let me know.

I noticed that ever since I started reading math textbooks, I like written communication a lot more than verbal communication. Written communication is faster to process and usually direct about what it is trying to get across, whereas verbal communication is usually loaded with ambiguity and nonverbals. If it's important, I mostly prefer it in writing now. Of course, in both cases the communicator matters a lot for making clear what the important part being communicated is.

This leads me to be less interested in meetings at work, as things usually need to be clarified multiple times verbally that I think would be easily squared away had the words been written.

I'm not a neuroscientist, but I think a nice corollary to your theory would be to look into the minds of people with addictions. I've read a lot about addiction online and talked to a few addicts with different dependencies (alcohol, narcotics, etc.). They all seem to have induced a dim world within themselves, but a specific one that is usually only acted on with their drug of choice, rather than, say, like a psychopathic child killing animals; though late-stage addicts do routinely engage in socially harmful behavior. All addicts seem to have a similar ratchet effect take place: they get their high, the high becomes their baseline, the rest of the world goes dim or less stimulating by default, and to get a new high they have to move to a more potent drug or more stimulating resource.

I also noticed they usually seem to hide their warped utility function from themselves, only addressing it when it takes over their entire life (rock bottom). I think that might be an interesting way to look at alignment problems, because addicts are basically misaligned, mesa-optimized humans whose "inner process" hides their warped true reward function from themselves. From the point of view of addiction recovery, it's also interesting to see how a misaligned person tries to change their values to something that produces better long-term outcomes, with varying degrees of success.

I don't know if looking into addiction is something you're interested in, but I figured it was worth bringing up when I read your dim-world theory.

I am currently struggling my way through Probability and Statistics by DeGroot. I picked it because it seemed to be the best introductory probability textbook I could find, and yet it still seems like there could be better ways to present the material sometimes. I've learned a good bit from it, but I am worried about gaining and retaining useful knowledge.

My current worries are about trade-offs. I do a few of the exercises at the end of each chapter; some I get right, some I get wrong. For the ones I get wrong I usually try to see what I messed up, though some solutions seem convoluted enough that I'm not sure where to even start to get to the correct answer.

So I get some right and some wrong. Am I able to move on to the next chapter now? What internal confidence / skill level should I reach before I can move on without worrying that I might not know enough, while also not doing every single problem in the book? I know I will use probability for almost everything I do once I start building machine learning models. Can I use the knowledge that I will be reinforcing these concepts in the future to accept a lower confidence level in my skill now, at the cost of off-loading that extra development into the future? Will I even need it as much as I think I will?

I don't have a perfect mind either: no matter what level of competency I reach, my skills will decay over time unless reinforced through continuous practice, which I think will happen early-to-mid next year, since that is when I plan to go back to building machine learning models. I assume greater competency will decay more slowly / need less time to get back to its initial level. But any extra time put into building competency now leaves less time for building competency in programming.

So I am frustrated trying to find the balance between gaining better confidence in a skill and the time spent to gain that confidence. If anybody else on LessWrong has self-studied math books, I would love your answer to this problem. I will continue to think about it as well, but it has been nagging me for a while.

I've stopped bringing up the awkward truths around my current friends. I started to feel like I was spending too much of my built-up esoteric social capital on things they were not going to accept (or at least not want to accept). How can I blame them? If somebody told me there was some random field in which a select few interested people will decide the fate of all of humanity for the rest of time, and I had no interest in that field, I would want to be skeptical of it as well. Especially if they threw out figures like 15 to 25 years from now (my current timelines) as when humanity's reign over the earth will end because of this field.

I found that when I stopped bringing it up, conversations were lighter and more fun. I've accepted we will just be screwing around talking about personal issues and the issues du jour, and I don't mind it. The truth is a bitter pill to get down, and if they have no interest in helping AI research, it's probably best they don't live their lives worrying about things they won't be able to change. So for me at least, not bringing up some of those awkward truths led to improvements in my personal life.

Sometimes it really does suck to not know what you don't know. Having no college math education, I didn't know what I needed to know to be on par with math undergraduates. The further I make it into my self-selected MOOC courses meant to stand in for a CS degree, the more I realize where I am lacking and what I need to do to fix it. If I really want to put my money where my mouth is and do research, I'm gonna need to go pretty deep into math I've been avoiding.

Coding MOOCs do their best to offer their courses to anyone, even people with little to no experience in math, hence Andrew Ng brushing over all the math in his ML class. I appreciate it as somebody who doesn't know math at that level and still wants to learn ML pragmatically, but the brutal truth is that without rigorous understanding I will only be able to use research, not contribute to research, which is my goal. So I almost feel like I am starting at square one again, staring at the mountain of what I need to know ahead of me, wishing I had started sooner but trying not to be hard on myself, while mentally shifting my timelines and looking for the best math resources. I found some good ones, though.

I know it's better to figure out that I'm doing something wrong and correct it, but man, I wish studying were quicker and easier. Oh well, I was planning on doing this for the rest of my life anyway; what's a little added time to make sure I get it right?

I completely agree with the problems of making big changes all at once. But six months is a long time. I figured that if I try to implement one new skill or life improvement a month, then after a year that's twelve new things that are better for me. A month is long enough for things to sink in without getting overwhelming, and the new habit can then be carried on as routine the next month when adding another thing.

I'll be totally honest and say I don't always know what to add, or I'm too lazy to do it even over an entire month, but it's best not to be hard on yourself.

One thing I think is super important is that slips in a self-improvement routine happen: binging social media, missing a workout, having a lot of cake on a diet, etc. The most important thing is to be update-less about your failure. Stayed up too late? Set an alarm for your mostly regular time and live with the consequences. Do everything you ideally would have done that day as if you hadn't broken your own rules. Not allowing myself to death-spiral over bad decisions and forcing myself to continue like nothing happened is what I think helped me cement good practices the most.

tl;dr: Don't be too hard on yourself for failure, keep trying.

I have not, but I'll look into it. I hope it goes well for you!

Very interesting! I don't know much about the brain, but I think this post did a good job of explaining the concept and showing its importance. I wonder how the brain handles this with neuroplasticity. I've read an article from MIT about researchers rewiring eye inputs to the audio-processing parts of the brain. Would the hypothalamus keep its hyperprior on what eye data "looks like" and create loops and systems that could decode that data and reintegrate it with the undamaged processing systems? Could an AI system just as easily create or reuse existing substructures within its code? I'm too new to ML to know whether models can add layers during deployment, or how generalizability could be built into neural networks past training.
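Since I brought up adding layers during deployment: here's a minimal sketch of what that could look like mechanically, assuming PyTorch (my choice; nothing in the post specifies a framework). It only shows that bolting a new layer onto an already-trained network and training just that layer is possible; whether that gives anything like the brain's rewiring, or real generalization past training, is a separate question.

```python
# Hypothetical sketch: grow a "trained" model with a new layer after the fact.
import torch
import torch.nn as nn

# Stand-in for an already-trained network (weights are random here, just for illustration).
base = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Freeze the existing parameters so only the new structure adapts.
for p in base.parameters():
    p.requires_grad = False

# "Plasticity" step: wrap the frozen network and append a fresh layer on top.
model = nn.Sequential(base, nn.ReLU(), nn.Linear(10, 10))

# Optimize only the parameters that are still trainable (the new layer).
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

# One dummy training step on made-up data.
x = torch.randn(8, 32)
target = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), target)
loss.backward()
optimizer.step()
print(loss.item())
```

So the mechanics are easy; the hard part, as far as I can tell, is whether the new structure actually learns to reinterpret the old representations the way rewired brain regions seem to.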
