All of JeffreyK's Comments + Replies

I would say not interviewing people from psychology and religion may be a weak point in your admittedly informal research. I wouldn't be able to speak of consciousness without speaking of how we see ourselves through one another; in fact, without that we enter ill health. The very basis of most contemporary psychology is how humans maintain health via healthy relationships; therapy is an unfortunate replacement for a lack of original healthy relationships. If this is required for human minds to thrive, it must be essential...because I'm not STEM or Rat... (read more)

Here is, I think, the seminal quote of the piece: "There is no future scenario where 2/3 of all humanity are not significant."

Yes, as I mention, I view it as culture, which, as you say, is similar to an art form...certainly in cultures it creates cultural norms...so we're on a similar page there. And I can see how it might not seem relevant to AI alignment to those deeply involved in training work or other direct aspects, but what I'm hoping to consider is the idea that since 6 billion humans find religion significant in their lives, they as a giant force may help or come against AI development. The simple point is that team humanity is making AI, and a bunch of our team members are going to ... (read more)

Vladimir_Nesov
It's part of the content of civilization, and the content of civilization is significant as a whole. Alignment is intended to ensure that civilization doesn't end up getting discarded. Similarly, when developing a language model (an AI that learns and can use language), paying particular attention to the word "violin" is not a relevant thing to do, but that word is part of the language, and a sensible language model must in particular develop an aptitude for working with it.

Haha, meh...I don't think you're thinking big enough. There will always be ethicists and philosophers surrounding any great human endeavor who are not themselves technically proficient...certainly they should pursue lifelong education, but if you're not good at coding or math, you're just not going to ever understand certain technical issues. So saying that without that understanding their effectiveness is nil is just not understanding the nature of how humanity progresses on big issues. It's always a balance of abstract and concrete thinkers...they must work together. The... (read more)

ChristianKl
To deal effectively with a topic you need to understand something about it. If you want to be helpful as an ethicist for developing driverless cars, it helps to understand the actual ethical issues involved instead of just trying to project your own unrelated ideas onto the problem. Whether or not a driverless car is allowed to violate laws to achieve other goals, such as avoiding accidents, is an important ethical issue. Programmers have to decide, and regulators have to decide whether to allow companies to produce driverless cars that violate laws. Instead, ethicists who are too lazy to actually understand the subject matter pretend that the most important ethical issue with driverless cars is the trolley problem, which in turn ignores real-world effects such as opening up the possibility of trolling driverless cars by pushing a baby stroller in front of them if they are predictably coded to do everything to avoid hitting the stroller. To get back to AI safety, it's not necessary to be able to code or do the math to understand current problems in AI safety. Most of what Nick Bostrom writes, for example, is philosophical in nature and not directly about math or programming.

I was happy to read your Golden Rule idea...I just posted my own version of the Golden AI Rule a few days ago. 

The Golden AI Rule: AI will only be as good to us as we are to each other. 

Well, that's going to a level I wouldn't have been able to imagine before. Microfiber cloths were a great advancement over using a towel or paper towel, which didn't wick up the water as well and sometimes might scratch the lens. I have seen differences in quality between different microfibers...but going next-level to needing no cloth at all sounds great.

These are a lot of good ideas. As I comment above, I think a good approach is to truly represent that we are a bunch of younger people who fear for the future...this would appeal to a lot of folks at his level, to know the kids are scared and need his help.

I agree with TekhneMakre...it comes across like an average-looking, unconfident person asking out a gorgeous celeb. Probably a friend approaching him is best, but an email can't hurt. I would get a few people together to work on it...my approach would be to truly represent who we are as a motivated group of people with the desire to write this email to him, by saying something like, "There's a great forum of AI-interested and concerned folks that we are a part of, many of us on the younger side, and we fear for the future of humanity from misaligned AI a... (read more)

I haven't seen this hydrophobic coating yet, but some of my glasses are afraid of water, hehe. jk. I disagree with letting the dust stay so your brain will adjust...it just gets increasingly opaque, and the clear light is good. Here's my best practice: I think the main problem is getting enough soapy water on them to lift all the gunk off and not scratch the lens in the cleaning. So I rinse them first under fast-running water, then I use dishwashing detergent, foam it up in my hands, and with my foamy fingers clean the lenses and the frame, and do it twice. Rinse well, then dry with a good microfiber like the ones they give you at opticians' shops. I feel happy when I see nice and clear.

Zian
I've also seen stores use Kimwipes to clean lenses.
jefftk
This was the main thing that made cleaning them annoying for me. With the hydrophobic coating it's great not needing to do this.

I really, really hope you get into AI work. I'm a big advocate for arts and other human qualities being in AI dev. Of course, much of how it will integrate isn't really understood yet, but if we get folks like you in there early, you'll be able to help guide the good human stuff in when it becomes clearer how. Viliam commenting below that AI lacks such human instincts is exactly the point...it needs to get them ASAP, before things start going down the wrong road. I would guess that eventually we will be evaluating progress by how much an AI shows these qualities. Of course, it's still early now.