Benquo and falenas108: I live in DC also. Let's start a DC meetup. How about some Sunday morning, 8 am, at the Starbucks at 22nd & P? They have roomy upstairs seating. Or anywhere else convenient to you all.
Jack
I'd attend a DC meetup, but maybe we should push it out at least a month or so. Otherwise it causes confusion with the Baltimore meetup, which has already been fully organized; there's no need to split attendance by holding two meetings at the same time in two places so close to each other.
I suspect (perhaps "fear") that, outside of very specific goal-oriented fields like entrepreneurship, this is more likely a symptom of self-deception about our goals.
You tell yourself that your ultimate goal is, for example, to make the world a happier place. And so it is for this ultimate reason that you decide to become a video game programmer. What a coincidence that you're a video game enthusiast who always dreamed of making the next Mario Bros. What a coincidence that it happens to pay extraordinarily well.
And if someone points out that you could probably increase world happiness more by, say, donating some of that money to charity, naturally you can come up with some convoluted explanation of why this is not (at least provably) so.
I think it happens even more on a small scale, though. When I'm working, I take breaks to cruise the internet, ostensibly to recharge and give my brain a break. While this is indeed what I'm doing at first, that explanation has usually run dry within 10 minutes. After that point, my actual goal has become putting off work because something else seems more interesting, and I'd be lying to myself to claim otherwise.
In short, we sometimes fall short of our "goals" because they're actually not our goals. Canonically, this.
I doubt that simply donating money to charity is an efficient way to make the world a better place. There are studies that question, for instance, how much good all the money we've given to developing nations has actually done.
It's definitely possible, I think, that creating a great video game might bring more happiness to the world than simply writing a check for a charity.
I am not saying, by the way, that being charitable is a bad idea. However, I do think you need to be strategic for it to be effective. For instance, it might be better to help a struggling neighbor or cousin by getting actively involved in their problems rather than just sending money. Or, if you have a specific skill that a charity organization can use, contributing that skill may be a better investment than simply giving them money.
My point is, there is no simple, clear path to making the world a better place. We all have to actively think about how to make it happen. And it may happen in unexpected ways.
Any time we want to perform a complex activity, we need to balance our time between evaluating different strategies for performing the activity and carrying out its mundane steps. If we jump right into the activity without adequate planning (and without reevaluating our plan periodically), we may perform it inefficiently. On the other hand, if we invest too much time in planning, we end up never actually "doing it."
At its simplest level, your idea can be thought of as getting stuck in a local maximum of efficiency, when additional time spent strategizing could uncover higher efficiencies.
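The local-maximum framing can be made concrete with a toy simulation. The function, peak positions, and "restart" strategy below are all illustrative assumptions of mine, not anything from the thread: a greedy climber that only ever "does" (takes uphill steps) tops out on a small peak, while a climber that also "strategizes" (tries several starting strategies and keeps the best) finds the taller one.

```python
import random

def f(x):
    # Hypothetical "efficiency" landscape: a local peak at x = 1 (height 1)
    # and a taller global peak at x = 4 (height 2), with a dead zone between.
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 2 * (1 - (x - 4) ** 2))

def hill_climb(x, steps=1000, step_size=0.05):
    # Greedy local improvement: accept a random neighbour only if it's uphill.
    for _ in range(steps):
        candidate = x + random.choice([-step_size, step_size])
        if f(candidate) > f(x):
            x = candidate
    return x

random.seed(0)

# Pure "doing": climb from wherever you happen to start. From x = 0.5 the
# greedy climber tops out on the smaller peak and stays there.
stuck = hill_climb(0.5)

# Adding "strategizing": spend some effort trying several starting points
# (alternative strategies) and keep the best result found.
best = max((hill_climb(x0) for x0 in [0.5, 1.5, 2.5, 3.5, 4.5]), key=f)

print(round(f(stuck), 2), round(f(best), 2))  # → 1.0 2.0
```

The tradeoff you describe shows up in the budget: every restart spends effort on strategizing instead of climbing, which pays off only when a better peak actually exists.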
Because this is the 'smallest generalisation' sufficient to permit newcomblike problems (such as the three I mentioned).
Btw, don't read too much into the fact that I've called these things 'robots' because in a sense everything is a robot. What I mean is something like "an agent or machine whose algorithm-governing-behaviour is 'given to us' without us having to do any decision theory". Or if we want to stick more closely to the AI context in which Wei proposed UDT, we're just talking about "another subroutine whose source code we can inspect in order to try to figure out what it does."
I interpreted the question PG was asking as, "why is it worth considering Newcomb-like problems?"
(Of course, any philosophical idea is worth considering, but the question is whether this line of reasoning has any practical benefits for developing AI software)
Given these two things, what exactly is the SIAI going to be funding?
Hmm... that list of projects worries me a little...
It uncomfortably reminds me of preachers on TV/radio who spend all their air time trying to convert new people as opposed to answering the question "OK, I'm a Christian, now what should I do?" The fact that they don't address any follow-up questions really hurts their credibility.
Many of these projects seem to address peripheral/marketing issues instead of addressing the central, nitty-gritty technical details required for developing GAI. That worries me a bit.
I'd say my most valuable skill derives from having very unusual parents, with whom I also moved around a lot, so they had a strong influence on me. Consequently, my childhood environment was fairly unique, giving me neural patterns that deviate significantly from those of most other people.
This means I sometimes behave in ways that seem "dumb", but in other instances act in ways that seem unusually intelligent.
I excel in areas where unique neural patterns are rewarded: This includes (naturally) the stock market, some types of programming, and some types of non-fiction writing. It also means that I tend to have more success using lateral approaches to solve problems, since my atypical neurology makes it more likely that I will conceive of lateral approaches that have not yet been tried by others.
The biggest downside to this (I'm speculating here) is that success from lateral problem solving correlates less directly with overall effort. Hence, there is less of a psychological reward for exerting a large effort. I suspect this makes lateral thinkers like myself trend toward lower discipline compared to others with comparable achievements.
I agree to any and all conditions for a DC meetup. Push it out, later in the day, any other location.
I just wanted to get the ball rolling. Let's start with a date. How about May 15?
That works for me too... anyone here have enough karma so that we can break this out as a separate top level post? :-)