All of Matt_Stevenson's Comments + Replies

Would you classify MC-AIXI as a General AI?

5Eliezer Yudkowsky
Given Vast quantities of computing power, it would qualify as a very silly AGI which will eventually try dropping an anvil on its own head just to see what happens. Roughly, no.
0[anonymous]
No.

I think a better example than frictionless surfaces and no air resistance would be idealized symmetries. Once something like Coulomb's Law was postulated, physicists could imagine the implications of charges on infinite wires and planes to make interesting predictions.

We use the trolley problem and its variations as thought experiments in order to make predictions we can test further with MRIs and the like.

So a publication on interesting trolley-problem results would be like a theoretical physics paper showing that relativity predicts some property of black holes.

I would compare the trolley problem to a hypothetical physics problem. Just like a physicist will assume a frictionless surface and no air resistance, the trolley problem is important because it discards everything else. It is a reductionist attempt at exploring moral thought.

0lionhearted (Sebastian Marshall)
Interesting thought, but it wouldn't be difficult to take the time to make situations more lifelike and realistic. There are plenty of real-life situations that let you explore moral thought without the flaws listed above.

I think you are looking at the Trolley Problem out of context.

The Trolley Problem isn't supposed to represent a real-world situation. It's a simplified thought experiment designed to illustrate the variability of morality across slightly differing scenarios. Such problems don't offer solutions to moral questions; they highlight them.

3lionhearted (Sebastian Marshall)
I understand the supposed purpose of trolley problems, but I think they're conducive to poor-quality thinking nonetheless. I wanted to keep the post brief and information-dense, so I didn't list alternative problems, but there are a number you could use based on real history.

For instance, a city is about to be lost in war, and the military commander is going through his options. Do you order some men to stay behind and fight to the death to cover the retreat of the others, ask for volunteers to do it, or draw lots? Do you try to have everyone retreat, even though you think there's a larger chance your whole force could be destroyed? If some defenders stay, does the commander lead the sacrificial defensive force himself or lead the retreat? Etc, etc.

That sort of example would include imperfect information, secondary effects, human nature, and many different options. Trolley problems are constructed so poorly that they invite poor-quality thought, and there are plenty of examples you could use to discuss hard choices that don't suffer from those problems.

Didn't Harry also swear to keep what he and Draco experiment with secret? This is why he never told her about the magic gene either, unless I am misremembering things.

From Ch. 23

There's something called the Interdict of Merlin which stops anyone from getting knowledge of powerful spells out of books, even if you find and read a powerful wizard's notes they won't make sense to you, it has to go from one living mind to another

The Lazy Student, The Grieving Student, The Sports Fan: make the deadline for reports a curve instead of a cliff. Each day of delay costs some percentage of the grade.

I've always liked the "drop the n lowest scores" strategy. For example, 10 assignments given with the lowest 2 scores ignored.
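The two policies above (a per-day late penalty instead of a hard deadline, and dropping the n lowest scores) could be sketched as follows. This is a hypothetical illustration; the function names and the 10%-per-day rate are assumptions, not anything specified in the comments.

```python
# Hypothetical grading helpers illustrating the two policies discussed above.

def late_penalty(score, days_late, pct_per_day=0.10):
    """Deduct a fixed percentage of the score per day late: a curve, not a cliff."""
    return max(0.0, score * (1 - pct_per_day * days_late))

def drop_lowest(scores, n=2):
    """Average the scores after ignoring the n lowest."""
    kept = sorted(scores)[n:]
    return sum(kept) / len(kept)

print(late_penalty(90, 3))                       # 3 days late: 30% off a 90
print(drop_lowest([50, 60, 90, 95, 100], n=2))   # average of the top three scores
```

With the penalty a smooth function of delay, a one-day excuse is worth only a small fraction of the grade, which is what makes pre-committing to the rule cheap.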

You are pre-committing to a set of rules, where any excuse would have a much lower probability of being true. Any excuse would need to include 3 excuses. Combining the probabilities of each of the excuses will likely bring the total under your acceptabl... (read more)

7Random832
"Any excuse would need to include 3 excuses." - not as such; there is then the possibility that someone will wish to have an excuse to turn in an assignment they expect to do well on late to replace the grade of one of the other assignments which they did poorly on (or had no excuse).

Hi, I'm Matt Stevenson. 24 yr old computer scientist. I work on AI, machine learning, and motor control at a small robotics company.

I was hooked when I read Eliezer's posts on OvercomingBias about AGI, Friendly AI, the Singularity, etc.

I'd like to comment (or post) more, but I would need to revisit a few of the older posts on decision theory to feel like I'm making an actual contribution (as opposed to guessing the karma password). A few more hours in the day would be helpful.

Even if it is a gut feeling and not an explicit lie, he is still showing that his facts are weak since he's resorting to emotions.

9Kaj_Sotala
Unless you're an expert in a specific topic, it seems to me rather likely that you're bound to believe at least some things about it which are in fact false. We don't have the time or energy to comprehensively check the source of every statement we encounter, nor an ability to reliably keep track of which statements we have indeed checked. Even facts found in seemingly reliable sources, like textbooks on the topic, might be wrong.

I don't think making an erroneous statement or two is enough for us to say that his facts were weak, or that he resorted too much to emotions. If you discuss any topic long enough, the odds are that you're going to slip in a not-entirely-thought-out statement sooner or later. This is especially so since discussing a topic with someone else will force us to consider points of view we hadn't thought of ourselves, and to make up new responses on the spot.

Incidentally, having to quickly react to new points of view is what makes me a bit suspicious of the sometimes-heard claim "I debunked his claim in debate X, but then I heard him afterwards repeating it in debate Y, so clearly he's intellectually dishonest". Yes, sometimes this is true, but it might also be that when the other person had more time to reflect on their opponent's arguments, they thought they found in them a fatal flaw and could thus save their original claim. I know it's happened to me.
2[anonymous]
"Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts." Emotions are highly fallible, but they're also efficient. If you're a perfect Bayesian, you'll think everything through completely, without emotion; if your brain contains a mere 30 billion neurons, you'll use thoughts for the most important things and also emotions for everything.
4byrnema
We (three) seem to agree that the friend resorted to emotional thinking... and that would be the reason he was careless or abusive of factual truths. But I'm not convinced this is evidence that his 'facts are weak', because, actually, is there any fact of the matter as to whether a government is efficient? In other words, were they discussing something for which facts were the most relevant parameter? His friend shouldn't have lied about a fact, but in arguing-an-impression mode, facts seem much more often useful as rhetorical devices than as actual evidence.

I think this is a problem that applies to a lot of people who are socially dysfunctional, not just those with high intelligence. Generalizing from one example?

2MichaelVassar
Not even close to one example. There are lots of ways to be socially dysfunctional with low IQ, but they look totally different from what I'm describing.

It seems that if there were karma transfers in place, betting against a funny picture post would be an almost guaranteed loss.

We could set up some ground rules that would exclude you specifically from starting the post, but I don't see how any rules set up in advance would prevent collusion to create the thread. Also, any karma lost for making an explicitly bad thread would be more than made up for with the 500 karma win.

0Kevin
Yes, I proposed a bet that was nearly a guaranteed win as long as I would be allowed to start the thread. 100 karma to Matt Stevenson for pointing that out.

I'm not sure what you are trying to argue here. I am saying that trying to use a reference-class prediction in a situation where you don't have many examples of what you are referencing is a bad idea and will likely result in a flawed prediction.

You should only try to use the Outside View if you are in a situation that you have been in over and over and over again, with the same concrete results.

... then the data is most likely insufficient for reasoning in any other way

If you are using an Outside View to do reasoning and inference, then I don't know w

... (read more)

It seems that the Outside View should only be used in situations which have repeatedly produced consistent results. The procrastinating student is an example: the event has been repeated numerous times with closely similar outcomes.

If the data is so insufficient that you have a hard time casting it to a reference class, that would imply that you don't have enough examples to make a reference and that you should find some other line of argument.

This whole idea of the outside view is analogous to instance-based learning or case-based reasoning. Y... (read more)

2taw
... then the data is most likely insufficient for reasoning in any other way. Reference class of smart people's predictions of the future performs extremely badly, even though they all had some real good inside view reasons for them.

Being Watched +4-7 - This can depend on who the other person is and on the situation. I don't like pair programming, since I'm an introverted thinker and find it really distracting. But when there is someone else in the room doing work, it motivates me to do more work. I find the reverse can be true as well: if I'm around a bunch of people who are slacking off, I become less motivated.

Cripple Your Internet +5 - This is a pretty effective technique, but I have a hard time applying it consistently.

One thing I've noticed is that my akrasia, as well... (read more)

Here you are relying on Omega using two ordering systems that we already find highly correlated.

What if Omega asked you to choose between a blegg and a rube instead of A and B? Along with that, Omega tells you that it did not necessarily use the same ordering of blegg and rube when posing the question to the copy.

EDIT: More thoughts: if you can't rely on an obvious correlation between the player labels and the choices, why not have a strategy that makes a consistent mapping from the player labels to the choices?

The key to winning this game is having both partie... (read more)
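The consistent-mapping strategy described above could be sketched as follows. This is a hypothetical illustration: the idea is that both copies, without communicating, sort whatever labels and choices they are shown (lexicographic ordering here) and pair them off, so the mapping is independent of the presentation order Omega uses.

```python
# Hypothetical sketch: both players map arbitrary labels to choices by
# sorting each lexicographically and pairing them off. As long as both
# apply the same rule, they choose consistently even if Omega shuffles
# the order in which labels and choices are presented.

def choose(labels, choices):
    """Pair sorted labels with sorted choices, ignoring presentation order."""
    return dict(zip(sorted(labels), sorted(choices)))

# Player 1 and the copy see the options in different orders:
p1 = choose(["rube", "blegg"], ["B", "A"])
p2 = choose(["blegg", "rube"], ["A", "B"])
print(p1 == p2)  # both map 'blegg' -> 'A' and 'rube' -> 'B'
```

This is just lexicographic ordering used as a shared tiebreaker, which is why it only works when both parties can be expected to pick the same rule.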

1timtyler
Lexicographic ordering is indeed the most obvious one here.

Now, to return my procrastinating energies to hacking out an improved Ruby-based bash replacement. rush just doesn't have the tab completion I need to give myself the illusion of smooth productivity.

If you were to include a history of your commands across sessions, and maybe an option to dump all the commands from the current session into a .rb file, I would love you forever.

0gwern
'reify'? Roughly means to turn an abstraction (such as a goal) into something concrete (specific, implementable behaviors).

This is wonderful. I'm rather new to LW/OB and I've been reading through chains of posts.

I was about to start working on something just like this to help myself and other new readers.

Thank you.

2Vladimir_Nesov
Note that the Wiki has a complete list of posts on LessWrong.