EDIT: added the "rights of parents" and "simulation hypothesis" research interests.

I've started a lot of research projects and have a lot of research interests that I don't currently have time to develop on my own. So I'm putting the research interests together on this page, and anyone can let me know if they're interested in doing any joint projects on these topics. This can range from coauthoring, to simply having a conversation about these and seeing where that goes.

The possible research topics are:


That's an impressive list, congratulations on doing so much different work! I'm interested in doing anything related to decision theory.

Will contact you in a few days for more details.

I'd definitely be interested to talk more about many of these, especially anthropics and reduced impact / Oracle AI, and potentially collaborate. Lots of topics for future Oxford visits! :-)

Hope you'll get interest from others as well.

Yep, we'll have a lot to talk about!

In descending order of utility:

  • Problems with CEV: The Mahatma Armstrong argument will be part of my doctorate (whether or not I get into Oxford), as you now know. So I'm researching it either way. Researching it with you, besides being an honor, would also be very utilitarian.

  • Infinite ethics: I think we should account for the infinite causal chain of events coming out of every action. After that, many of the remaining questions would come down to finding the right set of mathematical tools to make the solution really simple and elegant, without having to deal with hyperreals, infinite shadows and whatnot all at the same time.

  • Fermi paradox: I love it; it's a nest of many of the things I like and/or find important. However, I don't have many insights here. But there are some things I find obvious which are de-emphasized in this area: (1) how strong Milan's arguments against expansion are and (2) how the many overlapping explanations add up to one big explanation (see the short sketch after this list). From a comment I sent you a while back: "For example, it is possible that while expansion-desiring civilizations are more common, they are also the ones who couldn't address many evolutionarily shaped behaviors that lead to existential risks, and hence they all go extinct. Life can be rare to the point that there are only 10 planets where intelligent life originates, and it can be the case that only half have the desire to expand, of those only half achieve the technology, of those only half don't go extinct… and this results in a very small probability that the one remaining civilization will colonize."

  • Anthropic decision theory: I haven't yet found the time (or rather, the willpower) to read your paper on it properly. If I can convince myself there could be useful research if I read the paper, great. You can see some of my ideas on SSSA in the other comment I left you today.
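To make the point about overlapping explanations concrete, here is a minimal Python sketch (my own toy illustration, using the round numbers from the comment quoted above rather than any empirical estimates) of how several moderate filters multiply into a small expected number of colonizing civilizations:

```python
# Toy Fermi-paradox filter model: each filter passes only a fraction of the
# civilizations from the previous stage, so even mild filters compound.
# The numbers are the illustrative ones from the comment above (10 planets,
# each filter passing roughly half), not empirical estimates.

n_intelligent_planets = 10            # planets where intelligent life arises
filters = {
    "desires expansion": 0.5,         # fraction that want to expand
    "achieves the technology": 0.5,   # fraction that develop the capability
    "avoids extinction": 0.5,         # fraction that survive long enough
}

expected = float(n_intelligent_planets)
for name, fraction in filters.items():
    expected *= fraction
    print(f"after '{name}': {expected:.2f} expected civilizations remain")

# With these toy numbers only ~1.25 expanding civilizations are expected,
# so it is unsurprising if none is visible, without any single filter
# having to do all the explanatory work.
```

The point is only that the product of a few moderate fractions is small; the actual fractions are of course unknown.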

Let's talk more about any of those points whenever you have the time. Especially, we should really talk about the Mahatma Armstrong thing; it seems to be an idea we stumbled upon more or less at the same time. I wonder if you said it to me in February, and I remembered it again as mine later in March.

I expect to be quite busy with my Master's thesis, which is on a completely unrelated subject, but I would be interested in at least discussing, and possibly also co-authoring papers on, at least the following topics:

The social feasibility of reduced impact AI and Oracle AI. In the subsections of part 5.1. of Responses to Catastrophic AGI Risk, we argued that there would be a constant and strong incentive for anyone with an Oracle AI to turn it into an active one and give it more power, so they would be unlikely to be voluntarily reined in. If that argument is correct, then we would need regulation in order to do the reining in, but that has its own challenges.

The future of surveillance. I'm generally rather concerned with the negative sides of surveillance, but I also acknowledge that the current trend is a continual increase in the amount of surveillance, and we might just be forced to make the best out of it. There is a lot of bad stuff that currently goes on undetected that one could potentially avoid with more comprehensive surveillance, though it would also probably kill a lot of completely harmless stuff. Also, there's the thing about being able to use it to stop x-risks and such. It seems like a comprehensive analysis of the exact function and (positive and negative) effects of privacy is something that we'd need in order to properly evaluate various surveillance scenarios.

The Turing test. I worked on a draft of a summary paper on measures of general intelligence last year, but we never got around to finishing it. I e-mailed you with a link to it.

The problems with CEV / problems with total utilitarianism. I think that one of the most important questions relating to both CEV and utilitarianism would involve figuring out what exactly human preferences are. And it looks like psychology and neuroscience are starting to produce some very interesting hints about that. Luke posted about the possibility of humans having a hidden utility function, and in my review of Hirstein's Mindmelding book, I mentioned that

As the above discussion hopefully shows, some of our preferences are implicit in our automatic habits (the things that we show we value with our daily routines), some in the preprocessing of sensory data that our brains carry out (the things and ideas that are "painted with" positive associations or feelings), and some in the configuration of our executive processes (the actions we actually end up doing in response to novel or conflicting situations). [...] This kind of a breakdown seems like very promising material for some neuroscience-aware philosopher to tackle in an attempt to figure out just what exactly preferences are; maybe someone has already done so.

Will contact you in a few days for more details.

I'm interested in working on the total utilitarianism project.

Will contact you in a few days for more details.

I won't have much free time in the near future, so I can't promise quick progress on anything, but I'm interested in doing research related to many of these, particularly infinite ethics problems. I was under the impression that the cake or death problem was solved; if there is still a cake-or-death-related open problem, I might be interested in tackling that too.

Will contact you in a few days for more details.


Oracle AI and the (non-)differences between tool AIs and agents

Instrumental AGI is something that I am working on. There might be value in collaborating. Specifically I am interested in practical boxing mechanisms informed by real-world AGI designs and the fundamental limitations of finite computational substrates.

My informed prior belief is that boxed AI is not a fundamentally hard problem: it fully maps onto computer science and security problems which have already been solved in other contexts. Further, all of the arguments I have seen against boxing suffer from either invalid premises or flawed reasoning. Still, there's much to be done in validating (or disproving) my prior assumptions. Since I am actively working to create AGI, it would be nice to get some answers before we need them. Collaboration with a philosopher on some of the more fundamental epistemic issues might be a good idea.

I have considerable interest in the future of surveillance.

However I strongly value individual autonomy so I tend to view mass surveillance as evil. Looking at your links I rather suspect we are on different sides of the barricades.

Looking at your links I rather suspect we are on different sides of the barricades.

Not necessarily a problem. I think I need to expose myself to the potential problems of surveillance now, and what's really missing from the research now is a good credible case that surveillance can turn a democratic government despotic. What are the mechanisms, how have they operated before, what are the warning signs, etc...

My current position is that technology is making privacy hopeless, so a full "transparent society" is the only way to deal with this - but that position may be subject to change!

Not necessarily a problem. I think I need to expose myself to the potential problems of surveillance now, and what's really missing from the research now is a good credible case that surveillance can turn a democratic government despotic.

Isn't a US government where the president claims the political powers that distinguished a Roman dictator from other Roman rulers a good example of a government turning despotic? Those powers being to use the military inside his own borders, being allowed to kill citizens of his own country without judicial oversight and waging wars by his own decision.

The director of intelligence can keep his job despite the criminal act of lying under oath to Congress. In what we call a democratic state in Europe, something like that doesn't happen.

The US government imprisons a higher percentage of its population than any other country. US congressional approval ratings are lower than the approval ratings of China's leadership.

When the UK is using antiterrorism legislation for Glenn Greenwald's boyfriend, do you really believe that the NSA that surveilled Martin Luther King in his day doesn't also use antiterrorism legislation to go after political dissidents?

In another case that's very troubling: after the savings and loan debacle, the US government did prosecute bankers. After the last crisis they didn't. That indicates a shift away from being a country where there's rule of law.

Just take those political changes of the last two decades and extrapolate from them.


A while ago there was an EU paper that suggested that in a train station an AI should trigger a safety alert if a person isn't entering a train and is effectively there too long.

You have the opportunity to punish various sorts of thoughtcrime with a lowered credit score or with a score that increases the likelihood that you will get audited.

I think I need to expose myself to the potential problems of surveillance now, and what's really missing from the research now is a good credible case that surveillance can turn a democratic government despotic.

Don't forget that research doesn't happen in a vacuum. Part of the danger of powerful surveillance is that those organisations have power that they can use to dissuade such research.

Those powers being to use the military inside his own borders, being allowed to kill citizens of his own country without judicial oversight and waging wars by his own decision.

And these powers derive from the 9/11 trauma, not from mass surveillance. And we've seen far worse in the US in the last 60 years.

I certainly agree that surveillance enables bad governments, but I've yet to see a good argument that surveillance causes good governments to go bad (eg UK vs France).

The director of intelligence can keep his job despite the criminal act of lying under oath to Congress. In what we call a democratic state in Europe, something like that doesn't happen.

Alas, that does happen, and has happened, regularly over the last years, decades, and centuries.

Don't forget that research doesn't happen in a vacuum. Part of the danger of powerful surveillance is that those organisations have power that they can use to dissuade such research.

If I find anything like that happening, you'll all be the first to know!

When the UK is using antiterrorism legislation for Glenn Greenwald's boyfriend,

Nitpick: Who was carrying large numbers of classified documents on him when caught.

I mostly agree with your point.

Nitpick: Who was carrying large numbers of classified documents on him when caught.

Nitpick doubled: (a) Why does the UK care about documents classified by the US? and (b) The antiterrorism legislation used was designed (and was explicitly promised to be used only) for cases rather more serious than carrying classified documents.

Who was carrying large numbers of classified documents on him when caught.

Which isn't a crime. And even if it were, because it violated some secret UK gap order law, it still isn't terrorism.


surveillance can turn a democratic government despotic

There are more failure modes than this.

In general, it's fairly complicated, because to discuss what's good or bad about X you need a fleshed-out value framework which will allow you to make explicit what "good" and "bad" mean and which will also allow you to specify shades of grey and talk about trade-offs.

It's possible to ignore this issue and just attempt to discuss what kinds of situations and consequences X will lead to without evaluating things as good or bad -- but that discussion will still be colored by the values and the preferences. Having these values "invisible" and unexamined tends to... complicate discussions :-)

However I strongly value individual autonomy so I tend to view mass surveillance as evil.

Individual autonomy is a complicated thing. You could say that an individual who is allowed to wear an always-recording Google Glass has more autonomy than an individual who's forbidden from using that technology in that way.

It's worth analysing the effects of technology in detail instead of falling into automatic reactions.

Individual autonomy is a complicated thing.

No, I don't think it's particularly complicated.

an individual who is allowed

An individual who is allowed doesn't have much autonomy.

wear an always-recording Google Glass has more autonomy

I think you're confusing autonomy with some mix of data, power, and control.

I don't believe I'm "falling into automatic reactions" (though I wouldn't, would I? :-D)

though I wouldn't, would I?

It certainly doesn't sound like it.

It looks to me like you're just arguing with the words Christian's using rather than engaging with the substance of what he's trying to say.

I must confess to being unable to see substance.

I think the point is that in order to resist mass surveillance, you need to restrict the individual right to record what they see fit.

Mass surveillance is surveillance of the masses, not surveillance by the masses.

It's what NSA does, not what a bunch of Google geeks in Palo Alto do.

And the individual right to record what they see fit is subject to restrictions, of course. We can discuss what these restrictions might or should be, but that's not what I was talking about.

Mass surveillance is surveillance of the masses, not surveillance by the masses.

Surveillance by the masses makes surveillance of the masses trivially easy.

Only if the masses conveniently store all their audio and video records in locations that you can easily and cheaply access.

Which they/we currently do. Generally, accessing recordings and data is easy compared with recording them in the first place.

Mass surveillance is surveillance of the masses, not surveillance by the masses.

If the masses are surveilling, then you also have surveillance of the masses.

Mass surveillance -- of the masses -- is a reality right now and has been for many years.

Surveillance by the masses is a possibility that may or may not happen in the future, in a form that we don't know and can only speculate about.

If I lived in a country where it was legal, I would nowadays record the sound around me 24/7 and save it for my personal archive.

However, I live in a country where I'm not allowed to, because the laws say that would violate the rights of other people.

Do you argue that those laws don't reduce my autonomy at all?

Autonomy is generally defined as "the capacity of a rational individual to make an informed, un-coerced decision." (Wikipedia) (emphasis mine)

The laws which limit your right to record the sound around you limit your freedom in the same way the laws which limit your right to kill, rape, or rob other people limit your freedom.

Is that a yes or a no?

It's a no -- these laws do not reduce your autonomy in a significant way.

If you want to split hairs, technically speaking any restriction of any kind reduces your autonomy, but that doesn't sound like a useful direction for a discussion.

Do you think that UK style gap order laws that prevent defamation do reduce the autonomy of journalists?

Would you consider a government that forbids me from running an ad that tells people to smoke to reduce my autonomy?

Do you consider a government that forbids me from practising medicine without a license to reduce my autonomy?

It's been already mentioned in this thread, but I'll repeat and expand a bit.

Autonomy is predominantly a negative right -- a right to be free from interference and coercion. Freedom is both a negative and a positive right -- not only is it a right to be free from restrictions, but it is also a right to have the capability to do something.

Moreover, although there is no sharp boundary, autonomy mostly refers to the freedom of your mind. It's a freedom from coercion in making choices. Freedom itself is concerned more with the ability to act in the "external" physical world. They are connected, of course.

Given this distinction, your questions are about freedom, not about autonomy.

And yes, of course all and any kind of laws reduce your freedom. So what? I don't think there are many full-blown anarchists here.

autonomy mostly refers to the freedom of your mind. It's a freedom from coercion in making choices. Freedom itself is concerned more with the ability to act in the "external" physical world.

Now that makes it sound like only things like mind control and enforcing thought crimes can be restrictions of autonomy. Being under surveillance doesn't interfere with someone's ability to make rational decisions.

Given this distinction, your questions are about freedom, not about autonomy.

That's a copout. How about just answering the question as posed? A clear yes/no to the question would still help make your position clearer.

Moreover, although there is no sharp boundary, autonomy mostly refers to the freedom of your mind.

The extent to which I can save information to have it accessible in the future is very close to freedom of mind.

Take someone with a hearing aid. Do you really consider that hearing aid to be irrelevant to someone's freedom of mind? In a future in which computing costs and storage get really cheap, you could expect a hearing aid to save audio of the environment to get better at distinguishing speech in a particular moment from other sounds.

How does surveilling your communication reduce your freedom of mind or autonomy, when a secret gap order that disallows you from talking about something doesn't reduce your freedom of mind or autonomy?

A clear yes/no to the question would still help make your position clearer.

I believe a yes/no answer will mislead you further, but be my guest: I am not sure what "gap order laws" are, but for libel/defamation laws the answer is no. The answer is no for the second and the third questions as well.

The extent to which I can save information to have it accessible in the future is very close to freedom of mind.

No, I don't think so. Frankly the claim that the ability to record other people's activities is a matter of the freedom of your mind looks ridiculous to me.

Do you really consider that hearing aid to be irrelevant to someone's freedom of mind?

Yes, I do.

I am not sure what "gap order laws" are, but for libel/defamation laws the answer is no.

Sorry for the typo. I meant gag order laws. A libel suit where you are not allowed to say that you are being sued for libel.

Yes, I do.

Then why isn't scanning someone's email also irrelevant to someone's freedom of mind?

Then why isn't scanning someone's email also irrelevant to someone's freedom of mind?

Laws restricting your freedom are mostly like fences: they separate certain areas of behavior and post signs "Do not go there or bad things will happen to you".

Surveillance isn't like a fence. It is like living in an aquarium with no place to hide.

I don't understand this distinction you're trying to make between autonomy and freedom. Also, you haven't explained how mass surveillance interferes with individual autonomy, which is a pretty crucial part of an argument that mass surveillance is evil because you value individual autonomy.

I don't understand this distinction you're trying to make between autonomy and freedom.

Autonomy is mostly a negative right -- it's a right to be free from interference and coercion.

Freedom is both a negative and a positive right -- it's the lack of restrictions (negative) but it's also the capability to do things (positive).

Also, you haven't explained how mass surveillance interferes with individual autonomy

Do you think your behavior would change if you knew that every moment of your life was observed and recorded by government agents?

Or, simpler, have you read 1984?

Or, simpler, have you read 1984?

Can we avoid mentioning 1984 in surveillance discussions? I've encountered many anti-surveillance arguments that eventually boiled down to "1984 is scary". But 1984 was a work of fiction, and as far as predictions go, it was wrong all over the place.

I want to be able to phrase good anti-surveillance arguments that people will find just as valid as if 1984 had never been written.

But 1984 was a work of fiction, and as far as predictions go, it was wrong all over the place.

The point of mentioning 1984 isn't to use it as a study or a forecast. The point is reaction to the world depicted in it: most people find totalitarian, total-surveillance societies undesirable, disturbing, and basically evil.

1984 is also useful as a well-known reference. If someone says "I have nothing to hide, I don't need privacy" you can ask him whether he'd be fine with the levels of surveillance depicted in 1984. He might say "yes, I don't care", he might say "no, that's too much", he might say "only in a democratic society", etc.

But would you be fine with Bentham's Panopticon, for example? Are examples of Soviet Russia, Eastern Germany, etc. OK?

Autonomy is mostly a negative right -- it's a right to be free from interference and coercion.

Freedom is both a negative and a positive right -- it's the lack of restrictions (negative) but it's also the capability to do things (positive).

In that case, anything that a government (or any other external agent) does to decrease someone's freedom also decreases their autonomy, unless you want to redefine "interference" too. If you take away my capability to do things, you are interfering with me. Based on the examples you provided, it sounds to me like the difference between autonomy and freedom is that "autonomy" is an applause light, so limits to freedom that you approve of aren't limits to autonomy, but limits to freedom that you don't approve of are.

Do you think your behavior would change if you knew that every moment of your life was observed and recorded by government agents?

No. All the laws that I regularly break are laws that enough people regularly break that the police would very quickly give up even pretending to enforce them. Things that aren't illegal but I wouldn't want known would rarely end up exposed even if the surveillance system did not have well-designed protections for the surveilled, simply because the government agents involved wouldn't have time to keep track of and exploit everyone's embarrassing secrets.

Or, simpler, have you read 1984?

No, but I know what you are referring to. This is not an inevitable consequence of surveillance.

Selective enforcement of laws can be a huge problem. If for some reason a local cop decides he doesn't like you, having all your crimes recorded in advance will make retaliation much easier.

Full transparency mitigates selective enforcement - if you can always point out the similar crimes other people are doing, selective enforcement becomes untenable in any semi-democratic society.

if you can always point out the similar crimes other people are doing, selective enforcement becomes untenable in any semi-democratic society

That is empirically not true -- well, unless you don't consider the US to be a "semi-democratic society".

We don't have recordings of rich bankers doing cocaine. Saying "but they also do it" is very different from having the recorded proof of this fact.

The relevant terms are "selective enforcement" and "selective prosecution". Both are fully legal (as long as you don't show bias against any of the protected classes) and commonly practiced.

As a trivial example try telling the traffic warden that she can't give you a parking ticket because there is a bunch of illegally parked cars without tickets around.

This is true. If the surveillance system does not come with well-designed protections for those being watched, then problems like that can happen. Having a thorough surveillance system would make it possible to protect people from things like that, though of course that doesn't mean it will actually happen. I'm not actually confident that increased surveillance would be a good thing; I was just arguing that "individual autonomy is good; therefore surveillance is bad," is not a coherent argument.

Which direction would you like to develop in within these concept spaces?

Hi Stuart, I'm not sure if this post is still active but I've only recently come across your work and I'd like to help. At the moment I'm particularly interested in the Control problem, the Fermi paradox and the simulation hypothesis, but I'm sure a chat with you would spur my interest in other directions too. Would be great if you could get in touch so maybe we can figure out if I can be of any help.