
Comment author: Lu_Tong 22 December 2017 09:52:56PM 1 point

Which philosophical views are you most certain of, and why? E.g., why do you think that multiple universes exist (and can you link to or give the strongest argument for this)?

Comment author: Wei_Dai 24 December 2017 01:26:37AM 0 points

I talked a bit about why I think multiple universes exist in this post. Aside from what I said there, I was convinced by Tegmark's writings on the Mathematical Universe Hypothesis. I can't really think of other views that are particularly worth mentioning (or that haven't been talked about already in my posts), but I can answer more questions if you have them.

Comment author: Wei_Dai 23 September 2017 10:02:37PM 2 points

It seems to me that the original UDT already incorporated this type of approach to solving naturalized induction. See here and here for previous discussions. Also, UDT, as originally described, was intended as a variant of EDT (where the "action" in EDT is interpreted as "this source code implements this policy (input/output map)"). MIRI people seem to mostly prefer a causal variant of UDT, but my position has always been that the evidential variant is simpler, so let's go with that until there's conclusive evidence that the evidential variant is not good enough.

LZEDT seems to be more complex than UDT, but it's not clear to me that it solves any additional problems. If it's supposed to have advantages over UDT, can you explain what those are?

Comment author: yannkyle 22 September 2017 04:25:39PM 0 points

Hello, we are 17-year-old 11th-grade students from Paris. We're doing a project on Bitcoin and cryptocurrency. This project is part of the high school diploma, and we were wondering if we could ask you a few questions about the subject. First, what is "bitcoin" to you, and what is its use? Do you think cryptocurrency could totally replace physical money, and would that be better? How long have you been working on the subject, and what do you stand for? Thank you.

Comment author: Wei_Dai 23 September 2017 04:36:13AM 0 points

First, what is "bitcoin" to you, and what is its use? Do you think cryptocurrency could totally replace physical money, and would that be better?

I'm not the best person to ask these questions.

How long have you been working on the subject, and what do you stand for?

I spent a few years in the 1990s thinking about how a group of anonymous people on the Internet could pay each other with money without outside help, culminating in the publication of b-money in 1998. I haven't done much work on it since then. I don't currently have strong views on cryptocurrency per se, but these thoughts are somewhat relevant.

Comment author: riceissa 14 September 2017 06:43:40PM 1 point

In some recent comments over at the Effective Altruism Forum you talk about anti-realism about consciousness, saying in particular "the case for accepting anti-realism as the answer to the problem of consciousness seems pretty weak, at least as explained by Brian". I am wondering if you could elaborate more on this. Does the case for anti-realism about consciousness seem weak because of your general uncertainty on questions like this? Or is it more that you find the case for anti-realism specifically weak, and you hold some contrary position?

I am especially curious since I was under the impression that many people on LessWrong hold essentially similar views.

Comment author: Wei_Dai 23 September 2017 04:24:48AM 2 points

I do have a lot of uncertainty about many philosophical questions. Many people seem to have intuitions that are too strong or that they trust too much, and they don't seem to consider that the kinds of philosophical arguments we currently have are far from watertight, or that there are lots of possible philosophical ideas/positions/arguments yet to be explored by anyone, which might eventually overturn their current beliefs. In this case, I also have two specific reasons to be skeptical of Brian's position on consciousness.

  1. I think for something to count as a solution to the problem of consciousness, it should at minimum have a (perhaps formal) language for describing first-person subjective experiences or qualia, and some algorithm or method of predicting or explaining those experiences from a third-person description of a physical system, or at least some sort of plan for how to eventually get something like that, or an explanation of why that will never be possible. Brian's anti-realism doesn't have this, so it seems unsatisfactory to me.
  2. Relatedly, I think a solution to the problem of morality/axiology should include an explanation of why certain kinds of subjective experiences are good or valuable and others are bad or negatively valuable (and a way to generalize this to arbitrary kinds of minds and experiences), or an argument for why this is impossible. Brian's moral anti-realism, which goes along with his consciousness anti-realism, also seems unsatisfactory in this regard.

Comment author: Habryka 17 September 2017 02:48:45AM 1 point

We are planning to leave the wiki up, and probably restyle it at some point, so it will not be gone. User accounts will no longer be shared for the foreseeable future, though, which I don't think will be too much of an issue.

But I don't yet have a model of how to make the wiki in general work well. The current wiki is definitely useful, but I feel that its main use has been the creation of sequences and collections of posts, which is now integrated more deeply into the site via the sequences functionality.

Comment author: Wei_Dai 17 September 2017 04:56:53PM 3 points

The wiki is also useful for defining basic concepts used by this community, and linking to them in posts and comments when you think some of your readers might not be familiar with them. It might also be helpful for outreach; for example, our wiki page on decision theory shows up on the first page of Google results for "decision theory".

Comment author: IlyaShpitser 17 September 2017 01:08:59AM *  4 points

Google is using a much more complicated algorithm that is constantly tweaked, and is a trade secret -- precisely because as soon as it became profitable to do so, the ecosystem proceeded to game the hell out of PageRank.

Google hasn't been using PageRank-as-in-the-paper for ages. The real secret sauce behind Google is not eigenvalues; it's the fact that it's effectively anti-inductive, because the algorithm isn't open and there is an army of humans looking for attempts to game it and modifying it as soon as such an attempt is found.
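(For context, PageRank as described in the original paper is just the principal eigenvector of a damped link matrix. Below is a minimal power-iteration sketch over a made-up four-page link graph, purely to illustrate that formulation; it is not anything Google actually runs.)

```python
# Minimal sketch of the original eigenvector formulation of PageRank.
# The four-page link graph below is made up for illustration.
import numpy as np

def pagerank(adjacency, damping=0.85, iterations=100):
    """Power iteration on the damped link matrix; converges to the
    principal eigenvector, whose entries are the PageRank scores."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=0)
    out_degree[out_degree == 0] = 1  # guard against pages with no outlinks
    transition = adjacency / out_degree  # column j: page j splits its vote evenly
    scores = np.full(n, 1.0 / n)
    for _ in range(iterations):
        scores = (1 - damping) / n + damping * transition @ scores
    return scores

# links[i, j] = 1 means page j links to page i.
links = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(links))  # higher score = more "important" page
```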

Comment author: Wei_Dai 17 September 2017 02:04:03AM 8 points

Given that, it seems equally valid to say "this will work, for the same reason that PageRank worked", i.e., we can also tweak the reputation algorithm as people try to attack it. We don't have as many resources as Google, but then we also don't face as many attackers (with as strong incentives) as Google does.

I personally do prefer a forum with karma numbers, to help me find quality posts/comments/posters that I would likely miss or have to devote a lot of time and effort to sift through.

Comment author: paulfchristiano 16 September 2017 03:43:55AM 0 points

I think that Facebook's behavior has probably gotten worse over time as part of a general move towards cashing in / monetizing.

I don't think I've looked at my feed in a few years.

On the original point: I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.

Comment author: Wei_Dai 17 September 2017 01:33:51AM 0 points

On the original point: I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.

Why? There are plenty of theoretical models in economics where at equilibrium total welfare does not get maximized. See this post and the standard monopoly model for some examples. The general impression I get from studying economics is that the conditions under which total welfare does get maximized tend to be quite specific and not easy to obtain in practice. Do you agree? In other words, do you generally expect markets to have socially efficient equilibria and expect Facebook to be an instance of that absent a reason to think otherwise, or do you think there's something special about Facebook's situation?
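(To make the monopoly example concrete, here is the standard linear-demand calculation, using illustrative symbols: inverse demand P = a - bQ with a > c and b > 0, and constant marginal cost c. The monopoly equilibrium leaves total welfare strictly below the competitive level.)

```latex
% Standard linear-demand monopoly model (illustrative symbols a, b, c).
\begin{align*}
\text{Competitive: } & P = c, \quad Q_c = \frac{a-c}{b}, \quad
  W_c = \tfrac{1}{2}(a-c)\,Q_c = \frac{(a-c)^2}{2b} \\
\text{Monopoly: } & MR = a - 2bQ = c \;\Rightarrow\;
  Q_m = \frac{a-c}{2b}, \quad P_m = \frac{a+c}{2} \\
& W_m = \underbrace{\frac{(a-c)^2}{8b}}_{\text{consumer surplus}}
      + \underbrace{\frac{(a-c)^2}{4b}}_{\text{profit}}
      = \frac{3(a-c)^2}{8b} < W_c \\
\text{Deadweight loss: } & W_c - W_m = \frac{(a-c)^2}{8b} > 0
\end{align*}
```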

Comment author: John_Maxwell_IV 09 September 2017 07:02:58AM 3 points

So what's the fix here? If people think mailing lists work better than peer review, maybe an organization like OpenPhil should set up a mailing list for academics working on AI safety and award grants based on discussions on the mailing list? Academia has a lot of momentum behind it, and it seems more efficient to redirect that momentum than try to set up something new from scratch.

Comment author: Wei_Dai 09 September 2017 07:21:01PM 0 points

It's probably not as simple as that. Part of why online discussions work as well as they do is probably that there's no money riding on them. If funders start making grant decisions based on mailing list discussions, we might start seeing mailing lists becoming politicized to an uncomfortable and unproductive degree. I think for now the "fix" is just for people to monitor efforts to reform peer review in academia and adopt the ones that work well into the AI safety field, and also maintain a number of AI safety research institutions with diverse cultures instead of e.g. demanding that everyone publish in academic venues as a condition for funding.

Comment author: paulfchristiano 02 December 2016 06:08:40PM *  1 point

How do you expect this to happen?

I think there are two mechanisms:

  • Public image is important to companies like Facebook and Google. I don't think that they will charge for a user-aligned version, but I also don't think there would be much cost to ad revenue from moving in this direction. E.g., I think they might cave on the fake news thing, modulo the proposed fixes mostly being terrible ideas. Optimizing for user preferences may be worth it in the interests of a positive public image alone.
  • I don't think that Facebook ownership and engineers are entirely profit-focused; they will sometimes do things just because they feel it makes the world better at modest cost. (I know more people at Google and am less informed about FB.)

Relating the two, if e.g. Google organized its services in this way, if the benefits were broadly understood, and if Facebook publicly continued to optimize for things that its users don't want optimized, I think it could be bad for the image of Facebook (with customers, and especially with hires).

I'd be quite surprised if any of these happened.

Does this bear on our other disagreements about how optimistic to be about humanity? Is it worth trying to find a precise statement and making a bet?

I'm probably willing to give > 50% on something like: "Within 5 years, there is a Google or Facebook service that conducts detailed surveys of user preferences about what content to display and explicitly optimizes for those preferences." I could probably also make stronger statements re: scope of adoption.

And why isn't it a bad sign that Facebook hasn't already done what you suggested in your post?

I think these mechanisms probably weren't nearly as feasible 5 years ago as they are today, based on gradual shifts in organization and culture at tech companies (especially concerning ML). And public appetite for more responsible optimization has been rapidly increasing. So I don't think non-action so far is a very strong sign.

Also, Facebook seems to sometimes do things like survey users on how much they like content, and include ad hoc adjustments to their optimization in order to produce more-liked content (e.g. downweighting like-baiting posts). In some sense this is just a formalization of that procedure. I expect in general that formalizing optimizations will become more common over the coming years, due to a combination of the increasing usefulness of ML and cultural change to accommodate ML progress.

Comment author: Wei_Dai 09 September 2017 04:58:00PM *  0 points

I'm curious if you occasionally unblock your Facebook newsfeed to check if things have gotten better or worse. I haven't been using Facebook much until recently, but I've noticed a couple of very user-unfriendly "features" that seem to indicate that FB just doesn't care much about its public image. One is suggested posts (e.g., "Popular Across Facebook") that are hard to distinguish from posts from friends, and difficult to ad-block (due to looking just like regular posts in HTML). Another is fake instant message notifications on the mobile app whenever I "friend" someone new, which try to entice me into installing its instant messaging app (only for me to find out that the "notification" merely says I can now instant message that person). If I don't install the IM app, I get more and more of these fake notifications (2 from one recent "friend" and 4 from another).

Has it always been this bad or even worse in the past? Does it seem to you that FB is becoming more user-aligned, or less?

ETA: I just saw this post near the top of Hacker News, pointing out a bunch of other FB features designed to increase user engagement at the expense of users' actual interests. The author seems to think the problem has gotten a lot worse over time.

Comment author: IlyaShpitser 08 September 2017 03:40:39PM *  0 points

You know what I will say: y'all should stay in your lane re: incentives.

Yudkowsky's incentives caused him to write HPMOR (which has precisely zero (0) academic value), and publish basically nothing. So as far as the mainstream is concerned his footprint does not exist. He's collecting a salary at MIRI, presumably. What is that salary buying?

Mainstream academics who collect a salary will say they teach undergraduates, and publish stuff to make grant agencies happy. Some of that stuff is useless, but a lot of it is very useful indeed.


Reform attempts for "non-aligned" ecosystems like academia will almost certainly not work because (as you all are well aware) "aligning" is hard.


MIRI has the same problem everyone else has: if it grows it will become a non-aligned ecosystem; if it doesn't grow, it will not have any impact.

Comment author: Wei_Dai 09 September 2017 07:36:48AM *  1 point

You know what I will say: y'all should stay in your lane re: incentives.

I don't understand this. Please clarify? (Urban Dictionary says "stay in your lane" means mind your own business, which is exactly what we're doing, namely trying to figure out what direction to push our own culture.)

and publish basically nothing

He's publishing mostly on Arbital these days. See this and this for examples. I'm not sure why he doesn't at least post links elsewhere to draw people's attention, though. Hopefully that will change after LW 2.0 goes live.

So as far as the mainstream is concerned his footprint does not exist.

I'm not sure what you mean by this either. Certainly the people who work on AI safety at Berkeley, OpenAI, and DeepMind all know about Eliezer and MIRI's approach to AI alignment, even if they don't agree that it's the most promising one. Are you saying that if Eliezer had published in academia, they'd be more inclined to follow that approach, as opposed to the more ML-based approaches that they're currently following?

MIRI has the same problem everyone else has: if it grows it will become a non-aligned ecosystem

I think having "aligned" human institutions is too much to hope for. As I mentioned elsewhere in this thread, perhaps the best we can do is to have different bad incentives / inefficiencies in different institutions so that they're able to reach different sets of low hanging fruit, and not all suffer from the same collective blind spots.
