All of RaelwayScot's Comments + Replies

Deutsch briefly summarized his view on AI risks in this podcast episode: https://youtu.be/J21QuHrIqXg?t=3450 (Unfortunately there is no transcript.)

What are your thoughts on his views apart from what you've touched upon above?

Demis Hassabis has already announced that they'll be working on a Starcraft bot in some interview.

This interview, dated yesterday, doesn't go quite that far - he mentions Starcraft as a possibility, but explicitly says that they won't necessarily pursue it.

If the series continues this way with AlphaGo winning, what’s next — is there potential for another AI-vs-game showdown in the future?

I think for perfect information games, Go is the pinnacle. Certainly there are still other top Go players to play. There are other games — no-limit poker is very difficult, multiplayer has its challenges because it’s an imperfect information game. And then there are

... (read more)

What is your preferred backup strategy for your digital life?

6Kaj_Sotala
Before reading the responses, I thought this comment meant "how are you preserving information about yourself so that an upload copy of you could eventually be constructed".
4username2
An online automatic backup service (www.code42.com/crashplan)
1Stingray
External HDD
0Good_Burning_Plastic
I just keep anything I couldn't re-download or re-generate on a couple days' notice in my Dropbox folder.
3ChristianKl
I use mega.nz to back up files on my computer. I use the service because it has client-side encryption and provides 50GB of free storage. I use Evernote for all information like notes or articles I read that I want to remember. Anki has its own webserver where automatic updates happen. Gmail also automatically has the data in the cloud.
1Lumifer
"Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it" -- Linus Torvalds

I meant that for AI we will possibly require high-level credit assignment, e.g. experiences of regret like "I should be more careful in these kinds of situations", or the realization that one particular strategy out of the entire sequence of moves worked out really nicely. Instead it penalizes/reinforces all moves of one game equally, which is potentially a much slower learning process. It turns out playing Go can be solved without much structure in the credit-assignment process, hence I said the problem is non-existent, i.e. there wasn't even a need to consider it and further our understanding of RL techniques.

"Nonexistent problems" was meant as a hyperbole to say that they weren't solved in interesting ways and are extremely simple in this setting because the states and rewards are noise-free. I am not sure what you mean by the second question. They just apply gradient descent on the entire history of moves of the current game such that expected reward is maximized.

2Vaniver
It seems to me that the problem of value assignment to boards--"What's the edge for W or B if the game state looks like this?"--is basically a solution to that problem, since it gives you the counterfactual information you need (how much would placing a stone here improve my edge?) to answer those questions. I agree that it's a much simpler problem here than it is in a more complicated world, but I don't think it's trivial.
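A minimal sketch of the counterfactual reading described here, assuming some hypothetical board-evaluation function V that returns the current player's edge:

```python
# Value-diff credit assignment: the credit for a move is the change in
# estimated edge it causes. V is any stand-in board evaluator.
def move_credit(V, board_before, board_after):
    """Credit for a move = how much it changed the estimated edge."""
    return V(board_after) - V(board_before)

# Toy usage with a placeholder evaluator (edge = sum of stone values):
V = lambda board: sum(board)
print(move_credit(V, [0, 0, 0], [0, 1, 0]))  # 1
```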

Yes, but as I wrote above, the problems of credit assignment, reward delay and noise are non-existent in this setting, and hence their work does not contribute at all to solving AI.

1Vaniver
Credit assignment and reward delay are nonexistent? What do you think happens when one diffs the board strength of two potential boards?

I think what this result says is this: "Any task humans can do, an AI can now learn to do better, given a sufficient source of training data."

Yes, but that would likely require an extremely large amount of training data, because to prepare actions for many kinds of situations you'd face an exponential blow-up in the combinations of possibilities to cover, and hence the model would need to be huge as well. It would also require high-quality data sets with simple correction signals in order to work, which are expensive to produce.

I think, abov... (read more)

2bogus
This is a well-known problem, called reinforcement learning. It is a significant component in the reported results. (What happens in practice is that a network's ability to assign "credit" or "blame" for reward signals falls off exponentially with increasing delay. This is a significant limitation, but reinforcement learning is nevertheless very helpful given tight feedback loops.)
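As a toy illustration of the exponential falloff mentioned above, under the standard assumption of a discount factor gamma (the comment doesn't specify one; 0.9 here is arbitrary):

```python
# A reward delayed by k steps contributes gamma**k to the credit assigned
# to the current action, so credit decays exponentially with delay.
def discounted_credit(reward, delay, gamma=0.9):
    """Credit a single delayed reward assigns to an action `delay` steps earlier."""
    return (gamma ** delay) * reward

for d in (0, 5, 20, 50):
    print(d, discounted_credit(1.0, d))  # 1.0, ~0.59, ~0.12, ~0.005
```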

I agree. I don't find this result to be any more or less indicative of near-term AI than Google's success on ImageNet in 2012. The algorithm learns to map positions to moves and values using CNNs, just as CNNs can be used to learn mappings from images to 350 classes of dog breeds and more. It turns out that Go really is a game about pattern recognition and that with a lot of data you can replicate the pattern detection for good moves in very supervised ways (one could call their reinforcement learning actually supervised because the nature of the problem gives you credit assignment for free).
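A minimal sketch of the "actually supervised" framing above: treat move prediction as classification of expert moves trained with cross-entropy. The tiny conv architecture and random data are stand-in assumptions, not the published network:

```python
import torch
import torch.nn as nn

# Toy position-to-move classifier for a 19x19 board (361 move classes).
move_net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 19 * 19, 361),
)
loss_fn = nn.CrossEntropyLoss()

boards = torch.randn(8, 1, 19, 19)          # batch of board encodings
expert_moves = torch.randint(0, 361, (8,))  # labels: points chosen by experts
loss = loss_fn(move_net(boards), expert_moves)
loss.backward()
```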

3moridinamael
I think what this result says is this: "Any task humans can do, an AI can now learn to do better, given a sufficient source of training data." Games lend themselves to auto-generation of training data, in the sense that the AI can at the very least play against itself. No matter how complex the game, a deep neural net will find the structure in it, and find a deeper structure than human players can find. We have now answered the question "Are deep neural nets going to be sufficient to match or exceed task-specific human performance at any well-specified task?" with "Yes, they can, and they can do it better and faster than we suspected." The next hurdle - which all the major companies are working on - is to create architectures that can find structure in smaller datasets, less well-tailored training data, and less well-specified tasks.

Then which blogs do you agree with on the matter of the refugee crisis? (My intent is just to crowd-source some well-founded opinions because I'm lacking one.)

9polymathwannabe
LW avoids discussing politics for the same reason prudent Christmas dinner hosts avoid discussing politics. If you wish to take your crazy uncle to the pub for a more heated chat, there's Omnilibrium.

What are your thoughts on the refugee crisis?

6Gunnar_Zarncke
Tim on the LW Slack gave an impressive illustration of the different levels at which the refugee crisis can be seen. He was referring to Constructive Development Theory, which you might want to look up for further context. I quote verbatim with his permission:
7Viliam
Another sad example of a problem that would be difficult but not impossible to solve rationally in theory, but in real life the outcome will be very far from optimal for many reasons (human stupidity, mindkilling, conflicts of interest, problems with coordination, etc.).

There are many people trying to escape from a horrible situation, and I would really want to help them. There are also many people pretending to be in the same situation in order to benefit from any help offered to the former; that increases the costs of the help. A part of what created the horrible situation is in the human heads, so by accepting the refugees we could import a part of what they are trying to escape from.

As usual, the most vocal people go to two extremes: "we should not give a fuck and just let them die", or trying to censor the debate about all the possible risks (including the things that already happened). Which makes it really difficult to publicly debate solutions that would both help the refugees and try to reduce the risk.

Longer-term consequences: If we let the refugees in, it will motivate even more people to come. If we don't let the refugees in, we are giving them the choice to either join the bad guys or die (so we shouldn't be surprised if many of them choose to join the bad guys).

Supporting Assad, as a lesser evil than ISIS, is probably the best realistic option, but kinda disappointing. (Also, anything that gives more power to Russia creates more problems in the long term.) It doesn't solve the underlying problem, that the states in the area are each a random mix of religions and ethnicities, ready to kill each other. A long-term solution would be rewriting the map, to split the groups who want to cut each other's throats into different states. No chance to make Turkey agree on having Kurdistan as a neighbor. Etc.

If I were a king of Europe, my solution would be more or less to let the refugees in, but to have them live under Orwellian conditions, which would expire in
1username2
Better yet, has anyone here changed any part of their life because of the refugee crisis? Why did you do this? Why haven't you done this before? Thoughts are less interesting than actions.

There's a whole -osphere full of blogs out there, many of them political. Any of those would be better places to talk about it than LW.

Just speaking of weaknesses of the paperclip maximizer thought experiment: I've seen this misunderstanding in at least 4 out of 10 instances where the thought experiment was brought up.

I think many people intuitively distrust the idea that an AI could be intelligent enough to transform matter into paperclips in creative ways, but 'not intelligent enough' to understand its goals in a human and cultural context (i.e. to satisfy the needs of the business owners of the paperclip factory). This is often due to the confusion that the paperclip maximizer would get its goal function from parsing the sentence "make paperclips", rather than from a preprogrammed reward function, for example a CNN that is trained to map the number of paperclips in images to a scalar reward.
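To make that last distinction concrete, here is a minimal sketch of what such a preprogrammed reward function might look like; the architecture, input size, and the camera-frame framing are illustrative assumptions, not anyone's actual design:

```python
import torch
import torch.nn as nn

# The agent's reward is the scalar output of a pretrained network scoring
# images (a stand-in for "number of paperclips seen") - it is never derived
# from parsing the sentence "make paperclips".
reward_model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # scalar reward
)

camera_frame = torch.randn(1, 3, 64, 64)  # hypothetical camera input
reward = reward_model(camera_frame)       # the agent maximizes this number,
                                          # whatever it actually measures
```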

0gjm
Could well be. Does that have anything to do with pattern-matching AI risk to SF, though?

I think the problem here is the way the utility function is chosen. Utilitarianism is essentially a formalization of the reward signals in our heads. It is a heuristic way of quantifying what we expect a healthy human (one that can grow up and survive in a typical human environment and has an accurate model of reality) to want. All of this only converges roughly to a common utility because we have evolved to have the same needs, which are necessarily pro-life and pro-social (since otherwise our species wouldn't be present today).

Utilitarianism crudely abstract... (read more)

Why does E. Yudkowsky voice such strong priors, e.g. with respect to the laws of physics (the many-worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (By "vulnerable" I mean that his work often gets ripped apart as cultish pseudoscience.)

My model of him has him having an attitude of "if I think that there's a reason to be highly confident of X, then I'm not going to hide what's true just for the sake of playing social games".

Viliam120

You seem to assume that MWI makes the Sequences more vulnerable; i.e. that there are people who feel okay with the rest of the Sequences, but MWI makes them dismiss it as pseudoscience.

I think there are other things that rub people the wrong way (that EY in general talks about some topics more than is appropriate for his status, whether it's about science, philosophy, politics, or religion) and MWI is merely the most convenient point of attack (at least among those people who don't care about religion). Without MWI, something else would be "the most cont... (read more)

0hairyfigment
Actually, I can probably answer this without knowing exactly what you mean: the notion of improved Solomonoff Induction that gets him many-worlds seems like an important concept for his work with MIRI. I don't know where "his work often gets ripped apart" for that reason, but I suspect they'd object to the idea of improved/naturalized SI as well.
0hairyfigment
The Hell do you mean by "computational monism" if you think it could be a "weaker prior"?

Because he was building a tribe. (He's done now).


edit: This should actually worry people a lot more than it seems to.

3ChristianKl
Given the way the internet works, bloggers who don't take strong stances don't get traffic. If Yudkowsky hadn't taken positions confidently, it's likely that he wouldn't have founded LW as we know it. Shying away from strong positions for the sake of not wanting to be vulnerable is no good strategy.

I would love to see some hard data about the correlation between the public interest in science and its degree of 'cult status' vs. 'open science'.

I mean "only a meme" in the sense, that morality is not absolute, but an individual choice. Of course, there can be arguments why some memes are better than others, that happens during the act of individuals convincing each other of their preferences.

Is it? I think the act of convincing other people of your preferred state of the world is exactly what justifying morality is. But that action policy is only a meme, as you said, which is individually chosen based on many criteria (including aesthetics, peer pressure, and consistency).

2ChristianKl
"Only a meme" doesn't negate that it's about something real and that there can be resonable arguments why some memes are better than others.

Moral philosophy is a huge topic and its discourse is not dominated by looking at DNA.

Everyone can choose their preferred state then, at least to the extent it is not indoctrinated or biologically determined. It is rational to invest energy into maintaining or achieving this state (because the state presumably provides you with a steady source of reward), which might involve convincing others of your preferred state or preventing them from threatening it (e.g. by putting them into jail). There is likely an absolute truth (to the extent physics is consistent... (read more)

3ChristianKl
Basically your argument is: "I can't think of a way to justify morality besides saying that it's my own preferred state, therefore nobody can come up with an argument to justify morality."

What are the implications of that for how we decide what the right things to do are?

2ChristianKl
Moral philosophy is a huge topic and its discourse is not dominated by looking at DNA.

Because then it would argue from features that are built into us. If we can prove the existence of these features with high certainty, then it could perhaps serve as guidance for our decisions.

On the other hand, it is plausible that evolution does not create such goals, because it is an undirected process. Our actions are unrestricted in this regard, and we must only bear the consequences of the system that our species has come up with. What is good is thus decided by consensus. Still, the values we have converged to are shaped by the way we have evolved to behave (e.g. empathy and pain avoidance).

0ChristianKl
Our culture is just as baked into us as our DNA. It's all memes.

Rather, why is doing it desirable at all? Is it a matter of the culture that currently exists? I mean, is it 'right' to eradicate a certain ethnic group if the majority endorses it?

0ChristianKl
Why do you think a biological basis has something to do with the answer?

What is the motivation behind maximizing QALYs? Does it require certain incentives to be present in the culture (endorsement of altruism), or is it rooted elsewhere?

0username2
Many people think that society is supposed to have a goal, for some reason. And QALYs are easy to measure.
0ChristianKl
Are you asking whether every human being that is alive has a motivation to maximize QALYs?

I mean a moral terminal goal. But I guess we would be a large step closer to a solution of the control problem if we could specify such a goal.

What I had in mind is something like this: Evolution has provided us with a state that is preferred by everyone who is healthy (i.e. who can, with high probability, survive in the typical situations in which humans evolved) and who has an accurate mental representation of reality. That state includes being surrounded by other healthy humans, so by induction everyone must reach this state (and also help others to reach it). I haven't carefully thought this through, but I just want to give an idea of what I'm looking for.

0ChristianKl
Evolution doesn't produce terminal goals.

Is there a biological basis that explains why utilitarianism and the preservation of our species should motivate our actions? Or is it a purely selfish consideration: I feel well when others in my social environment feel well (and is it therefore even dependent on consensus)?

0username2
Kin selection?
0ChristianKl
What do you mean with should?

Is that actually the 'strange loop' that Hofstadter writes about?

1Dagon
Hofstadter (as I remember - it's been a long time) took it a step further, granting consciousness to our models of others, and to the models of us that we model in others, etc....

Here they found dopamine to encode some superposed error signals about actual and counterfactual reward:

http://www.pnas.org/content/early/2015/11/18/1513619112.abstract

Could that be related to priors and likelihoods?

Significance

There is an abundance of circumstantial evidence (primarily work in nonhuman animal models) suggesting that dopamine transients serve as experience-dependent learning signals. This report establishes, to our knowledge, the first direct demonstration that subsecond fluctuations in dopamine concentration in the human striatum combin

... (read more)
0IlyaShpitser
Interesting, thanks!

Do Bayesians strongly believe that Bayes' theorem accurately describes how the brain updates its latent variables in the face of new data? It seems very unlikely to me that the brain keeps track of probability distributions and that they sum to one. How do Bayesians believe this works at the neuronal level?

4Creutzer
The term you will want to use in your Google search is "Bayesian cognitive science". It's a huge field. But the short answer is, yes, the people in that field do assume that the brain does something that can be modelled as keeping and updating a probability distribution according to Bayes' rule. Much of it is computational-level modelling, i.e. rather removed from questions of implementation in the brain. A quick Google search did, however, find some papers on how to implement Bayesian inference in neural networks - though not necessarily linked to the brain. I'm sure some people do the latter sort of thing as well, though.
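For readers unfamiliar with the rule being modelled, here is a minimal numeric sketch of a Bayesian update over two hypothetical hypotheses (the names and numbers are made up for illustration):

```python
# Bayes' rule: posterior is proportional to prior x likelihood,
# renormalized so the probabilities sum to one.
prior = {"hypothesis_A": 0.5, "hypothesis_B": 0.5}
likelihood = {"hypothesis_A": 0.8, "hypothesis_B": 0.2}  # P(data | hypothesis)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # roughly {'hypothesis_A': 0.8, 'hypothesis_B': 0.2}
```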

Ok, so the motivation is to learn templates with which to do correlation at each image location. But where would you get the idea to then do the same thing with the correlation map again? That seems non-obvious to me. Or do you mean biological vision?

5Manfred
Nope, didn't mean biological vision. Not totally sure I understand your comment, so let me know if I'm rambling.

You can think of lower layers (the ones closer to the input pixels) as "smaller" or "more local," and higher layers as "bigger," or "more global," or "composed of nonlinear combinations of lower-level features." (EDIT: In fact, this restricted connectivity of neurons is an important insight of CNNs, compared to full NNs.)

So if you want to recognize horizontal lines, the lowest layer of a CNN might have a "short horizontal line" feature that is big when it sees a small, local horizontal line. And of course there is a copy of this feature for every place you could put it in the image, so you can think of its activation as a map of where there are short horizontal lines in your image.

But if you wanted to recognize longer horizontal lines, you'd need to combine several short-horizontal-line detectors together, with a specific spatial orientation (horizontal!). To do this you'd use a feature detector that looked at the map of where there were short horizontal lines, and found short horizontal lines of short horizontal lines, i.e. longer horizontal lines. And of course you'd need to have a copy of this higher-level feature detector for every place you could put it in the map of where there are short lines, so that if you moved the longer horizontal line around, a different copy of this feature detector would light up - the activation of these copies would form a map of where there were longer horizontal lines in your image.

If you think about the logistics of this, you'll find that I've been lying to you a little bit, and you might also see where pooling comes from. In order for "short horizontal lines of short horizontal lines" to actually correspond to longer horizontal lines, you need to zoom out in spatial dimensions as you go up layers, i.e. pooling or something similar. You can zoom out without pooling by connecting higher-level feature detectors
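A minimal sketch of that hierarchy in code, under illustrative assumptions (single-channel filters, arbitrary sizes, untrained weights), just to show the detect-pool-detect structure:

```python
import torch
import torch.nn as nn

# A small shared-weight filter detects short lines everywhere in the image,
# pooling zooms out, and a second filter detects "lines of lines",
# i.e. longer lines. Layer sizes here are toy choices.
short_line_detector = nn.Conv2d(1, 1, kernel_size=3, padding=1)
zoom_out = nn.MaxPool2d(2)
long_line_detector = nn.Conv2d(1, 1, kernel_size=3, padding=1)

image = torch.randn(1, 1, 32, 32)
short_line_map = torch.relu(short_line_detector(image))  # where short lines are
longer_line_map = torch.relu(long_line_detector(zoom_out(short_line_map)))
```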

I find CNNs a lot less intuitive than RNNs. In which context was it an intuitive idea to train many filters and successively apply pooling and then filters again to smaller versions of the output?

7Manfred
In the context of vision. Pooling is not strictly necessary but makes things go a bit faster - the real trick of CNNs is to lock the weights of different parts of the network together so that you go through the exact same process to recognize objects if they're moved around (rather than having different processes for recognition for different parts of the image).

Could one say that the human brain works best if it is slightly optimistically biased, just enough to get the benefits of the neuromodulation that accompanies positive thinking, but not so much that false expectations have a significant potential to severely disappoint you? Are there some recommended sequences/articles/papers on this matter?

1ChristianKl
Optimism/pessimism is a one-dimensional way of looking at it. I don't think it's helpful. If you focus a lot on doing gratitude exercises, you are doing positive thinking, but that doesn't create false expectations.

Perhaps the conditions that cause the Fermi paradox are actually crucial for life. If spaceflight were easy, all resources would have been exhausted by exponential growth pretty quickly. This would invalidate the 'big distances' point as evidence for a non-streamlined universe, though.

If we are in a simulation, why isn’t the simulation more streamlined? I have a couple of examples for that:

  • Classical physics and basic chemistry would likely be sufficient for life to exist.
  • There are seven uninhabitable planets in our solar system.
  • 99.9…% of everything performs extremely boring computations (dirt, large bodies of fluids and gas etc.).
  • The universe is extremely hostile towards intelligent life (GRBs, supernovae, scarcity of resources, large distances between celestial bodies).

It seems that our simulation hosts would need to have ac... (read more)

2tailcalled
It could be that the 'external' world is completely different and way, way bigger than our world. Their world might be to our world what our world is to a simple game of life simulation.
4moridinamael
The speed of light also allows simulation domains to be cleanly truncated for parallelization.
5drethelin
Video games are one kind of simulation we generally engage in, and the answers to these kinds of questions are that things are enjoyable background or optimized for gameplay rather than something else. Games like Half-Life 2 spend a lot of time simulating really boring physics so that it can exist for the few situations in which it's actually kind of interesting. Lots of games have worlds where every single entity is hostile to the main player or damages them in some way. If we're in a simulation, we can't discount that we're being simulated in a specific way for non-obvious motivations.
7ZankerH
How do you know it isn't? Everything off the Earth could be a very simple simulation just designed to emit the right kind of EM radiation to look as if it's there. Likewise, large chunks of dead matter could easily be optimized away until a human interacts with them in sufficient detail. Other than your observation about classical physics, all your points are observations "from the inside" that could be optimized around without degrading our perception of the universe.
1[anonymous]
Wow! This will arm me with lots to disarm my political opponents' confidence in megaprojects! An excerpt from the text at RaelwayScot's link:
2gwern
Does it cover anything beyond "What You Should Know About Megaprojects and Why: An Overview", Flyvbjerg 2014?

I would say be flexible as some topics are much more complex than others. I've found that most summaries on this list have a good length.

Perhaps you can revive one of these study groups: https://www.reddit.com/subreddits/search?q=spivak

Cross-posting to all of them might reach some people who are interested.

This Baby Rudin group is currently active: https://www.reddit.com/r/babyrudin/