
This is potentially a naive question, but how well would the imaging deal with missing data? Say that 1% (or whatever the base rate is) of tissue samples were destroyed during slicing or expansion - would we be able to interpolate those missing pieces somehow? Do we know any bounds on the error that would introduce in the dynamics later?

wolajacy · 6mo

I strong-downvoted, because I think public protests are not a good way of pushing for change.

  1. They are a symmetric weapon.
  2. They lock you into certain positions. There is a lot of momentum in a social movement that is carried through such public displays, which makes it difficult to change or reverse your position (for example, if we learned that for reason X it is much better to speed up the development of AI, which I don't think is that improbable a priori).
  3. They promote a tribalistic, collective mindset. Protests like this are antithetical to the deep, one-on-one dialogue that LW stands for. I feel that the primary motivation for attending a protest is building camaraderie and letting out emotions, which has more downsides than upsides, especially long-term. It also supports an us-vs-them mentality.
  4. Even if they change anyone's mind, it is for the wrong reasons. Public protests by necessity have to dumb down the message to a point that fits on a poster. They lump people together to present a unified front, and by doing that lose nuance and diversity of opinions. If anyone changes their mind, it is for reasons other than the argument's merits.
  5. They are an ineffective way of using resources. The marginal value of spending time at a protest is negative for most people with any background in AI safety. It is much better to think, read papers, write papers, do experiments, chat with people around you, attend research seminars, etc., than to picket on a street. Protests signal that you have nothing more to offer than your presence.

There are some rare situations in which protests are a good choice, but mostly as an option of last resort. A possible counterpoint - that you are mostly advocating for awareness, as opposed to specific points - is moot, since pretty much everyone is aware of the problem by now: society as a whole, policymakers in particular, and people in AI research and alignment.

wolajacy · 10mo

FYI, in the answer you linked to, there is another, much easier way of doing it (& it worked for me):

tl;dr:

  • have the Android command line tools installed on a development machine, and USB debugging enabled on your device. The device does not need to be rooted
  • adb forward tcp:9222 localabstract:chrome_devtools_remote
  • wget -O tabs.json http://localhost:9222/json/list
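Not part of the linked answer, but as a sketch of what you can then do with tabs.json: the DevTools /json/list response is a JSON array of tab objects with "title" and "url" fields, so something like this prints every open tab.

```python
# Minimal sketch: list tab titles and URLs from the downloaded tabs.json.
import json

with open("tabs.json") as f:
    tabs = json.load(f)

for tab in tabs:
    print(tab.get("title", ""), "-", tab.get("url", ""))
```
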
wolajacy · 11mo

Interesting point of view. I don't think I agree with the sex triggers section: it seems that applying this retroactively would predict that the internet and video games would be banned by now (it is of course the case that in many instances they are stigmatized, but nowhere near the extent that would result in banning them).

Also, the essay does not touch on the most important piece of the equation, which is the immense upside of AGI - the metaphor about nuclear weapons spitting out gold, up until they get large enough. This means there is a huge incentive for private companies to unilaterally improve the tech, plus Moore's law making compute cheaper every year. If you can get the AI to comprehend text a bit better (or do any other sort of "backend" task), this is very different from the production of child porn, growing weed, or killing people more effectively, which are very localized sources of profit. I think only human cloning comes close as an example, but still not quite (the gains are very uncertain and temporally distant, it's more difficult to hide the experiments, the technology is much more specialised, while compute is needed in every other part of the economy, and "doing AI" is not as well-defined a category as "using human stem cells").

Suppose you want to make a binary decision with a specified bias $p$. If, say, $p = 1/8$, then you can throw a fair coin 3 times and, if you get $HHH$, take it as positive, else negative.

But if $p$ has a large denominator (say $1/1000$), or is an irrational number like $1/\pi$, this method fails or becomes expensive. There is another, really beautiful method I learned some time ago, which allows you to simulate a coin of any bias with a constant expected number of throws - exactly 2! (I lost the source, unfortunately)

It works as follows: you throw the fair coin until the first time you get a head - say this happens on your $n$-th throw. Then you accept if and only if the $n$-th digit in the binary expansion of $p$ is 1. Since the probability that the first head comes on the $n$-th throw is $2^{-n}$, the acceptance probability is $\sum_{n : b_n = 1} 2^{-n} = p$, where $b_n$ is the $n$-th binary digit of $p$, and the expected number of throws is exactly 2.
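A minimal sketch of the method in Python (my own illustration, not from any source; it reads the $n$-th binary digit with floating-point arithmetic, which is fine here since the geometric $n$ almost never gets large enough to hit precision limits):

```python
import math
import random

def biased_flip(p: float) -> bool:
    """Return True with probability p, using only fair coin flips."""
    n = 0
    while True:
        n += 1
        if random.random() < 0.5:  # fair flip; stop at the first "head"
            break
    # n-th digit of p's binary expansion 0.b_1 b_2 b_3 ...
    return int(p * 2 ** n) % 2 == 1

# Empirical check: the frequency should be close to 1/pi ~ 0.3183,
# and each call uses about 2 flips on average.
trials = 100_000
print(sum(biased_flip(1 / math.pi) for _ in range(trials)) / trials)
```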

This line of reasoning, of "AGI respecting human autonomy", has the problem that our choices, undertaken freely (to whatever extent it is possible to say so), can be bad - not because of some external circumstances, but because of us being human. It's like in The Great Divorce - given an omnipotent, omnibenevolent God, would a voluntary hell exist? This is to say: if you believe in respecting human autonomy, then how you live your life now very much matters, because you are now shaping your to-be-satisfied-for-eternity preferences.

Of course, the answer is that "AGI will figure this out somehow". Which is equivalent to saying "I don't know". Which I think contradicts the argument "If all goes well, it literally doesn't matter what you do; how you live is essentially up to you from that point on".

The correct argument is, IMO: "there is a huge uncertainty, so you might as well live your life as you are now, but any other choice is pretty much equally defensible".

I was trying to guess what the idea was before reading the post, and my first thought was: in a multi-player game, there is a problem where, say, two players are in a losing position and would like to resign (and go play something else), two other players are in a so-so position and might be willing to resign, and the final player is clearly winning and wants to continue. But there is no incentive to straight-up resign unilaterally, as then you have to sit and wait idly until the game finishes.

So, we introduce "fractional resignations": each player states how resigned they are, giving something like [1, 1, 0.6, 0.6, 0.1]; we sum these, compare the total to a pre-agreed threshold (say, 3) - and end the game if it passes that bar.
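As a trivial sketch of that rule (my illustration; the function name and the threshold are just placeholders):

```python
# End the game once the summed fractional resignations reach the threshold.
def game_over(resignations: list[float], threshold: float = 3.0) -> bool:
    return sum(resignations) >= threshold

print(game_over([1, 1, 0.6, 0.6, 0.1]))  # 3.3 >= 3, so the game ends
```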

Can you please link some of those YouTube channels you mentioned in the comment? I'd like to learn more about the topic - ideally, grasp the big ideas & what I don't know (coming from a pure math angle, so not much grounding in the natural sciences).

For reference, I found Introduction to Biology - The Secret of Life (an MIT course at edX) to be very helpful in this kind of exploration.

The argument is very unclear to me. What does "unbounded" mean? What does it mean to "retrocausally compress 'self'"?

 Are you postulating that:
 - the notion of "an individual" does not make sense even in principle
 - there exists something like "self"/"individual" in general, but we don't know how to define it rigorously
 - there exists something like "self"/"individual", but specific individuals (people, in this case) are not able to precisely define 'themselves'
 - some fourth option?

(The second and third paragraphs are even less clear to me, so if they present separate lines of thought, maybe let's start with the first one.)
