All of ZankerH's Comments + Replies

ZankerH-2-1

From what I was allowed to read, I think you're deliberately obfuscating and misrepresenting the active and passive choices. If that was unintentional, you need to work on good faith argumentation.

1James Stephen Brown
I would genuinely like to understand what you mean, but it’s not clear to me at present. You are allowed to read the entire post. A starting point to understanding your point of view would be if you could please, in good faith, answer the question I asked in the previous comment. Do you believe that we should let poor people die?
ZankerH-3-2

Stopped reading when the author proposed I do so, thank you for the notice.

1James Stephen Brown
The article is about people living in poverty who fail to succeed in an open economic competition (the Covid point was a side point that had "shaken my faith"). I proposed that if you think we should let these people die, then you may as well stop reading. Do you think we should let poor people die? Or did I not phrase that clearly enough?
ZankerH1-3

Modern datacenter GPUs are basically the optimal compromise between this and still retaining enough general capacity to work with different architectures, training procedures, etc. The benefits of locking in a specific model at the hardware level would be extremely marginal compared to the downsides.

My inferences, in descending order of confidence:

(source: it was revealed to me by a neural net)

84559, 79685, 87081, 99819, 37309, 44746, 88815, 58152, 55500, 50377, 69067, 53130.

>Of course you have to define what deception means in its programming.

That's categorically impossible with the class of models that are currently being worked on, as they have no inherent representation of "X is true". Therefore, they never engage in deliberate deception.

-2David turner
They need to make large language models not hallucinate. Here is an example of how. Hallucinating should only be used for creativity and problem solving. Here is how my chatbot does it. It is on the Personality Forge website. https://imgur.com/a/F5WGfZr
1David turner
[2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org) I wonder if something like this can be used with my idea for AI safety.

>in order to mistreat 2, 3, or 4, you would have to first mistreat 1

What about deleting all evidence of 1 ever having happened, after it was recorded? 1 hasn't been mistreated, but depending on your assumptions re:consciousness, 2, 3 and 4 may have.

1andrew sauer
Huh? That sounds like some 1984 logic right there. You deleted all evidence of the mistreatment after it happened, therefore it never happened?

That’s Security Through Obscurity. Also, even if we decided we’re suddenly ok with that, it obviously doesn’t scale well to superhuman agents.

>Some day soon "self-driving" will refer to "driving by yourself", as opposed to "autonomous driving".

Interestingly enough, that's what it was used to mean the first time the term appeared in popular culture, in the film Demolition Man (1993).

2Lone Pine
But when will my Saturn-branded car drive me to Taco Bell?
-2MSRayne
This is one of the things I despise about this community. People here pretend to be altruists, but are not. It is incoherent to value humans and not to value the other beings we share the planet with who, in the space of minds, are massively closer to humans than they are to any AI we are likely to create. But you retreat to moral irrealism and the primacy of arbitrary whims (utility functions) above all else when faced with the supreme absurdity of human supremacy.
1andrew sauer
See this sort of thing is why Clippy sounds relatively good to me, and why I don't agree with Eliezer when he says humans all want the same thing and so CEV would be coherent when applied over all of humanity.

We have no idea how to make a useful, agent-like general AI that wouldn't want to disable its off switch or otherwise prevent people from using it.

1weverka
We don't have to tell it about the off switch!

Global crackdown on the tech industry?

>The aliens sent their message using a continuous transmission channel, like the frequency shift of a pulsar relative to its average or something like that. NASA measured this continuous value and stored the result as floating point data.

 

Then it makes no sense for them to publish it in binary without mentioning the encoding, or making it part of the puzzle to begin with.

Your result is virtually identical to the first-ranking unambiguously permutation-invariant method (MLP 256-128-100). HOG+SVM does even better, but it's unclear to me whether that meets your criteria.

Could you be more precise about what kinds of algorithms you consider it fair to compare against, and why?

3D𝜋
I am going after pure BP/SGD, so neural networks (no SVM), no convolution, etc. No pre-processing either; that would be changing the dataset. It is just a POC, to make a point: you do not need mathematics for AGI. Our brain does not. I will publish a follow-up post soon.
ZankerH190

The issue with MNIST is that everything works on MNIST, even algorithms that utterly fail on a marginally more complicated task. It's a solved problem, and the fact that this algorithm solves it tells you nothing about it.

If the code is too rigid or poorly performant to be tested on larger or different tasks, I suggest F-MNIST (Fashion-MNIST), which uses the exact same data format, has the same number of categories and data points, but is known to be far more indicative of the true performance of modern machine learning approaches.

 

https://github.com/zalandoresearch/fashion-mnist
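
(A minimal sketch of swapping the dataset in, assuming the Keras loaders are acceptable; the original code may instead read the raw idx files, which F-MNIST also provides in the same layout.)

import tensorflow as tf

# Fashion-MNIST: same 28x28 greyscale images, 10 classes, 60k/10k split as MNIST
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)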

9lsusr
I like this idea. It seems to me like a fair test. I will run the code overnight with default settings and see what happens.

Square error has been used instead of absolute error in many diverse optimization problems in part because its derivative is proportional to the magnitude of the error, whereas the derivative of the absolute error is constant. When you're trying to solve a smooth optimization problem with gradient methods, you generally benefit from loss functions with a smooth gradient that tends towards zero along with the error.
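
(A small illustrative check of that point in Python, not from the original comment:)

import numpy as np

err = np.linspace(-2.0, 2.0, 9)
grad_squared = 2 * err           # d(err^2)/d(err): shrinks towards zero with the error
grad_absolute = np.sign(err)     # d|err|/d(err): constant magnitude, jumps at zero
print(grad_squared)
print(grad_absolute)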

Sounds like you need to work on that time preference. Have you considered setting up an accountability system or self-blackmailing to make sure you're not having too much fun?

0Good_Burning_Plastic
Why?

This is why anti-semitism exists.

0Jacob Falkovich
On the list of "Top 100 causes of anti-Semitism", this is #827.

Yes, with the possible exception of moral patients with a reasonable likelihood of becoming moral agents in the future.

0Zarm
Is it ok to eat severely mentally disabled humans then?

Meat tastes nice, and I don't view animals as moral agents.

3fubarobfusco
Are you claiming that a being must be a moral agent in order to be a moral patient?

Define "optimal". Optimizing for the utility function of min(my effort), I could misuse more company resources to run random search on.

0Thomas
The optimal is to either minimize the energy or the time required, by my book. Or to minimize algorithmic steps. It doesn't really matter which one of those definitions you adopt; they are closely related. It's like Kolmogorov complexity. Which programming language to use as the reference? It doesn't really matter. Just use the one I gave, or modify it in any sensible way. Then find a very good solution for 23142314 - or any other interesting number. They are all interesting.

In which case, the best I can do is 10 lines:

MakeIntVar A   // A = 0
Inc A          // A = 1
Inc A          // A = 2
A=A+A          // A = 4
A=A*A          // A = 16
Inc A          // A = 17
A=A+A          // A = 34
A=A*A          // A = 1156
Inc A          // A = 1157
A=A+A          // A = 2314
0Thomas
Good enough, congratulations! The next (weekly) question might be how to optimally produce an arbitrarily large number out of zero. For example, 15 lines is enough to produce 23142314. But is this the minimum?

Well, that does complicate things quite a bit. I threw those lines out of my algorithm generator and the frequency of valid programs generated dropped by ~4 orders of magnitude.
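
(A rough sketch, not the actual generator described above, of what random search over this instruction set might look like once shift-by-literal is excluded; all names are illustrative:)

import random

OPS = ["Inc A", "A=A+A", "A=A*A"]   # assumed reduced instruction set

def run(program):
    a = 0
    for op in program:
        if op == "Inc A":
            a += 1
        elif op == "A=A+A":
            a += a
        else:  # "A=A*A"
            a *= a
    return a

target = 2314
best = None
for _ in range(500000):                  # increase the trial count if nothing is found
    prog = [random.choice(OPS) for _ in range(random.randint(1, 12))]
    if run(prog) == target and (best is None or len(prog) < len(best)):
        best = prog
print(best)   # e.g. a 9-operation program (10 lines counting MakeIntVar A)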

0Thomas
You can't even shift by 1. You have to create 1 first, out of zero. Just like God.

Preliminary solution based on random search

MakeIntVar A   // A = 0
Inc A          // A = 1
Shl A, 5       // A = 32
Inc A          // A = 33
Inc A          // A = 34
A=A*A          // A = 1156
Inc A          // A = 1157
Shl A, 1       // A = 2314

I've hit on a bunch of similar solutions, but 2 * (1 + 34^2) seems to be the common thread.

0Lumifer
Let's rewrite this in something C-like:

int a       // a = 0
int b       // b = 0
int c       // c = 0
a++         // a = 1
a++         // a = 2
b = a * a   // b = 4
c = a << a  // c = 8
c = b * c   // c = 32
c = c + a   // c = 34
b = b >> a  // b = 1
c << b      // c = 1156
c++         // c = 1157
c = c * a   // c = 2314

13 lines.
0Thomas
You can't do Shl A, 5. You must first create 5 in a variable, say B.

Define "shortest". Least lines? Smallest file size? Least (characters * nats/char)?

0Thomas
Least lines.

My mental model of what could possibly drive someone to EA is too poor to answer this with any degree of accuracy. Speaking for myself, I see no reason why such information should have any influence on future human actions.

I'd argue that this is not the case, since the vast majority of people who don't expect to be "clerks" still end up in similar positions.

0Galap
Have any stats on that? (note I'm not trying to be that annoying guy who asks for statistics to try and win an argument if the other party fails to produce them; I really want to see info on people's expected vs actual employment outcomes)
0Lumifer
See my answer to Dagon.

Is there any reason to think that % in prison "should" be more equal?

Since we're talking about optimizing for "equality" between two fundamentally unequal things, why not?

Are you saying having the same amount of men and women in prison would be detrimental to the enforcement of gender equality? How does that follow?

gjm100

"Gender equality" is a fuzzy term. Taken sufficiently literally, it's absurd (We demand equal rights for men to bear children! We demand equal rates of breast cancer for men and women!). So, when the goal is reasonable discussion (as opposed to, say, making one's ideological opponents look silly), we should either avoid using the term or interpret it more charitably.

I think there is a useful thing that the term "gender equality" is gesturing towards, even though taken absolutely literally those words don't point in quite the right direc... (read more)

Having actually lived under a regime that purported to "change human behaviour to be more in line with reality", my prior for such an attempt being made in good faith to begin with is accordingly low.

Attempts to change society invariably result in selection pressures for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

I'm thinking about lo

... (read more)
0ingive
You're excluding being aligned with objective reality (accepting facts, etc) with said effectiveness. Otherwise, it's useless. I'm unsure why you're presuming rearranging people's brains isn't done constantly independent of our volition. This simply starts questioning how we can do it, with our current knowledge. Why would it lead to megalomania and genocide, when it's not aligned with reality? An understanding of neuroscience and evolutionary biology, presuming you were aligned with reality to figure it out and accept facts, would be enough and still understanding that we can be wrong until we know more. As I said "this includes uncertainty of facts (because of facts like an interpretation of QM)." which makes us embrace uncertainty, that reality is probabilistic with this interpretation. It's not absolute. I'm not.

Despair and dedicate your remaining lifespan to maximal hedonism.

-2skeptical_lurker
Google do not strike me as incompetent, and they do have ethics oversight for AI. Worry, yes; despair, no.

>NRx is systematized hatred.

Am NRx, this assertion is false.

>Even if it kills all humans, there will be one human who survives.

Unless it self-modifies to the point where you're stretching any meaningful definition of "human".

>Even if his values evolve, it will be a natural evolution of human values.

Again, for sufficiently broad definitions of "natural evolution".

>As most human beings don't like to be alone, he would create new friends that are human simulations. So even the worst cases are not as bad as a paperclip maximiser.

If we're to believe Hanson, the first (and possibly only) wave of... (read more)

0turchin
Its evolution could go wrong from our point of view, but the older generation always thinks that the younger ones are complete bastards. When I say "natural evolution" I mean a complex evolution of values based on their previous state and new experience, which is a rather typical situation for any human being whose values evolve from childhood onwards, under the influence of experiences, texts and social circle. This idea is very different from Hanson's em world. Here we deliberately upload only one human, who is trained to become the core of a future friendly AI. He knows that he is going to make some self-improvements, but he knows the dangers of unlimited self-improvement. His loved ones are still in the flesh. He is trained to be not a slave, as in Hanson's Em world, but a wise ruler.

Two things:

  • all other points have a negative x coordinate, and the x range passed to the tessellation algorithm is [-124, -71]. You probably forgot the minus sign for that point's x coordinate.

  • as mentioned above, the algorithm fails to converge because the weights are poorly scaled. For a better graphical representation, you will want to scale them to between one half and one times the nearest-point distance (a rough sketch of such a scaling follows below), but to make it run, just increase the division constant.
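
(A rough sketch of that scaling in Python, assuming the points are in an (n, 2) numpy array and the raw weights, e.g. populations, in a length-n array; names are illustrative, not from the original script:)

import numpy as np

def scale_weights(points, raw_weights):
    # distance from each point to its nearest neighbour
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)
    nearest = dists.min(axis=1)
    # map the raw weights into (0.5, 1.0) times the nearest-neighbour distance
    normalized = raw_weights / raw_weights.max()
    return (0.5 + 0.49 * normalized) * nearest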

4DataPacRat

The range is specified by the box argument to the compute_2d_voronoi function, in form [[min_x, max_x], [min_y, max_y]]. Points and weights can be specified as 2d and 1d arrays, e.g., as np.array([[x1,y1], [x2, y2], [x3, y3], ..., [xn, yn]]) and np.array([w1, w2, w3, ..., wn]). Here's an example that takes specified points, and also allows you to plot point radii for debugging purposes: http://pastebin.com/h2fDLXRD
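
(A minimal sketch following the call shape described above; the exact pyvoro signature and keyword names are assumptions here, so check help(pyvoro.compute_2d_voronoi) against your installed version:)

import numpy as np
import pyvoro

points = np.array([[-122.4, 37.8], [-118.2, 34.1], [-77.0, 38.9]])  # illustrative lon/lat pairs
weights = np.array([0.5, 0.6, 0.4])                                 # point radii, pre-scaled
box = [[-124.0, -71.0], [25.0, 53.0]]                               # [[min_x, max_x], [min_y, max_y]]

cells = pyvoro.compute_2d_voronoi(points, box, 2.0, radii=weights)  # 2.0 is an assumed block size
for cell in cells:
    print(cell['vertices'])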

2DataPacRat
Thank you kindly for your help so far. :) I started entering the live city data, and everything was going fine. Had to tweak the weights a bit to avoid some initial problems... then I got to Washington DC, and nothing I try seems to get it to work again. http://pastebin.com/q1JhUpSp is what I've ended up with; if I comment out DC's lines, I get a plot, if I put it back in, python just errors out, no matter what I set the weight divisor to. Any thoughts?

You can use the pyvoro library to compute weighted 2d voronoi diagrams, and the matplotlib library to display them. Here's a minimal working example with randomly generated data:

http://pastebin.com/wNaYAPvN

edit: It seems this library uses the radical voronoi tessellation algorithm, where "weights" represent point radii. This means if you specify a point radius greater than the distance between it and the closest point, the tessellation will not function correctly, and as a corollary, if a point's radius is smaller than half of the minimal distanc... (read more)

2DataPacRat
Welp, it looks like it's been longer since I tried tweaking basic code than I thought. I'm having trouble just trying to adjust the box's range to be from -124 to -71 and 25 to 53 (ie, longitude and latitude) instead of 1-10/1-10. I'm going to keep puzzling away, but anyone reading this, feel free to offer advice. :) (I have some TV to watch later with the fam, so I won't mind doing some drudge work during the shows of typing out the city-list into an array of X/Y coordinates and population/weight, to paste into the Python script in place of randomly-generated points. ... Once I figure out how to get the script to accept a fixed array instead of randomly-generated points.)

A perfect example of a fully general counter-argument!

0AlexanderRM
If I were to steelman the usefulness of the argument, I'd say the conclusion is that positions on economics shouldn't be indispensable parts of a political movement, because that makes it impossible to reason about economics and check whether that position is wrong. Which is just a specific form of the general argument against identifying with object-level beliefs*. *For that matter, one should perhaps be careful about identifying with meta-level beliefs as well, although I don't know if that's entirely possible for a human to do, even discounting the argument that there might be conservation of tribalism. It might be possible to reduce ones' identity down to a general framework for coming up with good meta-level beliefs, and avoid object-level
2Ben Pace
Nup, because you can bottom out in surveys of economic consensus :-)

humanity not extinct or suffering -> FAI black box -> humanity still not extinct or suffering

6Viliam
Selling the cat and donating the money to MIRI would kill two birds with one stone.
3IlyaShpitser
There's more to life than project mayhem.
2CronoDAS
Interesting advice, but not useful in context.

>In some sense it is voodoo (not very interpretable)

There is research in that direction, particularly in the field of visual object recognising convolutional networks. It is possible to interpret what a neural net is looking for.

http://yosinski.com/deepvis
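
(A rough sketch of the underlying idea, gradient ascent on the input to see what a unit responds to; this uses PyTorch purely for illustration and is not the Caffe-based tool linked above:)

import torch
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
image = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

target_class = 130  # an arbitrary ImageNet class index
for _ in range(100):
    optimizer.zero_grad()
    score = model(image)[0, target_class]
    (-score).backward()   # ascend the class score by descending its negative
    optimizer.step()
# `image` now contains a pattern the network associates with the target class.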

*linear algebra computational graph engine with automatic gradient calculation

I really wonder how this will fit into the established deep learning software ecosystem - it has clear advantages over any single one of the large players (Theano, Torch, Caffe), but lacks the established community of any of them. As a researcher in the field, I find it really frustrating that there is no standardisation and you essentially have to know a ton of software frameworks to effectively keep up with research, and I highly doubt Google entering the fray will change this.

https://xkcd.com/927/
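
(For readers unfamiliar with the term, a minimal sketch of what "computational graph engine with automatic gradient calculation" means in practice; this uses the current TensorFlow eager API, which postdates the comment:)

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x + 2.0 * x      # the operations are recorded as a graph
grad = tape.gradient(y, x)   # automatic differentiation: dy/dx = 2x + 2 = 8 at x = 3
print(float(grad))           # 8.0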

1passive_fist
Add Julia to the mix as well (which I currently use and I find personally better than those other ones). I think TensorFlow's niche would be in the area of prototyping new ML algorithms as it seems pretty general, flexible, and fast. If you just want a simple deep neural net, it might be better to use Caffe or Theano. Those do not provide a flexible and general optimization framework, though. TensorFlow also seems more powerful in the area of language processing, as you'd expect.

I need some calibration here. Is this satire?

Two things come to mind: providing energy or highly directional interstellar communication.

5g_pepper
It seems to me that if the structure was built by aliens to provide energy, that would be an example of Nancy's "something aliens constructed to make their lives better", wouldn't it?
0turchin
That is almost true, but it doesn't give us information about their final goals. But also: building a directional energy weapon?

Frankly, both of those suggestions sound about equally ridiculous to me. But then again, it may just be scope insensitivity because of how minute both likelihoods are to begin with.

0turchin
If it is an alien structure, what purpose does it most probably have, in your opinion?

If you imagine the orientations as a series of rotations about individual orthonormal basis axes, you may run into the problem of gimbal lock. Try visualising the desired final result as an orientation represented by a quaternion.
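
(A small sketch of the difference, using scipy rather than anything from the original comment: spherical-linear interpolation between quaternions versus naive per-axis interpolation of Euler angles:)

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# two key orientations given as Euler angles (degrees)
key_rots = Rotation.from_euler('xyz', [[0, 0, 0], [90, 90, 0]], degrees=True)
slerp = Slerp([0.0, 1.0], key_rots)
times = np.linspace(0.0, 1.0, 5)
print(slerp(times).as_quat())                  # smooth quaternion interpolation
print(np.linspace([0, 0, 0], [90, 90, 0], 5))  # per-axis Euler interpolation, prone to gimbal lock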

0[anonymous]
The insight that linearly interpolating spherical coordinates does not necessarily result in smooth motion was one of those key experiences that made me trust my intuition in new areas much less.

How do you know it isn't? Everything off the Earth could be a very simple simulation just designed to emit the right kind of EM radiation to look as if it's there. Likewise, large chunks of dead matter could easily be optimized away until a human interacts with them in sufficient detail. Other than your observation about classical physics, all your points are observations "from the inside" that could be optimized around without degrading our perception of the universe.

9James_Miller
The Fermi paradox and quantum physics (as opposed to unlimited layers all the way down) are massive simulation streamlines.
ZankerH-10

I definitely value it higher than the momentary high of getting to impose your values on others, which seems to be the opposite of the current US foreign policy.

3Elo
Upvote because disapproval is not wrong around my universe. Not sure if people are trying to downvote in support (aka they also disapprove) or against your disapproval.

Speaking for myself, I find most of his contributions relevant and interesting.

6Gunnar_Zarncke
I have also upvoted a significant number of his posts, esp. if those were 'excessively' downvoted. I agree that there is a common theme and that he repeats himself, but one could read that charitably as providing context for his posts, which are not always about the same thing but highlight different albeit tangential aspects of some general topic.
gjm130

The question was specifically about the ones that get lots of downvotes. That is, the ones where he's riding his hobbyhorse of complaining about the phenomenon of men not getting any sex even though they'd like to, and specifically the fact that he is in that situation. Do you find those relevant and interesting?

(Most recent examples, in reverse-historical order: one, two, three though that one only kinda fits the pattern, four, five.)

How severe would you rate the horror aspect as? This seems interesting, but I absolutely couldn't handle Amnesia.

0[anonymous]
I actually haven't played Amnesia myself, but I can say this combines elements from it and a more existential horror of what the copied humans have become and what can be done to them and what the... I'm gonna say humans 'corrupted' by the AI have become. There is definitely overlap in horror mechanics and tone with Amnesia at times with the corrupted humans, but that is just one type of horror in the game.

There's usually an informal standard that's large enough to represent a significant boost to a police officer's income, but small enough that it's worth it for most people to pay rather than risk more fines or worse. There's not much negotiation involved.
