
Comment author: madhatter 26 April 2017 06:07:13AM *  3 points [-]

Suppose it were discovered with a high degree of confidence that insects can suffer a significant amount, and that almost all insect lives are worse than not having lived. What (if anything) would/should the response of the EA community be?

Comment author: ZankerH 27 April 2017 07:48:12AM *  1 point [-]

My mental model of what could possibly drive someone to EA is too poor to answer this with any degree of accuracy. Speaking for myself, I see no reason why such information should have any influence on future human actions.

Comment author: Lumifer 06 April 2017 12:43:01AM 1 point [-]

I think this value is overrated.

Necessary for a clerk. Less necessary if you don't expect to be one.

Comment author: ZankerH 06 April 2017 08:53:29AM 1 point [-]

I'd argue that this is not the case, since the vast majority of people who don't expect to be "clerks" still end up in similar positions.

Comment author: gjm 24 January 2017 03:10:44AM 0 points [-]

Is there any reason to think that % in prison "should" be more equal?

(Some alleged psychological differences between typical men and typical women are controversial. Some are less so. That men are statistically more prone to violence is surely one of the ones that's less so.)

Comment author: ZankerH 24 January 2017 01:06:51PM 3 points [-]

Is there any reason to think that % in prison "should" be more equal?

Since we're talking about optimizing for "equality" between two fundamentally unequal things, why not?

Are you saying that having the same number of men and women in prison would be detrimental to the enforcement of gender equality? How does that follow?

Comment author: ingive 16 January 2017 02:57:54PM *  0 points [-]

I didn't really mean that. It was just setting an emotional stage for the rest of the comment. What do you think of the rest?

Comment author: ZankerH 16 January 2017 05:11:26PM *  1 point [-]

Having actually lived under a regime that purported to "change human behaviour to be more in line with reality", my prior for such an attempt being made in good faith to begin with is accordingly low.

Attempts to change society invariably create selection pressures for effectiveness that outmatch those for honesty and benevolence. Within a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

I'm thinking about locating specific centers of our brains and reducing certain activities which undoubtedly make us less aligned with reality and increase the activations of others.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people's brain centres with 15th century technology.

Why don't you instead spend some time thinking about how your forced rationality programme is going to avoid the pitfall all the others have so far fallen into: megalomania and genocide? And why are you so sure your beliefs are the final, correct ones to force on everyone through brain manipulation? If we had had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to have frozen the progress of human thought at that point? Because that's essentially what you're proposing, from the point of view of every potential future in which you fail.

Comment author: turchin 10 October 2016 11:13:53AM 5 points [-]

If we knew that AI would be created by Google, and that it would happen in the next 5 years, what should we do?

Comment author: ZankerH 10 October 2016 03:56:07PM 1 point [-]

Despair and dedicate your remaining lifespan to maximal hedonism.

Comment author: turchin 17 August 2016 11:25:25AM 0 points [-]

I think that super-AI via uploading is an inherently safe solution. https://en.wikipedia.org/wiki/Inherent_safety It could still go wrong in many ways, but that is not its default mode.

Even if it kills all humans, there will be one human who survives.

Even if his values evolve, it will be a natural evolution of human values.

As most human beings don't like to be alone, he would create new friends in the form of human simulations. So even the worst cases are not as bad as a paperclip maximiser.

It is also a feasible plan consisting of many clear steps, one of which is choosing and educating the right person for uploading. He should be educated in ethics, math, rationality, brain biology, etc. I think he is reading LW and this comment))

This idea could be upgraded to be even safer. One way is to upload a group of several people who would be able to check each other and also produce a collective intelligence.

Another idea is to break the super-AI into a centre and a periphery. In the centre we put the uploaded mind of a very rational human, who makes the important decisions and keeps the values; in the periphery we put many Tool AIs, which do the dirty work.

Comment author: ZankerH 17 August 2016 01:29:39PM 1 point [-]

Even if it kills all humans, there will be one human who survives.

Unless it self-modifies to the point where you're stretching any meaningful definition of "human".

Even if his values evolve, it will be a natural evolution of human values.

Again, for sufficiently broad definitions of "natural evolution".

As most human beings don't like to be alone, he would create new friends in the form of human simulations. So even the worst cases are not as bad as a paperclip maximiser.

If we're to believe Hanson, the first (and possibly only) wave of human em templates will be the most introverted workaholics we can find.

Comment author: ZankerH 25 April 2016 09:17:15AM 2 points [-]
  • acausal
  • arational
  • agnonstic
  • Gnon
  • gnonstic
  • Moloch
  • outreact
  • postrational
  • postrationalist
  • underreact
Comment author: DataPacRat 11 April 2016 10:57:36PM 1 point [-]

Thank you kindly for your help so far. :)

I started entering the live city data, and everything was going fine. Had to tweak the weights a bit to avoid some initial problems... then I got to Washington DC, and nothing I try seems to get it to work again. http://pastebin.com/q1JhUpSp is what I've ended up with; if I comment out DC's lines, I get a plot; if I put them back in, Python just errors out, no matter what I set the weight divisor to. Any thoughts?

Comment author: ZankerH 11 April 2016 11:25:08PM *  3 points [-]

Two things:

  • all other points have a negative x coordinate, and the x range passed to the tessellation algorithm is [-124, -71]. You probably forgot the minus sign for that point's x coordinate.

  • as mentioned above, the algorithm fails to converge because the weights are poorly scaled. For a better graphical representation, you will want to scale them into the range between one half of the nearest-point distance and the full distance; but just to make it run, increasing the division constant is enough.
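That rescaling could be sketched in NumPy along these lines (illustrative code, not the thread's actual script; it maps raw weights into the half-to-full nearest-neighbour-distance band for each point):

```python
import numpy as np

def rescale_weights(points, raw_weights):
    """Rescale raw weights (e.g. populations) into radii the radical
    tessellation can use: each radius ends up between one half of and
    the full distance to that point's nearest neighbour."""
    pts = np.asarray(points, dtype=float)
    # All pairwise distances, with the diagonal masked out.
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    nearest = dist.min(axis=1)                  # nearest-neighbour distance per point
    frac = np.asarray(raw_weights, dtype=float) / np.max(raw_weights)
    return nearest * (0.5 + 0.5 * frac)         # radius in (d/2, d] per point
```

A heavier point then claims proportionally more of its allowed radius band; dividing everything by one constant, as above, remains the quick fix when you only need the run to converge.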

Comment author: DataPacRat 11 April 2016 09:03:22PM 1 point [-]

Welp, it looks like it's been longer than I thought since I last tried tweaking basic code. I'm having trouble just adjusting the box's range to run from -124 to -71 and 25 to 53 (i.e., longitude and latitude) instead of 1-10/1-10. I'm going to keep puzzling away, but anyone reading this, feel free to offer advice. :)

(I have some TV to watch later with the fam, so I won't mind doing some drudge work during the shows of typing out the city-list into an array of X/Y coordinates and population/weight, to paste into the Python script in place of randomly-generated points. ... Once I figure out how to get the script to accept a fixed array instead of randomly-generated points.)

Comment author: ZankerH 11 April 2016 10:02:38PM 3 points [-]

The range is specified by the box argument to the compute_2d_voronoi function, in the form [[min_x, max_x], [min_y, max_y]]. Points and weights can be specified as 2d and 1d arrays, e.g., as np.array([[x1, y1], [x2, y2], [x3, y3], ..., [xn, yn]]) and np.array([w1, w2, w3, ..., wn]). Here's an example that takes specified points, and also lets you plot point radii for debugging purposes: http://pastebin.com/h2fDLXRD
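For the city data specifically, the inputs might be built like this (a sketch; the coordinates and populations below are rough placeholder values, not the real figures from the Wikipedia list):

```python
import numpy as np

# (longitude, latitude, population) -- rough placeholder values.
cities = [
    (-79.4, 43.7, 6_000_000),
    (-87.6, 41.9, 9_000_000),
    (-77.0, 38.9, 5_000_000),   # note the minus sign on the longitude
]

data = np.array(cities, dtype=float)
points = data[:, :2]                    # (n, 2) array of x/y coordinates
weights = data[:, 2]                    # 1d array of populations
box = [[-124.0, -71.0], [25.0, 53.0]]   # [[min_x, max_x], [min_y, max_y]]

# Sanity check that every point lies inside the box -- this is exactly
# the check that catches a dropped minus sign on a longitude.
bounds = np.array(box)
assert ((points >= bounds[:, 0]) & (points <= bounds[:, 1])).all()
```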

Comment author: DataPacRat 11 April 2016 09:24:49AM 7 points [-]

Seeking help with Voronoi map generation

I'm hoping somebody here can help me create a particular map. I'd like to build a weighted Voronoi map of North America, with the weights corresponding to each urban area's population. Or, put another way, I'd like to start with http://lpetrich.org/Science/GeometryDemo/GeometryDemo_GMap.html , input the urban areas listed at https://en.wikipedia.org/wiki/List_of_urban_areas_by_population , and then tweak how the map is produced so that if one metroplex has a population of 1,000,000 and another has 10,000,000, the border between them is about 90% of the way closer to the smaller city.

I'm trying to build a scifi setting to put a story in, and have certain suspicions about what such a map would look like, but would like to confirm my intuition. I'm running Fedora Linux, and don't mind compiling oddball software, I just don't know which packages I'd need to even try to generate this thing.

By any chance, anyone here already able to generate the final product with just a few mouse-clicks? :) If not, anyone have any advice on how to get started?

Comment author: ZankerH 11 April 2016 12:59:23PM *  4 points [-]

You can use the pyvoro library to compute weighted 2d voronoi diagrams, and the matplotlib library to display them. Here's a minimal working example with randomly generated data:
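(A sketch of such an example, assuming pyvoro's compute_2d_voronoi takes the points, the bounding box, a block-size parameter for its internal grid, and optional per-point radii; the import is guarded in case pyvoro isn't installed:)

```python
import numpy as np

try:
    import pyvoro  # weighted (radical) 2D Voronoi tessellation
except ImportError:
    pyvoro = None

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, (15, 2))   # random sites in a 10x10 box
radii = rng.uniform(0.2, 0.6, 15)          # per-point "weights"

if pyvoro is not None:
    cells = pyvoro.compute_2d_voronoi(
        points.tolist(),               # [[x, y], ...]
        [[0.0, 10.0], [0.0, 10.0]],    # bounding box
        2.0,                           # block size for the internal grid
        radii=radii.tolist(),
    )
    # Each cell is a dict; cell['vertices'] is the polygon outline,
    # ready to hand to matplotlib.patches.Polygon for plotting.
```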


edit: It seems this library uses the radical Voronoi tessellation algorithm, in which "weights" represent point radii. This means that if you specify a point radius greater than the distance to that point's closest neighbour, the tessellation will not function correctly; and as a corollary, if a point's radius is smaller than half of the minimal distance to a neighbour, the specified weight will not affect the tessellation at all. You therefore need a secondary algorithm that takes the point weights and mutual distances into account to produce the result you're after.
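The radical (power) tessellation rule itself is easy to check by brute force: each location belongs to the site minimising squared distance minus squared radius. A NumPy sketch (names are illustrative):

```python
import numpy as np

def power_assign(locations, sites, radii):
    """Assign each location to the site minimising the power distance
    ||x - s||^2 - r^2, i.e. the radical/power tessellation rule."""
    d2 = ((np.asarray(locations, float)[:, None, :]
           - np.asarray(sites, float)[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2 - np.asarray(radii, float) ** 2, axis=1)

sites = np.array([[0.0, 0.0], [4.0, 0.0]])
probes = np.array([[1.9, 0.0], [2.1, 0.0]])     # straddle the midpoint x = 2

print(power_assign(probes, sites, [0.0, 0.0]))  # equal radii: plain Voronoi, [0 1]
print(power_assign(probes, sites, [1.5, 0.0]))  # a radius on site 0 pushes its border out: [0 0]
```

With equal radii the rule reduces to the ordinary nearest-site diagram, and an oversized radius can swallow a neighbouring site's cell entirely, which matches the failure mode described above.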
