
Comment author: Gunnar_Zarncke 30 October 2014 02:40:14PM 2 points [-]

Because you can't create ICs yourself? You know, a clockmaker (of mechanical clocks) can create his own tools and working mechanical clocks in the farthest backwaters with only steel rods and his suitcase of tools (not very large; I have seen one). I admit that creating refined steel requires somewhat more sizable infrastructure - but nothing technologically advanced.

The same cannot be said about any part of modern electronics. Sure, you can print your case yourself. Sure, you can lay out your board yourself. But you still depend on extremely highly integrated ICs and/or FPGAs. This means that if civilization collapses, it falls back to mechanical tools - because those can be created locally with 'minimal' tooling (at least where this know-how survives). If not for the ICs, we could have the same for electronics. Thus this could reduce an x-risk.

Comment author: V_V 30 October 2014 05:38:31PM 1 point [-]

Ah ok, I hadn't understood the context of your question.

Comment author: Gunnar_Zarncke 29 October 2014 04:27:34PM 1 point [-]

Yes. I wonder about the minimum infrastructure needed to create computing hardware comparable to modern ICs.

Comment author: V_V 30 October 2014 02:15:31PM 2 points [-]

There exist lots of off-the-shelf programmable ICs, from single-board microcontrollers to FPGAs.
Why would you need to print your own custom ICs?
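
(As an illustration of how low the barrier is: a board like the MicroPython pyboard can be programmed in a few lines. The board and LED index below are illustrative, not a recommendation.)

    # Minimal MicroPython sketch for an off-the-shelf microcontroller board
    # (the original pyboard; the LED index is board-specific and illustrative).
    import pyb

    led = pyb.LED(1)
    while True:
        led.toggle()    # blink an on-board LED: no custom silicon required
        pyb.delay(500)  # milliseconds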

Comment author: So8res 25 October 2014 04:30:22PM 0 points [-]

E.g. assume that the users (the programmers) would use a remote-controlled robotic arm to press the shutdown button. If the agent turns out to be a paperclipper, it may disassemble the robotic arm just to turn it into paperclips. The agent is not "intentionally" trying to resist shutdown, but the effect will be the same. Symmetrically, there could be scenarios where the agent "accidentally" presses the shutdown button itself.

Yep! In fact, this is exactly the problem discussed in section 4.1 and described in Theorem 6, is it not?

Comment author: V_V 25 October 2014 05:52:27PM 0 points [-]

Section 4.1 frames the problem in terms of the agent creating a sub-agent or successor. My point is that the issue is more general, as there are manipulative actions that don't involve creating other agents.
Theorem 6 seems to address the general case, although I would remark that even if epsilon == 0 (that is, even if UN is indifferent to manipulation) you aren't safe.

Comment author: V_V 25 October 2014 01:57:55PM 1 point [-]

I think there is an issue with the utility indifference framework that has not been mentioned in the paper or in the comments so far:

If the agent is able to affect, with its action a1, the probability of the shutdown button being pressed (that is, if a1 can be a manipulative action), and the agent is indifferent to the button being pressed, then it may happen that it "accidentally" performs a manipulative action.

E.g. assume that the users (the programmers) would use a remote-controlled robotic arm to press the shutdown button. If the agent turns out to be a paperclipper, it may disassemble the robotic arm just to turn it into paperclips. The agent is not "intentionally" trying to resist shutdown, but the effect will be the same. Symmetrically, there could be scenarios where the agent "accidentally" presses the shutdown button itself.

If I understand correctly, UN is already supposed to penalize manipulative actions, but UN is untrusted, hence the problem still exists.
Corrigibility implemented using utility indifference might make sense as a precaution, but it is not foolproof.
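
As a toy numerical sketch of this failure mode (invented payoffs, not the paper's actual UN/US machinery): an agent scored by its normal utility alone gives no weight to an action's effect on the button-press probability.

    # Toy sketch: an agent made indifferent to the shutdown button scores
    # actions by its normal utility U_N alone, so an action's effect on the
    # button-press probability carries no weight.  Payoffs are made up.
    actions = {
        # action: (paperclips produced, prob. the button can still be pressed)
        "do_nothing":            (0.0, 1.0),
        "make_paperclips":       (10.0, 1.0),
        "disassemble_robot_arm": (10.5, 0.0),  # the arm's metal becomes clips
    }

    def indifferent_score(action):
        clips, press_prob = actions[action]
        return clips  # indifference: press_prob does not enter the score

    print(max(actions, key=indifferent_score))
    # -> disassemble_robot_arm: shutdown "accidentally" disabled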

Comment author: V_V 23 October 2014 02:40:26PM 44 points [-]

Done

Comment author: V_V 19 October 2014 03:19:19PM *  2 points [-]

Generally speaking, given a decision problem and a strategy to solve it, one way to measure its quality is the "regret": the difference (or the ratio) between the payoff of the theoretically optimal strategy and the payoff of the strategy under consideration.

If the strategies are algorithms, then you can further refine the concept by including resource constraints (e.g. running in polynomial time, or running within X seconds).
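
For a concrete (one-shot, known-payoff) toy example with made-up numbers:

    # Regret of a strategy in a one-shot decision problem with known payoffs.
    payoffs = {"a": 1.0, "b": 3.0, "c": 2.5}  # payoff of each available action

    def regret(chosen):
        """Difference between the optimal payoff and the chosen one."""
        return max(payoffs.values()) - payoffs[chosen]

    def payoff_ratio(chosen):
        """Ratio form of the same comparison."""
        return payoffs[chosen] / max(payoffs.values())

    print(regret("c"))        # 0.5
    print(payoff_ratio("c"))  # 0.833...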

In general, I don't think there is really a definition that fits all cases well in a non-trivial way: a clock optimizes keeping time, a rock optimizes lying around and occasionally falling, but we don't usually think of these things as agents trying to optimize a utility function.

Comment author: Azathoth123 18 October 2014 04:13:54PM 2 points [-]

Um, until recently the various Iraqi militants weren't very organized.

Comment author: V_V 18 October 2014 09:46:59PM 1 point [-]

Kinda. And until recently they sucked at fighting the government.

Comment author: Randaly 17 October 2014 07:06:48PM *  4 points [-]

Maybe, but this is the exact opposite of polymath's claim: not that fighting a modern state is so difficult as to be impossible, but that fighting one is sufficiently simple that starting out without any weapons is not a significant handicap.

(The proposed causal impact of gun ownership on rebellion is: more guns -> more willingness to actually fight against a dictator (acquiring a weapon is a step that will stop many people who would otherwise rebel from doing so) -> more likelihood that government allies defect -> more likelihood that the government falls. I'm not sure if I endorse this, but polymath's claim is definitely wrong.)

(As an aside, this is historically inaccurate: almost all of the weapons in Syria and Libya came either from defections from their official militaries (especially in Libya), or from foreign donors, not from private purchases. However, private purchases were important in Mexico and Ireland.)

Comment author: V_V 18 October 2014 02:27:32PM 1 point [-]

but that fighting one is sufficiently simple that starting out without any weapons is not a significant handicap.

I didn't claim that fighting a government is simple. My claim is that the hardest part of fighting a government is forming an organized militia with sufficient funds and personnel. If you manage to do that, then acquiring weapons is probably comparatively easy.

Comment author: turchin 17 October 2014 03:13:25PM 1 point [-]

So, do you think that half of the population will be infected?

Comment author: V_V 17 October 2014 03:50:20PM 1 point [-]

No.

Comment author: turchin 17 October 2014 12:59:17PM 0 points [-]

This joke may be good on any other site, but not on LessWrong, which is based on the idea of unlimited AI self-improvement. Of course Ebola will end its exponential growth - I'm just interested to know how and when. Will it burn out in Africa, or will we get herd immunity after 100 million victims, or will an effective vaccine be created, or will we nuke all places with Ebola?

Comment author: V_V 17 October 2014 02:45:46PM *  1 point [-]

This joke may be good on any other site, but not on LessWrong, which is based on the idea of unlimited AI self-improvement.

Some people here, including the founder, believe that recursive AI self-improvement is a realistic possibility, but I'm pretty sure that even the most hardcore believers acknowledge that there are physical limits, and that you can't just expect an exponential function to be a good fit for a trend when you get close to the limit.

The basic function you should be looking at for modelling this kind of phenomenon is the logistic function. It's the basic model for phenomena that include both positive feedback mechanisms (e.g. self-replication) and negative feedback mechanisms (e.g. resource constraints).

If you look at the graph of the logistic function, you may notice that initially, when positive feedback is dominant, it very closely resembles an exponential; then it becomes roughly linear around the middle point; and then, when negative feedback is dominant, it approaches its supremum like a decaying exponential.
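
A few lines of Python make the three regimes visible (parameters arbitrary):

    import math

    K, r, t0 = 1000.0, 1.0, 10.0  # capacity, growth rate, midpoint

    def logistic(t):
        return K / (1.0 + math.exp(-r * (t - t0)))

    def exponential(t):
        # the exponential matching the logistic's early behavior
        return logistic(0.0) * math.exp(r * t)

    for t in [0, 2, 4, 10, 16, 20]:
        print(t, round(logistic(t), 2), round(exponential(t), 2))
    # Early on the two curves nearly coincide; around t0 growth is roughly
    # linear; afterwards the logistic saturates while the exponential diverges.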

If a disease had a constant basic reproduction number R0, and it could infect anyone, and infected people never died because of the infection and remained infectious for life, then the prevalence of the disease over time would be well approximated by a logistic function, with the world population size as the supremum value (the "capacity").

In an actual epidemic, of course, people can die or recover, and the R factor varies over time as the disease spreads to different places, people and institutions change their behavior, better treatment becomes available, and so on. Thus you don't really get an exact logistic trend, but that's the first-order model for forecasting the long-term prevalence of a disease, not an exponential model that neglects feedback loops.
An exponential model is only useful when the disease prevalence is still quite far from the capacity, that is, when a typical infected person is mostly surrounded by uninfected (and infectable) people.
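
To see the idealized constant-R0 case play out, here's a toy discrete-generation simulation (numbers are arbitrary, not an Ebola forecast):

    # Each case causes R0 secondary cases per generation, scaled by the
    # susceptible fraction; the infected stay infectious for life.  The
    # resulting prevalence curve is approximately logistic.
    N  = 7e9    # world population, the "capacity"
    R0 = 2.0    # secondary cases per case in a fully susceptible population
    I  = 100.0  # initially infected

    for generation in range(30):
        if generation % 5 == 0:
            print(generation, f"{I:.3e}")
        new_cases = R0 * I * (1.0 - I / N)
        I = min(N, I + new_cases)
    # Growth is ~exponential (factor 1+R0 per generation) while I << N,
    # then slows and saturates as the susceptible pool is depleted.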
