All of _rpd's Comments + Replies

_rpd-40

Apparently being a postman in the 60s and having a good Johnny Cash impression worked out well ...

http://infamoustribune.com/dna-tests-prove-retired-postman-1300-illegimitate-children/

[This comment is no longer endorsed by its author]
5gjm
Or, alternatively, not.
_rpd10

Or we are an experiment (natural or artificial) that yields optimal information when unmanipulated or manipulated imperceptibly (from our point of view).

_rpd10

I really like this distinction. The closest I've seen is discussion of existential risk from a non-anthropocentric perspective. I suppose the neologism would be panexistential risk.

0G0W51
Panexistential risk is a good, intuitive, name.
_rpd50

The desire to know error estimates and confidence levels around assertions and figures, or better yet, probability mass curves. And a default attitude of skepticism towards assertions and figures when they are not provided.

_rpd10

Yes, until the distance exceeds the Hubble distance at that time; after that, the light from the spaceship will redshift out of existence as it crosses the event horizon. Wiki says that in around 2 trillion years, this will be true for light from all galaxies outside the local supercluster.

_rpd10

Naively, the required condition is v + dH > c, where v is the velocity of the spaceship, d is the distance from the threat and H is Hubble's constant.

However, when discussing distances on the order of billions of light years and velocities near the speed of light, the complications are many, not to mention an area of current research. For a more sophisticated treatment see user Pulsar's answer to this question ...

http://physics.stackexchange.com/questions/60519/can-space-expand-with-unlimited-speed/

... in particular the graph Pulsar made for the ans... (read more)
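For a rough feel of the naive condition above, here is a toy calculation, assuming a constant Hubble parameter of about 70 km/s/Mpc and ignoring the accelerating expansion and the other subtleties Pulsar's answer handles properly (illustrative only, not a substitute for that treatment):

    # Toy illustration of the naive condition v + d*H > c discussed above,
    # assuming a constant Hubble parameter and ignoring accelerating
    # expansion, a changing H, and other relativistic subtleties.

    C = 299792.458            # speed of light, km/s
    H0 = 70.0                 # assumed Hubble constant, km/s per Mpc

    def light_cannot_catch_ship(v_kms, d_mpc, h0=H0):
        """True if, naively, light sent from here can no longer reach a
        ship at distance d_mpc (Mpc) receding at v_kms (km/s)."""
        recession = d_mpc * h0        # apparent velocity due to expansion
        return v_kms + recession > C

    # Example: a ship moving at 0.5c that is 3000 Mpc (~10 billion light years) away
    print(light_cannot_catch_ship(0.5 * C, 3000))   # True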

0SoerenE
Wow. It looks like light from James' spaceship can indeed reach us, even if light from us cannot reach the spaceship.
_rpd70

this claim

Do you mean the metric expansion of space?

https://en.wikipedia.org/wiki/Metric_expansion_of_space

Because this expansion is caused by relative changes in the distance-defining metric, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity.

0SoerenE
Thank you. It is moderately clear to me from the link that James' thought-experiment is possible. Do you know of a more authoritative description of the thought-experiment, preferably with numbers? It would be nice to have an equation where you give the speed of James' spaceship and the distance to it, and calculate if the required speed to catch it is above the speed of light.
_rpd10

Would you support a law to stop them?

Wiki says that desomorphine has been a Schedule 1 controlled substance in the US since 1936, shortly after its discovery. Mere possession is illegal, much less use.

0[anonymous]
So this example is to illustrate something we disagree about, not what society agrees about it generally. See christian's comment demonstrating this point...
_rpd20

predict with high confidence a Republican win

Odd since most prediction markets have a 60/40 split in favor of a Democrat winning the US presidency.

E.g., https://iemweb.biz.uiowa.edu/quotes/Pres16_Quotes.html

Sanders vs. Trump.

The polls have Sanders ahead in this particular matchup ...

http://www.realclearpolitics.com/epolls/2016/president/us/general_election_trump_vs_sanders-5565.html

2JohnGreer
Yes, I've mostly seen a Democrat favored. I bet two bitcoin on Hillary a year ago based on FiveThirtyEight's posts.
_rpd00

"Prediction market". The DPRs implement some sort of internal currency (which, thanks to blockchains, is fairly easy), and make bets, receiving rewards for accurate predictions.

Taking this a little further, the final prediction can be a weighted combination of the individual predictions, with the weights corresponding to historical or expected accuracy.

However different individuals will likely specialize to be more accurate with regard to different cognitive tasks (in fact, you may wish to set up the reward economy to encourage such specializa... (read more)
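A minimal sketch of the weighted-combination idea, assuming each DPR reports a probability and the weights come from historical accuracy (the function name and the accuracy-to-weight rule are illustrative assumptions, not part of any existing system):

    # Minimal sketch: combine individual probability estimates into one
    # prediction, weighting each predictor by its historical accuracy.

    def combined_prediction(estimates, accuracies):
        """estimates: probabilities in [0, 1]; accuracies: historical scores."""
        total = sum(accuracies)
        return sum(p * w for p, w in zip(estimates, accuracies)) / total

    # Three predictors, the last with the best track record.
    print(combined_prediction([0.60, 0.70, 0.40], [0.5, 0.6, 0.9]))  # ~0.54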

_rpd00

AngelList says Anthony Aguirre is the founder.

0ChristianKl
I hadn't checked AngelList. It's great to have a real company behind it, in contrast to predictionbook which doesn't get development attention anymore.
_rpd00

I would say that actions that make a particular person happy can have consequences that decrease the collective happiness of some group. I might use a tyrant or an addict as examples. In answering the question "What else are you gonna do?" I'd propose at least "As long as you harm no group happiness, do what makes you happy," the Wiccan Rede "An' ye harm none, do what thou wilt" probably being too strict (rules out being Batman, for example).

_rpd00

When someone is about to be a parent (I think this question sticks more to a man than a woman, considering the empathic link that's been biologically created between a child and his mother), is he really asking himself: Will they be worth it?

I think the situation is very different planned vs. unplanned. For me, once the decision was made I had no second thoughts. Also, the little munchkins re-write you emotionally once they arrive <- no one told me about this, so it was actually a bit of a shock.

_rpd00

Often helpline workers are people who formerly needed mental health advice themselves. At least, they'll have training on how to be helpful. I think it's very likely they'll be supportive, and unlikely that they'll be judgmental.

However, this is from a US perspective. Things may be different in other parts of the world.

_rpd00

That strategy has a good chance of discouraging her from getting treatment later.

Why do you say that? Also, if she is distressed, then she may want treatment now.

Getting her to call a mental health advice line that she doesn't trust likely won't be positive.

Granted, but why won't she trust the mental health advice line? If she is distressed, she may be willing to consider help from new sources.

If she is not distressed, then CronoDAS can use the mental health advice line to get educated on the options in case she does become distressed.

2ChristianKl
Basically because there's a high likelihood that the operator on the other side doesn't believe that the spirits she sees exist and will suggest she's wrong for believing they exist. If that would be the case, CronoDAS wouldn't have the problem he has.
_rpd20

I think "all human interaction is manipulation" is false on its face. I was putting forward Adler as a candidate for being a modern root of this meme. His teachings are still quite influential.

0ChristianKl
The fact that you consider a statement to be false on its face doesn't mean that there is nobody in support of it. Pointing me to a different meme is beside the point.
_rpd00

If she is distressed by the symptoms, you could encourage her to contact someone that can educate her about treatment options. There may be a mental health advice line in your area that can refer her or you to free or low cost resources.

0ChristianKl
That strategy has a good chance of discouraging her from getting treatment later. Getting her to call a mental health advice line that she doesn't trust likely won't be positive.
_rpd40

My understanding is that Adler thought we all start with an inferiority complex because we all start as small, weak children.

2ChristianKl
Even if that's true, I don't think it implies that "All human interaction is manipulation". It only implies that a lot of it is, as it's driven by an inferiority complex.
_rpd20

He was the inferiority complex guy ...

"The striving for significance, this sense of yearning, always points out to us that all psychological phenomena contain a movement that starts from a feeling of inferiority and reach upward. The theory of Individual Psychology of psychological compensation states that the stronger the feeling of inferiority, the higher the goal for personal power." (From a new translation of "Progress in Individual Psychology," [1923] a journal article by Alfred Adler, in the AAISF/ATP Archives.

... everything i... (read more)

2ChristianKl
According to those quotes a lot of people are manipulative because of inferiority complexes. That suggests that he only talks about some children and not all children. Some children are manipulative for those reasons but that doesn't mean all of them are.
0ChristianKl
Is there a specific quote from Adler about manipulation? Googling "Alfred Adler" manipulation doesn't give me good results. Given that Adler seems to be a theist, I'm also not sure whether he thinks that way.
_rpd00

I feel like there should be some constraint on harming group happiness while you "do what makes you happy."

2WalterL
It seems like "should" is doing a lot of heavy lifting in that sentence. If you had to turn that word into a sentence or two to let me understand what you mean, what would it be?
_rpd00

I take your point that theorists can appear to be concerned with problems that have very little impact. On the other hand, there are some great theoretical results and concepts that can prevent us from futilely wasting our time and guide us to areas where success is more likely.

I think you're being ungenerous to Bostrom. His paper on the possibility of Oracle type AIs is quite nuanced, and discusses many difficulties that would have to be overcome ...

http://www.nickbostrom.com/papers/oracle.pdf

2TheAncientGeek
To be fair to Bostrom, he doesn't go all the way down the rabbit hole -- arguing that oracles aren't any different to agentive AGIs.
_rpd20

why would an AI become evil?

The worry isn't that the AI would suddenly become evil by some human standard, rather that the AI's goal system would be insufficiently considerate of human values. When humans build a skyscraper, they aren't deliberately being "evil" towards the ants that lived in the earth that was excavated and had concrete poured over it, the humans just don't value the communities and structures that the ants had established.

_rpd00

I think your criticism is a little harsh. Turing machines are impossible to implement as well, but they are still a useful theoretical concept.

2TheAncientGeek
Theoretical systems are useful so long as you keep track of where they depart from reality. Consider the following exchange:

Engineer: The programme is acquiring more memory than it is releasing, so it will eventually fill the memory and crash.

Computer Scientist: No it won't, the memory is infinite.

Do the MIRI crowd make similar errors? Sure, consider Bostrom's response to Oracle AI. He assumes that an Oracle can only be a general intelligence coupled to a utility function that makes it want to answer questions and do nothing else.
_rpd90

There was quite a bit of commentary on the Jan 27 post ...

http://lesswrong.com/r/discussion/lw/n8b/link_alphago_mastering_the_ancient_game_of_go/#comments

tl;dr: reactions are mixed.

My personal reaction is that it is surprising that neural networks, even large ones fed with clever inputs and used in clever ways, could be used to boost Go play to this level. Although it has long been known that neural networks are universal function approximators, this achievement is a "no, really."

_rpd00

Yes the AI would know what we would approve of.

Okay, to simplify, suppose the AI has a function ...

Boolean humankind_approves(Outcome o)

... that returns true when humankind would approve of a particular outcome o, and false otherwise.

At any given point, the AI needs to have a well specified utility function.

Okay, to simplify, suppose the AI has a function ...

Outcome U(Input i)

... which returns the outcome(s) (e.g., answer, plan) that optimizes expected utility given the input i.

But it doesn't have any reason to care.

Assuming the AI is corrigible (... (read more)
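To make the simplification concrete, here is a toy sketch of how the two functions above could be combined so that the optimizer only returns approved outcomes. The predicate and utility are placeholders of my own; whether a real AI would actually respect such a constraint is exactly the point in dispute.

    # Toy sketch: the optimizer only returns outcomes that pass the
    # approval predicate. Both functions are illustrative stand-ins.

    def humankind_approves(outcome):
        # Stand-in for the Boolean predicate sketched above.
        return outcome != "manipulate the judge"

    def constrained_U(candidates, utility):
        """Return the highest-utility outcome among approved candidates."""
        approved = [o for o in candidates if humankind_approves(o)]
        return max(approved, key=utility, default=None)

    plans = ["manipulate the judge", "answer honestly", "do nothing"]
    print(constrained_U(plans, utility=len))   # "answer honestly"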

_rpd00

Emulating human brains is a rather convoluted solution to any problem.

Granted. In practice, it may be possible to represent aspects of humankind in a more compact form. But the point is that if ...

The AI would be very familiar with humans and would have a good idea of our [inventive] abilities.

... then to me it seems likely that "the AI would be very familiar with humans and would have a good idea of actions that would meet human approval."

Taking your analogy ... if we can model chimp inventiveness to a useful degree, wouldn't we also b... (read more)

0Houshalter
I just realized I misread your above comment and was arguing against the wrong thing somewhat. Yes the AI would know what we would approve of. It might also know what we want (note these are different things.) But it doesn't have any reason to care.

At any given point, the AI needs to have a well specified utility function. Or at least something like a utility function. That gives the AI a goal it can optimize for. With my method, the AI needs to do several things. It needs to predict what a human judge would do, after reading some output it produces. I.e. if they would hit a big button that says "Approve". It needs to be able to predict what AI 2 will say after reading its output. I.e. what probability AI 2 will predict AI 1's output is human. And it needs to predict what actions will lead it towards increasing the probability of those things, and take them. AI 2, in turn, just needs to predict one thing: how likely its input was produced by a human.

How do you create a well specified utility function for doing things humans would approve of? You just have it optimize the probability the human will press the button that says "approve", and ditch the part about it pretending to be human. But the output most likely to make you hit the approve button isn't necessarily what you really want! It might be full of lies and manipulation, or a way to trick you. And if you go further than that, put it in an actual robot instead of a box, there's nothing stopping it from stealing the approve button and pressing it endlessly. Or just hacking its own computer brain and setting reward equal to +INF (after which its behavior in the world is entirely undefined and unpredictable, and possibly dangerous.)

There's no way to specify "do what I want you to do" as a utility function. Instead we need to come up with clever ways to contain the AI and restrain its power, so we can use it to do useful work. It could look at the existing research on Go playing or neural networks. Al
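A rough sketch of the training signal described above, with both probabilities as stand-ins for AI 1's predictions of the human judge and of AI 2. Combining them by multiplication is my own illustrative assumption, not part of the original proposal.

    # Rough sketch of AI 1's objective in the two-AI scheme described above.
    # p_judge_approves and p_looks_human are placeholder predictors; the
    # multiplicative combination is an assumption for illustration.

    def ai1_objective(output, p_judge_approves, p_looks_human):
        return p_judge_approves(output) * p_looks_human(output)

    # Dummy predictors for illustration:
    print(ai1_objective("some plan", lambda o: 0.9, lambda o: 0.8))  # roughly 0.72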
_rpd00

It's easy to detect what solutions a human couldn't have invented. That's what the second AI does

I think, to make this detection, the second AI would have to maintain high resolution simulations of the world's smartest people (if not the entire population), and basically ask the simulations to collaboratively come up with their best solutions to the problem.

Supposing that is the case, the second AI can be configured to maintain high resolution simulations of the entire population, and basically ask the simulations whether they collectively approve of a ... (read more)

0Houshalter
Emulating human brains is a rather convoluted solution to any problem. The AI would be very familiar with humans and would have a good idea of our abilities. To give an analogy, imagine we were the superintelligent AIs, and we were trying to tell apart chimps from humans pretending to be chimps. Let's say one of the chimps produces a tool as a solution to a problem. Our goal is to guess whether it was really made by a chimp, or a human impersonator. You look at the tool. It's a spear made from a sharp rock tied to a stick. You look closely at the cord attaching the rock, and notice it was tied nicely. You know chimps don't know anything about knotcraft, let alone making cord, so you reject it as probably made by a human. Another tool comes to you, a spear made from steel, and you immediately reject it as far beyond the ability of the chimps. The last tool you examine is just a stick that has been sharpened at the end a little. Not the greatest, but definitely within the ability of chimps to produce. You note that it was probably produced by a chimp and let it pass.
_rpd00

There is regular structure in human values that can be learned without requiring detailed knowledge of physics, anatomy, or AI programming.

While there is some regular structure to human values, I don't think you can say that the totality of human values has a completely regular structure. There are too many cases of nameless longings and generalized anxieties. Much of art is dedicated exactly to teasing out these feelings and experiences, often in counterintuitive contexts.

Can they be learned without detailed knowledge of X, Y and Z? I suppose it dep... (read more)

_rpd00

I mean, the ability to estimate the abilities of superintelligences appears to be an aspect of reliable Vingean reflection.

0turchin
Or we could ask these AIs to create the scale. We could also use their size to estimate power, like the number of neurons. But a real test needs to be a powerful as well as universal optimization problem, something like the ability to crack complex encryption or play Go.
0turchin
I created a list of steps or milestones of future AI, and we could use a similar list to estimate the level of a current super AI.

1. AI autopilot. Tesla has it already.

2. AI home robot. All prerequisites are available to build it by 2020 maximum. This robot will be able to understand and fulfill an order like 'Bring my slippers from the other room'. On its basis, something like "mind-brick" may be created, which is a universal robot brain able to navigate in natural space and recognize speech. Then, this mind-brick can be used to create more sophisticated systems.

3. AI intellectual assistant. Searching through personal documentation, possibility to ask questions in a natural language and receive wise answers. 2020-2030.

4. AI human model. Very vague as yet. Could be realized by means of a robot brain adaptation. Will be able to simulate 99% of usual human behavior, probably, except for solving problems of consciousness, complicated creative tasks, and generating innovations. 2030.

5. AI as powerful as an entire research institution and able to create scientific knowledge and get self-upgraded. Can be made of numerous human models. 100 simulated people, each working 100 times faster than a human being, will probably be able to create AI capable of self-improving faster than humans in other laboratories can do it. 2030-2100

5a. Self-improving threshold. AI becomes able to self-improve independently and quicker than all humanity.

5b. Consciousness and qualia threshold. AI is able not only to pass the Turing test in all cases, but has experiences and has understanding of why and what it is.

6. Mankind-level AI. AI possessing intelligence comparable to that of the whole mankind. 2040-2100

7. AI with the intelligence 10 – 100 times bigger than that of the whole mankind. It will be able to solve problems of aging, cancer, solar system exploration, nanorobots building, and radical improvement of life of all people. 2050-2100

8. Jupiter brain – huge AI using the entire pla
_rpd00

Although we use limited proxies (e.g., IQ test questions) to estimate human intelligence.

0turchin
Limited proxies - yes, well said. Also I would add solving problems which humans were unable to solve for a long time: aging, cancer, star travel, world peace, resurrection of the dead.
_rpd00

The opportunities for detecting superintelligence would definitely be rarer if the superintelligence is actively trying to conceal its status.

What about in the case where there is no attempted concealment? Or even weaker, where the AI voluntarily submits to arbitrary tests. What tests would we use?

Presumably we would have a successful model of human intelligence by that point. It's interesting to think about what dimensions of intelligence to measure. Number of variables simultaneously optimized? Optimization speed? Ability to apply nonlinear relatio... (read more)

0turchin
Probably winning humans in ALL known domains, including philosophy, poetry, love, power.
_rpd00

Whatever mechanism that you use to require the AI to discard "solutions that a human couldn't invent", use that same mechanism to require the AI to discard "actions of which humankind would not approve."

I believe that the formal terminology is to add the condition to the AI's utility function.

0Houshalter
It's easy to detect what solutions a human couldn't have invented. That's what the second AI does: predict how likely an input was produced by an AI or a human. If it's very unlikely a human produced it, it can be discarded as "unsafe". However it's hard to know what a human would "approve" of, since humans can be tricked, manipulated, hacked, intimidated, etc. That is the standard problem with oracles that I am trying to solve with this idea.
_rpd00

I wonder if this is true in general. Have you read a good discussion on detecting superintelligence?

0turchin
Can't remember offhand; but if a superintelligence is able to do anything, it could easily pretend to be more stupid than it is. Maybe only a "super superintelligence" could see through it. But it may also depend on the length of the conversation. If it says just Yes or No once, we can't decide; if it says longer sequences we could conclude something, but for any length of output there is a maximum level of intelligence that could be concluded from it.
_rpd00

I think that if you are able to emulate humankind to the extent that you can determine things like "solutions that a human couldn't invent" and "what a human given a year to work on it, would produce," then you have already solved FAI, because instead you can require the AI to "only take actions of which humankind would approve."

To use AI to build FAI, don't we need a way to avoid this Catch 22?

0Houshalter
How do you program the AI to do what humankind would approve? A superintelligent AI, perhaps even a human-level AI, would probably know what humans would approve of. The hard part is making it care about what humans think.
_rpd00

it doesn't optimize without end to create the best solution possible, it just has to meet some minimum threshold, then stop.

It's easy to ask hard questions. I think it can be argued that emulating a human is a hard problem. There doesn't seem to be a guarantee that the "minimum threshold" doesn't involve converting planetary volumes to computronium.

I think the same problem is present in trying to specify minimum required computing power for a task prior to performing the task. It isn't obvious to me that calculating "minimum required computing power for X" is any less difficult than performing some general task X.

0Houshalter
Yes, this is a problem. One way would be to just discard solutions that a human couldn't invent with greater than 1% probability. Another solution would be to not have that requirement at all. Instead have it try to mimic what a human, given a year to work on it, would produce. So if humans can't solve the problem, it can still show us how to make progress on it.
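A minimal sketch of that 1% filtering rule, with p_human_produced standing in for whatever probability estimate the second AI provides (names and example values are illustrative only):

    # Keep only solutions the second AI judges a human could plausibly
    # have produced; discard the rest as "unsafe".

    THRESHOLD = 0.01   # "greater than 1% probability"

    def filter_solutions(solutions, p_human_produced):
        return [s for s in solutions if p_human_produced(s) > THRESHOLD]

    safe = filter_solutions(["proof sketch", "alien nanotech design"],
                            lambda s: 0.4 if "proof" in s else 1e-6)
    print(safe)   # ['proof sketch']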
_rpd00

provided the field is important within the context of human societal development and in engaging the material I gain a nuanced understanding of the content and a deep appreciation of how the originators created the system.

I'll suggest investigating the problem of "squaring the circle." It has its roots in the origins of mathematics, passes through geometric proofs (including the notions of formal proofs and proof from elementary axioms), was unsolved for 2000 years in the face of myriad attempts, and was proved impossible to solve using the ... (read more)

-1IlyaShpitser
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
_rpd00

How did Belmopan, Brasília, Abuja and Islamabad do it?

Well, all of these were deliberate decisions to build a national capital. They overcame the bootstrap problem by being funded by a pre-existing national tax base.

dozens of new cities built just in Singapore during the past half century

Again, government funding is used to overcome the bootstrap problem. Singapore is also geographically small, and many of these "cities" would be characterized as neighborhoods if they were in the US.

Las Vegas

Well, wikipedia says it began life as a wat... (read more)

_rpd00

Gentrification simply means that rents go up in certain parts of the city. It doesn't have directly something to do with new investments.

In my experience gentrification is always associated with renovation and new business investment. The wikipedia article seems to confirm that this is not an uncommon experience.

_rpd00

I think Seattle's South Lake Union development, kickstarted by Paul Allen and Jeff Bezos, is a counter example ...

http://crosscut.com/2015/05/why-everywhere-is-the-next-south-lake-union/

Perhaps gentrification is a more general counter example. But you're right, most developers opt for sprawl.

0ChristianKl
No, it's not in California. In California a city like Mountain View blocks a company like Google from building new infrastructure on its edges. In what sense? Gentrification simply means that rents go up in certain parts of the city. It doesn't directly have anything to do with new investments.
_rpd00

But similar profits are available at lower risk by developing at the edges of existing infrastructure. In particular, incremental development of this kind, along with some modest lobbying, will likely yield taxpayer funded infrastructure and services.

0ChristianKl
It seems like you can't do incremental development by building more real estate inside the cities, because the cities don't want to give new building permits that might lower the value of existing real estate.
_rpd00

High quality infrastructure and community services are expensive, but taxpayers are reluctant to relocate to the new community until the infrastructure and services exist. It's a bootstrap problem. Haven't you ever played SimCity?

0polymathwannabe
Then how are new cities ever founded? How did Belmopan, Brasília, Abuja and Islamabad do it? Look at the dozens of new cities built just in Singapore during the past half century. The OP's proposal to build a city in the middle of the desert strikes me as similar to the history of Las Vegas. What parts of it can be replicated?
0ChristianKl
It's expensive but interest rates are low and the possible profit is huge.
_rpd10

Perhaps a Mathematics for Philosophers book like this http://www.amazon.com/dp/1551119099 ?

_rpd10

We can expect lower food prices. High food prices have been an important political stressor in developing nations.

_rpd00

They mainly decided not to cut their production.

And there is a good reason for this decision. Saudi Arabia tried cutting production in the '80s to lift prices, and it was disastrous for them. Here's a blog post with nice graphs showing what happened ...

Understanding Saudi Oil Policy: The Lessons of ‘79

_rpd20

KIC 8462852 Faded at an Average Rate of 0.165+-0.013 Magnitudes Per Century From 1890 To 1989

Bradley E. Schaefer (Submitted on 13 Jan 2016)

KIC 8462852 has been dimming for a century. The comet explanation is very unlikely.
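For a sense of scale (a back-of-the-envelope check of my own, not a figure from the paper), the standard magnitude-flux relation turns that fade rate into a fractional dimming:

    # flux ratio = 10 ** (-0.4 * delta_magnitude)
    fade = 0.165                       # magnitudes per century (Schaefer 2016)
    flux_ratio = 10 ** (-0.4 * fade)
    print(f"{(1 - flux_ratio) * 100:.1f}% dimmer after one century")   # ~14.1%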

1ahbwramc
I just posted a comment on facebook that I'm going to lazily copy here:
_rpd30

If you are just trying to communicate risk, analogy to a virus might be helpful in this respect. A natural virus can be thought of as code that has goals. If it harms humankind, it doesn't 'intend' to, it is just a side effect of achieving its goals. We might create an artificial virus with a goal that everyone recognizes as beneficial (e.g., end malaria), but that does harm due to unexpected consequences or because the artificial virus evolves, self-modifying its original goal. Note that once a virus is released into the environment, it is nontrivial to 'delete' or 'turn off'. AI will operate in an environment that is many times more complex: "mindspace".

_rpd00

A scenario not mentioned: my meat self is augmented cybernetically. The augmentations provide for improved, then greatly improved, then vast cognitive enhancements. Additionally, I gain the ability to use various robotic bodies (not necessarily androids) and perhaps other cybernetic bodies. My perceived 'locus' of consciousness/self disassociates from my original meat body. I see through whatever eyes are convenient, act through whatever hands are convenient. The death of my original meat body is a trauma, like losing an eye, but my sense of self is uninterrupted, since its locus has long since shifted to the augmentation cloud.