Comment author: CronoDAS 23 May 2017 05:06:05PM 0 points

Ah, pilot wave theory. It gets around the "no local realism" theorem by using non-local hidden variables...
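
For reference, here is a sketch of the standard de Broglie-Bohm formalism behind that remark (textbook material, not anything from the linked article): write the wave function in polar form, and let each particle follow a definite trajectory whose velocity is fixed by the phase S:

    \psi(q_1, \dots, q_N, t) = R\, e^{iS/\hbar},
    \qquad
    \frac{dQ_k}{dt} = \frac{\nabla_k S}{m_k} \bigg|_{(Q_1(t), \dots, Q_N(t))}

The hidden variables are the particle positions Q_k, and the non-locality is explicit in the guidance equation: the velocity of particle k depends on the simultaneous positions of all N particles through S.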

Comment author: korin43 23 May 2017 08:07:10PM 0 points

Does it use anything non-local? The experiments in the article use macroscopic fluids, which presumably don't have non-local effects.

Comment author: korin43 23 May 2017 04:42:51PM 1 point

"The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.

Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features."

Comment author: korin43 23 May 2017 04:44:46PM 0 points

Note that the theory seems to have been around since the 1930s, but these experiments are new (2016).

[Link] Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?

2 korin43 23 May 2017 04:38PM
Comment author: korin43 13 May 2017 02:43:23PM *  3 points

From the perspective of the God of Evolution, we are the unfriendly AI:

  • We were supposed to be compelled to reproduce, but we figured out that we can get the reward by disabling our reproductive functions and continuing to go through the motions.
  • We were supposed to seek out nutritious food and eat it, but we figured out that we could concentrate the parts that trigger our reward centers and just eat that.

And of course, we're unfriendly to everything else too:

  • Humans fight each other over farmland (= land that can be turned into food, which can be turned into humans) all the time.
  • We're trying to tile the universe with human colonies and probes. It's true that we're not strictly trying to tile the universe with our DNA, but we are trying to turn it all into human things, and it's not uncommon for people to be sad about the parts of the universe we can never reach and turn into humantronium.
  • We do not love or hate the cow/chicken/pig, but they are made of meat which can be turned into reward center triggers.

As to why we're not exactly like a paperclip maximizer, I suspect one big piece is:

  • We're not able to make direct copies of ourselves or extend our personal power to the extent that we expect AI to be able to, so "being nice" is adaptive because there are a lot of things we can't do alone. We expect that an AI could just make itself bigger or make exact copies that won't have divergent goals, so it won't need this.
Comment author: lmn 11 April 2017 09:32:49PM 1 point

"The big difference is the proximity to actual diversity, when you work with and live with and see diverse people every day, you get acclimated to it and accept it as the norm..."

Kind of like how the mayor of London said people must now accept a certain level of terrorism as 'part and parcel' of living in a big city?

Comment author: korin43 24 April 2017 10:48:45PM 0 points

This makes me wonder how much of the liberal/conservative divide over how seriously to take minor acts of terrorism has to do with direct experience of big cities. If you don't live in a city, hearing about a terrorist attack in one is probably really scary, but if you've actually lived in a big city, a few people dying every few years is incredibly uneventful (for comparison, 318 people were murdered in my city last year).

Comment author: Daniel_Burfoot 19 April 2017 02:13:42AM *  2 points

I really want self-driving cars to be widely adopted as soon as possible. There are many reasons; the one that occurred to me today while walking down the street is: look at all the cars on the street. Now imagine all the parked cars disappear and only the moving cars remain. A lot less clutter, right? What could we do with all that space? That's the future we could have if SDCs appear (assuming that most people will use services like Lyft/Uber with robotic drivers instead of owning their own car).

Comment author: korin43 20 April 2017 01:24:38AM 0 points

I sometimes wonder if there is more low-hanging fruit in lives that could be saved by improving car safety. Self-driving cars are obviously one way to do that, but I worry that we're ignoring easier solutions because self-driving cars will solve the problem eventually (not that I know what those easier solutions are).

Comment author: korin43 29 March 2017 07:38:22PM 7 points

As a software engineer, it seems strange to me that Arbital is trying to be an encyclopedia, debate system, and blogging site at the same time. What made you decide to put those features together in one piece of software?

Comment author: eternal_neophyte 22 March 2017 10:23:43AM *  0 points

In combination with an AI whose social skills are fundamentally stunted in some way, this might actually work. If the AI cannot directly interface with the world in any meaningful way without the key, and it doesn't have the power to persuade a human actor to supply it with the key, it's pretty much trapped (unless there is some way for it to break its own encryption).

Edit: notwithstanding the possibility that some human being may be stupid enough to supply it with the key unprompted.

Comment author: korin43 23 March 2017 08:20:28PM 0 points

I think being encrypted may not actually help much with the control problem, since the problem isn't that we expect an AI to fully understand what we want and then be evil; it's that we're worried an AI won't be optimizing for what we want. Not knowing what the outputs actually do doesn't seem like it would help at all (except that the AI would only have the inputs we want it to have).

Comment author: korin43 21 March 2017 03:18:12PM 0 points

"In this blogpost, we're going to train a neural network that is fully encrypted during training (trained on unencrypted data). The result will be a neural network with two beneficial properties. First, the neural network's intelligence is protected from those who might want to steal it, allowing valuable AIs to be trained in insecure environments without risking theft of their intelligence. Secondly, the network can only make encrypted predictions (which presumably have no impact on the outside world because the outside world cannot understand the predictions without a secret key). This creates a valuable power imbalance between a user and a superintelligence. If the AI is homomorphically encrypted, then from it's perspective, the entire outside world is also homomorphically encrypted. A human controls the secret key and has the option to either unlock the AI itself (releasing it on the world) or just individual predictions the AI makes (seems safer)."

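For a concrete sense of what the homomorphic property buys, here is a minimal sketch in Python (assuming Python 3.9+ for math.lcm and the three-argument modular inverse) using textbook Paillier encryption, which is additively homomorphic. This is not the integer-vector scheme from the linked post, the function names are my own, and the fixed small primes are hopelessly insecure; it only illustrates the core trick: whoever holds the ciphertexts can compute a weighted sum, the building block of a network's linear layers, without ever seeing the plaintexts, and only the holder of the secret key can read the result.

    # Toy Paillier encryption: additively homomorphic, illustration only.
    import math
    import random

    def keygen(p=10007, q=10009):
        # Toy parameters; real Paillier uses ~1024-bit primes.
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        g = n + 1                          # standard simple generator choice
        # mu = (L(g^lam mod n^2))^(-1) mod n, where L(x) = (x - 1) // n
        mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
        return (n, g), (lam, mu, n)

    def encrypt(pub, m):
        n, g = pub
        while True:
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:        # r must be a unit mod n
                break
        return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(priv, c):
        lam, mu, n = priv
        return (((pow(c, lam, n * n) - 1) // n) * mu) % n

    def add_enc(pub, c1, c2):
        # E(m1) * E(m2) mod n^2 decrypts to m1 + m2
        n, _ = pub
        return (c1 * c2) % (n * n)

    def mul_plain(pub, c, k):
        # E(m)^k mod n^2 decrypts to k * m
        n, _ = pub
        return pow(c, k, n * n)

    pub, priv = keygen()
    inputs = [3, 5, 2]                     # private data, encrypted below
    weights = [4, 1, 7]                    # public plaintext weights
    enc = [encrypt(pub, m) for m in inputs]
    acc = encrypt(pub, 0)
    for c, w in zip(enc, weights):
        acc = add_enc(pub, acc, mul_plain(pub, c, w))
    assert decrypt(priv, acc) == 31        # 3*4 + 5*1 + 2*7, computed blind
    print(decrypt(priv, acc))

The catch for real neural networks is the nonlinearities: adding ciphertexts and multiplying by plaintext constants only gets you linear layers, so encrypted-inference schemes typically approximate activation functions with low-degree polynomials. That doesn't change the point in the quoted post, though: whoever holds the secret key decides whether any given output ever becomes legible to the outside world.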