
Comment author: username2 24 May 2017 04:25:51PM *  0 points [-]

Yes and yes and yes (those are all examples mentioned in the article). If you have a specific example of a quantum phenomenon that pilot wave theory doesn't exhibit, I'd like to know. Pilot wave advocates claim that pilot wave theory results in the same predictions, although I haven't had time to chase down sources or work this out for myself.

Comment author: Manfred 25 May 2017 06:14:39AM 0 points [-]

My knowledge of it is pretty superficial, but I'm pretty confused about how it represents states with a superposition of particle numbers. For a fixed number of (non-relativistic) particles you can always just put the interesting mechanics (including spin, electromagnetic charge, etc.!) in the wavefunction and then add an epiphenomenal, ontologically-fundamental particle like a cherry on top. Well, epiphenomenal in the von Neumann measurement paradigm; presumably advocates think it plays some role in measurement, but I'm still a bit vague on that.
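(For concreteness, a sketch of the simplest case - fixed N, spinless, non-relativistic: let $\psi$ evolve by the ordinary Schrödinger equation, and let the added particle positions ride along via the guidance equation

$$\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\Bigg|_{(Q_1,\dots,Q_N)},$$

with spin, charge, interactions, etc. living in $\psi$ and its Hamiltonian, as above. The particles never push back on $\psi$, hence the "cherry on top" feel.)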

Anyhow, for mixtures of particle numbers, I genuinely don't know how a Bohmian is supposed to get anything intuitive or pseudo-classical.

Comment author: Manfred 15 May 2017 09:44:55PM *  0 points [-]

I think the problem that the idea of doing special meta-reasoning runs into is that there's no clear line between meta-level decisions and object-level decisions.

"Will changing this line of code be good or bad?" is the sort of question that can be analyzed on the object level even if that line of code is part of my source. If you make an agent that uses a different set of rules when considering changes to itself, the agent still might create successor agents or indirectly cause it's code to be changed, in ways that follow the original object-level rules, because those decisions fall under object-level reasoning​.

Conversely, I think that means that it's useful and important to look at object-level reasoning applied to the self, and where that ends up.

Comment author: Manfred 15 May 2017 06:45:30AM *  0 points [-]

If we think about "what evolution was 'trying' to design humans for," I think it's pretty reasonable to ask what was evolutionarily adaptive in small hunter-gatherer tribes with early language. Complete success for evolution would be someone who was an absolutely astounding hunter-gatherer tribesperson, who had lots of healthy babies.

As someone who is not a lean, mean, hunting and gathering machine, nor someone with lots of healthy children, I feel like evolution has not gotten the outcomes it 'tried' to design humans to achieve.

Comment author: Manfred 10 April 2017 08:22:28AM 0 points [-]

Net utility according to what function? Presumably pleasure minus pain, right? As people have pointed out, this is not at all the utility function animals (including humans) actually use to make choices. It seems relevant to you presumably because the idea has some aesthetic appeal to you, not because God wrote it on a stone tablet or anything.

I think once people recognize that questions that seem to be about "objective morality" are really usually questions about their own moral preferences, they tend to abandon system-building in favor of self-knowledge.

Comment author: Manfred 03 April 2017 11:11:42PM 1 point [-]

I think we're far enough out from superhuman AI that we can take a long view in which OpenAI is playing an indirect rather than a direct role.

Instead of releasing specific advances or triggering an endgame arms race, I think OpenAI's biggest impacts on the far future are by advancing the pure research timeline and by affecting the culture of research. The first seems either modestly negative (less time available for other research before superhuman AI) or slightly positive (more pure research might lead to better AI designs); the second is (I think) a fairly big positive.

Best use of this big pile of money? Maybe not. Still, that's a high bar to clear.

Comment author: DustinWehr 03 April 2017 10:06:59PM *  13 points [-]

A guy I know, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists. That's an extreme POV. Most researchers in ML simply think that people who worry about superintelligence are uneducated cranks addled by sci-fi.

I hope everyone is aware of that perception problem.

Comment author: Manfred 03 April 2017 10:46:33PM *  3 points [-]

We've returned various prominent AI researchers alive the last few times, we can't be that murderous.

I agree that there's a perception problem, but I think there are plenty of people who agree with us too. I'm not sure how much this indicates that something is wrong versus is an inevitable part of the dissemination (or, if I'm wrong, the eventual extinction) of the idea.

Comment author: entirelyuseless 04 March 2017 03:28:16AM *  0 points [-]

It is not the responsibility of a decision theory to tell you how to form opinions about the world; it should tell you how to use the opinions you have. EDT does not mean reference class forecasting; it means expecting utility according to the opinions you would actually have if you did the thing, not ignoring the fact that doing the thing would give you information.

Or in other words, it means acting on your honest opinion of what will give you the best result, and not a dishonest opinion formed by pretending that your opinions wouldn't change if you did something.

Comment author: Manfred 04 March 2017 08:10:37AM 0 points [-]

I think this deflationary conception of decision theory has serious problems. First, because it doesn't pin down a decision-making algorithm, it's hard to talk about what choices it makes - you can argue for choices, but you can't demonstrate them without showing how they're generated in full. Second, it introduces more opportunities to fool yourself with verbal reasoning. Third, historically I think it's resulted in a lot of wasted words in philosophy journals, although maybe this is just objection one again.

Comment author: Manfred 03 March 2017 06:35:16PM *  0 points [-]

My take on EDT is that it's, at its core, vague about probability estimation. If the probabilities are accurate forecasts based on detailed causal models of the world, then it works at least as well as CDT. But if there's even a small gap between the model and reality, it can behave badly.

E.g. if you like vanilla ice cream but the people who get chocolate really enjoy it, you might not endorse an EDT algorithm that thinks of probabilities as frequencies within a reference class. I see the smoking lesion as a more sophisticated version of this same issue.
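To make that concrete, here's a toy sketch (all the numbers, and the crude "everyone who chose X" reference class, are invented for illustration) of how scoring an action by the average outcome of the people who took it comes apart from scoring it with your own tastes held fixed:

```python
# A toy version of the vanilla/chocolate story (all numbers invented).
# "Reference-class" scoring rates an action by the average enjoyment of
# everyone in the population who took it, which smuggles in the fact that
# chocolate-choosers are mostly people who already love chocolate.

# Observed (taste, choice, enjoyment) records from the population.
population = [
    ("loves_chocolate", "chocolate", 10),
    ("loves_chocolate", "chocolate", 9),
    ("loves_vanilla", "vanilla", 6),
    ("loves_vanilla", "vanilla", 5),
]

# My own tastes, held fixed: I'm a vanilla person.
my_payoff = {"vanilla": 6, "chocolate": 2}

def reference_class_score(action):
    """Average enjoyment among the people who chose this action."""
    scores = [enjoyment for _, choice, enjoyment in population if choice == action]
    return sum(scores) / len(scores)

def causal_model_score(action):
    """Expected enjoyment given my own (unchanging) tastes."""
    return my_payoff[action]

for action in ("vanilla", "chocolate"):
    print(action, reference_class_score(action), causal_model_score(action))
# Reference-class scoring prefers chocolate (9.5 vs 5.5), even though with
# my tastes held fixed vanilla is clearly better for me (6 vs 2).
```

Of course a real EDT proponent would condition on more than just the bare choice; the point is only that the gap between "my model of what would happen if I did this" and "what happens to people like me who do this" is where the trouble lives.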

But then if probabilities are estimated via causal model, EDT has exactly the same problem with Newcomb's Omega as CDT, because the problem with Omega lies in the incorrect estimation of probabilities when someone can read your source code.

So I see these as two different problems with two different methods of assigning probabilities in an underspecified EDT. This means that I predict there's an even more interesting version of your example where both methods fail. The causal modelers assume that the past can't predict their choice, and the reference class forecasters get sidetracked by options that put them in a good reference class without having causal impact on what they care about.

Comment author: Manfred 08 February 2017 07:56:58PM *  0 points [-]

What's the best way to invest in an UNU betting fund? Or is the answer to start one yourself?

Comment author: Thomas 06 February 2017 05:52:00PM *  0 points [-]

There is some wit here, but no proper solution.

Comment author: Manfred 06 February 2017 07:03:57PM 0 points [-]

Since you clearly have something in mind: once you reveal it, are we going to go "Oh, yeah, that's much more sensible than the gliders that are dropped near the boundary of glide vs. stall air pressure," or are we going to go "well, that's arbitrary"?

Rigid hot-air-balloon shapes that start out at 1500 Celsius and fall to earth once they are no longer keeping the air under them hot. Seed crystals in a hailstorm. Any object that falls faster if broken and will break if dropped from the higher altitude. Solid-state electrostatic thrusters pointed downward, that arc and fail if the pressure is too high. Spinning propeller craft thrusting downward that undergo a laminar to turbulent transition if the pressure is too high.
