If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two comment-notification options, including "Notify me of new top-level comments on this article".


I don't have enough karma yet to post this as an original post, but for those who remember Grognor:

1 - https://grognor.github.io/ A memorial site where people can submit tributes for publication and otherwise 'remember' him.

2 - https://www.youcaring.com/floridaexoticbirdsanctuary-955648 A fundraiser in honor of him.

He passed away sometime in June, and I know that he was a valued member of LessWrong who cared a lot about the communities he inhabited.

Are there any attempts at posthumous digital immortality? That is, collecting all the data about the person in the hope that a future AI will create his exact model.

http://grognor.stacky.net/index.php?title=Main_Page

Grognor did a good job collecting his own data. I don't have access to his alt twitter account, as it is a private account. But maybe someone else who does can help if the demand arises.

This is sad.

Some of his old tweets are pretty dark:

I haven't talked to anyone face to face since 2015

https://twitter.com/Grognor/status/868640995856068609

I just want to remind everyone that this thread exists.

I say this as often as I can. Reach out to me and say hello.


Latest results on KIC 8462852 / Boyajian's Star:

After comparing data from Spitzer and Swift (an infrared and an ultraviolet telescope, respectively): whatever the heck the three-dimensional distribution of the material causing the brightness dips is, the long-term secular dimming of the star is being caused by dust. Over the course of a year of observations the star dimmed less in the infrared than in the ultraviolet, with the light extinction dependent upon wavelength in a way that screams dust of a size larger than primordial interstellar dust (and thus likely in the star system rather than somewhere between us), but still dust.

Still a weird situation. There cannot be a very large amount of dust in total since there is no infrared excess, so we must be seeing small amounts of it pass directly between the star and us.

The dipping is also semiperiodic, to the point that a complex of dips beginning in May was predicted months in advance.

I read on a Russian blog that someone calculated the shape of objects able to produce such dips. They turned out to be strips about 10 million kilometres long orbiting the star. I think that is very similar to very large comet tails.

That's interesting... is the dust size still consistent with artificial objects?

The dust probably is just dust - scattering of blue light more than red is the same reason the sky is blue and the sun looks red at sunset (Rayleigh scattering / Mie scattering). It comes from scattering off particles smaller than a few times the wavelength of the light - so if visible light is being scattered less than UV, we know that lots of the particles are smaller than ~2 um. This is about the size of a small bacterium, so dust with interesting structure isn't totally out of the question, but still... it's probably just dust.
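To make the wavelength dependence concrete, here is a toy calculation (my own sketch, not from the observations above; it assumes pure Rayleigh scaling, where extinction goes as 1/wavelength^4, and the band wavelengths are rough illustrative values; real dust models use Mie theory, which softens the dependence for larger grains):

```python
# Toy illustration of why "less dimming in the IR than the UV" points to
# small grains: in the Rayleigh regime (particles << wavelength),
# extinction scales as 1/wavelength^4.
# Assumption: band wavelengths below are rough illustrative values.
wavelengths_um = {"UV (Swift)": 0.3, "V band": 0.55, "IR (Spitzer)": 4.5}

ref = wavelengths_um["V band"]
for band, lam in wavelengths_um.items():
    relative_extinction = (ref / lam) ** 4  # normalized to V band
    print(f"{band:13s} relative extinction ~ {relative_extinction:10.4f}")
```

Grains comparable to or larger than the wavelength would dim all bands nearly equally (grey extinction), which is why a strongly wavelength-dependent dimming argues for small dust rather than solid macroscopic objects.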

Can someone help me out with Paul Christiano's email/contact info? Couldn't find it anywhere online.

I might be able to discuss possibilities for implementing his Impact Certificate ideas with some very capable people here in India.

I don't, and maybe you've already tried this, but you could try contacting him on social sites like this one (user paulfchristiano), Medium, etc. Typical internet-stalking skillset.

Warning: please don't read if you are triggered by a discussion of post-mortem analysis (might come up in the comments).

I want to have my body donated to science, well, afterwards, and to convince my twin sister to organize the same thing; there's probably a dearth of comparative post-mortem studies of adult (aged) human twins. However, my husband said he wouldn't do it. I don't want to argue with him about something we both hope won't be an issue for many years to come, so, in pure scientific interest:

what do you think would be interesting to study in such a setting?

Sorry if I offended you; it wasn't my intention. I just can't ask this on Facebook, my Mom would eat me alive.

You could look into joining a twin registry. Discordant-twin designs are fairly powerful, but still need n > 50 or something like that to be worth doing. Plus, if you keep your own novel set of data, people will be less interested in analyzing it compared to a twin registry using a familiar set of questionnaires/scales/measures. (One of the reasons you see so much from twin registries or the UK Biobank: consistent measurements.) It would've been best if you two had been enrolled as kids, but perhaps better late than never.
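For intuition on why something like n > 50 pairs is needed, here is a rough power calculation (my own back-of-envelope sketch, not from the comment above; the effect size, alpha level, and normal approximation are all assumptions):

```python
# Approximate power of a paired (discordant-twin) t-test,
# using a normal approximation to keep it dependency-free.
from math import sqrt
from statistics import NormalDist

def paired_power(n_pairs, d=0.5, alpha=0.05):
    """Power to detect a standardized within-pair difference d,
    two-sided test at level alpha (normal approximation)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    return 1 - z.cdf(z_crit - d * sqrt(n_pairs))

for n in (2, 10, 25, 50, 100):
    print(f"{n:4d} pairs -> power ~ {paired_power(n):.2f}")
```

With a single pair the power is essentially nil for any plausible effect size, which is why one pair amounts to a case study rather than a study.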

Consider creating detailed records of lifestyle differences between you and your sister. Perhaps keep a diary (in effect creating a longitudinal dataset for folks to look at later).

There is an enormous interest in disentangling lifestyle choices from genetics for all sorts of health and nutrition questions.


Thank you for considering this, I think this could be very valuable.


Do you think that having one pair of twins is enough to get valuable data?

In the hierarchy of evidence, this would be a "case study." So the value is not as high as a proper study, but non-zero.

I think she will be open to it. Here's hoping. People usually don't get how having a twin makes you feel you live in an experiment - same clothes or different clothes (but people say different things to you when they see you in them - "why?"), same favourite poems and different ones (so weird, really). I always thought it a shame to have so much material go to waste.

I have a patent law question.

Summary/main question: Should patents ever be granted for a common, unoriginal idea, before any original work has been done, to protect the claimant's future work in the area of the claim? If we are not allowed to grant patents like that, what sort of schemes do we favor for bringing the incentives to make progress in competitive arenas of research closer to the societal value of the expected findings?

Companies often seem to need a promise that if they can make an idea work and find an audience, all of the unprotected advancements they must make (market research, product development, and building awareness in the audience (marketing)) won't just be stolen by some competitor the moment people start buying the thing.

It seems like a common situation: someone puts a lot of money into popularizing some innovation, but because it's an obvious innovation, they can't protect it, and you'll find it on AliExpress for like $3.50. They aren't compensated in proportion to the value they produced. If it can't be produced for $3.50, it will be produced by their largest, most complacent competitors to safeguard their stranglehold on the market. The incumbents will go completely unpunished for having sat on their hands long enough to let these new innovators threaten them; the idea will threaten them, then it will serve them, and it will serve as an example to anyone who tries to threaten them in the future, and innovation will generally be discouraged.

The expected rewards for solving a problem that takes a long time to solve are generally much lower than the societal value of the solution, because there's a high chance that another team will solve it first, and most of the resources invested in development will have been in vain. If a working group had exclusive rights to the solutions to some problem, whatever they turn out to be, the amount they ought to invest would be much closer to the solutions' actual value.

It's a way of limiting the inefficiencies of competition. It sort of reminds me of Bitcoin-NG: if I've understood it correctly, the protocol periodically elects a single working group to process the bulk of the transactions, to prevent costly duplication of effort.

So, to reiterate: should patents ever be granted before any original work has been done, to protect the claimant's future work in the area of the claim? And if not, what should we do instead (or what do we do instead) to bring the incentive to make progress in competitive arenas of research closer to the actual societal value of the expected findings?

That's not a patent law question, that's a social theory question using a bizarre form of patents as the mechanism.

And my answer is "absolutely not". I have no interest in preventing people from working on what they want, nor in protecting someone's unproven idea when there's no evidence that they're the right person to solve it or that there will be any success. Ideas are cheap; working systems are valuable.

Also, I'll take "the inefficiencies of competition" over the inefficiencies of monopoly any day, especially in public pursuits where governments have any say.

If you want to understand how companies can have incentives to produce new products, I think it's worth reading startup literature like Eric Ries's The Lean Startup.

It seems like a common situation: someone puts a lot of money into popularizing some innovation, but because it's an obvious innovation, they can't protect it, and you'll find it on AliExpress for like $3.50.

A small startup is unlikely to successfully run a patent battle in China. Having a patent won't protect the company from getting copied.

Let's look at an example. In the Quantified Self field, it would be nice to have a toilet that regularly does urine analysis and gives me data. In 1988 someone filed a patent for a toilet with that built directly in. That doesn't mean that any such product hit the market. Did that original company produce a product for the European or US market? No, there's no toilet that you can buy from the original company. On the other hand, if another person had tried to put something on the market, they could have been sued. There's no company that produced a product that can be easily bought.

Most startups fail, and when startups that filed patents fail, the patents are often bought by other parties who then use the patents to sue and do patent trolling.

China provides interesting opportunities. It's cheaper for someone to ship an item from China via AliExpress to me than it is for someone to ship the same item to me from an Amazon Fulfillment Center. I can buy a $0.70 item with free shipping from AliExpress, while I can't buy that from Amazon.

It's cheap to run a Kickstarter campaign and have a Chinese company produce your product. Doing this usually means that employees of the company are going to pass your design around, and your product will get sold in an unbranded version on AliExpress.

This means that the dream Kickstarter promised, where everybody can produce their idea and bring it to market, comes with the side problem of copycat products being produced, but that's still much better than it was in the past. It's also worth noting that you could in theory build your product in the US and not have factory employees pass the design around, but given that the Chinese factories are so efficient, Kickstarter inventors still have Chinese companies produce their products.

That's a bit sad, but 10 years ago the same person had no way to bring their product to market at all.

Upvoted for asking an interesting question, but my answer would be "probably not". Whether patents are a good idea even in their current form is debatable — see Michele Boldrin and David Levine's Against Intellectual Monopoly — and I expect beefing them up to be bad on the margin.

I'm unclear on whether the proposed super-patents would

  1. be the same as normal patents except fileable before the work of sketching a plausible design has been done, or

  2. be even more powerful, by also allowing the filer to monopolize a market in which they carry out e.g. "market research, product development and building awareness", even if that involves no original design work,

but in any case the potential downsides hit me as more obvious than the potential upsides.

Item 1 would likely lead to more patents being filed "just in case", even without a real intention of bringing a real product to market. This would then discourage other profit-seeking people/organizations from investigating the product area, just as existing patents do.

Item 2 seems to take us beyond the realm of patents and intellectual work; it's about compensating a seller for expenses which produce positive spillovers for other sellers. As far as I know, that's not usually considered a serious enough issue to warrant state intervention, like granting a seller a monopoly. I suspect that when The Coca-Cola Company runs an advert across the US, Wal-Mart sells more of its own knockoff colas, but the US government doesn't subsidize Coca-Cola or its advertising on those grounds!

A few days ago I asked for LW articles regarding the Chinese Room argument and got into a conversation with the user hairyfigment. As I am certainly not convinced of the validity of the Chinese Room argument myself, I tried to understand the Chinese gym extension of the argument and if/why it matters to the original point. In particular I pointed to the relevance of the brain not evidently being a digital computer. I went back to the 2014 book The Future of the Brain: Essays by the World's Leading Neuroscientists, which is a recent exposition of our current (quite poor) understanding of the brain, and in particular to the chapter The Computational Brain by Gary Marcus. Here are some quotes that I believe are relevant. Unfortunately I cannot provide the full chapter for copyright reasons, but I do recommend the book.

[...] we still haven't even resolved the basic question of whether brains are analog, digital, or (as I suspect but certainly can't prove) a hybrid of the two.

and

Going hand in hand with the neural network community's odd presumption of initial randomness was a needless commitment to extreme simplicity, exemplified by models that almost invariably included a single neuronal type, abstracted from the details of biology. We now know that there are hundreds of different kinds of neurons, and the exact details - of where synapses are placed, of what kinds of neurons are interconnected where - make an enormous difference. Just in the retina (itself a part of the brain), there are roughly twenty different types of ganglion cells; there, the idea that you could adequately capture what's going on with a single kind of neuron is absurd. Across the brain as a whole, there are hundreds of different types of neurons, perhaps more than a thousand, and it is doubtful that evolution would sustain such diversity if each type of neuron were essentially doing the same type of thing.

Is the non- or partially digital nature of the brain relevant to certain arguments based on neural networks presented in the Sequences?

Does it open the possibility that Searle's argument on syntactic symbol manipulation might be relevant?

Apart from the digital/analog point, what about the neural complexity and variety? What, if anything, does it show about the current state of AI research?

Ah, you mean to ask if the brain is special in a way that evades our ability to construct an analogy of the Chinese room argument for it? E.g. "our neurons don't individually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry, therefore there is nothing in my body that understands English."

I think such an imitation argument is totally valid. It doesn't necessarily bear on the Chinese room itself, which is a more artificial case, but it certainly applies to AI in general.

"our neurons don't indiviually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry"

The question is what the word "just" means in that sentence. Ordinarily it means to limit yourself to what is said there. The implication is that your behavior is explained by those simple laws, and not by anything else. But as I pointed out recently, having one explanation does not exclude others. So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals, or in other ways. In other words, the argument is false because the word "just" here implies something false.

Yeah, whenever you see a modifier like "just" or "merely" in a philosophical argument, that word is probably doing a lot of undeserved work.

The implication is that your behavior is explained by those simple laws

I don't think the laws of physics (chemistry) are actually simple in the case of large systems. Note that this understanding applies to the Chinese Room idea too - the contents of the rules/slips of paper are not "simple" by any means.

But I'm more concerned about a confusion in interpreting

and not by anything else

Are you merely claiming that there are other models which can alternatively be used to explain some or all of the behaviors (instead of trying to understand the lower-level physics/chemistry)? Or are you saying that the physics is insufficient and you must supplement it with something else in order to identify all causes of behavior?

I agree with the first, and disagree with the second.

Are you merely claiming that there are other models which can alternatively be used to explain some or all of the behaviors

There's that word, "merely," there, like your other word "just," which makes me say no to this. You could describe the situation as "there are many models," but you are likely to be misled by this. In particular, you will likely be misled into thinking there is a highly accurate model, which is that someone did what he did because of chemicals, and a vague and inaccurate model, which says for example that someone went to the store to buy milk. So rather than talking about models, it is better simply to say that we are talking about two facts about the world:

Fact 1: the person went to the store because of the behavior of chemicals etc. Fact 2: the person went to the store to buy milk.

These are not "merely" two different models: they are two different facts about the world.

Or are you saying that the physics is insufficient

I said in my comment, "So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals." If the first were insufficient, it would not be an explanation. Both are sufficient, and both are correct.

you must supplement it with something else in order to identify all causes of behavior?

Yes, if by "cause" we mean "explanation," as is normally meant, then you have to mention both to mention all causes, i.e. all explanations, since both are explanations and both are causes.

Fact 1: the person went to the store because of the behavior of chemicals etc. Fact 2: the person went to the store to buy milk. These are not "merely" two different models: they are two different facts about the world.

Not independent facts, surely. The person went to the store to buy milk because of the behavior of chemicals, right? Even longer chains... because they were thirsty, and they like milk because it reminds them of childhood, because their parents thought it was important for bone growth, because... because... ends eventually with the quantum configuration of the universe at some point. And you can correctly shortcut to there at any point in between.

I said they were two different facts, not two independent facts. So dependent or not (and this question itself is also more confused and complicated than you realize), if you do not mention them both, you are not mentioning everything that is there.

if you do not mention them both, you are not mentioning everything that is there.

Hmm. I don't think "mention everything that is there" is on my list of goals for such discussions. I was thinking more along the lines of "mention the minimum necessary". I'm still unclear whether you agree that physics is sufficient to describe all events in the universe including human behavior, even while acknowledging that there are higher-level models which are way easier to understand.

I'm still unclear whether you agree that physics is sufficient to describe all events in the universe including human behavior

It is sufficient to describe them in the way that it does describe them, which certainly includes (among other things) all physical motions. But it is obvious that physics does not make statements like "the person went to the store to buy milk," even though that is a true fact about the world, and in that way it does not describe everything.

Ok, one more attempt. Which part of "the person went to the store to buy milk" is not described by the quantum configuration of the local space? The person certainly is. Movement toward and in the store certainly is. The neural impulses that correspond to desire for milk very probably are.

Which part of "the person went to the store to buy milk" is not described by the quantum configuration of the local space?

All of it.

The person certainly is.

The person certainly is not; this is why you have arguments about whether a fetus is a person. There would be no such arguments if the question were settled by physics.

Movement toward and in the store certainly is.

Movement is, but stores are not; physics has nothing to say about stores.

The neural impulses that correspond to desire for milk very probably are.

Indeed, physics contains neural impulses that correspond to the desire for milk, but it does not contain desire, nor does it contain milk.

Hmm.. I do not think that is what I mean, no. I lean towards agreeing with Searle's conclusion but I am examining my thought process for errors.

Searle's argument is not that consciousness is not created in the brain. It is that consciousness is not based on syntactic symbol manipulation in the way a computer is, and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and thinks). He does not deny that we might discover the architecture of the brain in the future. All he does is demonstrate through analogy how syntactic operations work.

The Chinese gym rebuttal does not really address the issue. There is no denial by Searle that the brain is a system, with subcomponents, through whose structure consciousness emerges. That is a different discussion. He is arguing that the system must be doing something different from, or in addition to, syntactic symbol manipulation.

Since the neuroscience does not support the digital-information-processing view, where is the certainty of the community coming from? Am I missing something fundamental here?

I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated. It's perfectly possible to simulate a human on an ordinary classical computer (to arbitrary precision). Would that simulation of a human be conscious, if it matched the behavior of a flesh-and-blood human almost perfectly and could output to you via a text channel things like "well, I sure feel conscious"?

The reason LWers are so confident that this simulation is conscious is because we think of concepts like "consciousness," to the extent that they exist, as having something to do with the cause of us talking and thinking about consciousness. It's just like how the concept of "apples" exists because apples exist, and when I correctly think I see an apple, it's because there's an apple. Talking about "consciousness" is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label "consciousness" are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation. Demanding that one has to be made of flesh to be conscious is not merely chauvinism, it's a misunderstanding of what we have access to when we encounter consciousness.

I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated.

The working of a computer is not unimaginably complicated. Its basis is quite straightforward really. As I said in my answer to MrMind below “As Searle points out the meaning of zeros, ones, logic gates etc. is observer relative in the same way money (not the paper, the meaning) is observer relative and thus ontologically subjective. The electrons are indeed ontologically objective but that is not true regarding the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).”.

Talking about "consciousness" is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label "consciousness" are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation.

In our debate I am holding the position that there cannot be a simulation of consciousness using the current architectural basis of a computer. Searle has provided a logical argument. In my quotes above I show that the state of neuroscience does not point towards a purely digital brain. What is your evidence?

It is that consciousness is not based on syntactic symbol manipulation in the way a computer is, and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and thinks).

Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same. Indeed, not even computers are based on symbolic manipulation: at the deepest level, it's all electrons flowing back and forth.

Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same.

Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis? If that is the case, I would refer you to this article's section Misunderstandings of the Thesis. If I have understood wrong, I would be grateful if you could offer some more details on your point.

Indeed, not even computers are based on symbolic manipulation: at the deepest level, it's all electrons flowing back and forth.

We can demonstrate the erroneous logic of this statement by saying something like: ”Indeed, not even language is based on symbolic manipulation: at the deepest level, it's all sound waves pushing air particles back and forth”.

As Searle points out the meaning of zeros, ones, logic gates etc. is observer relative in the same way money (not the paper, the meaning) is observer relative and thus ontologically subjective. The electrons are indeed ontologically objective but that is not true regarding the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).

Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis?

No, otherwise we would have the certainty that the brain is Turing-equivalent, and I wouldn't have prefaced with "Either the brain is capable of doing things that would require infinite resources for a computer to perform". We do not have proof that everything not calculable by a Turing machine requires infinite resources, otherwise Church-Turing would be a theorem and not a thesis, but we have strong hints: every hypercomputation model is based on accessing some infinite resource (whether it's infinite time or infinite energy or infinite precision). Plus, recently we had this theorem: any function on the naturals is computable by some machine in some non-standard time.
So either the brain can compute things that a computer would take infinite resources to do, or the brain is at most as powerful as a Turing machine.

As per the electron thing, there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.

As per the electron thing, there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.

It is not the symbols that are the problem. It is that the semantic content of the symbols used in a digital computer is observer-relative. The circuits depend on someone understanding their meaning: the meaning is provided by the human engineer who, since he possesses the semantic content, understands the method of implementation and the calculation results at each level of abstraction. This is clearly not the case in the human brain, in which the symbols arise in a manner that allows for intrinsic semantic content.

Is anyone interested in starting a small team (2-3 people) to work on this Kaggle dataset?

https://www.kaggle.com/c/porto-seguro-safe-driver-prediction

Has Eliezer written anything outlining why he's working on AI rather than directly on life extension? I could guess (we need AI to speed up research, we need to make sure we don't die from AI first, etc.) but I'd prefer to read it explicitly. Posts not from Eliezer but answering the same question would also be welcome.

This article has substantive advice on how to be open minded:

http://fallibleideas.com/paths-forward

Good article. Seems like a friend.

Newton's theory of relativity has flaws, but it's still a good idea and can be used in plenty of cases.

The amount of goodness approach has no objective way to determine the sizes of the amounts, so it leads to subjective bias instead of objective knowledge, and it creates unresolvable disagreements between people.

There's nothing bad about two people with different priors coming to different conclusions. It creates an intellectual climate where a lot of different ideas get explored. Most breakthrough ideas have plenty of flaws at their birth and need to go through a lot of refinement to become valuable.

All solutions are equal because they all solve the problem.

If my problem is that I want to have a successful job interview, then I don't have a binary outcome. I want to get the job earning as much money as possible and modeling the salary with a scalar makes much more sense than having binary judgments.

Furthermore anytime I want to maximize the probability of an outcome I also care about a scalar. Why do you think that probabilities shouldn't be central in epistemology?

Newton's theory of relativity has flaws, but it's still a good idea and can be used in plenty of cases.

No it can't. It can only be used in situations where it happens to agree with reality. That's not the same as the theory being correct.

The amount of goodness approach has no objective way to determine the sizes of the amounts, so it leads to subjective bias instead of objective knowledge, and it creates unresolvable disagreements between people.

There's nothing bad about two people with different priors coming to different conclusions. It creates an intellectual climate where a lot of different ideas get explored. Most breakthrough ideas have plenty of flaws at their birth and need to go through a lot of refinement to become valuable.

You have misunderstood the problem. The problem is not that people come to different conclusions. Rather, the problem is that people are completely arbitrarily assigning scores to ideas. Since there is no objective reality underlying their scoring, there is no rational way for any two people to come to agreement on scores.

All solutions are equal because they all solve the problem.

If my problem is that I want to have a successful job interview, then I don't have a binary outcome. I want to get the job earning as much money as possible and modeling the salary with a scalar makes much more sense than having binary judgments.

Making a judgement about whether to take a job is a yes-or-no judgement. Making a decision about whether to say X during a job interview is a yes-or-no judgement. That doesn't prevent you from modelling salary with a scalar. If you judge that you should always take the job that earns you as much money as possible, then if job A money > job B money, you will say yes to A and no to B.
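A minimal sketch of that point (the job names and salaries are made up; it just shows a scalar model feeding binary judgements):

```python
# Scalar model (expected salary) feeding one yes/no judgement per offer.
offers = {"job_A": 95_000, "job_B": 80_000}  # hypothetical salaries

best = max(offers, key=offers.get)
decisions = {job: job == best for job in offers}  # binary accept/reject
print(decisions)  # {'job_A': True, 'job_B': False}
```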

Furthermore anytime I want to maximize the probability of an outcome I also care about a scalar. Why do you think that probabilities shouldn't be central in epistemology?

An idea either solves a problem or it doesn't.

There is no way to assign probabilities to ideas. Theories such as quantum mechanics assign probabilities to events, e.g. - radioactive decay of an atom. Assigning a probability to a theory makes no sense since there is no rule for assigning probabilities in the absence of an explanatory theory.

Newton's theory of relativity has flaws, but it's still a good idea and can be used in plenty of cases.

Is this intended to contradict something in the article?

There's nothing bad about two people with different priors coming to different conclusions.

People often disagree, np, but if there's no possible way to agree – if everything is just arbitrary – then you have a problem.

If my problem is that I want to have a successful job interview

That's not a well-defined problem.

Furthermore anytime I want to maximize the probability of an outcome I also care about a scalar. Why do you think that probabilities shouldn't be central in epistemology?

Maximizing a single metric has a binary outcome: either you did the thing which maximizes it or you didn't.

Neat paper about the difficulties of specifying satisfactory values for a strong AI. h/t Kaj Sotala.

The design of social choice AI faces three sets of decisions: standing, concerning whose ethics views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined to a single view that will guide AI behavior. [...] Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results.

I think it's slightly lacking in sophistication about aggregation of numerical preferences, and in how revealed preferences indicate that we don't actually have incommensurable or infinitely-strong preferences, but is overall pretty great.
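As a concrete illustration of why the aggregation step is treacherous, here is a minimal sketch (the names and utility numbers are made up; the point is that an apparently technical choice of normalization changes which option wins):

```python
# Two people report utilities for three options on their own private scales.
options = ["A", "B", "C"]
utilities = {
    "alice": {"A": 10.0, "B": 0.0, "C": 5.0},
    "bob":   {"A": 0.0,  "B": 2.0, "C": 1.9},
}

def range_normalize(u):
    # Squash each person's scale to [0, 1]: implicitly gives everyone
    # "equal say", erasing differences in preference strength.
    lo, hi = min(u.values()), max(u.values())
    return {k: (v - lo) / (hi - lo) for k, v in u.items()}

raw = {o: sum(u[o] for u in utilities.values()) for o in options}
norm = {o: sum(range_normalize(u)[o] for u in utilities.values()) for o in options}

print("raw-sum winner:       ", max(raw, key=raw.get))    # 'A' (alice's big numbers dominate)
print("normalized-sum winner:", max(norm, key=norm.get))  # 'C' (a compromise option)
```

Neither rule is privileged; picking one is itself an ethical decision about standing and interpersonal comparison, which is part of the paper's point.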

On the subject of the problem, I don't think we should program in values that are ad-hoc on the object level (what values to use - trying to program this by hand is destined for failure), or even the meta level (whose values to use). But I do think it's okay to use an ad-hoc process to try to learn the answers to the meta-level questions. After all, what's the worst that could happen? (irony). Of course, the ability to do this assumes the solution of other, probably more difficult philosophical/AI problems, like how to refer to people's values in the first place.

Note that these three things (standing, measurement, and aggregation) are unsolved for human moral decisionmaking as well.