
Comment author: Erfeyah 17 October 2017 08:25:08PM 0 points [-]

As per the electron thing, there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.

It is not the symbols that are the problem. It is that the semantic content of the symbols used in a digital computer is observer relative. The circuits depend on someone understanding their meaning: the meaning is provided by the human engineer who, since he possesses the semantic content, understands the method of implementation and the calculation results at each level of abstraction. This is clearly not the case in the human brain, in which the symbols arise in a manner that allows for intrinsic semantic content.

Comment author: gwern 17 October 2017 08:21:24PM 0 points [-]

You could look into joining a twin registry. Discordant-twin designs are fairly powerful, but still need n>50 or something like that to be worth doing. Plus if you keep your own novel set of data, people will be less interested in analyzing it compared to a twin registry using a familiar set of questionnaires/scales/measures. (One of the reasons you see so much from twin registries or the UK Biobank: consistent measurements.) It would've been best if you two had been enrolled as kids, but perhaps better late than never.

Comment author: gwern 17 October 2017 07:05:04PM 1 point [-]
Comment author: Dagon 17 October 2017 05:30:36PM *  0 points [-]

Yup, looks that way. LW 2.0 is running, but seems to have gone further toward the "publish thoughts, get some comments" model and away from the conversational feel we had here.

So it goes.

Comment author: Manfred 17 October 2017 04:45:14PM 1 point [-]

Plausibly? LW2 seems to be doing okay, which is gonna siphon off posts and comments.

Comment author: Lumifer 17 October 2017 04:15:52PM 1 point [-]

The patient is fading: pulse is weak, blood pressure is low and dropping, brain is getting anoxic. Absent a radical intervention, we don't anticipate a good prognosis.

Comment author: IlyaShpitser 17 October 2017 04:10:57PM 1 point [-]

In the hierarchy of evidence, this would be a "case study." So the value is not as high as a proper study, but non-zero.

Comment author: root 17 October 2017 03:28:53PM 2 points [-]

Is LW 1.0 dead?

Comment author: ChristianKl 17 October 2017 03:01:16PM 0 points [-]

Thank you for considering this, I think this could be very valuable.

Do you think that having one pair of twins is enough to get valuable data from it?

Comment author: ChristianKl 17 October 2017 02:57:18PM *  1 point [-]

Newton's theory of gravity has flaws, but it's still a good idea and can be used in plenty of cases.

The amount of goodness approach has no objective way to determine the sizes of the amounts, so it leads to subjective bias instead of objective knowledge, and it creates unresolvable disagreements between people.

There's nothing bad about two people with different priors coming to different conclusions. It creates an intellectual climate where a lot of different ideas get explored. Most breakthrough ideas have plenty of flaws at their birth and need to go through a lot of refinement to become valuable.

All solutions are equal because they all solve the problem.

If my problem is that I want to have a successful job interview, then I don't have a binary outcome. I want to get the job earning as much money as possible, and modeling the salary with a scalar makes much more sense than making binary judgments.

Furthermore, any time I want to maximize the probability of an outcome I also care about a scalar. Why do you think that probabilities shouldn't be central in epistemology?

Comment author: WalterL 17 October 2017 02:04:48PM 0 points [-]

Bummer.

Comment author: morganism 16 October 2017 11:47:06PM 0 points [-]

Physically active white men at high risk for plaque buildup in arteries

https://www.eurekalert.org/pub_releases/2017-10/uoia-paw101617.php

"When these findings were stratified by race and gender, they found that white men were at the highest risk-they were 86 percent more likely to have CAC. There was no higher odds of CAC for black participants who exercised at this level, and while there was a similar trend for white women it was not statistically significant."

"High levels of exercise over time may cause stress on the arteries leading to higher CAC," said Rana, "however this plaque buildup may well be of the more stable kind, and thus less likely to rupture and causes heart attack, which was not evaluated in this study."

Comment author: alanforr 16 October 2017 06:35:23PM 0 points [-]

This link proposes an improvement to epistemology:

http://fallibleideas.com/essays/yes-no-argument

Comment author: Dagon 16 October 2017 03:01:13PM 3 points [-]

That's not a patent law question, that's a social theory question using a bizarre form of patents as the mechanism.

And my answer is "absolutely not". I have no interest in preventing people from working on what they want, nor in protecting someone's unproven idea when there's no evidence that they're the right person to solve it or that there will be any success. Ideas are cheap; working systems are valuable.

Also, I'll take "the inefficiencies of competition" over the inefficiencies of monopoly any day, especially in public pursuits where governments have any say.

Comment author: MaryCh 16 October 2017 06:18:37AM 0 points [-]

I think she will be open to it. Here's hoping. People usually don't get how having a twin makes you feel like you're living an experiment - same clothes or different clothes (but people say different things to you when they see you in them - "why?"), same favourite poems and different ones (so weird, really). I've always thought it a shame to have so much material go to waste.

Comment author: gwern 16 October 2017 01:31:52AM *  2 points [-]

Yes: "The Power of Simulation: What Virtual Creatures Can Teach Us", Katherine Hayles 1999:

The designer's intentions, implicit in the fitness criteria he specifies and the values he assigns to these criteria, become explicit when he intervenes to encourage "interesting" evolutions and prohibit "inelegant" ones ("3-D Morphology", pp. 31, 29). For example, in some runs creatures evolved who achieved locomotion by exploiting a bug in the way conservation of momentum was defined in the world's artifactual physics: they developed appendages like paddles and moved by hitting themselves with their own paddles. "It is important that the physical simulation be reasonably accurate when optimizing for creatures that can move within it," Sims writes. "Any bugs that allow energy leaks from non-conservation, or even round-off errors, will inevitably be discovered and exploited by the evolving creatures," ("Evolving Virtual Creatures," p. 18). In the competitions, other creatures evolved to exceptionally tall statures and controlled the cube by simply falling over on it before their opponents could reach it ("3-D Morphology," p. 29.) To compensate, Sims used a formula that took into account the creature's height when determining its starting point in the competition; the taller the creature, the further back it had to start. Such adjustments clearly show that the meaning of the simulation emerges from a dynamic interaction between the creator, the virtual world (and the real world on which its physics is modeled), the creatures, the computer running the programs, and in the case of visualizations, the viewer watching the creatures cavort. In much the same way that the recursive loops between program modules allow a creature's morphology and brain to co-evolve together, so recursive loops between these different components allow the designer's intent, the creatures, the virtual world, and the visualizations to co-evolve together into a narrative that viewers find humanly meaningful...compared to artificial intelligence, artificial life simulations typically front-load less intelligence in the creatures and build more intelligence into the dynamic process of co-adapting to well-defined environmental constraints. When the environment fails to provide the appropriate constraints to stimulate development, the creator steps in, using his human intelligence to supply additional adaptive constraints, for example when Sims put a limit on how tall the creatures can get.

Comment author: gwern 16 October 2017 01:15:16AM 2 points [-]

There was something else going on, though. The AI was crafting super weapons that the designers had never intended. Players would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces.

Checking into this one, I don't think it's a real example of learning going wrong, just a networking bug involving a bunch of low-level stuff. It would be fairly unusual for a game like Elite Dangerous to have game AI using any RL techniques (the point is for it to be fun, not hard to beat, and they can easily cheat), and the forum post & news coverage never say it learned to exploit the networking bug. Some of the comments in that thread describe it as random and somewhat rare, which is not consistent with it learning a game-breaking technique. Eventually I found a link to a post by ED programmer Mark Allen, who explains what went wrong with his code: https://forums.frontier.co.uk/showthread.php?t=256993&page=11&p=4002121&viewfull=1#post4002121

...Prior to 1.6/2.1 the cached pointer each weapon held to its data was a simple affair pointing at a bit of data loaded from resources, but as part of the changes to make items modifiable I had to change this so it could also be a pointer to a block of data constructed from a base item plus a set of modifiers - ideally without the code reading that data caring (or even knowing) where it actually came from and therefore not needing to be rewritten to cope. This all works great in theory, and then in practice, up until a few naughty NPC's got into the mix and decided to make a mess. I'll gloss over a few details here, but the important information is that a specific sequence of events relating to how NPCs transfer authority from one players' machine to another, combined with some performance optimisations and an otherwise minor misunderstanding on my part of one of the slightly obscure networking functions got the weapon into an odd state. The NPC's weapon which should have been a railgun and had all the correct data for a railgun, but the cached pointer to its weapon data was pointing somewhere else. Dangling pointers aren't all that uncommon (and other programmers may know the pains they can cause!) but in this case the slightly surprising thing was that it would always be a pointer to a valid WeaponData...It then tells the game to fire 12 shots but now we're outside the areas that use the cached data, the weapon manager knows its a railgun and dutifully fires 12 railgun shots :) . Depending on which machine this occurred on exactly it would either be as a visual artefact only that does no damage, or (more rarely but entirely possible) the weapon would actually fire 12 shots and carve a burning trail of death through the space in front of it. The hilarious part (for people not being aimed at) is that the bug can potentially cause hybrids of almost any two weapons... In my testing I've seen cases of railguns firing like slugshots, cannons firing as fast as multicannons, or my favourite absurd case of a Huge Plasma Accelerator firing every frame because it thought it was a beam laser... Ouch.

(I would also consider the mascara example to not be an example of misbehaving but dataset bias. The rest check out.)

Comment author: IlyaShpitser 15 October 2017 11:09:06PM *  0 points [-]

Consider creating detailed records of lifestyle differences between you and your sister. Perhaps keep a diary (in effect creating a longitudinal dataset for folks to look at later).

There is an enormous interest in disentangling lifestyle choices from genetics for all sorts of health and nutrition questions.


Thank you for considering this, I think this could be very valuable.

Comment author: Lumifer 15 October 2017 10:06:29PM 0 points [-]

comes with the situation where you are modeling yourself

Maybe, but that's not the context in this thread.

Comment author: satt 15 October 2017 10:01:54PM 1 point [-]

Upvoted for asking an interesting question, but my answer would be "probably not". Whether patents are a good idea even as is is debatable — see Michele Boldrin and David Levine's Against Intellectual Monopoly — and I expect beefing them up to be bad on the margin.

I'm unclear on whether the proposed super-patents would

  1. be the same as normal patents except fileable before the work of sketching a plausible design has been done, or

  2. would be even more powerful, by also allowing the filer to monopolize a market in which they carry out e.g. "market research, product development and building awareness", even if that involves no original design work,

but in any case the potential downsides hit me as more obvious than the potential upsides.

Item 1 would likely lead to more patents being filed "just in case", even without a real intention of bringing a real product to market. This would then discourage other profit-seeking people/organizations from investigating the product area, just as existing patents do.

Item 2 seems to take us beyond the realm of patents and intellectual work; it's about compensating a seller for expenses which produce positive spillovers for other sellers. As far as I know, that's not usually considered a serious enough issue to warrant state intervention, like granting a seller a monopoly. I suspect that when The Coca-Cola Company runs an advert across the US, Wal-Mart sells more of its own knockoff colas, but the US government doesn't subsidize Coca-Cola or its advertising on those grounds!

Comment author: ChristianKl 15 October 2017 09:17:22PM 1 point [-]

If you want to understand how companies can have incentives to produce new products, I think it's worth reading startup literature like Eric Ries's "The Lean Startup".

It seems like a common situation, someone puts a lot of money into popularizing some innovation, but because it's an obvious innovation, they can't protect it, and you'll find it on aliexpress for like $3.50.

A small startup is unlikely to successfully run a patent battle in China. Having a patent won't protect the company from getting copied.

Let's look at an example. In the Quantified Self field, it would be nice to have a toilet that regularly does urine analysis and gives me data. In 1988 someone filed a patent for a toilet with that directly built in. That doesn't mean that any such product hit the market. Did that original company produce a product for the European or US market? No, there's no toilet that you can buy from the original company. On the other hand, if another person had tried to put something on the market they could have been sued. There's no company that produced a product that can be easily bought.

Most startups fail, and when startups that filed patents fail, the patents are often bought by other parties who then use them to sue and do patent trolling.

China provides interesting opportunities. It's cheaper for someone to ship an item from China via Aliexpress to me than it is to ship the same item to me from an Amazon Fulfillment Center. I can buy a 70-cent item with free shipping from Aliexpress, while I can't buy that from Amazon.

It's cheap to run a Kickstarter campaign and have a Chinese company produce your product. Doing this usually means that employees of the company are going to pass your design around, and your product will get sold in an unbranded version on Aliexpress.

This means that the dream Kickstarter promised, where everybody can produce their idea and bring it to market, comes with the side problem of copycat products being produced, but that's still much better than it was in the past. It's also worth noting that you could in theory build your product in the US and not have factory employees pass the design around, but given how efficient the Chinese factories are, the Kickstarter inventors still have a Chinese company produce their products.

That's a bit sad, but 10 years ago the same person had no way to bring their product to market at all.

Comment author: alanforr 15 October 2017 08:30:45PM 0 points [-]

Why would we need more research to work out that the simulation hypothesis is a bad idea? Computational universality implies that if we were being simulated on a computer, it would be impossible for us to know about the underlying hardware. Any hardware that implements a universal set of computational gates can support universal computation. There are lots of different kinds of universal gates, so you can't tell what gates are being used by looking at the results of a computation. So the simulation hypothesis does no work in explaining what we observe. The simulation hypothesis also implies we can't understand the real laws of physics, the physics of the simulator, since no experiment we conduct can tell us anything about the hardware. Another problem: the simulation might be programmed to change the laws of physics arbitrarily so it ruins all of our existing knowledge of the laws of physics and everything else.
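
As a toy illustration of the universal-gates point (my own sketch, not from the linked essay): the same Boolean function can be wired entirely from NAND gates or entirely from NOR gates, and the two versions have identical input/output behaviour, so nothing in the results of the computation reveals which "hardware" was used.

    # Illustrative sketch: two different universal gate sets computing the same function.
    def nand(a, b): return not (a and b)
    def nor(a, b):  return not (a or b)

    def xor_from_nand(a, b):
        # XOR built only from NAND gates
        t = nand(a, b)
        return nand(nand(a, t), nand(b, t))

    def xor_from_nor(a, b):
        # XOR built only from NOR gates
        n1 = nor(a, b)
        xnor = nor(nor(a, n1), nor(b, n1))
        return nor(xnor, xnor)

    for a in (False, True):
        for b in (False, True):
            # identical behaviour: an outside observer cannot tell the gate sets apart
            assert xor_from_nand(a, b) == xor_from_nor(a, b)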

There are no answers to these criticisms so the simulation hypothesis is false.

Comment author: satt 15 October 2017 07:53:03PM *  1 point [-]

I believe the following is a comprehensive list of LW-wide surveys and their turnouts. Months are those when the results were reported.

  1. May 2009, 166
  2. December 2011, 1090
  3. December 2012, 1195
  4. January 2014, 1636
  5. January 2015, 1503
  6. May 2016, 3083

And now in the current case we have "about 300" responses, although results haven't been written up and published. I hope they will be. If the only concern is sample size, well, 300 beats zero!

Comment author: morganism 15 October 2017 07:39:39PM 0 points [-]

Glucosamine dampens the brain hyperexcitability seen in seizures or epilepsy

http://www.sciencecodex.com/dietary-supplement-dampens-brain-hyperexcitability-seen-seizures-or-epilepsy-615985

Since neural excitability in the hippocampus is a key feature of seizures and epilepsy, they hypothesized that acutely increasing protein O-GlcNAcylation might dampen the pathological hyperexcitability associated with these brain disorders.

Comment author: ChristianKl 15 October 2017 05:27:58PM 0 points [-]

But from my perspective it hardly matters: the point is that I have had no luck building contacts through the existing institutions and channeling my desire to help in any useful way.

From the outside view, a person who has no luck building contacts with existing institutions is unlikely to be a good person to start a new institute.

Of course, getting someone like Eric S. Raymond to be open to writing a book with you is a good sign.

Comment author: entirelyuseless 15 October 2017 03:20:13PM 0 points [-]

It is partly in the territory, and comes with the situation where you are modeling yourself. In that situation, the thing will always be "too complex to deal with directly," regardless of its absolute level of complexity.

In response to comment by Jiro on Feedback on LW 2.0
Comment author: gjm 15 October 2017 02:58:32PM 1 point [-]

So LW2 (as a whole, it seems) is bad and everyone involved should feel bad ... because the people who designed it think that web typography is a mature science whose recommendations can be taken at face value?

Maybe I'm being dim, but that seems really strange to me. It isn't clear to me what even makes you confident that they think that; I'm not sure what it means to say that something is designed around that idea (it can be designed via a process that assumes that idea, I guess, but that really isn't the same); and getting from there to "LW2 is bad and everyone involved should feel bad" seems like an enormous leap.

I wonder whether I'm missing some vital context here.

Comment author: entirelyuseless 15 October 2017 02:53:17PM 0 points [-]

We can make similar answers about people's intentions.

Comment author: MaryCh 15 October 2017 11:09:19AM 1 point [-]

Warning: please don't read if you are triggered by a discussion of post-mortem analysis (might come up in the comments).

I want to have my body donated to science, well, afterwards, and to convince my twin sister to organize the same thing; there's probably a dearth of comparative post-mortem studies of adult (aged) human twins. However, my husband said he wouldn't do it. I don't want to argue with him about something we both hope won't be an issue for many years to come, so, in pure scientific interest:

what would you think it would be interesting to study in such a setting?

Sorry if I offended you, it wasn't my intention. Just can't ask this on facebook, my Mom would eat me alive.

Comment author: turchin 15 October 2017 10:38:18AM *  2 points [-]
Comment author: Tenoke 15 October 2017 09:21:08AM *  1 point [-]

Yeah, this survey was pretty disappointing - I had to stop myself from making a negative comment after I took it (though someone else had). I am glad you realized it too, I guess. Even things like starting with a bunch of questions about the new lesswrong-inspired site, and the spacing between words, were off, let alone the things you mention.

I am honestly a little sad that someone more competent in matters like these, like gwern, didn't take over (as I always assumed would happen if yvain gave up on doing it), because half-hearted attempts like this probably hurt a lot more than they help - e.g. someone coming back in 4 months and seeing how we've gone down to only 300 (!) responders in the annual survey is going to assume LW is even more dead than it really is. This reasoning goes beyond the survey.

Comment author: Lumifer 15 October 2017 02:58:46AM 1 point [-]

what are the requirements for the framework to be applicable?

This framework lives in the map, not in the territory. It is a model feature, applicable when it makes a model more useful. Specifically, it makes sense when the underlying reality is too complex to deal with directly. Because of the complexity we, basically, reduce the dimensionality of the problem by modeling it as a simpler combination of aggregates. "Values" are one kind of such aggregates.

If you have an uncomplicated algorithm with known code, you don't need such simplifying features.

Comment author: Lumifer 15 October 2017 02:53:18AM 0 points [-]

No.

Comment author: MakoYass 15 October 2017 12:20:37AM 1 point [-]

I have a patent law question.

Summary/main question: Should patents ever be granted for a common, unoriginal idea, before any original work has been done, to protect the claimant's future work in the area of the claim? If we are not allowed to grant patents like that, what sort of schemes do we favor for bringing incentives to make progress in competitive arenas of research closer to the societal value of the expected findings?

Companies often seem to need a promise that if they can make an idea work and find an audience, all of those unprotected advancements they must make (market research, product development, and building awareness in the audience (marketing)) won't just be stolen by some competitor the moment people start buying the thing.

It seems like a common situation: someone puts a lot of money into popularizing some innovation, but because it's an obvious innovation, they can't protect it, and you'll find it on Aliexpress for like $3.50. They aren't compensated in proportion to the value they produced. If it can't be produced for $3.50, it will be produced by their largest, most complacent competitors to safeguard their stranglehold on the market. The incumbents will go completely unpunished for having sat on their hands long enough to allow these new innovators to threaten them; the idea will threaten them, then it will serve them, and it will serve as an example to anyone who tries to threaten them in the future, and innovation will generally be discouraged.

The expected rewards for solving a problem that takes a long time to solve are generally much lower than the societal value of the solution, because there's a high chance that another team will solve it first, and most of the resources invested in development will have been in vain. If a working group had exclusive rights to the solutions to some problem, whatever they turn out to be, the amount they ought to invest would be much closer to the solutions' actual value.

It's a way of limiting the inefficiencies of competition. It sort of reminds me of Bitcoin-NG: if I've understood it correctly, the protocol periodically elects a single working group to process the bulk of the transactions, to prevent costly duplication of effort.

So, to reiterate, should patents ever be granted before any original work has been done, to protect the claimant's future work in the area of the claim, and if not, what should we do instead, or what do we do instead, to bring the incentive to make progress in competitive arenas of research closer to the actual societal value of the expected findings?

Comment author: MakoYass 14 October 2017 11:33:32PM 0 points [-]

A person with less than 6% hair is bald; a person with 6%-15% hair might be bald, but it is unknowable due to the nature of natural language. A person with 15%-100% hair is not bald.

We can't always say whether someone is bald, but more often, we can. Baldness remains applicable.
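
Taking the thresholds above literally, here is a minimal sketch (the cut-offs are the commenter's illustrative numbers, not any standard definition) of how a vague predicate can still classify most cases while leaving a borderline band undecided:

    # Illustrative only; thresholds are the comment's example numbers.
    def bald(hair_fraction):
        """Return True (bald), False (not bald), or None (unknowable) for a fraction in [0, 1]."""
        if hair_fraction < 0.06:
            return True
        if hair_fraction <= 0.15:
            return None  # natural language gives no determinate answer here
        return False

    print(bald(0.02), bald(0.10), bald(0.60))  # True None False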

Comment author: IlyaShpitser 14 October 2017 10:48:09PM 0 points [-]

They recommend using LR only in cases where a probability-based model is warranted.

Well, yeah.

In response to comment by PECOS-9 on Feedback on LW 2.0
Comment author: Viliam 14 October 2017 10:43:53PM *  0 points [-]

Very useful indeed. Bookmarked. Thanks!

Comment author: entirelyuseless 14 October 2017 04:08:00PM 0 points [-]

Isn't a big part of the problem the fact that you only have conscious access to a few things? In other words, your actions are determined in many ways by an internal economy that you are ignorant of (e.g. mental energy, physical energy use in the brain, time and space, etc.). These things are in fact value-relevant, but you do not know much about them, so you end up making up reasons why you did what you did.

Comment author: morganism 14 October 2017 07:57:53AM 0 points [-]

NIST cautions against using Bayes for weighting evidence in legal forensics

https://www.eurekalert.org/pub_releases/2017-10/nios-nuc101217.php

"Bayesian theory does not support using an expert's opinion, even when expressed numerically, as a universal weight of evidence. Among different ways of presenting information, it has not been shown that LR is most appropriate."

Comment author: DragonGod 14 October 2017 06:19:28AM 0 points [-]

Please help me with the maths, I'm trying to do it myself (without calculus or measure theory as I haven't yet learned them), but I'm not optimistic.

Comment author: Stuart_Armstrong 14 October 2017 06:17:04AM 0 points [-]

Actually, a small probability of caring about Us a bit can suffice.

Comment author: Stuart_Armstrong 14 October 2017 06:12:14AM 1 point [-]

I'm more looking at "formalising human value-like things, into something acceptable".

Comment author: Stuart_Armstrong 14 October 2017 06:11:17AM 1 point [-]

That is a key feature.

Comment author: Stuart_Armstrong 14 October 2017 06:09:42AM 1 point [-]

This framework is not applicable

Then what are the requirements for the framework to be applicable? Many human values, the ones we haven't self-analysed much, behave like H and its buttons: swayed by random considerations that we're not sure are value-relevant or not.

Comment author: Stuart_Armstrong 14 October 2017 06:06:57AM 0 points [-]

In the same way a particular person can know that he went to the store to buy milk

Yes. Isn't this fascinating? What is going on in human minds that, not only can we say stuff about our own values and rationality, but about those of other humans? And can we copy that into an AI somehow?

That will be the subject of subsequent posts.

Comment author: ImmortalRationalist 14 October 2017 02:07:41AM 1 point [-]

This LessWrong Survey had the lowest turnout since Scott's original survey in 2009

What is the average amount of turnout per survey, and what has the turnout been year by year?

Comment author: entirelyuseless 14 October 2017 01:41:30AM *  0 points [-]

They don't have to acknowledge obsessive-compulsive behavior. Obviously they want both milk and sweets, even if they don't notice wanting the sweets. That doesn't prevent other people from noticing it.

Also, they may be lying, since they might think that liking sweets is low status.

Comment author: Lumifer 13 October 2017 11:43:29PM 1 point [-]

We're talking about "mak[ing] judgements about rationality and values". That's entirely SOP for humans and introspection allows you to observe it in real time. This is not some kind of an unconscious/hidden/masked activity.

Moreover other humans certainly behave as if they make judgements about rationality (usually expressed as "this makes {no} sense") and values of others. They even openly verbalise these judgements.

Comment author: Dagon 13 October 2017 10:43:04PM 0 points [-]

In these matters, introspection is fairly suspect. And simply unavailable when talking about humans other than oneself (which I think Stuart is doing, maybe I misread).

Comment author: Lumifer 13 October 2017 04:54:34PM 0 points [-]

How do you know this is true?

By introspection?

Comment author: Dagon 13 October 2017 04:52:29PM 1 point [-]

We can and do make judgements about rationality and values.

How do you know this is true? Perhaps we make judgements about predicted behaviors and retrofit stories about rationality and values onto that.

Comment author: turchin 13 October 2017 04:27:34PM *  2 points [-]

I would add that values are probably not actually existing objects but just useful ways to describe human behaviour. Thinking that they actually exist is the mind projection fallacy.

In the world of facts we have: human actions, human claims about the actions and some electric potentials inside human brains. It is useful to say that a person has some set of values to predict his behaviour or to punish him, but it doesn't mean that anything inside his brain is "values".

If we start to think that values actually exist, we start to have all the problems of finding them, defining them and copying into an AI.

Comment author: Lumifer 13 October 2017 03:24:09PM 1 point [-]

the question is, what are the values/preferences/rewards of H?

Why isn't the answer "None. This framework is not applicable"?

I have a pen. It leaves marks on paper. If I press a button the tip retracts and it no longer leaves marks on paper. What are the values/preferences/rewards of my pen?

Comment author: Lumifer 13 October 2017 03:18:13PM 1 point [-]

If we want to describe human values, we need to find values that don't go crazy when transformed into utility functions.

That seems misguided. If you want to describe human values, you need to describe them as you find them, not as you would like them to be.

Comment author: turchin 13 October 2017 03:14:54PM 0 points [-]

What about a situation where a person says and thinks that he is going to buy milk, but actually buys milk plus some sweets? And does it often, but does not acknowledge obsessive-compulsive behaviour towards sweets?

Comment author: entirelyuseless 13 October 2017 02:38:21PM 1 point [-]

The problem with your "in practice" argument is that it would similarly imply that we can never know if someone is bald, since it is impossible to give a definition of baldness that rigidly separates bald people from non-bald people while respecting what we mean by the word. But in practice we can know that a particular person is bald regardless of the absence of that rigid definition. In the same way, a particular person can know that he went to the store to buy milk, even if it is theoretically possible to explain what he did by saying that he has an abhorrence of milk and did it for totally different reasons.

Likewise, in practice we can avoid money pumps by avoiding them when they come up in practice. We don't need to formulate principles which will guarantee that we will avoid them.

Comment author: turchin 13 October 2017 02:19:47PM 0 points [-]

Also, the question was not whether I could judge others' values, but whether it is possible to prove that an AI has the same values as a human being.

Or are you going to prove the equality of two value systems while at least one of them remains unknowable?

Comment author: toonalfrink 13 October 2017 02:13:50PM 0 points [-]

I'd like to note that "caring about Us a bit" can also be read as "small probability of caring about Us a lot".

Comment author: turchin 13 October 2017 02:13:41PM 1 point [-]

May I suggest a test for any such future model? It should take into account that I have unconscious sub-personalities which affect my behaviour but which I don't know about.

Comment author: Stuart_Armstrong 13 October 2017 02:02:43PM 1 point [-]

In practice, "humans don't have values" and "humans have values, but we can never know what they are" are not meaningfully different.

I also wouldn't get too hung up on utility functions; a utility function just means that the values don't go wrong when an agent tries to be consistent and avoid money pumps. If we want to describe human values, we need to find values that don't go crazy when transformed into utility functions.

Comment author: Stuart_Armstrong 13 October 2017 02:00:30PM 0 points [-]

We can and do make judgements about rationality and values. Therefore I don't see why AIs need fail at it. I'm starting to get a vague idea how to proceed... Let me work on it for a few more days/weeks, then I'll post it.

Comment author: entirelyuseless 13 October 2017 01:57:56PM 2 points [-]

The implied argument that "we cannot prove X, therefore X cannot be true or false" is not logically valid. I mentioned this recently when Caspar made a similar argument.

I think it is true, however, that humans do not have utility functions. I would not describe that, however, by saying that humans are not rational; on the contrary, I think pursuing utility functions is the irrational thing.

Comment author: toonalfrink 13 October 2017 01:57:40PM 0 points [-]

I'm positive. Humans strongly update their utility function based on the morality of the people around them. Do you ever find yourself a bit paralyzed in a new social environment because you don't know about the local customs?

On the other hand, humans are also notorious for trying to fix someone's problem before properly listening to them. Hmm.

Comment author: turchin 13 October 2017 01:32:43PM *  0 points [-]

I think you proved that values can't exist outside a human mind, and that is a big problem for the idea of value alignment.

The only solution I see is: don't try to extract values from the human mind, but try to upload a human mind into a computer. In that case, we kill two birds with one stone: we have some form of AI which has human values (no matter what they are), and it also has common sense.

An upload as an AI safety solution may also have difficulty with foom-style self-improvement, as its internal structure is messy and incomprehensible to a normal human mind. So it is intrinsically safe and the only known workable solution to AI safety.

However, there are (at least) two main problems with such a solution to AI safety: it may give rise to neuromorphic non-human AIs, and it does not prevent the later appearance of pure AI, which will foom and kill everybody.

The solution to that, I think, is to use the first human upload as an AI Nanny or AI police which will prevent the appearance of any other, more sophisticated AIs elsewhere.

Comment author: Yosarian2 12 October 2017 08:56:28PM 1 point [-]

Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.

That certainly is not true of me or of my life overall, except during a few short periods. I don't have the same access to other people's internal states, but I doubt it is true of most people.

There certainly are a significant number of people it may be true of, people who suffer from depression or chronic pain or who are living in other difficult circumstances. I highly doubt that that's the majority of people, though.

In response to comment by gjm on Feedback on LW 2.0
Comment author: Jiro 12 October 2017 08:45:16PM *  0 points [-]

Context here suggests that it's something like "the idea that typographical choices for LW2 should match those for the web as a whole"

The idea that the study of typographical choices for the web is a mature science whose (nontrivial) recommendations can all be taken at face value.

In response to comment by Jiro on Feedback on LW 2.0
Comment author: gjm 12 October 2017 04:30:59PM 1 point [-]

It is fundamentally designed around a bad idea.

That seems rather extreme. What specific bad idea do you mean?

(Context here suggests that it's something like "the idea that typographical choices for LW2 should match those for the web as a whole", but even if LW2's design makes that assumption and even if it's a bad assumption it doesn't seem fundamental enough to justify your last paragraph.)

Comment author: entirelyuseless 12 October 2017 03:01:43PM 1 point [-]

Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.

If they aren't so very aware of it, it is not "simply true," even if there is some truth in it.

Comment author: SquirrelInHell 12 October 2017 11:33:00AM 0 points [-]

"Well calibrated" isn't a simple thing, though. It's always a conscious decision of how willing you are to tolerate false positives vs false negatives.

I beg to differ; being well calibrated has a mathematically precise definition. E.g. if you are thinking of a binary suffering/not-suffering classification (oversimplified, but it's just to make a point), then I want my perception to assign probabilities such that, if you compare them with the true answers, cross-entropy is minimized. That's pretty much what I care about when I'm fixing my perception.
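
For concreteness, a minimal sketch of that comparison (my own illustration with made-up numbers, not part of the original comment): average cross-entropy between predicted probabilities of "suffering" and the true binary answers, where the better-calibrated predictions score lower.

    import math

    # Illustrative sketch with invented data: lower cross-entropy means the
    # predictions assign more probability to what actually happened.
    def cross_entropy(probs, labels):
        eps = 1e-12  # avoid log(0)
        return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                    for p, y in zip(probs, labels)) / len(labels)

    true_labels     = [1, 0, 1, 1, 0]             # 1 = suffering, 0 = not (hypothetical)
    well_calibrated = [0.8, 0.2, 0.7, 0.9, 0.1]
    overconfident   = [1.0, 0.0, 0.2, 1.0, 0.9]

    print(cross_entropy(well_calibrated, true_labels))  # ~0.20
    print(cross_entropy(overconfident, true_labels))    # ~0.78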

Of course there's the question of how aware at each moment you want to be of certain information. But you want to be well calibrated nonetheless.

if you think that almost everyone you see is suffering, you may be doing just that.

Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.

In response to comment by Jiro on Feedback on LW 2.0
Comment author: username2 12 October 2017 09:41:09AM 0 points [-]

The recommendation by "someone else" is anything but anonymous; adamzerner's comment quotes and links directly to Matthew Butterick, author of the online book that provides said guidance (which also explicitly makes the point about print vs. online).

While I fully agree with you about strong distaste for the visual design of LW2 (at least using default display settings in the current beta) you have failed to make a valid argument here.

Comment author: WalterL 12 October 2017 02:12:26AM 0 points [-]
Comment author: BrassLion 11 October 2017 09:17:47PM 1 point [-]

I will say that lesserwrong is already useful to me, and I'm poking around reading a few things. I haven't been on LessWrong (this site) in a long time before just now, and only got here because I was wondering where this "LesserWrong" site came from. So, at the very least, your efforts are reaching people like me who often read and sometimes change their behavior based on posts, but rarely post themselves. Thanks for all the work you did - the UX end of the new site is much, much better.

Comment author: Jiro 11 October 2017 07:35:54PM *  0 points [-]

Looks like LW 2.0 is using a 20px font size, and 25px line height, which is in range of what is recommended.

Is "what was recommended" similar to "mistakes were made"? It blames it on someone else, while leaving the "someone else" unnamed.

Existing recommendations about text size (and particularly, about not fitting too much text on a line) do not consider that Lesswrong has a different usage pattern than most sites. There are references dating back to 1971, but I can't figure out if any scientific studies were actually conducted at the time to determine this, and at any rate, printed text is not the web.

Also, beware of using some recommendation just because it's easy to measure.

This is basically breaking the site in order to fit "recommendations". LW 2.0 is bad, and everyone involved should feel bad. It is fundamentally designed around a bad idea.

Comment author: morganism 11 October 2017 08:40:44AM 1 point [-]

Universal Basic Income and the Threat of Tyranny

http://quillette.com/2017/10/09/universal-basic-income-threat-tyranny/

"The World Bank gives us a list of countries ordered by what percentage of their merchandise exports comes from fuels. At 50% or more we find, in this order: Iraq, Angola, Algeria, Brunei, Kuwait, Azerbaijan, Qatar, Saudi Arabia, Kazakhstan, Russia, Oman, Norway, Colombia, Bolivia and Bahrain. Can we notice a trend? How many of these countries provide a good set of political rights for their citizens?"

" A country that generates its wealth from its citizens has no choice but to keep those citizens happy, at least to some degree; a country that generates its wealth from oil wells, only needs to keep a handful of mercenaries happy as they guard the access to those wells."

Comment author: MaryCh 11 October 2017 05:55:54AM 0 points [-]

Makes sense. Although I would still think my environment is mostly direct and mostly easygoing:)

Comment author: lifelonglearner 11 October 2017 04:20:15AM *  1 point [-]

We're now a lot more active at LW2.0! Some of my stuff which wasn't that popular here is getting more attention there.

Maybe you could try it too?

In response to comment by pcm on Feedback on LW 2.0
Comment author: PECOS-9 10 October 2017 10:46:05PM 0 points [-]

I think your point about less information per screen identifies what has been bothering me. It makes it much harder to skim or to refer back to previous paragraphs.

Comment author: Gunnar_Zarncke 10 October 2017 10:00:51PM 0 points [-]

Currently (6 votes) it looks at first like Domainmodeling is leading. But depending on how lower ranks are weighted it could also be Stackoverflow or (my favorite) "Modelling the programs operation".

Comment author: Gunnar_Zarncke 10 October 2017 09:56:31PM 0 points [-]

I didn't mean this to be about what is 'required' but about how the environment overall is perceived to be. When I discussed this with my boys (who also have different environments - school, friends, at home), I left the specific environment open too. I talked more about how they see 'the world' around them.

Comment author: Gunnar_Zarncke 10 October 2017 09:53:35PM 0 points [-]

I would average. After all, even in one environment there are very many samples, I guess, even if they cluster. But don't worry too much. It's just an LW poll :-)
