
In response to Willpower Schedule
Comment author: entirelyuseless 28 August 2016 05:23:45AM 0 points [-]

I don't see why this would "supersede" other models. I don't have to test it in my own case, because I already know that I am less willing to work when I didn't expect to have to work. That doesn't mean that willpower is not a consumable resource. For example, you can compare it with money. If I go out and expect to spend $20, I might tell people, "I can't afford that," if the thing is going to cost $100. But if I had expected it to cost $100, I might have spent that amount. None of that shows that money is not a consumable resource.

Comment author: WhySpace 28 August 2016 03:52:46AM 0 points [-]

I don't mean to sound dismissive, but is that any better than any other boxing technique, like requiring it to ask verbal permission of a physical operator?

Unless I'm missing something, the AI would still have all the same incentives to influence the operator's answers, and solving those problems would be just as difficult for a digital operator as a physical one.

Comment author: WhySpace 28 August 2016 03:44:40AM 0 points [-]

I actually brought up a similar question in the open thread, but it didn't really go very far. May or may not be worth reading, but it's still not clear to me whether such a thing is even practical. It's likely that all substantially easier AIs are too far from FAI to still be a net good.

I've come a little closer to answering my questions by stumbling on this Future of Humanity Institute video on "Reduced Impact AI". Apparently that's the technical term for it. I haven't had a chance to look for papers on the subject, but perhaps some exist. No hits on Google Scholar, but a quick search shows a couple of mentions on LW and MIRI's website.

Comment author: morganism 27 August 2016 10:54:25PM 0 points [-]

Scientists use ultrasound to jump-start a man’s brain after coma

Deep brain stimulation with ultrasound raises a patient from a coma. Aimed at the thalamus; very low power required.

Might be useful for rousing astronauts from slumber on a Mars mission too. NASA is studying extended hibernation/somnolence.

http://sciencebulletin.org/archives/4666.html

paper, paywalled http://www.brainstimjrnl.com/article/S1935-861X%2816%2930200-5/abstract

Comment author: morganism 27 August 2016 10:49:52PM *  0 points [-]

A reply to the case that tDCS doesn't have enough current to actually be effective.

Response to the Response: Does tDCS Actually Deliver DC Stimulation?

paywalled http://www.brainstimjrnl.com/article/S1935-861X%2816%2930212-1/fulltext

There is a journal for brain stim now:

http://www.brainstimjrnl.com/

Comment author: ChristianKl 27 August 2016 08:58:24PM -1 points [-]

Why do you think that reading a history of how people who didn't know what DNA was thought about taxonomy will help dissolve the question?

Comment author: ChristianKl 27 August 2016 08:45:17PM 0 points [-]

I feel it's important to note that he was talking about writing styles, not philosophy.

Do you think how one reasons in writing about a subject has nothing to do with philosophy?

Comment author: ChristianKl 27 August 2016 08:35:17PM 0 points [-]

1) offered a solution to the problem of unfounded foundations.

The solution offered at the beginning is basically: "Don't try to let your reasoning be based on underlying foundations in the first place."

That leaves the open question about how to reason. GS is an answer to that question.

"One the one hand, on the other hand, on the third hand"-reasoning as advocated in Superforcasting where there doesn't have to be a shared foudnation for all three hands is another. That's what Tetlock calls "foxy" thinking and where he argues that it makes better predictions than hedgehog thinking where everything is based on one model with one foundation. But Superfocasting provides a bunch of heuristics and not a deep ontological foundation.

I also have other frameworks that point in the same direction but that are even harder to describe and likely not accessible by simply reading a book.

3) offered a claim that the problem doesn't exist in the first place.

No. The problem exists if you take certain assumptions for granted. I have claimed that you don't have the problem if you don't make those assumptions and follow certain heuristics.

This leaves open the question of how to reason differently. GS is an answer to how to reason differently; it's complex, and demonstrating that it's an internally consistent approach takes time and is done in Science and Sanity over many pages.

3) offered a claim that the problem doesn't exist in the first place.

No, I do see that the problem exists if you don't follow certain heuristics.

Comment author: onlytheseekerfinds 27 August 2016 08:13:07PM 0 points [-]

Have you tried any of your own proposed tactics?

Comment author: entirelyuseless 27 August 2016 08:11:29PM 0 points [-]

I agree that we are not in agreement. And I do think that if we continue to respond to each other indefinitely, or until we agree, it will probably result in a fight. I admit that is not guaranteed, and there have been times when people that I disagree with changed their minds, and times when I did, and times when both of us did. But those cases have been in the minority.

"We are all trying to reach a certain goal and a truer map of reality helps us get there..." The problem is that people are interested in different goals and a truer map of reality is not always helpful, depending on the goal. For example, most of the people I know in real life accept false religious doctrines. One of their main goals is fitting in with the other people who accept those doctrines. Accepting a truer map of reality would not contribute to that goal, but would hinder it. I want the truth for its own sake, so I do not accept those doctrines. But they cannot agree with me, because they are interested in a different goal, and their goal would not be helped by the truth, but hindered.

Comment author: turchin 27 August 2016 07:39:20PM 0 points [-]

Ask the AI to scan at least one human brain, from someone over 40 with a Ph.D. in ethics, and let the AI run it in a simulation to pass judgment on all of the AI's decisions. That would dramatically reduce the chances of many obviously stupid decisions.

Comment author: WikiLogicOrg 27 August 2016 12:19:47PM 0 points [-]

Yes I feel that you are talking in vague but positive generalities.

First, on a side note, what do you mean by "but positive"? As in idealistic? Excuse my vagueness. I think it comes from trying to cover too much at once. I am going to pick one fundamental idea I have and see your response, because if you update my opinion on this, it will cover many of the other issues you raised.

I wrote a small post (www.wikilogicfoundation.org/351-2/) on what I view as the starting point for building knowledge. In summary, it says our only knowledge is that of our thoughts and the inputs that influence them. It is in a similar vein to "I think therefore I am" (although maybe it should be "thoughts, therefore thoughts are" to keep the pedants happy). I did not mention it in the article, but if we try and break it down like this, we can see that our only purpose is to satisfy our urges. For example, if we experience a God telling us we should worship them and be 'good' to be rewarded, we have no reason to do this unless we want to satisfy our urge to be rewarded. So no matter our beliefs, we all have the same core drive - to satisfy our internal demands. The next question is whether these are best satisfied cooperatively or competitively. However, I imagine you have a lot of objections thus far, so I will stop to see what you have to say about that. Feel free to link me to anything relevant explaining alternate points of view if you think a post will take too long.

Comment author: turchin 27 August 2016 10:36:39AM 0 points [-]

Good point. So the Silence of Space is a sign of some kind of threat in the sky.

Comment author: James_Miller 27 August 2016 06:02:31AM 0 points [-]

This is certainly possible. But if we are in a simulation of the base universe then it's strange that we experience the Fermi paradox given the universe's apparent age.

Comment author: Prometheus 27 August 2016 05:15:12AM 0 points [-]

It could be the universe is only "old" by our standards. Maybe a few trillion years is a very young universe by normal standards, and it's only because we've been observing a simulation that it seems to be an "old" universe.

Comment author: Prometheus 27 August 2016 04:58:31AM 0 points [-]

There's also the possibility that the universe is filled with aliens, but they are quiet in order to hide themselves from a more advanced alien civilization or UFAI. And this advanced civilization or UFAI acts as a Great Filter to those who do not have the sense to conceal themselves from it. This would assume that somehow aliens had a way of detecting the presence of this threat, perhaps by intercepting messages from alien civilizations before they were destroyed by it. Either that, or there is no way of detecting the aliens or UFAI, and all civilizations are doomed to be destroyed by it as soon as they start emitting radio signals.

In response to Hedging
Comment author: UmamiSalami 27 August 2016 04:11:46AM 0 points [-]

It seems like hedging is the sort of thing which tends to make the writer sound more educated and intelligent, if possibly more pretentious.

Comment author: Dagon 27 August 2016 01:13:07AM 0 points [-]

You say blackmail, I say altruistic punishment.

Comment author: Houshalter 27 August 2016 01:00:53AM 1 point [-]

I'm not sure this is true. The internet contains billions of hours of video, trillions of images, and libraries' worth of text. If they can use unsupervised, semi-supervised, or weakly-supervised learning, they could take advantage of nearly limitless data. And neural networks can do unsupervised learning well, by learning features for one task and then transferring those to another task.

DeepMind has also had a paper on approximate Bayesian learning for neural net parameters. That would make neural nets much better able to learn from limited amounts of data, instead of overfitting.

Anyway, deep nets are not really going to displace traditional ML methods, but rather open up a whole new set of problems that traditional methods can't handle, like processing audio and video data, or reinforcement learning.

In response to comment by gwern on Hedging
Comment author: 9eB1 27 August 2016 12:07:31AM *  0 points [-]

Yes, Muflax's site is the one I was thinking of. Sad that they deleted it; it had some very good articles on it, as I recall.

In response to Hedging
Comment author: Douglas_Knight 26 August 2016 09:40:25PM 1 point [-]

This sounds very a priori, like you noticed that people sometimes misinterpret and tried to figure out how without paying attention to the specific ways in which they actually do. I recommend Robin Hanson, although I think that post is way too much in favor of disclaimers.

Comment author: WalterL 26 August 2016 07:54:42PM -1 points [-]

Touche.

Comment author: Manfred 26 August 2016 06:22:03PM 1 point [-]

I find this surprisingly unmotivating. Maybe it's because the only purpose this could possibly have is as blackmail material, and I am pretty good at not responding to blackmail.

Comment author: WalterL 26 August 2016 04:40:35PM -1 points [-]

Aw come on guys. Negative karma for literally pointing out a news site? What does that even mean?

In response to comment by 9eB1 on Hedging
Comment author: gwern 26 August 2016 04:30:34PM 1 point [-]

I first saw this at another site in the LW sphere quite a few years ago, but I can't remember where, and I'm glad to have seen it spread

I stole it from muflax's since-deleted site (who AFAIK invented it), and I think SSC borrowed it from me.

In response to Hedging
Comment author: Dagon 26 August 2016 04:28:47PM 1 point [-]

It matters a lot who your audience is, and what are your goals in a specific interaction. Fluttershy's points about status-signaling are a great example of ways that precision can be at odds with effectiveness.

Also, you're probably wrong in most of your frequency estimates. Section III of this SlateStarCodex post helps explain why - you live in a bubble, and your experiences are not representative of most of humanity.

Unless you're prepared to explain your reference set (20% of what, exactly?) and cite sources for your measures, it's worth acknowledging that you don't know what you're talking about, and perhaps just not talking about it.

Rather than caveat-ing or specifying your degree of belief about the percentage and definition of evil men, just don't bother. Walk away from conversations that draw you into useless generalizations.

In other words, your example is mind-killing to start with. No communication techniques or caveats can make a discussion of how much you believe what percentage of men are evil work well. And I suspect that if you pick non-politically-charged examples, you'll find that the needed precision is already part of the discussion.

In response to Hedging
Comment author: 9eB1 26 August 2016 03:06:33PM 4 points [-]

As a matter of writing style, excessive use of hedging makes your writing harder to read. It's better to hedge once at the beginning of a paragraph and then state the following claims directly, or to hedge explicitly at the top of your article. At SlateStarCodex Scott sometimes puts explicit "Epistemic Status" claims at the top of the article (I first saw this at another site in the LW sphere quite a few years ago, but I can't remember where, and I'm glad to have seen it spread).

I am definitely guilty of excessive hedging when I write comments or essays, and I always have to go back and edit out "I think" and "it seems" from half my sentences.

In response to comment by Fluttershy on Hedging
Comment author: Fluttershy 26 August 2016 01:45:18PM 1 point [-]

For groups that care much more about efficient communication than pleasantness, and groups made up of people who don't view behaviors like not hedging bold statements as being hurtful, the sort of policy I'm weakly hinting at adopting above would be suboptimal, and a potential waste of everyone's time and energy.

In response to Hedging
Comment author: Fluttershy 26 August 2016 01:27:36PM 3 points [-]

Which is to say - be confident of weak effects, rather than unconfident of strong effects.

This suggestion feels incredibly icky to me, and I think I know why.

Claims hedged with "some/most/many" tend to be both higher status and meaner than claims hedged with "I think" when "some/most/many" and "I think" are fully interchangeable. Not hedging claims at all is even meaner and even higher status than hedging with "some/most/many". This is especially true with claims that are likely to be disputed, claims that are likely to trigger someone, etc.

Making sufficiently bold statements without hedging appropriately (and many similar behaviors) can result in tragedy of the commons-like scenarios in which people grab status in ways that make others feel uncomfortable. Most of the social groups I've been involved in allow some zero-sum status seeking, but punish these sorts of negative-sum status grabs via e.g. weak forms of ostracization.

Of course, if the number of people in a group who play negative-sum social games passes a certain point, this can de facto force more cooperative members out of the group via e.g. unpleasantness. Note that this can happen in the absence of ill will, especially if group members aren't socially aware that most people view certain behaviors as being negative sum.

Comment author: philh 26 August 2016 11:51:54AM 3 points [-]

I feel it's important to note that he was talking about writing styles, not philosophy.

Comment author: TheAncientGeek 26 August 2016 09:29:13AM *  0 points [-]

I think this argument, in order to work, needs some further premise to the effect that a decision only counts as "definitive" if it is universal,

Ok, but it would have been helpful to have argued the point.

if in some suitable sense everyone would/should arrive at the same decision; and then the second step ("Morality tells you what you should do") needs to say explicitly that morality does this universally.

AFAICT, it is only necessary to have the same decision across a certain reference class, not universally.

In that case, the argument works -- but, I think, it works in a rather uninteresting way because the real work is being done by defining "morality" to be universal. It comes down to this: If we define "morality" to be universal, then no account of morality that doesn't make it universal will do. Which is true enough, but doesn't really tell us anything we didn't already know.

Who is defining morality to be universal? I don't think it is me. I think my argument works in a fairly general sense. If morality is a ragbag of values, then in the general case it is going to contain contradictions, and that will stop you from making any kind of decision based on it.

Comment author: Risto_Saarelma 26 August 2016 05:18:19AM 3 points [-]

On Moldbug from 2012.

Comment author: ignoranceprior 26 August 2016 04:38:33AM *  1 point [-]

I don't know whether you've heard of it, but someone wrote an ebook called "Neoreaction a Basilisk" that claims Eliezer Yudkowsky was an important influence on Mencius Moldbug and Nick Land. There was a lot of talk about it on the tumblr LW diaspora a few months back.

Comment author: Elo 25 August 2016 11:00:45PM -2 points [-]

think like machines rather than humans

01101000 01100001 01101000 01100001 01101000 01100001 01101000 01100001

Comment author: Vaniver 25 August 2016 09:43:18PM 0 points [-]

I seem to recall a Yudkowsky anti-NRx comment on Facebook a year or two ago, but does anyone recall / have a link to an earlier disagreement on Yudkowsky's part?

Comment author: Good_Burning_Plastic 25 August 2016 09:42:43PM 1 point [-]

I don't think any of the thought leaders of either group were significantly influential to the other.

Yvain did say that he was influenced by Moldbug.

Comment author: Good_Burning_Plastic 25 August 2016 09:32:45PM 1 point [-]

Moldbug and Yudkowsky have been disagreeing with each other basically ever since their blogs have even existed.

Comment author: Dagon 25 August 2016 08:45:00PM 1 point [-]

I was around back in the day, and can confirm that this is nonsense. NRx evolved separately. There was a period where it was of interest and explored by a number of LW contributors, but I don't think any of the thought leaders of either group were significantly influential to the other.

There is some philosophical overlap in terms of truth-seeking and attempted distinction between universal truths and current social equilibria, but neither one caused nor grew from the other.

Comment author: WalterL 25 August 2016 08:27:21PM 0 points [-]

Saw the site mentioned on Breitbart:

Link: http://www.breitbart.com/tech/2016/03/29/an-establishment-conservatives-guide-to-the-alt-right/

Money Quote:

...Elsewhere on the internet, another fearsomely intelligent group of thinkers prepared to assault the secular religions of the establishment: the neoreactionaries, also known as #NRx.

Neoreactionaries appeared quite by accident, growing from debates on LessWrong.com, a community blog set up by Silicon Valley machine intelligence researcher Eliezer Yudkowsky. The purpose of the blog was to explore ways to apply the latest research on cognitive science to overcome human bias, including bias in political thought and philosophy.

LessWrong urged its community members to think like machines rather than humans. Contributors were encouraged to strip away self-censorship, concern for one’s social standing, concern for other people’s feelings, and any other inhibitors to rational thought. It’s not hard to see how a group of heretical, piety-destroying thinkers emerged from this environment — nor how their rational approach might clash with the feelings-first mentality of much contemporary journalism and even academic writing.

Led by philosopher Nick Land and computer scientist Curtis Yarvin, this group began a ..."

I wasn't around back in the day, but this is nonsense, right? NRx didn't start on LessWrong, yeah?

Comment author: pepe_prime 25 August 2016 07:21:22PM *  0 points [-]

For injuries

R> sum(sapply(seq((80-30), 0), function(t) { 5000 * 3.431544214e-09 * t * 0.97^t * 0.63 * 50000 }))
# [1] 264.9444032

Rate should be 1.634253963e-08, yielding about $1261.78 lifetime loss.
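
For reference, plugging the corrected rate into the quoted expression (a quick sketch; only the rate constant is changed) reproduces the figure claimed above:

R> sum(sapply(seq((80-30), 0), function(t) { 5000 * 1.634253963e-08 * t * 0.97^t * 0.63 * 50000 }))
# about 1261.78, the lifetime loss stated above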

Comment author: milindsmart 25 August 2016 05:56:59PM 0 points [-]

There is no guarantee that there exists some way for them to understand.

Consider the possibility that it's only possible for people with a nontrivial level of understanding to work with 5TB+ amounts of data. It could be a practical boost in capability due to understanding storage technology principles and tools... maybe?

What level of sophistication would you think is un-idiot-proof-able? Nuclear missiles? Not-proven-to-be-friendly AI?

Comment author: The_Jaded_One 25 August 2016 05:50:25PM *  1 point [-]

By how many orders of magnitude? Would you play Russian Roulette for $10/day?

Back of the envelope I would say my chances of dying in the next 6 months and also being successfully cryopreserved (assuming I magically completed the signup process immediately) are about 1 in 10000. That trades off against using my time and money at a time when I'm short of both.
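
A rough sketch of how such a back-of-envelope figure could be assembled, in the style of the R snippets elsewhere in this thread; the specific inputs are illustrative assumptions, not the commenter's actual numbers:

R> p_die_6mo <- 0.0006              # assumed: ~0.12%/yr all-cause mortality for a male in his late 20s, over half a year
R> p_preserved_given_death <- 0.15  # assumed: fraction of young-adult deaths slow/located well enough for a good suspension
R> p_die_6mo * p_preserved_given_death
# [1] 9e-05                         # on the order of 1 in 10,000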

Comment author: Riothamus 25 August 2016 02:37:03PM 0 points [-]

Echo chamber implies getting the same information back.

It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.

Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?

Comment author: The_Jaded_One 25 August 2016 02:09:33PM 0 points [-]

Yeah, but I'm not planning on magically becoming a randomly chosen 29-year-old American male. If you condition on being wealthy and living in Mountain View or something, I would expect the correlation to go away.

Comment author: Good_Burning_Plastic 25 August 2016 01:30:47PM 0 points [-]

Then you have the problem that I'm not in the USA (I plan to eventually move, once my career is strong enough to score the relevant visa); being in the US is the best way to ensure a successful, timely suspension. If you are in Europe you have to both pay more for transport and you will be damaged more by the long journey, assuming you die unexpectedly in Europe.

OTOH it looks like the mortality in your late 20s in the EU is less than half that in the US.

Comment author: TheAncientGeek 25 August 2016 10:15:28AM 0 points [-]

I think the problem doesn't make sense in the GS paradigm. Kuhn wrote that problems set in one paradigm aren't necessarily expressible in the paradigm of another framework, and I think this is a case like that.

Do you realise that over the course of the discussion, you have

1) offered a solution to the problem of unfounded foundations.

2) offered a claim that a solution exists, but is too long to write down.

3) offered a claim that the problem doesn't exist in the first place.

Comment author: TheAncientGeek 25 August 2016 09:53:20AM 0 points [-]

Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?

Maybe they can[*], but it is not exactly a good thing...if you stick to one method of analysis, you will be in an echo chamber.

[*] An example might be the way the world looks mathematical to physics, which some people are willing to take fairly literally.

Comment author: TheAncientGeek 25 August 2016 08:19:48AM *  0 points [-]

Correct by whose definition? In a consistent reality that is possible to make sense of, one would expect evolved beings to start coming to the same conclusions.

I wouldn't necessarily expect that, for the reasons given. You have given a contrary opinion, not a counterargument.

From this question I assume you are getting at our inability to know things and the idea that what is correct for one, may not be for another. That is a big discussion but let me say that I premise this on the idea that a true skeptic realizes we cannot know anything for sure and that is a great base to start building our knowledge of the world from.

I don't see how it addresses the circularity problem.

That vastly simplifies the world and allows us to build it up again from some very basic axioms.

Or that. Is everyone going to settle on the same axioms?

If it is the case that your reality is fundamentally different from mine, we should learn this as we go. Remember that there is actually only one reality - that of the viewers.

The existence of a single reality isn't enough to guarantee convergence of beliefs for the reasons given.

Do you not see that you are assuming you will suddenly be able to solve the foundational problems that philosophers have been wrestling with for millennia?

There were many issues wrestled with for millennia that were suddenly solved. Why should this be any different?

That doesn't make sense. The fact that something was settled eventually doesn't mean that your particular problems are going to be settled at a time convenient for you.

You could ask me the opposite question of course, but that attitude is not the one taken by any human who ever discovered something worthwhile. Our chances of success may be tiny but they are better than zero, which is what they would be if no one tries. Ugh... I feel like I am writing inspirational greeting card quotes, but the point still stands!

Yes I feel that you are talking in vague but positive generalities.

Comment author: The_Jaded_One 25 August 2016 06:56:48AM 0 points [-]

then you'd better not have turned down any loans with APY less than 900%.

Since I was unemployed with no assets, I wasn't (until very recently, i.e. yesterday) eligible for any kind of personal loan.

By how many orders of magnitude?

The mortality rate in your late 20s is low, and when you add that accidents, sudden deaths, and murder are already very bad for cryo, that is further compounded.

Then you have the problem that I'm not in the USA (I plan to eventually move, once my career is strong enough to score the relevant visa); being in the US is the best way to ensure a successful, timely suspension. If you are in Europe you have to both pay more for transport and you will be damaged more by the long journey, assuming you die unexpectedly in Europe.

And how does the value of cryonics go up as your mortality rate does?

Well, obviously it is worth more to mitigate death if your death is more likely, especially when the kinds of ways you die when young are bad for your cryo chances.

Comment author: SquirrelInHell 25 August 2016 06:06:47AM 1 point [-]

Prometheus, thank you for your intelligent comment. What you are saying is testable, and I plan to get more data on this. My experience seems to not be limited to 12-hour periods, but I'll specifically control for that from now on.

Comment author: ThisSpaceAvailable 25 August 2016 01:58:24AM 1 point [-]

By how many orders of magnitude? Would you play Russian Roulette for $10/day? It seemed to me that implicit in your argument was that even if someone disagrees with you about the expected value, an order of magnitude or so wouldn't invalidate it. There's a rather narrow set of circumstances where your argument doesn't apply to your own situation. Simply asserting that you will sign up soon is far from sufficient. And note that many conditions necessitate further conditions; for instance, if you claim that your current utility/dollar ratio is ten times what it will be in a year, then you'd better not have turned down any loans with APY less than 900%.
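
A sketch of the loan arithmetic behind that last sentence, using the hypothetical 10x utility/dollar ratio named above:

R> utility_ratio <- 10             # hypothetical: a dollar today is worth 10x a dollar a year from now
R> apy <- 9.00                     # a 900% APY loan: repay $10 next year per $1 borrowed today
R> 1 * utility_ratio - (1 + apy)   # utility gained from $1 spent now, minus utility cost of repayment
# [1] 0                            # break-even, so any loan cheaper than 900% APY would have been worth taking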

And how does the value of cryonics go up as your mortality rate does? Are you planning on enrolling in a program with a fixed monthly fee?

Comment author: The_Jaded_One 24 August 2016 09:59:38PM 0 points [-]

How would you characterize the help you got getting a job? Getting an interview? Knowing what to say in an interview? Having verifiable skills?

Well, they taught me R and they helped me (along with some kind alumni) to go a bit further with neural networks than I otherwise would have. Having spent time hacking away at neural networks allowed me to pass the interview at the job I just got.

Knowing R caused me to get another generous offer that I have had to turn down.

Interview skills training with Robert was valuable, especially at the beginning. Robert seems to have a fairly sound understanding of how to optimise the process.

Comment author: The_Jaded_One 24 August 2016 09:53:19PM 0 points [-]

$1400 for lodging (commuting would cost even more)

Well, that's only a cost if (as in my case) you had to keep your normal home empty and thereby double-pay accommodation for that period.

Also some people on the course were local.

$2500 deposit (not clear on the refund policy)

I was told that this is fully refundable if you don't like the course within the first week, though I am not sure they would extend that to anyone (but you can ask).

Comment author: The_Jaded_One 24 August 2016 09:09:53PM 0 points [-]

Just a quick update, I signed the contract today and am now employed in the role of senior machine learning scientist at a company in Europe.

Comment author: NancyLebovitz 24 August 2016 08:49:52PM 1 point [-]

Naming Nature is focused on animals, but it or some of the books recommended with it might be the sort of thing you're looking for.

Comment author: Lumifer 24 August 2016 06:47:20PM 1 point [-]

AI is going to be god-like by default

Well, the default on LW is EY's FOOM scenario where an AI exponentially bootstraps itself into Transcendent Realms and, as you say, that's it. The default in the rest of the world... isn't like that.

Comment author: niceguyanon 24 August 2016 06:37:22PM 0 points [-]

Not necessarily, depends on your AI and how god-like it is.

I hope you're right. I just automatically think that AI is going to be god-like by default.

In the XIX century you could probably make the same argument about corporations

Not just corporations; you could make the same argument for sovereign states, foundations, trusts, militaries, and religious orgs.

The weak argument is that corporations, with their visions, charters, and mission statements, are ultimately run by a meatbag, or jointly by meatbags, that die/retire; at least that's how it currently is. You can't retain humans forever. Corporations lose valuable and capable employee brains over time and replace them with new brains, which may be better or worse, but you certainly can't keep your best humans forever. Power is checked; Bill Gates plans his legacy, while Sumner Redstone is infirm with kids jockeying for power and Steve Jobs is dead.

In response to Willpower Schedule
Comment author: Prometheus 24 August 2016 06:02:15PM 1 point [-]

I'm not sure if going to the bathroom is a "smart" adjustment between conscious and subconscious, or if it's closer to firing neurons in the region associated with it (that is to say, instead of a communication network, it may be closer to just flipping on a switch). What would agree with the latter is that studies show the region of the brain associated with it is overly active when under the influence of alcohol. I think resting all day (and as a result, not wishing to do serious work) could probably be better explained by less blood flow to the brain (and as a result, less oxygen) due to lack of movement. On top of this, our bodies tend to operate in 12-hour cycles. If you are active for a while, you're telling your brain it's in that 12-hour cycle. If you're inactive, you're telling it you're in your inactive cycle.

In response to comment by Val on Inefficient Games
Comment author: Gram_Stone 24 August 2016 04:52:49PM 1 point [-]

I originally used 'fiat' instead of 'coercion'. I was just trying to make sure we don't miss other possibilities besides regulations for solving problems like these.

In response to comment by g_pepper on Inefficient Games
Comment author: Gram_Stone 24 August 2016 04:50:36PM 0 points [-]

That sounds accurate to me.

I can't think of anything off of the top of my head. I was really just trying to point out the general dynamic.

Comment author: WhySpace 24 August 2016 04:35:59PM *  0 points [-]

I'm pretty sure I also mostly disagree with claim 6. (See my other reply below.)

The only specific concrete change that comes to mind is that it may be easier to take one person's CEV than to aggregate everyone's CEV. However, this is likely to be trivially true, if the aggregation method is something like averaging.

If that's 1 or 2 more lines of code, then obviously it doesn't really make sense to try and put those lines in last to get FAI 10 seconds sooner, except in a sort of spherical cow in a vacuum sort of sense. However, if "solving the aggregation problem" is a couple years worth of work, maybe it does make sense to prioritize other things first in order to get FAI a little sooner. This is especially true in the event of an AI arms race.

I’m especially curious whether anyone else can come up with scenarios where a maxipok strategy might actually be useful. For instance, is there any work being done on CEV which is purely on the extrapolation procedure or procedures for determining coherence? It seems like if only half our values can easily be made coherent, and we can load them into an AI, that might generate an okay outcome.

Comment author: Lumifer 24 August 2016 03:49:57PM 0 points [-]

Well, yes, but your example is a sub-type of my "more profitable" claim. The companies want the definitions to be clear because otherwise there is a large uncertainty cost which will affect profits. They don't care about destroying value as long as it's not their value.

I agree that companies often lobby for regulations which decrease their risk -- but typically what they want is to ossify the existing structures and put up barriers to newcomers and outside innovation. If you are large and powerful enough to influence regulations, you want to preserve your position as large and powerful. Generally speaking, that's not a good thing.

Comment author: Lumifer 24 August 2016 03:43:15PM 1 point [-]

AI gets it wrong then well that's it.

Not necessarily, depends on your AI and how god-like it is.

In the XIX century you could probably make the same argument about corporations: once one corporation rises above the rest, it will use its power to squash competition and install itself as the undisputed economic ruler forever and ever. The reality turned out to be rather different and not for the lack of trying.

Comment author: pcm 24 August 2016 03:29:20PM 0 points [-]

I expect that MIRI would mostly disagree with claim 6.

Can you suggest something specific that MIRI should change about their agenda?

When I try to imagine problems for which imperfect value loading suggests different plans from perfectionist value loading, I come up with things like "don't worry about whether we use the right set of beings when creating a CEV". But MIRI gives that kind of problem low enough priority that they're acting as if they agreed with imperfect value loading.

In response to comment by Lumifer on Inefficient Games
Comment author: moridinamael 24 August 2016 03:24:04PM 2 points [-]

I think they want both.

In the oil industry, it is in no one's interest that there be any uncertainty or vagueness in the regulations about what should be considered a "bookable reserve" which a company can formally count as part of its net assets. Everyone wants the definitions to be extremely clear because then investors can make decisions with confidence and clarity, more money flows through the system, and assets can be traded and sold easily.

A world without such regulations is worse for everyone, except perhaps the extremely skilled con artist, and even those people have to live in a system with less net cash flowing through it due to the aforementioned uncertainty.

On net, if a company can lobby for a regulation that increases their profits, they will do so regardless of whether that regulation also creates profits for their competitors.

If possible, of course, they will select regulations that preferentially favor their own company. I'm sure this is very widespread. But it isn't the only use of regulation.

In response to comment by Val on Inefficient Games
Comment author: g_pepper 24 August 2016 03:22:38PM *  2 points [-]

I do not think that Gram_Stone is making the claim that fining or jailing those who do not pay their taxes is not coercion. Instead, I think that he is arguing that it is not the coercion per se that results in most people paying their taxes, but rather that (due to the coercion) failing to pay taxes does not have a favorable payoff, and that it is the unfavorable payoff that causes most people to pay their taxes. So, if there were some way to create favorable payoffs for desirable behavior without coercion, then this would work just as well as does using coercion.

Gram_Stone, please correct me if that is not accurate. Also, do you have any ideas as to how to make voluntary payment of taxes have a favorable payoff without using coercion?

Comment author: niceguyanon 24 August 2016 03:22:26PM 0 points [-]

3) The world is OK with humans optimizing for the wrong things, because humans eventually die and take their ideas with them, good or bad. Power and wealth are redistributed. Humans get old, they get weak, they get dull, they lose interest. AI gets it wrong then well that's it.

Comment author: Lumifer 24 August 2016 03:11:10PM *  2 points [-]

Companies want to construct a better game where the optimal choice for them and their competitors is one that doesn't destroy value.

Didn't you mean to write

Companies want to construct a better game where they get more profitable and doing business is hard for the competitors.

..?

Comment author: moridinamael 24 August 2016 03:06:49PM 1 point [-]

I wish it were more widely understood that the groups who agitate to have regulations placed on certain industries are often composed of the participants of those industries, not outsiders trying to arbitrarily place shackles on them. Companies want to construct a better game where the optimal choice for them and their competitors is one that doesn't destroy value.

Comment author: cody-bryce 24 August 2016 02:54:03PM 0 points [-]

Quite possibly.

The epidemiological studies, as I understand it, show that the association between reported flossing and improved tooth health unambiguously exists (though it is not huge). HHS didn't analyse them and find them too weak, exactly; they simply want controlled studies for this purpose (for good reason, of course). Nonetheless, everything we know makes it sound like flossing is at least a little effective.

Whether the effect justifies spending minutes every week, who knows.

Comment author: Val 24 August 2016 02:39:16PM 1 point [-]

In this case, we should really define "coercion". Could you please elaborate what you meant through that word?

One could argue that if someone holds a gun to your head and demands your money, it's not coercion, just a game where the expected payoff of not giving the money is smaller than the expected payoff of handing it over.

(Of course, I completely agree with your explanation about taxes. It's just the usage of "coercion" in the rest of your comment which seems a little odd)

Comment author: WikiLogicOrg 24 August 2016 02:34:38PM 0 points [-]

Thanks for taking the time to write all that for me. This is exactly the nudge in the right direction I was looking for. I will need at least the next few months to cover all this and all the further Google searches it sends me down. Perfect, thanks again!

Comment author: WikiLogicOrg 24 August 2016 02:26:50PM 0 points [-]

Thanks for the links and info. I actually missed this last time around, so I cannot comment much more until I get a chance to research Jaynes and read that link.

Comment author: WikiLogicOrg 24 August 2016 02:24:56PM 0 points [-]

Who decides on what information is relevant? If I said I want to use men without beards and Alexander never had one, that would be wrong (at least my intuition tells me it would be), as I am needlessly disregarding information, which skews the results. You say use all the info, but what about collecting info on items such as a sword or a crown? I feel that is not relevant, and I think most would agree. But where to draw the line? Gram_Stone pointed me to the reference class problem, which is exactly the issue I face.

Comment author: WikiLogicOrg 24 August 2016 02:15:42PM *  0 points [-]

From the correct perspective, it is more extraordinary that anyone agrees.

Correct by whose definition? In a consistent reality that is possible to make sense of, one would expect evolved beings to start coming to the same conclusions.

Corrected by whose definition of correct?

From this question I assume you are getting at our inability to know things and the idea that what is correct for one, may not be for another. That is a big discussion but let me say that I premise this on the idea that a true skeptic realizes we cannot know anything for sure and that is a great base to start building our knowledge of the world from. That vastly simplifies the world and allows us to build it up again from some very basic axioms. If it is the case that your reality is fundamentally different from mine, we should learn this as we go. Remember that there is actually only one reality - that of the viewers.

Do you not see that you are assuming you will suddenly be able to solve the foundational problems that philosophers have been wrestling with for millennia?

There were many issues wrestled with for millennia that were suddenly solved. Why should this be any different? You could ask me the opposite question of course, but that attitude is not the one taken by any human who ever discovered something worthwhile. Our chances of success may be tiny but they are better than zero, which is what they would be if no one tries. Ugh... I feel like I am writing inspirational greeting card quotes, but the point still stands!

Object level disagreements can maybe be solved by people who agree on an epistemology. But people aren't in complete agreement about epistemology. And there is no agreed meta-epistemology to solve epistemological disputes... that's done with the same epistemology as before.

Are there any resources you would recommend for me as a beginner to learn about the different views, or better yet, a comparison of all of them?

Comment author: WikiLogicOrg 24 August 2016 01:50:03PM 0 points [-]

I think the probability is close to zero because trying to "drill down" to force agreement between people results in fights, not in agreement.

We are not in agreement here! Do you think it's possible to discuss this and have one or both of us change our initial stance, or will that attempt merely result in a fight? Note, I am sure it is possible for it to result in a fight, but I do not think it's a foregone conclusion. On the contrary, I think most worthwhile points of view were formed by hearing one or more opposing views on the topic.

they will each support their own position by reasons which are effective for them but not for the other person

Why must that be the case? On a shallow level it may seem so, but I think if you delve deep, you can find a best-case solution. Can you give an example where two people must fundamentally disagree? I suspect any example you come up with will have a "lower level" solution where they will find it is not in their best interest. I recognize that the hidden premise in my thinking that agreement is always possible stems from the idea that we are all trying to reach a certain goal, and a true(er) map of reality helps us get there, and cooperation is the best long-term strategy.

Comment author: WhySpace 24 August 2016 01:40:53PM *  0 points [-]

That was pretty much my take. I get the feeling that "okay" outcomes are a vanishingly small portion of probability space. This suggests to me that the additional marginal effort to stipulate "okay" outcomes instead of perfect CEV is extremely small, if not negative. (By negative, I mean that it would actually take additional effort to program an AI to maximize for "okay" outcomes instead of CEV.)

However, I didn't want to ask a leading question, so I left it in the present form. It's perhaps academically interesting that the desirability of outcomes as a function of “similarity to CEV” is a continuous curve rather than a binary good/bad step function. However, I couldn't really see any way of taking advantage of this. I posted mainly to see if others might spot potential low hanging fruit.

I guess the interesting follow up questions are these: Is there any chance that humans are sufficiently adaptable that human values are more than just an infinitesimally small sliver of the set of all possible values? If so, is there any chance this enables an easier alternative version of the control problem? It would be nice to have a plan B.

Comment author: turchin 24 August 2016 01:08:53PM 0 points [-]

Done using inverted text

Comment author: Romashka 24 August 2016 10:51:14AM 1 point [-]

It seems to me that the history of biological systematics/taxonomy is a great source of material for a study on dissolving the question (but I am neither a systematist nor a historian). Are there any popular intros to the field that don't focus on individual botanists of the past? Serebryakov's "Morphology of Plants", printed half a century ago, has a nice section on history, but it is limited in scope (and not quite "popular"). Other books often just list the people and what they did without interconnecting them, which is boring.

Comment author: qmotus 24 August 2016 10:31:07AM 2 points [-]

Uh, I think you should format your post so that somebody reading that warning would also have time to react to it and actually avoid reading the thing you're warning about.
