When I read these AI control problems I always think that an arbitrary human is being conflated with the AI's human owner. Perhaps I'm mistaken and should read these as if AIs own themselves - but I don't find that case likely, so I would probably stop here if we are to presuppose it.
Now if an AI is lying to or deceiving its owner, this is a bug. In fact, when debugging I often feel I am being lied to. Normal code isn't a very sophisticated liar. I could see an AI owner wanting to train its AI about lying and deceiving and maybe actually perform them on other peop...
What do Neoreactionaries think of the Islamic State? After all, it's an exemplar case of the reactionaries in those areas winning big. I know it's only a surface comparison; I'm sincerely curious what NRs think of the situation.
While this is an interesting question - my take on NRx was that it was more anti-democracy than pro-monarchy. So I think a better question for them would be: if fundamentalist Muslims became a democratic majority (via demographics) and voted in IS or the Muslim Brotherhood, would that be a "big win" too? A less hypothetical question might be NRx's take on the state of Iraq's fledgling democracy.
Russia is already approximately 15% Muslim, with huge differential birth rates between Christians and Muslims. And that 15% understates the real issue for violence and control - who has the most young men. I've seen numbers that by 2020 (!) half the Russian army will be Muslim, and that share will only grow from there.
Doesn't this analysis depend on army technology not changing? 100 years ago this would be spot on, but if over the next decade we continue to see smaller armies becoming more and more effective, you could have a Russia with an even s...
Being 100x more productive is about not solving hard problems you don't need to. Spending time thinking about ways to avoid the problem often pays off (feature definition, code reuse, slow implementations, etc.). Many of the best practices you read about are solving problems you wish you had - I wish my problem was poor documentation, because that would mean someone actually cares to use it. I was always surprised by how bad the code was out in the wild until I realized it was survivorship bias - the previous owner deferred solving some problem for a long time.
I don't think we would be that far behind.
NNs lost favor in the AI community after 1969 (Minsky and Papert's Perceptrons) and have only become popular again in the last decade. See http://en.wikipedia.org/wiki/Artificial_neural_network
The only crossover that comes to mind for me is vision deep learning 'discovering' edge detection. There is also some interest in sparse NN activation.
AFAIK no one disputes the 'only swims left' part.
From the North Pole every direction is South. Consider being less NRxic.
Question on infinities
If the universe is finite then I am stuck with some arbitrary number of elementary particles. I don't like the arbitrariness of it. So I think - if the universe were infinite it wouldn't have this problem. But then I remember there are countable and uncountable infinities. If I remember correctly, you can take the power set of an infinite set and get a set with larger cardinality. So will I be stuck with some arbitrary cardinality? Is the number of cardinalities countable? If so, could an infinite universe of countably infinite cardinali...
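For reference, the power-set fact is Cantor's theorem, and it answers part of the question: iterating it gives an endless tower of ever-larger cardinalities (in standard set theory the cardinals don't even form a set, let alone a countable one). A sketch of the diagonal argument:

```latex
\[
  |S| \;<\; |\mathcal{P}(S)| \quad\text{for every set } S.
\]
Sketch: if $f : S \to \mathcal{P}(S)$ were onto, consider
$D = \{\, x \in S : x \notin f(x) \,\}$. Then $D = f(d)$ for some
$d \in S$, but $d \in D \iff d \notin f(d) = D$, a contradiction.
So no surjection exists, and iterating the power set yields
\[
  |S| < |\mathcal{P}(S)| < |\mathcal{P}(\mathcal{P}(S))| < \cdots
\]
```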
I'd look for a good headhunter in your field (assuming it is not too niche). Let them get the commission for finding you a job.
Why even read left wing articles if they upset you?
My take is that if the public space were skateboarder- and homeless-friendly, the author could easily write a similar article on how that scares [insert other victim group] away from the public space.
As for why it is written that way, Kling's book The Three Languages of Politics is a good explanation. The left likes to think in oppressed versus oppressor terms.
Thanks for posting this article. There is a park being planned near me and there are certain architectural features I now want it to consider ...
Brokerage accounts (fidelity/etrade) are better than bank accounts in every way (in the US). Use them with a margin account to safely maximize your investments. The margin account will basically function as an overdraft / short-term loan at very favorable rates. Reasons:
This seems like awesome advice that I have never heard before. Do you think it might be dangerous for some people? Like is it a "you must be this tall to ride this ride" kind of thing?
Also, it seems like it might help to have this made actionable by talking about the steps someone would take to convert their financial service provider setup to this. Do you have a good method for picking a broker? If someone was not very financially savvy (like they didn't know what a brokerage even was exactly) what should they do right after reading here to start on the path to setting things up this way?
I have a hard time with the goal-setting phase. Do I really want the goal? Would doing X really achieve the goal? Even if doing X achieves the goal, would the cost be too high? Sometimes I think this is akrasia; other times I think it is accurately recognizing that the goals are wrong.
For most of the work stuff I find it easier to remember where to find things rather than the things themselves. The hard stuff is the undocumented and constantly changing locations and procedures where a search is likely to find out of date junk.
I think there would be more contributions. For instance, SlateStarCodex seems to get more engagement by discussing the taboo topics. It's widely believed that many LWers left here but visit those types of sites. Could LW fully explore rationality without those topics? Probably - but it would be dry and boring.
I think that the reddit code base (LW's) would be a better platform for the rationality community than a bunch of random unconnected websites. I had proposed an egalitarian software solution which I think would allow the taboo topics to be discussed without forcing them on anyone.
What are the current developments? Is anything dominant now? Wikipedia claims Logical Positivism was dominant until 1960.
Also do the current developments matter? Would any of the hard sciences do things differently? Did the change affect the soft sciences?
I always liked that episode. Before I thought being emotionless was effortless for Spock. When I saw that episode I realized he had to work at it.
I don't have kids so take this with a grain of salt. Just give a disappointed/disapproving look every time he swears. Maybe practice in the mirror. Let guess culture work its magic.
Thanks for the insights. I am not in the industry. I hadn't thought about the tax and creditor aspects of life insurance. I can see how those could become murky really quick.
As for the cryonics, yes I was thinking of some sort of life insurance policy. Maybe I should take it off my list, since 'permanent death' would be financially devastating. My thinking was you probably have other things to focus on if you can't pay for it out of pocket.
As for house and renter insurance, I don't think the insurance company's profit is a good indicator of how much expecte...
Another trigger point is deciding when to self-insure. The usual guidance is: when you could easily pay the replacement cost. Insurance is always a low-odds bet. The only economic reason for it is when losing the bet would devastate you financially.
From a quality of life POV, I would think that joint replacement (knee, hip, elbow) would be a huge improvement for many people. Outside of organ growing is there any progress on growing joints?
I agree. I was just trying to motivate my rant.
When Roomba came out I expected vast progress by now. Some company would actually make one that works all the time for the whole house. Now I am not second-guessing the iRobot corporation - maybe they could do it, but the market is happy now. How hard is it with today's know-how to make one that
The actual function of Karma as you describe doesn't bother me. I'll continue voting as usual. The anti-kibitzing option just hides the votes so I don't see them. For me I hope out of sight out of mind actually works for this problem.
I used to think this Karma Score stuff would be helpful to filter low-quality posts. But I see many people get downvoted for tribal reasons, and I also see many upvotes on posts that I have trouble deciphering (sockpuppets?). So usually, when I see a post downvoted to oblivion I end up clicking on it anyway, which defeats the whole purpose of using the Karma Score to help me filter out bad posts. I also waste a bunch of cycles wondering about the votes (who are these people?).
TL;DR I have decided to try using Firefox to view LessWrong with the anti-kibitzing option turned on (see preferences).
Thanks, this is helpful
Does anyone know if any companies are applying NLP to software? Specifically, to the software ASTs (abstract syntax trees)?
I have been playing around with unfolding autoencoders and feeding them Python code, but if there are researchers or companies doing something similar I'd be interested in hearing about it.
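For anyone curious what the input side of this looks like, here is a minimal sketch using Python's standard ast module. The flat list of node-type tokens is just one hypothetical serialization for feeding a model, not the encoding any particular system uses:

```python
import ast

# A small piece of source code to analyze.
source = """
def add(a, b):
    return a + b
"""

# Parse the source into an abstract syntax tree.
tree = ast.parse(source)

# Walk the tree and collect node-type names - one crude way to turn
# structured code into a token sequence a model could consume.
node_types = [type(node).__name__ for node in ast.walk(tree)]
print(node_types)
```

Richer encodings would keep the tree structure (parent/child edges) rather than flattening it, which is exactly what unfolding/recursive autoencoders are meant to exploit.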
Robotics will get scary very soon. Quoted from the link:
...The conference was open to civilians, but explicitly closed to the press. One attendee described it as an eye-opener. The officials played videos of low-cost drones firing semi-automatic weapons, revealed that Syrian rebels are importing consumer-grade drones to launch attacks, and flashed photos from an exercise that pitted $5,000 worth of drones against a convoy of armored vehicles. (The drones won.) But the most striking visual aid was on an exhibit table outside the auditorium, where a buffet of low
My view is that the company knows what the job is worth, and the applicant does not...
Is this a problem nowadays with sites like Glassdoor? Or maybe some industries are not well represented.
When interviewing if you can get multiple job offers then you can play them off each other (in some industries). I don't have any experience with government work though.
I mean it in the non-flattering sense: rent-seeking.
I envision all sorts of arbitrary legal limits imposed on AIs. These limits will need people to dream them up, evangelize the need for even more limits, and enforce the limits (likely involving the creation of other 'enforcer' AIs). Some of the limits (early on) will be good ideas, but as time goes on they will be more arbitrary and exploitable. If you want examples, just think of the laws they will try in order to stop unfriendly AI and to stop individuals from using AI to do evil (say, with an advanced makerbot).
Once...
Why not try to exploit the singularity for fun and profit? It's like you have an opportunity to buy Apple stock dirt cheap.
At the very least you should be able to rule out bad investments (time or money).
I would think most people change their minds on these topics but would simply lie about 1 & 2. There are several threads about religious people turning atheist using this strategy.
I think the grand difficulty is that a change would require a large personal commitment if they wanted to be self-consistent. The difficulty is laziness - 'I'd have to rethink everything' - or even worse, 'I'd be evil to think that'.
Are you worried about his ethics or is he making a mistake in logic?
The columnist says "This opinion is not immoral. Such choices are inevitable. They are made all the time." Is that the part you disagree with?
It would depend on how bad travel was without cars yesterday. Historically it was horses, which must have been really bad. I think if they had known back then about speeds, traffic, and conditions they still would have done it. Parts of China and India have proven it quite recently (within the last 50 years).
Now if we had most people in high density housing, good transport (both public and private), and online ordering/delivery then maybe cars would be very restricted.
I mean unfriendly in the ordinary sense of the word. Maybe uninviting would be just as good.
Perhaps a careful reading of that disclaimer would be friendly or neutral - I don't know. My quick reading of it was: by interacting with AI Impacts you could be waiving some sort of right. To be honest, I don't know what a CC0 is.
I have nothing further to add to this.
The hassles of flying these days have made buses more popular. For a Seattle to Portland trip I would consider it if we didn't have a train.
If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form.
My comment is intended as helpful feedback. If it is not helpful I'd be happy to delete it.
I am not sure. A quick search on LessWrong only led me to Meet Up: Pittsburgh: Rationalization Game
What I am proposing would be more of an exercise in argument structure. Either the 'facts' are irrelevant to the given argument or there are more 'facts' needed to support the conclusion.
In college, I had a professor ask us to pick any subject, make up any 'facts', and try to make a compelling argument. He then had us evaluate other people's essays. Let's just say I wasn't impressed with some of my fellow classmates' arguments.
Sometimes you see this in the courtroom as a failure to state a claim
Would it be interesting to have an open thread where we try this out?
[pollid:814]
Looking at the very bottom of AI Impacts home page - the disclaimer looks rather unfriendly.
I'd suggest petitioning to change it to the LessWrong variety
Here is the text: To the extent possible under law, the person who associated CC0 with AI Impacts has waived all copyright and related or neighboring rights to AI Impacts. This work is published from: United States.
I don't think there is a way out. Basically, you have to keep adding beliefs in order to get anywhere interesting. For instance, with just the belief that you can reason (to some extent), you get a self-existence proof, but you still don't have any proof that others exist.
Like axioms in math - you have to start with enough of them to get anywhere, but once you have a reasonable set you can prove many things.
I would agree: if you can't trust your reasoning then you are in a bad spot. Even Descartes' 'Cogito ergo sum' doesn't get you anywhere if you think the 'therefore' is using reasoning. Even that small assumption won't get you too far, but I would start with him.
Overdosing on politics to become desensitized is genius. However, I seem to have too high a tolerance for it.
The singularity aspect is more of a personal inconsistency I need to address. I can't both think that the long-term stuff doesn't matter and hold strong opinions on long-term issues.
Are human ethics/morals just an evolutionary mess of incomplete and inconsistent heuristics? One idea I heard that made sense is that evolution for us was optimizing our emotions for long term 'fairness'. I got a sense of it when watching the monkey fairness experiment
My issue is with 'friendly ai'. If our ethics are inconsistent then we won't be choosing a good AI but instead the least bad one. A crap sandwich either way.
The worst part is that we will have to hurry to be the first to AI, or some other culture will select the dominant AI.
Just some thoughts/nits
How does this compare with Kurzweil's book Transcend?
Nit: the chart for preventable causes of death includes firearms. Not sure what the prevention is - a bulletproof vest? Should it be suicide instead?
Nit: I am not sure how reference 21 for firearms relates. Instead of 'use common sense', maybe 'stay away from gangs' would be better advice.
Politics as entertainment
For many policy questions I normally foresee long-term 'obvious' issues that will arise from them. However, I also believe in a Singularity of some sort in that same time frame. And when I re-frame the policy question as 'will this impact the Singularity, or matter after the Singularity?' the answer is usually no to both.
Of course, there is always the chance of no Singularity but I don't give it much weight.
So my question is: Has anyone successfully moved beyond the policy questions (emotionally)? Follow up question: once you are bey...
I normally use Chrome. But I did see the problem with IE. IE is using the default video player. You want to use the HTML5 player.
Go to this page and select the 'use html5' button and try again. https://www.youtube.com/html5
Nothing is required that I know of. Here is a video. At the bottom right there is the settings icon - gear shaped. Speed is an option.
I have seen that on a small minority of videos the option wasn't present in settings; however, searching on the title can sometimes turn up a link to a video that has it.
I agree there would still be very easy ways to punish enemies or, even more commonly, 'friends' who don't toe the line.
I do think it would identify some interesting cliques or color teams. The way I envision using it would be more topic-category based. For instance, for topic X I average this group of people's opinions, but a different group's on topic Y.
On the positive side, if you have a minority position on some topic that now would be downvoted heavily you could still get good feedback from your own minority clique.
NNs' connection to biology is very thin. Artificial neurons don't look or act like real neurons at all. But as a coined term to sell your research idea it's great.
NNs are popular now for their deep learning properties and ability to learn features from unlabeled data (like edge detection).
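As a toy illustration of learning from unlabeled data (not any specific published setup - the data, layer sizes, and learning rate below are invented for the example), here is a minimal linear autoencoder in numpy that learns to reconstruct inputs through a 2-unit bottleneck:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 200 samples in 8 dimensions that secretly live
# on a 2-dimensional subspace, so a 2-unit bottleneck can capture them.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 8))

# Linear autoencoder: encode 8 -> 2, decode 2 -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error."""
    recon = X @ W_enc @ W_dec
    return np.mean((recon - X) ** 2)

initial = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(1000):
    H = X @ W_enc              # encode
    R = H @ W_dec              # decode
    err = R - X                # reconstruction error
    # Gradients of the squared error w.r.t. each weight matrix
    # (up to a constant factor, which the learning rate absorbs).
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(initial, final)  # reconstruction error should drop sharply
```

Real deep-learning feature detectors (like the edge detectors mentioned above) come from nonlinear, often sparse or denoising, variants of this same reconstruction objective trained on images.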
Comparing NNs to SVMs isn't really fair. You use the tool best suited for the job. If you have lot...