Where to start depends heavily on where you are now. Would you consider yourself socially average? Which culture are you from, and what context/situation are you most immediately seeking to optimize? Is this for your occupation? Do you want more friends?
I'm assuming you meant for the comment section to be used to convince you. Not necessarily because you actually meant it, but because making that assumption means not willfully acting against your wishes on what would normally be a trivial issue that holds no real preference for you. Maybe it would be better to do this over private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.
Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking f...
In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?
Is there something wrong with how climate change is handled in the world today? Yes, it's hotly debated by millions of people, a super-majority of whom are entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in "green" and alternative energy if not for the publicity surrounding climate change?
It's easy to look back after the fact and say, "The market handled it!" But the truth is that the p...
It could be useful to attach a note to any given pamphlet along the lines of, "If you didn't like or didn't agree with the contents of this pamphlet, please tell us why at..."
Personally, I'd find it easier to just look at the contents of the pamphlet with the understanding that 99% of people will ignore it, and then see whether a second draft has the same flaws.
That would probably upset many existing Christians. Clearly Jesus' second coming is in AI form.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
NSA spying isn't a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it's just against NSA spying doesn't seem like an effective approach. The intent of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the dang...
Letting plants grow their own pesticides for killing off the things that eat them sounds to me like a bad strategy if you want healthy food.
Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn't someone in the field be more aware of that and other potential dangers, despite the GE FUD they've no doubt encountered outside of academia? It seems like the FUD should just be motivating them to understand the risks even more—if for no other reason than simply to correct people's misconceptions on the issue...
While not doubting the accuracy of the assertion, why precisely do you believe Kurzweil isn't taken seriously anymore, and in what specific ways is this a bad thing for him, for his goals, and for the effect it has on society?
Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to seriously address the issue. Ideally, this is something we'll only need to do once; after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.
Google probably already has an AI (and AI-risk) team internally that they've simply had no reason to publicize. If uFAI becomes a widespread worry, you can bet they'd make it known they were taking their own precautions.
Ask all of MIRI's donors, all LW readers, HPMOR subscribers, friends and family, etc., to forward that one document to their friends.
There has got to be enough writing by now that an effective chain mail can be written.
ETA: The chain mail suggestion isn't knocked down in luke's comment. If it's not relevant or worthy of acknowledging, please explain why.
ETA2: As annoying as some chain mail might be, it does work because it does get around. It can be a very effective method of spreading an idea.
Is "bad publicity" worse than "good publicity" here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that's kind of the goal here.
People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we'd need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I'd love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I'm not holding my breath for femtoengineering. Nevertheless, if such thi...
That's true. The process does rely on finding a solution to the worst case scenario. If you're going to be crippled by fear or anxiety, it's probably a very bad practice to emulate.
Christ, is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.
I've voted on the article, read a few comments, cast a few votes, and made a few replies myself. I'm precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.
Damn I hope nobody replies to my comments...
Thank you. I no longer suspect you of being mind-killed by "politics is the mind-killer." Retracted.
Maybe I'm being too hasty trying to pinpoint people being mind-killed here, but it's hard to ignore that it's happening. I think I probably need to take my own advice right about now if I'm trying to justify my jumping to conclusions with statements like, "It's hard to ignore that it's happening."
I was planning to make a top-level comment here to the effect of, "INB4obvious mind-kill," but I think I just realized why the thought...
We can only go a step at a time. The other recent post about politics in Discussion was rife with obvious mind-kill. I'm seeing this thread filling up with it too. I'd advocate downvoting of obvious mind-kill, but it's probably not very obvious at all and would just result in mind-killed people voting politically without giving the slightest measure of useful feedback. I'm really at a loss for how to get over the mind-kill of politics and the highly paired autocontrarian mind-kill of "politics is the mind-killer" other than just telling people to shut the fuck up, stop reading comments, stop voting, go lie down, and shut the fuck up.
So because you already have the tool, nobody else needs to be told about it? I feel like I'm strawmanning here, but I'm not sure what your point is if not, "I didn't need to read this."
Do you have an actual complaint here, or are you disagreeing for the sake of disagreeing?
Because it sounds a damn lot like you're upset about something but know better than to say what you actually think, so you're opting to make sophomoric objections instead.
I don't really care how special you think you are.
See, that's the kind of stance I can appreciate. Straight to the point without any wasted energy. That's not the majority response LessWrong gives, though. If people really wanted me to post about this as the upvotes on the posts urging me to post about this would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?
...Or is the average voter simply not cognizant enough to realize t...
I'm just trying to encourage you to make your contributions moderately interesting. I don't really care how special you think you are.
Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including, "Works in mysterious ways that we can't hope to fathom."
Wow, what an interesting perspective. Never heard that before.
Certainly; I wouldn't expect it to.
Hah. I like and appreciate the clarity of options here. I'll attempt to explain.
A lot about social situations is something we're directly told: "Elbows off the table. Close your mouth when you chew. Burping is rude; others will become offended." Other norms are more biologically inherent; murder isn't likely to make you popular at a party. (At least not the positive kind of popularity...) What we're discussing here lies somewhere between these two borders. We'll consider aversion to murderers to be the least biased, having very little bias to it and being...
More or less, yeah. The totaled deltas weren't of the necessary order of magnitude in my approximation. It's not that many pages if you set the relevant preference to 25 per page and have already iterated all the way back a couple of times before.
I'd need an expansion on "bias" to discuss this with any useful accuracy. Is ignorance a state of "bias" when there is abundant information contradicting the naive reasoning that follows from that ignorance? Please let me know if my stance becomes clearer when you mentally disambiguate "bias."
I iterated through my entire comment history to find the source of an immediate -15 spike in karma and couldn't find anything. My main hypothesis was moderator reprimand, until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I'm unsure whether the retroactive quality of it is immediate or on a timer, but I don't see any reason it wouldn't be immediate. Feel free to test on me; I think the voting has stabilized.
Everything being polite and rational is informational; the point is to demonstrate that those qualities are not evidence of the hive-mind quality. Something else is, which I clearly identify. Incidentally, though I didn't realize it at the time, I wasn't actually advocating dismantling it, or claiming that it was a bad thing to have at all.
I mean, it's not like none of us ever goes beyond the walls of LessWrong.
That's the perception that LessWrong would benefit from correcting; it is as if LessWrongers never go outside the walls of LessWrong. Obviously you phys...
I see. I'll have to look into it some time.
Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user possessing a comment under it.
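To make the arithmetic of that hypothesis concrete, here's a minimal sketch. The threshold value, the penalty constant, and the function name are all my own invented placeholders, not anything from the actual codebase:

```python
# Toy model of the hypothesized karma mechanic; not actual LessWrong code.
# Assumptions (mine, unconfirmed): the display threshold value and a flat
# five-point penalty per reply to a hidden comment, applied retroactively.

DISPLAY_THRESHOLD = -3  # hypothetical score at which a comment is hidden
REPLY_PENALTY = 5       # hypothetical karma cost per reply to a hidden comment

def retroactive_karma_change(parent_score: int, replies_by_user: int) -> int:
    """Karma change a user would see once the parent drops below the threshold."""
    if parent_score < DISPLAY_THRESHOLD:
        return -REPLY_PENALTY * replies_by_user
    return 0

# A sudden -15 spike would be consistent with three replies under one
# newly hidden parent: 3 * -5 = -15.
assert retroactive_karma_change(parent_score=-4, replies_by_user=3) == -15
```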
As a baseline, I need a program that will give me more information than simply being slightly more aware of my actions does. I want something that will give me surprising information I wouldn't have noticed otherwise. This is necessarily non-trivial, especially given my knack for metacognition.
A habit I find my mind practicing incredibly often is simulation of the worst case scenario. Obviously the worst case scenario for any human interaction is that the other person will become murderously enraged and do everything in their power to destroy you. This is generally safe to dismiss as nonsense/completely paranoid. After numerous iterations of this, you start ignoring the unrealistic worst-possible scenarios (which often make so little sense that there is nothing you can do to react to them) and get down to the realistic worst case scenario. Oftentimes in my youth...
I've noticed that I crystallize discrete and effective sentences like that a lot in response to talking to others. Something about the unique way they need things phrased in order to understand well results in some compelling crystallized wisdom that I simply would not have figured out nearly as precisely if I hadn't explained my thoughts to them.
A lot can come across differently when you're trapped behind an inescapable cognitive bias.
ETA: I should probably be more clear about the main implication I intend here: Convincing yourself that you are the victim all the time isn't going to improve your situation in any way. I could make an argument that even the sympathy one might get out of such a method of thinking/acting is negatively useful, but that might be pressing the matter unfairly.
I'm not sure how to act on this information or the corresponding downvoting. Is there something I could have done to make it more interesting? I'd really appreciate knowing.
A good example would be any of the articles about identity.
It comes down to a question of how frequently individual rationalists are having powerful realizations that make their way back to LessWrong. I'm estimating it's high, but I can easily re-assess my data under the assumption that I'm only seeing a small fraction of the realizations individual rationalists are having.
Oh, troll is a very easy perception to overcome, especially in this context. Don't worry about how I'll be perceived beyond delayed participation in making posts. There is much utility in negative response. In a day I've lost a couple dozen karma, and I've learned a lot about LessWrong's perception. I suspect there is a user or two participating in political voting against my comments, possibly in response to my referencing the concept in one of my comments. Something like a grudge is a thing I can utilize heavily.
I don't consider the comment section useful or relevant in any way. I can see voting on articles being useful, with articles scoring high enough being shifted into Discussion automatically. You could even have a second tier of voting: once a post has enough votes to pass the threshold into Discussion, the votes it gets there could determine whether it's promoted into Main.
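As a sketch of how that two-tier routing might look (all threshold values and names here are invented for illustration, not an actual LessWrong feature):

```python
# Sketch of the proposed two-tier promotion rule; thresholds are invented.

DISCUSSION_THRESHOLD = 10  # hypothetical votes needed to enter Discussion
MAIN_THRESHOLD = 25        # hypothetical Discussion-votes needed to enter Main

def section_for(initial_votes: int, discussion_votes: int) -> str:
    """Route a post by the two vote tallies described above."""
    if initial_votes < DISCUSSION_THRESHOLD:
        return "New"
    if discussion_votes < MAIN_THRESHOLD:
        return "Discussion"
    return "Main"

assert section_for(5, 0) == "New"
assert section_for(12, 8) == "Discussion"
assert section_for(12, 30) == "Main"
```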
The main problem with karma sorting is that the people that actually control things are the ones that read through all content, indiscriminately. Either all of LessWrong does this, making karma pointless, or a sufficiently dedicat...
My current heuristic is to take special note of the times a well-performing LessWrong post identifies one of the hundreds of point-biases I've formalized in my own independent analysis of every person and disagreement I've ever seen or imagined.
I'm sure there are better methods to measure that LessWrong can figure out for itself, but mine works pretty well for me.
If there are as many issues as you suggest, then we should start the discussion as soon as possible, so as to resolve them sooner. Can you imagine a LessWrong that can discuss literally any subject in a strictly rational manner and not have to worry about others getting upset or mind-killed by this or that sensitivity?
If I'm decoding your argument correctly, you're saying that there's no obviously good method to manage online debate?
I certainly hope not. If politics were less taboo on LessWrong, I would hope that mention of specific parties would still be taboo. Politics without tribes seems a lot more useful to me than politics with tribes.
Do you find yourself refusing to yield in the latter case but not the former? Or is this purely an external observation of mutually unrelenting parties?
If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.