All of BaconServ's Comments + Replies

Do you find yourself refusing to yield in the latter case but not the former case? Or is this observation of mutually unrelenting parties purely an external observation?

If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.

Where to start depends highly on where you are now. Would you consider yourself socially average? Which culture are you from and what context/situation are you most immediately seeking to optimize? Is this for your occupation? Want more friends?

0DanielDeRossi
I'd consider myself a little below average. Culture: Anglo-Caribbean (where I am now), USA (where I'll be soon). Both professional and personal would be great. Not so much making new friends as navigating social situations and being able to 'read' people.

I'm assuming you meant for the comment section to be used to convince you. Not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what would normally be a trivial issue that holds no real preference for you. Maybe it would be better to do this over private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking f...

In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?

3[anonymous]
s/all/most/ - you will never get them all. But yes, that's an accurate statement. Friendliness is taught in artificial intelligence classes at university, and gets a mention in most recent AI books I've seen. Pull up the AGI conference proceedings and search for "friendly" or "safe" - you'll find a couple of invited talks and presented papers each year. Many project roadmaps include significant human oversight of the developing AGI, and/or boxing mechanisms, for the purpose of ensuring friendliness proactively.

Is there something wrong with how climate change is handled in the world today? Yes, it's hotly debated by millions of people, a super-majority of them entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in "green" and alternative energy if not for the publicity surrounding climate change?

It's easy to look back after the fact and say, "The market handled it!" But the truth is that the p...

0passive_fist
The failure mode that I'm most concerned about is overreaction followed by a backlash of dismissal. If that happened, the end result would be far worse than obscurity.

It could be useful to attach an "If you didn't like/agree with the contents of this pamphlet, please tell us why at ..." note to any given pamphlet.

Personally I'd find it easier to just look at the contents of the pamphlet with the understanding that 99% of people will ignore it and see if a second draft has the same flaws.

That would probably upset many existing Christians. Clearly Jesus' second coming is in AI form.

7Lumifer
Robot Jesus! :-) And rapture is clearly just an upload.
BaconServ-40

How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?

NSA spying isn't a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it's just against NSA spying doesn't seem like an effective approach. The intent of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the dang...

0ChristianKl
I think your idea of a democracy in which letter writing is the way to create political change just doesn't accurately describe the world in which we are living. If I remember right, the median LessWrong prediction is that the singularity happens after 2100. It might happen sooner. I think 30 years is a valid time frame for FAI strategy. That timeframe is long enough to invest in rationality movement building. Not taking the time to respond in detail to every suggestion can be a valid strategy, especially for a post that gets voted down to -3. People voted it down, so it's not ignored. If MIRI wouldn't respond to a highly upvoted solution on LessWrong, then I would agree that's a cause for concern.
-4ChristianKl
There's a reason why it's general advice not to talk about religion, sex, and politics. It's not because the average person does well in discussing politics. Dismissing your opponent out of hand as unintelligent isn't the only failure mode of politics mindkill; I don't even think it's the most important one. How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data? Take two important environmental challenges and look at the first presidency of Obama. One is limiting CO2 emissions. The second is limiting mercury pollution. The EPA under Obama was very effective at limiting mercury pollution but not at limiting CO2 emissions. CO2 emissions are a very politically charged issue with a lot of mindkill on both sides, while mercury pollution isn't. The people who pushed mercury pollution regulation won, and not because they wrote a lot of letters. If you want to do something, you can earn to give and give money to MIRI. You don't get points for pressuring people to address arguments; that doesn't prevent a UFAI from killing you. UFAI is an important problem, but we probably don't have to solve it in the next 5 years. We do have some time to do things right.
BaconServ-20

Letting plants grow their own pesticides for killing off the things that eat them sounds to me like a bad strategy if you want healthy food.

Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn't someone in the field be more aware of that and other potential dangers, despite the GE FUD they've no doubt encountered outside of academia? It seems like the FUD should just be motivating them to understand the risks even more—if for no other reason than simply to correct people's misconceptions on the issue...

0ChristianKl
Because those people do engineer plants to produce pesticides? Bt Potato was the first, approved by the FDA in 1995. The commercial incentives that exist encourage the development of such products. A customer in a store doesn't see whether a potato is engineered to have more vitamins. He doesn't see whether it's engineered to produce pesticides. He buys a potato. It's cheaper to grow potatoes that produce their own pesticides than it is to grow potatoes that don't. In the case of potatoes it might be harmless. We don't eat the greens of the potato anyway, so why worry if the greens have additional poison? But you can slip up. Biology is complicated. You could have changed something that also causes the poison to be produced in the edible parts. It's not a question of motivation. Politics is the mindkiller. If a topic gets political, people on all sides of the debate get stupid. According to Eliezer, it takes strong math skills to see how an AGI can take over its own utility function and is therefore dangerous. Eliezer made the point that it's very difficult to explain to people who are invested in their AGI design that it's dangerous, because that part needs complicated math. It's easy to say in the abstract that some AGI might become UFAI, but it's hard to do the assessment for any individual proposal.

While not doubting the accuracy of the assertion, why precisely do you believe Kurzweil isn't taken seriously anymore, and in what specific ways is this a bad thing for him/his goals/the effect it has on society?

8[anonymous]
I wasn't aware Kurzweil was ever taken seriously in the first place.
1passive_fist
It's bad because as I understand it, his goals are to make people adjust their behavior and attitude for the singularity before it happens (something that is well aligned with what MIRI wants to do) and if he isn't taken seriously then people won't do this. Such things include taking seriously transhumanist concepts (life extension, uploading, etc.) and other concepts such as cryonics. I can't speak for Kurzweil but it seems that he thinks that if people took these ideas seriously right now, we would be headed for a much smoother and more pleasant ride into the future (as opposed to suddenly being awoken to a hard FOOM scenario rapidly eating up your house, your lunch, and then you). I agree with this perspective.

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to seriously address the issue. Ideally, this is something we'll only need to do once; after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.

Google probably already has an internal AI (and AI-risk) team that they've simply had no reason to publicize. If uFAI becomes a widespread worry, you can bet they'd make it known that they were taking their own precautions.

1ChristianKl
Letting plants grow their own pesticides for killing off the things that eat them sounds to me like a bad strategy if you want healthy food. It makes things much easier for the farmer, but to me it doesn't sound like a road that we should go down. I wouldn't want to buy such food in the supermarket, but I have no problem with buying genetically manipulated food that adds extra vitamins. Then there are various issues with introducing new species. Issues about monocultures. Bioweapons. The whole field is dangerous. Safety is really hard.
BaconServ-40

Ask all of MIRI's donors, all LW readers, HPMOR subscribers, friends and family, etc., to forward that one document to their friends.

There has got to be enough writing by now that an effective chain mail can be written.

ETA: The chain mail suggestion isn't knocked down in luke's comment. If it's not relevant or worthy of acknowledging, please explain why.

ETA2: As annoying as some chain mail might be, it does work because it does get around. It can be a very effective method of spreading an idea.

Is "bad publicity" worse than "good publicity" here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that's kind of the goal here.

0passive_fist
Based on my (subjective and anecdotal, I'll admit) personal experiences, I think it would be bad. Look at climate change.
6ChristianKl
Politically, people who fear AI might go after companies like Google. I don't think that the public at large is the target audience. The important thing is that the people who could potentially build an AGI understand that they are not smart enough to contain the AGI. If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say: hey, those people are wrong, I'm smart enough to program an AGI that does what I want. I mean, take a topic like genetic engineering. There are valid dangers involved in genetic engineering. On the other hand, the people who think that all genetically manipulated food is poisonous are wrong. As a result, a lot of self-professed skeptics and atheists see it as their duty to defend genetic engineering.
5Mitchell_Porter
I do not represent Less Wrong, but you have crossed a limit with me. The magic moment came when I realized that BaconServ means spambot. Spammers are the people I most love to hate. I respond to their provocations with a genuine desire to find them and torture them to death. If you were any more obnoxious, I wouldn't even be telling you this, I would just be trying to find out who you are. So wake the fuck up. We are all real people with lives; stop wasting our time. Try to keep the words "I", "Less Wrong", and "signalling" out of your next two hundred comments. ETA: This angry comment was written while under pressure and without a study of BaconServ's full posting history, and should not be interpreted as a lucid assessment.
BaconServ-30

People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.

Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we'd need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I'd love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I'm not holding my breath for femtoengineering. Nevertheless, if such thi...

9TheOtherDave
I don't. I often have conversations here that interest me, which is all the justification I need for continuing to have conversations here. If I stopped finding them interesting, I would stop spending time here. Perhaps those conversations are childish; if so, it follows that I am interested in childish conversations. Perhaps it follows that I myself am childish. That doesn't seem true to me, but presumably if it is, my opinions on the matter aren't worth much. All of that would certainly be a low-status admission, but denying it or pretending otherwise wouldn't change the fact if it's true. It seems more productive to pursue what interests me without worrying too much about how childish it is or isn't, let alone worrying about demonstrating to others that I or LW meet some maturity threshold.
3shinoteki
It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams. The human mind is finite, and there are infinitely many possible concepts. If you're interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate. Drexler wrote a PhD thesis which probably answers this. For discussion on LessWrong, see Is Molecular Nanotechnology "Scientific"? and How probable is Molecular Nanotech?
0TheOtherDave
(nods) IOW, it merely demonstrates our inadequate levels of self-awareness and meta-cognition.

That's true. The process does rely on finding a solution to the worst-case scenario. If you're going to be crippled by fear or anxiety, it's probably a very bad practice to emulate.

Christ, is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.

I've voted on the article, read a few comments, cast a few votes, and made a few replies myself. I'm precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.

Damn I hope nobody replies to my comments...

Thank you. I no longer suspect you of being mind-killed by "politics is the mind-killer." Retracted.

Maybe I'm being too hasty trying to pinpoint people being mind-killed here, but it's hard to ignore that it's happening. I think I probably need to take my own advice right about now if I'm trying to justify my jumping to conclusions with statements like, "It's hard to ignore that it's happening."

I was planning to make a top-level comment here to the effect of, "INB4 obvious mind-kill," but I think I just realized why the thought...

We can only go a step at a time. The other recent post about politics in Discussion was rife with obvious mind-kill. I'm seeing this thread filling up with it too. I'd advocate downvoting of obvious mind-kill, but it's probably not very obvious at all and would just result in mind-killed people voting politically without giving the slightest measure of useful feedback. I'm really at a loss for how to get over the mind-kill of politics and the highly paired autocontrarian mind-kill of "politics is the mind-killer" other than just telling people to shut the fuck up, stop reading comments, stop voting, go lie down, and shut the fuck up.

So because you already have the tool, nobody else needs to be told about it? I feel like I'm strawmanning here, but I'm not sure what your point is if not, "I didn't need to read this."

[This comment is no longer endorsed by its author]
3Dagon
"I didn't need to read this" is probably close to what prompted my comment. Along with "and I suspect most readers also won't get much out of it", I should have just said "this should have gone in discussion first, then (if it was popular) rewritten as a top-level post with a clearer summary". Since it's gotten a reasonable amount of comments and upvotes, I think I was incorrect in my assessment that most readers would be like me,
BaconServ-10

Do you have an actual complaint here, or are you disagreeing for the sake of disagreeing?

Because it sounds a damn lot like you're upset about something but know better than to say what you actually think, so you're opting to make sophomoric objections instead.

[This comment is no longer endorsed by its author]
BaconServ-20

I don't really care how special you think you are.

See, that's the kind of stance I can appreciate. Straight to the point without any wasted energy. That's not the majority response LessWrong gives, though. If people really wanted me to post about this, as the upvotes on the posts urging me to post about it would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?

...Or is the average voter simply not cognizant enough to realize t...

5TheOtherDave
Sarcasm. We get the "oh this is just like theism!" position articulated here every ten months or so. Those of us who have been here a while are kind of bored with it. (Yes, yes, yes, no doubt that simply demonstrates our inadequate levels of self-awareness and metacognition.)
-4CAE_Jones
Hypothesis: the above was deliberate downvote-bait.
6Shmi
Few places online appreciate drama-queening, you know.
Jack100

I'm just trying to encourage you to make your contributions moderately interesting. I don't really care how special you think you are.

Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including, "Works in mysterious ways that we can't hope to fathom."

Wow, what an interesting perspective. Never heard that before.

Certainly; I wouldn't expect it to.

Hah. I like and appreciate the clarity of options here. I'll attempt to explain.

A lot about social situations is something we're directly told: "Elbows off the table. Close your mouth when you chew. Burping is rude; others will become offended." Others are more biologically inherent; murder isn't likely to make you popular at a party. (At least not the positive kind of popularity...) What we're discussing here lies somewhere between these two borders. We'll consider aversion to murderers to be the least biased, having very little bias to it and being...

3TheOtherDave
OK. Thanks for the clarification.

More or less, yeah. The totaled deltas weren't of the necessary order of magnitude in my approximation. It's not that many pages if you set the relevant preference to 25 per page and have iterated all the way back a couple of times before.

0TheOtherDave
Gotcha; I understand now. If that's actually a reliable method of analysis for you, I'm impressed by your memory, but lacking the evidence of its reliability that you have access to, I hope you'll forgive me if it doesn't significantly raise my confidence in the retroactive-karma-penalty theory.
3Viliam_Bur
How specifically can you be surprised to hear "be specific" on LessWrong? (Because that's more or less what Nancy said.) If nothing else, this suggests that your model of LessWrong is seriously wrong. Giving specific examples of "LessWrong is unable to discuss X, Y, Z" is so much preferable to saying "you know... LessWrong is a hivemind... there are things you can't think about..." without giving any specific examples.
7Jack
Why don't you just post them explicitly? As long as they don't involve modeling a vengeful far-future AI, everyone will be fine. Plus, then you can actually test to see if they will be rejected.
6NancyLebovitz
I'm willing to take the risk. PM or public comment as you prefer.

I'd need an expansion on "bias" to discuss this with any useful accuracy. Is ignorance a state of "bias" in the presence of abundant information to the contrary of the naive reasoning from ignorance? Please let me know if my stance becomes clearer when you mentally disambiguate "bias."

3TheOtherDave
If you feel like responding, you can assume I mean by "bias" whatever you meant by it when you used the word. Conversely, if you feel like turning this into an opportunity for me to learn to clear up my mental confusions and then demonstrate my learning to you, that's of course your call. If I experience such an epiphany I may let you know whether your stance thereby becomes clearer to me.
BaconServ-10

I iterated through my entire comment history to find the source of an immediate -15 spike in karma; couldn't find anything. My main hypothesis was moderator reprimand until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I'm unsure whether the retroactive quality of it is immediate or on a timer, but I don't see any reason it wouldn't be immediate. Feel free to test on me; I think the voting has stabilized.

1TheOtherDave
I'm utterly unclear on what evidence you were searching for (and failing to find) to indicate a source of an immediate 15-point karma drop. For example, how did you exclude the possibility of 15 separate downvotes on 15 different comments? Did you remember the previous karma totals of all your comments?
BaconServ-40

Everything being polite and rational is informational; the point is to demonstrate that those qualities are not evidence of the hive mind quality. Something else is, which I clearly identify. Incidentally, though I didn't realize it at the time, I wasn't actually advocating dismantling it, or that it was a bad thing to have at all.

I mean, it's not like none of us ever goes beyond the walls of LessWrong.

That's the perception that LessWrong would benefit from correcting; it is as if LessWrongers never go outside the walls of LessWrong. Obviously you phys...

4NancyLebovitz
I wouldn't mind seeing some of the ideas you think are worthwhile but would be rejected by the LW memetic immune system.

I see. I'll have to look into it some time.

BaconServ-40

Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user possessing a comment under it.

4TheOtherDave
It wasn't retroactive when I did this test a while back. Natch, code changes over time, and I haven't tested recently.
BaconServ-20

As a baseline, I need a program that will give me more information than simply being slightly more aware of my actions does. I want something that will give me surprising information I wouldn't have noticed otherwise. This is necessarily non-trivial, especially given my knack for metacognition.

1linkhyrule5
By definition, I can't really guarantee that information it gives you will be surprising. I can tell you that I consider myself a fairly luminous person and that RescueTime still managed to surprise me. At any rate: I also don't think you're going to get better than RescueTime. It keeps track of everything, does it down to the second, and does it without you having to notice - and it certainly helped me, so there's one data point.

A habit I find my mind practicing incredibly often is simulation of the worst-case scenario. Obviously the worst-case scenario for any human interaction is that the other person will become murderously enraged and do everything in their power to destroy you. This is generally safe to dismiss as nonsense/completely paranoid. After numerous iterations of this, you start ignoring the unrealistic worst-possible scenarios (which often make so little sense there is nothing you can do to react to them) and get down to the realistic worst-case scenario. Oftentimes in my youth...

5Creutzer
I'm not saying this is generally inadvisable, but it seems dangerous for some kinds of people because of a serious possible failure mode: by focussing on half-plausible worst-case scenarios, you will cause yourself to assign additional probability to them. Furthermore, they will come true sometimes, which will give you a feeling that you were right to imagine them, an impression of confirmation, which could lead to a problematic spiral. If you have any inclination towards social anxiety, practice with extreme caution!

I've noticed that I crystallize discrete and effective sentences like that a lot in response to talking to others. Something about the unique way they need things phrased in order to understand well results in some compelling crystallized wisdom that I simply would not have figured out nearly as precisely if I hadn't explained my thoughts to them.

A lot can come across differently when you're trapped behind an inescapable cognitive bias.

ETA: I should probably be more clear about the main implication I intend here: Convincing yourself that you are the victim all the time isn't going to improve your situation in any way. I could make an argument that even the sympathy one might get out of such a method of thinking/acting is negatively useful, but that might be pressing the matter unfairly.

4TheOtherDave
It sounds like you believe that treating silence as a way of expressing that the opinion enjoys social support is the result of bias, but that treating silence as a way of expressing that the opiner deserves courtesy though the opinion is wrong is not the result of bias. Do you in fact believe that? If so, can you provide any justification for believing it? Because it seems implausible.
BaconServ-20

I'm not sure how to act on this information or the corresponding downvoting. Is there something I could have done to make it more interesting? I'd really appreciate knowing.

0Jack
To be clear: I replied before you edited the comment to make it a question about downvotes. Before your edit you were asking for an explanation of the inferential silence; that is what I explained. The downvotes are probably a combination of the boringness, the superiority you were signalling, and leftover bad feelings from other comments you've made tonight. But I didn't downvote. Given the subject and content of the comment, it probably couldn't have been substantially less boring. It could, however, have been substantially shorter.

A good example would be any of the articles about identity.

It comes down to a question of what frequency of powerful realizations individual rationalists are having that make their way back to LessWrong. I'm estimating it's high, but I can easily re-assess my data under the assumption that I'm only seeing a small fraction of the realizations individual rationalists are having.

BaconServ-40

Oh, troll is a very easy perception to overcome, especially in this context. Don't worry about how I'll be perceived beyond delayed participation in making posts. There is much utility in negative response. In a day I've lost a couple dozen karma, and I've learned a lot about LessWrong's perception. I suspect there is a user or two participating in political voting against my comments, possibly in response to my referencing the concept in one of my comments. Something like a grudge is a thing I can utilize heavily.

0gattsuru
I'd expect more loss than that if someone really wanted to disable you; systemic karma abuse would end up resulting in karma loss equal to either some multiple of your total post count or a multiple of the number of posts displayed per user history page (by default, 10).
BaconServ-40

I don't consider the comment section useful or relevant in any way. I can see voting on articles being useful, with articles scoring high enough being shifted into Discussion automatically. You could even have a second tier of voting for when a post has enough votes to pass the threshold into Main, counting the votes it gets once there.

The main problem with karma sorting is that the people who actually control things are the ones who read through all content, indiscriminately. Either all of LessWrong does this, making karma pointless, or a sufficiently dedicat...

0gattsuru
In this case, the content was already in a post. Mechanically, I'm not sure how you'd handle automatically upvoting articles into Discussion: people do that by hand often, but they have to do it by hand because most content loses usefulness and sometimes even readability when pulled from context. ((At a deeper level, it's quite easy to imagine or select posts that belong in Discussion or solely as comments and will quickly get high Karma values, and just as easy to think of posts that belong in Main but shouldn't have anything that would make folk upvote them to start with.)) At least at this point, it's easy enough (and often necessary enough) to change Sorting regularly just to find an article more than once, so I'm not sure sorting is the most meaningful part of Karma. The ability to prevent posters from regularly creating Main articles seems more relevant, and a number of folk at least treat Main articles more seriously.
BaconServ-10

My current heuristic is to take special note of the times a well-performing LessWrong post identifies one of the hundreds of point-biases I've formalized in my own independent analysis of every person and disagreement I've ever seen or imagined.

I'm sure there are better methods to measure that LessWrong can figure out for itself, but mine works pretty well for me.

1RolfAndreassen
Not quite sure what you mean here; could you give an example? But this aside, it seems that you are in some sense discussing the performance of LessWrong, the website, in identifying and talking about biases; while I was discussing the performance of LessWrongers, the people, in applying rationality to their real lives.
BaconServ-20

If there are as many issues as you suggest, then we should start the discussion as soon as possible—so as to resolve them sooner. Can you imagine a LessWrong that can discuss literally any subject in a strictly rational manner and not have to worry about others getting upset or mind-killed by this or that sensitivity?

If I'm decoding your argument correctly, you're saying that there's no obviously good method to manage online debate?

I certainly hope not. If politics were less taboo on LessWrong, I would hope that mention of specific parties were still taboo. Politics without tribes seems a lot more useful to me than politics with tribes.
