Good point! I assume we'll have decay built into the system, based on age of the data points... some form of that is built into the architecture of FreeNet I believe, where less-accessed content eventually drops out from the network altogether.
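To illustrate what I mean by decay, the simplest version I'm picturing is an exponential half-life keyed to when a data point was last accessed. This is just a toy sketch; the 90-day half-life is an arbitrary placeholder, not a settled design:

```python
import time

HALF_LIFE_DAYS = 90  # placeholder: weight halves every 90 days without access

def decayed_weight(base_weight, last_access_ts, now_ts=None):
    """A data point's weight decays exponentially with time since last access."""
    if now_ts is None:
        now_ts = time.time()
    age_days = (now_ts - last_access_ts) / 86400
    return base_weight * 0.5 ** (age_days / HALF_LIFE_DAYS)
```

Content that keeps getting accessed keeps resetting its clock; content nobody touches fades toward zero and can eventually be dropped, FreeNet-style.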
I wasn't even thinking about old people... I was more thinking about letting errors of youth not follow you around for your whole life... but at the same time, valuable content (that which is still attracting new readers who mark it as valuable) doesn't disappear.
That said, longevity on the system means you've had more time to contribute... But if your contributions are generally rated as crappy, time isn't going to help your influence without a significant ongoing improvement to your contributions' quality.
But if you're a cranky old nutjob, and there are people out there who like what you say, you can become influential in the nutjob community, albeit at the expense of your influence in other circles. You can be considered a leading light by a small group of people, but an idiot by the world at large.
You may be a dreamer, but so am I. Perhaps we should talk. :)
As it happens, I do have in mind a design for a distributed, open-source approach that should circumvent this problem, at least in the area of social news. I'm not sure, however, whether the Less Wrong crowd would find it relevant for me to discuss that in an article.
I'd love to discuss my concept. It's inspired in no small part by what I learned from LessWrong, and by my UI designer's lens. I don't have the karma points to post about it yet, but in a nutshell it's about distributing social, preference and history data, but also distributing the processing of aggregates, cross-preferencing, folksonomy, and social clustering.
The grand scheme is to repurpose every web paradigm that has improved semantic and behavioral optimization, but distribute out the evil centralization in each of them. I'm thinking of an architecture akin to FreeNet, with randomized redundancies and cross-checking, to prevent individual nodes from gaming the ruleset.
But we do crowd-source the ruleset, and distribute its governance as well. Using a system not unlike LW's karma (but probably a bit more complex), we weight individual users' "influence." Which factors articles, comments, and users can be rated on is one of the tough questions I'm struggling with. I firmly believe that given a usable yet potentially deep and wide range of evaluation factors, many people will bother to offer nuanced ratings and opinions... especially if the effort is rewarded by growth in their own "influence".
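To make the karma-like part concrete, here's a first-pass sketch of the kind of math I have in mind (nothing here is settled; names and scales are made up): a rating moves its target in proportion to the rater's own influence, so a heavyweight rater counts for more than a newcomer.

```python
def updated_influence(ratings):
    """ratings: list of (rater_influence, score) pairs, score in [-1, 1].
    Returns the influence-weighted mean score, so a high-influence
    rater moves the result more than a newcomer does."""
    total = sum(inf for inf, _ in ratings)
    if total == 0:
        return 0.0
    return sum(inf * score for inf, score in ratings) / total
```

So one established user rating something +1 can outweigh a fresh account rating it -1, which is the cross-influencing effect I'm after.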
So, through cross-influencing, we recreate online the networks of reputation and influence that exist in the real social world... but with less friction, and based more on your words and deeds than on institutional, authority, and character bias.
I'm hoping this has the potential to encourage more of a meritocracy of ideas. Although to be honest, I envision a system that can be used to filter the internet any way you want. You can decide to view only the most influential ideas from people who think like you, or from people who agree with Rush Limbaugh, or from people who believe in the rapture... and you will see that. You can find the most influential cute kitty video among cute kitty experts.
That's the grand vision in a nutshell, and it's incredibly ambitious of course, yet I'm thinking of bootstrapping it as an agile startup, eventually open-sourcing it all and providing a hosted free service as an alternative to running a client node. If I can find an honest and non-predatory way to cover my living expenses out of it, it would be nice, but that's definitely not the primary concern.
I'm looking for partners to build a tool, but also for advisors to help set the right value-optimizing architecture... "seed" value-adding behavior into the interface, as it were. I hope I can get some help from the LessWrong community. If this works, it could end up being a pretty influential bit of technology! I'd like it to be a net positive for humanity in the long term.
I'm probably getting ahead of myself.
This touches directly on work I'm doing. Here is my burning question: Could an open-source optimization algorithm be workable?
I'm thinking of a Wikipedia-like system for open-edit regulation of the optimization factors, weights, etc. Could full, direct democratization of the attention economy be the solution to the arms-race problem?
Or am I, as usual, a naive dreamer?
From a utilitarian perspective we should restrict ourselves to things that are possible, and unless you're a governor or an obscenely powerful lobbyist, I don't think you are going to be raising taxes anytime soon.
Advertising is, by nature, diametrically opposed to rational thought. Advertising stimulates an emotional, reptilian response. I advance the hypothesis that exposure to more advertising has negative effects on people's receptivity to, and affinity with, rational/utilitarian modes of thinking.
So far, the most effective tools to boost popular support for SIAI and existential-risk reduction have been science-fiction books and movies. Hollywood can markedly influence cultural attitudes, on a large scale, with just a few million dollars... and profitably. Like advertisers, filmmakers often just pander to reptilian and emotional response... but even then they can also educate and convince.
What most people know of and believe about AI and existential risk is what they learned from Steven Spielberg, Oliver Stone, Isaac Asimov, etc. If Spielberg is a LW reader (maybe he lurks?), I am much more optimistic for mankind than if ads run on Craigslist.
If you want people to support the right kind of research, I advance that it could be most effectively and humanely accomplished using the Direct Belief Transfer System that is storytelling.
Who wants to write The Great Bayesian Novel? And the screenplay?
How about this analogy: if I sign up for travel insurance today then I needn't necessarily spend the next week coming to terms with all the ghastly things that could happen during my trip. Perhaps the ideal rationalist would stare unblinkingly at the plethora of awful possibilities but if I'm going to be irrational and block my ears and eyes and not think about them then making the rational choice to get insurance is still a very positive step.
Alex, I see your point, and I can certainly look at cryonics this way... And I'm well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it's now feeling like one of these no-fun grown-up things I'm going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this "bah, no big deal, just get it done" attitude which is a helpful model at this point. I tend to be the agonizing type.
But I think I'm also making a point about communicating the singularity to society, as opposed to individuals. This knee-jerk reaction to topics like cryonics and AI, and to promises such as the virtual end of suffering... might it be a sort of self-preservation instinct of society (not individuals)? So, defining "society" as the system of beliefs and tools and skills we've evolved to deal with foreknowledge of death, I guess I'm asking whether society is alive, inasmuch as it has inherited some basic self-preservation mechanisms by virtue of the sunk-cost fallacy suffered by the individuals that comprise it?
So you may have a perfectly no-brainer argument that can convince any individual, and still move nobody. The same way you can't make me slap my forehead by convincing each individual cell in my hand to do it. They'll need the brain to coordinate, and you can't make that happen by talking to each individual neuron either. Society is the body that needs to move, culture its mind?
I don't know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky's post on Normal Cryonics... if in reverse.
Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It's made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.
Once you notice a cultural bias, it's not too hard to change your reasoned opinion... but the bias usually piggy-backs on a deep-seated reptilian reaction. I find changing that reaction to be harder work.
All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).
Knowing that we will suffer, and knowing that we will die, are unbearable thoughts. We invest an enormous amount of energy toward dealing with the certainty of death and of suffering, as individuals, families, social groups, nations. Worlds in which we would not have to die, or not have to suffer, are worlds for which we have no useful skills or tools. Especially compared to the considerable arsenal of sophisticated technologies, art forms, and psychoses we've painstakingly evolved to cope with death.
That's where I am right now. Eliezer's comments have triggered a strong rational dissonance, but I feel comfortable hanging around all the serious people, who are too busy doing the serious work of making the most of life to waste any time on silly things like immortality. Mostly, I'm terrified at the unfathomable enormity of everything that I'll have to do to adapt to a belief in cryonics. I'll have to change my approach to everything... and I don't have any cultural references to guide the way.
Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?
Is this a matter of genetic programming percolating too deep into the fabric of all our systems, be they genetic, nervous, emotional, instinctual, cultural, intellectual? Are we so hard-wired for death that we physically can't fathom or adapt to the potential for immortality?
I'm particularly interested in hearing about the experience of the LW community on this: How far can rational examination of life-extension possibilities go in changing your outlook, but also feelings or even instincts? Is there a new level of self-consciousness behind this brick wall I'm hitting, or is it pretty much brick all the way?
Hello.
I'm Antoine Valot, 35 years old, an Information Architect and Business Analyst, and a Frenchman living in Colorado, USA. I've been lurking on LW for about a month, and I like what I see, with some reservations.
I'm definitely an atheist, currently undecided as to how anti-theist I should be (seems the logical choice, but the antisocial aspects suggest that some level of hypocrisy might make me a more effective rational agent?)
I am nonetheless very interested in some of the philosophical findings of Buddhism (non-duality being my pet idea). I think there are some very actionable and useful tools in Buddhism at the juncture of rationality and humanity: how to not believe in Santa, but still fulfill non-rational human needs and aspirations. Someone's going to have to really work on convincing me that "utility" can suffice, when Buddhist concepts of "happiness" seem to fit the bill better for humans. "Utility" seems too much like pleasure (unreliable, external, variable), as opposed to happiness (maintainable, internal, constant).
Anyway, I'm excited to be here, and looking forward to learning a lot and possibly contributing something of value.
A special shout-out to Alicorn: I read your post on male bias, and I dig, sister. I'll try not to make matters worse, and look for ways to make them better.
I'm a bit nervous, this is my first comment here, and I feel quite out of my league.
Regarding the "free will" aspect, can one game the system? My rational choice would be to sit right there, arms crossed, and choose no box. Instead, having thus disproved Omega's infallibility, I'd wait for Omega to come back around, and try to weasel some knowledge out of her.
Rationally, the intelligence that could model mine and predict my likely action (yet fail to predict my inaction enough to not bother with me in the first place) is an intelligence I'd like to have a chat with. That chat would be likely to have tremendously more utility for me than $1,000,000.
Is that a valid choice? Does it disprove Omega's infallibility? Is it a rational choice?
If messing with the question is not a constructive addition to the debate, accept my apologies, and flame me lightly, please.
I'm still not quite getting how this is going to work.
Let's say I am a spam blog bot. What it does is take popular (for a niche) articles and repost automated summaries. So let's say it does this for cars. These aren't very good, but aren't very bad either. Perhaps it makes automatic word changes to real people's summaries. It recruits lots of other spam bots of this type, and they form self-supporting networks (each upvoting the others) and also like popular things to do with cars. People come across these links and upvote them, because they go somewhere interesting. The bots gain lots of karma in these communities and then start pimping car-related products or spreading FUD about rival companies. Automated astroturf, if you want.
Does anyone regulate the creation of new users?
How long before they stop being interesting to the car people? And how much effort would it be to track them down and remove them from the circle of people you are interested in?
Also, who keeps track of these votes? Can people ballot-stuff?
I've thought along these lines before and realised it is a non-trivial problem.
There's a few questions in there. Let's see.
Authentication and identity are an interesting issue. My concept is to allow anonymous users, with a very low initial influence level. But there would be many ways for users to strengthen their "identity score" (credit card verification, address verification via a snail-mailed verification code, etc.), which would greatly and rapidly increase their influence score. A username that is tied to a specific person, and therefore wields much more influence, could undo the efforts of 100 bots with a single downvote.
But if you want to stay anonymous, you can. You'll just have to patiently work on earning the same level of trust that is awarded to people who put their real-life reputation on the line.
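A crude sketch of what I mean by the identity score (the verification types and bonus values are purely illustrative, not a proposal):

```python
BASE_ANONYMOUS_SCORE = 1  # anonymous users start with near-zero standing

# Purely illustrative verification types and bonus values.
VERIFICATION_BONUS = {
    "credit_card": 30,
    "postal_code": 20,   # snail-mailed verification code
    "email": 5,
}

def identity_score(verifications):
    """Each completed verification raises the user's identity score."""
    return BASE_ANONYMOUS_SCORE + sum(
        VERIFICATION_BONUS.get(v, 0) for v in verifications)
```

A fresh bot account sits at the base score, while a verified real person quickly climbs high enough to outvote a swarm of them.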
I'm also conceiving of a richly semantic system, where simply "upvoting" or Facebook-liking are the least influential actions one can take. Up from there, you can rate content on many factors, comment on it, review it, tag it, share it, reference it, relate it to other content. The more editorial and cerebral actions would probably do more to change one's influence than a simple thumbs up. If a bot can compete with a human in writing content that gets rated highly on "useful", "factual", "verifiable", "unbiased", AND "original" (by people who have high influence scores in these categories), then I think the bot deserves a good influence score, because it's a benevolent AI. ;)
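For example, the action weighting might look something like this (the numbers are placeholders, just to show the shape of the idea: editorial effort counts for more than a bare click):

```python
# Placeholder weights: richer editorial actions count for more than a bare upvote.
ACTION_WEIGHT = {
    "upvote": 1,
    "tag": 2,
    "rate_factor": 3,   # e.g. rating something "factual" or "original"
    "comment": 4,
    "review": 6,
}

def contribution_delta(actions):
    """Total influence contribution from one user's actions on a piece of content."""
    return sum(ACTION_WEIGHT.get(a, 0) for a in actions)
```

So a thoughtful review moves the needle several times more than a drive-by upvote, which is exactly the behavior I'd want to reward.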
Another concept, which would reduce incentives to game the system, is vouching. You can vouch for other users' identity, integrity, maturity, etc. If you vouched for a bot, and the bot's influence gets downgraded by the community, your influence will take a hit as well.
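A minimal sketch of the vouching mechanics, assuming vouchers jointly absorb some fixed fraction of any downgrade (the fraction is a placeholder):

```python
VOUCH_PENALTY = 0.5  # placeholder: vouchers jointly absorb half the downgrade

def apply_downgrade(influences, vouchers, target, amount):
    """Downgrade the target's influence; everyone who vouched for
    the target shares a fraction of the hit."""
    influences[target] = max(0.0, influences[target] - amount)
    if vouchers:
        share = VOUCH_PENALTY * amount / len(vouchers)
        for v in vouchers:
            influences[v] = max(0.0, influences[v] - share)
    return influences
```

Vouching for a bot thus becomes an expensive mistake, so people have a direct incentive to only vouch for users they actually know and trust.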
I see this happening throughout the system: Every time you exert your influence, you take responsibility for that action, as anyone may now rate/review/downvote your action. If you stand behind your judgement of Rush Limbaugh as truthful, enough people will disagree with you that from that point on, anytime you rate something as "truthful", that rating will count for very little.
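Something like this, where trust is tracked per rating category and a penalty in one category leaves your standing in the others alone (the halving factor is arbitrary):

```python
def rating_weight(trust, user, category):
    """A user's rating counts in proportion to their trust in that category."""
    return trust.get(user, {}).get(category, 0.0)

def penalize(trust, user, category, factor=0.5):
    """Community disagreement cuts trust in one category only;
    standing in every other category is untouched."""
    trust.setdefault(user, {})
    trust[user][category] = trust[user].get(category, 0.0) * factor
    return trust
```

So after the community overrules your "truthful" rating of Limbaugh, your "truthful" ratings count for half as much, but your "funny" ratings are unaffected.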