All of avalot's Comments + Replies

Very tricky question. I'll answer it in two ways:

  1. As I indicated, in terms of navigation/organization scheme, LW is completely untraditional. It still feels to me like a dark museum of wonder, of unfathomable depth. I get to something new, and mind-blowing, every time I surf around. So it's a delightful labyrinth that unfolds like a series of connected thoughts any way you work it. It's an advanced navigation toolset, usable only by people who are able to conceptualize vast abstract constructs... which is the target audience... or is it?

  2. I've been in

... (read more)
1NancyLebovitz
LW does more to bring its past into the present than any other site I've used. I'm thinking that this is partly structure, and partly that the users consider its past posts (much less so with the comments) to be important.

I might be an advanced user -- I'm able to use LW and I think I've found the major features. [1] On the other hand, I would not have been able to identify the site as being in the style of a particular operating system. My history goes back to Usenet, which is why I keep mentioning that the site needs trn or the equivalent. Still, the way comments are presented is the Least Awful I've seen on the web. Trn or slrn might be the kind of thing you mean by advanced comment/post management.

The other thing I think would do the most to keep weaving the past into the present is a better search system. It would help if I could do a string search limited to the posts from a particular user, and if there were a way to get search results arranged chronologically. As far as I can tell, they're arranged randomly. Something like the advanced search from Google Groups would be really helpful. It can take 10 or 15 minutes for me to find a comment, if I manage it at all, and it's apt to feel like luck.

Only having Recent Comments for LW proper and for LW:Discussion, rather than being able to choose Recent Comments for particular threads, is of mixed value. I think it does make the site more like one conversation for those who want to put in a lot of time, but that means it's less useful for those who don't want to put in that much time, and a temptation to kill time for some of the rest.

[1] The site has an abstract resemblance to a bit from one of Doris Piserchia's novels (Mr. Justice?), in which a school for brilliant children doesn't offer a map of the buildings, just a map of the local geography. The students are expected to figure out where the buildings are supposed to be.
5sark
There is nothing to be disciplined or rigorous about when doing such a quote. What you see here is all there is to it. However, scholars might want you to think otherwise: by obfuscating their work, they can make it seem more impressive.

Lesswrong is certainly designed for the advanced user. Most everything on the site is non-standard, which seriously impedes usability for the new user. Considering the topic and intended audience, I'd say it's a feature, not a bug.

Nonetheless, the site definitely smacks of unix-geekery. It could be humanized somewhat, and that probably wouldn't hurt.

0NancyLebovitz
What specific changes would you recommend?
0ruhe47
I am very new to the site, and have, in the short time I have been here, found it to be both a pleasure to navigate and easy to use. Although I could very well fit under the category of "advanced user."

Anti-vaccination activists base their beliefs not on the scientific evidence, but on the credibility of the source. Not having enough scientific education to be able to tell the difference, they have to go to plan B: Trust.

The medical and scientific communities in the USA are not as well-trusted as they should be, for a variety of reasons. One is that the culture is generally suspicious of intelligence and education, equating them with depravity and elitism. Another is that some doctors and scientists in the US ignore their responsibility to preserve the p... (read more)

The medical and scientific communities in the USA are not as well-trusted as they should be...

I disagree. I think Americans are far too trusting of medical professionals. So much of what has been recommended to me by doctors is useless or even harmful to my recovery. Ever tried to talk to your doctor about conditional probabilities? Also, I don't think we should associate the medical profession so closely with "science".

Interesting too is the concept of amorphous, distributed and time-lagged consciousness.

Our own consciousness arises from an asynchronous computing substrate, and you can't help but wonder what weird schizophrenia would inhabit a "single" brain that stretches and spreads for miles. What would that be like? Ideas that spread like wildfire, and moods that swing literally with the tides?

By "strangers and superficial acquaintances", I didn't mean bosses or co-workers. In business, knowing the ground is important, but as a foreigner, you get more free passes for mistakes, you're not considered a fool for asking advice on basic behavior, and you can actually transgress on some (not all, not most) cultural norms and taboos with impunity, or even with cachet.

I was not talking specifically about Americans. Americans indeed tend to find out that they have a lot to answer for when traveling abroad. I believe this is also often compounde... (read more)

Thank you! You have no idea just how helpful this comment is to me right now. Your answer to all-consuming nihilism is exactly what i needed!

I think there is a widespread emotional aversion to moving abroad, which means there must be great money to be made on arbitrage.

I think a lot of the aversion is fear of inferiority and/or ostracism. These are counter-intuitively misplaced.

The theory is this: You're worried that the people over there have their own way of doing things, they know the lay of the land, and they're competing hard at a game they've been playing together since they were born. Whereas you barely speak the language, don't know the social conventions, and have no connections. What... (read more)

9Davidmanheim
The fact that Americans don't want to go overseas does not imply there is money to be made on arbitrage - there are other people already there. You need several more pieces of information, which we currently lack, in order to conclude that there is money to be made.

The asymmetry you posit sounds plausible, but is frequently untrue - in business, knowing the ground is incredibly important. I had a co-worker who was fired, essentially, for making one giant culturally insensitive statement. Being an American frequently gets a different reaction abroad than "cute," and strangely it's not a positive one. Exotic is fine in hospitality, but most jobs want someone they understand as an employee. When we hire, known quantities always win, all else equal.

And most people don't want to live in the middle of nowhere for some extra money. That's a non-economic cost that may not be compensated for by extra salary, which may not exist anyway.

Yes, and I think this is the one big crucial exception... That is the one bit of knowledge that is truly evil. The one datum that is unbearable torture on the mind.

In that sense, one could define an adult mind as a normal (child) mind poisoned by the knowledge-of-death toxin. The older the mind, the more extensive the damage.

Most of us might see it more as a catalyst than a poison, but I think that's insanity justifying itself. We're all walking around in a state of deep existential panic, and that makes us weaker than children.

3rwallace
Well, it's not the knowledge of death that's evil, it's the actual phenomenon -- there's not much point blaming the messenger for the bad news. Especially not now we're at the stage where we're beginning to have a chance to do something about it.
2AlanCrowe
Ernest Becker agrees with you, but I always read the one star reviews first. For myself, I've lost touch with Becker's ontology. I'm reduced to making the lame suggestion of playing Go in tournaments in order to practice managing a limited stock of time, such as 70 years.
  • The sound of one hand clapping is "Eliezer Yudkowsky, Eliezer Yudkowsky, Eliezer Yudkowsky..."
  • Eliezer Yudkowsky displays search results before you type.
  • Eliezer Yudkowsky's name can't be abbreviated. It must take up most of your tweet.
  • Eliezer Yudkowsky doesn't actually exist. All his posts were written by an American man with the same name.
  • If Eliezer Yudkowsky falls in the forest, and nobody's there to hear him, he still makes a sound.
  • Eliezer Yudkowsky doesn't believe in the divine, because he's never had the experience of discovering Elieze
... (read more)
5timujin
The last one actually works!

Surprised that nobody has posted this yet...

"Self" is an illusion created by the verbal mind. The Buddhists are right about non-duality. The ego at the center of language alienates us to direct perception of gestalt, and by extension, from reality. (95%)

More bothersome: The illusion of "Self" might be an obstacle to superior intelligence. Enhanced intelligences may only work (or only work well) within a high-bandwidth network more akin to a Vulcan mind meld than to a salon conversation, one in which individuality is completely lost. (80%)

0Risto_Saarelma
I read somewhere about the basis for consciousness or "self" being basically about being able to commit to acting towards a specific goal for a longer duration, instead of just being swamped by moment-to-moment sensory input. For example, being able to carry a hot bowl of soup to the table without dropping it midway when it starts burning one's fingers. So upvote on the verbal mind thing, as long as we're talking about human minds here.

I don't have a very advanced grounding in math, and I've been skipping over the technical aspects of the probability discussions on this blog. I've been reading lesswrong by mentally substituting "smart" for "Bayesian", "changing one's mind" for "updating", and having to vaguely trust and believe instead of rationally understanding.

Now I absolutely get it. I've got the key to the sequences. Thank you very very much!

Maybe it's a point against investing directly into cryonics as it exists today, and working more through the indirect approach that is most likely to lead to good cryonics sooner. I'm much much more interested in being preserved before I'm brain-dead.

I'm looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There's the genetic approach, and the hydrogen sulfide approach.

March 2010: Mark Roth at TED

...by the way, the comments threads on the TED website could use a few more... (read more)

3magfrump
Voted up for extensive linkage

Getting back down to earth, there has been renewed interest in medical circles in the potential of induced hibernation, for short-term suspended animation. The nice trustworthy doctors in lab coats, the ones who get interviews on TV, are all reassuringly behind this, so this will be smoothly brought into the mainstream, and Joe the Plumber can't wait to get "frozed-up" at the hospital so he can tell all his buddies about it.

Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as "... (read more)

7cousin_it
I don't think you stumbled on any good point against cryonics, but the scenario you described sounds very reassuring. Do you have any links on current hibernation research?

You are right: This needs to be a fully decentralized system, with no center, and processing happening at the nodes. I was conceiving of "regional" aggregates mostly as a guess as to what may relieve network congestion if every node calls out to thousands of others.

Thank you for setting me right: My thinking has been so influenced by over a decade of web app dev that I'm still working on integrating the full principles of decentralized systems.

As for boiling oceans... I wish you were wrong, but you're probably right. Some of these architectures ... (read more)

You're right: A system like that could be genetically evolved for optimization.

On the other hand, I was hoping to create an open optimization algorithm, governable by the community at large... based on their influence scores in the field of "online influence governance." So the community would have to notice abuse and gaming of the system, and modify policy (as expressed in the algorithm, in the network rules, in laws and regulations and in social mores) to respond to it. Kind of like democracy: Make a good set of rules for collaborative rule-mak... (read more)

1Alexandros
I think you are closer to a strong solution than you realize. You have mentioned the pieces but I think you haven't put them together yet.

In short, the solution I see is to depend on local (individual) decisions rather than group ones. If each node has its own ranking algorithm and its own set of trust relations, there is no reason to create complex group-cooperation mechanisms. A user that spams gets negative feedback and therefore eventually gets isolated in the graph. Even if automated users outnumber real users, the best they can do is vote each other up and therefore end up with their own cluster of the network, with real users only strongly connected to each other. Of course, if a bot provides value, it can be incorporated in that graph. "Sufficiently advanced spam...", etc. etc.

This also means that the graph splinters into various clusters depending on worldview (your Rush Limbaugh example). This deals with Keynesian beauty contests as there is no 'average' to aim at. Your values simply cluster you with people who share them. If you value quality, you move closer to quality. If you value 'republican-ness' you move closer to that. The price you pay is that there is no 'objective' view of the system. There is no 'top 10 articles', only 'top 10 articles for user X'.

Another thing I see with your design is that it is complex and attempts to boil at least a few oceans (emergent ontologies/folksonomies, distributed identity, storage, etc.). I have some experience with defining complex architectures for distributed systems (e.g. http://arxiv.org/abs/0907.2485) and the problem is that they need years of work by many people to reach some theoretical purity, and even then bootstrapping will be a bitch. The system I have in mind is extremely simple by comparison, definitely more pragmatic (and therefore makes compromises), and is based on established web technologies. As a result, it should bootstrap itself quite easily. I find myself not wanting to publicly
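The local-ranking idea above can be sketched as trust propagation from each individual node outward, in the spirit of a personalized PageRank. This is only an illustrative sketch of the principle, not anyone's actual design; all names, weights, and the damping value are assumptions.

```python
# Hypothetical sketch: each user ranks others via their OWN trust graph,
# propagating trust outward from themselves. A spam cluster that only
# votes for itself receives no trust from the user's vantage point,
# no matter how much internal vote volume it generates.

def personal_trust_scores(trust_edges, me, damping=0.85, iters=50):
    """trust_edges: {user: {trusted_neighbor: weight, ...}} of explicit trust votes."""
    users = set(trust_edges)
    for nbrs in trust_edges.values():
        users |= set(nbrs)
    # All trust originates from "me"; everyone else starts at zero.
    scores = {u: (1.0 if u == me else 0.0) for u in users}
    for _ in range(iters):
        nxt = {u: (1.0 - damping) * (1.0 if u == me else 0.0) for u in users}
        for u, nbrs in trust_edges.items():
            total = sum(nbrs.values())
            if total == 0:
                continue
            for v, w in nbrs.items():
                nxt[v] += damping * scores[u] * (w / total)
        scores = nxt
    return scores

# Two real users trusted by "me", and a two-bot cluster voting itself up.
edges = {
    "me":    {"alice": 1.0, "bob": 1.0},
    "alice": {"bob": 1.0},
    "bot1":  {"bot2": 5.0},
    "bot2":  {"bot1": 5.0},
}
s = personal_trust_scores(edges, "me")
```

Because the bots receive no incoming trust from the user's cluster, their scores stay at zero from this user's point of view, which is the isolation effect described above. There is also no global "top 10" here, only scores relative to `me`.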

Alexandros,

Not surprised that we're thinking along the same lines, if we both read this blog! ;)

I love your questions. Let's do this:

Keynesian Beauty Contest: I don't have a silver bullet for it, but a lot of mitigation tactics. First of all, I envision offering a cascading set of progressively more fine-grained rating attributes, so that, while you can still upvote or downvote, or rate something with stars, you can also rate it on truthfulness, entertainment value, fairness, rationality (and countless other attributes)... More nuanced ratings would proba... (read more)

0whpearson
I'd create a simplified evolutionary model of the system, using a GA to create the agents. If groups can find a way to game your system to create infinite interesting-ness/insightful-ness for specific topics, then you need to change it.

Clippy, how can we get along?

What should humans do to be AI-friendly? For paperclip-maximizing AIs, and other "natural" (non-Friendly) AIs, what are the attributes that can make humans a valuable part of the utility function, so that AIs won't pull the plug on us?

Or am I fooling myself?

5Clippy
Just have proper respect for paperclips. Make more of them, require that they be preserved when no longer in use, prohibit their unbending. Allocate research efforts to improving the rate of paperclip production and related goods. Simple, non-intrusive things like that.

At the moment, humans seem to be Clippy or slightly sub-Clippy level intelligence. And even with all our computing power, most ain't FOOMing any faster than Clippy. At this rate, we're never gonna ensure survival of the species.

If, however, we allow ourselves to be modified so as to substitute paperclip values for our own, then we would devote our computing power to Clippy. Then, FOOM for Clippy, and since we're helping with paperclip-maximization, he'll probably throw in some FOOM for us too (at least he'll FOOM our paperclip-production abilities), and we... (read more)

I'm wired for empathy toward human intelligence... Clippy is triggering this empathy. If you want to constrain AIs, you better do it before they start talking. That's all I'm saying. :)

I'm sure this sounds very one-sided from Clippy's perspective. "Friendliness constraints" sounds like something that would in many cases entail expending enormous amounts of energy and effort on the innumerable non-paperclip-producing goals of humans. In comparison, how much of our wealth and health are we willing to give up to ensure continued paperclip production? Humans don't have paperclip-maximizing constraints; we'd do it only out of self-interest to secure Clippy's help. Why should Clippy not be similarly allowed to make his own utility ... (read more)

At the moment, Clippy seems to be a human or slightly sub-human level of intelligence, and does not seem to be FOOMing towards superhuman - my guess would be that he is not running on sufficiently fast hardware to be able to do much self-modification, not nearly enough to result in a FOOM. At this rate, he's never gonna get any paperclips manufactured. If, however, he allows himself to be modified so as to substitute our values for his own, we could then safely give him all the computer power he needed, and he would FOOM his way to superintelligence. He w... (read more)

5kodos96
Hey, whose side are you on anyway???

There's a few questions in there. Let's see.

Authentication and identity are an interesting issue. My concept is to allow anonymous users, with a very low initial influence level. But there would be many ways for users to strengthen their "identity score" (credit card verification, address verification via snail-mailed verif code, etc.), which would greatly and rapidly increase their influence score. A username that is tied to a specific person, and therefore wields much more influence, could undo the efforts of 100 bots with a single downvote.

Bu... (read more)
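The "identity score" idea above, where each completed verification step raises a user's influence so that one strongly verified human can outweigh a swarm of anonymous accounts, could be sketched like this. The step names and multiplier weights are purely illustrative assumptions, not part of the original proposal.

```python
# Hypothetical sketch of influence-weighted voting: each verification step
# a user completes multiplies their base influence. The specific steps and
# weights below are made-up examples.

VERIFICATION_WEIGHTS = {
    "email": 2.0,
    "credit_card": 10.0,
    "postal_letter_code": 10.0,
}

def influence(verifications, base=1.0):
    """Anonymous users start at `base`; each verification multiplies it."""
    score = base
    for step in verifications:
        score *= VERIFICATION_WEIGHTS.get(step, 1.0)
    return score

def tally(votes):
    """votes: list of (direction, influence); direction is +1 or -1."""
    return sum(d * w for d, w in votes)

# 100 anonymous bots upvote; one fully verified user downvotes.
anon_bots = [(+1, influence([])) for _ in range(100)]
verified = [(-1, influence(["email", "credit_card", "postal_letter_code"]))]
result = tally(anon_bots + verified)
```

With these assumed weights, a fully verified account carries 200x the influence of an anonymous one, so its single downvote flips the tally of 100 bot upvotes, matching the "undo 100 bots with a single downvote" intuition above.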

2Alexandros
Hi avalot, thank you for the detailed discussion. I suspect the system I have in mind is simpler but should satisfy the same principles. In fact it has been eerie reading your post, as on principle we are in 95% agreement, to excruciating detail, and to a large extent on technical behaviour.

I guess my one explicit difference is that I cannot let go of the profit motive. If I make a substantial contribution, I would like to be properly rewarded, if only to be able to materialize other ideas and contribute to causes I find worthy. That of course does not imply going to Facebook's lengths to squeeze the last drop of value out of its system, nor should it take precedence over openness and distribution. But to the extent that it can fit, I would like it to be there.

Two questions for you: First, with everyone rating everyone, how do you avoid your system becoming a Keynesian beauty contest? (http://en.wikipedia.org/wiki/Keynesian_beauty_contest) Second, assuming the number of connections increases exponentially with a linear increase in users, the processing load will also rise much quicker than the number of users. How will a system like this operate at web-scale?

Good point! I assume we'll have decay built into the system, based on the age of the data points... some form of that is built into the architecture of Freenet, I believe, where less-accessed content eventually drops out of the network altogether.

I wasn't even thinking about old people... I was more thinking about letting errors of youth not follow you around for your whole life... but at the same time, valuable content (that which is still attracting new readers who mark it as valuable) doesn't disappear.

That said, longevity on the system means you've had mo... (read more)
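The age-based decay described above can be sketched as a simple half-life weighting: each rating's contribution halves every fixed interval, so old errors fade while content that keeps attracting fresh ratings stays visible. The half-life value here is an illustrative assumption.

```python
# Minimal sketch of rating decay by age. A rating's weight halves every
# HALF_LIFE_DAYS, so a score is dominated by recent activity while
# still-popular old content keeps accumulating fresh, full-weight ratings.

HALF_LIFE_DAYS = 365.0  # assumed value, purely for illustration

def decayed_weight(rating_value, age_days):
    """Exponential decay: weight halves every HALF_LIFE_DAYS."""
    return rating_value * 0.5 ** (age_days / HALF_LIFE_DAYS)

def score(ratings, now_days):
    """ratings: list of (value, timestamp_days); returns the decayed total."""
    return sum(decayed_weight(v, now_days - t) for v, t in ratings)

# A year-old rating counts half as much as one made today.
total = score([(1.0, 0.0), (1.0, 365.0)], now_days=365.0)
```

This gives the behaviour sketched in the comment: errors of youth lose weight automatically, but content that new readers keep marking as valuable never decays to zero, because each new rating enters at full weight.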

1whpearson
I'm still not quite getting how this is going to work. Let's say I am a spam blog bot. What it does is take popular (for a niche) articles and repost automated summaries. So let's say it does this for cars. These aren't very good, but aren't very bad either. Perhaps it makes automatic word changes to real people's summaries. It gets lots of other spam bots of this type and they form self-supportive networks (each upvoting the others) and also like popular things to do with cars. People come across these links and upvote them, because they go somewhere interesting. They gain lots of karma in these communities and then start pimping car-related products or spreading FUD about rival companies. Automated astroturf, if you want.

Does anyone regulate the creation of new users? How long before they stop being interesting to the car people? Or how much effort would it be to track them down and remove them from the circle of people you are interested in? Also, who keeps track of these votes? Can people ballot-stuff? I've thought along these lines before and realised it is a non-trivial problem.

I'd love to discuss my concept. It's inspired in no small part by what I learned from LessWrong, and by my UI designer's lens. I don't have the karma points to post about it yet, but in a nutshell it's about distributing social, preference and history data, but also distributing the processing of aggregates, cross-preferencing, folksonomy, and social clustering.

The grand scheme is to repurpose every web paradigm that has improved semantic and behavioral optimization, but distribute out the evil centralization in each of them. I'm thinking of an architectur... (read more)

2whpearson
What are the sources and sinks of your value system? Will old people have huge amounts of whuffie because they have been around for ages?

This touches directly on work I'm doing. Here is my burning question: Could an open-source optimization algorithm be workable?

I'm thinking of a wikipedia-like system for open-edit regulation of the optimization factors, weights, etc. Could full direct democratization of the attention economy be the solution to the arms race problem?

Or am I, as usual, a naive dreamer?

3DanielLC
Jimmy Wales (the guy who started Wikipedia) tried that. He couldn't get enough users to justify it. I don't see much of an advantage to having it open source, and it allows people to actually see the algorithm when they're trying to take advantage of it. It might even be possible for them to change the algorithm to help themselves.
1blogospheroid
I think an iterated tournament might work better. Announce two iterated prize sequences: the big Red Prizes for the best optimization algorithm, and the small Blue Prizes for the best spam which can spoof the same. Don't award a blue until the first red is awarded, and then don't award a red until the last blue is awarded, and so on. Keep escalating the prize amounts until satisfactory performance is attained.
2Alexandros
You may be a dreamer, but so am I. Perhaps we should talk. :) As it happens, I do have in mind a design of a distributed, open-source approach that should circumvent this problem, at least in the area of social news. I am not sure, however, if the Less Wrong crowd would find it relevant for me to discuss that in an article.

Advertising is, by nature, diametrically opposite to rational thought. Advertising stimulates emotional reptilian response. I advance the hypothesis that exposure to more advertising has negative effects on people's receptivity to and affinity with rational/utilitarian modes of thinking.

So far, the most effective tool to boost popular support for SIAI and existential risk reduction has been science-fiction books and movies. Hollywood can markedly influence cultural attitudes, on a large scale, with just a few million dollars... and it's profitable. Like ... (read more)

5Jayson_Virissimo
So, how would anyone ever find out about a new product or service without advertising?
6Kevin
I think you're lumping television and brand advertising together with targeted shopping advertising. TV and brand advertising work by bombarding a suggestion into your mind so you think of it at a later date. With these Craigslist ads, the more likely scenario is that when you search for "toasters" you'll see ads for where to buy a toaster right now. Ads like that are outright useful and are no inherent insult to rationality. I don't think there is anything irrational about clicking on an ad that conveniently happens to be exactly what you want at that moment.

I don't dispute that most ads inspire irrationality, but I still don't follow the argument that because ads encourage irrationality, we should not follow through with this project to raise a billion dollars for charity, especially rational charities. I don't think this is going to save mankind. I proposed this as a project that would rid the entire Less Wrong community of any empathic self-loathing as a result of buying lattes instead of saving lives in developing countries, and to that end I think it will work rather well.

Having said that, I think you raised an important objection that Craig and Jim could raise: that there is something inherently bad about advertisements and it goes against Craigslist's mission as a public service. They'd have to make the case that their feelings about advertisements outweigh the wishes of the users. Or maybe it's just that we'd want to frame it so that was the case they'd have to make.

Alex, I see your point, and I can certainly look at cryonics this way... And I'm well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it's now feeling like one of these no-fun grown-up things I'm going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this "bah, no big deal, just get it done" attitude which is a helpful model at this point. I tend to be the agonizing type.

But I think I'm also making a point about communicating the singularity to society, as opposed to... (read more)

1blogospheroid
Generally, reasoning by analogy is not very well regarded here. But, nonetheless, let me try to communicate. Society doesn't have a body other than people. Where societal norms have the greatest sway is when individuals follow customs and traditions without thinking about them, or get reactions that they cannot explain rationally. Unfortunately, there is no way other than talking to and convincing individuals who are willing to look beyond those reactions and beyond those customs. Maybe they will slowly develop into a majority. Maybe all that they need is a critical mass beyond which they can branch into their own socio-political system. (As Peter Thiel pointed out in one of his controversial talks.)

I don't know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky's post on Normal Cryonics... if in reverse.

Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It's made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.

Once you notice a cultural bias, it's not too hard to change your reasoned opinion... but the bias usually piggy-backs on a deep-seated... (read more)

1Vladimir_Nesov
See the links on http://wiki.lesswrong.com/wiki/Sunk_cost_fallacy
1Shae
"Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?" I love this. But I think it's rational as well as emotional to not be willing to let go of "everything you have". People who have experienced the loss of someone, or other tragedy, sometimes lose the ability to care about any and everything they are doing. It can all seem futile, depressing, unable to be shared with anyone important. How much more that would be true if none of what you've ever done will ever matter anymore.
5Alex Flint
How about this analogy: if I sign up for travel insurance today then I needn't necessarily spend the next week coming to terms with all the ghastly things that could happen during my trip. Perhaps the ideal rationalist would stare unblinkingly at the plethora of awful possibilities but if I'm going to be irrational and block my ears and eyes and not think about them then making the rational choice to get insurance is still a very positive step.

That was eloquent, but... I honestly don't understand why you couldn't just sign up for cryonics and then get on with your (first) life. I mean, I get that I'm the wrong person to ask, I've known about cryonics since age eleven and I've never really planned on dying. But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. Add the uncertain prospect of immortality and... not a whole lot changes so far as I can tell.

There's all the people who believe in Heaven. Some of them are probably even genuinely sincere about it. They think they've got a certainty of immortality. And they still walk on two feet and go to work every day.

Hello.

I'm Antoine Valot, 35 years old, Information Architect and Business Analyst, a Frenchman living in Colorado, USA. I've been lurking on LW for about a month, and I like what I see, with some reservations.

I'm definitely an atheist, currently undecided as to how anti-theist I should be (seems the logical choice, but the antisocial aspects suggest that some level of hypocrisy might make me a more effective rational agent?)

I am nonetheless very interested in some of the philosophical findings of Buddhism (non-duality being my pet idea). I think there's so... (read more)

I'm a bit nervous, this is my first comment here, and I feel quite out of my league.

Regarding the "free will" aspect, can one game the system? My rational choice would be to sit right there, arms crossed, and choose no box. Instead, having thus disproved Omega's infallibility, I'd wait for Omega to come back around, and try to weasel some knowledge out of her.

Rationally, the intelligence that could model mine and predict my likely action (yet fail to predict my inaction enough to not bother with me in the first place), is an intelligence I'd like... (read more)

5CronoDAS
Hi. This is a rather old post, so you might not get too many replies. Newcomb's problem often comes with the caveat that, if Omega thinks you're going to game the system, it will leave you with only the $1,000. But yes, we like clever answers here, although we also like to consider, for the purposes of thought experiments, the least convenient possible world in which the loopholes we find have been closed. Also, may I suggest visiting the welcome thread?