Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."
I said: "I do!"
He paused a moment and then said: "Hmm. Yeah, so do I."
I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.
May I suggest "I'm tapping out", perhaps with a link to this very comment? It's a good line (and perhaps one way the dojo metaphor is valuable).
I think in this comment you did fine. Don't sweat it if the comment that signals "I'm stopping here" gets downvoted; don't try to avoid that.
In this comment, though, I think you are crossing the "mind reading" line, where you ascribe intent to someone else. Stop before posting comments like that.
Absent statistical evidence drawn from written and dated notes, you should hold it very plausible that your impression that you're good at it is due to cognitive bias. Key effects here include hindsight bias, the tendency to remember successes better than failures, the tendency to rewrite your memories after the fact so that you appear to have predicted the outcome, and the tendency to count a vague prediction as a success whichever of a thousand-and-one ways things turn out.
Startup idea:
We've all been waiting for the next big thing to come after Chatroulette, right? I think live video is going to be huge -- it's a whole new social platform.
So the idea is: Instant Audience. Pay $1, get a live video audience of 10 people for 1 minute. The value prop is attention.
The site probably consists of a big live video feed of the performer, and then 10 little video feeds for the audience. The audience members can't speak unless they're called on by the performer, and they can be "brought up on stage" as well.
For the performer, it's a chance to practice your speech / stand-up comedy routine / song, talk about yourself, ask people questions, lead a discussion, or limitless other possibilities (ok we are probably gonna have to deal with some suicides and jackers off).
For the audience, it's a free live YouTube. It's like going to the theater instead of watching TV, but you can still channel surf. It's a new kind of live entertainment with great audience participation.
Better yet, you can create value by holding some audience members to higher standards of behavior. There can be a reputation system, and maybe you can attend free performances to build up your ...
Interesting article on an Indian rationalist (not quite in the same vein as LessWrong-style rationalism, but a worthy cause nonetheless). Impressive display of 'putting your money where your mouth is':
Sceptic challenges guru to kill him live on TV
When a famous tantric guru boasted on television that he could kill another man using only his mystical powers, most viewers either gasped in awe or merely nodded unquestioningly. Sanal Edamaruku’s response was different. “Go on then — kill me,” he said.
I also rather liked this response:
When the guru’s initial efforts failed, he accused Mr Edamaruku of praying to gods to protect him. “No, I’m an atheist,” came the response.
H/T Hacker News.
What would be the simplest credible way for someone to demonstrate that they were smarter than you?
Just a thought about the Litany of Tarski - be very careful to recognize that the "not" is a logical negation. If the box contains not-a-diamond, your assumption will likely be that it's empty. The frog that jumps out when you open it will surprise you!
The mind falls easily into oppositional pairs of X and opposite-of-X (which isn't the same as the more comprehensive not-X), and once you create categorizations, you'll have a tendency to under-consider outcomes that don't fit either category.
Might as well move the macroscopic quantum superposition discussion here right away, or at least call attention to it, so we can talk about it in this thread.
If you had to tile the universe with something - something simple - what would you tile it with?
Here's a way to short-circuit a particular sort of head-banging argument.
Statements may seem simple, but they actually contain a bunch of presuppositions. One way an argument can go wrong is that A says something, B disagrees, A is mistaken about exactly what B is disagreeing with, and neither of them can figure out why the other is so pig-headed about something obvious.
I suggest that if there are several rounds of A and B saying the same things at each other, it's time for at least one of them to pull back and work on pinning down exactly what they're disagreeing about.
Survey question:
If someone asks you how to spell a certain word, does the word appear in your head as you're spelling it out for them, or does it seem to come out of your mouth automatically?
If it comes out automatically, would you describe yourself as being adept at language (always finding the right word to describe something, articulating your thoughts easily, etc.) or is it something you struggle with?
I tend to have trouble with words - it can take me a long time (minutes) to recall the proper word to describe something, and when speaking I frequently ...
If any aspiring rationalists would like to try and talk a Stage IV cancer patient into cryonics... good luck and godspeed. http://www.reddit.com/r/IAmA/comments/bj3l9/i_was_diagnosed_with_stage_iv_cancer_and_am/c0n1kin?context=3
Nature doesn't grade on a curve, but neither does it punish plagiarism. Is there some point at which someone who's excelled beyond their community would gain more by setting aside the direct pursuit of personal excellence in favor of spreading what they've already learned to one or more apprentices, then resuming the quest from a firmer foundation?
I really should probably think this out more clearly, but I've had an idea for a few days now that keeps leaving and coming back. So I'm going to throw the idea out here, and if it's too incoherent, I hope either someone gets where I'm going or I come back and see my mistake. At worst, it gets down-voted and I'm risking karma unless I delete it.
Okay, so the other day I was talking with a Christian friend who "agrees with micro-evolution but not macro-evolution." I'm assuming other people have heard this idea before. And I started to think about the idea...
Daniel Dennett and Linda LaScola have written a paper about five non-believing members of the Christian clergy. Teaser quote from one of the participants:
I think my way of being a Christian has many things in common with atheists as [Sam] Harris sees them. I am not willing to abandon the symbol ‘God’ in my understanding of the human and the universe. But my definition of God is very different from mainline Christian traditions yet it is within them. Just at the far left end of the bell shaped curve.
Don't know if this will help with cryonics or not, but it's interesting:
Induced suspended animation and reanimation in mammals (TED Talk by Mark Roth)
[Edited to fix broken link]
Monica Anderson: Anyone familiar with her work? She is apparently involved with AI in the SF Bay Area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent Slashdot post (as "technofix") that linked a paper (PDF WARNING) explaining her ideas, and also linked her introductory site and blog.
It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.
What's the best way to respond to someone who insists on advancing an argument that appears to be completely insane? For example, someone like David Icke who insists the world is being run by evil lizard people? Or your friend the professor who thinks his latest "breakthrough" is going to make him the next Einstein but, when you ask him what it is, it turns out to be nothing but gibberish, meaningless equations, and surface analogies? (My father, the professor, has a friend, also a professor, who's quickly becoming a crank on the order of the Tim...
I have a line of thinking that makes me less worried about unfriendly AI. The smarter an AI gets, the more it is able to follow its utility function. Where the utility function is simple or the AI is stupid, we have useful things like game opponents.
But as we give smarter AIs interesting 'real world' problems, the difference between what we asked for and what we want shows up more explicitly. Developers usually interpret this as the AI being stupid or broken, and patch over either the utility function or the reasoning it led to. These patches don't lead t...
I'm looking for a quote I saw on LW a while ago, about people who deny the existence of external reality. I think it was from Eliezer, and it was something like "You say nothing exists? Fine. I still want to know how the nothing works."
Anyone remember where that's from?
Hello. Do people here generally take the anthropic principle as strong evidence against a positive singularity? If we take it that in the future it would be good to have many happy people, using most of the available matter to make sure this happens, then a positive singularity would produce vastly many happy people. However, we are not any of those happy people. We're living in pre-singularity times, and this seems to be strong evidence that we're going to face a negative singularity.
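To make the worry concrete, here's a minimal sketch of the self-sampling arithmetic, with observer counts that are entirely made up for illustration:

```latex
% Self-Sampling Assumption: reason as if you were a random observer.
% Hypothetical counts: N_pre = 10^{11} observers ever live before the
% singularity; N_post = 10^{20} live after a positive one.
\[
P(\text{pre-singularity} \mid \text{positive})
  = \frac{N_{\text{pre}}}{N_{\text{pre}} + N_{\text{post}}}
  \approx 10^{-9},
\qquad
P(\text{pre-singularity} \mid \text{negative}) \approx 1.
\]
% Observing ourselves pre-singularity then multiplies the odds of a
% positive singularity by roughly 10^{-9}:
\[
\frac{P(\text{positive} \mid \text{pre})}{P(\text{negative} \mid \text{pre})}
  \approx 10^{-9} \times \frac{P(\text{positive})}{P(\text{negative})}.
\]
```

Whether this update is legitimate depends on which anthropic assumption you accept (the Self-Indication Assumption, for instance, largely cancels it), so the numbers above only sketch the argument's shape.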
This is pretty pathetic, at least if honestly reported. (A heavily reported study's claim to show harmful effects from high-fructose corn syrup in rats is based on ambiguous, irrelevant, or statistically insignificant experimental results.)
How do Bayesians look at formal proofs in formal specifications? Do they believe "100%" in them?
Is independent AI research likely to continue to be legal?
At this point, very few people take the risks seriously, but that may not continue forever.
This doesn't mean that it would be a good idea for the government to decide who may do AI research and with what precautions, just that it's a possibility.
If there's a plausible risk, is there anything specific SIAI and/or LessWrongers should be doing now, or is building general capacity by working to increase ability to argue and to live well (both the anti-akrasia work and luminosity) the best path?
First Clay Millennium Prize goes to Grigoriy Perelman
Tricycle has a page up called Hacking on Less Wrong which describes how to get your very own copy of Less Wrong running on your computer. (You can then invite all your housemates to register and then go mad with power when you realize you can ban/edit any of their comments/posts. Hypothetically, I mean. Ahem.)
I've updated it a bit based on my experience getting it to run on my machine. If I've written anything terribly wrong, someone let me know =)
Nanotech robots deliver gene therapy through blood
What Would You Do With 48 Cores? (essay contest)
Random observation: type in the first few letters of 'epistemic' and Google goes straight to suggesting 'epistemological anarchism'. It seems Google is right on board with helping SMBC further philosophical education.
Does anyone know which arguments have been made about the ETA of strong AI on the scale of "is it more likely to be 30, 100, or 300 years?"
Ben Goertzel: Creating Predictably Beneficial AGI
http://multiverseaccordingtoben.blogspot.com/2010/03/creating-predictably-beneficial-agi.html
Michael Arrington: "It’s time for a centralized, well organized place for anonymous mass defamation on the Internet. Scary? Yes. But it’s coming nonetheless."
http://techcrunch.com/2010/03/28/reputation-is-dead-its-time-to-overlook-our-indiscretions/
So, while I was in the shower, an idea for an FAI came into my head.
My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs. So, I figured that if you simply told two (or more) AGIs to fight over one database of information, the most rational AGI would be able to set the database to contain the correct information. (Another intuition of mine tells me that FAI is a problem of rationality: once ...
This is what non-reductionism looks like:
In a certain world, it's possible to build stuff. For example, you can build a ship. You build it out of some ingredients, such as wood, and by doing a bit of work. The thing is, though, there's only one general method that can possibly be used to build a ship, and there are some things you can do that are useful only for building a ship. You have some freedom within this method: for example, you can give your ship 18 masts if you want to. However, the way you build the ship has literally nothing to do with the end res...
Let's say Omega opens a consulting service, but, for whatever reason, has sharply limited bandwidth, and insists that the order in which questions are presented be determined by some sort of bidding process. What questions would you ask, and how much would you be willing to pay per byte for the combined question and response?
Fictional representation of an artificial intelligence which does not value self-preservation, and the logical consequences thereof.
This will be completely familiar to most of us here, but "What Does a Robot Want?" seems to rederive a few of Eliezer's comments about FAI and UFAI in a very readable way - particularly those from Points of Departure. (Which, for some reason, doesn't seem to be included in any indexed sequence.)
The author mentions using these ideas in his novel, Free Radical - I can attest to this, having enjoyed it partly for that reason.
People gathered here mostly assume that evolution is slow and stupid, no match for intelligence at all. That humans, let alone a superintelligence, are several orders of magnitude smarter than the process which created us over the last several billion years.
Well, despite many fancy mathematical theories of packing, some of the best results have come from so-called digital evolution, where the only built-in knowledge is that "overlapping is bad and a smaller frame is good". Everything else is random change and nonrandom selection.
Every solution previously developed by intelligence evolves again, stupidly and fast, from scratch here: http://critticall.com/SQU_cir.html
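To illustrate the recipe, here's a minimal toy version of such a mutation-and-selection loop for packing unit circles into a small square (my own sketch, not Critticall's actual algorithm; all parameters are made up):

```python
import math
import random

# Toy "digital evolution" packer: fit N unit circles into the smallest
# enclosing square, knowing only that "overlapping is bad and a smaller
# frame is good". Everything else is random change + nonrandom selection.

N = 5            # number of unit circles (radius 1)
STEPS = 200_000  # mutation/selection rounds
random.seed(0)   # reproducibility

def frame_side(centers):
    """Side of the smallest axis-aligned square containing all circles."""
    xs = [x for x, _ in centers]
    ys = [y for _, y in centers]
    return max(max(xs) - min(xs), max(ys) - min(ys)) + 2  # +2 for the radii

def overlap(centers):
    """Total pairwise overlap depth; zero iff the packing is valid."""
    total = 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            d = math.dist(centers[i], centers[j])
            if d < 2:  # unit circles overlap when centers are closer than 2
                total += 2 - d
    return total

def badness(centers):
    # The only knowledge we build in: overlap is bad, a smaller frame is good.
    return frame_side(centers) + 100 * overlap(centers)

# Random starting arrangement.
best = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]
best_score = badness(best)

for step in range(STEPS):
    # Random change: nudge one circle (smaller nudges as time goes on).
    sigma = 0.5 * (1 - step / STEPS) + 0.01
    cand = list(best)
    i = random.randrange(N)
    cand[i] = (cand[i][0] + random.gauss(0, sigma),
               cand[i][1] + random.gauss(0, sigma))
    # Nonrandom selection: keep the mutant only if it's no worse.
    score = badness(cand)
    if score <= best_score:
        best, best_score = cand, score

print(f"frame side: {frame_side(best):.4f}, overlap: {overlap(best):.6f}")
```

Run it and the frame side creeps down toward the known optimum for five unit circles, with no packing theory anywhere in the code.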
Does anyone have any spare money on Intrade? The new Osama Bin Laden contract is coming out and I would like to buy some. If anyone has some money on Intrade, I would pay a 10% premium.
Also, is there anyone here who thinks the Intrade Osama contracts are priced too high? http://www.intrade.com/jsp/intrade/contractSearch/index.jsp?query=Osama+Bin+Laden+Conclusion
Here's a puzzle that involves time travel:
Suppose you have just built a machine that allows you to see one day into the future. Suppose also that you are firmly committed to realizing the particular future that the machine will show you. So if you see that the lights in your workshop are on tomorrow, you will make sure to leave them on; if they are off, you will make sure to leave them off. If you find the furniture rearranged, you will rearrange the furniture. If there is a cow in your workshop, you will spend the next 24 hours getting a cow into your wor...
Can't answer until I know the laws of time travel.
No, seriously. Is the resulting universe randomly selected from all possible self-consistent ones? By what weighting? Does the resulting universe look like the result of iteration until a stable point is reached? And what about quantum branching?
Considering that all I know of causality and reality calls for non-circular causal graphs, I do feel a bit of justification in refusing to just hand out an answer.
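On the "iteration until a stable point" reading, the puzzle becomes a fixed-point search. Here's a toy model (my own hypothetical framing, with the world state reduced to a single boolean for the workshop lights):

```python
# Toy model of a self-consistent timeline as a fixed point.
# World state: True = lights on, False = lights off.

def committed(prophecy):
    """You vow to realize exactly what the machine shows you."""
    return prophecy          # identity rule: every state is self-consistent

def contrarian(prophecy):
    """A saboteur vows to do the opposite of whatever is shown."""
    return not prophecy      # no fixed point: no consistent timeline exists

def find_consistent_state(rule, start, max_iters=100):
    state = start
    for _ in range(max_iters):
        nxt = rule(state)
        if nxt == state:
            return state     # prophecy and outcome agree: stable point
        state = nxt
    return None              # oscillates forever: a grandfather-style paradox

print(find_consistent_state(committed, True))   # True  (any start works)
print(find_consistent_state(contrarian, True))  # None  (paradox)
```

Which fixed point the universe "chooses", and with what weighting when several exist, is exactly the unanswered laws-of-time-travel question above.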
So healthcare passed. I guess that means the US goes bankrupt a bit sooner than I'd expected. Is that a good or a bad thing?
Should you get a presidential physical?
http://www.cnn.com/2010/HEALTH/03/18/executive.physicals/index.html
On a serious note, what is your (the reader's) favorite argument against a forum?
("I voted you down because this is not a meta thread." is also a valid response.)
The previous open thread has now exceeded 300 comments – new Open Thread posts may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.