I don't think you understand acausal trade.
For what it's worth, I don't think anybody understands acausal trade. And I don't claim to understand it either.
For what it's worth, I don't think anybody understands acausal trade.
It does get a tad tricky when combined with things like logical uncertainty and potentially multiple universes.
Precommitment isn't meaningless here just because we're talking about acausal trade. What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was. As long as it is irreversibly in the state "AI that will simulate and torture people who don't give in to blackmail" while your decision whether to give in to blackmail is still inside a box that it has not yet opened, that serves as a precommitment.
(If you are thinking "the AI is already in or not in the world where the human refuses to submit to blackmail, so the AI's precommitment cannot affect the measure of such worlds", it can "affect" that measure acausally, the same as deciding whether to one-box or two-box in Newcomb can "affect" the contents of the boxes).
If you could precommit to not giving in to blackmail before you analyze what the AI's precommitment would be, you could escape this doom; but as a mere human, you are probably not capable of binding your future post-analysis self this way. (Your human fallibility can, of course, precommit you by making you into an imperfect thinker who never gives in to acausal blackmail because he can't or won't analyze the Basilisk to its logical conclusion.)
Precommitment isn't meaningless here just because we're talking about acausal trade.
Except in special cases which do not apply here, yes it is meaningless. I don't think you understand acausal trade. (Not your fault. The posts containing the requisite information were suppressed.)
What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was.
The timing of this kind of decision is irrelevant.
we can also reasonably know that since we refuse, it doesn't get built in the first place.
The key is that the AI precommits to building it whether we refuse or not.
If we actually do refuse, this precommitment ends up being bad for it, since it builds it without any gain. However, this precommitment, by preventing us from saying "if we refuse, it doesn't get built", also decreases the measure of worlds where it builds it without gaining.
The key is that the AI precommits to building it whether we refuse or not.
The 'it' that bogus is referring to is the torture-AI itself. You cannot precommit to things until you exist, no matter your acausal reasoning powers.
It's such a plausible conclusion that it makes sense to draw, even if it turns out to be mistaken. Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.
The best we can say is that it is a sufficiently predictable conclusion. Had the author not underestimated inferential distance, he could easily have pre-empted your accusation with an additional word or two.
Nevertheless, it is still a naive (and incorrect) conclusion to draw based on the available evidence. Familiarity with human psychology (in general), internet forum arguing (in general), XiXiDu in particular, or even a complete read of the opening thread would suggest that the advice you dismiss is clearly, obviously and overwhelmingly good advice for XiXiDu. You have also completely misread the style of dominance manoeuvre Anatoly was employing. Petty sniping of the kind you suggest wouldn't naturally fit with the more straightforwardly aggressive, condescending style of the comment. I.e., even when interpreting Anatoly's motives in the worst possible light, your interpretation is still sloppy.
Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.
'We' need to go on the expected consequences of our choices. Your choice was to accuse someone of questionable motives and use that as a premise to give advice for how to handle a serious mental health issue. You should expect that your behaviour will be negatively received by those who:
I can't read minds. So if you say you meant it, I will concede that you meant it.
I hope you understand, however, how it sounds exactly like what someone would say when their primary motivation is to shut an opponent up. Giving health advice out of the blue on a subject such as this is very unusual.
I can't read minds
Yet you spoke with the assumption that you could, and many observers do not share your mind-reading conclusions. Hopefully, in the future when you choose to do that, you will see why you get downvotes. It's a rather predictable outcome.
First, consider just going away. It may be best for your physical and mental health to stay away from LW and LW-related topics.
XiXiDu should discount this suggestion because it seems to be motivated reasoning.
The advice is good enough (and generalizable enough) that the correlation to the speaker's motives is more likely to be coincidental than causal.
Addicts tend to be hurt by exposing themselves to their addiction triggers.
Breaking the vicious cycle
I endorse this suggestion.
Don't Feed The Trolls!
The Singularity Institute is in the process of publishing Eliezer Yudkowsky’s Sequences of rationality posts as an electronic book. The Sequences are made up of multiple hundreds of posts. These are being downloaded and converted to LaTeX programmatically, and that’s where the human tasks begin: proofreading the converted posts for conversion errors.
The recent document publishing efforts at SIAI would not have been possible without the assistance of dedicated volunteers. This new project is the perfect opportunity to help out Less Wrong while giving you an excuse to catch up on (or revisit) your reading of some foundational rational thinking material. As an added bonus, every post reviewed will save the world with 3.5*epsilon probability.
We need volunteers who are willing to read some sequence posts and have an eye for detail. Anyone interested in contributing should contact me at cameron.taylor [at] singinst [dot] org.
For those more interested in academic papers we also have regular publications (and re-publications) that need proofreading and editing before they are released.
More literally, it is a journey to making the dots of the 'i's line up just right with the 'f's and ensuring that the crossing of the 'T' meets up neatly with the tip of the 'h' - all without breaking text searching or copy and paste.
Now, as we all know, science isn't just about little things like peer review and double-blind placebo-controlled studies. Far more important is presenting your work in accordance with the grand traditions of scientific publication - all while ensuring you flatter all the right people for their sometimes obsolete and possibly only slightly relevant past works. Of course you must do this all according to standard citation formulae developed a century or two ago, back when the city in which a text document was published was somehow a useful piece of information.
Some may consider people like Galileo and Bacon to be the most influential figures in science, but the man who made the greatest contribution to the way humanity seeks and disseminates knowledge is of course Donald Knuth - the man who took a decade off writing his multi-volume magnum opus [The Art of Computer Programming](http://en.wikipedia.org/wiki/The_Art_of_Computer_Programming) to create TeX, the foundation of LaTeX, without which science as we know it would be unrecognizable. These days, presenting academic publications without using LaTeX may be nearly as uncouth and banal as writing about your research in the first person rather than the passive voice!
The above cynicism is largely sincere and only a trifle exaggerated. Yet at the same time I acknowledge that there is much value to be had in wearing a uniform and the time for lonely dissent is not on matters as trivial as presentation. The overhead of presenting work in a form that other academics are willing to accept is comparatively minor and the payoffs significant.
One of the many initiatives lukeprog has set in motion now that he is organizing things over at SingInst is the porting of all of SIAI's past publications from various ad hoc formats to LaTeX with a standard publication template. You can see an early example of the new format here.
Unfortunately, Wei_Dai encountered a problem. In the first presentation of the converted document, copying and pasting "The" would give something like "Ļe" and copying "fi" would give "ŀ". The problem is with the implementation of ligatures. Back when typesetting was done manually - I can only imagine using a whole bunch of little metal stamp-like things that could be plugged into the right places - the typesetters had an extra collection of pseudo-letters to use instead of combinations like "fi", "ffi" and "Th". The reason is that those particular combinations just don't look good if they are placed together the same way that you would place other letters. You wind up either having them too far apart or having parts of them overlap in a way that isn't particularly neat.
In the font SingInst uses, the non-ligature versions of 'f' and 'i' combine with the dot of the 'i' only partially overlapping the 'f', which somehow makes it jump out more easily to the reader. The way this is solved with the ligatures is actually to increase the degree of overlap so that the 'f' smoothly blends into the 'i'. Someone with a far more highly honed aesthetic sense than mine concluded that this is the best way to present English letters, and it looks fairly good to me, so I'll take their word for it.
The problem is that while ligatures are easy for humans to read, "Notepad", "Word" and "Firefox" aren't nearly as smart. And unfortunately there isn't a consistent standard between fonts for which ligature means what, so we end up with all sorts of random mess if we try to copy and paste from a ligature-riddled document into our editor of choice. This left me with rather a lot of work to do while I was generating LaTeX files from those of the old SingInst publications that were only available in PDF form, and that isn't a task I would wish on all the future consumers of SingInst literature.
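To make the copy-and-paste failure concrete, here is a small Python sketch using the standard Unicode ligature codepoints. (Note this only covers ligatures that have standard codepoints, like "fi" and "ffi"; a "Th" ligature has no Unicode codepoint and is typically font-specific, which is presumably why copying "The" produced an arbitrary character like "Ļ".)

```python
# Ligature glyphs are distinct Unicode codepoints, not the letter
# sequences they visually represent - which is why naive text search
# and copy-paste break on them.
import unicodedata

lig_fi = "\ufb01"   # the single-character 'fi' ligature
lig_ffi = "\ufb03"  # the single-character 'ffi' ligature

# A plain substring search for "fi" fails against the ligature form:
assert "fi" not in lig_fi + "nd"   # looks like "find" but isn't

# Compatibility normalization (NFKC) decomposes the ligatures back
# into their constituent letters, recovering searchable text:
assert unicodedata.normalize("NFKC", lig_fi) == "fi"
assert unicodedata.normalize("NFKC", lig_ffi) == "ffi"
```

This is essentially what a smarter editor or PDF reader does on your behalf when the document gives it enough information to work with.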
Fortunately, the PDF format and LaTeX are both advanced enough to handle making the visible text use the ligature characters while keeping the original text available for easy copying and pasting by the interested reader. This involves something called a 'cmap': a mapping from an input encoding to the output encoding. With that cmap embedded in the PDF file, any fully featured PDF reader is able to take the pretty text, strip apart the ligatures and figure out what they originally were.
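For pdflatex documents, embedding such a mapping can be as simple as loading the 'cmap' package early in the preamble. A minimal sketch (assuming a T1-encoded Latin font; load cmap before fontenc or it has nothing to hook into):

```latex
\documentclass{article}
\usepackage{cmap}          % embed ToUnicode CMaps in the output PDF
\usepackage[T1]{fontenc}   % T1-encoded fonts, with fi/ffi ligatures
\begin{document}
The final offer is efficient. % ligatures now copy out as plain letters
\end{document}
```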
Why then is Wei unable to copy our Th's and fi's? I haven't the slightest idea. My research suggests that the xelatex distribution we were using should just work and handle this sort of thing. So confident is it in managing such mappings that it outright rejects compatibility with the 'cmap' package, which could be used with the older 'pdflatex' compiler to handle this sort of task.
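For what it's worth, one avenue that may be relevant (an assumption on my part, not something I verified against the exact distribution we were using) is XeTeX's \XeTeXgenerateactualtext primitive, available in more recent XeTeX releases, which asks the driver to wrap typeset glyphs in /ActualText spans carrying the original characters:

```latex
\documentclass{article}
\XeTeXgenerateactualtext=1  % embed /ActualText so ligatures copy correctly
\usepackage{fontspec}
\setmainfont{Latin Modern Roman}
\begin{document}
The final offer is efficient.
\end{document}
```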
An analysis could be done on what the optimal problem solving strategy would have been at any point in that process. Among other things I would note that rather early on in the process I decided that the expected value of continuing to attack the problem was rather low - so I stopped billing Luke for the time. But since I really don't like being bested by a challenge I went ahead and did it anyway. Much frustration was involved but in this case I was rewarded with a large boost of personal satisfaction and with SingInst publications that are an iota or two more beautiful!
I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one on cosmic colonisation, one on x-risks and AI.
I'm impressed. (And will look them up when I get a chance.)