Check out "Story of Your Life" by the same author.
Ur'f znqr nyvraf jub jbhyq cebonoyl bcrengr ol GQG, zl cuvybfbcuvpny dhvooyrf jvgu GQG abgjvgufgnaqvat.
Hmm, I just read that story before checking your spoiler, and it was interesting despite the author's poor grasp of the physics he tried to explain. A light ray going from point A to point B is not taking the shortest path (measured in time) because it wants to reach B; point B is merely a point on the geodesic curve the light ray is currently travelling along.
In other words, these light rays are taking the least time to reach the points they pass without intending to reach them, the points are just in the way.
That said, thanks for the recommendation! This story was still pretty good.
V qvfnterr gung gurfr nyvraf ner sbyybjvat GQG (be nal bgure qrpvfvba gurbel sbe gung znggre), fvapr gurl ner nyjnlf npgvat va n cerqrgrezvarq znaare naq arire npghnyyl znxr nal qrpvfvbaf. Gur jubyr pbaprcg bs n qrpvfvba gurbel jbhyq zrnavatyrff gb gurz.
What are your philosophical quibbles with TDT, if I may ask?
I agree with your rot13. I guess it mostly just seemed related enough to be worth mentioning.
What are your philosophical quibbles with TDT, if I may ask?
A bunch of inferences which arise from the following statement: "The supposition that an idealized rational agent's mind interacts with the universe in any way other than via the actions it chooses to carry out contains logical paradoxes."
I'm not confident in the opinion; it just represents my current state of understanding. When I've fleshed it out better in my head I will write it up and display it for criticism, unless I realize it is wrong during the intervening time (which is quite likely). One potential consequence is that TDT might ultimately be impossible to fully formalize without paradox via self-reference. The conclusion is that CDT is correct, as long as you follow the no-mind-reading rule. I reconstruct Newcomb's problem and similar problems in such a way that the problem is similar but we aren't reading the agent's mind, and I seem to always arrive at winning answers.
I'll have to reread before I can make a comment specific to this story. But I found the collection as a whole (Stories of Your Life and Others) incredibly stimulating. I don't think I've ever seen so many really original ideas between two covers.
Man, the Babylon story and the Arab world story were both incredible. Excellent worldbuilding, passing off complex ideas in Buffy-speak levels of understandability, with scattered crunchy genius bonuses.
I liked this; it was excellent. It possibly even conveys the idea of a sufficiently intelligent entity deriving complicated and useful results from little information, implementing superior evidence-gathering and processing to win, and possibly having sapient emotions.
Thanks, nice story.
My reaction was somewhat the opposite of what the others described: I thought the beginning was a somewhat generic and implausible brand of superintelligence porn, but the end was cool. Mostly I enjoyed the way a "conversation" and battle between superintelligences was depicted; the attacks and countermeasures in particular.
Oooh, I read this and...
Nf hfhny, V ybir ernqvat Puvnat. (Vagebqhprq gb uvz guebhtu "Uryy vf gur Nofrapr bs Tbq" juvpu vf rkpryyrag). Ohg rira gubhtu V jnf snfpvangrq ol guvf fgbel nf vg hasbyqrq, V sryg purngrq ol gur pyvznk. Vg jnf whfg fb sehfgengvat gb unir gur pbaarpgvba orgjrra gjb pyrire crbcyr (znavchyngvat gur znexrgf gb fraq n zrffntr! fdhrr!) or fb crggl naq fznyy. Creuncf gur cbvag jnf gung vagryyvtrapr nhtzragngvba vf begubtbany gb rguvpny nqinapr, ohg V jnfa'g pbaivaprq gung gurfr gjb crbcyr jrer fb hacyrnfnag gb ortva jvgu, fb gur jnfgr enaxyrq.
I see it as an example of the kind of story where the author has a really cool idea, but forces a pointless conflict onto it so that there will be a plot.
I would have liked to see the story's end without the second AI (augmented individual). However, I did like the story as it was. The issue I found with it was that their conflict of values was artificial. Human value is more complex than what was depicted (aesthetic hedonism(?) vs. utilitarianism), and unless the author had some thesis that such an augmented human would simplify their values, I would have enjoyed seeing them cooperate to a better end, for Earth and for the protagonist. Their goals did not conflict in any way (unless the protagonist was a paperclipper for intelligence). Through cooperation they could have achieved a result of greater value: a faster utopia for Reynolds, an isolated echo chamber for the protagonist, and possibly some form of society of superintelligences.
I agree that the conflict was implausible, but then the magnitude and speed of growth of the main character's intelligence was already magical enough that I'd already put the whole thing into the "stories that should be judged based on the aesthetic, not anything remotely resembling plausibility" category.
I quite like Chiang myself. A few authors like him, Miéville, and Egan share a quality that I can't pin down but really like. Possibly it's the linguistics, good worldbuilding, and rarely having their characters be inexplicable idiots.
They should have understood the concept of love.
The superpowers are fun, but pretty implausible. I think a lot of the fun is because it's out of the mainstream. We have so many stories where being crazy-superpower-smart just means making fancy gadgets, or developing an inflated ego and pulling off one successful scheme before perishing at the hands of the hero.
After reading this story I spent about 30 seconds worrying that my iPad was broken because the display was now tinted pink. Even a restart didn't fix it. Then I realized.
Never seen this before; I enjoyed it.
Tangent: I've always had a strong appreciation for stories with smart protagonists - even if the actual actions taken by the protagonists are, on second thought, not quite the genius strategic moves the author intended them to be. I think that at a fundamental level this is because reading such stories requires more or less putting yourself into the shoes of a superintelligence just to judge whether what they are doing is optimal. Then, after the story finishes, when you go back to coding, reading, or playing music, you still have the lingering thought patterns of a being which is, in some stories, many times more intelligent than you. It's a very useful state to be in, but for some reason it becomes harder and harder to sustain once I stop reading books like that for a while. Does anyone else experience something like this?
Yeah. That's one nice thing about Eliezer's fiction: when he writes a smart character, he actually tries to come up with smart decisions for them to make. Though I guess it's easier to have the character pull the solution out of a hat if you designed the puzzle yourself.
I read it recently. I liked it overall but found the ending a bit strange/unsatisfying. Did anyone else have that experience?
What's unsatisfying to me is when I can find holes in a strategy presumably concocted by a superintelligence.
I thought the obvious solution to the ending was gur gjb bs gurz jbexvat gbtrgure, fvapr gurl frrz gb or trggvat rkcbaragvny orarsvgf sebz pbbcrengvba vs gurl jbex unys gurve gvzr ba gur bguref cevbevgl vg jbhyq fgvyy or n arg tnva. Nyfb, gur onfvyvfx fglyr oenva fuhgqbja jbeq jnf pbzcyrgryl bhg bs gur oyhr ng gur raq.
http://www.infinityplus.co.uk/stories/under.htm?2. 15-30 min read time, rated "pretty good" by me.
There are a couple of interesting features of this story that I would like to discuss - but I don't want to introduce any spoilers, so I'll just leave this here for now.