Comment author: Gurkenglas 12 June 2016 12:44:18AM *  0 points [-]

Would this agent be able to reason about off switches? Imagine an AI getting out, reading this paper on the internet, and deciding that it should kill all humans before they realize what's happening, just in case they installed an off switch it cannot know about. Or perhaps put them into lotus eater machines, in case they installed a dead man's switch.

Comment author: Gurkenglas 07 June 2016 08:22:59PM *  5 points [-]
Comment author: MrMind 24 May 2016 01:11:38PM 5 points [-]

Following the usual monthly linkfest on SSC, I stumbled upon an interesting paper by Scott Aaronson.
Basically, he and Adam Yedidia created a Turing machine that ZFC can neither prove to halt nor prove to run forever (it does run forever, assuming a consistent superset of that theory).
It was already known from Chaitin's incompleteness theorem that every formal system has a complexity limit beyond which it cannot prove or disprove certain assertions. The interesting, perhaps surprising, part of the result is that said Turing machine has 'only' 7918 states, that is, a state index less than two bytes long.
This small complexity is already sufficient to evade the grasp of ZFC.
You can easily slogan-ize this result by saying that BB(7918) (the 7918th Busy Beaver number) is uncomputable (whispering immediately afterward "... by ZFC").
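
The construction in the paper can be sketched abstractly: build a machine that enumerates all ZFC proofs and halts iff one of them derives a contradiction. If ZFC is consistent, Gödel's second incompleteness theorem says ZFC cannot prove that this search runs forever. A toy Python sketch; both arguments below are hypothetical placeholders, not real proof machinery:

```python
# Toy sketch of the idea behind the Aaronson-Yedidia machine:
# search every proof of a theory and halt iff one of them derives
# a contradiction. For ZFC, proving this loop runs forever would
# prove Con(ZFC), which ZFC cannot do (if consistent).

def proof_searcher(enumerate_proofs, proves_contradiction):
    """Return the first enumerated proof of a contradiction (halting),
    or loop forever if the theory never yields one."""
    for proof in enumerate_proofs():
        if proves_contradiction(proof):
            return proof  # the machine halts: inconsistency found
    # unreachable when the enumeration is infinite
```

With a toy "theory" whose third statement counts as a contradiction, the search halts immediately; with a consistent one it never returns, and no finite observation distinguishes the two cases.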

Comment author: Gurkenglas 27 May 2016 01:37:34AM 0 points [-]

Huh. I expected the smallest number of states for a TM of indeterminate halting status to be, like, about 30. Consider how quickly BB diverges, after all.
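
For scale, the small Busy Beaver values are easy to pin down by brute force. A sketch for n = 2, assuming the usual convention that the halting transition still writes, moves, and counts as a step; enumerating all 2-state, 2-symbol machines:

```python
from itertools import product

def run(tm, cap):
    """Simulate a 2-state, 2-symbol TM on a blank tape; return the step
    count if it halts within `cap` steps, else None. `tm` maps
    (state, symbol) -> (write, move, next_state); next_state -1 = halt."""
    tape, pos, state = {}, 0, 0
    for step in range(1, cap + 1):
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == -1:
            return step
        state = nxt
    return None

def busy_beaver_steps(cap=50):
    """Max halting step count over all 2-state machines (brute force)."""
    actions = list(product((0, 1), (-1, 1), (-1, 0, 1)))  # write, move, next
    keys = [(0, 0), (0, 1), (1, 0), (1, 1)]
    best = 0
    for rules in product(actions, repeat=4):
        steps = run(dict(zip(keys, rules)), cap)
        if steps is not None:
            best = max(best, steps)
    return best
```

Brute force over the 12^4 = 20,736 machines finds a maximum of 6 steps, the known value of S(2); the same approach already becomes hopeless well before n = 30, which is part of why 7918 is hard to improve on by search.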

Comment author: Gurkenglas 12 April 2016 12:02:23AM 1 point [-]
Comment author: Gurkenglas 05 April 2016 07:35:35AM *  3 points [-]

Go for it. If we listened to cranks more, we could have finished that Tower of Babel.

Comment author: Elo 04 April 2016 08:46:23AM *  2 points [-]

The user account "Lamp" has been banned for being Eugine_Nier. This is an update in case anyone was wondering.

So far his accounts have been:

  • Eugine_Nier
  • Azazoth123
  • The_Lion
  • The_Lion2
  • Old_Gold
  • Lamp

(that I know of, I think there were more in between too that I forgot.)

If I could send this guy a message, it would be this: You are quite literally wasting our time. And by "our" I mean the moderators and the people who could be spending their time improving the place, coding and implementing a better one; instead they are spending their time getting rid of you over and over. DON'T COME BACK. You are literally killing LW.

And that's not even counting the community's time, the time of the people you debate with, or the time of anyone who reads this post. That time adds up too. Seriously.

Comment author: Gurkenglas 04 April 2016 10:37:09AM *  1 point [-]

Would they have used their time improving LW's code? I feel like the problems it has could be solved with far less programmer-time than has already been lost to LW not being improved, but nobody's doing it because of procrastination / it isn't fun / akrasia / ugh fields.

Comment author: moridinamael 11 March 2016 03:05:42PM *  4 points [-]

Almost any game that their AI can play against itself is probably going to work, except stuff like Pictionary, where it's really important how a human, specifically, is going to interpret something.

I know a little bit about training neural networks, and I think it would be plausible to train one on a corpus of well-played StarCraft games to give it an initial sense of what it's supposed to do, and then, having achieved that, let it play against itself a million times. But I don't think there's any need to let it watch how humans play. If it plays enough games against itself, it will internalize a perfectly sufficient sense of "the metagame".
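
The "play against itself a million times" loop can be illustrated on a toy game. This is a hedged sketch only: tabular win/loss counts on a tiny Nim variant standing in for the StarCraft setup, with every name and parameter here made up for illustration:

```python
import random

def self_play_nim(heap=10, n_games=5000, eps=0.2, seed=0):
    """Toy stand-in for 'learn by playing against itself': Nim where a
    move removes 1 or 2 stones and taking the last stone wins. The
    'policy' is just win/loss counts per (stones_left, move), played
    epsilon-greedily by both sides. Not a real RTS training pipeline."""
    rng = random.Random(seed)
    stats = {}  # (stones_left, move) -> (wins, plays)

    def rate(n, m):
        w, p = stats.get((n, m), (0, 0))
        return w / p if p else 0.0

    def pick(n):
        moves = [m for m in (1, 2) if m <= n]
        if rng.random() < eps:
            return rng.choice(moves)      # explore
        return max(moves, key=lambda m: rate(n, m))  # exploit

    for _ in range(n_games):
        n, player, history = heap, 0, []
        while n > 0:
            move = pick(n)
            history.append((player, n, move))
            n -= move
            winner = player  # whoever empties the heap last wins
            player = 1 - player
        for who, stones, move in history:  # credit every move in the game
            w, p = stats.get((stones, move), (0, 0))
            stats[(stones, move)] = (w + (who == winner), p + 1)
    return stats
```

The same structure (current policy plays both sides, finished games update the policy) is what scales up, with the count table replaced by a network.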

If we're talking about AI in RTS games, I've always dreamed of the day when I can "give orders" in an RTS and have the units carry the orders out in a relatively common-sense way instead of needing to be micromanaged down to the level of who they're individually shooting at.

Comment author: Gurkenglas 12 March 2016 03:55:06PM *  0 points [-]

It could become better than people at playing Pictionary: drawing images that are most likely to be correctly recognized, rather than translating the model in its head into a picture the way a human does, and analyzing which models are most likely to produce a given picture, rather than translating the picture into a model in its head the way a human does. Unless you mean that playing against itself would make it diverge into its own language of pictures.

Although it might optimize in a direction that doesn't follow the spirit of the game, analogous to writing out the name of its task.

Actually, that could be interesting: could it invent a language that is maximally efficient at communicating concepts?
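
For a fixed distribution over concepts, a crude version of "maximally efficient at communicating concepts" is already solved by entropy coding: frequent concepts get short codewords, and Huffman coding is optimal among prefix-free binary codes. A minimal sketch (the symbol frequencies are made up for illustration):

```python
import heapq

def huffman_codes(freqs):
    """Build an optimal prefix-free binary code for `freqs`, a dict
    symbol -> weight. Frequent symbols end up with short codewords."""
    # Each heap entry: (total_weight, tiebreak, {symbol: partial_code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two lightest subtrees...
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))  # ...get merged
        tick += 1
    return heap[0][2]
```

Shannon's source-coding theorem bounds any such language from below by the entropy of the concept distribution, so "maximally efficient" has a precise answer in this narrow sense; the open part is efficient codes for structured, compositional meanings rather than a fixed symbol set.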

To your last one, you might enjoy a MOBA where individual players have only information about stuff in their line of sight, but there's an extra player whose job it is to see everything and give "orders". I think there was one like that...

Comment author: Gurkenglas 12 March 2016 03:45:50PM *  7 points [-]
Comment author: Gurkenglas 23 February 2016 06:33:20PM 0 points [-]

Cue it reading Superintelligence and having an idea.

Comment author: Manfred 14 February 2016 12:59:26PM *  1 point [-]

A box that runs all possible turing machines may contain simulations of every finite intelligence, but in terms of actually interacting with the world it's going to be slightly less effective than a rock. You could probably fix this by doing something like approximate AIXI, but even if it is possible to evade thermodynamics, all of this takes infinite information storage, which seems even less likely.
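
For concreteness, "a box that runs all possible Turing machines" is implementable by dovetailing: at stage n, start machine n and then run one step of every machine started so far, so every machine gets unbounded time. A toy sketch, with each "machine" modeled as a never-ending Python generator:

```python
def dovetail(program):
    """Fairly interleave programs 0, 1, 2, ...: at stage n, start
    program(n) and then run one step of every program started so far.
    `program` maps an index to a never-ending generator, a stand-in
    for a (possibly non-halting) Turing machine. Yields (index, output)
    pairs forever; every program is stepped infinitely often."""
    running = []
    n = 0
    while True:
        running.append(program(n))
        n += 1
        for i, proc in enumerate(running):
            yield i, next(proc)
```

This is also why the box is "slightly less effective than a rock" at interacting with the world: the schedule gives every computation its turn, but nothing selects or amplifies the intelligent ones.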

Comment author: Gurkenglas 14 February 2016 04:04:26PM *  1 point [-]

That box is merely a proof that the intelligence of patterns in a non-halting Turing machine need not be bounded. If we cannot get infinite space/time, we run into problems sooner than Kolmogorov complexity. (As I understood it, the OP was about how even infinite resources cannot escape the complexity limits our laws of physics dictate.)
