
In response to comment by JoshuaZ on Je suis Charlie
Comment author: loldrup 15 January 2015 11:28:04PM 0 points

I think my chain falls off at the idea that we can assign reliable probabilities to various hypotheses prior to our own thorough investigation of the available scientific material.

For the case of UFOs, wouldn't we have to have scientific reports explaining all unexplained observations of aerial phenomena that have occurred in history before we could reasonably claim that the probability is very low?

In response to comment by loldrup on Je suis Charlie
Comment author: Emile 15 January 2015 11:47:54PM 6 points

I think my chain falls off at the idea that we can assign reliable probabilities to various hypotheses prior to our own thorough investigation of the available scientific material.

Yep! We do it all the time! How likely do you think it is that the city of New York has just been destroyed by a nuclear blast? That your parents are actually undercover agents sent by Thailand? That there is a scorpion in the sandwich you're about to eat? Most people would consider those extremely unlikely without a second thought, and would not feel any need for a "thorough investigation of the available scientific material". And that's a perfectly sensible thing to do!

In response to Je suis Charlie
Comment author: Emile 15 January 2015 11:34:37AM 8 points

I guess we can agree that the most rational response would be to enter a state of aporia until sufficient evidence is at hand.

Not really; consider how much effort it's worth investing in the question of whether Barack Obama is actually secretly transgender, under different scenarios:

  • You just thought about it, but don't have any special reason to privilege that hypothesis
  • Someone mentioned the idea as a thought experiment on LessWrong.com, but doesn't seem to think it's even remotely likely
  • Someone on the internet seems to honestly believe it (but may be a troll, or Time Cube guy-level crazy)
  • A vocal group on the internet seems to believe it
  • Several people you know in real life seem to believe it

If you think that even in the first case you should investigate, then you're going to spend your life chasing every hypothesis that catches your fancy, regardless of how likely or useful it is. If you believe that in some cases it deserves a bit of investigation, but not in others, you're going to need a few extra rules of thumb, even before looking at the evidence.

Comment author: SolveIt 05 January 2015 02:35:59PM 3 points

I can code, as in I can do pretty much any calculation I want and have little trouble with school assignments. However, I don't know how to get from here to making applications that don't look like they've been drawn in MS Paint. Does anyone know a good resource for getting from "I can write code that'll run on the command line" to "I can make nice-looking stuff my grandmother could use"?

Comment author: Emile 05 January 2015 03:40:32PM 4 points

I've used the Bootstrap framework to make web apps that don't look horribly ugly. Learning the things you'd need to build apps with it (so a bit of JS, CSS, HTML, etc., as sixesandseven says) would probably be a good start. (It would probably be easier than trying to write good-looking CSS from scratch, which is more of a pain.)

Comment author: HungryHobo 04 January 2015 10:15:12PM 12 points

Thing is, with almost everything in software, one of the first things it gets applied to is... software development.

Whenever some neat tool/algorithm comes out to make analysis of code easier it gets integrated into software development tools, into languages and into libraries.

If the complexity of software had stayed static, programmers would have insanely easy jobs by now; instead, the demands grow such that the actual percentage of failed software projects stays pretty static, and has done so since software development became a reasonably common job.

Programmers essentially become experts in dealing with hideously complex systems involving layers within layers of abstraction. Every few months we watch news reports about how tool xyz is going to make programmers obsolete by allowing "anyone" to create xyz, and ten years later we're getting paid to untangle the mess made by "anyone", who did indeed make xyz... badly, while we were using the professional equivalents of the same tools to build systems orders of magnitude larger and more complex.

If you had a near-human-level AI, odds are that everything which could be programmed into it at the start to help it with software development would already be part of the suite of tools for helping normal human programmers.

Add to that, there's nothing like working with the code of real existing modern AI (as opposed to simply using it, or watching movies about it) to convince you that we're a long, long way from any AI that's anything but an ultra-idiot savant.

And nothing like working in industry to make you realize that an ultra-idiot savant is utterly acceptable and useful.

Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because it's made of software (like how humans are automatically experts in advanced genetic engineering just because we're made of DNA).

Re: recursive self-improvement, the crux is whether improvements in AI get harder the deeper you go. There aren't really good units for this.

But let's go with IQ. Let's imagine you start out with an AI like an average human: IQ 100.

If it's trivial to increase intelligence, and it doesn't get harder to improve further as you get higher, then yeah: foom, IQ of 10,000 in no time.

If each IQ point gets exponentially harder to add, then while it may have taken a day to go from 100 to 101, by the time it gets to 200 it's having to spend months scanning its own code for optimizations and experimenting with cut-down versions of itself in order to get to 201.

Given the utterly glacial pace of AI research, the former doesn't seem likely.
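
To make the two regimes concrete, here's a toy simulation; a minimal sketch in Python, where the one-day cost of the first point and the 1.1x per-point difficulty multiplier are purely illustrative assumptions, not claims about real AI:

    # Toy model of the two takeoff regimes described above.
    # All numbers are illustrative assumptions, not claims about real AI.
    def days_to_reach(target_iq, start_iq=100, difficulty_growth=1.0):
        """Total days to climb from start_iq to target_iq, one point at a time.

        difficulty_growth == 1.0 -> every point costs the same (fast takeoff);
        difficulty_growth > 1.0  -> each point costs exponentially more (grind).
        """
        days, cost = 0.0, 1.0  # assume the first point (100 -> 101) takes a day
        for _ in range(start_iq, target_iq):
            days += cost
            cost *= difficulty_growth
        return days

    for growth in (1.0, 1.1):
        print(f"x{growth}: 100 -> 200 takes {days_to_reach(200, 100, growth):,.0f} days")

With a flat cost, the hundred-point jump takes 100 days; at 1.1x per point it takes roughly 138,000 days in total, with the final point alone costing decades.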

Comment author: Emile 05 January 2015 12:52:36PM 4 points

Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because it's made of software (like how humans are automatically experts in advanced genetic engineering just because we're made of DNA).

Not "just because they're made of software" - but because there are many useful things that a computer is already better than a human at (notably, vastly greater "working memory"), so a human-level AI can be expected to have those and whatever humans can do now. And a programmer who could easily do things like "check all lines of code to see if they seem like they can be used", or systematically checking from where a function could be called, or "annotating" each variable, function or class by why it exists ... all things that a human programmer could do, but that either require a lot of working memory, or are mind-numblingly boring.

Comment author: Emile 01 January 2015 10:01:58PM 2 points

I should be there.

Comment author: Metus 08 December 2014 12:19:59PM 11 points

Physics: "what is energy?"

I am a graduate student of physics and I am inclined to say that I now know even less about what energy is.

Comment author: Emile 08 December 2014 04:03:19PM 0 points

Maybe completely blanking on that question is a sign of having studied some physics?

Comment author: [deleted] 24 November 2014 11:46:40AM 1 point

I am considering deleting all of my comments on Less Wrong (or, for comments I can't delete because they've been replied to, editing them to replace their text with a full stop and retracting them) and then deleting my account. Is there an easier way of doing that than by hand?

(In case you're wondering, that's because thanks to Randall Munroe the probability that any given person I know in meatspace will read my comments on Less Wrong just jumped up by orders of magnitude.)

In response to comment by [deleted] on Open thread, Nov. 24 - Nov. 30, 2014
Comment author: Emile 24 November 2014 01:12:54PM 1 point

?! But your name seems even less traceable to you than mine is, and I don't worry about that!

(Also, shouldn't you take into account the probability that they will actually link those comments to you, and that they would think badly of you because of it?)

Comment author: Capla 21 November 2014 09:48:19PM 1 point

and you can't see the forest if your mind sees every tree.

Why not? I'd just ask to see every tree and the forest. Transcending current human limitation is exactly what the singularity is god for.

Comment author: Emile 22 November 2014 01:59:38PM 1 point

exactly what the singularity is god for

... I'm not sure whether that is a misspelling ... (Freudian slip?)

Comment author: Emile 21 November 2014 08:34:41AM 8 points

(ok, I deleted my duplicate post then)

Also worth mentioning: the Forum thread, in which Eliezer chimes in.
