Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: dglukhov 21 March 2017 09:05:13PM 1 point [-]

Not the first criticism of the Singularity, and certainly not the last. I found this on reddit, just curious what the response will be here:

"I am taking a subject at university called Information Systems Management, and my teacher is a Futurologist! He refrains from even teaching the subject just to talk about technology and how it will solve all of our problems and make us uber-humans in just a decade or two. He has a PhD in A.I. and has already talked to us about nanotechnology getting rid of all diseases, A.I. merging with us, smart cities controlled by A.I. like the Fujisawa project, and a 20-minute interview with Ray Kurzweil about how the singularity will make us all immortal by 2045.

Now, I get triggered as fuck whenever my teacher opens his mouth, because not only does he sell these claims with no other basis than "technology is growing exponentially", but he also implies that all of our problems can and will be solved by it, empowering us to keep fucking up things along the way. But I prefer to stay silent, because most idiots in my class are beyond saving anyway and I don't get off on confronting others, but that is beside the point.

I wanted to make a case for why the singularity is beyond the limits of this current industrial civilization, and I will base my assessment on these pillars:

-Declining Energy Returns: We are living in a world where the return for oil is what, a tenth of what it used to be last century? Not to mention that even this lower-quality oil is facing depletion, at least from profitable sources. Renewables are at far too early a stage to hope they can run an industrial, exponentially growing civilization like ours at this point, and there are physical laws that limit the amount of energy that can actually be absorbed from the sun, along with what can be efficiently stored in batteries, not to mention intermittency issues, transport costs, etc. One would think that more complex civilizations require more and more energy, especially at exponential growth rates, but the only argument that futurists spew out is some free market bullshit about solar, or, like my teacher did, simply expecting the idea to come true because humans are awesome and technology is increasing at exponential rates. These guys think applied science and technology exist in a vacuum, which brings me to the next point.

-Economic feasibility: I know it is easy to talk about the wonders of tech and the bright future ahead of us when one lives in the developed world and is part of a privileged socioeconomic class, being as such isolated from 99% of the misery of this planet. There are people today who cannot afford clean water. In fact, most people below the top 20% of the population in terms of income probably won't be able to afford many of the new technological developments any more than they can today. In fact, if the wealth gap keeps increasing, only the top 1% would be able to turn into cyborgs or upload their minds into robots or whatever it is that these guys preach. I think the argument of a post-scarcity era is a lot less compelling once you realize it will only benefit a portion of the populations of developed countries.

-Political resistance and corruption: Electric cars have been a thing ever since the 20th century, and who knows what technologies have been hidden and lobbied against by the big corporations that rule this capitalist system. Yet the only hope for the singularity is that it is somehow profitable for the stockholders. Look at planned obsolescence. We could have products that are 100 times more durable, that are more efficient, that are safer, that pollute less, but then where would profits go? Who is to tell you that they won't do the same in the future? In fact, a big premise of smart cities is that they will reduce crime by constant surveillance. In Fujisawa every lightpost triggered a motion camera, and houses had centralized information centers that could easily be turned into Orwellian control devices, which sounds terrifying to me. We will have to wait and see how the middle class and below react to automation taking many jobs, and how the UBI experiment is carried out, if at all.

-Time constraints: Finally, people hope for the Singularity to reach us by 2045. That would require around 30 years of constant technological development, disregarding social decline, resource depletion, global warming, crop failures, droughts, etc. If civilization collapses before 2045, which I think is very likely, then the Singularity won't come around and save us, and as far as I know, there is no other hope from futurologists at this point than a major breakthrough in technology. Plus, as the video "Are humans smarter than bacteria?" very clearly states, humans need time to figure out the problems we face, then we need some more time to design a solution, then we need even more time to debate, lobby, and finally implement some form of the original solution, and hope no other problems arise from it, because as we know technology is highly unpredictable and many times it creates more problems than it solves. Until we do all that, on a global scale, without destroying civil liberties, I think we will all be facing severe environmental problems, and developing countries may very well have fallen apart long before that.

What do you think? Am I missing something? What is the main force that will stop us reaching the Singularity in time? "

Comment author: cousin_it 21 March 2017 10:38:36PM *  5 points [-]

I think most people on LW also distrust blind techno-optimism, hence the emphasis on existential risks, friendliness, etc.

Comment author: cousin_it 12 March 2017 08:31:34PM 3 points [-]

Quixey has been shut down.

Comment author: cousin_it 27 February 2017 08:56:08AM *  2 points [-]

Maybe it was too hard.

Here's another problem that might be easier. Make an O(n log n) sorting algorithm that's simple, stable, and in place. Today you can only get two out of three (merge sort isn't in place, heap sort isn't stable, and block sort isn't simple).
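For reference, a textbook merge sort shows the two-out-of-three tradeoff: it is simple and stable, but allocates O(n) auxiliary space, so it isn't in place (a Python sketch; the function name and `key` parameter are my own choices):

```python
def merge_sort(a, key=lambda x: x):
    # Stable and O(n log n), but copies into new lists -- not in place.
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left = merge_sort(a[:mid], key)
    right = merge_sort(a[mid:], key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # <= sends equal keys left-first, preserving their original
        # relative order (this is what makes the sort stable).
        if key(left[i]) <= key(right[j]):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

Heap sort flips the tradeoff: it sorts in place with O(1) extra space, but sift-down moves swap equal elements past each other, destroying stability.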

Comment author: cousin_it 05 March 2017 11:33:50AM *  1 point [-]

I've read some papers (Trabb Pardo, Huang-Langston 1, Huang-Langston 2, Munro-Raman-Salowe, Katajainen-Pasanen 1, Katajainen-Pasanen 2) and there seems to be a "wall" at sqrt(n) extra space. If we have that much space, we can write a reasonable-looking mergesort or quicksort that's stable, in place and O(n log n). But if we have strictly O(1) space, the algorithms become much more complicated, using a sqrt(n) buffer inside the array to encode information about the rest. Breaking through that wall would be very interesting to me.
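The sqrt(n)-buffer algorithms in those papers are far too intricate to sketch here, but the simpler half-size-buffer trick illustrates why scratch space makes merging easy: copy the left run out, then merge back into the gap it leaves. A Python sketch (my own illustrative code, not from the papers; they shrink this buffer down to O(sqrt(n)) at the cost of much more machinery):

```python
def merge_with_buffer(a, lo, mid, hi):
    # Stable merge of sorted runs a[lo:mid] and a[mid:hi], using a
    # scratch buffer only as large as the left run.
    buf = a[lo:mid]            # copy the left run out of the array
    i, j, k = 0, mid, lo
    while i < len(buf) and j < hi:
        if buf[i] <= a[j]:     # <= keeps equal elements left-first
            a[k] = buf[i]; i += 1
        else:
            a[k] = a[j]; j += 1
        k += 1
    # Any leftover buffer elements slot into the remaining gap; the
    # right run's remainder is already in its final position.
    a[k:k + len(buf) - i] = buf[i:]
```

With strictly O(1) space that copy is no longer available, which is where the in-array sqrt(n) "internal buffer" encoding comes in.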

Comment author: gjm 02 March 2017 02:49:28PM 1 point [-]

This claims to be a stable in-place sort and doesn't seem outrageously complicated. I haven't verified that it is stable, in-place, or in fact a sort at all.

Comment author: cousin_it 02 March 2017 02:56:36PM *  2 points [-]

It's O(n log^2 n) because it merges subarrays using something like STL's inplace_merge, which is O(n log n) when no buffer is available. Devising an O(n) in-place merge, and thus a true O(n log n) merge sort, is much harder. GrailSort and WikiSort have working implementations; both are over 500 lines.
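For the curious, the bufferless rotation-based merge behind that O(n log^2 n) bound fits in a few lines. This is my own simplified sketch of the recursive strategy (split the longer run, binary-search the other for the matching point, rotate, recurse), not GrailSort's or WikiSort's code:

```python
import bisect

def reverse(a, lo, hi):
    # Reverse a[lo:hi] in place by swapping ends inward.
    hi -= 1
    while lo < hi:
        a[lo], a[hi] = a[hi], a[lo]
        lo += 1
        hi -= 1

def rotate(a, lo, mid, hi):
    # Bring a[mid:hi] in front of a[lo:mid]: three reversals, O(1) space.
    reverse(a, lo, mid)
    reverse(a, mid, hi)
    reverse(a, lo, hi)

def merge_in_place(a, lo, mid, hi):
    # Stable merge of sorted runs a[lo:mid] and a[mid:hi] in O(1) extra
    # space (ignoring the O(log n) recursion stack).  Each level does an
    # O(n) rotation, so one merge costs O(n log n), and a merge sort
    # built on it costs O(n log^2 n) -- the bound discussed above.
    if lo >= mid or mid >= hi:
        return
    if hi - lo == 2:
        if a[mid] < a[lo]:  # strict < keeps equal elements in order
            a[lo], a[mid] = a[mid], a[lo]
        return
    # Split the longer run at its midpoint; binary-search the other run
    # for the matching split point (bisect sides chosen for stability).
    if mid - lo >= hi - mid:
        i = (lo + mid) // 2
        j = bisect.bisect_left(a, a[i], mid, hi)
    else:
        j = (mid + hi) // 2
        i = bisect.bisect_right(a, a[j], lo, mid)
    rotate(a, i, mid, j)
    new_mid = i + (j - mid)
    merge_in_place(a, lo, i, new_mid)
    merge_in_place(a, new_mid, j, hi)
```

The rotation makes the two halves of the problem independent, so the recursion never needs a buffer; the 500-line implementations earn the better O(n log n) total by carving that buffer out of the array itself.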

Comment author: cousin_it 01 March 2017 08:40:01PM *  10 points [-]

A human brain uses as much power as a lightbulb and its size is limited by the birth canal, yet an evolutionary accident gave us John Von Neumann who was far beyond most people. An AI as smart as 1000 Von Neumanns using the power of 1000 lightbulbs could probably figure out how to get more power. Arguments like yours ain't gonna stop it.

Comment author: Elo 27 February 2017 05:22:49AM 0 points [-]

I am looking for a guide to getting into diffuse thinking modes.

I can't seem to find one, currently my plan looks like:

  1. workspace (make and find a nice place to sit for the following process)
  2. time (set aside time for it - 90mins seems like a good chunk)
  3. remove distractions (anything that takes my mind away, like needing to pee, needing to leave for a drink, changes in light or being too hot and too cold) (no phone buzzes, no email notifications, no calls) (don't be tired or wired, aim for the middle of the day, not too early, too late)
  4. paper or other supplies for recording work
  5. inspiration or things to vaguely guide my focus onto a topic, maybe a list of things I want to think about.

What should I add?

I have been meaning to write a post on "workspace" but have not gotten to it.

Comment author: cousin_it 28 February 2017 02:49:27PM 2 points [-]

Just get in the shower :-)

Comment author: ChristianKl 27 February 2017 09:32:21AM 0 points [-]

What does "simple" mean here?

Comment author: cousin_it 27 February 2017 09:58:55AM *  0 points [-]

Just use any definition that feels reasonable to you. If you have two solutions that are simple under different definitions, I want to see both!

Comment author: Thomas 27 February 2017 07:37:12AM 0 points [-]

No new problem this week.

But has anybody even tried to solve the last week's problem?

Comment author: Johannes_Treutlein 24 February 2017 09:44:16AM *  0 points [-]

Thanks for the link! What I don't understand is how this works in the context of empirical and logical uncertainty. Also, it's unclear to me how this approach relates to Bayesian conditioning. E.g. if the sentence "if a holds, then o holds" is true, doesn't this also mean that P(o|a)=1? In that sense, proof-based UDT would just be an elaborate specification of how to assign these conditional probabilities "from the viewpoint of the original position", so with updatelessness, and in the context of full logical inference and knowledge of the world, including knowledge about one's own decision algorithm. I see how this is useful, but I don't understand how it would at any point contradict normal Bayesian conditioning.

As to your first question: if we ignore problems that involve updatelessness (or if we just stipulate that EDT always had the opportunity to precommit), I haven't been able to find any formally specified problems where EDT and UDT diverge.

I think Caspar Oesterheld's and my flavor of EDT would be ordinary EDT with some version of updatelessness. I'm not sure if this works, but if it turns out to be identical to UDT, then I'm not sure which of the two is better specified or easier to formalize. In the language of Arbital's LDT article, my EDT would differ from UDT only insofar as we use ordinary Bayesian conditioning instead of some form of logical conditioning. So (staying in the Arbital framework), it could look something like this (P stands for whatever prior probability distribution you care about):

Comment author: cousin_it 24 February 2017 04:21:46PM *  1 point [-]

Also, it's unclear to me how this approach relates to Bayesian conditioning.

To me, proof-based UDT is a simple framework that includes probabilistic/Bayesian reasoning as a special case. For example, if the world is deterministic except for a single coinflip, you specify a preference ordering on pairs of outcomes of two deterministic worlds. Fairness or non-fairness of the coinflip will be encoded into the ordering, so the decision can be based on completely deterministic reasoning. All probabilistic situations can be recast in this way. That's what UDT folks mean by "probability as caring".
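A toy illustration of that encoding (entirely my own construction, with made-up names, not anything from the UDT literature): a world that is deterministic except for one coinflip. Instead of computing an expectation inside the world model, the agent ranks *pairs* of outcomes, one per deterministic branch, and the coin's fairness lives inside the utility on pairs:

```python
def outcome(action, coin):
    # Payoff table for a toy bet: "bet" wins 2 on heads, loses 1 on
    # tails; "pass" pays nothing either way.  Each branch is a fully
    # deterministic world.
    if action == "bet":
        return 2 if coin == "heads" else -1
    return 0

def pair_utility(pair):
    # The preference ordering over pairs of deterministic outcomes.
    # Weighting the branches 0.5/0.5 is exactly where "the coin is
    # fair" gets encoded -- probability as caring.
    heads_outcome, tails_outcome = pair
    return 0.5 * heads_outcome + 0.5 * tails_outcome

def choose(actions):
    # Completely deterministic reasoning: evaluate each action's pair
    # of branch outcomes and pick the preferred pair.
    return max(actions, key=lambda a: pair_utility(
        (outcome(a, "heads"), outcome(a, "tails"))))
```

Here `choose` never manipulates a probability distribution over worlds, yet it reproduces the expected-utility answer because the weighting was folded into the ordering on pairs.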

It's really cool that UDT lets you encode any setup with probability, prediction, precommitment etc. into a few (complicated and self-referential) sentences in PA or GL that are guaranteed to have truth values. And since GL is decidable, you can even write a program that will solve all such problems for you.

Comment author: username2 13 February 2017 04:22:46PM 2 points [-]

Are there interesting YouTubers LessWrong is subscribed to? I never really used YouTube, and after watching "history of japan" I get the feeling I'm missing out on some stuff.

Comment author: cousin_it 16 February 2017 07:53:53PM 0 points [-]

YouTube has tons of good stuff, it's a question of which addiction you want :-) I'm a longtime fan of Accursed Farms.
