titotal

Comments

titotal

List of Lethalities is not by any means a "one-stop shop". If you don't agree with Eliezer on 90% of the relevant issues, it's completely unconvincing. For example, in that article he takes it as an assumption that an AGI will be omnipotent on a godlike level, and that it will default to murderism.

Answer by titotal

Building a bacterium that eats all metals would be world-ending: most elements on the periodic table are metals. A bacterium that eats all metals would eat things that are essential for life (iron, calcium, sodium, potassium) and kill us all.

Okay, what about a bacterium that only eats "stereotypical" metals, like steel or iron? I beg you to understand that you can't just sub in different periodic-table elements and expect the bacterium to work the same. There will always be some material the bacterium can't digest that computers could still be made from. And even making a bacterium that only works on one material, but is able to spread over the entire planet, is well beyond our current abilities.

I think List of Lethalities is nonsense for other reasons, but he is correct that trying to do a "pivotal act" is a really stupid plan.

titotal

Under peer review, this never would have been seen by the public. The process would have incentivized CAIS to actually think about the potential flaws in their work before blasting it out to the world.

titotal

I asked the forecasting AI three questions:

Will Iran possess a nuclear weapon before 2030?

539's answer: 35%

Will Iran possess a nuclear weapon before 2040?

539's answer: 30%

Will Iran possess a nuclear weapon before 2050?

539's answer: 30%

Given that the AI apparently doesn't understand that an event can only become more likely as the time horizon extends, I'm somewhat skeptical that it will perform well on real forecasts.
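To spell out why those answers are incoherent (a minimal statement of the point; the event notation is mine, not 539's):

```latex
% Any world where Iran has a weapon before 2030 also has it before
% 2040 and before 2050, so the events are nested:
\[
A_{2030} \subseteq A_{2040} \subseteq A_{2050}
\quad \Longrightarrow \quad
P(A_{2030}) \le P(A_{2040}) \le P(A_{2050}).
\]
% 539's answers of 35%, 30%, 30% violate the first inequality.
```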

titotal

The actual determinant here is whether you enjoy gambling.

Person A, who regularly goes to a casino and bets 100 bucks on roulette for the fun of it, will obviously go for bet 1. In addition to the expected 5 bucks of profit, they get the extra fun of gambling, making it a no-brainer. Similarly, bet 2 is a no-brainer.

Person B, who hates gambling and gets super upset when they lose, will probably reject bet 1. The expected profit of 5 bucks is outweighed by the emotional cost of gambling, a thing that upsets them.

When it comes to bet 2, person B still hates gambling, but the expected profit is so ridiculously high that it exceeds the emotional cost of gambling, so they take the bet.

Nobody here is necessarily being irrational once you account for non-monetary costs.
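A minimal sketch of that accounting, with made-up numbers (the actual bets are defined in the post being replied to; here I assume bet 1 is a 50/50 flip winning $110 or losing $100, and bet 2 is the same flip winning $1,000,000):

```python
# Hypothetical bet parameters, chosen only to match the $5 expected profit.
def expected_profit(p_win: float, win: float, lose: float) -> float:
    return p_win * win - (1 - p_win) * lose

def takes_bet(profit: float, emotional_cost: float) -> bool:
    # Take the bet when expected profit exceeds the non-monetary cost;
    # a negative cost means the person positively enjoys gambling.
    return profit > emotional_cost

bet1 = expected_profit(0.5, 110, 100)        # +5
bet2 = expected_profit(0.5, 1_000_000, 100)  # +499,950

print(takes_bet(bet1, emotional_cost=-10))   # Person A, enjoys gambling: True
print(takes_bet(bet1, emotional_cost=50))    # Person B, hates gambling: False
print(takes_bet(bet2, emotional_cost=50))    # Person B still takes bet 2: True
```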

titotal

I believe this is important because we should epistemically lower our trust in published media from here onwards.  

From here onwards? Most of the tweets that ChatGPT generated are not noticeably different from the background noise of political Twitter (which is what it was trained on anyway). Also, Twitter is not published media, so I'm not sure where this statement comes from.

You should be willing to absorb information from published media with a healthy skepticism, based on the source and an awareness of potential bias. This was true before ChatGPT, and it will still be true in the future.

titotal

No, I don't believe he did, but I'll save the critique of that paper for my upcoming "why MWI is flawed" post.  

titotal

I'm not talking about the implications of the hypothesis; I'm pointing out that the hypothesis itself is incomplete. To simplify: if you observe an electron which has a 25% chance of spin up and a 75% chance of spin down, naive MWI predicts that one version of you sees spin up and one version of you sees spin down. It does not explain where the 25% and 75% numbers come from. Until we have a solution to that problem (and people are trying), you don't have a full theory that gives predictions, so how can you estimate its Kolmogorov complexity?
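For concreteness, here is the standard statement of the gap, with amplitudes chosen to match the 25%/75% example (this is textbook quantum mechanics, not anything from the post I'm replying to):

```latex
% The electron's state; the Born rule squares the amplitudes:
\[
|\psi\rangle = \tfrac{1}{2}\,|{\uparrow}\rangle + \tfrac{\sqrt{3}}{2}\,|{\downarrow}\rangle,
\qquad
P(\uparrow) = \bigl|\tfrac{1}{2}\bigr|^2 = 25\%,
\quad
P(\downarrow) = \bigl|\tfrac{\sqrt{3}}{2}\bigr|^2 = 75\%.
\]
% Naive branch-counting sees one "up" branch and one "down" branch,
% which suggests 50/50; deriving the Born weights is the open problem.
```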

I am a physicist who works in a quantum-related field, if that helps you take my objections seriously.

titotal

It’s the simplest explanation (in terms of Kolmogorov complexity).

Do you have proof of this? I see this stated a lot, but I don't see how you could know it when certain aspects of MWI (like how you actually get the Born probabilities) are unresolved.

titotal

The basic premise of this post is wrong, based on the strawman that an empiricist/scientist would only look at a single piece of information. You have the empiricist and the scientist just looking at the returns on investment of Bankman's scheme and extrapolating blindly from there.

But an actual empiricist looks at all the empirical evidence. They can look at the average rate of return of a typical investment, noting that this one is unusually high. They can learn how the economy works and figure out whether there is any plausible mechanism for this kind of return. They can look up economic history and note that Ponzi schemes exist and happen reasonably often. From all the empirical evidence, the conclusion "this is a Ponzi scheme" is not particularly hard to arrive at.
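A minimal Bayes sketch of that aggregation, with numbers that are purely illustrative (none of them come from the post):

```python
# Illustrative inputs: the point is only that steady, anomalously high
# returns are far more likely under "Ponzi" than under "legitimate fund".
prior_ponzi = 0.01            # assumed base rate of Ponzi schemes among funds
p_returns_given_ponzi = 0.90  # Ponzis characteristically show such returns
p_returns_given_legit = 0.01  # legitimate funds rarely sustain them

# Bayes' rule: P(ponzi | returns)
evidence = (p_returns_given_ponzi * prior_ponzi
            + p_returns_given_legit * (1 - prior_ponzi))
posterior = p_returns_given_ponzi * prior_ponzi / evidence
print(f"{posterior:.0%}")  # ~48%: near even odds from this one observation alone
```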

Your "scientist" and "empricist" characters are neither scientists nor empiricists: they are blathering morons. 

As for AI risk, you've successfully knocked down the very basic argument that AI must be safe because it hasn't destroyed us yet. But that is not the core of any skeptic's argument that I know of.

Instead, an actual empiricist skeptic might look at the empirical evidence involved. They might say: hey, a lot of very smart AI developers have predicted imminent AGI before and been badly wrong, so couldn't this be that again? A lot of smart people have also predicted the doom of society, and they've also been wrong, so couldn't this be that again? Is there a reasonable near-term physical pathway by which an AI could actually carry out the destruction of humanity? Is there any evidence of active hostile rebellion by AI? And then they would balance all of that against the empirical evidence you have provided, to come to a conclusion on which side is stronger.

Which, really, is also what a good epistemologist would do? This distinction does not make sense to me; it seems like all you've done is (perhaps unwittingly) smear and strawman scientists.
