All of thefirechair's Comments + Replies

Answer by thefirechair-4-61

The CCP once ran a campaign asking for criticism and then purged everyone who engaged.

I'd be super wary of participating in threads such as this one. A year ago I participated in a similar thread and was hit with the rate-limit ban.

If you talk about the very valid criticisms of LessWrong (which you can only find off LessWrong) then expect to be rate limited.

If you talk about some of the nutty things the creator of this site has said that may as well be "AI will use Avada Kedavra," then expect to be rate limited.

I find it really sad honestly. The group think here ... (read more)

5[anonymous]
https://en.m.wikipedia.org/wiki/Hundred_Flowers_Campaign is the source. The punishments ranged from re-education camps to execution. Thank you for telling me about the rate limit a year ago. I thought I was the only one. Were you given any feedback from the moderators about the reason you were punished, or an advance warning to give you the opportunity to change anything?
Amalthea4120

Do you have an example for where better conversations are happening?

There's no proof that superintelligence is even possible. The idea of a self-updating AI that will rewrite itself to godlike intelligence isn't supported.

There is just so much hand-wavey magical thinking going on in regard to the supposed superintelligent AI takeover.

The fact is that manufacturing networks are damn fragile. Power networks too. Some bad AI is still limited by these physical things. Oh, it's going to start making its own drones? Cool, so it is running thirty mines, and various shops, plus refining the oil and all the rest of the network's requ... (read more)

1Zack Sargent
There are three things to address here: (1) that it can't update or improve itself; (2) that doing so will lead to godlike power; (3) whether such power is malevolent.

Of 1, it does that now. Last year, I started to get a bit nervous noticing the synergy between AI fields converging. In other words, Technology X (e.g. Stable Diffusion) could be used to improve the function of Technology Y (e.g. Tesla self-driving) for an increasingly large pool of X and Y. This is one of the early warning signs that you are about to enter a paradigm shift or geometric progression of discovery. Suddenly, people saying AGI was 50 years away started to sound laughable to me. If it is possible on silicon transistors, it is happening in the next 2 years. Here is an experiment testing the self-reflection and self-improvement (loosely "self-training," but not quite there) of GPT-4 (last week).

Of 2, there is some merit to the argument that "superintelligence" will not be vastly more capable because of the hard universal limits of things like causality. That said, we don't know how regular intelligence "works," much less how much more super a superintelligence would or could be. If we are saved from AI, then it is these computational and informational speed limits of physics that have saved us out of sheer dumb luck, not because of anything we broadly understood as a limit to intelligence, proper. Given the observational nature of the universe (ergo, quantum mechanics), for all we know, the simple act of being able to observe things faster could mean that a superintelligence would have higher speed limits than our chemical-reaction brains could ever hope to achieve. The not knowing is what causes people to be alarmist. Because a lot of incredibly important things are still very, very unknown ...

Of 3, on principle, I refuse to believe that stirring the entire contents of Twitter and Reddit and 4Chan into a cake mix makes for a tasty cake. We often refer to such places as "sewers," and o
2TAG
Meaning it hasn't happened, or it isn't possible? If it offers to improve them, we may well see that as a benevolent act...
6sanxiyn
Eh, I agree it is not mathematically possible to break a one-time pad (but it is important to remember the NSA broke VENONA; mathematical cryptosystems are not the same as their implementations in reality), but most of our cryptographic proofs are conditional and rely on assumptions. For example, I don't see what is mathematically impossible about breaking AES.

This place uses upvote/downvote mechanics, and authors of posts can ban commenters from writing there... which, man, if you want to promote groupthink and all kinds of in-group hidden rules and out-group forbidden ideas, that's how you'd do it.

You can see it at work - when a post is upvoted, is it because it's well-written/useful or because it's saying the groupthink? When a post is downvoted, is it because it contains forbidden ideas?

When you talk about making a new faction - that is what this place is. And naming it Rationalists says something ver... (read more)

1SomeoneYouOnceKnew
Do you believe encouraging the site maintainers to implement degamification techniques on the site would help with your criticisms?
5Daniel Kokotajlo
...I have, and haven't found anything good. (I'm ignoring the criticisms hosted on LessWrong itself, which presumably don't count?) That's why I asked for specific links. Now it sounds like you don't actually have anything in mind that you think would stand up to minimal scrutiny. The RationalWiki article does make some good points about LW having to reinvent the wheel sometimes due to ignorance or disparagement of the philosophical literature. As criticisms go this is extremely minor though... I say similar things about the complaints about Yudkowsky's views on quantum physics and consciousness.
2[anonymous]
Do you have a specific criticism? I tried that search, and the first result goes right back to LessWrong itself; you could just link the same article. The second criticism is on the LessWrong subreddit. Third is RationalWiki, where apparently some thought experiment called Roko's Basilisk got out of hand 13 years ago. Most of the other criticisms are "it looks like a cult," which is a perfectly fair take, and it arguably is a cult that believes in things that happen to be more true than the beliefs of most humans. Or "a lack of application" for rationality, which was also true before machine learning.

I agree. When you look up criticism of LessWrong you find plenty of very clear, pointed, and largely correct criticisms. 

I used time-travel as my example because I didn't want to upset people, but really any in-group/out-group forum holding some wild ideas would have sufficed. This isn't at Flat Earther levels yet, but it's easy to see the similarities.

There are the unspoken things you must not say, otherwise you'll be pummeled, ignored, or fought. Blatantly obvious vast holes are routinely ignored. A downvote mechanism works to push comments down. ... (read more)

8Daniel Kokotajlo
If you link me to 1-3 criticisms which you think are clear, pointed, and largely correct, I'll go give them a skim at least. I'm curious. You are under no obligation to do this but if you do I'll appreciate it.

You have no atomic-level control over that. You can't grow a cell at will, or kill one, or release a hormone. This is what I'm referring to. No being that exists has this level of control. We all operate far above the physical reality of our bodies.

But we suggest an AI will have atomic control, or that control of its code is the same as control.

Total control would be you sitting there directing cells to grow or die or change at will.

No AI will be there modifying the circuitry it runs on down at the atomic level.

2Raemon
Quick very off the cuff mod note: I haven't actually looked into the details of this thread and don't have time today, but skimming it it looks like it's maybe spiralling into a Demon Thread and it might be good for people to slow down and think more about what their goals are. (If everyone involved is actually just having fun hashing an idea out, sorry for my barging in)

I'd suggest there may be an upper bound to intelligence because intelligence is bound by time and any AI lives in time like us. They can't gather information from the environment any faster. They cannot automatically gather all the right information. They cannot know what they do not know.

The system of information, brain propagation, cellular change runs at a certain speed for us. We cannot know if it is even possible to run faster.

One of the magical thinking criticisms I have of AI is that it suddenly is virtually omniscient. Is that AI observing mold cul... (read more)

4Archimedes
Yes, physical constraints do impose an upper bound. However, I would be shocked if human-level intelligence were anywhere close to that upper bound. The James Webb Space Telescope has an upper bound on the level of detail it can see based on things like available photons and diffraction but it's way beyond what we can detect with the naked human eye.

The assumption there is that the faster the hardware underneath, the faster the sentience running on it will be. But this isn't supported by evidence. We haven't produced a sentient AI to know whether this is true or not.

For all we know, there may be an upper limit to "thinking" based on neural propagation of information. To understand and integrate a concept requires change, and that change may move slowly across the mind and underlying hardware.

Humans have sleep, for example, to help us learn and retain information.

As for self modification - we don't have ato... (read more)

No being has cellular-level control - it can't direct brain cells to grow or hormones to release, etc. This is what I mean when I say it does not exist in nature. The self-modification that AI is claimed to be capable of exists nowhere.

Teleportation doesn't exist so we shouldn't make arguments where teleportation is part of it.

4green_leaf
Humans can already do that, albeit indirectly. Once again, you're "explaining" why something that already exists is impossible. It's sufficient for a self-modifying superhuman AI that it can do that indirectly (for it to be self-modifying), but self-modification of the source code is even easier than manipulation on the level of individual molecules.

You have no control down at the cellular level over your body. No deliberate, conscious control. No being does. This is what I mean by "does not exist in nature." Like teleportation.

2Richard_Kennaway
If I do weight training, my muscles get bigger and stronger. If I take a painkiller, a toothache is reduced in severity. A vaccination gives me better resistance to some disease. All of these are myself modifying myself. Everything you have written on this subject seems to be based on superficial appearances and analogies, with no contact with the deep structure of things.

We do have examples of these things in nature, in degrees. Like flowers turning to the sun because they contain light-sensing cells. Thus, it exists in nature and we eventually replicate it.

Steam engines are just energy transfer and use, and that exists. So does flying fast.

Something not in nature (as far as we can tell) is teleportation. Living inside a star. 

I don't mean specific narrow examples in nature. I mean the broader idea. 

So I can see intelligence evolving over enormous time-frames, and learning exists, so I do concur we can speed up learning and replicate it... but the underlying idea of a being modifying itself? Nowhere in nature. No examples anywhere on any level. 

1green_leaf
Your argument is fundamentally broken, because nature only contains things that happen to biologically evolve, so it first has to be the result of a specific algorithm (evolution) and also the result of a random roll of the dice (the random part of it). Even if there were no self-modifying beings in nature (humans do self-modify) or self-modifying AI, it would still be prima facie possible for it to exist, because all it means is for the being to turn its optimization power on itself (this is prima facie possible, since the being is a part of the environment). So instead of trying to think of an argument about why something that already exists is impossible, you should've simply considered the general principle.
7quanticle
Any form of learning is a being modifying itself. How else would learning occur?
Answer by thefirechair15-24

Imagine LessWrong started with an obsessive focus on the dangers of time-travel. 

Because the writers are persuasive there are all kinds of posts filled with references that are indeed very persuasive regarding the idea that time-travel is ridiculously dangerous, will wipe out all human life and we must make all attempts to stop time-travel.

So we see some new quantum entanglement experiment treated with a kind of horror. People would breathlessly "update their horizons" as if this mattered at all. Physicists completing certain problems or working in certa... (read more)

1[comment deleted]
1[anonymous]
This is approximately my experience of this place.   That, and the apparent runaway cult generation machine that seems to have started. Seriously, it is apparent that over the last few years the mental health of people involved with this space has collapsed and started producing multiple outright cults. People should stay out of this fundamentally broken epistemic environment. I come closer to expecting a Heaven's Gate event every week when I learn about more utter insanity. 
4Henry Prowbell
But why would intelligence reach human level and then halt there? There's no reason to think there's some kind of barrier or upper limit at that exact point. Even in the weird case where that were true, aren't computers going to carry on getting faster? Just running a human-level AI on a very powerful computer would be a way of creating a human scientist that can think at 1000x speed, create duplicates of itself, and modify its own brain. That's already a superintelligence, isn't it?
4Henry Prowbell
A helpful way of thinking about 2 is imagining something less intelligent than humans trying to predict how humans will overpower it. You could imagine a gorilla thinking "there's no way a human could overpower us. I would just punch it if it came into my territory."  The actual way a human would overpower it is literally impossible for the gorilla to understand (invent writing, build a global economy, invent chemistry, build a tranquilizer dart gun...) The AI in the AI takeover scenario is that jump of intelligence and creativity above us. There's literally no way a puny human brain could predict what tactics it would use. I'd imagine it almost definitely involves inventing new branches of science.
9quanticle
We don't have any examples of steam engines, supersonic aircraft or transistors in nature either. Saying that something can't happen because it hasn't evolved in nature is an extraordinarily poor argument.
6Donald Hobson
1) True, we don't have any examples of this in nature. Would we expect them? Let's say that to improve something, it is necessary and sufficient to understand it and have some means to modify it. Plenty of examples; most of the complicated ones are humans understanding some technology and designing a better version. At the moment, the only minds able to understand complicated things are humans, and we haven't got much human self-improvement because neuroscience is hard. I think it is fairly clear that there is a large in-practice gap between humans and the theoretical/physical limits to intelligence. Evidence of this includes neuron signals traveling at a millionth of light speed, most of the heuristics-and-biases literature, and humans just sucking at arithmetic. AIs working on AI research is a positive feedback loop, and probably quite a strong one. It seems that, when a new positive feedback loop is introduced, the rate of progress should speed up, not slow down.

2) You attribute magical chess-game-winning powers to Stockfish. But how in particular would it win? Would it use its pawns, advance a knight? The answer is that I don't know which move in chess is best, and I don't know what Stockfish will do. But these two probability distributions are strongly correlated, in the sense that I am confident Stockfish will make one of the best moves. I don't know what an ASI will do, and I don't know where the vulnerabilities are, but again, I think these are correlated. If an SQL injection would work best, the AI will use an SQL injection. If a buffer overflow works better, the AI will use that. There is an idea here that modern software is complicated enough that most of it is riddled with vulnerabilities. This is a picture backed up by the existence of hacks like Stuxnet, where a big team of humans put a lot of resources and brainpower into hacking a particular "highly secure" target and succeeded. I mean, it might be that P=NP and the AI finds a
Answer by thefirechair2-1

You've touched on a point that many posts don't address - the realities of the real world. So many "AI is going to kill us" posts start with "AI is coming," then "?", and then "we all die."

Look at something like Taiwanese chip manufacturing - it's incredibly cutting-edge and complicated, and some of it isn't written down! There are all kinds of processes that we don't patent for various reasons. So much of our knowledge is in process rather than actually written anywhere.

And all of these processes are themselves the pinnacle of hundreds of other interlinked pr... (read more)

2Viliam
This suggests that industrial espionage would be one of the AI's priorities.
6Mitchell_Porter
Then for so long as it needs humans, it would act through humans. 

I read a study a few years back that found some women still had iron-deficiency symptoms even as high as 60 on the ferritin test. It was also pointed out that the "normal" scale for iron was devised the way most things were in the past - on healthy college-age white males.

What is problematic about the ferritin test is that it is treated like a yes/no rather than a continuum. You might get 14 on the test, where 10 is anemia, and be told it's not iron deficiency.

The best advice is likely "if you have the symptoms of iron deficiency, treat it".

It's definitely one of the mo... (read more)

3scrollop
When people come to us (GPs/family physicians) with hair loss, for example, the first thing a doctor would do is check their bloods, specifically looking at ferritin levels (and other things, e.g. thyroid). If the ferritin level is less than 60, we would recommend increasing their iron intake. Thinking about this logically, one could say that the usual lower threshold of normal iron (around 20; differs with age/sex and lab) is too low if you have increased chances of hair loss at levels below 60, hence I recommend a goal of ferritin > 60. I recommend that people purchase (in the UK) ferrous fumarate (which has better bioavailability than ferrous sulphate), the more you take the better (up to 3 times a day; you may have GI side effects - abdominal discomfort, diarrhoea/constipation/black faeces), and take it with 200mg+ of vitamin C (or fresh orange juice), which triples the absorption of iron, and don't have tea/coffee/dairy one hour either side of taking it (which reduces absorption).

How is that a flaw? 

The harms of it are well known and established. You can look them up.

It's beside the point however. Replace it with whatever cause you want - spreading democracy, ending the war on drugs, ending homelessness, making more efficient electrical devices. 

The argument is that the path to the end is convoluted, not clear ahead of time. Although we can have guideposts and learn from history, the idea that today you can "optimize" an unsolved problem can be faintly ridiculous.

James Clear has zero idea of what is good or great and t... (read more)

Now let’s factor in two additional facts:

-- are these facts though?

I see this a bit on here, a kind of rapid-fire "and then, and then, and this is a fact, therefore..." when perhaps slowing down and stopping on some of those points to break the cognitive cage being assembled is the move.

Such as opportunity cost. We can make clear examples of this: investing in stock A means you can't invest in stock B.

But in the world, there are plenty of examples that are not OR gates but AND gates. It's not an opportunity cost to choose between providing clean needles to homele... (read more)

1Jakub Supeł
One flaw with your argument though... you seem to think it would be a good thing? Why?

Animals can suffer - duty to prevent animal suffering - stop that lion hunting that gazelle - lion suffering increases - work out how to feed lions - conclude predators and prey exist - conclude humans are just very smart predators - eating meat OK.

I'd contend that some positions are taken very seriously but what the next perceived logical step for people is varies. An animal activist might be pro the world becoming vegetarian. A non-animal activist is pro strong animal welfare laws to prevent needless suffering.

Trying to resolve "humans are just smart pred... (read more)

It does open up the possibility of other people writing any comic that has existed. More Snoopy. More Calvin & Hobbes. 

1st panel: Jon cooking lasagna, Garfield watching.

2nd panel: Garfield tangling in Jon's legs, lasagna going flying.

3rd panel: Garfield eating lasagna from the floor, happy.

No words, copy style, short comic. 

Wow, this is going to explode picture books and book covers.

Hiring an illustrator for a picture book costs a lot, as it should given it's bespoke art.

Now publishers will have an editor type in page descriptions, curate the best, and off they go. I can easily imagine a model improvement to remember the boy drawn or the steampunk bear, etc.

Book cover designers are in trouble too. "A wizard with lightning in his hands while a mountain explodes behind him" - this can generate multiple options.

It's going to get really wild when A/B split testing is involved. As you mention re... (read more)

gwern*110

Perhaps a full animated movie down the line. There are already programs that fill in gaps for animation poses. Boy running across field chased by robot penguins - animated, eight seconds.

Video is on the horizon (video generation bibliography eg. FDM), in the 1-3 year range. I would say that video is solved conceptually in the sense that if you had 100x the compute budget, you could do DALL-E-2-but-for-video right now already. After all, if you can do a single image which is sensible and logical, then a video is simply doing that repeatedly. Nor is there... (read more)

2Sable
Following up on your logic here, the one thing that DALL-E 2 hasn't done, to my knowledge, is generate entirely new styles of art, the way that art deco or pointillism were truly different from their predecessors. Perhaps that'll be the new role of human illustrators? Artists, instead of producing their own works to sell, would instead create their own styles, generating libraries of content for future DALL-Es to be trained against. They then make a percentage on whatever DALL-E makes from image sales if the style used was their own.

Allow it to display info on a screen. Set up a simple Polaroid camera that takes a photo every X seconds.

Ask the question, take physical photos of the screen remotely.

View the photos.

Large transmission of information in analog format.
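Back-of-envelope, the capacity of this analog channel is easy to estimate (the screen size, character encoding, and photo interval below are illustrative assumptions, not figures from the comment):

```python
# Rough throughput of the screen-photo channel sketched above.
# Every number here is an illustrative assumption.

chars_on_screen = 80 * 25      # assumed text-mode display: 80 cols x 25 rows
bits_per_char = 7              # printable ASCII per character
photo_interval_s = 10          # assumed: one photo every 10 seconds

bits_per_photo = chars_on_screen * bits_per_char
bits_per_second = bits_per_photo / photo_interval_s

print(f"{bits_per_photo} bits per photo, {bits_per_second:.0f} bits/s sustained")
```

Even under these modest assumptions the channel moves kilobits per second, which is the point: slow by network standards, but far from negligible.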

Answer by thefirechair00

Sell 180 visas per day over 6 hours between 9am-3pm for 361 days of the year. A new auction every two minutes for one visa. On the final day of the year, sell the remainder of the visas and take four days off until the new year.

Start Jan 1 and say Google bids $10000 x 10 visas. They win the first ten auctions over the first 20 minutes. The reference price is set at $10,000 for auction #11.

But $10,000 is too high for the next bidder who wants to pay $9000. No sale on auction #11.

Auction #12 starts with 2 visas for sale. You decrease the reference price by 2/180. 

So new min... (read more)
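The mechanism above can be sketched in a few lines. This is my reading of the comment, not a spec: one visa enters the lot per two-minute auction, a sale resets the reference price to the winning bid, and a failed auction lowers the reference price by unsold/180 of its current value (the "2/180" step); the function and parameter names are mine.

```python
DAILY_QUOTA = 180  # one visa per two-minute auction over six hours

def run_day(bids, first_ref=0.0):
    """Simulate one day of rolling visa auctions.

    bids: list of [max_price, quantity] buyers served in arrival order.
    first_ref=0.0 means the opening auction accepts any bid.
    Returns (sales, final_reference_price), where sales is a list of
    (price, quantity) fills.
    """
    ref, unsold, sales = first_ref, 0, []
    bids = [list(b) for b in bids]          # don't mutate the caller's data
    for _ in range(DAILY_QUOTA):
        unsold += 1                          # one new visa joins the lot
        while bids and bids[0][1] == 0:      # drop exhausted bidders
            bids.pop(0)
        if bids and bids[0][0] >= ref:
            price, take = bids[0][0], min(unsold, bids[0][1])
            sales.append((price, take))
            bids[0][1] -= take
            unsold -= take
            ref = price                      # last sale sets the reference
        else:
            ref *= 1 - unsold / DAILY_QUOTA  # the "2/180"-style decay step
    return sales, ref

# The example from the comment: Google bids $10,000 for 10 visas,
# the next bidder will only pay $9,000 for 5.
sales, ref = run_day([[10000, 10], [9000, 5]])
```

With these inputs Google wins the first ten auctions at $10,000, the reference price then decays through a run of failed auctions, and the $9,000 bidder clears the accumulated lot once the reference price drops below their limit, exactly the dynamic the comment describes.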