thefirechair
thefirechair has not written any posts yet.
There's no proof that superintelligence is even possible. The idea of a self-updating AI that will rewrite itself to godlike intelligence isn't supported by evidence.
There is just so much hand-wavy magical thinking going on in regard to the supposed superintelligent AI takeover.
The fact is that manufacturing networks are damn fragile. Power networks too. Some bad AI is still limited by these physical things. Oh, it's going to start making its own drones? Cool, so it's running thirty mines and various machine shops, plus refining the oil and all the rest of the networks required just to make a spark plug?
One tsunami in the RAM manufacturing district and that AI is crippled. Not to mention that so many pieces of information do not exist online. There are many things without patent. Many processes opaque.
We do in fact have multiple tries to get AI "right".
We need to stop giving future AI magical powers. It cannot suddenly crack all cryptography; that's not mathematically possible.
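To put rough numbers behind that, here's a back-of-envelope sketch of brute-forcing a single 128-bit key. The guess rate of 10^18 per second is a deliberately generous assumption, far beyond any real hardware:

```python
# Back-of-envelope: expected time to brute-force a 128-bit key.
# Assumes a hypothetical attacker making 10^18 guesses per second,
# which is orders of magnitude beyond any known hardware.
KEYSPACE = 2 ** 128
GUESSES_PER_SECOND = 10 ** 18  # generous hypothetical rate
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# On average you find the key halfway through the keyspace.
expected_years = (KEYSPACE / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"{expected_years:.2e} years")  # on the order of 10^12 years
```

Even under that absurdly optimistic assumption, the expected search time is around five trillion years, hundreds of times the age of the universe. Raw intelligence doesn't change that arithmetic; only a mathematical break of the cipher itself would.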
This place uses upvote/downvote mechanics, and authors of posts can ban commenters from writing there... which, man, if you want to promote groupthink and all kinds of in-group hidden rules and out-group forbidden ideas, that's how you'd do it.
You can see it at work: when a post is upvoted, is it because it's well-written and useful, or because it's repeating the groupthink? When a post is downvoted, is it because it contains forbidden ideas?
When you talk about making a new faction - that is what this place is. And naming it Rationalists says something very direct to those who don't agree - they're Irrationalists.
Perhaps looking to other communities is the useful path forward....
Google lesswrong criticism and you'll find them easily enough.
I agree. When you look up criticism of LessWrong you find plenty of very clear, pointed, and largely correct criticisms.
I used time-travel as my example because I didn't want to upset people, but really any in-group/out-group forum holding some wild ideas would have sufficed. This isn't at Flat Earther levels yet, but it's easy to see the similarities.
There's the unspoken things you must not say otherwise you'll be pummeled, ignored or fought. Blatantly obvious vast holes are routinely ignored. A downvote mechanism works to push comments down.
Talking about these problems just invites the people inside them to try to draw you in with the same flawed arguments.
Saying, hey, take three big steps back...
You have no atomic-level control over that. You can't grow a cell at will, or kill one, or release a hormone. This is what I'm referring to. No being that exists has this level of control. We all operate far above the physical reality of our bodies.
But we suggest an AI will have atomic-level control, or that control over its code is the same as control over its substrate.
Total control would be you sitting there directing cells to grow or die or change at will.
No AI will be there modifying the circuitry it runs on down at the atomic level.
I'd suggest there may be an upper bound to intelligence, because intelligence is bound by time and any AI lives in time like us. It can't gather information from the environment any faster. It cannot automatically gather all the right information. It cannot know what it does not know.
The system of information, brain propagation, cellular change runs at a certain speed for us. We cannot know if it is even possible to run faster.
One of the magical-thinking criticisms I have of AI is that it is suddenly treated as virtually omniscient. Is that AI observing mold cultures and about to discover penicillin? Is it doing some extremely narrow gut-bacteria experiment to reveal the source of some disease? No, it's not. Because there are infinite experiments to run. It cannot know what it does not know. Some things are Petri dishes and long periods of time in the physical world, and require a level of observation the AI may not possess.
The assumption there is that the faster the hardware underneath, the faster the sentience running on it will be. But this isn't supported by evidence. We haven't produced a sentient AI, so we can't know whether this is true or not.
For all we know, there may be an upper limit to "thinking" based on neural propagation of information. To understand and integrate a concept requires change, and that change may move slowly across the mind and underlying hardware.
Humans have sleep, for example, to help us learn and retain information.
As for self-modification: we don't have atomic-level control over the meat we run on. A program or model doesn't have atomic-level control...
No being has cellular-level control. You can't direct brain cells to grow or hormones to release, etc. This is what I mean when I say it does not exist in nature. The kind of self-modification people claim AI will have does not exist anywhere.
Teleportation doesn't exist, so we shouldn't make arguments that depend on teleportation.
You have no control over your body down at the cellular level. No deliberate conscious control. No being does. This is what I mean by "does not exist in nature". Like teleportation.
The CCP once ran a campaign asking for criticism and then purged everyone who engaged.
I'd be super wary of participating in threads such as this one. A year ago I participated in a similar thread and got hit with the rate-limit ban.
If you talk about the very valid criticisms of LessWrong (which you can only find off LessWrong), then expect to be rate-limited.
If you talk about some of the nutty things the creator of this site has said, which may as well be "AI will use Avada Kedavra", then expect to be rate-limited.
I find it really sad, honestly. The groupthink here is restrictive and bound up by verbose arguments...