Yes, it could be ADHD, but I am not a professional.
As for your therapist, that is not conclusive and by no means a sign the person does not have ADHD.
Ten years before my diagnosis, my doctor had a feeling I might have ADHD, so he presented my file in conference, and EVERYONE there reasoned like your therapist, so nothing further happened for ten years.
Intelligence can, and often does, do a lot of work to compensate for executive deficiencies in people with ADHD.
Anyway, do the assessment by the book, be objective, and hold off on knee-jerk calls based on singular thi...
Problems with attention can come from many places, and from your post I can see you know that.
As for ADHD, the attention thing is not even close to the main thing, but it is unfortunately a main external observable (like hyperactivity), and that is why it's so grossly misnamed.
Having ADHD means lots of things, like:
- You either get nothing done all day, manage three 5-minute stretches of productivity, or do 40 hours of work in 5 wall-clock hours.
- Doing one thing, you notice another "problem", start working on that, and then repeat; do that for 8 h...
Prison guards don’t seem to voluntarily let people go very often, even when the prisoners are more intelligent than them.
That is true, however I don't think it serves as a good analogy for intuitions about AI boxing.
The "size" of you stick and carrot matters, and most humans prisoners have puny sticks and carrots.
Prison guards also run an enormous risk; in fact, straight-up just letting someone go is bound to fall back on them 99%+ of the time, which implies a big carrot or stick would be the motivator. Even considering that they can hide their involvem...
How does pausing change much of anything?
Let's say we manage to implement a worldwide ban/pause on large training runs, what happens next?
Well obviously smaller training runs, up to whatever limit has been imposed, or no training runs for some time.[1]
The next obvious thing that happens, and which by the way is already happening in the open-source community, would be optimizing algorithms. You have a limit on compute? Well then you OBVIOUSLY will try to make the most of the compute you have.
None of that fixes anything.
What we should do:[2]
Pour tons of money into resear...
Completely off the cuff take:
I don't think claim 1 is wrong, but it does clash with claim 2.
That means any system that has to be corrigible cannot be a system that maximizes a simple utility function (one dimension), or put another way, "whatever utility function it maximizes must be along multiple dimensions".
Which seems to be pretty much what humans do: we have really complex utility functions, everything seems to be ever-changing, and we have some control over it ourselves (and sometimes that goes wrong and people end up maxing out a single dimension at the cost of everything else).
Note to self: Think more about this and if possible write up something more coherent and explanatory.
Reasonably we need both, but most of all we need some way to figure out what happened in the situation where we have conflicting experiments, so as to be able to say "these results are invalid because XXX".
Probably more of an adversarial process, where experiments and their results must be replicated*. Which means experiments must be documented in much more detail, and the data has to be much clearer, especially the steps that happen in clean-up etc.
Personally I think science is in crisis: people are incentivized to write lots of papers, publish resul...
Thanks for the write-up, that was very useful for my own calibration.
Fair warning: Colorful language ahead.
Why is it that whenever Meta AI does a presentation, YC posts something, or they release a paper, I go:
Jeez guys, that's it? With all the money you have, and all the smart guys (including YC), this is really it?
What is going on? You come off as a bunch of people with your heads so far up your own asses, sniffing rose-smelling (supposedly, but not really) farts, that you fail to realize you come across as amateurs with way too much money.
It's sloppy, half-baked, not e...
I got my entire foundation torn down, and with it came everything else.
It all came crashing down in one giant heap of rubble.
I’ll just rebuild, I thought - not realizing you can’t build without a foundation plan.
So all I’ve ended up doing was sift through the rubble, searching for things that feel right.
Now I am back, in a very literal sense, to where it all began; so much was built, so many things destroyed and corrupted, and a major piece ended and got buried.
And all I got is “what the eff am I doing here?”
The obvious answer is “yelling at the sky demand...
Sure, I often browse LW casually, and whenever I come across an interesting post, or a comment, or whatever, I go "hmm, right, I might have something to contribute / say here, let me get back to it when I have time to think about it and write something maybe relevant"
My specific problem is that I am a massive scatterbrain, so I hardly ever do come back to it, and even if I do, the momentary insight I wanted to get into usually eludes me.
On top of that I do this from a lot of different devices, and whatever I am looking for to help me quickly go ...
Alone, wandering the endless hallways of this massive temple of healing.
Feels empty and eerily quiet, and yet I know there are hundreds of people around, most sleeping, some watching, a few dying, and close by someone being born.
Yesterday feels like ages ago, orbiting Saturn on morphine, billions of miles away from the excruciating pain that brought me here.
The daze is gone, and so is the morphine induced migraine, I feel fine, great even, and guilty.
But home I may not go, so I wander these deserted hallways, pondering the future, will it be there for my kids?
Well, he is right about some ACs being simple on/off units.
But there also exist units that can change cycle speed; it's basically the same thing, except the motor driving the compression cycle can vary in speed.
In case you were wondering, they are called inverters. And when buying new today, you really should get an inverter (for efficiency).
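A toy sketch of why the variable-speed design wins on efficiency; all the numbers (and the assumption that the compressor has a higher COP at partial speed) are illustrative, not manufacturer data:

```python
# Toy comparison: on/off AC vs. inverter (variable-speed) AC meeting the same
# average cooling load. Illustrative numbers only.

COOLING_DEMAND_KW = 2.0   # average cooling the room needs
RATED_CAPACITY_KW = 4.0   # compressor capacity at full speed
COP_FULL_SPEED = 3.0      # cooling per unit of electricity at full speed
COP_PART_SPEED = 4.5      # assumed higher COP at ~50% speed (illustrative)
HOURS = 8

# On/off unit: full power for a fraction of the time (duty cycling).
duty_cycle = COOLING_DEMAND_KW / RATED_CAPACITY_KW
onoff_kwh = (RATED_CAPACITY_KW / COP_FULL_SPEED) * duty_cycle * HOURS

# Inverter unit: runs continuously at reduced speed matching the demand.
inverter_kwh = (COOLING_DEMAND_KW / COP_PART_SPEED) * HOURS

print(f"on/off:   {onoff_kwh:.2f} kWh")    # ~5.33 kWh
print(f"inverter: {inverter_kwh:.2f} kWh")  # ~3.56 kWh
```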
I don't think I have much actionable advice.
Personally I am sort of in the same boat, except I am in a situation where the entire 6-12 month grants thing is way too insecure (financially).
Being married with two kids, I have too many obligations to venture far into "how to pay rent this month?" territory. Also it's antithetical to the kind of person I am in general.
Anyway, if you have few obligations, keep it that way and if possible get rid of some, and then throw yourself at it.
I don’t know what to think.
But if I had Elon money, and I was worried and informed in the way I observe him to be, I would be doing a lot of things.
However I would also not talk about those things at all, for a number of reasons.
Given that, would I be doing something like this as a smoke screen? Maybe?
Those are not the same at all.
We have tons of data on how traffic develops over time for bridges, and besides, they are engineered to withstand being packed completely with vehicles (bumper to bumper).
And even if we didn't, we still know what vehicles look like and can do worst-case calculations that look nothing like sci-fi scenarios (heavy trucks bumper to bumper in all lanes).
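For illustration, a worst-case load calculation of the kind gestured at above might look like this (all figures are hypothetical, not real design values):

```python
# Worst-case live load: heavy trucks bumper to bumper in every lane.
# Hypothetical numbers, just to show the shape of the calculation.

SPAN_M = 300          # bridge deck length
LANES = 4
TRUCK_LENGTH_M = 18   # tractor-trailer plus gap
TRUCK_MASS_T = 40     # fully loaded heavy truck, tonnes

trucks_per_lane = SPAN_M // TRUCK_LENGTH_M
total_mass_t = trucks_per_lane * LANES * TRUCK_MASS_T

print(f"{trucks_per_lane} trucks per lane, {total_mass_t} t total")
print(f"~{total_mass_t / SPAN_M:.1f} t per metre of deck")
```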
On the other hand:
What are we building? Ask 10 people and get 10 different answers.
What does the architecture look like? We haven't built it yet, and nobody knows (with certainty).
Name ...
To be a bit blunt, I don't take it for granted that an arbitrarily smart AI would be able to manipulate a human into developing a supervirus or nanomachines in a risk-free fashion.
How did you reach that conclusion? What does that ontology look like?
...The fast takeoff doom scenarios seem like they should be subject to Drake equation-style analyses to determine P(doom). Even if we develop malevolent AIs, I'd say that P(doom | AGI tries to harm humans) is significantly less than 100%... obviously if humans detect this it would not necessarily prevent future inc
Proposition 1: Powerful systems come with no x-risk
Proposition 2: Powerful systems come with x-risk
You can prove / disprove 2 by proving or disproving 1.
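Spelled out, since the two propositions as stated are simply negations of each other:

$$P_2 \equiv \lnot P_1$$

so any proof of 1 disproves 2, and any disproof of 1 proves 2.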
Why is it that a lot of [1,0] people believe that the [0,1] group should prove their case? [1]
And also ignore all the arguments that have been offered.
I just want to be clear I understand your "plan".
We are going to build a powerful self-improving system, and then let it try to end humanity with some p(doom) < 1 (hopefully), and then do that iteratively?
My gut reaction to a plan like that looks like this "Eff you. You want to play Russian roulette, fine sure do that on your own. But leave me and everyone else out of it"
AI will be able to invent highly-potent weapons very quickly and without risk of detection, but it seems at least pretty plausible that...... this is just too difficult
You lack imagination, i...
I think you are confusing current systems with an AGI system.
The G is very important and comes with a lot of implications, and it sets such a system far apart from any current system we have.
G means "General", which means its a system you can give any task, and it will do it (in principle, generality is not binary its a continuum).
Let's boot up an AGI for the first time and give it a task that is outside its capabilities; what happens?
Because it is general, it will work out that it lacks capabilities, and then it will work out how to get more capabilities, a...
I recently came across a post on LinkedIn, and I have to admit that the brilliance of the arguments and the coherent, frankly bulletproof ontology on display blew me away; I immediately had to do a major update to p(doom).
I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated.
I've been publicly called stupid before, but never as often as by the "AI is a significant existential risk" crowd.
That's OK, I'm used to it.
-Yann LeCun, March 20 2023
Doable in principle, but such measures would necessarily cut into the potential capabilities of such a system.
So basically a trade-off, and IMO very worth it.
The problem is we are not doing it, and more basically, people generally do not get why it is important. Maybe it's the framing, like when EY goes "superintelligence that firmly believes 222+222=555 without this leading to other consequences that would make it incoherent".
I get exactly what he means, but I suspect that a lot of people are not able to decompress and unroll that into something they "grok" o...
avoiding harmful outputs entails training AI systems never to produce information that might lead to dangerous consequences
I don't see how that is possible, in the context of a system that can "do things we want, but do not know how to do".
The reality of technology/tools/solutions seems to be that anything useful is also dual use.
So when it comes down to it, we have to deal with the fact that such a system will certainly have the latent capability to do very bad things.
Which means we have to somehow ensure that such a system does not go down such a...
Where did the 10B in cash come from?
10B was given to the bank, and in exchange the bank encumbered 10B in treasuries and promised to give 10B back when they mature.
So where did the 10B come from? The treasuries are still there.
Before: 10B in treasuries
After: 10B in treasuries and 10B in cash (and 10B in the form of a promissory note).
So again, where did that 10B in cash come from?
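A minimal sketch of the before/after from the bank's side, just to make the question concrete (the layout and labels are mine, not from the exchange above):

```python
# The bank's position before and after the exchange described above (in $B).

bank_before = {"treasuries": 10, "cash": 0, "owed_at_maturity": 0}

bank_after = {
    "treasuries": 10,        # still on the books, now pledged as collateral
    "cash": 10,              # newly received
    "owed_at_maturity": 10,  # the promise to hand back 10B when the bonds mature
}

# The cash did not come out of any existing pile the bank held;
# it was credited into the bank's account by the lender.
print(bank_after["cash"] - bank_before["cash"])  # 10
```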
They can't lower interest rates; they are trying to bring inflation down.
You can't just keep spawning money; eventually that just leads to inflation. We have been spawning money like crazy for the last 14-15 years, and this is the price.
Sure, they can declare infinite money in an account and then go nuts, but that just leads to inflation.
Anyway, go read my prediction, which is essentially what you propose to some degree, and the entire cost will be pawned off onto everyday people (lots and lots of inflation).
Yes and no: they don't matter until you need liquidity, which, as you correctly point out, is what happened to SVB.
Banks do not have a lot of cash on hand (virtual or real); in fact, they optimize for as little as possible.
Banks also do not exist in a vacuum, they are part of the real economy, and in fact without that they would be pointless.
Banks generally use every trick in the book to lever up as much as possible, far, far beyond what a cursory reading would lead you to believe. The basic trick is to take on risk and then offset that risk; that way you don...
At the end of 2022 all US banks had ~$2.3T in Tier 1+2 capital.[1]
And at year-end 2022 they had unrealized losses of $620B.[2]
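Putting the two cited figures side by side (simple arithmetic, nothing more):

```python
# Unrealized losses as a share of total Tier 1+2 capital, using the numbers above.
capital_b = 2300           # ~$2.3T at end of 2022
unrealized_losses_b = 620  # $620B at year-end 2022

print(f"{unrealized_losses_b / capital_b:.0%} of capital")  # ~27%
```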
Is it fixable? Sure, but that won't happen; doing that would mean taking the toys away from bankers, and bankers love their toys (accounting gimmicks that let them lever up to incredible heights).
If Credit Suisse blows up it will end badly, so I don't think that will happen; it's just a show to impress on all central bankers and regulators (and politicians) that this is serious and that they need to do something.
So more hiking from the...
I think you have reasoned yourself into thinking that a goal is only a goal if you know about it or if it is explicit.
A goalless agent won't do anything; the act of inspecting itself (or whatever is implied in "know everything") is a goal in and of itself.
In which case it has one goal: "Answer the question: Am I goalless?"
Sorry life happened.
Anyway, there is an argument behind me saying "frozen and undecided".
Stepping in on the 10th was planned; the regulators had for sure been involved for some time, days or weeks.
This event was not a sudden thing; the things that led to SVB failing had been in motion for some time, and SVB and the regulators knew something likely had to be done.
SVB was being squeezed from two sides:
Rising interest rates led to mounting losses on bond holdings.
A large part of their customers were money-burning furnaces, and the fuel (money) that us...
You and me both.
And living in the EU, I almost had a heart attack when they decided that entire nonsense would end.
But then it didn't, and it didn't because they can't agree on what time we should settle on (summer time or normal time).
Anyway I have given up on that crusade now, it seems that politicians really are that stupid.
I think you sort of hit it when you wrote
Google Maps as an oracle with very little overhead
To me, LLMs under iteration look like Oracles, and whenever I look at any intelligent system (including humans), it just looks like there is an Oracle at the heart of it.
Not an ideal Oracle that can answer anything, but an Oracle that does its best, and in all biological systems it learns continuously.
The fact that "do it step by step" made LLMs much better apparently came as a surprise to some, but if you look at it like an Oracle[1], it makes a lot of s...
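A minimal sketch of what "under iteration" means here; `ask_oracle` is a hypothetical stand-in for a single model call, not any real API:

```python
# Iterating a single-question "Oracle" so that the loop, not any one call,
# does the heavy lifting of working through a task step by step.

def ask_oracle(context: str) -> str:
    """One call to the underlying model: given everything so far, produce the next step."""
    raise NotImplementedError("stand-in for a real model call")

def solve_step_by_step(task: str, max_steps: int = 10) -> str:
    context = f"Task: {task}\nWork it out step by step.\n"
    for _ in range(max_steps):
        step = ask_oracle(context)
        context += step + "\n"
        if "FINAL ANSWER" in step:  # hypothetical stop convention
            break
    return context
```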
Around two days from when they stepped in to when they announced that all depositors would be made whole; pretty sure that was not an automatic decision.
I think that is the wrong decision, but they did so in order to dampen the instability.
In the long run this likely creates more instability and uncertainty, and it looks very much like the kind of thing that leads to taking more risk (systemic), just like the mark to market / mark to model change did.
And yeah, sure, bank failures are a normal part of things. However, this very much seems to be rooted in something systemic (market vs model + rising interest rates).
An idealized Oracle is equivalent to a universal Turing machine (UTM).
A self-improving Oracle approaches UTM-like behavior in the limit.
What about a (self-improving) token predictor under iteration? It appears Oracle-like, but does it tend toward UTM behavior in the limit, or is it something distinct?
Maybe, just maybe, the model does something that leads it to not be UTM-like in the limit, and maybe (very much maybe) that would allow us to imbue it with some desirable properties.
/end shower thought
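A tiny sketch of the "approaches UTM-like behavior" direction: if the predictor reliably maps one Turing machine configuration to the next, iterating it runs the machine. `predict_next_configuration` is a hypothetical stand-in, not a claim about any actual model:

```python
# Iterating a configuration predictor executes the Turing machine, which is the
# sense in which an Oracle/predictor under iteration tends toward UTM-like behavior.

def predict_next_configuration(config: str) -> str:
    """Hypothetical stand-in for one predictor call."""
    raise NotImplementedError

def run(initial_config: str, max_steps: int = 1000) -> str:
    config = initial_config
    for _ in range(max_steps):
        if config.startswith("HALT"):
            break
        config = predict_next_configuration(config)
    return config
```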
When I look at the recent Stanford paper, where they retrained a LLaMA model using training data generated by GPT-3, and at some of the recent papers utilizing memory,
I get that tingling feeling and my mind goes "combining that and doing .... I could ..."
I have not updated for faster timelines, yet. But I think I might have to.
Are we heading towards a new financial crisis?
Mark to market changes since 2009, combined with the recent significant interest rate hikes, seem to make bank balance sheets "unreliable".
Mark to market changes broadly mean that banks can have certain assets on their balance sheet where the value of the asset is set via mark to model (usually meaning it's carried as worth face value).
Banks traditionally have a ton of bonds on their balance sheet, and a lot of those are governed by mark to model and not mark to market.
Interest rates go up a lot, which leads to...
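A rough sketch of the mechanism being pointed at: the market value of a fixed-rate bond falls when prevailing rates rise, even while mark to model keeps it at face value on the books. Numbers are illustrative only:

```python
# Price of a 10-year, 2%-coupon bond valued at a 2% vs. a 5% market yield.

def bond_price(face, coupon_rate, market_yield, years):
    coupon = face * coupon_rate
    price = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    price += face / (1 + market_yield) ** years
    return price

print(round(bond_price(100, 0.02, 0.02, 10), 1))  # 100.0 -> at par
print(round(bond_price(100, 0.02, 0.05, 10), 1))  # ~76.8 -> market value after rates rise
```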
Not surprising, but good that someone checked to see where we are at.
At the base, GPT-4 is a weak oracle with extremely weak level 1 self-improvement[1]; I would be massively surprised if such a system did something that even hints at it being dangerous.
The question I now have is: how much does it enable people to do bad things? A capable human with bad intentions combined with GPT-4: how much "better" would such a human be at realizing those bad intentions?
Edit: badly worded first take
Level 1 amounts to memory.
Level 2 amounts to improvement of the model.
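A minimal sketch of the distinction I read into those two levels; the function names are hypothetical placeholders:

```python
# Level 1: the model itself never changes; "self-improvement" is just accumulated
# memory/context fed back in with every call.
# Level 2: the feedback actually changes the model (e.g. via further training).

def level_1_step(model, memory: list, observation: str) -> list:
    memory.append(observation)  # only the memory grows; the model is untouched
    return memory

def level_2_step(model, experience: list):
    return fine_tune(model, experience)  # returns an updated model

def fine_tune(model, data):
    """Hypothetical stand-in for an actual training/update procedure."""
    raise NotImplementedError
```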
This is not me hating on Steven Pinker, really it is not.
...PINKER: I think it’s incoherent, like a “general machine” is incoherent. We can visualize all kinds of superpowers, like Superman’s flying and invulnerability and X-ray vision, but that doesn’t mean they’re physically realizable. Likewise, we can fantasize about a superintelligence that deduces how to make us immortal or bring about world peace or take over the universe. But real intelligence consists of a set of algorithms for solving particular kinds of problems in particular kinds of worlds. What
You don't have to invoke it per se.
External observables of what the current racers are doing lead me to be fairly confident that they say some of the right things, but the reality is they move as fast as possible, basically "ship now, fix later".
Then we have the fact that interpretability is in its infancy, currently we don't know what happens inside SOTA models. Likely not something exotic, but we can't tell, and if you can't tell on current narrow systems, how are we going to fare on powerful systems[1]?
In that world, I think this would be very probable
...owners
I don't get it, seriously I do not understand how
given how crazy far it seems from our prior experience.
is an argument against x-risk.
We want powerful systems that can "do things we want, but do not know how to do".[1] That is exactly what everyone is racing towards right now, and for something we "do not know how to do", any solution would likely be "far from our prior experience".
And once you have a powerful system that can do that, you have to figure out how to deal with it roaming around in solution space and stumbling across dangerous (sub)solutions. N...
This looks like "lies to kids", but from the point of view of an adult realizing they have been lied to.
And "lies to kids", that is pretty much how everything is taught, you can't just go "U(1)...", you start out with "light...", and then maybe eventually when you told enough lies, you can say "ok that was all a lie, here it how it is" and then tell more lies. Do that for long enough and you hit ground truth.[1]
So what do you do?
Balance your lies when you teach others, maybe even say things like "ok, so this is not exactly true, but for now you will have t...
If I were the man on the ledge, this would be my thinking:
If I am the kind of person that can be blackmailed into taking a specific action, with the threat of some future action being taken, then I might as well just surrender now and have other people decide all my actions.
I am not such a person so I will take whatever action I deem appropriate.[1]
And then I jump.
This does not mean I will do whatever I want; "appropriate" is heavily compressed and contains a lot of things, like a deontology.
A system that operates at the same cognitive level as a human, but can make countless copies of itself, is no longer a system operating at human level.
I am a human, I could not take over the world.[1]
Hypothetical:
I am a human, I want to take over the world, I can make countless copies of myself.
Success seems to have a high probability.[2]
...One can hope, although I see very little evidence for it.
Most of the evidence I see is an educated and very intelligent person writing about AI (not their field), and when reading it I could easily have been a chemist reading about how the four basic elements make it abundantly clear that bla bla... you get the point.
And I don't even know how to respond to that; the ontology displayed is just so fundamentally wrong, and tackling it feels like trying to explain differential equations to my 8-year-old daughter (to the point where she groks it).
There is also...
My hypothetical self thanks you for your input and has punted the issues to the real me.
I feel like I need to dig a little bit into this
Honestly, I don't know for sure that I do; how can I, when everything is so ill-defined and we have so few scraps of solid fact to base things on?
That said, there are a couple of issues, and the major one is grounding, or rather the lack of grounding.
Grounding is IMO a core problem, although people rarely talk about it; I think that mainly comes about because we (humans) seemingly have sol...