All of ljh2's Comments + Replies

Answer by ljh2

Maybe this is a bit too practical and not as "world-modeling-esque" as your question asks? But I don't strongly believe that raw intelligence is enough of a "credential" to rely on.

You might hear it as-- he/she's the smartest guy/gal I know, so you should trust them; we have insanely great talent at this company; they went to MIT so they're smart; they have a PhD so listen to them. I like to liken this to Mom-Dad bragging points. Any number of these things are really just proxies for "they're smart."

I used to personally believe this of myself-- I'm smart and ca... (read more)

2ChristianKl
Practical examples are certainly welcome. 

I do agree with you. What would have been a better incentive, or do you think the prior system was better? 

Personally, it actually motivated me to be a bit more active and finish my post. But I have also noticed a bit of "farming" for points (which was very much a consideration I'm sure, hence "good heart token").

I think the reason it appealed to me was that the feedback mechanism was tangible and (somewhat) immediate. Contrast that with, say, pure upvotes, which feel non-impactful to me. 

I think an incentive is good, but one that is less than pure dollar values and more than ego-filling-warm-fuzzy-feeling upvotes.

Sorry, what does "hansonpilled" mean? Does Robin Hanson have some insight on this as well?

Those two links are the same. But yeah I'm referring to the latter, w.r.t fuzzing of the synthesized devices.

"Fuzzing" as a concept is used, but not very "block-level" (some some exceptions, e.g. you likely know about UVM's support for random data streams, coming from an FPGA background). The fuzzing analogue in hardware might be called "constrained random verification".

Fuzzing, as I've heard it referenced, is more a piece of jargon from the software security world, the aforementioned AFL fuzzer being one example.

I do agree-- that traditional fuzzing isn't used in hardware is rather surprising to me.
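
For anyone coming from the software side, here's a minimal, purely illustrative sketch of what constrained random verification looks like in spirit: random stimulus under constraints, driven at the design, and checked against a reference model. (Plain Python with a made-up FIFO standing in for the DUT; a real flow would use SystemVerilog/UVM or similar.)

```python
import random
from collections import deque

class ToyFifo:
    """Toy model of a hardware FIFO, standing in for a real DUT."""
    def __init__(self, depth):
        self.depth = depth
        self.mem = deque()

    def push(self, data):
        if len(self.mem) < self.depth:
            self.mem.append(data)

    def pop(self):
        return self.mem.popleft() if self.mem else None

def constrained_random_test(seed=0, depth=8, n_ops=1000):
    """Constrained-random stimulus: random ops, but biased/constrained so we
    mostly stay in legal territory -- that's the 'constrained' part."""
    rng = random.Random(seed)
    dut = ToyFifo(depth)
    model = deque()  # golden reference model (the scoreboard)

    for _ in range(n_ops):
        # Constraint: bias toward pushes when near empty, pops when near full.
        do_push = rng.random() < (1.0 - len(model) / depth)
        if do_push and len(model) < depth:
            data = rng.randrange(256)
            dut.push(data)
            model.append(data)
        elif model:
            expected = model.popleft()
            got = dut.pop()
            # Checker: the DUT must track the reference model.
            assert got == expected, f"mismatch: got {got}, expected {expected}"
        # Invariant checked every "cycle".
        assert len(dut.mem) <= depth, "FIFO exceeded its depth"

    print("constrained-random test passed")

if __name__ == "__main__":
    constrained_random_test()
```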

2Gunnar_Zarncke
Oops, corrected. Didn't know about UVM. Maybe one reason fuzzing isn't used more is that it's harder to detect failure? You don't get page faults or exceptions or some such with hardware. What is your idea there?
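
For a concrete sense of what "detecting failure" can mean when there are no page faults or exceptions: the usual answer is assertions plus comparison against a golden reference model. A toy sketch, with a made-up adder that has a seeded bug; the mismatch against the model is the "crash signal":

```python
import random

def reference_add(a, b, width=8):
    """Golden model: what the hardware is supposed to compute."""
    return (a + b) & ((1 << width) - 1)

def buggy_dut_add(a, b, width=8):
    """Stand-in for the synthesized device, with a seeded bug: the carry term
    is combined with XOR instead of addition, so higher-order carries drop."""
    return ((a ^ b) ^ ((a & b) << 1)) & ((1 << width) - 1)

def differential_check(n_trials=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(n_trials):
        a, b = rng.randrange(256), rng.randrange(256)
        got, expected = buggy_dut_add(a, b), reference_add(a, b)
        if got != expected:
            # No exception, no page fault -- the mismatch itself is the failure.
            print(f"FAIL: a={a}, b={b}, dut={got}, model={expected}")
            return False
    print("no mismatches found")
    return True

if __name__ == "__main__":
    differential_check()
```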


Hi, I'm a lurker. I work on CPUs. This also motivated me to post!

This is a rather niche topic, but I want to express it, because I greatly enjoy seeing others ramble about their deep-work domain expertise, so maybe someone will find this interesting too? This is relatively similar to the concept behind the podcast [What's your problem?], in which engineers talk about ridiculously niche problems that are integral to their field.

Anyways-- here's my problem.

Fuzzing (maybe known as mutation based testing, or coverage directed verification, or 10 other different... (read more)
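
To make "fuzzing" concrete for folks outside the security world: the core loop is roughly AFL-style-- keep a corpus of inputs, mutate them, keep any mutant that reaches new coverage, and flag anything that trips a checker. A toy, purely illustrative Python version against a made-up "DUT" function (real hardware fuzzing would drive a simulator and use toggle/branch coverage instead of this hand-rolled coverage set):

```python
import random

def toy_dut(data: bytes):
    """Made-up stand-in for a design under test: returns the set of
    'coverage points' it hit, and raises on a (hidden) deep bug."""
    hit = set()
    if len(data) > 0 and data[0] == 0x42:
        hit.add("magic_header")
        if len(data) > 1 and data[1] > 0xF0:
            hit.add("big_length_field")
            if len(data) > 2 and data[2] == 0x00:
                raise RuntimeError("bug: zero-length packet after big header")
    return hit

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Dumb byte-level mutations, in the spirit of AFL's havoc stage."""
    out = bytearray(data)
    choice = rng.randrange(3)
    if choice == 0:                              # flip one bit
        out[rng.randrange(len(out))] ^= 1 << rng.randrange(8)
    elif choice == 1:                            # append a random byte
        out.append(rng.randrange(256))
    else:                                        # overwrite a random byte
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(iterations=200_000, seed=0):
    rng = random.Random(seed)
    corpus = [b"\x00\x00\x00\x00"]
    seen_coverage = set()
    for _ in range(iterations):
        candidate = mutate(rng.choice(corpus), rng)
        try:
            cov = toy_dut(candidate)
        except RuntimeError as e:
            print(f"crash found: {candidate!r} -> {e}")
            return candidate
        if not cov.issubset(seen_coverage):      # new coverage => keep it
            seen_coverage |= cov
            corpus.append(candidate)
    print("no crash found in budget")

if __name__ == "__main__":
    fuzz()
```

The point of the coverage feedback is that the loop "learns" its way past the magic header instead of having to guess all three bytes at once-- which is exactly what makes it interesting next to purely constrained-random stimulus.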

2Gunnar_Zarncke
I'm not sure whether you mean fuzzing of the synthesis tools (quick google here) or fuzzing of the synthesized devices (e.g., here; corrected). I worked with FPGAs a while back before even unit testing was established practice in software. I'm surprised that fuzzing isn't used much, esp. as it seems much faster so close to the HW. 
3ljh2
Oh I guess, while I'm on the topic of "bringing software paradigms into the hardware world", let me also talk about CirctIR briefly.

I also believe LLVM was a bit of a boon for the software security world, enabling some really cool symbolic execution and/or reverse engineering tools. CirctIR is an attempt to basically bring this "intermediate representation" idea to hardware.

This "generator for intermediate language representation", by the way, is similar to what Chisel currently does w.r.t. generating Verilog. But CirctIR is a little more generic, and frankly Chisel's generator (called FIRRTL) is annoying in many ways. Chris Lattner worked at SiFive for a bit, made these same observations, and spearheaded the CirctIR effort. Partially as a result, there are many similarities between FIRRTL and CirctIR (Chisel's goal is to make hardware design easier, while CirctIR's goal is to make designs portable and/or decouple these toolchain flows-- related goals, but still distinct).

I've wanted to play with this for some time as well, but the fuzzing work currently has me more interested, and it's something I'm trying to build an MVP for at work.
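
To make the "intermediate representation that gets lowered to Verilog" idea concrete, here's a deliberately tiny toy-- not FIRRTL or CIRCT syntax, just the shape of the idea: a structured in-memory description of a module plus one "lowering" pass that emits Verilog text.

```python
from dataclasses import dataclass, field

# A deliberately tiny "IR" for combinational modules: ports plus simple
# assignments. Real FIRRTL/CIRCT dialects are far richer; this is only the shape.

@dataclass
class Port:
    name: str
    width: int
    direction: str          # "input" or "output"

@dataclass
class Assign:
    dest: str
    expr: str               # expression over port names, already legal Verilog

@dataclass
class Module:
    name: str
    ports: list = field(default_factory=list)
    assigns: list = field(default_factory=list)

def lower_to_verilog(m: Module) -> str:
    """One 'lowering pass': walk the IR and emit Verilog text."""
    port_decls = ",\n  ".join(
        f"{p.direction} wire [{p.width - 1}:0] {p.name}" for p in m.ports
    )
    body = "\n".join(f"  assign {a.dest} = {a.expr};" for a in m.assigns)
    return f"module {m.name} (\n  {port_decls}\n);\n{body}\nendmodule\n"

if __name__ == "__main__":
    adder = Module(
        name="toy_adder",
        ports=[Port("a", 8, "input"), Port("b", 8, "input"),
               Port("sum", 9, "output")],
        assigns=[Assign("sum", "a + b")],
    )
    print(lower_to_verilog(adder))
```

Real IRs carry types, clock/reset domains, and many dialects and passes between the front-end and the emitted netlist; the value is that every tool along the way can speak the same structured format instead of re-parsing Verilog.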

I'm unfamiliar with the Berkeley area, is there a recommended parking area/garage?

1neotoky01
There are usually spots available south of the Berkeley campus, along the streets that run north/south. Ellsworth, Dana, and Fulton are my go-tos, and it's good to check the streets that intersect them. Here's an example address of what I mean: 2339 Ellsworth St, Berkeley, CA 94704. From there it's a 10 min walk to the Life Sciences building.
  1. Definitely not in the next 10 years. In some sense, that's what formal verification is all about. There's progress, but from my perspective, the growth is very linear. (For a toy example of the kind of check a formal tool does, see the sketch after this excerpt.)
    The tools that I have seen (e.g. out of the RISC-V Summit, or DVCon) are difficult to adopt, and there's a lot of inertia you have to overcome since many big Semi companies already have their own custom flows built up over decades.
    I think it'll take a young plucky startup to adopt and push for the usage of these tools-- but even then, you need the talent to learn these tools, and frankly hardw
... (read more)
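
For readers who haven't touched formal tools: at the block level they mostly boil down to asking a solver whether any input can violate a property, rather than sampling inputs. A minimal sketch using the z3 SMT solver's Python bindings (pip install z3-solver); a real flow would use SystemVerilog assertions and a commercial model checker, and the "optimized rewrite" here is just an illustrative stand-in:

```python
from z3 import BitVec, Solver, sat

def check_equivalence(width=8):
    """Ask the solver for *any* input where the two implementations disagree.
    'unsat' means they are equivalent for every one of the 2**(2*width) inputs."""
    a, b = BitVec("a", width), BitVec("b", width)
    reference = a + b                        # the spec
    rewritten = (a ^ b) + ((a & b) << 1)     # the "optimized" logic under test
    s = Solver()
    s.add(reference != rewritten)            # look for a counterexample
    if s.check() == sat:
        print("counterexample:", s.model())
        return False
    print("equivalent for all inputs (proved, not sampled)")
    return True

if __name__ == "__main__":
    check_equivalence()
```

The contrast with a fuzzing loop is exactly the exhaustive-vs-sampled trade-off: the solver covers the whole input space, but only for properties and block sizes it can actually chew through.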

I thought I wrote an answer to this. Turns out I didn't. Also, I am a horrific procrastinator. 

  1. In some sense, I'd agree with this synthesis. 
    I say some sense, because the other bottleneck that lots of chip designs have is verification. Somebody has to test the new crazy shit a designer might create, right? To go back to our city planner analogy-- sure, perhaps you create the most optimal connections between buildings. But what if the designer put the doors on the roof, because it's the fastest way down?
    Yes, designs can be come up with faster, and
... (read more)
2Daniel Kokotajlo
Thanks! As before, this was helpful & I have some follow-up questions. :) Feel free to not reply if you don't want to.

1. Can verification be automated too, in the next 10 years?
2. Quantitatively, about how much time + money does a good version of this automated chip design save? E.g. "It normally takes 1 year to design a chip and 2 years to actually scale up production; this tech turns that 1 year into 1 month (when you include verification), for an overall time savings of 33%. As for cost, design is a small fraction of the cost (even a research team of hundreds for a year is nothing compared to the cost of a manufacturing line or whatever) so the effect is negligible."
3. y = 2? That's way lower y than I expected, especially considering that you "rebuff my original point that this isn't that big of a deal." A 2x improvement in 3 years is NOT a big deal, right? Isn't that slightly slower than the historical rate of progress from e.g. Moore's law etc.? Or are you saying it's going to be a 2x improvement on top of the regular progress from other sources? Oh... maybe you are specifically talking about speed improvements rather than all-things-considered cost to train a model of a given size on a given dataset? It's the latter that I'm interested in, I probably misspoke.
4. What is post-silicon fabrication? When I google it, it redirects to "post-silicon validation." If creating the design and verifying it is the barrier to entry, then won't this AI tech help reduce the barrier to entry, since it automates the design part? I guess I just don't understand your point 3.
5. "Thus, suppose you completely eliminate post-silicon fabrication times. Where would this extra time go? I highly doubt we would change our society-accepted cadence of hardware rotations. Most definitely, it would go right back into creating new designs-- human brains." I'm particularly keen to hear what you mean by this.
Answer by ljh2

Just made this account to answer this. Source: I've worked in physical design/VLSI and CPU verification, and pretty regularly deal with RTL.

TL;DR - You're right-- it's not a big deal, but it simultaneously means more and less than you think.

The Problem

Jump to "What It Means" if you already understand the problem.

First, let me talk about the purpose of floorplanning. The authors mention it a little bit, but it's worth repeating.

Placement optimizations of this form appear in a wide range of science and engineering applications, including hardware desi

... (read more)
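
To give a concrete flavor of what "placement optimization" is optimizing, here's a toy computation of half-perimeter wirelength (HPWL), a standard proxy cost in floorplanning/placement. The cells, coordinates, and netlist below are made up for illustration, not taken from the paper:

```python
# Toy half-perimeter wirelength (HPWL): a standard proxy for routed wire
# length in placement/floorplanning. Each net connects a set of cells; its
# cost is the half-perimeter of the bounding box of those cells' placements.

def hpwl(placement, nets):
    total = 0.0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

if __name__ == "__main__":
    # Made-up placements: cell name -> (x, y) location on the die.
    placement_a = {"alu": (0, 0), "regfile": (1, 0), "decode": (0, 1), "lsu": (5, 5)}
    placement_b = {"alu": (0, 0), "regfile": (1, 0), "decode": (0, 1), "lsu": (1, 1)}
    # Made-up netlist: each net lists the cells it connects.
    nets = [("alu", "regfile"), ("alu", "decode"), ("regfile", "lsu")]
    print("placement A cost:", hpwl(placement_a, nets))  # worse: lsu is far away
    print("placement B cost:", hpwl(placement_b, nets))  # better: tighter bounding boxes
```

The floorplanner's job-- whether it's a human, an annealer, or an RL agent-- is to push this kind of cost down while also respecting density, congestion, and timing constraints, which is why the metric itself is only a proxy.
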
9Daniel Kokotajlo
Awesome, thanks! And welcome to LW! I found this very helpful and now have some follow-up questions if you don't mind. :)

1. How does this square with Zac's answer below? It on the surface seems to contradict what you say; after all, it proposes 10x-1000x improvements to AI stuff whereas you say it won't even be 1%! I think I can see a way that your two answers can be interpreted as consistent, however: You identify the main benefit of this tech as reducing the clock time it takes for engineers to come up with a new good chip design. So even if the new design is only 1% better than the design the engineers would have come up with, if it happens a lot faster, that's a big deal. Why is it a big deal? Well, as Zac said, it means the latest AI architectures can be quickly supplemented by custom chips, and in general custom chips provide 10x - 1000x speedups. Would you agree with this synthesis?
2. I'd be interested in your best guess for what the median X's and Y's in this sentence are: "In about X years, we'll be in a regime where the latest AI models are run on specialized hardware that provides a factor-of-Y speedup over today's hardware."
3. ETA: Maybe another big implication of this technology is that it'll lower the barrier to entry for new chipmakers? Like, maybe 5 years from now there'll be off-the-shelf AI techniques that let people design cutting-edge new chips, and so China and Russia and India and everyone will have their own budding chip industry supported by generous government subsidies. Or maybe not -- maybe most of the barriers to entry have to do with manufacturing talent rather than design talent?

Mod here, I put a table in your comment.

(Tables aren't in comment editors right now, I made it in the post editor and copied it in.)

This is a great comment! Thank you for writing it!