Launch HN: Jazzberry (YC X25) – AI agent for finding bugs

20 points by MarcoDewey 3 hours ago

Hey HN! We are building Jazzberry (https://jazzberry.ai), an AI bug finder that automatically tests your code when a pull request is opened, to find and flag real bugs before the PR is merged.

Here’s a demo video: https://www.youtube.com/watch?v=L6ZTu86qK8U#t=7

Here’s how it works:

When a PR is opened, Jazzberry clones the repo into a secure sandbox. The diff from the PR is provided to the AI agent in its context window. To interact with the rest of the codebase, the agent can execute bash commands within the sandbox, and the output of those commands is fed back into its context. This means the agent can read and write files, search, install packages, run interpreters, execute code, and so on. It observes the outcomes and iteratively tests to pinpoint bugs, which are then reported back on the PR as a markdown table.
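
In rough pseudocode, the loop looks something like this (an illustrative sketch only; `next_action` stands in for the LLM call, and none of these names are our actual internals):

    # Illustrative sketch of a bash-in-a-loop bug-finding agent.
    # `next_action` stands in for the LLM call; all names here are hypothetical.
    import subprocess
    from typing import Callable

    def run_bug_agent(diff: str, sandbox_dir: str,
                      next_action: Callable[[list[dict]], dict],
                      max_steps: int = 25) -> list[dict]:
        history = [{"role": "user", "content": f"PR diff:\n{diff}"}]
        for _ in range(max_steps):
            action = next_action(history)  # LLM picks a bash command or decides to report
            if action["type"] == "report":
                # e.g. [{"severity": "High", "title": ..., "evidence": ...}]
                return action["bugs"]
            result = subprocess.run(action["command"], shell=True, cwd=sandbox_dir,
                                    capture_output=True, text=True, timeout=120)
            # Feed the command output back into the agent's context window.
            history.append({"role": "tool",
                            "content": (result.stdout + result.stderr)[-8000:]})
        return []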

Jazzberry is focused on dynamically testing your code in a sandbox to confirm the presence of real bugs. We are not a general code review tool; our only aim is to provide concrete evidence of what's broken and how.

Here are some real examples of bugs that we have found so far.

Authentication Bypass (Critical) - When `AUTH_ENABLED` is `False`, the `get_user` dependency in `home/api/deps.py` always returns the first superuser, bypassing authentication and potentially leading to unauthorized access. Additionally, it defaults to superuser when the authenticated auth0 user is not present in the database.
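
For illustration, the shape of that bug looks roughly like this (a hypothetical, self-contained reconstruction, not the actual code from `home/api/deps.py`):

    # Hypothetical reconstruction of the auth-bypass pattern (not the real deps.py).
    AUTH_ENABLED = False
    USERS = [
        {"email": "admin@example.com", "is_superuser": True},
        {"email": "alice@example.com", "is_superuser": False},
    ]

    def get_user(auth0_email: str | None):
        first_superuser = next(u for u in USERS if u["is_superuser"])
        if not AUTH_ENABLED:
            # Bug 1: with auth disabled, every request is treated as the first
            # superuser, so all endpoints are effectively unauthenticated admin.
            return first_superuser
        user = next((u for u in USERS if u["email"] == auth0_email), None)
        # Bug 2: an authenticated auth0 user that is missing from the database
        # silently falls back to the superuser instead of being rejected.
        return user or first_superuser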

Insecure Header Handling (High) - The server doesn't validate header names/values, allowing injection of malicious headers, potentially leading to security issues.
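
Again as a hypothetical sketch of the pattern, not the actual server code:

    # Hypothetical sketch of the unvalidated-header pattern.
    def build_header_line(name: str, value: str) -> bytes:
        # Bug: name/value are not validated, so a value such as
        # "x\r\nSet-Cookie: session=attacker" injects an extra header line.
        return f"{name}: {value}\r\n".encode()

    # A safer version rejects CR/LF before serializing:
    def build_header_line_safe(name: str, value: str) -> bytes:
        if any(c in "\r\n" for c in name + value):
            raise ValueError("illegal CR/LF in header")
        return f"{name}: {value}\r\n".encode()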

API Key Leakage (High) - Different error messages in browser console logs revealed whether API keys were valid, allowing attackers to brute force valid credentials by distinguishing between format errors and authorization errors.
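
And the third, sketched hypothetically (the key format check here is made up for illustration):

    # Hypothetical sketch of the error-message oracle.
    VALID_KEYS: set[str] = set()  # populated elsewhere in the real system

    def check_api_key(key: str) -> None:
        if not key.startswith("sk-") or len(key) < 20:
            # Distinct message tells the caller the key is merely malformed...
            raise ValueError("invalid API key format")
        if key not in VALID_KEYS:
            # ...while this one confirms it is well-formed but unauthorized,
            # letting an attacker distinguish the two cases from console logs.
            raise PermissionError("API key not authorized")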

Working on this, we've realized just how much the rise of LLM-generated code is amplifying the need for better automated testing. Traditional code coverage metrics and manual code review are already becoming less effective when dealing with thousands of lines of LLM-generated code. We expect this to become only more pronounced over time: the complexity of AI-authored systems will ultimately require even more sophisticated AI tooling for effective validation.

Our backgrounds: Mateo has a PhD in reinforcement learning and formal methods with over 20 publications and 350 citations. Marco holds an MSc in software testing, specializing in LLMs for automated test generation.

We are actively building and would love your honest feedback!

jdefr89 2 hours ago

Ton of work already being done on this. I am a Vulnerability Researcher @ MIT and I know of a few efforts being worked on just at my lab alone. So far nearly everything I have seen seems to do nothing but report false positives. They are missing bugs a fuzzer could have found in minutes. I will be impressed when it finds high severity/exploitable bugs. I think we are a bit too far from that, if it's achievable at all. On the flip side, LLMs have been very useful for reverse engineering binaries. Binary Ninja w/ Sidekick (their LLM plugin) can recover and name data structures quite well. It saves a ton of time. It also does a decent job providing high level overviews of code...

  • hanlonsrazor 2 hours ago

    Agree with you on that. There is nothing about LLMs that makes them uniquely suited for bug finding. However, they could excel at bugs by recovering traces, as you say, and, taking it one step further, even recommending fixes.

    • winwang an hour ago

      One possibility is crafting (somewhat-)minimal reproductions. There's some work in the FP community to do this via traditional techniques, but they seem quite limited.

decodingchris 3 hours ago

Cool demo! You mentioned using a microVM, which I think is Firecracker? And if it is, any issues with it?

  • mp0000 3 hours ago

    Thanks! We are indeed using Firecracker. No issues so far

bigyabai 3 hours ago

> Jazzberry is focused on dynamically testing your code in a sandbox to confirm the presence of real bugs.

That seems like a waste of resources to perform a job that a static linter could do in nanoseconds. Paying to spin up a new VM for every test is going to incur a cost penalty that other competitors can skip entirely.

  • MarcoDewey 3 hours ago

    You are right that static linters are incredibly fast and efficient for catching certain classes of issues.

    Our dynamic sandbox execution is aimed at finding bugs that are much harder for static analysis to detect: logical flaws in specific execution paths and unexpected interactions between code changes.
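
    A toy example of what I mean (made up for illustration; it lints and type-checks cleanly but only fails when the code is actually run):

        # Toy example: passes linters and type checkers, fails only at runtime.
        def apply_discount(price: float, percent: float) -> float:
            if percent >= 100:
                return 0.0
            # Bug on this path: the percentage is divided by 100 twice.
            return price * (1 - percent / 100 / 100)

        assert apply_discount(200.0, 10) == 180.0  # actually returns 199.8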

    • winwang an hour ago

      Do you guide the LLM to do this specifically? So it doesn't "waste" time on what can be taken care of by static analysis? Would be interesting if you could also integrate traditional analysis tools pre-LLM.

bananapub 3 hours ago

how did and do you validate that this is of any value at all?

how many test cases do you have? how do you score the responses? how do you ensure that random changes by the people who did almost all of the work (training the models) don't wreck your product?

  • winwang an hour ago

    Not the OP but -- I would immediately believe that finding bugs would be a valuable problem to solve. Your questions should probably be answered on an FAQ though, since they are pretty good.