Israeli software start-up Digma has raised $6 million in seed funding for its recently launched “Continuous Feedback” platform, which lets developers continuously analyze their code and identify issues in real time, preventing bad code from reaching production and from slowing down development.
The new platform is emblematic of the emerging Continuous Feedback sector within the software development space. Digma runs locally on developers’ machines, flagging potential regressions, anomalies, and other signs of bad code. The platform is built on observability technologies such as OpenTelemetry and uses machine learning to analyze code runtime data and automatically suggest improvements.
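For context, tools built on OpenTelemetry typically consume trace data that the application itself emits. The sketch below is a hypothetical illustration, not Digma’s actual API: it assumes a Python service instrumented with the OpenTelemetry SDK, exporting spans over OTLP to a collector running locally at localhost:4317, where a continuous-feedback tool could analyze the runtime data. The service name, endpoint, and function are placeholders.

```python
# Hypothetical illustration: instrumenting a Python service with OpenTelemetry
# so a locally running analysis tool can inspect its runtime behavior.
# The endpoint and service name are assumptions, not Digma-specific settings.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Tag all telemetry with a service name so spans can be grouped per service.
provider = TracerProvider(resource=Resource.create({"service.name": "orders-service"}))

# Export spans to a local OTLP endpoint (e.g. a collector on the developer's machine).
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def process_order(order_id: str, items: list[str]) -> None:
    # Each call produces a span; its duration, errors, and attributes become
    # runtime data that a feedback tool can compare across code changes.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.item_count", len(items))
        # ... business logic here ...

if __name__ == "__main__":
    process_order("A-1001", ["widget", "gadget"])
```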
“Over the years we’ve been continually frustrated by a conspicuous gap emerging in the development process,” said Digma CEO and co-founder Nir Shafrir. “Businesses are losing customers due to bad code put into production, or code that doesn’t perform as it should in the real world.
“At the developer level, Digma solves a common problem, which is that developers get feedback too late. They are expected to deliver fast, but they can’t see how their code behaves in the real world, so they can’t make informed design decisions and assess the impact of their changes.”
Widening bottlenecks in the coding pipeline
Experienced software developer Boaz Weisner, now employed at a leading tech firm in Tel Aviv, elaborated on how code review becomes a significant bottleneck in the software development pipeline.
“Depending on what you’re trying to build, [the pre-feedback stage of development] can take a while. If you’re building something large, you’ll probably break it up into multiple smaller projects, and work for one to three days on a certain chunk of code,” he said.
“Waiting for peers to review your code is, sometimes within an organization, one of the slowest processes. You finish writing your code and it could take you another two or three days until it gets [implemented] because you’re waiting for that review,” he added.
Despite this issue, Weisner’s prior experience with continuous feedback tools has left him less than impressed. “My team has tried a similar solution in the past, and it was trash. It gave generic insights that weren’t very actionable, and were already picked up by other tools in our CI, like linters,” he lamented. “The important part of peer reviewing is usually business logic related – what makes sense within the context of the business problem we’re trying to solve, not just the code itself.”
That said, Weisner conceded that Digma’s platform has potential if it can surmount those issues, perhaps through better AI training or human intervention.
“These tools are really easy to integrate, so if it seems something has got a chance of being good, it really doesn’t hurt to try them,” he said.
Roni Dover, Digma’s co-founder and CTO, noted that as more organizations consider implementing artificial intelligence in their coding work, the company’s platform offers a way to do so smoothly and efficiently.
“Organizations that do not adopt AI-generated code will fall behind in the productivity race, and developers who are reticent to use the technology will soon fall behind as well,” he said.
“The great challenge that stands before organizations now – given the limitations of the technology – is how to use it safely and responsibly. For that, automated and even AI-driven guardrails need to be in place. Continuous Feedback reduces the risk surrounding checking in code changes to complex systems or when using GenAI code.”