When a transformative capability hits a platform your clients run on, you have two choices. You can wait until the market catches up and then help people figure it out. Or you can get in early, do the hard work of actually deploying it, and show up to client conversations with real answers instead of informed speculation.
At Continuus, we made the second choice with Snowflake Cortex Code.
Our team has been working hands-on with CoCo since its early availability, running it through real-world data environments, testing its boundaries, and building the implementation playbooks that help our clients get to value faster. This post is about what we've found.
We spend a lot of time thinking about where our clients are headed, not just where they are today. And when we looked at the trajectory of what CoCo represents, we saw something that doesn't come along often: a capability that genuinely changes the delivery model for data work, not just the speed of it.
Most productivity tools help engineers do the same things faster. CoCo changes what a mid-market data team is capable of doing at all. That distinction matters enormously for the companies we serve, where headcount constraints are real and the pressure to deliver AI-powered capabilities is accelerating.
We wanted to know exactly what CoCo could and couldn't do before we started recommending it. So we started building.
The single most important thing we've learned is that environment quality drives output quality. CoCo's outputs are dramatically better when the Snowflake environment has clean schemas, consistent naming conventions, documented data relationships, and properly applied governance tags.
When CoCo has rich, accurate context to draw from, it produces code that requires minimal revision and integrates cleanly into existing pipelines. When the environment has accumulated technical debt, CoCo still helps, but it requires more oversight and correction cycles.
The practical implication: environment readiness is not optional if you want full CoCo value. This is now a standard part of how we scope engagements.
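To make this concrete, here is a minimal sketch of the kind of environment context we mean. The schema, table, and tag names are hypothetical; the point is that Snowflake tags and object comments become documentation the model can draw on.

```sql
-- Hypothetical names; illustrates the environment context that improves CoCo output.
CREATE TAG IF NOT EXISTS governance.pii_level
  ALLOWED_VALUES 'none', 'masked', 'restricted';

-- Governance tags tell the assistant (and your team) how a table may be used.
ALTER TABLE analytics.dim_customer
  SET TAG governance.pii_level = 'restricted';

-- Object comments double as documented data relationships.
COMMENT ON COLUMN analytics.dim_customer.customer_sk IS
  'Surrogate key; joins to analytics.fct_orders.customer_sk';
```

None of this is CoCo-specific. It is the same hygiene that helps a new engineer onboard, which is exactly why it helps an assistant produce code that fits the environment.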
We've been tracking time-to-completion across dbt development work with and without CoCo assistance. The scaffolding work that used to consume a significant portion of a sprint (staging models, YAML definitions, unit testing frameworks, documentation) is now a fraction of the effort.
What this does to an engineering team's capacity is meaningful. The hours that used to go into boilerplate can now go into architecture decisions, performance optimization, and the higher-complexity problems that actually require deep expertise. For our clients, that shift has a direct impact on delivery timelines and project ROI.
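For readers less familiar with dbt, this is the shape of the scaffolding in question. The source and column names below are hypothetical, but the pattern (a staging model that renames and types raw source columns) is the boilerplate that CoCo generates well from source metadata.

```sql
-- models/staging/stg_orders.sql (hypothetical model, typical dbt scaffolding)
with source as (

    select * from {{ source('erp', 'orders') }}

),

renamed as (

    select
        order_id,
        customer_id,
        order_date::date           as order_date,
        amount::number(12, 2)      as order_amount
    from source

)

select * from renamed
```

Each model like this also carries a YAML entry (description, column tests) that follows the same mechanical pattern, which is why the category lends itself to generation.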
Building production-ready Cortex Agents used to require a predictable sequence of specialist effort: semantic model design, Cortex Search configuration, orchestration logic, evaluation set construction, and iterative benchmarking. Each component took time, and the handoffs between them created friction.
CoCo compresses that entire workflow. Our team now uses CoCo to scaffold agent builds from the ground up, validate semantic model accuracy against evaluation sets, and iterate on agent performance in a fraction of the time it previously required. The baseline for what a mid-market team can build and maintain has shifted significantly.
Many of the companies we work with are still carrying legacy ETL infrastructure: SSIS packages, custom-built pipelines, aging Informatica jobs. Modernizing that logic into Snowflake-native workflows has historically been some of the most painstaking work in a migration engagement.
CoCo changes the math. When we load legacy logic into Snowflake as structured data and engage CoCo on the translation, it produces the equivalent modern SQL or Python at a pace that would be impossible manually. Our teams still review, validate, and refine the outputs. But the heavy lifting of the initial translation is no longer the bottleneck it once was.
Being genuinely useful to clients means understanding the edges, not just the highlights. A few things worth knowing:
CoCo works best with clear intent. Vague prompts produce workable but generic outputs. Specific, well-structured prompts that reflect your actual environment and goals produce outputs that are nearly production-ready. Learning to prompt CoCo well is a real skill, and one our team has invested in developing.
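To illustrate the difference, here is a hypothetical before-and-after. The model names and constraints are invented; what matters is that the second prompt encodes the environment and the acceptance criteria.

```text
Vague:    "Write a model for orders."

Specific: "Create a dbt incremental model analytics.fct_orders, keyed on
           order_id and sourced from stg_orders, filtering to the last 3
           days of order_date, with unique and not_null tests on order_id."
```

The vague prompt forces the assistant to guess your conventions; the specific one lets it apply them.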
Human review is not optional. CoCo proposes; your team decides. The visual diff review step is there for a reason. We treat every CoCo output as a strong first draft that a qualified engineer reviews before committing. The organizations that get the most from CoCo are the ones that understand this partnership model clearly.
Governance setup pays forward. The upfront investment in tagging, access controls, and semantic model documentation is not just good practice. It is the foundation that determines how well CoCo performs in your specific environment. We've made this a standard deliverable in our Snowflake engagements.
We've built a structured approach to CoCo deployment based on what we've learned. It moves in three stages.
Readiness Assessment. Before any CoCo work begins, we evaluate your Snowflake environment against the conditions that drive CoCo performance: schema documentation, governance tagging, naming consistency, and data relationship definitions. We identify gaps and prioritize the ones that will have the most impact on CoCo output quality.
Pilot Deployment. We start with a defined, high-ROI use case (typically dbt pipeline development or legacy ETL modernization) where we can demonstrate measurable time savings quickly. This gives your team real experience working with CoCo in a structured, supported environment before expanding scope.
Scaled Implementation. Once the patterns are established and your team is working confidently with CoCo, we expand to more complex use cases: Cortex Agent development, ML pipeline automation, and intelligent administration workflows. By this stage, your team has the context and confidence to continue building independently.
CoCo is not just a productivity tool. It represents a meaningful shift in what is possible for mid-market data organizations.
The companies we're most excited to work with right now are the ones who see that clearly. They're not asking whether CoCo is worth paying attention to. They're asking how to structure the work so they capture the opportunity before their competition does.
That is exactly the conversation we're built for.