Mainsail AI Labs
What We’re Learning About AI Coding Assistant Adoption

AI-powered coding tools are advancing quickly—and so are the ways teams are adopting them. Engineering leaders within our portfolio have been gathering to compare notes, share what’s working, and help each other accelerate adoption of AI coding assistants and other tools.

Here are some of the takeaways from our first few AI Coding Assistant Roundtables. Keep in mind that new ideas, lessons, and tools are emerging at a rapid pace, so we encourage companies to lean in, make time for AI experiments, stay nimble, and keep the momentum going. The biggest mistake you can make with AI is not getting started right away.

(This article was originally posted in July 2025 and reflects our experiences with AI-assisted coding to date. If you want to explore further how we help companies with AI, visit Mainsail AI Labs.)

🧰 Tool Adoption Snapshot

Most Used Tools:
A survey of our portfolio companies revealed that nearly half are currently using Cursor, with GitHub Copilot a close second. A few others are using JetBrains AI and Claude Code.

Some more quick points about our group’s preferences:

  • Cursor has been widely adopted and is the current front-runner thanks to its model access, usage tracking, and newly introduced agent capabilities.
  • GitHub Copilot is often used when teams are already using VS Code or Visual Studio and do not want to switch.
  • Claude Code is widely used for long-form coding tasks and POC generation. The group has noted that deep, detailed prompting (e.g., multi-page API specs) produces better results with Claude and similar tools.

Common Barriers:
Tools like Windsurf lack SSO/admin controls. Cursor’s VS Code dependency limits adoption in JetBrains shops. Copilot had lagged behind—but recent improvements have put it back in the running.

What about ChatGPT (or other general purpose LLM tools)?
Many engineers still rely on ChatGPT, Gemini, Claude, etc. for quick tasks, even when embedded tools like GitHub Copilot are available. These remain favored for ad hoc scripting, SQL query debugging and optimization, and administrative tasks like policy drafting.

Key Takeaway:
Don’t get stuck. The landscape is shifting fast—what wasn’t viable two months ago might be high-performing today.

Model Play:
Regardless of the tool, test different models. According to the engineering leaders who attended our Roundtable sessions, Sonnet 3.7 continues to lead, but Gemini 2.5 is gaining ground for certain tasks.

✅ What’s Working

Rules & Prompts:
Cursor rules and GitHub coding guidelines improve consistency in style, tone, and structure, and they help control how models generate code. Consider using them heavily across the team to drive compliance with team standards.
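For illustration, here is a minimal sketch of a Cursor project rules file. Recent Cursor versions read these from `.cursor/rules/*.mdc`; the frontmatter fields shown are typical, but the rule content below is hypothetical, not a prescription:

```markdown
---
description: Conventions for backend services
globs: src/**/*.py
alwaysApply: false
---

- Use type hints on all public functions.
- Prefer the team's logging wrapper over print().
- Every new endpoint needs a matching pytest file under tests/.
```

GitHub Copilot supports a similar mechanism via a repository-level instructions file (e.g., .github/copilot-instructions.md).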

Domain Injection:
Injecting schema files, PRDs, or custom rules gives the AI context and can drastically improve code quality.
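As a minimal sketch of the idea, the injection can be as simple as prepending the relevant artifact to the task prompt. The file path and prompt wording here are illustrative:

```python
# Minimal "domain injection" sketch: prepend the schema the model needs.
# The schema path and prompt framing are examples, not prescriptions.

from pathlib import Path

def build_prompt(task: str, schema_path: str = "db/schema.sql") -> str:
    schema = Path(schema_path).read_text()
    return (
        "You are working in our codebase. Use only tables and columns "
        "defined in the schema below.\n\n"
        f"=== SCHEMA ===\n{schema}\n\n"
        f"=== TASK ===\n{task}"
    )
```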

Cross-Functional Use Beyond Engineers:
AI assistants are boosting output in SRE, QA, and Data teams too, from infrastructure scripting to test generation. AI use by designers (e.g., analyzing user surveys and building Figma designs) shows early traction. Product teams are also beginning to explore workflows that integrate AI, though widespread adoption remains inconsistent.

Cultural Wins:
Framing AI as a career accelerator, not a replacement, and providing hands-on training are crucial for team buy-in.

🚧 What’s Not Working

Vendor Lock-In:
IDE-specific tools (like Cursor with VS Code) create friction for teams on JetBrains or other platforms.

Language Gaps:
Tools are still inconsistent with legacy codebases, particularly older Java or .NET environments.

Limited LLM Context Windows:
Some of our companies are addressing this by using commit messages or summarized markdown memory files to retain continuity. Claude Code’s ability to ingest summaries to preserve workflow memory was highlighted as one solution. The consensus was that manually maintaining context summaries is currently the best workaround.
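To make the memory-file tactic concrete, here is one possible shape for such a file; the name, sections, and entries are hypothetical (Claude Code, for example, re-reads a CLAUDE.md at the project root each session):

```markdown
# Project memory: billing-service
<!-- Re-read at the start of each session; update after significant changes. -->

## Decisions
- Invoice totals are stored as integer cents (changed 2025-06).

## In progress
- Webhook retry refactor; retry logic lives in webhooks/retry.py.

## Gotchas
- Integration tests require LOCALSTACK_URL to be set.
```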

DevOps and Infrastructure-as-Code Challenges:
DevOps teams have generally shown slower adoption. Models often conflate the YAML/JSON syntaxes of related frameworks such as CloudFormation and SAM, introducing syntax errors into generated templates. Even so, one-off CLI scripts generated by LLMs remain valuable for DevOps tasks.
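To illustrate the failure mode: SAM templates require a transform declaration that plain CloudFormation does not, and the two frameworks use different resource types and properties for the same thing. A minimal sketch (handler and paths are placeholders):

```yaml
# SAM: the Serverless transform is mandatory, and the function resource
# carries CodeUri/Handler directly.
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFn:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: app.handler   # placeholder
      CodeUri: src/
# Plain CloudFormation instead uses AWS::Lambda::Function with a Code block
# and an explicit execution Role. Models frequently emit the SAM resource
# type without the Transform line, or put CodeUri on AWS::Lambda::Function;
# both fail template validation.
```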

Quality Assurance (QA) and CI/CD Limitations:
QA teams lag behind engineering in AI adoption. Tools like Katalon offer some AI support but lack full automation, and solutions like QA Tech offer partial automation with human review still required. QA’s future may lie in balancing exploratory AI-generated tests with human curation.

Mobile Development:
Mobile (iOS/Android) AI tooling is still underwhelming, and no one in our group has found a tool that truly excels.

🧪 Emerging AI Coding Practices

Architecting for AI:
Moving to mono-repos or tighter package boundaries helps AI tools reason better and faster, as they gain access to the fuller context of the system.

Prompting for Performance:
Prompt models to optimize not just for readability but also for speed, especially with complex queries. At the same time, while AI-generated code tends to be more verbose, that verbosity is surprisingly being found to improve long-term maintainability and readability, even if it is less elegant than hand-written code.
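As a sample prompt along these lines (assuming a PostgreSQL target; adapt the wording to your stack):

```text
Rewrite the attached PostgreSQL query to reduce execution time.
Constraints:
- The result set must be identical to the original query's.
- List any indexes your rewrite assumes, and flag any that may not exist.
- Briefly explain the expected change in the query plan.
```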

AI Agents:
GitHub Copilot’s new review assistant and the various flavors of autonomous coding agents are showing early signs of traction and are helping with smaller tasks. Keeping agents focused on tightly scoped JIRA tickets has also been seen to prevent runaway compute and unwanted refactors.

Our portfolio companies are also finding that greenfield projects benefit more from AI agents thanks to their clean structure, whereas legacy systems often overwhelm context windows and require “headlight” work to focus agents. Using exploratory agents to identify relevant files and then pass tasks to scoped agents was one suggested tactic.
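A rough sketch of that explorer-then-scoped-agent tactic, in Python. Here `run_agent` is a stand-in for whatever agent tool you use; nothing below is a real API:

```python
from pathlib import Path

def run_agent(system_prompt: str, task: str) -> str:
    """Stand-in for a call to your coding-agent tool of choice."""
    raise NotImplementedError

def handle_ticket(repo_root: str, ticket: str) -> str:
    # Stage 1: a read-only "explorer" agent surveys the repo and names the
    # handful of files relevant to this ticket (the "headlight" work).
    files = run_agent(
        system_prompt="Read-only. List the files relevant to the task, one per line.",
        task=f"Repo: {repo_root}\nTicket: {ticket}",
    ).splitlines()

    # Stage 2: a scoped agent receives only those files, keeping the context
    # window small and discouraging unrelated refactors.
    context = "\n\n".join(
        f"--- {f} ---\n{Path(repo_root, f).read_text()}" for f in files
    )
    return run_agent(
        system_prompt="Modify only the files provided. Return a unified diff.",
        task=f"{ticket}\n\n{context}",
    )
```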

Metrics and ROI:
While some of these tools report usage metrics, they generally cover only code initially generated by AI or accepted by the developer. How much AI-written code survives into the end product is not reported, which presents a skewed view of actual usage. Our group of engineering leaders discussed several tactics for measuring it, but in the end, the amount of AI code that makes it into the final product remains a difficult metric to capture accurately.
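One hypothetical tactic along these lines (an illustration, not an approach our group settled on): have developers tag AI-assisted commits with a message trailer, then use git blame to count how many surviving lines trace back to tagged commits:

```python
import re
import subprocess

def _git(repo: str, *args: str) -> str:
    return subprocess.run(
        ["git", "-C", repo, *args], capture_output=True, text=True, check=True
    ).stdout

def ai_line_share(repo: str, path: str, trailer: str = "Assisted-by: AI") -> float:
    """Fraction of a file's current lines last touched by an AI-tagged commit."""
    # Commits whose messages contain the (team-chosen, hypothetical) trailer.
    tagged = set(_git(repo, "log", "--format=%H", f"--grep={trailer}").split())
    # In --line-porcelain output, each surviving line is preceded by a header
    # beginning with the 40-character hash of the commit that last touched it.
    blame = _git(repo, "blame", "--line-porcelain", path)
    hashes = re.findall(r"^([0-9a-f]{40}) \d+ \d+", blame, flags=re.M)
    return sum(h in tagged for h in hashes) / max(len(hashes), 1)
```

Even this only approximates reality: reformatting commits reassign blame, and the trailer depends on developer discipline.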

Final Takeaway: Keep Calm and Share On

As a leader, you must demonstrate your commitment to AI by prioritizing it and using it yourself, creating the internal conditions for discovery, sharing technical insights like these within your team, and celebrating AI use across your company.

Here are more of my thoughts on Leading an AI-First Company. I hope we get the chance to discuss your AI transformation, from culture all the way to code and customer impact.


➔ Was this helpful? Subscribe to our monthly Mainsail News & Growth Insights so you don’t miss future AI case studies and ideas.

This content piece has been prepared solely for informational purposes. The content piece does not constitute an offer to sell or the solicitation of an offer to purchase any security. The information in this content piece is not presented with a view to providing investment advice with respect to any security, or making any claim as to the past, current or future performance thereof, and Mainsail Management Company, LLC (“Mainsail” or “Mainsail Partners”) expressly disclaims the use of this content piece for such purposes.

The information herein is based on the author’s and portfolio leaders’ opinions and views and there can be no assurance other third-party analyses would reach the same conclusions as those provided herein. The information herein is not and may not be relied on in any manner as, legal, tax, business or investment advice.

Third-party images, logos, and references included herein are provided for illustrative purposes only. Inclusion of such images, logos, and references does not imply affiliation with or endorsement for or by such firms or businesses.

Certain information contained in this content piece has been obtained from published and non‐published sources prepared by other parties, which in certain cases have not been updated through the date hereof. While such information is believed to be reliable for the purposes of this content piece, neither Mainsail nor the webinar participants assume any responsibility for the accuracy or completeness of such information and such information has not been independently verified by any of them. The content piece will not be updated or otherwise revised to reflect information that subsequently becomes available, or circumstances existing or changes occurring after the date hereof, or for any other reason.

Certain information contained herein constitutes “forward-looking statements,” which can be identified by the use of terms such as “may,” “will,” “should,” “could,” “would,” “predicts,” “potential,” “continue,” “expects,” “anticipates,” “projects,” “future,” “targets,” “intends,” “plans,” “believes,” “estimates” (or the negatives thereof) or other variations thereon or comparable terminology. Forward looking statements are subject to a number of risks and uncertainties, which are beyond the control of Mainsail. Actual results, performance, prospects or opportunities could differ materially from those expressed in or implied by the forward-looking statements. Additional risks of which Mainsail is not currently aware also could cause actual results to differ. In light of these risks, uncertainties and assumptions, you should not place undue reliance on any forward-looking statements. The forward-looking events discussed in this content piece may not occur.

Mainsail undertakes no obligation to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise.

No representation, warranty or undertaking, express or implied, is given as to the accuracy or completeness of the information or opinions contained in the enclosed materials by Mainsail and no liability is accepted by such persons for the accuracy or completeness of any such information or opinions. For additional important disclosures, please click here.