Overview
Over the past few months, I’ve been experimenting with generative AI tools for various mobile development tasks. What started as casual use of ChatGPT for feedback and early trials with GitHub Copilot in Xcode quickly evolved into more focused exploration. Early suggestions often felt basic and clunky, but they revealed the potential of these tools. As they’ve matured, so has my interest in integrating them into my workflow.
AI has become a helpful assistant—accelerating repetitive tasks, unblocking challenges, and inspiring new ideas. In this article, I’ll share how I’ve incorporated generative AI into my workflow, highlight where it excels and where it falls short, and offer guidance for mobile engineers on making the most of these tools.
Choosing the right tools early
Selecting the right AI tools for your workflow is crucial. In my experiments, I've primarily used GitHub Copilot for Xcode for inline code suggestions, Cursor (with Claude Sonnet + Gemini) for multi-file assistance and architectural planning, and ChatGPT for general problem solving and summarisation. Each tool shines in different areas, so understanding their strengths helps integrate AI effectively into your development process.
Getting started with generative AI
If you’re new to generative AI, or just new to using it in your development role, start with tasks that offer immediate value and are easy to validate. Here are a few example tasks you could try:
- Summarise a complex technical article
- Create a mock (see the sketch after this list)
- Debug or solve an error by sharing a log or screenshot
- Extract hard-coded strings into a localisation file
- Add a new navigation flow to a project
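For instance, asking AI to create a mock from an existing protocol is a quick win. Here’s a minimal sketch of the kind of output to expect; `WeatherService` and its mock are hypothetical names for illustration:

```swift
import Foundation

// A protocol describing the dependency we want to mock (hypothetical example).
protocol WeatherService {
    func fetchTemperature(for city: String) async throws -> Double
}

// A typical AI-drafted mock: records calls and returns canned values,
// so tests can run without touching a real network service.
final class MockWeatherService: WeatherService {
    var stubbedTemperature: Double = 21.0
    private(set) var requestedCities: [String] = []

    func fetchTemperature(for city: String) async throws -> Double {
        requestedCities.append(city)
        return stubbedTemperature
    }
}
```

Because output like this is easy to read and verify at a glance, it’s a low-risk way to build trust in the tools.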
As you gain confidence, gradually incorporate AI into small code updates or refactoring. While it’s tempting to build new projects in unfamiliar languages using AI, it’s best to first develop a foundational understanding to ensure code quality and maintainability.
Everyday programming assistance
Much of the mainstream marketing around generative AI focuses on its ability to speed up day-to-day development tasks: smarter autocompletion, boilerplate and repetitive functions written for you, and features scaffolded faster. By passing detailed development specs or requirements into your prompts, you can guide AI to better align with your intentions for a particular feature or module. While this is often the first touchpoint developers have with AI, it’s just the starting point.
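As a hedged illustration, a descriptive comment can act as an inline spec that steers a completion tool like Copilot toward the boilerplate you actually want; the function and its requirements here are hypothetical:

```swift
import Foundation

// Spec for the assistant: validate an email address. It must be non-empty,
// contain exactly one "@", and have a dot in the domain part.
func isValidEmail(_ email: String) -> Bool {
    let parts = email.split(separator: "@", omittingEmptySubsequences: false)
    guard parts.count == 2,
          !parts[0].isEmpty,
          parts[1].contains(".") else {
        return false
    }
    return true
}
```

The more precisely the comment states the requirements, the less the generated body needs rework.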
Summarising technical content
Whether it’s a long changelog, a dense RFC, or a technical post or article, AI tools are great at summarising content. We can use them to quickly pull out key insights, implementation details, or pros and cons to suit our requirements.
Code comments and documentation
I’ve been impressed by how well AI can generate code comments and documentation. It’s great at labour-intensive tasks such as summarising function intent, explaining parameters, and auto-generating docs. That’s a real time-saver, especially in shared codebases. In many of our projects we hand codebases over to clients, and this is a powerful way to make otherwise laborious tasks take almost no time at all.
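For example, given an undocumented function, AI is well suited to producing Swift documentation comments like these; the function itself is a hypothetical stand-in:

```swift
/// Calculates the total price of a basket of items, applying an optional discount.
///
/// - Parameters:
///   - prices: The individual item prices, in the store's base currency.
///   - discount: A fractional discount between 0 and 1 (e.g. 0.1 for 10% off).
/// - Returns: The discounted total, never less than zero.
func totalPrice(of prices: [Double], discount: Double = 0) -> Double {
    let subtotal = prices.reduce(0, +)
    return max(subtotal * (1 - discount), 0)
}
```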
Sounding board for ideas
Sometimes, the most useful thing AI offers isn’t code but conversation: it can serve as a collaborative partner in thought. I’ve often used ChatGPT to bounce around ideas, whether I’m investigating a specific issue or exploring architectural strategies, and I’ve even questioned Cursor about the different approaches it suggests.
Engaging with AI in this way can surface alternative approaches, highlight concerns, and even challenge your assumptions. It can be akin to a dialogue with a colleague who offers a fresh perspective, helping to refine concepts and ensure the solutions we design are robust and well-considered. This collaborative process can be especially beneficial when planning complex systems or integrating new technologies, where multiple variables and potential outcomes must be evaluated.
Prototyping & ideation
AI is fantastic for getting ideas down quickly (we can call it vibe coding if we must). We can use it to prototype concepts that can then be shared with the team, or to explore different architectural directions.
Recently, I used Cursor to create a feature-rich React Native prototype. It started as an opportunity to practise some React Native, but it also served as a chance to see how far Cursor has come and to look into more advanced features such as docs and rules. It was a great exercise in improving my prompt engineering skills, revealing where I needed to be more specific with my prompts, and showing how important it is to still understand the language you’re coding in so you can evaluate the changes it suggests.
Modularisation helps
A well-modularised codebase, combined with solid design patterns, reduces the risk of AI-generated suggestions causing unintended ripple effects across your code. When your functions follow the single-responsibility principle, your components are clearly named, and your classes are scoped appropriately, it becomes easier to ring-fence changes and for the AI to understand your architecture and offer precise, relevant assistance.
Beyond modularisation, promoting clean architecture and a strong separation of concerns further enhances AI’s ability to reason about your code. Well-defined layers and dependency boundaries allow AI suggestions to be more accurate and contextually appropriate, particularly when working across larger systems.
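As a rough sketch (the types are illustrative), small single-responsibility components like these give AI a well-bounded surface to reason about, so a suggestion that touches formatting can’t quietly ripple into persistence logic:

```swift
import Foundation

// One job: turn an amount into a display string.
struct PriceFormatter {
    func string(from amount: Decimal, currencyCode: String) -> String {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.currencyCode = currencyCode
        return formatter.string(from: amount as NSDecimalNumber) ?? "\(amount)"
    }
}

// One job: persist orders, behind a boundary the AI can see and respect.
protocol OrderStore {
    func save(_ orderID: UUID) throws
}
```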
Code context is key
The more context you provide, the better the results. Whether you’re referencing adjacent files or using tools like Cursor that can index your project, context drastically improves suggestion quality.
Prompt engineering
Good prompting matters. During a recent conversation with a former colleague, he shared how spending 15 minutes crafting a high-quality prompt can help avoid an hour of tweaking vague ones. That mindset really resonated. If you haven’t yet, look up the “Five Principles of Prompting”: clear intent, structure, constraints, context, and iteration. They make a world of difference.
I can recommend the book "Prompt Engineering for Generative AI" as a great starting point.
Building test infrastructure for AI-supported development
Strong static analysis and type safety form a first layer of protection when incorporating AI-generated code. Languages like Swift and TypeScript help catch syntax errors, type mismatches, and invalid references early.
Tests form the next layer. When integrating AI into your development workflow, it’s tempting to let it handle the heavy lifting of test generation. While AI can efficiently produce a broad suite of tests, it’s crucial to remember that these are starting points, not final solutions. Think of it as drafting an initial version of your safety net; it’s still up to you to inspect it for gaps. AI can help expand test coverage and catch obvious issues, but the nuanced understanding of your application’s context, and the assurance that all edge cases are handled correctly, remain firmly with the developer. While AI can assist in generating unit, integration, and end-to-end tests, it’s ultimately your responsibility to ensure those tests are accurate and effective, maintaining the integrity of the codebase.
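A minimal sketch of that review process, reusing the hypothetical `isValidEmail` helper from earlier: the happy-path test is the kind AI drafts readily, while the edge cases are the ones a reviewing developer might add after inspecting the gaps:

```swift
import XCTest

// Assumes the isValidEmail helper sketched earlier is in scope.
final class EmailValidationTests: XCTestCase {
    // The kind of happy-path test AI generates without prompting.
    func testValidEmailPasses() {
        XCTAssertTrue(isValidEmail("user@example.com"))
    }

    // Edge cases a developer might add after reviewing the generated suite.
    func testEmptyLocalPartFails() {
        XCTAssertFalse(isValidEmail("@example.com"))
    }

    func testMissingDomainDotFails() {
        XCTAssertFalse(isValidEmail("user@example"))
    }
}
```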
Code versioning is your best friend
Treat AI-generated code like any other code—branch, test, and review it. I’ve discarded overconfident suggestions more than once. Commit early and often.
CI/CD integration is critical
Incorporating AI-assisted code into a robust CI/CD pipeline ensures that changes are automatically vetted. Automated testing, analysis, and code quality checks provide the guardrails that catch regressions and maintain code integrity before anything is merged to production.
Impact on estimation
One of the bigger complexities that integrating AI into the development process has introduced is project estimation. While AI can undoubtedly accelerate certain tasks like prototyping, test generation, or documentation, more advanced or ambiguous tasks come with an inherent element of exploration. Solving complex problems with AI involves experimenting with different prompts, refining approaches, and sometimes discovering that manual intervention is still needed. This research and discovery phase can be unpredictable, making accurate estimates harder to provide. It’s important to account for this variable when planning projects, and to make clients aware of it, as relying too heavily on AI may not always be realistic.
The limitations
Language support is uneven
Not all programming languages get equal treatment. I’ve personally found React Native and JavaScript perform really well—likely due to broader training data. Native Swift, on the other hand, can feel patchy at times. Prompting with links to the latest documentation can help bridge the gap.
You still need to understand the code
You may notice a recurring theme—AI should be your assistant, not your architect. It can propose patterns and scaffolding, but it’s your responsibility to assess, adapt, and understand them. One of my colleagues put it well: “Any code, whether it’s written by you or AI, still goes under your name.” That really stuck with me. You’ve got to be able to understand, explain and justify everything you push into your PRs.
Not always up to date
These tools are trained on historic data, so they may not know about the newest APIs, frameworks, or best practices. I’ve seen them suggest deprecated methods and outdated patterns. Supplementing with links to the latest documentation can help—but human review is essential.
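One well-known Swift example: `NavigationView` was deprecated in iOS 16 in favour of `NavigationStack`, yet models trained on older data still suggest it regularly:

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        // Outdated suggestion (deprecated since iOS 16):
        // NavigationView { Text("Hello") }

        // Current API:
        NavigationStack {
            Text("Hello")
        }
    }
}
```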
Confidently wrong
Sometimes AI outputs look perfect but are just flat-out wrong, especially with edge cases or less conventional solutions. I’ve seen this often enough that I now treat those moments as learning opportunities, both to understand the model better (and improve my prompting) and to reinforce why experienced developers are still essential. This also ties into why a well-executed test infrastructure is so important.
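As a hypothetical illustration of the pattern, here’s code that compiles, reads sensibly, and passes a casual review, yet silently mishandles an edge case:

```swift
import Foundation

// Plausible-looking suggestion: compute days between two dates by
// dividing elapsed seconds. It compiles and usually works...
func daysBetween(_ start: Date, _ end: Date) -> Int {
    Int(end.timeIntervalSince(start) / 86_400)
}

// ...but it drifts across daylight-saving transitions. Asking the
// calendar is the correct approach:
func calendarDaysBetween(_ start: Date, _ end: Date) -> Int {
    Calendar.current.dateComponents([.day], from: start, to: end).day ?? 0
}
```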
Security and compliance considerations
For teams working in regulated industries or handling sensitive data (whether it be user data or how credentials are used), it’s important to assess AI-generated code through a security and compliance lens. Reviewing external API interactions, data handling practices, and access controls remains crucial, even when using AI to accelerate delivery. It's also worth considering how the use of AI tooling is addressed within client agreements, ensuring clear transparency around its use and alignment with contractual obligations or industry regulations.
Team culture & senior responsibilities
As a developer, your responsibilities go beyond just shipping code. They include mentoring, setting standards, and building sustainable, scalable practices for the future. AI can support these responsibilities—but it cannot replace them.
- Mentoring: Junior developers paired with AI can move faster, but they still need feedback and guidance. Reviewing AI-generated code together can be a powerful teaching moment.
- Enforcing Standards: Using AI prompts supported by rules and good engineering principles like SOLID and DRY ensures your codebase remains clean and maintainable.
- Reducing Technical Debt: With thoughtful and considered prompting, AI can help refactor legacy codebases more safely and efficiently (see the sketch after this list).
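As a small sketch of that last point (the legacy code is hypothetical), a prompted refactor can collapse duplicated logic into one shared, DRY helper:

```swift
import Foundation

struct User {
    let firstName: String
    let lastName: String
}

// Before: the same name-building logic duplicated at two call sites.
// func headerTitle(for user: User) -> String { user.firstName + " " + user.lastName }
// func emailGreeting(for user: User) -> String { "Hi " + user.firstName + " " + user.lastName }

// After: one shared helper both call sites use.
extension User {
    var fullName: String { "\(firstName) \(lastName)" }
}

func headerTitle(for user: User) -> String { user.fullName }
func emailGreeting(for user: User) -> String { "Hi \(user.fullName)" }
```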
Final thought
Generative AI won’t replace your experience or judgment—but it can make you faster, more productive, and even more creative.