

Working With AI, Not Around It: Notes From a Rails Developer

Charles Martinez April 21, 2026

The first time I used an AI coding assistant, I let it drive. I’d describe what I wanted in one sentence, watch it edit files, accept blindly, and move on.

The first time I watched an AI do in minutes what would’ve taken me a day, I had to sit with it for a bit. I had mixed feelings. The speed was incredible. It was also the first time I genuinely wondered what I was still for. Years of practice, syntax I could type without thinking, bugs I could spot from across the room, and a lot of it was now table stakes that any developer with the right tool could match.

The answer, I eventually realised, is that the job didn’t disappear. It moved. The thinking got harder, not easier. What to build, how to shape it, where the boundaries should live: those questions are still mine. It’s not really about how fast you can type code anymore.

Here’s what has worked for me using AI in my day-to-day work.

Treat it like a teammate

AI is really good at spitting out code, and that’s the trap. You type a vague request, it produces something that looks right, the files get edited, the tests pass, maybe, and you move on. It feels productive.

It isn’t, really. Or at least, not as productive as it could be.

What works best for me is treating AI like a pair programmer who happens to type faster than me (by a lot; I haven’t typed much code myself in quite a while now). I treat it like pairing with another human: I explain what I’m trying to do, what I’ve already tried, and which constraints matter. I push back when something feels off. I ask why, not just what.

The same discipline applies here. Before I ask Claude or Cursor for anything substantial, I write out in plain English what I’m trying to accomplish, often as a comment or a scratch note. What’s the business context? What are the edge cases? What patterns does this codebase already use? It takes me two or three minutes, and it saves hours.
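For example, a scratch note for a hypothetical feature might look like this (the feature and names are made up, but the shape is real):

```
Goal: let admins soft-delete a user without losing their order history.
Context: we already soft-delete posts via a deleted_at column; follow that pattern.
Edge cases: user mid-checkout, user with an active subscription.
Out of scope: hard deletion, data export (separate ticket).
```

Nothing fancy, but it forces me to answer the questions the AI would otherwise guess at.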

Three tools, three jobs

I bounce between Claude chat, Claude Code, and Cursor depending on what I’m doing. After a lot of trial and error, here’s the split that stuck:

Claude chat is where I think. When I’m weighing architectural choices, trying to understand a new library, or figuring out what a feature should even look like before I touch code, I go here. It’s the whiteboard: no files attached, no pressure to produce a diff, just a conversation to get my thinking straight. Usually there’s some part of an idea I want to clarify or sanity-check for feasibility before committing to it. I also still use ChatGPT from time to time for very simple confirmations, just to reduce the token usage on my Claude subscription.

Claude Code is where I do the heavier lifting. When I’ve got a clear idea of what needs to happen (a new feature, a refactor, a migration across the codebase), Claude Code is what I reach for. I can hand it a task, let it explore the repo, and review what it proposes. It feels closer to delegating than prompting.

Cursor is what I use for small things: small edits, tab completions, quick rewrites. (I know we have Solargraph in the Ruby on Rails world, but I’ve grown into using Cursor as an IDE because its tab completion recognises patterns and intent.) Things that don’t need a full conversation with an AI.

Budget tip: I keep a Claude Pro subscription ($20) and a Cursor subscription ($20), and that combo covers most of my day-to-day without needing extra credits or an upgrade.

Review everything. Then review it again.

The biggest thing that’s changed for us developers is that we now review code almost every minute of the workday.

More often than you’d like, AI will confidently produce code that’s subtly wrong: logic that looks great and passes, but only handles the happy path; tests that pass because they’re testing the wrong thing; a method that technically does what you asked but, if you haven’t cross-checked it, turns into a bug or tech debt within a few months, or sooner.
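To make the happy-path trap concrete, here’s a contrived Ruby sketch; the method and numbers are made up for illustration, not taken from a real review:

```ruby
# A hypothetical discount helper of the kind an AI might hand you: the
# happy path works, a naive test passes, and the edge cases slip through.
def discounted_price(price, percent)
  price - (price * percent / 100)
end
# discounted_price(100, 20)  # => 80, looks fine
# discounted_price(100, 120) # => -20, a negative price nobody asked for
# discounted_price(nil, 20)  # crashes with NoMethodError

# What the review pass should turn it into: the edge cases made explicit.
def safe_discounted_price(price, percent)
  raise ArgumentError, "price is required" if price.nil?
  price - (price * percent.clamp(0, 100) / 100.0)
end
```

The first version is the kind of thing that sails through if you only read for shape; the second is what reading every line actually buys you.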

So I read every line. I run the tests and make sure everything passes. I think about what could break. If I don’t understand why a piece of code works, I ask, myself or the AI, until I do. If I ship code I don’t understand, it isn’t the AI’s fault when something breaks later; it’s mine. Never blindly accept AI-generated code.

The whole point of using AI well is that it frees up your attention for the things humans are actually better at: judgement and context. If you hand those to AI tools too, you’re not leveraging the tool. You’re generating code and hoping. That’s vibe coding, and it catches up with you.

Rails, as a pleasant aside

Something I’ve noticed, more an observation than a claim: Rails and AI get along really well. Part of it is how much Rails code exists in the world, so the models have seen plenty of it. But I think the main reason is convention over configuration.

Rails reads almost like plain English. Lines like has_many :comments, dependent: :destroy or before_action :authenticate_user! are basically self-explanatory; I could read them to someone with no Rails knowledge and be fairly sure even a kid would follow. That alone makes it easier for an AI to produce sensible code, and easier for me to review what it produced.
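To show what I mean, here’s a typical model written entirely in those conventions (the names are illustrative, not from a real app):

```ruby
class Post < ApplicationRecord
  belongs_to :author, class_name: "User"
  has_many :comments, dependent: :destroy

  validates :title, presence: true

  scope :published, -> { where.not(published_at: nil) }
end
```

Every line states an intent. There’s almost nothing to decode, for a reviewer or for a model.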

But the deeper thing is what those conventions do for AI tools trying to understand a codebase. In a less opinionated framework, the model is staring at a pile of files and has to work out how they relate to each other, burning tokens just to build a mental map. In Rails, that map is basically pre-drawn. A route maps to a controller, which calls a model, which reads from a table. The model doesn’t have to guess; the framework’s conventions tell it. I came across this framing on RailsInsight, an MCP server built around this exact idea, and it matched my own experience pretty closely. The conventions that make Rails nice to write turn out to make it easy for AI to reason about, too.
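A minimal sketch of that pre-drawn map, using hypothetical app code (Rails fragments, not a runnable script):

```ruby
# config/routes.rb: this one line implies the whole chain below.
resources :comments

# app/controllers/comments_controller.rb: found purely by naming convention.
class CommentsController < ApplicationController
  def index
    # Comment maps to the "comments" table; no configuration says so,
    # the convention does.
    @comments = Comment.order(created_at: :desc)
  end
end
```

An AI reading the route already knows where the controller, model, and table live, without opening a single other file.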

A few habits that have stuck

I write prompts the way I’d write a ticket for a teammate. Treat yourself as the lead developer, and the AI as the developer you’re handing that ticket to. Goal, context, what’s in scope, what isn’t. If I’m not confident I could hand something to another human developer, I shouldn’t be handing it to an AI either.
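In practice, a prompt written that way might look like this (a made-up ticket, but the structure is the one I use):

```
Goal: add rate limiting to the public search endpoint.
Context: Rails app, Redis already in the stack for caching.
In scope: SearchController; a shared concern if that's cleaner.
Out of scope: admin endpoints, changing the Redis config.
Done when: requests over the limit get a 429 and a request spec covers it.
```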

There’s a difference between asking an AI to decide for me and asking it to show me my choices. When I’m unsure, I want the choices: I ask the AI to present options with clear trade-offs and advantages, so I’m the one making the call. When I’m confident, I just ask for the change directly.

I keep the AI honest about what it doesn’t know. If I suspect a hallucinated method (AI hallucinates a lot), I say so and ask it to verify, or I clear the context and start fresh.

I don’t commit code I can’t explain. I try to read AI-generated code twice, and most of the time the second pass catches a surprising amount of nonsense that slipped past the first.

A collaborator, not a replacement

A lot of the anxiety around AI coding tools comes from framing them as replacements, as in: will this take my job? In my experience, that’s the wrong question, and it leads to the wrong relationship with the tool. If you treat AI as a replacement for thinking, it will replace your thinking, and the results will be mediocre. Thinking and decision-making are now your strongest assets as a developer.

If you treat it as a collaborator, one with real strengths and real weaknesses, you get something genuinely useful. You stay the lead or senior engineer in the relationship. You bring the judgment. It brings the typing.

That split is, I think, where the job has quietly moved. A lot of what used to fill my day, wiring up a controller, writing the service object, creating specs and unit tests, is work I barely touch directly anymore. What’s left, and what’s grown, is the harder stuff. Should this live in the model or a service? Is this boundary in the right place? What does this decision cost us in a year? Those questions don’t really compress. AI can help you explore them, but it can’t make them for you, and honestly, you wouldn’t want it to.

So the work has moved up a level. Less implementation, more architecture. Less “how do I write this,” more “should this exist, and if so, in what shape.” That’s not a loss. It’s the part of the job I always liked most, and now there’s more room for it.

The current moment with AI either makes you or breaks you: you become a lazier developer or a better one. These days I spend less time writing the same thirty lines of boilerplate I’ve written a thousand times, and more time on the parts of the job that actually need me. I also understand my codebase better than I used to, because every AI-produced change is a change I have to read, question, and internalise.

The tools will keep getting better. What won’t change is that thoughtful use beats careless use, every time.

And yeah, we’re not getting replaced. We’re just getting better tools.

 

P.S. If you have any questions, ask here.