Using Claude Code with SQL Server and Azure SQL DB
Let’s start with a 7-minute demo video – I didn’t edit this down because I want you to be able to see what happens in real time. In this video, I point the desktop version of Claude Code at a Github issue for the First Responder Kit and tell it to do the needful:
That’s certainly not the most complex Github issue in the world, but the idea in that short video was to show you how easy the overall workflow is, and why you and your coworkers might find it attractive.
Now, let’s zoom out and talk big picture.
The rest of this blog post is not for people who already use Claude Code. I don’t wanna hear Code users complaining in the comments about how I didn’t cover X feature or Y scenario. This is a high-level ~2,000-word overview of what it is, why you’d want it, what you’ll need to talk to your team about, and where to go to learn more.
I should also mention that I use a ton of bullet points in my regular writing. As with all of my posts, none of this is written with AI, period, full stop. These words come directly from my booze-addled brain, written as of March 2026, and this stuff will undoubtedly drift out of correctness over time.
What’s Claude Code?
Think of it as an app (either desktop or command line) that can call other apps including:
- sqlcmd – Microsoft’s command-line utility for running queries. You’re used to using SSMS because it’s much prettier and more powerful, but sqlcmd is fine if all you need to do is run queries and get results, and that’s all Claude Code needs to get started. As you get more advanced, you can use something called an MCP that gives Claude Code an easier way to chat with the database.
- Git / Github – so that it can get the latest versions of your app code (or DBA scripts, or in this case, the First Responder Kit) from source control, make changes, and submit pull requests for you to review. For the purposes of this post, I’m just gonna use the term Github, but if your company uses a different source control method, the same principles apply.
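The sqlcmd piece really is that simple. Here's a sketch of the kind of call Claude Code shells out to – the server name, database, and query are placeholders, so point them at your own dev instance:

```shell
# Sketch: the kind of sqlcmd call Claude Code shells out to. The server name,
# database, and query here are placeholders - adjust them for your environment.
SERVER="localhost"
QUERY="SELECT name FROM sys.databases;"

if command -v sqlcmd >/dev/null 2>&1; then
    # -W strips trailing whitespace, and -s "," makes the output comma-separated,
    # which is much easier for a tool (or an LLM) to parse than fixed-width columns
    sqlcmd -S "$SERVER" -d master -Q "$QUERY" -W -s "," || echo "could not connect to $SERVER"
else
    echo "sqlcmd is not installed on this machine"
fi
```

If that command works from your own terminal, Claude Code can run it too – that's the whole trick.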
That means it has access to:
- Your Github issues and pull requests – which may present confidentiality issues for your company.
- Your local file system – in theory, you might be able to lock this down, but in practice you’re probably going to gradually expand Claude Code’s permissions to let it do more stuff over time.
- A database server – so think about where you’re pointing this thing, and what login you give it. If it’s going to test code changes, it’s probably going to need to alter procs, create/alter/drop tables, insert/update/delete test data, etc. On harder/longer tasks, it’s also going to be processing in the background while you’re doing other stuff, so you’re probably going to want to give it its own SQL Server service for its development use so it doesn’t hose up yours.
- Your code base – and if everything before didn’t raise security and privacy concerns, this one certainly should.
Think of it as an outside contractor.
When your company hires outside contractors, they put a lot of legal protections in place. They’ll set up:
- A non-disclosure agreement to make sure the contractor doesn’t share your secrets with the rest of the world
- A contract specifying what exactly each side is responsible for and what they’ll deliver to each other
- Insurance requirements to make sure the contractor will be able to pay for any egregious mistakes
- Human resources standards to make sure the contractor isn’t high and hallucinating while they work
With AI tools, you don’t really get any of that. That means if you choose to hire one of these tools for your company, all of this is on you. Even worse, anybody on your team can endanger your entire company if they don’t make good decisions along the way. I can totally understand why some/most companies are a little gun-shy on this stuff. It’s right to be concerned about these risks.
Here – and most of the time when you see me working with AI on the blog or videos – I’m working with the open source First Responder Kit, or code that I use as part of my training classes. This stuff is all open source, licensed under the MIT License. I’m not concerned about AI companies stealing my code.
That’s the best way for you to get started, too: play around with Claude Code on an open source Github repo that you usually use as a user (not a developer), like the First Responder Kit, Ola Hallengren’s maintenance scripts, Erik Darling’s SQL Server Performance Monitor, dbatools, or even Microsoft’s SQL Server documentation. Learn to use Claude Code there, and later on, after you’ve built up confidence and a few good wins, then think about bringing it into your own company to work on your day job stuff. And when you do that…
When your company brings in an outside contractor…
The security and legal teams are going to care about:
- What Claude Code has access to – aka, Github, your local file system, your development database server, etc.
- Where Claude Code sends that data for thinking/processing – you should assume that it’s sending all of the accessible data somewhere
- If you send that data outside your company walls for thinking/processing, your company is also going to care about how the thinker/processor uses your data – as in, not just to process your requests, but possibly for analysis to help the overall public or paying users
This leads to one of the big decisions when you’re using Claude Code: where does the thinking/processing happen?
The thinking can be done locally or remotely.
Claude Code is an app, but the thinking doesn’t actually happen in the app. Claude Code sends your data – prompt, database schema, query results, etc. – somewhere.
Most people use Anthropic’s servers. They’re the makers of Claude Code. For around $100/month per person, you get unlimited processing up in their cloud. The advantage of using Anthropic’s servers is that you’ll get the fastest performance, with the biggest large language models (LLMs) that have the best thinking power, most accurate answers, and largest memories (context). The drawback, of course, is that you’re sending your data outside your company’s walls, and you may not be comfortable with that.
If you’re not comfortable with Anthropic, maybe your company is more comfortable with Google Gemini’s models, or OpenAI’s ChatGPT models. At any given time, it’s an arms race between those top companies (and others, such as hosting companies like OpenRouter) as to who produces the best tradeoffs for processing speed, accuracy, and cost.
If you’re not comfortable with any of those, you can do the processing on your own server. When I say “server”, that could be a Docker container running on your laptop, an app installed on your gaming PC with a high-powered video card, or a shared server at your company with a bunch of GPUs stuffed in it.
In that case, it’s up to you to pick the best LLM that you can, that runs as quickly as possible, given your server’s hardware. There are tiny not-so-bright models that run (or perhaps, leisurely stroll) on hardware as small as a Raspberry Pi. There are pretty smart models that require multiple expensive and power-hungry video cards. But even the best local models can’t compete with what you get up in Anthropic’s servers today.
The good news is that you don’t have to make some kind of final decision: you can switch between hosted and local models by just changing Claude Code’s config file.
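To give you a rough idea of what that config change looks like: Claude Code reads environment variables from a settings.json file, and ANTHROPIC_BASE_URL and ANTHROPIC_MODEL are the documented knobs for pointing it elsewhere. This is only a sketch – the URL and model name below are placeholders, and your local server has to expose an Anthropic-compatible API (which may mean putting a small translation proxy in front of something like Ollama):

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:8080",
    "ANTHROPIC_MODEL": "my-local-model"
  }
}
```

Delete those two entries (or keep a second copy of the file handy) and you’re back on Anthropic’s cloud.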
The contractor and prompt qualities affect the results.
Generally speaking, the better/newer the LLM you use, and the smaller the problem you’re working on, the vaguer your prompts can be, like “we’re having deadlock problems – can you fix that?”
On the other hand, the older/smaller/cheaper the LLM you use – especially small locally hosted models – the more specific and directed your prompts have to be to get great results. For example, you may have to say something like, “sp_AddCustomer and sp_AddOrder are deadlocking on the CustomerDetails table when both procs are called simultaneously. Can you reduce the deadlock potential by making code changes to one or both of those procs? You can use hints, query rewrites, retry logic, whatever, as long as the transactions still finish the same way.”
And no matter what kind of LLM you’re using, the more ambitious your code changes become, the more important the prompt becomes. When I’m adding a major new feature or proposing a giant change, I start a chat session with Claude – not Claude Code, but just plain old Claude, the chat UI like ChatGPT – and say something like:
I’m working on the attached sp_Blitz.sql script, which builds a health check report on Microsoft SQL Server. It isn’t currently compatible with Azure SQL DB because it uses sp_MSforeachdb and some of the dynamic SQL uses the USE command. I’d like to use Claude Code to perform the rewrite. Can you review the code, and help me write a good prompt for Claude Code?
I know, it sounds like overkill, using one AI to tell another AI what to do, but I’ve found that in a matter of seconds, it produces a muuuuch better prompt than I would have written, taking more edge cases of the code into account. Then I edit that prompt, clarify some of my design decisions and goals, and then finally take the finished prompt over to Claude Code to start work there.
For now, I use Claude Code on a standalone machine.
I really like to think of AI tools like Claude Code as an outside contractor.
I’m sure the contractor is a nice person, and I have to trust it at least a little – after all, I’m the guy who hired it, and I shouldn’t hire someone that I don’t trust. Still, though, I gotta put safeguards in place.
So I keep Claude Code completely isolated.
I know that sounds a little paranoid, but right now in the wild west of AI, paranoia is a good thing.
For me, it starts with isolated hardware. A few years ago, I got a Windows desktop to use for gaming, streaming, and playing around with local large language models (LLMs). It’s got a fast processor, 128GB RAM, a decently powerful NVidia 4090 GPU, Windows 11, Github, and SQL Server 2025.
I think of that computer as Claude Code’s machine: he works there, he lives there. That way, I can guarantee none of my clients’ code or data is on there, and it doesn’t have things like my email either. When I wanna work, stream, or record videos on that Windows machine, I just remote desktop into it from my normal Mac laptop.
When I wanna do client work without sending the data to Anthropic, I’ve got Ollama set up on that machine too. It’s a free, open source platform for running your own local models. It supports a huge number of LLMs, and there is no one right answer for which model to use. I love finding utilities like llmfit which check hardware to see what models can be run on it, and finding posts like which models run best on NVidia RTX 40 series GPUs as of April 2025 or on Apple Silicon processors as of February 2026, because they help me take the guesswork out of experimenting. I copy client data onto that machine temporarily, do that local work, and then delete the client data again before reconfiguring Claude Code to talk to Anthropic’s servers.
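If you wanna poke at Ollama yourself, the command-line workflow is short. A sketch – the model tag below is just an example, and the right one for you depends on how much VRAM your GPU has (that’s where the llmfit-style tools come in):

```shell
# Sketch: checking an Ollama install and the commands to fetch a model.
# The model tag is an example - pick one that fits your GPU's VRAM.
MODEL="llama3.1:8b"

if command -v ollama >/dev/null 2>&1; then
    ollama list || echo "the ollama daemon isn't running"   # show what's already downloaded
    echo "to fetch and smoke-test the model, run:"
    echo "  ollama pull $MODEL"
    echo "  ollama run $MODEL \"hello\""
else
    echo "ollama is not installed - grab an installer from ollama.com first"
fi
```

Once a model responds to `ollama run`, it’s serving an API on localhost that other tools can talk to.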
How you can get started with Claude Code
Your mission, should you choose to accept it, is to add a new warning to sp_Blitz when a SQL Server has Availability Groups enabled at the server level, but it doesn’t have any databases in an AG. To help, I’ve written a short, terse Github issue for this request, and a longer, more explicit one so you can also see how the quality of the input affects the quality of your chosen LLM’s code.
To accomplish it, the bare minimum steps would be:
- Install Claude Code (I’d recommend the terminal version first because the documentation is much better – the desktop version looks cool, but it’s much harder to get started with)
- Clone the First Responder Kit repo locally
- Prompt Claude Code to write the code – tell it about the Github issue and ask it to draft a pull request with the improved code, for your review
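Those first two steps boil down to a couple of commands. This sketch prints the commands rather than running them (clear DRY_RUN when you’re ready) – the npm package name comes from Anthropic’s install docs, and the repo URL is the public First Responder Kit repo:

```shell
# Sketch of the bare-minimum setup. DRY_RUN makes this print the commands
# instead of running them - clear it when you're ready to execute for real.
DRY_RUN="echo"

$DRY_RUN npm install -g @anthropic-ai/claude-code
$DRY_RUN git clone https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit.git
$DRY_RUN cd SQL-Server-First-Responder-Kit
$DRY_RUN claude   # starts an interactive Claude Code session inside the repo
```

From inside that session, you paste in the Github issue (or its URL) and ask for a draft pull request.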
Stretch goals:
- Set up a SQL Server instance for Claude Code to connect to – could be an existing instance or a new one
- Set up sqlcmd or the SQL Server MCP so Claude Code can connect to it – if you use the MCP, you’ll need to edit Claude Code’s config files to include the server name, login, and password you want it to use
- Prompt Claude Code to test its code
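For the MCP stretch goal, Claude Code can read MCP server definitions from a .mcp.json file in the repo, using the same mcpServers shape Claude Desktop uses. This one’s a hypothetical sketch: “some-sql-server-mcp” is a placeholder package name, and the environment variable names will depend on whichever SQL Server MCP you actually pick – check its README:

```json
{
  "mcpServers": {
    "mssql": {
      "command": "npx",
      "args": ["-y", "some-sql-server-mcp"],
      "env": {
        "SERVER": "localhost",
        "DATABASE": "master",
        "SQL_LOGIN": "claude_dev",
        "SQL_PASSWORD": "change-me"
      }
    }
  }
}
```

Remember the earlier warning about logins: give that MCP its own SQL login with only the permissions it needs on a development instance, not sa on production.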
You don’t have to submit your actual work as a pull request – I’m not going to accept any of those pull requests anyway. (I’ll just delete them if they come in – and it’s okay if you do one, I won’t be offended.) These Github issues exist solely to help you learn Claude Code.
How I can help
Unfortunately, I can’t do free personalized support for tens of thousands of readers to get their Claude Code setups up and running. At some point, I might build a paid training class for using Claude Code with SQL Server, and at that point, the paid students would be able to get some level of support. For now, though, I wanted to get this blog post, video, and GitHub issues out there for the advanced folks to start getting ahead of the curve.
However, if your company would like to hire me to help get a jump start on using Claude Code to improve your DBA productivity, proactively find database issues before they strike, and finally start making progress on your known issues backlog, email me.