Using Claude Code with SQL Server and Azure SQL DB

AI
9 Comments

Let’s start with a 7-minute demo video – I didn’t edit this down because I want you to be able to see what happens in real time. In this video, I point the desktop version of Claude Code at a Github issue for the First Responder Kit and tell it to do the needful:

That’s certainly not the most complex Github issue in the world, but the idea in that short video was to show you how easy the overall workflow is, and why you and your coworkers might find it attractive.

Now, let’s zoom out and talk big picture.

The rest of this blog post is not for people who already use Claude Code. I don’t wanna hear Code users complaining in the comments about how I didn’t cover X feature or Y scenario. This is a high-level ~2,000-word overview of what it is, why you’d want it, what you’ll need to talk to your team about, and where to go to learn more.

I should also mention that I use a ton of bullet points in my regular writing. As with all of my posts, none of this is written with AI, period, full stop. These words come directly from my booze-addled brain, written as of March 2026, and this stuff will undoubtedly drift out of correctness over time.

What’s Claude Code?

Think of it as an app (either desktop or command line) that can call other apps including:

  • sqlcmd – Microsoft’s command-line utility for running queries. You’re used to using SSMS because it’s much prettier and more powerful, but sqlcmd is fine if all you need to do is run queries and get results, and that’s all Claude Code needs to get started. As you get more advanced, you can use something called an MCP that gives Claude Code an easier way to chat with the database.
  • Git / Github – so that it can get the latest versions of your app code (or DBA scripts, or in this case, the First Responder Kit) from source control, make changes, and submit pull requests for you to review. For the purposes of this post, I’m just gonna use the term Github, but if your company uses a different source control method, the same principles apply.
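Here's what that sqlcmd workflow looks like in practice – a one-line sketch where the server name, database, and query are all placeholders:

```shell
# Run a quick query with sqlcmd - no SSMS required. -E uses Windows auth;
# swap in -U/-P for a SQL login. Server and database names are examples.
sqlcmd -S localhost -d StackOverflow -E -Q "SELECT TOP 5 DisplayName FROM dbo.Users ORDER BY Reputation DESC;"
```

That's the entire surface area Claude Code needs to get started: a command it can call, and text output it can read.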

That means it has access to:

  • Your Github issues and pull requests – which may present confidentiality issues for your company.
  • Your local file system – in theory, you might be able to lock this down, but in practice you’re probably going to gradually expand Claude Code’s permissions to let it do more stuff over time.
  • A database server – so think about where you’re pointing this thing, and what login you give it. If it’s going to test code changes, it’s probably going to need to alter procs, create/alter/drop tables, insert/update/delete test data, etc. On harder/longer tasks, it’s also going to be processing in the background while you’re doing other stuff, so you’re probably going to want to give it its own SQL Server service for its development use so it doesn’t hose up yours.
  • Your code base – and if everything before didn’t raise security and privacy concerns, this one certainly should.
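If you do point it at a dedicated dev instance, one way to scope its access is a dedicated login with the built-in database roles. This is just a sketch – the login name, password, and database name are all illustrative:

```sql
-- Illustrative only: a dedicated login for Claude Code on a throwaway
-- dev instance, with enough rights to alter objects and churn test data.
CREATE LOGIN ClaudeCode WITH PASSWORD = 'UseALongRandomPassphraseHere1!';
GO
USE DevSandbox;
GO
CREATE USER ClaudeCode FOR LOGIN ClaudeCode;
ALTER ROLE db_ddladmin   ADD MEMBER ClaudeCode; -- create/alter/drop procs and tables
ALTER ROLE db_datareader ADD MEMBER ClaudeCode; -- read test data
ALTER ROLE db_datawriter ADD MEMBER ClaudeCode; -- insert/update/delete test data
```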

Think of it as an outside contractor.

When your company hires outside contractors, they put a lot of legal protections in place. They’ll set up:

  • A non-disclosure agreement to make sure the contractor doesn’t share your secrets with the rest of the world
  • A contract specifying what exactly each side is responsible for and what they’ll deliver to each other
  • Insurance requirements to make sure the contractor will be able to pay for any egregious mistakes
  • Human resources standards to make sure the contractor isn’t high and hallucinating while they work

With AI tools, you don’t really get any of that. That means if you choose to hire one of these tools for your company, all of this is on you. Even worse, anybody on your team can endanger your entire company if they don’t make good decisions along the way. I can totally understand why some/most companies are a little gun-shy on this stuff. It’s right to be concerned about these risks.

Here – and most of the time when you see me working with AI on the blog or videos – I’m working with the open source First Responder Kit, or code that I use as part of my training classes. This stuff is all open source, licensed under the MIT License. I’m not concerned about AI companies stealing my code.

That’s the best way for you to get started, too: play around with Claude Code on an open source Github repo that you usually use as a user (not a developer), like the First Responder Kit, Ola Hallengren’s maintenance scripts, Erik Darling’s SQL Server Performance Monitor, dbatools, or even Microsoft’s SQL Server documentation. Learn to use Claude Code there, and later on, after you’ve built up confidence and a few good wins, then think about bringing it into your own company to work on your day job stuff. And when you do that…

When your company brings in an outside contractor…

The security and legal teams are going to care about:

  1. What Claude Code has access to – aka, Github, your local file system, your development database server, etc.
  2. Where Claude Code sends that data for thinking/processing – you should assume that it’s sending all of the accessible data somewhere
  3. If you send that data outside your company walls for thinking/processing, your company is also going to care about how the thinker/processor uses your data – as in, not just to process your requests, but possibly for analysis to help the overall public or paying users

This leads to one of the big decisions when you’re using Claude Code: where does the thinking/processing happen?

The thinking can be done locally or remotely.

Claude Code is an app, but the thinking doesn’t actually happen in the app. Claude Code sends your data, prompt, database schema, etc. somewhere.

Most people use Anthropic’s servers. They’re the makers of Claude Code. For around $100/month per person, you get unlimited processing up in their cloud. The advantage of using Anthropic’s servers is that you’ll get the fastest performance, with the biggest large language models (LLMs) that have the best thinking power, most accurate answers, and largest memories (context). The drawback, of course, is that you’re sending your data outside your company’s walls, and you may not be comfortable with that.

If you’re not comfortable with Anthropic, maybe your company is more comfortable with Google Gemini’s models, or OpenAI’s ChatGPT models. At any given time, it’s an arms race between those top companies (and others, like hosting companies like OpenRouter) as to who produces the best tradeoffs for processing speed, accuracy, and cost.

If you’re not comfortable with any of those, you can do the processing on your own server. When I say “server”, that could be a Docker container running on your laptop, an app installed on your gaming PC with a high-powered video card, or a shared server at your company with a bunch of GPUs stuffed in it.

In that case, it’s up to you to pick the best LLM that you can, that runs as quickly as possible, given your server’s hardware. There are tiny not-so-bright models that run (or perhaps, leisurely stroll) on hardware as small as a Raspberry Pi. There are pretty smart models that require multiple expensive and power-hungry video cards. But even the best local models can’t compete with what you get up in Anthropic’s servers today.

The good news is that you don’t have to make some kind of final decision: you can switch between hosted and local models by just changing Claude Code’s config file.
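As a rough sketch, that switch can be as small as a couple of environment overrides in Claude Code’s settings file. The ANTHROPIC_* variable names are Claude Code’s documented overrides, but the URL, port, and model name below are pure placeholders – local models typically sit behind a gateway (such as LiteLLM) that speaks an Anthropic-compatible API:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4000",
    "ANTHROPIC_MODEL": "qwen2.5-coder:32b"
  }
}
```

Delete the overrides (or keep two copies of the file) and you’re back on Anthropic’s hosted models.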

The contractor and prompt qualities affect the results.

Generally speaking, the better/newer the LLM you use, and the smaller the problem you’re working on, the vaguer the prompts you can get away with, like “we’re having deadlock problems – can you fix that?”

On the other hand, the older/smaller/cheaper the LLM you use – especially small locally hosted models – the more specific and directed your prompts have to be to get great results. For example, you may have to say something like, “sp_AddCustomer and sp_AddOrder are deadlocking on the CustomerDetails table when both procs are called simultaneously. Can you reduce the deadlock potential by making code changes to one or both of those procs? You can use hints, query rewrites, retry logic, whatever, as long as the transactions still finish the same way.”

And no matter what kind of LLM you’re using, the more ambitious your code changes become, the more important the prompt becomes. When I’m adding a major new feature or proposing a giant change, I start a chat session with Claude – not Claude Code, but just plain old Claude, the chat UI like ChatGPT – and say something like:

I’m working on the attached sp_Blitz.sql script, which builds a health check report on Microsoft SQL Server. It isn’t currently compatible with Azure SQL DB because it uses sp_MSforeachdb and some of the dynamic SQL uses the USE command. I’d like to use Claude Code to perform the rewrite. Can you review the code, and help me write a good prompt for Claude Code?

I know, it sounds like overkill, using one AI to tell another AI what to do, but I’ve found that in a matter of seconds, it produces a muuuuch better prompt than I would have written, taking more edge cases of the code into account. Then I edit that prompt, clarify some of my design decisions and goals, and then finally take the finished prompt over to Claude Code to start work there.

For now, I use Claude Code on a standalone machine.

I really like to think of AI tools like Claude Code as an outside contractor.

I’m sure the contractor is a nice person, and I have to trust it at least a little – after all, I’m the guy who hired it, and I shouldn’t hire someone that I don’t trust. Still, though, I gotta put safeguards in place.

So I keep Claude Code completely isolated.

I know that sounds a little paranoid, but right now in the wild west of AI, paranoia is a good thing.

For me, it starts with isolated hardware. A few years ago, I got a Windows desktop to use for gaming, streaming, and playing around with local large language models (LLMs). It’s got a fast processor, 128GB RAM, a decently powerful NVidia 4090 GPU, Windows 11, Github, and SQL Server 2025.

I think of that computer as Claude Code’s machine: he works there, he lives there. That way, I can guarantee none of my clients’ code or data is on there, and it doesn’t have things like my email either. When I wanna work, stream, record videos from that Windows machine, I just remote desktop into it from my normal Mac laptop.

When I wanna do client work without sending the data to Anthropic, I’ve got Ollama set up on that machine too. It’s a free, open source platform for running your own local models. It supports a huge number of LLMs, and there is no one right answer for which model to use. I love finding utilities like llmfit which check hardware to see what models can be run on it, and finding posts like which models run best on NVidia RTX 40 series GPUs as of April 2025 or on Apple Silicon processors as of February 2026, because they help me take the guesswork out of experimenting. I copy client data onto that machine temporarily, do that local work, and then delete the client data again before reconfiguring Claude Code to talk to Anthropic’s servers.
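If you haven’t played with Ollama yet, the basics are two commands – the model name here is just an example, so pick one that fits your GPU’s memory:

```shell
# Download a model, then chat with it locally. Everything stays on this box.
ollama pull qwen2.5-coder:14b
ollama run qwen2.5-coder:14b "Summarize what sp_Blitz checks for."
```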

How you can get started with Claude Code

Your mission, should you choose to accept it, is to add a new warning to sp_Blitz when a SQL Server has Availability Groups enabled at the server level, but it doesn’t have any databases in an AG. To help, I’ve written a short, terse Github issue for this request, and a longer, more explicit one so you can also see how the quality of the input affects the quality of your chosen LLM’s code.

To accomplish the task, the bare minimum steps would be:

  1. Install Claude Code (I’d recommend the terminal version first because the documentation is much better – the desktop version looks cool, but it’s much harder to get started with)
  2. Clone the First Responder Kit repo locally
  3. Prompt Claude Code to write the code – tell it about the Github issue and ask it to draft a pull request with the improved code, for your review
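Those first two steps, sketched as terminal commands – the npm package is Anthropic’s published one, and the repo URL is the First Responder Kit’s:

```shell
# Install the terminal version of Claude Code, then clone the repo.
npm install -g @anthropic-ai/claude-code
git clone https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit.git
cd SQL-Server-First-Responder-Kit
claude   # start Claude Code here, then describe the Github issue you want it to tackle
```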

Stretch goals:

  1. Set up a SQL Server instance for Claude Code to connect to – could be an existing instance or a new one
  2. Set up sqlcmd or the SQL Server MCP so Claude Code can connect to it – if you use the MCP, you’ll need to edit Claude Code’s config files to include the server, login, password you want it to use
  3. Prompt Claude Code to test its code

You don’t have to submit your actual work as a pull request – I’m not going to accept any of those pull requests anyway. (I’ll just delete them if they come in – and it’s okay if you do one, I won’t be offended.) These Github issues exist solely to help you learn Claude Code.

How I can help

Unfortunately, I can’t do free personalized support for tens of thousands of readers to get their Claude Code setups up and running. At some point, I might build a paid training class for using Claude Code with SQL Server, and at that point, the paid students would be able to get some level of support. For now, though, I wanted to get this blog post, video, and GitHub issues out there for the advanced folks to start getting ahead of the curve.

However, if your company would like to hire me to help get a jump start on using Claude Code to improve your DBA productivity, proactively find database issues before they strike, and finally start making progress on your known issues backlog, email me.


Row-Level Security Can Slow Down Queries. Index For It.

Execution Plans
3 Comments

The official Azure SQL Dev’s Corner blog recently wrote about how to enable soft deletes in Azure SQL using row-level security, and it’s a nice, clean, short tutorial. I like posts like that because the feature is pretty cool and accomplishes a real business goal. It’s always tough deciding where to draw the line on how much to include in a blog post, so I forgive them for not including one vital caveat with this feature.

Row-level security can make queries go single-threaded.

This isn’t a big deal when your app is brand new, but over time, as your data gets bigger, this is a performance killer.

Setting Up the Demo

To illustrate it, I’ll copy a lot of code from their post, but I’ll use the big Stack Overflow database. After running the below code, I’m going to have two Users tables with soft deletes set up: a regular dbo.Users one with no security, and a dbo.Users_Secured one with row-level security so folks can’t see the IsDeleted = 1 rows if they don’t have permissions.
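A condensed sketch of that setup, following the inline table-valued function pattern from the Azure post – the function, policy, and role names here are all illustrative:

```sql
USE StackOverflow;
GO
/* Add a soft-delete flag, then build a second copy of the table. */
ALTER TABLE dbo.Users ADD IsDeleted BIT NOT NULL DEFAULT 0;
GO
SELECT * INTO dbo.Users_Secured FROM dbo.Users;
CREATE UNIQUE CLUSTERED INDEX PK_Users_Secured ON dbo.Users_Secured (Id);
GO
/* The filter predicate: hide IsDeleted = 1 rows from anyone
   who isn't in a privileged role. */
CREATE FUNCTION dbo.fn_FilterDeletedRows (@IsDeleted BIT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
        WHERE @IsDeleted = 0
           OR IS_ROLEMEMBER('db_owner') = 1;
GO
CREATE SECURITY POLICY dbo.SoftDeletePolicy
    ADD FILTER PREDICATE dbo.fn_FilterDeletedRows(IsDeleted)
    ON dbo.Users_Secured
    WITH (STATE = ON);
```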

Now let’s start querying the two tables to see the performance problem.

Querying by the Primary Key: Still Fast

The Azure post kept things simple by not using indexes, so we’ll start that way too. I’ll turn on actual execution plans and get a single row, and compare the differences between the tables:
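The queries themselves are just point lookups by primary key against each table – the Id value is arbitrary:

```sql
SELECT * FROM dbo.Users         WHERE Id = 26837;
SELECT * FROM dbo.Users_Secured WHERE Id = 26837;
```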

If all you’re doing is getting one row, and you know the Id of the row you’re looking for, you’re fine. SQL Server dives into that one row, fetches it for you, and doesn’t need multiple CPU cores to accomplish the goal. Their actual execution plans look identical at first glance:

Single row fetch

If you hover your mouse over the Users_Secured table operation, you’ll notice an additional predicate that we didn’t ask for: row-level security is automatically checking the IsDeleted column for us:

Checking security

Querying Without Indexes: Starts to Get Slower

Let’s find the top-ranked people in Las Vegas:
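The query shape is something like this – against the unsecured table we have to remember the IsDeleted filter ourselves, while row-level security adds it for us on the secured table:

```sql
SELECT TOP 100 Id, DisplayName, Reputation
  FROM dbo.Users
 WHERE Location = 'Las Vegas, NV'
   AND IsDeleted = 0
 ORDER BY Reputation DESC;

SELECT TOP 100 Id, DisplayName, Reputation
  FROM dbo.Users_Secured
 WHERE Location = 'Las Vegas, NV'
 ORDER BY Reputation DESC;
```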

Their actual execution plans show the top query at about 1.4 seconds for the unsecured table, and the bottom query at about 3 seconds for the secured table:

Las Vegas, baby

The reason isn’t security per se: the reason is that the row-level security function inhibits parallelism. The top query plan went parallel, and the bottom query did not. If you click on the secured table’s SELECT icon, the plan’s properties will explain that the row-level security function can’t be parallelized:

No parallelism

That’s not good.

When you’re using the database’s built-in row-level security functions, it’s more important than ever to do a good job of indexing. Thankfully, the query plan has a missing index recommendation to help, so let’s dig into it.

The Missing Index Recommendation Problems

Those of you who’ve been through my Fundamentals of Index Tuning class will have learned how Microsoft comes up with missing index recommendations, but I’mma be honest, dear reader, the quality of this one surprises even me:

The index simply ignores the IsDeleted and Reputation columns, even though they’d both be useful to have in the key! The missing index hint recommendations are seriously focused on the WHERE clause filters that the query passed in, but not necessarily on the filters that SQL Server is implementing behind the scenes for row-level security. Ouch.

Let’s do what a user would do: try creating the recommended index on both tables – even though the number of include columns is ridiculous – and then try again:
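A sketch of that, based on the Stack Overflow Users schema – note that Reputation and IsDeleted end up buried in the INCLUDE list rather than the key, even though the query sorts by one and row-level security filters on the other:

```sql
CREATE INDEX Location ON dbo.Users (Location)
    INCLUDE (AboutMe, Age, CreationDate, DisplayName, DownVotes, EmailHash,
             IsDeleted, LastAccessDate, Reputation, UpVotes, Views,
             WebsiteUrl, AccountId);

CREATE INDEX Location ON dbo.Users_Secured (Location)
    INCLUDE (AboutMe, Age, CreationDate, DisplayName, DownVotes, EmailHash,
             IsDeleted, LastAccessDate, Reputation, UpVotes, Views,
             WebsiteUrl, AccountId);
```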

Our actual execution plans are back to looking identical:

With a covering index

Neither of them require parallelism because we can dive into Las Vegas, and read all of the folks there, filtering out the appropriate IsDeleted rows, and then sort the remainder, all on one CPU core, in a millisecond. The cost is just that we literally doubled the table’s size because the missing index recommendation included every single column in the table!

A More Realistic Single-Column Index

When faced with an index recommendation that includes all of the table’s columns, most DBAs would either lop off all the includes and just use the keys, or hand-review the query to hand-craft a recommended index. Let’s start by dropping the old indexes, and creating new ones with only the key column that Microsoft had recommended:
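Sketched out, with the same index name reused on both tables:

```sql
DROP INDEX Location ON dbo.Users;
DROP INDEX Location ON dbo.Users_Secured;
GO
CREATE INDEX Location ON dbo.Users         (Location);
CREATE INDEX Location ON dbo.Users_Secured (Location);
```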

Both queries now perform identically – here are their actual execution plans:

Key lookup plan 1

Summary: Single-Threaded is Bad, but Indexes Help.

The database’s built-in row-level security is a really cool (albeit underused) feature to help you accomplish business goals faster, without trying to roll your own code. Yes, it does have limitations, like inhibiting parallelism and making indexing more challenging, but don’t let that stop you from investigating it. Just know you’ll have to spend a little more time doing performance tuning down the road.

In this case, we’re indexing not to reduce reads, but to avoid doing a lot of work on a single CPU core. Our secured table still can’t go parallel, but thanks to the indexes, the penalty of row-level security disappears for this particular query.

Experienced readers will notice that there are a lot of topics I didn’t cover in this post: whether to index for the IsDeleted column, the effect of residual predicates on IsDeleted and Reputation, and how CPU and storage are affected. However, just as Microsoft left off the parallelism thing to keep their blog post tightly scoped, I gotta keep mine scoped too! This is your cue to pick up this blog post with anything you’re passionate about, and extend it to cover the topics you wanna teach today.


Logical Reads Aren’t Repeatable on Columnstore Indexes. (sigh)

Sometimes I really hate my job.

Forever now, FOREVER, it’s been a standard thing where I can say, “When you’re measuring storage performance during index and query tuning, you should always use logical reads, not physical reads, because logical reads are repeatable, and physical reads aren’t. Physical reads can change based on what’s in cache, what other queries are running at the time, your SQL Server edition, and whether you’re getting read-ahead reads. Logical reads just reflect exactly the number of pages read, no matter where the data came from (storage or cache), so as long as that number goes down, you’re doing a good job.”

To illustrate it, we’ll start with the large version of the Stack Overflow database, and count the number of rows in the Users table.
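The demo query is about as simple as it gets:

```sql
SET STATISTICS IO ON;
GO
SELECT COUNT(*) FROM dbo.Users;
GO 3  -- run it a few times: physical reads drop after the first pass,
      -- but logical reads hold steady
```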

Statistics io output shows that the first execution has to read pages up from disk because they’re not in cache yet:

The first execution has 4 physical reads and 329,114 read-ahead reads. Those were all read up off disk, into memory. But the whole time, logical reads stays consistent, so it’s useful for measuring performance tuning efforts regardless of what’s in cache.

The same thing is true if we create a nonclustered rowstore index too:
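Any narrow nonclustered index will do for a COUNT(*) – the column choice here is just an example:

```sql
CREATE INDEX IX_LastAccessDate ON dbo.Users (LastAccessDate);
GO
SELECT COUNT(*) FROM dbo.Users;  -- SQL Server counts off the narrower index now
GO 3
```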

Statistics io output shows physical reads & readahead reads on the first execution, but logical reads stays consistent throughout:

But with columnstore indexes on SQL Server 2017 & newer…

On SQL Server 2017 or newer (not 2016), create a nonclustered columnstore index:
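The index name and key column here are made up – the only thing that matters for the demo is that it’s a nonclustered columnstore:

```sql
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Users
    ON dbo.Users (Id);
```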

And watch lob logical reads while we run it 3 times:
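```sql
SET STATISTICS IO ON;
GO
SELECT COUNT(*) FROM dbo.Users;
GO 3  -- same COUNT(*) as before, run three times in a row
```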

Lob logical reads shows 22,342 for the first execution, then 10,947 for the next two passes.

This isn’t true on SQL Server 2016, which produces the same logical read numbers every time the columnstore query runs, clean buffer pool or not. Just 2017 and newer.

Actual live Brent reaction to this issue

<sigh> This is why we can’t have nice things.

This is also one of those reasons why it’s so hard to teach training classes. Stuff changes inside the product, and then years later, a demo you wrote no longer produces exactly the same results. You have to try re-running the demo from scratch, thinking you just made a mistake, and then you have to narrow down the root cause, and then to do it right, you really need to check each prior version to understand when the thing changed, and Google trying to find out if anybody else shared this and you just didn’t read that particular post, and then update your own training and write a blog post so that nobody else gets screwed by the same undocumented change, which of course they will, because not everybody reads your blog posts.

You won’t, though, dear reader. At least I helped you out, hopefully. And that makes it all worthwhile. (Not really. I’m going to go have a shot of my office tequila, and it’s not even 10AM as I’m writing this.)


I’m Not Gonna Waste Time Debunking Crap on LinkedIn.

AI
36 Comments

LinkedIn is full of absolute trash these days. Just flat out bullshit garbage. (Oh yeah, that – this post should probably come with a language disclaimer, because this stuff makes me mad.)

People wanna look impressive without actually putting in the work to gain real knowledge. They’re asking ChatGPT to write viral “expertise” knowledge posts for them, and they’re publishing this slop without so much as testing it.

I’m going to share an example that popped up on my feed, something LinkedIn thought I would find valuable to read:

AI bullshit slop

It’s pretty. It looks like it was written by an authoritative source.

But if you drill just a little deeper, there are telltale giveaways that the author is a lazy asshole who wastes other peoples’ time. They didn’t bother to put the least bit of fact-checking in. I’m not even talking about the overall accuracy, mind you – let’s just look at the comparison table. On the left side, there are two sections marked “Efficiency”, and on the right side, two sections marked “Usage”:

Efficiency Efficiency

That doesn’t make any sense. Then keep reading, and look at the bottom sections. On the left, they both say the same thing – but only one thing is checked:

WELL WHICH ONE IS IT

Thankfully, the situation is much better on the right side, where, uh, both things are checked, so that’s also meaningless:

So nice they checked it twice

I hate this bullshit. I hate it. Haaaaate it. I work so hard to help debunk query myths and help you write better queries, and then some jerk-off like this slaps a prompt into ChatGPT, creates a pretty (but altogether full of crap) table, and it gets engagement on LinkedIn – thereby spreading misinformation all over again.

If you’re lucky, and the thought slop leader hasn’t tried to hide their source, at least LinkedIn puts a little “content credentials” icon at the top of AI-generated images. You can hover your mouse over it like this:

Content Credentials

ChatGPT and Google Gemini are both labeling their images with hidden tags, helping sites like LinkedIn identify content that was AI-generated. However, ambitious authors can strip those tags out, trying to claim ownership of their content. (sigh) And they will, because they’re in a race to be the best slop leaders.

See, LinkedIn actually rewards bad content because commenters jump in to point out the inaccuracies, thereby making LinkedIn think the content was comment-worthy, and so it should be promoted to more viewers. Those viewers in turn don’t read the comments, and they just think the original post was merit-worthy – after all, it was recommended by LinkedIn – which spreads the misinformation further.

I love AI, and I use it every single day, but I hate the holy hell out of what’s happening right now.

So even though it drives me absolutely crazy to see this fake knowledge being passed off as truthful, I’m not gonna bother debunking it. These morons can create it faster than I can debunk it. I can’t even block these “authors” when I see them writing trash, because… they’re the very people who need to be reading my stuff! Sure, they’re slop leaders today, but tomorrow they may turn the corner and want to start actually learning SQL, and when they see the light, I wanna be there for them.

I’m just gonna keep offering you the best alternatives that I can: real-life, hands-on material that I’ve learned through decades of genuine hard work. Hopefully, you’ll continue to see my work as worthy, dear reader, and keep sharing the good stuff that you like, and keep investing in the training classes that I produce. Fingers crossed.


[Video] Office Hours: Back in the Bahamas Edition

Videos
1 Comment

Yes, I’m back on a cruise ship with another 360-degree video. Lest you think I’m being wildly irresponsible (or responsible perhaps?) with your consulting and training money, be aware that this particular cruise was free thanks to the fine folks in the casino department at Norwegian Cruise Lines. In between beaches and blackjack, let’s go through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:22 Petr: How would you convince a customer running our app on SQL Server 2022 in a sync AOG to enable delayed durability on their production databases? Our load tests show a 10× improvement in write throughput and elimination of commit latency spikes, with acceptable data-loss risk.
  • 05:30 HeyItsThatGuyAgain: You’ve done a lot to support the “Accidental DBA.” Do you recommend any resources for the “Accidental Data Team Manager?”
  • 05:51 How Did I Even Get Here: I am one of only 4 DBAs at a big university that lumps Dev and Prod together. I am our only SQL Server DBA and I assist with Oracle. This is my first DBA job. I can study what I want in my free time but I’m overwhelmed. Do you have any advice for how to decide where to explore?
  • 07:17 ArchipelagoDBA: We have been using default sampling for our statistics updates. We have now noticed that this is causing low-quality execution plans. Are there any risks to going flat out enforcing FULLSCAN for all, or is it better to gradually roll it out to help with specific queries?
  • 08:31 SteveE: Hi Brent, How is the adoption of AI development tools looking out in the real world? Are you seeing many clients using fully automated AI development tools, an uptake in assisted programming using chatbots or are teams still using traditional methods?
  • 10:10 Subash: I need to take migration assistance report for 2025 servers.. For upgrade 2019 to 2025 so that my manager asked the compatibility changes report before upgrade I checked in DMA 2025 option is not available..i checked in SSMS 21 version available till 2022 not for 2025.help
  • 10:56 EagerBeaver: Hi Brent, can parameter sniffing happen on a query (not SP)? I run the same query twice with different parameter and second query got same cardinality estimation as the first one. Index statistics for used index are recalculated and 2 parameter have vast difference in Equal Rows.
  • 13:48 I_like_SQL_Server: Hi Brent, We have some wide and large tables with unindexed foreign keys (3rd p db). There are too many FKs so I cannot index them all, how should I think and how do I prioritize? You have a modul in Mastering index tuning about FKs but it doesn’t take up this specific challenge.
  • 15:02 Stumped: Every day something causes the log file in one of my very large databases to grow to over a terabyte, which fills the drive. How do I find out what is doing that?
  • 16:18 2400BaudSpeedster: I dislike copilot and can’t figure out how to like it. Is there anyway to avoid copilot integration besides not upgrading past a certain version? Any suggestions on how to get over it and somehow embrace it?

Who’s Hiring Database People? March 2026 Edition

Who's Hiring
11 Comments

Is your company hiring for a database position as of March 2026? Do you wanna work with the kinds of people who read this blog? Let’s make a love connection.

You probably don't wanna hire these two.

If your company is hiring, leave a comment. The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it.
  • An email address to send resumes, or a link to the application process – if I were you, I’d put an email address because you may want to know that applicants are readers here, because they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • It has to be a data-related job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.


I’ve Been Using Macs for 20 Years. Here’s What You Wanted to Know.

Home Office
12 Comments

tl;dr – Because I’ve always chosen to run SQL Server in VMs, switching was easier for me than you might expect – but if you’re a Microsoft IT pro, I don’t recommend switching.

Home office setup, circa Jan 2026

Now for the long version.

About 20 years ago, back in 2006, I excitedly blogged that my boss at the time had agreed to let me buy a Mac (with my employer’s money.) I’d been really frustrated with Windows for quite a while at that point. Even today, the Windows 11 start menu disgusts me. I literally paid for this operating system, why are you showing me ads and irrelevant garbage?!?

I’d been trying to make the switch from Windows over to Linux since around 2002, and I’d never been able to make it stick. I kept having problems on Linux with hardware, driver support, apps, and just plain usability. Apple’s Mac OS seemed to be a gateway drug to Linux: it was built atop FreeBSD, so I thought I’d be able to use Apples as a stepping stone to transition all the way over to Linux.

It didn’t end up working out that way. I was so delighted with Apples, and their ecosystem kept growing. Today, it includes phones, tablets, headphones, TVs, and the third party ecosystem stretches out far beyond that. I decided to stick with Apples rather than move on to Linux.

If you’re on Windows and you’re thinking about making the switch today, I actually wouldn’t recommend it for most folks. The mental work required in order to switch platforms is kind of a pain in the rear. You’ll be less productive for the first year or two, by far, and any supposed gains won’t come until long after you’ve had many frustrations along the way. If you do make the switch, I’d recommend a pre-built Linux machine like the ones from System76 or Lenovo. Macs are great, but the current OS (Tahoe) is a hot mess. I haven’t upgraded myself, still on Sequoia. Anyhoo, on to how it works for me.

The first big question:
how does SQL Server work?

My MacBook Pro

Hot take: it doesn’t really, and it doesn’t matter. Hear me out.

When I first made the switch 20 years ago, I was a production DBA, and I didn’t run SQL Server locally anyway. I didn’t even run Management Studio locally – I used a jump box, a VM in each data center with all my tools installed. I’m a huge, huge believer in jump boxes:

  • When you need to run something for a long period of time without worrying about disconnects, jump boxes make it easy
  • When your employer decides to mandate workstation reboots to apply some stupid group policy or Windows update, no problem
  • When there’s a network blip between your workstation and the data center, your queries don’t fail
  • When you have multiple domains, like complex enterprises with a lot of acquired companies, no problem – you can set up multiple jump boxes or multiple logins
  • When you have to support a diverse environment that requires different versions of SSMS, some of which may not play well with each other, it doesn’t matter, because you just build different jump boxes for different needs
  • When you don’t have your laptop available, like if you’re visiting a friend or family, no problem, as long as you can VPN & RDP in
  • When your laptop dies, you can still tackle production emergencies while you get the new laptop up to speed – you just need RDP

Notice that none of those start with “if” – they start with “when”. You might be lucky enough to be early enough in your career that you haven’t hit those problems yet, and you may even make it all the way through your career without hitting them – just like you might make it all the way through your career without needing to restore a database or do a disaster recovery failover. You might hit the lotto, too.

There are two kinds of DBAs in the world: the experienced, prepared ones, and the ones who run SSMS and SQL Server locally on their laptop.

After witnessing a lot of nasty disasters, I’m pretty passionate about that, and you’re not going to convince me otherwise. When I have a long term relationship with a client, they give me a VPN account and a jump VM, and that’s the end of that. I know there are going to be commenters who say, “But Bryant, I work with small businesses who can’t afford a jump VM,” and I don’t have the time or energy to explain to them that I work with small businesses too, and their sysadmins already have their own jump boxes because they’re not stupid. Small jump boxes in AWS Lightsail are less than $50/month.

Consultants and trainers need SQL Server though.

I said I made the switch back when I was a production DBA, and I had no need for local SQL Server then. When you’re a consultant and/or trainer, though, you’re gonna have to do research, write demos, and show things to clients, which means you’re gonna need access to a SQL Server.

Most consultants and trainers I know use a local instance of SQL Server for that. In theory, you can run SQL Server on MacOS in a container. I don’t, because it still doesn’t give me SQL Server Management Studio. When I’m teaching you performance tuning, I have to meet you where you are, and show you screenshots & demos with the same tools you use on a daily basis, and that means SSMS. So for me, the container thing is useless since I need Windows anyway – there’s no SSMS on the Mac, at least not yet.
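If you do want to experiment with the container route anyway, the usual recipe looks roughly like this – a sketch, not an endorsement, and it assumes Docker Desktop on Apple Silicon with Rosetta x86 emulation turned on. The SA password here is a placeholder; pick your own strong one:

```shell
# Pull and run SQL Server 2022 in a Linux container.
# --platform is needed on Apple Silicon because the image is amd64-only.
docker run --platform linux/amd64 \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 --name mssql -d \
  mcr.microsoft.com/mssql/server:2022-latest

# Then connect with sqlcmd or any other client - just not SSMS,
# because there's no SSMS on the Mac.
sqlcmd -S localhost,1433 -U sa -P 'YourStrong!Passw0rd' -Q "SELECT @@VERSION"
```

It works well enough for quick query tests, but as I said above, it doesn’t solve my SSMS problem.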

I use cloud VMs and cloud database services (like AWS RDS SQL Server and Azure SQL DB) because I’m picky about a few requirements.

I want my demos to use real-world-size databases and queries, which means the large versions of the Stack Overflow database. I want to deal with 100GB tables, and I want to create indexes on them in a reasonable time, live, during classes. There are indeed laptops large enough to handle that – for example, my MacBook Pro M4 Max has 16 cores and 128GB memory – but also…

I want redundancy, meaning multiple SQL Servers ready to go during class. If something goes wrong with a demo, I want to be able to switch over to another instance without losing my students’ valuable time. I can’t tell you how many presentations I’ve sat through where the presenter struggled with a broken demo, saying things like, “Hang on, let me try restarting this and restoring the database, I can’t understand why this is happening…” They’ve already lost the audience at that point. Like a timeless philosopher once said, ain’t nobody got time for that.

I teach live classes online, and if a local instance of SQL Server is struggling with a nasty query, it’s going to affect the video & audio quality of my live stream – especially if I’m running multiple local VMs, some of which may also be restoring databases in the background to prep for the next class.

I have to jump around from demo to demo when I’m working with clients on private issues. They may be facing several radically different issues, or they may want me to jump to an unplanned topic. Because of that, I need multiple instances ready to go with fresh, clean Stack Overflow databases. After each demo, I can kick off a database restore to reset the server back to baseline, while I switch over to another VM to keep moving on the next demo.

I have to teach onsite sometimes, and we’re talking about hardware requirements that are way beyond even the largest laptops. I would either have to haul around multiple laptops and a networking setup, or … just have internet access. I know that a decade ago, it was common to be in environments where you might not have internet, but that hasn’t happened to me in a long, long time.

So for me personally, local VMs are not the answer. It doesn’t matter whether my laptop is Windows, Mac, or Linux, I just can’t accomplish the above goals with local instances of SQL Server. Whenever I’m teaching, I fire up multiple cloud VMs, all based off my standardized SQL Server & SSMS image, with my demos ready to go. I open RDP connections to each of them, and then I can switch back and forth between them.

I typically use these AWS instance types:

  • i7i.xlarge: 4c/32GB, 937GB NVMe, ~$4 for 10 hours
  • i7i.2xlarge: 8c/64GB, ~1.9TB NVMe, ~$8 for 10 hours

Those costs can pile up – there are months where my VM bill is around $1,000! However, those are also months with high income, so I look at it as the cost of doing business. Again, those costs would be present whether I was running a Mac laptop or not.

If your time is effectively free, then a more cost-effective solution would be to buy or rent a big server, rent space in a colo somewhere, install a hypervisor, and manage remote connections to it. I have tried that (very briefly), and I don’t have the patience to deal with support problems on mornings when I’m trying to prep for a client engagement or training class.

I’m not trying to convince you to do any of this. Really, switching to Macs doesn’t make sense for most Microsoft data professionals, and it never has. However, SQL Server isn’t the only thing I do, and I personally happen to like the way Macs handle a lot of the other stuff I do.

In the midst of setting up my home office in a different room of the house
Skytech gaming PC at bottom right

I do still use Windows machines! I have a Skytech Windows gaming PC with an NVidia 4090 that I use for Claude Code and for local AI models, and I run an instance of SQL Server 2025 on there for quick query tests or simple blog posts when I’m in my home office. I also have a leftover Windows laptop that I use as a side monitor when I’m live streaming and looking at my side camera, answering audience questions. I just run Chrome on that though.

Things I love about Macs

The hardware is fantastic. Apple Silicon processors are ridiculously battery-efficient, powerful, and have brilliant thermal management. I haven’t heard a computer fan since the Silicon processors came out in 2020. There have been short 2-3 day trips where I haven’t bothered to bring a laptop charger because the thing just runs for days. The drawback is that Apple’s hardware, while fantastic, doesn’t offer cutting-edge features that you might find in other brands of laptops, phones, and tablets. For example, I’ve got a Huawei Mate XTS dual-fold phone that I absolutely adore, and I wish Apple offered something similar, but they probably won’t for another year or two at least.

The hardware actually has a resale value. I know the pricing seems expensive at first, but it holds way more value than PC laptops. In late 2024, I bought my M4 Max (16″, M4 Max, 128GB RAM, 2TB SSD, nano-texture display) for $5549, and out of curiosity, I just ran it through a couple of trade-in sites, and the average trade-in value was $3,300. I usually trade in my Macs every couple/few years, and the ownership cost usually averages out to about $100/month. That’s a similar cost of ownership to buying a new $3,000 PC laptop every 3 years, and those things are worthless after 3 years of hard road use.
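If you want to check my math, here’s the back-of-the-envelope version – the 24-month cycle is an assumption on my part, since I trade in every couple/few years:

```python
# Rough cost-of-ownership math for the MacBook Pro above.
purchase_price = 5549    # late-2024 M4 Max build
trade_in_value = 3300    # average quote from trade-in sites
months_owned = 24        # assumption: two-year trade-in cycle

monthly_cost = (purchase_price - trade_in_value) / months_owned
print(f"${monthly_cost:,.2f}/month")  # → $93.71/month
```

That’s where the “about $100/month” comes from. Stretch the cycle to 3 years and the monthly number drops, although so does the trade-in value.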

The shared memory architecture is great for AI users. My MacBook Pro has 128GB of memory, and there’s no division between memory used by the operating system and memory used by the video card. There’s not really even such a thing as a video card – it’s all integrated onboard in Macs. As a result, you can use MacBook Pros to run giant machine learning models that can’t possibly fit in 16-32GB PC video cards, let alone laptop GPUs. However, if you’re working with 16GB quantized models that do fit in a desktop NVidia graphics card like a 4090 or 5090, the NVidia card will absolutely smoke Apple Silicon processors in terms of token processing speed. (Sure, Apple fans will tout that the MBP can do the processing on the road, on battery, silently, but still, you’re not gonna be happy with Apple Silicon’s AI speeds if you’re moving from a desktop 4090 or 5090.)

The operating system is stable. I don’t remember the last time I had an OS crash or error that required a reboot. However, the term “stable” also means that there haven’t been any significant advancements in the last decade or so (which is why I was looking at moving back to Windows for a while there.)

There are a lot of ecosystem benefits. When you copy/paste on a MacBook Pro, the same copy/paste buffer is available on any of your devices – you can paste on your iPad or iPhone. AirDrop lets you easily push photos, files, contacts, whatever to other devices, including other devices around you. When a text comes in on your phone, you see it on all your devices, and your messages are synced across all of them. (Mark a message read on your laptop, and it’s read on your phone too, etc.) Pair your AirPod headphones on your phone, and they automatically work on your laptop too. Pretty much anytime you think about how data could be shared across different devices, it just already works on Apples.

There are a ton of neat apps available. Now, this is where it gets tricky. You’re reading this Microsoft database blog, and you’re likely doing a lot of work on the Microsoft platform. That also means you probably work for a company that licenses the Microsoft suite for all employees, and relies on it every day. You live in Outlook, Excel, Teams, and SSMS. I’mma be honest, dear reader: those apps are garbage on Macs. Oh sure, there are versions available (except for SSMS), but those versions are sad facsimiles of their Windows equivalents. Outlook and Excel in particular are amazing at what they do on Windows.

So if you decide to switch, you’re likely going to end up using a lot of other apps instead. Long ago, fellow Mac & Linux user Jeremiah Peschka got me started on the concept of, “If the app is available in the browser, you should use the browser version,” and that’s paid off. I use the Google online suite for my email and calendaring, and a lot of web apps for my business work. When I do have to use a local app, it’s likely something that’s in my task bar below:

My taskbar

The fewer Microsoft apps you rely on to do your job, the easier you’ll find it to switch to Macs. The more of them you use – and in particular, if they include the O365 suite – then honestly, you shouldn’t switch. You’ll be happier in Windows.

Your questions from LinkedIn

That's Anthony Bourdain by painter Cassie Otts
Side camera laptop to show audience questions during streams

I posted on LinkedIn that I was going to write a blog post about this, and I asked y’all what you’d want to know. Here were your questions:

“Are you benefiting from the shared memory architecture for local LLMs?” – Eugene Meidinger – Yes, I use LMStudio to run large local LLMs, and it’s really useful when I’m working with clients. I don’t wanna paste their code into a cloud-hosted LLM that may not take privacy seriously. I would only recommend this if you need the privacy aspect though – otherwise cloud-based LLMs from Anthropic, Google, and OpenAI are sooo much better and faster.

“Is it a pain to run parallels for Windows only software?” – Eugene Meidinger – You’re going to laugh: I do have Parallels installed, but I only use it to run … Mac OS VMs, hahaha! I don’t even have a Windows VM set up. I had them pre-2020, but when Apple made the switch to Silicon processors, the ARM version of Windows was in a pretty sorry state. At that point I just decided to draw the line in the sand and be done with Windows locally, period. Besides, I’d switched to Mac so long ago at that point that SSMS was the only Windows-only app I had left.

“What was the learning curve like, and how long before you felt fully productive?” – Rebecca Lewis – It was terrible. AWFUL. It was probably 2 years before I felt fully productive at the same speed that I was before. This is the single biggest reason that I wouldn’t recommend that any seriously experienced Windows user make the switch.

Funny side note: I forced my mom to make the switch. I used to do tech support for her, but at one point, I was just so rusty on basic Windows consumer support questions that I said, “I haven’t used Windows for years, and I just don’t know how to fix your new printer.” So I bought her an iMac, introduced her to the nearest Genius Bar, and they took over tech support. That was awesome. I haven’t fielded a tech support question from her in years.

“Did you switch because of SQL Server work, or despite it? I mean was the move to Mac about improving your SQL Server workflow, or was SQL Server just baggage you brought with you?” – Rebecca Lewis – Despite it. I was a production DBA at the time, but I’ve always just been curious and liked trying new things. SQL Server was definitely just baggage that I brought with me.

“Is there anything that Windows has that you miss or would like in Mac?” – Vlad Drumea – A lot!

  • Outlook, Excel, SSMS for sure. Yeah, technically they exist on Macs, but they’re nowhere near as fast or feature-complete as their Windows counterparts.
  • Power BI Desktop. The lack of a Mac version actually stopped me from using Power BI going forward – I tapered off my Power BI usage a couple years ago when it seemed clear that Microsoft wasn’t going to build a Mac client.
  • Games are a weak spot too. Often I’ll read about a game on Steam (like Decimate Drive), check the operating system requirements, and sigh.

“I’d be most interested in whether there are ways to make the MSSQL extension for VS Code feel comfortable, now that Azure Data Studio is not long for this world.” – Daryl Hewison – When it comes to database developer tooling, Microsoft seems to have all the attention span of a toddler hopped up on espresso. I want my blog posts to meet people where they’re at, so to speak – I want the pictures to seem familiar – so I stick with SSMS for now. I’m going to stay on that route in 2026, and in 2027, I’ll revisit to see whether the MSSQL extension for VS Code has been consistently improved, see if they’re staying on top of the Github issues, etc.

“How do you like the terminal and package management?” – Phil Hummel – I don’t think any consumer operating system has really solved the problems of package management and virtual environments cleanly yet. For example, if I wanna experiment with a data analytics tool, it’s probably going to have all kinds of package requirements that slightly differ from other tools that I have installed, and the old & new packages won’t play well with each other. Virtualization still feels like the safest, cleanest answer to me.

“Any issues with PowerShell?” – Ron Loxton – I’m probably the world’s lightest PowerShell user. I just use it to merge text files together, so it’s fine for me.

“Why Mac over a Linux distro?” – Mark Johnson – I touched on this above, but I wanna reframe it as, “If you were gonna switch away from Windows in 2026, would you switch to Mac or Linux?” I’m heavily into the Apple ecosystem (I have an iPhone, iPad, Apple TVs, HomePods, etc.), and there are benefits to keeping everything in the ecosystem. However, if I wasn’t in that ecosystem – like if I used an Android phone as my daily driver – then I’d definitely buy a Linux laptop from a specialized vendor like System76, spend a week banging on it for basic tasks like USB, Bluetooth, wireless networking, pairing with a cell phone, editing PowerPoints, remote desktopping into places with Entra authentication, Zoom meetings, printing, closing the laptop to see if sleep/resume worked without it setting my laptop bag on fire, etc. If it worked, I’d go with that. If not, System76 has a 30-day return policy, so I’d return it, and give Macs a shot.

If you’ve got any questions about my Mac work, feel free to leave ’em in the comments and I’ll answer ’em there.


The Tech Consulting Market Isn’t Looking Good.

Consulting
17 Comments

After hearing the same dire news from several of my consulting friends, I put a poll up on LinkedIn:

Consultant poll results

About half of the consultants out there are having a tougher time bringing in new clients than they have in the past.

There are a couple things to keep in mind about the numbers. First, some folks call themselves consultants, but they’re really long-term contractors, working for the same client full time for months (or years!) on end. If these people have long-term contracts, they may not be looking for new work, so they may not see the shift in the market yet.

Second, I purposely said “tech consultants” because I wanted to cast a wide net: people who work with any kind of technology. Of course, my peer group on LinkedIn skews towards data people and developers, though.

Finally, I didn’t quantify anything here, and it’s a really short, simple poll with just 3 choices. There aren’t options for much better or much worse, nor does it ask for any quantifying data like number of incoming leads, billable vs unbillable time, or even whether or not the consultant is even looking for new work. It’s just a quick straw poll to help y’all look around the room to see what’s going on.

In the poll’s comments and on related social media discussions, opinions were all over the place about the root causes. Some people say it’s uncertainty, others say it’s the US economy, and of course there’s the elephant in the room, AI. I’ve talked to several existing clients where managers have said, “For 2026, anytime we wanna buy something, build something, or hire someone, we’re gonna try AI first and see what happens.” Yesterday, tech company Block announced that they’re laying off about half of their 10,000 employees in order to force the rest to use AI. Block has always been cutting-edge, and I wouldn’t be surprised if many other big companies – and not just tech ones – follow suit.

Oh I remember these.

February 2026 feels like March 2020.

We know something’s going on, and there’s a lot of fear, uncertainty, and doubt about what the implications are.

Back in March 2020, I remember sitting in Iceland, getting ready to go home because the US State Department told us to, and I was thinking, “Well, business is going to shut down for 6-12 months, so I guess I’m gonna be on the bench for a while.” I actually put serious thought into deciding which tool I was going to learn next because I’d have so much free time. As it turned out, the uncertainty cleared pretty rapidly, and in 2020-2022, most of us in tech worked more (and harder) than we ever had before, helping companies deal with chaotic change. Will 2026-2027 go the same way? It’s too early to tell.

This post doesn’t offer precision or analysis. I just wanted to give y’all a place to chat about it, and to know that you’re not alone. It’s certainly happened to me too – my new consulting pipeline almost shut off starting in December, although it hasn’t really affected me yet because I’d long planned to mostly be on vacation Dec-Feb, and then work on training material for SQL Server 2025 & SQLBits when I returned to the office in March. I’m keeping an eye on incoming leads, though, because it’s wild how quickly it shut down.


[Video] Office Hours in the Vegas Home Office

Videos
0

Let’s hang out in my home office in Vegas and go through your top-voted questions from https://pollgab.com/room/brento. The audio’s a little echo-y in this one because I just moved my office down to a small first floor guest bedroom with hardwood floors instead of carpet, and I haven’t treated the room for noise yet.

Here’s what we covered:

  • 00:00 Start
  • 03:56 DBAInAction: Hi Brent, Have you actually come across anyone running SQL Server AGs in containers, or is that more of a ‘paper’ solution? Honestly, do you think containerizing SQL Server is even worth the effort? Thank you!
  • 04:24 New Developer: After 30 years as a prod-support DBA I got laid off but got a Development DBA position. I know SQL, batch scripting, and can usually tell what a Powershell script is doing, but I’m not a developer, at least not yet. Any advice on how to proceed? I’m pretty lost.
  • 08:33 Little Bobby Tables’ mother: Even with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, I still can’t reproduce the slow first execution of a SP each day. Is there another SQL cache that can be cleared? Is there an OS cache for on-prem? (my entire Dev team has done your courses – thank you)
  • 11:07 Blommetje: Do you ever consider quitting SQL and doing something completely different or just retire?
  • 14:44 Dopinder: Should we be running regular CheckDB checks on log shipping secondary? Have you ever seen DB corruption travel from primary server to log shipping secondary server?
  • 16:02 CorporateDBA: To move sysadmin logins to least privilege, I tried SQL Audit, but the log growth is unmanageable. How do you identify the “working set” of permissions for a legacy user without killing disk I/O? Is there a lightweight way to profile needs over time?
  • 16:43 MyTeaGotCold: Know any good resources for Resource Governor?
  • 18:24 TheOneThatFailedTheInterview: Interview question: How to estimate resources for new SQL project (I know, it is very generic)? Answer like extrapolate from previous system did not make interviewers happy. Is there some scientific method or is it always a guesstimate?
  • 22:16 marcus-the-german: Hi Brent, you mentioned that you paint your green screen. Is this a special paint for green screen or just green?
  • 24:08 Andrew D: We are on Hyperscale planning to move from USWest to USWest2 so we can use Availability Zones. There is documentation around using FOG to move the primary but I can’t find anything about moving HA or Named replicas (assume manual) Are you aware of any gotchas?
  • 25:21 Raphra: Will get a chance to see you at Fabcon this year?
  • 26:58 marian: Hello Brent, Have you ever thought of taking some young apprentice and training him / her in the Jedi arts so he / she can carry on your outstanding community work?
  • 28:01 Felipe: Why does it seem harder for DBAs to work for companies outside their own countries? I see developers working under many different models for companies around the world.
  • 30:43 AndrewG: With the current job atmosphere. Is Jr/entry support dba dead? It seems hoping for a data analyst job ( even though this seems limited as well) is the way to go and hope for DBA experience.


I Don’t Take Private Questions at Pre-Conference Classes.

Writing and Presenting
4 Comments

When I first started teaching database sessions at conferences, I noticed a pattern.

When I finished the session, closed PowerPoint, and thanked everyone for attending, a big queue of attendees would instantly start forming at the podium. People would line up to ask private questions, one at a time.

And simultaneously, a small audience would form – people who wanted to hear every question and answer, but didn’t actually want to ask any questions of their own.

As a presenter, that sucked. There were sessions where I couldn’t leave the room until two hours after my scheduled time was over. Several different people in line would ask the exact same question, but because the line was long, they wouldn’t hear that the question had already been asked and answered, so I’d have to repeat it again.

So I came up with a better solution – for me, at least, and maybe it’ll work for you.

Whenever I’m teaching a day-long pre-conference class at a conference, I start with a few logistics slides. I explain what the attendees will learn during the class, how bio breaks will work, and that questions are welcome at any time through the day. Just raise your hand whenever, and we’ll cover your questions right there, as long as they’re related to what we’re discussing.

However, I explain that I have three rules.

First, I can’t take any private questions during bio breaks. I need to pee too, and I have to set up the demos for the next module. Without this, my bio breaks would be utterly frantic. Even WITH this, I still get people coming up during the bio breaks, and I have to gently remind them to raise their hands during the class because I need the break time to set up.

Second, the day will finish with an hour of open Q&A. I aim to end the class at around 4PM, and at that time, we’ll take a short 5-minute bio break. People who want to pack up and leave to beat rush hour traffic or catch a train are welcome to pack up and go, because the official training material is done. People who want to ask questions are welcome to stay, and the rest of the time is spent doing totally open Q&A. Any questions are welcome, whether they’re related to the training material or not – you’re welcome to ask whatever question has been giving you trouble at work.

Third, when the open Q&A is over, there are no private questions allowed. I’ll stand and answer questions as long as people keep asking them, but when you stop asking questions, THE CLASS IS OVER. I will thank everyone for attending, and I’ll pack up my laptop and go. If there’s a question you don’t feel comfortable saying in front of the group, you’re welcome to email me at help@brentozar.com, and I’ll hit that after I get home from the conference. (They’re given specific things to include with their help email, and told that if they don’t include those things, they’ll get my standard template response that suggests ways they can get help for free.) I make a joke out of it by saying, “When open Q&A is over, I’m going directly to my hotel room for a bottle of wine to recover, and you are not going to stand in between me and that wine.”

This reshapes the end of the training class.

I announce that the training material’s done, and that we’re going to take a 5-minute break so folks who want to leave can pack up and head out, and then we’ll switch to the open Q&A portion. I thank everyone for coming today, and take a bow, and the “finishing” round of applause hits. I put a 5-minute timer up on the screen, and I close all my apps so people can see that we’re done with that.

Then, when the open Q&A starts up, it’s genuinely fun. People understand that we can jump around to any topic they’re interested in, and the more questions they hear, the more comfortable they seem to be in posing their own.

But as the questions start to slow down after a while, I have to remind folks about Rule #3. I’ve learned over the years that if I don’t keep repeating it as the questions taper down, there’s always gonna be somebody who tries to pull me aside afterwards to say, “Hey, I just had a few quick questions,” and then they open up a Word doc with a wall of questions their boss sent them to class with.

As the Q&A slows down, as we approach 5PM, I’ll say in a joking tone, “Going once… going twice… any more questions? Remember, after this is over, I’m going to pack my laptop up, go to my hotel, and order room service and a bottle of wine, because I need to recover. When there are no more questions, this class is over, and there will be NO PRIVATE QUESTIONS, remember? You’re not going to line up and accost me afterwards, right? Any more questions?” I genuinely want to hit every single question, and I want everyone in the audience to learn from every question. Questions and answers are so much fun – heck, that’s even part of why I do so many of them in my Office Hours videos, and clearly thousands of y’all love watching me answer other peoples’ questions.

Eventually, the questions stop, and I close things out by thanking everyone for coming, and wishing them a safe drive home. I close my laptop, yank the HDMI cable out, and as I’m putting the laptop in the bag, at least a couple people still line up, every single time. I still say, “No, sorry, remember, we talked about this, we’re not doing private questions.” And they say the same thing every time – “yes but it’s just one question…”

By writing this blog post, I don’t expect anything to change. The people asking these questions aren’t listening to my instructions right there in front of the room, spoken out loud. They’re sure as hell not reading blog posts like this proactively, and filing that information away in their mind for whenever they see me at a pre-conference class. But those of you who are loyal long-term readers, the ones who really do pay attention, will see what happens, and you’ll smile and nod at me when it happens, because you’ll remember this blog post. And we’ll share a smile and a chuckle while I try to politely pack up my laptop bag and go home.

Wanna see me in action? I’m teaching an all-day training course at SQLBits called Dev/Prod Demon Hunters, and here’s how it’ll work.


10 Signs It Was Time to Hire Me

Clients and Case Studies
0

You’ve been reading my blog, watching my videos, and maybe even taking some of my training classes. You’ve heard me say things like “my clients” from time to time, and you’re wondering… why do companies actually hire me? What problem are they trying to solve?

Well, when clients meet with me for the first time, I have a series of questions I put up on the screen for them to talk through:

Why this? Why now? Why me?

The last one basically asks, “What’s the straw that broke the camel’s back? What made you pull the trigger to schedule a sales call with me today as opposed to waiting another couple of weeks?”

Here’s a rundown of several of my recent clients and the reasons they hired me, in no particular order:

1. “Our Azure bills are growing out of control.” The small company’s customer base had been growing 10-20% per year, but their Azure bills had more than doubled over the last year. They kept hitting 100% CPU, and they’d upsized from 4 cores, to 8 cores, and just before calling me, to 16 cores. Management told the tech team, “We’re going to 16 cores, but only for emergency purposes – you gotta get this database server back under control, and back down to 8 cores, max.”

2. “We’re playing Whack-a-Mole.” The company had migrated to Azure SQL DB Managed Instances, and ever since, every couple of weeks, they faced a performance emergency they’d never seen before. Management was tired of the surprises, and wanted to know if it was an MI problem or something else. (It was something else.)

3. “Our third party ERP app performs terrible.” The entire company’s staff was grinding to a halt because salespeople couldn’t place orders, manufacturing processes were timing out, the shipping dock was slowing down, etc. The ERP vendor was blaming the SQL Server hardware and storage, but the company wasn’t so sure.

4. “We want to become proactive.” The small company & team had been growing for years. They’d started as a 3-person shop, and were now approaching 30 people. Whenever they’d had performance problems in the past, the original founder would kinda wing it, but now they wanted to become more practiced and polished. They wanted to assess the server and the team’s existing skills, then build a learning plan.

5. “We know your tools, but we’re hitting a weird wall.” The large company with a team of 3 full time DBAs had been using the First Responder Kit scripts for years, been through all of my Mastering classes, and were able to handle most of their performance issues. However, they were stumped by an unusual recurring storm of poison waits that would strike at random days/times.

6. “The DBA’s gone, and we need a plan.” It never was really clear whether the DBA left on their own or was laid off by cost-cutting management, but either way, things had started going very wrong with the database server. Management needed a prioritized list of what to fix versus what could get by, and then assignments to various existing team members to divvy up the work.

7. “We need a data warehouse strategy.” The so-called data warehouse server had two dozen databases on it, and was getting hammered by half a dozen different teams who’d built a few dozen applications, including real time OLTP on it. The DBAs knew things were bad, but the teams needed an independent outside opinion, in writing, that management could use to build a new strategy over the coming years to get the house in order.

8. “We disagree about whether our caching is working.” Management brought me in because the developers swore the app was using Redis for caching, but the DBAs swore the app was hitting the database constantly. (It turned out they were both right.)

9. “We’re getting ready to refresh everything.” Every 4-5 years, this company would build new SQL Servers. They wanted advice on a range of topics like whether it was time to use readable replicas, if they should try Fabric Mirroring, and whether they should invest developer time to move to a newer version of Entity Framework.

10. “We want fast training targeted at our skill level.” The company had about 20 developers and 1 full time DBA. They knew they’d been shipping good-enough code for a decade, and they wanted to start leveling up. The DBA didn’t have the time or training material to bring the developers up to speed quickly. The company wanted a quick assessment of the code base, then a couple of days of training for the developers to make things better each time they touched existing code.

I love my job because every week is different. I get called into all kinds of companies to solve all kinds of problems, quickly. If you’d like my help, here’s my 2-day SQL Critical Care® process, and you can schedule a sales call from there. I look forward to working with you!


[Video] Office Hours: Ask Me Anything About Microsoft SQL Server and Azure SQL DB

Videos
1 Comment

Want to ask your own questions or pick the ones I discuss? Head over to https://pollgab.com/room/brento to get involved. In the meantime, let’s hang out in the backyard, where Beni makes a special guest appearance.

Here’s what we covered:

  • 00:00 Start
  • 00:57 Old and Tired DBA: Should I convert all of my parameters to local variables in my stored procs?
  • 02:21 MyTeaGotCold: Do you ever use hints like ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES or ASSUME_JOIN_PREDICATE_DEPENDS_ON_FILTERS? They’ve saved me a few times, but I think they’re not in most people’s toolbox.
  • 03:42 emre: We have a problem with VMware snapshot creation: Unable to quiesce on SCV: Exception occurred when quiescing virtual machines. Is using a newer version of SQL Writer without updating MS SQL Server possible? Is it a real thing? Thanks
  • 07:00 I_like_SQL_Server: Hi Brent, what is your favorite Tequila drink? (I’m posting this on a Friday so it’s legit.)
  • 09:45 I_like_SQL_Server: Hi Brent, I’m struggling with setting up proper statistics handling and wonder if you have any updated resources or planning on statistics maintenance? We have some large tables (for us a couple of terabytes is large) and it’s hard planning the correct sample rate.
  • 11:40 Elwood Blues: What are your pros / cons of SQL backup to disk vs backup to URL in context of Azure SQL VM? (Speed, cost, reliability, etc.)

Free SQL Server Performance Monitor App by Erik Darling

Monitoring
8 Comments

Let’s hop right into it. I’ve got one lab server (named Colorful), so when I open the app, my estate overview isn’t particularly interesting – yours will be:

Performance Monitor Lite

Go into the server’s details, and you see the top wait types over time:

Wait stats

You can choose which wait types to display. By default, it just shows the top 20, plus poison wait types, to keep the graph clean and easy to understand. There’s a dropdown at the top right to choose your time span: last 1 hour, last 4 hours, 12 hours, 24 hours, 7 days, or custom.

Go into the Queries tab, Performance Trends, and there are graphs for query execution time, proc execution time, and the number of queries executed – so you can see if your server is busier than usual:

Query performance trends

Which queries have been running and taking the most time? I’m glad you asked, because you can click on tabs for Top Queries by Duration, Top Procedures By Duration, and Query Store by Duration to see grid results:

Query performance details grid

Scroll across to the right to see the query, download its estimated or actual (runtime) query plan, and right-click on the row to see more cool options:

Right-clicking on a query

Click “Copy Repro Script”, then switch over to SSMS, and you get a call stack for the query, including the parameters that it was compiled with:

Query repro script

So right there, in a matter of seconds, you can get started running the query, seeing its actual execution plan, and doing your performance tuning work.

Now, it’s not a superhero, because if there are no parameters in the plan cache (or Query Store, or whatever you’re looking at in that moment), then it’s not going to be able to come up with the parameters to test with. Eagle-eyed viewers will also catch that the query in the screenshot isn’t quite right – the last character in each query was getting trimmed off – but Erik’s already fixed that. I’m just too lazy to redo the screenshot.

Let’s pause the screenshot tour to talk big picture.

The above workflow – the ability to get query plans and repro scripts quickly – is indicative of the fact that Erik’s going to use this in his day job as a performance tuning consultant. Open source tooling works best when the authors actually use their work on a daily basis, testing it out and proving its worth in making their jobs faster. The fact that Erik is gonna use this to make money is a good sign that the project is going to be useful to folks like you and me.

See, Erik can’t afford to waste his time building features for marketing fluff or buzzword scores – this is a tool he actually has to use. He’s not selling it, and he doesn’t have unlimited free time, so he’s gotta build features that matter.

As a result, right out of the gate, you get genuinely helpful features like that “Copy Repro Script”, something that every monitoring tool should have always had since the dawn of time.

SQL Server Performance Monitor has lots more features.

Rather than showing you screenshots of all of them, I’ll list the ones I think are going to resonate with you most:

  • Email alerts
  • Popup alerts in your tray for stuff like live blocking emergencies and lots of recent deadlocks
  • Currently running queries – you should never be using Activity Monitor again
  • Currently running Agent jobs, and how long they take on average & P95
  • Graphs for trending CPU, memory, file I/O read & write latencies, TempDB space used & latency, blocking, and more
  • Perfmon graphing for the most real-world useful numbers
  • A built-in MCP server so you can get AI-powered analysis

I’ve been using the Lite version for the last couple of days in my lab, and I gotta say, I’m struggling to come up with reasons why you would pay for SQL Server performance monitoring software. This app does everything that performance tuners need on a daily basis. I emphasize performance tuners here though, not DBAs, because it doesn’t focus on uptime requirements like troubleshooting backups, outages, Availability Groups, replication, etc.

It doesn’t require much of anything.

There are two versions: Lite and Full, both free. Lite is a standalone app that runs locally on your desktop/laptop, and doesn’t install anything on the monitored servers whatsoever. It’s a no-brainer. The Full version does require an installation on the monitored SQL Servers, and requires SQL Server Agent. Here’s a grid from the documentation comparing the two versions:

Version comparison

If your full time job focuses on performance tuning SQL Server and Azure SQL DB, and your boss won’t pay for commercial monitoring software, go read the documentation and give it a shot. It’s free and easy.

If you do have commercial performance monitoring software, you shouldn’t abandon it right away, but you should check with your boss to see when the renewal date is coming up. About a month before that point, put a reminder in your calendar to go install SQL Server Performance Monitor to see the current status of the project. Then, use it as a negotiation tactic with your sales reps to say, “We’re thinking about switching to this free tool – why shouldn’t we?” and see if you get a discount. And if not, hey – maybe SQL Server Performance Monitor will be enough for you!

If you’re a consultant who does performance tuning, this tool is a no-brainer because it helps you add value for your clients. You can install it, use it, and teach them how to use it when you’re not around. Unlike a lot of commercial performance monitoring products, it doesn’t try to give BS advice that harms your tuning efforts. (I’ll never forget the monitoring tool that had a big red button saying “free the plan cache” as if that was going to solve a problem.)


How to Help Copilot Encourage Good Database Standards

AI
13 Comments

I know a lot of y’all lag behind on upgrading SSMS, but v22.3 just introduced something that you need to be aware of. It’s going to impact any of your users who DO upgrade their SSMS, or who use Github Copilot. There’s something that you can do in order to improve Copilot’s code quality and make it match your preferred coding standards.

You can add database instructions as extended properties at the database or object level, and when Copilot works with those objects, it’ll read your instructions and use them to shape its advice.

For example, you can add a database-level property called a “constitution” with your company’s coding standards, like this:
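Here’s a rough sketch of what that looks like – the Constitution.md property name is what Copilot reads, but the standards text itself is just an example I made up:

```sql
/* A sketch: store your coding standards in a database-level extended
   property named Constitution.md. The standards below are examples only. */
EXEC sys.sp_addextendedproperty
    @name  = N'Constitution.md',
    @value = N'All tables must have a clustered primary key.
Use NVARCHAR instead of VARCHAR for string columns.
Never use NOLOCK hints in new code.';
```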

Neato, huh? You can also define guidance at the object level:
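For instance, here’s a sketch of attaching guidance to a single table – the dbo.Users table here is hypothetical, and the Agents.md property name is the one Copilot looks for at the object level:

```sql
/* A sketch: object-level AI guidance on a hypothetical dbo.Users table,
   using the Agents.md extended property name. */
EXEC sys.sp_addextendedproperty
    @name       = N'Agents.md',
    @value      = N'This table is append-only. Never generate UPDATE or DELETE statements against it.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Users';
```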

Then, when SSMS Copilot or Github Copilot query the database schema to understand it, they’ll automatically read the Constitution.md and Agents.md properties, and take that into account when generating code for you. (It doesn’t work quite right just yet – you have to manually prompt it to go read the advice in v22.3 – but it’s clear that Microsoft intends it to work automatically without being reminded.)

That’s brilliant and I love it!

In a perfect world, this is going to let us define database & coding standards, check them into source control as part of our database schema, and when developers ask Copilot for code reviews or to write new queries & tables, our teams will actually get meaningful advice!

But at the same time, it poses a risk. If anyone adds extended properties to your databases, they can shape the advice you get from AI. That means it’s up to you, dear reader, to spearhead the drive for good coding standards in your databases, and make sure other people don’t steer the code in the wrong direction.

Here’s how to see what AI advice constraints have been set up in a database:
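A minimal version is just a query against sys.extended_properties – the property names in the filter are assumptions based on the Constitution.md and Agents.md names Copilot uses, and you can drop the WHERE clause to see every extended property:

```sql
/* A sketch: find extended properties that may contain AI guidance.
   The property names filtered on are assumptions; remove the WHERE
   clause to audit all extended properties in the database. */
SELECT  ep.class_desc,
        OBJECT_SCHEMA_NAME(ep.major_id) AS schema_name,  /* NULL for database-level */
        OBJECT_NAME(ep.major_id)        AS object_name,  /* NULL for database-level */
        ep.name,
        ep.value
FROM sys.extended_properties AS ep
WHERE ep.name IN (N'Constitution.md', N'Agents.md');
```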

We’ve also added CONSTITUTION.md support to the First Responder Kit in the dev branch if you’d like to get a sneak peek before the May 2026 release. Our free health check script, sp_Blitz, warns you if someone’s added AI guidance at the database or object level, and sp_BlitzCache adds the CONSTITUTION.md guidance when building AI prompts for you, so your code standards are followed by ChatGPT.


[Video] Office Hours at Atlantis, Bahamas

Videos
4 Comments

It’s an overcast afternoon at Atlantis Paradise Island in the Bahamas, so since I can’t go into the water, might as well go through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 00:48 jrl: What makes a good Office Hours question?
  • 01:22 Nils: Do you have any experience with Babelfish or similar tools to migrate from SQL Server to PostgreSQL or another DBMS?
  • 02:22 TeaEarlGrayHot: Why does it seem that when SQL escalates locks from row level locking, that it so seldom escalates to page locks and instead usually escalates to object level locks? Are there specific requirements to achieve page locks?
  • 03:48 Big Blue Couch: Hey Brent, What are you seeing your clients use for ETL Tools? Still SSIS? Or are there other good 3rd party tools seeing mass adoption?
  • 04:46 Margaret: Hi Brent – I know you have a version of WhoIsActive in the First Responder kit, but have you added any code to get the SQL behind API Cursors? This is the one place that WhoIsActive needs improvement. It’s not at all helpful to just have APICursor9251 in the sql_text field.
  • 05:37 NO: I wonder if you see SQL Server on Linux often in the wild or have some experience with it. SQL Server is the only thing keeping us on the Windows machines we’d like to get rid of. However, our DBAs are saying that they don’t know what effects this will cause.
  • 06:09 StoicDBA: You mentioned that you are a Stoic. As a practicing Stoic myself, would you care to explain what Stoicism means to you, which books you would recommend the most, and how Stoicism has helped you become a better DBA?
  • 12:10 Captain Calamity: For the CU22-23 DBmail debacle, should those shops always stay one CU behind the latest CU rather than always installing latest CU? What other CU calamities do you remember?
  • 13:18 Nolb: If you were about to start a new green field project, what would be your main reason(s) to stick with SQL Server compared to something less costly like PostgreSQL?

Updated First Responder Kit and Consultant Toolkit for February 2026

Two big sets of news this month! The Consultant Toolkit now supports imports to a database so you can track your clients’ health and performance over time, and sp_BlitzCache has a new @AI parameter.

  • Set @AI = 2, and get a prompt you can copy/paste into the AI of your choice to help you tune the query.
  • Set @AI = 1, and we’ll actually call ChatGPT or Google Gemini for you and return the advice.

@AI = 2 works on any SQL Server, Azure SQL DB, Amazon RDS, etc. @AI = 1 only works on SQL Server 2025 or Azure SQL DB. For more information on how to set it up, check out the documentation.
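If you want to try it, a minimal call might look like this – @AI is the new parameter described above, and @SortOrder is sp_BlitzCache’s existing sort option:

```sql
/* A sketch: ask sp_BlitzCache to build a copy/paste-able AI prompt
   for the top queries by CPU. Set @AI = 1 instead to have it call
   ChatGPT or Google Gemini directly (SQL Server 2025 / Azure SQL DB). */
EXEC dbo.sp_BlitzCache @SortOrder = 'cpu', @AI = 2;
```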

To get the new version:

Consultant Toolkit Changes

Brent Ozar's Consultant Toolkit

If you’re a consultant who does long-term work, maintaining a client’s SQL Servers, then you probably want to track the health and performance of those servers over time. You want data – in a database.

We’ve got a new Loader app that watches a folder for incoming zip files, and when one shows up, it processes the data to load it into a SQL Server (or Azure SQL DB) repository for you.

This means you can set up the Consultant Toolkit at your clients on a scheduled daily task, upload the data to S3 (built in) or use your own file sync methods to get it to the location of your choosing, and then have the data automatically loaded into your database server for you.

To learn more about that, read the PDF documentation included with the Consultant Toolkit.

sp_Blitz Changes

  • Enhancement: the check for an AG secondary getting behind now works even if the secondary is offline. (#3783, thanks iant-at-scc.)
  • Enhancement: the check for linked servers now shows the name if the data source isn’t configured. (#3785, thanks Steve Earle.)
  • Fix: remove unused line from sp_Blitz documentation. (#3760, thanks Reece Goding.)

sp_BlitzCache Changes

  • Enhancement: add new @AI parameter to get advice from AI. (#3669, thanks Kori Francis for the debugging.)

sp_BlitzFirst Changes

  • Fix: improve performance when thousands of sessions have open transactions. (#3766, thanks Giorgio Cazzaniga.)

sp_BlitzIndex Changes

  • Enhancement: new check for heaps with page compression enabled. (#3780, thanks Vlad Drumea.)
  • Fix: case sensitivity error with new is_json column. (#3757, thanks michaelsdba.)
  • Fix: when debug = 1, not all result sets were shown. (#3776, thanks Vlad Drumea.)
  • Fix: typo with wrong priority for missing index warning. (#3778, thanks Vlad Drumea.)

sp_BlitzLock Changes

  • Enhancement: bail out early if no rows were found in the target table. (#3787, thanks Erik Darling.)
  • Fix: table existence checks now handle situations where some, but not all, tables were set up. (#3789, thanks Erik Darling.)

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.


Building My Dev/Prod Demon Hunters Session, Part 1: The Strategy

Conferences and Classes
9 Comments

For SQLBits this year, I wanted to submit a performance tuning session about one of the most classic, timeless problems I run into: why is the same query fast in development but slow in production?

All of my regular readers – if there is such a thing – are now yelling out, “Parameter sniffing!”

And yes, that’s the most common cause, by far!

But there are a bunch of other problems we run into, and I wanted to build a session that would explore them.

The Session Design Challenges

Presentations usually default to one of two formats: either a bunch of slides, or a demo. Both of those approaches work completely fine, but when I’m giving an all-day session at a big conference like SQLBits, I wanna bring my A-game. I wanna surprise and delight people.

"Golden" - K-Pop Demon Hunters

The session needs to be interactive. Sometimes that means they follow along with demos on their laptop, but that can be hard with large numbers of attendees. The attendees all need fast laptops, they need the appropriate software & databases & scripts ahead of time, and they’ll probably need power plugs. Interactive doesn’t have to mean the attendee is hitting F5: it can also mean that I’ve got something up on the screen that requires their focus, requires them to think, and encourages them to raise their hand or point things out.

The session needs to shift gears a few times. No matter what storytelling element I use, attendees will get bored if I use the same one all day. After each bio break, I wanted to shift gears and change tactics. Perhaps we solve a different problem, or we use a different storytelling tool (slides vs demos vs something else.)

The session needs to tie into the SQLBits theme. I confess that I’ve been lazy about this in years past. It’s hard for me to justify the work required to write an entire all-day session that’s closely tied into a one-time event’s theme, and then not be able to give that session again, or have to spend a lot of time adapting it to the kind of theme I usually use (Fundamentals & Mastering.) However, this year I wanted to challenge myself, and because the theme is “cartoon”, I figured I could find a way to integrate it into the cartoon avatars I’m known for using.

How I’ll Tell the Story(ies)

I decided to break the overall session up into a series of individual, standalone stories. I would set up two different servers – Prod and Dev – and set up a series of challenges. I’d have a query that runs fast in dev, slow in prod (or vice versa), and then say to attendees, “Alright, now we gotta find out what’s going on.”

Put these anti-patterns in the past now

In each story, the root cause will be different, and the way we’ll solve it will be different.

This requires more planning than you’d think! Let’s say we’re gonna demonstrate just 3 problems: parameter sniffing, different statistics on each server, and different settings on each server. Well, I can’t just teach them in random order: they have to build on each other, and they have to illustrate my performance tuning method.

Let’s say the first problem we tackle together is the “different settings on each server” one. If I have the two dev & prod queries open side by side in SSMS, what would make me switch over to looking at server settings? I would probably look at the plans first, look at their compiled parameters – but if I’m trying to teach how to solve “different settings on each server”, then I’m wasting storytelling time there, and every minute counts when you’re building a session.

Instead, in this example, I would lead with the parameter sniffing problem as the first story because as a troubleshooter, I wanna lay the two plans out side by side and rule out parameter sniffing first. If the two environments get the same plan when the plans are optimized for the same parameters, great, we can focus on that before moving on to more ambitious problems.

That means when I get to the “different settings on each server” problem later on in the day, I can tell the story by saying, “Alright first, let’s rule out parameter sniffing. Yep, that’s not the problem, because even when the queries are compiled for the same parameters, we STILL get different plans, so let’s zoom out and find out what’s influencing these different plans, and figure out how to work around it.”

In our 3-problems scenario, we might choose to tell the story in this order:

  • Parameter sniffing
  • Different statistics on each server
  • Different settings on each server

Then, the way we tell the overall story starts to inform our troubleshooting method! We’re building a repeatable checklist where we can say, “Go through these things, in this order, every time.” However, I can’t reveal that checklist ahead of time, even at the beginning of the session! If I did, I couldn’t run the day as a series of mysteries that we’re going to solve together. The agenda slide would explain the cause of each mystery, hahaha! So while I’ll be blogging about the process of building this training day session, I won’t be sharing the agenda slide or even telling you the module names. Gotta keep it a fun mystery for the attendees.

How I’ll Theme the Story

My first attempt was to look up what movies are releasing in April 2026, the month of Bits, and see if there was a cartoon movie I could tie into. There is indeed a Super Mario Galaxy movie, and I did think about theming the session around that. I did come close to using Mario Kart items, but it didn’t feel quite right.

So I asked ChatGPT for ideas! I’m not ashamed.

I basically copy/pasted in a bunch of information from SQLBits’ web site, my own blog posts about the conference, my bio, and my session planning notes. I said I’m looking for cartoons, animes, animated movies, etc that I could use for my session theme.

It came up with a bunch of good suggestions, but K-Pop Demon Hunters made me stand up out of my office chair. I instantly thought, “It needs to be Dev-Prod Demon Hunters.” It works so well on so many levels:

Cosplay ideas (for you, not me)

  • I can use the term “demon” for the problems we’re solving
  • In the movie, the demons are hidden, but once they’re exposed, they’re not that hard to conquer
  • The songs are catchy as hell (I dare you to listen to Golden only one time)
  • I’ve been to Asia a couple times recently, had a great time, and have some fun photos to use in slides
  • I can use temporary tattoos for the demon skin markings from the movie

The timing isn’t perfect: K-Pop Demon Hunters will be fading from popularity around the time of Bits, and the sequel won’t be out for a few years. However, it feels like the theme will work for a while, and I can use the session for a while at other conferences.

Wanna join in? Register now for SQLBits.


[Video] Office Hours Aboard the Groove Cruise

Videos
0 Comments

On the world’s largest floating electronic dance music festival, I took your top-voted database questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:54 Frost: What is your opinion on Group Managed Service Accounts?
  • 04:09 Elwood: What AI tool or skills do you recommend learning for DBAs to stay relevant?
  • 07:59 GSurgeon: What are the telltale signs that a database has flaws?
  • 09:20 Scottish: Utility Control Point – have you ever found a use case?
  • 09:34 Raddock: Last week you shared Andy Cutler’s post about solo consultancy. Andy argued that small companies are shifting from solo consultants to using LLMs. Do you have similar observations and do you think this is permanent or temporary?
  • 14:28 MyTeaGotCold: Should I expect my usage of FORCESEEK to increase as table size increases?
  • 16:12 Wombat: I’m getting error 666. How do I know what table and index is causing it?
  • 17:31 Andrew: I’ve been asked to explore switching the operating system for our SQL Server from Windows to Linux.
  • 19:04 Chris: Hope you’re enjoying your time away. If you worked for an employer, would you prefer working from home, the office, or a hybrid?
  • 21:47 How many questions can I do per Office Hours episode?

I Just Don’t Understand Why You Don’t Update SSMS.

A long time ago in a galaxy far, far away, SQL Server Management Studio was included as part of the SQL Server installer.

Back then, upgrading SSMS was not only a technical problem, but a political one too. Organizations would say things like, “Sorry, we haven’t certified that cool new SQL Server 1982 here yet, so you can’t have access to the installer.” Developers and DBAs were forced to run SSMS from whatever ancient legacy version of SQL Server that their company had certified.

These days, SQL Server Management Studio v22 has:

  • A totally separate standalone installer
  • A totally separate version numbering system (SSMS v22 as opposed to SQL Server’s year-based numbers)
  • No designed-in dependencies (you can run new versions of SSMS on your desktop and connect to any supported version of SQL Server)
  • A much, much, much faster release schedule than SQL Server
  • Relatively few known issues – the list looks long at first, but if you go through ’em, few are relevant to the kind of work you do, and frankly, it’s still a shorter list than most of the previous SSMS versions I’ve used
  • A lot more cool features than the old and busted version you’re running today

And current versions even have a built-in, kick-ass upgrade mechanism:

Easier than gaining weight on a cruise ship

You should upgrade.
It keeps improving, quickly.

For example, SSMS v22.2.1 – a seemingly tiny version number change – just got a massive improvement in code completions. T-SQL code completion has never been great – IntelliSense doesn’t even auto-complete foreign key relationships. SSMS v22.2.1’s code completion will make your jaw drop.

For example, I never remember the syntax to write a cursor. It’s the kind of thing I don’t have to do often, and for years, I’ve used text files with stuff like this that I rarely (but sometimes) need quickly. With SSMS’s latest update, I just start typing a comment:

Declare a cursor

In that screenshot, see the different text colors? I’d started a comment and just written “Declare a cursor to” – and SSMS has started to fill in the rest. My goal in this case isn’t to loop through all the tables, though, so I’ll keep typing, explaining that I want to iterate through rows:

Interesting cursor choice

SSMS guessed that I wanted to iterate through the Posts table – and that’s SO COOL because SSMS actually looked at the tables in the database that I was connected to! If I try that same thing in the master database’s context, I get a different code completion!

Now, this does mean that Github Copilot & SSMS are running queries against your server in order to do code completion, and that they’re sending this data up to the cloud to do code completion. I totally understand that that’s a big security problem for many companies, and … okay, maybe I just answered that question about why some of you aren’t upgrading. But look, you can turn that feature off if you want, and you can track what queries it’s running if you’re curious. Let’s keep moving on through the task I have at hand today. I’m not trying to run through the Posts table, I need to do something else, so let’s keep typing:

Uh that's an odd cursor choice

uh wait what

In the words of Ron Burgundy, that escalated quickly. That is most definitely NOT what I’m trying to do, but that’s the state of AI these days. It’ll gladly help you build a nuclear footgun with speed and ease. Let’s continue typing:

The cursor I want

(I don’t really need this specific thing, mind you, dear reader – it’s already built into sp_Blitz – but I’m just using this as an example for something a client asked me to do.) Now that I’ve clearly defined the comment, SSMS starts writing the code for me. I’m going to just tab my way through this, taking SSMS’s code completion recommendations for everything from here on out, just so you can see what it coded for me:

The completed code

In a matter of seconds, just by hitting tab and enter to let AI code for me, it’s done! Not only did it write the cursor, but it wrote the dynamic SQL for me to do the task too. Now all I have to do is click execute, and:

Presto! The power of AI!

This right here is the part where you expect me to make an AI joke.

But let’s stop for a second and just appreciate what happened. All I needed SSMS to do was just to build a cursor for me, and it went WAY above and beyond that. It wrote dynamic SQL too, because it understood that in order to get the right checkdb date, it has to be run inside dynamic SQL. That’s pretty impressive. I don’t mind troubleshooting some dynamic SQL that frankly, I probably would have written incorrectly the first time too!
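If you’re reading along without the screenshots, here’s a hand-written sketch of that same pattern – not the exact code SSMS generated, just one way to cursor through sys.databases and run DBCC DBINFO inside dynamic SQL to grab each database’s last known good CHECKDB date:

```sql
/* A sketch, not SSMS's generated code: cursor through the databases,
   running DBCC DBINFO inside dynamic SQL to read dbi_dbccLastKnownGood. */
DECLARE @DatabaseName sysname, @sql nvarchar(max);

CREATE TABLE #DBInfo
    (ParentObject varchar(255), [Object] varchar(255),
     Field varchar(255), [Value] varchar(255));
CREATE TABLE #Results
    (database_name sysname, last_known_good_checkdb varchar(255));

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = N'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    /* DBCC DBINFO only reports on one database at a time,
       so build and run it dynamically for each one. */
    SET @sql = N'DBCC DBINFO (' + QUOTENAME(@DatabaseName, N'''')
             + N') WITH TABLERESULTS, NO_INFOMSGS;';
    INSERT #DBInfo EXEC sys.sp_executesql @sql;

    INSERT #Results
    SELECT @DatabaseName, [Value]
    FROM #DBInfo
    WHERE Field = 'dbi_dbccLastKnownGood';

    TRUNCATE TABLE #DBInfo;
    FETCH NEXT FROM db_cursor INTO @DatabaseName;
END;

CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT * FROM #Results ORDER BY database_name;
DROP TABLE #DBInfo, #Results;
```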

Today, what we have is Baby’s First Code Completions. I can’t get angry about that – I’m elated about it, because we’ve never had code completions before, and now at least we have them! That’s fantastic, and it will absolutely make me more productive – in the places where I choose to use it, judiciously. I can’t rely on it to build whole tools for me out of nothing, but as an expert, using it to augment and speed things up, it’s helpful, period.

I expect it to get even better, quickly.

I’m not saying that because I’m optimistic or because I have inside information. Microsoft simply doesn’t have a choice, because the only AI model that SSMS v22.2.1 supports right now is GPT-4.1. That’s so old and underpowered that OpenAI is retiring it this month, so Microsoft is going to have to switch to a newer model – which will automatically give us better code completions.

You’ll see evidence of that in the code completion documentation, and under SSMS v22.2.1’s tools-options, under Text Editor, Code Completions:

Text completion settings

Because I installed the AI components of SSMS, I get a dropdown for Copilot Completions Model. That’s the brains of the operation, the cloud AI model that comes up with the ideas of what you’re trying to code, and codes it for you.

Today, as of this writing, the only option is GPT-4.1, the old and busted one. I’m excited to see which one(s) we get access to next. Github Copilot’s list of supported models is huge, and it includes some really heavy hitters that produce spectacular results, like Claude Opus 4.5 and Gemini 3 Pro.

Side note – if you’re on the free Copilot individual tier, you only get 2,000 code completions per month for free. You’re gonna wanna check the box in the above screenshot that says “Show code completions only after a pause in typing” – otherwise you’ll keep getting irrelevant suggestions like how to drop all your databases, ha ha ho ho, and you’ll run out of completion attempts pretty quickly.

So do it. Go update your SSMS, make sure to check the AI tools during the install, sign up for a free Github Copilot account if your company doesn’t already give you a paid one, configure SSMS with your Copilot account, and get with the program. You’ll thank me later when it starts auto-completing joins and syntax for you. It’s free, for crying out loud.


Who’s Hiring Database People? February 2026 Edition

Who's Hiring
7 Comments

Is your company hiring for a database position as of February 2026? Do you wanna work with the kinds of people who read this blog? Let’s make a love connection.

You probably don't wanna hire these two.

If your company is hiring, leave a comment. The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it.
  • Include an email address to send resumes to, or a link to the application process – if I were you, I’d use an email address, because you may want to know which applicants are readers here; they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • It has to be a data-related job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.