Diagnosis: Self-Loathing Index Disorder.

Your indexes may seem like they’re not quite living up to their potential– because they probably aren’t.

Our data says that your indexes may need your help to fully respect themselves again.

Low Fill Factor

We look for any index where the fill factor is 80% or less. Fill factor controls how full SQL Server packs index pages when it builds or rebuilds them. If fill factor is set to 75%, then when SQL Server rebuilds an index it’s going to write out all those beautiful, clean pages and leave 25% free on each and every page.

There’s no single “right” setting for fill factor. On many indexes, 80% is fine.

However, people frequently make the mistake of setting fill factor to low values like 80% across the board– even on indexes that don’t actually have a fragmentation problem. The best practice is to ONLY use a low fill factor on indexes where you know you need it. Setting a low fill factor on too many indexes will hurt your performance:

  • Wasted space in storage
  • Wasted space in memory (and therefore greater memory churn)
  • More IO, and with it higher CPU usage

Review all indexes diagnosed with low fill factor. Check how much they’re written to. Look at the keys and determine whether insert and update patterns are likely to cause page splits.
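
If you want a starting point, here’s a minimal sketch (assuming you run it in the database you’re reviewing, and treating the 80% threshold as an example) that lists indexes with a low fill factor alongside their read and write activity from sys.dm_db_index_usage_stats:

SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
        OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name,
        i.fill_factor,
        us.user_updates AS writes,
        us.user_seeks + us.user_scans + us.user_lookups AS reads
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
        ON  us.object_id   = i.object_id
        AND us.index_id    = i.index_id
        AND us.database_id = DB_ID()
WHERE   i.fill_factor > 0          /* 0 means the server default */
        AND i.fill_factor <= 80
ORDER BY i.fill_factor, writes DESC;

If an index turns out to be heavily read and rarely written, you can raise its fill factor again with ALTER INDEX ... REBUILD WITH (FILLFACTOR = 100).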


Hypothetical Index

“I index, therefore I am.”

Well, not really. Not every index is real. One surreal type of index is a hypothetical index. This is an index which exists as metadata, but can’t actually be used to run queries.

The biggest downside of having hypothetical indexes in your database is that they make query tuning harder. You go to look at your indexes and say, “I’ve got an index on that column… oh, except not really.” They clutter up your database and don’t do you any good at all.

Typically, if you’ve got hypothetical indexes, it’s because once upon a time (or more than once), someone ran the Database Tuning Advisor (DTA) against your SQL Server and the DTA died somewhere along the way. The DTA creates hypothetical indexes as part of its process, and if it goes off the tracks and doesn’t complete, it doesn’t clean up its mess and leaves them behind.
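
If you want to track them down, here’s a minimal sketch using the is_hypothetical flag in sys.indexes (run it in each database you care about):

SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
        OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name
FROM    sys.indexes AS i
WHERE   i.is_hypothetical = 1;

Once you’ve confirmed they’re DTA leftovers, a regular DROP INDEX statement gets rid of them.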

Disabled Index

Disabled indexes are just that: they’re not enabled. They can’t be used by SQL Server.

If you’ve got a disabled index, someone disabled it, either by running a script or by right-clicking the object in SQL Server Management Studio and choosing to disable it.

The first step with a disabled index: identify whether it is the clustered index (the data of the table itself– which is always IndexId 1) or a nonclustered index (any IndexId greater than 1). You can see this in the sp_BlitzIndex® output in the details column. Here is an example:

Disabled Index:dbo.customer.customer_i2 (2)

This disabled index is on the dbo.customer table. The index name is “customer_i2”. The IndexId is 2. Since the IndexId is NOT 0 or 1, we know that this is a non-clustered index.
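
To pull the same information straight from the metadata, here’s a minimal sketch using the is_disabled flag in sys.indexes (run it in the database you’re checking):

SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
        OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name,
        i.index_id,        /* 1 = clustered, greater than 1 = nonclustered */
        i.type_desc
FROM    sys.indexes AS i
WHERE   i.is_disabled = 1;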

Disabled Clustered Indexes

A disabled clustered index can’t be read from or written to. The data is not deallocated, but it can’t be accessed. The only time I have ever come across a disabled clustered index, it had been disabled by mistake. (Typically, if people are going to drop or truncate an object and want a first “safety” step, they rename it rather than disable it, so they have a quicker rollback option.)

Disabled Nonclustered Indexes

It can be much more useful to disable nonclustered indexes, and it is a more common operation. As part of an index tuning effort, one might disable a nonclustered index, monitor performance and missing index information, and then either keep the index (by rebuilding it) or drop it at another point in the cycle.

Nonclustered indexes are also frequently disabled as part of a data loading process– for instance, if I am reloading replicated tables on a transactional replication subscriber, I might choose to disable nonclustered indexes during the data load and rebuild them all at the end. This typically improves performance in that scenario, as they would need to be rebuilt at the end to address fragmentation anyway.
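
As a rough sketch of that pattern, using the dbo.customer table and customer_i2 index from the earlier example (your table, index, and load step will differ):

/* Disable the nonclustered index before the data load. */
ALTER INDEX customer_i2 ON dbo.customer DISABLE;

/* ... run the data load here ... */

/* Rebuilding the index re-enables it and leaves it freshly defragmented. */
ALTER INDEX customer_i2 ON dbo.customer REBUILD;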

The big problem with disabled nonclustered indexes is simple: people forget them and they linger around forever, making index tuning more complicated.

Heaps with Forwarded Records or Deletes

In Portland, Oregon there’s a slogan you see on bumper stickers around town: Keep Portland Weird.

Heaps can be relied on to keep SQL Server weird.

A “heap” is simply a table without a clustered index. You can create nonclustered indexes on a heap just like you can on a table with a clustered index– so we do sometimes find heaps that are hundreds of gigabytes in size, with hundreds more gigabytes of nonclustered indexes on them. It’s easy for this to happen to people who don’t know how weird heaps can be in SQL Server. (And this isn’t dumb, because the same issue doesn’t exist at all on other database platforms.)

Heaps have two big issues:

  • Forwarding records: Imagine I’m updating a variable length field from something very short to something very long. There’s not enough room on the page holding that row for the new value. In a heap, that row gets moved off to a new page and a “forwarding record pointer” is left in its place. Every time I need to read that row, I have to go to its original address and then follow the forwarding record pointer to the new address. Over time, lots of forwarded records create lots of random IO– and reads get crazy in my heap!
  • Captive pages: This is a name I just made up, because I’ve never heard a good name for it. When you delete data from a heap, it does not work the same way as deleting data from an index (either clustered or nonclustered). Let’s say that I have a 2 GB heap and I delete half the rows in it– which coincidentally were on exactly half of the pages allocated to the heap. Unless I take a table-level lock on the delete (such as a TABLOCK hint; there’s a sketch of this after the list), those pages stay allocated to the heap. They can be reused by that heap, but they have to hang around empty, just in case. Captive.
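
Here’s the sketch mentioned in the captive pages bullet. Taking a table-level lock on the delete lets SQL Server deallocate the emptied pages instead of leaving them captive. The table name and filter below are hypothetical:

/* Hypothetical heap and date filter, just to show the TABLOCK hint. */
DELETE FROM dbo.sales_staging WITH (TABLOCK)
WHERE sale_date < '2013-01-01';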

These two phenomena add up to one thing: heaps frequently harm themselves so much that they drag down your performance.

Now that we’ve explained the complicated stuff, sp_BlitzIndex® is able to diagnose if any reads have had to follow forwarded record pointers since the last restart. It is also able to check if deletes have happened since the last restart (which may cause the “captive pages” syndrome).

Our rule of thumb for SQL Server is to always create a clustered index on a table unless you can prove there is a performance improvement for your application by using a heap. (If you can prove there’s a benefit, that’s great! Use it! But you’re the exception, not the norm.)

To temporarily fix heaps with forwarded fetches or unallocated space, you can rebuild them. Just be aware that rebuilding a heap is a logged operation and can slow things down while you work, so only run the rebuilds while keeping a close eye on other activity on the server, taking frequent log backups as necessary.
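
Here’s a minimal sketch of a query that lists your heaps by forwarded fetch count since the counters were last reset (typically the last restart) and generates a rebuild command for each. Treat it as a starting point rather than an official script:

SELECT  OBJECT_SCHEMA_NAME(os.object_id) AS schema_name,
        OBJECT_NAME(os.object_id) AS table_name,
        SUM(os.forwarded_fetch_count) AS forwarded_fetches,
        'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(os.object_id))
            + '.' + QUOTENAME(OBJECT_NAME(os.object_id)) + ' REBUILD;' AS rebuild_command
FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS os
WHERE   os.index_id = 0                                  /* heaps only */
        AND OBJECTPROPERTY(os.object_id, 'IsUserTable') = 1
GROUP BY os.object_id
ORDER BY forwarded_fetches DESC;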

Heaps with reads or writes

This diagnosis is for active heaps– they’re being read from or written to– but they haven’t had forwarded records read or deletes occur since restart.

If you’ve got active heaps, give them a close look. Read through “Heaps with Forwarded Records or Deletes” above and assess whether, in your case, you can prove you need a heap to get the job done.