Wednesday, January 23, 2008

Performance Counters to watch

Hardware Counters

Memory \ Available Bytes Shows the available amount of physical memory on the server. An acceptable output for this may vary widely based on how much physical memory is in the machine. If you have 2GB of RAM installed on the machine, it is common to see SQL Server use 1.7GB of RAM. If no other processes are running on your SQL Server, make sure you have at least 80MB available for Windows at any given time. If you see this counter below that amount, I would recommend buying additional RAM immediately.

Memory \ Pages/sec Shows the number of pages per second that are read from or written to disk to resolve hard page faults, which force SQL Server to go to the page file instead of memory. If this counter averages above 20, you may want to add additional RAM to stop the paging.

Network Interface \ Bytes total/sec This counter shows the amount of traffic through your network interface in bytes per second. Once you do your baseline (I'll discuss this in a moment), you'll know you have a problem when this number drops or rises a huge amount.

Paging File \ % Usage Similar to the Memory \ Pages/sec counter, this shows the percentage of the page file that is being utilized. If you see more than 70 percent of your page file being utilized, look into more RAM for your server.

Physical Disk \ % Disk Time This counter shows how active your disk is in percentage form. If this counter sustains an average above 70 percent, you may have contention with your drive or RAM.

Processor \ % Processor Time This is one of the most important counters. It shows how active the processor is in percentage form. While the threshold to be concerned with is 85 percent on average, it may be too late if you wait that long. I generally look at either improving the performance of the queries or adding additional processors when the counter averages above 60 percent.

SQL Server Counters

SQLServer:Access Methods \ Full Scans/sec This shows the DBA how many full table or index scans are occurring per second. If this number is significantly higher than your baseline, the performance of your application may be slow.

SQLServer:Buffer Manager \ Buffer Cache Hit Ratio This shows the ratio of how many pages are found in memory versus read from disk. I like to see this number as close to 100 percent as possible, but generally 90 percent is very respectable. If this number is low, it may mean that SQL Server is not obtaining enough memory from the operating system.

SQLServer:Database Application Database \ Transactions/sec Shows the number of transactions per second on a given database or on the entire SQL Server. This number is more for your baseline and to help you troubleshoot issues. For example, if you normally show 120 transactions per second as your baseline and you come to work one Monday and see your server at 5,000 transactions per second, you will want to question the activity on your server.

SQLServer:General Statistics \ User Connections Like the transactions per second, this counter is mainly used for creating a baseline on a server and in the troubleshooting process. It shows the number of user connections on your SQL Server. If you see this number jump by 500 percent from your baseline, you may be seeing a slowdown in activity caused by, for example, an unexpectedly good response to a marketing campaign.

SQLServer:Latches \ Average Latch Wait Time (ms) Shows the average time for a latch to wait before the request is met. If you see this number jump high above your baseline, you may have contention for your server's resources.

SQLServer:Locks \ Lock Waits/sec Shows the number of locks per second that could not be satisfied immediately and had to wait for resources.

SQLServer:Locks \ Lock Timeouts/sec This counter shows the number of locks per second that timed out. If you see anything above 0 for this counter, your users will experience problems as their queries are not completing.

SQLServer:Locks \ Number of Deadlocks/sec This counter shows the number of deadlocks on the SQL Server per second. Again, if you see anything above 0, your users and applications will experience problems. Their queries will abort and the applications may fail.

SQLServer:Memory Manager \ Total Server Memory Shows the amount of memory that SQL Server has allocated to it. If this memory is equal to the amount of total physical memory on the machine, you could be experiencing contention since you're not leaving Windows any RAM to perform its normal operations.

SQLServer:SQL Statistics \ SQL Re-Compilations/sec This counter shows the number of SQL recompilations per second. If this number is high, stored procedure execution plans may not be caching appropriately. Like other counters, this needs to be placed into a baseline and watched to make sure it's not moving radically from that baseline.
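Most of these SQL Server counters can also be sampled from inside SQL Server itself. A minimal sketch, assuming SQL Server 2005 (on SQL Server 2000, the equivalent data lives in master.dbo.sysperfinfo); note that ratio counters such as Buffer Cache Hit Ratio must be divided by their matching "base" counter:

```sql
-- Sample the buffer cache hit ratio from the performance counter DMV.
-- The ratio counter is meaningless on its own; divide it by its base counter.
select 100.0 * c1.cntr_value / c2.cntr_value as buffer_cache_hit_ratio
from sys.dm_os_performance_counters as c1
join sys.dm_os_performance_counters as c2
  on c2.object_name = c1.object_name
where c1.counter_name = 'Buffer cache hit ratio'
and   c2.counter_name = 'Buffer cache hit ratio base'
and   c1.object_name like '%Buffer Manager%'
```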

Trace flag

To enable a trace flag: DBCC TRACEON (flag) or DBCC TRACEON (flag, -1). The latter applies the trace flag to all connections, whereas the former applies only to the current connection.
To disable a trace flag: DBCC TRACEOFF (flag)
To check trace flag status: DBCC TRACESTATUS (-1)
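For example, to turn on deadlock reporting to the error log for all connections and confirm that the flags are active:

```sql
-- Enable deadlock output (1204) and route trace output to the error log (3605)
-- for all connections
DBCC TRACEON (1204, -1)
DBCC TRACEON (3605, -1)

-- Verify which trace flags are currently enabled
DBCC TRACESTATUS (-1)

-- Turn the flags back off when finished
DBCC TRACEOFF (1204, -1)
DBCC TRACEOFF (3605, -1)
```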

Deadlock Information (1204)
This commonly used trace flag detects deadlocks and outputs the deadlock information. I'll cover much more detail about deadlocks and this trace flag in the next chapter.

Detailed Deadlock Information (1205)
This trace flag sends detailed information about the deadlock to the error log.

Disable Parallel Checking (2528)
You can use this trace flag to stop SQL Server from using any processor other than the primary processor when performing consistency checks (DBCCs). If you have a multiprocessor machine running a DBCC command, enabling this flag can worsen performance considerably, because SQL Server uses multiple processors to run a DBCC command starting with SQL Server 2000.

Network Database Files (1807)
SQL Server will not allow you to create a database or log file on a networked drive by default. If you attempt to do this, you receive error 5105, which states 'Device Activation Error.' The 1807 trace flag provides a workaround for this restriction, and you can create database files on a mapped drive or UNC path.
However, just because you can do something doesn't mean you should. This trace flag is undocumented for a good reason: it is an incredibly bad idea to place a database file on a network drive. Support for this feature was added to help vendors such as Network Appliance. Storing data on a network drive also opens another potential can of worms, because you must monitor and provide redundancy for that drive. If network connectivity is interrupted, your database goes down.

Send Trace Output to Client (3604)
Trace flag 3604 is the most commonly used trace flag. It sends the output of a trace to the client. For example, before you can run DBCC PAGE, which views data page information, you must run this trace flag.
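A typical session looks like this. DBCC PAGE is undocumented; its parameters are database name, file number, page number, and output level, and the page number below is only an illustration:

```sql
DBCC TRACEON (3604)            -- send DBCC output to the client
DBCC PAGE ('pubs', 1, 100, 3)  -- dump page 100 of file 1 with full detail
DBCC TRACEOFF (3604)
```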

Send Trace Output to Error Log (3605)
This trace is similar to trace flag 3604, but this flag sends the results to the error log.

Skip Automatic Recovery (3607)
Trace flag 3607 skips the recovery of databases on the startup of SQL Server and clears the TempDB. Setting this flag lets you get past certain crashes, but there is a chance that some data will be lost.

Skip Automatic Recovery Except Master (3608)
This trace is similar to 3607, but the TempDB in this case is not cleared and only the master database is recovered.

Log Record for Connections (4013)
This trace flag writes an entry to the SQL Server error log when a new connection is established. If you set this option, your error log can fill up quickly. For each connection that occurs, the trace flag writes two entries that look like this:

Login: sa saSQL Query Analyzer(local)ODBCmaster, server process ID (SPID): 57, kernel process ID (KPID): 57.
Login: sa XANADUsaSQL Query Analyzer(local)ODBCmaster, server process ID (SPID): 57, kernel process ID (KPID): 57.

Skip Startup Stored Procedures (4022)
This is a handy trace flag for troubleshooting. It forces SQL Server to skip startup stored procedures. This is especially useful if a stored procedure has been altered and causes harm to your system. After you set this trace flag, you can then debug the stored procedure and set it back to its original state.

Detailed Linked Server error message (7300)
The error messages that linked servers return are sometimes very generic. Turn on the 7300 trace flag if you want to receive more detailed information.

Ignore All Index Hints (8602)
Trace flag 8602 is a commonly used trace flag to ignore index hints that are specified in a query or stored procedure. This is a fantastic option when you're trying to determine if an index hint is hurting more than helping. Rather than rewriting the query, you can disable the hint using this trace flag and rerun the query to determine if SQL Server is handling the index selection better than the index hint.

Disable Locking Hints (8755)
Trace flag 8755 will disable any locking hints like READONLY. By setting this, you allow SQL Server to dynamically select the best locking hint for the query. If you feel the query's locking hint may be hurting performance, you can disable it and rerun the query.

Disable All Other Hints (8722)
Lastly, the 8722 trace flag will disable all other types of hints. This includes the OPTION clause.

Tip: By turning on all three trace flags (8602, 8755, and 8722), you can disable all hints in a query. If you feel your performance is being negatively affected by a hint, you can set these flags rather than rewrite all the queries while you test. Generally speaking, there's no reason to place hints on queries in SQL Server 7.0 or 2000.
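A sketch of how you might test this for your connection; the table name and index hint below are illustrative only:

```sql
-- Disable index hints, locking hints, and OPTION-clause hints
DBCC TRACEON (8602)
DBCC TRACEON (8755)
DBCC TRACEON (8722)

-- Rerun the hinted query; SQL Server now chooses its own plan
select * from Orders (index = IX_Orders_CustomerID) where CustomerID = 42

-- Restore normal hint behavior
DBCC TRACEOFF (8602)
DBCC TRACEOFF (8755)
DBCC TRACEOFF (8722)
```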

Tuesday, January 22, 2008

Upgrade SQL2000 to SQL2005

Because tempdb is used for the version store, applications that make heavy use of triggers in SQL Server 2000 must be aware of potentially increased demands on tempdb after an upgrade to SQL Server 2005.
To maintain a production system using SI (Snapshot Isolation), you should allocate enough disk space for tempdb so that there is always at least 10 percent free space. If the free space falls below this threshold, system performance might suffer because SQL Server will expend more resources trying to reclaim space in the version store. The following formula gives a rough estimate of the size required by the version store. For long-running transactions, it might be useful to monitor the generation and cleanup rate using Performance Monitor, to estimate the maximum size needed.

[size of common version store] = 2 * [version store data generated per minute] * [longest running time (minutes) of the transaction]
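As a worked example of the formula (the generation rate and transaction length below are illustrative values you would read from Performance Monitor on your own system):

```sql
-- Rough version store sizing using the formula above
declare @GenRateMBPerMin int, @LongestTranMinutes int
set @GenRateMBPerMin    = 50   -- version store data generated per minute (MB)
set @LongestTranMinutes = 10   -- longest-running transaction (minutes)
select 2 * @GenRateMBPerMin * @LongestTranMinutes as EstimatedVersionStoreMB  -- 1000 MB
```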

Sunday, January 20, 2008

Estimate table size for growth

This code is mentioned in Inside Microsoft SQL Server 2005 - The Storage Engine
It contains 4 sps: sp_EstTableSize, sp_EstTableSizeNCI, sp_EstTableSizeCI, sp_EstTableSizeData
Modified code to output size in Mb. - 17 Jan 2008
From Kalen:
The procedure calculates the storage requirements for the table and all indexes by extracting information from the sys.indexes, sys.columns, and sys.types views.
The result is only an estimate when you have variable-length fields. The procedure has no way of knowing whether a variable-length field will be completely filled, half filled, or mostly empty in every row, so it assumes that variable-length columns will be filled to the maximum size.
If you know that this won't be the case with your data, you can create a second table that more closely matches the expected data. For example, if you have a varchar(1000) column that will only rarely use the full 1000 bytes, and 99 percent of your rows will use only about 100 bytes, you can create a second table with a varchar(100) column and run sp_EstTableSize on that table.
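For example (illustrative table and column names), if a table has a Notes varchar(1000) column that typically holds only about 100 bytes, you might estimate against a shadow copy:

```sql
-- Shadow copy of the table with the variable-length column sized to its
-- typical usage, for a more realistic estimate
create table OrdersEst
 (OrderId   int          not null,
  OrderDate datetime     not null,
  Notes     varchar(100) null)   -- typically ~100 bytes, not the full 1000
go
exec sp_EstTableSize @TableName = 'OrdersEst', @Num_Rows = 1000000
```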

if object_id('dbo.sp_EstTableSizeNCI', 'P') IS NOT NULL  drop procedure dbo.sp_EstTableSizeNCI
go
create procedure dbo.sp_EstTableSizeNCI
 @ObjId int, @Num_Rows int, @DataPages int, @CIndex_Row_Size int, @Has_UniqueClusteredIndex int,
 @NCIPages int OUTPUT
as
/*
**  Do not call this procedure directly. It will be called
**  from the main procedure:  sp_EstTableSize
*/
set nocount on
declare @Indid as smallint, @IndexName as sysname
declare @Num_Key_Cols as smallint, @Num_Variable_Key_Cols as smallint
declare @Fixed_Key_Size as int, @Max_Var_Key_Size as int
declare @Index_Null_Bitmap as smallint, @Variable_Key_Size as int
declare @Index_Row_Size as int, @Index_Rows_Per_Page as int
declare @Free_Index_Rows_Per_Page as int, @T as int
declare @Est_Num_Pages as int
declare @Fixed_Size as int
declare @Variable_Col as tinyint, @Var_Size as int
declare @LevelPages as int, @CLevel as int, @PrevCount as int
set @NCIPages = 0
set @PrevCount = 0

-- Get the index id's of the non-clustered indexes
declare NCI_cursor insensitive cursor for
select indid, name
from sysindexes
where id = @ObjId
and   indid < 255  -- exclude pointers to text / ntext / image pages
and   IndexProperty(@ObjId, name, 'IsClustered') = 0
and   IndexProperty(@ObjId, name, 'IsStatistics') = 0
and   IndexProperty(@ObjId, name, 'IsHypothetical') = 0
for read only
declare @indkey as int, @TableName as sysname, @IndexColName as sysname
set @TableName = object_name(@ObjId)
open NCI_cursor
fetch next from NCI_cursor into @Indid, @IndexName
while @@FETCH_STATUS = 0
   begin  -- cursor loop
-- Initialize additive variables to zero
   set @Num_Key_Cols = 0
   set @Fixed_Key_Size = 0
   set @Num_Variable_Key_Cols = 0
   set @Max_Var_Key_Size = 0
-- Start with the first column in the non-clustered index
   set @indkey = 1
   while (@indkey <= 16)  -- SQL Server 7.0 limits an index to 16 columns
      begin
--       use Index_Col to get each column of the non-clustered index
         set @IndexColName = INDEX_COL(@TableName, @Indid, @indkey)
         if @IndexColName IS NULL break
         set @Num_Key_Cols = @Num_Key_Cols + 1
         select @Fixed_Size = sc.length * (1 - st.variable),
                @Variable_Col = st.variable,
                @Var_Size = sc.length * st.variable
         from sysobjects as so
         join syscolumns as sc
           on ( =
         join systypes as st
           on (sc.xtype = st.xtype)
         where = @ObjId
         and = @IndexColName
         set @Fixed_Key_Size = @Fixed_Key_Size + @Fixed_Size
         set @Num_Variable_Key_Cols = @Num_Variable_Key_Cols + @Variable_Col
--       If the table has a non-unique clustered index, the 'uniqueifier' is internally
--       treated as a variable-length column.
         set @Num_Variable_Key_Cols = @Num_Variable_Key_Cols +
             case when @CIndex_Row_Size > 0 and @Has_UniqueClusteredIndex = 0
                  then 1
                  else 0
             end
         set @Max_Var_Key_Size = @Max_Var_Key_Size + @Var_Size
--       Get the next column in the non-clustered index
         set @indkey = @indkey + 1
      end
   set @Index_Null_Bitmap = case when @Fixed_Key_Size = 0
                                 then 0
                                 else (2 + ((@Num_Key_Cols + 7) / 8))
                            end
   set @Variable_Key_Size = case when @Num_Variable_Key_Cols = 0
                                 then 0
                                 else 2 + @Num_Variable_Key_Cols + @Num_Variable_Key_Cols + @Max_Var_Key_Size
                            end
   set @Index_Row_Size = @CIndex_Row_Size +
       @Fixed_Key_Size + @Variable_Key_Size + @Index_Null_Bitmap + 9
   set @Index_Rows_Per_Page = 8096 / (@Index_Row_Size + 2)
-- Get the Fill Factor used in the index (i.e., the Pad_Index factor)
declare @Fill_Factor as int
set @Fill_Factor = IndexProperty (@ObjId, @IndexName, 'IndexFillFactor')

--    According to Books Online (CREATE INDEX (T-SQL)):
-- for non-clustered indexes, space is always left on non-leaf pages
-- to accomomdate one additional row regardless of the setting of
-- FillFactor. I believe this is incorrect, and that, as in
-- SQL Server 6.x, a FillFactor from 0% to 99% results in one free row,
-- while a setting of 100% results in no free rows.
   set @Free_Index_Rows_Per_Page = case when @Fill_Factor = 100 then 0  else 1  end
--    Calculate the number of pages required to store all the index rows at each
--    level of the index
   set @LevelPages = 0
   set @CLevel = @Num_Rows
   set @T = (@Index_Rows_Per_Page - @Free_Index_Rows_Per_Page)
--    According to Books Online (CREATE INDEX (T-SQL)):
-- "The number of rows on an intermediate index page is never less than two,
--  regardless of how low the value of FILLFACTOR."
   set @T = case when @T < 2  then 2  else @T  end
   while (@LevelPages <> 1)
      begin
         set @LevelPages = ceiling (1.0 * @CLevel / @T)
         set @NCIPages = @NCIPages + @LevelPages
         set @CLevel = @LevelPages
      end
-- Populate the #ncindexes table created in sp_EstTableSize
   insert into #ncindexes (IndexName, IndexPages) values (@IndexName, @NCIPages - @PrevCount)
   set @PrevCount = @NCIPages
fetch next from NCI_cursor into @Indid, @IndexName
end  -- cursor loop
close NCI_cursor
deallocate NCI_cursor
go
if object_id('dbo.sp_EstTableSizeCI', 'P') IS NOT NULL  drop procedure dbo.sp_EstTableSizeCI
go
create procedure dbo.sp_EstTableSizeCI
 @ObjId int, @Num_Rows int, @DataPages int,
 @Indid smallint, @Has_UniqueClusteredIndex int,
 @CIndex_Row_Size int OUTPUT, @CIPages int OUTPUT
as
/*
**  Do not call this procedure directly. It will be called
**  from the main procedure:  sp_EstTableSize
*/
set nocount on
declare @Num_CKey_Cols as smallint, @Num_Variable_CKey_Cols as smallint
declare @Fixed_CKey_Size as int, @Max_Var_CKey_Size as int
declare @CIndex_Null_Bitmap as smallint, @Variable_CKey_Size as int
declare @CIndex_Rows_Per_Page as int
declare @Free_CIndex_Rows_Per_Page as int, @T as int
declare @Est_Num_Pages as int
declare @Fixed_Size as int
declare @Variable_Col as tinyint, @Var_Size as int
declare @indkey as int, @TableName as sysname, @IndexColName as sysname
set @TableName = object_name(@ObjId)
-- Initialize additive variables
set @Num_CKey_Cols = 0
set @Fixed_CKey_Size = case when @Has_UniqueClusteredIndex = 1
                            then 0
                            else 4
                       end
set @Num_Variable_CKey_Cols = 0
set @Max_Var_CKey_Size = 0
-- Start with the first column in the clustered index
set @indkey = 1
while (@indkey <= 16)  /* SQL Server 7.0 limits number of columns in an index to 16 */
   begin
--    use Index_Col to get each column of the clustered index
      set @IndexColName = INDEX_COL(@TableName, @Indid, @indkey)
      if @IndexColName IS NULL break
      set @Num_CKey_Cols = @Num_CKey_Cols + 1
      select @Fixed_Size = sc.length * (1 - st.variable),
             @Variable_Col = st.variable,
             @Var_Size = sc.length * st.variable
      from sysobjects as so
      join syscolumns as sc
        on ( =
      join systypes as st
        on (sc.xtype = st.xtype)
      where = @ObjId
      and = @IndexColName
      set @Fixed_CKey_Size = @Fixed_CKey_Size + @Fixed_Size
      set @Num_Variable_CKey_Cols = @Num_Variable_CKey_Cols + @Variable_Col
      set @Max_Var_CKey_Size = @Max_Var_CKey_Size + @Var_Size
--    Get the next column in the clustered index
      set @indkey = @indkey + 1
   end  /* while (@indkey <= 16) */
set @CIndex_Null_Bitmap = (2 + (( @Num_CKey_Cols + 7) / 8 ) )
set @Variable_CKey_Size = case when @Num_Variable_CKey_Cols = 0
                               then 0
                               else 2 + @Num_Variable_CKey_Cols + @Num_Variable_CKey_Cols + @Max_Var_CKey_Size
                          end
set @CIndex_Row_Size = @Fixed_CKey_Size + @Variable_CKey_Size + @CIndex_Null_Bitmap + 9
set @CIndex_Rows_Per_Page = 8096 / (@CIndex_Row_Size + 2)
-- Get the Fill Factor used in the index (i.e., the Pad_Index factor)
declare @Fill_Factor as int
select @Fill_Factor = IndexProperty (@ObjId, name, 'IndexFillFactor')
from sysindexes
where id    = @ObjId
and   indid = @Indid
   set @Free_CIndex_Rows_Per_Page =
       case when @Has_UniqueClusteredIndex = 1
            then case when @Fill_Factor = 100  then 0  else 2  end
            else case when @Fill_Factor = 100  then 0  else 1  end
       end
-- Calculate the number of pages required to store all the index rows at each
-- level of the index
declare @LevelPages as int, @CLevel as int
set @LevelPages = 0
set @CLevel = @DataPages  -- number of pages needed to store table data
set @CIPages = 0
   set @T = (@CIndex_Rows_Per_Page - @Free_CIndex_Rows_Per_Page)
--    According to Books Online (CREATE INDEX (T-SQL)):
-- "The number of rows on an intermediate index page is never less than two,
--  regardless of how low the value of FILLFACTOR."
   set @T = case when @T < 2  then 2  else @T  end
while (@LevelPages <> 1)
   begin
      set @LevelPages = ceiling (1.0 * @CLevel / @T)
      set @CIPages = @CIPages + @LevelPages
      set @CLevel = @LevelPages
   end
go
if object_id('dbo.sp_EstTableSizeData', 'P') IS NOT NULL  drop procedure dbo.sp_EstTableSizeData
go
create procedure dbo.sp_EstTableSizeData
 @ObjId int, @Num_Rows int,
 @Has_ClusteredIndex int, @Has_UniqueClusteredIndex int,
 @DataPages int OUTPUT
as
/*
**  Do not call this procedure directly. It will be called
**  from the main procedure:  sp_EstTableSize
*/
--<-- Addition #1: Computed columns do not consume physical space
--<-- Addition #2: A non-unique clustered index on the table adds a "uniqueifer" to
--                 "subsequent" instances of duplicate keys. This "uniqueifer" appears
--                 not only on the index pages, but also on the leaf (data) pages.
--                 See sp_EstTableSizeCI for additional information.
set nocount on
declare @Num_Cols as smallint, @Num_Variable_Cols as smallint
declare @Fixed_Data_Size as int, @Max_Var_Size as int
declare @Null_Bitmap as smallint, @Variable_Data_Size as int
declare @Row_Size as int, @Rows_Per_Page as int
declare @Free_Rows_Per_Page as int
-- Pull together information about the columns, the size of the columns and whether the
--    column is fixed or variable
select @Num_Cols = count(*),
 @Fixed_Data_Size = sum(sc.length * (1 - st.variable)),
 @Num_Variable_Cols = sum(cast(st.variable as smallint)),
 @Max_Var_Size = sum(sc.length * st.variable)
from sysobjects as so
join syscolumns as sc
  on =
join systypes as st
  on sc.xtype = st.xtype
where = @ObjId
and ObjectProperty (, 'IsUserTable') = 1
and ColumnProperty (,, 'IsComputed')  = 0  --<-- Addition #1
set @Null_Bitmap = case when @Fixed_Data_Size = 0 then 0 else (2 + (( @Num_Cols + 7) / 8 ) ) end
set @Variable_Data_Size = case when @Num_Variable_Cols = 0
                               then 0
                               else 2 + (@Num_Variable_Cols + @Num_Variable_Cols) + @Max_Var_Size
                          end
set @Row_Size = @Fixed_Data_Size + @Variable_Data_Size + @Null_Bitmap + 4 +
                case when @Has_ClusteredIndex = 1 and @Has_UniqueClusteredIndex = 0
                     then 4
                     else 0
                end
set @Rows_Per_Page = 8096 / (@Row_Size + 2)
-- If there is a clustered index on the table, get the Fill Factor used
declare @Fill_Factor as int
select @Fill_Factor = case
      when IndexProperty (@ObjId, name, 'IndexFillFactor') IS NULL
         then 100
      when IndexProperty (@ObjId, name, 'IndexFillFactor') = 0
         then 100
      else IndexProperty (@ObjId, name, 'IndexFillFactor')
      end
from sysindexes
where id = @ObjId
and IndexProperty(@ObjId, name, 'IsClustered') = 1
set @Fill_Factor = Coalesce (@Fill_Factor, 100)
set @Free_Rows_Per_Page = 8096 * ((100 - @Fill_Factor) / 100.0) / @Row_Size
set @DataPages = ceiling (1.0 * @Num_Rows / (@Rows_Per_Page - @Free_Rows_Per_Page) )

go
if object_id('dbo.sp_EstTableSize', 'P') IS NOT NULL  drop procedure dbo.sp_EstTableSize
go
create procedure dbo.sp_EstTableSize
 @TableName sysname = NULL,
 @Num_Rows int = NULL
as
-- Calculations and logic are based upon:
-- The Microsoft SQL Server 7.0 Books Online, plus
-- 'Inside Microsoft SQL Server 7.0' by Ron Soukup and Kalen Delaney,
--   published by Microsoft Press, plus
-- additional information generously supplied by Kalen Delaney.
set nocount on
-- If no or partial arguments were provided, print usage.
if (@TableName IS NULL) or (@Num_Rows IS NULL) goto usage
-- Verify that @TableName is an object name in the current database, and that it is a user
--    table
declare @ObjId int, @ErrMsg varchar(255)
select @ObjId = object_id(@TableName)
if (@ObjId IS NULL) OR (ObjectProperty (@ObjId, 'IsUserTable') = 0)
   begin
      set @ErrMsg = @TableName + ' is not a user table in the current database'
      goto ErrExit
   end
-- Verify that the estimated number of rows provided by the user is a positive number
if (@Num_Rows <= 0)
   begin
      set @ErrMsg = 'Please enter a positive number for @Num_Rows (estimated number of rows)'
      goto ErrExit
   end
-- Compute the estimated number of pages by:
--   calculating the pages needed to store data (sp_EstTableSizeData)
--   calculating the pages needed to store the clustered index (sp_EstTableSizeCI)
--   calculating the pages needed to store each non-clustered index (sp_EstTableSizeNCI)
--   adding the pages together to arrive at a grand total
-- From the SQL Server 7.0 Books Online section 'Nonclustered Indexes':
--   'If the clustered index is not a unique index, SQL Server 7.0 adds an internal
--    value to duplicate keys to make them unique. This value is not visible to users.'
-- Thanks to Kalen Delaney for establishing that the length of the internal value is
--    4-bytes (some Microsoft documentation incorrectly states that the length is 8-bytes).
-- Note that this 4-byte 'Uniqueifer' is added only to 'subsequent' instances of duplicate
--    keys. Since the code you are reading is designed to estimate space requirements, we can
--    not assume that the table given to us is populated with data. (If it were populated
--    with data, then it would make better sense to run the system stored procedure
--    sp_spaceused to determine the actual space requirements of the table.)
--    Since we can not assume that the table is populated, we can not make an accurate,
--    or even an informed guess, as to the distribution of data in the clustered index.
--    Thus, keeping in line with the view that we are creating an upper bound of storage
--    requirements, we assume that for a non-unique clustered index, that every row
--    pointer has this extra 4-byte Uniqueifer.
-- Thanks to Kalen Delaney for establishing that the Uniqueifer, when present uses space
--    not only in the index pages, but also in the data pages.
-- Since the presence of a clustered index, and whether it is unique or not impacts all
--    of the sub-procedures, the determination of its existence is determined up-front.
declare @CI_name as sysname, @Indid as smallint
declare @Has_ClusteredIndex as int, @Has_UniqueClusteredIndex as int
set @Has_ClusteredIndex = ObjectProperty (@ObjId, 'TableHasClustIndex')
if @Has_ClusteredIndex = 0
   begin
      set @CI_name = NULL
      set @Has_UniqueClusteredIndex = 0
   end
else
   begin  -- get the name and index id of the clustered index
      select @CI_name = name, @Indid = indid
      from   sysindexes
      where  id = @ObjId
      and    IndexProperty (@ObjId, name, 'IsClustered') = 1
      set @Has_UniqueClusteredIndex = IndexProperty (@ObjId, @CI_name, 'IsUnique')
   end

-- Create a temporary table to hold the details on the non-clustered indexes.
create table #ncindexes
 (IndexName sysname,
  IndexPages int)
-- Call the supporting sub-procedures
declare @DataPages as int, @CIndex_Row_Size as int, @CIPages as int, @NCIPages as int
exec sp_EstTableSizeData @ObjId, @Num_Rows,
    @Has_ClusteredIndex, @Has_UniqueClusteredIndex,
    @DataPages OUTPUT
if @Has_ClusteredIndex = 1
   exec sp_EstTableSizeCI @ObjId, @Num_Rows, @DataPages,
        @Indid, @Has_UniqueClusteredIndex,
        @CIndex_Row_Size OUTPUT, @CIPages OUTPUT
else
   begin
      set @CIPages = 0
      set @CIndex_Row_Size = 0
   end
exec sp_EstTableSizeNCI @ObjId, @Num_Rows, @DataPages,
   @CIndex_Row_Size, @Has_UniqueClusteredIndex, @NCIPages OUTPUT
select 'UserTable' = @TableName
select 'Total Pages' = @DataPages + @CIPages + @NCIPages,
 'Data Pages' = @DataPages,
 'Clustered Index Pages' = @CIPages,
 'Non-Clustered Index Pages' = @NCIPages,
 'Total Size (Mb)' = (@DataPages + @CIPages + @NCIPages) * 8 / 1024
if exists (select * from #ncindexes)
   begin
      print   ' '
      print   'Non-Clustered Index Details'
      select  'Index Pages' = IndexPages, 'Index size (Mb)' = IndexPages * 8 / 1024, 'Index Name' = IndexName
      from    #ncindexes
      order by IndexName
   end
return
usage:
print ' '
print 'Usage:'
print '   exec sp_EstTableSize [@TableName=]userTableName, [@Num_Rows=]estimatedNumberRowsInTable'
print ' '
print 'Note that the result is an ESTIMATE of the storage to be used by the table.'
print ' '
print 'It should be treated as an upper bound of the storage requirements because it is'
print 'assumed that variable length data (varchar, nvarchar, varbinary) will use the'
print 'maximum storage on every row. That is, it is assumed that a varchar(100) field'
print 'will always contain 100 bytes of data.'
print ' '
print 'If your knowledge of the actual data is such that you may assume that on average'
print 'only 20 bytes will be stored in the varchar(100) field, then for a more accurate'
print 'estimate, create a second table definition with a varchar(20) field and use that'
print 'table as input to sp_EstTableSize.'

return
ErrExit:
print ' '
print @ErrMsg
return
go
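Once the four procedures are created, a call looks like this (the table name and row count are illustrative):

```sql
-- Estimate storage for a table at an expected row count
exec sp_EstTableSize @TableName = 'Orders', @Num_Rows = 500000
```

The output lists total pages, data pages, clustered and non-clustered index pages, and the sizes in Mb, plus a per-index breakdown when non-clustered indexes exist.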

Thursday, January 17, 2008


Create Index syntax

CREATE [UNIQUE] [CLUSTERED | NONCLUSTERED] INDEX index_name
  ON table_name (column_name [ASC | DESC][,...n])
[WITH (
 [FILLFACTOR = fillfactor]
 [[,] PAD_INDEX = { ON | OFF }]
 [[,] DROP_EXISTING  = { ON | OFF }]
 [[,] IGNORE_DUP_KEY = { ON | OFF }]
 [[,] SORT_IN_TEMPDB = { ON | OFF }]
 [[,] ALLOW_ROW_LOCKS = { ON | OFF }]
 [[,] MAXDOP = max_degree_of_parallelism]
 [[,] ONLINE = { ON | OFF }]
)]

SQL Server 2000 does not use ON and OFF. If you want an option set to ON, specify the option; if you don't want the option, don't specify it. SQL Server 2005 offers a backward-compatible syntax without the ON and OFF keywords, but for new development the new syntax is recommended.
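For example, the same index expressed in both syntaxes (the table and index names are illustrative):

```sql
-- SQL Server 2000 style: an option is ON simply because it is listed
create index IX_Orders_OrderDate on Orders (OrderDate)
with fillfactor = 80, pad_index

-- SQL Server 2005 style: options are explicit ON/OFF pairs in parentheses
create index IX_Orders_OrderDate on Orders (OrderDate)
with (fillfactor = 80, pad_index = on)
```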

To avoid index page splitting, assign the FILLFACTOR and PAD_INDEX options.
FILLFACTOR generally applies only to the index's leaf pages (the data pages, in the case of a clustered index).
PAD_INDEX applies to the intermediate index pages and uses the same FILLFACTOR value.
If no fillfactor is specified, the server-wide default is used. The server-wide default value (0 by default) is set via the sp_configure procedure with the fill factor option. So if you want a fillfactor to apply to all databases on a server, set the server-wide default value.
FILLFACTOR and PAD_INDEX are applied only when an index is created (or re-created). If a considerable number of transactions occur (such as a bulk insert operation), you might need to rebuild the index, because the bulk operation might fill up the reserved free space and cause page splits, which in turn degrade query performance.
When an index is rebuilt, if no fillfactor is specified, the original fillfactor value is used.
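On SQL Server 2000 you can rebuild with DBCC DBREINDEX; passing a fillfactor of 0 keeps the value the index was originally created with (the table name is illustrative):

```sql
-- Rebuild all indexes on the table ('' = all indexes);
-- fillfactor 0 = reuse each index's original fillfactor
DBCC DBREINDEX ('dbo.Orders', '', 0)
```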

DROP_EXISTING specifies that a given index should be dropped and rebuilt as a single transaction. It enhances performance when re-creating a clustered index on a table with non-clustered indexes, because the non-clustered indexes are re-created only once, after the clustered index is built, and only if the index definition changed (that is, a different index name or different columns compared with the original index). Without this option, the non-clustered indexes are re-created twice when a clustered index is re-created.
A clustered index cannot be converted to a non-clustered index using DROP_EXISTING. However, in SQL Server 2000, a unique clustered index can be changed to a non-unique index, and vice versa. In SQL Server 2005, all clustered indexes are unique: if you build a clustered index without specifying the UNIQUE keyword, SQL Server guarantees uniqueness internally by adding a uniqueifier to the rows when necessary.
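A sketch of the option in SQL Server 2005 syntax (illustrative names):

```sql
-- Rebuild the clustered index in place as a single transaction;
-- non-clustered indexes are rebuilt only if the key definition changed
create unique clustered index PK_Orders on Orders (OrderId)
with (drop_existing = on)
```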

SORT_IN_TEMPDB: By default (without this option, or with it turned off), SQL Server uses space from the filegroup on which the index is to be created. When this option is used, sort buffers are allocated from tempdb, so much less space is needed in the source database. Performance can also be greatly improved if the tempdb database is on a separate physical disk from the database you're working with, and you can speed up index creation even more if tempdb is on a faster disk than your user database. As an alternative to SORT_IN_TEMPDB, you can create separate filegroups for a table and its indexes, that is, the table on one filegroup and its indexes on another. If the two filegroups are on different disks, you can also minimize disk head movement and in turn improve index creation performance.
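For example (illustrative names):

```sql
-- Build the index using tempdb for the intermediate sort runs
create nonclustered index IX_Orders_CustomerID on Orders (CustomerID)
with (sort_in_tempdb = on)
```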


DBCC stands for Database Console Command (Database Consistency Checker prior to SQL 2000). The commands are now divided into four categories: validation, maintenance, informational, and miscellaneous.

To get a list of all the DBCC commands: DBCC help ('?')

To get syntax help with a particular DBCC command: DBCC help ('checkdb')

The Validation commands are the CHECK commands: DBCC CHECKALLOC, CHECKCATALOG, CHECKCONSTRAINTS, CHECKDB, CHECKFILEGROUP, CHECKIDENT, and CHECKTABLE.


The Informational commands include DBCC INPUTBUFFER, OPENTRAN, OUTPUTBUFFER, PROCCACHE, SHOW_STATISTICS, SHOWCONTIG, SQLPERF, TRACESTATUS, and USEROPTIONS, as well as undocumented commands such as DBCC PAGE and DBCC IND.

(For some of these, such as DBCC PAGE, you need to turn on trace flag 3604 to see the output: DBCC TRACEON(3604).)
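For example, to dump a page with the undocumented DBCC PAGE command (the database name, file id, page id, and print option below are purely illustrative):

```sql
DBCC TRACEON (3604);                      -- send DBCC output to the client
DBCC PAGE ('AdventureWorks', 1, 100, 3);  -- database, file id, page id, print option
DBCC TRACEOFF (3604);
```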


DBCC CHECKDB runs all of the other DBCC validation commands in this order:

1. DBCC CHECKALLOC is run on the database;
2. DBCC CHECKTABLE is run on every table and indexed view in the database;
3. DBCC CHECKCATALOG is run on the database;
4. The Service Broker data in the database is verified.

In SQL Server 2005, all of the DBCC validation commands use database snapshot technology to keep the validation operation from interfering with ongoing database operations and to allow the validation operation to see a quiescent, consistent view of the data, no matter how many changes were made to the underlying data while the operation was under way. A snapshot of the database is created at the beginning of the CHECK command, and no locks are acquired on any of the objects being checked. The actual check operation is executed against the snapshot.
The "snapshot file" that DBCC CHECKDB creates with the original page images is not visible to the end user and its location cannot be configured; it always uses space on the same volume as the database being checked. This capability is available only when your data directory is on an NTFS partition.
If you aren't using NTFS, or if you don't want to use the space necessary for the snapshot, you can avoid creating the snapshot by using the WITH TABLOCK option with the DBCC command.
In addition, if you are using one of the REPAIR options to DBCC, a snapshot is not created because the database is in single-user mode, so no other transactions can be altering data. Without the TABLOCK option, the DBCC validation commands are considered online operations because they don't interfere with other work taking place in a database. With the TABLOCK option, however, a Shared Table lock is acquired for each table as it is processed, so concurrent modification operations will be blocked.
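A minimal sketch ('YourDatabase' is a placeholder):

```sql
-- Skip the internal database snapshot; a Shared Table lock is taken
-- on each table as it is checked, blocking concurrent modifications.
DBCC CHECKDB ('YourDatabase') WITH TABLOCK;
```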

For databases that have been upgraded from previous SQL Server versions, you must run DBCC CHECKDB with the DATA_PURITY option once, preferably immediately after the upgrade, as follows: DBCC CHECKDB ('YourDatabase') WITH DATA_PURITY

SQL Server 2005 includes a set of logical validation checks to verify that data is appropriate for the column's datatype. These checks can be expensive and can affect the server's performance, so you can choose to disable them, along with all the other non-core logical validations, by using the PHYSICAL_ONLY option. All new databases created in SQL Server 2005 have the DATA_PURITY logical validations enabled by default.

After the purity check completes without any errors for a database, performing the logical validations is the default behavior in all future executions of DBCC CHECKDB, and there is no way to change this default. You can, of course, override the default with the PHYSICAL_ONLY option. This option not only skips the data purity checks, but it also skips any checks that actually have to analyze the contents of individual rows of data and basically limits the checks that DBCC performs to the integrity of the physical structure of the page and the row headers.
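For example ('YourDatabase' is a placeholder):

```sql
-- Check only the physical integrity of page structures and record
-- headers; data purity and other row-level logical checks are skipped.
DBCC CHECKDB ('YourDatabase') WITH PHYSICAL_ONLY;
```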

If the CHECKSUM option is enabled for a database, which is the default in all new SQL Server 2005 databases, a checksum will be performed on each allocated page as it is read by the DBCC CHECK commands. When the CHECKSUM option is on, a page checksum is calculated and written on each page as it is written to disk, so only pages that have been written since CHECKSUM was enabled will have this check done. The page checksum value is checked during the read and compared with the original checksum value stored on the page. If they do not match, an error is generated. In addition, pages with errors are recorded in the suspect_pages table in the msdb database.
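A sketch of enabling the option and reviewing flagged pages ('YourDatabase' is a placeholder):

```sql
-- Turn on page checksums (the default for new SQL Server 2005
-- databases). Only pages written after this point carry a checksum.
ALTER DATABASE YourDatabase SET PAGE_VERIFY CHECKSUM;

-- Pages that failed a checksum (or are otherwise suspect) are logged here:
SELECT * FROM msdb.dbo.suspect_pages;
```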

Use the following script to see the progress reporting for DBCC CHECKDB, DBCC CHECKTABLE, and DBCC CHECKFILEGROUP:

SELECT command, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE 'dbcc %'

DBCC CHECKDB Best Practices:

· Use CHECKDB with the CHECKSUM database option and a sound backup strategy to protect the integrity of the data from hardware-caused corruption.

· Perform DBCC CHECKDB with the DATA_PURITY option after upgrading a database to SQL 2005 to check for invalid data values.

· Make sure there is enough disk space available to accommodate the database snapshot that will be created.

· Make sure there is enough space available in tempdb to allow the DBCC CHECKDB command to run.

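One way to gauge the tempdb space requirement up front is the ESTIMATEONLY option ('YourDatabase' is a placeholder):

```sql
-- Reports the estimated tempdb space DBCC CHECKDB would need,
-- without actually running the checks.
DBCC CHECKDB ('YourDatabase') WITH ESTIMATEONLY;
```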

You can see the current size of the log file for all databases, as well as the percentage of the log file space that has been used, by running the DBCC command DBCC SQLPERF('logspace'). The following code, supplied by Cliff Dibble from Microsoft, returns the same information as DBCC SQLPERF('logspace') in a tabular format that can be further filtered or easily embedded into a stored procedure, table-valued function, or view:

  SELECT rtrim(pc1.instance_name) AS [Database Name]
   ,      pc1.cntr_value/1024.0 AS [Log Size (MB)]
   ,      cast(pc2.cntr_value*100.0/pc1.cntr_value as dec(5,2))
           as [Log Space Used (%)]
   FROM    sys.dm_os_performance_counters as pc1
   JOIN    sys.dm_os_performance_counters as pc2
   ON      pc1.instance_name = pc2.instance_name
   WHERE   pc1.object_name LIKE '%Databases%'
   AND     pc2.object_name LIKE '%Databases%'
   AND     pc1.counter_name = 'Log File(s) Size (KB)'
   AND     pc2.counter_name = 'Log File(s) Used Size (KB)'
   AND     pc1.instance_name not in ('_Total', 'mssqlsystemresource')
   AND     pc1.cntr_value > 0

The final condition is needed to filter out databases that have no log file size reported. This includes any database that is unavailable because it has not been recovered or is in a suspect state, as well as any database snapshots, which have no transaction log.


Considerations regarding database snapshots:

· Snapshots cannot be created for the model, master, or tempdb databases. (Internally, snapshots can be created to run the online DBCC checks on the master database, but they cannot be explicitly created).

· A snapshot inherits the security constraints of its source database, and because it is read-only, you cannot change the permissions.

· If you drop a user from the source database, the user is still in the snapshot.

· Snapshots cannot be backed up or restored; snapshots do not affect backups of the source database.

· Snapshots cannot be attached or detached.

· Full-text indexing is not supported on database snapshots, and full-text catalogs are not propagated from the source database.
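For reference, creating a snapshot looks like the following sketch (the database and file names are hypothetical; NAME must match the logical name of each data file in the source database):

```sql
CREATE DATABASE AdventureWorks_Snap
ON (NAME = AdventureWorks_Data,
    FILENAME = 'C:\Snapshots\AdventureWorks_Data.ss')
AS SNAPSHOT OF AdventureWorks;

-- When it is no longer needed, drop it like an ordinary database:
-- DROP DATABASE AdventureWorks_Snap;
```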

Server 'DHW01' is not configured for DATA ACCESS

When you try to run an openquery script against a local database, you receive the following error:

Msg 7411, Level 16, State 1, Line 1
Server 'DHW01' is not configured for DATA ACCESS.

To rectify the issue, run the following script:

exec sp_serveroption 'DHW01', 'DATA ACCESS', 'TRUE'

Login failed for user 'Test'.

The following are some reasons for the Login failed for user 'Test' error:

1. If you receive the following error

Cannot connect to DHW01

Login failed for user 'Test'. The user is not associated with a trusted SQL Server connection. (Microsoft SQL Server, Error: 18452)

This is because the Server authentication property is set to "Windows Authentication mode" (SQL 2005) or "Windows only" (SQL 2000). Changing the server authentication property to "SQL Server and Windows Authentication mode" (SQL 2005) or "SQL Server and Windows" (SQL 2000) fixes the issue. The option is located under the Security page of the server properties.
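You can confirm the current mode from T-SQL before and after the change (note that the SQL Server service must be restarted for the change to take effect):

```sql
-- Returns 1 for Windows Authentication only, 0 for mixed mode.
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');
```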