Friday, 17 February 2012

User defined functions in Select statements

A UDF called in a SELECT statement gets called once for each row returned, so the recordset behaves somewhat like a cursor in terms of performance.

If possible, the functionality of the UDF should be inlined in the SELECT statement itself, particularly if the query returns a large recordset.
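
As a quick sketch (dbo.GetVAT, Orders and netAmount are hypothetical names, assuming GetVAT simply applies a 17.5% rate), both statements below return the same result, but the second avoids invoking a function for every row:

-- scalar UDF: dbo.GetVAT is executed once per row returned
SELECT orderId, dbo.GetVAT(netAmount) AS vat
FROM Orders

-- inlined: the same expression is evaluated as part of the set operation
SELECT orderId, netAmount * 0.175 AS vat
FROM Orders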

(Click on title for link to full original article)

Quorum doesn't move with group on Cluster (2008R2)

Having manually moved the SQL resource group to another node on a Windows Server 2008 R2 cluster, I noticed that the quorum was still on the original node.

The quorum sits in a separate, hidden resource group from SQL Server and is quite happy operating on a different node.

This hidden resource group can be seen through the command line but not through the GUI.

c:\cluster group

This returns a list of all the resource groups and their status.

The Quorum is in the resource group "Cluster Group"

To move this resource to a new node simply execute:
c:\cluster group "Cluster Group" /move

Friday, 19 August 2011

Current CPU Usage

You may need to programmatically collect the current CPU utilization. Why is beyond me, but whatever. The following script cannot be a function, because for no real reason you can't use the WAITFOR command inside a function. So instead it's a stored procedure with an OUTPUT parameter.

CREATE PROCEDURE GetCurrentCPU
    @CPU float output
AS

/* used to store current cpu data */
declare @CPU_THEN int
declare @IDLE_THEN int

/* used to store cpu data after 1 second delay */
declare @CPU_NOW int
declare @IDLE_NOW int

/* result */
declare @CPU_PERCENT float

/* get current CPU usage */
set @CPU_THEN = @@CPU_BUSY
set @IDLE_THEN = @@IDLE

WAITFOR DELAY '00:00:01'

/* get cpu usage after short delay */
set @CPU_NOW = @@CPU_BUSY
set @IDLE_NOW = @@IDLE

/* busy ticks as a percentage of total (busy + idle) ticks over the interval */
set @CPU_PERCENT = (@CPU_NOW - @CPU_THEN) / ((@IDLE_NOW - @IDLE_THEN + @CPU_NOW - @CPU_THEN) * 1.00) * 100

/* the result goes out through the OUTPUT parameter
   (RETURN would truncate it to an integer) */
set @CPU = @CPU_PERCENT

GO
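
It can then be called like this:

DECLARE @cpu FLOAT
EXEC GetCurrentCPU @cpu OUTPUT
SELECT @cpu AS CurrentCpuPercent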

Thursday, 17 June 2010

Unattended Installation

It takes a fair while to install SQL Server 2008 using the GUI, clicking Next through screen after screen and waiting for each form to validate. This is a nuisance, particularly if you're installing more than one instance. An unattended install is the answer to this; below is a configuration file that will install a default instance. The rest of the file is self-explanatory.

It's invoked on the command line with:
 
setup.exe /configurationfile=c:\temp\sql_install.ini

;SQLSERVER2008 Configuration File
;This is a standard non-clustered installation

[SQLSERVER2008]

; Specify the InstanceID for directory and registry structures and service names
INSTANCEID="mssqlserver"

; Specify the Instance Name (or 'mssqlserver' for a default instance)
INSTANCENAME="mssqlserver"

; Specify installer Action (INSTALL, UNINSTALL, or UPGRADE)
ACTION="Install"

; Specify features to Action
FEATURES=SQL,Tools,SQLENGINE,REPLICATION,FULLTEXT,BIDS,CONN,IS,BC,SDK,BOL,SSMS,ADV_SSMS,SNAC_SDK,OCS

; Display the command line parameters usage
HELP="False"

; Specifies that the detailed Setup log should be piped to the console.
INDICATEPROGRESS="True"

; Do not display the user interface.
;QUIET="True"

; Show progress only
QUIETSIMPLE="True"

; Specify that Setup should install into WOW64. Supports x64 Only.
X86="False"

; Specify path to the install media
MEDIASOURCE="C:\Install Files\en_sql_server_2008_enterprise_x86_x64_ia64\"

; Do not send error reports to Microsoft
ERRORREPORTING="False"

; Do not send feature usage data to Microsoft
SQMREPORTING="False"

; Install directory for shared files and components
INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"

; Install directory for the WOW64 stuff
INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"

; Specify the installation directory.
INSTANCEDIR="C:\Program Files\Microsoft SQL Server"

; Configure SQL Server Agent service
AGTSVCACCOUNT="pkrtest\sqladmin"
AGTSVCPASSWORD="v4OVdj8I"
AGTSVCSTARTUPTYPE="Automatic"

; Configure Integration Services service.
ISSVCACCOUNT="NT AUTHORITY\NetworkService"
ISSVCSTARTUPTYPE="Automatic"

; Configure SQL Server service.
SQLSVCACCOUNT="pkrtest\sqladmin"
SQLSVCPASSWORD="v4OVdj8I"
SQLSVCSTARTUPTYPE="Automatic"

; Configure SQL Server Browser service.
BROWSERSVCSTARTUPTYPE="Automatic"

; Configure Full Text Service
FTSVCACCOUNT="NT AUTHORITY\LOCAL SERVICE"

; Specify Collation settings.
ASCOLLATION="Latin1_General_CI_AS"
SQLCOLLATION="Latin1_General_CI_AS"

; Analysis Services file locations.
ASDATADIR="Data"
ASLOGDIR="Log"
ASBACKUPDIR="Backup"
ASTEMPDIR="Temp"
ASCONFIGDIR="Config"

; Enable the Network Protocols
TCPENABLED="1"
NPENABLED="0"

; Enable Mixed Mode security.
SECURITYMODE="SQL"
SAPWD="2manysecrets!"

; Do not enable FILESTREAM
FILESTREAMLEVEL="0"

; Specifies whether or not the MSOLAP provider is allowed to run in process.
ASPROVIDERMSOLAP="1"

; Windows account(s) to provision as SQL Server system administrators.
SQLSYSADMINACCOUNTS="PKRTEST\wils01mgmt" "PKRTEST\benn01mgmt"


You can also install a named instance using this file, by changing these two lines to the new instance name:


; Specify the InstanceID for directory and registry structures and service names
INSTANCEID="new_instance_name"

; Specify the Instance Name (or 'mssqlserver' for a default instance)

INSTANCENAME="new_instance_name"


And that's it; you now have an instance installed with the least amount of fiddling inside the GUI.

Tuesday, 20 April 2010

Centralized Management Servers

This is pretty much an interim post until I get around to writing up a more detailed article, before I forget half the setup details along the way. Centralized Management Servers, other than just letting you run a query on multiple servers simultaneously, also let you manage jobs centrally. In brief, you create a master management server and can push jobs out to the respective target servers.

By default, this behaviour only works when using certificates, which for playing with is a pain in the arse. However, there is a registry change you can make that eliminates the need to issue certificates for each master and target management server, as below:

\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\instance_name\SQLServerAgent\MsxEncryptChannelOptions (REG_DWORD)

This needs to be changed to 0 (zero) from 2 (two).
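
As a sketch, you can script the change with the undocumented xp_regwrite extended procedure; the MSSQL10.MSSQLSERVER instance ID below is an assumption for a default SQL Server 2008 instance, so substitute your own:

-- MSSQL10.MSSQLSERVER is an assumed instance ID; adjust to match yours
EXEC master..xp_regwrite
@rootkey = 'HKEY_LOCAL_MACHINE',
@key = 'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerAgent',
@value_name = 'MsxEncryptChannelOptions',
@type = 'REG_DWORD',
@value = 0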

Monday, 19 April 2010

Promote ALL Mirrors After a Single Fail-Over

Whilst on the subject of making jobs mirror aware: often a single database (or even application) is logically split into multiple databases on a single server. From SQL Server's perspective they are two distinct entities, which means a single database failing over to its mirror partner would render the application unusable, as its back end is now straddling two different servers.

To get around this, I have an alert using a WMI query monitoring for a failover event; should any single database fail over, the remaining databases are then failed across with it.

You'll need a job, without a schedule, that's fired when a fail-over event is detected. This example sends an email and then proceeds to fail over any abandoned databases.

EXEC msdb..sp_send_dbmail
    @profile_name = 'OT-FUSION',
    @recipients = 'alex.wilson@mysqlserver.com',
    @subject = 'Database Mirror Failover Event Occurred',
    @body = 'A database just failed over, the abandoned databases will be failed over now...'

IF EXISTS (SELECT 1 FROM sys.database_mirroring WHERE db_name(database_id) = N'MyPokerDB' AND mirroring_role_desc = 'PRINCIPAL')
     ALTER DATABASE MyPokerDB SET PARTNER FAILOVER


This job doesn't have a schedule; its execution is driven by the alert created below, which executes the job when a failover event is detected.

EXEC msdb.dbo.sp_add_alert @name=N'Mirror Check',
        @message_id=0,
        @severity=0,
        @enabled=1,
        @wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
        @wmi_query=N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE State = 7 OR State = 8'
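
The alert still needs the response job attached to it; sp_add_alert (and sp_update_alert) take a @job_name parameter for this. The job name below is illustrative:

EXEC msdb.dbo.sp_update_alert
        @name = N'Mirror Check',
        @job_name = N'Mirror Failover Response'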


Here we query for mirroring state 7 or 8; a full list of states can be found here

You may want to create this job and alert on both the principal and the mirror, if you suspect they are going to fail back and forth.

Friday, 26 March 2010

Mirroring - After an Automatic Failover

So get this: our principal database failed in our mirrored pair. That's cool, it automatically failed over. What isn't so cool is that the jobs that run every 5 minutes weren't enabled on the mirror, nor were they disabled on the under-the-weather principal. Of course this happened in the middle of the night. Alerts were being spewed forth from everywhere and the jobs were not running on the now-live database, causing further problems.

To prevent this happening in the future, I replaced the jobs with 'starter' jobs that determine whether they should run on that server by querying what role the database has in the mirror: principal or mirror. This allows us to leave the jobs on both servers enabled and scheduled. A scalar function achieves this, plus the obvious modifications to any jobs that already exist.

create function IsMirrorRole (@database varchar(255), @role varchar(255))
returns tinyint
as

begin

    declare @response int

    select @response = count(*) from sys.database_mirroring
    where database_id = DB_ID(@database)
    and mirroring_role_desc = @role

    return @response
end

And this can then be used with the following addition to any jobs, or by creating a 'starter' job that invokes any other jobs that need to be called.

-- is payment the principal?
-- a non-zero is true
if dbo.IsMirrorRole('payment', 'principal') != 0
begin
    EXEC msdb.dbo.sp_start_job 'Check Point'
end

Tuesday, 23 February 2010

Trailing blanks

Following on from my post on 29th December 2008 on the LEN() function gotcha:

When comparing string values for equality, trailing blanks are ignored.
i.e. 'xyz' = 'xyz ' is true

However, unlike the = operator, the LIKE operator will give the correct result
i.e. 'xyz' LIKE 'xyz ' is false

e.g.
IF ('xyz' = 'xyz ')
PRINT '= That shouldn''t work!'

IF ('xyz' LIKE 'xyz ')
PRINT 'LIKE That shouldn''t work!'

Thursday, 12 November 2009

Backing up the Log Tail

It's possible, with a recovered database, to make it start accepting log restores again. Getting the LSNs accurate is a bit fiddly, but this solution is ideal if you're relocating a production database to a new server and you need the ability to fail back to the original if things go wrong.

Assuming you are log shipping to the new server, you should stop all connections to the source database and apply all the log (or diff) backups to the target server.

At this point you should back up the tail of the log; this is the final log backup taken on the source database, and if you issue the backup with 'norecovery' it effectively puts that database into a standby-like restoring state. You can then recover the warm standby (on the target).

Keep taking regular log backups on the newly promoted server; these can later be applied to the former live server.

This will seriously limit downtime if you need to failback.

backup log Poker to disk = 'y:\poker_tail_log.bak'
with norecovery

Tuesday, 10 November 2009

Assigning variables from multiple records in a table

It may be required to assign values to multiple variables from multiple rows in a table.

Consider the following table:





configName      configValue
-----------     -----------
configOne       1
configTwo       2
configThree     3


If we wanted to assign:
  • @configOne to the configValue where configName = 'configOne'
  • @configTwo to the configValue where configName = 'configTwo'
  • etc.

We could achieve this with the following multiple select statements:
  • SELECT @configOne = configValue FROM TableName WHERE configName = 'configOne'
  • SELECT @configTwo = configValue FROM TableName WHERE configName = 'configTwo'
  • SELECT @configThree = configValue FROM TableName WHERE configName = 'configThree'

This of course requires 3 separate reads on the table; a more efficient method to assign these variables would be to use a pivot query.
i.e.

-- assuming configValue is an int
DECLARE @configOne INT, @configTwo INT, @configThree INT

SELECT @configOne = [configOne],
       @configTwo = [configTwo],
       @configThree = [configThree]
FROM
    (SELECT configName, configValue
     FROM TableName
     WHERE configName IN ('configOne', 'configTwo', 'configThree')) AS p
PIVOT
    (MAX(configValue)
     FOR configName IN ([configOne], [configTwo], [configThree])) AS pvt

Friday, 6 November 2009

No TRUNCATE_ONLY?

So, it looks like Microsoft removed the very useful 'truncate_only' option from the backup command, citing that you should switch to simple recovery mode instead. I don't like that. However, you have two alternatives, outside of the simple recovery nonsense.

1. Use the NUL device ('nul:') as the destination for your backup command.
2. If you have Quest's Litespeed installed, use the --nowrite option.

NULL Device
backup log northwind to disk = 'nul:'

LiteSpeed
exec xp_cmdshell 'sqllitespeed.exe -B LOG -D northwind -F c:\null.bak --nowrite'

Tuesday, 1 September 2009

Which FileGroup?

I need to free up space quickly on a certain drive; to do so means deleting some data and then shrinking that filegroup. So I need to be able to quickly identify which table belongs to which filegroup. This data is available in Management Studio, but it's not easy or convenient to get via the GUI. The following script will list every table and its filegroup.

select o.name as TableName, fg.groupname as FileGroupName
from sysobjects o
inner join sysindexes i
    on i.id = o.id
inner join sysfilegroups fg
    on i.groupid = fg.groupid
where o.type = 'U'
and i.indid in (0, 1)

Determine Space Available in FileGroups

Sometimes it's useful to know the space usage of individual database files. The following script uses the FILEPROPERTY function to display the space used by each file.


SELECT DB_NAME() AS DbName,
name AS FileName,
size/128.0 AS CurrentSizeMB,
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB
FROM sys.database_files;

Thursday, 20 August 2009

EXECUTE AS

If you want to test how something works for the little people (i.e. those without sysadmin permissions on SQL), SQL 2005 introduced the EXECUTE AS command.

Any command run after this will run in the context of the specified account.

The EXECUTE AS context is reverted back to the logged-in user with the REVERT command.

* ORIGINAL_LOGIN() shows the original login - as expected!

This is simple to implement and allows you to run commands under the guise of the specified user; the following example should demonstrate its use:

--Open a normal connection to a sql server
SELECT USER_NAME(), ORIGINAL_LOGIN()
-- dbo, rnorris

EXECUTE AS USER = 'smallPerson'
SELECT USER_NAME(), ORIGINAL_LOGIN()
-- smallPerson, rnorris

REVERT
SELECT USER_NAME(), ORIGINAL_LOGIN()
-- dbo, rnorris

As usual, full documentation and examples can be found in BOL

Tuesday, 18 August 2009

Upgrading to SQL Server 2008

Upgrading an existing SQL Server instance to 2008 can be problematic if you have a large MSDB database. If you log ship any databases, or take frequent backups, there will be a row created for every backup taken, be it a log backup, a differential or a full database backup. Over time this will grow.

During the installation of SQL Server 2008, the MSDB database gets updated and columns are added to a few tables, including the ones used for recording backup activity. This means you could be recreating a table with millions of rows, which causes the installer to hang, sometimes leaving you with a corrupt MSDB database.

To work around this, you need to delete some history from the MSDB tables. Microsoft provides a stored procedure for this, but it uses a cursor and is too slow for a production upgrade.

I've created a replacement that deletes from all the appropriate tables, leaving just enough rows to continue your log shipping processes. The amount to be retained is user-definable in units of days.

Before running the procedure you will want to create the following indexes. You do not need to restart with the 'update catalog' option enabled.

CREATE INDEX [media_set_id] ON [dbo].[backupset] ([media_set_id])
GO

CREATE INDEX [restore_history_id] ON [dbo].[restorefile] ([restore_history_id])
GO

CREATE INDEX [restore_history_id] ON [dbo].[restorefilegroup] ([restore_history_id])
GO

Now compile the following stored procedure in the MSDB database:

CREATE PROCEDURE [dbo].[spDeleteBackupRestoreHistory] (@RetentionDay INT)
AS

DECLARE @OldDate DATETIME,
        @RowCount INT

SELECT @OldDate = DATEADD(dd, @RetentionDay * -1, GETDATE())

IF OBJECT_ID('tempdb..#backup') IS NOT NULL
    DROP TABLE #backup

CREATE TABLE #backup
(
    backup_set_id int PRIMARY KEY,
    media_set_id int,
    backup_finish_date datetime,
    is_exist char(1) DEFAULT 'N'
)

DECLARE @msg NVARCHAR(2400)

WHILE (1 = 1)
BEGIN

    TRUNCATE TABLE #backup

    -- grab the next batch of 1000 expired backup sets
    INSERT INTO #backup (backup_set_id, media_set_id, backup_finish_date)
    SELECT TOP 1000 backup_set_id, media_set_id, backup_finish_date
    FROM msdb.dbo.backupset bs
    WHERE bs.backup_finish_date < @OldDate
    ORDER BY bs.backup_set_id

    SET @RowCount = @@ROWCOUNT

    SELECT @msg = 'Earliest date ' + CAST(MIN(backup_finish_date) AS VARCHAR(60)) + ' rows:' + STR(@RowCount)
    FROM #backup

    RAISERROR(@msg, 1, 1) WITH NOWAIT

    IF @RowCount = 0
        GOTO _EXIT

    BEGIN TRAN

    -- delete restorefile
    DELETE FROM msdb.dbo.restorefile
    FROM msdb.dbo.restorefile rf
    INNER JOIN msdb.dbo.restorehistory rh
        ON rf.restore_history_id = rh.restore_history_id
    INNER JOIN #backup bs
        ON rh.backup_set_id = bs.backup_set_id

    -- delete restorefilegroup
    DELETE FROM msdb.dbo.restorefilegroup
    FROM msdb.dbo.restorefilegroup rfg
    INNER JOIN msdb.dbo.restorehistory rh
        ON rfg.restore_history_id = rh.restore_history_id
    INNER JOIN #backup bs
        ON rh.backup_set_id = bs.backup_set_id

    -- delete restorehistory
    DELETE FROM msdb.dbo.restorehistory
    FROM msdb.dbo.restorehistory rh
    INNER JOIN #backup bs
        ON bs.backup_set_id = rh.backup_set_id

    -- delete backupfile
    DELETE FROM msdb.dbo.backupfile
    FROM msdb.dbo.backupfile bf
    INNER JOIN #backup bs
        ON bs.backup_set_id = bf.backup_set_id

    -- delete backupset
    DELETE FROM msdb.dbo.backupset
    FROM msdb.dbo.backupset bs
    INNER JOIN #backup b
        ON bs.backup_set_id = b.backup_set_id

    -- flag media sets that are still referenced by remaining backupset rows
    UPDATE #backup
    SET is_exist = 'Y'
    FROM #backup b
    INNER JOIN msdb.dbo.backupset bs
        ON bs.media_set_id = b.media_set_id

    -- delete backupmediafamily for unreferenced media sets
    DELETE FROM msdb.dbo.backupmediafamily
    FROM msdb.dbo.backupmediafamily bmf
    INNER JOIN #backup b
        ON bmf.media_set_id = b.media_set_id
    WHERE b.is_exist = 'N'

    -- delete backupmediaset for unreferenced media sets
    DELETE FROM msdb.dbo.backupmediaset
    FROM msdb.dbo.backupmediaset bms
    INNER JOIN #backup b
        ON bms.media_set_id = b.media_set_id
    WHERE b.is_exist = 'N'

    COMMIT TRAN

END

_EXIT:

DROP TABLE #backup
GO

This procedure deletes in batches of 1000 rows to save on the transaction log, and uses RAISERROR ... WITH NOWAIT to report progress in real time.
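
For example, to keep the most recent 30 days of history (30 is just an illustration; pick a retention that comfortably covers your log shipping chain):

EXEC msdb.dbo.spDeleteBackupRestoreHistory @RetentionDay = 30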


Wednesday, 29 July 2009

Importing Files to Variables

There is functionality in SQL Server 2005 to import files directly from disk into variables or tables.

This can be particularly useful for importing an xml document, or even an image file.

i.e. Text, Unicode text and binary data can be imported from a file using this method


Example 1 - Importing an xml document

DECLARE @xml xml

SELECT @xml = BulkColumn
FROM OPENROWSET(BULK N'c:\temp\Sample.xml',SINGLE_BLOB) AS a

SELECT @xml


Example 2 - Importing a text file

DECLARE @text VARCHAR(max)

SELECT @text = BulkColumn
FROM OPENROWSET(BULK N'c:\temp\Sample.xml',SINGLE_CLOB) AS a

SELECT @text



Example 3 - Importing an image

DECLARE @myImage VARBINARY(MAX)

SELECT @myImage = BulkColumn
FROM OPENROWSET(BULK N'c:\temp\test.jpg',SINGLE_BLOB) AS a

You can then write the binary data to a table
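
For instance (the Images table here is purely hypothetical):

CREATE TABLE Images
(
imageId INT IDENTITY(1,1) PRIMARY KEY,
imageData VARBINARY(MAX)
)

INSERT Images (imageData) VALUES (@myImage)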


NB To import unicode text, you would use SINGLE_NCLOB into an NVARCHAR(MAX) data type

These are documented in Books On Line, so go take a look...

Tuesday, 28 July 2009

EXCEPT & INTERSECT

A couple of handy new operators are available in SQL Server versions from 2005 onwards: EXCEPT and INTERSECT.

Whilst not offering any new functionality, they do allow simplicity in writing T-SQL code.

The EXCEPT operator can be used to return records in one table and not in another (much like an outer join with an IS NULL WHERE clause)

The INTERSECT operator will return the overlapping rows from two tables.

Both these operators behave in the same manner as a UNION clause.

Below is an example of using the EXCEPT statement, alongside two equivalent formulations:

SELECT contestId
FROM Contest
EXCEPT
SELECT contestId
FROM PlayerContest
--(30%)



SELECT contestId
FROM Contest C
WHERE NOT EXISTS (SELECT 1 FROM PlayerContest PC WHERE C.contestId = PC.contestId)
--(30%)


SELECT C.contestId
FROM Contest C
LEFT JOIN PlayerContest PC
ON C.contestId = PC.contestId
WHERE PC.contestId IS NULL
--(40%)


All three statements return exactly the same results, though in terms of performance the first two are more efficient than the last one. The relative percentage cost is shown in brackets after each statement.


The INTERSECT statement works in much the same way, and full details of both operators can be found in BOL.
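
For example, to return only the contests that do have player entries, using the same tables as above:

SELECT contestId
FROM Contest
INTERSECT
SELECT contestId
FROM PlayerContest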

Thursday, 28 May 2009

Fragmentation in SQL Server 2008

Here's a cool script I came across - similar to SHOWCONTIG, but without being all invasive and locking your tables.



select *
into dba..contig
from sys.dm_db_index_physical_stats(0, null, null, null, null)

select
    (select name from sys.filegroups fg where fg.data_space_id = si.data_space_id) as Filegroup,
    OBJECT_NAME(c.object_id) as TableName,
    si.name as IndexName,
    c.index_type_desc as Type,
    c.avg_fragmentation_in_percent as FragPC,
    (c.page_count * 8.0) / 1024.0 / 1024.0 as GB,
    c.page_count
from dba..contig c
join sys.indexes si
    on si.object_id = c.object_id and si.index_id = c.index_id
where c.page_count > 1
order by 1, c.page_count desc

Tuesday, 19 May 2009

Enforcing uniqueness on a combination of columns in different tables

Given the above schema (Lobby, LobbyTableSet and TableSetGameType), I had the need to create some kind of constraint that would prevent a particular gameTypeId being linked to more than one lobby. I thought of a trigger, or perhaps creating a new LobbyGameType table, but this would require manual updating.

The solution I found was to create a new view with a unique index on it, and happily this works in Standard Edition too!

These are the steps I took:
  1. CREATE VIEW vwLobbyGameType
    WITH SCHEMABINDING
    AS
    SELECT L.lobbyId, TSGT.gameTypeId
    FROM dbo.Lobby L
    INNER JOIN dbo.LobbyTableSet LTS
    ON L.lobbyId = LTS.lobbyId
    INNER JOIN dbo.TableSetGameType TSGT
    ON LTS.tableSetId = TSGT.tableSetId

    GO

  2. SET QUOTED_IDENTIFIER ON
    SET ANSI_NULLS ON
    GO
    CREATE UNIQUE CLUSTERED INDEX [CIX_vwLobbyGameType]
    ON
    vwLobbyGameType(lobbyId,gameTypeId)
    GO
Upon trying to insert a record into TableSetGameType that would have resulted in a gameTypeId being linked to more than one tableSetId in one lobby, I got the following error:
Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.vwLobbyGameType' with unique index 'CIX_vwLobbyGameType'.
The statement has been terminated.

It works!!! :)

Monday, 29 December 2008

LEN() function gotcha

When using the LEN() function, be aware that before calculating the length of the passed string, SQL seems to perform an RTRIM() on the string, hence
LEN('ABCDE') = 5
and
LEN('ABCDE    ') = 5

You could consider using the DATALENGTH() function; however, as this returns the number of bytes, it will give double the number of characters if you pass in an NVARCHAR or NCHAR string.

I had thought that perhaps toggling ANSI_PADDING would have some effect on this, however that proves not to be the case.

As a workaround, I append a character ('|') to the end of the string, then take the length of the result minus 1.
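
i.e.

DECLARE @s VARCHAR(20)
SET @s = 'ABCDE    '

SELECT LEN(@s) AS TrimmedLen,        -- returns 5
       LEN(@s + '|') - 1 AS ActualLen -- returns 9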

Also worth noting here: when comparing 2 strings, trailing blanks also seem to be ignored,
i.e. 'ABCDE' = 'ABCDE    '

However, please post if you have a better solution.

Thursday, 4 December 2008

Determine Which Log To Restore

If your log shipping fails, you can usually fix it with a full database restore, or even a differential; though if your database is a couple of terabytes and not in the same building, it could be problematic transferring that much data.

Rather than recreating the log shipping chain, this method will demonstrate how to rectify a chain that's gone a bit bojiggity. Providing the database on the standby hasn't been recovered or corrupted, and you still have all the log backups created, this should work.


-- get last restored log or backup for the poker database
select top 1 * from
msdb..restorehistory
where destination_database_name = 'poker'
order by 2 desc


Pay attention to the backup_set_id field; this is our reference to the actual log restore and the LSN information that we need to determine the next log to restore.

-- backup_set_id is the value from the above query
select * from msdb..backupset
where backup_set_Id = 2


Here you need to look out for backup_start_date and backup_finish_date, then locate a log backup that occurred after these dates. If you name the backups with the current date and time it makes locating them a lot easier; otherwise you need to get the Windows file information.

Once you've located it, you need to manually restore that log:

restore log Poker
from disk = '\\server1\d$\Logs\LOG_Poker_20081117112135.bak'
with norecovery, stats =5


Now, if you're using the LogShipRestorer procedure mentioned previously, you can simply run that and it will apply any subsequent logs; otherwise you need to apply them manually.

And there you go.

Log Shipping

Previously, I posted a query that can be used to determine the filename, datetime and LSN information for the last (or any) applied log. The following is a stored procedure that can be used as an alternative to the DTS method we've used, whereby a single log is applied at any one time. This method allows successive log backups to be taken even though one may not have been applied.

In this method we create 2 jobs, the first on the principal server, the second on the standby; their purpose is to back up and restore logs independently of each other.

Job #1: Principal - Every 15 minutes create a log backup to a network share
Job #2: Standby - Every 5 minutes execute spLogShipRestorer

Depending on the size and activity of your database you may want to change the intervals between each job; on our live database, which is creating 200MB logs every 15 minutes, we've decreased the interval of backups to 5 minutes.

The spLogShipRestorer definition follows:

CREATE PROCEDURE spLogShipRestorer @dbname SYSNAME, @root SYSNAME
AS

BEGIN

CREATE TABLE #logs
(
logname SYSNAME NULL
)

SET NOCOUNT ON

DECLARE @file SYSNAME, @cmd NVARCHAR(4000), @sbyroot SYSNAME,
@backupfile SYSNAME, @globalerror INT

DECLARE @date DATETIME, @lastfile SYSNAME, @last_lsn NUMERIC(25,0)

-- set the default STANDBY file location
SELECT @sbyroot = 'D:\DATA\MSSQL\Data\'+ @dbname + 'UNDO'

-- set the previously applied logs details from
-- the system tables for the specified database
SELECT TOP 1
@date = restore_date,
@lastfile = (SELECT TOP 1 physical_device_name FROM msdb..backupmediafamily mf WHERE mf.media_set_id = bs.media_set_id),
@last_lsn = last_lsn
FROM msdb..restorehistory rh WITH(NOLOCK)
JOIN msdb..backupset bs WITH(NOLOCK)
ON bs.backup_set_id = rh.backup_set_id
WHERE rh.destination_database_name = @dbname
ORDER BY restore_date DESC

-- parse out the filename and display it
SELECT @lastfile = right(@lastfile, LEN(@lastfile) - LEN(@root))
SELECT @lastfile AS LastFileRestored

-- create a console command to list backup files only
-- DIR :Order By Name: :No Header Information: :Archive Attribute:
SELECT @cmd = 'dir ' + @root + 'LOG_' + @dbname + '_*.bak /ON /B /AA'

-- execute the command and store results in a table
INSERT INTO #logs
EXEC master..xp_cmdshell @cmd

-- clean up the crap from the table created
-- by the DIR command, leaving only valid files
DELETE FROM #logs
WHERE logname IS NULL
OR logname <= @lastfile
OR logname LIKE '%file not found%'

-- if there are no logs to apply then quit
IF (SELECT COUNT(*) FROM #logs) = 0
BEGIN
SELECT 'Nothing To Do: No Logs To Restore'
RETURN 0
END

-- otherwise loop thru the #logs table
-- and apply each log in order
DECLARE C CURSOR LOCAL FAST_FORWARD FOR
SELECT logname FROM #logs ORDER BY logname

OPEN C
FETCH NEXT FROM C INTO @file

WHILE @@FETCH_STATUS = 0
BEGIN

-- build the full filename
SELECT @backupfile = @root + @file

-- display some information
PRINT 'Processing: ' + @file

-- now restore the log
RESTORE LOG @dbname FROM DISK = @backupfile WITH RESTART, NORECOVERY

-- buffer the error code
SET @globalerror = @@ERROR

-- do some error checking/handling
IF @globalerror <> 0
BEGIN
SELECT 'Error: ' + STR(@globalerror) as Error
BREAK
END

FETCH NEXT FROM C INTO @file
END

-- clean up the cursor
CLOSE C
DEALLOCATE C

-- set the database into standby
RESTORE DATABASE @dbname WITH STANDBY = @sbyroot

END
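
A job step on the standby then just invokes it; the share path below is illustrative:

EXEC spLogShipRestorer @dbname = 'poker', @root = '\\server1\d$\Logs\'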


The only downsides of this method come from having to manually apply the first log after a full or differential backup has been applied, or when the log chain is broken. But it does mean you have all the log backups retained, so you do not need to perform a full database restore to fix it.

The next post will demonstrate how to determine which log needs to be applied next.

Deleting From Computed Columns

You may have noticed that deleting from a table with a computed column works fine in Query Analyzer, yet when it's a scheduled job it fails. This is because you need to SET ARITHABORT ON before executing the query as a job. By default Query Analyzer has that option enabled under Tools..Options, though that is specific to your session. The actual SQL Server itself, when referenced from a job, has it set to OFF.
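
So the job step becomes something like this minimal sketch (the table and criteria are hypothetical):

SET ARITHABORT ON

DELETE FROM dbo.MyTableWithComputedColumn
WHERE id = 1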

Tuesday, 2 December 2008

Set SQL Agent Job Logs

You can use the following script to alter the output/log path for each step in every job scheduled in SQL Server Agent. It sets the output filename to be the log path + the job name + the step name.


update s
set output_file_name = 'C:\logs\ldsql2000\sqlagent\'+(select name from msdb..sysjobs j where j.job_id = s.job_id)+' '+step_name+'.txt'
from msdb..sysjobsteps s

Changing Ownership of SQL Agent Jobs & Schedules

If jobs or schedules are owned by a user account and that account is deleted, the job or schedule will fail to run.
When creating a new job in Management Studio, by default it will make the currently logged-in user the owner of that job. I suggest that on job creation the owner should be changed to sa, as that account is likely to always be available.
I have not yet found any security implications from having jobs owned by sa, but please comment if you are aware of any.

If you inherit a number of jobs/schedules not owned by the sa account, there are a couple of scripts below to generate the proc calls to change the owners of the offending jobs & schedules. (This is for SQL2005 upwards)

-- Change job owners to sa
SELECT 'EXEC msdb.dbo.sp_update_job ' +
' @job_name = ' + char(39) + sj.[name] + char(39) + ',' +
' @owner_login_name = ' + char(39) + 'sa'''
FROM sysjobs sj
LEFT OUTER JOIN sys.server_principals sp
ON sj.owner_sid = sp.sid
WHERE sp.name != 'sa' OR sp.name IS NULL
ORDER BY 1

-- Change schedule owners to sa
SELECT 'EXEC msdb.dbo.sp_update_schedule ' +
' @schedule_id = ' + CAST(s.schedule_id AS VARCHAR) + ',' +
' @owner_login_name = ' + char(39) + 'sa'''
FROM sysschedules s
LEFT OUTER JOIN sys.server_principals sp
ON s.owner_sid = sp.sid
WHERE sp.name != 'sa' OR sp.name IS NULL

Thursday, 30 October 2008

Service Control

You can use an extended stored procedure to manage the services that are running locally to your instance; this applies to SQL Server 2000 and above.

-- CHECK SQL Server Agent
EXEC xp_servicecontrol 'QUERYSTATE', 'SQLServerAGENT'
GO

-- STOP SQL Server Agent
EXEC xp_servicecontrol 'STOP', 'SQLServerAGENT'
GO

-- START SQL Server Agent
EXEC xp_servicecontrol 'START', 'SQLServerAGENT'
GO

I have a scheduled C# script that executes this every 30 minutes and sends an alert if the SQL Server Agent has stopped. Why a C# app? Well, you sure as hell can't schedule such a job with Agent, can you :)

Tuesday, 21 October 2008

Bit Masks

Rather than storing a bunch of settings as bit fields and COALESCING them or manually checking their values, using a BIT MASK will allow you to store them ALL in a single field, much like the STATUS field in sysdatabases. You can use the logical operators AND and OR on them.

First off, define what your values mean in the bit mask. In this example, I'm choosing a sequence of numbers that double each time; that way, the sum of any combination of values will be unique.

In this example, I'll use cheerleaders' vitals as the data. So, first I'll define what each value means. These should probably be stored in a table, but for this example I won't need to.


1 = IsFemale
2 = IsBlonde
4 = IsBrunette
8 = IsOverTwentyOne
16 = IsMale
32 = IsOverThirtyFive


Now, if we were looking for a Female who is Blonde, we're looking for the value 3. If we're searching for a bloke over 35, then that value is 48.

We can store and populate the cheerleaders table like this:

create table tblCheerleader
(
CheerLeaderID int identity(1,1) primary key,
[Name] varchar(50),
Stats varbinary(1000)
)

insert tblCheerLeader ([Name], Stats)
values ('Miss Scarlett', 1+2+8)
insert tblCheerLeader ([Name], Stats)
values ('Chantelle', 1+4+8)
insert tblCheerLeader ([Name], Stats)
values ('Jenna', 1+32)
insert tblCheerLeader ([Name], Stats)
values ('Bob', 32+16+4)


A quick SELECT proves that the data is stored as BINARY despite being passed in as a decimal sum.

CheerleaderID  Name           Stats
-------------  -------------  ----------
1              Miss Scarlett  0x0000000B
2              Chantelle      0x0000000D
3              Jenna          0x00000021
4              Bob            0x00000034

Now, how to query them: you're looking to AND the value of the attributes with the value stored in the database; the operation will return the same value you passed in if the result is true.

So, if we're looking for cheerleaders and the only attribute we care about is the person being blonde (value 2), then this works:

select * from tblCheerLeader where stats & 2 = 2

if we're looking for multiple attributes, say blonde and over 35

select * from tblCheerLeader where stats & 34 = 34

You can get more picky, next we're looking for females over 21 or males over 35

select * from tblCheerLeader where (stats & 9 = 9) OR (stats & 48 = 48)

That's it!

Thursday, 16 October 2008

DBCC ShrinkFile() & Logical Fragmentation

Ahhh, this is frightening shit.

What I'm doing is setting up a database with some test data so that I can simulate dropping a table and subsequently shrinking the database, then comparing the before and after results of DBCC SHOWCONTIG.

Create an empty test database, then create a couple of tables to hold a fair amount of test data; 2 tables are enough to demonstrate this.

create database testfrag
go

use testfrag
go

create table mytable1
(
f1 uniqueidentifier not null primary key clustered,
f2 varchar(5000),
f3 varchar(2000)
)

create table mytable2
(
f1 uniqueidentifier not null primary key clustered,
f2 varchar(5000),
f3 varchar(2000)
)

Now populate them with some data; run this statement for both tables (changing the table name for the second run).

declare @i int
set @i = 0
while @i < 10000
begin
insert mytable1(f1, f2, f3)
values (newid(), replicate('K', 4500), replicate('S', 2000))
set @i=@i+1
end

Ensure that they're perfectly *defragmented*

dbcc indexdefrag(0,mytable2, 1)
dbcc dbreindex(mytable2, '')

Check numbers from DBCC SHOWCONTIG

Output
----------------------------------------------------------------------
TABLE level scan performed.
- Pages Scanned................................: 1000
- Extents Scanned..............................: 125
- Extent Switches..............................: 124
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [125:125]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 1565.0
- Avg. Page Density (full).....................: 80.66%

Drop a table, and shrink the database

drop table mytable1
dbcc shrinkfile(testfrag)

Now check the numbers provided by SHOWCONTIG again

Output
----------------------------------------------------------------------
TABLE level scan performed.
- Pages Scanned................................: 1000
- Extents Scanned..............................: 128
- Extent Switches..............................: 127
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 97.66% [125:128]
- Logical Scan Fragmentation ..................: 98.80%
- Extent Scan Fragmentation ...................: 0.78%
- Avg. Bytes Free per Page.....................: 1565.0
- Avg. Page Density (full).....................: 80.66%

WTF? It's gone from perfectly defragmented to perfectly fragmented.
That's bad, right?

Thursday, 25 September 2008

Getting More From A Deadlock

Whilst on the subject of deadlocks, here is a cool way to capture a deadlock and extract more information from it, other than a SPID and an obituary! Note the use of the ROLLBACK in the error handler; this is present because the TRY/CATCH statement prevents the victim being killed, and without it you'll have a hung transaction.

BEGIN TRANSACTION
BEGIN TRY
EXEC spProcedure
COMMIT
END TRY

BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLineNumber,
ERROR_MESSAGE() AS ErrorMessage
ROLLBACK
END CATCH;

You could also have the transaction retry after a WAITFOR if it isn't time critical and the timeout on your application is set accordingly; then if it really does fail a second time, actually let it fail.
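
A retry might look something like this minimal sketch (the single retry and 5 second delay are arbitrary choices; spProcedure is the same placeholder as above):

DECLARE @retries INT
SET @retries = 1

WHILE (1 = 1)
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        EXEC spProcedure
        COMMIT
        BREAK
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK
        -- 1205 is the deadlock victim error
        IF ERROR_NUMBER() = 1205 AND @retries > 0
        BEGIN
            SET @retries = @retries - 1
            WAITFOR DELAY '00:00:05'
        END
        ELSE
            BREAK
    END CATCH
END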

Wednesday, 24 September 2008

Lock Contention

This morning's fiasco was interesting.

For no obvious reason, the number of deadlocks on the server increased; soon every other statement was throwing up a deadlock and knocking off the victim. All the update and delete statements specified one or two rows only, but for some reason SQL Server was escalating what should have been a ROWLOCK into a PAGE or TABLE lock. I imagine the optimizer decided this because the rows it was going to update existed in the same page, so it was more efficient to obtain a PAGE lock; though in a multiuser environment this was fatal for the least expensive query, which ended up the victim and was killed. To overcome this, the table hint WITH(ROWLOCK) ensured SQL Server chose a row lock for each statement. It's not often that 2 users are going to want to update the same record, though with data ordered by date it's reasonable to expect it to exist in the same page as another user's data; taking a page lock out on that prevents all users doing their own stuff to any record also located on that page. The same would apply for extents as well.

So..

UPDATE card_deck
SET spenthand = 6
WHERE dealer_id = @dealerid

Became..

UPDATE card_deck WITH (ROWLOCK)
SET spenthand = 6
WHERE dealer_id = @dealerid

But if you want to have more control over what SQL Server decides is the deadlock victim, you can add the following to your stored procedures for the lower priority tasks:

SET DEADLOCK_PRIORITY LOW

This tells the optimizer that the current session is the preferred deadlock victim.

As a side note, trace flags 1204 and 1205 will help identify which tables are causing deadlocks, but turn these flags off when you are done as they are heavy.
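
They can be enabled and disabled server-wide like so:

DBCC TRACEON (1204, 1205, -1)
-- reproduce the deadlocks, then check the SQL Server error log
DBCC TRACEOFF (1204, 1205, -1)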

Tuesday, 23 September 2008

Identifying a Failed Log Ship

In a previous post I demonstrated some SQL Server 2005 queries that would identify when a log shipping target was out of date; this post wraps those queries in a stored procedure that will alert you when the last applied log is older than the timeframe you specify.

IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[spCheckLog]') AND OBJECTPROPERTY(id,N'IsProcedure') = 1)
DROP PROCEDURE [dbo].[spCheckLog]
GO

create procedure spCheckLog
@database sysname,
@duration_in_mins int,
@debug bit = 0
AS

/*
Author: Alex Wilson
Date: 20080923
Version: 1.0

Purpose: Check the metadata for the last applied log; if it's considered too
old against the input parameters it will throw an alert

Usage: EXEC spCheckLog @database, @duration
EXEC spCheckLog 'poker', 20


*/

declare @Last datetime

select top 1
@Last = restore_date
from msdb..restorehistory rh with(nolock)
join msdb..backupset bs with(nolock)
on bs.backup_set_id = rh.backup_set_id
where rh.destination_database_name = @database
order by restore_date desc

if @debug = 1
begin
print 'Database Parameter (@database) : ' + cast(@database as varchar)
print 'Too Old Parameter (@duration_in_mins) : ' + cast(@duration_in_mins as varchar)
print 'Current Server Time : ' + cast(getdate() as varchar)
print 'Last Log Applied : ' + cast(@last as varchar)
end

if datediff(mi, @last, getdate()) >= @duration_in_mins
begin
print 'Sending Alert Email...'
/*send alert email here */
Print '...DONE'
end

Open Transactions

If you have lots of databases on a server that are all used together, say in cross-database queries, or perhaps bolted together via an application or web service, you are going to need at some point to identify deadlocks or open transactions across the entire server, not just individual databases. The following stored procedure makes that a bit easier.


IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[spShowOpenTransactions]') AND OBJECTPROPERTY(id,N'IsProcedure') = 1)
DROP PROCEDURE [dbo].[spShowOpenTransactions]

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

create procedure spShowOpenTransactions
AS

/*
Author: Alex Wilson
Date: 20080923
Version: 1.0

Purpose: Execute DBCC OpenTran For Each Database.
It uses sp_msforeachdb which may be removed from future versions

Usage: exec dba..spShowOpenTransactions
*/

exec master..sp_msforeachdb "print '?' + char(13) + replicate('-', 116); dbcc opentran ('?') with no_infomsgs; print char(13)+char(13) "

GO

Thursday, 18 September 2008

Preventing Statements Outside Transactions

In some cases you may need to prevent statements or sub-stored-procedures from being executed either outside of a transaction, or directly in code or Query Analyzer; thus ensuring that a developer doesn't accidentally create a scenario where a 2-phase commit can be breached. The following snippets will help with that.

-- ensure we are inside a transaction
-- otherwise throw a message and end
-- (the if-block is what you would place at the top of the sub-procedure)
BEGIN TRAN TransactionSample
if (@@trancount = 0)
begin
print 'This Procedure Must Be Called From a Transaction'
return
end
else
begin
print 'Called From Transaction'
end
COMMIT TRAN TransactionSample


-- ensure the statement is called from within
-- at least 1 stored procedure and not in QA or code
if (@@nestlevel = 0)
begin
print 'This Proc Must Not Be Called Directly!'
end

else

begin
print 'Called From Stored Procedure'
end


Of course, they can be combined :)

Wednesday, 17 September 2008

Tracking Log Shipping

Here are a couple of queries (they only apply to SQL Server 2005+) that allow you to track which log was applied and when, and which was the last log applied to a database.

-- list all the logs applied to a database
select
restore_date,
(select top 1 physical_device_name from msdb..backupmediafamily mf where mf.media_set_id = bs.media_set_id) as [File],
last_lsn
from msdb..restorehistory rh with(nolock)
join msdb..backupset bs with(nolock)
on bs.backup_set_id = rh.backup_set_id
where rh.destination_database_name = 'poker'
order by restore_date desc

-- this one just grabs the last log applied; same as above but restricted to one row
select top 1
restore_date,
(select top 1 physical_device_name from msdb..backupmediafamily mf where mf.media_set_id = bs.media_set_id) as [File],
last_lsn
from msdb..restorehistory rh with(nolock)
join msdb..backupset bs with(nolock)
on bs.backup_set_id = rh.backup_set_id
where rh.destination_database_name = 'poker'
order by restore_date desc