If you are interested in getting Microsoft Certified, now is a good time! I have two types of vouchers available to everyone who leaves a comment. The first type, good for the 13 exams listed below, gives you a 50% discount on the cost of the exam and a free retake if you fail the first time. It doesn’t get much better than that. The second voucher type doesn’t provide a discount, but it does give you a free retake, and it is good for all Microsoft Certification exams.

Leave me a comment with the exam(s) you are interested in taking, and I’ll send you the vouchers. Hurry, they expire June 30th, 2010!

Good Luck!

{ Comments on this entry are closed }

This issue was painful. It started off with a canceled release because the differential backup didn’t finish as expected. The idea was to perform a differential backup just before the release so I’d have a safety net in case something tragic happened. I had run a test diff a few hours earlier, and it finished in 35 minutes. There wasn’t a lot of activity on the database before the release, so I figured that, worst case, the pre-release diff would take under 1.5 hours. Boy, was I wrong!

After 2 hours the diff was showing only 7.5% complete, and the file was already 3x the size of my test diff from earlier. At that rate it wouldn’t finish for over 25 hours; a full backup only takes 11! Frustrated that I had to cancel the release, I was determined to figure out why this happened.

Differential backups back up every 64 KB extent that has changed since the last full backup. I was fairly confident that only a small amount of data had changed since the full backup. Running the script in Paul Randal’s post How much of the database has changed since the last full backup? confirmed that only 0.33% of the extents had changed.
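
As an aside, newer versions of SQL Server (2016 SP2 and later) expose this directly, without a script. A minimal sketch, run in the context of the database in question:

-- modified_extent_page_count tracks pages changed since the last full backup
SELECT
    SUM(allocated_extent_page_count) AS allocated_pages,
    SUM(modified_extent_page_count) AS changed_pages,
    100.0 * SUM(modified_extent_page_count)
          / SUM(allocated_extent_page_count) AS percent_changed
FROM sys.dm_db_file_space_usage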

After exchanging several tweets and emails with @PaulRandal, I remembered that the regularly scheduled full backup job had started before the release and that I had canceled it since I was going to be doing the diff. I didn’t think anything of it at the time, but that turned out to be the issue.

I reenacted the scenario against a smaller 2GB test database. Here is how it played out.

  • Full backup – 2GB file in 40 seconds
  • Differential backup – 1MB file in 0 seconds
  • Full backup canceled after 20 seconds
  • Differential backup – 1GB file in 55 seconds
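
For reference, a minimal sketch of the repro commands, assuming a database named test and a local backup folder (the cancel in step 3 was done by manually stopping the query):

-- 1. Full backup (resets the differential bitmap)
BACKUP DATABASE test TO DISK = 'C:\backups\test_full.bak' WITH INIT

-- 2. Differential backup - tiny, as expected
BACKUP DATABASE test TO DISK = 'C:\backups\test_diff1.bak' WITH DIFFERENTIAL, INIT

-- 3. Another full backup, canceled about 20 seconds in
BACKUP DATABASE test TO DISK = 'C:\backups\test_full2.bak' WITH INIT

-- 4. Differential backup again - now unexpectedly large
BACKUP DATABASE test TO DISK = 'C:\backups\test_diff2.bak' WITH DIFFERENTIAL, INIT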

The output from the second diff was:

Processed 241608 pages for database ‘test’, file ‘test_Data’ on file 1.
Processed 1 pages for database ‘test’, file ‘test_Log’ on file 1.
BACKUP DATABASE WITH DIFFERENTIAL successfully processed 241609 pages in 55.501 seconds (34.009 MB/sec).

Even after the second diff, Paul’s script said that only 4 out of 32000 extents had been modified. So the canceled full backup does something that Paul’s script doesn’t detect, and SQL Server thinks every extent the canceled backup touched has been modified.

This behavior doesn’t seem right to me. Ideally, a canceled full backup shouldn’t impact my ability to take a diff. Failing that, SQL Server should return an error (or at least a warning) when I attempt the diff, since the diff will be very large and may take longer than a full backup.

In case you were wondering, I did a test restore of the full backup plus the second diff, and it worked fine.

Thank you to Paul Randal and Kimberly L. Tripp for helping me with this issue.

{ 1 comment }

A huge annoyance for me the past few days has been that pressing Ctrl-F in Chrome wouldn’t bring up the find box until I clicked inside the content area of the browser window.

The problem, it turns out, has to do with KatMouse. I love KatMouse! It is a utility that lets you use your scroll wheel to scroll inactive windows. The problem was that scrolling the Chrome window with KatMouse would cause the window to go inactive.

The Fix:

  1. Open KatMouse Settings.
  2. Select the Classes Tab.
  3. Drag the target onto the Chrome window.
  4. Double-click the “Chrome_RenderWidgetHostHWND” class, clear the checkbox labeled “Window has wheel scrolling support?”, and click OK.
  5. Click OK to close the KatMouse Properties window.

You should now be able to scroll in Chrome without the window losing focus.

{ 18 comments }

I ran into a database earlier this week with a lot of stored procedures that look like this:

CREATE PROCEDURE updateUser
  @id int, @FirstName varchar(50), @LastName varchar(50), @EmailAddress varchar(50)
AS
SET NOCOUNT ON
 
UPDATE Users
SET FirstName = @FirstName,
    LastName  = @LastName,
    EmailAddress = @EmailAddress
WHERE id = @id
GO

Every table had similar generated CRUD procedures. If any data in the Users table changed, say a user updated their email address, this procedure would handle the update. Is this a good idea?

Aaron Alton recently posted about UPDATE statements. In his post, he explains why it is a good idea to use the WHERE clause to filter out rows that don’t need to be updated. I want to build on Aaron’s point and say that you should also avoid needlessly updating columns whose values haven’t changed. The reason? Nonclustered indexes.

Let’s take a look at the following table:

CREATE TABLE Users (
  id int,
  FirstName varchar(50),
  LastName varchar(50),
  EmailAddress varchar(50)
)
 
CREATE UNIQUE CLUSTERED INDEX cix_Users ON Users(id)
CREATE NONCLUSTERED INDEX ix_Users_FirstName ON Users(FirstName)
CREATE NONCLUSTERED INDEX ix_Users_LastName ON Users(LastName)
CREATE NONCLUSTERED INDEX ix_Users_EmailAddress ON Users(EmailAddress)
 
INSERT INTO Users
SELECT 1, 'John', 'Smith', 'jsmith@gmail.com'

If we were to call the updateUser procedure:

EXEC updateUser 1, 'John', 'Smith', 'jsmith@hotmail.com'

Even though the FirstName and LastName values aren’t changing, SQL Server will still update all of the columns, resulting in the nonclustered indexes on the FirstName and LastName columns being locked and updated.
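
One way to watch this happen is sys.dm_db_index_operational_stats. A rough sketch, run in the database that holds the sample Users table:

--leaf_update_count shows how many leaf-level updates each index has taken
SELECT i.name AS index_name, s.leaf_update_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('Users'), NULL, NULL) s
JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id

Compare the counts before and after a call to updateUser to see which indexes the update actually touched.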

If we were to run the following UPDATE statement instead, the nonclustered indexes on FirstName and LastName would not need to be updated.

UPDATE Users
SET EmailAddress = 'jsmith@hotmail.com'
WHERE id = 1

So, how do we fix the problem with our update procedure listed above? If we know that updating an email address is a common occurrence, we might create a separate procedure that only updates the EmailAddress column. Alternatively, we can use dynamic SQL to build the correct UPDATE statement for us.

Here is an example of how you could do this in a stored procedure:

CREATE PROCEDURE updateUser
  @id int, @FirstName varchar(50), @LastName varchar(50), @EmailAddress varchar(50)
AS
SET NOCOUNT ON
 
--Variables to hold the updated status
DECLARE @u_FirstName bit, @u_LastName bit, @u_EmailAddress bit
 
--Check to see which values were updated
SELECT
  @u_FirstName = CASE FirstName WHEN @FirstName THEN 0 ELSE 1 END
 ,@u_LastName = CASE LastName WHEN @LastName THEN 0 ELSE 1 END
 ,@u_EmailAddress = CASE EmailAddress WHEN @EmailAddress THEN 0 ELSE 1 END
FROM Users
WHERE id = @id
 
--If none of the values were updated return
IF (@u_FirstName = 0 AND @u_LastName = 0 AND @u_EmailAddress = 0) RETURN
 
DECLARE @sql nvarchar(4000)
 
SET @sql = '
DECLARE @first bit --for the first value
UPDATE Users SET
 @first = 1 '
 
IF @u_FirstName = 1 SET @sql += '
 ,FirstName = @FirstName '
 
IF @u_LastName = 1 SET @sql += '
 ,LastName = @LastName '
 
IF @u_EmailAddress = 1 SET @sql += '
 ,EmailAddress = @EmailAddress '
 
SET @sql += '
WHERE id = @id'
 
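--Print the generated statement so you can see exactly what will run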
PRINT @sql
PRINT ''
EXEC sp_executesql @sql,
  N'@FirstName varchar(50), @LastName varchar(50), @EmailAddress varchar(50), @id int',
  @FirstName, @LastName, @EmailAddress, @id
 
GO
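
A quick test, reusing the sample row from above (only the email address differs from what’s stored):

EXEC updateUser 1, 'John', 'Smith', 'jsmith@hotmail.com'

The PRINT output shows that only the changed column made it into the generated statement:

DECLARE @first bit --for the first value
UPDATE Users SET
 @first = 1
 ,EmailAddress = @EmailAddress
WHERE id = @id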

To verify which columns an UPDATE actually touches, this trigger prints a message for each column in the SET list:

CREATE TRIGGER t_Users
  ON Users
  AFTER UPDATE
AS
       SET NOCOUNT ON
       IF UPDATE(FirstName) PRINT 'FirstName updated'
       IF UPDATE(LastName) PRINT 'LastName updated'
       IF UPDATE(EmailAddress) PRINT 'EmailAddress updated'
GO
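
With the trigger in place, you can see the difference. Assuming the earlier call already changed the address to the hotmail one, changing it back touches only that one column:

EXEC updateUser 1, 'John', 'Smith', 'jsmith@gmail.com'
--Prints: EmailAddress updated

Calling the original version of the procedure would print all three messages, because every column appears in its SET list whether or not the value changed.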

To summarize, limiting the columns in the SET portion of the UPDATE statement will reduce locking, minimize index updates, and increase concurrency.

Let me know if I’ve left anything out.

{ 2 comments }

This Saturday, over 50 presenters are coming together to bring you more than 60 sessions on software development. Topics include .NET, SQL Server, iPhone, Java, SEO, Silverlight, and much more!

Portland Code Camp is a free community event being held this Saturday, May 30th at Reed College. For more information and to sign up, go to PortlandCodeCamp.org.

{ 0 comments }

Please join me at the Oregon SQL Developers Meeting, tomorrow, April 8th. Free pizza, lots of giveaways, and 2 great sessions planned. Many people will walk away with either a TechNet Plus Direct annual subscription (Estimated Retail Value $349) or a voucher for a Microsoft Certification exam of their choice (Estimated Retail Value $125).

Logistics:

Wednesday, April 8th, 2009 at 6:30 pm (Pizza and networking start at 6:00 pm)
Microsoft Portland Office – 10260 SW Greenburg Rd, Suite 600

Sessions:

Rob Garrison will present on Data Encryption and SQL Server, providing significant information that is not available in Books Online, including Message Authentication Codes and how to use them to support highly efficient searches of encrypted data.

Rob Garrison is the OLTP Data Architect for WebMD’s Health Services group. He has been working with SQL Server full-time since 1999 and in IT since 1985. He has extensive experience with SQL Server development and architecture in both the banking and healthcare industries.

Then, Arnie Rowland, SQL Server MVP, will discuss Microsoft’s ‘Thrive’ Program, a major initiative supporting economic stimulus efforts. The discussion will include how Microsoft’s initiatives may be of direct benefit to you in keeping your skills fresh.

Visit http://osql-d.sqlpass.org for more details.

Hope to see you there!

{ 0 comments }

This week, I ran into a query that did 770 GB worth of logical reads…on a 3 GB database!!!
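
For scale: logical reads are counted in 8 KB pages, so that works out to roughly 100 million page reads (770 * 1024 * 1024 / 8). An easy way to measure this on your own queries:

SET STATISTICS IO ON
--run the query, then check the logical reads count on the Messages tab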

Kimberly Tripp asked “Who’s job is it anyway?” Should database design and database coding be done by the Application Developer, the DBA, or should there be a separate “Database Developer” role? There are many variables that make this a complicated question. Here are a few of my thoughts:

The Application Developer

PRO: Database development is part of the overall application development process. Basic T-SQL syntax is pretty easy for application developers to pick up, and most should already know how to write SELECT statements. Also, application developers already have to be familiar with the requirements of the application.

CON: The best practices for good application code don’t translate well into good database code. Application developers are trained to break everything up into small, manageable steps. When they need to do something in the database, the natural tendency is to try to use the tools that work well on the application side. This often results in things like cursors, while loops, temp tables, UDFs, and lots of small steps. Database development requires a very different mindset. Also, SQL Server is a huge product. It’s hard enough for someone dedicated to SQL Server to keep up with and master it, let alone someone who is trying to do application development as well.

The DBA

PRO: DBAs work with databases. They understand indexes and are familiar with database tools like SSMS, SSIS, and Profiler. They know that cursors are not the answer to everything.

CON: Most (not all) DBAs are, as the name implies, focused on administration and don’t write code. DBAs are responsible for database uptime and stability, which may get in the way of meeting development deliverables.

The Database Developer

PRO: Dedicated database developers are focused on writing good database code. They should be good at thinking in terms of set-based logic, and be familiar with the various tools that SQL Server provides. In theory, this should result in higher quality database code in less time.

CON: Many companies don’t have enough work to justify a full-time person dedicated to database development. Even if they do, the work is likely spread across multiple projects, which requires the database developer to keep track of multiple sets of requirements and deal with competing priorities.

Other Thoughts

While dedicated database developers should, on average, write better code, it doesn’t always matter whether a query runs in 1-2 seconds or in 100-200 ms. The less efficient code may be “good enough.”

On the other hand, most development is done on a very small subset of data. A good database developer will know how the query will react as the amount of data increases. Will it perform the same, degrade linearly, or degrade exponentially as rows are added?

If you plan on scaling your application, it’s worth it to make sure your database development is done right. To me, it doesn’t matter if the work is done by the application developer, the DBA, or a dedicated database developer. Just use the person who does the best job.

I’d love to hear your thoughts.

{ 4 comments }

For you T-SQL experts out there, Adam Machanic just posted a challenge with a great prize. The best submission wins a full MSDN subscription valued at around $10,000.

Check it out!

{ 0 comments }

In this post, I’m going to share how I created the custom links back to my blog as seen in my syndicated posts on SQLServerPedia.com.

I’m syndicating all of my SQL Server related posts over at SQLServerPedia.com. Since I sometimes blog about topics other than SQL Server, I didn’t want to syndicate my entire feed. Fortunately, WordPress creates feeds at the category level. Just add “/feed/” to the end of your category page URL to get to the category feed. The feed for my SQL Server related posts was located at http://robboek.com/category/sql-server/feed/.

With the category feed URL in hand, I created a feed on FeedBurner that points to the category feed; that FeedBurner feed is the one I gave out for syndication. This enabled me to add special FeedFlare that only shows up on my syndicated posts.

Creating custom FeedFlare is easy. It’s just an XML file with some custom tags that you upload to your site. You then put the URL of the XML file into FeedBurner to add your custom FeedFlare to your feed. For more information, see the FeedFlare Developer Guide.

I created two FeedFlare units. The first just adds “Originally posted at robboek.com” and links to my blog. The second is a permalink to the original blog post.

robboek-com.xml

<FeedFlareUnit>
    <Catalog>
        <Title>robboek.com</Title>
        <Description>Originally posted at robboek.com</Description>
    </Catalog>
    <FeedFlare>
        <Text>Originally posted at robboek.com</Text>
        <Link href="http://robboek.com" />
    </FeedFlare>
</FeedFlareUnit>

post-permalink.xml

<FeedFlareUnit>
    <Catalog>
        <Title>Permalink</Title>
        <Description>Adds the permalink of posts to help verify as the original source</Description>
    </Catalog>
    <FeedFlare>
        <Text>Permalink: ${link}</Text>
        <Link href="${link}" />
    </FeedFlare>
</FeedFlareUnit>

{ 1 comment }

I was tagged by Brent Ozar in a meme started by Denis Gobo. I’m supposed to post about my first computer and favorite games.

The first computer I can remember using was my uncle’s Amiga 500. I remember playing Little Computer People with my cousin.

The first computer my family owned was a Hyundai 8088-based IBM clone. It had a green monochrome display, 640K of RAM, a 5.25” 360K floppy drive, and a 10 MB hard drive!

Growing up, I always got in trouble on my dad’s computers because I kept breaking things every time I reconfigured the config.sys and autoexec.bat files to free up enough memory to play MS-DOS games.

The first computer I owned was a 75 MHz Pentium that I built in 1994. I remember doing a lot of desktop publishing work and mowing lawns to save money for it.

I was a big fan of adventure games growing up. The game that got me hooked was Maniac Mansion. I also liked Zak McKracken and the Alien Mindbenders, the King’s Quest series, and the Space Quest series, but my all-time favorites are definitely the Monkey Island games.

(Box art: Maniac Mansion, Maniac Mansion: Day of the Tentacle, The Secret of Monkey Island, Monkey Island 2: LeChuck’s Revenge, The Curse of Monkey Island, Escape from Monkey Island)

Other games I remember playing a lot are Captain Comic, Commander Keen, and Duke Nukem.

That’s about it for the really old stuff. I’ve played a lot of games since, but cut way back after having kids.

Let me know about your first computer and favorite old games!

{ 8 comments }