SharePoint 2010/2013 and capacity planning for TempDB & TempDB log files.

Hi,

I’m not sure how many of you know this; it’s not something I’ve read much about in the literature available from Microsoft on SharePoint scalability and sizing.

So I thought I would share my experience: you can run into real issues on large-scale SharePoint 2010/2013 platforms if you haven’t sufficiently sized your TempDB and the transaction log that goes with it.

I know Microsoft recommends setting all databases to AutoGrow, but none of us wants to over-allocate the storage required for our SharePoint databases, and even with AutoGrow you can hit the same problem once the LUNs holding your TempDB and TempDB log fill up.

On medium to large SharePoint deployments it’s generally best practice to ensure your TempDB and its transaction log run on their own LUNs with reasonable I/O (FC SAN, or even DAS SSDs). This prevents I/O contention between the content databases and the SharePoint stored procedures that manipulate data via temp tables.
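If you’d rather pre-size TempDB than lean on AutoGrow, you can grow the files up front once they’re on their own LUNs. A minimal sketch, assuming the default logical file names (tempdev/templog), the sqlps module for Invoke-Sqlcmd, and a hypothetical server name; adjust sizes and growth increments to your own layout:

# Pre-size TempDB data and log files so AutoGrow is a safety net, not the plan.
# Assumes the default logical file names - check yours first with:
#   SELECT name, physical_name FROM tempdb.sys.database_files
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 51200MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 20480MB, FILEGROWTH = 1024MB);
"@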

I don’t know if you’ve ever looked into the stored procedures SharePoint uses to retrieve data or carry out operations within timer jobs, search, or general DB read operations, but many of these procedures make extensive use of temporary tables and cursors to work with data.

Thus, once your environment starts to get larger (millions of items or more), you can start to run into significant demands on TempDB and TempDB log space, including random I/O requirements to complete maintenance and crawl update operations, all in addition to maintaining normal SharePoint operation.

Even standard SPQueries against the content DBs require TempDB space. If your TempDB or TempDB log fills up, or the LUNs they are on become full, then many parts of SharePoint simply stop working until you resolve the issue.
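If you want to see where that space is actually going, the TempDB space DMVs break usage down into internal objects (cursors and work tables) versus user objects (temp tables). A rough monitoring sketch, again assuming the sqlps module and a hypothetical server name:

# How much of TempDB is free, and what is consuming the rest?
# internal_object_* = cursors / work tables, user_object_* = temp tables
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "tempdb" -Query @"
SELECT  SUM(unallocated_extent_page_count)       * 8 / 1024 AS FreeMB,
        SUM(user_object_reserved_page_count)     * 8 / 1024 AS TempTableMB,
        SUM(internal_object_reserved_page_count) * 8 / 1024 AS CursorWorkTableMB,
        SUM(version_store_reserved_page_count)   * 8 / 1024 AS VersionStoreMB
FROM    sys.dm_db_file_space_usage;
"@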

One particular problem I have encountered on SharePoint 2010 can occur during a full crawl. A table in the CrawlStoreDB (dbo.MSSCrawlUrlChanges) appears to track crawled items for reporting purposes, storing a TimeStamp (DateTime) and Status (int) with the associated item DocID (int). This table has two indexes: one non-clustered on DocID (int), the other clustered on TimeStamp (datetime).

Now, in a system with many millions of items this table can get rather large when you run a full crawl; I have seen it reach millions of rows and tens of gigabytes of space. The stored procedure that updates this table, proc_MSS_CrawlReportPreprocessChanges, attempts to load all records older than a particular time into a cursor (which means TempDB activity), then processes this cursor within a transaction and inserts results as appropriate into a further temp table (more TempDB activity).

Finally, it clears down dbo.MSSCrawlUrlChanges and inserts all the changes from the temp table back into the CrawlStoreDB table dbo.MSSCrawlUrlChanges.

This is all carried out under a single transaction, so it will hold space in the TempDB log files for the duration of its execution.
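Before kicking off a full crawl on a large farm, it’s worth checking how big this table already is and how much headroom the TempDB log has. A sketch using a hypothetical crawl store database name (yours will typically carry its own GUID suffix) and server name:

# Current size of the crawl-change tracking table in the crawl store DB
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "Search_CrawlStoreDB" `
    -Query "EXEC sp_spaceused 'dbo.MSSCrawlUrlChanges';"

# TempDB log size and percentage used - worth watching while the crawl runs
Invoke-Sqlcmd -ServerInstance "SQL01" -Query "DBCC SQLPERF(LOGSPACE);" |
    Where-Object { $_.'Database Name' -eq 'tempdb' }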

These patterns of temporary table and cursor usage can have a real impact on the amount of space you need in both TempDB and its associated transaction log.

This problem can be exacerbated if you also run DBCC checks and index rebuilds as maintenance jobs on your SQL servers that happen to coincide with these TempDB-heavy application operations, as DBCC and index rebuilds also make significant demands on TempDB.
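Usefully, DBCC CHECKDB can estimate in advance roughly how much TempDB space it will need, which helps when scheduling maintenance around crawls. A sketch, with a hypothetical content database name:

# Estimate the TempDB space DBCC CHECKDB would need, without running the full check
Invoke-Sqlcmd -ServerInstance "SQL01" `
    -Query "DBCC CHECKDB ('WSS_Content_Portal') WITH ESTIMATEONLY;"

Index rebuilds mainly hit TempDB when SORT_IN_TEMPDB is ON, so that setting is worth checking in your maintenance plans too.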

I’ve seen these operations fail on systems with a 100-150GB TempDB and 20GB+ transaction logs.

I’m trying to work out some metrics that relate TempDB and TempDB log size to SharePoint content size and item counts, to provide a bit of guidance. So far I’ve only found this blog article, which suggests TempDB should be 25% of your largest content DB, but it doesn’t cover the relationship between TempDB size, content DB size, and TempDB log file size, which is crucial if SharePoint is to stay online. I’ll update this article once I have something more specific.
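In the meantime, the 25% heuristic is at least easy to script against your own farm. A sketch using the SharePoint cmdlets (DiskSizeRequired reports bytes; treat the 25% figure as a starting point, not a guarantee):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Find the largest content DB and apply the 25% rule of thumb
$largest = Get-SPContentDatabase | Sort-Object DiskSizeRequired -Descending | Select-Object -First 1
$tempDbGB = [Math]::Ceiling($largest.DiskSizeRequired / 1GB * 0.25)
Write-Host ("Largest content DB: {0} ({1:N0} GB) -> suggested TempDB size: {2} GB" -f `
    $largest.Name, ($largest.DiskSizeRequired / 1GB), $tempDbGB)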

Thanks for reading.

SharePoint 2010 SP1 Dec 11 CU – Issues with Office 2003

I know Microsoft no longer offers support for Office 2003 (unless you have an extended support agreement), but as we all know there are still lots of places out there, especially larger enterprises, using Office 2003 on Windows XP.

Microsoft lists Office 2003 as supported by SharePoint 2010, and the document here http://go.microsoft.com/?linkid=9690494 suggests it offers a “Good” level of interoperation with SharePoint 2010.

However, here’s a tip based on an issue we encountered during a SharePoint 2010 upgrade from SP1 to the Dec 11 CU: if you have Office 2003 clients, don’t upgrade your SharePoint 2010 instances past SP1 if you have document libraries with multiple content types associated (later versions of Office, 2007/2010, work fine).

When you attempt to save a document into such a library for the first time, you’ll get this error message:

this.frm.ctNameToId is null or not an object

It appears that as part of the SP2010 Dec 11 CU, some changes were made to the JavaScript files BFORM.js and BFORM.debug.js in the TEMPLATE\LAYOUTS\1033 folder of the 14 hive.

These changes cause the web page rendered in the Office 2003 “Save As” dialog (which lets you pick a content type for the file when it’s saved for the first time) to throw a JavaScript error, which then makes it impossible to complete the save operation.

I have compared the old and new versions of the function where the error is thrown, ChoiceFValidate(), and can see an extra few lines of code have been added compared to the earlier version; see below.

[Screenshot: ChoiceFValidate() in BFORM.js, SP1 Dec 2011 CU]

[Screenshot: ChoiceFValidate() in BFORM.js, SP1]

Why this would cause Office 2003 to break is not clear at present; it may be due to the form configuration we have on the content types on our document libraries.

Either way, we see the issue on SharePoint 2010 SP1 with the Dec 2011 CU, and the file change date/time for the BFORM.js file is 16/11/2011.
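A quick way to check whether a given farm server has the changed file is to look at the timestamp on BFORM.js in the 14 hive. A sketch, assuming the default install path:

# Check the BFORM.js timestamp in the 14 hive - the Dec 2011 CU version is dated 16/11/2011
$bform = "$env:CommonProgramFiles\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\1033\BFORM.js"
Get-Item $bform | Select-Object Name, LastWriteTime, Length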

So if you have Office 2003 and document libraries with more than one content type, stick on SP2010 SP1 until you can get your users onto a later version of Office. I might take this one up with Microsoft to see if they have a suggestion, and might also investigate whether a later CU from 2012 sorts the issue. I hope you don’t have the same problem we did.

Import-SPEnterpriseSearchTopology / Export-SPEnterpriseSearchTopology

Recently I was working on a way to totally automate our SharePoint application build-out, including farm provisioning. One area where this is tricky is deploying SSAs (Search Service Applications).

We already had a nice PowerShell-based farm build and code release process that let us easily build and configure our farms from scratch and deploy our SharePoint application onto multiple environment configurations, from dev (single server) through to production (multi-server farms).

However, one area that was hard to automate was the configuration of the SSA: the setup of crawl and query components, index partitions, and so on, as this is very different on small-scale dev farms compared to large-scale production farms.

We could have built a totally custom set of PowerShell scripts to do the job, but with a bit of simple token substitution and these built-in SharePoint commands you can deploy any SSA setup you like, using the XML file output by Export-SPEnterpriseSearchTopology as your starting template.
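As a rough outline of the approach (server names and token strings here are just placeholders; the Export/Import cmdlets are the SharePoint 2010 ones named above, so check the exact syntax on your build):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# 1. On a reference farm, export the SSA topology once to use as the template
$ssa = Get-SPEnterpriseSearchServiceApplication
Export-SPEnterpriseSearchTopology -SearchApplication $ssa -Filename "C:\Build\TopologyTemplate.xml"

# 2. Hand-edit the template, replacing concrete server names with tokens
#    such as #CRAWL01# and #QUERY01#, then keep it with your build scripts.

# 3. At deploy time, substitute this environment's server names for the tokens
$xml = (Get-Content "C:\Build\TopologyTemplate.xml") `
    -replace '#CRAWL01#', 'PRODAPP01' `
    -replace '#QUERY01#', 'PRODAPP02'
Set-Content -Path "C:\Build\Topology.xml" -Value $xml

# 4. Import the substituted topology into the target farm's SSA
Import-SPEnterpriseSearchTopology -Filename "C:\Build\Topology.xml"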

Specifics to follow later…