SharePoint 2013 Web Applications not creating on all servers in the farm, farm solutions not deploying…

Hi,

I figured this was worth sharing after coming across some rather odd behaviour during the provisioning of a new SharePoint farm.

Now before I start, I will say that it’s not me that builds & configures the servers.

So we use a fully scripted PowerShell setup for our SharePoint farm provisioning process: it gets a farm configured, joins servers to the farm, sets up our services, configures the search topology, and gets everything as we want for our system. In this particular environment the other servers all joined the farm without errors, but the web application in IIS was only getting created on one of the three WFE servers in the farm, rather than all of them. The server that was getting the IIS site created was the one where the PowerShell script was being run.

I know from experience that SharePoint creates its IIS web applications on other servers in the farm via a "Web Application Provisioning" timer job that runs on each server joined to the farm, so the problem had to be something to do with this job not running on all the servers. Usually a web application appears on all WFEs simultaneously, or at least within a minute or so of issuing the PowerShell commands from one server to create it.
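A quick way to rule out the obvious is to confirm the timer service is actually running everywhere. A minimal sketch (assumes it is run from a farm server and that remote service queries are permitted in your environment):

```powershell
# Check the SharePoint Timer service status on every server in the farm
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

foreach ($server in (Get-SPServer | Where-Object { $_.Role -ne "Invalid" }))
{
    $svc = Get-Service -ComputerName $server.Address -Name SPTimerV4
    Write-Host "$($server.Address): SPTimerV4 is $($svc.Status)"
}
```

In this case the service was running everywhere, so the cause had to lie elsewhere.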

So I checked the event logs looking for errors in the timer service on the other WFE servers: nothing. I looked in the ULS logs for errors from OWSTIMER.EXE: nothing. I scratched my head and slept on the problem. The next morning I was none the wiser, but checking the problematic environment I noticed that the missing web applications had appeared in IIS overnight while I was away.

Hmm. I put this one down to experience and moved on, thinking nothing more of it until we tried to deploy some custom farm solutions, again via PowerShell, on the same farm. On the WFE server the PowerShell was running on, the WSPs deployed straight away, DLLs GACed etc.; on the other two servers nothing was deployed to the GAC, and the farm solutions were stuck at the deploying phase…

WSPs are again deployed to all servers via a timer job process, so thinking about the two issues I started to wonder what the commonality between them was.

It was at this point, as I switched from server to server via Remote Desktop Connection Manager, that I spotted the problem: the time on the Windows desktop was different between servers. Somehow the time zone had been mismatched during the server builds between the WFE server the PowerShell scripts were running on and the other two WFE servers; the other two WFEs were two hours behind, on UTC+8 rather than UTC+10.

I adjusted the time zone of all the servers to match, and then WSP files started deploying correctly and IIS web sites were created without issue.

So it looks to me that job scheduling via the timer service doesn't use UTC; it uses local server time to decide when to run a job. If your servers have been configured with different time zones, you will get some strange behaviour from SharePoint timer jobs. A nice little thing to add to the farm server build QA checklist: ensure servers are all set to the same time zone :-)
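To make that QA check easy to script, something like this sketch (assumes WMI access to each server from where it is run) will flag any mismatched time zones:

```powershell
# Compare the configured Windows time zone across all farm servers
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$zones = foreach ($server in (Get-SPServer | Where-Object { $_.Role -ne "Invalid" }))
{
    $tz = Get-WmiObject -Class Win32_TimeZone -ComputerName $server.Address
    Write-Host "$($server.Address): $($tz.Caption)"
    $tz.Caption
}

if (($zones | Select-Object -Unique).Count -gt 1)
{
    Write-Warning "Time zones differ between farm servers!"
}
```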

Thanks for reading.

SharePoint 2013 Configuration Database Registry Key Corruption

Hi,

Not sure if you’ve ever suffered from this problem, but if you have it can be puzzling as to what is causing it.

From what I've found out, versions of SharePoint 2013 before the November 2014 CU have an issue that can cause the following registry key to become corrupt.

HKLM\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0\secure\ConfigDB

This causes your WFE or Application server to then act as though it has been removed from the SharePoint farm.

Any attempt to run PSConfig.exe (via PowerShell) or the Products and Technologies Configuration Wizard to join the server back to the farm ends with an error message, either on the console or in the PSCDiagnostics file in your logs folder, with a stack trace that looks similar to:

System.ArgumentNullException: Value cannot be null.
Parameter name: server
at Microsoft.SharePoint.Administration.SPSqlConnectionString.GetSqlConnectionString(String server, String failoverServer, String instance, String database, String userName, String password, String connectTimeout)
at Microsoft.SharePoint.Administration.SPSqlConnectionString..ctor(String connectionString)
at Microsoft.SharePoint.PostSetupConfiguration.InitializeTask.GetSetupType()
at Microsoft.SharePoint.PostSetupConfiguration.InitializeTask.Validate(Int32 nextExecutionOrder)
at Microsoft.SharePoint.PostSetupConfiguration.TasksQueue.Validate(Boolean useDefaultExecutionOrder)

What is happening here is that the details of the connection string to the config DB are lost, and the key in the registry ends up with just a "ConnectionTimeout=XXX" value rather than the full string, which should read something like "Data Source=SQLServer;Initial Catalog=SP_Config;Integrated Security=True;Enlist=False;Pooling=True;Min Pool Size=0;Max Pool Size=100;Connect Timeout=45".
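You can check the state of the key from an elevated PowerShell prompt; the connection string lives in the "dsn" value under the ConfigDB key:

```powershell
# Inspect the ConfigDB connection string this server is using
$key = "HKLM:\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0\Secure\ConfigDB"
$dsn = (Get-ItemProperty -Path $key -Name dsn).dsn

if ($dsn -notmatch "Data Source=")
{
    Write-Warning "ConfigDB dsn looks corrupt: $dsn"
}
else
{
    Write-Host "ConfigDB dsn looks intact: $dsn"
}
```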

I believe the issue is related to a non-thread-safe dictionary object being used to rebuild the connection string; this rebuild process can occur during an app pool recycle.

There appears to be a fix in the November 2014 CU that resolves the problem, so if you are experiencing issues with the registry key becoming corrupt, you have two options.

Either deploy the November 2014 CU or later, or, if this isn't feasible due to the complexities of updating multiple production servers, remove the write permissions of all users on the registry key, so they can read it but not update it.
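As a sketch of the permissions workaround (test this outside production first; the choice of "Everyone" as the principal here is illustrative, you may prefer to target specific accounts):

```powershell
# Deny write-style rights on the ConfigDB key so the corrupting rewrite can't happen
$key = "HKLM:\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0\Secure\ConfigDB"
$acl = Get-Acl -Path $key

# Keep existing entries as explicit rules, but stop inheriting new ones
$acl.SetAccessRuleProtection($true, $true)

# Deny value writes, subkey creation and deletion; reads still succeed
$rule = New-Object System.Security.AccessControl.RegistryAccessRule(
    "Everyone", "SetValue,CreateSubKey,Delete", "Deny")
$acl.AddAccessRule($rule)

Set-Acl -Path $key -AclObject $acl
```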

This latter step is a workaround, and may cause issues if you attempt to remove and re-add the server to another SharePoint farm in the future, but it will prevent the issue from recurring until you can update your SharePoint 2013 platforms with the appropriate CU or later.

Thanks for reading.

Performance Issues SharePoint 2013 – Things to look at

Hi,

Just thought I would share some experiences from dealing with intermittent performance issues I worked on recently with SharePoint 2013.

Currently I'm involved with migrating a very large SharePoint platform (10TB of data) from SharePoint 2010 to SharePoint 2013. The issue I encountered arose from problems being reported in our performance test environment, which we use to sign off the system from a performance perspective for production use.

I spent a few days diagnosing this problem before solving it. The main symptom we were experiencing was periodic (with no obvious pattern) HTTP requests to SharePoint that would normally be served in less than 1 second taking 10-30 seconds to respond. Most strange.

To add some perspective, the platform has many customisations: HTTP modules, custom site definitions, feature event handlers, list and document library event handlers, custom search, workflows, all sorts :-), so I had a lot of stuff to cover and rule out.

I would also add that before I started to dig into software as the issue, I had ruled out a hardware bottleneck: all the profiles etc. from our SQL servers showed no long-running transactions or SQL performance issues, memory/CPU usage on all servers was nominal, and there was no unexpected network or SAN utilisation.

So in my mind it had to be a software or configuration related issue.

Initially I investigated all our code customisations, gradually excluding them one by one. Still no joy: even with the environment stripped down to bog-standard SharePoint 2013, using a team site with the standard document library, the issue still occurred accessing HTTP views if we left the perf tests running long enough with enough concurrent users being simulated.

So, customisations ruled out, it’s got to be a config issue right?

We were seeing occasional errors in the ULS logs around the distributed cache (AppFabric Cache), and a bit of googling led me to quite a few people blogging about known issues with the AppFabric Cache version supplied in the pre-requisites for SP2013. None of these tied in exactly with the timings of the performance issues we were experiencing, but after reading the articles/blogs I decided it was prudent to update our AppFabric Cache to the latest version.

Microsoft provides CU1 of AppFabric Cache 1.1 on the installation media for SharePoint 2013, so this is what most of you will likely have installed.

I would strongly recommend that if you are deploying SharePoint 2013 to a large-scale multi-server production environment, you update your AppFabric Cache to the latest available from Microsoft, as the earlier versions do have issues. As of writing this article the latest version is CU5; see KB2932678.

I don't know why Microsoft chose not to ship updates to AppFabric Cache with SP2013 CUs or SPs.

One of the other problems our SharePoint farms were experiencing under performance test load, even after the AppFabric patching, was occasional timeouts being recorded in the ULS logs when dealing with the distributed cache. Again, Google to the rescue :-).

The default timeout for operations within AppFabric is 20ms, so my next step was to up those values: I moved them to 10 seconds, and increased the max buffer sizes from the defaults to 32MB. I found this script elsewhere on the net, but am adding it here for reference, updated with other finds from the URLs below; thanks to the sources for helping me out.

# Increase the logon token cache timeouts and buffer sizes
$settings = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
$settings.MaxBufferPoolSize = 1073741824   # 1GB
$settings.MaxBufferSize = 33554432         # 32MB
$settings.RequestTimeout = 10000           # 10 seconds
$settings.ChannelOpenTimeOut = 10000       # 10 seconds
$settings.MaxConnectionsToServer = 100
Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache -DistributedCacheClientSettings $settings

# Verify the new logon token cache settings
Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache

# Do the same for the view state cache
$settingsvsc = Get-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache
$settingsvsc.ChannelOpenTimeOut = 10000
$settingsvsc.RequestTimeout = 10000
$settingsvsc.MaxBufferSize = 33554432
Set-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache -DistributedCacheClientSettings $settingsvsc

# Verify the new view state cache settings
Get-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache

# Raise the security token service cache limits
$sts = Get-SPSecurityTokenServiceConfig
$sts.MaxServiceTokenCacheItems = 1500
$sts.MaxLogonTokenCacheItems = 1500
$sts.Update()

This resolved all the ULS log errors I was seeing, and general load test performance was better: no more errors in the ULS or event logs. But we still had periodic requests that were taking upwards of 20-30s to respond (the IIS logs confirmed these times).

You can also check out these two sites for further info on the subject:

http://habaneroconsulting.com/insights/sharepoint-2013-distributed-cache-bug#.VjiNhZVi-70
http://www.wictorwilen.se/how-to-patch-the-distributed-cache-in-sharepoint-2013

The next thing was to stop the distributed cache from blocking on garbage collection; I strongly suggest you do this. You need to change the config file for the distributed cache service, which under normal circumstances can be found at C:\Program Files\AppFabric 1.1 for Windows Server\DistributedCacheService.exe.config

Add the following section to the file.

<appSettings>
<add key="backgroundGC" value="true" />
</appSettings>

So it looks like

.....
</configSections>
<appSettings>
<add key="backgroundGC" value="true" />
</appSettings>
<dataCacheConfig cacheHostName="AppFabricCachingService">
.....

Don't forget to restart your cache service to pick up the changes. First stop the instance gracefully:

Stop-SPDistributedCacheServiceInstance -Graceful

Save the .config file

$instance = Get-SPServiceInstance | ? {$_.TypeName -eq "Distributed Cache" -and $_.Server.Name -eq $env:computername}
$instance.Provision()

The above updates can also be used to resolve issues where you see periodic re-authentication requests in SharePoint, combined with ULS errors around authentication token cache issues…

Alas for me the performance problem still persisted…

So where next… I started to suspect that the application pool running SharePoint was recycling at random intervals, but I was seeing nothing in the ULS logs or event logs to confirm this. I then used Perfmon to monitor the ASP.NET counter for application restarts, and lo and behold, when we hit a performance issue an application restart was occurring. So I started to investigate what was causing it: I'd already ruled out all our custom code by this point, and I saw nothing in the logs to explain what was going on, even after pushing the ULS logging level to VerboseEx, an undocumented level of detail even greater than Verbose.

Then I checked the IIS settings for the app pools running SharePoint 2013, to make sure they were configured to report all recycle events to the event log. They were…

At that point, though, a value in the advanced section of the app pool config caught my eye. I'd spotted the smoking gun: the private memory limit…

It seems that, for reasons I've yet to get to the bottom of, this value was different from our previous SharePoint 2010 platforms. On SharePoint 2010 the Private Memory Limit set on the IIS app pools is 0 (i.e. no limit); for some reason on our SharePoint 2013 kit, when the app pools are created via PowerShell scripts, the limit was set to 2GB, and if your app pool attempts to exceed this memory allocation it gets silently recycled. Bingo: I changed the value to 0 on SP2013 to match SP2010, and there were no more app pool recycles in Perfmon and no more requests taking ages to respond.

Is this new limit something to do with SP2013 being more cloud-focused and configuring itself OOTB with a memory limit more suitable to a multi-tenanted cloud-hosted environment? Or is it due to some other change somewhere in our server provisioning process? I'll never know 🙂

Of course you may want to work out a suitable private memory limit for your production platforms. If you have loads of RAM on your servers then just setting it to no limit should be OK; if your servers have limited RAM you could try doubling it to 4GB, rather than unlimited, if you are hitting the problem I encountered.
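For reference, here's a minimal sketch for auditing and clearing the limit via the WebAdministration module (the pool name is illustrative; the value is in KB, and 0 means no limit):

```powershell
Import-Module WebAdministration

# List the private memory recycling limit for every app pool on this server
foreach ($pool in Get-ChildItem IIS:\AppPools)
{
    $limit = (Get-ItemProperty "IIS:\AppPools\$($pool.Name)" `
        -Name recycling.periodicRestart.privateMemory).Value
    Write-Host "$($pool.Name): privateMemory = $limit KB (0 = no limit)"
}

# Remove the limit on a specific pool
Set-ItemProperty "IIS:\AppPools\SharePoint - 80" `
    -Name recycling.periodicRestart.privateMemory -Value 0
```

Remember to apply the change on every WFE, not just the one you're logged on to.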

Hope this proves useful to others.

Thanks

#SPC2014 – Post SharePoint Conference Summary

Hi all,

Well, after 4 days and many sessions, what's there to report?

One word: CLOUD. This was the theme being pushed across 90%+ of the sessions.

The next version of on-premises SharePoint is going to be released in 2015 (this was announced in the keynote). Not sure when in 2015 it's likely to drop, but watch this space! (I also heard rumours that SharePoint 2015 might be the last on-prem version of SharePoint?! A recent keynote by Bill Baer at the SP24 conference seems to have put this to bed; I will link to it once it's reposted. He mentioned that as long as there is demand, on-prem versions will continue to be developed by Microsoft post SP2015.)

Office 365 / SharePoint Online is now worth more in revenue per year than on-premises SharePoint. A lot of the new features being introduced are hitting Office 365 first, and some may not make it down to the on-premises version.

I think for SMEs Office 365 is great; I'm not so convinced for larger enterprises and corporates that may put a higher value on their data and prefer the control of it being on-prem.

Certainly from a cost perspective Office 365 / SkyDrive for Business looks very compelling.

Some of the size limitations of the cloud offering may preclude anybody with a large on-prem SharePoint estate from adopting cloud. For example, the 1TB per site collection limit could mean large data re-orgs to move to the cloud. In addition, if you have customised your SharePoint 2010/2013 on-prem solutions in any significant way, you may find there is no way to use the same customisations in the cloud.

I like (a lot) "Fort Knox", the new secure storage platform for SharePoint Online / Office 365 (it uses RBS technology and a custom Azure cloud storage solution). In total three encryption keys are required to decrypt any part of a shredded BLOB (one for the key store identifiers, one for the key store values, and one for each of the vaults that store your BLOB shreds). This makes your data very safe.

If I've understood the architecture correctly, there is one key used to encrypt the details of which vault keys from the cloud key store have been used to encrypt your BLOBs; these encrypted values are stored in the SharePoint databases. Another set of keys is stored in a centralised key store, and from that key store you obtain a further encryption key (per shred) that is used to access the BLOB shred from its at-rest location in an Azure storage vault.

This architecture lends itself to a possible SharePoint hybrid storage solution that would be very secure.
If you could use the "Fort Knox" RBS provider on-prem and your organisation owned the first key in this chain, it would be impossible for anybody outside your organisation to decrypt your data; not even Microsoft would be able to.

Obviously your organisation would need to keep this key safe, as if it were lost you would lose access to all your BLOB data, but it could provide a compelling hybrid architecture that could reduce the storage costs of on-prem solutions while still enabling the full customisations that aren't available in the cloud.

I had a post-session chat with a couple of representatives from Microsoft (who weren't giving anything away) as to whether this might become a feature. If it's something that interests your organisation then there's a chance Microsoft might make it available on-prem in the future; I think this would certainly be of interest to large enterprises and corporates that are looking to reduce storage costs and leverage cloud, but don't want to, or can't, move their farms into the cloud.

I'll write some more about other sessions as time permits.

Thanks for reading.

Office Web Apps 2013, Word Editor Problems with IOS 7 (iPad)

Hi,

Recently I’ve been looking at how we might be able to take advantage of Office Web Apps 2013, with SharePoint 2013 to provide a mobile device capability for the documents stored within our SharePoint systems.

It appears that the upgrade to iOS 7 causes problems for the Word Office Web Apps 2013 editor when using an iPad; I've tested iOS 6.1.3 and it does not have the same problems.

If you attempt to edit the content of a Word document via OWA 2013, every time you press a key on the iOS popup keyboard, the keyboard disappears and the letter you typed doesn't appear in the document you are editing.

Click Here for a video of the issue.

The PPTX and XLSX editors seem to work fine. I have tried an earlier version of iOS (6.1.3) on an iPad and it works correctly with the Word editor, so it definitely appears to be something going on between iOS 7 and OWA 2013. I have the August 2013 CU on OWA 2013.

I've raised a ticket with Microsoft; let's see what they come back with.

I have now had some response from Microsoft: they have confirmed the issue exists, and seem to suggest that both SkyDrive and on-premises SP2013+OWA2013 have the problem. However, from my testing I've only been able to reproduce the issue with on-premises SP2013+OWA2013.

I'm going to request that they create a hotfix for the problem. This will need to be escalated to the development team, who will look to see if it's possible to resolve the issue on the OWA 2013 server side, although they have said that if the problem is something to do with Safari on iOS 7 they may not be able to work around it, so we might have to go to Apple.

Thanks to Danny Mass, who has commented below; he has also been in touch with Microsoft in Canada, and from his correspondence and mine it appears that Microsoft are aware of the problem and are targeting a Dec 2013 CU/fix for SP2013/OWA2013 that will resolve the issue.

I still haven't had any official feedback from Microsoft UK on this; however, the December 2013 hotfix for Office Web Apps 2013 has been released (KB 2850013). The text of what is fixed doesn't mention anything about the iOS 7 Word editor problems.

However, I can confirm that after installing this KB onto my home lab environment, the issue with the Word editor on iOS 7 was resolved.

So if you are having this issue, download the update from here and install it on your Office Web Apps 2013 servers.

Thanks all for your comments and e-mails; we now have a resolution to our problem 🙂

SharePoint 2013 – Deferred Upgrade Option

Hi,

I've been doing a bit of work on this, and I wanted to dispel a couple of misconceptions I came across regarding this feature, which is now available with SharePoint 2013.

I’ve been looking at ways to optimise the Infrastructure required when attempting to upgrade from SharePoint 2010 -> SharePoint 2013.

The wise amongst us would say it's just a simple case of building out your new SP2013 infrastructure, moving your content databases across, and upgrading them.

Two thoughts here:

1. Capex costs: if you have a SharePoint 2010 estate with multiple farms across the globe, the cost of building duplicate infrastructure can be significant.
2. Data volumes: if you have tens of terabytes of content databases, moving them from one set of servers to another takes time, and the system needs to be offline while you do it.

If you google around the internet (I'm not going to name any names or provide links) you will find quite a lot of posts on what deferred upgrade is all about.

Some blogs/posts you might find tell you that deferred upgrade does not modify the SharePoint 2010 schema. Hmm, I thought that might help me out in terms of not needing to completely duplicate the SharePoint infrastructure: I could duplicate the WFEs of the SharePoint farm, then just drop the content DBs from one farm, add them to the other, and do a deferred upgrade on SP2013; if something were to go wrong it would be possible to re-attach the DBs to the old SP2010 farm as a rollback strategy.

After spending a few hours investigating this idea, I have concluded that some of the posts/blogs on the internet are misleading.

If you examine the contents of the dbo.Versions table in an upgraded SP2010 content database, even if you do not carry out the visual upgrade and choose to leave everything looking like SP2010 (deferred upgrade), the DB schema is definitely changed.

The dbo.Versions table gets an extra 120+ rows added to it, all with v15+ numbers.
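You can see this for yourself with a quick query (a sketch using Invoke-Sqlcmd; the server and database names are illustrative):

```powershell
# Count the v15 schema rows added to a content DB by the deferred upgrade
Invoke-Sqlcmd -ServerInstance "SQLServer" -Database "WSS_Content" -Query @"
SELECT COUNT(*) AS V15Rows
FROM dbo.Versions
WHERE Version LIKE '15.%'
"@
```

Run it against a copy of the database before and after the attach; on an untouched SP2010 database the count should be zero.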

If you then attempt to reconnect this database to an SP2010 farm, it no longer functions, due to schema incompatibility errors.

So, just for the record: the deferred upgrade definitely modifies the content database schema, making it incompatible with SP2010.

SharePoint Config Database Growth

Hi,

Not sure if you are aware of this, but every time a timer job runs in SharePoint it creates a record in the TimerJobHistory table stored in the config DB of your SharePoint farm.

SharePoint has an internal timer job, hidden from the Central Admin screens, that is supposed to clear this table down once a week, leaving at most 7 days of records.
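You can inspect the job's current settings via PowerShell (it won't show up in Central Admin):

```powershell
# Show the hidden job-history clear-down job's retention and schedule
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPTimerJob | Where-Object { $_.Name -eq "job-delete-job-history" } |
    Select-Object Name, DaysToKeepHistory, LastRunTime, Schedule
```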

However, if for some reason you build up too many records in this table, you can exhaust the transaction log on the config DB, and the clear-down job then fails.

Once this starts to happen it can go unnoticed for a while, and the records continue to build up. Your DBA might see an issue every week in the SQL logs that the transaction log has filled up on the config DB, but unless they know SharePoint, by the time they check, the job will have failed and released the TLog space, so the TLog will look normal again.

The problem will then compound itself: the job's default schedule is to run weekly, so if it exhausted the TLog this week, next week it will have even more records to delete and the same will occur.

To resolve this issue you can do one of two things.

The first is to increase the amount of TLog space available so the weekly job can clear down. However, if you have your databases set to AutoGrow, the first time you become aware of this problem may be when you run out of disk space on the volume holding the config DB TLog; you may also notice the amount of space your config DB consumes getting larger and larger.

The second is to change the setup of the clear-down job so that it doesn't attempt to delete too many records in one go. This can be done via PowerShell, not Central Admin, as the job is hidden from the CA screens.

Find below my PowerShell solution to the problem: it runs the job repeatedly, taking 5 days of history at a time and deleting it.

Finally, it sets the retention on the job to 3 days and the schedule to run daily, so that hopefully we won't have the problem again.

Check out the script below and see if it can help you if you have this problem.

There are two parameters you can modify:

$daysToKeep = 730
$daysToPurgeInOneLoop = 5

$daysToKeep

Use this to set how far back in days the job will start; this is designed so you can go back and clear up historical job records that haven't been cleared for some time.

$daysToPurgeInOneLoop

Use this to set how many days are purged on each iteration; you'll need to adjust this to make sure you don't overwhelm the config DB transaction log size you have set up, or run your storage out of space if your TLog is set to AutoGrow.

cls
Write-Host "Clearing Down Timer Job History"
$daysToKeep = 730
$daysToPurgeInOneLoop = 5

while ($daysToKeep -gt 0)
{
    # Fetch the hidden clear-down job
    $history = Get-SPTimerJob | Where-Object {$_.Name -eq "job-delete-job-history"}
    Write-Host " "
    Write-Host -NoNewline "Setting Days to Keep:"
    Write-Host -ForegroundColor Green $daysToKeep
    $history.DaysToKeepHistory = $daysToKeep
    $history.Update()
    Write-Host -ForegroundColor Green "Starting Purge Job"
    $lastTimeJobRan = $history.LastRunTime
    $history.RunNow()
    Write-Host -NoNewline -ForegroundColor Green "Waiting For Purge Job to Complete"
    $jobFinished = $false
    while ($jobFinished -eq $false)
    {
        Start-Sleep -Seconds 2
        $runningJob = Get-SPTimerJob $history.Name
        Write-Host -NoNewline -ForegroundColor Yellow "."
        # LastRunTime changes once the job has completed another run
        if ($lastTimeJobRan -ne $runningJob.LastRunTime)
        {
            $jobFinished = $true
        }
    }
    Write-Host " "
    Write-Host -ForegroundColor Green "Ending Purge Job"
    $daysToKeep = $daysToKeep - $daysToPurgeInOneLoop
}

Write-Host -ForegroundColor Green "Setting Final Job History Retention to 3 days, and schedule to run daily @ 5am"
$history.DaysToKeepHistory = 3
$history.Update()
$history.RunNow()
Set-SPTimerJob -Identity $history -Schedule "Daily at 05:00"
Write-Host -ForegroundColor Yellow "Please check row counts on dbo.TimerJobHistory Table in Config DB to ensure run complete"

SharePoint 2010 – SQL Timeouts

Hi,

If you notice many "SQL Timeouts" in the event logs of your SharePoint 2010 WFE servers, and you are running the SharePoint databases on a mirrored set of SQL servers, you may be encountering this problem.

There is a .NET Framework article/KB from Microsoft that says this is a known problem with .NET and mirroring; see KB 2605597, which is a hotfix to resolve the problem.

However, there is also a further article, KB 2600211, which has a section saying that the KB above is included in the .NET 4.0.3 update; see the note on the web page for the .NET 4.0.3 update.

Don't let this fool you into thinking that this update contains all of the changes in the original KB 2605597. It does not: the original KB has updated DLLs for .NET 4.0 and earlier versions of .NET back to v2.0, whereas if you apply the .NET 4.0.3 update you only get the System.Data.dll update for .NET v4.0 or later.

As SharePoint 2010 is built on Framework 3.5.1, you need to make sure you install the original KB 2605597, as this contains a new version of System.Data.dll for Framework 2.0, which is used by .NET v3.5.1 applications.
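A quick way to confirm which System.Data.dll your .NET 3.5.1 applications will actually load is to check the file version in the v2.0.50727 framework folders directly:

```powershell
# Report the System.Data.dll file version for the .NET 2.0 runtime
# (used by .NET 3.5.1 applications such as SharePoint 2010)
$paths = @(
    "$env:windir\Microsoft.NET\Framework64\v2.0.50727\System.Data.dll",
    "$env:windir\Microsoft.NET\Framework\v2.0.50727\System.Data.dll"
)

foreach ($p in $paths)
{
    if (Test-Path $p)
    {
        Write-Host "$p : $((Get-Item $p).VersionInfo.FileVersion)"
    }
}
```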

Hopefully this saves you some time installing the rollup KB and then not being able to work out why it hasn't fixed the problem.

A further update to this, 22nd Feb 2013: even after applying all the above fixes we still saw occasional SQL timeouts between the SharePoint WFEs and the SQL servers. Further diagnostics using DebugDiag pointed the problem squarely at System.Data.dll (part of .NET Framework 2.0); it suggested we were still suffering from the same problem, with SSPI causing timeouts during the initial connection attempt from WFE to SQL.

Given this, Microsoft recommended we deploy a further hotfix to .NET 2.0, KB 2784148. This was released in Dec 2012 and contains, amongst other things, a further update of System.Data.dll to v2.0.50727.7012.

It appears this latest version of System.Data.dll still hasn't resolved our timeout issues. Next step: time for some BID tracing, to see if that sheds any more light.

An update on this saga (now August 2013): even though we have been through another two sets of hotfixes for our timeout issues, they still haven't gone away completely. Working with Microsoft, a SQL Server scheduling problem has now been identified that appears to affect SQL 2008 R2, causing the SQL ring buffer to drop a connection sporadically when running on specific hardware; the infrastructure I have seen this issue on is HP DL380/DL580 Generation 6 or newer.

I'll post another update on this subject once Microsoft have worked out how we get around this problem, but if you have SQL timeouts with SharePoint and are using the later generations of HP servers, you could be experiencing the same issue.

Update: Microsoft have recently issued a new KB that is supposed to solve (or greatly reduce) this problem. It seems the underlying problem is with the operating system rather than SQL, so if you are running Windows 2008 R2 SP1 or Windows 7 SP1 this could be affecting you.

Please see the article here which gives instructions on how to apply the hotfix.

Content Database Creation Gremlins

Hi,

Recently we had some problems creating new content databases and adding them to SharePoint 2010 SP1.

From looking in the ULS logs I found the following.

10/22/2012 04:55:14.83 PowerShell.exe (0x477C) 0x0FB0 SharePoint Foundation Database 5586 Critical Unknown SQL Exception 2812 occurred. Additional error information from SQL Server is included below. Could not find stored procedure 'dbo.proc_GetDatabaseInformation'. aef39614-22b1-4cb2-9f8f-bfc624b9e7ba
10/22/2012 04:55:19.00 PowerShell.exe (0x477C) 0x0FB0 SharePoint Foundation Database 5586 Critical Unknown SQL Exception 208 occurred. Additional error information from SQL Server is included below. Invalid object name 'Groups'. aef39614-22b1-4cb2-9f8f-bfc624b9e7ba
10/22/2012 04:55:19.22 PowerShell.exe (0x477C) 0x0FB0 SharePoint Foundation PowerShell 6tf2 High Invalid object name 'Groups'. aef39614-22b1-4cb2-9f8f-bfc624b9e7ba
10/22/2012 04:55:19.23 PowerShell.exe (0x477C) 0x0FB0 SharePoint Foundation PowerShell 91ux High Error Category: InvalidData Target

After examining the content database we were trying to mount to SharePoint, I noticed something strange: some of the schema objects in the database, specifically the ones mentioned above, had been prefixed with the user account we used to run the PowerShell command rather than "dbo", hence the errors from SharePoint when attempting to mount the DB.

We traced this problem down to the fact that, for some reason, our DBA team had created the new content database with the default schema set to our admin user account rather than "dbo".

It seems there must be a minor bug in the scripts that the Mount command invokes when building the new DB schema. Most tables, procs, views etc. are created correctly regardless of the default schema setting of the containing database, but it appears a couple of the scripts that provision the schema into a new content database rely on the database's default schema value, using it as the prefix in the CREATE commands; thus, if the default schema isn't set to "dbo", you get the problem above.
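If you hit this, checking (and fixing) the default schema before mounting is straightforward. A sketch, with illustrative server, database and user names:

```powershell
# Check the default schema of the account that will run Mount-SPContentDatabase
Invoke-Sqlcmd -ServerInstance "SQLServer" -Database "WSS_Content_New" -Query @"
SELECT name, default_schema_name
FROM sys.database_principals
WHERE type IN ('S','U') AND name = 'DOMAIN\spadmin';

-- If default_schema_name is not 'dbo', correct it before mounting:
-- ALTER USER [DOMAIN\spadmin] WITH DEFAULT_SCHEMA = dbo;
"@
```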

One to look out for if you run into this in your environments.

Thanks.