Giving Dapper.NET Micro-ORM a Try

We all know that when we are looking for the best performance accessing databases in .NET, there is nothing faster than plain old ADO.NET DataReaders… but they can also make your code very verbose. Yes, you could build some helpers, but you could easily end up reinventing the wheel. That's what I loved about Dapper.NET – it is just a thin layer on top of ADO.NET, so you get the best performance, a lean implementation in your repositories (3 or 4 LoC per method), and full control of what is going to the DB engine. I am not against EF, but many times you don't know what it is doing under the hood… and when comparing performance, Dapper.NET is a clear winner. This is just another pragmatic library shared by the guys at StackExchange.
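
To give an idea of how lean the repository code gets, here is a minimal sketch of a Dapper-based repository method – the Product class, the Products table and the "ConnString" name are just illustrative placeholders of mine:

using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlClient;
using Dapper;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductRepository
{
    private readonly string _connStr =
        ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString;

    public IEnumerable<Product> GetByMinPrice(decimal minPrice)
    {
        using (var conn = new SqlConnection(_connStr))
        {
            // Dapper's Query<T> extension opens the connection if needed, runs the SQL
            // and maps each row to a Product - the whole repository method is 3-4 lines.
            return conn.Query<Product>(
                "SELECT Id, Name, Price FROM Products WHERE Price >= @minPrice",
                new { minPrice });
        }
    }
}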

Comparing the Performance of Azure Table Storage with Other Repositories

I have been using Azure Table Storage (ATS) in a couple of my personal projects, and I just love it. It is simple, the performance is decent, and the storage is quite cheap. A NoSQL key-value store like ATS is just perfect for storing lots of unrelated records such as audit and error entries; in our case, around 70% of our data falls into this category.

I had the perception that ATS was not that fast, but I did not notice much impact on the performance of the site – the audit and error reporting operations were asynchronous anyway. Probably the only major drawback during the project was ATS's poor API – I still cannot get over its anemic LINQ support.
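
To illustrate the point, this is roughly what a server-side filtered query looks like with the 2.x storage client library – filters composed from strings rather than real LINQ expressions. This is only a sketch, reusing the EmployeeEntity class that appears later in this post:

using System;
using Microsoft.WindowsAzure.Storage.Table;

public static class EmployeeQueries
{
    public static void ListEmployeesByPartition(CloudTable table, string partitionKey)
    {
        // Filters are built through string helpers instead of arbitrary LINQ predicates.
        var query = new TableQuery<EmployeeEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey));

        foreach (EmployeeEntity employee in table.ExecuteQuery(query))
        {
            Console.WriteLine(employee.RowKey);
        }
    }
}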

During the architecture definition of a new web-based project, I started to consider other options for data storage. This new project required a data model with far more relationships between entities – a productive API was key – although I wanted to stick with a NoSQL store for future-proof scalability.

One of the serious options we started to contemplate was MongoDB. I had played with it briefly in the past, but nothing in depth. I knew its LINQ support was phenomenal and that it can grow to massive scale thanks to sharding and replica sets, but what about performance? How would ATS read/write performance compare against MongoDB or Azure SQL?

I built a simple MVC 5 application and deployed it to an Azure Web Role (XS). Using the ATS .NET Storage Client Library 2.2, I built a simple page that writes 100 records to an ATS table, and another page that reads each one of the records in that same ATS table. The average latency of each write and read operation is displayed. I based my application on the tutorials and walkthroughs available from Microsoft; the idea was to build it with the techniques available out of the box, without any tuning.
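
The measurement itself was nothing fancy – essentially a Stopwatch around each storage call, along these lines (a sketch rather than the exact page code; EmployeeEntity is the table entity shown further below, and the keys are illustrative):

using System.Diagnostics;
using Microsoft.WindowsAzure.Storage.Table;

public static class AtsLatencyTest
{
    public static double MeasureAverageReadLatency(CloudTable table, int count)
    {
        var stopwatch = new Stopwatch();

        for (int i = 0; i < count; i++)
        {
            // Point reads by PartitionKey/RowKey; "Employees" and the row key are placeholders.
            TableOperation retrieveOp =
                TableOperation.Retrieve<EmployeeEntity>("Employees", i.ToString());

            stopwatch.Start();
            table.Execute(retrieveOp);
            stopwatch.Stop();
        }

        return (double)stopwatch.ElapsedMilliseconds / count;   // average latency in ms
    }
}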

I did the same thing with Azure SQL, provisioning the smallest database I could. I used plain old ADO.NET DataReaders to implement the operations.
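
For reference, the Azure SQL reads were implemented roughly like this – plain ADO.NET with a DataReader (a sketch; the table and connection string names are placeholders of mine):

using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlClient;

public static class SqlEmployeeReader
{
    public static List<string> ReadEmployeeNames()
    {
        string connStr = ConfigurationManager.ConnectionStrings["SqlConnString"].ConnectionString;
        var names = new List<string>();

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT Name FROM Employees", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));   // column 0 = Name
                }
            }
        }

        return names;
    }
}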

For MongoDB, I launched an Extra Small VM running CentOS Linux (1 GHz CPU, 768 MB RAM). MongoDB was deployed with default settings.

Naturally, all these resources were located in the same region (US West). This diagram summarizes the topology:

[Diagram: architecture of the test environment]

And…these are the results:

[Chart: average read/write latency results]

Yes, the performance of plain vanilla ATS is just disappointing. After some research I found a blog post with similar findings, which showed how to improve performance by turning off the Nagle algorithm before making the calls:

public static void InsertRandomEmployeeData()
{
    string connStr = ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString;

    // For increased performance, turn off the Nagle algorithm before making the calls
    // http://alexandrebrisebois.wordpress.com/2013/03/24/why-are-webrequests-throttled-i-want-more-throughput/
    ServicePointManager.UseNagleAlgorithm = false;

    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connStr);
    CloudTableClient client = storageAccount.CreateCloudTableClient();
    CloudTable table = client.GetTableReference("Employees");

    table.CreateIfNotExists();

    var emp = new EmployeeEntity(GenerateRandomInt(1, 10000), GenerateRandomString(0), GenerateRandomDouble(1.0, 100000.0));
    TableOperation insertOp = TableOperation.Insert(emp);
    table.Execute(insertOp);
}

The performance benefit is impressive – I wonder why this is not the default setting. Note how read operations were not affected by the change.

The performance of the Azure SQL operations was really good (under 10 ms on average), but the winner, as you can see, was MongoDB – impressive, with both operations under 2 ms!
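
For completeness, this is roughly what the equivalent MongoDB write and read looked like with the official C# driver of that era (the 1.x series) – a sketch, with the host, database and collection names being placeholders of mine:

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

public static class MongoEmployeeStore
{
    public static void InsertAndReadBack()
    {
        // 1.x driver style: MongoClient -> GetServer() -> GetDatabase() -> GetCollection()
        var client = new MongoClient("mongodb://my-centos-vm:27017");
        var database = client.GetServer().GetDatabase("perftest");
        var employees = database.GetCollection<BsonDocument>("employees");

        // Write: insert a single document
        employees.Insert(new BsonDocument
        {
            { "employeeId", 42 },
            { "name", "John Doe" },
            { "salary", 55000.0 }
        });

        // Read: point query on the employeeId field
        BsonDocument doc = employees.FindOne(Query.EQ("employeeId", 42));
    }
}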

Well, that was an eye-opener. It is pretty obvious what we are going to use for our next projects. Unfortunately, neither Azure nor Amazon Web Services offers a managed MongoDB service at this time, so I would need to set up and maintain my own set of VMs running MongoDB – not a big deal, but something I would need to pay for in addition to the storage.

Cheers, see you next time amigos.

Tip of the Month: Retrieving stock market data into your Google Docs-based spreadsheet

Yesterday I found this gem while creating a simple spreadsheet for a portfolio I am defining. Instead of copying and pasting stock values from one of the financial information providers, you can simply use the GoogleFinance(SYMBOL, ATTRIBUTE) function to get the latest indicators for the specified stock symbol.

[Screenshot: using GoogleFinance() in a Google spreadsheet]

Where the attribute can be one of:

  • price – Current price of the stock
  • closeyest – Last closing price of the stock
  • priceopen – Opening price of the stock for the current day
  • high – Daily high price of the stock
  • low – Daily low price of the stock
  • change – Change since the last posted closing price
  • changepct – Percentage change since the last posted closing price
  • high52 – 52-week high price for the stock
  • low52 – 52-week low price for the stock
  • eps – Calculated earnings per share
  • pe – Calculated price-to-earnings ratio (companies with negative earnings will not have a P/E ratio)
  • volume – Number of shares traded
  • marketcap – Market capitalization
  • tradetime – Time of the last trade
  • datadelay – Time delay of the data from Google’s servers
  • volumeavg – Average volume for the stock
  • beta – Beta value of the stock
  • shares – Outstanding shares of the stock

The GoogleFinance() function can also pull historical data, allowing you to track the performance of any stock across a certain time period. To show historical data, type =GoogleFinance(“symbol”, “attribute”, “start_date”, “num_days|end_date”, “interval”) into a spreadsheet cell.

You’ll find details about the syntax of this formula below:

  • “Symbol” and “attribute” follow the same rules as above
  • “Start date” is the day you’d like to start showing information from
  • “num_days” | “end_date” [Optional] can be either the end date for the time period over which you want to see historical data, or the number of days from the start date. Any number less than 50 is considered to be num_days. Otherwise it is considered an end_date.
  • “interval” specifies granularity at which the stock data is shown, either daily or weekly. You can enter either “Daily” or “1” if you would like daily information and either “Weekly” or “7” for weekly data.

You can find more information about historical data and mutual fund data in the Google spreadsheets help center.

It is sad to find out that your trusty old MS Excel does not provide this functionality the way the GoogleFinance() function does; Excel 2013 offers something along these lines, but it is not as flexible.

 

Deploying an Entire Environment using Azure and PowerShell, Part 2

In a previous post, I detailed how to automate the creation of a standard multi-server environment using the IaaS capabilities in Azure. Over the last few days I had the opportunity to enhance these scripts a bit; this second part of the post describes the enhancements.

Basically, the original scripts followed this set of steps:

  1. Define environment variables
  2. Create the cloud service
  3. Create the storage account
  4. Create VMs 1..n

Pretty simple, huh? Yeah, but the resulting environment presented some issues:

  • Are the VMs created close enough to each other (same subnet)? Most probably not.
  • What if machines are created in the same rack (Fault Domain) and there is a hardware issue? The whole environment is gone!
  • What if we have multiple WFEs? We would need to load balance them.

All these issues are resolved by introducing a few Azure concepts: Affinity Groups, Availability Sets, and Load Balancing. So I enhanced the scripts with these in mind, and the resulting steps are:

  1. Define environment variables
  2. Create a new Affinity Group (New-AzureAffinityGroup)
  3. Create the storage account (adding it to the new Affinity Group)
  4. Create the cloud service (adding it to the new Affinity Group)
  5. For each new load-balanced VM:
    1. Create the VM
    2. Add it to the cloud service
    3. Add it to the same availability set
    4. Create and attach disks as necessary
    5. Configure endpoints (firewall)
    6. Configure the load balancer and probe port

The resulting scripts are:

$ErrorActionPreference = "Stop"   # stop on any error

function GetLatestImage($family){
    $images = Get-AzureVMImage `
        | where { $_.ImageFamily -eq $family } `
        | Sort-Object -Descending -Property PublishedDate

    $latestImage = $images[0]
    return $latestImage
}

$myAzureSubscription = 'Windows Azure MSDN - Visual Studio Ultimate'

# Environment variables are defined here:
# ONLY LOWERCASE LETTERS HERE!!
$EnvironmentName = "azrtest"
$tag = get-date -format 'hhmmss'
# Create Storage Account through the Portal (vmstorageazrtest)
$StorageAccount = "vmstorage$EnvironmentName"

Write-Host $StorageAccount

$AzurePubSettingsFile = "C:\Windows Azure MSDN - Visual Studio Ultimate-12-19-2013-credentials.publishsettings"
# ExtraSmall, Small, Medium, Large, ExtraLarge, A6, A7
$VMSize = "Small"
# Region - East US, West US, East Asia, Southeast Asia, North Europe, West Europe
$Location = "Southeast Asia"
$AdminUserName = "admin2K"
$AdminPwd = "password2K"
$OSFamily = "Windows Server 2008 R2 SP1"

# Server names cannot be more than 15 chars
$WFE_1_Name = 'WFESrv1'+$tag
$WFE_2_Name = 'WFESrv2'+$tag
$APPSRV_1_Name = 'AppSrv1'+$tag
$DBSRV_Name = 'DBSrv'+$tag

$myDataDiskSize = 20  # in GB     # User-specified

# This must be unique
$CloudServiceName = "azrtest"+$tag
# Run GetLatestImage.ps1
$Image = GetLatestImage($OSFamily)
$ImageName = $Image.ImageName

# Affinity Groups - groups machines 'closer'
$myAffinityGroupName = $EnvironmentName+'-ag'  # User-defined
# Availability Sets - defines resources on different HA Fault Domains
# One Availability Set is created per application tier. Not needed for the DB server
$myAvailabilitySetName_WFE = $EnvironmentName+'wfe-as'  # User-defined
$myAvailabilitySetName_APPSRV = $EnvironmentName+'appsrv-as'  # User-defined

$myEndpointName = $EnvironmentName+'-ep'  # User-defined
$myLoadBalancerName = $EnvironmentName+'-lb'  # User-defined

Select-AzureSubscription -Default $myAzureSubscription

# Config subscription
Import-AzurePublishSettingsFile $AzurePubSettingsFile

# Step 1 - Create the Affinity Group
New-AzureAffinityGroup -Name $myAffinityGroupName -Location $Location

# Step 2 - Create Storage Account
# Create Storage Account through the Portal
# IF NO StorageAccount exists, ONE IS CREATED HERE!
#New-AzureStorageAccount -StorageAccountName $StorageAccount -Location $Location -Label "azrtest"
# Remove-AzureStorageAccount -StorageAccountName $StorageAccount
New-AzureStorageAccount -StorageAccountName $StorageAccount -AffinityGroup $myAffinityGroupName

# Step 3 - Create Azure Cloud Service
New-AzureService -ServiceName $CloudServiceName -AffinityGroup $myAffinityGroupName

# Step 4
#Get-AzureStorageAccount | Select Label
Set-AzureSubscription -SubscriptionName $myAzureSubscription -CurrentStorageAccount $StorageAccount

# Step 5
# Create WFE Machine(s)
# Create WFE #1
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $WFE_1_Name `
                  -AvailabilitySetName $myAvailabilitySetName_WFE `
                  -DiskLabel    "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureEndpoint -Name          $myEndpointName `
                    -Protocol      tcp `
                    -LocalPort     80 `
                    -PublicPort    80 `
                    -LBSetName     $myLoadBalancerName `
                    -ProbePort     8080 `
                    -ProbeProtocol tcp `
                    -ProbeIntervalInSeconds 15 `
| New-AzureVM -ServiceName $CloudServiceName

# Create WFE #2
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $WFE_2_Name `
                  -AvailabilitySetName $myAvailabilitySetName_WFE `
                  -DiskLabel    "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureEndpoint -Name          $myEndpointName `
                    -Protocol      tcp `
                    -LocalPort     80 `
                    -PublicPort    80 `
                    -LBSetName     $myLoadBalancerName `
                    -ProbePort     8080 `
                    -ProbeProtocol tcp `
                    -ProbeIntervalInSeconds 15 `
| New-AzureVM -ServiceName $CloudServiceName

# Create AppServer
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $APPSRV_1_Name `
                  -AvailabilitySetName $myAvailabilitySetName_APPSRV `
                  -DiskLabel    "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureEndpoint -Name          $myEndpointName `
                    -Protocol      tcp `
                    -LocalPort     80 `
                    -PublicPort    80 `
                    -LBSetName     $myLoadBalancerName `
                    -ProbePort     8080 `
                    -ProbeProtocol tcp `
                    -ProbeIntervalInSeconds 15 `
| New-AzureVM -ServiceName $CloudServiceName

# Create the DB Server
# We do not need to load balance the DB server...
# It would be better to use a SQL Server Image here..
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $DBSRV_Name `
                  -DiskLabel    "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk1' `
                    -LUN 1 `
| New-AzureVM -ServiceName $CloudServiceName

Enjoy!

Deploying an Entire Environment using Azure and PowerShell

The IaaS capabilities of Azure can be very handy when you need to create temporary development/test environments during the SDLC. Automating the creation and clean-up of these environments can save a lot of time and compute charges ($$).

Azure exposes a PowerShell-based interface to automate all the steps required to do this, and I spent some time researching how to do it properly. You will find many references and blog posts on how to create VMs using the PowerShell API in Azure; however, I did not find many updated, accurate references on how to do it for an entire environment – probably because the API has evolved so quickly that these articles are no longer relevant. The results are summarized in the following script, which demonstrates the creation of a standard deployment of an enterprise multi-tier environment (web front-end, application server, and database server).

$ErrorActionPreference = "Stop"   # stop on any error

function GetLatestImage($family){
    $images = Get-AzureVMImage `
        | where { $_.ImageFamily -eq $family } `
        | Sort-Object -Descending -Property PublishedDate

    $latestImage = $images[0]
    return $latestImage
}

# Environment variables are defined here:
# ONLY LOWERCASE LETTERS HERE!!
$EnvironmentName = "azrtest"
# Create Storage Account through the Portal (vmstorageazrtest)
$StorageAccount = "vmstorage$EnvironmentName"

Write-Host $StorageAccount

$AzurePubSettingsFile = "C:\MyStuff\MyDrop\Dropbox\Personal\Windows Azure MSDN - Visual Studio Ultimate-12-19-2013-credentials.publishsettings"
$VMSize = "Small"
$Location = "Southeast Asia"
$AdminUserName = "admin2K"
$AdminPwd = "password2K"
$OSFamily = "Windows Server 2008 R2 SP1"

$server_A_Name = "WFEServer"
$server_B_Name = "DBServer"
$server_C_Name = "AppServer"

# This must be unique
$CloudServiceName = "vmstorageazrtest"
# Run GetLatestImage.ps1
$Image = GetLatestImage($OSFamily)
$ImageName = $Image.ImageName

# Create Storage Account through the Portal
# IF NO StorageAccount exists, ONE IS CREATED HERE!
#New-AzureStorageAccount -StorageAccountName $StorageAccount -Location $Location -Label "azrtest"
# Remove-AzureStorageAccount -StorageAccountName $StorageAccount

# Config subscription
Import-AzurePublishSettingsFile $AzurePubSettingsFile
#Get-AzureStorageAccount | Select Label
Set-AzureSubscription -SubscriptionName "Windows Azure MSDN - Visual Studio Ultimate" -CurrentStorageAccount $StorageAccount

# Create Azure Service
New-AzureService -ServiceName $CloudServiceName -Location $Location

# Create Machine (1) - Windows
# You can create a new virtual machine in an existing Windows Azure cloud service, or you can create a new cloud service by using the Location parameter.
New-AzureQuickVM -Windows -ServiceName $CloudServiceName -Name $server_A_Name -ImageName $ImageName -Password $AdminPwd -AdminUsername $AdminUserName -Verbose
New-AzureQuickVM -Windows -ServiceName $CloudServiceName -Name $server_B_Name -ImageName $ImageName -Password $AdminPwd -AdminUsername $AdminUserName -Verbose
New-AzureQuickVM -Windows -ServiceName $CloudServiceName -Name $server_C_Name -ImageName $ImageName -Password $AdminPwd -AdminUsername $AdminUserName -Verbose

This sample script will create 3 VMs running “Windows Server 2008 R2 SP1”: WFE, App Server and DB Server. All VMs will be “grouped” in the same Azure Cloud Service: $CloudServiceName.

If you plan to use it, make sure you edit the environment variables in the top of the script:

  • The Environment Name: $EnvironmentName
  • The storage account used to store the VHDs: $StorageAccount
  • Location of the Azure Publishing settings file: $AzurePubSettingsFile
  • Size of the VMs: $VMSize
  • Location – use Get-AzureLocation for a list of locations: $Location
  • Admin Username and Password – this is the local account you will use to remote into them: $AdminUserName and $AdminPwd
  • OS: $OSFamily

To shut down and clean-up the VMs created, you could use the following script:

#CleanUp

# Environment variables are defined here:
# ONLY LOWERCASE LETTERS HERE!!
$EnvironmentName = "azrtest"
# Create Storage Account through the Portal (vmstorageazrtest)
$StorageAccount = "vmstorage$EnvironmentName"

Write-Host $StorageAccount

$AzurePubSettingsFile = "C:\MyStuff\MyDrop\Dropbox\Personal\Windows Azure MSDN - Visual Studio Ultimate-12-19-2013-credentials.publishsettings"
$VMSize = "Small"
$Location = "Southeast Asia"
$AdminUserName = "admin2K"
$AdminPwd = "password2K"
$OSFamily = "Windows Server 2008 R2 SP1"

$server_A_Name = "WFEServer"
$server_B_Name = "DBServer"
$server_C_Name = "AppServer"
# They must be unique
$CloudServiceName = "vmstorageazrtest"

# Config subscription
Import-AzurePublishSettingsFile $AzurePubSettingsFile
#Get-AzureStorageAccount | Select Label
Set-AzureSubscription -SubscriptionName "Windows Azure MSDN - Visual Studio Ultimate" -CurrentStorageAccount $StorageAccount

# Stop & Remove VMs
$vm = Get-AzureVM -ServiceName $CloudServiceName -Name $server_A_Name
if($vm){
    Stop-AzureVM -ServiceName $CloudServiceName -Name $server_A_Name
    Remove-AzureVM -ServiceName $CloudServiceName -Name $server_A_Name
}

$vm = Get-AzureVM -ServiceName $CloudServiceName -Name $server_B_Name
if($vm){
    Stop-AzureVM -ServiceName $CloudServiceName -Name $server_B_Name
    Remove-AzureVM -ServiceName $CloudServiceName -Name $server_B_Name
}

$vm = Get-AzureVM -ServiceName $CloudServiceName -Name $server_C_Name
if($vm){
    Stop-AzureVM -ServiceName $CloudServiceName -Name $server_C_Name
    Remove-AzureVM -ServiceName $CloudServiceName -Name $server_C_Name
}

# Remove any existing Azure Cloud Service
$azureService = Get-AzureService -ServiceName $CloudServiceName
if($azureService){
    Write-Host "Cloud service: $CloudServiceName found!, deleting it.."
    Remove-AzureService -ServiceName $CloudServiceName -Force
}

#Remove Storage Account
# Remove container first
Remove-AzureStorageContainer -Name vhds -Force
Remove-AzureStorageAccount -StorageAccountName $StorageAccount

Some aspects were not fully covered in this version of the script:

  • Networking: VMs will be able to talk to each other, but we currently do not have any control over the addressing assigned to them.

  • AD: VMs are created as standalone servers, not part of an AD domain.

I hope you find this helpful and time-saving.

A manifesto

“There is something about investing your humanity, your eccentricity, your exuberance in the things you do. Not everything you do is going to be successful, but that’s part of the allure. It is also what makes the work valuable: that you are really present and invested in what you’re doing.”

Louis Rossetto, Wired Magazine

Compiling Views in MVC4

By default, any typo or mistake in MVC Razor views will only be detected at runtime. However, you can enable view compilation and catch those errors earlier in the development cycle.

Just open your project file (.csproj) as text (unload your project first), find the <MvcBuildViews> element, and change its value to true:

<MvcBuildViews>true</MvcBuildViews>

Beware that compilation time will increase (it almost doubled in my case). You may also get a weird "Unrecognized attribute 'xmlns:xdt' on the web.config" error during compilation (especially after switching between build configurations). To work around it, delete the \obj folder in your project folder, or use the pre-build step described here.

A pre-build step that worked for me was:

del $(ProjectDir)obj\* /F /S /Q

Beefing-up the Azure Platform

During PDC 2010 in Redmond, WA, Microsoft announced a bunch of improvements to the whole Azure platform, some of them desperately needed:

  • Support for the new Virtual Machine role, in addition to the existing Web and Worker roles. This allows IaaS-like scenarios, where you can build, configure and upload your own Windows Server 2008 R2 VMs as VHDs – quite similar to the AWS model. (Great!!!) In addition, the pricing model for the Windows Azure VM role is the same as the existing pricing model for Web and Worker roles.
  • Enhancements to the Web and Worker roles, with the introduction of Elevated Privileges and full IIS support!!! We can now have multiple IIS sites per Web role and the ability to install IIS modules. (Cool!!)
  • Windows Azure will also provide Remote Desktop functionality, which enables customers to connect to a running instance of their application or service in order to monitor activity and troubleshoot common problems. So basically your Azure compute instances are no longer black boxes. (Finally!!!! OMG, I am going to cry…)
  • The introduction of an Extra Small Windows Azure instance – great!! Now you can configure an instance to run low-priority Worker Roles or admin apps without ruining your budget:
Compute Instance Size | CPU     | Memory | Instance Storage | I/O Performance | Cost per hour
Extra Small           | 1.0 GHz | 768 MB | 20 GB            | Low             | $0.05
  • A range of new networking functionality was introduced under the Windows Azure Virtual Network name. Windows Azure Connect (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Virtual Network feature that Microsoft will make available as a CTP later this year. With this, you can establish a VPN between your on-premises servers and your cloud machines. Much needed for some enterprise scenarios.
  • The Windows Azure portal will also be improved with Silverlight technologies, and with access to new diagnostic information, including the ability to click on a role to see its type and deployment time. (Finally, for god's sake!!!)
  • A much needed update to the pretty basic Database Manager for SQL Azure (formerly "Project Houston") was also announced.

Let’s hope these enhancements are released as soon as possible.

TFS 2010: Changing Template for an Upgraded Team Project

The situation is fairly common: you have been using TFS 2008 for ALM in your work environment for a while, but suddenly you get access to the latest version, TFS 2010, and wonder how cool it would be to make use of all the new features included in this major version (like collections, work item hierarchies and, finally, better integration with MS Project). You followed the Upgrade Guide (kindly provided by the TFS Rangers) and, after some work, you finally have your precious Team Projects safe in their new 2010 home. A clean and straightforward upgrade, wasn’t it?

Oops, wait a minute! Why don’t I have access to those cool new dashboards and reports from my upgraded project portals? OK, let’s take a look at those functional testing capabilities I’ve been waiting on for a long time… oops… the new MS Test Manager cannot connect to my upgraded projects!! The reason for this and many other symptoms: the upgrade process did not upgrade the definition of your Team Projects.

You ended up with the same Work Items, Reports, etc. that you had in TFS 2008. No way!!

The steps to enable new features in upgraded team projects, or to add the missing dashboards, are scattered across the blogosphere and the MSDN web site.

Based on my own experience, basically you have two options here:

  1. Branch the code from each upgraded project to a brand new Team Project in TFS 2010, and remove permissions from the upgraded project. This way you can keep your source code history accessible from the new Team Project. Obviously, you will be in trouble if you accidentally delete the old upgraded project from TFS. If you can live without your history, then simply move the latest version to the new Team Project. This is very easy and a valid option, but you will probably have the uneasy feeling that it could be done in a more elegant way (see option 2).
  2. Rebuild the entire definition of your Team Project to the latest one included in TFS 2010!! This may sound a little drastic, and unfortunately you will not easily find a comprehensive guide on the Web on how to accomplish it. That is exactly the goal of this article.

Guide to Rebuilding your Team Project Template

Note: this guide is based on the assumption that you can delete all the preexisting Work Items created in your upgraded projects.  If you know how to overcome this limitation just let me know.

  1. Log in to your TFS 2010 server or to a desktop machine with Team Explorer 2010 installed. You must be a Project Collection Administrator to execute the following steps.
  2. Download the latest CMMI and Agile templates from your TFS 2010. Open Team Explorer 2010 (or VS 2010) as an Administrator, right-click the collection… Team Project Collection Settings… Process Template Manager… and download both templates to a folder on your disk. We will use the path C:\MSF for CMMI Process Improvement v5.0 in the following steps.
  3. Take a full backup of your TFS 2010 server. Use the TFS Backup Power Tool for this.
  4. Open a command prompt and change directory to C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE. You will need to execute the following steps for each one of the upgraded projects. In this case I will use a collection named “Migrated” and a Team Project called “TemplateChgTest”:

4.1 – Open Notepad and create an XML file called “ImCategories.xml” with the following content:

<?xml version="1.0" encoding="utf-8"?> 
<cat:CATEGORIES xmlns:cat="http://schemas.microsoft.com/VisualStudio/2008/workitemtracking/categories"> 
</cat:CATEGORIES>

4.2 – Clean up the current category list for the project; this is what allows us to delete all the Work Item definitions. Import the empty category list (ImCategories.xml) into the TemplateChgTest project:

witadmin.exe importcategories /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /f:"C:\TFSUpgrade\ImCategories.xml"

4.3 – Delete each one of the Work Item definitions of the team project. You can get a list of the current definitions through the Team Explorer, or using the witadmin listwitd command.

witadmin destroywitd /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /n:[Work Item Name]

4.4 – Import the new definitions from the template downloaded in step 2. Do this for each one of the Work Item definitions in the new template.

witadmin importwitd /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /f:"C:\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\Bug.xml"

4.5 – Import the new Category list from the downloaded template.

witadmin.exe importcategories /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /f:"C:\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

4.6 – At this point you should be able to connect to your upgraded projects from MS Test Manager. You can also check the new definitions from Team Explorer (remember to refresh your projects or restart VS).

5 – Let’s rebuild the project portal and the new dashboards and reports. Details of this process are further explained here, but in summary you will need to do this:

5.1 – In case you want your portal at the same path, back up your upgraded content and delete the portal. Open the WSS-based project portal and delete it (Site Actions… Site Settings… Site Administration… Delete this site).

5.2 – Delete your upgraded SSRS reports. Find the SSRS Server URL in the TFS Admin Console applet (under the Reporting node). Open the SSRS Report Manager in your browser and delete the reports folder for the upgraded team project.

5.3 – Manually copy the Queries from a new Project using the same template. Just copy and paste the queries in the Team Explorer tree view.

5.4 – Create the following XML file in Notepad (C:\TFSUpgrade\AddDashboards.xml):

<?xml version="1.0" encoding="utf-8"?> 
<Project xmlns="ProjectCreationSettingsFileSchema.xsd"> 
<TFSName>http://localhost:8080/tfs/Migrated</TFSName> 
<LogFolder>C:\TFSUpgrade</LogFolder> 
<ProjectName>TemplateChgTest</ProjectName> 
<AddFeaturesToExistingProject>true</AddFeaturesToExistingProject> 
<ProjectReportsEnabled>true</ProjectReportsEnabled> 
<ProjectSiteEnabled>true</ProjectSiteEnabled> 
<ProjectSiteWebApplication>WSSTFS</ProjectSiteWebApplication> 
<ProjectSitePath>/Migrated/TemplateChgTest</ProjectSitePath> 
<ProjectSiteTitle>TemplateChgTest</ProjectSiteTitle> 
<ProjectSiteDescription>TemplateChgTest</ProjectSiteDescription> 
<ProcessTemplateName>MSF for CMMI Process Improvement v5.0</ProcessTemplateName>
</Project>

Configure the parameters based on your environment. Remember that you can get the ProjectSiteWebApplication value from the TFS Admin Console (under the Name column in the SharePoint Web Applications node).

5.5 Execute the rebuild process. For this, open the Command Window in VS2010 and enter the following command:

File.BatchNewTeamProject C:\TFSUpgrade\AddDashboards.xml

Wait for the command to complete, and then check the generated log file for exceptions. It must contain a line like the following for the process to be considered successful:

2011-04-12T15:36:08 | Module: BatchTeamProjectCreator | Thread: 1 | Team Project Batch Creation succeeded.

6 – Finished! You are ready to go. Check the new portal, and the Excel and SSRS reports.

Notes:

  • Keep in mind that these steps will also work if you need to change the definition of your Team Project from CMMI to Agile or vice versa.
  • You will need minor changes if you are using your own template.

Enjoy, please let me know any comments!

Large DBs in SQL Azure

Recently, one of my customers asked me this question: “Based on the updated SQL Azure plans, the maximum database size is now 50 GB. What if my DB requires more storage?”

The first recommendation would be: measure how your DB is growing and, if possible, keep only the most relevant information there – SSIS is a great option for moving all that historical data down to your on-premises servers. Another option is Data Sync. Some good articles on measuring your DB size are:

How to Tell If You Are Out of Room – SQL Azure Team Blog – Site Home – MSDN Blogs

CalculatingTheSizeOfYourSQLAzureDatabase

Well, according to Microsoft, 50 GB is the maximum size, and if you need more space you will need to partition your data (either horizontally or vertically). Unfortunately, SQL Azure won’t help you much with this, and you will need to make changes in your application logic to handle it. This should be done in your Data Access Layer, and it will not be an easy process to implement, let me warn you. The following articles give some insight into the details and limitations of this process, and a small sketch of the idea follows the links:

SQL Azure Horizontal Partitioning- Part 2 – SQL Azure Team Blog – Site Home – MSDN Blogs

Scaling out with SQL Azure – TechNet Articles – Home – TechNet Wiki
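
To make the horizontal partitioning idea more concrete, here is a minimal sketch of the kind of routing logic your Data Access Layer ends up owning – the modulo-based shard map, table and column names below are assumptions of mine, not something SQL Azure provides out of the box:

using System.Data.SqlClient;

public class ShardedCustomerStore
{
    // Each shard is a separate SQL Azure database; the DAL must know about all of them.
    private readonly string[] _shardConnectionStrings;

    public ShardedCustomerStore(string[] shardConnectionStrings)
    {
        _shardConnectionStrings = shardConnectionStrings;
    }

    // Horizontal partitioning: the customer id decides which database holds the row.
    private SqlConnection GetShardConnection(int customerId)
    {
        int shardIndex = customerId % _shardConnectionStrings.Length;
        return new SqlConnection(_shardConnectionStrings[shardIndex]);
    }

    public string GetCustomerName(int customerId)
    {
        using (SqlConnection conn = GetShardConnection(customerId))
        using (var cmd = new SqlCommand("SELECT Name FROM Customers WHERE CustomerId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", customerId);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }
}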