We all know that when we are looking for the best performance accessing databases in .NET there is nothing faster than plain old ADO.NET DataReaders… but they can also make your code quite verbose. Yes, you could build some helpers, but you could easily end up reinventing the wheel. That’s what I loved about Dapper.NET – it is just a thin layer on top of ADO.NET, so you get the best performance, a lean implementation in your repositories (3 or 4 LoC per operation), and full control of what goes to the DB engine. I am not against EF, but many times you don’t know what it is doing down there… and when comparing performance, Dapper.NET is a clear winner. It is just another pragmatic library shared by the folks at Stack Exchange.
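As an illustration, a Dapper repository method can be as lean as this (the Product table, columns, and connection string here are hypothetical – just a sketch of the pattern):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductRepository
{
    private readonly string _connectionString;

    public ProductRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // The data-access logic is 3-4 lines: open a connection, run a
    // parameterized query, and let Dapper map the rows to Product.
    public IEnumerable<Product> GetByMinPrice(decimal minPrice)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            return connection.Query<Product>(
                "SELECT Id, Name, Price FROM Products WHERE Price >= @minPrice",
                new { minPrice });
        }
    }
}
```

Since Dapper just extends IDbConnection, you keep full control of the SQL that hits the engine.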
I have been using Azure Table Storage (ATS) in a couple of my personal projects, and I just love it. It is simple, the performance is decent, and the storage is quite cheap. A NoSQL key-value store like ATS is just perfect for storing lots of unrelated records such as audit and error entries. In our case, around 70% of our data falls into this category.
I had the perception that ATS was not that fast, but I did not notice much impact on the performance of the site. In any case, the audit and error reporting operations were asynchronous. Probably the only major drawback during the project was ATS’s poor API – I still cannot understand the anemic LINQ support.
During the architecture definition of a new web-based project I started to consider some other options for the data storage. This new project required a data model with way more relations between entities – a productive API was key, although I wanted to stick to a NoSQL store for future-proof scalability.
One of the serious options we started to contemplate was MongoDB. I had some quick experiences with it in the past, but nothing serious. I knew that its LINQ support was phenomenal and that it could grow to massive scale thanks to sharding and replica sets, but what about performance…? How would ATS read/write performance compare against MongoDB, or Azure SQL?
I built a simple MVC 5 application and deployed it in an Azure Web Role (XS). Using the ATS .NET Storage Client Library 2.2, I built a simple page that writes 100 records to an ATS table, and another page that reads back each one of the records in that same table. The average latency of each write and read operation is displayed. I based my application on the tutorials and walkthroughs available from Microsoft. The idea was to build it with the techniques available, without any tuning.
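For reference, the timed ATS operations looked roughly like this with Storage Client 2.2 (the entity type, table name, and keys below are placeholders, not the actual test code):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class PerfEntity : TableEntity
{
    public PerfEntity() { }
    public PerfEntity(string partitionKey, string rowKey)
        : base(partitionKey, rowKey) { }

    public string Payload { get; set; }
}

public static class AtsBenchmark
{
    public static void Run(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("perftest");
        table.CreateIfNotExists();

        // Write: insert a single entity (timed per call in the test page)
        var entity = new PerfEntity("bench", "001") { Payload = "data" };
        table.Execute(TableOperation.Insert(entity));

        // Read: point query by partition key + row key
        var result = table.Execute(TableOperation.Retrieve<PerfEntity>("bench", "001"));
        var fetched = (PerfEntity)result.Result;
    }
}
```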
I did the same thing with Azure SQL, procuring the smallest database I could. I used plain old ADO.NET DataReaders to implement the operations.
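The Azure SQL counterpart used a plain DataReader, along these lines (table and column names are hypothetical):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class SqlBenchmark
{
    public static List<string> ReadRecords(string connectionString)
    {
        var results = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Payload FROM PerfRecords", connection))
        {
            connection.Open();
            // Forward-only, read-only cursor: the fastest way to consume rows
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    results.Add(reader.GetString(0));
                }
            }
        }
        return results;
    }
}
```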
For MongoDB, I launched an Extra Small VM running Linux CentOS. This is a VM with a 1 GHz CPU and 768 MB of RAM. MongoDB was deployed with default settings.
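The MongoDB operations used the official C# driver (the 1.x API of the time); roughly like this, with database, collection, and record names as placeholders:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

public class PerfRecord
{
    public ObjectId Id { get; set; }
    public string Payload { get; set; }
}

public static class MongoBenchmark
{
    public static void Run(string mongoUrl) // e.g. "mongodb://my-centos-vm"
    {
        var client = new MongoClient(mongoUrl);
        var db = client.GetServer().GetDatabase("perftest");
        var collection = db.GetCollection<PerfRecord>("records");

        // Write: insert a single document
        var record = new PerfRecord { Payload = "data" };
        collection.Insert(record);

        // Read: point query by _id
        var found = collection.FindOneById(record.Id);
    }
}
```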
Naturally, all these resources were located in the same region (US West). This diagram summarizes the topology:
And…these are the results:
Yes, the performance of plain vanilla ATS is just disappointing. After some research I found a blog post with similar findings, which explained how to improve the performance by turning off the Nagle algorithm before making the calls:
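The tweak boils down to disabling Nagle on the table endpoint’s ServicePoint (and, per the same community guidance, Expect: 100-continue) before the first request goes out:

```csharp
using System.Net;
using Microsoft.WindowsAzure.Storage;

public static class AtsTuning
{
    public static void DisableNagle(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);

        // Must run before the first call so the ServicePoint settings take effect
        var tableServicePoint = ServicePointManager.FindServicePoint(account.TableEndpoint);
        tableServicePoint.UseNagleAlgorithm = false;
        tableServicePoint.Expect100Continue = false;
    }
}
```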
The performance benefit is impressive – I wonder why this is not the default setting. Note how the read operations were not affected by it.
The performance of Azure SQL operations was really good (under 10 msecs on average), but the winner as you can see was MongoDB – impressive, with both operations under 2 msecs!
Well, that was an eye-opener. It is pretty obvious what we are going to use for our next projects. Unfortunately, neither Azure nor Amazon Web Services offers a managed MongoDB service at this time, so I would need to set up and maintain my own set of VMs running MongoDB. That is not a big deal, but I would need to pay for those VMs in addition to the storage.
Cheers, see you next time amigos.
Yesterday I found this gem while creating a simple spreadsheet for a portfolio I am defining. Instead of copy/pasting stock values from any of the financial information providers, you can simply use the GoogleFinance(SYMBOL, ATTRIBUTE) function to get the latest indicators for your specified stock symbol.
Where the attribute can be one of:
- price – Current Price of the stock
- closeyest – Last closing price of the stock
- priceopen – Current opening price of the stock
- high – Daily high price of the stock
- low – Daily low price of the stock
- change – Change since the last posted closing price.
- changepct – Percentage change since the last posted closing price
- high52 – The 52 week high price for the stock
- low52 – The 52 week low price for the stock.
- eps – The calculated earnings per share
- pe – The calculated price to earnings ratio. (Note that companies with negative earnings will not have a pe ratio).
- volume – Number of shares traded.
- marketcap – Market Capitalization
- tradetime – Time of the last trade.
- datadelay – The time delay from Google’s servers.
- volumeavg – Average volume for the stock.
- beta – Beta value of a stock.
- shares – Outstanding shares of the stock.
The GoogleFinance() function can also pull historical data, allowing you to track the performance of any stock over a certain time period. To show historical data, type =GoogleFinance(“symbol”, “attribute”, “start_date”, “num_days|end_date”, “interval”) into a spreadsheet cell.
You’ll find details about the syntax of this formula below:
- “Symbol” and “attribute” follow the same rules as above
- “Start date” is the day you’d like to start showing information from
- “num_days” | “end_date” [Optional] can be either the end date for the time period over which you want to see historical data, or the number of days from the start date. Any number less than 50 is considered to be num_days. Otherwise it is considered an end_date.
- “interval” specifies granularity at which the stock data is shown, either daily or weekly. You can enter either “Daily” or “1” if you would like daily information and either “Weekly” or “7” for weekly data.
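For example, to pull one month of daily closing prices for a hypothetical holding (MSFT here), you could enter:

```
=GoogleFinance("MSFT", "close", DATE(2014,1,1), DATE(2014,2,1), "DAILY")
```

This fills a range of cells with a Date column and a Close column for each trading day in the period.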
You can find more information about historical data and mutual fund data in the Google spreadsheets help center.
It is sad to find out that your old trusty MS Excel does not provide this functionality the way the GoogleFinance() function does. Excel 2013 offers something similar, but not as flexible.
In a previous post, I detailed how to automate the creation of a standard multi-server environment using the IaaS capabilities in Azure. During the last few days I had the opportunity to enhance these scripts a bit. This second part of the post describes the enhancements.
Basically, the original scripts followed this set of steps:
- Define environment variables
- Create the cloud service
- Create the storage account
- Create VM-n
Pretty simple, huh? Yeah, but the resulting environment presented some issues:
- Are the VMs created close enough to each other (same subnet)? Most probably not.
- What if the machines are created in the same rack (Fault Domain) and there is a hardware issue? The whole environment is gone!
- What if we have multiple WFEs? We would need to load balance them.
All these issues are resolved by introducing some Azure concepts: Affinity Groups, Availability Sets, and Load Balancing. So I enhanced the scripts with them in mind, and the resulting steps are:
- Define environment variables
- Create a new Affinity Group (New-AzureAffinityGroup)
- Create storage account (add it to the new Affinity Group)
- Create cloud service (add it to the new Affinity Group)
- For each new load balanced VM:
- Create the VM,
- Add it to the cloud service
- Add it to the same availability set
- Create and attach disks as necessary
- Configure endpoints (firewall)
- Configure load balancer and probe port.
The resulting scripts are:
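In case the embedded scripts do not render, the core of the enhanced version looks roughly like this, using the classic Azure Service Management cmdlets available at the time (all names, sizes, and variable values below are illustrative placeholders, not the actual scripts):

```powershell
# Illustrative sketch only – replace the placeholder values with your own
$affinityGroup = "myenv-ag"
New-AzureAffinityGroup -Name $affinityGroup -Location "West US"

# Storage account and cloud service join the same affinity group
New-AzureStorageAccount -StorageAccountName "myenvstorage" -AffinityGroup $affinityGroup
New-AzureService -ServiceName "myenv-svc" -AffinityGroup $affinityGroup

foreach ($i in 1..2) {
    # Each load-balanced WFE shares the same availability set and LB set
    New-AzureVMConfig -Name "myenv-wfe$i" -InstanceSize Small -ImageName $imageName `
            -AvailabilitySetName "myenv-wfe-avset" |
        Add-AzureProvisioningConfig -Windows -AdminUsername $adminUser -Password $adminPwd |
        Add-AzureDataDisk -CreateNew -DiskSizeInGB 50 -DiskLabel "data" -LUN 0 |
        Add-AzureEndpoint -Name "web" -Protocol tcp -LocalPort 80 -PublicPort 80 `
            -LBSetName "lb-web" -ProbePort 80 -ProbeProtocol http -ProbePath "/" |
        New-AzureVM -ServiceName "myenv-svc"
}
```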
The IaaS capabilities of Azure can be very handy when you need to create temporary development/test environments during the SDLC. Automating the creation and clean-up of these environments can save a lot of time and compute cost ($$).
Azure exposes a PowerShell-based interface to automate all the steps required to do this, and I spent some time researching how to do it properly. You will find many references and blog posts on how to create VMs using the PowerShell API in Azure; however, I did not find many updated, accurate references on how to do it for an entire environment – probably because the API has evolved so quickly that these articles are no longer relevant. The results are summarized in the following script, which demonstrates the creation of a standard deployment of an enterprise multi-tier environment (web front-end, application server, and database server).
This sample script will create 3 VMs running “Windows Server 2008 R2 SP1”: a WFE, an App Server, and a DB Server. All VMs will be “grouped” in the same Azure Cloud Service: $CloudServiceName.
If you plan to use it, make sure you edit the environment variables at the top of the script:
- The Environment Name: $EnvironmentName
- The storage account used to store the VHDs: $StorageAccount
- Location of the Azure Publishing settings file: $AzurePubSettingsFile
- Size of the VMs: $VMSize
- Location – use Get-AzureLocation for a list of locations: $Location
- Admin username and password – the local account you will use to remote into the VMs: $AdminUserName and $AdminPwd
- OS: $OSFamily
To shut down and clean up the VMs created, you can use the following script:
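The clean-up essentially removes the deployment and then the orphaned disks; a rough sketch with the same cmdlet family (service and disk names are placeholders, not the actual script):

```powershell
# Illustrative sketch only – tears down the whole environment
$serviceName = "myenv-svc"

# Deletes the deployment and all VMs in the cloud service
Remove-AzureService -ServiceName $serviceName -Force

# Detached VHDs are not deleted automatically; remove them too
Get-AzureDisk |
    Where-Object { $_.AttachedTo -eq $null -and $_.DiskName -like "myenv*" } |
    ForEach-Object { Remove-AzureDisk -DiskName $_.DiskName -DeleteVHD }
```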
Some aspects were not fully covered in this version of the script:
- Networking: VMs will be able to talk to each other, but we currently do not have any control over the addressing assigned to them.
- AD: VMs are created as standalone servers, not part of an AD domain.
I hope you find this helpful and time-saving.
“There is something about investing your humanity, your eccentricity, your exuberance in the things you do. Not everything you do is going to be successful, but that’s part of the allure. It is also what makes the work valuable: that you are really present and invested in what you’re doing.”
Louis Rossetto, Wired Magazine
By default, any typo or mistake in MVC Razor views will only be detected at execution time. However, you can enable compilation of views and detect those errors earlier in the development cycle.
Just open your project file (.csproj) as text (unload your project first), find the <MvcBuildViews> element, and change its value to true:
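If the element is not already present in your project file, you can add it to a PropertyGroup; either way it should end up looking like this:

```xml
<PropertyGroup>
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>
```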
Beware that compilation time will increase (it almost doubled in my case). You may also get a weird “Unrecognized attribute ‘xmlns:xdt’” error on the web.config during compilation (especially after switching between build configurations). To work around it, delete the \obj folder in your project folder, or use the pre-build step described here.
A pre-build step that worked for me was:
del $(ProjectDir)obj\* /F /S /Q
The situation is fairly common: you have been using TFS 2008 for ALM in your work environment for a while when, suddenly, you get access to the latest version, TFS 2010, and wonder how cool it would be to make use of all the new features included in this major version (like collections, work item hierarchies, and, finally, better integration with MS Project). You followed the Upgrade Guide (kindly provided by the TFS Rangers) and, after some work, you finally have your precious Team Projects safe in their new 2010 home. Clean and straightforward upgrade, wasn’t it?
Oops, wait a minute! Why don’t I have access to those cool new dashboards and reports from my upgraded project portals? OK, let’s take a look at those functional testing capabilities I’ve been waiting on for a long time… oops… the new MS Test Manager cannot connect to my upgraded projects!! The reason for this and many other symptoms: the upgrade process did not upgrade the definition of your Team Projects.
You ended up with the same Work Items, Reports, etc. that you had in TFS 2008. No way!!
Based on my own experience, you basically have two options here:
- Branch the code from each upgraded project to a brand new Team Project in TFS 2010, then remove permissions on the upgraded project. This way you keep your source code history accessible from the new Team Project. Obviously, you will have trouble if you accidentally delete the old upgraded project from TFS. If you can live without your history, then simply move the latest version to the new Team Project. This is very easy, and it is a valid option, but you will probably have an uneasy feeling that it could be done in a more elegant way (see option 2).
- Rebuild the entire definition of your Team Project to match the latest templates included in TFS 2010!! This may sound a little drastic and, unfortunately, you will not easily find a comprehensive guide on the Web on how to accomplish it. That is exactly the goal of this article.
Guide to Rebuilding your Team Project Template
Note: this guide is based on the assumption that you can delete all the preexisting Work Items created in your upgraded projects. If you know how to overcome this limitation, please let me know.
- Log in to your TFS 2010 server or a desktop machine with Team Explorer 2010 installed. You must be a Project Collection Administrator to execute the following steps.
- Download the latest CMMI and Agile templates from your TFS 2010. Open Team Explorer 2010 (or VS 2010) as an Administrator, right-click the collection… Team Project Collection Settings… Process Template Manager… and download both templates to a folder on your disk. We will use the path C:\MSF for CMMI Process Improvement v5.0 in the following steps.
- Take a full backup of your TFS 2010 server. Use the TFS Backup Power Tool for this.
- Open a command-line prompt and change dir to C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE. You will need to execute the following steps for each one of the upgraded projects. In this case I will use a collection named “Migrated” and a Team Project called “TemplateChgTest”:
4.1 – Open Notepad and create an XML file called “ImCategories.xml” with the following content:
<?xml version="1.0" encoding="utf-8"?>
<cat:CATEGORIES xmlns:cat="http://schemas.microsoft.com/VisualStudio/2008/workitemtracking/categories">
</cat:CATEGORIES>
4.2 – Clean up the current category list for the project. This will allow us to delete all the Work Item definitions. Import the empty category list (ImCategories.xml) into the TemplateChgTest project:
witadmin.exe importcategories /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /f:"C:\TFSUpgrade\ImCategories.xml"
4.3 – Delete each one of the Work Item definitions of the team project. You can get a list of the current definitions through the Team Explorer, or using the witadmin listwitd command.
witadmin destroywitd /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /n:[Work Item Name]
4.4 – Import the new definitions from the template downloaded in step 2. Do this for each one of the Work Item definitions in the new template.
witadmin importwitd /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /f:"C:\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\Bug.xml"
4.5 – Import the new Category list from the downloaded template.
witadmin.exe importcategories /collection:http://localhost:8080/tfs/Migrated /p:TemplateChgTest /f:"C:\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"
4.6 – At this point you should be able to connect to your upgraded projects from MS Test Manager. You can also check your new definitions from Team Explorer (remember to refresh your projects or restart VS).
5 – Let’s rebuild the project portal and the new dashboards and reports. Details of this process are further explained here, but in summary you will need to do the following:
5.1 – In case you want your portal at the same path, back up your upgraded content and delete the portal. Open the WSS-based project portal and delete it (Site Actions… Site Settings… Site Administration… Delete this site).
5.2 – Delete your upgraded SSRS reports. Find the SSRS Server URL in the TFS Admin Console applet (under the Reporting node). Open the SSRS Report Manager in your browser and delete the reports folder for the upgraded team project.
5.3 – Manually copy the Queries from a new Project using the same template. Just copy and paste the queries in the Team Explorer tree view.
5.4 – Create the following XML file in Notepad (C:\TFSUpgrade\AddDashboards.xml):
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="ProjectCreationSettingsFileSchema.xsd">
  <TFSName>http://localhost:8080/tfs/Migrated</TFSName>
  <LogFolder>C:\TFSUpgrade</LogFolder>
  <ProjectName>TemplateChgTest</ProjectName>
  <AddFeaturesToExistingProject>true</AddFeaturesToExistingProject>
  <ProjectReportsEnabled>true</ProjectReportsEnabled>
  <ProjectSiteEnabled>true</ProjectSiteEnabled>
  <ProjectSiteWebApplication>WSSTFS</ProjectSiteWebApplication>
  <ProjectSitePath>/Migrated/TemplateChgTest</ProjectSitePath>
  <ProjectSiteTitle>TemplateChgTest</ProjectSiteTitle>
  <ProjectSiteDescription>TemplateChgTest</ProjectSiteDescription>
  <ProcessTemplateName>MSF for CMMI Process Improvement v5.0</ProcessTemplateName>
</Project>
Configure the parameters based on your environment. Remember that you can get the ProjectSiteWebApplication parameter from the TFS Admin Console (under the Name column in the SharePoint Web Applications node).
5.5 – Execute the rebuild process. To do this, open the Command Window in VS 2010 and enter the following command:
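If I recall the linked guidance correctly, the command is File.BatchNewTeamProject pointed at the settings file created in step 5.4 (verify this against the guidance if it fails for you):

```
File.BatchNewTeamProject C:\TFSUpgrade\AddDashboards.xml
```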
Wait for the completion of the command, and then check the generated log file for exceptions. It must contain a line like the following for the process to be considered successfully completed:
2011-04-12T15:36:08 | Module: BatchTeamProjectCreator | Thread: 1 | Team Project Batch Creation succeeded.
6 – Finished! You are ready to go. Check the new portal, Excel reports, and SSRS reports.
- Keep in mind that these steps will also work if you need to change the definition of your Team Project from CMMI to Agile or vice versa.
- You will need minor changes if you are using your own template.
Enjoy, and please let me know your comments!
Recently, one of my customers asked me this question: “Based on the updated SQL Azure plans, the maximum database size is now 50 GB. What if my DB requires more storage?”
The first recommendation would be: measure how your DB is growing and, if possible, keep only the most relevant information there – SSIS is a great option for downloading all that historic data to your on-premises servers. Another option is Data Sync. Some good articles on measuring your DB size are:
Well, according to Microsoft, 50 GB is the maximum size, and if you need more space you will need to partition your data (either horizontally or vertically). Unfortunately, SQL Azure won’t help you much with this, and you will need to make some changes in your application logic to handle it. This should be done in your Data Access Layer, and it will not be an easy process to implement, let me warn you. The following articles can give you some insight into the details and limitations of this process:
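To make the idea concrete, horizontal partitioning in the Data Access Layer can be as simple as routing each key to one of N databases. This is a hypothetical sketch, not a production sharding scheme:

```csharp
using System;

public static class ShardRouter
{
    // Connection strings for each SQL Azure database holding a horizontal slice.
    // All shards share the same schema; rows are distributed by customer id.
    public static string GetShardConnectionString(int customerId, string[] shardConnectionStrings)
    {
        if (shardConnectionStrings == null || shardConnectionStrings.Length == 0)
            throw new ArgumentException("At least one shard is required.");

        // Stable modulo routing – note that adding shards later
        // requires re-balancing the existing data
        var index = Math.Abs(customerId) % shardConnectionStrings.Length;
        return shardConnectionStrings[index];
    }
}
```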
We have been using Windows Azure for almost a year, hosting our portal (Hoteles.com.co), and the results have been quite good in general. Compared to the rest of the PaaS offerings in the market, this is a great solution if your expertise is around the Microsoft stack – especially .NET and SQL Server.
For v2 we are planning to leverage more of the Azure platform, especially Azure Storage. We will be serving hotel images directly from there, which could improve the already good response times we have right now. Activating the CDN for this would bring some additional benefits as well.
The other good experience (should I say, the most important for us?? 😉 ) has been around costs. After some changes, like moving the Content Admin application out of Azure to my regular hosting provider, we are now paying an average of 60 USD per month. Not bad for a reliable and fast platform like this with access to data in SQL Server.
However, Azure is a new platform, and although it provides a good set of basic services, it still lacks some important ones, some of which are available in other PaaS offerings (like AWS), such as:
- It still lacks an out-of-the-box UI in the Azure portal that enables admins to monitor the load of the instances (in terms of CPU, RAM, disk access, etc.) – pretty much the functionality offered by really good tools like Azure Diagnostic Manager by Cerebrata. Why is this important? Well, because either you or the platform needs to make decisions based on the load. Should we allocate another instance to accommodate an increase in traffic? This is the basis of the elasticity paradigm, and the Azure portal should bring some support for defining such rules – for example, the capability to define the number of instances per day of the week (i.e., when the solution expects more traffic during the weekend).
- In addition, there is a lack of traffic statistics reporting in the Azure Portal. We are currently handling this through Google Analytics – but it would be great to have this integrated in the Azure portal.
- It would be valuable to have access to some “Event Log” window from the Azure portal with diagnostic information. Sometimes your application has problems and does not start, but you cannot get the error info – it is like flying blind.
The good thing is that Azure is really strategic for Microsoft, so I expect to see this functionality become part of the service shortly.