Comparing the Performance of Azure Table Storage with Other Repositories

I have been using Azure Table Storage (ATS) in a couple of my personal projects, and I just love it. It is simple, the performance is decent, and the storage is quite cheap. A NoSQL key-value store like ATS is just perfect for storing lots of unrelated records such as audit and error entries. In our case, around 70% of our data falls into this category.
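
To give an idea of what those records look like, here is a minimal sketch of an audit entity for ATS (the entity shape and key design are my own illustration, not the exact entities from those projects):

using System;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical audit entity: partitioned by day, ordered newest-first within the partition
public class AuditEntity : TableEntity
{
    public AuditEntity() { }   // parameterless ctor required by the table client

    public AuditEntity(DateTime when, string user, string action)
    {
        PartitionKey = when.ToString("yyyyMMdd");                         // one partition per day
        RowKey = (DateTime.MaxValue.Ticks - when.Ticks).ToString("d19");  // reverse timestamp
        User = user;
        Action = action;
    }

    public string User { get; set; }
    public string Action { get; set; }
}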

I had the perception that ATS was not that fast, but I did not notice much impact on the performance of the site; the audit and error reporting operations were asynchronous anyway. Probably the only major drawback during the project was the poor ATS API – I still cannot get over the anemic LINQ support.

While defining the architecture of a new web-based project, I started to consider some other options for data storage. This new project required a data model with far more relations between entities – a productive API was key, although I wanted to stick to a NoSQL store for future-proof scalability.

One of the options we started to contemplate seriously was MongoDB. I had some quick experiences with it in the past, but nothing serious. I knew its LINQ support was phenomenal and that it can grow to a massive scale thanks to sharding and replica sets, but what about performance? How would ATS read/write performance compare against MongoDB, or Azure SQL?
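
As a taste of that LINQ support, a minimal sketch against the MongoDB C# driver of that era might look like this (the VM address, database name and EmployeeEntity class are illustrative assumptions):

using System.Linq;
using MongoDB.Driver;
using MongoDB.Driver.Linq;

public static void QueryTopEarners()
{
    var client = new MongoClient("mongodb://myvm.cloudapp.net");  // hypothetical VM address
    var employees = client.GetServer()
                          .GetDatabase("perftest")
                          .GetCollection<EmployeeEntity>("employees");

    // A real LINQ query, translated by the driver into a native MongoDB query
    var topEarners = employees.AsQueryable()
                              .Where(e => e.Salary > 50000.0)
                              .OrderByDescending(e => e.Salary)
                              .Take(10)
                              .ToList();
}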

I built a simple MVC 5 application and deployed it to an Azure Web Role (XS). Using the ATS .NET Storage Client Library 2.2, I built a simple page that writes 100 records to an ATS table, and another page that reads each one of the records from that same table. The average latency of each write and read operation is displayed. I based my application on the tutorials and walkthroughs available from Microsoft. The idea was to build it with the techniques as documented, without any tuning.
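
The measurement itself was nothing fancy – roughly along these lines (the EmployeeEntity type and the keys are illustrative):

using System.Diagnostics;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static double MeasureAverageReadMs(string connStr)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(connStr);
    CloudTable table = account.CreateCloudTableClient().GetTableReference("Employees");

    var watch = Stopwatch.StartNew();
    for (int i = 0; i < 100; i++)
    {
        // Point query by PartitionKey/RowKey - the fastest read path in ATS
        TableOperation retrieve = TableOperation.Retrieve<EmployeeEntity>("partition1", i.ToString());
        table.Execute(retrieve);
    }
    watch.Stop();
    return watch.ElapsedMilliseconds / 100.0;
}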

I did the same thing with Azure SQL, procuring the smallest database I could. I used plain old ADO.NET DataReaders – fast by design – to implement the operations.
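
For reference, the Azure SQL reads were shaped roughly like this sketch (table and column names are illustrative, and sqlConnStr is assumed to hold the connection string):

using System.Data.SqlClient;

public static void ReadEmployees(string sqlConnStr)
{
    using (var conn = new SqlConnection(sqlConnStr))
    {
        conn.Open();
        using (var cmd = new SqlCommand("SELECT Id, Name, Salary FROM Employees", conn))
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            // Forward-only, read-only cursor: minimal overhead per row
            while (reader.Read())
            {
                string name = reader.GetString(1);
            }
        }
    }
}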

For MongoDB, I launched an Extra Small VM running CentOS Linux: a 1 GHz CPU and 768 MB of RAM. MongoDB was deployed with default settings.

Naturally, all these resources were located in the same region (US West). This diagram summarizes the topology:

[Diagram: architecture of the test environment]

And…these are the results:

[Chart: average read/write latency per data store]

Yes, the performance of plain vanilla ATS is just disappointing. After some research I found blog posts with similar findings, which showed how to improve performance by turning off the Nagle algorithm before making the calls:

using System.Configuration;
using System.Net;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static void InsertRandomEmployeeData()
{
    string connStr = ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString;

    // For increased perf, turn off the Nagle algorithm
    // http://alexandrebrisebois.wordpress.com/2013/03/24/why-are-webrequests-throttled-i-want-more-throughput/
    ServicePointManager.UseNagleAlgorithm = false;

    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connStr);
    CloudTableClient client = storageAccount.CreateCloudTableClient();
    CloudTable table = client.GetTableReference("Employees");

    table.CreateIfNotExists();

    var emp = new EmployeeEntity(GenerateRandomInt(1, 10000), GenerateRandomString(0), GenerateRandomDouble(1.0, 100000.0));
    TableOperation insertOp = TableOperation.Insert(emp);
    table.Execute(insertOp);
}

The performance benefit is impressive – I wonder why this is not the default setting. Note how read operations were not affected by it.

The performance of Azure SQL operations was really good (under 10 ms on average), but the winner, as you can see, was MongoDB – impressive, with both operations under 2 ms!

Well, that was an eye-opener. It is pretty obvious what we are going to use for our next projects. Unfortunately, neither Azure nor Amazon Web Services offers a managed MongoDB service at this time, so I would need to set up and maintain my own set of VMs running MongoDB – not a big deal, but something I would need to pay for in addition to the storage.

Cheers, see you next time amigos.

Deploying an Entire Environment using Azure and PowerShell, Part 2

In a previous post, I detailed how to automate the creation of a standard multi-server environment using the IaaS capabilities in Azure. Over the last few days I had the opportunity to enhance these scripts a bit. This second part of the post describes the enhancements.

Basically, the original scripts followed this set of steps:

  1. Define environment variables
  2. Create the cloud service
  3. Create the storage account
  4. Create VMs 1..n

Pretty simple, uh? Yeah, but the resulting environment presented some issues:

  • Are the VMs created close enough to each other (same subnet)? Most probably not.
  • What if machines are created in the same rack (Fault Domain) and there is a hardware issue? The whole environment is gone!
  • What if we have multiple WFEs? We would need to load balance them.

All these issues are resolved by introducing some Azure concepts: Affinity Groups, Availability Sets, and Load Balancing. So I enhanced the scripts with them in mind, and the resulting steps are:

  1. Define environment variables
  2. Create a new Affinity Group (New-AzureAffinityGroup)
  3. Create the storage account (add it to the new Affinity Group)
  4. Create the cloud service (add it to the new Affinity Group)
  5. For each new load-balanced VM:
    1. Create the VM
    2. Add it to the cloud service
    3. Add it to the same availability set
    4. Create and attach disks as necessary
    5. Configure endpoints (firewall)
    6. Configure the load balancer and probe port

The resulting scripts are:

$ErrorActionPreference = "Stop"   # stop on any error

function GetLatestImage($family){
    $images = Get-AzureVMImage `
        | where { $_.ImageFamily -eq $family } `
        | Sort-Object -Descending -Property PublishedDate
    $latestImage = $images[0]
    return $latestImage
}

$myAzureSubscription = 'Windows Azure MSDN - Visual Studio Ultimate'

# Environment variables are defined here:
# ONLY LOWERCASE LETTERS HERE!!
$EnvironmentName = "azrtest"
$tag = Get-Date -Format 'hhmmss'
$StorageAccount = "vmstorage$EnvironmentName"

Write-Host $StorageAccount

$AzurePubSettingsFile = "C:\Windows Azure MSDN - Visual Studio Ultimate-12-19-2013-credentials.publishsettings"
# ExtraSmall, Small, Medium, Large, ExtraLarge, A6, A7
$VMSize = "Small"
# Region - East US, West US, East Asia, Southeast Asia, North Europe, West Europe
$Location = "Southeast Asia"
$AdminUserName = "admin2K"
$AdminPwd = "password2K"
$OSFamily = "Windows Server 2008 R2 SP1"

# Server names cannot be more than 15 chars
$WFE_1_Name = 'WFESrv1' + $tag
$WFE_2_Name = 'WFESrv2' + $tag
$APPSRV_1_Name = 'AppSrv1' + $tag
$DBSRV_Name = 'DBSrv' + $tag

$myDataDiskSize = 20  # in GB  # User-specified

# This must be unique
$CloudServiceName = "azrtest" + $tag
$Image = GetLatestImage($OSFamily)
$ImageName = $Image.ImageName

# Affinity Groups - keep machines 'closer' together
$myAffinityGroupName = $EnvironmentName + '-ag'  # User-defined
# Availability Sets - place resources on different HA Fault Domains
# One Availability Set is created per application tier. Not needed for the DB server
$myAvailabilitySetName_WFE = $EnvironmentName + 'wfe-as'  # User-defined
$myAvailabilitySetName_APPSRV = $EnvironmentName + 'appsrv-as'  # User-defined

$myEndpointName     = $EnvironmentName + '-ep'  # User-defined
$myLoadBalancerName = $EnvironmentName + '-lb'  # User-defined

# Config subscription (import the publish settings first, then select the subscription)
Import-AzurePublishSettingsFile $AzurePubSettingsFile
Select-AzureSubscription -Default $myAzureSubscription

# Step 1 - Create the Affinity Group
New-AzureAffinityGroup -Name $myAffinityGroupName -Location $Location

# Step 2 - Create the Storage Account inside the Affinity Group
# Remove-AzureStorageAccount -StorageAccountName $StorageAccount
New-AzureStorageAccount -StorageAccountName $StorageAccount -AffinityGroup $myAffinityGroupName

# Step 3 - Create the Azure Cloud Service inside the Affinity Group
New-AzureService -ServiceName $CloudServiceName -AffinityGroup $myAffinityGroupName

# Step 4 - Use the new storage account for this subscription
#Get-AzureStorageAccount | Select Label
Set-AzureSubscription -SubscriptionName $myAzureSubscription -CurrentStorageAccount $StorageAccount

# Step 5 - Create the VMs
# Create WFE #1
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $WFE_1_Name `
                  -AvailabilitySetName $myAvailabilitySetName_WFE `
                  -DiskLabel "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureEndpoint -Name          $myEndpointName `
                    -Protocol      tcp `
                    -LocalPort     80 `
                    -PublicPort    80 `
                    -LBSetName     $myLoadBalancerName `
                    -ProbePort     8080 `
                    -ProbeProtocol tcp `
                    -ProbeIntervalInSeconds 15 `
| New-AzureVM -ServiceName $CloudServiceName

# Create WFE #2
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $WFE_2_Name `
                  -AvailabilitySetName $myAvailabilitySetName_WFE `
                  -DiskLabel "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureEndpoint -Name          $myEndpointName `
                    -Protocol      tcp `
                    -LocalPort     80 `
                    -PublicPort    80 `
                    -LBSetName     $myLoadBalancerName `
                    -ProbePort     8080 `
                    -ProbeProtocol tcp `
                    -ProbeIntervalInSeconds 15 `
| New-AzureVM -ServiceName $CloudServiceName

# Create the App Server
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $APPSRV_1_Name `
                  -AvailabilitySetName $myAvailabilitySetName_APPSRV `
                  -DiskLabel "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureEndpoint -Name          $myEndpointName `
                    -Protocol      tcp `
                    -LocalPort     80 `
                    -PublicPort    80 `
                    -LBSetName     $myLoadBalancerName `
                    -ProbePort     8080 `
                    -ProbeProtocol tcp `
                    -ProbeIntervalInSeconds 15 `
| New-AzureVM -ServiceName $CloudServiceName

# Create the DB Server
# We do not need to load balance the DB server...
# It would be better to use a SQL Server image here..
New-AzureVMConfig -ImageName    $ImageName `
                  -InstanceSize $VMSize `
                  -Name         $DBSRV_Name `
                  -DiskLabel "OS" `
| Add-AzureProvisioningConfig -Windows `
                              -DisableAutomaticUpdates `
                              -AdminUserName $AdminUserName `
                              -Password      $AdminPwd `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk0' `
                    -LUN 0 `
| Add-AzureDataDisk -CreateNew -DiskSizeInGB $myDataDiskSize `
                    -DiskLabel 'DataDisk1' `
                    -LUN 1 `
| New-AzureVM -ServiceName $CloudServiceName

Enjoy!

Deploying an Entire Environment using Azure and PowerShell

The IaaS capabilities of Azure can be very handy when you need to create temporary development/test environments during the SDLC. Automating the creation and clean-up of these environments can save a lot of time and compute cost ($$).

Azure exposes a PowerShell-based interface to automate all the steps required to do this, and I spent some time researching how to do it properly. You will find many references and blog posts on how to create VMs using the PowerShell API in Azure; however, I did not find many updated, accurate references on how to do it for an entire environment – probably because the API has evolved so quickly that those articles are no longer relevant. The results are summarized in the following script, which demonstrates the creation of a standard deployment of an enterprise multi-tier environment (web front-end, application server and database server).

$ErrorActionPreference = "Stop"   # stop on any error

function GetLatestImage($family){
    $images = Get-AzureVMImage `
        | where { $_.ImageFamily -eq $family } `
        | Sort-Object -Descending -Property PublishedDate
    $latestImage = $images[0]
    return $latestImage
}

# Environment variables are defined here:
# ONLY LOWERCASE LETTERS HERE!!
$EnvironmentName = "azrtest"
# Create Storage Account through the Portal (vmstorageazrtest)
$StorageAccount = "vmstorage$EnvironmentName"

Write-Host $StorageAccount

$AzurePubSettingsFile = "C:\MyStuff\MyDrop\Dropbox\Personal\Windows Azure MSDN - Visual Studio Ultimate-12-19-2013-credentials.publishsettings"
$VMSize = "Small"
$Location = "Southeast Asia"
$AdminUserName = "admin2K"
$AdminPwd = "password2K"
$OSFamily = "Windows Server 2008 R2 SP1"

$server_A_Name = "WFEServer"
$server_B_Name = "DBServer"
$server_C_Name = "AppServer"

# This must be unique
$CloudServiceName = "vmstorageazrtest"
$Image = GetLatestImage($OSFamily)
$ImageName = $Image.ImageName

# Create Storage Account through the Portal
# IF NO StorageAccount exists, ONE IS CREATED HERE!
#New-AzureStorageAccount -StorageAccountName $StorageAccount -Location $Location -Label "azrtest"
# Remove-AzureStorageAccount -StorageAccountName $StorageAccount

# Config subscription
Import-AzurePublishSettingsFile $AzurePubSettingsFile
#Get-AzureStorageAccount | Select Label
Set-AzureSubscription -SubscriptionName "Windows Azure MSDN - Visual Studio Ultimate" -CurrentStorageAccount $StorageAccount

# Create Azure Cloud Service
New-AzureService -ServiceName $CloudServiceName -Location $Location

# Create the VMs - Windows
# You can create a new virtual machine in an existing Windows Azure cloud service,
# or create a new cloud service by using the Location parameter.
New-AzureQuickVM -Windows -ServiceName $CloudServiceName -Name $server_A_Name -ImageName $ImageName -Password $AdminPwd -AdminUsername $AdminUserName -Verbose
New-AzureQuickVM -Windows -ServiceName $CloudServiceName -Name $server_B_Name -ImageName $ImageName -Password $AdminPwd -AdminUsername $AdminUserName -Verbose
New-AzureQuickVM -Windows -ServiceName $CloudServiceName -Name $server_C_Name -ImageName $ImageName -Password $AdminPwd -AdminUsername $AdminUserName -Verbose

 

 

This sample script will create three VMs running “Windows Server 2008 R2 SP1”: a WFE, an App Server and a DB Server. All VMs will be grouped in the same Azure Cloud Service: $CloudServiceName.

If you plan to use it, make sure you edit the environment variables at the top of the script:

  • The Environment Name: $EnvironmentName
  • The storage account used to store the VHDs: $StorageAccount
  • Location of the Azure Publishing settings file: $AzurePubSettingsFile
  • Size of the VMs: $VMSize
  • Location – use Get-AzureLocation for a list of locations: $Location
  • Admin Username and Password – this is the local account you will use to remote into them: $AdminUserName and $AdminPwd
  • OS: $OSFamily

To shut down and clean-up the VMs created, you could use the following script:

# CleanUp

# Environment variables are defined here:
# ONLY LOWERCASE LETTERS HERE!!
$EnvironmentName = "azrtest"
$StorageAccount = "vmstorage$EnvironmentName"

Write-Host $StorageAccount

$AzurePubSettingsFile = "C:\MyStuff\MyDrop\Dropbox\Personal\Windows Azure MSDN - Visual Studio Ultimate-12-19-2013-credentials.publishsettings"
$VMSize = "Small"
$Location = "Southeast Asia"
$AdminUserName = "admin2K"
$AdminPwd = "password2K"
$OSFamily = "Windows Server 2008 R2 SP1"

$server_A_Name = "WFEServer"
$server_B_Name = "DBServer"
$server_C_Name = "AppServer"
# This must be unique
$CloudServiceName = "vmstorageazrtest"

# Config subscription
Import-AzurePublishSettingsFile $AzurePubSettingsFile
#Get-AzureStorageAccount | Select Label
Set-AzureSubscription -SubscriptionName "Windows Azure MSDN - Visual Studio Ultimate" -CurrentStorageAccount $StorageAccount

# Stop & remove the VMs
foreach ($serverName in @($server_A_Name, $server_B_Name, $server_C_Name)) {
    $vm = Get-AzureVM -ServiceName $CloudServiceName -Name $serverName
    if ($vm) {
        Stop-AzureVM -ServiceName $CloudServiceName -Name $serverName
        Remove-AzureVM -ServiceName $CloudServiceName -Name $serverName
    }
}

# Remove any existing Azure Cloud Service
$azureService = Get-AzureService -ServiceName $CloudServiceName
if ($azureService) {
    Write-Host "Cloud service: $CloudServiceName found! Deleting it.."
    Remove-AzureService -ServiceName $CloudServiceName -Force
}

# Remove the Storage Account (remove the 'vhds' container first)
Remove-AzureStorageContainer -Name vhds -Force
Remove-AzureStorageAccount -StorageAccountName $StorageAccount

Some aspects were not fully covered in this version of the script:

  • Networking: VMs will be able to talk to each other, but we currently do not have any control over the addressing assigned to them.
  • AD: VMs are created as standalone servers, not joined to an AD domain.

I hope you find this helpful and time-saving.

Beefing Up the Azure Platform

During PDC 2010 in Redmond, WA, Microsoft announced a bunch of improvements to the whole Azure platform, some of them desperately needed:

  • Support for the new Virtual Machine role, in addition to the existing Web and Worker roles. This allows IaaS-like scenarios, where you can build, configure and upload your own Windows Server 2008 R2 VMs as VHDs – quite similar to the AWS model. (Great!!!) In addition, the pricing model for the Windows Azure VM role is the same as the existing pricing model for Web and Worker roles.
  • Enhancements to the Web and Worker roles, with the introduction of Elevated Privileges and full IIS support!!! – so we can now have multiple IIS sites per Web role and the ability to install IIS modules. (Cool!!)
  • Windows Azure will also provide Remote Desktop functionality, which enables customers to connect to a running instance of their application or service in order to monitor activity and troubleshoot common problems. So basically, your Azure compute instances are no longer black boxes. (Finally!!!! OMG, I am going to cry…)
  • The introduction of an Extra Small Windows Azure instance – great!! Now you can configure an instance to run low-priority Worker Roles or admin apps without ruining your budget:
    Compute Instance Size | CPU     | Memory | Instance Storage | I/O Performance | Cost per hour
    Extra Small           | 1.0 GHz | 768 MB | 20 GB            | Low             | $0.05
  • A range of new networking functionality was introduced under the Windows Azure Virtual Network name. Windows Azure Connect (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Virtual Network feature to be made available, as a CTP later this year. With this, you can establish a VPN between your on-premises servers and your cloud machines. Much needed for some enterprise scenarios.
  • The Windows Azure portal will also be improved with Silverlight technologies, and with access to new diagnostic information, including the ability to click on a role to see its type and deployment time. (Finally, for god sake!!!)
  • A much-needed update to the pretty basic Database Manager for SQL Azure (formerly “Project Houston”) was also announced.

Let’s hope these enhancements are released as soon as possible.

Large DBs in SQL Azure

Recently, one of my customers asked me this question: “Based on the updated SQL Azure plans, the maximum database size is now 50 GB. What if my DB requires more storage?”

The first recommendation would be: measure how your DB is growing and, if possible, keep only the most relevant information there – SSIS is a great option for downloading all that historic data to your on-premises servers. Another option is Data Sync. Some good articles on measuring your DB size are:

How to Tell If You Are Out of Room (SQL Azure Team Blog, MSDN)

Calculating the Size of Your SQL Azure Database

Well, according to Microsoft, 50 GB is the maximum size, and if you need more space you will need to partition your data (either horizontally or vertically). Unfortunately, SQL Azure won’t help you much with this, and you will need to make some changes in your app logic to handle it. This should be done in your Data Access Layer, and – let me warn you – it will not be an easy process to implement. The following articles give some insight into the details and limitations of this process, and a small sketch of the idea follows them:

SQL Azure Horizontal Partitioning, Part 2 (SQL Azure Team Blog, MSDN)

Scaling Out with SQL Azure (TechNet Wiki)
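
To make the DAL impact concrete, here is a minimal sketch of horizontal partitioning by key range; the shard map, ranges and connection strings are illustrative assumptions, not an official SQL Azure feature:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class ShardedDal
{
    // Hypothetical shard map: each SQL Azure database owns a range of customer IDs
    private static readonly SortedList<int, string> shards = new SortedList<int, string>
    {
        { 499999, "Server=tcp:shard0.database.windows.net;Database=Customers0;..." },
        { 999999, "Server=tcp:shard1.database.windows.net;Database=Customers1;..." }
    };

    public static SqlConnection GetShardConnection(int customerId)
    {
        // Route every query to the single database that owns this key range
        foreach (KeyValuePair<int, string> shard in shards)
        {
            if (customerId <= shard.Key)
                return new SqlConnection(shard.Value);
        }
        throw new ArgumentOutOfRangeException("customerId");
    }
}

Every data access call then has to be routed through something like GetShardConnection, and any query or transaction that spans shards has to be handled by your own code – that is where the real work is.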

Windows Azure… One Year After

We have been using Windows Azure for almost a year, hosting our portal (Hoteles.com.co), and the results have been quite good in general. Compared to the rest of the PaaS offerings in the market, this is a great solution if your expertise is around the Microsoft stack – especially .NET and SQL Server.

For v2 we are planning to leverage more of the Azure platform, especially Azure Storage. We will be serving hotel images directly from it, which could improve the already good response times we have right now. Activating the CDN for this would bring some additional benefits as well.

The other good experience (should I say, the most important for us?? 😉 ) has been around costs. After some changes, like moving the Content Admin application out of Azure to my regular hosting provider, we are now paying an average of 60 USD per month. Not bad for a reliable and fast platform like this with access to data in SQL Server.

However, Azure is a new platform, and although it provides a good set of basic services, it still lacks some important services, some of which are available in other PaaS offerings (like AWS), such as:

  • An out-of-the-box UI in the Azure portal that enables admins to monitor the load on the instances (in terms of CPU, RAM, disk access, etc.) – pretty much the functionality offered by really good tools like Azure Diagnostics Manager by Cerebrata. Why is this important? Because either you or the platform needs to make decisions based on the load. Should we allocate another instance to accommodate an increase in traffic? This is the basis of the elasticity paradigm. The Azure portal should bring some support for defining such rules, plus the capability to define the number of instances per day of the week (e.g., what if the solution expects more traffic during the weekend?).
  • Traffic statistics reporting in the Azure portal. We are currently handling this through Google Analytics, but it would be great to have it integrated in the Azure portal.
  • Access to some “Event Log” window in the Azure portal with diagnostic information. Sometimes your application has a problem and does not start, but you cannot get the error info – it is like flying blind.

The good thing is that Azure is really strategic for Microsoft, so I expect to see this functionality become part of the service shortly.

Dissecting Azure Clouds

What are the challenges in porting your existing ASP.NET/SQL Server applications to Azure?


If you plan to use SQL Azure, migrate your DB to Azure first. Below you will find some tips for performing this process. Yes, you will find lots of things that do not work on SQL Azure. The good thing is that moving the ASP.NET app will be way easier… Once the DB is migrated, connect your local application to the DB instance in SQL Azure. This way you can debug your application and find additional problems. Remember that you can get the connection string through the SQL Azure portal; a sketch of what this looks like from code follows. Test your app. Does everything work? Then move your ASP.NET app to Azure and publish it. Congratulations, you are now connected to the Cloud!
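
For reference, connecting your local app to SQL Azure looks roughly like this sketch (server, database and credentials are placeholders; note the user@server login format SQL Azure expects):

using System.Data.SqlClient;

public static void TestSqlAzureConnection()
{
    // Placeholder values - take the real connection string from the SQL Azure portal
    string connStr = "Server=tcp:yourserver.database.windows.net;" +
                     "Database=MyAppDb;" +
                     "User ID=myuser@yourserver;" +   // SQL Azure expects the user@server format
                     "Password=...;Encrypt=True;";

    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();   // if this works from your local app, the migrated DB is reachable
    }
}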

Here are a few points I found while migrating my ASP.NET MVC app:

  1. There is no support for Session Affinity (Azure is stateless) – Azure load balancing does not support session affinity, hence an existing web application has to be changed if it relies on it.
  2. If you get this error debugging your app in Azure: "Windows Azure Tools: Failed to initialize the Development Storage service. Unable to start Development Storage. Failed to start Development Storage: the SQL Server instance ‘localhost\SQLExpress’ could not be found. Please configure the SQL Server instance for Development Storage using the ‘DSInit’ utility in the Windows Azure SDK."
    It is because the Dev Store points to a named SQLExpress instance; if you are using full SQL Server like me, you need to do what the error message says.
    Go to where the devstore is installed, i.e.
    C:\Program Files\Windows Azure SDK\v1.0\bin\devstore
    and type
    dsinit /sqlinstance:.
    Take note of the ".", which indicates your current default unnamed SQL Server instance.
    You will then be prompted with a screen informing you that the installation was successful and the development storage is ready for use.
    You can now start the Development Storage service.
  3. If you get a lovely "403 – Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied.", it is because you hit the directory instead of a page. Check this page out to see how to set a default document in the web.config (web roles don’t automatically load a default.aspx like you might expect – you have to set it): http://blogs.msdn.com/rakkimk/archive/2007/05/25/iis7-how-to-configure-the-default-document-of-the-website-in-its-web-config.aspx
  4. If your ASP.NET app is based on MVC, ensure that the System.Web.Mvc assembly is included in the service package that you deploy to Windows Azure. To do this, for a Visual C# project, expand the References node in Solution Explorer for the MVCAzureStore project, right-click the System.Web.Mvc assembly and select Properties. Make sure the Copy Local option is set to True.

 

Here are some tips for moving your DB to SQL Azure:

  1. SQL Azure Database exposes a Tabular Data Stream (TDS) endpoint for databases hosted in the cloud. TDS is the same network protocol that on-premises SQL Server uses; therefore, a desktop client application can connect to SQL Azure Database the same way it connects to an on-premises SQL Server instance.
  2. You won’t be able to connect to your remote SQL Azure DB using SQL Server Management Studio (SSMS) 2008. I guess this will be supported in R2. Currently you can only connect to a specific DB using a “Script Window”. There is no support for “USE <DB>”, though.
  3. Remember that SQL Azure only supports a subset of the services provided by your on-premises SQL Server (check out this list of supported SQL commands, or Unsupported Transact-SQL Statements (SQL Azure Database) – MSDN).
  4. There is no GUI-based admin tool, so you will need to create everything (users, logins, DBs, tables) using SQL commands and scripts. There are a couple of community-provided GUI tools that enable basic operations on your DBs. Check out SQL Azure Manager.
  5. If you want to migrate a DB from SQL Server 2008 to SQL Azure and you expect to find some “Attach DB” or “Restore Backup”… forget it. You will need to use your good old Bulk Copy/BCP, INSERT scripts or SQL Server Integration Services (SSIS). If you want to give SSIS a try, this could help you. Some other handy tools:
    1. I just found a plug-in called SQL Azure Data Sync Tool for SQL Server, but I have not tried it.
    2. SQL Server Migration Wizard http://www.azuresupport.com/2009/12/sql-azure-introduction/2/ . I ended up generating INSERT scripts using this tool.
  6. Be aware of some deprecated features while moving your DB structure and data using SQL scripts:
  • ‘ANSI_NULLS’ is not a recognized SET option.
  • Deprecated feature ‘SET ANSI_PADDING OFF’ is not supported in this version of SQL Server.
  • Deprecated feature ‘More than two-part column name’ is not supported in this version of SQL Server. This is a significant change if you’re using schemas.
  • Deprecated feature ‘Data types: text ntext or image’ is not supported in this version of SQL Server.
  • Deprecated feature ‘Table hint without WITH’ is not supported in this version of SQL Server.
    A full list is found at Deprecated Database Engine Features in SQL Server 2008 – MSDN.

 

See you up there!