About Network Testing Labs

Contact Network Testing Labs
Independent Reviews of Network Hardware and Software



SQL Server 2012



Microsoft’s latest database version offers new data analysis features, greater uptime and some performance improvements. It also costs more, imposes extra burdens on administrators and needs tons of bandwidth.
By Barry Nance

Some people will like SQL Server 2012’s new features a lot. Other people? Not so much.

While SQL Server 2008 was little more than a service pack that Microsoft charged for, the 2012 version’s Standard, Business Intelligence and Enterprise Editions have many significant new features. However, most of the new features are reserved for the Enterprise Edition.

SQL Server 2012’s major new features are Business Intelligence improvements that help people analyze data, an AlwaysOn availability and uptime enhancement, Contained Databases for managing databases as a group and a quick-to-respond-to-queries ColumnStore Index.

Microsoft has abandoned the per-CPU licensing model with the new release. Instead, it has adopted a per-core model.

For earlier SQL Server versions, you bought one license per physical processor regardless of how many CPU cores it had. If you chose your server hardware smartly, you could buy eight CPU cores for the cost of one SQL Server license and save enough in licensing fees to pay for the new server.

To license SQL Server 2012 for that same server, you’ll need eight core licenses. Yes, the new core license fees are less than the previous per-CPU fees. A little math, however, reveals that Microsoft has conspicuously increased SQL Server’s price.

If you’re still using SQL Server 2000, Microsoft has no upgrade path for you to SQL Server 2012. You will first have to migrate your databases to SQL Server 2005 or 2008.

Business Intelligence
SQL Server 2012’s Business Intelligence improvements essentially let users view a database as a spreadsheet. Users can program sophisticated spreadsheet formulas and reports that operate directly on database contents. A user can, for example, program a new database report via these spreadsheet operations and then take a notebook computer running the new report (and connected wirelessly to the database server) into a meeting. The attendees can watch the report update in real time as database contents change. Note, though, that if the meeting room computer has no database server network connection (or loses it), the report display freezes to show whatever data was current when the computer was last connected.

Business Intelligence is a godsend for companies whose corporate policies allow (or encourage) users to program their own spreadsheets. BI is anathema for companies that want to control ad hoc manipulation of databases – and the decisions that ensue from such manipulation.

In companies that embrace Business Intelligence, network (and database) administrators will see their workloads blossom. As we tested Business Intelligence in the lab, we saw this effect firsthand. Extrapolating our results across a large company, we estimate that the unbridled use of SQL Server 2012’s Business Intelligence feature will likely increase administrator workload by 10 to 25 per cent.

AlwaysOn
Think of AlwaysOn as database mirroring in which the secondary (substitute) server can be an active, already-in-use SQL Server 2012 instance. The secondary server takes up the slack when a primary instance fails. Because the substitute server may not have the horsepower of the primary server, and because it’s also doing other work, response times may slow dramatically. But the application blithely carries on without suffering an outage. The mirror doesn’t have to be a standby server that sits idle until failover time.

Earlier SQL Server versions offered essentially two approaches to high availability. You could configure SQL Server to perform log shipping, which instructed the failover server to replicate the primary server, or you could use clustering to cause a standby server to assume the role of primary server upon failover. Both approaches have their limitations. Failing over an individual database can take time, during which the database is unavailable. Cluster-based failover is costly because the extra servers do no work until the primary server fails.

SQL Server 2012’s AlwaysOn feature borrows the concept of Database Availability Groups from Exchange Server 2010. AlwaysOn, however, implements the concept with a somewhat different architecture.

Unfortunately, AlwaysOn uses a great deal of bandwidth. In tests involving 50 clients feeding an Online Transaction Processing (OLTP) SQL Server 2012 database with an average of 20 transactions per second, AlwaysOn’s data replication and inter-server coordination more than doubled network utilization, from 22% to 47%.

SQL Server 2012 has other high availability enhancements. For the many applications that access multiple databases concurrently, SQL Server 2012 offers Availability Groups. You assign multiple databases to an Availability Group and, when a server dies, all the databases fail over as a cohesive unit. Availability Groups are particularly useful for transferring database accesses from a primary site to a remote site if a primary site suffers a catastrophic disaster. You can also set up multiple Availability Group assignments for a single SQL Server 2012 instance. If disaster strikes, AlwaysOn will divide up the database retrievals and updates across the multiple servers you’ve designated in your disaster plan. A single database superserver can thus fail over to several lesser-horsepower machines. Your standby servers don’t have to be expensive, idle-most-of-the-time copies of the primary.
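Creating an Availability Group is a T-SQL operation. The sketch below is illustrative only: the group, server, endpoint and database names are hypothetical, and it assumes the Windows failover cluster and database mirroring endpoints are already in place.

```sql
-- Sketch only: SalesAG, SalesDb/OrdersDb and the server names are
-- hypothetical; the mirroring endpoints must already exist.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDb, OrdersDb      -- these databases fail over as one unit
REPLICA ON
    'PRIMARY01' WITH (
        ENDPOINT_URL      = 'TCP://primary01.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    'SECONDARY01' WITH (
        ENDPOINT_URL      = 'TCP://secondary01.example.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,  -- e.g. a remote DR site
        FAILOVER_MODE     = MANUAL);
```

Listing both databases in the FOR DATABASE clause is what makes them fail over together as a cohesive unit.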

The Availability Group concept worked well in the lab. When we “pulled the plug” on a database server, our simulated online transaction processing application kept running normally, completely unaware that it was accessing a different server.

Note that you’ll have to make separate arrangements for the application itself and for any other system components and data files that the application relies on. In that vein, be aware that many bright minds have created high availability mechanisms that protect more than just the database server. For example, CA’s ARCserve High Availability can perform sophisticated failovers for all of an application’s computing resources, and it supports Unix and Linux. It can restart a crashed background process (i.e., Windows Service), if that’s the cause of the problem. And it offers push-button failover and failback for the highest possible level of availability. ARCserve High Availability supports DAS, NAS and SAN, and, like SQL Server 2012, it can replicate data in a variety of ways – one-to-one, one-to-many and many-to-one. Unlike SQL Server 2012, ARCserve High Availability offers bandwidth tuning/throttling as well as data compression to use the network more frugally.

Another convenient, impressive and practical new SQL Server 2012 feature is replication to a Read-only Secondary. By copying database changes to the read-only secondary in a way that assures the integrity of related database contents in the secondary database, SQL Server 2012 makes backing up an active, in-use database painless and quick … you simply make periodic backup copies of the read-only secondary database, not the primary. If the read-only secondary is on a separate server, you even avoid using database server CPU and memory during the backup process. Furthermore, read-only secondaries become excellent candidates as the basis for data analysis and reporting, even while the primary database is actively in use. We liked read-only secondaries a lot.
SQL Server 2012’s new FileTable concept was somewhat less impressive, but only because we couldn’t think of a good, practical use for it. FileTable associates an NTFS file system directory with a database table. Any file you put in the directory appears in the database, and SQL Server 2012 reflects in the database any changes you make to a file. Backing up the database also backs up files in the associated directory. If you have ancillary data files that bear a critical relationship to the contents of a database and you want to back up the database plus the ancillary files as a consistent single unit, FileTable may be for you.

Contained Databases
Before SQL Server 2012, migrating a database meant much more than just copying database files. You also had to set up or at least synchronize database login user IDs, ensure that collation (i.e., the sort order to be used for each character set as well as the code page used to store non-Unicode character data) was configured the same for the two databases, verify compatibility levels, migrate scheduled jobs and do other tasks to manage database-related data not stored directly in the database files. SQL Server 2012’s Contained Databases feature makes database migration a bit easier by storing the collation setting and the database login user IDs within the database. You no longer have to synchronize database login IDs between the old server and the new one. However, you still have to worry about other database-related configuration steps, such as setting up scheduled jobs on the new server.

ColumnStore Indexes
SQL Server 2012’s ColumnStore Index stores data for columns you designate and then joins those database columns to give you a read-only, column-based index into the data (traditional indexes are row-oriented, storing data for each row and then joining those rows to complete the index).

Microsoft claims ColumnStore Index speeds up data retrieval by a factor of ten. Our tests confirmed the claim: we measured retrieval speedups of at least 10x, and sometimes considerably more (12x, 15x and even 20x).

The big drawback to ColumnStore Indexes is their read-only status, which makes them useful only for queries in data warehouses with huge databases. OLTP databases and ColumnStore Indexes are, by their nature and almost by definition, mutually exclusive.

Even in a data warehouse milieu, frequently loading new data into read-only tables can be quite a hassle. Microsoft describes a workaround for the read-only problem by having you switch out table partitions in your data warehouse tables. If you are desperate for better performance, the workaround might be acceptable. Alternatively, you might opt to use SQL Server 2012’s read-only secondary feature to manage the database copies you use for analysis and reporting.

Speaking of indexes – SQL Server 2012’s improvements in online re-indexing are a welcome relief to administrators who from time to time have to re-index a database. SQL Server 2005 touted an online re-indexing feature, but the earlier version’s fine print mentioned that the indexing didn’t work for all data types (the problem types were varchar(max), nvarchar(max), varbinary(max) and XML). SQL Server 2012 removes the restriction so that administrators can have true online index maintenance for applications that are supposed to be online and available 24/7.

We don’t want to be excessively greedy, but next we’d like to see in SQL Server an ability to re-index individual table partitions online. We have a few other issues, as well. Missing from SQL Server 2012 is any significant use of PowerShell. Other than a few cmdlets for AlwaysOn and some backup/restore functions, SQL Server 2012 has no reliance on PowerShell. With the emphasis Microsoft is putting on PowerShell, we found the omission disappointing.

Ironically, the SQL Server 2012 installation process uses PowerShell. As with the current version of virtually every other Microsoft server product, Windows PowerShell 2.0 is a requirement for deploying SQL Server 2012.

We were also disappointed by the lack of improvements to SQL Server Management Studio (SSMS). Yes, Microsoft has given SSMS a Visual Studio 2010 makeover, which means you get better snippet management as well as integration with Team Foundation Server, but SQL Server 2012 offers no new DBA management tools. For instance, we would like to have seen better multi-server management and reporting features as well as some use of PowerShell in SSMS.

Net Results
SQL Server 2012
Microsoft Corp.

Standard Edition: $1,793/core, or $898/server plus $209/client
Business Intelligence: $8,592/server plus $209/client
Enterprise Edition: $6,874/core

Pros: SQL Server 2012 is faster, it offers greater availability/uptime and it makes database migration simpler and easier

Cons: Business Intelligence opens a Pandora’s Box of workload issues, AlwaysOn increases network traffic dramatically, ColumnStore Indexes are read-only and SQL Server 2012 lacks new administrative tools

SQL Server 2012’s many new features (some of which, like programming language enhancements, we haven’t even touched on) are a good reason to upgrade. There’s something to like for nearly everyone.

Just be aware that the new version costs more, will likely increase administrator workloads and might use quite a bit more bandwidth than earlier SQL Server versions.


Features 30%
Performance 40%
Ease of Use 20%
Documentation and Installation 10%
Total Score
Testbed and Methodology
Virtually all our testing took place across 512 kb/s frame relay, T1 and T3 WAN links. The testbed network consisted of six Fast Ethernet subnet domains linked by Cisco routers.

Our lab's 50 clients consisted of computing platforms that included Windows 2000/2003/XP/Vista/Win7, Macintosh 10.x and Red Hat Linux (both server and workstation editions).

The network contained two Web servers (Microsoft IIS and Apache), three e-mail servers (Exchange, Notes and Sendmail) and several file servers (Windows 2003 and Windows 2008).

Our virtual computing environments consisted of VMware, XenServer and Microsoft Hyper-V.
A Compaq ProLiant ML570 computer, with four 900MHz CPUs, 4GB of RAM and 1.3TB of disk storage, running Windows Server 2008, was our test platform for SQL Server 2012. A Dell PowerEdge server simulated our remote backup site for disaster recovery.

We exercised most of SQL Server 2012’s new features, focusing primarily on Business Intelligence, AlwaysOn, Contained Databases and ColumnStore Index.

Copyright 2012 Network Testing Labs
