MySQL 5.5 and Best Performance on a MySQL-Based Column Database

Some technologies arrive on the information technology landscape and stay, providing long-lasting benefits, whereas others are short-term fads that ultimately disappear because the value they supplied was too niche-oriented or because they were quickly supplanted by a better technology. Recently, articles, blogs, analyst reports, and other media outlets have been noting the rise of column-oriented databases in data warehousing, analytics, and other business intelligence/read-intensive situations. And on the MySQL front, there are now a couple of column databases available for you to use.

Are column-oriented databases a technology that is destined to stay and provide long-term benefits or will it be relegated to the forgotten pile of other software that came on the scene quickly and then disappeared?

Let’s look at three key questions that are consistently asked of column-oriented databases and see how the technology stacks up:

  1. How do column-oriented databases work?
  2. Do column-oriented databases really make a difference?
  3. What learning curve (application/database development, etc.) is involved with column-oriented databases?

How Do Column-Oriented Databases Work?

All the legacy relational databases on offer today were primarily designed to handle online transactional processing (OLTP) workloads. A transaction (e.g. an online order for a book through Amazon or another Web-based book dealer) typically maps to one or more rows in a relational database, and all traditional RDBMS designs are based on a per-row paradigm. For transaction-based systems, this architecture is well suited to handle the input of incoming data.

However, for applications that are very read intensive and selective in the information being requested, the OLTP database design isn’t a model that typically holds up well. Whereas transactions are row-based, most database queries are column-based. Inserting and deleting transactional data are well served by a row-based system, but selective queries that are only interested in a few columns of a table are handled much better by a column-oriented architecture. On average, a row-based system does 5-10x the physical I/O that a column-based database does to retrieve the same information. Taking into account that physical I/O is typically the slowest part of a query, and that an analytical query typically touches significantly more rows of data than a typical transactional database operation, the performance gap between row-oriented and column-oriented architectures oftentimes widens as the database grows.

To get around their selective query inefficiencies, row-based RDBMS’s utilize indexing, horizontal partitioning, materialized views, summary tables, and parallel processing, all of which can provide benefits for intensive queries, but each comes with its own set of drawbacks as well. For example, while indexes can certainly help queries complete faster in some cases, they also require more storage, impede insert/update/delete and bulk load operations (because the indexes must be maintained along with the underlying table), and can actually degrade performance when they become heavily fragmented. Moreover, in business intelligence/analytic environments, the ad-hoc nature of such scenarios makes it nearly impossible to predict which columns will need indexing, so tables end up either over-indexed (which causes load and maintenance issues) or not properly indexed, and many queries end up running much slower than desired.

Those not familiar with column-oriented databases might wonder exactly what they are and what actual benefits they deliver over a legacy RDBMS. It’s important to note that, on the surface, a column-oriented database appears exactly like a traditional relational database: the logical concepts of tables and rows are the same, SQL commands are used to interact with the system, and most other RDBMS paradigms (e.g. security, backup/recovery, etc.) remain unchanged.

But a column-oriented database specifically designed for analytics overcomes the query limitations that exist in traditional RDBMS systems by storing, managing, and querying data based on columns rather than rows. Because only the columns a query needs are accessed rather than entire rows, I/O activity as well as overall query response time can be reduced. In other words, if you don’t have to read an entire row to get the data you need, why do it?
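
As a quick illustration, consider a hypothetical wide fact table (the table and column names below are made up). A query such as the following touches only two columns, so a column store reads just those two columns from disk, while a row store must read every qualifying row in its entirety:

-- Only region and sale_amount are read by a column-oriented engine;
-- a row-oriented engine reads all columns of every qualifying row.
SELECT region, SUM(sale_amount)
FROM sales_facts
GROUP BY region;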

The end result for column databases is the ability to interrogate and return query results against either moderate amounts of information (tens or hundreds of GBs) or large amounts of data (one to many terabytes) in much less time than standard RDBMS systems can.

[Figure: column-oriented vs. row-oriented storage]

The good news for those of you who use MySQL is that its storage engine architecture allows column database vendors to plug their technology into MySQL and voila! You now have at your disposal a powerful alternative to other MySQL engines, one that can really tackle serious data needs.

Do Column-Oriented Databases Really Make a Difference?

So column databases look pretty good from a technical blueprint perspective, but do they really walk the talk in the real world? If they do, their impact will be substantial because, in the end, the back-end database used for BI or read-intensive work is the #1 overall contributor to a well-running system.

The Data Warehouse Institute (TDWI) did a recent study and found that (not surprisingly…) the most important component in a business intelligence implementation was the database server itself.

[Figure: TDWI survey results on the most important components of a BI implementation]

Further, TDWI found that nearly half of those it polled are ready to replace the database they use for business intelligence applications with another, more modern alternative. When asked for the technical reasons behind the replacement, the number one answer was the inability of the legacy RDBMS to service queries in the time needed:

[Figure: TDWI survey results on reasons for replacing a BI database]

Interesting information, but do column databases really have the capability to ease the pain these folks talk about?

As an example of how a column-oriented database can outperform a legacy RDBMS, Calpont recently commissioned a well-known data warehouse industry expert, Bert Scalzo, to benchmark a leading row-based database (one he has many years of experience tuning for fast performance) against Calpont’s InfiniDB Server (Community Edition), which has a column-oriented design as one of its core features. A Star Schema-style benchmark was conducted on two different machines to gauge performance on both mid-sized and large servers. The mid-sized server had 8 CPUs, 8GB of RAM, and 14 SATA 7200 RPM drives in RAID-0 with no cache; the large server had 16 CPUs, 16GB of RAM, and 14 SAS 15K RPM drives in RAID-0 with a 512MB cache. Both were running 64-bit CentOS 5.4. The raw database size was 2TB.

As can be seen in the graphs below, various configurations were used for the row-based database; however, no matter the configuration, the column-oriented InfiniDB database consistently beat the legacy database in storage footprint, load time, and query speed:

[Figures: database object size, load test, query time, and run time comparisons]

In summary, the column database saves on storage costs, supplies faster access to new/incoming data, and runs queries much faster than its row-based competitor.

Notice also that, in addition to producing overall faster query speeds, InfiniDB supplied much better predictability in terms of query time. Whereas the row-based database produced wildly varying minimum and maximum query times over the various runs, the column database had a far more tightly grouped set of response times. This translates into much better dependability from a business standpoint in ensuring that BI reports and queries meet whatever service-level agreements are imposed by business users.

Lastly, whereas the row-based database had been maxed out performance-wise in the benchmark tests, a user who wanted even faster performance from InfiniDB could move from the Community Edition to the Enterprise Edition, which supports massively parallel processing (MPP), and cut the query times roughly in half with the addition of a new node. They could halve their query times again by adding two more nodes (for a total of four) and continue in MPP fashion with more nodes until they reach whatever final query times they desire.

Additional Considerations

In addition to better performance, the column-orientation aspect of column databases supplies a number of useful benefits to those wishing to deploy fast business intelligence databases.

First, there is no need for indexing as with traditional row-based databases. The elimination of indexing means: (1) less overall storage is consumed, because indexes in legacy RDBMS’s often balloon the storage footprint of a database to double the initial data size or more; (2) data loads are faster, because no indexes need to be maintained; (3) ad-hoc DML operations are faster, because no index updates are performed; (4) no indexing design or tuning work is imposed on the database IT staff.

Second, there is far less design work forced on database architects when column databases are used. The need for complicated partitioning schemes, materialized view or summary table designs, and other such work is completely removed because column databases need none of these components to achieve superior query performance.

What Learning Curve Is Involved with Column-Oriented Databases?

You’ll be pleased to find that the learning curve associated with moving from legacy, row-based RDBMS’s to a column database is very small, if not completely non-existent. Unlike other databases that came on the scene in prior years, which required either different programming paradigms (e.g. object-oriented databases) or learning new design methodologies and database access languages (e.g. OLAP databases), column databases look and handle just like standard relational databases. They use the same ANSI-standard SQL language and security methods, and require no development paradigm changes.

In fact, column databases actually lessen the burden on both the development and administration staff because they do away with the need for indexing exercises, data partitioning schemes, supplementary object designs (e.g. materialized views), and other similar tasks. The ease-of-use factor, therefore, is greater with column databases than it is with traditional RDBMS’s. Moreover, they do not require as much specialized in-house expertise to build highly performant systems.

Conclusions

In the end, the answer as to why you should consider a column database over a legacy RDBMS comes down to the fact that column databases do indeed make a big impact in how data warehouses, BI databases, and read-intensive systems perform. This makes column databases a good choice for today as well as a technology whose benefits will extend many years down the road.

To download the Community Edition of InfiniDB that uses MySQL as its front end, as well as free documentation, go to http://www.infinidb.org/downloads. To obtain a trial of InfiniDB Enterprise, please visit: http://www.calpont.com/.

An Introduction to MySQL 5.5

What’s New in MySQL 5.5

It’s been a busy year for MySQL. Perhaps you’ve heard. Here are some recent improvements to the speed, scalability, and user-friendliness of the MySQL database and the InnoDB storage engine that we think deserve their own headlines. Now is a great time to beta test the 5.5 release and give feedback to the MySQL engineering team.

Improved Performance and Scalability

InnoDB Becomes Default Storage Engine
MySQL sometimes gets knocked about features such as ACID-compliant transactions, foreign key support, and crash recovery. These features are strongest in the InnoDB storage engine, but MyISAM has always been the default, so new users could get the wrong impression. Starting in MySQL 5.5, InnoDB is the default storage engine, so everyone can see this reliability and stability out of the box. As a bonus, the level of InnoDB in MySQL 5.5 is InnoDB 1.1, a rearchitected InnoDB with many performance and scalability features over and above the built-in InnoDB in 5.1 and before. (Since we are unifying the InnoDB within MySQL using the best and fastest technology, we are phasing out the Plugin versus Built-In distinction; MySQL 5.5 comes with the latest and greatest InnoDB 1.1.) Read more about the latest InnoDB enhancements below.
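
A quick way to confirm the new default on a 5.5 server (the table name t1 below is just an example):

SHOW VARIABLES LIKE 'default_storage_engine';
-- A table created without an ENGINE clause now uses InnoDB:
CREATE TABLE t1 (id INT PRIMARY KEY);
SHOW CREATE TABLE t1;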

Better Metadata Locking within Transactions
If a table is referenced within a transaction, no other transaction can perform DDL such as DROP TABLE or ALTER TABLE until the first transaction commits. Previously, the lock was released at the end of a statement rather than the whole transaction. Read more about metadata locking within transactions.
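
A minimal sketch of the behavior, using two sessions and a hypothetical table t:

-- Session 1:
START TRANSACTION;
SELECT * FROM t;   -- acquires a metadata lock on t

-- Session 2:
DROP TABLE t;      -- in 5.5, blocks until session 1 commits or rolls back

-- Session 1:
COMMIT;            -- session 2's DROP TABLE now proceeds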

Improved Performance and Scale on Win32 and Win64
If your company uses Windows by itself or in a mixed environment, you probably want to deploy MySQL databases on Windows. To make that a reality, the MySQL team has incorporated a number of Windows-specific features for speeding up and scaling up.

  • Windows API calls for much of the I/O done inside MySQL (a community contribution, hat tip to Jeremiah Gowdy).
  • Ability to build engines and other plugins as DLLs on Windows.
  • Network support for auto-detecting the MAC address (a community contribution, hat tip to Chris Runyan).
  • Much cleanup and simplifying of threading code.

Improved Availability

Semi-Synchronous Replication
This feature improves the reliability of failover, to avoid failing over to a slave that is missing some committed changes from the master. You can choose to have commits on the master node wait until at least one slave has logged the relevant events for the transaction. The feature is called semi-synchronous because the master does not wait for all the slaves to acknowledge, and there is a protocol to avoid the master waiting too long if the slaves fall behind. Read more about semisynchronous replication.
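
Semisynchronous replication ships as a pair of plugins. A minimal setup sketch (the .so file names are for Linux builds; Windows builds use .dll):

-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- milliseconds to wait before reverting to asynchronous

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;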

Replication Heartbeat
In replication, the heartbeat is a message sent at regular intervals from a master node to the slave nodes, and you can configure the heartbeat period. If the message is not received in time, the slave knows that its connection to the master has failed. You can now avoid spurious relay log rotation when the master is idle, rely on a more precise failure detection mechanism, and get an accurate estimate of how many seconds the slave is behind the master. (This is a different feature than Linux heartbeat, which is a similar health-checking system for cluster nodes.) To use this feature, you issue commands like:

-- Run on the slave, with the slave stopped:
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 10;  -- in seconds; fractional values give millisecond resolution
SHOW STATUS LIKE 'slave_heartbeat_period';
SHOW STATUS LIKE 'slave_received_heartbeats';

Improved Usability

SIGNAL/RESIGNAL
The SIGNAL and RESIGNAL statements allow you to implement familiar exception-handling logic in your stored procedures, stored functions, triggers, events, and database applications that call those things. SIGNAL passes execution back to an error handler, like THROW or RAISE statements in other languages. You can encode the error number, SQLSTATE value, and a message in a consistent way that can be interpreted by an error handler in the calling program. RESIGNAL lets you propagate the exception after doing some amount of error handling and cleanup yourself. With RESIGNAL, you can pass along the original error information or modify it. Read more about SIGNAL/RESIGNAL.
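
Here is a minimal sketch of the flow; the procedure name, error number, and business rule are all made up for illustration:

DELIMITER //
CREATE PROCEDURE add_order(IN qty INT)
BEGIN
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    -- Clean up here, then pass the original error along to the caller:
    RESIGNAL;
  END;
  IF qty <= 0 THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Quantity must be positive', MYSQL_ERRNO = 5001;
  END IF;
  -- ... normal processing here ...
END//
DELIMITER ;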

More Partitioning Options
With the new RANGE COLUMNS and LIST COLUMNS clauses of the CREATE TABLE statement, partitioning is now more flexible and queries can be optimized better. Instead of expressions, you specify the names of one or more columns. Both of these clauses let you partition based on DATE, DATETIME, or string values (such as CHAR or VARCHAR). Partition pruning can optimize queries on tables that use RANGE COLUMNS or LIST COLUMNS partitioning and have WHERE conditions that compare different columns with constants, such as

a = 10 AND b > 5

or

a < "2005-11-25" AND b = 10 AND c = 50
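
For example, a table partitioned on two columns might be declared like this (the table and column names are made up):

CREATE TABLE events (
  a INT,
  b DATE
)
PARTITION BY RANGE COLUMNS (a, b) (
  PARTITION p0 VALUES LESS THAN (10, '2005-01-01'),
  PARTITION p1 VALUES LESS THAN (20, '2006-01-01'),
  PARTITION p2 VALUES LESS THAN (MAXVALUE, MAXVALUE)
);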

Read more about COLUMNS partitioning.

Performance Schema
The Performance Schema feature involves an optional schema, named performance_schema, with tables that you can query to see intimate details of low-level MySQL performance. You can get information about performance right at that moment, or various amounts of historical performance data. You can clear the data to reset the figures, filter and format the data using WHERE clauses, and generally interact with it using all sorts of SQL goodness. Performance Schema data now also includes details about the InnoDB storage engine. Read more about Performance Schema.
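
For instance, assuming the server was started with the performance_schema option enabled, you can see which wait events dominate and then reset the counters:

-- Top 10 wait events server-wide:
SELECT event_name, count_star, sum_timer_wait
FROM performance_schema.events_waits_summary_global_by_event_name
ORDER BY sum_timer_wait DESC
LIMIT 10;

-- Clear the accumulated figures:
TRUNCATE TABLE performance_schema.events_waits_summary_global_by_event_name;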

What’s New in InnoDB

To make a long story short: it’s all about performance and scalability! To those who enjoy trying all permutations of configuration settings, we apologize in advance for making so many of these improvements take no thought or effort at all.

Performance Improvements

At this year’s MySQL Conference & Expo, you’ll hear about the InnoDB Plugin 1.0.7, the first production-ready (GA) release of the InnoDB Plugin. Most of the enhancements listed here are from InnoDB 1.1, which is part of MySQL 5.5 and thus is still in beta. Download MySQL 5.5 and try them out.

Improved Recovery Performance
One of InnoDB’s great strengths is its ability to reliably recover data after any type of crash that affects the database. But this cleanup and checking makes the next restart take longer. Well, cover up your sundial. Put away your hourglass. The enterprising InnoDB team has improved the algorithms involved in recovery by a huge amount — in computer science terms, it’s a better big-O number. Now you will need to keep your finger ready on the stopwatch to see how long recovery takes. This feature is available both in InnoDB 1.1 and the InnoDB Plugin 1.0.7. Read more about faster recovery.

Multiple Buffer Pool Instances
With today’s buffer pools frequently in the multi-gigabyte range, pages are constantly being read and updated by different database threads. This enhancement removes the bottleneck that makes all the other threads wait when one thread is updating the buffer pool. All the structures normally associated with the buffer pool can now be multiplied, such as the mutex that protects it, the LRU information, and the flush list. You control how many buffer pool instances are used; the default is still 1. This feature works best with combined buffer pool sizes of several gigabytes, where each buffer pool instance can be a gigabyte or more. Read more about multiple buffer pool instances.
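
The option is read at startup, so it belongs in your configuration file rather than in a SET GLOBAL statement; the sizes below are arbitrary examples:

-- In my.cnf / my.ini under [mysqld] (not settable at runtime):
--   innodb_buffer_pool_size = 8G
--   innodb_buffer_pool_instances = 4
-- Verify after a restart:
SHOW VARIABLES LIKE 'innodb_buffer_pool_instances';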

Multiple Rollback Segments
This feature is both a performance and a scalability improvement. By dividing the single rollback segment into multiple parts, InnoDB allows concurrent transactions to create undo data (from insert, update, and delete operations) without making each other wait. A happy consequence is that the old limit of 1,023 simultaneous inserting/updating/deleting transactions is now much higher: with 128 rollback segments of 1,023 transaction slots each, the server supports approximately 128K concurrent writer transactions. This feature does not introduce any incompatibility in the InnoDB file format, and does not require using the newer Barracuda file format. However, the setup within the system tablespace only takes place when the system tablespace is created, so to take advantage of this feature, you must create a new instance (not just a new table or a new database) and import the data into it. Read more about multiple rollback segments.

Native Asynchronous I/O for Linux
This feature enables better concurrency of I/O requests on Linux systems. With asynchronous I/O, an I/O request can be sent off without the thread servicing the query having to wait for the I/O to complete; that aspect is delegated to the I/O helper threads. InnoDB already supported asynchronous I/O on Windows systems. On platforms other than Windows, InnoDB internally arranged its I/O calls as if they were asynchronous (leading to the term simulated asynchronous I/O), but behind the scenes the query thread really would block until the request finished. Now true asynchronous I/O support (called native asynchronous I/O so it won’t be confused with the existing references to asynchronous I/O in the source) is available on Linux as well as Windows. This feature requires the libaio userspace library to be installed on Linux. It comes with a configuration option, innodb_use_native_aio, that you can turn off in case of any startup problems related to the I/O subsystem. Read more about asynchronous I/O for Linux.
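
A quick way to check the setting, and a sketch of disabling it if needed (it is a startup option, not changeable at runtime):

SHOW VARIABLES LIKE 'innodb_use_native_aio';
-- To disable, add this under [mysqld] in my.cnf and restart:
--   innodb_use_native_aio = 0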

Extended Change Buffering: Now with Delete Buffering and Purge Buffering
InnoDB uses indexes to make queries faster. Secondary indexes, those on columns other than the primary key, require work (meaning disk writes) to keep them up to date when rows are inserted or deleted, or when those indexed columns are updated. For example, if you run the command DELETE FROM t WHERE c1 = 'something';, and you have a secondary index on column c2, what’s the rush to update that secondary index? Its contents might not be in the buffer pool, and maybe the index won’t be read for a long time.

InnoDB has had an optimization for a while now to delay disk writes for secondary index maintenance when the changes are due to inserts. This delay waits for the index contents to be read into the buffer pool for some other reason, such as a query, where the changes can be made quickly in memory and then flushed back to disk using the normal schedule for writing dirty blocks. When the changes in the buffer pool affect a group of sequential disk blocks, they can be flushed more efficiently than if the data was written piece by piece. Very clever!

In InnoDB 1.1, this technique is extended to include the different kinds of writes caused by deletes (an initial delete marking operation, followed later by a purge operation that garbage-collects all the deleted records). This optimization is under your control through the innodb_change_buffering configuration option, which has a new default of all. (We call the optimization change buffering rather than the old name insert buffering; the actual memory structure is still called the insert buffer.) Read more about enhanced change buffering.
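
The option is dynamic, so you can adjust it without a restart; for example:

SET GLOBAL innodb_change_buffering = 'all';  -- the new default in InnoDB 1.1
-- Other accepted values include 'inserts', 'deletes', 'purges', and 'none'.
SHOW VARIABLES LIKE 'innodb_change_buffering';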

Scalability Improvements

The scalability improvements in InnoDB 1.1 revolve around better isolation of threads and reduced mutex contention. These are performance-type improvements that really kick in when the database server is heavily loaded. (For those of you who are not yet experts on InnoDB performance, mutexes are in-memory structures that prevent different threads from interfering with each other’s changes to important memory areas like the buffer pool.)

Improved Log Sys Mutex
Previously, a single mutex protected different memory areas related to the undo and logging information. In particular, this mutex blocked access to the buffer pool while changes were being written there by DDL operations making changes to the data dictionary. Splitting the old log_sys mutex to create a separate log_flush_order mutex means that all of this internal processing can happen with less waiting and less blocking of other operations involving the buffer pool, without any configuration needed on your part. Read more about improved log sys mutex.

Separate Flush List Mutex
Along the same lines, operations involving the buffer pool and the flush list previously were protected by a single mutex, which could cause unnecessary delays. (The buffer pool mutex has historically been very hot, so any other operation that tied up the buffer pool was adding fuel to the fire.) Now the flush list has its own mutex, reducing contention with buffer pool operations and making InnoDB faster without any configuration needed on your part. Read more about separate flush list mutex.

Improved Purge Scheduling
The InnoDB purge operation is a type of garbage collection that runs periodically. Previously, the purge was part of the master thread, meaning that it could block some other database operations. Now, this operation can run in its own thread, allowing for more concurrency. You can control whether the purge operation is split into its own thread with the innodb_purge_threads configuration option, which can be set to 0 (the default) or 1 (for a single separate purge thread). This architectural change might not cause a big speedup with this single purge thread, but it lays the groundwork to tune other bottlenecks related to purge operations, so that in the future multiple purge threads could provide a bigger performance gain. The configuration option innodb_purge_batch_size can be set from 1 to 5000, with a default of 20, although typical users should not need to change that setting. Read more about improved purge scheduling.
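
Both settings are startup options; a minimal sketch of enabling the separate purge thread:

-- In my.cnf under [mysqld] (not settable at runtime in 5.5):
--   innodb_purge_threads = 1
--   innodb_purge_batch_size = 20
-- Verify after a restart:
SHOW VARIABLES LIKE 'innodb_purge%';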

Better Instrumentation/Diagnostics

InnoDB Stats in Performance Schema
The Performance Schema has been part of MySQL 5.5 for a while now. InnoDB 1.1 is instrumented for the first time for Performance Schema monitoring, with statistics available for InnoDB-specific mutexes, rw-locks, threads, and I/O operations. The data is structured so that you can see everything, or filter to see just the InnoDB items. The information in the performance_schema tables lets you see how these items factor into overall database performance, which ones are the hottest under various workloads and system configurations, and trace issues back to the relevant file and line in the source code so you can really see what’s happening behind the scenes. Read more about InnoDB integration with Performance Schema.
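
For example, to look at just the InnoDB mutex waits (instrument names follow the wait/synch/mutex/innodb/... convention), hottest first:

SELECT event_name, count_star, sum_timer_wait
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE event_name LIKE 'wait/synch/mutex/innodb/%'
ORDER BY sum_timer_wait DESC;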