Today, I am going to share how you can boost your database performance with SQL Server memory-optimized tables. Designed for speed and efficiency, these tables redefine data processing by keeping critical data in memory for blazing-fast results. Let's start.
Memory-Optimized Table
A memory-optimized table stores its data entirely in memory to speed up data access and transactions. For durability, SQL Server also keeps a copy of the data on disk; after an unexpected shutdown, it restores the data from that disk copy. Like transactions on traditional tables, transactions on memory-optimized tables are fully atomic, consistent, isolated, and durable (ACID).
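As a minimal sketch, a durable memory-optimized table is created with the `MEMORY_OPTIMIZED = ON` and `DURABILITY = SCHEMA_AND_DATA` options (the table and column names here are illustrative, and the database is assumed to already have a memory-optimized filegroup):

```sql
-- Illustrative durable memory-optimized table.
-- SCHEMA_AND_DATA keeps a disk copy for recovery (it is also the default).
CREATE TABLE dbo.OrderTracking_InMem
(
    OrderTrackingID INT IDENTITY(1,1)
        PRIMARY KEY NONCLUSTERED,
    SalesOrderID    INT          NOT NULL,
    TrackingDate    DATETIME2    NOT NULL,
    CarrierCode     NVARCHAR(20) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```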
Non-durable Memory-Optimized Table
SQL Server also supports a non-durable variant of the memory-optimized table, in which no copy of the data is saved to disk. Transactions on these tables therefore require no disk IO at all. Note, however, that the data cannot be recovered after a server crash or failover.
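A non-durable table only differs in its durability setting: `SCHEMA_ONLY` preserves the table definition across restarts but not the rows. A hypothetical session-cache table might look like this:

```sql
-- Illustrative non-durable table: schema survives a restart, data does not.
CREATE TABLE dbo.SessionCache
(
    SessionID   UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload     VARBINARY(MAX)   NULL,
    LastTouched DATETIME2        NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```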
Natively Compiled Stored Procedures
Natively compiled stored procedures support only a subset of T-SQL, not the full T-SQL surface area. For details, check the SQL Server documentation.
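The general shape of a natively compiled procedure is shown below. It is a sketch only: the procedure and table names are hypothetical, and it assumes a memory-optimized table `dbo.OrderTracking_InMem` exists. The distinguishing pieces are `NATIVE_COMPILATION`, `SCHEMABINDING`, and the `BEGIN ATOMIC` block with its required options:

```sql
-- Illustrative natively compiled stored procedure.
CREATE PROCEDURE dbo.usp_AddTracking
    @SalesOrderID INT,
    @CarrierCode  NVARCHAR(20)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- Only memory-optimized tables may be referenced here.
    INSERT INTO dbo.OrderTracking_InMem (SalesOrderID, TrackingDate, CarrierCode)
    VALUES (@SalesOrderID, SYSUTCDATETIME(), @CarrierCode);
END;
```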
Things to Consider
Before implementing a memory-optimized table, please consider the following:
- It consumes additional memory on the server, so make sure sufficient memory is available.
- It is optimized for high volumes of INSERT operations rather than SELECT-heavy workloads.
- It does not support a number of SQL Server features and T-SQL constructs.
Unsupported features
The following features are not supported for memory-optimized tables. For details, check the SQL Server documentation.
- Replication
- Mirroring
- Linked Servers
- Bulk Logging
- DDL Triggers
- Minimal Logging
- Data Compression
Unsupported T-SQL
The following T-SQL statements and constructs are not supported for memory-optimized tables. For details, see the SQL Server documentation.
- Foreign Keys
- ALTER TABLE
- CREATE INDEX
- TRUNCATE TABLE
- DBCC CHECKTABLE
- DBCC CHECKDB
Migrating to Memory-Optimized Table
SSMS ships with a great tool, the Memory Optimization Advisor, which can assist you in converting a table to a memory-optimized table. Right-click on the table name, and you will find it (Diagram - 1). In this demo, you are going to convert Sales.OrderTracking to a memory-optimized table. You can download the AdventureWorks2016_EXT database from here.
Diagram - 1:- Memory Optimization Advisor
In the validation step (Diagram - 2), the advisor checks whether the table is eligible for migration: it looks for unsupported data types, sparse columns, unsupported identity seeds, foreign keys, constraints, indexes, triggers, the row size limit, table partitioning, and replication. Any unsupported item must be changed or removed.
Diagram - 2:- Memory-optimized Table Validation
The warnings step (Diagram - 3) shows warnings about the table. Warnings do not prevent the table from being converted to memory-optimized; however, some behaviors of the converted table might change because of them. You can fix a warning by clicking the resolution link beside it.
Diagram - 3:- Optimization Warnings
In this step (Diagram - 4), you specify the filegroup name and a new name for the original table, and choose whether to migrate the existing data to the new table. If you check the second checkbox, the table will not be durable, i.e. its data will be lost when the SQL Server service restarts.
Diagram - 4:- Optimization Options
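Behind the scenes, the filegroup step corresponds to adding a `MEMORY_OPTIMIZED_DATA` filegroup and container to the database. A rough T-SQL equivalent (the filegroup name and path are illustrative) would be:

```sql
-- Illustrative T-SQL equivalent of the filegroup step.
ALTER DATABASE AdventureWorks2016_EXT
    ADD FILEGROUP AdventureWorks_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE AdventureWorks2016_EXT
    ADD FILE (NAME = 'AdventureWorks_mod',
              FILENAME = 'C:\Data\AdventureWorks_mod')
    TO FILEGROUP AdventureWorks_mod;
```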
You can rename the primary key (Diagram - 5). There are two options for the primary key:
- NONCLUSTERED index - great for tables with many range queries or that need a sort order.
- NONCLUSTERED HASH index - better for direct (point) lookups. It requires an additional value, the bucket count, which affects the table's performance; you can start with the default value.
Diagram - 5:- Primary key conversion
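The two index choices above can be sketched in DDL as follows (table and index names are illustrative; a common rule of thumb is to size `BUCKET_COUNT` at roughly one to two times the expected number of distinct key values):

```sql
-- Illustrative comparison of the two index types on one table.
CREATE TABLE dbo.DemoLookup
(
    -- HASH index: best for point lookups such as WHERE ID = @id.
    ID        INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000),
    -- NONCLUSTERED index: best for range scans and ordered retrieval.
    CreatedAt DATETIME2 NOT NULL
        INDEX ix_CreatedAt NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```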
If the table has any additional indexes, you will also go through the index conversion steps (Diagram - 6 & 7).
Diagram - 6:- Index conversion
Diagram - 7:- Index conversion
The next step (Diagram - 8) summarizes the migration process with the selected options. Review the summary and proceed with the migration.
Diagram - 8:- Migration summary
Finally, the migration succeeds (Diagram - 9): a new table is created, the old table is renamed, and the data is migrated.
Diagram - 9:- Migration result
Performance Gain
With memory-optimized tables, you will see better performance in the following scenarios:
- A stored procedure that implements more functionality and is called fewer times performs better than short stored procedures called many times.
- Natively compiled stored procedures perform better than interpreted stored procedures or ad hoc query execution.
- Memory-optimized hash indexes perform better than memory-optimized nonclustered indexes, which in turn perform better than disk-based indexes.
- Applications whose performance suffers from latch contention or blocking improve significantly when moved to In-Memory OLTP.
Why Memory-Optimized Table is Faster
It is faster not only because the data is in memory. The data storage, access, and processing algorithms were redesigned to take advantage of the latest advances in in-memory and high-concurrency computing. Furthermore, the engine removes lock and latch contention between concurrent transactions.
Use cases of Memory-Optimized Table
A memory-optimized table, or In-Memory OLTP in general, is most appropriate in the following scenarios:
- High-throughput, low-latency transaction processing - best suited for large volumes of transactions that need consistently low latency per transaction, for example trading of financial instruments, mobile gaming, and ad delivery.
- IoT - In-Memory OLTP is good at ingesting large volumes of data from many different sources at the same time. SQL Server makes queries against the data fast and allows you to get real-time insights.
- Caching and session state - ASP.NET session state is a successful use case for In-Memory OLTP.
- tempdb object replacement - use non-durable tables and memory-optimized table types to replace traditional tempdb-based structures such as temporary tables, table variables, and table-valued parameters (TVPs). Memory-optimized table variables and non-durable tables typically reduce CPU usage and completely remove log IO compared with traditional table variables and #temp tables.
- ETL processes - use non-durable memory-optimized tables for data staging. They completely remove all IO and make data access more efficient.
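The tempdb replacement scenario can be sketched with a memory-optimized table type standing in for a regular table variable (the type name and bucket count are illustrative):

```sql
-- Illustrative memory-optimized table type as a table-variable/TVP replacement.
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Used like an ordinary table variable, but without tempdb IO:
DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderID) VALUES (1), (2), (3);
```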
Conclusion
SQL Server memory-optimized tables offer a game-changing approach to achieving outstanding performance and efficiency for specific use cases. By leveraging them, you can unlock the full potential of your applications and stay ahead in today's data-driven world.