Preamble
Installing a new SQL Server instance is a common task for DBAs, and most of us use a simple, straightforward GUI approach.
However, there are several configuration options, both during and immediately after installation, that DBAs can apply to improve SQL Server performance and manageability. This blog post highlights these options and their advantages.
Perform Volume Maintenance Task privilege
Granting SQL Server the ability to perform “instant file initialization” leads to significant time savings when creating new databases with large initial file sizes or when restoring databases from backups. In short, this permission allows SQL Server to skip the usual process of “zeroing out” disk space before a new data file can be used, which makes file creation faster (note that this applies to data files only; transaction log files are always zero-initialized). This is especially useful when a very large database (VLDB) needs to be restored, whether in normal operation or during a disaster recovery (DR) process.
This option can now be configured during installation by simply checking the corresponding box, as shown below:
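If the instance is already installed, you can verify that the permission took effect. A minimal check, assuming SQL Server 2016 SP1 or later, where sys.dm_server_services exposes an instant_file_initialization_enabled column:

-- Returns 'Y' for the Database Engine service if instant file initialization is enabled
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;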
TempDB database configuration
The TempDB database is a critical system database in SQL Server, and its use goes well beyond holding user-created temporary tables.
SQL Server uses TempDB internally for purposes such as hash worktables (intermediate results in joins), table variables, the results of table-valued functions, index creation, sorting, and more.
Although all of this is mostly transparent to user processes, a poorly configured or poorly sized TempDB database can lead to sub-optimal database engine performance and potentially cause problems.
During installation, it is possible to configure the number of TempDB files, their initial size, and their growth increment. As a starting point, Microsoft recommends that the number of TempDB data files equal the number of logical processors or eight (8), whichever is lower.
The files can be defined appropriately during the installation itself to give TempDB an optimal start, as shown in the figure below.
Here the number of files is set to 8, equal to the number of logical processors, with an initial file size of 1 GB and a growth increment of 512 MB. If the disk configuration allows it, the initial size can be much larger, which reduces the need for file-growth events.
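If the instance is already installed, the same layout can be approximated after the fact. A sketch under the assumption that the logical file names follow the setup defaults (tempdev, temp2, and so on; verify them against sys.master_files on your instance):

-- Resize the TempDB data files to match the example above
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 1024MB, FILEGROWTH = 512MB);
-- ...repeat for the remaining files (temp3 through temp8)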
Data directories
Another simple but effective configuration is assigning clearly named folders for data files and logs. This avoids the deeply nested folder paths that SQL Server assigns by default and simplifies locating and grouping data and log files.
In the figure above, drives E: and F: were assigned to data and logs respectively. The “SQL data” and “SQL logs” folders were created in advance.
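After installation, the default locations the instance actually uses can be confirmed with a quick query; this is just a verification step:

-- Show the default data and log folders used for new databases
SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath,
       SERVERPROPERTY('InstanceDefaultLogPath') AS DefaultLogPath;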
Memory settings
A common recommendation is to allocate about 80% of the total server memory to SQL Server. The right percentage can vary significantly depending on the total amount of memory available, but in this example we assume 80% as the initial allocation.
A simple way to calculate this: 0.8 * (total server memory in GB) * 1024 = the value, in MB, to enter for max server memory.
The standard 80% recommendation reduces memory contention and paging by leaving at least 20% of the available memory to the operating system, which is usually sufficient for normal operations.
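As a worked example, for a hypothetical server with 64 GB of RAM: 0.8 * 64 * 1024 = 52428 (rounded down), which can then be applied with sp_configure:

-- Cap SQL Server at roughly 80% of a 64 GB server (value is in MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 52428;
RECONFIGURE;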
Model database configuration
In a SQL Server instance, the model database serves as a template for new databases: the configuration and parameters of the model database are copied to each new database. Although there are several database parameters that can be configured on model and thereby propagated to new databases, most depend on the individual requirements of the environment.
Two parameters, however, apply to most environments: the initial file size and the growth increment. An adequate initial file size ensures that the data and log files created for new databases start large enough, while a larger growth increment makes growth events less frequent, something that can significantly help OLTP performance.
After installation, the file parameters can be changed through the Model database properties. In the example above, the initial data file size of the Model database was set to 10 GB with a growth increment of 1 GB, and the log file to an initial size of 1 GB with a growth increment of 256 MB.
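The same change can be scripted. A sketch assuming the default logical file names of the model database (modeldev and modellog):

-- Set the template sizes that new databases will inherit
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 10GB, FILEGROWTH = 1GB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 1GB, FILEGROWTH = 256MB);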
Server settings
There are many server parameters that control the operation of SQL Server. Settings ranging from the number of processors used for parallelism to access control for xp_cmdshell can be configured with the sp_configure command in T-SQL.
MAXDOP, max server memory, and min server memory are some of the options DBAs set this way. Two other common settings that apply to most servers are “backup compression default” and “optimize for ad hoc workloads”.
Backup compression is a very useful feature that saves disk space by compressing backup files. Compression can be controlled per backup in maintenance plans and individual T-SQL backup commands, but it is best to enable it at the server level so that large, uncompressed files are not created on disk by ad hoc backups.
The “optimize for ad hoc workloads” option allows SQL Server to reduce the memory consumed by query plans that are never reused: on first execution only a small plan stub is cached, and a full plan is stored only if the query is executed again.
Both can be enabled with the T-SQL below:
EXEC sp_configure 'show advanced options', 1;  -- expose advanced configuration options
RECONFIGURE;
GO
EXEC sp_configure 'optimize for ad hoc workloads', 1;  -- cache a plan stub for single-use queries
RECONFIGURE;
GO
EXEC sp_configure 'backup compression default', 1;  -- compress backups unless overridden per command
RECONFIGURE;
GO
Deadlock trace flags
Deadlocks are resolved within SQL Server based on the cost of the queries involved: the transaction that is cheapest to roll back is chosen as the deadlock victim. In many cases, applications and user processes are designed to handle deadlocks gracefully without affecting data quality, but there can still be cases where deadlocks recur and require deeper analysis. The deadlock trace flags, if enabled, ensure that deadlock information is written to the error log for further analysis.
Trace flags 1204 and 1222 capture the deadlock participants and the resource and statement information involved. You can enable these trace flags with the DBCC TRACEON T-SQL command, but the best way to enable them and, more importantly, keep them enabled after a server restart is to add them to the SQL Server startup parameters.
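For an already-running instance, the flags can be turned on immediately with DBCC TRACEON; unlike the startup parameters, this does not survive a restart:

-- Enable the deadlock trace flags globally (-1) for the current instance lifetime
DBCC TRACEON (1204, 1222, -1);
-- Confirm that the flags are active
DBCC TRACESTATUS (1204, 1222);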
To make them permanent, open the startup parameters section in SQL Server Configuration Manager and add the trace flags, as shown below:
Maintenance and backup jobs
Once the SQL Server instance has been installed and configured according to the recommendations and/or company standards, it is advisable to set up the corresponding maintenance and backup jobs right away so that databases added later are automatically covered. Some of the standard maintenance and backup jobs are:
- A full backup job that runs on a schedule, with a folder assigned to each database (a sketch follows this list)
- A differential backup job, if required, with the same characteristics as above
- A transaction log (T-Log) backup job covering the databases in the full recovery model; exceptions should be made for databases that will participate in log shipping
- An index and statistics maintenance job, run at least once a week, to remove index fragmentation and update statistics with a suitable sampling percentage to ensure optimal execution plans
- A job to recycle the error log on a regular basis so that it stays small and loads quickly in moments of immediate need for troubleshooting
- A history cleanup job to periodically delete old maintenance and backup history
- Configuring the SQL Server Agent to retain enough job history to analyze past executions
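As a minimal sketch of the backup and error log items above, assuming a hypothetical SalesDB database and an F:\SQL Backups folder (both placeholders):

-- Full backup to a per-database folder, compressed and checksummed
BACKUP DATABASE SalesDB
TO DISK = N'F:\SQL Backups\SalesDB\SalesDB_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT;
GO
-- Recycle the error log so the current log stays small and quick to load
EXEC sp_cycle_errorlog;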
As mentioned at the beginning of this blog, these are some of the setup and configuration items that correspond to general SQL Server installation recommendations.
Often there are other items that need to be configured according to company standards or specific application or user requirements. Examples include the Database Mail component, TLS encryption of database connections, snapshot isolation, MAXDOP, etc.
It is recommended to create a standard build document that includes all the steps described above, to ensure that all SQL Server instance installations in the company are consistent and meet all requirements.
About Enteros
Enteros offers a patented database performance management SaaS platform. It proactively identifies root causes of complex business-impacting database scalability and performance issues across a growing number of clouds, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.