"일꾼이 일을 잘하려면 먼저 도구를 갈고 닦아야 한다." - 공자, 『논어』.
첫 장 > 프로그램 작성 > 데이터베이스 성능 향상을 위한 주요 ySQL 스키마 검사

Top MySQL Schema Checks to Boost Database Performance

Published on November 19, 2024

A database schema defines the logical structure of your database, including tables, columns, relationships, indexes, and constraints that shape how data is organized and accessed. It’s not just about how the data is stored but also how it interacts with queries, transactions, and other operations.

These checks can help you stay on top of new or lingering problems before they snowball into bigger issues. You can dive deeper into each schema check below and find out exactly how to fix any issues if your database doesn't pass. Just remember: before you make any schema changes, always back up your data to protect against the risks that come with modifications.

1. Primary Key Check (Missing Primary Keys)

The primary key is a critical part of any table, uniquely identifying each row and enabling efficient queries. Without a primary key, tables may experience performance issues, and certain tools like replication and schema change utilities may not function properly.

There are several issues you can avoid by defining a primary key when designing schemas:

  1. If no primary or unique key is specified, MySQL creates a hidden internal one that queries and applications cannot use.
  2. The lack of a primary key could slow down replication performance, especially with row-based or mixed replication.
  3. Primary keys allow scalable data archiving and purging. Tools like pt-online-schema-change require a primary or unique key.
  4. Primary keys uniquely identify rows, which is crucial from an application perspective.
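
To find tables that are missing a primary key, a query along these lines against information_schema can help (this detection query is a sketch added for illustration and is not part of the original article):

-- Lists base tables that have no PRIMARY KEY constraint
SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
 AND c.TABLE_NAME = t.TABLE_NAME
 AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.CONSTRAINT_NAME IS NULL
  AND t.TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');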

Example

To create a PRIMARY KEY constraint on the "ID" column when the table is already created, use the following SQL:

ALTER TABLE Persons ADD PRIMARY KEY (ID);

To define a primary key on multiple columns:

ALTER TABLE Persons ADD CONSTRAINT PK_Person PRIMARY KEY (ID, LastName);

Note: If you use the ALTER TABLE command, then the primary key column(s) must have been declared to not contain NULL values when the table was first created.

2. Table Engine Check (Deprecated Table Engine)

The MyISAM storage engine is deprecated, and tables still using it should be migrated to InnoDB. InnoDB is the default and recommended engine for most use cases due to its superior performance, data recovery capabilities, and transaction support. Migrating from MyISAM to InnoDB can dramatically improve performance in write-heavy applications, provide better fault tolerance, and allow for more advanced MySQL features such as full-text search and foreign keys.

Why InnoDB is preferred:

  • Crash recovery capabilities allow it to recover automatically from database server or host crashes without data corruption.
  • Only locks the rows affected by a query, allowing for much better performance in high-concurrency environments.
  • Caches both data and indexes in memory, which is preferred for read-heavy workloads.
  • Fully ACID-compliant, ensuring data integrity and supporting transactions.
  • The InnoDB engine receives the majority of the focus from the MySQL development community, making it the most up-to-date and well-supported engine.

How to Migrate to InnoDB

ALTER TABLE <table_name> ENGINE=InnoDB;
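
To see which tables still use MyISAM before migrating them, a query like the following can be used (a sketch based on information_schema, not part of the original article):

-- Lists user tables whose storage engine is still MyISAM
SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE ENGINE = 'MyISAM'
  AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');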

3. Table Collation Check (Mixed Collations)

Using different collations across tables or even within a table can lead to performance problems, particularly during string comparisons and joins. If the collations of two string columns differ, MySQL might need to convert the strings at runtime, which can prevent indexes from being used and slow down your queries.

There are a few things to keep in mind when working with tables that have mixed collations:

  • Collations can differ at the column level, so mismatches at the table level won’t cause issues if the relevant columns in a join have matching collations.
  • Changing a table's collation, especially with a charset switch, isn't always simple. Data conversion might be needed, and unsupported characters could turn into corrupted data.
  • If you don’t specify a collation or charset when creating a table, it inherits the database defaults. If none are set at the database level, server defaults will apply. To avoid these issues, it’s important to standardize the collation across your entire dataset, especially for columns that are frequently used in join operations.

How to Change Collation Settings

Before making any changes to your database's collation settings, test your approach in a non-production environment to avoid unintended consequences. If you're unsure about anything, it’s best to consult with a DBA.

Retrieve the default charset and collation for all databases:

SELECT SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME, 
DEFAULT_COLLATION_NAME FROM INFORMATION_SCHEMA.SCHEMATA;

Check the collation of specific tables:

SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_COLLATION FROM
information_schema.TABLES WHERE TABLE_COLLATION IS NOT NULL ORDER BY
TABLE_SCHEMA, TABLE_COLLATION;

Find the server's default charset:

SELECT @@GLOBAL.character_set_server;

Find the server's default collation:

SELECT @@GLOBAL.collation_server;

Update the collation for a specific database:

ALTER DATABASE <database_name> COLLATE = <collation_name>;

Update the collation for a specific table:

ALTER TABLE <table_name> COLLATE = <collation_name>;
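
Note that ALTER TABLE ... COLLATE only changes the table's default collation, which applies to columns added later. To convert the existing string columns as well, CONVERT TO CHARACTER SET is typically used. The example below is an addition for illustration (utf8mb4 and its collation are just placeholders; verify they fit your data before running it):

-- Converts existing string columns and sets the new table defaults
ALTER TABLE <table_name> CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;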

4. Table Character Set Check (Mixed Character Set)

Mixed character sets are similar to mixed collations in that they can lead to performance and compatibility issues. A mixed character set occurs when different columns or tables use different encoding formats for storing data.

  • Mixed character sets can hurt join performance on string columns by preventing index use or requiring value conversions.
  • Character sets can be defined at the column level, and as long as the columns involved in a join have matching character sets, performance won’t be impacted by mismatches at the table level.
  • Changing a table’s character set may involve data conversion, which can lead to corrupted data if unsupported characters are encountered.
  • If no character set or collation is specified, tables inherit the database's defaults, and databases inherit the server's default charset and collation.
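
One way to spot tables that mix character sets across their columns is to group the column metadata by table (this detection query is a sketch added for illustration, not from the original article):

-- Tables whose columns use more than one character set
SELECT TABLE_SCHEMA, TABLE_NAME,
       GROUP_CONCAT(DISTINCT CHARACTER_SET_NAME) AS character_sets
FROM information_schema.COLUMNS
WHERE CHARACTER_SET_NAME IS NOT NULL
  AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')
GROUP BY TABLE_SCHEMA, TABLE_NAME
HAVING COUNT(DISTINCT CHARACTER_SET_NAME) > 1;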

How to Change Character Settings

Before adjusting your database's character settings, be sure to test the changes in a staging environment to prevent any unexpected issues. If you're uncertain about any steps, consult a DBA for guidance.

Retrieve the default charset and collation for all databases:

SELECT SCHEMA_NAME,DEFAULT_CHARACTER_SET_NAME,
DEFAULT_COLLATION_NAME FROM INFORMATION_SCHEMA.SCHEMATA;

Get the character set of a column:

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, CHARACTER_SET_NAME 
FROM information_schema.COLUMNS WHERE CHARACTER_SET_NAME is not NULL 
ORDER BY TABLE_SCHEMA, CHARACTER_SET_NAME;

Find the server's default charset:

SELECT @@GLOBAL.character_set_server;

Find the server's default collation:

SELECT @@GLOBAL.collation_server;

To view the structure of a table:

SHOW CREATE TABLE <table_name>;

Example output:

CREATE TABLE `<table_name>` (
  `word` varchar(50) NOT NULL DEFAULT '',
  `sid` int(10) unsigned NOT NULL DEFAULT '0',
  `langcode` varchar(12) CHARACTER SET ascii NOT NULL DEFAULT '',
  `type` varchar(64) CHARACTER SET ascii NOT NULL,
  `score` float DEFAULT NULL,
  PRIMARY KEY (`word`,`sid`,`langcode`,`type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

To change a column character set:

ALTER TABLE <table_name> MODIFY `type` varchar(64) CHARACTER SET utf8mb4 NOT NULL;

5. Column Auto Increment Check (Type of Auto Increment Columns)

For tables that are expected to grow indefinitely and use auto-increment for primary keys, it's recommended to switch to the UNSIGNED BIGINT data type. This allows the column to handle a much larger range of values, preventing the need for costly table alterations in the future once the maximum value is reached. By specifying UNSIGNED, only positive values are stored, effectively doubling the range of the data type.

How to Change the Column Type

To modify the column type to UNSIGNED BIGINT:

ALTER TABLE <database_name>.<table_name>
MODIFY COLUMN id bigint unsigned NOT NULL AUTO_INCREMENT;
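
To find auto-increment columns that are not yet BIGINT UNSIGNED, and are therefore candidates for this change, a query such as the following can help (a sketch against information_schema, not from the original article):

-- Auto-increment columns that are not yet bigint unsigned
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TYPE
FROM information_schema.COLUMNS
WHERE EXTRA LIKE '%auto_increment%'
  AND COLUMN_TYPE NOT LIKE 'bigint%unsigned'
  AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');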

6. Table Foreign Key Check (Existence of Foreign Keys)

Foreign keys offer data consistency by maintaining the relationship between parent and child tables, but they also impact database performance. Each time a write operation occurs, additional lookups are required to verify the integrity of the related data. This can cause slowdowns, especially in high-traffic environments.

If performance is a concern, you may want to consider removing foreign keys, especially in scenarios where data consistency can be handled at the application level.

How to Remove Foreign Keys

To drop a foreign key constraint from a table:

SHOW CREATE TABLE <database_name>.<table_name>;
ALTER TABLE <database_name>.<table_name> DROP CONSTRAINT <foreign_key_name>;
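
If you're not sure which foreign keys exist in the first place, they can be listed from information_schema (a sketch added for illustration, not part of the original article):

-- Lists foreign key constraints and the tables they reference
SELECT TABLE_SCHEMA, TABLE_NAME, CONSTRAINT_NAME, REFERENCED_TABLE_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_NAME IS NOT NULL
  AND TABLE_SCHEMA NOT IN ('mysql', 'sys');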

7. Duplicated Index Check

Duplicate indexes in MySQL consume unnecessary disk space and create additional overhead during write operations, as every index must be updated. This can complicate query optimization, potentially leading to inefficient execution plans without offering any real benefit.

Identify and remove duplicate indexes to streamline query optimization and reduce overhead. But make sure that the index is not being used for critical queries before removing it.
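
One way to surface exact duplicates is to compare the column lists of the indexes within each table (this query is a sketch added for illustration, not from the original article; it only catches indexes with identical column lists, not redundant prefixes):

-- Indexes within the same table that cover exactly the same column list
SELECT TABLE_SCHEMA, TABLE_NAME, cols,
       GROUP_CONCAT(INDEX_NAME) AS indexes_with_same_columns
FROM (
  SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME,
         GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS cols
  FROM information_schema.STATISTICS
  WHERE TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')
  GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
) AS idx
GROUP BY TABLE_SCHEMA, TABLE_NAME, cols
HAVING COUNT(*) > 1;

Percona Toolkit's pt-duplicate-key-checker is a more thorough option, as it also detects redundant prefix indexes.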

8. Unused Index Check

Unused indexes in MySQL can negatively impact database performance by consuming disk space, increasing processing overhead during inserts, updates, and deletes, and slowing down overall operations. While indexes are valuable for speeding up queries, those that aren't used can create unnecessary strain on your system.
Additional benefits of removing unused or duplicate indexes include:

  • With fewer indexes, MySQL's optimizer has fewer choices to evaluate, simplifying query execution and reducing CPU/memory usage.
  • Removing unused indexes frees up valuable disk space that can be used for more critical data, also improving I/O efficiency.
  • Index maintenance tasks, such as rebuilding or reorganizing, become faster and less resource-intensive when the number of indexes is minimized. This leads to smoother operations, particularly in environments requiring 24/7 uptime.

To identify unused indexes in MySQL or MariaDB, use the following SQL statement:

SELECT CONCAT(object_schema, '.', object_name) AS 'table', index_name
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
AND count_star = 0
AND index_name <> 'PRIMARY'
AND object_schema NOT IN ('mysql', 'performance_schema', 'information_schema')
ORDER BY count_star, object_schema, object_name;

How to Remove Unused or Duplicated Indexes

In MySQL 8.0 and later, you can make indexes invisible to test whether they’re needed without fully dropping them:

ALTER TABLE <table_name> ALTER INDEX <index_name> INVISIBLE;

If performance remains unaffected, the index can be safely dropped:

ALTER TABLE <table_name> DROP INDEX <index_name>;

You can revert an index back to visible if needed:

ALTER TABLE <table_name> ALTER INDEX <index_name> VISIBLE;

Schema Checks Now Available with Releem

With the latest update, Releem now includes comprehensive schema health checks. These checks provide real-time insights into your database’s structural integrity, along with actionable recommendations for fixing any detected issues.


By automating the schema monitoring process, Releem takes the guesswork out of manual checks, saving database engineers tons of time and effort. Instead of spending hours working on schema details, you can now focus on more pressing tasks.

Statement: This article is reproduced from https://dev.to/drupaladmin/top-8-mysql-schema-checks-to-boost-database-performance-3m4k