
5.2. MySQL Server Logs

MySQL Server has several logs that can help you find out what activity is taking place.

Log Type | Information Written to Log
Error log | Problems encountered starting, running, or stopping mysqld
General query log | Established client connections and statements received from clients
Binary log | Statements that change data (also used for replication)
Relay log | Data changes received from a replication master server
Slow query log | Queries that took more than long_query_time seconds to execute

By default, no logs are enabled (except the error log on Windows). The following log-specific sections provide information about the server options that enable logging.

By default, the server writes files for all enabled logs in the data directory. You can force the server to close and reopen the log files (or in some cases switch to a new log file) by flushing the logs. Log flushing occurs when you issue a FLUSH LOGS statement; execute mysqladmin with a flush-logs or refresh argument; or execute mysqldump with a --flush-logs or --master-data option. See Section 13.7.6.3, "FLUSH Syntax", Section 4.5.2, "mysqladmin - Client for Administering a MySQL Server", and Section 4.5.4, "mysqldump - A Database Backup Program". In addition, the binary log is flushed when its size reaches the value of the max_binlog_size system variable.

You can control the general query and slow query logs during runtime. You can enable or disable logging, or change the log file name. You can tell the server to write general query and slow query entries to log tables, log files, or both. For details, see Section 5.2.1, "Selecting General Query and Slow Query Log Output Destinations", Section 5.2.3, "The General Query Log", and Section 5.2.5, "The Slow Query Log".

The relay log is used only on slave replication servers, to hold data changes from the master server that must also be made on the slave. For discussion of relay log contents and configuration, see Section 16.2.2.1, "The Slave Relay Log".

For information about log maintenance operations such as expiration of old log files, see Section 5.2.6, "Server Log Maintenance".

For information about keeping logs secure, see Section 6.1.2.3, "Passwords and Logging".

5.2.1. Selecting General Query and Slow Query Log Output Destinations

MySQL Server provides flexible control over the destination of output to the general query log and the slow query log, if those logs are enabled. Possible destinations for log entries are log files or the general_log and slow_log tables in the mysql database. Either or both destinations can be selected.

Currently, logging to tables incurs significantly more server overhead than logging to files. If you enable the general log or slow query log and require highest performance, you should use file logging, not table logging.

Log control at server startup. The --log-output option specifies the destination for log output. This option does not in itself enable the logs. Its syntax is --log-output[=value,...]:

  • If --log-output is given with a value, the value should be a comma-separated list of one or more of the words TABLE (log to tables), FILE (log to files), or NONE (do not log to tables or files). NONE, if present, takes precedence over any other specifiers.

  • If --log-output is omitted, the default logging destination is FILE.

The general_log system variable controls logging to the general query log for the selected log destinations. If specified at server startup, general_log takes an optional argument of 1 or 0 to enable or disable the log. To specify a file name other than the default for file logging, set the general_log_file variable. Similarly, the slow_query_log variable controls logging to the slow query log for the selected destinations and setting slow_query_log_file specifies a file name for file logging. If either log is enabled, the server opens the corresponding log file and writes startup messages to it. However, further logging of queries to the file does not occur unless the FILE log destination is selected.

Examples:

  • To write general query log entries to the log table and the log file, use --log-output=TABLE,FILE to select both log destinations and --general_log to enable the general query log.

  • To write general and slow query log entries only to the log tables, use --log-output=TABLE to select tables as the log destination and --general_log and --slow_query_log to enable both logs.

  • To write slow query log entries only to the log file, use --log-output=FILE to select files as the log destination and --slow_query_log to enable the slow query log. (In this case, because the default log destination is FILE, you could omit the --log-output option.)
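As a configuration-file sketch of the first example above, the equivalent settings could be placed in my.cnf (the log file path is illustrative):

[mysqld]
log-output=TABLE,FILE
general_log=1
general_log_file=/var/log/mysql/query.log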

Log control at runtime. The system variables associated with log tables and files enable runtime control over logging:

  • The global log_output system variable indicates the current logging destination. It can be modified at runtime to change the destination.

  • The global general_log and slow_query_log variables indicate whether the general query log and slow query log are enabled (ON) or disabled (OFF). You can set these variables at runtime to control whether the logs are enabled.

  • The global general_log_file and slow_query_log_file variables indicate the names of the general query log and slow query log files. You can set these variables at server startup or at runtime to change the names of the log files.

  • To disable or enable general query logging for the current connection, set the session sql_log_off variable to ON or OFF.
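For example, a minimal sketch of these runtime controls:

SET GLOBAL log_output = 'TABLE,FILE';
SET GLOBAL general_log = 'ON';
SET SESSION sql_log_off = 'ON';

The first two statements select both log destinations and enable the general query log globally; the last suppresses logging of the current session's statements.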

The use of tables for log output offers the following benefits:

  • Log entries have a standard format. To display the current structure of the log tables, use these statements:

    SHOW CREATE TABLE mysql.general_log;
    SHOW CREATE TABLE mysql.slow_log;
  • Log contents are accessible through SQL statements. This enables the use of queries that select only those log entries that satisfy specific criteria. For example, to select log contents associated with a particular client (which can be useful for identifying problematic queries from that client), it is easier to do this using a log table than a log file; see the sample query following this list.

  • Logs are accessible remotely through any client that can connect to the server and issue queries (if the client has the appropriate log table privileges). It is not necessary to log in to the server host and directly access the file system.
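For example, a sketch of a query against the general query log table (the account pattern 'webapp%' is hypothetical):

SELECT event_time, argument
FROM mysql.general_log
WHERE user_host LIKE 'webapp%'
ORDER BY event_time DESC
LIMIT 10;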

The log table implementation has the following characteristics:

  • In general, the primary purpose of log tables is to provide an interface for users to observe the runtime execution of the server, not to interfere with its runtime execution.

  • CREATE TABLE, ALTER TABLE, and DROP TABLE are valid operations on a log table. For ALTER TABLE and DROP TABLE, the log table cannot be in use and must be disabled, as described later.

  • By default, the log tables use the CSV storage engine that writes data in comma-separated values format. For users who have access to the .CSV files that contain log table data, the files are easy to import into other programs such as spreadsheets that can process CSV input.

    The log tables can be altered to use the MyISAM storage engine. You cannot use ALTER TABLE to alter a log table that is in use. The log must be disabled first. No engines other than CSV or MyISAM are legal for the log tables.

  • To disable logging so that you can alter (or drop) a log table, you can use the following strategy. The example uses the general query log; the procedure for the slow query log is similar but uses the slow_log table and slow_query_log system variable.

    SET @old_log_state = @@global.general_log;
    SET GLOBAL general_log = 'OFF';
    ALTER TABLE mysql.general_log ENGINE = MyISAM;
    SET GLOBAL general_log = @old_log_state;
  • TRUNCATE TABLE is a valid operation on a log table. It can be used to expire log entries.

  • RENAME TABLE is a valid operation on a log table. You can atomically rename a log table (to perform log rotation, for example) using the following strategy:

    USE mysql;
    DROP TABLE IF EXISTS general_log2;
    CREATE TABLE general_log2 LIKE general_log;
    RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
  • As of MySQL 5.5.7, CHECK TABLE is a valid operation on a log table.

  • LOCK TABLES cannot be used on a log table.

  • INSERT, DELETE, and UPDATE cannot be used on a log table. These operations are permitted only internally to the server itself.

  • FLUSH TABLES WITH READ LOCK and the state of the global read_only system variable have no effect on log tables. The server can always write to the log tables.

  • Entries written to the log tables are not written to the binary log and thus are not replicated to slave servers.

  • To flush the log tables or log files, use FLUSH TABLES or FLUSH LOGS, respectively.

  • Partitioning of log tables is not permitted.

  • Before MySQL 5.5.25, mysqldump does not dump the general_log or slow_log tables for dumps of the mysql database. As of 5.5.25, the dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

5.2.2. The Error Log

The error log contains information indicating when mysqld was started and stopped and also any critical errors that occur while the server is running. If mysqld notices a table that needs to be automatically checked or repaired, it writes a message to the error log.

On some operating systems, the error log contains a stack trace if mysqld dies. The trace can be used to determine where mysqld died. See MySQL Internals: Porting to Other Systems.

You can specify where mysqld writes the error log with the --log-error[=file_name] option. If the option is given with no file_name value, mysqld uses the name host_name.err by default. The server creates the file in the data directory unless an absolute path name is given to specify a different directory.

If you do not specify --log-error, or (on Windows) if you use the --console option, errors are written to stderr, the standard error output. Usually this is your terminal.

On Windows, error output is always written to the error log if --console is not given.

In addition, on Windows, events and error messages are written to the Windows Event Log within the Application log. Entries marked as Error, Warning, and Note are written to the Event Log, but informational messages (such as information statements from individual storage engines) are not copied to the Event Log. The log entries have a source of MySQL. You cannot disable writing information to the Windows Event Log.

If you flush the logs using FLUSH LOGS or mysqladmin flush-logs and mysqld is writing the error log to a file (for example, if it was started with the --log-error option), the effect is version dependent:

  • As of MySQL 5.5.7, the server closes and reopens the log file. To rename the file, do so manually before flushing; flushing the logs then opens a new file with the original file name. For example, you can rename the file and create a new one using the following commands:

    shell> mv host_name.err host_name.err-old
    shell> mysqladmin flush-logs
    shell> mv host_name.err-old backup-directory

    On Windows, use rename rather than mv.

  • Prior to MySQL 5.5.7, the server renames the current log file with the suffix -old, then creates a new empty log file. Be aware that a second log-flushing operation thus causes the original error log file to be lost unless you save it under a different name. Before MySQL 5.5.7 on Windows, you cannot rename the error log while the server has it open. To avoid a restart, flush the logs first to cause the server to rename the original file and create a new one, then save the renamed file. That also works on Unix, or you can use the commands shown earlier.

In any case, no error log renaming occurs when the logs are flushed if the server is not writing to a named file.

If you use mysqld_safe to start mysqld, mysqld_safe arranges for mysqld to write error messages to a log file or to syslog. mysqld_safe has three error-logging options, --syslog, --skip-syslog, and --log-error. The default with no logging options or with --skip-syslog is to use the default log file. To explicitly specify use of an error log file, specify --log-error=file_name to mysqld_safe, and mysqld_safe will arrange for mysqld to write messages to a log file. To use syslog instead, specify the --syslog option.

If you specify --log-error in an option file in a [mysqld], [server], or [mysqld_safe] section, mysqld_safe will find and use the option.

If mysqld_safe is used to start mysqld and mysqld dies unexpectedly, mysqld_safe notices that it needs to restart mysqld and writes a restarted mysqld message to the error log.

The --log-warnings option or log_warnings system variable can be used to control warning logging to the error log. The default value is enabled (1). Warning logging can be disabled using a value of 0. If the value is greater than 1, aborted connections are written to the error log, and access-denied errors for new connection attempts are written. See Section C.5.2.11, "Communication Errors and Aborted Connections".
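For example, to make the server log aborted connections and access-denied errors, you could set the variable at runtime:

SET GLOBAL log_warnings = 2;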

5.2.3. The General Query Log

The general query log is a general record of what mysqld is doing. The server writes information to this log when clients connect or disconnect, and it logs each SQL statement received from clients. The general query log can be very useful when you suspect an error in a client and want to know exactly what the client sent to mysqld.

mysqld writes statements to the query log in the order that it receives them, which might differ from the order in which they are executed. This logging order contrasts with the binary log, for which statements are written after they are executed but before any locks are released. (Also, the query log contains all statements, whereas the binary log does not contain statements that only select data.)

By default, the general query log is disabled. To specify the initial general query log state explicitly, use --general_log[={0|1}]. With no argument or an argument of 1, --general_log enables the log. With an argument of 0, this option disables the log. To specify a log file name, use --general_log_file=file_name. To specify the log destination, use --log-output (as described in Section 5.2.1, "Selecting General Query and Slow Query Log Output Destinations"). The older options to enable the general query log, --log and -l, are deprecated.

If you specify no name for the general query log file, the default name is host_name.log. The server creates the file in the data directory unless an absolute path name is given to specify a different directory.

To disable or enable the general query log or change the log file name at runtime, use the global general_log and general_log_file system variables. Set general_log to 0 (or OFF) to disable the log or to 1 (or ON) to enable it. Set general_log_file to specify the name of the log file. If a log file already is open, it is closed and the new file is opened.
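A minimal sketch of changing the log file name at runtime (the path is illustrative):

SET GLOBAL general_log = 'OFF';
SET GLOBAL general_log_file = '/var/log/mysql/query.log';
SET GLOBAL general_log = 'ON';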

When the general query log is enabled, the server writes output to any destinations specified by the --log-output option or log_output system variable. If you enable the log, the server opens the log file and writes startup messages to it. However, further logging of queries to the file does not occur unless the FILE log destination is selected. If the destination is NONE, the server writes no queries even if the general log is enabled. Setting the log file name has no effect on logging if the log destination value does not contain FILE.

Server restarts and log flushing do not cause a new general query log file to be generated (although flushing closes and reopens it). You can rename the file and create a new one by using the following commands:

shell> mv host_name.log host_name-old.log
shell> mysqladmin flush-logs
shell> mv host_name-old.log backup-directory

On Windows, use rename rather than mv.

You can also rename the general query log file at runtime by disabling the log:

SET GLOBAL general_log = 'OFF';

With the log disabled, rename the log file externally; for example, from the command line. Then enable the log again:

SET GLOBAL general_log = 'ON';

This method works on any platform and does not require a server restart.

The session sql_log_off variable can be set to ON or OFF to disable or enable general query logging for the current connection.

The general query log should be protected because logged statements might contain passwords. See Section 6.1.2.3, "Passwords and Logging".

5.2.4. The Binary Log

The binary log contains "events" that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes (for example, a DELETE which matched no rows), unless row-based logging is used. The binary log also contains information about how long each statement took that updated data. The binary log has two important purposes:

  • For replication, the binary log on a master replication server provides a record of the data changes to be sent to slave servers. The master server sends the events contained in its binary log to its slaves, which execute those events to make the same data changes that were made on the master. See Section 16.2, "Replication Implementation".

  • Certain data recovery operations require use of the binary log. After a backup has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup. See Section 7.5, "Point-in-Time (Incremental) Recovery Using the Binary Log".

The binary log is not used for statements such as SELECT or SHOW that do not modify data. To log all statements (for example, to identify a problem query), use the general query log. See Section 5.2.3, "The General Query Log".

Running a server with binary logging enabled makes performance slightly slower. However, the benefits of the binary log in enabling you to set up replication and for restore operations generally outweigh this minor performance decrement.

The binary log should be protected because logged statements might contain passwords. See Section 6.1.2.3, "Passwords and Logging".

The following discussion describes some of the server options and variables that affect the operation of binary logging. For a complete list, see Section 16.1.3.4, "Binary Log Options and Variables".

To enable the binary log, start the server with the --log-bin[=base_name] option. If no base_name value is given, the default name is the value of the pid-file option (which by default is the name of the host machine) followed by -bin. If the basename is given, the server writes the file in the data directory unless the basename is given with a leading absolute path name to specify a different directory. It is recommended that you specify a basename explicitly rather than using the default of the host name; see Section C.5.8, "Known Issues in MySQL", for the reason.

If you supply an extension in the log name (for example, --log-bin=base_name.extension), the extension is silently removed and ignored.

mysqld appends a numeric extension to the binary log basename to generate binary log file names. The number increases each time the server creates a new log file, thus creating an ordered series of files. The server creates a new file in the series each time it starts or flushes the logs. The server also creates a new binary log file automatically after the current log's size reaches max_binlog_size. A binary log file may become larger than max_binlog_size if you are using large transactions because a transaction is written to the file in one piece, never split between files.

To keep track of which binary log files have been used, mysqld also creates a binary log index file that contains the names of all used binary log files. By default, this has the same basename as the binary log file, with the extension '.index'. You can change the name of the binary log index file with the --log-bin-index[=file_name] option. You should not manually edit this file while mysqld is running; doing so would confuse mysqld.
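As a configuration sketch (the base name and size are illustrative), binary logging might be set up in my.cnf like this:

[mysqld]
log-bin=mysql-bin
max_binlog_size=100M

This produces a series of files named mysql-bin.000001, mysql-bin.000002, and so on, plus the index file mysql-bin.index.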

The term "binary log file" generally denotes an individual numbered file containing database events. The term "binary log" collectively denotes the set of numbered binary log files plus the index file.

A client that has the SUPER privilege can disable binary logging of its own statements by using a SET sql_log_bin=0 statement. See Section 5.1.4, "Server System Variables".
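For example, a client with the SUPER privilege can wrap statements that should not be replicated:

SET sql_log_bin = 0;
-- statements executed here are not written to the binary log
SET sql_log_bin = 1;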

The format of the events recorded in the binary log depends on the binary logging format. Three format types are supported: row-based logging, statement-based logging, and mixed-format logging. The binary logging format used depends on the MySQL version. For general descriptions of the logging formats, see Section 5.2.4.1, "Binary Logging Formats". For detailed information about the format of the binary log, see MySQL Internals: The Binary Log.

The server evaluates the --binlog-do-db and --binlog-ignore-db options in the same way as it does the --replicate-do-db and --replicate-ignore-db options. For information about how this is done, see Section 16.2.3.1, "Evaluation of Database-Level Replication and Binary Logging Options".

If you are replicating from a MySQL Cluster to a standalone MySQL Server, you should be aware that the NDB storage engine uses default values for some binary logging options (including options specific to NDB such as --ndb-log-update-as-write) that differ from those used by other storage engines. If not corrected for, these differences can lead to divergence of the master's and slave's binary logs. For more information, see Replication from NDB to other storage engines. In particular, if you are using a nontransactional storage engine such as MyISAM on the slave, see Replication from NDB to a nontransactional storage engine.

A replication slave server by default does not write to its own binary log any data modifications that are received from the replication master. To log these modifications, start the slave with the --log-slave-updates option in addition to the --log-bin option (see Section 16.1.3.3, "Replication Slave Options and Variables"). This is done when a slave is also to act as a master to other slaves in chained replication.

You can delete all binary log files with the RESET MASTER statement, or a subset of them with PURGE BINARY LOGS. See Section 13.7.6.6, "RESET Syntax", and Section 13.4.1.1, "PURGE BINARY LOGS Syntax".

If you are using replication, you should not delete old binary log files on the master until you are sure that no slave still needs to use them. For example, if your slaves never run more than three days behind, once a day you can execute mysqladmin flush-logs on the master and then remove any logs that are more than three days old. You can remove the files manually, but it is preferable to use PURGE BINARY LOGS, which also safely updates the binary log index file for you (and which can take a date argument). See Section 13.4.1.1, "PURGE BINARY LOGS Syntax".
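For example, following the three-day policy just described (adjust the interval to your slaves' maximum lag):

PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 3 DAY);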

You can display the contents of binary log files with the mysqlbinlog utility. This can be useful when you want to reprocess statements in the log for a recovery operation. For example, you can update a MySQL server from the binary log as follows:

shell> mysqlbinlog log_file | mysql -h server_name

mysqlbinlog also can be used to display replication slave relay log file contents because they are written using the same format as binary log files. For more information on the mysqlbinlog utility and how to use it, see Section 4.6.7, "mysqlbinlog - Utility for Processing Binary Log Files". For more information about the binary log and recovery operations, see Section 7.5, "Point-in-Time (Incremental) Recovery Using the Binary Log".

Binary logging is done immediately after a statement or transaction completes but before any locks are released or any commit is done. This ensures that events are logged in commit order.

Updates to nontransactional tables are stored in the binary log immediately after execution.

Within an uncommitted transaction, all updates (UPDATE, DELETE, or INSERT) that change transactional tables such as InnoDB tables are cached until a COMMIT statement is received by the server. At that point, mysqld writes the entire transaction to the binary log before the COMMIT is executed.

Modifications to nontransactional tables cannot be rolled back. If a transaction that is rolled back includes modifications to nontransactional tables, the entire transaction is logged with a ROLLBACK statement at the end to ensure that the modifications to those tables are replicated.

When a thread that handles the transaction starts, it allocates a buffer of binlog_cache_size bytes to buffer statements. If a statement is bigger than this, the thread opens a temporary file to store the transaction. The temporary file is deleted when the thread ends.

The Binlog_cache_use status variable shows the number of transactions that used this buffer (and possibly a temporary file) for storing statements. The Binlog_cache_disk_use status variable shows how many of those transactions actually had to use a temporary file. These two variables can be used for tuning binlog_cache_size to a large enough value that avoids the use of temporary files.
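For example, a sketch of this tuning check:

SHOW GLOBAL STATUS LIKE 'Binlog_cache%';

If Binlog_cache_disk_use is high relative to Binlog_cache_use, consider increasing binlog_cache_size.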

The max_binlog_cache_size system variable (default 4GB, which is also the maximum) can be used to restrict the total size used to cache a multiple-statement transaction. If a transaction is larger than this many bytes, it fails and rolls back. The minimum value is 4096.

If you are using the binary log and row-based logging, concurrent inserts are converted to normal inserts for CREATE ... SELECT or INSERT ... SELECT statements. This is done to ensure that you can re-create an exact copy of your tables by applying the log during a backup operation. If you are using statement-based logging, the original statement is written to the log.

The binary log format has some known limitations that can affect recovery from backups. See Section 16.4.1, "Replication Features and Issues".

Binary logging for stored programs is done as described in Section 19.7, "Binary Logging of Stored Programs".

Note that the binary log format differs in MySQL 5.5 from previous versions of MySQL, due to enhancements in replication. See Section 16.4.2, "Replication Compatibility Between MySQL Versions".

Writes to the binary log file and binary log index file are handled in the same way as writes to MyISAM tables. See Section C.5.4.3, "How MySQL Handles a Full Disk".

By default, the binary log is not synchronized to disk at each write. So if the operating system or machine (not only the MySQL server) crashes, there is a chance that the last statements of the binary log are lost. To prevent this, you can make the binary log be synchronized to disk after every N writes to the binary log, with the sync_binlog system variable. See Section 5.1.4, "Server System Variables". 1 is the safest value for sync_binlog, but also the slowest. Even with sync_binlog set to 1, there is still the chance of an inconsistency between the table content and binary log content in case of a crash. For example, if you are using InnoDB tables and the MySQL server processes a COMMIT statement, it writes the whole transaction to the binary log and then commits this transaction into InnoDB. If the server crashes between those two operations, the transaction is rolled back by InnoDB at restart but still exists in the binary log. To resolve this, you should set --innodb_support_xa to 1. Although this option is related to the support of XA transactions in InnoDB, it also ensures that the binary log and InnoDB data files are synchronized.

For this option to provide a greater degree of safety, the MySQL server should also be configured to synchronize the binary log and the InnoDB logs to disk at every transaction. The InnoDB logs are synchronized by default, and sync_binlog=1 can be used to synchronize the binary log. The effect of this option is that at restart after a crash, after doing a rollback of transactions, the MySQL server cuts rolled back InnoDB transactions from the binary log. This ensures that the binary log reflects the exact data of InnoDB tables, and so, that the slave remains in synchrony with the master (not receiving a statement which has been rolled back).
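A configuration sketch of the durable combination discussed above:

[mysqld]
sync_binlog=1
innodb_support_xa=1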

If the MySQL server discovers at crash recovery that the binary log is shorter than it should have been, it lacks at least one successfully committed InnoDB transaction. This should not happen if sync_binlog=1 and the disk or file system performs an actual sync when requested (some do not), and in this case the server prints the error message The binary log file_name is shorter than its expected size. The binary log is then not correct, and replication should be restarted from a fresh snapshot of the master's data.

The session values of the following system variables are written to the binary log and honored by the replication slave when parsing the binary log:

  • sql_mode

  • foreign_key_checks

  • unique_checks

  • character_set_client

  • collation_connection

  • collation_database

  • collation_server

  • sql_auto_is_null

5.2.4.1. Binary Logging Formats

The server uses several logging formats to record information in the binary log. The exact format employed depends on the version of MySQL being used. There are three logging formats:

  • Replication capabilities in MySQL originally were based on propagation of SQL statements from master to slave. This is called statement-based logging. You can cause this format to be used by starting the server with --binlog-format=STATEMENT.

  • In row-based logging, the master writes events to the binary log that indicate how individual table rows are affected. You can cause the server to use row-based logging by starting it with --binlog-format=ROW.

  • A third option is also available: mixed logging. With mixed logging, statement-based logging is used by default, but the logging mode switches automatically to row-based in certain cases as described below. You can cause MySQL to use mixed logging explicitly by starting mysqld with the option --binlog-format=MIXED.

In MySQL 5.5, the default binary logging format is STATEMENT.

The logging format can also be set or limited by the storage engine being used. This helps to eliminate issues when replicating certain statements between a master and slave which are using different storage engines.

With statement-based replication, there may be issues with replicating nondeterministic statements. In deciding whether or not a given statement is safe for statement-based replication, MySQL determines whether it can guarantee that the statement can be replicated using statement-based logging. If MySQL cannot make this guarantee, it marks the statement as potentially unreliable and issues the warning, Statement may not be safe to log in statement format.

You can avoid these issues by using MySQL's row-based replication instead.

5.2.4.2. Setting The Binary Log Format

You can select the binary logging format explicitly by starting the MySQL server with --binlog-format=type. The supported values for type are:

  • STATEMENT causes logging to be statement based.

  • ROW causes logging to be row based.

  • MIXED causes logging to use mixed format.

In MySQL 5.5, the default binary logging format is STATEMENT. This includes MySQL Cluster NDB 7.2.1 and later MySQL Cluster NDB 7.2 releases, which are based on MySQL 5.5.

The logging format also can be switched at runtime. To specify the format globally for all clients, set the global value of the binlog_format system variable:

mysql> SET GLOBAL binlog_format = 'STATEMENT';
mysql> SET GLOBAL binlog_format = 'ROW';
mysql> SET GLOBAL binlog_format = 'MIXED';

An individual client can control the logging format for its own statements by setting the session value of binlog_format:

mysql> SET SESSION binlog_format = 'STATEMENT';
mysql> SET SESSION binlog_format = 'ROW';
mysql> SET SESSION binlog_format = 'MIXED';
Note

Each MySQL Server can set its own and only its own binary logging format (true whether binlog_format is set with global or session scope). This means that changing the logging format on a replication master does not cause a slave to change its logging format to match. (When using STATEMENT mode, the binlog_format system variable is not replicated; when using MIXED or ROW logging mode, it is replicated but is ignored by the slave.) Changing the binary logging format on the master while replication is ongoing, or without also changing it on the slave, can thus cause unexpected results, or even cause replication to fail altogether.

To change the global or session binlog_format value, you must have the SUPER privilege.

In addition to switching the logging format manually, a slave server may switch the format automatically. This happens when the server is running in either STATEMENT or MIXED format and encounters an event in the binary log that is written in ROW logging format. In that case, the slave switches to row-based replication temporarily for that event, and switches back to the previous format afterward.

There are several reasons why a client might want to set binary logging on a per-session basis:

  • A session that makes many small changes to the database might want to use row-based logging.

  • A session that performs updates that match many rows in the WHERE clause might want to use statement-based logging because it will be more efficient to log a few statements than many rows.

  • Some statements require a lot of execution time on the master, but result in just a few rows being modified. It might therefore be beneficial to replicate them using row-based logging.

There are exceptions when you cannot switch the replication format at runtime:

  • From within a stored function or a trigger

  • If the NDBCLUSTER storage engine is enabled

  • If the session is currently in row-based replication mode and has open temporary tables

Trying to switch the format in any of these cases results in an error.

If you are using InnoDB tables and the transaction isolation level is READ COMMITTED or READ UNCOMMITTED, only row-based logging can be used. It is possible to change the logging format to STATEMENT, but doing so at runtime causes a warning to be issued, and leads very rapidly to errors because InnoDB can no longer perform inserts.

Switching the replication format at runtime is not recommended when any temporary tables exist, because temporary tables are logged only when using statement-based replication, whereas with row-based replication they are not logged. With mixed replication, temporary tables are usually logged; exceptions happen with user-defined functions (UDFs) and with the UUID() function.

With the binary log format set to ROW, many changes are written to the binary log using the row-based format. Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE.

The --binlog-row-event-max-size option is available for servers that are capable of row-based replication. Rows are stored into the binary log in chunks having a size in bytes not exceeding the value of this option. The value must be a multiple of 256. The default value is 1024.

Warning

When using statement-based logging for replication, it is possible for the data on the master and slave to become different if a statement is designed in such a way that the data modification is nondeterministic; that is, it is left to the will of the query optimizer. In general, this is not a good practice even outside of replication. For a detailed explanation of this issue, see Section C.5.8, "Known Issues in MySQL".

5.2.4.3. Mixed Binary Logging Format

When running in MIXED logging format, the server automatically switches from statement-based to row-based logging under the following conditions:

Note

A warning is generated if you try to execute a statement using statement-based logging that should be written using row-based logging. The warning is shown both in the client (in the output of SHOW WARNINGS) and through the mysqld error log. A warning is added to the SHOW WARNINGS table each time such a statement is executed. However, only the first statement that generated the warning for each client session is written to the error log to prevent flooding the log.

In addition to the decisions above, individual engines can also determine the logging format used when information in a table is updated. The logging capabilities of an individual engine can be defined as follows:

  • If an engine supports row-based logging, the engine is said to be row-logging capable.

  • If an engine supports statement-based logging, the engine is said to be statement-logging capable.

A given storage engine can support either or both logging formats. The following table lists the formats supported by each engine.

Storage Engine | Row Logging Supported | Statement Logging Supported
ARCHIVE | Yes | Yes
BLACKHOLE | Yes | Yes
CSV | Yes | Yes
EXAMPLE | Yes | No
FEDERATED | Yes | Yes
HEAP | Yes | Yes
InnoDB | Yes | Yes, when the transaction isolation level is REPEATABLE READ or SERIALIZABLE; No otherwise
MyISAM | Yes | Yes
MERGE | Yes | Yes
NDBCLUSTER | Yes | No

In MySQL 5.5.3 and later, whether a statement is to be logged and the logging mode to be used is determined according to the type of statement (safe, unsafe, or binary injected), the binary logging format (STATEMENT, ROW, or MIXED), and the logging capabilities of the storage engine (statement capable, row capable, both, or neither). Statements may be logged with or without a warning; failed statements are not logged, but generate errors in the log. This is shown in the following decision table, where SLC stands for "statement-logging capable" and RLC stands for "row-logging capable".

Type | binlog_format | SLC | RLC | Error / Warning | Logged as
* | * | No | No | Error: Cannot execute statement: Binary logging is impossible since at least one engine is involved that is both row-incapable and statement-incapable. | -
Safe | STATEMENT | Yes | No | - | STATEMENT
Safe | MIXED | Yes | No | - | STATEMENT
Safe | ROW | Yes | No | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = ROW and at least one table uses a storage engine that is not capable of row-based logging. | -
Unsafe | STATEMENT | Yes | No | Warning: Unsafe statement binlogged in statement format, since BINLOG_FORMAT = STATEMENT | STATEMENT
Unsafe | MIXED | Yes | No | Error: Cannot execute statement: Binary logging of an unsafe statement is impossible when the storage engine is limited to statement-based logging, even if BINLOG_FORMAT = MIXED. | -
Unsafe | ROW | Yes | No | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = ROW and at least one table uses a storage engine that is not capable of row-based logging. | -
Row Injection | STATEMENT | Yes | No | Error: Cannot execute row injection: Binary logging is not possible since at least one table uses a storage engine that is not capable of row-based logging. | -
Row Injection | MIXED | Yes | No | Error: Cannot execute row injection: Binary logging is not possible since at least one table uses a storage engine that is not capable of row-based logging. | -
Row Injection | ROW | Yes | No | Error: Cannot execute row injection: Binary logging is not possible since at least one table uses a storage engine that is not capable of row-based logging. | -
Safe | STATEMENT | No | Yes | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine that is not capable of statement-based logging. | -
Safe | MIXED | No | Yes | - | ROW
Safe | ROW | No | Yes | - | ROW
Unsafe | STATEMENT | No | Yes | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine that is not capable of statement-based logging. | -
Unsafe | MIXED | No | Yes | - | ROW
Unsafe | ROW | No | Yes | - | ROW
Row Injection | STATEMENT | No | Yes | Error: Cannot execute row injection: Binary logging is not possible since BINLOG_FORMAT = STATEMENT. | -
Row Injection | MIXED | No | Yes | - | ROW
Row Injection | ROW | No | Yes | - | ROW
Safe | STATEMENT | Yes | Yes | - | STATEMENT
Safe | MIXED | Yes | Yes | - | ROW
Safe | ROW | Yes | Yes | - | ROW
Unsafe | STATEMENT | Yes | Yes | Warning: Unsafe statement binlogged in statement format since BINLOG_FORMAT = STATEMENT. | STATEMENT
Unsafe | MIXED | Yes | Yes | - | ROW
Unsafe | ROW | Yes | Yes | - | ROW
Row Injection | STATEMENT | Yes | Yes | Error: Cannot execute row injection: Binary logging is not possible because BINLOG_FORMAT = STATEMENT. | -
Row Injection | MIXED | Yes | Yes | - | ROW
Row Injection | ROW | Yes | Yes | - | ROW

Handling of mixed-format logging in MySQL 5.5.2 and earlier. The decision-making process for binary logging changed in MySQL 5.5.3, due to the fix for Bug #39934. Prior to MySQL 5.5.3, when determining the logging mode to be used, the capabilities of all the tables affected by the event are combined, and the set of affected tables is then marked according to these rules:

  • A set of tables is defined as row-logging restricted if the tables are row-logging capable but not statement-logging capable.

  • A set of tables is defined as statement-logging restricted if the tables are statement-logging capable but not row-logging capable.

Once the determination of the possible logging formats required by the statement is complete, it is compared to the current binlog_format setting. The following table is used in MySQL 5.5.2 and earlier to decide how the information is recorded in the binary log or, if appropriate, whether an error is raised. In the table, a safe operation is defined as one that is deterministic.

In MySQL 5.5.2 and earlier, several rules decide whether the statement is deterministic, as shown in the following table, where SLR stands for "statement-logging restricted" and RLR stands for "row-logging restricted". A statement is statement-logging restricted if one or more of the tables it accesses is not row-logging capable. Similarly, a statement is row-logging restricted if any table accessed by the statement is not statement-logging capable.

Safe/unsafe | binlog_format | SLR | RLR | Error / Warning | Logged as
Safe | STATEMENT | Yes | Yes | Error: not loggable | -
Safe | STATEMENT | Yes | No | - | STATEMENT
Safe | STATEMENT | No | Yes | Error: not loggable | -
Safe | STATEMENT | No | No | - | STATEMENT
Safe | MIXED | Yes | Yes | Error: not loggable | -
Safe | MIXED | Yes | No | - | STATEMENT
Safe | MIXED | No | Yes | - | ROW
Safe | MIXED | No | No | - | STATEMENT
Safe | ROW | Yes | Yes | Error: not loggable | -
Safe | ROW | Yes | No | Error: not loggable | -
Safe | ROW | No | Yes | - | ROW
Safe | ROW | No | No | - | ROW
Unsafe | STATEMENT | Yes | Yes | Error: not loggable | -
Unsafe | STATEMENT | Yes | No | Warning: unsafe | STATEMENT
Unsafe | STATEMENT | No | Yes | Error: not loggable | -
Unsafe | STATEMENT | No | No | Warning: unsafe | STATEMENT
Unsafe | MIXED | Yes | Yes | Error: not loggable | -
Unsafe | MIXED | Yes | No | Error: not loggable | -
Unsafe | MIXED | No | Yes | - | ROW
Unsafe | MIXED | No | No | - | ROW
Unsafe | ROW | Yes | Yes | Error: not loggable | -
Unsafe | ROW | Yes | No | Error: not loggable | -
Unsafe | ROW | No | Yes | - | ROW
Unsafe | ROW | No | No | - | ROW

In all MySQL 5.5 releases, when a warning is produced by the determination, a standard MySQL warning is produced (and is available using SHOW WARNINGS). The information is also written to the mysqld error log. Only one error for each error instance per client connection is logged to prevent flooding the log. The log message includes the SQL statement that was attempted.

If a slave server was started with --log-warnings enabled, the slave prints messages to the error log to provide information about its status, such as the binary log and relay log coordinates where it starts its job, when it is switching to another relay log, when it reconnects after a disconnect, and so forth.

5.2.4.4. Logging Format for Changes to mysql Database Tables

The contents of the grant tables in the mysql database can be modified directly (for example, with INSERT or DELETE) or indirectly (for example, with GRANT or CREATE USER). Statements that affect mysql database tables are written to the binary log using the following rules:

  • Data manipulation statements that change data in mysql database tables directly are logged according to the setting of the binlog_format system variable.

  • Statements that change data indirectly, such as GRANT, REVOKE, SET PASSWORD, RENAME USER, and CREATE USER, are logged as statements regardless of the value of binlog_format.

CREATE TABLE ... SELECT is a combination of data definition and data manipulation. The CREATE TABLE part is logged using statement format and the SELECT part is logged according to the value of binlog_format.

5.2.5. The Slow Query Log

The slow query log consists of SQL statements that took more than long_query_time seconds to execute and required at least min_examined_row_limit rows to be examined. The minimum and default values of long_query_time are 0 and 10, respectively. The value can be specified to a resolution of microseconds. For logging to a file, times are written including the microseconds part. For logging to tables, only integer times are written; the microseconds part is ignored.

The time to acquire the initial table locks is not counted as execution time. mysqld writes a statement to the slow query log after it has been executed and after all locks have been released, so log order might differ from execution order.

By default, the slow query log is disabled. To specify the initial slow query log state explicitly, use --slow_query_log[={0|1}]. With no argument or an argument of 1, --slow_query_log enables the log. With an argument of 0, this option disables the log. To specify a log file name, use --slow_query_log_file=file_name. To specify the log destination, use --log-output (as described in Section 5.2.1, "Selecting General Query and Slow Query Log Output Destinations"). The older option to enable the slow query log file, --log-slow-queries, is deprecated.
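As a configuration sketch (the file name and threshold are illustrative), the slow query log might be enabled in my.cnf like this:

[mysqld]
slow_query_log=1
slow_query_log_file=/var/log/mysql/slow.log
long_query_time=2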

If you specify no name for the slow query log file, the default name is host_name-slow.log. The server creates the file in the data directory unless an absolute path name is given to specify a different directory.

To disable or enable the slow query log or change the log file name at runtime, use the global slow_query_log and slow_query_log_file system variables. Set slow_query_log to 0 (or OFF) to disable the log or to 1 (or ON) to enable it. Set slow_query_log_file to specify the name of the log file. If a log file already is open, it is closed and the new file is opened.

When the slow query log is enabled, the server writes output to any destinations specified by the --log-output option or log_output system variable. If you enable the log, the server opens the log file and writes startup messages to it. However, further logging of queries to the file does not occur unless the FILE log destination is selected. If the destination is NONE, the server writes no queries even if the slow query log is enabled. Setting the log file name has no effect on logging if the log destination value does not contain FILE.

The server writes less information to the slow query log (and binary log) if you use the --log-short-format option.

To include slow administrative statements such as OPTIMIZE TABLE, ANALYZE TABLE, and ALTER TABLE in the statements written to the slow query log, use the --log-slow-admin-statements server option.

To include queries that do not use indexes for row lookups in the statements written to the slow query log, enable the log_queries_not_using_indexes system variable. When such queries are logged, the slow query log may grow quickly.

The server uses the controlling parameters in the following order to determine whether to write a query to the slow query log:

  1. The query must either not be an administrative statement, or --log-slow-admin-statements must have been specified.

  2. The query must have taken at least long_query_time seconds, or log_queries_not_using_indexes must be enabled and the query used no indexes for row lookups.

  3. The query must have examined at least min_examined_row_limit rows.

The server does not write queries handled by the query cache to the slow query log, nor queries that would not benefit from the presence of an index because the table has zero rows or one row.

By default, a replication slave does not write replicated queries to the slow query log. To change this, use the --log-slow-slave-statements server option.

The slow query log should be protected because logged statements might contain passwords. See Section 6.1.2.3, "Passwords and Logging".

The slow query log can be used to find queries that take a long time to execute and are therefore candidates for optimization. However, examining a long slow query log can become a difficult task. To make this easier, you can process a slow query log file using the mysqldumpslow command to summarize the queries that appear in the log. See Section 4.6.8, "mysqldumpslow - Summarize Slow Query Log Files".
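For example, to summarize the ten slowest query patterns sorted by query time (the log file path is illustrative):

shell> mysqldumpslow -s t -t 10 /var/lib/mysql/host_name-slow.log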

5.2.6. Server Log Maintenance

As described in Section 5.2, "MySQL Server Logs", MySQL Server can create several different log files to help you see what activity is taking place. However, you must clean up these files regularly to ensure that the logs do not take up too much disk space.

When using MySQL with logging enabled, you may want to back up and remove old log files from time to time and tell MySQL to start logging to new files. See Section 7.2, "Database Backup Methods".

On a Linux (Red Hat) installation, you can use the mysql-log-rotate script for this. If you installed MySQL from an RPM distribution, this script should have been installed automatically. Be careful with this script if you are using the binary log for replication. You should not remove binary logs until you are certain that their contents have been processed by all slaves.

On other systems, you must install a short script yourself that you start from cron (or its equivalent) for handling log files.

For the binary log, you can set the expire_logs_days system variable to expire binary log files automatically after a given number of days (see Section 5.1.4, "Server System Variables"). If you are using replication, you should set the variable no lower than the maximum number of days your slaves might lag behind the master. To remove binary logs on demand, use the PURGE BINARY LOGS statement (see Section 13.4.1.1, "PURGE BINARY LOGS Syntax").
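For example, with slaves that never lag more than three days, a setting such as the following in my.cnf (the value is illustrative) removes old binary log files automatically:

[mysqld]
expire_logs_days=7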

You can force MySQL to start using new log files by flushing the logs. Log flushing occurs when you issue a FLUSH LOGS statement or execute a mysqladmin flush-logs, mysqladmin refresh, mysqldump --flush-logs, or mysqldump --master-data command. See Section 13.7.6.3, "FLUSH Syntax", Section 4.5.2, "mysqladmin - Client for Administering a MySQL Server", and Section 4.5.4, "mysqldump - A Database Backup Program". In addition, the binary log is flushed when its size reaches the value of the max_binlog_size system variable.

As of MySQL 5.5.3, FLUSH LOGS supports optional modifiers to enable selective flushing of individual logs (for example, FLUSH BINARY LOGS).
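For example:

FLUSH BINARY LOGS;
FLUSH SLOW LOGS;
FLUSH ERROR LOGS;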

A log-flushing operation does the following:

  • If general query logging or slow query logging to a log file is enabled, the server closes and reopens the general query log file or slow query log file.

  • If binary logging is enabled, the server closes the current binary log file and opens a new log file with the next sequence number.

  • If the server was started with the --log-error option to cause the error log to be written to a file, the result of a log-flushing operation is version dependent:

    • As of MySQL 5.5.7, the server closes and reopens the log file.

    • Prior to MySQL 5.5.7, the server renames the current log file with the suffix -old, then creates a new empty log file.

The server creates a new binary log file when you flush the logs. However, it just closes and reopens the general and slow query log files. To cause new files to be created on Unix, rename the current log files before flushing them. At flush time, the server opens new log files with the original names. For example, if the general and slow query log files are named mysql.log and mysql-slow.log, you can use a series of commands like this:

shell> cd mysql-data-directory
shell> mv mysql.log mysql.old
shell> mv mysql-slow.log mysql-slow.old
shell> mysqladmin flush-logs

On Windows, use rename rather than mv.

At this point, you can make a backup of mysql.old and mysql-slow.old and then remove them from disk.

A similar strategy can be used to back up the error log file, if there is one, except that before MySQL 5.5.7 on Windows, you cannot rename the error log file while the server has it open. To rename the error log file without stopping and restarting the server, flush the logs; this causes the server to rename the current log file with the suffix -old and create a new empty error log file. For further information, see Section 5.2.2, "The Error Log".

You can rename the general query log or slow query log at runtime by disabling the log:

SET GLOBAL general_log = 'OFF';
SET GLOBAL slow_query_log = 'OFF';

With the logs disabled, rename the log files externally; for example, from the command line. Then enable the logs again:

SET GLOBAL general_log = 'ON';
SET GLOBAL slow_query_log = 'ON';

This method works on any platform and does not require a server restart.

5.3. Running Multiple MySQL Instances on One Machine

In some cases, you might want to run multiple instances of MySQL on a single machine. You might want to test a new MySQL release while leaving an existing production setup undisturbed. Or you might want to give different users access to different mysqld servers that they manage themselves. (For example, you might be an Internet Service Provider that wants to provide independent MySQL installations for different customers.)

It is possible to use a different MySQL server binary per instance, or use the same binary for multiple instances, or any combination of the two approaches. For example, you might run a server from MySQL 5.1 and one from MySQL 5.5, to see how different versions handle a given workload. Or you might run multiple instances of the current production version, each managing a different set of databases.

Whether or not you use distinct server binaries, each instance that you run must be configured with unique values for several operating parameters. This eliminates the potential for conflict between instances. Parameters can be set on the command line, in option files, or by setting environment variables. See Section 4.2.3, "Specifying Program Options". To see the values used by a given instance, connect to it and execute a SHOW VARIABLES statement.

The primary resource managed by a MySQL instance is the data directory. Each instance should use a different data directory, the location of which is specified using the --datadir=path option. For methods of configuring each instance with its own data directory, and warnings about the dangers of failing to do so, see Section 5.3.1, "Setting Up Multiple Data Directories".

In addition to using different data directories, several other options must have different values for each server instance:

  • --port=port_num

    --port controls the port number for TCP/IP connections. Alternatively, if the host has multiple network addresses, you can use --bind-address to cause each server to listen to a different address.

  • --socket=path

    --socket controls the Unix socket file path on Unix or the named pipe name on Windows. On Windows, it is necessary to specify distinct pipe names only for those servers configured to permit named-pipe connections.

  • --shared-memory-base-name=name

    This option is used only on Windows. It designates the shared-memory name used by a Windows server to permit clients to connect using shared memory. It is necessary to specify distinct shared-memory names only for those servers configured to permit shared-memory connections.

  • --pid-file=file_name

    This option indicates the path name of the file in which the server writes its process ID.

If you use the following log file options, their values must differ for each server:

  • --general_log_file=file_name

  • --log-bin[=file_name] and --log-bin-index[=file_name]

  • --relay-log[=file_name] and --relay-log-index[=file_name]

  • --log-error[=file_name]

  • --slow_query_log_file=file_name

For further discussion of log file options, see Section 5.2, "MySQL Server Logs".

To achieve better performance, you can specify the following option differently for each server, to spread the load between several physical disks:

  • --tmpdir=path

Having different temporary directories also makes it easier to determine which MySQL server created any given temporary file.

If you have multiple MySQL installations in different locations, you can specify the base directory for each installation with the --basedir=path option. This causes each instance to automatically use a different data directory, log files, and PID file because the default for each of those parameters is relative to the base directory. In that case, the only other options you need to specify are the --socket and --port options. Suppose that you install different versions of MySQL using tar file binary distributions. These install in different locations, so you can start the server for each installation using the command bin/mysqld_safe under its corresponding base directory. mysqld_safe determines the proper --basedir option to pass to mysqld, and you need specify only the --socket and --port options to mysqld_safe.

As discussed in the following sections, it is possible to start additional servers by specifying appropriate command options or by setting environment variables. However, if you need to run multiple servers on a more permanent basis, it is more convenient to use option files to specify for each server those option values that must be unique to it. The --defaults-file option is useful for this purpose.

5.3.1. Setting Up Multiple Data Directories

Each MySQL instance on a machine should have its own data directory. The location is specified using the --datadir=path option.

There are different methods of setting up a data directory for a new instance:

  • Create a new data directory.

  • Copy an existing data directory.

The following discussion provides more detail about each method.

Warning

Normally, you should never have two servers that update data in the same databases. This may lead to unpleasant surprises if your operating system does not support fault-free system locking. If (despite this warning) you run multiple servers using the same data directory and they have logging enabled, you must use the appropriate options to specify log file names that are unique to each server. Otherwise, the servers try to log to the same files.

Even when the preceding precautions are observed, this kind of setup works only with MyISAM and MERGE tables, and not with any of the other storage engines. Also, this warning against sharing a data directory among servers always applies in an NFS environment. Permitting multiple MySQL servers to access a common data directory over NFS is a very bad idea. The primary problem is that NFS is the speed bottleneck. It is not meant for such use. Another risk with NFS is that you must devise a way to ensure that two or more servers do not interfere with each other. Usually NFS file locking is handled by the lockd daemon, but at the moment there is no platform that performs locking 100% reliably in every situation.

Create a New Data Directory

With this method, the data directory will be in the same state as when you first install MySQL. It will have the default set of MySQL accounts and no user data.

On Unix, initialize the data directory by running mysql_install_db. See Section 2.11.1, "Unix Postinstallation Procedures".
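For example, assuming that the new instance's data directory is /usr/local/mysql/data2 (a hypothetical path), you might initialize it like this:

shell> mysql_install_db --user=mysql --datadir=/usr/local/mysql/data2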

On Windows, the data directory is included in the MySQL distribution:

  • MySQL Zip archive distributions for Windows contain an unmodified data directory. You can unpack such a distribution into a temporary location, then copy its data directory to the location where you are setting up the new instance.

  • As of MySQL 5.5.5, Windows MSI package installers create and set up the data directory that the installed server will use, but also create a pristine "template" data directory named data under the installation directory. After an installation has been performed using an MSI package, the template data directory can be copied to set up additional MySQL instances.

Copy an Existing Data Directory

With this method, any MySQL accounts or user data present in the data directory are carried over to the new data directory. A minimal shell sketch of the whole procedure appears after the steps below.

  1. Stop the existing MySQL instance using the data directory. This must be a clean shutdown so that the instance flushes any pending changes to disk.

  2. Copy the data directory to the location where the new data directory should be.

  3. Copy the my.cnf or my.ini option file used by the existing instance. This serves as a basis for the new instance.

  4. Modify the new option file so that any pathnames referring to the original data directory refer to the new data directory. Also, modify any other options that must be unique per instance, such as the TCP/IP port number and the log files. For a list of parameters that must be unique per instance, see Section 5.3, "Running Multiple MySQL Instances on One Machine".

  5. Start the new instance, telling it to use the new option file.
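On Unix, the whole procedure might look like this minimal sketch; all path names and option values are hypothetical:

shell> mysqladmin --socket=/tmp/mysql.sock shutdown        # step 1: clean shutdown
shell> cp -a /usr/local/mysql/data /usr/local/mysql/data2  # step 2: copy the data directory
shell> cp /etc/my.cnf /etc/my2.cnf                         # step 3: copy the option file
shell> # step 4: edit /etc/my2.cnf to change datadir, port, socket, and log file names
shell> mysqld_safe --defaults-file=/etc/my2.cnf &          # step 5: start the new instance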

5.3.2. Running Multiple MySQL Instances on Windows

You can run multiple servers on Windows by starting them manually from the command line, each with appropriate operating parameters, or by installing several servers as Windows services and running them that way. General instructions for running MySQL from the command line or as a service are given in Section 2.3, "Installing MySQL on Microsoft Windows". The following sections describe how to start each server with different values for those options that must be unique per server, such as the data directory. These options are listed in Section 5.3, "Running Multiple MySQL Instances on One Machine".

5.3.2.1. Starting Multiple MySQL Instances at the Windows Command Line

The procedure for starting a single MySQL server manually from the command line is described in Section 2.3.7.5, "Starting MySQL from the Windows Command Line". To start multiple servers this way, you can specify the appropriate options on the command line or in an option file. It is more convenient to place the options in an option file, but it is necessary to make sure that each server gets its own set of options. To do this, create an option file for each server and tell the server the file name with a --defaults-file option when you run it.

Suppose that you want to run mysqld on port 3307 with a data directory of C:\mydata1, and mysqld-debug on port 3308 with a data directory of C:\mydata2. Use this procedure:

  1. Make sure that each data directory exists, including its own copy of the mysql database that contains the grant tables.

  2. Create two option files. For example, create one file named C:\my-opts1.cnf that looks like this:

    [mysqld]
    datadir = C:/mydata1
    port = 3307

    Create a second file named C:\my-opts2.cnf that looks like this:

    [mysqld]
    datadir = C:/mydata2
    port = 3308
  3. Use the --defaults-file option to start each server with its own option file:

    C:\> C:\mysql\bin\mysqld --defaults-file=C:\my-opts1.cnf
    C:\> C:\mysql\bin\mysqld-debug --defaults-file=C:\my-opts2.cnf

    Each server starts in the foreground (no new prompt appears until the server exits later), so you will need to issue those two commands in separate console windows.

To shut down the servers, connect to each using the appropriate port number:

C:\> C:\mysql\bin\mysqladmin --port=3307 shutdown
C:\> C:\mysql\bin\mysqladmin --port=3308 shutdown

Servers configured as just described permit clients to connect over TCP/IP. If your version of Windows supports named pipes and you also want to permit named-pipe connections, use the mysqld or mysqld-debug server and specify options that enable the named pipe and specify its name. Each server that supports named-pipe connections must use a unique pipe name. For example, the C:\my-opts1.cnf file might be written like this:

[mysqld]
datadir = C:/mydata1
port = 3307
enable-named-pipe
socket = mypipe1

Modify C:\my-opts2.cnf similarly for use by the second server. Then start the servers as described previously.

A similar procedure applies for servers that you want to permit shared-memory connections. Enable such connections with the --shared-memory option and specify a unique shared-memory name for each server with the --shared-memory-base-name option.
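For example, the C:\my-opts1.cnf file for a server that also permits shared-memory connections might look like this (the shared-memory name MYSQL1 is purely illustrative):

[mysqld]
datadir = C:/mydata1
port = 3307
shared-memory
shared-memory-base-name = MYSQL1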

5.3.2.2. Starting Multiple MySQL Instances as Windows Services

On Windows, a MySQL server can run as a Windows service. The procedures for installing, controlling, and removing a single MySQL service are described in Section 2.3.7.7, "Starting MySQL as a Windows Service".

To set up multiple MySQL services, you must make sure that each instance uses a different service name in addition to the other parameters that must be unique per instance.

For the following instructions, suppose that you want to run the mysqld server from two different versions of MySQL that are installed at C:\mysql-5.1.55 and C:\mysql-5.5.31, respectively. (This might be the case if you are running 5.1.55 as your production server, but also want to conduct tests using 5.5.31.)

To install MySQL as a Windows service, use the --install or --install-manual option. For information about these options, see Section 2.3.7.7, "Starting MySQL as a Windows Service".

Based on the preceding information, you have several ways to set up multiple services. The following instructions describe some examples. Before trying any of them, shut down and remove any existing MySQL services.

  • Approach 1: Specify the options for all services in one of the standard option files. To do this, use a different service name for each server. Suppose that you want to run the 5.1.55 mysqld using the service name of mysqld1 and the 5.5.31 mysqld using the service name mysqld2. In this case, you can use the [mysqld1] group for 5.1.55 and the [mysqld2] group for 5.5.31. For example, you can set up C:\my.cnf like this:

    # options for mysqld1 service
    [mysqld1]
    basedir = C:/mysql-5.1.55
    port = 3307
    enable-named-pipe
    socket = mypipe1

    # options for mysqld2 service
    [mysqld2]
    basedir = C:/mysql-5.5.31
    port = 3308
    enable-named-pipe
    socket = mypipe2

    Install the services as follows, using the full server path names to ensure that Windows registers the correct executable program for each service:

    C:\> C:\mysql-5.1.55\bin\mysqld --install mysqld1
    C:\> C:\mysql-5.5.31\bin\mysqld --install mysqld2

    To start the services, use the services manager, or use NET START with the appropriate service names:

    C:\> NET START mysqld1
    C:\> NET START mysqld2

    To stop the services, use the services manager, or use NET STOP with the appropriate service names:

    C:\> NET STOP mysqld1
    C:\> NET STOP mysqld2
  • Approach 2: Specify options for each server in separate files and use --defaults-file when you install the services to tell each server what file to use. In this case, each file should list options using a [mysqld] group.

    With this approach, to specify options for the 5.1.55 mysqld, create a file C:\my-opts1.cnf that looks like this:

    [mysqld]
    basedir = C:/mysql-5.1.55
    port = 3307
    enable-named-pipe
    socket = mypipe1

    For the 5.5.31 mysqld, create a file C:\my-opts2.cnf that looks like this:

    [mysqld]
    basedir = C:/mysql-5.5.31
    port = 3308
    enable-named-pipe
    socket = mypipe2

    Install the services as follows (enter each command on a single line):

    C:\> C:\mysql-5.1.55\bin\mysqld --install mysqld1 --defaults-file=C:\my-opts1.cnf
    C:\> C:\mysql-5.5.31\bin\mysqld --install mysqld2 --defaults-file=C:\my-opts2.cnf

    When you install a MySQL server as a service and use a --defaults-file option, the service name must precede the option.

    After installing the services, start and stop them the same way as in the preceding example.

To remove multiple services, use mysqld --remove for each one, specifying a service name following the --remove option. If the service name is the default (MySQL), you can omit it.
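For example, to remove the two services installed in the preceding examples:

C:\> C:\mysql-5.1.55\bin\mysqld --remove mysqld1
C:\> C:\mysql-5.5.31\bin\mysqld --remove mysqld2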

5.3.3. Running Multiple MySQL Instances on Unix

One way to run multiple MySQL instances on Unix is to compile different servers with different default TCP/IP ports and Unix socket files so that each one listens on different network interfaces. Compiling in different base directories for each installation also results automatically in a separate, compiled-in data directory, log file, and PID file location for each server.

Assume that an existing 5.1 server is configured for the default TCP/IP port number (3306) and Unix socket file (/tmp/mysql.sock). To configure a new 5.5.31 server to have different operating parameters, use a CMake command something like this:

shell> cmake . -DMYSQL_TCP_PORT=port_number \
         -DMYSQL_UNIX_ADDR=file_name \
         -DCMAKE_INSTALL_PREFIX=/usr/local/mysql-5.5.31

Here, port_number and file_name must be different from the default TCP/IP port number and Unix socket file path name, and the CMAKE_INSTALL_PREFIX value should specify an installation directory different from the one under which the existing MySQL installation is located.

If you have a MySQL server listening on a given port number, you can use the following command to find out what operating parameters it is using for several important configurable variables, including the base directory and Unix socket file name:

shell> mysqladmin --host=host_name --port=port_number variables

With the information displayed by that command, you can tell what option values not to use when configuring an additional server.

If you specify localhost as the host name, mysqladmin defaults to using a Unix socket file connection rather than TCP/IP. To explicitly specify the connection protocol, use the --protocol={TCP|SOCKET|PIPE|MEMORY} option.

You need not compile a new MySQL server just to start with a different Unix socket file and TCP/IP port number. It is also possible to use the same server binary and start each invocation of it with different parameter values at runtime. One way to do so is by using command-line options:

shell> mysqld_safe --socket=file_name --port=port_number

To start a second server, provide different --socket and --port option values, and pass a --datadir=path option to mysqld_safe so that the server uses a different data directory.
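For example, a second invocation might look like this; the data directory path is hypothetical:

shell> mysqld_safe --socket=/tmp/mysql2.sock --port=3307 --datadir=/usr/local/mysql/data2 &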

Alternatively, put the options for each server in a different option file, then start each server using a --defaults-file option that specifies the path to the appropriate option file. For example, if the option files for two server instances are named /usr/local/mysql/my.cnf and /usr/local/mysql/my.cnf2, start the servers like this:

shell> mysqld_safe --defaults-file=/usr/local/mysql/my.cnf
shell> mysqld_safe --defaults-file=/usr/local/mysql/my.cnf2

Another way to achieve a similar effect is to use environment variables to set the Unix socket file name and TCP/IP port number:

shell> MYSQL_UNIX_PORT=/tmp/mysqld-new.sock
shell> MYSQL_TCP_PORT=3307
shell> export MYSQL_UNIX_PORT MYSQL_TCP_PORT
shell> mysql_install_db --user=mysql
shell> mysqld_safe --datadir=/path/to/datadir &

This is a quick way of starting a second server to use for testing. The nice thing about this method is that the environment variable settings apply to any client programs that you invoke from the same shell. Thus, connections for those clients are automatically directed to the second server.
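For example, with the variables exported as shown above, you can confirm that a plain client invocation from the same shell reaches the second server by checking its port and socket:

shell> mysql -e "SELECT @@port, @@socket"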

Section 2.13, "Environment Variables", includes a list of other environment variables you can use to affect MySQL programs.

On Unix, the mysqld_multi script provides another way to start multiple servers. See Section 4.3.4, "mysqld_multi - Manage Multiple MySQL Servers".

5.3.4. Using Client Programs in a Multiple-Server Environment

To connect with a client program to a MySQL server that is listening to different network interfaces from those compiled into your client, you can use one of the following methods:

  • Start the client with --host=host_name --port=port_number to connect using TCP/IP to a remote server, with --host=127.0.0.1 --port=port_number to connect using TCP/IP to a local server, or with --host=localhost --socket=file_name to connect to a local server using a Unix socket file or a Windows named pipe.

  • Start the client with --protocol=TCP to connect using TCP/IP, --protocol=SOCKET to connect using a Unix socket file, --protocol=PIPE to connect using a named pipe, or --protocol=MEMORY to connect using shared memory. For TCP/IP connections, you may also need to specify --host and --port options. For the other types of connections, you may need to specify a --socket option to specify a Unix socket file or Windows named-pipe name, or a --shared-memory-base-name option to specify the shared-memory name. Shared-memory connections are supported only on Windows.

  • On Unix, set the MYSQL_UNIX_PORT and MYSQL_TCP_PORT environment variables to point to the Unix socket file and TCP/IP port number before you start your clients. If you normally use a specific socket file or port number, you can place commands to set these environment variables in your .login file so that they apply each time you log in. See Section 2.13, "Environment Variables".

  • Specify the default Unix socket file and TCP/IP port number in the [client] group of an option file. For example, you can use C:\my.cnf on Windows, or the .my.cnf file in your home directory on Unix. A sketch of such a file appears after this list. See Section 4.2.3.3, "Using Option Files".

  • In a C program, you can specify the socket file or port number arguments in the mysql_real_connect() call. You can also have the program read option files by calling mysql_options(). See Section 22.8.3, "C API Function Descriptions".

  • If you are using the Perl DBD::mysql module, you can read options from MySQL option files. For example:

    $dsn = "DBI:mysql:test;mysql_read_default_group=client;"
         . "mysql_read_default_file=/usr/local/mysql/data/my.cnf";
    $dbh = DBI->connect($dsn, $user, $password);

    See Section 22.10, "MySQL Perl API".

    Other programming interfaces may provide similar capabilities for reading option files.
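As a sketch of the option-file method mentioned in this list, a .my.cnf file that directs all clients to a server listening on nondefault interfaces might contain the following (values illustrative):

[client]
port = 3307
socket = /tmp/mysql2.sock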

5.4. Tracing mysqld Using DTrace

The DTrace probes in the MySQL server are designed to provide information about the execution of queries within MySQL and the different areas of the system being utilized during that process. The organization and triggering of the probes mean that the execution of an entire query can be monitored with one level of probes (query-start and query-done), but by monitoring other probes you can get successively more detailed information about the execution of the query in terms of the locks used, the sort methods, and even row-by-row and storage-engine-level execution information.

The DTrace probes are organized so that you can follow the entire query process, from the point of connection from a client, through the query execution, row-level operations, and back out again. You can think of the probes as being fired within a specific sequence during a typical client connect/execute/disconnect sequence, as shown in the following figure.

Figure 5.1. DTrace Probe Structure in mysqld

Global information is provided in the arguments to the DTrace probes at various levels: the connection ID, the user and host, and, where relevant, the query string are supplied at key levels (connection-start, command-start, query-start, and query-exec-start). As you go deeper into the probes, it is assumed either that you are interested only in the individual executions (row-level probes provide information on the database and table name only), or that you will combine the row-level probes with their notional parent probes to provide the information about a specific query. Examples of this are given as the format and arguments of each probe are provided.

For more information on DTrace and writing DTrace scripts, read the DTrace User Guide.

MySQL 5.5 includes support for DTrace probes on Solaris 10 Update 5 (Solaris 5/08) on SPARC, x86 and x86_64 platforms. Probes are also supported on Mac OS X 10.4 and higher. Enabling the probes should be automatic on these platforms. To explicitly enable or disable the probes during building, use the -DENABLE_DTRACE=1 or -DENABLE_DTRACE=0 option to CMake.

5.4.1. mysqld DTrace Probe Reference

MySQL supports the following static probes, organized into groups of functionality.

Table 5.5. MySQL DTrace Probes

Group            Probes                                                          Introduced
Connection       connection-start, connection-done                               5.4.0
Command          command-start, command-done                                     5.4.0
Query            query-start, query-done                                         5.4.0
Query Parsing    query-parse-start, query-parse-done                             5.4.0
Query Cache      query-cache-hit, query-cache-miss                               5.4.0
Query Execution  query-exec-start, query-exec-done                               5.4.0
Row Level        insert-row-start, insert-row-done                               5.4.0
                 update-row-start, update-row-done                               5.4.0
                 delete-row-start, delete-row-done                               5.4.0
Row Reads        read-row-start, read-row-done                                   5.4.0
Index Reads      index-read-row-start, index-read-row-done                       5.4.0
Lock             handler-rdlock-start, handler-rdlock-done                       5.4.0
                 handler-wrlock-start, handler-wrlock-done                       5.4.0
                 handler-unlock-start, handler-unlock-done                       5.4.0
Filesort         filesort-start, filesort-done                                   5.4.0
Statement        select-start, select-done                                       5.4.0
                 insert-start, insert-done                                       5.4.0
                 insert-select-start, insert-select-done                         5.4.0
                 update-start, update-done                                       5.4.0
                 multi-update-start, multi-update-done                           5.4.0
                 delete-start, delete-done                                       5.4.0
                 multi-delete-start, multi-delete-done                           5.4.0
Network          net-read-start, net-read-done, net-write-start, net-write-done 5.4.0
Keycache         keycache-read-start, keycache-read-block, keycache-read-done,  5.4.0
                 keycache-read-hit, keycache-read-miss, keycache-write-start,
                 keycache-write-block, keycache-write-done

Note

When extracting the argument data from the probes, each argument is available as argN, starting with arg0. The probe definitions give each argument a descriptive name for ease of identification, but you must access the information using the corresponding argN parameter.

5.4.1.1. Connection Probes

The connection-start and connection-done probes enclose a connection from a client, regardless of whether the connection is through a socket or network connection.

connection-start(connectionid, user, host)
connection-done(status, connectionid)
  • connection-start: Triggered after a connection and successful login/authentication have been completed by a client. The arguments contain the connection information:

    • connectionid: An unsigned long containing the connection ID. This is the same as the process ID shown as the Id value in the output from SHOW PROCESSLIST.

    • user: The username used when authenticating. The value will be blank for the anonymous user.

    • host: The host of the client connection. For a connection made using UNIX sockets, the value will be blank.

  • connection-done: Triggered just as the connection to the client has been closed. The arguments are:

    • status: The status of the connection when it was closed. A logout operation will have a value of 0; any other termination of the connection has a nonzero value.

    • connectionid: The connection ID of the connection that was closed.

The following D script will quantify and summarize the average duration of individual connections, and provide a count, dumping the information every 60 seconds:

#!/usr/sbin/dtrace -s

mysql*:::connection-start
{
  self->start = timestamp;
}

mysql*:::connection-done
/self->start/
{
  @ = quantize(((timestamp - self->start)/1000000));
  self->start = 0;
}

tick-60s
{
  printa(@);
}

When executed on a server with a large number of clients you might see output similar to this:

  1  57413                        :tick-60s
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 30011
               1 |                                         59
               2 |                                         5
               4 |                                         20
               8 |                                         29
              16 |                                         18
              32 |                                         27
              64 |                                         30
             128 |                                         11
             256 |                                         10
             512 |                                         1
            1024 |                                         6
            2048 |                                         8
            4096 |                                         9
            8192 |                                         8
           16384 |                                         2
           32768 |                                         1
           65536 |                                         1
          131072 |                                         0
          262144 |                                         1
          524288 |                                         0

5.4.1.2. Command Probes

The command probes are executed before and after a client command is executed, including any SQL statement that might be executed during that period. Commands include operations such as the initialization of the DB, use of the COM_CHANGE_USER operation (supported by the MySQL protocol), and manipulation of prepared statements. Many of these commands are used only by the MySQL client API from various connectors such as PHP and Java.

command-start(connectionid, command, user, host)
command-done(status)
  • command-start: Triggered when a command is submitted to the server.

    • connectionid: The connection ID of the client executing the command.

    • command: An integer representing the command that was executed. Possible values are shown in the following table.

      Value  Name                      Description
      00     COM_SLEEP                 Internal thread state
      01     COM_QUIT                  Close connection
      02     COM_INIT_DB               Select database (USE ...)
      03     COM_QUERY                 Execute a query
      04     COM_FIELD_LIST            Get a list of fields
      05     COM_CREATE_DB             Create a database (deprecated)
      06     COM_DROP_DB               Drop a database (deprecated)
      07     COM_REFRESH               Refresh connection
      08     COM_SHUTDOWN              Shutdown server
      09     COM_STATISTICS            Get statistics
      10     COM_PROCESS_INFO          Get processes (SHOW PROCESSLIST)
      11     COM_CONNECT               Initialize connection
      12     COM_PROCESS_KILL          Kill process
      13     COM_DEBUG                 Get debug information
      14     COM_PING                  Ping
      15     COM_TIME                  Internal thread state
      16     COM_DELAYED_INSERT        Internal thread state
      17     COM_CHANGE_USER           Change user
      18     COM_BINLOG_DUMP           Used by a replication slave or mysqlbinlog to initiate a binary log read
      19     COM_TABLE_DUMP            Used by a replication slave to get the master table information
      20     COM_CONNECT_OUT           Used by a replication slave to log a connection to the server
      21     COM_REGISTER_SLAVE        Used by a replication slave during registration
      22     COM_STMT_PREPARE          Prepare a statement
      23     COM_STMT_EXECUTE          Execute a statement
      24     COM_STMT_SEND_LONG_DATA   Used by a client when requesting extended data
      25     COM_STMT_CLOSE            Close a prepared statement
      26     COM_STMT_RESET            Reset a prepared statement
      27     COM_SET_OPTION            Set a server option
      28     COM_STMT_FETCH            Fetch a prepared statement
    • user: The user executing the command.

    • host: The client host.

  • command-done: Triggered when the command execution completes. The status argument contains 0 if the command executed successfully, or 1 if the statement was terminated before normal completion.

The command-start and command-done probes are best used when combined with the statement probes to get an idea of overall execution time.
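As a minimal sketch, the following D script times each client command using only the command probes, printing the integer command code from the preceding table:

#!/usr/sbin/dtrace -s

#pragma D option quiet

mysql*:::command-start
{
   /* arg0 is the connection ID; arg1 is the integer command code */
   self->cmd = arg1;
   self->cmdstart = timestamp;
}

mysql*:::command-done
/self->cmdstart/
{
   printf("command %d took %d microseconds\n",
          self->cmd, (timestamp - self->cmdstart) / 1000);
   self->cmdstart = 0;
}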

5.4.1.3. Query Probes

The query-start and query-done probes are triggered when a specific query is received by the server and when the query has been completed and the information has been successfully sent to the client.

query-start(query, connectionid, database, user, host)
query-done(status)
  • query-start: Triggered after the query string has been received from the client. The arguments are:

    • query: The full text of the submitted query.

    • connectionid: The connection ID of the client that submitted the query. The connection ID equals the connection ID returned when the client first connects and the Id value in the output from SHOW PROCESSLIST.

    • database: The database name on which the query is being executed.

    • user: The username used to connect to the server.

    • host: The hostname of the client.

  • query-done: Triggered once the query has been executed and the information has been returned to the client. The probe includes a single argument, status, which returns 0 when the query is successfully executed and 1 if there was an error.

You can get a simple report of the execution time for each query using the following D script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-20s %-20s %-40s %-9s\n", "Who", "Database", "Query", "Time(ms)");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->connid = arg1;
   self->db = copyinstr(arg2);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->querystart = timestamp;
}

mysql*:::query-done
{
   printf("%-20s %-20s %-40s %-9d\n",self->who,self->db,self->query,
          (timestamp - self->querystart) / 1000000);
}

When executing the above script you should get a basic idea of the execution time of your queries:

shell> ./query.d
Who                  Database             Query                                      Time(ms)
root@localhost       test                 select * from t1 order by i limit 10       0
root@localhost       test                 set global query_cache_size=0              0
root@localhost       test                 select * from t1 order by i limit 10       776
root@localhost       test                 select * from t1 order by i limit 10       773
root@localhost       test                 select * from t1 order by i desc limit 10  795

5.4.1.4. Query Parsing Probes

The query parsing probes are triggered before the original SQL statement is parsed and when the parsing of the statement and determination of the execution model required to process the statement has been completed:

query-parse-start(query)
query-parse-done(status)
  • query-parse-start: Triggered just before the statement is parsed by the MySQL query parser. The single argument, query, is a string containing the full text of the original query.

  • query-parse-done: Triggered when the parsing of the original statement has been completed. The status is an integer describing the status of the operation. A 0 indicates that the query was successfully parsed. A 1 indicates that the parsing of the query failed.

For example, you could monitor the execution time for parsing a given query using the following D script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

mysql*:::query-parse-start
{
   self->parsestart = timestamp;
   self->parsequery = copyinstr(arg0);
}

mysql*:::query-parse-done
/arg0 == 0/
{
   printf("Parsing %s: %d microseconds\n", self->parsequery,
          ((timestamp - self->parsestart)/1000));
}

mysql*:::query-parse-done
/arg0 != 0/
{
   printf("Error parsing %s: %d microseconds\n", self->parsequery,
          ((timestamp - self->parsestart)/1000));
}

In the above script a predicate is used on query-parse-done so that different output is generated based on the status value of the probe.

When running the script and monitoring the execution:

shell> ./query-parsing.d
Error parsing select from t1 join (t2) on (t1.i = t2.i) order by t1.s,t1.i limit 10: 36 microseconds
Parsing select * from t1 join (t2) on (t1.i = t2.i) order by t1.s,t1.i limit 10: 176 microseconds

5.4.1.5. Query Cache Probes

The query cache probes are fired when executing any query. The query-cache-hit probe is triggered when a query exists in the query cache and can be used to return the query cache information. The arguments contain the original query text and the number of rows returned from the query cache for the query. If the query is not within the query cache, or the query cache is not enabled, the query-cache-miss probe is triggered instead.

query-cache-hit(query, rows)
query-cache-miss(query)
  • query-cache-hit: Triggered when the query has been found within the query cache. The first argument, query, contains the original text of the query. The second argument, rows, is an integer containing the number of rows in the cached query.

  • query-cache-miss: Triggered when the query is not found within the query cache. The first argument, query, contains the original text of the query.

The query cache probes are best combined with a probe on the main query so that you can determine the differences in times between using or not using the query cache for specified queries. For example, in the following D script, the query and query cache information are combined into the information output during monitoring:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-20s %-20s %-40s %2s %-9s\n", "Who", "Database", "Query", "QC", "Time(ms)");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->connid = arg1;
   self->db = copyinstr(arg2);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->querystart = timestamp;
   self->qc = 0;
}

mysql*:::query-cache-hit
{
   self->qc = 1;
}

mysql*:::query-cache-miss
{
   self->qc = 0;
}

mysql*:::query-done
{
   printf("%-20s %-20s %-40s %-2s %-9d\n",self->who,self->db,self->query,
          (self->qc ? "Y" : "N"), (timestamp - self->querystart) / 1000000);
}

When executing the script you can see the effects of the query cache. Initially the query cache is disabled. If you set the query cache size and then execute the query multiple times you should see that the query cache is being used to return the query data:

shell> ./query-cache.d
root@localhost       test                 select * from t1 order by i limit 10    N  1072
root@localhost                            set global query_cache_size=262144      N  0
root@localhost       test                 select * from t1 order by i limit 10    N  781
root@localhost       test                 select * from t1 order by i limit 10    Y  0

5.4.1.6. Query Execution Probes

The query execution probe is triggered when the actual execution of the query starts, after parsing and checking of the query cache but before any privilege checks or optimization. By comparing the difference between the start and done probes, you can monitor the time actually spent servicing the query (instead of just handling the parsing and other elements of the query); a minimal sketch of this comparison follows the probe reference below.

query-exec-start(query, connectionid, database, user, host, exec_type)
query-exec-done(status)
Note

The information provided in the arguments for query-start and query-exec-start is almost identical and is designed so that you can choose to monitor either the entire query process (using query-start) or only the execution (using query-exec-start) while exposing the core information about the user, client, and query being executed.

  • query-exec-start: Triggered when the execution of an individual query is started. The arguments are:

    • query: The full text of the submitted query.

    • connectionid: The connection ID of the client that submitted the query. The connection ID equals the connection ID returned when the client first connects and the Id value in the output from SHOW PROCESSLIST.

    • database: The database name on which the query is being executed.

    • user: The username used to connect to the server.

    • host: The hostname of the client.

    • exec_type: The type of execution. Execution types are determined based on the contents of the query and where it was submitted. The values for each type are shown in the following table.

      Value  Description
      0      Executed query from sql_parse, top-level query
      1      Executed prepared statement
      2      Executed cursor statement
      3      Executed query in stored procedure
  • query-exec-done: Triggered when the execution of the query has completed. The probe includes a single argument, status, which returns 0 when the query is successfully executed and 1 if there was an error.
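As the minimal sketch referred to earlier, the following D script compares the total query time with the execution-only time by combining the query and query execution probes:

#!/usr/sbin/dtrace -s

#pragma D option quiet

mysql*:::query-start      { self->qstart = timestamp; }
mysql*:::query-exec-start { self->estart = timestamp; }

mysql*:::query-exec-done
/self->estart/
{
   /* record how long the execution phase alone took */
   self->exectime = timestamp - self->estart;
   self->estart = 0;
}

mysql*:::query-done
/self->qstart/
{
   printf("total: %d microseconds, execution only: %d microseconds\n",
          (timestamp - self->qstart) / 1000, self->exectime / 1000);
   self->qstart = 0;
   self->exectime = 0;
}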

5.4.1.7. Row-Level Probes

The *row-{start,done} probes are triggered each time a row operation is pushed down to a storage engine. For example, if you execute an INSERT statement with 100 rows of data, the insert-row-start and insert-row-done probes are each triggered 100 times, once for each row inserted.

insert-row-start(database, table)
insert-row-done(status)
update-row-start(database, table)
update-row-done(status)
delete-row-start(database, table)
delete-row-done(status)
  • insert-row-start: Triggered before a row is inserted into a table.

  • insert-row-done: Triggered after a row is inserted into a table.

  • update-row-start: Triggered before a row is updated in a table.

  • update-row-done: Triggered after a row is updated in a table.

  • delete-row-start: Triggered before a row is deleted from a table.

  • delete-row-done: Triggered after a row is deleted from a table.

The arguments supported by the probes are consistent for the corresponding start and done probes in each case:

  • database: The database name.

  • table: The table name.

  • status: The status; 0 for success or 1 for failure.

Because the row-level probes are triggered for each individual row access, they can fire many thousands of times each second, which may have a detrimental effect on both the monitoring script and MySQL. The DTrace environment should limit the triggering of these probes to prevent performance from being adversely affected. Either use the probes sparingly, or use counter or aggregation functions to report on them and provide a summary when the script terminates or as part of a query-done or query-exec-done probe.

The following example script summarizes the duration of each row operation within a larger query:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-2s %-10s %-10s %9s %9s %-s \n",
          "St", "Who", "DB", "ConnID", "Dur ms", "Query");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->db = copyinstr(arg2);
   self->connid = arg1;
   self->querystart = timestamp;
   self->rowdur = 0;
}

mysql*:::query-done
{
   this->elapsed = (timestamp - self->querystart) /1000000;
   printf("%2d %-10s %-10s %9d %9d %s\n",
          arg0, self->who, self->db,
          self->connid, this->elapsed, self->query);
}

mysql*:::query-done
/ self->rowdur /
{
   printf("%34s %9d %s\n", "", (self->rowdur/1000000), "-> Row ops");
}

mysql*:::insert-row-start
{
   self->rowstart = timestamp;
}

mysql*:::delete-row-start
{
   self->rowstart = timestamp;
}

mysql*:::update-row-start
{
   self->rowstart = timestamp;
}

mysql*:::insert-row-done
{
   self->rowdur += (timestamp-self->rowstart);
}

mysql*:::delete-row-done
{
   self->rowdur += (timestamp-self->rowstart);
}

mysql*:::update-row-done
{
   self->rowdur += (timestamp-self->rowstart);
}

Running the above script with a query that inserts data into a table, you can monitor the exact time spent performing the raw row insertion:

St Who        DB          ConnID    Dur ms Query
 0 @localhost test            13     20767 insert into t1(select * from t2)
                                      4827 -> Row ops

5.4.1.8. Read Row Probes

The read row probes are triggered at a storage engine level each time a row read operation occurs. These probes are specified within each storage engine (as opposed to the *row-start probes, which are in the storage engine interface). They can therefore be used to monitor individual storage engine row-level operations and performance. Because these probes are triggered around the storage engine row read interface, they may be hit a significant number of times during a basic query; the sketch following the probe reference therefore uses an aggregation rather than per-row output.

read-row-start(database, table, scan_flag)
read-row-done(status)
  • read-row-start: Triggered when a row is read by the storage engine from the specified database and table. The scan_flag is set to 1 (true) when the read is part of a table scan (that is, a sequential read), or 0 (false) when the read is of a specific record.

  • read-row-done: Triggered when a row read operation within a storage engine completes. The status returns 0 on success, or a positive value on failure.
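Because these probes can fire for every row read, the sketch below uses a DTrace aggregation (as recommended for high-frequency probes earlier in this section) to count reads per table, split into sequential (scan) and keyed reads:

#!/usr/sbin/dtrace -s

mysql*:::read-row-start
{
   /* arg1 is the table name; arg2 is scan_flag (1 = table scan, 0 = keyed read) */
   @reads[copyinstr(arg1), arg2 ? "scan" : "keyed"] = count();
}

dtrace:::END
{
   printa(@reads);
}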

5.4.1.9. Index Probes

The index probes are triggered each time a row is read using one of the indexes for the specified table. The probe is triggered within the corresponding storage engine for the table.

index-read-row-start(database, table)
index-read-row-done(status)
  • index-read-row-start: Triggered when a row is read by the storage engine from the specified database and table.

  • index-read-row-done: Triggered when an indexed row read operation within a storage engine completes. The status returns 0 on success, or a positive value on failure.

5.4.1.10. Lock Probes

The lock probes are called whenever an external lock is requested by MySQL for a table, using the corresponding lock mechanism on the table as defined by the table's engine type. There are three lock operations: read lock, write lock, and unlock. Using the probes, you can determine the duration of the external locking routine (that is, the time taken by the storage engine to implement the lock, including any time waiting for another lock to become free) and the total duration of the lock/unlock process.

handler-rdlock-start(database, table)
handler-rdlock-done(status)
handler-wrlock-start(database, table)
handler-wrlock-done(status)
handler-unlock-start(database, table)
handler-unlock-done(status)
  • handler-rdlock-start: Triggered when a read lock is requested on the specified database and table.

  • handler-wrlock-start: Triggered when a write lock is requested on the specified database and table.

  • handler-unlock-start: Triggered when an unlock request is made on the specified database and table.

  • handler-rdlock-done: Triggered when a read lock request completes. The status is 0 if the lock operation succeeded, or >0 on failure.

  • handler-wrlock-done: Triggered when a write lock request completes. The status is 0 if the lock operation succeeded, or >0 on failure.

  • handler-unlock-done: Triggered when an unlock request completes. The status is 0 if the unlock operation succeeded, or >0 on failure.

You can use arrays to monitor the locking and unlocking of individual tables and then calculate the duration of the entire table lock using the following script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

mysql*:::handler-rdlock-start
{
   self->rdlockstart = timestamp;
   this->lockref = strjoin(copyinstr(arg0),strjoin("@",copyinstr(arg1)));
   self->lockmap[this->lockref] = self->rdlockstart;
   printf("Start: Lock->Read   %s.%s\n",copyinstr(arg0),copyinstr(arg1));
}

mysql*:::handler-wrlock-start
{
   self->wrlockstart = timestamp;
   this->lockref = strjoin(copyinstr(arg0),strjoin("@",copyinstr(arg1)));
   self->lockmap[this->lockref] = self->wrlockstart;
   printf("Start: Lock->Write  %s.%s\n",copyinstr(arg0),copyinstr(arg1));
}

mysql*:::handler-unlock-start
{
   self->unlockstart = timestamp;
   this->lockref = strjoin(copyinstr(arg0),strjoin("@",copyinstr(arg1)));
   printf("Start: Lock->Unlock %s.%s (%d ms lock duration)\n",
          copyinstr(arg0),copyinstr(arg1),
          (timestamp - self->lockmap[this->lockref])/1000000);
}

mysql*:::handler-rdlock-done
{
   printf("End:   Lock->Read   %d ms\n",
          (timestamp - self->rdlockstart)/1000000);
}

mysql*:::handler-wrlock-done
{
   printf("End:   Lock->Write  %d ms\n",
          (timestamp - self->wrlockstart)/1000000);
}

mysql*:::handler-unlock-done
{
   printf("End:   Lock->Unlock %d ms\n",
          (timestamp - self->unlockstart)/1000000);
}

When executed, you should get information both about the duration of the locking process itself, and of the locks on a specific table:

Start: Lock->Read   test.t2
End:   Lock->Read   0 ms
Start: Lock->Unlock test.t2 (25743 ms lock duration)
End:   Lock->Unlock 0 ms
Start: Lock->Read   test.t2
End:   Lock->Read   0 ms
Start: Lock->Unlock test.t2 (1 ms lock duration)
End:   Lock->Unlock 0 ms
Start: Lock->Read   test.t2
End:   Lock->Read   0 ms
Start: Lock->Unlock test.t2 (1 ms lock duration)
End:   Lock->Unlock 0 ms
Start: Lock->Read   test.t2
End:   Lock->Read   0 ms

5.4.1.11. Filesort Probes

The filesort probes are triggered whenever a filesort operation is applied to a table. For more information on filesort and the conditions under which it occurs, see Section 8.13.9, "ORDER BY Optimization".

filesort-start(database, table)
filesort-done(status, rows)
  • filesort-start: Triggered when the filesort operation starts on a table. The two arguments to the probe, database and table, will identify the table being sorted.

  • filesort-done: Triggered when the filesort operation completes. Two arguments are supplied, the status (0 for success, 1 for failure), and the number of rows sorted during the filesort process.

An example of this is in the following script, which tracks the duration of the filesort process in addition to the duration of the main query:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-2s %-10s %-10s %9s %18s %-s \n",
          "St", "Who", "DB", "ConnID", "Dur microsec", "Query");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->db = copyinstr(arg2);
   self->connid = arg1;
   self->querystart = timestamp;
   self->filesort = 0;
   self->fsdb = "";
   self->fstable = "";
}

mysql*:::filesort-start
{
  self->filesort = timestamp;
  self->fsdb = copyinstr(arg0);
  self->fstable = copyinstr(arg1);
}

mysql*:::filesort-done
{
   this->elapsed = (timestamp - self->filesort) /1000;
   printf("%2d %-10s %-10s %9d %18d Filesort on %s\n",
          arg0, self->who, self->fsdb,
          self->connid, this->elapsed, self->fstable);
}

mysql*:::query-done
{
   this->elapsed = (timestamp - self->querystart) /1000;
   printf("%2d %-10s %-10s %9d %18d %s\n",
          arg0, self->who, self->db,
          self->connid, this->elapsed, self->query);
}

If you execute a query on a large table with an ORDER BY clause that triggers a filesort, then create an index on the table and repeat the same query, you can see the difference in execution speed:

St Who        DB         ConnID       Dur microsec Query
 0 @localhost test           14           11335469 Filesort on t1
 0 @localhost test           14           11335787 select * from t1 order by i limit 100
 0 @localhost test           14          466734378 create index t1a on t1 (i)
 0 @localhost test           14              26472 select * from t1 order by i limit 100

5.4.1.12. Statement Probes

The individual statement probes are provided to give specific information about different statement types. For the start probes, the query string is provided as the only argument. Depending on the statement type, the information provided by the corresponding done probe differs. For all done probes, the status of the operation (0 for success, >0 for failure) is provided. For SELECT, INSERT, INSERT ... SELECT, DELETE, and multiple-table DELETE operations, the number of rows affected is returned.

For UPDATE and multiple-table UPDATE statements, both the number of rows matched and the number of rows actually changed are provided, because these counts can differ: MySQL does not update the value of a row if the value already matches the new setting.

select-start(query)
select-done(status,rows)
insert-start(query)
insert-done(status,rows)
insert-select-start(query)
insert-select-done(status,rows)
update-start(query)
update-done(status,rowsmatched,rowschanged)
multi-update-start(query)
multi-update-done(status,rowsmatched,rowschanged)
delete-start(query)
delete-done(status,rows)
multi-delete-start(query)
multi-delete-done(status,rows)
  • select-start: Triggered before a SELECT statement.

  • select-done: Triggered at the end of a SELECT statement.

  • insert-start: Triggered before an INSERT statement.

  • insert-done: Triggered at the end of an INSERT statement.

  • insert-select-start: Triggered before an INSERT ... SELECT statement.

  • insert-select-done: Triggered at the end of an INSERT ... SELECT statement.

  • update-start: Triggered before an UPDATE statement.

  • update-done: Triggered at the end of an UPDATE statement.

  • multi-update-start: Triggered before an UPDATE statement involving multiple tables.

  • multi-update-done: Triggered at the end of an UPDATE statement involving multiple tables.

  • delete-start: Triggered before a DELETE statement.

  • delete-done: Triggered at the end of a DELETE statement.

  • multi-delete-start: Triggered before a DELETE statement involving multiple tables.

  • multi-delete-done: Triggered at the end of a DELETE statement involving multiple tables.

The arguments for the statement probes are:

  • query: The query string.

  • status: The status of the query. 0 for success, and >0 for failure.

  • rows: The number of rows affected by the statement. This returns the number of rows found for SELECT, the number of rows deleted for DELETE, and the number of rows successfully inserted for INSERT.

  • rowsmatched: The number of rows matched by the WHERE clause of an UPDATE operation.

  • rowschanged: The number of rows actually changed during an UPDATE operation.

You can use these probes to monitor the execution of these statement types without having to monitor the user or client executing the statements. A simple example of this is to track the execution times:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-60s %-8s %-8s %-8s\n", "Query", "RowsU", "RowsM", "Dur (ms)");
}

mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::select-start, mysql*:::insert-select-start,
mysql*:::multi-update-start
{
  self->query = copyinstr(arg0);
  self->querystart = timestamp;
}

mysql*:::insert-done, mysql*:::select-done,
mysql*:::delete-done, mysql*:::multi-delete-done, mysql*:::insert-select-done
/ self->querystart /
{
  this->elapsed = ((timestamp - self->querystart)/1000000);
  printf("%-60s %-8d %-8d %d\n",
         self->query,
         0,
         arg1,
         this->elapsed);
  self->querystart = 0;
}

mysql*:::update-done, mysql*:::multi-update-done
/ self->querystart /
{
  this->elapsed = ((timestamp - self->querystart)/1000000);
  printf("%-60s %-8d %-8d %d\n",
         self->query,
         arg1,
         arg2,
         this->elapsed);
  self->querystart = 0;
}

When executed, you can see the basic execution times and rows matched:

Query                                                        RowsU    RowsM    Dur (ms)
select * from t2                                             0        275      0
insert into t2 (select * from t2)                            0        275      9
update t2 set i=5 where i > 75                               110      110      8
update t2 set i=5 where i < 25                               254      134      12
delete from t2 where i < 5                                   0        0        0

Another alternative is to use the aggregation functions in DTrace to aggregate the execution time of individual statements together:

#!/usr/sbin/dtrace -s

#pragma D option quiet

mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::select-start, mysql*:::insert-select-start,
mysql*:::multi-update-start
{
  self->querystart = timestamp;
}

mysql*:::select-done
{
  @statements["select"] = sum(((timestamp - self->querystart)/1000000));
}

mysql*:::insert-done, mysql*:::insert-select-done
{
  @statements["insert"] = sum(((timestamp - self->querystart)/1000000));
}

mysql*:::update-done, mysql*:::multi-update-done
{
  @statements["update"] = sum(((timestamp - self->querystart)/1000000));
}

mysql*:::delete-done, mysql*:::multi-delete-done
{
  @statements["delete"] = sum(((timestamp - self->querystart)/1000000));
}

tick-30s
{
  printa(@statements);
}

The script just shown aggregates the times spent doing each operation, which could be used to help benchmark a standard suite of tests.

  delete                                                    0
  update                                                    0
  insert                                                   23
  select                                                 2484

  delete                                                    0
  update                                                    0
  insert                                                   39
  select                                                10744

  delete                                                    0
  update                                                   26
  insert                                                   56
  select                                                10944

  delete                                                    0
  update                                                   26
  insert                                                 2287
  select                                                15985

5.4.1.13. Network Probes

The network probes monitor the transfer of information from the MySQL server and clients of all types over the network. The probes are defined as follows:

net-read-start()
net-read-done(status, bytes)
net-write-start(bytes)
net-write-done(status)
  • net-read-start: Triggered when a network read operation is started.

  • net-read-done: Triggered when the network read operation completes. The status is an integer representing the return status for the operation, 0 for success and 1 for failure. The bytes argument is an integer specifying the number of bytes read during the process.

  • net-write-start: Triggered when data is written to a network socket. The single argument, bytes, specifies the number of bytes written to the network socket.

  • net-write-done: Triggered when the network write operation has completed. The single argument, status, is an integer representing the return status for the operation, 0 for success and 1 for failure.

You can use the network probes to monitor the time spent reading from and writing to network clients during execution. The following D script provides an example of this: both the cumulative time for the reads or writes and the number of bytes are calculated. Note that the dynamic variable size has been increased (using the dynvarsize option) to cope with the rapid firing of the individual probes for the network reads/writes.

#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option dynvarsize=4m

dtrace:::BEGIN
{
   printf("%-2s %-30s %-10s %9s %18s %-s \n",
          "St", "Who", "DB", "ConnID", "Dur microsec", "Query");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->db = copyinstr(arg2);
   self->connid = arg1;
   self->querystart = timestamp;
   self->netwrite = 0;
   self->netwritecum = 0;
   self->netwritebase = 0;
   self->netread = 0;
   self->netreadcum = 0;
   self->netreadbase = 0;
}

mysql*:::net-write-start
{
   self->netwrite += arg0;
   self->netwritebase = timestamp;
}

mysql*:::net-write-done
{
   self->netwritecum += (timestamp - self->netwritebase);
   self->netwritebase = 0;
}

mysql*:::net-read-start
{
   self->netreadbase = timestamp;
}

mysql*:::net-read-done
{
   self->netread += arg1;
   self->netreadcum += (timestamp - self->netreadbase);
   self->netreadbase = 0;
}

mysql*:::query-done
{
   this->elapsed = (timestamp - self->querystart) /1000000;
   printf("%2d %-30s %-10s %9d %18d %s\n",
          arg0, self->who, self->db,
          self->connid, this->elapsed, self->query);
   printf("Net read: %d bytes (%d ms) write: %d bytes (%d ms)\n",
          self->netread, (self->netreadcum/1000000),
          self->netwrite, (self->netwritecum/1000000));
}

When executing the above script on a machine with a remote client, you can see that approximately a third of the time spent executing the query is related to writing the query results back to the client.

St Who                            DB         ConnID       Dur microsec Query
 0 root@::ffff:192.168.0.108      test           31               3495 select * from t1 limit 1000000
Net read: 0 bytes (0 ms) write: 10000075 bytes (1220 ms)

5.4.1.14. Keycache Probes

The keycache probes are triggered when the index key cache used by the MyISAM storage engine is accessed. Probes exist to monitor when data is read into the keycache, when cached key data is written from the cache back to the index file, and when the keycache is accessed.

Keycache usage indicates when data is read from or written to the index files through the cache, and can be used to monitor how efficiently the memory allocated to the keycache is being used. A high number of keycache reads across a range of queries may indicate that the keycache is too small for the size of the data being accessed. A minimal hit-ratio sketch appears after the probe descriptions below.

keycache-read-start(filepath, bytes, mem_used, mem_free)
keycache-read-block(bytes)
keycache-read-hit()
keycache-read-miss()
keycache-read-done(mem_used, mem_free)
keycache-write-start(filepath, bytes, mem_used, mem_free)
keycache-write-block(bytes)
keycache-write-done(mem_used, mem_free)

When reading data from the index files into the keycache, the process first initializes the read operation (indicated by keycache-read-start) and then loads blocks of data (keycache-read-block); each read block either matches the key data being requested (keycache-read-hit) or more data needs to be read (keycache-read-miss). Once the read operation has completed, reading ends with keycache-read-done.

Data will be read from the index file into the keycache only when the specified key is not already within the keycache.

  • keycache-read-start: Triggered when the keycache read operation is started. Data is read from the specified filepath, reading the specified number of bytes. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory available within it.

  • keycache-read-block: Triggered when the keycache reads a block of data, of the specified number of bytes, from the index file into the keycache.

  • keycache-read-hit: Triggered when the block of data read from the index file matches the key data requested.

  • keycache-read-miss: Triggered when the block of data read from the index file does not match the key data needed.

  • keycache-read-done: Triggered when the keycache read operation has completed. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory available within it.

Keycache writes occur when the index information is updated during an INSERT, UPDATE, or DELETE operation, and the cached key information is flushed back to the index file.

  • keycache-write-start: Triggered when the keycache write operation is started. Data is written to the specified filepath, writing the specified number of bytes. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory available within it.

  • keycache-write-block: Triggered when the keycache writes a block of data, of the specified number of bytes, to the index file from the keycache.

  • keycache-write-done: Triggered when the keycache write operation has completed. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory available within it.
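As the minimal sketch mentioned earlier, the hit and miss probes can be aggregated to watch the keycache hit ratio over time:

#!/usr/sbin/dtrace -s

/* count keycache hits and misses, printing and resetting every 10 seconds */
mysql*:::keycache-read-hit  { @keycache["hit"]  = count(); }
mysql*:::keycache-read-miss { @keycache["miss"] = count(); }

tick-10s
{
   printa(@keycache);
   trunc(@keycache);
}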
