Importing Backup Data with MySQL Shell

Background

In the previous two articles I covered backing up a database with MySQL Shell: "Are You Still Using mysqldump for Backups?" and "Are You Still Using mysqldump for Backups? (Part 2)". So, how do we import the data exported by MySQL Shell back into a database?

Importing data with MySQL Shell

Importing with util.loadDump()

This utility only became GA in MySQL Shell 8.0.21.

Official description:

The dump loading utility provides data streaming from remote storage, parallel loading of tables or table chunks, progress state tracking, resume and reset capability, and the option of concurrent loading while the dump is still taking place. To get the best functionality, always use the most recent version available of MySQL Shell's dump and dump loading utilities.

Usage restrictions:

1. MySQL 5.7 or later is required for the destination MySQL instance.
2. The dump loading utility uses the LOAD DATA LOCAL INFILE statement, so the global setting of the local_infile system variable on the target MySQL instance must be ON for the duration of the import. By default, this system variable is set to ON in a standard MySQL DB System configuration.
3. The LOAD DATA LOCAL INFILE statement uses nonrestrictive data interpretation, which turns errors into warnings and continues with the load operation. This process can include assigning default values and implicit default values to fields, and converting invalid values to the closest valid value for the column data type. For details of the statement's behavior, see LOAD DATA.
4. On the target MySQL instance, the dump loading utility checks whether the sql_require_primary_key system variable is set to ON, and if it is, returns an error if there is a table in the dump files with no primary key. By default, this system variable is set to OFF in a standard MySQL DB System configuration.
5. The dump loading utility does not automatically apply the gtid_executed GTID set from the source MySQL instance on the target MySQL instance. The GTID set is included in the dump metadata from MySQL Shell's instance dump utility, schema dump utility, or table dump utility, as the gtidExecuted field in the @.json dump file. To apply these GTIDs on the target MySQL instance for use with replication, use the updateGtidSet option or import them manually, depending on the release of the target MySQL instance and the MySQL Shell release. From MySQL Shell 8.0.23, this is supported on MySQL DB System instances. See the description of the updateGtidSet option for details.
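Restrictions 2 and 4 can be verified up front from a mysqlsh session. The sketch below assumes you are connected as a user with the SYSTEM_VARIABLES_ADMIN (or SUPER) privilege; adjust to your environment:

```js
// Sketch: check/prepare the target before loading (run in mysqlsh JS mode).
// util.loadDump() needs local_infile=ON on the target instance:
session.runSql("SET GLOBAL local_infile = ON");

// If sql_require_primary_key is ON, the load fails for any dumped
// table that has no primary key, so it is worth checking first:
session.runSql("SHOW GLOBAL VARIABLES LIKE 'sql_require_primary_key'");
```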

Importing an entire dump:

Syntax:

util.loadDump(url[, options])

Usage:

 MySQL  10.10.119.101:33060+ ssl  JS > util.loadDump("/mysqlsh",{dryRun:true,threads:8,ignoreExistingObjects:true,analyzeTables:"on"})
Loading DDL and Data from '/mysqlsh' using 8 threads.
Opening dump...
dryRun enabled, no changes will be made.
Target is MySQL 8.0.21. Dump was produced from MySQL 8.0.21
Checking for pre-existing objects...
NOTE: Schema `lzm` already contains a table named sbtest1
NOTE: Schema `test` already contains a table named t1
NOTE: Schema `test_1` already contains a table named T_CM_CUST
NOTE: Schema `test_2` already contains a table named T_HM_HOUSE
NOTE: Schema `test_2` already contains a table named T_HM_HOUSEDEL
NOTE: Schema `test_2` already contains a table named T_HM_OWNER
NOTE: Schema `test_2` already contains a table named T_HM_PropertyRight
NOTE: Schema `test_2` already contains a table named checkpoint
NOTE: Schema `test_2` already contains a table named checkpoint_lox
NOTE: One or more objects in the dump already exist in the destination database but will be ignored because the 'ignoreExistingObjects' option was enabled.
Executing common preamble SQL
Executing DDL script for schema `lzm`
Executing DDL script for schema `test`
Executing DDL script for schema `test_1`
Executing DDL script for schema `test_2`
[Worker000] Executing DDL script for `test`.`t1`
[Worker003] Executing DDL script for `lzm`.`sbtest1`
[Worker001] Executing DDL script for `test_1`.`T_CM_CUST`
[Worker007] Executing DDL script for `test_2`.`checkpoint_lox`
[Worker004] Executing DDL script for `test_2`.`T_HM_OWNER`
[Worker002] Executing DDL script for `test_2`.`T_HM_PropertyRight`
[Worker000] Executing DDL script for `test_2`.`checkpoint`
[Worker006] Executing DDL script for `test_2`.`T_HM_HOUSEDEL`
[Worker005] Executing DDL script for `test_2`.`T_HM_HOUSE`
Analyzing table `lzm`.`sbtest1`
Analyzing table `test`.`t1`
Analyzing table `test_1`.`T_CM_CUST`
Analyzing table `test_2`.`T_HM_HOUSEDEL`
Analyzing table `test_2`.`T_HM_HOUSE`
Analyzing table `test_2`.`T_HM_OWNER`
Analyzing table `test_2`.`T_HM_PropertyRight`
Analyzing table `test_2`.`checkpoint`
Executing common postamble SQL
No data loaded.
0 warnings were reported during the load.

Parameter explanation:

Because I had already imported this dump once, the output contains notes such as "NOTE: Schema `lzm` already contains a table named sbtest1".

/mysqlsh -- the directory containing the dump files
threads:8 -- load with 8 parallel threads
ignoreExistingObjects:true -- ignore duplicate objects; tables that already exist in the target are skipped during the import. Without this option, the load stops with an error.
analyzeTables:"on" -- run ANALYZE TABLE on each table after the data import completes
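Putting these options together, the same call reads more clearly with one option per line (a sketch using the same path and values as the example above; drop dryRun to perform the actual load):

```js
// Dry-run load of the dump in /mysqlsh (mysqlsh JS mode).
util.loadDump("/mysqlsh", {
    dryRun: true,                  // validate only, make no changes
    threads: 8,                    // number of parallel load threads
    ignoreExistingObjects: true,   // skip objects that already exist in the target
    analyzeTables: "on"            // ANALYZE TABLE after loading each table
});
```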

Backing up a single schema and importing it

Back up one or more schemas:

 MySQL  10.10.119.63:33060+ ssl  JS > util.dumpSchemas(["lzm"],"/mysqlsh",{threads:8,bytesPerChunk:"100M",maxRate:"5M"})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `lzm`.`sbtest1`
Writing DDL for schema `lzm`
Data dump for table `lzm`.`sbtest1` will be chunked using column `id`
Writing DDL for table `lzm`.`sbtest1`
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `lzm`.`sbtest1` will be written to 1 file
1 thds dumping - 101% (100.00K rows / ~98.71K rows), 26.62K rows/s, 5.11 MB/s uncompressed, 2.32 MB/s compressed
Duration: 00:00:03s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 19.18 MB
Compressed data size: 8.75 MB
Compression ratio: 2.2
Rows written: 100000
Bytes written: 8.75 MB
Average uncompressed throughput: 4.95 MB/s
Average compressed throughput: 2.26 MB/s

Import:

 MySQL  10.10.119.101:33060+ ssl  JS > util.loadDump("/mysqlsh",{threads:8,ignoreExistingObjects:true,analyzeTables:"on"})
Loading DDL and Data from '/mysqlsh' using 8 threads.
Opening dump...
Target is MySQL 8.0.21. Dump was produced from MySQL 8.0.21
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL script for schema `lzm`
[Worker000] Executing DDL script for `lzm`.`sbtest1`
Analyzing table `lzm`.`sbtest1`
[Worker004] lzm@sbtest1@@0.tsv.zst: Records: 100000  Deleted: 0  Skipped: 0  Warnings: 0
Executing common postamble SQL
1 chunks (100.00K rows, 19.18 MB) for 1 tables in 1 schemas were loaded in 1 sec (avg throughput 19.18 MB/s)
0 warnings were reported during the load.
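A useful property of util.loadDump() is that it tracks its progress in a state file (by default, a progress file stored alongside the dump), so an interrupted load can simply be re-run and already-loaded chunks are skipped. A sketch of the two variants:

```js
// Resume an interrupted load: re-running the same call picks up where
// the previous attempt stopped, using the recorded progress state.
util.loadDump("/mysqlsh", { threads: 8 });

// Or discard the recorded progress and load everything from scratch:
util.loadDump("/mysqlsh", { threads: 8, resetProgress: true });
```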

Import only the table definitions (DDL):

 MySQL  10.10.119.101:33060+ ssl  JS > util.loadDump("/mysqlsh",{includeTables: ["lzm.sbtest1"],loadDdl:true,loadData:false})
Loading DDL only from '/mysqlsh' using 4 threads.
Opening dump...
Target is MySQL 8.0.21. Dump was produced from MySQL 8.0.21
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL script for schema `lzm`
[Worker001] Executing DDL script for `lzm`.`sbtest1`
Executing common postamble SQL
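The complement of a DDL-only load is a data-only load: flip the two flags to load rows into tables that were already created, for example by the DDL-only step above. A sketch:

```js
// Load only the data for lzm.sbtest1; the table must already exist.
util.loadDump("/mysqlsh", {
    includeTables: ["lzm.sbtest1"],
    loadDdl: false,   // skip the CREATE statements
    loadData: true    // load only the table rows
});
```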

Summary

This article covered how to import dump files into a database with MySQL Shell's util.loadDump() utility.

