
    Import/Export Big MYSQL Database

30/09/2016 · Updated: 16/07/2017

When working with a MySQL database I often use phpMyAdmin, which is a nice GUI for manipulating the database. But some operations fail in phpMyAdmin when the database is too large; in particular, you can't import or export really large databases through it. So sometimes you need to do things on the command line.

So I thought I'd document some of the command-line snippets I use frequently. In the following, replace [USERNAME] with your MySQL username, [DBNAME] with your database name, and [/path_to_file/DBNAME] with the path and name of the file used for the database dump.

Export a Big MySQL Database:

    You create a copy by dumping the database with mysqldump.

    To dump the database and gzip it at the same time, use the following. This will prompt you for your password.

    mysqldump -u [USERNAME] -p [DBNAME] | gzip > [/path_to_file/DBNAME].sql.gz
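Before relying on a dump, it's worth confirming the gzip file is intact and actually contains SQL. A quick sanity check (using the same placeholder path as above):

```shell
# Exit status is non-zero if the archive is truncated or corrupted
gzip -t [/path_to_file/DBNAME].sql.gz && echo "archive OK"

# Peek at the first few lines without decompressing the whole file
zcat [/path_to_file/DBNAME].sql.gz | head -n 5
```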

Import a Big MySQL Database:

    If you want to replace the database with a fresh dump created by the above process, do the following.

    First, unzip the file.

    gzip -d [/path_to_file/DBNAME].sql.gz

Next, log in to MySQL to get a mysql prompt (you will be asked for your password):

    mysql -u [USERNAME] -p

    Then do the following to wipe out the old database and replace it with the new dump:

    SHOW DATABASES;
    DROP DATABASE [DBNAME];
    CREATE DATABASE [DBNAME];
    USE [DBNAME];
    SOURCE [/path_to_file/DBNAME].sql;
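If you prefer to skip the interactive mysql prompt, the same drop/create/import sequence can be collapsed into a pipeline. This is just a sketch; note that each mysql invocation with -p prompts for the password separately:

```shell
# Recreate the database non-interactively...
mysql -u [USERNAME] -p -e "DROP DATABASE IF EXISTS [DBNAME]; CREATE DATABASE [DBNAME];"

# ...then stream the compressed dump straight in, with no intermediate .sql file on disk
zcat [/path_to_file/DBNAME].sql.gz | mysql -u [USERNAME] -p [DBNAME]
```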

     Conditional Dumps

    Sometimes the search index is huge and you want to omit it from the dump. Do so with:

    mysqldump -u [USERNAME] -p [DBNAME] --ignore-table=[DBNAME].search_index | gzip > [/path_to_file/DBNAME].sql.gz

    There are actually a number of tables you could exclude, like the sessions table, the watchdog table and all the cache* tables.
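Each excluded table needs its own --ignore-table flag; mysqldump does not accept a comma-separated list. For example, to also skip the sessions and watchdog tables mentioned above:

```shell
mysqldump -u [USERNAME] -p [DBNAME] \
  --ignore-table=[DBNAME].search_index \
  --ignore-table=[DBNAME].sessions \
  --ignore-table=[DBNAME].watchdog \
  | gzip > [/path_to_file/DBNAME].sql.gz
```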

But if you use the above technique to destroy and recreate the database after doing this, you will be missing all those excluded tables. So you will want to use a two-step process instead:

    First, create a backup with ONLY the table information, no data.

    mysqldump -u [USERNAME] -p [DBNAME] --no-data | gzip > [/path_to_file/DBNAME].info.sql.gz

    Then create a backup, including only data from the tables you need.

mysqldump -u [USERNAME] -p [DBNAME] --no-create-info --ignore-table=[DBNAME].search_index --ignore-table=[DBNAME].cache% | gzip > [/path_to_file/DBNAME].data.sql.gz

    After doing this, just import the two files as above, first the one with only the table info, and then the data. Result, a (relatively) small database with all the optional tables emptied out.
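Restoring from the two-file backup means importing in order: the schema-only file first so every table exists, then the data file. A sketch using the same placeholders:

```shell
# 1. Table definitions for everything (the excluded tables end up empty)
zcat [/path_to_file/DBNAME].info.sql.gz | mysql -u [USERNAME] -p [DBNAME]

# 2. Data for all the tables that were not ignored
zcat [/path_to_file/DBNAME].data.sql.gz | mysql -u [USERNAME] -p [DBNAME]
```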

Note that the wildcard trick above is not documented anywhere that I can see, so you'll want to test that it works in your setup.
