I have a huge MySQL database, approximately 85 GB for 2011 alone; previous years are relatively smaller in size.
From the docs, I should use the following command to import data into my InfiniDB database:
[code]idbmysql -e 'select * from source_table;' -N db2 | /usr/local/Calpont/bin/cpimport db table1 -s '\t'[/code]
I'm using the open-source version of InfiniDB on a virtual machine with 4 GB of memory.
The only issue I faced is that my VM runs out of memory soon after I run the cpimport utility.
After reading how the cpimport tool works, I learned that it loads the data into memory before it starts importing it into the columns.
I tried splitting the dump by lines using the split command; it works, but it requires a lot of babysitting before you finish importing the entire database. Roughly what that looked like is shown below.
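A minimal sketch of that split-based approach (the dump file name, chunk size, and table names here are just examples from my setup):
[code]# dump the table as tab-separated rows (no column headers)
idbmysql -proot -N -e 'select * from calldetailrecord;' source_db > cdr.tsv

# split into chunks of 1,000,000 lines: cdr_chunk_aa, cdr_chunk_ab, ...
split -l 1000000 cdr.tsv cdr_chunk_

# feed each chunk to cpimport, one at a time
for f in cdr_chunk_*; do
    /usr/local/Calpont/bin/cpimport archive_db calldetailrecord -s '\t' "$f"
done[/code]
The downside is that you need enough disk space for the full dump plus the chunks, and you have to keep an eye on every chunk finishing cleanly.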
I found a better way of doing it by following the steps below.
What I have done is count the number of rows in MySQL using the following command.
1. [code]select count(*) from table_name;[/code]
Note the number of rows down (in my case this is a small database with 23,897,040 rows).
2. Run a bash loop that uses LIMIT's offset and row count to cap how many rows get fed to cpimport on each pass:
[code]for (( i=0; i < 23897040; i+=1000000 )); do idbmysql -proot -e "select * from calldetailrecord limit $i,1000000;" -N source_db | /usr/local/Calpont/bin/cpimport archive_db calldetailrecord -j501 -s '\t'; done[/code]
This feeds 1 million rows at a time, starting from offset 0, until all 23,897,040 rows have been imported.
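If you want to avoid hard-coding the row count, it can be pulled from MySQL first. Something along these lines should work (a sketch only; the database and table names are the ones from my example and the batch size is arbitrary):
[code]#!/bin/bash
DB=source_db
TABLE=calldetailrecord
BATCH=1000000

# grab the total row count from MySQL (-N suppresses the column header)
TOTAL=$(idbmysql -proot -N -e "select count(*) from $TABLE;" $DB)

# walk through the table in BATCH-sized slices and pipe each one to cpimport
for (( i=0; i < TOTAL; i+=BATCH )); do
    idbmysql -proot -N -e "select * from $TABLE limit $i,$BATCH;" $DB \
        | /usr/local/Calpont/bin/cpimport archive_db $TABLE -s '\t'
done[/code]
Because only one batch is in flight at a time, memory usage stays bounded regardless of the table size.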
I found this is the only way I could get the job done without having my virtual machine run out of memory.
Is there a better way? ;)