chkfrag - check if you can save some space!
chkfrag is a program that checks for fragmentation of the pages that store the VARIABLE-sized attributes. The program prints the degree of fragmentation and suggests either an
OPTIMIZE TABLE or a rolling restart to defragment the data.
Gathering information how much DataMemory is being used
...
Used DataMemory calculated:8202MB 262464 pages (32KB pages)
Used DataMemory reported:8256MB 264192 pages (32KB pages)
Total DataMemory reported:16384MB 524288 pages (32KB pages)
Percentage of VAR* in database: 4.41 percent
Fragmentation of VAR* in database: 50.33 percent
Possibly lost memory (locked in by tables): 183.38MB
Community version (GPL)
Download - source dist only (latest first)
Version 0.21 - released 20th of Jun 2009 - download source dist.
Drop an email to
email@example.com if you have problems or suggestions on how to improve this utility.
MySQL Cluster 6.3 or MySQL Cluster 7.0 installed and started
To keep things simple, it is also recommended to have:
ndb_mgmd running on localhost (where chkfrag is going to be installed)
MemoryReportFrequency set in config.ini (the data nodes can be somewhere else or also on localhost)
mysql_config on the PATH
autoconf, automake, libtool (e.g. apt-get install autoconf automake libtool)
Make sure you have the correct mysql_config on the PATH. If you have used the config tool (and the default install path):
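A minimal sketch of putting mysql_config on the PATH. The prefix /usr/local/mysql is an assumption (a common default for MySQL Cluster binary installs); adjust it if your installation lives elsewhere:

```shell
# Assumed install prefix; change if your installation went elsewhere.
export PATH=/usr/local/mysql/bin:$PATH

# Verify that the right mysql_config is picked up:
which mysql_config
mysql_config --version
```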
Make sure you have g++ and gcc installed (run those two commands from the command line). If they don't exist, you have to install them!
or another popular PATH is ...
Proceed with the following when you can execute the command
mysql_config (then your PATH is good):
tar xvfz chkfrag-0.21.tgz
cd chkfrag-0.21
./configure
make
sudo make install
If you don't set
LD_LIBRARY_PATH to where libmysqlclient_r.so.16 is located, you get the following error message when starting chkfrag:
chkfrag: error while loading shared libraries: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory
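A hedged example of setting LD_LIBRARY_PATH. The directory /usr/local/mysql/lib is an assumption; substitute whichever directory actually contains libmysqlclient_r.so.16 on your system:

```shell
# Assumed library location; verify first, e.g. with:
#   find /usr -name 'libmysqlclient_r.so.16' 2>/dev/null
export LD_LIBRARY_PATH=/usr/local/mysql/lib:$LD_LIBRARY_PATH
```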
On Solaris (you need Sun Studio), run the following configure:
CC=cc-5.0 CXX=CC ./configure
Running chkfrag is easy:
$ chkfrag -? will print out the arguments:
[-?|--help] prints this information
[-c|--ndb-connectstring=]<hostname> hostname where ndb_mgmd is running, default 'localhost'
[-d|--database=]<d> Database containing tables to analyse (default is all databases).
[-a|--all] Analyze all records in each table.
By default max 10240 records (depending on actual records in table) will be analyzed.
[-r|--records=]<records> Analyze <records> many records when calculating storage for tables (default=10240).
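Putting the flags together, a hypothetical invocation (the hostname mgmhost, the database name test1, and the sample size are made-up values for illustration):

```shell
# -c: host where ndb_mgmd runs; -d: restrict to one database;
# -r: number of records sampled per table for the storage estimate.
chkfrag -c mgmhost -d test1 -r 50000
```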
The example below runs
chkfrag on all NDB tables in all databases (the default):
$ chkfrag
Here follows a sample run analyzing a Cluster with 2 data nodes with an OpenLDAP database created:
Gathering information how much DataMemory is being used (this can take some time depending on MemoryReportFrequency)
Calculating storage for tables.. based on content found in 10240 random records
Calculating storage cost per record for table organizationalUnit
table organizationalUnit has 1 rows
Calculating storage cost per record for table OL_nextid
table OL_nextid has 0 rows
Calculating storage cost per record for table OL_opattrs
table OL_opattrs has 999987 rows
Calculating storage cost per record for table testObject
table testObject has 999985 rows
Calculating storage cost per record for table OL_dn2id
table OL_dn2id has 999987 rows
Calculating storage cost per record for table ndb_schema
table ndb_schema has 7 rows
Calculating storage cost per record for table ndb_apply_status
Consider loading database with average data for exact measurement
table ndb_apply_status has 0 rows
Calculating storage cost per record for table domain
table domain has 1 rows
Used DataMemory calculated:8202MB 262464 pages (32KB pages)
Used DataMemory reported:8256MB 264192 pages (32KB pages)
Total DataMemory reported:16384MB 524288 pages (32KB pages)
Percentage of VAR* in database: 4.41 percent
Fragmentation of VAR* in database: 50.33 percent
Possibly lost memory (locked in by tables): 183.38MB
Since the percentage of VAR* in the Cluster is so small compared to the total dataset, it is unlikely that the following recommendations will help.
1. Run OPTIMIZE TABLE on tables with a lot of VARCHAR/VARBINARY.
If you are not happy with the result of this, then do step 2.
2. Perform a rolling restart.
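A hedged sketch of the two recommendations, assuming a table t1 in database test1 with many VARCHAR/VARBINARY columns and a data node with node id 2 (all three names are made up; repeat the restart for each data node, one at a time):

```shell
# Step 1: defragment the variable-sized part of one table online.
mysql -e "OPTIMIZE TABLE test1.t1;"

# Step 2: if that is not enough, restart the data nodes one at a time
# (a rolling restart) via the management client; here, node id 2.
ndb_mgm -e "2 RESTART"
```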
In this case, running
OPTIMIZE TABLE or performing a rolling restart is unlikely to gain anything.