Ocean Grid has been offline since February 2016 because of MySQL problems, but the database is now restored and the grid is working! The problem was caused by the growth of the database: see the details below.
No doubt there will be some things to fix as usual... Ed.: see the following post about functionality now restored and non-urgent work required.
Details of Problems Causing MySQL InnoDB Database Corruption
(1) The following needed to be added to /etc/mysql/mysql.conf.d/mysqld.cnf (a replacement for /etc/mysql/my.cnf on some systems).
[mysqld]
innodb_data_file_path = ibdata1:10M:autoextend
However, before doing this, be careful to back up the database first! Details below.
With this setting, the files ibdata1 and ibtmp1 can grow as needed. Without it, ibdata1 falls out of sync with the database and the temporary file cannot be written, at which point MySQL stops unexpectedly, even if you try to drop the corrupted database in order to restore it.
It is worth backing up /var/lib/mysql/ (with MySQL stopped) as well as using mysqldump to make a SQL backup.
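The two backups above might look something like this. This is only a sketch: the paths, the `mysql` service name, and the assumption that credentials are available (e.g. via ~/.my.cnf) are all specific to a Debian/Ubuntu-style install, so adjust for your system.

```shell
# Sketch only: assumes a Debian/Ubuntu-style layout and that mysqldump
# can authenticate without prompting. Adjust paths and service name.
BACKUP_DIR="/root/mysql-backup-$(date +%F)"
mkdir -p "$BACKUP_DIR"

# 1) Logical SQL backup with mysqldump while the server is running.
mysqldump --all-databases --single-transaction \
    > "$BACKUP_DIR/all-databases.sql"

# 2) Physical copy of the data directory with MySQL stopped, so that
#    ibdata1 and the redo log files are captured in a consistent state.
systemctl stop mysql
cp -a /var/lib/mysql "$BACKUP_DIR/var-lib-mysql"
systemctl start mysql
```

The physical copy is what lets you put the raw InnoDB files back if the server will no longer start at all; the SQL dump is what you restore from once a clean server is running again.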
(2) 4GB memory restriction on the old 32-bit operating system.
The system was i686 x86, i.e. 32-bit. A 32-bit system cannot address more than 4GB of memory and thus cannot write a bigger file in one go, which is what MySQL tries, and fails, to do. The server had been upgraded in place since the year dot rather than re-installed, which is why it was still running a 32-bit operating system on x64 hardware that could and should be running a 64-bit one. Installing a fresh 64-bit operating system has removed the restriction, and with it this cause of database corruption. The server can now address more memory and should run faster in general too.
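If you are unsure whether a box is in the same situation, two generic Linux commands (nothing specific to this server) will tell you:

```shell
# Report the machine architecture: i686 means 32-bit, x86_64 means 64-bit.
uname -m

# Report the word size of the default environment: prints 32 or 64.
getconf LONG_BIT
```

A result of `i686` / `32` on modern x64 hardware means the operating system, not the hardware, is imposing the 4GB limit.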
Moral of the story: do not spend 3 years thinking your database is corrupt and cannot be fixed! The backup was fine and could be restored, but the index file was growing past the default upper limit, which is why the corruption kept recurring. Also, do not be lazy! Lesson learned.