We often recommend that our clients convert their databases from MyISAM tables to InnoDB.
The conversion itself is usually straightforward, but the application can then be broken by new, unexpected errors:
1205 (ER_LOCK_WAIT_TIMEOUT)
Lock wait timeout expired. Transaction was rolled back.
1213 (ER_LOCK_DEADLOCK)
Transaction deadlock. You should rerun the transaction.
These errors are not hard to handle, but you do need to be aware of them.
Here is something we do in our PHP applications:
CODE:
class mysqlx extends mysqli {
    ...
    function deadlock_query($query) {
        $MAX_ATTEMPTS = 100;
        $current = 0;
        while ($current++ < $MAX_ATTEMPTS) {
            $res = $this->query($query);
            // Retry only on lock wait timeout (1205) or deadlock (1213);
            // any other outcome, success or failure, ends the loop.
            if (!$res && ($this->errno == 1205 || $this->errno == 1213))
                continue;
            else
                break;
        }
        return $res;
    }
    ...
}
You may want to handle ER_LOCK_WAIT_TIMEOUT in a different way, especially for web applications where a long wait is not acceptable; you get the idea.
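For a web request, for example, one option is to retry only deadlocks a couple of times and to surface lock wait timeouts to the caller right away, since by the time 1205 is returned the statement has already waited out innodb_lock_wait_timeout. This is only a sketch, not part of our class; the mysqlx_web name and web_query() method are invented for the illustration:
CODE:
// Hypothetical web-oriented variant: retry deadlocks (1213) a few times,
// fail fast on lock wait timeouts (1205), and return any other error as-is.
class mysqlx_web extends mysqli {
    function web_query($query, $max_retries = 3) {
        for ($attempt = 0; $attempt <= $max_retries; $attempt++) {
            $res = $this->query($query);
            if ($res !== false)
                return $res;       // success
            if ($this->errno != 1213)
                return false;      // 1205 or any other error: give up immediately
            // 1213: InnoDB rolled us back as the deadlock victim, try again
        }
        return false;
    }
}
Retrying a single statement like this only really makes sense under autocommit; once statements are grouped into a transaction, a deadlock rolls the whole transaction back and you have to re-run all of it, which comes up again in the comments below.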
There is also a script we use for converting databases with many tables; maybe it will be useful for you:
CODE:
DBNAME="dbname"
DBUSER="user"
DBPWD="password"

for t in $(mysql -u$DBUSER -p$DBPWD --batch --column-names=false -e "show tables" $DBNAME);
do
    echo "Converting table $t"
    mysql -u$DBUSER -p$DBPWD -e "alter table $t type=InnoDB" $DBNAME;
done
There is the standard script mysql_convert_table_format in the MySQL distribution, but it requires Perl and the DBI package, which are sometimes not installed on the server in question.
-
How often are you seeing deadlocks and rolled back transactions? Maybe I’m just lucky with the load on the systems I work with and have yet to see this? Also, I tend to run mysql inside screen and issue the ALTER TABLE … from the screened session.
-
Why is this error happening?
I converted a big database to InnoDB and I’m seeing this error randomly. The amount of code to change is huge, and I’m considering migrating back to MyISAM because of this.
Is there any way this error can be solved at the MySQL server level?
-
Jacques,
It all depends on the load. InnoDB does not set any locks for normal selects, so to really cause a deadlock you need update transactions that collide with each other. Furthermore, if you were using MyISAM and therefore do not use transactions, you need update statements that lock data in a different order so that they really collide into a deadlock; otherwise you will just see a short row-level lock wait and that is it.
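As an illustration of that kind of collision (a sketch only; the accounts table and the transfer() helper are invented for the example): two transactions that touch the same two rows in a different order can each end up holding a row lock the other one needs.
// If one client runs transfer($db, 1, 2, 10) while another concurrently runs
// transfer($db, 2, 1, 10), each holds a row lock the other needs, and InnoDB
// rolls one of them back with error 1213. (Values are inlined only to keep
// the illustration short.)
function transfer(mysqli $db, $from_id, $to_id, $amount) {
    $db->autocommit(false);   // start a transaction
    $db->query("UPDATE accounts SET balance = balance - $amount WHERE id = $from_id");
    $db->query("UPDATE accounts SET balance = balance + $amount WHERE id = $to_id");
    $db->commit();
    $db->autocommit(true);
}
With MyISAM the same two calls would simply have queued on the table lock, which is why the application never saw this before the conversion.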
-
John,
Deadlocks are normal. With MyISAM you did not get any because it does not support transactions and uses only table-level locks, which all have to be taken at the same time.
With InnoDB, and pretty much every transactional storage engine, deadlocks will happen because of how data is accessed.
If you want to avoid deadlocks you can serialize your transactions; that will perform poorly of course, but that is another story.
No, there is no option at the MySQL level to solve this problem, because only the application knows what to do in the case of a deadlock. In some cases it is enough to simply retry the statement, as Vadim shows; in more complicated cases you need to re-run the whole transaction, possibly changing it (because the data has likely changed).
If you use single statements you can handle it in the MySQL wrapper you are using, or, for example in PHP, extend the mysqli class so it masks such errors for you by retrying statements. I would not do that, however.
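A minimal sketch of re-running the whole transaction, assuming the unit of work is passed in as a callback so it can be executed again from the start (the run_transaction() name and its convention are invented for the illustration):
function run_transaction(mysqli $db, $work, $max_attempts = 5) {
    for ($attempt = 1; $attempt <= $max_attempts; $attempt++) {
        $db->autocommit(false);
        $ok = call_user_func($work, $db);   // runs the queries of one transaction
        if ($ok && $db->commit()) {
            $db->autocommit(true);
            return true;                    // committed successfully
        }
        $errno = $db->errno;                // remember the error before rollback resets it
        $db->rollback();
        $db->autocommit(true);
        if ($errno != 1213 && $errno != 1205)
            return false;                   // not a lock problem, do not retry
        // Deadlock or lock wait timeout: re-run the whole callback from the start.
    }
    return false;
}
Because the callback is executed again in full, any SELECTs inside it are repeated as well, so each attempt makes its decisions against current data.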
-
Jacques,
As Peter said, it depends on the load; you may not see this error at all, or it may happen once per day or week, and even that may not be acceptable for you.
Regarding ALTER TABLE, it is fine if you have a few tables.
For a database with 50+ tables it is a bit annoying to run the statement manually.
-
Here is a quick shell script to convert all tables in a database to InnoDB. No dependencies other than a command-line prompt on a Un*x-like system and the standard MySQL tools:
for T in `mysql -u root -B -N -e "show tables" test`; do mysql -u root -e "alter table $T type=innodb" test; done
Replace "test" with the target database. This pattern is also great for optimizing or analyzing your MyISAM tables.
-
These lock timeouts imply that you have some long running queries (or that your server is overloaded), correct?
I suspect that for most applications you would be better off looking for the source of the long running lock rather than wrapping all your queries in a retry script (which strikes me as being problematic in the context of transactions).
Still, thanks for the post - I wasn’t aware of this MyISAM->InnoDB gotcha.
-
Raven,
A lock wait timeout can happen in two cases: a cross-storage-engine lock wait (in which case it may well be a deadlock), or, most commonly, a lock held for too long, which can come from high load or simply from some long-running queries.
Deadlocks, however, can happen even under light load; it is more a question of probability.
-
Thanks for the clarification, Peter.
Is it fair to say that a deadlock can’t happen if you are exclusively using InnoDB and not manually locking things? I’d like to believe that InnoDB does locking internally in a consistent and safe manner…
-
No, you can’t say that.
Even if you do not lock anything manually, there can be statements that run into a deadlock. Consider, for example, two UPDATE statements updating the table by index, one going in forward order and the other in reverse order. These will deadlock.
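A concrete, if contrived, sketch of that case (the table and column names are invented): the same UPDATE run over the same index in opposite directions from two concurrent connections.
// Hypothetical illustration of the forward/reverse scan case; table `t` with
// an index on `id` is assumed. Each statement must come from its own
// connection, running at the same time (e.g. two cron jobs or two requests).
$forward = "UPDATE t SET counter = counter + 1 ORDER BY id ASC";
$reverse = "UPDATE t SET counter = counter + 1 ORDER BY id DESC";
// connection 1:
//   $db1->query($forward);   // locks rows in ascending id order
// connection 2:
//   $db2->query($reverse);   // locks rows in descending id order
// When the two scans meet in the middle, each waits for a lock the other
// holds, and InnoDB resolves it by rolling one statement back with 1213.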
-
I stumbled across this website in search of a basic problem I’ve found. I’m using PHP 5.1.6, MySQL 5.0.24a default installs from Ubuntu 6.10, so pretty recent + standard.
I’m using the PHP mysqli API and doing a basic insert into an InnoDB table. My database class starts a transaction with "$mysqli->autocommit(false)", then executes an insert SQL statement, and completes with "$mysqli->commit()". It works fine…
…until PHP bombs mid-way due to a coding bug. So the insert happens, but the script dies doing some other stuff before the commit() occurs. I thought the database would roll back, but instead it remains locked. Another PHP script attempting to insert takes 50 seconds to time out with a "1205 - Lock wait timeout exceeded; try restarting transaction". The 1205 error won’t go away until I restart Apache. For whatever reason, the resources are not released until the process is killed.
I found that an ‘exit’, instead of a PHP code bug, does not cause a problem. So there is an auto-rollback mechanism in place; it just fails miserably when PHP dies unexpectedly. Having to restart Apache is a pretty drastic measure to overcome a code bug.
To avoid this, I use "register_shutdown_function()" when I start a transaction and set a flag to indicate a transaction is in progress (because there is no unregister_shutdown_function()). See below. The __shutdown_check() routine (I think it needs to be public) is called when the script bombs, and it is able to invoke the rollback(). This seems to work fine.
public function begin_transaction()
{
    $ret = self::$mysqli->autocommit(false);
    $this->transaction_in_progress = true;
    // Safety net: roll back if the script dies before commit() is reached.
    register_shutdown_function(array($this, "__shutdown_check"));
}

public function __shutdown_check()
{
    if ($this->transaction_in_progress)
    {
        $this->rollback();
    }
}

public function commit()
{
    $ret = self::$mysqli->commit();
    $this->transaction_in_progress = false;
}

public function rollback()
{
    $ret = self::$mysqli->rollback();
    $this->transaction_in_progress = false;
}
I notice your code above has a loop on the 1205 (and 1213) errors. In my case, each attempt takes 50 seconds to time out; I’m assuming that’s the default (I haven’t tuned anything). So your script could loop for 5000 seconds. Perhaps a better solution is to use timestamps and cap the retries by elapsed time, rather than by a fixed loop count?
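Something along these lines, as a sketch (the timed_retry_query() helper is invented): cap the retries by elapsed time instead of by a fixed attempt count, so a series of 50-second lock wait timeouts cannot stall the script for thousands of seconds.
function timed_retry_query(mysqli $db, $query, $max_seconds = 60) {
    $deadline = time() + $max_seconds;
    do {
        $res = $db->query($query);
        if ($res !== false)
            return $res;                            // success
        if ($db->errno != 1205 && $db->errno != 1213)
            return false;                           // some other error, do not retry
        usleep(100000);                             // brief pause before the next attempt
    } while (time() < $deadline);                   // stop once the time budget is spent
    return false;
}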
-
Glen,
Yes, that is really strange.
I also agree that 1205 should be handled differently for a web site; I showed the part of the code we use for batch processing.
-
Glen, could it be that you are using persistent MySQL connections? If so, MySQL probably doesn’t know your PHP code has bombed out; all it knows is that the Apache child is still connected, so maybe its only choice is to wait and see if the client/Apache completes the transaction.
-
ER_LOCK_WAIT_TIMEOUT will also pop up with transactions that are open for long periods of time. For instance, assume 2 clients (A, B) and the following sequence of statements.
A - create table ed (id int unsigned, name varchar(10)) engine=innodb;
A - begin;
A - insert into ed set id = 1, name = 'one';
B - begin;
B - update ed set name = 'two' where id = 1;
B will hang until A commits or rolls back. If enough time goes by, you get ER_LOCK_WAIT_TIMEOUT.
Moral: keep transactions as short as possible.
ER_LOCK_DEADLOCK can be exacerbated by triggers, since triggers increase the length of the implicit transaction of an insert/update as well as the number of index locks (especially foreign keys) acquired. Also, don't forget that summary triggers like "update parent set column = column + 1 where primary_key = ##" lock the parent's row until the original statement is completed.
Moral: make update/insert statements (including triggers) as quick as possible.
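As a concrete (invented) example of such a summary trigger: every INSERT into `child` also updates, and therefore row-locks, the matching `parent` row, and that lock is held until the insert's transaction ends, so concurrent inserts for the same parent serialize on it and become more likely to time out or deadlock.
// Hypothetical summary trigger; table, column, and connection details are made up.
$db = new mysqli('localhost', 'user', 'password', 'test');
$db->query("
    CREATE TRIGGER parent_child_count AFTER INSERT ON child
    FOR EACH ROW
        UPDATE parent SET child_count = child_count + 1 WHERE id = NEW.parent_id
");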