over 4 years ago - CCP_Bartender - Direct link

My recommendation would be that you start by looking into the query planner’s EXPLAIN command. I use the T-SQL equivalent a lot when doing these kinds of optimizations on the EVE DB. Unfortunately, I know very little about the details of InnoDB, but as an example, consider the following query:

UPDATE signatures SET modifiedByID = 0, modifiedByName = "Tripwire" WHERE lifeLeft < NOW() AND lifeLength <> '0' AND type <> 'wormhole';

When executing that query, you’re hinging on the index being created by this key in the signatures table to avoid a whole table scan:
KEY `lifeLeft, lifeLength` (`lifeLeft`,`lifeLength`),

However, it may be buying you less than you expect. You’re about to wander into the deep magic of implementation-specific db engine index construction techniques.
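You can check what the engine actually chose by prefixing the statement with EXPLAIN (MySQL has supported EXPLAIN on UPDATE statements since 5.6.3; on older versions, rewrite it as the equivalent SELECT first). An untested sketch:

    EXPLAIN UPDATE signatures
       SET modifiedByID = 0, modifiedByName = 'Tripwire'
     WHERE lifeLeft < NOW() AND lifeLength <> '0' AND type <> 'wormhole';

The output columns to watch are `type` (a value of ALL means a full table scan), `key` (which index, if any, was chosen) and `rows` (the planner’s estimate of rows examined).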

Here are some good links on the topic:

https://www.sql-server-performance.com/composite-indexes/
https://dev.mysql.com/doc/refman/5.7/en/multiple-column-indexes.html

It’s hard to say exactly what this is doing without looking at the query planner results. However, my guess is that because you also have “type” in the WHERE clause, the composite index is less useful than you might think. The composite index can be used to rapidly assemble a bunch of rows, but those rows may then need to be merged and scanned to apply the type filter; and because lifeLeft (the first column of the index) is matched with a range, the lifeLength part of the index can’t narrow the scan much further. It’s possible the InnoDB engine is ignoring the index entirely and just doing a full table scan, because it considers the merge-and-filter to be more expensive than the table scan.

My recommendations would be:

  1. Throw some of these things through the EXPLAIN statement and see if it tells you where the bottlenecks are.
  2. Try making a three-column composite index that goes (type, lifeLength, lifeLeft) and see if it makes the updates and deletes magically faster.

Note however that if #2 does magically work you probably want to consider the situation a bit more and not just blindly leave it there. A three-column index in that order might make your inserts much slower. Use the query planner! :smiley:
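For reference, trying #2 is a one-liner either way (the index name here is just illustrative, and dropping it again is equally easy):

    ALTER TABLE signatures
      ADD KEY `type_lifeLength_lifeLeft` (`type`, `lifeLength`, `lifeLeft`);

Then re-run EXPLAIN on the slow updates and deletes and compare the `key` and `rows` columns before and after; the index is only earning its keep if those change.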

over 4 years ago - CCP_Bartender - Direct link

Hrmm, I am mad suspicious about those update+innerjoin queries in the wormholeCritical event.

Again, not an expert in InnoDB and hard to say without the query planner, but according to this gentleman, you might be locking way more of the table rows than you expect: https://stackoverflow.com/questions/6664667/mysql-innodb-locks-on-joined-rows/12251122#12251122
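You can also observe the locking directly instead of inferring it. A sketch; the first statement works on any recent MySQL, the second needs 8.0’s performance_schema (on 5.7 the rough equivalent was information_schema.INNODB_LOCKS):

    -- While the suspect UPDATE runs in another session:
    SHOW ENGINE INNODB STATUS\G  -- see the TRANSACTIONS section for per-transaction lock counts

    -- MySQL 8.0+: row locks are queryable directly
    SELECT object_name, index_name, lock_type, lock_mode, COUNT(*) AS n
      FROM performance_schema.data_locks
     GROUP BY object_name, index_name, lock_type, lock_mode;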

You could try using a WITH clause; it might help hint the query planner to lock fewer rows by divorcing the table scans from the update. Again, the EXPLAIN command will help you figure out whether the change has really made a difference: you should see radically different plans.

Consider this query:

    UPDATE signatures s 
INNER JOIN wormholes w ON (s.id = initialID OR s.id = secondaryID) AND life = 'stable' 
       SET modifiedByID = 0, modifiedByName = 'Tripwire', modifiedTime = NOW() 
     WHERE s.type = 'wormhole' AND lifeLength <> '0' AND DATE_SUB(lifeLeft, INTERVAL 4 HOUR) < NOW();

If we re-write it (I have done this by hand without testing; it may not work without tweaking) so that “wormholes w” is encapsulated in a WITH clause:

WITH w AS (
    SELECT initialID, secondaryID
      FROM wormholes
     WHERE life = 'stable'
)
UPDATE signatures s
   SET s.modifiedByID = 0, s.modifiedByName = 'Tripwire', s.modifiedTime = NOW()
 WHERE s.type = 'wormhole' AND s.lifeLength <> '0'
   AND DATE_SUB(s.lifeLeft, INTERVAL 4 HOUR) < NOW()
   AND (s.id IN (SELECT initialID FROM w) OR s.id IN (SELECT secondaryID FROM w));

That might speed it up just by allowing the query planner to unlock a bunch of rows it maybe wasn’t sure about before. However, this re-write helps show something else as well. Notice the condition used to generate the result set w: its only criterion is that the wormhole must be “stable”. How much of your wormhole table is lifetime “stable”?

Provided you’re culling dead wormholes from the table, I’m guessing a lot of it. Which means you’re comparing every row in the signatures table against not one but two columns of most of the wormholes you know about, just to tell whether it needs to be updated.
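That guess is cheap to verify before restructuring anything:

    -- How skewed is the life column, really?
    SELECT life, COUNT(*) AS n FROM wormholes GROUP BY life;

If 'stable' dominates, the inclusion-based filter is doing almost no narrowing.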

The database engine might be smart enough to invert the query into a “NOT IN” if that reduces the data more, but that’s implementation dependent and I don’t know how InnoDB profiles these things. You could try inverting the condition on w and on that last AND clause, to hint to the query planner that there are way more stable wormholes than not, and see if excluding rather than including is faster in the average case.
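An untested sketch of that inversion, selecting the (hopefully much smaller) set of non-stable wormholes and excluding their signatures. Two caveats: this is only equivalent to the original if every wormhole-type signature is linked to exactly one row in the wormholes table, and NOT IN misbehaves if the subquery can return NULLs, hence the IS NOT NULL guards:

    WITH w AS (
        SELECT initialID, secondaryID
          FROM wormholes
         WHERE life <> 'stable'
    )
    UPDATE signatures s
       SET s.modifiedByID = 0, s.modifiedByName = 'Tripwire', s.modifiedTime = NOW()
     WHERE s.type = 'wormhole' AND s.lifeLength <> '0'
       AND DATE_SUB(s.lifeLeft, INTERVAL 4 HOUR) < NOW()
       AND s.id NOT IN (SELECT initialID FROM w WHERE initialID IS NOT NULL)
       AND s.id NOT IN (SELECT secondaryID FROM w WHERE secondaryID IS NOT NULL);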

With the INNER JOIN strategy, I wonder if perhaps the database can’t “abort early” for rows which don’t meet some conditions of the WHERE clause, and is being forced to lock a tonne of rows in both tables that it needn’t, because it has to join first before it can confidently execute the WHERE clause and start discarding signatures that don’t meet the type condition.

Now, you do have indexes on both initialID and secondaryID to help with these issues. But again, I’m not certain they’re doing what you expect. It seems to me the query planner would have to tunnel all the way down to the leaf level for every row in the signatures table to be sure it’s not in the result, and it has to do that twice: once for the initial and once for the secondary. At the risk of sounding like a broken record, throw it through the EXPLAIN statement and see what it’s really doing.