The Cost-Benefit Paradigm

There’s a pattern I keep seeing where organisational structure or segregation of duties makes simple tasks complicated. For a bit of fun I tried to define it:

The Cohen Cost-Benefit Paradigm

When there are two parties involved:

  • Party One understands the benefit of some resource but not the cost.
  • Party Two controls the cost of the same resource but not the benefit of it.

The result is that neither can decide if the usage of the resource is appropriate or not.

This happens a lot where an infrastructure team manages, and is accountable for, infrastructure costs. Another team comes to them and requests infrastructure components, say some servers.

One: Hi, we’d like some shiny new servers with a load of RAM

Two: Why?! They’re really expensive.

One: We need it to solve problem X.

Two: There has to be a cheaper way. Why can’t you use solution Y instead?

One: No, we analysed the problem already, this makes the best sense. Can you just provision it please?

Two: Nope. It’s too expensive.

Then follows debate, re-design and discussion that could have been avoided.

Note that neither party is being malicious. One is trying to deliver benefits to their customers, the other to keep their costs under control.

Possible solutions to the problem?

  • Improve Communication. This is more a fix for the symptom than the problem but would make the best of the situation and allow everyone’s concerns to be factored into the design.
  • Make one person/team responsible for both the benefit and the cost.

Are recruiters creating “appealing” faux profiles?

I received this LinkedIn invite today:

[Image: LinkedIn invite]

The profile page on LinkedIn looks legit but the suspicious side of me didn’t think that photo was genuine.

[Image: LinkedIn profile]

TinEye is an awesome service that allows you to search for copied images on the internet.

Searching for the profile image turned up 25 other miscellaneous sites featuring the same picture. Either this recruiter is a bit of an “internet sensation” or someone’s just copied a picture of someone attractive to create a faux profile.

If it’s the latter their strategy is definitely sound!

[Image: LinkedIn “How you’re connected”]

Putting a GPS Tracker in the mail

I lent my SPOT Personal Tracker to a friend recently. The SPOT is a neat piece of kit that relays your GPS location via the Globalstar satellite constellation in various ways. There are two buttons that allow you to send pre-configured messages to a given set of SMS/email users, an optional “Track Progress” Google Maps overlay, and a 911 emergency button that alerts an international emergency centre. It’s been a great reassurance to have this with me on a couple of foreign motorbike trips, especially those that go out of mobile phone range.

I posted the SPOT Tracker in a padded envelope and, just before I put it in the post, enabled the “Track Progress” function for a laugh. The GPS needs a pretty clear view of the sky and the satellite transmissions are low power. That low power allows it to track for an amazing 14 days, or send 911 signals for approximately 7 days, off just a pair of lithium AA batteries. Given that the conditions it would be in were far from ideal I didn’t have high hopes of it tracking at all.

I took it to a post office near work and sent it Royal Mail Special Delivery at lunchtime. I expected the parcel to be put on a truck or train for the 250 mile (400 km) journey.

That evening I checked in and found that it had sent out some signals!

First from the large Mount Pleasant sorting office; note the red-coloured Royal Mail vehicles.
[Image: Location 1]

It then spent an hour on the road, I’m guessing in a fibreglass or soft-topped truck, heading out of London and up the M11.

All of the updates while it was on the move are accurate enough that you can see which side of the motorway it was on.
[Image: Motorway position]

It stopped sending messages an hour after it left the sorting office; you can see clearly that it’s in a loading bay.
[Image: Loading bay, Stansted]

Turns out it wasn’t making its journey by train or truck; it was going on a plane from Stansted airport. Perhaps I shouldn’t have left it turned on and tracking!
[Image: Stansted]

It all went a bit quiet from here until early the next morning, when it showed up 5 miles or so away from my friend’s house.
[Image: Prudhoe depot]
Red vans on an industrial estate? Looks like another Royal Mail depot to me.

Finally we watched it on the driver’s delivery round and spotted it a couple of streets away from his house.
[Image: Close to delivery]

It arrived shortly afterwards, about 24 hours after being posted.

It was quite cool watching the progress of the parcel in real-time without using the carrier’s own tracking information, and it would have been incredibly useful if the package had gone missing. This was a fun demonstration of the SPOT Tracker, which I’d recommend to anyone who does outdoor stuff away from mobile phone reception.

With parcels, all we need next is some kind of cheap data-logging accelerometer so you can prove when the courier dropped your package…

Poor Man’s Parallelization for Batch Processing Jobs

One common problem that I’ve seen time and time again with batch processing jobs (generally cronjobs) is that when they’re written they run quickly. Over time their workload grows until eventually the job is unacceptably slow. Most of these simple jobs are single-threaded and follow the pattern:

  • Query a database to get a list of objects to work on (users, pages, customers, etc)
  • Process or derive some data for each user

As an example let’s use a simple script that just iterates over a list of users.

<?php

$userIds = range(1,10);

foreach ($userIds as $userId) {
    echo "working on userId=$userId". PHP_EOL;
}

If you run this script it’ll just output:

working on userId=1
working on userId=2
working on userId=3
...
working on userId=10

To create a simple way of separating the jobs consistently we can use the modulus operator, which just calculates the remainder of dividing one number by another. It’s a common arithmetic operator in almost all languages so this technique is pretty portable.

Let’s allow two arguments to be passed into the script. We want to define the number of jobs to run in parallel and also which of those jobs this instance is. Here’s a simple way of doing it:

<?php

// which job this instance is (1-based) and how many jobs in total
$job  = (int)$argv[1];
$jobs = (int)$argv[2];

echo "running job $job of $jobs" . PHP_EOL;

$userIds = range(1, 10);

foreach ($userIds as $userId) {

    // skip any userId that isn't assigned to this job
    if ($userId % $jobs != $job - 1) {
        continue;
    }
    echo "working on userId=$userId" . PHP_EOL;
}

Running this on the command-line like php myScript.php 1 2 will output:

running job 1 of 2
working on userId=2
working on userId=4
working on userId=6
working on userId=8
working on userId=10

…and running with php myScript.php 2 2

running job 2 of 2
working on userId=1
working on userId=3
working on userId=5
working on userId=7
working on userId=9

Nice! Now we can run the even and odd jobs separately and at the same time.

Have a play with the script and try running it split three ways with the options "1 3", "2 3" and "3 3".

You can use the modulus operator in SQL queries too, which could be useful if you’re pulling your list of IDs from the database.

SELECT userId FROM users WHERE userId % 2 = 1;
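
As a rough sketch combining the two (the PDO connection details and the `users` table are assumptions for illustration, not part of the original script):

<?php

// Hypothetical sketch: push the job split into the query itself
$job  = (int)$argv[1];
$jobs = (int)$argv[2];

$pdo  = new PDO('mysql:host=localhost;dbname=myDb', 'username', 'password');
$stmt = $pdo->prepare('SELECT userId FROM users WHERE userId % :jobs = :remainder');
$stmt->execute(array(':jobs' => $jobs, ':remainder' => $job - 1));

while ($userId = $stmt->fetchColumn()) {
    echo "working on userId=$userId" . PHP_EOL;
}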

It’s worth noting that this hack won’t always work. You need to be able to identify each iteration of your loop with some unique integer and have sufficient free resources on your machine(s) to run the script in parallel. In some cases the proper way to go is a full-on async setup using something like Gearman, but if you’re in a hurry or the code is trivial this is a great little five minute fix.

Node.js 20 line proxy server ported to 15 lines of CoffeeScript

I only recently discovered CoffeeScript, which seems like it could be a nicer way of writing simple JavaScript code.

After seeing, and being impressed by, A HTTP Proxy Server in 20 Lines of node.js Code around a year ago, I thought it might be a suitable piece of code to try porting to CoffeeScript. More complicated than the “Hello World” on the Node.js homepage but still fairly simple.

http = require 'http'

http.createServer( (request, response) ->

  proxy = http.createClient 80, request.headers['host']
  proxy_request = proxy.request request.method, request.url, request.headers

  proxy_request.addListener 'response', (proxy_response) ->

    proxy_response.addListener 'data', (chunk) ->
      response.write chunk, 'binary'

    proxy_response.addListener 'end', -> 
      response.end()

    response.writeHead proxy_response.statusCode, proxy_response.headers

  request.addListener 'data', (chunk) ->
    proxy_request.write chunk, 'binary'

  request.addListener 'end', ->
    proxy_request.end()

).listen(8080)

The port was incredibly simple, just some search/replace-type modification, and the five line saving comes only from lines that closed functions with });. I’m not entirely convinced that it’s nicer to read yet, but maybe that’s because I’m not very used to it. This code doesn’t exactly use many of CoffeeScript’s nicer tricks.

I’ll definitely give CoffeeScript a go with a small project in the future to evaluate it, but for the moment the jury’s out.

Working with Date and Time in PHP

A lot of people ask questions relating to date and time in PHP. Here are some answers to the most commonly asked questions and common mistakes.

Procedural vs OO

Historically PHP provided a set of functions that allowed easy parsing of dates/times into UNIX timestamps (mktime(), strtotime(), etc) and then some tools to manipulate those timestamps and return them as formatted strings (date(), gmdate(), etc).

PHP 5 introduced the DateTime class but lots of users still seem to be using the old procedural functions. I’ll try to demonstrate both approaches for each example and hope to win you over: DateTime is far easier to use and more powerful.

strtotime()

strtotime() is probably the simplest approach: it takes a date as a string and attempts to convert it to a UNIX timestamp.
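
A few quick examples (return values hedged; the exact timestamp depends on your configured timezone, Europe/London assumed here):

var_dump(strtotime('2011-05-01 09:22:34')); // int(1304238154)
var_dump(strtotime('next Monday'));         // midnight next Monday, as a timestamp
var_dump(strtotime('not a date'));          // bool(false) on failure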

A lot of people seem to try to use mktime() and gmmktime() to create dates from data they’ve fetched from a user or from the database. In most cases you can use strtotime(), which is very powerful and far simpler to use.

using mktime()

$dateString = '2011-05-01 09:22:34';
if (preg_match('/^(\d{4})\-(\d{2})\-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/', $dateString, $a)) {
    $t = mktime($a[4], $a[5], $a[6], $a[2], $a[3], $a[1]);
    echo date('r', $t) . PHP_EOL; // returns: Sun, 01 May 2011 09:22:34 +0100
}

using strtotime()

$dateString = '2011-05-01 09:22:34';
$t = strtotime($dateString);
echo date('r', $t) . PHP_EOL; // returns: Sun, 01 May 2011 09:22:34 +0100

As you can see it’s far easier to use strtotime() to do this. The DateTime class wraps this functionality too:

using DateTime

$dateString = '2011-05-01 09:22:34';
$dt = new DateTime($dateString);
echo $dt->format('r') . PHP_EOL; // returns: Sun, 01 May 2011 09:22:34 +0100

Timezones

If you tried out the examples above and got a warning that a default timezone isn’t set then you should set one, either with date_default_timezone_set() at runtime or by defining date.timezone in your php.ini file. There’s a full list of supported timezones in the PHP manual.

Setting the timezone at runtime

date_default_timezone_set('Europe/London');

Formatting/returning dates

As a UNIX timestamp isn’t human readable you’ll need to convert it back out to something readable. You can format it into just about any style you want using the date() format strings (there’s a full list in the PHP manual); if you ever need to find them in a hurry just search Google for “php date”. The procedural function date() is wrapped by the format() method in the DateTime class.

Procedural

$dateString = '2011-05-01 09:22:34';
$t = strtotime($dateString);
echo date('l jS \of F, Y', $t) . PHP_EOL; // returns: Sunday 1st of May, 2011

DateTime

$dateString = '2011-05-01 09:22:34';
$dt = new DateTime($dateString);
echo $dt->format('l jS \of F, Y') . PHP_EOL; // returns: Sunday 1st of May, 2011

Modifying dates

strtotime() is really powerful and allows you to modify/transform dates easily with its relative expressions too:

Procedural

$dateString = '2011-05-01 09:22:34';
$t = strtotime($dateString);
$t2 = strtotime('-3 days', $t);
echo date('r', $t2) . PHP_EOL; // returns: Thu, 28 Apr 2011 09:22:34 +0100

DateTime

$dateString = '2011-05-01 09:22:34';
$dt = new DateTime($dateString);
$dt->modify('-3 days');
echo $dt->format('r') . PHP_EOL; // returns: Thu, 28 Apr 2011 09:22:34 +0100

The stuff you can throw at strtotime() is quite surprising and very human readable. Have a look at this example, looking for “Tuesday next week”.

Procedural

$t = strtotime("Tuesday next week");
echo date('r', $t) . PHP_EOL; // returns: Tue, 10 May 2011 00:00:00 +0100

DateTime

$dt = new DateTime("Tuesday next week");
echo $dt->format('r') . PHP_EOL; // returns: Tue, 10 May 2011 00:00:00 +0100

Note that the examples above are evaluated relative to the current time.

The full list of time formats that strtotime() and the DateTime constructor accept is on the PHP Supported Date and Time Formats page.

Hopefully by now I’ve won you over that using the DateTime object is easier/cleaner than using the procedural functions. I’m going to follow up by solving some common problems with the DateTime object and some of its friends.

Converting time between timezones

Displaying a time in multiple time zones

$dt = new DateTime('1 May 2011 14:15', new DateTimeZone('America/Los_Angeles'));
// returns: LA:	Sun, 01 May 2011 14:15:00 -0700
echo "LA:\t". $dt->format('r') . PHP_EOL;

$dt->setTimeZone(new DateTimeZone('Europe/London'));
// returns: London:	Sun, 01 May 2011 22:15:00 +0100
echo "London:\t". $dt->format('r') . PHP_EOL;

Here we use the DateTimeZone class to convert a known time in a known timezone to a different timezone. This could allow users to select their timezone in their profile and then have every time your application displays shown in that timezone. You’d need to be storing all of the dates in the database in the same timezone (e.g. GMT) though.
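
A minimal sketch of that storage pattern (the timezone and column format here are purely illustrative): normalise user input to GMT before it goes anywhere near the database.

// Hypothetical sketch: normalise a user-supplied local time to GMT for storage
$dt = new DateTime('1 May 2011 14:15', new DateTimeZone('America/Los_Angeles'));
$dt->setTimeZone(new DateTimeZone('GMT'));
echo $dt->format('Y-m-d H:i:s') . PHP_EOL; // returns: 2011-05-01 21:15:00, ready to store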

Displaying the difference between two times

This could be handy for stuff like showing a countdown to an event on your website (e.g. an offer expiring). PHP’s built-in DateInterval class makes this really simple.

// returns 25th December of this year
$offerEnds = new DateTime('25th December');
$now = new DateTime();

$diff = $offerEnds->diff($now);

// using the public properties
echo "It's only {$diff->days} days until Christmas!" . PHP_EOL;

// using the format string
echo $diff->format("%m months, %d days, %h hours and %i minutes till Christmas") . PHP_EOL;

The first echo uses the public properties of the DateInterval class and the second uses its format strings and format() method.

Simple handling of date/time from databases

Converting DateTime objects into MySQL date format

class MySQLDateTime extends DateTime
{
    public function __toString()
    {
        return $this->format('Y-m-d H:i:s');
    }
}


$now = new MySQLDateTime();

$sql = "SELECT foo, bar FROM myTable WHERE someDate<'$now';";
echo $sql . PHP_EOL;

Converting dates from MySQL into DateTime objects

while (list($id, $date) = mysql_fetch_row($r)) {
    $dateObj = new DateTime($date);
}

Summary

These were just a few simple examples/tricks with date/time handling which demonstrate just how easy it is to work with dates/times using these classes. Hopefully this means an end to the days of working in UNIX timestamps by subtracting seconds, crudely manipulating time differences, etc.

If you have any other specific, common use cases that you’d like included in the article just add a comment to the post and I’ll try to add them.

How to Avoid Character Encoding Problems in PHP

Character sets can be confusing at the best of times. This post aims to explain the potential problems and suggest solutions.

Although this is applied to PHP and a typical LAMP stack you can apply the same principles to any multi-tier stack.

If you’re in a hurry you can skim past this first “The boring history” section.

The boring history

Back in 1963 ASCII was published. It was a simple character set conceived in the US and designed as a standard to allow different systems to interact with one another. It includes the letters of the English alphabet, numbers and some common symbols. It’s a 7-bit character set (see any ASCII table).

This works OK for English-speaking countries but doesn’t help with other languages that have different characters, such as accented characters like é. Twenty or so years later the ISO-8859 set of standards was established. By then the byte (8 bits) had become the standard-sized chunk of data to send information around in, so these new character sets had space for another 128 characters. That was enough to create different sets for different languages/regions but not enough to put everything into a single character set.

ISO-8859-1 (also known as “latin1” or “Western European”) is probably the most commonly used, and 15 other character sets were defined too, including ISO-8859-2 (Central European), ISO-8859-3 (South European), etc. There’s a full list on Wikipedia.

This created a big problem: you need to know which character set you’re using because, although the common ASCII characters are the same across the sets, the same byte value can be £ in one character set and Ł, Ŗ, Ѓ or ฃ in others!

An easier solution would be to have all possible characters in one single character set, and that’s what UTF-8 does. It shares its first 128 characters with ASCII (it’s backwardly compatible) but each character can be anything from one to four bytes in length. That gives it a staggering choice of 1,112,064 different characters and makes life a bunch easier, because you can use UTF-8 with your web application and it’ll work for everyone around the world.

There is another well-known character set called UTF-16, but it’s not backwardly compatible with ASCII and is less widely used.

Conclusion of the boring history section

If you didn’t bother to read all of the section above there’s just one thing to take away from it: Use UTF-8

Where do the problems occur?

You have a potential for problems to occur anywhere that one part of your system talks to another. For a PHP/LAMP setup these components are:

  • Your editor that you’re creating the PHP/HTML files in
  • The web browser people are viewing your site through
  • Your PHP web application running on the web server
  • The MySQL database
  • Anywhere else external you’re reading/writing data from (memcached, APIs, RSS feeds, etc)

To avoid these potential problems we’re going to make sure that every component is configured to use UTF-8 so that no mis-translation goes on anywhere.

Configuring your editor

Ensure that your text editor, IDE or whatever you’re writing the PHP code in saves your files in UTF-8 format. Your FTP, scp or SFTP client doesn’t need any special UTF-8 setting.

Making sure that web browsers know to use UTF-8

To make sure your users’ browsers all know to read/write all data as UTF-8 you can set this in two places.

The content-type <META> tag
Ensure the content-type META header specifies UTF-8 as the character set like this:

<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">

The HTTP response headers
Make sure that the Content-Type HTTP response header also specifies UTF-8 as the character set. In PHP you can set this globally at runtime like this:

ini_set('default_charset', 'utf-8');
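
Alternatively you can send the header explicitly for a single response (this must run before any output is echoed):

header('Content-Type: text/html; charset=utf-8');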

Configuring the MySQL Connection

Now that you know all of the data you’re receiving from users is in UTF-8, we need to configure the client connection between PHP and the MySQL database.

There’s a generic way of doing this, by simply executing the MySQL query:

SET NAMES utf8;

…and depending on which client/driver you’re using there are helper functions to do this more easily instead:

With the built-in mysql functions

mysql_set_charset('utf8', $link);

With MySQLi

$mysqli->set_charset("utf8");

With PDO_MySQL (as you connect)

$pdo = new PDO( 
    'mysql:host=hostname;dbname=defaultDbName', 
    'username', 
    'password', 
    array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8") 
);

The MySQL Database

We’re pretty much there now; you just need to make sure that MySQL knows to store the data in your tables as UTF-8. You can check their encoding by looking at the Collation value in the output of SHOW TABLE STATUS (in phpMyAdmin this is shown in the list of tables).

If your tables are not already in UTF-8 (it’s likely they’re in latin1) then you’ll need to convert them by running the following command for each table (CONVERT TO converts the existing column data as well as the table default):

ALTER TABLE myTable CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
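
If you have a lot of tables, a rough sketch like the following (using the same old-style mysql functions as above; try it on a copy of your data first) will loop over them all:

// Hypothetical sketch: convert every table in the current database to UTF-8
$r = mysql_query('SHOW TABLES', $link);
while (list($table) = mysql_fetch_row($r)) {
    mysql_query("ALTER TABLE `$table` CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci", $link);
}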

One last thing to watch out for

With all of these steps complete, your application should now be free of any character set problems.

There is one thing to watch out for: most of the PHP string functions are not Unicode-aware, so for example if you run strlen() against a multi-byte string it’ll return the number of bytes in the input, not the number of characters. You can work around this by using the Multibyte String PHP extension, though it’s not that common for these byte/character issues to cause problems.
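
A quick illustration (assuming the mbstring extension is installed and this file is saved as UTF-8):

$s = 'héllo';
echo strlen($s) . PHP_EOL;             // 6 - é is two bytes in UTF-8
echo mb_strlen($s, 'UTF-8') . PHP_EOL; // 5 characters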

How to work around Amazon EC2 outages

Today’s Amazon EC2 outages (which at the time of writing are still ongoing) have meant downtime for lots of their customers, including household names like Quora, FourSquare and reddit. The problem is with their Elastic Compute Cloud (EC2) service in one of the availability zones in their US East (N. Virginia) region.

Often problems like this are localised to one availability zone (datacentre) which gives you a number of ways of working round the problem.

Elastic IP Addresses

By using an Elastic IP Address you can bring up a new instance in another availability zone and then bind the Elastic IP to it. There’d likely be some manual intervention from you to do this and you’d need to make sure that you had a decent enough backup on EBS, or a snapshot, to resume from.

Elastic Load Balancing

Using an Elastic Load Balancer you can spread the load between servers in multiple availability zones. This could allow you to have e.g. one web server in each Eastern US zone, so the loss of one zone like today’s should be handled transparently. This would be easy to implement with a simple website, but to create full redundancy of backend data (in an RDBMS, etc) you’d need to set up appropriate data replication there too. In theory this approach should allow a zone failure to be completely transparent to your users.

Low DNS TTLs

If you’re not willing to pay for Elastic IPs or Elastic Load Balancing then you could manually redirect traffic in the event of an outage to a new AWS instance, or to another ISP for that matter. Read more about DNS TTLs here: Using DNS TTL to control migrations.

Disaster Recovery and Backups

You need to decide what level of Disaster Recovery you require. It’s usually a trade-off between the cost of the downtime to your business and the cost of implementing it. You could decide that in the event of a rare outage it’s acceptable to just display a “sorry, we’re having problems” page served from an instance that you only bring up in the event of problems. If your requirement is to bring up a full copy of the site in a new zone, here are some suggestions as to how you could do this.

Amazon Elastic Block Store (EBS) supports snapshotting, which is persisted to Amazon S3 and available across all zones in that region. This would be a great way of keeping backups if you can live with resuming from slightly older snapshotted data. All you need to do is bring up a new instance in one of the fully-functioning zones and attach an EBS volume derived from the snapshot.

If using snapshotted data isn’t acceptable then you’d need to look at implementing your own replication of data. Almost all of the commonly used RDBMS/NoSQL applications support replication and setting up replicas is fairly standard operationally.

Big datasets for full-text search benchmarking

A few times recently I’ve looked for large datasets to experiment/benchmark against and I usually manage to come up blank.

I managed to spend longer than usual on this problem yesterday and came up with some which I’ll share with you.

Project Gutenberg

http://www.gutenberg.org/

This project hosts the content of over 33,000 books. You can download the data as one book per file and there are full instructions for downloading/mirroring here. It seems that they’ve blocked Amazon AWS IP ranges from mirroring content from their site which is a shame.

The Westbury Lab USENET Corpus

http://www.psych.ualberta.ca/~westburylab/downloads/usenetcorpus.download.html

The contents of some USENET groups have been anonymised and cleaned up to form a set containing >28M documents and >25B words. Each week’s data is stored as a single text file with each post simply delimited, which makes parsing a breeze. You can easily load this data into a MySQL database with a command similar to:

LOAD DATA LOCAL INFILE '/path/to/file.txt' INTO TABLE myTable LINES TERMINATED BY '---END.OF.DOCUMENT---';
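
Or, as a rough sketch in PHP (the path is illustrative, and the weekly files are large so you may prefer to stream rather than slurp the whole file):

// Hypothetical sketch: split one week's file into individual posts
$raw   = file_get_contents('/path/to/file.txt');
$posts = explode('---END.OF.DOCUMENT---', $raw);
echo count($posts) . ' posts loaded' . PHP_EOL;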

It’s also available as part of Amazon’s AWS Public Datasets offering as EBS snapshot snap-c1d156aa in the US West AWS region. Using AWS is a really quick way of getting hold of this dataset without the need to wait for any downloading to complete.

I found this a really nice dataset; with each document at around 5kB it seemed a sensible size for benchmarking email body text, blog/publishing posts, etc.

Wikipedia Text

http://dumps.wikimedia.org/

Wikipedia provide huge database dumps. It seems that there’s an AWS snapshot snap-8041f2e9 which contains this data too, but it’s a couple of years old. There’s also a “WEX” extract (snap-1781757e on AWS) created by the Freebase team, which is provided as XML markup ready to be imported into a db table, one row per article.

In doing this research I came across a couple of very interesting projects that extract/compile metadata from various sources including Wikipedia: Freebase and DBpedia. I hope to play with some of their datasets and write a post on that in the future.

MySQL Server’s built-in profiling support

MySQL’s SHOW PROFILES command and its profiling support are something I can’t believe I hadn’t spotted before today.

It allows you to enable profiling for a session and then record performance information about the queries executed. It shows details of the different stages in the query execution (as usually displayed in the thread state output of SHOW PROCESSLIST) and how long each of these stages took.

I’ll demonstrate using an example. First, within our session, we need to enable profiling. You should only do this in sessions that you want to profile as there’s some overhead in performing/recording the profiling information:

mysql> SET profiling=1;
Query OK, 0 rows affected (0.00 sec)

Now let’s run a couple of regular SELECT queries

mysql> SELECT COUNT(*) FROM myTable WHERE extra LIKE '%zkddj%';
+----------+
| COUNT(*) |
+----------+
|        0 | 
+----------+
1 row in set (0.32 sec)

mysql> SELECT COUNT(id) FROM myTable;
+-----------+
| COUNT(id) |
+-----------+
|    513635 | 
+-----------+
1 row in set (0.00 sec)

Followed up with some stuff that we know is going to execute a bit slower:

mysql> CREATE TEMPORARY TABLE foo LIKE myTable;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO foo SELECT * FROM myTable;
Query OK, 513635 rows affected (33.53 sec)
Records: 513635  Duplicates: 0  Warnings: 0

mysql> DROP TEMPORARY TABLE foo;
Query OK, 0 rows affected (0.06 sec)

Now we’ve run the queries, let’s look at their summary with SHOW PROFILES:

mysql> SHOW PROFILES;
+----------+-------------+-------------------------------------------------------------------+
| Query_ID | Duration    | Query                                                             |
+----------+-------------+-------------------------------------------------------------------+
|        1 |  0.33174700 | SELECT COUNT(*) FROM myTable WHERE extra LIKE '%zkddj%'           | 
|        2 |  0.00036600 | SELECT COUNT(id) FROM myTable                                     | 
|        3 |  0.00087700 | CREATE TEMPORARY TABLE foo LIKE myTable                           | 
|        4 | 33.52952000 | INSERT INTO foo SELECT * FROM myTable                             | 
|        5 |  0.06431200 | DROP TEMPORARY TABLE foo                                          | 
+----------+-------------+-------------------------------------------------------------------+
5 rows in set (0.00 sec)

It’s not like any of those numbers are a surprise, as we saw them from the client, but it’s a handy record of the execution times and could easily be queried within an application just before the connection to the db is closed, e.g. at the end of the web request.
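
As a rough sketch of what that could look like (using the old mysql_* functions; the logging call is just an example):

// Hypothetical sketch: record the session's query profile at the end of a request
mysql_query('SET profiling=1', $link);

// ... the application's queries run here ...

$r = mysql_query('SHOW PROFILES', $link);
while ($row = mysql_fetch_assoc($r)) {
    error_log("query {$row['Query_ID']} took {$row['Duration']}s: {$row['Query']}");
}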

We can also dig deeper into each of the commands, let’s look at the first query we ran:

mysql> SHOW PROFILE FOR QUERY 1;
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000033 | 
| checking query cache for query | 0.000073 | 
| Opening tables                 | 0.000013 | 
| System lock                    | 0.000007 | 
| Table lock                     | 0.000035 | 
| init                           | 0.000032 | 
| optimizing                     | 0.000014 | 
| statistics                     | 0.000016 | 
| preparing                      | 0.000014 | 
| executing                      | 0.000009 | 
| Sending data                   | 0.331296 | 
| end                            | 0.000016 | 
| end                            | 0.000003 | 
| query end                      | 0.000005 | 
| storing result in query cache  | 0.000105 | 
| freeing items                  | 0.000012 | 
| closing tables                 | 0.000007 | 
| logging slow query             | 0.000003 | 
| logging slow query             | 0.000048 | 
| cleaning up                    | 0.000006 | 
+--------------------------------+----------+
20 rows in set (0.00 sec)

Looks like almost all of the time was spent executing the query; definitely one worth investigating further with EXPLAIN.

Now let’s look at the slow INSERT sub-select we ran to see what took the time. I’ve enabled CPU profiling here too.

mysql> SHOW PROFILE CPU FOR QUERY 4;
+----------------------+-----------+-----------+------------+
| Status               | Duration  | CPU_user  | CPU_system |
+----------------------+-----------+-----------+------------+
| starting             |  0.000069 |  0.000000 |   0.000000 | 
| checking permissions |  0.000010 |  0.000000 |   0.000000 | 
| Opening tables       |  0.000217 |  0.000000 |   0.000000 | 
| System lock          |  0.000006 |  0.000000 |   0.000000 | 
| Table lock           |  0.000014 |  0.000000 |   0.000000 | 
| init                 |  0.000041 |  0.000000 |   0.000000 | 
| optimizing           |  0.000007 |  0.000000 |   0.000000 | 
| statistics           |  0.000014 |  0.000000 |   0.000000 | 
| preparing            |  0.000013 |  0.000000 |   0.000000 | 
| executing            |  0.000006 |  0.000000 |   0.000000 | 
| Sending data         |  4.326303 |  3.544221 |   0.324020 | 
| Creating index       |  0.000029 |  0.000000 |   0.000000 | 
| Repair by sorting    | 29.202254 | 17.133071 |  11.616726 | 
| Saving state         |  0.000040 |  0.000000 |   0.000000 | 
| Creating index       |  0.000007 |  0.000000 |   0.000000 | 
| Sending data         |  0.000389 |  0.000000 |   0.000000 | 
| end                  |  0.000009 |  0.000000 |   0.000000 | 
| end                  |  0.000012 |  0.000000 |   0.000000 | 
| query end            |  0.000006 |  0.000000 |   0.000000 | 
| freeing items        |  0.000015 |  0.000000 |   0.000000 | 
| closing tables       |  0.000007 |  0.000000 |   0.000000 | 
| logging slow query   |  0.000005 |  0.000000 |   0.000000 | 
| logging slow query   |  0.000040 |  0.000000 |   0.000000 | 
| cleaning up          |  0.000007 |  0.000000 |   0.000000 | 
+----------------------+-----------+-----------+------------+
24 rows in set (0.00 sec)

It seems that building the indexes for the new table was what took the time. The General Thread States page of the MySQL documentation is a useful reference. Interestingly we can see the “logging slow query” state here too, something that sails by too quickly to ever catch in SHOW PROCESSLIST output.

This profiling support doesn’t fulfil the same role as MySQL’s EXPLAIN command and is only useful in some places, but if you were to look at implementing profiling or instrumentation for your app this could be really handy.

You can find the full documentation for MySQL’s profiling support at SHOW PROFILES Syntax. It appears to be supported from at least MySQL 5.0, and it’s worth noting that it’s only available in the MySQL Community (non-Enterprise) builds.