Monday, December 26, 2011

Latest SQL Anywhere EBFs - Santa Claus Edition

The three asterisks "***" show what's new since the previous list.

  • Only EBFs for the latest fully-supported versions of SQL Anywhere are shown here: 10.0.1, 11.0.1 and 12.0.1.

  • Note: Some earlier versions are also fully supported but they are not shown here; e.g., the 12.0.0.2758 EBF for Windows was just released. For more information see SQL Anywhere Supported Platforms and Engineering Support Status.

  • Also: Just because an older version isn't "fully supported" doesn't mean you can't download files (or ask questions, or get help), it just means there won't be any more new EBFs released. For a complete list of everything available for download see Downloads - EBFs / Maintenance.

Current builds for the most active platforms...

HP-UX Itanium    12.0.1.3523 EBF       15 Dec 2011
                 11.0.1.2724 EBF       12 Dec 2011
                 10.0.1.3777 EBF       14 Oct 2008

IBM AIX          12.0.1.3523 EBF       15 Dec 2011
                 11.0.1.2724 EBF       12 Dec 2011

Linux x86/x64    12.0.1.3519 EBF ***   20 Dec 2011 ***
                 11.0.1.2724 EBF       14 Dec 2011
                 10.0.1.4239 EBF       17 Jun 2011

Mac OS           12.0.1.3352 EBF       26 May 2011
                 11.0.1.2449 EBF       29 Jun 2010
                 10.0.1.4042 EBF       01 Oct 2010

Solaris SPARC    12.0.1.3523 EBF       15 Dec 2011
                 11.0.1.2724 EBF       12 Dec 2011
                 10.0.1.4239 EBF       17 Jun 2011

Solaris x64      12.0.1.3523 EBF       15 Dec 2011
                 11.0.1.2724 EBF       12 Dec 2011
                 10.0.1.3870 EBF       01 Apr 2009

Windows x86/x64  12.0.1.3519 EBF ***   20 Dec 2011 ***
                 11.0.1.2724 EBF ***   20 Dec 2011 ***
[Download EBFs here] [Register for the Fuji Beta here]

...and for the other platforms:

HP-UX PA-RISC    10.0.1.3778 EBF                 16 Oct 2008

Linux Itanium    10.0.1.3777 EBF                 14 Oct 2008

NetWare          10.0.1.4036 EBF                 10 Mar 2010

Windows CE       10.0.1.4036 EBF                 09 Mar 2010

Windows Itanium  10.0.1 GA Upgrade from 10.0.0   26 Apr 2007
[Download EBFs here]


Friday, December 23, 2011

Product Suggestion: User-Defined ERRORMSG()

Exception handling in SQL Anywhere procedures, triggers and events is quite powerful, of that there is no question.

When it comes to user-defined exceptions, however...

...not so much


Here's a demonstration:
  • Procedure p1 calls procedure p2.

  • Procedure p1 is called by an application (in this case ISQL).

  • When p2 executes it uses the SIGNAL statement to raise a user-defined exception.

  • That exception is captured by the EXCEPTION block in p2.

  • The exception block in p2 then
    • displays a message on the database console log and
    • uses the RESIGNAL statement to pass the exception up to p1.

  • When the exception reaches p1 it is captured by the EXCEPTION block in p1.

  • The exception block in p1 then
    • displays another message on the database console log and
    • once again uses the RESIGNAL statement to pass the exception up to the application.

  • When the exception reaches the application (ISQL) a dialog box is displayed.

Here's the code:
CREATE PROCEDURE p1()
BEGIN
   DECLARE @sqlcode   INTEGER;
   DECLARE @sqlstate  VARCHAR ( 5 );
   DECLARE @errormsg  VARCHAR ( 32767 );

   CALL p2();

EXCEPTION

   WHEN OTHERS THEN

      SELECT SQLCODE, SQLSTATE, ERRORMSG() 
        INTO @sqlcode, @sqlstate, @errormsg;

      MESSAGE STRING ( 
         'EXCEPTION detected in p1() at ', 
         CURRENT TIMESTAMP, 
         ': SQLCODE = ', @sqlcode,  
         ', SQLSTATE = ', @sqlstate,  
         ', ERRORMSG() = ', @errormsg ) 
         TO CONSOLE;

      RESIGNAL;

END;

CREATE PROCEDURE p2()
BEGIN
   DECLARE @sqlcode   INTEGER;
   DECLARE @sqlstate  VARCHAR ( 5 );
   DECLARE @errormsg  VARCHAR ( 32767 );

   BEGIN
      DECLARE @exception EXCEPTION FOR SQLSTATE '99001';
      SIGNAL @exception;
   END;

EXCEPTION
   WHEN OTHERS THEN

      SELECT SQLCODE, SQLSTATE, ERRORMSG() 
        INTO @sqlcode, @sqlstate, @errormsg;

      MESSAGE STRING ( 
         'EXCEPTION detected in p2() at ', 
         CURRENT TIMESTAMP, 
         ': SQLCODE = ', @sqlcode,  
         ', SQLSTATE = ', @sqlstate,  
         ', ERRORMSG() = ', @errormsg ) 
         TO CONSOLE;

      RESIGNAL;

END;

CALL p1(); -- call from ISQL

Here's what appears in the database console log, followed by the dialog box:
EXCEPTION detected in p2() at 2011-12-22 15:16:54.381: SQLCODE = -297, SQLSTATE = 99001, ERRORMSG() = User-defined exception signaled
EXCEPTION detected in p1() at 2011-12-22 15:16:54.382: SQLCODE = -297, SQLSTATE = 99001, ERRORMSG() = User-defined exception signaled


That's all well and good, but the problem is this: only the SQLSTATE is user-defined. The SQLCODE and ERRORMSG() values are both fixed and pre-defined, and they mean exactly the same thing:

Hey! We know that already!


In order to be truly useful, user-defined exceptions need some mechanism for providing a user-defined error message as well as a SQLSTATE code.
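For what it's worth, the SQL/PSM standard (and products like DB2 and MySQL) already address this with a SET MESSAGE_TEXT clause on the SIGNAL statement. Here's a sketch of what that could look like; to be clear, this is hypothetical syntax that SQL Anywhere does not currently support:

```sql
BEGIN
   DECLARE @exception EXCEPTION FOR SQLSTATE '99001';

   -- Hypothetical syntax, NOT supported by SQL Anywhere: attach a
   -- user-defined message to the user-defined SQLSTATE, so that
   -- ERRORMSG() returns something more helpful than
   -- "User-defined exception signaled".
   SIGNAL @exception SET MESSAGE_TEXT = 'Inventory count has gone negative';
END;
```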

Even that pathetic, limited Transact-SQL RAISERROR statement has a mechanism for providing a user-defined message as well as an error code.
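For reference, here's what that mechanism looks like. In SQL Anywhere's Transact-SQL dialect the RAISERROR error number must be above the reserved range (my recollection is 17000 and up); both the number and the message below are made up for the example:

```sql
-- Transact-SQL dialect: RAISERROR carries a user-defined error number
-- AND a user-defined message text; both values here are illustrative.
RAISERROR 99999 'Inventory count has gone negative.';
```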


Wednesday, December 21, 2011

Let's play "Gotcha!" - the ACID Edition

Welcome to the ACID Edition of "Gotcha!", the self-testing quiz game for SQL Anywhere developers.

Let's get started right away...

Are your transactions atomic?


Are they consistent? isolated? durable?

Are they reentrant?

Are you sure?


Let's say you have created a SQL Anywhere web service to update your database, and this web service is launched when the user clicks on an INPUT TYPE="BUTTON" on a web page.

Let's also say you've designed the code carefully so all the database updates are performed within a single database transaction, with a single COMMIT after the updates are complete.

Furthermore, let's say you have carefully inspected the code, and you are sure of the following facts:
  • Each time your web service is executed, it either runs on a brand-new database connection or on a pooled connection that has been freshly sanitized (because that's how SQL Anywhere web services work),

  • if the web service finishes normally, by definition the COMMIT will have been executed, which means all the updates performed during the database transaction will have been successfully written to the database, and

  • if the web service terminates abnormally before reaching the COMMIT, SQL Anywhere will roll back all the updates performed during the database transaction (because that's what SQL Anywhere does when a connection terminates without doing a COMMIT).
Here's a simple demonstration that's just one step up from "Hello World":



Here's what the service does when it is invoked from the browser:
  • It deletes all 100 rows from a table called t, then

  • it re-inserts all 100 rows and finally

  • it displays them as ordered pairs inside parentheses.
Here's the code: One table, two procedures and one service; remember, this is a demonstration, so don't expect anything useful:

CREATE TABLE t (
   pkey   INTEGER NOT NULL PRIMARY KEY,
   data   INTEGER NOT NULL );

INSERT t SELECT row_num, row_num FROM sa_rowgenerator ( 1, 100 );
COMMIT;

CREATE PROCEDURE p()
BEGIN

DECLARE @sqlcode   INTEGER;
DECLARE @sqlstate  VARCHAR ( 5 );
DECLARE @errormsg  VARCHAR ( 32767 );

DELETE t;

WAITFOR DELAY '00:00:00.001';

INSERT t SELECT row_num, row_num FROM sa_rowgenerator ( 1, 100 );

WAITFOR DELAY '00:00:00.001';

COMMIT;

EXCEPTION
   WHEN OTHERS THEN
      SELECT SQLCODE, SQLSTATE, ERRORMSG() 
        INTO @sqlcode, @sqlstate, @errormsg;
      MESSAGE STRING ( 
         'DIAG ', @@SPID, ' ', CURRENT TIMESTAMP, ' EXCEPTION in p: SQLCODE = ', @sqlcode,  
         ', SQLSTATE = ', @sqlstate,  
         ', ERRORMSG() = ', @errormsg ) 
         TO CONSOLE;
      RESIGNAL;

END;

CREATE SERVICE s 
   TYPE 'RAW' AUTHORIZATION OFF USER DBA
   AS CALL s();

CREATE PROCEDURE s()
RESULT ( html_string LONG VARCHAR )
BEGIN
DECLARE @sqlcode   INTEGER;
DECLARE @sqlstate  VARCHAR ( 5 );
DECLARE @errormsg  VARCHAR ( 32767 );

MESSAGE STRING ( 'DIAG ', @@SPID, ' ', CURRENT TIMESTAMP, ' s started...' ) TO CONSOLE;

CALL p();

CALL dbo.sa_set_http_header( 'Content-Type', 'text/html' );

WAITFOR DELAY '00:00:00.001';

SELECT STRING (
   '...',   -- HTML markup literals lost in transcription
   LIST ( STRING ( '( ', t.pkey, ', ', t.data, ' )' ), ', ' ORDER BY t.pkey ),
   '...' )  -- more markup lost in transcription
  FROM t;

EXCEPTION
   WHEN OTHERS THEN

      SELECT SQLCODE, SQLSTATE, ERRORMSG() 
        INTO @sqlcode, @sqlstate, @errormsg;

      MESSAGE STRING ( 
         'DIAG ', @@SPID, ' ', CURRENT TIMESTAMP, ' EXCEPTION in s: SQLCODE = ', @sqlcode,  
         ', SQLSTATE = ', @sqlstate,  
         ', ERRORMSG() = ', @errormsg ) 
         TO CONSOLE;

      ROLLBACK;
      RESIGNAL;

END;

As you can see, the code also explicitly captures any EXCEPTION raised in the code:
  • The outer EXCEPTION handler in procedure s displays a diagnostic message on the database console log,

  • including the connection number (@@SPID) and the SQLCODE, SQLSTATE and ERRORMSG() values, and then

  • it does a ROLLBACK and RESIGNAL before the service terminates.
The inner EXCEPTION handler in procedure p does the same thing, except it doesn't do the ROLLBACK.

So far, so good


Every time you click on the "Save" button, it does the same thing and displays the same output:



What happens if you start pounding on the Save button, and the service gets cancelled and restarted before it can finish?

That's not a problem, right? The first execution of the service gets rolled back if it hasn't done the COMMIT yet, and the second service runs... unless it gets cancelled too.

But that's not a problem either, right?



Here's the skill-testing question:


What do you see in the browser window if you really start pounding on the Save button as fast as you can?

Don't rush, take your time...



Your code's been reviewed, it's ACID, reentrant, all those good things?

Right?

Shouldn't you see this no matter what?



Are you sure? Take your time...



Gotcha!


If you keep pounding on the Save button, eventually you will see this in the browser:



Here are all the diagnostic messages from the SQL Anywhere database server console:

DIAG 365 2011-12-19 10:05:00.924 s started...
DIAG 365 2011-12-19 10:05:01.093 EXCEPTION in p: SQLCODE = -299, SQLSTATE = 57014, ERRORMSG() = Statement interrupted by user
DIAG 366 2011-12-19 10:05:01.102 s started...
DIAG 365 2011-12-19 10:05:01.102 EXCEPTION in s: SQLCODE = -299, SQLSTATE = 57014, ERRORMSG() = Statement interrupted by user
DIAG 366 2011-12-19 10:05:01.208 EXCEPTION in p: SQLCODE = -193, SQLSTATE = 23W01, ERRORMSG() = Primary key for table 't' is not unique : Primary key value ('1')
DIAG 367 2011-12-19 10:05:01.253 s started...
DIAG 366 2011-12-19 10:05:01.256 EXCEPTION in s: SQLCODE = -193, SQLSTATE = 23W01, ERRORMSG() = Primary key for table 't' is not unique : Primary key value ('1')
DIAG 367 2011-12-19 10:05:01.419 EXCEPTION in p: SQLCODE = -299, SQLSTATE = 57014, ERRORMSG() = Statement interrupted by user
DIAG 368 2011-12-19 10:05:01.425 s started...
DIAG 367 2011-12-19 10:05:01.429 EXCEPTION in s: SQLCODE = -299, SQLSTATE = 57014, ERRORMSG() = Statement interrupted by user
DIAG 368 2011-12-19 10:05:01.539 EXCEPTION in p: SQLCODE = -193, SQLSTATE = 23W01, ERRORMSG() = Primary key for table 't' is not unique : Primary key value ('1')
DIAG 368 2011-12-19 10:05:01.547 EXCEPTION in s: SQLCODE = -193, SQLSTATE = 23W01, ERRORMSG() = Primary key for table 't' is not unique : Primary key value ('1')
DIAG 369 2011-12-19 10:05:01.589 s started...
DIAG 369 2011-12-19 10:05:01.740 EXCEPTION in p: SQLCODE = -299, SQLSTATE = 57014, ERRORMSG() = Statement interrupted by user
DIAG 370 2011-12-19 10:05:01.746 s started...
DIAG 369 2011-12-19 10:05:01.750 EXCEPTION in s: SQLCODE = -299, SQLSTATE = 57014, ERRORMSG() = Statement interrupted by user
DIAG 370 2011-12-19 10:05:01.864 EXCEPTION in p: SQLCODE = -193, SQLSTATE = 23W01, ERRORMSG() = Primary key for table 't' is not unique : Primary key value ('1')
DIAG 370 2011-12-19 10:05:01.872 EXCEPTION in s: SQLCODE = -193, SQLSTATE = 23W01, ERRORMSG() = Primary key for table 't' is not unique : Primary key value ('1')

Some of the exceptions are expected: SQLCODE = -299 happens when the service is cancelled by the browser. The number after DIAG (365, 366, and so on) is the connection number, so you can see how the executions overlap.

The other exceptions are not expected: the SQLCODE = -193 primary key violations mean the DELETE / INSERT logic doesn't work properly when the Save button is given a vigorous workout.

Here's another interesting observation: the "SQL error" doesn't reach the browser until after the primary key exception has been raised several times. How do we know that? Because once the browser displays "SQL error" there's no Save button any more, and that clearly wasn't the case: the repeated "s started..." messages prove there was a Save button on screen long after the first primary key exception.

Locking, blocking, the atomic nature of transactions and the reentrant nature of SQL Anywhere executable code... none of that helps this application avoid a showstopper exception.

For extra points, explain why it happens and how to fix it.


Monday, December 19, 2011

Latest SQL Anywhere EBFs: 11.0.1.2724 and 12.0.1.3523 (revised)

The three asterisks "***" show what's new since the previous list.

  • Only EBFs for the latest fully-supported versions of SQL Anywhere are shown here: 10.0.1, 11.0.1 and 12.0.1.

  • Note: Some earlier versions are also fully supported but they are not shown here; e.g., the 12.0.0.2758 EBF for Windows was just released. For more information see SQL Anywhere Supported Platforms and Engineering Support Status.

  • Also: Just because an older version isn't "fully supported" doesn't mean you can't download files (or ask questions, or get help), it just means there won't be any more new EBFs released. For a complete list of everything available for download see Downloads - EBFs / Maintenance.

Current builds for the most active platforms...

HP-UX Itanium    12.0.1.3523 EBF ***   15 Dec 2011 ***
                 11.0.1.2724 EBF       12 Dec 2011
                 10.0.1.3777 EBF       14 Oct 2008

IBM AIX          12.0.1.3523 EBF ***   15 Dec 2011 ***
                 11.0.1.2724 EBF       12 Dec 2011

Linux x86/x64    12.0.1.3457 EBF       12 Oct 2011
                 11.0.1.2724 EBF ***   14 Dec 2011 ***
                 10.0.1.4239 EBF       17 Jun 2011

Mac OS           12.0.1.3352 EBF       26 May 2011
                 11.0.1.2449 EBF       29 Jun 2010
                 10.0.1.4042 EBF       01 Oct 2010

Solaris SPARC    12.0.1.3523 EBF ***   15 Dec 2011 ***
                 11.0.1.2724 EBF       12 Dec 2011
                 10.0.1.4239 EBF       17 Jun 2011

Solaris x64      12.0.1.3523 EBF ***   15 Dec 2011 ***
                 11.0.1.2724 EBF       12 Dec 2011
                 10.0.1.3870 EBF       01 Apr 2009

Windows x86/x64  12.0.1.3505 EBF       24 Nov 2011
                 11.0.1.2713 EBF       24 Nov 2011
                 10.0.1.4295 EBF       18 Nov 2011
[Download EBFs here] [Register for the Fuji Beta here]

...and for the other platforms:

HP-UX PA-RISC    10.0.1.3778 EBF                 16 Oct 2008

Linux Itanium    10.0.1.3777 EBF                 14 Oct 2008

NetWare          10.0.1.4036 EBF                 10 Mar 2010

Windows CE       10.0.1.4036 EBF                 09 Mar 2010

Windows Itanium  10.0.1 GA Upgrade from 10.0.0   26 Apr 2007
[Download EBFs here]


Friday, December 16, 2011

Another tool for the box



Here's another simple question:

Question: How do I select the top 10 rows in each partition of a table?
Will CROSS APPLY work as well as it did before?

Here's the data. The table looks pretty much the same, except that the entry_id is autoincrementing instead of repeating and there's a lot more data; three partitions with 100 rows in each partition:
CREATE TABLE t ( 
   partition_id     INTEGER NOT NULL,
   entry_id         INTEGER NOT NULL DEFAULT AUTOINCREMENT,
   data             VARCHAR ( 10 ) NOT NULL,
   PRIMARY KEY ( entry_id ) );

BEGIN
   DECLARE @loop_counter INTEGER;
   SET @loop_counter = 1;
   WHILE @loop_counter <= 100 LOOP
      INSERT t ( partition_id, data ) VALUES ( 10, 'aaa' );
      INSERT t ( partition_id, data ) VALUES ( 20, 'bbb' );
      INSERT t ( partition_id, data ) VALUES ( 30, 'ccc' );
      SET @loop_counter = @loop_counter + 1;
   END LOOP;
   COMMIT;
END;

SELECT * FROM t ORDER BY entry_id;

partition_id,entry_id,data
10,1,'aaa'
20,2,'bbb'
30,3,'ccc'
10,4,'aaa'
20,5,'bbb'
30,6,'ccc'
10,7,'aaa'
20,8,'bbb'
30,9,'ccc'
...
partition_id,entry_id,data
10,298,'aaa'
20,299,'bbb'
30,300,'ccc'
Before tackling the "top 10" question, let's confirm that the CROSS APPLY answer to the previous question "How do I select the first row in each partition of a table?" works just as well on this table:
SELECT first_t.*
  FROM ( SELECT DISTINCT
                t.partition_id 
           FROM t
          ORDER BY t.partition_id ) AS unique_partition
  CROSS APPLY
       ( SELECT FIRST * 
           FROM t
          WHERE t.partition_id = unique_partition.partition_id
          ORDER BY t.entry_id ) AS first_t 
 ORDER BY first_t.partition_id;
Yes, it works:
partition_id,entry_id,data
10,1,'aaa'
20,2,'bbb'
30,3,'ccc'
Plus, it is clear from the SQL what its purpose is:
  • When the "( SELECT DISTINCT ... )" query is translated into English you get "select each distinct partition_id,"

  • when the "( SELECT FIRST ... )" query is translated into English you get "select the first row by entry_id for each row in the other result set," and then

  • when the combined CROSS APPLY query is translated by combining the other two translations, you get "select the first row by entry_id for each distinct partition_id"
which is pretty much a restatement of the original question: "How do I select the first row in each partition?"

It's not perfect; in a perfect world, every single query problem would be solved by a single SQL SELECT statement, not a combination of two queries. But this is real life... and the original question did refer to two things, "first row" and "each partition", and the CROSS APPLY contains exactly those two queries...

Not like the Old School solution which does funky things with MIN and GROUP BY, neither of which are even indirectly mentioned in the original question,

SELECT t.* 
  FROM t INNER JOIN ( SELECT partition_id     AS partition_id,
                             MIN ( entry_id ) AS min_entry_id
                        FROM t
                       GROUP BY partition_id
                    ) AS first_row
          ON t.partition_id = first_row.partition_id
         AND t.entry_id     = first_row.min_entry_id
 ORDER BY t.partition_id;
or the OLAP solution which introduced the OVER, WINDOW, PARTITION BY and nested ORDER BY clauses plus the RANK function and a WHERE clause to deal with the RANK result.
SELECT partition_id,
       entry_id,
       data 
  FROM ( select partition_id, 
                entry_id, 
                data,
                rank ( ) OVER partition_id_window AS entry_rank
           FROM t
         WINDOW partition_id_window AS (
                   PARTITION BY partition_id
                   ORDER BY entry_id
                ) ) DT
 WHERE entry_rank = 1
 ORDER BY partition_id;
Go ahead, translate either of those queries into English, see what you get!

Back on topic...

Question: How do I select the top 10 rows in each partition of a table?
It turns out the CROSS APPLY solution scales just as well as the OLAP solution. With the OLAP solution, the "WHERE entry_rank = 1" predicate was simply changed to "entry_rank <= 10", and with the CROSS APPLY the "FIRST" simply needs to be changed to "TOP 10".
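For the record, here's the OLAP solution after that one change to the predicate (with entry_id added to the final ORDER BY so the ten rows in each partition come back in order):

```sql
SELECT partition_id,
       entry_id,
       data 
  FROM ( SELECT partition_id, 
                entry_id, 
                data,
                RANK ( ) OVER partition_id_window AS entry_rank
           FROM t
         WINDOW partition_id_window AS (
                   PARTITION BY partition_id
                   ORDER BY entry_id
                ) ) DT
 WHERE entry_rank <= 10
 ORDER BY partition_id,
          entry_id;
```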

Once again, it works, and once again, the SQL matches the question:

SELECT first_t.*
  FROM ( SELECT DISTINCT
                t.partition_id 
           FROM t
          ORDER BY t.partition_id ) AS unique_partition
  CROSS APPLY
       ( SELECT TOP 10 * 
           FROM t
          WHERE t.partition_id = unique_partition.partition_id
          ORDER BY t.entry_id ) AS first_t 
 ORDER BY first_t.partition_id;

partition_id,entry_id,data
10,1,'aaa'
10,4,'aaa'
10,7,'aaa'
10,10,'aaa'
10,13,'aaa'
10,16,'aaa'
10,19,'aaa'
10,22,'aaa'
10,25,'aaa'
10,28,'aaa'
20,2,'bbb'
20,5,'bbb'
20,8,'bbb'
20,11,'bbb'
20,14,'bbb'
20,17,'bbb'
20,20,'bbb'
20,23,'bbb'
20,26,'bbb'
20,29,'bbb'
30,3,'ccc'
30,6,'ccc'
30,9,'ccc'
30,12,'ccc'
30,15,'ccc'
30,18,'ccc'
30,21,'ccc'
30,24,'ccc'
30,27,'ccc'
30,30,'ccc'
Sometimes "scalability" means "Will it handle millions of rows?", other times the definition is different: "Will it handle more complex problems as well as it handled simple ones?"

For the second definition, at least so far, the answer for CROSS APPLY is "yes".


Wednesday, December 14, 2011

If you can't JOIN 'em, ...

The question's simple, why should the answer be complicated?


Question: How do I select the first row in each partition of a table?
Here's the data:
CREATE TABLE t ( 
   partition_id     INTEGER NOT NULL,
   entry_id         INTEGER NOT NULL,
   data             VARCHAR ( 10 ) NOT NULL,
   PRIMARY KEY ( partition_id, entry_id ) );

INSERT t VALUES ( 10, 1, 'aaa' );
INSERT t VALUES ( 10, 2, 'bbb' );
INSERT t VALUES ( 10, 3, 'ccc' );

INSERT t VALUES ( 20, 1, 'ppp' );
INSERT t VALUES ( 20, 2, 'qqq' );
INSERT t VALUES ( 20, 3, 'rrr' );

INSERT t VALUES ( 30, 1, 'xxx' );
INSERT t VALUES ( 30, 2, 'yyy' );
INSERT t VALUES ( 30, 3, 'zzz' );
Here's what the result should look like:
SELECT [the first row in each partition of t];

partition_id,entry_id,data
10,1,'aaa'
20,1,'ppp'
30,1,'xxx'
If you didn't already know, could you tell what the question was by reading the answers in these earlier articles?

First, there was this Old School solution using MIN and GROUP BY:
SELECT t.* 
  FROM t INNER JOIN ( SELECT partition_id     AS partition_id,
                             MIN ( entry_id ) AS min_entry_id
                        FROM t
                       GROUP BY partition_id
                    ) AS first_row
          ON t.partition_id = first_row.partition_id
         AND t.entry_id     = first_row.min_entry_id
 ORDER BY t.partition_id;
Second, there was an OLAP solution using RANK and PARTITION BY:
SELECT partition_id,
       entry_id,
       data 
  FROM ( select partition_id, 
                entry_id, 
                data,
                rank ( ) OVER partition_id_window AS entry_rank
           FROM t
         WINDOW partition_id_window AS (
                   PARTITION BY partition_id
                   ORDER BY entry_id
                ) ) DT
 WHERE entry_rank = 1
 ORDER BY partition_id;
Both get points for being obscure; neither looks much like an answer to the question.

Let's Try Again


Question: How do I select the first row in each partition of a table?
Let's start by breaking the question in two parts, and solving each part separately:

Question 1: How do I select all the unique values for partition_id?

Answer: Use DISTINCT when selecting all the values of partition_id.

There's no question what this code is doing: it's selecting all the unique values of partition_id, just like the question asked:
SELECT DISTINCT
       t.partition_id 
  FROM t
 ORDER BY t.partition_id;

partition_id
10
20
30
Question 2: How do I select the first row in a single partition?

Answer: Use a WHERE clause to pick the partition, then FIRST and ORDER BY to select the first row.

Once again, it's clear from reading the code what the question was:
SELECT FIRST * 
  FROM t
 WHERE t.partition_id = 10
 ORDER BY t.entry_id;

partition_id,entry_id,data
10,1,'aaa'

What now?


Now comes the hard part: Combining those two result sets to get the answer to the original question.

Let's try this:
SELECT first_t.*
  FROM ( SELECT DISTINCT
                t.partition_id 
           FROM t
          ORDER BY t.partition_id ) AS unique_partition
  INNER JOIN
       ( SELECT FIRST * 
           FROM t
          WHERE t.partition_id = unique_partition.partition_id
          ORDER BY t.entry_id ) AS first_t 
 ORDER BY first_t.partition_id;
You knew that wouldn't work, didn't you? INNER JOIN needs an ON clause:
Could not execute statement.
There is no way to join 'unique_partition' to 'first_t'
SQLCODE=-146, ODBC 3 State="42000"
Line 1, column 1
Let's try the old-fashioned "comma join operator", it doesn't need an ON clause:
SELECT first_t.*
  FROM ( SELECT DISTINCT
                t.partition_id 
           FROM t
          ORDER BY t.partition_id ) AS unique_partition,
       ( SELECT FIRST * 
           FROM t
          WHERE t.partition_id = unique_partition.partition_id
          ORDER BY t.entry_id ) AS first_t 
 ORDER BY first_t.partition_id;
Different problem this time; you can't refer to a column in the left-hand side of a join in the right-hand side:
Could not execute statement.
Illegal reference to correlation name 'unique_partition'
SQLCODE=-824, ODBC 3 State="42S02"
Line 1, column 1
One last try; let's move the WHERE clause to an ON clause so INNER JOIN will work:
SELECT first_t.*
  FROM ( SELECT DISTINCT
                t.partition_id 
           FROM t
          ORDER BY t.partition_id ) AS unique_partition
  INNER JOIN
       ( SELECT FIRST * 
           FROM t
          ORDER BY t.entry_id ) AS first_t 
  ON first_t.partition_id = unique_partition.partition_id 
 ORDER BY first_t.partition_id;
Yeah, it "works" all right, but it only returns one row instead of three; the predicate really needs to be inside the inner SELECT, not the outer one:
partition_id,entry_id,data
10,1,'aaa'

You've seen that before, haven't you?


Let's be honest, you've seen those error messages and wrong results before, haven't you? Like when you were trying to solve a problem that should have been easy (like this one) but you had to fiddle around with complex subqueries or temporary tables or even (gasp!) cursor fetch loops, and you ended up with a solution that looked like it was designed for a whole different problem.

It turns out there is a join operator that makes both of those error messages go away:
  • it doesn't need an ON clause, and

  • it does allow references from the right-hand side back to the left-hand side.

It's called CROSS APPLY...


...and it looks like this:
SELECT first_t.*
  FROM ( SELECT DISTINCT
                t.partition_id 
           FROM t
          ORDER BY t.partition_id ) AS unique_partition
  CROSS APPLY
       ( SELECT FIRST * 
           FROM t
          WHERE t.partition_id = unique_partition.partition_id
          ORDER BY t.entry_id ) AS first_t 
 ORDER BY first_t.partition_id;

partition_id,entry_id,data
10,1,'aaa'
20,2,'bbb'
30,3,'ccc'
CROSS APPLY is like the comma-join and CROSS JOIN operators in that it doesn't need (and cannot have) an ON clause. It is different from other join operators in that predicates on the right-hand side can refer to columns on the left-hand side.

So, like comma-join and CROSS JOIN, the CROSS APPLY joins every single row on the left-hand side with every single row on the right-hand side...

but...


...exactly what constitutes "every single row on the right-hand side" depends on each row on the left-hand side.

In this case, for each row on the left-hand side, the set consisting of "every single row on the right-hand side" is boiled down to one single row, the first row with matching partition_id.

So, CROSS APPLY is like INNER JOIN with special powers.
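One way to see the "INNER JOIN" half of that claim: when the right-hand query returns zero rows for some left-hand row, that left-hand row disappears from the result, just as it would with an inner join. Here's a contrived variation on the query above; the extra predicate exists only to empty out the right-hand side for partition 20:

```sql
SELECT first_t.*
  FROM ( SELECT DISTINCT
                t.partition_id 
           FROM t ) AS unique_partition
  CROSS APPLY
       ( SELECT FIRST * 
           FROM t
          WHERE t.partition_id = unique_partition.partition_id
            AND t.partition_id <> 20   -- contrived: empties the right side for partition 20
          ORDER BY t.entry_id ) AS first_t 
 ORDER BY first_t.partition_id;

-- partition_id,entry_id,data
-- 10,1,'aaa'
-- 30,1,'xxx'
```

Partition 20 vanishes from the result, exactly as an unmatched row vanishes from an INNER JOIN.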





Tuesday, December 13, 2011

Latest SQL Anywhere EBFs: 11.0.1.2724

The three asterisks "***" show what's new since the previous list. Only EBFs for the three fully-supported versions of SQL Anywhere are shown here: 10.0.1, 11.0.1 and 12.0.1.

Current builds for the most active platforms...

HP-UX Itanium    12.0.1.3469 EBF       14 Oct 2011
                 11.0.1.2724 EBF ***   12 Dec 2011 ***
                 10.0.1.3777 EBF       14 Oct 2008

IBM AIX          12.0.1.3469 EBF       20 Oct 2011
                 11.0.1.2724 EBF ***   12 Dec 2011 ***

Linux x86/x64    12.0.1.3457 EBF       12 Oct 2011
                 11.0.1.2700 EBF       27 Oct 2011
                 10.0.1.4239 EBF       17 Jun 2011

Mac OS           12.0.1.3352 EBF       26 May 2011
                 11.0.1.2449 EBF       29 Jun 2010
                 10.0.1.4042 EBF       01 Oct 2010

Solaris SPARC    12.0.1.3469 EBF       20 Oct 2011
                 11.0.1.2724 EBF ***   12 Dec 2011 ***
                 10.0.1.4239 EBF       17 Jun 2011

Solaris x64      11.0.1.2724 EBF ***   12 Dec 2011 ***
                 10.0.1.3870 EBF       01 Apr 2009

Windows x86/x64  12.0.1.3505 EBF       24 Nov 2011
                 11.0.1.2713 EBF       24 Nov 2011
                 10.0.1.4295 EBF       18 Nov 2011
[Download EBFs here] [Register for the Fuji Beta here]

...and for the other platforms:

HP-UX PA-RISC    10.0.1.3778 EBF                 16 Oct 2008

Linux Itanium    10.0.1.3777 EBF                 14 Oct 2008

NetWare          10.0.1.4036 EBF                 10 Mar 2010

Windows CE       10.0.1.4036 EBF                 09 Mar 2010

Windows Itanium  10.0.1 GA Upgrade from 10.0.0   26 Apr 2007
[Download EBFs here]


Friday, December 9, 2011

Product Suggestion: Minimum Row Size

Question: Have you ever set PCTFREE for a table?

Likely Answer Number 1: "Never."

Likely Answer Number 2: "Don't think so, can't remember."

Likely Answer Number 3: "I set it to 100 once, can't remember why."



As SQL Anywhere performance tuning knobs go, the PCTFREE setting is one of the most obscure and least used. It's been around forever, but nobody ever uses it, probably because nobody can figure out how to use it properly.

What the %#!@ is PCTFREE?


Here's a code example that sets PCTFREE to 50% for table t:
CREATE TABLE t (
   pkey    INTEGER         NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
   data    VARCHAR ( 200 ) NOT NULL DEFAULT '',
   PCTFREE 50 );
Here's what the Help says:

PCTFREE clause - Specifies the percentage of free space you want to reserve for each table page. The free space is used if rows increase in size when the data is updated. If there is no free space in a table page, every increase in the size of a row on that page requires the row to be split across multiple table pages, causing row fragmentation and possible performance degradation.

The value percent-free-space is an integer between 0 and 100. The former value specifies that no free space is to be left on each page—each page is to be fully packed. A high value causes each row to be inserted into a page by itself. If PCTFREE is not set, or is later dropped, the default PCTFREE value is applied according to the database page size (200 bytes for a 4 KB (and up) page size). The value for PCTFREE is stored in the ISYSTAB system table.

Just what should PCTFREE be set to? Is 50% good? Is 40% better, or is 60%?

It doesn't help that PCTFREE is a percentage but the default is expressed in bytes... I think you can convert, but who cares? Is 200 bytes good? Or 100? Or 400? ...the answer is the same, "I don't know."
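At least the conversion itself is easy; on a 4 KB page the 200-byte default works out to roughly 5 percent:

```sql
-- Back-of-envelope conversion: the default 200-byte reservation,
-- expressed in PCTFREE's percentage units, for a 4 KB (4096-byte) page.
SELECT CAST ( 200.0 / 4096.0 * 100.0 AS NUMERIC ( 4, 1 ) ) AS approx_default_pctfree;
-- approx_default_pctfree: 4.9
```

Not that knowing "about 5%" answers the real question of what PCTFREE should be.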

The problem is this: When designing a new cargo ship / warehouse / database, folks generally know how many tonnes / cubic feet / bytes it should store; i.e., the capacity, not the free space.

Taken down to the database page level, folks know (or can figure out) how big the average row is when it's inserted, and how big it will be after it's been updated. What they don't know is how to convert that to an efficient amount of free space to leave after inserting.

Why should we care?


It is a fairly common practice to insert empty rows and then update them later when actual data become available... easy is good.

Once inserted, a row can never be moved to a different page, so when a page becomes full of empty rows the first UPDATE that expands a column will split that row... splits are bad, hence PCTFREE.
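In code, the practice looks something like this (using a table like the one above; the REPEAT call just stands in for real data arriving later):

```sql
-- Insert an empty placeholder row now...
INSERT t VALUES ( DEFAULT, DEFAULT );
COMMIT;

-- ...then expand it later, when the actual data shows up. If the page
-- holding the row is already full, this UPDATE forces a row split.
UPDATE t 
   SET data = REPEAT ( 'x', 200 )
 WHERE pkey = 1;   -- assumes this was the first row inserted
COMMIT;
```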

Here's a thought...


...instead of (or in addition to) PCTFREE, why not give users a knob called MINROWSIZE? They would calculate it as the expected average row size after the row has been updated, and SQL Anywhere would use it as the minimum amount of table data space to reserve for each row when it is inserted.

In this case, MINROWSIZE 25 means reserve 4 bytes for the integer and about 20 bytes for the string even though the INSERT is providing an empty row:
CREATE TABLE t (
   pkey    INTEGER         NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
   data    VARCHAR ( 200 ) NOT NULL DEFAULT '',
   MINROWSIZE 25 );

INSERT t VALUES ( DEFAULT, DEFAULT );
Currently, when a new row is inserted, the PCTFREE amount is compared to the current free space in the page to decide if the new row will fit. The size of each existing row, and whether or not it is expected to grow, is not taken into account; just the total space used and the PCTFREE amount. So, depending on the current state of affairs inside the page, the ideal PCTFREE could be larger or smaller than the actual value.

With MINROWSIZE, SQL Anywhere would take into account existing rows that have still not used up their MINROWSIZE allocation, as well as the MINROWSIZE allocation for the new row, when making the decision.
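To make the difference concrete, here's a toy simulation (Python; this is NOT SQL Anywhere's actual algorithm, and MINROWSIZE is the hypothetical knob proposed above) of the two page-fit decisions:

```python
PAGE = 4096  # bytes per table page

def fits_pctfree(used_bytes, new_row, pctfree):
    # PCTFREE rule: the new row fits if the page still keeps
    # pctfree% of its space free afterwards.
    return used_bytes + new_row <= PAGE * (100 - pctfree) // 100

def fits_minrowsize(row_sizes, new_row, minrowsize):
    # MINROWSIZE rule (hypothetical): every row is charged at least
    # minrowsize bytes, including rows that haven't grown yet.
    reserved = sum(max(r, minrowsize) for r in row_sizes)
    reserved += max(new_row, minrowsize)
    return reserved <= PAGE

rows = [8] * 200   # 200 freshly inserted "empty" 8-byte rows
print(fits_pctfree(sum(rows), 8, 50))    # True: plenty of raw free space
print(fits_minrowsize(rows, 8, 25))      # False: rows can't all grow to 25
```

The PCTFREE rule happily accepts row number 201 because only 1608 of the 2048 permitted bytes are used, even though the page can't possibly hold 201 rows at 25 bytes each once they all grow; the MINROWSIZE rule says no.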

With some effort, the user could provide a MINROWSIZE value that is meaningful and accurate... as opposed to picking the number for Lotto PCTFREE.


Wednesday, December 7, 2011

A Walking Tour of the Foxhound FAQ

Is there anything duller than documentation?

Let's see... how about reading Frequently Asked Questions?

Even worse?


How about a walking tour through some new FAQs? Not the big shiny What's New in Foxhound 1.2 entries, but the more obscure ones, like this one:

What environment variables are used by Foxhound?

FOXHOUND1 is the all-important "Foxhound drive-and-path" used by all the Foxhound shortcuts... it's Foxhound's version of %SQLANY12%, if you lose it you'll probably want to set it up again.

FOXHOUND1BIN is the esoteric variable you can set to force Foxhound to start up using the 32-bit version of SQL Anywhere if you have both versions installed... "esoteric" because you almost never need it; Foxhound runs just fine using the 64-bit version. But, if you need it, here's how to use FOXHOUND1BIN (no, you don't need it if only the 32-bit version of SQL Anywhere is installed; Foxhound will use that one automatically).

FOXHOUND1UPGRADE is the variable that controls how much data the "post-setup" process will copy from your old Foxhound database when you are upgrading to a new version. The environment variable sets the initial value, but it defaults to "ALL" and is easy to change during the installation process, so it is rarely used... the environment variable is rarely used, that is; the underlying value is very important.

Ooops...


...wait a minute, that wasn't a FAQ, that was three paragraphs about a FAQ. To experience the full measure of dullness you have to follow the links, this article is just a travelogue.



Speaking of FOXHOUND1UPGRADE, this FAQ tells the full story: How do the different FOXHOUND1UPGRADE values work?

There's a lot of logic behind upgrading... it's a big deal if you have a big database, especially if you don't need all the data any more. You can let the purge process slowly delete it over time, or you can let the upgrade process quickly delete-it-by-not-copying it, so "upgrading" to the same version of Foxhound suddenly makes sense: How do I shrink the size of the Foxhound database?



An "alias" feature has been added to the process that compiles the FAQ so different questions can point to the same answer. For example, this somewhat obscure question "How can the Heartbeat time be larger than the Sample time?" is answered as part of this longer explanation "How do I measure latency and throughput?"



The FAQ alias feature is most useful in the alphanumeric Messages section where messages containing multiple phrases and numeric codes appear repeatedly in the list. Quite a few messages have two or three entries pointing to the same explanation; this one's the winner with 4 appearances in the sorted list:
23: I/O error Fatal error: A disk read from file "...\foxhound1.db" failed with error code: (23) -- transaction rolled back --

A disk read from file "...\foxhound1.db" failed with error code: (23) -- transaction rolled back --

I/O error Fatal error: A disk read from file "...\foxhound1.db" failed with error code: (23) -- transaction rolled back --

transaction rolled back: I/O error Fatal error: A disk read from file "...\foxhound1.db" failed with error code: (23)



Even more useful than the alias feature is the new Google "Custom Search Engine" link that appears on every single FAQ page:


Go ahead, give it a whirl, type in "adhoc" and do a search.



Speaking of adhoc reporting, it's not new with Foxhound 1.2 but it might as well be; it wasn't documented until now:
How do I run adhoc queries on the Foxhound database?

How do I see the schema for adhoc reporting?

How do I change the ADHOC password?
Plus, a bunch of new columns and new views have been added to make queries easier; after all, it's all your data!



Here's one of the shorter FAQs:
Question: How do I backup my Foxhound database?

Answer: Use one or both of the Foxhound shortcuts:
All Programs 
   - Foxhound1 
      - Tools 
         - Backup Foxhound Database 
         - Backup Foxhound Transaction Log
The second shortcut, Backup Foxhound Transaction Log, is new to Foxhound 1.2.

So is this explanation:
How do the Foxhound backup shortcuts work?
Plus, at long last, there's an explanation of what to do with all those backups:
How do I restore the Foxhound database from a backup?



Some folks like the context-sensitive Help frame on the right side of the Foxhound display, some folks hate it, but one thing everyone hated was the "Hide Help / Show Help" buttons when they didn't work.

Now those buttons are gone, and if you want to turn the Help frame on and off you have to check the "Show Help" box on the Foxhound Options page.



If there were a contest for "Shortest FAQ" this would be another contestant:
Question: How do I set up Foxhound to monitor a large number of target databases?

Answer: See Section 7. Manage Multiple Monitor Sessions on the Monitor Options page.
It used to be an endless affair starting and stopping 100 sampling sessions: click, click, click, [curse], click, click, click, [curse], ...

Now it's a matter of setting up a text file with all the connection strings, then clicking on one button to start them all, another to stop them all, with nothing stopping you from starting and stopping them individually like before.



And finally, we have the answer to this important question: How much overhead does the Foxhound connection add to the target database?

Here's an excerpt, possibly a candidate for Quote Of The Day:
"Generally speaking, if Foxhound's overhead causes any problems, the target database is already in serious difficulty for other reasons."
That's a true statement, not just for Foxhound but for any performance monitor product: If monitoring performance hurts performance, there is already a problem that needs to be dealt with.


Monday, December 5, 2011

Foxhound 1.2 Is Now Available

Foxhound is a browser-based health, availability and performance monitor for SQL Anywhere that displays everything you need to know at a glance.

... and it's FREE! (to upgrade, that is)


If you already have version 1.0 or 1.1, Foxhound 1.2 is now available as a free upgrade.

If not, you can get an evaluation copy or buy a copy here.

Here's the What's New . . .


Now it's easier to manage up to 100 Monitor sessions:
  • The "Alerts Criteria" page has been redesigned as the "Monitor Options" page.
    See: The Monitor Options Page.

  • The Default Settings can now be edited apart from the options for any particular target database.
    See: Monitor Options - Switch Display.

  • The new Force Default Settings On All Targets button on the Monitor Options page lets you "push" the default settings to all the targets, all at once. Previously, you could only "pull" the settings to each target one at a time with the Restore Default Settings button.
    See: Save And Restore Alerts Criteria Defaults.

  • The Manage Multiple Monitor Sessions feature lets you specify batches of target databases, and the Start and Stop All Sessions buttons let you turn sampling on and off for each batch.
    See: Manage Multiple Monitor Sessions.


The Monitor tab on the Foxhound Menu page is now easier to use, and more informative:
  • The lists of targets and outstanding Alerts have been vertically squished so you don't have to scroll down so far when you've got a lot of targets.

  • The Disable Refresh and Enable Refresh buttons make it easier to scroll down to look at something.

  • More "at a glance" columns have been added:
    • Active Alerts count, with a link down into the Alerts list
    • Conns
    • Waiting Req
    • Locks Held, Blocked Conns
    • CPU Time
    • Temp Space
    • Disk Reads, Writes

  • The Start All Sampling and Stop All Sampling buttons let you turn sampling on and off for all the targets.
See: The Monitor Tab.


Scrolling through gigabytes of data on the History page used to be slowwwww, like continental drift, now it's faster.

Plus, new scroll amounts have been added (500 samples and 3 hours):
« Newest « 500 « 100 « 20 « 1 sample         1 sample » 20 » 100 » 500 » Oldest » 
« Month « Week « Day « 3 Hours « 1 Hour   1 Hour » 3 Hours » Day » Week » Month »

...and, the "Freeze Frame Heights" button lets you resize and freeze the History page frames so they don't reset ("Grrr!") every time you scroll:
  • The database-level samples frame,

  • the blocked connections frame and

  • the current connections frame.

See: The History Page.


It's your data; now it's easier to run adhoc queries:
  • New views have been added for adhoc reporting.

  • A separate read-only "Adhoc Schema" database lets you see what the views and underlying tables look like.

  • New connection-level columns have been added to make queries easier to write:
    LockRowID                         BIGINT NULL
    blocked_connection_count          BIGINT NOT NULL DEFAULT 0
    current_req_status                VARCHAR ( 100 ) NOT NULL DEFAULT ''             
    cache_satisfaction                DECIMAL ( 30, 0 ) NOT NULL DEFAULT 0.0 
    time_connected                    BIGINT NOT NULL DEFAULT 0
    total_waits                       BIGINT NOT NULL DEFAULT 0 
    waiting_time                      DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0
    transaction_running_time          BIGINT NOT NULL DEFAULT 0
    time_since_last_request           BIGINT NOT NULL DEFAULT 0
    index_satisfaction                DECIMAL ( 30, 0 ) NOT NULL DEFAULT 0.0
Except for LockRowID, all the new columns contain values that Foxhound used to calculate only when the data was displayed on the Monitor and History pages; now these values are calculated once and stored permanently.

See: How do I run adhoc queries on the Foxhound database?


Now you can use Gmail to send Alert emails as well as receive them.
See: How do I tell Foxhound to send Alert emails via the Gmail SMTP server smtp.gmail.com?


The data upgrade process now runs faster...

...as well as giving you control over how much data to upgrade:
  • The default is FOXHOUND1UPGRADE = ALL to copy all the data from your old Foxhound database to the new one.

  • If you choose FOXHOUND1UPGRADE = OPTIONS during the installation process, Foxhound will copy everything except the Monitor samples. When you start Foxhound again, all the old sample data will be gone but sampling will start again right away (assuming it was running before you installed the new version).

  • If you want to save the Monitor samples recorded since a particular date, specify FOXHOUND1UPGRADE = yyyymmmdd.

  • To save the samples recorded during the past n days, use FOXHOUND1UPGRADE = n.

What's really cool about the new upgrade process is you can use it to purge, shrink and reorganize the Foxhound database: just reinstall the same version of Foxhound with FOXHOUND1UPGRADE = yyyymmmdd or FOXHOUND1UPGRADE = n.
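Here's one way the four kinds of FOXHOUND1UPGRADE values described above might be interpreted, sketched in Python; the real installer's logic isn't published, so this function and its name are purely illustrative:

```python
# Illustrative sketch only: turn a FOXHOUND1UPGRADE value into a cutoff
# date for copying Monitor samples, per the four cases described above.
from datetime import date, timedelta

MONTHS = {m: i + 1 for i, m in enumerate(
    ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec'])}

def upgrade_cutoff(value, today):
    """Return the earliest sample date to copy, or None for everything."""
    value = value.strip().lower()
    if value == 'all':
        return None                    # copy all the samples
    if value == 'options':
        return date.max                # copy no samples at all
    if value.isdigit():
        return today - timedelta(days=int(value))   # the last n days
    # yyyymmmdd, e.g. 2011dec05: samples recorded since that date
    return date(int(value[:4]), MONTHS[value[4:7]], int(value[7:]))

today = date(2011, 12, 26)
print(upgrade_cutoff('ALL', today))        # None
print(upgrade_cutoff('7', today))          # 2011-12-19
print(upgrade_cutoff('2011dec05', today))  # 2011-12-05
```

Reinstalling the same version with a cutoff date is exactly the purge-and-shrink trick: everything older than the cutoff simply never gets copied.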

See:

How do the different FOXHOUND1UPGRADE values work?

How do I shrink the size of the Foxhound database?



Good news, bad news:


The look-and-feel of the Foxhound Options page has changed to match the new improved Monitor Options window.
See: The Foxhound Options Page.


A single global "Enable Emails" setting has been added to the Foxhound Options page in case you want to turn off an accidental "alert storm".
See: Foxhound Options - Global Overrides.


New shortcuts have been added:
All Programs 
   - Foxhound1 
      - Start Foxhound via Chrome
      - Tools 
         - Backup Foxhound Transaction Log
         - Start Foxhound via Chrome - debug

See:
How do I start Foxhound?

How do I backup my Foxhound database?



Exception handling has been improved to allow Foxhound to continue operating after receiving bad data from a target database.
See: How do I report unexpected errors and exceptions?


The thirty-day Evaluation period can now be extended: just get an Evaluation Renewal key from breck.carter@gmail.com.
See: How do I get a copy of Foxhound?


Friday, December 2, 2011

The fRiDaY File - Mobility Mayhem

Texting!




Recalculating!




Texting!






Meet Mayhem, prerecorded...




...and in person:





How do you know when it's time to ship?

At some point, every piece of software has to "go out the door", into the hands of unsuspecting eager customers.

The question is, how do you know when that time has come?

When all the bugs are gone!


Yeah, sure, and if that was the case there wouldn't be any new software, ever, because the bugs are never all gone.

Fortunately, there are three simple criteria to be met before a software product is ready to ship:

1. All the new bugs are going into the "Future Fix" category.


Once upon a time, you were determined to fix each and every flaw that turned up, no matter how minor.

Now, it's not just a matter of "When will this bug get fixed?" (Not Now) but "Will this bug get fixed?"... that's when a Future Fix bug gets flagged with "Wait For Reproducible" which is the same as "No, Never Mind".

2. You're writing more FAQs than fixes.


Sometimes it's easier to cover an electrical outlet than to document why it's not a good idea to stick a fork into it... until you consider the fact that docs don't need to be tested but if you change the code you have to go back and run all . . . those . . . endless . . . regression tests again.

Plus, it's really easy to say "It's not a bug, it's a FAQ!" with a straight face, a lot easier than "It's not a bug, it's a feature!" even though the two statements are not that different.



Here's the third and final "Ready To Ship" criterion:

3. You just wrote the same FAQ a second time.


...and it's not as well-written as the first one!

That's when you know, it's time to ship.