Monday, October 3, 2011

Dump and Continue

Question: How do I capture information about a SQL exception inside a stored procedure and then roll back and continue processing?

A short (and incomplete) answer can be found here:

Here's a longer answer:
  • Use an extra BEGIN END block to isolate the SQL code that may raise an exception that is to be handled.

  • Use an EXCEPTION handler inside that BEGIN block to get control when the exception occurs.

  • Use SELECT INTO to capture the values of SQLCODE, SQLSTATE and ERRORMSG().

  • Use SELECT * to capture all the data from a row in a table without requiring maintenance effort when the schema changes.

  • Use UNLOAD SELECT TO VARIABLE to dump the row, including binary data, into a single readable character string.

  • Use TRIGGER EVENT to start a separate process with a separate connection which in turn will allow a separate database transaction.

  • Use EVENT_PARAMETER to pass all the captured data to the event process.

  • Use INSERT and COMMIT inside the EVENT process to save the captured data to the database.

  • Use ROLLBACK in the EXCEPTION handler to roll back the failing transaction.

  • Let the EXCEPTION handler end normally (no RESIGNAL statement) to continue processing.
Sounds like a lot of work, doesn't it?

Well, yes, there IS some work involved.

"Dump and Continue" is harder than "Crap Out and Stop". You know what I'm talking about: Crap Out and Stop means don't do anything, just let the exception bubble up into the application and throw a Java traceback in the user's face.

Or worse, do less than nothing, don't even check for an error back in the application, just continue processing with a partially complete transaction, maybe commit, maybe roll back... there's an infinity of Good, Bad and Ugly things applications can do when stuff happens.

But why would anyone want "Dump and Continue"?

With some applications, continuing with the next step or task or operation is more important than stopping to ask the user what to do when a problem occurs. Some applications don't even have users in the normal sense: they aren't interactive but instead run in the background or as scheduled tasks. Their users might want to investigate later on what went wrong with the 1% that didn't get done, but they sure don't want the other 99% to get held up by the 1%.

Other applications go way out where the buses don't run; they're further out on the continuum that leads to nonstop processing. These applications run in the foreground and they've got interactive users, all right, but it's a special kind of user, one who absolutely positively relies on the application recovering immediately from whatever glitch just happened and pressing on with the job at hand. They don't care about the Dump option, just Continue, and in their world, the world of airline pilots and heart surgeons, "Stop" means "Die".

But enough with the proselytizing, this isn't an article about life-critical applications, it's about Dump and Continue, so

Show me the code!

First, here's a procedure with a problem; can you tell me what that problem is?
"No, wait, last week was Quiz Week, let's just get on with it!"
OK, OK, here it is...
CREATE PROCEDURE p()
BEGIN

DECLARE @sqlcode                 INTEGER;
DECLARE @sqlstate                VARCHAR ( 5 );
DECLARE @errormsg                VARCHAR ( 32767 );
DECLARE @exception_dump          LONG VARCHAR;

FOR f_fetch AS c_fetch INSENSITIVE CURSOR FOR
SELECT pkey  AS @pkey,
       data1 AS @data1
  FROM t
FOR READ ONLY
DO

   UPDATE t SET counter = counter + 1 WHERE pkey = @pkey;

   BEGIN

      UPDATE t SET data2 = @data1 + 1 WHERE pkey = @pkey;

   EXCEPTION WHEN OTHERS THEN

      SELECT SQLCODE, SQLSTATE, ERRORMSG() 
        INTO @sqlcode, @sqlstate, @errormsg;

      ROLLBACK;

      UNLOAD SELECT 'Table t', * FROM t WHERE pkey = @pkey
        INTO VARIABLE @exception_dump;

      CALL record_exception ( 
         STRING ( 'SQLCODE = ', @sqlcode, 
                  ', SQLSTATE = ', @sqlstate, 
                  ', ERRORMSG() = ', @errormsg ),
         @exception_dump );

   END;

   COMMIT;

END FOR;

END;

The FOR loop steps through each row in table t, performing two UPDATE statements and then a COMMIT. In other words, each row in t is updated twice as a single transaction.
Of course nobody in the real world would use two UPDATE statements where one would do the job, but that's not the point... the point is to demonstrate how one multi-update transaction can be rolled back without affecting the other transactions.
The second UPDATE (the one that computes data2 from @data1) has been singled out for special exception handling. That is done by wrapping it in an inner BEGIN block with its own EXCEPTION handler. Only exceptions raised by that UPDATE will be handled there; any exception that occurs outside the inner BEGIN block (say, a problem in the first UPDATE) will bubble up to the caller because procedure p has no EXCEPTION handler for its outer BEGIN block.

The SELECT INTO at the top of the handler captures the SQLCODE, SQLSTATE and ERRORMSG() values as they stood when the exception was raised. These values must be captured as soon as the EXCEPTION handler starts executing, and in a single SELECT INTO statement, so they aren't changed by other statements.
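To see why a single SELECT INTO matters, here's a sketch of a handler that gets it wrong; this is an anti-pattern for illustration, not code from the procedure above:

EXCEPTION WHEN OTHERS THEN

   -- WRONG: this SET is itself a successful statement, so by the time
   -- the next line looks at SQLCODE it no longer holds the exception's value.
   SET @errormsg = ERRORMSG();
   SET @sqlcode  = SQLCODE;   -- too late

   -- RIGHT: one SELECT INTO captures all three values at once,
   -- before any other statement can disturb them.
   SELECT SQLCODE, SQLSTATE, ERRORMSG()
     INTO @sqlcode, @sqlstate, @errormsg;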

The ROLLBACK backs out all the changes made during the current transaction. In some situations it's important to get the ROLLBACK done right away, as it's done here. In other cases the ROLLBACK is done later so that more diagnostic information can be gathered; an example of that is shown later in this article. Whether a ROLLBACK is needed at all depends on the application; the question at the top of this article asked for one, so it's here.

The UNLOAD SELECT is the "Dump" part of "Dump and Continue"... it captures an image of the row in t, but instead of writing it to a text file like UNLOAD usually does, it puts the data into a single local LONG VARCHAR variable using the new INTO VARIABLE clause.
OK, so it was introduced in SQL Anywhere 11 which makes it "relatively new"... but UNLOAD INTO VARIABLE is still new to lots of us :)
The CALL passes two strings to the record_exception procedure that will be shown later. The first string contains diagnostic data about the exception and the second is the dump.

The code between the CALL and the END of the inner block is the secret behind the "Continue" part of "Dump and Continue".

What code? No code... that's the point, the exception handler does nothing else before the END, there's no RETURN statement, no RESIGNAL, nothing. In SQL Anywhere an exception handler does only what you tell it to do, and once it finishes what you've told it to do, if you haven't told it to go somewhere else, it carries on with the next statement after the END.
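For comparison, here's a minimal sketch of the two choices (the table name x and column c are hypothetical, just to keep the example short):

BEGIN

   UPDATE x SET c = c + 1;  -- the statement being protected

EXCEPTION WHEN OTHERS THEN

   -- Option 1: do nothing more; control falls through to the
   --           statement after END and processing continues.
   -- Option 2: code RESIGNAL here; the exception is passed up to
   --           the caller and nothing after END executes.

END;
-- with Option 1, execution resumes here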

And the next statement after the END is the COMMIT... which does nothing in the case of an exception because the handler already did a ROLLBACK.

After the COMMIT comes the END FOR, which means the FOR loop keeps on running until table t runs out of rows, no matter how many times the EXCEPTION handler gets control.


Here's the code for the record_exception procedure that p-the-procedure-with-a-problem calls from the inner EXCEPTION handler:
CREATE PROCEDURE record_exception ( 
   IN @diagnostic_text LONG VARCHAR,
   IN @exception_dump  LONG VARCHAR DEFAULT '' )
BEGIN

DECLARE @exception_diagnostic  LONG VARCHAR;
DECLARE @sqlcode               INTEGER;
DECLARE @sqlstate              VARCHAR ( 5 );
DECLARE @errormsg              VARCHAR ( 32767 );

SET @exception_diagnostic = STRING ( 
   CURRENT TIMESTAMP, 
   ' ',  
   CONNECTION_PROPERTY ( 'Number' ),  
   ' ',  
   @diagnostic_text );

IF COALESCE ( @exception_dump, '' ) <> '' THEN

   TRIGGER EVENT record_exception ( 
      @p_exception_diagnostic = @exception_diagnostic,
      @p_exception_dump       = @exception_dump );

ELSE 

   TRIGGER EVENT record_exception ( 
      @p_exception_diagnostic = @exception_diagnostic );

END IF;

EXCEPTION WHEN OTHERS THEN

   SELECT SQLCODE, SQLSTATE, ERRORMSG() 
     INTO @sqlcode, @sqlstate, @errormsg;

   MESSAGE STRING ( 
      CURRENT TIMESTAMP, 
      ' EXCEPTION in record_exception: ',
      ', SQLCODE = ', @sqlcode,  
      ', SQLSTATE = ', @sqlstate,  
      ', ERRORMSG() = ', @errormsg ) 
      TO CONSOLE;

END;

The SET statement adds more data to the @diagnostic_text passed from the caller. If other common data is important for diagnosing exceptions in your application, this is where you can capture it so you don't have to include it everywhere this procedure is called; the reference to CURRENT TIMESTAMP and the call to CONNECTION_PROPERTY() shown here are examples.

The IF THEN ELSE handles the fact that the input parameter @exception_dump is optional, both for this procedure and for the event it triggers.

The two TRIGGER EVENT statements use the special "trigger-parameter-name = expression" syntax that is required to pass values to the event. Just strings, but that's OK since pretty much everything can be converted to a string.
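Since TRIGGER EVENT parameters are strings, non-string values have to be converted on the way in and cast back on the way out; here's a sketch using a hypothetical event my_event and a numeric value:

-- Caller: turn the number into a string with STRING().
TRIGGER EVENT my_event (
   @p_row_count = STRING ( @row_count ) );

-- Inside the CREATE EVENT my_event handler: cast it back.
SET @row_count = CAST ( EVENT_PARAMETER ( '@p_row_count' ) AS INTEGER );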

The EXCEPTION handler at the bottom of record_exception deals with any exception that might occur in the procedure itself, as follows:
  • capture the SQLCODE, SQLSTATE and ERRORMSG() values,

  • use MESSAGE TO CONSOLE to write a diagnostic message to the database server console log, and

  • carry on as if nothing bad happened; i.e., return to the caller.
In other words, it's a dumbed-down version of "Dump and Continue"... the last thing any application needs is for an error in the error handler to get in the way.


Here's the code for the EVENT that does the actual work of recording the exception, plus the two tables it uses:
CREATE TABLE exception_diagnostic (
   exception_id          BIGINT NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
   exception_diagnostic  LONG VARCHAR NOT NULL );

CREATE TABLE exception_dump (
   exception_id          BIGINT NOT NULL,
   dump_id               BIGINT NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
   dump_inserted_at      TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
   dump                  LONG VARCHAR NOT NULL ); 

CREATE EVENT record_exception
HANDLER BEGIN

DECLARE @p_exception_diagnostic  LONG VARCHAR;
DECLARE @p_exception_dump        LONG VARCHAR;
DECLARE @sqlcode                 INTEGER;
DECLARE @sqlstate                VARCHAR ( 5 );
DECLARE @errormsg                VARCHAR ( 32767 );

SET @p_exception_diagnostic = COALESCE ( EVENT_PARAMETER ( '@p_exception_diagnostic' ), '[NULL]' );

INSERT exception_diagnostic ( exception_diagnostic ) VALUES ( @p_exception_diagnostic );

COMMIT;

SET @p_exception_dump = COALESCE ( EVENT_PARAMETER ( '@p_exception_dump' ), '' );

IF @p_exception_dump <> '' THEN

   INSERT exception_dump ( exception_id, dump ) VALUES ( @@IDENTITY, @p_exception_dump );

   COMMIT;

END IF;

EXCEPTION WHEN OTHERS THEN

   SELECT SQLCODE, SQLSTATE, ERRORMSG() 
     INTO @sqlcode, @sqlstate, @errormsg;

   MESSAGE STRING ( 
      CURRENT TIMESTAMP, 
      ' EXCEPTION in event record_exception: ',
      ', SQLCODE = ', @sqlcode,  
      ', SQLSTATE = ', @sqlstate,  
      ', ERRORMSG() = ', @errormsg ) 
      TO CONSOLE;

END;

The two CREATE TABLE statements define separate tables for the diagnostic and dump strings. Using two tables instead of one makes it possible to record more than one dump for a single exception, and makes it easy to independently delete data from one table or the other after, say, a diagnostic runaway... those might not be compelling reasons, but they're good enough for government work :)

The first SET statement uses the funky EVENT_PARAMETER function to make up for the fact that CREATE EVENT doesn't have a parameter list like CREATE PROCEDURE does. The result is that the local variable @p_exception_diagnostic gets the first string passed via the TRIGGER EVENT shown earlier.

The first INSERT takes that string and shoves it into exception_diagnostic, and the COMMIT that follows makes sure the row sticks around no matter what happens later... another reason for two tables, perhaps.

The second SET copies the second string parameter into a local variable, and if there is something in it, the INSERT and COMMIT that follow shove that string into exception_dump (using @@IDENTITY from the earlier INSERT as the foreign key value) and commit it.

The important thing about this event, and the ONLY reason it exists, is that its COMMIT statements run on a separate connection from the rest of the application.

Does it work?

Yes, it works, and here's how...

First, so you don't have to scroll up and down so much, here's the procedure-with-a-problem again, followed by some data for it to use and some code to call it and to show what happened:
CREATE PROCEDURE p()
BEGIN

DECLARE @sqlcode                 INTEGER;
DECLARE @sqlstate                VARCHAR ( 5 );
DECLARE @errormsg                VARCHAR ( 32767 );
DECLARE @exception_dump          LONG VARCHAR;

FOR f_fetch AS c_fetch INSENSITIVE CURSOR FOR
SELECT pkey  AS @pkey,
       data1 AS @data1
  FROM t
FOR READ ONLY
DO

   UPDATE t SET counter = counter + 1 WHERE pkey = @pkey;

   BEGIN

      UPDATE t SET data2 = @data1 + 1 WHERE pkey = @pkey;

   EXCEPTION WHEN OTHERS THEN

      SELECT SQLCODE, SQLSTATE, ERRORMSG() 
        INTO @sqlcode, @sqlstate, @errormsg;

      ROLLBACK;

      UNLOAD SELECT 'Table t', * FROM t WHERE pkey = @pkey
        INTO VARIABLE @exception_dump;

      CALL record_exception ( 
         STRING ( 'SQLCODE = ', @sqlcode, 
                  ', SQLSTATE = ', @sqlstate, 
                  ', ERRORMSG() = ', @errormsg ),
         @exception_dump );

   END;

   COMMIT;

END FOR;

EXCEPTION WHEN OTHERS THEN

   SELECT SQLCODE, SQLSTATE, ERRORMSG() 
     INTO @sqlcode, @sqlstate, @errormsg;

   MESSAGE STRING ( 
      CURRENT TIMESTAMP, 
      ' EXCEPTION in procedure p: ',
      ', SQLCODE = ', @sqlcode,  
      ', SQLSTATE = ', @sqlstate,  
      ', ERRORMSG() = ', @errormsg ) 
      TO CONSOLE;

END;

CREATE TABLE t (
   pkey       INTEGER NOT NULL PRIMARY KEY,
   counter    INTEGER NOT NULL,
   data1      INTEGER NOT NULL,
   data2      BIGINT NOT NULL );

INSERT t VALUES ( 1, 0, 3857592,    0 );
INSERT t VALUES ( 2, 0, 2147483647, 0 );
INSERT t VALUES ( 3, 0, 54296,      0 );
COMMIT;

CALL p();
SELECT * FROM t ORDER BY pkey;
SELECT * FROM exception_diagnostic ORDER BY exception_id;
SELECT * FROM exception_dump ORDER BY dump_id;

The CALL p() runs to completion even though there's an exception halfway through the FOR loop.

How do you know that?

Because that's what the three SELECT statements after the CALL show:
pkey,counter,data1,data2
1,1,3857592,3857593
2,0,2147483647,0
3,1,54296,54297

exception_id,exception_diagnostic
1,2011-10-02 13:09:00.943 2 SQLCODE = -158, SQLSTATE = 22003, ERRORMSG() = Value 2147483647 + 1 out of range for destination

exception_id,dump_id,dump_inserted_at,dump
1,1,'2011-10-02 13:09:00.945','Table t',2,0,2147483647,0\x0d\x0a

The error message in exception_diagnostic shows that one of the updates failed with the dreaded "Value out of range for destination".

The dump in exception_dump shows that the error occurred on row 2 of table t.

The result set from table t shows that all the changes for row 2 were rolled back, but the changes for rows 1 and 3 were made successfully.


Here's what the exception_dump table looks like if the UNLOAD is moved ahead of the ROLLBACK; it shows the row as it stood after the first UPDATE but before the ROLLBACK (the counter column contains 1 instead of zero):
exception_id,dump_id,dump_inserted_at,dump
1,1,'2011-10-02 13:09:48.564','Table t',2,1,2147483647,0\x0d\x0a

You can code pretty much what you want inside the EXCEPTION handler; here's an example that uses UNLOAD with APPEND ON to capture before-and-after-ROLLBACK images of the row in table t:
   EXCEPTION WHEN OTHERS THEN

      SELECT SQLCODE, SQLSTATE, ERRORMSG() 
        INTO @sqlcode, @sqlstate, @errormsg;

      UNLOAD SELECT 'Table t before ROLLBACK', * FROM t WHERE pkey = @pkey
        INTO VARIABLE @exception_dump;

      ROLLBACK;

      UNLOAD SELECT 'Table t after ROLLBACK', * FROM t WHERE pkey = @pkey
        INTO VARIABLE @exception_dump APPEND ON;

      CALL record_exception ( 
         STRING ( 'SQLCODE = ', @sqlcode, 
                  ', SQLSTATE = ', @sqlstate, 
                  ', ERRORMSG() = ', @errormsg ),
         @exception_dump );

   END;

Here's what the dump looks like with the two row images, with counter = 1 before the ROLLBACK and zero afterwards:
exception_id,dump_id,dump_inserted_at,dump
1,1,'2011-10-02 11:17:45.347','Table t before ROLLBACK',2,1,2147483647,0\x0d\x0a'Table t after ROLLBACK',2,0,2147483647,0\x0d\x0a



1 comment:

kerog said...

Hi,
Thanks for your example, but I have question:
How do I do a 'commit' in a try...catch block?

I have code as below:

for a as aa dynamic scroll cursor for
select a from tabl
do
begin
call some_proc(a); -- in proc I Have commit and it must be there
EXCEPTION WHEN OTHERS THEN
SELECT SQLCODE, SQLSTATE, ERRORMSG()
INTO @sqlcode, @sqlstate, @errormsg;
END;
end for;