Friday, July 29, 2011

The fRiDaY File - Teal Dear...

...How To Enjoy This Cartoon

  1. Read...

  2. Laugh.

  3. Look it up.

  4. Laugh again.


Using LockCount To Display UPDATE Progress

Some programming tasks are notoriously difficult to get right but are regarded as "easy" by folks who haven't actually tackled them.

One is creating a good installation setup for a complex piece of software... proof of how difficult that is lies in the vast numbers of crappy setups that have been published over the years.

Another is displaying a decent progress indicator for a long-running process that runs as a "black box" with a beginning and an end but no outside access to intermediate steps... like a set-oriented UPDATE statement that affects millions of rows.

Here is an example of where such a progress indicator is needed: when Foxhound 1.2 upgrades the data from an earlier version it has to UPDATE all the connection-level sample data to fill in some new derived columns.

In this run the UPDATE took 19 minutes to process 6 million rows and all it displayed was an occasional "Cache size adjusted" message... not good:


I. 07/26 15:27:10. 2011-07-26 15:27:10.473 DATA UPGRADE: Upgrading existing rows.
I. 07/26 15:27:41. Cache size adjusted to 3228864K
I. 07/26 15:27:47. Cache size adjusted to 3268992K
I. 07/26 15:30:03. Cache size adjusted to 3621072K
I. 07/26 15:36:48. Cache size adjusted to 3993080K
I. 07/26 15:38:48. Cache size adjusted to 4118600K
I. 07/26 15:40:04. Cache size adjusted to 4149984K
I. 07/26 15:40:09. Cache size adjusted to 4157832K
I. 07/26 15:40:14. Cache size adjusted to 4159792K
I. 07/26 15:40:19. Cache size adjusted to 4160280K
I. 07/26 15:40:24. Cache size adjusted to 4160400K
I. 07/26 15:41:53. Starting checkpoint of "new_f" (foxhound1.db) at Tue Jul 26 2011 15:41
I. 07/26 15:43:02. Finished checkpoint of "new_f" (foxhound1.db) at Tue Jul 26 2011 15:43
I. 07/26 15:46:52. 2011-07-26 15:46:52.728 DATA UPGRADE: Creating foreign keys.




Here's a look at the new improved console log display:



Here's an overview of the technique used (a minimal sketch of the building blocks follows the list):
  • Select SYSTAB.count to determine how many rows are being updated (the entire table),

  • watch the connection-level 'LockCount' property to determine how many rows have been updated, and

  • use an EVENT to display progress messages on the database console.
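Before diving into the real thing, here's a minimal sketch of the first two building blocks in isolation (the table name and the connection number 123 are made up for illustration):

-- How many rows will the UPDATE touch? SYSTAB.count is the row count
-- recorded in the catalog for the entire table.
SELECT SYSTAB.count
  FROM SYSTAB
 WHERE SYSTAB.table_name = 'some_table';

-- How many rows has connection 123 locked so far?
SELECT CONNECTION_PROPERTY ( 'LockCount', 123 );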
Sounds easy, doesn't it? I thought so too... stop reading now if you want to hang on to that impression.

Still there?


Here's the EVENT:

CREATE EVENT rroad_display_progress_messages
HANDLER BEGIN

DECLARE @calling_connection_id INTEGER;
DECLARE @commit_percent BIGINT;
DECLARE @continuing VARCHAR ( 1 );
DECLARE @previous_commit_message_displayed_at_loop_counter BIGINT;
DECLARE @previous_update_message_displayed_at_loop_counter BIGINT;
DECLARE @lock_count BIGINT;
DECLARE @lock_count_has_increased_at_least_once VARCHAR ( 1 );
DECLARE @loop_counter BIGINT;
DECLARE @previous_commit_percent BIGINT;
DECLARE @previous_lock_count BIGINT;
DECLARE @previous_update_percent BIGINT;
DECLARE @row_count BIGINT;
DECLARE @update_percent BIGINT;

SET @continuing = 'Y'; -- until proven otherwise

SET @calling_connection_id = CAST ( COALESCE ( EVENT_PARAMETER ( 'ConnectionID' ), '0' ) AS INTEGER );

IF @calling_connection_id <= 0 THEN
   SET @continuing = 'N'; -- the calling connection id is out of range
END IF;

IF @continuing = 'Y' THEN
   SET @row_count = CAST ( COALESCE ( EVENT_PARAMETER ( '@row_count_string' ), '0' ) AS BIGINT );
   IF @row_count <= 0 THEN
      SET @continuing = 'N'; -- the row count is out of range
   END IF;
END IF;

SET @lock_count_has_increased_at_least_once = 'N';
SET @previous_update_message_displayed_at_loop_counter = 0;
SET @previous_commit_message_displayed_at_loop_counter = 0;
SET @previous_lock_count = 0;
SET @previous_update_percent = 0;
SET @previous_commit_percent = 0;
SET @loop_counter = 0;

WHILE @continuing = 'Y' LOOP

   SET @loop_counter = @loop_counter + 1;

   SET @lock_count = CONNECTION_PROPERTY ( 'LockCount', EVENT_PARAMETER ( 'ConnectionID' ) );

   IF @lock_count IS NULL THEN
      SET @continuing = 'N'; -- the calling connection no longer exists
   END IF;

   IF @continuing = 'Y' THEN
      IF NOT EXISTS ( SELECT *
                        FROM rroad_progress_messages_requested
                       WHERE requested_by_connection_id = @calling_connection_id ) THEN
         SET @continuing = 'N'; -- the caller no longer wants progress messages displayed
      END IF;
   END IF;

   IF @continuing = 'Y' THEN

      CASE

         WHEN @lock_count > 0
         AND  @lock_count > @previous_lock_count THEN

            SET @lock_count_has_increased_at_least_once = 'Y';

            SET @update_percent = CAST (
               ( CAST ( @lock_count AS DECIMAL ( 11, 2 ) )
                 / CAST ( @row_count AS DECIMAL ( 11, 2 ) ) ) * 100.00
               AS BIGINT );

            IF @update_percent > @previous_update_percent
            OR ( @update_percent > 0
                 AND @update_percent = @previous_update_percent
                 AND @loop_counter >= @previous_update_message_displayed_at_loop_counter + 9
                 AND MOD ( @loop_counter, 10 ) = 0 ) THEN

               MESSAGE STRING ( @update_percent, '% updated (', @lock_count, ' rows of ', @row_count, ')' ) TO CONSOLE;
               SET @previous_update_message_displayed_at_loop_counter = @loop_counter;

            END IF;

            SET @previous_update_percent = @update_percent;

         WHEN @lock_count > 0
         AND  @lock_count < @previous_lock_count
         AND  @lock_count_has_increased_at_least_once = 'Y' THEN

            SET @commit_percent = CAST (
               ( ( CAST ( @row_count AS DECIMAL ( 11, 2 ) ) - CAST ( @lock_count AS DECIMAL ( 11, 2 ) ) )
                 / CAST ( @row_count AS DECIMAL ( 11, 2 ) ) ) * 100.00
               AS BIGINT );

            IF @commit_percent > @previous_commit_percent
            OR ( @commit_percent > 0
                 AND @commit_percent = @previous_commit_percent
                 AND @loop_counter >= @previous_commit_message_displayed_at_loop_counter + 9
                 AND MOD ( @loop_counter, 10 ) = 0 ) THEN

               MESSAGE STRING ( @commit_percent, '% committed (', @row_count - @lock_count, ' rows of ', @row_count, ')' ) TO CONSOLE;
               SET @previous_commit_message_displayed_at_loop_counter = @loop_counter;

            END IF;

            SET @previous_commit_percent = @commit_percent;

         WHEN @lock_count = 0
         AND  @lock_count_has_increased_at_least_once = 'Y' THEN

            MESSAGE STRING ( 'Done: ', @row_count, ' rows' ) TO CONSOLE;
            SET @continuing = 'N';

         ELSE
            -- No action.

      END CASE;

   END IF;

   IF @continuing = 'Y' THEN
      SET @previous_lock_count = @lock_count;
      WAITFOR DELAY '00:00:01';
   END IF;

END LOOP;

END;
  • The SET @calling_connection_id on line 20 gets the caller's connection number so the event can ask for the lock count later on.

  • The SET @row_count on line 27 calls the magic EVENT_PARAMETER function to retrieve the parameter that was passed from the caller via TRIGGER EVENT.

  • The loop starting on line 41 runs forever, or until some IF THEN SET @continuing = 'N' decides it is time to stop.

  • The SET @lock_count on line 45 gets the LockCount property for the calling connection (oops, it should say "@calling_connection_id" instead of "EVENT_PARAMETER ( 'ConnectionID' )"... that's been fixed but this is the code that produced the test results below so here it is, warts and all).

  • The IF THEN SET @continuing = 'N' on lines 52 through 56 checks to see if the caller no longer needs the services of the event, thank you very much... the caller communicates that fact by deleting its row in rroad_progress_messages_requested.

  • The first CASE WHEN on lines 63 through 84 handles MESSAGE output while the caller's UPDATE is gathering locks.

  • The IF statement on lines 73 through 82 produces a MESSAGE if
    • the update percent has increased since the previous loop pass, or

    • the update percent has remained stuck at the same non-zero value for several passes through the loop.

  • The second CASE WHEN on lines 86 through 106 handles MESSAGE output while the caller's COMMIT is releasing locks.

  • The AND @lock_count_has_increased_at_least_once = 'Y' on line 88 makes sure this code doesn't run before the first UPDATE starts... that's not likely, but you never know. OK, sure, this might be a case of Cargo Cult Programming, I'm not too proud to consider the possibility.

  • The third CASE WHEN on lines 108 through 112 probably is an example of Cargo Cult Programming because this event ain't never gonna see the lock count hit zero before the caller kills the whole deal by deleting its row in rroad_progress_messages_requested (see the IF THEN SET @continuing = 'N' on lines 52 through 56).

  • The WAITFOR DELAY on line 123 slows the whole thing down so you don't get millions of progress messages.
Why an EVENT? So the WHILE loop runs as a separate asynchronous process, independent from and at the same time as the caller's UPDATE and COMMIT... I think they call it multiprogramming or multitasking or something like that.

Here's an excerpt from the code that triggers the event:

DECLARE @rroad_group_2_property_pivot_row_count BIGINT;
. . .

------------------------------------------------------------
-- Start displaying progress messages.

INSERT rroad_progress_messages_requested ( requested_by_connection_id ) VALUES ( @@SPID );

COMMIT;

SELECT SYSTAB.count
  INTO @rroad_group_2_property_pivot_row_count
  FROM SYSTAB
 WHERE SYSTAB.table_name = 'rroad_group_2_property_pivot';

TRIGGER EVENT rroad_display_progress_messages (
   @row_count_string = STRING ( @rroad_group_2_property_pivot_row_count ) );

------------------------------------------------------------
-- Update rroad_group_2_property_pivot with remaining calculated columns

UPDATE rroad_group_2_property_pivot
   INNER JOIN rroad_sample_set
           ON rroad_sample_set.sample_set_number = rroad_group_2_property_pivot.sample_set_number
   LEFT OUTER JOIN ( SELECT rroad_group_2_property_pivot.sample_set_number AS sample_set_number,
. . .

COMMIT;

------------------------------------------------------------
-- Stop displaying progress messages.

DELETE rroad_progress_messages_requested
 WHERE requested_by_connection_id = @@spid;

COMMIT;
  • The INSERT on line 7 allows the EVENT to produce messages when it runs.

  • The SELECT on lines 11 through 14 works perfectly if there has been a CHECKPOINT recently; if not, it's probably good enough, and in this case it's just fine.

  • The TRIGGER EVENT on lines 16 and 17 is where the magic lies: it defines a pseudo-parameter called @row_count_string and gives it a value that is available via the (also magic) EVENT_PARAMETER function shown earlier... a stripped-down sketch of this handshake appears after this list.

  • The UPDATE and COMMIT on lines 22 through 28 are the "black box" process that runs for 19 minutes.

  • The DELETE starting on line 33 tells the EVENT to stop what it's doing; no more messages are required.
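To see the TRIGGER EVENT / EVENT_PARAMETER handshake in isolation, here's a stripped-down sketch; the event and parameter names are invented, only the mechanism matches the Foxhound code:

CREATE EVENT show_parameter
HANDLER BEGIN
   -- EVENT_PARAMETER returns the value passed by TRIGGER EVENT, as a string.
   MESSAGE STRING ( 'Received: ', EVENT_PARAMETER ( '@anything_string' ) ) TO CONSOLE;
END;

TRIGGER EVENT show_parameter ( @anything_string = STRING ( 6133762 ) );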
Here's the table that is used to communicate with the event:

CREATE TABLE rroad_progress_messages_requested (
requested_by_connection_id INTEGER NOT NULL PRIMARY KEY );

Here's a screen capture showing how processing slowed down near the end for some unknown reason:
  • the updated percentage got stuck (98%, 98%, 98%, ...), and

  • the COMMIT ran long enough for several committed percentages to appear (10%, 31%, 54%, ...).



Here's what the database console log contained for another test; this one didn't slow down as it neared completion, and the COMMIT was faster (only a single "70% committed" message):

I. 07/28 11:14:36. 2011-07-28 11:14:36.180 DATA UPGRADE: Updating Foxhound Monitor connection-level sample rows.
I. 07/28 11:14:44. Cache size adjusted to 3648208K
I. 07/28 11:15:00. Cache size adjusted to 3773872K
I. 07/28 11:15:05. Cache size adjusted to 3814120K
I. 07/28 11:15:09. Cache size adjusted to 3828376K
I. 07/28 11:15:11. Cache size adjusted to 3842344K
I. 07/28 11:15:21. Cache size adjusted to 3849032K
I. 07/28 11:15:26. Cache size adjusted to 3851152K
I. 07/28 11:15:27. 1% updated (67219 rows of 6133762)
I. 07/28 11:15:31. Cache size adjusted to 3851888K
I. 07/28 11:15:36. 2% updated (127610 rows of 6133762)
I. 07/28 11:15:36. Cache size adjusted to 3852120K
I. 07/28 11:15:41. Cache size adjusted to 3851928K
I. 07/28 11:15:45. 3% updated (185549 rows of 6133762)
I. 07/28 11:15:47. Cache size adjusted to 3863232K
I. 07/28 11:15:52. Cache size adjusted to 3866080K
I. 07/28 11:15:55. 4% updated (250198 rows of 6133762)
I. 07/28 11:16:03. 5% updated (309751 rows of 6133762)
I. 07/28 11:16:12. 6% updated (368423 rows of 6133762)
I. 07/28 11:16:24. 7% updated (434508 rows of 6133762)
I. 07/28 11:16:33. 8% updated (492776 rows of 6133762)
I. 07/28 11:16:43. 9% updated (555023 rows of 6133762)
I. 07/28 11:16:52. Cache size adjusted to 3909880K
I. 07/28 11:16:54. 10% updated (618356 rows of 6133762)
I. 07/28 11:17:03. 11% updated (679788 rows of 6133762)
I. 07/28 11:17:12. 12% updated (740038 rows of 6133762)
I. 07/28 11:17:15. Cache size adjusted to 3924104K
I. 07/28 11:17:20. Cache size adjusted to 3928528K
I. 07/28 11:17:22. 13% updated (799102 rows of 6133762)
I. 07/28 11:17:25. Cache size adjusted to 3931472K
I. 07/28 11:17:30. Cache size adjusted to 3933392K
I. 07/28 11:17:32. 14% updated (862614 rows of 6133762)
I. 07/28 11:17:35. Cache size adjusted to 3935112K
I. 07/28 11:17:40. Cache size adjusted to 3941088K
I. 07/28 11:17:42. 15% updated (924157 rows of 6133762)
I. 07/28 11:17:45. Cache size adjusted to 3943656K
I. 07/28 11:17:50. Cache size adjusted to 3948168K
I. 07/28 11:17:51. 16% updated (985534 rows of 6133762)
I. 07/28 11:18:01. 17% updated (1047734 rows of 6133762)
I. 07/28 11:18:09. 18% updated (1105532 rows of 6133762)
I. 07/28 11:18:18. 19% updated (1170862 rows of 6133762)
I. 07/28 11:18:25. 20% updated (1227940 rows of 6133762)
I. 07/28 11:18:34. 21% updated (1296232 rows of 6133762)
I. 07/28 11:18:42. 22% updated (1351509 rows of 6133762)
I. 07/28 11:18:50. 23% updated (1419113 rows of 6133762)
I. 07/28 11:18:50. Cache size adjusted to 3953328K
I. 07/28 11:18:58. 24% updated (1476474 rows of 6133762)
I. 07/28 11:19:07. 25% updated (1540502 rows of 6133762)
I. 07/28 11:19:19. 25% updated (1578351 rows of 6133762)
I. 07/28 11:19:29. 26% updated (1595529 rows of 6133762)
I. 07/28 11:19:41. 27% updated (1662896 rows of 6133762)
I. 07/28 11:19:50. Cache size adjusted to 3934168K
I. 07/28 11:19:51. 28% updated (1722376 rows of 6133762)
I. 07/28 11:20:01. 29% updated (1783204 rows of 6133762)
I. 07/28 11:20:13. 30% updated (1844001 rows of 6133762)
I. 07/28 11:20:23. 31% updated (1905747 rows of 6133762)
I. 07/28 11:20:33. 32% updated (1968430 rows of 6133762)
I. 07/28 11:20:46. 33% updated (2025920 rows of 6133762)
I. 07/28 11:20:51. Cache size adjusted to 3921488K
I. 07/28 11:20:57. 34% updated (2087546 rows of 6133762)
I. 07/28 11:21:08. 35% updated (2147983 rows of 6133762)
I. 07/28 11:21:18. 36% updated (2210031 rows of 6133762)
I. 07/28 11:21:28. 37% updated (2272765 rows of 6133762)
I. 07/28 11:21:38. 38% updated (2334295 rows of 6133762)
I. 07/28 11:21:47. 39% updated (2392202 rows of 6133762)
I. 07/28 11:21:51. Cache size adjusted to 3917520K
I. 07/28 11:22:01. 40% updated (2458832 rows of 6133762)
I. 07/28 11:22:12. 41% updated (2519676 rows of 6133762)
I. 07/28 11:22:17. Cache size adjusted to 3918608K
I. 07/28 11:22:22. Cache size adjusted to 3919096K
I. 07/28 11:22:25. 41% updated (2574924 rows of 6133762)
I. 07/28 11:22:26. 42% updated (2580107 rows of 6133762)
I. 07/28 11:22:27. Cache size adjusted to 3914408K
I. 07/28 11:22:32. Cache size adjusted to 3912736K
I. 07/28 11:22:35. 42% updated (2629137 rows of 6133762)
I. 07/28 11:22:37. Cache size adjusted to 3916120K
I. 07/28 11:22:38. 43% updated (2638896 rows of 6133762)
I. 07/28 11:22:42. Cache size adjusted to 3923840K
I. 07/28 11:22:47. Cache size adjusted to 3928120K
I. 07/28 11:22:51. 44% updated (2704091 rows of 6133762)
I. 07/28 11:22:52. Cache size adjusted to 3930848K
I. 07/28 11:23:01. 45% updated (2760737 rows of 6133762)
I. 07/28 11:23:10. 46% updated (2822995 rows of 6133762)
I. 07/28 11:23:21. 47% updated (2883865 rows of 6133762)
I. 07/28 11:23:32. 48% updated (2949216 rows of 6133762)
I. 07/28 11:23:43. 49% updated (3006036 rows of 6133762)
I. 07/28 11:23:55. 50% updated (3073329 rows of 6133762)
I. 07/28 11:24:05. 51% updated (3130900 rows of 6133762)
I. 07/28 11:24:16. 52% updated (3190512 rows of 6133762)
I. 07/28 11:24:27. 53% updated (3255993 rows of 6133762)
I. 07/28 11:24:37. 54% updated (3317650 rows of 6133762)
I. 07/28 11:24:45. 55% updated (3373726 rows of 6133762)
I. 07/28 11:24:56. 56% updated (3441046 rows of 6133762)
I. 07/28 11:25:06. 57% updated (3499829 rows of 6133762)
I. 07/28 11:25:17. 58% updated (3561681 rows of 6133762)
I. 07/28 11:25:22. Cache size adjusted to 4039144K
I. 07/28 11:25:27. 59% updated (3619535 rows of 6133762)
I. 07/28 11:25:39. 60% updated (3683934 rows of 6133762)
I. 07/28 11:25:52. 61% updated (3747029 rows of 6133762)
I. 07/28 11:26:03. 62% updated (3809460 rows of 6133762)
I. 07/28 11:26:11. 63% updated (3864864 rows of 6133762)
I. 07/28 11:26:19. 64% updated (3929201 rows of 6133762)
I. 07/28 11:26:22. Cache size adjusted to 4097008K
I. 07/28 11:26:28. 65% updated (3987744 rows of 6133762)
I. 07/28 11:26:34. Starting checkpoint of "new_f" (foxhound1.db) at Thu Jul 28 2011 11:26
I. 07/28 11:27:22. Cache size adjusted to 4144584K
I. 07/28 11:28:26. Finished checkpoint of "new_f" (foxhound1.db) at Thu Jul 28 2011 11:28
I. 07/28 11:28:32. 66% updated (4049736 rows of 6133762)
I. 07/28 11:28:41. 67% updated (4111378 rows of 6133762)
I. 07/28 11:28:50. 68% updated (4171434 rows of 6133762)
I. 07/28 11:28:58. 69% updated (4233962 rows of 6133762)
I. 07/28 11:29:07. 70% updated (4297422 rows of 6133762)
I. 07/28 11:29:16. 71% updated (4359568 rows of 6133762)
I. 07/28 11:29:22. Cache size adjusted to 4156480K
I. 07/28 11:29:24. 72% updated (4416568 rows of 6133762)
I. 07/28 11:29:34. 73% updated (4480383 rows of 6133762)
I. 07/28 11:29:43. 74% updated (4543735 rows of 6133762)
I. 07/28 11:29:52. 75% updated (4603354 rows of 6133762)
I. 07/28 11:30:01. 76% updated (4669403 rows of 6133762)
I. 07/28 11:30:08. 77% updated (4726151 rows of 6133762)
I. 07/28 11:30:14. Starting checkpoint of "new_f" (foxhound1.db) at Thu Jul 28 2011 11:30
I. 07/28 11:30:22. Cache size adjusted to 4159456K
I. 07/28 11:30:23. Finished checkpoint of "new_f" (foxhound1.db) at Thu Jul 28 2011 11:30
I. 07/28 11:30:26. 78% updated (4791562 rows of 6133762)
I. 07/28 11:30:34. 79% updated (4848418 rows of 6133762)
I. 07/28 11:30:39. Cache size adjusted to 4160200K
I. 07/28 11:30:44. 80% updated (4910819 rows of 6133762)
I. 07/28 11:30:44. Cache size adjusted to 4160384K
I. 07/28 11:30:52. 81% updated (4968486 rows of 6133762)
I. 07/28 11:31:01. 82% updated (5034257 rows of 6133762)
I. 07/28 11:31:10. 83% updated (5098073 rows of 6133762)
I. 07/28 11:31:18. 84% updated (5156062 rows of 6133762)
I. 07/28 11:31:26. 85% updated (5218284 rows of 6133762)
I. 07/28 11:31:34. 86% updated (5275652 rows of 6133762)
I. 07/28 11:31:44. 87% updated (5337384 rows of 6133762)
I. 07/28 11:31:52. 88% updated (5403349 rows of 6133762)
I. 07/28 11:32:00. 89% updated (5460098 rows of 6133762)
I. 07/28 11:32:09. 90% updated (5526121 rows of 6133762)
I. 07/28 11:32:17. 91% updated (5585565 rows of 6133762)
I. 07/28 11:32:24. 92% updated (5643851 rows of 6133762)
I. 07/28 11:32:33. 93% updated (5708012 rows of 6133762)
I. 07/28 11:32:42. 94% updated (5766534 rows of 6133762)
I. 07/28 11:32:50. 95% updated (5829575 rows of 6133762)
I. 07/28 11:32:59. 96% updated (5891464 rows of 6133762)
I. 07/28 11:33:08. 97% updated (5952045 rows of 6133762)
I. 07/28 11:33:18. 98% updated (6018117 rows of 6133762)
I. 07/28 11:33:26. 99% updated (6073883 rows of 6133762)
I. 07/28 11:33:37. 100% updated (6133762 rows of 6133762)
I. 07/28 11:33:47. 70% committed (4313700 rows of 6133762)
I. 07/28 11:33:48. 2011-07-28 11:33:48.717 DATA UPGRADE: Creating foreign keys.

Did the EVENT overhead slow things down? Nope, the last test took 19 minutes just like the first one... all it did was give the customer something to look at.


Thursday, July 28, 2011

The Thursday Quote - Joel Spolsky


"It's harder to read code than to write it."
- Things You Should Never Do, Part I by Joel Spolsky in Joel on Software on Thursday, April 06, 2000


Here's the wider context, just a snippet from Joel Spolsky's wonderful post:

Netscape 6.0 is finally going into its first public beta. There never was a version 5.0. The last major release, version 4.0, was released almost three years ago. Three years is an awfully long time in the Internet world. During this time, Netscape sat by, helplessly, as their market share plummeted.

It's a bit smarmy of me to criticize them for waiting so long between releases. They didn't do it on purpose, now, did they?

Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make:

They decided to rewrite the code from scratch.

. . .

There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:
It's harder to read code than to write it.
This is why code reuse is so hard. This is why everybody on your team has a different function they like to use for splitting strings into arrays of strings. They write their own function because it's easier and more fun than figuring out how the old function works.

Joel was right when he wrote that 11 years ago. If he'd been alive in 1960 he would have been right then.

...and he's right today.

And now for the Fair, Balanced and Incorrect point of view...


Joel Spolsky's business partner Jeff Atwood has written many wonderful articles over the years, but his response is a real clunker:

"I think the way most developers "understand" code is to rewrite it."
- When Understanding means Rewriting by Jeff Atwood in Coding Horror on Sep 18, 2006


Wouldn't it be better if programmers tried a bit harder to write code that was understandable when read by others?

Code that could, say, pass something as simple as the Telephone Test?

Go ahead, try that with anything that's even remotely object oriented :)



Next week: Jakob Nielsen


Wednesday, July 27, 2011

Beware CURRENT TIMESTAMP

Did you know that your computer can travel in time? Into the future and backwards?



Here are three reasons SELECT CURRENT TIMESTAMP can suddenly return a value earlier than it did a moment ago:

  • the autumn "fall back" daylight savings time adjustment has just been made,

  • a leap second has just been added, or

  • the system clock has drifted forward and then been disciplined (set back to the correct time).
Daylight savings time you know about, and if you're one of the lucky ones you don't have to (or never did have to) deal with it.

Leap seconds haven't happened since 2008 so we're probably due for another one soon, but really... who cares? OK, astronomers care... anyone else?

My favorite is the last one...
When the W32Time Manager receives time samples, it uses special algorithms in NTP to determine which of the time samples is the most appropriate for use. The time service also uses another set of algorithms to determine which of the configured time sources is the most accurate. When the time service has determined which time sample is best, based on the above criteria, it adjusts the local clock rate to allow it to converge toward the correct time. If the time difference between the local clock and the selected accurate time sample (also called the time skew) is too large to correct by adjusting the local clock rate, the time service sets the local clock to the correct time. This adjustment of clock rate or direct clock time change is known as clock discipline.
- How the Windows Time Service Works




Is This a Bohrbug? or a Heisenbug?


Here's a Murphy's Law scenario...
  • You see an error message like "Cannot convert '00:00:67' to a timestamp".

  • You fix the code.

  • Later on you see the same error message coming from a different location in the code.

  • At this point you ask "Should I search all the code for similar errors?"

  • If "Yes" then the effort will be wasted because only that second location needs fixing.

  • If "No" then there is a third location that needs fixing, and it will eventually show up in testing... or production (this is Murphy's Law, after all).

  • When you fix the third location, and ask the question again, the result is the same... there will always be at least one more error until you go looking for it ahead of time.

Show Me The Code!


Here's the pseudocode for a loop that executes every five seconds...

WHILE ... LOOP

   SET t = CURRENT TIMESTAMP;

   perform a task that takes a varying amount of time;

   SET delay = 5 seconds - DATEDIFF ( between t and CURRENT TIMESTAMP in seconds );

   WAITFOR DELAY STRING ( '00:00:', delay );

END LOOP;

In the real world, the code's a bit harder; here are some excerpts from the actual SQL inside Foxhound...

DECLARE @from_timestamp TIMESTAMP;
DECLARE @waitfor_msec BIGINT;
DECLARE @waitfor_sec INTEGER;

...

WHILE ... LOOP

   ...

   SET @from_timestamp = CURRENT TIMESTAMP;

   CALL rroad_monitor_main();

   ...

   SET @waitfor_msec = 5000 - rroad_f_datediff_msec ( @from_timestamp, CURRENT TIMESTAMP );

   IF @waitfor_msec < 1000 THEN
      SET @waitfor_sec = 1; -- minimum additional wait, even if this loop pass has taken a long time
   ELSE
      SET @waitfor_sec = CAST ( ROUND ( CAST ( @waitfor_msec AS DECIMAL ( 11, 2 ) ) / 1000.0, 0 ) AS INTEGER );
   END IF;

   ...

   WAITFOR DELAY STRING ( '00:00:', @waitfor_sec );

   ...

END LOOP;

The function call rroad_f_datediff_msec ( ... ) on line 17 is some legacy code from the days when DATEDIFF ( MILLISECOND, ... ) returned INTEGER instead of BIGINT like it does now in Version 12.

The IF statement on lines 19 through 23 accounts for the fact that the earlier CALL might have used up a lot of time, resulting in a very small or even negative @waitfor_msec value. If the value is less than 1000 milliseconds (one second), @waitfor_sec is set to 1; otherwise @waitfor_sec is calculated using the ridiculously careful conversions on line 22.

The WAITFOR DELAY on line 27 uses a value most likely in the range '00:00:1' through '00:00:4'.

So... why this error message?



SQLCODE = -157, SQLSTATE = 53018, ERRORMSG() = Cannot convert '00:00:67' to a timestamp

The answer is that CURRENT TIMESTAMP on line 17 returned a value that was about a minute earlier than the value on line 11, and that happened because Windows had disciplined the system clock between the two references to CURRENT TIMESTAMP.

And Windows did that because the system clock had slowly, over days, drifted forward in time... it was just Foxhound's bad luck that the Windows clock discipline had been put into effect at a point between the two CURRENT TIMESTAMP references.

Here are some possible numbers: The CALL statement took 1 second and Windows set the system clock back by 63 seconds, so rroad_f_datediff_msec() returned -62000, so...


SET @waitfor_msec = 5000 - ( -62000 ) = 67000

...which resulted in "Cannot convert '00:00:67' to a timestamp".
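If you want to see the failure in isolation, here's a little sketch (independent of Foxhound) that reproduces it by faking the arithmetic:

BEGIN
   DECLARE @waitfor_msec BIGINT;
   SET @waitfor_msec = 5000 - ( -62000 );  -- 67000, as if the clock had jumped back 62 seconds
   -- STRING ( '00:00:', 67 ) yields '00:00:67', which is not a valid time of day...
   WAITFOR DELAY STRING ( '00:00:', @waitfor_msec / 1000 );
   -- ...so the WAITFOR raises SQLCODE -157: Cannot convert '00:00:67' to a timestamp
END;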

Here's the fix: change IF to a CASE that deals with stupid-big values...

CASE

   WHEN @waitfor_msec < 1000 THEN
      SET @waitfor_sec = 1; -- minimum additional wait, even if this loop pass has taken a long time

   WHEN @waitfor_msec > 5000 THEN
      SET @waitfor_sec = 1; -- deal with the fact CURRENT TIMESTAMP can move backwards

   ELSE
      SET @waitfor_sec = CAST ( ROUND ( CAST ( @waitfor_msec AS DECIMAL ( 11, 2 ) ) / 1000.0, 0 ) AS INTEGER );

END CASE;

Yes, this was the second time in over a year that the same bug was found and fixed in a different code location.

Yes, this time the answer to the Murphy's Law question was "Yes, let's spend the time and look for any other code that might have the same problem."

And yes, of course the result was "Nope, you wasted your time, Foxhound only has two delay loops like this."


Monday, July 25, 2011

Fuji: SQL Anywhere in the Cloud for Independent Software Vendors



Discover Sybase's new cloud data platform, code-named "Fuji", designed specifically for the needs of the ISV market. Many of the recent cloud data platforms have been focused on end-users and providing "infinite" scalability of a single database. "Fuji" is different. "Fuji" is focused on the ISV and providing scalability for the number of databases in the system. Come learn about how "Fuji" will allow a small team to host tens-of-thousands of separate, isolated databases in a secure, flexible, and manageable multi-tenant environment.



That's just one of the "Data Management" breakout sessions about Fuji at this year's Techwave conference on September 12 to 16 at the Venetian Hotel in Las Vegas.

Here are all the "Product Focus: SQL Anywhere" sessions with words and phrases that caught my eye highlighted in bold:



From TechWave 2011 Partial List of Breakout Sessions: Data Management (DAT and DMA)...

DATCLD1 "Fuji" - The Cloud Data Platform Built for Independent Software Vendors
Product Focus: SQL Anywhere
Level of Experience: All
Discover Sybase's new cloud data platform, code-named "Fuji", designed specifically for the needs of the ISV market. Many of the recent cloud data platforms have been focused on end-users and providing "infinite" scalability of a single database. "Fuji" is different. "Fuji" is focused on the ISV and providing scalability for the number of databases in the system. Come learn about how "Fuji" will allow a small team to host tens-of-thousands of separate, isolated databases in a secure, flexible, and manageable multi-tenant environment.

DATCLD2 ISV Plans for SaaS: Peer Perspectives
Product Focus: SQL Anywhere
Level of Experience: All
Are ISVs planning to move applications between public and private clouds? What factors have affected their data architecture in the cloud? What are their biggest concerns in moving to the cloud? A recent Sybase-sponsored survey examined the plans of over 250 ISVs as they move to SaaS-based application models [that's Software as a Service]. This talk will explore the results of the worldwide survey as well as data from numerous interviews with SaaS vendors and industry analysts.

DATCLD3 Cloud and Multi-Tenant Architectures Using SQL Anywhere
Product Focus: SQL Anywhere
Level of Experience: All
When moving to the cloud, it is vital that organizations retain the unique aspects of their applications, rather than simply following someone else's architecture. Truly visionary companies are leveraging the cloud, while still maintaining their own competitive differentiators. This talk will detail several key cloud architecture approaches, backed-up with real-world examples of SQL Anywhere OEM partners who are transforming their applications using the cloud, while continuing to deliver real competitive advantage to their customers.

DATCLD4 Sybase: Data in the Cloud
Product Focus: SQL Anywhere
Level of Experience: All
Discover Sybase's cloud vision and roadmap for products that manage, analyze and mobilize data in public and private clouds. This talk will examine the key cloud computing trends influencing Sybase's strategy, and explore how Sybase's products and services are aligning to provide solutions.

DATSQL1 SQL Anywhere 12 Data Management New Features
Product Focus: SQL Anywhere
Level of Experience: All
This session will dive into the data management enhancements provided in SQL Anywhere Version 12. Through discussion and demos, you will learn about SQL Anywhere 12's new language extensions, performance and scalability enhancements, spatial support, and the new administration and monitoring tool features. You'll also learn some tips and techniques for migrating from previous releases.

DATSQL2 Self-Management and Self-Healing in SQL Anywhere - Part 1
Product Focus: SQL Anywhere
Level of Experience: Intermediate
This presentation will discuss the self-management and self-healing features of SQL Anywhere which is designed from the ground up for frontline business environments with minimal administration. Topics covered will illustrate how the various self-management features of SQL Anywhere work in concert to provide a robust data management solution in zero-administration environments.

DATSQL3 Self-Management and Self-Healing in SQL Anywhere - Part 2
Product Focus: SQL Anywhere
Level of Experience: All
This presentation will discuss the self-management and self-healing features of SQL Anywhere which is designed from the ground up for frontline business environments with minimal administration. Topics covered will illustrate how the various self-management features of SQL Anywhere work in concert to provide a robust data management solution in zero-administration environments.

DATSQL4 SQL Anywhere Performance Analysis
Product Focus: SQL Anywhere
Level of Experience: Intermediate
This session provides an overview of issues to consider when designing a performance evaluation of SQL Anywhere. It outlines a systematic approach to construct a model of performance to determine the relative importance of various performance factors, such as database cache size and the server multiprogramming level. In addition, methods for troubleshooting performance problems in SQL Anywhere will be discussed. We will outline the steps a DBA should take to classify a performance problem, the tools provided in the product to examine the problem, and the analysis that must be done to resolve it.

DATSQL5 Migrating to SQL Anywhere 12
Product Focus: SQL Anywhere
Level of Experience: All
Whether you have an older version of SQL Anywhere or are looking to migrate from another database product, SQL Anywhere 12 presents several new features and improvements that will greatly benefit your applications. In this session, we describe the main reasons for migrating to SQL Anywhere 12, as well as key features to leverage. We will also examine the process of taking an existing non-Sybase database and migrating it to SQL Anywhere by simply using the built-in graphical administration tool.

DATSQL6 Using Spatial Data in SQL Anywhere
Product Focus: SQL Anywhere
Level of Experience: All
Did you know spatial data can be stored in a SQL Anywhere 12 database? In addition to storing these points, lines and polygons, the server can query such data extremely fast. SQL Anywhere is the first RDBMS that allows for the development of data driven spatial applications for BlackBerry, iPhone and Windows Mobile devices. New spatial data support capabilities in SQL Anywhere 12 will be previewed throughout this session, including: support for OGC and SQLMM standards for spatial data types and APIs; native import for shape files and built in functions to export to KML, GML and SVG formats; bi-directional synchronization of spatial and standard data between field worker and many popular enterprise systems; and more.

DATSQL7 .NET Development with SQL Anywhere and Visual Studio
Product Focus: SQL Anywhere
Level of Experience: All
SQL Anywhere offers strong support for developing client/server, embedded and mobile database applications using the latest version of Visual Studio and .NET Framework. This session outlines and demonstrates all the .NET features included in SQL Anywhere to assist in the implementation of robust .NET applications. The session will focus on the following topics: Visual Studio integration, ADO.NET, Entity Framework, ASP.NET, CLR stored procedures, .NET Compact Framework and .NET data synchronization logic.

DATSQL8 Using the SQL Anywhere Monitor
Product Focus: SQL Anywhere
Level of Experience: All
The SQL Anywhere Monitor is a browser-based tool that provides you with information about the health and availability of SQL Anywhere, MobiLink, and Sybase Relay servers. It allows you to store and view months of diagnostic data about your servers, giving you access to historical information when diagnosing problems or determining usage patterns. This talk will explore the features of the new V12 SQL Anywhere Monitor including installation and setup.

DATSQL9 Embedding SQL Anywhere in Commercial Applications
Product Focus: SQL Anywhere
Level of Experience: All
Independent Software Vendors (ISVs) and Original Equipment Manufacturers (OEMs) often need to embed a database in their commercial applications. Doing so yields many benefits for both the ISV/OEM and their customers, including a reduction in support costs and enhanced application performance. This session describes the different methods of embedding a SQL Anywhere database inside an application. Topics covered include reasons for embedding a database, how to integrate SQL Anywhere with your application, and using tools specifically designed for embedded systems. Software demonstrations will be used to highlight the advantages of embedding SQL Anywhere in commercial solutions.

DATSQL10 SQL Anywhere Tips and Techniques
Product Focus: SQL Anywhere
Level of Experience: All
This example-filled session will demonstrate some of the out-of-the-box things you can do with SQL Anywhere's extremely flexible feature set. Access the web by producing and consuming web services, and generating RSS feeds from the database. Take advantage of location and spatial data by creating density maps and generating spatial drawings and overlaying them on graph paper, all generated using SQL. Manage your database, log and backup files as well as external multimedia data from within the server using directory access capabilities. Generate cross-database queries using remote data access. You'll learn all this and more!

DATSQL11 Using Sybase SQL Anywhere with SAP Products
Product Focus: SQL Anywhere
Level of Experience: All
Do you know that Sybase SQL Anywhere is commonly used with SAP products such as Crystal Reports and Business Objects as an alternative to administration-intensive databases? Built to provide high performance and autonomous management, SQL Anywhere offers a database system that can quickly and easily be deployed alongside existing SAP infrastructure. This session provides an overview of the major features of SQL Anywhere and covers topics such as enterprise-caliber functionality, out-of-box performance, self-management, embeddability and data synchronization. The session will illustrate these concepts through a demonstration of SQL Anywhere's usage with SAP Crystal Reports.

DATSQL12 SQL Anywhere and OData
Product Focus: SQL Anywhere
Level of Experience: All
The Open Data Protocol (OData) is a Web protocol for querying and updating data that provides a way to unlock your data and free it from silos that exist in applications today. OData does this by applying and building upon Web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores. Learn about OData and how you can access your SQL Anywhere database using an OData interface.

DATSQL13 Successfully Synchronizing Partitioned Data with MobiLink
Product Focus: SQL Anywhere
Level of Experience: Intermediate
This session will discuss the lessons learned from using SQL Anywhere and MobiLink as the technical underpinnings for a geographically distributed database with heavily partitioned data spread across a diverse range of networks. We will identify and discuss focal points for ensuring that your MobiLink synchronizations accurately reflect your rules for data integrity and access, averting problems through proper planning and preparation.

DATSQL14 Meeting the Challenges of Satellite, Message-based Replication
Product Focus: SQL Anywhere
Level of Experience: Advanced
SQL Remote replication technology provides the foundation for successful message-based replication, however, remote platforms such as ships at sea, oil drilling platforms or other very remote facilities which rely on more expensive satellite based communications provide special challenges. Meeting the challenges of satellite, message-based replication provides details on the challenges faced over my ten years of experience with the US Military Sealift Command including day-to-day replication, application upgrades requiring database updates via SQL Remote "passthrough" statements, and database version upgrades.

DATSQL15 How a Legacy N-Tier System Became Fault Tolerant with SQL Anywhere High Availability
Product Focus: SQL Anywhere
Level of Experience: Intermediate
High Availability for SQL Anywhere provides a simple and powerful solution to recover from a database server failure, making it ideal for creating fault tolerant client/server applications. But what if the requirement is to provide high availability for an existing multi-tier application, in which the logic tier can also fail? This session will cover how High Availability for SQL Anywhere was combined with custom code to allow a standalone legacy system to work in a primary/backup configuration. High Availability was used to seamlessly mirror the data between the primary and backup servers, and a software agent was developed to monitor the status of the multiple tiers of the system and do an automatic switchover to a backup server in case of failure of any of the components.



From TechWave 2011 Partial List of Breakout Sessions: Enterprise Mobility (MOB)...

MOB9 Mobile App Development: Database-Driven Architectures for Mobile Applications
Product Focus: SQL Anywhere
Level of Experience: Intermediate
Anyone building or buying a mobile application needs to decide where to keep the application data: on the device or on a server? The answer depends on your needs for availability, latency, and consistency. This session offers key considerations and the impacts of these decisions for developing database-driven architectures for mobile applications. Attend this session to explore the ways in which SQL Anywhere deals with consistency problems for mobile applications that need to be highly available and responsive.

MOB10 Mobile App Development: Developing Database-Driven Apps for Android
Product Focus: SQL Anywhere
Level of Experience: Intermediate
Android is now the most widely used Smartphone operating system in the world. This session explores database-driven application development for Android, including keeping a responsive user interface, data management, and implementing push. It focuses on developing data-rich applications using SAP.

MOB11 Case Study: Scalable Line-of-Business Mobile Applications with MobiLink
Product Focus: SQL Anywhere
Level of Experience: All
The SAP Mobility Platform solution set includes powerful MobiLink technology. MobiLink has been used extensively by many enterprises to build some of the largest, most complex, and most successful mobile applications ever created. This session examines a number of successfully deployed enterprise mobile applications and gives insight into the considerations, challenges, and keys to success for each of these projects. It also provides an overview of the architecture and features of MobiLink and discusses how these were leveraged in real-world mobile applications.

MOB21 Product Overview: Performance Monitoring MobiLink Server Farms
Product Focus: SQL Anywhere
Level of Experience: Beginner
Every well-run system needs to be monitored to determine how it is meeting performance goals, and to identify areas for improvement. In this session, we'll discuss the use of SQL Anywhere Monitor to analyze the performance of MobiLink server farms including setting of alerts and notifications. Learn how MobiLink Monitor, synchronization events, and database tools like the MDA tables are used to identify the root cause of performance problems.



Friday, July 22, 2011

The fRiDaY File - Priorities


Looking for something to do this afternoon? Try dilbert.com...

  1. Type something in the "Keyword Search" field, say, priorities.

  2. Press SEARCH.

  3. Enjoy.



By The Numbers

Here are the latest numbers from Quantcast...



Some observations...

  • This blog is slowly growing in popularity both inside the United States and around the world.

  • It seems to be doing well among older Asian men with PhDs,

  • but with teenage Hispanic girls? Not so much.

...and an Action Plan!


Less Dilbert, more Don Omar!





Thursday, July 21, 2011

The Thursday Quote - Stan Murawski


"... data types are the key to success."
- Beyond Windows XP: Get Ready Now for the Upcoming 64-Bit Version of Windows by Stan Murawski, from the November 2001 issue of MSDN Magazine.

That quote caught my eye during my protracted struggle with 32-versus-64-bits.

Here's a bit (pun intended) of the wider context:

SUMMARY In this article the author modifies an industry standard middle-tier application server benchmark called Nile. The goal was to get it to build and run correctly on the 64-bit edition of the Microsoft .NET Advanced Server running on computers with Intel Itanium processors and still build as a 32-bit version to run on Pentium class x86 processors.

While modifying Nile, the author discovered some of the tips he presents here. As the article explains, when modifying code for 64-bit Windows, data types are the key to success. The article discusses which types to use and when, along with new data types and the importance of memory alignment.

Yes, he's talking about C code, but nevertheless data types ARE one of the keys to success. It is all well and good to use abstract data types like "string" and "number" but if you don't have a clue about what they actually are, you may be unprepared when reality intrudes.

Consider the bear...

Abstract Bear



Actual Bear





Next week: Joel Spolsky


Wednesday, July 20, 2011

More Stupid Error Messages

The earlier Stupid Error Messages began with this rant:


Have you ever seen an error message like this one?
Invalid phone number (212) 555-1284 - not numeric
Maybe this one?
Invalid phone number 2125551284 - format not (999) 999-999
Both messages are stupid, stupid, stupid... that's three "stupids" out of a maximum possible score of 5. Modern systems should be able to deal with human-readable input formats like credit card numbers with or without the dashes.
Invalid credit card number 4533-3048-3842-9544 - not numeric
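For what it's worth, the cleanup the rant is asking for is a one-liner in most languages; here's a hypothetical SQL Anywhere sketch (the variable name is made up):

CREATE VARIABLE @card_number VARCHAR ( 30 );
SET @card_number = '4533-3048-3842-9544';
-- Strip the dashes and spaces first, then validate what's left.
SET @card_number = REPLACE ( REPLACE ( @card_number, '-', '' ), ' ', '' );
SELECT @card_number;  -- '4533304838429544'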


The Finest Talent


Here's a new, more extreme example of a stupid error message:



This is Best Buy, not Mama Mary's Pasta Palace. Presumably Best Buy can afford the finest talent and not rely on cousin Vinnie for family tech support. Proof of that lies in the fact that the error message is well thought out, carefully crafted and fully explained... clearly a product of the finest talent, not cousin Vinnie.

However...
  • In the real world, credit cards have a single free-format "name on card" string, not separate "first name" and "last name".

  • In the real world, people sometimes have more than (or less than) two names.

  • In the real world, companies have credit cards.

  • In the real world, the "name on card" can contain digits.
But none of that matters: Best Buy shouldn't care what you enter in the "name on card", they should just care if the credit card company authorizes the purchase.

The bottom line? Best Buy didn't get my money.

At The Other End Of The Food Chain


Here's another new example of a stupid error message, this one from a recent release of Wordpress:



This is cousin Vinnie's work: no thought whatsoever was put into the design of the error message, thus proving the adage "Open source is free if your time isn't worth anything."

Both messages are stupid, stupid, stupid, stupid... four "stupids" out of a maximum possible score of 5.


Monday, July 18, 2011

Latest SQL Anywhere EBFs: 12.0.1.3389 and 11.0.1.2634 Windows, 11.0.1.2632 Solaris

The three asterisks "***" indicate what's new about this posting.

Only the latest point releases of the three supported versions are shown here: 12.0.1, 11.0.1 and 10.0.1.

To find out what you're running just do SELECT @@VERSION

Current builds...

HP-UX Itanium    11.0.1.2584  EBF                          07 Apr 2011
                 10.0.1.3777  EBF                          14 Oct 2008

HP-UX PA-RISC    10.0.1.3778  EBF                          16 Oct 2008

IBM AIX          12.0.1       GA Upgrade from 12.0.0       15 Mar 2011
                 11.0.1.2584  EBF                          07 Apr 2011

Linux Itanium    10.0.1.3777  EBF                          14 Oct 2008

Linux x86/x64    12.0.1.3311  EBF                          01 Apr 2011
                 11.0.1.2617  EBF                          13 Jun 2011
                 10.0.1.4239  EBF                          17 Jun 2011

Mac OS           12.0.1.3352  EBF                          26 May 2011
                 11.0.1.2449  EBF                          29 Jun 2010
                 10.0.1.4042  EBF                          01 Oct 2010

NetWare          10.0.1.4036  EBF                          10 Mar 2010

Solaris SPARC    12.0.1       GA Upgrade from 12.0.0       15 Mar 2011
                 11.0.1.2632  EBF                      *** 07 Jul 2011 ***
                 10.0.1.4239  EBF                          17 Jun 2011

Solaris x64      11.0.1.2584  EBF                          07 Apr 2011
                 10.0.1.3870  EBF                          01 Apr 2009

Windows CE       10.0.1.4036  EBF                          09 Mar 2010

Windows Itanium  10.0.1       GA Upgrade from 10.0.0       26 Apr 2007

Windows x86/x64  12.0.1.3389  EBF                      *** 11 Jul 2011 ***
                 11.0.1.2634  EBF                      *** 14 Jul 2011 ***
                 10.0.1.4239  EBF                          17 Jun 2011


Friday, July 15, 2011

The fRiDaY File - Ask me about Worfklow


Oh, look, IBM has something called "Worfklow"...



Cisco has "Worfklow" too...



and Cnet will let you download Virto "Worfklow"...



I wonder what Virto "Worfklow" is?
Let's do a search...

on Virto "Worfklow".

Wow, twenty thousand hits for Virto "Worfklow"!



Will you look at that, even Microsoft Office has Virto "Worfklow"!




R.I.P. SQLA 1.0





Thursday, July 14, 2011

The Thursday Quote - Linus Torvalds


"95 percent of all programmers think they are in the top 5 percent, and the rest are certain they are above average"
- Linus Torvalds, quoted in The One-second War (What Time Will You Die?) by Poul-Henning Kamp, acmqueue, April 6, 2011

Here is the context...

"But Linus' [Torvalds] observation that "95 percent of all programmers think they are in the top 5 percent, and the rest are certain they are above average" should not be taken lightly: very few programmers have any idea what the difference is between "wall-clock time" and "interval time," and leap seconds are way past rocket science for them. (For example, Posix defines only a pthread_cond_timedwait(), which takes wall-clock time but not an interval-time version of the call.)"
If Mr. Torvalds is correct, he is only approximately so: I regard myself as a below-average programmer and therefore I fall outside "the rest". If I were above average I would not have to work so hard to produce good code.

And I'm not even confident about the "good code" part... trying hard does not always lead to success.

Even worse, I know that many folks despise my coding style: too verbose, too vertical, too much white space, too many long names, too many underscores, not enough uPpEr and LoWeRcAsE, too many explicit name.qualifications, too many comments, and especially [blasphemy] not enough abstraction, polymorphism or inheritance [/blasphemy].

None of that bothers me, however. Only one complaint gets my attention: "I do not understand what you have written."

Oh, and of course, "Your code does not work." ...it goes without saying that gets my attention! :)


Wednesday, July 13, 2011

Using SQL to control a batch file

Whenever a developer starts the Foxhound build process, this exhortation appears:



Here's what it's saying: Go to this page in Foxhound and check to see if there are any run-time error messages that need to be dealt with:



Here's the code that produces the exhortation:


ECHO OFF
ECHO *************************************************************************
ECHO *****
ECHO ***** Have you looked at Foxhound Options - Display Exceptions?
ECHO *****
ECHO ***** Before you rebuild your Foxhound database and wipe out what was in
ECHO ***** the old copy, you might want to see if there were any errors you
ECHO ***** need to look at... or at least copy-and-paste into your to-do list.
ECHO *****
ECHO *************************************************************************
ECHO ON
PAUSE

Wouldn't it be easier for the developer if the build process checked to see if there actually ARE any exceptions?

Here's a Windows batch file that demonstrates how to use dbisql.com in batch mode, together with the ISQL EXIT statement, to do just that:

ECHO OFF

"%SQLANY12%\bin64\dbisql.com"^
-c "ENG=foxhound1;DBN=f;UID=ADHOC;PWD=SQL"^
-q^
CREATE VARIABLE rc INT; SET rc = 0; IF EXISTS ( SELECT * FROM rroad_exception ) THEN SET rc = 999 ENDIF; EXIT rc;

ECHO ERRORLEVEL = %ERRORLEVEL%

IF %ERRORLEVEL% == 999 ( ECHO rroad_exception is known to have rows ) ELSE ( ECHO rroad_exception is not known to have any rows )

PAUSE

The dbisql.com command on lines 3 through 6 does the following:
  • it connects to the Foxhound database if it's running, using the ADHOC user id provided by Foxhound for (you guessed it) ad-hoc reporting,

  • it specifies -q to suppress all display output except for error messages, and

  • it runs the SQL statements on line 6 instead of executing a script via the ISQL READ statement. Another alternative to coding out all the statements is to CALL a stored procedure, but this dbisql.com command is standalone; i.e., it doesn't require a separate SQL script or stored procedure to be created ahead of time.
The SQL statements on line 6 are all squished onto a single line because dbisql.com can't cope with multiple lines of SQL on the command line... or at least, I haven't figured out how to code line breaks in the SQL statement section.

All the crap code on line 6 to CREATE and use a connection-level variable is necessary because EXIT is an ISQL statement, not a SQL statement, and it can't be coded inside an IF THEN ELSE control block. The IF THEN block is sent to the SQL Anywhere engine to be processed, and the engine would throw an error if it saw an EXIT statement.
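For the record, the CALL alternative mentioned earlier might look something like this; the procedure name is hypothetical, and it would have to be created ahead of time, which is exactly what the standalone approach avoids:

CREATE PROCEDURE check_rroad_exception ( OUT rc INTEGER )
BEGIN
   -- Same test as the command-line version, hidden inside a procedure.
   IF EXISTS ( SELECT * FROM rroad_exception ) THEN
      SET rc = 999;
   ELSE
      SET rc = 0;
   END IF;
END;

-- ...and then the SQL on the dbisql.com command line shrinks to:
--    CREATE VARIABLE rc INT; CALL check_rroad_exception ( rc ); EXIT rc;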

The IF %ERRORLEVEL% == 999 command on line 10 displays "rroad_exception is known to have rows" if the EXIT rc returned 999. There are two other likely %ERRORLEVEL% values: 0 for "it's all good, there's nothing to look at" and 9 for "can't connect, and therefore the number of rows is unknown"... that's why the ELSE ECHO message is so vague.
Note: The Windows documentation of the IF command talks a bit about %ERRORLEVEL% versus ERRORLEVEL, but it doesn't actually show an example of how IF %ERRORLEVEL% is so much easier to think about, code and read than the old-fashioned IF ERRORLEVEL... folks who code batch files for a living know what I'm talkin' about.
Here's what the batch file displays in all three cases, rc = 999, 0 and 9:







Here's the snippet of code that was added to the Foxhound build process:

"%SQLANY12%\bin64\dbisql.com"^
-c "ENG=foxhound1;DBN=f;UID=ADHOC;PWD=SQL"^
-q^
CREATE VARIABLE rc INT; SET rc = 0; IF EXISTS ( SELECT * FROM rroad_exception ) THEN SET rc = 999 ENDIF; EXIT rc;

IF %ERRORLEVEL% == 999 ( GOTO rroad_exception_is_known_to_have_rows ) ELSE ( GOTO rroad_exception_is_not_known_to_have_any_rows )

:rroad_exception_is_known_to_have_rows
ECHO OFF
ECHO *************************************************************************
ECHO *****
ECHO ***** The rroad_exception table is NOT EMPTY !!!
ECHO *****
ECHO ***** Before you rebuild your Foxhound database and wipe out what was in
ECHO ***** the old copy, you might want to see if there were any errors you
ECHO ***** need to look at... or at least copy-and-paste into your to-do list.
ECHO *****
ECHO *************************************************************************
ECHO ON
PAUSE

:rroad_exception_is_not_known_to_have_any_rows

Here's the new improved exhortation:



BTW, the new exhortation was added to the build process, it didn't replace the old one... you can never have too many exhortations when you're looking for bugs!


Monday, July 11, 2011

Transactional Upload With MobiLink


True or False?

It is a fundamental rule of The MobiLink Synchronization Doctrine that uploaded changes are processed as a single all-or-nothing transaction on the consolidated database.
Here's a clue:
"The MobiLink server incorporates changes uploaded from each MobiLink client into the consolidated database in one transaction. It commits these changes once it has completed inserting new rows, deleting old rows, making updates, and resolving any conflicts."
- Transactions in the synchronization process
OK, true or false?




No... the correct answer is "False"


It was "True" for many years, but... did you know that since version 9.0.1 of SQL Anywhere it has been possible to break the one-transaction-rule for the upload stream?
"When you use -tu, you create a transactional upload: dbmlsync uploads each transaction on the remote database as a distinct transaction. The MobiLink server applies and commits each transaction separately when it is received."
- dbmlsync -tu option

It *is* well documented, but not easy to find, perhaps testimony to the stealth nature of SQL Anywhere.

If you're familiar with replication via SQL Remote or Replication Server, transactional upload is similar to how uploads work with dbremote.exe (the SQL Remote client) and dbltm.exe (the Log Transfer Manager, also known as... deep breath... the SQL Anywhere Replication Agent for Sybase Replication Server). Some folks have designed their systems to expect uploads to be applied to the consolidated database in the same order as the original transactions were applied to the remote databases. With the end of support for SQL Remote for ASE and for the Log Transfer Manager altogether, MobiLink is now pretty much the only game in town if you're not using SQL Anywhere for the consolidated database.

But wait, there's more...
  • MobiLink transactional upload may help with performance if the consolidated database is happier with many small transactions than with one giant all-or-nothing upload transaction.

  • Transactional upload may also help with robustness by allowing part of an upload to succeed before a failure occurs on an unreliable communication line between the MobiLink client and server.

  • When you use the dbmlsync -tu option on the MobiLink client side to get transactional upload, the mlsrv12 -tx n option tells the MobiLink server to batch groups of n uploaded transactions and commit each group as a unit.

  • Also note: The dbmlsync inc (increment) extended option is an alternative to dbmlsync -tu. When you specify inc=1M, for example, the single all-or-nothing upload transaction is broken into chunks of (very approximately) 1M each, and each chunk is committed separately on the consolidated database. This isn't the same as transactional upload, not even close, but like transactional upload it might help with performance and robustness... see the sketch right after this list.
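
Here's a minimal sketch of an inc-style command line, borrowing the connection and address parameters from the dbmlsync commands shown in Steps 9 and 11 below (the inc=1M value is just an illustration):

"%SQLANY11%\bin32\dbmlsync.exe"^
-c "ENG=remo;DBN=remo;UID=dba;PWD=sql"^
-e "adr='host=localhost';sv=v1;inc=1M"^
-o dbmlsync_log_remo.txt^
-x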

The Dark Side of Transactional Upload


MobiLink transactional upload ain't exactly the same as how SQL Remote and Replication Server work: preserving transaction order is not the same as preserving operation order. MobiLink still uploads full row images rather than the column-level changes recorded in the transaction log, and multiple operations on the same row in the same transaction are effectively "coalesced" (combined) into the single final row image that MobiLink uploads.

Who cares? Well, in most cases, you don't. Most folks don't need transactional upload to make their systems work properly; the only reason they might use it is for performance and/or robustness.

Of the remainder (folks who need transactional upload), most don't care about transaction-versus-operation-order; as long as the end result of each transaction is the same on the remote and consolidated databases, they're happy.

Who's left? Folks who need transactional upload, but can't use it without changing the way their system works because they really depend on the order of operations within a transaction.

The following end-to-end demonstration uses SQL Anywhere 11 for the remote database and Microsoft SQL Server 2008 for the consolidated database
  • to show how MobiLink transactional upload works, and

  • to demonstrate how the difference between transaction order and operation order might affect the consolidated database.


The phrase "end-to-end demonstration" in the previous section means just that. The code makes no assumptions except
  • that you have a Microsoft SQL Server 2008 server up and running, and

  • that you have a working copy of SQL Anywhere 11 installed.
The code does not assume you have any test databases; it starts from scratch with CREATE DATABASE and dbinit.exe.

Having said that, not every line of code is explained here, just the pertinent parts. Feel free to skip over the parts you don't care about, and if you don't like the verbosity please pay close attention to what Alice has to say:

Dilbert.com


Step 1: Create and set up the consolidated database.

1_create_and_set_up_cons.bat

"c:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe"^
-d master^
-i "s1_create_and_set_up_cons.sql"^
-I^
-P j68Fje9#fyu489^
-S localhost^
-U sa

PAUSE

s1_create_and_set_up_cons.sql

---------------------------------------------------------------------
-- Create separate database "mltest" (change the paths as desired).

USE master
GO

CREATE DATABASE mltest
ON
( NAME = mltest,
FILENAME = 'D:\data\mltest\mltest.mdf',
SIZE = 100MB )
LOG ON
( NAME = mltest_log,
FILENAME = 'D:\data\mltest\mltest.ldf',
SIZE = 100MB )
GO

---------------------------------------------------------------------
USE mltest
GO

SET QUOTED_IDENTIFIER ON
GO

---------------------------------------------------------------------
CREATE TABLE account_balance (
account_number VARCHAR ( 10 ) NOT NULL,
amount DECIMAL ( 15, 2 ) NOT NULL DEFAULT 0,
limit_exceeded VARCHAR ( 1 ) NOT NULL DEFAULT 'N' CHECK ( limit_exceeded IN ( 'Y', 'N' ) ),
PRIMARY KEY ( account_number ) )
GO

---------------------------------------------------------------------
BEGIN TRANSACTION
INSERT account_balance ( account_number ) VALUES ( '100' )
INSERT account_balance ( account_number ) VALUES ( '200' )
INSERT account_balance ( account_number ) VALUES ( '300' )
INSERT account_balance ( account_number ) VALUES ( '400' )
INSERT account_balance ( account_number ) VALUES ( '500' )
INSERT account_balance ( account_number ) VALUES ( '600' )
INSERT account_balance ( account_number ) VALUES ( '700' )
INSERT account_balance ( account_number ) VALUES ( '800' )
INSERT account_balance ( account_number ) VALUES ( '900' )
COMMIT TRANSACTION
GO

---------------------------------------------------------------------
CREATE TRIGGER tru_account_balance ON account_balance AFTER INSERT, UPDATE
AS
BEGIN

-- Flag the row as soon as it exceeds the limit.

UPDATE account_balance
SET account_balance.limit_exceeded = 'Y'
FROM inserted
WHERE account_balance.account_number = inserted.account_number
AND account_balance.limit_exceeded = 'N'
AND account_balance.amount > 5000.00

END
GO

The CREATE TABLE supports one application requirement: Store the current balance for each account.

The INSERT statements set up accounts 100 through 900 to be used in examples that will be numbered 1 through 9.

The CREATE TRIGGER supports another application requirement: Flag each row as soon as the balance exceeds $5000, and preserve that flag setting even if the balance drops to $5000 or lower at a later point.


Step 2: Exercise the trigger.

2_update_cons_to_demonstrate_trigger.bat

"c:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe"^
-d mltest^
-i "s2_update_cons_to_demonstrate_trigger.sql"^
-I^
-P j68Fje9#fyu489^
-S localhost^
-U sa

PAUSE

s2_update_cons_to_demonstrate_trigger.sql

-- Example 1: One local transaction, one operation.

BEGIN TRANSACTION
UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '100'
COMMIT TRANSACTION
GO

-- Example 2: One local transaction, two operations.

BEGIN TRANSACTION
UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '200'
UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '200'
COMMIT TRANSACTION
GO

-- Example 3: Two local transactions, one operation each.

BEGIN TRANSACTION
UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '300'
COMMIT TRANSACTION
GO

BEGIN TRANSACTION
UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '300'
COMMIT TRANSACTION
GO

The script above contains three examples:
  • Example 1 performs one single operation in a single transaction to cause the account balance to exceed $5000 and remain there.

  • Example 2 performs two operations in a single transaction to cause the account balance to exceed $5000 and then fall back down to $5000.

  • Example 3 does the same thing, but it uses a separate transaction for each operation.


Step 3: Show what the trigger did.

3_display_cons.bat

"c:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe"^
-d mltest^
-e^
-q "SELECT * FROM account_balance ORDER BY account_number"^
-I^
-P j68Fje9#fyu489^
-S localhost^
-U sa

In all three examples the flag should be set to 'Y' and left there... and the consolidated database confirms that:

SELECT * FROM account_balance ORDER BY account_number
account_number amount limit_exceeded
-------------- ----------------- --------------
100 6000.00 Y
200 5000.00 Y
300 5000.00 Y
400 .00 N
500 .00 N
600 .00 N
700 .00 N
800 .00 N
900 .00 N

(9 rows affected)
1>


Step 4: Add MobiLink support for SQL Server to the consolidated database.

4_run_syncmss_on_cons.bat

"c:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe"^
-d mltest^
-i "%SQLANY11%\MobiLink\setup\syncmss.sql"^
-I^
-P j68Fje9#fyu489^
-S localhost^
-U sa

PAUSE


Step 5: Create the MobiLink event scripts on the consolidated database.

5_set_up_MobiLink_scripts_on_cons.bat

"c:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe"^
-d mltest^
-i "s5_set_up_MobiLink_scripts_on_cons.sql"^
-I^
-P j68Fje9#fyu489^
-S localhost^
-U sa

PAUSE

s5_set_up_MobiLink_scripts_on_cons.sql

USE mltest
GO

SET QUOTED_IDENTIFIER ON
GO

---------------------------------------------------------------------
EXECUTE ml_add_column 'v1', 'account_balance', NULL, NULL
GO

EXECUTE ml_add_column 'v1', 'account_balance', 'account_number', NULL
GO

EXECUTE ml_add_column 'v1', 'account_balance', 'amount', NULL
GO

---------------------------------------------------------------------
EXECUTE ml_add_table_script 'v1', 'account_balance', 'upload_insert', NULL
GO

EXECUTE ml_add_table_script 'v1', 'account_balance', 'upload_insert', '
INSERT account_balance
( account_number,
amount )
VALUES ( {ml r.account_number},
{ml r.amount} )'
GO

---------------------------------------------------------------------
EXECUTE ml_add_table_script 'v1', 'account_balance', 'upload_update', NULL
GO

EXECUTE ml_add_table_script 'v1', 'account_balance', 'upload_update', '
UPDATE account_balance
SET amount = {ml r.amount}
WHERE account_number = {ml r.account_number}'
GO

---------------------------------------------------------------------
EXECUTE ml_add_table_script 'v1', 'account_balance', 'upload_delete', NULL
GO

EXECUTE ml_add_table_script 'v1', 'account_balance', 'upload_delete', '
DELETE account_balance
WHERE account_number = {ml r.account_number}'
GO

The SQL Server EXECUTE statements set up ordinary MobiLink event scripts for handling INSERT, UPDATE and DELETE operations that have been uploaded from the remote to the consolidated database.


Step 6: Set up the remote database.

6_create_and_set_up_remo.bat

"%SQLANY11%\bin32\dbinit.exe"^
remo.db

"%SQLANY11%\bin32\\dbspawn.exe"^
-f "%SQLANY11%\bin32\dbeng11.exe"^
-o dbeng11_log_remo.txt^
-x tcpip^
remo.db

"%SQLANY11%\bin32\dbisql.com"^
-c "ENG=remo;DBN=remo;UID=dba;PWD=sql"^
READ ENCODING Cp1252 "s6_create_and_set_up_remo.sql"

PAUSE

s6_create_and_set_up_remo.sql

CREATE TABLE account_balance (
account_number VARCHAR ( 10 ) NOT NULL,
amount DECIMAL ( 15, 2 ) NOT NULL DEFAULT 0,
PRIMARY KEY ( account_number ) )
GO

---------------------------------------------------------------------
INSERT account_balance ( account_number ) VALUES ( '400' );
INSERT account_balance ( account_number ) VALUES ( '500' );
INSERT account_balance ( account_number ) VALUES ( '600' );
INSERT account_balance ( account_number ) VALUES ( '700' );
INSERT account_balance ( account_number ) VALUES ( '800' );
INSERT account_balance ( account_number ) VALUES ( '900' );
COMMIT;

---------------------------------------------------------------------
CREATE PUBLICATION p1 (
TABLE account_balance ( account_number,
amount ) );

CREATE SYNCHRONIZATION USER "1" TYPE tcpip;

CREATE SYNCHRONIZATION SUBSCRIPTION TO p1 FOR "1";

The script above creates the account_balance table in the remote database and inserts 6 rows that exactly match the corresponding 6 rows in the consolidated database... except, of course, that the limit_exceeded column doesn't exist on the remote database.

The remaining CREATE statements define how the remote database will be synchronized with the consolidated database, and they establish the point in time after which changes will be captured for upload. In other words, the earlier INSERT operations won't be uploaded, but any subsequent changes will be.
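
As a quick sanity check (my sketch, not one of the demo scripts), you can look at the SYSSYNC system view on the remote database to see the new subscription and the progress offset dbmlsync will start from:

SELECT site_name, progress
FROM SYSSYNC;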


Step 7: Update the remote database to demonstrate MobiLink "Classic" mode.

7_update_remo_to_demonstrate_dbmlsync_without_tu.bat

"%SQLANY11%\bin32\dbisql.com"^
-c "ENG=remo;DBN=remo;UID=dba;PWD=sql"^
READ ENCODING Cp1252 "s7_update_remo_to_demonstrate_dbmlsync_without_tu.sql"

PAUSE

s7_update_remo_to_demonstrate_dbmlsync_without_tu.sql

-- Example 4: One synchronization without -tu, one transaction, one operation.

UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '400';
COMMIT;

-- Example 5: One synchronization without -tu, one transaction, two operations.

UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '500';
UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '500';
COMMIT;

-- Example 6: One synchronization without -tu, two transactions, one operation each.

UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '600';
COMMIT;

UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '600';
COMMIT;

SELECT * FROM account_balance ORDER BY account_number;

The script above repeats the three examples from earlier, except this time the updates are performed on the remote database:
  • Example 4 performs one single operation in a single transaction to cause the account balance to exceed $5000 and remain there.

  • Example 5 performs two operations in a single transaction to cause the account balance to exceed $5000 and then fall back down to $5000.

  • Example 6 does the same thing, but it uses a separate transaction for each operation.
Here's what the remote database looks like at this point:

account_number amount
--------------------------------
400 6000.00
500 5000.00
600 5000.00
700 0.00
800 0.00
900 0.00

(6 rows)


Step 8: Start the MobiLink server on the consolidated database.

8_run_mlsrv11.bat

"%SQLANY11%\bin32\mlsrv11.exe"^
-c "DSN=mltest;UID=sa;PWD=j68Fje9#fyu489"^
-fr^
-o mlsrv11_log_cons.txt^
-os 10M^
-ppv 60^
-vcefhimnopqrstu^
-zu+




Step 9: Run a synchronization session in MobiLink "Classic" mode.

9_run_dbmlsync_without_tu.bat

"%SQLANY11%\bin32\dbmlsync.exe"^
-c "ENG=remo;DBN=remo;UID=dba;PWD=sql"^
-e "adr='host=localhost';sv=v1"^
-o dbmlsync_log_remo_from_step_9.txt^
-os 10M^
-uo^
-vnorsu^
-x



You can run 3_display_cons.bat again to see what happened during the MobiLink "Classic" mode upload above, and the news is not good: Only one of the three uploaded rows 400, 500 and 600 has limit_exceeded set to 'Y' even though all three rows exceeded $5000 at one point or another before they were uploaded.

SELECT * FROM account_balance ORDER BY account_number
account_number amount limit_exceeded
-------------- ----------------- --------------
100 6000.00 Y
200 5000.00 Y
300 5000.00 Y
400 6000.00 Y
500 5000.00 N
600 5000.00 N
700 .00 N
800 .00 N
900 .00 N

(9 rows affected)
1>

That's what happens when you're not using transactional upload: MobiLink only sends the final version of each updated row, and so when the trigger fired on the consolidated database it did not set the flag for the two rows where the final amount was $5000.
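
In other words, by the time the upload reached the consolidated database, the upload_update script from Step 5 effectively executed nothing more than this for those two rows (a reconstruction, not actual MobiLink output):

-- The trigger fires once per row, sees amount = 5000.00, and since
-- 5000.00 > 5000.00 is false it leaves limit_exceeded set to 'N'.

UPDATE account_balance SET amount = 5000.00 WHERE account_number = '500'
UPDATE account_balance SET amount = 5000.00 WHERE account_number = '600'
GO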


Step 10: Update the remote database to demonstrate transactional upload.

10_update_remo_to_demonstrate_dbmlsync_tu.bat

"%SQLANY11%\bin32\dbisql.com"^
-c "ENG=remo;DBN=remo;UID=dba;PWD=sql"^
READ ENCODING Cp1252 "s10_update_remo_to_demonstrate_dbmlsync_tu.sql"

PAUSE

s10_update_remo_to_demonstrate_dbmlsync_tu.sql

-- Example 7: One synchronization with -tu, one transaction, one operation.

UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '700';
COMMIT;

-- Example 8: One synchronization with -tu, one transaction, two operations.

UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '800';
UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '800';
COMMIT;

-- Example 9: One synchronization with -tu, two transactions, one operation each.

UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '900';
COMMIT;

UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '900';
COMMIT;

SELECT * FROM account_balance ORDER BY account_number;

The script above repeats the same three examples yet again:
  • Example 7 performs one single operation in a single transaction to cause the account balance to exceed $5000 and remain there.

  • Example 8 performs two operations in a single transaction to cause the account balance to exceed $5000 and then fall back down to $5000.

  • Example 9 does the same thing, but it uses a separate transaction for each operation.
Here's what the remote database looks like at this point:

account_number amount
--------------------------------
400 6000.00
500 5000.00
600 5000.00
700 6000.00
800 5000.00
900 5000.00

(6 rows)


Step 11: Run a synchronization session using transactional upload.

11_run_dbmlsync_with_tu.bat

"%SQLANY11%\bin32\dbmlsync.exe"^
-c "ENG=remo;DBN=remo;UID=dba;PWD=sql"^
-e "adr='host=localhost';sv=v1"^
-o dbmlsync_log_remo_from_step_11.txt^
-os 10M^
-tu^
-uo^
-vnorsu^
-x



The news on the consolidated database is a bit better when transactional upload is used: Two of the three uploaded rows 700, 800 and 900 have limit_exceeded set to 'Y'. However, the two updates on row 800 were performed in a single transaction on the remote database, and MobiLink coalesces all the changes made to one row in one transaction even when transactional upload is used... so the trigger fires only once for that row, sees the final value $5000, and limit_exceeded remains 'N'.

SELECT * FROM account_balance ORDER BY account_number
account_number amount limit_exceeded
-------------- ----------------- --------------
100 6000.00 Y
200 5000.00 Y
300 5000.00 Y
400 6000.00 Y
500 5000.00 N
600 5000.00 N
700 6000.00 Y
800 5000.00 N
900 5000.00 Y

(9 rows affected)
1>

The situation is better for row 900 (limit_exceeded is set to 'Y') because the two updates were done in separate transactions, and that's how you have to change your application if you don't want operations combined when they are uploaded: you have to put different operations on the same row in different transactions.
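
Here's the fix in a nutshell; this is a sketch of the application-side change, using a hypothetical account '999' that isn't part of the demo:

-- Before: two operations on the same row in one transaction.
-- Even with dbmlsync -tu, MobiLink coalesces these into one final
-- row image, so the consolidated trigger sees only the final amount.
UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '999';
UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '999';
COMMIT;

-- After: the same two operations, one transaction each.
-- With dbmlsync -tu, each transaction is applied and committed
-- separately, so the trigger sees each intermediate amount.
UPDATE account_balance SET amount = amount + 6000.00 WHERE account_number = '999';
COMMIT;
UPDATE account_balance SET amount = amount - 1000.00 WHERE account_number = '999';
COMMIT;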

The next two steps perform cleanup when you're done, or when you want to start the demonstration over from scratch.

Step 12: Cleanup - Drop the consolidated database.

12_drop_cons.bat

PAUSE Are you sure you want to drop the consolidated database?

"c:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe"^
-d master^
-i "s12_drop_cons.sql"^
-I^
-P j68Fje9#fyu489^
-S localhost^
-U sa

PAUSE

s12_drop_cons.sql

USE master
GO

BEGIN TRY
DROP DATABASE mltest
END TRY
BEGIN CATCH
-- ignore error
END CATCH
GO


Step 13: Cleanup - Stop and drop the remote database.

13_stop_and_drop_remo.bat

PAUSE Are you sure you want to delete all the files?

"%SQLANY11%\bin32\dbstop.exe" -c "ENG=remo;DBN=remo;UID=dba;PWD=sql" -y

PAUSE Wait a bit, then

ERASE /F *.log
ERASE /F dbeng11_log_remo.txt
ERASE /F dbmlsync_log_remo*.txt
ERASE /F mlsrv.mle
ERASE /F mlsrv11_log_cons.txt
ERASE /F remo.db
ERASE /F remo.rid

PAUSE