Friday, October 30, 2009

Navigation to the webpage was canceled

For many years, SQL Anywhere has used Microsoft Compiled HTML Help files (*.chm) to deliver all the manuals in one easy-to-read, easy-to-search package on your desktop, one that doesn't make you wander through endless menus and wait for slow-loading web pages.

These files are easy to copy, too, and sometimes folks copy the Help files from machine to machine without using the official SQL Anywhere setup.exe... and they zip them up and send them by email as well.

Suddenly, however, that process has become fraught with peril: on the receiving end, when you open the *.chm file, you get "Navigation to the webpage was canceled" no matter which topic you navigate to:

Steven McDowell has the answer (and by the way, the problem DOES NOT AFFECT files installed by the official setup):

from Steven McDowell
to Breck Carter
date Wed, Oct 28, 2009 at 5:11 PM
subject Files downloaded from our FTP site can be "blocked" on XP/Vista/Win 7

Hi Breck,

FYI ... my investigation of your "blocked" CHMs seems to have determined what is going on.


It appears that a recent Windows security update may have further restricted the use of CHM files on XP and Vista. I don't know if other file types are affected by this, but I thought I would share the info since we often ZIP files and send them via FTP to customers and other staff.

Sneak peek: It actually appears to be an IE "feature". That's all you get here. See conclusion at bottom. :-)

It was reported that our CHM docs failed to open on an XP computer. The CHMs had been zipped here and copied via FTP. When they were unzipped, they were marked as "blocked" and the contents could not be viewed until each file was unblocked.

I have reproduced the problem on my Windows 7 system. I am an administrator but still have UAC turned on. Here is what I encountered:
* Unzipping to a local temp folder proceeds normally.
* When I try to open one of the CHMs, I get:
* If I leave the "Always ask" checkbox checked and click Open, the CHM opens, but the contents pane is blocked (with an unhelpful message). This behaviour is the same as trying to open a CHM from a network drive, but this is happening on a local drive.

* As long as the "Always ask" checkbox is checked, the contents pane is blocked for that file regardless of how many times you try to open it. Once I uncheck the box and click Open, not one but two things happen: 1) the contents pane is visible, and, 2) the dialog no longer appears. So something in the file's attributes has changed permanently.

* I picked another CHM file that had not been opened yet. I displayed its Properties. Note the Unblock button and the text to the left. This button and text are not displayed in the Properties for CHMs that were opened with the checkbox turned off. So, the "Always ask" checkbox has the dual purpose of suppressing the dialog and marking the file as unblocked.

Fortunately, it appears that using our doc install does not trigger this blocking problem. There is the possibility that other file types sent via ZIP (and probably other file copy methods) will also encounter these issues.

I did find one discussion (in the link below) that suggests it is IE that is noting that the files came from "the Internet" and marking them as blocked. I'm not sure how that works inside a ZIP file, but it appears to. Possibly non-IE transfers will not do this. CONFIRMED! I downloaded the same ZIP file via FTP using Firefox instead of IE and the files are not blocked!

Relevant URLs (Google "windows chm file blocked"):
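Under the hood, the "blocked" mark Steven describes is the mark-of-the-web: Windows records the download zone in an NTFS alternate data stream named Zone.Identifier attached to the file, and clicking Unblock deletes that stream. Here's a hedged sketch of doing the same thing programmatically (Windows/NTFS only; the file name is a placeholder):

```python
import os

def zone_stream(path):
    # NTFS alternate data streams are addressed as "file:streamname";
    # the download zone mark lives in the "Zone.Identifier" stream.
    return path + ":Zone.Identifier"

def unblock(path):
    # Deleting the stream is equivalent to clicking the Unblock button.
    try:
        os.remove(zone_stream(path))
        return True     # the file was blocked and is now unblocked
    except OSError:
        return False    # no Zone.Identifier stream: the file wasn't blocked

if __name__ == "__main__":
    print(zone_stream("demo.chm"))
```

This also explains the Firefox result: only downloads that pass through IE (or another Attachment-Manager-aware program) get the Zone.Identifier stream written in the first place.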


Wednesday, October 28, 2009

Comparing Database Schemas, Improved Edition

The original version of this article contained this false statement: "These dbunload steps ... only work on Version 10 and 11 databases ..."

Well, not exactly false, but easily fixed so the technique will work on earlier database files: Versions 5.5, 6, 7, 8 and 9. It turns out that while the Version 11 engine can't run a Version 9 database file, that restriction does not apply to the dbunload utility. In order to facilitate database migrations, dbunload contains the ability to read earlier database files... and that ability extends to ordinary dbunload-to-a-SQL-file steps.

And that bit of magic makes it possible to compare all versions of databases. Here's the full rewrite of the original article:

Question: How do I compare the schemas of two SQL Anywhere databases?

Answer, from Bruce Hay at Sybase iAnywhere: Check out the new "-no" option for dbunload in SQL Anywhere 11 (unload objects in name order, no data). It's intended for this purpose.

Here's how it works:
  • Step 1: Run dbunload against the first SQL Anywhere database to create a text file containing CREATE TABLE and other schema definitions. The new -no option tells dbunload to unload just the schema, not the data, and do it in name order so it will be easier to compare with another text file sorted in the same order.

  • Step 2: Run dbunload again, on the second SQL Anywhere database, to produce another text file.

  • Step 3: Run a text comparison utility like Compare It! to show the differences between the two text files.
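If you don't have Compare It!, any diff tool will do for Step 3. Here's a hedged sketch using Python's standard difflib in place of a commercial comparison utility (the file names and schema fragments are invented for illustration):

```python
import difflib

def schema_diff(sql_a, sql_b, name_a="demo10.sql", name_b="demo11.sql"):
    # Unified diff of two dbunload outputs; because -no unloads objects
    # in name order, matching objects line up and differences stay local.
    return list(difflib.unified_diff(
        sql_a.splitlines(), sql_b.splitlines(),
        fromfile=name_a, tofile=name_b, lineterm=""))

old = 'CREATE TABLE "DBA"."t1" (\n   "pkey" INTEGER NOT NULL\n);'
new = 'CREATE TABLE "DBA"."t1" (\n   "pkey" BIGINT NOT NULL\n);'
for line in schema_diff(old, new):
    print(line)
```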
[ Keep reading, the new stuff is coming soon... ]
Here's why using dbunload is a good approach:
  • The dbunload utility doesn't miss anything; it unloads every single schema object in the database... tables, views, procedures, triggers, everything.

  • The dbunload utility is always going to be more up-to-date than any other software except the SQL Anywhere engine itself. In other words, even PowerDesigner falls behind dbunload in terms of supporting SQL Anywhere syntax for new features.

  • Running dbunload on actual databases guarantees you will be comparing the current database schemas, not out-of-date information stored in separate model files.

  • The text file comparison view presents differences in terms of actual database syntax, not some funky abstract logical view used by data modelling tools.
[ OK, here comes the new stuff... ]

But Wait! It's Not Quite That Easy!

How you code the "Run dbunload" parts of Steps 1 and 2 depends on what version of SQL Anywhere was used to create each database:
  • If the database was created with Version 5.5, 6, 7, 8 or 9 you have to run the Version 11 copy of dbunload.exe without starting the database first. Use the -c "DBF=filespec.db;UID=xxx;PWD=yyy" connection string so that the Version 9 engine inside dbunload.exe will be used to start the database. Yes, this is magic!

  • If the database was created with Version 10 or 11, you do have to start the database first, using the Version 11 engine dbeng11.exe or dbsrv11.exe, and then run the Version 11 copy of dbunload.exe with the usual client-server connection string -c "ENG=aaa;DBN=bbb;UID=xxx;PWD=yyy".
That's not just magic, it's also complicated and confusing. That's why I've created three examples you can copy and edit: Compare Version 10 and 11 databases using the second technique described above, then Version 5.5 with Version 9 using the first technique, and finally Version 5.5 with Version 11 using both techniques. These examples all use the demo databases that come with the various versions of SQL Anywhere.
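In other words, only the connection string changes between the two techniques. This hedged Python sketch (the function and file names are my own, not part of dbunload) shows how the argument list differs by version; treat 5.5 as major version 5:

```python
def dbunload_args(created_with, db_file=None, eng=None, dbn=None,
                  uid="dba", pwd="sql", out_sql="schema.sql"):
    """Build a Version 11 dbunload argument list for a database file
    created with the given major version (5 through 11)."""
    if created_with <= 9:
        # V5.5 - V9 file: let dbunload's built-in engine start it directly.
        conn = "DBF=%s;UID=%s;PWD=%s" % (db_file, uid, pwd)
    else:
        # V10/V11 file: the database must already be running on dbeng11/dbsrv11.
        conn = "ENG=%s;DBN=%s;UID=%s;PWD=%s" % (eng, dbn, uid, pwd)
    return ["dbunload", "-no", "-c", conn, "-r", out_sql]

print(dbunload_args(9, db_file="asademo.db", out_sql="demo9.sql"))
print(dbunload_args(11, eng="demo11", dbn="demo", out_sql="demo11.sql"))
```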

Example 1: Compare Version 10 and 11 Database Schemas

This script is easily changed to work with two Version 10 databases, or two Version 11 databases.

REM Start V10 database via V11 engine
REM (the engine and utility paths below are assumed; adjust to your installation)

"%SQLANY11%\bin32\dbeng11.exe"^
  -n demo10^
  "C:\Documents and Settings\All Users\Documents\SQL Anywhere 10\Samples\demo.db"

REM Unload V10 database

"%SQLANY11%\bin32\dbunload.exe" -no^
  -c "ENG=demo10;DBN=demo;UID=dba;PWD=sql"^
  -r c:\temp\demo10.sql

REM Start V11 database

"%SQLANY11%\bin32\dbeng11.exe"^
  -n demo11^
  "C:\Documents and Settings\All Users\Documents\SQL Anywhere 11\Samples\demo.db"

REM Unload V11 database

"%SQLANY11%\bin32\dbunload.exe" -no^
  -c "ENG=demo11;DBN=demo;UID=dba;PWD=sql"^
  -r c:\temp\demo11.sql

REM Compare schemas

"C:\Program Files\Compare It!\wincmp3.exe"^
  c:\temp\demo10.sql^
  c:\temp\demo11.sql

Here's what Compare It! shows for one difference:

Compare It! can also produce a report in HTML format; just use File - Report... to get output like this:

Example 2: Compare Version 5.5 and 9 Database Schemas

This script is easily changed to work with two Version 9 databases, or any combination of Version 5.5 through 9.

REM Start and unload V5 database via V11 dbunload
REM (the dbunload.exe path below is assumed; adjust to your installation)

"%SQLANY11%\bin32\dbunload.exe" -no^
  -c "DBF=C:\Program Files\Sybase\SQL Anywhere 5.0\sademo.db;UID=dba;PWD=sql"^
  -r c:\temp\demo5.sql

REM Start and unload V9 database via V11 dbunload

"%SQLANY11%\bin32\dbunload.exe" -no^
  -c "DBF=C:\Program Files\Sybase\SQL Anywhere 9\asademo.db;UID=dba;PWD=sql"^
  -r c:\temp\demo9.sql

REM Compare schemas

"C:\Program Files\Compare It!\wincmp3.exe"^
  c:\temp\demo5.sql^
  c:\temp\demo9.sql


Here's a Compare It! screen shot:

Example 3: Compare Version 5.5 and 11 Database Schemas

This script is easily changed to compare a Version 5.5 through 9 database with a Version 10 or 11 database.

REM Start and unload V5 database via V11 dbunload
REM (the engine and utility paths below are assumed; adjust to your installation)

"%SQLANY11%\bin32\dbunload.exe" -no^
  -c "DBF=C:\Program Files\Sybase\SQL Anywhere 5.0\sademo.db;UID=dba;PWD=sql"^
  -r c:\temp\demo5.sql

REM Start V11 database

"%SQLANY11%\bin32\dbeng11.exe"^
  -n demo11^
  "C:\Documents and Settings\All Users\Documents\SQL Anywhere 11\Samples\demo.db"

REM Unload V11 database

"%SQLANY11%\bin32\dbunload.exe" -no^
  -c "ENG=demo11;DBN=demo;UID=dba;PWD=sql"^
  -r c:\temp\demo11.sql

REM Compare schemas

"C:\Program Files\Compare It!\wincmp3.exe"^
  c:\temp\demo5.sql^
  c:\temp\demo11.sql


This Compare It! screen shot shows where the schema starts to diverge; after this point, everything's different between the Version 5.5 and 11 demo databases:

Monday, October 26, 2009

DocCommentXchange 2.0 Rocks!

I will let Laura Nevin, Shengyu Bao, Andrew Arnott and Steven McDowell tell the story:

Sent: Friday, October 23, 2009 11:54 AM
Subject: DocCommentXchange 2.0 is now live

Hi Breck,

I think this is what you have been waiting for. It took way longer than expected, but I think the internal improvements were worth the wait.

Please email me if you have any problems or issues, or even comments about DCX.


----- Forwarded by Steven McDowell/SYBASE on 10/23/2009 11:53 AM -----

DocCommentXchange 2.0 is now running at

Main new features:
* based on SA11 web server and full text search (no Apache or PHP)
* preview area added to "Submit a comment" dialog; your comment is formatted and displayed while you type
* index is much faster now
* bookmarks for pages are more readable
Other notes:
* existing URLs remain valid
* existing comments are preserved
* existing userids remain valid
Steve, Laura, Andrew, Shengyu

Friday, October 23, 2009

How To Capture A Graphical Plan

(Updated August 3, 2012 to include Version 12, plus a screenshot of the Plan Viewer.)

Graphical plans are wonderful things. Graphical plans tell you everything you need to know about why a query is running slowly.

Even if you don't have a clue about how to read a graphical plan, it's still wonderful because you can capture it in a file and send it to someone who does know how to read it ( like Glenn Paulley ) or someone like [cough] who pretends to know :)

Seriously, let's say you have The Query From Hell, some SELECT that's taking forever. Maybe yesterday (on SQL Anywhere 9) it ran quickly but now (on SQL Anywhere 11) it runs like continental drift... or whatever, it's slow now.

Here are the steps for capturing the graphical plan, for Versions 8 through 12. At the bottom there's a "Tip" that is very important, but only for Version 10 and earlier.

How To Capture A Graphical Plan In Version 11 and 12

- If desired, run CALL sa_flush_cache() in dbisql.

- Type or paste your query into dbisql.

- Choose Tools - Plan Viewer.

- Select Statistics level: Detailed and node statistics.

- Select Cursor type: Asensitive.

- Select Update status: Read-only.

- Click "Get Plan".

- When the plan appears, click "Save As...".

- Use Save as type Execution plan (*.saplan).

Click on this screenshot to see it full size...

How To Capture A Graphical Plan In Version 10

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with full statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Click "OK".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

How To Capture A Graphical Plan In Version 9

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Click "Make Permanent".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

How To Capture A Graphical Plan In Version 8

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Uncheck "Show UltraLite plan" unless you want that.

- Click "Make Permanent".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

Tip for Version 10 and earlier: If you do use dbisql to execute a query and capture a graphical plan with statistics, use SQL - Get Plan (Shift+F5) instead of SQL - Execute (F5 or F9). If you use Execute, the query will be executed twice: once to determine the result set and a second time to capture the plan. This plan may be completely different from the one used the first time the query was executed, and not just because the cache contains the rows. For example, if the plan is for a DELETE, the first execution will actually delete the rows so the plan may be for the completely different "zero rows" case. If you use Get Plan (Shift+F5) the query will only be run once, and the plan will match.

Monday, October 19, 2009

"Search this blog, plus 5 others..."

The Google Custom Search Engine gadgets on this page have changed:

First of all, the original Search gadget has been changed from "Search this blog, plus Glenn Paulley's" to "Search this blog, plus 5 others": Glenn's plus the other four active SQL Anywhere blogs, by Jason Hinsperger, Chris Kleisath, Eric Farrar and Tom Slee.

Second, a new gadget "Search this blog" has been added. This one is an "ego search", just for me, so I can search my old posts.

Third, the additional gadget uses the new Google "Custom Search Element" which displays the result set in an overlay on the same page instead of jumping to a separate search page:

Yeah, that's confusing: "Custom Search Element" sounds so much like "Custom Search Engine"... perhaps Google is taking steps to preserve the earth's rapidly diminishing supply of acronyms.

Friday, October 16, 2009

Product Futures: Grow In-Memory Cache Faster

Maybe I've been living a sheltered life, but until recently I've never seen SQL Anywhere have a Cache Panic. Not once, not in production, not in testing, not even while testing Foxhound which is supposed to display the number of Cache Panics on the monitor page.

What is a Cache Panic? It happens when SQL Anywhere needs to allocate a page in the cache but can't find one. Sounds bad, but what with dynamic cache sizing and the temporary file, I'd never seen one.

At one point, I considered removing Cache Panics from the Foxhound monitor page to save valuable screen real estate (if you've seen the Foxhound monitor page, you know what "crowded" means when it comes to displays :) However, a wise voice counseled me to leave it alone, saying that it's an important statistic.

Not important enough, however, for me to include a Cache Panics threshold in the Foxhound email alerts feature... that was a bad decision, about to be fixed... because I've just seen my first Cache Panics, and believe me, they are bad:

What the Foxhound figures above don't show is how badly performance can be affected when Cache Panics are happening. When you start a SQL Anywhere database using the in-memory feature (dbsrv11 -im nw in this case), there is no temporary file, and it doesn't write anything to the database file, so SQL Anywhere has no place other than RAM to put a page of data. If the RAM cache is full, SQL Anywhere has to allocate more RAM.

It gets worse: If you don't specify the -c initial RAM cache size value when you start the server, and if you perform some intense operation like LOAD TABLE a million rows, SQL Anywhere will have to allocate RAM at a furious pace... all the while Cache Panicking when it can't find a page to allocate.

Who cares? If performance is important, you care. In this case adding -c 1200M to the dbsrv11 command line improved LOAD TABLE performance by 165%, from 2,878 rows per second to 7,640 rows per second.

Grow In-Memory Cache Faster

Here's what the server console log looked like when it had to grow the cache a zillion times, up to 10 times per second:

I. 10/14 15:29:15. Now accepting requests
I. 10/14 15:29:18. Cache size adjusted to 2340K
I. 10/14 15:29:23. Cache size adjusted to 3180K
I. 10/14 15:29:28. Cache size adjusted to 3392K
I. 10/14 15:29:33. Cache size adjusted to 3488K
I. 10/14 15:30:14. DIAG 2009-10-14 15:30:14.421 load table started
I. 10/14 15:30:14. Cache size adjusted to 3748K
I. 10/14 15:30:14. Cache size adjusted to 4012K
I. 10/14 15:30:14. Cache size adjusted to 4276K
I. 10/14 15:41:23. Cache size adjusted to 1151416K
I. 10/14 15:41:23. Cache size adjusted to 1151676K
I. 10/14 15:41:23. Cache size adjusted to 1151936K
I. 10/14 15:41:23. Cache size adjusted to 1151724K
I. 10/14 15:41:23. Cache size adjusted to 1152144K
I. 10/14 15:41:23. Cache size adjusted to 1152404K
I. 10/14 15:41:24. Cache size adjusted to 1152668K
I. 10/14 15:41:24. DIAG 2009-10-14 15:41:24.343 load table finished in 669.0 seconds
I. 10/14 15:42:23. Cache size adjusted to 1152388K
I. 10/14 15:42:27. Cache size adjusted to 1152800K

Most of the time, growing the cache step-by-step works just fine... but not when the database is growing rapidly and it's all stored in memory.

The Suggestion...

So here's the Product Futures suggestion: When a SQL Anywhere database is running in-memory, and a lot of Cache Panics are happening, and it's having to grow the cache ten times a second, then the dynamic cache sizing algorithm should pick some increment other than 64 pages when allocating more RAM.

That way, we won't have to fiddle around trying to pick exactly the right -c initial cache size.
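The arithmetic behind the suggestion: with a 4K page, 64 pages is only 256K per grow operation, so growing from roughly 4M to roughly 1.15G takes thousands of separate allocations, while a proportional policy would need only dozens. A hedged back-of-the-envelope model (the 10% policy is purely illustrative, not anything SQL Anywhere actually does):

```python
def grow_steps(start_kb, target_kb, step_fn):
    # Count how many grow operations it takes to reach the target cache size.
    size, steps = start_kb, 0
    while size < target_kb:
        size += step_fn(size)
        steps += 1
    return steps

fixed_64_pages = lambda size_kb: 64 * 4              # 64 pages x 4K = 256K per step
ten_percent    = lambda size_kb: max(256, size_kb // 10)

print(grow_steps(4012, 1152000, fixed_64_pages))     # thousands of grow operations
print(grow_steps(4012, 1152000, ten_percent))        # a few dozen
```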

...Or, Maybe Not

It's a little known fact (well, I didn't know it) that picking a reasonable value for -c is a good idea regardless of the cache growth performance issue described here.

For example, if you expect the cache to grow to some size between 1G and 2G, you should specify a value like -c 1G instead of letting it default to the tiny value (on Windows) of 2M. The server uses hash chains to find data in the cache, and the number of hash chains is determined by the initial cache size. A cache that starts at 2M will have fewer hash chains and those chains will grow very long (and slow) as the cache grows to 2G; performance will suffer, even if the Cache Panics problem is solved. And that performance problem will persist even after the cache finishes growing.

On the other hand, a cache that starts at -c 1G and grows to 2G will have more, shorter chains, and better performance. In other words, you don't have to pick an exact value for -c, but you should avoid letting it default to a stup..., er, wildly inappropriate value (like 2M) if you know it's going to grow to at least 1G.
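The effect is plain arithmetic: if the bucket count is fixed at startup and pages hash evenly, average chain length is pages divided by buckets. A hedged illustration (the bucket-sizing rule below is invented to make the point; the real sizing is internal to the server):

```python
PAGE = 4096
MB = 1024 ** 2
GB = 1024 ** 3

def avg_chain_length(cache_bytes, buckets):
    # With a fixed bucket count, chain length grows linearly with the cache.
    return (cache_bytes // PAGE) / buckets

# Hypothetical rule: one hash bucket per page of *initial* cache.
def buckets_for(initial_bytes):
    return max(1, initial_bytes // PAGE)

print(avg_chain_length(2 * GB, buckets_for(2 * MB)))  # started at -c 2M: long chains
print(avg_chain_length(2 * GB, buckets_for(1 * GB)))  # started at -c 1G: short chains
```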

And, of course, the Cache Panics problem won't be nearly so noticeable if you pick a half-decent value for -c. Plus, it goes away after the cache finishes growing.

So, maybe it's not worth fixing, just worth knowing about :)

Wednesday, October 14, 2009

SQL Anywhere Product Futures Discussion

Sybase forum users will recognize "SQL Anywhere Product Futures Discussion" as the title for the sybase.public.sqlanywhere.product_futures_discussion newsgroup on the NNTP server. It's available from your web browser, but I won't put a link here because every one of the several HTTP / HTML interfaces to the NNTP forums is, well, inadequate.

Besides, the NNTP forums are reaching the end of their usefulness, which is why I'm not going to post suggestions there any more... I'm going to post them here, starting now...

DB_PROPERTY ( 'BackupInProgress' )

There should be a database property which answers the question, "Is a database backup process in progress?" It would apply to the BACKUP statement and the dbbackup.exe utility.

The question arises, should the answer be as simple as yes/no, or more complex? The continuous live backup process comes to mind, where the answer would always be yes... maybe live backups should be excluded.

And what about database versus log-only backups? Should the response be db/log/both/no?

That's what "discussion" is for; if you have an opinion, post a comment here.

One reason for having this property is so database monitor programs, like Foxhound and the SQL Anywhere Monitor, could adjust their behavior when a backup is running; e.g., they could recognize that server I/O processing will change in behavior, and that checkpoints will be suspended.

Another reason might be so you can write code that starts a backup as long as one isn't running already, perhaps a second redundant process that makes sure a backup is taken even if some other backup process failed.
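If the property existed, the redundant-backup guard might look like this. Everything here is hypothetical by definition (the property is the thing being requested); the sketch just shows the intended decision, with the richer db/log/both/no form folded in:

```python
def should_start_backup(backup_in_progress):
    """Decide whether a redundant backup job should fire, given a
    hypothetical DB_PROPERTY('BackupInProgress') result."""
    value = str(backup_in_progress).strip().lower()
    # Simple form: 'Yes'/'No'; richer form: 'db'/'log'/'both'/'no'.
    return value in ("no", "")

print(should_start_backup("No"))    # no backup running: start the redundant one
print(should_start_backup("both"))  # a full backup is already running: stand down
```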

DB_PROPERTY ( 'BackupSuccessfulAt' )

This database property should answer the question, "When was the last backup taken?" It should return NULL if there never has been a successful backup, or the timestamp when the most recent successful backup finished.

In my opinion, this request doesn't need any justification... it's hard to think of any question more important than one involving database backups. And today, there is just no reliable way to answer the question, certainly not easily.

I'm guessing live backups don't count here, if only because they never finish unless there's a problem; e.g., the server crashes, so dbbackup loses connectivity... and a database property is useless if the server ain't running.

Should there be different properties for different backup types? e.g., 'DBBackupSuccessfulAt', 'LogBackupSuccessfulAt', 'FullBackupSuccessfulAt'... more discussion!

Monday, October 12, 2009

Global Exception Handler

Stephan.ahlf posted an excellent suggestion in this thread in the Google SQL Anywhere Web Development group:

"It would be a nice feature if developers could implement a Global Exception Handler. In case of errors in procedures invoked via a database event there is only a error message in the database console. With a Global Exception Handler I could develop a general error notification e.g via email. There is an event for after RAISEERROR. Does there exists somthing like that for native database exceptions?"
Eric Farrar thinks it's a good idea too, which is wonderful news; Eric is highly respected throughout the SQL Anywhere community and his word should carry some significant weight:
"Unfortunately, there is nothing inside of SQL Anywhere to allow you to specify a top-level exception handler that will be called on any native database exception. I do understand why this would be useful (especially in a web context), and I have passed this idea on to the appropriate engineers."
Exception handling plays a huge role in the web-based Foxhound database monitor, and I would love to be able to code an all-encompassing global exception handler to record errors for later diagnosis.

One problem facing a global exception handler, however, is providing an answer to the question "Where, in all the thousands of lines of SQL code, did this exception occur?"

That's a hard question to answer even for a local exception handler. One solution (used by Foxhound) is to maintain a "where am I?" variable as the code executes just so an exception handler can include it with the other information it saves and/or displays.

For a global exception handler, one could use a global CREATE VARIABLE to store the "where am I?" value, but I would rather not... I would rather have SQL Anywhere provide the source code line number and object name (procedure, event, service, etc).
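For what it's worth, the "where am I?" pattern is the same in any language. Here's a hedged Python sketch of the idea (all names invented): a breadcrumb variable updated as the code runs, reported by a single top-level handler:

```python
location = "startup"

def set_location(where):
    # Cheap breadcrumb: update before each significant step.
    global location
    location = where

def risky_job():
    set_location("step 1: load configuration")
    set_location("step 2: process rows")
    raise ValueError("boom")   # stand-in for a native database exception

def run_with_global_handler():
    try:
        risky_job()
    except Exception as exc:
        # The handler records the breadcrumb alongside the error itself.
        return "error '%s' at [%s]" % (exc, location)
    return "ok"

print(run_with_global_handler())   # error 'boom' at [step 2: process rows]
```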

Sunday, October 11, 2009

We Don't Need No Steenking Backup!

Let's all welcome Microsoft and T-Mobile's Sidekick service as the latest Database Deadpool winners!

Here's the reported time line for the T-Mobile disaster:

Friday, October 2, 2009

Microsoft, whose Danger subsidiary powers the [T-Mobile] Sidekick service, said it is "working urgently" to deal with the problems, which interrupted data service for most Sidekick users starting at 1:30 a.m. PDT on Friday.
Major outage hits T-Mobile Sidekick users October 6, 2009 11:57 AM PDT by Ina Fried
We are working to resolve the issue and hope to have things restored to normal soon.
Sidekick Service Issue (Oct 2nd) 10-02-2009, 08:40 PM

Sunday, October 4, 2009

...seems that Danger's data center has been on the fritz for a solid two days now with symptoms ranging from text message weirdness to dead address books and everything in between, up to and including a completely unusable experience. Frantic calls to T-Mobile are resulting in comped bills for some subscribers, so if you're affected, you might want to give 'em a ring.
Danger's server woes leave Sidekicks in the lurch by Chris Ziegler posted Oct 4th 2009 at 4:48PM

Monday, October 5, 2009

Users of T-Mobile's Sidekick smartphones spent much of the weekend without data service, and many are still waiting for service to be restored.
T-Mobile: Danger Working to Resolve Sidekick Data Outage Monday October 5, 2009

Thursday October 8, 2009

We sincerely apologize for the inconvenience, and appreciate your patience as Danger works hard, around-the-clock to resolve this issue. T-Mobile will continue to keep you updated as we learn more. Thank you!
Sidekick Service Issue 10-08-2009, 04:22 PM

Saturday, October 10, 2009

The massive data failure at Microsoft's Danger subsidiary threatens to put a dark cloud over the company's broader "software plus services" strategy.
Sidekick outage casts cloud over Microsoft October 10, 2009 3:26 PM PDT by Ina Fried
...this is shaping up to be one of the biggest disasters in the history of cloud computing.
T-Mobile: we probably lost all your Sidekick data by Chris Ziegler posted Oct 10th 2009 at 3:45PM
Perez Hilton loses 2000 contacts in his Sidekick. 2000 people to meet in Griffith Park for biggest group hug ever.
John Mayer
Regrettably, based on Microsoft/Danger's latest recovery assessment of their systems, we must now inform you that personal information stored on your device - such as contacts, calendar entries, to-do lists or photos - that is no longer on your Sidekick almost certainly has been lost as a result of a server failure at Microsoft/Danger.
This is a code red cloud disaster.
Cloud Goes Boom, T-Mo Sidekick Users Lose All Data Posted by Eric Zeman, Oct 10, 2009 08:22 PM
This latest large-scale publicized data loss will surely lead to managers everywhere forwarding a link to the story to their IT departments asking "what are we doing so that this doesn't happen to us." ...
If you read almost any technology website or newspaper, you could be forgiven for thinking that "The Cloud" solves everything.
In the Danger case, it appears from initial speculation that the data was lost because they attempted to upgrade a storage array without backing it up first. Here is a case of smart and rational people who do this for a living at one of the best companies in the world, and they didn't even bother making a backup.
Letting Data Die A Natural Death Nik Cubrilovic Saturday, October 10, 2009; 8:20 PM

Sunday, October 11, 2009

Microsoft/Danger has stated that they cannot recover the data but are still trying.
Server Failure Destroys Sidekick Users' Backup Data Posted by timothy on Sunday October 11, @05:29AM

Wednesday, October 7, 2009

How To Make Sure Your DatabaseStart Event Finishes First

If you have scheduled events in your database, and those events contain logic that depends on the DatabaseStart event finishing beforehand, it's up to you to make sure your scheduled events don't start running too early.

For example, let's say you have critical initialization logic in your DatabaseStart event, and your scheduled events contain other logic that assumes the initialization process is complete before they run. Since every event runs on its own connection, and connections run asynchronously, there's no guarantee that the DatabaseStart event will finish before your scheduled events start firing.

One strategy is to add more logic to each scheduled event to check if the DatabaseStart event has finished running, and to immediately RETURN if DatabaseStart has not finished.

Here's one way to implement that strategy:

  • Add code to the end of the DatabaseStart event to record the CURRENT TIMESTAMP in a single-row table.

  • Add code to the beginning of each scheduled event to check if the SQL Anywhere server startup timestamp is later than the timestamp recorded by the DatabaseStart event.

  • If the server startup timestamp is later than the DatabaseStart timestamp, then RETURN from the scheduled event.
If the server startup timestamp is later than the DatabaseStart timestamp, then the DatabaseStart timestamp must have been recorded by a previous execution of the DatabaseStart event during a previous server startup... which in turn implies the DatabaseStart event for the current server startup hasn't finished executing.

Eventually, the DatabaseStart event will finish, and the next time the scheduled event fires it won't execute the RETURN.

Here are some excerpts from the Foxhound database monitor; first, a single-row table contains the DatabaseStart timestamp:

CREATE TABLE rroad_database_start_blocker (
   one_row_only INTEGER NOT NULL DEFAULT 1 PRIMARY KEY
                CHECK ( one_row_only = 1 ),
   started_at   TIMESTAMP NOT NULL );

CREATE TRIGGER trd_one_row_only
   BEFORE DELETE ON rroad_database_start_blocker
   FOR EACH ROW
BEGIN
   ROLLBACK TRIGGER WITH RAISERROR 99999
      'Do not attempt to delete from rroad_database_start_blocker';
END;

INSERT rroad_database_start_blocker VALUES ( DEFAULT, CURRENT TIMESTAMP );

Second, the DatabaseStart event updates DatabaseStart timestamp after all the initialization logic is complete:

CREATE EVENT rroad_database_start TYPE DatabaseStart HANDLER
BEGIN

   ... critical initialization code ...

   -- Block other events from proceeding until this event finishes.

   UPDATE rroad_database_start_blocker
      SET rroad_database_start_blocker.started_at = CURRENT TIMESTAMP
    WHERE rroad_database_start_blocker.one_row_only = 1;

END;



Finally, a scheduled event starts by checking the server startup timestamp against the DatabaseStart timestamp:

CREATE EVENT rroad_monitor_auto_restart
SCHEDULE ... -- schedule clause omitted from this excerpt
HANDLER
BEGIN

   -- Wait for the rroad_database_start event to finish before proceeding.

   IF COALESCE ( PROPERTY ( 'StartTime' ), '' ) = '' THEN

      -- The SQL Anywhere engine hasn't yet filled in the StartTime property, so it can't be
      -- compared with rroad_database_start_blocker.started_at, so it's not safe to proceed
      -- with this event yet.

      RETURN; -- try again when this event fires again

   END IF;

   IF CAST ( PROPERTY ( 'StartTime' ) AS TIMESTAMP )
         > ( SELECT rroad_database_start_blocker.started_at
               FROM rroad_database_start_blocker
              WHERE rroad_database_start_blocker.one_row_only = 1 ) THEN

      -- The DatabaseStart event hasn't yet updated rroad_database_start_blocker.started_at so
      -- this event cannot proceed.

      RETURN; -- try again when this event fires again

   END IF;

   ... other logic that depends on the initialization process being complete ...

END;


Note that PROPERTY ( 'StartTime' ) might not even be filled in by the time a scheduled event starts executing, so there are two reasons to RETURN early.
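The guard in the scheduled event boils down to two timestamp checks. This hedged Python model (names invented; ISO-format timestamp strings compare correctly as text) mirrors the decision:

```python
def safe_to_proceed(server_started_at, dbstart_recorded_at):
    """Proceed only when the DatabaseStart event has re-recorded its
    timestamp since the current server startup."""
    if not server_started_at:
        # StartTime property not filled in yet: reason #1 to RETURN.
        return False
    if server_started_at > dbstart_recorded_at:
        # Recorded timestamp is from a previous startup: reason #2 to RETURN.
        return False
    return True

print(safe_to_proceed("2009-10-07 09:00:00", "2009-10-07 09:00:05"))  # True
print(safe_to_proceed("2009-10-07 09:00:00", "2009-10-06 23:59:59"))  # False
print(safe_to_proceed("", "2009-10-06 23:59:59"))                     # False
```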