Thursday, August 2, 2018

Controlling The SQL Anywhere Transaction Log Size

Overview

The problem of a rapidly growing SQL Anywhere transaction log filling up the hard drive is discussed. The Foxhound High-Frequency Insert-Delete database is used as the primary example, and a non-High-Frequency Insert-Delete benchmark database is used as a secondary example. The Introduction describes the problem, and several solutions are presented. The recommended solution for a Foxhound database is 2-plus-5; for a non-Foxhound database the best solution depends on the answers to questions like "Do you care about your data?"

The following topics are not discussed:
  • Databases which participate in SQL Remote replication, because the preservation of old transaction log entries is so important and controlling the file size is a whole different topic.
  • Databases which participate as MobiLink remote databases, for the same reason as SQL Remote.
  • Databases which participate in SQL Anywhere High Availability (mirror) and Read-Only Scale-Out setups, because... well... there just wasn't time :)
  • Databases with auditing = 'On' and audit_log = 'TRANSLOG' because the details are recorded in the transaction log.[26]
  • The brute force "Method Zero" (stop database, delete log, start database) because... well... that pretty much covers Method Zero :)


Introduction     (top)


One of the drawbacks to Foxhound is the tendency for the transaction log file to grow rapidly. This is especially true if Foxhound is used to monitor multiple target databases with many connections.
Background

Foxhound[1] is a third-party database monitor for SAP® SQL Anywhere®. It uses its own SQL Anywhere database to store samples of performance statistics gathered from multiple target databases.

The Foxhound database is a "High-Frequency Insert-Delete" database (sometimes called "High Throughput") because
  • it has a high rate of inserts (the continuous stream of sample data coming from the targets)

  • but the database file doesn't grow in size because one old row is deleted for every new row that is inserted.
At first, the Foxhound database and log files both grow in size at the same rate because new rows are inserted and nothing is deleted. Eventually the database file stops growing when the built-in purge process[25] starts deleting old data (after 30 days by default).

However, the purge actually makes the log grow faster because now the delete statements are logged as well as all the inserted data.
If nothing is done to control the size of the log it will fill the disk drive and Foxhound's own SQL Anywhere engine will crash. At that point you might have to use the dbsrv -f option[2] to get it going again.
How To Get Foxhound Running Again After Losing The Transaction Log.

The code shown below is from Foxhound 4, but the method's the same for Foxhound 3 and earlier.
  1. Edit the $start_foxhound4_engine.bat file in the C:\ProgramData\RisingRoad\Foxhound4\ folder.

  2. Insert the -f^ line shown below.
    [snip]
    "!SDIR!\!BIN!\dbspawn.exe"^
      -f "!SDIR!\!BIN!\!SPGM!"^
      -c 25p^
      -ch 50p^
      -cr-^
      -gk all^
      -gn 220^
      -gna 0^
      -n foxhound4^
      -o foxhound4_debug.txt^
      -oe foxhound4_debug_startup.txt^
      -on 1M^
      -qn^
      -sb 0^
      -ufd restart^
      -x tcpip^
      -xd^
      -xs http(port=80;maxsize=0;to=600;kto=600)^
      foxhound4.db^
      -f^
      -n f
    [snip]
    Tip: When using the caret ^ to insert line breaks, make it the last character on the line; do not follow it with any spaces.

  3. Run the $start_foxhound4_engine.bat file.
    It should start the foxhound4.db file, create a new foxhound4.log file, and then shut down.

  4. Now you should be able to start Foxhound normally.
    Re-edit the $start_foxhound4_engine.bat file to remove the extra -f.
There are several ways to control the size of the transaction log file, ranging from the extreme (#1) to the traditional (#5 and 6):
Method 1: Get rid of the log altogether with dblog -n

Method 2: Automatically truncate the log on every checkpoint with dbsrv -m

Method 3: Regularly truncate the log with dbbackup -xo

Method 4: Regularly truncate the log with BACKUP ... TRUNCATE;

Method 5: Regularly make a full backup with dbbackup -x

Method 6: Do #5, then make incremental log backups with dbbackup -n -t -x

Method 7: Use a combination of methods
These questions and answers may help you choose a method . . .
Questioning the methods:          1      2         3         4         5        6         7

(a) Is it easy to set up?         no     yes       no        no        no       no        no

(b) Is it easy to administer?     yes    yes       no        no        no       no        no

(c) Will the database work        yes    yes       yes       yes       yes      yes       yes 
    properly?

(d) Will the database run as      maybe  probably  probably  probably  maybe    maybe     maybe
    fast?

(e) Will automatic recovery       yes    yes       yes       yes       yes      yes       yes
    work after the SQL Anywhere 
    engine stops abruptly   
    (no final checkpoint)?

(f) Will you be able to restore   maybe  maybe     maybe     maybe     yes      yes       yes
    your database file after    
    it is lost or damaged?

(g) Will the restored database    no     no        no        no        maybe    probably  maybe 
    file include all the   
    latest updates?

(h) But... Do you really care about all of your data?

(i)        Do you have alternative methods of restoring lost data?
... and some of the answers are expanded in the following "How To" sections.


Method 1: Get rid of the log altogether with dblog -n     (top)


Trigger Warning: Even thinking about operating a database without a transaction log may cause anxiety in some readers :)

Before writing this article I had never even experimented with removing the transaction log... it was taboo! Running without a transaction log would kill performance!

It says so in the Help:[3] "When operating without a transaction log, SQL Anywhere performs a checkpoint at the end of every transaction which consumes considerable resources."

It even says so in this blog! (Number 19 in How To Make SQL Anywhere Slow[4])

Method 1 (getting rid of the log) is the most effective (and most dramatic) way of controlling the transaction log size: Stop SQL Anywhere from writing anything to the log at all.

It is also easy to administer because after setup there is absolutely no administration required.

Here's how to get rid of the transaction log, in the context of Foxhound 4 running on SQL Anywhere 17:

First, stop the database if it is running.
Tip: If you forget to stop the database, dblog will fail with "Unable to open database file "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" -- Permission denied" because the actions of turning the log off and on, or renaming it, or moving it are performed by changing data inside the physical database file.

Tip: If you crash the SQL Anywhere process rather than stopping the database cleanly, dblog may fail with "Database quit abnormally last time it was used. Database recovery must be performed before proceeding." Start the database again, and stop it cleanly (run dbstop, or right mouse - Shut down "yourdb" on the taskbar icon, or stop the service).
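For example, here's a minimal sketch of a clean command-line shutdown using dbstop; the user id and password are placeholders for credentials that are allowed to shut down your Foxhound engine:
Run dbstop against the Foxhound 4 engine (a sketch, not from the Foxhound setup)
"%SQLANY17%\Bin64\dbstop.exe"^
  -c "ENG=foxhound4; UID=<a user with shutdown authority>; PWD=<its password>;"^
  -y
PAUSE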
Second, run dblog -n[5] to update the foxhound4.db file (you might have to use Run as Administrator):
"%SQLANY17%\Bin64\dblog.exe"^
  -n^
  -o "C:\ProgramData\RisingRoad\Foxhound4\dblog_messages.txt"^
  "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db"
PAUSE

SQL Anywhere Transaction Log Utility Version 17.0.9.4803
"C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" was 
   using log file "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"
"C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" was using no log mirror file
Transaction log mirror filename not changed
"C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" is now using no log file
Transaction log starting offset is 01770229792
Transaction log current relative offset is 0021542337
Third, start the database.
SQL Anywhere won't complain about a missing transaction log... maybe it should, but it doesn't.

Also, the usual "Starting checkpoint" and "Finished checkpoint" messages are not displayed in the SQL Anywhere console display or included in the dbsrv17.exe -o diagnostic text file, even though checkpoints continue to happen.
Finally, you can delete the old transaction log file because it isn't being used any more.
ERASE /F "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"
PAUSE
You won't notice any difference in the way the database works, or in the ability of SQL Anywhere to perform automatic restart recovery, and you may or may not notice a difference in performance.

For example, here's what the automatic recovery process looks like after Task Manager was used to kill a Foxhound dbsrv17.exe process that wasn't using a transaction log:
SQL Anywhere Network Server Version 17.0.9.4803
...
Starting database "f" (C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db) at Mon Jul 23 2018 12:23
Database recovery in progress
    Last checkpoint at Mon Jul 23 2018 12:22
    Checkpoint log...
    Checkpointing...
Starting checkpoint of "f" (foxhound4.db) at Mon Jul 23 2018 12:23
Finished checkpoint of "f" (foxhound4.db) at Mon Jul 23 2018 12:23
Recovery complete
Database "f" (foxhound4.db) started at Mon Jul 23 2018 12:23
Database server started at Mon Jul 23 2018 12:23
...
Now accepting requests
---------------------------------------
Foxhound Version 4.0 by RisingRoad - Extended Edition 4.0.4740a built at 2017 10 02 14:29 
   in a SQL Anywhere 16.0.0.2127 database created at 2017 10 02 14:29
---------------------------------------
...
Foxhound now accepting requests.
The "Database recovery in progress" usually works like this[6]:
  1. Use the checkpoint log to restore the database back to the most recent checkpoint.

  2. Use the transaction log (redo log) to roll the subsequent committed and uncommitted changes forward to the point of failure.

  3. Use the rollback log to undo uncommitted changes.

  4. Take a checkpoint.
Step 2 can't be done when there's no transaction log, but that's a moot point because when there's no transaction log the most recent checkpoint coincides with the most recent commit... and there's nothing for Step 3 to do either.

Performance may be another issue, however. When a SQL Anywhere database doesn't have a transaction log, a checkpoint operation must be performed with (more or less) every commit. Here's what that looks like for the Foxhound 4 database (as displayed by a separate "dogfood" copy of Foxhound) running with and without a transaction log:

Figure 1: Foxhound With And Without A Transaction Log
  [screenshot] Foxhound 4 database operating WITH a transaction log
  [screenshot] Foxhound 4 database operating WITHOUT a transaction log


The Checkpoints[7] column (over on the right) shows the number of checkpoints performed in each 10-second sample. As expected, the values are higher (about 10 checkpoints every 10 seconds) when Foxhound is run WITHOUT a transaction log, but that doesn't seem to matter because . . .
  • The Throughput...[8] columns measure the rate of work being performed, and they remain the same with or without a log.

  • In particular, the Throughput... Commits column remains the same (about 2.5 per second).

  • The CPU column[9] also remains the same, another indication that Foxhound is getting the same work done at the same rate.

  • The Waiting Time[10] column shows the total time all current connections were blocked or forced to wait during the previous interval, and the reason it's higher without a log may be that all the checkpoints are causing delays... not big delays, however, because total Waiting Time never reaches 1 second in any 10 second interval.

Figure 1 shows that Foxhound performance did not suffer after getting rid of the transaction log.

Your mileage may vary, of course. The tests in Figure 1 showed Foxhound monitoring only one target database with 1,000 connections, not the ten or twenty or more targets that are not uncommon.

Also, the tests don't show what happens when Foxhound is overloaded by running on [cough] inadequate hardware; alas, that too is not uncommon :)

With other (non-Foxhound) databases, however, the penalty for running without a transaction log can be severe. Here's what that looks like for the benchmark application where 100 connections are running simple update-commit transactions as fast as SQL Anywhere can process them:

Figure 2: Non-Foxhound Database With And Without A Transaction Log
  [screenshot] Benchmark database operating WITH a transaction log
  [screenshot] Benchmark database operating WITHOUT a transaction log


Everything about the second test is bad news:
  • The Interval[11] between monitor samples (the left-most column) is longer than the nominal 10 seconds. Sometimes this happens because Foxhound is overloaded, but not this time: the benchmark database simply isn't responding fast enough.

  • The Response... Heartbeat[12] time went up by a factor of 10 (how long it took Foxhound to get SELECT * FROM DUMMY).

  • The Response... Sample time doubled (how long it took to get the performance statistics)... without a log, the sample time is longer than Foxhound's nominal inter-sample interval of 10 seconds.

  • All the Throughput... numbers fell off a cliff, in particular the rate of Throughput... Commits which dropped from over 15,000 commits per second to lower than 1,000.

  • The Waiting Time numbers went up a bit. They were bad before (because the benchmark database was struggling), now they're even worse.

  • The CPU percent column tells more of the story. Low CPU isn't always a good thing; in this case the lower CPU numbers mean the lack of a transaction log prevented the application from getting its work done...

  • ...because SQL Anywhere was too busy taking Checkpoints: more than 6,000 checkpoints every ten seconds, as opposed to zero checkpoints with a log.
    (This may be why the usual "Starting checkpoint" and "Finished checkpoint" messages are not displayed in the SQL Anywhere console display... whaddaya think? :)

Figure 2 shows that non-Foxhound benchmark performance suffered greatly after getting rid of the transaction log.

If you change your mind: How To Start Using A Transaction Log Again.

The code shown below is from Foxhound 4, but the method's the same for Foxhound 3 and earlier.
  1. First, stop Foxhound.

  2. Run dblog -t to update the foxhound4.db file:
    "%SQLANY17%\Bin64\dblog.exe"^
      -t "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"^
      -o "C:\ProgramData\RisingRoad\Foxhound4\dblog_t_messages.txt"^
      "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db"
    PAUSE
    
    SQL Anywhere Transaction Log Utility Version 17.0.9.4803
    "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" was using no log file
    "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" is using no log mirror file
    "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" is now 
       using log file "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"
    Transaction log starting offset is 0024948999
    Transaction log current relative offset is 0000968454
    That doesn't actually create the transaction log file... yet...

  3. Start Foxhound ...this is when the transaction log gets created.


What about restoring the database?     (top)


Trigger Warning: This section suggests that you might not care about your data :)

The really bad news about Method 1 (getting rid of the transaction log) is that no, even if you can restore the database file from a backup after it is lost or corrupted, you will not be able to "roll forward" the latest changes... because you don't have a transaction log.[13][18]

But... maybe you don't care about the latest changes. Maybe you don't even care about restoring the database. Maybe Foxhound is like the GPS in a car: if it breaks you replace it; you don't fret about losing a record of all your trips.

The GPS analogy works like this:
  • The GPS record of all your trips is analogous to the performance statistics that Foxhound gathers automatically.

  • The GPS record of all your favorite locations is analogous to the options and settings you manually enter into Foxhound.
For more about backing up your database even when it doesn't have a transaction log, see Method 7: Use a combination of methods.

Tip: You can restore the Foxhound database to its empty pre-installation state without having to reinstall Foxhound:
  1. Stop Foxhound.
  2. Delete C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db
  3. Delete C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log
  4. Copy the file foxhound4.db.4.0.4740.ORIGINAL_COPY from C:\ProgramData\RisingRoad\Foxhound4\setup to C:\ProgramData\RisingRoad\Foxhound4
  5. Rename it to foxhound4.db
  6. Start Foxhound.
  7. Enter your activation key.
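Steps 2 through 5 might look like this as a command file; this is just a sketch, and the paths assume the default install folder shown above:
REM Discard the current database and log, then put back the empty
REM pre-installation copy (run this only after stopping Foxhound)
SET FH=C:\ProgramData\RisingRoad\Foxhound4
ERASE /F "%FH%\foxhound4.db"
ERASE /F "%FH%\foxhound4.log"
REM COPY with a new target name does the "copy" and "rename" steps in one go
COPY "%FH%\setup\foxhound4.db.4.0.4740.ORIGINAL_COPY" "%FH%\foxhound4.db"
PAUSE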


Method 2: Automatically truncate the log on every checkpoint with dbsrv -m     (top)


This is an easy solution for both setup and administration: Simply add the -m option[14] to the SQL Anywhere server command line and all your problems vanish because a new empty transaction log is created and the old fat log is deleted every time SQL Anywhere performs a checkpoint... which is every few minutes or so.

Here's an excerpt showing how to modify the Foxhound 4 command files:
[snip]
"!SDIR!\!BIN!\dbspawn.exe"^
  -f "!SDIR!\!BIN!\!SPGM!"^
  -c 25p^
  -ch 50p^
  -cr-^
  -gk all^
  -gn 220^
  -gna 0^
  -n foxhound4^
  -o foxhound4_debug.txt^
  -oe foxhound4_debug_startup.txt^
  -on 1M^
  -sb 0^
  -ufd restart^
  -x tcpip^
  -xd^
  -xs http(port=80;maxsize=0;to=600;kto=600)^
  foxhound4.db^
  -m^
  -n f
[snip]
Here are the files that contain that code; you only need to change the ones you're going to use (like maybe the one that's highlighted):
Foxhound4 Shortcut in Windows 10 Menu       Command File in C:\ProgramData\RisingRoad\Foxhound4 
------------------------------------------  ---------------------------------------------------
Backup Foxhound Database                    $backup_foxhound4.bat
Start Foxhound Engine                       $start_foxhound4_engine.bat
Start Foxhound via Chrome                   $start_foxhound4_chrome.bat
Start Foxhound via Chrome - debug           $start_foxhound4_chrome_debug.bat
Start Foxhound via default browser          $start_foxhound4_default_browser.bat 
Start Foxhound via default browser - debug  $start_foxhound4_default_browser_debug.bat
Start Foxhound via Firefox                  $start_foxhound4_firefox.bat
Start Foxhound via Firefox - debug          $start_foxhound4_firefox_debug.bat
Start Foxhound via IE                       $start_foxhound4_ie.bat
Start Foxhound via IE - debug               $start_foxhound4_ie_debug.bat


Method 3: Regularly truncate the log with dbbackup -xo     (top)


Instead of automatically truncating the log with every checkpoint, you can do it manually with a dbbackup -xo[16] command line, or automatically using the Windows Task Scheduler.

The Help says it best: dbbackup -xo[15] "Deletes the current transaction log and starts a new one. This operation does not perform a backup; its purpose is to free up disk space."

The following command file
  • assumes you're using the 64-bit version of SQL Anywhere 17 to run Foxhound (as opposed to the 32-bit version and/or SQL Anywhere 16),

  • assumes the backup subfolder already exists in C:\ProgramData\RisingRoad\Foxhound4, and

  • uses the built-in Foxhound credentials UID=BACKER; PWD=SQL; to run dbbackup.
Run dbbackup -xo on Foxhound 4
"%SQLANY17%\bin64\dbbackup.exe"^
  -c "ENG=foxhound4; DBN=f; UID=BACKER; PWD=SQL;"^
  -o "C:\ProgramData\RisingRoad\Foxhound4\backup\dbbackup_xo_messages.txt"^
  -xo

PAUSE

SQL Anywhere Backup Utility Version 17.0.9.4803
Transaction log truncation completed
This method
  • is not as easy to set up as Method 2 (truncate log on checkpoint),

  • or as easy to administer as Method 2 or Method 1 (get rid of the log),
because you are responsible
  • for manually running the command file

  • or for setting it up in the Task Scheduler[17] and making sure it actually runs.
However, this method may have a slight performance advantage over Method 2 (truncate log on checkpoint) because you can schedule a once-a-night dbbackup -xo run rather than forcing the SQL Anywhere server to truncate the log during prime time. In fact, checkpoints happen more often during heavy update periods, which is exactly when you don't want extra file-handling operations.
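For example, here's a minimal sketch of scheduling a once-a-night run with schtasks, the Task Scheduler's command-line interface; the task name and the .bat file name are placeholders for wherever you saved the dbbackup -xo commands shown above:
Schedule a nightly dbbackup -xo run (a sketch)
SCHTASKS /Create^
  /TN "Truncate the Foxhound 4 transaction log"^
  /TR "C:\ProgramData\RisingRoad\Foxhound4\$truncate_foxhound4_log.bat"^
  /SC DAILY^
  /ST 02:00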


Method 4: Regularly truncate the log with BACKUP ... TRUNCATE;     (top)


You can run this exact SQL statement to yield the same result as Method 3 (run dbbackup -xo):
Run BACKUP ... TRUNCATE[19] on any database
BACKUP DATABASE DIRECTORY '' TRANSACTION LOG ONLY TRANSACTION LOG TRUNCATE;
The result may be the same, but the setup and administration are completely different; the following comparison applies to all usage of dbbackup and BACKUP, not just truncating the log:
dbbackup.exe Utility                                       BACKUP Statement
---------------------------------------------------------  -----------------------------------------------------
Connects to the server as a client                         Runs as a SQL statement inside the database
Requires an explicit connection string                     Implicitly uses the current connection
Usually run from a command line                            Rarely run from a command line using dbisql
Often run manually                                         Rarely run manually using dbisql
Often run via Task Scheduler                               Rarely run via Task Scheduler using dbisql
Rarely launched from inside the database via xp_cmdshell   Often run from inside a SQL EVENT or PROCEDURE
Rarely run from a scheduled SQL EVENT using xp_cmdshell    Often run from inside a scheduled SQL EVENT
Rarely launched from inside a client application           Easily run from inside a connected client application
Favored by procedural command-line fans                    Favored by declarative SQL fans

The Foxhound 4 database doesn't let you embed a BACKUP statement inside a PROCEDURE or scheduled EVENT[20], but you can run it via dbisql:
Run BACKUP ... TRUNCATE on Foxhound 4
"%SQLANY17%\bin64\dbisql.exe"^
  -c "ENG=foxhound4; DBN=f; UID=BACKER; PWD=SQL;"^
  -nogui^
  BACKUP DATABASE DIRECTORY '' TRANSACTION LOG ONLY TRANSACTION LOG TRUNCATE

IF ERRORLEVEL 1 ( ECHO *** ERROR - BACKUP TRUNCATE statement failed ) ELSE ( ECHO **** BACKUP TRUNCATE statement ran OK )

PAUSE

**** BACKUP TRUNCATE statement ran OK

Tip: Production-quality command-line scripts should have much better record keeping and error handling than shown here, especially if the scripts are run via Task Scheduler while you're at home asleep. A similar assertion can be made about BACKUP statements inside scheduled events; unlike the Task Scheduler, by default SQL Anywhere keeps no record at all of whether an event succeeded or failed... or failed to run at all.
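For example, here's a minimal sketch of the kind of record keeping that's meant, wrapped around the Method 4 dbisql command shown above; the truncate_history.txt file name is made up:
Run BACKUP ... TRUNCATE with a one-line history record per run (a sketch)
ECHO %DATE% %TIME% *** BACKUP TRUNCATE started >>"C:\ProgramData\RisingRoad\Foxhound4\backup\truncate_history.txt"

"%SQLANY17%\bin64\dbisql.exe"^
  -c "ENG=foxhound4; DBN=f; UID=BACKER; PWD=SQL;"^
  -nogui^
  BACKUP DATABASE DIRECTORY '' TRANSACTION LOG ONLY TRANSACTION LOG TRUNCATE

IF ERRORLEVEL 1 (
  ECHO %DATE% %TIME% *** ERROR - BACKUP TRUNCATE failed >>"C:\ProgramData\RisingRoad\Foxhound4\backup\truncate_history.txt"
) ELSE (
  ECHO %DATE% %TIME% *** BACKUP TRUNCATE OK >>"C:\ProgramData\RisingRoad\Foxhound4\backup\truncate_history.txt"
)

PAUSE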
When the BACKUP statement is used to perform a "real" backup (as opposed to just truncating the log), it performs better than dbbackup because it runs inside the server and doesn't pass all the backup data across the client-server boundary. However, neither Method 3 nor Method 4 moves any backup data, so neither one performs better than the other.
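For a non-Foxhound database that does allow it, the BACKUP ... TRUNCATE statement is more naturally run from a scheduled event than from dbisql. Here's a minimal sketch; the event and schedule names are made up, and the MESSAGE statement is there because (as noted in the Tip above) SQL Anywhere won't record the event's success or failure for you:
Schedule BACKUP ... TRUNCATE inside a non-Foxhound database (a sketch)
CREATE EVENT truncate_transaction_log
SCHEDULE sched_truncate_transaction_log
   START TIME '02:00 AM' EVERY 24 HOURS
HANDLER
BEGIN
   -- Truncate the transaction log without taking a backup.
   BACKUP DATABASE DIRECTORY ''
      TRANSACTION LOG ONLY
      TRANSACTION LOG TRUNCATE;
   -- Leave some evidence that the event actually ran.
   MESSAGE 'EVENT truncate_transaction_log: transaction log truncated' TO CONSOLE;
END;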


Method 5: Regularly make a full backup with dbbackup -x     (top)


If you love your old Foxhound samples and options, this is a good solution because it protects your data: The dbbackup -x command takes a full backup of the Foxhound database and log, renames the old log file and creates a new empty one.

The Foxhound4 - Tools - Backup Foxhound Database shortcut[21] can be run repeatedly to keep only the three latest backup copies... it's the "keep only the three latest backup copies" part that controls the size of the log as long as you run it more often than [cough] "never" :)

Here's an excerpt from the underlying command file in C:\ProgramData\RisingRoad\Foxhound4:

Excerpt from $backup_foxhound4.bat
[snip]
"!SDIR!\!BIN!\dbbackup.exe"^
  -c "ENG=foxhound4;DBN=f;UID=BACKER;PWD=SQL"^
  -o backup\generation_temp\backup_log.txt^
  -x^
  backup\generation_temp
[snip]

Tue 07/31/2018  8:49:28.23 ********************************* 
Tue 07/31/2018  8:49:28.23 *** Foxhound full backup started 
Tue 07/31/2018  8:49:28.23 *** Starting dbsrv16 if necessary... 
Tue 07/31/2018  8:49:33.16 *** Starting full dbbackup... 
SQL Anywhere Backup Utility Version 17.0.9.4803
 (57610 of estimated 57610 pages, 100% complete)
Transaction log truncated
Database backup completed
Tue 07/31/2018  8:49:40.12 *** Moving backup to the generation3 subfolder  
Tue 07/31/2018  8:49:40.12 *** Foxhound full backup OK 

( Warts and all... Note the imprecise reference to dbsrv16 in 
  the ECHO message... the test was run using SQL Anywhere 17 :)


Method 6: Do #5, then make incremental log backups with dbbackup -n -t -x     (top)


You can back up and truncate the log without backing up the database every time by combining Method 5 (full backup) with multiple runs of Method 6 (incremental log-only backups).

If you love your old Foxhound samples and options, this is a better solution because it increases the likelihood that you'll be able to restore recent changes after a disaster.

The Foxhound4 - Tools - Backup Foxhound Transaction Log shortcut can be run repeatedly to store multiple log backups together with the latest database backup.

Here's an excerpt from the underlying command file in C:\ProgramData\RisingRoad\Foxhound4:

Excerpt from $backup_foxhound4_log.bat
[snip]
"!SDIR!\!BIN!\dbbackup.exe"^
  -c "ENG=foxhound4;DBN=f;UID=BACKER;PWD=SQL"^
  -o backup\generation3\backup_log.txt^
  -n^
  -t^
  -x^
  backup\generation3\logs
[snip]

Tue 07/31/2018  9:11:42.80 ********************************* 
Tue 07/31/2018  9:11:42.80 *** Foxhound log backup started 
Tue 07/31/2018  9:11:42.82 *** Starting dbbackup... 
SQL Anywhere Backup Utility Version 17.0.9.4803
 (13 of estimated 13 pages, 100% complete)
Transaction log truncated
Database backup completed
Tue 07/31/2018  9:11:44.01 *** Foxhound log backup OK 
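Method 6 only pays off if the backed-up log files can actually be rolled forward during a restore. Here's a minimal sketch of that roll-forward step, not taken from Foxhound's documentation: after copying the backup foxhound4.db from backup\generation3 back into place, the dbsrv17 -ad recovery option applies the transaction logs found in the named folder. The folder names assume the layout used above; check the -ad documentation and rehearse the whole restore on a copy before relying on it.
Apply the backed-up transaction logs during a restore (a sketch)
"%SQLANY17%\Bin64\dbsrv17.exe"^
  "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db"^
  -ad "C:\ProgramData\RisingRoad\Foxhound4\backup\generation3\logs"
PAUSE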


Method 7: Use a combination of methods     (top)


One combination is Method 1 plus Method 5, which protects your intellectual property (all the work you did to set up the Foxhound options and settings) but not necessarily all of the samples:
  • Use Method 1 (get rid of the log) to prevent the log from growing too large, then

  • run Method 5 (full backup) every time you make changes to the Foxhound options and settings; note that Method 5 doesn't back up the transaction log because it doesn't exist.
Here's how to restore after the foxhound4.db file is lost or damaged:
  1. Delete the old foxhound4.db file if it still exists in C:\ProgramData\RisingRoad\Foxhound4.

  2. Replace it by copying the backup foxhound4.db from C:\ProgramData\RisingRoad\Foxhound4\backup\generation3.
    There's no foxhound4.log file to restore because a transaction log isn't being used.

  3. Start Foxhound.
    Foxhound will automatically restart sampling the target database(s).
    There will be a gap in the sample timestamps[22] around a message like this:
    Recent Samples  Interval
      1:01:53 PM     3m 53s   -- Foxhound stopped -- 
    
    Any changes to Foxhound options you made before the backup will be restored, but not the changes made afterwards.
Another combination is Method 2 plus Method 5, a slightly different way to accomplish the same thing:
Recommended for Foxhound
  • Use Method 2 (truncate log on checkpoint) to prevent the log from growing too large, then

  • run Method 5 (full backup) every time you make changes to the Foxhound options and settings.
The restore works the same way, except now there's a backup transaction log file involved:
  1. Delete the old foxhound4.db file if it still exists in C:\ProgramData\RisingRoad\Foxhound4.

  2. Replace it by copying the backup foxhound4.db from C:\ProgramData\RisingRoad\Foxhound4\backup\generation3.

  3. Delete the old foxhound4.log file if it still exists in C:\ProgramData\RisingRoad\Foxhound4.

  4. Replace it by copying the backup foxhound4.log from C:\ProgramData\RisingRoad\Foxhound4\backup\generation3.

  5. Start Foxhound.
    Foxhound will automatically restart sampling the target database(s).
    There will be a gap in the sample timestamps around a message like this:
    Recent Samples  Interval
      1:53:51 PM     2m 54s   -- Foxhound stopped -- 
    
    Any changes to Foxhound options you made before the backup will be restored, but not the changes made afterwards.
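Steps 1 through 4 might look like this as a command file; this is just a sketch, and the paths assume the default folders used above:
REM Put the backup database and transaction log back in place
REM (run this only while Foxhound is stopped)
SET FH=C:\ProgramData\RisingRoad\Foxhound4
ERASE /F "%FH%\foxhound4.db"
ERASE /F "%FH%\foxhound4.log"
COPY "%FH%\backup\generation3\foxhound4.db"  "%FH%\foxhound4.db"
COPY "%FH%\backup\generation3\foxhound4.log" "%FH%\foxhound4.log"
PAUSE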


Recommendations     (top)


For a Foxhound database, Method 2 (truncate log on checkpoint) is a good approach, followed closely by Method 1 (get rid of the log), but the recommended solution is 2-plus-5 (truncate log on checkpoint plus occasional backup).

For a non-Foxhound database, the right answer depends on your answers to questions like "Do you care about your data?".


External References     (top)


1. ^ Foxhound third-party database monitor for SAP® SQL Anywhere® [Foxhound 4 Home] [White Paper] [FAQ] [Help]
2. ^ The dbsrv -f option forces the database server to start after the transaction log has been lost [DCX Help] [Help Portal]
3. ^ Tip: Always use a transaction log [DCX Help] [Help Portal]
4. ^ How To Make SQL Anywhere Slow [SQLAnywhere Blog]
5. ^ Transaction Log utility (dblog): Administers the transaction log for a database [DCX Help] [Help Portal]
6. ^ The automatic recovery process after a system failure [DCX Help] [Help Portal]
7. ^ Checkpoints is the number of CHECKPOINT operations that have been executed by the server in the previous interval [Foxhound Help]
8. ^ Throughput, also known as bandwidth, is a measure of how much work the database has performed [Foxhound Help]
9. ^ CPU percent used during the preceding interval [Foxhound Help]
10. ^ Waiting Time is the total time all current connections were blocked or forced to wait during the previous interval [Foxhound Help]
11. ^ Interval time between the previous sample and this one [Foxhound Help]
12. ^ Response time, also known as latency, is a measure of how long it takes the database to respond to a single request [Foxhound Help]
13. ^ The transaction log is also called the forward log or the redo log [DCX Help] [Help Portal]
14. ^ -m database option truncates the transaction log when a checkpoint is performed [DCX Help] [Help Portal]
15. ^ Backup utility (dbbackup) [DCX Help] [Help Portal]
16. ^ Quick! How do I truncate the transaction log? [SQLAnywhere Blog]
17. ^ Task Scheduler in Windows 10: Start - Windows Administrative Tools - Task Scheduler [Wikipedia] [Microsoft Docs]
18. ^ Point-in-time recovery (PITR) offers you a way to restore a database to a moment in time [DCX Help] [Help Portal]
19. ^ BACKUP DATABASE: The transaction log can be renamed and restarted without completing a backup [DCX Help] [Help Portal]
20. ^ CREATE EVENT statement: Defines an event and its associated handler for automating scheduled actions [DCX Help] [Help Portal]
21. ^ Foxhound Introduction and Setup - Backup: It is a good idea to take regular backups of the Foxhound database [Foxhound Help]
22. ^ Recent Samples shows the date/time that Foxhound recorded each sample [Foxhound Help]
23. ^ Control transaction log file growth by ensuring that all your tables have compact primary keys [DCX Help] [Help Portal]
24. ^ Log Translation utility (dbtran): Translates a transaction log into a SQL script file [DCX Help] [Help Portal]
25. ^ The Foxhound purge process is automatically launched every 20 minutes, 24 hours a day [Foxhound Help]
26. ^ Auditing tracks database activity and saves it in the transaction log, the system event log or a user-defined file [DCX Help] [Help Portal]


Bonus Tip     (top)


Tip: If you run a lot of UPDATEs and DELETEs on a big table, the transaction log will grow much faster if the table doesn't have a PRIMARY KEY or UNIQUE constraint[23].

Here's what that looks like when dbtran[24] is used to display the transaction log; the "good table" (with a primary key) is shown first, followed by the "bad table" (without one):

Transaction log for the "good table" (with a PRIMARY KEY):

--CONNECT-1004-0000679704-DBA-2018-07-19 04:07
--BEGIN TRANSACTION-1004-0000679715
BEGIN TRANSACTION
go

--SQL-1004-0000679718
create table "DBA"."t"(
  "column1" integer not null,
  "column2" integer not null,
  "column3" varchar(100) not null,
  primary key("column1"),
  )
go

--COMMIT-1004-0000679883
COMMIT WORK
go

--BEGIN TRANSACTION-1004-0000679886
BEGIN TRANSACTION
go

--INSERT-1004-0000679985
INSERT INTO DBA.t(column1,column2,column3)
VALUES (1,2,'This is a big fAt string.')
go

--UPDATE-1004-0000680026
UPDATE DBA.t SET column2=222 WHERE column1=1
go

--UPDATE-1004-0000680043
UPDATE DBA.t SET column3='THIS IS A BIG FAT STRING.' WHERE column1=1
go

--DELETE-1004-0000680082
DELETE FROM DBA.t WHERE column1=1
go

--COMMIT-1004-0000680093
COMMIT WORK
go

Transaction log for the "bad table" (without a PRIMARY KEY):

--CONNECT-1004-0000679704-DBA-2018-07-19 04:07
--BEGIN TRANSACTION-1004-0000679715
BEGIN TRANSACTION
go

--SQL-1004-0000679718
create table "DBA"."t"(
  "column1" integer not null,
  "column2" integer not null,
  "column3" varchar(100) not null,
  )
go

--COMMIT-1004-0000679856
COMMIT WORK
go

--BEGIN TRANSACTION-1004-0000679859
BEGIN TRANSACTION
go

--INSERT-1004-0000679954
INSERT INTO DBA.t(column1,column2,column3)
VALUES (1,2,'This is a big fAt string.')
go

--UPDATE-1004-0000679995
UPDATE FIRST DBA.t SET column2=222 WHERE column1=1 AND column2=2 AND column3='This is a big fAt string.'
go

--UPDATE-1004-0000680042
UPDATE FIRST DBA.t SET column3='THIS IS A BIG FAT STRING.' WHERE column1=1 AND column2=222 AND column3='This is a big fAt string.'
go

--DELETE-1004-0000680111
DELETE FIRST FROM DBA.t WHERE column1=1 AND column2=222 AND column3='THIS IS A BIG FAT STRING.'
go

--COMMIT-1004-0000680152
COMMIT WORK
go
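If you want to reproduce this comparison on your own database, here's a minimal dbtran[24] sketch; the input and output file names are placeholders:
Translate a transaction log into a readable SQL script (a sketch)
"%SQLANY17%\bin64\dbtran.exe"^
  "C:\temp\mydatabase.log"^
  "C:\temp\mydatabase.sql"
PAUSE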


