Monday, December 31, 2018

WITH FILE NAME LOGGING runs faster, but...

The LOAD TABLE statement has been a feature of SQL Anywhere since the early days, and the Version 5.5 Help was enthusiastic about it: "LOAD TABLE is more efficient than the ISQL statement INPUT and can be called from any client application."

The official enthusiasm had waned a bit by Version 6: "LOAD TABLE is intended solely for fast loading of large amounts of data. It is not intended for routine use in applications."

The Dark Side of LOAD TABLE was introduced in Version 7: "Inserts are not recorded in the log file. Thus, the inserted rows may not be recovered in the event of a crash. In addition, the LOAD TABLE statement should never be used in databases involved in SQL Remote replication or databases used as MobiLink clients because these technologies replicate changes through analysis of the log file."

Versions 8, 9 and 10 were positively shrill on the subject: "Inserts are not recorded in the log file, raising the risk that data will be lost in the event of a crash and making this statement unusable with SQL Remote or with MobiLink remote databases."

Version 11, however, dropped the scary rhetoric and introduced three new LOAD TABLE clauses: WITH FILE NAME LOGGING, WITH ROW LOGGING and WITH CONTENT LOGGING.

WITH FILE NAME LOGGING is the default in Versions 11 through 17, and it is the name given to the way LOAD TABLE worked in all the earlier versions:

  • It records the file specification of the input file in the transaction log file, not the individual row INSERT statements.

  • It is also the fast method because it doesn't have to write millions of records to the log, and

  • it makes the log file much smaller.
The bad news is that
  • SQL Remote stops working if you use LOAD TABLE WITH FILE NAME LOGGING on any table involved in replication,

  • MobiLink stops working if you do the same thing to any remote table involved in synchronization, and

  • any database restart recovery that needs to apply transaction log data must have access to the original input file in its original location.
If you don't use SQL Remote or MobiLink then you don't care about the first two problems, and if you do use them you already know what the problems with LOAD TABLE are.

The third problem (LOAD TABLE versus restart recovery) is less well understood.

For example, here's what an ordinary LOAD TABLE FROM 'file' statement
LOAD TABLE ttt FROM 'C:\\DATA\\TEST\\ttt.TXT';
looks like in the SQL Anywhere 17 transaction log:
--CHECKPOINT-0000-05896335817-2019-01-04 08:32:43.559
--CONNECT-1004-05896335806-dba-2019-01-04 08:32
--ROLLBACK-1004-05896335851
ROLLBACK WORK
go
--CHECKPOINT-0000-05896335867-2019-01-04 08:38:26.067
--CONNECT-1007-05896335856-dba-2019-01-04 08:38
--BEGIN TRANSACTION-1007-05896335901
BEGIN TRANSACTION
go
--BEGIN LOAD TABLE-1007-05896335904: load into table "dba"."ttt" using file 'C:\\TEMP\\ttt.TXT'
--END LOAD TABLE-1007-05896335966
--ROLLBACK-1007-05896335969
ROLLBACK WORK
go
--CHECKPOINT-0000-05896335972-2019-01-04 08:39:05.645
--BEGIN TRANSACTION-1007-05896336006
BEGIN TRANSACTION
go
--BEGIN LOAD TABLE-1007-05896336009: load into table "dba"."ttt" using file 'C:\\DATA\\TEST\\ttt.TXT'
--SQL-1007-05896336077
begin
  set temporary option "conversion_error" = 'On';
  set temporary option "string_rtruncation" = 'On';
  set temporary option "date_order" = 'YMD';
  set temporary option "nearest_century" = '50';
  set temporary option "default_timestamp_increment" = '1';
  set temporary option "time_zone_adjustment" = '-300';
  load into table "dba"."ttt" using file 'C:\\DATA\\TEST\\ttt.TXT' encoding 'windows-1252';
  set temporary option "conversion_error" = ;
  set temporary option "string_rtruncation" = ;
  set temporary option "date_order" = ;
  set temporary option "nearest_century" = ;
  set temporary option "default_timestamp_increment" = ;
  set temporary option "time_zone_adjustment" = ;
end
go
--END LOAD TABLE-1007-05896336793
--COMMIT-1007-05896336796-2019-01-04 08:41:03.742
COMMIT WORK
go
--START CHECKPOINT-0000-05896336807-2019-01-04 08:41:05.581
--FINISH CHECKPOINT-0000-05896336832-2019-01-04 08:41:26.851
--ROLLBACK-1007-05896336848
ROLLBACK WORK
go
--CHECKPOINT-0000-05896336853-2019-01-04 08:45:55.716
Let's say you do these four things:
  1. You make a full backup of your database,

  2. you run the LOAD TABLE ttt FROM 'C:\\DATA\\TEST\\ttt.TXT' as shown above,

  3. you back up the log file, and then

  4. you delete the data file C:\DATA\TEST\ttt.TXT.
Then, disaster strikes: You lose your current database.

Here's how you might try to recover your database:
  1. You copy your backup database file to replace the current database, and then

  2. you run dbsrv17 -ad to apply the backup log file to the restored database file.
Sadly, the recovery fails with Cannot access file 'C:\DATA\TEST\ttt.TXT':
I. 01/04 09:18:32. Starting database "ddd17" (C:\$ web\blog\20181231 WITH FILE NAME LOGGING runs faster, 
   but\ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:32. Performance warning: Database file "C:\$ web\blog\20181231 WITH FILE NAME LOGGING runs faster, 
   but\ddd17.db" consists of 213 disk fragments
I. 01/04 09:18:32. Database recovery in progress
I. 01/04 09:18:32.     Last checkpoint at Fri Jan 04 2019 08:29
I. 01/04 09:18:32.     Checkpoint log...
I. 01/04 09:18:33.     Transaction log: C:\$ web\blog\20181231 WITH FILE NAME LOGGING runs faster, but\190104AB.LOG...
I. 01/04 09:18:33. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:33. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:33. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:33. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:33. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:33. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:33.     Transaction log: C:\$ web\blog\20181231 WITH FILE NAME LOGGING runs faster, but\ddd17.log...
I. 01/04 09:18:33. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:34. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:34. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:34. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:34.     Parallel recovery enabled
I. 01/04 09:18:34. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
I. 01/04 09:18:34. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:18
E. 01/04 09:18:34. *** ERROR *** Assertion failed: 100948 (17.0.9.4882)
Failed to redo a database operation (id=4, page_no=0x1, offset=0x128) - Error: Cannot access file 'C:\DATA\TEST\ttt.TXT' 
   -- The system cannot find the file specified.
Here are three alternatives to avoid that problem:
  1. Take a full backup right after each LOAD TABLE WITH FILE NAME LOGGING statement,

  2. preserve all the input files until they are no longer needed for recovery... which is after you eventually do take a full backup, or

  3. use LOAD TABLE WITH ROW LOGGING instead (or WITH CONTENT LOGGING).
Alternative 3's probably the easiest if you have a lot of LOAD TABLE files and they're not very big; just run with ROW LOGGING and throw the files away after they're loaded.

If you regularly load millions of rows, however, you might need the speed of Alternatives 1 and 2. Here's how FILE NAME LOGGING and ROW LOGGING compare:
              LOAD TABLE              LOAD TABLE 
              WITH FILE NAME LOGGING  WITH ROW LOGGING
              ======================  ================ 
Elapsed time  118 seconds             214 seconds      - much slower
Log growth    0 pages                 481,295 pages    - huge growth
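To put that log growth in perspective, here's a back-of-envelope sketch. It assumes the default 4 KB database page size, which is an assumption; the test output reports pages, not the page size actually in use:

```python
# Rough size of the extra transaction log produced by WITH ROW LOGGING,
# assuming 4 KB pages (an assumption: the test measured pages, not bytes).
log_growth_pages = 481_295
page_size_bytes = 4 * 1024

log_growth_bytes = log_growth_pages * page_size_bytes
log_growth_gib = log_growth_bytes / 2**30

print(f"{log_growth_bytes:,} bytes, about {log_growth_gib:.1f} GiB of extra log")
```

Call it somewhere around 2 GB of log for one load, versus zero for WITH FILE NAME LOGGING.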
Here are the test scripts:

LOAD TABLE ttt FROM 'C:\\DATA\\TEST\\ttt.TXT';
BEGIN
DECLARE @start_at TIMESTAMP;
DECLARE @seconds  BIGINT;
DECLARE @pages    BIGINT;
DECLARE @growth   BIGINT;

-- Test A: LOAD TABLE

SET @start_at = CURRENT TIMESTAMP;
SET @pages    = DB_EXTENDED_PROPERTY ( 'FileSize', 'translog' );
MESSAGE STRING ( CURRENT TIMESTAMP, ' Test A starting; @@VERSION = ', @@VERSION ) TO CONSOLE;
LOAD TABLE ttt FROM 'C:\\DATA\\TEST\\ttt.TXT';
SET @seconds = DATEDIFF ( SECOND, @start_at, CURRENT TIMESTAMP );
SET @growth  = DB_EXTENDED_PROPERTY ( 'FileSize', 'translog' ) - @pages;
MESSAGE STRING ( CURRENT TIMESTAMP, ' Test A seconds = ', @seconds, ', log pages growth = ', @growth ) TO CONSOLE;
END;

I. 01/04 08:38:26. 2019-01-04 08:38:26.035 Test A starting; @@VERSION = 17.0.9.4882
I. 01/04 08:38:26. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 08:38
I. 01/04 08:38:26. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 08:38
I. 01/04 08:39:05. 2019-01-04 08:39:05.598 Test A starting; @@VERSION = 17.0.9.4882
I. 01/04 08:39:05. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 08:39
I. 01/04 08:39:05. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 08:39
I. 01/04 08:41:03. 2019-01-04 08:41:03.742 Test A seconds = 118, log pages growth = 0
I. 01/04 08:41:05. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 08:41
I. 01/04 08:41:05. Cache size adjusted to 2270460K
I. 01/04 08:41:10. Cache size adjusted to 2256544K
I. 01/04 08:41:26. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 08:41

LOAD TABLE ttt FROM 'C:\\DATA\\TEST\\ttt.TXT' WITH ROW LOGGING;
BEGIN
DECLARE @start_at TIMESTAMP;
DECLARE @seconds  BIGINT;
DECLARE @pages    BIGINT;
DECLARE @growth   BIGINT;

-- Test B: LOAD TABLE WITH ROW LOGGING

SET @start_at = CURRENT TIMESTAMP;
SET @pages    = DB_EXTENDED_PROPERTY ( 'FileSize', 'translog' );
MESSAGE STRING ( CURRENT TIMESTAMP, ' Test B starting; @@VERSION = ', @@VERSION ) TO CONSOLE;
LOAD TABLE ttt FROM 'C:\\DATA\\TEST\\ttt.TXT' WITH ROW LOGGING;
SET @seconds = DATEDIFF ( SECOND, @start_at, CURRENT TIMESTAMP );
SET @growth  = DB_EXTENDED_PROPERTY ( 'FileSize', 'translog' ) - @pages;
MESSAGE STRING ( CURRENT TIMESTAMP, ' Test B seconds = ', @seconds, ', log pages growth = ', @growth ) TO CONSOLE;
END;

I. 01/04 09:41:29. 2019-01-04 09:41:29.472 Test B starting; @@VERSION = 17.0.9.4882
I. 01/04 09:41:29. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:41
I. 01/04 09:41:29. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:41
I. 01/04 09:45:03. Starting checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:45
I. 01/04 09:45:03. 2019-01-04 09:45:03.724 Test B seconds = 214, log pages growth = 481295
I. 01/04 09:45:04. Cache size adjusted to 2267588K
I. 01/04 09:45:19. Finished checkpoint of "ddd17" (ddd17.db) at Fri Jan 04 2019 09:45
I. 01/04 09:46:04. Cache size adjusted to 2255732K



Wednesday, November 14, 2018

Foxhound Consulting Done Remotely

SQL Anywhere performance and tuning consulting by a real person (me) is now available remotely, via email. Here's how it works:

  1. You run the Foxhound 4 Database Monitor to gather samples from your SQL Anywhere database,

  2. then you send me a copy (or a subset) of your Foxhound database

  3. which I look at for one hour

  4. and then I send you my observations and suggestions.



Yes, the first hour of consulting is free... and you can send me your entire foxhound4.db file if it fits in an email or Dropbox.

But . . . if your Foxhound database is huge . . .


If your foxhound4.db file is too big to send by email, you can use Foxhound's Adhoc Query feature to unload a smaller subset which will fit in an email:
  1. Download this SQL script to make this process easier: unload_samples.txt

  2. and save it in a folder like C:\ProgramData\RisingRoad\Foxhound4.

  3. Modify Foxhound to allow the script to run. To do this, add the dbsrv -gl all option to one of the Foxhound shortcut command files.

    For example, here's how to add -gl all to the $start_foxhound4_default_browser.bat file in C:\ProgramData\RisingRoad\Foxhound4:
    ...
    "!SDIR!\!BIN!\dbspawn.exe"^
      -f "!SDIR!\!BIN!\!SPGM!"^
      -c 25p^
      -ch 50p^
      -cr-^
      -gk all^
    -gl all^
      -gn 220^
      -gna 0^
      -n foxhound4^
      -o foxhound4_debug.txt^
      -oe foxhound4_debug_startup.txt^
      -on 1M^
      -sb 0^
      -ufd restart^
      -x tcpip^
      -xd^
      -xs http(port=8080;maxsize=0;to=600;kto=600)^
      foxhound4.db^
      -n f ...

  4. Run: Foxhound4 - Stop Foxhound Engine

  5. Run: Foxhound4 - Start Foxhound via default browser to put -gl all into effect.

  6. Run: Foxhound4 - Adhoc Query Foxhound Database via ISQL

  7. Run the unload_samples.txt script to gather a subset.

    For example, the following statements unload all the samples recorded since May 1 and write the files into C:\temp:
    READ unload_samples.txt;
    CALL unload_samples ( '2018-05-01' ); -- for more examples look inside unload_samples.txt
    

  8. Attach the files from C:\temp to a covering email and send it to me: Breck.Carter@gmail.com

That's it! ...you'll hear back from me soon.


Monday, October 22, 2018

Let's Be Blunt

SQL Anywhere really does do things the way they should be done. That's why the title of this blog has changed.

Another thing that's changed is the row of page links at the top: "Community" is gone because (in order to serve you better? :) the "SAP SQL Anywhere Community" has been deleted.

The Forum link is still there. That's where the real community has always been, and remains.

Two SQL Anywhere Help links remain: DCX because that's the only way to see Version 16 and earlier, and the SAP Help Portal because it works fairly well and is a lot faster than DCX.

Plus, there are two Foxhound links: one to the Help where all the "Performance Tips" live, and one to the White Paper that shows off some of the features.

The rest are gone because they were just clutter: use the Portal search instead.


Saturday, October 6, 2018

Characteristic Errors, Revision 8

A characteristic error is a mistake so easy to make it appears you're actively encouraged to make it by the very nature of the software you're using.

Here are the latest entries...

  1. SQL Anywhere: Forgetting to code an INTO @local_variable clause on a SELECT inside a deeply-nested PROCEDURE, thus accidentally returning an additional result set to a higher FROM call, then wondering why you get SQLCODE -866 (returned a result set with a different schema than expected), or SQLCODE -873 (returned multiple result sets), or both.

  2. SQL Anywhere: Expecting to see MESSAGE TO CLIENT output with the SQL Anywhere 17 ISQL GUI. Don't bother looking, it's virtually inaccessible. Use MESSAGE TO CONSOLE instead, or revert to the version 16 ISQL which works just fine with version 17 databases.





Here's the full list...
  1. SQL: Seeing too little data, or no data at all, because a predicate in the WHERE clause effectively turned your OUTER JOIN into an INNER JOIN.
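    For example, here's the trap in miniature, using SQLite from the Python standard library rather than SQL Anywhere (the behavior is the same in any dialect; the customer/orders tables are made up for illustration):

```python
# Demo: the same predicate behaves very differently in the ON clause
# versus the WHERE clause of a LEFT OUTER JOIN.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer ( id INTEGER PRIMARY KEY, name TEXT );
    CREATE TABLE orders   ( id INTEGER PRIMARY KEY, customer_id INTEGER, status TEXT );
    INSERT INTO customer VALUES ( 1, 'Ann' ), ( 2, 'Bob' );  -- Bob has no orders
    INSERT INTO orders   VALUES ( 10, 1, 'open' );
""")

# Intended: every customer, plus any open order (predicate in the ON clause).
outer = con.execute("""
    SELECT customer.name, orders.id
      FROM customer LEFT OUTER JOIN orders
        ON orders.customer_id = customer.id AND orders.status = 'open'
     ORDER BY customer.name
""").fetchall()

# Oops: the predicate in the WHERE clause rejects Bob's all-NULL orders row.
accidental_inner = con.execute("""
    SELECT customer.name, orders.id
      FROM customer LEFT OUTER JOIN orders
        ON orders.customer_id = customer.id
     WHERE orders.status = 'open'
     ORDER BY customer.name
""").fetchall()

print(outer)             # [('Ann', 10), ('Bob', None)]
print(accidental_inner)  # [('Ann', 10)] -- Bob silently disappeared
```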

  2. SQL: Seeing too much data because a missing predicate in the WHERE clause effectively turned your INNER JOIN into a CROSS JOIN.

  3. SQL: Getting the wrong COUNT() or SUM() because you forgot to code WHERE ... IS NOT NULL, or you *did* code it when you shouldn't have.
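    For example, a quick demonstration using SQLite from the Python standard library (the table and values are made up; the NULL-skipping behavior is standard SQL):

```python
# Demo: COUNT(column) and SUM(column) silently skip NULLs, so they can
# disagree with COUNT(*) and with your expectations.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t ( amount INTEGER )")
con.executemany("INSERT INTO t VALUES ( ? )", [(10,), (None,), (30,)])

count_star, count_amount, total = con.execute(
    "SELECT COUNT(*), COUNT(amount), SUM(amount) FROM t"
).fetchone()

print(count_star, count_amount, total)  # 3 2 40
```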

  4. SQL: Getting the wrong answer because you forgot that, in general, NULL values [cough] suck.

  5. SQL Anywhere: Not seeing MESSAGE output because you forgot to run SET TEMPORARY OPTION DEBUG_MESSAGES = 'ON';

  6. SQL Anywhere: Not seeing any data because you forgot ON COMMIT PRESERVE ROWS or NOT TRANSACTIONAL.

  7. SQL Anywhere: Coding ENDIF where END IF was required, or vice versa (before Version 11).

  8. SQL Anywhere: Connecting to the wrong server because you forgot DOBROAD=NONE (before Version 12).

  9. SQL Anywhere: Forgetting the asterisk in SELECT TOP 10 FROM ...

  10. SQL Anywhere: Coding IF NOT VAREXISTS ( 'x' ) THEN ... instead of IF VAREXISTS ( 'x' ) = 0 THEN ...

  11. SQL Anywhere: Coding the wrong magic numbers 1, 2, 3, ... in the get_value() and set_value() calls in an EXTERNAL C DLL function.

  12. SQL Anywhere: Getting proxy table ODBC errors because the engine's running as a service and you've set up a User DSN instead of System DSN.

  13. SQL Anywhere: Getting file-related errors because the file specifications are relative to the server rather than the client.

  14. SQL Anywhere: Getting file-related errors because the engine's running as a service without the necessary permissions.

  15. SQL Anywhere: Coding CREATE TRIGGER IF NOT EXISTS instead of CREATE OR REPLACE TRIGGER, or vice versa for CREATE TABLE (in 11.0.1 or later).

  16. SQL Anywhere: Getting integer arithmetic when you wanted fractional parts because you forgot to CAST.
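    The same surprise exists outside SQL; here's a Python analogue (in SQL Anywhere you'd CAST one operand, in Python the operator decides, but the truncation trap is identical):

```python
# Two integer operands give truncated integer arithmetic.
completed = 7
total = 9

truncated = completed // total        # integer division: 0
fraction = completed / total          # 0.777... -- what was probably wanted

print(truncated, round(fraction, 2))  # 0 0.78
```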

  17. Stored procedure debugger: Setting it to watch a specific user id other than the one you're using to test your code.

  18. Sybase Central: Setting it to display objects for owner names other than the one you're interested in.

  19. Copy and paste: Forgetting to edit after pasting; e.g., Copy and paste SET @continue = 'Y' into the body of a WHILE loop and then forgetting to change it to 'N'.

  20. MobiLink: Forgetting to call ml_add_column for any of the columns you're trying to synchronize, thus guaranteeing yourself a "Sassen Frassen Fricken Fracken!" moment when you run the first test.

  21. MobiLink: Forgetting to call ml_add_[various] with the NULL parameter to delete old ml_[whatever] rows, thus ending up with thousands of orphan system table rows in the consolidated database.

  22. OLAP Windowing: Coding the wrong combination of ASC and DESC in an inner OVER ORDER BY clause and the outer SELECT ORDER BY: different when they should be the same, the same when they should be different, or some other variation of "wrong combination"...
                SELECT older_sample_set.sample_set_number
                  INTO @20_older_sample_set_number
                  FROM ( SELECT TOP 20
                                ROW_NUMBER() OVER ( ORDER BY rroad_sample_set.sample_set_number ASC ) AS scrolling_row_number,
                                rroad_sample_set.sample_set_number                                    AS sample_set_number
                           FROM rroad_sample_set
                          WHERE rroad_sample_set.sampling_id       = @sampling_id
                            AND rroad_sample_set.sample_set_number < @sample_set_number
                          ORDER BY rroad_sample_set.sample_set_number DESC ) AS older_sample_set
                 WHERE older_sample_set.scrolling_row_number = 20;

  23. MobiLink: Forgetting to call ml_add_column() when trying to use named parameters instead of "?" in versions 10 and 11 MobiLink scripts, resulting in a "What the ... ? Sassen Frassen Fricken Fracken!" moment during the first test (thank you, Jeff Albion).

  24. SQL: Omitting a PRIMARY KEY column from the WHERE clause, thus turning a singleton SELECT (or DELETE!) into something rather more enthusiastic than expected (thank you, Ron Hiner).
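    For example, here's an "enthusiastic" DELETE demonstrated with SQLite from the Python standard library (the line_item table is made up for illustration):

```python
# Demo: omit part of the composite primary key from the WHERE clause and a
# "singleton" DELETE removes every row that matches the remaining predicate.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE line_item (
        order_id INTEGER,
        line_no  INTEGER,
        item     TEXT,
        PRIMARY KEY ( order_id, line_no ) );
    INSERT INTO line_item VALUES ( 1, 1, 'widget' ), ( 1, 2, 'gadget' ), ( 2, 1, 'sprocket' );
""")

# Intended: delete order 1, line 2... but "AND line_no = 2" was forgotten.
con.execute("DELETE FROM line_item WHERE order_id = 1")

remaining = con.execute("SELECT order_id, line_no, item FROM line_item").fetchall()
print(remaining)  # [(2, 1, 'sprocket')] -- every row of order 1 is gone
```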

  25. HTTP web services: Leaving an & in the code when a ? is required, and vice versa, when editing service URLs; e.g., 'HTTP://localhost:12345/web_service&service_parm2=!parm2'

  26. SQL Anywhere: Forgetting that not all functions look like functions: SELECT CAST ( CURRENT TIMESTAMP, VARCHAR )

  27. Batch file: Trailing spaces on SET commands; e.g., SELECT CAST ( xp_getenv ( 'DEBUG_MESSAGES' ) AS VARCHAR ) returns 'OFF ' instead of 'OFF' after SET DEBUG_MESSAGES=OFF
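    Here's what that stray blank does to a later comparison, sketched in Python, along with the usual defensive trim:

```python
# A batch file line like "SET DEBUG_MESSAGES=OFF " (note the trailing space)
# puts 'OFF ' in the environment, and exact comparisons quietly fail.
value_from_environment = "OFF "

exact_match = (value_from_environment == "OFF")
trimmed_match = (value_from_environment.rstrip() == "OFF")

print(exact_match, trimmed_match)  # False True
```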

  28. Forum: Clicking Reply on the main Question or Answer entry instead of the comment you wanted.

  29. SQL Anywhere: Forgetting to run dblog to tell the database file where the log is now, after moving the database and log files to a different folder (thank you, Justin Willey).

  30. SQL Anywhere: Having to look up WAIT in the Help ... every ... single ... time, to be reminded that it's WAITFOR, not WAIT.

  31. SQL: Forgetting to check the SELECT against the GROUP BY, resulting in "Function or column reference to ... must also appear in a GROUP BY" (thank you, Glenn Paulley).

  32. SQL: Coding too much in the GROUP BY (like, say, the primary key) so every group contains but a single row (thank you, Glenn Paulley).

  33. Design: Forgetting to accommodate or prevent loops in a tree structure, resulting in a tree traversal process that pegs the CPU at 100%... forever (thank you, Ove B).

  34. MobiLink: Unwittingly using a variety of user ids when running sync*.sql, updating MobiLink scripts and running the MobiLink server, resulting in inexplicable inconsistencies.

  35. MobiLink: Accidentally creating multiple script versions and then getting them crossed up when updating MobiLink scripts and running the MobiLink client.

  36. SQL Anywhere: Forgetting to run the 32-bit version of SQL Anywhere when working with Excel proxy tables.

  37. ODBC Administrator: Running the 64-bit version (huh?) of odbcad32.exe (say what?) when you need 32-bit version at C:\WINDOWS\SysWOW64\odbcad32.exe (oh, fer #*@&!!!)

  38. ODBC Administrator: Forgetting to click OK ... twice ... to actually save your new ODBC DSN after celebrating your success with Test Connection.

  39. ODBC Administrator: Setting up an ODBC DSN on the wrong computer: "It goes with the client!" ... but sometimes it's not obvious where the client is actually located.

  40. Security: Forgetting which Windows user id you're using on which system, then spending too much time with Windows menus, firewall software and Google searches before the "Doh!" moment.

  41. SQL: Getting an exception that is not only completely inexplicable, but absolutely impossible for the statement that raised it... until you think to look inside the triggers.

  42. SQL Anywhere: Getting an exception because a FOR loop variable has a scope conflict with a column name, or worse, NOT getting an exception, just a wrong result.

  43. SQL: Forgetting the comma between two columns in SELECT list, thus turning the second column name into a profoundly misleading alias name for the first column.
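    For example, demonstrated with SQLite from the Python standard library (AS is optional before a column alias in most dialects, which is exactly why this fails silently; table t is made up for illustration):

```python
# Demo: with the comma missing, "line_no" silently becomes a column alias
# for order_id -- one column comes back, wearing the wrong name.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t ( order_id INTEGER, line_no INTEGER )")
con.execute("INSERT INTO t VALUES ( 7, 2 )")

cur = con.execute("SELECT order_id line_no FROM t")  # comma forgotten!
column_names = [d[0] for d in cur.description]
rows = cur.fetchall()

print(column_names)  # ['line_no'] -- one column, misleadingly labelled
print(rows)          # [(7,)]      -- and it holds order_id's value
```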

  44. SQL Anywhere: Coding SET OPTION instead of SET TEMPORARY OPTION, then wondering where the commit came from.

  45. SQL: Not bothering to check that you have the same number of columns and arguments in an INSERT statement and then wondering why you have an error (thank you, Justin Willey).

  46. SQL: Forgetting to omit your autoincrement primary key from your INSERT column name list (thank you, Justin Willey).

  47. Foxhound: Changing the "Default" Monitor option settings, then expecting them to apply to an existing target database.

  48. Foxhound: Creating two separate Foxhound Monitor sessions for the same actual target database, perhaps one using a DSN and the other a DSN-less connection string.

  49. SQL Anywhere: Forgetting to code AUTOSTOP OFF on a START DATABASE statement, then wondering why the database immediately stopped after starting.

  50. SQL Anywhere: Running a new version of dbupgrade.exe on an older version of a database file that is running on an older version of the database server and not noticing it did not have the desired effect.

    For example, running version 16 of dbupgrade.exe that connects to a version 12 database that is running on dbsrv12.exe will appear to run OK but none of the new version 16 stored procedures will exist after the "upgrade":
    SELECT * FROM sa_certificate_info ( 'whatever' );
    
    Could not execute statement.
    Procedure 'sa_certificate_info' not found
    SQLCODE=-265, ODBC 3 State="42S02"
    Line 1, column 1
    (Continuing after error)
    
    The version 16 dbupgrade.exe says it ran "SQL Anywhere Upgrade Utility Version 16.0.0.2512" but the SYSHISTORY table says the version was '12.0.1.4231':
    SQL Anywhere Upgrade Utility Version 16.0.0.2512
    Database upgrade started
    Creating system views
    Creation of system views completed
    Setting option values
    Setting option values completed
    Creating migration procedures
    Creation of migration procedures completed
    Creating jConnect procedures
    Creation of jConnect procedures completed
    Database upgrade finished
    Upgrade completed - shut down database and archive transaction log now
    
    SELECT operation, version, first_time, last_time
      FROM SYS.SYSHISTORY
     WHERE operation = 'UPGRADE';
    
    operation,  version,        first_time,                last_time
    'UPGRADE',  '12.0.1.4231', '2018-09-04 08:36:01.000',  '2018-09-04 08:36:01.000'
    
    To work as expected, a new version of dbupgrade.exe must connect to a database running on the new version of the database server, either by connecting to a database that is already running on the new server or by implicitly starting that database on the new server via the dbupgrade -c'DBF=...' connection parameter.

  51. SQL Anywhere: Not making the SELECT TOP ORDER BY clause sufficiently deterministic, then wondering why it doesn't pick the same top row every time.

    For example, a SELECT TOP 1 with an ORDER BY that doesn't specify enough columns to exactly determine which of 2 or more rows are the TOP 1 will run OK but it may not pick the same top row every time.

  52. SQL Anywhere: Not checking other ORDER BY clauses when changing one of several ORDER BY clauses in the same query; e.g., not checking a ROW_NUMBER() OVER ( ORDER BY ... ) when changing the SELECT ORDER BY, then wondering why there's something funky about the result set.

    Here's an example from down inside Foxhound:
      FROM ( SELECT TOP 500 
                    ROW_NUMBER() OVER ( ORDER BY rroad_v_sample_history_menu_up_count_locator.locator_number ASC ) AS scrolling_row_number,
                    rroad_v_sample_history_menu_up_count_locator.locator_number                                    AS locator_number,
                    rroad_v_sample_history_menu_up_count_locator.secondary_sort_order_number                       AS secondary_sort_order_number
               FROM rroad_v_sample_history_menu_up_count_locator
              ORDER BY rroad_v_sample_history_menu_up_count_locator.locator_number ASC,
                       rroad_v_sample_history_menu_up_count_locator.secondary_sort_order_number ASC 
           ) AS newer_sample_set
     WHERE newer_sample_set.scrolling_row_number = 500;
    

  53. SQL Anywhere: Forgetting to code an INTO @local_variable clause on a SELECT inside a deeply-nested PROCEDURE, thus accidentally returning an additional result set to a higher FROM call, then wondering why you get SQLCODE -866 (returned a result set with a different schema than expected), or SQLCODE -873 (returned multiple result sets), or both.

  54. SQL Anywhere: Expecting to see MESSAGE TO CLIENT output with the SQL Anywhere 17 ISQL GUI. Don't bother looking, it's virtually inaccessible. Use MESSAGE TO CONSOLE instead, or revert to the version 16 ISQL which works just fine with version 17 databases.

Sunday, September 16, 2018

Dogfooding As A Foxhound Hallmark

Developers make the worst salespeople.

If that wasn't true, how is it possible that "Dogfooding" wasn't added to the list of Foxhound Hallmarks until now? That, and "Robustness":

  1. Robustness  The Foxhound Monitor keeps on trying to connect to the target database during an outage. Foxhound also silently handles invalid performance statistics, and it keeps working after it throws an exception, thus meeting the definition of robustness: "The degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions."

  2. Dogfooding  Foxhound is regularly run against the Foxhound database itself; to test new features, sure, but primarily to look for performance bottlenecks. Foxhound is also the first tool of choice on client performance and tuning assignments; that's how the Foxhound Monitor was created in the first place, on assignment in 2003 as a Blocked Connection Display for a SQL Anywhere 9 database at UNO Money Transfers. The Schema Display feature was created even earlier, in 1998, for an immense Oracle database at Princess Margaret Hospital... which makes 2018 the twentieth anniversary of Foxhound!
The whole list can be found in the updated Foxhound 4 Database Monitor White Paper . . .

. . . which has finally been converted back to HTML from PDF format, that other thing :)




Tuesday, September 4, 2018

Characteristic Errors, Revision 7

UPDATE: See the latest version of this article here.


A characteristic error is a mistake so easy to make it appears you're actively encouraged to make it by the very nature of the software you're using.

Here are the latest entries...

  1. SQL Anywhere: Running a new version of dbupgrade.exe on an older version of a database file that is running on an older version of the database server and not noticing it did not have the desired effect.

    For example, running version 16 of dbupgrade.exe that connects to a version 12 database that is running on dbsrv12.exe will appear to run OK but none of the new version 16 stored procedures will exist after the "upgrade":
    SELECT * FROM sa_certificate_info ( 'whatever' );
    
    Could not execute statement.
    Procedure 'sa_certificate_info' not found
    SQLCODE=-265, ODBC 3 State="42S02"
    Line 1, column 1
    (Continuing after error)
    
    The version 16 dbupgrade.exe says it ran "SQL Anywhere Upgrade Utility Version 16.0.0.2512" but the SYSHISTORY table says the version was '12.0.1.4231':
    SQL Anywhere Upgrade Utility Version 16.0.0.2512
    Database upgrade started
    Creating system views
    Creation of system views completed
    Setting option values
    Setting option values completed
    Creating migration procedures
    Creation of migration procedures completed
    Creating jConnect procedures
    Creation of jConnect procedures completed
    Database upgrade finished
    Upgrade completed - shut down database and archive transaction log now
    
    SELECT operation, version, first_time, last_time
      FROM SYS.SYSHISTORY
     WHERE operation = 'UPGRADE';
    
    operation,  version,        first_time,                last_time
    'UPGRADE',  '12.0.1.4231', '2018-09-04 08:36:01.000',  '2018-09-04 08:36:01.000'
    
    To work as expected, a new version of dbupgrade.exe must connect to a database running on the new version of the database server, either by connecting to a database that is already running on the new server or by implicitly starting that database on the new server via the dbupgrade -c'DBF=...' connection parameter.

  2. SQL Anywhere: Not making the SELECT TOP ORDER BY clause sufficiently deterministic, then wondering why it doesn't pick the same top row every time.

    For example, a SELECT TOP 1 with an ORDER BY that doesn't specify enough columns to exactly determine which of 2 or more rows are the TOP 1 will run OK but it may not pick the same top row every time.

  3. SQL Anywhere: Not checking other ORDER BY clauses when changing one of several ORDER BY clauses in the same query; e.g., not checking a ROW_NUMBER() OVER ( ORDER BY ... ) when changing the SELECT ORDER BY, then wondering why there's something funky about the result set.

    Here's an example from down inside Foxhound:
    FROM ( SELECT TOP 500 
                    ROW_NUMBER() OVER ( ORDER BY rroad_v_sample_history_menu_up_count_locator.locator_number ASC ) AS scrolling_row_number,
                    rroad_v_sample_history_menu_up_count_locator.locator_number                                    AS locator_number,
                    rroad_v_sample_history_menu_up_count_locator.secondary_sort_order_number                       AS secondary_sort_order_number
               FROM rroad_v_sample_history_menu_up_count_locator
              ORDER BY rroad_v_sample_history_menu_up_count_locator.locator_number ASC,
                       rroad_v_sample_history_menu_up_count_locator.secondary_sort_order_number ASC 
           ) AS newer_sample_set
     WHERE newer_sample_set.scrolling_row_number = 500;
    



Here's the full list...
  1. SQL: Seeing too little data, or no data at all, because a predicate in the WHERE clause effectively turned your OUTER JOIN into an INNER JOIN.
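    A minimal sketch of the trap (table and column names are hypothetical): a predicate on the NULL-supplying side placed in the WHERE clause throws away the unmatched rows, while the same predicate in the ON clause preserves them.

    ```sql
    -- Loses parents with no 'active' child: the WHERE predicate rejects the
    -- NULL rows, so the LEFT OUTER JOIN behaves like an INNER JOIN.
    SELECT p.parent_id, c.child_id
      FROM t_parent AS p
              LEFT OUTER JOIN t_child AS c
              ON c.parent_id = p.parent_id
     WHERE c.status = 'active';

    -- Keeps every parent: the predicate is part of the join condition.
    SELECT p.parent_id, c.child_id
      FROM t_parent AS p
              LEFT OUTER JOIN t_child AS c
              ON c.parent_id = p.parent_id
             AND c.status = 'active';
    ```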

  2. SQL: Seeing too much data because a missing predicate in the WHERE clause effectively turned your INNER JOIN into a CROSS JOIN.

  3. SQL: Getting the wrong COUNT() or SUM() because you forgot to code WHERE ... IS NOT NULL, or you *did* code it when you shouldn't have.
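    Here's a tiny demonstration of how NULLs skew aggregates; the table name is made up:

    ```sql
    DECLARE LOCAL TEMPORARY TABLE t_demo ( amount INTEGER ) NOT TRANSACTIONAL;
    INSERT INTO t_demo VALUES ( 10 );
    INSERT INTO t_demo VALUES ( 20 );
    INSERT INTO t_demo VALUES ( NULL );

    SELECT COUNT ( * )      AS all_rows,       -- 3: counts every row
           COUNT ( amount ) AS non_null_rows,  -- 2: silently ignores the NULL
           SUM ( amount )   AS total           -- 30: also ignores the NULL
      FROM t_demo;
    ```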

  4. SQL: Getting the wrong answer because you forgot that, in general, NULL values [cough] suck.

  5. SQL Anywhere: Not seeing MESSAGE output because you forgot to run SET TEMPORARY OPTION DEBUG_MESSAGES = 'ON';

  6. SQL Anywhere: Not seeing any data because you forgot ON COMMIT PRESERVE ROWS or NOT TRANSACTIONAL.
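    The reason the rows vanish: local temporary tables default to ON COMMIT DELETE ROWS, so the first COMMIT empties them. A sketch of the two fixes:

    ```sql
    -- Rows survive COMMIT, but inserts are still logged and transactional.
    DECLARE LOCAL TEMPORARY TABLE t_work ( pkey INTEGER )
       ON COMMIT PRESERVE ROWS;

    -- Rows survive COMMIT and ROLLBACK; changes aren't logged (often faster).
    DECLARE LOCAL TEMPORARY TABLE t_scratch ( pkey INTEGER )
       NOT TRANSACTIONAL;
    ```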

  7. SQL Anywhere: Coding ENDIF where END IF was required, or vice versa (before Version 11).

  8. SQL Anywhere: Connecting to the wrong server because you forgot DOBROAD=NONE (before Version 12).

  9. SQL Anywhere: Forgetting the asterisk in SELECT TOP 10 FROM ...

  10. SQL Anywhere: Coding IF NOT VAREXISTS ( 'x' ) THEN ... instead of IF VAREXISTS ( 'x' ) = 0 THEN ...
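    VAREXISTS returns a number, not a boolean, so it has to be compared explicitly; for example:

    ```sql
    IF VAREXISTS ( 'x' ) = 0 THEN
       CREATE VARIABLE x INTEGER;
    END IF;
    ```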

  11. SQL Anywhere: Coding the wrong magic numbers 1, 2, 3, ... in the get_value() and set_value() calls in an EXTERNAL C DLL function.

  12. SQL Anywhere: Getting proxy table ODBC errors because the engine's running as a service and you've set up a User DSN instead of a System DSN.

  13. SQL Anywhere: Getting file-related errors because the file specifications are relative to the server rather than the client.

  14. SQL Anywhere: Getting file-related errors because the engine's running as a service without the necessary permissions.

  15. SQL Anywhere: Coding CREATE TRIGGER IF NOT EXISTS instead of CREATE OR REPLACE TRIGGER, or vice versa for CREATE TABLE (in 11.0.1 or later).

  16. SQL Anywhere: Getting integer arithmetic when you wanted fractional parts because you forgot to CAST.
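    For example, dividing two integers discards the fractional part; CAST one operand first:

    ```sql
    SELECT 7 / 2                               AS truncated,  -- 3: integer division
           CAST ( 7 AS DECIMAL ( 10, 2 ) ) / 2 AS exact;      -- 3.5: fraction preserved
    ```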

  17. Stored procedure debugger: Setting it to watch a specific user id other than the one you're using to test your code.

  18. Sybase Central: Setting it to display objects for owner names other than the one you're interested in.

  19. Copy and paste: Forgetting to edit after pasting; e.g., Copy and paste SET @continue = 'Y' into the body of a WHILE loop and then forgetting to change it to 'N'.

  20. MobiLink: Forgetting to call ml_add_column for any of the columns you're trying to synchronize, thus guaranteeing yourself a "Sassen Frassen Fricken Fracken!" moment when you run the first test.

  21. MobiLink: Forgetting to call ml_add_[various] with the NULL parameter to delete old ml_[whatever] rows, thus ending up with thousands of orphan system table rows in the consolidated database.

  22. OLAP Windowing: Coding the wrong combination of ASC and DESC in an inner OVER ORDER BY clause and the outer SELECT ORDER BY: different when they should be the same, the same when they should be different, or some other variation of "wrong combination"...
    SELECT older_sample_set.sample_set_number
                  INTO @20_older_sample_set_number
                  FROM ( SELECT TOP 20
                                ROW_NUMBER() OVER ( ORDER BY rroad_sample_set.sample_set_number ASC ) AS scrolling_row_number,
                                rroad_sample_set.sample_set_number                                    AS sample_set_number
                           FROM rroad_sample_set
                          WHERE rroad_sample_set.sampling_id       = @sampling_id
                            AND rroad_sample_set.sample_set_number < @sample_set_number
                          ORDER BY rroad_sample_set.sample_set_number DESC ) AS older_sample_set
                 WHERE older_sample_set.scrolling_row_number = 20;

  23. MobiLink: Forgetting to call ml_add_column() when trying to use named parameters instead of "?" in versions 10 and 11 MobiLink scripts, resulting in a "What the ... ? Sassen Frassen Fricken Fracken!" moment during the first test (thank you, Jeff Albion).

  24. SQL: Omitting a PRIMARY KEY column from the WHERE clause, thus turning a singleton SELECT (or DELETE!) into something rather more enthusiastic than expected (thank you, Ron Hiner).
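    For example, with a hypothetical two-column primary key ( order_id, line_id ):

    ```sql
    -- Rather more enthusiastic than expected: deletes every line of order 123.
    DELETE FROM t_order_line
     WHERE order_id = 123;

    -- Singleton, as intended: the full primary key is in the WHERE clause.
    DELETE FROM t_order_line
     WHERE order_id = 123
       AND line_id  = 1;
    ```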

  25. HTTP web services: Leaving an & in the code when a ? is required, and vice versa, when editing service URLs; e.g., 'HTTP://localhost:12345/web_service&service_parm2=!parm2'

  26. SQL Anywhere: Forgetting that not all functions look like functions: SELECT CAST ( CURRENT TIMESTAMP, VARCHAR )

  27. Batch file: Trailing spaces on SET commands; e.g., SELECT CAST ( xp_getenv ( 'DEBUG_MESSAGES' ) AS VARCHAR ) returns 'OFF ' instead of 'OFF' after SET DEBUG_MESSAGES=OFF

  28. Forum: Clicking Reply on the main Question or Answer entry instead of the comment you wanted.

  29. SQL Anywhere: Forgetting to run dblog to tell the database file where the log is now, after moving the database and log files to a different folder (thank you, Justin Willey).

  30. SQL Anywhere: Having to look up WAIT in the Help ... every ... single ... time, to be reminded that it's WAITFOR, not WAIT.

  31. SQL: Forgetting to check the SELECT against the GROUP BY, resulting in "Function or column reference to ... must also appear in a GROUP BY" (thank you, Glenn Paulley).

  32. SQL: Coding too much in the GROUP BY (like, say, the primary key) so every group contains but a single row (thank you, Glenn Paulley).

  33. Design: Forgetting to accommodate or prevent loops in a tree structure, resulting in a tree traversal process that pegs the CPU at 100%... forever (thank you, Ove B).

  34. MobiLink: Unwittingly using a variety of user ids when running sync*.sql, updating MobiLink scripts and running the MobiLink server, resulting in inexplicable inconsistencies.

  35. MobiLink: Accidentally creating multiple script versions and then getting them crossed up when updating MobiLink scripts and running the MobiLink client.

  36. SQL Anywhere: Forgetting to run the 32-bit version of SQL Anywhere when working with Excel proxy tables.

  37. ODBC Administrator: Running the 64-bit version (huh?) of odbcad32.exe (say what?) when you need 32-bit version at C:\WINDOWS\SysWOW64\odbcad32.exe (oh, fer #*@&!!!)

  38. ODBC Administrator: Forgetting to click OK ... twice ... to actually save your new ODBC DSN after celebrating your success with Test Connection.

  39. ODBC Administrator: Setting up an ODBC DSN on the wrong computer: "It goes with the client!" ... but sometimes it's not obvious where the client is actually located.

  40. Security: Forgetting which Windows user id you're using on which system, then spending too much time with Windows menus, firewall software and Google searches before the "Doh!" moment.

  41. SQL: Getting an exception that is not only completely inexplicable, but absolutely impossible for the statement that raised it... until you think to look inside the triggers.

  42. SQL Anywhere: Getting an exception because a FOR loop variable has a scope conflict with a column name, or worse, NOT getting an exception, just a wrong result.

  43. SQL: Forgetting the comma between two columns in SELECT list, thus turning the second column name into a profoundly misleading alias name for the first column.
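    A sketch of how innocent it looks (column names hypothetical):

    ```sql
    -- One column, misleadingly titled last_name: the comma is missing,
    -- so last_name becomes an alias for first_name.
    SELECT first_name last_name
      FROM t_employee;

    -- Two columns, as intended.
    SELECT first_name, last_name
      FROM t_employee;
    ```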

  44. SQL Anywhere: Coding SET OPTION instead of SET TEMPORARY OPTION, then wondering where the commit came from.
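    A sketch, using the real ansi_blanks option: SET OPTION makes a permanent change to the catalog and performs a COMMIT as a side effect; SET TEMPORARY OPTION affects only the current connection and commits nothing.

    ```sql
    SET OPTION PUBLIC.ansi_blanks = 'On';     -- permanent, saved in the catalog... and it performs a COMMIT
    SET TEMPORARY OPTION ansi_blanks = 'On';  -- this connection only, no commit
    ```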

  45. SQL: Not bothering to check that you have the same number of columns and arguments in an INSERT statement and then wondering why you have an error (thank you, Justin Willey).

  46. SQL: Forgetting to omit your autoincrement primary key from your INSERT column name list (thank you, Justin Willey).

  47. Foxhound: Changing the "Default" Monitor option settings, then expecting them to apply to an existing target database.

  48. Foxhound: Creating two separate Foxhound Monitor sessions for the same actual target database, perhaps one using a DSN and the other a DSN-less connection string.

  49. SQL Anywhere: Forgetting to code AUTOSTOP OFF on a START DATABASE statement, then wondering why the database immediately stopped after starting.
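    A sketch: START DATABASE doesn't create a connection, and the default AUTOSTOP ON stops a database as soon as it has no connections, so the database stops right away.

    ```sql
    -- Without AUTOSTOP OFF the freshly started database, having no
    -- connections yet, immediately stops again.
    START DATABASE 'other.db' AS other AUTOSTOP OFF;
    ```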

  50. SQL Anywhere: Running a new version of dbupgrade.exe against an older version of a database file that is running on an older version of the database server, and not noticing that it did not have the desired effect.

    For example, a version 16 dbupgrade.exe that connects to a version 12 database running on dbsrv12.exe will appear to run OK, but none of the new version 16 stored procedures will exist after the "upgrade":
    SELECT * FROM sa_certificate_info ( 'whatever' );
    
    Could not execute statement.
    Procedure 'sa_certificate_info' not found
    SQLCODE=-265, ODBC 3 State="42S02"
    Line 1, column 1
    (Continuing after error)
    
    The version 16 dbupgrade.exe says it ran "SQL Anywhere Upgrade Utility Version 16.0.0.2512" but the SYSHISTORY table says the version was '12.0.1.4231':
    SQL Anywhere Upgrade Utility Version 16.0.0.2512
    Database upgrade started
    Creating system views
    Creation of system views completed
    Setting option values
    Setting option values completed
    Creating migration procedures
    Creation of migration procedures completed
    Creating jConnect procedures
    Creation of jConnect procedures completed
    Database upgrade finished
    Upgrade completed - shut down database and archive transaction log now
    
    SELECT operation, version, first_time, last_time
      FROM SYS.SYSHISTORY
     WHERE operation = 'UPGRADE';
    
    operation,  version,        first_time,                last_time
    'UPGRADE',  '12.0.1.4231', '2018-09-04 08:36:01.000',  '2018-09-04 08:36:01.000'
    
    To work as expected, a new version of dbupgrade.exe must connect to a database running on the new version of the database server, either by connecting to a database that is already running on the new server or by implicitly starting that database on the new server via the dbupgrade -c'DBF=...' connection parameter.

  51. SQL Anywhere: Not making the SELECT TOP ORDER BY clause sufficiently deterministic, then wondering why it doesn't pick the same top row every time.

    For example, a SELECT TOP 1 with an ORDER BY that doesn't specify enough columns to exactly determine which of 2 or more rows are the TOP 1 will run OK but it may not pick the same top row every time.
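    The fix is to extend the ORDER BY with a unique tie-breaker such as the primary key (order_date and order_id are hypothetical):

    ```sql
    -- Non-deterministic: several rows may share the latest order_date,
    -- and any one of them may come back as the TOP 1.
    SELECT TOP 1 *
      FROM t_order
     ORDER BY order_date DESC;

    -- Deterministic: the unique order_id breaks ties the same way every time.
    SELECT TOP 1 *
      FROM t_order
     ORDER BY order_date DESC,
              order_id   DESC;
    ```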

  52. SQL Anywhere: Not checking other ORDER BY clauses when changing one of several ORDER BY clauses in the same query; e.g., not checking a ROW_NUMBER() OVER ( ORDER BY ... ) when changing the SELECT ORDER BY, then wondering why there's something funky about the result set.

    Here's an example from down inside Foxhound:
    FROM ( SELECT TOP 500 
                    ROW_NUMBER() OVER ( ORDER BY rroad_v_sample_history_menu_up_count_locator.locator_number ASC ) AS scrolling_row_number,
                    rroad_v_sample_history_menu_up_count_locator.locator_number                                    AS locator_number,
                    rroad_v_sample_history_menu_up_count_locator.secondary_sort_order_number                       AS secondary_sort_order_number
               FROM rroad_v_sample_history_menu_up_count_locator
              ORDER BY rroad_v_sample_history_menu_up_count_locator.locator_number ASC,
                       rroad_v_sample_history_menu_up_count_locator.secondary_sort_order_number ASC 
           ) AS newer_sample_set
     WHERE newer_sample_set.scrolling_row_number = 500;
    



Wednesday, August 29, 2018

One Of These Statements Is Wrong



Nobody else has this problem?

Right?

Just me?

I thought so :)


Thursday, August 2, 2018

Controlling The SQL Anywhere Transaction Log Size

Overview

The problem of a rapidly growing SQL Anywhere transaction log filling up the hard drive is discussed. The Foxhound High-Frequency Insert-Delete database is used as the primary example and a non-High-Frequency Insert-Delete benchmark database is used as a secondary example. The Introduction describes the problem, and several solutions are presented. The recommended solution is 2-plus-5 for a Foxhound database; for a non-Foxhound database the best solution depends on the answers to questions like "Do you care about your data?".

The following topics are not discussed:
  • Databases which participate in SQL Remote replication, because the preservation of old transaction log entries is so important and controlling the file size is a whole different topic.
  • Databases which participate as MobiLink remote databases, for the same reason as SQL Remote.
  • Databases which participate in SQL Anywhere High Availability (mirror) and Read-Only Scale-Out setups, because... well... there just wasn't time :)
  • Databases with auditing = 'On' and audit_log = 'TRANSLOG', because the audit details are recorded in the transaction log [26].
  • The brute force "Method Zero" (stop database, delete log, start database) because... well... that pretty much covers Method Zero :)


Introduction     (top)


One of the drawbacks to Foxhound is the tendency for the transaction log file to grow rapidly. This is especially true if Foxhound is used to monitor multiple target databases with many connections.
Background

Foxhound[1] is a third-party database monitor for SAP® SQL Anywhere®. It uses its own SQL Anywhere database to store samples of performance statistics gathered from multiple target databases.

The Foxhound database is a "High-Frequency Insert-Delete" database (sometimes called "High Throughput") because
  • it has a high rate of inserts (the continuous stream of sample data coming from the targets)

  • but the database file doesn't grow in size because one old row is deleted for every new row that is inserted.
At first, the Foxhound database and log files both grow in size at the same rate because new rows are inserted and nothing is deleted. Eventually the database file stops growing when the built-in purge process[25] starts deleting old data (after 30 days by default).

However, the purge actually makes the log grow faster because now the delete statements are logged as well as all the inserted data.
If nothing is done to control the size of the log it will fill the disk drive and Foxhound's own SQL Anywhere engine will crash. At that point you might have to use the dbsrv -f option[2] to get it going again.
How To Get Foxhound Running Again After Losing The Transaction Log.

The code shown below is from Foxhound 4, but the method's the same for Foxhound 3 and earlier.
  1. Edit the $start_foxhound4_engine.bat file in the C:\ProgramData\RisingRoad\Foxhound4\ folder.

  2. Insert the -f^ line shown below.
    [snip]
    "!SDIR!\!BIN!\dbspawn.exe"^
      -f "!SDIR!\!BIN!\!SPGM!"^
      -c 25p^
      -ch 50p^
      -cr-^
      -gk all^
      -gn 220^
      -gna 0^
      -n foxhound4^
      -o foxhound4_debug.txt^
      -oe foxhound4_debug_startup.txt^
      -on 1M^
      -qn^
      -sb 0^
      -ufd restart^
      -x tcpip^
      -xd^
      -xs http(port=80;maxsize=0;to=600;kto=600)^
      foxhound4.db^
      -f^
      -n f
    [snip]
    Tip: When using the caret ^ to insert line breaks, make it the last character on the line; do not follow it with any spaces.

  3. Run the $start_foxhound4_engine.bat file.
    It should start the foxhound4.db file, create a new foxhound4.log file, and then shut down.

  4. Now you should be able to start Foxhound normally.
    Re-edit the $start_foxhound4_engine.bat file to remove the extra -f.
There are several ways to control the size of the transaction log file, ranging from the extreme (#1) to the traditional (#5 and 6):
Method 1: Get rid of the log altogether with dblog -n

Method 2: Automatically truncate the log on every checkpoint with dbsrv -m

Method 3: Regularly truncate the log with dbbackup -xo

Method 4: Regularly truncate the log with BACKUP ... TRUNCATE;

Method 5: Regularly make a full backup with dbbackup -x

Method 6: Do #5, then make incremental log backups with dbbackup -n -t -x

Method 7: Use a combination of methods
These questions and answers may help you choose a method . . .
Questioning the methods:          1      2         3         4         5        6         7

(a) Is it easy to set up?         no     yes       no        no        no       no        no

(b) Is it easy to administer?     yes    yes       no        no        no       no        no

(c) Will the database work        yes    yes       yes       yes       yes      yes       yes 
    properly?

(d) Will the database run as      maybe  probably  probably  probably  maybe    maybe     maybe
    fast?

(e) Will automatic recovery       yes    yes       yes       yes       yes      yes       yes
    work after the SQL Anywhere 
    engine stops abruptly   
    (no final checkpoint)?

(f) Will you be able to restore   maybe  maybe     maybe     maybe     yes      yes       yes
    your database file after    
    it is lost or damaged?

(g) Will the restored database    no     no        no        no        maybe    probably  maybe 
    file include all the   
    latest updates?

(h) But... Do you really care about all of your data?

(i)        Do you have alternative methods of restoring lost data?
... and some of the answers are expanded in the following "How To" sections.


Method 1: Get rid of the log altogether with dblog -n     (top)


Trigger Warning: Even thinking about operating a database without a transaction log may cause anxiety in some readers :)

Before writing this article I had never even experimented with removing the transaction log... it was taboo! Running without a transaction log would kill performance!

It says so in the Help:[3] "When operating without a transaction log, SQL Anywhere performs a checkpoint at the end of every transaction which consumes considerable resources."

It even says so in this blog! (Number 19 in How To Make SQL Anywhere Slow[4])

Method 1 (getting rid of the log) is the most effective (and most dramatic) way of controlling the transaction log size: Stop SQL Anywhere from writing anything to the log at all.

It is also easy to administer because after setup there is absolutely no administration required.

Here's how to get rid of the transaction log, in the context of Foxhound 4 running on SQL Anywhere 17:

First, stop the database if it is running.
Tip: If you forget to stop the database, dblog will fail with "Unable to open database file "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" -- Permission denied" because the actions of turning the log off and on, or renaming it, or moving it are performed by changing data inside the physical database file.

Tip: If you crash the SQL Anywhere process rather than stopping the database cleanly, dblog may fail with "Database quit abnormally last time it was used. Database recovery must be performed before proceeding." Start the database again, and stop it cleanly (run dbstop, or right mouse - Shut down "yourdb" on the taskbar icon, or stop the service).
Second, run dblog -n[5] to update the foxhound4.db file (you might have to use Run as Administrator):
"%SQLANY17%\Bin64\dblog.exe"^
  -n^
  -o "C:\ProgramData\RisingRoad\Foxhound4\dblog_messages.txt"^
  "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db"
PAUSE

SQL Anywhere Transaction Log Utility Version 17.0.9.4803
"C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" was 
   using log file "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"
"C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" was using no log mirror file
Transaction log mirror filename not changed
"C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" is now using no log file
Transaction log starting offset is 01770229792
Transaction log current relative offset is 0021542337
Third, start the database.
SQL Anywhere won't complain about a missing transaction log... maybe it should, but it doesn't.

Also, the usual "Starting checkpoint" and "Finished checkpoint" messages are not displayed in the SQL Anywhere console display or included in the dbsrv17.exe -o diagnostic text file, even though checkpoints continue to happen.
Finally, you can delete the old transaction log file because it isn't being used any more.
ERASE /F "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"
PAUSE
You won't notice any difference in the way the database works, or in the ability of SQL Anywhere to perform automatic restart recovery, and you may or may not notice a difference in performance.

For example, here's what the automatic recovery process looks like after Task Manager was used to kill a Foxhound dbsrv17.exe process that wasn't using a transaction log:
SQL Anywhere Network Server Version 17.0.9.4803
...
Starting database "f" (C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db) at Mon Jul 23 2018 12:23
Database recovery in progress
    Last checkpoint at Mon Jul 23 2018 12:22
    Checkpoint log...
    Checkpointing...
Starting checkpoint of "f" (foxhound4.db) at Mon Jul 23 2018 12:23
Finished checkpoint of "f" (foxhound4.db) at Mon Jul 23 2018 12:23
Recovery complete
Database "f" (foxhound4.db) started at Mon Jul 23 2018 12:23
Database server started at Mon Jul 23 2018 12:23
...
Now accepting requests
---------------------------------------
Foxhound Version 4.0 by RisingRoad - Extended Edition 4.0.4740a built at 2017 10 02 14:29 
   in a SQL Anywhere 16.0.0.2127 database created at 2017 10 02 14:29
---------------------------------------
...
Foxhound now accepting requests.
The "Database recovery in progress" usually works like this[6]:
  1. Use the checkpoint log to restore the database back to the most recent checkpoint.

  2. Use the transaction log (redo log) to roll the subsequent committed and uncommitted changes forward to the point of failure.

  3. Use the rollback log to undo uncommitted changes.

  4. Take a checkpoint.
Step 2 can't be done when there's no transaction log, but that's a moot point because when there's no transaction log the most recent checkpoint coincides with the most recent commit... and there's nothing for Step 3 to do either.

Performance may be another issue, however. When a SQL Anywhere database doesn't have a transaction log, a checkpoint operation must be performed with (more or less) every commit. Here's what that looks like for the Foxhound 4 database (as displayed by a separate "dogfood" copy of Foxhound) running with and without a transaction log:

Figure 1: Foxhound With And Without A Transaction Log
Foxhound 4 database operating WITH a transaction log


Foxhound 4 database operating WITHOUT a transaction log


The Checkpoints[7] column (over on the right) shows the number of checkpoints performed in each 10-second sample. As expected, the values are higher (about 10 checkpoints every 10 seconds) when Foxhound is run WITHOUT a transaction log, but that doesn't seem to matter because . . .
  • The Throughput...[8] columns measure the rate of work being performed, and they remain the same with or without a log.

  • In particular, the Throughput... Commits column remains the same (about 2.5 per second).

  • The CPU column[9] also remains the same, another indication that Foxhound is getting the same work done at the same rate.

  • The Waiting Time[10] column shows the total time all current connections were blocked or forced to wait during the previous interval, and the reason it's higher without a log may be that all the checkpoints are causing delays... not big delays, however, because total Waiting Time never reaches 1 second in any 10 second interval.

Figure 1 shows that Foxhound performance did not suffer after getting rid of the transaction log.

Your mileage may vary, of course. The tests in Figure 1 showed Foxhound monitoring only one target database with 1,000 connections, not the ten or twenty or more targets that are not uncommon.

Also, the tests don't show when Foxhound is overloaded by running on [cough] inadequate hardware; alas, that too is not uncommon :)

With other (non-Foxhound) databases, however, the penalty for running without a transaction log can be severe. Here's what that looks like for the benchmark application where 100 connections are running simple update-commit transactions as fast as SQL Anywhere can process them:

Figure 2: Non-Foxhound Database With And Without A Transaction Log
Benchmark database operating WITH a transaction log


Benchmark database operating WITHOUT a transaction log


Everything about the second test is bad news:
  • The Interval[11] between monitor samples (the left-most column) is longer than the nominal 10 seconds. Sometimes this happens because Foxhound is overloaded, but not this time: the benchmark database simply isn't responding fast enough.

  • The Response... Heartbeat[12] time went up by a factor of 10 (how long it took Foxhound to get the result of SELECT * FROM DUMMY).

  • The Response... Sample time doubled (how long it took to get the performance statistics)... without a log, the sample time is longer than Foxhound's nominal inter-sample interval of 10 seconds.

  • All the Throughput... numbers fell off a cliff, in particular the rate of Throughput... Commits which dropped from over 15,000 commits per second to lower than 1,000.

  • The Waiting Time numbers went up a bit. They were bad before (because the benchmark database was struggling), now they're even worse.

  • The CPU percent column tells more of the story. Low CPU isn't always a good thing; in this case the lower CPU numbers mean the lack of a transaction log prevented the application from getting its work done...

  • ...because SQL Anywhere was too busy taking Checkpoints: more than 6,000 checkpoints every ten seconds, as opposed to zero checkpoints with a log.
    (This may be why the usual "Starting checkpoint" and "Finished checkpoint" messages are not displayed in the SQL Anywhere console display... whaddaya think? :)

Figure 2 shows that non-Foxhound benchmark performance suffered greatly after getting rid of the transaction log.

If you change your mind: How To Start Using A Transaction Log Again.

The code shown below is from Foxhound 4, but the method's the same for Foxhound 3 and earlier.
  1. First, stop Foxhound.

  2. Run dblog -t to update the foxhound4.db file:
    "%SQLANY17%\Bin64\dblog.exe"^
      -t "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"^
      -o "C:\ProgramData\RisingRoad\Foxhound4\dblog_t_messages.txt"^
      "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db"
    PAUSE
    
    SQL Anywhere Transaction Log Utility Version 17.0.9.4803
    "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" was using no log file
    "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" is using no log mirror file
    "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db" is now 
       using log file "C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log"
    Transaction log starting offset is 0024948999
    Transaction log current relative offset is 0000968454
    That doesn't actually create the transaction log file... yet...

  3. Start Foxhound ...this is when the transaction log gets created.


What about restoring the database?     (top)


Trigger Warning: This section suggests that you might not care about your data :)

The really bad news about Method 1 (getting rid of the transaction log) is that no, even if you can restore the database file from a backup after it is lost or corrupted, you will not be able to "roll forward" the latest changes... because you don't have a transaction log.[13][18]

But... maybe you don't care about the latest changes. Maybe you don't even care about restoring the database. Maybe Foxhound is like the GPS in a car: if it breaks you replace it; you don't fret about losing the record of all your trips.

The GPS analogy works like this:
  • The GPS record of all your trips is analogous to the performance statistics that Foxhound gathers automatically.

  • The GPS record of all your favorite locations is analogous to the options and settings you manually enter into Foxhound.
For more about backing up your database even when it doesn't have a transaction log, see Method 7: Use a combination of methods.

Tip: You can restore the Foxhound database to its empty pre-installation state without having to reinstall Foxhound:
  1. Stop Foxhound.
  2. Delete C:\ProgramData\RisingRoad\Foxhound4\foxhound4.db
  3. Delete C:\ProgramData\RisingRoad\Foxhound4\foxhound4.log
  4. Copy the file foxhound4.db.4.0.4740.ORIGINAL_COPY from C:\ProgramData\RisingRoad\Foxhound4\setup to C:\ProgramData\RisingRoad\Foxhound4
  5. Rename it to foxhound4.db
  6. Start Foxhound.
  7. Enter your activation key.


Method 2: Automatically truncate the log on every checkpoint with dbsrv -m     (top)


This is an easy solution for both setup and administration: Simply add the -m option[14] to the SQL Anywhere server command line and all your problems vanish because a new empty transaction log is created and the old fat log is deleted every time SQL Anywhere performs a checkpoint... which is every few minutes or so.

Here's an excerpt showing how to modify the Foxhound 4 command files:
[snip]
"!SDIR!\!BIN!\dbspawn.exe"^
  -f "!SDIR!\!BIN!\!SPGM!"^
  -c 25p^
  -ch 50p^
  -cr-^
  -gk all^
  -gn 220^
  -gna 0^
  -n foxhound4^
  -o foxhound4_debug.txt^
  -oe foxhound4_debug_startup.txt^
  -on 1M^
  -sb 0^
  -ufd restart^
  -x tcpip^
  -xd^
  -xs http(port=80;maxsize=0;to=600;kto=600)^
  foxhound4.db^
  -m^
  -n f
[snip]
Here are the files that contain that code; you only need to change the ones you're going to use:
Foxhound4 Shortcut in Windows 10 Menu       Command File in C:\ProgramData\RisingRoad\Foxhound4 
------------------------------------------  ---------------------------------------------------
Backup Foxhound Database                    $backup_foxhound4.bat
Start Foxhound Engine                       $start_foxhound4_engine.bat
Start Foxhound via Chrome                   $start_foxhound4_chrome.bat
Start Foxhound via Chrome - debug           $start_foxhound4_chrome_debug.bat
Start Foxhound via default browser          $start_foxhound4_default_browser.bat 
Start Foxhound via default browser - debug  $start_foxhound4_default_browser_debug.bat
Start Foxhound via Firefox                  $start_foxhound4_firefox.bat
Start Foxhound via Firefox - debug          $start_foxhound4_firefox_debug.bat
Start Foxhound via IE                       $start_foxhound4_ie.bat
Start Foxhound via IE - debug               $start_foxhound4_ie_debug.bat


Method 3: Regularly truncate the log with dbbackup -xo     (top)


Instead of automatically truncating the log with every checkpoint, you can do it manually with a dbbackup -xo[16] command line, or automatically using the Windows Task Scheduler.

The Help says it best: dbbackup -xo[15] "Deletes the current transaction log and starts a new one. This operation does not perform a backup; its purpose is to free up disk space."

The following command file
  • assumes you're using the 64-bit version of SQL Anywhere 17 to run Foxhound (as opposed to the 32-bit version and/or SQL Anywhere 16),

  • assumes the backup subfolder already exists in C:\ProgramData\RisingRoad\Foxhound4, and

  • uses the built-in Foxhound credentials UID=BACKER; PWD=SQL; to run dbbackup.
Run dbbackup -xo on Foxhound 4
"%SQLANY17%\bin64\dbbackup.exe"^
  -c "ENG=foxhound4; DBN=f; UID=BACKER; PWD=SQL;"^
  -o "C:\ProgramData\RisingRoad\Foxhound4\backup\dbbackup_xo_messages.txt"^
  -xo

PAUSE

SQL Anywhere Backup Utility Version 17.0.9.4803
Transaction log truncation completed
This method
  • is not as easy to set up as Method 2 (truncate log on checkpoint),

  • or as easy to administer as Method 2 or Method 1 (get rid of the log),
because you are responsible
  • for manually running the command file

  • or for setting it up in the Task Scheduler[17] and making sure it actually runs.
However, this method may have a slight performance advantage over Method 2 (truncate log on checkpoint) because you can schedule a once-a-night dbbackup -xo run rather than forcing the SQL Anywhere server to truncate the log during prime time. In fact, checkpoints happen more often during heavy update periods, which is exactly when you don't want extra file-handling operations.


Method 4: Regularly truncate the log with BACKUP ... TRUNCATE;     (top)


You can run this exact SQL statement to yield the same result as Method 3 (run dbbackup -xo):
Run BACKUP ... TRUNCATE[19] on any database
BACKUP DATABASE DIRECTORY '' TRANSACTION LOG ONLY TRANSACTION LOG TRUNCATE;
The result may be the same, but the setup and administration are completely different; the following comparison applies to all usage of dbbackup and BACKUP, not just truncating the log:
dbbackup.exe Utility                                       BACKUP Statement
Connects to the server as a client                         Runs as a SQL statement inside the database
Requires an explicit connection string                     Implicitly uses the current connection
Usually run from a command line                            Rarely run from a command line using dbisql
Often run manually                                         Rarely run manually using dbisql
Often run via Task Scheduler                               Rarely run via Task Scheduler using dbisql
Rarely launched from inside the database via xp_cmdshell   Often run from inside a SQL EVENT or PROCEDURE
Rarely run from a scheduled SQL EVENT using xp_cmdshell    Often run from inside a scheduled SQL EVENT
Rarely launched from inside a client application           Easily run from inside a connected client application
Favored by procedural command-line fans                    Favored by declarative SQL fans

The Foxhound 4 database doesn't let you embed a BACKUP statement inside a PROCEDURE or scheduled EVENT[20], but you can run it via dbisql:
Run BACKUP ... TRUNCATE on Foxhound 4
"%SQLANY17%\bin64\dbisql.exe"^
  -c "ENG=foxhound4; DBN=f; UID=BACKER; PWD=SQL;"^
  -nogui^
  BACKUP DATABASE DIRECTORY '' TRANSACTION LOG ONLY TRANSACTION LOG TRUNCATE

IF ERRORLEVEL 1 ( ECHO *** ERROR - BACKUP TRUNCATE statement failed ) ELSE ( ECHO **** BACKUP TRUNCATE statement ran OK )

PAUSE

**** BACKUP TRUNCATE statement ran OK

Tip: Production-quality command-line scripts should have much better record keeping and error handling than shown here, especially if the scripts are run via Task Scheduler while you're at home asleep. A similar assertion can be made about BACKUP statements inside scheduled events; unlike the Task Scheduler, by default SQL Anywhere keeps no record at all of whether an event succeeded or failed... or failed to run at all.
When the BACKUP statement is used to perform a "real" backup (as opposed to just truncating the log), it performs better than dbbackup because it runs inside the server and doesn't pass all the backup data across the client-server boundary. However, neither Method 3 nor Method 4 deals with data, so neither one performs better than the other.
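
For an ordinary database that does allow BACKUP inside events, a scheduled event keeps Method 4 entirely inside the server with nothing to set up in Windows. Here's a minimal sketch; the event name, schedule name and 2:00 AM schedule are assumptions, so adapt them to taste:

Sketch: scheduled event running BACKUP ... TRUNCATE on an ordinary database
CREATE EVENT truncate_log
SCHEDULE truncate_log_nightly
   START TIME '02:00' EVERY 24 HOURS
HANDLER BEGIN
   -- Same statement as Method 4: no backup is taken, the log is just restarted.
   BACKUP DATABASE DIRECTORY ''
      TRANSACTION LOG ONLY
      TRANSACTION LOG TRUNCATE;
END;

Remember the Tip above: by default SQL Anywhere keeps no record of whether an event ran, so production versions of this sketch should log their own success or failure somewhere.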


Method 5: Regularly make a full backup with dbbackup -x     (top)


If you love your old Foxhound samples and options, this is a good solution because it protects your data: The dbbackup -x command takes a full backup of the Foxhound database and log, renames the old log file and creates a new empty one.

The Foxhound4 - Tools - Backup Foxhound Database shortcut[21] can be run repeatedly to keep only the three latest backup copies... it's the "keep only the three latest backup copies" part that controls the size of the log as long as you run it more often than [cough] "never" :)

Here's an excerpt from the underlying command file in C:\ProgramData\RisingRoad\Foxhound4:

Excerpt from $backup_foxhound4.bat
[snip]
"!SDIR!\!BIN!\dbbackup.exe"^
  -c "ENG=foxhound4;DBN=f;UID=BACKER;PWD=SQL"^
  -o backup\generation_temp\backup_log.txt^
  -x^
  backup\generation_temp
[snip]

Tue 07/31/2018  8:49:28.23 ********************************* 
Tue 07/31/2018  8:49:28.23 *** Foxhound full backup started 
Tue 07/31/2018  8:49:28.23 *** Starting dbsrv16 if necessary... 
Tue 07/31/2018  8:49:33.16 *** Starting full dbbackup... 
SQL Anywhere Backup Utility Version 17.0.9.4803
 (57610 of estimated 57610 pages, 100% complete)
Transaction log truncated
Database backup completed
Tue 07/31/2018  8:49:40.12 *** Moving backup to the generation3 subfolder  
Tue 07/31/2018  8:49:40.12 *** Foxhound full backup OK 

( Warts and all... Note the imprecise reference to dbsrv16 in 
  the ECHO message... the test was run using SQL Anywhere 17 :)
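
The "keep only the three latest backup copies" rotation isn't shown in the excerpt; the general idea can be sketched like this, where the folder shuffle is an assumption about the scheme, not a copy of Foxhound's actual code:

Sketch: keep only the three latest backup generations
REM Hypothetical rotation: generation1 is oldest, generation3 is newest.
RD /S /Q backup\generation1
MOVE backup\generation2    backup\generation1
MOVE backup\generation3    backup\generation2
MOVE backup\generation_temp backup\generation3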


Method 6: Do #5, then make incremental log backups with dbbackup -n -t -x     (top)


You can back up and truncate the log without backing up the database every time by combining Method 5 (full backup) with multiple runs of Method 6 (incremental log-only backups).

If you love your old Foxhound samples and options, this is a better solution because it increases the likelihood that you'll be able to restore recent changes after a disaster.

The Foxhound4 - Tools - Backup Foxhound Transaction Log shortcut can be run repeatedly to store multiple log backups together with the latest database backup.

Here's an excerpt from the underlying command file in C:\ProgramData\RisingRoad\Foxhound4:

Excerpt from $backup_foxhound4_log.bat
[snip]
"!SDIR!\!BIN!\dbbackup.exe"^
  -c "ENG=foxhound4;DBN=f;UID=BACKER;PWD=SQL"^
  -o backup\generation3\backup_log.txt^
  -n^
  -t^
  -x^
  backup\generation3\logs
[snip]

Tue 07/31/2018  9:11:42.80 ********************************* 
Tue 07/31/2018  9:11:42.80 *** Foxhound log backup started 
Tue 07/31/2018  9:11:42.82 *** Starting dbbackup... 
SQL Anywhere Backup Utility Version 17.0.9.4803
 (13 of estimated 13 pages, 100% complete)
Transaction log truncated
Database backup completed
Tue 07/31/2018  9:11:44.01 *** Foxhound log backup OK 


Method 7: Use a combination of methods     (top)


One combination is Method 1 plus Method 5 to protect your intellectual property (all the work you did to set up the Foxhound options and settings) but not necessarily protect all samples:
  • Use Method 1 (get rid of the log) to prevent the log from growing too large, then

  • run Method 5 (full backup) every time you make changes to the Foxhound options and settings; note that Method 5 doesn't back up the transaction log because it doesn't exist.
Here's how to restore after the foxhound4.db file is lost or damaged:
  1. Delete the old foxhound4.db file if it still exists in C:\ProgramData\RisingRoad\Foxhound4.

  2. Replace it by copying the backup foxhound4.db from C:\ProgramData\RisingRoad\Foxhound4\backup\generation3.
    There's no foxhound4.log file to restore because a transaction log isn't being used.

  3. Start Foxhound.
    Foxhound will automatically restart sampling the target database(s).
    There will be a gap in the sample timestamps[22] around a message like this:
    Recent Samples  Interval
      1:01:53 PM     3m 53s   -- Foxhound stopped -- 
    
    Any changes to Foxhound options you made before the backup will be restored, but not the changes made afterwards.
Another combination is Method 2 plus Method 5, a slightly different way to accomplish the same thing:
Recommended for Foxhound
  • Use Method 2 (truncate log on checkpoint) to prevent the log from growing too large, then

  • run Method 5 (full backup) every time you make changes to the Foxhound options and settings.
The restore works the same way, except now there's a backup transaction log file involved:
  1. Delete the old foxhound4.db file if it still exists in C:\ProgramData\RisingRoad\Foxhound4.

  2. Replace it by copying the backup foxhound4.db from C:\ProgramData\RisingRoad\Foxhound4\backup\generation3.

  3. Delete the old foxhound4.log file if it still exists in C:\ProgramData\RisingRoad\Foxhound4.

  4. Replace it by copying the backup foxhound4.log from C:\ProgramData\RisingRoad\Foxhound4\backup\generation3.

  5. Start Foxhound.
    Foxhound will automatically restart sampling the target database(s).
    There will be a gap in the sample timestamps around a message like this:
    Recent Samples  Interval
      1:53:51 PM     2m 54s   -- Foxhound stopped -- 
    
    Any changes to Foxhound options you made before the backup will be restored, but not the changes made afterwards.
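
The restore steps above can be collected into a command file; this sketch assumes Foxhound (and the foxhound4 server) has already been shut down, and that backup\generation3 holds the backup you want to restore:

Sketch: restore foxhound4.db and foxhound4.log from the latest backup
CD /D "C:\ProgramData\RisingRoad\Foxhound4"
REM Steps 1 and 3: delete the old files if they still exist.
IF EXIST foxhound4.db  DEL foxhound4.db
IF EXIST foxhound4.log DEL foxhound4.log
REM Steps 2 and 4: copy the backup copies into place.
COPY backup\generation3\foxhound4.db  .
COPY backup\generation3\foxhound4.log .
REM Step 5: start Foxhound via its shortcut, e.g. "Start Foxhound via Firefox".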


Recommendations     (top)


For a Foxhound database, Method 2 (truncate log on checkpoint) is a good approach, followed closely by Method 1 (get rid of the log), but the recommended solution is 2-plus-5 (truncate log on checkpoint plus occasional backup).

For a non-Foxhound database, the right answer depends on your answers to questions like "Do you care about your data?".


External References     (top)


1. ^ Foxhound third-party database monitor for SAP® SQL Anywhere® [Foxhound 4 Home] [White Paper] [FAQ] [Help]
2. ^ The dbsrv -f option forces the database server to start after the transaction log has been lost [DCX Help] [Help Portal]
3. ^ Tip: Always use a transaction log [DCX Help] [Help Portal]
4. ^ How To Make SQL Anywhere Slow [SQLAnywhere Blog]
5. ^ Transaction Log utility (dblog): Administers the transaction log for a database [DCX Help] [Help Portal]
6. ^ The automatic recovery process after a system failure [DCX Help] [Help Portal]
7. ^ Checkpoints is the number of CHECKPOINT operations that have been executed by the server in the previous interval [Foxhound Help]
8. ^ Throughput, also known as bandwidth, is a measure of how much work the database has performed [Foxhound Help]
9. ^ CPU percent used during the preceding interval [Foxhound Help]
10. ^ Waiting Time is the total time all current connections were blocked or forced to wait during the previous interval [Foxhound Help]
11. ^ Interval time between the previous sample and this one [Foxhound Help]
12. ^ Response time, also known as latency, is a measure of how long it takes the database to respond to a single request [Foxhound Help]
13. ^ The transaction log is also called the forward log or the redo log [DCX Help] [Help Portal]
14. ^ -m database option truncates the transaction log when a checkpoint is performed [DCX Help] [Help Portal]
15. ^ Backup utility (dbbackup) [DCX Help] [Help Portal]
16. ^ Quick! How do I truncate the transaction log? [SQLAnywhere Blog]
17. ^ Task Scheduler in Windows 10: Start - Windows Administrative Tools - Task Scheduler [Wikipedia][Microsoft Docs]
18. ^ Point-in-time recovery (PITR) offers you a way to restore a database to a moment in time [DCX Help] [Help Portal]
19. ^ BACKUP DATABASE: The transaction log can be renamed and restarted without completing a backup [DCX Help] [Help Portal]
20. ^ CREATE EVENT statement: Defines an event and its associated handler for automating scheduled actions [DCX Help][Help Portal]
21. ^ Foxhound Introduction and Setup - Backup: It is a good idea to take regular backups of the Foxhound database [Foxhound Help]
22. ^ Recent Samples shows the date/time that Foxhound recorded each sample [Foxhound Help]
23. ^ Control transaction log file growth by ensuring that all your tables have compact primary keys [DCX Help] [Help Portal]
24. ^ Log Translation utility (dbtran): Translates a transaction log into a SQL script file [DCX Help] [Help Portal]
25. ^ The Foxhound purge process is automatically launched every 20 minutes, 24 hours a day [Foxhound Help]
26. ^ Auditing tracks database activity and saves it in the transaction log, the system event log or a user-defined file [DCX Help] [Help Portal]


Bonus Tip     (top)


Tip: If you run a lot of UPDATEs and DELETEs on a big table, the transaction log will grow much faster if the table doesn't have a PRIMARY KEY or UNIQUE constraint[23].

Here's what that looks like when dbtran[24] is used to display the transaction log; the "good table" output comes first, followed by the "bad table" output:

Good table (with a PRIMARY KEY)
--CONNECT-1004-0000679704-DBA-2018-07-19 04:07
--BEGIN TRANSACTION-1004-0000679715
BEGIN TRANSACTION
go

--SQL-1004-0000679718
create table "DBA"."t"(
  "column1" integer not null,
  "column2" integer not null,
  "column3" varchar(100) not null,
primary key("column1"),
)
go

--COMMIT-1004-0000679883
COMMIT WORK
go

--BEGIN TRANSACTION-1004-0000679886
BEGIN TRANSACTION
go

--INSERT-1004-0000679985
INSERT INTO DBA.t(column1,column2,column3)
VALUES (1,2,'This is a big fAt string.')
go

--UPDATE-1004-0000680026
UPDATE DBA.t SET column2=222 WHERE column1=1
go

--UPDATE-1004-0000680043
UPDATE DBA.t SET column3='THIS IS A BIG FAT STRING.' WHERE column1=1
go

--DELETE-1004-0000680082
DELETE FROM DBA.t WHERE column1=1
go

--COMMIT-1004-0000680093
COMMIT WORK
go

Bad table (no PRIMARY KEY)
--CONNECT-1004-0000679704-DBA-2018-07-19 04:07
--BEGIN TRANSACTION-1004-0000679715
BEGIN TRANSACTION
go

--SQL-1004-0000679718
create table "DBA"."t"(
  "column1" integer not null,
  "column2" integer not null,
  "column3" varchar(100) not null,
  )
go

--COMMIT-1004-0000679856
COMMIT WORK
go

--BEGIN TRANSACTION-1004-0000679859
BEGIN TRANSACTION
go

--INSERT-1004-0000679954
INSERT INTO DBA.t(column1,column2,column3)
VALUES (1,2,'This is a big fAt string.')
go

--UPDATE-1004-0000679995
UPDATE FIRST DBA.t SET column2=222 WHERE column1=1 AND column2=2 AND column3='This is a big fAt string.'
go

--UPDATE-1004-0000680042
UPDATE FIRST DBA.t SET column3='THIS IS A BIG FAT STRING.' WHERE column1=1 AND column2=222 AND column3='This is a big fAt string.'
go

--DELETE-1004-0000680111
DELETE FIRST FROM DBA.t WHERE column1=1 AND column2=222 AND column3='THIS IS A BIG FAT STRING.'
go

--COMMIT-1004-0000680152
COMMIT WORK
go


(top)