
Wednesday, February 2, 2011

Capturing the Server Console Log

"Server Console" is a fancy term for the SQL Anywhere database server window, and "Server Console Log" refers to the messages from the server that appear in that window:

Actually, it's probably more accurate to define "Server Console" as the SQL Anywhere Console utility, and the "Server Console Log" as the Messages pane displayed by that utility:
"%SQLANY12%\bin32\dbconsole.exe"^
-c "ENG=ddd12;DBN=ddd12;UID=dba;PWD=sql"

But that's not important... what matters is that the messages come from the database server, and what's really important is to...

...Always Capture The Server Console Log In Production!


The easiest way to do that is to specify the dbsrv12 -o filespec.txt command line option to save the console log to a text file:
"%SQLANY12%\bin32\dbsrv12.exe"^
-o dbsrv12_log_ddd12.txt^
-oe dbsrv12_log_fatal_ddd12.txt^
-os 10M^
ddd12.db
While you're at it, specify the -os option to rename and restart the filespec.txt file when it grows too large; limits between 1M and 10M are reasonable, beyond which the file may become unwieldy when you need to browse it.
Tip: If you're starting the server as a service, you will probably need to include the full drive and path in the -o filespec.txt option.
The -oe different_filespec.txt option tells SQL Anywhere to put Really Important Messages in a separate file: startup and fatal errors, assertions, and Catch-22 messages. Here's an example of a Catch-22 message: the server can't start because it can't open the -o filespec.txt file, because that file is already open, because the server is already running:
01/30 05:12:07. Can't open Message window log file: dbsrv12_log_ddd12.txt
Why is it important to capture the server console log? Because if you don't, and the server displays a Really Important Message and then crashes, the message is gone gone gone, never to be seen again.

Another Way


There's another way to look at the server console log besides dbconsole and the -o file: the sa_server_messages() procedure. This is not a substitute for always capturing the log via -o filespec.txt; it is an alternative that lets you write SQL SELECT statements to query the messages:
MESSAGE 'Hello, World!' TO CONSOLE;

SELECT *
FROM sa_server_messages()
ORDER BY sa_server_messages.msg_id;
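To get a feel for what kinds of messages are available, one straightforward approach (a hedged sketch, using only the columns shown in the examples in this article) is to count them by category:

```sql
SELECT sa_server_messages.msg_category,
       COUNT(*) AS message_count
  FROM sa_server_messages()
 GROUP BY sa_server_messages.msg_category
 ORDER BY message_count DESC;
```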

(Why isn't sa_server_messages() a substitute for -o filespec.txt? Because sa_server_messages() only works while the server is running, so if the server crashes you still need the -o file. Plus, sa_server_messages() only shows you messages produced since the server started... and if the server has been running for a long time, it will only show you the most recent of those. But besides all that, it's still pretty cool :) )
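One way to soften that "only while the server is running" limitation is to copy the messages somewhere permanent as they arrive. Here's a hedged, untested sketch: the table and event names are made up, only the columns shown in this article are used, and the msg_id datatype is an assumption:

```sql
-- Permanent home for archived console messages.
CREATE TABLE saved_server_messages (
   msg_id         BIGINT NOT NULL PRIMARY KEY,  -- datatype assumed
   msg_time       TIMESTAMP,
   msg_category   VARCHAR ( 10 ),
   msg_text       LONG VARCHAR );

-- Every minute, copy any messages newer than the last one saved.
CREATE EVENT save_server_messages
SCHEDULE save_server_messages_schedule
   START TIME '00:00:00' EVERY 1 MINUTES
HANDLER BEGIN
   INSERT saved_server_messages ( msg_id, msg_time, msg_category, msg_text )
   SELECT msg_id, msg_time, msg_category, msg_text
     FROM sa_server_messages()
    WHERE msg_id > COALESCE ( ( SELECT MAX ( msg_id )
                                  FROM saved_server_messages ), -1 );
   COMMIT;
END;
```

Of course, if the server crashes, any messages produced since the last event firing are still lost, so this is a complement to -o filespec.txt, not a replacement for it.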
Here's a simple query that shows all the checkpoints:
SELECT sa_server_messages.msg_text,
       sa_server_messages.msg_time
  FROM sa_server_messages()
 WHERE sa_server_messages.msg_category = 'CHKPT'
 ORDER BY sa_server_messages.msg_id;

Here's a not-so-simple query that calculates how long each checkpoint took (surely, someone can write a better query, i.e., simpler, maybe with an OLAP WINDOW)...

BEGIN

DECLARE LOCAL TEMPORARY TABLE checkpoint_record (
   checkpoint_starting   TIMESTAMP NOT NULL PRIMARY KEY,
   checkpoint_finished   TIMESTAMP )
   NOT TRANSACTIONAL;

INSERT checkpoint_record ( checkpoint_starting )
SELECT sa_server_messages.msg_time
  FROM sa_server_messages()
 WHERE sa_server_messages.msg_category = 'CHKPT'
   AND sa_server_messages.msg_text LIKE 'Starting %';

FOR f_fetch AS c_fetch INSENSITIVE CURSOR FOR
SELECT checkpoint_record.checkpoint_starting AS @checkpoint_starting
  FROM checkpoint_record
 ORDER BY checkpoint_record.checkpoint_starting
FOR READ ONLY
DO

   UPDATE checkpoint_record
      SET checkpoint_record.checkpoint_finished = sa_server_messages.msg_time
     FROM ( SELECT TOP 1 sa_server_messages.msg_time
              FROM sa_server_messages()
             WHERE sa_server_messages.msg_category = 'CHKPT'
               AND sa_server_messages.msg_text LIKE 'Finished %'
               AND sa_server_messages.msg_time >= @checkpoint_starting
             ORDER BY sa_server_messages.msg_time ) AS sa_server_messages
    WHERE checkpoint_record.checkpoint_starting = @checkpoint_starting;

END FOR;

SELECT *,
       DATEDIFF ( MILLISECOND,
                  checkpoint_record.checkpoint_starting,
                  checkpoint_record.checkpoint_finished ) AS msec
  FROM checkpoint_record
 ORDER BY checkpoint_record.checkpoint_starting;

END;

checkpoint_starting      checkpoint_finished      msec
2011-01-30 05:11:32.000  2011-01-30 05:11:32.281   281
2011-01-30 05:31:33.453  2011-01-30 05:31:33.937   484
2011-01-30 05:51:35.046  2011-01-30 05:51:35.515   469
2011-01-30 06:11:36.640  2011-01-30 06:11:37.078   438
2011-01-30 06:31:38.234  2011-01-30 06:31:38.781   547
2011-01-30 06:51:39.937  2011-01-30 06:51:41.125  1188
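For what it's worth, the FOR loop and temporary table can probably be collapsed into a single query. Here's one hedged, untested sketch (no OLAP WINDOW after all, just a correlated subquery pairing each 'Starting' message with the earliest 'Finished' message at or after it; it also relies on SQL Anywhere's extension that lets later select list items reference the checkpoint_finished alias):

```sql
SELECT starting.msg_time AS checkpoint_starting,
       -- earliest matching 'Finished' message for this checkpoint
       ( SELECT MIN ( finished.msg_time )
           FROM sa_server_messages() AS finished
          WHERE finished.msg_category = 'CHKPT'
            AND finished.msg_text LIKE 'Finished %'
            AND finished.msg_time >= starting.msg_time ) AS checkpoint_finished,
       DATEDIFF ( MILLISECOND,
                  starting.msg_time,
                  checkpoint_finished ) AS msec
  FROM sa_server_messages() AS starting
 WHERE starting.msg_category = 'CHKPT'
   AND starting.msg_text LIKE 'Starting %'
 ORDER BY checkpoint_starting;
```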
Besides being easily searched and having extra columns like msg_category, the sa_server_messages() result set offers this distinct advantage: the timestamps are more precise; e.g., 2011-01-30 05:17:57.578 instead of 01/30 05:17:57, with the full date and milliseconds included.

2 comments:

ron hiner said...

Thanks Breck... I didn't know about sa_server_messages, but I've often wondered why these messages can't be written to a table. Must be a design decision -- what do you think?

Anonymous said...

@Ron: I can think of two reasons:
1. The server log is server-specific, not database-specific. Which database should store the messages if the server runs several databases (or starts without one)?
2. At least the startup messages can't be written to a table, because they describe what the server has to do to initialize and recover a database before the database is ready to use.

That being said, it might not be too difficult to write an event (type "ServerIdle"?) that periodically checks whether new messages have been generated and stores them in a table of your choice...


@Breck: That's a summary to be told to the world - using -o is just sooo easy and too often overlooked.

Volker