Of particular interest to me was this point from slide 6:
Self-management is largely all about performance
- A significant exception: error handling
If you don't think error handling is a big deal, have a look at this conversation...
Seriously, it doesn't say "Oracle" in the title of this article, and the points it makes apply to a lot of non-Oracle shops including some using SQL Anywhere:
The Seven Deadly Habits of a DBA
But... the writer IS talking about Oracle, and Habit #5 does apply to every single large Oracle (and IBM) shop I've ever dealt with.
Habit #5. THE BLAME GAME: "Don't look at me, it's the developer's fault that SQL is in production"
Some DBAs have a real "us versus them" mentality when it comes to developers in their organization. They see themselves not as facilitators helping the developers develop quality code from a database standpoint, but rather as guardians who prevent poor-quality code from making it into production. This might seem like semantics, but a confrontational relationship between developers and DBAs results in a lack of developer initiative and significant slowdowns in release cycles.
Cures:
- Select DBAs who understand it's their responsibility to work as an integrated team with the developers they support.
- Cultivate a team attitude by structuring continuous DBA involvement in every project rather than at review milestones.
- Consider assigning an individual DBA in a developer support role. If it's clearly in the job description, there's more motivation to do it well.
Did you know that DATEDIFF returns a signed INTEGER value? Not BIGINT?
SELECT EXPRTYPE ( 'SELECT DATEDIFF ( DAY, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 ) AS "Data Type";
Data Type
'int'
BEGIN
DECLARE @sqlcode INTEGER;
DECLARE @sqlstate VARCHAR ( 5 );
DECLARE @errormsg VARCHAR ( 32767 );
DECLARE @ok BIGINT;
DECLARE @splat BIGINT;
SELECT DATEDIFF ( MILLISECOND,
CURRENT TIMESTAMP,
DATEADD ( DAY, 24, CURRENT TIMESTAMP ) )
INTO @ok;
BEGIN
SELECT DATEDIFF ( MILLISECOND,
CURRENT TIMESTAMP,
DATEADD ( DAY, 25, CURRENT TIMESTAMP ) )
INTO @splat;
EXCEPTION WHEN OTHERS THEN
SELECT SQLCODE, SQLSTATE, ERRORMSG()
INTO @sqlcode, @sqlstate, @errormsg;
MESSAGE STRING (
'EXCEPTION raised by "SELECT INTO @splat" at ',
CURRENT TIMESTAMP,
': SQLCODE = ', @sqlcode,
', SQLSTATE = ', @sqlstate,
', ERRORMSG() = ', @errormsg )
TO CLIENT;
END;
SELECT @ok, @splat;
END;
@ok,@splat
2073600000,(NULL)
EXCEPTION raised by "SELECT INTO @splat" at 2010-10-16 08:52:49.610: SQLCODE = -158, SQLSTATE = 22003, ERRORMSG() = Value datediff(millisecond,2010-10-16 08:52:49.610,2010-11-10 08:52:49.610) out of range for destination
You can probably figure out the exact "splat!" point between 24 and 25 days using DATEADD ( HOUR, ... ) or MINUTE or even SECOND. Note, however, that DATEDIFF ( SECOND, ... ) and MINUTE have their own splat! points, and the effect of multiple splat! points on code verbosity will soon become apparent.
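Spoiler alert, offered as a sketch rather than gospel: since SQLSTATE 22003 just means the result overflowed a signed INTEGER destination, the exact splat! point should be 2,147,483,647 milliseconds, which works out to 24 days 20:31:23.647:

```sql
-- 24 days 20:31:23.647 = 2,147,483,647 msec, the INTEGER maximum and thus
-- the largest value DATEDIFF ( MILLISECOND, ... ) can possibly return.
SELECT DATEDIFF ( MILLISECOND,
                  CURRENT TIMESTAMP,
                  DATEADD ( MILLISECOND, 647,
                  DATEADD ( SECOND,      23,
                  DATEADD ( MINUTE,      31,
                  DATEADD ( HOUR,        20,
                  DATEADD ( DAY,         24, CURRENT TIMESTAMP ) ) ) ) ) ) AS msec;
```

Adding one more millisecond to the second argument should raise the same SQLCODE -158 exception shown above.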
Sounds like a job for FLOAT? Hah! ...don't talk to me about floating point numbers, they're icky sloppy things, not to be touched or handled without gloves.
One workaround is to accept FLOAT-like behavior in a BIGINT value (large values are not perfectly precise), and turn failing DATEDIFF ( MILLISECOND, ... ) calls into ones that work: DATEDIFF ( SECOND, ... ) * 1000, DATEDIFF ( MINUTE, ... ) * 60 * 1000 and so on.
Yes, I am a Data Type Bigot and proud of it. In olden days my motto was, "If it ain't greater than zero and less than 32767 I'm not interested!"
Now it's "Give me fixed point or give me death!"
Besides, DATEDIFF returns an INTEGER, and it *still* craps out at 25 days if you CAST it as FLOAT.
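Here's that workaround in isolation, a minimal sketch using the failing 25-day case from above; the CAST to BIGINT does the heavy lifting so the multiplication can't overflow:

```sql
-- DATEDIFF ( SECOND, ... ) fits comfortably in an INTEGER at 25 days
-- (2,160,000 seconds), and the BIGINT multiplication by 1000 recovers
-- an approximate millisecond value without raising SQLCODE -158.
SELECT CAST ( DATEDIFF ( SECOND,
                         CURRENT TIMESTAMP,
                         DATEADD ( DAY, 25, CURRENT TIMESTAMP ) ) AS BIGINT )
          * 1000 AS approximate_msec;  -- 2160000000
```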
The table rroad_purge_run is used for monitoring Foxhound itself, in particular the internal database purge process.
CREATE TABLE rroad_purge_run (
run_number BIGINT NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY, -- do not INSERT or UPDATE this column
progress VARCHAR ( 100 ) NOT NULL DEFAULT 'Starting',
started_at TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP, -- do not INSERT or UPDATE this column
is_complete VARCHAR ( 1 ) NOT NULL DEFAULT 'N' CHECK ( is_complete IN ( 'Y', 'N' ) ),
completed_at TIMESTAMP NOT NULL DEFAULT TIMESTAMP, -- do not INSERT or UPDATE this column
must_trigger_next_purge VARCHAR ( 1 ) NOT NULL DEFAULT 'N' CHECK ( must_trigger_next_purge IN ( 'Y', 'N' ) ),
sample_purge_interval VARCHAR ( 100 ) NOT NULL DEFAULT '',
uninteresting_connections_purge_interval VARCHAR ( 100 ) NOT NULL DEFAULT '',
purge_speed VARCHAR ( 100 ) NOT NULL DEFAULT '',
exception_delete_count BIGINT NOT NULL DEFAULT 0,
orphan_sample_set_delete_count BIGINT NOT NULL DEFAULT 0,
old_sample_set_delete_count BIGINT NOT NULL DEFAULT 0,
uninteresting_connections_delete_count BIGINT NOT NULL DEFAULT 0,
exception_delete_msec INTEGER NOT NULL DEFAULT 0,
orphan_sample_set_delete_msec INTEGER NOT NULL DEFAULT 0,
old_sample_set_delete_msec INTEGER NOT NULL DEFAULT 0,
uninteresting_connections_delete_msec INTEGER NOT NULL DEFAULT 0,
run_msec BIGINT NOT NULL COMPUTE (
CASE
WHEN ABS ( DATEDIFF ( YEAR, started_at, completed_at ) ) >= 4083
THEN CAST ( DATEDIFF ( HOUR, started_at, completed_at ) AS BIGINT ) * 60 * 60 * 1000
WHEN ABS ( DATEDIFF ( YEAR, started_at, completed_at ) ) >= 68
THEN CAST ( DATEDIFF ( MINUTE, started_at, completed_at ) AS BIGINT ) * 60 * 1000
WHEN ABS ( DATEDIFF ( DAY, started_at, completed_at ) ) >= 24
THEN CAST ( DATEDIFF ( SECOND, started_at, completed_at ) AS BIGINT ) * 1000
ELSE DATEDIFF ( MILLISECOND, started_at, completed_at )
END ) );
Here's another simulation showing how the COMPUTE works for 23 versus 25 days; there's no EXCEPTION, but there is also a slight loss of precision (the 11 milliseconds is lost: 1987200011 versus 2160000000):
INSERT rroad_purge_run ( run_number ) VALUES ( DEFAULT ); -- work begins
WAITFOR DELAY '00:00:00.5'; -- some work is done
UPDATE rroad_purge_run
SET exception_delete_count = 123 -- a column is changed
WHERE run_number = 1;
SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;
started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-10-16 08:11:38.503',593
WAITFOR DELAY '00:00:00.5'; -- more work is done
UPDATE rroad_purge_run
SET orphan_sample_set_delete_count = 567 -- another column is changed
WHERE run_number = 1;
SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;
started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-10-16 08:11:39.050',1140
UPDATE rroad_purge_run
SET completed_at = DATEADD ( DAY, 23, DATEADD ( MILLISECOND, 11, started_at ) )
WHERE run_number = 1;
SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;
started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-11-08 08:11:37.921',1987200011
UPDATE rroad_purge_run
SET completed_at = DATEADD ( DAY, 25, DATEADD ( MILLISECOND, 11, started_at ) )
WHERE run_number = 1;
SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;
started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-11-10 08:11:37.921',2160000000
Well, maybe not all of Sybase, but surely one part is hiring like the recession's over. Do this Google search, see how many hits you get:
jobs "Sybase Federal, an SAP GSS Company"
A little bird told me that the successful 2010 US Census project has been a real door-opener for SQL Anywhere and MobiLink inside the US federal government.
For those of you who don't read SQLA, here's an excerpt from a recent "Sybase Partner News" email (the emphasis is mine):
Sybase Wins Cisco's Annual Supplier Appreciation Award for Software Excellence
September 30, 2010 - presented during its annual Supplier Appreciation Conference, this is the first year Cisco recognized software suppliers in addition to several categories for hardware suppliers. Cisco currently embeds SQL Anywhere in nearly 20 products, and cited Sybase iAnywhere's efforts to ensure Cisco's success with the technology. During the presentation, Cisco praised Sybase iAnywhere's unique approach, and the technical and relationship support provided by the Sybase team.
Back on October 9 I posted "What's going on here?" about the suspicious behavior of a simple SELECT with and without a WHERE clause.
The first line said "Can something this simple really be so wrong?" and the last line said "This is a bug in SQL Anywhere 11.0.1.2276, right?"
Anonymous said...
Well, [redacted] is not alone. I thought it was a bug, too, until someone pointed out to me...
October 14, 2010 10:53 AM
Well, don't leave us hanging... is this a bug or not?
[redacted] said...
I say it's a bug. The documentation pretty clearly states that the ORDER BY clause will be respected (and is in fact more or less required) for TOP and START AT.
Is there an index on the "data" column?
October 14, 2010 8:55 PM
Mark C. said...
If you got fooled into thinking it was a bug in SQL Anywhere, it might be because rows 1 through 9 were left out of "What's going on here?". I did that on purpose so you could be misled just like I was.
Breck: If there are five rows in the "top 10" of the original query where data != 'B' then these five rows would not be included in the second query's result set and this would cause the output of the second query to be different.
V Barth said...
Oh, a riddle?
October 9, 2010 11:21 AM
Phil said...
I suggest there are 5 rows in the pkey range 1-9 WHERE data = 'B' IS NOT TRUE. Therefore the rows 10-14 are still part of the 2nd select but get skipped by the START AT clause as they are now among the first 9 rows.
October 10, 2010 11:27 AM
What's the data for pkey 1 through 9? I'm not an expert, but if there are 5 rows with data != 'B', then I would expect this result. I would read this query as "give me rows 10 through 15 that meet these conditions" not "give me the rows that meet these conditions from rows 10 through 15."
October 11, 2010 11:45 AM
CREATE TABLE t (
pkey INTEGER NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
data VARCHAR ( 10 ) NOT NULL );
INSERT t VALUES ( 1, 'A' );
INSERT t VALUES ( 2, 'A' );
INSERT t VALUES ( 3, 'A' );
INSERT t VALUES ( 4, 'A' );
INSERT t VALUES ( 5, 'A' );
INSERT t VALUES ( 6, 'B' );
INSERT t VALUES ( 7, 'B' );
INSERT t VALUES ( 8, 'B' );
INSERT t VALUES ( 9, 'B' );
INSERT t VALUES ( 10, 'B' );
INSERT t VALUES ( 11, 'B' );
INSERT t VALUES ( 12, 'B' );
INSERT t VALUES ( 13, 'B' );
INSERT t VALUES ( 14, 'B' );
INSERT t VALUES ( 15, 'B' );
INSERT t VALUES ( 16, 'B' );
INSERT t VALUES ( 17, 'B' );
INSERT t VALUES ( 18, 'B' );
INSERT t VALUES ( 19, 'B' );
INSERT t VALUES ( 20, 'B' );
COMMIT;
SELECT TOP 5 START AT 10 *
FROM t
ORDER BY pkey;
pkey,data
10,'B'
11,'B'
12,'B'
13,'B'
14,'B'
SELECT TOP 5 START AT 10 *
FROM t
WHERE data = 'B'
ORDER BY pkey;
pkey,data
15,'B'
16,'B'
17,'B'
18,'B'
19,'B'
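In other words: pkey 1 through 5 hold 'A', so with the WHERE clause in place pkey 6 becomes row 1 of the result set, and START AT 10 lands on pkey 15. A quick way to see it:

```sql
-- Rows 1 through 10 of the filtered result set are pkey 6 through 15,
-- so "START AT 10" starts at pkey 15, exactly as SQL Anywhere reported.
SELECT TOP 10 *
  FROM t
 WHERE data = 'B'
 ORDER BY pkey;
```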
Here's something you may have heard from time to time:
Don't use cursors to make changes. Use set-oriented SQL statements instead.
But hey, sometimes you HAVE to write a COBOL-style program, one that uses an old-fashioned input-process-repeat loop to manipulate a result set one record, er, row at a time.
Always make your cursors INSENSITIVE.
An insensitive cursor is a safe cursor; all other forms are spooky scary. If you like spooky scary, then fine, go ahead. But be prepared to never know if your code's going to work in all scenarios... well, you can never know that anyway, but with other kinds of cursors you can be pretty sure you WILL have problems, eventually.
If you are going to INSERT, UPDATE or DELETE any of the tables involved in a cursor definition, either directly (your code inside the fetch loop) or indirectly (say, when your connection fires a trigger that makes such a change, or when some other connection, even an EVENT that your code fires, makes such a change), then
- always specify both INSENSITIVE and FOR READ ONLY, and
- never use WHERE CURRENT OF; always use UPDATE and DELETE statements with ordinary WHERE clauses.
If you end up having performance problems, then consider changing INSENSITIVE to something else. Most of the time (80%? 90%? 99%?) you won't have to, and life will be that much safer and easier.
Here's a SQL fetch loop template using the wonderful FOR loop syntax:
FOR [loop name] AS [cursor name] INSENSITIVE CURSOR FOR
SELECT t.primary_key_column_1_of_2 AS @primary_key_column_1_of_2,
t.primary_key_column_2_of_2 AS @primary_key_column_2_of_2,
t.[some other column] AS @[some other column],
t.[some other column] AS @[some other column]
FROM t
WHERE [some predicates]
ORDER BY [some columns]
FOR READ ONLY
DO
...
[references to the @variables implicitly declared in the SELECT]
...
INSERT t ( [some column names] ) VALUES ( [some values] )
...
UPDATE t
SET t.[some other column] = [some value],
t.[some other column] = [some value]
WHERE t.primary_key_column_1_of_2 = @primary_key_column_1_of_2
AND t.primary_key_column_2_of_2 = @primary_key_column_2_of_2;
...
DELETE t
WHERE t.primary_key_column_1_of_2 = @primary_key_column_1_of_2
AND t.primary_key_column_2_of_2 = @primary_key_column_2_of_2;
...
END FOR;
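To make the template concrete, here's a hypothetical instance with a single-column primary key; the inventory table and its columns are invented for illustration, not taken from anywhere above:

```sql
BEGIN
FOR f_restock AS c_restock INSENSITIVE CURSOR FOR
SELECT inventory.item_id  AS @item_id,
       inventory.quantity AS @quantity
  FROM inventory
 WHERE inventory.quantity < 10
 ORDER BY inventory.item_id
FOR READ ONLY
DO
   -- No WHERE CURRENT OF: an ordinary UPDATE with a primary key predicate.
   -- The INSENSITIVE cursor won't see this change, so the fetch loop
   -- marches safely through the original result set.
   UPDATE inventory
      SET inventory.quantity = @quantity + 100
    WHERE inventory.item_id = @item_id;
END FOR;
END;
```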
INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Let's beat the point to death, er, drive the point home... if you printed out all the Help files, here's how you could fix them:
Version 5 is OK: A cursor declared INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Version 6 is OK: INSENSITIVE cursors ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Version 7 is OK: INSENSITIVE cursors ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Version 8: INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Version 9: INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Version 10: INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Version 11: INSENSITIVE clause ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Version 12: INSENSITIVE clause ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Insensitive cursors
FOR statement
DECLARE CURSOR statement
Does anyone else WANT TO KNOW how cursors really work?
Here is the abstract of the keynote speech Glenn Paulley will be giving at the CIKM 2010 - 19th ACM International Conference on Information and Knowledge Management in Toronto, October 26 to 30, 2010:
This article is based on this SQLA question-and-answer, "How do I set up a TYPE RAW web service to use HTTPS?", and on this not-yet-published Foxhound FAQ: "How do I specify the HTTPS transport-layer security for Foxhound?"
Specify the identity certificate file and private key in the -xs https identity= and identity_password= parameters.
Note that the default port for HTTPS is 443.
"%SQLANY11%\Bin32\dbspawn.exe"^
-f^
"%SQLANY11%\Bin32\dbsrv11.exe"^
-c 50p^
-ch 75p^
-cr-^
-gk all^
-gn 120^
-n foxhound1^
-o foxhound1_debug.txt^
-oe foxhound1_debug_startup.txt^
-on 10M^
-qn^
-sb 0^
-x none^
-xd^
-xs https(identity="%SQLANY11%\Bin32\rsaserver.id";identity_password=test;port=443;maxsize=0;to=600;kto=600)^
foxhound1.db^
-n f
-xs http(port=80;maxsize=0;to=600;kto=600),https(identity="%SQLANY11%\Bin32\rsaserver.id";identity_password=test;port=443;maxsize=0;to=600;kto=600)^
CREATE SERVICE service_name TYPE 'RAW'
AUTHORIZATION OFF USER user_name SECURE ON
AS CALL procedure_name (
:parameter1,
:parameter2 );
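For completeness, here's a sketch of what procedure_name might look like behind a TYPE 'RAW' service; the body is an invented example, not from the original Q&A. With TYPE 'RAW' the procedure is responsible for its own headers and returns the response body as a single-column result set:

```sql
-- Hypothetical procedure for the RAW service above.
CREATE PROCEDURE procedure_name (
   IN @parameter1 VARCHAR ( 250 ),
   IN @parameter2 VARCHAR ( 250 ) )
BEGIN
   CALL sa_set_http_header ( 'Content-Type', 'text/html' );
   SELECT STRING (
      '<html><body><p>',
      HTML_ENCODE ( @parameter1 ), ' and ', HTML_ENCODE ( @parameter2 ),
      '</p></body></html>' );
END;
```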
Can something this simple really be so wrong? Here's a simple query in SQL Anywhere 11.0.1.2276 that returns 5 rows, starting at row number 10, from a table t where pkey is the PRIMARY KEY column of t, and that column is DEFAULT AUTOINCREMENT with values 1, 2, 3...:
SELECT TOP 5 START AT 10 *
FROM t
ORDER BY pkey;
pkey,data
10,'B'
11,'B'
12,'B'
13,'B'
14,'B'
SELECT TOP 5 START AT 10 *
FROM t
WHERE data = 'B'
ORDER BY pkey;
pkey,data
15,'B'
16,'B'
17,'B'
18,'B'
19,'B'