Monday, November 29, 2010

Toad for SQL Anywhere

Not yet, not even in Beta, but according to Michael Garza of Quest Software "The first beta will have ASE only but you should see IQ and Anywhere support popup sometime during the beta cycle."



November 30 Update ...


mgarza said...
The first beta has been posted and can be found here: http://sqloptimizersybase.inside.quest.com/pageloader.jspa?sbinPageName=beta_toadforsybase.htm&sbinPageTitle=Toad%20for%20Sybase%20Beta

I look forward to any comments or suggestions.

Thanks.




Friday, November 26, 2010

The fRiDaY File

Does anyone remember SilverStream Software?

It's strange, nobody does.

What's stranger still is that huge disasters usually get their own Wikipedia articles, but not SilverStream; all it gets is this mention in the article on Novell:

In July 2002, Novell acquired SilverStream Software, a leader in web services-oriented application, but a laggard in the marketplace.
At the time (late 1990s) it was said that all the good programmers had left PowerSoft and Lotus to work on SilverStream. Whether that was true or not, I don't know, but I do know this: "laggard in the marketplace" is Wikipedia-speak for "complete piece of siht", and that's what prompted me to publish this article on June 17, 1999:

SilverStream By The Numbers


Communication among SilverStream programmers is more efficient when numbers are assigned to common conversational elements.

For example,
"How do I [do some thing] in SilverStream?"
may be answered simply:
"32! 33!"
The person asking the question might then express shock by
"Oh, man, 73! 1!"



Here's the list, SilverStream By The Numbers:...

1 - the Help sucks

2 - it's so slow

3 - Simple Actions are useless, click on Java

4 - you're at the 3GL level all the time except when you want to be

5 - that's really really really hard to do in SilverStream

6 - you can't use javadoc

14 - the Designer's locked up

15 - the Designer's using all the CPU

16 - the Designer's using all the memory

17 - use Task Manager to kill the Designer

18 - the Server's locked up

19 - the Server's using all the CPU

20 - the Server's using all the memory

21 - use Task Manager to kill the Server

22 - the Server won't start

23 - reboot NT

29 - everyone take a break, I'm going to resynch the database

30 - you need more RAM

31 - the day sure goes by fast when you 23 every few minutes

32 - that doesn't work in SilverStream

33 - I had to write my own

34 - nobody's asked that on the newsgroups

35 - nobody's answered that on the newsgroups

36 - I don't think anybody's tried that before

37 - I told you, 32, but you weren't listening

38 - Fulcrum sucks

39 - the debugger sucks

41 - you can't do that with a SilverStream view

42 - try creating a database view to fool SilverStream

43 - you can't put SQL in a SilverStream where clause

44 - you can't use real SQL any more

45 - the Expression Builder sucks

46 - Uncle SilverStream Knows Best

50 - hosed the server

51 - OK, who 50

52 - oops, I 50

68 - SilverStream isn't for programmers, it's for investment bankers

69 - how can a group that wrote such a cool program write such a bad one

72 - that would be easy with a DataWindow

73 - that would be easy in PowerBuilder

74 - that would be easy in PowerJ

75 - how much does Jaguar cost

78 - SilverStream doesn't work with [fill in product name: e.g., 79]

79 - Oracle

81 - I'm screwed

82 - you're screwed

83 - we're all screwed

84 - maybe that's a bug in the VM

85 - maybe that's a bug in SilverStream

97 - maybe that will be in the beta

98 - maybe that will be in the next release

99 - what a kludge

Pushing UNLOAD

Hey, lookit here, a whole WEEK of "Pushing" articles!

First it was Pushing OPENSTRING and CROSS APPLY which talked about splitting tab-delimited strings from one table into multiple columns in another table, and then it was Pushing LOAD which showed how to do the same thing with the new LOAD TABLE USING COLUMN feature.

This article goes back and forth

  • from multiple columns in one table

  • to a single comma-delimited "toString" version of that column in another table,

  • and then back to multiple columns again.
Here are the three features used:
  • the new UNLOAD TABLE INTO VARIABLE feature,

  • the new LOAD TABLE USING VALUE feature, and

  • the new LOAD TABLE USING COLUMN feature described in Pushing LOAD.
These aren't NEW new features, but they're pretty new, and they haven't been talked about much. That's partly because while folks were waiting for these enhancements to LOAD and UNLOAD they discovered (or were taught, if they're like me) how to exploit the LIST() aggregate function in new and wonderful ways. Well, that's my excuse... I've since coded many combined STRING and LIST calls that go on for hundreds of lines to produce entire HTML pages in a single SELECT... awesome stuff and actually readable (well, except for some of the embedded JavaScript and <funkyTags>, but that's not SQL's fault :).

OK, let's get back on topic: Pushing UNLOAD


First, let's start with a nice normalized table where each row contains several columns describing a single radio program:

CREATE TABLE original_radio_program (
   program_number BIGINT NOT NULL PRIMARY KEY CLUSTERED,
   station        VARCHAR ( 100 ) NOT NULL,
   dates          VARCHAR ( 100 ) NOT NULL,
   times          VARCHAR ( 100 ) NOT NULL,
   duration       VARCHAR ( 100 ) NOT NULL,
   city           VARCHAR ( 100 ) NOT NULL,
   state          VARCHAR ( 100 ) NOT NULL,
   frequency      VARCHAR ( 100 ) NOT NULL );

INSERT original_radio_program VALUES (
1, 'WAAV', 'Mon-Fri', '10am', '2 hours', 'Leland', 'NC', '980 AM' );
INSERT original_radio_program VALUES (
2, 'KFAR', 'Mon-Fri', '12pm', '3 hours', 'Fairbanks', 'AK', '660 AM' );
INSERT original_radio_program VALUES (
3, 'News Talk 550', 'Mon-Fri', '10am', '2 hours', 'Gainesville', 'GA', '550 AM' );
INSERT original_radio_program VALUES (
4, 'News Radio 1390', 'Mon-Fri', '10am', '2 hours', 'Jackson', 'TN', '1390 AM' );
INSERT original_radio_program VALUES (
5, 'KENN', 'Sat', '5pm', '3 hours', 'Farmington', 'NM', '1390 AM' );
COMMIT;



Here is the intermediate "toString" table where each row from the input table is going to be squished (a technical term meaning "squashed") into a single string column:

CREATE TABLE toString_radio_program (
   toString_number BIGINT NOT NULL
                      DEFAULT AUTOINCREMENT
                      PRIMARY KEY CLUSTERED,
   toString_row    LONG VARCHAR NOT NULL );

Here's the code that does the squishing:

CREATE VARIABLE @original_radio_programs LONG VARCHAR;

UNLOAD TABLE original_radio_program
   INTO VARIABLE @original_radio_programs
   ROW DELIMITED BY '\x0d\x0a###';

LOAD TABLE toString_radio_program ( toString_row )
   USING VALUE @original_radio_programs
   DEFAULTS ON
   DELIMITED BY ''
   ESCAPES OFF
   HEXADECIMAL OFF
   QUOTES OFF
   STRIP OFF
   ROW DELIMITED BY '###';

Line 1 creates the connection-level variable to be filled by UNLOAD TABLE INTO VARIABLE. This variable could just as well be a local DECLARE variable in a BEGIN block.
Note: LONG VARCHAR is just a data type like any other, there are no silly syntax rules telling you where you can and cannot code it in SQL Anywhere... generations of application developers wish they could say the same about other programming tools.
The UNLOAD statement on lines 3 to 5 takes all the rows from original_radio_program and dumps them, not into a file like classic UNLOAD, but into the string variable @original_radio_programs.

There are a zillion options you can specify for UNLOAD. In this case all the defaults are OK except for the funky ROW DELIMITED BY on line 5. Here's the story on that: The LOAD TABLE USING COLUMN statement (not the LOAD TABLE on line 7, but the one later on in this article) needs to have a row delimiter stored with each string value. However, the LOAD TABLE USING VALUE on line 7 is going to strip the row delimiters added by the UNLOAD TABLE INTO VARIABLE, so we need to define two sets of row delimiters.

So, the ROW DELIMITED BY value on line 5 specifies five characters: an ordinary carriage return - line feed pair \x0d\x0a, and three # characters... the string stored in @original_radio_programs will have '\x0d\x0a###' at the end of each row instead of just '\x0d\x0a' (the default). More on this in a moment...

The LOAD statement on lines 7 to 15 takes the data from the USING VALUE @original_radio_programs string and ignores everything except the row delimiters. In other words, it ignores the 'quotes' around string columns, and all the commas between field values, and any escapes and hex numbers and blank padding, and treats each row as a single string to be loaded into the toString_radio_program.toString_row column. This behavior is specified by the five options on lines 10 through 14.
Note: Those five values DELIMITED BY '' ESCAPES OFF HEXADECIMAL OFF QUOTES OFF STRIP OFF comprise the magic "what you see is what you get" set of options, very useful when exploiting LOAD and UNLOAD to perform string handling operations as well as bulk data handling. You can read about them in the Help and see another example here.
The DEFAULTS ON option on line 9 tells LOAD TABLE to honor the DEFAULT AUTOINCREMENT setting on the toString_number column (which isn't named in the column list on line 7). This technique is handy for recording the original input order when loading data via LOAD TABLE. It isn't important in this example because the toString_row column contains the program_number value.

...now on to line 15, where the ROW DELIMITED BY option tells LOAD TABLE to leave the \x0d\x0a at the end of each row alone, and just strip off the ###. This technique (mismatched row delimiters for input and output) is useful in other situations too, like moving between SQL Anywhere's powerful UNLOAD syntax and Microsoft SQL Server's feeble BULK INSERT syntax. Here, it's used to accommodate LOAD TABLE USING COLUMN's strange need for row delimiters on each column value.
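The mismatched-delimiter dance is easier to see outside SQL. Here's a little Python sketch of what the UNLOAD's ROW DELIMITED BY '\x0d\x0a###' followed by the LOAD's ROW DELIMITED BY '###' does to the data; this is a hypothetical illustration, not SQL Anywhere code, and the row values are stand-ins rather than real UNLOAD output:

```python
# Hypothetical sketch of the mismatched row delimiter trick.
rows = ["1,'WAAV','Mon-Fri'", "2,'KFAR','Mon-Fri'"]

# UNLOAD ... ROW DELIMITED BY '\x0d\x0a###' ends each row with five characters:
unloaded = "".join(row + "\r\n###" for row in rows)

# LOAD ... ROW DELIMITED BY '###' splits on ### and strips it,
# leaving the \r\n attached to each stored value:
loaded = [piece for piece in unloaded.split("###") if piece != ""]
```

Each element of `loaded` still ends in \x0d\x0a, which is exactly what LOAD TABLE USING COLUMN wants to see later on.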

Here's what the string looks like after the UNLOAD TABLE INTO VARIABLE:



Here's what the intermediate "toString" table looks like after the LOAD TABLE USING VALUE:



Here's some code that goes the other way, back to a reconstituted table containing multiple columns. You've seen this code before in Pushing LOAD:

CREATE TABLE reconstituted_radio_program (
   program_number BIGINT NOT NULL PRIMARY KEY CLUSTERED,
   station        VARCHAR ( 100 ) NOT NULL,
   dates          VARCHAR ( 100 ) NOT NULL,
   times          VARCHAR ( 100 ) NOT NULL,
   duration       VARCHAR ( 100 ) NOT NULL,
   city           VARCHAR ( 100 ) NOT NULL,
   state          VARCHAR ( 100 ) NOT NULL,
   frequency      VARCHAR ( 100 ) NOT NULL );

LOAD TABLE reconstituted_radio_program
   USING COLUMN toString_row
   FROM toString_radio_program
   ORDER BY toString_number;

Yup, it looks the same as original_radio_program:

Wednesday, November 24, 2010

Pushing LOAD

Here's a slightly simpler and somewhat less powerful alternative to Monday's OPENSTRING and CROSS APPLY solution to the following problem:

"Let's say you have a table that contains a tab-delimited list of field values in a single string column, and you want to split those values apart and store them in separate columns in another table."
To make it simpler yet, let's change "tab-delimited list" to "comma-delimited" which is the default.

Here's the input table:

CREATE TABLE source_table (
   source_pkey INTEGER NOT NULL PRIMARY KEY,
   row         LONG VARCHAR NOT NULL );

INSERT source_table VALUES ( 1, '21, ''Hello'', 99, 123.45\x0d\x0a' );
INSERT source_table VALUES ( 2, '22, ''World'', 88, 456.78\x0d\x0a' );
INSERT source_table VALUES ( 3, '23, ''Goodbye'',77, 234.56\x0d\x0a' );
COMMIT;

Each value stored in source_table.row is a string which in turn contains four field values: an integer 21, a string 'Hello', another integer 99 and a decimal value 123.45. Commas separate the field values inside source_table.row, and a \x0d\x0a carriage return - line feed pair serves as the row delimiter.

Here's the output table, with one column corresponding to each of the four field values stored in source_table.row:

CREATE TABLE target_table (
   target_pkey INTEGER NOT NULL PRIMARY KEY,
   data1       VARCHAR ( 10 ) NOT NULL,
   data2       INTEGER NOT NULL,
   data3       DECIMAL ( 11, 2 ) NOT NULL );

Here's a LOAD TABLE statement that splits-and-copies the data from source_table.row into target_table:

LOAD TABLE target_table
   USING COLUMN row
   FROM source_table
   ORDER BY source_pkey;

The USING clause on lines 2 through 4 tells the LOAD TABLE statement that
  • the data is coming from COLUMN row,

  • that column is coming FROM source_table, and

  • the rows are to be loaded in ORDER BY source_pkey.
Here's what the input and output look like:



Tip: If you only see one row in target_table, you probably forgot the row delimiter (\x0d\x0a in this case).
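For comparison, here's roughly what that split-and-copy amounts to, sketched in Python; this is a hypothetical illustration of the transformation, not how the engine actually does it. Split on the row delimiter, parse the comma-delimited fields (LOAD TABLE's default quote character is the single quote), then convert each field to the target column type:

```python
import csv
import io

# sample values as stored in source_table.row, \r\n row delimiter included
rows = ["21, 'Hello', 99, 123.45\r\n", "22, 'World', 88, 456.78\r\n"]

# parse the comma-delimited fields; quotechar="'" mimics the default quotes
reader = csv.reader(io.StringIO("".join(rows)),
                    quotechar="'", skipinitialspace=True)

# convert each field to the corresponding target_table column type
target = [(int(pk), d1, int(d2), float(d3)) for pk, d1, d2, d3 in reader]
```

The one-row-per-delimiter behavior is also why the Tip above matters: without the \x0d\x0a, the whole string parses as a single row.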

Monday, November 22, 2010

Pushing OPENSTRING and CROSS APPLY

The word "pushing" in the title has two meanings: First, to promote (push) the use of OPENSTRING to solve real-world problems, and second, to push OPENSTRING and CROSS APPLY beyond the examples shown in the Help.

Simply put...

SELECT ... FROM OPENSTRING is like coding SELECT ... FROM LOAD TABLE, and

CROSS APPLY lets you join OPENSTRING with another table in the FROM clause.

(For more information, scroll down to the "openstring-expression" section on the FROM clause page in the Help; also see Joins resulting from apply expressions.)

Let's say you have a table that contains a tab-delimited list of field values in a single string column, and you want to split those values apart and store them in separate columns in another table.

OPENSTRING and CROSS APPLY will let you do that in a single INSERT SELECT statement.

Here's an example of the input table; each value in the line_text column contains a tab-delimited list of field values describing a single radio program: station WAAV, dates Mon-Fri, time 10am and so on:
CREATE TABLE raw_text (
   line_number BIGINT NOT NULL 
                  DEFAULT AUTOINCREMENT
                  PRIMARY KEY CLUSTERED,
   line_text   LONG VARCHAR NOT NULL DEFAULT '' );

INSERT raw_text ( line_text ) VALUES ( 
   'Play - not available now\x09\x09WAAV\x09Mon-Fri\x0910am\x092 hours\x09Leland\x09NC\x09980 AM\x09\x09' );
INSERT raw_text ( line_text ) VALUES ( 
   'Play - not available now\x09\x09KFAR\x09Mon-Fri\x0912pm\x093 hours\x09Fairbanks\x09AK\x09660 AM\x09\x09' );
INSERT raw_text ( line_text ) VALUES ( 
   'Play - not available now\x09\x09News Talk 550\x09Mon-Fri\x0910am\x092 hours\x09Gainesville\x09GA\x09550 AM\x09\x09' );
INSERT raw_text ( line_text ) VALUES ( 
   'Play - not available now\x09\x09News Radio 1390\x09Mon-Fri\x0910am\x092 hours\x09Jackson\x09TN\x091390 AM\x09\x09' );
INSERT raw_text ( line_text ) VALUES ( 
   'Play - not available now\x09\x09KENN\x09Sat\x095pm\x093 hours\x09Farmington\x09NM\x091390 AM\x09' );
COMMIT;
Here's a table designed to hold each field value in a separate column:
CREATE TABLE radio_programs (
   line_number BIGINT NOT NULL PRIMARY KEY CLUSTERED,
   station     VARCHAR ( 100 ) NULL,
   dates       VARCHAR ( 100 ) NULL,
   times       VARCHAR ( 100 ) NULL,
   duration    VARCHAR ( 100 ) NULL,
   city        VARCHAR ( 100 ) NULL,
   state       VARCHAR ( 100 ) NULL,
   frequency   VARCHAR ( 100 ) NULL,
   line_text   LONG VARCHAR NOT NULL );
Here's the INSERT SELECT that fills the radio_programs table:
INSERT radio_programs
WITH AUTO NAME
SELECT *
  FROM raw_text 
       CROSS APPLY
       OPENSTRING ( VALUE STRING ( raw_text.line_text ) )
             WITH ( TABLE radio_programs (
                          filler(),
                          filler(),
                          station,
                          dates,
                          times,
                          duration,
                          city,
                          state,
                          frequency,
                          filler() ) )
             OPTION ( DELIMITED BY '\x09' ) AS programs;
Lines 6 through 18 show the OPENSTRING call which acts like LOAD TABLE on the string in raw_text.line_text.

The VALUE clause on line 6 specifies where the input is coming from, and the OPTION clause on line 18 tells OPENSTRING that raw_text.line_text is tab-delimited rather than the default of comma-delimited.

The WITH clause on lines 7 through 17 tells OPENSTRING what to do with each of ten fields in each input string.

The TABLE clause on line 7 tells OPENSTRING that the radio_programs table is going to be the model or template for OPENSTRING to use. The column names on lines 10 through 16 tell OPENSTRING which radio_programs columns correspond to which input fields. The filler() fields on lines 8, 9 and 17 tell OPENSTRING to skip fields 1, 2 and 10 in the input.

The AS clause on line 18 is required to give the OPENSTRING call a correlation name in the FROM clause... but that name isn't used in this example.

The CROSS APPLY join operator on line 5 makes it possible for the OPENSTRING call to include a reference to a column in the raw_text table.
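Stripped of the SQL machinery, the per-row work is just a tab-split with some positions thrown away. Here's a hypothetical Python sketch (the raw string is one of the line_text values from above; fields 1, 2 and 10 are the filler() positions):

```python
# Hypothetical sketch of the tab-delimited split done by OPENSTRING.
raw = ("Play - not available now\t\tWAAV\tMon-Fri\t10am"
       "\t2 hours\tLeland\tNC\t980 AM\t\t")

fields = raw.split("\t")

# filler() skipped fields 1, 2 and 10 (1-based); fields 3 through 9
# become the seven named columns:
station, dates, times, duration, city, state, frequency = fields[2:9]
```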

Here's what the final result looks like:


Friday, November 19, 2010

The fRiDaY File

This was first published on March 16, 2006:

What's more important? Readability!

Alas, in all the comments on this posting, nobody put readability first...


What's more important?

What's more important, for your code to be readable, or for it to be right?

Sorry, nobody gets to reject the premise of the question. If you code at all, you already have an answer, it's written right into the product of your labors.

How about this: What's more important, for your code to be correct, or for it to be fast?

I know my answers; what are yours?
I mean, sure, we can debate what "readability" means, but what does it mean to not put it first? Do you have so little respect for yourself that you want to make your life more difficult rather than easier?

If a piece of code is not readable, how exactly do you ensure that it is correct? Oh, right... by testing... let me know how that works out for you.

If a piece of code is not readable, how exactly do you fix it when it fails to work properly? Well, if you're me, you start by rewriting it so it's readable.

If a piece of code is not readable, how exactly do you improve its performance? Well, if you're me... same answer.

But, apparently, that's just me...

I am alone

Wednesday, November 17, 2010

Isn't it just possible?

Here's a line that rang a bell with me:

Isn't it just possible that my inability to profit from OOP reflects a problem with OOP itself and not my own incompetence? - "Objects Never? Well, Hardly Ever!" by Mordechai Ben-Ari, Communications of the ACM, September 2010

I am not alone!

Here's another excerpt...
I am not the only one whose intuition fails when it comes to OOP. ... Again, isn't it just possible that the intuition of experienced software engineers is perfectly OK, and that it is OOP that is not intuitive and frequently even artificial?
To put those quotes in context, here are the first two paragraphs from the article:
At the 2005 SIGCSE (Special Interest Group in Computer Science Education) Symposium in St. Louis, MO, a packed audience listened to the Great Objects Debate: Should we teach "objects first" or "objects later"? In the objects-first approach, novices are taught object-oriented programming (OOP) in their initial introduction to programming, as opposed to an objects-later approach, where novices are first introduced to procedural programming, leaving OOP to the end of the first semester or the end of the first year. Kim Bruce and Michael Kölling spoke in favor of the objects-first approach, while their opponents Stuart Reges and Eliot Koffman argued for teaching procedural programming first. One of Bruce's arguments was: since OOP is dominant in the world of software development, it should be taught early. I later contacted Bruce to ask for a warrant for the dominance of OOP, but he could not give me one, nor could any of several other experts to whom I posed the same question.

I claim that the use of OOP is not as prevalent as most people believe, that it is not as successful as its proponents claim, and, therefore, that its central place in the CS curriculum is not justified.
The full text of the article is behind a "Pay Wall" so you have to join the Association for Computing Machinery or cough up some $$ to read it.

Speaking of which, I've been a member since Before Time Began (1973) and it is only recently (the past few years) that the ACM's flagship magazine "Communications" has become worth reading for practitioners of The Dark Arts (you know, programmers).

Now, it's well worth the expense. Not a single issue goes by that doesn't provoke at least one "Aha!" moment.

Aha! The Programmer's New Paradigm!

This is not an excerpt from the article, this is what the article provoked in me:
A Programmer who cares for nothing but the elegance of his code hires two consultants who promise him the finest programming paradigm whose benefits are invisible to anyone who is unfit for his position or "just hopelessly stupid". The Programmer cannot see the benefits himself, but pretends that he can for fear of appearing unfit for his position or stupid; his colleagues do the same. When the consultants report that the paradigm is ready, they show him how it works and the Programmer starts developing applications with it. A young intern on the team calls out that the paradigm has no benefits at all and the cry is taken up by others. The Programmer cringes, suspecting the assertion is true, but holds himself up proudly and continues along the same path. - with apologies to Hans Christian Andersen



One more excerpt, not from me, not from the article, but from a comment on the article...
"... in a design oriented field such as ours, fads are all to easy to hatch. It takes considerable will to resist fads and stay focused on the real issues." — Alan Kay, September 11, 2010

Who's Alan Kay, you ask?

Oh, nobody important, just some random guy on the internet :)

Monday, November 15, 2010

Crosstab, Rotate, Pivot, Normalize

The "Crosstab, Rotate, Pivot" article from earlier this year should have been named "Crosstab, Rotate, Pivot, Denormalize" because it showed how to take a perfectly nice table like this

and muck it up to look like this
without hard-coding any of the data values in the SQL.

What about going the other way?

What if you have a fat squat denormalized table with a bazillion columns
and you want to turn it into a tall skinny table with a bazillion rows?
And then (for bonus points) what if you wanted to keep rotating to get a different fat squat version like this?
First things first; here's the first fat squat table:

--------------------------------------------------------------------------------
-- Step 1: Initialize data.

CREATE TABLE t1 (
   c1              VARCHAR ( 10 ) NOT NULL,
   Ford_count      INTEGER NOT NULL,
   Hyundai_count   INTEGER NOT NULL,
   Honda_count     INTEGER NOT NULL,
   Chevrolet_count INTEGER NOT NULL,
   PRIMARY KEY ( c1 ) );

INSERT t1 VALUES ( 'AZ', 5000, 5000, 1000, 3000 );
INSERT t1 VALUES ( 'CA', 1000, 2000, 9000, 7000 );
INSERT t1 VALUES ( 'FL', 9000, 7000, 2000, 1000 );
INSERT t1 VALUES ( 'MA', 2000, 6000, 5000, 3000 );
INSERT t1 VALUES ( 'NY', 4000, 5000, 1000, 6000 );
COMMIT;
Here's the tall skinny table:

CREATE TABLE t1_normalized (
   c1 VARCHAR ( 10 ) NOT NULL,
   c2 VARCHAR ( 128 ) NOT NULL,
   c3 INTEGER NOT NULL,
   PRIMARY KEY ( c1, c2 ) );
Here's the code that fills the tall skinny table (Step 2) and then rotates it again to create the second fat squat table (Step 3):

BEGIN

DECLARE @sql LONG VARCHAR;

--------------------------------------------------------------------------------
-- Step 2: Normalize table.

SELECT STRING (
'INSERT t1_normalized \x0d\x0a',
LIST (
STRING (
'SELECT c1, \x0d\x0a',
' ''', SYSCOLUMN.column_name, ''',\x0d\x0a',
' ', SYSCOLUMN.column_name, '\x0d\x0a',
' FROM ', SYSTABLE.table_name ),
'\x0d\x0aUNION ALL \x0d\x0a'
ORDER BY SYSCOLUMN.column_id ) )
INTO @sql
FROM SYSCOLUMN
INNER JOIN SYSTABLE
ON SYSTABLE.table_id = SYSCOLUMN.table_id
WHERE SYSTABLE.table_name = 't1'
AND SYSCOLUMN.column_name LIKE '%?_count' ESCAPE '?';

SELECT @sql; -- for display

EXECUTE IMMEDIATE @sql;

--------------------------------------------------------------------------------
-- Step 3: Pivot c1 values into columns.

SELECT STRING (
'SELECT c2',
LIST (
STRING (
',\x0d\x0a SUM ( ( IF t1_normalized.c1 = ''',
t1_distinct.c1,
''' THEN 1 ELSE 0 ENDIF ) * t1_normalized.c3 ) AS "',
t1_distinct.c1,
'"' ),
''
ORDER BY t1_distinct.c1 ),
'\x0d\x0a INTO #t1_pivot',
'\x0d\x0a FROM t1_normalized',
'\x0d\x0a GROUP BY t1_normalized.c2' )
INTO @sql
FROM ( SELECT DISTINCT c1
FROM t1_normalized ) AS t1_distinct;

SELECT @sql; -- for display

EXECUTE IMMEDIATE @sql;

SELECT * FROM t1 ORDER BY c1; -- original data, for checking
SELECT * FROM t1_normalized ORDER BY c1, c2; -- normalized data, for checking
SELECT * FROM #t1_pivot ORDER BY c2; -- pivot table

END;
Step 2 is the interesting part: it builds a SQL statement dynamically, at runtime, and then uses the magic EXECUTE IMMEDIATE feature to execute that statement. EXECUTE IMMEDIATE isn't a new feature; it's been around forever. Although the code in this article has only been tested on SQL Anywhere 11.0.1 and 12.0.0, I'm pretty sure it works on older versions like 9 and 10, maybe even earlier.

The code on lines 19 through 23 looks in the SYSTABLE and SYSCOLUMN system tables for all the "_count" columns in table "t1". (Yes, this code depends on being able to identify the columns of interest by how they are named: see the LIKE predicate on line 23.)
08.SELECT STRING (
09. 'INSERT t1_normalized \x0d\x0a',
10. LIST (
11. STRING (
12. 'SELECT c1, \x0d\x0a',
13. ' ''', SYSCOLUMN.column_name, ''',\x0d\x0a',
14. ' ', SYSCOLUMN.column_name, '\x0d\x0a',
15. ' FROM ', SYSTABLE.table_name ),
16. '\x0d\x0aUNION ALL \x0d\x0a'
17. ORDER BY SYSCOLUMN.column_id ) )
18. INTO @sql
19. FROM SYSCOLUMN
20. INNER JOIN SYSTABLE
21. ON SYSTABLE.table_id = SYSCOLUMN.table_id
22. WHERE SYSTABLE.table_name = 't1'
23. AND SYSCOLUMN.column_name LIKE '%?_count' ESCAPE '?';
24.
25.SELECT @sql; -- for display
26.
27.EXECUTE IMMEDIATE @sql;
The code on lines 8 through 18 builds a SQL statement in a string, and puts that string in the local LONG VARCHAR variable @sql.

The value placed in @sql consists of an 'INSERT...' (line 9) followed by a LIST (line 10) of 'SELECT...' statements (lines 12 through 15) separated by the 'UNION ALL' operator (line 16). The STRING() and LIST() functions are magic too, just like EXECUTE IMMEDIATE... all well worth learning.

The LIST - ORDER BY on line 17 doesn't affect the final outcome, but it is nice to have the generated SELECTs appear in the same order as the columns appear in the original fat squat table.

Confused yet?

Maybe it will help to see what gets put in @sql; an INSERT statement which creates one row in t1_normalized for every column in t1:

INSERT t1_normalized
SELECT c1,
'Ford_count',
Ford_count
FROM t1
UNION ALL
SELECT c1,
'Hyundai_count',
Hyundai_count
FROM t1
UNION ALL
SELECT c1,
'Honda_count',
Honda_count
FROM t1
UNION ALL
SELECT c1,
'Chevrolet_count',
Chevrolet_count
FROM t1
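If it helps to see the string-building spelled out another way, here's a hypothetical Python sketch of what the STRING()/LIST() query in Step 2 is doing: one SELECT per "_count" column, glued together with UNION ALL, with the INSERT stuck on the front. (The column list is hard-coded here; the real query gets it from SYSCOLUMN.)

```python
# Hypothetical sketch of Step 2's dynamic SQL construction.
columns = ["Ford_count", "Hyundai_count", "Honda_count", "Chevrolet_count"]

# one SELECT per column: the column NAME becomes the c2 value,
# the column VALUE becomes the c3 value
selects = ["SELECT c1, \r\n       '{0}',\r\n       {0}\r\n  FROM t1".format(c)
           for c in columns]

sql = "INSERT t1_normalized \r\n" + " \r\nUNION ALL \r\n".join(selects)
```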

Here are the tables again; first, the fat squat input "FROM t1":
...and the tall skinny output "INSERT t1_normalized":
Oh, by the way... the code in Step 3 uses pretty much the same technique shown in Crosstab, Rotate, Pivot to produce the rotated-again fat squat table:
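Step 3's SUM ( IF ... ) trick boils down to grouping by c2 and accumulating one total per distinct c1 value. Here's a hypothetical Python sketch of that accumulation (a few sample rows only, not the generated SQL itself):

```python
from collections import defaultdict

# a few rows of the tall skinny table, as (c1, c2, c3) tuples
tall = [("AZ", "Ford_count", 5000), ("CA", "Ford_count", 1000),
        ("AZ", "Honda_count", 1000), ("CA", "Honda_count", 9000)]

# GROUP BY c2, with one output "column" per distinct c1 value
pivot = defaultdict(dict)
for c1, c2, c3 in tall:
    pivot[c2][c1] = pivot[c2].get(c1, 0) + c3
```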

Does it work in the real world?

If you have the impression I sit around all day coding "SELECT FROM t1", let me set the record straight: I own and maintain some of the nastiest real-world code that's ever been created. And it's in that real world where the real problems arise, like the need to normalize and denormalize data to make it possible for the inevitable Queries From Hell to run in finite time. Necessity is the mother of Invention (although I do have to admit, it does sound pompous to use the word "invention" in a sentence :)

OK, so here's some real code, warts and all, the horribly denormalized table that supports the "Peaks since" line in the Foxhound database monitor:



Truly fat, truly squat:

CREATE TABLE rroad_peaks (

-- *****************************************************************
-- ***** THIS TABLE IS A CANDIDATE FOR POST-SETUP DATA UPGRADE *****
-- *****************************************************************

sampling_id UNSIGNED INTEGER NOT NULL,
peaks_calculated_to_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peaks_calculated_after TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
peaks_calculated_after_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
earliest_preserved_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_sample_elapsed_msec BIGINT NOT NULL DEFAULT 0,
peak_canarian_query_elapsed_msec BIGINT NOT NULL DEFAULT 0,
peak_ActiveReq BIGINT NOT NULL DEFAULT 0,
peak_CheckpointUrgency BIGINT NOT NULL DEFAULT 0,
peak_interval_Chkpt BIGINT NOT NULL DEFAULT 0,
peak_ConnCount BIGINT NOT NULL DEFAULT 0,
peak_CurrIO BIGINT NOT NULL DEFAULT 0,
peak_DBFileFragments BIGINT NOT NULL DEFAULT 0,
peak_LockCount BIGINT NOT NULL DEFAULT 0,
peak_MultiProgrammingLevel BIGINT NOT NULL DEFAULT 0,
peak_RecoveryUrgency BIGINT NOT NULL DEFAULT 0,
peak_UnschReq BIGINT NOT NULL DEFAULT 0,
peak_executing_connection_count BIGINT NOT NULL DEFAULT 0,
peak_idle_connection_count BIGINT NOT NULL DEFAULT 0,
peak_waiting_connection_count BIGINT NOT NULL DEFAULT 0,
peak_total_blocked_connection_count BIGINT NOT NULL DEFAULT 0,
peak_total_temporary_file_bytes BIGINT NOT NULL DEFAULT 0,
peak_rate_Bytes DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_Disk DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_BytesReceived DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_BytesSent DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_CachePanics DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_CacheSatisfaction DECIMAL ( 30, 6 ) NOT NULL DEFAULT 1.0, -- a ratio, with the smallest value defined as the "peak"
peak_rate_DiskRead DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_DiskWrite DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_FullCompare DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_IndAdd DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_IndLookup DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_IndSatisfaction DECIMAL ( 30, 6 ) NOT NULL DEFAULT 1.0, -- a ratio, with the smallest value defined as the "peak"
peak_rate_LogWrite DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_cpu_percentage_string VARCHAR ( 10 ) NOT NULL DEFAULT '',
peak_cpu_percentage_number DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_QueryLowMemoryStrategy DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_Req DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_Commit DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_rate_Rlbk DECIMAL ( 30, 6 ) NOT NULL DEFAULT 0.0,
peak_sample_elapsed_msec_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_canarian_query_elapsed_msec_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_ActiveReq_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_CheckpointUrgency_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_interval_Chkpt_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_ConnCount_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_CurrIO_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_DBFileFragments_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_LockCount_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_MultiProgrammingLevel_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_RecoveryUrgency_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_UnschReq_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_executing_connection_count_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_idle_connection_count_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_waiting_connection_count_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_total_blocked_connection_count_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_total_temporary_file_bytes_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_Bytes_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_Disk_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_BytesReceived_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_BytesSent_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_CachePanics_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_CacheSatisfaction_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_DiskRead_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_DiskWrite_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_FullCompare_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_IndAdd_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_IndLookup_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_IndSatisfaction_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_LogWrite_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_cpu_percentage_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_QueryLowMemoryStrategy_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_Req_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_Commit_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
peak_rate_Rlbk_sample_set_number UNSIGNED BIGINT NOT NULL DEFAULT 0,
PRIMARY KEY ( sampling_id ) );

Here's the code to normalize the "_sample_set_number" columns into a tall, skinny table:

BEGIN

DECLARE @sql LONG VARCHAR;

DECLARE LOCAL TEMPORARY TABLE normalized_peaks (
   sampling_id         UNSIGNED INTEGER NOT NULL,
   column_name         VARCHAR ( 128 ) NOT NULL,
   sample_set_number   UNSIGNED BIGINT NOT NULL DEFAULT 0,
   PRIMARY KEY ( sampling_id, column_name ) );

SELECT STRING (
          'INSERT normalized_peaks \x0d\x0a',   -- \x0d\x0a is a carriage return / line feed pair
          LIST (
             STRING (
                'SELECT sampling_id, \x0d\x0a',
                ' ''', SYSCOLUMN.column_name, ''',\x0d\x0a',
                ' ', SYSCOLUMN.column_name, '\x0d\x0a',
                ' FROM ', SYSTABLE.table_name ),
             '\x0d\x0aUNION ALL \x0d\x0a'
             ORDER BY SYSCOLUMN.column_id ) )
  INTO @sql
  FROM SYSCOLUMN
       INNER JOIN SYSTABLE
          ON SYSTABLE.table_id = SYSCOLUMN.table_id
 WHERE SYSTABLE.table_name = 'rroad_peaks'
   AND SYSCOLUMN.column_name LIKE '%?_sample_set_number' ESCAPE '?'; -- '?' escapes the '_' so it matches literally

SELECT @sql; -- for display

EXECUTE IMMEDIATE @sql;

SELECT * FROM rroad_peaks ORDER BY sampling_id;
SELECT * FROM normalized_peaks ORDER BY sampling_id, column_name;

END;

Here's what the normalized table looks like:
And just to drive the point home, here's the answer to the question "Why go to all that trouble with EXECUTE IMMEDIATE blah blah, why not just hand-code the solution?"

The answer is, the rroad_peaks table is subject to change (especially new columns) and the normalization logic is just one more piece of code that does NOT have to change along with it; i.e., this is the generated INSERT, this is the code that doesn't have to be maintained:

INSERT normalized_peaks
SELECT sampling_id,
'peaks_calculated_to_sample_set_number',
peaks_calculated_to_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peaks_calculated_after_sample_set_number',
peaks_calculated_after_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'earliest_preserved_sample_set_number',
earliest_preserved_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_sample_elapsed_msec_sample_set_number',
peak_sample_elapsed_msec_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_canarian_query_elapsed_msec_sample_set_number',
peak_canarian_query_elapsed_msec_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_ActiveReq_sample_set_number',
peak_ActiveReq_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_CheckpointUrgency_sample_set_number',
peak_CheckpointUrgency_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_interval_Chkpt_sample_set_number',
peak_interval_Chkpt_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_ConnCount_sample_set_number',
peak_ConnCount_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_CurrIO_sample_set_number',
peak_CurrIO_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_DBFileFragments_sample_set_number',
peak_DBFileFragments_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_LockCount_sample_set_number',
peak_LockCount_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_MultiProgrammingLevel_sample_set_number',
peak_MultiProgrammingLevel_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_RecoveryUrgency_sample_set_number',
peak_RecoveryUrgency_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_UnschReq_sample_set_number',
peak_UnschReq_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_executing_connection_count_sample_set_number',
peak_executing_connection_count_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_idle_connection_count_sample_set_number',
peak_idle_connection_count_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_waiting_connection_count_sample_set_number',
peak_waiting_connection_count_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_total_blocked_connection_count_sample_set_number',
peak_total_blocked_connection_count_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_total_temporary_file_bytes_sample_set_number',
peak_total_temporary_file_bytes_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_Bytes_sample_set_number',
peak_rate_Bytes_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_Disk_sample_set_number',
peak_rate_Disk_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_BytesReceived_sample_set_number',
peak_rate_BytesReceived_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_BytesSent_sample_set_number',
peak_rate_BytesSent_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_CachePanics_sample_set_number',
peak_rate_CachePanics_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_CacheSatisfaction_sample_set_number',
peak_CacheSatisfaction_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_DiskRead_sample_set_number',
peak_rate_DiskRead_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_DiskWrite_sample_set_number',
peak_rate_DiskWrite_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_FullCompare_sample_set_number',
peak_rate_FullCompare_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_IndAdd_sample_set_number',
peak_rate_IndAdd_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_IndLookup_sample_set_number',
peak_rate_IndLookup_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_IndSatisfaction_sample_set_number',
peak_IndSatisfaction_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_LogWrite_sample_set_number',
peak_rate_LogWrite_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_cpu_percentage_sample_set_number',
peak_cpu_percentage_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_QueryLowMemoryStrategy_sample_set_number',
peak_rate_QueryLowMemoryStrategy_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_Req_sample_set_number',
peak_rate_Req_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_Commit_sample_set_number',
peak_rate_Commit_sample_set_number
FROM rroad_peaks
UNION ALL
SELECT sampling_id,
'peak_rate_Rlbk_sample_set_number',
peak_rate_Rlbk_sample_set_number
FROM rroad_peaks
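The code-generation trick itself is language-neutral. Here's a minimal Python sketch of the same idea, just to show how little code the "generator" really is; the column list is a hypothetical stand-in for the SYSCOLUMN query result:

```python
# Build the UNION ALL INSERT from a list of "_sample_set_number" column
# names, mirroring what the EXECUTE IMMEDIATE batch does via SYSCOLUMN.
columns = [
    "peak_ActiveReq_sample_set_number",   # hypothetical stand-ins for
    "peak_ConnCount_sample_set_number",   # the SYSCOLUMN query result
    "peak_LockCount_sample_set_number",
]

# One SELECT per column: the column name as a string literal, then its value.
selects = [
    "SELECT sampling_id,\n"
    f"       '{name}',\n"
    f"       {name}\n"
    "  FROM rroad_peaks"
    for name in columns
]

sql = "INSERT normalized_peaks\n" + "\nUNION ALL\n".join(selects)
print(sql)
```

Add a column to the list (or, in the real version, to the table) and the generated INSERT grows by one UNION ALL branch with zero hand editing.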

Saturday, November 13, 2010

Cisco Presents Software Excellence Award To Sybase

You heard about it from me first, here, and then again here, and now it's official:


Award Recognizes Sybase’s Commitment to the Success of its Partners

DUBLIN, CALIF. - Sybase, Inc., an SAP company (NYSE: SAP) and an industry leader in enterprise and mobile software, today announced that it has received the 2010 Software Excellence Supplier Award from Cisco. This prestigious award recognizes those software suppliers who engage with Cisco to ensure a high level of customer satisfaction.

The distinction was awarded during Cisco’s 19th annual Supplier Appreciation Event, held September 29, 2010, at the Santa Clara Convention Center. The event celebrated the commitment of Cisco’s preferred suppliers to ongoing customer satisfaction and accelerated growth as demonstrated by their operational excellence.

“Sybase understands that high-quality, leading-edge technology is critical to information management, and they deliver even greater value by embracing the new business models Cisco must embrace to meet customer needs," said Prentis Wilson, vice president, Global Supplier Management, Cisco. "Ultimately, Sybase partners with us proactively to anticipate and deliver for our customers."

Cisco presented awards to suppliers in recognition of their contributions to Cisco’s success in fiscal year 2010. Cisco also reaffirmed the importance of supplier relationships as the company pursues its corporate vision to build the human network and transform the way people live, work, play and learn.

"The Cisco Software Excellence award is a recognition of our strategy to go beyond just delivering product innovation," said Brian Vink, vice president of Data Management Products, Sybase. "At Sybase we put a lot of focus on the intangibles of an ISV partnership such as collaborating on roadmap, providing unmatched support, and being flexible in all aspects of the relationship."

More than 10 million users have benefited from applications that have been deployed by ISVs who rely on the Sybase® SQL Anywhere® embedded database server to power their business critical systems. SQL Anywhere’s ease-of-use, out-of-the-box performance, and self-management innovations address the evolving requirements of solution providers across many vertical markets. To learn more, visit www.sybase.com/sqlanywhere.
Interestingly, the press release doesn't mention SQL Anywhere by name until the last paragraph, and then doesn't connect SQL Anywhere with Cisco.

That's OK, I'm here to fix that...

Here's what the original announcement said:
Cisco currently embeds SQL Anywhere in nearly 20 products, and cited Sybase iAnywhere's efforts to ensure Cisco's success with the technology. During the presentation, Cisco praised Sybase iAnywhere's unique approach, and the technical and relationship support provided by the Sybase team.

Friday, November 12, 2010

The fRiDaY File

This was first published on August 20, 2000:



It's going to take a while to get that image out of your head, isn't it?


Some background: PowerPhoto used ASA 7 remote databases, MobiLink and an ASE 12 consolidated database.
No, I didn't work on the project personally, but a colleague did... and when I stumbled upon one of the Fun! Fantasy! kiosks in the wild, I couldn't resist.
The technology worked but... even if PowerPhoto's business model hadn't sucked (it did), the timing certainly did (remember 2000?), and in 2004 The Kiosk Factory acquired PowerPhoto's assets.

After that, it gets murky... The Kiosk Factory may or may not exist today, perhaps (or perhaps not) as Walters Interactive... the website(s) are kinda dead and there ain't no hits in Google News.

It just goes to show... sometimes a bad idea is still bad ten years later... like Breck in a leather bustier :)

How do I create a larger DOS box?

Folks laugh at me because I code in Wordpad and test in dbisql. I just lower my head and mutter under my breath, "Go ahead, laugh, I've yet to find myself on a client workstation that doesn't have all the tools I need to solve their problems."

That, and a browser... and Foxhound [obligatory promotional plug]... but I need to get...

Back on topic...


I regularly run a 30K batch file to build Foxhound from beginning to end, from dbinit through dbisql to Carbonite and InstallShield... lots and lots of commands scrolling off the screen never to be seen again.

Once in a while I notice an error message scrolling by, and I am never fast enough pressing Function + Pause to catch it before it's lost off the top.

Finally... after how many years putting up with this?... I asked Shalmaneser, I mean Google this question:
How do I create a larger DOS box?
The first hit suggested using the Properties - Layout dialog box for the DOS box, so that's what I did:
  • First, I used right-mouse - drag and drop - "Create Shortcuts Here" to make a Windows shortcut for the *.bat file, and then

  • I clicked right-mouse - Properties on the new shortcut and picked the Layout tab, and finally

  • I set the Screen buffer size - Height to the maximum 9999. I also made it wider (150) and fattened up the "Window size" to 150 by 50.

Here's the big fat DOS box in action; I still can't figure out the error message at the bottom of the window, but at least I can see it now (click on it to see full-size):
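Incidentally, there's a belt-and-suspenders alternative (not part of the original fix): capture the whole run to a log file, so an error message can't scroll away no matter how small the buffer is. A hedged Python sketch, using a trivial echo command as a stand-in for the real 30K batch file:

```python
# Run a (stand-in) build command and save every line of stdout and stderr
# to a file, so no error message is ever lost off the top of the console.
import subprocess

# Stand-in for the real command, e.g. ["cmd", "/c", "build.bat"] on Windows.
command = ["echo", "error: something scrolled by"]

result = subprocess.run(command, capture_output=True, text=True)

with open("build.log", "w") as f:
    f.write(result.stdout)
    f.write(result.stderr)
```

A bigger buffer is still nicer for watching the build live; logging to a file is just insurance for the messages you miss.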

Wednesday, November 10, 2010

Everyone hates Ticketmaster, except...

The November issue of Wired magazine contains this article about startup companies trying to compete with TicketMaster:

Everyone Hates Ticketmaster — But No One Can Take It Down


Spoiler Alert!


Here's the last line, the bottom line:
"... ticketing is not about the fans or the bands. It’s about the venues. And for them, Ticketmaster works."
Why does Ticketmaster work? Because
"... the company’s system is almost supernaturally reliable."
What does that have to do with SQL Anywhere? Well, SQL Anywhere is one of the reasons no one can take down TicketMaster...
"Because the ARCHTICS® ticketing systems are deployed into client sites with small or non-existent IT staffs, Ticketmaster requires a database to power its solution that could be deployed and managed with minimal administration. However, the high demands of the application meant it was not acceptable that this would come at the expense of performance or features." - Ticketmaster SQL Anywhere success story
So, everyone hates TicketMaster, except me :)

Monday, November 8, 2010

Couldn't find sqla.stackexchange.com

UPDATE: SQLA is back up, and backed up, and no, I still don't know what happened.



I don't know what's happened, I've sent an email, all we can do is wait and see.

What's more important?

What's more important, for your code to be readable, or for it to be right?

Sorry, nobody gets to reject the premise of the question. If you code at all, you already have an answer, it's written right into the product of your labors.

How about this: What's more important, for your code to be correct, or for it to be fast?

I know my answers, what's yours?

Friday, November 5, 2010

The fRiDaY File

This was first published on March 15, 2006:

EXPRTYPE, DATEDIFF, MICROSECOND, BIGINT and Volker Barth

Some of the minor differences between Volker Barth and me are apparent in this photo (I'm the one on the right):



However, there is a much bigger, hidden difference between Volker and me: when Volker reads something, he remembers what he's read and is able to use it in his daily life.

Unlike me, apparently.

You see, Volker's read the "What's New" sections in the SQL Anywhere Version 12 Help. I know that because of his comment on this blog post about how DATEDIFF is now returning BIGINT as well as supporting the new MICROSECOND date part.

I also know that I also read all the What's New stuff in Version 12... I know that because, well, I had to, didn't I? In order to write 10 Cool New Features In SQL Anywhere 12?

Didn't I?

I'm sure I did :)

OK, That's 12, This Is Now


For those of you working with earlier versions of SQL Anywhere, and even those lucky enough to be running Version 12, the question sometimes arises, "What data type am I getting?"

Older Help files often don't tell you what data type is returned by a built-in function. For example, the Version 9.0.2 Help file doesn't tell you that DATEDIFF returns INTEGER.

Plus, you may have coded an expression and you need to know what the resulting data type is so that, for example, you're not getting an integer when you want three decimal places.

The EXPRTYPE function was introduced in Version 8.0.3 (the docs say 9.0.0, but see Volker Barth's comment below) for just this purpose. You pass it a 'SELECT-statement-inside-single-quotes', plus a number (1, 2, 3, ...) telling EXPRTYPE which SELECT list item you are interested in, and it returns a string containing the data type for that item.
exprtype() works with 8.0.3, though it is not documented there. FWIW, the 8.0.3 maintenance release did not have its own doc set, just a "Readme file" with the news and changes - and unfortunately, that made it newer than the official docs, and therefore even the v9-V11 docs don't list the changes from 8.0.3. – Volker Barth
Here's what EXPRTYPE says about DATEDIFF in 8.0.3:

SELECT ' Version:' AS "Date Part", @@Version AS "Data Type"
UNION
SELECT ' 1. YEAR', EXPRTYPE ( 'SELECT DATEDIFF ( YEAR, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 2. QUARTER', EXPRTYPE ( 'SELECT DATEDIFF ( QUARTER, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 3. MONTH', EXPRTYPE ( 'SELECT DATEDIFF ( MONTH, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 4. WEEK', EXPRTYPE ( 'SELECT DATEDIFF ( WEEK, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 5. DAY', EXPRTYPE ( 'SELECT DATEDIFF ( DAY, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 6. HOUR', EXPRTYPE ( 'SELECT DATEDIFF ( HOUR, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 7. MINUTE', EXPRTYPE ( 'SELECT DATEDIFF ( MINUTE, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 8. SECOND', EXPRTYPE ( 'SELECT DATEDIFF ( SECOND, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 9. MILLISECOND', EXPRTYPE ( 'SELECT DATEDIFF ( MILLISECOND, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
ORDER BY 1;

Date Part,Data Type
' Version:','8.0.3.5379'
' 1. YEAR','int'
' 2. QUARTER','int'
' 3. MONTH','int'
' 4. WEEK','int'
' 5. DAY','int'
' 6. HOUR','int'
' 7. MINUTE','int'
' 8. SECOND','int'
' 9. MILLISECOND','int'

Here's what it says about DATEDIFF in Version 12:

SELECT ' Version:' AS "Date Part", @@Version AS "Data Type"
UNION
SELECT ' 1. YEAR', EXPRTYPE ( 'SELECT DATEDIFF ( YEAR, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 2. QUARTER', EXPRTYPE ( 'SELECT DATEDIFF ( QUARTER, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 3. MONTH', EXPRTYPE ( 'SELECT DATEDIFF ( MONTH, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 4. WEEK', EXPRTYPE ( 'SELECT DATEDIFF ( WEEK, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 5. DAY', EXPRTYPE ( 'SELECT DATEDIFF ( DAY, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 6. HOUR', EXPRTYPE ( 'SELECT DATEDIFF ( HOUR, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 7. MINUTE', EXPRTYPE ( 'SELECT DATEDIFF ( MINUTE, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 8. SECOND', EXPRTYPE ( 'SELECT DATEDIFF ( SECOND, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT ' 9. MILLISECOND', EXPRTYPE ( 'SELECT DATEDIFF ( MILLISECOND, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
UNION
SELECT '10. MICROSECOND', EXPRTYPE ( 'SELECT DATEDIFF ( MICROSECOND, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 )
ORDER BY 1;

Date Part,Data Type
' Version:',12.0.0.2589
' 1. YEAR',int
' 2. QUARTER',int
' 3. MONTH',int
' 4. WEEK',int
' 5. DAY',int
' 6. HOUR',bigint
' 7. MINUTE',bigint
' 8. SECOND',bigint
' 9. MILLISECOND',bigint
'10. MICROSECOND',bigint

If you like EXPRTYPE, you'll love sa_describe_query().

Only in 12, though.

Wednesday, November 3, 2010

Product Suggestion: Improve ISQL - Tools - Index Consultant




Let's say you have some slow SQL that can't be run as a separate, single SQL statement but can only live inside a BEGIN block... let's say it's part of a stored procedure or web service or some such.

Furthermore, let's say you really really really really really need some help... you've followed the 64 Easy Steps described here and you've looked at graphical plans until your head hurts, and you still don't know what to do.

And then, let's say, you have an epiphany: "The Index Consultant will tell me what to do!"

So you grab your code and paste it into ISQL and fiddle with it until it runs... slowly, of course, because that's your problem... and then you press Tools - Index Consultant.

You've done that before, and it's worked, and the Index Consultant is WONDERFUL!

But not this time...




Instead of two clicks (Tools - Index Consultant), it wants you to use the Application Profiling feature in Sybase Central. I'm not sure how many clicks that would be (fifty? 64? five hundred?) because I haven't been able to get it to work. It either comes up empty or Sybase Central crashes.

Which brings me to this suggestion: ISQL - Tools - Index Consultant SHOULD WORK on batches as well as single statements.

Yes, I know, lots of folks can get Application Profiling to work for them. But (I'm guessing here) there's a larger group of people who have just given up. I'm part of that group, in fact I'm a serial surrenderer... every six months or a year I make a serious attempt to use Application Profiling, using Version 10, Version 11, Version 12, and every time it's the same.

I used to shout at the monitor, "Has nobody ever heard of usability testing?"

Now I just relax, sippa cuppa, get on with other things.

Dilbert.com

Monday, November 1, 2010

Database Tracing In English

The interweb has finally run out of storage space for images. Or, I've run out of patience pressing Alt + Print Screen.

Either way, this article is...

  • a text-only description

  • of how to get the query plan

  • in SQL Anywhere Version 11.0.1.2276

  • for a SQL statement

  • that lives inside a block of code...
...like an application, or a stored procedure, or a trigger, or something as simple as a batch in ISQL containing a cursor fetch loop.

It's not easy. It should be easy, because ISQL has a wonderful "Plan Viewer", but it's not; here's why:

There... is... no... plan...


For example, if you have a BEGIN block containing a FOR loop that runs just fine in ISQL, albeit slowly, when you try to run Tools - Plan Viewer on the whole block you get this message:
There is no plan for the SQL statement.
[Sybase][ODBC Driver][SQL Anywhere]Syntax error near 'DECLARE' on line 2
If, in foolish desperation, you highlight the FOR loop and try to run the Plan Viewer on that, you'll see a message like this:
There is no plan for the SQL statement.
[Sybase][ODBC Driver][SQL Anywhere]Plan can not be generated for this type of statement

Who knew?


But wait! There is a way! It's called Database Tracing! and it comes in the box with SQL Anywhere! (take THAT, evil Borg!)
Note: If I spoke German (I don't) this article could have been "Database Tracing In German". The specific language isn't important except for the fact it is NOT "Database Tracing In Hieroglyphs". In other words, all text, no pictures. "A picture is worth a thousand words" means words are necessary to explain what you are looking at. Well, except for pornography art :)

Database Tracing In 64 Easy Steps


There's only 55 or 60 steps, some of these are [ahem] editorial comment:
  1. Start Sybase Central.

  2. Connect to your database. Don't forget to do that. If you forget, your whole experience will be an empty one.

  3. See the "Mode" item appear in the menu bar. Q: Why wasn't it there before, disabled but visible? A: Bad GUI design.

  4. Pay NO ATTENTION to the Mode item, it just leads to the Application Profiling Wizard which in turn leads to unhappiness.

  5. Instead, right-click on the grey "oil drum" icon that represents your database in the left pane, and click on "Tracing..."

  6. See the "Database Tracing Wizard" window appear.

  7. ...or maybe not. Everything looks different the second time you do this. Q: Why is this stuff so hard to explain? A: Bad GUI design.

  8. Let's assume this is your first time, life's too short to describe every branch of a tree that has a thousand forks.

  9. Click on Next.

  10. See the "Tracing Detail Level" window appear.

  11. Check "High detail".

  12. Click on Next.

  13. See the "Edit Tracing Levels" window appear.

  14. Uncheck everything... really, uncheck all 7 items, one by one, click click click, they're useless.

  15. Click on New.

  16. See the "Add Tracing Level" dialog box appear.

  17. Choose these values:
    Scope: database
    Tracing type: plans_with_statistics
    Condition: none

  18. Take a moment to wonder: if those are the default values for a "New" entry, why wasn't that entry already there? A: Bad GUI design.

  19. At this point, if you listen carefully, you may hear a voice say "Don't do that! It's too expensive!"

  20. Disregard the voice, click on Add.

  21. Click on Next.

  22. See the "Create External Database" window appear.

  23. Check "Do not create a new database".

  24. Click on Next.

  25. See the "Start Tracing" window appear.

  26. Check "Save tracing in this database".

  27. Check "No limit".

  28. Click on Finish.

  29. Wait a bit... it's either going to work... or it's not (see next section).

  30. If it does work...

  31. See the Sybase Central display appear as if nothing was happening... but it is, behind the scenes.

  32. To confirm that, look for this message in your database's console log...
    Diagnostic tracing is being sent to 'links=tcpip{host=127.0.0.1};ENG=sss;DBN=ddd;Encryption=SIMPLE'

  33. Switch over to ISQL, or your application, whatever, and run your test(s).

  34. When you're done running your test(s), switch back to Sybase Central.

  35. Right-click on the database icon in the left pane, and click on Tracing - Stop tracing with save.

  36. See the "Stop Tracing With Save" progress window appear and disappear... cool horizontal barber pole!

  37. NOW, go back and click on Mode - Application Profiling on the menu bar.

  38. ...that's "Mode" and then "Application Profiling", not "Application Profiling". Q: Why does a sub-menu item have the same name as a menu item? A: Bad GUI design.

  39. See the "Application Profiling Wizard" window appear.

  40. Click on Cancel.

  41. See the pretty blue "Application Profiling Details" pane appear at the bottom of the Sybase Central window.

  42. Click on the line of text "Open Analysis File or Connect to a Tracing Database". Yes, it's a link... you didn't know that, did you? A: Bad GUI design.

  43. See the "Open Analysis Or Connect To Tracing Database" dialog box appear.

  44. Check "In a tracing database"... even though you don't think you have a tracing database, you do... it's YOUR database.

  45. Click on Open.

  46. See the "Connect to a Tracing Database" dialog box open.

  47. Fill in these fields to connect to YOUR database...
    Identification tab - User ID: dba
    Identification tab - Password: xxx
    Database tab - Server name: sss
    Database tab - Database name: ddd

  48. Click on OK.

  49. See the "Tracing Database" display appear in the "Application Profiling Details" pane.

  50. Don't touch the buttons! Look down, wayyyy down, see the line of upside-down tabs? You didn't notice them at all, did you? A: Bad GUI design.

  51. Click on the "Database Tracing Data" tab.

  52. See a second line of upside-down tabs appear, with the "Summary" tab showing.

  53. Find the query you're interested in... not the BEGIN block line, but the SELECT you coded in the FOR statement.

  54. On that query line, right-click "Show the Detailed SQL Statements for the Selected Summary SQL Statement"... yes, we have no verbosity.

  55. See the display switch to the "Details" tab.

  56. Find the query you're interested in... again...

  57. Take a moment to remember the "LogExpensiveQueries" feature in 9.0.2, and how you thought that was complicated.

  58. Right-click "View More SQL Statement Details for the Selected Statement".

  59. See the "SQL Statement Details" dialog box appear.

  60. See the line of upside-down tabs at the bottom of the dialog box. You STILL didn't notice them appear? Yeah, neither did I. We're just dumb, I guess.

  61. Click on the "Query Information" tab.

  62. See the plan! Woohoo!

  63. It's just like the ISQL Plan Viewer except crappier.

  64. Why "crappier"? Try saving the plan to a file, you'll see. Same thing in 12, thanks for asking.
But... you do get to see the query plan! So that's good news.

ATTACH... TRACING... could... not... connect...


Here is Database Tracing's most common error message:
An error has occurred - tracing was not attached to the database.
ATTACH TRACING could not connect to the tracing database
[Sybase][ODBC Driver][SQL Anywhere]ATTACH TRACING could not connect to the tracing database
SQLCODE: -1097
SQLSTATE: HY000
SQL Statement: ATTACH TRACING TO LOCAL DATABASE
You can get that error...
  • if you specify the dbsrv11 -x none command line option (TCPIP is required even for a local database), or

  • if you specify the dbsrv11 -sb 0 command line option, or

  • if you use the SQL Anywhere 12.0.0.2589 engine on a database created with 11.0.1.2276.
The latter combination (V12 Database Tracing on a V11 database) comes with a bit more excitement later on: the engine crashes on shutdown.

In every case, the "ATTACH TRACING could not connect" message gives no clue as to the real problem, and that's a real problem in itself.