Wednesday, June 29, 2011

Top Ten Things To Know About Sybase Technologies and Their Fit Within SAP

That's the name of a session at the SAP TechEd conference which is co-located with Sybase Techwave in Las Vegas on September 12 through 16:


Top Ten Things To Know About Sybase Technologies and Their Fit Within SAP

Lecture (1 hour)

This session will introduce you to the Sybase technology solution portfolio, now integrated within many SAP solutions and available to the broader SAP community. Sybase's strategy of the unwired enterprise, long focused on the core aspects of managing data, analyzing data, and mobilizing data, has many current and planned integration points within the SAP technology portfolio to drive operational efficiency and reduce overall costs. This session will give answers to questions such as: What is the overall strategy of the Sybase technology solution portfolio and how does it integrate into the SAP portfolio? How will information management and business intelligence technology evolve in the context of a combined offering from SAP, BusinessObjects and Sybase? Get insights and guidance on what you need to know now, and what you need to anticipate coming in the future of Sybase's technology solution portfolio.

Session Level: Beginner

SAP Products Type: Sybase Adaptive Server Enterprise ; Sybase IQ ; Sybase Replication Server ; Sybase Unwired Platform

Topics: Enterprise Data Warehousing ; Enterprise Information Management (EIM) ; On Demand ; On Device ; Sybase Technology and Solutions ; Sybase, an SAP Company

Job Function: Business Process ; Business Intelligence ; Design and Architecture ; IT Management ; SAP Partners, Consultants and ISVs ; Web and Application Development ; Systems Administration and Security

Speaker(s):
Peter Thawley
Sr. Director / Architect, Office of the CTO
Sybase, an SAP Company

What else?


So far there are 256 sessions (yes, a perfect power of 2) listed for TechEd, and a completely wonderful web interface for searching the sessions:



Have a look for yourself, it's really quite breathtaking when you think of past Techwave agendas, at least in the number of sessions and the great variety of speakers.

In other news...


The "TechWave 2011 Partial List of Breakout Sessions" has appeared on the Sybase Techwave 2011 website.

Also, Techwave registration is now open.


Monday, June 27, 2011

Why did it take 0.43 seconds to insert one row?

Jonathan O'Keeffe (jpo) posted this comment on Stupid Error Messages last week:

"Why did it take 0.43 seconds to insert one row? Is there something special about uniqueidentifier that makes this slow?"
The short answers are "it doesn't" and "no".

Jonathan was referring to this output copied from the "SQL Statements" and "Results - Messages" panes displayed by SQL Anywhere's dbisql utility:

CREATE TABLE t (
pkey UNIQUEIDENTIFIER NOT NULL PRIMARY KEY );

INSERT t VALUES ( '4b67abbe3c9246f883db0ab2d2b78f8b' );

Execution time: 0.43 seconds
1 row(s) inserted

I remember thinking at the time "That's an awfully big number!" but I left it in because that's what was actually displayed.

Or was it?


I can't remember (or find) the database where I ran that test, or even remember what version of SQL Anywhere was used.

But wait, it gets worse... I can't reproduce anything that looks even remotely close to that output. The order of the "Execution time:" and "1 row(s) inserted" lines in versions 9 and 10 is different from versions 11 and 12, but in all four versions there are two "Execution time:" numbers displayed because there are two statements executed, CREATE TABLE and INSERT:
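
In other words, the output looks something like this, with one "Execution time:" per statement (the numbers here are illustrative placeholders, and the ordering of the lines differs between versions):

Execution time: 0.032 seconds
Execution time: 0.008 seconds
1 row(s) inserted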



Here is the long answer to Jonathan's questions...
  • The "Execution time: 0.43 seconds" is probably a copy-and-paste mistake. It might be for the CREATE TABLE... it could also be for the INSERT, but it probably isn't.

  • No, there is nothing special about the UNIQUEIDENTIFIER data type that makes it slow... under the covers it's just another steenking BINARY string.

  • As far as I know (can't find it in the docs) the "Execution time:" number is a notoriously unreliable client-side view of how long something takes.

  • No matter how fast the client-server interface is, it adds considerable overhead (relatively speaking) to very fast operations like INSERT. For example, if you have a million rows to insert and you care about performance, don't code a client-side loop that sends a million separate INSERT statements to the server. Use INSERT SELECT, call a stored procedure, use LOAD TABLE, do something to push the process down to the server (see the sketch after this list).
    "Any programmer seriously interested in performance runs transactions using a stored-procedure
    interface, rather than SQL commands over ODBC/JDBC."
    - Michael Stonebraker

  • Execution times can vary wildly with the state of the server, especially if (like in this case) absolutely no care has been taken with experimental design ...in fact, in this case, there was no experiment being conducted.
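
Here's the sketch mentioned above: a minimal example of pushing the work down to the server in SQL Anywhere (the table name, column names and file path are hypothetical, and sa_rowgenerator is just a convenient way to fabricate a million rows for the example):

-- One server-side statement instead of a million client-side INSERTs:
INSERT INTO target_table ( pkey, data )
SELECT row_num,
       STRING ( 'row ', row_num )
  FROM sa_rowgenerator ( 1, 1000000 );

-- Or bulk-load from a file, bypassing the client-server interface entirely
-- (the file must be visible to the server):
LOAD TABLE target_table ( pkey, data )
  FROM 'c:\\temp\\million_rows.txt';

COMMIT;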



Friday, June 24, 2011

The fRiDaY File - Managing the Web-Enabled Enterprise

Dilbert.com


Will SUP 2.1 Mark The Beginning Of The End For SQL Anywhere?

Perhaps this is just another crazy conspiracy theory but here goes...

When version 2.1 of the Sybase Unwired Platform is released in September 2011, it will ship with an HTML5 container, plus 16 mobile applications using the HTML5 container, and according to Kevin Benedict "HTML5 will play a significant and growing role in the SAP/Sybase strategy and road map".

So what?


Here's so what: One of the top 5 supported features in HTML5 is Web Storage which is persistent storage of data on the client side. In other words, a database.

Just like SQL Anywhere, except it isn't.

Or is it?

Well, if you do a Google search on HTML5 "SQL Anywhere" you pretty well come up empty, so if SQL Anywhere has a role to play with SUP 2.1's HTML5 container it's a well kept secret.



If you do a Google search for "SUP 2.1" you will see that it is already available... but not from SAP/Sybase. I see a name change coming in the near future, or a trademark lawsuit... or maybe just a good old-fashioned Googlewashing campaign :)


Wednesday, June 22, 2011

Comparing SQL Anywhere With ASE

Glenn Paulley has just published a wonderful new whitepaper describing the differences between SQL Anywhere and Adaptive Server Enterprise.

Here's the official description:

Migrating SQL Anywhere database applications to ASE. Some Sybase customers may need to explore the possibility of migrating one or more SQL Anywhere applications to ASE servers to consolidate their server environments, or to take advantage of some specific ASE product features. This white paper outlines the major differences between the two systems, specifically comparing SQL Anywhere 12.0.1 with ASE 15.5.
Here's how Dilbert might describe it:
To be Forewarned is to be Forearmed. Some SQL Anywhere developers may want to know how miserable life will be when they are forced by management to move their databases to ASE. This white paper outlines the major differences in complexity between the two systems, specifically comparing SQL Anywhere 12.0.1 with ASE 15.5.

Dilbert.com
But Glenn isn't Dilbert... you won't find any sarcasm in his whitepaper, just the facts. Pages and pages of facts... 71 pages in all.


Monday, June 20, 2011

Latest SQL Anywhere EBFs: 12.0.1.3356 and 10.0.1.4239 Windows, 10.0.1.4239 Solaris

Current builds...

HP-UX Itanium 11.0.1.2584 EBF 07 Apr 2011
10.0.1.3777 EBF 14 Oct 2008

HP-UX PA-RISC 10.0.1.3778 EBF 16 Oct 2008

IBM AIX 12.0.1 GA Upgrade from 12.0.0 15 Mar 2011
11.0.1.2584 EBF 07 Apr 2011

Linux Itanium 10.0.1.3777 EBF 14 Oct 2008

Linux x86/x64 12.0.1.3311 EBF 01 Apr 2011
11.0.1.2617 EBF 13 Jun 2011
10.0.1.4213 EBF 28 Apr 2011

Mac OS 12.0.1.3352 EBF 26 May 2011
11.0.1.2449 EBF 29 Jun 2010
10.0.1.4042 EBF 01 Oct 2010

NetWare 10.0.1.4036 EBF 10 Mar 2010

Solaris SPARC 12.0.1 GA Upgrade from 12.0.0 15 Mar 2011
11.0.1.2588 EBF 19 Apr 2011
10.0.1.4239 EBF *** 17 Jun 2011 ***

Solaris x64 11.0.1.2584 EBF 07 Apr 2011
10.0.1.3870 EBF 01 Apr 2009

Windows CE 10.0.1.4036 EBF 09 Mar 2010

Windows Itanium 10.0.1 GA Upgrade from 10.0.0 26 Apr 2007

Windows x86/x64 12.0.1.3356 EBF *** 17 Jun 2011 ***
11.0.1.2606 EBF 20 May 2011
10.0.1.4239 EBF *** 17 Jun 2011 ***


Friday, June 17, 2011

The fRiDaY File - Elbonian to English

What would you do if you saw this error message?

Cannot drop the table '#elbat_yraropmet_ymmud_a_si_siht' 
Would you try this?
Nice try, but here is where you need to look.


Stupid Error Messages

Have you ever seen an error message like this one?

Invalid phone number (212) 555-1284 - not numeric
Maybe this one?
Invalid phone number 2125551284 - format not (999) 999-9999
Both messages are stupid, stupid, stupid... that's three "stupids" out of a maximum possible score of 5. Modern systems should be able to deal with human-readable input formats like credit card numbers with or without the dashes.
Invalid credit card number 4533-3048-3842-9544 - not numeric
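
For the record, stripping out the human-friendly punctuation before validating is a one-liner; here's a minimal SQL Anywhere sketch (the literal is just an example value):

SELECT REPLACE ( REPLACE ( '4533-3048-3842-9544', '-', '' ), ' ', '' )
  AS card_number_digits_only;

-- card_number_digits_only = '4533304838429544'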

Today's Stupid Error Message



CREATE TABLE t (
pkey UNIQUEIDENTIFIER NOT NULL PRIMARY KEY );

INSERT t VALUES ( '4b67abbe3c9246f883db0ab2d2b78f8b' );
GO

Msg 8169, Level 16, State 2, Server ENVY, Line 4
Conversion failed when converting from a character string to uniqueidentifier.

You need the dashes for that when using SQL Server 2008:

INSERT t VALUES ( '4b67abbe-3c92-46f8-83db-0ab2d2b78f8b' );
GO

(1 row affected)

I suppose we should be grateful it accepts lower case.

For the record, SQL Anywhere has no such difficulty:

CREATE TABLE t (
pkey UNIQUEIDENTIFIER NOT NULL PRIMARY KEY );

INSERT t VALUES ( '4b67abbe3c9246f883db0ab2d2b78f8b' );

Execution time: 0.43 seconds
1 row(s) inserted

As they say, Watcom does things the way they should be done.


Monday, June 13, 2011

The MESSAGE Statement In SQL Server

Question: How do I write diagnostic/trace messages to a text file during the execution of a stored procedure on Microsoft SQL Server 2008, similar to SQL Anywhere's MESSAGE ... TO CONSOLE statement?

I need this facility to help diagnose and test MSS stored procedures called from MobiLink scripts.

Answer: When this question was asked on the SQL Anywhere Forum, three answers were given:

I chose the first answer: Call SQL Server's xp_cmdshell procedure to execute a Windows ECHO statement which uses the ">>" redirection operator to append a string to a file.

Here's the code, warts and all:

-- To allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO
-- To update the currently configured value for advanced options.
RECONFIGURE
GO
-- To enable the feature.
EXEC sp_configure 'xp_cmdshell', 1
GO
-- To update the currently configured value for this feature.
RECONFIGURE
GO

BEGIN TRY
DROP PROCEDURE write_diagnostic_message
END TRY
BEGIN CATCH
-- ignore error
END CATCH
GO

CREATE PROCEDURE write_diagnostic_message
@message VARCHAR ( 1000 )
AS
BEGIN
DECLARE @cmd VARCHAR ( 2000 )
SET NOCOUNT ON
SET @cmd = 'ECHO ' + CONVERT ( VARCHAR, CURRENT_TIMESTAMP, 121 ) + ' ' + @message + ' >>c:\temp\messages.txt'
EXEC master..xp_cmdshell @cmd, no_output
END
GO

The crap stuff at the top (the sp_configure and RECONFIGURE statements) is necessary to prevent xp_cmdshell from failing with an error message.

The BEGIN TRY and BEGIN CATCH blocks are a quick-and-dirty way to execute a statement (in this case DROP PROCEDURE) which may or may not work when you don't care which.

The write_diagnostic_message procedure itself provides a slightly-easier-to-call interface to the xp_cmdshell solution:
  • The DECLARE @cmd is necessary because the EXEC statement can't handle expressions passed as arguments to stored procedures.

  • The SET NOCOUNT ON is more crap stuff required by SQL Server; you can read about it in the "Best Practices" section of the page documenting the CREATE PROCEDURE statement.

  • The SET @cmd statement uses CONVERT to put the current date and time in front of the @message parameter, building an ECHO ... >>filespec command.

  • The EXEC statement passes the ECHO command to xp_cmdshell, and the no_output option prevents xp_cmdshell itself from trying to display anything on screen.
Here's how to code a call to write_diagnostic_message from another SQL Server procedure:

EXEC write_diagnostic_message 'This is a test'

Here's the output in c:\temp\messages.txt:

2011-06-13 06:00:59.897 This is a test

Caution: Don't try to pass an expression as the argument to write_diagnostic_message, that won't work in SQL Server. You'll need to DECLARE another variable, use SET to fill it from the expression, and then pass the variable to write_diagnostic_message.
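
Here's a minimal sketch of that workaround (the expression is just an illustration):

DECLARE @diagnostic VARCHAR ( 1000 )
SET @diagnostic = 'rows affected = ' + CONVERT ( VARCHAR, @@ROWCOUNT )
EXEC write_diagnostic_message @diagnostic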

Question: Why do people use SQL Server after using SQL Anywhere?

Answer: Because they are forced to by management? ...it's a continuing mystery to me.


Friday, June 10, 2011

The fRiDaY File - Look! Actual Code!

Dilbert.com


Opinion: Code Generators Are (Should Be) Disposable

A code generator is a piece of software, often template-driven, which creates a large amount of application program code from a small amount of information entered by a programmer. The generated code is then processed by a compiler or other language processor just as if it was written by a human being.

(This definition is not to be confused with the code generation component of a true language compiler... that's something else entirely.)
Code generators are wonderful things, especially when you have a lot of boring, similar code to write. The MobiLink Synchronization Model Wizard is one example of a valuable code generator, especially the most recent version that comes in-the-box with SQL Anywhere Version 12; MobiLink setups often include hundreds of scripts that are exactly the same except for differences between one table and another (columns, primary keys, etcetera).
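
For example, classic MobiLink upload_insert scripts for two different tables differ only in the table and column names; here's a hypothetical pair (table and column names invented for illustration):

INSERT INTO customer ( customer_id, customer_name ) VALUES ( ?, ? );

INSERT INTO product ( product_id, product_name ) VALUES ( ?, ? );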

When In Rome


By definition, no part of a code generator is present at runtime. That means runtime error messages are expressed in the context of the output created by the code generator, not the input created by the programmer. In the case of the MobiLink wizard, error messages are expressed in terms of the output SQL, not the spreadsheet-like input to the wizard.

And that's the main reason a code generator should be disposed of as soon as possible, right after all the boring bits have been successfully generated. If the debugging is going to involve the generated code, then the fixes should be made to the generated code... When in Rome, do as the Romans do... When in SQL, code in SQL.

This is reinforced by the fact that special cases are always going to be difficult if not impossible to express as input to a code generator. No matter how sophisticated a code generator is, there are going to be cases that have to be hand-coded in the output language, not the input.

A worst-case scenario involves a custom-coded code generator, especially one created and used by a temporary contractor... those really should be disposed of as soon as their usefulness diminishes. There will always be more people who can work with the output code than those who can maintain and operate the code generator.


Wednesday, June 8, 2011

Most Successful Database Software Vendor In China


According to USA Today
Sybase CEO John Chen is "one of the most knowledgeable American business leaders on the inner workings of China."

And it looks like it's paying off...

Sybase Named Most Successful Database Software Vendor in China

China's Centre for Information Industry Development Consulting (CCID Consulting) honoured the company with the "2010-2011 Most Successful Database Software Vendor in China" award. Sybase is the only enterprise vendor to receive this prestigious award two years in a row.

... CCID Consulting estimated Sybase's database has garnered 25.1 percent market share in the telecommunications industry, 25 percent in the banking industry, and 24.1 percent in the insurance industry.

... These recognitions reaffirm the technical prowess of Sybase's solutions and the excellent market competitiveness it accords to customers of its technology, including its industry-leading SQL Anywhere mobile and embedded database technology, ...


Monday, June 6, 2011

Installing A SQL Server Database From A BAK File

Question: I've just been given a SQL Server 2005 *.bak file to use as a consolidated database in a MobiLink synchronization setup... what do I do with this file?

Answer: That file was most likely created by the SQL Server 2005 BACKUP statement and it's easy to restore into a brand-new database.

But first, you need to install SQL Server. This blog is about SQL Anywhere, not SQL Server, so this article makes no assumptions about where you're starting from... like the man said on TV "nobody needs a clue":

Step 1: Download and install SQL Server. If you want, you can start with SQL Server 2008 instead of 2005 and still use the *.bak file you've been given; you can download Microsoft SQL Server 2008 Express Edition Service Pack 1 here.

Step 2: Start the SQL Server interactive SQL utility osql.exe on the master database (which always exists on SQL Server); here's a command line for Windows:

"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe" -S Envy -U sa -P j68Fje9#fyu489 -d master



Step 3: Use the CREATE DATABASE statement in osql to prepare a database to receive the data in the *.bak file; the DROP DATABASE statement is included in case you have to start over again later:

USE master
GO

DROP DATABASE mydb
GO

CREATE DATABASE mydb
ON
( NAME = mydb,
FILENAME = 'C:\temp\mydb.mdf',
SIZE = 300MB )
LOG ON
( NAME = mydb_log,
FILENAME = 'C:\temp\mydb.ldf',
SIZE = 100MB )
GO

Step 4: Find out what you have in the *.bak file with the RESTORE FILELISTONLY statement:

RESTORE FILELISTONLY
FROM DISK = 'C:\temp\mydb.bak'
GO

Here are the results; only the two "LogicalName" values are important: myXXX for the database file and myXXX_log for the transaction log:

LogicalName            myXXX                                    myXXX_log
PhysicalName           D:\SQLServer\Data\myXXX.mdf              E:\SQLServer\Logfiles\myXXX_log.ldf
Type                   D                                        L
FileGroupName          PRIMARY                                  NULL
Size                   209715200                                214433792
MaxSize                35184372080640                           2199023255552
FileId                 1                                        2
CreateLSN              0                                        0
DropLSN                0                                        0
UniqueId               9DF4176F-4C1F-4198-A32C-3E5E91F8EED8     47EE2F14-8CB0-49FA-B525-CD79D38FAE91
ReadOnlyLSN            0                                        0
ReadWriteLSN           0                                        0
BackupSizeInBytes      146210816                                0
SourceBlockSize        512                                      512
FileGroupId            1                                        0
LogGroupGUID           NULL                                     NULL
DifferentialBaseLSN    3619000000786000140                      0
DifferentialBaseGUID   062CF42D-5C45-47F4-8ADB-84EF794C2F82     00000000-0000-0000-0000-000000000000
IsReadOnly             0                                        0
IsPresent              1                                        1
TDEThumbprint          NULL                                     NULL
(2 rows affected)

Step 5: Use the RESTORE DATABASE statement to copy the data from the *.bak file into your database. The WITH MOVE options tell SQL Server to put the data into files that are in different locations from the original, and the WITH REPLACE option tells SQL Server to accept the fact you're changing the database name to mydb. The only place you have to use the old names is in the two MOVE clauses: MOVE 'myXXX' and MOVE 'myXXX_log' refer to the LogicalName values you found in Step 4.

RESTORE DATABASE mydb
FROM DISK = 'C:\temp\mydb.bak'
WITH MOVE 'myXXX' TO 'C:\temp\mydb.mdf',
MOVE 'myXXX_log' TO 'C:\temp\mydb.ldf',
REPLACE
GO

The RESTORE DATABASE statement is remarkably robust when it comes to upgrading data on the fly; between SQL Server 2005 and 2008 there were apparently quite a few version changes:

Processed 17848 pages for database 'mydb', file 'myXXX' on file 1.
Processed 2 pages for database 'mydb', file 'myXXX_log' on file 1.
Converting database 'mydb' from version 611 to the current version 655.
Database 'mydb' running the upgrade step from version 611 to version 621.
Database 'mydb' running the upgrade step from version 621 to version 622.
Database 'mydb' running the upgrade step from version 622 to version 625.
Database 'mydb' running the upgrade step from version 625 to version 626.
Database 'mydb' running the upgrade step from version 626 to version 627.
Database 'mydb' running the upgrade step from version 627 to version 628.
Database 'mydb' running the upgrade step from version 628 to version 629.
Database 'mydb' running the upgrade step from version 629 to version 630.
Database 'mydb' running the upgrade step from version 630 to version 631.
Database 'mydb' running the upgrade step from version 631 to version 632.
Database 'mydb' running the upgrade step from version 632 to version 633.
Database 'mydb' running the upgrade step from version 633 to version 634.
Database 'mydb' running the upgrade step from version 634 to version 635.
Database 'mydb' running the upgrade step from version 635 to version 636.
Database 'mydb' running the upgrade step from version 636 to version 637.
Database 'mydb' running the upgrade step from version 637 to version 638.
Database 'mydb' running the upgrade step from version 638 to version 639.
Database 'mydb' running the upgrade step from version 639 to version 640.
Database 'mydb' running the upgrade step from version 640 to version 641.
Database 'mydb' running the upgrade step from version 641 to version 642.
Database 'mydb' running the upgrade step from version 642 to version 643.
Database 'mydb' running the upgrade step from version 643 to version 644.
Database 'mydb' running the upgrade step from version 644 to version 645.
Database 'mydb' running the upgrade step from version 645 to version 646.
Database 'mydb' running the upgrade step from version 646 to version 647.
Database 'mydb' running the upgrade step from version 647 to version 648.
Database 'mydb' running the upgrade step from version 648 to version 649.
Database 'mydb' running the upgrade step from version 649 to version 650.
Database 'mydb' running the upgrade step from version 650 to version 651.
Database 'mydb' running the upgrade step from version 651 to version 652.
Database 'mydb' running the upgrade step from version 652 to version 653.
Database 'mydb' running the upgrade step from version 653 to version 654.
Database 'mydb' running the upgrade step from version 654 to version 655.
RESTORE DATABASE successfully processed 17850 pages in 9.746 seconds (14.308 MB/sec).

Now you can start using your new database; here's an osql command line:

"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe" -S Envy -U sa -P j68Fje9#fyu489 -d mydb



Friday, June 3, 2011

The fRiDaY File - RTFM!

Dilbert.com


The USE Statement In SQL Anywhere

Question: How do I code "USE database" in SQL Anywhere? I'm trying to write a batch file to update several databases on the same server.

Answer: Use (pun intended) the CONNECT and SET CONNECTION statements in your SQL file, then run that SQL file in batch mode via dbisql.

Here is a simple example of how CONNECT can establish named connections to three different databases, and then switch back and forth between those connections via SET CONNECTION:


CONNECT TO ddd1 DATABASE ddd1 AS ddd1_con USER dba IDENTIFIED BY sql;
CONNECT TO ddd2 DATABASE ddd2 AS ddd2_con USER dba IDENTIFIED BY sql;
CONNECT TO ddd3 DATABASE ddd3 AS ddd3_con USER dba IDENTIFIED BY sql;

SET CONNECTION ddd1_con;
SET CONNECTION ddd2_con;
SET CONNECTION ddd3_con;
Caution: This stuff doesn't fully work in dbisql's GUI mode. I don't know why, but there you go... in batch mode the CONNECT statement can be used to create and maintain a named connection that can be reused via SET CONNECTION. However, in GUI mode the second CONNECT closes the first connection even if you gave that one a name. Yup, that's a bummer... but you can use CONNECT to switch back and forth between databases in GUI mode if you don't mind only having one connection open at a time, so maybe it doesn't matter to you.
There are two different forms of CONNECT; the format shown above is for shared memory connections, and CONNECT USING can be used with TCPIP connections:

CONNECT USING 'SERVER=ddd1;DBN=ddd1;UID=dba;PWD=sql;LINKS=TCPIP;CON=ddd1_con';
CONNECT USING 'SERVER=ddd2;DBN=ddd2;UID=dba;PWD=sql;LINKS=TCPIP;CON=ddd2_con';
CONNECT USING 'SERVER=ddd3;DBN=ddd3;UID=dba;PWD=sql;LINKS=TCPIP;CON=ddd3_con';

SET CONNECTION ddd1_con;
SET CONNECTION ddd2_con;
SET CONNECTION ddd3_con;
Caution: CONNECT USING is a SQL statement so don't use "double quotes" around the connection string, use 'single quotes' ...and don't forget the CON= connection parameter; you have to give the connection a name if you want to use the SET CONNECTION statement.
For the skeptical among you, here's proof that it works. First, here is a Windows command file that creates and starts three separate databases, then runs dbisql in batch mode to execute a SQL script:

"%SQLANY12%\bin32\dbinit.exe" ddd1.db
"%SQLANY12%\bin32\dbinit.exe" ddd2.db
"%SQLANY12%\bin32\dbinit.exe" ddd3.db

"%SQLANY12%\bin32\dbspawn.exe" -f "%SQLANY12%\bin32\dbeng12.exe" -x none ddd1.db
"%SQLANY12%\bin32\dbspawn.exe" -f "%SQLANY12%\bin32\dbeng12.exe" -x none ddd2.db
"%SQLANY12%\bin32\dbspawn.exe" -f "%SQLANY12%\bin32\dbeng12.exe" -x none ddd3.db

"%SQLANY12%\bin32\dbisql.com" READ ENCODING Cp1252 "06s script demonstrate_set_connection.sql"

Here's what the SQL script "06s script demonstrate_set_connection.sql" contains:

----------------------------------------------------------------------------
-- Establish 3 connections from one dbisql session.

CONNECT TO ddd1 DATABASE ddd1 AS ddd1_con USER dba IDENTIFIED BY sql;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ),
' CONNECT TO ', PROPERTY ( 'Name' ),
' DATABASE ', DB_PROPERTY ( 'Name' ),
' AS ', CONNECTION_PROPERTY ( 'Name' ) ) TO CONSOLE;

CONNECT TO ddd2 DATABASE ddd2 AS ddd2_con USER dba IDENTIFIED BY sql;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ),
' CONNECT TO ', PROPERTY ( 'Name' ),
' DATABASE ', DB_PROPERTY ( 'Name' ),
' AS ', CONNECTION_PROPERTY ( 'Name' ) ) TO CONSOLE;

CONNECT TO ddd3 DATABASE ddd3 AS ddd3_con USER dba IDENTIFIED BY sql;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ),
' CONNECT TO ', PROPERTY ( 'Name' ),
' DATABASE ', DB_PROPERTY ( 'Name' ),
' AS ', CONNECTION_PROPERTY ( 'Name' ) ) TO CONSOLE;

----------------------------------------------------------------------------
-- INSERT a row on each connection.

SET CONNECTION ddd1_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd1_con;' ) TO CONSOLE;
BEGIN
DROP TABLE t1;
EXCEPTION WHEN OTHERS THEN
END;
CREATE TABLE t1 ( pkey INTEGER NOT NULL PRIMARY KEY );
INSERT t1 VALUES ( 111 );
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' INSERT t1 VALUES ( 111 );' ) TO CONSOLE;

WAITFOR DELAY '00:00:02';

SET CONNECTION ddd2_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd2_con;' ) TO CONSOLE;
BEGIN
DROP TABLE t2;
EXCEPTION WHEN OTHERS THEN
END;
CREATE TABLE t2 ( pkey INTEGER NOT NULL PRIMARY KEY );
INSERT t2 VALUES ( 222 );
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' INSERT t2 VALUES ( 222 );' ) TO CONSOLE;

WAITFOR DELAY '00:00:02';

SET CONNECTION ddd3_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd3_con;' ) TO CONSOLE;
BEGIN
DROP TABLE t3;
EXCEPTION WHEN OTHERS THEN
END;
CREATE TABLE t3 ( pkey INTEGER NOT NULL PRIMARY KEY );
INSERT t3 VALUES ( 333 );
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' INSERT t3 VALUES ( 333 );' ) TO CONSOLE;

WAITFOR DELAY '00:00:02';

----------------------------------------------------------------------------
-- ROLLBACK, then INSERT a different row.

SET CONNECTION ddd1_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd1_con;' ) TO CONSOLE;
ROLLBACK;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' ROLLBACK;' ) TO CONSOLE;
INSERT t1 VALUES ( 1 );
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' INSERT t1 VALUES ( 1 );' ) TO CONSOLE;

WAITFOR DELAY '00:00:02';

SET CONNECTION ddd2_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd2_con;' ) TO CONSOLE;
ROLLBACK;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' ROLLBACK;' ) TO CONSOLE;
INSERT t2 VALUES ( 2 );
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' INSERT t2 VALUES ( 2 );' ) TO CONSOLE;

WAITFOR DELAY '00:00:02';

SET CONNECTION ddd3_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd3_con;' ) TO CONSOLE;
ROLLBACK;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' ROLLBACK;' ) TO CONSOLE;
INSERT t3 VALUES ( 3 );
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' INSERT t3 VALUES ( 3 );' ) TO CONSOLE;

WAITFOR DELAY '00:00:02';

----------------------------------------------------------------------------
-- COMMIT, then show final row.

SET CONNECTION ddd1_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd1_con;' ) TO CONSOLE;
COMMIT;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' COMMIT;' ) TO CONSOLE;
BEGIN
DECLARE @pkey INTEGER;
SELECT t1.pkey INTO @pkey FROM t1;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' t1.pkey = ', @pkey ) TO CONSOLE;
END;

WAITFOR DELAY '00:00:02';

SET CONNECTION ddd2_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd2_con;' ) TO CONSOLE;
COMMIT;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' COMMIT;' ) TO CONSOLE;
BEGIN
DECLARE @pkey INTEGER;
SELECT t2.pkey INTO @pkey FROM t2;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' t2.pkey = ', @pkey ) TO CONSOLE;
END;

WAITFOR DELAY '00:00:02';

SET CONNECTION ddd3_con;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' SET CONNECTION ddd3_con;' ) TO CONSOLE;
COMMIT;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' COMMIT;' ) TO CONSOLE;
BEGIN
DECLARE @pkey INTEGER;
SELECT t3.pkey INTO @pkey FROM t3;
MESSAGE STRING ( CURRENT TIMESTAMP, ' ', CONNECTION_PROPERTY ( 'Name' ), ' t3.pkey = ', @pkey ) TO CONSOLE;
END;

Here's what the dbisql batch session displays as it runs:

Connected to "ddd1" on "ddd1".
Connected to "ddd2" on "ddd2".
Connected to "ddd3" on "ddd3".
1 row(s) inserted
1 row(s) inserted
1 row(s) inserted
1 row(s) inserted
1 row(s) inserted
1 row(s) inserted
1 row(s) affected
1 row(s) affected
1 row(s) affected
Execution time: 18.14 seconds

Here's the MESSAGE output as it appeared in the three separate server console windows for ddd1, ddd2 and ddd3:

2011-06-02 12:04:46.978 ddd1_con CONNECT TO ddd1 DATABASE ddd1 AS ddd1_con
2011-06-02 12:04:47.110 ddd1_con SET CONNECTION ddd1_con;
2011-06-02 12:04:47.602 ddd1_con INSERT t1 VALUES ( 111 );
2011-06-02 12:04:54.776 ddd1_con SET CONNECTION ddd1_con;
2011-06-02 12:04:54.807 ddd1_con ROLLBACK;
2011-06-02 12:04:54.824 ddd1_con INSERT t1 VALUES ( 1 );
2011-06-02 12:05:01.114 ddd1_con SET CONNECTION ddd1_con;
2011-06-02 12:05:01.144 ddd1_con COMMIT;
2011-06-02 12:05:01.168 ddd1_con t1.pkey = 1

2011-06-02 12:04:47.052 ddd2_con CONNECT TO ddd2 DATABASE ddd2 AS ddd2_con
2011-06-02 12:04:49.739 ddd2_con SET CONNECTION ddd2_con;
2011-06-02 12:04:50.171 ddd2_con INSERT t2 VALUES ( 222 );
2011-06-02 12:04:56.946 ddd2_con SET CONNECTION ddd2_con;
2011-06-02 12:04:56.958 ddd2_con ROLLBACK;
2011-06-02 12:04:56.971 ddd2_con INSERT t2 VALUES ( 2 );
2011-06-02 12:05:03.281 ddd2_con SET CONNECTION ddd2_con;
2011-06-02 12:05:03.294 ddd2_con COMMIT;
2011-06-02 12:05:03.304 ddd2_con t2.pkey = 2

2011-06-02 12:04:47.105 ddd3_con CONNECT TO ddd3 DATABASE ddd3 AS ddd3_con
2011-06-02 12:04:52.286 ddd3_con SET CONNECTION ddd3_con;
2011-06-02 12:04:52.743 ddd3_con INSERT t3 VALUES ( 333 );
2011-06-02 12:04:59.059 ddd3_con SET CONNECTION ddd3_con;
2011-06-02 12:04:59.071 ddd3_con ROLLBACK;
2011-06-02 12:04:59.083 ddd3_con INSERT t3 VALUES ( 3 );
2011-06-02 12:05:05.386 ddd3_con SET CONNECTION ddd3_con;
2011-06-02 12:05:05.398 ddd3_con COMMIT;
2011-06-02 12:05:05.408 ddd3_con t3.pkey = 3

When you take the MESSAGE output and sort it by timestamp, you can see how one dbisql session was able to switch back and forth between three separate connections; not only that, but it could do that without disturbing the transactions that were running on each of those connections:

2011-06-02 12:04:46.978 ddd1_con CONNECT TO ddd1 DATABASE ddd1 AS ddd1_con

2011-06-02 12:04:47.052 ddd2_con CONNECT TO ddd2 DATABASE ddd2 AS ddd2_con

2011-06-02 12:04:47.105 ddd3_con CONNECT TO ddd3 DATABASE ddd3 AS ddd3_con

2011-06-02 12:04:47.110 ddd1_con SET CONNECTION ddd1_con;
2011-06-02 12:04:47.602 ddd1_con INSERT t1 VALUES ( 111 );

2011-06-02 12:04:49.739 ddd2_con SET CONNECTION ddd2_con;
2011-06-02 12:04:50.171 ddd2_con INSERT t2 VALUES ( 222 );

2011-06-02 12:04:52.286 ddd3_con SET CONNECTION ddd3_con;
2011-06-02 12:04:52.743 ddd3_con INSERT t3 VALUES ( 333 );

2011-06-02 12:04:54.776 ddd1_con SET CONNECTION ddd1_con;
2011-06-02 12:04:54.807 ddd1_con ROLLBACK;
2011-06-02 12:04:54.824 ddd1_con INSERT t1 VALUES ( 1 );

2011-06-02 12:04:56.946 ddd2_con SET CONNECTION ddd2_con;
2011-06-02 12:04:56.958 ddd2_con ROLLBACK;
2011-06-02 12:04:56.971 ddd2_con INSERT t2 VALUES ( 2 );

2011-06-02 12:04:59.059 ddd3_con SET CONNECTION ddd3_con;
2011-06-02 12:04:59.071 ddd3_con ROLLBACK;
2011-06-02 12:04:59.083 ddd3_con INSERT t3 VALUES ( 3 );

2011-06-02 12:05:01.114 ddd1_con SET CONNECTION ddd1_con;
2011-06-02 12:05:01.144 ddd1_con COMMIT;
2011-06-02 12:05:01.168 ddd1_con t1.pkey = 1

2011-06-02 12:05:03.281 ddd2_con SET CONNECTION ddd2_con;
2011-06-02 12:05:03.294 ddd2_con COMMIT;
2011-06-02 12:05:03.304 ddd2_con t2.pkey = 2

2011-06-02 12:05:05.386 ddd3_con SET CONNECTION ddd3_con;
2011-06-02 12:05:05.398 ddd3_con COMMIT;
2011-06-02 12:05:05.408 ddd3_con t3.pkey = 3

To tell you the truth, I don't know if SET CONNECTION is the same as, different from, better than or worse than the USE statement in Transact-SQL. If the SQL Server docs didn't suck so badly I would know the answer without testing... just a minute... OK, there, it's on my to do list :)


Wednesday, June 1, 2011

SQL Anywhere Functions On The Client Side

Question: How do I use SQL Anywhere's builtin functions like COMPRESS() and ENCRYPT() on the client side of an application?

Answer: The short answer is "You can't!", according to this response.

The long answer is,

  • if you're willing to start an empty local database using the SQL Anywhere personal server on the same machine that's running your client application

  • and you're willing to wave your arms and declare "This embedded database shall henceforth be regarded as part of the client side!"

  • then yes, you can invoke COMPRESS() and ENCRYPT() without passing the data back and forth across the network to your "real" database.
First, here's an all-SQL demonstration of the four functions that are going to be invoked on the client side:

BEGIN
DECLARE @input LONG VARCHAR;
DECLARE @compressed LONG BINARY;
DECLARE @encrypted LONG BINARY;
DECLARE @decrypted LONG BINARY;
DECLARE @output LONG VARCHAR;
DECLARE @key LONG VARCHAR;

SET @key = '4wlPnLAs34cHUlgSVLUUbIzany3G3RkXQOjouGo';

SET @input = 'Hello, World!';
SET @compressed = COMPRESS ( @input, 'zip' );
SET @encrypted = ENCRYPT ( @compressed, @key, 'AES256' );
SET @decrypted = DECRYPT ( @encrypted, @key, 'AES256' );
SET @output = CAST ( DECOMPRESS ( @decrypted, 'zip' ) AS LONG VARCHAR );

SELECT @input, @compressed, @encrypted, @decrypted, @output;
END;

The SET statements following the @key assignment compress a string, then encrypt it, decrypt it and decompress it to show the final output is the same as the original input:

@input,@compressed,@encrypted,@decrypted,@output
Hello, World!,0x789cf348cdc9c9d75108cf2fca495104001f9e046a,0x652c6bb5bf26f146592932352d63e0976ee8ed4b5f59f19682664d42c3a0a5d8543b4aceda8e98ae5875e0f2eef8f029,0x789cf348cdc9c9d75108cf2fca495104001f9e046a,Hello, World!

Here's the output from a PowerBuilder application that does the same thing:



The client-side PowerBuilder code is quite a bit more verbose than the SQL. First of all, the local database must be started and a connection established; in this example the Window Open event is used to do that:

// Auto-start local database with auto-stop enabled.

gtr_local = CREATE u_tr_sqlca

gtr_local.DBMS = 'ODB'

gtr_local.DBParm &
= "ConnectString='Driver=SQL Anywhere 12;" &
+ "UID=dba;PWD=sql;" &
+ "START=C:\Program Files\SQL Anywhere 12\Bin64\dbeng12.exe -o C:\data\dbeng12_log_local.txt -os 10M -x none -zl -zp -zt;" &
+ "DBF=C:\data\local.db'," &
+ "ConnectOption='SQL_DRIVER_CONNECT,SQL_DRIVER_NOPROMPT'"

CONNECT USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', 'CONNECT USING gtr_local failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF

// Create a temporary table for holding blob data.

EXECUTE IMMEDIATE 'CREATE LOCAL TEMPORARY TABLE blob ( pkey INTEGER PRIMARY KEY, data LONG BINARY )'
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'CREATE failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF

// Insert a row for use by subsequent UPDATEBLOB statements.

INSERT INTO blob VALUES ( 1, '' )
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'INSERT failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF


The first block of code uses a DSN-less connection to auto-start the local database C:\data\local.db.

The CREATE LOCAL TEMPORARY TABLE and INSERT statements create and initialize a local temporary table used to pass a single LONG BINARY value from step to step in the PowerBuilder application; this table is used to get around PowerBuilder's inability to handle blob values as easily as other data types.

The application uses two global variables: gtr_local maintains a separate connection to the local database and gs_key holds the key used by the ENCRYPT and DECRYPT functions.

u_tr_sqlca gtr_local
string gs_key = '4wlPnLAs34cHUlgSVLUUbIzany3G3RkXQOjouGo'

The u_tr_sqlca data type used to define gtr_local is a user-defined data type based on the standard PowerBuilder transaction object; in simple terms, gtr_local is a separate copy of the SQLCA communication area.

COMPRESS()


Here's the code for the "Compress" button clicked event:

blob lblob_compressed

// Erase the output fields.

sle_compressed.text = ''
sle_encrypted.text = ''
sle_decrypted.text = ''
sle_output.text = ''

// COMPRESS() the input into a local blob variable.

SELECTBLOB COMPRESS ( :sle_input.text, 'zip' )
INTO :lblob_compressed
FROM dummy
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'SELECT COMPRESS failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF

sle_compressed.text = "Compressed OK: length = " + String ( len ( lblob_compressed ) )

// Save the local blob variable in the local blob table.

UPDATEBLOB blob
SET blob.data = :lblob_compressed
WHERE blob.pkey = 1
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'UPDATEBLOB failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF

The PowerBuilder SELECTBLOB statement returns the compressed value into the local variable lblob_compressed.

The UPDATEBLOB statement stores that value in the local temporary table for use in the next step... if there is no "next step" and all you need is the compressed value, the UPDATEBLOB is not required.

Neither the local table nor the UPDATEBLOB statement should ever be necessary, but it appears that the SELECTBLOB statement in the next PowerBuilder script can't handle a blob being passed to it as a host variable in the SELECT list as well as being returned to another host variable in the INTO clause.

ENCRYPT()


The code for the "Encrypt" button clicked event is quite similar except that the SELECTBLOB statement works on the blob.data column passed from the previous script via the local temporary table rather than a host variable like :sle_input.text in the previous script:

blob lblob_encrypted

// ENCRYPT() the saved blob column into a local blob variable.

SELECTBLOB ENCRYPT ( blob.data, '4wlPnLAs34cHUlgSVLUUbIzany3G3RkXQOjouGo', 'AES256' )
INTO :lblob_encrypted
FROM blob
WHERE blob.pkey = 1
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'SELECTBLOB ENCRYPT failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF

sle_encrypted.text = "Encrypted OK: length = " + String ( len ( lblob_encrypted ) )

// Save the local blob variable in the local blob table.

UPDATEBLOB blob
SET blob.data = :lblob_encrypted
WHERE blob.pkey = 1
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'UPDATEBLOB failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF


DECRYPT()


The code for the "Decrypt" button clicked event is almost identical, SELECTBLOB followed by UPDATEBLOB, except that it calls DECRYPT instead of ENCRYPT:

blob lblob_decrypted

// DECRYPT() the saved blob column into a local blob variable.

SELECTBLOB DECRYPT ( blob.data, :gs_key, 'AES256' )
INTO :lblob_decrypted
FROM blob
WHERE blob.pkey = 1
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'SELECTBLOB DECRYPT failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF

sle_decrypted.text = "Decrypted OK: length = " + String ( len ( lblob_decrypted ) )

// Save the local blob variable in the local blob table.

UPDATEBLOB blob
SET blob.data = :lblob_decrypted
WHERE pkey = 1
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'UPDATEBLOB failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF


DECOMPRESS()


The code for the "Decompress" button clicked event is simpler because there is no "next step" so there's no need for a local blob variable:

// DECOMPRESS() the saved blob column into the display field.

SELECT CAST ( DECOMPRESS ( blob.data, 'zip' ) AS LONG VARCHAR )
INTO :sle_output.text
FROM blob
WHERE blob.pkey = 1
USING gtr_local;

IF gtr_local.SQLCODE <> 0 THEN
MessageBox ( 'Error', &
'SELECT DECOMPRESS failed:' &
+ '~r~nSQLCode = ' &
+ String ( gtr_local.SQLCode ) &
+ '~r~nSQLDBCode = ' &
+ String ( gtr_local.SQLDBCode ) &
+ '~r~n' &
+ gtr_local.SQLErrText )
RETURN
END IF

Many other client-side development environments suffer from similar limitations when it comes to handling blobs, different from PowerBuilder's but just as annoying. However, SQL Anywhere builtin functions that don't involve LONG VARCHAR and LONG BINARY arguments and return values (and there are lots of those) shouldn't bump up against these annoyances, and they should be easier to implement on the client side.