Monday, December 28, 2009

The beta is awesome! (2)

It's a big step for Foxhound when a new version of SQL Anywhere is released.

First, Foxhound has to compile on Version 12, then it has to run properly against target databases created with versions 5.5 through 11.

What is Foxhound? It's a database monitor and schema display utility for SQL Anywhere databases, currently in development with its own beta planned for 2010.
Foxhound also has to recognize target databases created with version 12, and then it has to work properly with those target databases... it's one thing to get Foxhound to stop rejecting "12" as a database version number, quite another matter to get Foxhound to handle version 12's enhancements and behavior changes.

But, first things first: Foxhound compiles cleanly on the SQL Anywhere version 12 beta with no code changes, and it passes a simple smoke test.

Woohoo! That's no mean feat... virtually all of Foxhound is written in SQL...
  • 80,000 lines of rather funky Watcom SQL, up from 60,000 lines a year ago,

  • 200 web services, procedures, events and triggers,

  • 340 tables and views with 2600 columns, plus

  • 1,300 lines of C in 9 external procedures.
So, thank you, iAnywhere Solutions, for (a) not shipping broken code and (b) not breaking my code!

Tuesday, December 22, 2009

When will SQLA go live?

The SQL Anywhere question & answer site SQLA will probably go live sometime in 2010, possibly before the US midterm elections (what, isn't EVERYONE waiting to see what happens on November 2? :)

Here are some points to consider:

...and maybe there will be a Beta 8, who knows?
  • There are a lot of little bugs in the underlying software; you've probably seen them and been just as irritated by them as I have. Feel free to report them, but if you do, you'll probably see they've already been discussed... which is why I don't bother sweating the small stuff.

  • There are quite a few features that are "stubbed out" (not yet implemented) in the administration areas "admin", "mod" and "tools". The big one, IMO, is called "Download Your StackExchange Database". That's huge. SQLA will never go live without that one.

Not ever.

Allow me to elaborate: The creator of StackExchange just lost his entire blog because he didn't take responsibility for backing up his own data. SQLA is not my data, it is your data. It's one thing for Jeff Atwood to be careless with his own data, that's his problem. It would be an entirely different matter for me to take the same risks with your data. I don't care how safe the Fog Creek infrastructure is... blah blah blah cloud computing blah blah blah.

Until there is a reasonably convenient and absolutely reliable way for me to back up SQLA content, it ain't ever going live.

Here's what I really think: The backup problem will go away long before StackExchange goes live, and we'll be able to "pump up the volume" (invite more participation in SQLA) sometime in January or February.

That will happen before SQLA goes "live"... it will still say "govern yourself accordingly", but it'll be a lot more fun!

Saturday, December 19, 2009

The beta is awesome!

I'm probably violating some part of the EULA just by saying that, but I don't care.

Folks who join the SQL Anywhere 12 beta can read my article "Top 1 Cool New Features In SQL Anywhere 12" in the beta newsgroup... that's where I talk about how much adjective deleted noun deleted are.

Then there's the follow up "Top 2 Cool New Features In SQL Anywhere 12" where I discover the new noun deleted feature.

The best part, it hasn't verb deleted, not once!

Tuesday, December 15, 2009

Good Luck Glenn!

Good luck Glenn, on today's webcast....

Object Relational Mappers: Friend or Foe?

Join the SQL Anywhere experts for a deep dive discussion on Object Relational Mappers.

Object relational mappers (ORMs) such as LINQ, Hibernate, and ActiveRecord bridge the gap between the relational database world and the object-oriented world. By abstracting the database into "virtual database objects", they let programmers develop in any language and environment that they like without ever writing a line of SQL. It sounds great, but is the ORMs' sweet song actually a siren's call? This talk will put ORMs on trial to help us find the answer.

Presenter: Glenn Paulley, Director of Engineering

Date: Tuesday December 15th
Time: 7am PST / 10am EST / 3pm GMT / 4pm CET
or 11am PST / 2pm EST / 7pm GMT / 8pm CET

Thursday, December 10, 2009

Why SQLA Exists

The SQL Anywhere question and answer website SQLA exists for questions like this

What particular statistics should one look at to help decide the best database page size to use?
and answers like this.

That doesn't mean you can't ask questions like "Where does the semicolon go?", those are perfectly fine and will be answered quickly and politely... but you've gotta appreciate the gems!

Monday, December 7, 2009

Object Relational Mappers: Friend or Foe?

Folks who know me, even just a little, know where I stand on the subject of Object Relational Mappers: Friend or Foe?

For everyone else, here's a hint: ORMs are like this Safety Device for Walking on Icy Pavements...

When you slip on ice, your foot kicks paddle (A), lowering finger (B), snapping turtle (C) extends neck to bite finger, opening ice tongs (D) and dropping pillow (E), thus allowing you to fall on something soft.

Which brings us to this webcast...

...I can't wait to hear what Glenn has to say!

Object Relational Mappers: Friend or Foe?

Join the SQL Anywhere experts for a deep dive discussion on Object Relational Mappers.

Object relational mappers (ORMs) such as LINQ, Hibernate, and ActiveRecord bridge the gap between the relational database world and the object-oriented world. By abstracting the database into "virtual database objects", they let programmers develop in any language and environment that they like without ever writing a line of SQL. It sounds great, but is the ORMs' sweet song actually a siren's call? This talk will put ORMs on trial to help us find the answer.

Presenter: Glenn Paulley, Director of Engineering

Date: Tuesday December 15th
Time: 7am PST / 10am EST / 3pm GMT / 4pm CET
or 11am PST / 2pm EST / 7pm GMT / 8pm CET

Monday, November 30, 2009

RIVA The Robot In The News

RIVA The Robot was mentioned here back in July...

RIVA stands for Robotic IV Administration, it uses SQL Anywhere for its database, and it's back in the news again: Should Pittsburgh pharmacies purchase robots to prepare your IV antibiotics and chemo drugs?

Sunday, November 22, 2009

SQLA, Two Weeks Later

It has been two weeks since the SQL Anywhere Question and Answer website SQLA was announced here, and it's been one week since the first status report.

Another slow(er) week

The total number of questions rose from 25 to 44. That's less than double, indicating that posting activity was slower in the second week. Also, a significant number of questions were of the administrative variety, so nobody better get excited about SQLA being crushed by its own success or anything like that.

Not yet, anyway.

The number of users rose from 29 to 35... nope, not exciting yet.

But they are good users, and they've been asking some great questions, more on that later.

Here are the StatCounter numbers for the nine full days since the hidden JavaScript thingie was plopped onto the "Sidebar Low" area (which sits at the bottom-right of some - not all - of the SQLA page displays):

Top Four Questions

Here are the top four new questions from SQLA's second week of operation:
Troubleshooting High Temporary File Usage

How does SQL Anywhere pass parameters?

Product Suggestion: Please implement record variable type

What’s the size of your biggest database?
Here's the top five from the first week:
Why should I use SQL Anywhere for my next project?

Is there already an iAnywhere position on this new website?

Does parameter passing degrade stored procedure performance?

What is the best refresh strategy for materialized views?

What is the best way to simulate Oracle Autonomous Transactions in SQL Anywhere?
Is it just me, or are some of the questions and answers starting to look more like magazine articles or blog posts than tech support cases?

There's room for both, methinks. There's room for pretty much anything that's relevant.

Development is proceeding. Honestly. Trust me.

The FAQ still hasn't been updated. Instead, a bunch of questions tagged "faq" have been posted, 14 at last count. So it's not like there's no work being done, just not any hard work :)

Here are a few of the more significant ones:
Poll: Is it OK for vendors to post questions?

What is the backup and recovery strategy for SQLA?

Exhortation: Do not mark anything “community wiki”

It's hard being a tag

Tag population control is in effect at SQLA.

SQLA has its own Search facility, plus there's always Google, so there doesn't seem to be much point in defining a bazillion different tags for the purpose of improving search. Plus, it's really hard work (ask anyone who's had to create a book index), and very few people do a good job (ask me about the mediocre job I did on my book).

So, when a tag gets used more than once, it's going to get looked at to see if it should be renamed, eliminated, merged with another tag, etcetera, all with the goal of using tags to classify or categorize the question.

Users are completely free to define new tags, five per question if they want (the software-imposed limit). It's only when a tag gets used on more than one question that it becomes a candidate for euthanasia, er, revision.

BTW, there is a new "SQLA" tag (category, label, whatever) on this blog so you can see all the posts about SQLA at once.

Training begins for the backup administrator

A volunteer for the post of backup SQLA administrator has been accepted, and training begins on Tuesday. Then, while the primary administrator lies face-up by the pool, the backup administrator will be responsible for keeping SQLA free from evil-doers.

For that role the backup administrator will be wearing a supersuit, with cape.

The SQLA Lottery

The pirate chief, er, primary administrator has started giving out random bounties of 100 reputation points, for no particular reason other than it's important for people to have points.

Three such bounties have already been handed out (which accounts for the fact that the administrator is no longer in first place in the points) and two bounties remain available... essentially for the asking.

One prediction, one boast

SQLA activity will not increase significantly any time soon, and might even decrease:
  • It's all in beta, the software and the site, and some people don't do beta.

  • We're in one of those no-real-work-gets-done-for-weeks-on-end periods, this one tagged "Thanksgiving-Christmas-New-Years".

  • SQLA has not been officially announced; i.e., nobody's promoted it on the NNTP forums.
But... SQLA will eventually be recognized as perfectly suited to the task, of that there is no doubt.

Saturday, November 21, 2009

Cloud Computing with SQL Anywhere

Apparently I don't check my email very carefully because I missed the original announcement of this webcast:

Cloud and Multi-Tenant Architectures with SQL Anywhere Webcast

SQL Anywhere supports a variety of capabilities that ease the shift from an on-premise database server to a hosted server environment. This session will examine different architectural approaches for designing and developing multi-tenant applications to be hosted in a SQL Anywhere Server. Additionally, several operational considerations for hosted SQL Anywhere environments will be discussed.

Presenter: Chris Kleisath, Sr. Director of Engineering

Date: Tuesday, November 24, 2009
Time: 7am PST / 10am EST / 3pm GMT / 4pm CET
or: 11am PST / 2pm EST / 7pm GMT / 8pm CET
Yes, that's this Tuesday coming up. Click here to register.

If you can't attend, you can probably get most of the material from the five-part series starting here: Multi-Tenant Database Architecture – Part 1.

But... there's always a "but"... if you can watch the webcast you should watch the webcast because Chris is always entertaining when he speaks.

Friday, November 20, 2009

What is the backup and recovery strategy for SQLA?

Nobody wants another Magnolia.

Here's the current situation: The SQLA question & answer site for SQL Anywhere runs on StackExchange software and is hosted by Fog Creek. According to an email from Joel Spolsky: "It is actually the same as Fog Creek hosting (it’s on the same infrastructure as what is described in that article)."

That takes care of part of SQLA's backup and recovery strategy: We trust Fog Creek, they operate on a far higher plane than Magnolia ever did.

(Magnolia was named after the Greek god of cluelessness, and it has now been re-launched as "Gnolia" which is Latin for "missed business opportunity".)
The second part of SQLA's strategy is the "verify" part of "trust but verify". It is expected that before the StackExchange service goes live (it is in Beta now, just like SQLA is), the "Download Your StackExchange Database" site management option will be implemented.

Having said all that, there are gaps, like "Will it be possible to upload the StackExchange data we downloaded?"

And, "How would we migrate to a different hosting service?"

Don't know, do care (very much), too early to know the answers.

Wednesday, November 18, 2009

Come to Candia!

Here's an email I got Monday from the International Sybase User Group.

I am sure everyone from the USA is welcome, not just folks from the nothern part.

Caution: Candia can be cold in December so bring a sweater!

Dear Sybase Enthusiast,

Please join us on December 1st and 2nd, 2009 for the ISUG Canadian Technical Summit to be held in the Candian Capital - Ottawa!

For the first time, Ottawa will be hosting a two day event drawing together many Sybase professionals and 3rd party vendors from Canada and the nothern USA. This event will have two streams - Data Management and Development. You will be able to register for either one or better yet, make sure that you bring a colleague with you so that you can cover both agendas. There is even a planned "cocktail reception" on the Tuesday night (Dec 1st).

The event will cover subject areas such as: SMS messaging from PB 12.NET, Taking your PowerBuilder applications to the Web; Data Warehousing, Replication, Sybase and SAS, Data Modeling, Database Monitoring and Performance, ASE and PB tools, Product Futures, Mobility, etc just to highlight some key areas.

On the Development side we will see evangelist John Strano showing us the latest PB 12.NET from engineering, best practices and another training session. For Data Management we will have key Sybase personnel addressing key product futures, Mobility and presentations on comparing various Data Modeling approaches. You will have access to key Sybase technical staff from Canada and the USA to ask questions about Sybase products, interact with users utilizing the same technologies and gain key information into 2010 based products and features.

For more information and to see a complete list of topics covered, please see our registration page. You'll notice that to register you must be a member of the International Sybase User Group (ISUG). You can become a member directly at or by joining the Ottawa Sybase User Group (OSUG) at All members of OSUG are automatically members of ISUG as well! You can register with a free Associate Member account, but we encourage you to become a full Regular Member if you can.

For questions about and support for this event, you can contact ISUG's North American Regional User Group Director and OSUG President, Chris Pollach ( Chris will be happy to answer any questions you may have and assist you with registering if necessary.

We look forward to seeing you in December!

The ISUG Events Team

Sunday, November 15, 2009

SQLA, One Week Later

As of 7:20 AM EST, Sunday November 8 2009, the SQLA question and answer website was up and running and announced, making today the one week anniversary.

For you history buffs, work on SQLA began around 1 or 2 AM on November 8... seriously, six hours from project inception to live beta, it's that easy to set up a StackExchange site. Before 1 AM it was just a to-do: "try out StackExchange beta".

In fact, the site was created from scratch twice, once using the wrong site name (gotta respect those trademarks!)

Here are the usage numbers

According to the SQLA software itself, there are now
  • 29 registered users,

  • 25 questions,

  • 46 answers and

  • 42 comments.

According to StatCounter, for the one full weekday that it was told to accumulate SQLA statistics (Friday), there were:
  • 532 page views,

  • 83 unique visitors and

  • 40 returning visitors.
Considering that there has been no mention of SQLA on any of the NNTP forums yet, I think the numbers are... low? high? just right?

Top Five Questions

Chosen subjectively by a panel of one judge, based on content and activity:

Why should I use SQL Anywhere for my next project?

Is there already an iAnywhere position on this new website?

Does parameter passing degrade stored procedure performance?

What is the best refresh strategy for materialized views?

What is the best way to simulate Oracle Autonomous Transactions in SQL Anywhere?

Comments are being used as a substitute for threaded discussions

For many years I have clung to the idea that threaded discussions are absolutely necessary for having, well, discussions. One hundred percent of the HTTP-based implementations of threaded discussions suck, when compared with NNTP (newsgroups). That includes purely-web-based discussion sites, all the web interfaces to newsgroup servers, even the re-incarnation of Google Groups. They are either slow, or unusable, or both.

Only classic port 119 (NNTP) clients like Forte Agent work well for threaded discussions.

Or so I believe(d).
(If there was a page for "Late Adopter" in Wikipedia, it would have my picture.)
The trouble is, all the world's firewalls are closing down port 119. And new features are coming to the web, not newsgroups.

So, now we have SQLA. Wonderful interface, fast enough, but... no threaded discussions.

Or are there? It is starting to look like being able to
  • post multiple answers,

  • and post multiple (albeit short) comments on each answer,

  • AND post comments on the question itself,

  • as well as editing the answers and even the question,
is a pretty [expletive deleted] good alternative to threaded discussions.

Well, I'm not sure about the "editing the answers and even the question" part, that's not really happening yet and I'm not sure it should (see later point "Shy people are still shy").

Out there in the blogosphere people are having endless discussions using comments, with and without cheesy "nesting" as a substitute for threading, and the overwhelming volume of traffic says it's working. Here at SQLA, the multiple-answers-multiple-comments features should be much better/easier than that.

Plus, it's easy to type a twitter-style "@username:" prefix on your comment, the cyberspace equivalent of the meatspace shout "Hey, Joe, I think we might have a solution for you, come on over, let's talk!"

Maybe it will work, maybe it won't. I think it will. I think it will work very well for "How do I..." questions, and even better for discussions about future product features... possibly even better at that than the NNTP forum.

We'll see.

Will SQLA replace the NNTP forums? Don't know, don't care. It's like cellular, some people still use radio telephones ... honestly, I didn't know that either, but it's true, and they have reasons (like, "it's the law", stuff like that).

Will Breck still answer questions on the NNTP forums? Sure will! That's the point, answering the questions... not having a protocol war, but helping people.

What will happen when Sybase/iAnywhere implements a modern web-based community site? That's a good question ... [struggle] [microphone drops] ... We have no comment on that at this time.

Voter turnout is low

A total of 30 up-or-down votes have been cast. That represents a voter turnout of 1.5% according to the following absurd formula:
  • each user can vote each question up or down, except their own

  • each user can vote each answer up or down, except their own

  • there are 29 users, 25 questions and 46 answers

  • which comes out to a theoretical maximum of 29*25 - 1*25 + 29*46 - 1*46 = ...

  • wait for it...

  • 1988 possible votes

  • or 30/1988 = ...

  • 1.5% voter turnout
That formula is absurd because it doesn't account for
  • the fact that not every user looks at every question and answer

  • the fact that users don't necessarily form an opinion about every thing they read

  • and the fact that down-votes should probably be rare, perhaps almost non-existent.
Still, 1.5% feels low. Voting is a powerful feature for making SQLA better.

Maybe 3% turnout is a goal worth seeking. Plus, getting more folks who ask questions to check the answer they like the best. And getting folks to click the "Favorite" star.
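For anyone who wants to check my arithmetic, here's the absurd formula as a throwaway Python sketch, using the counts quoted above (29 users, 25 questions, 46 answers, 30 votes):

```python
users, questions, answers, votes_cast = 29, 25, 46, 30

# Every user may vote on every question and answer except their own posts;
# model "except their own" as one ineligible voter per question and answer.
eligible_votes = (users - 1) * questions + (users - 1) * answers

turnout = votes_cast / eligible_votes
print(eligible_votes)    # 1988
print(f"{turnout:.1%}")  # 1.5%
```

Same answer: 1,988 possible votes, 1.5% turnout.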

Early adopters are behaving well

Nobody has voted anything down into negative-points territory, or at least nothing's been voted down past zero points. That's OK with me.

Nobody has edited anything except their own stuff, at least as far as I can tell.

In fact, nothing bad is happening at all. That is *not* a surprise.

Shy people are still shy

I don't know of any way to *guarantee* people will become less-shy about asking questions, but there are an infinity of ways to guarantee people will *not* participate. You can see some of those morale-destroying techniques in action over at StackOverflow. The software is wonderful but a significant minority of the active StackOverflow participants are arrogant [expletive deleted]s ... my [insincerity alert] apologies in advance, but there's no other way to put it.

That won't happen at SQLA, it's moderated.

But the real reason it won't happen is this: The SQL Anywhere user community consists of folks who treat each other with respect.

So far, "SQLA Moderator" has been the easiest job in the world. It's probably illegal to have this much fun.

And finally...

Here is StatCounter's view of where SQLA page views came from for the last couple of days.

Question: Did the user on the shores of Hudson Bay at the southern border of Nunavut answer a question asked by the user at the end of PA279 near the Xingu river and São Félix do Xingu in Pará, or was it the other way around, or neither case?

Wednesday, November 11, 2009

Tell your friends about SQLA

If y'all want SQLA to succeed, more participation is the answer. The Beta is open for everyone, and it's up to you to tell other people.

Use your own words, or use/edit the "official announcement" below. Post it on the NNTP forums, send emails, whatever you want.

Your friends and colleagues will thank you!

Here's the text of the announcement...

New website for SQL Anywhere questions and answers

One of the founding members of Team Sybase has started a moderated community-based website for questions and answers about SQL Anywhere and related topics:

SQLA is hosted by the StackExchange service and it uses the same software as StackOverflow.

SQLA is different from StackOverflow in several ways: in particular, discussions are encouraged and SQLA is moderated.

In other important ways, SQLA is the same as StackOverflow: it is (soon will be) Google-searchable, it will always be free to use, the reputation points and badge systems are the same, and free speech is protected (within the bounds of moderation).

(If you've never heard of StackOverflow, you're in for a real treat! It's the best thing since Google, as far as technical question-and-answer services are concerned.)

Caveat Emptor: SQLA is currently in early beta testing... VERY early beta, where "early beta" means "likely to change". Not only is the SQLA site in beta, but so are the rules of moderation (see the unfinished FAQ), and so is the StackExchange service and the software it provides. Also, note that SQLA is not owned by Sybase.


Tuesday, November 10, 2009


The stack at SQLA is off to a good start, at least by SQL-Anywhere-forum standards: 7 questions and 8 answers in its first day of operation, with only one announcement.

There's a bit of new material in the FAQ...

  • What kind of questions can I ask here?

  • How come other people can edit my stuff?

  • How does moderation work?
Etymology Alert: I don't know if I'm the first one to use "stack" to refer to a hosted site, but just in case I'm staking my claim here and now :)

Sunday, November 8, 2009

I'm a big fan of the rule "Don't announce a Beta until there's a Beta to announce" so here it is:

a StackOverflow-style question-and-answer website dedicated to SQL Anywhere,
in beta.

Be the second person to ask a question!

Here's what it says in the "How to Ask" box:
  • Is your question about SQL Anywhere?

  • Technical questions should start with "How do I [do some thing]?"

  • Provide the version and build: SELECT @@VERSION

  • Discussions are welcome! (this is NOT StackOverflow)
Yes, is going to be different, different from, and REALLY different from the NNTP newsgroups.
For more information, check out the first question and answer.

Here's a question that's not posted...

Q: Why the name "SQLA" instead of "SQLAnywhere"?

A: Let's just say we want to avoid any Imperial entanglements.

Friday, November 6, 2009


This little procedure's been floating around for years and years, and the source code sits in various inaccessible places like the NNTP newsgroups, the execrable CodeXchange and a sub-sub-sub-folder on my C drive called "miscellaneous".

Time to put it somewhere accessible: Google.

As in, "on an actual web page".

As in, here...

How to get full control over a SQL Anywhere server

CREATE PROCEDURE p_drop_other_connections ()

/* How to get full control over a SQL Anywhere server...

(1) CALL sa_server_option ( 'ConnsDisabled', 'ON' );

At this point, any attempt to open a new connection will
get "Connections to database have been disabled".
However, current connections will still work until...

(2) CALL p_drop_other_connections();

(3) Do whatever you need to do.

(4) CALL sa_server_option ( 'ConnsDisabled', 'OFF' );
*/

BEGIN

DECLARE @this_connection_id  INTEGER;
DECLARE @other_connection_id INTEGER;
DECLARE @drop_command        VARCHAR ( 1000 );

SET @this_connection_id  = connection_property ( 'number' );
SET @other_connection_id = NEXT_CONNECTION ( NULL );

WHILE @other_connection_id IS NOT NULL LOOP

   IF @other_connection_id <> @this_connection_id THEN
      -- DROP CONNECTION takes the connection number.
      SET @drop_command = STRING (
         'DROP CONNECTION ', @other_connection_id );
      EXECUTE IMMEDIATE @drop_command;
   END IF;

   SET @other_connection_id
      = NEXT_CONNECTION ( @other_connection_id );

END LOOP;

END;

Thursday, November 5, 2009

And yet ANOTHER beta (UltraLiteWeb)

If iAnywhere Solutions ever put on a standalone SQL Anywhere Developer Conference (they won't, alas), Eric Farrar would be enough of a "big name draw" to pack the place. Anyone else on the agenda would be icing on the cake.

Let me put it this way: If, after hearing an Eric Farrar presentation, you're not more enthusiastic about your job and about SQL Anywhere, you need medical attention.

Eric Farrar creates cool stuff. Eric Farrar makes SQL Anywhere dance and sing.

(Right about now, his co-workers are probably saying, "Look, Eric, Breck has a man-crush on you!")
Not true. If I was going to have a man-crush on anyone, it would be Brad Pitt.


You don't need to wait for any conference, you can see the latest of Eric's offerings here... UltraLite replacing SQLite in Google Gears... a browser-side SQL Anywhere database, with MobiLink synchronization. Woohoo!

Wednesday, November 4, 2009

Sybase Smooths Enterprise Path for IPhones

In PC World...

A REAL Database For The iPhone

The SQL Anywhere for the iPhone Beta has started; sign up here.

Soon, there's going to be a "Tutorial: Build an iPhone application" on DocCommentXChange, alongside the current Tutorial: Build a BlackBerry application...

Guessing Game

What will the DocCommentXChange URL be for "Tutorial: Build an iPhone application", when it goes GA?

Here's my guess:

Here's my second guess:

What's your guess?

Monday, November 2, 2009

Boot-Time Defragmentation

I wish I had lots and lots of money to buy new equipment every few months, but I don't, never have, so I tend to use computers long past their "Best Before" dates.

Like the HP Pavilion zd8000 laptop I'm working on right now.

Yes, in these modern days of 64-bit Windows 7 and Core 2 Duo processors and giant drives, I'm running Windows XP on a 3.4GHz Pentium 4 with 2G (gasp!) of RAM and a 100G drive. It's my "main machine", the one I lug around to clients, the one I do almost all of my work on. With seven different versions of SQL Anywhere, plus Oracle 10g, SQL Server 2008, Visual Studio, blah blah blah... hundreds and hundreds of thousands of files.

And yes, my computer runs hot. Much too hot to hold in my lap. I even carry a little portable fan with me, to keep it from overheating and shutting down. And battery life sucks, less than one hour on a brand-new battery. But it's fast, fast, fast...

Over time, as more and more software is installed and removed, more files created and destroyed, performance on this laptop starts to slip. Such was the case this weekend when it became just too unbearable to wait 10 or 20 seconds for Wordpad to open a tiny text file.
Memories of mainframe, memories of 4.77 MHz.
No, there are no viruses or spyware or rootkits running; the router and firewall and Norton 360 and Windows Update take care of that. No apparent disk fragmentation either; Diskeeper takes care of that (every time the screen saver kicks in, Diskeeper does some defragmentation).

I say "apparent" because if you go hunting around in the Diskeeper menus, you see there's a feature called "boot-time defragmentation"...

Tip: Don't check "Produce a summary log file" if you don't have a locally-attached drive that will be available during the boot period, other than the drive being defragmented. In other words, no network drives and (I think) no USB drives. Diskeeper won't warn you, but the boot-time defragmentation will skip most of the steps and you'll have to start over. You can specify a floppy drive A:, but who has floppies any more? No summary log for me, my laptop only has the C: drive.
The Diskeeper docs promise that the boot-time defragmentation process will do a lot of work the regular defragmentation doesn't do, and that does seem to be the case. The reboot process took longer than an hour including before-and-after CHKDSK runs; the hard drive light was on solid for the whole time.

Here's part of the display output:

Boot-time Defragmentation Report

*** STAGE 0 ***
Stage skipped; MFT already defragmented.

*** STAGE 1 ***
Moved 3 files
Moved 1 attributes

*** STAGE 2 ***
Info: Paging file is not fragmented; skipping paging file processing
Located 335866 files...
Need 53421 clusters for folders & attributes

*** STAGE 3 ***
[xx%] Moving data file
Moved 12011 files from consolidation target areas

*** STAGE 4 ***
[xx%] Moving folder file
Moved 4417 folders to target area
Moved 5 attributes to target area

*** STAGE 5 ***
Moved 4 files
Moved 1 attributes

Contiguous Uncompressed Folders Moved: .......................... 4524
Contiguous Uncompressed Files Moved: ............................ 10805
Non-resident Attributes Moved: .................................. 10
Contiguous Compressed Folders Moved: ............................ 206
Contiguous Compressed Files Moved: .............................. 875
Fragmented Compressed Folders Moved: ............................ 1
Fragmented Compressed Files Moved: .............................. 24

Diskeeper boot-time processing completed.
Press ESC to reboot system.
Tip: When Windows asks to run CHKDSK because "One of your disks needs to be checked for consistency", let it run. Otherwise, Diskeeper will refuse to run any regular defragmentation steps and you'll have to reboot and let CHKDSK run.
After the boot-time defragmentation Diskeeper still showed a few "Pink - low performing system files"; not a lot, and far fewer than before. Another regular defragmentation run got rid of almost all the pink areas; now it's all beautiful "Blue - high performing files and folders"...

I don't have screenshots from before all this, but here's what it looked like beforehand:
  • No "Red - low performing files and folders" at all, probably because Diskeeper runs regularly during screen saver periods, but

  • A large number of "Pink - low performing system files" areas that did not go away no matter how many times I did full manual Diskeeper defragmentations.
So how's performance now? A lot snappier: files and folders open much more quickly, or so it seems. No hard numbers though :)
Tip: There's a wealth of information about Windows disk fragmentation on the Diskeeper site, including Myths of Fragmentation which includes this gem: "A freshly-installed operating system starts off with its files so badly fragmented that a simple defragmentation can decrease the boot time from 153.9 seconds to 111.8 seconds."

Friday, October 30, 2009

Navigation to the webpage was canceled

For many years, SQL Anywhere has used Microsoft Compiled HTML Help files (*.chm) to deliver all the manuals in one easy-to-read, easy-to-search package on your desktop, one that doesn't make you wander through endless menus and wait for slow-loading web pages.

These files are easy to copy, too, and sometimes folks copy the Help files from machine to machine without using the official SQL Anywhere setup.exe... and they zip them up and send them by email as well.

Suddenly, however, that process has become fraught with error: On the receiving end, when you open the *.chm file, you get "Navigation to the webpage was canceled" no matter which topic you navigate to:

Steven McDowell has the answer (and by the way, the problem DOES NOT AFFECT files installed by the official setup):

from Steven McDowell
to Breck Carter
date Wed, Oct 28, 2009 at 5:11 PM
subject Files downloaded from our FTP site can be "blocked" on XP/Vista/Win 7

Hi Breck,

FYI ... my investigation of your "blocked" CHMs seems to have determined what is going on.


It appears that a recent Windows security update may have further restricted the use of CHM files on XP and Vista. I don't know if other file types are affected by this, but I thought I would share the info since we often ZIP files and send them via FTP to customers and other staff.

Sneak peek: It actually appears to be an IE "feature". That's all you get here. See conclusion at bottom. :-)

It was reported that our CHM docs failed to open on an XP computer. The CHMs had been zipped here and copied via FTP. When they were unzipped, they were marked as "blocked" and the contents could not be viewed until each file was unblocked.

I have reproduced the problem on my Windows 7 system. I am an administrator but still have UAC turned on. Here is what I encountered:
* Unzipping to a local temp folder proceeds normally.
* When I try to open one of the CHMs, I get:
* If I leave the "Always ask" checkbox checked and click Open, the CHM opens, but the contents pane is blocked (with an unhelpful message). This behaviour is the same as trying to open a CHM from a network drive, but this is happening on a local drive.

* As long as the "Always ask" checkbox is checked, the contents pane is blocked for that file regardless of how many times you try to open it. Once I uncheck the box and click Open, not one but two things happen: 1) the contents pane is visible, and, 2) the dialog no longer appears. So something in the file's attributes has changed permanently.

* I picked another CHM file that had not been opened yet. I displayed its Properties. Note the Unblock button and the text to the left. This button and text are not displayed in the Properties for CHMs that were opened with the checkbox turned off. So, the "Always ask" checkbox has the dual purpose of suppressing the dialog and marking the file as unblocked.

Fortunately, it appears that using our doc install does not trigger this blocking problem. There is the possibility that other file types sent via ZIP (and probably other file copy methods) will also encounter these issues.

I did find one discussion (in the link below) that suggests it is IE that is noting that the files came from "the Internet" and marking them as blocked. I'm not sure how that works inside a ZIP file, but it appears to. Possibly non-IE transfers will not do this. CONFIRMED! I downloaded the same ZIP file via FTP using Firefox instead of IE and the files are not blocked!

Relevant URLs (Google "windows chm file blocked"):


Wednesday, October 28, 2009

Comparing Database Schemas, Improved Edition

The original version of this article contained this false statement: "These dbunload steps ... only work on Version 10 and 11 databases ..."

Well, not exactly false, but easily fixed so the technique will work on earlier database files: Versions 5.5, 6, 7, 8 and 9. It turns out that while the Version 11 engine can't run a Version 9 database file, that restriction does not apply to the dbunload utility. In order to facilitate database migrations, dbunload contains the ability to read earlier database files... and that ability extends to ordinary dbunload-to-a-SQL-file steps.

And that bit of magic makes it possible to compare all versions of databases. Here's the full rewrite of the original article:

Question: How do I compare the schemas of two SQL Anywhere databases?

Answer, from Bruce Hay at Sybase iAnywhere: Check out the new "-no" option for dbunload in SQL Anywhere 11 (unload objects in name order, no data). It's intended for this purpose.

Here's how it works:
  • Step 1: Run dbunload against the first SQL Anywhere database to create a text file containing CREATE TABLE and other schema definitions. The new -no option tells dbunload to unload just the schema, not the data, and do it in name order so it will be easier to compare with another text file sorted in the same order.

  • Step 2: Run dbunload again, on the second SQL Anywhere database, to produce another text file.

  • Step 3: Run a text comparison utility like Compare It! to show the differences between the two text files.
[ Keep reading, the new stuff is coming soon... ]
Here's why using dbunload is a good approach:
  • The dbunload utility doesn't miss anything; it unloads every single schema object in the database... tables, views, procedures, triggers, everything.

  • The dbunload utility is always going to be more up-to-date than any other software except the SQL Anywhere engine itself. In other words, even PowerDesigner falls behind dbunload in terms of supporting SQL Anywhere syntax for new features.

  • Running dbunload on actual databases guarantees you will be comparing the current database schemas, not out-of-date information stored in separate model files.

  • The text file comparison view presents differences in terms of actual database syntax, not some funky abstract logical view used by data modelling tools.
[ OK, here comes the new stuff... ]

But Wait! It's Not Quite That Easy!

How you code the "Run dbunload" parts of Steps 1 and 2 depends on what version of SQL Anywhere was used to create each database:
  • If the database was created with Version 5.5, 6, 7, 8 or 9 you have to run the Version 11 copy of dbunload.exe without starting the database first. Use the -c "DBF=filespec.db;UID=xxx;PWD=yyy" connection string so that the Version 9 engine inside dbunload.exe will be used to start the database. Yes, this is magic!

  • If the database was created with Version 10 or 11, you do have to start the database first, using the Version 11 engine dbeng11.exe or dbsrv11.exe, and then run the Version 11 copy of dbunload.exe with the usual client-server connection string -c "ENG=aaa;DBN=bbb;UID=xxx;PWD=yyy".
That's not just magic, it's also complicated and confusing. That's why I've created three examples you can copy and edit: Compare Version 10 and 11 databases using the second technique described above, then Version 5.5 with Version 9 using the first technique, and finally Version 5.5 with Version 11 using both techniques. These examples all use the demo databases that come with the various versions of SQL Anywhere.

Example 1: Compare Version 10 and 11 Database Schemas

This script is easily changed to work with two Version 10 databases, or two Version 11 databases.

REM Note: adjust the paths below to match your SQL Anywhere 11 installation.

REM Start V10 database via V11 engine

"%SQLANY11%\Bin32\dbeng11.exe"^
  -n demo10^
  "C:\Documents and Settings\All Users\Documents\SQL Anywhere 10\Samples\demo.db"


REM Unload V10 database

"%SQLANY11%\Bin32\dbunload.exe"^
  -c "ENG=demo10;DBN=demo;UID=dba;PWD=sql"^
  -no^
  -r c:\temp\demo10.sql


REM Start V11 database

"%SQLANY11%\Bin32\dbeng11.exe"^
  -n demo11^
  "C:\Documents and Settings\All Users\Documents\SQL Anywhere 11\Samples\demo.db"


REM Unload V11 database

"%SQLANY11%\Bin32\dbunload.exe"^
  -c "ENG=demo11;DBN=demo;UID=dba;PWD=sql"^
  -no^
  -r c:\temp\demo11.sql


REM Compare schemas

"C:\Program Files\Compare It!\wincmp3.exe"^
  c:\temp\demo10.sql^
  c:\temp\demo11.sql

Here's what Compare It! shows for one difference:

Compare It! can also produce a report in HTML format; just use File - Report... to get output like this:

Example 2: Compare Version 5.5 and 9 Database Schemas

This script is easily changed to work with two Version 9 databases, or any combination of Version 5.5 through 9.

REM Note: adjust the paths below to match your SQL Anywhere 11 installation.

REM Start and unload V5 database via V11 dbunload

"%SQLANY11%\Bin32\dbunload.exe"^
  -c "DBF=C:\Program Files\Sybase\SQL Anywhere 5.0\sademo.db;UID=dba;PWD=sql"^
  -no^
  -r c:\temp\demo5.sql


REM Start and unload V9 database via V11 dbunload

"%SQLANY11%\Bin32\dbunload.exe"^
  -c "DBF=C:\Program Files\Sybase\SQL Anywhere 9\asademo.db;UID=dba;PWD=sql"^
  -no^
  -r c:\temp\demo9.sql


REM Compare schemas

"C:\Program Files\Compare It!\wincmp3.exe"^
  c:\temp\demo5.sql^
  c:\temp\demo9.sql


Here's a Compare It! screen shot:

Example 3: Compare Version 5.5 and 11 Database Schemas

This script is easily changed to compare a Version 5.5 through 9 database with a Version 10 or 11 database.

REM Note: adjust the paths below to match your SQL Anywhere 11 installation.

REM Start and unload V5 database via V11 dbunload

"%SQLANY11%\Bin32\dbunload.exe"^
  -c "DBF=C:\Program Files\Sybase\SQL Anywhere 5.0\sademo.db;UID=dba;PWD=sql"^
  -no^
  -r c:\temp\demo5.sql


REM Start V11 database

"%SQLANY11%\Bin32\dbeng11.exe"^
  -n demo11^
  "C:\Documents and Settings\All Users\Documents\SQL Anywhere 11\Samples\demo.db"


REM Unload V11 database

"%SQLANY11%\Bin32\dbunload.exe"^
  -c "ENG=demo11;DBN=demo;UID=dba;PWD=sql"^
  -no^
  -r c:\temp\demo11.sql


REM Compare schemas

"C:\Program Files\Compare It!\wincmp3.exe"^
  c:\temp\demo5.sql^
  c:\temp\demo11.sql


This Compare It! screen shot shows where the schema starts to diverge; after this point, everything's different between the Version 5.5 and 11 demo databases:

Monday, October 26, 2009

DocCommentXchange 2.0 Rocks!

I will let Laura Nevin, Shengyu Bao, Andrew Arnott and Steven McDowell tell the story:

Sent: Friday, October 23, 2009 11:54 AM
Subject: DocCommentXchange 2.0 is now live

Hi Breck,

I think this is what you have been waiting for. It took way longer than expected, but I think the internal improvements were worth the wait.

Please email me if you have any problems or issues, or even comments about DCX.


----- Forwarded by Steven McDowell/SYBASE on 10/23/2009 11:53 AM -----

DocCommentXchange 2.0 is now running at

Main new features:
* based on SA11 web server and full text search (no Apache or PHP)
* preview area added to "Submit a comment" dialog; your comment is formatted and displayed while you type
* index is much faster now
* bookmarks for pages are more readable
Other notes:
* existing URLs remain valid
* existing comments are preserved
* existing userids remain valid
Steve, Laura, Andrew, Shengyu

Friday, October 23, 2009

How To Capture A Graphical Plan

(Updated August 3, 2012 to include Version 12, plus a screenshot of the Plan Viewer.)

Graphical plans are wonderful things. Graphical plans tell you everything you need to know about why a query is running slowly.

Even if you don't have a clue about how to read a graphical plan, it's still wonderful because you can capture it in a file and send it to someone who does know how to read it ( like Glenn Paulley ) or someone like [cough] who pretends to know :)

Seriously, let's say you have The Query From Hell, some SELECT that's taking forever. Maybe yesterday (on SQL Anywhere 9) it ran quickly but now (on SQL Anywhere 11) it runs like continental drift... or whatever, it's slow now.

Here are the steps for capturing the graphical plan, for Versions 8 through 12. At the bottom there's a "Tip" that is very important, but only for Version 10 and earlier.

How To Capture A Graphical Plan In Version 11 and 12

- If desired, run CALL sa_flush_cache() in dbisql.

- Type or paste your query into dbisql.

- Choose Tools - Plan Viewer.

- Select Statistics level: Detailed and node statistics.

- Select Cursor type: Asensitive.

- Select Update status: Read-only.

- Click "Get Plan".

- When the plan appears, click "Save As...".

- Use Save as type Execution plan (*.saplan).

Click on this screenshot to see it full size...

How To Capture A Graphical Plan In Version 10

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with full statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Click "OK".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

How To Capture A Graphical Plan In Version 9

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Click "Make Permanent".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

How To Capture A Graphical Plan In Version 8

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Uncheck "Show UltraLite plan" unless you want that.

- Click "Make Permanent".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

Tip for Version 10 and earlier: If you do use dbisql to execute a query and capture a graphical plan with statistics, use SQL - Get Plan (Shift+F5) instead of SQL - Execute (F5 or F9). If you use Execute, the query will be executed twice: once to determine the result set and a second time to capture the plan. This plan may be completely different from the one used the first time the query was executed, and not just because the cache contains the rows. For example, if the plan is for a DELETE, the first execution will actually delete the rows so the plan may be for the completely different "zero rows" case. If you use Get Plan (Shift+F5) the query will only be run once, and the plan will match.

Monday, October 19, 2009

"Search this blog, plus 5 others..."

The Google Custom Search Engine gadgets on this page have changed:

First of all, the original Search gadget has been changed from "Search this blog, plus Glenn Paulley's" to "Search this blog, plus 5 others": Glenn's plus the other four active SQL Anywhere blogs by Jason Hinsperger, Chris Kleisath, Eric Farrar and Tom Slee.

Second, a new gadget "Search this blog" has been added. This one is an "ego search", just for me, so I can search my old posts.

Third, the additional gadget uses the new Google "Custom Search Element" which displays the result set in an overlay on the same page instead of jumping to a separate search page:

Yeah, that's confusing: "Custom Search Element" sounds so much like "Custom Search Engine"... perhaps Google is taking steps to preserve the earth's rapidly diminishing supply of acronyms.

Friday, October 16, 2009

Product Futures: Grow In-Memory Cache Faster

Maybe I've been living a sheltered life, but until recently I've never seen SQL Anywhere have a Cache Panic. Not once, not in production, not in testing, not even while testing Foxhound which is supposed to display the number of Cache Panics on the monitor page.

What is a Cache Panic? It happens when SQL Anywhere needs to allocate a page in the cache but can't find one. Sounds bad, but what with dynamic cache sizing and the temporary file, I'd never seen one.

At one point, I considered removing Cache Panics from the Foxhound monitor page to save valuable screen real estate (if you've seen the Foxhound monitor page, you know what "crowded" means when it comes to displays :) However, a wise voice counseled me to leave it alone, saying that it's an important statistic.

Not important enough, however, for me to include a Cache Panics threshold in the Foxhound email alerts feature... that was a bad decision, about to be fixed... because I've just seen my first Cache Panics, and believe me, they are bad:

What the Foxhound figures above don't show is how badly performance can be affected when Cache Panics are happening. When you start a SQL Anywhere database using the in-memory feature (dbsrv11 -im nw in this case), there is no temporary file, and it doesn't write anything to the database file, so SQL Anywhere has no place other than RAM to put a page of data. If the RAM cache is full, SQL Anywhere has to allocate more RAM.

It gets worse: If you don't specify the -c initial RAM cache size value when you start the server, and if you perform some intense operation like LOAD TABLE a million rows, SQL Anywhere will have to allocate RAM at a furious pace... all the while Cache Panicking when it can't find a page to allocate.

Who cares? If performance is important, you care. In this case adding -c 1200M to the dbsrv11 command line improved LOAD TABLE performance by 165%, from 2,878 rows per second to 7,640 rows per second.
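To make that concrete, here's a hedged sketch of the kind of command line involved; the install path, drive letters and database file name are placeholders, but the -im nw and -c 1200M options are the ones discussed above:

```bat
REM Sketch only: paths and database file name are placeholders.
REM -im nw   = run in-memory in "never write" mode: no temporary file, no writes to the database file
REM -c 1200M = pre-allocate a 1.2G cache, instead of letting the server grow it
REM            in tiny increments while LOAD TABLE is pounding away
"%SQLANY11%\Bin32\dbsrv11.exe" -im nw -c 1200M c:\data\mydb.db
```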

Grow In-Memory Cache Faster

Here's what the server console log looked like when it had to grow the cache a zillion times, up to 10 times per second:

I. 10/14 15:29:15. Now accepting requests
I. 10/14 15:29:18. Cache size adjusted to 2340K
I. 10/14 15:29:23. Cache size adjusted to 3180K
I. 10/14 15:29:28. Cache size adjusted to 3392K
I. 10/14 15:29:33. Cache size adjusted to 3488K
I. 10/14 15:30:14. DIAG 2009-10-14 15:30:14.421 load table started
I. 10/14 15:30:14. Cache size adjusted to 3748K
I. 10/14 15:30:14. Cache size adjusted to 4012K
I. 10/14 15:30:14. Cache size adjusted to 4276K
I. 10/14 15:41:23. Cache size adjusted to 1151416K
I. 10/14 15:41:23. Cache size adjusted to 1151676K
I. 10/14 15:41:23. Cache size adjusted to 1151936K
I. 10/14 15:41:23. Cache size adjusted to 1151724K
I. 10/14 15:41:23. Cache size adjusted to 1152144K
I. 10/14 15:41:23. Cache size adjusted to 1152404K
I. 10/14 15:41:24. Cache size adjusted to 1152668K
I. 10/14 15:41:24. DIAG 2009-10-14 15:41:24.343 load table finished in 669.0 seconds
I. 10/14 15:42:23. Cache size adjusted to 1152388K
I. 10/14 15:42:27. Cache size adjusted to 1152800K

Most of the time, growing the cache step-by-step works just fine... but not when the database is growing rapidly and it's all stored in memory.

The Suggestion...

So here's the Product Futures suggestion: When a SQL Anywhere database is running in-memory, and a lot of Cache Panics are happening, and it's having to grow the cache ten times a second, then the dynamic cache sizing algorithm should pick some increment other than 64 pages when allocating more RAM.

That way, we won't have to fiddle around trying to pick exactly the right -c initial cache size.

...Or, Maybe Not

It's a little-known fact (well, I didn't know it) that picking a reasonable value for -c is a good idea regardless of the cache growth performance issue described here.

For example, if you expect the cache to grow to some size between 1G and 2G, you should specify a value like -c 1G instead of letting it default to the tiny value (on Windows) of 2M. The server uses hash chains to find data in the cache, and the number of hash chains is determined by the initial cache size. A cache that starts at 2M will have fewer hash chains and those chains will grow very long (and slow) as the cache grows to 2G; performance will suffer, even if the Cache Panics problem is solved. And that performance problem will persist even after the cache finishes growing.

On the other hand, a cache that starts at -c 1G and grows to 2G will have more, shorter chains, and better performance. In other words, you don't have to pick an exact value for -c, but you should avoid letting it default to a stup..., er, wildly inappropriate value (like 2M) if you know it's going to grow to at least 1G.

And, of course, the Cache Panics problem won't be nearly so noticeable if you pick a half-decent value for -c. Plus, it goes away after the cache finishes growing.

So, maybe it's not worth fixing, just worth knowing about :)

Wednesday, October 14, 2009

SQL Anywhere Product Futures Discussion

Sybase forum users will recognize "SQL Anywhere Product Futures Discussion" as the title for the sybase.public.sqlanywhere.product_futures_discussion newsgroup on the NNTP server. It's available from your web browser, but I won't put a link here because every one of the several HTTP/HTML interfaces to the NNTP forums is, well, inadequate.

Besides, the NNTP forums are reaching the end of their usefulness, which is why I'm not going to post suggestions there any more... I'm going to post them here, starting now...

DB_PROPERTY ( 'BackupInProgress' )

There should be a database property which answers the question, "Is a database backup process in progress?" It would apply to the BACKUP statement and the dbbackup.exe utility.

The question arises, should the answer be as simple as yes/no, or more complex? The continuous live backup process comes to mind, where the answer would always be yes... maybe live backups should be excluded.

And what about database versus log-only backups? Should the response be db/log/both/no?

That's what "discussion" is for; if you have an opinion, post a comment here.

One reason for having this property is so database monitor programs, like Foxhound and the SQL Anywhere Monitor, could adjust their behavior when a backup is running; e.g., they could recognize that server I/O processing will change in behavior, and that checkpoints will be suspended.

Another reason might be so you can write code that starts a backup as long as one isn't running already, perhaps a second redundant process that makes sure a backup is taken even if some other backup process failed.
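Here's a wishful-thinking sketch of that second use case, assuming the simple yes/no form of the proposed property (DB_PROPERTY ( 'BackupInProgress' ) does not exist today; the backup folder is a placeholder):

```sql
-- Wishful thinking: DB_PROPERTY ( 'BackupInProgress' ) is a proposed property, not a real one.
-- A redundant backup event could check it before starting its own backup.
IF DB_PROPERTY ( 'BackupInProgress' ) = 'No' THEN
   BACKUP DATABASE DIRECTORY 'c:\\backup'
      TRANSACTION LOG TRUNCATE;
END IF;
```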

DB_PROPERTY ( 'BackupSuccessfulAt' )

This database property should answer the question, "When was the last backup taken?" It should return NULL if there never has been a successful backup, or the timestamp when the most recent successful backup finished.

In my opinion, this request doesn't need any justification... it's hard to think of any question more important than one involving database backups. And today, there is just no reliable way to answer the question, certainly not easily.

I'm guessing live backups don't count here, if only because they never finish unless there's a problem; e.g., the server crashes, so dbbackup loses connectivity... and a database property is useless if the server ain't running.

Should there be different properties for different backup types? e.g., 'DBBackupSuccessfulAt', 'LogBackupSuccessfulAt', 'FullBackupSuccessfulAt'... more discussion!
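Again, a wishful-thinking sketch, assuming the proposed property returns NULL or a timestamp; an alert event could use it to check how stale the most recent successful backup is:

```sql
-- Wishful thinking: DB_PROPERTY ( 'BackupSuccessfulAt' ) is a proposed property, not a real one.
IF DB_PROPERTY ( 'BackupSuccessfulAt' ) IS NULL
OR DATEDIFF ( HOUR,
      CAST ( DB_PROPERTY ( 'BackupSuccessfulAt' ) AS TIMESTAMP ),
      CURRENT TIMESTAMP ) > 24 THEN
   MESSAGE 'ALERT: No successful backup in the past 24 hours!' TO CONSOLE;
END IF;
```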

Monday, October 12, 2009

Global Exception Handler

Stephan.ahlf posted an excellent suggestion in this thread in the Google SQL Anywhere Web Development group:

"It would be a nice feature if developers could implement a Global Exception Handler. In case of errors in procedures invoked via a database event there is only a error message in the database console. With a Global Exception Handler I could develop a general error notification e.g via email. There is an event for after RAISEERROR. Does there exists somthing like that for native database exceptions?"
Eric Farrar thinks it's a good idea too, which is wonderful news; Eric is highly respected throughout the SQL Anywhere community and his word should carry some significant weight:
"Unfortunately, there is nothing inside of SQL Anywhere to allow you to specify a top-level exception handler that will be called on any native database exception. I do understand why this would be useful (especially in a web context), and I have passed this idea on to the appropriate engineers."
Exception handling plays a huge role in the web-based Foxhound database monitor, and I would love to be able to code an all-encompassing global exception handler to record errors for later diagnosis.

One problem facing a global exception handler, however, is providing an answer to the question "Where, in all the thousands of lines of SQL code, did this exception occur?"

That's a hard question to answer even for a local exception handler. One solution (used by Foxhound) is to maintain a "where am I?" variable as the code executes just so an exception handler can include it with the other information it saves and/or displays.

For a global exception handler, one could use a global CREATE VARIABLE to store the "where am I?" value, but I would rather not... I would rather have SQL Anywhere provide the source code line number and object name (procedure, event, service, etc).
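Here's a hedged sketch of the "where am I?" technique in a local exception handler; the procedure and variable names are made up, but the pattern is the one described above:

```sql
-- Sketch only: names are hypothetical.
CREATE PROCEDURE p_do_something_complicated()
BEGIN
   DECLARE @where_am_i VARCHAR ( 200 );

   SET @where_am_i = 'initializing counters';
   -- ... code for this step ...

   SET @where_am_i = 'updating summary rows';
   -- ... code for this step ...

EXCEPTION
   WHEN OTHERS THEN
      -- Record where the exception happened for later diagnosis, then pass it on.
      MESSAGE STRING ( 'SQLCODE ',  SQLCODE,
                       ' SQLSTATE ', SQLSTATE,
                       ' at: ',      @where_am_i ) TO CONSOLE;
      RESIGNAL;
END;
```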

Sunday, October 11, 2009

We Don't Need No Steenking Backup!

Let's all welcome Microsoft and T-Mobile's Sidekick service as the latest Database Deadpool winners!

Here's the reported time line for the T-Mobile disaster:

Friday, October 2, 2009

Microsoft, whose Danger subsidiary powers the [T-Mobile] Sidekick service, said it is "working urgently" to deal with the problems, which interrupted data service for most Sidekick users starting at 1:30 a.m. PDT on Friday.
Major outage hits T-Mobile Sidekick users October 6, 2009 11:57 AM PDT by Ina Fried
We are working to resolve the issue and hope to have things restored to normal soon.
Sidekick Service Issue (Oct 2nd) 10-02-2009, 08:40 PM

Sunday, October 4, 2009

...seems that Danger's data center has been on the fritz for a solid two days now with symptoms ranging from text message weirdness to dead address books and everything in between, up to and including a completely unusable experience. Frantic calls to T-Mobile are resulting in comped bills for some subscribers, so if you're affected, you might want to give 'em a ring.
Danger's server woes leave Sidekicks in the lurch by Chris Ziegler posted Oct 4th 2009 at 4:48PM

Monday, October 5, 2009

Users of T-Mobile's Sidekick smartphones spent much of the weekend without data service, and many are still waiting for service to be restored.
T-Mobile: Danger Working to Resolve Sidekick Data Outage Monday October 5, 2009

Thursday October 8, 2009

We sincerely apologize for the inconvenience, and appreciate your patience as Danger works hard, around-the-clock to resolve this issue. T-Mobile will continue to keep you updated as we learn more. Thank you!
Sidekick Service Issue 10-08-2009, 04:22 PM

Saturday, October 10, 2009

The massive data failure at Microsoft's Danger subsidiary threatens to put a dark cloud over the company's broader "software plus services" strategy.
Sidekick outage casts cloud over Microsoft October 10, 2009 3:26 PM PDT by Ina Fried
...this is shaping up to be one of the biggest disasters in the history of cloud computing.
T-Mobile: we probably lost all your Sidekick data by Chris Ziegler posted Oct 10th 2009 at 3:45PM
Perez Hilton loses 2000 contacts in his Sidekick. 2000 people to meet in Griffith Park for biggest group hug ever.
John Mayer
Regrettably, based on Microsoft/Danger's latest recovery assessment of their systems, we must now inform you that personal information stored on your device - such as contacts, calendar entries, to-do lists or photos - that is no longer on your Sidekick almost certainly has been lost as a result of a server failure at Microsoft/Danger.
This is a code red cloud disaster.
Cloud Goes Boom, T-Mo Sidekick Users Lose All Data Posted by Eric Zeman, Oct 10, 2009 08:22 PM
This latest large-scale publicized data loss will surely lead to managers everywhere forwarding a link to the story to their IT departments asking "what are we doing so that this doesn't happen to us." ...
If you read almost any technology website or newspaper, you could be forgiven for thinking that "The Cloud" solves everything.
In the Danger case, it appears from initial speculation that the data was lost because they attempted to upgrade a storage array without backing it up first. Here is a case of smart and rational people who do this for a living at one of the best companies in the world, and they didn't even bother making a backup..
Letting Data Die A Natural Death Nik Cubrilovic Saturday, October 10, 2009; 8:20 PM

Sunday, October 11, 2009

Microsoft/Danger has stated that they cannot recover the data but are still trying.
Server Failure Destroys Sidekick Users' Backup Data Posted by timothy on Sunday October 11, @05:29AM

Wednesday, October 7, 2009

How To Make Sure Your DatabaseStart Event Finishes First

If you have scheduled events in your database, and those events contain logic that depends on the DatabaseStart event finishing beforehand, it's up to you to make sure your scheduled events don't start running too early.

For example, let's say you have critical initialization logic in your DatabaseStart event, and your scheduled events contain other logic that assumes the initialization process is complete before they run. Since every event runs on its own connection, and connections run asynchronously, there's no guarantee that the DatabaseStart event will finish before your scheduled events start firing.

One strategy is to add more logic to each scheduled event to check if the DatabaseStart event has finished running, and to immediately RETURN if DatabaseStart has not finished.

Here's one way to implement that strategy:

  • Add code to the end of the DatabaseStart event to record the CURRENT TIMESTAMP in a single-row table.

  • Add code to the beginning of each scheduled event to check if the SQL Anywhere server startup timestamp is later than the timestamp recorded by the DatabaseStart event.

  • If the server startup timestamp is later than the DatabaseStart timestamp, then RETURN from the scheduled event.
If the server startup timestamp is later than the DatabaseStart timestamp, then the DatabaseStart timestamp must have been recorded by a previous execution of the DatabaseStart event during a previous server startup... which in turn implies the DatabaseStart event for the current server startup hasn't finished executing.

Eventually, the DatabaseStart event will finish, and the next time the scheduled event fires it won't execute the RETURN.

Here are some excerpts from the Foxhound database monitor; first, a single-row table contains the DatabaseStart timestamp:

CREATE TABLE rroad_database_start_blocker (
   one_row_only   INTEGER NOT NULL DEFAULT 1
                     CHECK ( one_row_only = 1 )
                     PRIMARY KEY,
   started_at     TIMESTAMP NOT NULL );

CREATE TRIGGER trd_one_row_only BEFORE DELETE ON rroad_database_start_blocker
FOR EACH ROW
BEGIN
   ROLLBACK TRIGGER WITH RAISERROR 99999
      'Do not attempt to delete from rroad_database_start_blocker';
END;

INSERT rroad_database_start_blocker VALUES ( DEFAULT, CURRENT TIMESTAMP );

Second, the DatabaseStart event updates the DatabaseStart timestamp after all the initialization logic is complete:

CREATE EVENT rroad_database_start TYPE DatabaseStart HANDLER
BEGIN

... critical initialization code ...

-- Block other events from proceeding until this event finishes.

UPDATE rroad_database_start_blocker
   SET rroad_database_start_blocker.started_at = CURRENT TIMESTAMP
 WHERE rroad_database_start_blocker.one_row_only = 1;

END;

Finally, a scheduled event starts by checking the server startup timestamp against the DatabaseStart timestamp:

CREATE EVENT rroad_monitor_auto_restart
HANDLER
BEGIN

-- Wait for the rroad_database_start event to finish before proceeding.

IF COALESCE ( PROPERTY ( 'StartTime' ), '' ) = '' THEN

   -- The SQL Anywhere engine hasn't yet filled in the StartTime property, so it can't be
   -- compared with rroad_database_start_blocker.started_at, so it's not safe to proceed
   -- with this event yet.

   RETURN; -- try again when this event fires again

END IF;

IF CAST ( PROPERTY ( 'StartTime' ) AS TIMESTAMP )
      > ( SELECT rroad_database_start_blocker.started_at
            FROM rroad_database_start_blocker
           WHERE rroad_database_start_blocker.one_row_only = 1 ) THEN

   -- The DatabaseStart event hasn't yet updated rroad_database_start_blocker.started_at so
   -- this event cannot proceed.

   RETURN; -- try again when this event fires again

END IF;

... other logic that depends on the initialization process being complete ...

END;


Note that PROPERTY ( 'StartTime' ) might not even be filled in by the time a scheduled event starts executing, so there are two reasons to RETURN early.

Friday, September 25, 2009

Quassnoi's Explain Extended

I've added a new blog to the "WIDER VIEWS..." list on the right: Quassnoi's Explain Extended.

Lots of solid SQL content there, in particular this excellent discussion about "thinking in sets": Double-thinking in SQL.

Wednesday, September 23, 2009

Turning Off System Restore Points

I have discovered a new way to slow Microsoft SQL Server's BULK INSERT down so much that it mimics continental drift.

It's been a couple of years since I discovered the first way to slow down BULK INSERT: leave the PRIMARY KEY and other indexes in place while inserting data. The solution to that problem is to drop all the indexes, run the BULK INSERT and then recreate the indexes... it can be orders of magnitude faster that way.
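The drop-then-recreate effect shows up in any SQL engine that maintains indexes row by row during inserts. Here's a small self-contained sketch using Python's sqlite3 module as a stand-in (the table and index names are made up, and the timings are only indicative; the dramatic gap appears with much larger data sets on SQL Server):

```python
import sqlite3
import time

def timed_load(index_before_load, row_count=100_000):
    # SQLite stand-in for the SQL Server scenario: load rows with the index
    # either already in place during the load, or created afterward.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t ( pk INTEGER, data TEXT )")
    if index_before_load:
        con.execute("CREATE INDEX ix_t_data ON t ( data )")
    started = time.perf_counter()
    con.executemany("INSERT INTO t VALUES ( ?, ? )",
                    ((i, "row number %d" % i) for i in range(row_count)))
    if not index_before_load:
        con.execute("CREATE INDEX ix_t_data ON t ( data )")
    con.commit()
    elapsed = time.perf_counter() - started
    # Sanity check: the same rows arrive either way.
    assert con.execute("SELECT COUNT(*) FROM t").fetchone()[0] == row_count
    con.close()
    return elapsed

print(timed_load(True))    # index maintained during the load
print(timed_load(False))   # index built after the load
```

Either call returns the elapsed load time in seconds; on a big enough table, the second variant wins by a wide margin.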

Why am I talking about BULK INSERT? Because SQL Server is often used together with SQL Anywhere, and it's often necessary to ship data back and forth between the two databases, and BULK INSERT is a very efficient technique... sometimes. The online docs do talk about Optimizing Bulk Import Performance, but the subject of indexes is mentioned only in passing: "How to best increase the performance of a particular bulk import operation is influenced by the following factors: ... Whether the table has indexes."

What's the new way to make BULK INSERT slow? Leave the Windows System Restore feature enabled on the disk drive holding the SQL Server database. That increases disk I/O enormously, and the I/O is split between two different files in two different locations on the same drive... thus making the head move back and forth, slowing things down much more than if only one file were affected.

I've never noticed this kind of disk activity before, perhaps because I haven't seen a display like the following until after installing Windows Vista with its wonderful Resource Monitor:

In the picture above you can see the heaviest write I/O involves a file in the C:\System Volume Information folder which is where Windows automatic restore points are recorded. The SQL Server database and log files (main.mdf and main.ldf) come second and third in terms of write I/O, and this ranking persists for the entire time the BULK INSERT is running.

And then there's all that database file read I/O... I'm guessing all of it is needed for the System Restore feature, as you'll see later when it all disappears.

The solution is to follow the instructions here and turn Windows System Restore off for the disk drive affected, as follows: Control Panel - System - System Protection - Automatic restore points - clear the checkbox next to the drive letter:

Turning off restore points made a dramatic difference in the BULK INSERT disk I/O, which in turn resulted in the elapsed time dropping from 1,612 seconds to 512 seconds for a 1.9 million row data set; that's a factor of 3 improvement. It did this by completely eliminating all I/O to the System Volume Information folder, which in turn allowed the database file I/O to run much faster. It also completely eliminated the database file read I/O as shown in this picture:
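For the record, the arithmetic behind that "factor of 3" (the numbers are the elapsed times quoted above):

```python
# Quick check of the improvement factor from disabling restore points
# for the 1.9-million-row BULK INSERT.
before_seconds = 1612   # System Restore enabled
after_seconds = 512     # System Restore disabled
factor = before_seconds / after_seconds
print(round(factor, 2))   # just over 3
```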

Does the Windows System Restore feature affect SQL Anywhere databases? I'm not sure, but I think the answer is "no". I tried loading the same data into a SQL Anywhere 11 database via LOAD TABLE and there was no activity in the System Volume Information folder at all.

Oh, and I would love to tell you how much faster SQL Anywhere's LOAD TABLE was than BULK INSERT, but I can't... that would be a "benchmark test", a violation of the license agreements, and it might even exhibit some kind of "measurement bias"... who, me, biased? <g>