Tuesday, December 29, 2009

Memories of IT - 2003-2006 - Business Rules, and blogging

So, I was back in the trenches. Most of the projects were enhancements of existing systems, while building up to a replacement of the main loan application processing system. I did a little project management along with Business Analysis on smaller projects.

One thing I did get some time to research was Business Rules, starting with the Business Rules Manifesto, and reading of proponents like Ron Ross. I worked on getting Business Rules defined explicitly in the company's requirements documents, which went well; I also attended 2 Business Rule Forum conferences, and did a track session at one of them on our use of rules in requirements.

I did the track session because it reached the point where I had been to enough conferences over the years that I thought I had something to offer, rather than just attend. The conference organizers seemed to agree, as I went on to present at a few Business Analyst World conferences on how I was doing requirements work.

Also, I had been ignoring the rise of blogs, until it became apparent to me at some point that it wasn't just for young people sharing their feelings and thoughts on movies; blogs on business and IT topics were emerging, and again I saw them and thought "I can do this", so I started this blog and did one over on IT Toolbox for a while. Being a Business Analyst can mean writing a lot, and I have found the blog a great outlet for writing that is not specifically for a project. I highly recommend it...

Thursday, December 24, 2009

Memories of IT - 2002 - DHL was fun while it lasted...

DHL Systems was located in the Bay Area because it was felt that’s where the skilled systems people were. True enough, although the tech bubble had burst by now. When I joined, DHL was owned by a combination of Deutsche Post, Japan Airlines, and the families of the original founders; then the Germans bought out JAL or something, and ended up as majority owners. (That’s why DHL vans and planes went from white with red/blue to yellow with black.) At some point someone in Deutschland decided that the Bay Area was too expensive, so in December 2001, DHL Systems was ended. The operational part that ran the comm network moved to Arizona, but ATP was mostly dismantled and most of my friends end-dated. I think there was a lot of payback going on for past politics and disagreements. Somehow I, along with a couple of other people, was offered a job elsewhere. Mine was in the London systems office I had visited. However, the salary was average, and not much was offered to help me and my family move. I also looked into what it would cost to put my sons in ‘American school’ in the UK, and that killed it. Probably for the best, as later on the London office was closed and the jobs moved to Prague. Nothing against Prague, never been there, hear it’s lovely, but I think it would have been too much of a culture shock.

…but, I am in the USA on an H1B sponsored by DHL. I start looking for a new job, but it’s a slim market, and no company wants to take over sponsoring me, so after some months it’s clear we have to move back to Canada if I am going to get work… so we pack up and move, decide on Kitchener-Waterloo because we had been there before, my wife gets a job first, and then I get back into the workforce as a BA at a captive finance company, like GMAC but for farm and construction equipment. So, out of the ivory tower and back into the trenches.

Afterword: I and my family certainly did like the time we spent in the Bay Area. Three out of four seasons were fine, although my son would go on school ski trips to Tahoe. We especially liked heading down to Monterey and Carmel for weekends, or just to Half Moon Bay for the day, and driving up the coast to San Francisco. When I win the lottery, there is a little place on Monterey Bay I may have to invest in...

Thursday, December 17, 2009

Memories of IT - 2000's - Architecture and the Ivory Tower

As the bringers of the new Architecture, we would get assigned to projects around DHL that said (or were told) they would use the new way. The PMs would initially be happy, looking at us as another resource for their project. The shine would rub off when we recommended using the new methods to plan the project, especially if the PM had already built a plan. The other team members might get on board or, if the new way was very different from what they had been doing, they might resist. Eventually heated discussions would ensue, and long conference calls at odd hours, depending on where in the world the project team was.

The usual issue was the iterative aspect of the new architecture, which PMs or higher ups would be uncomfortable with because they wanted a completion date while we were looking to time-box the work and deliver what could be done in that time. Once during an all-hands IT meeting (DHL Systems and DHL USA), one of our key people raised a question about adoption of the architecture, and the senior guy in the room said iteration was never going to work in the real world. A sign of things to come…

The other angle in all this was that our Director actually reported to a VP in DHL central systems in London, U.K. This did mean that we would travel there and back on occasion. Having not been to the UK for decades, it was a nice bonus, and I visited a few spots of interest; I dropped over to Brussels once, where DHL is actually headquartered, but all I saw was the airport, the hotel and the office. The atmosphere in both locations was thoroughly byzantine, highly political, people jostling for position; you never knew when something you said might be turned around and used to bite you… it didn’t happen to me, but I saw it happen to others. The “Architecture” was often at the center of all this.

The thing was, though, the people I worked with in ATP are among the finest, smartest people I have ever known, so going to work was still great even with all the outside issues.

Quick aside: DHL was in the news a while back, and I know how it started. Back in 2000, DHL had no internal USA business to speak of; DHL was known, rightly, for being international. It also had a stupid way of measuring how it made money. Each country had its own DHL corporation. Each one made money on shipments originating in its country, but got nothing for delivering shipments sent to its country. The USA got way more shipments sent to it than were sent out, so it always ‘lost money’, and whoever was running DHL USA looked bad. When I had this explained to me for the first time, I just could not believe it. I have to think that US management complained about this, but it wasn’t going to change.

So instead of more fairly reporting the real profits, DHL decided it would get into the domestic shipping business up against FedEx and UPS. They bought the smaller player Airborne Express, and then you started seeing yellow DHL trucks on the roads.

Well, it seems not to have worked, as DHL has announced it is getting out of US domestic shipping and going back to its focus on international shipping; I hope they fixed their income reporting.

Tuesday, December 15, 2009

Memories of IT - 2001 - B2B and early days of RosettaNet

After the Shipment Tracking project was over, I spent the next year working on several things, overlapping and simultaneously.

One cool thing was RosettaNet, a standards-based B2B architecture then being developed, driven first mainly by IT and electronics companies. The idea was to define a set of supply-chain messages that companies could use with other companies that used the architecture. So, once you started using the messages with one other company, you could then use them to communicate with other RosettaNet-using companies with little to no added effort.

One of the key sets of messages had to do with making shipments of orders through a delivery company like DHL, including getting shipment status and notice of delivery. So, Frank and I signed on as the representatives for DHL, and joined a working group that included reps from Federal Express, UPS and a few smaller players.

The main deliverable consisted of XML message formats, with all the data defined that would need to pass from sender to receiver. (A lot of the other message sets covered automatically checking a supplier’s inventory for a desired product, placing the product order, the supplier confirming the order, shipping, which was our part, and finally invoicing. When we signed up, upwards of 100 individual messages had been defined, about half had been created, and the first companies were starting to use them.)

Our messages were drafted during a couple of days of meetings with all the reps, at a FedEx Ground location outside Pittsburgh. We took along an actual business person from DHL USA (also located near San Francisco) for shipment knowledge. DHL’s acknowledged expertise at that time was in international shipments, so a lot of the message content about customs clearance came straight from us.

After that, we met weekly by conference call to iron out the final versions, after which they went through a formal RosettaNet review and approval process and were published. The interesting thing about RosettaNet was that the published messages were free for any company to use, obviously to spread their adoption. The core companies like IBM and others contributed the most, and it cost money to join in order to participate in developing the messages; that was worth it if you made sure the messages were going to work for you.

I also went to one RosettaNet User Conference, in Chicago. A lot of it was about how to implement and start using RosettaNet at a company, with all-day sessions or various tracks of one-hour sessions, like most conferences. The basic structure was to have software sit in front of a company’s existing order and inventory systems, extract data as needed from them, use it to create the standard messages and send them off. The same software would do the reverse as well: accept messages, parse out the data and feed the existing systems. As you might expect, a number of vendors in the B2B space had already built and were hawking software packages to do just that, including services to automate the extracts from and feeds to existing systems. So, all the vendors were at the conference, of course. That always meant some free food and drinks and entertainment, which always helps make any conference more enjoyable.
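If you want a concrete picture of that gateway idea, here is a minimal sketch in Python. The field names and message layout are mine, invented for illustration; the real RosettaNet PIPs define much richer, formally specified XML. The pattern is the same, though: pull data out of an internal system, map it to the standard message, and send it off.

```python
# Minimal sketch of the "gateway" pattern described above: pull data out of an
# internal system, map it into a standardized B2B message, and serialize it.
# The field names and message layout are invented for illustration.
import xml.etree.ElementTree as ET

# Pretend this came from the company's existing shipment-tracking system.
internal_shipment = {
    "waybill": "1234567890",
    "status_code": "DLV",
    "status_time": "2001-06-14T16:32:00Z",
    "destination_country": "US",
}

# Map internal status codes to the (hypothetical) standard vocabulary.
STATUS_MAP = {"DLV": "Delivered", "TRX": "InTransit", "PUX": "PickedUp"}

def build_status_message(shipment: dict) -> bytes:
    """Transform an internal shipment record into a standard XML message."""
    root = ET.Element("ShipmentStatusNotification")
    ET.SubElement(root, "TrackingNumber").text = shipment["waybill"]
    ET.SubElement(root, "Status").text = STATUS_MAP[shipment["status_code"]]
    ET.SubElement(root, "StatusDateTime").text = shipment["status_time"]
    ET.SubElement(root, "DestinationCountry").text = shipment["destination_country"]
    return ET.tostring(root, encoding="utf-8")

if __name__ == "__main__":
    print(build_status_message(internal_shipment).decode())
```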

In the end, the interesting thing was that some months later, Cisco and DHL entered into a logistics agreement, where DHL would warehouse Cisco parts around the world and deliver the parts when directed by Cisco… and their requirement was that these shipments would be triggered automatically using RosettaNet messages.

Anyway, it’s still out there; see www.rosettanet.org and check it out if you are serious about B2B...

Monday, December 14, 2009

Memories of IT - 2001 - Arguments over interfaces...solved.

So, I deliver Use Cases and design/development begins. My group designs a service and interface following the Architecture. The KL systems people want a specific date/time-stamp from an existing system in the interface; our lead designer says no, that any system using the interface should not know where the data provided came from, so that the service can be changed in the future without disrupting any systems using it…. Things got quite heated. In a moment of prescience, upon hearing of this argument, I stepped in and said that this was a requirements issue, not a design issue. The date/time-stamp was actually the time a Shipment was first recorded in a DHL system. I added that date to our data model, then added it to the Use Cases, and the argument ended, at least publicly… there always seemed to be some level of discord between DHL Systems ATP and the systems areas in DHL proper. We actually printed a poster of a white tower and put it on the wall heading into our office.
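For anyone who likes to see the point in code, here is a rough sketch (the names are mine, not the actual DHL interface) of what our designer was protecting: the exposed service returns a business-level fact, the time the Shipment was first recorded, without telling the caller which back-end system supplied it.

```python
# A rough illustration of the design point, not the actual DHL interface:
# the service contract exposes a business-level fact ("when the shipment was
# first recorded") without revealing which back-end system supplied it, so the
# implementation behind the interface can change without breaking consumers.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ShipmentSummary:
    """What consumers of the tracking service get to depend on."""
    waybill: str
    first_recorded_at: datetime  # a requirement-level fact, not "system X's timestamp"
    current_status: str

def _fetch_from_legacy_system(waybill: str) -> dict:
    # Stand-in for whichever existing system actually holds the record today.
    return {"wb": waybill, "created_ts": "2001-03-01T08:15:00+00:00", "st": "InTransit"}

def get_shipment_summary(waybill: str) -> ShipmentSummary:
    """The exposed service: callers never learn where the data came from."""
    raw = _fetch_from_legacy_system(waybill)
    return ShipmentSummary(
        waybill=raw["wb"],
        first_recorded_at=datetime.fromisoformat(raw["created_ts"]),
        current_status=raw["st"],
    )

if __name__ == "__main__":
    print(get_shipment_summary("1234567890"))
```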

So, design and development went ahead for the pilot, was accepted, and then went live. For some period of time, at least, when you searched for your Shipment on DHL.com, you used a service based on my Use Cases.

Friday, December 11, 2009

Memories of IT - 2001 - Architectures, RUP, Iterative, BEA and other stuff in the life of Architecture Technology Pathfinding (ATP)

This pilot was the first real test of the work of the group I had joined, known as Architecture Technology Pathfinding (ATP). The architecture in the title was actually multiple architectures, and multiple uses of the word.

The main Architecture was about application structure, and it was a detailed description of an N-Tier environment, with interfaces invoking Business Services which would then invoke Data Services and such. The implementation of it was based around BEA’s application server product, and DHL Systems had in fact been an early customer and supporter.

But how do you define the applications? We were strong adherents to the Zachman Architecture Framework for starting with concepts, then models, then implementations of models down to code. Frank and I were focused on the Data Column of Zachman, with overlap into the Process column when defining the CRUD functions on data. In the application architecture these CRUD functions were the lowest level services that supported everything else. The overall approach across the architecture was service-based, with a service being defined and made available through an exposed interface, so that everything behind the interface was unaffected by what used it (and vice-versa) as long as it could provide what the interface said it needed. Very OO, of course, and a hint of SOA (which had not appeared as a term yet).
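A minimal sketch of that layering, with names I made up: the lowest-level data service exposes only CRUD operations through an interface, and a business service is written against the interface rather than any particular implementation, so either side can change independently.

```python
# A minimal sketch (names invented) of the service layering described above:
# the lowest-level data service exposes CRUD operations through an interface,
# and the business-tier service depends only on that interface.
from abc import ABC, abstractmethod
from typing import Dict, Optional

class ShipmentDataService(ABC):
    """Exposed interface for the lowest-level (CRUD) data services."""
    @abstractmethod
    def create(self, waybill: str, record: dict) -> None: ...
    @abstractmethod
    def read(self, waybill: str) -> Optional[dict]: ...
    @abstractmethod
    def update(self, waybill: str, changes: dict) -> None: ...
    @abstractmethod
    def delete(self, waybill: str) -> None: ...

class InMemoryShipmentData(ShipmentDataService):
    """One possible implementation; a real one might sit on a database behind an app server."""
    def __init__(self) -> None:
        self._rows: Dict[str, dict] = {}
    def create(self, waybill: str, record: dict) -> None:
        self._rows[waybill] = dict(record)
    def read(self, waybill: str) -> Optional[dict]:
        return self._rows.get(waybill)
    def update(self, waybill: str, changes: dict) -> None:
        self._rows[waybill].update(changes)
    def delete(self, waybill: str) -> None:
        self._rows.pop(waybill, None)

class TrackingBusinessService:
    """Business-tier service: written against the interface, not the implementation."""
    def __init__(self, data: ShipmentDataService) -> None:
        self._data = data
    def record_delivery(self, waybill: str) -> dict:
        self._data.update(waybill, {"status": "Delivered"})
        return self._data.read(waybill)

if __name__ == "__main__":
    store = InMemoryShipmentData()
    store.create("1234567890", {"status": "InTransit"})
    print(TrackingBusinessService(store).record_delivery("1234567890"))
```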

The other main use of ‘architecture’ was that as used in RUP, summarized as development of an application by first creating an overall architecture for the app, and then building it up in the multiple iteration approach of RUP (not Agile, which wasn’t a well-known term yet either). Selling RUP and its iterative approach was a big desire of the key people in ATP, as a means of moving away from a big-bang approach to development that had failed in the past.

The thing was, it had to be sold to IT execs in DHL proper, and there was one of those for almost every country in the world (although we focused on the USA and Europe as the likely places that success with RUP would then spread from). Many an IT exec had blown off iterative development, mainly because it was believed you could not define a specific end date for it, and so you could not estimate it. The fact that past projects with planned target dates and estimates had usually missed both by wide margins didn’t count. The execs wanted something that made big-bang work, not something different that they could not understand or did not feel they could control. However, the whole combined Architecture approach espoused by ATP had taken some time to develop, and had involved some other DHL areas, so it could not just be ignored. That is why the pilot for Shipment Tracking was devised: to illustrate the use of the Architecture and Methods. The complications came when the design started and some non-believers tried to undercut the whole thing… more next time.

Tuesday, December 08, 2009

Memories of IT - 2000 - ERDs, OO, UML, Use Cases and all that

The first thing I did at DHL Systems was learn and use Rational Rose, and learn more about the RUP methodology. The aim was to figure out how to marry classic ERD data models with the Object-Oriented development modeling, especially the Class Diagram. This actually involved a lot of discussion with the OO experts in our group. I even went on another UML course (my first visit to Denver), but I still wasn’t grokking it. I also read one of those “UML for Business Modeling” books, but it didn’t help.

So, I translated some of the existing ERD models into Class Diagrams, and then tried my hand at some Activity Diagrams to model how some Classes would be created in a business transaction. Frank (my boss and fellow Analyst) and I showed these to our chief OO designer/developer, and his reaction was negative. His opinion was that UML is a software design language, not to be used for analysis or requirements. He stated that what OO developers needed from Business Analysts was a “concept diagram” (a generalized Class diagram that just showed what the main things of interest were) and Conceptual Use Cases. Frank was concerned that my hard work had gone for naught, but we moved on. Our Architecture group was going to participate in a company-wide pilot to use UML, OO, RUP and a new architectural approach to create new services for tracking shipments through the DHL website. My job was to first write the Conceptual Use Cases, which would be used to create a design and concrete use cases, to be vetted by DHL IT in Brussels and built by another DHL Systems group in Kuala Lumpur.

There were no business people involved directly; the KL group was positioned as our ‘customers’. As such, any requirements they had were technical and cause for much wrangling between them and the OO folks in my group (more about that later). So, I went back to the big data models built in years past because, as you might expect, they were all about Shipments and all the supporting things you needed to pick up, send, and deliver a Shipment. I created a set of Query Use Cases based on the data models that would accept enough data to identify a shipment (or at least a short list of candidates) and return whatever data you needed about the contents of the shipment, where it was and so on. I actually thought they were pretty lame, as my IEF days had instilled in me the belief that queries and reports didn’t need requirements the way Create, Update and Delete transactions did; but they were accepted pretty well, mainly because previous attempts had produced requirements that were either too vague or too technical… I had found the middle ground. Then the real fun began as we moved into design…
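Stripped of the Use Case wording, those queries amounted to something like this sketch (field names invented): accept whatever identifying data the caller has, and return the matching shipment or a short list of candidates.

```python
# A hedged sketch of what one of those Query Use Cases boiled down to (field
# names are invented): accept enough identifying data to find a shipment, or a
# short list of candidates, and return the tracking details for each match.
from typing import List, Optional

SHIPMENTS = [  # stand-in for the real shipment data behind the service
    {"waybill": "1234567890", "sender_ref": "PO-88", "status": "Delivered"},
    {"waybill": "1234567891", "sender_ref": "PO-88", "status": "InTransit"},
]

def find_shipments(waybill: Optional[str] = None,
                   sender_ref: Optional[str] = None) -> List[dict]:
    """Return every shipment matching whichever identifiers the caller supplied."""
    matches = []
    for s in SHIPMENTS:
        if waybill and s["waybill"] != waybill:
            continue
        if sender_ref and s["sender_ref"] != sender_ref:
            continue
        matches.append(s)
    return matches

if __name__ == "__main__":
    print(find_shipments(sender_ref="PO-88"))    # a short list of candidates
    print(find_shipments(waybill="1234567890"))  # an exact match
```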

Monday, December 07, 2009

Memories of IT - 2000 - A Canadian in California

For any Americans reading this post, you should know... there are Canadians amongst you. And I don't just mean Jim Carrey or Mike Myers or Neil Young or (so I am told) one of the guys on "Glee". For a while, there was also me, my wife and my two younger sons.

A true story... my wife and I were shopping at a little outdoor mall in Castro Valley CA, and there were booths set up for the day selling crafts and such. One booth, however, was for voter registration. A nice lady in that booth asked my wife as she passed if she had registered, and my wife said she couldn't, to which the lady replied "why not?". "I'm not a citizen," my wife told her. The lady asked "where are you from?", and my wife replied "Canada". The nice lady then exclaimed, "Really? You look just like us!" But then she took a beat and said "well, that was a pretty stupid thing to say...". So, we never complain about Americans or the USA, we just offer useful information when we can.

It was a big move again, of course, but my wife had always felt she should have been born in California, and we would have been there long ago if not for the world's longest undefended border. For the record, DHL sponsored me on a TN visa and then an H1B. I know H1Bs are a touchy topic for some, but DHL was able to show that they had been looking to fill the job for a while (and they had), and had been unsuccessful until they found me. I believe that's what these visas should be used for, and that it helps the USA in that case. If visas can be abused in some ways, though, that's not good.

We settled in the East Bay, first in Castro Valley then later in San Ramon. That did mean I had to drive over the San Mateo bridge every day, and it had not been widened yet. However, commuting has been something I have always done, and in a lot less interesting places than San Francisco Bay.

It was a great place to live. If you live there, you already know that. If you have never lived there, don't worry about the earthquakes or the state going bankrupt or whatever; move there if you ever have the chance.

One other thing before I get back to the IT stuff: we actually got to California by driving to Chicago, and then taking the California Zephyr train out to Oakland. We planned it as a mini-vacation (that also meant that my wife did not have to fly...), and it was a great trip, across the plains to Denver, then up and down both the Rockies and the Sierras. I keep hearing they might stop running the Zephyr, but if they haven't, I highly recommend it.

Thursday, December 03, 2009

Memories of IT - Spring, 2000 - California, here I come

Suffice it to say that I and my employer of three years parted ways in early 2000.

Through a series of timely connections and circumstances, I found myself on the phone with a fine gentleman from northern California, and that’s really when I first learned about DHL.

Eight years ago, DHL was almost unknown in North America (and as it turns out, may be unknown again in a few years). The man who called me was named Frank, and he was soon to become, and remains, a good friend. Frank was the chief (and lone) data modeler in the architecture area of a subsidiary called DHL Systems, located in Burlingame, California. Burlingame is one of those smaller cities/towns that hug the west side of San Francisco Bay between San Francisco and San Jose, just below the airport.

The group was similar to the TD group I had been in at Crown, and they were looking for a person with data modeling and information engineering experience to work for Frank. They had been having a hard time finding someone (really!), and apparently were willing to reach out far and wide in their search, and I had been found. After a good first phone conversation, they booked me a flight to San Francisco to be interviewed by Frank, his peer managers and the overall director. This was my first trip to the west coast and California, so it had the feel of an adventure. DHL Systems was in an office building right on the Bay, and I was put up at the suites hotel right next door.

Then off to a morning of interviews, lunch with the whole team at a fine restaurant also on the Bay, then back to meet with HR, during which Frank and the Director, Dee, came in and offered me the job right there and then. I was surprised, but I think I managed to respond professionally: thanks, let me go home and talk it over with the wife, and I will let you know… but it was pretty much a done deal.

Wednesday, December 02, 2009

Memories of IT - late 90's - More projects, and ETL tools

1997: I am back in Southern Ontario, doing project work for my new insurance company employer, mostly forgettable stuff like how to restructure life insurance policies that had been sold on the promise of "Vanishing Premiums". I did escape the whole Y2K thing, so I can be thankful for that.

One interesting thing I worked on was ETL tools. I had done some reading on Data Warehousing, especially Bill Inmon's seminal book on the topic. A lot of it was/is dependent on getting data from transaction systems into the DW in summarized, time-dependent structures. This was the “Extract, Transform and Load” (ETL) of data.

A different need for ETL came up at my new employer. Its growth had been fueled by acquiring smaller competitors. When you buy another insurance company, you get all its in-force (premium generating) polices, and all the computer systems that had been used at the company being bought. All of these systems were usually pretty unique, so my employer would end up having to run several systems for the same kind of insurance product, often variations of the same dominant insurance package system of the time. This was expensive and wasteful of resources, so the inherent desire was to migrate all the policies on all the acquired systems on to the latest/greatest system being used for new business.

That ain’t easy. Insurance is a non-physical product, so actuaries can dream up almost anything they can sell and make money on, and systems have to be customized to support very unique products. So the first hard thing to do was to figure out what the products/policies on the old systems really were, and whether the new system could support them. That might mean the new system needed to be changed. After that, you have to get the data out of the old system and into a format the new system could read and use to add policies. That could mean a lot of complicated and time-consuming programming, to create code that would only be used once, when the conversion from old to new system was done. My feeling was there had to be a better way, and a quick check of the marketplace showed that commercial ETL products were now available. I wanted something where the interface matched source data to the target database, accepted transformation rules, handled different types of files and databases, and generated the code to do the ETL process.
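For the curious, the mapping-driven approach those tools automated looks roughly like this sketch (the field names and rules are invented): declare how each target field is derived from a source record, and let the same rules run over every policy, instead of hand-coding a one-shot conversion program.

```python
# A small sketch of the mapping-driven approach those ETL tools automated
# (field names and rules invented here): declare how each target field is
# derived from the source record, then apply the same rules to every policy.
from typing import Callable, Dict

# Transformation rules: target field -> function of the source record.
POLICY_MAPPING: Dict[str, Callable[[dict], object]] = {
    "policy_number": lambda src: src["POLNO"].strip(),
    "annual_premium": lambda src: round(float(src["PREM_MONTHLY"]) * 12, 2),
    "plan_code":      lambda src: {"WL": "WHOLE_LIFE", "TL": "TERM"}[src["PLAN"]],
    "insured_name":   lambda src: f'{src["FNAME"].title()} {src["LNAME"].title()}',
}

def transform(source_row: dict, mapping: Dict[str, Callable[[dict], object]]) -> dict:
    """Produce one target-system record from one acquired-system record."""
    return {target: rule(source_row) for target, rule in mapping.items()}

if __name__ == "__main__":
    old_policy = {"POLNO": " 0012345 ", "PREM_MONTHLY": "87.50",
                  "PLAN": "WL", "FNAME": "MARY", "LNAME": "SMITH"}
    print(transform(old_policy, POLICY_MAPPING))
```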

I found many such tools, and recommended we investigate them to acquire one to save time on ETL programming, as we had so many conversions to do and probably more coming in the future. I was in a line area, and most tech evals were done centrally, so I contacted the central area and they said they had no time, so go ahead yourself. So I contacted people in other line areas who agreed this might help them too, and got a part-time team together, generated requirements and went out and evaluated tools.

The best one was a product from an Austin company called ETI. It emerged from a research think-tank that major vendors and universities supported. It did exactly what I described above.

I did up a full cost-benefit analysis and showed it was a good buy, and it had to go up to my VP’s boss for approval because of all the other areas involved.

So we get the tool in, I and another analyst get trained, and we are lined up to do the next project, and then the push-back begins. The lead programmer decides she doesn’t want to get trained because she would have to travel, and she doesn’t like to travel, and the PM says OK. It becomes apparent that despite a good cost-benefit and management approval, programmers and some analysts don’t think much of the ETL tool; generating code meant no programming, and who wants that? They could not see that it meant no boring, repetitive programming, freeing up resources for new, interesting stuff.

So the whole thing stalled, which can happen when trying to implement change that frightens some people. It was too bad, the tool really worked as advertised. It was a sign...

Tuesday, December 01, 2009

Memories of IT - 1997 - Salary, Regina and time for moving on...

One of the reasons for moving a company from Toronto to Regina is lower pay scales for your staff. If you moved from Toronto, you kept your current salary, but very quickly one raise or two would get you to the top of your job’s salary range and from then on your boss would say every year “I am really sorry, but no raise for you”.

Now, whether I really needed 2% raises or not, when this starts happening, you get annoyed. Also, some of what I used to do in TD about methodology was now thought to be needed again, and at a higher job level. I thought this sounded good and applied for the new position. All the people I worked with saw the job posting and said “Dave, that job is so you!” My application came back with a note that I was not right for the job. The HR person who told me this said that the senior IT VP hiring for the position, whom I had helped with methodology before, would be willing to meet with me if I wanted to discuss his decision, possibly an attempt to placate me, as he was hiring somebody from outside the company he had worked with before. I most politely declined the offer, because I was steaming mad inside, and I might just have blown up if I saw that VP. I shared this with someone I trusted and he said I had done the right thing.

This was also happening during my fourth winter in Regina, a horrible winter. So, all these factors came to a head and led me into an active search for a new job somewhere else. This was a big decision, as I had worked at Crown Life for 17 years. It took a few months, and I got an offer from a big insurance company back in Ontario that would pay to move us back. I jumped at it.

Monday, November 30, 2009

Memories of IT - the 90's continue - the user from heck

Next up was a desired enhancement and, guess what, it was about Agent Management. In the United States, the rules for becoming an insurance agent are set at the state level, so that means different rules in different states. Some of these rules had to do with an Insurance Company sponsoring Agents and the proper way of registering, and if you did it incorrectly, large fines would result, because the insurance boards took a dim view of people selling insurance who were not legally registered and sponsored to do so.

It’s a really complicated part of the business which (surprise, surprise) our vendor had not put in their system, but they sure would if we gave them requirements and paid for the development work. Here we had another good thing this management had done: assigning the most knowledgeable business people to the project full-time to make sure the system did the right things. That is rare and to be applauded; unfortunately (!), management thought these people could write the requirements to send to the vendor, and that was a disaster. A document had been written and sent to the vendor, and they sent it back saying it was unusable. So, a walk-through by selected team members was scheduled, and as an Analyst who had done Requirements before, I got invited.

The meeting was intended to change the document to make it usable, but it was just pages and pages of rambling text, so after some period of time I had to pipe up and say we needed to produce a real Requirements document, and I knew how to do it.

So, I am assigned an Agency SME and an Underwriter, the latter being the person who knew the rules, because underwriters checked these rules when a new agent submitted their first application.

The Agency guy was great; he was the one I had worked with before who liked my Data Models. However, the underwriter was the person who had written the original document, and was understandably not happy that her hard work had been rejected, and so she became my User From Hell. I scheduled a week in a meeting room to do the Requirements, but she fought me tooth and nail and it took four weeks. I did a functional decomposition, a data model and rules, and she didn’t want to know that this could work. It was painful and, like all things you would rather forget, the specifics and severity of the pain have faded, but I do know that when we were done, the result was reviewed successfully internally and the vendor loved it, and that underwriter switched around to become the biggest supporter of what we had done… I felt good!

Sunday, November 29, 2009

Memories of IT - Packages and Reverse Engineering

Ah, packages. They are better now, but this was still a period when it was pretty likely you would want to change whatever package you bought. One thing management had done right (and you gotta acknowledge that when you see it) was that the vendor agreed to work with us to make changes if we gave them requirements. This was really one of those “win-win” deals, because almost all packages are created for the USA first and, if you are a Canadian company, the first thing you have to do is “Canadianize” the package; that can mean currency differences or adding French versions of screens and such. This vendor was indeed American, and we were their first Canadian customer, so they wanted to end up with a new version of their system to sell to other Canadian insurance companies.

They didn’t have such a version yet because the package was still pretty new. It had been developed in-house at a Chicago insurance company, and then it had been spun-out to a separate company. The original company was still their number one customer.

Was it a good package? I can’t really say for sure because (spoiler alert!) I wasn’t around to see it fully implemented. I do know the original version was not using industrial-strength technology. The database was some Dbase/Xbase thing, the code was Basic or something, and the user interface was green screen running under DOS; but the vendor wasn’t dumb. They piggy-backed on top of our agreement the effort to upgrade the technology: Oracle, Unix and other good stuff. This came about because the business liked the system as they saw it, but our IT people said we can’t support this, and our scale of business would probably swamp the system. So, the business people agreed to pay for the upgrade, and what vendor would not love that.

So, I have these D/Xbase table definitions, and there are hundreds of tables. The central tables emerged easily, I grouped them into subject areas, and the structure became clearer. What I found was that a lot of these tables were add-ons: if they had a Policy table, and then decided they needed more attributes for a Policy, they did not change the table, they created a new one with the same key. I can see how that would be easier to do, but it made the database into a dog’s breakfast. I raised this in one of the on-site meetings with the vendor, that I was not impressed, and they were not impressed that I was not impressed.
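To show what I mean by add-on tables, here is a little sketch (table and column names invented, not the vendor's actual schema) of what reassembling one logical Policy entailed:

```python
# Illustration of the "add-on table" pattern (table and column names invented):
# instead of altering the Policy table, each new batch of attributes landed in
# another table keyed by the same policy number, so reconstructing one logical
# Policy meant joining several tables every time.
POLICY      = {"P001": {"plan": "WL", "issue_date": "1989-04-01"}}
POLICY_EXT1 = {"P001": {"smoker_flag": "N"}}          # added later, same key
POLICY_EXT2 = {"P001": {"dividend_option": "PAIDUP"}} # added later still

def logical_policy(policy_no: str) -> dict:
    """Stitch the add-on tables back into the single entity they should have been."""
    merged: dict = {"policy_no": policy_no}
    for table in (POLICY, POLICY_EXT1, POLICY_EXT2):
        merged.update(table.get(policy_no, {}))
    return merged

if __name__ == "__main__":
    print(logical_policy("P001"))
```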

So, I continued parsing my way through this mess, creating a pretty big data model. I was doing this while starting to do direct project work, but I eventually finished. The next time some vendor people came in, I showed it to them, as I thought it would assist in specifying good requirements and their impact on the database. Their response was muted at best, which left me puzzled. It was only some time later, talking to a consultant who had been part of the package search, that I found out why I had got that response: right in our contract was a clause that we would not reverse-engineer any part of the system to understand and document it; that was verboten. Well, nobody had told me, and I don’t think anybody else who had joined after the contract was signed knew either. It was no secret I was doing it, and nobody ever said I should not do it. I don’t know if there were any repercussions from the vendor after they saw it, but I had the data model and I used it from then on.

Friday, November 27, 2009

Memories of IT - mid 90's - Activity Based Costing and Functional Decompositions

Other stuff was happening in the 90’s, things like Business Process Re-Engineering and Business Process Improvement. Crown Life got into a specific approach called Activity Based Costing (ABC), where you defined and broke down your overall process a few levels so that you could figure out what the different parts did and what they cost. Each business unit had done this, and when I saw the results, I said “great, these are Functional Decompositions.”

When I joined the Individual Insurance package project, I learned they had used their ABC decomposition to evaluate packages, and it was a good decomposition, functional not organizational. As I still had a licence for IEF, I put that decomp into IEF, as part of learning about the business before doing real work. What I found out, however, was that no data requirements had been defined, a big piece of the puzzle that was missing.

Turns out I had about a month after I joined the project before some real work started, so I said I would use the time to get up to speed. I said, do we have documentation on the package that I can read? Yes, we did, look in this LAN folder. In there I found table definitions for all the package's data, with foreign keys defined. I said to myself, I can reverse-engineer this stuff to build a data model to align with the func decomp, and I will have a great picture of the business, an actual Information Architecture.

So, that’s what I started, and started learning about the package too.

Thursday, November 26, 2009

Memories of IT - mid-90's - Tossed out of the White Tower

A little background on Technical Development (TD): it had existed in Toronto, and was staffed up again in Regina. We all worked for a middle manager, a couple of different ones who came and went. When one of them did move on, that’s when the VP I have mentioned previously took over. She had managed a large application area, a re-org took away that area, and she came to manage us white-tower, R&D folks. It seemed like overkill to me, but it turned out that her job was to get the current R&D projects done. This was mainly getting a server-based environment implemented that could be used to take over from the mainframe, which was supposed to be cheaper and better because it would be GUIs on PCs and such.

This reminds me that when we moved to Regina, our mainframe processing was switched from Datacrown/Crowntek to a local IBM/ISM facility, so we were still paying for every MF cycle and byte of disk storage, so servers were seen as the solution.

So, at some point, it was divined that the server environment was ready, so we would never have to do R&D again, so TD was to be disbanded. It was like “the end of history…”. I also think it was because we were a senior bunch of people, higher-paid, and management decided it was time for us to work on actual projects and earn our pay.

So, we all dispersed across the company, and the VP got a new big app area to manage. I was concerned, because a lot of app areas were run by people who had declined to do IEM/IEF, and that’s what I was tagged with. However, I ended up joining the team doing the new Individual Insurance package, to do requirements for possible changes to the package.

So, back to the project trenches I went…

Wednesday, November 25, 2009

Memories of IT - mid 90's - The IAA from IBM

What IBM had was something called the Insurance Application Architecture, the IAA. They assigned an IAA consultant to work with us, so he came in one day to do an overview. I was really curious how they could have a model that any Insurance company could use, as I had done some models and they certainly looked specific to my company to me.

So, he starts his presentation and gets into some detail and it dawned on me that this was an example of a concept that I had recently been learning about: the IAA was a “meta-model”, a general model that can be used for modeling something specific. In fact, it was a meta-meta-model, maybe more “meta’s” than that. So, about 20 minutes in to the presentation, it burst from my lips, “aahh, this is a meta-model…”. My VP and others in the room said “what?”, but the consultant said “Yes, you’re right.”

Apparently they had worked with about 20 client companies for many months in Brussels to come up with a data model and a functional decomposition. The decomposition looked reasonable and had business words, but the data model entities and attributes seemed to have no insurance words. That’s because they had seriously generalized the “things of interest”, such that a major part of the model would actually be used to define the business, with the rest used to actually capture data. I remember one subject area was Agreements, which would be used to define your insurance products, but could also be used for any legal agreement. This was also the first time I saw what we now know as a Party model, a meta-model for defining customers and other business participants.

Because the model was general, it came with syntax for defining what data would be needed to do things like define a Product. It was at this point that my brain rebelled. I am a pretty Conceptual/Logical thinking kind-of-guy, but this was just too much. However, my co-worker who did tech support for IEF grokked it completely, so he and the consultant would sit in a room and spin out this syntax, and nod at each other and agree on stuff, and I would be in the room nodding like I knew what the f&%k was going on. I was worried, what if I never “got it”?

Skipping ahead a bit, IAA started being used for that Agent System. I had done data modeling with the business people about this just before IAA had been bought. The consultant and our IEF guy started holding meetings with these business people, and I joined the first meeting a few hours in, and the whiteboards and flip-charts were filled with the IAA syntax. One of the business people, a great guy, turned to me and said “I like your models better”. I felt a whole lot better.

In the long run, I need not have worried, as management totally blew the implementation of IAA. It was a general model, and very big, so it was recommended you start with a basic subject area like Party, implement it, and start cutting your existing systems over to getting Customer data from the new Party system. It did not work well for doing a specific line of business or function, because you would have to use almost the whole model right away.

However, after the evaluation and purchase of IAA, our VP sat down with the IAA consultant and said “OK, start with Agent Management and Compensation”. The rest of us meet with the consultant directly after this, and he swears us to secrecy before telling us that he recommended, even pleaded with the VP, not to do this, but she was adamant. So, that was that, and those IAA modeling and syntax sessions began.

Now, doing this kind of modeling and analysis should probably have involved me, but I managed to avoid it somehow; it just seemed to default to the other two folks. There were other projects going on, as usual, so they kept me occupied as well. I know that the IAA project did get as far as our tech guy writing some Action Diagrams and database generation in IEF to support Party and Agreement, but it stalled. I was still keeping up to date on the progress, as I still wanted to be able to learn and do this if it was going to stick around. What became apparent to me was that doing analysis using a meta-model was next to impossible. If the model did not have business words like Customer or Policy, the business people would not get it and could never validate it. My recommendation was to do the analysis using logical models to capture the business requirement. It was true that systems built from such models would need to be changed whenever the business changed, and the power of the meta-model is that you make such a change by changing data, not code. …But people can’t think ‘meta’, so do the analysis logically, get it approved, then generalize it to the meta-model for use in subsequent development. My belief was that IBM should be able to build something that would take a logical model and generate the IAA syntax to use in the IAA model.
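Here is a toy contrast of those two levels of modeling (entity and type names are mine, not actual IAA): the logical model uses business words an SME can validate, while the generalized style records the same fact as typed Parties, Roles and Agreements, so a new kind of agreement becomes new data rather than new code.

```python
# A toy contrast (entity and type names invented, not actual IAA) between the
# two levels of modeling discussed above. The logical model uses business words
# people can validate; the generalized "meta" style records the same fact as
# typed Parties, Roles, and Agreements, so adding a new kind of agreement is
# new data rather than a new class.
from dataclasses import dataclass

# --- Logical model: business words, easy for an SME to confirm ---------------
@dataclass
class Customer:
    name: str

@dataclass
class Policy:
    policy_no: str
    owner: Customer

# --- Generalized (meta) style: the same fact, expressed through type codes ---
@dataclass
class Party:
    name: str

@dataclass
class Agreement:
    agreement_type: str          # "INSURANCE_POLICY", "AGENT_CONTRACT", ...
    identifier: str

@dataclass
class PartyAgreementRole:
    party: Party
    agreement: Agreement
    role: str                    # "OWNER", "INSURED", "AGENT", ...

if __name__ == "__main__":
    logical = Policy("P001", Customer("Mary Smith"))
    meta = PartyAgreementRole(Party("Mary Smith"),
                              Agreement("INSURANCE_POLICY", "P001"), "OWNER")
    print(logical)
    print(meta)
```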

Unfortunately (again) for IAA and IEF, the overall desire to replace the whole CLASSIC system had reached fever pitch. One day, we heard from the VP of Individual Insurance that a package had been bought to replace CLASSIC and would also do Agent Management and such. So, that was the end of IAA.

Tuesday, November 24, 2009

Memories of IT - mid 90's - Life in a slightly white tower

So I am ensconced in Regina, and IEF usage has plateaued, I am still in the R&D Technical Development area, and I start taking on various projects and research.

One of the projects was to pick a new Project Management tool. Crown had been using some older products but, as in other cases, a lot of the new people had been using MS Project, so the result was pretty much pre-ordained. I also looked at back-end tools for doing a PMO, merging of projects and resource management, all from partner companies. We did not buy any of those, but they would have been really useful on some future projects.

Client-Server, as I mentioned earlier, was getting big, and the merits of 2-tier versus n-tier were already coming up. Two-tier was quick and easy, but it was never clear where the business logic ran, so three- to N-tier introduced intermediate platforms, between the screen and the database, where the logic parts of the app ran. I ended up writing a short methodology for client-server development (wish I had kept that one too). The thrust was to do your analysis and design so that it was produced in logical parts that could be implemented on different styles of CS, mainly 2-tier versus n-tier.

Our group also did some internal consulting to project teams; mostly I would help do data models on projects. We were a group of about 8 people with different sets of expertise. Pretty much anything new in IT would be evaluated by us, sometimes with a project team doing a trial. I don’t think we ever repeated the evaluation disaster of CS tools.

And so the development cycle I have described before had come around to look at the Individual Insurance area of the company. It used a system called CLASSIC. CL stood for Crown Life; I don’t know what the rest meant. It was the first online app in the company, developed in the 1970’s in PL/I and IMS. It got bigger and bigger over time. I recall that running a system test took forever and cost a lot of money to run at Crowntek, so the VP in charge said you only got to run it once, and then you implemented. I never worked on the system; this was what I heard from people who did. The actual subject at hand was Agent Management and Compensation, which was sort of an add-on to CLASSIC and did not work well; at least two previous projects had tried to fix this and failed.

One day, our Tech Dev VP came back from a meeting with IBM, and announced we were going to buy a model from IBM for Insurance Systems that would help us with this and other problems. I know that there was also a latent desire to replace CLASSIC itself, which drove this choice. This was actually a second chance for IEF, as the model could be delivered in IEF, so that got me involved.

What IBM had was something called the Insurance Application Architecture, the IAA. ...more next time.

Monday, November 23, 2009

Memories of IT - mid-90's - Lessons learned about IT Standards in a Company

So, I was now in Regina. Over the period of the IEF saga, I had moved from the Corporate Systems area to the Technical Development (R&D) area, which included methodology support (me!), hardware and software standards and new tech evaluations, and IT Training was in there too. This is where I watched some things happen and learned some useful lessons.

The first one was around those Client-Server tools I had mentioned before. There were a couple of leading tools that people wanted to look at and use, so TD was assigned the job of picking one…but some IT teams were clamoring to use a tool now to get something done (all those new managers wanting quick success). So, our manager and these managers decided that one team would try out one of the tools, and another team would try out the other tool, on real projects for 3 or 4 months… then they would all get together and decide which one had worked the best and that would be selected as our standard CS tool.

Can you see the problem here? Both teams learned to use the tool they had and built a system, and were happy. So when the decision time came, each team claimed that the tool they had used had to be better than the other one, because they had delivered something with it. An underlying driver was that if one tool was chosen over the other, then the team that had used the ‘losing’ tool was going to have to re-develop their system. Well, no one wants to do that, and any attempt to force a choice was denigrated as central TD not being flexible enough to meet the needs of each team.

Lessons learned:

1) Always have all reviewers of the products being reviewed use ALL the products, so they can make a comparison; otherwise, they will prefer the tool they have used (if it is basically adequate for the task). There is a big example of this: back when WordPerfect was still a viable competitor to Word, one group of WordPerfect users was asked to review it versus Word, and a group of Word users was asked to review Word versus WordPerfect. The (predictable) result? Each group preferred the tool they had been using. You could have shown them an unbiased review that showed that, at a point in time, one product was better than the other (until the next release of one of them came out); they would still prefer what they had used. That’s human nature.

2) IT Standards, that list of technologies and products that the company says it uses, is not useful for its specific content, but for measuring how much variance from the standard exists at a point in time… because there will be non-standard stuff being used whenever a standard is ‘set’, and powerful managers (who make the money for the company) will get exceptions from the standard if that’s what they want. Once you accept that the standards will never be absolute, you can use them to advise people how much more money it will cost, or how much less support they will get, if they buy something that is non-standard. If that information is provided, you are helping those people understand the impact of their choice; they may still go non-standard, but with their eyes open, and they can’t bitch later about lower support and such.

Friday, November 20, 2009

Memories of IT - 90's - why did we stop using IEF?

Why did IEF development stop at Crown? It was the move; the majority of middle management and a lot of senior management did not move to Regina. As I have said, this gave the company a chance to downsize, so the number of staff after the move was 20 or 25% less than before the move…but that still meant a lot of people had to be hired. So, all the original stakeholders who had supported IEF were gone, and the new management had no stake in its continued use.

And IEF was really susceptible to this situation, because overall it was used in support of a strategic redevelopment of all our systems; the original business case pictured this happening over the course of seven years. That’s a strategy.

But if you are a new manager in a new job in a new company, you don’t want to fall in line with a seven-year strategy created by people who are no longer around; you want to deliver quick success and show that you are worth having around. That’s OK, and is acceptable in normal turn-over at a company; but with 60 to 70% of the managers all new at the same time, the IEM/IEF strategy (no matter how worthwhile) was dead.

I made a last stab at keeping it alive, writing a white-paper and doing presentations and such. I received great compliments on the white-paper (I wish I had kept a copy), but the response overall was “I can’t do that now”. I even presented how to use IEF on a more tactical level, just doing any project without an overall architecture. James Martin had figured out that people wanted this and developed a one-project-only version of IEM called RAD (Rapid Application Development), and the promise of this version had helped sell IEM in the first place.

Unfortunately(!), this was the point in IT history when Client-Server development tools appeared, especially 2-tier tools that had you paint a screen, run it on a PC and it went directly against a database on a server. These tools did a whole lot less than IEF, but that also made them a whole lot cheaper, so when I would meet with a Project Manager about IEF, he or she would say SQLWindows was cheaper (and they had used it in their last job), so that’s what I am using, sorry.

And so ended the real saga of IEF at Crown. We kept our hands in it because of the Canada Pensions system, and I did get to go to some IEF user conferences. Such conferences are always in nice places, like Disneyworld or Vegas etc., so you take those perks when you can…but it was at one of those conferences, as described earlier, that TI announced it was selling IEF …and that was the real end.

Thursday, November 19, 2009

Memories of IT - 1991 - Regina Bound...

The company was in trouble, but who really knew that? I found over the years that I worked at Crown that my friends, family, and anyone I met had never heard of the company. It did not advertise to the public; it marketed through agents and brokers. It also meant that though it was apparently the 18th largest insurance company in North America, I never saw much about it in the business newspapers.

Then, one day in 1991, the announcement came: a company from Saskatchewan (holding company for the richest family out there) had bought up control of Crown Life, and was going to move the company to Regina. That was news; it even came up in the Ontario Parliament question period, the opposition blaming the government for losing business/jobs from the province.

If you want to downsize your company staff, I can think of no better way to do it than to pick up your company and move it 1000 miles, especially from the biggest city in the country to a relative hinterland. Current staff was offered the chance to move with the company, all expenses paid, or stay to a certain date and get a good-sized settlement. This was 1991, and Ontario was in a recession, so even if the settlement was good, opportunities for a new job were bleak. So, I decided to go to Regina. Looking ahead a bit, I can tell you that I and my family lived in Regina for 4 years. (A lot of people did not move, usually citing love of Toronto, wanting to stay near family, and many other good reasons.)

I always tell people (truthfully) that I do not regret moving to Regina, but neither do I regret leaving Regina after those 4 years. I had grown up in Toronto and lived and worked in the Toronto area ever since. So, in the posts I have written so far, the place where all this happened did not really affect what happened: it was a big city, I commuted, I lived in the suburbs, like many millions of people. Regina was different, in both life and work, and that difference will come out in some of my future posts.

But it was over a year before I actually moved; a group of ‘pioneers’ went first to get started, using temporary space while a new head office was built and such. In the meantime, that Canada Pensions IEF project was still underway, had delivered some of the first parts of the system (structured by Business Areas), but it would not be done before the business unit made its move to Regina, and no one on the project team was going to move (they all felt skill in IEF was marketable, and I think that was true for a while). The unit management persuaded the team to keep working in Toronto after the move till the system was done, and they delivered a good system.

By that time, I was in Regina, about the only person with IEF exposure who made the move. One thing that I worked on first was a program by Texas Instruments for sponsoring education on IEF at universities, and we got the program set up for the University of Regina. I actually went out and spoke about IEF, systems development and Crown Life to a senior class. I didn’t think I made any impression, but apparently some students learned IEF.

I know this because the Canada Pensions IEF project did finish, and the Toronto team members all went on their way. So, the business unit had to get people in Regina to support and enhance the system, and usually new people need time to learn about a system before they can be productive; but, one thing the unit did was hire some UofR grads who had learned IEF, and because they could read the Data Model and Action Diagrams, they were productive almost immediately. It proved that systems generated from commonly known modeling techniques were a whole lot easier to maintain and enhance.

Unfortunately (and I feel I have to say that a lot), that Canada Pensions system was the first and last IEF system built at Crown Life…

Wednesday, November 18, 2009

Memories - early 90's - How not to do ISPs, and other stuff...

The previous post mentioned one project I worked on, and it was probably one of several I may have been assigned to. If you work in a typical company, and you are not on a big development project, then you usually have more than one project on the go at any one time.

My focus was still around IEM/IEF. I would like to say it all went smoothly, but how likely is that... The company was divided up into about 12 business units at the time, basically a combination of product line and geography, like Canadian Life or U.S. Pensions. As a result, the IEM approach was to do an ISP for each unit, plus one for Corporate & Investments. I ran the ISP for Corporate as a trial of the process, and I and a James Martin consultant did manage to get the senior VP and his reports in a room for a day and do some models and prioritizations. That senior VP eventually became President, and he always remembered who I was whenever we met (it was not a huge company, so it was possible to see senior management around now and then).

Meanwhile, one business unit was chomping at the bit to go. It was Canada Pensions, and it was the part of the business whose time for a new system had come. I can't recall if they looked for packages first, or if they really did an ISP, but they were soon off doing their data model and function decomposition, and got down to doing Action Diagrams. They sent people to IEF training and brought on a few experienced consultants.

Then the "while" I mentioned in my last post came to an end...

Tuesday, November 17, 2009

Memories of IT - meanwhile, back in the (insurance) business

Ok, the last several posts have proved to be an arc on the one topic of IEM and IEF, like several linked episodes of a TV show. From start to finish, the arc covers several years, from the start of the project to pick IEM/IEF in 1989, to me changing jobs in 1997. A lot of other things happened, plus I have more on how we used IEF in the first years after we bought it.

A lot of it is better understood in the context of the state of the company, which had been struggling. Crown Life was a typical Life and Health company, actually created by an Act of Parliament in 1901. Since then it had grown, entered and sometimes left foreign markets, and added investment/pension products. Like a lot of companies in the 30 to 40 years after World War 2, it made money pretty much independently of any specific things management did or changed over the years; the basic business model was still working. So, Crown Life was a pleasant place to work, often referred to as the "Crown Life Country Club". Each president had come up from the actuarial ranks, and there were about 15 levels of management possible between worker and president.

However, the business environment for insurance soured in the 80's. I am not going to recount why, but stuff happens, and profits sank. Crown Life was a stock company with a few primary owners, and they eventually (mid-80's) sacked the last of the old-style presidents and brought in a turn-around guy named Bob Bandeen. He had done the turn-around at CN, so his arrival was momentous. After the usual few months of looking around, he started squashing those 15 levels down to about five, so almost every day you would see some middle manager heading out the door with a box of stuff in his arms and a shocked look on his face. A noted financial writer of the time produced a book about the Canadian insurance industry, and it had a chapter for each of the main companies; the chapter on Crown Life was called The Abattoir(!).

Amazingly, IT/MIS suffered very little, so in a strange way I was in a protected bubble of business as usual; I had to read that book to find out how bad it had been.

But eventually, Bob finished squashing and moved on. Crown Life sort of merged with Extendicare and created an overall company called Crownx (no typo); it was going to use Crowntek as a basis for getting further into IT services as a new business, selling PCs to companies, and other not-well-thought-out stuff that sucked money from Crown Life and Extendicare until it was abandoned.

That left Crown Life in precarious shape. I had a bit of insight, as one thing I worked on was cash flow reporting and investment management, which tried to predict how much actual cash was coming in from premiums and investments, and how much could be reinvested or kept for claims. What was apparent was that the flows were almost always mismatched and the company was always short of actual cash. My absolute favourite moment was when we sold the head office building on Bloor Street in Toronto to some real estate company and leased it back, to get a cash injection into the company. I still shake my head over that one when I think of it.

I moved on to other stuff (like IEM/IEF) so my direct knowledge of company problems was reduced, and I suppose this and other tricks kept things going for a while, but not a long while. I will return to this topic, and the end of that "while", in a future post.

Monday, November 16, 2009

Memories of IT - the death of IEF

So, why isn't everyone using IEF today, and if you are of a certain age, why have you not even heard of IEF? CASE tools were a big thing for a while, which means many people liked them and many other people did not. The latter were usually put off by the rigor; they thought they were giving up flexibility. Programmers could be put off by the fear that it replaced what they did, whereas what it did was just move programming up to logical Action Diagrams, just as 3GLs had been a move up from assembler coding.

But two things happened that really killed IEF and CASE as a whole: IBM's AD-Cycle, and ERP systems like SAP.

AD-Cycle: As I said, CASE tools were a big thing in the years around 1990. IEF was only one of many tools you could buy, but the vast majority of the tools only did part of the job. As described in an earlier post, there were modeling tools that analysts would use but that went no further; these were called Upper-CASE. Other tools existed that would generate code from some kind of input; these were called Lower-CASE. The Upper and Lower referred to the parts of the lifecycle the tools covered when viewed as a waterfall that went from high (initiate, analyze) to low (design, code, test). After a while, the vendor of one kind of tool would partner with the vendor of the other kind, and both would trumpet that you could now do the whole life-cycle if you used their tools together.

Unfortunately, there were so many tools that you could not just pick any two you liked; if you picked one, then only so many other tools would work with it. I suppose somebody thought this was a huge problem or opportunity, because IBM (still the big player in the largely still-mainframe world of the time) decided they had the solution.

You see, each Upper-CASE tool had some kind of repository or encyclopedia to store its models, especially if you created them on a PC, after which you would upload them to one repository that all modelers would have access to. Those repositories, like the tools, were proprietary to the vendor. IBM decided it would create one common repository that all tools could use, so you could then use any combination of Upper and Lower you wanted. Add some services and IBM's own tools, and the whole thing was presented to the world as AD-Cycle. Immediately, a whole lot of the most popular tools signed on to the program.

Remember, IEF wasn't Upper or Lower, it was the whole deal, which was known as Integrated-CASE. Texas Instruments looked at AD-Cycle and said: we like our own repository just fine and we don't interface to any other tools, so you folks carry on, and when you have something usable we will consider it. (I am trying to remember if IEW did sign on to AD-Cycle; I think it did, but I don't recall why.)

The problem was that the AD-Cycle repository was a disaster. Real customers who bought it got something that was huge, slow and not very functional. News got around and sales tanked but, even worse, companies who had not used any CASE tools yet avoided all CASE tools, not just the AD-Cycle repository. The whole tools segment was hit, and this hit home to me when I was attending my second IEF user conference, and the main TI guy for IEF walked up to the microphone and announced that TI had sold IEF to a relatively unknown software tool company. TI was a hardware company, and they just decided a failing software segment was not for them anymore. The new vendor changed the name but eventually was bought up, and up and up, until IEF disappeared into the maw of Computer Associates. I changed companies not too much later (for other reasons), so that was the last I saw of IEF.

But IEF did carry on, and I think some version of it may still be in use by its original customers, but that was about it.

What helped to finally bury it was the parallel arrival of the big ERP systems like SAP. Their pitch to management was that you could buy SAP and not have to develop anything. So, if you stopped in-house development pretty much cold, why would you buy an admittedly pricey I-CASE tool that was just for development? Well, you wouldn't, and that was that.

Saturday, November 14, 2009

Memories of IT - IEF and Action Diagrams

So, you have a data model detailed with all entities, relationships and attributes, and a set of elementary processes that mainly CRUD all that data. The specifics of those CRUD functions would then be detailed using a rigorous logic called Action Diagrams (ADs). You would define your input data, what the process would do with the data, and the output. The rigor of this logic was that all data used had to be in the data model, and the AD would use views of the data model to define input, output, and of course CRUDs of the data in the model. The AD also enforced the rules in the data model, such as the example in the previous post; if you did not specify it correctly, IEF gave you an error. IEF supported building the whole AD by letting you select logic phrases from only what was actually valid at the point where you were defining the logic. This ensured that when you had finished the AD logic for a process, IEF could generate code free of execution errors. You could still define the wrong logic for the process, but it would run.
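To make the flavour of that concrete, here is a minimal sketch in Python rather than actual Action Diagram notation; the CUSTOMER entity, the views, and the add_customer process are hypothetical examples of mine, not anything from a real IEF model. The point is that the process logic works only against import and export views, and only does the CRUD the model allows.

    from dataclasses import dataclass, field

    # Hypothetical slice of a data model: one entity type, CUSTOMER.
    @dataclass
    class Customer:
        customer_id: int
        name: str

    # The data a generated system would manage, keyed by identifier.
    @dataclass
    class CustomerStore:
        customers: dict = field(default_factory=dict)

    def add_customer(store: CustomerStore, import_view: dict) -> dict:
        """Elementary process: CREATE a CUSTOMER from an import view and
        return an export view. All data referenced must exist in the model."""
        cid, name = import_view["customer_id"], import_view["name"]
        if cid in store.customers:                   # model rule: identifier must be unique
            return {"status": "error", "reason": "CUSTOMER already exists"}
        store.customers[cid] = Customer(cid, name)   # the C in CRUD
        return {"status": "ok", "customer_id": cid}  # export view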

Last point: IEM/IEF defined a Process as a logical activity. When you wanted to use that process, you would create a Procedure, either an online procedure or a batch procedure, and these would use processes as needed. The key thing was that a process, defined once, could be used as many times as needed within all the procedures: an online transaction for Add Customer, say, or a batch program that read a file of data and used Add Customer as many times as needed.
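Continuing the hypothetical sketch above (same made-up names), the same process can be invoked by an "online" procedure handling one screen's worth of input and by a "batch" procedure looping over a file of records:

    def online_add_customer(store: CustomerStore, screen_input: dict) -> dict:
        # "Online procedure": one screen interaction drives one use of the process.
        return add_customer(store, screen_input)

    def batch_add_customers(store: CustomerStore, records: list) -> list:
        # "Batch procedure": the same process is reused once per input record.
        return [add_customer(store, rec) for rec in records]

    store = CustomerStore()
    print(online_add_customer(store, {"customer_id": 1, "name": "Acme Ltd"}))
    print(batch_add_customers(store, [{"customer_id": 2, "name": "Bee Corp"},
                                      {"customer_id": 1, "name": "Duplicate"}]))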

So, you have Procedures that use Processes that use the Data Model, all in IEF. The tool has assisted you in specifying all these things to avoid execution errors, and it also had other validation capabilities that would ensure that all the pieces you have work together. Once all the pieces had been validated, you selected Code Gen and IEF created all the code for your application: DDL for creating the database, code for executing processes and procedures, and online transactions or job control language for the procedures. Send all that to the various compilers for the languages you have generated, and out comes your executable. No hand-coding was needed; you could literally never look at the generated code, or even throw it away.

IEF first generated COBOL, DB2 and CICS for mainframe systems. As client-server emerged in the early 90's, they added generation of C and Oracle targeting Unix, to run on PCs and servers.

Next time: So why aren't we all using IEF?

Wednesday, November 11, 2009

Memories of IT - early 90's - IEM and Business Area Analysis

So, you have enough of a business model in IEF to divide the enterprise into cohesive Business Areas (BAs), which now need to be detailed enough to be a complete requirement for systems. The planning mentioned previously will indicate which Business Area to do first. Other than the limits imposed by the natural build sequence (data created in one BA needs to be built before other BAs can use it), you could do the work on some BAs in parallel if they are not directly dependent. In IEM, this step was called Business Area Analysis (BAA). This was done mainly by parallel decomposition of the high-level data and function models.

Most people would not think of a Data Model as hierarchical, as it usually happens in reverse. You start putting Entity Types on a page, connecting them up with Relationship lines, and soon you have a lot of boxes and the page is filling up. Design studies tell us that the optimum number of objects to draw on a page is seven plus or minus two, or the human brain doesn't comprehend it well. Fewer than 5 is usually not a problem, if not that useful, but more than 9 is.

What you will see is that some entity types have many others hanging off them; these are often called central entity types, like Customer, Product, Employee. Each of these is the central entity type of a "Subject Area", usually named as the plural of the central entity, so the Subject Areas are Customers, Products, Employees. Group all your entity types this way and you have a two-level hierarchy of data. IEF supported this grouping into Subject Areas, and then further groupings of Subject Areas into higher Subject Areas, so a multilevel hierarchy results.
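As an illustration only (these subject areas and entity type names are made up, not from any real model), the grouping amounts to a simple hierarchy:

    # Hypothetical two-level data hierarchy: Subject Areas grouping entity types,
    # each Subject Area named as the plural of its central entity type.
    subject_areas = {
        "Customers": ["Customer", "Customer Address", "Customer Agreement"],
        "Products":  ["Product", "Product Feature", "Product Price"],
        "Employees": ["Employee", "Position", "Position Assignment"],
    }

    # Further grouping of Subject Areas gives a multilevel hierarchy.
    higher_subject_areas = {
        "Parties":   ["Customers", "Employees"],
        "Offerings": ["Products"],
    }

    for area, members in higher_subject_areas.items():
        for member in members:
            print(area, "->", member, "->", subject_areas[member])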

Meanwhile, you have a level or two of functions, more formally known as a Functional Decomposition.

Decomposing an enterprise is analysis in its purest form: understanding a thing by examining its pieces and how they relate to each other. You look at a thing or function and determine the seven plus or minus two sub-things that comprise it.

IEM defined a functional decomposition as composed of two types of "things", Functions and Processes. A Function is an activity that is continuous, with no obvious start and end, like Marketing. A Process, then, is an activity with a defined start and end, like Create New Marketing Program. So, the decomposition will start with some levels of functions, then each path of decomposition will reach a point where the next level down is a group of Processes, and the remaining levels of that path will be processes.

Functional decomposition often gets criticized, and it can be misused. The most common misuse is thinking that a functional decomposition is the same as the Org Chart; it is not. The best way to realize this is to think about how many reorganizations you have been through (probably lots), and how many times they actually changed the work you did (almost never).

If determining the decomposition is difficult, some advanced IEM methods recommended parallel decomposition, meaning in parallel with the data model, so the function Marketing is parallel with the subject area Markets. Given this match, you decompose both models together. When you get to Processes, they will be verb-noun, where the noun is an entity or attribute in the data model.

All this decomposition is done to get to Elementary Processes, which answers the question "how do I know when to stop decomposing?". Each process will define how data in the data model is managed. A good process is one that manages data and leaves the data model in a valid state. So, if a process creates an occurrence of an entity and it has a mandatory relationship to another entity, then the process has to create that one too, otherwise the state of the data is invalid. A process that creates only the first entity is sub-elementary, and you have decomposed too far.
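Here is a tiny sketch of that rule, again in Python with made-up entities (say every ORDER must have at least one ORDER LINE): the elementary process creates both and leaves the data valid, while the sub-elementary one does not.

    from dataclasses import dataclass

    @dataclass
    class Order:
        order_id: int

    @dataclass
    class OrderLine:
        order_id: int
        product: str
        quantity: int

    def create_order(orders: dict, order_lines: dict, order_id: int, first_line: dict) -> None:
        """Elementary process: creates the ORDER *and* its mandatory first
        ORDER LINE, leaving the data model in a valid state."""
        orders[order_id] = Order(order_id)
        order_lines.setdefault(order_id, []).append(
            OrderLine(order_id, first_line["product"], first_line["quantity"]))

    def create_order_only(orders: dict, order_id: int) -> None:
        """Sub-elementary: creates the ORDER but not its mandatory ORDER LINE,
        leaving the data invalid (a sign the decomposition went too far)."""
        orders[order_id] = Order(order_id)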

IEF supported this functional decomposition and enforced some rules, like Functions can be composed of Functions or Processes, but Processes only decompose into other Processes; and it had you indicate which processes you believed to be Elementary. The interesting thing is that all of this decomposition is done just to get to those Elementary Processes; once they are all defined, you don't need the decomp any more.

(Note: this definition of process is not the same as that for Business Process Modeling or Re-Engineering.)

Next Time...Action Diagrams

Monday, November 02, 2009

Memories of IT - into the 90's - What was IEF, anyway?

The decade turns...

What was IEF, anyway?


It was automated Information Engineering. That methodology was based on information across a whole enterprise, so its first step was to create the Information Strategy Plan (ISP) for a complete enterprise. The core task was to create a high-level model of the enterprise's functions and data (remember, function + data = information). This was indeed high-level: the data defined was Customers, Products, Materials, Staff, and such. The functions were the first to second levels of a functional decomposition, usually based on the main activities of the business: Define and Market Product, Acquire Material, Make Product, Sell Product, and supporting functions like Hire Staff and Create Financial Statements.

The functions were always defined as doing something with data. Given this perspective, you could create the CRUD matrix, with Data items on one axis and Functions on the other, where each matrix cell contains the letters for Create, Read, Update, or Delete…or is blank. Given this matrix, you can now do Affinity Analysis, which is a process of identifying what groups of functions manage a Data Item. I did this manually back in an earlier project.
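For illustration, a CRUD matrix is easy to picture as a simple lookup structure. This is my own sketch in Python with invented functions and data items, not anything from an actual ISP:

    # Hypothetical CRUD matrix: function -> {data item: CRUD letters}.
    crud_matrix = {
        "Market Product":   {"Product": "CRU", "Customer": "R"},
        "Sell Product":     {"Order": "C", "Customer": "RU", "Product": "R"},
        "Acquire Material": {"Material": "CRU", "Supplier": "CR"},
        "Hire Staff":       {"Employee": "CRUD"},
    }

    def cell(function: str, data_item: str) -> str:
        """Return the CRUD letters for one cell of the matrix, or blank."""
        return crud_matrix.get(function, {}).get(data_item, "")

    print(cell("Sell Product", "Customer"))  # RU
    print(cell("Hire Staff", "Product"))     # blank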

But IEF captured the Data Model and the Function Model, and the CRUD matrix; then you initiated an automated affinity analysis process, and out came your restructured matrix. The result is a set of clusters of functions managing a set of data, which are de-coupled from each other. Each cluster was then used as the definition of a Business Area; a typical enterprise would have 5 to 9 Business Areas defined.
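Continuing that sketch, a deliberately naive version of the affinity analysis idea (my simplification, not TI's actual algorithm) just clusters functions around the data they create; each resulting cluster of functions and its data is a candidate Business Area:

    # Naive affinity rule: group functions by the data items they Create;
    # functions that only Read or Update another area's data stay decoupled from it.
    clusters: dict = {}
    for function, usage in crud_matrix.items():
        created = frozenset(item for item, letters in usage.items() if "C" in letters)
        clusters.setdefault(created, []).append(function)

    # Each cluster is a candidate Business Area.
    for data_items, functions in clusters.items():
        print(sorted(data_items), "->", functions)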

This is your Information Architecture. The IE Methodology (IEM) then provided a series of evaluation and analysis tasks to determine how well current systems support each Business Area, what the value of automating a Business Area would be, and such… from which you would create a plan, the Information Strategy Plan, for moving from current systems to a new set of Business Area-focused systems that would eliminate silos, data duplication, etc.

So, now you were ready for Business Area Analysis...

Friday, October 30, 2009

Memories of not IT

It has been pointed out to me by one of my offspring that he and his brothers have not been mentioned in these posts. He found them because they are being replicated over on Facebook as Notes, and he thinks by reading them he might understand what I have been doing for 30 years that put a roof over our heads and food on the table.

Unless requested, I will leave names out of this. I did marry my lovely wife in 1979, and sons arrived in 1981, 1986, and 1989. My wife claims she remembers nothing about the 80's except diapers and formula (I did my part too, so sleep-deprivation was equally shared).

It might be useful for these posts to give some geographical context, because it does start changing later on. I grew up in what was then the suburbs of Toronto, a place called Etobicoke (the "ke" is silent). I went to the University of Toronto (which I think I mentioned). My employer through the 80's was located in the "middle" of Toronto, at Yonge and Bloor. When I started, I could take the bus and subway to get there; then we moved further out, so a car to the subway was needed; then we moved much farther out in order to afford a house in the time of 15% mortgage rates, so that meant driving/carpooling, or a commuter (GO) train and the subway. The drive took an hour when we first bought our house; it was up to two hours ten years later.

Overall, the sons came along in a fairly stable time in terms of where we lived and where I worked. The company had a Children's Christmas Party each year, right in the head office, so children could be taken up the elevator to see daddy's desk. Its state of organization must have made an impression, because I started getting presents like mouse pads with the Tasmanian Devil on them, whirling around leaving destruction in his path.

But things would change and become quite interesting, as you will see in future posts...

Wednesday, October 28, 2009

Memories of IT - 1990 - IEW vs. IEF

1990, and the promise of CASE was huge...

We have two products, IEW and IEF, to choose between.

Memory and perception can be funny things, so when it comes to IEW (Information Engineering Workbench), any corrections from the reading public are especially welcome.

First off, I recall the vendor company's name was Knowledgeware. Its president or CEO was one Fran Tarkenton, indeed the famous NFL quarterback. I never did figure out what he really was to the company: was he a closet geek who really was involved in the product? Was this where he invested his NFL salary? Was he a figurehead? Comments welcome!

The other angle was that Knowledgeware was supposed to be very closely related to James Martin, but in what way I can't be sure. The implication was that if you were really doing Information Engineering, then Knowledgeware and IEW just had to be your choice.

So it surprised me no end that when I saw the product demonstrated, its functional modeling was based squarely on Data Flow Diagrams (DFDs). It might seem like an esoteric issue now, but if you had followed the methodology advancements of the 1980's, you would have seen that DFDs had featured strongly in Structured Analysis and Design, but they had fallen into disfavor with the rise of Information or Data-centric approaches using Data Modeling. In these approaches, DFDs with data flowing around and many Files were thought to lead to bad data design, silos and all that. IEM (the methodology) did not use DFDs for functional modeling, it used a straight Functional Decomposition; but you could probably have used DFDs without breaking any methodology rules.

On the other hand, there is IEF (Information Engineering Facility) from the software division of Texas Instruments. TI is really an engineering, hardware company, but they backed into selling a CASE tool because they had bought into Information Engineering for their own information systems and wanted a tool to support it, so they built one. IEF was automated IEM, for sure, but with a focus on the parts of the methodology that led to producing code; any diagrams that didn't lead to code were not automated. One of these was, in fact, DFDs, which IEM did use in a limited way for documenting current systems, but no more, so TI kept them out of IEF.

In the end, it came down to code generation; both tools generated code, but IEF was the more complete and straightforward; IEW was missing some parts and impressed our technical people less. So, IEF emerged the winner.

Next time: What was IEF, anyway?

Tuesday, October 27, 2009

Memories of IT - 1989 - Methodologies and CASE tools

1989...

As per my previous post, we have a couple of methodologies to evaluate. PRIDE was all about Information Resource Management; IEM was, well, about Information Engineering. If I were to line them up against each other today, I am not sure there would be much difference between the two, except we knew that IEM had two popular CASE tools supporting it, so PRIDE never really stood a chance.

So, IEM won. This was James Martin's baby, through his latest organization, James Martin & Associates. At the time, they had a Canadian office that we worked with, so I don't know how many degrees of separation there were between myself and James, but it wasn't close. He was doing tours at that point, charging large sums; when he did come through Toronto, only VPs of my company got to go. I recall he was already moving on to new topics, like Enterprise Engineering and Value Flows...

Meanwhile, back on the project, we have IEM, so now we look at supporting CASE tools... but let's talk about CASE first: Computer Assisted Software/System Engineering. There were actually a few different angles to it. It had started with the modeling/diagramming tools I have mentioned before. Because they supported tasks in the first few phases of the SDLC, they were tagged as Upper-CASE, meaning the diagrams were good but it stopped there. At some point, other vendors created code generator products which, because coding happens later in the SDLC, were tagged as Lower-CASE; then vendors of both types of tools would hook up, so that Upper-CASE diagrams could be used (somehow) as input to the Lower-CASE tools to tell them what code to generate.

I never saw a Lower-CASE tool up close, so I never knew how they worked independently, or how interfaces with Upper-CASE tools really worked. I never did have to know, because we were looking at the third angle: Integrated CASE tools (I-CASE, long before iPods or other such stuff). This was a product that did the whole SDLC, from first diagrams to final code gen and testing, and there were two main players: the Information Engineering Workbench (IEW) and the Information Engineering Facility (IEF).

Next time, comparing IEW and IEF...

Wednesday, October 21, 2009

Memories of IT (late 80's) - new Methodologies... and tools?

It's always amazing how much you don't know, or even worse, what you don't know you don't know. A new analyst joined us for the new methodology project, and we were discussing various tools for modeling and analysis, and he informed me, to my initial disbelief, that there were tools out there that could generate complete systems from Data and Function models. I thought I was pretty good at keeping up with trends, but this had escaped me, so it was time to catch up.

This happened within the context of our new development methodology project, which also covered the CASE tools that might support such a methodology. The approach was pretty good: find the methodology that best met our needs, and then pick a tool that best supported that methodology. I was the lead analyst, charged with gathering requirements that would be used for RFPs and detailed evaluation. Key IT people from each unit participated in requirements sessions. I know we produced a good, long list, but the details have faded from memory. This group was not working in "controlled isolation", so I am sure that what any or all of us knew about existing products, and what we saw looking ahead to tools, influenced the results. I know I was already looking for candidate products, and reading up on all of them.

What emerged from the requirements list was a desire for a methodology that helped us deliver low-maintenance systems, and wouldn't it be nice if a tool automated that methodology to speed up the process a little. Of the dozen or so methodologies I found (pre-Web, so the big magazines like Computerworld were a key source), there were only a few that matched up in any real way. One was PRIDE, which is still out there, and the other was the Information Engineering Methodology (IEM).

Next time, looking at the two methodologies...

Tuesday, October 13, 2009

Memories of IT - circa 1988 - The Maintenance Dilemma

My company's management, IT and Business, were now grappling with the 'maintenance problem', the generally agreed dictum that 75% or more of a company's IT 'development' budget was actually spent on fixing and enhancing its existing systems, leaving little for delivering the new systems everyone wanted to support new business initiatives.

The standard reaction was (and is) usually to find some way to get more new development out of the available resources, resulting in the adoption and eventual abandonment of many tarnished 'silver bullets'. A less common but no more successful approach was to find ways to maintain those existing systems with fewer resources: code analyzers, reverse-engineering in models, and such.

The third, least used (and least understood) approach was to recognize that systems had to be built from the start to require less maintenance effort, so that the 75-25 resource split could be moved towards 50-50 or better. This requires a long-term strategic view of your information systems inventory, one that recognizes that over 7 to 10 years many of your current systems will be replaced anyway, so why not do it following a strategic plan; otherwise, you will end up in the same state in 10 years, just with a few newer systems.

One thing that anyone reading this will agree on is that thinking out 7 to 10 years is difficult for the average company, even for its core business of what it sells or services; taking such a long term view of its supporting information systems is really difficult. The allure of the quick fix can be hard to resist. So, in retrospect, the fact that the average insurance company I worked for would even consider a strategic approach to its information systems still stands out as an amazing development that would take my own career down a new path... to Information Engineering.

Thursday, October 08, 2009

Memories of IT - when I started to learn about Methodology

So, these posts are still in the 80's, but a lot was going on. By coincidence, both I and my company were thinking more about methodologies and the system development life cycle. Looking back, it's hard to explain that we hadn't really been thinking in these terms before; work just got done, a simpler time I suppose. Of course, the idea of using a methodology was not new in the mid-80's, but it wasn't accepted everywhere either.

The only real methodology concept I was exposed to early in my career was the Scheduled Maintenance Release. Working on an existing system, requests for change would come in at any time. I suppose at one point before my time such changes might have been dealt with as they arrived, but it had become apparent that this was not the best use of resources. It became clear that "opening up a system" for changes carried a certain level of cost irrespective of what the change was, including implementing the changes into production.

So, change requests were evaluated as they came in (production bugs were fixed as they happened); if the change could wait, they went on the change list. At a future point, either on a regular basis or when resources were available, all the current changes were considered for a maintenance release project.

As I worked in a department where the systems were relatively small, and project teams might be one person, I don't recall doing much estimating or cost-benefit analysis for these projects during that era. Releases might be organized around major functions, like all changes for month-end reporting. Once a scope and a set of changes were agreed to with my main business contact, I just went ahead and did the work. I remember that I would figure out what code changes were needed, make them, and test them. For the small systems I worked on, I don't recall there being separate unit, integration or User Acceptance Testing.

If you have read my earlier posts, you will know that I did work on a large in-house development project as a programmer, but I can't say if the project was following a methodology. I came on the project during construction, and all I remember was that the PM/BA did write and give out what would be considered Specs today. I think she also did the integration testing of the system as we delivered unit-tested bits.

But there was indeed some Methodology work going on in the company... One day some of us were scheduled to attend training on the company's new System Development Methodology (SDM). Apparently one or two people in IS Training had been developing an SDM (we still had that in-house bias). So off we went; to the creators' credit, I recall that what we saw was pretty good. This is probably when I first heard the word "phases", and that there were at least 4 or 5 of them in this SDM. Unfortunately, creating an SDM is a lot of work, and so far they had only completed the Analysis phase in detail; the rest was just the framework. They said the remaining phases would come over time; well, time ran out on this work when someone figured out you could buy a whole, complete SDM, so the remaining phases were never done and the in-house SDM was never mentioned again.

... but I recall it was my IS department that then went out and got an SDM. The winner was from a local consulting company, who offered "The One Page Methodology"; methodologies were already getting the reputation of being big and unwieldy, with manuals that would be put on a shelf and never used again. Now, this "one page" was a 4-foot by 2-foot wall poster, but it served the purpose. The poster was divided into 5 horizontal bars, one for each phase, and each phase had around 10 boxes/steps, going from left to right, but that's all I remember.

What I do remember was the vendor also had a CASE tool, called "The Developer", to automate the diagrams used in the methodology. These were basically data models and data flow diagrams. It also had a data dictionary for the data model, and text boxes for documenting your DFDs. So, Excelerator was gone, replaced by this Developer.

I used it quite a lot as I started doing analysis on a lot of smaller projects. I can't say that our developers got what the models were for, but it was mostly current-system maintenance, so they would ask questions and figure it out somehow. Not sure how this situation was tolerated, but things were changing all the time, so newer methods and tools were coming...

Wednesday, October 07, 2009

Memories of IT - mid-80's - Analysis and CASE Tools

So now I move on to the next project, replacing yet another batch system with something newer and shinier, and I am doing the Analysis phase. In a timely fashion, a few things arrive that will help me greatly...
  • a fast, collating, stapling photocopier, which I would need for distributing the requirements documents
  • an IBM PC AT, with a hard drive, a mouse, and a graphics card. The drive storage was tiny by today's standards, 10 meg. The monitor was still monochrome, an annoying orange-yellow.
  • and the reason for the AT, a copy of Excelerator 1.0
Excelerator was my first CASE tool. I think the vendor sales guy drove up from Boston to Toronto and gave us a copy out of his trunk. I entered my level 0 DFDs in it, and then I could easily change them as needed. When a diagram was stable, I would generate a print file onto a diskette, and then go to a PC attached to a plotter to print the diagram. (No LAN yet.)

Next, I could select a level 0 function and then draw a new diagram, exploding that function to level 1, keeping a link to level 0. This was like gold for me, so pencils and erasers were now history.

Excelerator did not support Data Models in 1.0, but I used a general diagram type to at least draw the model, and kept definitions and such in a text document.

Looking back, I would have to say that the quality of what I produced was probably low, but I was young, all was new, and there weren't many examples to compare to. I am fortunate to have had the time to learn and improve since then, and it hasn't stopped; there is always room for improvement, and new things to learn.

Tuesday, September 29, 2009

Memories of IT - still in the 80's - They didn't call it downsizing yet...

Ok, now I am trained up to be a Business Analyst, and an opportunity to be one was about to present itself. But first, a little about changing environments.

When I joined, and for the first few years I was there, Crown was a PL/1-IMS mainframe shop with all systems developed in-house. I don't know how this came about, as so many companies used COBOL and VSAM files. The fact that Crown Life did not use COBOL was one of the reasons I accepted their job offer in 1979. The only old war story I heard from some company veterans was about when the first computer was installed at the company in the mid-60's. It was a big deal, all about "turning the big switch" to move the company onto using a computer.

Development of new systems moved in cycles, of course, depending on what needed to be built or replaced, allocation of budgets, etc. After the development of the new Stocks/Bonds Admin system was done, the focus moved to the Group Insurance business. Crown Life did a lot of business in the United States, like most Canadian insurance companies of any real size, and it was still possible to make money in the group health business as well as life, before health costs sky-rocketed and HMOs took over.

So, the company had a number of group sales offices around the United States doing business, and they needed a new system. Something about the cost of using billable mainframe cycles probably played a role in the decision to create a system based on a mini-computer located in each sales office, which employees would use all day; each mini would then feed the day's transactions to a master system on the mainframe in Toronto for consolidated processing. Sounded reasonable, I guess. Adding minis to our mainframe environment was new, and probably should have been treated as a risk, but I was 26 years old and not on that project, so I can't say I worried about it much myself.

The team for this project was located next to the department I was in. I recall that I did not know many of these people directly, but they seemed like a good enough bunch. Reporting about project status to people outside the project always presented things in a good light, but that kind of reporting usually does, no matter what is actually happening.

Because something bad was happening on that project. I found out about it like most of the rest of the company: I came to work one day and the area with that project team was empty, and stayed empty. The mini-mainframe mix had not worked out (more on that later), so the project was cancelled and the whole team up to director level was let go, fired. I don't believe the 'down-sizing' euphemism had been invented yet, but this was the first one I ever saw like this, quick and brutal. I think this was when I realized what a lot of other people were realizing as the 80's progressed: companies could just not afford to be completely loyal to their employees, so it was time to start managing your career yourself, to start looking out for #1. Obviously this is when employee loyalty to employers started to disappear too, and management magazines over the next 10 years or so had the gall to print articles asking "why?".

Anyway, the Group Business division still needed a system, so senior management brought in a new team that went right out and bought a Group Insurance COTS package. It was totally mainframe, so whoever wanted to avoid mainframe costs was ignored (or had been one of those who was fired), and it was COBOL-CICS-VSAM.

This was the first package I had seen purchased at Crown Life, so developing all systems in-house could no longer be assumed, and the pure PL/1-IMS environment was no more. Business realities would now drive what systems and technologies would be used.

PS: About the reason why the mini-mainframe mix failed... about 5 years later, I was on a training course near Washington DC, and the direct flight back to Toronto for myself and a co-worker who was also attending was canceled. So, we ended up on a travel nightmare, taking a flight that stopped 3 times and required changing planes once. A couple of other travelers were doing the same thing, so we got to talking (especially in the bar between planes), and it turned out they were AT&T techies. When I said we were from Crown Life, they rolled their eyes and said that we might not want to hang out for the rest of the trip.

Why? It turns out that for the minis in those Group offices to communicate each night with the mainframe, they would need special or dedicated lines (any network techies reading this will probably know why). In those days, those lines were not easy to get, and Crown Life ended up on a waiting list measured in years. When that situation became apparent to all, it wasn't too long after that I came into work and found that project team gone.

PPS: Crown Life apparently sued AT&T over this. They had a contract, of course, and they claimed the delays breached the contract or something like that. As a Crown Life employee, I never heard about this lawsuit, but the AT&T guys we were traveling with sure had. I don't know how the suit ended up, but Crown Life was not a favorite of AT&T for a long time; still, us foot soldiers who had met at random decided we would have a few more beers and not let company issues bother us; we just wanted to get home that night.

So, when I started on the next big project, all assumptions had changed.

Monday, September 28, 2009

Memories of IT - 1984 - Structured Techniques

So, this many posts in and I am still less than 5 years into my career... but an important shift in the story is imminent.

As I came off pure programming work like the new development project I have been describing, the Analyst part of my Programmer/Analyst job title started to dominate. Some of it was the work I moved on to, but there was also a parallel path of training that the company paid to have me attend. Everyone has a complaint or two about anywhere they have worked, but my first employer did understand that training was vital to the development and overall happiness of its employees.

Some of the training was general skills, like public speaking and giving presentations. On the IT side of things, structured techniques were all the rage, broken into the parts of Structured Analysis and Structured Design. As I was still primarily a programmer, I recall I attended a Structured Design course first; it was given by an external vendor. The actual diagrams and techniques have faded from memory, but I do recall it is where I first learned about "high cohesion" and "low coupling". The course also showed how the results of Structured Analysis were used as input to Structured Design.

So, I continued along, doing enhancement work on existing systems. Understanding what the business wanted out of an enhancement, and then determining the impact it would have on a system, was the majority of the work; the actual programming needed could often be minimal. So, it seemed like maybe I was turning into an Analyst who programs a little. I think the company already had the Business Analyst job title, so I veered my career path towards that title.

That meant I got to attend the next offering of Structured Analysis training. The course was based on one of the era's gurus: DeMarco or Constantine or Yourdon or whomever. I should have kept my course material; it would be a classic now!

It was on this training that I was first introduced to the Data Flow Diagram, or DFD. The idea of diagramming what your system should do really appealed to me, even if the diagrams were hand-drawn and hard to change. Pencils and erasers are what I remember of the course and the subsequent work back at the office. That would have caused a lot of people to use the technique less or stop using it, but not me. My future was being defined right there in that course.

Next time - the next big project

Wednesday, September 16, 2009

Memories of IT - unexpected consequences of system development

(Actually, I need to wrap up a few things and start a few others before I get back to PCs...)

So, I came off of the big PL/1-IMS development project as it wound down, in 1983 or so. The system from my viewpoint was brilliant, supporting all the stock and bond investment business of the company. I worked a lot on look-up screens, starting with a definition of a security and breaking it down to all the lowest levels of investments the company had in that security. A friend of mine was quite proud of a program he wrote to allocate bond income across the company's complete portfolio after one hit of the enter key. However, the system still faced two challenges:

1. the easy-to-program architecture of the online system did require more of the system to be loaded in memory at a time than an architecture based on a single screen structure would have. As a result, the online system always had the lowest priority among all the online IMS systems, and so response to the users was always slow.

2. the actual securities traders had been doing business over the phone and writing down their trades on scrips/paper, then handing these to clerks to code up transactions to feed the existing batch systems. A major proposed benefit of going to an online system was that the traders would now enter their trades directly into the system, giving real-time updates and removing the clerical effort. So, our BA/PM showed a test version of the system to management and the traders, to show just how this was all going to work for them. Apparently the traders weren't aware of this benefit, and they reacted very negatively. Typing anything was for clerks and secretaries, so using a system that required typing was beneath them. So, when the system went live, traders continued to write their trades down and hand them to the clerks, whose jobs continued, but now as the users of the online system.

The system did go on to have a useful life of about 10 years. I did not work on it again, so I don't know if the traders ever warmed to it. I do recall some outside consultants doing reviews of our existing systems later in the decade, and they said something stupid about this one: given its volumes compared to, say, our core individual insurance systems, they wanted to know why we had not developed it on a mini-computer. I think the architect's head probably almost exploded. I suppose not owning a mini-computer did not mean anything, nor did the fact that the company's core IT skills at the time of development were all on mainframes. By the time of this review, however, other changes had occurred; packages were being purchased, which meant COBOL and CICS were invading our previously pristine PL/1-IMS world. The architect left the company not too much later. Breck Carter, wherever you are, if you see this, drop me a line.
