Thursday, December 21, 2006

Mainframe to client-server, more


I’ve noticed a marked increase in the number of calls regarding mainframe contracts in the second half of 2006. Although many of them look for Vantage 1 and DB2, in general the requirements have become less specific than they were even a year ago.

Yet in the late 1990s everyone said that anyone who didn’t switch to “client-server” (or to "open systems") would become unemployable once Y2K was over. Remember that Y2K generated a tremendous demand for routine mainframe programming (including ALC) in the late 1990s, and that demand did drop off precipitously right after 2000 started. There have been other blips since, though: MMIS, HIPAA, the Euro conversion, a variety of other social welfare programs, and even some special projects for the IRS. Many code maintenance jobs went overseas, especially to India, and sometimes this included night support (to take advantage of global geography and astronomy). There is some evidence that some of this work is coming back home: offshore outsourcing does not always save all the money that companies expect, and sometimes valuable expertise is lost.

In the 1990s it became fashionable to say that older professionals couldn’t learn the new stuff. This has been a subject of some controversy before. The real techies started out by running their own webservers at home in the mid-1990s (one friend of mine did this on a 386 machine -- he would scold me for my "astonishing lack of curiosity" when I did not play around with downloading various software just to experiment, which is what you had to do then to learn this stuff -- and another developed a web hosting business; I was his steady customer for four years). Yet all of this developed just after news commentators started talking about the Web on CNN, and while AOL and Prodigy still lived off of their proprietary content. (How that has changed! Prodigy was a bit clownish in those days.)

Those were the days, my friend (as in the 1968 song). 2400 baud was an acceptable way to get email at home; even at work, until maybe 1991 or so, 9600 baud was a bit of a luxury if you had a remote mainframe. The rapid improvement in connectivity options no doubt helped speed up the corporate mergers in the 90s.

By the late 90s, companies like the insurance company I worked for had developed a paradigm of legacy systems and cycles on the mainframe, fancy COBOL MVS batch replications to a Unix midtier, C-code (procedural, not object) screen-ems (off CICS), a little DB2 direct connect, TPX, a data access layer in Java with a context factory (I think this fits into the OSI model somehow), a C++ bridge, and a GUI, in this case in PowerBuilder. I moved over to supporting this for two years, and found it difficult to gain employability-guaranteeing expertise in these areas just by responding to user calls or fixing minor bugs. (The tools were a bit backward: the Unix systems had Hummingbird, which was like a slow TSO/ISPF, and code could be edited in vi -- with its 24 simultaneous buffers -- or Emacs, which many techies were familiar with but which seem clunky compared to ISPF. Java seems easier to pick up than PowerBuilder.) But I took a course in C# with Visual Studio .NET at a technical college near Minneapolis before moving back to DC, and I found that C# was much more straightforward than any of these.

The only way to get good at this stuff is to do it: to spend a couple of years developing a system, going through unit testing, QA with user testing, implementation, and support. Just doing post-implementation support isn’t enough; you have to do the whole thing. So making the "switch" (to "open systems") is a several-year commitment. Now Visual Studio .NET looks like a much more straightforward environment than anything my company had -- but what about platform independence? And instead of screen-ems and replications, XML seems a much more straightforward technology for moving data around. Health care companies have used it heavily, as it seems to fit into HIPAA compliance more easily.

Of course, what I want to do with my own “knowledge management”, as discussed on my other blogs, is to get it into a database and have an intelligence engine (something like a super data access layer) connect all the dots. Right now, Visual Studio may be the most straightforward way to do this. You can download the Express editions for free and work with the database and webserver portions separately, but to put it all together into something deployable, it looks like you need Visual Studio Professional, at close to a thousand bucks. But Microsoft keeps changing this (they need your money, though), so I will see.

Picture: operations research and dynamic programming from 1970, RCA Spectra environment.

Monday, December 18, 2006

More on public exposure for IT professionals


I have visited before the issue of social networking sites and blogs kept by I.T. professionals. Employers have become more concerned about these in the past year or so. I suppose some employers would actually like to see candidates participate on technical blogs, but there can be issues of confidentiality.

A situation can arise in which someone is placed with a client by an agenting company, and the client becomes concerned about the "reputation" of the contractor because of Internet content in areas outside of direct job relevance.

There have also been startup companies that promise to manage people's "online reputations" and that also want to manage their online presence for "appearance" or public relations purposes. Agenting companies might fear that a client, if it found a contractor's political or personal materials on the web, would perceive the contractor as less than "professional" about his/her use of public space, but this notion is very subjective.

Generally, as an individual contributor in a W-2 situation, I would not allow an outside company to manage my "reputation" or public appearance (as if it were clothing -- "Sartor Resartus"), and I do not believe that this is necessary. In a corp-to-corp situation, and especially if the agenting company pays salary and benefits while I would be "on the bench", I do agree that this is a much more important issue. In such a situation, the agency is selling the candidate as a professional on the specific subject matter, and a major outside Internet presence on competing issues (especially political ones) could create confusion.

I have more concrete statements about this issue at my Johnwboushka site.

Also, the Persistence Policy, and suggested blogging policy.

I added JCL to my certifications on Dec 8, 2006. Go to this link.

Earlier posting from Sept 15, 2006.

Sunday, December 10, 2006

IBM Mainframe databases

The most commonly desired database is DB2, which first appeared around 1983. I recall a telephone interview in 2002 in which I was asked about "indexable predicates" (predicates, such as a simple equality test against an indexed column, that DB2 can resolve through an index rather than by scanning rows) and to give circumstances where a full outer join would be used (for example, reconciling two tables where a row in either one may lack a match in the other).

The next most common is IMS with the DL/I command language, the world of PCBs and PSBs. Sometimes IMS DC is needed as a TP monitor; it is a rare skill, since most people learned only CICS and relatively few installations still have IMS DC.

A simpler "relational" system was ADR's Datacom/DB, along with Datacom/DC, which Chilton Credit Reporting in Dallas used in the 1980s (before the takeover by TRW that eventually led to a spinoff as Experian). It worked essentially as an inverted list. It did not make one very marketable entering the 1990s job market.

IDMS is a "network" (CODASYL) system, originating with Cullinane and then belonging to Computer Associates from the 1990s. It had a fourth-generation online language called ADS/O which eliminated the need for conventional command-level CICS programming. IDMS could work with files in VSAM format as well as its own proprietary format. (By the way, you can do command-level CICS in Assembler as well as in COBOL, but macro-level CICS was usually written only in Assembler.)

Sperry Univac (in the 1970s) had a DBMS similar to IDMS, called DMS-1100.

Another common database used to be ADABAS, with the accompanying 4GL NATURAL.

For life insurance and annuities, VANTAGE developed a proprietary system to access either VSAM files or DB2, with structures so specific that it is practically a DBMS in its own right, with very particular call structures and link deck conventions that often require gurus to maintain.

Thursday, December 07, 2006

Review topic: JCL, MVS

hiperspace -- comparable to a dataspace, but it resides in expanded storage and helps jobs run more efficiently

ICF -- Integrated Catalog Facility
VSAM files must be catalogued
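As a quick refresher, it is the IDCAMS DEFINE that gets a VSAM cluster catalogued in the first place. A minimal sketch, with an invented dataset name and attributes:

//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(PROD.CLAIMS.KSDS) -
         INDEXED -
         KEYS(9 0) -
         RECORDSIZE(200 200) -
         TRACKS(15 5))
/*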

ADDRSPC -- on the JOB or EXEC statement, VIRT is the default; ADDRSPC=REAL means the job's storage is non-pageable

DCB -- you can use the OPTCD subparameter to read and write data in ASCII (OPTCD=Q requests ASCII translation on tape)
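For instance, a DD for writing an ASCII interchange tape might look like this sketch (dataset name and DCB attributes invented):

//ASCOUT DD DSN=XCHG.EXTRACT.ASCII,DISP=(NEW,CATLG),
//          UNIT=TAPE,
//          DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000,OPTCD=Q)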

examples of pre-printed forms

//STATEMENT DD SYSOUT=(B,,STM1),FCB=STM1
or
//OPT1 OUTPUT CLASS=G,FORMS=STM1,FCB=STM1
or for laser printer
//OPT1 OUTPUT FLASH=STM1

The OUTPUT statement is used to specify parameter sets for multiple SYSOUT DDs within a particular step. DEFAULT=YES on an OUTPUT statement makes its parameters apply to all SYSOUT DDs that don't specify an alternate statement. The DEST subparameter is often used to direct specific forms to remote printers. In practice, when doing laser printing, many shops require the programmer to insert special characters in the print line to further control printing, especially in automated stacks with client breaks indicated by colored papers (agent commission statements, for example). Printer vendors sometimes require content control that goes beyond what is commonly accomplished by JCL output control parameters and subparameters.
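A sketch of how these pieces fit together within one step (program, form, and destination names are invented):

//PRINT    EXEC PGM=PRINTRPT
//OUTDEF   OUTPUT DEFAULT=YES,CLASS=G,FORMS=STM1,FCB=STM1
//SPECIAL  OUTPUT CLASS=G,FORMS=STM2,DEST=RMT5
//RPT1     DD SYSOUT=(,)
//RPT2     DD SYSOUT=(,),OUTPUT=(*.SPECIAL)

Here RPT1 picks up the DEFAULT=YES statement (OUTDEF) automatically, while RPT2 names SPECIAL explicitly and goes to the remote printer.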

Remember that in cataloguing or referencing a tape dataset, the VOL parameter is very useful. A number after the second comma may reference which reel of a multi-volume set, and the number after the third comma can specify the maximum reels to create. A default of 5 is assumed. In one small consulting company in 1989 (in a 4381 environment, the "small" mainframe of the time), we kept summary data on multi-file reels and had to be very skilled in reading them back correctly with various parameters; we were in an environment where we paid for disk space and computer time and had an economic incentive to reuse summary data. (We also had to manipulate the Medpar detail on various volumes a lot.) Times have changed tremendously since then!
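For example, to catalogue the third file on a standard-labeled multi-file reel while allowing up to ten volumes, something like this (names invented):

//SUMOUT DD DSN=MED.SUMMARY.FILE3,DISP=(NEW,CATLG),
//          UNIT=TAPE,VOL=(,,,10),LABEL=(3,SL)

The 10 after the third comma is the volume count, and LABEL=(3,SL) says this is the third dataset on the reel.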

It is often desirable and more professional to allocate temporary datasets within virtual storage. UNIT=VIO will accomplish this. Surprisingly, many shops do not bother to do this. Shops generally encourage the use of temp sets, however, and will monitor programmers' use of unnecessary catalogued datasets.
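A minimal sketch of a VIO temporary dataset (attributes invented):

//SORTTMP DD DSN=&&TEMP1,DISP=(NEW,PASS),
//           UNIT=VIO,SPACE=(CYL,(5,5)),
//           DCB=(RECFM=FB,LRECL=100,BLKSIZE=10000)

The && prefix marks the dataset as temporary, so nothing unnecessary ends up catalogued.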

JES2 and JES3 handle scheduling jobs on specific processors differently. In JES2 you use the /*JOBPARM statement with the system affinity (SYSAFF) parameter. In JES3 you use the //*MAIN statement (two slashes) with the SYSTEM parameter. A global processor is in charge of a whole network, and controls local processors.
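Side by side, with an invented system name:

JES2:
/*JOBPARM SYSAFF=SYSB

JES3:
//*MAIN SYSTEM=SYSB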

Back in the late 1970s at Bradford we used CHKPT a lot on tape jobs, because we had sequential jobs that processed millions of records of MMIS claims data on tape. CHKPT=EOV takes a checkpoint at the end of writing each volume. This was very useful for finishing production processing on time, especially with programmers on call on their own salaried time.
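On the DD statement it looked something like this (dataset names invented); note that CHKPT=EOV also requires a SYSCKEOV DD to receive the checkpoint records:

//CLAIMSIN DD DSN=MMIS.CLAIMS.DETAIL,DISP=OLD,
//            UNIT=TAPE,CHKPT=EOV
//SYSCKEOV DD DSN=PROD.CKPT.DATA,DISP=MOD,UNIT=SYSDA

If a job failed partway through, it could be restarted from the last end-of-volume checkpoint instead of from the beginning.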