Monday, July 16, 2007
A fair question about my background might be: I spent 31 years in the conventional IT world, so why didn’t I “advance”?
In fact, I had direct reports, as a project leader, just once, in 1988, during my last six weeks at Chilton Credit Reporting (aka TRW, now Experian) in Dallas. But I had already determined that I should leave the company because of political and merger-related circumstances, thinking I would be stretching my luck if I sat it out for the severance when I could get a job elsewhere. I came back to Washington DC and worked for a health care policy consulting firm (now Lewin) for 18 months before going to Uslico, which would be absorbed by NWNL – ReliaStar (1995) and ING (2000).
My career started out in operations research and defense (Navy Dept) within the Univac world (1108, 1110, etc). I worked for Univac in marketing and doing benchmarks for a year and a half. I moved into the commercial area by going to NBC (National Broadcasting Company) in New York City in 1974, to work on their general ledger application, in a Univac 1110 shop. Then the big deal for career employability was to “get IBM.” So I went to Bradford National Corporation and worked on Medicaid MMIS for New York State in 1977. I stayed there for 19 months and left, in retrospect quite prematurely, to move to Texas in early 1979. There I worked for the Combined A & B Medicare Consortium (CABCO), hosted by Blue Cross and Blue Shield of Texas.
My intention there was to prosper and advance. The environment was to be IBM, IMS and CICS, the preferred environment of the day. However, the project failed because of political infighting. I got a stable programming job at Chilton in Dallas, working on daily and monthly billing, but the environment was less desirable, being Datacom DB and DC, no longer factors in the job market today.
In the 80s, with all the mergers, leveraged buyouts and hostile takeovers toward the end of the decade (in an environment of falling oil prices and overbuilding and real estate recession in Texas) companies were already starting to flatten their organizations, with fewer layers of management and larger spans of control per manager. The manager or managing project leader was considered more vulnerable to layoff, often, than the “can do” programmer who did the nightcall and kept the production systems running.
The mentality continued in the 90s, and the big demand for mainframe programmers for Y2K picked up around 1997 or so. The culture very much supported the idea of a career as an individual contributor, or perhaps a team lead without direct reports. After Y2K and the bursting of the first Internet bubble in 2000, the market started to tank, and then 9/11 and the accounting scandals of Enron and WorldCom (and depressed stock market valuations in 2002) really killed it. As the market recovered slowly, job descriptions (especially state government contracts) became much pickier. Since about early 2006 the gig requirements have loosened a bit, possibly suggesting increased demand, though the picture is not entirely clear yet.
Now, with the influence of the Internet and the schizophrenic reaction of employers to individual user generated content and social networking, which can produce publicity conflicts, the whole attitude is mixed and hard to predict. Employers (and the headhunting staffing firms that they use) are uncertain as to what they really need. But this could be a great time to be in college for a student who plans his course work and internships carefully and focuses on the skills that are obviously in demand (security, architecture, OOP, and especially “connecting the dots”, a theme that I talk about on the other blogs.)
Thursday, July 12, 2007
I’ve had some more discussions with a couple of companies, having passed five certifications from Brainbench (COBOL, JCL, ANSI SQL, DB2, CICS). Another one of them arranged a telephone interview with short-answer technical questions (actually I’ve had two of those; the first was from Derrico and consisted of many very short answers). “You can pass a multiple choice test,” one interviewer would say, “but (after five years) can you still do it?” In general, COBOL coding practices have shifted in the past few years, with the case-structure techniques common in other languages (EVALUATE rather than nested IFs) more expected, and less use of EXIT logic. OS/390 has led to the use of skills that cross over to other platforms like Linux.
The last time I worked in a formal IT shop was December 2001. That’s 5-1/2 years. I have not had a fully functional corporate logon since then. I did have one at Right Management, then a VAX logon at a debt collection company, and a server logon for a public school system.
What needs to happen for me to be marketable in the mainframe area is for an increase in demand for mainframe gigs to continue. This does appear to be happening, given the nature of recent recruiter calls. It’s always an issue to return to work, but “working moms” do it after several years all the time, and the same may hold for retirees, as life-spans increase and labor shortages develop, especially when companies have hurriedly offshored basic functions and the offshoring backfires or fails to save money as expected. (Salaries in India and even China are rapidly increasing, and are bound to continue to do so.)
My earlier posting on this (mentioning Bob Weinstein's columns) was here, in March 2007.
My public transcript for Brainbench is here. Sometimes this link works for me only in Internet Explorer.
The preferred resume site is http://www.johnwboushka.com
Coordinate post about the insurance business: link.
Monday, July 09, 2007
Batch Cycles Used to Make Up the Heart of Computing
I started my “career” in 1970, getting out of the Army, but the first time I worked as a programmer on financial systems in a commercial shop was when I went to NBC in New York City in August 1974 (right after Nixon’s resignation). For all the jobs that I had (except one, nineteen months with a health care lobbying consulting firm from 1988-1990), the overnight batch cycle seemed to define the department’s computing culture. (OK, from 1979-1981 I did only design on a proposed Medicare system, but even there I worked on back-end reporting, and the preoccupation was how batch processing would flow.)
Batch is not hip-hop, and in the “Broken Arrow” world of John Travolta it ain’t “cool,” but it was fundamental. The cycle had to finish on time and totals had to balance in time for CICS (or, in one company, Datacom DC) files to come up for the business day. Programmers had to be on call for those dreaded S0C7’s (data exceptions from bad numeric data), 001’s (I/O errors, often empty files), or U100’s (usually out of balance). In one job, in the 1980s, systems were self-contained enough that everyone was responsible for his own system. In the 1990s, at another company, it was necessary to have rotating on-call lists because of the interdependence of the systems. Of course, there were the dreaded “month ends.” I remember the phrase “End of month is on fire!”
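The balancing itself was simple arithmetic, rigidly enforced: the detail records had to add up to the control total carried on a trailer record, or the job came down and someone got paged. Here is a minimal Python sketch of that discipline; the function name and the abend-style behavior are illustrative, not any shop's actual code:

```python
# Hypothetical sketch of a batch balancing step: detail records must
# add up to the control total on the trailer record, or the run is
# forced down (the way a U100 out-of-balance abend would stop a job).
# Amounts are kept as integer cents to avoid floating-point drift.

def run_balancing_step(detail_amounts, trailer_total):
    """Return the balanced total, or raise if the cycle is out of balance."""
    computed = sum(detail_amounts)
    if computed != trailer_total:
        # In a real shop this would be a user abend (e.g. U100)
        # that pages the on-call programmer.
        raise RuntimeError(
            f"OUT OF BALANCE: computed {computed}, trailer {trailer_total}")
    return computed

# A balanced file passes quietly; a short file blows up.
assert run_balancing_step([100, 250, 50], 400) == 400
```

The point of the sketch is the hard stop: an out-of-balance condition was never logged and ignored, it halted the cycle until a human reconciled the totals.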
One major result of supporting batch cycles for so many years is knowledge of the business processes taking place. In the 1980s, I worked on billing systems for a credit reporting company (Chilton, to be absorbed by TRW in 1989 and spun off as Experian, much of it still in the Dallas area today). The cycle comprised a couple of major parts: processing the daily log files (of credit report requests) to collect statistics, and updating the daily billable volume, often applying complicated volume and other discounts and fixed charges. (Another area of the company fed credit scoring, well known today as FICO.) In the 1990s, I was working in the life insurance and annuity business, and the cycle was a bit more complicated. It comprised (1) new business processing (ISSU-COMM and NBU), including policy print; (2) the daily claims administration cycle (with different lines of business at different subsidiary companies on different platforms, including Vantage, VLn, CFO, and various older in-house systems); and (3) feeds to accounting and salary deduction for subscribing employers, as well as feeds to the agent commission systems. End of month included all the accounting and commission payments and statements.
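The volume-discount logic in that kind of billing is essentially tiered pricing. A simplified Python sketch follows; the tier breakpoints, rates, and fixed charge here are invented for illustration and are not Chilton's actual rate schedule:

```python
# Simplified monthly billing with tiered volume discounts.
# The breakpoints, per-report rates, and fixed charge are made up
# for illustration -- not an actual rate schedule.

TIERS = [            # (minimum monthly volume, price per report in cents)
    (0,      100),
    (1_000,   80),
    (10_000,  60),
]
FIXED_CHARGE = 5_000  # flat monthly charge, in cents

def monthly_bill(volume):
    """Price the whole month's volume at the rate of the tier it reaches."""
    rate = next(r for floor, r in reversed(TIERS) if volume >= floor)
    return FIXED_CHARGE + volume * rate

# A small subscriber pays full rate; a big one earns the discount.
assert monthly_bill(500) == 5_000 + 500 * 100
assert monthly_bill(50_000) == 5_000 + 50_000 * 60
```

Real schedules were messier (per-customer contracts, mixed product charges, credits), but the shape of the calculation, volume bands driving a unit rate plus fixed charges, is what the nightly and month-end cycles were computing.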
In supporting a life and annuity processing cycle effectively, one develops an understanding of the core concepts of life processing. The employer required and encouraged all programmers to take LOMA (Life Office Management Association) courses and earn FLMI certificates, by taking up to ten multiple choice tests. Insurance has its own set of concepts (like anti-selection) that become second nature to people who work in the business. One of the most notorious of the LOMA tests was the actuarial math test (“present values”, etc). A background in statistics and calculus does help.
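For readers who never sat that exam, the present-value arithmetic it covered can be sketched in a few lines of Python. This is the standard textbook formula for an ordinary annuity, not material from any particular LOMA test:

```python
# Present value of an ordinary annuity: n level end-of-period payments,
# discounted at periodic interest rate i.
#   PV = payment * (1 - v**n) / i,  where v = 1/(1+i)

def pv_annuity(payment, i, n):
    """PV of n end-of-period payments at periodic interest rate i."""
    v = 1.0 / (1.0 + i)          # one-period discount factor
    return payment * (1.0 - v**n) / i

# 1000 a year for 10 years at 5% is worth well under 10,000 today:
pv = pv_annuity(1000.0, 0.05, 10)
assert 7721 < pv < 7722          # standard annuity factor is about 7.7217
```

The exam questions layered mortality assumptions on top of this kind of discounting, which is where the statistics background comes in.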
Property and casualty insurance, and health insurance would tend to have similar underlying processes. Property and casualty would emphasize estimates and imaging of accidents or claims events. Health insurance has to meet strict legal privacy requirements in transmitting data among different systems and companies (HIPAA), often with XML.
When one reaches “retirement age,” as I did, various issues come up (complicated by the economic dislocations at the end of 2001). One is whether I could return as a mainframe applications programmer after five years; I could, if the re-emerging demand is great enough. Another is whether the user and business knowledge would be useful in selling insurance products, an issue that I will take up in a couple of other blog entries soon. I’ve already taken up “Can techies sell?” on this blog (Oct 2, 2006 – click on the archives link).
Picture: I used to work in the building in Arlington VA that this new building replaces. Progress moves on.
Tuesday, July 03, 2007
The old-fashioned culture of information systems in large companies, with overnight batch cycles, end-of-month processing, and demand for absolute perfection, actually developed decades ago, in the 60s and 70s. IBM introduced its 360 line in the mid 1960s, and its verbose JCL (DOS at the time, to be largely replaced by MVS in the 70s), assembler, and COBOL created the computing culture of the time.
In the 1970s many financial institutions and other large companies (manufacturing, media, retail) would write their own in-house business systems, and developed a culture of “systems analysts” who wrote specs, plus programmers and operators. The level of perfection necessary quickly became apparent, and programmers had to be on call when nightly production ran. The dreaded “S0C7” became a buzzword. That was IBM (“I’ve Been Moved”), which came to dominate business systems culture, partly because of the secondary propagation by H. Ross Perot and his militaristic company EDS (located at Exchange Park in Oak Lawn in Dallas in the early days, to move to Forest Lane and then Plano, and then all over the world, and morph into a mainstream, even pluralistic consulting, project management, data center management and software firm). With other vendors, there were similar buzzwords, like “0219” on the RCA Spectra 70 (which emulated IBM), and “Guard Mode” on the Univac 1108 / 1110 (which I worked on in the early 70s until yielding to the pressure to migrate to IBM).
In the old days, programmers had unchallenged access to production files, and there were no automated source management procedures to guarantee source-load module integrity. (All of this developed in the 80s.) You had to be very careful to save records of what you did. At NBC, in the mid 1970s, where we had a Univac 1110 environment, we had paper-roll terminals and could save the rolls of what we did (we didn’t have individual CRTs yet; to look at the source code of a program, you entered a command to print it on the paper). Another thing you shouldn’t do was go on vacation over a month-end closing. (I did once, and almost regretted my rashness.) Also, if you wrote a program to save records, you learned to read the data back yourself; otherwise production data could be lost forever. The close calls of those days sap energy from later personal competitiveness, and it is best not to let them happen.
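That read-it-back-yourself habit translates directly into modern terms. A minimal Python sketch of the discipline (this is an illustration, not anything we actually ran on the 1110):

```python
# The "read it back yourself" habit: after archiving records, reread
# the file and compare against what you wrote before trusting the save
# -- and certainly before letting anyone scratch the input.
import os
import tempfile

def save_and_verify(records, path):
    """Write records one per line, read them back, and verify the copy."""
    with open(path, "w") as f:
        for rec in records:
            f.write(rec + "\n")
    with open(path) as f:
        readback = [line.rstrip("\n") for line in f]
    if readback != records:
        raise IOError("readback mismatch -- do not scratch the input!")
    return len(readback)

path = os.path.join(tempfile.gettempdir(), "archive.dat")
assert save_and_verify(["rec1", "rec2"], path) == 2
```

Without tape management systems and automated backups, that manual verification pass was the only thing standing between you and permanently lost production data.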
Today, of course, there are (and have been since 1990 or earlier) all kinds of automated tools (RACF, Top Secret, and source management packages like Changeman or Endevor) that protect the integrity of the production environment automatically. The old days were much riskier than you want to know.
One major news issue is the physical security of data in large companies. Today, an employee theft of financial data on about 2.3 million consumers from Certegy Check Services, part of Fidelity National Information Services, in Florida, was reported. The data was reportedly used only for sale to direct marketers. But the point is that physical security of consumer data is a bigger issue than it was fifteen or so years ago, and compares to home personal computer security as a possible source of identity compromise. When I was working, it was common to take work home (sometimes test results and source listings) during implementations, as well as laptop computers. Theft of a company laptop, as by burglary (especially when traveling on business), is a serious source of compromise if the laptop contains production data. Other employees dial in to a work mainframe from a personally owned computer (this used to be done a lot through products like Procomm or PC Anywhere), but this can lead to co-mingling of personal and protected business information on a personally owned computer. Because I had authored a politically controversial book on my own computers and maintained websites on them, and was sensitive to the idea that company resources could ever appear to be misused for personal political purposes, I insisted that any work done from home (even production support of abends) be done on equipment that I owned. However, most of the time I physically went in when there was a problem. Many other associates worked from home some days of the week (I did not have to, as I was in an apartment 1500 feet from work), and telecommuting, while desirable from an energy-saving and carbon-saving (and work vs. family) point of view, can present security issues.
Picture: White has just played 8. Qa4 and Black can resign. This is like a Fool's Mate. Chess Life and Review, June 2007, p. 26.