Thursday, December 27, 2007

IT people face road forks on their "online reputation" strategies


Well, I've been out of the "formal" job market for six years now, since the end of 2001 (with some little episodes along the way, such as writing a certification test in 2003). I do have some ideas about what I want to do – more details coming up on the main blog soon – but I still don't rule out (even at 64) the idea of going back to a more conventional IT "job" or W-2 or even corp-to-corp contract. This may have come close to happening back in May.

One particular concern is this whole evolving idea of online “reputation.” I’ve pretty much cemented my “reputation” for better or worse, with these blogs and sites, and with the political involvement with the “don’t ask don’t tell” problem that I committed myself to in the early to mid 1990s. I’ve tried to segment the “reputation” with a site (johnwboushka.com) that search engines pick up first, that presents my I.T. resume and then explains what the other sites and blogs (that follow in the search engine results) are all about. I don’t really know yet how recruiters react to this. I set this up in the latter part of 2006. There has been a drop-off of contacts since mid to late 2007.

When you work for a staffing company and go to a client site (often with temporary relocation, living in an extended-stay motel or corporate apartment in another city), the staffing company wants to market you as an "expert" in the disciplines for which you were hired. Sometimes the list of required experience is quite long and specific (especially with state government contracts – Medicaid MMIS and welfare departments -- where, ironically, job description requirements are made so specific as a way to prevent legal challenges for discrimination in hiring). In those cases, "reputation" is more likely to be perceived the way it used to be, from the resume and word of mouth. In other cases, though, clients may want reassurance that they can depend on the contractor as an "asset person" of last resort, someone who can deal in depth with specific arcane problems in long-standing technical areas (like DB2, IMS or CICS internals on the mainframe) or, in client-server, in many less-established and quickly evolving technologies (OOP). In those cases, it sounds as though staffing companies may start becoming more concerned with notions like "professional reputation management."

Contracts in these areas can be quite challenging. A friend of mine took a contract at a non-profit in the mid 1990s and fought IDMS and VSAM fires for six months (technology that usually should have been stable).

Most people with the specific areas of expertise that generate contracts today developed them by accretion, with a series of related jobs or contracts. Typically there was no conscious decision to become an "IMS expert," even though recruiters now scour the country for the few of them that remain for the few jobs that there are. It's a kind of L'Hopital's Rule problem (from calculus) in reverse. Because companies have been unpredictable and inconsistent, willing to dump people to eliminate redundancies of function that occur with corporate mergers, programmers and IT people have developed a short-term view of their own futures, and believed that they must be flexible, shift gears quickly, and wear many different hats at the same time. Yet the whole "online reputation" issue (which I have discussed on my other blogs) tends to create the impression that a "professional" nurtures and deploys his or her core skills so that others can count on them. There is a kind of perfect storm going on here.

Thursday, December 20, 2007

"Strategic planning" and capacity planning requirements: how things have changed from the 80s


Back in early 1989, when I was working as a "mainframe" computer programmer for a health care policy consulting company, we were concerned with reducing "computer costs" (disk space and charged EXCP's) because they came out of the bottom line for the business. As noted in the last post, I was able to reduce some costs by replacing random VSAM accesses with Sorts and sequential processing. But I also called around to a couple of data centers in the northern Virginia area for quotes on space and time. I was almost in the position of negotiating a lease for space and connections. The buzzword for this kind of systems analysis was "capacity planning"; but at Univac, back around 1973, I had encountered this concept with benchmarks of 1110's and with general customer site support. We wound up getting bought and moving over to a 4341 and 4381 environment that offered VM (which made a mainframe look rather like a 1980s-style PC, with its F-disk) and MVS; there were cultural squabbles on how to run SAS. (I remember those notorious "podiatry jobs" and the SAS "bundles").

How things have changed, where Windows Vista and Windows Server are the standards, with equivalents in the Linux and Unix worlds, and where on the mainframe OS/390 runs anything you want. Now, as I noted on this blog in August, I size and price items for video and movie editing software, and Visual Studio / ADO / SQL Server or comparable MySQL environments on my own machine or my own domains, and think about how to tie everything together with something like WordPress.

But the 80s, those were the days, my friends. We thought they'd never end.

Tuesday, December 18, 2007

Computing and searching speed and I/O access: on mainframes, then on Internet


I can recall, when working for NBC in the mid 1970s, that on a Sperry Univac 1110 it would take about 3 hours in an end-of-month closing to sort maybe 300,000 detail records for the voucher register in the general ledger system. That cycle kept me up when I was on call.

By the mid 1980s, it was taking maybe five minutes to sort a similarly sized file on an Amdahl (compatible with the IBM mainframe) during the nightly billing cycle. As a result, we did all our sorts externally with Syncsort steps. We never bothered to code Sorts in COBOL with SD's, Input and Output procedures, or even Using and Giving.

By around 1988, when I was at a small consulting company with access to an IBM 3090 at Healthnet in Richmond, the same-sized sort would take maybe 30 seconds. I had to reduce the computer costs for a simulation model that I worked on. So, with one program that did a lot of random VSAM access, I sorted the input into sequence and processed it sequentially with "balanced line" matching in COBOL, saving about 2/3 of the cost; the whole thing ran in less than half the time.
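For readers who haven't seen the "balanced line" technique, here is a minimal sketch of the idea in java (the file names and key layout are placeholders, not the actual program): both inputs are sorted on the same key and read in step, so all the matching is sequential instead of random keyed reads.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Minimal "balanced line" (match-merge) sketch. Both files are assumed to be
// sorted on the same key; file names and the key position are hypothetical.
public class BalancedLine {
    static String key(String rec) { return rec.substring(0, 10); }   // assumed key in columns 1-10

    public static void main(String[] args) throws IOException {
        try (BufferedReader master = new BufferedReader(new FileReader("master.sorted.txt"));
             BufferedReader trans  = new BufferedReader(new FileReader("trans.sorted.txt"))) {
            String m = master.readLine(), t = trans.readLine();
            while (m != null && t != null) {
                int c = key(m).compareTo(key(t));
                if (c == 0) {                       // keys match: process the pair
                    System.out.println("MATCH " + key(m));
                    t = trans.readLine();
                } else if (c < 0) {                 // master is behind: advance master
                    m = master.readLine();
                } else {                            // transaction has no master record
                    System.out.println("NO MASTER " + key(t));
                    t = trans.readLine();
                }
            }
            // (Handling of leftover records after one file ends is omitted for brevity.)
        }
    }
}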

The preferred mainframe sort product has always been Syncsort, but back in the late 1980s an engineering company in northern VA called ICF had developed a competitor, PLSORT, with pretty much the same command syntax but supposedly less resource use (back in the days of 4381 environments).

In 1991, at a life insurance company with an IBM-clone Hitachi mainframe running MVS, I had a mix of jobs running simultaneously that did a lot of VSAM accesses (simulated by IDMS) to print consolidated salary deduction bills. I remember that one of these jobs could take 2 hours to go through 26,000 print image records in VSAM. To run a hundred bills took all day. But by 1998, in a much more modern environment in Minneapolis, the same mix of jobs could finish in less than an hour, well before anyone came to work. I don't know exactly how the VSAM performance was improved (in terms of CI splits and so on), but it took only about 1% of the time that it had taken in 1991.

Vantage legacy life insurance policy administration systems had a reputation for running forever even for a small volume of contracts, but by the late 1990s these problems seemed to have been overcome. There were no problems at all with any of these systems in the Y2K event.

Is it any wonder, then, that the Internet is so efficient, and that, even when there may be a few hundred million personal profiles and blogs and various sites, anything controversial that anyone puts out tends to be found quickly? It's just the mathematics of binary searches.
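As a rough illustration of that arithmetic (300 million below is just a stand-in for "a few hundred million"), a binary search over a sorted index of that size needs only about 29 probes in the worst case, because each probe halves what is left. A tiny java sketch:

// Count the worst-case probes for a binary search over n sorted entries.
public class SearchDepth {
    public static void main(String[] args) {
        long n = 300_000_000L;                     // illustrative figure only
        int probes = 0;
        for (long span = n; span > 1; span = (span + 1) / 2) {
            probes++;                              // each probe halves the remaining span
        }
        System.out.println("Worst-case probes for " + n + " items: " + probes);   // prints 29
    }
}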

Monday, December 10, 2007

How e-commerce sites display multiple versions of same item


I've noticed on Amazon that when there are multiple editions of a book, sometimes it is hard to see all of them, or they do not come up in what would be a preferred sequence. For a book, one probably wants to see the most recently published version first. That will, in many cases, be a paperback, sometimes with more material added, often at a lower price. Sometimes the original may be out of print.

Guaranteeing that various versions always appear in a reliable order would appear to be related to conventions in coding SQL statements (and maybe in setting up database indexes). Options like DISTINCT and ORDER BY and DESCENDING would need to be considered.
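As a minimal sketch of the kind of query involved, in java with JDBC (the table and column names here are hypothetical, not anything Amazon actually uses), one could key on the title and order the editions by publication date, newest first:

import java.sql.*;

// Hypothetical sketch: list all editions of one title, newest first.
// The connection string, table, and columns are assumptions for illustration.
public class EditionList {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/catalog", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT DISTINCT edition_id, binding, pub_date, price " +
                 "FROM edition WHERE title_id = ? " +
                 "ORDER BY pub_date DESC, price ASC")) {
            ps.setInt(1, 12345);                                  // illustrative title key
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("binding") + "  "
                        + rs.getDate("pub_date") + "  " + rs.getBigDecimal("price"));
                }
            }
        }
    }
}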

I've even seen this happen in a school district payroll system for paying substitutes, where there was some complexity in how multiple-part assignments were coded.

I explained how this issue plays out with my own self-published books here.

Saturday, December 01, 2007

Reported links between graveyard shifts and cancer: an issue for I.T. workers?


Last week major media sources discussed an upcoming report from the World Health Organization's International Agency for Research on Cancer linking higher incidences of cancer to persons who work the graveyard shift. The International Herald Tribune carried the AP story here; the title is "Once dismissed as far-fetched, link between night shifts and cancer gaining acceptance."

It's likely that closer examination of the results will finger irregular hours, common in information technology jobs where people have to work unusual hours when systems can be taken away from customers for maintenance (such as on Saturday night / Sunday morning) or when employees have to be on call for nightly production cycles, common in financial institutions. Many of these cycles are largely run on the mainframe in batch (followed by replications or scripts to establish GUI interfaces for end users), governed by scheduling software and with intricate schemes using generation data groups and backups to ease recovery from abends. Nevertheless, people have to be on call to respond to unexpected failures, especially after implementations or upgrades.

In shops where people are strictly responsible for their own systems, people develop techniques to minimize the chances of failure. In shops where the on-call rotation is widespread, there can be problems among staffers. Sometimes people without families are expected to do more of it. Left-wing rhetoric would claim that if the world needs to be open 24x7, everyone should "pay his dues" and do his fair share of it. But in recent years, since Y2K, some companies have offshored production support work to India.

Of course, operators in data centers are used to shift rotations, as most data centers are populated 24 x 7. Sometimes data centers are shut down briefly for major holidays.

Some contract programmer jobs require on-call availability. Some of these compensate the consultant hourly, so that on-call incidents (if valid) provide extra (usually straight-time) income; others are salaried, and consultants might sometimes have to provide on-call support on their own time, at least for their own systems.

The medical paper may attribute the incidence of cancer to the fact that when sleep patterns are interfered with, natural metabolism produces fewer anti-oxidants. Remedies may include regular lighting to simulate the natural effect of sunlight, and much longer times to acclimate to shift work.

Shift work is also essential in medicine. Long hours for interns and residents have long been controversial.

Monday, November 26, 2007

MySQL skeleton political database installed


Today I placed a facility on my experimental site billboushka.com to look at entries in my "Political Knowledge" database. This is on a shared web hosting site on a Unix platform with MySQL.

Right now, it is very rudimentary with only a few records. The link to the facility and instructions are here. The login ID supplied allows SELECT only. Note that the bookmark facility does not yet work. That requires certain tables (such as phpmyadmin.pma_bookmark), and I am trying to find an easy way to create them.

Some typical queries are

SELECT `argtb`.* FROM argtb LIMIT 0, 30;

SELECT `incident`.* FROM incident LIMIT 0, 30;

SELECT `argtb`.`srcecode`, `argtb`.`stateargument`, `incident`.`srcecode` FROM argtb, incident WHERE `argtb`.`srcecode` = `incident`.`srcecode` ORDER BY `argtb`.`srcecode` ASC, `argtb`.`stateargument` ASC, `incident`.`srcecode` ASC LIMIT 0, 30;

At one time, I had a service on another ISP that offered java, but it became unreliable. There, you could code the lookup as a .jsp element; it looked like this:

<%@ page import="java.sql.*" %>
<%
// Load the (old) MySQL driver and connect; host and password masked, as in the original.
response.setContentType("text/html");
try {
    Class.forName("org.gjt.mm.mysql.Driver");
    Connection con = DriverManager.getConnection("jdbc:mysql://mysql.xxxxxx.com/topics", "jboushka", "xxxxx");

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT topic.topicdesc, topic.argcd, topic.polcode, topic.polkind, topic.actdate, topic.actionds, argument.argtext FROM topic, argument WHERE topic.argcd = argument.argcd");

    // One HTML table row per result row. (The <tr>/<td> tags were stripped when the
    // original listing was pasted into the blog; they are restored here as the presumed intent.)
    while (rs.next()) {
        out.println("<tr><td>" + rs.getString("topicdesc") + "</td><td>" + rs.getString("argcd")
            + "</td><td>" + rs.getString("polcode") + "</td><td>" + rs.getString("polkind")
            + "</td><td>" + rs.getString("actdate") + "</td><td>" + rs.getString("actionds")
            + "</td><td>" + rs.getString("argtext") + "</td></tr>");
    }
    rs.close(); stmt.close(); con.close();
} catch (Exception e) {
    out.println("Database error: " + e.getMessage());
}
%>


I've considered doing this with Visual Studio .NET and C#, but I would have to purchase Microsoft Professional Edition of Visual Studio (as discussed on this blog in August).

Tuesday, November 13, 2007

Do elections provide I.T. work?


This year, for the off-season 2007 election in northern Virginia (all races were local or state), I signed up for poll work as a "tech." That means I would be the person to deal with the WinVote machines (from Advanced Voting Solutions) if any machine failed.

There was an additional 90-minute training session, which was mostly talk. The components of the processing have to do with the mechanics of voting, the various location and machine close and open reports printed on a cash-register-like paper tape, the "USB" (which is really a memory stick), the reboot procedures, and the physical setup, which has to do with standing the machines on pegs and properly connecting them. This is more of a mechanical job than a programming job.

There was only one failure, late in the afternoon. Failures occur when a machine is not activated by a smart card or a vote won't register. They seem to happen if too much pressure is placed on the screen or if there are power supply stability problems in the building (despite the batteries). The instructions document the reboot procedures, which are similar to restarting a laptop from sleep -- but they took longer than documented, and the instructions didn't list all the steps. The machines do not lose any record of votes when they hang, and they produce an "awake" report.

Nevertheless, the possibility that this can happen argues for the idea that a detailed paper record ought to be kept of the individual votes.

It's hard to be really effective in this kind of work when one does it so infrequently. One needs to work steadily on some sort of contract with a county or city in order to be effective and dependable. As I documented on the Issues Blog last week, the day is very long.

Some media companies have election units in information technology and these units provide employment only during election seasons. That's kind of like a tax company providing employment only during the tax season. This was the case when I worked for NBC in the 1970s (although I was permanent and did not work for that unit).

Friday, November 09, 2007

Some fun with an imaginary universe (OOP classes and methods)


A novel manuscript that I am working on (called "Brothers Simple") imagines a scenario where people find out who the gods and angels really are. They map to Biblical concepts in a way (in a New Age sense) and the various entities could be described as classes of objects with associated behaviors or methods, that combine or detach the entities in various ways. Any language like java, C#, or C++ could work.

I suppose this could evolve into some kind of online universe (not exactly "Second Life").

Have fun.

Classes:
Soul
Person
OrdinaryPerson
Angel
FullMemoryTrace
Merit


Methods:

Group
AddSoul

Soul: (vehicle through which you are aware of yourself across dreams – a bundle of consciousness)
AddPerson
ShaveBackPerson
ExtinguishSoul
ViewPerson (//always from a distance)
HoldSoul (suspend for reawakening)
ModifyKarma
Death: ShaveBackPerson for all Person objects in Soul
Death: JoinGroup

Person: (vehicle for your waking identity)
ShaveBackPersonalHold (//reject temptation)
Extinguish (//death)
Increment (//change appearance)
InsertintoSoul (//reincarnation)
UpwardAffiliate (//with angel)
AddPartFromSoul (//experience memory of another as if one’s own)
RemovePartToSoul (//like forgetting a dream)
Classify (//angel first class, ordinary second class, brownie third class -- the “brownies and elves problem”)
ModifyMerit

Angel extends Person
Preserve (//protect from Death methods)
Trump (//take over older angelic identity)
Capture (//ordinary person to be affiliated)
Hibernate (//when taken over)
RecallAndExpellAllPreviousParts (//relive each life and expel permanently)
JumpOnDoubles (//speed > c (boardgame dice roll))
RefuseDreamAsUnopened

OrdinaryPerson extends Person
LinkToNewSoul (amnesia syndrome)
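As a very rough java sketch of the classes above (everything here is a placeholder for the ideas, not a working design):

import java.util.ArrayList;
import java.util.List;

// Toy skeleton for the imaginary universe; all names and bodies are placeholders.
class Soul {
    private final List<Person> persons = new ArrayList<>();
    private int karma;
    void addPerson(Person p) { persons.add(p); }          // incarnation
    void shaveBackPerson(Person p) { p.extinguish(); }    // death of one identity
    void modifyKarma(int delta) { karma += delta; }
}

class Person {
    private boolean alive = true;
    void extinguish() { alive = false; }                  // death
    void increment() { /* change appearance */ }
}

class Angel extends Person {
    void trump(Angel older) { older.hibernate(); }        // take over an older angelic identity
    void hibernate() { /* dormant when taken over */ }
}

class OrdinaryPerson extends Person {
    void linkToNewSoul(Soul s) { s.addPerson(this); }     // the "amnesia syndrome"
}

class UniverseDemo {
    public static void main(String[] args) {
        Soul soul = new Soul();
        OrdinaryPerson someone = new OrdinaryPerson();
        someone.linkToNewSoul(soul);                      // reincarnation, more or less
        soul.modifyKarma(1);
        System.out.println("One soul, one person, karma adjusted.");
    }
}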

Wednesday, November 07, 2007

Hourly v. salaried exempt


My entire career in information technology, I was salaried and exempt. The colloquial definition is "you don't punch a time clock," but the real meaning is that you work the hours until the job is done, and you provide on-call support on your own time. In some installations, management uses logon time to the computer network (and logoff time) as the official arrival at and departure from work. The comfy buzzword back in the 70s was "salaried professional."

Most W-2 contracts are "hourly." The client pays the personnel staffing company for billable hours, and all time is supposed to be compensated. Clients may reasonably expect billed hours to be productively spent. Overtime in W-2 contracts is usually "straight time," although sometimes it is more. However, in some arrangements, contractors are paid only for forty-hour weeks as if they were salaried; that is always the case with corp-to-corp.

Saturday, November 03, 2007

The schizophrenic job market -- it's partly up to employers to say what they want


Having "retired" at the end of 2001 and played around with interim jobs and a lot of writing, I've received a lot of calls from headhunters. From the end of 2006 through the spring of 2007 they seemed heavier, and then they seemed to drop off. As I indicated in the last posting, one issue that came up, especially with someone out of the market for a while, is that they prefer a reverse chronological resume rather than a functional one (or "functionable" one), which they see as hiding things (or at least clients see it that way).

Most staffing companies offer W-2 and corp-to-corp arrangements, and expect the employee to pay his own way in a temporary relocation within the hourly rate (it must be 364 days a year or less to meet IRS requirements, I am told). A few offer tax-free per diem.

In this era of “reputation management” staffing companies are likely to become more concerned about an online presence, as well as resume, that represents some specific area of technical expertise (MMIS, state welfare applications, Case tools, DB2, IMS, Oracle, MQ series, all kinds of things even in the “just mainframe” areas). For one thing, I wouldn't want to invest in the background that would justify a "professional social networking" profile that presented me as the god of DB2 tablespace design if that kind of work was going to be jerked overseas at corporate whims.

But, of course, it was the short-term behavior of the employment market that started in the late 80s with the hostile takeovers and leveraged buyouts, continued in the 90s (where many people in the job market had substandard experience with older technology that would die with mergers – and many of these people were "bailed out" by the Y2K crunch, even if they missed out on the "war for talent" in the dot-com craze), took the 9/11 hit in 2001 and continued. The market, after Y2K, fragmented into many small niches, and it was not easy for professionals to predict what would remain viable. I can look back to 1991, when I may have had a chance to get into Vantage (the mainframe life insurance and annuities platform, with all of its link deck idiosyncrasies that help Vantage "rule the world") but was stuck in slow motion in another homemade system that had to be babysat, and trying to get VLN (which went under) going.

The market for various skill sets would wax and wane, somewhat depending on whether the job could be offshored and performed more efficiently overseas (India) – then sometimes the savings did not come about and the domestic demand would come back. (A lot of this started in the late 90s as companies had to offshore some of the Y2K coding changes – there simply wasn't the manpower available in the states to do it all in the last 24 months or so; then after 2000, why not continue? I can remember one programmer saying in 1996, "we are all set until the year 2000." Indeed.)

Conventional wisdom was to become flexible, move around, not get too attached to any one skill set. But then, as the new century developed, one didn’t have the job-ready specific knowledge that was needed. After 2000 passed, some people, like me, fell for the shallow “jump to client-server” argument, without getting the experience in completing a complex project that would give them the depth that they needed.

Client requirements have often been very specific, particularly with state government clients (for welfare, social services and MMIS contracts) that believe that having rigid skills requirements lists protects them from discrimination complaints, but that tend to keep the same cadre of people rotating among the contracts.

Perhaps, though, it’s a good time to be in college or graduate school, if the school and professors can help students figure out what employers really need. But employers – both staffing companies and their clients – ought – out of longer term self interest -- to behave in a much more forthcoming manner than they have.

I might as well give the links: My certification summary page, and my Brainbench certification page. My official resume page is here.

Wednesday, October 31, 2007

Resumes: functional or reverse chronological? The advice is generally to go back to reverse chronology


Mary Lorenz has a resume advice page today on CNN “Five easy ways to improve your resume,” and CNN introduces it with “is your resume awesome?” The link is here.

There is some disagreement among career consultants about the functional resume. Outplacement companies like Right Management, back in 2002, suggested a resume that lists "accomplishments" and decreases the importance of chronology, especially for job seekers over 50.

Recruiters, however, tell me otherwise. They really like to see a resume in reverse chronological order. True, as in the article, they like to see quantifiable results, active verbs, and brevity (eliminate redundant words and use sentence fragments within reason).

There are a couple of reasons for this. First, resumes need to be scannable for keywords. Second, employers are concerned about accounting for gaps in employment and eliminating the possibility of fraud. This is especially true for "headhunting" companies that place consultants with clients.

Recruiters disagree as to whether resumes should list activities basically unrelated to the job sought. That’s especially the case in a changing economy where people have down time, take “interim jobs” (that don’t always work out well), or earn incidental income (that doesn’t mean “off the books”) from volunteer-associated activities or even blogging. They don’t want to distract their clients (there is the “too much detail” problem in business and sales), but they want to eliminate possible red flags. This is a tough call, as the Internet complicates the way people are perceived when others find them online, as on social networking sites and blogs (which happens much more than many job seekers realize).

Update: Nov. 11, 2007

Mary Ellen Slayter, career writer for The Washington Post, has a column today (page k01) "Accomplishments, Not Duties, Jump Off the Page", here.

Wednesday, October 24, 2007

How things changed


As I look back over my "career" since 1970 with some historical perspective, I see, indeed, how things changed, and how perspective on a career evolved.

I started out in defense, moved over to vendors and then to commercial financial applications. By the mid 1970s I saw that mainframe applications programming was a whole "culture" of values -- perfectionism, and a higher-than-usual stable income but relatively few perks. The deal then was to "get IBM," and then it became "get IMS and CICS." By the late 1980s the minis were coming in, and then PC's, but it took until the 1990s for people to catch on to the concept of end-user-driven computing. First they would do it with DOS and Windows 3.1 applications, and later in the 90s (as java exploded into the market like a nova) with the Internet.

It was no longer possible to make a good living as a "generalist" the way it once had been. You could "switch" to client-server from the mainframe after Y2K but not specialize enough in anything to get another job. You needed to actually develop and implement something to really understand the technology; starting out in support would not be good enough. Then, if you took your retirement, you would find yourself at home, gathering together your skills, finding that vendors would almost give away something like Visual Studio and various SQL's, to see if you could build your own paradigm on your own and make yourself famous.

How quickly times changed.

Monday, October 22, 2007

Again, more reports on shortages in technical sales people


About a year ago, Bob Weinstein had a syndicated column called "Can Techies Sell?" and I wrote an entry about it on Oct. 2, 2006. Today, in The Washington Times, Recruitment Times, p. D3, there is a column "Severe Shortage of Technical Salespeople." Once again, Weinstein pooh-poohs the notion that sales people can sell anything, at least in technology.

The great "myth" is that techies are introverted and don't like to manipulate people as people. In fact, you have to "sell" even to be successful at bringing in revenue or just building a public reputation with your own content. But, from a psychological point of view, one can understand the resistance to sales culture for its own sake – cold calling, developing and trading leads, schmoozing, manipulating, putting on misleading appearances.

Some people, in fact, get a source of psychological identity in their skills in manipulating others. They may well justify this in terms of their own families, a psychological mechanism that more introverted people could find phony.

A much better fit, as Weinstein says, is the concept of sales engineer – a job oriented around customer service and solving business problems, in support of a marketer who may take more of the responsibility for generating leads through “social” contacts. As Weinstein writes, sometimes such a job does lead to a better understanding of how a business works, and what it takes to support the bottom line.

Even so, many sales jobs that companies offer seek sales experience rather than technical experience, even if the sales experience (including meeting quotas) is in a different area.

A main issue for me is how “public” the job is. If the responsibility is to sell something that I had a hand in developing because I believe in it, there is no problem, even if it means setting aside everything else I do publicly – something I have already discussed on these forums. But I would never want the reputation of a peddler.

Monday, October 01, 2007

Object oriented COBOL and a political science application


In developing the Political Argument and Facts project (look in my main blog, for example here), I thought it would be useful to describe how it could be implemented in COBOL, with object oriented concepts. Of course, a project like this might be more likely to get done in a language like java, C# or C++, with considerable sophistication in architecture for web interface. But it’s helpful to think about how this might be done with older, essentially procedural languages that may be more familiar to many IT professionals.

Remember the basic components of such a system. There are objects, which comprise data and methods to operate on the data. In practice, an object exists in a running system as a reference to a location in memory. A template for an object, with data layouts but no data, and corresponding methods (procedural code), is called a class. An object is created by instantiating a class (sometimes with a "constructor").

Murach, in his COBOL text, gives an example of a book inventory application. An analyst does some functional decomposition to identify the classes and objects. In Murach’s example, the objects can be “book inventory” (an item), file manager, book manager, and user interface.

In this “political science” application, the “objects” (again, corresponding to template classes) would be arguments, incidents, facts, media references, authors, and resumes. “Facts” could well be “subclasses” of “incidents” as they would be simpler. (An example of a “fact” would be the text or a specific state or federal statute.) From a resume, one could determine if a given author or contributor was a “professional” or “amateur.” Media would comprise books, periodicals, websites, movies, and television series and programs. Whereas in Murach’s example a “book item” is the specific object for a class “book” (that is, “my book” is really “my copy of the book” – definitely relevant to copyright law!) in this application, a book is really the piece of intellectual property. A better decomposition is with periodicals. The meaningful object will usually be a specific “periodical article” that usually has specific dates, page numbers, author, publisher, etc. The information roughly corresponds to footnote or bibliography information in a high school or college term paper.
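Since a project like this would, as noted above, more likely get done in a language like java, here is a rough java sketch of that decomposition (class and field names are only illustrative; "srcecode" is the item from my own prototype mentioned below):

import java.util.List;

// Illustrative decomposition only; not a working design.
abstract class MediaItem {                      // books, periodicals, websites, movies...
    abstract String citation();                 // polymorphic: each subtype formats its own footnote
}

class Book extends MediaItem {
    String title, author, publisher; int year;
    String citation() { return author + ", " + title + " (" + publisher + ", " + year + ")"; }
}

class PeriodicalArticle extends MediaItem {
    String title, author, periodical, date, pages;
    String citation() { return author + ", \"" + title + "\", " + periodical + ", " + date + ", " + pages; }
}

class Incident {                                // something that happened, backed by media references
    String srcecode;
    List<MediaItem> references;
}

class Fact extends Incident { }                 // e.g. the text of a statute: a simpler kind of incident

class Argument {                                // a position justified by a body of incidents and facts
    String srcecode;                            // ties the argument to its justification
    String stateArgument;
    List<Incident> justification;
}

class Contributor {                             // resume data helps establish amateur vs. professional
    String resumeText;
}

class DecompositionDemo {
    public static void main(String[] args) {
        Book b = new Book();
        b.author = "Doe"; b.title = "Example"; b.publisher = "Somewhere Press"; b.year = 2007;
        MediaItem item = b;
        System.out.println(item.citation());    // resolves polymorphically to Book's citation
    }
}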

Microfocus COBOL provides one template for object-oriented code in this language. Mainframe compilers may have templates that are a bit different. The overall scheme starts with a driver program. In the Environment Division there is an Object Section with a Class-Control paragraph that enumerates the classes. The Data Division, in the Working-Storage Section, will contain (binary – computational) object reference words for all of the objects in the application. It also contains working storage areas with the data items that apply to each separate object (for each separate class). The Procedure Division will start with an INVOKE of an object to create it with a "New" option. It then will invoke the various classes, and the applicable methods in each class.

Each class itself becomes a compilable COBOL program. As with the main program, in the Environment Division there is an Object Section with Class-Control that enumerates the classes. There follows an “Object” declaration with a “working-storage section” for that class, followed by a Procedure Division with the Methods for the class. What looks odd is that each Method itself has a Data Division and Linkage Section with the data values for the particular object that the method must work with, and a separate Procedure Division with the statements for the method. The program source ends with an End Object and End Class statement, even within the context of the Environment Division, and this will seem odd to programmers used to procedural use of COBOL.

It’s important to remember that this environment seems to suggest computing environments that may have been popular more than a decade ago, when software vendors wrote DOS applications that did not even require Windows (or something comparable) and where companies wanted to deploy “end user” controlled applications on their work stations, well before XML and web integration became more standard.

So how would this play out with my proposed political application? It's useful to think about what a typical application might be. One idea would be to trace all the arguments related to a particular topic, say, "mandatory paid maternity leave." I'll leave aside for right now the political and social controversy and just say that there seem to be compelling arguments on "both" sides. The application would first invoke the class to look up the argument (maybe by a free-form key – certainly an SQL lookup with the appropriate error processing). It would then invoke another class to track all the "incidents" related to each side of the argument. For each incident, it would have to invoke still another method (with some linkage parameters for positioning) to display the bibliographic objects, such as book or periodical or web references. (Such a program might provide hyperlinks with embedded image files, in a manner similar to the way banks and insurance companies or even payroll companies may display information to secured visiting customers.) Finally, for each bibliographic entry, it might provide the resume "object" of the contributor, so the visitor could assess the professional credibility of the source of the information. The visitor would experience a set of professional-looking web pages, linked forward and backward conveniently (perhaps with indexes displayed in a frame), and in a ten-minute visit could assess what is really going on with a controversial issue like this. With such an application, with all the factual information so well organized and publicly available, politicians would have a harder time with one-sided behavior.

Then consider how update applications would be designed. The argument class program would have a "create argument" method, which would accept the argument text and various items of supplementary information. In my own prototype, I have an item called "srcecode" which identifies a body of incidents or facts that would justify the argument. The method could establish (with an SQL Select) whether the "source code" has been used. If not, it could cause an incident panel to come up to force the visitor to enter justifying incidents or facts. The incident record(s) will require unique "incident codes" and bibliographic information, which will in turn invoke the bibliographic source class's methods (I call them "media" on my own hard drive). There would also be "contributor" objects and classes that would process resume data for people who contribute to the database, in order to help establish the credibility of what finally appears to visitors.
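Here is a hedged java sketch of that update flow (the table and column names follow my MySQL prototype – argtb, incident, srcecode, stateargument – but the rest, including the exact column list in the INSERT, is only illustrative):

import java.sql.*;

// Sketch of the "create argument" method: accept the argument text, then check
// whether its srcecode already has justifying incidents on file.
class ArgumentManager {
    private final Connection con;
    ArgumentManager(Connection con) { this.con = con; }

    // Returns false if the caller should first bring up the incident-entry panel.
    boolean createArgument(String srcecode, String stateargument) throws SQLException {
        try (PreparedStatement check = con.prepareStatement(
                "SELECT COUNT(*) FROM incident WHERE srcecode = ?")) {
            check.setString(1, srcecode);
            try (ResultSet rs = check.executeQuery()) {
                rs.next();
                if (rs.getInt(1) == 0) {
                    return false;                 // no justifying incidents or facts yet
                }
            }
        }
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO argtb (srcecode, stateargument) VALUES (?, ?)")) {
            ins.setString(1, srcecode);
            ins.setString(2, stateargument);
            ins.executeUpdate();
        }
        return true;
    }
}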

Typical classes:

Argument
Incident
Fact (subclass of incident)
Media item (could have different classes for books, periodicals, movies, websites, with subclasses and common methods – polymorphism)
Contributor (with resume)

Saturday, September 29, 2007

Corporate sites with javascript and database lookups for consumers: a tip


I've noticed that some companies that offer financial or retail services to customers online, when they develop web pages with javascript, sometimes place hard-coded text content (sometimes embedded in "newContent" parameters of javascript functions) that is viewable in browsers under "view source" but that is probably not appropriate for all consumers to see. Sometimes they place all the hard-coded content for all possible consumers on one page and really may not want every consumer to see it. It would be more appropriate for this text itself to come from a database and be viewable only by the appropriate visitor or consumer. No, I won't mention any names or misuse any information; I just wanted to pass this on as a programming issue.

Typically most of the pages will do one or more database calls (SQL) to find the information that the consumer requested. Sometimes the database calls are to image index files (images of mainframe documents) and get errors (often security or access-level related), leading to default error messages that are incorrect or misleading. Under "view source" in a browser (IE or Mozilla) the visitor can see his own information from the database and all of the javascript code, including the hard-coded text. Companies may really not want visitors to be able to see all of this.
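A minimal sketch of the alternative in java, a servlet that looks up only the text that applies to the logged-in consumer, so nothing extra ever reaches the browser. Every name here (the table, the columns, the session attribute, the connection string) is hypothetical:

import java.io.IOException;
import java.sql.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

// Hypothetical sketch: serve only the message text that applies to this consumer,
// instead of shipping every hard-coded variant to the browser inside javascript.
public class MessageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String custId = (String) req.getSession().getAttribute("custId");
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/site", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT msg_text FROM consumer_message WHERE cust_id = ?")) {
            ps.setString(1, custId);
            try (ResultSet rs = ps.executeQuery()) {
                resp.setContentType("text/html");
                while (rs.next()) {
                    resp.getWriter().println("<p>" + rs.getString("msg_text") + "</p>");
                }
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}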

Friday, September 21, 2007

History of Computing Culture 103: Bradford and New York State MMIS (finally on IBM), 1977


What followed NBC was a migration to IBM. Since there were only six or seven Univac installations in New York City in the mid to late 1970s, if someone had a mainframe IBM background he or she could become much more marketable. So I managed to get an interview with Bradford National Corporation when it had to staff up suddenly after getting a contract for New York State Medicaid Management Information System (in 1977). On May 31, 1977 I started there at 100 Church Street in lower Manhattan (a building that would be slightly affected on 9/11). I remember riding down from the headquarters at 1700 Broadway and being told that we were “consultants.”

Bradford National Corporation would eventually be bought by McDonnell-Douglas in the 1980s.

In those days, you wrote program specs in handwriting and gave them to a typing pool. We had a terminal row or "tube city" and used Roscoe procs to compile programs. I worked on the back end, or MARS ("Management and Administrative Reporting"). The system consisted of an extract from the claims detail, sorts of the extracts in various sequences or "legs," and then the reports. New York State auditors came down to analyze the system tests, with the most sensitive reports being those on nursing homes, since SNF's had more federal reimbursement than (custodial) ICF's. All the files were tape, and the end-of-month reports with 1978 technology took an extremely long time to run. But the operating system was already MVS, with all programming in COBOL.

I had nineteen months of MMIS experience. In 2002 and 2003 recruiters started calling programmers with MMIS experience, but most jobs required two to five years of it. It must have changed a lot since then.

Wednesday, September 19, 2007

History of Computing Culture 102: NBC (with the RCA Spectra and Univac 1110, in the 1970s)



Let us continue! I got a call from a director at NBC, who had moved up there from the RCA operations research career program (see Sunday's post), once my resume was on the loose. On Monday, August 12, 1974, three days after Nixon's resignation, I started as a programmer-analyst. We worked on the 14th floor of a satellite wing at 6th Ave and 49th Street in the (now GE) RCA Building (there was no 13th floor), with the Univac 1110 and RCA Spectra on the 8th floor.

I even remember Gerald Ford speaking to the nation that night. “I am a Ford, not a model T.” But during the first week of September (after a Labor Day weekend in Mexico City to “celebrate”) I moved into the Cast Iron Building on 11th and Broadway to start a new life. I sold the car to a Univac employee.

The pace was slower in those days. The project was to implement a new general ledger system. One had to read the transaction tapes on the Spectra 70 and convert them to be readable on the Univac 1110. For the ledger itself we purchased a package from Infonational and converted it to Univac ASCII COBOL, which did not cause significant problems.

The Spectra part was the first COBOL program that I ever designed. This was done all with punched cards. This was also in the days before structured programming, go-to-less programming, self-documenting code, top-down testing, etc. were the expected norms. So aesthetically my first programs on that machine were ugly to look at. But once implemented, they ran perfectly every accounting closing. They needed to, because fixing them on the fly would have been unthinkable on an old computer.

Working with the purchased COBOL programs on the Univac was much easier. We had teletype terminals that did not have a CRT display but kept a paper record of what you typed and of the system's responses. We had semi-private offices, with two people per office. There was a rule against "compiling in demand" during normal business hours, but you could schedule a batch job to compile and link and usually it ran right away. Exec 8 was very convenient, much less verbose than IBM DOS or OS JCL, which I would encounter later in my career. It also had an automatic jobstream generator, SSG, which IBM didn't replicate until JES2 and JES3.

Accounting cycles usually consist of daily or weekly voucher registers and proofs, including a final proof at end of month. Each proof was printed in carbons and comprised many stacks of greenbar computer paper that was separated and given to users. Accountants make adjusting entries to the proofs. There is also a chart of accounts, which is maintained, in those days with batch jobs run before the cycle. End of month could be a bear because of the huge detail sort in the last voucher register. In those days, it could take a Univac 1110 three or four hours to sort 300,000 records or so. I learned what it was like to be "on call" for my own applications. By the mid 1980s, an Amdahl or IBM mainframe could do the same in a few minutes.

The mechanics of how we worked deserve note. The paper tape came in handy. These were days long before sophisticated system security and “separation of functions” according to the job. Programmers had full update access to production files. We often set up test files as copies of production ones. (That is not acceptable today in many shops because of consumer privacy, but this was decades before modern security and privacy concerns hit the media.) If a programmer inadvertently reversed the order of file qualifiers in a copy (GL and XGL, for example), a production file could be overwritten and it would not be noticed until after the closing was run. So we kept the hardcopy terminal tapes of exactly what we did; that was the “security.” By the late 1970s, however, companies were learning that it would pay to install security systems and safer ways of working.

This was a job. In time, I came to understand the virtue of good coding practices as we now know them. Generally, I did not think much about the “glamour” of the media. Television studio tours were available (you didn’t dare visit them during working hours, or you could get fired.) The one exception was when we were invited to work on soap opera sets for a few weeks in the spring of 1976 during the NABET strike. (link May 27). That was an interesting taste of the “real world.”

I certainly wonder how the information technology environment must have changed, several times over, since the 1970s, with the GE and Universal mergers, and the new generations of web technology and monumental changes in the legal and reporting environments.

Tuesday, September 18, 2007

History of Computing Culture 101: non-IBM, without a "marketing profile"


Carrying on the History of Computing Culture 101 that I started and presented on Sunday, I venture further into the subject of non-IBM mainframes in olden times – and especially of trying to sell them.

In the early 70s, besides IBM, the other players were Univac, Burroughs, NCR, RCA (with the Spectra line), Honeywell, DEC (of later VAX fame) and Data General, and General Electric. In time, they would drop out or merge and various brands would get eliminated. Univac was probably the largest competitor; Sperry Rand, owning Univac, was a large conglomerate with a major skyscraper near Rockefeller Center in New York. It might have outpaced IBM given how things looked in the 50s; IBM, though, turned out to be the better marketer. Univac had an efficient, easy-to-learn-and-code JCL called "Exec 8," with simple commands that are a bit like today's Unix. Univac sold three large mainframes with its proprietary architecture (1106, 1108, 1110), and a "minicomputer" imitation of the 360 called the 9000 series.

In the spring and summer of 1972, a couple of friends at NAVCOSSACT left and went to work for Univac as instructors in its education center in Tyson's Corner, VA. I almost did that. I had an interview with Univac at Bell Labs (a revisit), and I remember a bizarre question from the Univac interviewer: "Do you like programming?" Then, I was 29 and wanted more adventure—my friends had it. On Aug. 23, 1972 I got a sudden call at home from a Univac branch manager in New Jersey. I went up and interviewed at the Montclair branch on Aug. 30 and started a "new life" on September 25.

I was officially a "Systems Analyst" and the job was to support sales teams at client sites. I was assigned to Public Service Electric and Gas in downtown Newark, which gave easy access to New York City. My personal life (other blogs) was "changing," but I had a convenient garden apartment in Caldwell, with pretty efficient bus service. There were five staff members assigned to the account, and I was the "processor support person" for FORTRAN and COBOL. At the time, there was still a lot of FORTRAN. In the "Management by Objectives" jargon of the time (now the buzzwords are "Total Quality Management" and "Team Handbook"), the goal of the team was to get an 1106 machine on rent by a certain date. A couple of staff members spent all their time analyzing panic dumps (and installing fixes with SYSGENs) from system crashes, which did happen then. (Dumps in Univac were in octal, not hex; the most common character sets were Fieldata and Ascii.) Essentially they were what we call today "systems programmers." The following spring, we had benchmarks of an 1110 at the test facility in Eagan, Minnesota, a suburb of Minneapolis-St. Paul, just off the 494 "strip" (where the Mall of America is now).

Univac tended then to be ahead of IBM in programmer online access; most programmers at PSEG had their own terminals, some of them teletype, a few cathode ray. It also made sophisticated keypunch equipment; in fact, the third floor of the Montclair branch office (where I had a little-used desk) was a major center for keypunch distribution and sales. While at PSEG I wrote an assembler program, called BIGBR or "Big Brother," to read the log tapes and monitor how much use each programmer made of various facilities. That wasn't that big a deal there, but in those days computer time and use was expensive, and in some companies programmers could be penalized for needing too many "shots" to get a program working. (This was particularly true overseas.)

After the benchmarks, the branch manager came to the conclusion that I did not have a "marketing profile" (what does that mean between the lines?) and ought to transfer. (On Oct. 2 on this blog, I had talked about "Can Techies Sell?") Dress at Univac was not the big deal that it was at IBM; my first day on the job I wore a chartreuse-colored suit, and other reps had lively, sometimes flamboyant suits that would not have met the more conservative standards at IBM and certainly EDS at the time. Even the salesmen were a bit showy. (EDS, in a memo that I saw once, claimed that the dress code was intended to gain the confidence of customers who did not understand computers.) Rumor had it that companies like that told you what kind of car they expected you to drive. (I had a Pinto.)

I did get to take a two-week COBOL course at Tyson's, from one of the friends who had left before I did. That was my first introduction to what would become the mainframe procedural language for business applications for three decades. It wasn't apparent how important COBOL would get until the early 70s, after which so many financial, manufacturing, and retail companies would write their own in-house applications, before the large software vendors grew.

I was assigned to a smaller account, Axicom or Transport Data, for a while, before interviewing with the Bell Labs account and getting transferred to the AT&T account in Piscataway, NJ, farther away from the City and less convenient (although close to the Metro Park commuter station and on the "Blue Star" route). I got an apartment near Bound Brook, near the Raritan River, which has flooded twice since I was long gone.

Pretty soon, I was invited to travel repeatedly to St. Paul for another 1110 benchmark, the object of which was to process the magic "1150 transactions per hour" on a new 1110. The Bell Labs programmers had written complicated simulations of the transactions that had to work, with lots of DMS-1100 calls. That database followed the network model also used by IDMS on the IBM mainframe, with a DDL and schema and location modes like CALC and VIA (set). I trained myself by writing a little DMS-1100 application for my classical record library. Now, you ask, isn't that computer use for personal business? Yes it is, but in those days it was okay if there was a legitimate learning purpose. Security and misuse (despite the expense of disk space and computer time) was not the big concern then, even on client computers.

We ran the transactions from punched card decks, and at the time keeping the decks organized and ready was part of the job. We usually had computer time from 4 to 12, but as the final demos approached I certainly remember the all-nighters and the exhaustion. I was well into adult life. One of the biggest technical problems was the DMS-1100 "rollbacks" caused by "deadly embrace" or "Catch-22" deadlocks, which were finally resolved by having the parallel transactions process their database updates in the same sequence.
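That fix generalizes: if every concurrent transaction touches shared resources in one agreed order, the circular wait behind a "deadly embrace" cannot form. A tiny java illustration of the principle (the record objects and amounts are, of course, hypothetical):

import java.util.concurrent.locks.ReentrantLock;

// Deadlock avoidance by consistent lock ordering: every caller acquires the two
// locks in the same (id-based) order, so a circular wait cannot occur.
public class LockOrdering {
    static class Record {
        final int id;
        final ReentrantLock lock = new ReentrantLock();
        int balance;
        Record(int id, int balance) { this.id = id; this.balance = balance; }
    }

    static void transfer(Record from, Record to, int amount) {
        Record first  = from.id < to.id ? from : to;     // always lock the lower id first
        Record second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance   += amount;
            } finally { second.lock.unlock(); }
        } finally { first.lock.unlock(); }
    }

    public static void main(String[] args) {
        Record a = new Record(1, 100), b = new Record(2, 100);
        transfer(a, b, 25);
        transfer(b, a, 10);
        System.out.println(a.balance + " " + b.balance);  // prints 85 115
    }
}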

After the benchmarks, I was assigned to the AT&T account, and spent a lot of time in lower Manhattan, and some in Westchester county. It’s hard to get anywhere just troubleshooting and supporting customer’s applications, unless one moves into marketing. It was apparent that I should code my own applications again, and I wanted to move into the City. That started the next chapter of my career.

Monday, September 17, 2007

Remembering Y2K: When have you tested everything? What data do you keep? What data covers everything?


Remember Y2K? Back in 1999, we had a big workplace debate just on what, from a philosophy 201 perspective, constituted a satisfactory data repository of evidence that all of our systems (in a life and annuity company) would perform properly on and after Saturday Jan. 1, 2000 (and for that matter Mon. Jan. 1 2001), since it was necessary to expand the year to a four-digit position. (It’s a bit more involved than that with some systems, but that was the idea.) There were several questions: what jobs should be run? Which cycles should be run? (End of month? End of year?) What printouts or files should be saved? (File-file compares? Reports? Test data?) What production data should be extracted? How would it be collected and stored? In the fall of 1999, we did wind up boxing a lot of JCL, reports and screen prints and shipping them to an official warehouse. Y2K came and went without a hitch.

We had a similar exercise early in the year with a disaster recovery fire drill (at a company called Comdisco) that I remember well. What data and files do you collect, and what do you run at the backup site to prove it all got copied?

Back in the early 1990s we had philosophical discussions of this sort. One had to make sure that all possible situations were covered by test cases or by extracted production data or selected production cycles (now a bigger issue than then because of privacy considerations). Before any elevation, there would be the exercise of parallel cycles, file-to-file compares, and saving evidence, in the form of printouts, screen prints, and sometimes just on disk or offloaded to diskettes (maybe copied to the "LAN," which then was a real innovation). Because of "personal responsibility" I kept quite a library of test runs in big black three-ring binders, low-tech. This way it was possible to prove that the system was tested properly if something ever went wrong. That may sound like a lack of confidence.
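For anyone who has not done one, a "file-to-file compare" is conceptually simple; here is a minimal java sketch (the file names are placeholders) that reads a baseline report and a parallel-run report in step and flags the records that differ:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Minimal file-to-file compare: read two report files in step and flag differing records.
public class FileCompare {
    public static void main(String[] args) throws IOException {
        try (BufferedReader baseline = new BufferedReader(new FileReader("baseline.rpt"));
             BufferedReader parallel = new BufferedReader(new FileReader("parallel.rpt"))) {
            int line = 0, diffs = 0;
            while (true) {
                String b = baseline.readLine();
                String p = parallel.readLine();
                line++;
                if (b == null && p == null) break;         // both files ended together
                if (b == null || p == null || !b.equals(p)) {
                    diffs++;
                    System.out.println("Difference at record " + line);
                    if (b == null || p == null) break;     // one file is shorter than the other
                }
            }
            System.out.println(diffs == 0 ? "Files match." : diffs + " differences found.");
        }
    }
}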

There was also the issue, emerging then, that source management software (then CA-Librarian, today usually CA-Endevor) had to be used properly to guarantee source-to-load-module integrity.

In fact, as far back as early 1989, I essentially “saved” a small health care consulting business (small then, big now) by saving a huge paper library of test runs. I spent three weeks desk checking numbers in a windowless office with no personal PC terminal. When a major client questioned our numbers, I was able to prove we had run everything properly. Re-examination of Federal register specs and of COBOL code from a federal program showed a discrepancy within the government’s own work. When I replicated federal code in our system, we quickly got the results that the client had expected after running the model and simulations.

There is a lesson in all of this. Remember that undergraduate Philosophy 101 course where the professor asks “how do you know what you believe?” or something like that. Remember those essay questions on epistemology? (I got a B on that.) Systems testing and quality assurance is all about that, when a system must run in production and process millions of client transactions daily, perfectly. It’s volume, buddy, along with absolute perfection. That’s what mainframe culture was all about.

It seems that one can blow this kind of question up when we look at major issues today. How do we know that we have collected all of the relevant data or cases and that it is right?

Sunday, September 16, 2007

Army-Navy and in between: RCA Spectra and Univac: mainframe history


Theoretically, my two years of Army service, 1968-1970, belong in this history. I "volunteered for the draft" (really, "enlisted for two years"), wound up with an RA number (RA11937256), and took that 95% chance of winding up in the infantry, the queen of battle. Well, my MOS out of Basic (even given a few weeks of STC – Special Training Company) was "01E20" – Mathematician. I spent the summer of 1968 – three months – in the Pentagon, and, after a mysterious transfer, the rest of my hitch at Fort Eustis, VA ("Fort Useless") with the Combat Development Command Transportation Agency (USACDCTA) in that "white building" that no longer stands.

That service, in theory, should have provided computer experience. It was minimal, to say the least. At the Pentagon, we coded sheets classifying units as Combat, Combat Support (Engineers), and Combat Service Support (CSS). At Fort Eustis, we coded a library cataloguing system (on coding sheets) called SPIRAL. That kept me out of the rice paddies, a morally controversial ploy in its day. I got a chance to study and read about a simulation package called SIMSCRIPT.

In the middle of 1969 I started researching what my first job would be. Companies would respond with form letters on fancy letterheads, but some of them bit. I got flown to an interview with Rand in California (Rand would write the million-dollar, largely unheeded 1993 proposal on how to lift the ban on gays in the military) because of my Simscript background. I was flown to Syracuse in December to interview with GE Heavy Military Equipment, and to New Jersey for Bell Labs and for RCA Labs. In those days, companies paid the interviewing expenses for people with graduate degrees (I had the MA in math from the University of Kansas).

Rand and GE lost some budget with Nixon’s cutbacks, already taking hold. But Bell Labs and RCA came through with offers. With both interviews, I had to give technical talks on my Master’s Thesis (“Minimax Rational Function Approximation”). I wound up taking the RCA offer, the Operations Research Training Program at David Sarnoff Research Center in Princeton NJ, near the Princeton Junction station, on Route 571, a few miles from the University. (I understand that this Center now belongs to SAIC.) I lived in an apartment in what was then called Cranbury and is now called East Windsor. (RCA also had an MIS training program, where programmers roomed in a motel while being trained for ten weeks in COBOL and assembler, something that conjures up ideas of how EDS used to train its systems engineers during that era).

Operations Research conjures up ideas of linear programming and optimization. It does include these. However, at RCA, the program consisted of a few “assignments” at various RCA locations. After three months at the labs, I was sent to Indianapolis, to a television manufacturing plant. I was supposed to complete a dynamic programming model to optimize production lines. The model was written in Fortran and to be run from punched cards on an RCA Spectra 70. At the time, Spectra was pretty much a clone of IBM. It had the same assembler and languages. The system was totally inadequate for processing the algorithm. Today, there would probably be nothing to it and I suspect that there are dynamic programming algorithms to solve this kind of problem in java libraries.

I also worked on a manpower allocation model at Cherry Hill, NJ. We would work on TTY terminals (with paper roll output, no CRT) and diddle around with the data.

This did not result in an offer. RCA television sales and other sales dropped off in 1970, and I was laid off in February 1971. That was my only layoff until December 2001 (thirty more years). Luckily, I knew someone in the Navy department, and that would lead me to Univac. So I went back to work for the military as a civilian, working on Fortran simulations on a Univac 1108 under "Exec 8," a command-style JCL that resembles Unix or Linux, at the Naval Command Systems Support Activity (NAVCOSSACT) in the Washington Navy Yard, now unrecognizable with all of the development. I used to park on Water Street, not too far from the new Nats Stadium.

Thursday, September 13, 2007

My career began on the IBM 7090 (in 1965)


It seems as if I stumbled into information technology as a career as a safer choice than music and piano, especially in a Cold War world with a draft. I actually got in on the government's dime. My first formal job, from 1963 to 1964, was at the National Bureau of Standards (at Connecticut Ave. and Van Ness St. in DC, now the site of the University of the District of Columbia, but at the time a brick-building campus, complete with underground tunnels, that would become Federal City College) as a GS-4 chemistry laboratory assistant (rheology, measuring the viscosity of standard oils). But the first job to launch me somewhere was at the David Taylor Model Basin, now the Naval Surface Warfare Center, Carderock Division, in Carderock, MD, right where the I-495 Beltway crosses the Potomac. (The notorious Beltway was already there then.)

The job comprised Fortran programming on the IBM 7090, a predecessor to the 360 architecture. We spent every morning of the summer of 1965 in training, a good deal: a college-level course on the government's dime while being paid. The last summer (1967) I was a GS-7 because I had my BS from GWU. At the time, defense was all the rage, and the projects had to do with underwater buckling pressures. (A bit of positive karma, or foreshadowing, for my 1993 Norfolk "civilian" submarine visit that would be discussed in my book.) We got to walk through the wind tunnels. We got a field trip downtown to see the "new" 360 that first summer. We submitted decks to compile and execute and had little reason to learn the "JCL." There was also an assembler language, called SAL and SLA, much simpler than the mainframe MVS assembler that IBM would later develop.

During the 1966-1967 academic year in graduate school at the University of Kansas in Lawrence, I worked on similar projects as a research assistant for physics professors, on a General Electric machine, in Fortran, and the technique was pretty much the same. At least I learned that there were other mainframe companies "trying" to compete with IBM, a situation that would become more and more important as my "career" evolved. I think the last movement of the Shostakovich 13th Symphony is called "A Career."

In those days, we coded on sheets with columns, and turned them in for keypunching. Or we keypunched ourselves, and got good at it quickly.

In two years, man would set foot on the Moon. What a time.

Saturday, September 08, 2007

An old lesson on the risks of uploading anything even to "private space" (with 1981 "mainframe" technology)


In 1979, after four and a half very interesting years living in Greenwich Village, I left New York City for Dallas to work for the Combined A&B Medicare Consortium ("CABCO") of (then) six Blue Cross and Blue Shield plans around the country. "The Project" was supposed to develop a state-of-the-art Medicare claims processing and reporting system, competing with EDS (a circumstance that created an immediate conflict for the host Texas plan). That would start an interesting 9-1/2 year period in Dallas, some of the personal aspects of which I have discussed on other blogs.

I have written in some detail about what happened there, especially on Nov. 13, 2006 ("End User Computing Flexibility"), March 27, and July 16. We used a system development methodology from M. Bryce Associates called "Pride-Logik" that tended to control the development process and run the show. The project manager had a sign in front of his office in the Zale Building (in 1980 the project moved up Stemmons a bit): "Abandon All Ye Who Enter Here." He had a certain style of Zen management that did not rein in the political and turf battles of the six plans. That led to the failure of the project after three years of running around in circles. This was harmful to my career: I spent close to three years getting some good design experience (IMS and CICS, the staples of the time), but we never implemented. We did hire "programmers" near the end who coded a few reporting modules under my supervision. That was another thing: we had a division between "analysts" and "programmers," which may seem passé today, except that today the analysts would be called "architects" and would often be coordinating work done by programmers overseas. Had the project succeeded, I did intend to "move up," because IMS and CICS were considered a competitive production environment by the job-market standards of the day.

There was one particular incident in June 1981 (I left in October, seeing "the handwriting on the wall"; the project folded at the end of January 1982) that teaches a lesson oddly relevant to today. We had been writing pseudocode (as part of "Phase 4") and had gotten a small in-house mainframe (a 3330, I think; at the time a big deal). We had a crude TSO system (I don't think we had ISPF, and certainly not Roscoe), and I had saved some bits of pseudocode in a dataset for reuse in various specifications. One day in June a "librarian" pulled off all of our work and gave it to others to "review." I had not intended that dataset to be "public" and was shocked when people thought I had handed in "junk" work. It was supposed to be a private work dataset, much like a private Word document on a home PC (and not published to the Internet). Or perhaps it was analogous to an element of a social networking profile not published to the world but whitelisted to a known list, and which still "gets out." In those days, people worked with paper and pencil a lot (we hand-drew system flowcharts and structure charts, long before the days of Visio), and logging on to "the system" implied a bit of a commitment. But actually, the MMIS shop at Bradford in NYC, where I had worked in 1977-1978, had been more advanced, with Roscoe and individual office rooms, but it was still a "tube city."

Monday, August 27, 2007

Expression Web and existing FrontPage sites; McAfee note


Late in 2006, Microsoft announced that it was replacing its popular web content editor, FrontPage, with a much more expansive product, Expression Web, part of an Expression Studio suite that also comprises Blend, Design, and Media. The basic link on Microsoft is this: The product is moderately priced, $299 for a new license. Yes, I suppose we could call it "self-expression web," but the Microsoft documentation talks a lot about work teams in different locations.

The Microsoft announcement on the FrontPage link is this:

The new Expression Web emphasizes building websites to much more stringent W3C standards, and uses tools found in Visual Studio. It tends to produce "standards-based" web sites physically organized in a way that parallels the logic of the content, whereas manually developed sites (like my doaskdotell.com) tend to have directory structures that seem artificial compared to the content and depend on links to connect the dots. Older mainframe database management systems (like IMS and IDMS) use logical relationships to connect data in a way analogous to some older content-rich sites. The product also offers facilities that make data from a site easier to display on other devices (such as mobile) or for handicapped access. Expression offers multiple task panes and can work with an ASP.NET environment. It emphasizes working with CSS (Cascading Style Sheets), instead of direct HTML, for formatting and layout. Webbots do not exist in Expression Web.

The most recent version of FrontPage was issued in 2003, and Microsoft stopped selling it in late 2006. One question that comes up is: what if one has an existing site managed at a shared hosting ISP through FrontPage extensions, and one's machine breaks, or one has to travel with another machine? Can one still update the site? If one used WS-FTP, it might break the extensions. I am not sure what happens, in terms of licenses, if the same CD is loaded onto another computer (the user might get a trial of 50 accesses). But since Microsoft offers a free trial of Expression Web, can one use that? It's actually pretty hard to find the answer on Microsoft's own site, and some website message boards suggest that this cannot be done, and that a webmaster might as well start over with a "professional" standards-based site; he or she ought to anyway. I found a position paper here on the web:
http://download.microsoft.com/download/f/f/2/ff2d736a-9ec6-4e3b-b094-d782aa7cda72/Microsoft_FrontPage_to_Expression_Web.doc

It is a .doc file, so I'll let the reader paste the link into the browser. The white paper says on p. 2:

“If your existing site uses Web components, you can still edit those components using Expression Web. However, you won’t be able to add new Web components.”

So the short answer seems to be that Expression Web is downward compatible and can update an existing site, but it won't allow adding any more FrontPage web components. Furthermore, it does not require or use "extensions," which are specialized scripts that can "break" easily. I remember going through the exercise of repairing broken extensions with a friend at work on my older site in early 1999 (when the boss wasn't looking). It wasn't easy and took a week (with calls to Microsoft) to resolve.

Expression Web offers MSNBC components, feeds and links. This would seem to lead to the possibility of networked journalism. It appears that this may work only with sites built with Expression Web.

One other recent matter: on July 30, McAfee replaced its Security Center, which seems to have affected some customers, even causing loss of Internet connectivity. McAfee offers a Virtual Technician for resolution; you can download it at https://us.mcafee.com/root/fix.html. Also, look at "http://ts.mcafeehelp.com/?siteID=" for the "top ten issues." What happened with me was that (1) the new VirusScan no longer removes advertising cookies and (2) it no longer shows "critical files" at the beginning of the scan. I don't think these are problems; I think McAfee is repackaging its products. If anyone knows, I would appreciate a comment.

Tuesday, August 21, 2007

Adobe and PC/Microsoft environment for movie making and advanced web


To make movies in a Windows Vista or XP PC environment today, one of the leading combination packages appears to be Adobe Creative Suite 3 Production Premium, which incorporates Flash and After Effects (for special effects) along with Premiere Pro CS3 (roughly the functionality of Apple Final Cut). Apparently many of the features are available on Macintosh (some features require Boot Camp). The cost today appears to be about $1700. The most ambitious suite is Adobe Creative Suite 3 Master Collection (click on "What's Inside"), for about $2500.

Production Premium
Master Collection System Requirements (many of the applications call for a 7200 RPM hard drive and 2 GB of RAM)

The Dell XPS machines, now available with Vista, appear to meet the requirements. The 410, at around $1100, looks quite impressive (with Blu-ray Disc), and the gaming equivalent, the 720, is about $1700. On DVD, I still think there is quite a shakeout coming over the ultimate universal format and over dealing with all of the DMCA copy-protection issues, which many hobbyists resent.

The machines offer Vista. A few years ago, XP users were expected to choose between Media Center and Professional (or Home), and neither platform really suited all possible needs.

Back in early 2001, Sony Vaio sold a Windows ME (remember that?) machine that it called a "movie maker," a year before the iMac. The Dell 8300 that I have now is a movie maker, with Roxio and a 7200 rpm hard drive, but its features are primitive compared to these products.

Update: 11/8/2007


The Dell 410 no longer exists and has been replaced by the 210 and 420. Link is here.
With the dollar sinking (per news media reports today) amid budget deficits, I'll be watching the prices of computers and electronics more closely, and might come up with a systematic scheme for reporting them. We need to do more manufacturing at home again.

Saturday, August 18, 2007

Apple: iMac, MacBook, and movie making


Recently some friends have asked me about the Mac. I have an original iMac from 2002, which had certain problems when I tried to burn DVDs, and it had some problems with IE. I remember buying it at the Apple store in the Mall of America near Minneapolis, an awkward location (even on the first level) since it is a long way to parking; but I remember the "Genius Bar" there for support. Right now I use it to play DVDs. I used to have a technical support subscription, and once replaced the original Mac OS X with OS X Panther (there are newer "feline" OS's). Once, to get the DVD burner unlocked, I had to call Apple Support, and they had me go into Terminal and enter some bizarre Unix commands executing various proprietary Apple scripts (rather like Perl) to free it.

I've looked at what Apple offers now. A couple of years ago they were promoting the G4's, G5's, and so on. Now they use the buzzwords MacBook and MacBook Pro. The main Apple Store website with comparisons and prices is this:

The most expensive of these machines runs $2800 and has a 17-inch screen, fine for anamorphic films.

Many home users would be satisfied with the cheaper iMacs; the page is here, with the top machine running $2300.

My interest would be film editing. Of course, the iMac has iMovie, a simplified editor that is good enough for "amateur" films and has some sophistication. I've made a 34-minute documentary film with it that is still technically too crude to present formally. The professional editor is Final Cut Studio, which can be enhanced with Logic Pro for music and Shake for advanced digital compositing (animation, rotoscoping, some of the special effects one sees in directors' contests like "The Lot" on Fox). The websites are
http://www.apple.com/finalcutstudio/ (about $1300)
http://www.apple.com/logicpro/ (about $1000)
http://www.apple.com/shake/ (expensive, about $5000)

It appears that these run on the MacBook Pro. (I'm not sure about the iMac.) I took a course in Final Cut in 2002 in Minnesota at IFP MSP, and at the time, smaller Apple machines were linked in series to run it. It has come a long way. For a while, Apple was offering a reduced price on Final Cut with some computer purchases, but I don't see that offer now.

Any visitor who can give links or info on what runs on what is encouraged to comment.

Wednesday, August 15, 2007

Microsoft Visual Studio .NET keeps evolving


Right after my layoff and retirement from my old-school IT job at the end of 2001, computer magazines had plenty of articles like "What's Hot, What's Not." At the time, COBOL was on the NOT list (this was two years after Y2K). COBOL seems to be coming back now, but in the meantime I've looked at all of the other niches around. And the market, since 2000, has indeed fragmented into a mishmash of specific areas, and the nimble professional has to maintain leading-edge expertise in some of them.

One of the "HOT" things in early 2002 was supposed to be Visual Studio .NET. I took a course in it with C# (which, like Java, is strongly typed, but which I found a bit simpler and more straightforward) at Hennepin County Technical College south of Minneapolis (while still living there). From a class that met once a week for three hours, it was hard to get enough expertise to be marketable. It takes real involvement and building something.

Visual Studio, connecting to ADO (for databases) and ASP (for web development), provides a development platform for complex applications with menus, processes, scripts, and behaviors in an object-oriented environment. It is comparable to what used to be done on the mainframe with products like Telon, though it carries the same meta-language skill mentality.
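
As a very rough analogue, here is what the data-access half of that pattern looks like in Java with JDBC, the closest thing I can sketch outside the Microsoft stack; the connection string, table, and columns are made up for illustration, and a real ADO/ASP application in Visual Studio would of course be written in C# or VB against ADO.NET.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative "connect, query, loop over the result set" pattern,
// the same shape an ADO-style data layer takes. All names are hypothetical,
// and a JDBC driver for the target database must be on the classpath.
public class ClaimsLookup {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://localhost;databaseName=Demo"; // hypothetical
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT claim_id, status FROM claims WHERE status = ?")) {
            ps.setString(1, "PENDING"); // parameterized query, much like an ADO command object
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("claim_id") + " " + rs.getString("status"));
                }
            }
        }
    }
}

The presentation half (ASP or ASP.NET pages on the Microsoft side, servlets or JSPs on the Java side) would call something like this and render the rows, which is the division of labor such a development platform is organized around.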

Here is a link for a .NET Developers blog.

Here is a link for the Visual Studio Development Center for 2005. It offers a Beta version of the 2008 product.

Here is the pricing chart.

Here is the Comparison Chart.
The Express version is free (it takes about two hours to download at high speed) but limited in functionality; you can't run ADO and ASP on the same machine in Express. If you pay for the Developer's version, you apparently get a copy of SQL Server with tools to maintain a database easily (as you can with Access). You then copy the application to a web server that is enabled to run it.

Microsoft MSDN offers such hosting of the .NET Framework, with free introduction, here. Also look at this. Other larger ISPs offer shared or dedicated Windows Server hosting that can run .NET applications.

Microsoft Press offers workbooks in the various languages (like C#, Visual Basic, etc) with sample databases and applications. Most of these will work with the Express version.

Microsoft offers a 26-minute film, "Orcas Beta 2," on the new .NET, in which S. Somasegar and Scott Guthrie talk about the project management and development issues of the new release (for example, how the checkout process associated with source/module management was tightened by test automation). They also talked about "shell managed code" as simpler than C++, and about the paradigm changes in C# 3.0. The film gives a good feel for what it is like to work as a developer in a state-of-the-art software engineering environment. It is more demanding than many people realize.

Here is the link leading to the film. It played only in IE (Mozilla didn't work).

Monday, August 13, 2007

Customer service agents can work from home


Last week, ABC's Good Morning America did a broadcast (Tory Johnson: "Take Control of Your Life" link) on work-from-home opportunities. Although such offers vary widely in credibility, the broadcast mentioned three companies that offer people the opportunity to become home customer service agents for various kinds of retail and catalogue sales and other services (sometimes insurance). The pay can be over $20 an hour. Most jobs require a minimum of 15 hours a week (preferably more), and most new agents will be expected to work nights and weekends.

The tradeoff, in exchange for no commuting, is that associates must supply their own computing and communications hardware and software (properly licensed, of course). There needs to be a quiet room in the home where family members do not intrude, and some space for business materials. On the computer itself, there needs to be certainty that business and personal stuff will not be intermingled.

The report said that up to 10,000 home customer service agents might be hired by December. Job applications usually require a background investigation (for fraud convictions) that the applicant may have to pay for. There is also a series of progressive interviews on home computer skills and phone skills. Many people who apply are not hired.

The main companies are
Alpine Access, with two important sublinks: one on being an employee instead of a free-lance agent, and one on agent skills and computer requirements.

LiveOps FAQs.

Arise, whose computer requirements (pdf) are more specific. Arise automatically checks home computers linked to its system for the presence of other software that it believes could compromise security and also checks for spyware or malware.

Generally, companies require a stable home computing environment with at least Windows XP (Professional preferred), a dedicated landline business phone, and a hard-wired high-speed Internet connection (DSL or cable). Agents may need two separate ISPs and must use business-only email addresses (not popular free services or services usually thought of as home products, like AOL). As of now, companies are not willing to work with agents who would use wireless connections, although my own observation is that wireless is improving and becoming more stable and more secure, so this could well change. Some companies are starting to work with Vista, but not all client groups can support it; Vista is more secure and is likely to become standard. Some companies and client groups cannot accommodate the Macintosh, but that could change too, as computer experts tend to consider the Macintosh more secure.

Some companies say that they cannot work with Internet Explorer 7.0 yet, although it seems to be in production from Microsoft and well tested.

All companies insist that home computers be well secured with a complete anti-virus security suite and anti-spyware tools. Vendors like McAfee, Norton, and others offer packages that need to be checked to be sure they include all the required components. Among the companies, Arise in particular is very strict about not having software on the "work computer" at home (or networking it in certain ways) that could make the computer more vulnerable to hackers (leading to compromise of client information). As a result, an agent could find it more practical to purchase a new computer just for work. Generally, an adequate setup is available from Dell or a similar company (like HP) for $500-$700, usually with McAfee or Norton pre-installed, and will typically work properly when set up. (Some software might have to be removed or never enabled.) Be careful if the new computer comes with Vista.

But these companies have some work to do in keeping up with rapidly changing personal PC platforms.

Update: Jan. 28, 2008

NBC Nightly News tonight had a story on companies returning call center work from overseas (where there were language problems) back home, so there may be a considerable boom in the need for home telephone agents, with a larger list of clients. The CEO of Alpine Access was interviewed.

Update: Feb. 27, 2008

There is a story today on AOL giving a large list of companies that hire home workers. The link is here.

Sunday, August 05, 2007

Owning computers over the years


Over the years, I’ve owned a lot of personal computers. It’s useful to run down the list.

1981, Dec. Radio Shack TRS-80 ("Trash 80") with a Radio Shack dot-matrix printer and a 64-character black-and-white monitor, $3700. Bought an assembler and an early word processor for it.

1985 AT&T 6300, MS-DOS, 20 meg hard drive, from Sears, with Q&A for word-processing, spreadsheet, and database.. About $2500.

1985, soon after. HP laser printer, about $2200.

1988, Dec. AST Research 286 machine, MS-DOS, monochrome, 40 MB, with R:Base (one of the first PC DBMSs to offer SQL; Ashton-Tate would soon follow with dBase IV). About $1900. Soon got WordPerfect.

1992, Sept. Everex laptop, 386 machine, MS-DOS, with Windows 3.1. It had DoubleSpace on the hard drive, which was reportedly buggy, although I didn't have trouble with it. Later a friend gave me a Linux system that would boot on it from a 1.44 MB floppy. (This is Tom's Root Boot, here; Tom was running a web server on a 386 machine in the early 1990s.)

1993, Dec. IBM PS/1 486 machine (I think), color; used WordPerfect and dBase IV. Had Windows 3.1, but some applications had to be started directly without it. Started using AOL in August 1994, at 2400 baud.

1995, Summer. Erols machine, with Windows 3.1. Wrote my first book on it, using all of Microsoft Office. About $1600.

1995, Nov. NEC laptop, little use. Internet access worked over 56K dialup.

1997, Sept. Compaq laptop, Windows 95. About $2800. The first two machines (from Best Buy) had defective power supplies, but the third one still runs without a hitch twelve years later.

1998, April. Custom-built desktop with Windows 98, built by University Computers in St. Paul, MN, for about $1350. Dialup at 56K. Added Earthlink and Netscape to AOL with IE. Did most of my web maintenance on this machine for four years. Hard drive replaced (with all data copied) in early 2000. Modem failed in 2002.

2000 HP laptop with Windows 98, about $2400.

2001. Sony Vaio micro-desktop with Windows ME (a disaster), about $2700. Added XP Pro in June 2002. Still have it, but the modem has failed and the hard drive is failing.

2002, Feb. Macintosh iMac. Has iMovie. About $2800, including a cloned version of Office. Not as stable as I hoped, and the DVD burner was buggy. Finally got high-speed (Time Warner and Earthlink) Internet in Aug. 2002.

2003 Sept. Dell Moviemaker 8300 with XP Home. About $2700 with the extras. Still use heavily. Got Comcast high speed immediately.

2006 June. Dell Inspiron, about $1700 with extras, XP Pro. Downloaded free Express Visual Studio.

As real estate goes up, computers come down. They fill up landfills with toxic heavy metals (as in the film "Manufactured Landscapes"; see my movies blog for July), and starting in 2004 I used recycling drives organized by NBC4 for some of them. Of course, I wonder what would happen if a flu pandemic broke out in the Far East.

Monday, July 16, 2007

So why did I remain an individual contributor so long?


A fair question about my background might be: I spent 31 years in the conventional IT world, so why didn't I "advance"?

In fact, I had direct reports, as a project leader, just once, in 1988, during my last six weeks at Chilton Credit Reporting (a.k.a. TRW, now Experian) in Dallas. But I had already determined that I should leave the company because of political and merger-related circumstances, thinking I was stretching my luck if I sat it out for the severance when I could get a job elsewhere. I came back to Washington DC and worked for a health care policy consulting firm (now Lewin) for 18 months before going to Uslico, which would be absorbed by NWNL, then ReliaStar (1995) and ING (2000).

My career started out in operations research and defense (Navy Dept) within the Univac world (1108, 1110, etc). I worked for Univac in marketing and doing benchmarks for a year and a half. I moved into the commercial area by going to NBC (National Broadcasting Company) in New York City in 1974, to work on their general ledger application, in a Univac 1110 shop. Then the big deal for career employability was to “get IBM.” So I went to Bradford National Corporation and worked on Medicaid MMIS for New York State in 1977. I stayed there for 19 months and left, in retrospect quite prematurely, to move to Texas in early 1979. There I worked for the Combined A & B Medicare Consortium (CABCO), hosted by Blue Cross and Blue Shield of Texas.

My intention there was to prosper and advance. The environment was to be IBM, IMS, and CICS, the preferred stack of the day. However, because of the political infighting, the project failed. I then got a stable programming job at Chilton in Dallas, working on daily and monthly billing, but the environment was less desirable, being Datacom DB and DC, no longer factors in the job market.

In the 80s, with all the mergers, leveraged buyouts, and hostile takeovers toward the end of the decade (in an environment of falling oil prices, overbuilding, and real estate recession in Texas), companies were already starting to flatten their organizations, with fewer layers of management and larger spans of control per manager. The manager or managing project leader was often considered more vulnerable to layoff than the "can do" programmer who took the night call and kept the production systems running.

The mentality continued in the 90s, and the big demand for mainframe programmers for Y2K picked up around 1997 or so. The culture very much supported the idea of a career as an individual contributor, or perhaps a team lead without direct reports. After Y2K and the bursting of the first Internet bubble in 2000, the market started to tank, and then 9/11 and the accounting scandals of Enron and WorldCom (and depressed stock market valuations in 2002) really killed it. As the market recovered slowly, job descriptions (especially on state government contracts) became much pickier. Since about early 2006 the gig requirements have loosened a bit, possibly suggesting increased demand, though that is not entirely clear yet.

Now, with the influence of the Internet and the schizophrenic reaction of employers to individual user-generated content and social networking, which can produce publicity conflicts, the whole attitude is mixed and hard to predict. Employers (and the headhunting staffing firms that they use) are uncertain as to what they really need. But this could be a great time to be in college for a student who plans his course work and internships carefully and focuses on the skills that are obviously in demand (security, architecture, OOP, and especially "connecting the dots," a theme that I talk about on the other blogs).