Articles

From time to time we’ll be releasing our thoughts on issues and events that we find relevant. To comment, click on the ‘Permalink’ link below. Here are our most recent articles:

Developer as Typist

While not conceptually the best metaphor for developers, the typing pool is perhaps the most common view of them. Before the advent of the word processor it was common to have a pool of people who did nothing but type things up. Businesspeople wrote longhand and needed letters and memos typed and spell-checked. That’s exactly what the typing pool did. The piece was dropped in the in-box of a typist, and almost any typist was fine as long as they typed X words per minute with an error rate of Y. It really didn’t matter, as long as the pool continued to churn out typed documents of reasonably good quality. Sometimes called a secretarial pool, it was generally cheaper than giving everyone their own secretary just to type up a few documents.

Fast forward a few decades and we see businesses treating developers in much the same way. Developers sit in the IT department and are assigned projects based on certain technology standards. Because the tools and techniques are standardized, almost any developer should be able to perform the work. The goal is to produce a functioning application at a certain cost. In part this model allows the centralization of developer resources out of particular departments and into a central IT organization. As a bonus, it frees you to look for the best price on developer resources locally, nationally or globally.

How can this model possibly be broken? If we’re dealing with very standardized pieces of work, then we should be able to dole them out to any competent individual and expect adequate performance in reasonable time. The underlying assumption is that the work being asked for is ‘standard.’ Generally, though, when you need a custom application you’re asking for something non-standard. If the application were truly standard, it would already have been written by a developer (or company) that took some iteration of the solution and packaged it into an open-source or COTS product. If you need something ‘standard’ and you’re having it custom built, you’re re-inventing the wheel.

So we’re really talking about non-standard items. The problem to be solved might be something ordinary, like a time-sheet application, but because of some constraint (like the format of the data to be saved, or the sheer scale of the problem) it winds up as a custom piece of work. There are usually dozens of ways to solve similar IT problems, almost all of them equally valid. Sometimes the most straightforward and simple are the best. Sometimes even apparently trivial problems require more detailed thought. A developer’s experience, subject-area knowledge, and natural talent are what allow them to determine what exactly needs to be done.

As a real example, imagine an on-line forms application. Solution 1 might be to create a data table to store each type of form, because the forms all have different fields and the available tools map one object to one table. Solution 2 might be to engineer a generic storage mechanism and a meta-data based approach to managing forms and related workflows. Using the tools available, many of the client’s developers felt that solution 1 was the less efficient but unavoidable choice; it would result in dozens (if not over 100) database tables. Solution 2 was pursued by asking for a waiver from the official developer tools and server stack. The developer used their experience and knowledge to make the case for non-standard tools that would yield a more efficient solution.
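
To make solution 2 concrete, here is a minimal Ruby sketch of the meta-data driven idea. The form types and field names are invented for illustration; the point is that every submission, whatever the form, reduces to rows of one generic (form type, field, value) store.

```ruby
# Solution 2 in miniature: one generic store driven by form metadata,
# instead of one table per form type. All names here are illustrative.
FORM_DEFINITIONS = {
  'timesheet' => %w[employee week_ending hours],
  'expense'   => %w[employee date amount description]
}.freeze

# Reduce any submission to [form_type, field, value] rows -- the rows of
# a single generic table, no matter what the form looks like.
def rows_for_submission(form_type, values)
  FORM_DEFINITIONS.fetch(form_type).map do |field|
    [form_type, field, values[field]]
  end
end

rows_for_submission('timesheet',
  'employee' => 'jdoe', 'week_ending' => '2009-05-01', 'hours' => '40'
).each { |row| p row }
```

Adding a new form type becomes a metadata change rather than a schema change, which is exactly why the approach avoids the explosion of tables.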

The typing pool approach was happy with solution 1 because all the pieces stayed the same. The same object-relational mapper. The same database design approach. The same storage mechanism. The result, however, would have been an almost unworkable solution. Solution 2 required thinking outside the box. That’s what you should be looking for in developers: creative, thoughtful people who have the experience and knowledge to craft solutions.

It’s interesting that I settled on the word ‘craft’ to describe what a developer does to build a solution. Like a table, chair, or home, a solution can be technically ‘correct’ and working, yet poorly crafted. A good developer, in addition to being creative, is also like a good craftsman. The work should be able to stand up over time. It should be modifiable and maintainable, even years after its initial launch. As a concrete example, I once worked with a developer who produced working code, but it was horrendous code that had to be rewritten. It was enough for him that it worked.

The typing pool approach to managing developers results in output that works only as long as no one has to meddle with the code. Even with all the modern tools and methodologies at hand, nothing guarantees that the resulting product is well crafted. Unlike set-piece workers (assembly line workers or typists, for example), developers aren’t solving the same problem over and over again, so it’s hard to judge the quality of their work without actually looking into the code. A typist or set-piece worker usually has some readily identifiable measure of quality – number of errors, tolerance between fitted parts, and so on. Without being able to evaluate the code, most consumers of IT services just have to trust that it’s good quality work. They can’t compare it to the previous program, which might be in a totally different subject area and at a completely different level of difficulty. It’s like having a typing pool that types up your letter in a language you don’t understand: you just have to assume it’s done well.

The only way around this is to look at developers individually. That may seem hard in companies with dozens or hundreds of developers, but managers should really take the time to develop an understanding of developers as individuals. It’s not that managers set out to de-humanize developers; it’s that it’s easier for managers to treat developers as fungible resources than to look at individual skill level, experience, and aptitude. This is especially true when the manager is not a technical person. The view is reinforced by a market for IT services that sells “resources” with a standard litany of skills, listed in “representative resumes.” The implication, of course, is that there is a pool of clones from which they will pull your ‘resources.’

So, every developer is a wonderful snowflake, unique and special in every way. Why have a standard anything? The reason we have standards, however, is not to homogenize developers but to ensure they can communicate with each other in a standard manner. Standards ensure that we all agree on a common set of tools with common interfaces, so we don’t have to start from scratch on every project. I have yet to see the development of custom software, an inherently creative endeavor, done well under the typing pool model, which works against creativity. The best software. The game-changing software. The software that can produce a competitive advantage is crafted by creative people close to the subject matter.

Permalink…

What is Cloud?

There are some interesting takes on cloud computing. Some vendors are bringing the ‘cloud’ into your data center by streamlining the provisioning of new servers. Other vendors are recasting existing services as ‘cloud’ or SaaS (Software as a Service). Github is a ‘cloud’ source code repository that is becoming the unofficial source code repository for the cloud. Companies like Heroku, Google and Salesforce.com are calling application hosting ‘cloud.’ For some, clouds are operating environments. Microsoft is offering a combination of hosted services and application hosting in its cloud offering. So the question is: what is cloud?

Let’s look at leased servers. Amazon’s EC2 certainly feels ‘cloudy,’ but why doesn’t a plain old virtual server sound cloudy? Maybe because the virtual server is fixed (although we’ve had clients with virtual servers that moved, were re-IP'ed, and mayhem ensued). A leased server is fixed, tied to some physical box in a cabinet. A server on the cloud isn’t fixed; we can imagine it moving around in some sense. It’s elegant, beautiful, wispy and light compared to the boat-anchor feel of a leased server with a two-year contract. But actually, the big difference Amazon points out is financial flexibility. I’m using resources efficiently (from a financial perspective) because I can bring servers up and down based on need.

A few weeks ago Sun was set to make a cloud computing announcement. I thought they might be offering a cloud computing service, but instead they wanted to talk about how they’re an enabler: they enable the cloud. But using your own data center as a cloud, as Sun or VMWare might suggest, doesn’t seem very cloudy. You still have to buy racks of servers and maintain some idle capacity, and idle servers sitting around and large fixed investments don’t feel cloudy. Maybe ‘virtualization’ didn’t roll off the tongue as well as ‘cloud,’ and therefore ‘cloud’ is the new virtualization. Yet it is cloudy, in some sense, to be able to find a home for a new application without having to provision a fresh server or blade.

So maybe, when everything is said and done, cloud means ‘nimble’ or ‘flexible.’ What Amazon is offering is a leased server with a per-hour contract, while most places offering leased servers want you to pre-pay for a year, or pay monthly with a one-year agreement. The same can be said of services like Heroku and Google: you can scale your financial commitment up and down on a short-term basis. Getting more capacity out of some hosts could mean an expensive change in service tiers, a longer-term contract, and so on.

What it does not mean is a free lunch. If you look at the pricing for Amazon, for example, the basic server compute unit (equivalent to a 1.2 GHz core) is $0.10 an hour, or $72 per core per month (not including S3 storage). You can buy a fairly modern 8-core machine for about $3,000 to $4,000, and getting power and bandwidth to the machine might run in the $250 per month range. Amortizing the server over just one year, your cost would be $500-$600 a month for 8 cores (or about $360 with a 3-year amortization), versus Amazon’s $576 per 8 compute units per month. And given that you might be looking at a newer 2.5-3.0 GHz Xeon, around twice Amazon’s stated performance, it may perform more like 12 or 16 compute units. (All of that assumes you already have, or are already paying for, the person to manage the server.)
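
Here is that arithmetic as a small Ruby script, using the figures above. The $3,500 server price is a midpoint assumption for the $3,000-$4,000 range, and the 720-hour month is a rounding convenience:

```ruby
# Back-of-the-envelope version of the comparison above.
EC2_PER_CORE_HOUR = 0.10
HOURS_PER_MONTH   = 720   # rounding assumption: 30-day month

ec2_monthly = 8 * EC2_PER_CORE_HOUR * HOURS_PER_MONTH   # 8 compute units

# Monthly cost of an owned box: amortized purchase price plus overhead.
def owned_monthly(server_price, overhead_per_month, amortization_years)
  server_price / (amortization_years * 12) + overhead_per_month
end

puts format('EC2, 8 compute units:   $%.0f/month', ec2_monthly)
puts format('Owned box, 1-yr basis:  $%.0f/month', owned_monthly(3500.0, 250.0, 1))
puts format('Owned box, 3-yr basis:  $%.0f/month', owned_monthly(3500.0, 250.0, 3))
# => roughly $576 vs. $542 vs. $347, before counting the admin's time
```

The exact numbers matter less than the shape of the result: ownership only wins if the box stays busy and someone is already there to run it.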

What buying a server doesn’t get you is the ability to stop paying for cores you’re not using. Say, for example, your service is cyclical and peaks only on holidays. Amazon would let you run on 1 core (if that’s sufficient) and then scale up to 8 cores for those short periods; once the load backs down, you go back to $72 per month. You might still spend significant time and money tooling your application so it can scale up and down that way. For example, you might actually use 2 cores, with one running the database and the others running the application, so that you can simply add application cores. Now you’re at $144 per month, or more if the database image requires extra capacity to handle peak loads.
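
The appeal of that elasticity is easy to see with a hypothetical month; the three-day peak below is an invented load shape, not a real workload:

```ruby
# A hypothetical cyclical month: 1 core off-peak, 8 cores during a
# 3-day holiday peak. The load shape is invented for illustration.
PER_CORE_HOUR = 0.10
off_peak_core_hours = 1 * (27 * 24)   # 648 core-hours
peak_core_hours     = 8 * (3 * 24)    # 576 core-hours

elastic_cost = PER_CORE_HOUR * (off_peak_core_hours + peak_core_hours)
flat_cost    = PER_CORE_HOUR * 8 * (30 * 24)   # 8 cores all month

puts format('Elastic: $%.2f   Flat 8 cores: $%.2f', elastic_cost, flat_cost)
# => Elastic: $122.40   Flat 8 cores: $576.00
```

Roughly $122 for the month versus $576 to keep all 8 cores running continuously – that gap is the flexibility Amazon is selling.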

There are some unanswered questions. The burden of monitoring the server is still on you: you have to decide to scale up or down as needed, and you may not be able to respond to sudden, short spikes in demand (a Slashdot effect). In some cases, as we’ve seen, the cloud provider itself can go down. There are legitimate questions about security and what kind of information you can put into cloud-based storage; maybe you have super-secret sauce in your code that can’t be disclosed without losing your competitive edge. What we haven’t seen is how well the cloud services vendors scale up (and meet their performance SLAs) if they get hit with a groundswell of demand. Somebody has to have idle capacity somewhere for users to have their capacity demands met.

But I think there’s a shared vision of the cloud in the current zeitgeist. We can envision on-demand server capacity powering applications hosted by sites like Facebook, so that we only have to build and deploy tiny pieces of simple functionality. In turn these combine into an overall service offering that provides value, and can therefore be monetized with a small initial investment. No big server build-outs. No unused capacity. In other words, the service pays for itself, we avoid sunk costs, and we’re all self-funding.

Permalink…

The Tyranny of the Small

Recently, the Itanium Solutions Alliance released an upbeat report about adoption of the Itanium, which lags behind SPARC and Power in the enterprise server market. With the Oracle acquisition of Sun, many wonder what the future holds for SPARC, and IBM has recently upped its reward to $4,000 a CPU for customers trading in SPARC hardware, to take advantage of this uncertainty. The irony is that all this hubbub and fighting feels like re-arranging deck chairs while the iceberg of x86 performance slices a gaping hole into the side of the USS Big Iron.

Why would you need SPARC, Power, or Itanium based systems anyway? First, there are customers that bought SPARC, Itanium, PA-RISC, Alpha, or Power systems in the past and need to stay on the platform. If you’re a company that spent millions building, installing and configuring systems, it’s often cheaper to stay put; HP offers migrations from both PA-RISC and Alpha to Itanium systems, for example. Sometimes you’re so completely wedded to an architecture that you find yourself running old software on emulators. The other big reason is the big application, usually a huge database instance that is part of a business intelligence or data warehouse project. That, in itself, is becoming an endangered beast.

It’s no longer true that you inherently need big iron to run big systems. Google’s obvious example aside, modern software is built with horizontal scalability in mind. Oracle, for example, has a clustering option called RAC that allows you to build a database from a cluster of smaller machines. Tools like Rails build their scaling strategy on many small boxes behind load balancers. Java clustering tools support deployment to one app server, which then propagates the application to other servers, all behind a load balancer. Web servers running PHP can hide behind a hardware load balancer and act as if they were one insanely capable web server. All of this is made possible by protocols that are HTTP, or are built like HTTP, and can take advantage of statelessness, proxies, and caching.
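
Statelessness is the property doing the heavy lifting in all of these setups. As a toy illustration (plain Ruby; WEBrick ships with older Rubies and is a gem on newer ones; the port and path are arbitrary), the little service below remembers nothing between requests, so any number of identical copies can sit behind a load balancer and any copy can answer any request:

```ruby
# A deliberately stateless HTTP service: everything needed to answer a
# request arrives with the request itself, so identical copies of this
# process can run behind a load balancer interchangeably.
require 'webrick'

server = WEBrick::HTTPServer.new(Port: 9292)   # port chosen arbitrarily
server.mount_proc('/hello') do |req, res|
  # No session, no server-side memory of this client.
  res['Content-Type'] = 'text/plain'
  res.body = "Hello, #{req.query['name'] || 'world'}\n"
end
trap('INT') { server.shutdown }
server.start
```

Add a second or tenth copy on other boxes and the load balancer can spray requests across them without any of the coordination a stateful design would demand.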

Welcome to the tyranny of the small. You can build an enterprise architecture that scales by adding more off-the-shelf computing power as your needs grow. The lowly x86, with its 30-year legacy going back to 16 bits, is cheap and it’s fast enough. Combine that with storage pricing falling through the floor (a terabyte can be had on a single cheap disk) and the world for the single big server gets smaller and smaller. It doesn’t help big iron that we’re bumping up against some hard physics that makes single-processor (or core) performance improvements harder to achieve. In fact, inside our computers, cores are now added to improve performance instead of relying on raw CPU speed, and the performance gap between x86 and “high end” processors narrows or reverses on a core-to-core basis.

That’s not to say the market for single, large servers is gone. There will always be applications that run better on a single large system, and many existing applications were built before the Web was a twinkle in Tim Berners-Lee’s eye. The mainframe ain’t going nowhere. That market, however, is not where most of the thought, energy and money is being directed. There are some interesting systems, like Sun’s CoolThreads servers. Overall, however, you can’t beat all the money AMD and Intel are putting into x86 chip design. Nor can you really fight the basic nature of the web, which likes horizontal scalability thanks to technologies like reverse proxies and statelessness. Even desktop client-server applications are relying on web-based protocols (REST or WS-*) which favor a horizontally scalable backend.

That’s why, with our hosted Radiant service, I’m not worried about scaling. I’m worried about other things, but not about scaling. I have no illusions that there won’t be challenges, but I have embraced the web and how the web scales best. I don’t have to sink tens of thousands of dollars into a build-out for possible future customers. The commodity parts I need can be shipped within 24 hours from most vendors. I only buy what I need to service my customers, and I don’t spend money on unused infrastructure. In many ways, what the cloud promises is already here. In fact, the problem we have today is not that servers are too small but rather how to split up the servers we do have, so that we can ensure we’re getting the most out of the existing infrastructure. That’s a topic for another article, though.

Permalink…

Apple Is Not Not Enterprise Ready

Can’t do a double negative? Go cry to your English teacher. This is an article on the bare-knuckled fight for enterprise dollars. How can Apple Macs and iPhones possibly work in the enterprise? They’re not enterprise ready. They have a lot of stuff that makes them individual and small-business friendly, but they’re not enterprise ready! So why are business users carrying iPhones and Macs? Don’t these people know there are enterprise systems out there that they should use?

Actually, despite the title, this is not really about Apple. It’s about the progression of technology from ‘useless’ to ‘as necessary as oxygen’ in the context of business-oriented environments. Everybody is familiar with the adoption curve of new technology and phrases such as ‘early adopter.’ According to current thinking about technology adoption, as a technology matures it moves into different phases and different target markets. Eventually it moves from the twenty-somethings and teenagers in NYC who have to have the latest and greatest to your average fifty-something in Kansas who has had the same TV and VCR for thirteen years.

Technology moves in a slightly different progression in business circles. A lot of it starts off as nothing more than a pastime, an oddity or a plaything. It then morphs and matures until it becomes the darling of the industry and trade press. If there’s anything I’ve learned, it’s that the path from irrelevant to critical happens regardless of whether willing vendors or consultants are pushing it as the next ‘big thing.’ The trade press can love it, hate it or ignore it, and yet the technology marches on. In many cases the technology moves forward despite the best efforts of established vendors, consultants and trade press to marginalize it. We see it all the time: the often thorny path from toy to tool.

The experience most relevant to me is the web. When I started working in IT the infant web was already around and, according to an op/ed piece in the Washington Post, was a place where computer geeks wrote about computers for other computer geeks. I worked for a company that had enterprise customers and put together enterprise solutions based on enterprise technologies. The tools that comprised the ‘internet’ were not seen as a basis for building solutions for ‘enterprise’ clients; we had PowerBuilder, HP/UX, Oracle, VB, PL/SQL, Baan, etc., to build solutions, and we were actually discouraged from using the web. Before I left they had embraced the internet, but were by then railing against Linux (developers were supposed to share one very overcrowded HP Unix server).

I fought the same fight with Open Source anything, with Java, PHP and MySQL. All of these were, or are, not considered ‘enterprise’ by some magic board of managers who decide what enterprise is. (Gartner and their brethren are somewhat to blame.) Macs and iPhones are derided as not being enterprise ready, but they’re starting to move into the enterprise. At the same time, solutions that should be taking the enterprise world by storm because of their enterprise features are languishing. I was once told no one would take the company I was working for seriously if it ‘got out’ that we were using tools like PHP instead of an enterprise product like ATG’s Dynamo.

So, here’s the IT cycle. Something is useful and popular. People bring it to work and it helps them do their jobs. Then, after creating the inevitable new-technology dust-up, it becomes mainstream. New technology goes through phases where its adoption is even clandestine, lest managers find out and people lose their jobs. Often that same technology becomes so pervasive that people wonder why they were so against it in the first place.

Right now it’s the agile language revolution. Java is enterprise. .NET is enterprise. But Ruby? Is Rails enterprise? How about Python? To make these tools acceptable to administrators, managers and IT-governance types, Jython, JRuby, IronPython, etc. are the middle way: you are still a Java shop, but you can use Jython for some things. In some cases people try to re-invent the enterprise toolkits and frameworks to be more Rails-like or more agile, yet leave them on the core language. Of course, the result isn’t nearly as flexible, dynamic, or interesting as straight Ruby on Rails, or Django on Python.
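
For instance, here’s the flavor of that middle way under JRuby (a minimal sketch; it needs the JRuby runtime): plain Ruby syntax driving ordinary Java classes on the JVM the shop has already blessed.

```ruby
# Runs under JRuby: Ruby syntax, but the objects are plain Java classes.
require 'java'

list = java.util.ArrayList.new              # a real java.util.ArrayList
%w[gamma alpha beta].each { |word| list.add(word) }
java.util.Collections.sort(list)            # sorted by the Java library
list.each { |word| puts word }              # => alpha, beta, gamma
```

The governance box is ticked – it’s all JVM underneath – while the developers get a more agile language on top.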

It’s also the RESTful revolution. In many shops the standard is Web Services (in the WS-* sense of the term), and RESTful services seem not enterprise enough. You can always pluck out some feature of WS-* (especially in the sickeningly complicated security area) for which RESTful services don’t have a straightforward answer, and which therefore proves they must not be ‘enterprise ready.’ Except we’re seeing more and more companies eschew offerings based on WS-* and instead publish “API” documents based on RESTful methods. And then, to make it easier for implementers, they release a library for a particular technology stack to ease integration, so the underlying protocol doesn’t even matter.
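
Part of REST’s pull is that the entire ‘stack’ is an HTTP client you already have. A minimal Ruby sketch (the host and resource URL are hypothetical):

```ruby
# Consuming a hypothetical RESTful resource with nothing but the Ruby
# standard library -- no WSDL, no generated stubs, no SOAP envelope.
require 'net/http'
require 'uri'

uri = URI.parse('http://api.example.com/customers/42')  # hypothetical URL
response = Net::HTTP.get_response(uri)

puts response.code   # the HTTP status code is the protocol's status
puts response.body   # typically XML or JSON describing the resource
```

Compare that to generating client stubs from a WSDL before you can make a single call, and the appeal to implementers is obvious.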

The moral of the story is: don’t pick a winner based on what the trade press and IT governance groups think the winner ought to be. They look at promised features, product announcements from ‘enterprise’ vendors and white papers from large consulting companies. The Itanium was at one time going to take over the 64-bit server market, according to the trade press. Several companies announced server offerings based on the Itanium that never materialized, and IBM even stopped selling Itanium systems. Why? Because the lowly x86 made the transition from 32 to 64 bits and was more useful, cheaper and ultimately more popular. It wasn’t that long ago that people could barely take x86-based servers seriously outside of Windows servers.

Your criteria for adopting technologies should be that they are useful to you and your organization. You shouldn’t hop on every bandwagon; there are a lot of false starts. However, it seems like there are a lot more false starts among people hawking ‘enterprise tools’ than among people selling lowly, simple, easy-to-use and cheap technology. It reminds me of Japanese Noh drama: we all know how it’s going to turn out in the end, but first we all have to play our parts.

Permalink…

Developer as (Fashion) Designer

As a developer I generally do one of three things. The first is build custom software from scratch. The second is build a ‘solution’ from COTS pieces wired together with a little bit of custom code. The third is tell the client what to buy and, if necessary, install or configure the software for them. Which approach is better depends on a variety of factors, including what the software needs to do, what software already exists, and the degree to which the software is core to the client’s business. Because of some very natural analogies between software and construction, many software developers and theorists liken the industry to a form of engineering or construction, specifically civil engineering.

How does an engineer tackle a complex problem? They break it down into sub-problems. At some point those sub-problems are broken down into sub-sub-problems, and so on, until you wind up with pieces that are solved or solvable. To make this all possible you must specify every aspect of the system, and I have never seen a client or engineer completely and accurately specify all parts of a software system to the mind-numbing detail required. As software becomes more complicated, now encompassing concepts such as ‘social networking feeds’ and ‘user-contributed content,’ this approach isn’t scaling. In fact, I think we need to step away from the engineering mindset and explore analogies to other professions.

Even though it might sound a little odd at first, it occurred to me the other day that my job is a lot more like the fashion industry. Most software is like mass-produced clothing. It’s bought off the rack and isn’t tailored to suit any particular body – just anybody the designer considers a ‘Medium.’ Because it’s mass-produced, the price per item is relatively cheap. Writing a word processor from scratch makes about as much sense as custom-fit t-shirts. A lot of software, in fact the vast majority of software, falls into this camp: everything from Windows and Microsoft Office to Mac OS and your typical Linux distribution is essentially ‘off-the-rack.’

At the next step up we can take something that’s off the rack but requires hemming and cutting to fit, like pants. It isn’t specifically tailored for you, but it is altered to fit you better; nice pants often require hemming, as opposed to off-the-rack slacks with pre-determined inseams. This is stock software with a customization. For example, you might want Radiant (a CMS) with a custom extension or two.

Then there is the completely custom suit. You go in, get measured, and several hundred (or even several thousand) dollars later you get a suit that fits you perfectly. You choose aspects of the suit, like the material, but you don’t make every decision. You could tell the tailor to give you 12" lapels and cuffed sleeves, and the tailor may oblige, but you will look a little ridiculous. Generally you trust the tailor – barring minor alterations. The tailor or designer makes the detailed decisions, and all you care about is that you get a fashionable and well-fitting suit. This is the way most custom software should be built.

However, this is the way custom software is actually built: you specify all the exact requirements of the system. Imagine going into the same tailor and buying a suit by telling the tailor how wide the lapels should be. That they should be even on both sides. That the sleeves must NOT be cuffed but the pants MUST be cuffed. That there should be only one breast pocket, on the left breast. That the suit should be made of wool. That the jacket pockets should be symmetric on the left and right sides. And so on… That would seem ridiculous and would consume a lot of time and energy, and yet that’s how a lot of custom software is specified.

Many parties are responsible for this state of affairs. Partly it stems from the customer’s need for control: how do you know the developer will deliver what you want unless you tell them? Partly it’s the developer covering their own rear. Imagine how hard it would be to buy clothing if you had to produce detailed specifications each time you went to the mall. You should start thinking of your developer resources like personal shoppers or stylists: you should be able to give them some parameters and get back a reasonable solution without having to cover every conceivable, insignificant detail.

In fact, software developer as stylist probably fits better than software developer as engineer. The practitioner should be able to put COTS software, customized software and custom software together into a coherent whole, just as a fashion designer or stylist can create an outfit or collection from custom pieces, off-the-rack pieces, and tailored pieces. As a customer, you shouldn’t have to make every decision about every piece going into the end product, and you should resist the temptation to micro-manage the process. Imagine paying for a stylist and then following them around the mall, requiring that they get your approval for every minor decision.

As a developer, it is vitally important to listen to your client and understand that you are building a solution to suit their tastes and needs. You need to put something together that fits your client and in which they feel comfortable. As a client, you need to make sure the developer understands what you want, but you also need to step back and trust the developer’s judgment and experience. In some cases what looks or feels awkward or new in the new software will quickly come to feel natural and second nature. If the relationship is too hands-off, the developer will just return with whatever they think is a ‘good fit.’ If the relationship is too hands-on, you may wind up cursed with exactly what you asked for, 12" lapels, cuffed sleeves and all.

Thinking back to the first paragraph, the reference to civil engineering is important. Why? Because there are really no ‘off-the-rack’ skyscrapers or bridges; everything has to be specified at the minutest level. When we make the analogy to civil engineering, we put ourselves in the frame of large, complicated projects that are often unique at some level. Instead, we should break out of that frame and look at other industries. Software development hasn’t been around long enough for us to really get our minds around it the way we understand tailoring or construction. Until then we will feel the need to frame our discussion of software in terms of some other endeavor we all understand. The trick is not to get too trapped by the analogy, and to learn by trying on different, new analogies. So, why not fashion?

Permalink… Comments: 1

1999 - 2014 © Saturn Flyer LLC 2321 S. Buchanan St. Arlington, VA 22206

Call Jim Gay at 571 403 0338