Archive | September 2006

On XBRL and Annotation

Network Computing recently had a feature on the Extensible Business Reporting Language (XBRL). In the article, author Edward Hand states:

Perhaps XBRL’s most-valuable feature is its ability to explain why an exceptional case exists in a report. A reporting company can address exceptions within the data through notes and tags, and avoid having a report fail the validation process. These notes are useful if there is a reasonable explanation for missing elements in the report, for example.

Although it’ll take a much deeper dive into XBRL to confirm, I expect that these notes and tags are built into the XBRL schema, and appear within XBRL documents. These notes and tags are also examples of embedded annotations. External annotations, based on XPointer, might also be interesting to consider in this context.
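
To make the distinction concrete, here is a minimal sketch in Python. The element and attribute names are purely illustrative and are not taken from the XBRL specification; they simply show the structural difference between a note embedded in the report itself and an external annotation that points back into the report with an XPointer-style reference.

```python
# Minimal sketch of embedded versus external annotations; element and
# attribute names are illustrative only and are not taken from XBRL.
import xml.etree.ElementTree as ET

# Embedded annotation: the explanatory note travels inside the report itself.
report = ET.fromstring("""\
<report>
  <revenue id="rev2006" unit="CAD">1250000</revenue>
  <note about="rev2006">Figure restated following a change in reporting period.</note>
</report>""")
print(report.find("note").text)

# External annotation: the note lives in a separate document and points back
# into the report with an XPointer-style reference.
external = ET.fromstring("""\
<annotation xmlns:xlink="http://www.w3.org/1999/xlink"
            xlink:href="report.xml#xpointer(//revenue[@id='rev2006'])">
  Figure restated following a change in reporting period.
</annotation>""")
print(external.get("{http://www.w3.org/1999/xlink}href"))
```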

NIST’s Guide to Secure Web Services

NIST has recently released a Guide to Secure Web Services. Their Computer Security Division describes the document as follows:

NIST is pleased to announce the public comment release of draft Special Publication (SP) 800-95, Guide to Secure Web Services. SP 800-95 provides detailed information on standards for Web services security. This document explains the security features of Extensible Markup Language (XML), Simple Object Access Protocol (SOAP), the Universal Description, Discovery and Integration (UDDI) protocol, and related open standards in the area of Web services. It also provides specific recommendations to ensure the security of Web services-based applications.

Writing in Network World, M. E. Kabay extracts from the NIST report:

Perimeter-based network security technologies (e.g., firewalls, intrusion detection) are inadequate to protect SOAs [Service Oriented Architectures] … SOAs are dynamic, and can seldom be fully constrained to the physical boundaries of a single network. SOAP … is transmitted over HTTP, which is allowed to flow without restriction through most firewalls. Moreover, TLS [Transport Layer Security], which is used to authenticate and encrypt Web-based messages, is unsuitable for protecting SOAP messages because it is designed to operate between two endpoints. TLS cannot accommodate Web services’ inherent ability to forward messages to multiple other Web services simultaneously.
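
The contrast the report draws is between hop-by-hop and message-level protection. The sketch below is conceptual only: it uses the third-party cryptography package to sign the message body itself, so that any downstream service can verify it no matter how many intermediaries forwarded it; production SOAP deployments would rely on message-level open standards such as WS-Security, the sort of standard the NIST guide surveys, rather than ad hoc signing.

```python
# Conceptual sketch: message-level integrity that survives intermediaries,
# in contrast to TLS, which only protects each individual hop.
# Requires the third-party "cryptography" package (pip install cryptography);
# real SOAP deployments would use standards such as WS-Security instead.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The originating service signs the message body itself.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

body = b"<getQuote><symbol>IBM</symbol></getQuote>"
signature = private_key.sign(body)

# The message can now be forwarded through any number of intermediary Web
# services; each recipient verifies the body independently of the transport
# used on any particular hop.
def receive(message: bytes, sig: bytes) -> None:
    public_key.verify(sig, message)  # raises InvalidSignature if tampered with
    print("message integrity verified")

receive(body, signature)
```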

The NIST document includes a number of recommendations, five of which Kabay highlights:

  • Replicate data and services to improve availability.
  • Use logging of transactions to improve accountability.
  • Use secure software design and development techniques to prevent vulnerabilities.
  • Use performance analysis and simulation techniques for end-to-end quality of service and quality of protection.
  • Digitally sign UDDI entries to verify the author of registered entries.

The NIST document definitely warrants the attention of anyone developing Web services.

Email from Outer Space

One of the astronauts on the space shuttle Atlantis is Steve MacLean. Steve received both his bachelor’s and doctoral degrees from York University. Given my long-term affiliation with York, it’s difficult not to feel some sense of pride. As Atlantis’ mission draws to a close, MacLean wrote in his second email message to York president and vice-chancellor Lorna R. Marsden:

I would like to pass a message to the York community.

The entire experience of preparing for launch, launching, reaching orbit, executing a very difficult mission and then…preparing to return allows one in such a short time to feel the full range of human emotions. I find it astounding that it is possible to live so much in such a short time. I look forward to sharing this story with you all on my return.

But more important I would like to thank the many members of the York community for my experience at York. Those years were excellent for me and I realize now that they served to shape the balanced approach that makes each and every day meaningful. York University was wonderful for me and I thank you.

You all should see the stars right now…their penetrating warm glow soothes the soul.

From Outer Space
Steve MacLean.

Email from outer space. Now that’s cool!

Wireless Internet to Blanket Metropolitan Toronto

Recently, Toronto Star energy reporter Tyler Hamilton wrote:

Toronto Hydro launched Canada’s largest Wi-Fi zone yesterday in the heart of the city’s financial district, giving tourists, local businesses and downtown workers free wireless Internet access for the next six months.

Toronto Hydro has big plans to extend coverage to a 700-square-kilometre area that takes in most of metropolitan Toronto. Priced at $29 (Canadian) per month, the ultimately ubiquitous offering becomes an even more compelling value proposition once other Internet Protocol (IP) services, such as VoIP telephony, are factored in.

If metropolitan wireless gains mass-market traction, it’ll drive down the cost of convergence devices (like phones that can operate on either cellular or Wi-Fi networks) and continue to spur innovation.

Licensing Commercial Software for Grids: A New Usage Paradigm is Required

In the Business section of last Wednesday’s Toronto Star, energy reporter Tyler Hamilton penned a column on power-based billing by datacenter services provider Q9 Networks Inc. Rather than billing for space, Q9 now bills for power; chief executive officer Osama Arafat is quoted in Hamilton’s article stating:

… when customers buy co-location from us, they now buy a certain number of volt-amps, which is a certain amount of peak power. We treat power like space. It’s reserved for the customer.

Power-based billing represents a paradigm shift in quantifying usage for Q9.

Along with an entirely new business model, this shift represents a calculated, proactive response to market realities; to quote Arafat from Hamilton’s article again:

Manufacturers started making the equipment smaller and smaller. Customers started telling data centre providers like us that they wanted to consolidate equipment in 10 cabinets into one.

The licensing of commercial software is desperately in need of an analogous overhaul.

Even if attention is restricted to the relatively simple case of the isolated desktop, multicore CPUs and/or virtualized environments are causing commercial software vendors to revisit their licensing models. If the desktop is networked in any sense, the need to recontextualize licensing is heightened.

Commercial software vendors have experimented with licensing locality in:

  • Time – Limiting licenses on the basis of time, e.g., allowing usage for a finite period with a temporary or subscription-based license, or time-insensitive usage in the case of a permanent license
  • Place – Limiting licenses on the basis of place, e.g., tying usage to hardware via a unique host identifier (a minimal sketch of both constraints follows this list)
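
Here is that sketch; the license fields, the expiry date and the MAC-derived host identifier are all hypothetical rather than drawn from any particular vendor’s scheme.

```python
# Minimal sketch of license locality in time and place; all fields and the
# host-identifier scheme are hypothetical.
import datetime
import uuid

license_record = {
    "expires": datetime.date(2007, 9, 1),    # temporal constraint
    "host_id": format(uuid.getnode(), "x"),  # spatial constraint (MAC-derived)
}

def license_is_valid(record: dict) -> bool:
    not_expired = datetime.date.today() <= record["expires"]
    right_host = record["host_id"] == format(uuid.getnode(), "x")
    return not_expired and right_host

print(license_is_valid(license_record))
```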

Although commercial software vendors have attempted to be responsive to market realities, there have been only incremental modifications to the existing licensing models. Add to this the increased requirements emerging from areas such as Grid Computing, where virtual organizations necessarily span geographic and/or organizational boundaries, and it becomes very clear that a new usage paradigm is required.

With respect to the licensing of commercial software, vendors find themselves in a situation not unlike Q9’s prior to the development of power-based billing. What’s appealing about Q9’s new way of quantifying usage is its simplicity and, of course, its usefulness.

It’s difficult, however, to conceive of such a simple yet effective analog in the case of licensing commercial software. Perhaps this is where the Open Grid Forum (OGF) could play a facilitative role in developing a standardized licensing framework. To move swiftly towards tangible outcomes, however, the initial emphasis needs to be on a new way of quantifying the usage of commercial software, one that is not tailored to idealized and/or specific environments.
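
As a purely hypothetical sketch of what such quantification might look like, the analog of Q9’s peak-power metering would be to meter consumption, core-hours in the toy example below, rather than to count installed hosts or CPUs; every name and figure here is illustrative, not a proposal from OGF or any vendor.

```python
# Hypothetical sketch: quantify commercial-software usage by metering
# consumption (core-hours), by analogy with Q9's peak-power billing.
import time

class UsageMeter:
    def __init__(self) -> None:
        self.core_seconds = 0.0

    def record(self, cores: int, started: float, finished: float) -> None:
        """Accumulate the core-seconds consumed by one job."""
        self.core_seconds += cores * (finished - started)

    def billable_core_hours(self) -> float:
        return self.core_seconds / 3600.0

meter = UsageMeter()
start = time.time()
# ... the licensed application does its work, on however many cores and hosts ...
meter.record(cores=8, started=start, finished=start + 1800)  # 30 minutes on 8 cores
print(f"{meter.billable_core_hours():.1f} core-hours")        # 4.0 core-hours
```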

Licensing Commercial Software: A Defining Challenge for the Open Grid Forum?

Reporting on last week’s GridWorld event, GRIDtoday editor Derrick Harris states: “The 451 Group has consistently found software licensing concerns to be among the biggest barriers to Grid adoption.” Not surprisingly then, William Fellows (a principal analyst with The 451 Group) convened a panel session on the topic.

Because virtual organizations typically span geographic and/or organizational boundaries, the licensing of commercial software has been topical since Grid Computing’s earliest days. As illustrated below, traditional licensing models account for a single organization operating in a single geography (lower-left quadrant). Any deviation from this, as illustrated by any of the other quadrants, creates challenges for these licensing models as multiple geographies and/or multiple organizations become involved. Generally speaking, the licensing challenges are most pronounced for vendors of end-user applications, as middleware needs to be pervasive anyway, and physical platforms (hardware plus operating system) have a distinct sense of ownership and place.

[Figure: Traditional licensing models mapped against single vs. multiple organizations and geographies (grid_sw_licensing_vo.png)]

The uptake of multicore CPUs and virtualization technologies (like VMware) has considerably exacerbated the situation, as it breaks the simple, per-CPU licensing model employed by many Independent Software Vendors (ISVs), as illustrated below.

[Figure: Per-CPU licensing undermined by multicore CPUs and virtualization (grid_sw_licensing_hw.png)]

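The ambiguity is easy to demonstrate. The sketch below uses Python’s os.cpu_count(), which reports logical processors, so a hyper-threaded host, a multi-socket server and a virtual machine exposing the same number of vCPUs all look alike to a naive per-CPU count; the fee figure is, of course, hypothetical.

```python
# Sketch of why naive per-CPU counting breaks down: os.cpu_count() reports
# logical processors, so multicore, hyper-threaded and virtualized hosts can
# all report the same figure while being very different things to license.
import os

logical_processors = os.cpu_count() or 1
print(f"logical processors visible to this environment: {logical_processors}")

FEE_PER_CPU = 1000  # hypothetical list price per "CPU"
print(f"naive per-CPU licensing fee: ${logical_processors * FEE_PER_CPU}")
```
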
In order to make progress on this issue, all stakeholders need to collaborate towards the development of recontextualized models for licensing commercial software. Even though this was apparently a relatively short panel session, Harris’ report indicates that the discussion resulted in interesting outcomes:

The discussion started to gain steam toward the end, with discussions about the effectiveness of negotiated enterprise licenses, metered licensing, token-based licenses and even the prospect of having the OGF develop a standardized licensing framework, but, unfortunately, time didn’t permit any real fleshing out of these ideas.

Although it’s promising that innovative suggestions were entertained, it’s even more interesting to me how the Open Grid Forum (OGF) was implicated in this context.

The OGF recently resulted from the merger of the Global Grid Forum (GGF) and the Enterprise Grid Alliance (EGA). Whereas Open Source characterizes the GGF’s overarching culture and philosophy regarding software, commercial software more aptly represents the former EGA’s vendor-heavy demographics. If OGF takes on the licensing of commercial software, it’s very clear that there will be a number of technical challenges. That OGF will also need to bridge the two solitudes represented by the former GGF and EGA, however, may present an even graver challenge.

Grid Computing’s Identity Crisis

Hanoch Eiron, Open Grid Forum (OGF) vice president of marketing, recently contributed a special feature to GRIDtoday. Even though Eiron’s contribution spans a mere three paragraphs, there is ample content to comment on.

Eiron opens with:

Let’s face it — the Grid hype by commercial vendors in the past few years was premature. Some would say that it has actually slowed the development of grids as it created customer expectations that could not be met.

IBM’s arrival on the Grid Computing scene, publicly marked by its endorsement of the Open Source Globus Toolkit, signified the dawn of vendor-generated hype. However, long before IBM sought to paint Grid Computing blue, it was Global Grid Forum (GGF) and Globus Project representatives who were the source of hype. Back in those BBB (Before Big Blue) days, academic gridders evangelized that Grid Computing represented the next phase in the ongoing evolution of Distributed Computing. And specifically with respect to Grid Computing standards and the Globus Toolkit:

This evolution in standards has wreaked havoc on the implementation front. For example, in moving from Versions 2 (protocol-specific implementation based on FTP, HTTP, LDAP, etc.) to 3 (introduction of Web services via OGSI) to 4 (refinement of previously introduced OGSI Web Services to WS-RF), the Open Source Globus Toolkit has undergone significant changes. When such changes break forward-compatibility in subsequent versions of the software, standards evolution becomes an impediment to adoption.

For a specific example, consider CERN’s gamble with Grid Computing:

The standards flux, which resulted in evolving variants of the Globus Toolkit, caused CERN and its affiliates some grief for at least two reasons.

  • First, projects like the LHC require significant advance planning. Evolving standards and implementations make advance planning even more challenging, and the allusions to gambling quite appropriate.
  • Second, despite the fact that CERN’s primary activity is academic research, CERN needs to provide a number of production-quality services. Again, such service levels are difficult to deliver on when standards and implementations are in a state of continuous change.

In other words, it’s not just vendors who have been guilty of hype and over-promising on deliverables.

Later in his first paragraph, Eiron states: “… it is clear that from a public perception standpoint, grids are now in a trough.” I couldn’t agree more. As the recent GridWorld event has ably demonstrated, considerable confusion exists about Grid Computing. Newbies, early adopters and even the Griderati are uncomfortable with the term, unclear on what it means and how it fits into the broader context of clustering, cyberinfrastructure, Distributed Computing, High Performance Computing (HPC), Service Oriented Architecture (SOA), Utility Computing, virtualization, Web Services, etc. (That adaptive enterprise and autonomic computing don’t receive much play is of mild consolation.) Grid Computing is in a trough because it is suffering from a serious identity crisis. Fortunately, Eiron and OGF are not in denial, and have plans to address this situation.

Eiron refers to Grid Computing’s latest poster child, eBay. And although I haven’t had the benefit of a deep dive into the technical aspects of the eBay Grid, I expect it to be a grid more in positioning than in substance. In a GRIDtoday Q&A with Paul Strong, distinguished research scientist at eBay Research Labs, there is evidence of cluster-level workload management, clustered databases, farms of Web servers, and other examples of Distributed Computing technologies. However, nothing that Strong discusses seems all that griddy. All of this echoes what I wrote previously in a GRIDtoday article:

The highest-profile demonstrations of Grid computing run the risk of trivializing Grid computing. It may seem harsh to paint the well-intentioned World Community Grid as technologically trivial, but in terms of full disclosure, this is not the most sophisticated demonstration of Grid computing. Equally damaging are those clustered applications (like Oracle 10g) that masquerade as Grid-enabled. Taking such license serves only to confuse and dilute the very essence of Grid computing.

Eiron’s own words serve well in summing up here:

It is clear that the community needs to do a better job of explaining the role of grids within the landscape of close and perhaps somewhat overlapping technologies, such as virtualization, services-oriented architecture (SOA), automation, etc. The Grid community also needs to better articulate how the architectures, industry standards and products can help customers reap the benefits of grids. It can use the perception trough as an opportunity to re-group and create a solid story that can be delivered upon, or morph into something else. It seems that much of the influence on how things will evolve is now in the Grid community’s own hands.

Of course, only time will tell if this window of opportunity is still open, and if the Grid Computing community is able to capitalize on it.