As some of you know, I’ve recently joined Bright Computing.
Last week, I attended Bio-IT World 2013 in Boston. Bright had an excellent show – lots of great conversations, and even an award!
During numerous conversations, the notion of extending on-site IT infrastructure into the cloud was raised. Bright has an excellent solution for this.
What also emerged during these conversations were two use cases for this extension of local IT resources via the cloud. I thought this was worth capturing and sharing. You can read about the use cases I identified over at “On the Bright side …”
Over on LinkedIn, there’s an interesting discussion taking place in the “High Performance & Super Computing” group on the recently announced acquisition of Markham-based Platform Computing by IBM. My comment (below) was stimulated by concerns regarding the implications of this acquisition for IBM’s traditional competitors (i.e., other system vendors such as Cray, Dell, HP, etc.):
It could be argued:
“IBM groks vendor-neutral software and services (e.g., IBM Global Services), and therefore coopetition.”
At face value then, it’ll be business-as-usual for IBM-acquired Platform – and therefore its pre-acquisition partners and customers.
While business-as-usual plausibly applies to porting Platform products to offerings from IBM’s traditional competitors, I believe the sensitivity to the new business relationship (Platform as an IBM business unit) escalates rapidly for any solution that has value in HPC.
To deliver a value-rich solution in the HPC context, Platform has to work (extremely) closely with the ‘system vendor’. In many cases, this closeness requires that Intellectual Property (IP) of a technical and/or business nature be communicated – often well before solutions are introduced to the marketplace and made available for purchase. Thus Platform’s new status as an IBM entity has the potential to seriously complicate matters regarding risk, trust, etc., relating to the exchange of IP.
Although it’s been stated elsewhere that IBM will allow Platform a measure of post-acquisition independence, I doubt that this will provide sufficient comfort for matters relating to IP. While NDAs specific to the new (and independent) Platform business unit within IBM may offer some additional comfort, I believe that technically oriented approaches offer the greatest promise for mitigating concerns relating to risk, trust, etc., in the exchange of IP.
In principle, one possibility is the adoption of open standards by all stakeholders. Such standards hold the promise of allowing integration between products via documented interfaces and protocols, while allowing (proprietary) implementation specifics to remain opaque. Although this may sound appealing, the availability of such standards remains elusive – despite various well-intentioned efforts (by the HPC, Grid, Cloud, etc., communities).
While Platform’s traditional competitors predictably and understandably gorge themselves sharing FUD, it obviously behooves both Platform and IBM to expend some effort allaying the concerns of their customers and partner ecosystem.
I’d be interested to hear of others’ suggestions as to how this new business relationship might allow for sustained innovation in the HPC context from IBM-acquired Platform.
Disclaimer: Although I do not have a vested financial interest in this acquisition, I did work for Platform from 1998-2005.
To reiterate here then:
How can this new business relationship allow for sustained, partner-friendly innovation in the HPC context from IBM-acquired Platform?
Please feel free to share your thoughts on this via comments to this post.
I was doing some errands earlier this evening (Toronto time) … While I was in the car, the all-news station (680news) I had on played some of Steve Jobs’ 2005 commencement address to Stanford grads. As I listened, and later re-read my own blog post on discovering the same address, I was struck, on the occasion of his passing, by the importance of valuing every experience in life. In Jobs’ case, he eventually leveraged his experience with calligraphy to design the typography for the Apple Mac – after a ten-year incubation period!
I think it’s time to read that Stanford commencement address again …
RIP Steve – and thanks much.
Recently I shared an a-ha! moment on the use of virtual environments for confronting the fear of public speaking.
The more I think about it, the more I’m inclined to claim that the real value of such technology is in targeted skills development.
Once again, I’ll use myself as an example here to make my point.
If I think back to my earliest attempts at public speaking as a graduate student, I’d claim that I did a reasonable job of delivering my presentation. And given that the content of my presentation was likely vetted with my research peers (fellow graduate students) and supervisor ahead of time, this left me with a targeted opportunity for improvement: The Q&A session.
Countless times I can recall having a brilliant answer to a question long after my presentation was finished – e.g., on my way home from the event. Not very useful … and exceedingly frustrating.
I would also assert that this lag, between question and appropriate answer, had a whole lot less to do with my expertise in a particular discipline, and a whole lot more to do with my degree of nervousness – how else can I explain the ability to fashion perfect answers on the way home!
Over time, I like to think that I’ve improved my ability to deliver better-quality answers in real time. How have I improved? Experience. I would credit my experience teaching science to non-scientists at York, as well as my public-sector experience as a vendor representative at industry events, as particularly edifying in this regard.
Rather than submit to such baptisms of fire, and because hindsight is 20/20, I would’ve definitely appreciated the opportunity to develop my Q&A skills in virtual environments such as Nortel web.alive. Why? Such environments can easily facilitate the focused effort I required to target the development of my Q&A skills. And, of course, as my skills improve, so can the challenges brought to bear via the virtual environment.
All speculation at this point … Reasonable speculation that needs to be validated …
If you were to embrace such a virtual environment for the development of your public-speaking skills, which skills would you target? And how might you make use of the virtual environment to do so?
Confession: In the past, I’ve been extremely quick to dismiss the value of Second Life in the context of teaching and learning.
Even worse, my dismissal was not fact-based … and, if truth be told, I’ve gone out of my way to avoid opportunities to ‘gather the facts’ by attending presentations at conferences, conducting my own research online, speaking with my colleagues, etc.
So I, dear reader, am as surprised as any of you to have had an egg-on-my-face epiphany this morning …
Please allow me to elaborate:
- Yesterday, I witnessed a demonstration of Nortel web.alive (dubbed by some as ‘Second Life for business’)
- This morning I was brainstorming content with a colleague for an upcoming presentation on computing resources available for researchers at York
It was at some point during this morning’s brainstorming session that the egg hit me squarely in the face:
Why not use Nortel web.alive to prepare graduate students for presenting their research?
Often feared more than death and taxes, public speaking is an essential aspect of academic research – regardless of the discipline.
As a former graduate student, I could easily ‘see’ myself in this environment with increasingly realistic audiences comprising friends, family and/or pets, fellow graduate students, my research supervisor, my supervisory committee, etc. Because Nortel web.alive only requires a Web browser, my audience isn’t geographically constrained. This geographical freedom is important as it allows for participation – e.g., between graduate students at York in Toronto and their supervisor who just happens to be on sabbatical in the UK. (Trust me, this happens!)
As the manager of Network Operations at York, I’m always keen to encourage novel use of our campus network. The public-speaking use case I’ve described here has the potential to make innovative use of our campus network, regional network (GTAnet), provincial network (ORION), and even national network (CANARIE) that would ultimately allow for global connectivity.
While I busy myself scraping the egg off my face, please chime in with your feedback. Does this sound useful? Are you aware of other efforts to use virtual environments to confront the fear of public speaking? Are there related applications that come to mind for you? (As someone who’s taught classes of about 300 students in large lecture halls, a little bit of a priori experimentation in a virtual environment would’ve been greatly appreciated!)
Update (November 13, 2009): I just Googled the title of this article and came up with a few relevant hits; further research is required.
I bumped into a professional acquaintance last week. After describing briefly a presentation I was about to give, he offered to broker introductions to others who might have an interest in the work I’ve been doing. To initiate the introductions, I crafted a brief description of what I’ve been up to for the past 5 years in this area. I’ve also decided to share it here as follows:
As always, [name deleted], I enjoyed our conversation at the recent AGU meeting in Toronto. Below, I’ve tried to provide some context for the work I’ve been doing in the area of knowledge representations over the past few years. I’m deeply interested in any introductions you might be able to broker with others at York who might have an interest in applications of the same.
Since 2004, I’ve been interested in expressive representations of data. My investigations started with a representation of geophysical data in the eXtensible Markup Language (XML). Although this was successful, use of the approach exposed an oversight: the importance of metadata (data about data). To address this oversight, a subsequent effort introduced a relationship-centric representation via the Resource Description Framework (RDF). RDF, by the way, forms the underpinnings of the next-generation Web – variously known as the Semantic Web, Web 3.0, etc. In addition to taking care of issues around metadata, use of RDF paved the way for increasingly expressive representations of the same geophysical data. For example, to represent features in and of the geophysical data, an RDF-based scheme for annotation was introduced using the XML Pointer Language (XPointer). Somewhere around this point in my research, I placed all of this into a framework.
In addition to applying my Semantic Framework to use cases in Internet Protocol (IP) networking, I’ve continued to tease out increasingly expressive representations of data. Most recently, these representations have been articulated in RDFS – i.e., RDF Schema. And although I have not reached the final objective of an ontological representation in the Web Ontology Language (OWL), I am indeed progressing in this direction. (Whereas schemas capture the vocabulary of an application domain in geophysics or IT, for example, ontologies allow for knowledge-centric conceptualizations of the same.)
From niche areas of geophysics to IP networking, the Semantic Framework is broadly applicable. As a workflow for systematically enhancing the expressivity of data, the Framework is based on open standards emerging largely from the World Wide Web Consortium (W3C). Because there is significant interest in this next-generation Web from numerous parties and angles, implementation platforms allow for increasingly expressive representations of data today. In making data actionable, the ultimate value of the Semantic Framework is in providing a means for integrating data from seemingly incongruous disciplines. For example, such representations are actually responsible for providing new results – derived by querying the representation through a ‘semantified’ version of the Structured Query Language (SQL) known as SPARQL.
I’ve spoken formally and informally about this research to audiences in the sciences, IT, and elsewhere. With York co-authors spanning academic and non-academic staff, I’ve also published four refereed journal papers on aspects of the Framework, and have an invited book chapter currently under review – interestingly, this chapter has been contributed to a book focusing on data management in the Semantic Web. Of course, I’d be pleased to share any of my publications and discuss aspects of this work with those finding it of interest.
With thanks in advance for any connections you’re able to facilitate, Ian.
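For readers less familiar with RDF, the progression the letter describes – from raw data, to subject–predicate–object triples, to SPARQL-style queries over those triples – can be sketched with a toy example. What follows is a minimal, self-contained illustration in Python; a real implementation would use an RDF library (e.g., rdflib), and all of the identifiers (ex:anomaly42, ex:MagneticAnomaly, and so on) are hypothetical placeholders rather than terms from my actual geophysical vocabulary:

```python
# A toy triple store, illustrating the RDF data model: a graph is simply
# a set of (subject, predicate, object) triples.
# All identifiers ("ex:anomaly42", "ex:MagneticAnomaly", etc.) are
# hypothetical placeholders, not terms from any published vocabulary.
graph = {
    ("ex:anomaly42", "rdf:type", "ex:MagneticAnomaly"),
    ("ex:anomaly42", "ex:value", "310.5"),
    ("ex:anomaly42", "ex:units", "nT"),
    ("ex:anomaly42", "ex:observedBy", "ex:surveyA"),
    ("ex:surveyA", "rdf:type", "ex:AeromagneticSurvey"),
}

def match(triples, pattern):
    """Return the triples matching an (s, p, o) pattern; None acts as a
    variable, much like ?s, ?p, ?o in a SPARQL basic graph pattern."""
    return [t for t in triples
            if all(want is None or want == got
                   for want, got in zip(pattern, t))]

# SPARQL-flavoured question: which resources are magnetic anomalies,
# and what values do they carry?
anomalies = [s for (s, _, _) in
             match(graph, (None, "rdf:type", "ex:MagneticAnomaly"))]
values = {s: match(graph, (s, "ex:value", None))[0][2] for s in anomalies}
print(values)  # {'ex:anomaly42': '310.5'}
```

In real SPARQL, the final question would read something like `SELECT ?s ?v WHERE { ?s rdf:type ex:MagneticAnomaly . ?s ex:value ?v }` – the pattern-with-variables idea is the same, which is why SPARQL is often introduced as a ‘semantified’ SQL.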
If anything comes of this, I’m sure I’ll write about it here – eventually!
In the meantime, feedback is welcome.
I’ve added a few more articles over on Bright Hub: