Confession: In the past, I’ve been extremely quick to dismiss the value of Second Life in the context of teaching and learning.
Even worse, my dismissal was not fact-based … and, if truth be told, I’ve gone out of my way to avoid opportunities to ‘gather the facts’ by attending presentations at conferences, conducting my own research online, speaking with my colleagues, etc.
So I, dear reader, am as surprised as any of you to have had an egg-on-my-face epiphany this morning …
Please allow me to elaborate:
- Yesterday, I witnessed a demonstration of Nortel web.alive (dubbed by some as ‘Second Life for business’)
- This morning I was brainstorming content with a colleague for an upcoming presentation on computing resources available for researchers at York
It was at some point during this morning’s brainstorming session that the egg hit me squarely in the face:
Why not use Nortel web.alive to prepare graduate students for presenting their research?
Often feared more than death and taxes, public speaking is an essential aspect of academic research – regardless of the discipline.
As a former graduate student, I could easily ‘see’ myself in this environment with increasingly realistic audiences made up of friends, family and/or pets, fellow graduate students, my research supervisor, my supervisory committee, etc. Because Nortel web.alive requires only a Web browser, my audience isn’t geographically constrained. This geographical freedom is important: it allows, for example, graduate students at York in Toronto to present to their supervisor who just happens to be on sabbatical in the UK. (Trust me, this happens!)
As the manager of Network Operations at York, I’m always keen to encourage novel use of our campus network. The public-speaking use case I’ve described here has the potential to make innovative use of our campus network, regional network (GTAnet), provincial network (ORION), and even national network (CANARIE) that would ultimately allow for global connectivity.
While I busy myself scraping the egg off my face, please chime in with your feedback. Does this sound useful? Are you aware of other efforts to use virtual environments to confront the fear of public speaking? Are there related applications that come to mind for you? (As someone who’s taught classes of about 300 students in large lecture halls, a little bit of a priori experimentation in a virtual environment would’ve been greatly appreciated!)
Update (November 13, 2009): I just Googled the title of this article and came up with a few relevant hits; further research is required.
Just in case you haven’t heard:
… join us for an exciting national summit on innovation and technology, hosted by ORION and CANARIE, at the Metro Toronto Convention Centre, Nov. 3 and 4, 2008.
“Powering Innovation – a National Summit” brings over 55 keynotes, speakers and panelists from across Canada and the US, including Dr. John Kao, best-selling author of Innovation Nation; Dr. Doug Van Houweling, President/CEO of Internet2; Dr. Robert J. Birgeneau, Chancellor of the University of California at Berkeley; advanced-visualization guru Dr. Chaomei Chen of Philadelphia’s Drexel University; and many more. Sara Diamond, President of the Ontario College of Art & Design, chairs “A Boom with View”, a session on visualization technologies. Dr. Gail Anderson presents on forensic-science research. Other speakers include Nora Young, host of CBC Radio’s Spark; Delvinia Interactive’s Adam Froman; and Ron Dembo, President and CEO of Zerofootprint.
This is an excellent opportunity to meet and network with up to 250 researchers, scientists, educators, and technologists from across Ontario, Canada, and the international community. Attend sessions on the very latest in e-science: network-enabled platforms, cloud computing, the greening of IT, applications in the “cloud”, innovative visualization technologies, teaching and learning in a Web 2.0 universe, and more. Don’t miss exhibitors and showcases ranging from holographic 3D imaging to IP-based television platforms to advanced networking.
For more information, visit http://www.orioncanariesummit.ca.
Our manuscript on annotation modeling is one step closer to publication now, as late last night my co-authors and I received sign-off on the copy-editing phase. The journal, Computers and Geosciences, is now preparing proofs.
For the most part then, as authors, we’re essentially done.
However, we may not be able to resist the urge to include a “Note Added in Proof”. At the very least, this note will allude to:
- The work being done to refactor Annozilla for use in a Firefox 3 context; and
- How annotation is figuring in OWL2 (Google “W3C OWL2” for more).
Stay tuned …
What a difference a day makes!
Yesterday I learned that my paper on semantic platforms was rejected.
Today, however, the news was better, as a manuscript on annotation modeling was accepted for publication.
It’s been a long road for this paper:
- Its conception dates back to a presentation I gave at the 2006 Fall Meeting of the AGU.
- The paper was submitted as a contribution to the Computers & Geosciences Special Issue on Geoscience Knowledge Representation in …
- The initial reviews called for major revisions. With tremendous support from my co-authors, the paper was significantly revised, and re-submitted.
- After some additional interactions, I just learned that the paper was finally accepted for publication.
The abstract of the paper is as follows:
Annotation Modeling with Formal Ontologies: Implications for Informal Ontologies
L. I. Lumb, J. R. Freemantle, J. I. Lederman & K. D.
Computing and Network Services, York University, 4700 Keele Street, Toronto, Ontario, M3J 1P3, Canada
Earth & Space Science and Engineering, York University, 4700 Keele Street, Toronto, Ontario, M3J 1P3, Canada
Knowledge representation is increasingly recognized as an important component of any cyberinfrastructure (CI). In order to expediently address scientific needs, geoscientists continue to leverage the standards and implementations emerging from the World Wide Web Consortium’s (W3C) Semantic Web effort. In an ongoing investigation, previous efforts have been aimed towards the development of a semantic framework for the Global Geodynamics Project (GGP). In contrast to other efforts, the approach taken has emphasized the development of informal ontologies, i.e., ontologies that are derived from the successive extraction of Resource Description Framework (RDF) representations from eXtensible Markup Language (XML), and then Web Ontology Language (OWL) from RDF. To better understand the challenges and opportunities for incorporating annotations into the emerging semantic framework, the present effort focuses on knowledge-representation modeling involving formal ontologies. Although OWL’s internal mechanism for annotation is constrained to ensure computational completeness and decidability, externally originating annotations based on the XML Pointer Language (XPointer) can easily violate these constraints. Thus the effort of modeling with formal ontologies allows for recommendations applicable to the case of incorporating annotations into informal ontologies.
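To give a flavour of what the abstract means by successively extracting RDF from XML: each XML record can be naively “lifted” into RDF triples, with the record’s identifier becoming the subject, each child element a predicate, and its text the object. The sketch below is purely illustrative, with made-up element names and a placeholder namespace; it is not drawn from the actual GGP schema or the paper’s implementation.

```python
import xml.etree.ElementTree as ET

# Hypothetical GGP-style metadata record; element names are illustrative only.
xml_record = """
<station id="CAN">
  <name>Cantley</name>
  <latitude>45.585</latitude>
</station>
"""

BASE = "http://example.org/ggp/"  # placeholder namespace, not a real GGP URI

# Naive XML-to-RDF lifting, emitted in N-Triples syntax:
# the id attribute names the subject, each child element becomes a predicate.
root = ET.fromstring(xml_record)
subject = f"<{BASE}{root.get('id')}>"
triples = [
    f'{subject} <{BASE}{child.tag}> "{child.text}" .'
    for child in root
]
print("\n".join(triples))
```

Running this prints two triples relating the station to its name and latitude; in the paper’s pipeline, OWL class and property definitions would then be derived from accumulated RDF of this kind.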
I expect the whole paper will be made available in the not-too-distant future …
- If we introduce protocol-based QoS, won’t any application using that protocol gain access to differentiated QoS? I sense that QoS can be applied in a very granular fashion, but do I really want to turn my entire team of network specialists into QoS specialists? (From an operational perspective, I know I can’t afford to!)
- When is the right time to introduce QoS? Users are clamoring for QoS ASAP, as it’s often perceived as a panacea – one that often masks the root cause of what really ails them … From a routing and switching perspective, do we wait for tangible signs of congestion before implementing QoS? I certainly have the impression that others managing campus as well as regional networks plan to do this.
- And what about standards? QoS isn’t baked into IPv4, but there are some implementations that promote interoperability between vendors. Should MPLS, used frequently in service providers’ networks, be employed as a vehicle for QoS in the Campus network context?
- QoS presupposes that use is to be made of an existing network. Completely segmenting networks, i.e., dedicating a network to a VoIP deployment, is also an option, one that has the potential to bypass the need for QoS entirely.
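On the granularity point above: marking can indeed be done per application rather than per protocol, because an individual process can set the DSCP bits on its own sockets and leave everything else on the host at best effort. A minimal sketch, assuming a Linux-style sockets API (whether the network actually honours the marking is a separate policy question for the switches and routers):

```python
import socket

# DSCP Expedited Forwarding (EF) is code point 46 (RFC 3246). DSCP occupies
# the upper six bits of the IP TOS byte, so the byte value is 46 << 2 = 0xB8.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 184

# Mark one application's traffic only: datagrams sent on this socket carry
# the EF code point, while other applications remain at DSCP 0 (best effort).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on Linux
sock.close()
```

The flip side, of course, is exactly the operational worry raised above: nothing stops any application from marking its own packets EF, so the network edge still has to police or re-mark traffic according to institutional policy.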
Earlier this week, I participated in the Net@EDU Annual Meeting 2008: The Next 10 Years. For me, the key takeaways are:
- The Internet can be improved. IP, its transport protocols (RTP, SIP, TCP and UDP), and especially HTTP, are stifling innovation at the edges – everything (device-oriented) on IP and everything (application-oriented) on the Web. There are a number of initiatives that seek to improve the situation. One of these, with tangible outcomes, is the Stanford Clean Slate Internet Design Program.
- Researchers and IT organizations need to be reunited. In the 1970s and 1980s, these demographics worked closely together and delivered a number of significant outcomes. Since the 1990s, however, these groups have remained separate and distinct, and the separation has not benefited either group. As the manager of a team focused on operating a campus network who still manages to conduct a modest amount of research, this takeaway resonates particularly strongly with me.
- DNSSEC is worth investigating now. DNS is a mission-critical service. It is often, however, an orphaned service in many IT organizations. DNSSEC comprises four standards that extend the original concept in security-savvy ways – e.g., they harden your DNS infrastructure against DNS-targeted attacks. Although production implementations remain in the future, the time to get involved is now.
- The US is lagging behind in the case of broadband. An EDUCAUSE blueprint details the current situation, and offers a prescription for rectifying it. As a Canadian, I find it noteworthy that Canada’s progress in this area is exceptional, even though it is regarded as a much more rural nation than the US. The key to the Canadian success, and a key component of the blueprint’s prescription, is the funding model that shares costs evenly between two levels of government (federal and provincial) as well as the network builder/owner.
- Provisioning communications infrastructures for emergency situations is a sobering task. Virginia Tech experienced 100-3000% increases in the demands on their communications infrastructure as a consequence of their April 16, 2007 event. Such stress factors are exceedingly difficult to estimate and account for. In some cases, responding in real time allowed for adequate provisioning through a tremendous amount of collaboration. Mass notification remains a challenge.
- Today’s and tomorrow’s students are different from yesterday’s. Although this may sound obvious, the details are interesting. Ultimately, this difference derives from the fact that today’s and tomorrow’s students have more intimately integrated technology into their lives from a very young age.
- Cyberinfrastructure remains a focus. EDUCAUSE has a Campus Cyberinfrastructure Working Group. Some of their deliverables are soon to include a CI digest, plus contributions from their Framing and Information Management Focus Groups. In addition to the working-group session, Don Middleton of NCAR discussed the role of CI in the atmospheric sciences. I was particularly pleased that Middleton made a point of showcasing semantic aspects of virtual observatories such as the Virtual Solar-Terrestrial Observatory (VSTO).
- The Tempe Mission Palms Hotel is an outstanding venue for a conference. Net@EDU has themed its annual meetings around this hotel, Tempe, Arizona, and the month of February. The venue delivers on this strategic choice in spades: from individual rooms to conference food and logistics to the mini gym and pool, The Tempe Mission Palms Hotel delivers.