Adobe Project Glasswing
(image: Adobe Inc.)

If you are a forward-looking museum professional, the challenge you often face is moving parts of your institution beyond the 'objects behind a sheet of glass' model, towards one that allows a more tactile, interactive relationship between visitor and artefact.

Adobe, best known for its suite of creative software, which includes Photoshop and Illustrator, has previewed some exciting technology that might well enable exhibitors to offer the best of both worlds.

Project Glasswing allows objects to be placed behind a sheet of glass that acts visually like a layer in an Adobe app. Think animated graphics, video or text floating on the surface of a window, while the actual object of interest sits on full display behind it. There's also full touchscreen functionality built in, so users can interact with the graphics in a variety of ways.
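To make that 'layer' metaphor concrete, here is a minimal, purely hypothetical sketch in TypeScript: a transparent canvas overlay whose floating caption can be repositioned by touch, while the physical object would sit behind the glass. Adobe has not published Glasswing's actual framework, so none of these names or APIs are theirs; this only illustrates the general idea of interactive graphics layered in front of a static exhibit.

```typescript
// Hypothetical sketch of a "graphics layer over glass": a transparent
// canvas whose floating label responds to touch/pointer input. The real
// artefact would be visible through the glass behind this layer.

interface Label {
  x: number;
  y: number;
  text: string;
}

const canvas = document.createElement("canvas");
canvas.width = 800;
canvas.height = 600;
// Transparent background: in a Glasswing-style unit, the "background"
// is simply the physical object behind the pane.
canvas.style.background = "transparent";
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d")!;
// Example caption; in practice this could be any animated graphic,
// video frame, or text block floating over the display case.
const label: Label = { x: 300, y: 200, text: "Ming vase, c. 1500" };
let dragging = false;

function draw(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.font = "20px sans-serif";
  ctx.fillStyle = "rgba(255, 255, 255, 0.9)";
  ctx.fillText(label.text, label.x, label.y);
}

// Pointer events cover mouse and touchscreen input alike,
// which is what gives the pane its interactive quality.
canvas.addEventListener("pointerdown", (e: PointerEvent) => {
  dragging = true;
  label.x = e.offsetX;
  label.y = e.offsetY;
  draw();
});
canvas.addEventListener("pointermove", (e: PointerEvent) => {
  if (!dragging) return;
  label.x = e.offsetX;
  label.y = e.offsetY;
  draw();
});
canvas.addEventListener("pointerup", () => {
  dragging = false;
});

draw();
```

Even this toy version shows the appeal: the interpretive layer can be rearranged, animated, or swapped out entirely without anyone ever touching the object it annotates.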

In much of the promotional material, Adobe has particularly emphasised the technology's usefulness in branding and retail. That probably makes commercial sense for Adobe, but I happen to think the implications of this tech for museum display spaces are the most profound.

Imagine the liberating effect of being able to contextualise and bring objects to life at the very point where visitors view them. Imagine enabling visitors to interact with, dissect, and closely examine a fragile or valuable object without handling it themselves or removing it from its display case.

Adobe Glasswing animation
(image: Adobe Inc.)

This is Augmented Reality on a small scale, and a reaffirmation that corporations like Adobe very much expect AR to be a big part of our immediate future.

It should be noted that this project is in its infancy. There are still some factors we won't fully understand until it is rolled out further. I'm particularly interested to know to what degree, if any, this technology incorporates 3D object scanning and mapping. In other words, does the object itself just sit lifeless behind the glass, or can the system respond intelligently to its position?

Also, Adobe seems interested only in providing the software framework for this technology; it will be up to hardware manufacturers to produce the functioning units. This will inevitably add friction to the release process, so we won't see this technology come to market for a little while yet.
