What if you could add UI to anything?
With the rapid emergence of connected, ‘hybrid’ objects, this is an increasingly relevant question. We’ve all heard buzz about the Internet of Things (IoT) – tools, appliances, furnishings that are smarter, Internet-ready, and highly configurable.
Until now, interface design for IoT has involved one of two approaches: stick a screen on the object itself, or design a dedicated app that can configure the smart object using more traditional UI principles.
The Reality Editor, a new project out of the MIT Media Lab, presents a bold new platform for hybrid object interface design, oriented towards the future world of IoT and augmented reality (AR). Reality Editor lets designers create contextual, augmented interfaces for hybrid objects – eliminating the need for a dedicated screen or app. What’s more, it leverages IoT technologies to enable hyper-personalized relationships between the features of hybrid objects.
The mastermind behind Reality Editor is Valentin Heun, a PhD student at the Media Lab’s Fluid Interfaces research group. We talked with Valentin to learn more about his motivations for the Reality Editor platform, thoughts on the intersection of AR/VR and IoT, and inspirations for future interface design.
Describe your long-term vision for the Reality Editor. Was this platform designed with mobile phones and tablets in mind, or were you thinking ahead to a future of AR eyewear (Hololens, Magic Leap)?
I think what I actually have in mind is the physical world around you. I don’t think that we need to narrow down on an eyewear device, or smartphone, or any of these things … what we need to narrow down is the physicality, the affordance of the objects around us in our world.
I find it a bit confusing that we walk around and look at our phones, and stop actually engaging – with our bodies – with the world around us. The vision here (and maybe this sounds a little bit surprising) is that I want to push more and more of the computer technology into the world. So there will be more computers around you, but these computers will feel less and less like interacting with a computer.
When we think about a computer [today], it is a desktop paradigm. We have a screen, a keyboard, a pointer on the screen. But it is so much more. If we deploy it rightfully in the world, it will not feel like interacting with a computer. It will feel like actually interacting with the real world, the physical world, and we will use our body to interact with that world.
How does a world that is built on the Open Hybrid concept change our perception of affordances? Do we begin to see every object as having an action possibility beyond its innate physical form? Or is your vision more about enhancing the existing affordances of an object?
There’s an interesting property when we think about virtual objects on a screen, and physical objects in the world. That property is the state of being static. A physical object is always static. If I had handed you an object a few weeks ago, I could now talk about it, and I know that you would have it, and it looks the same way as I gave it to you. That’s why we can have a conversation about it … the physical objects around you have static behavior.
It’s different on a screen. Virtual objects are non-static. They can be changed all the time. They can have different properties. That’s the reason that you’re not sharing your computer with someone else; it’s a one-person show. The way that you arrange the things on your desktop; the way that you use apps; it’s highly personal.
So, what is interesting is, if you have a physical object that is NOT fully static – one that can actually change how it operates, how it is used after it comes “out of the factory” – it raises a lot of questions. That is the challenge now: to see, from a design perspective, where we’re going with this technology and what we can do with it.
All these questions – these are what the Reality Editor and Open Hybrid are opening. It’s really just a start; it’s a small first step to figure out – how can we make physical things around us connected, and non-static, and how do we interact [with them]? Because right now, we cannot. We have phones that let us connect to physical objects, but that doesn’t really match with the physical world… it’s more like extending the desktop. The world does not look like a desktop. Not everything can be put on your work desk, you know?
It’s really a [larger] question of how do we relate physical things with each other, how are we related to them, how can we manipulate them? And from that perspective, the Reality Editor is really a tiny, tiny step on the whole vision.
Where this vision leads is a world of something I would call ‘indistinguishable reality,’ or an ‘internet of indistinguishable realities.’ We will reach a point where, because we no longer interact with desktop-like computers, we interact with computers that are in the physical world – they are part of the physical world. So, you cannot say they are something ‘virtual’ anymore. There’s no ‘virtual’ versus ‘real’… it’s completely merged, and therefore these words disappear.
What are your thoughts on the AR visions offered by Hololens and Magic Leap, or immersive VR like the Rift and Vive? Do you see these technologies fitting into any of the work you’re doing with connected objects?
You know, there’s a whole gray scale – it’s not this-or-that. For example, you can use technology like Oculus to dive into environments remotely and really feel like you are there. It helps a lot when [you’re] really far away from something you need to control. [VR] is really, really good technology for something like that.
But I think the interesting part is really – how are we connected with the real world? I don’t think that computer-generated virtual environments are really the killer application for [VR]. I think the telepresence scenarios are really, really interesting for virtual reality. [Regarding] Hololens, you know, they’re all following one vision; the vision is to make the real world and the virtual world one. And from that perspective I think they’re interesting.
AR is a new field for many designers. How would a designer prototype an interface for a hybrid object?
We had a conversation with folks from IDEO: what kind of person is going to be building these [hybrid] objects? Web designers and product designers are a perfect fit to create these future objects.
So, we spent a lot of time building tools that support the design process of a web designer and a product designer working together on hybrid objects for the Reality Editor. You use your traditional methods: you make your mockups with paper, you make your drawings, and then you start breaking it down. You can use Illustrator to [design] how the [hybrid] interfaces work; you build your mockups specifically, and then you can basically put your mockups, with our tools, directly on the object, and see how it looks and feels.
Read more about designing an Open Hybrid interface here.
Who are your influences in the world of interface design? What forward-thinking designers can you recommend to our students?
Look into the work of Bradley Munkowitz (GMUNK). He worked on a lot of the graphics for the Tron: Legacy movie. I think [he builds] a really nice type of computer graphics that can inspire real user experience and interface design.
For these kinds of things, you know, it really bridges disciplines: from product design, to design fiction and visual graphics. Science fiction movies are [often] the best inspiration for building these kinds of objects. The problems that come up – visual problems in augmentation – have been explored in science fiction for many, many years, visually, but have never been put into action. They couldn’t put these things into action, but they have been explored – like, what color schemes would I use when I overlay [information] onto the physical world?
So, computer graphics for science fiction are a wonderful source of inspiration for these future interfaces. But, not so much from a user interaction aspect, because the interfaces built in science fiction are all dysfunctional.
The experience of actually interacting with these hypothetical interfaces?
Yeah, I mean the thing is – this whole augmented reality – the way that we see it with the Reality Editor – is new. Nobody has done it. It’s a new space. We need to draw inspiration from somewhere.
One inspiration is augmented reality as we see it in science fiction movies. But, another aspect is, how do you create an [augmented reality] user experience where an interface is simple and useful to a user? And that is completely new territory – there’s no reference for it. I could name you a few scientific papers about it – HCI papers, where they do augmented reality – but none of them really look at the design side of these kinds of interfaces. So this is really something where new names will come up. It will be interesting.