The world we see is a treasure trove of information ready to be unlocked. Blippar is an app that identifies things around you and surfaces content users might find useful in their context.
Sketching, Concept Exploration, Video Prototyping, Interactive Prototypes, Content Prioritisation, Mockups, Guerrilla Testing, Visual Design
Pen & Paper, Sketch, Principle, Invision, Motion 5, Final Cut Pro
UX Designer in the Human Interface Team of three.
Manuel Colom (UX Lead), Sofia Dellera (Junior Designer)
Blippar is a leading augmented reality (AR) company. The company is renowned for creating consumer AR experiences for big clients such as Coca-Cola, Max Factor and Emirates.
For most of its history Blippar operated on an agency model; now it strives to become a consumer product in its own right, one that millions of people use each day.
Blippar's tech stack includes a proprietary 3D engine used to deliver cross-platform AR experiences, a knowledge graph with content on several million entities, and a computer vision layer capable of recognising millions of objects in the world.
- Bring all three elements together in one unifying experience, allowing a user to point their smartphone at something and have it recognised, display content in the AR space and allow for further exploration.
- The solution should be able to accommodate user profiles (a feature to be added later that would make users recognisable to the Blippar app).
- The app should be built upon the existing tech stack, work across iOS and Android, and be designed and developed within 3 months.
Who is it for?
The goal was to create a product that might be useful to anyone looking to identify something they see and gather information about it, a Google for the visual world if you will. However, we worked off the assumption that the primary users for such a product would initially be techy early adopters, Millennials and Gen-Z.
What We Knew
Prior research gave us the following insights to keep our ideas grounded.
- People are generally overwhelmed by large amounts of content
- Don’t state the obvious
- Optimise for 2 min (max) engagement
- People don’t want to have endless exploration on a mobile phone
- Personalisation is important
Working in AR is an interesting challenge in that its interaction conventions are yet to be defined. Regular interaction design principles still apply, of course, but there were many key problems that had not been tackled before and required validation. These details needed to be defined quickly.
How willing are users to consume content in the AR space? How will the user receive feedback from the recognition process? How should visual recognition errors be handled?
Interaction with content in AR can be challenging: how can we avoid users having to point their phones at things in uncomfortable positions, whilst maintaining a strong AR element throughout?
The second challenge was trying to validate these ideas. In UX it’s common to mock up a UI and validate it with a quick test in Invision. With AR everything happens in the camera, so it can be very difficult to produce prototypes without the time-consuming process of taking them into code.
This made progressively validating ideas even trickier under the time constraints. Instead we logged assumptions during the design process and worked with data scientists later to validate or disprove them.
Presenting content in AR was yet another challenge. It had to be short, relevant, displayed in a way that would accommodate errors, and allow users to dive deeper should they wish.
We had plenty of ideas that we felt could work but the final challenge was finding one that could work within the technological constraints of the Blippar AR engine.
It’s very easy to create a picture in your mind of what content in AR might look like. Something like the above maybe?
However, current technology is still limited; the difficulty of tracking and localising an object within the camera view meant many of our ideas were not yet technically feasible.
Through this process we ended up with a rough outline for a journey that would allow for varied content density and meet the project goals.
From stakeholder feedback we were asked to introduce more layers of content whilst remaining in the camera view. The cards were just a little flat and boring. We introduced the idea of an AR Heads Up Display (HUD) for specific content verticals that might benefit from this.
With the aim of resonating with our target users, we also used the opportunity to introduce a number of entertaining models that could serve a purpose in the AR space, such as the current weather conditions over your head, promotional materials, and characters for treasure hunts.
Creating A Shared Vision
We found the best way to get everyone onboard with our design thinking without the use of a clickable prototype was to create a video prototype illustrating the vision. We used a number of video post-production tools to create the desired front-end user experience.
It was useful to get feedback as we went and would allow us to iterate further. It also acted as a basis for stakeholder sign off on ideas that were feasible.
The solution we created was a card-based system. The user points their camera at an object and is presented with a small card identifying it. Beyond this, a number of content cards would display bite-size information. The user could tap on any of these and continue to learn about the object on a separate details page.
Cards would stack below the screen into the history, allowing a user to retrieve information without having to hold their arm up to the object they were scanning.
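The card-and-history interaction described above can be sketched as a simple data model. This is an illustrative sketch only; the names (`RecognitionCard`, `CardHistory`) are assumptions, not Blippar's actual API.

```typescript
// A recognised object becomes a card with bite-size content.
interface RecognitionCard {
  entity: string;   // what the vision layer identified
  summary: string;  // bite-size content shown on the card
}

// Cards stack into a history so users can review results
// without holding their phone up to the object.
class CardHistory {
  private cards: RecognitionCard[] = [];

  push(card: RecognitionCard): void {
    this.cards.push(card);
  }

  // Most recent card first, mirroring the stack below the screen.
  recent(limit: number): RecognitionCard[] {
    return this.cards.slice(-limit).reverse();
  }

  get count(): number {
    return this.cards.length;
  }
}
```

The key design point is that recognition results persist past the live camera moment, so content can be consumed comfortably after scanning.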
Learnings and Future Iterations
Our solution met all the project requirements; however, there were a number of assumptions we didn’t quite get right.
- General feedback has been that cards trigger too easily. It can be quite overwhelming to be pointing the app at something and then have a stream of information displayed.
- People enjoy the novelty of experiencing content in the AR space.
- Users aren’t particularly interested in exploring the deeper information page of an object.
- Trying to replicate a number of flat UI elements such as Table Views in a 3D engine creates a rift between user expectation and technical implementation.
- Attempting to recognise millions of objects causes a large number of false positives in general use.
- Scale back the number of recognisable objects to improve performance on those with key use cases and reduce the likelihood of false positives.
- Introduce a buffer that requires user input before the app starts the recognition process, rather than triggering it automatically.
- Leverage new AR technologies such as Apple’s ARKit to allow HUDs to track to their point of origin, removing the need for information cards that end up in a history stack.
- Don’t attempt to keep users in the app past the AR experience and hand them off to the web should they want to learn more.
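The first two fixes above, user-initiated recognition and fewer false positives, can be sketched together as a small controller. This is a hedged illustration of the idea, not an implementation: the names and the 0.8 confidence threshold are assumptions for the sake of the example.

```typescript
// A candidate match from the vision layer.
interface Match {
  label: string;
  confidence: number; // 0.0 to 1.0
}

class RecognitionController {
  private armed = false;

  constructor(private minimumConfidence = 0.8) {}

  // The user taps to start recognition instead of it firing automatically.
  userDidTapScan(): void {
    this.armed = true;
  }

  // Accepts a match only when the user asked for one and the vision
  // layer is confident, reducing the overwhelming stream of cards.
  accept(match: Match): Match | null {
    if (!this.armed || match.confidence < this.minimumConfidence) {
      return null;
    }
    this.armed = false; // require a fresh tap for the next scan
    return match;
  }
}
```

Gating recognition behind an explicit tap addresses the "cards trigger too easily" feedback directly: nothing appears until the user signals intent, and each scan yields at most one result.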
This project was a particularly challenging one; in many ways it was almost impossible to implement what was asked of us in the timeframe given.
Defining an appropriate MVP to test our assumptions, and launching that, would have produced a much more focussed and reliable product that we could have iterated on more quickly based on insights.
I learnt that in the right scenarios, going high-fidelity quickly can be extremely helpful when communicating ideas to team members. It also encourages detailed thinking sooner, lowering the chance of key design details being overlooked because too much time was spent wireframing. In this project we jumped straight from whiteboard sketches into high-fidelity mockups. This approach may not be right for every project, but it really helped with this one.