In this interview, we meet Andrew Bruce, R&D lead at Screenmedia. He talks us through his recent work and discusses the rise of digital assistants.
AB: I’m Andrew Bruce and I’m an Experience Design Lead at Screenmedia, which means I lead projects end to end – quite often from a blank sheet of paper through to final launch into the wild and beyond. My core area of expertise is conversational design, which means I lead a lot of our work across voice, chat, and AI.
Besides working on client projects, I lead our R&D programmes internally. Continuous innovation is a key part of our culture, and we invest around 20% of our profits into internal initiatives and ‘playing’ with new technologies; not everyone can say they get that freedom to explore new platforms, and it’s great to work with a really talented group from across the business on that.
Day to day, things are incredibly varied for me, which I like. I touch most areas of a delivery programme, but I particularly enjoy consulting with clients at the start of a project – in Discovery, Research, and Ideation – helping them understand a new technology, its pros and cons, and how to apply it to their business; bringing examples from the wider industry; and then building prototypes to demonstrate the value of initial ideas. I’m really fascinated by the positive impact that technology can have and get quite excited when a real problem can be solved with an interesting and engaging piece of tech.
AB: Our goal is always to be prepared, knowledgeable, and experienced in a new technology before our clients need us to be. To do that, we set aside an annual budget for researching and developing emerging technologies, which has helped us consistently offer industry-leading advice and guidance on these platforms, build deep expertise well ahead of demand, and really demonstrate what new technologies can offer.
As part of that R&D, we do a lot of industry and solution research, benchmarking, prototyping, and ‘playing’ – with the aim of proving the use cases and experiences that best lend themselves to a platform. For example, in 2008 we were very early to experiment with mobile, and in doing so we really tested the platform’s capabilities. This work ultimately led to us designing and building our own product venture: a health and fitness GPS tracker called Sprint GPS.
This same method has led to best-practice responsive web design, cloud solutions, and wearables. In recent years we were also one of the first agencies in the UK to launch a product for voice – mainly due to the R&D work we did early on to understand this new paradigm, test its merits and limitations, and craft the most rewarding interactions for users.
AB: My focus for a long time was on voice technology, but over the past few years we’ve built on that experience and broadened it out a bit more to include chatbots. There’s a lot of attention on chatbots, and their fidelity and applications are pretty diverse. Our interest is in going beyond simple chat conversations to create more well-rounded, fully featured, cross-channel, and functional ‘digital assistants’ that actually deliver and do things, rather than just being pure content.
Recently, I’ve been looking at how to take the concept of a digital assistant and make it work a bit smarter. Our main R&D focus last year was using the Microsoft Bot Framework to see if we could put a smart layer between a company’s data and its employees. Our clients are dealing with system proliferation and are spending a lot of time finding and stitching together data from lots of different back-end systems and tools. That’s been exacerbated by the pandemic, so we wanted to see if a conversational interface could be designed and configured to do much of the heavy lifting for them, and built a suite of prototypes accessible through Microsoft Teams.
It comprised an overarching interface and personality layer called Osborne (after the street our studios are on) and included multiple plug-and-play ‘taskbots’ focused on specific tasks and integrations – from basic calendar controls to identifying topic authorities, automated job tasking, internal database searching, and more.
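The plug-and-play taskbot idea can be pictured as a minimal dispatcher sketch in plain Python. This is a hypothetical illustration of the pattern, not Screenmedia’s actual Bot Framework code: the bot classes, keywords, and replies are all invented for the example.

```python
# Hypothetical sketch of a plug-and-play 'taskbot' dispatcher: an
# overarching assistant ("Osborne") routes each utterance to whichever
# registered taskbot claims it can handle the request.

class TaskBot:
    """Base class for a single-purpose taskbot."""
    keywords: tuple = ()

    def can_handle(self, utterance: str) -> bool:
        text = utterance.lower()
        return any(kw in text for kw in self.keywords)

    def handle(self, utterance: str) -> str:
        raise NotImplementedError


class CalendarBot(TaskBot):
    keywords = ("calendar", "meeting", "schedule")

    def handle(self, utterance: str) -> str:
        return "Here are your meetings for today."


class SearchBot(TaskBot):
    keywords = ("find", "search", "look up")

    def handle(self, utterance: str) -> str:
        return "Searching the internal database..."


class Assistant:
    """Overarching interface/personality layer that owns the taskbots."""

    def __init__(self, name: str, taskbots: list):
        self.name = name
        self.taskbots = taskbots  # plug-and-play: add or remove bots freely

    def respond(self, utterance: str) -> str:
        for bot in self.taskbots:
            if bot.can_handle(utterance):
                return bot.handle(utterance)
        return f"Sorry, {self.name} can't help with that yet."


osborne = Assistant("Osborne", [CalendarBot(), SearchBot()])
print(osborne.respond("What's on my calendar?"))
```

Because each taskbot is self-contained, new integrations can be dropped into the list without touching the personality layer – which is the appeal of the architecture described above.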
Now, we’re continuing to refine the project, but many of our learnings are already bleeding into our client projects.
What’s more, several of the technologies and concepts we’re working on have started to appear in Microsoft products, like Project Cortex and Microsoft Viva, which for us validates that these were good challenges to tackle.
The key part for me was giving people smarter access to data and insights through a conversational interface. Rather than just taking a piece of data and repeating it back to the user, it’s about pulling data from multiple sources to provide a correct, contextual answer to the user’s query.
AB: Yes. I am currently working with several enterprise clients on pilot projects. They’ve got big goals and ambitions, and are keen to find out how far they can push conversational technologies to support their customers. I’m also working on a couple of voice projects at the moment, one of which is actually our own product venture aimed at helping older people stay active at home. We're in the later stages of that project and have a fantastic content partner that we're working with to put that together. This will be launching in the next couple of months.
We’re hired by our clients for consultancy a lot, and although we have deep experience in conversational interfaces some of my upcoming projects are actually going more towards augmented reality and personalisation, which for me is new territory and I’m looking forward to getting stuck into them.
AB: Yes, that’s what we’re seeing more and more of in our work, and what we’re keen to push more towards, to create multi-channel products, rather than individual isolated and siloed ones. Our past projects have been focused either entirely on voice assistants (like Alexa and Google Assistant) or chat interfaces via the likes of webchat or Facebook Messenger. The ‘brains’ of an assistant are built independently of the interface, to an extent, although there are unique use cases for each channel, and we wouldn’t simply clone the experience for chat directly to voice or vice versa.
Our focus is on delivering value to the customer, and a key part of that is understanding their context and their choice of channel - some things make more sense through voice while others are best suited to chat.
Even for a single assistant, the way we handle an individual interaction or request will differ slightly depending on what channel it comes through. This type of ‘assistant’ approach has great longevity too as new interactions or platforms are developed that can tap into that smart engine.
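One way to picture this – a shared ‘brain’ whose answers are shaped per channel – is a single answer engine with channel-specific renderers. A minimal sketch with invented data (the bus-times example and channel names are illustrative assumptions, not a real integration):

```python
# One 'brain' produces a structured answer; per-channel renderers shape
# it for the medium: voice keeps it short and speakable, chat can show
# a scannable list the user can come back to later.

def next_buses(stop: str) -> dict:
    # Stand-in for the real data lookup behind the assistant.
    return {"stop": stop, "times": ["08:05", "08:17", "08:29"]}

def render(answer: dict, channel: str) -> str:
    if channel == "voice":
        # Voice: one specific, identifiable fact, not a data dump.
        return f"The next bus from {answer['stop']} is at {answer['times'][0]}."
    if channel == "chat":
        # Chat: the full list, laid out for scanning.
        lines = "\n".join(f"- {t}" for t in answer["times"])
        return f"Buses from {answer['stop']}:\n{lines}"
    raise ValueError(f"unknown channel: {channel}")

answer = next_buses("High Street")
print(render(answer, "voice"))
print(render(answer, "chat"))
```

The lookup is written once; only the thin rendering layer differs per channel, so a new platform can tap into the same engine later.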
AB: Generally speaking, the projects we work on usually boil down to two things: either saving a business time (i.e. money) or improving the customer experience, although these aren’t the only scenarios we’ve worked in. In the first instance, it’s often about taking the burden off frontline support or HR staff, as well as offering services out of hours or to people who wouldn’t typically make contact through traditional means and otherwise wouldn’t be served.
We’ve helped clients automate both outward-facing customer services and internal employee services, allowing human staff to focus on higher-value interactions.
The other big use of digital assistants is improving the customer experience through faster service and better self-service – people being able to get the answers they want, faster. This is kind of a no-brainer and works for both employees and customers. Better self-service and cross-channel access are especially vital for organisations targeting younger, more digitally native generations, and they’re quickly becoming expected.
AB: Voice excels in hands-free scenarios: when people are driving, rushing around in the morning on the way to work, or in the kitchen with their hands covered in flour or oil. Voice is also very good in command- or task-focused scenarios – for example, Google Home and Alexa dominate the smart home. It also has interesting applications in enterprise settings where information is vital and hands are occupied; think of a surgeon checking a patient’s vital signs, or a field engineer capturing survey details on site.
Voice is also good for retrieving specific, identifiable pieces of information, like finding a bank balance or the time of the next bus from your nearest stop. A bad use case for voice would be asking it to read out all of your transactions for the past month, or the times of every bus that day. Those are volume requests and make more sense on a screen, where they’re more scannable.
Chat is good for asynchronous conversations – you can leave a chat and come back to it pretty easily. It’s also good for the persistence of information: you can check back for details later on.
Chat is also good for customer service and getting straight to what you want to find. There’s a pattern of people Googling a brand name plus the piece of information they’re looking for, then jumping straight from the search results page to a sub-page, effectively bypassing the homepage.
AB: One key thing that companies who have never worked with conversational interfaces before should know is that they do require a bit of a shift in thinking. When you’re designing for web or mobile, you can control the journey a little more because you present users with options, and they choose from those options.
With conversational interfaces, users can be more spontaneous in their journey, saying anything at any time, and you need to be able to accommodate it.
They could be halfway through what you think is a logical conversation, and then they may suddenly change the topic. They might ask about something else entirely, or they may ask for help, or they may ask you to repeat the last thing that was spoken, and then potentially jump back to the previous conversation thread. It’s a challenge for clients, but we’ve been doing this for years and have a process to facilitate that shift.
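One common way to handle that kind of spontaneity is a small dialog stack: an interruption is answered in place, and an abandoned topic is popped so the previous thread resumes. Below is a simplified, hypothetical sketch of the idea – the intents, keyword matching, and canned replies are invented for illustration; a real assistant would use a proper NLU model.

```python
# Simplified dialog manager: users can interrupt with 'help' or
# 'repeat', switch topics entirely, or abandon a thread and fall back
# to the previous one.

class DialogManager:
    def __init__(self):
        self.stack = []     # active conversation threads; last = current
        self.last_reply = ""

    def _reply(self, text: str) -> str:
        self.last_reply = text
        return text

    def handle(self, utterance: str) -> str:
        text = utterance.lower()
        if text == "repeat":
            # Say the last thing again without disturbing the stack.
            return self.last_reply or self._reply("I haven't said anything yet.")
        if text == "help":
            # Help is answered in place; the current thread stays active.
            return self._reply("You can ask about balances or payments.")
        if "balance" in text:
            self.stack.append("balance")
            return self._reply("Your balance is £120.")
        if "payment" in text:
            self.stack.append("payment")
            return self._reply("Which account should the payment come from?")
        if text == "never mind" and self.stack:
            # Abandon the current thread and fall back to the previous one.
            self.stack.pop()
            if self.stack:
                return self._reply(f"Okay, back to your {self.stack[-1]} question.")
            return self._reply("Okay. What else can I do?")
        return self._reply("Sorry, I didn't catch that.")
```

For example, a user can start a balance query, switch to a payment, ask for help mid-flow, then say "never mind" and land back on the balance thread – the jump-back behaviour described above.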
AB: Yes, during the pandemic the usage of voice assistants went up by about 70%. Stuck at home, people have not only used them more but, interestingly, have started exploring more with them. They began to realise more and more what these assistants could do, especially with third-party brand integrations. That’s interesting for me, as it looks like new behaviour patterns have formed.
AB: Companies are looking at automation not only to save time and money but to improve the customer experience. I think voice and chatbots were much hyped over the past few years, but if you look in the right places, they’re starting to approach maturity now. People’s views of digital assistants are maturing and becoming more realistic as the good use cases have started to filter through.
Beyond the appearance of better, more productive, and focused assistants, I think we’ll also start to see proactive assistants appear – ones that reach out to and help users. If you’re trying to change user behaviour for the better, a proactive assistant could be more effective in achieving this.
AB: It can be quite cheap to test these technologies compared to many others. Building quick and effective prototypes to put in front of users is simpler than you might think, as well as a cost-effective way of getting quick, low-risk feedback. For many of our clients, it’s difficult to know how to get an idea off the ground, or even where their brand or business might offer value in this space. My advice would be to start small and work up. We’re experts at finding and proving the right opportunities, and with a prototype you can bring stakeholders along with the vision much quicker than by describing the experience and what it might do. Why not have a bit of fun with it, too?