Designing an attractive product is important. Designing a useful product is critical. You can create something that looks great, but if it’s not useful – then what’s the point?
Products are built with the best intentions, but too much focus is often given to how they look too early in the process. Jumping head first into making something look great is tempting; however, over time this means you end up with a product that looks nice but is full of UX and code issues that, if not addressed, can lead to a failed product.
For all the effort you put into making it beautiful early on in the process, the end result could be pretty ugly for your users.
Design is a process
Before you even put pen to paper, you need to know who you’re designing for, what you’re designing and why you’re designing it, then build and test prototypes. You shouldn’t even think about nudging pixels around before this.
At Bipsync our iterative design process ensures we never start making something beautiful until we know it will be useful and usable. We start with an idea for the new feature or improvement, which is always something we know our users need – whether that’s from planned requirements in our product roadmap, user feedback or usage metrics. This leads to prototyping, testing and then when we’re confident it’s a viable solution to the specific requirement, we’ll begin creating visuals.
Case Study: Improving the Bipsync Web Clipper
- Identify the requirement
The Bipsync Web Clipper enables users to quickly create research notes by clipping web pages. It’s widely used; however, through reviewing metrics and gathering feedback from customers, we identified that the feature could be made more efficient by enabling users to tag content without having to leave the web page they were reading.
- Find a solution
To avoid taking the user away from their current screen and into the Bipsync environment, we came up with the idea of a clipper modal that would unobtrusively display in the browser window once the user had clicked their clipper button. The modal could then display all the content needed for the user to clip, and we could use our Machine Learning model to auto-tag based on key words within the content.
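To make the auto-tagging idea concrete, here is a minimal, hypothetical sketch of suggesting tags from keywords found in clipped content. Bipsync’s actual feature uses a trained Machine Learning model; the `suggest_tags` function and the example tag map below are illustrative assumptions, not the real implementation.

```python
# Hypothetical sketch: suggest tags based on keywords found in clipped
# page content. The real Bipsync clipper uses a Machine Learning model;
# this simple matcher only illustrates the underlying idea.

def suggest_tags(content: str, tag_keywords: dict[str, list[str]]) -> list[str]:
    """Return tags whose keywords appear anywhere in the clipped content."""
    text = content.lower()
    return [tag for tag, keywords in tag_keywords.items()
            if any(keyword.lower() in text for keyword in keywords)]

# Example: map each tag to the keywords that should trigger it.
tags = {
    "earnings": ["quarterly results", "revenue", "eps"],
    "energy": ["oil", "renewables", "solar"],
}
print(suggest_tags("Q3 revenue beat estimates on solar demand", tags))
# → ['earnings', 'energy']
```

A model-based approach improves on this by learning associations from previously clipped and tagged content rather than relying on a hand-maintained keyword list.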
- Build a prototype
After we laid out the requirements in full, the development team quickly built a prototype. Very little input was needed from the design team at this stage other than informal conversations about placement. This meant we could quickly roll out a working prototype onto our QA testing platform.
- Test it, and test it some more
We regularly use the Web Clipper internally, so it was an easy feature to test, with the team using the prototype for around one week. This enabled us to quickly identify a number of elements that would need tweaking before delivering it. Most of this involved feedback to the user to let them know their content had been saved successfully, as this can vary depending on the amount of content being clipped.
- Make it attractive
Once the improvements were defined, it was finally time to refine the design and apply real visual styling. This involved a number of mock-ups in visual design software that could be shown to the development team, who could then quickly turn these into a working prototype. This prototype was of course scrutinized by everyone in Bipsync and further refined, and refined a little more. After a few days we were ready to deliver it.
- Deliver it
It is relatively quick to make this sort of feature live. That’s one of the benefits of working with cloud-based software: releases are pushed without disrupting clients’ working time. We also work on a weekly release cycle, meaning any improvements, big or small, get out to our users as soon as we know they are ready. This means we always gain feedback and metrics quickly and are in the best position to improve the feature if needed.
- Get user feedback
User feedback has been great. We’ve made some tweaks, generally to how the clipper works behind the scenes, and these have improved its overall performance. Now that we’ve built our Machine Learning model into the clipper, we’re constantly training it to recognize content. This will always be a work in progress: the more content that is brought into Bipsync via the Web Clipper, the more intelligent the model becomes.