Delivering powerful, intelligent voice interface technology for mobile, wearables, and the Internet of Things.
Customised to understand your application and powered by leading-edge artificial intelligence.
Capito Systems brings together an elite team of technology visionaries, industry-leading natural language processing & machine learning engineers, world-class research scientists in speech & dialogue systems, and product specialists.
Our mission is to continue raising the bar in delivering exceptional contextual spoken language understanding that consumers now expect from voice-driven applications.
All of our products, listed below, are easy to integrate into any application using our APIs.
Many applications are difficult to operate because they rely on deep navigation menus or require form-filling to reach the desired content.
This makes them inconvenient to use in exactly the situations where that content is most needed: comparing prices whilst out shopping, finding your departure platform at a train station, or operating machinery.
Our contextual voice control overcomes these limitations, and removes the constraints of menu-based navigation.
Searching product catalogues, timetables, and large data sets is time-consuming and often frustrating because most in-app search today is keyword-based, requiring precise search terms to yield results.
For businesses with consumer-facing apps this can mean lost revenue. Our natural language search understands the semantics of naturally spoken, or typed, search phrases.
It is fast, intuitive and consumer-friendly.
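To illustrate the difference between keyword matching and semantic understanding, here is a deliberately simple sketch that parses a natural search phrase into structured intent. Capito's actual semantic models are machine-learned; the regex rules, field names, and example phrase below are purely illustrative assumptions.

```python
# Toy illustration: turn a natural search phrase into a structured query,
# rather than treating every word as a literal keyword.
# All patterns and field names here are illustrative, not Capito's API.

import re

def parse_search(phrase):
    """Extract a product term and an optional price ceiling from a phrase."""
    query = {"term": None, "max_price": None}
    m = re.search(r"under\s+£?(\d+)", phrase)
    if m:
        query["max_price"] = int(m.group(1))
    # Crude term extraction: strip command words and the price clause.
    term = re.sub(r"\b(show me|find|cheap|under\s+£?\d+)\b", "", phrase)
    query["term"] = term.strip()
    return query

print(parse_search("show me cheap trainers under £50"))
```

A keyword engine would search for the literal tokens "show", "cheap", and "under"; the structured query above captures what the user actually wants.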
Task-driven applications such as call-centre support, self-diagnosis healthcare, and virtual assistants can often only derive the user's intent from a sequence of interactions, i.e. a dialogue. This dialogue may be conversational or multi-modal.
Our dialogue manager (in development) will facilitate intelligent interactions to bring human-like conversation to applications.
We are constantly evaluating the industry's leading ASRs to ensure the best fit for your application.
We are also conducting research into ways of improving ASR accuracy.
We have strict requirements in terms of word error rates (WER) and response time to ensure we deliver the best possible user experience. No ASR is perfect (yet), so we apply our own word error correction process before semantic processing.
The correction technique we apply is application-specific and uses machine learning that is trained on the type of language (i.e. phrases, slang, dialect, nicknames, named entities etc.) expected by the application or domain. This makes it possible for us to deliver a very high degree of semantic understanding accuracy.
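As a rough sketch of the idea behind application-specific word error correction, the following maps likely ASR mis-hearings onto an in-domain vocabulary using string similarity. Capito's production technique is a trained machine-learning model; the vocabulary, threshold, and function names here are illustrative assumptions only.

```python
# Illustrative sketch: snap out-of-vocabulary ASR output onto the
# closest in-domain term before semantic processing.
# The vocabulary and cutoff are toy assumptions, not a real model.

from difflib import get_close_matches

# Tiny example vocabulary for a rail-travel domain (illustrative).
DOMAIN_VOCAB = ["platform", "departure", "arrival", "edinburgh", "waverley"]

def correct_transcript(words, vocab=DOMAIN_VOCAB, cutoff=0.8):
    """Replace each word with its closest in-domain match, if close enough."""
    corrected = []
    for w in words:
        match = get_close_matches(w.lower(), vocab, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else w)
    return corrected

print(correct_transcript(["departuer", "platfrom", "for", "edinburgh"]))
```

A real system would condition on context and learned error patterns rather than edit distance alone, but the principle is the same: domain knowledge repairs the transcript before meaning is extracted.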
*Automatic Speech Recognition, also known as "speech-to-text".
We have developed our own industry-leading spoken language understanding technology, based on advanced machine learning algorithms and computational linguistics.
We have developed efficient workflows which enable us to create bespoke semantic understanding models for specific applications and domains.
We are also conducting research into natural language semantic processing as we constantly strive to make improvements and advance 'prior art'.
Our platform delivers high performance at scale.
Performance and scalability are critical in delivering a commercially viable service to applications. We measure and log the performance of each system component traversed by each interaction with our system, from the device through to the completion of semantic processing in our cloud.
We aim for a sub-2-second end-to-end response time, and often achieve under 1.5 seconds. Our cloud service also supports dynamic (elastic) scaling to adapt to different load conditions, so that performance is not affected under load.
Our technology can be easily integrated into applications via our APIs.
Our APIs enable us to capture user interaction data (anonymised) across voice, text and touch inputs. This data enables us to build detailed user profiles on behalf of our clients which can be used to provide a personalised app experience. For example, in eCommerce this forms the basis of recommendation engines.
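To show the shape such anonymised capture might take, here is a minimal sketch of building an interaction event with the user identifier one-way hashed before it leaves the app. The field names, hashing scheme, and salt are assumptions for illustration, not Capito's actual API.

```python
# Illustrative sketch: anonymise a user identifier before logging an
# interaction event. Field names and the hashing scheme are assumptions.

import hashlib
import json

def make_event(user_id, modality, utterance, salt="demo-salt"):
    """Build an interaction event with the user identifier anonymised."""
    anon_id = hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]
    return {
        "user": anon_id,       # one-way hash: no raw identifier is stored
        "modality": modality,  # "voice", "text", or "touch"
        "utterance": utterance,
    }

event = make_event("alice@example.com", "voice", "which platform for edinburgh")
print(json.dumps(event))
```

Events in this shape can be aggregated per anonymised user to build the profiles that drive personalisation and recommendations.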
Cloud-based natural language understanding
Web dashboard analytics
These short videos showcase our intelligent voice control technology applied to two different mobile app contexts.
Intelligent voice control takes the convenience of this type of app to a different level. Conventional train apps are not much use on the go when you are rushing to catch a train. Imagine you have just one minute to get to the departure platform but you don't know which platform to head for. Now you can just ask the app!
Sports betting apps are amongst the most complex of all apps to use: there are many deep navigation pathways to find your market and runner, and then you have to input a stake before confirming your bet. With our intelligent voice control, navigation, search and bet slip completion are just one step away, regardless of where you are in the app. And voice control adds a new dimension to in-play betting, making bet placement possible within just a few seconds.