easyDialog is a consulting service for conversational AI, voice technology and everything around it. We design, build and improve voice bots and chatbots on all platforms, including Alexa, Line, Nuance and Bixby, combining state-of-the-art Deep Learning algorithms with years of experience in handcrafted fine-tuning.
Bots let us talk to machines, websites, cars, phones, household appliances, or databases as if they were human. Just “speak and listen” to the device – no more clumsy typing, no more staring at small screens, no more searching for the right buttons.
Conversational AI is definitely something for your products, services and ideas, too – we will show you exactly how.
Christoph Neumann (founder of easyDialog) introduces Bixby in German to DJ Koh, CEO of Samsung Mobile, at the IFA consumer trade fair, Sept. 2018
Introducing an AI bot is a big decision with a high entry threshold – things will change drastically. easyDialog is independent of, and not connected to, any specific voice platform. We will help you rethink hardware, software and user experience from your specific perspective first, and then choose the configuration that fits your product best.
Our experts have designed, built and launched AI solutions on Alexa, Google, Bixby, Siri, Nuance and other platforms, for customers in a wide range of industries. They will sit down with you from start to finish to make sure your AI interface works well, too.
From designing an embedded voice control for cars in 14 languages, and localizing the Bixby voice assistant in German, to adding fun to your daily laundry routine with an Alexa skill for washing machines or creating the industry’s first natural language understanding solution in an Asian language – we have successfully deployed groundbreaking and innovative voice applications.
A conversational AI, AI interface, AI bot or AI dialog system is either a voice bot or a chatbot. Both types of AI bot allow humans to communicate with a machine, software or service in interactive, natural language. That means that 1) you can talk to the bot in full, natural sentences just as if you were talking to a human – you don’t need to think about search terms or language style – and 2) the bot will talk back to you, so you end up having a two-sided conversation. Compare this to other interfaces like static websites, where you have to find the information yourself, or to the buttons and knobs in a new car. Compare this also to waiting on the phone for what feels like an eternity to finally reach a call center agent – AI bots respond immediately (and are never stressed).
Voice control, voice bot, voice assistant, voice-enabling, speech recognition or voice user interface – all these terms refer to the same process: telling a machine or a computer what you want, using your own voice. Alexa skills, Google actions and Bixby capsules are different brand names for voice applications hosted on voice platforms like Alexa, Google or Bixby. A voice application is similar to a mobile app on a smartphone; it can be an interface to get information, do online shopping, play a game, book a trip, etc. – only that the mobile app’s “touch and see” is replaced by “talk and listen”.
Any voice system has two essential modules. The first module, the ASR (automatic speech recognition), converts the sound of your words into written text. The second module, the NLU (natural language understanding), then converts the written text into meaning (for a machine, “meaning” typically means one or more instructions to do things). Most voice platforms offer a generic ASR that cannot be changed by individual developers. Their NLU is modified indirectly by supplying sample utterances.
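To make the NLU step concrete, here is a minimal sketch of intent matching against sample utterances. The intent names, sample utterances and scoring method are purely illustrative assumptions, not how any particular platform (Alexa, Google or Bixby) implements its NLU; production systems use trained statistical models rather than word overlap.

```python
# Hypothetical NLU sketch: map recognized text (the ASR output) to an intent
# by comparing it with developer-supplied sample utterances.
# Intent names and utterances below are invented for illustration.

SAMPLE_UTTERANCES = {
    "StartWash": [
        "start the washing machine",
        "begin a wash cycle",
        "run a quick wash",
    ],
    "CheckStatus": [
        "how long until the laundry is done",
        "is the wash finished",
        "check the wash status",
    ],
}

def classify_intent(text):
    """Pick the intent whose sample utterances share the most words with the input."""
    words = set(text.lower().split())
    best_intent, best_score = None, 0
    for intent, samples in SAMPLE_UTTERANCES.items():
        # Score each intent by its best-matching sample utterance.
        score = max(len(words & set(s.split())) for s in samples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify_intent("please start a quick wash"))  # -> StartWash
```

The point of the sketch is the division of labor the paragraph describes: the developer never touches the ASR, and shapes the NLU only by supplying example sentences per intent.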
“easyDialog has built convincing customized dialog systems to engage voice assistants like Amazon Alexa”
“Collaborating with you was a really nice opportunity to feel German engineering”