Innovating companies by visualizing satisfaction from voice
UNIQPLUS quantifies people's emotional states through digital solutions and visualizes corporate customer satisfaction in areas such as customer support, providing the services needed to contribute to the construction of a more comfortable society.
Unix Computer Trading Ltd
08/03 Emotion-flagged data and research sample data have been generated.
01/03 Succeeded in extracting training data for voice sentiment analysis at a rate of 1,200 hours per month.
08/02 Our website has launched.
AI that analyzes emotions from voice
Since the launch of Amazon Echo and Google Home, voice analysis AI has attracted attention worldwide. The most important factor in developing voice emotion analysis AI is a large amount of voice data that includes emotion information. Through our consulting service, we research and develop unique algorithms, aiming to provide a highly accurate, low-priced API that can be used on any device. This technology can be applied to companies in all fields, such as call centers, telephone sales, and counseling.
How to use the VEA algorithm
We provide better customer experience solutions
The VEA algorithm performs three main functions. The primary value for users is the ability to visualize customer satisfaction and status directly from conversational voice. This eliminates the risk of customers giving perfunctory answers, as can happen with CSAT surveys.
Second, because the four emotions of joy, sadness, anger, and normal can be detected in real time and pinpointed, it is easy to see at which seat in a call center trouble is occurring, or whether a customer felt angry toward an agent. Improvements that were previously invisible become easy to grasp, which greatly helps improve on-site skills.
Furthermore, if transitions in stress can be detected, personnel can be evaluated appropriately, based on whether customer support was properly provided and on the stress reduction rate achieved by each agent.
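The per-agent stress reduction rate mentioned above could, for example, be computed as the relative drop in a caller's stress score between the start and end of a call. The formula below is an illustrative assumption, not the product's actual metric.

```python
def stress_reduction_rate(stress_start: float, stress_end: float) -> float:
    """Relative drop in a caller's stress score over one call.

    NOTE: illustrative metric, assumed here as
    (start - end) / start; not UNIQPLUS's actual formula.
    """
    if stress_start <= 0:
        raise ValueError("stress_start must be positive")
    return (stress_start - stress_end) / stress_start

# Example: a caller's stress score drops from 0.8 to 0.2 during the call.
rate = stress_reduction_rate(0.8, 0.2)
print(f"{rate:.0%}")  # 75%
```

Averaging this rate over an agent's calls would give one candidate input for the kind of personnel evaluation described above.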
VEA algorithm construction flow
Derivation of emotional relevance from 256 types of voice codes
The voice data is analyzed and classified into 256 types of codes based on features such as pitch and tone.
Emotion flags reported by the subjects are associated with the voice data.
Multifaceted analysis, including clustering and machine learning, derives the correlation between the coded data and the emotion flags.
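The three steps above, coding the audio into features, attaching emotion flags, and deriving correlations, can be sketched as a minimal nearest-centroid classifier. The feature vectors below are synthetic stand-ins (two values per sample rather than 256 voice codes), and the approach is only one simple instance of the kind of analysis described; it is not UNIQPLUS's actual algorithm.

```python
import math

# Synthetic "voice code" features (e.g. pitch in Hz, tone energy),
# each paired with the subject's reported emotion flag.
labeled = [
    ([220.0, 0.9], "joy"),     ([230.0, 0.8], "joy"),
    ([110.0, 0.2], "sadness"), ([105.0, 0.3], "sadness"),
    ([300.0, 1.5], "anger"),   ([310.0, 1.4], "anger"),
    ([160.0, 0.5], "normal"),  ([155.0, 0.6], "normal"),
]

def centroids(data):
    """Derive one mean feature vector (centroid) per emotion flag."""
    sums, counts = {}, {}
    for vec, flag in data:
        acc = sums.setdefault(flag, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[flag] = counts.get(flag, 0) + 1
    return {f: [v / counts[f] for v in acc] for f, acc in sums.items()}

def classify(vec, cents):
    """Assign the emotion whose centroid is nearest (Euclidean distance)."""
    return min(cents, key=lambda f: math.dist(vec, cents[f]))

cents = centroids(labeled)
print(classify([225.0, 0.85], cents))  # joy
print(classify([305.0, 1.45], cents))  # anger
```

In practice, clustering and supervised learning over the full 256-code representation would replace this toy centroid step.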
Depending on the size of your company, we can provide a wide range of deployment options, from SaaS to on-premises. The solution is also expandable: after verifying its effectiveness through the API, you can choose to embed it in your own servers.
Voice emotion recognition API
Free trial verification
Estimate of paid plan (number of seats, usage, etc.)
Payment of charges
API key notification
Start of operation
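Once the paid-plan steps above are complete and an API key has been issued, a request to the voice emotion recognition API might be assembled as follows. The endpoint URL, header name, and field names are assumptions for illustration only; the actual API specification should be confirmed with UNIQPLUS.

```python
import base64
import json

API_ENDPOINT = "https://api.example.com/v1/analyze"  # hypothetical URL
API_KEY = "YOUR_API_KEY"  # issued at the "API key notification" step

def build_request(wav_bytes: bytes):
    """Assemble headers and a JSON body for one analysis call.

    NOTE: the header and field names are illustrative assumptions,
    not UNIQPLUS's documented API.
    """
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "audio": base64.b64encode(wav_bytes).decode("ascii"),
        "format": "wav",         # wav, mp3, aiff or wma
        "sampling_rate": 16000,  # one of the supported rates
    })
    return headers, body

headers, body = build_request(b"\x00\x01")
print(json.loads(body)["format"])  # wav
```

Sending the request (e.g. via any HTTP client) and parsing the returned emotion scores would follow the same pattern.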
Voice emotion recognition SDK (on-premises)
Inquiries / hearings
Application / payment
Development version SDK delivery
Start of operation
In preparation (pricing will be offered in fixed-hour units)
* We also accept custom-made pre-builds.
Sampling frequency
8kHz / 16kHz / 44.1kHz / 48kHz
* Other sampling frequencies can also be analyzed.
Quantization bit depth
8bit / 16bit
* Please let us know if you require 24bit or 32bit.
Input audio data format
wav, mp3, aiff, wma
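The input requirements above can be checked client-side before uploading audio. A minimal sketch using Python's standard `wave` module (WAV files only) follows; the supported-value sets mirror the specification listed here.

```python
import io
import wave

SUPPORTED_RATES = {8000, 16000, 44100, 48000}  # 8kHz / 16kHz / 44.1kHz / 48kHz
SUPPORTED_DEPTHS = {8, 16}                     # 8bit / 16bit

def check_wav(data: bytes):
    """Return (sampling rate, bit depth) of a WAV file, verifying both
    against the supported input specification."""
    with wave.open(io.BytesIO(data)) as wf:
        rate = wf.getframerate()
        depth = wf.getsampwidth() * 8
    if rate not in SUPPORTED_RATES:
        raise ValueError(f"unsupported sampling frequency: {rate} Hz")
    if depth not in SUPPORTED_DEPTHS:
        raise ValueError(f"unsupported bit depth: {depth} bit (contact us)")
    return rate, depth

# Build a tiny 16 kHz / 16-bit mono file in memory for demonstration.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)        # 2 bytes = 16 bit
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 160)  # 10 ms of silence
print(check_wav(buf.getvalue()))  # (16000, 16)
```

mp3, aiff, and wma inputs would need a third-party decoder for the same check, since the standard library only parses WAV.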
For 10 years from 2008, he served as manager of the new business creation and AI development team at a Japanese IT company. In 2017 he led a project to reproduce personality with AI, conducting joint research and development with the Center for Artificial Intelligence Science, University of Tsukuba. We have a track record of successfully developing our own sentiment analysis algorithms.
Most voice sentiment analysis software on the market today focuses on training and coaching customer support agents. Our products aim not only to improve the quality of customer support, but also to collect survey data on the customer experience.
Workstation No.73A Building No.280 Taweelah,
Abu Dhabi, United Arab Emirates