Cloud video technology makes robots more entertaining and integrates them into your life

There are currently many companion robot products on the market, aimed mainly at the elderly or at children. Whether the robot reminds the elderly to take their medicine or accompanies a child while studying, robot companies have invested heavily in building out these functions.

The problem, however, is the lack of stickiness between companion robots and their users. Many users try the robot a few times and then leave it at home to gather dust, which is a very real problem for robot companies.

Perhaps the robots' functions are not yet mature, or perhaps these functions simply do not create stickiness for users in the first place.

Consider mobile phones: before smartphones appeared, people were not particularly dependent on their phones, but since then that dependence has become very strong, thanks to applications such as WeChat, iQiyi and Youku. These are the apps people now use most on their phones, and they are essentially social and entertainment products.

If embedding social and entertainment elements in mobile phones can create such strong user dependence, what would change if companion robot products embedded similar elements?

At this No. 1 Robotics Network seminar, after discussion among many mainstream companies, we concluded that live video broadcasting will find its place in the companion robot.

Many companion robots on the market already support video calls. With video call technology, a robot can monitor the home environment and watch over the safety of the elderly or children. In commercial applications, when the robot cannot answer a question, the conversation can also be transferred to a human agent over a video call.

Of course, video calling is only a minor function of the companion robot today; a great deal more can be built on top of the underlying video technology.

Xian Niu, senior architect at Shenzhen Jigo Technology Co., Ltd. (hereinafter "Jigo Technology"), said: "From a technical point of view, live video technology can certainly be applied to robots. The small-class teaching advocated in Europe and the United States lends itself well to live broadcasting; in a classroom that integrates online and offline, more people can watch and interact with the students."

"The companion robot itself is a personal educational traffic portal for children. It is a very good product form for classroom live broadcasts and online classrooms."

He Jinwei, product director of Guangdong Wosheng Education Technology Co., Ltd. (hereinafter "Wosheng Education"), said: "Our next-generation products are planned in this direction. The upcoming models come with an 8-inch screen, and we will make better use of it. Beyond monitoring and course learning, we will also cooperate with domestic educational institutions (such as the Le Si school) so that, through the companion robot, parents can see how their children are doing in class and what the classroom atmosphere is like."

Of course, these functions alone are not enough to make users stick with robot products, so we need to go further.

For example, a child could use the companion robot to record a short video of themselves singing or dancing and post it to WeChat Moments or Facebook to attract more attention.

Xian Niu said: "From a technical point of view, the realization of these functions is not technically difficult, or it can make the companion robot more playable. For example, the integration of synthetic audio on the live broadcast screen can make the person in the live video speak. Voice changes, like bel canto or weird sounds, can also turn me into a bear or a dog in the live broadcast, and let me imitate a bear or a dog to tell stories to children, making them feel more fun."

"Only when children are playing with robots, they can participate and make the screen operable. For example, in an online classroom, teachers or other students are turned into animals, and more people participate in the onlookers, so children will feel very interesting."

He Jinwei said: "With this idea, let the children send messages to each other. The sending of messages through the robot can be a short video instead of a voice."

Zhang Bo, co-founder of Shenzhen Setaria Intelligent Technology Co., Ltd. (hereinafter "Setaria"), said: "Live broadcasting may have many extended applications on robots. If a person can be turned into an animal on screen, could they also be made to sound like Trump when they speak?"

Xian Niu said: "It is technically relevant, mainly the identification of voiceprints. As long as there are other institutions that can distinguish voiceprints, and people can not distinguish between true and false, we only need to open the interface and provide the original video cloud data. , And then you can access various modules, who you want to be speaking, or implanting various props, such as hats, beards, beauty effects, etc. are all feasible, we now provide them to Yingke live broadcast and Huajiao live broadcast Wait for this plan."

Through this No. 1 Robotics Network seminar, we have come to believe that live video technology can be widely used in the robotics field, although the related business models still need further exploration and discussion.

At the current pace of technological development, more and more new technologies are converging on one another, and live video broadcasting is itself moving toward video AI. There are still too many unknown applications in that direction, so we will not explore it in depth here.

Video AI will be closely connected to the robot brain, so next we turn our attention to the development of speech and semantics; that is to say, the robot brain will encompass both speech and semantics.

On the semantics side, Ling Guang, a senior R&D engineer in Baidu's Natural Language Processing Department, said: "Semantic development currently has two directions: chit-chat and task-oriented dialogue. Chit-chat style human-machine dialogue can be supported by Baidu's big data, letting people chat freely with a robot, but it has been hard to make breakthroughs there for a long time. Task-oriented dialogue, by contrast, means cultivating a specific professional field in depth so that the robot can interact with people deeply within that field, but building a good product requires a huge investment."

"In task-based conversations, whether it is Baidu's UINT, Apple's Siri, or Google's voice assistant, the technologies used are the same. No matter which one, it is necessary to have a deep and deep interactive dialogue in any industry. A lot of time is invested, such as the establishment of the knowledge base, the analysis of intent, and the training of models. This process requires a lot of manpower and funds."

This also means that, for the robotics industry, Baidu provides only general-purpose semantics; it will not get involved in any specific semantic vertical because the investment is huge. That points to an opportunity for robot companies and semantics companies: to achieve something in semantics, they must cultivate a specific field in depth, which is exactly what current robot products need and exactly the ground that giants such as BAT are unwilling to tread.

On the speech side, Gan Chuhui, a technical consultant at Suzhou SPIT Information Technology Co., Ltd. (hereinafter "SPIT"), said: "Speech recognition has become much faster. Where we could previously recognize seven characters per second, we can now recognize more than ten. The next direction of development is having a single engine recognize Chinese, English, and other languages."

"The next step is to recognize what you are saying, and to know your portrait, approximate age and gender. And these technologies will be demonstrated in the Q3 quarter of this year."

At present, voiceprint technology seems to be developing faster than visual technology; it can now even recognize a speaker's emotions.

Returning to video AI: its development will bring it ever closer to the robot brain. For example, while wandering around a shopping mall, we could say to a phone or a nearby robot, "I want to eat pizza," and the phone or robot would call up a dynamic map of all the nearby pizza shops, along with their ratings, routes, and so on. Ling Guang said: "Such an application scenario can be implemented within the next three years."
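
To make that scenario concrete, here is a hedged sketch of the "I want to eat pizza" flow. The nearby-shop lookup is a mocked stand-in rather than a real map or search API, and all shop data is fabricated for the example.

```python
# Hypothetical end-to-end flow: keyword intent check -> nearby lookup -> spoken reply.
from typing import Dict, List

def find_nearby(keyword: str) -> List[Dict[str, object]]:
    """Stand-in for a local-search or map service; the data is fabricated."""
    fake_index = {
        "pizza": [
            {"name": "Mall Pizza Corner", "rating": 4.5, "route": "2nd floor, east wing"},
            {"name": "Napoli Slice", "rating": 4.2, "route": "B1, next to the cinema"},
        ]
    }
    return fake_index.get(keyword, [])

def handle_request(utterance: str) -> str:
    """Very rough pipeline; a real assistant would use trained intent and slot models."""
    text = utterance.lower().rstrip("?.!")
    if "eat" in text or "hungry" in text:
        keyword = text.split()[-1]                     # crude slot: last word ("pizza")
        shops = find_nearby(keyword)
        if shops:
            lines = [f"{s['name']} ({s['rating']} stars): {s['route']}" for s in shops]
            return "Nearby options:\n" + "\n".join(lines)
    return "Sorry, I did not catch what you would like to do."

print(handle_request("I want to eat pizza"))
```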

Finally, back to live video technology itself. Live video will be a must-have technology for companion robots. As companion robots continue to develop, live video will serve as a foundational technology on which many more applications can be built; the scenarios mentioned in this article are only a small sample. Used as a base layer, live video technology can open a much broader door of applications for companion robots.
