
A Conversation with Leyard's Liu Yaodong: Sora Brings an Efficiency Revolution; Will Directors No Longer Need to Worry About Production? | 100 People in Sci-Tech Innovation

Sina Technology · Mar 5, 21:20

Text | Zhou Wenmeng, Sina Technology

Issue No. 36

Guest: Liu Yaodong, CEO of Leyard Virtual Point

With the release of the Sora video model, generative AI technology represented by Sora is penetrating ever deeper into the film and television industry, setting off a new disruptive revolution. The display industry, as an important medium for the dissemination of information and video content, is also undergoing new changes.

Recently, Liu Yaodong, CEO of Leyard Virtual Point, said in an interview with Sina Technology that generative artificial intelligence represented by Sora enables efficient, fast, and low-cost production of high-quality video content. For the LED display industry, this means that high-quality visual content can be produced at lower cost and with greater timeliness, thereby increasing the productivity of the entire industry.

“Sora will bring changes in experience and efficiency to the LED display industry”

As a company listed on the Shenzhen Stock Exchange, Leyard Group has nearly 5,000 employees, operates ten production bases and nine R&D centers worldwide, and has ranked first in global LED display market share for seven consecutive years. Its wholly-owned subsidiary Virtual Point, which mainly provides 3D optical motion capture technology for the film, television, education, and other industries, has participated in the virtual production of many blockbuster films such as “Avengers: Endgame,” “Avatar,” and “Iron Man,” and holds more than 70% of the Hollywood market.

At the end of last year, Virtual Point released LYDIA, a self-developed professional-grade large motion model that can cognitively understand motion data in the field of spatial computing and generate motion efficiently. According to Liu Yaodong, LYDIA, like Sora, is a generative model built on large amounts of data and deep-learning training, but the two differ considerably in usage scenarios and areas of focus.

Specifically, LYDIA focuses on understanding and generating motion data for spatial computing: it produces three-dimensional motion data and is compatible with today's mainstream digital content creation platforms, enabling relatively precise character motion generation. Sora, by contrast, can quickly raise the efficiency of video content creation, but it cannot generate or control 3D character motion data, which makes precise control over the generated content difficult.

According to Liu Yaodong, for the LED display industry, the emergence of video models such as Sora means that high-quality visual content can be produced at lower cost and with greater timeliness, thereby increasing the productivity of the entire industry. “Content creation will become more efficient and diverse, and production cycles can be drastically shortened while maintaining or even improving the appeal and innovation of visual effects.”

At the same time, the evolution of Sora means that LED display technology can be combined with more intelligent interactive systems to deliver a more personalized, rich, and interactive viewing experience; for example, the displayed content could be adjusted in real time according to the audience's reactions and interactions. Moreover, as artificial intelligence continues to develop, future content will not be limited to traditional linear viewing experiences: it may also include more interactive and immersive formats, such as combinations with AR/VR and digital humans, giving viewers a more interactive, game-like experience.

“In the future, directors may only need to use their imagination”

In the early days of animated film and television, once the screenwriter finished the script, the director had to draw storyboards, frame by frame like a comic, before the shoot could be organized into a film. Later, film and television previsualization gradually emerged: performances are captured with OptiTrack optical motion capture technology, now widely used by companies such as Virtual Point, the data is rendered with engines such as Unity, and in this way animation can also be produced.

In Liu Yaodong's view, although that approach already saves effort, it is still quite time-consuming: a shoot can take half a month. Now, with a large model, the same work can probably be finished in minutes. Essentially, you type in the desired action and it is generated quickly, shortening the process to the minute level.
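
To make the previsualization pipeline described above more concrete, here is a minimal Python sketch of one typical step in such a workflow: resampling captured per-joint rotations to a renderer's frame rate before handing them to an engine such as Unity. The clip structure, joint names, and frame rates below are invented for illustration and are not an actual OptiTrack or Virtual Point data format.

```python
# Minimal sketch of a previs-style motion step (illustrative only).
# The data layout below is hypothetical; real optical motion capture exports
# (e.g. FBX/BVH) are richer and are consumed directly by engines such as Unity.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class MotionClip:
    fps: float
    # joint name -> list of per-frame Euler rotations (x, y, z) in degrees
    tracks: Dict[str, List[Tuple[float, float, float]]]

def resample(clip: MotionClip, target_fps: float) -> MotionClip:
    """Linearly resample a captured clip to the renderer's frame rate."""
    ratio = clip.fps / target_fps
    out: Dict[str, List[Tuple[float, float, float]]] = {}
    for joint, frames in clip.tracks.items():
        n_out = int(len(frames) / ratio)
        resampled = []
        for i in range(n_out):
            t = i * ratio
            lo = int(t)
            hi = min(lo + 1, len(frames) - 1)
            w = t - lo
            # Linear blend of Euler angles is a simplification; production
            # pipelines typically interpolate quaternions instead.
            resampled.append(tuple(
                (1 - w) * a + w * b for a, b in zip(frames[lo], frames[hi])
            ))
        out[joint] = resampled
    return MotionClip(fps=target_fps, tracks=out)

# Example: a 120 fps capture downsampled to a 30 fps previs render.
capture = MotionClip(fps=120.0,
                     tracks={"hips": [(0.0, i * 0.5, 0.0) for i in range(240)]})
previs = resample(capture, target_fps=30.0)
print(len(previs.tracks["hips"]), "frames at", previs.fps, "fps")
```

Production pipelines carry far more channels and constraints than this, but the point is the same: captured motion is structured 3D data that survives retargeting and re-rendering, unlike finished pixels.
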

He elaborated on this in the context of the mainstream video generation models currently on the market, taking dance videos on Douyin as an example. The reason details such as the characters' clothes and hair keep changing in those videos is that they are generated as pictures, frame by frame, rather than as a single continuous motion. Later, however, as LYDIA's large motion model technology matures and generative AI technologies such as Sora arrive, it will become possible to generate continuous 3D human motion quickly from a single sentence of description.

Take the motion technology support for “Lei Zhenzi” in the film “Creation of the Gods” (“Fengshen”) as an example. According to Liu Yaodong, the shot of Lei Zhenzi flying away while carrying Ji Chang used to require drawing and rendering bit by bit. Now, you only need to use a digital human to generate a blue-haired, monstrous-faced character and type a “spin and fly away” command as text to produce the “fly away” action directly. “In the future, directors will probably only need to use their wild imagination; large models will solve the problem of the mismatch between pre-production and post-production,” Liu Yaodong said.
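
To illustrate the distinction Liu Yaodong draws, the sketch below contrasts the two kinds of output in Python: a frame-by-frame video generator emits independent 2D images (which is why details can drift between frames), while a motion generator emits one continuous sequence of 3D joint transforms that an engine can replay on any character. Both generators here are placeholder stubs written for this example; they are not the real Sora or LYDIA interfaces.

```python
# Illustrative contrast between frame-by-frame video generation and
# text-to-motion generation. Both generators are hypothetical stubs,
# not the actual Sora or LYDIA APIs.
from typing import Dict, List, Tuple

Image = List[List[int]]                      # a 2D pixel grid (grayscale)
Pose = Dict[str, Tuple[float, float, float]]  # channel name -> (x, y, z)

def video_model(prompt: str, n_frames: int, size: int = 8) -> List[Image]:
    """Stub 'video model': every frame is an independent picture, so details
    such as clothes and hair can drift from frame to frame."""
    return [[[hash((prompt, f, x, y)) % 256 for x in range(size)]
             for y in range(size)]
            for f in range(n_frames)]

def motion_model(prompt: str, n_frames: int) -> List[Pose]:
    """Stub 'motion model': one continuous 3D trajectory for the whole clip,
    which a rendering engine can retarget onto any character."""
    return [{"root_rotation": (0.0, 360.0 * f / n_frames, 0.0),  # spin
             "root_position": (0.0, f * 0.1, 0.0)}               # rise / fly away
            for f in range(n_frames)]

frames = video_model("blue-haired figure spins and flies away", n_frames=4)
poses = motion_model("spin and fly away", n_frames=4)
print(len(frames), "independent images vs", len(poses), "poses of one continuous motion")
```

Swapping the character model or camera leaves the motion sequence untouched, which is the kind of relatively precise control over generated content that pixel-space generation alone does not offer.
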
