Render Pool Talks with Tsuyoshi Kobori (Quebico) and Atsushi Yoshimura (AMD)


by Render Pool

August 5, 2021

Quebico might be best known internationally as the CG animation studio behind the Netflix CG drama series RESIDENT EVIL: Infinite Darkness (known in Japan as BIOHAZARD: Infinite Darkness). For that production, the studio used AMD’s Radeon ProRender rendering engine and Render Pool, the cloud rendering service provided by Morgenrot.

We interviewed Tsuyoshi Kobori, Technical Supervisor of Quebico, and Atsushi Yoshimura of AMD’s development team to ask them about Quebico’s recent Netflix release and the prospects of CG animation and production in the future.

Please tell us a bit about Quebico.

Tsuyoshi Kobori: Our company, Quebico, was established in 2017 by our current president, Kei Miyamoto, who has been involved in the production of 3DCG feature-length animation films such as Resident Evil: Vendetta and high-end game movies. With our accumulated knowledge of video production and our independently developed, cloud-based production platform, we create high-end full 3DCG animations — which until now could only be produced by major studios — in collaboration with excellent creators from Japan and abroad.

What kind of system did you use to create RESIDENT EVIL: Infinite Darkness?

Kobori: For RESIDENT EVIL: Infinite Darkness, we didn’t simply undertake the 3DCG portion; we participated from the script development stage, in joint production with TMS Entertainment, Inc., and took charge of the entire production, including pre-production and post-production. In the area of 3DCG production, we collaborated with domestic and international studios and freelance artists using our own cloud-based production pipeline, drawing on the extensive production knowledge of 3DCG and feature films that our staff have accumulated over the years.

Was there anything that was different or more difficult than your previous video productions?

Kobori: Of all the Resident Evil films that have been produced as full 3DCG animations in the past, I think we are the smallest studio to have been in charge of the entire production. In the past, relatively large studios probably handled the majority of the production volume with their own resources.

It was a challenging endeavor for a small studio like ours to take on the central role of production. But I believe that our basic philosophy of always wanting to work with the right people, regardless of company or country, combined with shared knowledge of video production, the development of our production pipeline, and the strength of the participating studios and artists, resulted in a picture that exceeds the quality we were initially aiming for.

Were there any difficulties you faced in particular?

Kobori: Our company has only limited resources in terms of software, people, hardware, and infrastructure, so if we suddenly need additional resources that weren’t in the plan, we can’t immediately procure them from within the company. Even with a detailed resource plan, there are times when things don’t go according to plan, and the drawback is that it’s difficult to respond to such situations.

When additional resources are urgently needed, not being able to acquire them quickly can lead to schedule delays and increased costs. In order to make a system like ours work, I think we need to manage it more closely and have a plan B in place.

What kind of impact does rendering have on production time and cost in video production?

Kobori: Since rendering comes in the latter half of the production process, it’s easily affected by other processes, and unplanned work is likely to arise. When machine resources for rendering are insufficient, cloud rendering services become a viable option.

However, looking at running costs alone, cloud rendering is often more expensive than rendering on our own machines. So the initial plan was to cover the majority of the rendering volume in two ways: rendering on on-site machines, and having a subcontractor who owns rendering resources take charge of lighting and rendering all at once.

Cloud rendering is often used by freelance artists and subcontractors who don’t have their own rendering resources, or when the planned resources turn out to be insufficient. In this case, with AMD’s help, we were able to borrow rendering hardware and increase the processing power available on-site, which was very helpful from a cost perspective.

What are your impressions of AMD’s ProRender?

Kobori: Rendering using GPUs has become popular, but I feel that AMD’s ProRender is one step ahead in terms of consistency of rendering results between CPU and GPU.

Being accustomed to creating pre-rendered works using CPU renderers, I used to be skeptical about the ability of GPUs to produce high-quality images. With the renderers I’ve used up until now, I’ve found it difficult to produce usable images when rendering on GPUs.

For RESIDENT EVIL: Infinite Darkness, we were required to create photorealistic imagery, and ProRender allowed us to produce high-quality video efficiently while meeting the demands of rendering on both GPU and CPU.

I understand that ProRender is based on the CPU renderer and incorporates GPU rendering, aiming to produce the same picture on the GPU as it would on the CPU. On the other hand, even in real-time graphics technology, which has been making great progress in recent years, advances in hardware are making it possible to achieve expressions that were previously only possible with pre-rendering. In terms of producing higher quality results in a more efficient manner, there are similarities between the goals of the two, and I expect that the evolution of rendering technology will accelerate as these two technologies continue to compete.

Data for rendering in video production is usually large, errors and warnings can’t always be eliminated completely, and data can take structures that weren’t initially envisioned. In this case, thanks to AMD’s cooperation, we were able to complete the project: they promptly provided patches for problems, as well as additional functions, in parallel with production.

Atsushi Yoshimura: Even now, it’s very difficult to make CPU rendering and GPU rendering fully consistent. Updates are narrowing the gap, but at the current level there are still differences even when highly skilled developers do the work.

Kobori: Cloud rendering is used when time is the top priority, such as when unplanned rendering power is needed. On the other hand, there are areas where cost makes it difficult to use as a permanent fixture. In this case, with support through AMD, we were able to try out production using cloud rendering.

Even though the running cost is rather high, in a setup like ours, where production is done in partnership with outside parties or by a small team, there are many cases where preparing expensive machines would cost more and not be worth it.

Another point besides cost is the way data is handled. If the service doesn’t allow you to prepare data for rendering jobs and upload it to the cloud without stress, you can’t use it. I think Render Pool made that process stress-free as well.

What do you feel are the pros and cons of on-site machine rendering and cloud rendering services, respectively?

Kobori: Naturally, the advantage of on-site machine rendering is its low running cost.

Cloud rendering services are at a disadvantage in running costs, but their strength is that they require almost no initial investment and no hardware maintenance. Depending on the size and duration of the project and the composition of the team and working environment, I believe there will be more and more cases where the difference in total cost shrinks.

I also feel that the strength of cloud rendering is that it can be flexibly scaled to the required capacity.
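To make this trade-off concrete, here is a minimal break-even sketch. This is our illustration, not from the interview; every price, rate, and node count below is a hypothetical placeholder.

```python
# Hypothetical break-even sketch: on-site render nodes vs. cloud rendering.
# Every figure below is a made-up placeholder, not a real price.

def onsite_cost(num_nodes, node_price, upkeep_per_node_month, months):
    """Upfront hardware purchase plus power/maintenance for the project."""
    return num_nodes * (node_price + upkeep_per_node_month * months)

def cloud_cost(node_hours, rate_per_node_hour):
    """Pay-as-you-go: no upfront investment, cost scales with usage."""
    return node_hours * rate_per_node_hour

if __name__ == "__main__":
    node_hours = 20_000  # total rendering demand for the project (hypothetical)
    months = 6           # project duration
    nodes = 10           # on-site nodes needed to meet the deadline

    onsite = onsite_cost(nodes, node_price=8_000,
                         upkeep_per_node_month=150, months=months)
    cloud = cloud_cost(node_hours, rate_per_node_hour=4.0)
    print(f"on-site: ${onsite:,.0f}  cloud: ${cloud:,.0f}")
    # The shorter the project or the spikier the demand, the more the
    # zero-upfront, elastic cloud option closes the total-cost gap.
```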

How do you decide when to use pre-rendering versus real-time rendering?

Kobori: In the past, pre-rendering was the mainstream in video production, but the use of real-time rendering has gradually increased, and I think it’s changing production settings where speed is required.

The strength of the pre-rendering method is that quality can be pursued over time, but there are more and more cases where “sufficiently” high-quality results can be obtained by real-time rendering.

Both methods are evolving in the direction of higher quality and greater efficiency. In our company as well, we would like to make use of real-time rendering as a production method and actively employ it as a means of expression in future video productions.

As an engineer, do you have any issues with rendering?

Kobori: The difficulty of optimizing the parameter settings that control rendering quality and computational cost, and of allocating rendering resources optimally across a project.

On the creative side, artists and teams have their own strengths and weaknesses, but apart from that, there is also variation in skill at rendering efficiently. I believe engineering can still provide support here, for example in finding optimal parameter settings for efficient rendering.

In addition, realizing optimal allocation of rendering costs requires objectively visualizing things that are hard to quantify, such as how data weight differs from scene to scene and how important particular characters and shots are to the overall work, but this hasn’t progressed much. At present, a team without efficient rendering skills can eat up the compute that a team with the proper know-how would have saved.

It takes time to understand the relationship between parameter settings and quality/computation cost, which varies from renderer to renderer. I believe that having the ability to easily understand that relationship and predict the results will be a competitive advantage for renderers.
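As a rough illustration of the budget-allocation problem Kobori describes, a fixed sample budget could be distributed across shots in proportion to their importance and scene weight, with a floor so no shot falls below minimum quality. This is a sketch of ours, not Quebico’s actual tooling; shot names, weights, and the budget are hypothetical.

```python
# Hypothetical sketch: distribute a fixed sample budget across shots by
# importance x scene weight, with a per-shot minimum. Not Quebico's actual
# tooling; all shots and numbers are made up for illustration.

def allocate_samples(shots, total_budget, min_samples=64):
    """shots: {name: (importance, scene_weight)} -> {name: samples}."""
    scores = {name: imp * weight for name, (imp, weight) in shots.items()}
    total_score = sum(scores.values())
    # First pass; the min-clamp and rounding can drift slightly from the
    # budget, so a real tool would renormalize afterwards.
    return {name: max(min_samples, round(total_budget * score / total_score))
            for name, score in scores.items()}

shots = {
    "s010_hero_closeup": (1.0, 1.5),  # key character, heavy scene
    "s020_corridor":     (0.4, 0.8),  # background shot, light scene
    "s030_finale":       (1.0, 2.0),  # climactic, heaviest scene
}
print(allocate_samples(shots, total_budget=4096))
```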

Lastly, what features or functions would you like to see in the future?

Kobori: On the premise of improving quality and running costs, I would like to see enhanced render management functions. It would be best if we could visually grasp the priorities among tasks, such as “Material B must be finished before Material A.”
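A rule like “Material B must be finished before Material A” is naturally modeled as a dependency graph from which jobs are dispatched in topological order. Here is a minimal sketch using Python’s standard-library graphlib; the job names are hypothetical, and this is our illustration, not Render Pool’s actual interface.

```python
# Hypothetical sketch of dependency-aware render job ordering.
# Job names are made up; this is not Render Pool's actual API.
from graphlib import TopologicalSorter

# Each entry reads: job -> set of jobs that must finish first.
deps = {
    "material_A": {"material_B"},  # B must be finished before A
    "material_B": set(),
    "fx_cache": set(),
    "shot_lighting": {"material_A"},
    "final_comp": {"shot_lighting", "fx_cache"},
}

for job in TopologicalSorter(deps).static_order():
    print("dispatch:", job)  # a real manager would submit to the farm here
# The dispatch order respects every "must finish before" constraint,
# e.g. material_B always precedes material_A.
```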

There are many users who are accustomed to the feature-rich rendering managers from on-site rendering farms, and I think some of them feel that the functionality of the web-based interface of cloud rendering services is lacking.

Another major issue is data composition and format. We believe there is still room for greater efficiency in thinking through how data should be structured, whether for pre-rendered or real-time work.

We are accustomed to the idea that we must wait for the rendering to finish before we can see the final result, but if the rendering time were reduced to zero, the situation would be completely different. This would change not only the work itself, but also the way we create it.

Take on Larger 3DCG Projects with Render Pool and Radeon ProRender

Through partner collaboration, Quebico is able to produce high-end 3DCG animation that until now could only be done by major studios. Going forward, Quebico will incorporate GPU rendering and real-time rendering to evolve its production workflow even further. With our cloud rendering service Render Pool, along with industry-leading tools like the Radeon ProRender rendering engine, we will continue to assist companies of all sizes and workflows in creating high-quality video productions.
