Holy crap this is amazing.
Bored in my room
Sitting here doing stuff with FOSS tools, I find that Pepper & Carrot by David Revoy is the best example of what people can do with them, and it stresses AQAP (as high quality as possible). It's really interesting, from the story to the gags, and especially the art. It's the perfect motivator for me to keep using FOSS despite the odds. I'm not gonna talk about the comic itself here, though; it's just the jumping-off point for the main subject...
I found this animation of Episode 6, expertly done and very faithful to the source material. (Although that's also because the original's source files are put out there as well!)
I got the source files for the animation itself and poked around some of them before realizing I'm too brainlet to understand what everything does lol. Besides, I had to install Blender 2.7 alongside 2.8, because the latter feels more like a 3.0, and Ton does what he wants. Then I found a blog post detailing compilation instructions...
Trying to compile someone else's work
Per their post, they compiled the animation using their home-grown system, RenderChan, which manages animation projects and automatically handles rendering dependencies and such. When I tested the system by rendering a single scene, it rendered blank textures, since my default Blender is, of course, 2.8. Here's what I had to do...
```diff
--- a/renderchan/contrib/blender.py
+++ b/renderchan/contrib/blender.py
@@ -15,6 +15,7 @@ class RenderChanBlenderModule(RenderChanModule):
             self.conf['binary']=os.path.join(os.path.dirname(__file__),"..\\..\\..\\packages\\blender\\blender.exe")
         else:
-            self.conf['binary']="blender"
+            self.conf['binary']="blender-2.7"
         self.conf["packetSize"]=40
         self.conf["gpu_device"]=""
         # Extra params
```
So, the way RenderChan renders the files is that it renders a number of frames at a time, which has the advantage of, say, delegating each chunk to a different computer if you have some sort of render farm (which I don't), and at the end it combines all these chunks into a single video file. The system works... but it results in a file that skips on my end. Doesn't matter if I render in draft mode or final mode, that happens. Something with FFMPEG?
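To picture that stitching step, here's a minimal Python sketch of the concat-demuxer approach — the helper names are mine, not RenderChan's, though the command mirrors the stream-copy concat call in RenderChan's core.py:

```python
def concat_list(chunk_paths):
    """Body of an ffmpeg concat-demuxer list file: one `file '...'` line per chunk."""
    return "".join("file '%s'\n" % p for p in chunk_paths)

def concat_command(ffmpeg_binary, list_path, output_path):
    """Stream-copy concat: paste the chunks together without re-encoding."""
    return [ffmpeg_binary, "-y", "-safe", "0", "-f", "concat",
            "-i", list_path, "-c", "copy", output_path]
```

Write `concat_list(...)` out to a text file, then hand that file to `concat_command(...)` via `subprocess` — that's the whole trick.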
I had to research the nature of FFMPEG's concat, figure out what all the "Non-monotonous DTS in output" warnings spammed at me mean, and hypothesize that MP3 as the audio codec might have something to do with them. I even made FFMPEG re-encode the entire thing instead of simply copying over the data and literally pasting the chunks one after another. Here's what I brute-forced:
```diff
--- a/renderchan/core.py
+++ b/renderchan/core.py
@@ -1059,8 +1059,10 @@ class RenderChan():
                 os.remove(profile_output+".done")

             if format == "avi":
+                #subprocess.check_call(
+                #    [self.ffmpeg_binary, "-y", "-safe", "0", "-f", "concat", "-i", profile_output_list, "-r", params["fps"], "-c", "copy", profile_output])
                 subprocess.check_call(
-                    [self.ffmpeg_binary, "-y", "-safe", "0", "-f", "concat", "-i", profile_output_list, "-c", "copy", profile_output])
+                    [self.ffmpeg_binary, "-y", "-safe", "0", "-f", "concat", "-i", profile_output_list, "-r", params["fps"], "-fflags", "+genpts", "-c:a", "pcm_s16le", "-c:v", "libx264", "-crf", "10", profile_output])
             else:
                 # Merge all sequences into single directory
                 for line in segments:
```
...Yeah. After all that, it still skips. Obviously rendering straight to a video file isn't an option for me, despite how convenient that would be. Still, this system has some massive potential, and I want to try it out for myself.
Trying to compile my own tests
Within RenderChan's application files, there's a skeleton directory named "projects/default", but I'm not about to use it. Instead, I'll make my own directory structure, roughly following the one from Pepper & Carrot's animation source. I don't think RenderChan reads any hardcoded directories that matter much for now.
For my experiment, I have this:
```
renderchantest/
|
|_ assets/
|_ audio/
|_ scenes/
|  |_ 000/
|  |  |_ 000.blend
|  |
|  |_ 001/
|     |_ 001.blend
|
|_ project.blend
|_ project.conf
```
assets/ is where I'm gonna store all the textures shared between scenes (backgrounds, objects, particle textures, etc.). audio/ is where the music and SFX go. scenes/ holds the individual scenes I make, and RenderChan can apparently render scenes made in different applications.
project.blend is where the final mix is gonna be joined up in Blender's VSE, and project.conf holds all the profiles I want to use. I copied it over from P&C's source, but made a few changes:
```ini
[main]
active_profile=hd

[draft-lq]
WIDTH=480
HEIGHT=270
FPS=24
FORMAT=png
blender.cycles_samples=10
blender.prerender_count=0
BLENDER_VERSION=2.82
blender.packet_size=20

[draft]
WIDTH=480
HEIGHT=270
FPS=24
FORMAT=png
blender.cycles_samples=75
blender.prerender_count=0

[hd]
WIDTH=1280
HEIGHT=720
FPS=24
FORMAT=png
blender.cycles_samples=75
blender.packet_size=10

[full-hd]
WIDTH=1920
HEIGHT=1080
FPS=24
FORMAT=png
blender.cycles_samples=75
blender.packet_size=10
```
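Incidentally, this format parses fine with Python's stock configparser. A quick sketch of pulling out the active profile's settings — my own code against a trimmed-down copy of the config, not how RenderChan actually reads it:

```python
import configparser

# Trimmed-down excerpt of project.conf for illustration
conf_text = """\
[main]
active_profile=hd

[hd]
WIDTH=1280
HEIGHT=720
FPS=24
FORMAT=png
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # keep keys like WIDTH/FPS case-sensitive
parser.read_string(conf_text)

profile = parser["main"]["active_profile"]
fps = int(parser[profile]["FPS"])
```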
For now, I made all the scenes in Blender. It's a test run, so I'm going with the JoJo references. I also set the samples to 1, making it all jaggedy.
I render out the sequences a scene at a time:
```shell
renderchan --profile draft scenes/000/000.blend
```
Adding it to the main project.blend, I render again...
```shell
renderchan --profile draft scenes/001/001.blend
```
If I were to make changes, I run...
```shell
renderchan --profile draft --deps project.blend
```
to check it out in the main project.
Rendering the project produced a big image sequence. Even though I could just put the image sequence back into Blender or Olive, there'd be no point in repeating by hand what should already be done by the automated process.
So on top of that, I made scripts.
What these would do is take the image sequences RenderChan spat out and combine them with the audio rendered out from the Blender file. First, I had to find out how to render a sound mixdown in the background.
Blender doesn't let me do that yet; it only renders animations in the background. What it does let me do is load up a file and then run a Python script with it. Blender has a Python function to render the mixdown, so...
mixdown_audio.py, a two-liner:
```python
import bpy
bpy.ops.sound.mixdown(filepath="//render/project.flac")
```
Next, a way to keep the FPS consistent. I came up with this, which pulls it from the first profile in project.conf:
```shell
FPS=`grep -oP 'FPS=\K([0-9]+)' project.conf | head -n1`
```
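For what it's worth, grep's -P (PCRE) flag is a GNU extension that isn't available everywhere (BSD grep lacks it), so here's a rough Python equivalent of that extraction — my own helper, not part of the project:

```python
import re

def first_fps(conf_text):
    """Return the first FPS=<digits> value found in project.conf text, or None."""
    m = re.search(r"^FPS=(\d+)$", conf_text, re.MULTILINE)
    return int(m.group(1)) if m else None
```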
Putting it all together, I have the BUILD script (assuming renderchan is already in my $PATH):
```bash
#!/bin/bash
# Crudely get FPS from project.conf
FPS=`grep -oP 'FPS=\K([0-9]+)' project.conf | head -n1`
# Render out project as a bunch of png's
renderchan project.blend &&
# Run the mixdown script with project.blend
blender -b project.blend -P scripts/mixdown_audio.py &&
# Combine the rendered image sequence with the mixed down audio
ffmpeg -i render/project.blend.png/file.%5d.png -i render/project.flac -crf 18 -r $FPS project.mkv
```
I made a new folder in the root directory titled scripts, then placed both the build script and the mixdown script within it. From the root directory, I can simply run `bash scripts/BUILD` and wait for it to spit out the finished mkv.
I also tried one with a Krita file as an entire scene, and yeah, it didn't resize the image according to the profile. I can still use it for things, though, and I can imagine a myriad of uses for this.
RenderChan is awesome, and it makes me wanna animate for once. Even if I had to come up with hacks to make things work nicely, it's pretty satisfying to use. I don't know if I'll actually end up using it, though.
Source files of my test will come along shortly...