How to use Stable Diffusion in Blender to quickly render backgrounds for your manga

In this video, I'm going to show you how to integrate artificial intelligence into Blender to enhance your renders significantly. I know this topic can be contentious, but please, hear me out. There's a model known as Stable Diffusion, developed by a company called Stability AI. They offer a Dream Studio API that can be integrated into various software, including Blender, and it's currently available for free. I should clarify that I'm not affiliated with them; I've simply discovered a workflow that utilises this API within Blender.

Let me demonstrate. Here's a line art result I achieved in Blender. Switching to camera view, you can see the effect, which admittedly looks somewhat artificial. If you use a render like this as a background in a web comic, animation, or manga, it can be obvious to readers that it came from a 3D model.

So, how can we reduce this artificiality without excessive effort? It’s quite simple with the help of AI. Now, this is purely my personal view, but I believe artists can ethically use AI provided they adhere to certain principles. Firstly, there must be transparency about using AI — no misleading the audience. Secondly, the AI models used should be ethical; they shouldn't just replicate an artist’s work without permission. Lastly, while using AI, it's crucial to add your own vision and creativity to instil a human touch; simply mass-producing art via AI does not constitute true artistry.

Almost all of this is still my own hard work, by the way. If you've seen my previous video, this scene was created using Blender's bundled 'Archimesh' add-on, which lets you quickly generate architectural elements like walls and columns. The toon-style shading uses a basic Principled BSDF setup to sharpen the contrast between light and shadow.
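To make that concrete, here's a minimal Python sketch of a typical two-tone toon setup in Blender. This is my reading of the general approach, not the exact node graph from the video, and it assumes Eevee, since the Shader to RGB node is Eevee-only:

```python
import bpy

# Minimal toon material sketch (assumes Eevee; Shader to RGB is Eevee-only).
mat = bpy.data.materials.new(name="ToonSketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

bsdf = nodes.new('ShaderNodeBsdfPrincipled')   # base shading
to_rgb = nodes.new('ShaderNodeShaderToRGB')    # flatten lighting into a colour
ramp = nodes.new('ShaderNodeValToRGB')         # colour ramp for the tone cut
out = nodes.new('ShaderNodeOutputMaterial')

# Constant interpolation is what gives the hard cut between light and shadow.
ramp.color_ramp.interpolation = 'CONSTANT'
ramp.color_ramp.elements[1].position = 0.5

links.new(bsdf.outputs['BSDF'], to_rgb.inputs['Shader'])
links.new(to_rgb.outputs['Color'], ramp.inputs['Fac'])
links.new(ramp.outputs['Color'], out.inputs['Surface'])
```

Assign the material to your meshes and the lit and shadowed areas collapse into two flat tones, which is the look the line art sits on top of.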

I'm currently aiming for a semi-manga style, and all these lines are drawn with Grease Pencil. If you haven't seen my previous video, please check it out, subscribe, and give it a thumbs up; I really need your support!

When it comes to rendering, AI can significantly transform your results, adding a more hand-drawn feel to backgrounds, which I personally love. By varying seeds and styles, you can get very different outcomes from the same render.

I've also developed a plugin that brings the Stable Diffusion model directly into Blender. You can download it from GitHub, and it's also available on Blender Market for free, although paid support options exist if you wish to contribute financially.
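I won't paste the plugin's actual source here, but to give you an idea of its shape, here is a hypothetical minimal skeleton showing how a Blender add-on typically stores an API key in its preferences. Every name in it is made up for illustration:

```python
import bpy

bl_info = {
    "name": "AI Render Sketch",   # hypothetical name, not the real plugin
    "blender": (3, 0, 0),
    "category": "Render",
}

class AIRenderPreferences(bpy.types.AddonPreferences):
    # bl_idname must match the add-on's module name
    bl_idname = __name__

    api_key: bpy.props.StringProperty(
        name="Dream Studio API key",
        subtype='PASSWORD',       # hides the key in the preferences UI
    )

    def draw(self, context):
        self.layout.prop(self, "api_key")

def register():
    bpy.utils.register_class(AIRenderPreferences)

def unregister():
    bpy.utils.unregister_class(AIRenderPreferences)
```

Storing the key in add-on preferences means it lives with your Blender configuration rather than in each .blend file.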

Once installed, simply sign up for Dream Studio to get your API key, paste it into the plugin's settings, and start rendering. When you press F12, the plugin sends the finished render to the AI for processing and receives an enhanced version back. The result appears within seconds, depending on your internet connection.
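Under the hood, a round trip like this can be wired up with a render handler. The sketch below is not the plugin's actual code; the endpoint, engine name, and form fields follow Stability's v1 REST image-to-image API as I understand it, so check the current documentation before relying on them, and the file paths, prompt, and key are placeholders:

```python
import bpy
import requests  # ships with Blender's bundled Python in recent versions

API_KEY = "YOUR_DREAM_STUDIO_KEY"  # paste your own key here
# Endpoint and field names follow Stability's v1 REST API as documented
# at the time of writing; verify against the current docs.
URL = ("https://api.stability.ai/v1/generation/"
       "stable-diffusion-v1-6/image-to-image")

def enhance_render(scene):
    """Runs after F12 finishes: saves the render, sends it for img2img."""
    src = "/tmp/render.png"
    dst = "/tmp/render_ai.png"
    bpy.data.images['Render Result'].save_render(src)

    with open(src, 'rb') as f:
        resp = requests.post(
            URL,
            headers={"Authorization": f"Bearer {API_KEY}",
                     "Accept": "image/png"},
            files={"init_image": f},
            data={
                "text_prompts[0][text]": "hand-drawn manga background, ink lines",
                "image_strength": 0.35,  # how much of the original render to keep
                "seed": 42,              # vary this to explore different outcomes
            },
        )
    resp.raise_for_status()
    with open(dst, 'wb') as out:
        out.write(resp.content)  # the enhanced image comes back as PNG bytes

# Run automatically every time a render completes
bpy.app.handlers.render_complete.append(enhance_render)
```

Varying the seed or the image_strength value here is exactly how you explore those different outcomes I mentioned earlier.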

This tool isn't limited to line art; it works with more complex and varied styles too. For instance, I've been modelling a car with Grease Pencil line work attached. You can specify any style or theme in the plugin, and it will generate results accordingly, which makes a great starting point for further refinement.

To keep your artwork consistent without it taking on a uniform AI look, it's advisable to draw your main characters by hand and reserve the AI-generated elements for backgrounds. This approach also helps the result avoid reading as fake to viewers.

I hope you find these tips useful. I’ve been talking a lot about AI recently, and for those of you who've requested more content on this topic, here it is. Stay tuned for more tips on AI, as well as traditional and digital art. Don’t forget to like my video and subscribe if you haven’t already. See you next time!