Talk:Network Rendering

From K-3D


Hello, I'm the main developer of DrQueue.

Is there anything I could do to help you out regarding DrQueue?

I'm sorry, your software is very well known, but I'm new to... "using" it. So I might also need some help from your side.

Merry Christmas,


Thanks for dropping by! I'm not all that familiar with DrQueue either, but network rendering is something that comes up every now and then on our mailing list. We have a fairly unsophisticated system for rendering jobs on the localhost which could probably be set up to submit jobs to DrQueue with moderate hacking, but I'd like to develop something better. K-3D supports a wide range of render engines, including all known RenderMan engines and Yafray, and we expect the list to keep growing due to the flexibility of K-3D's architecture. We also have the beginnings of support for multipass render jobs, and we hope to grow that capability in the future. Flexibility is important, as we typically want to combine per-frame and per-job resources, compile shaders, etc. as part of a job. I may try setting up DrQueue over the holiday for experimentation; from looking at the docs, it may make sense to add a DrQueue client capability to K-3D. At the same time, we still need local rendering for rapid artist previews, so it would be nice if we could have a unified architecture that provides both, without requiring users to set up a local DrQueue manually. Tshead 12:46, 21 Dec 2006 (MST)
Right now, you can access the entire internal DrQueue API using the Python bindings. It's not documented at all, so you'd have to use the sources and my gladly provided help (of course). That would allow you to customize jobs, tasks, etc. with some really minor design limitations. On the other hand, DrQueue uses scripts (of any type, even though drqman creates tcsh ones) to handle a job's tasks. That means that even without the Python bindings, you could script directly into the job script whatever you might need to handle your special conditions. Please, I invite you to join the drqueue-users or drqueue-dev mailing list and open a discussion about this there; it will be helpful for many other people. Regarding the current complexity of the setup: the typical Unix alibi, "historical reasons". I'm working hard on that. A single-station config is as simple as running both "master" and "server" on localhost; in fact, that's my first way of testing new changes. But as mentioned, I see a long thread coming here, so it might be a good idea to move it to the list if you like, or mail me directly if you don't want to subscribe. Comadreja 01:22, 22 Dec 2006 (MST)
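Since DrQueue drives each task through an ordinary script, a K-3D-specific job could put its "special conditions" directly in that script. Here is a minimal sketch of a per-frame job script written in Python; the `DRQUEUE_FRAME` environment variable name and the `k3d-renderframe` command are assumptions for illustration, not confirmed names from the DrQueue or K-3D sources:

```python
#!/usr/bin/env python
# Hypothetical per-frame job script. DRQUEUE_FRAME is an ASSUMED
# environment variable name -- check the DrQueue sources for the real one.
import os

def build_render_command(frame, scene="scene.k3d"):
    """Return the command line that would render a single frame.

    Anything K-3D-specific (shader compilation, per-frame resources)
    could be inserted here, since the job script is ordinary code.
    The k3d-renderframe executable name is hypothetical.
    """
    return ["k3d-renderframe", "--scene", scene, "--frame", str(frame)]

if __name__ == "__main__":
    # The queue would set the frame number before invoking the script.
    frame = int(os.environ.get("DRQUEUE_FRAME", "1"))
    print("would run:", " ".join(build_render_command(frame)))
```

Because the script is just code, shader compilation or project unpacking could happen in the same place before the render command runs.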
So, I've updated Network Rendering to include a description of how we do local rendering today, plus some use cases and requirements. After thinking it through, there is one paramount concern for us: local rendering must work flawlessly after the user installs K-3D. Relying on the user to install and set up DrQueue for local rendering is not an option. We could include DrQueue in our build/install, but it still wouldn't provide the level of GUI integration that we want. So at a minimum we would be creating a K-3D-specific client (or clients), and we would probably end up creating our own master so we can do some K-3D-specific customization there. The DrQueue Python bindings aren't much help, as all of this would be C++ code. So the main support we would be interested in from you is thorough documentation of the DrQueue protocols, the filesystem layout for frames, etc. I'm also very unclear on the role of scripting in DrQueue: who generates the scripts, where do they go, how do they get there, what environment do they execute in, how do you handle different platforms, etc.? Tshead 10:10, 23 Dec 2006 (MST)
Happy New Year to all of you. :) Regarding the points under Requirements:
  1. It's already possible to run several master daemons on a single computer, but you should count on the concept of pools, which allow a single master (including any extra HA master that could be added) to handle sets of nodes: a slave can belong to several sets, while a job can only be part of a single one. Thus, only slaves in the job's pool (set) of nodes would be considered for that job, and I mean considered just like for any other job. (The priority issues when mixing pools, slave and job conditions, and the jobs' own priorities do get complicated if they are to please everyone -> module handler.) The zero-configuration issue is pretty easy when you only consider the local computer, but it gets tricky when you start mixing platforms, paths, and shared storage. A K-3D-specific project archiver module could create project bundles and push them to other slaves, where they'd be treated like local ones. Even though that is more convenient, it's like reinventing the wheel: existing network file systems already handle many more issues like authentication, encryption, permissions, ACLs... So while it would be quite easy once you know the project structure and can script a bit, I don't think it's a good final solution. Still, I'm eager to have one of those working, because I believe that's what most people would like to have, even knowing there are better options. My vote, of course, would be to offer both.
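The pool rules described above (a slave may belong to several pools; a job belongs to exactly one, and only slaves in the job's pool are considered) can be sketched in a few lines. The names here are invented for illustration and are not taken from the DrQueue sources:

```python
# Illustrative model of the pool rules: a job names exactly one pool,
# a slave may belong to several, and only matching slaves are candidates.
def eligible_slaves(job_pool, slaves):
    """Return the slaves whose pool memberships include the job's pool.

    slaves: mapping of slave name -> set of pool names it belongs to.
    """
    return [name for name, pools in slaves.items() if job_pool in pools]

# Hypothetical farm: node1 and node3 are in the "k3d" pool.
slaves = {
    "node1": {"default", "k3d"},
    "node2": {"default"},
    "node3": {"k3d", "shaders"},
}
```

A job submitted to the "k3d" pool would then only ever be dispatched to node1 and node3, while node2 keeps serving the "default" pool.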
  2. Including shader compilation wouldn't be an issue as long as the node has all the requirements already installed. The project archiver could be extended with a project shader handler, which would include the tests to determine whether the project needs to compile anything, what exactly, and whether the remote system would be able to do it, putting a node in the list of blocked hosts if the checks say it wouldn't be possible. That could even be extended to layers, where different sets of shaders might be needed.
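The "project shader handler" idea above amounts to a per-node eligibility check: block any host that cannot compile what the project needs. A minimal sketch, with all names (including the example compiler names) invented for illustration:

```python
# Sketch of the shader-handler check described above: given the shader
# compilers a project needs, split the farm into usable and blocked hosts.
def partition_hosts(required_compilers, hosts):
    """Return (usable, blocked) host name lists for a shader job.

    required_compilers: set of compiler names the project needs.
    hosts: mapping of host name -> set of compilers installed there.
    A host is usable only if it has every required compiler.
    """
    usable, blocked = [], []
    for name, installed in hosts.items():
        if required_compilers <= installed:
            usable.append(name)
        else:
            blocked.append(name)
    return usable, blocked
```

Extending this to layers, as suggested, would just mean running the same check once per layer with that layer's required compiler set.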
  3. The constraints would be limited only by the user-selected job options, by the script/module that performs them, or at the very last level by the system's resources. From my point of view this is not the job of the distribution framework, which will only provide the means to share information about the systems involved and send requests to them, either to perform tasks or to provide information about themselves. That would include the systems themselves sending regular updates about their status or the progress of their tasks, which are also used as a heartbeat/keepalive signal.
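The dual role of those updates (progress report and keepalive) can be sketched as follows. The message fields and the timeout value are assumptions for illustration, not part of any actual DrQueue protocol:

```python
import time

# A progress update doubles as a heartbeat: if none arrives within the
# timeout, the master may treat the slave as dead. 30 s is an ASSUMED value.
HEARTBEAT_TIMEOUT = 30.0

def make_status_update(slave, task_id, frames_done, frames_total):
    """Progress report sent to the master; also serves as keepalive."""
    return {
        "slave": slave,
        "task": task_id,
        "progress": frames_done / frames_total,
        "timestamp": time.time(),
    }

def is_alive(last_update_ts, now=None, timeout=HEARTBEAT_TIMEOUT):
    """A slave counts as alive if its last update is recent enough."""
    if now is None:
        now = time.time()
    return (now - last_update_ts) <= timeout
```

The point is that no separate heartbeat channel is needed: the master just tracks the timestamp of the most recent status message per slave.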
  4. Again, viewing images is something beyond the scope of a distributed rendering framework, but we could make sure it provides all the information (or even binary data) needed by the client system to offer that functionality.
  5. Agreed. Besides the usual storage, logs should also be analyzed by a project log handler that might either provide a fast and easy solution to the message or throw the error to a higher level to be handled by some other layer, human or not, which would proceed the same way, until the problem reaches somebody's cell phone at 4am. :)
So, probably the main point is what I called the push mechanism, allowing binary data to flow from system to system on request, whether that be an image to be shown, a log to be parsed, or a whole project to be rendered. In any case, some abstraction layer should be responsible for each of those tasks, even if that abstraction layer does nothing other than rely on some higher layer's default behaviour. (Note for Timothy: did you get my mail?) Comadreja 23:38, 1 Jan 2007 (MST)
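The layering idea above (a handler per payload kind, falling back to a default behaviour when it has nothing special to do) can be sketched like this. The class and method names are invented for the sketch:

```python
# Sketch of the "push mechanism" layering: one generic transfer path for
# images, logs, or whole project bundles, with per-kind specialization.
class DefaultTransport:
    """Fallback behaviour: hand back the raw bytes untouched."""
    def push(self, kind, payload):
        return payload

class LogTransport(DefaultTransport):
    """Example higher layer: decode logs, defer everything else."""
    def push(self, kind, payload):
        if kind == "log":
            # This layer knows how to handle logs...
            return payload.decode("utf-8")
        # ...and relies on the lower layer's default behaviour otherwise.
        return super().push(kind, payload)
```

A project archiver or image viewer would just be further subclasses handling their own `kind`, so the framework itself stays agnostic about what the bytes mean.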
Hello, I read some comments on the mailing lists that made me think you could have misunderstood my interest in this. I'm not writing here, or asking for opinions, in order to have somebody write a K-3D/DrQueue client or whatever. As I mentioned to Timothy, I'm working on a new, unpublished release of DrQueue which includes major redesigns (at the software/logical/code level). Given that I had to rewrite or adapt major pieces of code, one of my main concerns was to make every new piece as extensible and generic as possible. That includes making the design compatible with any future extension for K-3D, no matter whether it's me who codes it, a random Joe, or Lockheed Martin. I'm interested in what K-3D would need for the best network rendering, and how that could fit with the rest of the design, if it doesn't already. Sorry if there was any misunderstanding on this. In any case, and for anyone's interest: if nobody happens to write that module for K-3D, I will do it myself. But first of all I have to finish the new core and set the path for that to be both easy and fast when it happens. Thanks again for all your ideas and insight. I'll remain open to any suggestion, improvement, or idea for brainstorming on this subject. Comadreja 14:00, 11 Jan 2007 (MST)