Overheads

I have prepared an overhead of Shot 1 (greenscreen) of my revised film. This gives an overview of what I am envisioning for the shot. Camera, lights, actors, grip, and background are included. For crew, I am intending to recruit a camera operator and a dolly grip. The one problem that I may run into is limited studio space, as both of the characters in the film will be running through a corridor as they attempt to escape from the facility.

Film overhead

I did some preliminary research on the web for possible shooting locations using Google Maps (Panoramio) and Flickr. The geotagging features were a great way to associate images with specific locations. This is what I have found so far:

Sunken Gardens, Huntington, IN.

Software

I thought it would be good to list all of the software that I am using or plan to use.

Blender: This is my all-around content creation tool. All of the 3D work has been done in Blender, as well as camera tracking, roto, compositing, and video editing in the Video Sequence Editor (VSE).

Photoshop: Photoshop, of course, is an invaluable tool for texturing 3D models. Most of the texturing is done using Photoshop’s 3D workspace.

3DEqualizer: 3DEqualizer is a leading software package for camera tracking that I am investigating. Currently, I have little experience with it, but I am learning it for a workshop at CGWorkshops. If I can work out the licensing, it is a definite possibility for the film.

Nuke: The industry standard among compositing applications; I am also considering adding this program as an alternative to Blender.

Survey data is an important part of matchmoving. The survey data is a group of 3D models or a series of points in 3D space. This data tells the solver where tracking points should be and constrains the camera solve to fit those points. The typical way to obtain the 3D data is by using expensive survey heads or LiDAR. However, I have been experimenting with using a free program from Autodesk called 123D Catch to generate the necessary 3D information instead.
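As a rough illustration of the kind of constraint survey points provide, here is a minimal sketch in Python, assuming OpenCV and NumPy; the survey points, tracked positions, and camera intrinsics below are all placeholder values rather than real measurements.

    import numpy as np
    import cv2

    # Known 3D survey points, e.g. measured corners of the set (metres).
    survey_pts = np.array([
        [0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [1.2, 2.4, 0.0],
        [0.0, 2.4, 0.0], [0.0, 0.0, 3.0], [1.2, 2.4, 3.0],
    ])

    # Where those same points were tracked in one frame of the plate (pixels).
    tracked_2d = np.array([
        [612.0, 488.0], [905.0, 492.0], [918.0, 130.0],
        [601.0, 122.0], [455.0, 610.0], [1094.0, 60.0],
    ])

    # Approximate camera intrinsics: focal length and principal point in pixels.
    K = np.array([[1800.0, 0.0, 960.0],
                  [0.0, 1800.0, 540.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume the plate has already been undistorted

    # Because the 3D positions are known, the camera pose for this frame can be
    # pinned down directly instead of being estimated purely from parallax.
    ok, rvec, tvec = cv2.solvePnP(survey_pts, tracked_2d, K, dist)

    # A solver is constrained to poses that keep this reprojection error small.
    projected, _ = cv2.projectPoints(survey_pts, rvec, tvec, K, dist)
    error = np.mean(np.linalg.norm(projected.reshape(-1, 2) - tracked_2d, axis=1))
    print("mean reprojection error (px):", error)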

123D Catch uses photogrammetry and complex algorithms to mathematically reconstruct a real scene in 3D from photos. It uses cloud processing to perform the heavy lifting and sends the scene back to the user’s computer when done. The user can then view the model, manually stitch any photos that failed in the initial computation, create a flythrough animation, and export the final model at various resolutions. Using this software will hopefully allow for much quicker creation of set geometry and survey data.
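Since the goal is survey data as much as set geometry, the exported mesh can also be read back as a simple point cloud. A minimal sketch, assuming the scene is exported from 123D Catch as a Wavefront OBJ; the file name below is a placeholder.

    def read_obj_vertices(path):
        """Collect the 'v x y z' vertex lines from a Wavefront OBJ export."""
        vertices = []
        with open(path) as f:
            for line in f:
                if line.startswith("v "):
                    x, y, z = map(float, line.split()[1:4])
                    vertices.append((x, y, z))
        return vertices

    # Hypothetical export from 123D Catch; these points could serve as survey data.
    survey_points = read_obj_vertices("virtualset_1.obj")
    print(len(survey_points), "candidate survey points")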

A few of my attempts can be seen below. Transparent areas or areas with insufficient photo coverage appear as holes; however, I think that the results are still promising.

 

virtualset_3

virtualset_1
virtualset_2

 

Using Lens Distortion

Lens distortion refers to the warping that a lens introduces into an image, a consequence of how difficult it is to manufacture lenses to extremely fine tolerances. It is important to remove this distortion, or ‘unwarp’ the plate, in order to correctly track and match CG elements to it. I used lens distortion correction when tracking the footage from my previous film shoot, but for the second shoot I want to make sure the lenses I use are correctly profiled. I also need to develop a solid workflow for handling lens distortion: unwarping the plate for tracking and reapplying the distortion in the comp.

Lens distortion example
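As a starting point for that workflow, here is a minimal unwarp sketch using OpenCV. The intrinsics and distortion coefficients below are placeholders; in practice they would come from profiling the actual lens (for example with a distortion grid), and the same profile would be used to re-distort the rendered CG in the comp.

    import numpy as np
    import cv2

    # Placeholder intrinsics: focal length and principal point in pixels.
    K = np.array([[1800.0, 0.0, 960.0],
                  [0.0, 1800.0, 540.0],
                  [0.0, 0.0, 1.0]])
    # Placeholder radial/tangential coefficients: k1, k2, p1, p2, k3.
    dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])

    plate = cv2.imread("plate_0001.png")        # hypothetical frame from the shoot
    unwarped = cv2.undistort(plate, K, dist)    # track and line up CG on this version
    cv2.imwrite("plate_0001_unwarped.png", unwarped)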

Additional Resources:

 

http://log.ericalba.org/post/6110986710/lens-distortion-grids-pdf-for-download-vfx

www.leova.com/vfx_library/Heckman_Set_Documentation_Dec_2006.html

After considering a new direction for “Broken Reality,” I have narrowed the project down to the minimum that I would feel satisfied with. Since the project was put on hold, I’ve decided that the priority is to finish the greenscreen corridor shot. Most of the 3D work has focused on that scene, and I feel more positive about the tracking for that shot than for the rest of the film.

Another important element that I want to accomplish is object tracking. Object tracking is essentially the inverse of camera tracking where a specific object is tracked rather than the entire scene. My plan is to use 3D printing to build trackers that can be attached to my arm. I haven’t done any concepts for this yet, but I have found similar examples of this technique.

The last type of shot that I want to do involves rotomation. Rotomation is a technique where a 3D character is animated to match an actor in a live-action plate. The rotomation can then be used to apply a variety of effects. One idea that I have is to use a piece of footage already shot of a basketball game. I feel that doing rotomation will allow me to build a better matchmove reel.

Inspiration

This summer, I have been attempting to take a fresh look at “Broken Reality.” One thing I have realized is that the scope of the project ought to be narrowed further, particularly given the shortened production schedule that I must work with. For a long time, I have admired the work coming from Vancouver Film School’s Animation and Visual Effects program. This film, created by Yoonteck Oh, is a good example that I may use when developing the format of my film further. It contains only two shots, unlike the six or seven that I had planned for “Broken Reality.” Redesigning the project in this way will, I think, allow for a much better final result and enable me to keep a sane working schedule.

 

 

Another film that I think would make a good model is Colin Levy’s “In Dire Need” PSA. It is well-made and consists of many of the elements that I wish to use in my film.

 

Environments Update

I’ve made quite a bit of progress recently and decided that it was time to post an update. I have been pushing forward mainly on modeling and texturing in several different areas, and I can finally say that I feel positive about what I have accomplished so far.

First of all, I have completed the first concept for the corridor scene to a “rough final” state. After spending about two months working through a tutorial and trying to create the scene myself, I have something pretty close to done. It was rendered in Blender Cycles at 256 samples.
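For reference, those render settings expressed as a small Blender Python snippet (assuming it is run with the corridor scene open; the output path is a placeholder):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.cycles.samples = 256                             # samples per pixel for the concept render
    scene.render.filepath = "//renders/corridor_concept"   # hypothetical output path
    bpy.ops.render.render(write_still=True)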

blender cycles

Secondly, I have taken what I learned on the first concept and created a second corridor concept. This one is almost fully unwrapped and has temporary textures applied to it.

Without textures

blender uv test

Textured with test image

Finally, James Clugston has modeled a couple of sci-fi buildings, which I have unwrapped and textured, again with a basic test image.

scifi building

Exterior Environments

These are some of the trees that I’ve created with Blender and Paint Effects in Maya.

blender sapling addon

The concept models that I’ve created so far:

cgcookie sci-fi panel
blenderguru sci-fi corridor

sci-fi corridor blenderguru